44 Local Organizations Stand Against SFPD’s Killer Robots

21 hours 48 minutes ago

EFF is announcing a letter signed by 44 community groups who stand united in opposition to the San Francisco Board of Supervisors authorizing the San Francisco Police Department to deploy deadly force with remote-control robots. The signers include racial justice groups, civil rights and civil liberties organizations, LGBTQ organizations, and labor unions.

You can read the entire letter here.

From the letter:

“SFPD’s proposal, if approved, threatens the privacy and safety of city residents and visitors. Police in the United States have killed 1,054 people in the last year. Black Americans are three times more likely than white Americans to be killed during police encounters. San Francisco is no exception; according to Mapping Police Violence, from 2013 to 2021, Black people were 9.7 times as likely and Latinx people were 4.3 times as likely to be killed by SFPD as a white person by population. According to Mission Local, from 2000 to 2021, over 30 percent of fatal police shootings in San Francisco killed Black people, even though Black people were only about 5 percent of the city's population. And despite California having one of the strongest laws governing police use of deadly force in the country, unarmed people and bystanders are killed with disturbing frequency.

 There is no basis to believe that robots toting explosives might be an exception to police overuse of deadly force. Using robots that are designed to disarm bombs to instead deliver them is a perfect example of this pattern of escalation, and of the militarization of the police force that concerns so many across the city.”

We thank all of the groups who signed onto this letter, and the many groups and residents who attended today’s Stop Killer Robots rally outside of city hall. We again commend Supervisors Walton, Ronen, and Preston for their continued leadership in support of civil rights and civil liberties issues.

The policy passed with a vote of 8-to-3 last week, but is not effective without another vote at the Board of Supervisors meeting on Tuesday, December 6. Responding to public outcry, Supervisor Mar announced he will be changing his vote. We thank him for doing so, and urge other members of the Board to do the same.

Residents must continue reaching out to their supervisors and telling them to vote against SFPD’s killer robots. You can find an email contact for your Board of Supervisors member here, and determine which Supervisor to contact here. Here's text you can use (or edit):


Do not give SFPD permission to kill people with robots. This broad policy would allow police to bring armed robots to every arrest, and every execution of a warrant to search a house or vehicle or device. Depending on how police choose to define the words “critical” or “exigent,” police might even bring armed robots to a protest. While police could only use armed robots as deadly force when the risk of death is imminent, this problematic legal standard has often been under-enforced by courts and criticized by activists. For the sake of your constituents' rights and safety, please vote no.

Matthew Guariglia

This Judge’s Investigation Of Patent Trolls Must Be Allowed to Move Forward

3 days 23 hours ago

If you get sued, you should be able to figure out who sued you. Remarkably, though, people and companies who are accused of patent infringement in federal court often have no idea who is truly behind the lawsuit. Patent trolls, companies whose main business is extorting others over patents, often hide behind limited liability companies (LLCs) that serve to keep secret the names of those who profit from their activities.

This shouldn’t be the case. Earlier this week, EFF filed a brief seeking to protect an ongoing investigation of one of the nation’s largest patent-trolling companies, IP Edge. 

In recent weeks, Delaware-based U.S. District Court Judge Colm Connolly asked that owners of several patent-holding LLCs, which have filed 69 lawsuits in his court so far, testify about their histories and financing. The hearings conducted in Judge Connolly’s courtroom have provided a striking window into patent assertion activity by entities that appear to be related to IP Edge. 

Judge Connolly has also issued his own 78-page opinion explaining the nature and reasoning of his inquiries. As he summarizes (page 75), he seeks to determine if there are “real parties in interest” other than the shell LLCs, such as IP Edge and a related “consulting” company called Mavexar. He also asks if “those real parties in interest perpetrated a fraud on the court” by fraudulently conveying patents and filing fictitious patent assignment documents with the U.S. Patent Office. Judge Connolly also wants to know if lawyers in his court have complied with his orders and the Rules of Professional Conduct that govern attorney behavior in court. He has raised questions about whether a lawyer who files and settles cases without discussing the matter with their supposed client is committing a breach of ethics.

Given the growth of patent trolling in the last two decades, these questions are critical for the future of innovation, technology, and the public interest. Let’s take a closer look at the facts that led EFF to get involved in this case. 

Owner of “Chicken Joint” Puts His Name On Patent Troll Paperwork

Hearings conducted by Judge Connolly on Nov. 4 and Nov. 10 revealed that the LLCs are connected to a shadowy network of part-owners and litigation funders, including IP Edge, a large patent-trolling firm. The hearings also showed that the official “owners” of the LLCs have little or no involvement in litigation decisions, and collect only small fractions of the overall settlement money.

The owner of record of Mellaconic IP LLC, for instance, was Hau Bui, who described himself as the owner of a food truck and “fried chicken joint” in Texas. Bui was approached by a “friend” named Linh Dietz, who has an IP Edge email address and offered a “passive income” of a mere 5% of the money Mellaconic made from its lawsuits—about $11,000. He paid nothing to get the patent that his company acquired from an IP Edge-related entity. 

The owner of record of Nimitz Technologies LLC is Mark Hall, a Houston-based software salesman. When the judge asked Hall about what technology was covered in the patent he had used to sue more than 40 companies, Hall said, “I haven’t reviewed it enough to know.” He was “presented an opportunity” by Mavexar, a firm he called a “consulting agency” where Linh Dietz was his contact. Again, it was Linh Dietz who arranged the transfer of the patents. Hall told the judge he stood to get 10% of the proceeds from the patent, which have totaled about $4,000. However, Hall agreed that “all the litigation decisions are made by the lawyers and Mavexar.” 

After those hearings, Judge Connolly was concerned that the attorneys involved may have perpetrated a fraud on the court, and violated his disclosure rules. He asked for additional records to be provided. Instead of complying, the patent troll companies have asked the nation’s top patent appeals court to intervene and shut down Judge Connolly’s inquiry. 

The Public Has A Right To Know Who Benefits From Patent Lawsuits

That’s why EFF got involved in this case. This week, together with Engine Advocacy and Public Interest Patent Law Institute, we filed a brief in this case explaining why Judge Connolly’s actions are “proper and commendable.” 

“The public has a right—and need—to know who is controlling and benefiting from litigation in publicly-funded courts,” EFF writes in the brief. Parties who conceal this information undermine the promise of open courts. What’s more, patent-owning plaintiffs can hide behind insolvent entities, in order to avoid court rules and punishments for litigation misconduct. 

If the U.S. Court of Appeals for the Federal Circuit were to stop Judge Connolly from moving forward with enforcing these transparency rules, it would “encourage meritless suits, conceal unethical conduct, and erode public confidence in the judicial process.” There are circumstances where a privacy right or another interest can limit transparency. But those circumstances aren’t present here, where the identity of the party getting the lion’s share of any damages (and which is potentially the true patent owner) is concealed.

The disclosure requirements being enforced by Judge Connolly aren’t unusual. About 25% of federal courts require disclosure of “any person or entity… that has a financial interest in the outcome of a case,” which often includes litigation funders. 

Patent trolls often hide behind limited liability companies that are merely “shells,” which serve to hide the names of those who profit from patent trolling. When these LLCs have few or no assets, they can also serve to immunize the true owners against awards of attorneys’ fees or other penalties. This problem is widespread—in the first half of this year, nearly 85% of patent lawsuits against tech firms were filed by patent trolls. 

It’s also possible that in these cases, Mavexar or IP Edge may have structured the LLCs to insulate themselves from penalties such as being required to pay litigation costs. That could create a structure in which sophisticated patent lawyers behind those firms make 90% or 95% of the profits, while a food truck owner with little knowledge of the patents or litigation could be stuck paying any penalties. In the past several years, fee shifting has become more common in patent litigation, due to Supreme Court rulings that have made it easier to get attorney’s fees paid in the most abusive patent cases. 

Even now, Mavexar-connected plaintiffs are continuing to file new lawsuits based on software patents they claim will compel a vast array of companies into paying them money. Mellaconic IP has filed more than 40 lawsuits, including some this week, claiming that basic HR software functions, like clocking in and out, infringe its patent.  

EFF got involved in the patent fight nearly 20 years ago because of software patents like these. These patents interfere with our rights to express ourselves, and perform basic business or non-commercial tasks in the online world. They make it harder for small actors to innovate and disrupt entrenched tech companies. And they often aren’t new “inventions” at all. The people who profit from lawsuits over these patents, and hide their identities while doing so, are long overdue for this type of investigation. 

Documents related to this case: 

Joe Mullin

India Requires Internet Services to Collect and Store Vast Amount of Customer Data, Building a Path to Mass Surveillance

4 days ago

Privacy and online free expression are once again under threat in India, thanks to vaguely worded cybersecurity directions—promulgated by India’s Computer Emergency Response Team (CERT-In) earlier this year—that impose draconian mass surveillance obligations on internet services, threatening privacy and anonymity and weakening security online.

Directions 20(3)/2022 - CERT-In came into effect on June 28th, sixty days after being published without stakeholder consultation. Astonishingly, India’s Minister of State for Electronics and Information Technology (MeitY) Rajeev Chandrasekhar said the government wasn’t required to get public input because the directions have “no effect on citizens.” The Directions themselves state that they were needed to help India defend against cybersecurity attacks, protect the security of the state and public order, and prevent offenses involving computers. Chandrasekhar said the agency consulted with entities “who run the relevant infrastructure,” without naming them.

Cybersecurity law and policy directly impact human rights, particularly the rights to privacy, freedom of expression, and association. Across the world, national cybersecurity policies have emerged to protect the internet, critical infrastructure, and other technologies against malicious actors. However, overly broad and poorly defined proposals open the door to unintended consequences, leading to human rights abuses and harming innovation. The Directions enable surveillance and jeopardize the right to privacy in India, raising alarms among human rights and digital rights defenders. A global NGO coalition has called upon CERT-In to withdraw the Directions and initiate a sustained multi-stakeholder consultation with human rights and security experts to strengthen cybersecurity while ensuring robust human rights protections. 

What’s Wrong With CERT-In’s Cybersecurity Directions from a Human Rights Perspective?

Forced Data Localization and Electronic Logging Requirements

Direction No. IV compels a broad range of service providers (telecom providers, network providers, ISPs, web hosts, cloud service providers, cryptocurrency exchanges, and wallets), internet intermediaries (social media platforms, search engines, and e-commerce platforms), and data centers (both corporate and government) to enable logs of all their information and communications technology (ICT) systems, and forces them to keep such data securely within India for 180 days. The Direction is not clear about exactly which systems this applies to, raising concerns about government access to more user data than necessary and about compliance with international personal data privacy principles that call for purpose limitation and data minimization. 

Requiring providers to store data within a country’s borders can exacerbate government surveillance by making access to users’ data easier. This is particularly true in India, which lacks strong legal safeguards and data protection laws. Data localization mandates also make providers easy targets for direct enforcement and penalties if they reject arbitrary data access demands.

General and Indiscriminate Data Retention Mandate

Direction No. V establishes an indiscriminate data retention obligation, which unjustifiably infringes on the right to privacy and the presumption of innocence. It forces data centers, virtual private server (VPS) providers, cloud service providers, and virtual private network (VPN) service providers to collect customers' data, including names, dates services began, email addresses, IP addresses, physical addresses, and contact numbers, among other things, for five years or longer, even if a person cancels or withdraws from the service.

Mandating the mass storage of private information for the mere eventuality that it may be of interest to the State at some point in the future is contrary to human rights standards. As the Office of the United Nations High Commissioner for Human Rights (OHCHR) has stated, “the obligation to indiscriminately retain data exceeds the limits of what can be considered necessary and proportionate.” Storing the personal information of political, legal, medical, and religious activists, human rights defenders, journalists, and everyday internet users would create honeypots for data thieves and put the data at risk in case of software vulnerabilities, fostering more insecurity than security. Moreover, VPN providers should not collect personal data or be forced to collect any data that are irrelevant to their operations just to comply with the new Directions. Personal data should always be relevant and limited to what is necessary regarding the purposes for which they are processed.

Onerous Cybersecurity Reporting Requirements

Direction No. II forces a broad range of service providers, internet intermediaries (including online gaming companies), and data centers (both corporate and government) to report cybersecurity incidents to the government within six hours of detection, compared to the 72 hours the EU’s GDPR allows for notifying data breaches. That is an onerous requirement for small and medium companies, which would need staff available 24/7 to comply in such a short period, and such a tight time frame can exacerbate human error. In contrast, the previous rules expected entities to report cybersecurity incidents “as early as possible to leave scope for action.” The new Direction does not mandate that users be notified of cybersecurity incidents. 

The reporting requirements apply to a wide range of cybersecurity incidents, including data breaches or data leaks, unauthorized access to ICT systems or resources, identity theft, spoofing, phishing attacks, DoS and DDoS attacks, malicious attacks like ransomware, and cyber incidents impacting the safety of human beings, among others. They also apply to “targeted” scanning (the automated probing of services running on a computer) of ICT systems; however, since “targeted” is ill-defined, this could be interpreted to mean any scanning of the system, which, as any system administrator can tell you, is the background noise of the internet. What’s more, many pro-cybersecurity projects engage in widespread scanning of the Internet.
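
To illustrate how low the technical bar for “scanning” is, here is a minimal sketch of a single service probe in Python. This is our illustration, not anything from the Directions; the host and port are placeholders:

```python
import socket

def service_answers(host: str, port: int, timeout: float = 1.0) -> bool:
    """Return True if a TCP service accepts a connection on host:port."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Looped over ranges of addresses and ports, a few lines like this are all
# "scanning" amounts to, which is why unsolicited probes are a constant
# background hum for every internet-connected machine.
print(service_answers("example.com", 80))
```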

Scanning is so ubiquitous on the internet that some smaller companies may choose to just automatically send all logs to CERT-In rather than risk being in violation of policy. This could make an already bad user privacy situation even worse.

Directions Grant CERT-In New Powers to Order Providers to Turn Over Information

Direction No. III grants CERT-In the power to order service providers, intermediaries, and data centers (corporate and government) to provide near real-time information or assistance when the agency is taking protective or preventive actions in response to cybersecurity incidents. The direction provides no oversight mechanism or data protection provision to guard against such orders being misused or abused. The direction also compels the same entities to designate a point of contact to receive CERT-In information requests and directions for complying with such requests.

Why the Indiscriminate Data Retention Mandate Is Anathema to VPNs

Consumer VPNs play a vital role in securing users’ confidential information and communications. They create a secure tunnel between a user’s device and the internet, enabling people to keep the data they send and receive private by hiding what servers they are communicating with from their ISP, and encrypting data in transit. This allows people to bypass local censorship and defeat local surveillance. 
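
As a concrete sketch of that tunnel, here is a minimal, hypothetical WireGuard client configuration; the endpoint, addresses, and keys are placeholders, not any real provider’s settings:

```
[Interface]
# The client's key and in-tunnel address, issued by the VPN provider.
PrivateKey = <client-private-key>
Address = 10.8.0.2/32
DNS = 10.8.0.1

[Peer]
# The provider's server. The ISP sees only encrypted UDP packets to this
# endpoint, not the sites the user ultimately visits.
PublicKey = <server-public-key>
Endpoint = vpn.example.com:51820
# Routing 0.0.0.0/0 and ::/0 sends all traffic through the tunnel.
AllowedIPs = 0.0.0.0/0, ::/0
```

With a configuration like this, the only connection visible to the local network or ISP is the encrypted link to vpn.example.com; everything else, including DNS lookups, travels inside the tunnel.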

VPNs are used everywhere. Activists, journalists, and everyday users want to protect their communications from the prying eyes of the government. Research shows that India has the highest growth rate in VPN use worldwide: VPN installations during the first half of 2021 reached 348.7 million, a 671 percent increase over the same period in 2020. Meanwhile, businesses use VPNs to provide secure access to internal resources (like file servers or printers) or to ensure they can navigate securely on the internet.

The massive data retention obligations under Direction No. V are anathema to VPNs—their core purpose is to not hold or collect user data, and to provide encryption that protects users’ anonymity and privacy. Forcing VPNs to retain customer data for potential government use will eliminate their ability to offer anonymous internet communications, making VPN users easy targets for state surveillance.

This is especially concerning in countries like India, where anti-terrorism or obscenity rules imposed on online platforms have been used to arrest academics, priests, writers, and poets for posting political messages on social media and leading rallies.

If VPNs comply with the CERT-In Cybersecurity Directions, they can no longer be relied upon as an effective anonymity tool to protect VPN users’ free expression, privacy, and association, nor as an effective security tool. Chandrasekhar has said VPNs must comply with the Directions or curtail services in India. “You can’t say, 'No, it's our rules that we do not maintain logs,'” he told reporters earlier this year. “If you don't maintain logs, then this is not a good place to do business.”

VPNs “should not have to collect data that are not relevant to their operations to satisfy the new directions, just as private spaces cannot be mandated to carry out surveillance to aid law enforcement purposes,” IFF Policy Director Prateek Waghre said in a brief co-authored and published by the Internet Society. “What makes CERT-In’s directions related to data collection even riskier is that India does not have a data privacy or data protection law. Therefore, citizens in the country do not have the surety that their data will be safeguarded against overuse, abuse, profiling, or surveillance.”

The Internet Freedom Foundation (IFF) in India has called on CERT-In to recall the directions, saying the data retention requirements are excessive. The organization has also urged CERT-In to seek input from technical and cybersecurity experts and civil society organizations to revise them.

VPNs Fight Back

VPN operators have strongly objected, as the rules would essentially negate their purpose. Many said they would have to pull out of India if forced to collect and retain user data. The good news is that most continue to offer services by routing traffic through virtual servers in Singapore, London, and the Netherlands. Meanwhile, Indian VPN service SnTHostings, which has just 15,000 customers, has filed a lawsuit challenging the rules on the grounds that they violate privacy rights and exceed the powers conferred by the Information Technology Act 2000, India’s primary electronic commerce and cybercrime law. SnTHostings is represented by IFF in the case.

The CERT-In Directions come as the government has taken other steps to weaken privacy and restrict free expression; read more here, here, here, here, here, and here. Digital rights in India are degenerating, and civil society organizations and VPN providers are not the only ones raising red flags.

The Information Technology Industry Council (ITI), a global trade association representing Big Tech companies like Apple, Amazon, Facebook, and Google, has called on CERT-In to revise the Directions, saying they will negatively impact Indian and global enterprises and actually undermine cybersecurity in India. “These provisions may have severe consequences for enterprises and their global customers without solving the genuine security concerns,” ITI said in a May 5 letter to CERT-In. A few weeks later, the agency clarified that the new directions don’t apply to corporate and enterprise VPNs.

A group of 11 industry organizations representing Big Tech companies in Asia, the EU, and the U.S. has also complained to CERT-In about the rules and urged that they be revised. While noting that internet service providers already collect the customer information required by the rules, the group said requiring VPNs, cloud service providers, and virtual service providers to do the same would be “burdensome and onerous” for enterprise customers and data center providers to comply with. The threat to user privacy isn’t mentioned. We’d like to see this change. Tech industry groups, and the companies themselves, should stand with their users in India and urge CERT-In to withdraw these onerous data collection requirements.

To learn more, read Internet Freedom Foundation’s CERT-In Directions on Cybersecurity: An Explainer.

Karen Gullo

How to Make a Mastodon Account and Join the Fediverse

4 days 22 hours ago

This post is part of a series on Mastodon and the fediverse. We also have posts on understanding the fediverse, privacy and security on Mastodon, and why the fediverse will be great (if we don't screw it up), and more are on the way. You can follow EFF on Mastodon here.

The recent chaos at Twitter is a reminder that when you rely on a social media platform, you’re putting your voice, your privacy, and your safety in the hands of the people who run that system. Many people are looking to Mastodon as a backup or replacement for Twitter, and this guide will walk you through making that switch. Note this guide is current as of December 2022, and the software and services discussed are going through rapid changes.

What even is the fediverse? Well, we’ve written a more detailed and technical introduction, but put simply it is a large network of independently operated social media websites speaking to each other in a shared language. That means your fediverse social media account is more like email, where you pick the service you like and can still communicate with people who chose a different service.
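
Under the hood, that email-like addressing works because instances speak a shared protocol (ActivityPub) and discover each other’s users through standard WebFinger lookups. As a rough sketch, here is how any instance, or you from Python, can resolve EFF’s address:

```python
import json
import urllib.request

# WebFinger (RFC 7033): how one instance resolves "eff@mastodon.social",
# much the way mail servers resolve the domain in an email address.
url = ("https://mastodon.social/.well-known/webfinger"
       "?resource=acct:eff@mastodon.social")
with urllib.request.urlopen(url) as resp:
    record = json.load(resp)

print(record["subject"])  # acct:eff@mastodon.social
# The "self" link is the machine-readable profile other instances fetch
# in order to federate with this account.
print([link["href"] for link in record["links"] if link["rel"] == "self"])
```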

EFF is excited and optimistic about the potential of this new way of doing things, but to be clear, the fediverse is still improving and may not be a suitable replacement for your old social media accounts just yet. That said, if you’re worried about relying on the stability of sites like Twitter, now is a good time to “back up” your social media presence in the fediverse.

1. Making an Account

When joining the fediverse, you face several important decisions up front. Keep in mind that it’s easy enough to keep your account information when changing social media providers in the fediverse, so while important, these choices are not permanent. 

First, the social media site which connects you to the fediverse (called an “instance”) can run one of many applications which often mimic how other social media sites work. This guide focuses on the most popular of these called Mastodon, which is a microblogging application that works a lot like Twitter. If you strongly prefer another social media experience over Twitter, however, you may want to explore some of those alternative applications.

Next, using a site like joinmastodon.org, you’ll need to choose which specific Mastodon instance you join, and there are a lot of them. In making a selection you should consider three things:

  • Operators: Who owns the instance and how is it managed? You are trusting them not only with your privacy and security, but to be responsible content moderators. When reviewing an instance’s about page, make sure the rules they set are agreeable to you. You may also want to consider the jurisdiction in which the instance is operating, to help you anticipate what legal and extralegal pressures the moderators might face.
  • Community: Instances run the gamut from smaller or private options that center shared values and niche interests to large, general interest platforms open to everyone. When selecting one, keep in mind that your local peers on an instance affect what content you see in direct and indirect ways. The result can be a close-knit community similar to a Facebook group, or a broad platform for exposure like Twitter. 
  • Affiliation: Your instance will be a part of your username, as with email. For example, EFF’s account name is “@eff@mastodon.social”, with “mastodon.social” being the instance. This affiliation may reveal information about you, especially if you join a special interest instance. If your instance is considered polarizing or poorly managed, other instances may also “defederate” from it or block it, meaning your messages won’t be shared with them. That’s likely not a concern with most popular instances, however.


Newcomers, especially those trying Mastodon after using Twitter, will likely want to try a large general-interest server. To reiterate, Mastodon makes it relatively easy to change this later without losing your followers and settings. So even if your preferred instance isn’t available to new users, you can get started elsewhere and move later. Some of you may even eventually want to start your own instance, which you can learn about here.

2. Privacy and Security settings

Once you’ve registered your account, there are a few important settings to consider. While there is a lot to say about Mastodon’s privacy and security merits, this guide will only cover adjusting built-in account settings. 

Remember, there is no one-size fits all approach, and you may want to review our general advice on creating a security plan.

Profile Settings

  • Require follow requests: Turning on this setting means another person can only follow your account after being approved. However, this does not affect whether someone can see your public posts (see next section).
  • Suggest account to others: If you are worried about drawing too many followers, you can uncheck this option so that Mastodon instances do not algorithmically suggest your account to other users.
  • Hide your social graph: Selecting this will hide who you are following and who is following you.
Preferences - Other

  • Opt-out of search engine: Checking this will make it more difficult for a stranger to find your profile, but it may still be possible if your account is listed elsewhere, e.g., on another social media site or on another fediverse account. 
  • Posting Privacy: 
    • Public: Your posts are publicly visible on your profile and are shared with non-followers.
    • Unlisted: Your posts are publicly visible on your profile, but are not shared to non-followers. That means posts won’t be automatically shared to the fediverse, but anyone can visit your page to see your posts.
    • Followers-only: Only your approved followers can view your posts.

Automated post deletion

Unlike Twitter, Mastodon has a built-in tool that gives users the ability to easily and automatically delete old posts on the site. 

This can be an effective way to limit the amount of information you leave publicly accessible, which is a good idea for people worried about online harassment or stalkers. However, public figures or organizations may opt to leave posts up as a form of public accountability.

Whatever you decide, remember that, as with any social media site, other users can download or screenshot your posts. Post deletion cannot unring that bell. An additional concern for the fediverse is that post deletion must be honored by every instance your post reaches, so some instances could significantly delay or not honor deletion requests (though this is not common).

Account settings - Enable 2FA

This group of settings lets you change your password, set up two-factor authentication, and revoke access to your account from specific browsers and apps. If you notice any strange account activity, this is the section you can use to lock down access to your account.

  1. Select 2FA
  2. Click setup and confirm your password
  3. Using a 2FA app, scan the presented QR code or manually enter the text secret
  4. Enter your two-factor code
  5. Click enable
  6. You’ll now receive 10 recovery codes in case you are not able to access the 2FA device you just set up. 

As with all 2FA recovery codes, take extra care to save these in a secure place, such as a password manager, an encrypted file, or even a handwritten copy locked away. If you ever lose these codes, or suspect that someone else might have access to them, you can return to this section to generate new ones to replace them.
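
As an aside on how step 3 works: the QR code and the text secret encode the same seed for the standard TOTP algorithm (RFC 6238), which your app and the server then compute independently. Here is a small sketch using the third-party pyotp library; the secret shown is a made-up example:

```python
import pyotp  # pip install pyotp

# The "text secret" from step 3 is a base32 seed like this (made-up example).
secret = "JBSWY3DPEHPK3PXP"

totp = pyotp.TOTP(secret)
print(totp.now())  # the same six-digit code your authenticator app displays

# At login, the server derives the code from its own copy of the seed and
# compares, so the seed itself never crosses the network after setup.
```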

Data export

Finally, if you have a secure way to store information, it is a good idea to regularly create a backup of your account. Mastodon makes it very easy to save your online life, so even if the instance you’re on today is bought by a bored billionaire, you can easily upload most of your account info to a new instance. If you’re planning ahead you can also port your followers to your new home by having your old account point to the new one.

It’s worth emphasizing again that your instance is controlled by its administrators—which means that its continued existence relies on them. Like a website, that means you’re trusting their continued work and wellbeing. However, if your instance is suddenly seized, censored, or bursts into flames, having a backup means you won’t have to completely start over.

3. Migrating and Verifying your Identity

Making sure your followers know you’re really you isn’t just about stroking your ego; it’s a crucial feature in combating misinformation and impersonation. However, if you’re looking for an equivalent of Twitter’s original blue-check verification system, you won’t find it on Mastodon, nor on Twitter, for that matter. You do have a few other options, though. 

Share your new Account

The easiest step is to simply link to your new Mastodon account from your other social media account(s). Adding the account to your name, bio, or a pinned message can help your followers find you on Mastodon through a number of methods.

This is a good idea even if you plan for Mastodon to be your back-up account. You want users to know where you’ll be before it is necessary, and sharing early improves your ability to retain your following.

This is also a reason you may not want to delete your old account. Leaving this message up, especially from a verified account, will help your followers find you when they make the switch.

Website Verification

Mastodon also has a built-in verification system, but it’s a bit different than those on centralized platforms. The original blue check and similar systems rely on users sharing sensitive documents with the social media company to verify that their online identity matches their legal identity, sometimes with that legal name being required on the site. Ultimately, it is a system where users need to trust the diligence of that company’s bureaucratic process.

Instead, Mastodon instances only verify that your account has the ability to edit an external website. To do this, you first add the URL of a website you control to your profile under Profile > Appearance. The label for the URL does not matter.


Then you copy the line of HTML from your profile. This is simply a hyperlink to your account with a special attribute (`rel="me"`), which most sites will strip from user-created text; instead, you will need to edit the site’s HTML directly. For example, you can likely add this link, or request that it be added, to an employer’s website, which is then vouching for the account truly being yours.
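
The copied line looks something like this (using EFF’s account as the example; yours will carry your own username and instance):

```html
<a rel="me" href="https://mastodon.social/@eff">Mastodon</a>
```

Once the link is live and you save your profile, your instance re-checks the page and should mark that URL as verified, highlighting it with a checkmark on your profile.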

On one hand, this system can eliminate an invasive, opaque, and often arbitrary bureaucracy from the verification process. On the other, you are now trusting two entities: the external website and the Mastodon instance hosting the user. It also asks users, as with email, to watch out for look-alike URLs. 

So when setting up verification, a good strategy is to include the website(s) you have the most secure control over and which have the most recognizable names. A personal blog is less reassuring than an employer’s or school’s site, while including all three can be very reassuring, especially on reputable instances.

Mastodon: Into the Fediverse

Now you’re ready to jump into the fediverse itself. There are a few options for viewing posts: your “Home” feed will show you posts from everyone you follow; “Local” will show the listed posts from others on your instance; and “Federated” will show you all of the posts your instance is aware of, like a shared follow list you have with everyone on your instance. Keep this in mind as you follow accounts and “boost” posts by sharing them with your followers (similar to a retweet). There is no algorithm deciding what you see on Mastodon, but rather a shared process of curation, and these actions increase the audience of a given post or user.

The fediverse, and Mastodon specifically, is developing rapidly, and it is important to check regularly for changes to features and settings. Your particular security plan may also change in the future, so regular reminders to review your settings will help you adjust as needed. 

We have the chance to build something better than what the incumbent social media platforms offer. While this is an ongoing process, this overview of settings should give you a good starting point to be a part of that change.

Rory Mir

International Coalition of Rights Groups Call on Internet Infrastructure Providers to Avoid Content Policing

5 days 1 hour ago
Except in Rare Cases, Intervening to Block Sites or Remove Content Can Harm Users

San Francisco—Internet infrastructure services—the heart of a secure and resilient internet where free speech and expression flow—should continue to focus their energy on making the web an essential resource for users and, with rare exceptions, avoid content policing. Such intervention often causes more harm than good, EFF and its partners said today.

EFF and an international coalition of 56 human and digital rights organizations from around the world are calling on technology companies to “Protect the Stack.” This is a global effort to educate users, lawmakers, regulators, companies, and activists about why companies that constitute basic internet infrastructure—such as internet service providers (ISPs), certificate authorities, domain name registrars, and hosting providers—and other critical services, such as payment processors, can harm users, especially less powerful groups, and put human rights at risk when they intervene to take down speech and other content.

EFF today launched the Protect the Stack website at the Internet Governance Forum in Addis Ababa, Ethiopia. The website introduces readers to “the stack,” and explains how content policing practices can, and already have, put human rights at risk. It is currently available in English, Spanish, Arabic, French, German, Portuguese, Hebrew, and Hindi.

"Internet infrastructure companies help make the web a safe and robust space for free speech and expression," said EFF Legal Director Corynne McSherry. "Content-based interventions at the infrastructure level often cause collateral damage that disproportionately harms less powerful groups. So, except in rare cases, stack services should stay out of content policing."

“We have seen a number of cases where content moderation applied at the internet’s infrastructural level has threatened the ability of artists to share their work with audiences,” said Elizabeth Larison, Director of the Arts and Culture Advocacy Program at the National Coalition Against Censorship. “The inconsistency of those decisions and the opaque application of vague terms of service have made it clear that infrastructure companies have neither the expertise nor the resources to make decisions on content.”

Infrastructure companies are key to online expression, privacy, and security. Because of the vital role they play in keeping the internet and websites up and running, they are increasingly under pressure to play a greater role in policing online content and participation, especially when harmful and hateful speech targets individuals and groups.

But doing so can have far-reaching effects and lead to unintended consequences that harm users. For example, when governments force ISPs to disrupt the internet for an entire country, people can no longer message their loved ones, get news about what’s happening around them, or speak out.

Another example is domain name system (DNS) abuse, where the suspension and deregistration of domain names are used as a means to stifle dissent. ARTICLE 19 has documented multiple instances of “DNS abuse” in Kenya and Tanzania.

Moreover, at the platform level, companies that engage in content moderation consistently reflect and reinforce bias against marginalized communities. Examples abound: Facebook decided, in the midst of the #MeToo movement’s rise, that the statement “men are trash” constitutes hateful speech. In addition, efforts to police “extremist” content by social media platforms have caused journalists’ and human rights defenders’ work documenting terrorism and other atrocities to be blocked or erased. There’s no reason to expect that things will be any different at other levels of the stack, and every reason to expect they will be worse.

A safe and secure internet helps billions of people around the world communicate, learn, organize, buy and sell, and speak out. Stack companies are the building blocks behind the web, and have helped keep the internet buzzing for businesses, families, and students during the COVID-19 pandemic and for Ukrainians and Russians during the war in Ukraine. We need infrastructure providers to stay focused on their core mission: supporting a robust and resilient internet.

For more information: https://protectthestack.org/

Contact: Corynne McSherry, Legal Director, corynne@eff.org
Karen Gullo

Let Data Breach Victims Sue Marriott

5 days 15 hours ago

A company harvested your personal data, but failed to take basic steps to secure it. So thieves stole it. Now you’ve lost control of your data, and you’re at greater risk of identity theft. But when you sue the negligent company, they say you haven’t really been injured, so you don’t belong in court – not unless you can prove a specific economic harm on top of the obvious privacy harm.

We say “no way.” Along with our friends at EPIC, and with assistance from Morgan & Morgan, EFF recently filed an amicus brief arguing that negligent data breaches inflict grievous privacy harms in and of themselves, and so the victims have “standing” to sue in federal court – without the need to prove more. The case, In re Marriott Customer Data Breach, arises from the 2018 breach of more than 130 million records from the hotel company’s reservation system. This included guests’ names, phone numbers, payment card information, travel destinations, and more. We filed our brief in the federal appeals court for the Fourth Circuit, which will decide whether the plaintiff class certified by the lower court shares a class-wide injury.

Our brief explains that once personal data is stolen, it can be used against the breach victims for identity theft, ransomware attacks, and to send unwanted spam. The risk of these attacks causes psychological injury, including anxiety, depression, and PTSD. To avoid these attacks, breach victims must spend time and money to freeze and unfreeze their credit reports, to monitor their credit reports, and to obtain identity theft prevention services.

Courts have long granted standing to sue over harms like these. Intrusion upon seclusion and other privacy torts are more than a century old. As the U.S. Supreme Court has recognized: “both the common law and literal understanding of privacy encompass the individual’s control of information concerning [their] person.”

Further, the harms from a single data breach must be understood in the context of the larger data broker ecology. As we explain in our amicus brief:

Data breaches like the Marriott data breach cannot be considered individually. Once data has been disclosed from databases such as Marriott’s, it is often pooled with other information, some gathered consensually and legally and some gathered from other data breaches or through other illicit means. That pooled information is then used to create inferences about the affected individuals for purposes of targeted advertising, various kinds of risk evaluation, identity theft, and more. Thus, once individuals lose control over personal data that they have entrusted to entities like Marriott, the kinds of harms can grow and change in ways that are difficult to predict. Also, it can be onerous, if not impossible, for an ordinary individual to trace these harms and find appropriate redress.

Standing doctrine gone wrong

Under the current standing doctrine, your privacy is violated – and so you have standing to sue – when your data leaves the custody of a company that is supposed to protect it. So In re Marriott is an easy case for the Fourth Circuit.

But make no mistake, the U.S. Supreme Court has wrongly narrowed the standing doctrine in recent data privacy cases, and it should reverse course. These cases are Spokeo v. Robins (2016) and TransUnion v. Ramirez (2021). They hold that to have standing, a person seeking to enforce a data privacy law must show a “concrete” injury. This includes “intangible harms” that have “a close relationship to harms traditionally recognized as providing a basis for lawsuits in American courts,” such as “reputational harms, disclosure of private information, and intrusion upon seclusion.”

In TransUnion, the credit reporting company violated the Fair Credit Reporting Act by negligently and falsely labeling some 8,000 people as potential terrorists. The Court held that some 2,000 of them suffered concrete injury, and thus had standing, because the company disclosed this dangerous information to others. Unfortunately, the Court also held that the remaining people lacked standing, because the company unlawfully made this dangerous information available to employers and other businesses, but did not actually disclose it to them.

We disagree. As we argued in amicus briefs in TransUnion and Spokeo (and have argued elsewhere), we need broader standing for private enforcement of data protection laws, not narrower. Our personal data, and the ways private companies harvest and monetize it, play an increasingly powerful role in modern life. Corporate databases are vast, interconnected, and opaque. The movement and use of our data is difficult to understand, let alone trace. Yet companies use it to draw inferences about us, leading to lost employment, credit, and other opportunities. In this data ecosystem, all of us are increasingly at risk from wrong, outdated, or incomplete information, yet it is increasingly hard to trace the causation from bad data to bad outcomes.

Congress made a sound judgment in the Fair Credit Reporting Act that a person should be able to sue a data broker that negligently compiled a dossier about them containing dangerously false information, and then made that dossier available to others. Four Justices in TransUnion would have deferred to Congress, but the majority thought it knew better.

So, even though TransUnion provides standing to the many millions of people harmed by data breaches, including Marriott’s, the Court should still revisit and overrule TransUnion.

You can read our In re Marriott amicus brief here.

Adam Schwartz

Let Them Know: San Francisco Shouldn’t Arm Robots

5 days 19 hours ago

The San Francisco Board of Supervisors on Nov. 29 voted 8 to 3 to approve on first reading a policy that would formally authorize the San Francisco Police Department to deploy deadly force via remote-controlled robots. The majority fell down the rabbit hole of security theater: doing anything to appear to be fighting crime, regardless of whether or not it has any tangible effect on public safety.

These San Francisco supervisors seem not only willing to approve dangerously broad language about when police may deploy robots equipped with explosives as deadly force, but they are also willing to smear those who dare to question its possible misuses as sensationalist, anti-cop, and dishonest.

TAKE ACTION

EMAIL YOUR SUPERVISOR: DON'T LET SFPD ARM ROBOTS 

When can police send in a deadly robot? According to the policy: “The robots listed in this section shall not be utilized outside of training and simulations, criminal apprehensions, critical incidents, exigent circumstances, executing a warrant or during suspicious device assessments.” That’s a lot of events: all arrests and all searches with warrants, and maybe some protests. 

When can police use the robot to kill? After an amendment proposed by Supervisor Aaron Peskin, the policy now reads: “Robots will only be used as a deadly force option when [1] risk of loss of life to members of the public or officers is imminent and [2] officers cannot subdue the threat after using alternative force options or de-escalation tactics options, **or** conclude that they will not be able to subdue the threat after evaluating alternative force options or de-escalation tactics. Only the Chief of Police, Assistant Chief, or Deputy Chief of Special Operations may authorize the use of robot deadly force options.”

The “or” in this policy (emphasis added) does a lot of work. Police can use deadly force after “evaluating alternative force options or de-escalation tactics,” meaning that they don’t have to actually try them before remotely killing someone with a robot strapped with a bomb. Supervisor Hillary Ronen proposed an amendment that would have required police to actually try these non-deadly options, but the Board rejected it.

Supervisors Ronen, Shamann Walton, and Dean Preston did a great job pushing back against this dangerous proposal. Police claimed this technology would have been useful during the 2017 Las Vegas mass shooting, in which the shooter was holed up in a hotel room. Supervisor Preston responded that it probably would not have been a good idea to detonate a bomb inside a hotel.

The police department representative also said the robot might be useful in the event of a suicide bomber. But exploding the robot’s bomb could detonate the suicide bomber’s device, thus fulfilling the terrorist’s aims. After commonsense questioning from their peers, pro-robot supervisors dismissed concerns as being motivated by ill-formed ideas of “robocops.”

The Board majority failed to address the many ways that police have used and misused technology, military equipment, and deadly force over recent decades. They seem to trust that police would roll out this type of technology only in the most dire circumstances, but that’s not what the policy says. They ignore the innocent bystanders and unarmed people already killed by police using other forms of deadly force that were only intended to be used in dire circumstances. They didn’t account for the militarization of police responses to protesters, such as the Minneapolis demonstrations that saw overhead surveillance by a Predator drone.

The fact is, police technology constantly experiences mission creep, meaning equipment reserved only for specific or extreme circumstances ends up being used in increasingly everyday or casual ways. This is why President Barack Obama in 2015 rolled back the Department of Defense’s 1033 program, which had handed out military equipment to local police departments. He said at the time that police must “embrace a guardian—rather than a warrior—mind-set to build trust and legitimacy both within agencies and with the public.”

Supervisor Rafael Mandelman smeared opponents of the bomb-carrying robots as “anti-cop,” and unfairly questioned the professionalism of our friends at other civil rights groups. Nonsense. We are just asking why police need new technologies and under what circumstances they actually would be useful. This echoes the recent debate in which the Board of Supervisors enabled police to get live access to private security cameras, without any realistic scenario in which it would prevent crime. This is disappointing from a Board that in 2019 made San Francisco the first municipality in the United States to ban police use of face recognition.

We thank the strong coalition of concerned residents, civil rights and civil liberties activists, and others who pushed back against this policy. We also appreciate Supervisors Walton, Preston, and Ronen for their reasoned arguments and commonsense defense of the city’s most vulnerable residents, who, too, are harmed by police violence.

Fortunately, this fight isn’t over. The Board of Supervisors needs to vote again on this policy before it becomes effective. If you live in San Francisco, please tell your Supervisor to vote “no.” You can find an email contact for your Supervisor here, and determine which Supervisor to contact here. Here's text you can use (or edit):

Do not give SFPD permission to kill people with robots. There are many alternatives available to police, even in extreme circumstances. Police equipment has a documented history of misuse and mission creep. While the proposed policy would authorize police to use armed robots as deadly force only when the risk of death is imminent, this legal standard has often been under-enforced by courts and criticized by activists. For the sake of your constituents' rights and safety, please vote no.

TAKE ACTION

EMAIL YOUR SUPERVISOR: DON'T LET SFPD ARM ROBOTS 

Matthew Guariglia

Coalition of Human Rights, LGBTQ+ Organizations Tell Congress to Oppose the Kids Online Safety Act

1 week ago

Yesterday, nearly 100 organizations asked Congress not to pass the Kids Online Safety Act (KOSA), which would “force providers to use invasive filtering and monitoring tools; jeopardize private, secure communications; incentivize increased data collection on children and adults; and undermine the delivery of critical services to minors by public agencies like schools.” EFF agrees. 

As we’ve said before, KOSA would not protect the privacy of children or adults, and would force technology companies to spy on young people and stop them from accessing content that is “not in their best interest,” as defined by the government, and interpreted by tech platforms. KOSA would also likely result in an elaborate age-verification system, run by a third party, that maintains an enormous database of all internet users’ data. 

The letter continues: 

While KOSA has laudable goals, it also presents significant unintended consequences that threaten the privacy, safety, and access to information rights of young people and adults alike. We urge members of Congress not to move KOSA forward this session, either as a standalone bill or attached to other urgent legislation, and encourage members to work toward solutions that protect everyone’s rights to privacy and access to information and their ability to seek safe and trusted spaces to communicate online.

TAKE ACTION

TELL THE SENATE: VOTE NO TO CENSORSHIP AND SURVEILLANCE 

You can tell the Senate not to move forward with KOSA here. 

Jason Kelley

Power Up! Donations Get a 2X Match This Week

1 week ago

Power Up Your Donation Week is here! Right now, your contribution to the Electronic Frontier Foundation will have double the impact on digital privacy, security, and free speech rights for everyone.

Power Up!

Give today and get an automatic 2X match

A group of passionate EFF supporters created the Power Up Matching Fund and issued this challenge to all supporters of internet freedom: donate to EFF by December 6th and they’ll automatically match it up to a total of $272,000!

This means every dollar you give becomes two dollars for EFF. And we make every cent count. American nonprofit organizations rely heavily on fundraising that happens each November and December. During this season, the strength of members' support gives EFF the confidence to set its agenda for the following year. Your support powers EFF's initiatives to advance digital rights every day.

A Beacon in the Haze

Tech users face problems that shift as quickly as their digital tools. Sometimes the threat is a company’s sneaky methods to track your movements online. Other times it’s shortsighted lawmakers who overlook a dark future for your rights. Our digital world can be just as stormy as the one outside.

But thanks to public support, EFF is a leading voice for digital creators and users’ rights. You can ensure that EFF’s team of public interest lawyers, tech developers, and activists remains a beacon for a brighter web. Your donation will give twice the support for EFF’s initiatives.

Double Your Impact

Power Up Your Donation Week motivates thousands of people to support online rights every year. And we need your help to share this opportunity. Invite friends to join the cause! Here’s some sample language that you can share:

Donations to EFF get doubled this week thanks to a matching fund. Join me in supporting digital rights, and your contribution will pack double the punch, too! https://eff.org/power-up

I’m grateful to all of the supporters who made EFF one of the most skilled defenders in the internet freedom movement. And now, you can help continue this critical work AND power up your donation.

Join EFF today

Pack twice the punch for civil liberties and human rights online

Aaron Jue

From Camera Towers to Spy Blimps, Border Researchers Now Can Use 65+ Open-licensed Images of Surveillance Tech from EFF

1 week ago

The U.S.-Mexico border is one of the most politicized technological spaces in the country, with leaders in both political parties supporting massive spending on border security and the so-called "Virtual Wall." Yet we see little debate over the negative impacts on human rights or the civil liberties of those who live in the borderlands. Despite all the political and media attention devoted to the border, most people hoping to write about, research, or learn how to identify the myriad technologies situated there have to rely on images released selectively by Customs & Border Protection, copyright-restricted photographs taken by corporate press outlets, or promotional advertisements from the vendors themselves.

(Video embed: https://www.youtube.com/embed/nleYJgKSQrY. Privacy info: this embed will serve content from youtube.com.)

To address this information gap, EFF is releasing a series of images taken along the U.S.-Mexico border in California, Arizona, and New Mexico under a Creative Commons Attribution 3.0 license, which means they are free to use, so long as credit is given to EFF (see EFF's Copyright policy). Our goal is not only to ensure there are alternative and open sources of visual information to inform discourse, but to raise awareness of how surveillance is impacting communities along the border and the hundreds of millions of dollars being sunk into oppressive surveillance technologies.

Surveillance Towers

The images include various types of surveillance towers adopted by Customs & Border Protection over the last two decades: 

  1. Integrated Fixed Towers (IFT). These structures are from the vendor Elbit Systems of America, part of an Israeli corporation that has come under criticism for its role in surveillance in Palestine. Some IFT towers are built using the same infrastructure as the earlier Secure Border Initiative (SBInet) program, which was widely considered a multi-billion-dollar boondoggle and canceled in January 2011. While there may be different IFT models along the border, the most common versions combine electro-optical and infrared sensors with radar, and are powered by solar panels.
  2. Remote Video Surveillance Systems (RVSS). These structures from the vendor General Dynamics are most commonly, but not exclusively, found along the border fence. The platform at the top usually includes two sensor rigs with electro-optical and infrared cameras and a laser illuminator. The RVSS towers along the southwestern border (California, Arizona, New Mexico, and the El Paso area in Texas) differ in design from some of the RVSS models in south Texas; those are not included in this photo collection.
  3. Autonomous Surveillance Towers (AST). These "Sentry" towers are made by Anduril Industries, founded by Oculus creator Palmer Luckey. According to CBP, an AST "scans the environment with radar to detect movement, orients a camera to the location of the movement detected by the radar, and analyzes the imagery using algorithms to autonomously identify items of interest" (a schematic sketch of that loop follows this list). In July 2020, CBP announced plans to acquire a total of 200 of these systems by the end of Fiscal Year 2022, a deal worth $250 million. EFF is publishing an image of one of these new towers installed in New Mexico along State Road 9; previously, Anduril towers were only known to be located in Southern California and South Texas.
  4. Mobile Surveillance Capabilities (MSC) from the vendor FLIR, which are surveillance towers mounted in the back of trucks so that they can be transported around or parked semi-permanently at particular locations. While CBP has used these trucks for many years, in early 2021 FLIR announced a new $21 million contract with CBP that will include additional units with new technologies "that can track up to 500 objects at once at ranges greater than 10 miles." While these trucks do move around the region, they are often parked in certain established areas, including next to permanent surveillance towers. 
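
CBP’s description of the Sentry tower boils down to a detect-orient-analyze loop: radar detects movement, a camera is steered toward it, and software classifies what the camera sees. The sketch below is purely illustrative Python, not Anduril’s actual software; every class and function name in it is hypothetical.

    # Purely illustrative sketch of the detect-orient-analyze loop CBP
    # describes for Autonomous Surveillance Towers. NOT Anduril's code;
    # every name below is hypothetical.
    import random
    from dataclasses import dataclass

    @dataclass
    class RadarContact:
        bearing_deg: float  # direction of the detected movement
        range_m: float      # distance from the tower

    class StubRadar:
        def next_contact(self):
            # Stand-in for a real radar feed.
            return RadarContact(bearing_deg=random.uniform(0, 360),
                                range_m=random.uniform(100, 3000))

    class StubCamera:
        def point_at(self, bearing_deg):
            print(f"slewing camera to {bearing_deg:.1f} degrees")

        def capture(self):
            return b"<frame bytes>"

    def classify(frame):
        # Stand-in for the image-analysis step; a real tower would run a model.
        return random.choice(["person", "vehicle", "animal", "nothing"])

    def surveillance_step(radar, camera):
        contact = radar.next_contact()        # 1. radar detects movement
        camera.point_at(contact.bearing_deg)  # 2. camera is oriented to it
        label = classify(camera.capture())    # 3. imagery is classified
        if label in {"person", "vehicle"}:    # 4. "items of interest" are flagged
            print(f"ALERT: {label} at {contact.range_m:.0f} m")

    surveillance_step(StubRadar(), StubCamera())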

CBP is currently in the early stages of the solicitation process for a massive expansion of this tower network on both the southern and northern border, according to an industry presentation from October. The "Integrated Surveillance Tower" (IST) program is designed to "consolidate disparate surveillance tower systems under a single unified program structure and set of contracts," but it also contemplates upgrading 172 current RVSS towers and then adding 336 more, with the majority in California and Texas.

Tactical Aerostats

EFF's image set also includes two new tactical aerostats. The first is the Persistent Ground Surveillance System (PGSS) tactical aerostat that was launched without notice over the summer in Nogales, AZ, surprising and angering the local community. The second is a new aerostat we photographed in southern New Mexico that had not been previously reported. A third aerostat will soon be launched in Sasabe, AZ, with a total of 17 planned in the next fiscal year, according to a CBP report.

These aerostats should not be confused with the "Tethered Aerostat Radar Systems" (TARS), which are larger and permanently moored at airfields throughout the southern U.S. and Puerto Rico. TARS primarily use radar, while tactical aerostats include "day and night cameras to provide persistent, low-altitude surveillance, with a maximum range of 3,000 feet above ground level," CBP says. Tactical aerostats are tethered to trailer-like platforms that can be moved to other locations within a Border Patrol sector's area of responsibility.

EFF and the Border

EFF's photographs were gathered up close when possible, and using a long-range lens when not, by EFF staff during two trips to the U.S.-Mexico border. In addition to capturing these images, EFF met with the residents, activists, humanitarian organizations, law enforcement officials, and journalists whose work is directly impacted by the expansion of surveillance technology in their communities. 

While officials in Washington, DC and state capitals talk in abstract and hyperbolic terms about a "virtual wall," there is nothing virtual at all about this surveillance for the people who live there. The towers break up the horizon and loom over their backyards. They can see the aerostats from the windows of their homes. This surveillance tech watches not just the border and the people crossing it, but also nearby towns and communities on both sides, from the air and the ground. It can track people for miles, whether they're hiking, driving to visit relatives, or just minding their own business in solitude.

People who live, work, and cross the border have rights. We hope these photographs document the degree to which freedoms and privacy have been curtailed for people in the borderlands.

A sample of the images is below. You can find the entire annotated collection here. 

An Anduril Sentry off CA-98 in Imperial County, CA

A Tactical Aerostat flying over State Road 9, Luna County, NM

An extreme close-up shot of the lens of an Integrated Fixed Tower (IFT) camera on Coronado Peak, Cochise County, AZ

A Mobile Surveillance Capability (MSC) truck in Pima County, AZ

Matthew Guariglia

Red Alert: The SFPD Want the Power to Kill with Robots

1 week ago

The San Francisco Board of Supervisors will vote soon on a policy that would allow the San Francisco Police Department to use deadly force by arming its many robots. This is a spectacularly dangerous idea and EFF’s stance is clear: police should not arm robots.

TAKE ACTION

EMAIL YOUR SUPERVISOR: don't let SFPD arm robots 

Police technology is subject to mission creep: equipment reserved only for specific or extreme circumstances ends up being used in increasingly everyday or casual ways. We’ve already seen this with military-grade Predator drones flying over protests, and police buzzing by the window of an activist's home with drones.

As the policy is currently written, the robots' use will be governed by this passage:

 “The robots listed in this section shall not be utilized outside of training and simulations, criminal apprehensions, critical incidents, exigent circumstances, executing a warrant or during suspicious device assessments. Robots will only be used as a deadly force option when risk of loss of life to members of the public or officers is imminent and outweighs any other force option available to SFPD.”

This is incredibly broad language. Police could bring armed robots to every arrest, and every execution of a warrant to search a house or vehicle or device. Depending on how police choose to define the words “critical” or “exigent,” police might even bring armed robots to a protest. While police could only use armed robots as deadly force when the risk of death is imminent, this problematic legal standard has often been under-enforced by courts and criticized by activists.

The combination of new technology, deadly weapons, tense situations, and a remote control trigger is a very combustible brew.

This comes as many police departments have imported robots from military use into regular policing procedures, and are now fighting to arm those robots.

In October 2022, the Oakland police department proposed a similar policy to arm robots. Following public outrage, the plans were scrapped within a week.

The San Francisco Board of Supervisors will be voting on whether to pass this bill on first reading at their November 29, 2022 meeting, which begins at 2pm. You can find an email contact for your Board of Supervisors member here, and determine which Supervisor to contact here. Please tell them to oppose this. Here's text you can use (or edit):

Do not give SFPD permission to kill people with robots. This broad policy would allow police to bring armed robots to every arrest, and every execution of a warrant to search a house or vehicle or device. Depending on how police choose to define the words “critical” or “exigent,” police might even bring armed robots to a protest. While police could only use armed robots as deadly force when the risk of death is imminent, this problematic legal standard has often been under-enforced by courts and criticized by activists. For the sake of your constituents' rights and safety, please vote no.

TAKE ACTION

EMAIL YOUR SUPERVISOR: DON'T LET SFPD ARM ROBOTS 

Matthew Guariglia

Experts Condemn The UK Online Safety Bill As Harmful To Privacy And Encryption

1 week 5 days ago

The British Parliament may start debating the Online Safety Bill again as soon as this week. The bill is a deeply flawed censorship proposal that would allow U.K. residents to be thrown in jail for what they say online. It would also force online service providers to use government-approved software to search for user content that is deemed to be related to terrorism or child abuse. In the process, it will undermine our right to have a private conversation, and the technologies that protect that right, like end-to-end encryption. 

In a letter published today, EFF has joined dozens of security researchers and human rights groups to send a clear message to incoming U.K. prime minister Rishi Sunak: the Online Safety Bill must not undermine encryption. As the letter notes, in its current form, the Online Safety Bill “contains clauses that would erode end-to-end encryption in private messaging.” It continues:  

Undermining protections for end-to-end encryption would make UK businesses and individuals less safe online, including the very groups that the Online Safety Bill intends to protect. Furthermore, because the right to privacy and freedom of expression are intertwined, these proposals would undermine freedom of speech, a key characteristic of free societies that differentiate the UK from aggressors that use oppression and coercion to achieve their aims.

In the past few years, we’ve seen a number of proposals brought forward by governments that want to scan user-to-user communications for criminal content: the U.S. EARN IT Act, and the EU’s proposal to scan private chats. All of these proposals suffer from the incorrect belief that a backdoor or other workaround to read encrypted messages can be designed for use only in benevolent ways. 

That isn’t the case, and never will be. Criminals, rogue employees, domestic abusers, and authoritarian governments are just some of the bad actors that will eagerly exploit backdoors like those proposed by the Online Safety Bill. Proposals like this threaten a basic human right: our right to have a private conversation. 
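
The structural problem can be shown in miniature. A backdoor is typically implemented as key escrow: each message is encrypted once for its recipient and once for the escrow key holder, and nothing in the math limits who can use the escrow key. Below is a minimal sketch of our own (using the PyNaCl library; not any bill’s actual design) showing that whoever obtains the escrow key, legitimately or not, decrypts everything:

    # Minimal illustration (ours, not from any proposal) of why a key-escrow
    # "backdoor" cannot be limited to benevolent use. Requires PyNaCl.
    from nacl.public import PrivateKey, SealedBox

    alice = PrivateKey.generate()    # the intended recipient
    escrow = PrivateKey.generate()   # the mandated "exceptional access" key

    message = b"a private conversation"

    # End-to-end encrypted copy for the recipient...
    for_alice = SealedBox(alice.public_key).encrypt(message)
    # ...plus the extra copy an escrow mandate would require.
    for_escrow = SealedBox(escrow.public_key).encrypt(message)

    # The escrow key is just 32 bytes. A criminal, rogue employee, or
    # authoritarian government that obtains those bytes decrypts exactly
    # as easily as the "good guys" do:
    stolen = PrivateKey(bytes(escrow))
    print(SealedBox(stolen).decrypt(for_escrow))  # b'a private conversation'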

We hope Prime Minister Sunak acknowledges the consensus among technologists against the current Online Safety Bill and proposals like it. 

Joe Mullin

Top Prosecutors in CA, NY and DC Are Speaking Up For End-to-End Encryption

1 week 6 days ago

We all should have the ability to have a private conversation, and it follows that we need ways to communicate privately online as well. In the digital world, end-to-end encryption is our best chance to maintain our privacy and security. 

In the fraught legal landscape following the U.S. Supreme Court’s Dobbs decision, digital privacy around reproductive health has become critical. Several states are already enforcing abortion bans, and the Brennan Center for Justice has noted more than 100 state bills that were introduced in 2022 to further limit abortion access. At the same time, many states, including California and New York, have moved to protect or expand the right to abortion access, including for out-of-state persons. In this month’s elections, voters in California, Michigan, and Vermont enshrined the right to abortion in their state constitutions.

In recent months, we’ve been pleased to see statements from attorneys general in New York, California, and Washington D.C., all advising citizens to use end-to-end encrypted services when seeking abortion services. 

These statements are good advice for consumers. Notably, these statements by elected law enforcement officials, seeking to protect their own constituents’ right to seek appropriate health care, stand in contrast to some federal agencies, including the FBI and the Department of Justice, that have sought to bypass or weaken encrypted services. 

California Attorney General Rob Bonta said California will be a “safe haven” for all those seeking reproductive health care, and he urged anyone seeking an abortion across state lines to take action to protect their privacy. “Every day, you leave behind a digital trail when you access an app, a website, or even start a search online," Bonta said in a September statement. “If you travel to California from a state where the right to choose is not protected, this information could be used to place you at risk.” 

In August, Nebraska police got a warrant for Facebook messages between a mother and a daughter, and used the messages as evidence to pursue abortion-related charges against the two women. 

Following the Nebraska prosecution, the Washington D.C. Attorney General issued a second consumer alert advising people to use end-to-end encryption when discussing abortion services. 

“While abortion remains fully legal in the District, consumers and those seeking abortions should be aware of how others may use their data, and they should take steps to protect themselves and their data and privacy as much as possible,” the statement says.  

At EFF, we’ve steadfastly opposed public officials who have called to undermine encryption. Strong encryption isn’t in tension with law enforcement—it’s vital for real public safety. 

Joe Mullin

EFF to Fifth Circuit: The First Amendment Protects the Right to Make Jokes on Social Media

1 week 6 days ago

EFF intern Izzy Simon contributed to this blog post.

The First Amendment to the U.S. Constitution protects the right to free expression and prohibits the government from “abridging the freedom of speech.” This includes protecting an individual's right to make jokes online—even bad or offensive jokes, as well as jokes about the police. TechFreedom and EFF have thus filed an amicus brief asking a federal appellate court in Louisiana to reaffirm this basic principle after the police arrested Waylon Bailey for posting a joke at their expense on his Facebook account.

As COVID-19 gained a foothold in the United States in early 2020, Bailey likened the pandemic to the action movie World War Z in a post on Facebook to his friends and joked that the police in Rapides Parish, Louisiana, would shoot anyone infected with the virus “on sight.” Bailey wrote the post in all caps and added emojis and hashtags, including #weneedyoubradpitt, referencing Brad Pitt’s role in the film. His friends and wife commented on the post, going along with the joke. Within hours, a SWAT team showed up at Bailey’s house and arrested him for allegedly violating a Louisiana anti-terrorism law that prohibits “causing members of the public to be in sustained fear for their safety.” Upon arrest, the police allegedly warned Bailey “not to fuck with the police.” The district attorney dropped the charges, and Bailey sued the arresting officer and Sheriff for the unconstitutional arrest.

But a federal court wrongly dismissed his lawsuit in July of this year, holding that the First Amendment did not protect his speech and the arrest was lawful. The legal question here is whether Bailey’s post was intended or likely to incite “imminent lawless action,” as held by the Supreme Court in 1969. The court mentioned this standard but wrongly seemed to assume that it was met because Bailey supposedly incited “fear” through the post, thus violating Louisiana’s anti-terrorism law. Worse still, the court used a long-ago overturned Supreme Court case that upheld the government’s imprisonment of an anti-war pamphleteer during World War I. The high court’s terrible opinion in that case popularized the canard, repeatedly used by the court here, that free speech does not include “falsely shouting fire in a theatre.”

As our amicus brief argues, the post was obviously a joke and few people saw it outside of Bailey’s friends. No one called the police about the post. There is no evidence that anyone experienced any fear as a result of the post. Regardless, creating “sustained fear” is not an exception to the First Amendment. Using such wishy-washy language to define the boundaries of free speech empowers the government to censor viewpoints, art, journalism, and, yes, online jokes that it doesn’t like. And Bailey’s arrest is just the latest example of this. In another recent case out of Louisiana, EFF filed a brief in support of a comedian who created an obviously fake Facebook event satirizing right-wing hysteria over Antifa and was sued by the city government for the costs of policing the fake event.

If the Fifth Circuit agrees with the lower court and rules against Bailey, it will be a serious blow to First Amendment protections in the United States. The Supreme Court has recognized that the freedom to criticize the police distinguishes “a free nation from a police state.” If the First Amendment protects criticizing the police, it certainly protects a joke about the police that ran no risk of inciting any illegal activity. Courts must reaffirm this basic principle to protect the casual and spontaneous speech that is common on social media platforms. 

Mukund Rathi

See What We Accomplished Together in EFF's 2021 Annual Report

2 weeks ago

EFF's 2021 Annual Report is out now! Enjoy highlights of our work during the calendar year, along with a financial report covering our fiscal year of July 2020 through June 2021.

EFF leveraged over $15M in public support to defend civil liberties and encourage innovation in the digital world last year. We continued long-standing battles against street-level surveillance by companies such as Amazon Ring and technologies like Automated License Plate Readers (ALPRs). We also reacted to fast-breaking external events, with our largely successful efforts to ensure that pandemic-related virus tracking software respects our privacy, and our successful campaign to pressure Apple into dropping a dangerous message-scanning program.

Our work encrypting the web continued apace, as did our recognition that cybersecurity requires protecting everyone, including domestic violence victims who are subjected to stalkerware. And that’s only scratching the surface. Compared to recent annual reports, this year’s report includes a more comprehensive look at EFF’s work in six key issue areas, a “by the numbers” section, and other resources, such as links to our legal and policy victories and even amicus briefs EFF filed during the year.

Sprinkled throughout the report are quotes from the 4,000 responses to our online member survey, where you affirmed that EFF is a trusted source of information, and that our supporters share our values. Thank you for standing by our side as we work together to protect civil liberties and make the world a better place, now and for future generations.

READ THE REPORT

See more of what EFF accomplished in 2021

(Pam) Mei Harrison

EFF, Coalition of California Privacy Advocates Caution Against Weakening CA Privacy Rights

2 weeks ago

EFF on Monday joined Privacy Rights Clearinghouse, ACLU California Action, Oakland Privacy, Media Alliance and the Consumer Federation of America in submitting comments to the California Privacy Protection Agency. The Agency is currently writing rules for the California Consumer Privacy Act as amended by 2020’s California Privacy Rights Act.

In our comments, EFF and other privacy advocates criticized several changes to the regulations that “appear to set up additional barriers to consumers' ability to exercise their rights” under California’s landmark privacy law. These include several changes that loosen requirements for companies to pass on consumer requests to delete, opt out of sale, or limit the use of sensitive personal information. We also raised concerns about the way the rules, as written, allow businesses to complicate how they process opt-out preference signals.

“This framework threatens to make the opt-out preference signals an unusable mechanism to communicate a consumer’s privacy choices,” the comments said.
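
For context on the mechanism at issue: an opt-out preference signal such as Global Privacy Control is deliberately simple, a single Sec-GPC: 1 HTTP header sent with each request. Honoring it is technically trivial, which is part of why barriers to processing it are hard to justify. Here is a minimal sketch of ours (a Flask handler; not language from the regulations) of what honoring the signal can look like:

    # Minimal sketch (ours, not from the CCPA regulations) of honoring an
    # opt-out preference signal like Global Privacy Control. Requires Flask.
    from flask import Flask, request

    app = Flask(__name__)

    def gpc_opt_out() -> bool:
        # GPC is a single request header: "Sec-GPC: 1".
        return request.headers.get("Sec-GPC") == "1"

    @app.route("/")
    def index():
        if gpc_opt_out():
            # Treat the signal as a valid opt-out of sale/sharing:
            # skip third-party trackers and record the preference.
            return "Opt-out preference signal honored."
        return "No opt-out preference signal received."

    if __name__ == "__main__":
        app.run()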

This is the second time that privacy groups in California have filed joint comments on the Agency’s proposed rules. You can find comments on the most recent changes here.

Hayley Tsukayama

EFF Files Comments on the FTC’s Commercial Surveillance Rulemaking

2 weeks ago

EFF filed comments with the Federal Trade Commission Monday, in response to the Commission’s request for public comment addressing harmful commercial surveillance and lax data security.

EFF laid out many of its core principles on data privacy regulation in ways that fall in line with the FTC’s authority to address unfair and deceptive practices and protect the competitive process.

The comments urge the Commission to pay attention to specific issue areas and industries, including worker privacy, student privacy, the privacy of daycare apps, stalkerware, and location data brokers.

The comments also emphasize the need for the FTC to play an active role in protecting Americans’ privacy. “[T]here are many places in which Americans’ data privacy is not adequately protected by any current privacy law,” the comments said. “The Commission must issue new rules to place new limits on companies that violate our trust and strengthen the general privacy landscape. As the federal government’s privacy enforcer, the FTC must be the vanguard for privacy protections.”

You can find EFF’s full comments here.

Hayley Tsukayama

VICTORY! Congress Sends the Safe Connections Act to the President’s Desk

2 weeks ago

In the 21st century, it is difficult to lead a life without a cell phone. It is also difficult to change your number—you’ve given it to all your friends, family, doctors, children’s schools, and so on. It’s especially difficult if you are trying to leave an abusive relationship where your abuser is in control of your family’s phone plan and therefore has access to your phone records. 

Thankfully, Congress just passed a bill that will change that.

The Safe Connections Act (S. 120) was introduced in January 2021 by Senators Brian Schatz, Deb Fischer, Richard Blumenthal, Rick Scott, and Jacky Rosen. It would make it easier for survivors of domestic violence to separate their phone line from a family plan while keeping their own phone number. It also requires the FCC to create rules to protect the privacy of the people seeking this protection. This bill overwhelmingly passed both chambers of Congress and was sent to the President’s desk on November 18, 2022. 

Telecommunications carriers are already required to make numbers portable when users want to change carriers. So it should not be hard for carriers to replicate a seamless process when a paying customer wants to move an account within the same carrier. EFF strongly supports this bill.

We would have preferred a bill that did not require survivors to provide paperwork to “prove” their abuse. For many survivors, providing paperwork about their abuse from a third party is burdensome and traumatic, especially when it is required at the very moment when they are trying to free themselves from their abusers. However, this bill is a critical step in the right direction, and it is encouraging that Congress so overwhelmingly agreed.

India McKinney

Monetization, Not Human Rights or Vulnerable Communities, Matter Most at Twitter Under Musk

2 weeks ago

Billionaire Elon Musk says Twitter can be an “incredibly valuable service to the world,” a global forum where ideas and debates flourish. Yet, much of what he has done since taking over the company suggests that he doesn’t understand how to accomplish this, doesn’t appreciate the impact of his decisions on users—especially the most vulnerable—or doesn’t care.

Step by step, from the firing of top trust and safety executives and content moderation staff to the disastrous rollout and rollback of the $8 blue check program, Musk’s reign at Twitter has already increased risks to users—especially those in crisis zones around the world who flocked to Twitter for expression during unrest—by unraveling guardrails against misinformation, harassment, and censorship.

Hate speech will remain on the platform but will be “max deboosted,” Musk said, presumably meaning it will not be promoted in users’ feeds by Twitter’s algorithms. It’s not clear how speech will be categorized as hateful, or what speech will qualify.
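
To see how underspecified “deboosting” is, consider one hypothetical implementation (ours, not Twitter’s): scale down the ranking score of any post a classifier flags, so feeds rarely surface it. Everything then turns on the unexplained classification step.

    # Hypothetical sketch of "deboosting" (ours, not Twitter's code):
    # flagged posts are not removed, just scored far lower in ranking.
    DEBOOST_FACTOR = 0.05  # our guess at what "max deboosted" might mean

    def ranked_feed(posts, is_hateful):
        # is_hateful is the undefined part: how speech gets categorized.
        def score(post):
            s = post["engagement_score"]
            return s * DEBOOST_FACTOR if is_hateful(post["text"]) else s
        return sorted(posts, key=score, reverse=True)

    posts = [
        {"text": "a kind post", "engagement_score": 40},
        {"text": "a hateful post", "engagement_score": 90},
    ]
    # The flagged post sinks despite higher engagement (90 * 0.05 = 4.5).
    print(ranked_feed(posts, lambda text: "hateful" in text))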

“Elon has shown that his only priority with Twitter users is how to monetize them,” said an unidentified company lawyer on the privacy team, in a Twitter Slack post obtained by The Verge. “I do not believe he cares about the human rights activists, the dissidents, our users in un-monetizable regions, and all the other users who have made Twitter the global town square you have all spent so long building, and we all love.”

We’ve seen an exodus of Twitter executives on the front lines of protecting safety, security, speech, and accessibility. Some were fired; others resigned. Gone are Yoel Roth, Head of Trust and Safety; Lea Kissner, Chief Information Security Officer; Damien Kieran, Chief Privacy Officer; Marianne Fogarty, Chief Compliance Officer; Raj Singh, Human Rights Counsel; and Gerard K. Cohen, Engineering Manager for Accessibility. Half of Twitter’s 7,500 employees were let go, with trust and safety departments hit the hardest. A second wave of global job cuts reportedly hit over 4,000 outside contractors, many of whom worked as content moderators battling misinformation on the platform in the U.S. and abroad. As many as 1,200 staffers resigned late last week after Musk gave employees a deadline to decide whether to stay or leave.

Along the way, Musk fired the entire human rights team, a group tasked with ensuring Twitter adheres to the UN Guiding Principles on Business and Human Rights. That team’s work was crucial to combating harmful content, platform manipulation, and the targeting of high-profile users in conflict zones, including in Ethiopia, Afghanistan, and Ukraine. These teams were also involved in ensuring that Twitter resisted censorship demands from authoritarian countries that don’t comport with human rights standards. Such demands are on the rise; in the latter half of 2021, the company received a record 50,000 legal content takedown demands.

What is worse, Musk’s vow to “hew close to the laws of countries in which Twitter operates” could mean that the company will begin complying with censorship policies and demands for user data that it has previously withstood.

For example, Qatar—whose government is one of Musk’s financial backers—has a law that threatens imprisonment or fines to “anyone who broadcasts, publishes, or republishes false or biased rumors, statements, or news, or inflammatory propaganda, domestically or abroad, with the intent to harm national interests, stir up public opinion, or infringe on the social system or the public system of the state.” The broad law, which has been condemned by Amnesty International, is ripe for abuse and creates a chilling environment for speech.

And while Twitter’s moderation policies have been far from perfect, it has often stood up for its users. For example, when authorities in India pressured Twitter to block accounts that criticized the government, including those of activists, journalists, and politicians, the company pushed back, including by filing a lawsuit challenging the government’s demand to remove 39 tweets and accounts. Given that Musk fired 90 percent of Twitter’s 200 staffers in India earlier this month, will Twitter continue to defend the case?

Even before the layoffs, access to internal controls used to moderate content and enforce policy was reportedly cut off for some employees, raising questions about whether moderators could fend off misinformation ahead of the November 8 U.S. midterm elections. It’s no coincidence that the platform experienced a surge in racist slurs in the first few days after Musk’s $44 billion acquisition.

Another problem is Twitter Blue, a revamped subscription service that gives users a blue check mark and early access to new features for $7.99 a month. Pre-Musk, a blue check mark indicated that Twitter had independently verified the account as belonging to the person or organization—celebrities and journalists, but also activists and artists—it claimed to represent. It was a way to combat fake accounts and misinformation and garner trust in the platform. Musk wants to make it available to anyone willing to pay for it.

The new Twitter Blue blew up, as people, governments, and companies used it to impersonate others at will. Some of those were funny, some not so much, such as fake airline customer support accounts that tried to lure in Twitter users seeking help from real airlines.

Twitter suspended many of those accounts, but not before anti-trans trolls, far-right extremists, and conspiracy mongers, some of whom had been kicked off Twitter in the past for hateful content and misinformation, purchased blue check marks and picked right up where they left off. The program was temporarily suspended following the wave of abuse.

Whenever it’s resurrected, and even if it’s not actively abused, the Twitter Blue pay-to-play model will disproportionately affect people and groups that can’t afford $96 a year and undercut their ability to be heard. Blue checks were a sign of trustworthiness for journalists, human rights defenders, and activists, especially in countries with authoritarian regimes where Twitter has been a vital source of information and communication. Even worse, people who don’t pay will be harder to find on the platform, according to Musk. Paid accounts will receive priority ranking, and will appear first in search, replies, and mentions.

Also in the works is a content moderation council that will represent “widely diverse views.” In early November Musk met with officials from civil rights organizations, including the National Association for the Advancement of Colored People, Color of Change, and the Anti-Defamation League, saying he would restore content moderation tools that had been blocked from staff and inviting them to join the council. But marginalized communities outside of the United States have also relied on Twitter to get their voices heard. Will anyone on the council have the expertise and credibility to speak on their behalf?

Musk said in late October that no major content decisions or account reinstatements would occur until the council was formed. He has not announced that any such council exists, but on Nov. 20 he reinstated high-profile people kicked off the platform for hate speech and misinformation, including former President Donald Trump (who was let back on Twitter after Musk polled users), Kanye West, and the Babylon Bee, which was banned for anti-trans comments.

There is one potential positive development for Twitter users around the world: it appears Musk might be making good on his promise that Twitter direct messages will be end-to-end encrypted. That would enable Twitter users to communicate more safely without leaving the platform.

But it doesn’t overshadow or outweigh the potential harms to the site’s most vulnerable users. Prioritizing the monetization of users will inevitably leave behind millions of Twitter users in unmonetizable regions, and ensure that their voices will be relegated to the bottom of the feed, where few will be able to find them.

Karen Gullo

Documents Show DOJ’s Multi-Pronged Effort to Undermine Section 230

2 weeks 1 day ago

In the summer of 2020, the Department of Justice was closely monitoring the public and congressional debate about a key law protecting internet users’ speech at the same time that it pushed to undermine the law, documents show.

DOJ was tracking multiple efforts to repeal or frustrate 47 U.S.C. § 230 (Section 230), including implementation of then-President Donald Trump’s unconstitutional Executive Order and the department’s own proposed amendments to the law. Although all of those efforts were public, the fact that the DOJ was closely monitoring them was not.

DOJ also developed a series of talking points about Section 230 reform efforts. Those talking points included a claim that because “the Constitution treats criminal content and lawful speech differently, so too should platforms.” That statement ignores that many users of platforms do not want to see a host of awful but protected speech that platforms regularly moderate. In addition, online intermediaries have their own First Amendment rights to decide what speech they want to host. Those rights don’t rise and fall depending on whether the moderated speech is protected. Just as a newspaper can decide for itself what articles and opinions it publishes, websites and apps can as well.

Taken together, the documents reflect a concerted push by DOJ to either amend the law or undermine it via Trump’s Executive Order. Fortunately, neither DOJ nor Trump’s efforts succeeded.

The DOJ released the documents this fall in a long-running Freedom of Information Act lawsuit filed by EFF. In this case, we sought records showing the various ways the federal government attempted to retaliate against social media for moderating users’ content in ways the president did not like. EFF was also counsel to a group of plaintiffs who sued to block Trump’s unconstitutional Executive Order.

As part of EFF’s FOIA suit, the Office of Management and Budget also released a series of records (available here and here) that provided further insight into the Executive Order’s implementation. A key aspect of the order required all federal agencies to report how much money they spent on online advertising. That could have been the first step in an unconstitutional effort to punish platforms that Trump did not like.

Records released earlier showed that agencies spent more than $117 million to advertise online for a variety of reasons, such as posting about job openings and encouraging students to apply for federal financial aid. The released records at that time didn’t include several federal agencies or components, including many from within the DOJ. The new records add to the total, including more than $93 million spent on online advertising for the 2020 census. 

Related Cases: Rock the Vote v. Trump; EFF v. OMB (Trump 230 Executive Order FOIA)
Aaron Mackey