How California Reproductive Health Workers Can Protect Information They Submit to the Government

49 minutes 46 seconds ago

With the U.S. Supreme Court's decision in Dobbs reversing long-standing rights to abortion access, workers and volunteers for reproductive health clinics must reevaluate the risks they face (also known as their threat model) and take steps to safeguard their personal information, including information they have submitted to the government.

In 2020, nearly 17% of abortions performed in the United States occurred in California, according to data from the Guttmacher Institute, and that number is projected to grow substantially as California endeavors to become a safe haven for pregnant people coming from potentially dozens of states that restrict abortions. Consequently, it is also reasonable to expect that anti-abortion activists will continue to use California's public records laws to obtain and release personal information of health care workers at reproductive health clinics. Public records are among the most common sources of information that lead to doxxing and similar harassment of individuals.

EFF has provided many guides to protecting one's data in the online world. In this new guide, we'll cover how health care workers in California can use AB 1622, a law passed in 2019, to request that the data they must submit to the government be protected from release under the California Public Records Act. We include a letter template that clinics can submit to government agencies to request data protection for their employees.

We will also provide information about California's Safe at Home Program, which allows individuals to request a proxy mailing address from the Secretary of State's office that they can use in place of their home address, including on government records.

Balancing transparency and accountability with worker privacy can be difficult when it comes to the certification documents and other records that health professionals must submit to the government. Patients have a right to know whether their health provider is licensed or has faced disciplinary measures for unsafe conditions or other violations. But health care workers also have a right to privacy, and personal information disclosed via public records laws can potentially lead to harassment, violence or other forms of intimidation.

For EFF, balancing government transparency and personal privacy—two of our core mission areas—can also be difficult. In this context, the privacy and safety rights of health care workers are compelling. From both a privacy and transparency perspective, we believe these privacy protections could have been more effectively structured; however, we support health care workers exercising their rights under the law as it exists now. We think the public's interest in ensuring transparency into state agencies' oversight of health care facilities is not hindered by these narrow exemptions to public records laws.

Before we dive into the guide, there are two important caveats.

First, these laws generally only protect data that could be released to the public in response to public records requests, although the Safe at Home program does offer additional layers of privacy. However, under both programs, law enforcement agencies would still be able to obtain this personal information through legal process, such as a search warrant.

Second, both of these options require you to be your own best advocate for your rights. That may include multiple follow-up emails and calls to an agency or organization and repeatedly reminding officials of their obligations under the law. 

AB 1622 - An Opt Out in the California Public Records Act

What It Does

AB 1622 says that four specific top-level state agencies dealing with healthcare are not required to provide certain categories of personal information belonging to reproductive health care workers in response to requests under the California Public Records Act. The rub is that these protections are not automatic—the workers have to send a letter asking for their information to be protected from disclosure.

The information covered by the law includes: "social security number, physical description, home address, home telephone number, statements of personal worth or personal financial data filed pursuant to subdivision (n) of Section 6254[1], personal medical history, employment history, electronic mail address, and information that reveals any electronic network location or identity."

The four agencies are the State Department of Health Care Services, the Department of Consumer Affairs, the Department of Managed Health Care, and the State Department of Public Health. The California Medical Board, Board of Registered Nursing, Physician Assistant Board, Board of Pharmacy, and other health care professional regulatory bodies are part of the Department of Consumer Affairs and therefore subject to this law. We recommend sending an additional letter to the relevant board that is responsible for the health care professional's licensing.

Who Is Covered 

The text of the statute says that it applies to "employees, volunteers, board members, owners, partners, officers, or contractors" of a reproductive health services facility, which is defined as the "office of a licensed physician and surgeon whose specialty is family medicine, obstetrics, or gynecology, or a licensed clinic, where at least 50 percent of the patients of the physician or the clinic are provided with family planning or abortion services." Contractors include individuals or entities that contract with a facility for patient care services. 

How to Get the Protections

It's worth repeating: these protections aren't automatic. A health worker and their employer must specifically request them in writing via a letter. And the law doesn't make it simple.

  • The letter must be on the facility's official letterhead. 
  • The text of the letter must have the request for privacy protection clearly separated from other text on the page. 
  • The privacy protection request must be signed and dated by both the worker and the facility's executive officer or their designee.
  • The facility must retain a copy of the letter. 

The details are crucial: any misstep in the exact formatting required by the law may mean that the request isn't valid, and thus personal information may be disclosed in response to a public records request or ordered disclosed by a court hearing a public records lawsuit.

We have created a sample letter that we believe meets the requirements of the law, but we can’t guarantee success. Again, you will need to be your own best advocate. We have included the addresses for each of the agencies on the letter, but the letter must be sent to each of the agencies directly. 

The protections take effect once the individual submits the letter to the agency. 

Limitations

These privacy protections aren't guaranteed or permanent.

Like most exemptions to the California Public Records Act, the provisions protecting personal information from disclosure under AB 1622 are discretionary. This means that the law does not mandate withholding the information, only that the California Public Records Act "does not require disclosure of any personal information." This discretion means that agencies now or in the future could opt not to withhold information.

In addition, the law does allow people to file a public records request for employment history information and, if it is rejected, they can petition a court to release the information. The judge will consider each case individually and can order disclosure of the information if "the public interest served by disclosure of employment history information clearly outweighs the public interest served by not disclosing the information."

When an employee leaves their job, the clinic has 90 days to report the separation to the agencies that received the original privacy protection request. If an employee has worked there for less than a year, their data is only protected for 6 months after they leave. If they have worked there for more than a year, their data can be protected for a full 12 months.

Another potential headache: clinics may find the process here burdensome, especially if they have to manage notifications for hundreds of employees.

EFF did reach out to the agencies named in the law to learn more about how they are implementing it. Unfortunately, in most cases we did not receive satisfying responses, and if you are seeking these protections, you may need to apply additional pressure. You should be prepared to use this blog post to explain AB 1622 to them.

Safe At Home - Confidential Mailing Addresses

California has a program that allows reproductive health care workers and patients facing threats to obtain a confidential snail mail address that they can use to protect their privacy.

The Safe at Home Program, also known as California's Address Confidentiality Program, was primarily developed for people experiencing domestic violence or stalking, and that remains the program's primary use, with 72% of the 4,858 enrollees in 2021 applying for that category of protection.

However, reproductive health workers and patients are also covered by the law, and during the pandemic Gov. Newsom issued an executive order to include all health care workers facing threats. 

What Safe at Home Does

If you are accepted into the Safe at Home program, you are given a P.O. Box that is monitored by the California Secretary of State's office and a program identification card. You can use that address instead of your home address on a wide range of official documents, including your driver's license. You can have common types of mail sent to this address, and the Secretary of State will forward them on to you (with a natural processing delay, of course). In 2021, the Secretary of State processed 81,159 pieces of mail on behalf of enrollees.

The Safe at Home program is most effective for people who have recently moved or are in the process of moving to a new address. If your existing address is already out on the internet, the program will be a bit limited in its efficacy.

In addition to confidential mail forwarding, you can also use the program as your agent for service of process and for confidential voter registration. State and local agencies are required by law to accept your address, and that includes law enforcement. The exceptions are for birth, death, fetal death, marriage, and divorce certificates, which will still require your home address.

Once you're registered, you may be eligible for services from other agencies, including records suppression at the California Department of Motor Vehicles and the ability to apply for a confidential name change with the California Superior Court. However, these do require separate processes not covered in this guide. 

Who Is Covered

The Safe at Home program is available to reproductive health care service providers, employees, volunteers and patients. The definition broadly includes any "person who obtains, provides, or assists, at the request of another person, in obtaining or providing reproductive health care services, or a person who owns or operates a reproductive health care services facility."

That said, in order to be eligible for the program you need to be comfortable attesting, and have a facility operator who is willing to attest, that you or the facility has been the target of violent threats or acts within the last year and that you fear for your safety. While applicants are encouraged to provide documentation of these threats, the agency does not mandate it. Proof of employment, however, is required.

If you have experienced stalking, you can apply for the program without invoking the reproductive health care-specific requirements.

How to Get the Protections

To use this program, you must apply through a Safe at Home designated enrolling agency. Because this program primarily serves people who have experienced domestic violence or stalking, the enrolling agencies are largely victims' services providers, including many private non-profits. A list is available here. While these agencies have been offered training on how the law applies to reproductive health, you may find that you need to explain the process or insist that the individual case worker follow the legal requirements.

Unfortunately, the Safe at Home program is not free. There is an application fee of $30 for reproductive health care workers plus an annual $75 fee for the service. These fees do not apply to reproductive health care patients or their families. 

Key Points As You Spread the Word

When sharing this information with others in the reproductive justice space, there are a few key points to convey.

Both the California Public Records Act privacy protections and the Safe at Home program are limited in their scope and do not provide comprehensive protection for people. It's worth reiterating that these two protections are not effective in preventing government agencies from using legal process to access personal data to enforce abortion bans. In addition, the efficacy of these programs depends on the competence and the will of the people administering them. We can't say this enough: applicants must be prepared to advocate for themselves and to educate government officials.

These protections do not impact the huge amount of digital data that a person generates every day with their devices and through their online activities. Reproductive health professionals should take additional steps to protect their privacy, and we've provided some digital security tips specifically tailored for abortion support providers.

If you are presenting this guide in trainings, you should recognize that these measures are laden with bureaucratic hurdles. For example, the Safe at Home program requirement that you meet with a victim's assistance counselor at a designated facility adds an additional layer of red tape and travel that may discourage applicants, especially those living in rural areas. The administrative costs also may be prohibitive for some people.

With that in mind, these options may be useful to people facing extreme threat models, particularly stalking, doxxing, and online threats. However, doxxing often happens before a person realizes their threat model is severe. You may also find our general guide on mitigating the risk of doxxing helpful.

However, both of these systems are currently underutilized, and it may require a number of new applicants to turn up flaws in these programs.  If you encounter problems or have advice on how to improve these directions, please email dm@eff.org.

Footnote:

1. "Statements of personal worth or personal financial data required by a licensing agency and filed by an applicant with the licensing agency to establish their personal qualification for the license, certificate, or permit applied for."
Dave Maass

Hacking the Future at DEF CON 30

2 days 2 hours ago

Over nearly three decades, DEF CON has grown and evolved into the world's largest hacker conference for computer security professionals, tinkerers, hobbyists, and more. The EFF staff will return to support the community at the Las Vegas summer security conferences—BSides LV, Black Hat USA, and DEF CON—for the first time since 2019, making the DC30 theme of "Hacker Homecoming" all the more appropriate. We may still be trying to escape the shadow of the pandemic, but the EFF staff will be available in person: masked, vaxxed, and ready to defend digital freedom for all.

Below is a listing of all of EFF's official appearances during DEF CON 30. If you attend the conference, be sure to stop by the EFF booth in the Vendor hall and catch up with us directly! Learn more about the latest in online rights, get on our action alert list, and take advantage of on-site-only EFF membership specials. This year EFF will present a limited-edition member t-shirt design for DC30, created in collaboration with artist Eddie the Y3t1 Mize and our multi-year t-shirt puzzle champions: @aaronsteimle, @0xCryptoK, @detective_6, and jabberw0nky of the Muppet Liberation Front.

Talks

"The Man" in the Middle (Virtual Presentation)
Friday, August 12 at 12:00 PDT at the Blacks In Cybersecurity Village
EFF Director of Engineering for Certbot Alexis Hancock

PSA: Doorbell Cameras have Mics, Too
Friday, August 12 at 12:00 in the Crypto & Privacy Village
EFF Policy Analyst Matthew Guariglia

Reproductive Justice in the Age of Surveillance
Friday, August 12 at 15:30, Forum Room 133
Speakers: EFF Staff Technologist Daly Barnett, Kate Bertash, EFF Director of Federal Affairs India McKinney, and EFF Legal Director Corynne McSherry.

Brazil Redux: Short Circuiting Tech-Enabled Dystopia with The Right to Repair
Saturday, August 13 at 10:00, Track 1
Speakers: Joe Grand, EFF Legal Director Corynne McSherry, Paul Roberts, Louis Rossmann, Kyle Wiens

Literal Self-Pwning: Why Patients—and Their Advocates—Should Be Encouraged to Hack, Improve, and Mod Med Tech
Saturday, August 13 at 10:00, Track 1
Speakers: EFF Special Advisor Cory Doctorow, Christian "quaddi" Dameff MD, and Jeff “r3plicant” Tully MD

Drones and Civil Liberties
Sunday, August 14 at 12:00, Aerospace Village
Speaker: EFF Director of Consumer Privacy Engineering Andrés Arrieta

And don't miss Ask the EFF at BSides Las Vegas earlier in the week! The panel is on Tuesday, August 9 at 14:00 with speakers EFF Director of Consumer Privacy Engineering Andrés Arrieta, EFF Senior Staff Technologist Bill Budington, EFF Deputy Executive Director and General Counsel Kurt Opsahl, Stanton Legal Fellow Mukund Rathi, and EFF Staff Attorney Hannah Zhao.

Meetups

Meet the EFF
Saturday, August 13 at 20:00-22:00, Forum Room 410
Speakers: EFF Director of Consumer Privacy Engineering Andrés Arrieta, EFF Staff Technologist Daly Barnett, EFF Senior Staff Technologist Bill Budington, EFF Policy Analyst Matthew Guariglia, EFF Deputy Executive Director and General Counsel Kurt Opsahl, Stanton Legal Fellow Mukund Rathi, and EFF Staff Attorney Hannah Zhao.

Contests

EFF Tech Trivia
Friday, August 12 at 17:00-19:00, Main Contest Stage
Hosted by EFF Deputy Executive Director and General Counsel Kurt Opsahl and EFF Senior Staff Technologist Cooper Quintin

Betting on Your Digital Rights: EFF Benefit Poker Tournament
Friday, August 12 at 12:00-15:00 in Bally's Poker Room
Hosted by Tarah Wheeler with emcee Jen Easterly.

Looking for Help?

As in past years, EFF staff attorneys will be present to help support the community. If you have legal concerns regarding an upcoming talk or sensitive infosec research that you are conducting at any time, please email info@eff.org. Outline the basic issues and we will do our best to connect you with the resources you need.

Read more about EFF's work defending, offering legal counsel to, and publicly advocating for technologists on our Coders' Rights Project page.

Aaron Jue

Victory! Federal Court Upholds First Amendment Protections for Student’s Off-Campus Social Media Post

5 days ago

EFF intern Emma Plankey contributed to this blog post.

Students should not have to fear expulsion for expressing themselves on social media after school and off-campus, but that is just what happened to the plaintiff in C1.G v. Siegfried. Last month, the Tenth Circuit Court of Appeals ruled the student’s expulsion violated his First Amendment rights. The court’s opinion affirms what we argued in an amicus brief last year.

We strongly support the Tenth Circuit’s holding that schools cannot regulate how students use social media off campus, even to spread “offensive, controversial speech,” unless they target members of the school community with “vulgar or abusive language.”

The case arose when the student and his friends visited a thrift shop on a Friday night. There, they posted a picture on Snapchat with an offensive joke about violence against Jews. He deleted the post and shared an apology just a few hours later, but the school suspended and eventually expelled him.

The Tenth Circuit first noted that these facts closely mimic those in Mahanoy v. B.L., a recent Supreme Court case that protected a student from suspension after she posted a picture to Snapchat with the caption “fuck cheer.” The Mahanoy Court explained that when students speak off campus, schools don’t have the same educational interests in regulating that speech. Social media does not change that. EFF argued to the Tenth Circuit that, if anything, courts should especially protect social media speech, because it is central to young people’s communication and activism.

The Tenth Circuit held the First Amendment protected the student’s speech because “it does not constitute a true threat, fighting words, or obscenity.” The “post did not include weapons, specific threats, or speech directed toward the school or its students.” While the post spread widely and the school principal received emails about it, the court correctly held that this did not amount to “a reasonable forecast of substantial disruption” that would allow regulation of protected speech.

Mukund Rathi

The UK Online Safety Bill Attacks Free Speech and Encryption

5 days 3 hours ago

The UK government has had more than a year to revise its Online Safety Bill into a proposal that wouldn’t harm users’ basic rights. It has failed to do so, and the bill should be scrapped. The current bill is a threat to free expression, and it undermines the encryption that we all rely on for security and privacy online. 

The government intended to advance and vote on the Online Safety Bill last month, but the scheduled vote was postponed until a new Prime Minister of the UK can be chosen. Members of Parliament should take this opportunity to insist that the bill be tossed out entirely. 

Subjective Standards for Censorship 

If the Online Safety Bill passes, the UK government will be able to directly silence user speech, and even imprison those who publish messages that it doesn’t like. The bill empowers the UK’s Office of Communications (OFCOM) to levy heavy fines or even block access to sites that offend people. We said last year that those powers raise serious concerns about freedom of expression. Since then, the bill has been amended, and it’s gotten worse. 

People shouldn't be fined or thrown in jail because a government official finds their speech offensive. In the U.S., the First Amendment prevents that. But UK residents can already be punished for online statements that a court deems "grossly offensive," under the 2003 Communications Act. If the Online Safety Bill passes, it would expand the potential scope of such cases. It would also significantly deviate from the new E.U. internet bill, the Digital Services Act, which avoids transforming social networks and other services into censorship tools.

Section 10 of the revised bill even authorizes jail time—up to two years—for anyone whose social media message could cause "psychological harm amounting to at least serious distress." The message doesn't even have to cause harm. If the authorities believe that the offender intended to cause harm, and that there was a substantial risk of harm, that's enough for a prosecution. There's also a separate crime of transmitting "false communications," punishable by fines or up to 51 weeks of imprisonment.

The problem here should be obvious: these are utterly subjective criteria. People disagree all the time about what constitutes a false statement. Determining what statements have a "real and substantial risk" of causing psychological harm is the epitome of a subjective question, as is who might have a "reasonable excuse" for making such a statement. The apparent lack of legal certainty casts doubt on whether the UK's Online Safety Bill meets international human rights standards.

The few exceptions in the section appear to be carve-outs for large media concerns. For instance, recognized news publishers are exempt from the section on communications offenses. So is anyone "showing a film made for cinema to members of the public."

The exceptions are telling. The UK’s new proposed censors at OFCOM are making it clear they’ll never enforce against corporate media concerns; it’s only small media creators, activists, citizen journalists, and everyday users who will be subject to the extra scrutiny and accompanying punishments. 

Online platforms will also face massive liability if they don't meet OFCOM's deadlines for removing images and messages relating to terrorism or child abuse. But it is extremely difficult for human reviewers to correctly distinguish between activism, counter-speech, and extremist content. Algorithms do an even worse job. When governments around the world pressure websites to quickly remove content they deem "terrorist," it results in censorship. The first victims of this type of censorship are usually human rights groups seeking to document abuses and war. And whilst the bill does require that online service providers consider the importance of journalistic freedom of expression, the safeguards are onerous and weak.

Another Attack on Encryption

The bill also empowers OFCOM to order online services to “use accredited technology”—in other words, government-approved software—to find child abuse images (Section 104). Those orders can be issued against online services that use end-to-end encryption, meaning they currently don’t have any technical way to inspect user messages. This part of the bill is a clear push by the bill’s sponsors to get companies to either abandon or compromise their encryption systems. 

Unfortunately, we've seen this pattern before. Unable to get public support for the idea of police scanning every message online, some lawmakers in liberal democracies have turned to work-arounds. They have claimed certain types of encryption backdoors are needed to inspect files for the worst crimes, like child abuse. And they've claimed, falsely, that certain methods of inspecting user files and messages, like client-side scanning, don't break encryption at all. We saw this in the U.S. in 2020 with the EARN IT Act, last year with Apple's proposed client-side scanning system, and this year with a similar system proposed in the E.U.
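To make concrete why client-side scanning is at odds with end-to-end encryption, here is a deliberately simplified sketch. It reflects no particular vendor's or government's actual design; the blocklist, the hash choice, and the reporting hook are all illustrative assumptions. The point is only where the inspection happens: on the device, before encryption.

```python
import hashlib

# Hypothetical, externally supplied list of content fingerprints.
BLOCKLIST = {hashlib.sha256(b"example prohibited image bytes").hexdigest()}

def fingerprint(message: bytes) -> str:
    # Real proposals use perceptual hashes for images; a plain
    # cryptographic hash stands in here to keep the sketch simple.
    return hashlib.sha256(message).hexdigest()

def report_match(message: bytes) -> None:
    # Stand-in for reporting to the provider or the authorities.
    print(f"match reported before encryption ({len(message)} bytes)")

def send(message: bytes, encrypt) -> bytes:
    # The scan runs on the client *before* the message is encrypted, so
    # "end-to-end encrypted" no longer means only the recipient can learn
    # anything about the message: whoever controls BLOCKLIST can, too.
    if fingerprint(message) in BLOCKLIST:
        report_match(message)
    return encrypt(message)
```

Nothing in this design limits the blocklist to child abuse images; whoever supplies the list decides what gets reported.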

These types of systems create more vulnerabilities that endanger the rights of all users, including children. Security experts and NGOs have spoken clearly about this issue, and asked for the anti-encryption sections of this bill to be withdrawn, but the bill’s sponsors have unfortunately not listened. 

If it passes, the censorious, anti-encryption Online Safety Bill won’t just affect the UK—it will be a blueprint for repression around the world. The next UK Prime Minister should abandon the bill in its entirety. If they won’t, Parliament should vote to reject it.

Joe Mullin

Virtual Vegas Member Week 💻💀⚡️

6 days ago

EFF is celebrating the spirit of Las Vegas hacker summer camp this week and—whether you're online or in person—you're invited to support the movement for digital freedom. Technology touches more of your life every day. Whether you’re telling a friend you’re on the way or you’re finding a new doctor, the modern world makes online access increasingly necessary. But law and policy often lag behind, and even threaten, your freedom online. That’s the reason why EFF exists. And you can help us defend the rights of digital creators, security researchers, and technology users everywhere.

Join EFF!

The EFF crew is celebrating a return to the Las Vegas hacker conferences—BSidesLV, Black Hat, and DEF CON—after two long years. It’s a meaningful way to reconnect with the grassroots curiosity and creativity that we need in tech. Technology users can and must do better than an internet dominated by a handful of corporations, and together we make it possible.

For the first time, EFF's annual mystery-filled DEF CON t-shirt is available online AND in Las Vegas, but it won't last long. This tongue-in-cheek Extremely Online T-Shirt is an expression of our love for the internet and the people who make it great. Many thanks to artist and hacker favorite Eddie the Y3t1 Mize for this collectors' edition artwork, and to our multi-year t-shirt puzzle champions @aaronsteimle, @0xCryptoK, @detective_6, and jabberw0nky of the Muppet Liberation Front for creating the puzzle. Give it a try!

EFF is proud to support technology users everywhere by defending the rights of creators and security researchers. Sometimes this means opposing flawed copyright laws like the DMCA, dissecting the devastating effects of the Computer Fraud and Abuse Act, sharing information about security threats, championing web encryption, and even trying to fix the broken patent system that hinders so many innovators.

EFF runs on public support and we’re grateful to the members around the world who make all of our work possible. Thank you for standing on the side of tech users.

Support the Cause

Aaron Jue

Abortion Information Is Coming Down Across Social Media. What Is Happening and What Next.

1 week 5 days ago

Reports have surfaced about the removal of information about abortion from social media. Unfortunately, none of it is unprecedented. Platforms like Facebook and Instagram have long maintained broad and vague community standards that allow them to remove content with little recourse.

What Is Happening

As reported by Vice and followed up on by Wired, posts about abortion receive intense scrutiny online. The difference, one activist told Vice, is simply that more people are seeing their posts removed than before.

Vice found that the truthful sentence “abortion pills can be mailed” triggered a flag as violating Facebook’s rules about “buying, selling, or exchanging non-medical drugs.” A moderator running a group on Facebook connecting people seeking information about abortions told Wired she has always had to carefully monitor posts to avoid the group being removed entirely, with clear rules about what can be posted—any links are banned, for instance.

The moderator expressed a frustration we’ve heard constantly about community guidelines: that they have no idea what the lines actually are and find things suddenly shifting with no warning.

In the wake of the COVID-19 pandemic, social media platforms tightened up enforcement of rules surrounding medical information, making their automated systems and human reviewers arbiters of truth. Their rules also ban the buying, selling, or gifting of pharmaceuticals (it’s this rule that the posts with the sentence “abortion pills can be mailed” ran afoul of).
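As a toy illustration of how this kind of automated rule misfires, consider a keyword match of the sort that might sit behind a "buying, selling, or exchanging non-medical drugs" policy. The rule and the examples below are invented stand-ins, not Facebook's actual system, which is proprietary; the point is that surface-level matching cannot see intent or context.

```python
# Invented stand-in for an automated drug-sale rule. Real moderation
# systems are far more complex, but keyword-style matching shares the
# same core weakness: it matches the surface of the text, not intent.
SALE_KEYWORDS = {"pills", "mailed"}

def flags_post(post: str) -> bool:
    words = {w.strip(".,!?").lower() for w in post.split()}
    return SALE_KEYWORDS.issubset(words)

print(flags_post("Abortion pills can be mailed."))            # True: factual information
print(flags_post("Selling pills cheap, mailed discreetly."))  # True: an actual sale
```

Both posts trigger the same rule; the filter cannot tell truthful health information from a drug sale.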

Furthermore, during the pandemic, Facebook removed posts at the request of states’ attorneys general related to the “promotion and sale of regulated goods and services.” In the context of abortion care and information, that precedent becomes especially dangerous.

Additionally, pretty much all social media platforms have some form of rule banning either "illegal" activity or the promotion thereof. What is unclear, in the wake of the Supreme Court's overturning of Roe, is whether those rules will now apply on a state-by-state basis. We don't know how companies will react, if they are even capable of doing state-by-state blocking (and if they will, even if they can), if they will comply with an obviously unconstitutional or illegal law until it is struck down, and how these companies will deal with the uncertainty of people seeking abortion care in states where it is still legal from states where it isn't. There is so much uncertainty here, and the companies do not have a history of providing clear guidance even in the best of circumstances.

These kinds of policies are also easily weaponizable by those seeking to silence those trying to share information and provide community support. All it takes is a few people reporting someone or a group in bad faith for posts to be removed and accounts banned. Even if eventually reinstated, the downtime means that people seeking help will not be able to find it.

What Should Companies Be Doing

At the risk of repeating ourselves, companies need to make their policies clear and consistent. Vague and broad community standards don’t provide true guidance to users about what they can and cannot say. This becomes especially true when the appeals process is broken.

Vice tested Facebook by posting “abortion pills can be mailed” several times, once “disagreeing” with the flag and once “agreeing” with it. (These are the two options given to users.) The posts disappeared and the user’s account was suspended for 24 hours, even though the post where they disagreed with Facebook’s assessment was eventually reinstated.

That’s a problem—losing access to an account for 24 hours even though it was determined no rule was broken is something easily weaponizable. It is also a problem that, in the case of Facebook, appeals are limited to clicking yes or no on a box and whether or not a human being is reviewing the case is unclear.

Clear, consistent policies with a functional appeals process would do a lot for people looking to share information online. However uncomfortable for them, companies need to take stands on behalf of their users. Inconsistent enforcement based on press attention or political pressure ill-serves everyone. 

Companies' transparency reports also need to start being broken down not just by country, but by state, where differences in state law are now making a difference. It's important to know which states' attorneys general and other law enforcement entities are making abortion-related asks of the companies. This will help us figure out how companies are reacting to various state laws. In the Facebook example above, we don't know which states' attorneys general asked Facebook to remove material. That information should be public.

We see transparency reports and policies that restrict access to material based on "local laws," but nothing more granular than national data is given.

Thinking Beyond Facebook

Policies like those held by Facebook, Instagram, and Twitter exist beyond social media. And when infrastructure is implicated they can be even more dangerous.

Services like Cloudflare could be pressured to disable access to websites with abortion information on them. ISPs could be pressured to cut off internet access to accounts providing information. Payment processors could prevent people from paying for abortion care. In addition to the grave consequences for speech and access to speech, there are extra consequences the further down the technical stack you go.

Removing posts is one thing; taking down an entire account or website, only part of which relates to abortion, is far more dangerous. And in the case of internet access, a cutoff does not merely prevent someone from speaking about abortion. It prevents them—and anyone in their household—from working from home, from remote schooling, and from connecting with their family.

Amazon's AWS—the cloud computing service that dominates website hosting—has a policy allowing it to disable "illegal" content or content in conflict "with applicable law or any judicial, regulatory or other governmental order or request," giving government entities a broad ability to get websites providing abortion information taken down.

Similarly, Google prohibits users of Google Docs from "engaging in illegal activities or to promote activities, goods, services, or information that cause serious and immediate harm to people or animals." Many use Google Docs as a place to gather research and information and disseminate the link to it among their communities. Using Google Docs for abortion care then subjects users to the possible loss of a Google Account—meaning a loss of anything and everything you had entrusted to Google. Your emails, pictures, home videos, all gone.

And that doesn’t include the possibility that Google will report you to the authorities.

As users we find ourselves increasingly dependent on a handful of companies for internet access, hosting and sharing of resources, and communication. If we are booted from one, there are vanishingly few alternatives. Most Americans don’t have a choice in their internet service provider. If they are booted, they are offline. Google provides a suite of services to its users that, if lost, would be devastating. AWS could scrub the internet of many websites.

Optimally, we’d be able to choose. We’d be able to pick the social media site, ISP, or webhost that shared our values and had a commitment to standing up against unjust government demands. Instead, we are forced to try to figure out what is possible under the rules of these companies. So, we have to pressure them to stand up for us. To resist government pressure to remove information, users, or groups. To make the rules clear so we can share and access information. To serve us, who are their customers.

Katharine Trendacosta

Federal Preemption of State Privacy Law Hurts Everyone

1 week 6 days ago

There's a lot of discussion right now about how a federal privacy bill, the American Data Privacy and Protection Act (H.R. 8152), will affect state privacy laws. EFF has a clear position on this: federal privacy laws should not roll back state privacy protections. The ADPPA, as currently written, would override a broad swath of existing state laws and prevent states from future action on those areas, a structure called "preemption." We have expressed disappointment and called on Congress to do better.

The debate around the ADPPA's preemption provisions has centered largely on whether the bill is stronger or weaker than current state privacy laws, and thus whether it would lower the bar for the country right now. But that's only part of the issue: we must also look to the future. The ADPPA's current preemption language is bad for everyone in the country—not only those who happen to live in one of the states that have passed data privacy statutes.

Flattening Many Existing Privacy Laws

At least five states have enacted comprehensive consumer data privacy statutes in the past few years: Connecticut, Utah, Colorado, Virginia, and California. Like the ADPPA, these laws govern how companies can collect, use, store, or share data, and they allow people to access, delete, or stop the sale of their data. EFF wanted more from these laws, but they nonetheless demonstrate the ongoing commitment of state legislators to protect their residents' data privacy. Some provisions of these state laws are stronger than parallel features of the ADPPA. But the ADPPA would preempt them all.

Of course, current state and local protection of data privacy extends far beyond these recent comprehensive statutes. For example, ADPPA would roll back rights to data privacy that states have enshrined in their state constitutions. Based on the text of the current bill, ADPPA also endangers state privacy rules that address just some types of businesses, such as broadband providers or data brokers.

This stops states from acting in areas where we have seen some recent gains. State legislation often moves in waves: a strong statute in one state will inspire lawmakers in other states to follow suit. For example, Illinois' Biometric Information Privacy Act, passed in 2008, prompted Texas, Washington, and New York City to pass laws addressing biometric privacy (though Illinois' is by far the strongest). And, as concern about biometric data collection and use has grown in recent years, lawmakers in Maine, Maryland, and Montana—wishing to see those protections for their own communities—have stepped up to try and replicate this gold-standard law.

Under ADPPA’s preemption, Illinois will get to keep its biometric privacy law, but no other state or city will be able to keep or pass similar, or even identical, legislation to protect their own constituents. Furthermore, the ADPPA doesn't grant equivalent protection to the rest of the country: Illinois requires opt-in consent to collect or transfer biometrics, and provides a strong private right of action, while ADPPA does not. Illinois keeps its law, but everyone else loses out.

There's strong precedent for federal privacy laws to serve as a floor but not a ceiling. For example, while every person in the United States enjoys the medical privacy protections of the federal Health Insurance Portability and Accountability Act (HIPAA), states can keep their existing, stronger laws and retain the ability to make protections stronger. Several have done so, including New York, Texas, Washington, and Louisiana, giving their residents additional needed protections. States have been able to react more quickly than Congress to emergent problems. Many other federal privacy laws, such as the Fair Credit Reporting Act, also take this approach.

Some states don't currently have data privacy laws, and their residents would benefit from a federal baseline. But there must be room for states to build on that federal foundation. We can't just do the minimum and call it a day. The ADPPA alone doesn't fix all the problems we face right now.

Freezing Further Action

Today’s ADPPA also does not fix all the currently unknown problems we are sure to face in the future. Congress is not nimble and often does not react to privacy concerns in a timely way. The last comparable chance to pass federal privacy legislation was in 2011, eleven years ago. That's the year Uber launched nationally. It's the year before Facebook went public. It predates the Apple Watch, consumer augmented reality, and products from companies such as TikTok, Slack, and Zoom.

Each of those developments has changed the privacy landscape, introducing new wrinkles and angles to consider when it comes to the legislation of privacy. Policymakers have to be able to react to changes. When it comes to privacy in the United States in the past decade, we've seen states lead the conversation—in many cases, acting as the impetus to address these issues at the federal level.

Big technology firms have fought tooth-and-nail to stop strong privacy laws at every level. They have only recently begun expressing some openness to federal legislation because of activity in the states. While working to stop states from passing strong legislation, they have also advocated for federal preemption to stop this so-called "patchwork" of state laws—precisely because those state laws are working.

The ability to pass bills at the state and local level is one of the strongest points of leverage that people have in the fight for data privacy. It is exciting that, at long last, there is bipartisan and bicameral agreement that there must be a federal privacy law to protect consumers. We ask that, in crafting that law, Congress does not compromise our privacy rights by undercutting the very state-level action that got us to this point in the first place.

Hayley Tsukayama

Americans Deserve More Than The Current American Data Privacy Protection Act

2 weeks 3 days ago

EFF is disappointed by the latest draft of the American Data Privacy and Protection Act, or the ADPPA (H.R. 8152), a federal comprehensive data privacy bill. The bill passed the U.S. House Energy and Commerce committee on Wednesday, and is headed to the House floor.

We have been closely monitoring the progress of this bill, and carefully watched how negotiations played out. EFF last month sent a public letter to Congress seeking improvements to a prior version of this bill—many of these suggestions still stand. There were many changes to the bill earlier this week, and we are still evaluating the new language.

We have three initial objections to the version that the committee passed this week. Before a floor vote, we urge the House to fix the bill and use this historic opportunity to strengthen—not diminish—the country's privacy landscape now and for years to come.

The Bill Squashes Existing State Protections—and Freezes Them

The bill would override many kinds of state privacy laws. This is often called “preemption.” EFF opposes rolling back state privacy protections to meet a lower federal standard. We were troubled by this week’s committee vote against Rep. Eshoo’s proposed amendment, which would have ensured the bill serves as a baseline federal standard that states can build upon, and not a ceiling that states are banned from exceeding. Many advocates have long opposed preemption and several state Attorneys General recently told Congress that the bill as written harms their ability to protect the public.

ADPPA's preemption doesn't only steamroll state data privacy statutes, such as California's Consumer Privacy Rights Act. It also apparently rolls back protections in a number of other areas, even rights to privacy that states have seen fit to enshrine in their state constitutions. Based on the text of the current bill, endangered state privacy rules include those for biometric information (apart from face recognition), genetic data, broadband privacy, and data brokers—or "third-party collecting entities," as the ADPPA refers to them.

The preemption clause of the bill also means that there can be no forward progress at the state level on many key consumer issues. While it's exciting that Congress is considering consumer privacy legislation after literal decades of spinning its wheels, the ADPPA, as written, stops states from innovating on these issues. But states have been the engine for movement on privacy for years. Indeed, states have long been the “laboratories of democracy.”

EFF wants Congress to set a baseline for privacy protections. But the ADPPA should not trade away states' ability to react in the future to current and unforeseen problems. 

The Bill Steps Backward on Federal Telecommunications Regulation

The bill frees the telecommunications companies from complying with, and blocks the Federal Communications Commission (FCC) from enforcing, an important federal privacy law. The same is true for existing federal privacy laws that now apply to cable and satellite TV. The price of new privacy protections should not be the elimination of old privacy protections.

AT&T a few years ago violated this law by disclosing sensitive customer location data without customer consent (leading to an EFF lawsuit against AT&T). Under the current version of the ADPPA, the FCC would lose the ability to enforce the privacy provisions of the 1934 Communications Act. Instead, the Federal Trade Commission would pick up this area of regulation under a different set of standards. While this probably appeals to companies that only want to deal with one regulator on the beat, EFF urges that the ADPPA be amended to let both regulators enforce their respective privacy rules. Congress must not remove telecommunications companies from the scrutiny of expert federal regulators with a deep understanding of the industry. 

The Bill Needs Stronger Individual Rights to Fight Back

EFF long has argued that data privacy bills must include strong private rights of action, which allow people to sue companies that violate their privacy. But the private right of action in the ADPPA is riddled with exceptions and limits. A strong private right of action is necessary to ensure effective enforcement of privacy laws. Otherwise, the bill has no teeth.

Several privacy statutes have private rights of action. If a company fails to contain toxic waste, you rightly expect to be able to sue them for contaminating the drinking water. Consumer data privacy should be no different in this regard.

Many companies hate private rights of action: they don’t want you to have your day in court. So they have fought against them in statehouses from coast to coast. We have heard reports that with the current version of the ADPPA, some members of Congress are seeking to reach a compromise with those representing business interests. But, as a group that advocates for the interests of technology users and the general public, EFF seeks numerous changes to make sure the private right of action is workable for everyone injured by corporate violations of the new law. We have communicated those concerns to Congress.

For example, Congress must provide adults with protections against pre-dispute arbitration agreements. AT&T evaded EFF’s location data lawsuit by enforcing an arbitration agreement that our clients never read, because AT&T buried this needle in a haystack of fine-print legalese. So protection from forced arbitration is central to our approach to data privacy legislation. While the current version of ADPPA protects minors from forced arbitration, and protects adults bringing claims of gender violence, this is woefully inadequate.

The bill should also allow people to file suit as soon as it goes into effect—it currently has a two-year delay. Further, the bill denies private litigation as to many of the bill’s core protections, including data minimization, algorithmic transparency, and unified opt-out mechanisms.

People also should be able to recover liquidated damages and punitive damages. Moreover, the bill has a number of unnecessary and disruptive procedural hurdles before a suit can go forward, including requirements that consumers give prior notice and follow unusual steps, and a right for companies to fix problems and thereby duck penalties. Individual lawsuits are important, but often require people to first marshal substantial resources; each additional roadblock makes this remedy less accessible.

New Major Loopholes

We are also concerned about newly accepted amendments to the bill that address data flows between companies such as Clearview AI or ID.me and the government. Specifically, the bill may treat these companies as "service providers"—defined in the ADPPA as companies that collect or process information for government entities—giving them much more leeway than it should.

EFF has shined a light on the ways that such public-private partnerships leak data and violate privacy, and has called repeatedly for privacy legislation to address these relationships. The ADPPA should not give them free rein.

EFF urges Congress to strengthen the ADPPA. The people whose privacy we're trying to protect deserve no less. We realize that legislation requires compromises, and that the perfect must not be the enemy of the good. But lawmakers must not squander this opportunity by passing something insufficient that also prevents progress for years to come. 

Hayley Tsukayama

Police Are Still Abusing Investigative Exemptions to Shield Surveillance Tech, While Others Move Towards Transparency

2 weeks 5 days ago

How transparent are police about surveillance technology? It depends on where you look. When it comes to acceptable levels of secrecy around police tools, states have drawn their lines in very different places, resulting in some communities where it is much harder for the public to know what invasive tools are being used.

State public records laws are designed to provide residents a way to learn about their government's activities. They are variably effective in practice. It is not unusual for requests related to surveillance technologies to be ignored. Police departments have a few specific exemptions that they may apply to withhold or redact records, and they commonly apply these exemptions, like the one allowing "law enforcement techniques" to be withheld, in broad and sometimes completely inappropriate ways.

This variance has impacted EFF's ability to catalog known uses of cell-site simulators (CSS) and other surveillance equipment across the United States in the Atlas of Surveillance. A CSS (often referred to as an IMSI catcher or stingray) is an extra-sneaky and privacy-invasive type of police tech. The device acts like a legitimate cell phone tower to trick devices within a particular range into linking up to it. The CSS can then pinpoint the location of particular devices and sometimes harvest or alter sensitive information on them, like the numbers called, the duration of the call, and the content of sent messages. Law enforcement considers CSSs so secret that they have been known to throw out a criminal case rather than disclose that they used a CSS. You can learn more about CSS here.
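The lure works because handsets generally latch onto the strongest compatible signal they can find. The sketch below is purely conceptual; the signal values and the single-factor selection rule are simplifications chosen for illustration (real phones weigh several factors, and real simulators exploit protocol details beyond signal strength alone).

```python
from dataclasses import dataclass

@dataclass
class BaseStation:
    name: str
    signal_dbm: int   # higher (less negative) means a stronger signal
    legitimate: bool

def select_station(stations):
    # Simplification: received signal strength dominates the handset's
    # choice of which cell to camp on, so the loudest station wins.
    return max(stations, key=lambda s: s.signal_dbm)

nearby = [
    BaseStation("carrier tower, two blocks away", -95, True),
    BaseStation("cell-site simulator, parked outside", -60, False),
]
chosen = select_station(nearby)
print(f"{chosen.name} | legitimate: {chosen.legitimate}")
```

Because the simulator is close and loud, it wins the selection without the phone or its owner ever noticing.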

Although the Atlas tracks CSSs, much of our data is derived from a 2017 research project by journalist Kevin Collier, which in turn is partially based on even older data collected by the ACLU. For many years, the primary purveyor of CSS technology was Harris Corporation, but in June 2020, the company ceased selling these tools to law enforcement. That means that many agencies have begun purchasing new devices from other companies.

EFF worked with students at the Reynolds School of Journalism at the University of Nevada, Reno this spring to update and add more data to the Atlas of Surveillance. EFF staff co-led a class on public records laws, and students filed their own records requests to learn more about police use of CSS, as well as drones (also known as unmanned aerial vehicles). Their work demonstrated how departments across the country provide a range of responses to community members interested in how they’re being surveilled.

The UNR students used the Atlas of Surveillance to identify agencies that had previously been documented using Harris Corporation-brand CSS devices and targeted them with state-level public records requests. Because the agencies were already known to have purchased the tech, it was a fair assumption they would have responsive materials. Students requested very basic information about the company from which the tools were purchased and for how much–nothing that would jeopardize an investigation. Some students received dozens of records and others nothing at all.

The Philadelphia Police Department, for example, still hasn't sent student Autumn Oaks an acknowledgment of her request about the department's use of drones, more than three months after it was first sent.

From the San Bernardino County Sheriff (which EFF sued for rejecting public records requests related to cell-site simulators), student Gracie Gordon got records showing that the department turned to the North Carolina company Tactical Support Equipment for a new cell-site simulator system that cost $635,000.

The Chicago Police Department, which had previously used CSS, told student Samantha Welsh that there were no records to provide because they no longer use cell-site simulators. Meanwhile, Erie County, New York chose to respond by not responding at all. Despite a mandated five-day response period, the Sheriff has provided no acknowledgment of the request after five months.

Police departments in many states buy and use CSS and other invasive surveillance technologies, but the ways communities are able to learn about and discuss their use vary widely. For example, California requires policies and training materials to be posted publicly, and it has implemented requirements for police departments to provide regular public reports on surveillance and military-grade equipment. Another California law requires law enforcement agencies that use CSS to locate and track people's cell phones to post their policies online. But in Virginia, that same information can be withheld entirely from public view. And on the federal level, an exemption in FOIA that prohibits the disclosure of information on "law enforcement techniques" is a known obstacle to disclosure.

While some agencies were forthcoming with materials like contracts and invoices for purchases, others denied access by claiming an investigative techniques exemption. The exemption, which can be applied at the discretion of the agency, is designed to prevent disclosure of materials that would reveal techniques that the police say could impact their investigations. The exact outlines of the exemption vary from state to state, but in general, it allows an agency to withhold information about certain methods of investigation it uses, lest the disclosure of the records provide enough information for wanted and would-be criminals to evade the law.

EFF intern Melanie Mendez received such an exemption from Anne Arundel County in Maryland. The agency sent her purchasing records related to a purchase from KeyW Corporation worth more than half a million dollars, but it chose to redact the names and descriptions of the actual items purchased, citing the investigative techniques exemption.

Other students received more broadly applied versions of the exemption. Chesterfield, Virginia denied Gracie Gordon access to any of the policies or procedures used to govern the use of CSS. The Attorney General of Louisiana wrote to Melanie that, though the office had responsive records, "these records are considered investigative techniques."

EFF has long advocated for more transparency around the use of CSS, drones, and other surveillance technologies. EFF supports local ordinances that require disclosure and elected-body approval of surveillance technology, because these "Community Control Over Police Surveillance" (CCOPS) measures give the public a share in decision-making about a community’s acquisition and use of invasive surveillance technology. This allows a city to make better decisions about how resources are allocated, rather than leaving those decisions to police who are regularly lobbied by tech salespeople. It also helps ensure basic transparency about whether a tool has been purchased and what policies are in place to guide its proper use.

Police departments regularly acquire surveillance technology — including CSS as well as face recognition, drones, and license plate readers — with little public notice. A formal records request is often one of the only ways a community can learn that its local law enforcement has been using face recognition, cell-site simulators, or other invasive technologies. When policing agencies refuse to release basic information about surveillance technologies, applying secrecy unevenly through exemptions and redactions, communities are left with unequal abilities to question and hold accountable the agencies that should serve them.

Beryl Lipton

New Amendments to Intermediary Rules threaten Free Speech in India

2 weeks 5 days ago

EFF joined the Association for Progressive Communications (APC) and other digital rights organizations from around the world in urging the Indian government to withdraw its new amendments to the Intermediary Guidelines and Digital Media Ethics Code (2021 IT Rules).

EFF has already expressed its concerns about the IT Rules’ chilling effect on Internet users’ freedom of expression and privacy. The 2021 IT Rules compel significant social media intermediaries (those with more than 5 million registered users in India) to deploy “technology measures” to proactively monitor certain types of content, including child sexual abuse material and content that has previously been removed for violating the Rules. Proactive monitoring forces companies to deploy “automated tools” that monitor what users post and share online, and it inevitably relies on error-prone filters that undermine lawful online expression.
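To see why such filters are error-prone, consider a minimal sketch of hash-based re-upload filtering, one common kind of “automated tool” for catching previously removed content. This is our own illustration, not anything specified in the Rules; the blocklist digest and function names are hypothetical placeholders:

```python
import hashlib

# Hypothetical blocklist: SHA-256 digests of files previously removed
# for violating the Rules (placeholder value for illustration only).
BLOCKED_HASHES = {
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def is_blocked(upload: bytes) -> bool:
    """Flag an upload whose exact bytes match previously removed content."""
    return hashlib.sha256(upload).hexdigest() in BLOCKED_HASHES

# Two structural failure modes of exact matching:
# 1. Evasion: changing a single byte yields an entirely new hash, so
#    trivially altered copies of banned material pass straight through.
# 2. Overblocking: a journalist or researcher sharing the identical file
#    in order to report on it is flagged all the same; the filter has no
#    notion of context, intent, or legality.
```

Fuzzier techniques such as perceptual hashing catch some altered copies but raise the false-positive rate. No variant of automated matching can assess context or legality, which is why filtering mandates inevitably sweep in lawful expression.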

Online intermediaries face harsh penalties for failure to comply with the 2021 IT Rules, including a jail term of up to seven years.

Online intermediaries must also comply with strict removal timeframes: 36 hours to remove restricted content, and 72 hours to respond to government orders and requests for data. These windows do not allow providers enough time to assess the legality, necessity, and proportionality of each request.

New Obligations And New Roles

The latest amendments to the 2021 IT Rules include three major developments that put human rights at risk. They add new, burdensome due diligence obligations, expand the powers of the Grievance Officer, and envision the establishment of a new government-led Grievance Appellate Committee.

Under the new due diligence obligations, online intermediaries must “ensure compliance” with the IT Rules. Intermediaries are required both to inform users of the rules and to make sure users do not “host, display, upload, modify, publish, transmit, store, update or share” any of the restricted types of content. This actively pushes online intermediaries toward more proactive monitoring of online content, deepening the adverse impact on freedom of expression.

The 2021 IT Rules oblige intermediaries to appoint resident Grievance Officers to respond to user complaints and to government and court orders. The new amendments further expand the Grievance Officer’s powers: the Officer can now address issues related to user account suspension, removal, or blocking, as well as any user complaint about certain types of restricted content. The Rules set a short deadline for resolving user complaints, leaving users whose content is complained about little opportunity to obtain meaningful redress.

Additionally, the amendments envisage the establishment of a government-led Grievance Appellate Committee to hear appeals against the Grievance Officer’s decisions. The Committee would effectively have the power to overturn platforms’ content moderation decisions without any judicial assessment—contrary to the Manila Principles, which advocate that content be restricted only pursuant to a judicial order.


These overbroad, restrictive, and intrusive amendments tighten the rules for intermediary liability, further exacerbating the disproportionate intrusion on free speech in India. Proactive monitoring restricts user privacy and leads to the removal of legitimate speech, and it increases government involvement in content moderation in place of direct judicial oversight. All of this is happening in a context where companies such as Twitter are being depicted as having “lost intermediary status” for failing to comply with the IT Rules.

EFF and partners call on the Indian government to suspend the implementation of the 2021 IT Rules, withdraw the new amendments, and hold inclusive public consultations. 

To learn more about the legal trends affecting online intermediaries around the world, check out our recently published four-part series on the topic, which begins here.

The full text of EFF's submission to the Indian government, along with the list of signatories, is below:

Meri Baghdasaryan

Letter to the United Nations on Inclusive Civil Society Participation

2 weeks 6 days ago

11 July 2022

H.E. Ms. Faouzia Boumaiza Mebarki 
Chairperson
Ad Hoc Committee to Elaborate a Comprehensive International Convention on Countering the Use of Information and Communication Technologies for Criminal Purposes

Your Excellency,

We, the undersigned, represent NGOs accredited to the Ad Hoc Committee to Elaborate a Comprehensive International Convention on Countering the Use of Information and Communication Technologies for Criminal Purposes (hereafter, AHC). Thus far, many of us have participated in both the first and second sessions of the AHC, as well as the intersessionals, whether remotely or in-person. We have also provided our expertise through written and oral input. At this point, we write with regard to the question of hybrid participation, and the ability to make oral interventions during future substantive sessions. 

We welcome the AHC Secretariat’s information circular, dated 5 July, 2022, inviting multistakeholders to participate in the third session of the Ad Hoc Committee. In particular, we appreciate the opportunity for hybrid participation since it has been critical for diverse participation so far, allowing many of the NGOs accredited to the AHC to contribute to discussions in person or remotely. 

We also welcome the provision of guiding questions in advance to inform and drive the deliberations. However, in order to be able to meaningfully contribute to the AHC discussions, we request clarification and timely information on the specific time and approximate duration of the slots allotted for accredited NGOs to deliver their statements and make interventions in response to the guiding questions. This will allow us to prepare our inputs and to provide our expertise so that it is relevant to the discussions at-hand. To make participation more effective and meaningful, we respectfully recommend that accredited NGOs are given the opportunity to intervene on each cluster of questions.

Finally, as a means to promote wider and meaningful civil society participation, we respectfully request the re-opening of the accreditation process for stakeholders that missed the opportunity to register at the start of the AHC discussions.   

Thank you for your attention and understanding. We would be highly appreciative if you could kindly circulate the present letter to the Ad Hoc Committee Members and publish it on the website of the Ad Hoc Committee.

Yours sincerely,

  1. Access Now
  2. ARTICLE 19
  3. Association for Progressive Communications 
  4. Center for Democracy & Technology
  5. Data Privacy Brasil Research Association
  6. Derechos Digitales
  7. Electronic Frontier Foundation 
  8. Eticas Foundation
  9. Global Partners Digital 
  10. Hiperderecho
  11. Human Rights Watch
  12. Igarapé Institute
  13. IPANDETEC
  14. The Kenya ICT Action Network - KICTANet
  15. Privacy International
  16. Red en Defensa de los Derechos Digitales A.C.



Katitza Rodriguez

EFF Poker Tournament at DEF CON 30

3 weeks ago

The DEF CON hacking conference is back in Las Vegas for its 30th year and we’re going all in on internet freedom. Security expert Tarah Wheeler will host EFF's first charity poker tournament at Bally's Poker Room on Friday, August 12. The event features emcee Jen Easterly and more special guests! This is an official DEF CON 30 contest, but all are welcome to participate in some friendly competition to support civil liberties and human rights online.

Take a break from hacking the Gibson to face off with your competition at the tables—and benefit EFF! Your buy-in is paired with a donation to support EFF’s mission to protect online privacy and free expression for all. Play for glory. Play for money. Play for the future of the web. Seating is limited, so reserve your spot today.

SIGN UP!

FIND FULL EVENT DETAILS AND REGISTRATION

Tournament Specs: $100 Bally’s tournament buy-in, plus a donation of $250 or more to EFF to sign up. Rebuys are unlimited through level 6, each with a suggested on-site donation of $100. Levels last fifteen minutes, with blinds going up at each level. Attendees must be 21+.

Pre-Tournament Clinic: Have you played some poker before but could use a refresher on rules, strategy, table behavior, and general Vegas slang at the poker table? Tarah will run a poker clinic from 11 am-11:45 am just before the tournament. Even if you know poker pretty well, come a bit early and help out. Just show up and donate anything to EFF. Make it over $50 and Tarah will teach you chip riffling, the three biggest tells, and how to stare blankly and intimidatingly through someone’s soul while they’re trying to decide if you’re bluffing.

Want One More Reason to Play?

How about two? If you can knock out Tarah, she will donate $500 to EFF in your name. And you will have an opportunity to go toe-to-toe with poker pro Mike Wheeler, a gentleman with a special connection to this tournament—watch below!

[Embedded video: https://www.youtube.com/watch?v=Pg3lYUOp8Bo] Privacy info: this embed will serve content from youtube.com.

Aaron Jue

Nigerian Twitter Ban Declared Unlawful by Court

3 weeks ago

The Economic Community of West African States (ECOWAS) Court has ruled that a seven-month ban on Twitter by Nigerian authorities in 2021 was unlawful and infringed freedom of expression and access to media. The court, which is the judicial body of ECOWAS, a political and economic union of fifteen West African countries, has directed Nigeria to ensure that the unlawful suspension does not happen again, in an important decision for online rights across the region.

In June of 2021, Nigerian authorities directed internet service providers in Nigeria to block access to Twitter after the platform flagged and removed a tweet from Nigerian President Muhammadu Buhari for violating its rules. The deleted tweet was seen by some as a threat of genocide; Twitter labeled it abusive. The Nigerian government’s rationale for the ban was vague: the Minister of Information and Culture claimed the ban was a response to “persistent use of the platform for activities that are capable of undermining Nigeria's corporate existence.”

The ECOWAS Court consolidated several cases challenging the Twitter ban, brought by parties including the prominent Nigerian NGO Paradigm Initiative, Media Rights Agenda, the Centre For Journalism Innovation & Development, International Press Centre, Tap Initiative for Citizens Development, and four journalists represented by Media Defence. Along with Access Now and the Open Net Association, EFF filed a joint application for leave to appear as amicus curiae in the case against the ban brought by the Socio-Economic Rights and Accountability Project (SERAP). In the application, we argued that the suspension of Twitter was not based on any law or court order, nor was it clear what law the company had breached. The application also explained the rights contained in several legal instruments, including the Nigerian Constitution, the African Charter on Human and Peoples' Rights (ACHPR), the International Covenant on Civil and Political Rights (ICCPR), and the International Covenant on Economic, Social and Cultural Rights (ICESCR). In its decision, the Court agreed, ruling that the suspension unlawfully infringed on freedom of expression and access to information and the media, contrary to the ICCPR and ACHPR. The Court ordered the government to ensure that unlawful suspensions do not happen again; contradicting laws and policies must therefore be amended.

The ban was lifted in January of this year after Twitter agreed to some conditions, including registering its operations in Nigeria. But the seven-month ban was particularly troublesome for the country: Twitter is one of the main outlets Nigerians have to criticize their government, and around 20% of the population have an account on the platform. It has played a large role in political discourse in the country: in 2020, for example, activists used the platform to organize the country's largest protests in a decade, against police brutality.


Government bans or blocks of websites or social media platforms ripple out beyond the individual sites and countries affected, chilling speech across the internet. They intimidate those who wish to speak out elsewhere, either on other platforms or in other countries. They deprive people of the most powerful tools that exist for sharing information. Though circumvention techniques exist (and many Nigerian users reportedly accessed Twitter via VPNs and other methods despite the ban), internet shutdowns and large scale bans are repressive tools that violate online users’ rights to freedom of opinion and expression, as well as to peaceful assembly and association. We will continue to fight these unlawful and dangerous bans.

Jason Kelley

Self-Proclaimed Free Speech Platforms Are Censoring Nude Content. Here’s Why You Should Care

3 weeks ago

If their marketing is to be believed, self-avowed free speech maximalist sites like Parler—“where free speech thrives”—and Frank Speech—“the voice of free speech”—claim they will publish all user content. But in reality they prohibit many types of legal content, including legal sexual material. This restriction is all too familiar to queer communities, sex workers, and other marginalized groups, all of whom have experienced censorship of their perfectly legal content elsewhere.

Most sexually explicit and pornographic content is legal, and engaging with such content on social media platforms allows individuals to build communities and explore their identities. However, the moderation of sexual content has enabled social networks to become the arbiters of how people create and engage with nude content both offline and in the digital space. As a result of this flawed system, a crucial form of engagement for all kinds of users has been removed and the voices of people with less power have regularly been shut down.

These companies position themselves as free speech extremists, often for the purpose of rushing to the defense of hateful speech, but even for them, speech around sexuality is beyond the pale.

As private entities, these platforms have every right under U.S. law to censor lawful content, and the government can’t tell them what they must publish. But why position themselves as beacons of free speech while also hypocritically limiting content of a legal sexual nature and ignoring the importance of ensuring full access to free speech and expression?

For example, GETTR calls itself a “brand new social media platform founded on the principles of free speech, independent thought and rejecting political censorship and ‘cancel culture’,” yet a look at its Terms of Service (TOS) shows that user contributions must not contain any “sexually explicit, pornographic” content. Similarly, the TOS on Frank Speech prohibit “sexually explicit or pornographic material” from being posted on the site. Social networks like Parler avoid wholesale prohibitions but require sexual content to be tagged as NSFW (Not Safe for Work), limiting user access and free engagement.

Most sexual expression, even that which may colloquially be categorized as “pornographic” or “sexually explicit,” is protected under the First Amendment of the U.S. Constitution, which is, of course, the standard held out by many platforms as underpinning their commitment to free speech. Alternative social media platform Gab is explicit in noting that any “written expression that is protected political, religious, symbolic, or commercial speech under the First Amendment of the U.S. Constitution will be allowed on the Website,” and a brief look at the principles page on social network Minds shows that its content policy “is based on the First Amendment and governed by a community jury in order to minimize bias and censorship.” Yet, both Gab and Minds exclude legal sexual content from being posted or shared. And they are not alone in doing so.

When sexual content is restricted, the response is often one of apathy: the censored content is perceived as marginally illegal (so it should be disregarded), undesirable (so no one should care even if it is legal), or so frequently censored that protesting the restrictions seems futile. But sex workers, sexual freedom activists, and artists have experienced significant censorship of their legal expression across the web, and the hypocrisy of calling for more free speech while censoring sexual content is particularly noticeable on far-right sites like MeWe and Rumble, which currently hold themselves out as free speech purists. Even there, individuals from non-marginalized communities seldom see or experience the censorship of these marginalized expressions.

Has this always been the case? Well…yes. In fact, the TOS of these “free speech” platforms are often merely rolled-back versions of the moderation policies on major social platforms like Facebook and Instagram, where de facto blanket bans on nudity affect all kinds of users, from those posting photographs of a 102-year-old Little Mermaid statue to creators posting artwork depicting uncensored nipples. Most major platforms have been prolific in banning nudity from their sites, despite sexual and nude expression having broad protection in the United States and across the globe. In some scenarios, platforms have prioritized the interests of users over external pressure, such as when OnlyFans reversed its ban on adult content after significant pushback from users. Twitter and Reddit users, too, readily engage with content of a legal sexual nature and enjoy their right to free speech. Similarly, sex worker-run sites like PEEP.me and certain Mastodon servers take risks to defend sexual speech and the First Amendment right to free expression.

But most major platforms prohibit nude content, and self-proclaimed free speech proponents have almost universally endeavored to moderate, and censor, sexual content as well.

As we’ve said many, many, many times, the system of content moderation is broken—moderation policies are opaque, often arbitrary, and not applied evenly. And in censoring legal sexual content, these “free speech” platforms are not only sanitizing the digital space, but displaying a fundamental ignorance of what freedom of speech actually means. 

Paige Collings

Nominations Open for 2022 EFF Awards!

3 weeks ago

Nominations are now open for the 2022 EFF Awards! The nomination window will be open until August 2nd at 2:00 PM Pacific time. You could nominate the next winner today!

For thirty years, the Electronic Frontier Foundation presented awards to key leaders in the fight for freedom and innovation online. EFF’s annual Pioneer Award Ceremony celebrated the longtime stalwarts working on behalf of technology users, both in the public eye and behind the scenes. Honorees included visionary activist Aaron Swartz, human rights and security researchers The Citizen Lab, media activist Malkia Devich-Cyril, cyberpunk author William Gibson, and whistle-blower Chelsea Manning. We are forever grateful to all of our past Pioneers!

This year, we’re taking a new step to recognize the ways in which the digital world has fused with modern life. We invite you to celebrate the first annual EFF Awards.

The internet is not simply a frontier to conquer. It’s a necessity in modern life and a continually evolving tool for communication, creativity, and human potential. Together we carry—and must always steward—the movement to protect civil liberties and human rights online. Will you help us spotlight some of the latest and most impactful work towards a better digital future?

Remember, nominations close on August 2nd at 2:00 PM Pacific time!

GO TO NOMINATION PAGE

Nominate your favorite digital rights hero now!

After you nominate your favorite contenders, we hope you will consider joining us this fall to celebrate the work of the 2022 winners. If you have any questions or if you'd like to receive updates about the event, please email events@eff.org.

The EFF Awards depend on the generous support of individuals and companies with passion for digital civil liberties. To learn about how you can sponsor the EFF Awards, please email nicole@eff.org.

 

Hannah Diaz

EFF and Partners Urge the Indian Government to Keep End-to-End Encryption Alive

3 weeks 2 days ago

In a letter to the Indian government, EFF and partner digital rights organizations from around the world called on the Indian Ministry of Electronics and Information Technology to withdraw the so-called traceability requirement under its Intermediary Guidelines and Digital Media Ethics Code (2021 IT Rules). The Rules compel private end-to-end encrypted messaging services to enable the identification of the “first originator” of information on their platforms. EFF has already expressed its concerns about the IT Rules’ chilling effect on Internet users’ freedom of expression and privacy, including the traceability requirement that puts strong encryption in India under attack.

End-to-end encryption is vital for private and secure communications. And while the Indian Supreme Court introduced a necessity and proportionality test when it recognized the right to privacy as a fundamental right, the traceability requirement is a disproportionate measure: it breaks encryption and threatens the freedom of speech, privacy, and security of Indian people and businesses alike. It is therefore imperative to withdraw the traceability requirement from the 2021 IT Rules.
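To make the conflict concrete, here is a minimal sketch of end-to-end encryption using the PyNaCl library (our own illustration, not any messenger's actual implementation; the message and variable names are hypothetical). The point is that the relay server only ever handles opaque ciphertext, so it has no way to answer a demand of the form "who first originated this content?" without weakening the scheme itself:

```python
from nacl.public import PrivateKey, Box

# Each user generates a keypair; only public keys are ever shared.
alice, bob = PrivateKey.generate(), PrivateKey.generate()

# Alice encrypts directly to Bob; no key material touches the server.
ciphertext = Box(alice, bob.public_key).encrypt(b"meet at noon")

# The relay server's entire view of the exchange: opaque bytes.
# It cannot read the message, so it cannot recognize the content
# when ordered to identify that content's "first originator".
server_sees = bytes(ciphertext)
assert b"meet at noon" not in server_sees

# Only Bob, holding his private key, can decrypt.
assert Box(bob, alice.public_key).decrypt(ciphertext) == b"meet at noon"
```

To satisfy a traceability order, a provider would have to escrow keys, fingerprint plaintext before it is encrypted, or permanently attach originator identity to every forwarded message. Each option punches a hole in the security model sketched above, and for every user, not just suspects.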

The full text of the letter and list of signatories are below. 

Meri Baghdasaryan

Ring Reveals They Give Videos to Police Without User Consent or a Warrant

3 weeks 4 days ago

Amazon’s Ring devices are not just personal security cameras. They are also police cameras—whether you want them to be or not. The company now admits there are “emergency” instances when police can get warrantless access to footage from Ring personal devices without the owner’s permission. This dangerous policy allows police, in conjunction with Ring, to decide when access should be granted to private video. Footage is handed over in “cases involving imminent danger of death or serious physical injury to any person.”

The company has provided videos to law enforcement, without a warrant or device owner consent, 11 times already this year. This admission comes in response to a series of critical letters from Senator Ed Markey (D-MA). Markey chastised the company over many of the same privacy problems that EFF has brought up, including the far-reaching audio capabilities of Ring devices, and the company’s refusal to commit to not incorporate facial recognition technology into their cameras.


Police are not the customers for Ring; the people who buy the devices are the customers. But Amazon’s long-standing relationships with police blur that line. For example, in the past Amazon has given coaching to police to tell residents to install the Ring app and purchase cameras for their homes—an arrangement that made salespeople out of the police force. The LAPD launched an investigation into how Ring provided free devices to officers when people used their discount codes to purchase cameras.

Ring, like other surveillance companies that sell directly to the general public, continues to provide free services to the police, even though they don’t have to. Ring could build a device, sold straight to residents, that ensures police come to the user’s door if they are interested in footage—but Ring instead has decided it would rather continue making money from residents while providing services to police.

These cameras can exacerbate racial profiling. They can also make people feel more paranoid, rather than more secure, because of the constant alerts the device is capable of providing.

Even before this latest admission, Ring had faced controversies over the way it facilitates police access to user footage. Ring had enabled police to send bulk requests directly to many device owners over a large area, and police did so at a staggering level: in 2020, for example, police requested videos over 20,000 times. In 2021, however, Ring caved to activist pressure and changed how police send requests, requiring them to post requests publicly to the Neighbors app, which shed important light on the practice.

The “emergency” exception to this process allows police to request video directly from Amazon, without a warrant. But the process has insufficient safeguards to protect civil liberties: for example, there is no mechanism for a judge or the device owner to determine whether there actually was an emergency. This could easily lead to police abuse, as there will always be a temptation to invoke the exception in ever less urgent situations.

Sen. Markey also raised concerns about Ring’s audio recording of people in public places, and asked Amazon for information about it. The company failed to clarify the distance from which Ring products can capture audio. Earlier this year, Consumer Reports revealed that Ring’s audio capabilities are more powerful than anyone anticipated, collecting conversation-level audio from up to 25 feet away. This has disturbing implications for people who walk, bike, or even drive by dozens of these devices every day, not knowing that their conversations may have been captured and recorded. The company also refused to commit to eliminating the default setting of automatically recording audio.

In addition, the company refused in its response to Sen. Markey to commit to making end-to-end encryption the default storage option for consumers—though it is available as an option as of 2021.

We thank Senator Markey for raising these issues. For too long, Amazon has not taken seriously the many civil liberties concerns with its Ring products. We hope the strong response to these latest admissions will help push Amazon to make privacy overhauls. The company must consider the danger these products pose to the public by creating a growing web of surveillance systems that are owned by individuals, but are de-facto operated by law enforcement. 

Jason Kelley

EFF and ACLU File Amicus Brief Objecting to Warrantless, Suspicionless Electronic Device Searches at the Border

3 weeks 5 days ago

For the past couple of decades, EFF has argued that when it comes to suspicionless and warrantless searches at the border, electronic devices like cell phones are not the same as a piece of luggage. Although certain searches at the border are permitted without a warrant, judges have called the search of a digital device while crossing into or out of the United States “highly intrusive” and found that it impacts the “dignity and privacy interests” of travelers. Digital device searches therefore should not fall within the “border search” exception to the Fourth Amendment. And yet, border officers continue to routinely conduct warrantless, suspicionless searches of electronic devices.

That is exactly what happened in the case of Haitao Xiang. Mr. Xiang was under investigation by the FBI and Monsanto while he was living in St. Louis. But instead of applying for a warrant, government officials waited until he was traveling internationally to seize his electronic devices and search them without one.

EFF and the ACLU have filed an amicus brief in the case arguing against warrantless, suspicionless border searches of electronic devices like Mr. Xiang’s. As the brief states:

“Individual privacy interests are at their zenith in devices such as cell phones and laptops, even at the border. Prior to the rise of mobile computing, the “amount of private information carried by international travelers was traditionally circumscribed by the size of the traveler’s luggage or automobile.” U.S. v. Cotterman, 709 F.3d 952, 964 (9th Cir. 2013) (en banc). Today, however, the “sum of an individual’s private life” sits in the pocket or purse of any traveler carrying a cell phone, laptop, or other electronic device. Riley, 573 U.S. at 394.”

It is exactly for this reason that border protection officers and customs officials must meet a higher standard when they want to search an electronic device.

EFF will continue to fight for the belief that digital rights do not end simply because a person approaches the border.

Matthew Guariglia

Impact Litigation in Action: Building the Caselaw Behind a Win for Free Speech

3 weeks 6 days ago

A recent District Court decision in In re DMCA 512(h) Subpoena to Twitter, Inc. is a great win for free speech. The Court firmly rejected the argument that copyright law creates a shortcut around the First Amendment’s protections for anonymous critics. In the case, a company tried to use copyright law and the DMCA processes to force Twitter to identify an anonymous critic of private equity billionaire Brian Sheth. Twitter thankfully resisted the demand for its user’s identity, and EFF filed a supporting amicus brief.

The win is not only good for those who would speak up against the powerful, it’s also a great example of how EFF’s patient work in the courts over many years can pay off. In fact, we were pleasantly surprised by the number of cases that EFF has been involved in that were relied upon by the Court.

By our count, the order cites eight cases that EFF participated in, either as counsel or as amicus curiae—and quite a few where we filed multiple briefs, participating at multiple levels of the judicial system. Not bad for an opinion that runs barely 15 pages long. The cases stretch across the issues, with several on copyright-specific John Doe cases (In re Verizon; In re DMCA Section 512(h) Subpoena to Reddit), a couple on copyright itself (Google v. Oracle; Kelly v. Arriba Soft), and some focusing on fair use (Lenz v. Universal; Dr. Seuss v. ComicMix). There are also a couple where we helped behind the scenes but ultimately did not file a brief. 

All in all, the decision cites over 17 years of work by EFF lawyers, who helped carefully build the legal foundations that the Court relied upon. The earliest cases cited by the Court were decided in 2003, and the latest in 2021. We weren’t alone in this—our friends at Public Citizen had a huge role in developing this caselaw, as did the ACLU, EPIC, and many private attorneys over the years. But EFF’s mark is unmistakable.

We’ve marked up the decision below with links to our work, so you can see for yourself.

Of course, the Court made its own decisions both in this case and in all of the others cited. And not all of the cases we participated in were ones where we won everything we wanted.

But this decision vindicates our commitment to helping the courts steer First Amendment, copyright, and anonymity law in the direction of supporting users who want to have their voice heard without being chilled or directly attacked, including by companies working to protect billionaires.

Brick by brick, case by case — and thanks to the stalwart support of EFF's loyal members — we make justice.

(Thanks to EFF Legal Intern Molly Buckley for help with this blogpost)

Related Cases: RIAA v. Verizon Case Archive; Lenz v. Universal; Kelly v. Arriba Soft; In Re DMCA Section 512(h) Subpoena to Reddit, Inc.; Google v. Oracle
Cindy Cohn

The Department of Defense Should Disclose When it Purchases User Data

3 weeks 6 days ago

Congress must pass the Jacobs-Davidson Amendment to the National Defense Authorization Act (NDAA), the yearly funding bill for national security and the military. It would require the Department of Defense to disclose, both to Congress and the public, information about its purchases of geolocation data collected from cell phones and of metadata about digital communications and internet usage.

This important amendment would bring more transparency to a glaring and growing privacy problem. The applications people download onto their smartphones often collect geolocation data for the purpose of selling it to marketing companies, advertisers, and others. With all of this very personal data sitting on an open market, the government has become an increasingly voracious customer. Law enforcement or intelligence agencies would normally have to go to a judge and get a warrant to acquire this information; purchasing it on the open market has become an unconstitutional way for the government to get around warrant requirements.

Unfortunately, there are insufficient statutes regulating who can collect and sell this information, and for what purpose. This is how U.S. military contractors ended up buying the location data of people who used a prayer app built specifically for Muslim users.

We urge Congress to approve this amendment to the NDAA. Transparency is necessary for us to learn how widespread this invasive practice has become. 

Matthew Guariglia