Surveillance in San Francisco: 2022 in Review

We love San Francisco. It’s EFF’s home. It’s often an example for other cities when it comes to technology and civil liberties. We helped make San Francisco the first city in the United States to ban government use of facial recognition, and one of the first to require community control over whether police can use surveillance technology.

Unfortunately, San Francisco took a wrong turn in 2022. Over the objections of many community groups, the Board of Supervisors passed temporary legislation allowing police to get live access to private security cameras to address any crime, no matter how minor. We’ll be back in 2024, when the ordinance sunsets, to demand that the city not restart this surveillance program.

The prolonged fight began in early 2022 with the threat of dueling ballot measures on whether to strengthen or weaken the surveillance control ordinance. A coalition came together, and the measures were withdrawn. Then the fight shifted to a newly proposed ordinance authorizing a specific surveillance power: the bill would allow police to request live access from the owner of any private security camera for up to 24 hours after an alleged crime, as well as during any “significant events.”

The SFPD’s proposal allowed the police to access thousands of private surveillance cameras, including those outside residences and businesses, as well as the massive surveillance camera networks of the many Business Improvement Districts and Community Benefit Districts in neighborhoods around the city. Before the new legislation, police could only request historical footage from these cameras. But this new proposal gave police the power to live-monitor “significant events”—defined to include any “large or high-profile event”—implicating people exercising their First Amendment rights during protests or religious gatherings. The concern was far from hypothetical: EFF and the ACLU of Northern California sued the city after the SFPD accessed a business district’s camera network to monitor protests for eight days following the police murder of George Floyd in the summer of 2020.

Unfortunately, in late September, the Board of Supervisors voted 7-to-4 to grant these new powers to the SFPD. The supervisors should have listened to community objections: the police department never clearly articulated a real situation in which this new power would serve public safety better than its existing powers.

Fortunately, this bill has a sunset provision: in 15 months, the SFPD loses its new power to access non-city cameras. That’s a partial win, based on opposition from many San Franciscans to any SFPD access to these cameras. So 15 months from now, we have another chance to put on our boots, dust off our megaphones, and fight like hell to protect San Franciscans from police overreach.

This article is part of our Year in Review series. Read other articles about the fight for digital rights in 2022.

Matthew Guariglia

Pushing for Strong Digital Rights in the States: 2022 in Review

EFF worked on bills in more than a dozen states this year, fighting for strong digital rights at the state level. Across the country, legislators focused on issues including medical privacy, biometric privacy, and the right to repair.

In California, EFF was proud to support three bills—A.B. 2091, A.B. 1242, and S.B. 107—that passed into law and take crucial first steps to make California a data sanctuary state for anyone seeking reproductive or gender-affirming care. Authored by Assemblymember Rebecca Bauer-Kahan, Assemblymember Mia Bonta, and California State Sen. Scott Wiener, these bills will protect people by forbidding health care providers and many businesses in California from complying with out-of-state warrants seeking information about reproductive or gender-affirming care. EFF also supported A.B. 2089, authored by Asm. Bauer-Kahan, which extended the protections of the California Confidentiality of Medical Information Act (CMIA) to information generated by mental health apps.

Not every privacy bill sailed through the California legislature, however. EFF sponsored two strong privacy bills in California this year to curb unnecessary data collection: the California Biometric Information Privacy Act (S.B. 1189) and the Student Test Takers’ Privacy Act (S.B. 1172). Unfortunately, S.B. 1189 was stopped before it could reach a floor vote in the Senate, after facing heavy business community opposition. And after S.B. 1172 was severely weakened by removing its primary enforcement mechanism—the individual right to sue or “private right of action”—neither EFF nor our co-sponsor Privacy Rights Clearinghouse could continue to support it.                                                                                                                                   

Of course, not all of our work happened in California.

Across the country, we continued to see a number of states moving to legislate on privacy. While California passed its privacy law in 2018, no state has built a similarly strong foundation for privacy rights since then.

In fact, it’s been quite the opposite. Last year, Virginia passed an empty privacy bill that protects consumers in name only. Unfortunately, two similar weak privacy bills passed this year in Utah and Connecticut. Virginia’s bill should not become a template for state privacy legislation, and we urge other states not to even use it as a starting point for their own bills. We have seen some promise in other states such as Oregon, where EFF has been participating in a work group convened by Oregon’s Attorney General to craft a better state privacy law.

State legislators often look to copy language or even whole bills from other states in order to harmonize legislation across state lines. As such, it’s common to see the same bill pop up in many states, particularly if it’s been successful elsewhere. This year, two bills emerged in statehouses that we believe we will have to fight in the coming year. The first is the “Equal Protection at Conception—No Exceptions—Act,” introduced this year in South Carolina—a bill that portends a broader and growing divide in how states treat reproductive justice issues. This bill, which is based on language published by the National Right to Life Committee, seeks to restrict what the state’s residents can read online about abortion. As EFF wrote in an op-ed in Scientific American, “As a consequence, it also threatens to restrict what all of us can say.” South Carolina’s legislature chose not to advance the bill, but we expect other states to introduce it in the coming year.

The second is California’s Age Appropriate Design Code (AADC), which was passed into law and creates new duties for “businesses that provide online services, products, or features that children are likely to access.” EFF has deep concerns about the vague language in the AADC, such as how businesses are supposed to act in the “best interests of children”—particularly as the law’s definition of children includes anyone under 18. We are also concerned that the language could prompt companies to require all users to verify their ages to access online services. The aim of the AADC is to protect children, and we respect that goal, but its drafting weaknesses are concerning enough that we urge other states not to use it as a starting point for their own bills.

On the flip side, there are bills that we’d like to see other states pick up. Right to repair legislation continues to gain momentum. In Colorado, the legislature passed a law that gives wheelchair users more freedom to fix their own chairs. And in New York, the legislature passed a landmark broad right to repair bill—the first of its kind in the country. This bill would make it easier for people to fix their own electronic devices and to choose whom they trust to repair them. We hope to see more states recognize the importance of this issue in their own legislatures.

We were also excited to work with Maryland State Senator Susan Lee to pass S.B. 134, which will require law enforcement agencies to learn, as part of their standard training, to recognize the common tactics of electronic surveillance and the laws around such activities. The bill originated from conversations between the Senator’s office and EFF Director of Cybersecurity Eva Galperin, based on her extensive work on “stalkerware”—commercially available apps that can be covertly installed on another person’s device to monitor their activity without their knowledge or consent. We’d love to see this bill replicated in other states as well.

Thank you to every person in every state who answered EFF’s calls to action by sending an email, picking up a phone, sharing a blog post, or speaking to a legislator about the issues that are important to you. Your voices are especially crucial in our state work, and every supporter on the ground in the states where we do work helps us be more effective.

This article is part of our Year in Review series. Read other articles about the fight for digital rights in 2022.

Hayley Tsukayama

An Urgent Year for Interoperability: 2022 in Review

Walled gardens can be great: we all like it when Stuff Just Works because a single company oversees all its elements. 

Walled gardens can be terrible: when all of our data, our social relations and our educational, romantic, professional and family ties are trapped inside a company’s silo and the company does something we don’t like, the garden walls become prison walls. 

Leaving Facebook or Twitter or Amazon or Google means leaving behind many important, valuable things, from media to data to social connections. These companies know it, which means that when our wellbeing comes at the expense of their profits, they’re tempted to sell us out, confident that we won’t leave. Far too often, large tech companies yield to that temptation.

That’s where interoperability comes in. From federated social media to alternative app stores, alternative clients, multiprotocol clients, and tracker-blockers, interoperable tools put you in charge of the technology you use. You can block the parts you don’t like—algorithmic feeds, bad moderation, privacy invasions—and keep the parts you do: contact with your friends, colleagues, and customers, and access to your data and the media and apps you’ve paid for.

2022 was a big year for interoperability. The global Right to Repair campaign gathered more momentum and even broke through, with the passage of Colorado’s Right to Repair law for wheelchairs and a New York Right to Repair bill for electronics (which still needs the governor’s signature).

But there were disappointments in the US, particularly the failure of Congress (so far) to vote on the ACCESS Act and the Open App Markets Act. The anti-interop coalition is powerful, but it is not united. Meta and Apple both fund the tech industry’s powerful anti-interop lobbying machine, but that doesn’t stop Meta from lobbying separately for interop with Apple’s mobile devices. This monopolist-on-monopolist violence reveals the deep fracture lines in the anti-interop forces.

Meanwhile, the public is sick to the back teeth of devices designed to control them and is crying out for third-party tools that unlock the value in the devices and products people own, from printers to luxury cars to motorcycles.

The anti-ripoff coalition is a broad ideological church: proponents of government regulation see these abuses as evidence that companies can’t be trusted to self-regulate, while regulation skeptics see abuses arising from a lack of competition and of the market’s power to discipline companies. Both sides can agree that taking away dominant companies’ right to decide who can compete with them, and how, is sorely necessary.

In Europe, regulators are leapfrogging their American cousins. With the passage of the Digital Markets Act, the EU put tech platforms on notice that they will be required to interoperate with small firms, co-ops, tinkerers, and other new market entrants.

We welcome the EU’s bold moves here, especially since the final DMA includes pro-interoperability amendments that we proposed, protecting “Adversarial Interoperability” (AKA “Competitive Compatibility” or “comcom”)—that is, connecting to an existing product or service without permission from its maker, such as browsing the web with an ad-blocker installed or using third-party ink in your printer.

Though we’re very happy about the DMA’s passage, we have grave concerns about its implementation. The EU has decided that the first targets for mandatory interoperability will include end-to-end encrypted messaging services like WhatsApp and iMessage. Billions of people all over the world rely on the integrity of these services, and a hasty interoperability mandate could endanger all kinds of people, everywhere.

We think social media is the right place to start the DMA’s work on interop: these services are much easier to federate (break into smaller, autonomous, interconnected servers run by lots of communities), and doing so would address many of the problems people have with monolithic social media platforms.

In a federated, interoperable social media world, you don’t have to tolerate harassment because quitting comes at a high personal and professional cost. In a federated world, you can quit a server whose management refuses to address your concerns and move to one with better moderation policies - and still stay connected to the people and communities that matter to you.

But it’s been more than a generation since large-scale federated social media was last in wide use—many people today have no recollection of Usenet, Fidonet, and other decentralized, federated systems.

That’s why we created “How to Ditch Facebook Without Losing Your Friends,” an animated slideshow and accompanying essay that explains how federated, decentralized social media services could interoperate with today’s legacy giants without sacrificing privacy or opening the door to harassment.

This couldn’t have been more timely. Interest in federated social media has exploded in the weeks since the chaotic change of ownership at Twitter, even as Meta’s bid to lure users from Facebook to the metaverse has accelerated the decline of Facebook, whose own technical staff are comically unexcited about the prospect of being transformed into legless cartoon characters.

As the world reconsiders the wisdom of entrusting our social selves to the unilateral judgments of unaccountable tech firms, there is an exciting opportunity for interoperability to step into the gap and protect us from a future of walled gardens that become prisons.

This article is part of our Year in Review series. Read other articles about the fight for digital rights in 2022.

Cory Doctorow

Users Worldwide Said "Stop Scanning Us": 2022 in Review

The online conversations that bring us closer together can help build a world that’s more free, fair, and creative. But talking to each other only works when the people talking have their human rights respected, including their right to speak privately. The best tool to defend that right in the digital world is end-to-end encryption. 

In 2022, we fought back against large-scale attempts by governments to undermine secure and private online speech. The U.S. Senate introduced a new version of the toxically unpopular EARN IT Act, a bill that would push companies to drop strong encryption by threatening the removal of key legal protections for websites and apps. EFF supporters spoke up, and the bill was stopped in the Senate again, though not before an unfortunate committee vote endorsing it.

In the U.K., Parliament debated an Online Safety Bill that would mandate that tech providers use “accredited software” to constantly scan users’ communications for illegal material. And an even larger threat emerged in the European Union, where the European Parliament is debating a regulation that could lead to mandatory government scanning of every private message, photo, and video.

All three of these proposals are pushed by law enforcement agencies in their respective jurisdictions, and all rest on the same reasoning: preventing child abuse. But constant surveillance doesn’t keep adults or kids safer. Minors, too, need private conversations with trusted adults, and they need devices without built-in backdoors.

We’ll continue to confront and oppose these proposals everywhere they pop up. We know that a world of broken encryption—whether it gets called a “ghost,” key escrow, or client-side scanning—is a world with less security and less privacy for everyone.

On Privacy, EU Leaders Consider A Huge Step Backwards 

The European Commission, the executive branch of the European Union, is pushing ahead with a proposal that would compel tech companies to inspect user messages, including messages currently protected by encryption. If this proposal passes, tech platforms flagged by police will be subject to “detection orders” that force them to provide private user messages and photos to governments.

We’re working with European partners to fight against this potentially disastrous “Chat Control” proposal. You can learn more on our joint website, Stop Scanning Me. Following our advocacy, the proposed regulation has already been rejected by the Austrian Parliament, and the German Federal Commissioner for Data Protection has called the proposal “incompatible with European values and data protection.” 

This proposal also has privacy-violating provisions that conflict with other aspects of European law, including the Digital Services Act. 

The debate in the European Parliament is still in its early stages. As more EU residents learn about this proposal, they’ll see that it’s wholly incompatible with their values. EFF will continue to lobby Members of the European Parliament (MEPs) to do the right thing. 

Law enforcement in democratic societies should not have access to unlimited, perpetual records of human conversation. The citizens they are sworn to protect do not want to live in a never-ending virtual line-up that does far more harm than good.

Learning More About Broken Scanning Systems 

We shouldn’t constantly scan people’s files, especially when those people aren’t reasonably suspected of crimes. One reason is that the scanners don’t work right. In August, the New York Times reported on two fathers who were falsely accused of child abuse based on Google’s scanning software. Google didn’t back down or reinstate their accounts, even after the fathers were cleared by police.

Because of Google’s false accusations, police could have chosen to investigate these fathers for unrelated crimes, like drug possession or even copyright infringement. False accusations can be even more harmful when they are sent to a community or nation with a corrupt or biased police force, or are leveled against a member of a disfavored minority group. 

Evidence is mounting that even scanning systems that limit their search to authorities’ databases of known images of child abuse don’t work right. LinkedIn and Facebook have both examined material that their automated systems flagged as child sexual abuse material, and found accuracy rates of less than 50%. Another recent report shows that only about 20% of the images that U.S. authorities referred to Irish police as child abuse material were accurate reports. 

Towards a More Private and Secure World

The solution isn’t more backdoors, more scanning, and endless suspicionless searches. It’s real privacy and security, including end-to-end encrypted services. Law enforcement agencies around the world should strive to do their critical work while coexisting with secure and private online services.

We are seeing important steps forward. Last year, Apple threatened to set up a client-side scanning system that would have constantly scanned users’ photos and reported back to law enforcement. Those plans were dropped after a public outcry, and this month, Apple said definitively that it won’t revive them.

What’s more, Apple has agreed to implement end-to-end encryption for iCloud backups—a demand EFF has been making for more than three years. When EFF supporters speak up and work together, we can win big victories like this one.

This article is part of our Year in Review series. Read other articles about the fight for digital rights in 2022.

Joe Mullin

The State of Online Free Expression Worldwide: 2022 in Review

It’s been a tumultuous year for free expression globally. From internet shutdowns and crackdowns on expression to closed-door partnerships and attempts to restrict anonymity and end-to-end encryption, digital rights are under threat in many places. And while the European Union has made regulatory strides, elsewhere in the world, efforts to regulate—particularly those undertaken by authoritarian countries—threaten to fracture the global internet.

EFF is deeply engaged in the global fight for free expression online. In 2022, we worked with the DSA Human Rights Alliance to ensure that EU lawmakers consider the global impacts of European legislation. We also joined the Arab Alliance for Digital Rights, a newly-formed coalition that brings together groups across the MENA region and international partners to protect civic space online. We continued our work as long-term members of the IFEX network. And with (cautious) travel back on the table, we participated in a number of international fora, including the Balkans-based POINT conference, FIFAfrica, Bread and Net in Lebanon, and the OSCE.

Working with international partners, we launched Protect the Stack, an initiative supported by more than 55 organizations worldwide aimed at ensuring infrastructure providers don’t become speech police. We also launched Tracking Global Online Censorship to monitor the impact of content moderation on free expression worldwide.

In addition to these joint efforts, there were quite a few places that warranted extra attention. Here are five ongoing threats that we will be watching in the year to come:

1. Ghana’s Repulsive Anti-LGBTQ Bill

Ghana, a constitutional democracy with a strong commitment to free expression, has become a regional tech hub, making this bill introduced in the Ghanaian parliament all the more atrocious. Ghanaian law already criminalizes same-sex sexual activity, but this proposal goes further, threatening up to five years in jail for anyone who publicly identifies as LGBTQI+ or as “any sexual or gender identity that is contrary to the binary categories of male and female.” The bill also criminalizes identifying as an LGBTQI+ ally.

We called on Twitter and Meta, both of which had previously opened offices in the Ghanaian capital of Accra (Twitter’s office has since been shuttered), to speak out against the bill, and encouraged global allies to support the Ghanaian LGBTQI+ and human rights communities in opposing its passage. We will continue to monitor the situation for future developments.

2. Iran’s Crackdown on Protesters and Technologists

In September, the death of Jina (Mahsa) Amini at the hands of Iran’s morality police sparked protests that have continued for more than two months, despite a brutal crackdown that has included tens of thousands of arrests and several executions of high-profile anti-government protesters.

Amongst those targeted by government forces early on were several technologists and digital rights defenders. In October, we joined our friends at Access Now, Article19, and Front Line Defenders in issuing a statement calling on Iran to stop the persecution of the digital rights community and to release those detained, including technology specialist Aryan Eqbal and blogger and technologist Amiremad (Jadi) Mirmirani.

Eqbal was released in early November, and Mirmirani in mid-December, but Iranians still face serious threats to online free expression. We will continue working with our international partners to call attention to the situation.

3. Turkey's Latest Attempt to Hinder Free Expression

Turkey, an early adopter of measures to restrict social media, was at it again in 2022 with a new law aimed at curbing disinformation. Following in the footsteps of its 2020 mutant NetzDG-copycat law, the Turkish government is now looking to fight disinformation with censorship, in the form of a vaguely worded law prescribing three years’ imprisonment for anyone who publishes “false information” with the intent to “instigate fear or panic” or “endanger the country’s security, public order and general health of society.”

The law was met with condemnation within Turkey and abroad, and we echoed that sentiment. We will be watching to see how the regulation impacts speech in the coming year.

4. Saudi Arabia’s Threats to Rights Online

Saudi Arabia has never offered a space for free expression, online or off, but as the country seeks to improve its international reputation with developments like the NEOM smart city—just a few years after its brutal murder of journalist Jamal Khashoggi—its striking measures to restrict free expression have us paying close attention to the Gulf state.

In 2022, Saudi Arabia imposed strikingly harsh prison sentences on two Twitter users, one of whom is an American citizen. The other, Salma al-Shehab, a student at the University of Leeds in the UK, was arrested upon her return to Saudi Arabia and held for more than a year before being sentenced to a whopping 34 years in prison, to be followed by a 34-year travel ban. Her “crime”? Sharing content in support of prisoners of conscience and women human rights defenders. Her sentence is four years longer than the maximum suggested by the country’s anti-terror laws for activities such as supplying explosives or hijacking an aircraft.

In October, we joined more than a dozen international organizations in calling on the UK government to push for her release, and have continued to monitor her case. In light of both cases, and a number of other rights violations by the Saudi government, we also called on Google to abandon plans to open a data center in the country. And now, with Saudi Arabia one of Twitter’s largest investors, we have more reason to keep a close eye on Silicon Valley’s ventures with the human rights-violating country. 

5. Egypt’s Brutal Repression of Alaa Abd El Fattah 

We had hoped 2022 would be the year that we would see technologist, activist, and writer Alaa Abd El Fattah free and reunited with his family. A friend of EFF, Alaa has been at the center of our international advocacy work for many years. This year, as the COP27 summit—hosted by Egypt despite international objections—neared, Alaa decided to escalate his ongoing hunger strike, putting his life in grave danger but also drawing eyes to his plight. Ultimately, the protests surrounding COP27 calling for his freedom and that of other political prisoners in Egypt overshadowed the climate negotiations.

Alaa was one of three winners of the 2022 EFF Awards, and while we are proud to honor his accomplishments, the moment was bittersweet: despite demands from the UK government, a number of members of the U.S. Congress, and a broad swath of the international community, Alaa remains in prison.

But, to put it in his own words, we have not yet been defeated: Alaa ended his hunger strike in mid-November and was finally allowed a visit with his family shortly after. There is still hope, and Alaa’s family, friends, and allies around the world continue the fight for his freedom. The campaign’s latest ask is for UK and U.S. constituents to write to their members of parliament and Congress, respectively. We hope that Alaa finally gets his freedom back in 2023, and we won’t stop fighting until he does.

This article is part of our Year in Review series. Read other articles about the fight for digital rights in 2022.

Jillian C. York

Police Drones and Robots: 2022 in Review

The rising tide of policing by robots and drones may seem relentless or even inevitable. But activism, legislative advocacy, and public outrage can do a lot to protect our safety and freedom from these technologies.

This year began with a report that elucidated what police are doing with drones. The answer? Not much, for now. A Minnesota law requires police departments to report every drone deployment and its purpose. We’ve long suspected that police have few clear uses for drones other than invasive surveillance, and the Minnesota report reveals that drones were mostly used for training purposes.

One purpose Axon hoped to find for drones this year was stopping school shooters. The company announced it was developing a drone with a mounted Taser for subduing people in dangerous situations. The backlash was immediate: after a majority of Axon’s ethics board resigned, the company paused the project.

In Oakland and in San Francisco, activists defeated municipal plans to authorize police to use deadly force with remote-controlled robots. In Oakland, police hoped to arm a robot with a shotgun—a plan that received so much backlash it was pulled within a few days. In San Francisco, it took a little longer. After the Board of Supervisors voted 8-to-3 to authorize police to use robots strapped with bombs to deploy deadly force, an EFF-led coalition mobilized. After one week, which included a rally and international press attention, the Board of Supervisors reversed course.

Of course, no fight stays won. Robot companies still want to make money. Police still want to send robots to do their work. The Department of Homeland Security still has plans to test autonomous robot dogs on the U.S. border as part of its massive infrastructure of border surveillance. But, with enough organizing, lobbying, and a fair bit of outrage, we can resist and often win.

This article is part of our Year in Review series. Read other articles about the fight for digital rights in 2022.

Matthew Guariglia

The Battle For Online Speech Moved To U.S. Courts: 2022 in Review

EFF and our supporters have fought off numerous wrongheaded attempts by Congress to regulate online speech, including several that we wrote about last December.

The bevy of bad internet regulation proposals coming out of Congress hasn’t stopped. In 2022, the EARN IT Act was re-introduced. This misguided bill would have allowed states to strip away critical legal protections from any website, app, or platform, as long as state lawmakers linked their proposals to child abuse. If passed, the EARN IT Act would encourage censorship and prod companies away from using encryption.

This bill, which produced massive pushback from EFF supporters, died in the Senate, but not before a committee vote endorsing it.

Meanwhile, two laws passed last year by the state legislatures of Texas and Florida are being litigated in federal courts, with mixed results. These laws both violate the First Amendment and will harm internet users if they go into effect. As EFF has explained in multiple legal briefs over the past several years, mandating that a service carry another’s speech violates its First Amendment rights. For now, both laws are on hold, though both are likely headed to the Supreme Court.

Courts, Lawmakers Threaten Internet Users’ Speech

In Florida, the legislature passed, and Gov. Ron DeSantis signed, a bill that prohibits online platforms from banning political candidates, fact-checking their statements, or otherwise moderating their posts. The law allows for fines of up to $250,000 per day. We said it’s blatantly unconstitutional, and this year, the U.S. Court of Appeals for the Eleventh Circuit largely agreed. Both Florida officials and the industry groups challenging the law have asked the Supreme Court to take the case.

In Texas, Gov. Greg Abbott signed a bill that he said would stop social media companies that try to “silence conservative viewpoints and ideas.” EFF weighed in again, telling a Texas federal court that the measure is unconstitutional.

The district court agreed. However, the U.S. Court of Appeals for the Fifth Circuit issued a ruling that upended the bedrock First Amendment principle that private entities, from newspapers to parade organizers to online platforms, get to decide what speech they will publish, and who gets to use their services to speak. The ruling is a major threat to online users’ free speech because it jeopardizes everyone’s ability—both platforms and their users—to create online communities, decide for themselves what speech they will host, and how they will moderate it.

Instead, the Fifth Circuit used concerns about private censorship to hand Texas lawmakers enormous power to control speech online. This is both wrong as a matter of First Amendment law and dangerous. The decision’s logic puts potentially every community forum and online service under threat of being forced to carry speech it finds objectionable. And although EFF agrees that online services’ private censorship is routinely wrong and harmful, government-backed penalties for moderating user speech are not the way to address those harms. Instead, lawmakers at all levels should focus on reducing the dominance of today’s largest services.

The Fifth Circuit’s decision also ignores that Congress has bolstered these fundamental First Amendment protections in 47 U.S.C. § 230 (“Section 230”). The federal law, which explicitly preempts state laws that conflict with it, has allowed diverse online services to thrive. This array of services, large and small, allows everyone to speak, organize, and advocate for change. Content moderation is often done poorly, especially when it’s done on a massive scale. But upending Section 230 won’t make the internet a nicer place, and we shouldn’t allow either Congress or state legislatures to step in and appoint themselves as our new content moderators.

Another Threat To Speech Is On The Horizon

This fall, the Supreme Court agreed to hear a pair of cases that could push online platforms to broadly censor a host of user-generated speech. Both cases involve claims that Twitter and YouTube aided terrorist organizations when they allowed terrorist content on their sites.

The first case, Twitter v. Taamneh, concerns whether a platform’s generalized knowledge of terrorist content on its service is sufficient to state a claim under the Anti-Terrorism Act’s civil provisions. EFF joined a coalition of groups in a brief filed with the Supreme Court that cautioned against such a broad reading of the law because it has the potential to censor protected speech.

The second case, Gonzalez v. Google, is about whether a key law protecting internet speech online, 47 U.S.C. § 230 (“Section 230”), prevents claims against YouTube that are based on its distribution of terrorist content. EFF is concerned that narrowing Section 230’s protections will result in platforms removing more users’ speech and could chill the development of new platforms that might provide new and diverse opportunities for users to speak online.

The Supreme Court is expected to hear arguments in both cases early in 2023.

This article is part of our Year in Review series. Read other articles about the fight for digital rights in 2022.

Aaron Mackey

Privacy Shouldn't Clock Out When You Clock In: 2022 in Review

EFF continued to expand our work on technology issues in the workplace in 2022. We first renewed our attention to worker privacy when the specter of “bossware”—tracking software on work devices—reared its ugly head at the start of the pandemic.

Since then, EFF has joined with those in the labor community to learn more about surveillance in the workplace and on work devices, and the effect it has on employees. Particularly as regulators start to pay more attention, and legislators include workers’ privacy in general consumer privacy bills, it’s important to understand the ways that the workplace presents unique challenges in this arena.

Bossware Has Real Effects on Workers

As white collar remote workers felt bossware breathing down their necks, there was more coverage than ever of how employers monitor the workforce, and of the lasting effects on workers’ health, safety, livelihood, and collective bargaining rights. Even for remote staff, these stresses affected mental health and family responsibilities. But it is workers across all fields who have increasingly felt the heat of surveillance, and some of the coverage was propelled by blue collar workers who fought back, from meatpacking facilities to service workers to delivery drivers who experienced increased surveillance as a form of retaliation for wage demands. Neither the ineffectiveness of these tools nor their impact on real people dampened employers’ desire for ever more means to monitor and control worker behavior, with some even floating a database on worker productivity. Courts and agencies in other countries, like the Netherlands, have been quicker to take on U.S. firms alleged to have violated the human rights of foreign remote workers by demanding they acquiesce to invasive monitoring.

While collective bargaining and union action are often the best solution for securing rights in the workplace against unreasonable and unaccountable punitive technologies, surveillance is also a bar to those solutions, as it sets up workers for unlawful retaliation for attempting to exercise their right to organize. This is the first place that federal regulators—and, by extension, state and local governments—have grounds to step in.

The Department of Labor and the NLRB Start to Take on Workplace Surveillance

As some Senators called on federal agencies charged with workplace regulation to do more to protect workers, the Department of Labor and the National Labor Relations Board declared more interest in taking on these complex and constantly changing questions. The Department of Labor openly questioned the biases intrinsic to allegedly unbiased AI technology that monitors and judges workers, following an earlier report from the Department of Justice and the Equal Employment Opportunity Commission that some forms of AI monitoring may violate the Americans with Disabilities Act.

In a recent memo, the general counsel of the National Labor Relations Board (NLRB) called for regulators to protect workers against what she described as “unlawful electronic surveillance and automated management practices.” EFF agrees. The memo’s position suggests two major strategies for increased protection: enforcing existing workplace protections—like Section 7 of the National Labor Relations Act, which bars bosses from retaliating against workers engaged in protected workplace speech about unions—both through the NLRB directly and through inter-agency collaboration; and exploring the constantly changing bossware landscape and the gaps in existing rules, so that federal agencies can begin writing new rules and regulations that protect workers and keep pace with the changing times.

New Consumer Privacy Rules May Include Workers

The power balances and incentives in a workplace scenario are very different from a consumer relationship. As we said in comments responding to the Federal Trade Commission’s (FTC) call for input on privacy issues—which itself included workers in its definition of consumer:

Even when an employer requests consent from an employee before processing their data, the consent frameworks can break down in such situations. Saying “no” to data collection in a consumer context may mean you have to use another company’s service. Saying “no” to your employer could get you fired, or otherwise seriously affect your livelihood.

A company may see some incentive to strengthen consumer privacy if it thinks doing so will help its bottom line. An employer is much more likely to start looking to replace a worker who raises privacy issues. Yet more workers are speaking out about surveillance, including warehouse workers, drivers, home healthcare givers, and lawyers.

As we told the FTC, we generally agree with the principles set out in “Data and Algorithms at Work: The Case for Worker Technology Rights,” a 2021 report from the University of California’s Labor Center. The report lays out several principles for worker rights. For example, worker data should only be collected when it is necessary and closely related to the tasks of an employee’s job, and it should be used only for the purposes for which it was collected. Similarly, workers should be given clear notice about how and why data are being collected, especially if the data will be used in a way that could materially affect their working conditions, such as for performance evaluation or discipline. Indeed, most bossware is punitive, meant to penalize workers or drive them to work harder and take fewer breaks. That stress in turn drives up workplace injuries and mental health harms.

State Laws Can Protect the Rights of Workers

We were also proud to join California’s leading labor groups in supporting A.B. 1651, authored by Assemblymember Ash Kalra. This bill would have taken important first steps in providing workers with information about monitoring in the workplace and planted an important flag in demonstrating what privacy rights workers need.

This issue will continue to gain momentum in the coming years. We look forward to continuing to work with a number of allies to defend workers’ rights against potentially unlawful attempts by employers to monitor and control us all.

This article is part of our Year in Review series. Read other articles about the fight for digital rights in 2022.

Hayley Tsukayama

The Adoption of the EU's Digital Services Act: A Landmark Year for Platform Regulation: 2022 in Review

2022 marked an important year for digital rights across the European Union, as the landmark Digital Services Act (DSA) came into force on 16 November, seeking to foster a safer and more competitive digital space.

The DSA overhauls the EU’s core platform regulation, the e-Commerce Directive, and is intended to be an important tool in making the internet a fairer place by setting out new legal responsibilities for online platforms and informing users about why content is removed and what they can do about it. The powers of Big Tech are also reined in: the DSA requires “very large online platforms” (VLOPs) to comply with far-reaching obligations and to responsibly tackle systemic risks and abuse on their platforms. These risks cover a variety of aspects, including the dissemination of illegal content, disinformation, and negative impacts on fundamental rights. VLOPs also face oversight through independent audits, which will assess whether platforms respect their obligations under the DSA.

Whilst the obligations placed on intermediary services depend on the role, size, and impact of the service in the online ecosystem, the DSA introduces all-encompassing protections for users’ privacy by prohibiting platforms from targeting ads based on sensitive user information, such as ethnicity or sexual orientation. More broadly, the DSA increases transparency about the ads users see in their feeds: platforms must place a clear label on every ad, with information about the buyer of the ad and other details. Despite being in its infancy, this provision is already inducing tension, as companies like Twitter—whose primary source of income is ad revenue—have publicly affirmed their intention to further amplify targeted ads on the platform, in potential contravention of the DSA.

The DSA’s emphasis on greater transparency and user rights also includes requirements on platforms to explain their content curation algorithms in more detail and in user-friendly language. This aims to ensure that users can better understand how content decisions—which should be non-arbitrary—are made, and how they can pursue reinstatement should platforms make mistakes. The DSA also requires platforms to give users the option to choose a content curation algorithm that is not based on profiling.

By and large—and this is the right approach to platform governance—the DSA doesn’t tell social media platforms what speech they can and can’t publish. Instead, it focuses on making processes and content moderation clear to users, and requires platforms to take concerns for safety and the protection of fundamental rights seriously.

Moreover, the DSA largely preserves the EU’s system of limited liability for online intermediaries, which means that platforms cannot be held responsible for user content provided that they remove content they actually “know” to be illegal. After extensive deliberation, the DSA rejected takedown deadlines that would have suppressed legal, valuable, and benign speech, and EFF helped ensure that the final language steered clear of intrusive filter obligations. This will enhance user rights online: without intermediary liability protections, users become subject to harmful profiling, stifled free speech, and a system that often leads to a pernicious culture of self-censorship. However, new due diligence standards could still encourage platforms to over-remove, whilst other requirements seek to moderate platforms’ actions against user speech. We will be watching closely to see how this plays out in practice.

Many believe that the new DSA could become a gold standard for other regulators in the world. But the DSA isn’t all good news, and some aspects may be a good fit for Europe but not for other parts of the world. One particularly concerning omission from the DSA is an express protection for anonymous speech. Instead, the DSA provides a fast-track procedure for law enforcement authorities to take on the role of “trusted flaggers,” uncover data about anonymous speakers, and flag allegedly illegal content, which platforms are then obligated to remove quickly. Issues with government involvement in content moderation are pervasive, and whilst trusted flaggers are not new, the DSA’s system could have a significant negative impact on the rights of users, in particular privacy and free speech.

Since the DSA was first introduced by the European Commission in December 2020, EFF has fought for protections on four key areas: platform liability, interoperability mandates, procedural justice, and user control. And our message to the EU has remained clear: Preserve what works. Fix what is broken. And put users back in control.

Yet despite the DSA finally passing, our work has just begun. The success of the DSA’s pledge to create a user-protective online environment will depend on how social media platforms interpret their new obligations, and on how European Union authorities enforce the regulation. Respect for the EU’s Fundamental Rights Charter and inclusion of digital rights groups and marginalized communities in the implementation process are crucial to ensure that the DSA becomes a positive model for legislation on digital rights—both inside and outside the EU’s borders. And as racist language proliferates across platforms like Twitter, and free speech is arbitrarily removed at the request of law enforcement on other services, user-centered and transparent content governance processes are more pertinent than ever.

This article is part of our Year in Review series. Read other articles about the fight for digital rights in 2022.

Christoph Schmon

Hacking Governments and Government Hacking in Latin America: 2022 in Review

In 2022, cyber-attacks on government databases and systems broke into headlines in several Latin American countries. These attacks have exposed government systems’ vulnerabilities—including sometimes basic ones, like failing to keep software updated with critical patches—and shown how attacks can affect government data, services, and infrastructure. On the other hand, they also served to shed light on arbitrary government surveillance practices concealed from proper oversight.

To give some examples, ransomware attacks affected government services in Quito, Ecuador; targeted Chile’s judicial system and National Consumer Service (Sernac); and disrupted operations dependent on the digital platforms of Colombia’s sanitary authority (Invima) and its companies’ oversight agency (Supersociedades). Probably the most extensive attack took place in Costa Rica, disrupting government services and leading President Rodrigo Chaves to declare a national emergency.

The Conti group, responsible for Costa Rica’s first hit in April, also accessed two email accounts belonging to the Intelligence Division of Perú’s Ministry of Interior (DIGIMIN), seeking a ransom in exchange for not publishing the information obtained. Conti’s message states there was no data encryption in DIGIMIN’s network, and that almost all documents the group downloaded were classified as secret. According to media reports analyzing what Conti eventually published online, DIGIMIN has monitored—under the label of “terrorism”—public events about missing persons and forced disappearances, even when government entities were the organizers. The state’s arbitrary monitoring of human rights defenders, political parties, journalists, and opposition leaders came more strongly into the spotlight with the “Guacamaya Leaks.”

#GuacamayaLeaks and #EjércitoEspía

Guacamaya is the name of the hacktivist group that in September leaked around 10 terabytes of emails from mainly military institutions in Chile, México, Perú, Colombia, and El Salvador. This was not the first round of “Guacamaya Leaks,” though.

Earlier in 2022, the hacktivist group leaked documents related to mining projects in Guatemala and mining and oil companies in Chile, Ecuador, Colombia, Brazil, and Venezuela. The earlier leak led to Forbidden Stories’ “Mining Secrets” series, which reported alarming abuses by the Swiss mining conglomerate Solway Group in Guatemala. Delving into documents from its local subsidiary, the reporters discovered how journalists covering the mine in Guatemala were “systematically profiled, surveilled and even followed by drones.” Finally, Guacamaya accessed emails from Colombia’s Attorney General’s office, making them available on request to journalists and others committed to investigating the institution’s ties with drug trafficking, military and paramilitary groups, and corrupt companies.

In the later hack, first released in September and dubbed “Repressive Forces” by Guacamaya, the group obtained emails from the Chilean Armed Forces’ Joint Chiefs of Staff (EMCO), the Mexican National Defense Secretariat (SEDENA), Perú’s Army and Joint Command of the Armed Forces, Colombia’s General Command of the Military Forces, and El Salvador’s National Civil Police and Armed Forces. In most cases, the hacktivist group exploited ProxyShell vulnerabilities in Microsoft Exchange email servers. Although Microsoft released security updates in 2021, the attacked servers had not yet been patched. In the Mexican case, the email platform was Zimbra, and audits had already warned the government about its cybersecurity vulnerabilities.

Unlike the Conti group, Guacamaya does not intrude on systems for ransom. Its stated motivation is to shed light on abuses and rights violations so civil society can react and hold governments accountable. As Derechos Digitales’ Maria Paz Canales points out, such leaks are often the only meaningful source of public information on arbitrary practices by armed forces and intelligence agencies in Latin America. Countries in the region generally lack robust legal frameworks and an effective oversight infrastructure to hold government surveillance and repressive powers accountable. On the other hand, leaks may expose sensitive information (e.g., the identity of persons internally pushing against abuses), demanding careful consideration from those releasing the data.

Media reports on the latest Guacamaya leak helped uncover different instances of repression and abusive surveillance. In Chile, reports highlighted that the Navy spent almost 700 million pesos in only six months to militarize the Biobío region during the state of emergency declared to stifle the conflict with indigenous Mapuche groups. The Chilean Armed Forces have also monitored civil society organizations and elected politicians through social media. Similarly, La Encerrona reports that the Peruvian Army’s documents on monitoring threats to the democratic state include the activities of leftist parties and politicians; civil society organizations working close to local communities in mining zones, such as Amnesty International, are also considered threats. In México, leaked documents reveal undue military influence seeking to hamper the investigation into the forced disappearance of 43 students in Ayotzinapa. And the BBC pointed out that leaked files show military forces’ detailed monitoring of media outlets, journalists, activists, and human rights defenders, including lists of journalists classified as “for” and “against” the government.

The attention turned to Mexico’s SEDENA in the “Ejército Espía” investigation. In a joint effort, digital rights groups R3D, Article 19 México and Central America, SocialTIC, and Citizen Lab gathered evidence that at least two journalists and one human rights defender, working on issues related to Armed Forces’ human rights violations, suffered attacks from NSO Group’s Pegasus malicious software between 2019 and 2021. The evidence also supports claims that SEDENA purchased a remote monitoring system from a private vendor that is the exclusive Pegasus representative in México.

Such findings contradict President López Obrador’s repeated promises that his government didn’t have contracts with malware companies and would not use spying systems against journalists and human rights defenders. The organizations behind the investigation stress that the Army does not even have the legal power to intercept the private communications of civilians. In fact, Mexican law does not clearly and specifically regulate the use of malware, despite evidence of its recurrent use in the country.

Again and Again: The Spread of Government Hacking With No Safeguards

As we have pointed out, widespread government use of malicious software without strict necessity and proportionality standards, strong due process safeguards, and effective controls has repeatedly had dire consequences, and has led to a growing call for states to halt the use of malware absent robust safeguards and mechanisms ensuring the protection of human rights. While the regulatory initiatives currently in place in the region do not live up to this task, government use of malicious software continues to grow.

The digital rights group IP.rec underlined this trend in Brazil in a thorough investigation into government hacking of digital devices through the exploitation of vulnerabilities. IP.rec’s report covers both remote-access software, like Pegasus, and mobile device forensic tools (MDFTs), such as Cellebrite, which generally require physical access to the device. The research found contracts for the acquisition of hacking tools with the Ministry of Defense and the Ministry of Justice at the federal level, and with law enforcement bodies in all Brazilian states. Verint Systems figures as the main provider of remote-access tools: the Israeli company or its subsidiaries have contracts with Brazil’s Ministry of Defense and with government entities in states like São Paulo, Alagoas, and Pará. The report highlights that Pará’s governor, Helder Barbalho, used the Verint tool acquired by the Civil Police to spy on those investigating a corruption scheme in purchases of respirators during the Covid-19 pandemic.

IP.rec’s report raises concerns about applying, by analogy, the legal regimes for other surveillance measures, such as search and seizure and telephone interception, to government hacking tools. Since Brazilian law has no specific regulation on the issue, law enforcement relies on broad interpretations of current law to employ hacking tools. However, the requirements and safeguards of those older surveillance measures don’t properly reflect the intrusiveness of the tools at stake. An ongoing legislative discussion to modify Brazil’s Criminal Procedure Code allegedly aims to bridge this gap. Versions of the bill sought to authorize law enforcement access to electronic evidence through forced access and remote collection. EFF worked closely with Brazil’s coalition of digital rights organizations, Coalizão Direitos na Rede, to stress the flaws of the bill. The bill’s current text dropped the provision authorizing remote collection of data, but the rule on forced access remains and lacks robust safeguards.

Tackling Vulnerabilities While Ensuring Expression, Privacy and Security

The security vulnerabilities of electronic systems and devices open a dangerous backdoor to our daily communications, movements, and lives, as well as to governments’ and companies’ critical systems and databases. Government cybersecurity concerns should translate into incentives and actions to fix security vulnerabilities, instead of exploiting and perpetuating them. They should translate into the adoption and support of strong encryption in systems and devices, instead of repeated attempts to undermine the foundations of encryption. Government cybersecurity concerns should also entail the protection of security researchers and developers of secure software, instead of persecuting them based on vague cybercrime laws or problematic interpretations of cybercrime provisions. Finally, they should not result in policies that oppose privacy and security, but in measures that recognize both rights are intrinsically related.

Arbitrary government surveillance practices endanger people’s security and well-being. The application of human rights standards to government surveillance is a persistent challenge in the region. A case brought before the Inter-American Court of Human Rights (IA Court) this year, in which EFF and partner organizations filed an amicus brief, provides a crucial opportunity for the IA Court to ensure inter-American human rights standards serve as a check on unparalleled surveillance powers in the digital age. EFF will keep monitoring developments and advocating that privacy, expression, security, and the protection of human rights always go hand in hand.

Veridiana Alimonti

Raising A Glass with EFF Members: 2022 in Review

2 months 4 weeks ago

Over the past few years, just like the rest of the world, EFF has had to adapt to change and face new challenges. We’ve even had to relearn how we did things before most of the world shut down for over a year. Even through all this, EFF members have shown that the fight for digital freedoms is still strong. This year, we have come together to push back against dangerous police surveillance technology, defend the use of strong encryption, and protect the privacy and security of those seeking and offering reproductive health care. And those are only a few of the biggest battles from the past year!

Just like the scope of EFF’s work, our members stand strong throughout the world. Last spring, the EFF membership team planned a virtual Members’ Speakeasy that focused on some of the work that we do outside of the U.S., specifically in the EU. EFF members showed up to attend a discussion on the Digital Services Act and the Digital Markets Act, bills that EFF continues to be heavily involved in, working to ensure that they protect users’ rights to free expression and strong encryption. 

Later in the spring we faced a real challenge: the team had to relearn how to host in-person events! We hosted our sixth annual Tech Trivia and 14th annual Cyberlaw Trivia, the first in-person trivia nights since 2019. Though we had done these events in person before, learning how to reconnect in the physical world again was an uphill battle. Thankfully, our members were ready to test the limits of their nerdiness, and our Cybertiger was eager to escape from Zoom. Food was eaten, prizes were given out, and EFF staff even donned totally authentic robes and wigs to judge each team’s knowledge of the fascinating and obscure minutiae of digital security, online rights, and internet culture.


The summer is one of EFF’s busiest times of the year, and this one was no exception. The summer hacker conferences, including DEF CON, Black Hat USA, and BSides Las Vegas, all took place in person. Being back in Las Vegas for the first time since 2019 was incredible. EFF supporters helped raise enough money to fund a full year of one lawyer’s work, and then some. Even though EFF had been away from Las Vegas for two years, our supporters rose to the challenge of keeping our team going.

We even had the opportunity to create one of our most unique t-shirts to date, in collaboration with iconic hacker artist Eddie the Y3ti Mize and the esteemed multi-year winners of EFF’s t-shirt puzzle challenge. This DEF CON 30 t-shirt included EFF’s hardest-ever hidden puzzle (you can try it for yourself or see the solution here!). We are in awe of the incredible support from members during these conferences. It was a great return to in-person gatherings in Las Vegas, and we are so glad to be back and to have your support.

Generally, fall can be pretty calm for EFF. But we were just too excited about the possibility of seeing more supporters in-person, so we decided to end the year with a few more events. Our first in-person fall Members’ Speakeasy since 2019 took place in October, where we raised a glass with digital freedom supporters in the Bay Area to catch up and chat about online rights. We were able to see long-time supporters and even some who had never been to an EFF Speakeasy before. 

Our final event of the year was easily our most ambitious. We celebrated our first-ever EFF Awards both in person and online. After learning a lot from the past two years of virtual events, we didn’t want to revert to holding our awards ceremony in person only, so we did our best to hold it live and online at once. Scores of EFF supporters attended the awards ceremony in San Francisco to celebrate, and we also got to chat with dozens of attendees online during the livestream. Being able to see people both in cyberspace and in meatspace was a great way to kick off the end of the year.


Now as 2022 comes to a close, users around the world continue to show their support for digital freedoms. We continue to rely on digital connections more than ever, and continue to face battles like surveillance, censorship, pushes to take away privacy and security from users, and much more. These battles make EFF’s mission to preserve our rights online crucial.

Thank you to all of the EFF members who joined forces with us and kept the fight for internet freedom strong this year. Our successes are only possible with help from people like you. If you haven’t joined EFF yet, now is a great time to do so!

This article is part of our Year in Review series. Read other articles about the fight for digital rights in 2022.

Christian Romero

Pivotal Year for the Metaverse and Extended Reality: 2022 in Review

3 months ago

Neal Stephenson’s classic 1992 dystopian novel Snow Crash inspired today’s tech industry in many ways. Google Earth is said to have been inspired by the novel’s “Earth” software, which lets people interactively visualize the whole world. Snow Crash is also the origin of an immersive virtual world Stephenson called the “Metaverse,” which, of course, Mark Zuckerberg used last year to describe his ambition to create a virtual reality platform and a new generation of computer interfaces.

Visions for the metaverse include ways for people to work, socialize, and interact using virtual reality (VR) and augmented reality (AR) technologies (collectively called Extended Reality, or XR). While Meta has attempted to claim the term metaverse for its own, this area has also become a research focus for Google, Apple, Microsoft, Sony, and TikTok’s parent company ByteDance, among others, all of whom have launched or announced plans to launch XR hardware or services in the near future. This next generation of devices will be more sophisticated than today’s gadgets, which means their data collection capabilities will likewise increase, along with new and substantial risks to our human rights and fundamental freedoms.

Metaverse

There’s no single definitive understanding of what a “Metaverse” or “Metaverse(s)” might be. In its ambiguity, the term has become a placeholder for many things, often reflecting the interests of the person using it. Some, like Apple, have avoided the term entirely for their XR products. The major point of overlap in describing a “metaverse,” however, is the idea of additional virtual environments connected to the internet becoming important parts of our day-to-day lives in the real world.

One prominent vision of this is the fictional OASIS from Ready Player One, a virtual society where people can play massively multiplayer online (MMO) games using VR gadgets. Another popular conception emphasizes spatial computing and AR devices, creating a shared “annotation” of virtual objects in the real world. Others, however, define the metaverse as making the internet a reflection of the physical world to facilitate work, socializing, and commerce, perhaps supported by a metaverse “tax”. Visions of the metaverse often interact with another ambiguous term for the future iteration of the internet: Web3.

Many of these ideas are not themselves very new–Second Life, started nearly 20 years ago, fits many proposed definitions. The current excitement, however, comes from the belief that the prior iterations of virtual worlds remained niche due to technical limitations which are now being overcome by XR, rather than a lack of interest or demand from the public.

Stephenson’s Snow Crash was a cyberpunk dystopia—a warning, not a suggestion. It shouldn’t be used as a blueprint for a future internet.  The Snow Crash metaverse is a heavily gated and radically commercialized world, where inequity is embedded in the system’s underlying infrastructure—it’s a world where poor people have to flicker and stagger through cyberspace in low-resolution, low-frame-rate avatars. This is not the metaverse that we want to see. It’s imperative that we fight back now to prevent systemic digital inequity from infecting these new worlds before they become fully integrated with our everyday lives.

2022 Victory

When Meta (then called Facebook) bought industry leader Oculus in 2014, VR users were concerned about being forced to use their Facebook account, featuring their real name (later, their “authentic identity”), in the virtual worlds they visited with their Oculus headsets. At first, Facebook promised Oculus owners that they could go on using their devices without a Facebook login. Then, shortly before the release of the Oculus Quest 2, the company changed course: Quest devices would henceforth require a Facebook account and could no longer be used pseudonymously.

EFF and others kept up the pressure on Meta over this broken promise, fighting for users’ privacy, until August, when Meta finally relented. VR users now have a path to “unlink” their device from Facebook by creating separate Meta and Horizon accounts (Horizon being Meta’s social universe app), neither of which is subject to Facebook’s strict “real names” policies.

According to Meta CTO Andrew “Boz” Bosworth and Meta VP of Oculus Mark Rabkin, the new Meta account only includes login credentials, payment information, and other settings you want to share between Facebook, Instagram, Horizon, or WhatsApp, and there is no limit to how many of these accounts a user can create. Users can also create unlimited Horizon Profile accounts—the successor to the old Oculus accounts, used for VR headsets—each with different names, avatars, and social graphs.

Meta’s VR customers made it clear: they want the ability to express themselves in ways that suit their needs, from separating their work and personal life to using pseudonyms to ensure their safety. Meta’s reversal of its decision should be a lesson to its competitors: users will not tolerate being corralled into privacy-invading procedures, and companies who try will face revolts.

Biometric Inferences and Privacy Protections

The fight for the right to pseudonymity and privacy was just an opening skirmish: there are bigger battles coming. We have sounded the alarm over the serious privacy dangers lurking in XR. 

Headset sensors can gather new, extraordinarily invasive forms of behavioral data, especially users’ involuntary and reactive physical behaviors. Our joint statement on Human Rights on the Metaverse and our February submission to the UN Office of the High Commissioner for Human Rights (OHCHR) explain why XR surveillance is a serious human rights matter.

The new generation of hardware, meant to be worn on the user’s body, poses serious privacy risks both to users and to bystanders. In our submission to OHCHR, we wrote:

“XR headsets are often designed with body-worn and environmental sensors which can collect unprecedented amounts of data about their user and their context. New sensors can make XR technology the frontier of more intimate forms of surveillance. These include monitoring vocal patterns, facial expressions or gazes, and when coupled with other technology like smartwatches, even heartbeats, and body temperature. Body-worn sensors can also track the unconscious responses that a user’s body makes, like eye movements, head motions, and hand gestures. This tracking can be needed for making virtual scenes feel natural, but can also reveal sensitive medical and psychological information, which some companies may choose to store on their own servers while others on the device itself.”

Data about our bodies and vitals is incredibly personal, and even the raw data can be sensitive. This goes beyond current data collection, such as whether you click a link and how far you scroll: eye tracking can note exactly what parts of an article you read and which images you look at. In XR, where nearly your entire field of view can be accounted for, this can become a record of exactly what you interacted with at any given time.
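To see how quickly raw gaze data becomes an attention profile, consider this toy Python sketch. The page regions, region names, and sample values are invented for illustration; real XR telemetry is far richer, but the principle is the same.

```python
# Toy illustration: how raw gaze samples become a record of attention.
# Given timestamped gaze points and named screen regions, compute the
# dwell time per region. The regions and samples below are hypothetical.
from dataclasses import dataclass

@dataclass
class GazeSample:
    t: float   # seconds since start
    x: float   # normalized screen coordinates, 0..1
    y: float

# Hypothetical page layout: region name -> (x0, y0, x1, y1)
REGIONS = {
    "headline":   (0.0, 0.0, 1.0, 0.2),
    "article":    (0.0, 0.2, 0.7, 1.0),
    "ad_sidebar": (0.7, 0.2, 1.0, 1.0),
}

def dwell_times(samples: list[GazeSample]) -> dict[str, float]:
    """Accumulate how long the gaze rested in each region."""
    totals = {name: 0.0 for name in REGIONS}
    for prev, cur in zip(samples, samples[1:]):
        dt = cur.t - prev.t
        for name, (x0, y0, x1, y1) in REGIONS.items():
            if x0 <= prev.x < x1 and y0 <= prev.y < y1:
                totals[name] += dt
    return totals

samples = [GazeSample(0.0, 0.5, 0.1), GazeSample(0.4, 0.4, 0.5),
           GazeSample(1.5, 0.85, 0.6), GazeSample(2.0, 0.85, 0.7)]
print(dwell_times(samples))  # most time on the ad: a sellable signal
```

Even this crude aggregation turns involuntary micro-behavior into an inference about interest, which is precisely the concern.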

When this data is paired with questionable pseudo-science to make inferences that some claim reveal people’s beliefs, attitudes, and interests, there is an even bigger problem. Such inferences, driven by emerging machine learning models about a person’s health or “emotions,” can be used in ways that impact the user, even if the predictions are inaccurate. Worse, these methods are applied to the user’s often unconscious or involuntary physical behaviors, making it impossible to control them or to consent freely to the way these tools are implemented.

Eye tracking, for example, can capture often-involuntary actions or reactions to stimuli, such as how often we blink or look at something, and feed them into targeted advertising. This use of your information opens the door to even creepier and more invasive versions of the user categorization already used in targeted ads today. Even worse, users won’t be able to truly consent to the monetization of their own involuntary bodily responses to stimuli.

For AR systems like smart glasses, sensors reach out to the world nearby, detecting, recording, and photographing everyone and everything in the vicinity. Without proper safeguards, users could unknowingly record conversations or videos in unethical or illegal ways. If recordings are automatically uploaded to centralized cloud servers that law enforcement could plunder, it may give the state unprecedented power to snoop on a user's private life. But it shouldn't be this way.

The time to figure out the appropriate safeguards is now, before the collection of biometric, anatomical, and behavioral data and other personal information begins in earnest and goes mainstream. These are key issues for the future of XR.

The data available to XR companies is powerful and sensitive, and it is not adequately protected by existing data protection laws. In the global patchwork of privacy laws, Article 9 of the EU’s General Data Protection Regulation offers strong protections against the collection and processing of biometric data. The law prohibits processing biometrics that uniquely identify a person unless the processing falls under one of its restricted and limited exceptions (for example, if the user gives their informed and freely-given consent). Even this falls short, though, as it only covers personal data resulting from a person’s physical, physiological, or behavioral characteristics (face, iris scans, fingerprints, or voice) that is used to uniquely identify or single out individuals. This means that inferences from granular data, such as those related to eye movement or head inclination, may fall outside this definition of biometric data. That said, if the data allows an inference concerning health or sexual orientation, it should be protected under Article 9.

Our allies in European civil society are aware of these gaps, and they’ve begun the work of fixing them by seeking improvements to the European Union’s draft Artificial Intelligence Act (AI Act). EDRi and Access Now have proposed several amendments to the AI Act. The amendments they’ve submitted include several improvements: challenging the narrow definitions of emotion recognition and biometric categorization; a prohibition on using artificial intelligence for emotion recognition; and a prohibition on discriminatory forms of biometric categorization—categorizing people or groups based on data about their bodies and behaviors.

Access Now and EDRi warned that the definition of emotion recognition under Article 3(34) of the AI Act is too narrow: it is limited to biometric data and fails to encompass physiological data that may not meet the high bar of uniquely identifying a person. That means service providers will be able to argue that this information falls outside the scope of the AI Act’s protections.

Looking Forward

 In 2023, the European Union plans to launch a metaverse regulatory initiative, and the Japanese government has also signaled interest in the topic. In addition to the privacy dimension, the EU will look at competition issues. EC internal market commissioner Thierry Breton wrote (emphasis added):

Private metaverses should develop based on interoperable standards, and no single private player should hold the key to the public square or set its terms and conditions. Innovators and technologies should be allowed to thrive unhindered. [...] We have also learned a lesson from this work: we will not witness a new Wild West or new private monopolies.

EFF agrees interoperability is key for a future of privacy without monopolies and will advocate for this approach as these Metaverse systems are developed. This type of interoperable standard needs to go beyond sharing avatars and hats between games and must give users meaningful control over who they trust with their data, privacy, and safety online—and the practical ability to leave a platform that doesn’t protect them.  

Advocates will need to continue to push for policy and product design which prioritizes robust transparency, modernized statutes, and privacy-by-design engineering. If today's tech industry wants the metaverse to be the essential virtual layer of our daily lives, it must be built today in a way that respects the human rights and autonomy of all people in the future. We can and must have extended reality with extended privacy. The future is tomorrow, so let’s make it a future we would want to live in.

This article is part of our Year in Review series. Read other articles about the fight for digital rights in 2022.

Katitza Rodriguez

EFF’s Threat Lab Sharpens Its Knives: 2022 in Review

3 months ago

EFF’s Threat Lab is dedicated to deep-dive investigations that examine technology-enforced power imbalances in society. In 2022 we’ve sharpened our knives and honed our skills in an effort to bring down the stalkerware industry, taken aim at invasive surveillance by police, raised red flags around the security and privacy of daycare apps, developed new tools and techniques for reversing Android malware, and taken part in coalitions to protect the most vulnerable in our society. Our crack team of technologists and researchers issued FOIA requests, guided policymakers, pushed back against big tech, and dissected hardware and software to achieve these goals.

Here we highlight some of the achievements that made 2022 such an eventful year for Threat Lab.

Combating Surveillance

Our Atlas of Surveillance project surpassed a major milestone, documenting over 10,000 instances of police tech programs across the US. Shining a light on these programs was bittersweet: the transparency also reveals just how expansive and widespread the advanced technologies employed by police departments across the country have become. A collaborative effort between EFF and the University of Nevada, Reno’s Reynolds School of Journalism crowdsourced thousands of distinct mini-research tasks to students to achieve this milestone.

Cell-site simulators (CSSs) are one such technology employed by law enforcement. Sometimes called “Stingrays,” these devices use a small, mobile transceiver to masquerade as a cellphone tower, tricking phones into connecting to it instead of the legitimate tower, allowing location tracking, and potentially even interception of communications, of everyone in a certain area—not just those suspected of a crime. As part of Threat Lab’s efforts to reveal the use of CSSs, we issued dozens of FOIA requests to California police departments in 2018 to uncover the extent of their usage. As a result, EFF learned that San Bernardino County law enforcement officials were improperly sealing search warrant records involving the use of CSSs indefinitely. In October, we asked the Supreme Court of California to review the case, arguing that sealing these records in perpetuity violates the public’s right to access court records and effectively prevents the public from raising important questions regarding the scope and overreach of law enforcement use of invasive technologies.

As part of our work combating creepy surveillance tech, we dissected a GPS tracking device that was surreptitiously installed on the car of one of our supporters. Trying to determine whether it was placed there by an auto dealer or as a stalking tool led us to take the device apart and issue commands to it, learning when it had been installed and what information we could get out of it, and we had a good bit of fun hacking (literally) on it in the process. In the end, a bit of old-fashioned investigative querying in the form of phone calls got us the answer we were looking for: the GPS device was installed by the auto dealer as part of an agreement with an anti-theft company, an arrangement which may have seen GPS devices unknowingly installed in hundreds of thousands of vehicles.

Fighting Stalkerware

This year, we made significant inroads with policymakers and regulators as part of our work in the Coalition Against Stalkerware. In April, the Maryland legislature unanimously passed a law requiring law enforcement officers to be trained on what stalkerware looks like, a direct result of conversations Threat Lab’s very own Director of Cybersecurity, Eva Galperin, had with state officials. The bill was signed into law in May, making Maryland the first state to take on electronic forms of domestic violence and intimate partner abuse. We hope it is only the first of many states to do so. In response to an investigation led by TechCrunch, which revealed significant security vulnerabilities opened up by a string of stalkerware apps, we urged the Federal Trade Commission to take action to protect victims of this abusive industry by shutting stalkerware apps down.

Last year, Apple responded to concerns that stalkers could use its AirTags to track victims by releasing an Android app called Tracker Detect. This year, in response to our advocacy as well as numerous testimonies of unwanted tracking, Apple took new steps to shore up its protections against the practice.

Investigating Apps & Malware

Part of our mandate is to be the security team for those who are underrepresented. To that end, we investigated a number of popular apps which monitor the daily behaviors of toddlers in daycare and report these to parents. We found dangerous security and privacy flaws in the way these apps function, and alerted the app makers to the flaws. Unfortunately, little was done to fix these problems, and in some cases we received no response at all. We raised a red flag to the FTC, asking it to look into the matter and issue regulations regarding the rampant negligence. The letter was subsequently included as part of an open comments period in which the FTC solicited the public for information on industry surveillance, the first stage in the long process of its federal rulemaking to regulate commercial surveillance and lax data security practices.

Threat Lab’s malware analysis team focused its attention on the Android ecosystem this year, investigating a multi-stage class of malware called “tor-hydra” which masquerades as a banking app. The malware uses a number of obfuscation techniques to hide its true functionality: connecting to a command-and-control (C2) server via the Tor network and adding your device to a botnet, controlled by malicious hackers, in order to launch attacks. We also continued our work uncovering Dark Caracal and will have a new report coming out next year; stay tuned.
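To illustrate the C2 channel in general terms, here is a minimal Python sketch of the underlying technique: routing a program’s HTTP traffic through a local Tor client’s SOCKS proxy. This is not tor-hydra’s actual code, just the same widely known proxying trick, which legitimate privacy tools use as well.

```python
# Minimal sketch: routing HTTP traffic through a local Tor client's SOCKS5
# proxy, the same general technique tor-hydra-style malware uses to hide
# its C2 traffic. This is NOT the malware's code. Assumes a Tor client is
# listening on 127.0.0.1:9050 and that requests has SOCKS support
# installed (pip install "requests[socks]").
import requests

TOR_PROXY = "socks5h://127.0.0.1:9050"  # socks5h also resolves DNS via Tor

def fetch_over_tor(url: str) -> str:
    """Fetch a URL with all traffic, including DNS lookups, routed through Tor."""
    proxies = {"http": TOR_PROXY, "https": TOR_PROXY}
    response = requests.get(url, proxies=proxies, timeout=30)
    response.raise_for_status()
    return response.text

if __name__ == "__main__":
    # check.torproject.org reports whether the request arrived over Tor
    print(fetch_over_tor("https://check.torproject.org/api/ip"))
```

From a network observer’s point of view, all that is visible is an encrypted connection to a Tor entry node, which is exactly why malware authors find the technique attractive.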

In addition to investigating instances of Android malware, we also described in detail a technique researchers can use to observe the behavior of apps they are looking into without the need for a sophisticated multi-device lab setup, even when complex real-world interactions (such as unlocking a car door with an app) are being analyzed. We continued building out apkeep, our application for downloading Android app packages (APKs), bringing it to more platforms and supporting more app stores. One of the stores we now support is Huawei’s AppGallery, a popular source of apps in China, and one we feel will be of particular interest to privacy researchers.
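For researchers who script their analysis pipelines, downloads can be driven from Python by shelling out to the apkeep binary. The sketch below is a rough illustration; the flag names and source identifiers are from memory and may differ between versions, so treat them as assumptions and check apkeep --help for the authoritative interface.

```python
# Hypothetical wrapper around the apkeep CLI for batch-downloading APKs.
# The flags (-a for app ID, -d for download source) and the source name
# "apk-pure" are from memory; verify against `apkeep --help`.
import subprocess

def download_apk(app_id: str, out_dir: str = ".", source: str = "apk-pure") -> None:
    """Download one app's APK from the given store into out_dir."""
    subprocess.run(
        ["apkeep", "-a", app_id, "-d", source, out_dir],
        check=True,  # raise CalledProcessError if apkeep exits non-zero
    )

# Example: fetch a well-known open-source app for analysis
download_apk("org.mozilla.firefox")
```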

Advancing Freedom of Information

Finally, Threat Lab worked to support ESPLERP, an organization of sex workers and erotic service providers, in preparing a report, funded by the Rose Foundation, about the technologies used to surveil sex workers in California. We’ve been working with them on their records request strategy: filing records requests across the state, pushing back on recalcitrant police agencies, and interpreting the records they’ve gathered.

Our work focuses on supporting the most vulnerable segments of society with our investigative research, reverse engineering skills, and policy recommendations. As we continue to grow our operations, we remain committed to this goal in the coming year and beyond. We hope you will support us as we continue making these groundbreaking strides for the advancement of privacy and security in an increasingly interconnected world.

This article is part of our Year in Review series. Read other articles about the fight for digital rights in 2022.

Bill Budington

Right to Repair Legislation and Advocacy: 2022 in Review

3 months ago

Your right to repair matters. When you repair a device, you can keep using it, rather than needing to throw it away—creating waste—and purchasing a new one, which uses even more resources. A right to repair creates a market for independent repair shops, encouraging competition between manufacturers and independent businesses, which benefits consumers (like you!). It rewards curiosity and means more people have a greater understanding of the technology in their lives. A right to repair empowers you to make determinations about your own needs and how to best meet them.

The EFF fights for your right to repair, but we’re not the only ones working hard. The successes we have to celebrate from 2022 come from collaborations with many people working on everything from legislation to education.

Thanks to the hard work of legislative advocates, policymakers, and everyday activists, the New York State legislature passed the Digital Fair Repair Act (A7006-B/S.4104-A), proposed by Assemblymember Patricia Fahy and Senator Neil Breslin and supported by the Repair Coalition. This landmark legislation requires manufacturers to sell parts and special tools on “fair and reasonable terms” to users and third-party repair technicians. Manufacturers are also required to provide access to repair information, software, and the ability to apply firmware patches. New York’s bill comes after a narrow success in Colorado for wheelchair users, and a loss in California.

However, as of this writing, New York Governor Kathy Hochul still has not signed the Digital Fair Repair Act into law. We hope she will do the right thing, and we encourage New Yorkers to contact her.

We recognized the work of Kyle Wiens by giving him this year’s EFF Award for Right to Repair Advocacy. In addition to his amazing advocacy work, Wiens has run the website iFixit since 2003, providing a home where users and activists in the right to repair movement share guides on how to repair everything.

Right to Repair advocate Adam Savage, known for his shows MythBusters and Savage Builds, joined the EFF’s Cindy Cohn and Danny O’Brien on our podcast, How to Fix the Internet. In this episode, "Making Hope," they discussed creating a world built on collaboration and creativity.

We celebrated Copyright Week at the EFF with a post from legislative activist Hayley Tsukayama on Right to Repair and how copyright law gets in the way. We also continue to litigate the ongoing EFF case Green v. U.S. Department of Justice, which has the potential to enhance the right to repair by eliminating a law saying you’re not allowed to look at the code in your own devices if there’s an access control technology you’d have to bypass. Our allies detail the lawsuit’s implications for the right to repair in a supporting amicus brief.

Thank you to everyone who signed a letter, went to a meeting, shared a repair guide, talked with a friend, or otherwise helped advance the right to repair this past year! We will continue to work towards a right to repair and look forward to seeing what we accomplish together in 2023.

This article is part of our Year in Review series. Read other articles about the fight for digital rights in 2022.

Related Cases: Green v. U.S. Department of Justice
Molly de Blanc

Lifting the Fog: 2022 in Review

3 months ago

Earlier this year, a cross-team group of EFF technologists, researchers, activists, and lawyers broke the story of Fog Data Science, a secretive data broker that sells cops access to huge quantities of people's precise location data. The data they use is harvested from our phone apps, sent to data brokers, and sold to the highest bidder – including, we learned, to state and local law enforcement. This violates the Fourth Amendment. While federal agencies have been notorious in recent years for their purchase of such data, our reporting seems to be the first showing a company selling it to state and local police. In addition to our several deep-dives into Fog and its product, we collaborated with Associated Press on a major article which led to further reporting from dozens of other press outlets, criticism from members of Congress, and calls by others for the FTC to investigate. 

Perhaps most entertainingly, an Arkansas state prosecutor and former Fog Data Science trainer told the Associated Press that objectors to this location surveillance are part of a “cult of privacy.” We responded with a Slate article titled: “If caring about your digital privacy makes me a cult member, sign me up.”

These revelations were the result of more than a year of work. EFF spent months meticulously picking apart thousands of documents from public records requests, slowly building up Fog's connections to dozens of police agencies and other data brokers. These documents were the product of public records requests to scores of local, state, and federal law enforcement agencies across the country. We also carefully researched and reverse engineered Fog's public-facing product in order to document how we believe it works, and uncovered several worrying and previously undocumented features in their code.

The result of this work has been promising. After we revealed the problem, Rep. Anna Eshoo urged the FTC to investigate Fog and “work to ensure that surveillance advertising becomes a prohibited business practice.” Last month, EFF filed comments with the FTC also urging such an investigation. We explained that Fog “is mass surveillance, often with no judicial oversight, and flies in the face of Fourth Amendment protections against unreasonable search and seizure.”

Although Fog’s sale of location data to state and local law enforcement is new, this shady business practice of commodifying our data is not. Fog is merely repackaging data gathered and sold by other data brokers, who themselves often buy and rebundle the data from other brokers. As we documented earlier this year, this toxic marketplace of people’s phone data has been a goldmine for U.S. federal agencies, such as the IRS, the DHS and its subsidiaries ICE and CBP, the DEA, and the FBI. The data is available for purchase by the general public too: recently, some dubious researchers reportedly bought "10 trillion" geolocation data points from over 500,000 people's phones for the bogus documentary 2000 Mules. Clearly, the problem of location data brokers runs deeper than just Fog Data Science.

There’s a highly effective way to prevent your phone’s data from ending up in the data broker pipeline: disable Ad ID tracking on your phone. You can also urge your legislators to ban police from purchasing phone app location data.

This article is part of our Year in Review series. Read other articles about the fight for digital rights in 2022.

 

Will Greenberg

Fighting Tech-Enabled Abuse: 2022 in Review

3 months ago

No year that includes tech-enabled abuse can be said to be a good year. But 2022 has certainly been an eventful year for the technologies used as instruments of coercive control in domestic abuse situations, ranging from stalkerware to physical trackers.

In February, EFF called for the FTC to investigate a class of stalkerware apps uncovered by TechCrunch journalist and security researcher Zack Whittaker. The network of consumer-grade spyware apps wasn’t just pernicious, it was insecure. Whittaker discovered that the apps shared a security flaw that exposed the private data of approximately 400,000 people. TechCrunch identified the compromised apps, which are practically identical in look and operation, as Copy9, MxSpy, TheTruthSpy, iSpyoo, SecondClone, TheSpyApp, ExactSpy, FoneTracker, and GuestSpy. Not only did TechCrunch provide instructions for how to identify and remove the Android spyware from a device, but they also launched a tool to help Android users know if their device was compromised.

In April, Maryland’s legislature unanimously passed SB 134, a bill that requires law enforcement agencies to learn, as part of their standard training, to recognize the common tactics of electronic surveillance and the laws around such activities. This bill, which was inspired by conversations between Senator Susan Lee’s office and EFF, aims to mitigate the frustration and gaslighting so many survivors of tech-enabled abuse have felt when trying to report their experiences to law enforcement.

In July, Australian police arrested Jacob Wayne John Keen, the creator of the Imminent Monitor stalkerware. Keen allegedly sold the app, designed to spy on Windows computers, to 14,500 people in 128 countries over a period of seven years before the website was shut down. The website specifically advertised features designed to keep the presence of the app secret from the user. Eighty-five warrants were executed in Australia and Belgium, 434 devices were seized (including the app-maker’s custom-built computer), and 13 of the app’s most prolific users were arrested. The investigation involved actions in Colombia, Czechia, the Netherlands, Poland, Spain, Sweden, and the United Kingdom. EFF hopes to see more such actions in the future.

Apple has had a very mixed year, taking important steps to secure devices for high-risk users, including survivors of tech-enabled abuse, while also facing the fallout from the disastrous launch of its physical tracker, the AirTag, whose mitigations against use as a stalking device began as woefully inadequate and have progressed to merely bad. Just in time for Christmas, Apple finds itself the defendant in a class action lawsuit on behalf of people who have been stalked using AirTags, with a filing that draws heavily on EFF’s criticisms of the product.

AirTags are not the only physical tracker that has sparked concerns about stalking. Tile put out a scanning app to allow people concerned about stalking to discover if there is a Tile tracking them. Like Apple’s tracker detection app for Android, launched at the end of 2021, it requires a proactive scan to search for unwanted tracking devices. EFF continues to advocate for a more comprehensive approach to anti-stalking mitigations for physical trackers, calling on all physical tracker manufacturers to agree on and publish an industry standard that would allow developers to incorporate physical tracking detection into both mobile apps and operating systems.
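For a sense of what a proactive scan involves under the hood, here is a minimal Python sketch using the bleak library to list nearby Bluetooth Low Energy advertisers. Filtering on Apple’s Bluetooth company identifier (0x004C) is our simplification for illustration; it is not Apple’s or Tile’s actual detection logic, which also has to track identifier rotation and persistence over time.

```python
# Minimal sketch of a proactive BLE scan, the kind of scan that
# tracker-detection apps perform. Uses bleak (pip install bleak).
# The filter on Apple's Bluetooth company ID (0x004C) is a
# simplification; real detection apps use far more involved logic.
import asyncio
from bleak import BleakScanner

APPLE_COMPANY_ID = 0x004C  # Bluetooth SIG company identifier for Apple

async def scan(seconds: float = 10.0) -> None:
    # return_adv=True yields (device, advertisement_data) pairs
    found = await BleakScanner.discover(timeout=seconds, return_adv=True)
    for device, adv in found.values():
        if APPLE_COMPANY_ID in adv.manufacturer_data:
            payload = adv.manufacturer_data[APPLE_COMPANY_ID].hex()
            print(f"{device.address}  RSSI={adv.rssi}  mfr_data={payload}")

if __name__ == "__main__":
    asyncio.run(scan())
```

The hard part, and the reason EFF is pushing for an industry standard, is not the scan itself but deciding which of those advertisers is an unwanted tracker following you rather than a stranger’s earbuds.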

Last of all, EFF has finished off the year with a victory in the fight against tech-enabled abuse. The Safe Connections Act, a common-sense bill that makes it easier for survivors of domestic violence to separate their phone line from a family plan while keeping their own phone number and requires the FCC to create rules to protect their privacy, has passed into law.

This article is part of our Year in Review series. Read other articles about the fight for digital rights in 2022.

Eva Galperin

Every Supporter Counts in EFF's Year-End Challenge

3 months ago

Social movements, businesses, and your own household depend on technology to succeed. That’s become a fact of modern life and it's why EFF has defended your online privacy, security, and free speech for over 30 years. Will you donate so EFF can keep on fighting?

Give Today

Every Supporter Counts!

Support EFF by December 31 and you’ll help unlock bonus grants! Every donation—no matter the size—helps EFF secure a series of seven challenge grants set by EFF’s board. These grants grow from $1,000 up to $15,000 as the number of online rights supporters grows. Check out the counter.

You can help out by joining EFF and sharing the membership drive with friends! Here’s some language you can use:

Do some good in the world and get an extra boost. Donate to support internet privacy & free speech by December 31, and you’ll help EFF unlock year-end challenge grants. https://eff.org/YEC


Your digital rights are not a game. We all rely on the web more each day, and we need to get it right. Just this month, EFF members helped make real progress on several fronts.

That’s amazing progress! And EFF’s team of lawyers, activists, and technologists keep up the fight every day. The challenges of the digital world aren’t easy. But you can ensure tech users always have a fierce advocate in EFF.

support internet freedom

Donate today & unlock special grants before 2023

Aaron Jue

Daycare and Early Childhood Education Apps: 2022 in Review

3 months ago

Last year, several parents at EFF enrolled kids into daycare and were instantly told to download an application for managing their children’s care. These applications frequently include notifications of feedings, diaper changes, pictures, activities, and who picked up or dropped off the child—potentially useful features for easing the separation anxiety of newly enrolled children and their anxious parents. But working at a privacy-oriented organization, as we do, we had concerns about the security of this data.

Normally, our student privacy work focuses on those in elementary or middle school at the youngest. But EFF goes where the security risks are, so we decided to dig into these concerns further.

First, our technologists investigated the apps to identify privacy and security flaws. Next, our legal experts identified gaps in the law and highlighted the need for regulatory action in a letter to the Federal Trade Commission (“FTC”). And finally, our advocacy team reiterated our concerns in comments submitted to the FTC, in response to its request for public input on commercial surveillance. 

The Investigation  

EFF’s technologists, led by Director of Engineering Alexis Hancock, investigated several popular daycare apps and quickly uncovered dangerous security and privacy flaws in the way these apps function.

Lackluster security was rampant: common practices included public access to children’s photos, weak password policies, and inadequate or even absent encryption.

We also discovered that we weren’t alone in our concern. Of the 42 daycare apps that privacy experts researched, 13 had privacy policies that did not specify what data the apps collect. Of the policies that did describe data collection practices, most admitted to sharing sensitive information (such as the average number of diaper changes per day) with third parties. Only 10 of the 42 apps stated in their privacy policies that they did not share data with third parties—but seven of those 10 actually were doing so anyway.

We alerted these app makers to the flaws. But unfortunately, little was done to fix the problems, and in many cases, there was no response at all.

Letter to FTC 

Given the lack of response from the app developers themselves, we decided to raise a red flag to the FTC, asking them to look into the matter and address the rampant negligence.

The letter describes our troubling findings regarding the sensitivity of the data collected by these apps and the lack of sufficient privacy and security protections in place.

It also points out that current laws don’t address the problem. The Children’s Online Privacy Protection Act only applies to operators of online services “directed to” children under 13; early education and daycare apps, however, are used solely by adults. The Family Educational Rights and Privacy Act also falls short: It restricts schools from disclosing students’ “education records” to certain third parties without parental consent, but does not typically regulate the actions of third parties who may receive that data, such as daycare apps.

“Since parents do not have the tools or proper information to currently assess the privacy and security of their children’s data in daycare and early education apps, the Federal Trade Commission should review the current gaps in the law and assess potential paths to strengthen protections for young children’s data, or investigate other means to improve protections for children’s data in this context,” the letter concludes.

FTC Comments

The letter was subsequently included as part of an open comments period where the FTC solicited the public for information on industry surveillance, the first stage in the long process of its federal rulemaking to regulate commercial surveillance and lax data security practices. 

Our comments explain that there are insufficient safeguards to secure the data collected by daycare apps from theft or misuse. It is likely only a matter of time before these companies leak data or suffer a breach, and a single compromise of an application’s servers could affect hundreds of daycares and preschools.

The comments also point out that the problems with these apps—privacy policy defects and lackluster security practices—fall squarely under the “unfair or deceptive acts or practices” clause built into the FTC Act. It is deceptive to mislead parents and daycares into thinking these apps collect and share less information than they do. And it is unfair practice to expose young children to the risk of their data being misused or breached. 

We will continue to investigate this ecosystem in the coming year, and to follow up on possible regulations to protect this sensitive data. Daycare apps collect vast quantities of detailed information about young children and infants, and if this data were breached or given to a third party, it would form a very accurate profile of a child’s development. No matter the user’s age, private data should be secure, and right now, these apps are not.

This article is part of our Year in Review series. Read other articles about the fight for digital rights in 2022.

Jennifer Pinsof

A Roller Coaster for Decentralization: 2022 in Review

3 months ago

This year has been a roller coaster for the movement to decentralize the services and tools that we rely on every day. Decentralizing internet services may help remedy concerns ranging from traditional big business competition to online privacy.

In order to devote closer attention to these issues which impact user autonomy and competition online, this year EFF hired a Senior Fellow of Decentralization, Ross Schulman. Among other projects, Ross spearheaded EFF’s involvement in DWeb Camp and Unfinished Live. In the wider world, the cryptocurrency markets have continued their dramatic volatility in 2022, which, combined with ongoing allegations of large scale fraud, has led legislators both at the federal and state levels to explore regulations in the space.

In August the US Treasury Department’s Office of Foreign Assets Control (OFAC) announced that it was placing a cryptocurrency privacy tool called Tornado Cash on one of its sanctions lists due to its use by North Korea, effectively prohibiting its use by people in the US. Many cryptocurrencies pose unique problems when it comes to financial privacy, since all transactions are publicly recorded in the global ledger. Tornado Cash was created to enable private and anonymous financial transactions on the Ethereum blockchain. Many people may want or need that privacy to operate, for perfectly lawful reasons, such as when paying for medical care, supporting LGBT groups in repressive regimes, or giving to religious organizations. But as with many technologies, it can also be used for unlawful purposes.
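To make that transparency concrete, here is a short Python sketch using web3.py (v6 API names): anyone can query any Ethereum address’s balance and transaction count from a public node, no permission required. The RPC URL is a deliberate placeholder; substitute any public Ethereum JSON-RPC endpoint you trust.

```python
# Sketch: on a public blockchain, anyone can inspect any address.
# Uses web3.py v6 (pip install web3). The RPC URL is a placeholder;
# substitute any public Ethereum JSON-RPC endpoint.
from web3 import Web3

w3 = Web3(Web3.HTTPProvider("https://example-rpc.invalid"))  # placeholder

# The zero address, used here purely as a stand-in target
address = Web3.to_checksum_address("0x0000000000000000000000000000000000000000")

balance_wei = w3.eth.get_balance(address)         # current balance: public
tx_count = w3.eth.get_transaction_count(address)  # outgoing tx count: public

print(f"balance: {Web3.from_wei(balance_wei, 'ether')} ETH, sent txs: {tx_count}")
```

Because every transfer to and from an address is permanently visible like this, mixers such as Tornado Cash exist to break the link between sender and recipient.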

The OFAC decision was ostensibly intended to address those unlawful uses, but it was far too vague. Publishing these tools implicates core First Amendment rights because, as EFF has made clear, code is speech. Because OFAC’s order was ambiguous as to whether it was attempting to control the actual code that ran Tornado Cash’s smart contracts, GitHub removed the public code repository of Tornado Cash and disabled the accounts of its primary developers. Courts have held since the late 1990s that computer code is a form of speech protected by the First Amendment, and OFAC’s actions could have impacted that freedom. We wrote publicly about that fact and, thankfully, OFAC clarified shortly afterward that merely hosting or discussing Tornado Cash’s underlying code was not included in the new sanctions, removing that concern.

OFAC’s actions raise a number of other questions, however, from fundamental privacy rights online to questions of jurisdiction when it comes to autonomous code running on blockchains. We are following these issues closely.

After a few years’ hiatus due to COVID, 2022 also saw the Internet Archive return to hosting the Decentralized Web Camp. This year’s DWeb Camp took place in the redwoods outside Mendocino, California at the end of August. It came just a couple of weeks after OFAC’s Tornado Cash announcement, so law and policy were front of mind for many of the participants, and EFF took part in conversations about possible future outcomes, the biggest stumbling blocks with legislators and regulators, and how to engage with and advocate to governments effectively.

Finally, in the last couple of months of 2022 we have seen a boom of interest in the possibilities of decentralization, sparked by the actions of what might be the unlikeliest of sources: Elon Musk. Rather than take our advice to improve the platform, he slashed a vital verification system and responsible content moderation. As a result, people began investigating alternatives in large numbers. Many concerned users turned to the “fediverse,” a collection of interconnected social networks driven by the ActivityPub protocol, the most prominent of which is called Mastodon.

The fediverse is a great example of a decentralized model for online services. It follows a federated network style (hence the “fediverse” name), much like email, and it enables a number of great benefits compared to centralized alternatives. But it also raises a host of new challenges, as well as new facets of old ones. We at EFF have put out a few pieces explaining the fediverse, its potential, and how to find a home there and get settled. We’ll be continuing this series and working to help realize the promise of a fully interoperable approach to social media.

Overall, 2022 was an exciting time for the broader decentralization movement, but it was just the beginning. With a number of opportunities presenting themselves to lessen the overwhelming control that a few huge internet platforms exert over online life, we’re looking forward to going into 2023 ready to promote a decentralized web that offers more autonomy and democratic control to the people who rely on it.

This article is part of our Year in Review series. Read other articles about the fight for digital rights in 2022.

Ross Schulman

2022 Year in Review

3 months ago

EFF believes we can create a future where our rights not only follow us online, but are enhanced by new technology. The activists, lawyers, and technologists on EFF’s staff fight for that better future and against the kinds of dystopias best left to speculative fiction. In courts, in legislatures, and in company offices we make sure that the needs of the users are heard. Sometimes we send letters. Sometimes, we send planes.

We’ve pushed hard this year and won many hard-fought battles. And in the battles we have not won, we continue on, because it’s important to stand up for what’s right, even if the road is long and rocky. 

In 2022, we looked into the apps used by daycare centers that collect and share information about the children in their care with their parents. It turned out that not only were the apps dangerously insecure, but the companies that make them were uninterested in making them safer. We responded by giving parents information they can use to bring their own pressure, including basic recommendations for these applications, like implementing two-factor authentication, to ensure that this sensitive information about our kids stays in the right hands.

We won big in security this year. After years of pressure, Apple has finally implemented one of our longstanding demands: that cloud backups be encrypted. Apple also announced the final death of its dangerous plan to scan your phone. 

We also continued our fight against police surveillance. Williams v. San Francisco, our lawsuit with the ACLU over the San Francisco Police Department’s illegal access to surveillance cameras during the Black Lives Matter protests, continues on appeal. Since the lawsuit was filed, the San Francisco Police Department has repeatedly tried to change the law to give police unwarranted access to third-party cameras. Mayor London Breed introduced and then withdrew a proposal to give the police even more power. The San Francisco Board of Supervisors eventually passed a similar change to the law—but we secured a 15-month sunset. Rest assured, we will be fighting this mass surveillance, which sweeps in protests and other First Amendment-protected activity, when that sunset date approaches.

The camera setback was followed by a dramatic turnaround win, again in San Francisco. In one week, the Board of Supervisors reversed its position on giving the SFPD the ability to deploy killer robots. (The SFPD would like you to know that they object to our “killer robots” framing. That’s because the robots do not act on their own or have guns. Instead, they have bombs and explode. We stand by our framing.) Make no mistake: this historic reversal would not have happened without pushback from activists. And of course our thanks go to the many regular residents of the Bay Area who showed up and made good trouble.

Through our representation of the Internet Archive, we also stood up against the four largest publishers, who are looking to control how libraries serve their patrons. These publishers want to lock libraries into expensive and restrictive ebook licenses, while claiming, without evidence, that the Internet Archive’s Controlled Digital Lending (CDL) program is a threat to their business. Libraries give us all knowledge, and EFF stands with them.

In the European Union, we lobbied hard for a Digital Markets Act that recognized the value of interoperability and meaningfully restrained the power of “gatekeeper” platforms.

Finally, sustained pressure from EFF and its allies—and you—kept Congress from mandating filters or link taxes, protecting free expression online. And Congress did some good this year, too, passing the Safe Connections Act, a bill that EFF pushed to make it easier for survivors of domestic violence to keep their phone number while leaving a family plan. This simple protection can be essential to stop abusers from using access to their victims’ cellphone plans to track and harass.

It's impossible to cover everything we’ve done this year in a blog post that doesn’t take the whole new year to read. But rest assured, we did a lot and none of it would be possible without our members, supporters, and all of you who stood up and took action to build a better future. 

EFF has an annual tradition of writing several blog posts on what we’ve accomplished this year, what we’ve learned, and where we have more to do. We will update this page with new stories about digital rights in 2022 every day between now and the new year.

Cindy Cohn