“Worst in Show Awards” Livestreams Friday: EFF’s Cindy Cohn and Cory Doctorow Will Unveil Most Privacy-Defective, Least Secure Consumer Tech Products at CES

"Cool" Products That Collect Your Data, Lock Out Users

Las Vegas—On Friday, January 7, at 9:30 am PT, Electronic Frontier Foundation (EFF) Executive Director Cindy Cohn and EFF Special Advisor and sci-fi author Cory Doctorow will present the creepiest, most privacy-invasive, and least secure consumer tech devices debuting at this year’s Consumer Electronics Show (CES).

EFF, in partnership with iFixit, USPIRG, and Repair.Org, will unveil their 2022 Worst in Show picks, an annual award given at CES, the massive trade show in Las Vegas where vendors demonstrate the coolest in purportedly life-changing tech gadgets and devices (think movie-streaming sunglasses and color-changing cars).

Not all these products will change our lives for the better. A panel of judges will present the least secure, safe, repairable, and eco-friendly gadgets from the show. Doctorow will emcee the event and will join Cohn and guest judges Nathan Proctor (USPIRG), Gay Gordon-Byrne (Repair.org), Paul Roberts (securepairs), and Kyle Wiens (iFixit) to discuss their selections.

To watch the presentation live, before it goes on YouTube, fill out this form to request access. You’ll be sent a Zoom link to join the event (no-video/audio-only is fine).

Who: EFF’s Cindy Cohn and Cory Doctorow

What: Annual CES Worst in Show Awards

When: Friday, January 7, 2022, 9:30 am PT / 12:30 pm ET

Form to request a Zoom link:
https://docs.google.com/forms/d/e/1FAIpQLSc_EAcNIZl-AzAU_yAu2jF-c21w1fhS_rKN7ACZb_WtaZd66Q/viewform

Check out last year’s winners:
https://www.repair.org/worstinshow

For more on Right to Repair:
https://www.eff.org/issues/right-to-repair

 

Contact: Karen Gullo, Analyst, Media Relations Specialist, press@eff.org; press@ifixit.com
Karen Gullo

How are Police Using Drones?


Across the country, police departments are using the myriad means and resources at their disposal to stock up on drones. According to the most recent tally on the Atlas of Surveillance (a project of EFF and the University of Nevada), at least 1,172 police departments nationwide are using drones. And over time, we can expect more law enforcement agencies to deploy them, as floods of COVID relief money, civil asset forfeiture proceeds, federal grants, and military surplus transfers enable more departments to acquire these flying spies.

But how are police departments using them?

A new law in Minnesota mandates the yearly release of information related to police use of drones, giving us a partial window into how law enforcement uses them on a daily basis. The 2021 report released by the Minnesota Bureau of Criminal Apprehension documents drone use in the state during 2020.

According to the report, 93 law enforcement agencies from across the state deployed drones 1,171 times in 2020—with a cumulative price tag of almost $1 million. The report shows that the vast majority of these deployments were not for the public-safety disasters that so many departments invoke to justify acquiring drones. Rather, almost half (506) were just for the purpose of “training officers.” Other uses included information collection based on reasonable suspicion of unspecified crimes (185), requests from other government agencies unrelated to law enforcement (41), road crash investigation (39), and preparation for and monitoring of public events (6 and 12, respectively). There were zero deployments to counter the risk of terrorism. Police deployed drones 352 times in the aftermath of an “emergency” and 27 times for “disaster” response.

This data isn’t terribly surprising. After all, we’ve spent years seeing police drones being deployed in more and more mundane policing situations and in punitive ways.

After the New York City Police Department accused one racial justice activist, Derrick Ingram, of injuring an officer’s ears by speaking too loudly through his megaphone at a protest, police flew drones by his apartment window—a clear act of intimidation. The government also flew surveillance drones over multiple protests against police racism and violence during the summer of 2020. When police fly drones over a crowd of protestors, they chill free speech and political expression through fear of reprisal and retribution from police. Police could easily apply face surveillance technology to footage collected by a surveillance drone that passed over a crowd, creating a preliminary list of everyone who attended that day’s protest.

As we argued back in May 2020, drones don’t disappear once the initial justification for purchasing them no longer seems applicable. Police will invent ways to use their invasive toys, which means drones find their way into situations where they are not needed, including everyday policing and the surveillance of First Amendment-protected activities. In the case of Minnesota’s drone deployments, police can try to hide behind their use of drones as a glorified training tool, but the potential for their invasive use will always hang (literally) over the heads of state residents.

Matthew Guariglia

EFF Condemns the Unjust Conviction and Sentencing of Activist and Friend Alaa Abd El Fattah


EFF is deeply saddened and angered by the news that our friend, Egyptian blogger, coder, and free speech activist Alaa Abd El Fattah, long a target of oppression by Egypt's successive authoritarian regimes, was sentenced to five years in prison by an emergency state security court just before the holidays.

According to media reports and social media posts of family members, Fattah, human rights lawyer Mohamed el-Baqer, and blogger Mohamed 'Oxygen' Ibrahim were convicted on December 20 of  "spreading false news undermining national security" by the court, which has extraordinary powers under Egypt's state of emergency. El-Baqer and Ibrahim received four-year sentences. 

A trial on the charges held in November was a travesty, with defense lawyers denied access to case files or a chance to present arguments. At least 48 human rights defenders, activists, and opposition politicians who had been held in pre-trial detention for months or even years were referred to the emergency courts for trial just before Egyptian President Abdel Fattah El Sisi lifted the state of emergency in October, Human Rights Watch reported.

The profoundly unjust conviction and years-long targeting of Fattah and other civil and human rights activists is a testament to the lengths to which the Egyptian government will go, through harassment, physical violence, arrest, and imprisonment, to attack and shut down those speaking out for free speech and expression and sharing information. In the years since the revolution, journalists, bloggers, activists, and peaceful protestors have been arrested and charged under draconian press regulations and anti-cybercrime laws used to suppress dissent and silence those criticizing the government.

A free speech advocate and software developer, Fattah, who turned 40 on November 18, has repeatedly been targeted and jailed for working to ensure Egyptians and others in the Middle East and North Africa have a voice, and privacy, online. He has been detained under every Egyptian head of state in his lifetime, and has been held at a maximum-security prison in Tora, 12 miles south of Cairo, since his arrest in 2019.

It’s clear that Egypt has used the emergency courts as another tool of oppression to punish Fattah and other activists and government critics. We condemn the government’s actions and call for Fattah’s conviction to be set aside and his immediate release. We stand in solidarity with #SaveAlaa, and Fattah’s family and network of supporters. Fattah has never stopped fighting for free speech, and the idea that through struggle and debate, change is possible. In his own words (from a collection of Fattah’s prison writings, interviews, and articles, entitled “You Have Not Yet Been Defeated,” order here or here):

I’m in prison because the regime wants to make an example of us. So let us be an example, but of our own choosing. The war on meaning is not yet over in the rest of the world. Let us be an example, not a warning. Let’s communicate with the world again, not to send distress signals nor to cry over ruins or spilled milk, but to draw lessons, summarize experiences, and deepen observations, may it help those struggling in the post-truth era.

…every step of debate and struggle in society is a chance. A chance to understand, a chance to network, a chance to dream, a chance to plan. Even if things appear simple and indisputable, and we aligned – early on – with one side of a struggle, or abstained early from it altogether, seizing such opportunities to pursue and produce meaning remains a necessity. Without it we will never get past defeat.

Fattah’s lawyer said in September that Fattah was contemplating suicide because of the conditions under which he is being held. He has been denied due process, with the court refusing to give his lawyers access to case files or evidence against him, and jailed without access to books or newspapers, exercise time, or time out of the cell and—since COVID-19 restrictions came into play—with only one visit, for twenty minutes, once a month.

Laila Soueif, a mathematics professor and Fattah’s mother, wrote in a New York Times op-ed just days before his sentencing that her son’s crime “is that, like millions of young people in Egypt and far beyond, he believed another world was possible. And he dared to try to make it happen.” He is charged with spreading false news, she said, “for retweeting a tweet about a prisoner who died after being tortured, in the same prison where Alaa is now held.” 

Fattah himself addressed the court at trial: “The truth is, in order for me to understand this, I must understand why I am standing here,” he said, according to an English translation of a Facebook post of his statement. “My mind does not accept that I am standing here for the sake of sharing.”

We urge everyone to order Fattah’s book and send a message to the Egyptian government and all authoritarian regimes that his fight for human rights, and our support for this courageous activist, will never be defeated.

Karen Gullo

Cross-Border Access to User Data by Law Enforcement: 2021 Year in Review


Law enforcement around the world is apparently getting its holiday wish list, thanks to the Council of Europe’s adoption of a flawed new protocol to the Budapest Convention, a treaty governing procedures for accessing digital evidence across borders in criminal investigations. The Second Additional Protocol (“the Protocol”) to the Budapest Convention, which will reshape how police in one country access data from internet companies based in another country, was heavily influenced by law enforcement and mandates new intrusive police powers without adequate protections for privacy and other fundamental rights.

It was approved on November 17, 2021—a major disappointment that can endanger technology users, journalists, activists, and vulnerable populations in countries with flimsy privacy protections and weaken everyone's right to privacy and free expression across the globe. Following the decision by the CoE’s Committee of Ministers, the Protocol will open for signature to countries that have ratified the Budapest Convention (currently 66 countries) around May 2022.

It’s been a long fight and a very busy year. EFF, along with CIPPIC, European Digital Rights (EDRi), and other allies, fought to let the CoE and the world know that the Protocol was being pushed through without adequate human rights protections. We sounded warnings in February about the problem and noted that draft meetings to finalize the text were held in closed session, excluding civil society and even privacy regulators. After the draft protocol was approved in May by the CoE’s Cybercrime Committee, EFF and 40 organizations urged the Committee of Ministers, which also reviews the draft, to allow more time for suggestions and recommendations so that human rights are adequately protected in the protocol.

In August, we submitted 20 solid, comprehensive recommendations to strengthen the Protocol, including requiring law enforcement to obtain independent judicial authorization as a condition for cross-border requests for user data, prohibiting police investigative teams from bypassing privacy safeguards in secret data transfer deals, and deleting provisions mandating that internet providers directly cooperate with foreign law enforcement orders for user data, even where local laws require independent judicial authorization for such disclosures. We then defended our position at a virtual hearing before the Parliamentary Assembly of the Council of Europe (PACE), which suggested amendments to the Protocol text.

Sadly, PACE did not take all of our concerns to heart. While some of our suggestions were acted on, the core of our concerns about weak privacy standards went unaddressed. PACE’s report and opinion on the matter responds to our position by noting a “difficult dilemma” about the goal of international legal cooperation given significantly inconsistent laws and safeguards in countries that will sign on to the treaty. PACE  fears that “higher standards [could] jeopardize” the goal of effectively fighting cybercrime and concludes that it would be unworkable to make privacy-protective rules stronger. Basically, PACE is willing to sacrifice human rights and privacy to get more countries to sign on to their treaty.

This position is unfortunate, since many parts of the Protocol are a law enforcement wish list—not surprising since it was mainly written by prosecutors and law enforcement officials. Meanwhile, gaps in human rights protections under some participating countries’ laws are deep. As EFF told PACE in testimony at its virtual hearing, detailed international law enforcement powers should come with robust legal safeguards for privacy and data protection. “The Protocol openly avoids imposing strong harmonized safeguards in an active attempt to entice states with weaker human rights records to sign on,” EFF stated. “The result is a net dilution of privacy and human rights on a global scale. But the right to privacy is a universal right.”

PACE suggested a few privacy-protecting changes to the Committee of Ministers—some of them based on our suggestions—but the Committee did not take these into account. For example, PACE agreed that the Protocol ought to incorporate new references to proportionality as a requirement in privacy and data protection safeguards (Articles 13 and 14). It also said that “immunities of certain professions, such as lawyers, doctors, journalists, religious ministers or parliamentarians” should be explicitly respected, and that there ought to be public statistics about how the powers created by the Protocol were used and how many people were affected.

Other civil society concerns were left unaddressed; among several examples, PACE did not propose changes to a provision that prohibits states from maintaining adequate standards for access to biometric data. The Committee of Ministers then tied up a holiday gift to law enforcement by adopting the Protocol as-is, without any of the improvements that PACE suggested. As a result, applying human rights safeguards will be up to the broad range of individual countries that will now sign onto the treaty in the near future.

Further Fights on The Horizon For 2022

With the Protocol’s adoption, there will now be debates in national parliaments across the world about its ratification and about what standards countries adopt as they implement it. Countries will have an opportunity to declare reservations when acceding to the treaty. That means numerous chances at the domestic level to influence how governments act on the Protocol throughout 2022. People—and national data protection authorities—in countries with strong protections for personal information should demand that those safeguards not be circumvented by implementation of the Protocol.

This is notably the case for European Union countries. Despite strong criticism of the Protocol by the European Data Protection Board, which represents all 27 national data protection authorities in the EU, the European Commission advised Member States to join the Protocol with as few reservations as possible. Latin American countries should also be cautious and aware of their particular challenges.

Law enforcement pushed for quick adoption of the Protocol, but that push should not be allowed to override current legal safeguards or cut short national debates over adequate minimum standards. Data protection and privacy advocates around the world should be ready for the fight.

CoE’s Secretary-General welcomed the Protocol’s adoption “in the context of a free and open internet where restrictions apply only as a means to tackle crime”—an optimistic view, to be sure, given the recent spate of intense internet crackdowns by governments, including some Budapest Convention signatories.

Part of the impetus for rushing the adoption of the Protocol in the first place was to forestall efforts to create a more intrusive framework for cross-border policing. Specifically, a new international cybercrime treaty, first proposed by Russia, is gaining support at the United Nations. The UN cybercrime treaty would address many of the same investigative powers as the Protocol and the Budapest Convention in ways that could be even more concerning for human rights. (As background, Russia has been promoting its cybercrime treaty for at least a decade.) Unfortunately, the adoption of the Protocol has not staved off those efforts. Not only are these efforts actively moving forward, but the Protocol has now created a new baseline of privacy-invasive police powers that the UN treaty can expand upon. Negotiations on the UN treaty will begin in January.

EFF and its civil society allies are already advocating for a human rights-based approach to drafting the proposed UN treaty, and pushing for a more active role in the UN negotiations than was afforded by the CoE. Our focus in the coming year will be on working with our allies across the world to ensure that any new data-access rules incorporate clear and robust human rights safeguards.

This article is part of our Year in Review series. Read other articles about the fight for digital rights in 2021.

Katitza Rodriguez

Fighting For A More Open, Balanced Patent System: 2021 in Review


At EFF, we’ve always stood up for the freedom to tinker and innovate. Unfortunately, our patent system doesn’t promote those freedoms. In some areas, like software, it’s causing much more harm than good. And the system is rife with patent trolls: companies that are focused on licensing and litigating patents, instead of making things. In 2021, the majority of all patent lawsuits were filed by these trolls. In fact, patent trolls have filed the majority of patent lawsuits for many years now.

But there’s reason for hope. Patent trolls are finally being recognized as the problem they are, and both courts and Congress seem to be moving away from simplistic misconceptions, like the belief that handing out more patents automatically creates more innovation.

This year, EFF fought hard for increased transparency in the patent system that will allow us to call out the worst actors, and ultimately get a more balanced patent system. We also worked to defend and strengthen patent review systems that allow the worst patents to be kicked out of the system more efficiently. 

Open Records in Courts, and at the Patent Office

Patent cases in particular suffer from a problem of overzealous secrecy. In 2019, EFF intervened in a court case called Uniloc v. Apple to defend the public’s right to know the details of what’s going on in patent cases. This case was an egregious one, in which a patent troll that had sued hundreds of companies was sealing up court records showing whether it even had the right to sue at all. 

Turns out, Uniloc didn’t have the legal right, known as standing, to sue over this patent. By intervening in the case, EFF was able to get the whole story showing that Uniloc did not have the ability to litigate the case, and vindicate the public’s right to access court records.  

Although EFF won the right for the public to read nearly all of the court records in this case, Uniloc has continued to argue for keeping a small but critical portion of the evidence hidden—the documents showing how much the companies that took patent licenses paid Uniloc. These license fees were generally paid due to litigation, or the threat of litigation. The sums were critical to whether Uniloc had a right to sue at all, as the court’s ruling dismissing Uniloc’s suit hinged on the fact that Uniloc had not made enough licensing revenue to have the right to bring the patent infringement claims.

We won a powerful decision in February that ordered Uniloc to disclose all of the remaining information at issue, including the licensing information that was central to the district court’s dismissal of the patent suit. Uniloc appealed again, and in December we argued before the U.S. Court of Appeals for the Federal Circuit that the public had a right to access the records. We’ll continue to defend the public’s right to open courts in patent litigation.

The Uniloc case isn’t the only place where we’re fighting for the public’s right to a more open patent system. We’re also continuing to push for real accountability and openness in Congress. 

Very often, victims of patent troll lawsuits don’t even know the identities of the people who sued them and stand to profit from the lawsuit. EFF is supporting a new bill in Congress that would remedy this unacceptable situation. The bill, called the “Pride in Patent Ownership Act” (S. 2774), would require patent owners to record their ownership at the U.S. Patent and Trademark Office (USPTO). The bill suffers from a very weak enforcement mechanism, in that the penalties for noncompliance are much too light. Still, we’re glad to see the issue of bringing more transparency to the patent system is getting some public attention. 

Fighting for Strong Defenses Against Bad Patents

The USPTO grants hundreds of thousands of patents each year, and examiners don’t have enough time to get it right. That’s why it’s critical that we have a robust patent review system, which gives people and companies threatened over patents a chance to get a patent reviewed by professionals—without spending the millions of dollars that a jury trial can cost. 

The best system for this so far is inter partes review, or IPR, a system that Congress set up 10 years ago to weed out some of the worst patents. IPR isn’t perfect, but it’s thrown out thousands of bad patents over the years and is a big improvement over the previous review systems that were used by the patent office. 

That’s why we’re supporting the “Restore America Invents Act,” (S. 2891), which was introduced in September and closes some big loopholes that certain patent owners have used to avoid IPR challenges. While other reforms are needed, the Restore AIA bill takes some important steps that will make clear a strong IPR system is here to stay. 

We also fought off an attempt to overthrow the IPR system altogether. Unsurprisingly, patent owners have tried repeatedly to convince the Supreme Court that post-grant challenges such as IPR are unconstitutional. This year, they failed again, when the Supreme Court declined to throw out the IPR system in U.S. v. Arthrex. As EFF explained in our brief for that case, filed together with Engine Advocacy, the IPR system has driven down the number of patent infringement lawsuits that clog federal courts, raise prices, and smother innovation. 

Speaking Up for Users at the Patent Office 

Finally, at two different times this year, EFF filed comments with the U.S. Patent and Trademark Office expressing our opposition to the agency’s continued efforts to increase the number of patent monopolies that are created at the public’s expense. 

First, we spoke out against proposed regulations that would have opened the floodgates to new and unnecessary types of design patents on computer-generated images. Design patents on the whole are a terrible deal for the public: they give rights holders the power to limit competition, like utility patents, but in return the patent owner provides almost nothing to the public realm. 

Later in the year, we also spoke up about a planned USPTO study of patent eligibility that looks to be rigged in favor of patent owners from the get-go. The “study” is a list of loaded questions proposed by U.S. senators who have made it clear they want to revoke important legal precedents, including Alice v. CLS Bank, the landmark decision that bars so many abstract “do-it-on-a-computer” style patents. 

In 2020, the great majority of software-related appeals where patent eligibility was at issue ended up with the patents being found invalid. That’s happening because of the Alice precedent, and we won’t let that progress get rolled back. 

This article is part of our Year in Review series. Read other articles about the fight for digital rights in 2021.

Joe Mullin

Fighting For You From Coast to Coast: 2021 In Review


EFF makes its presence known in statehouses across the country, advocating for strong privacy laws and broadband access, and working to protect and advance your digital rights. The pandemic changed a lot about how state legislatures operated in 2021, but one thing remained the same: EFF stepped up to fight for you from coast to coast.

Golden Opportunities in the Golden State

We helped win a huge victory for all Californians this year, finally securing a historic $6 billion investment in broadband infrastructure for the state of California. Building on the community support we began organizing in 2020 for investment to close the digital divide, we were able to help bring those efforts across the finish line.

EFF has vocally supported efforts to expand and improve broadband infrastructure to bring access to 21st-century broadband technology to every community. For years, internet service providers have carved up the state, neglecting low-income and rural communities. It's become abundantly clear that the market alone will not close the digital divide; that's why we need policy. The struggles many people had while learning and working remotely in the pandemic made it clearer than ever that California needed to change the status quo.

California’s new broadband program approaches the problem on multiple fronts. It empowers local public entities, local private actors, and the state government itself to be the source of the solution. Through a combination of new construction, low-interest loans, and grants, this money will allow communities to have more input on where and how networks are built.

This victory came from a combination of persistent statewide activism from all corners, political leadership by people such as Senator Lena Gonzalez, investment funding from the American Rescue Plan passed by Congress, and a multi-billion-dollar broadband plan included in Governor Newsom’s budget.

In addition to our broadband work, we also collaborated with other civil liberties groups in California on a couple of bills to improve privacy around genetic data. S.B. 41, authored by Sen. Tom Umberg, adds privacy requirements for direct-to-consumer (DTC) genetic testing companies such as Ancestry.com and 23andMe. It gives consumers more transparency about how these companies use their information and more control over how it’s shared and retained, and it establishes explicit protections against discrimination based on genetic data.

A.B. 825, authored by Assemblymember Marc Levine, expanded the definition of personal information, for the purposes of the state’s data security and data breach notification laws, to include genetic data. That means that if a company is irresponsible with your genetic data, they can be held to account for it.

We were pleased that Governor Newsom signed both bills into law.

Make no mistake: our victories are yours, too. We thank every person who picked up the phone or sent an email to their California representative or senator. We could not have done this without that support.

Across The Country

Of course, California is not the only state where we fight for your digital rights. We’ve advocated across the country—from Washington to Virginia—to fight the bad bills and support the good ones in partnership with friends in those states.

In Washington, we joined a coalition to help pass Rep. Drew Hansen’s HB 1336, which expanded broadband choice. Signed into law by Washington's Gov. Jay Inslee, HB 1336 will improve access not only for rural parts of the state, but also underserved urban communities.

Of course, we haven’t won every fight. Over our opposition, Virginia’s legislature passed an empty privacy law—weak, underfunded, not designed with consumers in mind—that puts the desires of companies over the needs of consumers. As Reuters reported, lobbyists for Amazon handed the bill to the author and pushed hard for it to pass. Virginians deserved better.

Privacy will continue to be a hot topic in legislatures across the country next year. We urge lawmakers not to look at weak bills, such as Virginia’s or the recent “model bill” put forward by the Uniform Law Commission as examples to follow. Instead, we urge you to consider EFF's top priorities for privacy legislation, including strong enforcement.

Looking Ahead

Our state legislative work is as busy as it’s ever been. We’re working with more partners on the ground in states across the country—especially those in our local Electronic Frontier Alliance—to connect with our fellow advocates and fight together for everyone's digital rights. We look forward to being just as busy in 2022.  

This article is part of our Year in Review series. Read other articles about the fight for digital rights in 2021.

 

Hayley Tsukayama

Police Use of Artificial Intelligence: 2021 in Review


Decades ago, when imagining the practical uses of artificial intelligence, science fiction writers imagined autonomous digital minds that could serve humanity. Sure, sometimes a HAL 9000 or WOPR would subvert expectations and go rogue, but that was very much unintentional, right?  

And for many aspects of life, artificial intelligence is delivering on its promise. AI is, as we speak, looking for evidence of life on Mars. Scientists are using AI to try to develop more accurate and faster ways to predict the weather.

But when it comes to policing, the reality is much less optimistic. Our HAL 9000 does not assert its own decisions on the world—instead, programs that claim to use AI for policing just reaffirm, justify, and legitimize the opinions and actions already being undertaken by police departments.

AI presents two problems: tech-washing, and a classic feedback loop. Tech-washing is the process by which proponents of these systems defend their outcomes as unbiased because they were derived from “math.” And the feedback loop is how that math continues to perpetuate historically rooted harmful outcomes. “The problem of using algorithms based on machine learning is that if these automated systems are fed with examples of biased justice, they will end up perpetuating these same biases,” as one philosopher of science notes.

Far too often, artificial intelligence in policing is fed data collected by police, and therefore can only predict crime based on data from neighborhoods that police are already policing. But crime data is notoriously inaccurate, so policing AI not only misses the crime that happens in other neighborhoods, it reinforces the idea that the neighborhoods that are already over-policed are exactly the neighborhoods where police are correct to direct patrols and surveillance.
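The dynamic is easy to see in a toy simulation. The sketch below is purely illustrative, with invented numbers and no connection to any real policing system: two neighborhoods share the same underlying crime rate, but one starts with more recorded incidents because it was patrolled more heavily in the past. Because patrols are allocated in proportion to past records, and new records can only come from where officers patrol, the initial disparity compounds.

import random

# Toy feedback-loop simulation (illustrative only; all numbers are invented).
# Both neighborhoods have the SAME true crime rate, but "A" starts with more
# recorded incidents because it was historically over-policed.
TRUE_RATE = 0.05
recorded = {"A": 50, "B": 10}

random.seed(1)
for year in range(10):
    total = sum(recorded.values())
    # "Predictive" step: allocate 100 patrols in proportion to past records.
    patrols = {n: round(100 * c / total) for n, c in recorded.items()}
    # Recording step: crime is only recorded where officers actually patrol.
    for n, p in patrols.items():
        recorded[n] += sum(random.random() < TRUE_RATE for _ in range(p * 10))

print(recorded)  # "A" pulls further ahead every year despite equal true rates

No bias has to be written into the algorithm itself; biased inputs are enough to produce biased patrols.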

How AI tech-washes unjust data created by an unjust criminal justice system is becoming more and more apparent.

In 2021, we got a better glimpse into what “data-driven policing” really means. An investigation conducted by Gizmodo and The Markup showed that the software that put PredPol, now called Geolitica, on the map disproportionately predicts that crime will be committed in neighborhoods inhabited by working-class people, people of color, and Black people in particular. You can read here about the technical and statistical analysis they did in order to show how these algorithms perpetuate racial disparities in the criminal justice system.

Gizmodo reports that, “For the 11 departments that provided arrest data, we found that rates of arrest in predicted areas remained the same whether PredPol predicted a crime that day or not. In other words, we did not find a strong correlation between arrests and predictions.” This is precisely why so-called predictive policing, or any data-driven policing scheme, should not be used. Police patrol neighborhoods inhabited primarily by people of color, which means these are the places where they make arrests and write citations. The algorithm factors in these arrests and determines that these areas are likely to see crime in the future, thus justifying a heavy police presence in Black neighborhoods. And so the cycle continues.

This can occur with other technologies that rely on artificial intelligence, like acoustic gunshot detection, which can send false-positive alerts to police signifying the presence of gunfire.

This year we also learned that at least one so-called artificial intelligence company, which received millions of dollars and untold amounts of government data from the state of Utah, actually could not deliver on its promises to help direct law enforcement and public services to problem areas.

This is precisely why a number of cities, including Santa Cruz and New Orleans, have banned government use of predictive policing programs. As Santa Cruz’s mayor said at the time, “If we have racial bias in policing, what that means is that the data that’s going into these algorithms is already inherently biased and will have biased outcomes, so it doesn’t make any sense to try and use technology when the likelihood that it’s going to negatively impact communities of color is apparent.”

Next year, the fight against irresponsible police use of artificial intelligence and machine learning will continue. EFF will continue to support local and state governments in their fight against so-called predictive or data-driven policing.

This article is part of our Year in Review series. Read other articles about the fight for digital rights in 2021.

Matthew Guariglia

2021 Year in Review: EFF Graphics


EFF's small design team sometimes struggles to keep up with the frenetic pace of our activist, legal and development colleagues. Whenever EFF launches a new legal case, activism campaign, tech project, or development campaign, we try to create unique and inspiring graphics to promote it. At EFF, we find that the ability to visualize the issues at hand encourages supporters to engage more fully with our work, to learn more and share more about what we do, and to donate to our cause.

All the graphics we create are original and free for the public to use under a Creative Commons Attribution license. That means that if you are fighting to stop police misuse of surveillance technology in your community, promoting free expression online, or simply looking for a way to share your love for EFF and digital rights with the world, you are free to download our graphics and use them for your own purposes without permission. It's our way of seeding the Commons!

Below is a selection of graphics we produced this year. We hope you enjoy perusing them! To learn more about each project, go ahead and click the image. It will link you to a page where you can learn more.

Don't forget: many of our graphics are gifted to you in t-shirt or sticker form when you join EFF. And for a limited time, you can purchase postcard versions of some of our graphics in our shop.

This article is part of our Year in Review series. Read other articles about the fight for digital rights in 2021.

Hugh D'Andrade

In 2021, the Police Took a Page Out of the NSA’s Playbook: 2021 in Review


With increasing frequency, law enforcement has been using unconstitutional, suspicionless digital dragnet searches in an attempt to identify unknown suspects in criminal cases. Whether these searches are for everyone who was near a building where a crime occurred or who searched for a keyword like “bomb” or who shares genetic data with a crime scene DNA sample, 2021 saw more and more of these searches—and more attempts to push back and rein in unconstitutional law enforcement behavior. 

While dragnet searches were once thought to be just the province of the NSA, they are now easier than ever for domestic law enforcement to conduct as well. This is because of the massive amounts of digital information we share—knowingly or not—with companies and third parties. This data, including information on where we’ve been, what we’ve searched for, and even our genetic makeup, is stored in vast databases of consumer-generated information, and law enforcement has ready access to it—frequently without any legal process. All of this consumer data allows police to, essentially, pluck a suspect out of thin air.

EFF has been challenging unconstitutional dragnet searches for years, and we’re now seeing greater awareness and pushback from other organizations, judges, legislators, and even some companies. This post will summarize developments in 2021 on one type of dragnet suspicionless search—reverse location data searches. 

Reverse Location Searches: Geofence Warrants & Location Data Brokers

Reverse location searches allow the police to identify every device in a given geographic area during a specific period of time in the past as well as to track people’s paths of travel. Geographic areas can include city blocks full of people unconnected to the crime, including those living in private residences and driving on busy streets. 

Unlike ordinary searches for electronic records, which identify a suspect, account, or device in advance of the search, reverse location searches essentially work backward by scooping up the location data from every device in hopes of finding one that might be linked to the crime. The searches therefore allow the government to examine the data from potentially hundreds or thousands of individuals wholly unconnected to any criminal activity and give law enforcement unlimited discretion to try to pinpoint suspect devices—discretion that can be deployed arbitrarily and invidiously.

Two main types of reverse location searches gained prominence in 2021: geofence warrants, and searches of location data generated by device applications and then aggregated and sold by data brokers.

Geofence Warrants

The first type of search, a “geofence warrant,” is primarily directed at Google. Through these warrants, police are able to access precise location data on the vast majority of Android device users and other Google account holders (including iPhone users). This data comes from wifi connections, GPS and Bluetooth signals, and cellular networks and allows Google to estimate a device’s location to within 20 meters or less. Using the data, Google can infer where a user has been, the path they took to get there, and what they were doing at the time. Google appears to be disclosing location data only in response to court-authorized warrants. 
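Conceptually, a geofence request inverts the normal search warrant: instead of starting with a suspect and asking where that person was, it starts with a place and a time window and asks who was there. Below is a minimal sketch of that style of query; the data layout and field names are hypothetical, not Google’s actual schema or process.

from dataclasses import dataclass

@dataclass
class Ping:
    device_id: str  # pseudonymous identifier for a device
    lat: float
    lon: float
    ts: int         # Unix timestamp of the location report

def geofence_hits(pings, lat_min, lat_max, lon_min, lon_max, t_start, t_end):
    """Return every device seen inside the box during the window:
    working backward from a place and time to a list of people,
    with no suspect identified in advance."""
    return {
        p.device_id
        for p in pings
        if lat_min <= p.lat <= lat_max
        and lon_min <= p.lon <= lon_max
        and t_start <= p.ts <= t_end
    }

Everyone whose device reported a location inside the box is swept in, whether or not they had anything to do with the crime.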

In 2021, we learned more about just how prevalent the use of geofence warrants has become. Over the summer, Google released a transparency report showing it had received approximately 20,000 geofence warrants between 2018 and 2020. The vast majority of these warrants (95.6%) came from state and local police agencies, with nearly 20% of all state requests coming solely from agencies in California. The report also shows that many states have ramped up their use of geofence warrants exponentially over the last couple years—in 2018, California issued 209 geofence warrant requests, but in 2020, it issued 1,909. Each of these requests can reveal the location of thousands of devices. Geofence requests now constitute more than a quarter of the total number of all warrants Google receives. This is especially concerning because police are continuing to use these warrants even for minor crimes. And, as The Markup discovered following Google’s report, agencies have been less than transparent about their use of this search technique—there are huge discrepancies between Google’s geofence warrant numbers and the data that California police agencies are disclosing to the state—data that they are explicitly required to report under state law.

Aggregated App-Generated Location Data

In 2021, we also learned more about searches of aggregated app-generated location data. With this second type of reverse location search, the government is able to access location data generated by many of the applications on users’ phones. This data is purportedly deidentified and then aggregated and sold to various shady and secretive data brokers who re-sell it to other data brokers and companies and to the government. Unlike geofence warrants directed to Google, neither the data brokers nor the government seem to think any legal process at all is required to access these vast treasure troves of data—data that the New York Times described as “accurate to within a few yards and in some cases updated more than 14,000 times a day.” And although the data brokers argue the data has been anonymized, data like this is notoriously easy to re-identify.

In 2020, we learned that several federal agencies, including DHS, the IRS, and the U.S. military, purchased access to this location data and used it for law enforcement investigations and immigration enforcement. In 2021, we started to learn more about how this data is shared with state and local agencies as well. For example, data broker Veraset shared raw, individually-identifiable GPS data with the Washington DC local government, providing the government with six months of regular updates about the locations of hundreds of thousands of people as they moved about their daily lives. Ostensibly, this data was meant to be used for COVID research, but there appears to have been nothing that truly prevented the data from ending up in the hands of law enforcement. We also learned that the Illinois Department of Transportation (IDOT) purchased access to precise geolocation data about over 40% of the state’s population from Safegraph, a controversial data broker later banned from Google’s app store. For just $49,500, the agency got access to two years’ worth of raw location data. The dataset consisted of over 50 million “pings” per day from over 5 million users, and each data point contained precise latitude and longitude, a timestamp, a device type, and a so-called “anonymized” device identifier. We expect to find many more examples of this kind of data sharing as we further pursue our location data transparency work in 2022.
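Why is “anonymized” location data so easy to re-identify? Because movement patterns are themselves identifying: a pseudonymous identifier attached to even a few days of pings usually reveals where a device spends its nights, which is typically a home address. The sketch below illustrates the idea under invented assumptions (tuple-shaped pings, UTC hours, a simple coordinate grid); it is not any broker’s actual schema.

from collections import Counter
from datetime import datetime, timezone

def likely_home(pings, grid=0.001):
    """Guess a device's home as its most frequent overnight grid cell.
    `pings` is an iterable of (lat, lon, unix_ts); 0.001 degrees is
    roughly a city-block-sized cell."""
    cells = Counter()
    for lat, lon, ts in pings:
        hour = datetime.fromtimestamp(ts, tz=timezone.utc).hour
        if hour >= 22 or hour < 6:  # overnight pings cluster at home
            cells[(round(lat / grid), round(lon / grid))] += 1
    if not cells:
        return None
    cell_lat, cell_lon = cells.most_common(1)[0][0]
    return (cell_lat * grid, cell_lon * grid)  # approximate home coordinates

From an approximate home location, property and voter records will usually resolve the “anonymous” identifier to a name.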

Location Data Has Been Used to Target First Amendment-Protected Activity

There is more and more evidence that data available through reverse location searches can be used to track protestors, invade people’s privacy, and falsely associate people with criminal activity. In 2021, we saw several examples of law enforcement trawling Google location data to identify people in mass gatherings, including many who were likely engaged in First Amendment protected political protests. The FBI requested geofence warrants to identify individuals involved in the January 6 riot at the U.S. Capitol. Minneapolis police used geofence warrants around the time of the protests following the police killing of George Floyd, catching an innocent bystander who was filming the protests. And ATF used at least 12 geofence warrants to identify people in the protests in Kenosha, Wisconsin following the police shooting of Jacob Blake. One of these warrants encompassed a third of a major public park for a two-hour window during the protests.

Efforts to Push Back on Reverse Location Searches

In 2021, we also saw efforts to push back on the increasingly indiscriminate use of these search techniques. We called on Google to both fight geofence warrants and to be much more transparent about the warrants it’s receiving, as did the Surveillance Technology Oversight Project and a coalition of 60 other organizations. Both Google and Apple pushed back on shady location data aggregators by banning certain SDKs from their app stores and kicking out at least one location data broker entirely. 

There were other efforts in both the courts and legislatures. We are still waiting on rulings in two criminal cases involving geofence warrants: People v. Dawes (in which we filed an amicus brief) and United States v. Chatrie (a case being litigated by the National Association of Criminal Defense Lawyers). Meanwhile, judges in other parts of the country have been proactive on these issues. In 2021, a Kansas federal magistrate judge issued a public order denying a government request for a geofence warrant, joining several other judges from Illinois who issued a series of similar orders in 2020. All of these judges held that the government’s geofence requests were overbroad and failed to meet the Fourth Amendment’s particularity and probable cause requirements, and one judge chided the government publicly, stating:

[t]he government’s undisciplined and overuse of this investigative technique in run-of-the-mill cases that present no urgency or imminent danger poses concerns to our collective sense of privacy and trust in law enforcement officials. 

We’re hoping the judges in Dawes and Chatrie follow these magistrate judges and find those respective geofence orders unconstitutional as well. 

In the meantime, however, the Fourth Circuit Court of Appeals, sitting en banc, issued a great ruling over the summer in a case that could have ramifications for reverse location searches. In Leaders of a Beautiful Struggle v. Baltimore Police Department, the court held that Baltimore’s use of aerial surveillance that could track the movements of every person and vehicle across the city violated the Fourth Amendment. We filed an amicus brief in the case. The court recognized that, even if the surveillance program only collected data in “shorter snippets of several hours or less,” that was “enough to yield ‘a wealth of detail’ greater than the sum of the individual trips” and to create an “encyclopedic” record of where those people came and went. Also, crucially, the court recognized that, even if people were not directly identifiable from the footage alone, police could distinguish individuals and deduce identity from their patterns of travel and through cross-referencing other surveillance footage like ALPR and security cameras. This was sufficient to create a Fourth Amendment violation. Like the reverse location searches discussed in this post, police could have used the Baltimore program to identify everyone who was in a given area in the past, so the ruling in this case will be important for our legal work in 2022 and beyond.

Finally, in 2021 we also saw legislative efforts to curb the use of reverse location search techniques. We strongly supported the federal “Fourth Amendment is Not For Sale Act,” introduced by Senator Ron Wyden, which would close loopholes in existing surveillance laws to prohibit federal law enforcement and intelligence agencies from purchasing location data (and other types of data) on people in the United States and Americans abroad. The bill has bipartisan support and 20 co-sponsors in the Senate, and a companion bill has been introduced in the House. We also supported a state bill in New York that would outright ban all reverse location searches and reverse keyword searches. This bill was reintroduced for the 2021-2022 legislative session and is currently in committee. We’re waiting to see what happens with both of these bills, and we hope to see more legislation like this introduced in other states in 2022.

The problem of suspicionless dragnet searches is not going away anytime soon, unfortunately. Given this, we will continue our efforts to hold the government and companies accountable for unconstitutional data sharing and digital dragnet searches throughout 2022 and beyond.

This article is part of our Year in Review series. Read other articles about the fight for digital rights in 2021.

Jennifer Lynch

Every State Has a Chance to Deliver a “Fiber for All” Broadband Future: 2021 in Review


This year’s passage of the Infrastructure Investment and Jobs Act (IIJA)—also known as the bipartisan infrastructure package—delivered on a goal EFF has sought for years. It finally creates a way for people to remedy a serious problem: a severe lack of fiber-to-the-home connectivity. Fiber optics lie at the core of all future broadband access options because fiber is the superior medium for moving data, with no close comparisons. As a result, global demand for fiber infrastructure is extremely high: China is seeking to connect 1 billion of its citizens to symmetrical gigabit access, and many advanced EU and Asian nations are rapidly approaching near-universal deployment. Crucially, these countries did not reach these outcomes through market forces alone, but rather by passing infrastructure policies much like the IIJA.

Now it’s up to elected officials in states, from governors to state legislators, to work to ensure the federal infrastructure program delivers 21st-century-ready infrastructure to all people. Some states are ahead of the curve. In 2021, California embraced a fiber-infrastructure-for-all effort, with the legislature unanimously passing a historic investment in public fiber. State Senator Lena Gonzalez led this effort by introducing the first fiber-broadband-for-all bill; EFF was a proud sponsor of this bill in Sacramento.

Other states are behind the curve, overly restricting the ability of local governments and utilities to plug the gaps that private internet service providers (ISPs) have left for sixteen years and counting. (2005 was when private fiber-to-the-home deployment really kicked off.) Maintaining those barriers, even as federal dollars are finally released, guarantees that those states will fail to deliver universal fiber; the federal law, while important, isn’t sufficient on its own. Success requires maximum input from local efforts to make the most of this funding.

What Is in the Federal Infrastructure Law?

Understanding what progress we’ve made this year—and what still needs to be done—requires understanding the IIJA itself. The basic structure of the law is a collaboration between the federal government’s National Telecommunication Information Administration (NTIA), the Federal Communications Commission (FCC), and the states and territories. Congress appropriated $65 billion in total. That includes $45 billion for construction funds and $20 billion for efforts promoting affordability and digital inclusion. This money can be paired with state money, which will be essential in many states facing significant broadband gaps.

Responsibility for different parts of this plan falls to different people. The NTIA will set up a grant program, provide technical guidance to the states, and oversee state efforts. The FCC will issue regulations that require equal access to the internet, produce mapping data that will identify eligible zones for funding, and implement a new five-year subsidy of $30 per month to improve broadband access for low-income Americans. Both agencies will be resources to the states, which will be responsible for creating their own multi-year action plan that must be approved by the NTIA.

The timelines behind many of these steps are varied. The NTIA’s grant program must be established by around May 2022; states will then take in that guidance and develop their own action plans. Every state will receive $100 million plus additional funding to reflect their share of the national unserved population—a statistic that the FCC will estimate.

Congress also ordered the FCC to issue “digital discrimination” (also known as digital redlining) rules that ban deployment decisions based on income, race, ethnicity, color, religion, or national origin. EFF and many others have sought such digital redlining bans. Without these kinds of rules, we risk cementing first- and second-class internet infrastructure based on income status. Currently, companies offer high-income earners ever-cheaper and faster broadband, while middle- and low-income users are stuck on legacy infrastructure that grows more expensive to maintain and comparatively slower as broadband needs expand.

The digital discrimination provisions do allow carriers to exempt themselves from the rules if they can show economic and technical infeasibility for building in a particular area, which will limit the impact of these rules in rural markets. However, there should be no mistake that there is no good excuse for discriminatory deployment decisions in densely populated urban markets. These areas are fully profitable to serve, which is why the major ISPs that don’t want to serve everyone, such as AT&T and Comcast, fought so hard to remove these provisions from the bipartisan agreement. But this rulemaking is how we fix the access problem. It is time to move past a world where kids go to fast-food parking lots to do their homework and where school districts' only solution is to rent a slow mobile hotspot. This rulemaking is how we change things for those kids and for all of us.

Local Choice and Open Access Are Necessary If States Want to Reach Everyone with Fiber

The states are going to need to embrace new models of deployment that focus on fostering the development of local ISPs, as well as openly accessible fiber infrastructure. The federal law explicitly prioritizes projects that can “easily scale” speeds over time to “meet evolving connectivity needs” and “support 5G [and] successor wireless technologies.” Any objective reading of this leads to the conclusion that pushing fiber optics deep into a community should lie at the core of every project (satellite and 5G rely on fiber optics). That’s true whether it is wired or wireless delivery at the end. A key challenge will be how to build one infrastructure to service all of these needs. The answer is to deploy the fiber and make it accessible to all players.

Shared fiber infrastructure is going to be essential in order to extend its reach far and wide. EFF has produced cost-model data demonstrating that the most efficient means to reach the most people with fiber connections is deploying it on an open-access basis. This makes sense when considering that all 21st-century broadband options, from satellite to 5G, rely on fiber optics, but not all carriers intend to build redundant, overlapping fiber networks in any place other than urban markets. The shared infrastructure approach is already happening in Utah, where local governments are deploying fiber infrastructure and enabling several small private ISPs to offer competitive gigabit fiber services. Similarly, California’s rural county governments have banded together to jointly build open-access fiber to all people through the EFF-supported state infrastructure law.

Needless to say, states have to move past the idea that a handful of grants and subsidies will fix their long-term infrastructure problems. They have to recognize that we’ve done that already and understand the mistakes of the past. This is, in fact, the second wave of $45 billion in broadband funding. The previous $45 billion was spent on slow speeds and non-future-proofed solutions, which is why we have nothing to show for it in most states. Only fully embracing future-proofed projects with fiber optics at their core is going to deliver the long-term value Congress is seeking with the priority provisions written into statute.

States Must Resist The Slow Broadband Grift

Here is a fact: it is unlikely Congress will come around again to produce a national broadband infrastructure fund. A number of states will do it right this time, taking the lessons of 2021 and of the past into account when planning how to spend their infrastructure funding, and their success will alleviate the political pressure for Congress to act again. In a handful of years, those states will probably have a super-majority of their residents connected to fiber. But, unfortunately, it’s possible some states will fall for the lie, often pushed by big ISPs, that slow networks save money.

We know that the “good enough for now” mindset doesn’t work. Taking this path will waste every dollar, with nothing to show for it. Networks good enough for 2021 will look slow by 2026, forcing communities to replace them to remain economically competitive. The truth is, speed-limited networks cost a fortune in the long run because they will face obsolescence quickly as needs grow. On average, we use about 21% more data each year, and that trend has been with us for decades. Furthermore, with the transition towards distributed work, and the increasingly remote delivery of services such as healthcare and education, the need for ever-increasing upload speeds and symmetrical speeds will continue to grow.
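To see why the “good enough for now” mindset ages so badly, it helps to run the compounding math. Here is a minimal sketch, assuming the roughly 21% average annual growth in data usage cited above holds steady:

```python
# Compound growth in data demand at an assumed 21% per year.
growth_rate = 0.21

for years in range(1, 11):
    multiplier = (1 + growth_rate) ** years
    print(f"After {years:2d} years: {multiplier:.2f}x today's demand")

# After 5 years demand is roughly 2.6x today's; after 10, roughly 6.7x.
# A network sized for 2021's needs is badly undersized by 2026.
```

Under that assumption, demand roughly doubles about every four years, which is why a network built to barely meet today’s needs starts failing its community almost immediately.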

The slow broadband grift will come from industry players who are over-invested in "good enough for now" deployment strategies. It is worth billions of dollars to them for states to get this wrong. So they will repeat their 2021 playbook and deploy their lobbyists to the states, just as they did with Congress, where they mostly failed. They failed to sway Congress because everyone understands the simple fact that we will need more and more broadband with each passing year.

Any ISP that comes to a state leader with a suggested plan needs to have its suggestions scrutinized against the technical objectives Congress laid out this year. Can its deployment plan “easily scale” to ever-increasing speeds? Will it meet community needs and enable 5G and successor wireless services? And, most importantly, will it deliver low-cost, high-quality broadband access?

Many of these questions are answerable with proper technical vetting. There are no magical secrets of technology, just physics and financial planning. But it remains to be seen whether the states will allow politically well-connected legacy industries to make the call for them, or will rely on objective analysis focused on long-term value to their citizens. EFF worked hard in 2021 to make 21st-century-ready broadband for all a reality for every community. We will continue to do everything we can to ensure the best long-term outcome for people. If you need help convincing your local leadership to do the right thing for the public—connecting everyone to 21st-century internet access through fiber optics laid deep into your community—you have a partner in EFF.

This article is part of our Year in Review series. Read other articles about the fight for digital rights in 2021.

Ernesto Falcon

Shining a Light on Black Box Technology Used to Send People to Jail: 2021 Year in Review

3 weeks 5 days ago

If you're accused of a crime based on an algorithm's analysis of the evidence, you should have a right to refute the assumptions, methods, and programming of that algorithm. Building on previous wins, EFF and its allies turned the tide this year on the use of these secret programs in criminal prosecutions.

One of the most common forms of forensic programs is probabilistic genotyping software. It is used by the prosecution to examine DNA mixtures, where an analyst doesn't know how many people contributed to the sample (such as a swab taken from a weapon). These programs are designed to make choices about how to interpret the data and what information to disregard as likely irrelevant, and to compute statistics based on how often different genes appear in different populations—and all of the different programs do it differently. These assumptions and processes are subject to challenge by the person accused of a crime. For that challenge to be meaningful, the defense team must have access to the source code and other materials used in developing the software.
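To make concrete what “computing statistics from population frequencies” involves, here is a deliberately toy sketch of a single-source match statistic. All loci, alleles, and frequencies below are hypothetical, and real probabilistic genotyping systems go far beyond this, modeling mixtures, allele dropout, and instrument noise; every one of those modeling choices is exactly what the defense needs to be able to examine.

```python
# Toy random-match-probability calculation (hypothetical numbers).
# Real probabilistic genotyping handles mixtures, dropout, and noise.

# Assumed population frequency of each allele at three loci.
allele_freqs = {
    "locus1": {"A": 0.10, "B": 0.25},
    "locus2": {"C": 0.05, "D": 0.30},
    "locus3": {"E": 0.15, "F": 0.15},
}

# The suspect's genotype: one allele pair per locus.
suspect = {"locus1": ("A", "B"), "locus2": ("C", "D"), "locus3": ("E", "E")}

def genotype_probability(freqs, pair):
    """Hardy-Weinberg genotype probability: p^2 if homozygous, 2pq if not."""
    a, b = pair
    return freqs[a] ** 2 if a == b else 2 * freqs[a] * freqs[b]

# Random match probability: the chance an unrelated person in the
# reference population would share this profile by coincidence.
rmp = 1.0
for locus, pair in suspect.items():
    rmp *= genotype_probability(allele_freqs[locus], pair)

print(f"Random match probability: 1 in {1 / rmp:,.0f}")
```

Even this trivial version embeds contestable choices, such as which population table to use and whether loci are truly independent, and changing them changes the number reported to the jury. Commercial tools bury many more such choices in closed source code.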

The software vendors claim both that the software contains valuable secrets that must not be disclosed and that their methods are so well-vetted that there's no point letting a defendant question them. Obviously, both can't be true, and in fact it's likely that neither is true.

When a defense team was finally able to access one of these programs, the Forensic Statistical Tool (FST), they discovered an undisclosed function and shoddy programming practices that could lead the software to implicate an innocent person. The makers of FST had submitted sworn declarations about how they thought it worked, and it had been subjected to 'validation' studies, where labs test some set of inputs to see if the results seem right, and so on. But any programmer knows that programs don't always do what you think you programmed them to do, and so it was with FST: in trying to fix one bug, the developers unwittingly introduced another serious error.

That's why there's no substitute for independent review of the actual code that performs the analysis.

Fortunately, this year saw two very significant wins for the right to challenge secret software.

First, in a detailed and thoughtful opinion, a New Jersey appellate court explained in plain language why forensic software isn't above the law and isn't exempt from being analyzed by a defense expert to make sure it's reliable and does what it says it does.

Then, the first federal court to consider the issue also ordered disclosure.

But that's not the end of the story. In the New Jersey case, the prosecution decided to withdraw the evidence to avoid disclosure. And in the federal case, the defense says that the prosecution handed over unusable and incomplete code fragments. The defense is continuing to fight to get meaningful transparency into the software used to implicate the defendant.

With the battle ongoing, we're also continuing to brief the issue in other courts. Most recently, we filed an amicus brief in NY v. Easely, where the defendant was assaulted by a half dozen people and then arrested, accused of unlawful possession of a firearm based solely on the fact that he was near it and the DNA software said the DNA mixture on the gun likely contained some of his DNA. To make matters worse, the software at issue is closely related to the version of FST that was found to contain serious flaws.

Given the history of junk science being used in the courtrooms, we must be vigilant to protect the rights of defendants to challenge the evidence used against them. We also fight to protect the public's interest in fair judicial proceedings, and that means no convictions based on the say-so of secret software programs.

This article is part of our Year in Review series. Read other articles about the fight for digital rights in 2021.

Kit Walsh

2021 Year In Review: Sex Online

3 weeks 6 days ago

Internet companies should not be arbiters of morality, and we shouldn't hand them the responsibility of making broad cultural decisions about the forms sexuality is allowed to take online. And yet, this last year has brought a steady, consistent drumbeat of Internet companies doing just that. Rather than seeking consensus among users, Apple, Mastercard, Amazon, Ebay and others chose to impose their own values on the world, negatively affecting a broad range of people.

The ability to express oneself fully—including the expression of one’s sexuality—is a vital part of freedom of expression, and without that ability, free speech is an incomplete and impotent concept.

To be clear, we are talking here about legal sexual expression, fully protected in the U.S. by the First Amendment, and not the limited category of sexual expression, called “obscenity” in U.S. law, the distribution of which may be criminalized.

Here is a tiring and non-exhaustive list of the ways Internet platforms have taken it upon themselves to undermine free expression in this way in 2021.

Prologue: 2018, FOSTA

It’s apt to look back at the events leading up to SESTA/FOSTA, the infamously harmful carveout to Section 230, as it set the stage for platforms to adopt severe moderation policies around sexual content. Just before SESTA/FOSTA passed, the adult classified listing site Backpage was shut down by the U.S. government, and two of its executives were indicted under pre-SESTA/FOSTA legal authority.

FOSTA/SESTA sought to curb sex trafficking by removing legal immunities for platforms that hosted content that could be linked to it. In a grim slapstick routine of blundering censorship, platforms responded by broadly disappearing any questionable content, since the law had no clear and granular rules about what is and isn't covered. Another notable casualty was Craigslist's Personals section, whose shutdown, unlike Backpage's, was a direct result of FOSTA. As many predicted, the bill itself did not prevent any trafficking, but actually increased it—enough so that a follow-up bill, the SAFE SEX Workers Study Act, was introduced as a federal research project to analyze the harms that befall people in the sex industry when resources and platforms are removed online.

The culture of fear around hosting sexual expression has only increased since, and we continue to track the ramifications of SESTA/FOSTA. Jump ahead to December 2020:

Visa, Mastercard, Mindgeek

Late last year, Pornhub announced that it would purge all unverified content from its platform. Shortly after the news broke, Visa and Mastercard revealed they were cutting ties with Pornhub. As we have detailed, payment processors are a unique infrastructural chokepoint in the ecosystem of the Internet, and it's a dangerous precedent when they decide what types of content are allowed.

A few months after the public breakup with Pornhub, Mastercard announced it would require "clear, unambiguous and documented consent" for all content on adult sites it partners with, as well as age and identity verification of all performers in said content. Mastercard claimed this was an effort to protect us, demonstrating little regard for whether smaller companies could meet these demands while their revenue stream is held for ransom.

Not long after that, Mindgeek, parent company to Pornhub, closed the doors for good on its other property, Xtube, which was burdened with the same demands.

As documented by the Daily Beast and other publications, this campaign against Mindgeek was a concerted effort by the evangelical group Exodus Cry and its supporters, though that detail was absent from the New York Times piece that preceded it, and the campaign seemed to garner support from the general public.

This moral policing on behalf of financial institutions would set a precedent for the rest of 2021.

AVN

Just this month, December 2021, AVN Media Network announced that because of pressure from banking institutions, they will discontinue all monetization features on their sites AVN Stars and GayVN Stars. Both sites are platforms for performers to sell video clips. As of January 1st, 2022, all content on those platforms will be free and performers cannot be paid directly for their labor on them.

Twitch

In May, Twitch revoked the ability of hot tub streamers to make money off advertisements. Although these streamers weren't in clear violation of any points in Twitch's community guidelines, it was more a "you know it when you see it" type of violation. It's not a far leap to draw a connection between this and the "brand safety score" that a cybersecurity researcher discovered in Twitch's internal APIs. The company responded to that revelation by saying the mechanism was simply a way to ensure the right advertisements were "appropriately matched" with the right communities, and then said in its follow-up statement: "Sexually suggestive content—and where to draw the line—is an area that is particularly complex to assess, as sexual suggestiveness is a spectrum that involves some degree of personal interpretation of where the line falls." After this bout of inconsistent enforcement and unclear policies, Twitch added a special category for hot tub streamers. No word yet on its promised clearer community standards policies.

Apple App Store

During this year's iteration of the WWDC conference, where Apple unveils new features and changes to its products, a quiet footnote to the announcements was a change to the App Store Review Guidelines: "hookup apps" that include pornography would not be allowed on the App Store. Following outcries that this would have a disproportionate impact on LGBTQ+ apps, Apple clarified to reporters that those apps, such as Grindr and Scruff, wouldn't be affected. They wanted to make it clear that only apps that featured pornography would be banned. They did not comment on whether, or how, they had cracked the code of determining what is and isn't porn.

Discord

Discord describes itself as "giving the people the power to create space to find belonging in their lives"—that is, unless Apple thinks it's icky. Discord now prohibits all iOS users from accessing NSFW servers, regardless of user age. Much as Tumblr did in 2018, Discord is likely responding to the pressure of the strict Apple App Store policies mentioned above. This means that adult users are no longer allowed to view entirely legal NSFW content on Discord through iOS, though these servers remain accessible on Android and desktop.

OnlyFans

In August, OnlyFans declared that it would ban explicit content starting in October. Given their reputation, this was confusing. Following significant pushback and negative press, they backtracked on their decision just a week later.

Ebay

With just a month’s notice, Ebay revised its guidelines to ban adult items starting in June. Offending material includes movies, video games, comic books, manga, magazines, and more. Confusing matters further, Ebay took care to note that nudist publications (also known as naturist publications: usually non-sexual media representing the lifestyle of those who choose not to wear clothes) would not be allowed, but that risqué, sexually explicit art from before 1940 and some pin-up art from before 1970 would be.

Many have pointed out that this change will endanger the archival capabilities and preservation of LGBTQ history.

Instagram

Instagram, a platform often criticized for its opaque restrictions on sexual content, stands out in this list as the only example that puts the choice of what to see in the user's hands.

The new "Sensitive Content Control," released in July, is a feature that lets users choose how strictly the content they view on the app is moderated.

Although Instagram still has many, many, many issues when it comes to regulating sexual content on its platform, a feature like this, or at the very least this type of interface, is a step in the right direction. Perhaps the company is paying attention to the petition with over 120,000 signatures pleading with it to stop unfairly censoring sexuality.

Given that no two user groups will agree on what crosses the threshold of being too sexually explicit for social media, and that Instagram itself can't agree with the professional art world on what is pornography versus art, the obvious choice is to let users decide.

Let this "Sensitive Content Control" be a proof of concept for how to appropriately implement a moderation feature. Whether or not to see anything beyond what is already illegal should be up to users, not advertisers, shareholders, developers, or biased algorithms.

Internet for All, Not Few

The Internet is a complex amalgam of protocols, processes, and patchwork feature-sets constructed to accommodate all users. Like scar tissue, the layers have grown out of a need to represent us, a reflection of our complexities in the real world. Unfortunately, the last few years have been regressive to that growth; what we've seen instead is a pruning of that accommodation, a narrowing of the scope of possibility. Rather than preserving a complex ecosystem that contains edges, as the offline world does, platforms are shaving those edges down to make the ecosystem more child-proof.

Research shows that overly restrictive moderation is discriminatory and destructive to non-normative communities: communities that, because of their marginalized status, might exist in some of the edges these platforms deem too dangerous to exist. Laying unnecessary restrictions on how marginalized people get to exist online, whether intentional or not, has real-world effects. It pushes people further into the margins, preventing them from living in safety, with dignity, and with free expression and autonomy.

If we take the proof of concept from the above Instagram example, we can imagine a way to accommodate more possibilities, without sanding down the edges for all. 

And if we’ve learned anything from the few positive changes made by companies this year, it’s that these platforms occasionally do listen to their user base. They’re more likely to listen when reminded that people, not the homogeneous monopolies they’ve constructed for themselves, hold the power. That’s why we’ll continue to work with diverse communities to hold their feet to the fire and help ensure a future where free expression is for everyone.

This article is part of our Year in Review series. Read other articles about the fight for digital rights in 2021.

Daly Barnett

Students Are Learning To Resist Surveillance: Year in Review 2021

3 weeks 6 days ago

It’s been a tough year for students, but a good one for resistance.

As schools have shuffled students from in-person education to at-home learning and testing, then back again, the lines between “school” and “home” have been blurred. This has made it increasingly difficult for students to protect their privacy and to freely express themselves, as online proctoring and other sinister forms of surveillance and disciplinary technology have spread. But students have fought back, and often won, and we’re glad to have been on their side. 

Dragnet Cheating Investigations Rob Students of Due Process

Early in the year, medical students at Dartmouth’s Geisel School of Medicine were blindsided by an unfounded dragnet cheating investigation conducted by the administration. The allegations were based on a flawed review of an entire year’s worth of student log data from Canvas, the online learning platform that contains class lectures and other substantive information. After a technical examination, EFF determined that the logs easily could have been generated by the automated syncing of course material to devices logged into Canvas. 

When EFF and FIRE reached out to Dartmouth and asked them to more carefully review the logs—which Canvas’ own documentation explicitly states should not be used for high-stakes analysis—we were rebuffed. With the medical careers of seventeen students hanging in the balance, the students began organizing. At first, the on-campus protest, the letter to school administrators, and the complaints of unfair treatment from the student government didn’t make much of an impact. In fact, the university administration dug in, instituting a new social media policy that seemed aimed at chilling anonymous speech that had appeared on Instagram, detailing concerns students had with how these cheating allegations were being handled. 

But shortly after news coverage of the debacle appeared in the Boston Globe and the New York Times, the administration, which had failed to offer even a hint of proper due process to the affected students, admitted it had overstepped, and dropped its allegations. This was a big victory, and helped show that with enough pushback, students can help schools understand the right and wrong ways to use technology in education. Students from all over the country have now reached out to EFF and other advocacy organizations because teachers and administrators have made flimsy claims about cheating based on digital logs from online learning platforms that don’t hold up to scrutiny. We’ve created a guide for anyone whose schools are using such logs for disciplinary purposes, and welcome any students to reach out to us if they are in a similar position. 

Online Proctoring Begins to Back Down—But the Fight Isn’t Over

During 2020 and 2021, online proctoring tools saw upwards of a 500% increase in usage. But legitimate concerns about the invasiveness of these tools, potential bias, and efficacy were also widespread, as more people became aware of the ways that automated proctoring software, which purports to flag cheating, often flags normal test-taking behavior—and may even flag the behavior of some marginalized groups more often. During the October 2020 bar exam, ExamSoft flagged more than one-third of test-takers, or almost 3,200 people. Human review by the Bar removed nearly all of the flags, leaving only 47 examinees with sanctions. In December of 2020, the U.S. Senate even requested detailed information from three of the top proctoring companies—Proctorio, ProctorU, and ExamSoft—which combined have proctored at least 30 million tests over the course of the pandemic.
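Those numbers deserve a moment of arithmetic. A quick sketch, using the approximate figures reported above:

```python
# Back-of-the-envelope math on the October 2020 bar exam figures
# (approximate, as reported: ~3,200 examinees flagged, 47 sanctioned).
flagged = 3200
sanctioned = 47

dismissed = flagged - sanctioned
print(f"Flags dismissed on human review: {dismissed} "
      f"({dismissed / flagged:.1%} of all flags)")
```

By this rough count, more than 98% of the software’s flags did not survive human review, which is the clearest possible illustration of why automated flags alone should never decide an examinee’s fate.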

This year, we continued the fight to protect private student data from proctoring companies, and to ensure students get due process when their behavior is flagged. We took a close look at the companies’ replies to the Senate and offered our own careful interpretation of how they missed the mark. In particular, we continue to take significant issue with the companies’ use of doublespeak—claiming that their services don’t flag cheating, just aberrant behavior, and that human review is required before any cheating can be determined. Why, then, do many of the companies offer an automation-only service? You simply can’t have it both ways.

After coming under fire, ProctorU, one of the largest online proctoring companies, announced in May that it will no longer sell fully-automated proctoring services. The company admitted that “only about 10 percent of faculty members review the video” for students who are flagged by the automated tools—leaving the grades of the vast majority of test takers at the whims of biased and faulty algorithms. This is a big win, but it doesn’t solve every problem. Human review on the company side may simply result in teachers and administrators ignoring even more potential false flags, as they further trust the companies to make the decisions for them. 

We must continue to carefully scrutinize the danger to students whenever schools outsource academic responsibilities to third-party tools, algorithmic or otherwise. And we hope legislators begin to rein in unnecessary data collection by proctoring companies with some common-sense legislation in the new year.

The New Future of Privacy Forum “Student Privacy Pledge” Has New Problems (and Old Ones)

The Future of Privacy Forum (FPF) originally launched the Student Privacy Pledge in 2014 to encourage edtech companies, which often collect very sensitive data on K-12 students, to take voluntary steps to protect privacy. In 2016, we criticized the Legacy Pledge after it reached 300 signatories—to FPF’s dismay.

This year, we carefully reviewed the new Privacy Pledge, and found it equally lacking. This matters because schools, students, and parents may believe that a company which abides by the pledge is protecting privacy in ways that it is not. The Student Privacy Pledge is a self-regulatory program, but those who choose to sign are committing to public promises that are enforceable by the Federal Trade Commission (FTC) and state attorneys general under consumer protection laws—but this is cold comfort when the pledge falls so short, and because enforcement actions against edtech companies for violating students’ privacy have been few and far between. 

The new pledge stumbles in a variety of ways. In sum: it is filled with inconsistent terminology and fails to define material terms; it lacks clarity on which parts of a company that signs the pledge must abide by it; it leaves open the question of whether companies that update certain privacy policies must notify schools; it provides a variety of unclear exceptions for activities undertaken for “authorized educational/school purposes”; it does not define any sort of minimum standard for resources companies must offer to schools about using their tools in a privacy-protective way; and it does not give any guidance as to the privacy-by-design requirements that it otherwise expresses a company should engage in. 

EFF is not opposed to voluntary mechanisms like the Student Privacy Pledge to protect users—if they work. But the FTC rarely brings enforcement actions focused on student privacy, and the pledge’s gaps don’t help. We hope the FTC and state attorneys general are willing to enforce it, but so far its usefulness has been underwhelming.

Disciplinary Technology Isn’t Going Away

While we’ve made some headway in protecting student privacy during the pandemic, the threats aren’t going away. Petitions and other campaigns have helped individual schools and students, but we are still pushing for Canvas, Blackboard, and other learning tools to clarify the accuracy of their logs. And we are glad that the California Bar this year is offering free re-dos and adjusting the scores of those affected by 2021’s glitch-filled experience, but that comes on the heels of the Bar also signing a lengthy agreement with ExamSoft. Proctoring must be reined in and used more carefully, and the only data collected from students should be what is required to offer proctoring services.

EFF devoted additional resources to student privacy this year, and we’re glad we did. We’ve learned a lot about what it takes to resist school surveillance, defend young people’s free expression, and protect student privacy—and we’ll continue the fight in 2022. If you’re interested in learning more about protecting your privacy at school, take a look at our Surveillance Self-Defense guide on privacy for students.

This article is part of our Year in Review series. Read other articles about the fight for digital rights in 2021.

Jason Kelley

Where Net Neutrality Is Today and What Comes Next: 2021 in Review

4 weeks ago

When all is said and done—and there are some major steps to take in 2022—the United States will mark 2021 as the last year without federal net neutrality protections. Next year is when we will undo the 2017 repeal and once again put the Federal Communications Commission (FCC) back to work doing its job: protecting consumers from bad actors, working towards universal and net-neutral internet access, and accurately assessing the playing field in telecommunications.

With President Biden’s appointments of Chairwoman Jessica Rosenworcel and Gigi Sohn, a net neutrality pioneer, to the FCC’s leadership team, we can usher in a better era. Both appointees have made clear their support for the 2015 Open Internet Order and their belief that the FCC should begin a process to re-establish federal authority over broadband carriers, including network neutrality rules. More fights lie ahead once the new federal rules are established, but let’s review what’s happened so far and what it means for protecting your access to the Internet.

The Pandemic Has Changed How We Use the Internet

At its core, the necessity for net neutrality protections rests on one simple fact: people don’t want their broadband provider to dictate their experience online. It’s a need that only grew during the pandemic.

As the country rapidly transitioned education, social activities, and jobs to rely on persistent, open, and non-discriminatory connections to the world, views of access shifted. Today, an eye-popping 76% of American Internet users consider internet service to be as important as water and electricity in their daily life. But unlike those utility services, internet access is subject to the whims of private carriers for a large number of American users.

People do not like that power imbalance, and they should not settle for it. They pay for access, the providers are exceedingly well-compensated, and Congress set aside nearly $20 billion in funding to help people afford broadband. Yet major broadband providers such as AT&T, Comcast, and Verizon still resist the notion that their role as essential service providers should come with rules that protect consumers.

California’s Law and the Role of State Power to Protect Consumers

Right now, California’s net neutrality law (SB 822) is being reviewed by the Ninth Circuit after the state’s Attorney General prevailed in the lower court. The law is now in effect in California, forcing carriers to abandon practices that contradicted net neutrality, such as AT&T’s self-preferencing of its online streaming service HBO Max. We were glad to see the law get rid of a business practice that has generally been shown to make broadband access more expensive while negatively impacting the competitive landscape among services and products. No one likes it when a broadband carrier decides the products it owns should run “cheaper” by simply making alternatives on the internet more expensive to use, but that is exactly what AT&T was doing. If the 2015 Open Internet Order were still in effect, the federal rules would have blocked this practice, which the FCC was already investigating as a net neutrality violation.

The battle over California’s law makes clear that ISPs like AT&T, Verizon, and Comcast didn’t ask Ajit Pai’s FCC to abolish net neutrality protections because the rules were an overreach of the federal government or because the FCC lacked the authority. It was because they wanted to be free of any consumer protections, at any level. They know they sit on an essential service that people literally cannot live without, but they want complete control over what you have to pay, how you get the service, and how you are treated by them. It doesn’t work that way, though: the ISPs can’t have the FCC give up its authority and also prevent the states from stepping in on behalf of their residents.

Remember, California was the state where Verizon was caught throttling a firefighter command center during a wildfire. California has a demonstrated need to regulate ISPs in the interest of public safety. In fact, the year after SB 822, the state passed AB 1699 by Assemblymember Marc Levine to explicitly ban the throttling of first responders’ access during emergencies. This law too was opposed by the CTIA, which represents Verizon: even though the carriers know they were completely wrong, they don’t want to be regulated at any level.

The importance broadband access has for health, education, work, economic activity, public safety, and nearly every facet of everyday life cannot be overstated. That makes the legal question of whether states can protect their citizens in the absence of federal protections an extremely important one, and we at EFF hope California prevails.

If California prevails, consumers will have little reason to rely exclusively on the FCC to protect their access to the internet when they can also go to their governors and state elected officials. Victory at the Ninth Circuit would not only enshrine net neutrality for the fifth-largest economy in the world, it would inoculate California’s residents from the whims of DC. Furthermore, it would likely protect federal net neutrality, because reversing it at the federal level would then have less of an impact on broadband access and would be less attractive to the major ISPs that started us down this path in the first place.

We Will Fight to Push the FCC to Adopt New 21st Century Net Neutrality Rules in 2022

Net neutrality will stay on the agenda so long as the public continues to want it and fight for it. Much to the chagrin of ISP lobbyists (who are paid by their employers to perpetually oppose net neutrality), no one intends to let the issue just go away. EFF represents the public’s desire for the FCC to begin the process of restoring the rules. Chairwoman Rosenworcel has stated clearly that she intends to revisit the reinstatement of net neutrality rules in 2022. Once the Senate confirms Gigi Sohn as the fifth Commissioner to the FCC, the work will begin.

At a minimum, California’s state law establishes the basic floor of what net neutrality should look like federally, but even those rules were written in a pre-pandemic world. When broadband access is on par with access to electricity and water for most people, the FCC’s rules should reflect that importance. In fact, hundreds of organizations petitioned the incoming Biden Administration at the start of this year to ensure that rules prohibiting disconnection from critical services such as water and electricity also cover broadband access.

Furthermore, when Americans were forced to switch to remote access to engage in social and economic activity, ISPs that still retained data caps opted to lift them. But less than a year into the pandemic, with vaccinations just starting to come into circulation, these ISPs reversed themselves and restored their artificial scarcity schemes despite home usage skyrocketing due to realities on the ground. In other words, despite the fact that internet usage was necessarily rising due to remote work and remote education, and despite solid profits, companies like AT&T decided they needed to make broadband access even more expensive for users. This is despite the fact that a multi-billion-dollar emergency benefit program had come online, providing generous subsidies to ISPs of $50 a month per household and ensuring that no one would miss a bill and disrupt the carriers’ revenues. Should the power remain completely in the hands of the ISP to decide the entirety of your future connection to the internet? EFF does not believe so, and we will fight for consumers next year at the FCC to ensure that the rules firmly empower users, not ISPs.

This article is part of our Year in Review series. Read other articles about the fight for digital rights in 2021.

Ernesto Falcon

In 2021, We Told Apple: Don't Scan Our Phones

4 weeks ago

Strong encryption provides privacy and security for everyone online. We can’t have private conversations, or safe transactions, without it. Encryption is critical to democratic politics and reliable economic transactions around the world. When a company rolls back its existing commitments to encryption, that’s a bad sign. 

In August, Apple made a startling announcement: the company would be installing scanning software on all of its devices, which would inspect users’ private photos in iCloud and iMessage. 

This scanning software, intended to protect children online, effectively abandoned Apple’s once-famous commitment to security and privacy. Apple has historically been a champion of encryption, a feature that would have been undermined by its proposed scanning software. But after years of pressure from law enforcement agencies in the U.S. and abroad, it appears that Apple was ready to cave and provide a backdoor to users’ private data, at least when it comes to photos stored on their phones. 

At EFF, we called out the danger of this plan the same day it was made public. There is simply no way to apply something like “client-side scanning” and still meet promises to users regarding privacy. 

Apple planned two types of scanning. One would have used a machine learning algorithm to scan the phones of many minors for material deemed to be “sexually explicit,” then notified the minors’ parents about the messages. A second system would have scanned all photos uploaded to iCloud to see if they matched a photo in a government database of known child sexual abuse material (CSAM). Both types of scanning would have been ripe for abuse in the U.S. and, particularly, abroad, in nations where censorship and surveillance regimes are already well-established.
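To see why client-side scanning is so dangerous, it helps to picture the mechanism. Below is a deliberately oversimplified sketch of database matching. Apple’s actual design used a perceptual hash (“NeuralHash”) plus cryptographic threshold schemes so that resized or re-encoded images would still match, but the core pattern is the same: comparing your photos against an opaque list you cannot inspect. The hash choice and database contents here are illustrative only.

```python
import hashlib

# Deliberately oversimplified sketch of client-side database matching.
# Real proposals use perceptual hashes (so altered images still match);
# plain SHA-256 is used here only to keep the sketch self-contained.

# An opaque blocklist of hashes supplied by an outside authority.
# The device owner cannot see what is in it or who put it there.
# (This entry is the SHA-256 of the empty byte string, for the demo.)
blocked_hashes = {
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
}

def scan_photo(photo_bytes: bytes) -> bool:
    """Return True if the photo matches the opaque database."""
    digest = hashlib.sha256(photo_bytes).hexdigest()
    return digest in blocked_hashes

# The danger is not this loop; it is that whoever controls
# blocked_hashes controls what a billion devices silently report on.
if scan_photo(b""):
    print("match: this photo would be flagged for reporting")
```

Nothing in such a system constrains the database to CSAM; a government could just as easily demand that hashes of political or religious images be added, which is exactly the expansion risk critics raised.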

EFF joined more than 90 other organizations to send a letter to Apple CEO Tim Cook asking him to stop the company’s plan to weaken security and privacy on Apple’s iPhones and other products. We also created a petition where users from around the world could tell Apple our message loud and clear: don’t scan our phones! 

More than 25,000 people signed EFF’s petition. Together with petitions circulated by Fight for the Future and OpenMedia, nearly 60,000 people told Apple to stop its plans to install mass surveillance software on its devices. 

In September, we delivered those petitions to Apple. We held protests in front of Apple stores around the country. We even flew a plane over Apple’s headquarters during its major product launch to make sure its employees and executives got our message. After the unprecedented public pushback, Apple agreed to delay its plans. 

A Partial Victory 

In November, we got good news: Apple agreed to cancel its plan to send notifications to parent accounts after scanning iMessages. We couldn’t have done this without the help of tens of thousands of supporters who spoke out and signed the petitions. Thank you. 

Now we’re asking Apple to take the next step and not break its privacy promise with a mass surveillance program to scan user phones for CSAM.

Apple’s recent ad campaigns, with slogans like “Privacy: That’s iPhone,” have sent powerful messages to its more than one billion users worldwide. From Detroit to Dubai, Apple has said it in dozens of languages: the company believes privacy is “a fundamental human right.” It has sent this message not just to liberal democracies, but also to people who live in authoritarian regimes, and countries where LGBTQ people are criminalized. 

It’s understandable that companies don’t want users to misuse their cloud-based systems, including using them to store illegal images. No one wants child exploitation material to spread. But rolling back commitments to encryption isn’t the answer. Abandoning encryption to scan images against a government database will do far more harm than good. 

As experts from around the world explained at the EFF-hosted Encryption and Child Safety event, once backdoors to encryption exist, governments can and will use them to go well beyond scanning for CSAM. These systems can and will be used against dissidents and minorities. We hope Apple will sidestep this dangerous pressure, stand with users, and cancel its photo scanning plans. 

This article is part of our Year in Review series. Read other articles about the fight for digital rights in 2021.

Joe Mullin

We Encrypted the Web: 2021 Year in Review

4 weeks 1 day ago

In 2010, EFF launched its campaign to encrypt the entire web—that is, move all websites from non-secure HTTP to the more secure HTTPS protocol. Over 10 years later, 2021 has brought us even closer to achieving that goal. With various measurement sources reporting over 90% of web traffic encrypted, 2021 saw major browsers deploy key features to put HTTPS first. Thanks to Let’s Encrypt and EFF’s own Certbot, HTTPS deployment has become ubiquitous on the web.

Default HTTPS in All Browsers

For more than 10 years, EFF’s HTTPS Everywhere browser extension has provided a much-needed service to users: encrypting their browser communications with websites and making sure they benefit from the protection of HTTPS wherever possible. Since we started offering HTTPS Everywhere, the battle to encrypt the web has made leaps and bounds: what was once a challenging technical argument is now a mainstream standard offered on most web pages. Now HTTPS is truly just about everywhere, thanks to the work of organizations like Let’s Encrypt. We’re proud of EFF’s own Certbot tool, which is Let’s Encrypt’s software complement that helps web administrators automate HTTPS for free.

The goal of HTTPS Everywhere was always to become redundant. That would mean we’d achieved our larger goal: a world where HTTPS is so broadly available and accessible that users no longer need an extra browser extension to get it. Now that world is closer than ever, with mainstream browsers offering native support for an HTTPS-only mode.

In 2020, Firefox announced an “HTTPS-only” mode that any user can turn on, signaling that HTTPS adoption was substantial enough to support such a feature. 2021 was the year the other major browsers followed suit, starting with Chrome introducing an HTTPS default for navigation when a user types a URL without specifying insecure HTTP or secure HTTPS. Then in June, Microsoft’s Edge announced an “automatic HTTPS feature” that users can opt into. Later in July, Chrome announced its “HTTPS-first mode,” which attempts to automatically upgrade all pages to HTTPS or display a warning if HTTPS isn’t available. Given Chrome’s dominant share of the browser market, this was a huge step forward in web security. Safari 15 also implemented an HTTPS-first mode, although it does not block insecure requests as Firefox, Chrome, and Edge do.
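Server-side upgrades still matter alongside these browser modes. Here is a minimal sketch, using eff.org as the example host, of how you might check whether a site redirects plain HTTP to HTTPS; urllib follows redirects by default, so the final URL tells the story:

```python
import urllib.request

def upgrades_to_https(host: str) -> bool:
    """Fetch http://host and report whether we ended up on an https:// URL."""
    with urllib.request.urlopen(f"http://{host}", timeout=10) as response:
        return response.geturl().startswith("https://")

# eff.org redirects insecure requests, so this should print True.
print(upgrades_to_https("eff.org"))
```

A site that passes this check protects even visitors whose browsers have no HTTPS-first mode enabled.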

With these features rolled out, HTTPS is truly everywhere, accomplishing the long-standing goal to encrypt the web.

SSL/TLS Libraries Get A Critical Update

SSL/TLS libraries are heavily used in critical everyday components of our security infrastructure, like the transport of web traffic. These tools have primarily been built in the C programming language, but C has a long history of memory-safety vulnerabilities. So the Internet Security Research Group has led the development of an alternative to libraries like OpenSSL, written in the Rust language. Rust is a modern, memory-safe programming language, and the TLS library built in Rust has been named “Rustls.” Rustls has also been integrated as an option in popular networking command-line utilities such as curl. With Rustls, important tools that use TLS can gain memory safety and make networks ever more secure and less vulnerable.

Making Certbot More Accessible

Since 2015, EFF’s Certbot tool has helped millions of web servers deploy HTTPS by making the certificate process free and easy. This year we significantly updated the user experience of Certbot’s command-line output for clarity. We also translated parts of the website into Farsi in response to user requests, and the Instructions Generator is now available in that language. We hope to add more languages in the future and make TLS deployment on websites even more accessible across the globe.

On The Horizon

Even as we see positive movement by major browsers—from the HTTPS-by-default victories above to ending insecure FTP support and even Chrome adopting a Root Store program—we are also watching the potential dangers to these gains. Encrypting the net means sustaining the wins and fighting for tighter controls across all devices and major services. 

HTTPS is ubiquitous on the web in 2021, and this victory is the result of over a decade of work by EFF, our partners, and the supporters who have believed in the dream of encrypting the web every step of the way.

Thank you for your support in fighting for a safer and more secure internet.

This article is part of our Year in Review series. Read other articles about the fight for digital rights in 2021.

 

Alexis Hancock

The Battle for Communications Privacy in Latin America: 2021 in Review

4 weeks 1 day ago

Uncovering government surveillance and fighting for robust and effective legal safeguards and oversight is a continuous battle in Latin American countries. Surveillance capabilities and technologies are becoming more intrusive and prevalent, surrounded by a culture of secrecy and entrenched views that pit security against privacy. There are several challenges to face. Alongside growing resistance against government biometric surveillance, the long-standing problem of unfettered communications surveillance persists and presents new troublesome trends.

Both appear tied together, for example, in renewed attempts to compel individuals to give their biometric data in order to access mobile phone services, as we saw in México and Paraguay in 2021, with fierce opposition from civil society. The Supreme Court in Mexico indefinitely suspended the creation of the Padrón Nacional de Usuarios de Telefonía Móvil (PANAUT), a national registry of mobile users associated with their biometric data, after the federal agency assigned to implement the registry filed a constitutional complaint affirming its budgetary autonomy and its duty to ensure users' rights to privacy, data protection, and access to information. In Paraguay, the bill forcing users to register their biometrics to enable a mobile telephone service was rejected by a parliamentary commission and has been halted in Congress since then. 

This post highlights a few relevant developments this year regarding communications privacy in Latin America in relation to other rights, such as freedom of expression and assembly.

#ParoNacional in Colombia: Patrolling Phones and the Web

In the wake of Colombia’s tax reform proposal, demonstrations spread over the country in late April, reviving the social unrest and socio-economic demands that led people to the streets in 2019. Media has reported on government crackdowns against the protestors, including physical violence, missing persons, and deaths. Fundación Karisma has also stressed implications for the right to protest online and the violation of rights due to internet shutdowns and online censorship and surveillance. Amid the turmoil, EFF has put together a set of resources to help people navigate digital security in protest settings.

As seen in the 2019 protests, Colombian security forces once again abused their powers in 2021 by searching people’s phones at their discretion. Relying on a controversial regulation that allows law enforcement agents to check the IMEI of mobile devices to curb cell phone theft, police officers compelled protesters to hand over their passwords or unlock their phones, even though neither is needed to verify the IMEI of a device. As Fundación Karisma pointed out, just as with the search of a house, police may only seize a cell phone with a court order. Otherwise, the search interferes with people’s fundamental rights to privacy, the right to a fair trial, and the presumption of innocence. Over the years, the IMEI regulation has led to cases where the police reviewed people’s social networks or deleted potential evidence of police brutality and abuses.

Colombian police “patrolling” the web has also reinforced concerns over the practice’s invasive nature. Karisma points out that a 2015 Colombian police resolution authorizing law enforcement “cyber patrolling” is unclear about its specific scope, procedures, tools, and limits. Yet a June 2021 Ministry of Defense report on its activities during the national strike indicates that digital patrolling served to detect cyberthreats, profile suspicious people and activities related to acts of “vandalism,” and combat what the government deemed online disinformation. In the latter case, cyber patrolling was combined with a narrative dispute over the truth about reports, images, and videos of excessive police force that had gained national and international attention. Karisma’s report shed light on government campaigns framing publications critical of the army or the police as fake news or digital terrorism. The report concludes that such prejudicial framing served as the first step in a government strategy to stigmatize protests online and offline and to encourage censorship of critical content.

Following the Inter-American Commission on Human Rights (IACHR) mission to Colombia in June, the IACHR expressed concern about Colombian security forces taking on a fact-checking role, especially on information related to their own actions. The IACHR has also highlighted the importance of the internet as a space for protest during the national strike, taking into account evidence of restrictions presented by Colombian groups, like Fundación Karisma, Linterna Verde, and FLIP. 

Threats and Good News for Encryption 

In Brazil, legislative discussions on draft bill 2630/2020, the so-called “Fake News” bill, continued throughout 2021. Concerns around disinformation gained new strength with the propagation of a narrative, promoted by President Jair Bolsonaro and his supporters, in favor of ineffective methods for tackling the COVID pandemic. Despite its legitimate concerns with the disproportionate effects of disinformation campaigns, the text approved by the Senate back in 2020 contained serious threats to privacy, data protection, and free expression. Among them, the traceability mandate for instant messaging applications stood out.

EFF, along with civil society groups and activists on the ground, stood firm in opposing the traceability rule, which would have compelled instant messaging applications to massively retain the chain of forwarded communications, undermining users’ expectation of privacy and strong end-to-end encryption safeguards and principles. In August, we testified in Brazil’s Congress, stressing how massive data retention obligations and pushes to move away from robust end-to-end encryption implementations not only erode the rights of privacy and free expression, but also impair freedom of assembly and association. As a piece of good news, the traceability mandate was dropped in the latest version of the bill, though perils to privacy remain in other provisions.

Also in the Brazilian Congress, a still-pending threat to encryption lies in a proposed obligation for internet applications to assist law enforcement in the telematic interception of communications. The overbroad language of this assistance obligation endangers the rights and security of users of end-to-end encrypted services. Coalizão Direitos na Rede, a prominent digital rights coalition in Brazil, underlined this and other dangerous provisions in the bill, which would change the country’s Criminal Procedure Code. The coalition pointed out serious concerns and even setbacks with regard to legal safeguards for law enforcement access to communications data.

Another piece of great news took place regionally: the launch of the Alliance for Encryption in Latin America and the Caribbean (AC-LAC), which gathers forces to coordinate efforts in advancing a proactive agenda to promote and defend encryption. EFF is a member of the Alliance, which so far comprises over 20 organizations throughout the region.

Pegasus Project: New Revelations, Persistent Rights Violations 

Last, but not least, one of the most remarkable developments of 2021 on the communications privacy front was the Pegasus Project revelations. In July, the Pegasus Project unveiled government espionage against journalists, activists, opposition leaders, judges, and others, based on a list of more than 50,000 smartphone numbers of possible targets of the Pegasus spyware since 2016. As reported by The Washington Post, the leaked phone numbers were concentrated in countries known to engage in surveillance against their citizens and to have been clients of NSO Group, the Israeli company that develops and sells the spyware. The list of possible targets, as well as attacks confirmed through forensic analysis, contradict NSO Group’s claims that its surveillance software is used only against terrorism and serious crimes.

Phone numbers in the revealed list spanned more than 45 countries across the globe, but the greatest chunk of them related to Mexican phones—over 15,000 numbers in the leaked data. 

Among them were people from the inner circle of President López Obrador, including close political allies and family members, dating from when he was an opposition leader still aspiring to the country’s presidency. Human rights defenders, researchers from the Inter-American Commission on Human Rights, and journalists were not spared on Mexico’s list. Cecilio Pineda Brito, a freelance reporter, was shot dead in 2017 just a few weeks after he was selected as a possible target for surveillance. When a mobile device is infected with Pegasus, messages, photographs, emails, call logs, and location data can be extracted, and microphones and cameras can be activated, giving full access to people’s private information and lives.

The revelations confirmed findings published in 2017 by joint investigations by R3D, Article 19, SocialTIC, and Citizen Lab into attacks carried out during former President Peña Nieto’s administration. Since then, the country’s Attorney General’s Office has had an investigation open, with limited developments. Yet the newly leaked data has spurred advances in shedding light on government contracts related to Pegasus, and in detaining and prosecuting, within those investigations, a key person in the complex political and business scheme behind the spyware’s use in Mexico.

Revelations from the Pegasus Project have raised a red flag regarding ongoing government negotiations with NSO Group in other Latin American countries, like Uruguay and Paraguay. They have also reinforced concerns around a troublesome procurement procedure involving Pegasus spyware in Brazil, firmly challenged by a group of human rights organizations, including Conectas and Transparency International Brazil. In El Salvador, Apple warned journalists from the well-known independent digital news outlet El Faro of possible targeting of their iPhones by state-sponsored attackers. Similar warnings were sent to Salvadoran leaders of civil society organizations and opposition political parties.

At the regional level, leading digital rights groups in Latin America requested a thematic hearing before the Inter-American Commission on Human Rights to discuss surveillance risks for human rights. During the October 2021 hearing, they stressed serious concerns with various surveillance technologies employed in countries in the region without proper controls, legal basis, and safeguards aligned with international human rights standards. They urged the Commission to start a regional consultation process to establish a set of inter-American guidelines for the acquisition and use of technologies with surveillance capabilities, based on the principles of legality, necessity, and proportionality, which should be the baseline parameters of surveillance policies.

In fact, the widespread use of malicious software by Latin American governments generally occurs with no clear and precise legal authorization, much less strict necessity and proportionality standards or strong due process safeguards. The call for a global moratorium on the use of malware technology until states have adopted robust legal safeguards and effective controls to ensure the protection of human rights—voiced by United Nations experts, the U.N. High Commissioner, and dozens of organizations across the globe, including EFF—is the culmination of persistent human rights abuses and arbitrary violence related to government use of spyware. Moreover, as we have said, outrage will continue until governments recognize that intelligence agency and law enforcement hostility to device security puts us all in danger. Instead of taking advantage of system weaknesses and bugs, governments should align in favor of strong cybersecurity for everyone.

Conclusion

Communications surveillance continues to be a pervasive problem in Latin America. Feeble legal safeguards and unfettered surveillance practices erode our ability to speak up against abuses, organize resistance, and fully enjoy a set of fundamental rights. Throughout 2021 and for years prior, EFF has been working with partners in Latin America to foster stronger human rights standards for government access to data. Along with robust safeguards and controls, governments must commit to promote and protect strong encryption and device security—they are two sides of the same coin. And we'll keep joining forces to push for advances and uphold victories on this front in 2022 and the years to come.

This article is part of our Year in Review series. Read other articles about the fight for digital rights in 2021.

Veridiana Alimonti

Vaccine Passports: 2021 in Review

4 weeks 2 days ago

2021 has been the year of vaccines, in light of the continuing worldwide pandemic. It has also been the year of vaccine passports. To fully tell this story, let’s go back to 2020, because the term vaccine passport as many people use it has changed since then.

Early in the pandemic, there were discussions of “immunity passports” that would declare that someone had recovered from COVID-19, and which we thought were a bad idea. We, along with other civil liberties organizations, are against creating a new surveillance system that will be hard to remove and that will be inequitable for people who have health issues or do not have smartphones. In those days, before there was even a timeline for vaccines, immunity passports also created perverse incentives for people who could not shelter in place: the only route to such a credential was infection. Fortunately for us all, events kept immunity passports from gaining wide adoption.

As the hope for vaccines in 2020 became a certainty in 2021, attention shifted from immunity passports to vaccine passports. Our stance remained the same: we want equity for all and no surveillance. Thus, we raised concerns about "vaccination passports," by which we meant efforts to digitize credentials rather than rely on the tried-and-true mechanism of vaccination documents. Digital, scannable credentials, we said, are hard to separate from the introduction of new systems used to track our movements. As we said in that post, "immunizations and providing proof of immunizations are not new. However, there's a big difference between utilizing existing systems to adapt to a public health crisis and vendor-driven efforts to deploy new, potentially dangerous technology under the guise of helping us all move past this pandemic."

In 2021, though, the language changed: the term vaccine passport shifted from meaning frequent, active document checks to meaning simply vaccination records. In April, therefore, we wrote about our opposition to digital "vaccine bouncers"—proposals that required a new tracking infrastructure and normalized a culture of doorkeepers to public places. We opposed regularly requiring visitors to display a digital token as a condition of entry. We also called for equitable distribution of vaccines.

In the middle part of 2021, we remained skeptical of active, frequent checks, especially when they were outsourced to private companies with a financial interest in surveillance and in acting as vaccine bouncers. We also analyzed systems in Germany, California, New York, Illinois, Colorado, and other places. The spread of the Delta variant, anti-vaccination movements, and forgeries of both paper and digital documents further muddied the situation, especially as vaccine mandates followed around the world. Our position has remained the same: we are against surveillance and in favor of equity for all.

In 2021, that continued to mean strong support for paper documents over digital ones, because of the obvious links between digital documents and surveillance systems. As this year closes out, we are all concerned about the Omicron variant and how it will affect next year's handling of the pandemic. This very article has been rewritten more than once because of it.

As we move into 2022, we expect more surprises in our continued pandemic-influenced life. We expect that companies selling surveillance will continue to exploit the moment. We continue to advocate for measures that do not create surveillance and that treat everyone equitably. This is apt to become more complex as digital documents expand into new contexts.

Summing up, we will continue to advocate against pandemic-related surveillance. We don’t believe such surveillance will help us out of the pandemic. We also continue to advocate for equitable treatment of marginalized people. Above all, we hope we won’t be writing a similar year-end post next year.

This article is part of our Year in Review series. Read other articles about the fight for digital rights in 2021.

Jon Callas

2021 Was the Year Lawmakers Tried to Regulate Online Speech

4 weeks 2 days ago

On the biggest internet platforms, content moderation is bad and getting worse. It's difficult to get it right, and at the scale of millions or billions of users, it may be impossible. It's hard enough for humans to sift between spam, illegal content, and offensive but legal speech. Bots and AI have also failed to rise to the task.

So, it's inevitable that services make mistakes—removing users' speech that does not violate their policies, or terminating users' accounts with no explanation or opportunity to appeal. And inconsistent moderation often falls hardest on oppressed groups.

The dominance of a handful of online platforms like Facebook, YouTube, and Twitter increases the impact of their content moderation decisions and mistakes on internet users’ ability to speak, organize, and participate online. Bad content moderation is a real problem that harms internet users. 

There’s no perfect solution to this issue. But U.S. lawmakers seem enamored with trying to force platforms to follow a government-mandated editorial line: host this type of speech, take down this other type of speech. In Congressional hearing after hearing, lawmakers have hammered executives of the largest companies over what content stayed up, and what went down. The hearings ignored smaller platforms and services that could be harmed or destroyed by many of the new proposed internet regulations. 

Lawmakers also largely ignored worthwhile efforts to address the outsized influence of the largest online services—like legislation supporting privacy, competition, and interoperability. Instead, in 2021, many lawmakers decided that they themselves would be the best content moderators. So EFF fought off, and is continuing to fight off, repeated government attempts to undermine free expression online. 

The Best Content Moderators Don’t Come From Congress 

It's a well-established part of internet law that individual users are responsible for their own speech online. Users and the platforms distributing users' speech are generally not responsible for the speech of others. These principles are embodied in a key internet law, 47 U.S.C. § 230 ("Section 230"), which prevents online platforms from being held liable for most lawsuits relating to their users' speech. The law applies to small blogs and websites and to users who republish others' speech, as well as to the biggest platforms.

In Congress, lawmakers have introduced a series of bills that suggest online content moderation will be improved by removing these legal protections. Of course, it's not clear how a barrage of expensive lawsuits targeting platforms will improve online discourse. In fact, having to potentially litigate every content moderation decision will make hosting online speech prohibitively expensive, creating strong incentives to censor user speech whenever anyone complains. Anyone who's not a Google or a Facebook will have a very hard time affording to run a legally compliant website that hosts user content.

Nevertheless, we saw bill after bill that actively sought to increase the number of lawsuits over online speech. In February, a group of Democratic senators took a shotgun-like approach to undermining internet law with the SAFE Tech Act. This bill would have stripped Section 230 protections from any speech that "the provider or user has accepted payment" to create. If it had passed, SAFE Tech would have both increased censorship and hurt data privacy, as more online providers switched away from "accepting payment," and toward invasive advertising, in order to keep their protections.

The following month, we saw the introduction of a revised PACT Act. Like the SAFE Tech Act, PACT would reward platforms for over-censoring user speech. The bill would require a "notice and takedown" system in which platforms remove user speech when a requestor provides a judicial order finding that the content is illegal. That sounds reasonable on its face, but the PACT Act failed to provide safeguards, and would have allowed would-be censors to delete speech they don't like by getting preliminary or default judgments.

The PACT Act would also mandate certain types of transparency reporting, an idea that we expect to see come back next year. While we support voluntary transparency reporting (in fact, it’s a key plank of the Santa Clara Principles), we don’t support mandated reporting that’s backed by federal law enforcement, or the threat of losing Section 230’s protections. Besides being bad policy, these regulations would intrude on services’ First Amendment rights.

Last but not least, later in the year we grappled with the Justice Against Malicious Algorithms (JAMA) Act. This bill's authors blamed problematic online content on a new mathematical boogeyman: "personalized recommendations." The JAMA Act would remove Section 230 protections from platforms that use a vaguely defined "personal algorithm" to suggest third-party content. JAMA would make it almost impossible for a service to know what kind of content curation might render it susceptible to lawsuits.

None of these bills have been passed into law—yet. Still, it was dismaying to see Congress continue down repeated dead-end pathways this year, trying to create some kind of internet speech-control regime that wouldn't violate the Constitution or provoke widespread public dismay. Even worse, lawmakers seem completely uninterested in exploring real solutions, such as consumer privacy legislation, antitrust reform, and interoperability requirements, that would address the dominance of online platforms without violating users' First Amendment rights.

State Legislatures Attack Free Speech Online

While Democrats in Congress expressed outrage at social media platforms for not removing user speech quickly enough, Republicans in two state legislatures passed laws to address the platforms’ purported censorship of conservative users’ speech. 

First up was Florida, where Gov. Ron DeSantis decried Twitter’s ban of President Donald Trump and other “tyrannical behavior” by “Big Tech.” The state’s legislature passed a bill this year that prohibits social media platforms from banning political candidates, or deprioritizing any posts by or about them. The bill also prohibits platforms from banning large news sources or posting an “addendum” (i.e., a fact check) to the news sources’ posts. Noncompliant platforms can be fined up to $250,000 per day, unless the platform also happens to own a large theme park in the state. A Florida state representative who sponsored the bill explained that this exemption was designed to allow the Disney+ streaming service to avoid regulation. 

This law is plainly unconstitutional. Under the First Amendment, the government cannot require a service to let a political candidate speak on its website, any more than it can require traditional radio, TV, or newspapers to host the speech of particular candidates. EFF, together with Protect Democracy, filed a friend-of-the-court brief in a lawsuit challenging the law, Netchoice v. Moody. We won a victory in July, when a federal court blocked the law from going into effect. Florida has appealed the decision, and EFF has filed another brief in the U.S. Court of Appeals for the Eleventh Circuit.

Next came Texas, where Governor Greg Abbott signed a bill to stop social media companies that he said "silence conservative viewpoints and ideas." The bill prohibits large online services from moderating content based on users' viewpoints. It also requires platforms to follow transparency and complaint procedures. These requirements, if carefully crafted to accommodate constitutional and practical concerns, could be appropriate as an alternative to editorial restrictions. But in this bill, they are part and parcel of a retaliatory, unconstitutional law.

This bill, too, was challenged in court, and EFF again weighed in, telling a Texas federal court that the measure is unconstitutional. The court recently blocked the law from going into effect, including its transparency requirements. Texas is appealing the decision. 

A Path Forward: Questions Lawmakers Should Ask

Proposals to rewrite the legal underpinnings of the internet came up so frequently this year that we at EFF have drawn up a more detailed process of analysis. Having advocated for users' speech for more than 30 years, we've developed a series of questions lawmakers should ask as they put together any proposal to modify the laws governing speech online.

First we ask: what is the proposal trying to accomplish? If the answer is something like "rein in Big Tech," the proposal shouldn't impede competition from smaller companies, or actually cement the largest services' existing dominance. We also look at whether the legislative proposal is properly aimed at internet intermediaries. If the goal is something like stopping harassment, abuse, or stalking—those activities are often already illegal, and the problem may be better solved with more effective law enforcement, or civil actions targeting the individuals perpetrating the harm.

We've also heard an increasing number of calls to impose content moderation at the infrastructure level. In other words, shutting down content by getting an ISP, a content delivery network (CDN), or a payment processor to take action against it. These intermediaries are potential speech "chokepoints," and there are serious questions that policymakers should think through before attempting infrastructure-level moderation.

We hope 2022 will bring a more constructive approach to internet legislation. Whether it does or not, we’ll be there to fight for users’ right to free expression.

This article is part of our Year in Review series. Read other articles about the fight for digital rights in 2021.

Joe Mullin

Stalkerware: 2021 in Review

1 month ago

Stalkerware—that is, commercially available apps that can be covertly installed on another person's device to monitor their activity without their knowledge or consent—is nothing new. But 2021 has underscored just how prevalent and dangerous these apps continue to be, and how important it is for companies and governments to take action to rein them in.

2021 marked the second anniversary of the Coalition Against Stalkerware, of which EFF is a founding member. In 2021, the Coalition continued to provide training, publish tools and research, and work directly with survivors of domestic abuse and intimate partner violence and the organizations that support them. EFF also took part in dozens of awareness-raising events, including EFF at Home's Fighting Stalkerware edition in May and a talk on the state of stalkerware in the Apple ecosystem at 2021's Objective by the Sea.

A 2021 Norton Lifelock survey of 10,000 adults across ten countries found that almost 1 in 10 respondents who had been in a romantic relationship admitted to using a stalkerware app to monitor a current or former partner's device activity. The same report indicates that the problem may be worsening: Norton Labs found that "the number of devices reporting stalkerware samples on a daily basis increased markedly by 63% between September 2020 and May 2021," with the 30-day moving average rising from 48,000 to 78,000 detections. Norton Labs reported that 250,000 devices were compromised with more than 6,000 stalkerware variants in May 2021 alone, with many devices infected with multiple stalkerware apps. Meanwhile, antivirus vendor Kaspersky reported that in the first ten months of 2021, almost 28,000 of its mobile users were affected by stalkerware. The gap between the two companies' figures suggests that we may be comparing apples to oranges, but even Kaspersky's significantly lower detection count shows that stalkerware remained a significant threat in 2021.

2021 was also the year that Apple chose to enter the physical tracker market, debuting the AirTag. Apple used the vast installed base of existing iPhones to create a powerful network that gave it a major advantage over Tile and Chipolo in location tracking, but with insufficient mitigations it had also created a powerful tool for stalkers. Aside from an easily muffled beep after 36 hours (shortened to 24 after our criticism), there was no way for users outside of the Apple ecosystem to know that they were being tracked. In December, Apple introduced an Android app called Tracker Detect to allow Android users to scan for AirTags, but there is still a long way to go before Android users have the same notification abilities as iPhone users.

2021 also continued the trend of stalkerware data leaks. In February, developer Till Kottmann discovered that stalkerware app KidsGuard, which markets itself both as a stealthy way for parents to monitor their children and as a useful tool to "catch a cheating spouse," was leaking victims' data by exfiltrating it to an unprotected Alibaba cloud bucket. And in September, security researcher Jo Coscia found that stalkerware app pcTattleTale left screenshots of victims' phones entirely exposed and visible to anyone who knew the URL to visit. Coscia also showed that pcTattleTale failed to delete screenshots captured during expired 30-day trials, even though the company explicitly claimed to do so.

The FTC also cracked down on a stalkerware app maker, issuing its very first outright ban on Support King, maker of the Spyfone stalkerware app, and its CEO Scott Zuckerman. The FTC took action against Spyfone, which it says "harvested and shared data on people's physical movements, phone use and online activities through a hidden device hack," not just because the app facilitated illegal surveillance, but because, like KidsGuard and pcTattleTale, the product leaked the data collected from victims. The FTC described Spyfone's security as "slipshod," stated its intention to "be aggressive about seeking surveillance bans when companies and their executives egregiously invade our privacy," and cited our advocacy as inspiration. We hope this means we will see more bans in 2022.

In 2020, Google banned stalkerware ads. The result has been the occasional purge of stalkerware ads, including one in October 2021. While many ads were purged, TechCrunch journalist Zack Whittaker found that "several stalkerware apps used a variety of techniques to successfully evade Google's ban on advertising apps for partner surveillance and were able to get Google ads approved." The whack-a-mole continues.

With your support, we can move beyond whack-a-mole and continue to fight stalkerware through policy, education, and detection in 2022.

This article is part of our Year in Review series. Read other articles about the fight for digital rights in 2021.

Eva Galperin