Digital Rights Updates with EFFector 34.1

5 hours 27 minutes ago

Start the new year right by keeping up with the latest news on your digital rights! Version 34, issue 1 of our EFFector newsletter is out now. Catch up on the latest EFF news, from our celebration of Copyright Week to Google releasing a "disable 2g" feature for new Android smartphones, by reading our newsletter or listening to the new audio version below. 

LISTEN ON YOUTUBE

EFFECTOR 34.01 - Ten years after the "Internet Blackout"

Make sure you never miss an issue by signing up to receive EFFector by email as soon as it's posted! Since 1990, EFF has published EFFector to help keep readers on the bleeding edge of their digital rights. We know that the intersection of technology, civil liberties, human rights, and the law can be complicated, so EFFector is a great way to stay on top of things. The newsletter is chock full of links to updates, announcements, blog posts, and other stories to help keep readers—and now listeners—up to date on the movement to protect online privacy and free expression.

Thank you to the supporters around the world who make our work possible! If you're not a member yet, join EFF today to help us fight for a brighter digital future.

Christian Romero

EFF Sues the U.S. State Department over Documents Related to Activist Leila Khaled

1 day 6 hours ago

Over the last few years, technology platforms have repeatedly censored online expression by Palestinians and their allies. In one example of this, technology companies have refused to host speech by Palestinian activist Leila Khaled. Khaled is associated with the Popular Front for the Liberation of Palestine, a group on the State Department’s list of designated terrorist organizations. Since 2020, Zoom, Facebook, YouTube, and Eventbrite have all censored academic events hosted by colleges and universities at which Khaled was invited to appear. 

EFF filed a Freedom of Information Act request for records from the State Department last summer to find out whether the federal government directed technology platforms to censor Khaled’s speech. Six months later, the State Department has still not confirmed whether any such records exist, much less turned records over as required by the Freedom of Information Act. EFF is suing to force the agency to comply with its obligations under FOIA and to learn what role the federal government played in this platform censorship.

As EFF Legal Director Corynne McSherry wrote in 2020, "Particularly now, when so much intellectual debate depends on Internet communication, we need Internet services willing to let that debate happen. And if those services don't exist, now would be a good time to create them—and for universities to commit to using them."

You can read the complaint below:

Naomi Gilens

DSA: EU Parliament Vote Ensures a Free Internet, But a Final Regulation Must Add Stronger Privacy Protections

1 day 9 hours ago

The European Parliament had an important decision to make this week about the Digital Services Act (DSA). After months of considering amendments, members oscillated between several policy options on how to regulate online platforms, including the dystopian idea of mandating dominant platforms act as internet police, monitoring content on behalf of governments and collecting user information to keep the internet "safe."

European Parliament Got Many Things Right...

In today's vote, the EU Parliament made the right choice. It rejected the idea of a made-in-Europe filternet, and refrained from undermining pillars of the e-Commerce Directive that are crucial to a free and democratic society. Members of Parliament (MEPs) followed the lead of the Internal Market and Consumer Protection (IMCO) Committee and opted against upload filters and unreasonable takedown obligations, made sure that platforms don't risk liability just for reviewing content, and rejected unworkably tight deadlines to remove potentially illegal content as well as interference with private communication. Further analysis is required but, on the whole, the EU Parliament avoided following in the footsteps of prior controversial and sometimes disastrous EU internet rules, such as the EU copyright directive.

Parliamentarians also advocated for greater transparency by platforms, more professional content moderation, and users' rights rather than speech controls and upload filters. In other words, lawmakers focused on how processes should work on online platforms: reporting problematic content, structuring terms of use, and responding to erroneous content removals. If the proposed DSA becomes law, users will better understand how content decisions are made and enjoy a right to reinstatement if platforms make mistakes.

This is the right approach to platform governance regulation. It was a victory for civil society and other voices dedicated to making sure that all users are treated equally, including the Digital Services Act Human Rights Alliance, a group of civil society organizations from around the globe advocating for transparency, accountability, and human rights-centered lawmaking. For example, the Parliament rejected an unworkable and unfair proposal to make some media content unblockable so that publishers could profit under ancillary copyright rules. The Parliament also decided to step up efforts against surveillance capitalism by adopting new rules that would restrict the data-processing practices of big tech companies. Under the new rules, Big Tech will no longer be allowed to engage in targeted advertising if it is based on users' sensitive personal data. A "dark patterns" provision also forbids companies from using misleading tabs and obscuring functions to trick users into doing something they didn't mean to do.

But It Also Got Some Things Wrong

The DSA strengthens the right of users to retain anonymity online and promotes options for users to use and pay for services anonymously wherever reasonable efforts can make this possible. However, the DSA also requires mandatory cell phone registration for pornographic content creators, posing a threat to digital privacy. Also, no further improvements were made to ensure the independence of "trusted" flaggers of content, which can be law enforcement agencies or biased copyright industry associations.

Even worse, non-judicial authorities can order the removal of problematic content and request platforms to hand over sensitive user information without proper fundamental rights safeguards. That recent calls by EFF and its partners to introduce such safeguards haven't found majority support shows that lawmakers are either oblivious to, or unconcerned about, the perils of law enforcement overreach felt acutely by marginalized communities around the globe.

Negotiations: Parliament Must Stand Its Ground

It is clear that the DSA will not solve all challenges users face online, and we have a long way to go if we wish to rein in the power of big tech platforms. However, the EU Parliament's position, if it becomes law, could change the rules of the game for all platforms. During the upcoming negotiations with the European Council, whose positions are markedly less ambitious than those of the Parliament, we will be working to ensure that Parliament stands its ground and that any changes only further protect online expression, innovation, and privacy.

Christoph Schmon

In the Internet Age, Copyright Law Does Far More Than Antitrust to Shape Competition

2 days 2 hours ago

We're taking part in Copyright Week, a series of actions and discussions supporting key principles that should guide copyright policy. Every day this week, various groups are taking on different elements of copyright law and policy, addressing what's at stake and what we need to do to make sure that copyright promotes creativity and innovation.

There has been a notable, and long overdue, flurry of antitrust actions targeting Big Tech, launched by users, entrepreneurs, and governments alike. And in the US and abroad, policymakers are working to revamp our antitrust laws so they can be more effective at promoting user choice.

These are positive developments, but this renewed focus on antitrust risks losing sight of another powerful legal lever: copyright. Because there’s copyrighted software in every digital device and online service we use, and because the internet is essentially a giant machine for copying digital data, copyright law is a major force that shapes technology and how we use it. That gives copyright law an enormous role in enabling or impeding competition.

The Digital Millennium Copyright Act (DMCA) is a case in point. It contains two main sections that have been controversial since they went into effect in 2000. The "anti-circumvention" provisions (sections 1201 et seq. of the Copyright Act) bar circumvention of access controls and technical protection measures. The "safe harbor" provisions (section 512) protect service providers who meet certain conditions from monetary damages for the infringing activities of their users and other third parties on the net.

Congress ostensibly passed Section 1201 to discourage would-be infringers from defeating DRM and other access controls and copy restrictions on creative works. In practice, it's done little to deter infringement – after all, large-scale infringement already invites massive legal penalties. Instead, Section 1201 has been used to block competition and innovation in everything from printer cartridges to garage door openers, videogame console accessories, and computer maintenance services. It's been used to threaten hobbyists who wanted to make their devices and games work better. And the problem only gets worse as software shows up in more and more places, from phones to cars to refrigerators to farm equipment. If that software is locked up behind DRM, interoperating with it so you can offer add-on services may require circumvention. As a result, manufacturers get complete control over their products, long after they are purchased, and can even shut down secondary markets (as Lexmark did for printer ink, and Microsoft tried to do for Xbox memory cards).

On the other hand, Section 512's "safe harbors" are essential to internet innovation, because they protect service providers from monetary liability based on their users' infringing activities. To receive these protections service providers must comply with the conditions set forth in Section 512, including "notice and takedown" procedures that give copyright holders a quick and easy way to disable access to allegedly infringing content. Without these protections, the risk of potential copyright liability would prevent many online intermediaries—from platforms to small community websites to newspapers and ISPs—from hosting and transmitting user-generated content. Without the DMCA, much of big tech wouldn't exist today – but it is equally true that if we took it away now, new competitors would never emerge to challenge today's giants. Instead, the largest tech companies would strike lucrative deals with major entertainment companies and other large copyright holders, and everyone else who hosted or transmitted third-party content would just have to shoulder the risk of massive and unpredictable financial penalties—a risk that would deter investment.

There is a final legal wrinkle: filtering mandates. The DMCA’s hair-trigger takedown process did not satisfy many rightsholders, so large platforms, particularly Google, also adopted filtering mechanisms and other automated processes to take down content automatically, or prevent it from being uploaded in the first place. In the EU, those mechanisms are becoming mandatory, thanks to a new copyright law that conditions DMCA-like safe harbors on preventing users from uploading infringing content. Its proponents insisted that filters aren't required, but in practice that’s the only way service providers will be able to comply. That’s created a problem in the EU – as the Advocate General of the EU Court of Justice acknowledged last year, automated blocking necessarily interferes with the human right to free expression.

But filtering mandates create yet another problem: they are expensive. Google has famously spent more than $100 million on developing its Content ID service – a cost few others could bear. If the price of hosting or transmitting content is building and maintaining a copyright filter, investors will find better ways to spend their money, and the current tech giants will stay comfortably entrenched.

If we want to create space for New Tech to challenge Big Tech, antitrust law can't be the only solution. We need balanced copyright policies as well, in the U.S. and around the world. That's why we fought to stop the EU's mandate and continue to fight to address the inevitable harms of its implementation. It's why we are working hard to stop the current push to mandate filters in the U.S. as well. We also need the courts to do their part. To that end, EFF just this month asked a federal appeals court to block enforcement of the copyright rules in Section 1201 that violate the First Amendment and criminalize speech about technology. We have also filed amicus briefs in numerous cases where companies are using copyright to shut out competition. And we'll keep fighting, in courts, legislatures, agencies, and the public sphere, to make sure copyright serves innovation rather than thwarting it.

Corynne McSherry

Fact-Checking, COVID-19 Misinformation, and the British Medical Journal

2 days 5 hours ago

Throughout the COVID-19 pandemic, authoritative research and publications have been critical in gaining better knowledge of the virus and how to combat it. However, unlike previous pandemics, this one has been further exacerbated by a massive wave of misinformation and disinformation spreading across traditional and online social media.

The increasing volume of misinformation and urgent calls for better moderation have made processes like fact-checking—the practice that aims to assess the accuracy of reporting—integral to the way social media companies deal with the dissemination of content. But a valid question persists: who should check facts? This is particularly pertinent when one considers how such checks can shape perceptions, encourage biases, and undermine longstanding, authoritative voices. Social media fact-checks currently come in different shapes and sizes; for instance, Facebook outsources the role to third-party organizations to label misinformation, while Twitter's internal practices determine which posts will be flagged as misleading, disputed, or unverified.

That Facebook relies on external fact-checkers is not in and of itself a problem – there is something appealing about Facebook relying on outside experts and not being the sole arbiter of truth. But Facebook vests a lot of authority in its fact-checkers and then mostly steps out of the way of any disputes that may arise around their decisions. This raises concerns about Facebook fulfilling its obligation to provide its users with adequate notice and appeals procedures when their content is moderated by its fact-checkers.

According to Facebook, its fact-checkers may assign one of four labels to a post: "False," "Partly False," "Altered," or "Missing Context." The label is accompanied by a link to the fact-checker and a more detailed explanation of that decision. Each label triggers a different action from Facebook. Content rated either "False" or "Altered" is subject to a dramatic reduction in distribution and gets the strongest warning labels. Content rated "Partly False" also gets reduced distribution, but to a lesser degree than "False" or "Altered." Content rated "Missing Context" is not typically subject to distribution reduction; rather, Facebook surfaces more information from its fact-checking partners. But under its current temporary policy, Facebook will reduce distribution of posts about COVID-19 or vaccines marked as "Missing Context" by its fact-checkers.

As a result, these fact-checkers exert significant control over many users' posts and how they may be shared.

A recent incident demonstrates some of the problems with this system.

In November 2021, the British Medical Journal (BMJ) published a story about a whistleblower’s allegations of poor practices at three clinical trial sites run by Ventavia, one of the companies contracted by Pfizer to carry out its COVID-19 vaccine trials. After publication, BMJ’s readers began reporting a variety of problems, including being unable to share the article and being prompted by Facebook that people who repeatedly share “false information” might have their posts removed from Facebook’s News Feed.

BMJ’s article was fact-checked by Lead Stories, one of the ten fact-checking companies contracted by Facebook in the United States. After BMJ contacted Lead Stories to inquire about the flagging and removal of the post, the company maintained that the “Missing Context” label it had assigned the BMJ article was valid. In response to this, BMJ wrote an open letter to Mark Zuckerberg about Lead Stories’ fact-check, requesting that Facebook allow its readers to share the article undisturbed. Instead of hearing from Facebook, however, BMJ received a response to its open letter from Lead Stories.

Turns out, Facebook outsources not just fact-checking but also communication. According to Facebook, “publishers may reach out directly to third-party fact-checking organisations if they have corrected the rated content or they believe the fact-checker’s rating is inaccurate.” Then Facebook goes on to note that “these appeals take place independently of Facebook.” Facebook apparently has no role at all once one of its fact-checkers labels a post.

This was the first mistake. Although Facebook may properly outsource its fact-checking, it’s not acceptable to outsource its appeals process or the responsibility for follow-up communications. When Facebook vests fact-checkers with the power to label its users' posts, Facebook remains responsible for those actions and their effects on its users' speech. Facebook cannot merely step aside and force its users to debate the fact-checkers. Facebook must provide, maintain, and administer its own appeals process.

But more about this in a while; now, back to the story:

According to Lead Stories' response, the reasons for the "Missing Context" label could be summarized in two points: the first concerned the headline and other substantive parts of the publication, which, according to Lead Stories, overstated the jeopardy and unfairly disqualified the data collected from the Pfizer trials; the second doubted the credibility of the whistleblower, who, in other instances, appeared not to have always expressed unreserved support for COVID vaccines on social media. Lead Stories claims it was further influenced by the fact that the article was being widely shared as part of a larger campaign to discredit vaccines and their efficacy.

What happens next is interesting. The “appeals” process, as it were, played out in the public. Lead Stories responded to BMJ’s open letter in a series of articles published on its site. And Lead Stories further used Twitter to defend its decision and criticize both BMJ and the investigative journalist who was the author of the article. 

What does this all tell us about Facebook’s fact-checking and the implications for the restriction of legitimate, timely speech and expression on the platform? It tells us that users with legitimate questions about being fact-checked will not get much help from Facebook itself, even if they are a well-established and well-regarded scholarly journal. 

It is unacceptable that users who feel ill-served by Facebook must navigate a whole new and complex system run by a party they were never directly involved with. Since 2019, Facebook has endorsed the Santa Clara Principles, which, among other things, require companies to ensure a clear and easily accessible appeals process. This means that "users should be able to sufficiently access support channels that provide information about the actioning decision and available appeals processes once the initial actioning decision is made." Does Lead Stories offer such an appeals process? Has it signed up to the Santa Clara Principles? Does Facebook require its outside fact-checkers to offer robust notice and appeals processes? Has Facebook even encouraged them to?

Given the current state of misinformation, there is really no question that fact-checking can help navigate the often-overwhelming world of content moderation. At the same time, fact-checking should not mean that users are exposed to a whole new ecosystem of new actors, new processes, and new rules. Facebook and other technology companies cannot encourage processes that detach the checking of facts from the overall content moderation process. Instead, they must take on the task of creating systems that users can trust and depend on. Unfortunately, the current system created by Facebook fails to achieve that.

Konstantinos Komaitis

Copyright Shouldn’t Stand in the Way of Your Right to Repair

3 days 2 hours ago

We're taking part in Copyright Week, a series of actions and discussions supporting key principles that should guide copyright policy. Every day this week, various groups are taking on different elements of copyright law and policy, addressing what's at stake and what we need to do to make sure that copyright promotes creativity and innovation.

If you bought it, you own it and you can do what you want with it. That should be the end of the story—whether we’re talking about a car, a tractor, a smartphone, a computer, or really anything you buy.

Yet product manufacturers have chipped away for years at the very idea of ownership, using the growing presence of software on devices to make nonsense arguments about why your tinkering with the things you own violates their copyright. It's gotten so bad that there's a booming market for 40-year-old tractors that don't rely on software. We've worked for years with advocates at the Repair Coalition, iFixit, U.S. PIRG, and countless others to get lawmakers to make it crystal clear that people have the right to tinker with their own stuff.

It’s working. The wind is at our backs right now. In just the past two years, the right to repair has won at the ballot box in Massachusetts, received a supportive directive from the Biden Administration, and made some gains at the Library of Congress to expand repair permissions.

Those wins have now built a lot of momentum for taking this fight to statehouses like never before. Advocates have gotten lawmakers in ten states to commit to or introduce bills affirming the right to repair. Some of these bills are general right-to-repair bills, while others focus on specific products such as cars or agricultural equipment. These efforts reach all corners of the country—from Massachusetts to Hawaii, from Florida to Washington. And it's only January. As more states reach their deadlines to introduce new bills, EFF will be working to support those efforts and get our members involved in as many states as we can. Stay tuned for ways you can help at the state level throughout the year.

Change isn't only coming in the form of possible legislation; pressure from consumers and activists has moved the needle in other ways. Even companies that have historically been the strongest opponents of right-to-repair legislation have made changes that acknowledge how important it is to their customers. Shareholder activism has changed policy at Microsoft to be friendlier to the right to repair. Apple, which has been hugely critical of right to repair legislation in the past, announced a "Self Service Repair" program that makes genuine Apple parts and tools for a handful of products available for do-it-yourself repairs. We'll be watching to make sure these companies live up to their promises.

At the heart of the matter, the right to repair your own things is pure common sense. Copyright shouldn't dictate where you can take your cracked smartphone for repairs. It shouldn't stop a mechanic or a medic from accessing a manual they need to fix vital equipment. It should never interfere with a farmer's ability to get time-sensitive work done while waiting on an authorized repair provider. Copyright has been used for too long to chip away at the very idea of ownership. It's time for state policymakers to join the growing number of people who know that makes no sense at all.

Hayley Tsukayama

Podcast Episode: How Private is Your Bank Account?

3 days 14 hours ago
Podcast Episode 108

Your friends, your medical concerns, your political ideology—financial transactions tell the story of your life in intimate detail. But U.S. law has failed to protect this sensitive data from prying eyes. Join EFF's Cindy Cohn and Danny O'Brien as they talk to Marta Belcher, one of the leading lawyers working on issues of financial censorship and financial privacy, to help you understand why we need better protections for our financial lives—and the important role courts must play in getting things right.

Click below to listen to the episode now, or choose your podcast player:

Listen on the Simplecast player: https://player.simplecast.com/70b42d72-f770-4cef-8d3b-e08a55d925ca (Privacy info: this embed will serve content from simplecast.com)

When the Supreme Court considered the issue of financial privacy under the Bank Secrecy Act in the 1970s, we were living in a really different time. Online shopping, Apple Pay, and tools like PayPal and Venmo didn't exist yet. But even as our financial lives have become increasingly complex, digital, and detailed, the Supreme Court hasn't revisited its approach to our rights. Instead, it has allowed this information to be handed over to the government by default, ensnaring hundreds of millions of people suspected of nothing instead of carefully targeting a few suspects. Marta thinks it's time to revisit this situation.

Marta offers a deep dive into financial surveillance and censorship. In this episode, you’ll learn about: 

  • The concept of the third party doctrine, a court-created idea that law enforcement doesn’t need to get a warrant to access metadata shared with third parties (such as companies that manage communications and banking services);
  • How financial surveillance can have a chilling effect on activist communities, including pro-democracy activists fighting against authoritarian regimes in Hong Kong and elsewhere;
  • How, under the Bank Secrecy Act, your bank shares sensitive banking details about its customers with the government by default, without any request from law enforcement to prompt it;
  • Why the Bank Secrecy Act as it’s currently interpreted violates the Fourth Amendment; 
  • The potential role of blockchain technologies to import some of the privacy-protective features of cash into the digital world;
  • How one recent case missed an opportunity to better protect the data of cryptocurrency users;
  • How financial surveillance is a precursor to financial censorship, in which banking services are restricted for people who haven’t violated the law. 

Belcher serves as general counsel of Protocol Labs, chair of the Filecoin Foundation, and special counsel to the Electronic Frontier Foundation. She was previously an attorney focusing on blockchain and emerging technologies at Ropes & Gray in San Francisco. She has spoken about blockchain law around the world, including presenting at the World Economic Forum, testifying before the New York State Senate, speaking in the European Parliament, and testifying before the United States Congress. You can find Marta on Twitter @MartaBelcher.

If you have any feedback on this episode, please email podcast@eff.org. You can find a copy of this episode on the Internet Archive.

Below, you’ll find legal resources—including links to important cases, books, and briefs discussed in the podcast—as well as a full transcript of the audio.

Resources:

Financial Surveillance:

Payment Processors and Censorship:

Cryptocurrency:

Third-Party Doctrine:

Transcript:

Marta: When you're going about your life and you're engaging in financial transactions, all of that data is really exposed. Our financial transactions really paint an intimate portrait of our lives. Our financial transactions really expose our religious beliefs or our family status or a medical history, our location. And these are things that I think are very sensitive, and that should have full fourth amendment protection. These are things that ought to be private. 

Cindy: That’s Marta Belcher. One of the lawyers pioneering privacy and user freedom in the emerging world of blockchain technologies. She’s here to explain why financial privacy is vital for everyone and how the digitization of our financial lives has begun to erode that privacy and with it the protections that activists and organizers and all the rest of us need all around the world. 

Danny: Marta will also explain the ins and outs of important legal cases that have undermined our financial privacy. 

Cindy:  I'm Cindy Cohn. And I'm the Executive Director of the Electronic Frontier Foundation.

Danny: And I'm Danny O'Brien. Welcome to How to Fix the Internet, a podcast of the Electronic Frontier Foundation.

Cindy: So we are delighted to have Marta with us today to talk about financial surveillance. Marta, you and EFF go back a long way. You were an intern with us, way back when you were in law school, but since then you've blazed a trail that's been just so fun for us to watch. You recently testified before Congress on financial privacy and the practical uses of cryptocurrency.

And before that you testified before the New York legislature. 

So, to make it official: Marta is the general counsel to Protocol Labs, and she serves as the chair of the board of the Filecoin Foundation.

Danny: Where I should add you recently hired me. I don't know whether that's still in your good books there, Cindy.

Cindy: We're working our way to forgiving Marta about that. But all along the way, you've been one of EFF's official advisors in this space as our special counsel, and at each step of the way we've relied on your wisdom, and honestly, the feeling that you were just living a little further into the future than the rest of us. So Marta, thank you for coming on the podcast.

Marta: Oh, my gosh. Thank you so much for having me. I am so excited to be here and to get to talk to some of my favorite people on the planet.

Cindy: Oh, it's just a love fest all around. Let's talk about financial surveillance. What kind of information about how we spend our money in our financial businesses is out there and what's happening to it? 

Marta: In the financial system financial transactions that go through certain intermediaries like banks, are often turned over to the government by default. So when there’s been a financial transaction over a certain amount for example, financial institutions will immediately turn that information over to the government, regardless of whether the government has specifically requested that information and without the government having to go and get a warrant to get that specific information. There's also requirements that for example, businesses, even if they receive something like cash, so not even electronic purchases, over a certain amount that they actually have to by default, file a form with the United States government that says I received a transaction in cash over X amount, and here's the identity of the person who handed me that cash.   

Cindy: So when you talk about financial transactions, can you make that real for us? What are the kinds of things that the US government is getting access to?  

Marta: The thing that we're talking about here is people's financial transactions, which includes for example, transactions that they're doing via their bank. It includes transactions that are done, for example, via cryptocurrency and that's things like, making purchases, sending money back and forth buying things, particularly if you're, if you're buying them electronically, but also if you're buying them with cash. So there's sort of a wide range of financial transactions that are subject to government surveillance.

Cindy: How did the United States get into this place where we treat financial transactions like they're, you know, not vitally private to people.

Marta: This is really one of the things that I find so frustrating about working on policy around financial surveillance is that for whatever reason, we seem to have gotten to a place where everyone accepts that financial surveillance in the banking system is totally normal.

Cindy: What should people know about the Bank Secrecy Act and how it plays into this whole story?

Marta: I think that the important thing for folks to know about the Bank Secrecy Act is that it effectively imposes reporting requirements on banks, so that for certain financial transactions those are turned over to the government without a warrant, en masse by default. So the issue here is that instead of law enforcement having to go get a warrant in order to get particular financial information, not only do they have the ability to just go to financial institutions and get that information, but actually it gets turned over to them by default. 

Cindy: So a warrant means you have to go in one by one and get information about a particular crime or a particular person. A subpoena lets you go a little more broadly without having probable cause or a judge sign off. And what I'm hearing is the Bank Secrecy Act actually flips that on its head and it starts out that the government gets the information rather than them having to go through any hoops at all. Is that a fair summary?

Marta: Exactly. Exactly. And that is exactly why I think it's pretty shocking. This is something that, in my view, clearly violates the fourth amendment, and I really find it shocking that we see this in our society today as being totally normal and acceptable. That somehow financial surveillance is different than other surveillance.

Cindy: Part of the problem here is the Supreme court precedent. There is a decision from decades ago about this. Can you talk a little about that?

Marta: So there was a challenge to the Bank Secrecy Act in the 1970s. And unfortunately the Supreme court at the time held that because of a thing called the third party doctrine, as it existed at the time, the Bank Secrecy Act requirements that, for example, banks turn over information about their customers by default, without a warrant, didn't violate the fourth amendment. And that is as a result of that 1976 Supreme court case, US v Miller.

Cindy: If you look at the Miller case, and some similar cases, I mean, they really existed at a simpler time. That people's banking and their financial transactions, first of all, many of them were not available to their bank at all because things happened in cash. But otherwise things were just a simpler time. And I think we, I feel the same way about this with the financial side, as you do with people's email or other communications, you know, suddenly when things get digitized, there's much more information available. And so the default rule, which might've been okay in a simpler time, makes less and less sense in a more complicated time. Can you talk a little bit about the real world consequences of all this surveillance?

Marta: You know, you can imagine why a court in 1976 would say, okay, if you're turning over, details about people's financial transactions from, you know, from a bank to the government, the amount that you can learn about a person is pretty limited in 1976. Right. And of course, if you fast forward to today, you know, people's financial transactions really paint a detailed picture of their lives, right? It paints a picture of who they are interacting with, who they're associating with, what their religion is, what their location is. 

Danny: How to Fix the Internet is supported by the Alfred P. Sloan Foundation's program in the public understanding of science, enriching people's lives through a keener appreciation of our increasingly technological world and portraying the complex humanity of scientists, engineers, and mathematicians.

Cindy: It seems like part of this problem originates in the courts and in their interpretation of the Bank Secrecy Act. So let's, let's drill down a little bit, you know, a lawyer to lawyer with lots of non lawyers listening. The core thing here is something we call the third party doctrine, which provides that your fourth amendment rights end when a third party has your data. This is something EFF has worked to end for a very long time. And for those of you that have listened for a while, it was the topic of episode 3 of How to Fix the Internet with our friend Jumana Musa. So tell me about how the third party doctrine applies in the context of financial records.

Marta: Yeah, absolutely. So, I mean, going back even further, the fourth amendment really requires that law enforcement obtain a warrant supported by probable cause before they are conducting a search or seizure. And so why is it that in the financial system, law enforcement can engage in mass surveillance of bank customers without a warrant? So the answer is: the third-party doctrine, which is the idea that people don't have a reasonable expectation of privacy in the data that they share with a third party, like a bank. And so that was why that was what the Supreme court was relying on in 1976 in US v Miller when it held that the Bank Secrecy Act didn't violate the fourth amendment, it was because of the third party doctrine.

Cindy: The Miller case is interesting because it was about the cops trying to figure out whether somebody was illegally distilling whiskey. And if you look at the case, they actually probably could've gotten a warrant. They knew a lot about this guy. The thing that we're arguing for is the difference between the cops having free-range access to people's financial information and the cops having to do a probable cause warrant. And in most of these situations, if you drill down, the police could have easily made their case to a judge, and just didn't want to. Again, I think a lot of this stuff around financial privacy really comes down to things that make cops' jobs as easy as possible. But the entire thing about civil liberties is to make the cops' job harder so that we have a zone of privacy that we can live in.

Are we safer because they have to automatically report any transaction of $10,000 or more when we know that the vast majority of those are going to be perfectly innocent? Or would we be safer if we made the cops actually do the work that they need to do to do probable cause and identify the suspects through all the other ways in which you can investigate. And it, of course, it's an important issue about all the other innocent people who are sideswiped along the way.

Marta: Yeah, I think that's really well put. And I think the thing that I would expand on is fundamentally if the thing we are optimizing for is solving every single crime, we could live in a society where people have cameras following them around at all times in their homes and everything is recorded, right? So there's a spectrum. And the way that the constitution balances civil liberties with the interests of enforcing the laws is the fourth amendment, which is to say that if there is probable cause law enforcement can go and show there's probable cause and get a warrant in order to obtain information. And so it's really the difference between does law enforcement need to go get that warrant to show probable cause in order to obtain information, which is what the fourth amendment requires, or do they have access to that information by default, even when there is no probable cause? Do they have the ability to look at people's transactions, even when there's no reason to believe that those transactions are in any way associated with crime? And that's really fundamentally the issue from a civil liberties perspective.

Cindy: I guess that leads to the obvious question. Do you think this is all constitutional?

Marta: I absolutely do not think that the Bank Secrecy Act, as it's applied today, is constitutional. And, you know, unfortunately the Supreme court disagreed with me on that, but that was back in 1976. I really do think that the court would come to a different decision if it was faced with that challenge again, for a variety of reasons: the extent to which the surveillance under the Bank Secrecy Act has expanded, but I think more importantly, and as a testament to EFF's work in the decades since that Miller decision, the Supreme court has really issued strong pro-privacy opinions in multiple cases. So they've been chipping away at the third party doctrine in the context of the digital world. So for example, the Supreme court held in Carpenter v US that law enforcement must have a warrant in order to obtain location information from a cell phone company. And really, I think that goes to show that the information that could be gleaned from bank data in the 1970s is just a complete world away from the picture of a person's life that can be painted with access to digital financial transactions today.

Danny: There seems to be a sort of global spread in this assumption that financial data is fair game for any country. Are there any sort of examples that really bring home just what it means to have the local state be able to peer directly into your day-to-day transactions?

Marta: Last year when we had the Hong Kong protests there were these really powerful pictures that showed long lines at the subway stations. As these pro-democracy protestors were waiting to purchase their tickets with cash because they didn't want their electronic purchases to place them at the scene of the protest. And so for me, that really underscores the importance of the ability for people to engage in anonymous transactions for civil liberties, and really underscores that a cashless society or a society where all transactions are tracked is really a surveillance society.

Danny: Do you think that that cryptocurrency really addresses some of the sort of privacy issues?

Marta: I think the most important thing about cryptocurrency is that it takes the civil liberties enhancing benefits of cash and imports them into the online world. I think that for me is the most important thing about the technology and because of that ability to transact anonymously, cryptocurrency has become a target of regulators, lawmakers to try to expand this surveillance to the cryptocurrency space. But for me, the fact that cryptocurrencies can enable anonymous transactions is a feature, not a bug. 

Cindy: Now we know cryptocurrency transactions can enable anonymity and transactions, but so far anyway, that's not really what we're seeing. Can you talk a little bit about you know, how the laws interacted with it so far, the, you know, specifically I'm thinking about the Gratkowski case.

Marta:  So I think first of all, it's important to say that not all cryptocurrency transactions are anonymous, many of them are actually pseudonymous. So Bitcoin for example, the Bitcoin ledger, the Bitcoin blockchain is a publicly viewable ledger of all transactions. So you can actually go see that user 123 sent one Bitcoin to user 456. Right? And if you are able to figure out that Marta is user  123, and Cindy is user 456, you can actually see anyone in the world can see that I have sent one Bitcoin to Cindy. And so what happens is you have these choke points such as cryptocurrency exchanges, which is where those cryptocurrency exchanges will do identity checks. In the Gratkowski case basically the law enforcement had gone to an exchange and basically done that. They had gone in and said we want to know who user 123 is. Right. And based on that, we're able to arrest this person. Now they could, as we've been discussing, they could have gone and gotten a warrant, but instead they just asked the exchange and the exchange just handed over that information. So the defendant, Gratkowski, challenged that based on the fourth amendment and that, went up to the Fifth Circuit court of appeals. Unfortunately the fifth circuit held that because of the third-party doctrine and because of US v Miller, the law enforcement did not need a warrant to go and get that information from the exchange. And I think that was the wrong decision and I think that that court really missed an opportunity to follow the Supreme court's lead in recognizing  that there are stronger privacy protections for digital data that's held by third parties.
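
To make the pseudonymity point above concrete, here is a minimal sketch in plain Python. It is not the actual Bitcoin protocol, and the pseudonyms, amounts, and the exchange_kyc record are invented for illustration; it only shows how a public, pseudonymous ledger plus a single identity check at an exchange can expose a person's entire transaction history.

from dataclasses import dataclass
from typing import List

@dataclass
class Tx:
    sender: str     # pseudonym, e.g. "user123"
    recipient: str  # pseudonym, e.g. "user456"
    amount: float   # amount transferred

# The ledger is public: anyone in the world can read every entry.
public_ledger: List[Tx] = [
    Tx("user123", "user456", 1.0),
    Tx("user123", "user789", 0.5),
    Tx("user456", "user123", 0.2),
]

# Hypothetical identity check ("KYC") performed by an exchange, the choke point.
exchange_kyc = {"user123": "Marta"}

def history_for(real_name: str) -> List[Tx]:
    # Every ledger entry touching any pseudonym linked to this person.
    pseudonyms = {p for p, name in exchange_kyc.items() if name == real_name}
    return [tx for tx in public_ledger
            if tx.sender in pseudonyms or tx.recipient in pseudonyms]

# One identity record is enough to expose the entire public transaction history.
print(history_for("Marta"))

The ledger itself never stores a name; a single identity link held by a third party is enough to surface every entry that touches the linked pseudonym.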

Cindy: So let's go to my favorite part, which is how do we fix all of this? I think you've pointed to some ways. But let's, let's talk about them, what does the world look like if we get this right? 

Marta: I think there are a couple of things. I think the big one is, you know, there's really no reason that we need to take the financial surveillance of the traditional banking system and extend it out to cryptocurrency, just in the, in the cryptocurrency context specifically. And so we could really utilize this technology that enables people to make anonymous transactions, and really utilize that for enhancing civil liberties. The thing that I hope will happen is that there will be a fourth amendment challenge to the Bank Secrecy Act, as it is currently applied. And that the Supreme court would come out differently today and would basically decide if  the government wants to get this detailed financial information about bank customers, they do have to go get a warrant in order to do it.

Cindy: So, you know, in our future world, your transactions are your own, you get to buy what you want, whether that's a ticket to attend a protest or opening a bank account to start your opposition work against a dictator, or whether you just simply want to, you know, buy something without the government looking over your shoulder, you get to do all of that. You're free to do all of that. And if the government thinks you're doing something wrong, they have to go to a judge and get a warrant to get access to your information. 

Marta: Right now, one of the other issues in this space beyond just financial surveillance is the amount of censorship by financial intermediaries. So we've seen repeatedly Visa and MasterCard and PayPal and other financial intermediaries cut off access to financial services for all sorts of different legal websites, legal speech, and merely because of their own sort of moral whim. So some examples are adult booksellers, social networks, whistleblower websites, have all sort of suffered from financial censorship.

Cindy: I think the other piece of this is that it's not only that the companies are engaging in some moralistic decisions about who gets to do transactions and who doesn't. We have this thing that we call jawboning, right, which is kind of a newly emerging term for politicians leaning on platforms to cut some people off or limit what people can do, because the politician wants to make political points. And this, I would say, tends to come up around election time a lot. So I think the other thing that we get in this world is not only that the corporations don't feel a push to be moralistic about who gets to do financial transactions, but they also aren't vulnerable to pressure from governments, US and otherwise, to do it for them, outsourcing the censorship that a politician can't do directly to a private company.

Marta: I think it's a huge vulnerability that the way electronic payments work really makes these payment systems a choke point for controlling online content. And we have seen, as you said, instances of government officials actually pushing for financial services to cut off particular websites, particular speech. Luckily, you know, thanks in part to EFF submitting an amicus brief in at least one of those cases, the Backpage v. Dart Seventh Circuit case, there have been findings that doing so would violate the first amendment.

Danny: So I started this thinking of this sort of solution, this future as being kind of the same as what we have now, but in the world of cash, right. Cash is reasonably protected, but it seems like part of the solution would actually be broader than that. It would give us more choice and more alternatives in a digital world that you would not only just deal with the limitations of cash, but you would also be able to escape the limitations of credit card companies and you will be able to pick and choose, who to transact with based on what you want to do rather than what the credit card companies want you to do. Is that right?

Marta: I think that there's a really interesting question for advocacy organizations in the financial space as to are we going to draw the line at, well, whatever restrictions there are on cash that's okay. We can extend those to other types of technologies as well. Or are we going to take a stronger stance and say, you know, actually we think all of these types of reporting requirements, including those that apply to cash are violations of the fourth amendment and that that should not be extended into new technologies.

Cindy: So Marta, what values are we trying to preserve and support in this new world where we get it all right?

Marta: Fundamentally, this is about civil liberties and this is about people's ability to go about their lives without government surveillance. We may not think about money when we think about, for example, exercising our first amendment rights and engaging in politics. But in reality, all of these things do involve financial transactions, whether that be political expenses or things that reveal your religion or your sexual associations or who you associate with all of these things can be revealed by your financial transactions. And it's very important to be able to live in a world where you can engage in those transactions privately without those being surveilled by the government by default.

Cindy:  Thank you so much Marta for taking this time and taking us through this tour of financial privacy. It's a tremendously important issue and sometimes it gets buried underneath a lot of the hype around cryptocurrency. 

Where can people find you? I understand that you have your own podcast and that our own Rainey Reitman was recently a guest. 

Marta: That's right, we do have a podcast. The Filecoin Foundation has a podcast, it's called The Future Rules, and not only has Rainey been a guest but also Danny has been a guest. You can definitely listen to that podcast.

Cindy: Wonderful, thanks again for taking the time to talk to us. 

Cindy: Wow. That was so fun and so interesting. Marta really opened my eyes even a bit more about how critical financial privacy is to real privacy. And I was especially struck by the image of the protesters in Hong Kong, buying their tickets with cash, because as we all know, you can be tracked to where you are based on what you buy, and, you know, at that particular time, especially being at a protest in Hong Kong, it was tremendously dangerous. 

Danny: And of course we have this background in preventing communication surveillance and all the arguments apply, right? Like actually tracking money lets you see everything about someone, and everyone uses money in the same way as everyone has to communicate. It's not just criminals who use money, and like every transaction over $10,000, you know, if the police talk about that, you go, oh yeah, $10,000, that's probably drugs. But actually of course, you know, houses, cars, like monthly transactions. I'm not a rich person, $10,000 is still something that, you know, I run into occasionally. And everything else, and every credit card transaction, everything going to the government, getting stored in databases forever and ever. It's really sort of turned around my thoughts on this actually.

Cindy: You know, the thing that she really drove home is that, you know, this kind of financial surveillance, especially under the Bank Secrecy Act, it's mass surveillance, right? It is surveilling everybody first and then figuring out what you need second. There's millions, literally hundreds of millions of people who are innocent, who are caught up in this dragnet for the few people that they want to catch. And, and I honestly don't know that the case has been made that they couldn't catch these people any other way, most specifically by getting a probable cause warrant, which I think she told us over and over again was, you know, the thing that would switch this around from a situation in which there's a problem for a lot of people to something that's, you know, a reasonable law enforcement strategy.

Danny: It's this classic problem of mass surveillance, right? You're trying to tackle and find a thousand to 10,000 people who are doing a bad thing, and you're looking through millions of people's records in order to do that. The other thing, I think, is that in the same way as we say in communication surveillance, surveillance leads to censorship, or at least you can't censor without surveillance. If you can't see what websites people are visiting, then you can't censor them. And same thing here, right? Like if you have a system where people have to share all their financial information with third parties, pretty soon those third parties are going to have pressure put on them, or just decide themselves that they don't want some kinds of business. Like you said, is it jawboning, is that the phrase?

Cindy: Yeah, that's the phrase I just heard about it.  

Danny: It's so. Jawboning is where Congress people put pressure on credit card providers, like Visa and MasterCard, to throw sex workers or other people that other people don't like off the financial system. And you can only do that with this level of surveillance.

Cindy: Thinking about this, you know, the cryptocurrency and the blockchain technologies are really technologies, but what's become so clear about this is that we've got some legal case law, frankly, that was written in the 1970s, or adopted in the 1970s, that's really getting in the way here in a context where it's really not appropriately applied. So this is one where we think about, you know, again, we do this a lot, but Larry Lessig's four areas: we've got code, we've got law, we've got social norms and we've got markets. We've got code potentially giving us some really good things, and we need to get the law out of the way.

Danny: Right. And the norms, I think, beginning to rethink again, just how intrusive all of this surveillance is, and I think that that might be a little uphill work, right? Because people, people just think of it this way, but, but if we're going to build a better future, we have to start thinking and putting our own civil liberties first.

Danny: And thanks to Nat Keefe and Reed Mathis of Beat Mower for making the music for this podcast. Additional music is used under a Creative Commons license from CCMixter. You can see the credits and links to the music in our episode notes. Please visit eff.org/podcasts where you'll find more episodes, learn about these issues, and donate to become a member of EFF, as well as lots more. Members are the only reason we can do this work, plus you can get cool stuff like an EFF hat, an EFF hoodie or an EFF camera cover for your laptop.

How to Fix the Internet is supported by the Alfred P. Sloan Foundation's program in public understanding of science and technology. I'm Danny O'Brien.

Cindy: and I'm Cindy Cohn. 

 

Rainey Reitman

Welcome to the Public Domain, Winnie-the-Pooh

4 days 6 hours ago

We're taking part in Copyright Week, a series of actions and discussions supporting key principles that should guide copyright policy. Every day this week, various groups are taking on different elements of copyright law and policy, addressing what's at stake and what we need to do to make sure that copyright promotes creativity and innovation.

In 2019, for the first time in 20 years, U.S. copyright law allowed formerly copyrighted works to join the public domain. Works in the public domain are no longer under copyright, and anyone can republish or use those works in whatever way they want. The public domain is the default home of all creative endeavors because culture isn’t owned by any single person or corporation—it’s shared.

This year, the public domain opened up to include works from 1926 and a whopping 400,000 sound recordings. Of course, the real fun is that the third Hercule Poirot novel by Agatha Christie, Ernest Hemingway’s The Sun Also Rises, and the original books of Winnie-the-Pooh and Bambi are now free for anyone to use.

In particular, the popular images of Winnie-the-Pooh and Bambi have been dominated by one rightsholder’s vision for a long time: Disney. And while Disney’s versions of those stories remain under copyright, their exclusive hold on two cornerstones of childhood has come to an end. This is a good thing—it lets those stories be reinterpreted and repurposed by people with different takes. We can all decide whether the Disney versions are the actual best ones or were simply the only ones.

Public domain works can be used for such lofty goals. Or they can simply be used for fun, allowing anyone to participate in a worldwide sport of joy. With so many more uses suddenly available to so many more people, we get a flood of works and get to choose which ones we love most. And, of course, we can try our hand at joining in.

Last year, The Great Gatsby was at the center of a flurry of internet jokes when it entered the public domain. Archive of Our Own, the award-winning fanfiction archive, suddenly found itself home to very lightly altered versions of F. Scott Fitzgerald’s famous work. Some replaced the characters in the original with those from other works, putting them in dialog with each other. One absolute internet genius replaced every use of “Gatsby” with “Gritty,” replacing a memetic capitalist played by Leonardo DiCaprio in a recent film adaptation with a memetic anti-capitalist puppet hockey mascot.

When people compete to top each other for the most creative, weird, or just funny use of a public domain work, we all win.

Katharine Trendacosta

It’s Copyright Week 2022: Ten Years Later, How Has SOPA/PIPA Shaped Online Copyright Enforcement?

4 days 6 hours ago

We're taking part in Copyright Week, a series of actions and discussions supporting key principles that should guide copyright policy. Every day this week, various groups are taking on different elements of copyright law and policy, addressing what's at stake and what we need to do to make sure that copyright promotes creativity and innovation.

Ten years ago, a diverse coalition of internet users, non-profit groups, and internet companies defeated the Stop Online Piracy Act (SOPA) and the PROTECT IP Act (PIPA), bills that would have forced internet companies to blacklist and block websites accused of hosting copyright-infringing content. These were bills that would have made censorship very easy, all in the name of copyright enforcement. This collective action showed the world that the few major companies who control film, music, and television cannot dictate internet policy for their own benefit.

We celebrate Copyright Week every year on the anniversary of the internet blackout that finally got the message across: Team Internet will always stand up for itself.

While SOPA and PIPA were ultimately defeated, their spirits live on. They live on in legislation like the CASE Act and the EU Copyright Directive. They live on in the use of copyright filters on major platforms, which exist because the largest entertainment companies insist on them. They live on every time you can’t fix a device you paid for and rightfully own. They live on in the licensing agreements that prevent us from owning digital goods.

We continue to fight for a version of copyright policy that doesn’t seek to control users. That doesn’t serve only a few multibillion-dollar corporations, but rather the millions of people online who are independent artists. That contributes to the growth, not stagnation, of culture.

Each year, we pick five issues in copyright to highlight and advocate a set of principles around. This year’s issues are:

  • Monday: The Public Domain
    The public domain is our cultural commons and a crucial resource for innovation and access to knowledge. Copyright should strive to promote, and not diminish, a robust, accessible public domain.
  • Tuesday: Device and Digital Ownership
    Copyright should not be used to control knowledge, creativity, or the ability to tinker with or repair your own devices. Copyright should encourage more people to share, make, or repair things.
  • Wednesday: Copyright and Competition
    Copyright policy should encourage more people to create and seek to keep barriers to entry low, rather than concentrate power in only a few players.
  • Thursday: Free Expression and Fair Use
    Copyright policy should encourage creativity, not hamper it. Fair use makes it possible for us to comment, criticize, and rework our common culture.
  • Friday: Copyright Enforcement as a Tool of Censorship
    Freedom of expression is a fundamental human right essential to a functioning democracy. Copyright should encourage more speech, not act as a legal cudgel to silence it.

Every day this week, we’ll be sharing links to blog posts and actions on these topics at https://www.eff.org/copyrightweek and at #CopyrightWeek on Twitter.

As we say every year, if you too stand behind these principles, please join us by supporting them, sharing them, and telling your lawmakers you want to see copyright law reflect them.

Katharine Trendacosta

EFF Asks Appeals Court to Rule DMCA Anti-Circumvention Provisions Violate First Amendment

1 week 1 day ago
Lawsuit Filed on Behalf of Computer Scientist and Security Researcher Seeks to Bar Enforcement of Section 1201 Provisions

Washington D.C.—The Electronic Frontier Foundation (EFF) asked a federal appeals court to block enforcement of onerous copyright rules that violate the First Amendment and criminalize certain speech about technology, preventing researchers, tech innovators, filmmakers, educators, and others from creating and sharing their work.

EFF, with co-counsel Wilson Sonsini Goodrich & Rosati, asked the U.S. Court of Appeals for the District of Columbia yesterday to reverse a district court decision in Green v. DOJ, a lawsuit we filed in 2016 challenging the anti-circumvention and anti-trafficking provisions of the Digital Millennium Copyright Act (DMCA) on behalf of security researcher Matt Green and technologist Andrew “bunnie” Huang. Both are pursuing projects highly beneficial to the public and perfectly lawful except for DMCA’s anti-speech provisions.

These provisions—contained in Section 1201 of the DMCA—make it unlawful for people to get around the software that restricts access to lawfully-purchased copyrighted material, such as films, songs, and the computer code that controls vehicles, devices, and appliances. This ban applies even where people want to make noninfringing fair uses of the materials they are accessing. The only way to challenge the ban is to go through an arduous, cumbersome process, held every three years, to petition the Library of Congress for an exemption.

While enacted to combat music and movie piracy, Section 1201 has long served to restrict people’s ability to access, use, and even speak out about copyrighted materials—including the software that is increasingly embedded in everyday things. Our rights to tinker with or repair the devices we own are under threat from the law, which makes it a crime to create or share tools that could, for example, allow people to convert their videos so they can play on multiple platforms or conduct independent security research to find dangerous flaws in vehicles or medical devices.

Green, a computer security researcher at Johns Hopkins University, works to make Apple messaging and financial transaction systems more secure by uncovering software vulnerabilities, an endeavor that requires finding and exploiting weaknesses in code. Green seeks to publish a book about his work but fears that it could invite criminal charges under Section 1201.

Meanwhile Huang, a prominent computer scientist and inventor, and his company Alphamax LLC, are developing devices for editing digital video streams that would enable people to make innovative uses of their paid video content, such as captioning a presidential debate with a running Twitter comment field or enabling remixes of high-definition video. But using or offering this technology could also run afoul of Section 1201.

Ruling on the government’s motion to dismiss the lawsuit, a federal judge said Green and Huang could proceed with claims that 1201 violated their First Amendment rights to pursue their projects but dismissed the claim that the section was itself unconstitutional. The court also refused to issue an injunction preventing the government from enforcing 1201.

“Section 1201 makes it a federal crime for our clients, and others like them, to exercise their right to free expression by engaging in research, creating software, and publishing their work,” said EFF Senior Staff Attorney Kit Walsh. “This creates a censorship regime under the guise of copyright law that cannot be squared with the First Amendment.”

For the filing:
https://www.eff.org/document/geen-v-doj-appellant-brief

For more about this case:
https://www.eff.org/cases/green-v-us-department-justice

Contact: Corynne McSherry, Legal Director, corynne@eff.org; Kit Walsh, Senior Staff Attorney, kit@eff.org
Karen Gullo

EFF Threat Lab’s “apkeep” APK Downloader, Now More Capable and Available in More Places

1 week 1 day ago

In September, we introduced EFF Threat Lab’s very own APK Downloader, apkeep. It is a tool that makes it easier for us to track state-sponsored malware and combat the stalkerware used by abusive partners. Since that time, we’ve added some additional functionality that we’d like to share.

F-Droid

In addition to the ability to download Android packages from the Google Play Store and APKPure, we’ve added support for downloading from the free and open source app repository F-Droid. Packages downloaded from F-Droid are checked against the repository maintainers’ signing key, just like in the F-Droid app itself. The package index is also cached, which makes it easy to run multiple subsequent requests for downloads.
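
As a quick illustration (the package name and output directory below are placeholders, following the same command pattern used in the examples later in this post), a download from F-Droid would look something like:

apkeep -a org.fdroid.fdroid -d f-droid .

Here f-droid is selected as the download source with -d, and the trailing . points at the directory to save the APK into, as in the versioned download example below.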

Versioning

You can now download specific versions of apps from either the apk-pure app store, which mirrors the Google Play Store, or from f-droid. To try it, issue the following command to see which versions are available:

apkeep -l -a com.instagram.android -d apk-pure

Once you’ve picked a desired version, download it with this command:

apkeep -a com.instagram.android@217.0.0.15.474 -d apk-pure .

Keep in mind not all versions will be retained by these download sources, so only recent versions may be available.

Additional Platform Support

On initial launch, we supported only 6 platforms:

  • GNU/Linux x86_64, i686, aarch64, and armv7
  • Android aarch64 and armv7

We have been quickly building our platform support to bring the current tally to 9:

  • GNU/Linux x86_64, i686, aarch64, and armv7
  • Android x86_64, i686, aarch64 and armv7
  • Windows x86_64

and we plan to continue to build out to more platforms in the future.

Termux Repositories

The Android terminal application Termux now makes it easy to install apkeep. We have added our package to their repository, so that Termux users now only need to issue a simple command to install the latest version:

pkg install apkeep

Future Plans

In addition to continuing to build out to additional platforms, we would also like to add more Android markets to download from, such as the Amazon Appstore. Have any suggestions for features or new platforms you’d like to see supported? Let us know by opening an issue on our GitHub page!

Special Thanks

We would like to thank the F-Droid and Termux communities for their assistance in this build-out, and thank our users for their feedback and support.

Bill Budington

San Francisco Police Illegally Used Surveillance Cameras at the George Floyd Protests. The Courts Must Stop Them

1 week 1 day ago

Update: This post has been updated to reflect that the hearing date in this case has been moved to January 21.

By Hope Williams, Nathan Sheard, and Nestor Reyes

The authors are community activists who helped organize and participated in protests against police violence in San Francisco after the murder of George Floyd. A hearing in their lawsuit against the San Francisco Police Department over surveillance of Union Square protests is scheduled for Friday. This article was first published in the San Francisco Standard.

A year and a half ago, the San Francisco Police Department illegally spied on us and thousands of other Bay Area residents as we marched against racist police violence and the murder of George Floyd. Aided by the Electronic Frontier Foundation (EFF) and the ACLU of Northern California, we have taken the SFPD to court.

Our lawsuit defends our right to organize protests against police violence without fear of illegal police surveillance. After the police murdered George Floyd, we coordinated mass actions and legal support and spent our days leading the community in chants, marches and protests demanding an end to policing systems that stalk and kill Black and Brown people with impunity.

Our voice is more important than ever as the mayor and Chris Larsen, the billionaire tech executive funding camera networks across San Francisco, push a false narrative about our lawsuit and the law that the SFPD violated. 

In 2019, the city passed a landmark ordinance that bans the SFPD and other city agencies from using facial recognition and requires them to get approval from the Board of Supervisors for other surveillance technologies. This transparent process sets up guardrails, allows for public input and empowers communities to say “no” to more police surveillance on our streets. 

But the police refuse to play by the rules. EFF uncovered documents showing that the SFPD violated the 2019 law and illegally tapped into a network of more than 300 video cameras in the Union Square area to surveil us and our fellow protesters. Additional documents and testimony in our case revealed that an SFPD officer repeatedly viewed the live camera feed, which directly contradicts the SFPD’s prior statements to the public and the city’s Board of Supervisors that “the feed was not monitored.”

Larsen has also backpedaled. Referencing the network, he previously claimed that “the police can’t monitor it live.” Now, Larsen is advocating for live surveillance and criticizing us for defending our right under city law to be free from unfettered police spying. He even suggests that we are to blame for recent high-profile retail thefts at San Francisco’s luxury stores. 

As Black and Latinx activists, we are outraged—but not surprised—by rich and powerful people supporting illegal police surveillance. They are not the ones targeted by the police and won’t pay the price if the city rolls back hard-won civil rights protections. 

Secret surveillance will not protect the public. What will actually make us safer is to shift funding away from the police and toward housing, healthcare, violence interruption programs and other services necessary for racial justice in the Bay Area. Strong and well-resourced communities are far more likely to be safe than they would be with ever-increasing surveillance.

As members of communities that are already overpoliced and underserved, we know that surveillance is a trigger that sets our most violent and unjust systems in motion. Before the police kill a Black person, deport an immigrant, or imprison a young adult for a crime driven by poverty, chances are the police surveilled them first.

That is why we support democratic control over police spying and oppose the surveillance infrastructure that Larsen is building in our communities. We joined organizations like the Harvey Milk LGBTQ Democratic Club in a successful campaign against Larsen’s plan to fund more than 125 cameras in San Francisco’s Castro neighborhood. And we made the decision to join forces with the EFF and the ACLU to defend our rights in court after we found out the SFPD spied on us and our movement.

On January 21, we will be in court to put a stop to the SFPD’s illegal spying and evasion of democratic oversight. We won’t let the police or their rich and powerful supporters intimidate activists into silence or undermine our social movements.

Related Cases: Williams v. San Francisco
Nathan Sheard

Nearly 130 Public Interest Organizations and Experts Urge the United Nations to Include Human Rights Safeguards in Proposed UN Cybercrime Treaty

1 week 1 day ago

(UPDATE: Due to the ongoing situation concerning the coronavirus disease (COVID-19), the Ad Hoc Committee won't hold its first session from 17 to 28 January 2022 in New York, as planned. Further information will be provided in due course).

EFF and Human Rights Watch, along with nearly 130 organizations and academics working in 56 countries, regions, or globally, urged members of the Ad Hoc Committee responsible for drafting a potential United Nations Cybercrime Treaty to ensure human rights protections are embedded in the final product. The first session of the Ad Hoc Committee is scheduled to begin on January 17th.

The proposed treaty will likely deal with cybercrime, international cooperation, and access to potential digital evidence by law enforcement authorities, as well as human rights and procedural safeguards. UN member states have already written opinions discussing the scope of the treaty, and their proposals vary widely. In a letter to the committee chair, EFF and Human Rights Watch along with partners across the world asked that members include human rights considerations at every step in the drafting process. We also recommended  that cross-border investigative powers include strong human rights safeguards, and that global civil society be provided opportunities to participate robustly in the development and drafting of any potential convention.

Failing to prioritize human rights and procedural safeguards in criminal investigations can have dire consequences.  As many countries have already abused their existing cybercrime laws to undermine human rights and freedoms and punish peaceful dissent, we have grave concerns that this Convention might become a powerful weapon for oppression. We also worry that cross-border investigative powers without strong human rights safeguards will sweep away progress on protecting people’s privacy rights, creating a race to the bottom among jurisdictions with the weakest human rights protections.

We hope the Member States participating in the development and drafting of the treaty will recognize the urgency of the risks we mention, commit to include civil society in their upcoming discussions, and take our recommendations to heart.

Drafting of the letter was spearheaded by EFF, Human Rights Watch, AccessNow, ARTICLE19, Association for Progressive Communications, CIPPIC, European Digital Rights, Privacy International, Derechos Digitales, Data Privacy Brazil Research Association, European Center For Not-For-Profit Law, IT-Pol – Denmark, SafeNet South East Asia, Fundación Karisma, Red en Defensa de los Derechos Digitales, OpenNet Korea, among many others.

The letter is available in English and Spanish, and will be available in other UN languages in due course.

The full text of the letter and list of signatories are below:

December 22, 2021

H.E. Ms Faouzia Boumaiza Mebarki
Chairperson
Ad Hoc Committee to Elaborate a Comprehensive International Convention on Countering the Use of Information and Communication Technologies for Criminal Purposes

Your Excellency,

We, the undersigned organizations and academics, work to protect and advance human rights, online and offline. Efforts to address cybercrime are of concern to us, both because cybercrime poses a threat to human rights and livelihoods, and because cybercrime laws, policies, and initiatives are currently being used to undermine people’s rights. We therefore ask that the process through which the Ad Hoc Committee does its work includes robust civil society participation throughout all stages of the development and drafting of a convention, and that any proposed convention include human rights safeguards applicable to both its substantive and procedural provisions.

Background

The proposal to elaborate a comprehensive “international convention on countering the use of information and communications technologies for criminal purposes” is being put forward at the same time that UN human rights mechanisms are raising alarms about the abuse of cybercrime laws around the world. In his 2019 report, the UN special rapporteur on the rights to freedom of peaceful assembly and of association, Clément Nyaletsossi Voule, observed, “A surge in legislation and policies aimed at combating cybercrime has also opened the door to punishing and surveilling activists and protesters in many countries around the world.” In 2019 and once again this year, the UN General Assembly expressed grave concerns that cybercrime legislation is being misused to target human rights defenders or hinder their work and endanger their safety in a manner contrary to international law. This follows years of reporting from non-governmental organizations on the human rights abuses stemming from overbroad cybercrime laws.

When the convention was first proposed, over 40 leading digital rights and human rights organizations and experts, including many signatories of this letter, urged delegations to vote against the resolution, warning that the proposed convention poses a threat to human rights.

In advance of the first session of the Ad Hoc Committee, we reiterate these concerns. If a UN convention on cybercrime is to proceed, the goal should be to combat the use of information and communications technologies for criminal purposes without endangering the fundamental rights of those it seeks to protect, so people can freely enjoy and exercise their rights, online and offline. Any proposed convention should incorporate clear and robust human rights safeguards. A convention without such safeguards or that dilutes States’ human rights obligations would place individuals at risk and make our digital presence even more insecure, each threatening fundamental human rights.

As the Ad Hoc Committee commences its work drafting the convention in the coming months, it is vitally important to apply a human rights-based approach to ensure that the proposed text is not used as a tool to stifle freedom of expression, infringe on privacy and data protection, or endanger individuals and communities at risk.  

The important work of combating cybercrime should be consistent with States’ human rights obligations set forth in the Universal Declaration of Human Rights (UDHR), the International Covenant on Civil and Political Rights (ICCPR), and other international human rights instruments and standards. In other words, efforts to combat cybercrime should also protect, not undermine, human rights. We remind States that the same rights that individuals have offline should also be protected online.

Scope of Substantive Criminal Provisions

There is no consensus on how to tackle cybercrime at the global level or a common understanding or definition of what constitutes cybercrime. From a human rights perspective, it is essential to keep the scope of any convention on cybercrime narrow. Just because a crime might involve technology does not mean it needs to be included in the proposed convention. For example, expansive cybercrime laws often simply add penalties due to the use of a computer or device in the commission of an existing offense. The laws are especially problematic when they include content-related crimes. Vaguely worded cybercrime laws purporting to combat misinformation and online support for or glorification of terrorism and extremism, can be misused to imprison bloggers or block entire platforms in a given country. As such, they fail to comply with international freedom of expression standards. Such laws put journalists, activists, researchers, LGBTQ communities, and dissenters in danger, and can have a chilling effect on society more broadly.

Even laws that focus more narrowly on cyber-enabled crimes are used to undermine rights. Laws criminalizing unauthorized access to computer networks or systems have been used to target digital security researchers, whistleblowers, activists,  and journalists. Too often, security researchers, who help keep everyone safe, are caught up in vague cybercrime laws and face criminal charges for identifying flaws in security systems. Some States have also interpreted unauthorized access laws so broadly as to effectively criminalize any and all whistleblowing; under these interpretations, any disclosure of information in violation of a corporate or government policy could be treated as “cybercrime.” Any potential convention should explicitly include a malicious intent standard, should not transform corporate or government computer use policies into criminal liability, should provide a clearly articulated and expansive public interest defense, and include clear provisions that allow security researchers to do their work without fear of prosecution.

Human Rights and Procedural Safeguards

Our private and personal information, once locked in a desk drawer, now resides on our digital devices and in the cloud. Police around the world are using an increasingly intrusive set of investigative tools to access digital evidence. Frequently, their investigations cross borders without proper safeguards and bypass the protections in mutual legal assistance treaties. In many contexts, no judicial oversight is involved, and the role of independent data protection regulators is undermined. National laws, including cybercrime legislation, are often inadequate to protect against disproportionate or unnecessary surveillance.

Any potential convention should detail robust procedural and human rights safeguards that govern criminal investigations pursued under such a convention. It should ensure that any interference with the right to privacy complies with the principles of legality, necessity, and proportionality, including by requiring independent judicial authorization of surveillance measures. It should also not forbid States from adopting additional safeguards that limit law enforcement uses of personal data, as such a prohibition would undermine privacy and data protection. Any potential convention should also reaffirm the need for States to adopt and enforce “strong, robust and comprehensive privacy legislation, including on data privacy, that complies with international human rights law in terms of safeguards, oversight and remedies to effectively protect the right to privacy."

There is a real risk that, in an attempt to entice all States to sign a proposed UN cybercrime convention, bad human rights practices will be accommodated, resulting in a race to the bottom. Therefore, it is essential that any potential convention explicitly reinforces procedural safeguards to protect human rights and resists shortcuts around mutual assistance agreements.

Meaningful Participation

Going forward, we ask the Ad Hoc Committee to actively include civil society organizations in consultations—including those dealing with digital security and groups assisting vulnerable communities and individuals—which did not happen when this process began in 2019 or in the time since.

Accordingly, we request that the Committee:

  • Accredit interested technological and academic experts and nongovernmental groups, including those with relevant expertise in human rights but that do not have consultative status with the Economic and Social Council of the UN, in a timely and transparent manner, and allow participating groups to register multiple representatives to accommodate the remote participation across different time zones.
  • Ensure that modalities for participation recognize the diversity of non-governmental stakeholders, giving each stakeholder group adequate speaking time, since civil society, the private sector, and academia can have divergent views and interests.
  • Ensure effective participation by accredited participants, including the opportunity to receive timely access to documents, provide interpretation services, speak at the Committee’s sessions (in-person and remotely), and submit written opinions and recommendations.
  • Maintain an up-to-date, dedicated webpage with relevant information, such as practical information (details on accreditation, time/location, and remote participation), organizational documents (i.e., agendas, discussions documents, etc.), statements and other interventions by States and other stakeholders, background documents, working documents and draft outputs, and meeting reports.

Countering cybercrime should not come at the expense of the fundamental rights and dignity of those whose lives this proposed Convention will touch. States should ensure that any proposed cybercrime convention is in line with their human rights obligations, and they should oppose any proposed convention that is inconsistent with those obligations.

We would be highly appreciative if you could kindly circulate the present letter to the Ad Hoc Committee Members and publish it on the website of the Ad Hoc Committee.

Signatories,*

  1. Access Now – International
  2. Alternative ASEAN Network on Burma (ALTSEAN) – Burma
  3. Alternatives – Canada
  4. Alternative Informatics Association – Turkey
  5. AqualtuneLab – Brazil
  6. ArmSec Foundation – Armenia
  7. ARTICLE 19 – International
  8. Asociación por los Derechos Civiles (ADC) – Argentina
  9. Asociación Trinidad / Radio Viva – Trinidad
  10. Asociatia Pentru Tehnologie si Internet (ApTI) – Romania
  11. Association for Progressive Communications (APC) – International
  12. Associação Mundial de Rádios Comunitárias (Amarc Brasil) – Brazil
  13. ASEAN Parliamentarians for Human Rights (APHR)  – Southeast Asia
  14. Bangladesh NGOs Network for Radio and Communication (BNNRC) – Bangladesh
  15. BlueLink Information Network  – Bulgaria
  16. Brazilian Institute of Public Law - Brazil
  17. Cambodian Center for Human Rights (CCHR)  – Cambodia
  18. Cambodian Institute for Democracy  –  Cambodia
  19. Cambodia Journalists Alliance Association  –  Cambodia
  20. Casa de Cultura Digital de Porto Alegre – Brazil
  21. Centre for Democracy and Rule of Law – Ukraine
  22. Centre for Free Expression – Canada
  23. Centre for Multilateral Affairs – Uganda
  24. Center for Democracy & Technology – United States
  25. Civil Society Europe
  26. Coalition Direitos na Rede – Brazil
  27. Collaboration on International ICT Policy for East and Southern Africa (CIPESA) – Africa
  28. CyberHUB-AM – Armenia
  29. Data Privacy Brazil Research Association – Brazil
  30. Dataskydd – Sweden
  31. Derechos Digitales – Latin America
  32. Defending Rights & Dissent – United States
  33. Digital Citizens – Romania
  34. DigitalReach – Southeast Asia
  35. Digital Security Lab – Ukraine
  36. Državljan D / Citizen D – Slovenia
  37. Electronic Frontier Foundation (EFF) – International
  38. Electronic Privacy Information Center (EPIC) – United States
  39. Elektronisk Forpost Norge – Norway
  40. Epicenter.works for digital rights – Austria
  41. European Center For Not-For-Profit Law (ECNL) Stichting – Europe
  42. European Civic Forum – Europe
  43. European Digital Rights (EDRi) – Europe
  44. ​​eQuality Project – Canada
  45. Fantsuam Foundation – Nigeria
  46. Free Speech Coalition  – United States
  47. Foundation for Media Alternatives (FMA) – Philippines
  48. Fundación Acceso – Central America
  49. Fundación Ciudadanía y Desarrollo de Ecuador
  50. Fundación CONSTRUIR – Bolivia
  51. Fundación Karisma – Colombia
  52. Fundación OpenlabEC – Ecuador
  53. Fundamedios – Ecuador
  54. Garoa Hacker Clube  –  Brazil
  55. Global Partners Digital – United Kingdom
  56. GreenNet – United Kingdom
  57. GreatFire – China
  58. Hiperderecho – Peru
  59. Homo Digitalis – Greece
  60. Human Rights in China – China 
  61. Human Rights Defenders Network – Sierra Leone
  62. Human Rights Watch – International
  63. Igarapé Institute -- Brazil
  64. IFEX - International
  65. Institute for Policy Research and Advocacy (ELSAM) – Indonesia
  66. The Influencer Platform – Ukraine
  67. INSM Network for Digital Rights – Iraq
  68. Internews Ukraine
  69. Instituto Beta: Internet & Democracia (IBIDEM) – Brazil
  70. Instituto Brasileiro de Defesa do Consumidor (IDEC) – Brazil
  71. Instituto Educadigital – Brazil
  72. Instituto Nupef – Brazil
  73. Instituto de Pesquisa em Direito e Tecnologia do Recife (IP.rec) – Brazil
  74. Instituto de Referência em Internet e Sociedade (IRIS) – Brazil
  75. Instituto Panameño de Derecho y Nuevas Tecnologías (IPANDETEC) – Panama
  76. Instituto para la Sociedad de la Información y la Cuarta Revolución Industrial – Peru
  77. International Commission of Jurists – International
  78. The International Federation for Human Rights (FIDH)
  79. IT-Pol – Denmark
  80. JCA-NET – Japan
  81. KICTANet – Kenya
  82. Korean Progressive Network Jinbonet – South Korea
  83. Laboratorio de Datos y Sociedad (Datysoc) – Uruguay 
  84. Laboratório de Políticas Públicas e Internet (LAPIN) – Brazil
  85. Latin American Network of Surveillance, Technology and Society Studies (LAVITS)
  86. Lawyers Hub Africa
  87. Legal Initiatives for Vietnam
  88. Ligue des droits de l’Homme (LDH) – France
  89. Masaar - Technology and Law Community – Egypt
  90. Manushya Foundation – Thailand 
  91. MINBYUN Lawyers for a Democratic Society - Korea
  92. Open Culture Foundation – Taiwan
  93. Open Media  – Canada
  94. Open Net Association – Korea
  95. OpenNet Africa – Uganda
  96. Panoptykon Foundation – Poland
  97. Paradigm Initiative – Nigeria
  98. Privacy International – International
  99. Radio Viva – Paraguay
  100. Red en Defensa de los Derechos Digitales (R3D) – Mexico
  101. Regional Center for Rights and Liberties  – Egypt
  102. Research ICT Africa 
  103. Samuelson-Glushko Canadian Internet Policy & Public Interest Clinic (CIPPIC) – Canada
  104. Share Foundation - Serbia
  105. Social Media Exchange (SMEX) – Lebanon, Arab Region
  106. SocialTIC – Mexico
  107. Southeast Asia Freedom of Expression Network (SAFEnet) – Southeast Asia
  108. Supporters for the Health and Rights of Workers in the Semiconductor Industry (SHARPS) – South Korea
  109. Surveillance Technology Oversight Project (STOP)  – United States
  110. Tecnología, Investigación y Comunidad (TEDIC) – Paraguay
  111. Thai Netizen Network  – Thailand
  112. Unwanted Witness – Uganda
  113. Vrijschrift – Netherlands 
  114. West African Human Rights Defenders Network – Togo
  115. World Movement for Democracy – International
  116. 7amleh – The Arab Center for the Advancement of Social Media  – Arab Region

Individual Experts and Academics

  1. Jacqueline Abreu, University of São Paulo
  2. Chan-Mo Chung, Professor, Inha University School of Law
  3. Danilo Doneda, Brazilian Institute of Public Law
  4. David Kaye, Clinical Professor of Law, UC Irvine School of Law, former UN Special Rapporteur on Freedom of Opinion and Expression (2014-2020)
  5. Wolfgang Kleinwächter, Professor Emeritus, University of Aarhus; Member, Global Commission on the Stability of Cyberspace
  6. Douwe Korff, Emeritus Professor of International Law, London Metropolitan University
  7. Fabiano Menke, Federal University of Rio Grande do Sul
  8. Kyung-Sin Park, Professor, Korea University School of Law
  9. Christopher Parsons, Senior Research Associate, Citizen Lab, Munk School of Global Affairs & Public Policy at the University of Toronto
  10. Marietje Schaake, Stanford Cyber Policy Center
  11. Valerie Steeves, J.D., Ph.D., Full Professor, Department of Criminology University of Ottawa

*List of signatories as of January 13, 2022

Katitza Rodriguez

VICTORY: Google Releases “disable 2g” Feature for New Android Smartphones

1 week 2 days ago

Update: This feature is only available on certain phones running Android 12. So far we have only confirmed it is available on the Pixel 6.

Last year Google quietly pushed a new feature to its Android operating system allowing users to optionally disable 2G at the modem level in their phones. This is a fantastic feature that will provide some protection from cell site simulators, an invasive police surveillance technology employed throughout the country. We applaud Google for implementing this much needed feature. Now Apple needs to implement this feature as well, for the safety of their customers. 

What is 2G and why is it vulnerable?
2G is the second generation of mobile communications, created in 1991. It’s an old technology from a time when standards bodies did not account for certain risk scenarios such as rogue cell towers and the need for strong encryption. As years have gone by, many vulnerabilities have been discovered in 2G.

There are two main problems with 2G. First, it uses weak encryption between the tower and device that can be cracked in real time by an attacker to intercept calls or text messages. In fact, the attacker can do this passively without ever transmitting a single packet. The second problem with 2G is that there is no authentication of the tower to the phone, which means that anyone can seamlessly impersonate a real 2G tower and a phone using the 2G protocol will never be the wiser. 

Cell-site simulators sometimes work this way. They can exploit security flaws in 2G in order to intercept your communications. Even though many of the security flaws in 2G have been fixed in 4G, more advanced cell-site simulators can downgrade your connection to 2G, making your phone susceptible to the above attacks. This makes every user vulnerable—from journalists and activists to medical professionals, government officials, and even law enforcement.

What you can do to protect yourself now
If you have a newer Android phone (such as a Pixel 6, or some new Samsung phones) you can disable 2G right now by going to Settings > Network & Internet > SIMs > Allow 2G and turning that setting off. 

[Screenshot: the "Allow 2G" setting enabled, which is the default.]

[Screenshot: the "Allow 2G" setting turned off.]

If you have an older Android phone, these steps may or may not work. Unfortunately, due to limitations of old hardware, Google was only able to implement this feature on phones running Android 12 that support version 1.6 of the radio HAL; so far, that is limited to the Pixel 6. If you have a newer Samsung phone, you may also be able to shut off 2G support the same way, though this is not supported on all networks or all Samsung phones. Apple does not yet support this feature on iPhones, but you can tweet at them to demand it!

Take action

Tell Apple: Let us turn off 2G!


We are very pleased with the steps that Google has taken here to protect users from vulnerabilities in 2G, and though there is a lot more work to be done, this will ensure that many people can finally receive a basic level of protection. We strongly encourage Google, Apple, and Samsung to invest more resources into radio security so they can better protect smartphone owners.

Cooper Quintin

Livestreamed Hearing Moved to Jan. 21: EFF Will Ask Court to Issue Judgment Against SFPD for Illegally Spying on Protesters Marching in Support of Black Lives

1 week 2 days ago
San Francisco Police Violated City Law in Using Private Camera Network

Update: The hearing has been moved to January 21.

San Francisco—On Friday, Jan. 21, at 9:30 am, the Electronic Frontier Foundation (EFF) and the ACLU of Northern California will ask a California state court to find that the San Francisco Police Department (SFPD) violated city law when it used a network of non-city surveillance cameras to spy on Black-led protests in 2020 against police violence in the wake of George Floyd’s murder.

EFF and ACLU of Northern California sued the City and County of San Francisco in October 2020 on behalf of three activists of color for violating the city’s landmark Surveillance Technology Ordinance, which prohibits city departments from using surveillance technology without first putting it before the board of supervisors, who would have to pass an ordinance allowing it. The SFPD flouted a law meant to bring democratic control over government access to privacy-intrusive camera networks that can be used, as they were here, to spy on people exercising their First Amendment right to protest.

EFF uncovered evidence showing the SFPD broke the law when it obtained and used a business district’s network of more than 300 video surveillance cameras to conduct remote, live surveillance of Black-led protests for eight days in May and June 2020 without supervisors’ approval. Demonstrators marched through San Francisco’s Union Square business and shopping district to protest Floyd’s murder and racist police violence.

At a hearing scheduled for Jan. 21 that will be livestreamed for public viewing, EFF Staff Attorney Saira Hussain will tell the court that the evidence supports a judgment, without trial, against the SFPD and in favor of plaintiffs Hope Williams, Nathan Sheard, and Nestor Reyes. They are Black and Latinx activists who participated in and organized numerous protests that crisscrossed San Francisco in 2020.

The SFPD initially denied its officers viewed the camera feed during the eight days that it had access to the camera network. EFF and ACLU of Northern California obtained documents and deposition testimony showing at least one officer viewed the feed repeatedly over that time.

SFPD’s unlawful actions have made plaintiffs fearful of attending future protests, and will make it harder for them to recruit people for future demonstrations, EFF and ACLU of Northern California wrote in a brief for the case Williams v. San Francisco.

Who:
EFF Staff Attorney Saira Hussain

What:
Oral arguments on motion for summary judgment in Williams v. San Francisco

When:
Friday, Jan. 21, 2022, at 9:30 am PT

Livestream link:
San Francisco Superior Court
https://sfsuperiorcourt-org.zoom.us/j/86246849687?pwd=MUxSSWxCSzNNYXhnK3hITldvQ1JpQT09#success

For EFF’s motion for summary judgment:
https://www.eff.org/document/williams-v-ccsf-plaintiffs-summary-judgment-brief

For more on this case:
https://www.eff.org/cases/williams-v-san-francisco

Contact: Karen Gullo, Analyst and Senior Media Relations Specialist, karen@eff.org; Saira Hussain, Staff Attorney, saira@eff.org
Karen Gullo

Court Orders Authorizing Law Enforcement To Track People’s Air Travels In Real Time Must Be Made Public

1 week 3 days ago

The public should get to see whether a court that authorized the FBI to track someone’s air travels in real time for six months also analyzed whether the surveillance implicated the Fourth Amendment, EFF argued in a brief filed this week.

In Forbes Media LLC v. United States, the news organization and its reporter are trying to make public a court order and related records concerning an FBI request to use the All Writs Act to compel a travel data broker to disclose people’s movements.

Forbes reported on the FBI’s use of the All Writs Act to force the company, Sabre, to disclose a suspect’s travel data in real time after one of the agency’s requests was unsealed. The All Writs Act is not a surveillance statute, though authorities frequently seek to use it in their investigations. Perhaps most famously, the FBI in 2016 sought an order under the statute to require Apple to decrypt an iPhone by writing custom software for the phone.

But when Forbes sought to unseal court records related to the FBI’s request to obtain data from Sabre, two separate judges ruled that the materials must remain secret.

Forbes appealed to the U.S. Court of Appeals for the Ninth Circuit, arguing that the public has a presumptive right to access the court records under both the First Amendment and common law. EFF, along with the ACLU, ACLU of Northern California, and Riana Pfefferkorn, filed a friend-of-the-court brief in support of Forbes’ effort to unseal the records.

EFF’s brief argues the public has the right to see the court decisions and any related legal arguments made by the federal government in support of its requests because court decisions have historically been public under our transparent, democratic traditions.

But the public has a particular interest in these orders sought against Sabre for several reasons.

First, the disclosure of six months’ worth of travel data implicates the Fourth Amendment’s privacy protections, just as the U.S. Supreme Court recently recognized in Carpenter v. United States. “Just like in that case, air travel data creates ‘a detailed chronicle of a person’s physical presence’ that goes well beyond knowing a person’s location at a particular time,” the brief argues. The public has a legitimate interest in seeing the court’s ruling to learn whether it grappled with the Fourth Amendment questions raised by the FBI’s request.

Second, because federal law enforcement often requests secrecy regarding its requests under the All Writs Act, the public has very little understanding of the legal limits on when it can use the statute to require third parties to disclose private data about people’s movements. The brief argues:

This ongoing secrecy violates the public’s right of access to judicial records and, critically, it also frustrates public and congressional oversight of law enforcement surveillance, including whether the Executive Branch is evading legislative limits on its surveillance authority.

Third, the broad law enforcement effort to seal its requests under the All Writs Act and surveillance statutes frustrates the public’s ability to know what authorities are doing and whether they are violating people’s privacy rights. From the brief:

This results in the public lacking even basic details about how frequently law enforcement requests orders under the AWA or other statutes such as the SCA [Stored Communications Act] and PRA [Pen Register Act]. This is problematic because, without public access to dockets and orders reflecting authorities’ surveillance activities, there are almost no opportunities for public oversight or intervention by Congress.

Fourth, because Sabre collects data about the public’s travels without most people’s knowledge or consent, public disclosure is crucial so that people can understand whether the company is protecting their privacy. The brief argues:

Disclosure of the judicial records at issue here is thus crucial because the public has no way to avoid Sabre’s collection of their location data and has almost no information about when and how Sabre discloses their data. Court records reflecting law enforcement demands for people’s data are thus likely to be the only records of when and how Sabre responds to law enforcement requests.

Aaron Mackey

Standing Up For Privacy In New York State

1 week 3 days ago

New York’s legislature is open for business in the new year, and we’re jumping in to renew our support for two crucial bills that protect New Yorkers’ privacy rights. While very different, both pieces of legislation would uphold a principle we hold dear: people should not worry that their everyday activities will fuel unnecessary surveillance.

The first piece of legislation is A. 7326/S. 6541—New York bills must have identical versions in each house to pass—which protects the confidentiality of medical immunity information. It does this in several key ways, including: limiting the collection, use and sharing of immunity information; expressly prohibiting such information from being shared with immigration or child services agencies; and requiring that those asking for immunity information also accept an analog credential—such as a paper record.

As New Yorkers present information about their immunity—vaccination records, for example, or test results—to get in the door at restaurants or gyms, they shouldn’t have to worry that that information will end up in places they never expected. They shouldn’t have to worry that a company working with the government on an app to present these records will keep that information and use it to track their movements. And they should not have to worry that this information will be collected for other purposes by companies or government agencies. Assuring people that their information will not be used in unauthorized ways increases much-needed trust in public health efforts. 

The second piece of legislation, A. 84/ S. 296, also aims to stop unnecessary intrusion on people’s everyday lives. This legislation would stop law enforcement from conducting a particularly troubling type of dragnet surveillance on New Yorkers, by stopping “reverse location” warrants. Such warrants—sometimes also called “geofence” warrants—allow law enforcement agencies to conduct fishing expeditions and access data about dozens, or even hundreds, of devices at once. Government use of this surveillance tactic is incredibly dangerous to our freedoms, and has been used to disproportionately target marginalized communities. Unfortunately courts have rubber-stamped these warrant requests without questioning their broad scope. This has shown that requiring warrants alone is not enough to protect our privacy; legislatures must act to stop these practices.

Location data is highly sensitive, and can reveal information not only about where we go, but about whom we associate with, the state of our health, or how we worship. Reverse location warrant searches implicate innocent people and have a real impact on people’s lives. Even if you are later able to clear your name, if you spend any time at all in police custody, this could cost you your job, your car, and your ability to get back on your feet after the arrest.

We urge the New York Legislature to pass these bills, stand up for their constituents’ privacy, and stand against creeping surveillance that disrupts the lives of people just trying to get through the day. 

Hayley Tsukayama

Podcast Episode: Algorithms for a Just Future

1 week 3 days ago
Episode 107 of EFF’s How to Fix the Internet

Modern life means leaving digital traces wherever we go. But those digital footprints can translate to real-world harms: the websites you visit can impact the mortgage offers, car loans and job options you see advertised. This surveillance-based, algorithmic decision-making can be difficult to see, much less address. These are the complex issues that Vinhcent Le, Legal Counsel for the Greenlining Institute, confronts every day. He has some ideas and examples about how we can turn the tables—and use algorithmic decision-making to help bring more equity, rather than less.  

EFF’s Cindy Cohn and Danny O’Brien joined Vinhcent to discuss our digital privacy and how U.S. laws haven’t kept up with safeguarding our rights when we go online. 

Click below to listen to the episode now, or choose your podcast player:

[Embedded audio player: this episode streams from simplecast.com.]

You can also find the MP3 of this episode on the Internet Archive.

The United States already has laws against redlining, where financial companies engage in discriminatory practices such as preventing people of color from getting home loans. But as Vinhcent points out, we are seeing lots of companies use other data sets—including your zip code and online shopping habits—to make massive assumptions about the type of consumer you are and what interests you have. These groupings, even though they are often inaccurate, are then used to advertise goods and services to you—which can have big implications for the prices you see. 

But, as Vinhcent explains, it doesn’t have to be this way. We can use technology to increase transparency in online services and ultimately support equity.  

In this episode you’ll learn about: 

  • Redlining—the pernicious system that denies historically marginalized people access to loans and financial services—and how modern civil rights laws have attempted to ban this practice.
  • How the vast amount of our data collected through modern technology, especially browsing the Web, is often used to target consumers for products, and in effect recreates the illegal practice of redlining.
  • The weaknesses of the consent-based models for safeguarding consumer privacy, which often mean that people are unknowingly waiving away their privacy whenever they agree to a website’s terms of service. 
  • How the United States currently has an insufficient patchwork of state laws that guard different types of data, and how a federal privacy law is needed to set a floor for basic privacy protections.
  • How we might reimagine machine learning as a tool that actively helps us root out and combat bias in consumer-facing financial services and pricing, rather than exacerbating those problems.
  • The importance of transparency in the algorithms that make decisions about our lives.
  • How we might create technology to help consumers better understand the government services available to them. 

Vinhcent Le serves as Legal Counsel with the Greenlining Institute’s Economic Equity team. He leads Greenlining’s work to close the digital divide, protect consumer privacy, ensure algorithms are fair, and insist that technology builds economic opportunity for communities of color. In this role, Vinhcent helps develop and implement policies to increase broadband affordability and digital inclusion as well as bring transparency and accountability to automated decision systems. Vinhcent also serves on several regulatory boards including the California Privacy Protection Agency. Learn more about the Greenlining Institute

Resources

Data Harvesting and Profiling:

Automated Decision Systems (Algorithms):

Community Control and Consumer Protection:

Racial Discrimination and Data:

Fintech Industry and Advertising IDs

Transcript

Vinhcent: When you go to the grocery store and you put in your phone number to get those discounts, that's all getting recorded, right? It's all getting attached to your name, or at least an ID number. Data brokers purchase that from people, they aggregate it, they attach it to your ID, and then they can sell that out. There was a website where you could actually look up a little bit of what folks have on you. And interestingly enough, they had all my credit card purchases; they thought I was a middle-aged woman that loved antiques, ‘cause I was going to TJ Maxx a lot. 

Cindy: That's the voice of Vinhcent Le. He's a lawyer at the Greenlining Institute, which works to overcome racial, economic, and environmental inequities. He is going to talk with us about how companies collect our data and what they do with it once they have it and how too often that reinforces those very inequities.

Danny: That's because  some companies look at the things we like, who we text and what we subscribe to online to make decisions about what we'll see next, what prices we'll pay and what opportunities we have in the future.

THEME MUSIC

Cindy: I'm Cindy Cohn, EFF’s Executive Director.

Danny: And I'm Danny O'Brien. And welcome to How to Fix the Internet, a podcast of the Electronic Frontier Foundation. On this show, we help you to understand the web of technology that's all around us and explore solutions to build a better digital future. 

Cindy: Vinhcent, I am so happy that you could join us today because you're really in the thick of thinking about this important problem.

Vinhcent: Thanks for having me. 

Cindy: So let's start by laying a little groundwork and talk about how data collection and analysis about us is used by companies to make decisions about what opportunities and information we receive.

Vinhcent: It's surprising, right? Pretty much all of the decisions that companies encounter today are increasingly being turned over to AI and automated decision systems to be made. Right. The FinTech industry is determining what rates you pay, whether you qualify for a loan, based on, you know, your internet data. It determines how much you're paying for car insurance. It determines whether or not you get a good price on your plane ticket, or whether you get a coupon in your inbox, or whether or not you get a job. It's pretty widespread. And, you know, it's partly driven by the need to save costs, but also by this idea that these AI, automated, algorithmic systems are somehow more objective and better than what we've had before. 

Cindy: One of the dreams of using AI in this kind of decision making is that it was supposed to be more objective and less discriminatory than humans are. The idea was that if you take the people out, you can take the bias out. But it’s very clear now that it’s more complicated than that. The data has bias baked in, in ways that are hard to see, so walk us through that from your perspective. 

Vinhcent: Absolutely. The Greenlining Institute, where I work, was founded to essentially oppose the practice of redlining and close the racial wealth gap. Redlining is the practice where banks refuse to lend to communities of color, and that meant that access to wealth and economic opportunity was limited for, you know, decades. Redlining is now illegal, but the legacy of that lives on in our data. So these systems look at the zip code and all of the data associated with that zip code, and they use that to make the decisions. They use that data and they're like, okay, well, this zip code, which so often happens to be full of communities of color, isn't worth investing in because poverty rates are high or crime rates are high, so let's not invest in this. So even though redlining is outlawed, these computers are picking up on these patterns of discrimination, and they're learning that, okay, that's what humans in the United States think about people of color and about these neighborhoods; let's replicate that kind of thinking in our computer models. 

Cindy: The people who design and use these systems try to reassure us that they can adjust their statistical models, change their math, surveil more, and take these problems out of the equation. Right?

Vinhcent: There are two things wrong with that. First off, it's hard to do. How do you determine how much of an advantage to give someone? How do you quantify what the effect of redlining is on a particular decision? Because there are so many factors: decades of neglect and discrimination, and that's hard to quantify.

Cindy: It's easy to envision this based on zip codes, but that's not the only factor. So even if you control for race or you control for zip codes, there are still multiple factors going into this, is what I'm hearing.

Vinhcent: Absolutely. When they looked at discrimination in algorithmic lending, they found out that essentially there was discrimination. People of color were paying more for the same loans as similarly situated white people. It wasn't because of race, but it was because they were in neighborhoods that have less competition and choice. The other problem with fixing it with statistics is that it's essentially illegal, right? If you find out, in some sense, that people of color are being treated worse under your algorithm, and you correct it on racial terms, like, okay, brown people get a specific bonus because of the past redlining, that's disparate treatment; that's illegal under our anti-discrimination law. 

Cindy: We all want a world where people are not treated adversely because of their race, but it seems like we are not very good at designing that world, and for the last 50 years in the law, at least, we have tried to avoid looking at race. Chief Justice Roberts famously said, “the way to stop discrimination on the basis of race is to stop discriminating on the basis of race.” But it seems pretty clear that hasn’t worked. Maybe we should flip that approach and actually take race into account? 

Vinhcent: Even if an engineer wanted to fix this, right, their legal team would say, no, don't do it, because there was a Supreme Court case, Ricci, a while back, where a fire department thought that its test for promoting firefighters was discriminatory. They wanted to redo the test, and the Supreme Court said that trying to redo that test to promote more people of color was disparate treatment. They got sued, and now no one wants to touch it. 

MUSIC BREAK

Danny: One of the issues here I think is that as the technology has advanced, we've shifted from, you know, just having an equation to calculate these things, which we can kind of understand, to these models trained on huge amounts of data. Where are they getting that data from? 

Vinhcent: We're leaving little bits of data everywhere. And those little bits of data may be what website we're looking at, but it's also things like how long you looked at a particular piece of the screen, or did your mouse linger over this link, or what did you click? So it gets very, very granular. So what data brokers do is, you know, they have tracking software, they have agreements, and they're able to collect all of this data from multiple different sources, put it all together, and then put people into what are called segments. And these have titles like "single and struggling," or "urban dweller down on their luck."

So they have very specific segments that put people into different buckets. And then what happens after that is advertisers will be like, we're trying to look for people that will buy this particular product. It may be innocuous, like I want to sell someone shoes in this demographic. Where it gets a little bit more dangerous and a little bit more predatory is if you have someone that's selling payday loans or for-profit colleges saying, Hey, I want to target people who are depressed or recently divorced or are in segments that are associated with various other emotional states that make their products more likely to be sold.

Danny: So it's not just about your zip code. It's like, they just decide, oh, everybody who goes and eats at this particular place, turns out nobody is giving them credit, so we shouldn't give them credit. And that begins to build up; it just re-enacts that prejudice. 

Vinhcent: Oh my gosh, there was a great example of exactly that happening with American Express. A gentleman, Wint, was traveling, and he went to a Walmart in, I guess, a bad part of town, and American Express reduced his credit limit because of the shopping behavior of the people that went to that store. American Express was required under the Equal Credit Opportunity Act to give him a reason why his credit limit changed. That same level of transparency and accountability doesn't exist for a lot of these algorithmic decisions that do the same thing; because they're not as well regulated as more traditional banks, they don't have to do that. They can just silently change your terms or what you're going to get, and you might not ever know.  

Danny: You've talked about how red lining was a problem that was identified and there was a concentrated effort to try and fix that both in the regulatory space and in the industry. Also we've had like a stream of privacy laws again, sort of in this area, roughly kind of consumer credit. In what ways have those laws sort of failed to keep up with what we're seeing now? 

Vinhcent: I will say the majority of our privacy laws, for the most part, that maybe aren't specific to the financial sector, fail us because they're really focused on this consent-based model where we agree to these giant terms of service to give away all of our rights. Putting up guardrails so predatory use of data doesn't happen hasn't been a part of our privacy laws. And then our consumer protection laws, perhaps around FinTech, and our civil rights laws fail us because it's really hard to detect algorithmic discrimination. You have to provide some statistical evidence to take a company to court, proving that, you know, their algorithm was discriminatory. We really can't do that because the companies have all that data. So our laws need to kind of shift away from this race-blind strategy that we've done for the last, you know, 50, 60 years, where, like, okay, let's not consider race, let's just be blind to it, and that's our way of fixing discrimination. With algorithms, where you don't need to know someone's race or ethnicity to discriminate against them based on those terms, that needs to change. We need to start collecting all that data (it can be anonymous) and then testing the results of these algorithms to see whether or not there's a disparate impact happening: a.k.a., are people of color being treated significantly worse than, say, white people, or are women being treated worse than men?

If we can get that right, we get that data. We can see that these patterns are happening. And then we can start digging into where does this bias arise? You know, where is this like vestige of red lining coming up in our data or in our model. 
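As a rough illustration of the kind of disparate impact testing Vinhcent describes, here is a minimal sketch in Python. It assumes you already have anonymized decision records labeled with a demographic group; the field names, the reference group, and the four-fifths threshold mentioned in the comments are illustrative assumptions rather than anything prescribed in this conversation.

```python
# Sketch of a disparate impact check on an algorithm's outcomes.
# Assumes a list of decision records with an anonymized demographic
# label and a boolean outcome; field names and the 80% threshold
# are illustrative only.
from collections import defaultdict

def approval_rates(records, group_key="group", outcome_key="approved"):
    """Return the share of favorable outcomes for each group."""
    totals, favorable = defaultdict(int), defaultdict(int)
    for r in records:
        totals[r[group_key]] += 1
        favorable[r[group_key]] += 1 if r[outcome_key] else 0
    return {g: favorable[g] / totals[g] for g in totals}

def disparate_impact_ratios(records, reference_group):
    """Compare each group's favorable-outcome rate to a reference group's.

    Ratios well below 1.0 (for example under 0.8, the "four-fifths rule"
    used in employment contexts) are a signal to investigate further,
    not proof of discrimination on their own.
    """
    rates = approval_rates(records)
    ref = rates[reference_group]
    return {g: rate / ref for g, rate in rates.items()}

if __name__ == "__main__":
    decisions = [
        {"group": "A", "approved": True},
        {"group": "A", "approved": True},
        {"group": "A", "approved": False},
        {"group": "B", "approved": True},
        {"group": "B", "approved": False},
        {"group": "B", "approved": False},
    ]
    print(disparate_impact_ratios(decisions, reference_group="A"))
```

The point is only that, given outcome data broken out by group, checking for a disparate impact is a small amount of arithmetic; the hard part, as Vinhcent notes, is getting the data in the first place and deciding what counts as a justified difference.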

Cindy: I think transparency is especially difficult in this question of machine learning decision-making because, as Danny pointed out earlier, often even the people who are running it don't know what it's picking up on all that easily. 

MUSIC BREAK

Danny: “How to Fix the Internet” is supported by The Alfred P. Sloan Foundation’s Program in Public Understanding of Science, enriching people’s lives through a keener appreciation of our increasingly technological world and portraying the complex humanity of scientists, engineers, and mathematicians.

Cindy: We understand that different communities are being impacted differently...Companies are using these tools and we are seeing the disparate impacts.

What happens when those situations end up in the courts? Because from what I’ve seen the courts have been pretty hostile to the idea that companies need to show their reasons for those disparate impacts.

Vinhcent: Yeah. So, you know, my idea, right, is that if we get the companies on record, showing that, oh, you're causing disparate impact, it's their responsibility to provide a reason, a reasonable business necessity, that justifies that disparate impact.

And that's what I really want to know. What reasons are all these companies using to charge people of color more for loans or insurance, right? It's not based off their driving record or their income. So what is it? And once we get that information, we can begin to have a conversation as a society around what the red lines are for the use of data, which particular uses, say, targeting predatory ads towards depressed people, should be banned. We can't get there yet because all of those cards are being held really close to the vest by the people who are designing the AI.

Danny: I guess there is a positive side to this, in that I think at a society level we recognize that this is a serious problem. Excluding people from loans, excluding people from a chance to improve their lot, is something that we've recognized that racism plays a part in and we've attempted to fix, and machine learning is contributing to this. I play around with some of the more trivial versions of machine learning, things like GPT-3. What's fascinating about that is that it draws from the Internet's huge well of knowledge, but it also draws from the less salubrious parts of the internet. And you can see that it is expressing some of the prejudices that it's been fed with.

My concern here is that what we're going to see is a percolation of that kind of prejudice into areas where we've never really thought about the nature of racism. And if we can get transparency in that area and we can tackle it here, maybe we can stop this from spreading to the rest of our automated systems. 

Vinhcent: I don't think all AI is bad, right? There's a lot of great stuff happening; Google Translate, I think, is great. I think in the United States, what we're going to see is, at least with housing and employment and banking, those are the three areas where we have strong civil rights protections. I'm hoping and pretty optimistic that we'll get action, at least in those three sectors, to reduce the incidence of algorithmic bias and exclusion. 

Cindy: What are the kinds of things you think we can do that will make a better future for us with these tools, and pull out the good of machine learning and less of the bad?

Vinhcent: I think we're at the early stage of algorithmic regulation and kind of reining in the free hand that tech companies have had over the past decade or so. I think what we need to have is an inventory of AI systems as they're used in government, right?

Is your police department using facial surveillance? Is your court system using criminal sentencing algorithms? Is your social service department determining your access to healthcare or food assistance using an algorithm? We need to figure out where those systems are, so we can begin to know, all right, where do we ask for more transparency?

When we're using taxpayer dollars to purchase an algorithm, that algorithm is going to make decisions for millions of people. For example, Michigan purchased the MiDAS algorithm, which cost, you know, over $40 million, and it was designed to send out unemployment checks to people who recently lost their jobs.

It accused thousands of people, some 40,000, of fraud. Many people went bankrupt, and the algorithm was wrong. So when you're purchasing these expensive systems, there needs to be a risk assessment done around who could be impacted negatively; this obviously wasn't tested enough in Michigan.

Specifically in the finance industry, right, banks are allowed to collect data on the race and ethnicity of mortgage loan applicants. I think we need to expand that, so that they are allowed to collect that data on small personal loans, car loans, and small business loans.

That type of transparency, allowing regulators, academia, folks like that to study the decisions these systems have made and essentially hold those companies accountable for the results, is necessary.

Cindy: That's one of the things: you think about who is being impacted by the decisions that the machine is making and what control they have over how this thing is working, and it can give you kind of a shortcut for how to think about these problems. Is that something that you're seeing as well? 

Vinhcent: I think that's exactly what is missing, right? There is a strong desire for public participation, at least from advocates, in the development of these models. But none of us, including me, have figured out what that looks like.

Because the tech industry has pushed off any oversight by saying, this is too complicated, this is too complicated. And having delved into it, a lot of it is too complicated, right. But I think people have a role to play in setting the boundaries for these systems. When does something make me feel uncomfortable? When does this cross the line from being helpful to being manipulative? So I think that's what it should look like, but how does that happen? How do we get people involved in these opaque tech processes when the engineers are working on a deadline and have no time to care about equity while delivering a product? How do we slow that down to get community input? Ideally in the beginning, right, rather than after it's already baked.

Cindy: That's what government should be doing. I mean, that's what civil servants should be doing, right? They should be running processes, especially around tools that they are going to be using. And the misuse of trade secret law and confidentiality in this space drives me crazy. If this is going to be making decisions that have impact on the public, then a public servant’s job ought to be making sure that the public's voice is in the conversation about how this thing works, where it works, where you buy it from, and that's just missing right now.

Vinhcent: Yeah, that was what AB 13, what we tried to do last year. And there was a lot of hand-wringing about putting that responsibility onto public servants, because now they're worried that they'll get in trouble if they don't do their job right. But that's your job, you know, you have to do it; it's government's role to protect citizens from this kind of abuse. 

MUSIC BREAK

Danny: I also think there's a sort of new and emerging disparity and inequity in the fact that we're constantly talking about how large government departments and big companies are using these machine learning techniques, but I don't get to use them. As you said, Vinhcent, I would love the machine learning thing that could tell me what government services are out there based on what it knows about me. And it doesn't have to share that information with anyone else. It should be my little... I want a pet AI, right? 

Vinhcent: Absolutely. The public use of AI is so far limited to things like putting a filter on your face, right? Let's give us real power over, you know, our ability to navigate this world and get opportunities. Yeah, how to flip that is a great question, and something, you know, I think I'd love to tackle with you all. 

Cindy: I also think of things like the Administrative Procedure Act, getting a little lawyerly here, but this idea of notice and comment, you know, before something gets purchased and adopted. That's something that we've done in the context of law enforcement purchases of surveillance equipment in these CCOPS ordinances that EFF has helped pass in many places across the country. And as you point out, disclosure of how things are actually going after the fact isn't new either; it's something that we've done in key areas around civil rights in the past and could do in the future. But it really does point out how important transparency is, both evaluation before and transparency after, as a key to getting at least enough of a picture of this so we can begin to solve it.

Vinhcent: I think we're almost there, where governments are ready. We tried to pass a risk assessment and inventory bill in California, AB 13, this past year, like what you mentioned in New York, and what it came down to was that the government agencies didn't even know how to define what an automated decision system was.

So there's a little bit of reticence. And I think, as we get more stories around, like, Facebook, or abuse in banking, that will eventually get our legislators and government officials to realize that this is a problem and, you know, stop fighting over these little things and see the bigger picture: that we need to start moving on this and we need to start figuring out where this bias is arising.

Cindy: We would be remiss if we were talking about solutions and we didn't talk about, you know, a baseline strong privacy law. I know you think a lot about that as well. We don't have a real, comprehensive look at things, and we also really don't have a way to create accountability when companies fall short. 

Vinhcent: I am a board member of the California Privacy Protection Agency. California has what is really the strongest privacy law in the United States, at least right now. Part of that agency's mandate is to require folks that have automated decision systems that include profiling to give people the ability to opt out and to give customers transparency into the logic of those systems. We still have to develop those regulations. What does that mean? What does "logic" mean? Are we going to get people answers that they can understand? Who is subject to, you know, those disclosure requirements? But that's really exciting, right? 

Danny: Isn't there a risk that this is sort of the same kind of piecemeal solution that we described in the rest of the privacy space? I mean, do you think there's a need to put this into a federal privacy law? 

Vinhcent: Absolutely. What California does will hopefully influence an overall federal one. I do think that the development of regulations in the AI space will happen, in a lot of instances, in a piecemeal fashion. We're going to have different rules for healthcare AI, different rules for housing and employment, maybe lesser rules for advertising, depending on what you're advertising. So to some extent, these rules will always be sector specific. That's just how the United States legal system has developed the rules for all these sectors. 

Cindy: We think of three things, and the California law has a bunch of them. You know, we think of a private right of action, so actually empowering consumers to do something if this doesn't work for them, and that's something we weren't able to get in California. We also think about non-discrimination, so if you opt out of tracking, you know, you still get the service, right? We kind of fix this situation that we talked about a little earlier where, you know, we pretend like consumers have consent, but the reality is they really don't. And then of course, for us, no preemption, which is really just a tactical and strategic recognition that if we want the states to experiment with stuff that's stronger, we can't have the federal law come in and undercut them, which is always a risk. We need the federal law to hopefully set a very high baseline, but given the realities of our Congress right now, making sure that it doesn't become a ceiling when it really needs to be a floor. 

Vinhcent: It would be a shame if California put out strong rules on algorithmic transparency and risk assessments and then the federal government said, no, you can't do that, you're preempted. 

Cindy: As new problems arise,  I don't think we know all the ways in which racism is going to pop up in all the places or other problems, other societal problems. And so we do want the states to be free to innovate, where they need to.

MUSIC BREAK

Cindy: Let's talk a little bit about what the world looks like if we get it right, and we've tamed our machine learning algorithms. What does our world look like?

Vinhcent: Oh my gosh, it's such a paradise, right? Because that's why I got into this work. When I first got into AI, I was sold that promise, right? I was like, this is objective, this is going to be data-driven, things are going to be great. We can use these services, right, this micro-targeting; let's not use it to sell predatory ads, but let's use it to reach the people that need it, like for government assistance programs.

California has all these great government assistance programs that pay for your internet, they pay for your cell phone bill, and enrollment is at 34%.

We have a really great example of where this worked in California. As you know, California has cap and trade, so you're taxed on your carbon emissions, and that generates billions of dollars in revenue for California. And we got into a debate, you know, a couple years back, about how that money should be spent. What California did was create an algorithm, with the input of a lot of community members, that determined which cities and regions of California would get that funding. We didn't use any racial terms, but we used data sources that are associated with redlining. Are you next to pollution? Do you have high rates of asthma and heart attacks? Does your area have higher unemployment rates? So we took all of those categories that banks are using to discriminate against people in loans, and we're using those same categories to determine which areas of California get more access to cap and trade reinvestment funds. And that's being used to build electric vehicle charging stations, affordable housing, parks, trees, and all these things to abate the impact of the environmental discrimination that these neighborhoods faced in the past.
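For readers curious what that kind of scoring can look like mechanically, here is a minimal sketch: rank regions by a weighted combination of burden indicators and direct reinvestment funds to the highest-scoring ones. The indicator names, the weights, and the share of regions selected are illustrative assumptions, not the actual formula California used.

```python
# Minimal sketch of the kind of composite scoring Vinhcent describes:
# rank regions by combined environmental and economic burden, then
# direct reinvestment funds to the most burdened ones.  Indicator
# names, weights, and the selection share are illustrative assumptions,
# not the actual California formula.

def burden_score(region, weights):
    """Weighted sum of indicators already normalized to a 0..1 scale."""
    return sum(weights[name] * region[name] for name in weights)

def priority_regions(regions, weights, top_share=0.25):
    """Return the top share of regions, ranked by burden score."""
    ranked = sorted(regions, key=lambda r: burden_score(r, weights), reverse=True)
    cutoff = max(1, int(len(ranked) * top_share))
    return ranked[:cutoff]

if __name__ == "__main__":
    weights = {"pollution": 0.4, "asthma_rate": 0.3, "unemployment": 0.3}
    regions = [
        {"name": "Tract 1", "pollution": 0.9, "asthma_rate": 0.8, "unemployment": 0.7},
        {"name": "Tract 2", "pollution": 0.2, "asthma_rate": 0.3, "unemployment": 0.4},
        {"name": "Tract 3", "pollution": 0.6, "asthma_rate": 0.5, "unemployment": 0.9},
        {"name": "Tract 4", "pollution": 0.1, "asthma_rate": 0.2, "unemployment": 0.2},
    ]
    for r in priority_regions(regions, weights, top_share=0.5):
        print(r["name"], round(burden_score(r, weights), 2))
```

The same arithmetic that once screened neighborhoods out of lending can just as easily screen them in for investment; swapping what the score is used for is exactly the flip Vinhcent describes next.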

Vinhcent: So I think in that sense, you know, we could use algorithms for Greenlining, right? Uh, not redlining, but to drive equitable, equitable outcomes. And that, you know, doesn't require us to change all that much. Right. We're just using the tools of the oppressor to drive change and to drive, you know, equity. So I think that's really exciting work. And I think, um, we saw it work in California and I'm hoping we see it adopted in more places. 

Cindy: I love hearing a vision of the future where, you know, the fact that there are individual decisions possible about us are things that lift us up rather than crushing us down. That's a pretty inviting way to think about it. 

Danny: Vinhcent Le thank you so much for coming and talking to us. 

Vinhcent: Thank you so much. It was great. 

MUSIC BREAK

Cindy: Well, that was fabulous. I really appreciate how he articulates the dream of machine learning: that we would get rid of bias and discrimination in official decisions, and instead, you know, we've basically reinforced it. And how it's hard to correct for these historical wrongs when they're based in so many different places. So just removing the race of the people involved doesn't get at all the ways discrimination creeps into society.

Danny: Yeah, I guess the lesson that a lot of people have learned in the last few years, and everyone else has kind of known, is that this sort of prejudice is wired into so many systems. And it's kind of inevitable that algorithms that are based on drawing all of this data and coming to conclusions are gonna end up recapitulating it.

I guess one of the solutions is this idea of transparency. Vinhcent was very honest that we're just in our infancy of learning how to make sure that we know how algorithms make their decisions. But I think that has to be part of the research and where we go forward.

Cindy: Yeah. And, you know, at EFF we spent a little time trying to figure out what transparency might look like with these systems, because at the center of these systems it's very hard to get the kind of transparency that we think about. But there's transparency in all the other places, right? He started off talking about an inventory of just all the places it's being used.

Then looking at what the algorithms are putting out, looking at the results across the board, not just about one person but about a lot of people, in order to try to see if there's a disparate impact. And then running dummy data through the systems to try to see what's going on.

Danny: Sometimes we talk about algorithms as though we've never encountered them in the world before, but in some ways, governance itself is this incredibly complicated system. And we don't always know why that system works the way it does. But what we do is we build accountability into it, right? And we build transparency around the edges of it. So we know how the process at least is going to work. And we have checks and balances. We just need checks and balances for our sinister AI overlords. 

Cindy: And of course we just need better privacy law. We need to set the floor a lot higher than it is now, and of course that's a drum we beat all the time at EFF. It certainly seems very clear from this conversation as well. What was interesting is that, you know, Vinhcent comes out of the world of home mortgages and banking, and Greenlining itself, you know, who gets to buy houses where, and on what terms. That world has a lot of mechanisms already in place both to protect people's privacy and to have more transparency. So it's interesting to talk to somebody who comes from a world where we're a little more familiar with that kind of transparency, and how privacy plays a role in it, than I think in the general uses of machine learning or on the tech side. 

Danny: I think it's funny, because when you talk to tech folks about this, we're kind of pulling our hair out because this is so new and we don't understand how to handle this kind of complexity. And it's very nice to have someone come from a policy background and go, you know what? We've seen this problem before; we pass regulations, we change policies to make this better, you just have to do the same thing in this space.

Cindy: And again, there's still a piece that's different, but it's far less than I think people sometimes assume. The other thing I really loved is that he gave us such a beautiful picture of the future, right? It's one where we still have algorithms, we still have machine learning, we may even get all the way to AI, but it is empowering people and helping people. And I love the idea of better being able to identify people who might qualify for public services that we're not finding right now. That's a great version of a future where these systems serve the users rather than the other way around. 

Danny: Our friend Cory Doctorow always has this banner headline of seize the methods of computation. And there's something to that, right? There's something to the idea that we don't need to use these things as tools of law enforcement or retribution or rejection or exclusion. We have an opportunity to put this in the hands of people so that they feel more empowered. And they're going to need to be that empowered, because we're going to need to have a little AI of our own to be able to really work better with these big machine learning systems that will become such a big part of our lives going forward.

Cindy: Well, big thanks to Vinhcent Le for joining us to explore how we can better measure the benefits of machine learning, and use it to make things better, not worse.

Danny: And thanks to Nat Keefe and Reed Mathis of Beat Mower for making the music for this podcast. Additional music is used under a Creative Commons license from CCMixter. You can find the credits and links to the music in our episode notes. Please visit eff.org/podcasts, where you’ll find more episodes, learn about these issues, and donate to become a member of EFF, as well as lots more. Members are the only reason we can do this work. Plus, you can get cool stuff like an EFF hat, an EFF hoodie, or an EFF camera cover for your laptop camera. How to Fix the Internet is supported by the Alfred P. Sloan Foundation’s Program in Public Understanding of Science and Technology. I'm Danny O’Brien.  

rainey Reitman

“Worst in Show Awards” Livestreams Friday: EFF’s Cindy Cohn and Cory Doctorow Will Unveil Most Privacy-Defective, Least Secure Consumer Tech Products at CES

2 weeks 1 day ago
"Cool" Products That Collect Your Data, Lock Out Users

Las Vegas—On Friday, January 7, at 9:30 am PT, Electronic Frontier Foundation (EFF) Executive Director Cindy Cohn and EFF Special Advisor and sci-fi author Cory Doctorow will present the creepiest, most privacy-invasive, and least secure consumer tech devices debuting at this year’s Consumer Electronics Show (CES).

EFF, in partnership with iFixit, USPIRG, and Repair.Org, will unveil their 2022 Worst in Show picks, an annual award given at CES, the massive trade show in Las Vegas where vendors demonstrate the coolest in purportedly life-changing tech gadgets and devices (think movie-streaming sunglasses and color-changing cars).

Not all these products will change our lives for the better. A panel of judges will present the least secure, least safe, least repairable, and least eco-friendly gadgets from the show. Doctorow will emcee the event and will join Cohn and guest judges Nathan Proctor (USPIRG), Gay Gordon-Byrne (Repair.org), Paul Roberts (securepairs), and Kyle Wiens (iFixit) to discuss their selections.

To watch the presentation live, before it goes on YouTube, fill out this form to request access. You’ll be sent a Zoom link to join the event (no-video/audio-only is fine).

Who: EFF’s Cindy Cohn and Cory Doctorow

What: Annual CES Worst in Show Awards

When: Friday, January 7, 2022, 9:30 am PT / 12:30 pm ET

Form to request Zoom access:
https://docs.google.com/forms/d/e/1FAIpQLSc_EAcNIZl-AzAU_yAu2jF-c21w1fhS_rKN7ACZb_WtaZd66Q/viewform

Check out last year’s winners:
https://www.repair.org/worstinshow

For more on Right to Repair:
https://www.eff.org/issues/right-to-repair

 

Contact: Karen Gullo, Analyst, Media Relations Specialist, press@eff.org, press@ifixit.com
Karen Gullo

How are Police Using Drones?

2 weeks 1 day ago

Across the country, police departments are using the myriad means and resources at their disposal to stock up on drones. According to the most recent tally on the Atlas of Surveillance (a project of EFF and the University of Nevada), at least 1,172 police departments nationwide are using drones. And over time, we can expect more law enforcement agencies to deploy them. A flood of COVID relief money, civil asset forfeiture money, federal grants, and military surplus transfers enables more departments to acquire these flying spies.

But how are police departments using them?

A new law in Minnesota mandates the yearly release of information related to police use of drones, and gives us a partial window into how law enforcement use them on a daily basis. The 2021 report released by the Minnesota Bureau of Criminal Apprehension documents use of drones in the state during the year 2020.

According to the report, 93 law enforcement agencies from across the state deployed drones 1,171 times in 2020, with a cumulative price tag of almost $1 million. The report shows that the vast majority of the drone deployments were not for the public safety disasters that so many departments use to justify drone use. Rather, almost half (506) were just for the purpose of “training officers.” Other uses included information collection based on reasonable suspicion of unspecified crimes (185), requests from other government agencies unrelated to law enforcement (41), road crash investigation (39), and preparation for and monitoring of public events (6 and 12, respectively). There were zero deployments to counter the risk of terrorism. Police deployed drones 352 times in the aftermath of an “emergency” and 27 times for “disaster” response.

This data isn’t terribly surprising. After all, we’ve spent years seeing police drones being deployed in more and more mundane policing situations and in punitive ways.

After the New York City Police Department accused one racial justice activist, Derrick Ingram, of injuring an officer’s ears by speaking too loudly through his megaphone at a protest, police flew drones by his apartment window—a clear act of intimidation. The government also flew surveillance drones over multiple protests against police racism and violence during the summer of 2020. When police fly drones over a crowd of protestors, they chill free speech and political expression through fear of reprisal and retribution from police. Police could easily apply face surveillance technology to footage collected by a surveillance drone that passed over a crowd, creating a preliminary list of everyone that attended that day’s protest.

As we argued back in May 2020, drones don’t disappear once the initial justification for purchasing them no longer seems applicable. Police will invent ways to use their invasive toys, which means that drone deployment finds its way into situations where it is not needed, including everyday policing and the surveillance of First Amendment-protected activities. In the case of Minnesota’s drone deployments, police can try to hide behind their use of drones as a glorified training tool, but the potential for their invasive use will always hang over the heads (literally) of state residents. 

Matthew Guariglia