EFF, Partners Launch New Edition of Santa Clara Principles, Adding Standards Aimed at Governments and Expanding Appeal Guidelines

Revisions Based on Feedback About Inequitable Application, Algorithmic Tools

San Francisco—The Electronic Frontier Foundation (EFF) and a coalition of civil society organizations and academics today released the second edition of the Santa Clara Principles on Transparency and Accountability in Content Moderation, adding standards directed at government and state actors to beef up due process and expanding guidelines for reporting on and notifying users about takedowns.

The Santa Clara Principles outline standards related to transparency, due process, cultural competence, and respect for human rights that internet platforms should meet in order to provide meaningful, public-facing transparency around their moderation of all user-generated content, paid or unpaid.

In May 2018, a small coalition of advocates and individuals released the original Santa Clara Principles in response to growing concerns about the lack of transparency and accountability by internet platforms around how they create and enforce content moderation policies. This first version of the Principles outlined minimum standards that internet platforms must meet to provide adequate transparency and accountability about their efforts to moderate user-generated content or accounts that violate their rules.

Since the release of the initial Principles, many internet platforms have endorsed and committed to adhering to the Principles, including Apple, Facebook, Google, and Twitter. However, the original Principles were created by a small number of organizations and individuals based primarily in the United States. Following the 2018 launch, many allies—particularly our colleagues from countries outside the United States and Western Europe—raised legitimate concerns and suggestions for their revision.

Stakeholders from around the world emphasized that platforms invest more resources in providing transparency and due process to users in certain communities and markets, creating fundamental inequities. Further, over the past several years, platforms have expanded their content moderation tactics to include interventions implemented by algorithmic tools, such as downranking. These companies have not provided sufficient transparency around how these tools are developed and used, and what impact they have on user speech and access to information.

Because of these concerns, the Santa Clara Principles coalition initiated an open call for comments from a broad range of global stakeholders, with the goal of eventually expanding the Principles. Using the feedback received through this open call, as well as through a series of open consultations and workshops, the coalition drafted the second iteration of the Santa Clara Principles.

“Allies from more than 10 countries made 40 sets of recommendations that were used to revise the Santa Clara Principles,” said EFF Director for International Freedom of Expression Jillian York. “We urge technology companies to adopt these revised guidelines and make a greater commitment to users across the globe to be transparent, fair, and consistent in moderating their online speech.”

For more about the Santa Clara Principles:
https://santaclaraprinciples.org/

Contact: Karen Gullo, Analyst, Senior Media Relations Specialist, karen@eff.org
Karen Gullo

Restore the 4th Minnesota: Racking Up Victories in 2021


After the 2013 Snowden revelations about NSA spying, many who were outraged sought to channel their frustrations first into mobilizing protests against state surveillance, and then into organizing local groups in defense of Fourth Amendment rights against unreasonable search and seizure. From this initial mobilization, Restore the Fourth was born as a network of decentralized, local groups. Restore the Fourth Minnesota (RT4MN) is very active, helping organize a local coalition known as Safety Not Surveillance and joining with a wide range of other local groups, from those seeking community accountability reforms to police abolitionists. A member of the Electronic Frontier Alliance, the organization has recently helped ban government facial recognition in Minneapolis, defeated extra state funding to the local fusion center, and pushed for an expanded form of CCOPS (Community Control Over Police Surveillance) that it calls POSTME (Public Oversight of Surveillance Technology and Military Equipment).

The EFF Organizing Team caught up with Chris at RT4MN to hear about how they got organized and won victories this year for their communities.

What is Restore the Fourth Minnesota?

Chris: RT4MN is a grassroots nonprofit dedicated to restoring the Fourth Amendment of the U.S. Constitution. We advocate for personal privacy and against mass government surveillance.

We try to focus on state and local issues, but we also advocate on the national level with other Restore The Fourth chapters and advocacy organizations. The national Restore The Fourth movement initially grew out of the wave of national protests after the Edward Snowden revelations in 2013. The people who attended the subsequent protests started meeting regularly. After a couple of years, the chapter went dormant, but we rebooted in 2019. I have always been immersed in the intersection of technology and policy, but it was the Snowden revelations and the activism that grew up around them that opened my eyes to the importance of this topic, too.

How did it come back together in 2019? Was restarting the chapter easier than starting it initially?

Chris: I was not part of that initial wave of activity back in 2013. But my co-chair Kurtis and the rest of the original activists definitely laid the groundwork for our success.

Both Kurtis and I had a lot of friends who worked in technology, and who could see that issues of privacy were getting worse, not better. We both invited some folks we thought might be interested, and then emailed everyone who was on the chapter's old mailing list, finally doing a couple of Reddit meetup posts. At that first meeting, we discussed a path forward, some thoughts on how to grow the chapter, what our legislative objectives should be, and so on. The main thing we decided was that we should meet semi-regularly and reach out to the local ACLU to see if they were interested in working with us.

What have been some of the issues you've concentrated on and what were some of your early successes?

Chris: Our most significant accomplishment before 2021 was partnering with the ACLU and other local organizations in 2020 to establish an anti-surveillance coalition. That laid the groundwork for our success in passing a facial recognition ban in Minneapolis earlier this year. Our other big accomplishment was advocating against a budget proposal that would have added millions of additional funds to a local fusion center.

That being said, our primary focus has been CCOPS. The pace with which technology develops necessitates a holistic framework. Spinning up new policy proposals to push through the legislature, trying to ban technology x while strictly regulating technology y and promoting technology z is not going to work. The tech moves too fast and the legislative and judicial bodies move too slow.

Are there members of your group or coalition who are abolitionists? How does CCOPS/POSTME work within both a reform and an abolitionist framework in Minneapolis?

Chris: Yes, there are definitely a couple of abolitionist types in our group, and a couple of more moderate "surveillance technology can be useful if used properly" types. CCOPS is great precisely because it is a framework and not a one-size-fits-all solution. It leaves it up to people’s duly elected representatives to decide whether and under what circumstances surveillance technology is used, while ensuring that the community that will be impacted by the technology will have the information they need to either fight against or support it.

Tell us about how the community won Minneapolis' facial recognition ban.

Chris: After some initial discussion, the coalition approached Minneapolis City Council Member Steve Fletcher, who had already publicly done some work on surveillance issues. After some back and forth with him and other council members, the city passed a series of "privacy principles" which were nonbinding, but laid the groundwork for further action.

Soon afterwords, George Floyd was murdered. The public outcry placed a lot of pressure on the council. So we decided to pivot from a comprehensive surveillance reform to a more narrow but achievable aim, placing a moratorium on the use of facial recognition technology (FRT). CCOPS is great because it is comprehensive, but the downside is that it therefore required the input of a lot of different stakeholders.

Did the murder of George Floyd and ensuing uprising change the course of RT4MN's campaigns, or how you thought about and worked on these issues?

Chris: Yes, in too many ways to count. It certainly brought in a new wave of activists. The government’s response to the protests also highlighted some of the worst aspects of surveillance. As one example, the Minnesota fusion center performs ongoing social media surveillance, and right after the murder they sent several "reports" to the Minneapolis Police Department highlighting some out-of-context and hyperbolic tweets. Despite the protesters being mostly peaceful, the report exaggerated threats and descriptions of suspicious behaviors, stoking police fears and setting the stage for a massively overmilitarized response.

How did the work around the Minneapolis fusion center come about and how did the movement help defeat added funding this year?

Chris: We heard vague rumblings about it through the local activist grapevine, but in mid-March things became much more solid when the proposed budget came out and there was a corresponding media push. The goal was to add millions of dollars to the budget and transform it into a 24/7/52 operation.

The Minnesota Senate was controlled by Republicans, who are already primed to cut spending from the proposed budget, so our task was to make sure that $5 million got to the top of the cutting block. So we hosted a live panel, sent around a petition, submitted testimony, and had conversations with lawmakers.

It’s hard to know for sure what tipped the scales, but the fusion center funding was removed from the House Public Safety and Senate Judiciary omnibus budget bills, and never made it into the budget.

What has your group learned about training people in Surveillance Self-Defense and in your other popular education work?

Chris: These trainings are fun and important (especially when you are helping fellow activists), but for me they mostly serve to reinforce the idea that leaving it up to individuals to protect their privacy is a losing strategy. I say this as someone who spends a frankly unhealthy amount of time and effort trying to protect my personal privacy.

Yes, it is important that people have the knowledge and ability to fight back when the government abuses our rights, but that distracts from the fundamental problem that the government is abusing our rights, and it needs to stop doing that.

What's on the horizon for RT4MN? Are there campaigns that your group has wished to prioritize in the past and you're now putting back on the agenda?

Chris: Drones. Drones are always on the horizon.

In all seriousness, it’s hard to pin down. There is a never-ending list of ways that the government wants to spy on us. Corporations love spying on us too, and of course all the data they collect eventually makes its way into the hands of the government.

In the immediate future we are looking at trying to get CCOPS passed in Minneapolis and getting facial recognition banned in Saint Paul, while also seeing if we can get some movement on better regulating drones, getting some consumer privacy protections, and hopefully banning keyword search warrants.

José Martín

First Circuit Affirms School's Punishment of Students for Online Social Media Posts


The U.S. Court of Appeals for the First Circuit affirmed a public school’s punishment of students for speech posted on social media. It was unclear from the lower court proceedings whether the students had posted to social media while on campus or off campus. EFF had urged the court to draw a distinction between on- and off-campus social media speech, and to make clear that schools cannot reach into students’ private lives to punish them for speech that they utter outside of school, even if it’s online. Although the court declined to do that in light of a recent Supreme Court decision, the First Circuit’s ruling is limited to a narrow class of speech that schools have a heightened interest in policing: speech that infringes on the rights of others, such as “serious or severe bullying or harassment.”

The case, Doe v. Hopkinton Public Schools, involved a student, “Robert Roe,” who was bullied by teammates on his hockey team. The school punished a number of those teammates—and also the two plaintiffs in this case, students who made derogatory comments about Roe behind his back on the social media app Snapchat. The court found that the plaintiffs, by participating in the group chat about Roe, had “actively encouraged” other participants to directly bully Roe and so the plaintiffs’ comments constituted a violation of the Massachusetts state anti-bullying law.

Schools do, of course, have a significant interest in protecting their students from bullying and harassment by their peers. In a recent case, Mahanoy Area School District v. B.L., the Supreme Court held that schools have less leeway to police students’ speech when that speech occurs off campus, but that certain buckets of speech may warrant punishment no matter where it occurs:

  • serious or severe bullying or harassment targeting particular individuals;
  • threats aimed at teachers or other students;
  • the failure to follow rules concerning lessons, the writing of papers, the use of computers, or participation in other online school activities; or
  • breaches of school security devices, including material maintained within school computers.

In light of Mahanoy and the important interest in preventing bullying, the First Circuit’s conclusion that Hopkinton Public Schools could punish students’ social media speech that contributed to bullying, even if some of the speech might have been posted while off campus, is not a surprise.

However, we are disappointed that the First Circuit did not take this opportunity to make explicit that schools cannot generally police the speech that students utter in their private lives outside of school, and that the exception for bullying is just that: a narrow and limited exception, per the Supreme Court in Mahanoy.

As to whether participating in a group chat or otherwise communicating with those who directly bully rises to the level of “active encouragement,” we are heartened that the court stated: “[T]here may be circumstances in which encouragement is so minimal or ambiguous, the chain of communication so attenuated, or knowledge of direct bullying so lacking, that a school's punishment of certain speech would be unreasonable.”

We hope that the First Circuit and other courts will clarify that, while schools may punish students who engage in bullying or harassment wherever it occurs, students are generally free to speak after school and on weekends, in their private lives, without having to fear that their schools may reach in and punish them for expressing themselves.

Sophia Cope

Pay a Hacker, Save a Life

Episode 104 of EFF’s How to Fix the Internet

How do we make the Internet more secure? Part of the solution is incentives, according to Tarah Wheeler, this week’s guest on EFF’s How to Fix the Internet. As a security researcher with deep experience in the hacker community, Tarah talks about how many companies are shooting themselves in the foot when it comes to responding to security disclosures. Along with EFF co-hosts Cindy Cohn and Danny O’Brien, Tarah also talks about how existing computer crime law can serve to terrify security researchers, rather than uplift them.

Click below to listen to the show now, or choose your podcast player:

Listen on the Simplecast player: https://player.simplecast.com/45e44f5b-6f6c-47bd-9e0f-5284bd5b0d69

Computers are in everything we do — and that means computer security matters to every part of our lives. Whether it’s medical devices or car navigation, better security makes us safer. 

Note: We'll be having a special, live event with Tarah Wheeler to continue this conversation on Thursday December 9th. RSVP or learn more.

On this episode, you’ll learn:

  • About the human impact of security vulnerabilities—and how unpatched flaws can change or even end lives;
  • How to reconsider the popular conception of hackers, and understand their role in helping build a more secure digital world;
  • How the Computer Fraud and Abuse Act (CFAA), a law that is supposed to punish computer intrusion, has been written so broadly that it now stifles security researchers;
  • What we can learn from the culture around airplane safety regulation—including transparency and blameless post-mortems;
  • How we can align incentives, including financial incentives, to improve vulnerability reporting and response;
  • How the Supreme Court case Van Buren helped security researchers by ensuring that the CFAA couldn’t be used to prosecute someone for merely violating the terms of service of a website or application;
  • How a better future would involve more collaboration and transparency among both companies and security researchers.

Tarah Wheeler is an information security executive, social scientist in the area of international conflict, author, and poker player. She serves on the EFF advisory board, as a cyber policy fellow at Harvard, and as an International Security Fellow at New America. She was a Fulbright Scholar in Cybersecurity last year. You can find her on Twitter at @Tarah or at her website: https://tarah.org/.  

If you have any feedback on this episode, please email podcast@eff.org.

Below, you’ll find legal resources - including important cases, books, and briefs discussed in the podcast - and a full transcript of the audio.

Resources

Consumer Data Privacy:

Ransomware:

Computer Fraud and Abuse Act (CFAA):

Electoral Security:


Transcript

Tarah: So in 2010, I was getting married for the first time. And as I was walking down the street one night, I see one of the local bridal shops had its front door just hanging open in the middle of the night. There's no one around; it just looks like someone maybe thought that the door was closed and left out the back, perhaps. So I, I poked my head in, I look around, Hey, is anybody in here? I closed the door, kind of latch it all the way until I can feel it rattle, and lock it from the inside, pull it shut. And I left a little note on the door saying, Hey, folks, just want to let you know, your door was open in case there's something wrong with the lock.

And, and I left. Never heard back from them again. Not a single acknowledgement, not a thank you, not anything. And that is really the place that a lot of security researchers find themselves in when they try to make a third-party report of a security vulnerability to a company, they just get ignored. And you know, it's a little annoying. 

Danny: That's Tarah Wheeler, and she's our guest this week on How to Fix the Internet. We're going to talk to her about coordinated vulnerability disclosures, what should happen if you find a flaw in software that needs to be fixed, and how people like Tarah can keep you safe.

I'm Danny O'Brien.

Cindy: And I'm Cindy Cohn. Welcome to How to Fix the Internet, a podcast from the Electronic Frontier Foundation helping you understand how we can all make our digital future better.

Danny: Welcome Tarah. Thank you so much for joining us.

Tarah: Thank you so much for having me. Cindy, it's an incredible pleasure. Thanks so much, Danny.

Cindy: Tarah, you are a cyber policy fellow at Harvard, an international cybersecurity fellow at New America, you were a Fulbright Scholar in cybersecurity last year, and to our great delight you are also a member of the EFF advisory board. Suffice it to say that you know a lot about this stuff. And off the top, you told us a story about walking past a bridal shop, doing a good deed by locking the door, and then never hearing back. Can you explain how that story connects to the coordinated vulnerability disclosure world that you live in?

Tarah: Absolutely. So coordinated vulnerability disclosure is a process that's engaged in by multiple stakeholders, which, translated into normal human terms, means there needs to be a way for the company to get that information from somebody who wants to tell them that something's gone wrong.

Well, the problem is that companies are often either unaware that they should have an open door policy for third-party security researchers to let them know something's gone wrong. Security researchers, on the other hand, need to provide that information to companies without having it start off with sounding like a ransom demand, basically.

Danny: Right

Tarah: So let me give you an example. If you find that there's a vulnerability, something like, I don't know, a cross site scripting issue in a company's website, you might try to let that company know that something's gone wrong, that they have a security vulnerability that's easily exploitable and public to the internet.

Well, the question for a lot of people is what do you do if you don't know how to get that information to somebody at a company. The industry standard very first step is make sure that you have the alias security@company.com available as an email address that can take reports from third-party researchers. We in the industry and the community just sort of expect that that email alias will work.

Hopefully you can look on their site and find a way to contact somebody who's in technical support or who's on the security team and let them know something's wrong. However, a lot of companies have an issue with taking those reports in and acknowledging them because honestly, there's two different tracks the world operates on. There's sort of the community of information, security researchers who operate on gratitude and money. And then there's corporations that operate off of liability and public brand management. So when a company gets a third party report of a security vulnerability, there's a triage process that needs to happen at that company.

And I'm here to tell you, as a person who's done this both inside and outside companies, when you are a company that is receiving reports of a vulnerability, unless you can fix that vulnerability pretty quickly, you may not wish to acknowledge it. You sometimes get pressure from inside the company, especially from the lawyers, because acknowledging it can be seen as an admission that the company has seen it, triaged it, and will repair the vulnerability in a timely manner. Let me assure you, a timely manner looks really different to a security researcher than to internal counsel.

Danny: If somebody finds a vulnerability and reports it to a company, and the company either blows them off or tries to cover it up, what are the consequences for the average user?

Like how does it affect me?

Tarah: How does it affect a normal person who's a user of a product if somebody who's a security researcher has reported a vulnerability to that company and the company never fixes it?

Danny: Hmmm Hmmm

Tarah: Well, I don't know about you, but I'm one of the 143 million people that lost my personal information and credit history when Equifax decided not to patch a single vulnerability in their servers.

Behind every data breach is a story. And that story is either that people didn't know something was wrong, or people knew something was wrong, but they de-prioritized the fix for it, not understanding how severely it could impact them and consumers and the people whose data they're storing.

Danny: You talked about coordinated vulnerability disclosure, so who's coordinating and what's being coordinated?

Tarah: When we talk about multiple stakeholders in a vulnerability, one of the things we're talking about is not just the people who found it and the people who need to fix it, but also the people who are advocating for the consumers who may be affected by it.

That's how you'll get situations like the FTC stepping in to have a conversation or two with companies that have repeatedly failed to fix major vulnerabilities in their systems when they're protecting consumer data. The EFF, as a great example, tends to want to protect a larger community of people, not just the researchers, not just the people working at the company, but all the people who are impacted by a vulnerability. When a security researcher finds something that's wrong and reports it to a company, the company's incentives need to be aligned with the idea that they should be fixing the vulnerability, not suing the researcher into silence.

Cindy: EFF’s had a pretty significant role in this, and I remember the bad old days when a security researcher really pretty much immediately either got a knock on the door from law enforcement or, you know, service of process for being sued for having the audacity to tell a company that it's got a security problem.

And what I really love about the way the world has evolved is that we do have this conversation now more often than not in the software industry. But you know, computers are in everything now. They're in cars and refrigerators, they're in medical devices that are literally inside our bodies, like insulin pumps and heart monitors.

I'm wondering if you have a sense of how other industries are doing here now that they're in the computer software business too.

Tarah: I was recently on a panel for internet of things security at the Organization for Economic Cooperation and Development. And I was talking to somebody who was previously with the Australian consumer product safety commission, or their equivalent of it. And I am here to tell you that having computers in everything is a fascinating question, as a consumer product safety person's entire perspective had very little to do with whether or not there was a software vulnerability in the computers that they were using, but whether or not the product that computer had been put in dealt with temperatures.

So when we start talking about putting a computer in everything, we start talking about things that can kill people: temperature differentials altering the temperature inside refrigerators and freezers, or changing whether a sous vide machine has an accurate readout. That's the kind of vulnerability that can kill people.

Danny: Do you think there's something particularly different from software compared to other disciplines where we've sort of sorted out the safety problem? Like bridges don't fall down anymore. Right. Is there something we're doing with bridges that we're not doing with software or is it just that software is hard?

Tarah: Number one, software's hard. Number two, I love an industry I'm going to bring up instead of bridges, and that is aviation. And I will tell you what aviation does differently than we do in information security: they have a blameless post-mortem to discuss how and why something occurred when it comes to accidents. They have a multi-stakeholder approach that brings in multiple different people to examine the causes of an incident. We're talking, of course, about the NTSB and the FAA, the National Transportation Safety Board and the Federal Aviation Administration. And in aviation the knowledge exchange between pilots is perpetual, ongoing, expected, regulated, and the expectation for those of us who are in the aviation community is that we will perpetually be telling other people what we did wrong. There's a perpetual culture of discussing and revealing our own mistakes and talking about how other people can avoid them, in a way where there's no penalty for doing so. Everyone there will tell you what they've done wrong as a pilot to deeply convince you to not make the same mistakes. That's just not true in information security. We hide all of our mistakes as fast as we can bury them under piles of liability and that dreaded email subject line, "attorney-client confidential." That's where this culture of secrecy comes from, and that's why we have this problem.

“How to Fix the Internet” is supported by The Alfred P. Sloan Foundation's Program in Public Understanding of Science, enriching people’s lives through a keener appreciation of our increasingly technological world and portraying the complex humanity of scientists, engineers, and mathematicians.

Cindy: I think that is really a great place to pivot a little bit to how we fix it, because, you know, one of the reasons why the lawyers are so worried is because the liability risk is, well, sometimes massively overblown, but sometimes real. And the same thing is true on the other side: the law is set up so that there's real risk to a security researcher just for telling somebody the truth. And then on the other side, the liability threat doesn't line up the company's incentives with what's best for society. That to me does not mean that we protect the company against any liability; when a plane falls out of the sky, we still expect the company to be held accountable for the damage that they do. But the liability piece doesn't get in the way of learning from the mistakes. And I think in the rest of the software world, we're seeing the liability piece get in the way. And some of that is changes we can make to law and policy, but some of that is, I think, changes we need to make to how these problems are lawyered.

Tarah: The lawyering in a recent case, a ransomware attack on a healthcare facility, I think it was in the South, just resulted in a lawsuit from a patient whose daughter died after not receiving sufficient care at a hospital experiencing a cyber attack. And she's filed suit now, and the concern by people who are watching this process occur is this: people suing hospitals for not revealing that they were under cyber attack, or for not providing them appropriate care during a ransomware attack, is not likely to create a situation where hospitals are more open about the fact that they're experiencing a network outage. It's likely to result in hospitals turning patients away in the middle of ransomware attacks.

Now that's the exact wrong lesson to learn. So when I look at the way that we're thinking about liability in critical infrastructure, the incentives are totally wrong for being publicly open and honest about the fact that a hospital is experiencing a cyber attack. The hospital chose not to pay the ransom, and it's important to note that they may never have gotten the records back or gotten their network back up anyway, or that it may not have happened in time to save this woman's daughter. But at the same time, we can't teach hospitals that the lesson they need to learn from experiencing cyber attacks and lawsuits is that they need to shut up more and pay more ransoms. We can't teach institutions that that is the right way to respond to this fear of liability.

Cindy: One of the ways that we've helped fix this in California is the data breach notification law. In California, if you have a data breach, and this impacts a lot of people around the world because so many companies are based in California, the company's liability risk goes down, not to zero, because I think it's tremendously important that people can still sue if their kids die because of a data breach. Like, you can't remove all accountability, but the accountability shifts if you tell people in a timely manner about the data breach. So there are things we can do, and, you know, a national data breach notification law is one of the things that we could consider. We can set these incentives such that we shift the companies' risk evaluation towards talking about this stuff and away from not talking about it.

The California law is a good example, but you know, many of the federal cybersecurity laws are about, well, you got to tell the government that, but you won't tell anybody else that. 

Tarah: There is a dearth of qualified senior cybersecurity professionals out there. And that is the legacy of the 1986 Computer Fraud and Abuse Act. The government did it to themselves on this one, because the people who care enough to try to follow the law, but also have the curious minds to start doing work on information security research, to start doing offensive security research and trying to help people understand what their vulnerabilities are, are terrified of the CFAA. The CFAA stops independent security research to a level that I think most people still don't really understand. So as a result, we end up with a culture of fear among people who are trying to be ethical and law abiding, and a situation where people who aren't just have to evade federal law enforcement in the United States long enough to make a quick profit and then get out.

Danny: And this isn't just in the United States, right? I mean, one of the things that's been most disappointing, I think about the CFAA has been that it's been exported around the world and that you have exactly that same challenge for people being turned into criminals instead of good citizens, wherever they live.

Tarah: And we're looking at a law that should have had a sunset provision in it to begin with, and a law that was created and put into place and supported by a judge who thought that you could whistle into a payphone to launch nuclear weapons. Look, people, computers are not magic sky fairy boxes full of pixie dust that can somehow change the world. I mean, unless you're mining for blockchain, cause we all know that that's the magical stuff. The same situation applies here: computers are not magic. There are some flaws in our legal system in the United States, and the CFAA is often sprinkled over the top to get indictments. We already have laws that describe what fraud is, what theft is, and saying that it's happening over a computer doesn't make it not fraud. Doesn't make it worse than fraud. It's just a different medium of doing it.

We all know what right and wrong is. Right. And so adding a law that says, if you use a magic pixie box to commit a crime, then it's somehow worse. Doesn't make any sense to those of us who work with computers on an everyday basis. 

Cindy: I think that one of the lessons of the CFAA is maybe we shouldn't write a law based upon a Matthew Broderick movie from the eighties. Right? The CFAA was apparently passed after President Reagan saw WarGames, which is a very fun movie, but not actually realistic. So please go on.

Tarah: So the nature of the CFAA, going back to the real story here, is that it's being used by people who don't understand it to prosecute people who never intended to break laws, or who, if they did, we already have a law to cover that. So the CFAA now is being used, mostly from what we're able to see in industry, to stop exiting employees of large corporations from setting up competing businesses.

That's the actual use, quietly behind the scenes, of the CFAA. The other very public use is to go after people who have no business being prosecuted with that law. They might be bad people, like the police officer in the recent Van Buren v. United States, who collaborated with criminals to harass women and used his departmental computer to look up information. The problem we have here is that this police officer was charged under the CFAA. Now, he wasn't a good guy, but we already have a name for the law that he broke. It's abuse of the public trust, there's fraud, there's theft.

And the problem we're having here is that the crime he committed is the exact same whether or not he looked it up on his laptop or whether he looked information up on these women, by going back and looking through a file cabinet that was built entirely out of paper and wood back at the department.

So we're inventing a law to prosecute information security researchers, employees who are leaving companies or unfairly to prosecute bad people who committed crimes we already have names for. We don't need the CFAA. We already know it when people have done the right and the wrong thing, and we already have laws for those things, but it's very easy for a judge to be convinced that something scary is going to happen on a computer because they don't understand how they work.

Cindy: Let's switch a little bit into, you know, what it looks like if we get this right. So we've already talked about one piece, which is that the Computer Fraud and Abuse Act isn't being used to scare people out of doing good deeds anymore, and that this idea that we have a global cooperative network of people who are all pointed towards making our networks secure is something that we embrace instead of something that we disincentivize. And we need to embrace that both on the individual level with things like the CFAA, and on the corporate level, aligning the corporate incentives, and then on the government level, right, where we encourage governments not to stockpile vulnerabilities and not to be big buyers on this private market, but instead to give that information over to the company so the companies can fix it and make things better.

What else do you think it looks like if we get it right, Tarah?

Tarah: If we get it right, security researchers who are reporting vulnerabilities to companies would be appropriately compensated. That doesn't mean that a security researcher who reports a vulnerability that hadn't been found but is a small one should be getting a Porsche every single time.

It does mean that researchers who try to help should at the very least experience some gratitude. When you report to a company a vulnerability that is a company killer, fully critical, something that could take the entire system, the entire company, down, you should be receiving appropriate compensation.

Cindy: We need to align the incentives for doing the right thing with the incentives for doing the wrong thing is what I'm hearing you say.

Tarah: That is correct. We need to align those incentives.

Cindy: And that's just chilling, right? Because what if those security researchers instead sold that vulnerability to somebody who wants to undermine our elections? I can't imagine something where it's more important that it be secure and right and protected than our, you know, our basic right to vote and live in a democracy that works. When you set up a situation in which you're dealing with something that is that important to a functioning society, we shouldn't have to depend on the goodwill of the security researchers to tell, you know, the good guys about this and not tell the bad guys. We need to set the incentives up so it always pushes them in the direction of the good guys, and things like, you know, monitors for our health and the protection of our vote are where these incentives should be the strongest.

Tarah: Absolutely. That same mindset that lets you find vulnerabilities after the fact lets you see where they're being created. Unfortunately, first, there are not enough companies that do appropriate product security reviews early in the development process. Why? Because it's expensive. And two, there are not enough people who are good, qualified product security reviewers and developers. There just aren't enough of them, partially because companies don't welcome that a great deal of the time. Right at this moment, the cybersecurity field is exploding with people who want to be in cybersecurity. And yet at the most senior levels, there is simply no doubt that there is a massive lack of diversity in this field. It is very difficult for women, people of color, and queer people to see themselves at the top of companies when they don't see people like themselves at the top of companies, right. They don't see themselves succeeding in this field, even though I am here to tell you, comparatively, the wages are really good in cybersecurity.

So please, all of my friends out there, get into a training class, start taking computers apart, because this is a great field to be in if you like puzzles and you want to get paid well. But there's a massive lack of diversity, and to open those doors fully to the number of people that we need in this field, we have got to, got to, got to start thinking differently about what we think a cybersecurity expert looks like.

Cindy: There seems to be a lack of imagination about what a hacker looks like or should be like. Tarah, you don’t look like that image…. We really need to create better and a wider range of models of what somebody who cares about computer security looks like and sounds like.

Tarah: We do. What you're actually looking for is a sense of safety, a feeling of security in yourself that you hired a smart, educated person to tell you everything's going to be okay. That's this human frailty that gets introduced into cybersecurity again and again. It's hard to reassure somebody that you're an expert if you don't look like what they think an expert looks like, and that is the barrier for women and people of color in cybersecurity right now: they have to be trusted as experts. And it's just a problem to get through and to break through that barrier.

Cindy: This is a community project. To me, it's a piece of recognizing we are on other people’s computers all day long, and sometimes other people are on our computers. When somebody comes along and says, “Hey, I think you’ve got a security problem here,” the right thing to do is to thank them. Not to attack them, much less throw the criminal law at them.

Tarah: If you are a person inside a company, I want you to send an email to security@yourcompany.com, whatever that is. And I want you to find out what happens. Does it bounce back? Does it go to some guy that doesn't work for the company anymore? Does it, as I have previously discovered, go to the chief of staff of the CEO? So like, just take a look at where that goes, because that's how people are trying to talk to you and make your world better. And if nobody's checked that mailbox in a while, maybe open it up and, you know, stare at it with one eye closed behind one of those like eclipse viewers. Cause it's going to explode.

Cindy: I could totally do this all day, Tarah. It's so fascinating to talk to you and to see your perspective, because you really have been in multiple different places in this conversation. And I think with people like you we could get to a place where we fixed it if we just had more people listening to Tarah. So thank you so much for coming and talking to us and giving us the kind of straight story about how this is playing out on the ground.

Tarah: It's incredibly kind of you to invite me. Cindy and Danny, I just, I want to hang out with you and just, you know, drink inappropriate morning wine with you and yell about how everything's broken on the internet. I mean, it's a wonderful pastime at the same time it's a wonderful opportunity to, to make the world a little bit better, just recognizing that we are connected to each other, that these, the fixing of one thing in one place doesn't just impact that one thing, it impacts everybody. And it's wonderful to be with you and get a chance to make things a little better. 

Cindy: Well, that was just terrific. You know, Tarah's enthusiasm and her love for computer security and security research just spill out. It's infectious. And it really made me think that we can do a lot here to make things better. You know, what really struck me is that, you know, I have been an enemy of the Computer Fraud and Abuse Act for a very long time, but she really grounded it in how it terrifies and chills security research, and ultimately, you know, hurts our country and the world. But what she said was very specific, which is it's created a culture of fear among people who are trying to be ethical and law abiding. I mean, that really ought to stop us cold. And you know, the good news is that we got a little bit of relief out of the Supreme Court case Van Buren that we talked about, but there's just so much more to go.

Danny: I think she really managed to convey the stakes here and the human impact of these sort of vulnerabilities. It's not just about your credit rating going down because personal data was leaked. It's about how a child in a hospital could die if people don't address security vulnerabilities. 

Cindy: The other thing that I really liked was Tarah's focus on aligning financial incentives, kind of on both sides: the penalties for the companies who don't fix or talk about security vulnerabilities, and compensating the security researchers who are doing us all a favor by finding them. You know, what I like about that is you talk a lot about the four levers of change that Larry Lessig first identified: law, code, norms, and markets. And this one is very focused on markets, and how we can align financial incentives to make things better.

Danny: Yeah. And I think people get very nihilistic about solving the computer security problem. And I think that Tarah's citing an actual, real, pragmatic inspiration for how you might go about improving it was really positive. And that was the airline industry, where you have a community that comes together across businesses, across countries, and works internationally in this very transparent and methodical way to defend against problems that have a very similar model, right? The tiniest error can have huge consequences and people's lives are on the line. So everybody has to work together. I liked the fact that there's something in the real world that we can base our utopian vision on.

Cindy: The other thing I really appreciated is how Tarah makes it so clear that we're just in a networked world now.  We spend a lot of time connected with each other on other people's computers and the way that we fix it is recognizing that and aligning everything towards a networked world. Embracing the fact that we are all connected is the way forward.

Thank you to Tarah Wheeler for joining us and giving us so much insight into her world.

DANNY: If you like what you hear, follow us on your favorite podcast player. We’ve got lots more episodes in store with smart people who will tell you how we can fix the internet. 

Music for the show is by Nat Keefe and Reed Mathis of BeatMower. 

“How to Fix the Internet” is supported by The Alfred P. Sloan Foundation's Program in Public Understanding of Science and Technology. I’m Danny O’Brien.

CINDY: And I’m Cindy Cohn. Thank you so much for joining us today. Until next time.



rainey Reitman

Livestreamed Hearing Monday: EFF Urges Appeals Court to Allow Public Access to Documents in Patent Troll Litigation

Uniloc Improperly Hiding Documents

WASHINGTON, D.C.— On Monday, December 6, at 10 am ET, the Electronic Frontier Foundation (EFF), standing up for public access to courts and transparency in patent litigation, will urge a panel of judges to unseal documents that a notorious patent troll is trying to hide.

At a federal appeals court hearing that will be livestreamed, attorney Alexandra H. Moss, Executive Director at Public Interest Patent Law Institute, who is assisting EFF in the case, will argue that a judge’s order to unseal all documents and preserve public access in the case of Uniloc USA, Inc. v. Apple Inc. should be upheld. Uniloc is entitled to resolve its patent dispute in publicly-funded courts, Moss will argue, but it’s not entitled to do so secretly.

This is the second time Uniloc has appealed an order in this case requiring it to be more transparent about its patent litigation. As one of the world’s most active patent trolls, Uniloc files upwards of 170 lawsuits a year to make money on the vague and frivolous patents it holds. In this case, a Northern California district court consolidated five unrelated patent infringement lawsuits Uniloc filed against Apple. EFF intervened and filed friend of the court briefs in the case and advocated for public access after learning that Uniloc’s motions and exhibits were so heavily sealed and redacted that they were unreadable to anyone outside of the parties.

EFF has intervened on behalf of the public’s right to an open and accessible court system, where anyone can view and scrutinize the conduct of players in legal disputes. If the public has a right to access by default, Uniloc must prove compelling reasons for secrecy, and it has failed to do so.

EFF believes Uniloc v. Apple crystallizes the problem of excessive sealing in patent cases, and that the court must preserve the open and accessible court system.

WHO: EFF Volunteer Attorney Alexandra H. Moss

WHAT: Oral arguments in Uniloc USA, Inc. v. Apple Inc.

WHEN: Monday, Dec. 6 at 10 a.m. ET

WHERE: United States Court of Appeals for the Federal Circuit livestream:
https://www.youtube.com/channel/UC78NfBf28AQe3x7-SbbMC2A

For more on this case:
https://www.eff.org/cases/uniloc-v-apple

Contact: Aaron Mackey, Senior Staff Attorney, amackey@eff.org
Karen Gullo

InternetLab’s 2021 “Who Defends Your Data Brazil” Report Shows Improvement in Brazilian ISPs Privacy Practices, But Gaps Remain


Brazil’s biggest internet connection providers continue to make strides towards better protection of customer data and greater transparency about their privacy practices, according to InternetLab’s 2021 “Quem Defende Seus Dados?” (“Who Defends Your Data?”) report. Released today, the report is the sixth annual assessment of Brazilian providers’ adherence to best-practices criteria that look at whether they are doing their level best under the law to protect users when law enforcement requests their personal information, defend privacy rights in court and in their public policy positions, and publicly disclose information on user data collection, government requests for user data, and more.

InternetLab evaluated six providers in this edition, all of which hold at least 1 percent of the telephony market in Brazil, and looked at both broadband and mobile services. Brisanet, a leading independent provider, was evaluated for the first time, while Sky was dropped, as was Nextel, which was incorporated into Claro after being acquired by América Móvil, Claro’s parent company.

Telecom provider TIM, owned by Telecom Italia SpA, received the highest score this year, as it did last year. Its broadband and mobile services received full credit for meeting standards in four of the six categories and 75 percent credit for a fifth category. Claro Mobil and NET, both part of América Móvil, were a close second, with full stars in four categories and a quarter star in a fifth, while Vivo achieved full stars in three categories, a three-quarters star in a fourth, and a half star in a fifth. Algar made slight improvements over last year’s scores, with one full star, a three-quarters star, and a half star, while Brisanet Mobil and Brisanet Broadband came in last, earning a half star in just one category.

As highlighted by Bárbara Simão, InternetLab’s Head of Research, the report shows Internet Service Providers (ISPs) improving disclosure of relevant information on how they handle user data. “Some companies, such as TIM, Vivo, and Algar, started to publish specific protocols with rules for handing over data to public authorities,” Simão said. However, there is still plenty of room for providers to strengthen privacy-protective best practices. One key issue relates to companies’ public responses to security breaches. According to Simão, “some major data breaches related to ISPs were reported last year, and the companies involved failed to respond adequately.”

All but two companies received top scores for providing clear and complete information on privacy policies, including what data they collect and why, how long it’s stored, and who has access to it. Improvements in this category are in part attributed to Brazil’s new data protection law, a regulation inspired by the EU General Data Protection Regulation (GDPR) that took effect last year. Oi and Vivo, which both received three-quarters stars last year, improved their scores, while Algar slipped from a full star last year to a three-quarters star this year. Brisanet Mobil and Brisanet Broadband earned a half star here, the only category in which the company scored.

All but a few companies performed well in taking a public stance supporting privacy and defending user privacy in court. Last year InternetLab evaluated companies’ activities defending privacy against unprecedented government pressure to access telecom data during the COVID-19 pandemic. This year the organization revised this parameter and looked at whether companies took a public stance, in consultations and debates or in the media, in favor of practices promoting the security of their users' data and providing concrete information on strategies to mitigate risks and prevent security breaches.

The revised parameter reflects concerns about security and data breaches involving TIM, Claro, and other leading providers that occurred in 2020 and 2021, in which over 100 million cell phone numbers and personal information were exposed. Investigations into the incidents were opened by the National Consumer Secretariat (Senacon) and by Procon-SP. The companies involved provided only generic explanations and little in the way of information about safeguards to prevent future security break-ins. Regulatory agencies, in response, implemented initiatives aimed at combating cybersecurity threats, including the creation of the Regulation of Cybersecurity Applied to the Telecommunications Sector by Anatel (Brazil’s telecom regulator) and the technical note published by the National Data Protection Authority (ANPD), with guidelines for providers in the event of a security breach.  As such, this year’s report evaluated companies’ commitments to securing the personal data of their users and publicly voicing support for these initiatives.

Unfortunately, companies continue to fall well short of best practices for informing users about requests for their data. This year, as in 2020, not a single company received a star for this category. No Brazilian law compels companies to notify targets of surveillance, but they are not prevented from notifying users when secrecy is not legally or judicially required. Companies also lag when it comes to their transparency reports and data protection impact assessments. TIM, NET, and Claro received partial stars in this category, while the rest received no stars.

Main Results

Overall, this year's report evaluates providers in six criteria: data protection policies, law enforcement guidelines, defending users in the judiciary, defending privacy in policy debates or the media, transparency reports and data protection impact assessment, and user notification. The full report is available in Portuguese. These are the main results:

Category 1 Results: Data Protection Policies

While most providers are now telling users about what data they collect about them, how long the information is kept, and who they share it with, some are failing to be fully transparent in responding to users’ requests for information about their own personal data. InternetLab researchers tested company practices by requesting their personal data. Oi, TIM, Vivo, and Brisanet complied, but only disclosed subscriber information, even though the Brazilian Data Protection Law ensures users’ right to access all the personal data companies collect about them.

TIM, this year and last, took additional steps to certify the requestor's identity before disclosing the data, a good practice that deserves to be highlighted. Algar had done the same in 2020, but this year failed even to respond to requests for access to data.

InternetLab added a new standard that companies must meet to get full credit in this category: provide information about the circumstances under which they will transfer users’ personal data to other countries. Law enforcement around the world is increasingly seeking data across borders in criminal investigations, so it’s important for companies to provide clear and detailed information about how they handle user data requests from foreign police.

The report shows that, except for TIM and Algar, companies are less than fully transparent, failing to provide specific information about where data is stored or what steps are required to transfer it to other countries.

For example, Claro’s privacy policy says the company hires cloud storage services, which “may take place outside the national territory.” However, there is no further detail about which international entities receive such data.  Oi’s policy has generic language saying it may transfer users’ personal data abroad for cloud storage or, if needed, to provide a service.  Vivo’s policy has limited information, saying, “as part of the Telefónica Group, (it) may, in certain circumstances and when necessary, share personal data with other companies within the Group. In addition, your data may be shared with partners and suppliers based in other countries, always in compliance with applicable law and in accordance with contractual clauses.”

TIM received full credit for this criterion, disclosing that the main third-party servers that store personal data under TIM’s control are in Brazil, the EEA (European Economic Area), and California (USA). Algar also received a full score for explaining the legal criteria applied to international data transfers. Brisanet’s policies do not comply with InternetLab’s guidelines.

Category 2 Results: Law Enforcement Guidelines

To earn stars in this category, companies must have clear guidance for law enforcement about accessing user data and follow the most privacy protective interpretations of the law when personal data are requested by law enforcement agents.

Claro/NET, TIM, and Vivo received full stars after receiving only partial stars last year. Vivo and TIM for the first time received credit for publishing a specific document on how they respond to government data requests. Although both documents could provide more information on the procedures adopted and break down details by different types of communications data, they certainly represent a good start.

Claro is more transparent than last year, telling users that it discloses subscriber data to authorities, which it identifies, as well as identifying which crimes justify the disclosure of subscriber data without a warrant. It also provides information on the circumstances in which it provides geolocation data and promises to provide authorities with connection records only by court order. However, it does not publish a specific document with information about procedures and rules followed to give user data to authorities, another criterion for Category 2.

Algar had the most dramatic improvement in this category. The company went from receiving no star last year to a full star this year. Algar’s first published law enforcement guidelines are the only ones providing more detailed information broken down by the type of data requested, clearly laying out the details and commitments that InternetLab’s report seeks in Category 2.

Category 3 Results: Defending Users in Courts

To earn stars in this category, companies should challenge privacy-abusive legislation and abusive administrative or judicial requests for user data.

Claro, NET, and TIM achieved full stars for complying with both parameters after receiving half stars last year. Oi received a full star this year and last. Algar and Brisanet received no score in this category.

Claro, Oi, TIM, and Vivo filed a lawsuit challenging a state law compelling companies to identify the caller number for every telephone call (preventing blocked numbers, for example). Vivo, along with other telco companies, challenged modifications to the General Regulation on Consumer Rights of Telecommunications Services that would oblige companies to provide, to any recipient of telephone calls, personal data of the person who made the call.

Meanwhile, Oi has challenged a judicial order authorizing a police request to provide passwords granting access to all telephone-related stored data for 6 months, including subscriber information, call and SMS records, and location data. The company challenged the general nature of the order and requested the police to specify which users and devices were targeted. It has also requested information about which criminal investigation the order was related to.

Claro denied a request to hand subscriber data directly to the Office of the Comptroller General (Controladoria Geral da União) without prior judicial authorization, stating that doing so would violate constitutional and legal safeguards. Vivo has also denied police and prosecutors’ requests for users’ call records and location data made without prior judicial orders.

Category 4 Results: Public Stance in Favor of Privacy

Claro, NET, and TIM received full stars for taking a public stance in support of privacy, while Oi, Vivo, and Algar received half stars. Highlights included a new document entitled "Information Security and Cybersecurity Policy," which, among other things, provides a specific communication channel for security cases. InternetLab’s report congratulates the company for making available a specific document that provides detailed information about security practices and the means to exercise rights.

But the news wasn’t all good. Even though Claro, Oi, and TIM received stars for public statements in regard to security and cyber risk mitigation, InternetLab points out that they all failed to provide robust answers to accusations of data breaches (Claro in 2020, and Oi and TIM in 2021).

The ISPs provided only “generic answers,” InternetLab reports.  “No robust explanations about the case were given, nor were any standards or techniques concretely advocated that could address the allegations [of data breach],” the organization said. Vivo also faced data breach accusations in 2020, receiving notification from consumer and telecom authorities. InternetLab said the company sent public responses to authorities, claiming to have evaluated its internal systems and found no security incidents. The responses didn’t mention any improvements in Vivo’s security measures.

Category 5 Results: Transparency Reports and Data Protection Impact Assessments

Brazil’s internet and telecommunications providers are not where they should be when it comes to publishing transparency reports, a best practice that has grown in the tech industry. Algar, Oi, and Brisanet received no stars, while Claro and NET received a quarter star—they disclose aggregate data about users’ requests for their own data, but no statistical data about government data requests.

TIM received ¾ star; the company does not publish a transparency report, but does publish a Sustainability Report with general information about government data requests and total figures for last year’s requests for telephone interception, subscriber information, and “telephone extracts.” Vivo has improved its mark since last year. Telefónica Brazil, of which Vivo is part, published for the first time its comprehensive transparency report in Portuguese.

As in last year’s report, none of the featured companies published a data protection impact assessment (DPIA). The Brazilian data protection law has rules about DPIA, but the Data Protection Authority still must regulate when this assessment is mandatory.

Category 6 Results: User Notification

No company informs users when government authorities seek their data, so no stars were awarded. This is unchanged from last year.

Conclusion

Since its first edition in 2016, Brazil’s reports have shown solid progress, fostering competition among ISPs toward stronger standards for transparency and users’ privacy. This year’s report highlights advances in the disclosure of law enforcement guidelines and Brazilian providers’ continued commitment to defending their users in court. It also shows that there is room for improvement in user notification, data protection impact assessments, and even in transparency reports—a best practice already consolidated in other countries and among other players, such as tech companies. InternetLab’s work is part of a series of reports across Latin America and Spain adapted from EFF’s Who Has Your Back? report, which for nearly a decade has evaluated the practices of major global tech companies.

Karen Gullo

Facebook’s Secret “Dangerous Organizations and Individuals” List Creates Problems for the Company—and Its Users

1 week ago

Along with the trove of "Facebook Papers" recently leaked to press outlets was a document that Facebook has, until now, kept intentionally secret: its list of "Dangerous Organizations and Individuals." This list comprises supposed terrorist groups, hate groups, criminal groups, and individuals associated with each, and is used to filter and remove speech on the platform. We're glad to have transparency into the document now, but as The Intercept recently reported, and as Facebook likely expected, seeing the list raises alarm bells for free speech activists and people around the world who are put into difficult, if not impossible, positions when it comes to discussing individuals or organizations that may play major roles in their government, for better or for worse. 

While the list included many of the usual suspects, it also contained a number of charities and hospitals, as well as several musical groups, some of whom were likely surprised to find themselves lumped together with state-designated terrorist organizations. The leaked document demonstrated the opaque and seemingly arbitrary nature of Facebook’s rulemaking.

Tricky business

Let’s begin with an example: In August, as the Taliban gained control over Afghanistan and declared its intent to re-establish the Islamic Emirate of Afghanistan, the role of the internet—and centralized social media platforms in particular—became an intense focus of the media. Facebook drew particular scrutiny, both for the safety features it offered to Afghans and for the company’s strong stance toward the Taliban.

The Taliban has long been listed as a terrorist organization by various entities, including the United Nations and the U.S. government, and has additionally been subject since the 1990s to draconian sanctions by the UN Security Council, the U.S., and other countries that are designed to effectively prevent any economic or other service-related interactions with the group.

As a result of these strict sanctions, a number of internet companies, including Facebook, had placed restrictions on the Taliban’s use of their platforms even prior to the group’s takeover. But as the group took power, Facebook reportedly put new resources into ensuring that the Taliban couldn’t use their services. By contrast, Twitter continued to allow the group to maintain a presence, although they did later remove the Pashto and Dari accounts of Taliban spokesperson Zabihullah Mujahid, leaving only his English account intact.

The conflicting decisions taken by these and other companies, as well as their often-confused messaging around legal obligations vis-a-vis the Taliban and other extremist groups, are worth picking apart, particularly in light of the growing use of terrorist lists by states as a means of silencing and exclusion. Not one but several groups listed as terrorists by the United States occupy a significant role in their countries’ governments.

As The Lawfare Podcast’s Quinta Jurecic put it:  “What do you do when an insurgent group you’ve blocked on your platform is now de facto running a country?”

Legal obligations and privatized provisions

First, it’s important to clarify where companies’ legal obligations lie. There are three potential legal issues that come into play, and are, unfortunately, often conflated by company spokespeople.

The first is what is commonly referred to as “material support law,” which prohibits U.S. persons and entities from providing material support (that is, financial or in-kind assistance) to groups on the State Department’s list of foreign terrorist organizations (FTO). As we’ve written previously, “as far as is publicly known, the U.S. government has not taken the position that allowing a designated foreign terrorist organization to use a free and freely available online platform is tantamount to ‘providing material support’ for such an organization, as is prohibited under the patchwork of U.S. anti-terrorism laws” and U.S. courts have consistently rejected efforts to impose civil liability on online platforms when terrorist groups use them to communicate. More importantly, the Supreme Court has limited these restrictions to concerted “acts done for the benefit of or at the command of another.”

This is important because, as various documents leaked from inside Facebook have repeatedly revealed, the company appears to use the FTO list as part of the basis for its own policy on what constitutes a “dangerous organization” (though, notably, its list goes far beyond that of the U.S. government). Furthermore, the company has rules that restrict people who are not members of designated groups from praising or speaking positively about those entities in any way—which, in practice, has resulted in the removal of large swathes of expression, including art, counterspeech, and documentation of human rights violations. In other words, the company simply isn’t very good at moderating such a complex topic.

The second legal issue is related to the more complicated matter of sanctions. U.S. sanctions are issued by the Department of the Treasury’s Office of Foreign Assets Control (OFAC) and have for many years had an impact on tech (we’ve written about that previously in the context of country-level sanctions on, for instance, Syria).

Facebook has stated explicitly that it removes groups—and praise of those groups—which are subject to U.S. sanctions, and that it relies on sanctions policy to “proactively take down anything that we can that might be dangerous or is related to the Taliban in general.”

Specifically, the sanctions policy that Facebook relies upon stems from Executive Order 13224, issued by then-President George W. Bush in September 2001. The Order reads:

“In general terms, the Order provides a means by which to disrupt the financial support network for terrorists and terrorist organizations by authorizing the U.S. government to designate and block the assets of foreign individuals and entities that commit, or pose a significant risk of committing, acts of terrorism. In addition, because of the pervasiveness and expansiveness of the financial foundations of foreign terrorists, the Order authorizes the U.S. government to block the assets of individuals and entities that provide support, services, or assistance to, or otherwise associate with, terrorists and terrorist organizations designated under the Order, as well as their subsidiaries, front organizations, agents, and associates.”

The Executive Order is linked to a corresponding list of “specially designated” nationals (SDNs)—groups and individuals—who are subject to the sanctions.

But whether this policy applies to social media platforms hosting speech remains an open question about which experts disagree. On the aforementioned Lawfare Podcast, Scott R. Anderson, a senior editor at Lawfare and a fellow at the Brookings Institution, explained that companies are facing a potential legal risk in providing in-kind support (that is, a platform for their speech) to SDNs. But while hosting actual SDNs may be a risky endeavor, Faiza Patel and Mary Pat Dwyer at the Brennan Center for Justice recently argued that, despite repeated claims by Facebook and Instagram, they are not in fact required to remove praise or positive commentary about groups that are listed as SDNs or FTOs.

US courts have also rejected civil claims brought by victims of terrorist acts and their families against social media platforms, where those claims were based on the fact that terrorists or terrorist organizations used the platforms to organize and/or spread their messages. Although strong constitutional arguments exist, these cases are typically decided on statutory grounds. In some cases, the claims are rejected because the social media platforms’ actions were not a direct enough cause of the harm, as required by the Anti-Terrorism Act, the law that creates the civil claims. In other cases, courts have found the claims barred by Section 230, the US intermediary immunity law.

An especially tricky community standard

Facebook’s Dangerous Individuals and Organizations community standard has proven to be one of its most problematic. The standard has been at issue in six of the 21 cases the Oversight Board has taken, and the Board has repeatedly criticized its vagueness. Facebook responded by clarifying the meaning of some of the terms, but left some ambiguity and also increased its unguided discretion in some cases. In one matter, Facebook had removed a post that shared news content from Al Jazeera about a threat of violence from the Izz al-Din al-Qassam Brigades, the military wing of Hamas, because the DIO policy stated that sharing official communications of Facebook-designated dangerous organizations was a form of substantive support—failing to apply its own exception for news reporting and neutral discussions. Facebook reversed the decision only after the Oversight Board selected the case, as it did in two other similar cases. In another case, Facebook apparently misplaced important policy guidance in implementing the DIO policy for three years.

The real-world harms of Facebook’s policy

While Facebook—and indeed, many Western counter-terrorism professionals—seem to view hosting the speech of terrorist organizations as the primary harm, there are real and significant harms in enacting sweeping policies that remove such a broad range of expression related to groups that, for better or worse, play a role in governance. The way that Facebook implements its policies—using automation to remove whatever it deems to be terrorist or extremist content with little to no human oversight—has resulted in overly broad takedowns of all sorts of legitimate speech. Despite this, Mark Zuckerberg has repeatedly stated a belief that automation (not nuanced human review) is the way forward.

The combination of ever-increasing automation and Facebook’s vague and opaque rules (none of which cite any legal requirements) makes it impossible for users in affected countries to understand what they can and cannot say.

As such, a Lebanese citizen must carefully avoid coming across as supporting Hezbollah, one of many political parties in their country that have historically engaged in violence against civilians. An Afghan seeking essential services from their government may simply not be able to find them online. And the footage of violence committed by extremist groups diligently recorded by a Syrian citizen journalist may never see the light of day, as it will likely be blocked by an upload filter. 

While companies are, as always, well within their rights to create rules that bar groups that they find undesirable—be they U.S.-designated terrorist organizations or domestic white supremacist groups—the lack of transparency behind these rules serves absolutely no one.

We understand that Facebook feels bound by perceived legal obligations. The Department of Treasury can and should clarify those obligations just as they did under the Obama administration. But Facebook also has a responsibility to be transparent to its users and let them know, in clear and unambiguous terms, exactly what they can and cannot discuss on its platforms.

Jillian C. York

The Internet Needs Fair Rules of the Road – and Competitive Drivers

1 week ago

In the past few weeks, the Biden Administration has finally moved forward with nominations to the Federal Trade Commission and the Federal Communications Commission. One of those nominees, Gigi Sohn (who, fair disclosure, has been an EFF board member), is testifying right now, and we expect a vote on all of the nominees soon.

As the agencies move forward, fully staffed at last, we hope they will both recognize the role they can play in promoting net neutrality – meaning, in preventing ISPs from taking advantage of their effective gatekeeping roles to favor some services over others. Most people think of net neutrality as the province of the FCC, at least at the federal level. But that view loses sight of a prior problem: lack of competition in the ISP space. U.S. residents pay more than most of our peers around the world for internet access—and get less for our money. One reason for that is that roughly half of us have no choice when it comes to broadband access. Our providers have no incentive to do better. And that, in turn, is one reason we need net neutrality rules.

If we had a competitive broadband market, we might not need net neutrality rules, or at least not so many. But we don’t. If we had good net neutrality rules, the lack of competition might be less dangerous. Right now, in most places, we have neither. Instead, a few major companies—AT&T, Verizon, Comcast, and the like—have enormous power over our access to essential services, power they can use, in turn, to manipulate our online experience by promoting or prioritizing some services over others.

Competition Incentivizes Innovation and Allows Consumers to Choose What They Value

As it currently stands, the large ISPs have no incentive to make their services better. As near-monopolies, they know that between the option of no internet and bad internet, customers will pick expensive, slow internet. Without other companies offering better services or better terms, there is no reason for these companies to shoulder the costs of improving either one. Why take the initial hit to your profits to build something new when you don’t have to do it to get new customers?

Furthermore, the guaranteed income of a near-monopoly leads to exorbitant profits that are used not to improve service but rather to buy up content providers, such as AT&T’s acquisition of Warner Brothers and HBO, and Comcast’s stake in Universal and NBC. Once they own those companies, they have every incentive to violate net neutrality principles to favor their new purchases, as AT&T did with HBO Max. AT&T charged video streaming services an extra fee, but when that service was HBO Max, it essentially cost nothing, since AT&T was, in effect, paying itself the fee. So while competitors posted a loss, AT&T did not.

In theory, if there were actual competition among internet service providers, you, as a consumer, could choose a provider that committed to net neutrality as one of its selling points.

In San Francisco, for example, some residents can choose between multiple local ISPs. One of those providers, Sonic, has adopted and promotes policies that are both privacy protective and net neutral. Sonic also invests its capital in fiber infrastructure, not content. That gives at least some Bay Area residents a choice few others can imagine. What works best for you should not be determined by where you live and therefore which of the major ISPs is available to you.

Net Neutrality Promotes Competition Online

But there is more to the net neutrality-competition nexus. Net neutrality is the principle that your ISP can’t block, slow down, or charge extra for access to a website or service except as needed for reasonable network management. Consumers alone get to decide what gets their dollars and their eyeballs.

Those same rules of the road, which become necessary when you lose competition for broadband, support competition in other internet markets. Consumers are less likely to find themselves locked into certain services because they are offered “free” with their internet access, or because their ISP chooses to prioritize those services, which means competitive alternatives can emerge.

Both. Both Is Good.

Of course, neither piece alone is as good as both. Ideally, ISPs would compete to provide us with better service—spending money on better infrastructure, offering prices that actually match the service they are selling, offering better protections for their users, and so on—in order to entice customers. That would be a huge improvement on the status quo.

And ideally, we’d have net neutrality protections so that what you see online is separated entirely from the company that makes it possible for you to see any content online. With net neutrality protections, companies have less of an incentive to buy content and apps to manipulate you into watching and using them, since they would have to treat competitors the same. That also frees up money for better internet access and stems the tide of concentration we’re seeing across the board. For those who live in areas underserved by ISPs because of the cost of building out there, less choice in ISP won’t result in less choice online.

In the meantime, if we can’t have one, we must have the other. Unfortunately, the situation right now is that most Americans have neither. That situation requires a federal response. It requires a fully-staffed FCC and FTC.

Katharine Trendacosta

A One-Two Punch for Internet Freedom 👊

1 week 1 day ago

Power Up Your Donation Week is here! Starting on #GivingTuesday, your contribution to EFF will have double the impact on digital privacy, security, and free speech rights for everyone.

Power Up

Donate today and get an automatic 2x match!

A group of passionate EFF supporters has created a special fund and issued a challenge: donate to EFF by December 7th and they’ll automatically match contributions up to a total of $308,500! This means every dollar you give becomes two dollars for EFF.

The Power Up Your Donation Week matching drive doubles the impact of public support for internet freedom. EFF members allow our team of attorneys, activists, and technologists to lead user-focused initiatives to: expand encryption across the web; protect privacy on our devices; develop tools like EFF’s Privacy Badger and Certbot; fight policies that enable censorship; end privacy-harming police practices; advocate for a future-proof internet that won’t leave people behind; and so much more.

With help from members around the world, we are taking on the big fights that no one else can. Give today and power up the movement for a better digital future.

Pack Twice the Punch for Internet Freedom

If you're already an EFF member, you can help by inviting your friends and colleagues to get involved! Here’s some sample language you can share:

It’s more important than ever to fight for technology users' rights. Join me in supporting @EFF this week, and your donation will pack double the punch with an automatic 2X match. https://eff.org/power-up

Twitter | Facebook | Email

Tech users everywhere rely on your support and EFF’s battle-hardened skill to defend civil liberties and human rights online. Give today and ensure that there is a vocal, independent force for tech users when they need it most.

Power Up

Double your impact (for free!)

_____________

EFF is a member-supported U.S. 501(c)(3) organization with a top rating from the nonprofit watchdog Charity Navigator. Donations are tax-deductible as allowed by law. Make membership even easier with an automatic monthly or annual donation!

Aaron Jue

Podcast Episode: Who Should Control Online Speech?

1 week 1 day ago
Episode 103 of EFF’s How to Fix the Internet

The bots that try to moderate speech online are doing a terrible job, and the humans in charge of the biggest tech companies aren’t doing any better. The internet’s promise was to be a space where everyone could have their say. But today, just a few platforms decide what billions of people see and say online.

Join EFF’s Cindy Cohn and Danny O’Brien as they talk to Stanford’s Daphne Keller about why the current approach to content moderation is failing, and how a better online conversation is possible. 

Click below to listen to the episode now, or choose your podcast player:

Listen on the embedded player at https://player.simplecast.com/47068a45-5ee2-406d-976e-c02cf50c9080. Privacy info: this embed will serve content from simplecast.com.


More than ever before, societies and governments are requiring a small handful of companies, including Google, Facebook, and Twitter, to control the speech that they host online. But that comes at a great cost in both directions: marginalized communities are too often silenced, and powerful voices pushing misinformation are too often amplified.

Keller talks with us about some ideas on how to get us out of this trap and back to a more distributed internet, where communities and people decide what kind of content moderation we should see—rather than tech billionaires who track us for profit or top-down dictates from governments. 

When the same image appears in a terrorist recruitment context, but also appears in counter speech, the machines can't tell the difference.

You can also find the MP3 of this episode on the Internet Archive.

In this episode you’ll learn about: 

  • Why giant platforms do a poor job of moderating content and likely always will
  • What competitive compatibility (ComCom) is, and how it’s a vital part of the solution to our content moderation puzzle, but also requires us to solve some issues too
  • Why machine learning algorithms won’t be able to figure out who or what a “terrorist” is, and who it’s likely to catch instead
  • What is the debate over “amplification” of speech, and is it any different than our debate over speech itself? 
  • Why international voices need to be included in discussion about content moderation—and the problems that occur when they’re not
  • How we could shift towards “bottom-up” content moderation rather than a concentration of power 

Daphne Keller directs the Program on Platform Regulation at Stanford’s Cyber Policy Center. She’s a former Associate General Counsel at Google, where she worked on groundbreaking litigation and legislation around internet platform liability. You can find her on twitter @daphnehk. Keller’s most recent paper is “Amplification and its Discontents,” which talks about the consequences of governments getting into the business of regulating online speech, and the algorithms that spread them. 

If you have any feedback on this episode, please email podcast@eff.org.

Below, you’ll find legal resources – including links to important cases, books, and briefs discussed in the podcast – as well a full transcript of the audio.

Resources 

  • Content Moderation
  • AI/Algorithms
  • Takedown and Must-Carry Laws
  • Adversarial Interoperability

Transcript of Episode 103: Putting People in Control of Online Speech

Daphne: Even if you try to deploy automated systems to figure out which speech is allowed and disallowed under that law, bots and automation and AI and other robot magic, they fail in big ways consistently.

Cindy: That’s Daphne Keller, and she’s our guest today. Daphne works out of the Stanford Center for Internet and Society and is one of the best thinkers about the complexities of today’s social media landscape and the consequences of these corporate decisions.

Danny: Welcome to How to Fix the Internet with the Electronic Frontier Foundation, the podcast that explores some of the biggest problems we face online right now: problems whose source and solution is often buried in the obscure twists of technological development, societal change, and the subtle details of internet law.

Cindy: Hi everyone I'm Cindy Cohn and I'm the Executive Director of the Electronic Frontier Foundation. 

Danny: And I’m Danny O’Brien, special advisor to the Electronic Frontier Foundation.

Cindy: I'm so excited to talk to Daphne Keller because she's worked for many years as a lawyer defending online speech. She knows all about how platforms like Facebook, TikTok, and Twitter crack down on controversial discussions and how they so often get it wrong. 

Hi Daphne, thank you for coming. 

Daphne: First, thank you so much for having me here. I am super excited. 

Cindy: So tell me: how did the internet become a place where just a few platforms get to decide what billions of people get to see and not see, and why do they do it so badly?

Daphne: If you rewind twenty, twenty-five years, you have an internet of widely distributed nodes of speech. There wasn't a point of centralized control, and many people saw that as a very good thing. At the same time the internet was used by a relatively privileged slice of society, and so what we've seen change since then, first, is that more and more of society has moved online. So that's one big shift, is the world moved online—the world and all its problems. The other big shift is really consolidation of power and control on the internet. Even 15 years ago much more of what was happening online was on individual blogs distributed on webpages, and now so much of our communication, where we go to learn things, is controlled by a pretty small handful of companies, including my former employer Google, and Facebook and Twitter. And that's a huge shift, particularly since we as a society are asking those companies to control speech more and more, and maybe not grappling with what the consequences will be of our asking them to do that.

Danny: Our model of how content moderation should work, where you have people looking at the comments that somebody has made and then picking and choosing, was really developed in an era where you assumed that the person making the decision was a little bit closer to you—that it was the person running your neighborhood discussion forum or editing comments on their own blog.

Daphne: The sheer scale of moderation on a Facebook for example means that they have to adopt the most reductive, non-nuanced rules they can in order to communicate them to a distributed global workforce. And that distributed global workforce inevitably is going to interpret things differently and have inconsistent outcomes. And then having the central decision-maker sitting in Palo Alto or Mountain View in the US subject to a lot of pressure from say, whoever sits in the White House, or from advertisers, means that there's both a huge room for error in content moderation, and inevitably policies will be adopted that 50% of the population thinks are the wrong policies. 

Danny: So when we see the heads of platforms, like Mark Zuckerberg, go before the American Congress and answer questions from senators, one of the things that I hear them say again and again is that, we have algorithms that sort through our feeds. We're developing AI that can identify nuances in human communication. Why does it appear that they have failed so badly to create a bot that reads every post, picks and chooses which are the bad ones, and throws them off?

Daphne: Of course the starting point is that we don't agree on what the good ones are and what the bad ones are. But even if we could agree, even if you're talking about a bot that's supposed to enforce a speech law, a speech law which is something democratically enacted, and presumably has the most consensus behind it and the crispest definition, they fail in big ways consistently. You know, they set out to take down ISIS and instead they take down the Syrian Archive, which exists to document war crimes for a future prosecution. The machines make mistakes a lot, and those mistakes are not evenly distributed. We have an increasing body of research showing disparate impact, for example, on speakers of African-American English, and so there are just a number of errors that hit not just on free expression values but also on equality values. There's a whole bunch of societal concerns that are impacted when we try to have private companies deploy machines to police our speech.

Danny: What kind of errors do we see machine learning making particularly in the example of like tackling terrorist content? 

Daphne: So I think the answers are slightly different depending which technologies we're talking about. A lot of the technologies that get deployed to detect things like terrorist content are really about duplicate detection. And the problems with those systems are that they can't take context into account. So when the same image appears in a terrorist recruitment context but also appears in counter speech the machines can't tell the difference.

Danny: And when you say counter-speech, you are referring to the many ways that people speak out against hate speech.

Daphne: They're not good at understanding things like hate speech because the ways in which humans are terrible to each other using language evolves so rapidly and so are the ways that people try to respond to that, and undermine it and reclaim terminology. I would also add most of the companies that we're talking about are in the business of selling things like targeted advertisements and so they very much want to promote a narrative that they have technology that can understand content, that can understand what you want, that can understand what this video is and how it matches with this advertisement and so forth. 

Cindy: I think you're getting at one of the underlying problems we have which is the lack of transparency by these companies and the lack of due process when they do the take-down, seem to me to be pretty major pieces of why the companies not only get it wrong but then double down on getting it wrong. There have also been proposals to put in strict rules in places like Europe so that if a platform takes something down, they have to be transparent and offer the user an opportunity to appeal. Let’s talk about that piece. 

Daphne: So those are all great developments, but I'm a contrarian. So now that I've got what I've been asking for for years, I have problems with it. My biggest problem, really, has to do with competition. Because I think the kinds of more cumbersome processes that we absolutely should ask for from the biggest platforms can themselves become a huge competitive advantage for the incumbents, if they are things that the incumbents can afford to do and smaller platforms can't. And so the question of who should get what obligations is a really hard one, and I don't think I have the answer. Like, I think you need some economists thinking about it, talking to content moderation experts. But I think if we invest too hard in saying every platform has to have the maximum possible due process and the best possible transparency, we actually run into a conflict with competition goals, and we need to think harder about how to navigate those two things.

Cindy: Oh I think that's a tremendously important point. It's always a balancing thing especially around regulation of online activities, because we want to protect the open source folks and the people who are just getting started or somebody who has a new idea. At the same time, with great power comes great responsibility, and we want to make sure that the big guys are really doing the right thing, and we also really do want the little guys to do the right thing too. I don't want to let them entirely off the hook but finding that scale is going to be tremendously important.  

Danny: One of the concerns that is expressed is less about the particular content of speech, and more about how false speech or hateful speech tends to spread more quickly than truthful or calming speech. So you see a bunch of laws or a bunch of technical proposals around the world trying to mess around with that aspect. To give something specific: there's been pressure on group chats like WhatsApp in India and Brazil and other countries to limit how easy it is to forward messages, or to have some way for the government to see messages that are being forwarded a great deal. Is that the kind of regulatory tweak you're happy with, or is that going too far?

Daphne: Well I think there may be two things to distinguish here: one is when WhatsApp limits how many people you can share a message with or add to a group. They don't know what the message is because it is encrypted, so they're imposing a purely quantitative limit on how widely people can share things. What we see more and more in the US discussion is a focus on telling platforms that they should look at what the content is and then change what they recommend or what they prioritize in a newsfeed based on what the person is saying. For example, there's been a lot of discussion in the past couple of years about whether YouTube's recommendation algorithm is radicalizing. You know, if you search for vegetarian recipes will it push you to vegan recipes, or much more sinister versions of that problem. I think it's extremely productive for platforms themselves to look at that question, to say, hey wait, what is our amplification algorithm doing? Are there things we want to tweak so that we are not constantly rewarding our users' worst instincts? What I see that troubles me, and that I wrote a paper on recently called Amplification and its Discontents, is this growing idea that this is also a good thing for governments to do. That we can have the law say, hey platforms, amplify this, and don't amplify that. This is an appealing idea to a lot of people because they think maybe platforms aren't responsible for what their users say, but they are responsible for what they themselves choose to amplify with an algorithm.

All the problems that we see with content moderation are the exact same problems we would see if we applied the same obligations to what they amplify. The point isn't you can never regulate any of these things, we do in fact regulate those things. US law says if platforms see child sexual abuse material for example they have to take it down. We have a notice and take down system for a copyright. It's not that we live in a world where laws never can have platforms take things down, but those laws run into this very known set of problems about over removal, disparate impact, invasion of privacy and so forth. And you get those exact same problems with amplification laws.

Danny: We’ve spent some time talking about the problems with moderation, competition, and we know there are legal and regulatory options around what goes on social media that are being applied now and figured out for the future. Daphne, can we move on to how it’s being regulated now? 

Daphne: Right now we are seeing, we're going from zero government guidelines on how any of this happens to government guidelines so detailed that they take 25 pages to read and understand, and plus there will be additional regulatory guidance later. I think we may come to regret that, going from having zero experience with trying to set these rules to making up what sounds right in the abstract based on the little that we know now, with inadequate transparency and inadequate basis to really make these judgment calls. I think we're likely to make a lot of mistakes but put them in laws that are really hard to change.

Cindy: On the other hand, you don't want to stand for no change, because the current situation isn't all that great either. This is a place where perhaps we need a balance between the way the Europeans think about things, which is often more highly regulatory, and the American let-the-companies-do-what-they-want strategy. Like, we kind of need to chart a middle path.

Danny: Yeah, and I think this raises another issue, which is that of course every country is struggling with this problem, which means that every country is thinking of passing rules about what should happen to speech. But it's the nature of the internet, and it's one of its advantages, or at least it should be, that everyone can talk to one another. What happens when speech in one country is being listened to in another, with two different jurisdictional rules? Is that a resolvable problem?

Daphne: So there are a couple of versions of that problem. The one that we've had for years is what if I say something that's legal to say in the United States but illegal to say in Canada or Austria or Brazil? And so we've had a trickle of cases, and more recently some more important ones, with courts trying to answer that question and mostly saying, yeah I do have the power to order global take-downs, but don't worry, I'll only do it when it's really appropriate to do that. And I think we don't have a good answer. We have some bad answers coming out of those cases, like hell yeah, I can take down whatever I want around the world, but part of the reason we don't have a good answer is because this isn't something courts should be resolving. The newer thing that's coming, it's like kind of mind blowing you guys, which is we're going to have situations where one country says you must take this down and the other country says you cannot take that down, you'll be breaking the law if you do. 

Danny: Oh...and I think it's kind of counter intuitive sometimes to see who is making those claims. So for instance I remember there being a huge furor in the United States about when Donald Trump was taken off Twitter by Twitter, and in Europe it was fascinating, because most of the politicians there who were quite critical of Donald Trump were all expressing some concern that a big tech company could silence a politician, even though it was a politician that they opposed. And I think the traditional idea of Europe is that they would not want the kind of content that Donald Trump emits on something like Twitter.

Cindy: I think this is one of the areas where it's not just national; the kind of global split that's happening in our society plays out in some really funny ways, because there are, as you said, these laws we call must-carry laws. There was one in Florida as well, and EFF participated in at least getting an injunction against that one. Must-carry laws are what we call a set of laws that require social media companies to keep something up and give them penalties if they take something down. This is a direct flip of some of the things that people are talking about around hate speech and other content, which would require companies to take things down and penalize them if they don't.

Daphne: I don't want to geek out on the law too much here, but it feels to me like a moment when a lot of settled First Amendment doctrine could become shiftable very quickly, given things that we're hearing, for example, from Clarence Thomas who issued a concurrence in another case saying, Hey, I don't like the current state of affairs and maybe these platforms should have to carry things they don't want to.

Cindy: I would be remiss if I didn't point out that I think this is completely true as a policy matter. It's also the case as a First Amendment matter that this distinction between the speech and regulating the amplification is something the Supreme Court has looked at a lot of times and basically said is the same thing. I think the fact that it's causing the same problems shows that this isn't just a First Amendment doctrine hanging out there in the air; the lack of a distinction in the law between whether you can say it and whether it can be amplified comes because they really do cause the same kinds of societal problems that free speech doctrine is trying to make sure don't happen in our world.

Danny: I was talking to a couple of Kenyan activists last week. And one of the things that they noted is while the EU and the United States fighting over what kind of amplification controls are lawful and would work, they're facing the situation where any law about amplification in their own country is going to silence the political opposition because of course politics is all about amplification. Politics, good politics, is about taking a voice of a minority and making sure that everybody knows that something bad is happening to them. So I think that sometimes we get a little bit stuck in debating things from an EU angle or US legal angle and we forget about the rest of the world.

Daphne: I think we systematically make mistakes if we don't have voices from the rest of the world in the room to say, hey wait, this is how this is going to play out in Egypt or this is how we've seen this work in in Colombia. In the same way that, to take it back to content moderation generally, that in-house content moderation teams make a bunch of really predictable mistakes if they're not diverse. If they are a bunch of college educated white people making a lot of money and living in the Bay area there are issues they will not spot and that you need people with more diverse backgrounds and experience to recognize and plan around. 

Danny: Also by contrast if they're incredibly underpaid people who are doing this in a call center and have to hit ridiculous numbers and being traumatized by the fact that they're getting to filter through the worst garbage on the internet, I think that's a problem too.

Cindy: My conclusion from this conversation so far is just having a couple large platforms try to regulate and control all the speech in the world is basically destined to failure and it's destined to failure in a whole bunch of different directions. But the focus of our podcast is not merely to name all the things broken with modern Internet policy, but to draw attention to practical and even idealistic solutions. Let's turn to that.

Cindy: So you have dived deep into what we at EFF call adversarial interoperability or ComCom. This is the idea that users can have systems that operate across platforms, so for example you could use a social network of your choosing to communicate with your friends on Facebook without you having to join Facebook yourself. How do you think about this possible answer as a way to kind of make Facebook not the decider of everybody's speech?  

Daphne: I love it and I want it to work, and I see a bunch of problems with it. But, I mean, part of why I love it is because I'm old and I love the distributed internet, where there weren't these sorts of chokehold points of power over online discourse. And so I love the idea of getting back to something more like that.

Cindy: Yeah. 

Daphne: You know, as a first amendment lawyer, I see it as a way forward in a neighborhood that is full of constitutional dead ends. You know, we don't have a bunch of solutions to choose from that involve the government coming in and telling platforms what to do with more speech. Especially the kinds of speech that people consider harmful or dangerous, but that are definitely protected by the first amendment. And so the government can't pass laws about it. So getting away from solutions that involve top-down dictates about speech towards solutions that involve bottom up choices by speakers and by listeners and by community is about what kind of content moderation they want to see, seems really promising.

Cindy:  What does that look like from a practical perspective? 

Daphne: There are a bunch of models of this. You can envision it as what they call a federated system, like the Mastodon social network, where each node has its own rules. Or you can say, oh, you know, that goes too far; I do want someone in the middle who is able to honor copyright takedown requests or police child sexual abuse material, a point of control for things that society decides should be controlled.

You know, then you do something like what I've called magic APIs or what my Stanford colleague Francis Fukuyama has called middleware, where the idea is Facebook is still operating, but you can choose not to have their ranking or their content moderation rules, or maybe even their user interface and you can opt to have the version, from ESPN that prioritizes sports or from a Black Lives Matter affiliated group that prioritizes racial justice issues.

So you bring in competition at the content moderation layer, while leaving this underlying, like, treasure trove of everything we've ever done on the internet sitting with today's incumbents.

Danny: What are some of your concerns about this approach? 

Daphne: I have four big practical problems. The first is: does the technology really work? Can you really have APIs that make all of this organization of massive amounts of data happen instantaneously in distributed ways? The second is about money and who gets paid. And the last two are things I do know more about. One is about content moderation costs and one is about privacy. I unpack all of this in a recent short piece in the Journal of Democracy if people want to nerd out on this. But the content moderation costs piece is, you're never going to have all of these little distributed content moderators all have Chechen speakers and Arabic speakers and Spanish speakers and Japanese speakers. You know, so there's just a redundancy problem, where if all of them have to have all of the language capabilities to assess all of the content, that becomes inefficient. Or, you know, you're never going to have somebody who is enough of an expert in, say, American extremist groups to know what a Hawaiian shirt means this month versus what it meant last month.

Cindy: Yeah.

Daphne: Can I just raise one more problem with competitive compatibility or adversarial interoperability? And I raise this because I've just been in a lot of conversations with smart people who I respect who really get stuck on this problem, which is aren't you just creating a bunch of echo chambers where people will further self isolate and listen to the lies or the hate speech. Doesn't this further undermine our ability to have any kind of shared consensus reality and a functioning democracy? 

Cindy: I think that some of the early predictions about this haven't really come to pass in the way that we're concerned about. I also think there's a lot of fears that are not really grounded in empirical evidence about where people get their information and how they share it, and that need to be brought into play here before we decide that we're just stuck with Facebook and that our only real goal here is to shake our fist at Mark Zuckerberg or write laws that will make sure that he protects a speech I like and takes down the speech I don't like, because other people are too stupid to know the difference. 

Daphne: If we want to avoid this echo chamber problem is it worth the trade-off of preserving these incredibly concentrated systems of power over speech? Do we think nothing's going to go wrong with that? Do we think we have a good future with greatly concentrated power over speech by companies that are vulnerable to pressure from say governments that control access to lucrative markets like China, which has gotten American companies to take down lawful speech? Companies that are vulnerable to commercial pressures from their advertisers which are always going to be at best majoritarian. Companies that faced a lot of pressure from the previous administration and will so from this and future administrations to do what politicians want. The worst case scenario to me of having a continued extremely concentrated power over speech looks really scary and so as I weigh the trade-offs, that weighs very heavily, but it kind of goes to almost questions you want to ask a historian or a sociologist or a political scientist or Max Weber.

Danny: When I talk to my friends or my wider circle of friends on the internet it really feels like things are just about to veer into an argument at every point. I see this in Facebook comments where someone will say something fairly innocuous and we're all friends, but like someone will say something and then it will spiral out of control. And I think about how rare that is when I'm talking to my friends in real life. There are enough cues there that people know if we talk about this then so-and-so is going to go on a big tirade, and I think that's a combination of coming up with new technologies, new ways of dealing with stuff, on the internet, and also as you say, better research, better understanding about what makes things spiral off in that way. And the best thing we can fix really is to change the incentives, because I think one of the reasons why we've hit what we're hitting right now is that we do have a handful of companies and they all have very similar incentives to do the same kind of thing. 

Daphne: Yeah I think that is absolutely valid. I start my internet law class at Stanford every year by having people read Larry Lessig. He lays out this premise that what truly shapes people's behavior is not just laws, as lawyers tend to assume. It's a mix of four things, what he calls Norms, the social norms that you're talking about, markets, economic pressure, and architecture, by which he means software and the way that systems are designed to make things possible or impossible or easy or hard. What we might think of as product design on Facebook or Twitter today. And I think those of us who are lawyers and sit in the legal silo tend to hear ideas that only use one of those levers. They use the lever of changing the law, or maybe they add a changing technology, but it's very rare to see more systemic thinking that looks at all four of those levers, and how they have worked in combination to create problems that we've seen, like there are not enough social norms to keep us from being terrible to each other on the internet but also how those levers might be useful in proposals and ideas to fix things going forward.

Cindy: We need to create the conditions in which people can try a bunch of different ideas, and we as a society can try to figure out which ones are working and which ones aren't. We have some good examples. We know that Reddit, for instance, made some great strides in turning that place into something that has a lot more accountability. Part of what is exciting to me about ComCom and this middleware idea is not that they have the answer, but that they may open up the door to a bunch of things, some of which are going to be not good, but a couple of which might help us point the way forward towards a better internet that serves us. We may need to think about the next set of places where we go to speak as maybe not needing to be quite as profitable. I think we're doing this in the media space right now, where we're recognizing that maybe we don't need one or two giant media chains to present all the information to us. Maybe it's okay to have a local newspaper or a local blog that gives us the local news and that provides a reasonable living for the people who are doing it but isn't going to attract Wall Street money and investment. I think that one of the keys to this is to move away from this idea that five big platforms make this tremendous amount of money. Let's spread that money around by giving other people a chance to offer services. 

Daphne: I mean VCs may not like it but as a consumer I love it.

Cindy: And one of the ideas about fixing the internet around content moderation, hate speech, and these must-carry laws is really to try to create more spaces where people can speak that are a little smaller, and to shrink the content moderation problem down to a size where we may still have problems but they're not so pervasive. 

Daphne: And on sites where social norms matter more. You know, that lever, the thing that stops you from saying horrible racist things in a bar or at church or to your girlfriend or at the dinner table, if that norms element of public discourse becomes more important online, by shrinking things down into manageable communities where you know the people around you, that might be an important way forward.

Danny: Yeah, I'm not an ass in social interactions not because there's a law against being an ass but because there's this huge social pressure and there's a way of conveying that social pressure in the real world and I think we can do that. 

Cindy: Thank you so much for all that insight Daphne and for breaking down some of these difficult problems into kind of manageable chunks we can begin to address directly. 

Daphne: Thank you so much for having me.

Danny: So Cindy, having heard all of that from Daphne, are you more or less optimistic about social media companies making good decisions about what we see online? 

Cindy: So I think if we're talking about today's social media companies and the giant platforms, making good decisions, I'm probably just as pessimistic as I was when we started. If not more so. You know, Daphne really brought home how many of the problems we're facing in content moderation in speech these days are the result of the consolidation of power and control of the internet in the hands of a few tech giants. And how the business models of these giants play into this in ways that are not good.

Danny: Yeah. And I think that like the menu, the palette of potential solutions in this situation is not great either. Like, I think the other thing that came up is, you watch governments all around the world recognize this as a problem and try to come in to fix the companies rather than fix the ecosystem. And then you end up with these very clumsy rules. Like, I thought the must-carry laws, where you go to a handful of companies and say you absolutely have to keep this content up, are such a weird fix when you start thinking about it. 

Cindy: Yeah. And of course it's just as weird and problematic as "you must take this down, immediately." Neither of these directions is a good one. The other thing that I really liked was how she talked about the problems with this idea that AI and bots could solve the problem.

Danny: And I think part of the challenge here is that we have this big blob of problems, right? Lots of articles written about, oh, the terrible world of social media, and we need an instant one-off solution and Mark Zuckerberg is the person to do it. And I think that the very nature of conversation, the very nature of sociality, is that it is small scale, right? It is at the level of a local cafe.

Cindy: And of course, it leads us to the fixing part that we liked a lot, which is this idea that we try to figure out how we redistribute the internet and redistribute these places so that we have a lot more local cafes or even town squares. 

The other insight I really appreciate is kind of taking us back to, you know, the foundational thinking that our friend Larry Lessig did about how we have to think not just about law as a fix, and not just about code, how do you build this thing, as a fix, but we have to look at all four things: law, code, social norms, and markets, as levers that we can use to try to make things better online.

Danny: Yeah. And I think it comes back to this idea that we have, like, this big stockpile of all the world's conversations and we have to crack it open and redirect it to these smaller experiments. And I think that comes back to this idea of interoperability, right? There's been such an attempt, a reasonable commercial attempt, by these companies to create what the venture capitalists call a moat, right? Like, this space between you and your potential competition. Well, we have to breach those moats, and breaching them involves, either by regulation or just by people building the right tools, having interoperability between the past of social media giants and the future of millions and millions of individual social media places. 

Cindy: Thank you to Daphne Keller for joining us today. 

Danny: And thank you for joining us. If you have any feedback on this episode please email podcast@eff.org. We read every email. 

Music for the show is by Nat Keefe and Reed Mathis of BeatMower. 

“How to Fix the Internet” is supported by The Alfred P. Sloan Foundation's Program in Public Understanding of Science and Technology. 

I’m Danny O’Brien.

 And I’m Cindy Cohn. Thank you for listening, until next time. 

Joe Mullin

Dream Job Alert: Media Relations Director for EFF

1 week 2 days ago

We’ve got an amazing opportunity for a senior media relations person to join the EFF team.

Right now, we are hiring a Media Relations Director, a leadership role that oversees and directs EFF’s press strategy and engagement. Join EFF and help explain to journalists and the world why civil liberties matter so much for the future of technology and society. Apply today.

We are open to many different types of candidates for this role. We are especially interested in meeting people who have experience working in journalism themselves. We are also interested in people who have handled press strategy for other nonprofit advocacy or civil liberties organizations. But more than any particular experience, we’re looking for someone who is a great communicator, has terrific organizational skills, can manage a team, and loves to solve problems and tell stories. And we value diversity in background and life experiences. 

EFF is committed to supporting our employees. That’s why we’ve got competitive salaries, incredible benefits (including student loan reimbursement and fantastic healthcare), and ample policies for paid time off and holidays. We want this work to be sustainable and fun—so that you’ll be part of our organization for a long time.

Please check out our job description and apply today! And if you are a senior media relations professional or working journalist who has a question about this role, please email rainey@eff.org.

Even if this job isn’t the right fit for you, please take a moment to spread the word on social media.

rainey Reitman

Our Patent Review System is Ten Years Old. It’s Time to Make It Stronger.

1 week 2 days ago

The U.S. Patent and Trademark Office (USPTO) grants more than 300,000 patents each year.  Some of those patent grants represent genuine new inventions, but many of them don’t. On average, patent examiners have about 18 hours to spend on each application. That’s not enough time to get it right. 

Thousands of patents get issued each year that never should have been issued in the first place. This is a particular problem in software, which is a bad fit for the patent system. That’s why it’s so critical that we have a robust patent review system. It gives companies that get threatened over patents the opportunity to get a second, more in-depth review of a patent—without spending the millions of dollars that a jury trial can cost. 

Our patent review system is ten years old now, and patent trolls and other aggressive patent holders have learned to game the system. Unfortunately, the USPTO has let them get away with it. A recently introduced bill, the Restoring the America Invents Act (S. 2891), would close some of the loopholes that patent owners have used to dodge or weaken reviews. 

Inter Partes Review

Congress recognized the need for such a system when it passed the 2011 America Invents Act, and created a review system called “inter partes review,” or IPR. The IPR process lets a particular department of the patent office, the Patent Trial and Appeal Board (PTAB), hold a quasi-judicial process in which they take a second look to decide if a patent really should have been granted in the first place. 

The IPR system isn’t perfect, but the process has been a big improvement over the patent office’s previous review systems. Over the 10 years it’s been in operation, the PTAB has reviewed thousands of patents. In the majority of cases that have gone to a final decision, PTAB judges have decided to cancel all or some of the claims in question. 

It’s important to put this in context. The thousands of canceled patents are just a tiny fraction of the number that the government is giving away. In the most recent fiscal year, 265 patents had one or more claims canceled, according to USPTO statistics. That’s less than 0.1% of the 340,000 patents that were granted in the same period, and a minute fraction of the 3.8 million patent monopolies that the patent office believes are active. 
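As a quick sanity check of those proportions, here is a minimal back-of-the-envelope sketch in Python. The inputs are the approximate USPTO figures quoted in the paragraph above; the rounding noted in the comments is ours.

  # Rough check of the proportions cited above, using the approximate
  # USPTO figures quoted in this article.
  canceled = 265           # patents with one or more claims canceled via IPR in the fiscal year
  granted = 340_000        # patents granted in the same period
  active = 3_800_000       # patents the USPTO believes are currently in force

  print(f"Share of the year's grants: {canceled / granted:.3%}")    # about 0.078%, i.e. under 0.1%
  print(f"Share of all active patents: {canceled / active:.4%}")    # about 0.0070%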

The IPR system isn’t perfect, but overall it has been a win for the public. It’s no surprise that some patent owners don’t like it.  With more administrative tools to get at the truth, more patents are found to be invalid.

Closing Four Loopholes

First, the bill would close a big loophole in the process that has come to be known as “discretionary denial.” Basically, this is when a panel of PTAB judges refuses to even consider the merits of an IPR petition. The most common excuse for a discretionary denial is that there’s related court litigation on the same patents that is coming up soon. There’s nothing in the law that says PTAB needs to consider this, but it’s been happening more and more in recent years. 

This loophole even got an official stamp of approval in a PTAB proceeding called Fintiv. So-called Fintiv denials now represent nearly 40% of all denied IPR petitions. One particular federal judge has even marketed his court as a good place to go for patent owners who would like to get a Fintiv denial, as well as other benefits. 

EFF has spoken out against this problem, asking for Congress to stop patent owners’ gamesmanship of the IPR process. This bill would do just that. The Restoring the AIA Act states simply, “a petition that meets the requirements of this chapter shall be instituted.” It’s time to close this loophole, before patent trolls make it even bigger. 

Second, the bill creates new rules regarding the role of the Director of the USPTO. That’s important since a recent Supreme Court ruling (U.S. v. Arthrex) gave the Director the power to review, and even overturn, the results of IPR proceedings. The Restoring the AIA Act would make it a requirement that the Director issue a written decision when she chooses to use that power. 

While this new power of the Director may not be used frequently, it’s important that there be a written record. Just like a judicial decision, a Director review could decide whether a patent stands or falls—and whether or not accused infringers must pay royalties or cease making a product. It’s basic good government that there should be a written record of such a decision. 

Third, the bill will allow government agencies to file for IPRs. It doesn’t happen often, but government agencies do get accused of  patent infringement. These are important cases, since any damages or royalties will be paid to patent owners with public money. Accused government agencies should have the opportunity to ask for a PTAB review, just as a private company or individual would. 

That’s what EFF and some of our allies advocated for when this issue came up at the Supreme Court, in a case called Return Mail v. U.S. Postal Service. Unfortunately, the high court held otherwise in a 6-3 decision. We don’t think barring government agencies from the IPR process is what Congress intended, and this bill would make that clear. 

Finally, this bill will make sure that the patents that get knocked down by IPR, stay down. It prevents the patent office from issuing patents that are “not patentably distinct” from patents that have been canceled in the IPR process. 

The inter partes review process that Congress created 10 years ago is one of a few changes to IP law that has actually served the public interest. It’s no surprise that it’s made enemies over the years, some of whom have fought hard to dismantle the process altogether. Fortunately, so far, they’ve failed. By closing these loopholes and making the process even stronger, Congress can make clear that the patent system works for all of the public—not just a small group of large patent owners. 

Joe Mullin

Coalition Against Stalkerware Celebrates Two Years of Work to Keep Technology Safe for All

1 week 6 days ago

In this guest post by the Coalition Against Stalkerware marking its second anniversary, the international alliance looks back at its achievements while acknowledging the many challenges ahead.

Two years ago, in November 2019, the Coalition Against Stalkerware was founded by 10 organizations. Today, there are more than 40 members with experts working in different relevant areas including victim support and perpetrator work, digital rights advocacy, IT security, academia, security research and law enforcement. 

Stalkerware makes it possible to intrude into a person’s private life and is a tool for abuse in cases of domestic violence and stalking. By installing these applications on a person’s device, abusers can get access to someone’s messages, photos, social media, geolocation, audio or camera recordings (in some cases, this can be done in real-time). Such programs run hidden in the background, without a victim’s knowledge or consent.

This year, the Coalition welcomed new supporters such as INTERPOL, as well as new members, among them CyberPeace Institute; Gendarmerie Nationale; the Gradus Project; Kandoo; Luchadoras; the Florida Institute for Cybersecurity Research; National Center for Victims of Crime (US); North Carolina A&T State University’s Center of Excellence for Cybersecurity Research, Education, and Outreach; Refuge UK; Sexual Violence Law Center (US); and The Tor Project. 

Fulfilling one of the founding missions, the Coalition’s partners in July launched a new technical training on stalkerware aimed at helping increase capacity-building among nonprofit organizations that work with survivors and victims, as well as law enforcement agencies and other relevant parties. In addition, the Coalition has put together a revised page with advice for survivors who suspect they may have stalkerware on their device.

Other key activities during the year include: 

  • In October, Coalition members Wesnet, Australia’s national umbrella organization for domestic violence services, US-based National Network to End Domestic Violence (NNEDV), and global privacy company Kaspersky teamed up with INTERPOL to provide more than 210 police officers with the knowledge to investigate digital stalking, based on the Coalition’s technical training on stalkerware.
  • Also last month, the EU-wide DeStalk project—in which WWP EN and Kaspersky are project partners, and Martijn Grooten, Coalition coordinator, and Hauke Gierow from G DATA are Advisory Board members—launched an e-learning course for public officials of regional authorities and workers of victim support services and perpetrator programs on how to tackle cyberviolence and stalkerware. DeStalk is supported by the Rights, Equality and Citizenship (REC) Program of the European Commission.
  • In October, Coalition members Refuge and Avast published an online tool that helps detect abuse of Internet-of-Things (IoT) devices and provides tips on how to secure them. IoT is increasingly used for harassment and control in abusive relationships.  
  • In January 2021, the Stalking Prevention, Awareness, and Resource Center marked the 17th annual Stalking Awareness Month, an annual call to action in the United States to recognize and respond to the crime of stalking. Hundreds of organizations across the country hosted workshops, promoted awareness and encouraged responders to promote victim safety and offender accountability.

Beyond that, members conducted a range of new research:

  • NNEDV’s Tech Abuse in the Pandemic and Beyond report (2021) found that the most common types of tech abuse—harassment, limiting access to technology, and surveillance—increased during the pandemic. Phones, social media, and messaging were the technologies most commonly misused as a tactic of tech abuse.
  • Malwarebytes published their Demographics of Cybercrime report (2021), a global study of consumer cybercrime impacts, showcasing the disproportionate impact of cybercrime on vulnerable populations.
  • Kaspersky presented its Digital Stalking in Relationships report (2021), a global survey of more than 21,000 participants in 21 countries about their attitudes towards privacy and digital stalking in intimate relationships. The survey found that a significant share of people (30%) see no problem at all and find it acceptable to monitor their partner without consent. Additionally, 24% of respondents reported having been stalked by means of technology at least once. Partners advising on the research were Centre Hubertine Auclert, NNEDV, Refuge, Wesnet and WWP EN.

Data from member organizations show the following picture on the issue of cyberviolence and stalkerware:

  • The Centre Hubertine Auclert conducted research on technology-facilitated domestic violence (2018) and found that 9 out of 10 women victims of domestic violence are also victims of cyberviolence.
  • WESNET, with the assistance of Dr. Delanie Woodlock and researchers from Curtin University, published the Second National Survey of Technology Abuse and Domestic Violence in Australia (2020). The survey asked frontline workers what kinds of technology-facilitated abuse tactics and other forms of violence against women they are seeing in their day-to-day work with survivors of domestic and family violence. It shows that 99.3% of Australian domestic violence workers say they have clients experiencing technology abuse. 18% of workers see spyware “often” and 35% see it “sometimes.” Tracking and monitoring of women and stalking by perpetrators, often via technological means, rose 244% between 2015 and 2020. Stalking is a known factor associated with an increased risk of lethal and near-lethal harm. 
  • Following the Coalition Against Stalkerware’s detection criteria on stalkerware, Kaspersky analyzed its statistics, revealing how many of its users were affected by stalkerware in the first 10 months of the year. From January to October 2021, almost 28,000 mobile users were affected by this threat. During the same period, there were more than 3,100 cases in the EU and more than 2,300 users affected in North America. According to Kaspersky figures, Russia, Brazil, and the United States remain the most affected countries worldwide so far. Likewise, in Europe the picture has not changed: Germany, Italy, and the United Kingdom (UK) are the top three most-affected countries, in that order. When looking only at the EU, France replaces the UK in third place.
  • Malwarebytes, in comparing stalkerware activity before and well into the COVID-19 pandemic, found that the threat of stalkerware continues to rise. From October 2020 to September 2021, Malwarebytes recorded more than 62,000 detections of applications with stalkerware capabilities on Android devices. These detections represent a 52% increase compared to the same 12-month period the year before, which accounted for roughly 41,000 detections.
Quotes from the Coalition’s members:
  • "Stalkerware is only part of a whole ecosystem of tech-enabled abuse, but it is one of the most frightening tools and it leaves survivors especially vulnerable to physical stalking, coercive control, and escalating violence. The Coalition Against Stalkerware has been instrumental in changing the way the tech industry treats these tools and helping survivors, and the people who support them, to detect the threat. No one industry can stop stalkerware all by itself. The Coalition's interdisciplinary approach has been essential to its success. When academics, tech companies, and domestic violence service providers work together, we can quantify the problem, raise awareness, and push for both technical and policy solutions in way that none of us can by ourselves." - Eva Galperin, Electronic Frontier Foundation Cybersecurity Director and Coalition co-founder.
  • “Stalking is a prevalent, traumatic, and dangerous crime that impacts over 1 in 6 women and 1 in 17 men in the United States. Technology is used—or misused—to stalk in the majority of stalking cases. Too often, victims and responders have limited resources and little to no insight on what kind of technology is being used, how to safety plan around it, and/or how to collect evidence on it. We are so grateful to be part of the Coalition to better educate responders at recognizing and responding to stalking. This work is truly helping to keep victims safe and hold offenders accountable." - Jennifer Landhuis, SPARC Director
  • “Two years ago, the public and law enforcement understood little about the threat of stalkerware. Apps that non-consensually spy on users were readily available online, promoted on social media feeds, and off the radar of national governments. By combining varied expertise across several countries, the Coalition Against Stalkerware has defined and raised significant awareness about the threat of stalkerware. We are proud to be one of several antivirus vendors working together to offer more comprehensive stalkerware protection for all users. We are equally proud of our nonprofit members who have produced and offered tailored device trainings and guidance to law enforcement and targeted individuals alike. The commitment of every member of the Coalition Against Stalkerware has shaped this reality in ways we couldn’t even imagine just two years ago.” - David Ruiz, Online Privacy Advocate, Malwarebytes
  • “It’s amazing how impactful collective action can be, especially if you work with engaged people. One small step after another, and at the end, all together, it will make a big change. What makes me think in this positive way is reading about the intention of the European Parliament and European Commission to propose a law to combat violence against women that will include prevention, protection and effective prosecution, both online and offline, by the end of 2021. The Coalition still has a long way ahead, but I believe we’re going in the right direction.” - Kristina Shingareva, Head of External Relations at Kaspersky. 
About Coalition Against Stalkerware

The Coalition Against Stalkerware (“CAS” or “Coalition”) is a group dedicated to addressing abuse, stalking, and harassment via the creation and use of stalkerware. Launched in November 2019 by ten founding partners—Avira, Electronic Frontier Foundation, the European Network for the Work with Perpetrators of Domestic Violence, G DATA Cyber Defense, Kaspersky, Malwarebytes, The National Network to End Domestic Violence, NortonLifeLock, Operation Safe Escape, and WEISSER RING—the Coalition has grown into a global network of more than forty partners. It looks to bring together a diverse array of organizations working in domestic violence survivor support and perpetrator intervention, digital rights advocacy, IT security and academic research to actively address the criminal behavior perpetrated through stalkerware and raise public awareness about this important issue. Due to the high societal relevance for users all over the globe, with new variants of stalkerware emerging periodically, the Coalition Against Stalkerware is open to new partners and calls for cooperation. To find out more about the Coalition Against Stalkerware please visit the official website www.stopstalkerware.org.

 

Eva Galperin

UN Human Rights Committee Criticizes Germany’s NetzDG for Letting Social Media Platforms Police Online Speech

2 weeks 1 day ago

A UN human rights committee examining the status of civil and political rights in Germany took aim at the country’s Network Enforcement Act, or NetzDG, criticizing the hate speech law in a recent report for enlisting social media companies to carry out government censorship, with no judicial oversight of content removal.

The United Nations Human Rights Committee, which oversees the implementation of the United Nations International Covenant on Civil and Political Rights (ICCPR), expressed concerns, as we and others have, that the regulation forces tech companies to behave as the internet police, with power to decide what is free speech and what is hate speech. NetzDG requires large platforms to remove content that appears “manifestly illegal” within 24 hours of being alerted to it, which will likely lead to takedowns of lawful speech as platforms err on the side of censorship to avoid penalties. The absence of court oversight of content removal was deemed especially alarming, as it limits “access to redress in cases where the nature of content is disputed.”

“The Committee is concerned that these provisions and their application could have a chilling effect on online expression,” according to a November 11 Human Rights Committee report on Germany. The report is the committee’s concluding observations of its independent assessment of Germany’s compliance with its human rights obligations under the ICCPR treaty.

It’s important that the UN body is raising alarms over NetzDG. We’ve seen other countries, including those under authoritarian rule such as Turkey, take inspiration from the regulation. A recent study reports that at least thirteen countries—including Venezuela, Australia, Russia, India, Kenya, the Philippines, and Malaysia—have proposed or enacted laws based on the regulatory structure of NetzDG since it entered into force, with the regulations in many cases taking a more privacy-invasive and censorial form.

To quote imprisoned Egyptian technologist Alaa Abd El Fattah, “a setback for human rights in a place where democracy has deep roots is certain to be used as an excuse for even worse violations in societies where rights are more fragile.”

The proliferation of copycat laws is disturbing not only because of what it means for freedom of expression around the world, but also because NetzDG isn’t even working to curb online abuse and hate speech in Germany. Harassment and abuse by far-right groups aimed at female candidates ahead of Germany’s election showed just how ineffective the regulation is at eliminating toxic content and misinformation. At the same time, the existence of the law and its many imitations provides less of an incentive for companies to work to protect lawful speech when faced with government demands.

And in general, holding companies liable for the user speech they host has the chilling effect on freedom of expression the UN body is concerned about. With the threat of penalties and shutdowns hanging over their heads, companies will be prone to over-remove content, sweeping up legitimate speech and silencing voices. Even if massive platforms like Facebook and YouTube can afford to pay any penalties assessed against them, many other companies cannot and the threat of costly liability will discourage new companies from entering the market. As a result, internet users have fewer choices and big tech platforms garner greater monopoly power.

The UN Committee recommended Germany take steps to prevent the chilling effects NetzDG is already having on online expression. Germany should ensure that any restrictions on online expression under NetzDG meet the requirements of Article 19(3) of the ICCPR. This means that restrictions under the law must be proportionate and necessary “for respect of the rights or reputations of others” or “for the protection of national security or of public order (ordre public), or of public health or morals.” Moreover, the Committee recommended that Germany consider revisiting NetzDG “to provide for judicial oversight and access to redress in cases where the nature of online material is disputed.”

Germany should adopt these recommendations as a first step to protect freedom of expression within its borders. Germans deserve it. We’ll wait.    

 

Meri Baghdasaryan

Indonesian Court Allows Internet Blocking During Unrest, Tightening Law Enforcement Control Over Users’ Communications and Data

2 weeks 1 day ago

Indonesia’s Constitutional Court dealt another blow to the free expression and online privacy rights of the country’s 191 million internet users, ruling that the government can lawfully block internet access during periods of social unrest. The October decision is the latest chapter in Indonesia’s crackdown on tech platforms and its continuing efforts to force compliance with draconian rules controlling content and access to users’ data. The court’s long-awaited ruling came in a 2019 lawsuit brought by the Indonesian NGO SAFEnet and others challenging Article 40.2b of the Electronic Information and Transactions (EIT) Law, after the government restricted internet access during independence protests and demonstrations in Papua. The group had hoped for a ruling reining in government blocking, which interferes with Indonesians’ rights to voice their opinions and speak out against oppression. Damar Juniarto, SAFEnet Executive Director, told EFF:  

We are disappointed with the Constitutional Court’s decision. We have concerns that the Indonesian government will implement more Internet restrictions based on this decision that are in violation of, or do not address, human rights law and standards. 

SAFENET and Human Rights Watch have been sounding the alarm about threats to digital rights in Indonesia ever since the government last year passed, without public consultation, Ministerial Regulation #5 (“MR 5/2020”), a human rights-invasive law governing online content and user data and imposing drastic penalties on companies that fail to comply. 

From Data Localization to Other Government Mandates

In 2012, Indonesia adopted a data localization mandate requiring all websites and applications that provide online services to store data within Indonesia’s territorial jurisdiction. The mandate’s goal was to help Indonesian law enforcement officials force private electronic systems operators (ESOs)—anyone that operates “electronic systems” for users within Indonesia, including operators incorporated abroad—to provide data during an investigation. The 2012 regulation was largely not enforced, while a 2019 follow-up initiative (M71 regulation) limited the data localization mandate to those processing government data from public bodies. 

Since the adoption of MR5, Indonesia’s data localization initiative shifted its approach: private sector data can once again be stored abroad, but the regulation requires Private ESOs to appoint an official local contact in Indonesia responsible for ensuring compliance with data and system requests. Private ESOs will be obligated to register with the government if they wish to continue providing services in the country, and, once registered, will be subject to penalties for failing to comply with MR5’s requirements. Penalties range from a first warning to temporary blocking, full blocking, and finally revocation of its registration. Indonesia has mandated broad access to electronic systems for law enforcement and oversight and proactive monitoring of online intermediaries, including private messaging services and online games providers.

Proactive Monitoring Mandate

EFF has warned that, by compelling private platform operators to ensure that they do not host or facilitate prohibited content, MR5 forces them to become an arm of the government’s censorship regime, monitoring their users’ social media posts, emails, and other communications (Article 9 (3)). 

MR5 governs all private sector ESOs accessible in Indonesia, such as social media services, content-sharing platforms, digital marketplaces, search engines, financial and data processing services, communications services providing messaging, cloud service providers, video calling, and online games. The definition of prohibited information or content includes vague concepts such as content causing “community anxiety” or “disturbance in public order,” and grants the Indonesian Ministry of Communication and Information Technology (Kominfo) unfettered authority to define these terms (Article 9(5)). 

Along with SAFENET and Human Rights Watch, we pointed out earlier this year that the phrase “prohibited” is open to interpretation and debate. For example, what is meant by “public disturbance”? What is the standard or measure for public disturbances, and who has the authority to determine what qualifies? What if the public feels that peaceful demonstrations and protests are a fundamental right, not “disturbing the society”?

Article 9(3)(b) of the Ministerial Regulation also prohibits any system from facilitating either “access to prohibited Electronic information and/or documents” or informing people how to do that. Under Article 9 (4)(c) of the regulation, prohibited Electronic information or documents could be any information or document that explains how to use or how to get access to the Tor browser, virtual private networks (VPNs), or even materials showing how to bypass censorship. Adding insult to injury, companies failing to comply will be subject to draconian penalties ranging from temporary or full blocking of their service to revocation of their authorization to provide online services within Indonesia (Article 9(6)). Moreover, under Article 13, private sector ESOs are also required to take down and block any prohibited information and documents (Article 9(4)). 

Even worse, secure private messaging apps (such as WhatsApp, Signal, or iMessage) are also obliged to comply with Article 9. Private messaging services that offer end-to-end encryption do not know the content of users’ messages. MR5 thus effectively seeks to ban all end-to-end encryption, and with it the ability for anyone in Indonesia to message or text someone without the threat of the provider or government listening in. Moreover, MR5 requires these providers, as it does with others, to determine if content is “prohibited.” 

The new regulation interferes with rights to free expression and privacy, and requires platform providers to carry out these abuses. This is why, together with SAFENET, Human Rights Watch, and others, we called upon Kominfo to repeal MR 5/2020. In a joint statement, we said  MR5 runs contrary to Article 12 of the Universal Declaration of Human Rights and Article 17 of the International Covenant on Civil and Political Rights. 

Mandatory Registration for Platforms 

MR5 requires private platforms to register with the government—in this case Kominfo—and obtain an identification (ID) certificate to provide online services to people within Indonesia. This gives the government much more direct power over companies’ policies and operations: after all, the certificate can be revoked if a company later refuses the government’s demands. Even platforms that may not have infrastructure inside Indonesia have to register and get a certificate before people in Indonesia can start accessing their services or content. Those that fail to register will be blocked within Indonesia. The original deadline to register was May 24, 2021, but was later extended for six months. As of today, the government has not further extended the deadline or enforced the mandatory registration provision.  

The idea that websites and apps need to obtain an ID certificate from the government is a powerful and far-reaching form of state control because every interaction between authorities and companies, and every decision that might please or displease authorities, takes place against a backdrop of potential withdrawal of the ID and being blocked in the country.

ID Certificate and Appointment of Local Contact

 The Indonesian government has many expectations for companies that register for these IDs, including cooperation with orders for data about users. In some cases, that even includes granting the government direct access to their systems.

For example, MR5 compels private platforms that register to grant access to their “systems” and data to ensure effectiveness in the “monitoring and law enforcement process.” If a registered platform disobeys the requirement, for example, by failing to provide direct access to their systems (like computer servers—see Article 7 (c)), it can be punished in ways similar to the penalties for failing to flag “prohibited” content, from a written warning to temporary blocking to full blocking and a final revocation of its registration. 

Article 25 of MR5 forces companies to appoint at least one contact person domiciled in the territory of Indonesia to be responsible for facilitating Kominfo or Institution requests for access to systems and data. Laws forcing companies to appoint a local representative exist, for example, in Turkey and India.

Both the ID requirement and the forced appointment of a local point of contact person are powerful coercive measures that give governments new leverage for informal pressure and arbitrary orders. As we noted in our post in February, with a representative on the ground, platforms will find it much harder to resist arbitrary orders and risk domestic legal action against that person, including potential arrest and criminal charges.

Human Rights Watch’s Asia Division has similarly worried that,

[w]hile the establishment of local representatives for tech companies can help them navigate and better understand the different contexts in which they operate, this is dependent on the existence of a legal environment in which it is possible to challenge unfair removal or access requests before independent courts. MR5 provides no mechanism for appeal to the courts, and the presence of staff on the ground makes it much harder for companies to resist overbroad or unlawful requests.

Remote Direct Access to Systems 

Direct access to systems refers to situations in which law enforcement has a “direct connection to telecommunications networks in order to obtain digital communications content and data (both mobile and internet), often without prior notice, or judicial authorization, and without the involvement and knowledge of the Telco or ISP that owns or runs the network.” Direct access to personal data interferes with the right to privacy, freedom of expression, and other human rights. The United Nations High Commissioner for Human Rights stated that direct access is “particularly prone to abuse and tends to circumvent key procedural safeguards.” The Industry Telecom Dialogue has explained that some governments require such access as a condition for operating in their country: 

some governments may require direct access into companies’ infrastructure for the purpose of intercepting communications and/or accessing communications-related data. This can leave the company without any operational or technical control of its technology. While in countries with independent judicial systems actual interception using such direct access may require a court order, in most cases independent oversight of proportionate and necessary use of such access is missing.

MR5 expanded its approach and applies it to all private ESOs including cloud computing services. It does not say exactly what type of access to “systems” (servers and infrastructure) private platforms may be requested to provide, though access to information technology systems (Art. 1)—which includes communication or computer systems, hardware and software—is explicitly called out as a possible subject of an order, over and above requests to turn over particular data. 

When it comes to access to systems for oversight purposes, MR5 compels providers to grant access either by letting the government in, handing over what the government is asking for, or giving the government results of an audit (Art. 29(1) and Art. 29(4)). When it comes to access to systems for criminal law enforcement purposes, MR5 fails to explicitly include an audit result as a valid option. Overall, direct access to systems is an alarming provision.

Access to Data and System

Under MR5, broad remote direct access mandates compel any provider or private ESO to grant access to “data” and “systems” to Kominfo or another government institution for “oversight” purposes (administrative monitoring or regulatory administrative compliance) (Art. 21). They are also required to grant access to law enforcement officials for criminal investigations, prosecutions, or trials for crimes carried out within Indonesian territory (Art. 32 and 33).  Law enforcement is required to obtain a court order to access ESO systems when investigating crimes that carry prison sentences of two to five years. But there’s no such requirement for crimes that carry heavier sentences of over five years imprisonment (Art. 33). 

MR5 also requires private ESOs that process and/or store data or systems to grant cross-border direct access requests about Indonesian citizens or business entities established within Indonesia, even if that information is processed and stored outside the country (Art. 34). The cross-border obligation to disclose data applies to crimes carrying penalties of two to five years imprisonment (Art. 32), while the obligation to grant access to the providers’ systems applies to investigations or prosecutions of crimes that carry sentences of over five years imprisonment (Art. 35). Unlike Mutual Legal Assistance Treaty (MLAT) agreements, MR5 fails to include a “dual criminality” requirement, meaning Indonesian police could seize data from foreign providers while investigating activity that is not a crime in the foreign country but is a crime in Indonesia. While practical challenges currently exist in cross-border access to data, these challenges can be addressed through: 

  • The express codification of a dual privacy regime that meets the standards of both the requesting and the host state. Dual data privacy protection will help ensure that as nations seek to harmonize their respective privacy standards, they do so on the basis of the highest privacy standards. Absent a dual privacy protection rule, nations may be tempted to harmonize at the lowest common denominator.
  • Improved training for law enforcement to draft requests that meet such standards, and other practical measures.

Cross-border data demands for the content of users’ communications imposed on companies like Google, Twitter, and Facebook may create a conflict of law between Indonesia and countries like the European Union or the United States. The EU’s General Data Protection Regulation (GDPR) does not allow companies to disclose data voluntarily without a domestic legal basis. US law also forbids companies from disclosing communications content without an MLAT process which requires first obtaining a warrant issued by a US judge. While we understand that Indonesia does not have an MLAT with the United States, the process for resolving conflicts of law needs considerable work. The Indonesian government should not expect companies to stride deliberately into legal paradoxes, where complying with a regulation in one country would lead them to not only violate the law in another country but also violate international human rights law and standards. The principle of dual criminality should also be taken into account when a cross-border request is needed.

Access to “Electronic Data”

Access to “electronic data” for oversight purposes can be ordered by Kominfo or other competent government institutions (Art. 26). When such access is requested for criminal investigations, it can be done by a law enforcement official (Article  38 (1)). 

In both cases, MR5 explicitly states that remote access should be granted using a link created by the private platform, or in any other way agreed between Kominfo or Institutions and the platform, or between the platform and law enforcement. In many cases, private ESOs can satisfy these requests by negotiating a compliance plan with the requester, which may avoid actually giving Indonesian government officials direct access to companies’ servers, at least most of the time (Article 28(1), Article 38(1)). Notably, MR5 does not require orders to include any information about the factual background of the investigation or any grounds establishing investigative relevance and necessity.

Law enforcement officials can also get access to very broad categories of data, like subscriber identities (“electronic system user information”), traffic data, content, and “specific personal data.” This last category can include sensitive data such as health or biometric data, political opinions, religious or philosophical beliefs, trade union membership, and genetic data. Law enforcement can get access to it, without a court order, for investigations of crimes that carry sentences of over five years imprisonment. Court orders are only required for crimes carrying penalties of two to five years imprisonment.

Gag Orders

Kominfo or government institution orders to access “systems” for oversight purposes (Art. 30) and for criminal law enforcement (Art. 40) are expected to be “limited” and “confidential” but must be responded to quickly—within five calendar days upon receipt of the order (Art. 31 and 41)—a very short time period that does not allow providers to assess the legality, necessity, and proportionality of the request. 

Confidentiality provisions such as those featured in MR5 have also been problematic in the past, and sidestep surveillance transparency, as well as the right of individuals to challenge surveillance measures. While investigative secrecy may be necessary, it can also shield problematic practices that pose a threat to human rights.  This is why providers should be able to challenge gag orders, and get authorities to provide a reasoned opinion as to why confidentiality is necessary.

Civil society has strongly advocated for the public’s right to know and understand how police and other government agencies obtain customer data from service providers. Service providers should be able to publicly disclose aggregate statistics about the nature, purpose, and disposition of government requests in each jurisdiction, and to notify targets as soon as possible, unless doing so would endanger the investigation. 

Technical Assistance Mandates

Technical assistance mandates such as those set out in MR5 have, in the past, been leveraged in attempts to erode encryption or gain direct access to providers’ networks. Article 29, too, uses similar language; the government entities requesting access to a “system” may also request “technical assistance” from the private ESOs, which they are expected to provide. The government is planning to issue technical guidelines regulating the procedures for data retrieval and access to the system by December 2021.

Cloud Computing in Case of Emergency 

Article 42 compels cloud service providers to allow access to electronic systems or data (voice, images, text, photos, maps, and emails) by law enforcement in cases of emergency. While other laws and treaties (even less-protective treaty mechanisms for streamlining this kind of international access like those in the Council of Europe’s Second Additional Protocol to the Budapest Convention) have narrowly defined emergencies as preventing imminent threats to people’s physical safety, Article 42(2) defines emergency more broadly to include terrorism, human trafficking, child pornography, and organized crime, in addition to physical injury and life-threatening situations. These categories may implicate life-threatening emergency threats, like a terrorist bomb plot or a child in current danger of ongoing sexual exploitation. But if there is no imminent threat to safety, Article 42 should not apply. 

Conclusion

MR5 runs afoul of Article 12 of the Universal Declaration of Human Rights and Article 17 of the International Covenant on Civil and Political Rights (ICCPR). MR5 is a regulation adopted by the Executive Branch. It lacks detailed procedural safeguards, and its wording is overly broad, giving unfettered discretion to the authorities to request a wide range of user data and access to systems. 

Under international human rights law, restrictions on the right to privacy are only permissible if they meet the test that applies to Article 19 of the ICCPR. This position has been clearly set out by the UN Special Rapporteur on the Promotion of Human Rights while Countering Terrorism, the UN Human Rights Committee, and the UN Commission on Human Rights.

To protect human rights in a democratic society, data access laws should make clear that authorities can access personal information and communications only under the most exceptional circumstances, and only as long as access is prescribed by enacted legislation, after public debate and scrutiny by legislators. Further, such laws must be clear, precise, and non-discriminatory, while data requests should always be necessary, proportionate, and adequate. User data should only be accessed for a specific legitimate aim, authorized by an independent judicial authority that is impartial and supported by sufficient due process guarantees such as transparency, user notification, public oversight, and the right to an effective remedy. The law should spell out a clear evidentiary basis for accessing the data, ensuring that providers obtain enough factual background to assess compliance with human rights standards and protected privileges. Confidentiality should be the exception, not the rule, invoked only where strictly necessary to achieve important public interest objectives and in a manner that respects the legitimate interests and fundamental rights of individuals. Moreover, in the case of cross-border requests, the law should ensure respect for the principle of dual criminality, as most MLATs do.

MLATs have traditionally provided the primary framework for government cooperation on cross-border criminal investigations. MLATs are typically bilateral agreements, negotiated between two countries. While specific details may vary across different MLATs, most share the same core features: a mechanism for requesting assistance to access data stored in a hosting country; a Central Authority that assesses and responds to assistance requests from foreign countries, and a lawful authority for the central authority to obtain data on behalf of the requesting country. Generally speaking, in responding to foreign requests for assistance, the Central Authority will rely on domestic search powers (and be bound by accompanying national privacy protections) to obtain the data in question.

MR5's draconian requirements hand the Indonesian government a dangerous level of control and power over online free expression and users’ personal data, making it a tool for censorship and human rights abuses. The regulation copies many provisions used by authoritarian regimes to compel platforms to bend to government demands to break encryption, hand over people’s private communications, and access personal information without procedural safeguards and proportionality requirements against arbitrary interference. The Indonesian people deserve better. Their privacy and security are at risk unless MR5 is repealed. EFF is committed to working with SAFENET in urging Kominfo to roll back this unacceptable regulation. The 13 Necessary and Proportionate Principles can provide a blueprint for States to consider safeguards when it comes to law enforcement access to data.

Katitza Rodriguez

Podcast Episode: The Revolution Will Be Open Source

2 weeks 1 day ago
Episode 102 of EFF’s How to Fix the Internet

The open source movement focuses on collaboration and empowerment of users. It plays a critical role in building a better digital future, but the movement is changing as more people from around the world join and bring their diverse interests with them. Join EFF’s Cindy Cohn and Danny O’Brien as they talk to EFF board member and Open Tech Strategies partner James Vasile about the challenges that growth is creating and the opportunities it presents to make open source, and the internet, even better.

Click below to listen to the episode now, or choose your podcast player:

[Embedded audio player from simplecast.com] Privacy info. This embed will serve content from simplecast.com


  
  

To James Vasile, an ideal world is one where all technology is customizable by the people who use it, and where people have control over the devices they use. The open source dream is that all software should be licensed so that it can be freely used, modified, distributed, and copied without penalty.

The open source movement is growing, and that growth is creating pressures. Some stem from too many projects and not enough resources. Others arise because, as more people worldwide join in, they bring different dreams for open source. Balancing initial ideas and new ones can be a real challenge, but it can also be healthy for a growing movement.

All tech should be customizable by the people who use it. Because as soon as things are proprietary, you lose control over the world around you.

 You can also find the MP3 of this episode on the Internet Archive.

In this episode, you’ll learn about

  • Some of the roots and founding principles of the open source and free software communities.
  • How the open source and free software communities are changing and adapting as more people get involved and bring their own ideals with them.
  • How licenses affect the open source community, including how communities are working to bring additional values like protecting labor and protecting against abusive uses to these licenses.
  • Policy changes that could help support the open source community and its developers, and how those could ultimately help support transparency and civil liberties.
  • How critical open source is to the decentralization of the web and more.


James Vasile is an EFF board member and a partner at Open Tech Strategies, a company that offers advice and services to organizations that make strategic use of free and open source software. James’s work centers on improving access to technology and reducing centralized control over the infrastructure of our daily lives. You can find him on Twitter @jamesvasile.

If you have any feedback on this episode, please email podcast@eff.org.

Below, you’ll find legal resources - including important cases, books, and briefs discussed in the podcast - and a full transcript of the audio.

Resources

Transcript of Episode 102: The Revolution Will Be Open Source

James: I feel like in an ideal world, everything would be customizable in this way, would be stuff that I can take and do new things with, right? All tech should be customizable by the people who use it. Because as soon as things are proprietary, as soon as you lose the ability to do that, you lose control over the world around you.

Cindy: That's James Vasile, and he's our guest today on How to Fix the Internet. James has been building technology and community for many years, and he's going to share his insights with us. He's going to tell us all about how free software and open source is growing, how it's changing, and how it's getting better.

Danny: We’re going to unpack how more and more people in the tech space are joining the free software movement, but with bigger crowds do come some growing pains.

Cindy: I am Cindy Cohn, EFF's Executive Director.

Danny: And I'm Danny O'Brien, and welcome to How to Fix the Internet, a podcast of the Electronic Frontier Foundation.

Cindy: Today we're going to talk about open source communities and how they have been changing as more people get involved. We're going to talk about the roots of the movement, some of the challenges presented by the growth of the community and then we get to my favorite part, how we fix the internet. 

James Vasile is here with us. He's on our board of directors at the Electronic Frontier Foundation, and he's been working in the open source community for decades. James, you consult through Open Tech Strategies as well as the Decentralized Social Networking Protocol. Welcome to How to Fix the Internet.

James: Hi, thanks for having me.

Cindy: So you're well positioned to know what's going on in the open source world. Can you give us a sense of the health of the community right now?

James: I mean, in some senses, things are going amazingly well. The way in which free and open source software has become an indispensable part of every project, every stack, every sector. Anything touched by technology depends on open source today. And increasingly, everything in the world is touched by technology, which is to say that open source is everywhere. 

Free software is at the heart of every device that people are buying on a daily basis, using on a daily basis, relying on. Whether they can see it or not, it's there and our tech lives are just powered by free software collaboration, very broadly speaking. So, that's pretty cool right? 

And that's amazing. So from that point of view, we're doing really well. 

Unfortunately, there are other aspects in which that growth has created some problems. We have lots and lots of free software projects that are under-resourced, that are not receiving the help that they need to succeed and be sustainable. And at the same time, we have a bunch of crucial infrastructure that depends on that software. 

And that becomes a problem. How do we sustain free software at this scale? How do we make it reliable as it grows and as the movement changes character? Adding people is such a big deal.

Danny: Yeah. 

When it first started, the free and open source movement was powered by idealism and ideology. What are those founding principles that it was built on? And are they still there? Have they been eaten away by the practicalities of supporting the whole of the internet?

James:   Yeah, that's a really good question. I mean, free software as it exists based on these initial notions laid down by Richard Stallman, that's a wing of the community that is very identifiable, but is not growing as fast as the free and open source software movement writ large. And one of the things that happens when you add people to a movement is you don't always get the same people joining for the same reasons as initially the first people started it.

You add people who are joining in for reasons of practicality, you add people who are joining in just because peer production of software works.

This is a really good way to go about linking arms with other producers and making really cool stuff. So there's some people who are just here for the cool stuff. And that's okay too. 

And as we add more and more people, as we convince the next generation of free and open source software developers, I think we're finding that people are getting further and further away from these initial ideals.

Danny: What are those initial ideals? What are the things that if you were going to give an open source  newbie, the potted guide to what open source means, how would you describe those principles?

James: Yeah, I mean, we describe them very broadly as the freedom to run the software, the freedom to modify the software, the freedom to distribute those modifications, the freedom to copy the software, the freedom to learn about the software. And that notion that you can open it up, look inside and learn about it. And that that is a thing you should be able to do by right and that you should be able to then share those things with everyone else in the community. 

And that when you pass on that sharing, you are also passing on those rights is baked into some of the earliest licenses that this community has used. Like the licenses published by the Free Software Foundation, the GNU family of licenses had this ideal, this notion that not only do I have the right to look, the right to copy, the right to modify, the right to distribute, but that when I hand the software to the next person, they should get all of those rights as well. 

So that's what this community was founded upon. But very early on, from those initial free software ideals, we also had the rise of the open source wing of the movement, which was very much trying to make free software palatable to commercial interests and to expand the pool of contributors to this world. There was a weakening of the ideals, where people decided that, in order to appeal to new audiences, we needed licenses, and projects that use those licenses, that don't require the passing on of rights.

So you can hand it to somebody but not give them all of those rights along with it. You could hand somebody a piece of software and while you yourself enjoyed the ability to study it, to copy it, to modify it, maybe you don't give those rights to the next person that you hand the software to. 

And that was the first big shift.

Danny: Right. Why are those initial rights so important, do you think?

James: I mean, do you remember in the early days of the web when-

Danny:  I do.

James: ... you could view source? You know? And a bunch of people have talked about this. This isn't an idea I came up with, but I think Anil Dash talks about this a lot, this notion that we all learned about the internet by going to the top of our browser and clicking view source. And that's how I learned, that's how I learned HTML.

And the notion that you could make the ecosystem legible all the way down to anyone who approaches is extremely powerful for bringing more people the ability to affect their environment. Without that ability to look inside and tinker with it and make it your own, you don't have a lot of entry points into software if you're not already a professional developer. And so this really just expands the universe of people and gives them the ability to take control of the software that they are using and make real substantive changes to it.

So those are the original principles. And then have those principles changed again in the modern era? I mean, I think they are slowly changing. I mean, also all of the conversation we have been having so far has been very much localized to the United States and Europe. 

So in the United States, you have a lot of techno libertarianism in the free software world. 

But then if you look in South America, you will find a lot of communities that are much more explicitly political on the left side of the spectrum and building software as a way to empower communities as opposed to a way to empower individuals. That diversity is increasing as free software moves to more places and moves to new places.

And in order to accept those people into the community, you can't just demand that they shed all of their old identity and values and adopt yours wholesale.

Instead, what happens is as more people join in, they start pulling the community in their direction, which is what every successful movement has ever done. Every time you talk to anyone who has ever been part of a movement that has gained in popularity, you will hear a bunch of people complaining about how their original ideals have been diluted, but it turns out that that shift, that evolution is just part of growth. 

It's actually a good thing. It's a sign that things are working. It's a sign that what you are doing is going to be more acceptable to more people over time. It is how you maintain that growth and sustain it over time. So I'm actually really excited about that diversity.

Cindy: Yeah.

To your point about this growing community, we've seen proposals to embed human rights principles into new standard form open source licenses. It's long been a dream and I think it was some of the original dream to use these licenses as a lever to force a more ethical use of technology. How do you see that working and not working as this movement grows?

James: Yeah. Man, I love all the folks working to try to figure out what is the next step. So there's a bunch of people who want to address labor issues in these licenses. So there's a 996 license that is meant to address harsh labor conditions in China. There's ethical source licenses that are designed to address what we use the software for. 

If you're going to use this software, make sure that you are not promoting war or oil, that you're protecting against climate change. There's a variety of licenses for different areas of concern. And that notion that we can stop allowing anyone to use our software for whatever purpose, but instead put some guide rails around it to say, "Okay, we're going to come together as a community to make software, but we are going to only allow that use to track in certain ways and we're going to exclude what we consider to be unethical use of software."

I love the notion that people are thinking about that. And there's a couple debates going on about that right now. One is, is that stuff open source? And honestly, I don't care. There are people who care a lot about protecting the term open source as a brand. And I guess, that's important to some degree, but the question of whether this particular thing is open source or not open source is not actually a thing I lose a lot of sleep over. 

But the real question is, how would you make any of that work? How would that work as a practical matter? 

Could you, as a group of people, get together and decide that you are going to contribute, pool your effort and make something really valuable, but not have it get used by say the Defence Department or get used by pharmaceutical companies or oil companies or whoever it is that you believe is acting unethically and you don't want to benefit from your labor? 

That starts to get really interesting. That is people getting together to make technology with very particular political aims. And from my point of view, the point of technological collaboration is to uplift communities, to help communities achieve their social goals. It's not just to make cool tech. And if you believe that this technology is supposed to be enabling and empowering, then folks who are trying to figure out how to do it in ways that drive change towards the good, that makes a lot of sense to me.

I love that experimentation. I don't know where it comes out practically.

Every major company, every enterprise company has taken the position that they will not use these ethical source licenses. 

You can go make them, you can go make the tech, but we just won't use it. In the absence of that corporate investment, is there a sustainability model? Is there a way to grow that niche, grow that market in the same way that we've grown the corporate invested software? And I don't know the answer to that question. I think it's probably too early to say.

Cindy: Yeah. I have to say, I find this stuff really fascinating as well. But as a lawyer who spends a lot of time around licenses and copyrights, trying to make them as small as possible because that creates so much space for innovation, the street finds its uses for things if they're not locked down. There's a tension at the bottom of this: trying to use licensing and contractual terms to limit what you can do with something is a double-edged sword, because it can go the other way as well.

And so in general, we try to create as much open space as possible. And this movement towards licenses that are trying to push towards ethical tech is really interesting to me. And it'll be interesting to see if the copyright licensing framework can really hold that. And I have my doubts, but the whole idea of copyleft to begin with was a bit of a strange thing for people who came up in a traditional copyright background. And it's succeeded so far. 

So I also don't like to predict. But as someone who spends a lot of time thinking about how end user license agreements could be made smaller, so they're not so limiting and so careless about people's privacy, and about shrinking other contractual and licensing terms, this approach is really a whole different direction. And I admit to a little bit of uneasiness about it, because unintended consequences are often the nasty tail that comes up and slaps you in the face when you're trying to do something good.

Danny: I think there's always this challenge where you look at copyright as a tool to achieve a certain aim.

And definitely one of the things we've experienced at EFF is that people are always using intellectual property as a way of achieving something in the digital space because it's so powerful, because it's been written and armed with so much power. And I think there's always a risk attached to that, partly because you're trying to bend intellectual property law to achieve different aims. 

But the other thing is once you start depending on it, you have to make it stronger. If it doesn't work, you have this instinct to go, "Oh, we just need to enforce this even more drastically on the internet." And I think that's a really risky temptation to be drawn into.

James: It is. I tend to think that because of some quirks in our history, we over-indexed on licensing activity. We built this thing and we described it as, "Oh, look, the licenses. They're doing all this work." And from my point of view, it turns out it was actually just communities doing this work, and the licenses were convenient and helpful. And there has been a little bit of enforcement activity, but the amount of enforcement activity, relative to the size of the free and open source software world, is tiny.

The licenses are good for setting expectations. They almost never get enforced. And that's useful to keep in mind because it turns out that what's keeping folks inside the tent, what's keeping folks contributing and working together is not really about the license because they know they're never going to enforce the license.

Most projects don't have the resources to do that. Most projects, even if they had the financial resources to do that would not want to spend their time doing that.

Cindy: I think that's such a great insight, that the community is so much more important than whatever legal scheme is around it. It's an insight that I think helps point to how we continue to make things better.

Danny: “How to Fix the Internet” is supported by The Alfred P. Sloan Foundation's Program in Public Understanding of Science, enriching people’s lives through a keener appreciation of our increasingly technological world and portraying the complex humanity of scientists, engineers, and mathematicians. 

We're seeing people all around the world with very different values coming in and beginning to use open source. How is the movement changing? And then what are some of your insights about sustaining it?

James:  Man, we are at this moment of doing many different experiments to try to figure out what does sustainability look like? 

And into that space has stepped a bunch of efforts.

I guess, the most prominent one, the one that gets the most attention is probably Tidelift. And Tidelift is doing, from my point of view, a great job. They have a program where they say to companies, "Look, you're using a bunch of open source software. We will help you figure out what you're using. And then we will give you a number and you will give that to us and we will then support all of these developers who you are relying on, who you don't know and you don't even really know you're relying on these people yet, but you don't want those people to stop doing their work. You don't want them to burn out. You need to support them like you would any other vendor that you want to be reliable."

And that's been pretty good. They've gotten a bunch of companies to agree that yes, the need is there. And Tidelift makes it easy for them to address that need. And a bunch of projects have signed up with Tidelift to be on the receiving end of that. And that's pretty promising for one sector of the movement. 

 But they're not the only one. That's not the only model. There are so many other models. Open Collective is also a really cool model. And that's just a straight community crowdfunding campaign as far as I can tell. And some projects there are bumping along and some are wildly successful where they've got full-time devs who are getting monthly payments and doing real work that the community really needs.

And so from my point of view, there are a million different ways you could do sustainability for a project. And it should run the gamut from individuals on Liberapay, individuals on Kickstarter, groups on Open Collective, corporate funding through Tidelift, individuals on GitHub; GitHub has efforts to try to allow you to pay developers directly. The Linux Foundation has a program as well that's, I think, similar to Tidelift.

And all these programs together form many different options, many different models that a project could adopt. And some projects, of course, are using multiple models. And because every project is different and a project's needs can change over time, we need many different models to address them, but we also need many different forms of the infrastructure necessary to support all these different models.

And I'm excited to see more of that. I'm excited to see more experiments in more different directions.

Cindy: Let me shift gears a little bit and talk about how we get to our better future. What are some of the policy areas where you could see some support for open source coming through?

James:  Well, I mean, I could give you a really nerdy one. You want a really nerdy one?

Danny: Yes.

Cindy: We are deep nerds here. No problem at all.

James:  I mean, we could decide that contributions to open source projects are tax deductible. That the labor you put into your project gives you some tax deduction, which could instantly provide a financial benefit very broadly across all of that unpaid developer space. And that's not crazy. You write code, you make a thing. That thing has value. 

 You donate it to a project and you have just transferred value to them instead of keeping it for yourself and selling it or whatever. And that is a policy shift that I have been trying to get people to pay attention to for, I don't know, 15 years or so. Well, there's definitely been places where it could plug in. This is of a piece with tax rules around making art and then donating that art.

It's the same idea. You make a thing and you donate it. But your tax deduction is limited to the paints and the canvas; the basis is not the value of the thing you created. And it should be. It should be the value that you create. And so from an extremely nerdy tax perspective, I would love to see just basic government recognition that the work we do, even on an unpaid basis, actually has tremendous value. It should be recognized because it is valuable.

Danny: I'm always surprised by how little governments actually support open source and free software projects. And I think it's maybe because the open source community got there first. Traditionally, people look to governments for the provision of public goods. And here we have a system that's actually doing a pretty good job of providing public goods separate from that system. But do you think there's a role for governments not only in financially supporting projects, but maybe also in using free software and open source software themselves?

James: Yeah, absolutely. I mean, so there's a couple places where government could plug in. We see uptake of open source software in government at much lower levels than in the mainstream tech industry.

Danny: Interesting.

James: And I've done a lot of work with a lot of government agencies trying to help them figure that out and get them over the hump. And there are a lot of institutional barriers, there's a lot of cultural barriers, but the thing that could move it is leadership from the top, is rules about procurement that require a certain consideration for open source, approaches to open source that are not just about it has to have a license that is open, it has to actually have practices that are open, that actually make a thing susceptible to the dynamics of open source. 

And we don't have any of that in this country. We don't have much movement on that. California, I think, is doing better than other states, but there's not a lot of work in most states. And at the federal level, you have 18F pushing along this way, but you don't have any requirements. You don't have agencies saying, "Everything we do is going to be open."

And to some degree, that doesn't really make a lot of sense. If software is going to be funded by public money, shouldn't it be a public good? Shouldn't it be a thing that everyone should have access to, that everyone can use, can share, can learn from, can contribute to the general welfare of anybody in the country? I always-

Cindy: Well, I just love this idea, and I love it for some other tactical reasons: of course, we spend a lot of time trying to get access to the software the cops use to surveil people. We've just seen a tool called ShotSpotter revealed to be really poorly built for the thing it's trying to do, because when we get a look at the source code of some of these tools and how they work, we realize that so many things the government buys, especially in the context of surveillance, are really snake oil.

 They're not very good at what they're trying to do. And it gets even harder when you're talking about machine learning systems. So to me, a government rule that requires transparency of the code that the government is relying on to do the things they do, now that could work for some proprietary systems, but it's going to be such a smooth ride for the open source systems because they start that way.

 That could be, I think, a really important step forward around transparency that would have this tremendous benefit to the open source community, but frankly would help us in all other situations in which we find the government is using code and then hiding behind trade secrets or proprietary agreements that they have with vendors to stop the public from having access, even in situations in which somebody's going to go to jail as a result. 

 So I think this is a tremendous idea. It's certainly something that we've pushed a little bit, but reframing this as a transparency goal to me is one of the things that could be really terrific about our fixed future.

James:  Yeah. You should not have to request that software, it should just be downloadable. It should have been reviewed by the public before it gets put into service. And there's no actual good reason why we can't do that.

Danny: So we always try and envisage what this better future should be like on the show. And I'm guessing, James, that you are a person who uses a lot of free software. Do you think that's-

James:  That's right.

Danny: Is that your vision of the future? Is the vision of future that all software is free software or are you more humble in your dreams?

James:  I mean, I feel like in an ideal world, everything would be susceptible to inspection, would be customizable in this way, would be stuff that I can take and do new things with, that I can drag in new directions maybe that only matter to me. All tech should be customizable by the people who use it.  

People should have control over the tech they use. And so yes, I would like as much of it as possible to be susceptible to those dynamics because as soon as things are proprietary, as soon as you lose the ability to do that, you lose control over the world around you. And so much of our world is mediated by technology.

And as soon as you start removing the ability to look under the hood and the ability to tinker with it, the ability to change it, you just rob everyone of their ability to control the world around them. So as much of it as possible, yes, I would never say that you should never have any proprietary software. I would never say we should have rules that outlaw it. But what I would say is that everywhere we can insert it, everywhere that we can move it, we do a great benefit to all the people who have to interact with that software and have to use it over time.

So one of the things that I think is inherent in this embrace of the open source culture is the way that it will help us facilitate a re-decentralized internet, where we have communities that are writing the tools that they need for themselves. And I think open source is critical to this conversation. And I know you think so too. So I'm hoping you can talk to us a little bit more about that.

James: The ways in which people are trying to re-decentralize the web, to go back to a world in which we did not all live inside monolithic silos like the Facebook stack, the Google stack, the Yahoo stack for the people who are still living in that world, all of that activity is based on open source and open standards, because there is not actually any way to build a worldwide tech ecosystem except to use open source and open standards.

So all of that future that people are trying to build where you have a little bit more control over the technology around you, where things are a little bit more modular, where you can choose the provider of your various social services and your communication services, all of that is going to depend very heavily on being open source, on being portable, on being interoperable, on adhering to open standards to enable that interoperability. 

 And so yes, I think without open source, we would not get there, but with open source, we actually have a pretty good chance at building vital ecosystems that can accept lots and lots of people from all walks of life from all around the world. So I'm pretty excited about that.

Cindy: James, thank you so much for joining us today. You've given us a lot to think about, about how we can work together for a better open source future and frankly, how a better open source future is the key to a better future.

So thank you so much for taking the time to talk to us today.

James:  Thanks for having me. This has been a lot of fun.

Danny: That was fascinating and actually changed my mind on a few things. We always talk a little bit in these parts of the show about Lawrence Lessig's four levers of change in the technological world, which are, let's see if I can get them right: law, code, markets, and cultural norms. And I thought that this was going to be very much a discussion of code and law, because obviously open source is code, and the licenses are pivotal in free and open source software. But he really brought out that it's more about the culture and the cultural norms.

Cindy: I really love that insight about how communities become successful in the long term with open source.

Danny: Yeah. And talking about how things have changed: he really hit home with that point about how open source is becoming a global community. Something that was not only very American and European-centric, but actually rooted in a very specific hacker subculture at MIT, is now being used to empower folks in the global south, in different communities in Asia and Africa. And inevitably, because of that change, the actual values of the community as a whole are changing. I have no idea how that's going to play out, but I'm fascinated to see how it does.

Cindy: I really appreciated his concrete thinking about how we get to a fixed place, and specifically the proposals he had for the government: everything from his tiny little nerdy suggestion that we let open source developers get a tax write-off for contributing code to a project, to something as broad as a transparency requirement, including transparency into the code itself, as a way the government could support open source being used more by communities. And, of course, I was excited about how that could help things more broadly.

Danny: It's inevitable that a vibrant open source and free software community is going to help this movement that we are all part of to re-decentralize the internet. And I hadn't quite taken on board that, as James says, there's no other way of doing it. If you are moving away from these centrally controlled platforms, you'll be moving towards protocols, and protocols have to be open so that everyone can interoperate; but more importantly, the software that implements them has to be free and open source software too.

Danny: And thank you out there for joining us on How to Fix the Internet. Please visit eff.org/podcasts, where you can find more episodes, learn about these issues, donate to become a member, and lots more.

Members are the only reason we can do this work, plus you can get cool stuff like an EFF hat or an EFF hoodie or even an EFF camera cover for your laptop. 

Music for the show is from Nat Keefe and Beat Mower. How to Fix the Internet is supported by the Alfred P. Sloan Foundation's Program in Public Understanding of Science and Technology.

I'm Danny O'Brien.

Cindy: And I'm Cindy Cohn.

 

 

Christian Romero

EU Parliament Takes First Step Towards a Fair and Interoperable Market

2 weeks 1 day ago

The EU’s proposal for a Digital Markets Act (DMA) is an attempt to create a fairer and more competitive market for online platforms in the EU. It sets out a standard for very large platforms, which act as gatekeepers between business users and end users. As gatekeepers “have substantial control over the access to, and are entrenched in digital markets,” the DMA sets out a list of dos and don’ts with which platforms will have to comply. There was a lot to like in the initial proposal: we agreed with the DMA’s premise that gatekeepers are international in nature, applauded the “self-executing” nature of many obligations, and supported the use of effective penalties to ensure compliance. Several anti-monopoly provisions showed ambition to end corporate concentration and revitalize competition, such as the ban on mixing data (Art 5(a)), the ban on forced single sign-ons (Art 5(e)), the ban on cross-tying (Art 5(f)), and the ban on lock-ins (Art 6(e)).

End-Users Perspective: EFF Pushes for Interoperability

However, we didn’t like that the DMA proposal missed the mark from the end-user perspective, in particular in its lack of interoperability obligations for platforms. The Commission met us half-way by introducing a real-time data portability mandate into the DMA, but it failed to go the full distance. Would Facebook measurably change its behavior if frustrated users could only benefit from data portability by remaining signed up to Facebook’s terms of service? We doubt it.

The EU Parliament’s Lead Committee Calls for Interconnection and Functional Interaction

In today’s vote, the Internal Market Committee (IMCO) of the EU Parliament overwhelmingly agreed to preserve most of the proposed anti-monopoly rules and agreed on key changes to the Commission’s proposal. We'll analyze them in more detail in the coming weeks, but some elements are striking. One is that the Committee opts for an extremely high threshold before platforms are hit by the rules (a market capitalization of at least €80bn), which means that only a few, mainly U.S.-based, firms would legally be presumed to act as gatekeepers and hold an entrenched and durable position in the internal market. Members of Parliament also agreed on incremental improvements to the ban on mixing data, added clarification on the limits of targeted ads, including substantial protection of minors, and introduced an ambitious dark patterns prohibition in the DMA’s anti-circumvention provision. The Committee also added a prohibition on new acquisitions as a possible punishment for systematic non-compliance with the anti-monopoly rules.

On interoperability, Members of Parliament followed the strong recommendation by EFF and other civil society groups not to settle for the low-hanging fruit of data portability and interoperability in ancillary services. Focusing on the elephant in the room - namely, messaging services and social networks - the DMA’s lead committee proposes key provisions that would allow any provider of “equivalent core platform services” to interconnect with the gatekeeper’s number-independent interpersonal communication services (like messaging apps) or social network services upon request and free of charge. To avoid discrimination, interconnection must be provided under objectively the same conditions and quality that are available to or used by the gatekeeper, its subsidiaries, or its partners. The objective is functional interaction with these services while guaranteeing a high level of security and data protection.

Competitive Compatibility

Another positive feature is the DMA’s anti-circumvention provision, which follows EFF’s suggestions by stating that gatekeepers should abstain from any behavior that discourages interoperability by using “technical protection measures, discriminatory terms of service, subjecting application programming interfaces to copyright or providing misleading information” (Article 6(a)).

Interoperability Caveats

The interoperability obligations for gatekeepers come with caveats and question marks. The implementation of the interconnection rules for messaging services is subject to the requirements of the Electronic Communications Code, while those for social networks depend on yet-to-be-defined specifications and standards. The phrasing, too, leaves room for interpretation. For example, the relationship between the obligation to provide interconnection and how to provide it (“same conditions available or used”) is unclear and could lead to restrictions in practice. On the other hand, the Preamble of the DMA makes the legislative intent crystal clear. It explains that “the lack of interconnection features can affect users’ choice and ability to switch due to the incapacity for end user to reconstruct social connections and networks provided by the gatekeeper.” Providers of alternative core platform services should thus be allowed to interconnect. For number-independent interpersonal communication services, this means that third-party providers can request interconnection for features “such as text, video, voice and picture;” for social networking services, this means interconnection on basic features “such as posts, likes, and comments”.

Next Steps: Vote and Negotiations

It’s now the job of the EU lawmakers to put this objective into clear and enforceable language. The text approved in committee will be submitted for a vote by the full House in an upcoming plenary session, and the Council of the EU, whose position is much less ambitious, must also agree on the text for it to become law. EFF will continue pushing for rules that can end corporate concentration.

Christoph Schmon

Manifest V3: Open Web Politics in Sheep's Clothing

2 weeks 2 days ago

When Google introduced Manifest V3 in 2019, web extension developers were alarmed at how much functionality would be taken away from the features they provide users, especially features like blocking trackers and providing secure connections. This new iteration of Google Chrome’s web extensions interface still has flaws that might be addressed through thoughtful consensus of the web extension developer community. However, two years and counting of discussion and conflict around Manifest V3 have ultimately exposed the problematic power Google holds over how millions of people experience the web. With the more recent announcement of the official transition to Manifest V3 and the deprecation of Manifest V2 in 2023, many privacy-focused web extensions will be limited in how they are able to protect users.

The security and privacy claims that Google has made about web extensions may or may not be addressed with Manifest V3. But the fact remains that the extensions users have relied on for privacy will be heavily stunted if the current proposal moves forward. A move that was presented as user-focused actually takes away users’ power to block unwanted tracking according to their own security and privacy needs.

Large Influence, Little Challenge

First, a short history lesson. In 2015, Mozilla announced its move to adopt the webRequest API, already used by Chrome, in an effort to synchronize the landscape for web extension developers. Fast forward to the Manifest V3 announcement in 2019: Google put Mozilla in the position of choosing whether its Firefox browser would split from or sync with Chrome. Splitting would mean taking a strong stand against Manifest V3, offering an alternative, and supporting web extension developers’ innovation in user privacy controls. Syncing would mean going along with Google’s plan for the sake of not splitting up web extension development any further.

Mozilla has decided to support both Manifest V2’s blocking webRequest API and MV3’s declarativeNetRequest API for now, a move very much shaped by Google’s push to make MV3 the standard. But supporting both APIs is only half the battle. MV3 dictates an ecosystem change that limits MV2 extensions and would likely force MV2-based extensions to conform to MV3 in the near future. Mozilla’s acknowledgement that MV3 doesn’t meet web extension developers’ needs shows that MV3 is not yet ready for prime time. Yet there is pressure on stable, trusted extensions to allocate resources to port themselves to more limited versions built on a less stable API.

Manifest V3 Technical Issues

Even though strides have been made in browser security and privacy, web extensions like Privacy Badger, NoScript, and uBlock Origin have filled the gap by providing the granular control users want. One of the most significant changes outlined in Manifest V3 is the removal of the blocking webRequest API and the flexibility it gave developers to programmatically handle network requests on behalf of the user. Queued to replace it, the declarativeNetRequest API imposes low caps on the number of filter rules an extension can register, limiting how many sites and trackers these extensions can cover. Another mandate is moving from Background Pages, a context that allows web extension developers to properly assess and debug their code, to an alternative, less powerful context called Background Service Workers. This context wasn’t originally built with web extension development in mind, which has led to its own conversation in many forums.
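
For a concrete sense of the difference, here is a minimal, hypothetical sketch of how a tracker-blocking extension looks under each API (the blocklist, the isKnownTracker helper, and the tracker.example domain are illustrative placeholders, not taken from any of the extensions named above). Under MV2, the extension's own code runs on every request; under MV3, the extension can only hand the browser a capped set of declarative rules ahead of time.

    // Hypothetical blocklist; real blockers ship and update far larger lists.
    const TRACKER_HOSTS = new Set(["tracker.example", "ads.example"]);

    function isKnownTracker(url) {
      try {
        return TRACKER_HOSTS.has(new URL(url).hostname);
      } catch (e) {
        return false;
      }
    }

    // Manifest V2 (blocking webRequest): the extension decides, per request
    // and with arbitrary logic, whether to cancel it. Requires the
    // "webRequest" and "webRequestBlocking" permissions.
    chrome.webRequest.onBeforeRequest.addListener(
      (details) => ({ cancel: isKnownTracker(details.url) }),
      { urls: ["<all_urls>"] },
      ["blocking"]
    );

    // Manifest V3 (declarativeNetRequest): the extension registers a capped
    // number of rules up front; the browser evaluates them itself, and the
    // extension's code never sees the individual requests.
    chrome.declarativeNetRequest.updateDynamicRules({
      removeRuleIds: [1],
      addRules: [
        {
          id: 1,
          priority: 1,
          action: { type: "block" },
          condition: {
            urlFilter: "||tracker.example^",
            resourceTypes: ["script", "xmlhttprequest"],
          },
        },
      ],
    });

The shift matters because a declarative rule can only express patterns the API anticipates, while the MV2 callback could consult whatever logic or constantly updated lists an extension's authors chose to write.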

In short, Service Workers were meant for a sleep/wake cycle of web asset-to-user delivery: for example, caching consistent images and information so the user won’t need to use a lot of resources when reconnecting to that website on a limited connection. Web extensions, by contrast, need persistent communication between the extension and the browser, often based on user interaction, like being able to detect and block ad trackers as they load onto the web page in real time. This mismatch has resulted in a significant list of issues that will have to be addressed to cover many valid use cases. These discussions, however, are happening as web extension developers are being asked to port to MV3 in the next year without a stable workflow available, amid pending issues such as no defined service worker context for web extensions, pending WebAssembly support, and a lack of consistent and direct support from the Chrome extensions team itself.
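
To make that lifecycle change concrete, here is a minimal, hypothetical sketch (the manifest fragments, the message name, and the counter are illustrative, not drawn from any real extension). An MV2 background page can keep state in memory for as long as the browser runs; an MV3 background service worker can be shut down whenever it goes idle, so anything an extension wants to keep across events has to be written out explicitly.

    // Manifest V2 background declaration: a persistent page that lives as
    // long as the browser session does.
    //   "background": { "scripts": ["background.js"], "persistent": true }
    //
    // Manifest V3 background declaration: an event-driven service worker the
    // browser may terminate when idle and restart on the next event.
    //   "background": { "service_worker": "background.js" }

    // Under MV3, in-memory state in the background context is unreliable:
    // this counter silently resets whenever the service worker is torn down.
    let blockedCount = 0;

    chrome.runtime.onMessage.addListener((message) => {
      if (message.type === "tracker-blocked") { // hypothetical message name
        blockedCount += 1;
        // Durable state has to be persisted explicitly instead
        // (requires the "storage" permission).
        chrome.storage.local.set({ blockedCount });
      }
    });

This is part of the gap that Mozilla's "Limited Event Pages" proposal, discussed below, tries to close.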

Privacy SandStorm

Since announcing Manifest V3, Google has also put forward several controversial “Privacy Sandbox” proposals for privacy mechanisms in Chrome. The highest-stakes discussions about these proposals are happening in the World Wide Web Consortium, or W3C. While technically “anyone” can listen in on the open meetings, only W3C members can propose formal documentation on specifications and hold leadership positions. Membership has its own overhead of fees and time commitment. This is something a large multinational corporation can easily overcome, but it can be a barrier to user-focused groups. Unless these power dynamics are directly addressed, a participant’s voice gets louder with market share.

Recently, after the many Google forum-based discussions around Manifest V3, a WebExtensions Community Group was formed in the W3C. Community group participation does not require W3C membership, but community groups do not produce standards. Chaired by employees from Google and Apple, this group states that by “specifying the APIs, functionality, and permissions of WebExtensions, we can make it even easier for extension developers to enhance end user experience, while moving them towards APIs that improve performance and prevent abuse.”

But this move for greater democracy would have been more powerful and effective before Google’s unilateral push to impose Manifest V3. This story is disappointingly similar to what occurred with Google’s AMP technology: more democratic discussions and open governance were offered only after AMP had become ubiquitous.

With the planned deprecation of Manifest V2 extensions, the decision has already been made. The rest of the web extensions community is forced to comply, or else deviate and serve a browser extension ecosystem that doesn’t include Chrome. And that’s harder than it may sound: Chromium, the open-source browser project that Chrome is built on, is also the basis for Microsoft Edge, Opera, Vivaldi, and Brave. Vivaldi, Brave, and Opera have made statements on MV3 and their plans to preserve ad-blockers and the privacy-preserving features of MV2, yet the ripple effects are clear when Chrome makes a major change.

What Does A Better MV3 Look Like?

Some very valid concerns and asks have been raised with the W3C Web Extensions Community Group that would help to propel the web extensions realm back to a better place.

  1. Make the declarativeNetRequest API optional in Chrome, as it currently is. The API provides a path for extensions with more static and simple features that don’t need to implement more powerful APIs. Extensions that use the blocking webRequest API, with its added power, can be given extra scrutiny upon submission review. 
  2. In an effort to soothe the technical issues around Background Service Workers, Mozilla proposed in the W3C group an alternative to Service Workers for web extensions, dubbed “Limited Event Pages”, which restores a lot of the standard web page APIs and support lost with Background Service Workers. Safari expressed support, but Chrome has expressed a lack of support, with its reasons not explicitly stated at the time of this post.
  3. Introduce no further regressions in important functionality that MV2 has: for example, being able to inject scripts before page load, which is currently broken in MV3 with amendments still pending.

Even though one may see the web extensions API changes and the privacy mechanism proposals as two separate endeavors, together they speak to the expansive power one company has to impact the ecosystem of the web, both when it does great things and when it makes bad decisions. The question that must be asked is who has the burden of enforcing what is fair: the standards organizations that engage with large company proposals, or the companies themselves? Secondly, who has the most power if one constituency says “no” and another says “yes”? Community partners, advocates, and smaller companies are permitted to say no and not work with companies who frequently enter the room with worrying proposals, but then that company can claim that silence means consensus when it decides to go forward with a plan. Similar dynamics occurred when the W3C grappled with Do Not Track (DNT), where proponents of weaker privacy mechanisms feigned concern over user privacy and choice. So in this case, large companies like Google can make nefarious or widely useful decisions without much incentive to say no to themselves. In the case of MV3, Google gave room and time to discuss issues with the web extensions community. That is the bare minimum standard for making such a big change, so to congratulate a powerful entity for making space for many other voices would only add to the sentiment that this should be the norm in open web politics.

No matter how well-meaning a proposal may be, the reality is that millions of people’s experiences on the internet are often left up to the ethics of a few people in companies and standards organizations.

Alexis Hancock

Police Aerial Surveillance Endangers Our Ability to Protest

2 weeks 2 days ago

The ACLU of Northern California has concluded a year-long Freedom of Information campaign by uncovering massive spying on Black Lives Matter protests from the air. The California Highway Patrol directed aerial surveillance, mostly done by helicopters, over protests in Berkeley, Oakland, Palo Alto, Placerville, Riverside, Sacramento, San Francisco, and San Luis Obispo. The footage, which you can watch online, includes police zooming in on individual protestors, die-ins, and vigils for victims of police violence.

You can sign the ACLU’s petition opposing this surveillance here.

Dragnet aerial surveillance is often unconstitutional. In summer 2021, the Fourth Circuit ruled that Baltimore’s aerial surveillance program, which surveilled large swaths of the city without a warrant, violated the Fourth Amendment right to privacy for city residents. Police planes or helicopters flying overhead can easily track and trace an individual as they go about their day—before, during, and after a protest. If a government helicopter follows a group of people leaving a protest and returning home or going to a house of worship, there are many facts about these people that can be inferred. 

Not to mention, high-tech political spying makes people vulnerable to retribution and reprisals by the government. Despite their constitutional rights, many people would be chilled and deterred from attending a demonstration protesting against police violence if they knew the police were going to film their face, and potentially identify them and keep a record of their First Amendment activity.

The U.S. government has been spying on protest movements for as long as there have been protest movements. The protests for Black Lives in the summer of 2020 were no exception. For over a year, civil rights groups and investigative journalists have been uncovering the diversity of invasive tactics and technologies used by police to surveil protestors and activists exercising their First Amendment rights. Earlier this year, for example, EFF uncovered how the Los Angeles Police Department requested Amazon Ring surveillance doorbell footage of protests in an attempt to find “criminal behavior.” We also discovered that police accessed BID cameras in Union Square to spy on protestors.

Like the surveillance used against water protectors at the Dakota Access Pipeline protests, the Occupy movements across the country, or even the civil rights movement in the mid-twentieth century, it could take years or even decades to uncover all of the surveillance the government mobilized during the summer of 2020. Fortunately, the ACLU of Northern California has already exposed CHP’s aerial surveillance of the protests for Black lives.

We must act now to protect future protestors from the civil liberties infringements the government conjures on a regular basis. Aerial surveillance of protests must stop.

Matthew Guariglia

Digital Rights Updates with EFFector 33.7

2 weeks 5 days ago

Want the latest news on your digital rights? Then you’ve come to the right place! Version 33, issue 7 of EFFector, our monthly-ish newsletter, is out now! Catch up on the latest EFF news, from how Apple is listening and retracting some of its phone-scanning features to how Congress can act on the Facebook leaks, by reading our newsletter or listening to the new audio version below.

LISTEN ON THE INTERNET ARCHIVE

EFFECTOR 33.07 - Victory: Apple will retract some harmful phone-scanning

Make sure you never miss an issue by signing up by email to receive EFFector as soon as it's posted! Since 1990 EFF has published EFFector to help keep readers on the bleeding edge of their digital rights. We know that the intersection of technology, civil liberties, human rights, and the law can be complicated, so EFFector is a great way to stay on top of things. The newsletter is chock full of links to updates, announcements, blog posts, and other stories to help keep readers—and now listeners—up to date on the movement to protect online privacy and free expression. 

Thank you to the supporters around the world who make our work possible! If you're not a member yet, join EFF today to help us fight for a brighter digital future.

Christian Romero