Colombian ISPs Show Steady Commitments to User Privacy But Key Transparency Gaps Remain

2 months 3 weeks ago

Colombia’s top internet and cell phone companies continued to maintain a high level of transparency about their privacy practices, and continued to implement best practices to protect customer data, free expression, and security in 2021. But they faced challenges from the impacts of COVID-19 and pressure to inform users about government spying on mobile phone communications, according to a new report released today by Fundación Karisma, Colombia’s leading digital rights organization.

“¿Dónde están mis datos?” (“Where Is My Data?”) evaluated seven leading internet and cell phone companies: Claro (América Móvil), Movistar (Telefónica), Tigo (Millicom), ETB, DirecTV, Emcali, and Avantel. Karisma also included satellite internet companies Hughesnet and Skynet for their role in connecting rural areas.

Today’s report is Karisma’s seventh annual ¿Dónde Están Mis Datos? for Colombia—an assessment of telecommunication companies’ commitment to transparency and user privacy. As in prior years, Karisma looked at whether companies’ transparency reports provide detailed information about government requests for user data and content blocking, how strong their data protection policies are, and whether they adequately disclose content blocking practices and data breaches.

In these categories, Colombia’s internet and cell phone companies held steady, mostly meeting or exceeding levels achieved in the last few years. Movistar was the overall top performer, with 15 out of a possible 16 points, followed by Tigo with 13 points, and Claro and Avantel, each with 10 points. ETB scored 8 points, DirecTV earned 7 points, Hughesnet and Emcali each earned 5 points, while Skynet earned 3.

In new evaluation categories added to assess companies’ policies regarding net neutrality and government interception of communications, the results were mixed.

The COVID-19 pandemic put significant pressure on internet and telecommunications providers, as their network infrastructures were tested by higher traffic from remote work. Government demands for data to track and contain the virus also tested their commitment to user privacy. What’s more, the Colombian government and the country’s Communications Regulatory Commission decided, through emergency regulations, to prepare the ground in case it became necessary to suspend net neutrality—a key tenet of an open internet.

Under net neutrality, internet service providers treat all data that travels over their networks fairly, without improper discrimination in favor of particular apps, sites, or services. While the suspension of net neutrality did not occur, Karisma for the first time added new categories in ¿Dónde Están Mis Datos? to evaluate companies’ disclosure of their net neutrality practices.

Movistar, Tigo, Avantel and Hughesnet were the standouts in these categories, each earning points for publishing their traffic management practices and publicly committing to protect net neutrality.

Karisma also added new categories to document a highly controversial and constitutionally questionable surveillance practice that has come to light. After analyzing the last few ¿Dónde Están Mis Datos? reports, Karisma has concluded that Colombian authorities are intercepting users’ mobile phone communications, directly accessing communications without making formal requests or involving the telecommunications companies hosting the networks.

Little is known about how this deeply problematic surveillance practice occurs. To provide users with information and shed light on this troubling practice, ¿Dónde Están Mis Datos? will, starting with today’s report, evaluate whether companies are clearly disclosing that direct access occurs.

Main Results

Each company is evaluated in the following categories:

Political commitments: This category looks at whether companies have internal gender equality rules and accessibility policies for users with disabilities, and whether they publish annual transparency reports (or the equivalent) for Colombia. New criteria added this year include whether companies disclosed content blocking requests justified by a national health emergency and whether they have publicly committed to net neutrality.

Movistar continues to lead in the category; it fully reports on all expected criteria and in a disaggregated manner, including blocking events related to states of emergency or other exceptions.

Claro reports on the occurrence of each event, the legal framework in which each order is justified, and the authorities that raise them before the company. But when it comes to providing disaggregated statistics, it does so only for content blocking orders and the four subtypes under which blocking can be justified, not for government requests for user data.

Privacy: This category includes whether companies publish data protection policies with relevant information for users, publicly disclose the legal basis for complying with government requests to turn over data, and notify users about data requests. New criteria include whether companies disclose the possibility that authorities have direct access to their communications networks, the legal basis for that access, and their own role in it.

Movistar stands out for the clarity of the information it provides on direct access, while both Claro and Tigo disclose information on the different legal frameworks that allegedly underpin such communications surveillance. Tigo also deserves recognition for its transparency on direct access, which is reinforced by the global report of its parent company, Millicom. Movistar, Claro, and Tigo each received 2 points in these areas—the other four companies received no points.

Free expression: This category evaluates companies on whether they publish the procedures they have in place to respond to government requests to block content or terminate internet service, and whether they publish guidelines so users know which kinds of practices can lead to blocking.

Claro, Movistar, Tigo, ETB and Avantel report the execution of orders to block websites or URLs. Emcali and Hughesnet report blocking websites or URLs only in the case of circulation of child sexual abuse content. Skynet does not provide information on any of these criteria.

Digital security: In this category, companies are rated on their practices for disclosing data breaches and mitigation measures, and whether they use the secure data transmission protocol (HTTPS) on their websites. 

Movistar, Tigo and Avantel are the only companies that have a protocol and documentation for data breach mitigation actions. Skynet describes in general terms the security measures it deploys, but not the contingency measures it would apply in the event of a security breach.

Karisma’s full report is available in Spanish, and is part of a region-wide initiative that since 2015 has been holding ISPs accountable for their commitments on transparency and user privacy in key Latin American countries.

Karen Gullo

Digital Rights Updates with EFFector 34.2

2 months 3 weeks ago

Want the latest news on your digital rights? Well, you're in luck! Version 34, issue 2 of our EFFector newsletter is out now. Catch up on the latest EFF news by reading our newsletter or listening to the new audio version below. This issue covers stories ranging from our opposition to the new SMART Copyright Act to an explanation of the risks of messaging with Telegram if you are using the app in Russia or Ukraine.

LISTEN ON YOUTUBE

EFFECTOR 34.02 - Corporate Media Wants Copyright Law to Rewrite the Internet

Make sure you never miss an issue by signing up by email to receive EFFector as soon as it's posted! Since 1990 EFF has published EFFector to help keep readers on the bleeding edge of their digital rights. We know that the intersection of technology, civil liberties, human rights, and the law can be complicated, so EFFector is a great way to stay on top of things. The newsletter is chock full of links to updates, announcements, blog posts, and other stories to help keep readers—and now listeners—up to date on the movement to protect online privacy and free expression. 

Thank you to the supporters around the world who make our work possible! If you're not a member yet, join EFF today to help us fight for a brighter digital future.

Christian Romero

Podcast Episode: Securing the Internet of Things

2 months 4 weeks ago

Today almost everything is connected to the internet - from your coffeemaker to your car to your thermostat. But the “Internet of Things” may not be hardwired for security. Window Snyder, computer security expert and author, joins EFF hosts Cindy Cohn and Danny O’Brien as they delve into the scary insecurities lurking in so many of our modern conveniences—and how we can change policies and tech to improve our security and safety.

Window Snyder is the founder and CEO of Thistle Technologies. She’s the former Chief Security Officer of Square, Fastly and Mozilla, and she spent five years at Apple focusing on privacy strategy and features for OS X and iOS. Window is also the co-author of Threat Modeling, a manual for security architecture analysis in software.

Click below to listen to the episode now, or choose your podcast player:

(Audio player embedded via https://player.simplecast.com/8f2a3740-4a96-4194-9394-41d23b8b4b4d. Privacy info: This embed will serve content from simplecast.com.)


You can also find the MP3 of this episode on the Internet Archive.

In this episode, Window explains why malicious hackers might be interested in getting access  to your refrigerator, doorbell, or printer. These basic household electronics can be an entry point for attackers to gain access to other sensitive devices on your network.  Some of these devices may themselves store sensitive data, like a printer or the camera in a kid’s bedroom. Unfortunately, many internet-connected devices in your home aren’t designed to be easily inspected and reviewed for inappropriate access. That means it can be hard for you to know whether they’ve been compromised.

But the answer is not forswearing all connected devices. Window approaches this problem with some optimism for the future. Software companies have learned, after an onslaught of attacks, to  prioritize security. And we can bring the lessons of software security  into the world of hardware devices. 

In this episode, we explain:

  • How it was the hard costs of addressing security vulnerabilities, rather than the sharp stick of regulation, that pushed many tech companies to start prioritizing cybersecurity. 
  • The particular threat of devices that are no longer being updated by the companies that originally deployed them, perhaps because that product is no longer produced, or because the company has folded or been sold.
  • Why we should adapt our best current systems for software security, like our processes for updating browsers and operating systems, for securing newly networked devices, like doorbells and refrigerators.
  • Why committing to a year or two of security updates isn’t good enough when it comes to consumer goods like cars and medical technology. 
  • Why it’s important for hardware creators to build devices so that they will be able to reliably update the software without “bricking” the device.
  • The challenge of covering the cost of security updates when a user only pays once for the device – and how  bundling security updates with new features can entice users to stay updated.

If you have any feedback on this episode, please email podcast@eff.org.

Below, you’ll find legal and technical resources as well as a full transcript of the audio.

Music

Music for How to Fix the Internet was created for us by Reed Mathis and Nat Keefe of BeatMower.

This podcast is licensed Creative Commons Attribution 4.0 International, and includes the following music licensed Creative Commons Attribution 3.0 Unported by their creators:

  • Drops of H2O (The Filtered Water Treatment ) by J.Lang Ft: Airtone.
    http://dig.ccmixter.org/files/djlang59/37792
  • Warm Vacuum Tube by Admiral Bob Ft: starfrosch
    http://dig.ccmixter.org/files/admiralbob77/59533
  • Xena's Kiss / Medea's Kiss by mwic
    http://dig.ccmixter.org/files/mwic/58883
  • reCreation by airtone
    http://dig.ccmixter.org/files/airtone/59721
Resources

Firmware updates

Internet of Things:

Hacking vulnerabilities through printers:

Malware:

Cyber attacks on hospitals:

Privacy Harms through Smart Appliances:

Interoperability:

Right to Repair:

Transcript:

Window: I bought a coffee mug that keeps my coffee at like 133 degrees, which I'm delighted by. But the first thing I did when I took it out of the package is it wanted a firmware update. I was like, "Yes, awesome." I had it for like two and a half weeks and it wanted another firmware update. 

I don't even know if it was a security issue, it could be a functionality issue, maybe they're making my battery performance last longer. I don't know what the updates do. It's completely opaque. But at least there's an opportunity for that to also include security issues being resolved if that's a problem for that specific device. So I think there's some folks that are making space for it, they recognize that security updates are a critical path to developing a resilient device that will support the actual lifespan of that device. I mean, how long do you expect to be able to use a cup?

Cindy: That's Window Snyder, and she'll be joining us today to walk us through her ideas about how we can build a more secure world of connected devices without, hopefully, having to start over from scratch. I'm Cindy Cohn, EFF's executive director.

Danny: And I'm Danny O'Brien, special advisor to the EFF. Welcome to How to Fix the Internet, a podcast of the Electronic Frontier Foundation, where we bring you big ideas, solutions, and hope that we can fix the biggest problems we face online.

Cindy: Window, we are so excited to have you join us on How to Fix the Internet. You are someone who is always working towards a real, concrete, tech world that supports everyone. And I'm so happy to have you here to share your ideas.

Window: Thanks so much, Cindy. I'm really glad to be here.

Cindy: So we now have internet connected computers in so many things. They're in our thermostats, our doorbells, our fridges, our TVs. Now, you've been thinking about security in our devices for a very long time. So what about this keeps you up at night?

Window: One of the things that I've seen over the years is that, as we've built so many different mechanisms into general purpose operating systems like Windows, or Linux that you might find on a server, or OS X on your Mac, we've not seen the same kind of investment in those kinds of security mechanisms in devices. There are good reasons for that. These devices are very often kind of minimal in their functionality. They're trying to do something that's specific to a purpose, and very often they're optimized for performance or for interoperability with different hardware components. And so very often they haven't spent the time to invest in the kinds of security mechanisms that make some of the general purpose OSes or even mobile device OSes more resilient. And so we've kind of got a problem now because we took those devices and then we attached them to the internet, and all that attack surface is now exposed to the world, and without the same sort of investment in making those devices more security resilient, we've got a growing problem.

Cindy: I think some people have seen the images of hackers taking over the cameras in kids' bedrooms and telling kids what to do. I mean, this is, I think, the kind of problem that you have when you've got systems that are really not internet ready or internet protected, that then gets connected to the internet.

Window: Exactly.

Danny: So what are the incentives for someone to hack into these things? I mean, we can talk about like those sort of prank or threatening things, but what are the people breaking into this at such a large scale trying to do with this technology?

Window: Well, very often they're opportunistic and someone who finds a vulnerability in your refrigerator and then uses it to get onto your network, they're not trying to spoil your food by changing the temperature in your refrigerator, they're using your refrigerator as a launch point to see if there are any other interesting devices on your network. And some of the attacks that you can deploy on a local network are different than the attacks that you would otherwise have to deploy from outside of the network.

Window: So what they're leveraging is access, very often. Some of these devices actually have access to all kinds of things. So if it's in a corporate network and that embedded device happens to be a printer, right, that printer is basically a data store where you send all your most important documents to be printed, but they're still stored on that printer. And then the printer is mostly uninspectable by the administration team. So it's a great place to camp out if you're an attacker and set up shop and then use that access to launch attacks against the rest of the infrastructure.

Danny: So it's effectively that they're the weakest point in your home network and it's like the entry way for anything else that they want to hack.

Window: Sometimes it's the weakest point, but it's also that it's like a deep dark corner that's difficult to shine a light into, and so it's a great place to hide.

Cindy: The stakes in this can be really high. One of the things that we've heard is these taking over of hospital networks can end up really harming people. And even things like the ability to turn up the heat or turn down the heat or those kinds of things, they can end up being, not just pranks, but really life threatening for folks.

Window: The problem that we described about the refrigerator in your home is really different if we're talking about a refrigerator at a hospital that's intended to keep blood at a certain temperature, for example, right, or medicine at a certain temperature.

Danny: So you said that general purpose computers, the laptops, and to certain extent the phones that we have today, have had 20 years of security sort of concentration. What caused those companies to kind of shift into a more defensive posture?

What encouraged them to do that? Was it regulation?

Window: That would be amazing if we could use regulation to just fix everything, but no, you can't regulate your way out of this. Basically it was pain, and it was pain directed at the wallet. Microsoft was feeling a lot of pain with malware and with worms. I don't know if you guys remember Slammer, Melissa, and ILOVEYOU. Their customers were feeling a lot of pain around these viruses and saying, "Hey, Microsoft, you need to get your house in order." And so Bill Gates sent out this memo saying we're going to do something about security. That was around the time that I joined Microsoft. And honestly, we had a tremendous amount of work to do. It was an attempt to boil the ocean. And I was very lucky to be in a situation there where I both had experience with it, and also this is what I came to do. And also now I had the support of executives all the way up.

But how do you take a code base that is so rich and has so many features and so much functionality and get to a place where it's got a more modern security posture? So for a general purpose operating system, they needed to reduce attack surface, they needed to get rid of functionality, make it optional so that if there was a vulnerability that was present in one of those components, it didn't impact the entire deployment of... Back then it was hundreds of millions. At this point, it'd be a billion plus devices out there. You want to make sure that you are compartmentalizing access as much as possible. They deployed modern memory mitigation mechanisms that made it difficult to exploit memory corruption issues.

Window: And then they worked at it and it's been 20 years and they're still working at it. There's still problems, but there are not the same kind of problems that you saw in 2002.

Cindy: You said you wished regulation could do something about that. Do you think that is something that's possible? I think oftentimes we worry that the pace of regulation, the pace of legislation is so slow compared to the pace of technological advancement that we will... It can't keep up, and in some ways it shouldn't try to keep up because we want innovation to be racing ahead. I don't know, from where you sit, how do you think about the role of law and regulations in this space?

Window: I think regulations are great for regulating responsibility. Like for example, saying that when a security issue has been identified and it has these kinds of characteristics, let's say it's critical in that it's reachable from the network, and without any special access an attacker's able to achieve arbitrary code execution, that means they can basically do whatever they want with the system at that point, right? Then a security update needs to be made available. That's something they could regulate because it's specific enough and it's about responsibility. This is actually one of the significant differences between the security problems on software systems and hardware devices.

When there's a problem in, let's say, your web browser, right? And you go to do your update, if the update fails, you can try it again pretty easily. You can figure out for yourself how to get back to that known good state. So with a failure rate of like 3%, 4%, 5%, for the most part users can help themselves out. And they are able to get back to work pretty quickly. But if these low-level components need to be updated and they have a failure rate of like 1%, right, and that device doesn't come back up, there's no interface. What's the user going to do at this point? They can't even necessarily initiate it to try it again. That might just be a completely bricked device, completely useless at this point.

But if it's a car, now it has to go back to the dealership. And if it's a phone, it has to come back into the shop. But if it's a satellite, that's just gone forever. If it's an ATM, maybe somebody has to physically come out with a USB and plug it in and then try and update the firmware for that ATM. For physical devices in the world, it gets really different. And then here's the other end of this. For software developers, they get away with saying, "Oh, we'll send you security updates for a year or two," or maybe they don't say at all, and it's just kind of at their whim, because there is no regulatory requirement to ship security updates for any period of time. We're all just at the whim of the folks who produce these technology components that we rely on.

But if it's an MRI in a county hospital and they're not getting security updates, but they're vulnerable to something, they're not going to go buy a new MRI, right? We expect these devices to be in use for a lot longer than even a phone or a general purpose computer, a laptop, a web browser. For sure, those things get updated every 10 minutes, right? Both the difficulty of building a highly reliable update mechanism and also the lifespan of these devices completely change the story. So instead of saying it's sufficient to deliver security updates for a year or two years, you now get to this place where it's just like, "Well, how long do you expect a car to be useful?" Right? I expect to be able to drive my car for 10 years, and then I sell it to somebody else, and then you can drive it for 10 years. And until it's in bits, like that car should still be functional, right?

Danny: “How to Fix the Internet” is supported by The Alfred P. Sloan Foundation’s Program in Public Understanding of Science. Enriching people’s lives through a keener appreciation of our increasingly technological world and portraying the complex humanity of scientists, engineers, and mathematicians.

Cindy: I wanted to circle back as the lawyer in the house about the... What I heard you saying about the role that regulation and law can play in this is really about liability, it's that if you need to build in ongoing security into your tool, and if you don't, you need to be responsible for the harm that comes, and the liability does work a little that way now, and I think one of the problems that we have is that some of the harms that happen are privacy harms or the other kinds of harm that the law is struggling with. But I really like this idea that the way that you have a lever on these companies is to basically make them responsible for the long tail of problems that these things can create, not just the moment of sale.

Window: Absolutely. That's exactly right. But on my end of things, I'm not thinking about liability, because that feels like something that someone like you can probably contribute to that conversation better. But in terms of how do we get there? Well, having an update mechanism that is robust enough that you're able to ship a security update or any sort of update with confidence that that device is going to come back up, right? Because one of the reasons that it's hard to update devices is that if you're worried that the device might not come back up, even if it's like a 1% failure rate, then you don't want to ship updates unless you absolutely have to, because the cost of addressing that issue is potentially more significant than a security issue might be. And they have to make a judgment about that for all these different kinds of ways their device might be used.

But on the updates side of things, as you described, they only make their money when you buy the device, and after that it's out the door. And so to continuously have to pay to support security updates is kind of difficult, but there are other ways to support it without actually being responsible for shipping the security update yourself. If you make the code open source, then the community can potentially support it. If you make the code available to a third party, then that third party may be able to provide security updates for those issues.

But even for the device manufacturer themselves, getting to a place where you have a highly reliable security update mechanism could be used, not just to deliver security updates, but functional updates. And then you could potentially have an ongoing relationship with that party who purchased the device by selling them new functionality once the device is already out the door, like they could sell new features for those existing devices. And Tesla has really embraced that, right? They're doing great with it, that you buy a car and then later you can buy a new functionality for your car. Fantastic.

Cindy: So to get to a world where our devices are really secure, I am hearing three things: a lot more open source, a lot more interoperability, and in general a lot more ability for third parties to repair or update or give us more security than we have now. Is that right?

Window: I think actually the most critical component is going to be leveraging existing security mechanisms that have been built for resilience and incorporating those into these devices, which is actually what I'm building right now. That's what Thistle Technologies is doing, we're trying to help companies get to that place where they've got modern security mechanisms in their devices without having to build all the infrastructure that's required in order to deliver that. 

So the industry is in agreement, for the most part, that you should not implement your own cryptographic libraries, right? That you should leverage an existing cryptographic library that is tested, that was implemented by folks who understand cryptography and more importantly, understand how cryptographic implementations can fail, especially when they're being attacked, right? So this is actually true for security mechanisms way beyond cryptography. And that's why I think that building these security sensitive mechanisms in one place and letting folks pick and choose and incorporate those into their devices makes sense. And I think this is actually how devices are going to get there. And maybe some of those will be open source projects, and maybe some of those will be commercial projects like mine, but I think not having all of us go it alone in all these different places, reinventing the wheel over and over again, is how we get to a place where we've got security sensitive systems that are built and incorporated into all these different kinds of systems that don't have them yet.

Danny: So a lot of what you're describing seems to be like building or slotting in robust software that is built with security in mind. But one thing I hear from security researchers all the time is that security is a process. And is there some way that a small hardware manufacturer, right, someone who just makes light bulbs or just makes radios, if they still exist, what is part of the process there? What do they have to change in their outlook?

Window: So it's the same for any small development team that the most important stuff that you want to do is still true for software and for hardware, and that is to reduce the attack surface. And if there's functionality that's only in use by a small number of folks in your deployment, make it modular so that those folks can have the functionality, but not everybody has to have all of the risk, to move to memory-safe languages, higher-level languages, so that memory management is not managed by the developers, because that reduces the ability for an attacker to take advantage of any of the problems that can result in memory corruption.

Danny: And when you say attack surface here, you're sort of describing the bits of this technology which are vulnerable and are kind of exposed, right? You're just talking about making them less exposed and less likely to damage everything else if they break.

Window: Yeah. So if you think about your body as having some sort of attack surface, like as we're walking around in the world we can get infections through our mucus membranes, like our eyes, our nose, our mouth, and so on. So it reduces our risk if we wear a mask, it reduces, let's say, the risk for a healthcare worker if they're also wearing like a face shield to prevent somebody coughing on them and having it get in through their eyes, etc. So reducing your attack surface means providing a filter or a cover.

The attacker has a harder time coming in through the wall, they're going to come in through the doors. And so if you think of these services where you're listening as doors, then you want to make sure that you have authentication really early on in your protocol, so that there's less opportunity for them to say something that could be interpreted the wrong way by your computer, and now they're computing code on your system, for example. And then that same kind of idea, but applied through all the different components in the system.

Cindy: That's great. I love this idea that you're trying to shrink down 25 years worth of work in operating systems into a little box that somebody can plug into their internet connected thing. I think that's awesome.

Window: It's better than trying to wait 25 years for everyone else to catch up, right?

Cindy: Absolutely. So what are the values we're going to get if we get to this world? What is it going to feel like, what is it going to seem like when we have a world in which we've got more protected devices?

Window: I think, first we'll feel some pain and then we'll feel devices that we're able to have more confidence in, that we might feel more comfortable sharing information that's very personal because we are able to evaluate what they're going to do with that data. Maybe that company that's building this thing has a very clear policy that is easy to understand, doesn't require 10 pages of legal language that's designed to be, let's say, as conservative as possible and reserve every possible right for the company to do whatever they want with your information, right? When folks understand that, then they're more able to use it. One of the things that I'm thinking about constantly about every device I bring into my home is how is this increasing my attack surface? Do I need a separate network to manage these devices in my house? Yes, I do apparently.

Window: But is that reasonable? No, it's not reasonable. People should be able to bring home a device and use it and not worry that the microphone on their television is listening to them or that an attacker could leverage kids' baby camera to capture pictures of the inside of their house, right? People want to feel comfortable and safe when they use these things. And that's just consumers. If it's on the enterprise side, folks want to be able to, let's say, understand the risk for their organization, make reasonable trade offs, deploy their resources in things that build their business, not just things that, let's say, allow the business to function. And security is one of those things that if you have to spend money securing your infrastructure, then you're not spending money creating all the functionality that the infrastructure exists to serve.

Danny: So we have this sort of utopia where we have all these devices and they talk to one another and before I get home, my stove has set the water boiling and it's completely safe and okay. Then we have kind of another vision that I think people have of a secure world requiring this sort of bureaucracy, right? You know everything is locked down, I maybe have to go and sign out something at work, or there is someone who tells me, "I can't install this software." 

In order to feel safe, do we have to head to that second future? Or do we still get all our cool gadgets in your vision of how this plays out?

Window: That's the problem, right? We want to be able to install the software and know that it's safe, that it's not going to create a new vulnerability in your corporate network, right? When you tell folks, "Oh, be careful on clicking links in email," or whatever, it's like, why on earth should that be a problem? If your email is allowing you to click on a link to launch another application or a web browser, then that should be safe. The problem's not with a user randomly clicking on things, they should be able to randomly click on things and have it not compromise their device. The problem is that the web browser, for all the functionality that the web has brought us, is honestly a terrible idea: "I'm going to take this software. I'm going to put all my passwords and credit cards in it. And then I'm going to go visit all these different servers. And I'm going to take code from these servers and execute it on my device locally and hope it doesn't create problems on my system."

That is a really difficult model to secure. So you should be able to go anywhere and install software from wherever. That would be the ideal that if we can get to a place where we have a high degree of compartmentalization, we could install software off the internet, it runs in a sandbox that we have a high degree of confidence is truly a high degree of compartmentalization away from everything else that you care about in the system. You use that functionality, it does something delightful, and you move on with your life without ever having to think about like, "Is this okay?" But right now you have to spend a lot of time thinking about like, "Do I want to let it in my house? What is it going to do?" So the ideal version is you just get to use your stuff and it works.

Danny: That is a vision of a future that I want.

Cindy: Yeah, me too. Me too. And I really love the embrace of like, we should have this cool stuff, we should have this cool functionality, new things shouldn't be so scary, right? It doesn't need to be that way. And we can build a world where we get all the cool stuff and we have this ongoing innovation space without all the risk. And that's the dream. Sometimes I talk to people and they're like, "Just don't do it. Don't buy-

Danny: Get off the internet.

Cindy: Yeah. Get off the internet. Don't buy a smart TV, don't buy this, don't buy that, don't buy that. And I totally understand that impulse, but I think that you're talking about a different world, one where we get all the cool stuff, but we also get our security too.

Window: Yeah. Wouldn't you love to be able to connect with your friends online, share pictures of the family and feel like no one is collecting this to better create a dossier about how to better advertise to you? And then where does this sit and how long does it sit there for, and who's it shared with? And what's it going to mean? Are they identifying who's influential in my network so they can tell me that so and so really enjoyed this new brand of cookware, right? I would love to be able to communicate freely in public or in forums and only worry about humans instead of like all the different ways this data is going to be collected and hashed and rehashed and used to create a profile of me that doesn't necessarily represent who I am today or who I might be 10 years from now.

Cindy: Well, and the nice thing about fixing the security is that that puts that control back into our hands, right? A lot of the things that are consumer surveillance stuff are really side effects of the fact that these systems are not designed to serve us, they're not designed to secure us. And so some of these business models kind of latch onto that and ride along with that. So the good thing about more security is that we not only get security from the bad guys, we also get maybe some more control over some of the companies who are riding along with the insecure world as part of their business models.

Danny: Well, thank you Window for both making us safer in the present day and building this secure and exciting future.

Cindy: Yeah. Thank you so much, Window.

Window: Thanks for having me, guys. It's definitely been a lot of fun.

Cindy: That was just great. And what I really love about Window is that sometimes security researchers are all about, no, don't do this, don't do that, this is dangerous, this is scary. And Window is firmly on the side of, we need to have all our cool devices, even our coffee mug that connects to the internet for some strange reason, but we need them to be secure too.

Danny: Yeah. She's always very modest. And I think she actually has been the calm, collected voice in Apple and Microsoft as they slowly try and fix these things. I think one of the things I took away from that is that it is a different world, right, but we have got some knowledge that we can drop into that new place.

Cindy: Yeah. And I love the idea of kind of taking all that we've learned in the 25 years of trying to make operating systems secure and shrink it down into modules that people can use in their devices in ways that will stop us from having to spend the next 25 years figuring out how to do security in devices.

Danny: Something else that struck me was that the kind of thing that prompted the solution, or the attempt to fix the security problems of big PCs and desktops and later phones, was this Bill Gates memo. And it struck me that the challenge here is that there is not one company, there is not a monopoly, and there is no Bill Gates to write the memo to the Samsungs and the Ankers of this world. So I don't know how you do it, but I have a feeling Window does.

Cindy: Well, I think she's working on it, and I think that's great, but she also pointed to a place that regulation might come in as well, and specifically on the idea of liability, right? Like making sure that the companies that put this out are accountable, not just at the point of sale, but over the long run for the risks that they create. And then hopefully that will help spur a kind of long relationship where, not just the company that sold you the device, but a whole bunch of other companies and people, and hobbyists and other people can come in and help you keep your device secure over the long run. And also, as she pointed out, maybe even give you some additional features. So once again, we see interoperability being key to this better future, even in this other place where we're talking about just simply making our devices not so dangerous.

Danny: Yeah. The solution to not having one big company is not to put one big government or one new big company in charge, but to share the knowledge and communicate that round. And if somebody can't, or doesn't have the resources to take that responsibility, there's someone else who can represent the consumer or the hospital, and step in and fix and repair those problems.

Cindy: I think it was interesting to me how this conversation kind of started off as a hardcore security modeling kind of thing, but we ended up again with adversarial interoperability and right-to-repair being so central to how we fix this problem. And I really appreciate how these things are starting to connect together in a single story.

Danny: Music for How to Fix the Internet was created for us by Reed Mathis and Nat Keefe of BeatMower. This podcast is licensed Creative Commons Attribution 4.0 International, and includes music licensed Creative Commons Attribution 3.0 Unported by their creators. You can find their names and links to their music in our episode notes, or on our website at eff.org/podcast. Please visit eff.org/podcasts, where you'll find more episodes, learn about these issues, donate to become a member of EFF, and find lots more. Members are the only reason we can do this work, plus you can get cool stuff like an EFF hat, an EFF hoodie, or an EFF camera cover for your laptop camera. How to Fix the Internet is supported by the Alfred P. Sloan Foundation's Program in Public Understanding of Science and Technology.

Danny: I'm Danny O'Brien.
Cindy: And I'm Cindy Cohn. Thanks for listening.


rainey Reitman

The Pro Codes Act Is a Wolf in Sheep’s Clothing

2 months 4 weeks ago

When a pipeline bursts, journalists might want to investigate whether the pipeline complied with federal regulations. When a toy is recalled, parents want to know whether its maker followed child safety rules. When a fire breaks out, homeowners and communities want to know whether the building complied with fire safety regulations. Online access to safety regulations helps make that review – and accountability – possible. But a dangerous and deceptive new bill will undermine existing efforts to make that happen: the Pro Codes Act.

The proposal looks simple enough. A huge portion of the regulations we all live by (such as fire safety codes or the national electrical code) are initially written -- by industry experts, government officials, and other volunteers -- under the auspices of standards development organizations (SDOs). Federal, state, or municipal policymakers then review the codes and decide whether the standard is a good broad rule. If so, it is adopted into law “by reference.” In other words, the law cites the code by name but doesn’t copy and paste the entire thing into law (useful when the code is long and detailed). For example, if a regulation requires compliance with a provision in the National Fire Safety Code, it might simply refer to that provision, rather than copying it in directly. But that doesn’t make compliance any less mandatory.

Currently, SDOs have to make such incorporated codes available to the public somehow, in keeping with the basic principle that everyone has a right to know the law that binds them. But the requirements are far out of date. For example, a hard copy of a standard that is incorporated into federal law by reference must be deposited with the National Archives in Washington, DC – not exactly an easily accessible location.

The main provision of the Pro Codes Act pretends to address this problem by requiring that

An original work of authorship otherwise subject to protection under this title that has been adopted or incorporated by reference, in full or in part, into any Federal, State, or municipal law or regulation, shall retain such protection only if the owner of the copyright makes the work available at no monetary cost for viewing by the public in electronic form on a publicly accessible website in a location on the website that is readily accessible to the public.

Sounds good, right? In fact, it sounds obvious: mandatory regulations should be made available online, for free, so the people who are subject to them can more easily know, share, and comment on them.

But this proposal isn’t really intended to facilitate public access. Here’s the trick: the bill is attempting to codify a flawed assumption that a code incorporated by reference into law has any copyright protection to “retain.”

The SDOs that develop codes, and lobby for their adoption into law, love this assumption. That's because they often want to be able to assert a monopoly over those codes – and profit from them – even after they become law.  Paywalls and restrictive licensing on texts that the public needs can be as lucrative as putting up private tollbooths on a major highway.

Unfortunately for them, court after court has recognized that no one can own the law.  The Supreme Court held as much in its very first copyright case, and recently reaffirmed it: if “every citizen is presumed to know the law,” the Court observed, “it needs no argument to show . . . that all should have free access to its contents.”

SDOs insist that mandatory codes are a glaring exception to this longstanding rule if those codes were initially drafted under the supervision of nongovernmental entities. If a private group develops the rules, in other words, that group retains the copyright in them even after the rules become law – including the ability to restrict access to them. Put another way: if a group writes a good rule and asks the government to make it law, it should be able to control access to that rule for decades.

Based on this theory, they are suing a nonprofit, Public.Resource.Org (PRO). PRO’s mission is to improve public access to the law. As part of that mission, it posts safety codes on its website, for free, in a fully accessible format -- including codes adopted into law by reference. The SDOs claim that public service is copyright infringement.

The Pro Codes Act would effectively, and sneakily, bless the SDOs’ copyright theory by suggesting that they can indeed “retain” copyright in codes, even after they are made law, as long as they make the codes available through a “publicly accessible” website.

There are many problems with this approach. First, lobbyists (who often draft laws which are then enacted by legislatures) could make the same claim, placing any number of laws in private hands. Second, the many volunteers who develop those codes neither need nor want a copyright incentive. Third, it’s unconstitutional under the First, Fifth, and Fourteenth Amendments, which guarantee the public’s right to read, share and discuss the laws by which we govern ourselves.

Finally, there is no need for this bill, because it simply mandates that SDOs do what Public.Resource.Org is already doing. The difference is, under the bill, the SDOs would get a statutory monopoly in return, which they can use to extract royalties from anyone who wants to share the law in a different way. Which many will: currently the SDOs that make their codes available to the public online do so via clunky, disorganized websites that are often inaccessible to the print-disabled and subject to onerous contractual terms. Anyone wishing to make the law accessible in a better format would suddenly find themselves either paying rent to the SDOs or in legal jeopardy.

The Pro Codes Act is a deceptive power grab that will help giant industry associations put up tollbooths in front of huge swaths of U.S. law. Congress, and anyone who cares about public access, should refuse to be fooled by this wolf in sheep’s clothing.


Related Cases: Freeing the Law with Public.Resource.Org
Corynne McSherry

An EFF Investigation: Mystery GPS Tracker On A Supporter’s Car

2 months 4 weeks ago

Being able to accurately determine your location anywhere on the planet is a useful technological trick. But when tracking isn’t done by you, but to you—without your knowledge or consent—it’s a violation of your privacy. That’s why at EFF we’ve long fought against dragnet surveillance, mobile device tracking, and warrantless GPS tracking.

Several weeks ago, an EFF supporter brought her car to a mechanic, and found a mysterious device wired into her car under her driver's seat. This supporter, who we’ll call Sarah (not her real name), sent us an email asking if we could determine whether this device was a GPS tracker, and if so, who might have installed it. Confronted with a mystery that could also help us learn more about tracking, our team got to work.

Sarah sent us detailed pictures of the device. It was a black and gray box, about four inches long, with a bundle of 6 wires coming out of one end. On one side, the words “THIS SIDE DOWN” were printed in block letters, next to three serial numbers. 

(Photo: img_8180.jpg)

First, we wanted to confirm that this was, in fact, a GPS device. We started by searching for the device’s FCC ID in the FCC’s database. Each device that has a radio transmitter or receiver is required to have an FCC ID. With that ID you can find manuals, pictures, and even internal schematics on any device the FCC has reviewed.

The FCC search confirmed that the device was a GPS tracker sold under the brand name “Apollo,” and made by a company called M-Labs. According to the manual, the Apollo can track a car’s location, then send the location to a server over a cellular connection. The manual also said the Apollo had a special type of port for communicating with the device, known as a UART serial port. Using this port, we could interact with the device in order to find out more about it.

A quick web search also revealed that a number of people all over the US had found these exact devices in their cars. Some people believed the GPS trackers were being installed by dealerships for repossession, or by rental car companies for fleet tracking.

We told Sarah what we had found, and agreed that with direct access to the GPS tracker, we might be able to find out when it had been installed, and therefore who had installed it. If it was installed at the time she bought the car, or before that time, then it could have been installed by the dealership. If it was installed after that date, then it's possible that Sarah had a stalker who had installed the device. The device was put in the mail and sent to our offices.

A few days later we received the Apollo and got to work. The first step was to pry off the case and get access to the internal components. We wanted to find the UART connectors, which would give us the ability to get diagnostic information out of the Apollo’s cellular modem. 

(Photo: img_8181.jpg)

Typically UART comes in a series of four pins, or at least four holes in a row, but this board didn’t have anything like that. Looking closer, we noticed that there were some very tiny contact pads labeled ART1, RX, and TX. We decided to start there.  

(Photo: img_8190.jpg)

Let's take a step back and discuss why getting access to the UART port was so important. UART stands for Universal Asynchronous Receiver and Transmitter. It is both hardware and a protocol. The UART protocol lets you send and receive data over common copper wires by transmitting bits one at a time, encoded as either high or low voltage (the technical term for this is a "serial bus"). The hardware interface is typically four connections: voltage, ground, receive (RX), and transmit (TX). Put simply, the UART connection lets you interact with the hardware as if you had a keyboard and monitor attached directly to it.

To connect to the UART bus on the GPS device we used a fun little tool called a “Bus Pirate.” The Bus Pirate lets you connect to different hardware interfaces, including UART, and turns them into a USB interface that you can connect to with your computer. 
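
If you want to try something similar yourself, here is a minimal sketch of what that connection looks like in code, written in Python with the pyserial library rather than through the Bus Pirate's own terminal. The device path and the initial baud rate are placeholders, and it assumes the adapter (for example, a Bus Pirate already switched into its transparent UART bridge mode) shows up as a USB serial port; none of these values come from this investigation.

# uart_peek.py - a minimal sketch, not the exact tool we used.
# Assumes: pyserial is installed ("pip install pyserial") and the USB serial
# adapter (e.g. a Bus Pirate already in its transparent UART bridge mode)
# appears as /dev/ttyUSB0. Both values below are placeholders.
import serial

PORT = "/dev/ttyUSB0"   # hypothetical device path; yours may differ
BAUD = 9600             # an initial guess until the real rate is known

with serial.Serial(PORT, BAUD, timeout=2) as link:
    raw = link.read(256)   # grab whatever the target happens to be sending
    # Bytes that don't decode cleanly show up as replacement characters,
    # which is exactly the kind of gibberish a wrong baud rate produces.
    print(raw.decode("utf-8", errors="replace"))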

We connected the Bus Pirate to a computer and gingerly held its wire probes against the contact points labeled RX and TX on the board, and set the Bus Pirate to connect over UART. The Bus Pirate sprang to life and returned the following:

����3�����f��������b���{= ^����H���������x�������?���������������������������������~�����������H�?� �?����a�����>���8�8�����N'?0 ����~ ���� �s�2��G����

It was nothing but gibberish. We decided to try using different baud rates, that is, different rates at which symbols are transmitted in an electronic communication. We finally discovered that a baud rate of 115200 was what we needed to get coherent communication from the device.
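
The same guess-and-check can be automated: open the port at each common baud rate, read a chunk of bytes, and score how much of it looks like printable text. This is a rough sketch under the same assumptions as above (pyserial, a placeholder /dev/ttyUSB0 path), not the exact procedure we followed.

# baud_scan.py - a rough sketch of the guess-and-check described above.
# The device path and the list of candidate rates are assumptions.
import serial

CANDIDATE_RATES = [9600, 19200, 38400, 57600, 115200]

def printable_ratio(data: bytes) -> float:
    # Fraction of bytes that are ordinary printable ASCII or line endings.
    if not data:
        return 0.0
    good = sum(1 for b in data if 32 <= b < 127 or b in (10, 13))
    return good / len(data)

for rate in CANDIDATE_RATES:
    with serial.Serial("/dev/ttyUSB0", rate, timeout=2) as link:
        sample = link.read(512)
    print(f"{rate:>7} baud: {printable_ratio(sample):.0%} printable")
    # On this tracker, only 115200 produced mostly readable output.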

In between lines of more gibberish, we saw some readable text pop up:

�����x���� V�� ����D������~��L����"����������Bƀ����3����>3�(P�K� P�����                                                                                                                                               
@�������� ���0���_����q������� �� �B!�� [�
FW:2.4.3; BIN:1.1.95T; MEID:A100005B46F154
IP:10.90.1.52:3078; LPORT:3078
RI:0,0,0; DTE:0,0,0,0,0,0; DI:0; HB:0; NR:2940,0,0; RS:0,900
���������CI��}��������|>0o��������P D���39@��                                                                                                       
��    �K��G���_������                                                                                                                       
��C�� �����: �����(�����@���

Success! We finally had some data out of the GPS device, but why was it still surrounded by garbage data?  For the answer to this, we have to look again at how UART works. Since UART is just measuring voltage differences on the RX and TX pins, anything that interferes with those voltages will change the input and output. In this case, an EFF team member’s hand was holding the Bus Pirate pin to the transmit connector of the GPS device, and that was creating extra interference, which then got interpreted as data coming from the GPS device, causing the garbled output.

Next, we soldered an RX and TX wire directly onto the GPS board and connected it to the Bus Pirate. After turning on the GPS device again, the output came out clean!

FW:2.4.3; BIN:1.1.95T; MEID:A100005B46F154
IP:10.90.1.52:3078; LPORT:3078
RI:0,0,0; DTE:0,0,0,0,0,0; DI:0; HB:0; NR:2940,0,0;

(Photo: img_8214.jpg)

Now that we had a connection we could communicate with the Apollo’s cellular modem by typing what are called “AT commands.” AT commands are the standard way that humans and machines can interact with a cellular modem. They are called AT commands because they universally start with the letters “AT.” For example: the command “ATD” would let you dial a number, and the command “ATA” would answer an incoming call.
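
Here is a hedged sketch of what that exchange looks like from a computer, again in Python with pyserial and a placeholder port. It sticks to generic Hayes commands ("AT" and "ATI"); the Apollo's vendor-specific commands from its manual are not reproduced here, and the half-second wait is just a guess at how long a modem needs to answer.

# at_probe.py - a sketch of sending standard AT commands to a cellular
# modem over a serial connection. Port, baud rate, and timing are
# assumptions, not values taken from the Apollo manual.
import serial
import time

def send_at(link: serial.Serial, command: str) -> str:
    link.reset_input_buffer()                  # drop any stale bytes
    link.write((command + "\r\n").encode("ascii"))
    time.sleep(0.5)                            # give the modem a moment to answer
    return link.read(link.in_waiting or 64).decode("ascii", errors="replace")

with serial.Serial("/dev/ttyUSB0", 115200, timeout=2) as modem:
    print(send_at(modem, "AT"))    # basic liveness check; a modem usually answers "OK"
    print(send_at(modem, "ATI"))   # standard identification command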

We entered a basic AT command to determine whether things were working, and got nothing back. We tried several more AT commands and still nothing. We had been hoping to at least get an error code back but the cursor sat there, blinking at us like a patient dog, not understanding a word of what we were saying.

After several more hours of cursing, reading docs, banging our heads against the wall, and self medicating, we figured out the problem: we hadn’t connected the ground pin. The UART connection was incomplete. Our carefully typed AT commands were not being sent to the waiting GPS device. Not wanting to get out the soldering iron again, we carefully placed a ground wire from the Bus Pirate onto the ground plane of the GPS device. It worked! We were able to send AT commands and get back data.

FW:2.4.3; BIN:1.1.95T; MEID:A100005B46F154
IP:10.90.1.52:3078; LPORT:3078
RI:0,0,0; DTE:0,0,0,0,0,0; DI:0; HB:0; NR:2940,0,0; RS:0,90000,0
Ready
ATZONRS
ERROR
ATZ
OK
AT+IONRS
ERROR
AT+IONRS?
ERROR
AT+IONVO
ERROR
AT+IONVO?
17569

The manual for the Apollo listed several special built-in AT commands for retrieving data. Under certain conditions, the device would generate a report of its activities, including its location history. This report is also what gets sent to the GPS tracker’s owner. We hoped that the report would also contain information about when and where the Apollo was first activated.

We tried various commands for several hours, trying to get a report out of the GPS device. All of our attempts failed. The documentation for the device was severely lacking. We wrote to M-Labs, the manufacturing company, hoping they would kindly send us a better manual, but never heard back. Eventually we tried a command that would tell us the number of miles on the device’s “virtual odometer.” The answer: 17569, apparently the number of miles the device had traveled.
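
Based on the console log above, the query that returned the odometer value appears to have been AT+IONVO?. A hedged sketch of automating that query might look like the following; the port path is again a hypothetical placeholder, and the response format is an assumption drawn from the single value we saw the device print.

# Hedged sketch: query the Apollo's "virtual odometer" over the serial console.
# The AT+IONVO? command comes from the console log above; the port path is a
# hypothetical placeholder.
import serial

with serial.Serial("/dev/ttyUSB0", baudrate=115200, timeout=2) as modem:
    modem.write(b"AT+IONVO?\r\n")  # device-specific virtual odometer query
    raw = modem.read(64).decode(errors="replace").strip()
    print("Virtual odometer reading:", raw)  # e.g. "17569"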

Now we were getting somewhere. If our supporter Sarah had driven this car fewer than 17,569 miles, we could be certain the device was installed before she had the car, since it would have had to log the extra mileage before she owned it.

We called Sarah, told her the news, and asked how many miles were on the car. Unfortunately, Sarah had driven the car 29,000 miles since buying it, and she had bought it new, with less than 200 miles on it. This would seem to lead to an unsettling conclusion: could our supporter have a stalker?

Our odometer finding wasn’t a sure thing, though. Given the sparse documentation, we couldn’t be sure how accurate the virtual odometer was, or even how it worked. There was also the possibility that the device could have been reset at some point. We were going to need more information for a definitive answer to this mystery.

We tried once again to get the report out. Several more days and several hundred curse words later, we still couldn’t devise a way to get the GPS to print the report that the manual promised. We began to believe the report would contain all the answers we were looking for—perhaps even the answers to life, the universe, and everything. We had tried every command and every trick we could think of. Staring at a dead end, we decided it was time to take the low tech approach.

Sarah said that when she first found the device she had asked her dealership if they ever installed GPS devices in the cars they sold. Dealership employees swore that they had never done such a thing. While we couldn’t know for sure if that was true, it was a mechanic from that dealership who first found the device, so we were inclined to believe them.

Sarah also mentioned that the car had been transferred from another Audi dealership in Orange County, California, when she bought it. Could they be the culprits? We called the original dealership and asked if they were familiar with this hardware or if they installed GPS devices in their customers' cars. The dealership told us that they used to work with a company called Sky Link to install anti-theft devices, but didn’t activate them unless the buyer paid for the service. Could this be the explanation for our rogue GPS device?

We wanted to confirm that this device did indeed belong to Sky Link. Their website looked like it hadn’t been updated in years; it even contained a widget for Adobe Flash, a very old way of creating animation on websites. Still, there was a customer service number.

We called Sky Link and asked if they could confirm whether this was one of their devices. The car’s VIN (Vehicle Identification Number) wasn’t in their database as having ever been activated. We had one last idea. We gave them the serial number of the hardware, and asked if it had ever been a part of their supply chain at all.

Turns out, it had. The GPS device had been bought by the dealership, but it was never activated. At last, we had proof that this was a device installed by the dealership. We called Sarah to share the good news. She was very glad to find out that she didn’t have a stalker.

While we regrettably can’t spend this kind of time investigating every tech mystery that an EFF supporter has, we decided to take on this case because there was a lot we could learn. We learned about UART and the hidden consoles that are built into many hardware devices. And we were reminded that sometimes a low tech approach is better than a high tech one for solving a mystery. Sometimes you can hack your way to solving a problem, and sometimes you can solve it by calling the right people and asking the right questions.

Another question lingers: Is the Sky Link GPS device still sending location data back to a Sky Link server? If so, could it be accessed by an employee, or by someone who activates the device in the future? We were unable to reach Sky Link for confirmation either way, but it's a concerning possibility. Given how many people have been surprised to find this specific GPS tracker in their cars (as mentioned above), it’s possible that many car dealerships are installing these devices without proper customer notification. Those GPS devices could one day enable misuse or abuse. If you have found a device like this in your car, or if you work for Sky Link or a similar company, we would be interested to hear from you.

Cooper Quintin

The Public Has a Right to Know How DHS is Spending Millions to Spy on Immigrants on Social Media 

2 months 4 weeks ago

The Department of Homeland Security (DHS) has offered no transparency about its multi-million dollar program of spying on immigrants’ and other foreign visitors’ social media posts, which it uses as evidence in deportations and visa denials.

We want to change that, so we sued DHS today under the Freedom of Information Act (FOIA) for records about the Visa Lifecycle Vetting Initiative (VLVI). We want to know what VLVI does, how it works, and what information DHS is gathering. 

The lawsuit, filed in the U.S. District Court in San Francisco, seeks records on the current status of the program, including whether the government is monitoring people’s social media profiles and for what purpose, how this impacts visa approvals and denials, and details about a $4.8 million transaction last spring. 

EFF opposes the U.S. government’s monitoring of anyone’s social media accounts and internet activity, and in this case, the government is targeting potential immigrants who risk being unfairly labeled a threat and denied access into the U.S. EFF previously urged DHS to abandon any such vetting program because social media surveillance invades privacy and violates the First Amendment by chilling speech and allowing the government to target and punish people for expressing views it doesn’t like. Any vetting based on speech on social media would be ineffective and discriminatory.

In a FOIA request submitted last fall, EFF asked DHS for all VLVI contracts, notes on how the program works, performance work statements, recent datasets used for input, training materials, operating procedures, privacy impact statements, audits, and reports to legislative bodies. DHS released no records in response to the request, which is unacceptable and illegal. 

Though DHS and Immigration and Customs Enforcement (ICE) have been using the VLVI to spy on people who wish to come to the U.S. for years, the public knows little of the program. Our lawsuit aims to bring the program’s details into the light. 

Malaika Fraley

EFF Files FOIA Lawsuit Against DHS to Shed Light on Vetting Program to Collect and Data Mine Immigrants' Social Media

2 months 4 weeks ago
Little is Known About Trump-Era Visa Vetting Initiative That Continues Under Biden

SAN FRANCISCO—The Electronic Frontier Foundation (EFF) today filed a Freedom of Information Act (FOIA) lawsuit against the Department of Homeland Security (DHS) for records about a multi-million dollar, secretive program that surveils immigrants and other foreign visitors’ speech on social media.

DHS and Immigration and Customs Enforcement (ICE) use the Visa Lifecycle Vetting Initiative (VLVI) to spy on potential immigrants and label them as threats based on their social media and internet presence, possibly leading to deportation and visa denials. VLVI grew out of the "extreme vetting" initiatives of former President Trump.

The public has no knowledge of how VLVI operates, EFF said in a complaint filed today in federal court in San Francisco. This is unacceptable. Social media surveillance violates the First Amendment by chilling speech and allowing the government to target and punish people for expressing views it doesn’t like. EFF previously called on DHS to stop the program for violating the First Amendment and because any vetting based on speech on social media would be ineffective and discriminatory.

“There has been no transparency about this program, and the public has a right to know how tax dollars are being spent to support surveillance of people’s speech,” said EFF Stanton Fellow Mukund Rathi. “While it began under the former administration, DHS seems to have continued paying for the system under the Biden administration, with a nearly $5 million transaction last May.”

DHS entered into at least two contracts with SRA International for VLVI, according to the complaint. They cover spending from August 2018 to August 2023, with a total obligated amount of $42.1 million. A $4.8 million transaction was paid on May 4, 2021.

EFF’s lawsuit seeks records on the current status of the program, including whether the government is monitoring people’s social media profiles and for what purpose, how this impacts visa approvals and denials, and details about the May 4, 2021, transaction. In a November 2021 FOIA request, EFF asked for all contracts, notes on how the program works, performance work statements, recent datasets used for input, training materials, operating procedures, privacy impact statements, audits, and reports to legislative bodies.

“How the initiative was supposed to work was unclear when it was announced five years ago and is even less clear as time has passed,” said Rathi. “We’re filing this lawsuit to enforce our FOIA request and change that.”

For the complaint:
https://www.eff.org/document/eff-v-dhs-vlvi


For more on social media monitoring:
https://www.eff.org/issues/social-media-surveilance

Contact: Mukund Rathi, Stanton Legal Fellow, mukund@eff.org
Joshua Richman

Another Tracker Scanning App Highlights the Need for a Better Way to Protect Victims From Digital Stalking

3 months ago

First came tracking devices like Tiles and AirTags, marketed as clever, button-sized Bluetooth-enabled gizmos that can find your lost backpack. Then, after bad actors started using the devices to stalk or follow people, came scanning apps to help victims find out whether those same gizmos were tracking them.

Such is the twisted, dangerous path of tracking devices in the wrong hands. That device makers are rolling out scanning apps that can potentially help stalking victims is a win for privacy—but with a couple of big asterisks. Tile’s new scanning app shows why.

The company, which has sold over 40 million trackers, is the latest manufacturer to roll out a scanning app. Its Scan & Secure feature allows people to determine if someone is tracking them using a Tile product. This follows Apple’s introduction in December of an Android app called Tracker Detect that allows people using Android devices to find out if someone is tracking them with its popular AirTag device or other devices equipped with sensors compatible with the Apple Find My network.

As we noted when Apple released the Android app, AirTags are an order of magnitude more dangerous than other device trackers because Apple has made every iPhone that doesn’t specifically opt out into part of the Bluetooth tracking network that AirTags use to communicate, meaning AirTags’ reach is much greater than other trackers. Nearly all of us cross paths with Bluetooth-enabled iPhones multiple times a day, even if we don’t know it.

To use the Tile scanner, you need to download the Tile app on your phone and tap Scan & Secure under settings. You then need to walk around, or drive away from where you launched the app, while it runs six scans to detect Tiles and Tile-enabled devices that may be traveling with you. The app then displays the scan results, showing both the known and unknown Tiles and Tile-enabled devices it detected and how many times they showed up in the six scans.

The need to download an app and proactively run a scan to find out whether someone is tracking you is the major weakness of this mitigation. Victims of stalking or partner violence may be completely unaware that a device is tracking them, much less which kind of device is being used. For the scanning apps to be effective, a target of tracking would need to know which device might be used, then find and download the scanning app for that device. A world in which survivors of stalking and abuse need to download a separate app for every type of physical tracker and use it to run dozens of individual scans is better than what we have now, but this is not a solution that scales well.

EFF calls on the makers of physical trackers to agree on and publish an industry standard that would allow developers to incorporate physical tracking detection into both mobile apps and operating systems. We continue to call on these tech giants to work together to address the threats their users face from ubiquitous, cheap, and powerful physical trackers. It’s easy for stalkers to use these devices to harass and threaten their victims. It should be easier for victims to find out if this is happening to them so they can protect themselves.

Karen Gullo

EFF Client Erik Johnson and Proctorio Settle Lawsuit Over Bogus DMCA Claims

3 months ago

EFF client Erik Johnson, a Miami University computer engineering undergraduate, reached a settlement in the lawsuit we brought on his behalf against exam surveillance software maker Proctorio, in a victory for fair use of copyrighted material and people’s right to fight back against bad faith Digital Millennium Copyright Act (DMCA) takedowns used to silence critics.[1]

Johnson, who is also a security researcher, sued Proctorio a year ago after it misused the copyright takedown provisions of the DMCA to remove his posts. Proctorio had gone after a series of tweets Johnson published critiquing Proctorio that linked to short excerpts of its software code and a screenshot of a video illustrating how the software captures images of students’ rooms that are accessible to teachers and potentially Proctorio’s agents. Johnson’s lawsuit asked the court to rule that his posts were protected by the fair use doctrine and hold Proctorio responsible for submitting takedown notices in bad faith.

Under the settlement, Proctorio dropped its copyright claim and other claims it had filed blaming Johnson’s advocacy for damaging its reputation and interfering with its business. In return, Johnson dropped his claims against Proctorio. Johnson’s tweets, which were restored by Twitter through the DMCA’s counter-notice process, will remain up.

Proctoring apps like Proctorio’s are privacy-invasive software that “watches” students using tools like face detection for supposed signs of cheating as they take tests or complete schoolwork. Their use skyrocketed during the pandemic, leading privacy advocates and students to protest this new kind of surveillance. Johnson, whose instructors use Proctorio, was concerned about how much private information the software could collect from students’ computers and used his skills as a security researcher to examine its functions.

Shining a light on how the software worked rankled Proctorio, but it did not infringe on the company’s copyrights. As we said when we brought the lawsuit, using pieces of code to explain your research or support critical commentary is no different from quoting a book in a book review.

DMCA abuse is not new. Bogus copyright complaints have threatened all kinds of creative expression, opinions, and speech on the Internet. Recipients of bogus takedown notices can fight back because DMCA provisions allow users to challenge improper takedowns through counter-notices and sue for damages when infringement notices are submitted in bad faith.

Unfortunately, not everyone has the resources to take on big business interests that use the DMCA to bully and retaliate against critics, as was the case here. All kinds of fair use, non-infringing content is removed, further emboldening rightsholders to abuse the DMCA’s censorship power.

In this case, Johnson fought back, with our help.

Falsely accusing researchers, creators, or a parent who posted a cute video of their child dancing is seriously wrong, especially when the goal is plainly to intimidate and undermine. We hope this case shows that people will fight back if they can and deters other rightsholders from using bogus DMCA claims to harass their critics.

[1] This post has been updated to revise descriptions of Proctorio's software and the claims against Johnson that were dismissed, and incorporates stylistic changes and additional details throughout.

Karen Gullo

The Kids Online Safety Act Is a Heavy-Handed Plan to Force Platforms to Spy on Young People

3 months ago

Putting children under surveillance and limiting their access to information doesn’t make them safer—in fact, research suggests just the opposite. Unfortunately, those tactics are the ones endorsed by the Kids Online Safety Act of 2022 (KOSA), introduced by Sens. Blumenthal and Blackburn. The bill deserves credit for attempting to improve online data privacy for young people, and for attempting to update 1998’s Children's Online Privacy Protection Act (COPPA). But its plan to require surveillance and censorship of anyone sixteen and under would greatly endanger the rights, and safety, of young people online.

KOSA would require the following:

  • A new legal duty for platforms to prevent certain harms: KOSA outlines a wide collection of content that platforms can be sued for if young people encounter it, including “promotion of self-harm, suicide, eating disorders, substance abuse, and other matters that pose a risk to physical and mental health of a minor.”
  • A requirement that platforms provide data to researchers
  • An elaborate age-verification system, likely run by a third-party provider
  • Parental controls, turned on and set to their highest settings, to block or filter a wide array of content

There are numerous concerns with this plan. The parental controls would in effect require a vast number of online platforms to create systems for parents to spy on—and control—the conversations young people are able to have online, and require those systems be turned on by default. It would also likely result in further tracking of all users.

In order to avoid liability for causing the listed harms, nearly every online platform would hide or remove huge swaths of content. And because each of the listed areas of concern involves significant gray areas, platforms will over-censor to steer clear of the new liability risks.

These requirements would be applied far more broadly than the law KOSA hopes to update, COPPA. Whereas COPPA applies to anyone under thirteen, KOSA would apply to anyone under sixteen—an age group that child rights organizations agree has a greater need for privacy and independence than younger teens and kids. And in contrast to COPPA’s age self-verification scheme, KOSA would authorize a federal study of “the most technologically feasible options for developing systems to verify age at the device or operating system level.” Age verification systems are troubling—requiring such systems could hand over significant power, and private data, to third-party identity verification companies like Clear or ID.me. Additionally, such a system would likely lead platforms to set up elaborate age-verification systems for everyone, meaning that all users would have to submit personal data.

Lastly, KOSA’s incredibly broad definition of a covered platform would include any “commercial software application or electronic service that connects to the internet and that is used, or is reasonably likely to be used, by a minor.” That would likely encompass everything from Apple’s iMessage and Signal to web browsers, email applications and VPN software, as well as platforms like Facebook and TikTok—platforms with wildly different user bases and uses. It’s also unclear how deep into the ‘tech stack’ such a requirement would reach – web hosts or domain registries likely aren’t the intended platforms for KOSA, but depending on interpretation, could be subject to its requirements. And, the bill raises concerns about how providers of end-to-end encrypted messaging platforms like iMessage, Signal, and WhatsApp would interpret their duty to monitor minors' communications, with the potential that companies will simply compromise encryption to avoid litigation.

TAKE ACTION

TELL THE SENATE: VOTE NO TO CENSORSHIP AND SURVEILLANCE

Censorship Isn’t the Answer

KOSA would force sites to use filters to block content—filters that we’ve seen, time and time again, fail to properly distinguish “good” speech from “bad” speech. The types of content targeted by KOSA are complex, and often dangerous—but discussing them is not bad by default. It’s very hard to differentiate between minors having discussions about these topics in a way that encourages them, as opposed to a way that discourages them. Under this bill, all discussion and viewing of these topics by minors would have to be blocked.

Research already exists showing bans like these don’t work: when Tumblr banned discussions of anorexia, it discovered that the keywords used in pro-anorexia content were the same ones used to discourage anorexia. Other research has shown that bans like these actually make the content easier to find by forcing people to create new keywords to discuss it (for example, “thinspiration” became “thynsperation”). 

The law also requires platforms to ban the potentially infinite category of “other matters that pose a risk to physical and mental health of a minor.” As we’ve seen in the past, whenever the legality of material is up for interpretation, it is far more likely to be banned outright, leaving huge holes in what information is accessible online. The law would seriously endanger teenagers’ access to information; they may want to explore ideas without their parents’ knowledge or approval. For example, they might have questions about sexual health that they do not feel safe asking their parents about, or they may want to help a friend with an eating disorder or a substance abuse problem. (Research has shown that a large majority of young people have used the internet for health-related research.)

KOSA would allow individual state attorneys general to bring actions against platforms when the state’s residents are “threatened or adversely affected by the engagement of any person in a practice that violates this Act.” This leaves it up to individual state attorneys general to decide what topics pose a risk to the physical and mental health of a minor. A co-author of this bill, Sen. Blackburn of Tennessee, has referred to education about race discrimination as “dangerous for kids.” Many states have agreed, and recently moved to limit public education about the history of race, gender, and sexuality discrimination.

Recently, Texas’ governor directed the state’s Department of Family and Protective Services to investigate gender affirming care as child abuse. KOSA would empower the Texas attorney general to define material that is harmful to children, and the current position of the state would include resources for trans youth. This would allow the state to force online services to remove and block access to that material everywhere—not only Texas. That’s not to mention the frequent conflation by tech platforms of LGBTQ content with dangerous “sexually explicit” material. KOSA could result in loss of access to information that a vast majority of people would agree is not dangerous, but is under political attack. 

Surveillance Isn’t the Answer

Some legitimate concerns are driving KOSA. Data collection is a scourge for every internet user, regardless of age. Invasive tracking of young people by online platforms is particularly pernicious—EFF has long pushed back against remote proctoring, for example. 

But the answer to our lack of privacy isn’t more tracking. Despite the growing ubiquity of technology that makes it easy, surveillance of young people is actually bad for them, even in the healthiest household, and it is not a solution for helping young people navigate the internet. Parents have an interest in deciding what their children can view online, but no one could argue that this interest is the same whether a child is five or fifteen. KOSA would put all children under sixteen in the same group, require that specific types of content be hidden from them, and require that other content be tracked and logged by parental tools. This would force platforms to more closely watch what all users do.

KOSA’s parental controls would give parents, by default, access to monitor and control a young person’s online use. While a tool like Apple’s Screen Time allows parents to restrict access to certain apps, or limit their usage to certain times, platforms would need to do much more under KOSA. They would have to offer parents the ability to modify the results of any algorithmic recommendation system, “including the right to opt-out or down-rank types or categories of recommendations,” effectively deciding for young people what they see – or don’t see – online. It would also give parents the ability to delete their child’s account entirely if they’re unhappy with their use of the platform. 

The bill tackles algorithmic systems by requiring that platforms provide “an overview of how algorithmic recommendation systems are used … to provide information to users of the platform who are minors, including how such systems use personal data belonging to minors.” Transparency about how a platform’s algorithms work, and tools to allow users to open up and create their own feeds, are critical for wider understanding of algorithmic curation, the kind of content it can incentivize, and the consequences it can have. EFF has also supported giving users more control over the content they see online. But KOSA requires that parents be able to opt-out or down-rank types or categories of recommendations, without the consent or knowledge of the user, including teenage users.

Lastly, under KOSA, platforms would be required to prevent patterns of use that indicate addiction, and to offer parents the ability to limit features that “increase, sustain, or extend use of the covered platform by a minor, such as automatic playing of media, rewards for time spent on the platform, and notifications.” While minimizing dark patterns that can trick users into giving up personal information is a laudable goal, determining what features “cause addiction” is highly fraught. If a sixteen-year-old spends three hours a day on Discord working through schoolwork or discussing music with their friends, would that qualify as “addictive” behavior? KOSA would likely cover features as different as Netflix’s auto-playing of episodes and iMessage’s new message notifications. Putting these features together under the heading of “addictive” misunderstands which dark patterns actually harm users, including young people.

EFF has long supported comprehensive data privacy legislation for all users. But the Kids Online Safety Act would not protect the privacy of children or adults. It is a heavy-handed plan to force technology companies to spy on young people and stop them from accessing content that is “not in their best interest,” as defined by the government, and interpreted by tech platforms. 

TAKE ACTION

TELL THE SENATE: VOTE NO TO CENSORSHIP AND SURVEILLANCE 

Jason Kelley

Stop Invasive Remote Proctoring: Pass California’s Student Test Taker Privacy Protection Act

3 months ago


Remote proctoring companies like Proctorio, ProctorU, and ExamSoft collect all manner of private data on students and test takers, from biometric information to citizenship status to video and audio of a user’s surroundings. During the pandemic there has been a 500% increase in the usage of these proctoring tools—in 2020, more than half of higher education institutions used remote proctoring services and another 23% were considering doing so. At this point, remote proctoring services are a given for many students. But despite their rocketing use, the data breaches they have suffered, and the concern from federal lawmakers and California’s Supreme Court, no meaningful data protections have been put into place to protect the privacy of test takers.

California’s Student Test Taker Privacy Protection Act (STTPPA) will correct this. It is put forward by Senator Dr. Richard Pan (S.B. 1172) and sponsored by EFF and Privacy Rights Clearinghouse.

The STTPPA directs proctoring companies to follow reasonable data minimization practices, meaning they cannot collect, use, retain, or disclose test takers’ personal information except as strictly necessary to provide proctoring services. In the event a student’s data is processed beyond what is required to proctor the exam, the student has the opportunity to take the proctoring company to court. This allows the courts to decide, narrowly and thoughtfully, what data actually needs to be collected for proctoring services, how long it needs to be held, and how it may be used and disclosed. It’s a simple bill that should give the people harmed—test takers—the opportunity to protect their data and privacy. A summary of the bill is here.

Proctoring software creates serious problems for students, including:

  • Bias: the software inaccurately flags disabled students as cheating more often, and fails to recognize [1] black and brown faces properly, making it harder for some (already disadvantaged) users to succeed.
  • Surveillance: the software can collect extremely personal and private data, and retention periods are often years-long. Data breaches of this private, and often biometric, information have already occurred.
  • Security vulnerabilities: these tools often force students to hand over administrator rights to their devices, putting them at risk of dangerous security invasions. 

This bill will tackle these problems. Right now, students have little recourse for protecting the private data that is collected from them during these proctored exams, or for limiting the access the software has to their devices. Proctoring companies claim their customers are the schools who pay them, and not the test takers whose private data is processed. Together, those two entities decide on the surveillance features used, the data retention period, and more. Even where there is no school involvement, such as during the bar exam, companies say the students are still not the customer–the test administrator is. On this basis, proctoring companies have made it difficult for test takers to leverage California’s Consumer Privacy Act to protect their data, or request its deletion.

This simple bill will right the imbalance that exists and give students the ability to protect their own private information. It will also require proctoring companies to be more transparent about their data collection and usage, which will help improve security. And it will allow courts to analyze the effectiveness of remote proctoring tools. 

Many are not effective. For example, over one-third of California Bar examinees were flagged as cheating. That’s ridiculous on its face. For a virtual simulation of how these flags work, visit here.

Who Supports the STTPPA

The STTPPA is currently sponsored by EFF and Privacy Rights Clearinghouse. These organizations have fought to protect students’ rights for years. 

California’s state government has already recognized the problem that the STTPPA would address. In late 2020, the California Supreme Court directed the California State Bar to prepare a timetable for destruction of all bar examinees’ personally identifiable information retained by the remote proctoring company (ExamSoft). The court recognized that some data collection was unrelated to the administration of the bar, and that unnecessary retention of sensitive PII data increases the risk of unintentional disclosure. The STTPPA would enshrine this sort of requirement in law. 

Students Have Led the Way—Now California Should Step Up

Students should be able to take tests without fearing for their privacy. But they rarely can opt out of data collection when using remote proctoring, let alone say they don’t want to use an online proctoring platform. Rather, the choice is often between using the invasive software or not taking the exam and getting a zero. One study showed that 97% of students using online proctoring tools were required to do so. That’s why students have been at the forefront of this fight, pushing back against automated proctoring in their schools. 

Dozens of schools across California have already taken meaningful action. UC Berkeley, for example, uses Zoom to proctor exams—no data collection required. UC Santa Barbara has recommended that teachers move away from high stakes testing during remote learning, and has specified that faculty cannot require students to participate in remote proctoring. Many educators have likewise recommended against remote proctoring.

Still, too many schools across the state continue to use remote proctoring at its most invasive settings. STTPPA will alleviate privacy concerns for students like these who are required to use remote proctoring.

[1] This link has been updated to a new source.

Jason Kelley

EFF Director of Investigations Dave Maass Honored With Sunshine Award For Driving Public Disclosure of Government Surveillance Records

3 months ago

When journalists want to know if and how local police or governments are using technology tools to surveil communities, one of the first people they call (or message on Signal) is Dave Maass, EFF Director of Investigations.

Maass’ expertise in the use of police tech like automated license plate readers, drones, and camera networks, and his work pushing governments to be more transparent, have earned him accolades from reporters, researchers, and citizens. Today, Maass will receive the Sunshine Award from the San Diego Chapter of the Society of Professional Journalists (SD-SPJ) in recognition of this important work.

Maass is the driving force behind the EFF-led Atlas of Surveillance project, the largest-ever collection of searchable data on police use of surveillance technologies. Built using crowdsourcing, data journalism, and public records requests in partnership with the Reynolds School of Journalism at the University of Nevada, Reno, the Atlas of Surveillance documents the alarming increase in the use of unchecked high-tech tools that collect biometric records, photos, and videos of people in their communities, locate and track them via their cell phones, and purport to predict where crimes will be committed. San Diego County was one of the early communities examined in work on the Atlas.

"San Diego County has long been a hot spot for law enforcement surveillance tech, from handheld face recognition devices to extreme drone 'first responder' programs,’” Maass said. “Over the last few years, it's been a pleasure to help journalists across numerous regional news outlets probe these new technologies, be it through sharing knowledge or documents EFF has collected or elevating the work these reporters have produced. San Diego journalism has not only helped start a dialogue over surveillance tech, it has also helped shape the conversation in favor of accountability, privacy, and civil rights.”

Maass worked as an investigative journalist for San Diego County alt-weekly newspapers before coming to EFF in 2013. He has previously been honored for his investigative series into deaths at San Diego County jails and the use of pepper spray in its juvenile halls. His reputation for annoying local politicians with public records requests led the San Diego City Council to declare Feb. 13, 2013 as “Dave Maass Day.”

The Atlas of Surveillance began with a pilot project focused on all the counties along the U.S.-Mexico border, but San Diego County ended up being as great and rewarding a challenge as the other 22 counties combined, Maass says. The project included more than 250 technologies in communities along the border, and that laid the groundwork for the full Atlas of Surveillance project, which now contains 9,000 datapoints nationwide.

In addition to leading EFF’s deep-dive investigations into how law enforcement at the local, state, and federal levels uses surveillance technologies, Maass coordinates many of EFF’s large-scale public records campaigns, advocates for state legislation, and compiles The Foilies, our annual review of outrageous FOIA responses. He is also a Scholar in Residence lecturing on cybersecurity, surveillance, and public records laws at the Reynolds School of Journalism at the University of Nevada, Reno.

According to the SD-SPJ, its Sunshine Award is bestowed on a “journalist or community member who went above and beyond to make the government more transparent and hold elected officials accountable.”

San Diego-based journalist Katy Stegall told the SD-SPJ that Maass is “one of the few experts in the country who is able to explain this highly complex topic to both academics, reporters, activists and any layperson who wants to learn more about surveillance.”

“His deep knowledge and understanding of the topic is even further amplified by his passion, willingness and flexibility to meet others where they are and help them fully understand how surveillance impacts communities,” Stegall said.

Malaika Fraley

Podcast Episode: Hack to the Future

3 months ago

Like many young people, Zach Latta went to a school that didn't teach any computer classes. But that didn’t stop him from learning everything he could about them and becoming a programmer at a young age. After moving to San Francisco, Zach founded Hack Club, a nonprofit network of high school coding clubs around the world, to help other students find the education and community that he wished he had as a teenager. 

This week on our podcast, we talk to Zach about the importance of student access to an open internet, why learning to code can increase equity, and how schools’ online security and the law often stand in the way. We’ll also discuss how computer education can help create the next generation of makers and builders that we need to solve some of society’s biggest problems.

Click below to listen to the episode now, or choose your podcast player:

[Embedded audio player from simplecast.com] Privacy info. This embed will serve content from simplecast.com

You can also find the MP3 of this episode on the Internet Archive.

In this episode, you’ll learn about:

  • Why schools block some harmless educational content and coding resources, from common sites like GitHub to “view source” functions on school-issued devices
  • How locked down digital systems in schools stop young people from learning about coding and computers, and create equity issues for students who are already marginalized
  • How coding and “hack” clubs can empower young people, help them learn self-expression, and find community 
  • How pervasive school surveillance undermines trust and limits people’s ability to exercise their rights when they are older
  • How young people’s curiosity for how things work online has helped bring us some of the technology we love most 

Zach Latta is the executive director of Hack Club, a national nonprofit connecting over 14,000 young people to help them create and participate in coding clubs, hackathons, and workshops around the world. He is a Forbes 30 Under 30 recipient and a Thiel Fellow.

Music

Music for How to Fix the Internet was created for us by Reed Mathis and Nat Keefe of BeatMower. 

This podcast is licensed Creative Commons Attribution 4.0 International, and includes the following music licensed Creative Commons Attribution 3.0 Unported by their creators: 

  • Warm Vacuum Tube  by Admiral Bob (c) copyright 2019 Licensed under a Creative Commons Attribution (3.0) license. http://dig.ccmixter.org/files/admiralbob77/59533 Ft: starfrosch

  • Drops of H2O ( The Filtered Water Treatment ) by J.Lang (c) copyright 2012 Licensed under a Creative Commons Attribution (3.0) license. http://dig.ccmixter.org/files/djlang59/37792 Ft: Airtone

  • reCreation by airtone (c) copyright 2019 Licensed under a Creative Commons Attribution (3.0) license. http://dig.ccmixter.org/files/airtone/59721

Resources

Coders’ Rights

Students’ Rights and Surveillance

Censorship Requires Surveillance

Hack Club

Transcript:

Zach: I grew up near Los Angeles, both my parents were social workers, and growing up, I went to public schools that, like most schools in America, didn't teach any computer classes. And for me, as a young person, I just felt like, oh my God, if only I could figure out how these magical devices work, this is where the secrets of the universe lie. But it was always a solitary activity for me.

As a teenager I was very lonely and that culminated for me, I ended up dropping out of high school after my freshman year when I was sixteen and I moved to San Francisco to become a programmer. And after working at a couple startups to get some money and put together some savings, I started Hack Club to try and create the sort of place and community that I so desperately wished I had when I was a teenager.

Cindy:  That's Zach Latta. He's the founder of Hack Club and he's our guest today. Zach is going to tell us about how groups like Hack Club are teaching kids how to hack and otherwise be creators online and how that's one of the ways we can help shift them from being just passive consumers of the digital world to actually charting their own futures.

Danny: We're going to talk to Zach about student rights to an open internet, why learning to code can increase equity and what happens when a school's online security and the law get in the way of all that. 

Cindy: I'm Cindy Cohn, EFF's executive director.

Danny: And I'm Danny O'Brien, special advisor to the EFF. Welcome to How to Fix the Internet, a podcast of the Electronic Frontier Foundation, where we bring you big ideas, solutions, and hope that we can fix the biggest problems we face online.

Cindy: Zach, thanks so much for joining us.

Zach: Well, thank you so much for having me. I'm so honored. Growing up as a teenager, I just loved the EFF and everything the organization stood for. It's a real honor to be with all of you here today.

Cindy: Oh, terrific.

You reached out to EFF for help and that's how we ended up really meeting you. Can you talk to us about what led you to do that?

Zach: We are a network of teenagers all across the world who love building things with computers and run communities to try and bring teenagers together, to make things with technology. And almost every month, we have a major problem where a school district just blocks Hack Club. And there is no worse call to get from a Hack Club, they're saying, "All right, I got 20 people in the room, we're trying to get started, hackclub.com is blocked, github.com is blocked, Stack Overflow is blocked, how can we possibly run our meeting from here?"

Because of this problem, kind of in a bit of frustration. With some Hack Clubbers I wrote a letter to EFF support line, just saying, "Hey, is there any way that EFF might be able to help us with this? Because this is starting to be a thing where it's not like one school has this problem, it's like we have dozens of schools around America where just everything's blocked." 

Danny: Just to be clear here, this isn't just you being blocked, this is major informational resources, right?

Zach: Oh yeah. It's crazy. If you're a young person who wants to learn about computers and wants to learn how to code, you kind of need the internet to do that. And you rely on sites like Google, like GitHub, like Stack Overflow, like GitLab. There's a whole ecosystem that every single professional developer relies on every single day and at a significant percentage of schools around America, all of these resources are just blocked, including hackclub.com.     

We run a club locally here in Vermont, where we test out all of our stuff before we put it online and open source it. And I was talking with a Hack Clubber there where literally every single website besides school classroom is blocked on their school computer. And this Hack Clubber isn't from a family with means so the only computer that they have access to at home is their school issued Chromebook. And as a result, he's six weeks behind everybody else in this club and still hasn't gotten past the initial hurdle of building early websites.

Danny: Obviously what you are doing in Hack Club must be extremely subversive to be blocked in this way. What are you doing? What are these kids learning or failing to learn because they can't actually access to the internet?

Zach: What Hack Club's all about is bringing teenagers together who love computers and want to learn how to make things with computers. Whether it's building a website or making a video game or maybe even starting a local business, most schools don't offer any curriculum or support around that. What Hack Clubbers are doing is in their meetings, they're usually trying to learn HTML, CSS, JavaScript or later on, more advanced languages like Rust or recently there's a big movement around Zig, which is a new popular language. And when you're trying to run the meeting and bring people to github.com, where we have a lot of our resources, when it's blocked, the meeting's dead on arrival. I don't think school administrators are bad people. I come from a long line of teachers and I think that people in schools are doing their best but are probably afraid around things like liability.

Cindy: Their incentive is just to make sure that kids don't ever get to anything that might possibly be problematic. They don't have an incentive to make sure kids can actually learn some of these skills. And so, when you outsource this to people whose business it is to block, they're going to block, as opposed to having a thoughtful process by which you figure out what students really need to learn. And I think you're totally right, when it comes to computer programming and understanding how computers work, everybody learned this by going out onto the internet and finding the places where other people are sharing this. And with something like GitHub, a huge percentage of what actually runs the internet is there. It is a little crazy.

Danny:  When we teach people to read and write, we're not expecting them to be English literature students or novelists. We're giving them the tools to work in society. When we have reading, writing and algorithms or whatever, it's so that they can do what they want to do in society and they can build society with an understanding of the things around them.

Zach: When you realize that the world around us is built by other human beings, you realize you could be one of those human beings. I think that starting 10 years ago, there was this massive shift in education that happened. And for some reason still isn't really part of the dialogue around what good classrooms or good learning environments look like, which is that every single young person on the planet started having these magical devices in their pockets, which had all of human history and knowledge on them. These things are better than the Library of Alexandria. This is it. It doesn't get better. And I think that so much of public education systems around the world are designed to solve access problems. How do we just simply get access to knowledge in front of everybody and to them? And we've built this incredible distribution mechanism. It's really remarkable but I think the new challenge of learning in the 21st century is one of motivation. How do we get people to care? How do we get people to use this? And I think that when we lock down digital systems around young people, we kind of tell them, "Don't poke and prod, don't try things, don't go out of your way to go down a path that we haven't pre-approved for you." And I think that that kind of kills curiosity. It's really counterproductive.

Danny: How much do you think of this is because you're called Hack Club? How much do you think is because people associate that with malicious hacking?

Zach: I think it's maybe a small element. Even though I think Hack Club as an organization is a little subversive in nature. We work directly with teenagers. We operate kind of outside of the system, in some regards. The schools that Hack Clubs are in, usually the school loves Hack Club because it's teenagers at their school who are getting together in a way that means that they're really engaged in their learning.  And we are one of hundreds of groups that run into these problems every single day. And I think this concept of students' rights, particularly on the internet, because it's so new, it's so technical, just for some reason isn't talked about at all, even though it affects young people more than almost any other decision made at their school. 

Cindy: We've been talking a lot about blocking access to information, blocking websites and things like that but I think that you've seen problems with the devices themselves, haven't you?

Zach: Yeah. Increasingly Hack Clubbers, the only device they have access to either in meetings or at home is a school issued Chromebook. And one of the options on school issued Chromebooks is to disable right clicking and clicking inspect element. And you can't learn how to program websites without being able to do that. And this is such a real problem that we've had to build our own debugger to help with that.

Danny: Just to be clear here, when you say right click, this is the thing where you have the second mouse button and then people always stumble on this by accident and wonder what the heck have I done? Because you click and then there's a little menu. It's for coders or for someone who wants to kind of go a bit deeper or of course save an image. It's the sort of metaphor for, okay, let's go a little bit deeper into what we're looking at here. And that doesn’t… kids can't do that on these lockdown computers?

Zach: Yeah. It's a device security setting. You can turn off inspecting element, which means that young people in Hack Club meetings who don't have a school issued computer can view the source code of any website that they go to. And if you don't have the resources at home to have one and you only have the school issued computer, you just can't.

Danny: Everybody in the early web learned how to build the rest of the early web by view source. There was a little pull down menu.

Cindy: Absolutely.

Danny: And if you saw a web page that you liked, you could look at the original HTML and then cut and paste it and mess around with it. And you're saying that kids just have to take what they've given now?

Zach: You just right click and it's not an option.

Danny: Holy cow.

Cindy: And this is a setting. Chromebooks don't come like this necessarily but they give the administrators the ability to lock kids out of this knowledge. It's just, it's hard to imagine the thinking that leads you to decide that we're going to deny kids knowledge in school.

Danny: And now me and Zach and Cindy are just vibrating in the studio. You can't really see this. One of the things so upsetting about this is that the environment, the mouse, the windowing environment that you're using was specifically built to be an educational environment that you could explore and learn in. It's an absolute perversion of the very fundamental way these things were developed and intended to be used. It's like if you gave someone a painting set but no paints.

Cindy: The equity issues here are just tremendous, because we know that one of the great things is that we're now giving kids devices that they can use to help themselves learn. But if they're locked down devices, then the rich kids have another device that they can use but the poor kids end up with just a locked down device, a poor device for poor people really, it sounds like.

Zach: When you look at the marketing for some of these school filter companies, the marketing is like, we prevent student suicide. And it's, we prevent school shootings. What a strange connection to draw. And then the things they do to be able to draw that connection is not only do they filter what websites you're able to go to but they actually scan every single email you send from your school account, every single IM that you send from your school account, they scan the things you do on websites. For this one district that we're in, in Georgia, when you go to a website that's blocked, not only does it say, "This website's blocked, you're not allowed to come here," but it actually says that there's a security issue with your computer and that the way to fix it is to download this intermediate SSL certificate, install it on your computer, set it as a trusted source, and what that means is it allows the school to man-in-the-middle all of your encrypted traffic.

Danny: Right. That's like you're undermining the security of that computer. And I think this is really important to emphasize. One of the things that we always talk about at EFF is you can't do censorship without surveillance. You have to be able to see what people are looking at to block it. And what that means for these sorts of systems is, as you say, just to be clear, what that person is being asked to download there is the master key to all of their communications on that computer, from their financial details to everything.

Cindy: Yes. And it's a problem that predates COVID but it really got supercharged during COVID, this idea that constant surveillance is what you have to tolerate if you're a student. And that's dangerous first because that's dangerous for kids but it's also dangerous because we're creating a generation of kids who think that being watched all the time is okay. This is a fundamental human right. It's central to human dignity. And one of the things that we've learned is you can't deny children completely human dignity and then expect them to suddenly at age 18, be able to exercise their full rights in a way that will work. It doesn't work that way.   

Danny: “How to Fix the Internet” is supported by The Alfred P. Sloan Foundation’s Program in Public Understanding of Science. Enriching people’s lives through a keener appreciation of our increasingly technological world and portraying the complex humanity of scientists, engineers, and mathematicians.

How do the kids themselves feel about this? What do you get from them?

Zach: Well, there's two things I'd love to touch on there. I think an idea that I would love for us all to start talking about is this idea of digital civic duty. And I think it's the same thing where you not only receive being a consumer but you give too. You make your own websites, you modify the internet, you modify technology. You're not just a consumer, you're a creator too. 

In terms of what Hack Clubbers feel about school surveillance: Hack Clubbers feel like they live in an Orwellian surveillance state because you spend your time on networks that are surveilled, where if you try to poke and prod, bad things could happen. And I think definitely Hack Clubbers feel like they can't interact with their school on issues like these because I think a lot of school administrators are not technical enough to understand what's going on. If you flag the wrong thing, you could very easily find yourself facing disciplinary action or something like that. I had this happen when I was a teenager, I installed a VPN on my laptop, which I brought to my school, I was the only person at my school that I knew with a laptop, and I was pulled aside by the vice principal because they were like, "Why are you hacking our school?"

Danny: And I think it undermines trust. First of all, you set the stakes. That the administration is kind of saying, "We don't really trust you so we're going to put this software." But then when kids who are curious and interested in this look into it, they realize that they're also being lied to.

Zach: And I think it really undermines these values that we talk a lot about, like curiosity, like tinkering, like trying things out, figuring out who you want to be through trying to make things. When there's a consequence to these actions, which is the case when you have your web activity filtered and then, in some cases, automatically reported, it means that suddenly there could be a consequence to trying to learn, if you Google the wrong thing. And I think that in a place where we care a lot about independence, and where we care a lot about helping people become their own individual agents of change, the digital environments that we create for young people inside of schools kind of do the opposite. They tell you, "No, you're a consumer, keep watching Netflix, don't mess with your computer."

Cindy: I think this really hearkens back to the beginning of the Electronic Frontier Foundation, where we had law enforcement coming in and doing raids on a lot of kids who were poking around on the early internet, trying to figure out how things work. This is really one of the founding stories of EFF. And the flip side of it is that some of those same kids, or kids who were friends with them, with names like maybe Wozniak, went on to develop some of the tools and the things that we love the most. We're not just doing something unfair to these kids, we may be short-circuiting the next generation of people who are going to bring us a better world.

Cindy: Let's talk about some of Hack Club's successes. And by the way, I just want to give you extra love for reclaiming the term hack for doing something good. This is being a hacker, again, I'm an old school internet person, being a hacker was being somebody who dug in deeply, tried to figure things out. And it might have been not the prettiest thing but actually made things work. And I think that somehow we've lost that sense of the word and it's become synonymous with evil. And so I really appreciate you reclaiming it and lifting it up but that's just my little soapbox moment. But let's hear some success stories. What is Hack Club doing for kids? What are you seeing?

Zach: Oh, it's incredible. I don't know. There's a Hack Clubber who wrote an entire game engine in Rust. I was talking with Hack Clubbers who built a whole clone of Minecraft in Rust, where they made the OpenGL calls themselves. But the thing that I think is really important about Hack Club for people who are in it, beyond just the coding and beyond the socialization, is that for Hack Clubbers, coding isn't just a way to make video games or make a personal website or, I don't know, get a job in the future. It's a form of self expression. It's: this is a place where I can be myself, where I can get what is in my head out on paper. It's a thing that gives you power and agency as a young person that you don't really find in school and don't really find in other activities or around your life. And it's a place where it doesn't really matter where you're from or what you look like or who your parents are, how much money you make. It's a place where people will treat you like a real person, with real respect. And I know for me, when I was a young person, I was really desperate for that.

Danny: As you talked about this, I was thinking about the early days of the web and the internet. And I suddenly thought to myself, it's not just Hack Club, it's not just these places where kids gather. I think a huge chunk of the positive sides of the internet were built by kids or built by teenagers. I think of Aaron Swartz, who was very close to EFF. Cindy and I knew him well.

Zach: Wow. He's a personal hero of mine.

Danny: Right. And when we first met Aaron, he was hacking on the fundamental code that was building the internet with Tim Berners-Lee when, I think, he must have been 14. Lots of people start out at that age. And the other thing is, and I think this goes to the heart of what we try and talk about on this show, is you're modeling the positive future of the internet. And it's driven by people wanting to build that, wanting to build that for themselves. Do the kids you talk to, do they think about this more widely?

Zach: I think coding is the glue. It's the thing that brings everyone together, but the magic is in all the why questions. Because Hack Club's a space where people ask questions like, who am I? Who do I want to be? What is this world I live in? What is my relationship with it? And we have this concept of hacker friends, where, if Hack Club does one thing, we want to try and help young people find other hacker friends, because when you have someone else like you, who shares your interest at a very deep level, it means that when you explore those questions, you can go much deeper and you feel heard in a way that you might not if you don't have friends that are as into some of these things as you.

Cindy: Hack Club's not the only one. There are programs like this all around the world that are really specifically aimed at reaching communities who basically weren't the focus of kind of the first generation of hacker kids. If you'd talk about that too, I'd love it.

Zach: For me growing up and I think this is built into Hack Club's DNA, I definitely felt like a child of the world or a child of the internet because the people I was having so many of these formative conversations with online were from all over the world from all backgrounds. And I think that that is just so incredibly important.

One of my favorite things about Hack Club is since we don't design a playbook that then everybody runs, every Hack Club at every school is different. And as a result, when you go to a Hack Club in Kerala, India, it's dramatically different than a Hack Club in America. It's different because it makes more sense for the local context.

And as a result, when you walk into some of these clubs from around the world, the local leaders have really asked, "What makes the most sense for me? What makes the most sense for other people like me?" And I think that, particularly in areas where people feel marginalized or they don't see a home for themselves or they don't have role models in the same way that some more traditional folks might have, my hope is that with Hack Club, that they can build the home that they've always been looking for. And I think that the internet allows young people to do that in a way that just wasn't possible before.

Danny: This is such a cliche, but this is actually the next generation. This is the future. Do you have any predictions about the future of the internet? What are the things that they're building that are missing in the existing system?

Zach: We face some of the biggest challenges over the next 50 years that humanity's ever had to reckon with. And I think that we need a generation of young people who not only have real hard skills, they can actually do something from a builder perspective around these huge challenges but they also have the right mindset and network to think a little bit differently.

The mindset is that if there's a problem, what does it take to fix it? It's very actionable, rather than feeling like we are born with problems, we will have to deal with these problems, and there's nothing we can do about it. It's a very empowered mindset.

They kind of see technology not as an end in itself but as a tool for every single thing needed to build amazing communities in this new world that we live in.

Cindy: Such a good vision. Let's jump to that future. What does it look like if we get this right? If we unleash all the Hack Clubbers and the other kids who are using technology and envisioning technologies to build a better world than the one we have now. Take us to that world. What does it look like?

Zach: I don't know if this is too big of an idea, but I want to live in a world where there's a hacker president. But in more concrete terms, I want all the innovative, exciting stuff to be open source, because it means that suddenly the people who can engage with it aren't just those who can afford to buy a license from a company, but every single person in the entire world with technical knowledge and internet access. I want to live in a world where the constraints of location, of locale, are smaller than ever before.

Cindy: And what I really love about this vision is that it really is about a movement. One of the things that distresses me about the stories coming out of the early internet is that they all seem to center on one guy who did one thing. And honestly, they're almost all guys, and guys of a certain color. And I think that this way of storytelling, I'm not sure it was actually all that true for those of us who lived through it. But what I hear you doing is really, really doubling down on this idea that it takes a movement, that people move together, and that this kind of single-person narrative is not actually the narrative of good change, and that you're working to build communities and networks so that we get past that.

Zach: And I think that one thing that really helps with that is the open source movement and the open source community, because it means that if you are coding on real projects, the connection between you and the person that wrote that line of code is closer than ever. And you see, wow, projects like Ruby on Rails, they weren't built by one person. They were built by 2,000 people. And you see similar things with big projects like Firefox, big projects like Rust. These are things that take tribes.

Cindy: Yeah. And let's just double down: we've got to get those obstacles out of the way. Kids need to be able to access all the information. They need to be able to right-click on their Chromebooks and view source and all of these things. And those things, which sound like funny little geeky details, are central to how we get from here to there.

Danny: Well, thank you so much, Zach. I look forward not only to seeing what you come up with in the future, but to seeing the next 20 years of what these kids produce.

Zach: Thank you so much for having me here. It is such an honor to be able to join you in this conversation. It is such an honor for Hack Clubbers to have their story and their struggles be a part of the conversation and for the work you're doing. Thank you, thank you, thank you, thank you, thank you. 

Cindy: It goes both ways, Zach. You are raising the next generation of EFF members, probably EFF staffers, and maybe congressional and administrative staffers who have this in their bones. And that's the world. Just understanding how technology works isn't enough. And I think that's really clear from what you're doing: you're building networks, and you're building ethical and responsible frameworks for how to be somebody who understands tech but uses it for good.

Cindy: Zach, thank you so much. This has been so fun talking to you, and so inspiring. I agree, we started off talking about the problems that you're having, and they're tremendously important. And of course that's where EFF's rubber meets the road: trying to get these obstacles out of the way. But we ended in such a happy place in terms of this future. So thank you.

Cindy: I so appreciate hearing about optimistic young people finding, using, and building the tools to make things better, and the role that the internet is playing both in helping them connect and in helping them really build this into a movement that is going to build the tools that are going to make a better internet in the future.

Danny: So much of this talk of the surveillance and the censorship of children is wrapped up in this idea of keeping them safe. And then there's Zach, who's caught in the middle. He goes to the websites of these makers of filter technology, where they're literally claiming to be preventing school shootings. We all want kids to be safe, but I do question whether this is really safety. When Zach talks to the actual Hack Clubbers and they say that they feel like they're in an Orwellian surveillance state, that's not safety.

Cindy: No, no. And with school administrators, it's just clear that they're outgunned here, and we need to really support them in recognizing what kids really need to grow. I also really appreciated him talking about coding as a form of self expression. Obviously that's near and dear to my heart, as EFF started with the idea that code is speech, but also that this self expression isn't just in a constitutional sense. It's about a place where I can be myself, where I can really be the real me. All of that coming out of the idea that people are learning how to code, and that this is a means of self expression, is just heartening.

Danny: You teach kids how to express themselves, whether it's in code or in speaking up, and then they get to be part of that debate. And I think they're an important part of that debate.

Cindy: One of the things that I really loved about the way Zach talked about the community he's building is that it's being built by teenagers for teenagers, maybe for the rest of us too. But it recognizes that this community needs to be designing the technologies and developing the technologies that this community needs. That's where it needs to be centered. It reminds me of the conversation we had with Matt Mitchell, where he talked about communities needing to build the tools that they need, whether they're, like he was, in Harlem, or in a rural area, or somewhere else around the world. This community empowerment works not only across geography but also across the difference between being a kid and being an adult.

Cindy: Well, thanks to our guest, Zach Latta, for sharing his optimism and the work that he's doing. If you'd like to start a Hack Club or donate to help support them, they are at hackclub.com. There are similar organizations all across the country and all across the world. Supporting this work, I think, is tremendously important to build a future internet that we all want to live in.

Danny: Thanks again for joining us. If you have any feedback on this episode, do email us at podcast@eff.org. We read every email and we learn from all of your comments. If you do like what you hear, follow us on your favorite podcast player. We've got lots more episodes in store this season. Music for How to Fix the Internet was created for us by Reed Mathis and Nat Keefe of BeatMower, with additional music and sounds used under Creative Commons licenses from CCMixter. This podcast is licensed Creative Commons Attribution 4.0 International, and includes music licensed Creative Commons Attribution 3.0 Unported by their creators. You can find their names and links to their music in our episode notes, or on our website at eff.org/podcast. How to Fix the Internet is supported by the Alfred P. Sloan Foundation's Program in Public Understanding of Science and Technology. I'm Danny O'Brien.

Cindy: And I'm Cindy Cohn.


Jason Kelley

Ban Online Behavioral Advertising

3 months ago

Tech companies earn staggering profits by targeting ads to us based on our online behavior. This incentivizes all online actors to collect as much of our behavioral information as possible, and then sell it to ad tech companies and the data brokers that service them. This pervasive online behavioral surveillance apparatus turns our lives into open books—every mouse click and screen swipe can be tracked and then disseminated throughout the vast ad tech ecosystem. Sometimes this system is called “online behavioral advertising.”

The time has come for Congress and the states to ban the targeting of ads to us based on our online behavior. This post explains why and how.

The harms of online behavioral advertising

The targeting of ads to us based on our online behavior is a three-part cycle of track, profile, and target.

  1. Track: A person uses technology, and that technology quietly collects information about who they are and what they do. Most critically, trackers gather online behavioral information, like app interactions and browsing history. This information is shared with ad tech companies and data brokers.
  2. Profile: Ad tech companies and data brokers that receive this information try to link it to what they already know about the user in question. These observers draw inferences about their target: what they like, what kind of person they are (including demographics like age and gender), and what they might be interested in buying, attending, or voting for.
  3. Target: Ad tech companies use the profiles they’ve assembled, or obtained from data brokers, to target advertisements. Through websites, apps, TVs, and social media, advertisers use data to show tailored messages to particular people, types of people, or groups.

This business has proven extremely lucrative for the companies that participate in it: Facebook, Google, and a host of smaller competitors turn data and screen real estate into advertiser dollars at staggering scale. Some companies do all three of these things (track, profile, and target); others do only one or two.
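
To make the cycle concrete, here is a deliberately simplified Python sketch of those three steps. Everything in it (the device identifier, the events, the interest rules) is hypothetical and invented for illustration; real ad-tech pipelines involve far more data, more parties, and machine-learned models rather than hand-written rules.

```python
# A toy model of the track -> profile -> target cycle.
# Everything here is hypothetical and grossly simplified.
from collections import defaultdict

# 1. Track: events quietly collected as a person browses and uses apps,
#    keyed to a persistent identifier such as a mobile advertising ID.
tracked_events = [
    {"ad_id": "device-1234", "type": "page_view", "url": "running-shoes-review.example"},
    {"ad_id": "device-1234", "type": "app_open",  "app": "fitness-tracker"},
    {"ad_id": "device-1234", "type": "search",    "query": "marathon training plan"},
]

# 2. Profile: a broker links events that share an identifier and draws inferences.
profiles = defaultdict(lambda: {"interests": set()})
for event in tracked_events:
    profile = profiles[event["ad_id"]]
    text = " ".join(str(value) for value in event.values())
    if any(word in text for word in ("running", "fitness", "marathon")):
        profile["interests"].add("athletics")

# 3. Target: an ad system picks a message based on the assembled profile.
def pick_ad(profile):
    if "athletics" in profile["interests"]:
        return "Ad: premium running shoes"
    return "Ad: generic fallback"

print(pick_ad(profiles["device-1234"]))  # -> "Ad: premium running shoes"
```

The point of the toy is the shape of the pipeline: a persistent identifier ties scattered observations together, and whatever the profile says then decides what the person sees.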

The industry harms users in concrete ways. First, online behavioral targeting is almost single-handedly responsible for the worst privacy problems on the internet today. Behavioral data is the raw fuel that powers targeting, but it isn’t just used for ads. Data gathered for ad tech can be shared with or sold to hedge funds, law enforcement agencies, and military intelligence. Even when sensitive information doesn’t leave a company’s walls, that information can be accessed and exploited by people inside the company for personal ends.

Moreover, online behavioral advertising has warped the development of technology so that our devices spy on us by default. For example, mobile phones come equipped with “advertising IDs,” which were created for the sole purpose of enabling third-party trackers to profile users based on how they use their phones. Ad IDs have become the lynchpin of the data broker economy, and allow brokers and buyers to easily tie data from disparate sources across the online environment to a single user’s profile. Likewise, while third-party cookies were not explicitly designed to be used for ads, the advertising industry’s influence has ensured that they remain in use despite years of widespread consensus about their harms.

Targeted advertising based on online behavior doesn’t just hurt privacy. It also contributes to a range of other harms.

Such targeting supercharges the efforts of fraudulent, exploitive, and misleading advertisers. It allows peddlers of shady products and services to reach exactly the people who, based on their online behavior, the peddlers believe are most likely to be vulnerable to their messaging. Too often, what’s good for an advertiser is actively harmful for their targets.

Many targeting systems start with users’ behavior-based profiles, and then perform algorithmic audience selection, meaning advertisers don’t need to specify who they intend to reach. Systems like Facebook’s can run automatic experiments to identify exactly which kinds of people are most susceptible to a particular message. A 2018 exposé of the “affiliate advertiser” industry described how Facebook’s platform allowed hucksters to make millions by targeting credulous users with deceptive ads for modern-day snake oil. For example, this technology helps subprime lenders target the financially vulnerable and directs investment scams to thousands of seniors. Simply put, tracking amplifies the impact of predatory and exploitative ads.

Furthermore, ad targeting based on online behavior has discriminatory impacts. Sometimes, advertisers can directly target people based on their gender, age, race, religion, and the like. Advertisers can also use behavior-based profiles to target people based on proxies for such demographic characteristics, including “interests,” location, purchase history, credit status, and income. Furthermore, using lookalike audiences, advertisers can specify a set of people they want to reach, then deputize Facebook or Google to find people who, based on their behavior profiles, are “similar” to that initial group. If the advertiser’s list is discriminatory, the “similar” audience will be, too. As a result of all this, targeted advertising systems – even those that only use behavioral data – can enable turnkey housing discrimination and racist voter suppression. Behavioral targeting systems can have discriminatory impacts even when the advertiser does not intend to discriminate.
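
To illustrate the lookalike mechanism, here is a hypothetical sketch of how a system might expand an advertiser's seed list using only behavioral feature vectors. The users, features, and similarity threshold are invented; the point is that whatever pattern the seed list encodes, including a discriminatory one, is reproduced in the audience the system selects, without any demographic field ever being consulted.

```python
# Hypothetical sketch: expanding a seed list into a "lookalike" audience
# using cosine similarity over behavioral feature vectors.
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm if norm else 0.0

# Behavioral profiles: invented features such as
# [news_sites_visited, sports_app_use, late_night_browsing]
users = {
    "u1": [0.9, 0.1, 0.8],
    "u2": [0.8, 0.2, 0.9],
    "u3": [0.1, 0.9, 0.1],
    "u4": [0.85, 0.15, 0.7],
}

def lookalike(seed_ids, all_users, threshold=0.95):
    """Return non-seed users whose behavior closely resembles the seed list."""
    audience = set()
    for uid, features in all_users.items():
        if uid in seed_ids:
            continue
        if any(cosine(features, all_users[s]) >= threshold for s in seed_ids):
            audience.add(uid)
    return audience

# If the advertiser's seed list skews toward one group, the expanded
# audience inherits that skew; no explicit demographic field is needed.
print(lookalike({"u1", "u2"}, users))  # -> {'u4'}
```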

How to draft a ban on online behavioral advertising

Given these severe harms, EFF calls on Congress and the states to ban the targeting of ads to people based on their online behavior. This ban must be narrowly tailored to protect privacy and equity without placing unnecessary burdens on speech and innovation.

Legislators should focus on the personal data most central to targeted ads: our online behavior. This includes the web searches we conduct, the web pages we visit, the mobile apps we use, the digital content we view or create, and the hour we go online. It also includes the ways our online devices document our offline lives, such as our phones using GPS to track our geolocation or fitness trackers monitoring our health.

Legislators should ban any entity that delivers online ads from doing so by targeting users based on their online behavior. This ban would apply to dominant ad tech players like Facebook and Google, among many others. By “ad,” we mean paid content that concerns the economic interests of the speaker and audience. This ban should apply whether or not an ad is targeted to a traditional personal identifier, like a name or email address.

Legislators should also address the role of data brokers in ad tech. This sector profiles users based on their online behavior, and creates lists of users to whom various ads might be delivered. But many data brokers do not subsequently deliver any ads. Rather, they sell these lists to advertisers, or directly to online ad deliverers.

Thus, legislators should ban an ad deliverer from using a list created by another entity, if the deliverer knows it is based on users’ online behavior, or would have known but for reckless disregard of known facts. Likewise, a data broker must be banned from disclosing a list of users that is based on online behavior, if the data broker knows it will be used to deliver ads, or would have known but for reckless disregard of known facts.

We suggest two limited exceptions from these bans, both involving what a user is doing right now, and not over time. First, the ban should exempt “contextual ads” based on content a user is currently interacting with. For example, while a user visits an online nature magazine, they might be shown an ad about hiking boots. Second, the ban should exempt ad delivery based on a user’s rough, real-time location. For example, while a user visits a particular city, they might be sent an ad for a restaurant in that city.
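
A hypothetical sketch of the line the exemptions draw: a contextual picker may consider only what the user is viewing right now (and, under the second exemption, a rough real-time location), while a behavioral picker reaches into a dossier accumulated over time. The function and variable names below are invented for illustration.

```python
# Hypothetical illustration of contextual vs. behavioral ad selection.

AD_INVENTORY = {
    "hiking": "Ad: hiking boots",
    "restaurants": "Ad: local bistro",
    "default": "Ad: generic brand awareness",
}

def contextual_ad(current_page_topics, rough_city=None):
    """Allowed under the proposed exemptions: uses only what the user is
    doing right now, meaning the page they're on and, at most, their
    rough, real-time location."""
    if "nature" in current_page_topics or "hiking" in current_page_topics:
        return AD_INVENTORY["hiking"]
    if rough_city:
        return AD_INVENTORY["restaurants"]
    return AD_INVENTORY["default"]

def behavioral_ad(user_profile):
    """What the ban targets: selection driven by a profile of past behavior
    (browsing history, app use, location trails) accumulated over time."""
    if "outdoors" in user_profile.get("inferred_interests", []):
        return AD_INVENTORY["hiking"]
    return AD_INVENTORY["default"]

print(contextual_ad({"nature", "wildlife"}))                 # context only
print(behavioral_ad({"inferred_interests": ["outdoors"]}))   # tracked profile
```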

Next steps

Of course, banning online behavioral advertising is just one tool in the larger data privacy toolbox. EFF has long supported legislation to require businesses to get consumers’ opt-in consent before processing their data; to bar data processing except as necessary to give consumers what they asked for (often called “data minimization”); and to allow us to access, port, correct, and delete our data. To enforce these laws, we need a private right of action and a ban on compelled arbitration.

EFF looks forward to working with legislators, privacy and equity advocates, and other stakeholders to enact comprehensive consumer data privacy legislation, in Congress and the states. This must include banning targeted ads that are based on our online behavior.

Bennett Cyphers

The New Filter Mandate Bill Is An Unmitigated Disaster

3 months ago

After the defeat of SOPA/PIPA, Big Content has mostly focused on quiet, backroom deals for copyright legislation, like the unconstitutional CASE Act, which was so unpopular it had to be slipped into a must-pass bill in the dead of winter. But now, almost exactly a decade later, they’ve come screaming out of the gate with a proposal almost as bad as SOPA/PIPA. Let’s hope it doesn’t take an Internet Blackout to kill it this time.

The new proposal, cynically titled the SMART Copyright Act, gives the Library of Congress, in “consultation” with other government agencies, the authority to designate “technical measures” that internet services must use to address copyright infringement. In other words, it gives the Copyright Office the power to set the rules for internet technology and services, with precious little opportunity for appeal.

First, a little background: One of the conditions of the safe harbors from copyright liability included in the Digital Millennium Copyright Act—safe harbors that are essential to the survival of all kinds of intermediaries and platforms, from a knitting website to your ISP—is that the provider must accommodate any “standard technical measures” for policing online infringement. Congress sensibly required that any such measures be “developed pursuant to a broad consensus of copyright owners and service providers in an open, fair, voluntary, multi-industry standards process.” As a practical matter, no such broad consensus has ever emerged, nor even a “multi-industry standards process” to develop it. There are many reasons why, and one of the biggest ones is that the number and variety of both service providers and copyright owners has exploded since 1998.  These industries and owners have wildly varying structures, technologies, and interests. What has emerged instead are privately developed and deployed automated filters, usually deployed at the platform level. And some influential copyright owners want to see those technologies become a legal requirement for all levels.

This legislation seeks to accomplish that by setting up a new process that jettisons the whole notion of consensus and fair process. Instead, it puts the Librarian of Congress in charge of designating technical measures—and requires virtually every service provider to comply with them.

This bill cannot be fixed. Let us count the ways:

Tech Mandates Will Inevitably Stifle Lawful Expression

For decades, Big Tech has struggled to appease Big Content by implementing a technical measure they love: filters. The most well-known example, YouTube’s Content ID system, works by having copyright holders upload their content into a database maintained by YouTube. New uploads are compared to what’s in the database, and when the algorithm detects a match, the system applies the default rule chosen by the copyright holder, such as taking the upload down or monetizing it (the benefits of which flow to the copyright holder). They can also, after being informed of a match, send a DMCA notice, putting the creator in danger of losing their account.
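
Content ID's internals are proprietary, and real systems rely on fuzzy perceptual fingerprints of audio and video rather than exact hashes. Still, the basic flow described above (a reference database, a match step, and an automatic default rule applied to the uploader) can be sketched roughly as follows; everything in this toy example is hypothetical and simplified.

```python
# A toy sketch of a Content ID-style filter. Real systems use fuzzy
# perceptual fingerprints; this uses exact hashes purely for illustration.
import hashlib

def fingerprint(media_bytes: bytes) -> str:
    return hashlib.sha256(media_bytes).hexdigest()

# Reference database: rightsholders register content and a default policy.
reference_db = {
    fingerprint(b"hit-song-audio"): {"owner": "BigLabel", "policy": "monetize"},
    fingerprint(b"blockbuster-clip"): {"owner": "BigStudio", "policy": "block"},
}

def moderate_upload(upload_bytes: bytes) -> str:
    """On a match, apply the rightsholder's default rule automatically;
    no human looks at context, so fair uses get swept up too."""
    claim = reference_db.get(fingerprint(upload_bytes))
    if claim is None:
        return "published"
    if claim["policy"] == "block":
        return f"blocked (claimed by {claim['owner']})"
    return f"published, ad revenue redirected to {claim['owner']}"

print(moderate_upload(b"hit-song-audio"))  # matches -> monetized for claimant
print(moderate_upload(b"original-vlog"))   # no match -> published
```

Nothing in this flow considers context, which is exactly why lawful uses get swept up, as the examples that follow show.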

Despite more than a decade of tinkering, and more than $100 million in sunk costs, the system fails regularly. In 2015, for example, Sebastien Tomczak uploaded a ten-hour video of white noise. A few years later, as a result of YouTube’s Content ID system, a series of copyright claims were made against Tomczak’s video. Five different claims were filed on sound that Tomczak created himself. Although the claimants didn’t force Tomczak’s video to be taken down, they all opted to monetize it. In other words, ads were put on the video without Tomczak’s consent, and the ten-hour video would then generate revenue for those claiming copyright on the static. In 2020, CBS found that its Comic-Con panel had been blocked. YouTube creators report avoiding any use of music on their videos, no matter how clearly lawful, for fear of copyright flags.

Things are no better on Facebook. For example, because filters cannot tell the difference between two different performances of the same public domain work, a copyright holder’s claim to a particular version of a work can block many other performances. As a result, as one headline put it, “Copyright bots and classical musicians are fighting online. The bots are winning.”

Third-party tools can be even more flawed. For example, a “content protection service” called Topple Track sent a slew of abusive takedown notices to have sites wrongly removed from Google search results. Topple Track boasted that it was “one of the leading Google Trusted Copyright Program members.” In practice, Topple Track algorithms were so out of control that it sent improper notices targeting an EFF case page, the authorized music stores of both Beyonce and Bruno Mars, and a New Yorker article about patriotic songs. Topple Track even sent an improper notice targeting an article by a member of the European Parliament that was about improper automated copyright notices.

The core problem is this: distinguishing lawful from unlawful uses usually requires context. For example, the “amount and substantiality” factor in fair use analysis depends on the purpose of the use. A lawful use may be just a few seconds, as in some music criticism, or it may be the whole piece, as in a music parody. Humans can flag these differences; automated systems cannot.

Tech Mandates Will Stifle Competition

Any requirement to implement filtering or another technical measure would distort the market for internet services by privileging those service providers with sufficient resources to develop and/or implement costly filtering systems, reduce investment in new services, and impair incentives to innovate.

In fact, the largest tech companies will likely have already implemented mandated technical measures, or something close to them, so the burden of this mandate will fall mainly on small and medium-sized services.  If the price of hosting or transmitting content is building and maintaining a copyright filter, investors will find better ways to spend their money, and the current tech giants will stay comfortably entrenched.

Tech Mandates Put Your Security and Privacy at Risk

Virtually any tech mandate will raise security and privacy concerns. For example, when DNS filtering was proposed a decade ago as part of SOPA/PIPA, security researchers raised the alarm, explaining the costs would far outweigh the benefits. And as 83 prominent internet inventors and engineers explained in connection with the site-blocking and other measures proposed in those ill-fated bills, any measures that interfere with internet infrastructure will inevitably cause network errors and security problems. This is true in China, Iran, and other countries that censor the network today; it will be just as true of American censorship. It is also true regardless of whether censorship is implemented via the DNS, proxies, firewalls, or any other method. Network errors and insecurity that we wrestle with today will become more widespread and will affect sites other than those blacklisted by the American government.

The desires of some copyright holders to offload responsibility for stopping online infringement to service providers large and small must give way to the profound public interest in a robust, reliable, and open internet.

Tech Mandates Give the Library of Congress a Veto Right on Innovation—a Right It Is Manifestly Ill-Equipped to Exercise

Bill proponents apparently hope to mitigate at least some of these harms through the designation process itself, which is supposed to include consideration of the various public interests in play, as well as any effects on competition, privacy, and security. Recognizing that the Librarian of Congress is unlikely to have the necessary expertise to evaluate those effects, the bill requires them to consult with other government agencies that do.

There are at least two fundamental problems here. First, at best this means a group of well-meaning D.C. bureaucrats get to dictate how we build and use technology, informed primarily by those who can afford to submit evidence and expertise. Startups, small businesses, independent creators, and ordinary users, all of whom will be affected, are unlikely to know about the process, much less have a voice in it.

Second—and this is perhaps the most cynical aspect of the entire proposal—it is modeled on the Section 1201 exemption process the Library already conducts every three years. Anyone who has actually participated in that process can tell you it has been broken from the start.

Section 1201 of the DMCA makes it illegal to “circumvent” digital locks that control access to copyrighted works and to make and sell devices that break digital locks. Realizing that the law might inhibit lawful fair uses, the statute authorizes the Library of Congress to hold a triennial rulemaking process to identify and grant exemptions for such uses. The supposed “safety valve” is anything but. Instead, it creates a burdensome and expensive speech-licensing regime that has no binding standards, does not move at the speed of innovation, and functions at all only because of the work of clinical students and public interest organizations, all of whom could put that energy to better causes than going hat in hand to the Copyright Office every three years.

What is worse, while the 1201 exemptions for lawful expression expire if they are not renewed, once adopted the tech mandates will be permanent until they are successfully challenged. In other words, there are higher barriers to protecting fair use than impeding it.

Worse still, the Library of Congress will now be in charge of both designating technical mandates and designating when and how it’s OK to break them for fair use purposes. That is a terrifying power—and one far too great to put in the hands of a bunch of D.C. lawyers, no matter how well-meaning. It’s worth remembering that the Copyright Office didn’t grant a single meaningful exemption to Section 1201 for the first six years of that law’s operation. What innovative new services, and which potential challengers to today’s tech giants, could we lose in six years?

Remaking the internet to serve the entertainment industry was a bad idea ten years ago and it’s a bad idea today. This dangerous bill is a nonstarter.

Take Action

Tell your senators to oppose The Filter Mandate

Corynne McSherry

Anti-War Hacktivism is Leading to Digital Xenophobia and a More Hostile Internet

3 months ago

The horrific Russian military invasion of Ukraine has understandably led to a backlash against Russia. The temptation is to label anything Russian, from state media and students to cats, as bad and block it to signal outrage and ostracization. This type of thinking has infected the open source and internet security communities as well—a terrible idea with potentially harmful consequences.

Recently the maintainer of a popular open source Node.js package, “node-ipc,” released a new plugin called “peacenotwar.” A Node.js package is publicly available JavaScript code used by developers to add functionality to applications. According to the maintainer, this plugin would display a message of peace on users’ desktops, serving “as a non-violent protest against Russia’s aggression.” Some versions of the node-ipc package, a networking tool that has been downloaded millions of times, will automatically run this protest-ware. Then a post on GitHub claimed that some versions of the node-ipc package were deleting and overwriting all files with the heart emoji if the package was installed on a computer with a Russian or Belarusian IP address.

If the accusations are true, this is a terrible idea which could result in all sorts of horrible and unintended outcomes. What if a Russian human rights or anti-war organization, or a Russian hospital, was using this particular software package? This action—although conceived of as a simple nonviolent protest by the package creator—could result in the loss of important footage of protests or war crimes, loss of medical records, or even the deaths of innocent people. 

The trend of half-baked hacktivism involving everyday internet users is now growing into sites and games that encourage users to become part of DDoS (Distributed Denial of Service) attacks against some Russian digital assets. For the same reasons mentioned above, randomly sending attacks without thinking through the consequences and potential collateral damage is a feel-good action that amounts to shooting in the dark. Also unknown are the consequences for users who take part in these campaigns. Are users aware that they could have their IPs logged by a potentially aggressive and vindictive target? It’s an incredibly irresponsible action that hands tools to ordinary users without the due diligence the situation deserves, putting innocent lives at risk on all sides.

Targeting every computer with a Russian or Belarusian IP address with this sort of hacktivism as a means of protest against the actions of a government is patently absurd and harmful. Developers living in countries that commit war crimes, including the US, might want to consider how they would feel if the tables were turned.

This sort of digital xenophobia didn’t start with the Russian military invasion of Ukraine, however. For many years the common network defender orthodoxy has been to block certain countries deemed disreputable from your network, effectively creating no-fly lists for IP addresses. Most traffic coming from Russia or China is malicious, the thinking goes, so why not block all traffic coming from Russian or Chinese IP addresses? Putting aside for a moment the question of whether Russian and Chinese hackers have heard of VPNs, this bit of network security theater throws entire countries under the bus because of a few bad actors, cutting off many people who might find your service useful. 1

Since the Russian invasion of Ukraine, calls have been mounting to disconnect Russia from the internet. This is an awful idea, and it once again treats Russia as a monolith, punishing the Russian people because of the actions of their authoritarian leaders. Russians who might be looking up information about a protest or trying to find news about those killed in the war will be blocked. Someone living in Ukraine in an area bordering Russia or Belarus could have their IP address incorrectly categorized as Russian or Belarusian. Their communications and ability to access websites about relief or evacuation efforts could be blocked.

We have warned that remaking fundamental internet infrastructure protocols—like disconnecting Russia from the internet by revoking its top level domain names or revoking IP addresses—to protest a war will likely lead to a host of dangerous and long-lasting consequences. It will deprive people of a powerful tool for sharing information when they need it the most, compromise security and privacy, and undermine trust in the global communications infrastructures we all rely on.

Treating the population of a country as a monolith risks alienating and denying services to people who would agree with you, people who are your allies, and people who desperately need sources of information and help. It makes the internet less open and more hostile for all involved. Equating people with their authoritarian governments in your performative activism is never a good idea.

  • 1. Of course if you are running a network that is only meant for a few specific people to access, you might feel justified in engaging in this bit of security theater. If that is the case then you should ask yourself why you are not banning all outside traffic except for a dedicated VPN? After all, malicious traffic can come from a country you trust just as much as a country that you don’t.
Cooper Quintin

Brazil’s “Remuneration Right” Strengthens Big Tech and Big Media, At the Cost of Free Expression and a Free Press

3 months ago

Update: A new text of the Fake News Bill was released days after our post, which brings some clarity to a few of the many critical ambiguities in this proposal. We welcome these attempts to provide greater clarity to the rule, but the proposal is still dangerously underspecified, and many of the dangerous gaps that we pointed out in our initial analysis remain.

Essential definitions are left for further regulation, including the definition of "news," what constitutes a "use" of news, and how use would be measured and compensated. By failing to define these critical terms, the law writes a blank check to a future regulator.

We are especially concerned that the revised draft leaves open the question of whether copyright exceptions and limitations will cover normal social media users' quotations of news content. A failure to safeguard this practice would undermine free discussion of important reporting (as well as reporting itself). The revised draft still uses copyright to address tech firms' abuses of publishers, even though these abuses primarily relate to corrupt practices in the ad market, and are not copyright related.

Brazil’s “Fake News Bill” (PL 2630/2020) is the latest salvo in the global battle between Big Tech companies and the media industry—which is itself highly concentrated, controlled by a handful of dominant firms. The remuneration rule in the “Fake News Bill” is a made-in-Brazil installment that is poised to become law, despite the complaints of civil society and independent media associations.

EFF has been analyzing and reporting on the bill since its earliest stages in the Brazilian Congress. The latest text, which has been approved by a working group in the Chamber of Deputies, still contains dangerous language for free expression and digital rights. The Brazilian digital rights groups coalition Coalizão Direitos na Rede has published a relevant analysis  of improvements in the latest draft. It also identifies the serious challenges that remain, including the expansion of data retention obligations. Beyond what’s in the text of the current bill, there is also persistent political pressure to revive the unsettling traceability mandate for private messages.

Just as dangerous, however, is the “remuneration obligation” for publishers, an unrelated legislative initiative that has been shoehorned into one article of this bill without the thoughtfulness, consultation, or nuance that such a proposal warrants. We fear that, for the big media companies who have advocated for this measure, that lack of consideration and nuance is a feature, and not a bug.

In a nutshell, this  provision compels platforms to compensate media companies for use of "journalistic content." While it exempts from its scope some of the exceptions and limitations established in Brazil's copyright law, it’s not clear whether that exemption would extend to user links on social media that automatically import just a few sentences from the beginning of an article as a preview. Would courts consider such a link a non-infringing quotation? Although Brazilian case law has settled that Brazilian copyright law’s exceptions and limitations should be broadly interpreted, this is still an open question.

The proposed rule does allow “the simple sharing of the IP address of the original content.” This is confusing and technically inaccurate, as IP addresses generally don’t refer to specific articles, and often many websites will share a single IP address. Moreover, social media users do not typically share IP addresses of journalistic organizations. It’s possible that the drafters used “IP address” as a synonym for “URL;” if that is the case, proponents should amend the provision accordingly.
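
To see the technical gap, consider this small Python sketch (the link is a placeholder): the article a user shares is identified by the URL's path, while the IP address, resolved from the hostname alone, identifies a server that may host many unrelated sites and says nothing about which article was linked.

```python
# Illustrating why an IP address is not a reference to an article.
import socket
from urllib.parse import urlparse

shared_link = "https://example.com/politics/2022/some-article-headline"

parsed = urlparse(shared_link)
print(parsed.hostname)  # "example.com": which site the link points to
print(parsed.path)      # "/politics/2022/some-article-headline": which article

# The IP address is resolved from the hostname alone. It identifies a
# server (often shared by many sites behind virtual hosting or a CDN)
# and carries no information about the article's path at all.
print(socket.gethostbyname(parsed.hostname))
```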

As is, the provision, if enacted, will change the way core digital human rights values like free expression and access to knowledge are interpreted by businesses, courts, and regulators. Quotation and linking are key modes of online expression and journalism; if they come at a financial price, few will engage in them.

In theory, the remuneration obligation proposal could address some of these concerns by articulating narrow, well-crafted definitions of “journalistic content” and “use,” as well as clear rules on how the system will be designed and overseen. Proponents should also explain whether any of the money paid by platforms will be earmarked for journalists. Instead, proponents of the obligation have left the provision alarmingly vague: compensation arrangements will be left to further regulation to be approved by the executive branch. In other words, the President has the final call, and can use that power to favor the most influential players.

The obligation would apply to commercial search engines, social media networks, and instant messaging applications with over 10 million users registered in Brazil. While the proposal would hit only major commercial internet platforms, its scope still raises serious concerns. For instant messaging applications, compliance implies mass surveillance of private communications and a major and dangerous blow to end-to-end encryption. The pernicious impacts of the remuneration rule will not spare small and medium-sized players, be they media companies or tech companies, as we saw in France after the passage of the EU Copyright Directive (see below for more).

It is unlikely to benefit the publishers who are most in need of financial support. Attempts to create similar obligations in France and Australia demonstrated that when big media companies negotiate with big tech companies, smaller and independent publishers get left behind. The French implementation of publishers’ neighboring rights approved in the EU Copyright Directive has led to grievances from media outlets frozen out of the bounty. This, in turn, has triggered a battle between Google and the French competition authority that is still in progress.

In Australia, the News Media and Digital Platforms Bargaining Code – which aimed to create balanced agreements between publishers and internet platforms – ultimately became a bargaining tool for major media outlets, who opted for private deals with tech platforms, sidelining other smaller publishers. The Australian Treasurer must designate the digital platforms that fall within the scope of the Code, and that are subject to its rules. Even though the law was enacted in early 2021, the government hasn’t yet ordered any platform to pay. In the meantime, a range of commercial content agreements between Facebook and Google and news businesses have been concluded outside of the legislation. Those, too, have been unequal: For example, Rupert Murdoch’s News Corp. secured its deal with Facebook just a few weeks after the new Australian Code passed. But other, less connected publishers had their negotiation requests shot down without justification.

Brazil is likely to see the same. There’s the risk that the country's media giant Globo and other big media outlets will capture the regulators who design and enforce the payment system. Even if the obligation to pay is never enforced, the looming threat of its enforcement could be used by big media companies to secure paydays from big tech companies, leaving small media companies behind. As with the Australian case, the Brazilian proposal is primarily being pushed by big media companies with political power and few competitors. That’s why associations of independent publishers released, last year and once again last week, public statements urging Congress to drop this vague rule from the Fake News Bill. 

On the platforms' side, Google’s deals in France offer a lesson in how this provision can impose indirect adverse effects on non-dominant platforms and start-ups. Google’s strategy for complying with the remuneration rule was to tie publishers’ compensation to their use of Google’s news aggregator product, News Showcase. For example, Google and the French press association Alliance de la presse d'information générale (APIG) worked out a deal for Google to pay French news outlets. APIG represents most major French publishers. For news outlets to participate in the deal, they had to join Google’s News Showcase product. This requirement was one of the reasons the French competition watchdog fined Google for failing to negotiate neighboring rights in good faith. With this move, Google leveraged its obligation to come to a remuneration agreement to give a marketplace advantage to its own news aggregator product. That, in turn, serves to entrench the company's central role as the intermediary people go through in order to reach news sites.

Calls for media remuneration by tech companies are grounded in the premise that tech giants are misappropriating journalistic organizations’ content. This represents a dangerous understanding of copyright, because it assumes that copyright holders are empowered to license (and thus control and block) quotation and discussion of the news of the day. That would undermine both the free discussion of important reporting and reporting itself. The press, after all, is a prolific user of quotations from rival media outlets. This reporting is key to understanding the role of the media in shaping public opinion - including its role in magnifying or debunking so-called “fake news.” Brazil's 1998 copyright law is explicit that there is no copyright violation when the press reproduces news articles, provided they mention the source of the content, specifically so the wider public can access, discuss  and criticize journalistic reporting.

Link taxes are a bad idea – but that doesn’t mean Congress and regulators shouldn’t do anything to help publishers, especially small ones. A better approach starts with recognizing that Big Tech primarily harms publishers by misappropriating their money (not exactly their copyrights).

Online advertising is dominated by a duopoly - Meta (née Facebook) and Google - who have been repeatedly accused of defrauding publishers. These frauds include allegedly undercounting viewers and, more disturbingly, allegations of direct collusion by senior executives at both companies to rig the entire ad market to both maximize the share of revenue raked off by the ad-tech duopoly (including through blatant fraud), and to exclude other ad-tech companies who might have paid publishers more. Google allegedly charges higher fees than rival ad exchanges and, according to regulators, cheats when it collects those fees.

Even worse: the remuneration provision in the bill may actually entrench the dominance of the ad-tech duopoly by enshrining them as permanent structural elements of the media industry, such that efforts to reduce their dominance would undermine media outlets that depend on them.

Regulating quotation isn't the way to give all the nation's publishers a fair deal. Cleaning up the ad-tech market is a far better approach. For example, regulators could consider the following:

  • Restrict firms from offering both “demand-side” and “supply-side” ad services. Today, the ad-tech giants routinely represent both sellers of advertising space (publishers) and buyers (advertisers) in the same transaction, creating many opportunities for cheating in ways that benefit the platforms at the expense of publishers. The law should require companies to represent either the buyers of ads or the sellers, but not both;
  • Require ad-tech platforms to disclose the underlying criteria (including figures) used to calculate ad revenues and viewership, backstopped by independent auditors;
  • Find ways  to allow smaller players to participate in real-time bidding for ad space; and
  • Build on Brazil's data protection law to make surveillance advertising less attractive and encourage non-invasive, content-based advertising that uses the text of articles, not the behavior of readers, to target ads. This would erode the data advantage enjoyed by companies that have practiced decades of nonconsensual mass surveillance.

These measures may take longer and might require more administration than a “remuneration obligation,” but they have one signal advantage: they will work. A hastily constructed, underspecified remuneration obligation is the epitome of the Silicon Valley ethic of “move fast and break things” – it’s the kind of thinking that created this mess in the first place. By contrast, restructuring the ad market to make it fair to publishers, to eliminate mass surveillance, and to purge it of widespread fraud is a project that involves “moving slowly and fixing things.” It is the antidote to Silicon Valley’s toxicity.  

When we allow debates about compensation for publishers and the sustainability of journalism  to be posed as a battle between Big Tech and Big Media, we miss the real stakes: fostering freedom of expression, and access to information and knowledge. Good digital policy should strive for an online environment with a rich plurality of voices and a diverse range of solid news sources. These priorities will not emerge from private deals cut between media and tech giants in back rooms, and giving either side more power and less competition from upstarts will only make things worse.

Brazilian civil society has rejected the proposition that we must choose one or the other. They are demanding more diversity and fairness, not entrenchment for dominant and highly concentrated players. Brazilian legislators should listen and drop the flawed remuneration obligation from the Fake News bill. 

Cory Doctorow

To Make Social Media Work Better, Make It Fail Better

3 months ago

Pity the poor content moderator. Big Tech platforms expect their mods to correctly apply a set of rules to users in more than a hundred countries, in over a thousand languages. These users are clustered into literally millions of online communities, each with its own norms and taboos. 

What a task! Some groups will consider a word to be a slur, while others will use it as a term of affection. Actually, it’s even more confusing: some groups consider some words to be slurs when used by outsiders, but not by insiders, which means that a moderator has to understand which participants in a single group are considered insiders, and who is considered an outsider. 

Mods have to make this call in languages they speak imperfectly, or not at all, assisted by a machine translation of unknowable quality. 

Small wonder that trust and safety experts can’t agree on when to remove content, when to label it, and when to leave it be. Moderation at scale is an impossible task. Moderators don’t just miss a torrent of vile abuse and awful scams; they also remove Black users’ discussions of racism for being racist, suspend users who report dangerous conspiracy-fodder for pushing conspiracies, punish scientists who debunk vaccine misinformation for spreading misinformation, block game designers’ ads because they contain the word “supplement,” and remove comments praising a cute cat as a “beautiful puss.”

Everyone hates the content moderation on Big Tech platforms. Everyone thinks they’re being censored by Big Tech. They’re right.

Every community has implicit and explicit rules about what kinds of speech are acceptable, and metes out punishments to people who violate those rules, ranging from banishment to shaming to compelling the speaker to silence. You’re not allowed to get into a shouting match at a funeral, you’re not allowed to use slurs when addressing your university professor, you’re not allowed to explicitly describe your sex-life to your work colleagues. Your family may prohibit swear-words at Christmas dinner or arguments about homework at the breakfast table. 

One of the things that defines a community are its speech norms. In the online world, moderators enforce those “house rules” by labeling or deleting rule-breaking speech, and by cautioning or removing users. 

Doing this job well is hard even when the moderator is close to the community and understands its rules. It’s much harder when the moderator is a low-waged employee following company policy  at a frenzied pace. Then it’s impossible to do well and consistently.

It’s no wonder that so many people, of so many different backgrounds and outlooks, are unhappy with Big Tech platforms’ moderation choices.

Which raises the question: why are they still using Big Tech platforms?

Big Tech platforms enjoy “network effects”: the more people join an online community, the more reasons there are for others to sign up. You join because you want to hang out with the people there and then others join because they want to hang out with you.

This network effect also creates a “switching cost” - that’s the price you pay for leaving a platform behind. Maybe you’ll lose the people who watch your videos, or the private forum for people struggling with the same health condition as you, or contact with your distant relations, half a world away.


These people are why so many of us put up with the many flaws of major social media platforms. It’s not that we value the glorious free speech of our harassers, nor that we want our views “fact-checked” or de-monetized by unaccountable third parties, nor that we want copyright filters banishing the videos we love, nor that we want juvenile sensationalism rammed into our eyeballs or controversial opinions buried at the bottom of an impossibly deep algorithmically sorted pile.

We tolerate all of that because the platforms have taken hostages: the people we love, the communities we care about, and the customers we rely upon. Breaking up with the platform means breaking up with those people. 

It doesn’t have to be this way. The internet was designed on protocols, not platforms: the principle of running lots of different, interconnected services, each with its own “house rules” based on its own norms and goals. These services could connect to one another, but they could also block one another, allowing communities to isolate themselves from adversaries who wished to harm or disrupt their fellowship.

In fact, there are millions of people energetically trying to create an internet that looks that way. The fediverse is a collection of free/open software projects designed to replace centralized services like Facebook with decentralized alternatives that work in much the same way, but delegate control to the communities they serve. Groups of friends, co-ops, startups, nonprofits and others can host their own Mastodon or Diaspora instances and connect to as many of the other servers as will connect with them, based on their preferences and needs.
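
To make that federation model a little more concrete, here is a minimal illustrative sketch in Python. The class and method names are hypothetical, not Mastodon’s or ActivityPub’s actual interface; the point is only that each instance keeps its own list of peers to connect to or refuse.

# Illustrative sketch only: hypothetical names, not a real fediverse API.
from dataclasses import dataclass, field

@dataclass
class FederationPolicy:
    """Each instance keeps its own 'house rules' about which peer servers to talk to."""
    blocked_domains: set = field(default_factory=set)  # peers this community refuses entirely

    def allows(self, peer_domain: str) -> bool:
        # Federate with everyone except peers the community has chosen to block.
        return peer_domain not in self.blocked_domains

    def block(self, peer_domain: str) -> None:
        self.blocked_domains.add(peer_domain)

# A co-op instance choosing its own peers, with no central authority involved.
policy = FederationPolicy()
policy.block("spam-haven.example")             # isolate a hostile server
print(policy.allows("friendly-coop.example"))  # True
print(policy.allows("spam-haven.example"))     # False

In a real fediverse server this decision is made by protocol-level software, but the principle is the same: the block list belongs to the community running the instance, not to a distant corporate trust and safety department.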

The fediverse is amazing, but it’s not growing the way many of us hoped. Even though millions of people claim to hate the moderation policies and privacy abuses of Facebook, they’re not running for the exits. Could it be that they secretly like life on Facebook?


That’s one theory. 

Another theory, one that requires much less of an imaginative leap, is that people hate Facebook, but they love the people they would have to leave behind even more.

Which raises an obvious possibility: what if we made it possible for people to leave Facebook without being cut off from their friends?

Enter “interoperability.”

Interoperability is the act of plugging something new into an existing product or service. Interop is why you can send email from a Gmail account to an Outlook account. It’s why you can load any website on any browser. It’s why you can open Microsoft Word files with Apple Pages. It’s why you can use an iPhone connected to Verizon to call an Android user on T-Mobile.

Interoperability is also why you can switch between these services. Throw away your PC and buy a Mac? No problem, Pages will open all the Word documents you created back when you were a Microsoft customer. Switch from Android to iPhone, or T-Mobile to Verizon? You can still call your friends and they can still call you - and they won’t even know that anything’s changed unless you tell them.

Proposals in the US (the ACCESS Act) and the EU (the Digital Markets Act) aim to force the largest platforms to allow interoperability with their services. Though the laws differ in their specifics, in broad strokes they would both require platforms like Facebook (which claims it is now called “Meta”) to let startups, co-ops, nonprofits, and personal sites connect to it so that Facebook users can leave the service without leaving behind their friends.

Under these proposals, you could leave Facebook and set up or join a small service. That service would set its own moderation policies but also interoperate with Facebook. You could send messages to users and groups on Facebook, which would also be shared with people on other small services who were members of the same groups as you.

This moves moderation choices closer to users and further from Facebook. If the mods on your service allow speech that’s blocked on Facebook, you and the others on your service will see it, though Facebook’s moderators may still block it on their side, so users there won’t. 

Likewise, if there’s some speech Facebook allows that you and your community don’t want to see, the mods on your service can block it, either by removing messages or blocking users from communicating with your server. 
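
As a rough sketch of how that could work, the Python below uses hypothetical names (none of these proposals define a real “local_mods_accept” interface); it simply shows a small interoperable service applying its own rules to messages arriving from a bigger platform before local users ever see them.

# Illustrative sketch only: hypothetical names, not an actual interoperability API.
from dataclasses import dataclass

@dataclass
class IncomingMessage:
    author: str          # e.g. "someone@bigplatform.example"
    origin_server: str   # the server the message came from
    text: str

def local_mods_accept(msg, banned_authors, banned_servers, banned_phrases):
    """Apply this community's own rules; the big platform's rules don't decide here."""
    if msg.author in banned_authors or msg.origin_server in banned_servers:
        return False
    return not any(phrase in msg.text.lower() for phrase in banned_phrases)

inbox = [
    IncomingMessage("friend@bigplatform.example", "bigplatform.example", "See you at the meetup!"),
    IncomingMessage("troll@bigplatform.example", "bigplatform.example", "Buy these supplements now"),
]
shown = [m for m in inbox
         if local_mods_accept(m,
                              banned_authors={"troll@bigplatform.example"},
                              banned_servers=set(),
                              banned_phrases={"supplement"})]
print([m.text for m in shown])  # only the meetup message reaches local users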

Some people want to fix Big Tech platforms: get them to moderate better and more transparently. We get it. There’s lots of room for improvement there. We even helped draft a roadmap for improving moderation: the Santa Clara Principles.

But fixing Big Tech platforms is a strategy that only works when it works; it fails really badly. If all the conversations you need to be a part of are on a platform that won’t fix itself and you’re being harmed by undermoderation or overmoderation, you’re stuck. 

There’s a better way. Interoperability puts communities in charge of their own norms, without having to convince a huge “trust and safety” department of a tech company - possibly a company in a different country, where no one speaks your language or understands your context - that they’ve missed some contextual nuance in their choices about what to leave up and what to delete. 

Frank Pasquale’s Tech Platforms and the Knowledge Problem contrasts two different approaches to tech regulation: “Hamiltonians” and “Jeffersonians” (the paper was published in 2018, and these were extremely zeitgeisty labels!). 

Hamiltonians favor “improving the regulation of leading firms rather than breaking them up,” while Jeffersonians argue that the “very concentration (of power, patents, and profits) in megafirms” is itself a problem, making them both unaccountable and dangerous.

That’s where we land. We think that technology users shouldn’t have to wait for Big Tech platform owners to have a moment of enlightenment that leads to their moral reform, and we understand that the road to external regulation is long and rocky, thanks to the oligopolistic power of cash-swollen, too-big-to-fail tech giants.

We are impatient. Too many people have already been harmed by Big Tech platforms’ bad moderation calls. By all means, let’s try to make the platforms better, but let’s also make them less important, by giving people technological self-determination. We all deserve to belong to online communities that get to make their own calls on what’s acceptable and what’s not. 




Cory Doctorow

EFF Tells E.U. Commission: Don't Break Encryption

3 months 1 week ago

An upcoming proposal from the European Commission could make government scanning of user messages and photos mandatory throughout the E.U. If that happens, it would be inconsistent with providing true end-to-end encryption in Europe. That would be a disaster, not just for the privacy and security of citizens in the E.U., but worldwide. 

The excuse for this attack on basic human rights is the same one we have seen used repeatedly in the U.S. over the last few years: crimes against children. This is the same excuse that sponsors of the anti-encryption EARN IT Act used in 2020, and again earlier this year. It’s the same excuse that was used to put overwhelming pressure on Apple to develop a phone-scanning plan that disrespected user rights. Neither of these plans has advanced, because the public is overwhelmingly opposed to such surveillance. 

The plan in both the U.S. and E.U. is similar: coerce private companies to scan all user data, check what they find against government databases, and report their findings to the authorities. It’s unacceptable, and no matter what they say, it’s completely incompatible with end-to-end encryption. 

Today, EFF has joined European Digital Rights (EDRi) and dozens of other civil liberties and human rights organizations in sending a letter telling the Commissioners that we can’t accept this attack on our privacy. Child abuse can be, and is, investigated and prosecuted without blanketing people with surveillance systems. 

The need for the protections of encryption doesn’t change during a time of conflict. As we state in the letter:

As the shocking events of the past three weeks have emphasized, privacy and safety are mutually reinforcing rights. People under attack depend on privacy-preserving technologies to communicate with journalists, to coordinate protection for their families, and to fight for their safety and rights. 

Experts agree that there is no way to give law enforcement access to communications that are encrypted end-to-end without creating vulnerabilities that criminals and repressive governments can exploit.

The letter makes clear that we can’t accept mass surveillance, indiscriminate spying on people’s private communications, or any measures that would break or bypass encryption, including client-side scanning.

We hope the European Commission takes up the offer to work together with EDRi to craft legislation that is respectful of people’s privacy and security. 

Joe Mullin

Letter to Iran, Regarding the Regulatory System for Cyberspace Services Bill

3 months 1 week ago

Today, EFF joins Article 19 and more than 50 organizations in urging the Iranian government to rescind a bill with severe implications for the privacy, security and freedom of expression of Internet users in Iran. The text of our letter is below.

Iran: Human rights groups sound alarm against draconian Internet Bill

We, the undersigned human rights and civil society organizations, are alarmed by the Iranian parliament’s move to ratify the general outlines of the draconian “Regulatory System for Cyberspace Services Bill,” previously known as the “User Protection Bill” and referred to hereafter as “the Bill.” If passed, the Bill will violate an array of human rights of people in Iran, including the right to freedom of expression and the right to privacy. We urge the Iranian authorities to immediately withdraw the Bill in its entirety. We further call on the international community, along with states engaged in dialogue with Iranian authorities, to ensure that the promotion and protection of human rights in Iran is prioritized, including by urging Iran’s parliament to rescind the Bill as a matter of urgency.

While UN Human Rights Council member states will soon vote on whether to renew the mandate of the Special Rapporteur on Iran, the Iranian parliament is attempting to further curtail the rights of people inside Iran with passage of this Bill. If implemented, this will carry grave risks of increased and even complete communication blackouts in Iran, and it is likely to be used as a tool to conceal serious human rights violations.

While we welcome the Iranian parliament presidium’s decision to annul the 22 February 2022 ratification attempt by the special parliamentary committee, we are still alarmed at the ratification attempt following a vote of only 18 parliamentarians. The threat of this Bill passing looms. In July 2021, parliament voted to allow the Bill to pass under Article 85 of the Iranian constitution. This would mean a small 24-person committee (with a majority vote of 18 to pass) within parliament could ratify the Bill for an experimental period of between three and five years, circumventing typical parliamentary procedures. This unusual Article 85 process, and the moves to ratify it on 22 February, demonstrate that the authorities remain adamant about taking forward this regressive legislation despite the domestic and international outcry. We are still concerned that the Bill’s enforcement is at the whim of a small committee attempting to circumvent the rights of an entire country.

The Bill Introduces Alarming Changes to Internet Controls

The undersigned civil society groups are gravely concerned that the passage of the Bill will result in even further reductions in the availability of international Internet bandwidth in Iran and violate the right to privacy and access to a secure and open Internet. Particularly alarming are provisions of the Bill that place Iran’s Internet infrastructure and Internet gateways under the control of the country’s armed forces and security agencies. In the latest draft of the Bill, the Secure Gateway Taskforce will control international gateways that connect Iran to the Internet. This Taskforce, newly created as part of the Bill’s specifications, will in turn be under the authority of the National Centre of Cyberspace (NCC), which is under the direct oversight of the Supreme Leader. The Secure Gateway Taskforce is to be composed of representatives from the General Staff of the Armed Forces, the Intelligence Organization of the Islamic Revolutionary Guards Corps (IRGC Intelligence Organization), the Ministry of Intelligence, the Ministry of Information and Communications Technology (ICT), the Passive Defense Organization, the Police Force, and the office of the Prosecutor General. 

Delegating such control over Internet and communications access to entities that repeatedly commit serious human rights violations with complete impunity will have chilling effects on the right to freedom of expression in Iran. As documented by human rights organizations, Iran’s security forces, including the Revolutionary Guards and the Ministry of Intelligence, perpetrated gross violations of human rights and crimes under international law, including the unlawful use of lethal force, mass arbitrary detentions, enforced disappearances and torture and other ill-treatment to crush the nationwide protests in 2017, 2018, and November 2019. Alarmingly, passage of the Bill will make Internet shutdowns and online censorship even easier and less transparent. We note that Internet shutdowns not only constitute violations of human rights, such as the right to access information and freedom of expression, but also act as a tool to facilitate the commission and concealment of other gross violations. Indeed, Iran’s deadly repression of nationwide protests in November 2019 took place amid the darkness of a week-long near total Internet shutdown.

Disconnecting Foreign Social Media and Internet Services

In the latest draft of the Bill, all tech companies offering services inside Iran are required to introduce representatives in the country, collaborate with the Islamic Republic of Iran in surveillance and censorship efforts, and pay taxes. They are also required to store “big data and critical information inside Iran” belonging to users inside the country and can face legal penalties if they do not. Access to services provided by companies that do not comply will be throttled, and the Committee Charged with Determining Offensive Content (CCDOC)[1] can eventually decide to outright ban them from operating in Iran. Compliance by companies with such requirements will carry severe repercussions for all Internet users in Iran. The Bill therefore places platforms in a position to choose between being throttled and complying with regulations that undermine the right to privacy and freedom of expression. Such requirements are meant to further consolidate the National Information Network (NIN), a domestic Internet infrastructure hosted inside Iran. This will place information and communications under the monitoring and censorship of the authorities and may result in Iran’s eventual disconnection from the global Internet. Either foreign services comply and become partially integrated into the national network (at least in terms of data storage), or they refuse and users are forced to seek out alternatives on the NIN. The Bill also introduces new criminal measures against those failing to comply with its terms. Proxy or Virtual Private Network (VPN) service development, reproduction or distribution can result in two years’ imprisonment under Article 20 of the proposed Bill. Article 21 also stipulates that Internet Service Providers who allow unlicensed foreign services to access the data of users inside Iran can face up to ten years’ imprisonment.

Domestic and International Backlash Against the Bill

Since the advancement of the Bill, Internet users, along with businesses and guilds representing them, as well as human rights defenders, digital rights activists, international human rights organizations and United Nations experts have raised grave concerns. In October 2021, four UN Special Rapporteurs sent a Communication to the Iranian authorities (OL IRN 29/2021) expressing concerns about the Bill and the lack of transparency that permeated its processing within the parliament and calling for it to be withdrawn. Criticisms of the Bill and parliament’s decisions to proceed with the legislation without any regard for due process have not been limited to civil society actors. As of 23 February 2022, 150 Iranian parliamentarians had signed a letter to parliament’s board of presidents requesting the Bill to be considered and voted on in a general session of parliament rather than in a special committee.

Members of the international community, including the states engaged in bilateral and multilateral negotiations and dialogues with the Islamic Republic of Iran, and the Human Rights Council member states must press Iran to uphold its human rights obligations. Without urgent action, people in Iran will be at even graver risk of isolation and human rights violations.

Signatories
  1. Abdorrahman Boroumand Center for Human Rights in Iran
  2. Access Now
  3. Advocacy Initiative for Development (AID)
  4. All Human Rights for All in Iran
  5. Amnesty International
  6. Arc Association for the Defence of Human Rights of Azerbaijanis of Iran - ArcDH
  7. Article18
  8. ARTICLE19
  9. Association for the human rights of the Azerbaijani people in Iran (AHRAZ)
  10. Azerbaijan Internet Watch
  11. Center for Democracy & Technology
  12. Center for Human Rights in Iran (CHRI)
  13. Commission on Global Feminisms and Queer Politics, International Union of Anthropological and Ethnological Sciences (IUAES)
  14. Committee to Protect Journalists (CPJ)
  15. Democracy for the Arab World Now (DAWN)
  16. Electronic Frontier Foundation (EFF)
  17. Freedom Forum
  18. Front Line Defenders
  19. Global Voices
  20. Human Rights Activists (in Iran) (HRA)
  21. Human Rights Consulting Group, Kazakhstan
  22. Human Rights Watch
  23. Ideas Beyond Borders
  24. IFEX
  25. Impact Iran
  26. Internet Protection Society, Russia
  27. INSM Network
  28. Iran Human Rights
  29. Iran Human Rights Documentation Center
  30. Justice for Iran
  31. Kijiji Yeetu
  32. Kurdistan Human Rights Association -Geneva (KMMK-G)
  33. Kurdistan Human Rights Network (KHRN)
  34. Kurdpa Human Rights Organization
  35. Lawyers’ Rights Watch Canada
  36. Media Foundation for West Africa (MFWA)
  37. Miaan Group
  38. Mnemonic
  39. Open Net
  40. OutRight Action International
  41. PEN America
  42. Queer Kadeh
  43. Ranking Digital Rights
  44. RosKomSvoboda
  45. Sassoufit collective
  46. Siamak Pourzand Foundation (SPF)
  47. SMEX
  48. SOAP
  49. Spectrum
  50. Wikimédia France
  51. WITNESS (witness.org)
  52. Ubunteam
  53. United for Iran
  54. Xnet
  55. 6Rang (Iranian Lesbian and Transgender network)

[1] The CCDOC was established in 2009 as per the Computer Crimes Law that was ratified in the same year. It is a multi-agency oversight body that is in charge of online censorship in Iran.

Shirin Mori