Internet Service Providers Plan to Subvert Net Neutrality. Don’t Let Them

2 days 12 hours ago

In the absence of strong net neutrality protections, internet service providers (ISPs) have made all sorts of plans that would allow them to capitalize on something called "network slicing." While this technology has real promise, what the ISPs have planned would subvert net neutrality—the principle that all data be treated equally by your service provider—by allowing them to recreate the kinds of “fast lanes” we've already agreed should not be allowed. If their plans succeed, the new proposed net neutrality protections will end up doing far less for consumers than the old rules did.

The FCC released draft rules to reinstate net neutrality, with a vote on adopting them scheduled for April 25. Overall, the order is a great step for net neutrality. But to be truly effective, the rules must not preempt states from protecting their residents with stronger laws, and they must clearly find that creating “fast lanes” via positive discrimination and unpaid prioritization of specific applications or services violates net neutrality.

Fast Lanes and How They Could Harm Competition

Since “fast lane” isn’t a technical term, what do we mean when we talk about one? To understand, it is helpful to think about data traffic and internet networking infrastructure like car traffic and public road systems. Just as roads connect people, goods, and services across distances, network infrastructure allows data traffic to flow from one place to another. And just as a road with more lanes theoretically means it can support more traffic moving at speed,[1] internet infrastructure with more “lanes” (i.e., bandwidth) should mean that a network can better support applications like streaming services and online gaming.

Individual ISPs have a maximum network capacity, and speed, of internet traffic they can handle. To continue the analogy, the road leading to your neighborhood has a set number of lanes. This is why the speed of your internet may change throughout the day: at peak hours, your service may slow down because too much requested traffic is clogging up the lanes.

It’s not inherently a bad thing to have specific lanes for certain types of traffic. Actual fast lanes on freeways can ease congestion by not making faster-moving vehicles compete for space with slower-moving traffic, and exit and entry lanes let cars perform specialized maneuvers without impeding other traffic. A lane only for buses isn’t a bad thing as long as every bus gets equal access to that lane and everyone has equal access to riding those buses. Where this becomes a problem is if there is a special lane only for Google buses, or only for consuming entertainment content instead of participating in video calls. In these scenarios you would be increasing the quality of certain bus rides at the expense of degraded service for everyone else on the road.

An internet “fast lane” would be the designation of part of the network, with more bandwidth and/or lower latency, to be used only for certain services. On a technical level, the physical network infrastructure would be split among several different software-defined networks with different use cases using network slicing. One network might be optimized for high-bandwidth applications such as video streaming, another for applications needing low latency (i.e., minimal delay between the client and the server), and another for IoT devices. The maximum physical network capacity is split among these slices. To continue our tortured metaphor, your original six-lane general road is now a four-lane general road with two lanes reserved for, say, a select list of streaming services. Think dedicated high-speed lanes for Disney+, HBO, and Netflix, but those services only. In a network-neutral construction of the infrastructure, all internet traffic shares all lanes, and no specific app or service is unfairly sped up or slowed down. This isn’t to say that we are inherently against network management techniques like quality of service or network slicing. But it’s important that quality of service efforts be undertaken, as much as possible, in an application-agnostic manner.
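The capacity math behind this metaphor is simple, and a short sketch makes the trade-off concrete. The numbers below are purely illustrative (a hypothetical 600 Mbps link standing in for the six-lane road); real slice allocation involves far more than subtraction, but the zero-sum nature of reserving capacity holds:

```python
# Hypothetical capacity figures illustrating the road analogy:
# a fixed total capacity is divided among network slices.
TOTAL_MBPS = 600  # total link capacity (think: six lanes)

def split_capacity(total, reserved_slices):
    """Return the capacity left for general traffic after reserving slices.

    reserved_slices: dict mapping slice name -> reserved Mbps.
    """
    reserved = sum(reserved_slices.values())
    if reserved > total:
        raise ValueError("cannot reserve more than total capacity")
    return total - reserved

# A neutral network: nothing reserved, all traffic shares the full link.
neutral = split_capacity(TOTAL_MBPS, {})

# A "fast lane" arrangement: two lanes' worth reserved for chosen services.
sliced = split_capacity(TOTAL_MBPS, {"streaming_fast_lane": 200})

print(neutral)  # capacity available to everyone on the neutral network
print(sliced)   # capacity left for all other traffic once a slice is carved out
```

Whatever the real numbers, the point survives the toy model: every megabit reserved for a favored service is a megabit no longer available to the rest of the internet.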

The fast lanes metaphor isn’t ideal. On the road, having fast lanes is a good thing: it can protect slower, more cautious drivers from dangerous driving and improve the flow of traffic. Bike lanes are a good thing because they make cyclists safer and allow cars to drive more quickly without having to navigate around them. But with traffic lanes it’s the driver, not the road, that decides which lane they belong in (with penalties for obviously bad-faith behavior, such as driving in the bike lane).

Internet service providers are already testing their ability to create these network slices. They already have plans to create market offerings where certain applications and services, chosen by them, are given exclusive reserved fast lanes while the rest of the internet must shoulder its way through what is left. This kind of network slicing is a violation of net neutrality. We aren’t against network slicing as a technology; it could be useful and in the public interest for things like remote surgery or vehicle-to-vehicle communication, which require low-latency connections and which are separate offerings, not part of the broadband services covered in the draft order. We are against network slicing being used as a loophole to circumvent the principles of net neutrality.

Fast Lanes Are a Clear Violation of Net Neutrality

Net neutrality is the principle that all ISPs should treat all legitimate traffic coming over their networks equally; discriminating between certain applications or types of traffic is a clear violation of that principle. When fast lanes speed up certain applications or certain classes of applications, they cannot do so without having a negative impact on other internet traffic, even if only by comparison. This is throttling, plain and simple.

Further, because ISPs choose which applications or types of services get to be in the fast lane, they choose winners and losers within the internet, which has clear harms to both speech and competition. Whether your access to Disney+ is faster than your access to Indieflix because Disney+ is sped up or because Indieflix is slowed down doesn’t matter because the end result is the same: Disney+ is faster than Indieflix and so you are incentivized to use Disney+ over Indieflix.

ISPs should not be able to harm competition, whether by prioritizing incumbent services over new ones or by making one political party’s website faster than another’s. It is the consumer who should be in charge of what they do online. Fast lanes have no place in a network neutral internet.

[1] Urban studies research shows that this isn’t actually the case; still, it remains the popular wisdom among politicians and urban planners.
Chao Liu

EFF, Human Rights Organizations Call for Urgent Action in Case of Alaa Abd El Fattah

2 days 19 hours ago

Following an urgent appeal filed to the United Nations Working Group on Arbitrary Detention (UNWGAD) on behalf of blogger and activist Alaa Abd El Fattah, EFF has joined 26 free expression and human rights organizations calling for immediate action.

The appeal to the UNWGAD was initially filed in November 2023 just weeks after Alaa’s tenth birthday in prison. The British-Egyptian citizen is one of the most high-profile prisoners in Egypt and has spent much of the past decade behind bars for his pro-democracy writing and activism following Egypt’s revolution in 2011.

EFF and the Media Legal Defence Initiative submitted a similar petition to the UNWGAD on behalf of Alaa in 2014. This led to the Working Group issuing an opinion that Alaa’s detention was arbitrary and calling for his release. In 2016, the UNWGAD declared Alaa's detention (and the law under which he was arrested) a violation of international law, and again called for his release.

We once again urge the UN Working Group to urgently consider the recent petition and conclude that Alaa’s detention is arbitrary and contrary to international law. We also call for the Working Group to find that the appropriate remedy is a recommendation for Alaa’s immediate release.

Read our full letter to the UNWGAD and follow Free Alaa for campaign updates.

Paige Collings

Congress: Don't Let Anyone Own The Law

2 days 21 hours ago

We should all have the freedom to read, share, and comment on the laws we must live by. But yesterday, the House Judiciary Committee voted 19-4 to move forward the PRO Codes Act (H.R. 1631), a bill that would limit those rights in a critical area. 

TAKE ACTION

Tell Congress To Reject The Pro Codes Act

A few well-resourced private organizations have made a business of charging money for access to building and safety codes, even when those codes have been incorporated into law. 

These organizations convene volunteers to develop model standards, encourage regulators to make those standards into mandatory laws, and then sell copies of those laws to the people (and city and state governments) that have to follow and enforce them.

They’ve claimed it’s their copyrighted material. But court after court has said that you can’t use copyright in this way—no one “owns” the law. The Pro Codes Act undermines that rule and the public interest, changing the law to state that the standards organizations that write these rules “shall retain” copyright in them, as long as the rules are made “publicly accessible” online.

That’s not nearly good enough. These organizations already have so-called online reading rooms that aren’t searchable, aren’t accessible to print-disabled people, and condition your ability to read mandated codes on agreeing to onerous terms of use, among many other problems. That’s why the Association of Research Libraries sent a letter to Congress last week (supported by EFF, disability rights groups, and many others) explaining how the Pro Codes Act would trade away our right to truly understand and educate our communities about the law for cramped public access to it. Congress must not let well-positioned industry associations abuse copyright to control how you access, use, and share the law. Now that this bill has passed committee, we urgently need your help—tell Congress to reject the Pro Codes Act.

TAKE ACTION

TELL CONGRESS: No one owns the law

Joe Mullin

Two Years Post-Roe: A Better Understanding of Digital Threats

3 days 14 hours ago

It’s been a long two years since the Dobbs decision overturned Roe v. Wade. Between May 2022, when the Supreme Court’s draft opinion was leaked, and the following June, when the case was decided, there was a mad scramble to figure out what the impacts would be. Besides the obvious perils of stripping away half the country’s right to reproductive healthcare, digital surveillance and mass data collection caused a flurry of concerns.

Although many activists fighting for reproductive justice had been operating under assumptions of little to no legal protections for some time, the Dobbs decision was for most a sudden and scary revelation. Everyone implicated in that moment somewhat understood the stark difference between pre-Roe 1973 and post-Roe 2022; living under the most sophisticated surveillance apparatus in human history presents a vastly different landscape of threats. Since 2022, some suspicions have been confirmed, new threats have emerged, and overall our risk assessment has grown smarter. Below, we cover the most pressing digital dangers facing people seeking reproductive care, and ways to combat them.

Digital Evidence in Abortion-Related Court Cases: Some Examples

Social Media Message Logs

A case in Nebraska resulted in a woman, Jessica Burgess, being sentenced to two years in prison for obtaining abortion pills for her teenage daughter. Prosecutors used a Facebook Messenger chat log between Jessica and her daughter as key evidence, bolstering the concerns many had raised about using such privacy-invasive tech products for sensitive communications. At the time, Facebook Messenger did not have end-to-end encryption.

In response to criticisms about Facebook’s cooperation with law enforcement that landed a mother in prison, a Meta spokesperson issued a frustratingly laconic tweet stating that “[n]othing in the valid warrants we received from local law enforcement in early June, prior to the Supreme Court decision, mentioned abortion.” They followed this up with a short statement to the same effect. The lesson is clear: although companies do sometimes push back against data warrants, we have to prepare for the likelihood that they won’t.

Google: Search History & Warrants

Well before the Dobbs decision, prosecutors had already used Google Search history to indict a woman for her pregnancy outcome. In this case, it was keyword searches for misoprostol (a safe and effective abortion medication) that clinched the prosecutor’s evidence against her. Google acquiesced, as it so often has, to the warrant request.

Related to this is the ongoing and extremely complicated territory of reverse keyword and geolocation warrants. Google has promised that it would remove from user profiles all location data history related to abortion clinic sites. Researchers tested this claim and it was shown to be false, twice. Late in 2023, Google made a bigger promise: it would soon change how it stores location data to make it much more difficult–if not impossible–for Google to provide mass location data in response to a geofence warrant, a change we’ve been asking Google to implement for years. This would be a genuinely helpful measure, but we’ve been conditioned to approach such claims with caution. We’ll believe it when we see it (and refer to external testing for proof).

Other Dangers to Consider

Doxxing

Sites propped up for doxxing healthcare professionals who offer abortion services are about as old as the internet itself. Doxxing comes in a variety of forms, but a quick and loose definition is the weaponization of open source intelligence with the intention of escalating to other harms. There’s been a massive increase in hate groups abusing public records requests and data broker collections to publish personal information about healthcare workers, and the doxxing websites hosting such material are updated frequently. Doxxing has led to steadily rising material dangers (targeted harassment, gun violence, and arson, to name a few) over the past few years.

There are some piecemeal attempts at data protection for healthcare workers in more protective states like California (one of which we’ve covered). Other states may offer some form of address confidentiality program that provides people with proxy addresses. Though these can be effective, they are not comprehensive. Since doxxing campaigns are typically coordinated through a combination of open source intelligence tactics, they present a particularly difficult threat to protect against. This is especially true for government and medical industry workers whose information may be exposed through public records requests.

Data Brokers

Recently, Senator Wyden’s office released a statement about a long investigation into Near Intelligence, a data broker company that sold geolocation data to The Veritas Society, an anti-choice think tank. The Veritas Society then used the geolocation data to target individuals who had traveled near healthcare clinics that offered abortion services and delivered pro-life advertisements to their devices.

That alone is a stark example of the dangers of commercial surveillance, but it’s still unclear what other ways this type of dataset could be abused. Near Intelligence has filed for bankruptcy, but they are far from the only, or the most pernicious, data broker company out there. This situation bolsters what we’ve been saying for years: the data broker industry is a dangerously unregulated mess of privacy threats that needs to be addressed. It not only contributes to the doxxing campaigns described above, but essentially creates a backdoor for warrantless surveillance.

Domestic Terrorist Threat Designation by Federal Agencies

Midway through 2023, The Intercept published an article about a tenfold increase in federal designation of abortion-rights activist groups as domestic terrorist threats. This projects a massive shadow of risk for organizers and activists at work in the struggle for reproductive justice. The digital surveillance capabilities of federal law enforcement are more sophisticated than that of typical anti-choice zealots. Most people in the abortion access movement may not have to worry about being labeled a domestic terrorist threat, though for some that is a reality, and strategizing against it is vital.

Looming Threats

Legal Threats to Medication Abortion

Last month, the Supreme Court heard oral arguments challenging the FDA’s approval of and regulations governing mifepristone, a widely available and safe abortion pill. If the anti-abortion advocates who brought this case succeed, access to the most common medication abortion regimen used in the U.S. would end across the country—even in those states where abortion rights are protected.

Access to abortion medication might also be threatened by a 150-year-old obscenity law. Many people now recognize the long-dormant Comstock Act as a potential avenue to criminalize procurement of the abortion pill.

Although the outcomes of these legal challenges are yet to be determined, it’s reasonable to prepare for the worst: if there is no longer a way to access medication abortion legally, there will be even more surveillance of the digital footprints prescribers and patients leave behind.

Electronic Health Records Systems

Electronic Health Records (EHRs) are digital collections of patient medical information, meant to be easily stored and shared between medical facilities and providers. Since abortion restrictions are now dictated on a state-by-state basis, the sharing of these records across state lines presents a serious matrix of concerns.

As some academics and privacy advocates have outlined, the interoperability of EHRs can jeopardize the safety of patients when reproductive healthcare data is shared across state lines. Although the Department of Health and Human Services has proposed a new rule to help protect sensitive EHR data, it’s currently possible that data shared between EHRs can lead to the prosecution of people seeking or providing reproductive healthcare.

The Good Stuff: Protections You Can Take

Perhaps the most frustrating aspect of what we’ve covered thus far is how much is beyond individual control. It’s completely understandable to feel powerless against these monumental threats. That said, you aren’t powerless. Much can be done to protect your digital footprint, and thus, your safety. We don’t propose reinventing the wheel when it comes to digital security and data privacy. Instead, rely on the resources that already exist and re-tool them to fit your particular needs. Here are some good places to start:

Create a Security Plan

It’s impossible, and generally unnecessary, to implement every privacy and security tactic or tool out there. What’s more important is figuring out the specific risks you face and finding the right ways to protect against them. This process takes some brainstorming around potentially scary topics, so it’s best done well before you are in any kind of crisis. Pen and paper works best. Here's a handy guide.

After you’ve answered those questions and figured out your risks, it’s time to locate the best ways to protect against them. Don’t sweat it if you’re not a highly technical person; many of the strategies we recommend can be applied in non-tech ways.

Careful Communications

Secure communication is as much a frame of mind as it is a type of tech product. When you are able to identify which aspects of your life need to be spoken about more carefully, you can then make informed decisions about who to trust with what information, and when. It’s as much about creating ground rules with others about types of communication as it is about normalizing the use of privacy technologies.

Assuming you’ve already created a security plan and identified some risks you want to protect against, begin thinking about the communication you have with others involving those things. Set some rules for how you broach those topics, where they can be discussed, and with whom. Sometimes this might look like the careful development of codewords. Sometimes it’s as easy as saying “let’s move this conversation to Signal.” Now that Signal supports usernames (so you can keep your phone number private), as well as disappearing messages, it’s an obvious tech choice for secure communication.

Compartmentalize Your Digital Activity

As mentioned above, it’s important to know when to compartmentalize sensitive communications to more secure environments. You can expand this idea to other parts of your life. For example, you can designate different web browsers for different use cases, choosing those browsers for the privacy they offer. One might offer significant convenience for day-to-day casual activities (like Chrome), whereas another is best suited for activities that require utmost privacy (like Tor).

Now apply this thought process towards what payment processors you use, what registration information you give to social media sites, what profiles you keep public versus private, how you organize your data backups, and so on. The possibilities are endless, so it’s important that you prioritize only the aspects of your life that most need protection.

Security Culture and Community Care

Both tactics mentioned above incorporate a sense of community when it comes to our privacy and security. We’ve said it before and we’ll say it again: privacy is a team sport. People live in communities built on trust and care for one another; your digital life is imbricated with others in the same way.

If a node on a network is compromised, it will likely implicate others on the same network, and this principle of computer network security is just as applicable to social networks. Although traditional information security often builds from a paradigm of “zero trust,” we are social creatures and must work against that idea. It’s more about incorporating elements of shared trust and pushing for a culture of security.

Sometimes this looks like setting standards for how information is articulated and shared within a trusted group. Sometimes it looks like choosing privacy-focused technologies to serve a community’s computing needs. The point is to normalize these types of conversations, to let others know that you’re caring for them by attending to your own digital hygiene. For example, when you ask for consent to share images that include others from a protest, you are not only pushing for a culture of security, but normalizing the process of asking for consent. This relationship of community care through data privacy hygiene is reciprocal.

Help Prevent Doxxing

As touched on above in the section on other dangers to consider, doxxing can be frustratingly difficult to protect against, especially when it’s public records that are being used against you. It’s worth looking into your state-level voter registration records, whether that information is public, and how you can request that it be redacted (success may vary by state).

Similarly, although business registration records are publicly available, you can appeal to websites that mirror that information (like Bizapedia) to have your personal information taken down. This is of course only a concern if you have a business registration tied to your personal address.

If you work for a business that is susceptible to public records requests revealing sensitive personal information about you, there’s little to be done to prevent it. You can, however, apply for an address confidentiality program if your state has one. You can also do the somewhat tedious work of scrubbing your personal information from other places online (since doxxing often combines multiple information sources). Consider subscribing to a service like DeleteMe (or following a free DIY guide) for a more thorough process of minimizing your digital footprint. Collaborating with trusted allies to monitor hate forums is a smart way to unburden yourself from having to look up your own information alone; sharing that responsibility makes the work easier and lets the group plan together for prevention and incident response.

Take a Deep Breath

It’s natural to feel bogged down by all the thought that has to be put towards privacy and security. Again, don’t beat yourself up for feeling powerless in the face of mass surveillance. You aren’t powerless. You can protect yourself, but it’s reasonable to feel frustrated when there is no comprehensive federal data privacy legislation that would alleviate so many of these concerns.

Take a deep breath. You’re not alone in this fight. There are guides to help you learn more about stepping up your privacy and security, and we’ve even curated a special list of them. There is also the Digital Defense Fund, a digital security organization for the abortion access movement, which we are grateful and proud to boost. And though it can often feel like privacy is getting harder to protect, in many ways it’s actually improving. With all that information, continued trust in your communities, and a culture of security within them, safety is much easier to attain. With a bit of privacy, you can go back to focusing on what matters, like healthcare.

Daly Barnett

Fourth Amendment is Not For Sale Act Passed the House, Now it Should Pass the Senate

3 days 19 hours ago

The Fourth Amendment is Not For Sale Act, H.R.4639, originally introduced in the Senate by Senator Ron Wyden in 2021, has now made the important and historic step of passing the U.S. House of Representatives. In an era when it often seems like Congress cannot pass much-needed privacy protections, this is a victory for vulnerable populations, people who want to make sure their location data is private, and the hard-working activists and organizers who have pushed for the passage of this bill.

Every day, your personal information is harvested by your smartphone applications, sold to data brokers, and used by advertisers hoping to sell you things. But what safeguards prevent the government from shopping in that same data marketplace? Mobile data regularly bought and sold, like your geolocation, is information that law enforcement or intelligence agencies would normally need a warrant to acquire. But no warrant is required for those agencies to simply buy the data. The U.S. government has been using such purchases as a loophole for acquiring personal information on individuals without a warrant.

Now is the time to close that loophole.

At EFF, we’ve been talking about the need to close the data broker loophole for years. We even launched a massive investigation into the data broker industry, which revealed Fog Data Science, a company that has claimed in marketing materials that it has “billions” of data points about “over 250 million” devices, and that its data can be used to learn where its subjects work and live and who their associates are. We found that close to 20 law enforcement agencies used or were offered this tool.

It’s time for the Senate to close this incredibly dangerous and invasive loophole. If police want a person’s—or a whole community’s—location data, they should have to get a warrant to see it.

Take action

TELL CONGRESS: 702 Needs Serious Reforms

Matthew Guariglia

About Face (Recognition) | EFFector 36.5

4 days 18 hours ago

There are a lot of updates in the fight for our freedoms online: a last-minute reauthorization bill to expand Section 702 (tell your senators to vote NO on the bill here!), a proposed federal consumer data privacy law (we deserve better!), and a recent draft from the FCC to reinstate net neutrality (you can help clean it up!).

It can feel overwhelming to stay up to date, but we've got you covered with our EFFector newsletter! You can read the full issue here, or subscribe to get the next one in your inbox automatically! You can also listen to the audio version of the newsletter on the Internet Archive, or by clicking the button below:

LISTEN ON YouTube

EFFECTOR 36.5 - About Face (Recognition)

Since 1990 EFF has published EFFector to help keep readers on the bleeding edge of their digital rights. We know that the intersection of technology, civil liberties, human rights, and the law can be complicated, so EFFector is a great way to stay on top of things. The newsletter is chock full of links to updates, announcements, blog posts, and other stories to help keep readers—and listeners—up to date on the movement to protect online privacy and free expression. 

Thank you to the supporters around the world who make our work possible! If you're not a member yet, join EFF today to help us fight for a brighter digital future.

Christian Romero

How Political Campaigns Use Your Data to Target You

5 days 16 hours ago

Data about potential voters—who they are, where they are, and how to reach them—is an extremely valuable commodity during an election year. And while the right to a secret ballot is a cornerstone of the democratic process, your personal information is gathered, used, and sold along the way. It's not possible to fully shield yourself from all this data processing, but you can take steps to at least minimize and understand it.

Political campaigns use the same invasive tricks that behavioral ads do—pulling in data from a variety of sources online to create a profile—so they can target you. Your digital trail is a critical tool for campaigns, but the process starts in the real world, where longstanding techniques to collect data about you can be useful indicators of how you'll vote. This starts with voter records.

Your IRL Voting Trail Is Still Valuable

Politicians have long had access to public data, like voter registration, party registration, address, and participation information (whether or not a voter voted, not who they voted for). Online access to such records has made them easier to get in some states, with unintended consequences, like doxxing.

Campaigns can purchase this voter information from most states. These records provide a rough idea of whether that person will vote or not, and—if they're registered to a particular party—who they might lean toward voting for. Campaigns use this to put every voter into broad categories, like "supporter," "non-supporter," or "undecided." Campaigns gather such information at in-person events, too, like door-knocking and rallies, where you might sign up for emails or phone calls.

Campaigns also share information about you with other campaigns, so if you register with a candidate one year, it's likely that information will go to another in the future. For example, the website for Adam Schiff’s campaign to serve as U.S. Senator from California has a privacy policy with this line under “Sharing of Information”:

With organizations, candidates, campaigns, groups, or causes that we believe have similar political viewpoints, principles, or objectives or share similar goals and with organizations that facilitate communications and information sharing among such groups

Similar language can be found on other campaign sites, including those for Elizabeth Warren and Ted Cruz. These candidate lists are valuable, and are often shared within the national party. In 2017, the Hillary Clinton campaign gave its email list to the Democratic National Committee, a contribution valued at $3.5 million.

If you live in a state with citizen initiative ballot measures, data collected from signature sheets might be shared or used as well. Signing a petition doesn't necessarily mean you support the proposed ballot measure—it's just saying you think it deserves to be put on the ballot. But in most states, these signature pages will remain a part of the public record, and the information you provide may get used for mailings or other targeted political ads. 

How Those Voter Records, and Much More, Lead to Targeted Digital Ads

All that real world information is just one part of the puzzle these days. Political campaigns tap into the same intrusive adtech tracking systems used to deliver online behavioral ads. We saw a glimpse into how this worked after the Cambridge Analytica scandal, and the system has only grown since then.

Specific details are often a mystery, as a political advertising profile may be created by combining disparate information—from consumer scoring data brokers like Acxiom or Experian, smartphone data, and publicly available voter information—into a jumble of data points that’s often hard to trace in any meaningful way. A simplified version of the whole process might go something like this:

  1. A campaign starts with its voter list, which includes names, addresses, and party affiliation. It may have purchased this from the state or its own national committee, or collected some of it for itself through a website or app.
  2. The campaign then turns to a data broker to enhance this list with consumer information. The data broker combines the voter list with its own data, then creates a behavioral profile using inferences based on your shopping, hobbies, demographics, and more. The campaign looks this all over, then chooses some categories of people it thinks will be receptive to its messages in its various targeted ads.
  3. Finally, the campaign turns to an ad targeting company to get the ad on your device. Some ad companies might use an IP address to target the ad to you. As The Markup revealed, other companies might target you based on your phone's location, which is particularly useful in reaching voters not in the campaign's files. 
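The three steps above can be sketched as a toy data pipeline. Everything here is hypothetical for illustration: the records, the `likely_turnout` score, and the name-and-address matching are simplified stand-ins for what brokers and campaigns actually do.

```python
# Illustrative sketch of the targeting pipeline described above.
# All records, categories, and scores are made up for this example.

# Step 1: the campaign starts with a voter file.
voter_file = [
    {"name": "Pat Jones", "address": "12 Oak St", "party": "unaffiliated"},
    {"name": "Sam Lee", "address": "9 Elm Ave", "party": "Party A"},
]

# Step 2: a broker matches the file (here, on name and address) against
# its own consumer data and attaches behavioral inferences.
broker_profiles = {
    ("Pat Jones", "12 Oak St"): {"interests": ["outdoors"], "likely_turnout": 0.4},
    ("Sam Lee", "9 Elm Ave"): {"interests": ["streaming"], "likely_turnout": 0.9},
}

def enrich(voters, profiles):
    """Merge broker inferences into each voter record."""
    enriched = []
    for v in voters:
        key = (v["name"], v["address"])
        enriched.append({**v, **profiles.get(key, {})})
    return enriched

# Step 3: the campaign picks a segment to hand to an ad targeting company,
# e.g. low-turnout voters for get-out-the-vote ads.
def select_targets(enriched, max_turnout=0.5):
    return [v["name"] for v in enriched if v.get("likely_turnout", 1.0) < max_turnout]

audience = select_targets(enrich(voter_file, broker_profiles))
print(audience)  # ['Pat Jones']
```

The real systems work at the scale of hundreds of millions of records and far murkier matching logic, but the basic shape, join on identity, infer, segment, is the same.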

In 2020, Open Secrets found political groups paid 37 different data brokers at least $23 million for access to services or data. These data brokers collect information from browser cookies, web beacons, mobile phones, social media platforms, and more. They found that some companies specialize in more general data, while others, like i360, TargetSmart, and Grassroots Analytics, focus on data useful to campaigns or advocacy.

A sample of some categories and inferences in a political data broker file that we received through a CCPA request shows the wide variety of assumptions these companies may make.

These political data brokers make a lot of promises to campaigns. TargetSmart claims to have 171 million highly accurate cell phone numbers, and i360 claims to have data on 220 million voters. They also tend to offer specialized campaign categories that go beyond the offerings of consumer-focused data brokers. Check out data broker L2’s “National Models & Predictive Analytics” page, which breaks down interests, demographics, and political ideology—including details like "Voter Fraud Belief," and "Ukraine Continue." The New York Times demonstrated a particularly novel approach to these sorts of profiles where a voter analytics firm created a “Covid concern score” by analyzing cell phone location, then ranked people based on travel patterns during the pandemic.

Some of these companies target based on location data. For example, El Toro claims to have once “identified over 130,000 IP-matched voter homes that met the client’s targeting criteria. El Toro served banner and video advertisements up to 3 times per day, per voter household – across all devices within the home.”

That “all devices within the home” claim may prove important in the coming elections: as streaming video services integrate more ad-based subscription tiers, that likely means more political ads this year. One company, AdImpact, projects $1.3 billion in political ad spending on “connected television” ads in 2024. This may be driven in part by the move away from tracking cookies, which makes web browsing data less appealing.

In the case of connected televisions, ads can also integrate data based on what you've watched, using information collected through automated content recognition (ACR). Streaming device maker and service provider Roku's pitch to potential political advertisers is straightforward: “there’s an opportunity for campaigns to use their own data like never before, for instance to reach households in a particular district where they need to get out the vote.” Roku claims to have at least 80 million users. As a platform for televisions and “streaming sticks,” and especially if you opted into ACR (we’ll detail how to check below), Roku can collect and use a lot of your viewing data, ranging from apps to broadcast TV and even video games.

This is vastly different from traditional broadcast TV ads, which might be targeted broadly based on a city or state, and the show being aired. Now, a campaign can target an ad at one household, but not their neighbor, even if they're watching the same show. Of the main streaming companies, only Amazon and Netflix don’t accept political ads.

Finally, there are Facebook and Google, two companies that have amassed a mountain of data points about all their users, and which allow campaigns to target based on some of those factors. According to at least one report, political ad spending on Google (mostly through YouTube) is projected to be $552 million, while Facebook is projected at $568 million. Unlike the data brokers discussed above, most of what you see on Facebook and Google is derived from the data collected by the company from its users. This may make it easier to understand why you’re seeing a political ad, for example, if you follow or view content from a specific politician or party, or about a specific political topic.

What You Can Do to Protect Your Privacy

Managing the flow of all this data might feel impossible, but you can take a few important steps to minimize what’s out there. The chances you’ll catch everything are low, but minimizing what is accessible is still a privacy win.

Install Privacy Badger
Considering how much data is collected just from your day-to-day web browsing, it’s a good idea to protect that first. The simplest way to do so is with our own tracking blocker extension, Privacy Badger.

Disable Your Phone Advertising ID and Audit Your Location Settings
Your phone has an ad identifier that makes it simple for advertisers to track and collate everything you do. Thankfully, you can make this much harder for those advertisers by disabling it:

  • On iPhone: Head into Settings > Privacy & Security > Tracking, and make sure “Allow Apps to Request to Track” is disabled. 
  • On Android: Open Settings > Security & Privacy > Privacy > Ads, and select “Delete advertising ID.”

Similarly, as noted above, your location is a valuable asset for campaigns. They can collect your location through data brokers, which usually get it from otherwise unaffiliated apps. This is why it's a good idea to limit what sorts of apps have access to your location:

  • On iPhone: open Settings > Privacy & Security > Location Services, and disable access for any apps that do not need it. You can also set location access to "While Using the App" for apps where location is helpful but constant tracking is unnecessary. Also, consider disabling "Precise Location" for any apps that don't need your exact location (for example, your GPS navigation app needs precise location, but a weather app does not).
  • On Android: Open Settings > Location > App location permissions, and confirm that no apps are accessing your location that you don't want to. As with iOS, you can set it to "Allow only while using the app," for apps that don't need it all the time, and disable "Use precise location," for any apps that don't need exact location access.

Opt Out of Tracking on Your TV or Streaming Device, and Any Video Streaming Service
Nearly every brand of TV is connected to the internet these days. Consumer Reports has a guide for disabling what you can on most popular TVs and software platforms. If you use an Apple TV, you can disable the ad identifier following the exact same directions as on your phone.

Since the passage of a number of state privacy laws, streaming services, like other sites, have offered a way for users to opt out of the sale of their info. Many have extended this right outside of states that require it. You'll need to be logged into your streaming service account to take action on most of these, but TechHive has a list of opt out links for popular streaming services to get you started. Select the "Right to Opt Out" option, when offered.

Don't Click on Links in (or Respond to) Political Text Messages
You've likely been receiving political texts for much of the past year, and that's not going to let up until election day. It is increasingly difficult to decipher whether they're legitimate or spam, and with links that often use a URL shortener or odd looking domains, it's best not to click them. If there's a campaign you want to donate to, head directly to the site of the candidate or ballot sponsor.

Create an Alternate Email and Phone Number for Campaign Stuff
If you want to keep updated on campaign or ballot initiatives, consider setting up an email specifically for that, and nothing else. Since a phone number is also often required, it's a good idea to set up a secondary phone number for these same purposes (you can do so for free through services like Google Voice).

Keep an Eye Out for Deceptive Check Boxes
Speaking of signing up for updates, watch for forms that sign you up for emails you didn't intend to receive. Campaigns might use pre-selected options for everything from donation amounts to signing up for a newsletter. So, when you sign up with any campaign, keep an eye on any options you might not intend to opt into.

Mind Your Social Media
Now's a great time to take any sort of "privacy checkup" available on whatever social media platforms you use to help minimize any accidental data sharing. Even though you can't completely opt out of behavioral advertising on Facebook, review your ad preferences and opt out of whatever you can. Also be sure to disable access to off-site activity. You should also opt out of personalized ads on Google's services. You cannot disable behavioral ads on TikTok, but the company doesn't allow political ads.

If you're curious to learn more about why you're seeing an ad to begin with, on Facebook you can always click the three-dot icon on an ad, then click "Why am I seeing this ad?" to learn more. For ads on YouTube, you can click the "More" button and then "About this advertiser" to see some information about who placed the ad. Anywhere else you see a Google ad you can click the "Adchoices" button and then "Why this ad?"

You shouldn't need to spend an afternoon jumping through opt out hoops and tweaking privacy settings on every device you own just so you're not bombarded with highly targeted ads. That’s why EFF supports comprehensive consumer data privacy legislation, including a ban on online behavioral ads.

Democracy works because we participate, and you should be able to do so without sacrificing your privacy. 

Thorin Klosowski

Speaking Freely: Lynn Hamadallah

5 days 16 hours ago

Lynn Hamadallah is a Syrian-Palestinian-French Psychologist based in London. An outspoken voice for the Palestinian cause, Lynn is interested in the ways in which narratives, spoken and unspoken, shape identity. Having lived in five countries and spent a lot of time traveling, she takes a global perspective on freedom of expression. Her current research project investigates how second-generation British-Arabs negotiate their cultural identity. Lynn works in a community mental health service supporting some of London's most disadvantaged residents, many of whom are migrants who have suffered extensive psychological trauma.

York: What does free speech or free expression mean to you? 

Being Arab and coming from a place where there is much more speech policing in the traditional sense, I suppose there is a bit of an idealization of Western values of free speech and democracy. There is this sense of freedom we grow up associating with the West. Yet recently, we’ve come to realize that the way it works in practice is quite different to the way it is described, and this has led to a lot of disappointment and disillusionment in the West and its ideals amongst Arabs. There’s been a lot of censorship for example on social media, which I’ve experienced myself when posting content in support of Palestine. At a national level, we have witnessed the dehumanization going on around protesters in the UK, which undermines the idea of free speech. For example, the pro-Palestine protests where we saw the then-Home Secretary Suella Braverman referring to protesters as “hate marchers.” So we’ve come to realize there’s this kind of veneer of free speech in the West which does not really match up to the more idealistic view of freedom we were taught about.

With the increased awareness we have gained as a result of the latest aggression going on in Palestine, actually what we’re learning is that free speech is just another arm of the West to support political and racist agendas. It’s one of those things that the West has come up with which only applies to one group of people and oppresses another. It’s the same as with human rights you know - human rights for who? Where are Palestinians’ human rights? 

We’ve seen free speech being weaponized to spread hate and desecrate Islam, for example, in the case of Charlie Hebdo and the Quran burning in Denmark and in Sweden. The argument put forward was that those cases represented instances of free speech rather than hate speech. But actually to millions of Muslims around the world those incidents were very, very hateful. They were acts of violence not just against their religious beliefs but right down to their sense of self. It’s humiliating to have a part of your identity targeted in that way with full support from the West, politicians and citizens alike. 

And then, when we— we meaning Palestinians and Palestine allies—want to leverage this idea of free speech to speak up against the oppression happening by the state of Israel, we see time and time again accusations flying around: hate speech, anti-semitism, and censorship. Heavy, heavy censorship everywhere. So that’s what I mean when I say that free speech in the West is a racist concept, actually. And I don’t know that true free speech exists anywhere in the world really. In the Middle East we don’t have democracies but at least there’s no veneer of democracy— the messaging and understanding is clear. Here, we have a supposed democracy, but in practice it looks very different. And that’s why, for me, I don’t really believe that free speech exists. I’ve never seen a real example of it. I think as long as people are power hungry there’s going to be violence, and as long as there’s violence, people are going to want to hide their crimes. And as long as people are trying to hide their crimes there’s not going to be free speech. Sorry for the pessimistic view!

York: It’s okay, I understand where you’re coming from. And I think that a lot of those things are absolutely true. Yet, from my perspective, I still think it’s a worthy goal even though governments—and organizationally we’ve seen this as well—a lot of times governments do try to abuse this concept. So I guess then I would just as a follow-up, do you feel that despite these issues that some form of universalized free expression is still a worthy ideal? 

Of course, I think it’s a worthy ideal. You know, even with social media – there is censorship. I’ve experienced it and it’s not just my word and an isolated incident. It’s been documented by Human Rights Watch—even Meta themselves! They did an internal investigation in 2021—Meta had a nonprofit called Business for Social Responsibility do an investigation and produce a report—and they’ve shown there was systemic censorship of Palestine-related content. And they’re doing it again now. That being said, I do think social media is making free speech more accessible, despite the censorship. 

And I think—to your question—free speech is absolutely worth pursuing. Because we see that despite these attempts at censorship, the truth is starting to come out. Palestine support is stronger than it’s ever been. To the point where we’ve now had South Africa take Israel to trial at the International Court of Justice for genocide, using evidence from social media videos that went viral. So what I’m saying is, free speech has the power to democratize demanding accountability from countries and creating social change, so yes, absolutely something we should try to pursue. 

York: You just mentioned two issues close to my heart. One is the issues around speech on social media platforms, and I’ve of course followed and worked on the Palestinian campaigns quite closely and I’m very aware of the BSR report. But also, video content, specifically, that’s found on social media being used in tribunals. So let me shift this question a bit. You have such a varied background around the world. I’m curious about your perspective over the past decade or decade and a half since social media has become so popular—how do you feel social media has shaped people’s views or their ability to advocate for themselves globally? 

So when we think about stories and narratives, something I’m personally interested in, we have to think about which stories get told and which stories remain untold. These stories and their telling is very much controlled by the mass media— BBC, CNN, and the like. They control the narrative. And I guess what social media is doing is it’s giving a voice to those who are often voiceless. In the past, the issue was that there was such a monopoly over mouthpieces. Mass media were so trusted, to the point where no one would have paid attention to these alternative viewpoints. But what social media has done… I think it’s made people become more aware or more critical of mass media and how it shapes public opinion. There’s been a lot of exposure of their failures, for example, like that video that went viral of Egyptian podcaster and activist Rahma Zain confronting CNN’s Clarissa Ward at the Rafah border about their biased reporting of the genocide in Palestine. I think that confrontation spoke to a lot of people. She was shouting “You own the narrative, this is our problem. You own the narrative, you own the United Nations, you own Hollywood, you own all these mouthpieces— where are our voices?! Our voices need to be heard!” It was SO powerful and that video really spoke to the sentiment of many Arabs who have felt angry, betrayed and abandoned by the West’s ideals and their media reporting.

Social media is providing a voice to more diverse people, elevating them and giving the public more control around narratives. Another example we’ve seen recently is around what’s currently happening in Sudan and the Democratic Republic of Congo. These horrific events and stories would never have had much of a voice or exposure on the global stage before. And now people all over the world are paying more attention and advocating for Sudanese and Congolese rights, thanks to social media. 

I personally was raised with quite a critical view of mass media, I think in my family there was a general distrust of the West, their policies and their media, so I never really relied personally on the media as this beacon of truth, but I do think that’s an exception. I think the majority of people rely on mass media as their source of truth. So social media plays an important role in keeping them accountable and diversifying narratives.

York: What are some of the biggest challenges you see right now anywhere in the world in terms of the climate for free expression for Palestinian and other activism? 

I think there’s two strands to it. There’s the social media strand. And there’s the governmental policies and actions. So I think on social media, again, it’s very documented, but it’s this kind of constant censorship. People want to be able to share content that matters to them, to make people more aware of global issues and we see time and time again viewership going down, content being deleted or reports from Meta of alleged hate speech or antisemitism. And that’s really hard. There’ve been random strategies that have popped up to increase social media engagement, like posting random content unrelated to Palestine or creating Instagram polls for example. I used to do that, I interspersed Palestine content with random polls like, “What’s your favorite color?” just to kind of break up the Palestine content and boost my engagement. And it was honestly so exhausting. It was like… I’m watching a genocide in real time, this is an attack on my people and now I’m having to come up with silly polls? Eventually I just gave up and accepted my viewership as it was, which was significantly lower.

At a government level, which is the other part of it, there’s this challenge of constant intimidation that we’re witnessing. I just saw recently there was a 17-year-old boy who was interviewed by the counterterrorism police at an airport because he was wearing a Palestinian flag. He was interrogated about his involvement in a Palestinian protest. When has protesting become a crime and what does that say about democratic rights and free speech here in the UK? And this is one example, but there are so many examples of policing, there was even talk of banning protests altogether at one point. 

The last strand I’d include, actually, that I already touched on, is the mass media. Just recently we’ve seen the BBC reporting on the ICJ hearing, they showed the Israeli defense part, but they didn’t even show the South African side. So this censorship is literally in plain sight and poses a real challenge to the climate of free expression for Palestine activism.

York: Who is your free speech hero? 

Off the top of my head I’d probably say Mohammed El-Kurd. I think he’s just been so unapologetic in his stance. Not only that but I think he’s also made us think critically about this idea of narrative and what stories get told. I think it was really powerful when he was arguing the need to stop giving the West and mass media this power, and that we need to disempower them by ceasing to rely on them as beacons of truth, rather than working on changing them. Because, as he argues, oppressors who have monopolized and institutionalized violence will never ever tell the truth or hold themselves to account. Instead, we need to turn to Palestinians, and to brave cultural workers, knowledge producers, academics, journalists, activists, and social media commentators who understand the meaning of oppression and view them as the passionate, angry and, most importantly, reliable narrators that they are.

Jillian C. York

Americans Deserve More Than the Current American Privacy Rights Act

5 days 16 hours ago

EFF is concerned that a new federal bill would freeze consumer data privacy protections in place, by preempting existing state laws and preventing states from creating stronger protections in the future. Federal law should be the floor on which states can build, not a ceiling.

We also urge the authors of the American Privacy Rights Act (APRA) to strengthen other portions of the bill. It should be easier to sue companies that violate our rights. The bill should limit sharing with the government and expand the definition of sensitive data. And it should narrow exceptions that allow companies to exploit our biometric information, our so-called “de-identified” data, and our data obtained in corporate “loyalty” schemes.

Despite our concerns with the APRA bill, we are glad Congress is pivoting the debate to a privacy-first approach to online regulation. Reining in companies’ massive collection, misuse, and transfer of everyone’s personal data should be the unifying goal of those who care about the internet. This debate has been absent at the federal level in the past year, giving breathing room to flawed bills that focus on censorship and content blocking, rather than privacy.

In general, the APRA would require companies to minimize their processing of personal data to what is necessary, proportionate, and limited to certain enumerated purposes. It would specifically require opt-in consent for the transfer of sensitive data, and most processing of biometric and genetic data. It would also give consumers the right to access, correct, delete, and export their data. And it would allow consumers to universally opt-out of the collection of their personal data from brokers, using a registry maintained by the Federal Trade Commission.

We welcome many of these privacy protections. Below are a few of our top priorities to correct and strengthen the APRA bill.

Allow States to Pass Stronger Privacy Laws

The APRA should not preempt existing and future state data privacy laws that are stronger than the current bill. The ability to pass stronger bills at the state and local level is an important tool in the fight for data privacy. We ask that Congress not compromise our privacy rights by undercutting the very state-level action that spurred this compromise federal data privacy bill in the first place.

Subject to exceptions, the APRA says that no state may “adopt, maintain, enforce, or continue in effect” any state-level privacy requirement addressed by the new bill. APRA would allow many state sectoral privacy laws to remain, but it would still preempt protections for biometric data, location data, online ad tracking signals, and maybe even privacy protections in state constitutions or some other limits on what private companies can share with the government. At the federal level, the APRA would also wrongly preempt many parts of the federal Communications Act, including provisions that limit a telephone company’s use, disclosure, and access to customer proprietary network information, including location information.

Just as important, it would prevent states from creating stronger privacy laws in the future. States are more nimble at passing laws to address new privacy harms as they arise, compared to Congress which has failed for decades to update important protections. For example, if lawmakers in Washington state wanted to follow EFF’s advice to ban online behavioral advertising or to allow its citizens to sue companies for not minimizing their collection of personal data (provisions where APRA falls short), state legislators would have no power to do so under the new federal bill.

Make It Easier for Individuals to Enforce Their Privacy Rights

The APRA should prevent coercive forced arbitration agreements and class action waivers, allow people to sue for statutory damages, and allow them to bring their case in state court. These rights would allow for rigorous enforcement and help force companies to prioritize consumer privacy.

The APRA has a private right of action, but it is a half-measure that still lets companies side-step many legitimate lawsuits. And the private right of action does not apply to some of the most important parts of the law, including the central data minimization requirement.

The favorite tool of companies looking to get rid of privacy lawsuits is to bury provisions in their terms of service that force individuals into private arbitration and prevent class action lawsuits. The APRA does not address class action waivers and only prevents forced arbitration for children and people who allege “substantial” privacy harm. Statutory damages and enforcement in state courts are also essential, because many times federal courts still struggle to acknowledge privacy harm as real—relying instead on a cramped view that does not recognize privacy as a human right. Finally, the bill would allow companies to cure violations rather than face a lawsuit, incentivizing companies to skirt the law until they are caught.

Limit Exceptions for Sharing with the Government

APRA should close a loophole that may allow data brokers to sell data to the government and should require the government to obtain a court order before compelling disclosure of user data. This is important because corporate surveillance and government surveillance are often the same.

Under the APRA, government contractors do not have to follow the bill’s privacy protections. Those include any “entity that is collecting, processing, retaining, or transferring covered data on behalf of a Federal, State, Tribal, territorial, or local government entity, to the extent that such entity is acting as a service provider to the government entity.” Read broadly, this provision could exempt data brokers who sell biometric information and location information to the government. In fact, Clearview AI previously argued it was exempt from Illinois’ strict biometric law using a similar contractor exception. This is a point that needs revision because other parts of the bill rightly prevent covered entities (government contractors excluded) from selling data to the government for the purpose of fraud detection, public safety, and criminal activity detection.

The APRA also allows entities to transfer personal data to the government pursuant to a “lawful warrant, administrative subpoena, or other form of lawful process.” EFF urges that the requirement be strengthened to at least a court order or warrant with prompt notice to the consumer. Protections like this are not unique, and it is especially important in the wake of the Dobbs decision.

Strengthen the Definition of Sensitive Data

The APRA has heightened protections for sensitive data, and it includes a long list of 18 categories of sensitive data, like: biometrics, precise geolocation, private communications, and an individual’s online activity over time and across websites. This is a good list that can be added to. We ask Congress to add other categories, like immigration status, union membership, employment history, familial and social relationships, and any covered data processed in a way that would violate a person’s reasonable expectation of privacy. The sensitivity of data is context specific—meaning any data can be sensitive depending on how it is used. The bill should be amended to reflect that.

Limit Other Exceptions for Biometrics, De-identified Data, and Loyalty Programs

An important part of any bill is to make sure the exceptions do not swallow the rule. The APRA’s exceptions on biometric information, de-identified data, and loyalty programs should be narrowed.

In APRA, biometric information means data “generated from the measurement or processing of the individual’s unique biological, physical, or physiological characteristics that is linked or reasonably linkable to the individual” and excludes “metadata associated with a digital or physical photograph or an audio or video recording that cannot be used to identify an individual.” EFF is concerned this definition will not protect biometric information used for analysis of sentiment, demographics, and emotion, and could be used to argue hashed biometric identifiers are not covered.

De-identified data is excluded from the definition of personal data covered by the APRA, and companies and service providers can turn personal data into de-identified data to process it however they want. The problem with de-identified data is that many times it is not. Moreover, many people do not want their private data that they store in confidence with a company to then be used to improve that company’s product or train its algorithm—even if the data has purportedly been de-identified.

Many companies under the APRA can host loyalty programs and can sell that data with opt-in consent. Loyalty programs are a type of pay-for-privacy scheme that pressure people to surrender their privacy rights as if they were a commodity. Worse, because of our society’s glaring economic inequalities, these schemes will unjustly lead to a society of privacy “haves” and “have-nots.” At the very least, the bill should be amended to prevent companies from selling data that they obtain from a loyalty program.

We welcome Congress' privacy-first approach in the APRA and encourage the authors to improve the bill to ensure privacy is protected for generations to come.

Mario Trujillo

Tell the FCC It Must Clarify Its Rules to Prevent Loopholes That Will Swallow Net Neutrality Whole

5 days 17 hours ago

The Federal Communications Commission (FCC) has released draft rules to reinstate net neutrality, with a vote on adopting the rules to come on the 25th of April. The FCC needs to close some loopholes in the draft rules before then.

Proposed Rules on Throttling and Prioritization Allow for the Circumvention of Net Neutrality

Net neutrality is the principle that all ISPs should treat all traffic coming over their networks without discrimination. The effect of this principle is that customers decide for themselves how they’d like to experience the internet. Violations of this principle include, but are not limited to, attempts to block, speed up, or slow down certain content as a means of controlling traffic.

Net neutrality is critical to ensuring that the internet remains a vibrant place to learn, organize, speak, and innovate, and the FCC recognizes this. The draft mostly reinstates the bright-line rules of the landmark 2015 net neutrality protections to ban blocking, throttling, and paid prioritization.

It falls short, though, in a critical way: the FCC seems to think that it’s not okay to favor certain sites or services by slowing down other traffic, but it might be okay to favor them by giving them access to so-called fast lanes such as 5G network slices. First of all, in a world with a certain amount of finite bandwidth, favoring some traffic necessarily impairs other traffic. Secondly, the harms to speech and competition would be the same even if an ISP could conjure more bandwidth from thin air to speed up traffic from its business partners. Whether your access to Spotify is faster than your access to Bandcamp because Spotify is sped up or because Bandcamp is slowed down doesn’t matter because the end result is the same: Spotify is faster than Bandcamp and so you are incentivized to use Spotify over Bandcamp.

The loophole is especially bizarre because the 2015 FCC already got this right, and there has been bipartisan support for net neutrality proposals that explicitly encompass both favoring and disfavoring certain traffic. It’s a distinction that doesn’t make logical sense, doesn’t seem to have partisan significance, and could potentially undermine the rules in the event of a court challenge by drawing a nonsensical distinction between what’s forbidden under the bright-line rules versus what goes through the multi-factor test for other potentially discriminatory conduct by ISPs.

The FCC needs to close this loophole for unpaid prioritization of certain applications or classes of traffic. Customers should be in charge of what they do online, rather than ISPs deciding that, say, it’s more important to consume streaming entertainment products than to participate in video calls or that one political party’s websites should be served faster than another’s.

The FCC Should Clearly Rule Preemption to Be a Floor, Not a Ceiling

When the FCC under the previous administration abandoned net neutrality protections in 2017 with the so-called “Restoring Internet Freedom” order, many states—chief among them California—stepped in to pass state net neutrality laws. Laws more protective than federal net neutrality protections—like California’s—should be explicitly protected by the new rule.

The FCC currently finds that California’s law “generally tracks [with] the federal rules [being] restored” (para. 269). It goes on to find that state laws are fine so long as they do not “interfere with or frustrate…federal rules,” are not “inconsistent,” and are not “incompatible.” It then reserves the right to revisit any state law if evidence arises that a state policy is found to “interfere or [be] incompatible.”

States should be able to build on federal laws to be more protective of rights, not run into limits on available protections. California’s net neutrality law is in some places stronger than the draft rules. Where the FCC means to evaluate zero-rating, the practice of exempting certain data from a user’s data cap, on a case-by-case basis, California outright bans the practice of zero-rating select apps.

There is no guarantee that a Commission which finds California to “generally track” today will do the same in two years’ time. The language as written unnecessarily sets a low bar for a future Commission to find California’s, and other states’, net neutrality laws to be preempted. It also leaves open unnecessary room for the large internet service providers (ISPs) to challenge California’s law once again. After all, when California’s law was first passed, it was immediately taken to court by these same ISPs, and only after years of litigation did the courts reject the industry’s arguments and allow enforcement of this gold standard law to begin.

We urge the Commission to clearly state that not only is California’s law consistent with the FCC’s rules, but that on the issue of preemption the FCC considers its rules to be the floor to build on, and that further state protections are not inconsistent simply because they may go further than the FCC chooses to.

Overall, the order is a great step for net neutrality. Its rules go a long way toward protecting internet users. But we need clear rules recognizing that the creation of fast lanes via positive discrimination and unpaid prioritization are violations of net neutrality just the same, and assurance that states will remain free to protect their residents even when the FCC won’t.

Tell the FCC to Fix the Net Neutrality Rules:

1. Go to this link
2. For "Proceeding" put 23-320
3. Fill out the form
4. In "brief comments" register your thoughts on net neutrality. We recommend the following text, which you can copy and paste or edit for yourself:

Net neutrality is the principle that all internet service providers treat all traffic coming through their networks without discrimination. The effect of this principle is that customers decide for themselves how they’d like to experience the internet. The Commission’s rules as currently written leave the door open to positive discrimination of content, that is, the supposed creation of fast lanes where some content is sped up relative to others. This isn’t how the internet works, but in any case, whether an ISP is speeding up or slowing down content, the end result is the same: the ISP picks the winners and losers on the internet. As such, the Commission must create bright-line rules against all forms of discrimination, whether speeding up or slowing down, against apps or classes of apps in general internet traffic.

Further, while the Commission currently finds state net neutrality rules, like California’s, to not be preempted because they “generally track” its own rules, it makes it easy to rule otherwise at a future date. But just as we received net neutrality in 2015 only to have it taken away in 2017, there is no guarantee that the Commission will continue to find state net neutrality laws passed post-2017 to be consistent with the rules. To safeguard net neutrality, the Commission must find that California’s law is wholly consistent with its rules and that preemption is taken as a floor, not a ceiling, so that states can go above and beyond the federal standard without it being considered inconsistent with the federal rule.

Take Action

Tell the FCC to Fix the Net Neutrality Rules

Chao Liu

S.T.O.P. is Working to ‘Ban The Scan’ in New York

1 week 2 days ago

Facial recognition is a threat to privacy, racial justice, free expression, and information security. EFF supports strict restrictions on face recognition use by private companies, and total bans on government use of the technology. Face recognition in all of its forms, including face scanning and real-time tracking, poses threats to civil liberties and individual privacy. “False positive” error rates are significantly higher for women, children, and people of color, meaning face recognition has an unfair discriminatory impact. Coupled with the fact that cameras are over-deployed in neighborhoods with immigrants and people of color, spying technologies like face surveillance serve to amplify existing disparities in the criminal justice system.

Across the nation local communities from San Francisco to Boston have moved to ban government use of facial recognition. In New York, Electronic Frontier Alliance member Surveillance Technology Oversight Project (S.T.O.P.) is at the forefront of this movement. Recently we got the chance to speak with them about their efforts and what people can do to help advance the cause. S.T.O.P. is a New York-based civil rights and privacy organization that does research, advocacy, and litigation around issues of surveillance technology abuse.

What does “Ban The Scan” mean? 

When we say scan, we are referring to the “face scan” component of facial recognition technology. Surveillance, and more specifically facial recognition, disproportionately targets Black, Brown, Indigenous, and immigrant communities, amplifying the discrimination that has defined New York’s policing for as long as our state has had police. Facial recognition is notoriously biased and often abused by law enforcement. It is a threat to free speech, freedom of association, and other civil liberties. Ban the Scan is a campaign and coalition built around passing two packages of bills that would ban facial recognition in a variety of contexts in New York City and New York State. 

Are there any differences with the State vs City version?

The City and State packages are largely similar. The main differences are that the State package contains a bill banning law enforcement use of facial recognition, whereas the City package has a bill that bans all government use of the technology (although this bill has yet to be introduced). The State package also contains an additional bill banning facial recognition use in schools, which would codify an existing regulatory ban that currently applies to schools.

What hurdles exist to its passage? 

For the New York State package, the coalition is newly coming together, so we are still gathering support from legislators and the public. For the City package, we are lucky to have a lot of support already, and we are waiting for a hearing on the residential ban bills so we can move them into the next phase of legislation. We are also working to get the bill banning government use introduced at the City level.

What can people do to help this good legislation? How to get involved? 

We recently launched a campaign website for both City and State packages (banthescan.org). If you’re a New York City or State resident, you can look up your legislators (links below!) and contact them to ask them to support these bills or thank them for their support if they are already signed on. We also have social media toolkits with graphics and guidance on how to help spread the word!  

Find your NYS Assemblymember: https://nyassembly.gov/mem/search/ 

Find your NYS Senator: https://www.nysenate.gov/find-my-senator 

Find your NYC Councilmember: https://council.nyc.gov/map-widget/  

Christopher Vines

EFF Submits Comments on FRT to Commission on Civil Rights

1 week 2 days ago

Our faces are often exposed and, unlike passwords or PINs, cannot be remade. Governments and businesses, often working in partnership, are increasingly using our faces to track our whereabouts, activities, and associations. This is why EFF recently submitted comments to the U.S. Commission on Civil Rights, which is preparing a report on face recognition technology (FRT).

In our submission, we reiterated our stance that there should be a ban on governmental use of FRT and strict regulations on private use because it: (1) is not reliable enough to be used in determinations affecting constitutional and statutory rights or social benefits; (2) is a menace to social justice as its errors are far more pronounced when applied to people of color, members of the LGBTQ+ community, and other marginalized groups; (3) threatens privacy rights; (4) chills and deters expression; and (5) creates information security risks.

Despite these grave concerns, FRT is being used by the government and law enforcement agencies with increasing frequency, and sometimes with devastating effects. At least one Black woman and five Black men have been wrongfully arrested due to misidentification by FRT: Porcha Woodruff, Michael Oliver, Nijeer Parks, Randal Reid, Alonzo Sawyer, and Robert Williams. And Harvey Murphy Jr., a white man, was wrongfully arrested due to FRT misidentification, and then sexually assaulted while in jail.

Even if FRT were accurate, or at least equally inaccurate across demographics, it would still severely impact our privacy and security. We cannot change our faces, and we expose them to the mass surveillance networks already in place every day we go out in public. But doing so should not be a license for the government or private entities to make imprints of our faces and retain that data, especially when that data may be breached by hostile actors.

The government should ban its own use of FRT, and strictly limit private use, to protect us from the threats posed by FRT. 

Hannah Zhao

What Does EFF Mean to You?

1 week 2 days ago

We could go on for days talking about all the work EFF does to ensure that technology supports freedom, justice, and innovation for all people of the world. In fact, we DO go on for days talking about it — but we’d rather hear from you. 

What does EFF mean to you? We’d love to know why you support us, how you see our mission, or what issue or area we address that affects your life the most. It’ll help us make sure we keep on being the EFF you want us to be.

So if you’re willing to go on the record, please send us a few sentences, along with your first name and current city of residence, to testimonials@eff.org; we’ll pick some every now and then to share with the world here on our blog, in our emails, and on our social media.

Josh Richman

Bad Amendments to Section 702 Have Failed (For Now)—What Happens Next?

1 week 3 days ago

Yesterday, the House of Representatives voted against considering a largely bad bill that would have unacceptably expanded the tentacles of Section 702 of the Foreign Intelligence Surveillance Act, along with reauthorizing it and introducing some minor fixes. Section 702 is Big Brother’s favorite mass surveillance law that EFF has been fighting since it was first passed in 2008. The law is currently set to expire on April 19. 

Yesterday’s decision not to decide is good news, at least temporarily. Once again, a bipartisan coalition of lawmakers—led by Rep. Jim Jordan and Rep. Jerrold Nadler—has staved off the worst outcome of expanding 702 mass surveillance in the guise of “reforming” it. But the fight continues and we need all Americans to make their voices heard.

Use this handy tool to tell your elected officials: No reauthorization of 702 without drastic reform:

Take action

TELL congress: 702 Needs serious reforms

Yesterday’s vote means the House also will not consider amendments to Section 702 surveillance introduced by members of the House Judiciary Committee (HJC) and House Permanent Select Committee on Intelligence (HPSCI). As we discuss below, while the HJC amendments would contain necessary, minimum protections against Section 702’s warrantless surveillance, the HPSCI amendments would impose no meaningful safeguards upon Section 702 and would instead increase the threats Section 702 poses to Americans’ civil liberties.

Section 702 expressly authorizes the government to collect foreign communications inside the U.S. for a wide range of purposes, under the umbrellas of national security and intelligence gathering. While that may sound benign for Americans, foreign communications include a massive amount of Americans’ communications with people (or services) outside the United States. Under the government’s view, intelligence agencies and even domestic law enforcement should have backdoor, warrantless access to these “incidentally collected” communications, instead of having to show a judge there is a reason to query Section 702 databases for a specific American's communications.

Many amendments to Section 702 have recently been introduced. In general, amendments from members of the HJC aim at actual reform (although we would go further in many instances). In contrast, members of HPSCI have proposed bad amendments that would expand Section 702 and undermine necessary oversight. Here is our analysis of both HJC’s decent reform amendments and HPSCI’s bad amendments, as well as the problems the latter might create if they return.

House Judiciary Committee’s Amendments Would Impose Needed Reforms

The most important amendment HJC members have introduced would require the government to obtain court approval before querying Section 702 databases for Americans’ communications, with exceptions for exigency, consent, and certain queries involving malware. As we recently wrote regarding a different Section 702 bill, because Section 702’s warrantless surveillance lacks the safeguards of probable cause and particularity, it is essential to require the government to convince a judge that there is a justification before the “separate Fourth Amendment event” of querying for Americans’ communications. This is a necessary, minimum protection and any attempts to renew Section 702 going forward should contain this provision.

Another important amendment would prohibit the NSA from resuming “abouts” collection. Through abouts collection, the NSA collected communications that were neither to nor from a specific surveillance target but merely mentioned the target. While the NSA voluntarily ceased abouts collection following Foreign Intelligence Surveillance Court (FISC) rulings that called into question the surveillance’s lawfulness, the NSA left the door open to resume abouts collection if it felt it could “work that technical solution in a way that generates greater reliability.” Under current law, the NSA need only notify Congress when it resumes collection. This amendment would instead require the NSA to obtain Congress’s express approval before it can resume abouts collection, which—given this surveillance’s past abuses—would be notable.

The other HJC amendment Congress should accept would require the FBI to give a quarterly report to Congress of the number of queries it has conducted of Americans’ communications in its Section 702 databases and would also allow high-ranking members of Congress to attend proceedings of the notoriously secretive FISC. More congressional oversight of FBI queries of Americans’ communications and FISC proceedings would be good. That said, even if Congress passes this amendment (which it should), both Congress and the American public deserve much greater transparency about Section 702 surveillance.  

House Permanent Select Committee on Intelligence’s Amendments Would Expand Section 702

Instead of much-needed reforms, the HPSCI amendments expand Section 702 surveillance.

One HPSCI amendment would add “counternarcotics” to FISA’s definition of “foreign intelligence information,” expanding the scope of mass surveillance even further from the antiterrorism goals that most Americans associate with FISA. In truth, FISA’s definition of “foreign intelligence information” already goes beyond terrorism. But this counternarcotics amendment would further expand “foreign intelligence information” to allow FISA to be used to collect information relating to not only the “international production, distribution, or financing of illicit synthetic drugs, opioids, cocaine, or other drugs driving overdose deaths” but also to any of their precursors. Given the massive amount of Americans’ communications the government already collects under Section 702 and the government’s history of abusing Americans’ civil liberties through searching these communications, the expanded collection this amendment would permit is unacceptable.

Another amendment would authorize using Section 702 to vet immigrants and those seeking asylum. According to a FISC opinion released last year, the government has sought some version of this authority for years, and the FISC repeatedly denied it—finally approving it for the first time in 2023. The FISC opinion is very redacted, which makes it impossible to know either the current scope of immigration and visa-related surveillance under Section 702 or what the intelligence agencies have sought in the past. But regardless, it’s deeply concerning that HPSCI is trying to formally lower Section 702 protections for immigrants and asylum seekers. We’ve already seen the government revoke people’s visas based upon their political opinions—this amendment would put this kind of thing on steroids.

The last HPSCI amendment tries to make more companies subject to Section 702’s required turnover of customer information in more instances. In 2023, the FISC Court of Review rejected the government’s argument that an unknown company was subject to Section 702 in some circumstances. While we don’t know the details of the secret proceedings because the FISC Court of Review opinion is heavily redacted, this is an ominous attempt to increase the scope of providers subject to Section 702. With this amendment, HPSCI is attempting to legislatively overrule a court already famously friendly to the government. HPSCI Chair Mike Turner acknowledged as much in a House Rules Committee hearing earlier this week, stating that this amendment “responds” to the FISC Court of Review’s decision.

What’s Next 

This hearing was unlikely to be the last time Congress considers Section 702 before April 19—we expect another attempt to renew this surveillance authority in the coming days. We’ve been very clear: Section 702 must not be renewed without essential reforms that protect privacy, improve transparency, and keep the program within the confines of the law. 

Take action

TELL congress: 702 Needs serious reforms

Brendan Gilligan

Virtual Reality and the 'Virtual Wall'

1 week 4 days ago

When EFF set out to map surveillance technology along the U.S.-Mexico border, we weren't exactly sure how to do it. We started with public records—procurement documents, environmental assessments, and the like—which allowed us to find the GPS coordinates of scores of towers. During a series of in-person trips, we were able to find even more. Yet virtual reality ended up being one of the key tools in not only discovering surveillance at the border, but also in educating people about Customs & Border Protection's so-called "virtual wall" through VR tours.

EFF Director of Investigations Dave Maass recently gave a lightning talk at University of Nevada, Reno's annual XR Meetup explaining how virtual reality, perhaps ironically, has allowed us to better understand the reality of border surveillance.

[Embedded YouTube video: https://www.youtube.com/embed/wEolW4-b6Dw] Privacy info: this embed serves content from youtube.com.

Dave Maass

The Motion Picture Association Doesn’t Get to Decide Who the First Amendment Protects

1 week 4 days ago

Twelve years ago, internet users spoke up with one voice to reject a law that would build censorship into the internet at a fundamental level. This week, the Motion Picture Association (MPA), a group that represents six giant movie and TV studios, announced that it hoped we’d all forgotten how dangerous this idea was. The MPA is wrong. We remember, and the internet remembers.

What the MPA wants is the power to block entire websites, everywhere in the U.S., using the same tools as repressive regimes like China and Russia. In its view, instances of possible copyright infringement should be played like a trump card to shut off our access to entire websites, regardless of the other legal speech hosted there. The MPA is not simply calling for the ability to take down instances of infringement—a power it already has, without even having to ask a judge—but for the keys to the internet. Building new architectures of censorship would hurt everyone, and doesn’t help artists.

The bills known as SOPA/PIPA would have created a new, rapid path for copyright holders like the major studios to use court orders against sites they accuse of infringing copyright. Internet service providers (ISPs) receiving one of those orders would have to block all of their customers from accessing the identified websites. The orders would also apply to domain name registries and registrars, and potentially other companies and organizations that make up the internet’s basic infrastructure. To comply, all of those would have to build new infrastructure dedicated to site-blocking, inviting over-blocking and all kinds of abuse that would censor lawful and important speech.

In other words, the right to choose what websites you visit would be taken away from you and given to giant media companies and ISPs. And the very shape of the internet would have to be changed to allow it.

In 2012, it seemed like SOPA/PIPA, backed by major corporations used to getting what they want from Congress, was on the fast track to becoming law. But a grassroots movement of diverse internet communities came together to fight it. Digital rights groups like EFF, Public Knowledge, and many more joined with editor communities from sites like Reddit and Wikipedia to speak up. Newly formed grassroots groups like Demand Progress and Fight for the Future added their voices to those calling out the dangers of this new form of censorship. In the final days of the campaign, giant tech companies like Google and Facebook (now Meta) joined in opposition as well.

What resulted was one of the biggest protests ever seen against a piece of legislation. Congress was flooded with calls and emails from ordinary people concerned about this steamroller of censorship. Members of Congress raced one another to withdraw their support for the bills. The bills died, and so did site blocking legislation in the US. It was, all told, a success story for the public interest.

Even the MPA, one of the biggest forces behind SOPA/PIPA, claimed to have moved on. But we never believed it, and they proved us right time and time again. The MPA backed site-blocking laws in other countries. Rightsholders continued to ask US courts for site-blocking orders, often winning them without a new law. Even the lobbying of Congress for a new law never really went away. It’s just that today, with MPA president Charles Rivkin openly calling on Congress “to enact judicial site-blocking legislation here in the United States,” the MPA is taking its mask off.

Things have changed since 2012. Tech platforms that were once seen as innovators have become behemoths, part of the establishment rather than underdogs. The Silicon Valley-based video streamer Netflix illustrated this when it joined MPA in 2019. And the entertainment companies have also tried to pivot into being tech companies. Somehow, they are adopting each other’s worst aspects.

But it’s important not to let those changes hide the fact that those hurt by this proposal are not Big Tech but regular internet users. Internet platforms big and small are still where ordinary users and creators find their voice, connect with audiences, and participate in politics and culture, mostly in legal—and legally protected—ways. Filmmakers who can’t get a distribution deal from a giant movie house still reach audiences on YouTube. Culture critics still reach audiences through zines and newsletters. The typical users of these platforms don’t have the giant megaphones of major studios, record labels, or publishers. Site-blocking legislation, whether called SOPA/PIPA, “no fault injunctions,” or by any other name, still threatens the free expression of all of these citizens and creators.

No matter what the MPA wants to claim, this does not help artists. Artists want their work seen, not locked away for a tax write-off. They wanted a fair deal, not nearly five months of strikes. They want studios to make more small and midsize films and to take a chance on new voices. They have been incredibly clear about what they want, and this is not it.

Even if Rivkin’s claim of an “unflinching commitment to the First Amendment” was credible from a group that seems to think it has a monopoly on free expression—and which just tried to consign the future of its own artists to the gig economy—a site-blocking law would not be used only by Hollywood studios. Anyone with a copyright and the means to hire a lawyer could wield the hammer of site-blocking. And here’s the thing: we already know that copyright claims are used as tools of censorship.

The notice-and-takedown system created by the Digital Millennium Copyright Act, for example, is abused time and again by people who claim to be enforcing their copyrights, and also by folks who simply want to make speech they don’t like disappear from the internet. Even without a site-blocking law, major record labels and US Immigration and Customs Enforcement shut down a popular hip hop music blog and kept it off the internet for over a year without ever showing that it infringed copyright. And unscrupulous characters use accusations of infringement to extort money from website owners, or even force them into carrying spam links.

This censorious abuse, whether intentional or accidental, is far more damaging when it targets the internet’s infrastructure. Blocking entire websites or groups of websites is imprecise, inevitably bringing down lawful speech along with whatever was targeted. For example, suits by Microsoft intended to shut down malicious botnets caused thousands of legitimate users to lose access to the domain names they depended on. There is, in short, no effective safeguard on a new censorship power that would be the internet’s version of police seizing printing presses.

Even if this didn’t endanger free expression on its own, once new tools exist, they can be used for more than copyright. Just as malfunctioning copyright filters were adapted into the malfunctioning filters used for “adult content” on Tumblr, so can means of site blocking. The major companies of a single industry should not get to dictate the future of free speech online.

Why the MPA is announcing this now is anyone’s guess. They might think no one cares anymore. They’re wrong. Internet users rejected site blocking in 2012 and they reject it today.

Mitch Stoltz

Speaking Freely: Mary Aileen Diez-Bacalso

1 week 5 days ago

This interview has been edited for length and clarity.

Mary Aileen Diez-Bacalso is the executive director of FORUM-Asia. She has worked for many years in human rights organizations in the Philippines and internationally, and is best known for her work on enforced disappearances. She has received several human rights awards at home and abroad, including the Emilio F. Mignone International Human Rights Prize conferred by the Government of Argentina and the Franco-German Ministerial Prize for Human Rights and Rule of Law. In addition to her work at FORUM-Asia, she currently serves as the president of the International Coalition Against Enforced Disappearances (ICAED) and is a senior lecturer at the Asian Center of the University of the Philippines.

York: What does free expression mean to you? And can you tell me about an experience, or experiences, that shaped your views on free expression?

To me, free speech or free expression means the exercise of the right to express oneself and to seek and receive information as an individual or an organization. I’m an individual, but I’m also representing an organization, so it means the ability to express thoughts, ideas, or opinions without threats or intimidation or fear of reprisals. 

Free speech is expressed in various avenues, such as in a community where one lives or in an organization where one belongs at the national, regional, or international levels. It is the right to express these ideas, opinions, and thoughts for different purposes, for instance: influencing behaviors, opinions, and policy decisions; giving education; addressing, for example, historical revisionism—which is historically common in my country, the Philippines. Without freedom of speech people will be kept in the dark in terms of access to information, in understanding and analyzing information, and in deciding which information to believe and which information is incorrect or inaccurate or is meant to misinform people. So without freedom of speech people cannot exercise their other basic human rights, like the right of suffrage. Religious organizations, for example, will not be able to fulfill their mission of preaching if freedom of speech is curtailed. 

I have worked for years with families of the disappeared—victims of enforced disappearance—in many countries. And this forced disappearance is a consequence of the absence of free speech. These people are forcibly disappeared because of their political beliefs, because of their political affiliations, and because of their human rights work, among other things. And they were deprived of the right to speech. Additionally, in the Philippines and many other Asian countries, rallies and demonstrations on various legitimate issues of the people are being dispersed by security forces in the name of peace. That deprives legitimate protesters of the rights to speech and to peaceful assembly. So these people are named as enemies of the state, as subversives, as troublemakers, and in the process they’re tear-gassed, arrested, detained, etcetera. Allowing these people to exercise their constitutional rights is a manifestation of free speech. But in many Asian countries—and many other countries in other regions also—such rights, although provided for by the Constitution, are not respected. Free speech, in whatever country you are in, wherever you go, is the freedom to study the situation of that country, to give your opinion of that situation, and to share your ideas with others. 

York: Can you share some experiences that helped shape your views on freedom of expression? 

During my childhood years, when martial law was imposed, I heard a lot of news about the arrest and detention of journalists because of their protests against the martial law imposed by the dictator Ferdinand Marcos, Sr., who was the father of the present President of the Philippines. So I read a lot about violations of the human rights of activists from different sectors of society. I read about farmers, workers, students, and church people who were arrested, detained, tortured, disappeared, and killed because of martial law. Because they spoke against the Marcos administration. So during those years when I was so young, this actually formed my mind and also my commitment to freedom of expression, freedom of assembly, and freedom of association. 

Once, I was arrested during the first Marcos administration, and that was a very long time ago. That is a manifestation of the curtailment of the right of free speech. I was together with other human rights defenders—I was very young at the time. We were rallying because there was a priest who was forcibly disappeared. So we were arrested and detained. Also, I was deported by the government of India on my way to Kashmir. I had been there three times, but on my third time I was not allowed to go to Kashmir because of our human rights work there. So even now, I am banned from India and I cannot go back there. It was because of those reports we made on enforced disappearances and mass graves in Kashmir. So free speech means freedom without threat, intimidation, or retaliation. And it means being able to use all avenues in various contexts to speak in whatever forms—verbal speeches, written speeches, videos, and all forms of communication.

Also, the enforced disappearance of my husband informed my views on free expression. Two weeks after we got married he was briefly forcibly disappeared. He was tortured, he was not fed, and he was forced to confess that he was a member of the Communist Party of the Philippines. He was together with one other person he did not know and did not see, and they were forced to dig a grave for themselves to be buried alive inside. Another person who was disappeared then escaped and informed us of where my husband was. So we told the military that we knew where my husband was. They were afraid that the other person might testify, so they released my husband in a cemetery near his parents’ house.

And that made an impact on me, that’s why I work a lot with families of enforced disappearances both in the Philippines and in many other countries. I believe that the experience of enforced disappearance of my husband, and other family members of the disappeared and their experience of having family members disappeared until now, is a consequence of the violation of freedom of expression, freedom of assembly, freedom of speech. And also my integration or immersion with families of the disappeared has contributed a lot to my commitment to human rights and free speech. I’m just lucky to have my husband back. And he’s lucky. But the way of giving back, of being grateful for the experience we had—because they are very rare cases where victims of enforced disappearances surfaced alive—so I dedicate my whole life to the cause of human rights. 

York: What do you feel are some of the qualities that make you passionate about protecting free expression for others?

Being brought up by my family, my parents, we were taught about the importance of speaking for the truth, and the importance of uprightness. It was also because of our religious background. We were taught it is very important to tell the truth. So this passion for truth and uprightness is one of the qualities that make me passionate about free expression. And the sense of moral responsibility to rectify wrongs that are being committed. My love of writing, also. I love writing whenever I have the opportunity to do it, the time to do it. And the sense of duty to make human rights a lifetime commitment. 

York: What should we know about the role of social media in modern Philippine society? 

I believe social media contributed a lot to what we are now. The current oppressive administration invested a lot in misinformation, in revising history, and that’s why a lot of young people think of martial law as the years of glory and prosperity. I believe one of the biggest factors of the administration getting the votes was their investment in social media for at least a decade. 

York: What are your feelings on how online speech should be regulated? 

I’m not very sure it should be regulated. For me, as long as the individuals or the organizations have a sense of responsibility for what they say online, there should be no regulation. But when we look at free speech on online platforms, these platforms have the responsibility to ensure that there are clear guidelines for content moderation and must be held accountable for content posted on their platforms. So fact-checking—which is so important in this world of misinformation and “fake news”—and complaint mechanisms have to be in place to ensure that harmful online speech is identified and addressed. So while freedom of expression is a fundamental right, it is important to recognize that it can be exploited to spread hate speech and harmful content, all in the guise of online freedom of speech—so this could be abused. This is being abused. Those responsible for online platforms must be accountable for their content. For example, from March 2020 to July 2020 our organization, FORUM-Asia, and its partners, including freedom of expression group AFAD, documented around 40 cases of hate speech and dangerous speech on Facebook. And the study scope is limited, as it only covered posts and comments in Burmese. The researchers involved also reported that many other posts were reported and subsequently removed prior to being documented, so the actual amount of hate speech is likely to be significantly higher. I recommend taking a look at the report. So while FORUM-Asia acknowledges the efforts of Facebook to promote policies to curb hate speech on the platform, it still needs to update and constantly review all these things, like the community guidelines, including those on political advertisements and paid or sponsored content, with the participation of the Facebook Oversight Board. 

York: Can you tell me about a personal experience you’ve had with censorship, or perhaps the opposite, an experience you have of using freedom of expression for the greater good?

In terms of censorship, I don’t have personal experience with censorship. I wrote some opinion pieces in the Union of Catholic Asian News and other online platforms, but I haven’t had any experience of censorship. Although I did experience negative comments because of the content of what I wrote. There are a lot of trolls in the Philippines and they were and are very supportive of the previous administration of Duterte, so there was negative feedback when I wrote a lot on the war on drugs and the killings and impunity. But that’s also part of freedom of speech! I just had to ignore it, but, to be honest, I felt bad. 

York: Thank you for sharing that. Do you have a free expression hero? 

I believe we have so many unsung heroes in terms of free speech, and these are the unknown persecuted human rights defenders. But I will also answer that during this week we are commemorating Holy Week [editor’s note: this interview took place on March 28, 2024], so I would like to remember Jesus Christ, whose passion, death, and resurrection Christians are commemorating this week. During his time, Jesus spoke about the ills of society; he was enraged when he witnessed how the defenseless poor had their rights violated, and he was angry when authority took advantage of them. And he spoke very openly about his anger, about his defense of the poor. So I believe that he is my hero.

Also, in contemporary times, I consider Óscar Arnulfo Romero y Galdámez, who was canonized as a saint in 2018, my free speech hero. I visited the chapel where he was assassinated, and the Cathedral of San Salvador, where his mortal remains were buried. And the international community, especially the Salvadoran people, celebrated the 44th anniversary of his assassination last Sunday, the 24th of March, 2024. Seeing the ills of society, the consequent persecution of the progressive segment of the Catholic church and the churches in El Salvador, and the indiscriminate killings of the Salvadoran people in his communities, San Romero courageously spoke out on the eve of his assassination. I’d like to quote what he said. He said:

“I would like to make a special appeal to the men of the army, and specifically to the ranks of the National Guard, the police and the military. Brothers, you come from our own people. You are killing your own brother peasants when any human order to kill must be subordinate to the law of God which says, ‘Thou shalt not kill.’ No soldier is obliged to obey an order contrary to the law of God. No one has to obey an immoral law. It is high time you recovered your consciences and obeyed your consciences rather than a sinful order. The church, the defender of the rights of God, of the law of God, of human dignity, of the person, cannot remain silent before such an abomination. We want the government to face the fact that reforms are valueless if they are to be carried out at the cost of so much blood. In the name of God, in the name of this suffering people whose cries rise to heaven more loudly each day, I implore you, I beg you, I order you in the name of God: stop the repression.”

So as a fitting tribute to Saint Romero of the Americas, the United Nations has dedicated the 24th of March as the International Day for Truth, Justice, Reparation, and Guarantees of Non-repetition. So he is my hero. And of course Jesus Christ, being the most courageous human rights defender of his time, continues to be my hero, and I’m sure he was the model for Monsignor Romero. 

Jillian C. York

Podcast Episode: Antitrust/Pro-Internet

1 week 6 days ago

Imagine an internet in which economic power is more broadly distributed, so that more people can build and maintain small businesses online to make good livings. In this world, the behavioral advertising that has made the internet into a giant surveillance tool would be banned, so people could share more equally in the riches without surrendering their privacy.

(You can also find this episode on the Internet Archive and on YouTube.)

That’s the world Tim Wu envisions as he teaches and shapes policy on the revitalization of American antitrust law and the growing power of big tech platforms. He joins EFF’s Cindy Cohn and Jason Kelley to discuss using the law to counterbalance the market’s worst instincts, in order to create an internet focused more on improving people’s lives than on meaningless revenue generation. 

In this episode you’ll learn about: 

  • Getting a better “deal” in trading some of your data for connectedness. 
  • Building corporate structures that do a better job of balancing the public good with private profits. 
  • Creating a healthier online ecosystem with corporate “quarantines” to prevent a handful of gigantic companies from dominating the entire internet. 
  • Nurturing actual innovation of products and services online, not just newer price models. 

Timothy Wu is the Julius Silver Professor of Law, Science and Technology at Columbia Law School, where he has served on the faculty since 2006. First known for coining the term “net neutrality” in 2002, he served in President Joe Biden’s White House as special assistant to the President for technology and competition policy from 2021 to 2023; he also had worked on competition policy for the National Economic Council during the last year of President Barack Obama’s administration. Earlier, he worked in antitrust enforcement at the Federal Trade Commission and served as enforcement counsel in the New York Attorney General’s Office. His books include “The Curse of Bigness: Antitrust in the New Gilded Age” (2018), "The Attention Merchants: The Epic Scramble to Get Inside Our Heads” (2016), “The Master Switch: The Rise and Fall of Information Empires” (2010), and “Who Controls the Internet? Illusions of a Borderless World” (2006).

Resources: 

What do you think of “How to Fix the Internet?” Share your feedback here

Transcript

TIM WU
I think with advertising we need a better deal. So advertising is always a deal. You trade your attention and you trade probably some data, in exchange you get exposed to advertising and in exchange you get some kind of free product.

You know, that's the deal with television, that's been the deal for a long time with radio. But because it's sort of an invisible bargain, it's hard to make the bargain, and the price can be increased in ways that you don't necessarily notice. For example, we had one deal with Google in, let's say, around the year 2010 - if you go on Google now, it's an entirely different bargain.

It's as if there's been a massive inflation in these so-called free products. In terms of how much data has been taken, in terms of how much you're exposed to, how much ad load you get. It's as if sneakers went from 30 dollars to 1,000 dollars!

CINDY COHN
That's Tim Wu – author, law professor, White House advisor. He’s something of a Swiss Army knife for technology law and policy. He spent two years on the National Economic Council, working with the Biden administration as an advisor on competition and tech policy. He worked on antitrust legislation to try and check some of the country’s biggest corporations, especially, of course, the tech giants.

I’m Cindy Cohn - executive director of the Electronic Frontier Foundation.

JASON KELLEY
And I’m Jason Kelley - EFF’s Activism Director. This is our podcast, How to Fix the Internet. Our guest today is Tim Wu. His stint with the Biden administration was the second White House administration he advised. And in between, he ran for statewide office in New York. And that whole thing is just a sideline from his day job as a law professor at Columbia University. Plus, he coined the term net neutrality!

CINDY COHN
On top of that, Tim basically writes a book every few years that I read in order to tell me what's going to happen next in technology. And before that he's been a programmer and a more traditional lab based scientist. So he's kind of got it all.

TIM WU
Sounds like I'm a dilettante.

CINDY COHN
Well, I think you've got a lot of skills in a lot of different departments. I've heard you call yourself a translator, and I think that's really the superpower all of that experience gives you: the ability to kind of talk between these kinds of spaces in the rest of the world.

TIM WU
Well, I guess you could say that. I've always been inspired by Wilhelm von Humboldt, who had this theory that in order to have a full life, you had to try to do a lot of different stuff. So somehow that factors into it somewhere.

CINDY COHN
That's wonderful. We want to talk about a lot of things in this conversation, but I kind of wanted to start off with the central story of the podcast, which is, what does the world look like if we get this right? You know, you and I have spent a lot of years talking about all the problems, trying to lift up obstacles and get rid of obstacles.

But if we reach this end state where we get a lot of these problems right, in Tim Wu's world, what, what does it look like? Like, what does your day look like? What do people's experience of technology look like?

TIM WU
I think it looks like a world in which economic power surrounding the internet and surrounding the platforms is very much more distributed. And, you know, what that means practically is it means a lot of people are able to make a good living, I guess, based on being a small producer or having a service based skill in a way that feels sustainable and where the sort of riches of the Internet are more broadly shared.

So that's less about what kind of things you click on or, you know, what kind of apps you use and more about, I guess, the economic structure surrounding the Internet, which I think, you know, um, I don't think I'm the only person who thinks this, you know, the structure could be fairer and could work for more people.

It does feel like the potential and, you know, we've all lived through that potential starting in the 90s of this kind of economically liberating force that would be the basis for a lot of people to make a decent living has seemed to turn into something more where a lot of money aggregates in a few places.

CINDY COHN
Yeah, I remember, people still talk about the long tail, right, as a way in which the digitization of materials created a revenue stream that's more than just, you know, the flavor of the week that a movie studio or a book publisher might want us to pay attention to on kind of the cultural side, right?

That there was space for this. And that also makes me think of a conversation we just had with the folks in the right to repair movement talking about like their world includes a place where there's mom and pop shops that will help you fix your devices all over the place. Like this is another way in which we have centralized economic power.

We've centralized power and if we decentralize this or, or, or spread it more broadly, uh, we're going to create a lot of jobs and opportunities for people, not just as users of technology, but as the people who help build and offer it to us.

TIM WU
I'm writing a new book, um, working title, Platform Capitalism, that has caused me to go back and look at the, you know, the early promise of the internet. And I went back and I was struck by a book, some of you may remember, called "An Army of Davids," by Glenn Reynolds the Instapundit.
Yeah, and he wrote a book and he said, you know, the future of the American economy is going to be all these kinds of mom and pop sellers who take over everything – he wrote this about 2006 – and he says, you know, bloggers are already competing with news operations, small sellers on eBay are already competing with retail stores, journalists, and so on down the line; that, uh, you know, the age of the big, centralized Goliath is over and the little guys are going to rule the future.

That kind of dovetailed – I went back and read Yochai Benkler's early work about a production commons model and how, you know, there'd be a new mode of production. Those books have not aged all that well. In fact, I think the book that wins is Blitzscaling. Somewhere along the line, instead of the internet favoring small business, small production, things went in the exact opposite direction.

And when I think about Yochai Benkler's idea of sort of production-based commons, you know, Waze was like that, the mapping program, until one day Waze was just bought by Google. So, I was just thinking about those as I was writing that chapter of the book.

CINDY COHN
Yeah, I think that's right. I think that identifying – and you've done a lot of work on this – the way in which we started with this promise and ended up in this other place can help us figure that out. And Cory Doctorow, our colleague and friend, has been doing a lot of work on this with choke point capitalism and other work that he's done for EFF and elsewhere.

And I also agree with him that, like, we don't really want to create the good old days. We want to create the good new days, right? Like, we want to experience the benefits of an Internet post-1990s, but also have those, those riches decentralized or shared a little more broadly, or a lot more broadly, honestly.

TIM WU
Yeah, I think that's right, and so I think part of what I'm saying, you know, what would fix the internet, or what would make it something that people feel excited about. You know, I think people are always excited about apps and videos, but also people are excited about their livelihood and making money.

And if we can figure out the kind of structure that makes capitalism more distributed surrounding platforms, you know, it's not abandoning the idea of you have to have a good site or a product or something to, to gain customers. It's not a total surrender of that idea, but a return to that idea working for more people.

CINDY COHN
I mean, one of the things that you taught me in the early days is how kind of ‘twas ever so, right? If you think about radio or broadcast medium or other previous mediums, they kind of started out with this promise of a broader impact and broader empowerment and, and didn't end up that way as much as well.

And I know that's something you've thought about a lot.

TIM WU
Yeah, the first book I wrote by myself, The Master Switch, had that theme, and at the time when I wrote it – um, I wrote a lot of it in the ‘09, ‘08, ‘07 kind of period – I think at that point I had more optimism that the internet could hold out, that it wouldn't be subject to the sort of monopolizing tendencies that had taken over radio, which originally was thousands of radio stations; the telephone system – which started as this ‘go west, young man, and start your own telephone company’ kind of technology – the film industry; and many others. I was firmly of the view that things would be different. Um, I think I thought that, uh, because of the TCP/IP protocol, because of platforms like HTML that were, you know, the center of the web, because of net neutrality's lasting influence. But frankly, I was wrong. I was wrong, at least when I was writing the book.

JASON KELLEY
As you've been talking about the sort of almost inevitable funneling of the power that these technologies have into a single or, or a few small platforms or companies, I wonder what you think about newer ideas around decentralization that have sort of started over the last few years, in particular with platforms like Mastodon or something like that, these kinds of APIs or protocols, not platforms, that idea. Do you see any promise in that sort of thing? Because we see some, but I'm wondering what you think.

TIM WU
I do see some promise. I think that In some ways, it's a long overdue effort. I mean, it's not the first. I can't say it's the first. Um, and part of me wishes that we had been, you know, the idealistic people. Even the idealistic people at some of these companies, such as they were, had been a bit more careful about their design in the first place.

You know, I guess what I would hope … the problem with Mastodon and some of these is they're trying to compete with entities that already are operating with all the full benefits of scale and which are already tied to sort of a Delaware private corporate model. Uh, now this is a little bit – I'm not saying that hindsight is 20/20 – but when I think about the major platforms and entities of the early 21st century, it's really only Wikipedia that got it right, in my view, by structurally insulating themselves from certain forces and temptations.

So I guess what I'm trying to say is that, uh, part of me wishes we'd done more of this earlier. I do think there's hope in them. I think it's very challenging in the current economics to succeed. And sometimes you'd have to wonder, if you go in a different direction, you know, whether it might be – I don't want to say impossible – very challenging when you're competing with existing structures. And if you're starting something new, you should start it right.
That said, AI started in a way structurally different and we've seen how that's gone recently.

CINDY COHN
Oh, say more, say more!

JASON KELLEY
Yeah. Yeah. Keep, keep talking about AI.

CINDY COHN
I'm very curious about your thinking about that.

TIM WU
Well, you know, it's been said that the Holy Roman Empire was neither holy, nor Roman, nor an empire. And OpenAI is now no longer open, nor non-profit, nor anything else. You know, it's kind of, uh, been extraordinary that the circuit breakers they tried to install have just been blown straight through. Um, and I think there's been a lot of negative coverage of the board, um, because, you know, the business press is kind of narrow on these topics. But, um, you know, OpenAI, I guess, at some point, tried to structure itself more carefully, and, um, uh, you know, now the board is run by people whose main experience has been, um, uh, taking good organizations and making them worse, like Quora. So, yeah, that is not exactly an inspiring story, uh, I guess, of OpenAI trying to structure itself a little differently and, uh, failing to hold.

CINDY COHN
I mean, I think Mozilla has managed to have a structure that has a, you know, kind of complicated for-profit/not-for-profit strategy that has worked a little better, but I hear you. I think that if you do a power analysis, right, you know, a nonprofit is going to have a very hard time up against all the money in the world.

And I think that that seems to be what happened for OpenAI. Uh, once all the money in the world showed up, it was pretty hard to, uh, actually impossible for the public interest nonprofit side to hold sway.

TIM WU
When I think about it over and over, I think engineers and the people who set up these, uh, structures have been repeatedly very naive about, um, the power of their own good intentions. And I agree, Mozilla is a good example. Wikipedia is a good example. Google, I remember when they IPO'd, they had some setup, and they said, ‘We're not going to be an ordinary company,’ or something like that. And they sort of had preferred stock for some of the owners. You know, Google is still in some ways an impressive company, but it's hard to differentiate them from any other slightly money-grubbing, non-innovative colossus, um, of the kind they were determined not to become.

And, you know, there was this like, well, it's not going to be us, because we're different. You know, we're young and idealistic, and why would we want to become, I don't know, like Xerox or IBM, but like all of us, you begin by saying, I'm never going to become like my parents, and then next thing you know, you're yelling at your kids or whatever.

CINDY COHN
Yeah, it's, it's the, you know, meet the new boss the same as the old boss, right? When we, what we were hoping was that we would be free of some of the old bosses and have a different way to approach, but, but the forces are pretty powerful that stick people back in line, I think.

TIM WU
And some of the old structures, you know, look a little better. Like, I'm not going to say newspapers are perfect, but a structure like the New York Times structure, for example, basically is better than Google's. And I just think there was this sense that, well, we can solve that problem with code and good vibes. And that turned out to be the great mistake.

CINDY COHN
One of the conversations that you and I have had over the years is kind of the role of regulation on, on the internet. I think the fight about whether to regulate or not to regulate the Internet was always a little beside the point. The question is how. And I'm wondering what you're thinking now. You've been in the government a couple times. You've tried to push some things that were pretty regulatory. How are you thinking now about something like a centralized regulatory agency or another approach to, you know, regulating the Internet?

TIM WU
Yeah, I, you know, I continue to have mixed feelings about something like a central internet commission, mostly for some of the reasons you said. But on the other hand, sometimes, if I want to achieve what I mentioned – the idea of platforms that are an input into a lot of people being able to operate on top of them and run businesses, like, you know, the roads have been at times, or the electric system, or the phone network – it's hard to get away from the idea of having some hard rules. Sometimes I think my sort of platonic form of government regulation was the 1956 AT&T consent decree, which, for those who are not as deep in those weeds as I am, told AT&T that it could do nothing but telecom, and therefore not do computing, and also forced it to license every single one of its patents for free. And the impact of that was more than one. One is, because they were out of computing, they were not able to dominate it, and you had companies then new to computing, like IBM and others, that got into that space and developed the American computing industry completely separate from AT&T.

And you also ended up with the semiconductor companies, which started around that time with the transistor patent and other patents they could use for free. So you know, I don't know exactly how you achieve that, but I'm drawn to basically keeping the main platforms in their lane. I would like there to be more competition.
The antitrust side of me would love it. And I think that in some areas we are starting to have it, like in social media, for better or for worse. But for some of the more basic fundamentals, the online markets, we want as much competition as we can get – but also some rule to stay out of other businesses, some rule to stop eating the ecosystem. I do think we need some kind of structural separation rules. Who runs those is a little bit of a harder question.

CINDY COHN
Yeah, we're not opposed to structural separation at EFF. We think a lot more about interoperability, to start with, as a way to, you know, help people have other choices. But we haven't been opposed to structural separation, and I think there are situations in which it might make a lot of sense, especially, you know, in the context of mergers, right?

Where the company has actually swallowed another company that did another thing. That's kind of the low-hanging fruit, and EFF has participated a lot in commenting on potential mergers.

TIM WU
I'm not opposed to the idea of pushing interoperability. I think that, based on the experience of the last 100 years, it is a tricky thing to get right. I'm not saying it's impossible. We do have examples: the phone network in the early 20th century, where interconnection was relatively successful. And right now, you know, when you change between, let's say, T-Mobile and Verizon – there's only three left – you get to take your phone number with you, which is a form of interoperability.

But it has the risk of being something you put a lot of effort into and it not necessarily working that well in terms of actually stimulating competition, particularly because of the problem of sabotage, as we saw in the ‘96 Act. So it's actually not about the theory, it's about the practice, the legal engineering of it. Can you find the right thing where you've got kind of a cut point where you could have a good interoperability scheme?

JASON KELLEY
Let’s take a quick moment to say thank you to our sponsor. “How to Fix the Internet” is supported by The Alfred P. Sloan Foundation’s Program in Public Understanding of Science and Technology. Enriching people’s lives through a keener appreciation of our increasingly technological world and portraying the complex humanity of scientists, engineers, and mathematicians.

And now back to our conversation with Tim Wu. I was intrigued by what he said about keeping platforms in their lane. I wanted to hear him speak more about how that relates to antitrust – is that spreading into other ecosystems what sets his antitrust alarm bells off? How does he think about that?

TIM WU
I guess the phrase I might use is quarantine: you want to quarantine businesses from others. And it's less of a traditional antitrust kind of remedy, although obviously, as in the ‘56 consent decree, which came out of an antitrust suit against AT&T, it can be a remedy.

And the basic idea of it is explicitly distributional. It wants more players in the ecosystem, in the economy. It's almost like an ecosystem-promoting device, where you say, okay, you know, you are the unquestioned master of this particular area of commerce – maybe we're talking about Amazon and online shopping and other forms of e-commerce, or Google and search.

We're not going to give up on the hope of competition, but in terms of having a more distributed economy where more people have their say – almost in the way that you might insulate the elementary school students from the college students or something – we're going to give room for other people to develop their own industries in these side markets. Now, you know, there's resistance: people say, well, okay, but Google is going to do a better job in, uh, I don't know, shopping or something. You know, they might do a good job, they might not, but they've got their returns, and as a platform owner and also as a monopoly owner they're always going to have the advantage of being able to cross-subsidize and to help themselves.

So I think you get healthier ecosystems with quarantines. That's basically my instinct. And, you know, we do quarantines either legally or de facto all the time. As I said, the phone network has long been barred from being involved in a lot of businesses. Banking is kept out of a lot of businesses because of obvious problems of corruption. The electric network, I guess they could make toasters if they want, but it was never set up to allow them to dominate the appliance markets.

And, you know, if they did dominate the appliance markets, I think it would be a much poorer world, a lot less interesting innovation, and frankly, a lot less wealth for everyone. So, yeah, I have strong feelings. It's more of my net neutrality side that drives this thinking than my antitrust side, I’ll put it that way.

JASON KELLEY
You specifically worked in both the Obama and Biden administrations on these issues. I'm wondering if your thinking on this has changed in experiencing those things from the sort of White House perspective, and also just how different those two experiences were – obviously the moments are different in time and everything like that, but they're not so far apart, maybe light years in terms of technology. What was your experience between those two, and how do you think we're doing now on this issue?

TIM WU
I want to go back to a slightly earlier time in government – still the Obama administration, but my, okay, sorry, my third job in the federal government, uh, I guess I'm one of these recidivists or something – which was at the Federal Trade Commission.

CINDY COHN
Oh yeah, I remember.

TIM WU
We were taking the first hard look at big tech – in fact, we were investigating Google for the first time for possible antitrust offenses – and we also did the first privacy remedy on Facebook, which I will concede was a complete and absolute failure of government, one of the weakest remedies, I think. We did that right before Cambridge Analytica, and it obviously had no effect on Facebook's conduct at all. So, one of the failed remedies. When I think back about that period, the main difference was that the tech platforms were different in a lot of ways.

I believe that, uh, monopolies and big companies have a life cycle. And they were relatively early in that life cycle, maybe even in a golden age. A company like Amazon seemed to be making life possible for a lot of sellers. Google was still in its early phase and didn't have a huge number of verticals. It still had limited advertising; most searches still didn't turn up that many ads.

You know, they were in a different stage of their life. They were already big companies, but they still felt relatively, in some sense, vulnerable to even more powerful economic forces. They hadn't sort of reached that maturity. You know, 10 years later, I think the life cycle has turned. I think companies have largely abandoned innovation in their core products and turned to defense – most of their innovations are attempts to raise more revenue, as opposed to making the product better. It kind of reminds me of the airline industry, which stopped innovating somewhere in the seventies and started trying to innovate in terms of price structures and seats being smaller, that kind of thing.

You know, you reach this end point – I think the airlines are the end point – where you take what was at one point a high tech industry and just completely give up on anything other than trying to innovate in terms of your pricing models.

CINDY COHN
Yeah, I mean, you know, Cory keeps coming up, but of course Cory calls it the “enshittification” of, uh, of services, and I think that, in typical Cory way, captures this stage of the process.

TIM WU
Yeah, just to speak more broadly, you know, I think there's a lot of faith and belief that a company like Google, you know, in its heart meant well – and I do still think the people working there mean well – but I feel that, you know, the structure they set up, which requires showing increasing revenue and profit every quarter, began to catch up with it, and we're at a much later stage of the process.

CINDY COHN
Yep.

TIM WU
Or the life cycle, I guess, is how I'd put it.

CINDY COHN
And then for you, kind of coming in as a government actor on this, like, what did that mean? I'm assuming – I kind of want to finish the sentence for you – that it meant it was harder to get them to do the right thing, that their defenses were better against being pushed to do the right thing.

Like how did that impact the governmental interventions that you were trying to help make happen?

TIM WU
I think it was both. In terms of government action, there was a sense that the record was very different. The Google story in 2012 is very different than in 2023. And the main difference is that in 2023 Google is paying out $26.3 billion a year to other companies to keep its search engine where it is, and arguably to split the market with Apple.

You know, there wasn't that kind of record back in 2012. Maybe we still should have acted, but there wasn't that much money being so obviously spent on pure defense of monopoly. But also people were less willing. They thought the companies were great. I mean, there was a broader ideological climate: many people from the Clinton administration still felt that the government was the problem and private industry was the solution, and had kind of a magical thinking about the ability of this industry to be different in some fundamental way.

So the chair of the FTC wasn't willing to pull the trigger. The economists all said it was a terrible idea. You know, they failed to block over a thousand mergers that big tech did during that period, and I think it's very low odds that none of those thousands was anti-competitive, or that in the aggregate they weren't a way of building up market power.

Um, it did enrich a lot of small-company people, but I think people at companies like Waze really regret selling out and, you know, ended up not really building anything of their own but becoming a tiny outpost of the Google empire.

CINDY COHN
Yeah, the “acquihire” thing is very central now, and what I hear from people in the industry is that if getting acquired by one of the big ones isn't your strategy, it's very hard to get funded, right? It feeds back into the VC world and how you get funded to get something built.

If it's not something that one of the big guys is going to buy, you're going to have a hard time building it and you're going to have a hard time getting the support to get to the place where you might actually even be able to compete with them.

TIM WU
And I think sometimes people forget we had different models. You know, some of your listeners might forget that, you know, in the ‘70s, ‘80s, and ‘90s, and early 2000s, people did build companies not just to be bought...

CINDY COHN
Right.

TIM WU
...but to build fortunes, or because they thought it was a good company. I mean, the people who built Sun, or Apple, or, you know, Microsoft, they weren't saying, well, I hope I'm gonna be bought by IBM one day. And they made real fortunes. I mean, look, being acquired, you can obviously become a very wealthy person, but you don't become a person of significance. You can go fund a charity or something, but you haven't really done something with your life.

CINDY COHN
I'm going to flip it around again. So we get to the place where the Tim Wu vision is realized: the power is spread more broadly, we've got lots of little businesses all around, we've got many choices for consumers. What else do you see in this world? Like, what role does the advertising business model play in this kind of better future? That's just one example, of many, that we could give.

TIM WU
Yeah, no, I like your vision of a different future. I think it goes back to the sense of opportunity: you know, you could have a life where you run a small business on the internet that is a respectable business, and you're neither a billionaire nor impoverished – you just have your own business, the way people in New York or other parts of the country used to run stores. And in that world, in my ideal world, there is advertising, but advertising is primarily informational, if that makes sense.

It provides useful information. And it's a long way between here and there, but in that world it's not the default business model for informational sources, so it has much less corrupting effect. Um, you know, obviously everyone's business model is going to affect them, but advertising has some of the more corrupting business models around.

So, in my ideal world, it's not that advertising will go away – people want information – but we'd strike a better bargain. Exactly how you do that, I guess more competition helps: sites you might frequent with less advertising, better privacy-protecting sites. But, you know, passing privacy legislation might help too.

CINDY COHN
I think that's right. EFF has taken the position that we should ban behavioral ads. That's a pretty strong position for us, and not what we normally do – to say, well, we need to ban something. But we also need, of course, comprehensive privacy law, because the lack of a baseline privacy protection underlies so many of the harms that we're seeing online right now.

I don't know if you see it the same way, but it certainly seems to be the through line for a lot of the harms that are coming up as things people are concerned about.

TIM WU
I mean, absolutely, and I, you know, don't want to give EFF advice on their views, but I would say that I think it's wise to see the totally unregulated collection of data from, you know, millions, if not billions of people as a source of so many of the problems that we have.

It drives unhealthy business models, and it leads to real-world consequences in terms of identity theft and so many others. But I'd focus first on the kind of behavior it encourages, the kind of business models it encourages, which are ones that just don't, in the aggregate, feel very good for the businesses or for us in particular.

So yeah, my first legislative priority at this moment would be a privacy law that is not just something that gives supposed user rights to take a look at the data that's collected, but that meaningfully stops the collection of data. And I think we'll all just shrug our shoulders and say, oh, we're better off without that. Yes, it supported some things, but we will still have most of them – it's not as if we didn't have friends before Facebook.

It's not as if we didn't have video content before YouTube, you know, these things will survive with less without behavioral advertising. I think your stance on this is entirely, uh, correct.

CINDY COHN
Great. Thank you – I always love it when Tim agrees with me, and, you know, it pains me when we disagree. One of the things I know is that you are one of the people who was inspired by Larry Lessig, and we cite Larry a lot on this show, because we like to think about things, or organize them, in terms of his four levers of digital regulation – laws, norms, markets, and code – as four ways that we could control things online. And I know you've been focusing a lot on laws lately, and markets as well.

How do you think about, you know, these four levers and where we are and, and how we should be deploying them?

TIM WU
Good question. I regard Larry as a prophet. He was my mentor in law school, and in fact he is responsible for most of my life direction. Larry saw that there was a force arising through code that was already, at that time – the 90s and early 2000s – not particularly subject to any kind of accountability, and he saw that it could take forms that might not be consistent with the kinds of liberties you would like to have or expect. And he was right about that.

You know, you can say whatever you want about law or government, and there are many examples of terrible government, but at least with the United States Constitution we say, well, there is this problem called tyranny and we need to do something about it.

There's no real equivalent for the development of abusive technologies unless you get government to do something about it, and government hasn't done much about it. You know, the interactions are what interest me about the four forces. So we can agree that code has a certain kind of sovereignty over our lives in many ways; most of us on a day-to-day basis are probably more affected by the code of the devices we use than by the laws we operate under.

And the question is, what controls code? And the two main contenders are the market and law. And right now the winner by far is just the market, which has led codemakers in directions that even they find kind of unfortunate and disgraceful.

I don't remember who had that quote, but it was some Facebook engineer who said the greatest minds of our generation are writing code to try to get people to click on random ads. We have sort of wasted a generation of talent on meaningless revenue generation when they could be building things that make people's lives better.

So, you know, the answer – it's not easy – is to use law to counter the market. And that's where I think we are with Larry's four factors.

CINDY COHN
Yeah, I think that's right, and I agree that it's a little ro-sham-bo: you can control code with laws and markets, and you can control markets with code, which is kind of where interoperability comes in sometimes, and with laws. And norms play a slightly different whammy role in all of these things. But I do think those interactions are really important. I've always thought that "to regulate or not to regulate, that is the question" was a somewhat phony conversation, because we're already embedded in a set of laws; it's just that there are ones we pay attention to and ones we might not notice. But I do think we're in a time when we have to think a lot harder about how to make laws that will be flexible enough to empower people and empower competition, and not lock in the winners of today's markets. And we spend a lot of time thinking about that issue.

TIM WU
Well, let me say this much. This might sound a little contradictory given my life story, but I'm not actually a fan of big government, certainly not overly prescriptive government. Having been in government, I see government's limits, and they are real. But I do think the people together are powerful.

I think laws can be powerful, but what they most usefully do is balance out the market. You know what I'm saying? And create different incentives or different forces against it. I think trying to have government decide exactly how tech should run is usually a terrible idea. But to cut off incentives – you talked about behavioral advertising. So let's say you ban behavioral advertising just the way we ban child labor or something. You know, you can live without it. And, yeah, maybe we're less productive because we don't let 12 year olds work in factories. There's a marginal loss of revenue, but I frankly think it's worth it.

And, you know, some of the other practices that have shown up are in some ways the equivalent, and we can live without them. It's sort of easy to say we should ban child labor, but when you look for those kinds of practices, that's where we need law to be active.

JASON KELLEY
Well, Cindy, I came away from that with a reading list. I'm sure a lot of people are familiar with those authors and those books, but I am going to have to catch up. I think we'll put some of them, maybe all of the books, in the show notes so that people who are wondering can catch up on their end.

You, as someone who's already read all those books, probably have different takeaways from this conversation than me.

CINDY COHN
You know what I really like is how Tim thinks. He comes at this, especially most recently, from an economics perspective, so his future is really an economic one.

It's about an internet that has lots of spaces for people to make a reasonable living, as opposed to a few people making a killing or selling their companies to the big tech giants. And I think that vision dovetails with what a lot of the people we've talked to on this show have said: that, you know, in some ways we've got to think about how we redistribute the internet, and that includes redistributing the economic benefits.

JASON KELLEY
Yeah. And thinking about, you know, something you've said many times, which is this idea that rather than going backwards to the internet we used to have, or the world we used to have, we're really trying to build a better world with the one we do have.

So another thing he did mention that I really pulled away from this conversation was when antitrust makes sense. And that sort of idea of, well, what do you do when companies start spreading into other ecosystems? That's when you really have to start thinking about the problems that they're creating for competition.

And I think the word he used was quarantine. Is that right?

CINDY COHN
Yeah I love that image.

JASON KELLEY
Yeah, that was just a helpful, I think, way for people to think about how antitrust can work. And that was something that I'll take away from this probably forever.

CINDY COHN
Yeah, I also liked his vision of what kind of deal we have with a lot of these free – or so-called free – tools. You know, at one time, when we signed up for a Gmail account, the deal was that it was going to look at what you searched on and what you wrote, and then place ads based on the context and what you did.

And now that deal is much, much worse. And I think he's right in likening that to something that has secretly gotten much more expensive for us – that the deal for us as consumers has gotten worse and worse. And I really like that framing because, again, it translates the issues where we live – you know, privacy and free speech and fairness – into what is actually kind of an economic framing of some of the same points.

I think the upshot from Tim and, honestly, some of the other people we've talked to, is that this idea of ‘blitzscaling’ – growing gigantic platforms – is really at the heart of a lot of the problems that we're seeing in free speech and in privacy and also in economic fairness. And I think that's a point that Tim makes very well.

I think that from The Attention Merchants to The Curse of Bigness, Tim has been writing in this space for a while, and what I appreciate is that Tim is really a person who came up in the Internet. He understands the Internet, he understands a lot of the values, and so he's not writing as an outsider throwing rocks so much as an insider who is kind of dismayed at how things have gone and looking to try to unpack all of the problems. And I think his observation, which is shared by a lot of people, is that a lot of the problems that we're seeing inside tech are also problems we're seeing outside tech. It's just that tech is new enough that they really took over pretty fast.

But I think it's important for us to recognize the problems inside tech, and it doesn't let tech off the hook to note that these are broader societal problems; it may help us in thinking about how we get out of them.

JASON KELLEY
Thanks for joining us for this episode of How to Fix the Internet. If you have feedback or suggestions, we'd love to hear from you. Visit eff.org/podcast and click on listener feedback. While you're there, you can become a member, donate, maybe pick up some merch, and just see what's happening in digital rights this week and every week.

We've got a newsletter, EFFector, as well as social media accounts on many, many, many platforms you can follow.

This podcast is licensed Creative Commons Attribution 4.0 International, and includes music licensed Creative Commons Attribution 3.0 Unported by their creators.

In this episode you heard Perspectives *** by J.Lang featuring Sackjo22 and Admiral Bob, and Warm Vacuum Tube by Admiral Bob featuring starfrosch.

You can find links to their music in our episode notes, or on our website at eff.org/podcast.

Our theme music is by Nat Keefe of BeatMower with Reed Mathis.

How to Fix the Internet is supported by the Alfred P. Sloan Foundation's program in public understanding of science and technology.

We’ll talk to you again soon.

I’m Jason Kelley

CINDY COHN
And I’m Cindy Cohn.

Josh Richman

"Infrastructures of Control": Q&A with the Geographers Behind University of Arizona's Border Surveillance Photo Exhibition

2 weeks 2 days ago

Guided by EFF's map of Customs & Border Protection surveillance towers, University of Arizona geographers Colter Thomas and Dugan Meyer have been methodically traversing the U.S.-Mexico border and photographing the infrastructure that comprises the so-called "virtual wall."

Anduril Sentry tower beside the Rio Grande River. Photo by Colter Thomas (CC BY-NC-ND 4.0)

From April 12-26, their outdoor exhibition "Infrastructures of Control" will be on display on the University of Arizona campus in Tucson, featuring more than 30 photographs of surveillance technology, a replica surveillance tower, and a blown-up map based on EFF's data.

Locals can join the researchers and EFF staff for an opening night tour at 5pm on April 12, followed by an EFF Speakeasy/Meetup. There will also be a panel discussion at 5pm on April 19, moderated by journalist Yael Grauer, co-author of EFF's Street-Level Surveillance hub. It will feature a variety of experts on the border, including Isaac Esposto (No More Deaths), Dora Rodriguez (Salvavision), Pedro De Velasco (Kino Border Initiative), Todd Miller (The Border Chronicle), and Daniel Torres (Daniel Torres Reports).

In the meantime, we chatted with Colter and Dugan about what their project means to them.

MAASS: Tell us what you hope people will take away from this project?

MEYER: We think of our work as a way for us to contribute to a broader movement for border justice that has been alive and well in the U.S.-Mexico borderlands for decades. Using photography, mapping, and other forms of research, we are trying to make the constantly expanding infrastructure of U.S. border policing and surveillance more visible to public audiences everywhere. Our hope is that doing so will prompt more expansive and critical discussions about the extent to which these infrastructures are reshaping the social and environmental landscapes throughout this region and beyond.

THOMAS: The diversity of landscapes that make up the borderlands can make it hard to see how these parts fit together, but the common thread of surveillance is an ominous sign for the future and we hope that the work we make can encourage people from different places and experiences to find common cause for looking critically at these infrastructures and what they mean for the future of the borderlands.

An Integrated Fixed Tower in Southern Arizona. Photo by Colter Thomas (CC BY-NC-ND 4.0)

MAASS: So much is written about border surveillance by researchers working off documents, without seeing these towers first hand. How did your real-world exploration affect your understanding of border technology?

THOMAS: Personally I’m left with more questions than answers when doing this fieldwork. We have driven along the border from the Gulf of Mexico to the Pacific, and it is surprising just how much variation there is within this broad system of U.S. border security. It can sometimes seem like there isn’t just one border at all, but instead a patchwork of infrastructural parts—technologies, architecture, policy, etc.—that only looks cohesive from a distance.

An Integrated Fixed Tower in Southern Arizona. Photo by Colter Thomas (CC BY-NC-ND 4.0)

MAASS: That makes me think of Trevor Paglen, an artist known for his work documenting surveillance programs. He often talks about the invisibility of surveillance technology. Is that also what you encountered?

MEYER: The scale and scope of U.S. border policing is dizzying, and much of how this system functions is hidden from view. But we think many viewers of this exhibition might be surprised—as we were when we started doing this work—just how much of this infrastructure is hidden in plain sight, integrated into daily life in communities of all kinds.

This is one of the classic characteristics of infrastructure: when it is working as intended, it often seems to recede into the background of life, taken for granted as though it always existed and couldn’t be otherwise. But these systems, from surveillance programs to the border itself, require tremendous amounts of labor and resources to function, and when you look closely, it is much easier to see the waste and brutality that are their real legacy. As Colter and I do this kind of looking, I often think about a line from the late David Graeber, who wrote that “the ultimate hidden truth of the world is that it is something that we make, and could just as easily make differently.”

THOMAS: Like Dugan said, infrastructure rarely draws direct attention. As artists and researchers, then, our challenge has been to find a way to disrupt this banality visually, to literally reframe the material landscapes of surveillance in ways that sort of pull this infrastructure back into focus. We aren’t trying to make this infrastructure beautiful, but we are trying to present it in a way that people will look at it more closely. I think this is also what makes Paglen’s work so powerful—it aims for something more than simply documenting or archiving a subject that has thus far escaped scrutiny. Like Paglen, we are trying to present our audiences with images that demand attention, and to contextualize those images in ways that open up opportunities and spaces for viewers to act collectively with their attention. For us, this means collaborating with a range of other people and organizations—like the EFF—to invite viewers into critical conversations that are already happening about what these technologies and infrastructures mean for ourselves and our neighbors, wherever they are coming from.

Dave Maass

Federal Court Dismisses X's Anti-Speech Lawsuit Against Watchdog

2 weeks 2 days ago

This post was co-written by EFF legal intern Melda Gurakar.

Researchers, journalists, and everyone else have a First Amendment right to criticize social media platforms and their content moderation practices without fear of being targeted by retaliatory lawsuits, a federal court recently ruled.

The decision by a federal court in California to dismiss a lawsuit brought by Elon Musk’s X against the Center for Countering Digital Hate (CCDH), a nonprofit organization dedicated to fighting online hate speech and misinformation, is a win for greater transparency and accountability of social media companies. The court’s ruling in X Corp. v. Center for Countering Digital Hate Ltd. shows that X had no legitimate basis to bring its case in the first place, as the company used the lawsuit to penalize the CCDH for criticizing X and to deter others from doing so.

Vexatious cases like these are known as Strategic Lawsuits Against Public Participation, or SLAPPs. These lawsuits chill speech because they burden speakers who engaged in protected First Amendment activity with the financial costs and stress of having to fight litigation, rather than seeking to vindicate legitimate legal claims. The goal of these suits is not to win, but to inflict harm on the opposing party for speaking. We are grateful that the court saw X’s lawsuit was a SLAPP and dismissed it, ruling that the claims lacked legal merit and that the suit violated California’s anti-SLAPP statute.

The lawsuit, filed in July 2023, accused the CCDH of unlawfully accessing and scraping data from X's platform, which X argued CCDH used to harm X Corp.'s reputation and, by extension, its business operations, leading to lost advertising revenue and other damages. X argued that CCDH had initiated this calculated “scare campaign” aimed at deterring advertisers from engaging with the platform, supposedly resulting in a significant financial loss for X. Moreover, X claimed that the CCDH breached its Terms of Service contract as a user of X.

The court ruled that X’s accusations were insufficient to overcome the protective shield of California's anti-SLAPP statute. The court dismissed X Corp.'s claims, including its breach of contract claim, because X Corp. failed to convincingly allege significant losses attributable to CCDH's activities. The court also rejected X’s claim under the federal Computer Fraud and Abuse Act (CFAA). X had argued that the CFAA barred CCDH’s scraping of public tweets—an erroneous reading of the law. The court found that, regardless of that argument, X had not shown a “loss” of the type protected by the CFAA, such as technological harm to data or computers. This outcome is not only a triumph for CCDH, but also validates the anti-SLAPP statute's role in safeguarding critical research efforts against baseless legal challenges.

EFF, alongside the ACLU of Northern California and the national ACLU, filed an amicus brief in support of CCDH, arguing that X Corp.'s lawsuit mischaracterized a nonviable defamation claim as a breach of contract to retaliate against CCDH. The brief supported CCDH's motion to dismiss, arguing that X's terms of service, as applied to CCDH's data scraping, should be deemed void as contrary to the public interest. It also warned of a potential chilling effect on research and activism that rely on digital platforms to gather information.

The ramifications of X Corp v. CCDH reach far beyond this decision. X Corp v. CCDH affirms the Center for Countering Digital Hate's freedom to conduct and publish research that critiques X Corp., and sets precedent that protects critical voices from being silenced online. We are grateful that the court reached this correct result and affirmed that people should not be targeted by lawsuits for speaking critically of powerful institutions.

Aaron Mackey