John Gilmore Leaves the EFF Board, Becomes Board Member Emeritus

3 days 6 hours ago

Since he helped found EFF 31 years ago, John Gilmore has provided leadership and guidance on many of the most important digital rights issues we advocate for today. But in recent years, we have not seen eye-to-eye on how to best communicate and work together, and we have been unable to agree on a way forward with Gilmore in a governance role. That is why the EFF Board of Directors has recently made the difficult decision to vote to remove Gilmore from the Board.

We are deeply grateful for the many years Gilmore gave to EFF as a leader and advocate, and the Board has elected him to the role of Board Member Emeritus moving forward. “I am so proud of the impact that EFF has had in retaining and expanding individual rights and freedoms as the world has adapted to major technological changes,” Gilmore said. “My departure will leave a strong board and an even stronger staff who care deeply about these issues.”

John Gilmore co-founded EFF in 1990 alongside John Perry Barlow, Steve Wozniak and Mitch Kapor, and provided significant financial support critical to the organization's survival and growth over many years. Since then, Gilmore has worked closely with EFF’s staff, board, and lawyers on privacy, free speech, security, encryption, and more.

In the 1990s, Gilmore found the government documents that confirmed the First Amendment problem with the government’s export controls over encryption, and helped initiate the filing of Bernstein v. DOJ, which resulted in a court ruling that software source code is speech protected by the First Amendment and that the government's regulations preventing its publication were unconstitutional. The decision made it legal in 1999 for web browsers, websites, and software like PGP and Signal to use the encryption of their choice.

Gilmore also led EFF’s effort to design and build the DES Cracker, which was regarded as a fundamental breakthrough in how we evaluate computer security and the public policies that control its use. At the time, the 1970s Data Encryption Standard (DES) was embedded in ATMs and banking networks, as well as in popular software around the world. U.S. government officials proclaimed that DES was secure, while secretly retaining the ability to break it themselves. The EFF DES Cracker publicly showed that DES was in fact so weak that it could be broken in one week with an investment of less than $350,000. This catalyzed the international creation and adoption of the much stronger Advanced Encryption Standard (AES), now widely used to secure information worldwide.
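The weakness the DES Cracker exposed comes down to simple arithmetic: DES uses a 56-bit key, so an attacker who can test keys fast enough will find the right one by exhaustive search. The sketch below runs that back-of-the-envelope calculation; the keys-per-second rate is an illustrative assumption roughly in the range reported for 1998-era custom hardware, not an exact specification of EFF's machine.

```python
# Back-of-the-envelope arithmetic for brute-forcing DES's 56-bit keyspace.
# The search rate below is an assumed, illustrative figure; real hardware
# rates varied with chip count and clock speed.

KEY_BITS = 56
keyspace = 2 ** KEY_BITS            # total possible DES keys (~7.2e16)
keys_per_second = 9e10              # assumed aggregate search rate

# On average the correct key turns up after searching half the keyspace;
# the worst case requires searching all of it.
avg_seconds = (keyspace / 2) / keys_per_second
avg_days = avg_seconds / 86_400
worst_days = 2 * avg_days

print(f"keyspace:    {keyspace:.3e} keys")
print(f"average:     {avg_days:.1f} days")
print(f"worst case:  {worst_days:.1f} days")
```

At the assumed rate, the average search finishes in under five days and even the worst case stays within about nine, which is why a "one week" break was entirely plausible despite official assurances. By contrast, AES's minimum 128-bit keyspace is larger by a factor of 2^72, putting the same attack far beyond any conceivable hardware.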

Among Gilmore’s most important contributions to EFF and to the movement for digital rights has been recruiting key people to the organization, such as former Executive Director Shari Steele, current Executive Director Cindy Cohn, and Senior Staff Attorney and Adams Chair for Internet Rights Lee Tien.

EFF has always valued and appreciated Gilmore’s opinions, even when we disagree. It is no overstatement to say that EFF would not exist without him. We look forward to continuing to benefit from his institutional knowledge and guidance in his new role of Board Member Emeritus.

Cindy Cohn

Police Can’t Demand You Reveal Your Phone Passcode and Then Tell a Jury You Refused

4 days 3 hours ago

The Utah Supreme Court is the latest stop in EFF’s roving campaign to establish your Fifth Amendment right to refuse to provide your password to law enforcement. Yesterday, along with the ACLU, we filed an amicus brief in State v. Valdez, arguing that the constitutional privilege against self-incrimination prevents the police from forcing suspects to reveal the contents of their minds. That includes revealing a memorized passcode or directly entering the passcode to unlock a device.

In Valdez, the defendant was charged with kidnapping his ex-girlfriend after arranging a meeting under false pretenses. During his arrest, police found a cell phone in Valdez’s pocket that they wanted to search for evidence that he set up the meeting, but Valdez refused to tell them the passcode. Unlike many other cases raising these issues, however, the police didn’t bother seeking a court order to compel Valdez to reveal his passcode. Instead, during trial, the prosecution offered testimony and argument about his refusal. The defense argued that this violated the defendant’s Fifth Amendment right to remain silent, which also prevents the state from commenting on his silence. The court of appeals agreed, and now the state has appealed to the Utah Supreme Court.

As we write in the brief: 

The State cannot compel a suspect to recall and share information that exists only in his mind. The realities of the digital age only magnify the concerns that animate the Fifth Amendment’s protections. In accordance with these principles, the Court of Appeals held that communicating a memorized passcode is testimonial, and thus the State’s use at trial of Mr. Valdez’s refusal to do so violated his privilege against self-incrimination. Despite the modern technological context, this case turns on one of the most fundamental protections in our constitutional system: an accused person’s ability to exercise his Fifth Amendment rights without having his silence used against him. The Court of Appeals’ decision below rightly rejected the State’s circumvention of this protection. This Court should uphold that decision and extend that protection to all Utahns.

Protecting these fundamental rights is only more important as we also fight to keep automated surveillance that would compromise our security and privacy off our devices. We’ll await a decision on this important issue from the Utah Supreme Court.

Related Cases: Andrews v. New Jersey
Andrew Crocker

Victory! Oakland’s City Council Unanimously Approves Communications Choice Ordinance

4 days 7 hours ago

Oakland residents shared the stories of their personal experience; a broad coalition of advocates, civil society organizations, and local internet service providers (ISPs) lifted their voices; and now the Oakland City Council has unanimously passed Oakland’s Communications Service Provider Choice Ordinance. The newly minted law frees Oakland renters from being constrained to their landlord's preferred ISP by prohibiting owners of multiple occupancy buildings from interfering with an occupant's ability to receive service from the communications provider of their choice.

Across the country—through elaborate kickback schemes—large, corporate ISPs looking to lock out competition have manipulated landlords into denying their tenants the right to choose the internet provider that best meets their family’s needs and values. In August 2018, an Oakland-based EFF supporter emailed us asking what would need to be done to empower residents with the choice they were being denied. Finally, after three years of community engagement and coalition building, that question has been answered.

Modeled on a San Francisco law adopted in 2016, Oakland’s new Communications Choice ordinance requires property owners of multiple occupancy buildings to provide reasonable access to any qualified communication provider that has received a service request from a building occupant. San Francisco’s law has already proven effective. There, one competitive local ISP, which had previously been locked out of properties of forty or more units with active revenue sharing agreements, gained access to more than 1,800 new units by 2020. Even for those who choose to stay with their existing provider, a competitive communications market benefits all residents by incentivizing providers to offer the best services at the lowest prices. As Tracy Rosenberg, the Executive Director of coalition member Media Alliance—and a leader in the advocacy effort—notes, "residents can use the most affordable and reliable services available, alternative ISP's can get footholds in new areas and maximize competitive benefits, and consumers can vote with their pockets for platform neutrality, privacy protections, and political contributions that align with their values.”

Unfortunately, not every city is as prepared to take advantage of such measures as San Francisco and Oakland. The Bay Area has one of the most competitive ISP markets in the United States, including smaller ISPs committed to defending net neutrality and their users’ privacy. In many U.S. cities, that’s not the case.

We hope to see cities and towns across the country step up to protect competition and foster new competitive options by investing in citywide fiber-optic networks and opening that infrastructure to private ISPs.

Nathan Sheard

Why Is It So Hard to Figure Out What to Do When You Lose Your Account?

4 days 7 hours ago

We get a lot of requests for help here at EFF, with our tireless intake coordinator being the first point of contact for many. All too often, however, the help needed isn’t legal or technical. Instead, users just need an answer to a simple question: what does this company want me to do to get my account back?

People lose a lot when they lose their account. For example, being kicked off Amazon could mean losing access to your books, music, pictures, or anything else you have only licensed, not bought, from that company. But the loss can have serious financial consequences for people who rely on the major social media platforms for their livelihoods, the way video makers rely on YouTube or many artists rely on Facebook or Twitter for promotion.

And it’s even worse when you can’t figure out why your account was closed, much less how to get it restored.  The deep flaws in the DMCA takedown process are well-documented, but at least the rules of a DMCA takedown are established and laid out in the law. Takedowns based on ill-defined company policies, not so much.

Over the summer, writer and meme king Chuck Tingle found his Twitter account suspended due to running afoul of Twitter’s ill-defined repeat infringer policy. That they have such a policy is not a problem in and of itself: to take advantage of the DMCA safe harbor, Twitter is required to have one. It’s not even a problem that the law doesn’t specify what the policy needs to look like—flexibility is vital for different services to do what makes the most sense for them. However, a company has to make a policy with an actual, tangible set of rules if it expects people to be able to follow it.

This is what Twitter says:

What happens if my account receives multiple copyright complaints?

If multiple copyright complaints are received Twitter may lock accounts or take other actions to warn repeat violators. These warnings may vary across Twitter’s services.  Under appropriate circumstances we may suspend user accounts under our repeat infringer policy. However, we may take retractions and counter-notices into account when applying our repeat infringer policy. 

That is frustratingly vague. “Under appropriate circumstances” doesn’t tell users what to avoid or what to do if they run afoul of the policy. Furthermore, if an account is suspended, this does not tell users what to do to get it back. We’ve confirmed that “We may take retractions and counter-notices into account when applying our repeat infringer policy” means that Twitter may restore the account after a suspension or ban, in response to counter-notices and retractions of copyright claims. But an equally reasonable reading of it is that they will take those things into account only before suspending or banning a user, so counter-noticing won’t help you get your account back if you lost it after a sudden surge in takedowns.

And that assumes you can even send a counter-notice. When Tingle lost his account under its repeat infringer policy, he found that because his account was suspended, he couldn’t use Twitter’s forms to contest the takedowns. That sounds like a minor thing, but it makes it very difficult for users to take the steps needed to get their accounts back.

Often, being famous or getting press attention to your plight is the way to fast-track getting restored. When Facebook flagged a video of a musician playing a public domain Bach piece, and Sony refused to release the claim, the musician got it resolved by making noise on Twitter and emailing the heads of various Sony departments. Most of us don’t have that kind of reach.

Even when there are clear policies, those rules mean nothing if the companies don’t hold up their end of the bargain. YouTube’s Content ID rules claim a video will be restored if, after an appeal, a month goes by with no word from the complaining party. But there are numerous stories from creators in which a month passes, nothing happens, and nothing is communicated to them by YouTube. While YouTube’s rules need fixing in many ways, many people would be grateful if YouTube would just follow those rules.

These are not new concerns. Clear policies, notice to users, and a mechanism for appeal are at the core of the Santa Clara principles for content moderation. They are basic best practices for services that allow users to post content, and companies that have been hosting content for more than a decade have no excuse not to follow them.

EFF is not a substitute for a company helpline. Press attention is not a substitute for an appeals process. And having policies isn’t a substitute for actually following them.

Katharine Trendacosta

Crowd-Sourced Suspicion Apps Are Out of Control

4 days 7 hours ago

Technology rarely invents new societal problems. Instead, it digitizes them, supersizes them, and allows them to balloon and duplicate at the speed of light. That’s exactly the problem we’ve seen with location-based, crowd-sourced “public safety” apps like Citizen.

These apps come in a wide spectrum—some let users connect with those around them by posting pictures, items for sale, or local tips. Others, however, focus exclusively on things and people that users see as “suspicious” or potentially hazardous. These alerts run the gamut from active crimes, or the aftermath of crimes, to generally anything a person interprets as helping to keep their community safe and informed about the dangers around them.

These apps are often designed with a goal of crowd-sourced surveillance, like a digital neighborhood watch: a way of turning the aggregate eyes (and phones) of the neighborhood into an early warning system. But instead, they often exacerbate the same dangers, biases, and problems that exist within policing. After all, the likely outcome of posting a suspicious sight to the app isn’t just to warn your neighbors—it’s to summon authorities to address the issue.

And even worse than incentivizing people to share their most paranoid thoughts and racial biases on a popular platform are the experimental new features constantly being rolled out by apps like Citizen. First, it was a private security force, available to be summoned at the touch of a button. Then, it was a service to help make it (theoretically) even easier to summon the police by giving users access to a 24/7 concierge service that will call the police for you. There are scenarios in which a tool like this might be useful—but charging people for it, and more importantly, making people think they will eventually need a service like this, adds to the idea that companies benefit from your fear.

These apps might seem like a helpful way to inform your neighbors if the mountain lion roaming your city was spotted in your neighborhood. But in practice they have been a cesspool of racial profiling, cop-calling, gatekeeping, and fear-spreading. Apps where a so-called “suspicious” person’s picture can be blasted out to a paranoid community, because someone with a smartphone thinks they don’t belong, are not helping people to “Connect and stay safe.” Instead, they promote public safety for some, at the expense of surveillance and harassment for others.

Digitizing an Age Old Problem

Paranoia about crime and racial gatekeeping in certain neighborhoods is not a new problem. Citizen takes that old problem and digitizes it, making those knee-jerk sightings of so-called suspicious behavior capable of being broadcast to hundreds, if not thousands of people in the area.

But focusing those forums on crime, suspicion, danger, and bad-faith accusations can create havoc. No one is planning their block party on Citizen, an app filled with notifications like “unconfirmed report of a man armed with pipe” and “unknown police activity,” the way they might on other platforms. Neighbors aren’t likely to coordinate trick-or-treating on a forum they exclusively use to see if any cars in their neighborhood were broken into. And when you download an app that makes you feel like a neighborhood you were formerly comfortable in is now under siege, you’re going to use it not just to doom-scroll your way through strange sightings, but also to report your own suspicions.

There is a massive difference between listening to police scanners, a medium that reflects the ever-changing and updating nature of fluid situations on the street, and taking one second of that live broadcast and turning it into a fixed, unverified news report. Police scanners can be useful to many people for many reasons and ought to stay accessible, but listening to a livestream presents an entirely different context than seeing a fixed geo-tagged alert on a map.

As the New York Times writes, Citizen is “converting raw scanner traffic—which is by nature unvetted and mostly operational—into filtered, curated digital content, legible to regular people, rendered on a map in a far more digestible form.” In other words, they’re turning static into content with the same formula the long-running show Cops used to normalize both paranoia and police violence.

Police scanners reflect the raw data of dispatch calls and police response to them, not a confirmation of crime and wrongdoing. This is not to say that the scanner traffic isn’t valuable or important—the public often uses it to learn what police are doing in their neighborhood. And last year, protesters relied on scanner traffic to protect themselves as they exercised their First Amendment rights.

But publication of raw data is likely to give the impression that a neighborhood has far more crime than it does. As any journalist will tell you, scanner traffic should be viewed like a tip and be the starting point of a potential story, rather than being republished without any verification or context. Worse, once Citizen receives a report, many stay up for days, giving the overall impression to a user that a neighborhood is currently besieged by incidents—when many are unconfirmed, and some happened four or five days ago.

From Neighborhood Forum to Vigilante-Enabler

It’s well known that Citizen began its life as “Vigilante,” and much of its DNA and operating procedure continue to match its former moniker. Citizen, more so than any other app, is unsure if it wants to be a community forum or a Star Wars cantina where bounty hunters and vigilantes wait for the app to post a reward for information leading to a person’s arrest.

When a brush fire broke out in Los Angeles in May 2021, almost a million people saw a notification pushed by Citizen offering a $30,000 reward for information leading to the arrest of a man they thought was responsible. It is the definition of dangerous that the app offered money to thousands of users, inviting them to turn over information on an unhoused man who was totally innocent.

Make no mistake, this kind of crass stunt can get people hurt. It demonstrates a very narrow view of who the “public” is and what “safety” entails.

Ending Suspicion as a Service

Users of apps like Citizen, Nextdoor, and Neighbors should be vigilant about unverified claims that could get people hurt, and be careful not to feed the fertile ground for destructive hoaxes.

These apps are part of the larger landscape that law professor Elizabeth Joh calls “networked surveillance ecosystems.” The lawlessness that governs private surveillance networks like Amazon Ring and other home surveillance systems—in conjunction with social networking and vigilante apps—is only exacerbating age-old problems. This is one ecosystem that should be much better contained.

Matthew Guariglia

On Global Encryption Day, Let's Stand Up for Privacy and Security

4 days 12 hours ago

At EFF, we talk a lot about strong encryption. It’s critical for our privacy and security online. That’s why we litigate in courts to protect the right to encrypt, build technologies to encrypt the web, and lead the fight against anti-encryption legislation like last year’s EARN IT Act.

We’ve seen big victories in our fight to defend encryption. But we haven’t done it alone. That’s why we’re proud this year to join dozens of other organizations in the Global Encryption Coalition as we celebrate the first Global Encryption Day, which is today, October 21, 2021.

For this inaugural year, we’re joining our partner organizations to ask people, companies, governments, and NGOs to “Make the Switch” to strong encryption. We’re hoping this day can encourage people to make the switch to end-to-end encrypted platforms, creating a more secure and private online world. It’s a great time to turn on encryption on all the devices or services you use, or switch to an end-to-end encrypted app for messaging—and talk to others about why you made that choice. Using strong passwords and two-factor authentication are also security measures that can help keep you safe. 

If you already have a handle on encryption and its benefits, today would be a great day to talk to a friend about it. On social media, we’re using the hashtag #MakeTheSwitch.

The Global Encryption Day website has some ideas about what you could do to make your online life more private and secure. Another great resource is EFF’s Surveillance Self-Defense guide, where you can get tips on everything from private web browsing, to using encrypted apps, to keeping your privacy in particular security scenarios—like attending a protest, or crossing the U.S. border.

We need to keep talking about the importance of encryption, partly because it’s under threat. In the U.S. and around the world, law enforcement agencies have been seeking an encryption “backdoor” to access peoples’ messages. At EFF, we’ve resisted these efforts for decades. We’ve also pushed back against efforts like client-side scanning, which would break the promises of user privacy and security while technically maintaining encryption.

The Global Encryption Coalition is listing events around the world today. EFF Senior Staff Technologist Erica Portnoy will be participating in an “Ask Me Anything” about encryption on Reddit, at 17:00 UTC, which is 10:00 A.M. Pacific Time. Jon Callas, EFF Director of Technology Projects, will join an online panel about how to improve user agency in end-to-end encrypted services, on Oct. 28.

Joe Mullin

New Global Alliance Calls on European Parliament to Make the Digital Services Act a Model Set of Internet Regulations Protecting Human Rights and Freedom of Expression

4 days 13 hours ago

The European Parliament’s regulations and policy-making decisions on technology and the internet have unique influence across the globe. With great influence comes great responsibility. We believe the European Parliament (EP) has a duty to set an example with the Digital Services Act (DSA), the first major overhaul of European internet regulations in 20 years. The EP should show that the DSA can address tough challenges—hate speech, misinformation, and users’ lack of control on big platforms—without compromising human rights protections, free speech and expression rights, and users’ privacy and security.

Balancing these principles is complex, but imperative. A step in the wrong direction could reverberate around the world, affecting fundamental rights beyond European Union borders. To this end, 12 civil society organizations from around the globe, standing for transparency, accountability, and human rights-centered lawmaking, have formed the Digital Services Act Human Rights Alliance to establish and promote a world standard for internet platform governance. The Alliance is composed of digital and human rights advocacy organizations representing diverse communities across the globe, including in the Arab world, Europe, United Nations member states, Mexico, Syria, and the U.S.

In its first action towards this goal, the Alliance today is calling on the EP to embrace a human rights framework for the DSA and take steps to ensure that it protects access to information for everyone, especially marginalized communities; rejects inflexible and unrealistic takedown mandates that lead to over-removals and impinge on free expression; and strengthens mandatory human rights impact assessments so that issues like faulty algorithmic decision-making are identified before people get hurt.

This call to action follows a troubling round of amendments approved by an influential EP committee that crossed red lines protecting fundamental rights and freedom of expression. EFF and other civil society organizations told the EP prior to the amendments that the DSA offers an unparalleled opportunity to address some of the internet ecosystem’s most pressing challenges and help better protect fundamental rights online—if done right.

So, it was disappointing to see the EP committee take a wrong turn, voting in September to limit liability exemptions for internet companies that perform basic functions of content moderation and content curation, force companies to analyze and indiscriminately monitor users’ communication or use upload filters, and bestow special advantages, not available to ordinary users, on politicians and popular public figures treated as trusted flaggers.

In a joint letter, the Alliance today called on the EU lawmakers to take steps to put the DSA back on track:

  • Avoid disproportionate demands on smaller providers that would put users’ access to information in serious jeopardy.
  • Reject legally mandated strict and short time frames for content removals that will lead to removals of legitimate speech and opinion, impinging rights to freedom of expression.
  • Reject mandatory reporting obligations to Law Enforcement Agencies (LEAs), especially without appropriate safeguards and transparency requirements.
  • Prevent public authorities, including LEAs, from becoming trusted flaggers, and subject the conditions for becoming a trusted flagger to regular review and proper public oversight.
  • Consider mandatory human rights impact assessments as the primary mechanism for examining and mitigating systemic risks stemming from platforms' operations.

For the DSA Human Rights Alliance Joint Statement:
https://www.eff.org/document/dsa-human-rights-alliance-joint-statement

For more on the DSA:
https://www.eff.org/issues/eu-policy-principles

Karen Gullo

EFF to Federal Court: Block Unconstitutional Texas Social Media Law

4 days 18 hours ago

Users are understandably frustrated and perplexed by many big tech companies’ content moderation practices. Facebook, Twitter, and other social media platforms make many questionable, confounding, and often downright incorrect decisions affecting speakers of all political stripes. 

A new Texas law, which Texas Governor Greg Abbott said would stop social media companies that “silence conservative viewpoints and ideas,” restricts large platforms from removing or moderating content based on the viewpoint of the user. The measure, HB 20, is unconstitutional and should not be enforced, we told a federal court in Texas in an amicus brief filed Oct. 15. 

In NetChoice v. Paxton, two technology trade associations sued Texas to prevent the law from going into effect. Our brief, siding with the plaintiffs, explains that the law forces popular online platforms to publish speech they don’t agree with or don’t want to share with their users. Its broad restrictions would destroy many online communities that rely on moderation and curation. Platforms and users may not want to see certain kinds of content and speech that is legal but still offensive or irrelevant to them. They have the right under the First Amendment to curate, edit, and block everything from harassment to opposing political viewpoints.

Contrary to HB 20’s focus, questionable content moderation decisions are in no way limited to conservative American speakers. In 2017, for example, Twitter disabled the verified account of Egyptian human rights activist Wael Abbas. That same year, users discovered that Twitter had marked tweets containing the word “queer” as offensive. Recent reporting has highlighted how Facebook failed to enforce its policies against hate speech and promotion of violence, or even publish those policies, in places like Ethiopia.

However, EFF’s brief explains that users also rely on the First Amendment to create communities online, whether they are niche or completely unmoderated. Undermining speech protections would ultimately hurt users by limiting their options online. 

HB 20 also requires large online platforms to follow transparency and complaint procedures, such as publishing an acceptable use policy and biannual statistics on content moderation. While EFF urges social media companies to be transparent with users about their moderation practices, when governments mandate transparency, they must accommodate constitutional and practical concerns. Voluntary measures such as implementing the Santa Clara Principles, guidelines for a human rights framework for content moderation, best serve a dynamic internet ecosystem.

HB 20’s requirements, however, are broad and discriminatory. Moreover, HB 20 would likely further entrench the market dominance of the very social media companies the law targets because compliance will require a significant amount of time and money.

EFF has filed several amicus briefs opposing government control over content moderation, including in a recent successful challenge to a similar Florida law. We urge the federal court in Texas to rule that HB 20 restricts and burdens speech in violation of the Constitution.

Mukund Rathi

From Bangkok to Burlington — The Public Interest Social Internet

5 days 8 hours ago

This blog post is part of a series, looking at the public interest internet—the parts of the internet that don’t garner the headlines of Facebook or Google, but quietly provide public goods and useful services without requiring the scale or the business practices of the tech giants. Read our earlier installments.

In the last installment, we discussed platforms that tie messaging apps together. These let users chat with more people more easily, no matter where they are or what app they’re using, making it possible for someone using the latest chat tool, like Slack, to talk to someone on a decades-old platform like IRC. But localized services matter to the public interest internet as well. While forums like Nextdoor have drawn attention (and users) for offering neighborhood communication regardless of your zip code, other services that predate them—and get around many of their controversies—do exist.

Is the best of the Internet doomed to exist in just some narrow strongholds? 

This post will be about two very different social networks:

The first is Front Porch Forum, a Vermont-local platform that is “a micro hyperlocal social network,” tied to local services and with a huge percentage of uptake of local users. A caveat that many find more freeing than restricting: comments, replies, and posts don’t reach their neighbors until the following day, in a newsletter-style digest.

The other is Pantip, one of the top ten websites in Thailand. It’s a giant compared to Front Porch Forum, but its ability to persist—and stay independent—makes it a worthwhile subject.

Growing Slowly 

Cofounders Michael and Valerie Wood-Lewis, of Burlington, Vermont, began Front Porch Forum in the early 2000s by passing out flyers in their neighborhood. The goal wasn’t to build a company or create a startup—it was to meet their neighbors. Users of other neighborhood sites will be familiar with the benefits of a local online community—early posts ranged from people looking to borrow tools to neighbors helping one another find lost pets.

As the site grew, others outside of Burlington asked to join. But Wood-Lewis turned them down, opting to keep the community focused on his own area. At first, he instead offered a how-to guide for those who wanted to build their own local networks, but eventually the site allowed anyone in Vermont to join (it has since expanded to parts of New York and Massachusetts).

But even as it's grown, the focus has been on public good, not profit. Instead of increasing the number of posts users can make to drum up more content (and space for ads), the site has continued functioning effectively as an upgrade to its earlier listserv format. And rather than collecting data on users beyond their location (which is necessary to sign up for the site and communicate with neighbors) or plastering the site with advertising, Wood-Lewis uses Front Porch Forum’s hyperlocal geography to its advantage:

“We have been pretty much diametrically opposed to the surveillance business model from the beginning. So our basic business model is we sell ads, advertising space to local businesses and nonprofits. The ads are distributed by geography, and by date, and that's it. There's no, "Yeah, let's check people's browser history, or let's pry into people's lives." We do not do that.”

These simple ads make it easy for local businesses and others to offer services to their community without hiring a graphic designer or having to learn anything complicated about online advertising, like how to make contextual ads that rely on perceived user interests (and data). 

In contrast to the well-known issues of racism and gatekeeping on Nextdoor or Ring’s Neighbors app, Wood-Lewis attributes the general positivity of the site to a variety of factors that are all baked into the public interest mindset: slow-growth, a focus on community, and moderation. But not necessarily that kind of moderation—while posts are all reviewed by moderators, and there are some filtering tools, posts typically come out as a newsletter, once a day, by default. If you want to yell at your neighbor, you’ve got the option to mail them directly through the site, but you’re probably better off knocking on their door. Users say that while most of the internet “is like a fire hose of information and communication, Front Porch Forum is like slow drip irrigation.” 

While Front Porch Forum has grown, it’s done so through its own earnings and at its own pace. While many of the most popular social networks need to scale to perform for investors, which relies on moving fast and breaking things, Front Porch Forum could be described as a site for moving slowly and fixing things.


Staying Afloat Despite Free Speech Challenges

On the other side of the world, the forum Pantip, a sort of Thai Reddit, has grown to be one of the most popular sites in the country since its creation in 1997. Pantip's growth (and survival) is all the more significant because Thailand has some of the harshest legal frameworks in the world for online speech. Intermediaries are strictly liable for the speech of their users, which is particularly troubling, since the crime of criticizing members of the royal family (“lèse-majesté”) can lead to imprisonment for both poster and site administrator.

As a result, the site’s strict rules may seem overbearing to Western users—participating in the upvoting and points system requires validating your identity with a government ID, for example—yet the site remains popular after over twenty years of being run without outside investment. Pantip has navigated treacherous waters for a very long time, and has even had parts of the site shut down by the government, but it chugs along, offering a place for Thai users to chat online, while many other sites have been scuppered. For example, many newspapers have shut down comment sections for fear of liability. Though this legal regime puts Pantip’s owner in danger, particularly during regime changes, he still won't sell out to bigger companies: “Maybe I’m too conservative. I don’t believe that internet [business] needs a lot of money to run. Because we can do internet business with a very small [investment].”

Models for the Future?

Neither Front Porch Forum nor Pantip gets the headlines of a Facebook or a Twitter—but not because they're unsuccessful. Rather, their relatively specific rules and localized audiences make them poor models for scaling to world domination. To a certain extent, they benefit from not garnering huge amounts of publicity. In Front Porch Forum's case, mass appeal is irrelevant: the site’s membership spreads by word of mouth in a local context, and advertising revenues grow with it. For Pantip, it's better to keep a low profile, even if hundreds of thousands of users are in on the secret. And both sites are proof that social media doesn't have to be run by venture capital-funded, globally-scaled services, and that we don’t need a Facebook to give people in local areas or in developing countries forums to connect and organize.

But is part of their success due to Front Porch Forum and Pantip’s lack of interest in disrupting the tech giants of the world? If the Public Interest Internet is an alternative to Facebook and Google, is it really a viable future for just a few lucky groups? Is the best of the Internet doomed to exist in just some narrow strongholds? Or can we scale up the human-scale: make sure that everyone has their own Front Porch, or Pantip close to hand, instead of having to stake everything in the chilly, globalised space of the tech giants?

One area of the Net that has been with it since the beginning, and has managed to both be a place for friends to collaborate, and somewhere that has made a wider impact on the world, is the subject of the next post in this series. We’re going to discuss the world of fan content, including the Hugo Award-winning fanfiction archive at Archive of Our Own, and how the Public Interest Internet makes it possible for people to comment on and add to the stories they love, in places that better serve them than our current giants.

This is the sixth post in our blog series on the public interest internet.

Jason Kelley

EFF Files New Lawsuit Against California Sheriff for Sharing ALPR Data with ICE and CBP

6 days 8 hours ago

The Marin County Sheriff illegally shares the sensitive location information of millions of drivers with out-of-state and federal agencies, including Immigration and Customs Enforcement (ICE) and Customs and Border Protection (CBP). The Sheriff uses automated license plate readers (ALPRs)—high-speed cameras mounted on street poles or squad cars—to scan license plates and record the date, time, and location of each scan. This data can paint a detailed picture of the private lives of Marin County residents, including where they live and work, when they visit friends or drop their children off at school, and when they attend religious services or protests.

Last week, EFF filed a new lawsuit on behalf of three immigrant rights activists against Sheriff Bob Doyle and Marin County for violating two California laws that protect immigrants and motorists’ privacy. Our co-counsel are the ACLU Foundations of Northern California, Southern California, and San Diego & Imperial Counties, and attorney Michael Risher. We seek a court order prohibiting the Sheriff from sharing ALPR data with out-of-state and federal agencies.

The Marin Sheriff’s ALPRs scan thousands of license plates each month. That sensitive data, including photos of the vehicle and sometimes its drivers and passengers, is stored in a database. The Sheriff permits over 400 out-of-state and 18 federal agencies, including CBP and ICE, to run queries of full or partial license plates against information the Sheriff has collected.

This data sharing particularly impacts the safety and privacy of immigrants, communities of color, and religious minorities. Like many other surveillance technologies, ALPRs have a history of disproportionately impacting marginalized communities. ICE has used ALPR data to detain and deport immigrant community members. NYPD used ALPRs to scan license plates near mosques.

The Sheriff’s sharing of ALPR data to entities outside of California violates state law. S.B. 34, enacted in 2015, prohibits California law enforcement agencies from sharing ALPR data with entities outside of California. Moreover, the California Values Act (S.B. 54), enacted in 2018, limits the use of local resources to assist federal immigration enforcement, including the sharing of personal information.

To learn more, read the complaint, the press release, our case page, the ACLU of Northern California’s case page, and our clients’ statements.

Related Cases: Lagleva v. Marin County Sheriff
Saira Hussain

After Years of Delays and Alarmingly Flimsy Evidence, Security Expert Ola Bini’s Trial Set for This Week

6 days 8 hours ago

Update, October 19: Ola Bini's trial has been suspended by request of the Corporación Nacional de Telecomunicaciones (CNT) whose representative was unable to attend the hearing. No new date has yet been set.

For over two years, EFF has been following the case of Swedish computer security expert Ola Bini, who was arrested in Ecuador in April 2019, following Julian Assange's ejection from that country’s London embassy. Bini’s pre-trial hearing, suspended and rescheduled at least five times during 2020, finally concluded on June 29, 2021. Despite the cloud that has hung over the case—political ramifications have seemed to drive the allegations, and Bini has been subjected to numerous due process and human rights violations—we are hopeful that the security expert will be afforded a transparent and fair trial and that due process will prevail.

Ola Bini is known globally as a computer security expert; he is someone who builds secure tools and contributes to free software projects. Ola’s team at ThoughtWorks contributed to Certbot, the EFF-managed tool that has provided strong encryption for millions of websites around the world, and in 2018, Ola co-founded a non-profit organization devoted to creating user-friendly security tools.

From the very outset of Bini’s arrest at the Quito airport, there have been significant concerns about the legitimacy of the allegations against him. During our visit to Ecuador in July 2019, shortly after his arrest, it became clear that the political consequences of Bini’s arrest overshadowed the prosecution’s actual evidence. In brief, based on the interviews we conducted, we concluded that Bini's prosecution is a political case, not a criminal one. His arrest occurred shortly after Maria Paula Romo, then Ecuador’s Interior Minister, held a press conference to claim (without evidence) that a group of Russians and WikiLeaks-connected hackers were in the country, planning a cyber-attack in retaliation for the government's eviction of Assange; a recent investigation by La Posta revealed that the former minister knew Ola Bini was not the "Russian hacker" the government was looking for when he was detained at Quito's airport. (Romo was dismissed as minister in 2020 for ordering the use of tear gas against anti-government protestors.)

A so-called piece of evidence against Bini was leaked to the press and taken to court: a photo of a screenshot, supposedly taken by Bini himself and sent to a colleague, showing the telnet login screen of a router. The image is consistent with someone who connects to an open telnet service, receives a warning not to log on without authorization, and does not proceed—respecting the warning. As for the portion of a message exchange attributed to Bini and a colleague, leaked with the photo, it shows their concern with the router being insecurely open to telnet access on the wider Internet, with no firewall.
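The behavior the screenshot captures (connecting, reading the service's warning banner, and disconnecting without ever authenticating) takes only a few lines of code. Below is a minimal Python sketch of such a passive banner check; the host and port are placeholders, and even this kind of probe can carry legal risk, as this very case shows:

```python
import socket

def read_telnet_banner(host: str, port: int = 23, timeout: float = 5.0) -> str:
    """Connect to a telnet service, read whatever banner it sends, and
    disconnect without transmitting any credentials or commands."""
    with socket.create_connection((host, port), timeout=timeout) as sock:
        sock.settimeout(timeout)
        chunks = []
        try:
            while True:
                data = sock.recv(1024)
                if not data:  # server closed the connection
                    break
                chunks.append(data)
        except socket.timeout:
            pass  # no more banner data arrived within the timeout
    raw = b"".join(chunks)
    # Drop telnet option-negotiation bytes; keep printable ASCII and newlines
    printable = bytes(b for b in raw if 32 <= b < 127 or b in (10, 13))
    return printable.decode("ascii", errors="replace")
```

A router configured as the leaked messages describe (telnet open to the wider internet, no firewall) would hand this banner to anyone who connected, which is precisely the misconfiguration Bini and his colleague appear to have been discussing.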

Bini’s arrest and detention were fraught with due process violations. Bini spent 70 days in prison until a Habeas Corpus decision found his detention illegal (a decision that confirmed the weakness of the initial detention). He was released from jail, but the investigation continued, seeking evidence to back the accusations against him. After his release the problems continued, and as the delays dragged on, the Office of the Inter-American Commission on Human Rights (IACHR) Special Rapporteur for Freedom of Expression noted its concern about the delay in Bini’s trial in its 2020 annual report. At the time of our visit, Bini's lawyers told us that they counted 65 violations of due process, and journalists told us that no one was able to provide them with concrete descriptions of what he had done.

In April 2021, Ola Bini’s Habeas Data recourse, filed in October 2020 against the National Police, the Ministry of Government, and the Strategic Intelligence Center (CIES), was partially granted by the judge. According to Bini's defense, he had been facing continuous monitoring by members of the National Police and unidentified persons. The decision ordered CIES to disclose whether the agency had conducted surveillance activities against the security expert. The ruling concluded that CIES unduly denied such information to Ola Bini, failing to offer a timely response to his previous information request.

Though the judge decided in June’s pre-trial hearing to proceed with the criminal prosecution against Bini, observers noted the lack of solid reasoning in the judge's decision. The judge was later "separated" from the case in a ruling that acknowledged the wrongdoing of the successive pretrial suspensions and the violation of due process.

It is alarming, but perhaps not surprising, that the case will proceed after all these well-documented irregularities. While Ola Bini’s behavior and contacts in the security world may look strange to authorities, his computer security expertise is not a crime. Since EFF's founding in 1990, we have become all too familiar with overly politicized "hacker panic" cases, which encourage unjust prosecutions when the political and social atmosphere demands it. EFF was founded in part due to a notorious, and similar, case pursued in the United States by the Secret Service. Our Coders’ Rights Project has worked for decades to protect the security and encryption researchers who help build a safer future for all of us using digital technologies, and who far too often face serious legal challenges that prevent or inhibit their work. This case is, unfortunately, part of a longstanding pattern of unfair criminal persecution of security experts, who are often subjected to the same types of harassment as the human rights defenders and activists they work to protect.

In June of this year, EFF called upon Ecuador’s Human Rights Secretariat to give special attention to Ola Bini’s upcoming hearing and prosecution. As we stressed in our letter:

Mr. Bini's case has profound implications for, and sits at the center of, the application of human rights and due process, a landmark case in the context of arbitrarily applying overbroad criminal laws to security experts. Mr. Bini's case represents a unique opportunity for the Human Rights Secretariat Cabinet to consider and guard the rights of security experts in the digital age.  Security experts protect the computers upon which we all depend and protect the people who have integrated electronic devices into their daily lives, such as human rights defenders, journalists, activists, dissidents, among many others. To conduct security research, we need to protect the security experts, and ensure they have the tools to do their work.

The circumstances around Ola Bini's detention have sparked international attention and indicate the growing seriousness of security experts' harassment in Latin America. The flimsy allegations against Ola Bini, the series of irregularities and human rights violations in his case, as well as its international resonance, situate it squarely among other cases we have seen of politicized and misguided allegations against technologists and security researchers. 

We hope that justice will prevail during Ola Bini’s trial this week, and that he will finally be given the fair treatment and due process that the proper respect of his fundamental rights requires.

Jason Kelley

EFF Joins Press Freedom Groups In Asking U.S. To Drop Assange Extradition Efforts

1 week ago

EFF has joined a coalition of press freedom, civil liberties, and human rights groups that sent a letter to Attorney General Merrick Garland urging the Department of Justice to drop its efforts to extradite and prosecute Julian Assange.

The renewed request comes after a Yahoo News report that the CIA discussed kidnapping or killing Assange in 2017, before charges against Assange were filed. The agency also reportedly planned extensive spying on WikiLeaks associates.

Assange has been charged under the Espionage Act. The charges have been widely condemned by journalists and press freedom organizations, including by outlets that have been critical of Assange. Leaks of information that the government would prefer to keep secret, and the publication of those leaks by journalists, are vital to our democracy. Regardless of what one thinks about other criminal charges against Assange, his indictment on charges that mostly reflect basic journalistic practices will have a chilling effect on critical national security journalism. 

In January, a British judge denied the Trump Administration’s extradition request on the basis that the conditions of confinement in the U.S. would be overly harsh. The U.S. chose to appeal that decision, and the appeal is scheduled to be heard next week. Human rights and press freedom groups, including EFF, first asked the Biden Administration to drop the extradition effort in February.

In addition to EFF, the letter to DOJ has been signed by the ACLU, Amnesty International, Center for Constitutional Rights, Fight for the Future, Freedom of the Press Foundation, Human Rights Watch, PEN America, Reporters Without Borders, and many other groups. 

Joe Mullin

Flight of the Concord Drones

1 week 3 days ago

This blog post was written by Kenny Gutierrez, EFF Bridge Fellow.

The City Council of Concord, California, is tone-deaf to community concerns regarding a proposed police Unmanned Aerial System (UAS) program, commonly referred to as drones. In a city where the police department accounts for nearly 60% of the city budget, this should come as no surprise. The UAS program, however, will strangely be funded by Marathon Petroleum, which has no offices or facilities in Concord. EFF, the ACLU, and 15 other Contra Costa community organizations opposed this action. We also demanded police oversight and ample safeguards to protect civil liberties and civil rights if the program were to be adopted.

Privacy and police accountability are massive issues with UAS systems, and both are high-priority issues for California voters, as evidenced by the passage of the California Consumer Privacy Act and police accountability legislation.

Potential Issues with Drones

Drones are unmanned aerial vehicles that can be equipped with high-definition live-feed video cameras, thermal infrared video cameras, heat sensors, and radar—all of which allow for sophisticated and persistent surveillance. Drones can record video or still images in daylight or at night (with infrared lenses). They can also be equipped with software tools like license plate readers, face recognition, and GPS trackers that extend the dangers they pose to privacy. There have even been proposals for law enforcement to attach lethal and non-lethal weapons to drones. Additionally, newly developed drone automation allows for automatic tracking of vehicles and individuals.

EFF Letter to Concord City Council

EFF and our allies sent a letter urging the Concord City Council to not adopt a UAS system for Concord Police Department. If, however, the city council decided to approve the UAS system, we made specific recommendations to provide ample safeguards to protect civil liberties and civil rights:

First, drones should be deployed only with a warrant, or in an emergency that threatens death or serious bodily injury to a person.  All deployments should be thoroughly documented, and that documentation must be made publicly available.

Second, facial recognition technology, artificial intelligence, automated tracking, heat sensors, license plate readers, cell-phone interception, and lethal and non-lethal weapons should be prohibited as incorporated technologies to UAS drones.

Third, there must be clear rules regarding access to UAS footage. Officers suspected of misconduct must not be allowed to view footage until they have made an initial statement regarding the episode. People depicted in footage must have access to it, and footage depicting police use of force must be available to the public. As with body-worn cameras, without oversight police can exercise too much control over video before the public ever sees it.

More generally, Concord should adopt a Community Control Over Police Surveillance (CCOPS) ordinance.  A CCOPS law acts to promote transparency, the public’s welfare, civil rights, and civil liberties in all decisions regarding the funding, acquisition, and deployment of surveillance equipment by local police.  Such a law would appropriately require city departments to provide the public with information about surveillance proposals, including a draft usage policy, weeks rather than days before the proposals are debated before the City Council.

Not only did the Concord City Council approve the police drones, it also failed to adopt these meaningful safeguards. Concord, like many other municipalities, decided instead to use a boilerplate, unprotective Lexipol policy. Will Concord become the next Chula Vista, which deploys a drone for nearly every 911 call? Only time will tell. Concord’s drone policy is permissive enough that it could be.

Adam Schwartz

California Activists Sue Marin County Sheriff for Illegally Sharing Drivers’ License Plate Data With ICE, CBP and Other Out-of-State Agencies

1 week 4 days ago
Immigrants’ Privacy, Security Threatened by Sheriff’s Practice, Which Violates California Law

San Francisco—Community activists in Northern California today sued Marin County Sheriff Robert Doyle for illegally sharing millions of local drivers’ license plates and location data, captured by a network of cameras his office uses, with hundreds of federal and out-of-state agencies—a practice that violates two California laws, endangers the safety and privacy of local immigrant communities, and facilitates location tracking by police.

The ACLU Foundations of Northern California, Southern California, and San Diego & Imperial Counties, the Electronic Frontier Foundation (EFF), and attorney Michael T. Risher represent community activists Lisa Bennett, Cesar S. Lagleva, and Tara Evans, long-time Marin community members, in a lawsuit filed in Marin County Superior Court. The suit seeks to end the sheriff’s illegal practice of giving hundreds of agencies outside California access to a database of license plate scans used to identify and track people, revealing where they live and work, when they visit friends or drop their kids at school, and when they attend religious services or protests.

“The information unveiled through this lawsuit shows that the freedoms that people think they possess in Marin County are a mirage: people cannot move about freely without being surveilled,” said Bennett. “Our county sheriff, who has sworn to uphold the law, is in fact violating it by sharing peoples’ private information with outside agencies. This has especially alarming implications for immigrants and people of color: two communities that are traditionally the targets of excessive policing, surveillance, and separation from loved ones and community through incarceration or deportation.”

License plate scans occur through Automated License Plate Readers (ALPRs): high-speed cameras mounted in a fixed location or atop police cars moving through the community that automatically capture all license plates that come into view, recording the exact location, date, and time that the vehicle passes by. The information can paint a detailed picture of our private lives, our daily schedules, and our social networks.

Targeting Immigrant Communities

Documents show that the sheriff’s office shares and transfers ALPR information with Immigration and Customs Enforcement (ICE), Customs and Border Protection (CBP), over a dozen other federal law enforcement agencies, and over 400 out-of-state law enforcement agencies.

“In the hands of police, the use of ALPR technology is a threat to privacy and civil liberties, especially for immigrants. Federal immigration agencies routinely access and use ALPR information to locate, detain, and deport immigrants. The sheriff’s own records show that Sheriff Doyle is sharing ALPR information with two of the most rogue agencies in the federal government: ICE and CBP,” said Vasudha Talla, Immigrants’ Rights Program Director at the ACLU Foundation of Northern California (ACLU NorCal). “Police should not be purchasing surveillance technology, let alone facilitating the deportation and incarceration of our immigrant communities.”

Using its ALPR system, the Marin County Sheriff’s Office scans tens of thousands of license plates each month. That sensitive personal information, which includes photographs of the vehicle and sometimes its driver and passengers, is stored in a database. The sheriff permits hundreds of out-of-state agencies and several federal entities, including units of the Department of Homeland Security, to run queries of a license plate against information the sheriff has collected. The agencies are also able to compare their own bulk lists of vehicle license plates of interest, known as “hot lists,” against the ALPR information collected by the sheriff’s office.
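The "hot list" comparison described above is, at bottom, a bulk set-intersection: an outside agency's list of plates of interest is matched against every plate the sheriff's cameras have scanned, and each hit hands the querying agency a time-stamped location. A minimal sketch of that matching logic follows; the plate numbers and field names are invented for illustration, and real ALPR systems store far richer records:

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class PlateScan:
    plate: str           # normalized plate string, e.g. "7ABC123"
    timestamp: datetime  # when the camera captured the plate
    location: str        # camera or GPS identifier

def normalize(plate: str) -> str:
    """Canonicalize a plate for comparison: uppercase, no spaces."""
    return plate.upper().replace(" ", "")

def hot_list_hits(scans: list[PlateScan], hot_list: set[str]) -> list[PlateScan]:
    """Return every scan whose plate appears on an outside agency's hot list."""
    wanted = {normalize(p) for p in hot_list}
    return [s for s in scans if normalize(s.plate) in wanted]
```

Even this toy version makes the privacy stakes concrete: every match discloses not just that a vehicle exists, but exactly where and when it was seen.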

California’s S.B. 34, enacted in 2015, bars this practice. The law requires agencies that use ALPR technology to implement policies to protect privacy and civil liberties, and specifically prohibits police from sharing ALPR data with entities outside of California. The sheriff also violates the California Values Act (S.B. 54), also known as California’s “sanctuary” law. Enacted in 2018, the law limits the use of local resources to assist federal immigration enforcement.

In 2019, ACLU NorCal released documents revealing that ICE agents use their access to the license plate data gathered by police agencies across the nation to find and arrest people.

“This lawsuit sends a message to other California law enforcement agencies that are unlawfully sharing ALPR data and helping ICE—and we know there are others,” said EFF Staff Attorney Saira Hussain. “In recent years, California has enacted laws specifically to protect immigrant communities and prohibit the sharing of ALPR data with entities outside the state. Local police and sheriffs are not above the law and should be held accountable when they violate measures designed to protect the privacy of all Californians generally, and vulnerable communities specifically.”

The lawsuit is the first of its kind to challenge the sharing of private information collected by ALPR mass surveillance.

For the complaint:
https://www.eff.org/document/lagleva-v-marin-county-sheriff

For more on ALPRs:
https://www.eff.org/pages/automated-license-plate-readers-alpr

Contact:  press@aclunc.org
Karen Gullo

Be The Face of Change and Pledge for EFF Through CFC Today!

1 week 4 days ago

The pledge period for the Combined Federal Campaign (CFC) is underway and EFF needs your help! Last year, U.S. government employees raised over $38,000 for EFF through the CFC, helping us fight for privacy, free speech, and security on the internet so that we can help create a better digital future.

The Combined Federal Campaign is the world's largest and most successful annual charity campaign for U.S. federal employees and retirees. Since its inception in 1961, the CFC fundraiser has raised more than $8.5 billion for local, national, and international charities. This year's campaign runs from September 1 to January 15, 2022. Be sure to make your pledge before the campaign ends!

U.S. government employees can give to EFF by going to GiveCFC.org and clicking DONATE to give via payroll deduction, credit/debit, or an e-check! Be sure to use EFF's CFC ID # 10437. You can even scan the QR code below!

This year's CFC theme is "You Can Be The Face of Change," as your donations help countless charities to make the world better. With your support, EFF can continue our strides towards internet freedom. Recently we have: pushed Apple to pause its plan to install dangerous photo and message scanning features onto its devices, worked with California residents and lawmakers to pass one of the largest state investments in public fiber broadband in U.S. history, and launched the "Dark Patterns Tip Line" with partners to shed light on deceptive patterns in consumer tech products and services. 

Government employees have a tremendous impact on the shape of our democracy and the future of civil liberties and human rights online. Support EFF today by using our CFC ID #10437 when you make a pledge!

Christian Romero

Come Back with a Warrant: Congress Should Pass the Protecting Data at the Border Act

1 week 5 days ago

We do not lose our constitutional rights at the border. The U.S. Department of Homeland Security (DHS), however, believes we do. In fiscal year 2019 alone (before the pandemic curbed international travel), U.S. Customs and Border Protection (CBP) officers conducted nearly 41,000 electronic device searches without first obtaining from a judge a warrant supported by probable cause of wrongdoing. Unfettered border searches of electronic devices pose a significant threat to personal privacy. That’s why we urge Congress to pass the Protecting Data at the Border Act, a bill recently re-introduced by Sen. Ron Wyden (D-OR) and Sen. Rand Paul (R-KY) that would create a warrant requirement for these searches, thereby protecting our constitutional rights at the border.

CBP, as well as U.S. Immigration and Customs Enforcement (ICE), has been conducting intrusive warrantless border device searches since at least 2009, when the two agencies first published their device search policies (CBP’s currently operative policy was published in 2018). The number of device searches at the border has been steadily increasing, affecting tens of thousands of international travelers each year. Our electronic devices contain intimate information about our personal preferences and daily routines, as well as private communications with friends and family. They contain data revealing health conditions, financial standing, and religious and political beliefs. A search that reveals this information to anyone, let alone law enforcement, is a gross violation of our privacy and free speech rights.

EFF and the American Civil Liberties Union (ACLU)—believing that any warrantless search of electronic devices at the border violates travelers’ rights to privacy under the Fourth Amendment, and freedom of speech and press, private association, and anonymity under the First Amendment—filed suit in 2017 on behalf of 11 individuals whose devices were searched without a warrant at the border. Where the U.S. Supreme Court in Riley v. California (2014) acknowledged electronic devices contain “the sum of an individual’s private life” and thus ruled that a warrant must be obtained before searching the cell phone of an arrestee, EFF/ACLU’s suit sought to extend this warrant requirement to border searches of electronic devices. Unfortunately, our path in the courts is currently stalled. The Supreme Court this summer declined to take our case, and despite making some progress in the appellate courts, no circuit court has required a warrant for border device searches in all circumstances.

The Protecting Data at the Border Act takes the fight to Congress (Rep. Ted Lieu (D-CA) is expected to introduce the House bill). Along with requiring government officials to obtain a probable cause warrant before accessing the contents of an electronic device, the bill would also protect our digital privacy and free speech rights in the following ways: 

  • Prohibiting border officers from denying entry or exit to a U.S. citizen or permanent resident if they refuse to provide a device password or unlock their device;
  • Requiring border officers to notify travelers, before requesting consent to a search of their devices, that they have the right to refuse;
  • Requiring that consent to a search must be written; 
  • Requiring border officers to have probable cause that a traveler committed a felony before confiscating their device;
  • Forbidding border officers from keeping information obtained from a device search unless that information amounts to probable cause of a crime; 
  • Requiring that evidence gathered in violation of any of the above be inadmissible in court; and 
  • Requiring the government to gather and publish statistics regarding border searches of electronic devices, including how officers obtained access, the breakdown of U.S. versus non-U.S. persons whose devices were searched, the countries from which travelers arrived, and the perceived race and ethnicity of the traveler subject to a search. 

EFF voiced support for the 2017 and 2019 versions of this bill as well. Since 2017, our lives have only become more digital, with our devices holding ever-increasing amounts of sensitive personal information. Congress should enact the Protecting Data at the Border Act and recognize that we do not lose our constitutional rights at the border.

Related Cases: Merchant v. Mayorkas (formerly Alasaad v. Wolf)
Chao Liu

Meet the Alliance for Encryption in Latin America and the Caribbean

1 week 5 days ago

Today EFF and other internet and digital rights organizations are announcing the Alliance for Encryption in Latin America and the Caribbean (AC-LAC). The Alliance is a platform for collective capacity building and information, based on the principle that encryption is an essential tool for security and respect for human and fundamental rights in the region, including freedom of expression and privacy.

The virtual launch event, featuring member organizations, takes place October 21 and is open to the public.

This regional Alliance seeks to advance a proactive agenda to promote and defend encryption in Latin America and the Caribbean. It aims to strengthen the use of encryption and generate an ecosystem of trust, security and stability within information and communications technologies (ICTs), particularly the critical infrastructure of the internet and its applications and services.

The platform, comprising 14 organizations throughout the region, seeks to coordinate efforts with encryption initiatives at the global, regional, and national levels, and to generate spaces for exchanging information and mobilizing actions in response to the effects that weakened encryption has on security and fundamental rights.

The member organizations, which have outlined a joint agenda despite their diverse natures and interests, are: Access Now; ALAI; APC; Article 19; Coalizão Direitos na Rede (CDR); Derechos Digitales; EFF; Karisma Foundation; IP.rec; IRIS; ISOC Brazil; Nic.br; and R3D. The eLAC initiative will participate as an observer member. The Alliance is open to new members who share its principles and ideas.

On Thursday, October 21, during Global Encryption Day, AC-LAC will present its regional pro-encryption agenda. A live event will be held to introduce the Alliance and its mission, and to discuss why encryption is imperative for a more secure internet.

In addition to the 14 member organizations, AC-LAC has the Institute for Digital Development of Latin America and the Caribbean (IDD LAC) serving as the Alliance's secretariat.

Follow us on social media—Twitter (@aclac_alianza) and LinkedIn (AC-LAC)—or visit our website at www.ac-lac.org for more information.

Veridiana Alimonti

Records Shed New Light on Trump White House Officials’ Efforts to Punish Social Media

1 week 6 days ago

Within a day of Twitter fact-checking President Donald Trump’s May 2020 false tweets about mail-in voting, federal officials began trying to find out how much government agencies spent to advertise on social media. This inquiry was likely part of a planned effort to cut that funding, according to records released last month.

The records, released to EFF and the Center for Democracy & Technology as part of a joint FOIA lawsuit, add detail to the timeline before Trump issued his unconstitutional Executive Order retaliating against online social media services for moderating his posts. President Joseph R. Biden revoked the order in May.

Although Trump’s Executive Order is no longer in effect, the new documents show the lengths officials within the Office of Management and Budget (OMB) went to as part of an unconstitutional effort to leverage the federal government’s spending power to punish platforms for exercising their First Amendment rights to moderate Trump’s speech.

A day before Trump issued the order on May 28, 2020, OMB officials sought to learn whether the government already had data that would show how much money all federal agencies spent to advertise on social media. In an email exchange on May 27, 2020, officials inquired whether it was possible to use www.usaspending.gov to calculate the figure.

The May 27, 2020 thread does not resolve the question, but it appears that OMB officials could not calculate that number. Thus, a day later, Trump issued the Executive Order, which required all federal agencies to report to OMB how much they had spent on online advertising, as well as any laws that would permit the agencies to restrict further online advertising spending. (The Executive Order had several other unconstitutional aspects, which you can read about here.)

Earlier this year, EFF and CDT made public the OMB records showing that federal agencies spent more than $117 million to advertise online. OMB’s latest records document a different aspect of federal agencies’ responses: whether they had any legal authority to unilaterally cut their online advertising spending.

The bulk of the records released to EFF and CDT show that federal agencies largely did not believe they had any legal basis to withdraw their funding. A compilation of agency responses created by OMB begins on page 174 of the recently released records.

Most agencies reported that there was no law or regulation on point that would permit them to cut online advertising spending. The General Services Administration, however, provided a path forward for potentially accomplishing Trump’s retaliatory goal. The GSA stated that although it had no existing legal authority to cut online ad spending, the law governing the agency permitted it to write new “regulations prohibiting GSA services and staff from using GSA funds for advertising or marketing on online platforms.” The documents do not indicate whether Trump administration officials followed up with the GSA regarding its proposal, but it does not appear as though any federal agencies cut online advertising in retaliation for fact-checking Trump’s tweets.

The records also show that the White House remained interested in the results of OMB’s survey of federal agencies’ online advertising spending and whether they could cut that funding. A June 26, 2020 email circulating the results of OMB’s compilation of agency responses states that officials within the White House Office of General Counsel reached out about the results: “We assume you will connect with whoever in the White House needs to see this information, and share this information with them.”

Despite Biden rescinding Trump’s order, the effort was a chilling and unconstitutional abuse of power. That is why EFF was part of a legal team, along with Cooley LLP and Protect Democracy, representing voting rights and civil society plaintiffs that challenged the order in Rock The Vote v. Biden. It is also why EFF and CDT continue to litigate our FOIA suit against OMB and the Department of Justice and push the agencies to disclose more records that will shed light on what happened.

Related Cases: EFF v. OMB (Trump 230 Executive Order FOIA)
Aaron Mackey

Why Is PayPal Denying Service to Palestinians?

1 week 6 days ago

For many years, Palestinian rights defenders have championed the cause of Palestinians in the occupied territories, who are denied access to PayPal while Israeli settlers enjoy full access to PayPal products. A recent campaign, led by Palestinian digital rights group 7amleh, calls on PayPal to adhere to its own code of business conduct and ethics by halting its discrimination against residents and citizens of Palestine. 7amleh has also published a detailed report on PayPal’s actions in Palestine.

This is not the first time PayPal has denied service to a vulnerable group; the company routinely cuts off payments to those engaged in sex work or the sale of sexually explicit content, and last year, PayPal division Venmo was sued for blocking payments associated with Islam or Arab nationalities or ethnicities.

Just four months ago, EFF and 21 other rights groups wrote to PayPal, taking the company to task for censoring legal, legitimate transactions, and calling on both PayPal and Venmo to provide more transparency and accountability on account freezes and closures. Our coalition's demands included a call for regular transparency reports, meaningful notice to users, and a timely and meaningful appeals process. These recommendations align with the Santa Clara Principles on Transparency and Accountability in Content Moderation, developed by free expression advocates and scholars to help companies protect human rights when moderating user-generated content and accounts.

It is unclear why PayPal chose to deny service to Palestinians, but the company is not alone. Many American companies have taken an overly broad interpretation of anti-terrorism statutes and sanctions, denying service to entire groups or geographic areas rather than narrowly targeting those parties whom they are legally obligated to block. This practice is deeply troubling, causing serious harm to those who rely on digital services for their basic needs.

PayPal is among the most global of payment processors, and for many it is a lifesaver, allowing people to sidestep local banks' extortionate overseas transfer fees and outright prohibitions. PayPal is how many around the world purchase goods and services from abroad, pay freelancers, or send money to family. By denying access to Palestinians, PayPal makes it hard or even impossible to engage in the normal commerce of everyday life.

We call on PayPal to explain its decision to deny services to Palestinians. And we renew our call—and that of our co-signers—for PayPal to review its practices, implement the Santa Clara Principles, and permit lawful transactions on its platform, halting its discrimination against marginalized groups.



Jillian C. York

EFF to Tenth Circuit: First Amendment Protects Public School Students’ Off-Campus Social Media Speech

1 week 6 days ago

EFF filed an amicus brief in the U.S. Court of Appeals for the Tenth Circuit in support of public school students’ right to speak while off school grounds or after school hours, including on social media. We argued that Supreme Court precedent makes clear that the First Amendment rarely allows schools to punish students for their off-campus social media speech—including offensive speech.

In this case, C1.G. v. Siegfried, a student and some friends visited a thrift shop on a Friday night. The student took a picture of his friends wearing wigs and hats, including one hat that looked like a foreign military hat from World War II. Intending to be funny, the student posted a picture of his friends with an offensive caption related to violence against Jews to Snapchat (and deleted it a few hours later). The school suspended and eventually expelled the student.

EFF’s brief argued in favor of the expelled student, focusing on the Supreme Court’s strong protection for student speech rights in its decision from this summer in Mahanoy v. B.L. There, the Court explained that three “features” of students’ off-campus speech diminish a school’s authority to regulate student expression. Most powerfully, “from the student speaker’s perspective, regulations of off-campus speech, when coupled with regulations of on-campus speech, include all the speech a student utters during the full 24-hour day.” Mahanoy makes clear that students’ longstanding right to speak on campus except in narrow circumstances, as recognized by the Supreme Court in its 1969 decision in Tinker v. Des Moines, is even stronger off campus—and that includes, as the Mahanoy Court said, “unpopular expression.”

Our brief also urged the appellate court to reject a special rule for social media. The school argued, and the district court agreed, that the uniquely shareable and accessible nature of speech on the internet—that it can easily make its way onto campus—justifies greater school authority over students’ off-campus social media speech. Rejecting this argument is particularly important given that social media is a central means for young people to express themselves, connect with others, and engage in advocacy on issues they care about. It also heeds the Supreme Court’s concern about regulations that cover a student’s speech for the “full 24-hour day.”

As of 2018, 95 percent of U.S. teenagers reported that they have access to a smartphone, and 45 percent said that they use the internet “almost constantly.” Students and young people use social media to rally support for political candidates, advocate for racial justice, and organize around issues like gun control, climate change, and more recently COVID-19. For example, when University of Alabama student Zoie Terry became one of the first students in the U.S. to be quarantined, her posts about the experience on TikTok led to important changes in university policies, including medical monitoring of quarantined students.

Students must have an outlet for their expression, free from the censorial eye of public school officials. We hope the Tenth Circuit applies Mahanoy appropriately and overturns the student’s expulsion in this case.

Mukund Rathi
EFF's Deeplinks Blog: Noteworthy news from around the internet