EFF and Other Civil Society Organizations Issue Report on Danger to Digital Rights in Ola Bini Trial

1 month 2 weeks ago

In preparation for what may be the final days of the trial of Ola Bini, an open source and free software developer arrested shortly after Julian Assange's ejection from Ecuador's London Embassy, civil society organizations observing the case have issued a report citing due process violations, technical weaknesses, political pressures, and the risks this criminal prosecution poses to the protection of digital rights. Bini was initially detained three years ago, and earlier stages of his prosecution saw significant delays that were criticized by the Office of the Inter-American Commission on Human Rights (IACHR) Special Rapporteur for Freedom of Expression. An online press conference is scheduled for May 11th, with EFF and other organizations set to speak on the violations in Bini's prosecution and the danger this case represents. The trial hearing is set for May 16-20 and will most likely conclude next week. If convicted, Bini's defense can still appeal the decision.

What’s Happened So Far

The first part of the trial against Ola Bini took place in January. In this first stage of testimony and expert evidence, the court repeatedly called attention to various irregularities and violations of due process by the prosecutor in charge. Human rights groups observing the hearing emphasized the flimsy evidence provided against Bini and serious flaws in how the seizure of his devices took place. Bini's defense stressed that the raid happened without him present, and that seized encrypted devices were examined without following procedural rules and safeguards.

These are not the only problems with the case. Over two years ago, EFF visited Ecuador on a fact-finding mission after Bini's initial arrest and detention. What we found was a case deeply entangled with the politics surrounding its outcome and fraught with due process violations. Our conclusion from the Ecuador mission was that political actors, including the prosecution, had recklessly tied their reputations to a case built on controversial evidence, or no real evidence at all.

Ola Bini is known globally as someone who builds secure tools and contributes to free software projects. Bini’s team at ThoughtWorks contributed to Certbot, the EFF-managed tool that has provided strong encryption for millions of websites around the world, and most recently, Bini co-founded a non-profit organization devoted to creating user-friendly security tools.

What Bini is not known for, however, is conducting the kind of security research that could be mistaken for an “assault on the integrity of computer systems,” the crime for which he was initially investigated, or "unauthorized access to a computer system," the crime of which he is now accused (after prosecutors changed the charges). In 2019, Bini's lawyers counted 65 violations of due process, and journalists told us at the time that no one was able to provide them with concrete descriptions of what he had done. Bini's initial imprisonment ended after a court decision found his detention illegal, but the investigation continued. The judge was later "separated" from the case in a ruling that acknowledged the improper successive pre-trial suspensions and the violation of due process.

Though a judge decided in last year's pre-trial hearing to proceed with the criminal prosecution against Bini, observers noted the lack of solid legal reasoning in the judge's decision.

A New Persecution

A so-called piece of evidence against Bini was a photo of a screenshot, supposedly taken by Bini himself and sent to a colleague, showing the telnet login screen of a router. The image is consistent with someone who connects to an open telnet service, receives a warning not to log on without authorization, and does not proceed—respecting the warning. The portion of a message exchange attributed to Bini and a colleague, leaked along with the photo, shows their concern that the router was insecurely exposed to telnet access from the wider Internet, with no firewall.

Between the trial hearing in January and its resumption in May, Ecuador's Prosecutor's Office revived an investigation against Fabián Hurtado, the technical expert called by Ola Bini's defense to refute the image of the telnet session and who is expected to testify at the trial hearing.

On January 10, 2022, the Prosecutor's Office filed charges for procedural fraud against Hurtado. There was a conspicuous gap between this charge and the last investigative proceeding by prosecutors in the case against Hurtado, when police raided his home almost 20 months before, claiming that he had “incorporated misleading information in his résumé.” The raid was violent and irregular, and Amnesty International considered it an attempt to intimidate Ola Bini's defense. One of the pieces of evidence against Hurtado is the document by which Bini's lawyer, Dr. Carlos Soria, included Hurtado's technical report in Bini's case file.

Hurtado's indictment hearing was held on February 9, 2022. The judge opened a 90-day period of investigation which is about to end. As part of this investigation, the prosecutor's office and the police raided the offices of Ola Bini's non-profit organization in a new episode of due process violations, according to media reports.

Civil Society Report and Recommendations

Today's report, issued by the organizations gathered in the Observation Mission of Bini's case, is critical reading for everyone participating in the case and for others concerned about digital rights around the world. It points out key issues that the judicial authorities in charge of examining the case should take into consideration, and there is still time for the court to recognize and correct the irregularities and technical weaknesses in the case.

In particular, the report notes, the accusations have failed to demonstrate a consistent case against Ola Bini. Irregularities in court procedures and police action have affected both the speed of the proceedings and due process of law in general. In addition, the accusations against Bini reflect little technical knowledge and could lead to the criminalization of people carrying out legitimate activities protected by international human rights standards. This case may lead to further persecution of the so-called "infosec community" in Latin America, which is made up primarily of security activists who find vulnerabilities in computer systems, work that has a positive impact on society in general. The attempt to criminalize Ola Bini already signals a hostile environment for these activists and, consequently, for the safeguarding of our rights in the digital environment.

Moreover, these activists must be guaranteed the right to use the tools necessary for their work—for example, the importance of online anonymity must be respected as a premise for the exercise of several human rights, such as privacy and freedom of expression. This right is protected by international human rights standards, which recognize the use of encryption (including tools such as Tor) as fundamental for the exercise of these rights.

These researchers and activists protect the computer systems on which we all depend, and protect the people who have incorporated electronic devices into their daily lives, such as human rights defenders, journalists and activists, among many other key actors for democratic vitality. Ola Bini, and others who work in the field, must be protected—not persecuted.

Jason Kelley

Thomson Reuters to Review Human Rights Impact of its Data Collection for ICE

1 month 2 weeks ago

EFF, along with many other organizations, has loudly sounded the alarm about data brokers, the myriad ways they can collect data on unsuspecting users, and the numerous dangers of public-private surveillance partnerships. One company that has sometimes flown under the radar, however, is the Canada-based media conglomerate Thomson Reuters. But after coming under increasing criticism for its provision of surveillance technologies to, and contracts with, U.S. Immigration and Customs Enforcement (ICE), the company has announced it will conduct a company-wide human rights assessment of its products and services. This comes on the heels of multiple years of investor activism in which a minority shareholder, the BC General Employees' Union (BCGEU), joined the Latinx rights organization Mijente in urging Thomson Reuters to cut its ties with ICE.

The union issued a blog post about the decision, stating that “Thomson Reuters contracts with ICE have a total value exceeding $100m USD. The contracts are to provide data brokerage services that help the U.S. agency target undocumented immigrants for detention and deportation. The company, via its Consolidated Lead Evaluation and Reporting (CLEAR) software, amassed data from private and public databases on individuals, like social media information, names, emails, phone data, license plate scans, utility bills, financial information, arrest records, insurance information, employment records, and much more.”

In addition, the CLEAR program provided ICE with Automated License Plate Reader (ALPR) data collected by Vigilant Solutions. EFF has long monitored the widespread use of Vigilant Solutions and ALPR data by law enforcement, and we find the use of ALPR data to further human rights abuses particularly troubling.

BCGEU's capital markets advisor Emma Pullman told The Verge: “[Thomson Reuters] has realized that investors are quite concerned about this, and that the public are increasingly very concerned about data brokers. In that kind of perfect storm, the company has had to respond.”

While welcome, an investigation of the impact of providing surveillance technologies to human rights abusers is not itself enough. ICE’s human rights record is both horrific and well-documented. This investigation should not be used to rubber-stamp existing contracts with ICE, no matter how lucrative they may be.

Bill Budington

SafeGraph’s Disingenuous Claims About Location Data Mask a Dangerous Industry

1 month 2 weeks ago

On Tuesday, Motherboard reported that data broker SafeGraph was selling location information “related to visits to clinics that provide abortions including Planned Parenthood facilities.” This included where people came from and where they went afterwards.

In response, SafeGraph agreed to stop selling data about Planned Parenthood visitors. But it also defended its behavior, claiming “SafeGraph has always committed to the highest level of privacy practices ensuring individual privacy is NEVER compromised.” The company, it continued, “only sell[s] data about physical places (not individuals.)”

This framing is misleading. First, SafeGraph for years did sell data about individuals—and then remained closely tied to a business that still did so. Second, the aggregated location data that SafeGraph now sells is based on the same sensitive, individual location traces that are collected and sold without meaningful consent. 

SafeGraph’s History of Privacy Violations

Last year, EFF reported public records showing that SafeGraph had sold 2 years of “disaggregated, device-specific” location data about millions of people to the Illinois government, starting in January 2019.

Older materials about SafeGraph indicate that it used to offer a product called “Movement Panel.” A 2017 blog post from two people at SafeGraph describes Movement Panel as a “database of ultra-accurate GPS-location data that comes from anonymized mobile devices.” It also describes how SafeGraph used “the bidstream”—that is, data siphoned from the millions of apps that solicit ads on the open market through real-time bidding. Use of bidstream data is considered ethically dubious even within marketing circles, in part because it is nearly impossible to get knowing consent when data is shared and sold among hundreds of unseen parties.
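To make the mechanics concrete: every time an app requests an ad on the open market, it broadcasts a bid request that can carry the device's ad identifier and precise coordinates, and any company listening to the auction can log that data whether or not it wins the bid. The snippet below is a hypothetical, heavily simplified bid request, loosely modeled on the OpenRTB format and shown only to illustrate what kind of data rides along with an ad auction; it does not reproduce any particular exchange's actual traffic.

```python
# Hypothetical, simplified ad-auction bid request (loosely modeled on OpenRTB).
# Every bidder that receives this can log the device ID and location,
# whether or not it ever serves an ad.
bid_request = {
    "id": "auction-8c2f-example",
    "app": {"bundle": "com.example.weather"},  # the app requesting an ad (placeholder name)
    "device": {
        "ifa": "38400000-8cf0-11bd-b23e-10b96e40000d",  # resettable advertising identifier
        "geo": {"lat": 37.7750, "lon": -122.4194, "type": 1},  # GPS-derived coordinates
        "ip": "203.0.113.7",
        "os": "Android",
    },
    "imp": [{"id": "1", "banner": {"w": 320, "h": 50}}],  # the ad slot being auctioned
}
```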

It’s entirely possible that SafeGraph itself no longer sells this kind of data. But that’s not the whole story.

In 2019, SafeGraph spun off a company called Veraset, and the two companies remained tight. In 2020, Quartz reported that “[SafeGraph] says it gets mobility data from providers like its spin-off Veraset, which own the relationships with the apps that gather its data (Veraset doesn’t share the names of the apps with SafeGraph).” Founder Auren Hoffman and other SafeGraph employees have also used SafeGraph forums to direct potential customers to Veraset for specific data needs.

Veraset sells raw, disaggregated, per-device location data. Last year, EFF received records showing how Veraset gave a free trial of such data to officials in Washington, D.C., as well as other unnamed agencies. Veraset offers a product called “Movement”. As the company explains it: “Our core population human movement dataset delivers the most granular and frequent GPS signals available in a third-party dataset. Unlike other data providers who rely on one SDK, we source from thousands of apps and SDKs to avoid a biased sample.” (“SDK” means a “software development kit” embedded in a mobile app, which can be used to gather location data.)

In sum, Veraset is in the business of selling precise, ping-level location data from the smartphones of millions of people. SafeGraph itself was in this business until it spun those services off to Veraset. And after the spin-off, SafeGraph continued to acquire data from Veraset and steer business there. But a corporate restructuring does not make anyone safer. Highly invasive data about millions of people is still up for sale, putting vulnerable people at serious risk.

The “Places Not People” Fallacy

With that context in mind, let's consider SafeGraph's claim that it "only sells data about physical places (not individuals)." However the company frames it, the data is about people. SafeGraph's data comes from mobile devices carried by human beings, and it represents large portions of their daily movements, habits, and routines. Marketers, transportation departments, law enforcement, and others are only interested in location data because it reveals things about the people who visit those locations.

When location data is disaggregated and device-specific (as in SafeGraph's contract with Illinois), it is effectively impossible to "de-identify." Information about where a person has been is itself usually enough to re-identify them. For example, someone who travels frequently between a given office building and a single-family home is probably unique in those habits, and therefore identifiable once the location data is combined with other readily available information. One widely cited study from 2013 even found that researchers could uniquely characterize 50% of people using only two randomly chosen time and location data points.
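To see why so few points suffice, here is a minimal sketch in Python. It generates purely synthetic, uniformly random traces, so the printed percentage is illustrative only (real traces are less random but far more structured around homes and workplaces); the point is simply the mechanics of checking how many devices are unique on a pair of coarse time-and-place points.

```python
# Minimal sketch: how many devices are unique given two coarse
# (hour-bucket, location-cell) points?  Synthetic data, for illustration only.
import random
from collections import Counter

random.seed(0)

NUM_DEVICES = 10_000
CELLS = 500          # coarse location cells (e.g., cell-tower areas)
HOURS = 24 * 7       # one week of hourly buckets

def random_trace(num_points=50):
    """A device's trace: a set of (hour_bucket, cell_id) points."""
    return {(random.randrange(HOURS), random.randrange(CELLS))
            for _ in range(num_points)}

traces = {device: random_trace() for device in range(NUM_DEVICES)}

# Pick two known points per device (think: home at night, office at noon)
# and count how many devices share that exact pair.
pairs = {}
for device, trace in traces.items():
    pairs[device] = tuple(sorted(random.sample(sorted(trace), 2)))

pair_counts = Counter(pairs.values())
unique = sum(1 for pair in pairs.values() if pair_counts[pair] == 1)
print(f"{unique / NUM_DEVICES:.0%} of devices are pinned down by just two points")
```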

A national security contractor that peddles the same kind of data relies on its specificity. As one spokesperson said during a live demonstration, “If I’m a foreign intel officer, I don’t have access to things like the agency or the fort, I can find where those people live, I can find where they travel, I can see when they leave the country.” 

Aggregation of location data can sometimes preserve individual privacy, depending on appropriate aggregation parameters and choices. Factors include the number of people and phone pings in the data set, and the granularity of the location described (such as square miles versus square feet). But no privacy-preserving aggregation protocols can justify the initial collection of location data from people without their voluntary opt-in consent, especially when that location data is then exploited for profit. Sensitive data should only be collected and used with specific, informed consent, and we must reserve the right to withdraw that consent at any time. Data brokers like SafeGraph do not meet these standards.
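The sketch below illustrates the two aggregation parameters mentioned above, cell granularity and a minimum-device threshold, in a hypothetical pipeline. It is not any company's actual system, and, as noted, even careful aggregation does nothing to legitimize the underlying nonconsensual collection.

```python
# Hypothetical aggregation sketch: coarsen pings into grid cells and hourly
# buckets, then report only buckets seen by at least K distinct devices.
from collections import defaultdict

K = 20                 # minimum distinct devices per reported bucket
GRID_DEGREES = 0.01    # roughly 1 km cells; coarser cells leak less

def bucket(lat, lon, hour):
    """Map a raw ping to a coarse (lat_cell, lon_cell, hour) bucket."""
    return (round(lat / GRID_DEGREES), round(lon / GRID_DEGREES), hour)

def aggregate(pings):
    devices_per_bucket = defaultdict(set)
    for device_id, lat, lon, hour in pings:
        devices_per_bucket[bucket(lat, lon, hour)].add(device_id)
    # Report only device counts, and only for buckets meeting the threshold.
    return {b: len(devs) for b, devs in devices_per_bucket.items() if len(devs) >= K}

# A single device's pings never surface on their own.
sample = [("device-123", 37.7750, -122.4194, 9)]
print(aggregate(sample))   # {} -- suppressed: fewer than K distinct devices
```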

What Can We Do?

Users who are concerned about tracking by data brokers can take simple steps to reduce their exposure.

Read our new guide to digital safety and privacy tips for people involved in abortion access, as well as our Surveillance Self-Defense playlist for reproductive healthcare providers, seekers, and advocates. You can also check out more information on creating a personal security plan, attending a protest, and location tracking on mobile phones.

To start, disable the advertising ID on your phone, which is the primary key that brokers use to link data to individuals. (Here’s how on Android and iOS.) Disable location permissions for apps you don’t trust, and generally audit the permissions that third-party apps are granted. Use a browser that respects your privacy, like Safari or Firefox, and install a tracker blocker like Privacy Badger for extra protection. 

If you live in California, you can file a "right to know" request with SafeGraph and Veraset to see what information they have about you. You can also exercise your right to opt out of sale and request that the companies delete your personal information. Unfortunately, SafeGraph and Veraset are just two of the hundreds of data brokers that profit from personal information: you can see a list of brokers, and find out how to exercise your rights, at the California attorney general's registry. Nevada residents can also request that the brokers refrain from selling their data in the future.

If you are a sitting member of Congress, you can pass a comprehensive privacy law to stop this invasive business model once and for all.

Bennett Cyphers

The Movement to Ban Government Use of Face Recognition

1 month 3 weeks ago

In the hands of police and other government agencies, face recognition technology presents an inherent threat to our privacy, free expression, information security, and social justice. Our faces are unique identifiers that can’t be left at home, or replaced like a stolen ID or compromised password. The technology facilitates covert mass surveillance of the places we frequent, people we associate with, and, purportedly, our emotional state.

Fortunately, communities across the country are fighting back. In the three years since San Francisco passed its first-of-its-kind ban on government use of facial recognition, at least 16 more municipalities, from Oakland to Boston, have followed its lead. These local bans are necessary to protect residents from harms that are inseparable from municipal use of this dangerous technology.

The most effective of the existing bans on government face surveillance have crucial elements in common. They broadly define the technology, provide effective mechanisms for any community member to take legal enforcement action should the ordinance be violated, and limit the use of any information acquired in an inadvertent breach of the prohibition.

There are, however, important nuances in how each ordinance accomplishes these goals. Here we will identify the best features of 17 local bans on government use of face recognition. We hope this will help show authors of the next round how best to protect their communities.

The interactive map below shows the 17 communities that have adopted these bans.

[Embedded Google map of the 17 communities with bans on government use of face recognition. Privacy info: this embed will serve content from google.com.]

Here is a list of these 17 communities:

 

Definition of “face recognition”

Any tech-related legislation must take particular care in defining which tools and applications are, and are not, intended to be covered. Complicating that challenge is the need to define the relevant technology broadly enough to ensure that emerging capabilities are suitably captured, while not inadvertently sweeping in technologies and applications that should not fall within the bill's scope.

Many forms of government use of face recognition technology may present significant threats to essential civil liberties, and they may also exacerbate bias. Today, the most widely deployed class of face recognition is often called “face matching.” It can be used for “face identification,” that is, an attempt to link photographs of unknown people to their real identities. For example, police might take a faceprint from a new image (e.g., one taken by a surveillance camera) and compare it against a database of known faceprints (e.g., a government database of ID photos). It can also be used for “face verification,” for example, to determine whether a person may have access to a location or device. Other forms of face matching include “face clustering,” or automatically assembling all the images of one person, and “face tracking,” or automatically following a person's movements through physical space. All of these threaten digital rights.
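For readers who want a picture of what face matching looks like in code, here is a deliberately simplified sketch. The embed_face() function is a hypothetical stand-in for a trained neural-network embedding model, and the similarity threshold is illustrative; identification is essentially a nearest-neighbor search over a database of faceprints, while verification is a one-to-one comparison.

```python
# Simplified sketch of face matching: compare a probe faceprint against
# known faceprints.  embed_face() is hypothetical; real systems derive
# these vectors from trained neural networks.
import numpy as np

def embed_face(image) -> np.ndarray:
    """Hypothetical stand-in: turn a photo into a fixed-length faceprint vector."""
    raise NotImplementedError("a real system uses a trained embedding model here")

def similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two faceprint vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def identify(probe, known_faceprints, threshold=0.8):
    """'Face identification': best match in a database, if any clears the threshold."""
    best_name, best_score = None, threshold
    for name, reference in known_faceprints.items():
        score = similarity(probe, reference)
        if score > best_score:
            best_name, best_score = name, score
    return best_name  # None means "no match above threshold"

def verify(probe, enrolled, threshold=0.8):
    """'Face verification': one-to-one comparison against a single enrolled print."""
    return similarity(probe, enrolled) > threshold
```

The threshold is exactly where errors enter: set it low and innocent people are flagged as matches; set it high and the system quietly misses matches. Those error rates, and their demographic disparities, are central to the harms described here.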

Another application of face recognition is “face analysis,” also known as “face inference,” which proponents claim can identify demographic traits, emotional state, and more based on facial features. This invites additional bias, and suggests a return to the age of phrenology.

Bans on government use of face recognition must be drawn broadly enough to address all of these threats. Fortunately, many of the existing bans follow Boston’s example in defining face surveillance and face surveillance systems as:

“Face surveillance” shall mean an automated or semi-automated process that assists in identifying or verifying an individual, or in capturing information about an individual, based on the physical characteristics of an individual's face.

“Face surveillance system” shall mean any computer software or application that performs face surveillance.

Critically, these definitions are not limited just to face identification and face verification, but extend also to other technologies that use face characteristics to capture information about people.

Oakland, California offers another strong example:

“Face Recognition Technology” means an automated or semi-automated process that: (A) assists in identifying or verifying an individual based on an individual's face; or (B) identifies or logs characteristics of an individual's face, head, or body to infer emotion, associations, expressions, or the location of an individual.

Notably, it extends beyond face characteristics to also cover head and body characteristics. It thus captures many current uses and future-proofs the ordinance for some of the most concerning emerging types of biometric data collection.

Importantly, each definition effectively captures the intended technology and applications, while not inadvertently capturing less-concerning practices such as ordinary film, video, and still photography.

Don’t use it, don’t outsource it

While it is critical that cities ban their own agencies from acquiring and using face recognition technology, this alone is not enough to protect residents from harm. It is also necessary for cities to ban their agencies from acquiring or using information derived from face recognition technology. Otherwise, city employees banned from using the technology could just ask others to use the technology for them.

While police departments in large cities like New York and Detroit may have in-house face recognition systems and teams of operators, many more local police agencies around the country turn to state agencies, fusion centers, and the FBI for assistance with their face recognition inquiries. Thus, legislation that addresses the technology while not addressing the information derived from the technology may have little impact.

Lawmakers in several cities including Berkeley have taken the important additional step of making it unlawful to access or use information obtained from Face Recognition Technology, regardless of the source of that information:

it shall be a violation of this ordinance for the City Manager or any person acting on the City Manager’s behalf to obtain, retain, request, access, or use: i) any Face Recognition Technology; or ii) any information obtained from Face Recognition Technology...

Berkeley's ordinance further provides that even when city employees inadvertently gain access to information derived from face recognition technology, the data generally must be promptly destroyed and cannot be used. Also, any inadvertent receipt or use of this information must be logged and included in the city's annual technology report, including what measures were taken to prevent further transmission or use. This vital transparency measure ensures that residents and legislators are made aware of these errors and can better identify any patterns suggesting intentional circumvention of the law's intent.

Exemptions

Exceptions can swallow any rule. Authors and supporters of bans on government use of face recognition must tread carefully when carving out allowable uses.

First, some ordinances allow face detection technologies that identify and blur faces in government records, to prepare them for disclosure under Freedom of Information Acts (FOIAs). This can help ensure, for example, transparent public access to government-held videos of police use of force, while protecting the privacy of the civilians depicted. Face detection technology does not require the creation of faceprints that distinguish one person from another, so it raises fewer privacy concerns. Unfortunately, there can be racial disparities in accuracy.

King County’s ordinance provides two necessary safeguards for government use of face detection technology. It can only be used “for the purpose of redacting a recording for release …, to protect the privacy of a subject depicted in the recording.” Also, it “can not generate or result in the retention of any facial recognition information.”
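Before turning to the other exemptions, the distinction between face detection and face recognition can be illustrated with a short sketch. This hypothetical example uses OpenCV's stock Haar-cascade detector (from the opencv-python package); the file names and blur parameters are placeholders. It locates faces and blurs them for redaction without ever computing a faceprint or comparing anyone against a database.

```python
# Minimal face-detection-and-blur sketch for redaction (no faceprints involved).
# Uses OpenCV's bundled Haar cascade; paths and parameters are illustrative only.
import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def redact_faces(frame):
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    for (x, y, w, h) in faces:
        region = frame[y:y + h, x:x + w]
        # Blur heavily enough that the face cannot be recovered.
        frame[y:y + h, x:x + w] = cv2.GaussianBlur(region, (51, 51), 30)
    return frame

if __name__ == "__main__":
    image = cv2.imread("bodycam_frame.png")          # example input path
    cv2.imwrite("bodycam_frame_redacted.png", redact_faces(image))
```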

Second, some ordinances allow local government to provide its employees with phones and similar personal devices, for use on the job, that unlock with the employee’s faceprint. Some employees use their devices to collect personal information about members of the public, and that information should be securely stored. While passwords provide stronger protection, some employees might fail to lock their devices at all, without the convenience of face locks.

Third, some ordinances allow local government to use face locks to control access to restricted government buildings. Portland, Maine’s ordinance has two important safeguards. As to people authorized for entry, no data can be processed without their opt-in consent. As to other people, no data can be processed at all.

Fourth, a few ordinances allow police, when investigating a specific crime, to acquire and use information that another entity obtained through face recognition. EFF opposes these exemptions, which invite gamesmanship. At a minimum, police prohibited from using this tech themselves must also be prohibited from asking another agency to use it on their behalf. Boston has this rule. But unsolicited information is also a problem: San Francisco police broadly circulated a bulletin to other agencies, including the photo of an unknown suspect; one of these agencies responded by running face recognition on that photo; and San Francisco police then used the resulting information. New Orleans' ordinance goes a step further, prohibiting use of information generated by this tech "with the knowledge of" a city official. Fortunately, 12 of 17 jurisdictions do not have this exemption at all.

Fifth, a few jurisdictions exempt compliance with the National Child Search Assistance Act. This is unnecessary: that Act simply requires agencies to report information they already have, and does not require any acquisition or use of technology or information. Fortunately, 13 of 17 jurisdictions eschew this exemption.

Enforcement

It is not enough to ban government use of face recognition. It is also necessary to enforce this ban. The best way is to empower community members to file their own enforcement lawsuits. These are called private rights of action.

The best ones broadly define who can sue. In Oakland, for example, “Any violation of this Article … constitutes an injury and any person may institute proceedings …” It is a mistake to limit enforcement to people who can show injury from being subjected to face recognition: it can be exceedingly difficult to identify such people, even after a brazen violation of the ordinance. Further, government use of face recognition harms the entire community, including through the chilling of protest in public spaces.

Private enforcement requires a full arsenal of remedies. A judge must have the power to order a city to comply with the ordinance. Also, there should be damages for a person who was subjected to face recognition. Oakland provides this. A prevailing plaintiff should be paid their reasonable attorney fees. This ensures access to the courts for everyone, and not just wealthy people who can afford to hire a lawyer. San Francisco properly allows full recovery of all reasonable fees.

Other enforcement tools are also important. First, evidence collected in violation of the ordinance should be excluded from court proceedings, as in Minneapolis. Second, employees who blow the whistle on rule-breaking should be protected, as in Berkeley. Third, employees who break the rules should be subject to workplace discipline, as in Brookline.

Other bans

When legislators and advocates write a ban on government use of face recognition, they should consider whether to also ban government use of other kinds of surveillance technologies. Many are so dangerous and invasive that government should not use them at all.

For example, EFF opposes government use of predictive policing. We are pleased that four cities have ordinances forbidding municipal use: New Orleans, Oakland, Pittsburgh, and Santa Cruz. Likewise, EFF supported Oakland’s ban on municipal use of voiceprints.

Nationwide ban

City and county-level lawmakers are not alone in understanding that government use of face surveillance technology chills free speech, threatens residents' privacy, and amplifies historical bias. Federal lawmakers, including Senators Edward Markey, Jeff Merkley, Bernie Sanders, Elizabeth Warren, and Ron Wyden, alongside U.S. Representatives Pramila Jayapal, Ayanna Pressley, and Rashida Tlaib, have introduced the Facial Recognition and Biometric Technology Moratorium Act (S.2052/H.R.3907). If passed, it would ban federal agencies like Immigration and Customs Enforcement, the Drug Enforcement Administration, the Federal Bureau of Investigation, and Customs and Border Protection from using face recognition to surveil U.S. residents and travelers. The act would also withhold certain federal funding from local and state governments that use face recognition.

Take Action

If you don't live in one of the 17 communities that have already adopted a local ban on government use of face recognition, there's no place like home to begin making a change. In fact, there may already be groups in your community setting the wheels in motion. Our About Face campaign helps local organizers educate their representatives and communities, and helps every resident take that first step in calling for change. If you have an Electronic Frontier Alliance group in your area, it can also be a great resource for finding like-minded neighbors and activists to amplify your efforts. If your city has already protected you and your neighbors (and even if it has not yet), you can still stand up for friends and loved ones by letting your congressional representatives know it's time to ban federal use of face recognition, too.

Nathan Sheard

Digital Security and Privacy Tips for Those Involved in Abortion Access

1 month 3 weeks ago

Legislation deputizing people to find, sue, and collect damages from anyone who tries to help people seeking abortion care creates serious digital privacy and security risks for those involved in abortion access. Patients, their family members and friends, doctors, nurses, clinic staff, reproductive rights activists, abortion rights counselors and website operators, insurance providers, and even drivers who help take patients to clinics may face grave risks to their privacy and safety. Other legislation that does not depend on deputizing “bounty hunters,” but rather criminalizes abortion, presents even more significant risks.

Those targeted by anti-abortion laws can, if they choose, take steps to better protect their privacy and security. Though there is no one-size-fits-all digital security solution, some likely risks are clear. One set of concerns involves law enforcement and state actors, who may have expensive and sophisticated surveillance technology at their disposal, as well as warrants and subpoenas. Because of this, using non-technical approaches in combination with technical ones may be more effective at protecting yourself. Private actors in states with "bounty laws" may also try to enlist a court's subpoena power (to seek information associated with your IP address, for example, or other data that might be collected by the services you use), but it may still be easier to protect yourself from this "private surveillance" using technical approaches. This guide will cover some of each.

Developing risk awareness and a routine of keeping your data private and secure takes practice. Whether the concern is over digital surveillance, like tracking what websites you’ve visited, or attempts to obtain personal communications using the courts, it’s good to begin by thinking at a high level about ways you can improve your overall security and keep your online activities private. Then, as you come to understand the potential scope of risks you may face, you can narrow in on the tools and techniques that are the best fit for your concerns. Here are some high-level tips to help you get started. We recommend pairing them with some specific guides we’ve highlighted here. To be clear, it is virtually impossible to devise a perfect security strategy—but good practices can help.

1: Compartmentalization

In essence, this is doing your best to keep more sensitive activities separate from your day-to-day ones. Compartmentalizing your digital footprint can include developing the habit of never reusing passwords, having separate browsers for different purposes, and backing up sensitive data onto external drives.

Recommendations:

  • Use different browsers for different use cases. More private browsers like DuckDuckGo, Brave, and Firefox are better for more sensitive activities. Keeping separate browsers can protect against accidental data spillover from one aspect of your life into another.
  • Use a secondary email address and/or phone number to register sensitive accounts or give to contacts with whom you don’t want to associate too closely. Google Voice is a free service that provides a secondary phone number. Protonmail and Tutanota are free email services that offer many privacy protections that more common providers like Gmail do not, such as end-to-end encryption when emailing others on Protonmail and Tutanota, and fewer embedded tracking mechanisms on the service itself.
  • Use a VPN when you need to dissociate your internet connection from what you’re doing online. Be wary of VPN products that sell themselves as cure-all solutions.
  • If you're going to/from a location that's more likely to have increased surveillance, or if you're particularly worried about who might know you're there, turning off your devices or their location services can help keep your location private.

2: Community Agreements

It’s likely that others in your community share your digital privacy concerns. Deciding for yourself what information is safer to share with your community, then coming together to decide what kind of information cannot be shared outside the group, is a great nontechnical way to address many information security problems. Think of it in three levels: what information should you share with nobody? What information is OK to share with a smaller, more trusted group? And what information is fine to share publicly?

Recommendations:

  • Come up with special phrases to mask sensitive communications.
  • Push a culture of consent when it comes to sharing data about one another, be it pictures, personal information, and so on. Asking for permission first is a good way to establish trust and communication with each other.
  • Agree to communicate with each other on more secure platforms like Signal, or offline.

3: Safe Browsing

There are many ways that data on your browser can undermine your privacy and security, or be weaponized against you. Limiting unwanted tracking and reducing the likelihood that data from different aspects of your life spills into one another is a great way to layer on more protection.

Recommendations:

  • Install privacy-preserving browser extensions on any browsers you use. Privacy Badger, uBlock Origin, and DuckDuckGo are great options.
  • Use a privacy-focused search engine, like DuckDuckGo.
  • Carefully look at the privacy settings on each app and account you use. Turn off location services on phone apps that don’t need them. Raise the bar on privacy settings for most, if not all, your online accounts.
  • Disable the ad identifier on mobile devices. Ad IDs are specifically designed to facilitate third-party tracking, and disabling them makes it harder to profile you. Instructions for Android devices and iOS devices are here.
  • Choose a browser that’s more private by design. DuckDuckGo on mobile and Firefox (with privacy settings turned up) on the desktop are both good options.

4:  Security Checklists

Make a to-do list of tools, techniques, and practices to use when you are doing anything that requires a bit more care when it comes to digital privacy and security. This is not only good to have so that you don’t forget anything, but is extremely helpful when you find yourself in a more high-stress situation, where trying to remember these things is far from the top of your mind.

Recommendations:

  • Tools: VPNs for hiding your location and circumventing local internet censorship, encrypted messaging apps for avoiding surveillance, and anonymized credit cards for keeping financial transactions separate from your day-to-day persona.
  • Strategies: use special code words with trusted people to hide information in plain sight; check in with someone via encrypted chat when you are about to do something sensitive; turn off location services on your cell phone before going somewhere, and back up and remove sensitive data from your main device.
Daly Barnett

The EU's Copyright Directive Is Still About Filters, But EU’s Top Court Limits Its Use

1 month 3 weeks ago

The Court of Justice of the European Union has issued a long-awaited judgment on the compatibility of the EU Copyright Directive’s filtering requirements with the Charter of Fundamental Rights of the European Union. The ruling recognizes the tension between copyright filters and the right to freedom of expression, but falls short of banning upload filters altogether.

Under Article 17 of the EU’s controversial Copyright Directive, large tech companies must ensure that infringing content is not available on their platforms or they could be held liable for it. Given that legal risk, platforms will inevitably rely on error-prone upload filters that undermine lawful online speech – as Poland pointed out in the legal challenge that led to the judgment.
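As a rough illustration of why such filters are error-prone, consider a deliberately naive matching filter of the kind a platform might build around rightsholder-supplied fingerprints. This is a sketch for explanation only, not how any particular platform's system works; even far more sophisticated perceptual-matching systems share the core limitation shown here, namely that a match says nothing about whether the upload is a lawful quotation, parody, or review.

```python
# Deliberately naive upload-filter sketch: exact fingerprint matching.
# Real systems use perceptual or audio fingerprints, but share the core problem:
# a match cannot tell an infringing copy from a lawful quotation or parody.
import hashlib

claimed_fingerprints = set()   # fingerprints supplied by rightsholders

def fingerprint(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def register_claim(work: bytes) -> None:
    claimed_fingerprints.add(fingerprint(work))

def screen_upload(upload: bytes) -> str:
    if fingerprint(upload) in claimed_fingerprints:
        # The filter only knows the bytes match; it cannot assess context,
        # so lawful uses of a claimed work get blocked alongside infringing ones.
        return "blocked"
    return "allowed"

register_claim(b"...full text of a claimed work...")
print(screen_upload(b"...full text of a claimed work..."))   # blocked, regardless of context
print(screen_upload(b"a short excerpt quoted in a review"))  # allowed, but only because the bytes differ
```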

No Alternatives to Filtering Tools, But Strong User Safeguards

The Court acknowledged that Article 17’s obligation to review content constitutes a de facto requirement to use automatic recognition and filtering tools, and held that such mechanisms would indeed constitute an interference with users’ freedom of expression rights. However, as with last year’s opinion of the Court of Justice’s Advocate General, the judges concluded that the safeguards provided by Article 17 were adequate. Because those safeguards include an obligation to ensure the availability of lawful uploads, an automated system that cannot “distinguish adequately between unlawful content and lawful content” won’t pass muster under EU law.

The Court also highlighted the responsibility of rightsholders to provide platforms with “undoubtedly relevant and necessary information” about an unlawful use of copyrighted material. Platform providers cannot be forced to "generally monitor" user content to check the legality of content; that also means that they cannot be required to conduct an "independent assessment" of the content. If a platform ends up removing lawful content, users can invoke the Directive's "complaint and redress" mechanisms.

To Block or Not to Block

The court's focus on interpreting exceptions and limitations to copyright in a way that preserves fundamental rights is laudable and follows EFF's own suggestions. Under the court's criteria, platforms can argue that they are only required to use upload filters in obvious cases. That, in turn, could require several EU Member States to rework their implementations of the EU Copyright Directive (which ignore the fundamental rights perspective). The ruling means that national governments must pay much stronger attention to user rights.

However, the Court failed to set out parameters to help platforms decide when and when not to block content. Worse, it side-stepped the core issue: whether automated tools can ever be reasonably implemented. It's hard to see how the measures implied by this ruling can actually ensure that speech-intrusive measures are “strictly targeted.” In the ruling, the Court explained the limits of content monitoring by referring to Glawischnig-Piesczek v Facebook, a speech-intrusive case involving the removal of defamatory content. But that reference doesn't tell us much: the Court in Glawischnig-Piesczek ignored the state of the art and real-world operation of “automated search tools and technologies” and underestimated how screening efforts by platforms could easily become excessive, undermining users' fundamental rights.

Christoph Schmon

Digital Rights Updates with EFFector 34.3

1 month 3 weeks ago

Want the latest news on your digital rights? Well, you're in luck! Version 34, issue 3 of our EFFector newsletter is out now. Catch up on the latest EFF news by reading our newsletter or listening to the new audio version below. This issue includes Google's willingness to give U.S. law enforcement information from keyword search warrants and, of course, our thoughts and suggestions for Twitter's new owner.

LISTEN ON YOUTUBE

EFFECTOR 34.3 - How to Prevent Twitter from Going the Way of the Dodo

Make sure you never miss an issue by signing up by email to receive EFFector as soon as it's posted! Since 1990 EFF has published EFFector to help keep readers on the bleeding edge of their digital rights. We know that the intersection of technology, civil liberties, human rights, and the law can be complicated, so EFFector is a great way to stay on top of things. The newsletter is chock full of links to updates, announcements, blog posts, and other stories to help keep readers—and listeners—up to date on the movement to protect online privacy and free expression. 

Thank you to the supporters around the world who make our work possible! If you're not a member yet, join EFF today to help us fight for a brighter digital future.

Christian Romero

Podcast Episode: Teaching AI to Its Targets

1 month 3 weeks ago

Too many young people – particularly young people of color – lack enough familiarity or experience with emerging technologies to recognize how artificial intelligence can impact their lives, in either a harmful or an empowering way. Educator Ora Tanner saw this and rededicated her career toward promoting tech literacy and changing how we understand data sharing and surveillance, as well as teaching how AI can be both a dangerous tool and a powerful one for innovation and activism.

By now her curricula have touched more than 30,000 students, many of them in her home state of Florida. Tanner also went to bat against the Florida Schools Safety Portal, a project to amass enormous amounts of data about students in an effort to predict and avert school shootings – and a proposal rife with potential biases and abuses.

Tanner speaks with EFF's Cindy Cohn and Jason Kelley on teaching young people about the algorithms that surround them, and how they can make themselves heard to build a fairer, brighter tech future.

[Embedded audio player for this episode. Privacy info: this embed will serve content from simplecast.com.]

You can also listen to this episode on the Internet Archive.

In this episode you’ll learn about:

  • Convincing policymakers that AI and other potentially invasive tech isn’t always the answer to solving public safety problems.
  • Bringing diverse new voices into the dialogue about how AI is designed and used.
  • Creating a culture of searching for truth rather than just accepting whatever information is put on your plate.
  • Empowering disadvantaged communities not only through tech literacy but by teaching informed activism as well.

Ora Tanner is co-founder and former chief learning officer at The AI Education Project, a national non-profit centering equity and accessibility in AI education; she also is an Entrepreneur-in-Residence with Cambiar Education. She has presented at numerous academic conferences, summits, and professional development trainings, and spoken on panels as an EdTech expert to discuss topics related to AI, education, emerging technologies, and designing innovative learning experiences. She earned both a B.S. and M.S. in physics and completed course work toward a Ph.D. in instructional technology at the University of South Florida. 

Music

Music for How to Fix the Internet was created for us by Reed Mathis and Nat Keefe of BeatMower.

This podcast is licensed Creative Commons Attribution 4.0 International, and includes the following music licensed Creative Commons Attribution 3.0 Unported by their creators: 

  • Meet Me at Phountain by gaetanh (c) copyright 2022
    http://ccmixter.org/files/gaetanh/64711
  • Hoedown at the Roundabout by gaetanh (c) copyright 2022
    http://ccmixter.org/files/gaetanh/64711
  • JPEG of a Hotdog by gaetanh (c) copyright 2022
    http://ccmixter.org/files/gaetanh/64711
  • reCreation by airtone (c) copyright 2019
    http://dig.ccmixter.org/files/airtone/59721
Resources

Student and School Surveillance:

Bias in AI and Machine Learning:

Predictive Policing:

 Transcript

Ora: I had the opportunity to give public comment at the Marjory Stoneman Douglas High School Commission. And they're the committee that's responsible for the legislation, that's being passed for school shootings because of what took place in Parkland.

When I went up to the mic, I actually felt somewhat intimidated because the entire room was all police officers and it's all white males. We were there to present our feedback regarding the Florida Schools Safety Portal, which is a project they were starting that was just collecting massive amounts of data.

Basically the portal was to predict the probability of a student being on the path to being a school shooter. And so they thought using AI technology was a way to do this.

It was problematic because 58% of the students in the state are black and Hispanic, and it's the majority of their information that's in those databases. So if you're using those data sources, it's most likely it's going to be a black or Hispanic person that's going to get a high probability score.

Cindy: That's Ora Tanner. She's an educator and an entrepreneur working to change how we understand data sharing and surveillance, and especially its impact on young people.

Jason: Ora's going to talk with us about educating young people on artificial intelligence and how she's tackling the ways that algorithms both harm and empower young people. This is How to Fix the Internet, a podcast of the Electronic Frontier Foundation. I'm Jason Kelley, a digital strategist at EFF sitting in for Danny O'Brien.

Cindy: And I'm Cindy Cohn, EFF's executive director.

Jason: Welcome to How to Fix the Internet.

Cindy: Ora, thank you so much for joining us today. Now, before you started your latest venture, you were a classroom teacher for a while and I think that woke you up to some of the issues that you're focusing on now. Can you talk to us about what happened and how you became committed to teaching students about AI and surveillance?

Ora: I have actually taught at every grade level from preschool through college, but it was while I was at a school here in Tampa where I'm located teaching eighth grade science and they just had all this different technology in all the classrooms, but none of the teachers were really using it. 

There were two things, I noticed the students didn't really have any familiarity or experience with them, which to me was very troubling because I'm like, "Hey, this is eighth grade. You should have some minimal types of experiences with technology." But then I also saw the power when I showed them like, "This is how it can be used. You can use it to create." So that inspired me to go back to school to pursue my PhD in instructional technology.

Cindy: So how did you end up researching specifically AI and then surveillance? 

Ora: During my PhD work I had the opportunity to have a fellowship with the Aspen Tech Policy Hub out in San Francisco. During that time I really was taking just a deep dive into artificial intelligence, then I got into algorithmic bias and then I just realized, "Hey, this bias seems to always be against the same group of people: poor people, black people, women."

And so the first thing that came to my mind was my students. And I'm like, "If my students don't know about this and how it works and what the plans are of the people who are creating this technology, they're just going to get slammed by the future, and they're not going to know how to navigate or understand." And just in my head I had an image like if a student's trying to get a job, but they don't understand the bias and the application tracking systems, they'll wonder, "Hey, why don't I ever get a call back? Or why does it..." It will just seem like some ubiquitous force that they won't really understand. So that's when I got the idea to start creating a curriculum or learning experiences to teach students about emerging technologies, especially AI.

Jason: And one of the things that you've found was there was a surveillance within the school system, right? Something in Florida in particular. Can you talk a little bit about that kind of surveillance that you found?

Ora: So as I was doing my research, I came up on a project that they were working on in the state called the Florida Schools Safety Portal. And I say that in air quotes.

So because Parkland had happened a couple years prior, so everyone's on high alert, but I guess in order to try to prevent that, the solution at the state level was to create this database that just had massive amounts of data like their juvenile justice records or grades in school, if they've been suspended. Just different types of data from all these databases and they're going to put it into one huge one, which was just a lot of privacy concerns there, the sharing of data across all these different organizations and agencies. But they were going to use it to predict whether a child, the probability of them being the next school shooter, which is just mind blowing that they thought they would be able to do that just from a logistical point of view.

And so I was able to kind of bring a lot of attention to it. And I even spoke at one of their commission meetings to just tell them, "Hey, these are the dangers of going down this route. These are the unintended consequences. These are the harms that could happen by trying to pursue this path of trying to stop school shooting."

Cindy: There are problems in kind of two ways. And I'd love for you to unpack them for people, because I think sometimes people think intuitively that this is a good idea, and I think you've done a good job when I've watched you, unpacking I think both sides like A, it doesn't work, and B, it's really dangerous in other ways as well.

Ora: A lot of times with solutions, people just automatically go to technology because there's this misconception that technology fixes everything and we just have throw technology at it. And most times people like policy makers are making these decisions, but they don't have full understanding of how the technology actually works. And so if they did, they would probably make some different decisions.  And so that's what I was trying to do with the Aspen Tech Policy Hub, like educate the policy makers and the lawmakers, people in the Department of Education about it as well.

So with the Florida School Safety Portal, it was problematic because 58% of the students in the state are black and Hispanic, and it's the majority of their information that's in those databases. So if you're using those data sources, it's most likely it's going to be a black or Hispanic person that's going to get a high probability score. So I try to do both sides like, "Hey, if you're going to attempt to do this, this is what it literally would take to build a database. And this is how the predictions would work. This is the amount of data you would have to have. This is the types of data you would need to have in this database in order for it not to be biased and to be fair." That's if you are going to pursue it. And on the other hand, "Hey, this is why this totally will not work, and this is not a good idea."

Jason: So you've got this database and it's got data from all these different sources for young people, right? It's got information about I assume whether they've been involved in a crime or whether they've gotten in trouble at school. And you put all that into a big system and then it spits out what? Like a number or like a sort of a pre-crime status for a student?

Ora: There really wasn't a lot of details shared with the public like just exactly how this was going to work. And I showed up to some meetings. I used the Freedom of Information Act to get some information because it just wasn't freely available. They had their presentation they did, and with the graphics like, "Hey, it's easy. Here are the things. It goes to this one place. We're protecting students." But the actual details were not very clear.

Cindy: We've seen this in the predictive policing context, right? Where essentially the people who've gotten access to the way that the algorithm works and what it's trained up on, all it does is really just replicate the decisions that the police are already making, right? So instead of predicting policing, it's predicting the police, which is, I suspect, a version of what you had as well, where the school officers are flagging the kids that they always flag, which we know has bias built into it. And then this algorithm is looking at what the officers are flagging and predicting that they're going to keep flagging the same kids.

We've seen that kind of circular logic in some of the other things that are trying to predict whether somebody's likely to be a bad actor.

These kinds of predictive systems, when they're trying to predict whether you want to buy shoes or something, have a massive amount of data, and they have ground truth about whether somebody's actually buying shoes or has bought shoes in the past. And one of the things that is always troubling in these kinds of trying-to-predict-who's-a-bad-person systems is that they don't have nearly enough data to begin with, so trying to train up a model to do this kind of prediction just isn't going to work.

I know our audience anyway, there are a lot of people who do understand how these models work, and it's important to bring that critical eye when you're trying to predict future behavior by humans. It goes terribly wrong very quickly.

Ora: Yeah, I definitely agree. Especially with school shooters: even though it's horrific that it happens, the numbers are just so small, so to try to do that... And even the FBI, they've tried to look to see if there's any patterns among the people who have done it, but the conclusion is there's not enough info. And even if you did, it's the thing with AI: you're always acting on historical data, and so that's another big problem.

Cindy: Yeah, absolutely.

Jason: How to Fix the Internet is supported by the Alfred P. Sloan Foundation's program in public understanding of science and technology, enriching people's lives through a keener appreciation of our increasingly technological world and portraying the complex humanity of scientists, engineers, and mathematicians.

Jason: One of the things that we've seen in Florida is a lot of problems around Pasco County in particular, where we've looked into what's tended to be a data-driven policing strategy for young people there that sounds very similar or connected to the Florida School Safety Portal. What we saw there is there's a lot of harassment as a result of being labeled a potential X by these systems. Is there a similar kind of problem?

Ora: That was something they mentioned with the Safety Portal that they would be piloting a version. But it was kind of like the room itself was all like police officers and I think there were a couple of lawyers, so it's literally a room full of white male police officers. And so it appeared that they just had their mindset like, "Hey, this is the course we're going to take. We think this is going to work."

And I know also, at upper levels in our state, there is the desire, I don't know if it still is, but based on some of the reports that I had read, to have Florida be the leader in intelligent policing. And I think that's what's playing out in Pasco. As you said, I think a majority of the students are... I mean the people being harassed are under the age of 18 and they're just knocking on doors and checking on them in the morning, late at night, harassing their families and they haven't done anything. And so it's just like, "Well, hey, you're hanging out with these people and that's going to lead to this." And it's like, "Where are you getting this from? What is prompting you? And what trigger is it that's causing you to come out to a student's house who has not done anything?"

Cindy: Seeing this, you've really thrown yourself into the solution of trying to educate students especially about this. And why did you pick that particular solution and how are you seeing it play out?

Ora: Initially I just thought, one, it takes too long for legislative solutions to anything. It's just very, very slow moving. And I really just wanted to empower the students, especially in the communities that would be impacted the most by algorithms, which would be the black and brown communities. So I wanted to attempt to give them some type of recourse and just a heads up like, "Hey, this is what's happening," and the ability to push back. And I just imagine, say, it's happened a couple of times already where people were misidentified through facial recognition systems and then sent to jail. But if you're aware of how those systems work, how flawed they are, and the inaccuracies, you can push back and be like, "Hey, can I see the data? How did you come to this conclusion? And if they can't, then you need to let me go."

And so I think just education is one of the easiest ways and most empowering ways to get information to students.

Jason: So one thing I think is really interesting about your approach... I'm on the activism team at EFF, and we spend a lot of time killing bad bills, bad projects, bad legislation, bad things that companies are doing, but it's not enough just to kill the bad thing, right? You have to educate people around how to do it better in the future. And you're clearly doing that with this approach. And I'm just wondering if you could talk a little bit about how this has been successful?

Ora: Yeah, I think as far as the policing and surveillance, I know prior to the pandemic I was just on this education campaign and I was just trying to tell as many people as possible. So I spoke to pre-service teachers at University of South Florida, especially those who teach special education students, so they would be aware of it. I was just talking to other teachers and none of them were aware of it, which was problematic. 

So what I'm doing now: I co-founded an organization called the AI Education Project. And so I developed curricula to teach artificial intelligence and its social impacts, specifically to high school students, but now we've branched out into community college and university students.

So the last couple of semesters I've been teaching a course called the Social Impacts of Artificial Intelligence. And we do talk about the Florida Safety Portal and also what's happening in Pasco County. And so for their final project I do challenge the students to look at local problems that have to do with AI and emerging technologies and biased systems, and challenge them: how would they redesign those systems? And so I think it's been really impactful so far because we're educating a lot of students. We started off with 300 in summer 2020, and as of the end of fall semester 2021, we've reached 30,000 students, with the large majority of them being here in the state of Florida.

Cindy: That's fabulous. And what I really appreciate about the approach is that you're really tying it to the local community and empowering the local communities. That's something we've heard throughout this podcast: the things that really work are the things that don't come top down. You're a highly trained scientist, so in some ways you're at the top of this, but what works is really teaching the communities that are affected by a lot of these technologies and giving them the power to stand up for themselves and design systems that'll work for them. I think there are lots of ways in which computers and computer modeling could really help these communities, and instead, what we're seeing are all of these things aimed at them by law enforcement or schools or other things. And so I really love the approach of trying to flip this on its head and giving the power to the affected communities.

Ora: One, I don't want students to be afraid of technology. So that's one of my number one things. I just want them to understand it and be able to make informed decisions, but I also want them to know, "Hey, these are the problems, the issues where it's not getting it right." So we have had some students that were so taken aback or just enamored with the whole thing that it has sparked them to want to go into this area and study it when they go to university or do something about it. So that's the good byproduct of all of this, like you can't solve problems that you don't know exist. And so once you're aware, if you feel deeply and passionately about it, it's like, "Hey, I'm going to do something about this." And I also show students you don't have to be an AI engineer, you don't have to be a programmer or developer to address this issue. And so that's another part of the curriculum.

Jason: Do students as you're telling this story about showing them how the specific assessment tools work in the criminal justice system, for example, are they surprised? Are any of them already aware of these? Or is it completely new to most people? Because I think I would be surprised if anyone really knew about it. The average person probably doesn't, but I'm wondering what their responses are?

Ora: Yes. In 100% of the cases they were just totally unaware of all the different ways AI is being used, and how their data's being collected and sold when they're on these platforms. And the interesting thing that's funny… the course I teach at, well, the course in general is for everyone and the curriculum's for everyone. So we have students from different backgrounds. But at the university it's open to all majors, and I always get the computer science students coming in, and they come in with this hubris like, "Oh, I already know this. I build models. I whatever." And then when we start going into this, they had no clue, no idea whatsoever. But to me it's not surprising, because I think it's only like 12% of computer science and data science majors that even address ethical issues.

So before they were just coding, they were making all the stuff and, "Oh, I can make it do this and that. Now I have to consider who may I be harming? How is this going to be used? How is this going to play out?"

Cindy: I'd like to shift gears because our focus here is how do we fix the internet? And I want to hear your vision of what it looks like if we get this right. What if we flip the script around and we have technologies are actually serving communities, that communities are in control of, and that really work for students. What do you see? What's your vision?

Ora: What I'm trying to do now is just, there need to be some new narratives. So right now the main narratives are owned by the big tech companies and academia. I just think of it as this spectrum: you have the technological utopians, if you want to call them that. And that's more your big tech and your policy makers, which are, "Hey, technology is the best. It can do no wrong. It can be used to solve any problem." And then you have the technological skeptics, which are more like your sociologists and people like that: "Hey, it's going to destroy us all. It's the worst thing ever." And the people in the middle, the contextualists: "Hey, it depends." So I think most of what people see is on that one end, that, "Hey, technology is great," and there's a lot of hype around it, and we don't actually look at the reality of it.

So I think just encouraging some new voices to talk about what's actually happening, what the effects are. And then seeing if we can get those two ends of the spectrum to work together, because we have these huge conferences where all the developers are at, and you have these other huge conferences with just the sociologists and that end of the spectrum. But you never have one where they're together to solve these problems. So I would love to see that happen, or just a more evenly distributed amount of information about the realities of the technology.

Cindy: If we had more evenly distributed information, how would our world change? How would a student in the future go through their experience that's different than what's happening today?

Ora: Yeah. Well, I'm actually seeing that now. I've taught in the Upward Bound Program, and the students I'm teaching right now in my courses are actually thinking. Before, it's like you download an app, "Agree to all." They don't think about any of it. But now my students are like, "OMG, they're going to sell my information to third parties and they're tracking me." Just making informed decisions. Also, knowing we're coming up on midterm elections, they're understanding the information they're seeing is being targeted at them to nudge them. They know what nudging is. So, "Hey, I might have to go outside of my filter bubble to get some other perspectives on this." So I think it would just make people more informed. A culture of just searching for truth instead of just, "Hey, whatever you tell me, I'm going to believe it."

Cindy: Being back in charge rather than being manipulated by the way that the information is presented and by whom and where. Yeah, that would be great. 

Ora: Yes.

Cindy: So what are the values we are protecting here? Already I hear self-determination, control, ready access to information. Are there others?

Ora: Yes. One of my big things, I just call it tech equity, or diversity, however you put it: diversity beyond just race and gender. It's also cognitive diversity. So we need to hear from teachers, we need to hear from sociologists, we need to hear from young people when it comes to these different aspects of technology. And as far as the tech equity piece, I really push for black voices. There is a lot of awesome work being done by black scholars, especially in AI and even in other things like the metaverse, but their work is never highlighted. It's always Stanford, Berkeley, MIT. And there's great work being done at HBCUs, historically black colleges and universities.

And even with the courses I teach for my students, I'm very careful not to only present Western views because a lot of times we just get stuck in the United States and it's like, "Okay. There's these other big land masses where they have people doing awesome work as well." So yeah, just that cognitive diversity coming from just a lot of different areas.

Cindy: Do you have a particular thing that you do with your students that might give us a little more concreteness about what you're doing in your classes that would be fun to share?

Ora: We have them watch this video on how AI is being used in the criminal justice system, like pretrial risk assessment. And we have them basically write a letter to the people who make the algorithm, or on behalf of someone who's been wrongly accused, or to the attorney who's handling the case, explaining why. So we've gotten some really powerful things from that. Another activity we have them do is something I call PITCH AI.

We ask them to envision a world, how would they use AI to solve problems in a career or field that they care about? And so we've gotten some really creative things of what people would do. 

Cindy: Give me an example. I'd love to hear what people are thinking they would do. 

Ora: We had a student who grew up in foster care, her and her sister. And so she described how she would use AI to do better matching between a child, especially when they have a sibling, and the host family. And she described: this is the data that would be collected, this is how the algorithm would work.

I had another student who was interested in political science and they talked about, I think it was called Dictator Tracker, how they would collect all this data on people who are running for office, who do they have relationships with? Who are they corresponding with to predict the probability of them being a world dictator? And so that one was kind of-

Cindy: That might be one. Hopefully, we don't have enough dictators to train a model for that, but I appreciate the effort.

Jason: And I like that you're taking... You're literally turning the targeting of this AI against the powerful in some ways, when it's so often being used against the powerless. So that's really wonderful.

Cindy: So well, that was terrific. Thank you so much for the work that you're doing. I mean, really honestly, I feel like in some ways you're building the next generation of EFF staffers and activists and go, go, go, we need more. And we especially need people from these communities, right? I mean, it's just not right that the people who are generally most impacted by these kinds of systems are the ones who have the least knowledge, the least transparency, and the least control. I mean, it's just exactly backwards. So thank you for the work that you're doing.

Ora: Thank you so much for having me on. This was great and fun.

Cindy: Well, I loved how she took her horror at learning about the massive collection of data about students, and the difficulties of trying to predict who's a school shooter from that data, and turned it into a real passion for helping to empower her community.

Jason: And I really appreciate that she's using it, not just as an educational opportunity, but as an activist opportunity. She's really doing, or having students do, hardcore grassroots activism: learning why these systems are the way they are, how to fix them, and then reaching out to the people who make them to try to make them better. I think that's really a model for what EFF does on the activism team and in general.

Cindy: I do think that the part where the students are not only learning how horrible it is, but are learning how to write the letter to explain it to other people. I mean, that's the piece where the knowledge turns into power.

Jason: I'd love for her to have been a teacher in my school.

Cindy: I'd love her vision of a better future, right? It's the stuff that we're just hearing over and over again, local control, local empowerment, real transparency, and the simple truth, knowledge is power.

Jason: Absolutely. Well, thank you all for joining us. If you've enjoyed this episode, please visit eff.org/podcast where you'll find more. You can learn more about the issues and you can also donate to become an EFF member. Members are the only reason we can do this work. Plus you can get cool stuff like an EFF hat or an EFF hoodie, or even an EFF camera cover for your laptop. Please get in touch if you have thoughts about this podcast by emailing us at podcast@eff.org. We do read every single email that we get.

Music for How to Fix the Internet was created for us by Nat Keefe and Reed Mathis of Beat Mower. This podcast is licensed Creative Commons Attribution 4.0 International, and includes music licensed Creative Commons Attribution 3.0 Unported by their creators. How to Fix the Internet is supported by the Alfred P. Sloan Foundation's program in public understanding of science and technology. I'm Jason Kelley.

Cindy: And I'm Cindy Cohn. 

VOICE:
This podcast is licensed Creative Commons Attribution 4.0 International, and includes the following music licensed Creative Commons Attribution 3.0 Unported by their creators: 

Meet Me at Phountain 

Hoedown at the Roundabout and 

JPEG of a Hotdog all by Gaetan H

and 

Recreation by airtone.

 

Josh Richman

Tracking Exposed: Demanding That the Gods Explain Themselves

1 month 3 weeks ago

Imagine if your boss made up hundreds of petty rules and refused to disclose them, but every week, your pay was  docked based on how many of those rules you broke. When you’re an online creator and your “boss” is a giant social media platform, that’s exactly how your compensation works.

“Algospeak” is a new English dialect that emerged from the desperate attempts of social media users to “please the algorithm”: that is, to avoid words and phrases that cause social media platforms’ algorithms to suppress or block their communication.

Algospeak is practiced by all types of social media users, from individuals addressing their friends to science communicators and activists hoping to reach a broader public. But the most ardent practitioners of algospeak are social media creators, who rely—directly or indirectly—on social media to earn a living.

For these creators, accidentally blundering into an invisible linguistic fence erected by social media companies can mean the difference between paying their rent or not. When you work on a video for days or weeks—or even years—and then “the algorithm” decides not to show it to anyone (not even the people who explicitly follow you or subscribe to your feed), that has real consequences. 

Social media platforms argue that they’re entitled to establish their own house rules and declare some subjects or conduct to be off-limits. They also say that by automating recommendations, they’re helping their users find the best videos and other posts. 

They’re not wrong. In the U.S., for example, the First Amendment protects the right of platforms to moderate the content they host. Besides, every conversational space has its own norms and rules. These rules define a community. Part of free speech is the right of a community to freely decide how they’ll speak to one another. What’s more, social media—like all human systems—has its share of predators and parasites, scammers and trolls and spammers, which is why users want tools to help them filter out the noise so they can get to the good stuff.

But legal issues aside, the argument is a lot less compelling when the tech giants are making it. Their  moderation policies aren’t “community norms”—they’re a single set of policies that attempts to uniformly regulate the speech of billions of people in more than 100 countries, speaking more than 1,000 languages. Not only is this an absurd task, but the big platforms are also pretty bad at it, falling well short of the mark on speech, transparency, due process, and human rights.

Algospeak is the latest in a long line of tactics created by online service users to avoid the wrath of automated moderation tools. In the early days of online chat, AOL users used creative spellings to get around profanity filters, creating an arms race with a lot of collateral damage. For example, Vietnamese AOL users were unable to talk about friends named “Phuc” in the company’s chat-rooms.

But while there have always been creative workarounds to online moderation, Algospeak and the moderation algorithms that spawned it represent a new phase in the conflict over automated moderation: one in which moderation lands as an attack on the very creators who help these platforms thrive.

The Online Creators’ Association (OCA) has called on TikTok to explain its moderation policies. As OCA cofounder Cecelia Gray told the Washington Post’s Taylor Lorenz: “People have to dull down their own language to keep from offending these all-seeing, all-knowing  TikTok gods.”

For TikTok creators, the judgments of the service’s recommendation algorithm are hugely important. TikTok users’ feeds do not necessarily feature new works by creators they follow. That means that you, as a TikTok user, can’t subscribe to a creator and be sure that their new videos will automatically be brought to your attention. Rather, TikTok treats the fact that you’ve explicitly subscribed to a creator’s feed as a mere suggestion, one of many signals incorporated into its ranking system.
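To make "one of many signals" concrete, here is a purely illustrative toy in Python. This is not TikTok's actual ranking system; the signal names and weights are invented for the example. The point it sketches is that an explicit follow can be just one small input to a score, so following a creator guarantees nothing about seeing their work.

    # Toy ranking score. Invented signals and weights, for illustration only:
    # an explicit follow is just one input among many.
    def toy_rank_score(signals):
        weights = {
            "predicted_watch_time": 0.5,
            "engagement_rate": 0.3,
            "follows_creator": 0.1,   # the explicit subscription: a mere nudge
            "topic_match": 0.1,
        }
        return sum(weights[k] * signals.get(k, 0.0) for k in weights)

    # A video from a creator you follow can still lose to one from a stranger:
    followed = {"predicted_watch_time": 0.2, "engagement_rate": 0.3, "follows_creator": 1.0}
    unfollowed = {"predicted_watch_time": 0.8, "engagement_rate": 0.7, "follows_creator": 0.0}
    print(toy_rank_score(followed) < toy_rank_score(unfollowed))  # True

In a toy like this the weights are at least visible; on the real platforms they are not, which is exactly the problem creators face.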

For TikTok creators—and creators on other platforms where there’s no guarantee that your subscribers will actually be shown your videos—understanding “the algorithm” is the difference between getting paid for your work or not.

But these platforms will not explain how their algorithms work: which words or phrases trigger downranking. As Lorenz writes, “TikTok creators have created shared Google docs with lists of hundreds of words they believe the app’s moderation systems deem problematic. Other users keep a running tally of terms they believe have throttled certain videos, trying to reverse engineer the system” (the website Zuck Got Me For chronicles innocuous content that Instagram’s filters blocked without explanation).

The people who create the materials that make platforms like YouTube, Facebook, Twitter, Snap, Instagram, and TikTok valuable have dreamed up lots of ways to turn attention into groceries and rent money, and they have convinced billions of platform users to sign up to get their creations when they’re uploaded. But those subscribers can only pay attention to those creations if the algorithm decides to include them, which means that creators only get to eat and pay the rent if they please the algorithm.

Unfortunately, the platforms refuse to disclose how their recommendation systems work. They say that revealing the criteria by which the system decides when to promote or bury a work would allow spammers and scammers to abuse the system.

Frankly, this is a weird argument. In information security practice, “security through obscurity” is considered a fool’s errand. The gold standard for a security system is one that works even if your adversary understands it. Content moderation is the only major domain where “if I told you how it worked, it would stop working” is considered a reasonable proposition. 

This is especially vexing for the creators who won’t get compensated for their creative work when an algorithmic misfire buries it: for them, “I can’t tell you how the system works or you might cheat” is like your boss saying “I can’t tell you what your job is, or you might trick me into thinking you’re a good employee.” 

That’s where Tracking Exposed comes in: Tracking Exposed is a small collective of European engineers and designers who systematically probe social media algorithms to replace the folk-theories that inform Algospeak with hard data about what the platforms up- and down-rank.

Tracking Exposed asks users to install browser plugins that anonymously analyze the recommendation systems behind Facebook, Amazon, TikTok, YouTube, and Pornhub (because sex work is work). This data is mixed with data gleaned from automated testing of these systems, with the goal of understanding how the ranking system tries to match the inferred tastes of users with the materials that creators make, in order to make this process legible to all users. 
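As a rough sketch of what pooled plugin data makes possible, the snippet below compares how often two observer profiles were shown items mentioning a given term. It is illustrative only: the CSV layout, column names, and functions are hypothetical, not Tracking Exposed's actual code or data schema.

    # Illustrative only: compare what two observer profiles were shown.
    # The CSV format (observer_id, recommended_title) is hypothetical.
    import csv
    from collections import Counter, defaultdict

    def load_recommendations(path):
        """Map each observer profile to a Counter of recommended item titles."""
        per_observer = defaultdict(Counter)
        with open(path, newline="", encoding="utf-8") as f:
            for row in csv.DictReader(f):
                per_observer[row["observer_id"]][row["recommended_title"].lower()] += 1
        return per_observer

    def exposure_rate(counter, term):
        """Fraction of an observer's recommendations whose title mentions a term."""
        total = sum(counter.values())
        if total == 0:
            return 0.0
        return sum(n for title, n in counter.items() if term in title) / total

    def compare_exposure(per_observer, term):
        """How often each observer profile was shown items mentioning the term."""
        return {obs: exposure_rate(counts, term) for obs, counts in per_observer.items()}

    data = load_recommendations("recommendations.csv")  # hypothetical export
    print(compare_exposure(data, "election"))

Even a toy comparison like this shows why collective observation matters: differences in what two otherwise similar profiles are shown are invisible to any single user, but they stand out once the data is gathered side by side.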

But understanding the way that these recommendation systems work is just for starters. The next stage—letting users alter the recommendation system—is where things get really interesting. 

YouChoose is another plug-in from Tracking Exposed: it replaces the YouTube recommendations in your browser with recommendations from many services from across the internet, selected according to criteria that you choose (hence the name).

Tracking Exposed’s suite of tools is a great example of contemporary adversarial interoperability (AKA “Competitive Compatibility” or “comcom”). Giving users and creators the power to understand and reconfigure the recommendation systems that produce their feed—or feed their families—is a profoundly empowering vision.

The benefits of probing and analyzing recommendation systems don't stop with helping creative workers and their audiences. Tracking Exposed's other high-profile work includes a study of how TikTok is promoting pro-war content and demoting anti-war content in Russia, and quantifying the role that political disinformation on Facebook played in the outcome of the 2021 elections in the Netherlands.

The platforms tell us that they need house rules to make their conversational spaces thrive, and that's absolutely true. But then they hide those rules, and punish users who break them. Remember when OCA cofounder Cecelia Gray said that her members tie themselves in knots “to keep from offending these all-seeing, all-knowing TikTok gods”?

They're not gods, even if they act like them. These corporations should make their policies legible to audiences and creators, adopting The Santa Clara Principles.

But creators and audiences shouldn’t have to wait for these corporations that think they’re gods  to descend from the heavens and deign to explain themselves to the poor mortals who use their platforms. Comcom tools like Tracking Exposed let us demand an explanation from the gods, and extract that explanation ourselves  if the gods refuse.

Cory Doctorow

The EU Digital Markets Act Places New Obligations on “Gatekeeper” Platforms

1 month 3 weeks ago

The European Union’s Digital Markets Act (DMA) is a proposal for bringing competition and fairness back to online platform markets. It just cleared a major hurdle on the way to becoming law in the EU as the European Parliament and the Council, representing the member states, reached a political agreement.

The DMA is complex and has many facets, but its overall approach is to place new requirements and restrictions on online “gatekeepers”: the largest tech platforms, which control access to digital markets for other businesses. These requirements are designed to break down the barriers businesses face in competing with the tech giants.

Although the details are very different, this basic approach is the same one used in various bills currently making their way through the US Congress, including the American Innovation and Choice Online Act (S. 2992), the Open App Markets Act (S. 2710), and the ACCESS Act (H.R. 3849).

This post describes the DMA’s overall approach and the requirements it places on gatekeepers. One section of the DMA requires gatekeepers to make their person-to-person messaging systems (like WhatsApp and iMessage) interoperable with competitors’ systems on request. Messaging systems raise a unique set of concerns surrounding how to preserve and strengthen end-to-end encryption. We walk through those issues here.

Obligations for Gatekeepers

The DMA only places obligations on “gatekeepers,” which are companies that create bottlenecks between businesses and consumers and have an entrenched position in digital markets. The DMA’s threshold is very high: companies will only be hit by the rules if they have an annual turnover of €7.5 billion within the EU or a worldwide market valuation of €75 billion. Gatekeepers must also have at least 45 million monthly individual end-users and 100,000 business users. Finally, gatekeepers must control one or more “core platform services” such as “marketplaces and app stores, search engines, social networking, cloud services, advertising services, voice assistants and web browsers.” In practice, this will almost certainly include Meta (Facebook), Apple, Alphabet (Google), Amazon, and possibly a few others.
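As a rough illustration of how the quantitative thresholds described above combine, the check below encodes them as a simple function. This is a sketch of the arithmetic only, using the figures as stated in this post; actual gatekeeper designation also involves qualitative assessment by the European Commission, so treat it as illustrative rather than a legal test.

    # Sketch of the DMA's quantitative thresholds as described above.
    # Real designation also involves qualitative assessment by the Commission.
    def meets_dma_thresholds(eu_turnover_eur_bn, market_value_eur_bn,
                             monthly_end_users, business_users,
                             controls_core_platform_service):
        financial = eu_turnover_eur_bn >= 7.5 or market_value_eur_bn >= 75
        scale = monthly_end_users >= 45_000_000 and business_users >= 100_000
        return financial and scale and controls_core_platform_service

    # Example: a firm with a EUR 80bn valuation, 50M monthly end-users,
    # 120k business users, and an app store clears the quantitative bar.
    print(meets_dma_thresholds(5.0, 80.0, 50_000_000, 120_000, True))  # True

The high bar is the point: only a handful of companies plausibly satisfy all of these conditions at once, which is how the DMA confines its obligations to the largest platforms.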

The DMA restricts gatekeepers in several ways, including:

  • limiting how data from different services can be combined, 
  • banning forced single sign-ons, and 
  • forbidding app stores from conditioning access on the use of the platform’s own payment systems. 

Other parts of the DMA make it easier for users to freely choose their browser or search engine, and force companies to make unsubscribing from their “core platform services” as easy as subscribing was in the first place.

To improve anti-monopoly enforcement, the DMA also requires gatekeepers to inform the European Commission about their mergers and acquisitions.

The Stick (Potential Sanctions)

If a gatekeeper violates the new rules, it risks a fine of up to 10% of its total worldwide turnover (revenues). Even harsher sanctions are foreseen in the event of repeated or systematic infringement, which could ultimately lead to behavioral or structural remedies. These are significant remedies, and the threat of fines at this level should be a strong deterrent.

A Moving Target

There was a lot to like in the initial proposal, presented by the EU Commission in December 2020, but the DMA has been in flux, and the uncertainty continues right up to this moment. For example, at the last minute, lobbyists pushed to have the DMA include a “remuneration right” for the press, which would have obliged search engines and social networks to offer publishers uniform payment tariffs for news content displayed on their platforms. This was headed off by EFF and its allies. The political agreement will still need to be formally approved by the EU lawmakers. Once adopted, the DMA Regulation will apply six months after entry into force (expected in 2023).

Transatlantic Impact

Many of the DMA’s provisions for addressing gatekeeper power on the internet can also be taken up on the other side of the Atlantic. The Big Tech bills currently making their way through Congress, in particular, the Open App Markets Act, the American Innovation and Choice Online Act, and the ACCESS Act, also seek to impose a set of requirements and restrictions on Big Tech that are intended to give space for competition. These bills raise some of the same implementation challenges as the DMA, especially with regard to encrypted messaging apps, but the need to address the gatekeeper power of our largest tech platforms is the same. EFF will continue to work with policymakers and enforcers to address these challenges. An effective competition policy for the internet, one that keeps users’ needs front and center, will help align market forces and innovation to serve users’ security and privacy needs.

Mitch Stoltz

The EU Digital Markets Act’s Interoperability Rule Addresses An Important Need, But Raises Difficult Security Problems for Encrypted Messaging

1 month 3 weeks ago

The European Union’s Digital Markets Act (DMA) allows new messaging services to demand interoperability (the ability to exchange messages) from the internet's largest messaging services (like WhatsApp, Facebook Messenger, and iMessage). Interoperability is an important tool to promote competition and prevent monopolists from shutting down user-empowering innovation. But an interoperability requirement for messaging services that are end-to-end encrypted raises particularly thorny security and privacy concerns, and those concerns need to be addressed before interoperability requirements are enforced against those services. Looking into these concerns will take years—much longer than the text of the DMA currently envisions—but we have some thoughts on where to start.

The DMA’s Interoperability Rule

The DMA is a complex new law aimed at addressing the “gatekeeper” power of Big Tech firms. While some of the final details of the DMA are still in flux, negotiators from the EU Parliament and the Council of the EU have reached a "political agreement." The drafters considered several proposals relating to interoperability, including rules that would cover gatekeepers’ social networking services as well as messaging apps. But the compromise between the EU lawmakers that’s on the way to becoming law only includes an interoperability requirement for messaging apps. Specifically, the giant gatekeepers will be required to make their messaging services interoperable with other messaging apps at the request of competing developers. Negotiators have agreed to assess the feasibility of including an interoperability requirement for social networking as part of a future review of the DMA.

The DMA’s interoperability rule will apply to “number-independent” messaging services that are part of “gatekeeper” platforms, meaning platforms with the power to control other companies’ access to customers. This probably includes messaging apps from Apple, Google, Meta Platforms (e.g., Facebook Messenger, WhatsApp, Instagram Direct Messenger), and Microsoft. Of these, only WhatsApp, Apple’s iMessage, and Android Messages currently offer default end-to-end encryption modes, but these services together have billions of users. These services will be required to make “end-to-end text messaging,” including various kinds of media attachments, interoperable on request by a competing service, within three months of a request. Group texts will need to be interoperable in two years, and voice and video calls in four.

Why Interoperability Matters

Recall that the goal of interoperability is to make it easier for people to leave Big Tech platforms for competing platforms, without hampering their ability to communicate with anyone who chooses to remain within Big Tech’s walled gardens. Interoperability dismantles one of the biggest barriers faced by users who want to leave the tech giants’ platforms: the choice between changing to a platform you prefer or staying behind on a platform where all your friends, communities, and customers are.

That gives new services a chance to compete—by offering better protections for users’ privacy and security, new features, and better terms of service. Having multiple services for users, especially vulnerable users, to choose from may help protect against improper governmental surveillance and censorship. And once users can easily stroll away from Big Tech’s walled gardens, the market will give today’s giants a much stronger push to treat their users right, from continuing to improve users’ security to resisting governments’ demands for surveillance to eschewing surveillance-based business models.

Messaging Is A Tough Place To Start

While interoperability is important, mandating it for encrypted messaging services presents some of the most difficult technical and policy challenges. Most critically, interoperability obligations must not weaken end-to-end encryption in services like WhatsApp and iMessage, or undermine security in ways that otherwise break the promise of end-to-end encryption, such as adding client-side scanning of messages. Keeping that promise is particularly important because strong security and encryption are vital to protecting and preserving human rights. Not only does encryption in messaging enable the rights of expression and association for the users, but it is also critical to protecting human rights defenders who depend upon strong security while opposing or exposing abuses in dangerous environments.

Many security experts agree that requiring interoperability without unacceptable tradeoffs in security or privacy is a very high hurdle, one that might turn out to be insurmountable. Not all of the challenges involved are strictly technical—platforms are right to pay attention to combatting impersonation and maintaining usability so that the broadest possible range of users continues to have ready access to state-of-the-art secure communications.

Demanding that vendors of encrypted messengers figure out how to simultaneously open up their service to interoperators and maintain security is a tall order, even though the demand is limited to very large, well-funded companies like Apple and Meta Platforms (Facebook). As applied to encrypted messaging, interoperability could encompass a range of approaches from simply requiring users to be able to connect to a service with the client of their choice, all the way to a fully federated model akin to email. These approaches would have vastly different effects on security. A technological solution that is simple to express in legislative terms can have unintended consequences, such as creating incentives for companies to compromise on the security of users’ communications. As with recent US proposals for law enforcement access to encrypted data, policymakers need to safeguard users’ access to truly secure communications.

At the same time, dominant companies should not be able to exclude rivals and maintain their dominant position by making bad-faith claims about security requirements, or by employing requirements that unnecessarily exclude smaller rivals. Security should not be used as a smokescreen to protect anticompetitive behavior. This too is something we’ve seen before.

How The EU Could Get There

The details of how the European Commission implements the DMA will make a huge difference in whether there can ever be secure interoperability in encrypted messaging. EFF has three initial recommendations.

First, the implementing rules should strengthen the security-protective exception for encrypted messaging. The agreement reached by the EU lawmakers requires that “[t]he level of security, including end-to-end encryption where applicable, that the gatekeeper provides to its own end-users shall be preserved across the interoperable services.” This is an important requirement, but it does not go far enough. To avoid ambiguities, implementing rules should clarify that the word “preserved” encompasses both ensuring that connecting services must live up to the gatekeeper’s level of security and allowing for continued progress in security and privacy. The Commission should make clear that any service that breaks the promise of end-to-end encryption through any means—including by scanning messages in the client-side app or adding “ghost” participants to chats—will not be able to demand interoperability.

Second, making encrypted messaging interoperable simply cannot happen in the timeframe envisioned by the DMA if it has any hope of resolving the significant technical and policy hurdles. The DMA’s time limits on gatekeepers to provide interoperability—three months after a request in the case of one-to-one encrypted messaging; and within two years for group messaging—are far too short. By comparison, Meta Platforms (Facebook) announced plans to interconnect and encrypt three of its own messaging products in March 2019, and this project is still not complete. Getting interoperability right would require participation by a much larger group of stakeholders as part of a standards-setting and governance process and would therefore likely move at an even statelier pace. Based on the public text of this and other sections of the DMA, we hope that the Commission will have adequate flexibility to delay enforcement of the interoperability requirements—even for very long periods—until security concerns can be fully worked out. For end-to-end encrypted messaging, it’s worth taking the time necessary to ensure we get it right.

Third, gatekeepers and the Commission need more tools to make good on their stated goal of preserving security in interoperable systems. The DMA says that the gatekeeper can take “duly justified” and “strictly necessary and proportionate” measures to ensure the preservation of security. To implement this, both gatekeepers and the Commission should have the ability to request necessary information from third parties who seek to make their messaging apps interoperable, to ensure that those services keep the promise of end-to-end encryption. Interoperability requirements should not be enforced until any questions about users’ security have been resolved.

EFF will carefully analyze the final text, which is now subject to formal approval by the EU lawmakers. Once adopted, EFF will continue to engage with the European Commission on the implementation of the DMA. And of course, we will continue to monitor the actual realities of user experience. For instance, we will watch both for when DMA requirements are being misused to undermine security and for when security requirements are being misused to undermine competition. Interoperability requirements on other services besides messaging, such as social networks and app stores, may be more feasible to implement in the short term.

At bottom, while EFF strongly supported, and continues to support, interoperability in the DMA, we recognize that the move toward it, especially in the area of secure messaging, requires care. Specifically, we must secure a future that delivers both strong security in messaging and robust competition to provide services to users. Or, stated in reverse: we must neither allow interoperability to be an excuse to reduce security nor allow unfounded claims of security needs to become an excuse to insulate a company from market competition.

While we are clear-eyed about the challenge, we are hopeful that we can uphold both of these critical values.

Mitch Stoltz

EFF Statement on the Declaration for the Future of the Internet

1 month 4 weeks ago

The White House announced today that sixty-one countries have signed the Declaration for the Future of the Internet. The high-level vision and principles expressed in the Declaration—to have a single, global network that is truly open, fosters competition, respects privacy and inclusion, and protects human rights and fundamental freedoms of all people—are laudable.

But clearly they are aspirational. Implementing these principles will require many signatory countries to change their current practices, which include censoring the online speech of marginalized communities, failing to build out affordable high-speed internet, using malware and mass surveillance to spy on users, fostering misinformation, secretly collecting personal information, and pressuring big tech platforms to police online speech.

We are pleased that the Declaration lays out important standards for achieving a free, open, and human rights-protecting Internet. Hopefully, the signatories to the Declaration will deliver on the Declaration’s promises, by aligning their practices, policies, and laws with its principles.

Karen Gullo

Canvas and other Online Learning Platforms Aren't Perfect—Just Ask Students

1 month 4 weeks ago

School digital environments are increasingly locked down, increasingly invasive, and increasingly used for disciplinary action. This has never been more troubling than during the pandemic, with schools adopting remote proctoring and surveillance tools at alarming rates and entering students’ homes via school-issued and personal devices. As students have tried to educate their teachers and administrators about the dangers of surveillance and the need for student privacy, they have often fought a losing battle. 

At Dartmouth College in 2021, for example, administrators inaccurately accused students of cheating based on a misinterpretation of data from Canvas, a “Learning Management Software” (LMS) platform that offers online access to coursework for classes. Unfortunately, Canvas, Blackboard, and other LMS platforms like them are often used, incorrectly, as arbiters of truth during examinations. Suspicious of cheating, administrators at Dartmouth’s Geisel School of Medicine conducted a flawed dragnet review of an entire year’s worth of student log data from Canvas. When a student advocate reached out to us about the situation, EFF determined that the logs easily could have been generated by the automated syncing of course material to devices logged into Canvas but not being used during an exam. In many of the students’ cases, the log entries were not even relevant to the tests being taken.
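To see why that kind of dragnet is so error-prone, consider a minimal sketch of the check an administrator might run over exported activity logs. The log format and field names here are hypothetical; the point is the limitation, which Canvas itself has acknowledged: a row produced by a device syncing in the background looks the same as a row produced by a student deliberately opening a file, so "activity during the exam window" proves very little on its own.

    # Hypothetical log rows: (student_id, timestamp, action). The flaw this
    # sketch illustrates: background syncing produces rows indistinguishable
    # from deliberate access, so a check like this over-accuses.
    from datetime import datetime

    def naive_flag(log_rows, exam_start, exam_end):
        """Flag any student with a logged action inside the exam window."""
        flagged = set()
        for student_id, timestamp, action in log_rows:
            if exam_start <= timestamp <= exam_end:
                flagged.add(student_id)  # could be a phone syncing in a backpack
        return flagged

    rows = [
        ("s1", datetime(2021, 3, 5, 9, 15), "file_access"),  # background sync?
        ("s2", datetime(2021, 3, 5, 8, 30), "file_access"),  # before the exam
    ]
    print(naive_flag(rows, datetime(2021, 3, 5, 9, 0), datetime(2021, 3, 5, 11, 0)))
    # {'s1'} -- flagged with no way to tell intent from the log alone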


EFF, the Foundation for Individual Rights in Education (FIRE), students, professors, and even alumni reached out to the school. In our letter, we explained it was simply impossible to know from the logs alone if a student intentionally accessed any of the files, and that even Canvas acknowledged log entries are not reliable records of user activity. After press coverage from the New York Times, which also found that students’ devices could automatically generate Canvas activity data even when no one was using them, Dartmouth withdrew the disciplinary charges and apologized to the students. To help students in similar situations, we’ve written a guide for anyone accused of cheating based on inaccurate data like this. 

Unfortunately, this has not stopped school officials from considering their interpretation of LMS data to be above reproach. Though Canvas and Blackboard have publicly stated that their logs should not be used for disciplinary decisions regarding high-stakes testing, we’ve heard other stories from students who were accused of misconduct—even well before the pandemic—based on inadequate interpretations of data from these platforms.  

One example: an undergraduate student at Brown University who contacted us was penalized in 2018 based on Canvas access logs. The student had attempted to access the Canvas database immediately before an exam, and had then left multiple browser windows open to the Canvas web address on her mobile phone. The phone remained unused in her backpack during the exam, during which time the log records (inaccurately) appeared to indicate that the site was being actively accessed by a user.

After being accused of cheating based on this Canvas log data, the student reached out to Canvas. Multiple Canvas technical support representatives responded by explaining that the log data was not a reliable record of user access. The student shared their statements with Brown.  Notably, the student, who had a 4.0 record, had little motive to cheat on the exam at issue, rendering the cheating accusation—based as it was virtually entirely on the log data—all the more flimsy.

A Brown disciplinary panel nonetheless ruled against the student, and placed a permanent mark on her academic record, grounding its decision on the accuracy of the Canvas log data.

Last year, in the wake of Dartmouth’s apology to its wrongfully accused students, and Canvas’ more public acknowledgment that its logs should not be used as the basis for disciplinary decisions, the former student asked Brown to clear her academic record. Brown, however, refused even to consider voiding its disciplinary decision on the ground that the student had no remaining right to appeal. This is a common thread we’ve seen in these situations: the students at Dartmouth were also not afforded reasonable due process—they were not provided complete data logs for the exams, were given less than 48 hours to respond to the charges (and only two minutes to make their cases in online hearings), and were purportedly told to admit guilt.

In an implicit acknowledgment of its error, Brown now says that it will provide its former student with a letter of support if she applies to graduate school. An implicit admission of injustice, however, is not a sufficient remedy. Like Dartmouth, Brown should withdraw the record of the discipline it wrongfully imposed on this student, as well as any others who may have likewise been found responsible for cheating based on such unreliable log records.  

We call on both Canvas and Blackboard to put clearer disclaimers on their log data and publicly defend any student who has been accused of misusing these platforms based on similar misinterpretations. Schools, too, should remove any marks on any student records that were based on this information, and make a clear policy not to use it in the future. At Dartmouth, it took significant activism by a group of students to clear the record. As the example of the Brown student demonstrates, when individual students are accused, it’s much harder for them to find support—and much easier for schools to brush their mistakes under the rug. Please reach out to EFF if you’ve been inaccurately accused of misconduct based on log data in these platforms—we want to hear from you.

 

Jason Kelley

Amidst Invasion of Ukraine, Platforms Continue to Erase Critical War Crimes Documentation

1 month 4 weeks ago

When atrocities happen—in Mariupol, Gaza, Kabul, or Christchurch—users and social media companies face a difficult question: how do we handle online content that shows those atrocities? Can and should we differentiate between pro-violence content containing atrocities and documentation by journalists or human rights activists? In a conflict, should platforms take sides as to whose violent content is allowed?

The past decade has demonstrated that social media platforms play an important role in the documentation and preservation of war crimes evidence. While social media is not the ideal place for sharing such content, the fact is that for those living in conflict zones, these platforms are often the easiest place to quickly upload such content.

Most platforms have increasingly strict policies on extremism and graphic violence. As such, documentation of human rights violations—as well as counterspeech, news, art, and protest—often gets caught in the net. Platforms are taking down content that may be valuable to the public and that could even be used as evidence in future trials for war crimes. This has been an ongoing issue for years, and it continues amidst Russia’s invasion of Ukraine.

YouTube proudly advertised that it removed over 15,000 videos related to Ukraine in just 10 days in March. YouTube, Facebook, Twitter, and a number of other platforms also use automated scanning for the vast majority of their content removals in these categories. But the speed that automation provides also leads to mistakes. For example, in early April, Facebook temporarily blocked hashtags used to comment on and document killings of civilians in the northern Ukrainian town of Bucha. Meta, Facebook’s owner, said that this happened because they automatically scan for and take down violent content.

We have criticized platforms for their overbroad removal of “violent” or “extremist” content for many years.  These removals end up targeting marginalized users the most. For example, under the guise of stopping terrorism, platforms often selectively remove the content of Kurds and their advocates. Facebook has repeatedly removed content criticizing the Turkish government for its repression of Kurdish people.

Facebook has at various times admitted its mistake or defended itself by linking the removed content to the Kurdistan Workers’ Party (PKK), which the US State Department designates to be a terrorist organization. Whether this justification is genuine or not (Facebook allegedly left up Turkey’s ruling party’s photos of Hamas, another US-designated terrorist organization), it effectively means the platform aligned with the government against political dissenters.

When a platform removes “violent” content, it may effectively censor journalists documenting conflicts and hamper human rights activists that may need the content as evidence. At the beginning of the Syrian uprising, without access to receptive media channels, activists quickly turned to YouTube and other platforms to organize and document their experiences.

They were met with effective censorship, as YouTube took down and refused to restore hundreds of thousands of videos documenting atrocities like chemical attacks, attacks on hospitals and medical facilities, and destruction of civilian infrastructure. Beyond censorship, this hampers human rights cases that increasingly use content on social media as evidence. A war crimes investigator told Human Rights Watch that “I am constantly being confronted with possible crucial evidence that is not accessible to me anymore.”

During the Ukraine invasion, online platforms added some promising nuances to their content moderation policies that were absent from previous conflicts. For example, Facebook began allowing users in Ukraine and a few other countries to use violent speech against Russian soldiers, such as “death to the Russian invaders,” calling this a form of political expression. Twitter stopped amplifying and recommending government accounts that limit information access and engage in “armed interstate conflict.” This seems to be a nod to concerns about Russian disinformation, but it remains to be seen whether Twitter will apply its new policy to US allies that arguably behave similarly, such as Saudi Arabia. Of course, there may be disagreement with some of this “nuance,” such as Facebook’s reversal of its ban on the Azov Battalion, a Ukrainian militia with neo-Nazi origins.

Ultimately, online platforms have much more nuance to add to their content moderation practices, and just as important, more transparency with users. For example, Facebook did not inform users about its reversal on Azov; rather, the Intercept learned that from internal materials. Users are often in the dark about why their dissenting content is removed or why their government’s propaganda is left up, and this can seriously harm them. Platforms must work with journalists, human rights activists, and their users to establish clear content moderation policies that respect freedom of expression and the right to access information.

Mukund Rathi

EFF to European Court: No Intermediary Liability for Social Media Users

1 month 4 weeks ago

Courts and legislatures around the globe are hotly debating to what degree online intermediaries—the chain of entities that facilitate or support speech on the internet—are liable for the content they help publish. One thing they should not be doing is holding social media users legally responsible for comments posted by others to their social media feeds, EFF and Media Defence told the European Court of Human Rights (ECtHR).

Before the court is the case Sanchez v. France, in which a politician argued that his right to freedom of expression was violated when he was subjected to a criminal fine for not promptly deleting hateful comments posted on the “wall” of his Facebook account by others. The ECtHR’s Chamber, a judicial body that hears most of its cases, found there was no violation of freedom of expression, extending its rules for online intermediaries to social media users. The politician is seeking review of this decision by ECtHR’s Grand Chamber, which only hears its most serious cases.

EFF and Media Defence, in an amicus brief submitted to the Grand Chamber, asked it to revisit the Chamber’s expansive interpretation of how intermediary liability rules should apply to social media users. Imposing liability on them for third-party content will discourage social media users, especially journalists, human rights defenders, civil society actors, and political figures, from using social media platforms, as they are often targeted by governments seeking to suppress speech. Subjecting these users to liability would make them vulnerable to coordinated attacks on their sites and pages meant to trigger liability and removal of speech, we told the court.

Further, ECtHR’s current case law does not support and should not apply to social media users who act as intermediaries, we said. The ECtHR laid out its intermediary liability rules in Delfi A.S. v. Estonia, which concerned the failure of a commercial news media organization to monitor and promptly delete “clearly unlawful” comments online. The ECtHR rules consider whether the third-party commenters can be identified, and whether they have any control over their comments once they submit them.

In stark contrast, Sanchez concerns the liability of an individual internet user engaged in non-commercial activity. The politician was charged with incitement to hatred or violence against a group of people or an individual on account of their religion based on comments others posted on his Facebook wall. The people who posted the comments were convicted of the same criminal offence, and one of them later deleted the allegedly unlawful comments.

What’s more, deciding what online content is “clearly unlawful” is not always straightforward, and courts are generally best placed to assess the lawfulness of online content. While social media users may be held responsible for failing or refusing to comply with a court order compelling them to remove or block information, they should not be required to monitor content on their accounts to avoid liability, nor should they be held liable simply because they were notified of allegedly unlawful speech on their social media feeds by some means other than a court order. Imposing liability on an individual user for failing to remove allegedly unlawful content absent a court order would be disproportionate, we argued.

Finally, the Grand Chamber should decide whether imposing criminal liability for third party content violates the right to freedom of expression, given the peculiar circumstances in this case. Both the applicant and the commenters were convicted of the same offence a decade ago. EFF and Media Defence asked the Grand Chamber to assess the quality of the decades-old laws—one dating back to 1881—under which the politician was convicted, saying criminal laws should be adapted to meet new circumstances, but these changes must be precise and unambiguous to enable someone to foresee what conduct would violate the law.   

Subjecting social media users to criminal responsibility for third-party content will lead to over-censorship and prior restraint. The Grand Chamber should limit online intermediary liability, and not chill social media users’ right to free expression and access to information online.

You can read our amicus brief here:
https://www.eff.org/document/sanchez-v-france-eff-media-defence-ecthr-brief

Meri Baghdasaryan

What Low-Income People Will Lose with a Deadlocked FCC

2 months ago

When the massive, bipartisan infrastructure package passed Congress, the Federal Communications Commission (FCC) was tasked with ensuring equal access to broadband services. That provision, called “Digital Discrimination,” states for the first time in federal law that broadband access cannot be built out along lines of race, income, or other protected classes unless an ISP has an economic or technical justification for the discrimination. In other words, it is now a matter of federal law that digital redlining is banned.

Major ISPs fought hard to remove this provision, mostly because they’ve engaged in discrimination based on income status for many years. That is why EFF and dozens of other organizations called for a ban on digital redlining of broadband access back in 2020. Study after study after study has shown the same result: wealthy Americans have had fiber optic connectivity pushed closer to their homes since as far back as 2005, while low-income people have been forced to stay on legacy copper and coaxial cable connections built as long as 30 years ago.

But despite the evidence, the law, and the command by Congress, equal access for all Americans may still be denied if the Senate does not confirm the Biden Administration’s FCC nominee, Gigi Sohn, to the agency. That is because the four current FCC commissioners have deep ideological differences: two believe broadband should be a regulated service, while the other two support the full deregulation of broadband providers that began under the Pai FCC. Ms. Sohn’s public commitments make clear she would support regulating broadband as an essential service, which aligns with where most Americans are today: 80 percent of people believe broadband is as important to their lives as electricity and water. If you agree that broadband should be treated as just as essential as water and electricity, you should call your two Senators now and ask them to vote yes on Ms. Sohn.

Take Action

Tell the Senate to Fully Staff the FCC

How Carriers Have Engaged in Digital Discrimination

The United States began transitioning its last-mile broadband access toward fiber optics in 2005, and that transition meant very different things depending on the type of last-mile connection a carrier deployed. But the pattern is always the same: large ISPs target areas where they believe their investment will return fast profits on a very tight, Wall Street-driven timeline of three to five years. That means favoring areas willing to pay a lot for broadband over areas where profits would be more modest. This is why the quality of broadband service deployed is so uneven within any given community.

For telephone companies like AT&T and Frontier, upgrading to 21st-century access meant completely replacing their older, AT&T monopoly-era copper wires, which were already hitting their capacity limits with DSL broadband. Fully replacing the wire required a significant investment in each household, which is why fiber to the home from these companies is so limited in the United States. It isn’t that fiber is unprofitable to deploy; rather, these companies focus on deploying to people who will pay very high prices for broadband, yielding very high profits. Everyone else is left on the old copper DSL system that was paid off long ago. To use car rentals as an analogy, it is as if everyone pays the rental fee but only the wealthy get the new cars, while poorer Americans are stuck with the used cars on the lot that were already paid off.

For cable companies like Comcast and Spectrum, upgrading their lines meant replacing only a portion of their coaxial networks with fiber, because the underlying system, built for television distribution, could already carry more data. As far back as 2007, cable companies noted that this incremental approach meant they would only have to spend a fraction of what telephone companies would need to spend in order to upgrade. As a result, discrimination by cable companies looks different, but it shows up when you notice how low-income people can only get low-cost, low-quality, Internet Essentials-type packages while everyone else enjoys access to gigabit connections. That happens because, much like their AT&T and Frontier counterparts, cable companies choose to push fiber closer and closer to the areas with the highest profits while not pushing it toward less profitable low-income areas. If that trend is allowed to continue, cable networks will eventually become fiber to the home for the wealthy and legacy coaxial for low-income neighborhoods.

What this means is that we are seeing the creation of first-class and second-class internet neighborhoods within the same community, where wealthy Americans get faster and cheaper offerings while low-income people are left with slower and increasingly more expensive connections. It will also stifle the effectiveness of low-income support programs such as the Affordable Connectivity Program, because the underlying infrastructure will be unable to deliver 21st-century-ready access. The wires will dictate the future of price and quality.

States With High Poverty Rates Have the Most to Gain from the FCC Enforcing the New Digital Discrimination Law

What policymakers widely underestimate is the extent to which low-income residents are profitable to serve with 21st-century access. They know their low-income residents are being neglected when those residents are handed spotty mobile hotspots or forced to use fast-food parking lot WiFi during the pandemic, but they do not appreciate the extent to which the new law they created under the bipartisan infrastructure package will eliminate this problem if it is fully enforced. That is because low-income people are profitable to serve in the long run, as networks are paid off over time. Fiber will last anywhere from 30 to 70 years once laid, which gives a carrier a much longer window to recover its investment if the law pushes that result. States with high poverty rates, such as West Virginia, Louisiana, and Mississippi, would therefore invite the most scrutiny under the new digital discrimination law, and would also see the greatest opportunity to recapture the investment the major ISPs are withholding from upgrading their networks. That means companies like Comcast probably can’t spend $10 billion on stock buybacks and raise dividends for shareholders, because more and more of that money will need to be invested back into upgrading low-income access lines to comply with the law.
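
To make the amortization argument concrete, here is a back-of-the-envelope sketch in Python. Every dollar figure is a hypothetical assumption chosen for illustration only, not a number from EFF's cost models; the point is simply that a build-out that looks too slow on a three-to-five-year Wall Street timeline is easily recovered over the multi-decade life of the fiber itself.

```python
# Hypothetical cost-recovery math for a single fiber-connected household.
# All figures are illustrative assumptions, not real deployment data.
deployment_cost_per_home = 2000.0   # assumed one-time cost to build fiber to one home
monthly_revenue = 50.0              # assumed broadband subscription price
monthly_operating_cost = 30.0       # assumed upkeep and service cost per subscriber

monthly_margin = monthly_revenue - monthly_operating_cost
payback_years = deployment_cost_per_home / (monthly_margin * 12)

print(f"Payback period: {payback_years:.1f} years")  # roughly 8.3 years here
# Too slow for a 3-5 year investor timeline, but easily recovered over the
# 30-70 year working life of the fiber plant described above.
```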

Before the law, carriers were allowed to segregate and segment communities into sections to extract greater profits from that discrimination. Every dollar spent on a high-income person meant high returns, while every dollar not spent on people closer to poverty was a dollar pocketed. The digital discrimination law forbids that going forward, because carriers can no longer treat people differently in their upgrade plans based on income status. If the FCC forces ISPs to upgrade their networks equally wherever it is economically feasible, carriers will have to shift to aggregating their costs and aggregating their revenues.

Profitability would come from the entirety of a community, not just from its wealthiest residents. Most importantly, carriers will still be profitable, because cross-subsidization between high-income and low-income residents covers the expenses in the long run. What will end under the law is the siphoning of money from low-income broadband users to finance the wealthy. But all of this depends on the FCC’s ability to carry out the new law. So long as the agency remains deadlocked, it will remain irrelevant and unable to make good on the promise of equal access to the internet.

The FCC Will Be Responsible for Ensuring Equal Access for Nearly 90% of Broadband Connections Under the Digital Discrimination Law

Congress’s command is significant in stating that only economic or technical justifications will excuse discriminatory deployment. According to one study, deploying fiber upgrades is already commercially feasible for 90 percent of American households, with costs rising rapidly for the final 10 percent (see chart below). For that final 10 percent, where deployment is commercially infeasible, Congress gave $45 billion to the National Telecommunications and Information Administration (NTIA) to distribute to the states as grants under the infrastructure law. In other words, the FCC is responsible for ensuring equal treatment of investments and upgrades for the other 90 percent of American broadband users under the bipartisan infrastructure law.

Source: Fiber Broadband Association Cartesian study - Sep 10, 2019

But undoing decades of neglect will take a lot of hard, detailed work by the FCC if the agency is to make good on the law’s promise. In Los Angeles County alone, where the University of Southern California found that Black neighborhoods were being skipped for fiber upgrades, less than 50 percent of the county has 21st-century fiber broadband connections, which affects nearly 5 million people. EFF’s own cost-model study of LA County finds that it is economically feasible to connect 95 percent of residents with fiber before a single dollar of subsidies is needed. In that one region alone, the major ISPs are pocketing billions of dollars by underinvesting, even though it would be profitable in the long run to upgrade nearly everyone today. Investigating, compelling ISPs to supply data to the government, and taking enforcement action when necessary to ensure compliance will all require a fully staffed FCC committed to enforcing the law. The major ISPs have nothing to fear from a deadlocked FCC, which is why they are spending so much dark money to prevent the confirmation of a fifth commissioner.

If you want those ISPs to lose this fight, you have to call your two Senators and ask them to vote for Ms. Sohn.

Take Action

Tell the Senate to Fully Staff the FCC

Ernesto Falcon

DSA Agreement: No Filternet, But Human Rights Concerns Remain

2 months ago

The European Union reached another milestone late last week in its journey to pass the Digital Services Act (DSA) and revamp regulation of digital platforms to address a myriad of problems users face—from overbroad content takedown rules to weak personal data privacy safeguards. There’s a lot to like in the new DSA agreement EU lawmakers reached, and a lot to fear.

Based on what we have learned so far, the deal avoids transforming social networks and search engines into censorship tools, which is great news. Far too many proposals launched since work on the DSA began in 2020 posed real risks to free expression by making platforms the arbiters of what can be said online. The new agreement rejects takedown deadlines that would have squelched legitimate speech. It also remains sensitive to the international nature of online platforms, which will have to consider regional and linguistic aspects when conducting risk assessments.

What’s more, the agreement retains important e-Commerce Directive principles that helped make the internet free, such as rules allowing liability exemptions and limiting user monitoring. And it imposes higher standards for transparency around content moderation and more user control over algorithmically curated recommendations.

But the agreement isn’t all good news. Although it takes crucial steps to limit pervasive online behavioral surveillance practices and rejects the concerning parliamentary proposal to mandate cell phone registration for pornographic content creators, it fails to grant users explicit rights to encrypt their communications and use digital services anonymously so they can speak freely and protect their private conversations. In light of an upcoming regulation that, in the worst case, could make government scanning of user messages mandatory throughout the EU, the DSA is a missed opportunity to reject any measure that leads to spying on people’s private communications. In addition, new due diligence obligations could incentivize platforms in certain situations to over-remove content to avoid being held liable for it.

We’re also worried about the flawed “crisis response mechanism” proposal—introduced by the Commission in closed-door trilogue negotiations—which gives the European Commission too much power to control speech on large platforms whenever it decides there’s a crisis. But we were glad to see it tempered by an extra step requiring the Commission to first get a green light from national independent platform regulators. The Commission will also have to take due regard of the gravity of the crisis and consider how any measure taken will impact fundamental rights.

Finally, the agreement retains provisions allowing government agencies to order a broad range of providers to remove allegedly illegal content, and giving governments alarming powers to uncover data about anonymous speakers, and everyone else. Without specific and comprehensive limitations, these provisions add up to enforcement overreach that will interfere with the right to privacy and threaten the foundations of a democratic society. Unfortunately, European lawmakers didn’t introduce the necessary human rights-focused checks and balances into the agreement to safeguard users against abuse of these powers.

The agreement is not the end of the process—the text is still subject to technical revisions and discussions, and may not be released in its final form for days or weeks. We will be analyzing details as we get them, so stay tuned. Once the DSA text is finalized, it still needs to be voted into law before taking effect.

Karen Gullo

Plaintiffs Press Appeals Court to Rule That FOSTA Violates the First Amendment

2 months ago

Two human rights organizations, a digital library, a sex worker activist, and a certified massage therapist on Monday appealed a ruling that denied their constitutional challenge to FOSTA (Allow States and Victims to Fight Online Sex Trafficking Act), an overbroad and censorious internet law that harms sex workers.

The plaintiffs, Woodhull Freedom Foundation, Human Rights Watch, The Internet Archive, Alex Andrews, and Eric Koszyk, have been challenging the law since it was enacted in 2018. The district court hearing their challenge dismissed the case last month, ruling that FOSTA did not violate the First Amendment.

The plaintiffs are disappointed in the district court’s ruling and disagree with it. As they have repeatedly argued, FOSTA is one of the most restrictive laws governing online speech and it has resulted in significant harm to sex workers and their allies, depriving them of places online to advocate for themselves and their community.

FOSTA created new civil and criminal liability for anyone who “owns, manages, or operates an interactive computer service” and creates content (or hosts third-party content) with the intent to “promote or facilitate the prostitution of another person.” The law also expands criminal and civil liability to classify any online speaker or platform that allegedly assists, supports, or facilitates sex trafficking as though they themselves were participating “in a venture” with individuals directly engaged in sex trafficking.

FOSTA doesn't just seek to hold platforms and hosts criminally responsible for the actions of sex traffickers. It also introduces significant exceptions to the civil immunity provisions of one of the internet’s most important laws, 47 U.S.C. § 230. These exceptions create new criminal and civil liability for online platforms based on whether their users' speech might be seen as promoting or facilitating prostitution, or as assisting, supporting, or facilitating sex trafficking.

The appeal marks the second time the case has gone up to the U.S. Court of Appeals for the District of Columbia. The plaintiffs previously prevailed in the appellate court when it ruled in 2020 that they had the legal right, known as standing, to challenge FOSTA, reversing an earlier district court ruling.

The plaintiffs are represented by EFF, Davis Wright Tremaine LLP, Walters Law Group, and Daphne Keller.

Related Cases: Woodhull Freedom Foundation et al. v. United States
Aaron Mackey

Twitter Has a New Owner. Here’s What He Should Do.

2 months ago

Elon Musk’s purchase of Twitter highlights the risks to human rights and personal safety when any single person has complete control over policies affecting almost 400 million users. And in this case, that person has repeatedly demonstrated that they do not understand the realities of platform policy at scale. 

The core reality is this: Twitter and other social networks play an increasingly important role in social and political discourse, and have an increasingly important corollary responsibility to ensure that their decision-making is both transparent and accountable. If he wants to help Twitter meet that responsibility, Musk should keep the following in mind: 

Free Speech Is Not A Slogan

Musk has been particularly critical of Twitter’s content moderation policies. He’s correct that there are problems with content moderation at scale. These problems aren’t just specific to Twitter, though Twitter has some particular challenges. It has long struggled to deal with bots and troubling tweets by major figures that can easily go viral in just a few minutes, allowing mis- or disinformation to rapidly spread. At the same time, like other platforms, Twitter’s community standards restrict legally protected speech in a way that disproportionately affects frequently silenced speakers. And also like other platforms, Twitter routinely removes content that does not violate its standards, including sexual expression, counterspeech, and certain political speech.

Better content moderation is sorely needed: less automation, more expert input into policies, and more transparency and accountability overall. Unfortunately, current popular discourse surrounding content moderation is frustratingly binary, with commentators either calling for more moderation (or regulation) or, as in Musk’s case, far less.

To that end, EFF collaborated with organizations from around the world to create the Santa Clara Principles, which lay out a framework for how companies should operate with respect to transparency and accountability in content moderation decisions. Twitter publicly supported the first version of the Santa Clara Principles in its 2019 transparency report. While Twitter has yet to successfully implement the Principles in full, that declaration was an encouraging sign of its intent to move toward them: operating on a transparent set of standards, publicly sharing details around both policy-related removals and government demands, making content moderation decisions clear to users, and giving them the opportunity to appeal. We call on Twitter’s management to renew the company’s commitment to the Santa Clara Principles.

Anonymous and Pseudonymous Accounts Are Critical for Users

Pseudonymity—the maintenance of an account on Twitter or any other platform by an identity other than the user’s legal name—is an important element of free expression. Based on some of his recent statements, we are concerned that Musk does not fully appreciate the human rights value of pseudonymous speech. 

Pseudonymity and anonymity are essential to protecting users who may have opinions, identities, or interests that do not align with those in power. For example, policies that require real names on Facebook have been used to push out Native Americans; people using traditional Irish, Indonesian, and Scottish names; Catholic clergy; transgender people; drag queens; and sex workers. Political dissidents may be in grave danger if those in power are able to discover their true identities. 

Furthermore, there’s little evidence that requiring people to post using their “real” names creates a more civil environment—and plenty of evidence that doing so can have disastrous consequences for some of the platform’s most vulnerable users. 

Musk has recently been critical of anonymous users on the platform, and suggested that Twitter should “authenticate all real humans.” Separately, he’s talked about changing the verification process by which accounts get blue checkmarks next to their names to indicate they are “verified.” Botnets and trolls have long presented a problem for Twitter, but requiring users to submit identification to prove that they’re “real” goes against the company’s ethos.

There are no easy ways to require verification without wreaking havoc for some users, and for free speech. Any free speech advocate (as Musk appears to view himself) willing to require users to submit ID to access a platform is likely unaware of the crucial importance of pseudonymity and anonymity. Governments in particular may be able to force Twitter and other services to disclose the true identities of users, and in many global legal systems, do so without sufficient respect for human rights.

Better User Privacy, Safety, and Control Are Essential

When you send a direct message on Twitter, there are three parties who can read that message: you, the user you sent it to, and Twitter itself. Twitter direct messages (or DMs) contain some of the most sensitive user data on the platform. Because they are not end-to-end encrypted, Twitter itself has access to them. That means Twitter can hand them over in response to law enforcement requests, they can be leaked, and internal access can be abused by malicious hackers and Twitter employees themselves (as has happened in the past). Fears that a new owner of the platform would be able to read those messages are not unfounded.

Twitter could make direct messages safer for users by protecting them with end-to-end encryption and should do so. It doesn’t matter who sits on the board or owns the most shares—no one should be able to read your DMs except you and the intended recipient. Encrypting direct messages would go a long way toward improving safety and security for users, and has the benefit of minimizing the reasonable fear that whoever happens to work at, sit on the board of, or own shares in Twitter can spy on user messages. 
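
To make the recommendation concrete, here is a minimal sketch of what end-to-end encryption means in practice, using the open source PyNaCl library. This illustrates the general technique only; it is not Twitter's actual design or any planned implementation, and the user names and message are hypothetical.

```python
# A minimal sketch of end-to-end encryption using PyNaCl (libsodium bindings).
# Illustrative only. The key point: the platform relaying the ciphertext never
# holds either private key, so it cannot read the message even if compelled
# to hand over everything it stores.
from nacl.public import PrivateKey, Box

# Each user generates a keypair on their own device; only the public keys
# are ever shared with the platform or with other users.
alice_secret = PrivateKey.generate()
bob_secret = PrivateKey.generate()

# Alice encrypts with her private key and Bob's public key.
sending_box = Box(alice_secret, bob_secret.public_key)
ciphertext = sending_box.encrypt(b"only you and I can read this")

# The platform stores and relays only `ciphertext`.
# Bob decrypts on his own device with his private key and Alice's public key.
receiving_box = Box(bob_secret, alice_secret.public_key)
assert receiving_box.decrypt(ciphertext) == b"only you and I can read this"
```

A real messaging product would layer key verification, forward secrecy, and multi-device support on top of a primitive like this, which is where most of the engineering work lies, but the privacy property is the same: only the endpoints can read the message.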

Another important way to improve safety on the platform is to give third-party developers, and users, more ability to control their experience. Recently, the platform has experimented with this, making it easier to find tools like BlockParty that allow users to work together to decide what they see on the site. Making these tools even easier to find, and giving developers more power to interact with the platform to create tools that let users filter, block, and choose what they see (and what they don’t see), would greatly improve safety for all users. If the platform were to pivot to a different method of content moderation, it would become even more important for users to have access to better tools to modify their own feeds and block or filter content more accurately.

There are more ambitious changes that would improve the Twitter experience, and the internet beyond it: Twitter’s own Project Blue Sky put forward a plan for an interoperable, federated, standardized platform. Supporting interoperability would be a terrific move for whoever controls Twitter. It would help move power from corporate boardrooms to the users that they serve. If users have more control, it matters less who’s running the ship, and that’s good for everyone.

Jillian C. York

Our Fight To Prevent Patent Suits From Being Shrouded in Secrecy

2 months ago

The public has a right to know what happens when companies litigate in publicly funded courts. Unfortunately, when it comes to patent cases, companies routinely ignore the public’s rights—for example, by filing entire documents under seal without making any attempt to justify that much secrecy. Even when courts have specific rules requiring justification for sealing requests and publicly filing redacted versions of sealed documents, parties can often defy them without consequence.

That’s why EFF and the Public Interest Patent Law Institute, with the assistance of Columbia Law School’s Science, Health, and Information Clinic, have filed a motion to intervene and unseal documents in a patent case, Uniloc v. Google, in the Eastern District of Texas. When Google filed a motion to dismiss the lawsuit, the parties filed their briefs and documentary exhibits entirely under seal, keeping even basic facts about those documents (like their length) secret. Worse, the parties did not file any sealing motions or make any other attempt to justify their excessive sealing requests. This conduct violated the public’s access rights under the Constitution and common law as well as the standing order of the presiding judge, Judge Rodney Gilstrap. It also undermines earlier efforts by EFF to ensure greater transparency in patent cases in this Texas federal court, which has one of the largest dockets of patent cases in the country.

These sealed documents are important: they go to whether Uniloc has a legal right, known as standing, to bring lawsuits based on these patents. As one of the country’s most prolific patent litigants, Uniloc’s right to sue affects the freedom of countless technology makers and users.

Many of the documents that Uniloc filed under seal in Texas were already unsealed in another case—yet in Texas, they remain sealed in their entirety. There is no justification for that. Once information is public, it cannot be sealed. Hoping the parties would recognize that as well, EFF and PIPLI asked Google and Uniloc to unseal those already public records and to file motions to seal any information they could justify keeping sealed.

Google and Uniloc refused. So we’re asking the federal court to order them to unseal these materials.

This case is important for another reason: It appears that many companies litigating in this district court are ignoring the Constitution, common law, and the court’s own standing order. Those rules require them to justify any sealing requests and file redacted versions of the sealed material on the public docket. Google and Uniloc did not attempt to justify their sealing requests or even ask the court’s permission before making all of their filings secret. Instead, they appear to believe a private agreement governing the exchange of documents in litigation, known as a protective order, gives them a free pass to file material under seal without showing any cause. But private parties cannot violate the public’s right to access federal court records by entering into a secrecy agreement.

Crucially, the misuse of protective orders is not unique to this case. It is rooted in misinterpretation of the model protective order used for all patent cases in the district. And Uniloc and Google are not alone in reading this model order incorrectly. Patent litigants in the Eastern District of Texas routinely file documents under seal without making any showing of cause. That’s unlawful, and it hides from the public what happens in courts they fund. EFF and PIPLI intervened in this case to stop patent litigants in the district from using the model protective order to trump the public’s right to access court records.

We hope that in filing this motion to intervene and unseal, we can help the public learn more about the issues in this case and vindicate its right to access court records in patent cases going forward.

EFF and PIPLI would like to thank Columbia University law students Caleb Monaghan and David Ratnoff, along with Associate Professor of Clinical Law Chris Morten, for their work on the case.

Related Cases: Uniloc 2017 LLC v. Google
Alex Moss