Indonesia’s Proposed Online Intermediary Regulation May be the Most Repressive Yet


Indonesia's government is the latest to propose a legal framework to coerce social media platforms, apps, and other online service providers into accepting local jurisdiction over their content and users' data policies and practices. And in many ways, its proposal is the most invasive of human rights. 

This rush of national regulations started with Germany’s 2017 “NetzDG” law, which compels internet platforms to remove or block content without a court order and imposes draconian fines on companies that don’t proactively submit to the country's content-removal rules. Since NetzDG entered into force, Venezuela, Australia, Russia, India, Kenya, the Philippines, and Malaysia have passed, or are discussing, similar laws of their own. 

NetzDG, and several of its copycats, require social media platforms with more than two million users to appoint a local representative to receive content takedown requests and data access demands from public authorities. NetzDG also requires platforms to remove or disable content that appears to be “manifestly illegal” within 24 hours of being notified that the content exists on their platform. Failure to comply subjects companies to draconian fines (and even raises the specter of their services being blocked). This creates a chilling effect on free expression: platforms will naturally err on the side of removing gray-area content rather than risk the punishment. 

Indonesia’s NetzDG variant—dubbed MR5—is the latest example. It entered into force in November 2020, and, like some others, goes significantly further than its German inspiration. In fact, the Indonesian government is exploring new lows in harsh, intrusive, and non-transparent Internet regulation. The MR5 regulation, issued by the Indonesian Ministry of Communication and Information Technology (Kominfo), seeks to tighten the government’s grip over digital content and users’ data. 

MR5 Comes Amid Difficult Times In Indonesia

The MR5 regulation also comes at a time of increased conflict, violence, and human rights abuses in Indonesia: at the end of 2020, the UN High Commissioner for Human Rights raised concern about the escalating violence in Papua and West Papua and shed light on reports about “intimidation, harassment, surveillance, and criminalization of human rights defenders for the exercise of their fundamental freedoms.” According to APC, the Indonesian government has used hate speech laws, envisioned to protect minority and vulnerable groups, to silence dissent and people critical of the government. 


MR5 will further worsen an already difficult climate for freedom of expression in Indonesia, this year and beyond, according to Ika Ningtyas, Head of the Freedom of Expression Division at the Southeast Asia Freedom of Expression Network (SAFEnet). She told EFF: 

The Ministry's authority, in this case, Kominfo, is increasing capacity so it can judge and decide whether the content is appropriate or not. We're very concerned that MR5 will be misused to silence groups criticizing the government. Independent branches of government have been excluded, making it unlikely that this regulation will include transparent and fair mechanisms. MR5 can be followed by other countries, especially in Southeast Asia. Regional and global solidarity is needed to reject it.

Business enterprises have a responsibility to respect human rights law. The UN Special Rapporteur on Free Expression has already reminded States that they “must not require or otherwise pressure the private sector to take steps that unnecessarily or disproportionately interfere with freedom of expression, whether through laws, policies, or extralegal means.” The Special Rapporteur also pointed out that any measures to remove online content must be based on validly enacted law, subject to external and independent oversight, and demonstrate a necessary and proportionate means of achieving one or more aims under Article 19 (3) of the ICCPR.

We join SAFEnet in urging the Indonesian government to bring its legislation into full compliance with international freedom of expression standards. 

Below are some of MR5’s most harmful provisions.

Forced ID Registration To Operate in Indonesia

MR5 obliges every “Private Electronic System Operator” (or “Private ESO”) to register and obtain an ID certificate issued by the Ministry before people in Indonesia start accessing its services or content. A “Private ESO” is any individual, business entity, or community that operates an “electronic system” for users within Indonesia, even if the operator is incorporated abroad. Private ESOs subject to this obligation include digital marketplaces, financial services, social media and content sharing platforms, cloud service providers, search engines, instant messaging, email, video, animation, music, film and games, and any application that collects, processes, or analyzes users’ data for electronic transactions within Indonesia. 

Registration must take place by mid-May 2021. Under MR5, Kominfo will sanction non-registrants by blocking their services. Private ESOs who decide to register must provide information granting access to their “system” and data to ensure the effectiveness of the “monitoring and law enforcement process.” If a registered Private ESO disobeys the MR5 requirements, for example by failing to provide the “direct access” to their systems (Article 7 (c)), it can be punished in various ways, ranging from a first warning to temporary blocking to full blocking and a final revocation of its registration. Temporary or full blocking of a site is a general ban of a whole site, an inherently disproportionate measure, and therefore an impermissible limitation under Article 19 (3) of the UN’s International Covenant on Civil and Political Rights (ICCPR). When it comes to general blocking, the Council of Europe has recommended that public authorities should not, through general blocking measures, deny access by the public to information on the Internet, regardless of frontiers. The United Nations and three other special mandates on freedom of expression explain that “[m]andatory blocking of entire websites, IP addresses, ports, network protocols or types of uses (such as social networking) is an extreme measure – analogous to banning a newspaper or broadcaster – which can only be justified in accordance with international standards, for example where necessary to protect children against sexual abuse.” 

A general ban of a Private ESO platform would also be incompatible with Article 15 of the UN’s International Covenant on Economic, Social and Cultural Rights (ICESCR), which states that individuals have a right to “take part in cultural life” and to “enjoy the benefits of scientific progress and its applications.” The UN has identified “interrelated main components of the right to participate or take part in cultural life: (a) participation in, (b) access to, and (c) contribution to cultural life," and has explained that access to cultural life also includes a “right to learn about forms of expression and dissemination through any technical medium of information or communication.” 

Moreover, while a State party can impose restrictions on freedom of expression, these may not put in jeopardy the right itself, which a general ban does. The UN Human Rights Committee has said that the “relation between right and restriction and between norm and exception must not be reversed.” And Article 5, paragraph 1 of the ICCPR, states that “nothing in the present Covenant may be interpreted as implying for any State … any right to engage in any activity or perform any act aimed at the destruction of any of the rights and freedoms recognized in the Covenant.”

Forced Appointment of a Local Contact Person

Tech companies have come under increasing criticism for flouting local laws or approaching non-U.S. countries without understanding the local context. In that sense, a local point of contact can be a positive step. But forcing the appointment of a local contact is a different matter: it makes platforms more vulnerable to domestic legal action, including the potential arrest and criminal prosecution of their local representative, as has happened in the past, and it makes it much harder for them to resist arbitrary orders. MR5 compels everyone whose digital content is used or accessed within Indonesia to appoint a local point of contact based in Indonesia who would be responsible for responding to content removal and personal data access orders. 

Regulations Requiring Takedown of Content and Documents Deemed “Prohibited by the Government”

Article 13 of the MR5 forces Private ESOs (except cloud providers) to take down prohibited information and/or documents. Article 9(3) defines prohibited information and content as anything that violates any provision of Indonesia’s laws and regulations, or creates “community anxiety” or “disturbance in public order.” Article 9 (4) grants the Ministry, a non-independent authority, unfettered discretion to define this vague notion of “community anxiety” and “public disorder.” It also forces these Private ESOs to take down anything that would “inform ways or provide access” to these prohibited documents.


This language is extremely concerning. Compelling Private ESOs to ensure that they are not “informing ways” of accessing or “providing access” to prohibited documents and information would, in our interpretation, mean that if a user of a Private ESO platform or site publishes a tutorial on how to reach blocked content (for example, by explaining how to use a VPN to bypass access blocking), the tutorial itself could be considered prohibited information. Even instructions on using a VPN at all could be deemed prohibited. (The Communications Minister has told Internet users in Indonesia to stop using Virtual Private Networks, which he claims allow users to hide from authorities and put users’ data at risk.)

While maintaining public order may in some circumstances be a legitimate aim, this provision could be used to justify sweeping limitations on freedom of expression. Any restriction in the name of public order must be prescribed by law, be necessary and proportionate, and be the least restrictive means of achieving that aim. Moreover, as the Human Rights Committee has stated, States' restrictions on the exercise of freedom of expression may not put in “jeopardy the right itself.” To satisfy the “prescribed by law” requirement, restrictions must not only be formulated with sufficient precision to enable an individual to regulate their conduct, they must also be made accessible to the public. And they must not confer unfettered discretion for the restriction of freedom of expression on those charged with their execution. 

Article 9(3) includes within “prohibited content and information” any speech that violates Indonesia’s laws and regulations. GR71, a regulation one level higher than MR5, and the earlier Law No. 11 of 2008 on Electronic Information and Transactions both use similarly vague language without offering any further definition or elucidation. For example, Law No. 11 of 2008 defines “Prohibited Acts” as any person knowingly and without authority distributing and/or transmitting and/or causing to be accessible any material thought to violate decency; promote gambling; insult or defame; extort; spread false news resulting in consumer losses in electronic transactions; cause hatred based on ethnicity, religion, race, or group; or contain threats of violence. We see a similar systemic problem with the definitions of “community anxiety” and “public order,” which fail to comply with the requirements of Article 19 (3) of the ICCPR. 

Additionally, Indonesia’s criminal code treats blasphemy as a crime—even though outlawing "blasphemy" is incompatible with international human rights law. The United Nations Human Rights Committee has clarified that laws that prohibit displays of lack of respect for a religion or other belief systems, including blasphemy laws, are incompatible with the ICCPR. As for defamation law, the UNHRC states that any such law should be crafted with care to ensure it does not stifle freedom of expression: it should allow for the defense of truth and should not be applied to expressions that are not subject to verification. Likewise, the UNHRC has stated that “laws that penalize the expression of opinions about historical facts are incompatible with the obligations that the ICCPR imposes on States parties” to respect the right to freedom of opinion and expression. Criminal defamation law has been widely criticized by UN Special Rapporteurs on Free Expression for hindering free expression. Yet under this new regulation, any speech that violates Indonesian law is deemed prohibited.

Forcing Private Companies To Proactively Monitor 

MR5 also obliges Private ESOs (except cloud providers) to ensure that their services, websites, or platforms do not contain and do not facilitate the dissemination of prohibited information or documents. In practice, this amounts to a general monitoring obligation and will push Private ESOs toward adopting content filters. Article 9 (6) imposes disproportionate sanctions, including general blocking of their systems, on those who fail to keep prohibited content and information off their systems. 


These provisions are not only a serious threat to Indonesians’ free expression rights, they are also a major compliance challenge for Private ESOs. If the Ministry gets to determine what information is “prohibited,” a Private ESO would be hard-pressed to proactively ensure its system does not contain that information or facilitate its dissemination even before a specific takedown.

According to SAFEnet’s Ika Ningtyas, leaving this determination to the Ministry will allow it to censor criticism of public policies, as well as discussion of LGBT rights and activities or the ongoing Papua conflict.

Who Decides What Is Prohibited? 

MR5 empowers an official with the Orwellian title “Minister for Access Blocking” to coordinate the blocking of prohibited information. Blocking requests may originate with Indonesian law enforcement agencies, courts, the Ministry of Information, or concerned members of the public. (The courts can issue “instructions” to the Access Blocking Minister, while other government entities send requests that the Minister can evaluate. Individuals’ requests related to pornography or gambling can be sent directly to the Access Blocking Minister, while those related to other matters are addressed first to the Ministry of Information.) The Minister then emails platform operators with orders to block particular content, which they are expected to obey within 24 hours—or only 4 hours for “urgent” requests. “Urgent” requests cover terrorism, child pornography, and content causing “unsettling situations for the public and disturbing public order.” If a Private ESO (with the exception of a cloud provider) does not comply, it may receive warnings, fines, and eventually have its services blocked in Indonesia—even if the information in question is protected under international human rights law.

It takes time to understand the local context and the complexity of individual cases, and to assess such government orders. Careful assessments are particularly needed when it comes to material that relates to minority groups and movements, regardless of the context in which the complaint is raised—copyright, defamation, blasphemy, or any of the categories MR5 describes as harmful or as causing “community anxiety” or “public disorder.” Laws must provide sufficient guidance to those charged with their execution to enable them to ascertain what sorts of expression are properly restricted and what sorts are not. 

Even the use of copyright law as a cudgel by the state to censor dissent is not hypothetical. According to Google’s Transparency Report on government requests: 

We received a request through our copyright complaints submission process from an Indonesian Consul General who requested that we remove six YouTube videos. Outcome: We did not remove the videos, which appeared to be critical of the Consulate.

Forcing User-Generated Content Platforms to Become Government Enforcers

MR5 Articles 11, 16(11), and 16(12) enlist user-generated content platforms (like YouTube, Twitter, TikTok, or any local site that distributes user-generated content) as content enforcers by threatening them with legal liability for their users’ expression unless they agree to help monitor the content of communications in the ways specified by the Indonesian government. Under Article 11, a User Generated Content Private ESO must ensure that prohibited information and documents are not transmitted or distributed digitally through its services; must disclose subscriber information revealing who uploaded such information for the purpose of supervision by administrative agencies (the Trade Agency) and law enforcement; and must perform access blocking (takedowns) on prohibited content. 

User-Generated Content Private ESOs who fail to remove prohibited information and/or documents are subject to an administrative sanction based on the provisions of the law and regulations concerning Non-Tax State Revenue (Article 16 (11)).

The Minister can order ISPs to block access to a Social Media Private ESO and/or impose a fine that accumulates each time the compliance deadline passes, up to a maximum of three times: every 4 hours for emergency cases such as terrorism (4x3 = 12 hours total), and every 24 hours for other “normal” cases (24x3 = 72 hours total). The result: if changes aren’t made within 12 or 72 hours, on top of owing three times the fine, the Private ESO could find itself blocked (Article 16 (11)-(12)).
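To make that escalation schedule concrete, here is a minimal illustrative sketch (ours, not part of MR5 or of SAFEnet’s analysis) of how we read the deadline and fine multiplier in Article 16; the per-period fine amount is a hypothetical placeholder, since MR5 leaves the actual amounts to separate Non-Tax State Revenue rules.

    # Sketch of our reading of MR5 Article 16(11)-(12): the fine accrues each
    # time the compliance window passes, at most three times, after which the
    # service can also be blocked. Fine amounts are placeholders, not from MR5.
    def mr5_penalty(hours_elapsed, urgent, fine_per_period=1):
        window = 4 if urgent else 24          # hours allowed per order
        max_accruals = 3                      # fine can accrue at most 3 times
        accruals = min(hours_elapsed // window, max_accruals)
        blockable = hours_elapsed >= window * max_accruals  # 12h urgent, 72h normal
        return accruals * fine_per_period, blockable

    # An "urgent" terrorism-related order ignored for 13 hours:
    print(mr5_penalty(13, urgent=True))   # -> (3, True): triple fine, blocking possible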

MR5 Regulation Should be Repealed


We join SAFEnet in urging the Indonesian government to repeal MR5 for its incompatibility with international freedom of expression law and standards. Companies should not comply with content removal orders that are inconsistent with the permissible limitations test. General blocking measures as sanctions are, in our opinion, always inconsistent with Article 19 of the ICCPR. Companies should legally challenge such general blocking orders, and they should fight back strategically against any pressure from the Indonesian government.

Katitza Rodriguez

New EFF Report Shows Cops Used Ring Cameras to Monitor Black Lives Matter Protests

LAPD Wanted Unknown Amount of Video for Unknown Reasons – Raising First Amendment Concerns

San Francisco - The Electronic Frontier Foundation (EFF) has obtained emails that show that the Los Angeles Police Department (LAPD) sent at least one request—and likely many more—for Amazon Ring camera video of last summer’s Black-led protests against police violence. In a report released today, EFF shows that the LAPD asked for video related to “the recent protests,” and refused to disclose to EFF what crime it was investigating or how many hours of footage it ultimately requested.

“The emails we received raise many questions about what the LAPD wanted to do with this video,” said EFF Policy Analyst Matthew Guariglia. “Police could have gathered hours of footage of people engaged in First-Amendment-protected activity, with a vague hope that they could find evidence of something illegal. LAPD should tell the public how many hours of surveillance footage it gathered around these protests, and why.”

EFF filed its public records request with LAPD after widespread complaints about police tactics during the protests in May and June of 2020. After receiving the emails in response to our request, we asked for clarification from the LAPD about what it was looking for and how much video it wanted. The agency said simply that it was attempting to “identify those involved in criminal behavior.”

“Outdoor surveillance cameras like Ring have the potential to provide the police with video footage covering every inch of an entire neighborhood. This poses an incredible risk to First Amendment rights,” said Guariglia. “People are less likely to exercise their right to political speech, protest, and assembly if they know that police can get video of these actions with just an email to people with Ring cameras.”

Los Angeles isn’t the only city where the police department tried to get video of last summer’s protests for racial justice. The San Francisco Police Department (SFPD) used a network of over 400 cameras operated by a business district to spy on protests in early June 2020, under the guise of public safety. Last fall, EFF and ACLU of Northern California filed a lawsuit against the City and County of San Francisco on behalf of three protesters, asking the court to require the city to follow its Surveillance Technology Ordinance and prohibit the SFPD from acquiring, borrowing, or using non-city networks of surveillance cameras absent prior approval from the city’s Board of Supervisors.

For the full report “LAPD Requested Ring Footage of Black Lives Matter Protests”:
https://www.eff.org/deeplinks/2021/02/lapd-requested-ring-footage-black-lives-matter-protests

Contact: Matthew Guariglia, Policy Analyst, matthew@eff.org
Rebecca Jeschke

LAPD Requested Ring Footage of Black Lives Matter Protests


Along with other civil liberties organizations and activists, EFF has long warned that Amazon Ring and other networked home surveillance devices could be used to monitor political activity and protests. Now we have documented proof that our fears were well-founded.

According to emails obtained by EFF, the LAPD sent requests to Amazon Ring users specifically targeting footage of Black-led protests against police violence that occurred in cities across the country last summer. While it is clear that police departments and federal law enforcement across the country used many different technologies to spy on protests, including aerial surveillance and semi-private camera networks, this is the first documented evidence that a police department specifically requested footage from networked home surveillance devices related to last summer’s political activity.

A map of Ring-police partnerships in the United States.

In May 2019, LAPD became the 240th public safety agency to sign a formal partnership with Ring and its associated app, Neighbors. That number has now skyrocketed to more than 2,000 government agencies. The partnerships allow police to use a law-enforcement portal to canvass local residents for footage.

Requests from police to Ring users typically contain the name of the investigating detective and an explanation of what incident they are investigating. Police requesting footage also specify a time period, usually a range spanning several hours, because it’s often hard to identify exactly what time certain crimes occurred, such as an overnight car break-in. 

A June 16, 2020 email showing an LAPD request for footage to an Amazon Ring user.

In its response to EFF’s public records requests, the LAPD produced several messages it sent to Ring users, but redacted details such as the circumstances being investigated and the dates and times of footage requested. However, one email request on behalf of the LAPD “Safe L.A. Task Force” specifically asked for footage related to “the recent protests.” Troublingly, the LAPD also redacted the dates and times of the footage it sought. This practice is concerning because if police request hours of footage on either side of a specific incident, they may receive hours of footage of people engaging in First Amendment-protected activities, with only a vague hope that a camera captured illegal activity at some point. Redacting the hours of footage the LAPD requested covers up the amount of protest footage the police department sought to acquire.

EFF asked the LAPD for clarification of the specific context under which the department sent requests concerning the protests. The LAPD would not cite a specific crime they were investigating, like a theft from a specific storefront or an act of vandalism. Instead, the LAPD told EFF, “SAFE LA Task Force used several methods in an attempt to identify those involved in criminal behavior.”

Their full response reads:

The SAFE LA Task Force used several methods in an attempt to identify those involved in criminal behavior. One of the methods was surveillance footage. It is not uncommon for investigators to ask businesses or residents if they will voluntarily share their footage with them. Often, surveillance footage is the most valuable piece in an investigators case.

Police have used similar tactics before. EFF investigated the San Francisco Police Department’s use of a Business Improvement District’s network of over 400 cameras to spy on protests in early June 2020, under the guise of public safety and situational awareness. We learned that police gained over a week of live access to the camera network, as well as a 12-hour “data dump” of footage from all cameras in the network. In October 2020, EFF and ACLU of Northern California filed a lawsuit against the City and County of San Francisco on behalf of three protesters. We seek a court order requiring the city to comply with the city’s Surveillance Technology Ordinance by prohibiting the SFPD from acquiring, borrowing, or using non-city networks of surveillance cameras absent prior approval from the city’s Board of Supervisors.

The LAPD announced the creation of the Safe L.A. Task Force on June 2, 2020, in order to receive tips and investigate protests against police violence that started just four days earlier. The LAPD misleadingly labeled these protests as an “Unusual Occurrence (UO).” The FBI announced they would join the task force “in order to investigate significant crimes that occurred at or near locations where legitimate protests and demonstrations took place in Los Angeles beginning on May 29, 2020.” The Los Angeles Police Department, Beverly Hills Police Department, Santa Monica Police Department, Torrance Police Department, Los Angeles City Fire Department, Los Angeles City Attorney’s Office, Los Angeles County District Attorney’s Office, and United States Attorney’s Office for Los Angeles also joined the task force. 

Protests began in Los Angeles County following the Minneapolis police killing of George Floyd on May 25, 2020. LAPD sent a number of requests for Ring footage from users starting at the end of May, but because of the extensive redactions of circumstances, dates, and times, we’re unable to verify if all of those requests are related to the protests. However, some of the detectives associated with the Safe L.A. Task Force are the same people that began requesting Ring footage at the end of May and early June. 

On June 1, 2020, the same day as some of Los Angeles' largest protests, police received footage from a Ring user.

The LAPD’s response shows that on June 1, 2020, the morning after one of the largest protests of last summer in Los Angeles, Det. Gerry Chamberlain sent Ring users a request for footage. Within two hours, Chamberlain received footage from at least one user. The nature of the request was redacted; however, the next day, his unit was formally assigned to the protest task force.

The LAPD’s handling of last summer’s protests is under investigation after widespread complaints about unchecked suppression and disproportionate tactics. At least 10 LAPD officers have been taken off the street pending internal investigations of their use of force during the protests. 

Technologies like Ring have the potential to provide the police with video footage covering nearly every inch of an entire neighborhood. This poses an incredible risk to First Amendment rights. People are less likely to exercise their right to political speech, protest, and assembly if they know that police can acquire and retain footage of them. This creates risks of retribution or reprisal, especially at protests against police violence. Ring cameras, ubiquitous in many neighborhoods, create the possibility that if enough people share footage with police, authorities would be able to follow protesters’ movements block by block. Indeed, Gizmodo found that on a walk of less than a mile between a school and its gymnasium in Washington, D.C., students had to pass no fewer than 13 Ring cameras, whose owners regularly posted footage to social media. Activists may need to walk past many more such cameras during a protest. 

We Need New Legal Limits on Police Access

This incident once again shows that modern surveillance technologies are wildly underregulated in the United States. A number of U.S. Senators and other elected officials have raised concerns—and sent inquiries to Amazon—seeking to uncover how few legal restrictions govern this rapidly growing surveillance empire. The United States is ripe for a legislative overhaul to protect bystanders, as well as consumers, from both corporations and government. A great place to start would be stronger limits on government access to data collected by private companies. 

One of EFF’s chief concerns is the ease with which Ring-police partnerships allow police to make bulk requests to Ring users for their footage, although a new feature does allow users to opt out of requests. Ring has introduced end-to-end encryption, preventing police from getting footage directly from Amazon, but this doesn't limit their ability to send these blanket requests to users. Such “consent searches” pose the greatest problems in high-coercion settings, like police “asking” to search your phone during a traffic stop, but they are also highly problematic in less-coercive settings, like bulk email requests for Ring footage from many residents. 

Thus, an important way to prevent police from using privately-owned home security devices as political surveillance machines would be to impose strict regulations governing “Internet of Things” consent search requests. 

EFF has previously argued that in less-coercive settings, consent searches should be limited by four rules. First, police must have reasonable suspicion that crime is afoot before sending a request to a specific user. Such requests must be specific, targeting a particular time and place where there is reasonable suspicion that crime has happened, rather than general requests that, for example, blanket an entire neighborhood for an entire day in order to investigate one broken window. Second, police must collect and publish statistics about their consent searches of electronic devices, to deter and detect racial profiling. Third, police and reviewing courts must narrowly construe the scope of a person’s consent to search their device. Fourth, before an officer attempts to acquire footage from a person’s Ring camera, the officer must notify the person of their legal right to refuse. 

Ring has taken some positive steps concerning its users’ privacy—but the privacy of everyone else in the neighborhood is still in jeopardy. The growing ubiquity of Ring means that if the footage exists, police will continue to access more and more of it. The LAPD’s use of Ring cameras to gather footage of protesters should be a big red flag for politicians. 

You can view the emails between Ring and the LAPD below:

Related Cases: Williams v. San Francisco
Matthew Guariglia

Virginians Deserve Better Than This Empty Privacy Law


A very weak consumer data privacy bill is sailing through the Virginia legislature with backing from Microsoft and Amazon, which have both testified in support of it. The bill, SB 1392, and its companion, HB 2307, are based on a Washington privacy bill backed by tech giants that has threatened for two years to lower the bar for state privacy legislation. If you’re a Virginia resident who cares about privacy, please submit a comment to the House Committee on Technology, Communications, and Innovation before it meets on Monday, Feb. 15.

EFF has long advocated for strong privacy legislation. Consumer privacy has been a growing priority for legislatures across the country since California passed the California Consumer Privacy Act in 2018—a sweeping, first-of-its-kind piece of privacy legislation. Since then, several states have considered broad data privacy laws, and California amended its privacy law in 2020.

But not all privacy laws are the same. While California’s law is itself not perfect, a bill in the style of the Washington Privacy Act is a step in the wrong direction—particularly the version of the bill under consideration in Virginia. Bills that follow this model allow companies to appear to be doing a lot to protect privacy but are full of carveouts that fail to address some of the industry’s worst data privacy abuses.


Virginia’s bill copies much of what we’ve spoken out against in Washington state—and is, in some ways, even worse. For one, Virginia’s bill has almost no teeth. While the Attorney General’s office could bring a lawsuit, the bill offers no way for people to sue companies that violate their privacy—an enforcement tool known as a private right of action. Broad private rights of action are vital for ensuring that people can act in their own interest to protect their privacy. Even California’s law and the Washington bill that Virginia’s measure is based on—both of which could benefit from stronger enforcement—include at least a narrow private right of action, offering a limited way for people to hold businesses to account without having to wait for the attorney general to act.

The Virginia bill stacks the deck against consumers even more with its “right to cure” provision: if the Attorney General sues a business for violating people’s privacy, the business gets a chance to fix what it did wrong, which makes the Attorney General's lawsuit go away. Considering how much time and work goes into bringing a lawsuit, giving the other side such a cheap and easy out shows how a right to cure lets a company look like it cares about privacy without actually having to care.

Virginia’s privacy bill also explicitly allows companies to engage in “pay for privacy” schemes, which punish consumers for exercising their privacy rights. In Virginia’s case, the bill says that consumers who opt out of having their data used for targeted advertising, sold, or used for profiling can be charged a different “price, rate, level, quality or selection of goods and services.” That means punishing people for protecting their privacy—a structure that ends up harming those who cannot afford to pay for their privacy. Privacy should have no price tag.

A strong privacy bill would protect people’s privacy by default by letting them opt in to data sale and use, rather than having to ask each company to stop using their information. It would require companies to commit to strict standards for what information they collect in the first place. And it would have real teeth to make sure that companies don’t get away with violating privacy rights.

EFF has joined with other national privacy groups, as well as with consumer advocates in Virginia, to ask the legislature to consider amendments that prioritize their constituents’ rights over empty promises from businesses.

Virginia’s lawmakers have made it clear that they want to hear from their own constituents who may be concerned about this matter. Tell your lawmakers to hit the brakes on this bill, and work to craft a better law for the people they serve.

Hayley Tsukayama

Victory! EFF Scores Another Win for the Public’s Right of Access against Patent Owner Fighting for Secrecy


Patents generate profits for private companies, but their power comes from the government, and in this country, the government’s power comes from the people. That means the rights patents confer, regardless of who exercises them, are fundamentally public in nature.

Patent owners have no right to keep their patent rights secret. The whole point of the patent system is to encourage people to disclose information about their inventions to the public by giving certain exclusive rights to those who do. But that doesn’t stop private companies from trying to keep information about their patents secret—even when their disputes go to court, where the public has a right to know what happens.

A recent decision by a federal court in a long-running transparency push by EFF affirmed the public’s right to access important information about a patent dispute. For more than two years, we have been working to vindicate the public’s right of access to important sealed court documents in Uniloc v. Apple. The sealed documents supported Apple’s argument that the case should be dismissed because Uniloc lost ownership of the patents when it sued Apple, and thus lost the right to bring the suit. But as filed, the documents were so heavily redacted that it was impossible to understand them. So EFF intervened to oppose the sealing requests on the public’s behalf—and we won. When Uniloc asked for reconsideration, the court refused—and we won again. When Uniloc appealed, the Federal Circuit overwhelmingly upheld the district court’s decision—and for the third time, we won.

EFF hoped that this string of victories would mark the end of our intervention, and that the parties would at last file properly redacted documents as required. But they did not.

In October 2020, after more than three months had passed since the Federal Circuit’s ruling, we discovered Apple had filed a new motion to dismiss against Uniloc. Again, the motion and exhibits were so heavily redacted that it was impossible to know what Apple’s argument for dismissal was. So EFF moved to intervene, challenging Uniloc’s failure to comply with the Federal Circuit’s ruling as well as its new failure to submit proper sealing requests. The district court agreed, and for the fourth time, we won.

That EFF had to intervene underscores the problem of excessive sealing in patent cases between private companies. No matter how much they disagree on other issues, otherwise-warring sides often have a mutual interest in wanting to keep information about the litigation secret. When that happens, both sides are motivated to make excessive requests to seal court records—but not to oppose them. If there’s no opposition, there’s no guarantee a judge will weigh the request against the public’s right of access. To make sure that happens, EFF often intervenes in patent cases to vindicate the public’s access rights.

In its December 2020 decision, the district court did not mince words, excoriating both parties for their casual attitude toward the public’s right of access. The court emphasized the perils of “collusive oversealing,” which happens in cases such as this where “both parties seek to seal more information than they have any right to and so do not police each other’s indiscretion.” Although Apple did not request secrecy, it had ample opportunity to challenge Uniloc’s sealing requests, but “opted instead to grab its December 4 victory on the standing issue and head for the hills.” Seeing Apple and Uniloc’s mutual interest in secrecy, the court realized that “[w]ithout EFF, the public’s right of access will have no advocate,” and granted our motion for intervention with thanks.

The court then denied all of Uniloc’s sealing requests—including the requests to seal the names and amounts paid by Uniloc’s licensees. In doing so, the court emphasized the public’s right to information about U.S. patents in addition to the right to access court records. As it explained: “a patent is a public grant of rights. . . . The public has every right to account for all its tenants, all its sub-tenants, and (more broadly) anyone holding even a slice of the public grant.” It also emphasized the public’s “interest in inspecting the valuation of the patent rights . . . particularly given secrecy so often plays to the patentee’s advantage in forcing bloated royalties.” We commend the court for recognizing the gravity of the public’s right—and need—for information about the ownership, licensing, and valuation of U.S. patents.

We hoped this victory would convince Uniloc to admit defeat and change its sealing practices, but it has decided to appeal its loss to the Federal Circuit again. EFF’s fight for access to Uniloc’s licensing secrets will continue. In the meantime, we hope this decision will encourage judges and litigants to enforce the public’s right of access, especially when the adversarial process collapses.

 

Related Cases: Uniloc v. Apple
Alex Moss

EFF, Freedom of the Press Foundation and 22 Other Press Freedom Organizations Call on Attorney General to Drop Assange Prosecution


The prosecution of Julian Assange on charges related to his publication of government documents on the whistleblower website Wikileaks poses a grave threat to press freedom, EFF, Freedom of the Press Foundation, and other human rights organizations argue. In an open letter published today, we call on President Biden’s acting Attorney General Monty Wilkinson to halt the prosecution and the threat of extradition.

The majority of the charges against Assange arise under the Espionage Act, a federal law passed in 1917 to punish espionage. The law’s broad language criminalizes obtaining and/or transmitting materials related to the national defense (read the text of the law). While the authors of the law may have intended to keep its scope broad in order to encompass a wide range of espionage activities, today that law is being turned against publishers of information who seek to hold government officials to account for unethical behavior.

As we argue in our letter, prosecuting Assange under the Espionage Act raises the specter of prosecuting other journalistic institutions for routine investigative and publishing practices. As we state in our letter, “a precedent created by prosecuting Assange could be leveraged—perhaps by a future administration—against publishers and journalists of all stripes.” Both the Espionage Act and the Computer Fraud and Abuse Act raise serious constitutional concerns, and the selective enforcement of these laws is used to threaten journalists, whistleblowers, and publishers who seek to cast light on government malfeasance.

The United States’ extradition request for Julian Assange was recently dismissed by a British judge, but Julian Assange is still in prison and the charges are likely to be appealed. Read EFF’s deeper dive into why the prosecution of Assange threatens press freedom and how the use of the CFAA against Assange fits into a larger pattern of selective enforcement of computer crime laws.

You can read the letter below:

U.S. Department of Justice

950 Pennsylvania Avenue, NW

Washington, DC 20530-0001

February 8, 2021

Acting Attorney General Monty Wilkinson:

 

We, the undersigned press freedom, civil liberties, and international human rights advocacy organizations, write today to share our profound concern about the ongoing criminal and extradition proceedings relating to Julian Assange, the founder of Wikileaks, under the Espionage Act and the Computer Fraud and Abuse Act.

​While our organizations have different perspectives on Mr. Assange and his organization, we share the view that the government’s indictment of him poses a grave threat to press freedom both in the United States and abroad. We urge you to drop the appeal of the decision by Judge Vanessa Baraitser of the Westminster Magistrates’ Court to reject the Trump administration’s extradition request.

 We also urge you to dismiss the underlying indictment. The indictment of Mr. Assange threatens press freedom because much of the conduct described in the indictment is conduct that journalists engage in routinely—and that they must engage in in order to do the work the public needs them to do. Journalists at major news publications regularly speak with sources, ask for clarification or more documentation, and receive and publish documents the government considers secret. In our view, such a precedent in this case could effectively criminalize these common journalistic practices. In addition, some of the charges included in the indictment turn entirely on Mr. Assange’s decision to publish classified information. News organizations frequently and necessarily publish classified information in order to inform the public of matters of profound public significance. We appreciate that the government has a legitimate interest in protecting bona fide national security interests, but the proceedings against Mr. Assange jeopardize journalism that is crucial to democracy.

The Trump administration positioned itself as an antagonist to the institution of a free and unfettered press in numerous ways. Its abuse of its prosecutorial powers was among the most disturbing. We are deeply concerned about the way that a precedent created by prosecuting Assange could be leveraged—perhaps by a future administration—against publishers and journalists of all stripes. Major news organizations share this concern, which is why the announcement of charges against Assange in May 2019 was met with vociferous and nearly universal condemnation from virtually every major American news outlet, even though many of those news outlets have criticized Mr. Assange in the past. It is our understanding that senior officials in the Obama administration shared this concern as well. Former Department of Justice spokesperson Matthew Miller told the Washington Post in 2013, “The problem the department has always had in investigating Julian Assange is there is no way to prosecute him for publishing information without the same theory being applied to journalists.”

​It was reportedly the press freedom implications of any prosecution of Mr. Assange that led Attorney General Eric Holder’s Justice Department to decide against indicting him after considering doing so. It is unfortunately the case that press freedom is under threat globally. Now more than ever, it is crucial that we protect a robust and adversarial press—what Judge Murray Gurfein in the Pentagon Papers case memorably called a “cantankerous press, an obstinate press, an ubiquitous press”—in the United States and abroad. With this end in mind, we respectfully urge you to forgo the appeal of Judge Baraitser’s ruling, and to dismiss the indictment of Mr. Assange.

​Respectfully,

(in alphabetical order)

Access Now

American Civil Liberties Union

Amnesty International - USA

Center for Constitutional Rights

Committee to Protect Journalists

Defending Rights and Dissent

Demand Progress

Electronic Frontier Foundation

Fight for the Future

First Amendment Coalition

Free Press

Freedom of the Press Foundation

Human Rights Watch

Index on Censorship

Knight First Amendment Institute at Columbia University

National Coalition Against Censorship

Open The Government

Partnership for Civil Justice Fund

PEN America

Project on Government Oversight

Reporters Without Borders

Roots Action

The Press Freedom Defense Fund of First Look Institute

Whistleblower & Source Protection Program (WHISPeR) at ExposeFacts

 

Rainey Reitman

Some Answers to Questions About the State of Copyright in 2021


In all the madness that made up the last month of 2020, a number of copyright bills and proposals popped up—some even became law before most people had any chance to review them. So now that the dust has settled a little and we have a better idea of what the landscape is going to look like, it is time to answer a few frequently asked questions.

What Happened?

In December 2020, Congress was rushing to pass a massive spending bill and coronavirus relief package. This was “must-pass” legislation, in the sense that if it didn’t pass there would be no money to do things like fund the government. Passing the package was further complicated by a couple of threats from President Trump to veto the bill unless certain things were in it.

In all this, two copyright bills were added to the spending package, despite having no place there—not least because there hadn’t been robust hearings where the issues with them could be pointed out. One of the bills didn’t even have text available to the public until the very last second. And they are now law.

The omnibus bill is 5,593 pages long. These new copyright laws are pretty close to smack dab in the middle, starting on page 2,539.

What Are These Laws?

They are the Protecting Lawful Streaming Act of 2020 and the Copyright Alternative in Small-Claims Enforcement Act (CASE Act). The former makes operating certain kinds of commercial streaming services a felony. The latter creates a weird “Copyright Claims Board” within the Copyright Office that can hand out awards of up to $30,000 for claims of copyright infringement. One is not going to impact the average Internet user that much. The other is more dangerous.

What Is the Felony Streaming Law?

The text of the Protecting Lawful Streaming Act of 2020 was only released to the public about two weeks before it became law, and interest in it was high. This was partially because people had heard there was a felony streaming bill but could find no details whatsoever.

It isn’t a great law—we simply do not need more penalties for copyright infringement and definitely not ones that make it a felony—but the good news is it won’t affect most people.

The law makes it a felony to operate for either “commercial advantage” or personal gain a service that either: (1) is primarily designed or provided to be used for infringing streaming; (2) doesn’t have any significant non-infringing uses; or (3) is marketed to promote its use for infringing streaming.

Since most people don’t run such services, and the law does not affect the safe harbor provisions of the Digital Millennium Copyright Act, most of us won’t be running afoul of this law.

What Is the Copyright Alternative in Small-Claims Enforcement (CASE) Act?

The CASE Act is a different story altogether. It is, at best, a huge waste of time and money. At worst, it will hover unconstitutionally like a dark cloud over everyone attempting to share anything online.

The CASE Act creates a “Copyright Claims Board” in the Copyright Office that can hear infringement claims by rightsholders seeking redress of no more than $30,000 per proceeding.

The CASE Act’s proponents claim this process is voluntary, but rather than requiring both parties to agree to it—an “opt-in” system—it presumptively binds everyone to the board’s decisions unless they “opt out.” That is, you must affirmatively say, in whatever manner the Copyright Office decides, that you do not want to participate in this system. You must do this every time you get a notice from the board if you don’t want to be subject to its decisions. If you don’t—if you ignore the notice in any way—you are on the hook for whatever the board decides: a decision it can make without you defending yourself, and one with very limited appeal options.

For many people, opting out will be the best option as this process does not have the protections and limitations that a court case has. For example, a bad decision on fair use in court is subject to multiple levels of appeal. Under the CASE Act, decisions made by claims officers are extremely difficult to appeal. Making matters worse, the penalties the Copyright Claims Board is authorized to impose are high and will be, especially at first, unpredictable.

Okay, How Do I Opt Out?

Sadly, we cannot tell you that yet. A lot of this is left up to the Copyright Office to determine. The Copyright Office has until December of 2021 to get this thing up and running (with an option to extend that deadline by 180 days). In that time, they have to establish regulations about opting out. We hope that the regulations and system will be simple, clear, and easy to use.

That also means that the Copyright Claims Board does not exist yet. It could come into existence at any point this year. At the latest, it will start hearing cases in mid-2022.

What Should I Do If I Get Anything Related to the CASE Act?

If you get a letter from someone threatening to take you to the Copyright Claims Board unless you pay them and you don’t know what to do, get in contact with us by emailing info@eff.org.

One of the bigger problems with the CASE Act—and there are many—is that anyone with money or access to other resources like lawyers will know how to opt out and will be able to decide if that is the right decision for them. Such individuals or companies are unlikely to miss a notice or forget to opt out. Regular people, however, will be vulnerable to copyright trolls, who will profit from people unintentionally forfeiting their rights or caving to threats like we describe above.

Is That All?

Sadly not! In addition to these laws, there is also a proposed wholesale change to the online copyright ecosystem called the “Digital Copyright Act” or DCA. A draft of it was released in late December 2020, and it is very bad for anyone who uses the Internet—worse, in many ways, than any other copyright proposal we’ve seen. We will continue to fight to keep these bad ideas out of the law, and we will need your help to do so.

Katharine Trendacosta

Online-Only Vaccine Distribution Will Leave Too Many Behind


As the rollout of COVID-19 vaccines has begun across the U.S., there have been numerous reports of people having trouble getting it—not just because of its limited availability, but also because some counties and states have chosen to require computer and Internet access to sign up. This is a dangerous path. Implementing online-only signup requirements effectively ensures that only residents with computer and Internet access can sign up to receive the vaccine. We implore anyone organizing vaccinations to provide alternative signup options. 


Restricting life-saving drugs, treatments, vaccines, or services to those who have Internet and computer access will inevitably leave some people behind—often the ones who need the services the most. For one, Internet access is not universal in the United States—upwards of 10% of Americans do not have Internet at all. Recent research shows that more than 25% of those 65 and older do not use the Internet. There are also gaps in use and availability for racial minorities. Given that frontline workers are more likely to be in a racial minority, to be elderly, or to live with someone who is, it is especially critical that access to the vaccine be offered through a variety of options that do not limit who can obtain it.

That’s not to say that online services themselves are a problem. They may work very well for many people. But it is simply unrealistic to expect everyone to be able to navigate the sometimes labyrinthine requirements for online vaccine signups. Digital literacy is unevenly distributed, with rates decreasing in populations that are older, Black, Hispanic, or foreign-born. High-income and well-educated Internet users are much more likely than others to use online government services. Having alternatives to an online signup is necessary to ensure equity, especially as many of the critical groups in need are also those who may benefit most from other options.

The pandemic has already been particularly detrimental for many of those same groups who lack reliable, high-speed Internet. Already-marginalized young people have been disconnected from education, rural and low-income communities have been separated from everything from grocery options to conducting their businesses, and seniors have been unable to obtain essential services. While alternative options for vaccine signups, such as a phone system, are easily overwhelmed, they are necessary for ensuring that some of the most vulnerable populations have access. We applaud all those working tirelessly to help end the COVID-19 pandemic, from researchers and scientists to those in government and health organizations, to nurses, doctors, and volunteers. But just as it takes a wide group of people working together to accomplish this monumental task, it will also take more than just the Internet to reach everyone. To distribute the vaccine to all who need it, we must meet people where they are, even if that’s offline.

Jason Kelley

Facebook's Latest Proposed Policy Change Exemplifies the Trouble With Moderating Speech at Scale

1 month ago

Hateful speech presents one of the most difficult problems of content moderation. At a global scale, it’s practically impossible. 

That’s largely because few people agree about what hateful speech is—whether it is limited to derogations based on race, gender, religion, and other personal characteristics historically subject to hate, whether it includes all forms of harassment and bullying, and whether it applies only when directed from a place of power to those denied such power. Just as governments, courts, and international bodies struggle to define hateful speech with the requisite specificity, so do online services. As a result, the significant efforts online services do undertake to remove hateful speech can often come at the expense of freedom of expression.

That’s why there’s no good solution to Facebook’s current dilemma of how to treat the term “Zionism.”

The trouble with defining hateful speech

Hateful speech presents one of the most difficult problems of content moderation by the dominant online services. Few of these services want to host hateful speech; at best it is tolerated, and never welcomed. As a result, there are significant efforts across services to remove hateful speech, and unfortunately, but not unexpectedly, often at the expense of freedom of expression. Online services struggle, as governments, courts and various international bodies have, to define hateful speech with the necessary degree of specificity. They often find themselves in the position of having to take sides in long-running political and cultural disputes. 

In its attempts to combat hate speech on its platform, Facebook has often struggled with nuance. In a post from 2017, the company’s VP of policy for Europe, Middle East, and Africa, Richard Allan, illustrated the difficulties of moderating with nuance on a global platform, writing:

People who live in the same country—or next door—often have different levels of tolerance for speech about protected characteristics. To some, crude humor about a religious leader can be considered both blasphemy and hate speech against all followers of that faith. To others, a battle of gender-based insults may be a mutually enjoyable way of sharing a laugh. Is it OK for a person to post negative things about people of a certain nationality as long as they share that same nationality? What if a young person who refers to an ethnic group using a racial slur is quoting from lyrics of a song?

Indeed, determining what is or isn’t “hate speech” is a hard problem: words that are hateful in one context are not in another, and words that are hateful among one population or in one area are not in others. And laws designed to protect minority populations from hateful words based on race or ethnicity have historically been used by the majority race or ethnicity to suppress criticism or expressions of minority pride. Most recently, this has been seen in the United States, as some have labeled the speech of the Movement for Black Lives as hate speech against white people.

Of course, making such determinations on a platform with billions of users from nearly every country in the world is much more complicated—and even more so when the job is either outsourced to low-paid workers at third-party companies or worse: to automated technology.

The difficulty of automating synonymy

The latest vocabulary controversy at Facebook surrounds the word “Zionist,” used to describe an adherent to the political ideology of Zionism—the historic and present-day nationalist movement that supports a Jewish homeland in Israel—but also sometimes used in a derogatory manner, as a proxy for “Jew” or “Jewish.” Adding complexity, the term is also used by some Palestinians to refer to Israelis, whom they view as colonizers of their land.

Because of the multi-dimensional uses of this term, Facebook is reconsidering the ways in which it moderates the term’s use. More specifically, the company is considering adding “Zionist” as a protected category under its hate speech policy.

Facebook’s hate speech policy has in the past been criticized for making serious mistakes, like elevating “white men” as a protected category, while refusing to do so for groups like “black children.”

According to reports, the company is under pressure to adopt the International Holocaust Remembrance Alliance’s working definition of anti-semitism, which functionally encompasses harsh criticism of Israel within it. This definition has been criticized by others in the Jewish community: a group of 55 scholars of anti-semitism, Jewish history, and the Israeli-Palestinian conflict called it “highly problematic and controversial.” A U.S.-based group called Jewish Voice for Peace, working in collaboration with a number of Jewish, Palestinian, and Muslim groups, has launched a campaign to urge Facebook not to include “Zionist” as a protected category in its hate speech policy.

Moderation at scale

While there is no denying that “Zionist” can be used as a proxy for “Jewish” in ways that are anti-semitic, the term’s myriad uses make it a prime example of what makes moderation at scale an impossible endeavor.

In a perfect world, moderation might be conducted by individuals who specialize in the politics of a given region or who have a degree in political science or human rights. In the real world, however, the political economy of content moderation is such that companies pay workers to sit at a desk and fulfill time-based quotas deciding what expression does or does not violate a set of constantly-changing rules. These workers, as EFF Pioneer Award recipient and scholar Sarah T. Roberts describes in her 2019 book, labor “under a number of different regimes, employment statuses, and workplace conditions around the world—often by design.”

While some proportion of moderation still receives a human touch, Facebook and other companies are increasingly using automated technology—even more so amid the global pandemic—to deal with content, meaning that a human touch isn’t guaranteed even when the expression at hand requires a nuanced view.

Imagine the following: A user posts an angry rant about a group of people. In this example, let’s use the term “socialist”—also contested and multi-dimensional—to illustrate the point. The rant talks about the harm that socialists have caused to the person’s country and expresses serious anger—but does not threaten violence—toward socialists.

If this rant came from someone in the former Eastern bloc, it would carry a vastly different meaning than if it came from someone in the United States. A human moderator well-versed in those differences and given enough time to do their job could parse that context. An algorithm, on the other hand, could not.
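To see why, consider a deliberately naive keyword filter of the sort a platform might automate. The snippet below is a hypothetical illustration written for this post, not a description of Facebook’s actual systems:

```python
# A deliberately naive, hypothetical keyword filter. It only sees words; it
# has no idea who is speaking, where they live, or what history they invoke.
ANGRY_TERMS = {"ruined", "destroyed", "hate"}
TARGET_TERMS = {"socialist", "socialists"}

def flags_post(text: str) -> bool:
    lowered = text.lower()
    mentions_target = any(term in lowered for term in TARGET_TERMS)
    sounds_angry = any(term in lowered for term in ANGRY_TERMS)
    return mentions_target and sounds_angry

# Invented example posts: one from the former Eastern bloc, one from the U.S.
post_from_prague = "The socialists destroyed my family's farm and I hate what they did to us."
post_from_ohio = "Socialists have ruined this country and I hate every one of them."

print(flags_post(post_from_prague))  # True
print(flags_post(post_from_ohio))    # True
```

Both posts trip the filter for the same reason, because the filter only sees words; the history, geography, and power relations that a trained human reviewer would weigh are invisible to it.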

Facebook has admitted these difficulties in other instances, such as when dealing with the Burmese term kalar, which when used against Rohingya Muslims is a slur, but in other cases carries an entirely innocuous meaning (among its definitions is simply “split pea”). Of that term, Richard Allan wrote:

We looked at the way the word’s use was evolving, and decided our policy should be to remove it as hate speech when used to attack a person or group, but not in the other harmless use cases. We’ve had trouble enforcing this policy correctly recently, mainly due to the challenges of understanding the context; after further examination, we’ve been able to get it right. But we expect this to be a long-term challenge.

Everything old is new again

This isn’t the first time that the company has grappled with the term “Zionist.” In 2017, the Guardian released a trove of documents used to train content moderators at Facebook. Dubbed the Facebook Files, these documents listed “Zionist” alongside “homeless people” and “foreigners” when dealing with credible threats of violence. Also considered particularly vulnerable were journalists, activists, heads of state, and “specific law enforcement officers.” The leaked slides were not only sophomoric in their political analysis, but provided far too much information—and complexity—for already over-taxed content moderation workers.

In Jillian C. York's forthcoming book, Silicon Values: The Future of Free Speech Under Surveillance Capitalism, a former operations staffer from Facebook’s Dublin office, who left in 2017 and whose job included weighing in on these difficult cases, said that the consideration of “Zionism” as a protected category was a “constant discussion” at the company, while another said that numerous staffers tried in vain to explain to their superiors that “Being a Zionist isn’t like being a Hindu or Muslim or white or Black—it’s like being a revolutionary socialist, it’s an ideology ... And now, almost everything related to Palestine is getting deleted.”

“Palestine and Israel [have] always been the toughest topic at Facebook. In the beginning, it was a bit discreet,” she further explained, with the Arabic-language team mainly in charge of tough calls, but later, Facebook began working more closely with the Israeli government (just as it did with the governments of the United States, Vietnam, and France, among others), resulting in a change in direction.

This story is ongoing, and it remains unclear what Facebook will decide. But ultimately, the fact that Facebook is in the position to make such a decision is a problem. While we hope that they will not limit yet another nuanced term that they lack the capacity to moderate fairly, whatever they choose, they must ensure that their rules are transparent, and that users have the ability to appeal—to a human moderator—any decisions that are made.

Jillian C. York

Incoming Biden Administration Officials Should Change Course on Encryption

1 month ago

To have privacy and security in the digital world, encryption is an indispensable ingredient. Without it, we’re all at risk of exploitation—by authoritarian governments, over-reaching police, nosy corporations, and online criminals.

But for some years now, federal law enforcement has paid lip service to “cybersecurity,” while actually seeking to make us all less secure. Officials like former Attorney General William Barr, former FBI Director James Comey, and numerous others have claimed that widespread encryption poses a severe danger to investigations because of the risk of “going dark,” and have called on technology companies to design secure systems that allow the government to access the contents of encrypted data upon request. But it just isn’t possible to combine secure, encrypted systems with a special “backdoor” for law enforcement to gain access, no matter what you call it.

There are no golden keys and no magic bullets. It’s time to have law enforcement and intelligence officials who recognize that and say it publicly. Unfortunately, key personnel that have already been selected for the new administration of President Biden don’t have an inspiring history on this topic.

Let’s start with FBI Director Christopher Wray, who is continuing on from the Trump Administration as part of a standard ten-year term. He’s stated many times that law enforcement should be granted exceptional access to encrypted conversations, and has described “user-controlled default encryption” as a “real challenge for law enforcement.”

Avril Haines, who has been confirmed as the new Director of National Intelligence, was part of a group of experts sponsored by the Carnegie Endowment for International Peace to jump-start a more “pragmatic and constructive” debate on encryption. The Carnegie group’s report was released in 2019, and for advocates of encryption and privacy, it was disappointing. Instead of acknowledging the technological realities of encryption, it punted on a number of important questions and offered a variant of a “key escrow” scheme for encrypted devices, a discredited approach that has been proposed, and rightly rejected, for decades.
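For readers unfamiliar with the term, key escrow means that every message key is also wrapped to a key held by a third party, typically the government or a custodian acting on its behalf. The sketch below is a simplified illustration using the PyNaCl library, not a rendering of the Carnegie group’s specific proposal, and it shows why the approach keeps getting rejected: the escrow key becomes a single point of failure for every message ever protected.

```python
# Simplified, hypothetical illustration of a key escrow design using PyNaCl
# (pip install pynacl). This is a conceptual sketch only, not the Carnegie
# group's proposal or any real product's design.
from nacl.public import PrivateKey, SealedBox
from nacl.secret import SecretBox
from nacl.utils import random

recipient_key = PrivateKey.generate()  # held by the intended recipient
escrow_key = PrivateKey.generate()     # held by the escrow agent, for everyone

def escrowed_encrypt(plaintext: bytes):
    # Encrypt the message under a fresh symmetric key...
    message_key = random(SecretBox.KEY_SIZE)
    ciphertext = SecretBox(message_key).encrypt(plaintext)
    # ...then wrap that key both to the recipient and to the escrow agent.
    for_recipient = SealedBox(recipient_key.public_key).encrypt(message_key)
    for_escrow = SealedBox(escrow_key.public_key).encrypt(message_key)
    return ciphertext, for_recipient, for_escrow

ciphertext, for_recipient, for_escrow = escrowed_encrypt(b"meet at 6pm")

# The recipient can read the message.
recipient_copy = SealedBox(recipient_key).decrypt(for_recipient)
print(SecretBox(recipient_copy).decrypt(ciphertext))

# So can anyone who steals, leaks, or compels the escrow private key,
# for every message ever sent. The "backdoor" cannot be limited to the
# officials it was built for.
escrow_copy = SealedBox(escrow_key).decrypt(for_escrow)
print(SecretBox(escrow_copy).decrypt(ciphertext))
```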

Lisa Monaco, President Biden’s nominee for Deputy Attorney General, was also a co-author of the Carnegie report. Merrick Garland, the nominee for Attorney General and also still unconfirmed, has no clear record on encryption, but has a long record as a federal prosecutor.

Regardless of officials’ backgrounds, a new presidential administration is a chance for a new path forward. We’ve already sent our transition memo to the Biden team, recommending that the new president adopt a formal policy in favor of encryption and disavow any attempts to weaken digital security, including introducing encryption backdoors. These key officials must repudiate their misguided statements that weakening encryption and computer security is needed for public safety. It isn’t, and it never has been.

Joe Mullin

Section 1201’s Harm to Security Research Shown by Mixed Decision in Corellium Case

1 month ago

Under traditional copyright law, security research is a well-established fair use, meaning it does not infringe copyright. When it was passed in 1998, Section 1201 of the Digital Millennium Copyright Act upset the balance of copyright law. Since then, the balance has been further upset as some courts have interpreted Section 1201 so broadly that it effectively eliminates fair use whenever you have to bypass an access control, like encryption, to make that fair use.

The District Court’s ruling in Apple v. Corellium makes this shift crystal-clear. Corellium is a company that enables security researchers to run smartphone software in a controlled, virtual environment, giving them greater insights into how the software functions and where it may be vulnerable. Apple sued the company, alleging that its interactions with Apple code infringed copyright and that it offered unlawful circumvention technology under Section 1201 of the Digital Millennium Copyright Act.

Corellium asked for “summary judgment” that it had not violated the law. Summary judgment is decided as a matter of law when the relevant facts are not in dispute. (Summary judgment is far less expensive and time-consuming for the parties and the courts, while having to go to trial can be prohibitive for individual researchers and small businesses.) Corellium won on fair use, but the court said that there were disputed facts that prevented it from ruling on the Section 1201 claims at this stage of the litigation. It also rejected Corellium’s argument that fair use is a defense to a claim under Section 1201.

Fair use is part of what makes copyright law consistent with both the First Amendment and the Constitution’s requirement that intellectual monopoly rights like copyright – if created at all – must promote the progress of "science and the useful arts."

We’re disappointed that the District Court failed to uphold the traditional limitations on copyright law that protect speech, research, and innovation. Applying fair use to Section 1201 would reduce the harm it does to fundamental rights.

It’s also disappointing that the provisions of Section 1201 that were enacted to protect security testing are so much less protective than traditional fair use has been. If those provisions were doing their job, the 1201 claim would be thrown out on summary judgment just as readily as the infringement claim, saving defendants and the courts from unnecessary time and expense.

We’ll continue to litigate Section 1201 to protect security researchers and the many other technologists and creators who rely on fair use in order to share their knowledge and creativity.

Related Cases: Green v. U.S. Department of Justice
Kit Walsh

No Secret Evidence in Our Courts

1 month ago

If you’re accused of a crime, you have a right to examine and challenge the evidence used against you. In an important victory, an appeals court in New Jersey agreed with EFF and the ACLU of NJ that a defendant is entitled to see the source code of software that’s used to generate evidence against them.

The case of New Jersey v. Pickett involves complex DNA analysis using TrueAllele software. The software analyzed a DNA sample obtained by swabbing a weapon, a sample that likely contained the DNA of multiple people. It then asserted that it was likely that the defendant, Corey Pickett, had contributed DNA to that sample, implicating him in the crime.

But when the defense team wanted to analyze how that software arrived at that conclusion, the prosecutors and the software vendor insisted that it was a secret. They argued that the defense team shouldn’t be allowed to look at how the software actually worked, because the vendor has a commercial interest in preventing competitors from knowing its trade secrets.

The court correctly ruled in favor of the defendant’s right to understand and challenge the software being used to implicate him. The code will not be publicly disclosed, but will be made available to the defense team. The defense needs this information about TrueAllele so that it can fairly participate in a procedural step known as a Frye hearing, used to ensure that a defendant’s rights are not undermined through the introduction of unreliable expert evidence.

In previous instances, defense experts have found fatal flaws in this kind of software. For instance, a complex DNA analysis program called “FST” was shown to have an undisclosed function in the code with the potential to tip the scales against a defendant. After the defense team found the issue, journalists at ProPublica persuaded the court to have the source code disclosed to the public.

This issue has arisen all around the country, and we have filed multiple briefs in different courts warning of the danger of secret software being used to convict criminal defendants. No one should be imprisoned or executed based on secret evidence that cannot be fairly evaluated for its reliability, and the ruling in this case will help prevent that injustice.

Kit Walsh

Despite Progress, Metadata Still Under "Second Class" Protection in Latam Legal Safeguards

1 month ago

This post is the fourth in a series about our new State of Communications Privacy Laws report, a set of questions and answers about privacy and data protection in Argentina, Brazil, Chile, Colombia, Mexico, Paraguay, Panama, Peru, and Spain. The research builds upon work from the Necessary and Proportionate Principles—guidelines for evaluating whether digital surveillance laws are consistent with human rights standards. The series’ first three posts were “A Look-Back and Ahead on Data Protection,” “Latin American Governments Must Commit to Surveillance Transparency,” and “When Law Enforcement Wants Your Private Communications, What Legal Safeguards Are in Place in Latin America and Spain?” This fourth post adds to the third one, providing greater insight on the applicable standards and safeguards regarding communications metadata in Latin America and Spain.

Privacy advocates are working to undo antiquated and artificial distinctions between privacy protections afforded to communications “content” (the words written or spoken) and those provided to “metadata”. Metadata, such as the identification of parties engaged in communication, IP addresses, locations, the time and duration of communications, and device identifiers, can reveal people’s activities, where they live, their relationships, habits, and other details of their lives and everyday routines. As EFF, Article19, and Privacy International stated in PIETRZAK v. Poland before the European Court of Human Rights, “‘metadata’ is just as intrusive as the content of communications and therefore must be given the same level of protection.” Yet domestic privacy laws often treat metadata as less worthy of protection compared to the contents of a communication. Such distinctions were based on artificial analogies to a time when telephone calls used pulse dialing, and personal computers were a rarity. 
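To make the point concrete, here is a toy sketch in Python, using call records invented purely for illustration, of how little metadata it takes to profile someone without ever touching the content of a call:

```python
# Toy example: inferring a profile from call metadata alone. The records are
# invented for illustration; no message content is needed or used.
from collections import Counter
from datetime import datetime

call_records = [
    # (caller, callee, start time, duration in seconds)
    ("alice", "oncology-clinic", datetime(2021, 1, 4, 9, 5), 540),
    ("alice", "oncology-clinic", datetime(2021, 1, 11, 9, 2), 600),
    ("alice", "union-organizer", datetime(2021, 1, 12, 22, 40), 1800),
    ("alice", "union-organizer", datetime(2021, 1, 19, 22, 35), 1500),
    ("alice", "family-member",   datetime(2021, 1, 17, 18, 0), 300),
]

# Who does Alice talk to, and when? Weekly morning calls to a clinic and long
# late-night calls to an organizer sketch her health, politics, and routines
# without reading a single word of what was said.
contacts = Counter(callee for _, callee, _, _ in call_records)
late_night_calls = [(callee, ts) for _, callee, ts, _ in call_records if ts.hour >= 22]

print(contacts.most_common())
print(late_night_calls)
```

Scaled up to months of records across an entire network, the inferences only get sharper.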

International human rights courts are starting to become more sophisticated about this. The EU Court of Justice has stated:

“that data, taken as a whole, is liable to allow very precise conclusions to be drawn concerning the private lives … such as everyday habits, permanent or temporary places of residence, daily or other movements, the activities carried out, the social relationships of those persons and the social environments frequented by them… In particular, that data provides the means … of establishing a profile of the individuals concerned, information that is no less sensitive, having regard to the right to privacy, than the actual content of communications.” 

Similarly, in the case Escher et al v. Brazil, the Inter-American Court of Human Rights recognized that the American Convention on Human Rights applies to both communications content and metadata. The Court has ruled:

… Article 11 applies to telephone conversations irrespective of their content and can even include both the technical operations designed to record this content …, or any other element of the communication process; for example, the destination or origin of the calls that are made, the identity of the speakers, the frequency, time and duration of the calls, aspects that can be verified without the need to record the content of the call …. In brief, the protection of privacy is manifested in the right that individuals other than those conversing may not illegally obtain information on the content of the telephone conversations or other aspects inherent in the communication process, such as those mentioned.

Nevertheless, protecting metadata as much as we protect content is still a major challenge. A good chunk of countries in our updated reports do broadly require a court order for the government to access metadata (for example, Mexico, Chile, Peru, and Spain). Still, others apply this protection to communications content, but not to data that identifies a communication (as in Panama and Paraguay). In Brazil, the level of protection for telephone communications metadata is still contentious, while the need for a warrant is clear for accessing Internet communications-related data.

Chile’s Criminal Procedure Code requires telecom companies to retain and disclose to prosecutors the list of authorized IP addresses and connection logs for at least a year. The same code, which also regulates telephone interception, doesn't detail the procedure companies should follow to make this metadata available. However, any disclosure should abide by Article 9 of the code, which requires prior judicial authorization for all proceedings that affect, deprive, or restrict the constitutional rights of the accused or a third party. Chilean telecom companies' practice is uneven, though. While they usually require a prior judicial order to hand over call records, GTD and VTR don't mention this requirement for other metadata in their law enforcement guidelines. Entel, Claro, WOM, and Telefónica make clearer commitments on this point. The effect of Derechos Digitales’ ¿Quién Defiende tus Datos? Chile report on these commitments cannot be overstated.

What About Subscriber Data?

Subscriber data includes a user’s name, address, and their device’s IMSI or IMEI (subscriber and device identification numbers). Latam laws generally treat it like “traffic data” or give it less protection. Depending on the jurisdiction, traffic data can include IP addresses, call and message records, or location data. Spain, for example, requires prior judicial authorization for government access to traffic data but has certain legal exceptions for specific subscriber data. In Colombia, a 2008 resolution requires telecom service providers to allow the Directorate of Criminal Investigation of the National Police (Dijín, in Spanish) to make a remote connection to obtain subscribers’ names, home addresses, and mobile numbers. Companies must grant Dijín the ability to carry out individualized “queries” for each subscriber, providing a username and password for this purpose. 

In Mexico, Metadata Equals Content

In Mexico, legal rules explicitly give equal protection to data that identifies communications and the content of communications. Law enforcement also needs a prior judicial order to access stored metadata. In a lawsuit filed by R3D.mx, the Mexican Supreme Court ruled in 2016 that metadata is equally protected by the Constitution as the content of communications. Unfortunately, the court did not overturn retention mandates compelling telephone operators and ISPs to retain massive amounts of metadata, as the EU Court of Justice did with the EU Data Retention Directive in 2014. The EU Court ruled that compelling ISPs to retain customer communications data in bulk for up to two years to “prevent” and “detect” serious crimes breached users’ rights to privacy and data protection under Articles 7 and 8 of the EU Charter of Fundamental Rights.

Argentina’s Supreme Court has also ruled that “the communications … and everything that individuals transmit through the pertinent channels are part of the sphere of personal privacy,” and enjoy constitutional privacy protections. However, Telefónica’s transparency report for Argentina casts doubt on whether authorities follow this ruling, giving the impression that metadata is being handed over to authorities without a prior judicial order.

As the previous examples show, courts play a pivotal role in applying constitutional and legal safeguards in a manner consistent with the evolving nature of digital communications and the simultaneously in-depth and wide reach of the data they yield. However, a ruling of Paraguay’s Supreme Court in 2010 authorized prosecutors to directly request metadata from telecom companies without a judicial order. This came despite a provision in the Telecommunications Law asserting that the inviolability of communications ensured in the Constitution refers not only to the content itself but also to what indicates the existence of communication, which would cover traffic data. In Panama, it is the country’s extensive Data Retention Law that allows prosecutors to directly request traffic and subscriber data from ISPs. Among other uses, the retained data can enable authorities to identify and track the origin of communications, establish the time, date, and duration of communications as well as the location of the mobile device and the cell where the communication originates.

Location Data Deserves Particular Attention

Location data can reveal intimate details of daily life, including who we see, where we go, when we visit the doctor or a self-help group, and whether we participate in protests or engage in political activity. Many communication services and apps gather our location data on a nearly continuous basis over long periods of time. Our privacy is threatened by government seizure of our location data as much as it is threatened by government seizure of the content of our communications. Despite this, stored location data is usually treated like other metadata (and thus may receive limited legal protection in many jurisdictions). 

Specific laws authorizing real-time location tracking are found in Spain, Colombia, Mexico, and Peru. Panama’s Criminal Procedure Code refers to “satellite tracking.” The provisions (except in Colombia) generally require a previous judicial order, while Spain, Mexico, and Peru provide an exception in certain emergency situations. Brazil’s Criminal Procedure Code has a specific rule by which prosecutors and police authorities may request a judicial order to compel telecom companies to reveal the location of victims or suspects of an ongoing human trafficking crime. Yet, if the judge doesn’t decide within 12 hours, authorities are allowed to demand the data directly. This provision is currently under constitutional challenge before Brazil’s Supreme Court. 

In Peru, Legislative Decree 1182 grants the country’s specialized police investigation unit power to request from telecom operators access to real-time phone or electronic device location data without a warrant when three requirements are met simultaneously: the crime is flagrant, i.e., caught in progress (delito flagrante, in Spanish); the punishment for the crime under investigation is greater than four years of imprisonment; and access to this information is necessary to the investigation. Judicial review is performed after the police have already accessed the data. The decree requires a judge to review whether the real-time access was legal within 72 hours of the location data being accessed. The process by which device location data is turned over to police has not been made public. Peruvian news reported that, to implement Legislative Decree 1182, the Ministry of the Interior signed a secret protocol with ISPs in October of 2015 for police access to location data. As of 2020, the document remains classified. Seeking to shed at least some light on how the measure is used, Peru’s digital rights group Hiperderecho filed a set of FOIA requests in 2016. The responses have been incomplete and delivered in a way that has revealed few meaningful answers. Peruvians need far more transparency about this location surveillance program.

Reverse Searches: From Locations to Suspects?

A troubling location data investigative practice on the rise relates to backward, or reverse, requests. Rather than starting with a suspect, an account, or a specific identifier (or a few of them), the request aims to search for all active devices within a certain geographic area during a particular period of time. The investigation sweeps in a massive amount of data from the devices of people who happened to be in the area around the time of the crime, regardless of whether they are linked to criminal activity. 
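To make concrete how indiscriminate these requests are, here is a toy sketch using invented location records; real geofence demands run over far larger datasets held by carriers or by Google, but the selection logic amounts to little more than a bounding box and a time window:

```python
# Toy illustration of a reverse ("geofence") search over invented location
# records. Everyone whose device pinged inside the box during the window is
# swept in, whether or not they had anything to do with the crime.
from datetime import datetime

pings = [
    # (device id, latitude, longitude, timestamp), all values invented
    ("device-A", -33.4370, -70.6340, datetime(2019, 10, 18, 19, 15)),
    ("device-B", -33.4372, -70.6338, datetime(2019, 10, 18, 19, 20)),
    ("device-C", -33.4371, -70.6341, datetime(2019, 10, 18, 20, 5)),
    ("device-D", -33.5000, -70.7000, datetime(2019, 10, 18, 19, 30)),  # elsewhere
]

# Bounding box around the area of interest and an evening time window.
LAT_MIN, LAT_MAX = -33.4380, -33.4360
LON_MIN, LON_MAX = -70.6350, -70.6330
START, END = datetime(2019, 10, 18, 18, 0), datetime(2019, 10, 19, 0, 0)

swept_in = [
    device for device, lat, lon, ts in pings
    if LAT_MIN <= lat <= LAT_MAX and LON_MIN <= lon <= LON_MAX and START <= ts <= END
]
print(swept_in)  # ['device-A', 'device-B', 'device-C'], mostly bystanders
```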

Early last year, Chile’s media outlets reported that prosecutors asked telecom companies to turn over all mobile phone numbers connected to towers and base stations near five subway stations in Santiago between 6:00 p.m. and midnight on a particular day. The requests were part of an investigation into disorder sparked by fare hikes that led to an intense period of social unrest and protests. According to information released, prosecutors asked a court to order the search after the mobile network company WOM refused to comply with a direct request. The rest remained silent, and it is unclear whether they complied or how much information they provided.

In Brazil, the Superior Court of Justice (STJ) has upheld a judicial request for Google to turn over data, such as IP addresses, of all users who, during a 15-minute time period on December 2, 2018, passed through a toll gate on an expressway that runs through Rio de Janeiro. On that day, cameras at the toll gate identified the car used in an ambush that killed councilwoman and human rights advocate Marielle Franco, and her driver, Anderson Gomes, in March 2018. The crime has sparked outrage as a dire demonstration of political violence. Suspects in the crime are in custody, but investigations have yet to identify who ordered the attack. The STJ's ruling in August 2020 was followed by Google's appeal to Brazil's Constitutional Court. A thorough examination of necessary and proportionate standards is needed to guard against authorities abusing the court ruling in the future.

In the U.S., reverse searches are often called geofence warrants. In one case involving searches of historical mobile phone location information held by Google, as we’ve noted, the warrants follow a multi-stage process. It starts with compelling Google to provide anonymized location data for all devices that reported their location within a specific area. It ends with prosecutors requiring Google to turn over information identifying Google accounts for specific devices located within the geofence area. Recent U.S. federal magistrate judge opinions have held these warrants violate the U.S. Constitution’s Fourth Amendment probable cause and particularity requirements. Arguments raised in Latin America closely align with the case EFF and others have been making against geofence warrants in the U.S.

 

Katitza Rodriguez

When Law Enforcement Wants Your Private Communications, What Legal Safeguards Are in Place in Latin America and Spain?

1 month ago

This post is the third in a series about our new State of Communications Privacy Laws report, a set of questions and answers about privacy and data protection in Argentina, Brazil, Chile, Colombia, Mexico, Paraguay, Panama, Peru, and Spain. The research builds upon work from the Necessary and Proportionate Principles—guidelines for evaluating whether digital surveillance laws are consistent with human rights safeguards. The series’ first two posts were “A Look-Back and Ahead on Data Protection,” and “Latin American Governments Must Commit to Surveillance Transparency.” This third post provides an overview of the applicable standards and safeguards for criminal investigations in eight Latin American countries and Spain.

In December 1992, a Paraguayan lawyer discovered the so-called “Terror Archive,” an almost complete record of the interrogations, torture, and surveillance conducted during the 35-year military dictatorship of Alfredo Stroessner. The files reported details of “Operation Condor,” a clandestine program between the military dictatorships in Argentina, Chile, Paraguay, Bolivia, Uruguay, and Brazil during the 1970s and 1980s. The military governments of those nations agreed to cooperate in sending their teams into other countries to track, monitor, and kill their political opponents. The Terror files listed more than 50,000 deaths and 400,000 political prisoners throughout Argentina, Bolivia, Brazil, Chile, Paraguay, Uruguay, Colombia, Peru, and Venezuela. Stroessner’s secret police used informants, cameras with telephoto lenses, and wiretaps to build a paper database of everyone who was viewed as a threat, plus their friends and associates. The Terror Archive shows how far a government can sink when unchecked by judicial authorities, public oversight bodies, and an informed public. As we have written, Latin America abounds with recent abuses of surveillance powers, and many countries are still struggling with a culture of secrecy.

Civil society around the world has been fighting to ensure strong legal safeguards are established and enforced, including those described in the Necessary and Proportionate Principles. Our State of Communication Privacy Laws report builds upon this work to provide an overview of the legal standards and safeguards that apply today for criminal investigations in eight Latin American countries and Spain.

Significant Protections Exist Against Intercepting and Listening In On Conversations

The most common method of communications surveillance is wiretapping or similar forms of intercepting communications. Most countries’ laws and legal systems explicitly address this intrusion and place limits on how and when it can occur. In Brazil, Colombia, Mexico, Panama, Paraguay, Peru, and Spain, the constitution directly states that private communications may not be breached without a court order. Mexico’s and Panama’s constitutions also protect the secrecy of private communications, establishing that violations are subject to criminal penalties. Beyond constitutional protections, there are usually criminal statutes against unauthorized interception, such as in Brazil, Peru, and Spain. In a few countries, there is a separate emergency track where judicial review can come after the interception; Peru, Spain, and Mexico allow this emergency scenario.

Even though judicial orders are usually needed to intercept (or “intervene in”) private communications, the rules criminal courts are expected to apply when granting these orders can vary hugely from country to country. If you’ve followed EFF’s privacy litigation, you’ve seen how big a deal these variations in rules can be and how much controversy there is about how they apply to particular technologies, from geofence warrants to warrants targeting identifiers like addresses rather than people.

Fans of U.S. privacy law may recall the notion of “specificity” that grew out of the desire to prevent “general warrants” that allow authorities to access private information untethered to the specific target or purpose of an investigation. In Brazil, Chile, Mexico, Spain, and Peru, an interception must target specific persons, lines, or devices. For Spain, this identification is required, provided the data is known; Brazil’s law waives the identification when it’s shown to be “manifestly impossible” to obtain. Both Chile’s and Brazil’s laws also add a reasonable suspicion requirement.

Criminal procedural laws also establish that intercepting communications is an exceptional measure to be used in limited circumstances and not in every investigation. Chile, Brazil, Spain, and Peru limit interception to investigations of serious crimes, punishable with higher penalties. Constitutional protections in Brazil, Peru, Mexico, Paraguay, and Spain require that any intervention measure be necessary or indispensable. Chile’s Constitution requires that the law establish the “cases and forms” in which private communications may be intercepted.

Argentina’s and Panama’s laws specify that interception is an exceptional measure, but they are somewhat unclear about what is meant by “exceptional.” The Argentine Supreme Court has helped clarify the issue by applying to communications interception its existing case law on the opening of letters and other correspondence: interception must be authorized by law, adequate, and strictly necessary to achieve a legitimate aim. 

In Spain, “communications investigative measures” must comply with principles of relevance, adequacy, exceptionality, necessity, and proportionality. Such standards are key to setting boundaries and providing guidance on how judges assess data requests. Requests tend to come in urgent situations; applying the principles helps avoid responding in disproportionate or unaccountable ways. International human rights law and the Inter-American standards, binding for several countries in the region, also reinforce the principles that guide judges’ scrutiny of interception orders.

Stored Communications Are Protected

The vast majority of legal systems featured in the reports require judicial order for law enforcement access to stored communications—whether following interception procedures, “search and seizure”-like rules, or other constitutional provisions. Unfortunately, this can be contentious for stored data not regarded as “correspondence” or for communications content contained on devices accessed by law enforcement authorities in situations where a search warrant is usually waived.

In Brazil, the Marco Civil legislation approved in 2014 requires a judicial order to access both stored and ongoing Internet communications, establishing a path to override the interpretation that the constitutional protection afforded to communications secrecy (art. 5, XII) covers the “communication” of data but not the data itself. In 2012, Brazil's Supreme Court (STF) had followed that interpretation, drawing a distinction between telephone conversations and stored call records in order to deem lawful the warrantless identification of another suspect by police officers checking the records on a device found on an arrested person. However, in 2016, the country’s Superior Court of Justice (STJ) relied on the constitution’s privacy protection clause (art. 5, X) and the Marco Civil to rule that judicial orders are required before accessing WhatsApp messages stored on a device obtained by police when its owner is caught in the act of committing a criminal offense. Late last year, it was the turn of the Supreme Court (STF) to overrule its 2012 precedent. As the presiding Justice stressed:

[n]owadays, these devices are able to record the most varied information about their users. Mobile phones are the main form of access for Brazilians … to the internet. This reason alone would be enough to conclude that the rules on data protection, data flows, and other information contained in these devices are relevant.

In the U.S., EFF worked to help the courts correctly apply the “search incident to arrest” doctrine to new technologies. This doctrine sometimes allows police to search an object, like a bag, simply because the person carrying it was arrested, even if there was no cause to believe it contained something suspicious. In Riley v. California (2014), the U.S. Supreme Court held that this doctrine does not justify a search of an arrestee’s phone. EFF filed an amicus brief in support of this holding. 

 In Panama, both the Criminal Procedure Code and the law on organized crime investigations require a judicial order before seizing correspondence or private documents, including electronic communications. Data stored in seized electronic devices, however, are only subject to subsequent judicial review. While the deadline for this review in the Criminal Procedure Code is ten days, it may take up to 60 days for organized crime investigations. The accused and their attorneys will be invited to take part in the analysis of the data contained in the devices, but the examination can proceed without their participation. 

Drawing the line between “correspondence” and other kinds of electronic data to determine whether stronger or weaker protections apply is challenging in the digital context. Moreover, the attempt to draw this distinction commonly overlooks the ways that “non-content” information, such as messaging history or location data, can also reveal intimate and sensitive details deserving similar protection. Here we go deeper into this issue. Yet, even when focusing on communications content, there are serious concerns about the level of privacy protections in the region.

Colombia’s Concerning Post-Interception Judicial Review Standard

We might imagine that all countries now follow this established pattern: a law enforcement officer seeks permission from a judge to perform some kind of otherwise-prohibited investigative action; the judge considers the request, and issues an order, and then the officer proceeds according to the parameters approved by the judge. 

But Colombia, surprisingly, sometimes does things backward, with a judge retroactively approving investigatory measures that have already been taken. The Colombian Constitutional Court has stated that, as a general rule, a judge’s prior authorization is necessary if an investigation will interfere with the fundamental rights of the person targeted. There exists an exception to this rule: when the law gives the Office of the Attorney General power to interfere with an individual’s rights for the purpose of collecting information relevant to a criminal investigation, actions taken are subject to after-the-fact judicial review. This exception must be strictly limited to searches, house visits, seizures, and interceptions of communications. Yet the definition of interception in an administrative regulation includes both content and related metadata, leveling the protection granted to communications data down rather than up.

On the other hand, the Colombian Constitutional Court has also held that law enforcement practices, like selectively searching a database for an accused person’s confidential information, require prior judicial authorization. In the U.S., law enforcement generally cannot search a database without a prior warrant, subject to specific exceptions like consent and emergency. EFF’s work in the U.S. focuses on ensuring warrants are necessary and proportionate in scope. For example, if the only thing relevant to an investigation is the emails to person X during week Y, the warrant should limit the search of the database to just that scope. 

Government Authorities' Direct Access To Intercepted Communications

Direct access mechanisms are arrangements in which law enforcement or intelligence authorities have a “direct connection to telecommunications networks in order to obtain digital communications content and data (both mobile and internet), often without prior notice or judicial authorization and without the involvement and knowledge of the Telco or ISP that owns or runs the network.” The European Court of Human Rights ruled that direct access is “particularly prone to abuse.” The Telecommunications Industry Dialogue has explained that some governments require such access as a condition for operating in their country: 

“some governments may require direct access into companies’ infrastructure for the purpose of intercepting communications and/or accessing communications-related data. This can leave the company without any operational or technical control of its technology. While in countries with independent judicial systems actual interception using such direct access may require a court order, in most cases independent oversight of proportionate and necessary use of such access is missing.”

As we’ve written before, EFF and our partners have urged private companies to issue transparency reports, explaining how and on what scale they turn users’ private information over to government entities. This practice is growing all around the world. We found one surprise in Millicom’s and Telefónica’s transparency reports. Both global telecom companies currently operate in Colombia, and both disclose that they cannot report the number of times communications on their mobile lines were intercepted, because government authorities perform the procedure directly in their systems without the companies’ help or knowledge. 

According to Millicom's report, direct access requirements for telecom companies' mobile networks in Honduras, El Salvador, and Colombia prevent the ISPs from knowing how often or for what periods interception occurs. Millicom reports that in Colombia the company is subject to strong sanctions, including fines, if authorities find it has gained information about interceptions taking place in its systems. This is why Millicom does not possess information regarding how often and for what periods of time communications are intercepted. Millicom states that a direct access requirement also exists in Paraguay, but the procedures there allow the company to view the judicial orders required for government authorities to start an interception. To the best of our knowledge, nothing in Paraguay's legislation explicitly and publicly compels telecom providers to provide direct access.

Modern Surveillance Techniques

The European Court of Human Rights, in S. and Marper v. the United Kingdom, has observed that the right to privacy would be unacceptably weakened if the use of “modern scientific techniques in the criminal-justice system were allowed at any cost and without carefully balancing the potential benefits of the extensive use of such techniques against important private-life interests.” We couldn’t agree more. 

Unfortunately, the region is plagued by improper access to people’s communications data and a culture of secrecy that persists even when authoritarian regimes are no longer in place. From recurrent unlawful wiretaps to the unfettered use of malware, the extensive evidence of improper surveillance tactics used by governments in the region is likely the tip of the iceberg.  

In our research, we haven’t seen any specific legislation authorizing law enforcement use of cell-site simulators (CSSs). CSSs, often called IMSI catchers or Stingrays, masquerade as cell phone towers and trick our phones into connecting to them so police can track down a target. In the United States, EFF has long opposed the government use of CSSs. They are a form of mass surveillance, forcing the phones of countless innocent people to disclose information to the police in violation of the U.S. Constitution. They disrupt cellular communications, including 911 calls. They are deployed disproportionately within communities of color and poorer neighborhoods. They exploit vulnerabilities in the cellular communication system that the government should fix instead of exploit. In the U.S., EFF argues that the government should not acquire IMSI catchers, but if the government does so, they should not be used for anything other than locating a particular phone. They should require a warrant, be used only for violent felonies, and require the immediate deletion of data not related to the target. There also must be oversight mechanisms to ensure the tool is used in compliance with the proportionality principle.

Regarding law enforcement use of malware, except for Spain, no other country in our research clearly authorizes malware as an investigative tool in criminal investigations, despite governments’ widespread use of such technology. Malware, or malicious software, seeks to gain access to or damage a computer without the owner’s consent. Malware includes spyware, keyloggers, viruses, worms, or any type of malicious code that infiltrates a computer. Malware is known to be used, for example, in Mexico, Panama, Venezuela, Colombia, Brazil, Chile, Ecuador, Honduras, and Paraguay with insufficient legal authorization. In certain countries, the law allows some authorities to seek judicial authorization to intervene in private communications for specific purposes, and that may be the legal authority some governments rely on to deploy malware. 

For example, in Paraguay, Article 200 of the Criminal Procedure Code states a judge may authorize the intervention of the communication "irrespective of the technical means used to intervene it." However, constitutional protections and international human rights law balance such interference with the right to privacy. Any intervention request must comply with a three-part test: be prescribed by law; have a legitimate aim; and be necessary and proportionate. Limitations must also be interpreted and applied narrowly. In Guatemala, the Control and Prevention of Money Laundering Law authorizes the use of any technological means available for the investigation of any offense to facilitate the clarification of a crime.

In the United States, EFF’s work on malware has focused on the deployment of government hacking tools in violation of the Fourth Amendment. The FBI’s use of remote hacking tools against all visitors to a website containing child exploitation materials, in particular, points to the need for limits on hacking authority to ensure such searches meet the probable cause and particularity requirements of the Fourth Amendment.

Conclusion


Police and intelligence agencies should never have direct and unrestricted access to communications data. Any access to communications data should be prescribed by law through clear and precise mandates and subject to specific conditions: access is necessary to prevent a serious crime; independent judicial authorization is obtained; a factual basis for accessing the data is provided; access is subject to independent and effective oversight; and users are notified, especially in cases of secret surveillance (even if after the fact). We also believe that any legal framework for access to communications data should include special protections for the communications data of civil society organizations, similar to those enjoyed by lawyers and the press. Otherwise, rule-of-law and human rights protections will continue to fall short. Unrestricted access to communications data or any personal data, whether via direct access to networks, malware, or IMSI catchers, is a serious human rights violation. In Schrems I, the Court of Justice of the European Union made clear that legal frameworks that grant public authorities access to data on a generalized basis compromise "the essence of the fundamental right to private life." In other words, any law that compromises the essence of the right to private life can never be proportionate or necessary. 

Katitza Rodriguez

San Francisco Takes Small Step to Establish Oversight Over Business Association Surveillance

1 month ago

The San Francisco Board of Supervisors last week voted unanimously in favor of requiring all special business districts—such as the Union Square Business Improvement District (USBID)—to bring any new surveillance plans to the Board before adopting new technologies. 

The resolution—passed in the wake of an EFF investigation, a lawsuit brought by local activists, and a sustained local coalition effort challenging police use of the USBID camera network to monitor last summer's protests—is non-binding, and it will be up to City agencies to determine whether and how to carry out the request. We'll be watching to see if the city follows through, but one thing we already know: much more must be done to address the problem.

Under San Francisco's surveillance oversight ordinance, the San Francisco Police Department and other City agencies are generally forbidden from using new surveillance technology without Board approval and a public process. Despite this requirement, during the height of the Black-led protests against police violence, the USBID provided SFPD with live access to its network of hundreds of cameras. Police investigators also requested and received a "data dump" of all images from certain cameras covering large portions of the protest. On behalf of three protesters, EFF and the ACLU of Northern California filed a lawsuit seeking a court order to stop the SFPD from acquiring, borrowing, or using non-city networks of surveillance cameras absent prior Board approval. The City's resolution rightfully calls for more surveillance transparency and accountability from business improvement districts, but the onus is still on the City to ensure its departments fully comply with the surveillance ordinance.

Instead, this resolution focuses on the troubling growth of these public-private camera networks. Over the last few years, several San Francisco business improvement districts and community benefit districts (essentially non-profits approved by the city to collect and spend property assessments) have accepted money from private donors to build out camera networks equipped with advanced video analytic capabilities. Another—the Castro Community Benefit District—has been weighing its own surveillance camera network, but delayed its vote on the issue after an SF Examiner investigation found that police had also accessed BID cameras in order to monitor San Francisco’s 2019 Pride Parade as well as Super Bowl celebrations.

As a non-binding resolution, last week's vote only urges the Office of Workforce and Economic Development and San Francisco Public Works to put these new requirements in motion. The Board also ordered the agencies to send a copy of the resolution to all 17 of the City's business improvement/community benefit districts to put them on notice. The resolution was authored by Supervisors Aaron Peskin, Gordon Mar, and Matt Haney.

"The City has an interest in ensuring that the Districts remain accountable and transparent to the public, and it is in the best interest of the public that the City should hold the Districts to the same good government standards and public health and safety laws set forth in the Charter and Municipal Code of the City and County of San Francisco," the resolution says. "There is a particular interest in transparency, accountability, and good government with respect to Districts’ acceptance of private contributions and with respect to Districts’ use of surveillance technology."

 

Related Cases: Williams v. San Francisco
Dave Maass

Can Government Officials Block You on Social Media? A New Decision Makes the Law Murkier, But Users Still Have Substantial Rights

1 month ago

This blog post was co-written by EFF Legal Fellow Houston Davidson.

It’s now common practice for politicians and other government officials to make major policy announcements on Twitter and other social media forums. That’s continuing to raise important questions about who can read those announcements, and what happens when people are blocked from accessing and commenting on important social media feeds. A new decision out of a federal appeals court affirms much of the public’s right to read and reply to these government communications, but muddies one particular, commonly occurring issue.

This case, Campbell v. Reisch, involves a Twitter account belonging to Missouri state representative Cheri Reisch. In 2018, Reisch blocked her constituent, Mike Campbell, after Campbell retweeted a comment critical of her. Campbell filed a lawsuit arguing that the First Amendment protects his right to access information from Reisch’s account, and asked the court to order Reisch to unblock him. After the district court ruled in Campbell’s favor, Reisch appealed to the Eighth Circuit, claiming that Campbell had no First Amendment right to follow her account because it was her personal campaign account and not her official government account. EFF filed an amicus brief in support of Campbell, as did the Knight First Amendment Institute.

The Eighth Circuit joined other federal appeals courts that have addressed similar cases in acknowledging that the public has a right to access official communications on social media. Just as the Second Circuit found in Knight First Amendment Institute v. Trump, the Eighth Circuit concluded that even government officials’ nominally private accounts can in fact be used for official purposes—in which case it would violate the First Amendment for these accounts to block followers based on their viewpoints. The Eighth Circuit made it clear that “the essential character of a Twitter account” is not “fixed forever,” explaining that “[a] private account can turn into a governmental one if it becomes an organ of official business.”

However, these cases have also concluded that not every social media account maintained by a governmental official would necessarily be an “official” account. Reisch argued the account in this case was for her campaign, and thus maintained by her as a private citizen, not as a governmental official. Unfortunately, the Eighth Circuit agreed with Reisch, and concluded that the way Reisch used her Twitter feed was not enough to transform it into a government account.

We find this too narrow a definition of a “governmental account,” and Judge Kelly, dissenting from her colleagues, agreed. As Judge Kelly details, once she was elected, Reisch used her account to report on new laws, provide information about the state legislature’s work, and interact with constituents. She also clothed the account “in the trappings of her public office,” including by describing herself in her bio as “MO State Rep 44th District,” and used Twitter’s blocking feature to silence criticism of her conduct of official duties or fitness for office. Judge Kelly thus would have found a First Amendment violation.

While the court’s reasoning on this point is questionable, the decision is limited to the facts of this particular official’s account and doesn’t affect those who wish to engage with accounts more commonly used to conduct official business.

We receive frequent requests for legal help from users who find themselves blocked by governmental officials, and the law in this area remains strong for those who want to follow or reply to governmental officials. We will work to make sure that this new decision remains limited to its unique facts. But users who are blocked from accounts that are arguably “personal” or “campaign” accounts should understand that their First Amendment rights may turn on the specific ways in which the official uses the account, and that determination can sometimes be hard to predict.

Rebecca Jeschke

Amazon Ring’s End-to-End Encryption: What it Means

1 month ago

Almost one year after EFF called on Amazon’s surveillance doorbell company Ring to encrypt footage end-to-end, it appears the company is starting to make this change. The call was a response to a number of problematic and potentially harmful incidents, including larger concerns about Ring’s security and reports that employees were fired for watching customers’ videos. Now, Ring is finally taking a necessary step—making sure that the transmission of footage from your Ring camera to your phone cannot be viewed by others, including while that footage is stored on Amazon’s cloud.

Ring should take the step to make this feature the default, but for the time being, you will still have to turn encryption on.

You can read more about Ring’s implementation of end-to-end encryption in Ring’s whitepaper.

How to Turn it On

Amazon is currently rolling out the feature, so it may not be available to you yet. When it is available for your device, you can follow Ring’s instructions. Make sure to note down the passphrase in a secure location such as a password manager, because it’s necessary to authorize additional mobile devices to view the video. A password manager is software that encrypts a database of your passwords, security questions, and other sensitive information, and is protected by a master password. Some examples are LastPass and 1Password.

How it Works 

Videos taken by the Ring device for either streaming or later viewing are end-to-end encrypted such that only mobile devices you authorize can view them. As Amazon itself claims, “[w]ith video E2EE, only your enrolled mobile device has the special key needed to unlock these videos, designed so no one else can view your videos -- not even Ring or Amazon.”

The security whitepaper gives the details of how this is implemented. Your mobile device locally generates a passphrase and several keypairs, which are stored either locally or encrypted on the cloud in such a way that the passphrase is needed to decrypt them; this is what allows you to enroll additional mobile devices. The Ring device then sets up a local WiFi network, which the mobile device connects to. The enrolled mobile device’s public key information is sent over that connection and subsequently used to encrypt videos before they are sent over the Internet.
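
To make the key flow concrete, here is a minimal sketch in Python (using the cryptography library) of a generic hybrid scheme of this kind: the camera encrypts each clip with a fresh symmetric key and then wraps that key to the enrolled phone’s public key. This is an illustration under assumptions, not Ring’s actual code; the algorithms, key sizes, and key-management details in Ring’s whitepaper differ, and the passphrase-protected key backup used to enroll additional phones is not modeled here.

import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# Enrollment (on the phone): generate a keypair; in this sketch the public
# key is what would travel to the camera over the local setup network.
phone_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
phone_pub = phone_key.public_key()
oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

# Capture (on the camera): encrypt the clip with a fresh AES-GCM key, then
# wrap that key to the phone's public key. Only the ciphertext, nonce, and
# wrapped key would be uploaded; the cloud never sees clip_key.
video = b"...raw video bytes..."
clip_key = AESGCM.generate_key(bit_length=256)
nonce = os.urandom(12)
ciphertext = AESGCM(clip_key).encrypt(nonce, video, None)
wrapped_key = phone_pub.encrypt(clip_key, oaep)

# Playback (on the phone): unwrap the clip key and decrypt the footage.
recovered_key = phone_key.decrypt(wrapped_key, oaep)
assert AESGCM(recovered_key).decrypt(nonce, ciphertext, None) == video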

To break the system, someone would have to gain access to the temporary local network you created while you were doing initial setup, or you would have to approve adding them as an authorized user by entering the passphrase while setting up an additional mobile device.

So long as the implementation in the software matches the whitepaper specification and footage is not escrowed in any other way, we have high hopes for the encryption scheme Ring has devised. It may be close to a best-practice implementation of this kind of technology. 

What it Means for Privacy

Ring’s relationship to law enforcement has long been a concern for EFF. Ring now has over a thousand partnerships with police departments across the country that allow law enforcement to request, with a single click, footage from Ring users. When police are investigating a crime, they can click and drag on a map in the police portal and automatically generate a request email for footage from every Ring user within that designated area. 

Without end-to-end encryption, what happens when Ring users refuse to share their footage has been a major concern. Even if a user refuses to share, police can still bring a warrant to Amazon to obtain it. That means users’ video and audio could end up contributing to investigations they wish they had not facilitated—like immigration cases or police spying on protests—without the users ever knowing it had happened.

This access is made possible because Ring footage is stored by Amazon on Amazon servers. The end-to-end encryption model described in Ring’s whitepaper should cut off this access. If your footage on Amazon’s servers is encrypted and only your phone has the keys, then police would have to bring a warrant directly to you for your footage, rather than going behind your back and having Amazon share the video. Contrary to what law enforcement officials may claim, therefore, end-to-end encryption will not put these videos completely off limits to their investigations.

Unanswered Questions 

One question that remains unanswered is whether Ring’s encryption will block other companies’ ability to transmit live-streamed footage from Ring cameras to police. In November 2020, local media reported that Jackson, Mississippi would start a pilot program with the help of a company called PILEUM/Fusus that would allow police to live-stream footage from the security cameras of consenting participants. Although camera registries and shared access to security cameras are not novel, what was particularly troubling about this was the insistence that this program would allow people with networked home security devices, including Ring cameras, to transmit their live footage straight to local police surveillance centers.

Ring reached out to a number of organizations, including EFF, to reaffirm that they are in no way involved with this pilot program. Fusus technology reportedly works by installing a “Fusus core” on your local network, which can supposedly find and transmit any live footage on your network, including Ring cameras. 

These changes raise the question of whether turning on Ring’s new end-to-end encryption feature will undermine Fusus’s ability to transmit footage. It’s unclear why anyone would consent to participating in such a pilot program and installing a Fusus core, only to undermine that decision by opting into Ring encryption. But the scenario still leaves us wondering what current and future law enforcement schemes to get Ring footage might undermine the protections of end-to-end encryption.


Conclusion 

It may seem like EFF expends a lot of effort fighting against Ring and other Internet-connected home security devices—but we do it for good reason. Police departments that could not legally build and use a large-scale government surveillance network are using Ring cameras as a loophole to avoid public input and accountability. Consumers’ choice to buy a camera cannot and should not be a way to launder mass surveillance and streamline digital racial profiling.

In the wake of investigative reporting and public advocacy, Ring has made a number of concessions. They’ve beefed up security measures, jettisoned undisclosed third-party trackers, and even allowed people to opt out of receiving police requests for footage. These were all good steps, but none of them prevented police from bringing a warrant to Amazon in order to use your footage as evidence without your permission or even direct knowledge. One of Ring’s security and privacy soft spots has always been that it stores your footage for you. Enabling end-to-end encryption introduces a safeguard against blanket requests for footage from the cloud. It means that users have the ability to decide when and if to share their footage, in a way Amazon or Ring cannot easily circumvent. It also means that law enforcement requests for footage have to go directly to the camera owner, just as they did before the advent of cloud storage.

We hope Ring takes the step to make this feature the default. With these safeguards in place, we can now move on to other concerns, like more federal regulation, ending consent searches so that police would be required to get a warrant any time they want your footage, preventing local police from sharing your footage with other agencies for unrelated reasons, and finding safeguards that prevent the technology from being used as a pipeline for sending racially biased “suspicions” straight to the police. 

Matthew Guariglia

The Old Media and the New Must Work Together to Preserve Free Speech Values

1 month 1 week ago

EFF Civil Liberties Director David Greene delivered the following as a keynote address on March 6, 2020, at the Media Law and Policy in the Digital Age: Global Challenges and Opportunities symposium hosted by Indiana University's Center for International Media Law and Policy Studies and its Barbara Restle Press Law Project.

A few years ago, I was summoned to the office of an eminent TV journalist, one of those people commonly described as “the dean of . . . ” something. He wanted me to come by, he said, because “he had an idea to run by me.” So I went. 

After the small talk – we both had suffered the same back injury! – he ran his idea by me. This is a paraphrase: “We should bring back the Fairness Doctrine. And not just for broadcast news, but for all media, especially the Internet. Looking back, I think it made us better journalists.” He was planning a conference and wanted this to be a major discussion point. In my memory, my jaw dropped cartoonishly all the way to the floor. 

The Fairness Doctrine was a Federal Communications Commission rule that imposed “fair” reporting requirements on radio and television broadcasters. By “broadcasters,” I, and the FCC, mean those entities that have a license to broadcast over a certain over-the-air frequency, as opposed to cable or satellite or now streaming services. It’s the stuff you get for free if you just plug in a TV or radio with an antenna. The Fairness Doctrine had many facets. But the main one required broadcasters to devote time to discussing controversial matters of public interest, and then to air contrasting views as well. In some circumstances this could require the broadcaster to provide reply time to any person. The rule was in effect from 1949 until 1987. I’ll talk more about it a little later. 

As I said, I was taken aback by this eminent journalist’s suggestion. I’ve been a First Amendment lawyer for 20+ years and have worked with and on behalf of journalists and news organizations for much of that time. During all that time, without exception, journalists considered the fairness doctrine to be a serious infringement on editorial discretion and freedom of the press in general. How could this person who I knew to be a champion of a free press want to revive it, and apply it to all news media?

So I responded that it was a terrible idea and probably unconstitutional. Needless to say, I was not invited to participate in his conference.

Unfortunately, this was not an aberration. I’ve seen it repeated in different forms ever since: news media advocates calling for regulation that would have until recently been seen as heretical to our established conceptions of a free press. 

The cause, of course, is social media and Internet platforms and Big Tech.

But it’s not that the advent and popularity of social media have adjusted our free press priorities. Rather, social media and the Internet in general have changed the business of news reporting. Legacy news media, especially print, are largely suffering financially, especially at the regional and local levels. And when they see certain social media companies – Facebook, Instagram, Twitter, Google, YouTube, Snapchat – thriving, they look for ways to fight these intruders and to level the playing field.

I completely understand the frustration that motivates this. I also fear a country with diminished or no local or regional reporting. I’ve seen that there is so much less money now to fund public records requests and court access litigation. Indeed, these lawsuits now often fall to nonprofit organizations like EFF. I subscribe to home delivery of two newspapers and a bunch of magazines. 

But it’s a huge mistake to let this despair lead us to a path of abandoning or weakening important free press principles and open the door to the regulation of journalism. Especially when, as I will discuss toward the end of this talk, abandoning these principles won’t actually help. 

So my job here today is to convince you that the news media, all facets of it, from news gatherers and reporters to those who simply provide platforms for others to publish to those who simply suggest news reading to others, must stick together and remain unified champions of a free press. To do otherwise is far too dangerous, especially in the anti-press climate cultivated by the sitting Executive branch. 

The Fairness Doctrine 

Over the past few years, I’ve noticed at least three formerly taboo regulatory threats being given some life by those who are otherwise free press champions. 

I’ve already mentioned the Fairness Doctrine. So I’ll start there. As I said earlier, the Fairness Doctrine required broadcasters to present contrasting views of any public controversy. The U.S. Supreme Court upheld the rule in 1969 in a case called Red Lion Broadcasting v. FCC, on the basis that the FCC was merely requiring the broadcaster to momentarily and occasionally share the license that the FCC had granted it. The Court stated, though, that it would reconsider that decision if it became clear that the doctrine was restraining speech (that is, that broadcasters were choosing to avoid discussing public controversies rather than being forced to present both sides of them).

Five years later, the Supreme Court made clear that a similar rule could not be imposed on newspapers. In that case, Miami Herald Publishing Co. v. Tornillo, the Court struck down a Florida right of reply law that required any newspaper that endorsed a candidate in an election to offer all opponents equal and equally prominent space in the newspaper to respond. The Court explained that such an intrusion into the editorial freedom of a newspaper was per se a violation of the First Amendment. And then in 1997, in Reno v. ACLU, the Supreme Court, in a different context, ruled that the Internet would be treated like print media for the purposes of the First Amendment, not broadcast. 

The FCC revoked the Fairness Doctrine in 1987 (although it formally remained on the books until 2011) after a few lower courts questioned its continuing validity and amid great unpopularity among Republicans in Congress. There are occasional Congressional or FCC-initiated attempts to bring it back – many blame its repeal for the advent of seemingly partisan news broadcasts like Fox News, even though the rule never applied to cable television – but none have been successful. 

To bring back the Fairness Doctrine and then apply it to all media would mark a serious incursion on First Amendment rights.

Enshrining Professional "Ethics Codes" With the Force of Law 

I’ve seen a similar flip with respect to professional ethics, specifically news media advocates urging the legal codification of their voluntary industry ethical standards, embodied in the ethical codes created by professional societies like the Society of Professional Journalists, the Radio and Television News Directors Association, and the National Press Photographers Association. This typically takes the form of calling for conditioning legal protections for online news production, distribution, aggregation, or recommendation services on following these ethical standards. Like, for example, saying that Wikileaks should be subject to the Espionage Act because it does not follow such practices, while "ethical journalists" must be exempted from it.

These codes have always been very intentionally voluntary guidelines and not law for several good reasons. 

First, ethics are inherently flexible principles that don’t easily lend themselves to absolute rules, tend to be fact-intensive in application, and can vary greatly depending on a number of legitimate and worthy priorities. They are generally an ill fit for the bright lines we insist on for laws that limit speech.

Second, free press advocates have been rightfully concerned that transforming journalism's ethical codes into legal standards will only lead to vastly increased legal liability for journalists. This could happen both directly, by the codes being written into laws, and indirectly, by the codes becoming the "standard of care" against which judges would assess negligence. "Negligence," that is, the failure to act reasonably, is a common basis for tort liability. It is typically assessed with reference to a standard of care, that is, the care a reasonable person would have exercised. Were ethical codes to become the standard of care, journalists could bear legal liability any time they failed to follow an ethical rule and, even worse, have to defend a lawsuit every time their compliance with an ethics rule was even in question. And they would then be held to a higher standard than non-journalists, who would only need to act as a "reasonable person" instead of as a "professional journalist."

Third, and perhaps most basically, this would be direct governmental regulation of the press, something antithetical to our free speech principles. 

These all remain correct and relevant, and it remains a bad idea to give professional ethical codes the force of law or condition other legal protections on adherence to them. 

Expanding Republication Liability 

The third flip I’ve seen, and this is probably the most common one, is a sudden embrace of republication liability. Republication liability is the idea that you are legally responsible for all statements that you republish even if you accurately quote the original speaker and attribute the statement to them. To have my students truly understand the implications of this rule, that is, to scare them, I like to discuss two examples.

In one case, Little v. Consolidated Publishing (Ala. App. 2010), a reporter attended a city council meeting. Her reporting on the meeting included an accurate quotation of a city council member, Spain, who at the meeting repeated rumors that one of his rival council members, Little, was in a personal relationship with a city contractor and thus pushed for her hiring, a move that was now being questioned. The article included another statement from Spain in which he said that if the rumors about Little were untrue, they would be very unfair to Little. The article also included Little’s denial. Nevertheless, Little sued the newspaper for defamation. The court rejected the argument that the publication was true since the rumor was in fact circulating at the time. The court explained that “publication of libelous matter, although purporting to be spoken by a third person, does not protect the publisher, who is liable for what he publishes,” and that it did not matter if in the same article the newspaper had decried the rumor as false.

In another case, Martin v. Wilson Publishing (RI 1985), a newspaper published an article about a real estate developer buying up historic properties in a small village. The article was generally supportive of the development and investment in the village, but explained that some residents were “less than enthusiastic” about the developer’s plans and “doubted his good intentions.” The article then stated that “some residents stretch available facts when they imagine Mr. Martin is connected with the 1974 rash of fires in the village. Local fire officials feel that certain local kids did it for kicks.” And the article further expressed doubts about the claims of arson. The developer sued, and the court found that the newspaper could be liable for this republication even though the rumors did in fact exist and even though the newspaper had reported that it believed they were false. 

The republication liability rule apparently dates back to old English common law, the foundation of almost all US tort law. Originally it seems to have been a defense to accurately attribute the statement to the original speaker. But attribution hasn’t helped a reporter since at least 1824, when English courts adopted the present rule, and it quickly was adopted by US courts.

In my twenty or so years of teaching this stuff, republication liability is by far the most counter-intuitive thing I teach. Students commonly refuse to believe it’s true. It leads to absurd results. Countless journalists ignore it and hope they don’t get sued. 

And it gets worse, or at least more complicated. Since at least 1837 (the earliest English case I could find), republication liability has been imposed not just on those who utter or put someone else’s libelous words in print, but also on those who are merely conduits for libel reaching the audience. The 1837 case, Day v. Bream, imposed liability on a courier who delivered a box of handbills that allegedly contained libelous statements, unless he could prove that he did not know, and should not have known, of the contents of the box. Early cases similarly imposed knowledge-based liability on newsstands, libraries, and booksellers. The American version of this knowledge-based “distributor” liability is most commonly associated with the U.S. Supreme Court’s 1959 decision in Smith v. California, which found that a bookseller could not be convicted of peddling obscene material unless it could be proven that the bookseller knew of the obscene contents of the book. Outside of criminal law, US courts imposed liability on distributors who simply should have known that they were distributing actionable content. 

Given this, there developed two subcategories of republication liability: “distributor liability” for those like booksellers, newsstands, and couriers who merely served as passive conduits for others’ speech; and “publisher liability” for those who engaged with the other person’s speech in some way, whether by editing it, modifying it, affirmatively endorsing it, or including it as part of larger original reporting. For the former group, the passive distributors, there could be no liability unless they knew, or should have known, of the libelous material. The latter group, the publishers, were treated the same as the original speakers whom they quoted. Because one was treated a bit better as a passive distributor, the law actually disincentivized editing, curation, or reviewing content for any reason, and thus, some believed, encouraged bad journalism.

Historically, free press advocates have thus steadfastly resisted any expansion of republication liability. Indeed, they have jumped at any opportunity to limit it. 

A Short History of Online Speech Law

So why is this changing now? 

It all started way back in the 1990s, when courts started to apply republication liability to early online communications services, bulletin boards, chat rooms, and even email forwarding. A New York Court found that the online subscription service, Prodigy, which had created a bulletin board called "Money Talk" for its users to share financial tips, was the publisher of an allegedly defamatory statement about the investment banking firm Stratton Oakmont (later immortalized in The Wolf of Wall Street) even though the comment was solely authored by a Prodigy user, and not edited by Prodigy. The court found that Prodigy was nevertheless a publisher, and not merely a distributor, because it (1) maintained community guidelines for users of its bulletin boards, (2) enforced the guidelines by selecting leaders for each bulletin board, and (3) used software to screen all posts for offensive content. This decision was in contrast to a previous decision, Cubby v. Compuserve, in which distributor liability was applied to Compuserve because it lacked any editorial involvement. (Compuserve had created a news forum but contracted out the creation of content to a contractor which then engaged a subcontractor, Rumorville.) 

These holdings gave rise to three major concerns about applying these print-world rules to online publication:

  1. Scale. While it might be reasonable and practical to ask a newspaper to review all third-party content (like ads, letters to the editor, wire-service stories, and op-eds), it would be nearly impossible for most online services to do so. Online publication facilitates third-party content at a scale not seen before. If, in order to eliminate liability or even just minimize it to a manageable risk, online intermediaries were required to review all content before it was published, they simply wouldn’t exist, because that’s practically impossible.
  2. Porn! It’s hard to overstate the importance of sexual content in the broad acceptance of the Internet as a means of communication. But as you might imagine, the fear of readily accessible sexual content, accessible from one’s home without the shame of having to go out in public to get it, was one of the first motivators for regulating the Internet. However, because any effort to remove sexual content from a platform would transform a passive distributor into a publisher, the law disincentivized what regulators saw as “responsible” acts by intermediaries to keep sexual content (and other objectionable content) off of their sites. This in effect recognizes the bad disincentives of republication liability that free press advocates had complained about for years.
  3. The Heckler’s Veto. Even distributor liability, with its know-or-should-have-known standard, carries numerous unscalable risks. The heckler’s veto refers to the fact that one who wants to see speech censored need only register a complaint about it, and the speech will be removed regardless of the merits of the complaint. It’s frequently far easier to just remove content than to investigate its truthfulness, obscenity, etc. This problem is magnified by the problem of scale. If it’s difficult to investigate when it’s just a few complaints, it’s impossible when it’s thousands. As a result, knowledge- or notice-based liability systems are frequently exploited in a way that results in the removal of unobjectionable, legally protected speech.

In order to address these concerns, Congress enacted 47 USC § 230, which essentially gets rid of republication liability (both publisher and distributor liability) for much third-party speech. (There were two big exceptions: user speech that infringes intellectual property rights and user speech that violates federal criminal law.) Members of Congress acted on concerns that the unmanageable threat of liability would thwart the growth and wide adoption of the Internet and the development of new communications technologies within it. And those worried about sexual content wanted to remove all disincentives to remove content when an intermediary wanted to do so. 

Section 230 has always been a bit controversial, and it has been firmly in the crosshairs of regulators angry about all things online these days. I’m not going to use more time here to go over those various attacks on the law. The point I want to make is that in the past few years, legacy news media advocates have joined the throngs blaming Section 230 for pretty much everything they see as wrong with the Internet – as if pretty much anything they don’t like about Facebook, above all the loss of advertising dollars that used to sustain newspapers, were because of Section 230.

Again, this is remarkable to me, because as I said, the press has always hated republication liability and sought to chip away at it. But it is now supporting efforts to chip away at some of the protections that are in place. Just a few months ago, the News Media Alliance, as part of a convening on Section 230 called by Attorney General Barr, called for reforming the immunity as part of a larger overhaul of the news media landscape. And – this is important – the Section 230 protections apply to the news media when it publishes non-original content online, like reader comments, op-eds, or advertisements. Indeed, as I wrote a few months back, one of the most widely successful applications of Section 230 is to the online version of legacy news media. And Section 230 also protects individual users when they forward email or maintain a community website. It’s not a Tech Company immunity; it’s a user immunity. 

Moreover, it’s largely assumed that online intermediaries, that is, those who transmit the speech of others, don’t want to screen that speech for misinformation or other harmful speech. While it is true that some services adhere to an unmoderated pipeline model, it’s more the case, especially with the big services like Facebook, YouTube, Twitter, etc., that services very much want to moderate content, but that monitoring and evaluating speech at the appropriate scale is impossible to do well. The vast majority of decisions are highly contextual close calls. This impossibility is exactly why Congress passed Section 230 – faced with liability for making the wrong decision and republishing actionable speech, these intermediaries will err on the side of censorship. And that brand of censorship inevitably has greater impact on already marginalized speakers.

Leveling the Playing Field

Each of these examples of abandoning traditional free press principles is motivated by the same desire: to level the playing field between traditional news media and online services. That is, the news media now see their ethical and professional norms and legal burdens as giving them a market disadvantage against their competitors for advertising dollars, namely Facebook and Google. And they see the imposition of their norms and legal obligations on these competitors as a matter of fundamental fairness. They in effect want to make “good journalism” a legal requirement.

That’s astounding. Free press advocates have historically recognized the need to defend even “bad journalism” tabloids like the National Enquirer against legal challenges, because they rightfully recognized that those who seek to weaken legal protections target the lowest-hanging fruit. And even if you look to defamation law as an example where “good journalism” gives you some legal advantage, free press advocates have rightfully argued that even if they can prove in court that their journalistic practices were solid, doing so is very expensive and the prospect of doing so exerts a powerful chill on reporting. 

And it is really dangerous to hand government the power to reward what it believes to be good journalism and punish what it believes to be the bad. Just imagine the havoc our last press-demeaning administration would wreak with such power. As it is, we have seen press libel suits by President Trump and Devin Nunes, and offhand threats to pull the nonexistent “licenses” of cable broadcasters.

We should be calling for more protections for speakers, writers, and their platforms now, not fewer. I understand that, unlike with the fairness doctrine or ethics codes, legacy news media advocates aren’t now claiming to love republication liability. Rather, they are saying, “if we are burdened by it, then they should be too.” But still, wouldn’t it be better to level the playing field, as it were, by removing republication liability from everyone, rather than placing this nonsensical and counterproductive legal requirement on everyone? 

As I said above, I understand this perceived unfairness and I am very concerned about the economic instability of our news media ecosystem. But I am also concerned about abandoning free press principles in the false hope that in doing so, we will reclaim some of that stability.

And—I don’t think it will help. I don’t see a connection between the imposition of journalistic norms as legal requirements and the financial disruption to the news media marketplace. That is, I doubt that elevating “good journalism” to the force of law would help stabilize the marketplace.

There is no historic correlation between advertising income and quality of journalism. That is, advertisers don’t and never have rewarded newspapers with advertising because of their journalistic prowess. Rather, newspapers used to have a functional monopoly over certain types of advertising. If an advertiser wanted an ad to reach most people’s houses, they could either use direct mail or newspapers. Newspapers were especially effective for classified advertising, but also for car sales and other full-spread ads and inserts. Newspapers’ stalwart “sections” of highly marketable news – sports, entertainment, national news – in effect supported local and investigative journalism that standing alone might not have been a draw for either readers or advertisers. 

But seemingly overnight, Craigslist gutted the classified advertising market. It’s not because Craigslist was a more righteous platform to advertise on; it’s because a continuously updating online platform with either targeted or broader reach, to which any person with an Internet connection can post almost instantly, is just a far better way of advertising such things.

In many ways, and certainly for certain populations, the type of online advertising offered by Facebook and Google is simply a better deal for advertisers. They are not deceiving advertisers into thinking they are “good journalists,” and advertisers don’t really care (nor do I) whether an online service is considered a “publisher” or a “platform.” It’s a legally and practically irrelevant distinction. They just want effective advertising. 

The hope, I think, is that enshrining good journalism into the law will either drive their advertising competitors out of business or burden them with costs that will make them less hugely profitable. At a minimum, it will just make us feel like the system is more fair. But none of that drives advertising dollars back to legacy news media. 

(I’ll acknowledge one exception – Section 230 means that online services can accept certain ads that print publishers could not – ones that are deceptive or misleading or discriminatory. But this is not a significant source of revenue.) 

Moreover, the Internet is not just Facebook and Google, or a few other large and rich sites. It represents a huge number and variety of communications platforms, from the very very local to the very global. And many of them are not hugely profitable. Many of them serve vital human rights functions, from connecting diaspora communities, to coordinating human rights reporting, to undermining communications bans in oppressive regimes. These are the sites and services that are threatened by the costs the “good journalism” legal standards would impose. Those with lots of money, the very sites these efforts actually target, are the very ones that have the financial wherewithal to absorb them. 

The non-economic reason for giving “good journalism” the force of law is more compelling to me, though not ultimately availing. Ellen Goodman, in her recently published paper for the Knight First Amendment Institute, writes of the policy need to re-introduce friction into digital journalism in order to restore the optimal signal-to-noise ratio, “signal” being “information that is truthful and supportive of democratic discourse” and “noise” being “that which misinforms and undermines discursive potential.” Journalism norms boost the signal and diminish the noise. Digital delivery of information is relatively frictionless, resulting in less filtering of noise. So, the argument goes, the imposition of good journalism norms inserts productive friction into digital media. 

I see the appeal of this and I understand the goals. Nevertheless, I would look to other methods outlined by Goodman to introduce friction – built-in delays or limits on virality (such as what WhatsApp self-imposed) – rather than placing in government’s hands the setting and enforcement of journalistic norms, which is essentially government control of reporting itself.

Aside from what I see as the democratic threat posed by government adoption, and thus co-option, of good journalism norms, there are also serious practical concerns.

And this is mostly because, whereas a newspaper delivers almost only news, Internet media are typically far more diverse. Most Internet sites are multi-purpose: they may serve news and political advocacy. They may include journalists who have the luxury of attaching their own names to articles and who have the resources to fact-check and lawyers to vet stories. But they may also include political dissidents who must remain pseudonymous, or dissident news organizations whose reporting is otherwise blocked in a country, or independent journalists, or community organizers. Or just the average Internet user sharing information with friends. Were “good journalism” to become the law, these speakers might lose their audiences. I don’t think we want an Internet shrunk down to manageable scale, where user-created content is limited so that it is as manageable as the letters-to-the-editor page. 

So, in closing, I urge us all to stay steadfast to our traditional distaste for government regulation of journalistic practice. Good journalism is certainly an ideal. It is an admirable quality to urge any media outlet to adopt and follow. The norms are important and should continue to be taught, not merely to avoid legal liability, but because they serve an important democratic function. But they are not law and should not be.

Related Cases: Ashcroft v. ACLU
David Greene

Arizona High Court Misses Opportunity to Uphold Internet Users’ Online Privacy

1 month 1 week ago

It’s an uncontroversial position that EFF has long fought for: Internet users expect their private online activities to stay that way. That’s why law enforcement should have to get a search warrant before getting records of people’s Internet activities. 

But in a disappointing decision earlier this month, the Arizona Supreme Court rejected a warrant requirement for services to disclose Internet users’ activities and other information to law enforcement, a setback for people’s privacy online.

In a 4-3 opinion, the Arizona high court ruled in State v. Mixton that people do not have a reasonable expectation of privacy in information held by online services that record their online activities, such as IP address logs. According to the Court, that information is not protected by either the federal Constitution’s Fourth Amendment or the state’s constitution, because people disclose that information to third-party online services whenever they use them, a legal principle known as the third-party doctrine.

The decision is wrong. As EFF, ACLU, and the ACLU of Arizona argued in a friend-of-the-court brief, “Individuals today conduct the vast majority of their expressive lives through technology. As a result, we entrust the most sensitive information imaginable—about our politics, religion, families, finances, health, and sexual lives—to third parties.” 

Given that reality, courts should not blithely apply outdated legal principles, such as the third-party doctrine, to records that Internet users consider to be private and that reflect our private lives. The dissenting justices in Mixton recognized the hazard of doing just that:

We entrust private information to third parties every day: every time we use a credit card, provide our Social Security number, use a security card reader, mail a saliva sample to a genetics lab, make a bank deposit or withdrawal, use a password to enter a website, or even send an email . . . The notion that anything one must share for purposes of voluntary transactions is thereby subject to government inspection would eviscerate any meaningful notion of privacy. 

The decision is also wrong because it fails to recognize that the U.S. Supreme Court has increasingly rejected the third-party doctrine in cases involving digital technologies, such as Carpenter v. U.S.

The Mixton decision’s reliance on the third-party doctrine is also disappointing because the majority missed an opportunity to rule that the Arizona Constitution’s “private affairs” clause provided stronger privacy protections than the Fourth Amendment, particularly given that it contains different language and was drafted long after the U.S. Constitution.

As the dissent wrote, “Whatever the continuing vitality of this doctrine following Carpenter in the Fourth Amendment context, we should reject it here.”

Aaron Mackey

It’s Not 230 You Hate, It’s Oligopolies

1 month 1 week ago

As we continue to hear calls to repeal or change Section 230, it appears that many people have conflated a law that affects the tech giants (among many others) with Big Tech as a whole. Section 230 is not a gift to Big Tech, nor is repealing it a panacea for the problems Big Tech is causing—to the contrary, repealing it will only exacerbate those problems. The thing you hate is not 230. It’s the lack of competition.

Section 230 stands for the simple principle that the party responsible for unlawful speech online is the person who said it, not the website where they posted it, the app they used to share it, or any other third party. That is, the only person responsible for your online speech is you. It has some limitations—most notably, it does nothing to shield intermediaries from liability under federal criminal law—but it is, at its core, a common-sense law that incentivizes new services to allow users to share and store expression. And Section 230 isn't just about Internet companies, either. Any intermediary that hosts user-generated material receives this shield, including nonprofit and educational organizations like Wikipedia and the Internet Archive.

What Section 230 does not do is grant Big Tech companies a magical shield against competitors or entrench their power. In fact, it does the opposite. If a new Internet startup needed to be prepared to defend against countless lawsuits on account of its users’ speech, startups would never get the investment necessary to grow and compete with large tech companies. Changes to Section 230 would not bring Facebook to heel. Facebook will be able to afford the lawyers, the staffing, or whatever other costs that change would bring. You know who would not? Any service trying to compete with Facebook. This may be why Facebook has endorsed changes to Section 230.

So while many people rightly are concerned with the power of companies like Amazon, Apple, Facebook, and Google, the uproar around Section 230 is misplaced. It’s not 230 that is the problem. It’s oligopoly.

Billions of people use Facebook and Google every month. Their reach is larger than the population of most countries. There are two downsides to this size: first, it is almost impossible to have transparent and equitable terms of service consistently enforced at that scale. Second, because these are not bespoke services catering to the specific needs and wants of users, size is their value. It’s what their ad services are selling. It’s also why it’s such a big deal when accounts are lost.

People should be able to seek out the platform that works for them. The one that, say, puts a premium on fighting harassment. Or wants to be a safe space for all knitters. Or makes promises about privacy. Or, yes, claims to be about “free speech.” It should not be an obligation to be on any platform in order to participate in society. Yet many small businesses and individuals feel like they do not have a choice. That if they are not on Facebook or Google, they may as well not exist.

A recent article on CNN about the implosion of the alt-right pointed out that hateful rhetoric lost steam once it was only on platforms where everyone agreed with it. There were no journalists there to give them airtime or column inches. There were no “libs” to “own,” so it sputtered out.

Many tech companies have operated under the assumption that the only important metric is growth. Start-ups burn money on growing as large as they can as fast as they can, often with the goal of being bought by one of the Big Tech companies. Big Tech has encouraged this by ramping up mergers and acquisitions so that no company ever becomes a true threat to them, in what is known as the “kill zone.”

Instead of changing Section 230, we need a change in antitrust law and enforcement. We need closer scrutiny of mergers and acquisitions—it’s a good sign that the proposed Visa and Plaid merger fell apart. We need new ways of thinking about the harm done by companies—not just whether they are making us pay too much. Facebook and Google are free, so perhaps we should consider privacy harms, workers’ rights harms, or harms to our ability to repair or truly own our digital media or technology. We need to punish companies who claim they will not commingle our data when they buy new services and then go back on their promises. We need to be concerned with privacy legislation, and give everyone a private right of action for privacy harms, rather than make companies liable for the speech of their users.

And where the badness of these companies is inextricably linked to their bigness, we need to explore ways to break them up. But, as the long list of other issues above attests, even that is only part of the solution.

Mostly, what we need to understand is that there is no single change to the law that will fix Big Tech. It’s much more complicated than Section 230. It’s more complicated than just breaking these companies up. We need to change the entire ecosystem that Big Tech has manipulated to protect its power. And it starts with competition, not Section 230.

Katharine Trendacosta