Victory! Maryland Legislature Says Police Must Now Be Trained To Recognize Stalkerware

2 months ago

Maryland's legislature has unanimously passed a bill that will require law enforcement agencies to learn, as part of their standard training, to recognize the common tactics of electronic surveillance and the laws around such activities. This victory will help provide survivors of domestic abuse, intimate partner violence, and other forms of electronic stalking with much-needed support.

The bill, S.B. 134, passed unanimously through the Maryland Senate and House of Delegates. EFF thanks this bill's author, Senator Susan Lee, and her staff for all of their work on this bill. The bill originated from conversations between the Senator's office and EFF Director of Cybersecurity Eva Galperin, based on her extensive work on "stalkerware"—commercially-available apps that can be covertly installed on another person’s device for the purpose of monitoring their activity without their knowledge or consent. The bill is now on Governor Larry Hogan's desk. If he either signs it or waives his right to veto, then it will become law.

For more than two years, EFF has been a proud founding member of the Coalition Against Stalkerware. This coalition provides training and publishes tools and research to raise awareness about stalkerware. Its members also work directly with survivors of domestic abuse and intimate partner violence and the organizations that support them.

A 2021 NortonLifeLock survey of 10,000 adults across ten countries found that almost 1 in 10 respondents who had been in a romantic relationship admitted to using a stalkerware app to monitor a current or former partner's device activity. Yet many survivors report that law enforcement officials often fail to understand the seriousness of electronic stalking or stalkerware. This raises a simple point: law enforcement officials can't find what they don't know to look for.

The bill is simple, but it will have a big impact. It requires the Maryland Police Training and Standards Commission to provide new and currently serving police with training to better recognize cyberstalking, and to understand the criminal laws concerning electronic surveillance and tracking.

We applaud Senator Lee again for her leadership. We also encourage other lawmakers to use S.B. 134 as a template for future bills, to give these survivors the support they need. 

Hayley Tsukayama

Stop Forced Arbitration in Data Privacy Legislation

2 months 1 week ago

People who want their day in court should be able to have it. That's why EFF has long opposed forced arbitration agreements—agreements that require people to resolve conflicts without going to court—because they place unfair limits on one’s ability to exercise their fundamental rights. Too often, they also act as a get-out-of-jail-free card for companies.

There is a growing recognition that forced arbitration agreements impede the defense of fundamental rights. The U.S. Supreme Court hears three cases on forced arbitration this session, and the President recently signed a bill preventing victims of sexual assault and harassment from being forced to settle their claims through forced arbitration. Such clauses trapped these victims by requiring them to waive their rights to a day in court before they could possibly know there would be any reason to sue. More lawmakers should follow suit in recognizing that forced arbitration violates our rights. In fact, they should go beyond recognition: they should write that recognition into their laws.

EFF already believes that for data privacy legislation to be effective, it must have a private right of action, which expressly allows people to sue companies that violate their rights. In line with that principle, we also support bills that bar forced arbitration agreements, sometimes also known as pre-dispute arbitration agreements.

An Old Law Gone Bad 

Pre-dispute arbitration agreements provide that the parties to a contract must resolve any future legal disputes about that contract through arbitration, not in court. The original intent was to create an efficient way for businesses with comparable bargaining power to negotiate and agree upon an alternate means of conflict resolution. In 1925, Congress enacted the Federal Arbitration Act (FAA) to mandate enforcement of some of these agreements.

Judges have long recognized there is a power imbalance inherent in such negotiations. In a 1967 case called Prima Paint Corp. v. Flood & Conklin Mfg. Co., the U.S. Supreme Court explained that the FAA does not apply where "one of the parties characteristically has little bargaining power." Justice Ginsburg likewise explained, in her four-Justice dissent from the Court's 2018 expansion of the FAA in Epic Systems Corp. v. Lewis:

The legislative hearings and debate leading up to the FAA’s passage evidence Congress’ aim to enable merchants of roughly equal bargaining power to enter into binding agreements to arbitrate commercial disputes. … Congress never endorsed a policy favoring arbitration where one party sets the terms of an agreement while the other is left to “take it or leave it.”

Beginning in the 1980s, however, the Supreme Court in a series of rulings greatly expanded the ability of businesses to use the FAA to enforce arbitration clauses that businesses unilaterally place into their agreements with ordinary consumers. For example, the Court allowed businesses to use the FAA to avoid being sued in state as well as federal court, and even to avoid class action suits altogether. Put another way, the Court made it so businesses could require individual customers to sign away their legal right to sue in court. This opened the floodgates for businesses to insert arbitration clauses anywhere they could to protect themselves from being brought into court for their misconduct, at the direct detriment of consumers. 

If the original intent of forced arbitration was to focus on negotiation and agreement between people and companies, it's clear that it's gone badly wrong. Here's a not-so-fun fact from a 2021 American Association of Justice report: "More people climb Mount Everest in a year (and they have a better success rate) than win their consumer arbitration case."

What Makes Forced Arbitration Forced and Unjust 

When consumers buy goods or services from corporations, arbitration clauses are often buried in standardized contracts. These contracts typically are offered by a party with superior bargaining power (the corporation) to a party with lesser power (the consumer) on a take-it-or-leave-it basis. Such arbitration is forced, not negotiated and agreed to.

Consider EFF's thwarted legal challenge to AT&T's unlawful disclosure of customer location data. AT&T violated the rights of its customers by disclosing their real-time location information without consent, in violation of state and federal law, and contrary to AT&T's own promises against disclosure. EFF represented AT&T customers in a class action lawsuit to end this ongoing privacy violation and hold AT&T accountable, along with our co-counsel at Hagens Berman Sobol Shapiro LLC. The court held that because AT&T customers sign an arbitration agreement when they buy a cellphone or service, they lose their legal right to sue the company in a court of law.

But these AT&T arbitration agreements were hidden within hundreds of pages of dense, small-font legalese. AT&T is a multi-billion dollar business with all the bargaining power. Rejection of the arbitration agreement would mean the inability to use AT&T services. To repeat, AT&T writes the contract without customer input or negotiation, AT&T buries the clause in documents that nobody reads, AT&T offers said contract to every customer, and if you don’t accept, you don’t get service. But if you do, you’ve signed away your legal right to sue them in court, even when they violate your legal rights, federal law, and their own corporate policy. 

People forced into arbitration do not get the protections or advantages that come with civil litigation. Without the ability to join similarly wronged consumers in a class action, an individual's case, and the damages it might yield, become too small for most lawyers to offer representation. Also, compared to class damages in a civil lawsuit, individual damages from arbitration are too small to discourage corporate wrongdoing. Additionally, arbitration often denies the litigants a full opportunity to gather information from each other (called "discovery" in court), denies the general public a full opportunity to gather information about the often-secretive arbitration proceedings, and denies future litigants the legal precedent that would inform further cases. Finally, arbitration denies consumers a jury of their peers, leaving them at the mercy of the arbitrators. Because arbitrators are often hired by the same business the customer has a dispute with, this inherent conflict of interest often leads to biased decisions in favor of the business.

Unfortunately, these forced arbitration agreements are distressingly common. Whenever you select "I agree" on a long and confusing click-through contract to purchase something, and whenever you sign on what's called a "signature-capture device"—for instance, at a checkout counter—to get a good or service, you may be signing away your legal right to sue a business that hurts you.

Thanks to arbitration clauses, consumers have no choice but to sign away their fundamental rights and give businesses a get-out-of-jail-free card, just to use a service. 

EFF Supports Consumer Data Privacy Legislation That Bars Forced Arbitration

We need comprehensive consumer data privacy legislation. To ensure effective enforcement of these privacy laws, we also need a “private right of action,” so that consumers hurt by corporations can bring their claims to court. 

Private rights of action are stronger when they also include a bar against forced arbitration agreements. This prevents businesses from burying a forced arbitration clause into the agreements we as consumers all must sign just to use their service.

A consumer data privacy bill that effectively addresses these particular concerns is U.S. Sen. Cantwell's 2019 Consumer Online Privacy Rights Act (COPRA). COPRA not only had a strong private right of action that allowed any individual subjected to a violation to bring a civil suit, but also barred enforcement of forced arbitration agreements. We encourage more lawmakers to recognize the importance of these provisions and include them in future legislation.

Consumer data privacy laws should put people in the best possible position to fight back against companies. That's why Congress should consider COPRA as the framework for consumer data privacy legislation, as the only bill that contains both a private right of action and a bar on forced arbitration. The last thing Congress should do is leave businesses that unlawfully harvest and monetize our personal information with a get-out-of-jail-free card.

Chao Liu

Scraping Public Websites (Still) Isn’t a Crime, Court of Appeals Declares

2 months 1 week ago

Reiterating its prior common-sense opinion, the Ninth Circuit Court of Appeals ruled in hiQ v. LinkedIn that the Computer Fraud and Abuse Act likely does not bar scraping data from a public website against the wishes of the website owner. Last year, after the Supreme Court decided its first CFAA case, Van Buren v. United States, it vacated the Ninth Circuit's original ruling in hiQ and sent it back to the court of appeals for reconsideration. According to the Ninth Circuit, Van Buren only "reinforces" the court's earlier determination that access to a public website cannot be "without authorization" under the meaning of the CFAA, as EFF argued in our most recent amicus brief. The hiQ decision is good news for all those who collect, aggregate, and index publicly available information, as well as for journalists, researchers, and watchdog organizations who use automated tools to find security flaws, uncover news stories, and investigate discrimination on public websites.

Why Has LinkedIn Gotten So Many Tries at Playing CFAA Gatekeeper on the Open Web?

The long-running dispute in hiQ concerns LinkedIn’s attempts to stop hiQ from scraping public information from LinkedIn user profiles as part of hiQ’s data analytics services. LinkedIn tried to block hiQ’s access and threatened to sue for violation of the CFAA, on the theory that hiQ’s access violated the website’s terms of service and LinkedIn’s explicit wishes. But hiQ sued first and obtained a preliminary injunction to preserve its access.

The key question for the Ninth Circuit on appeal was whether access to a public website can ever be “without authorization” under the CFAA. According to an earlier Ninth Circuit precedent, Facebook v. Power, merely violating a website’s terms of service is not enough to be a violation of the CFAA, but individualized notice in the form of a cease-and-desist letter can revoke a user’s prior authorization. However, the court noted that the phrase “access without authorization” implies that there is a baseline requirement of authorization, and public websites like the LinkedIn profiles at issue do not require any permission to begin with. As a result, the court held that access to public information online likely cannot be a violation of the CFAA. (Because it was considering an appeal from a preliminary injunction, the holding was discussed in terms of the “likely” outcome of a final ruling.)

Then, in Van Buren, the Supreme Court answered a different question interpreting a different term in the CFAA, holding that a police officer did not "exceed authorized access" by using a law enforcement database for an unofficial purpose that violated the department's written rules and procedures. The Court held that the CFAA does not encompass "violations of circumstance-based access restrictions on employers' computers." Rather, it adopted what it called a "gates-up-or-down approach," writing that violations of the "exceeds authorized access" provision are limited to someone who "accesses a computer with authorization but then obtains information located in particular areas of the computer—such as files, folders, or databases—that are off limits to him."

Although there was nothing in that opinion that obviously called the Ninth Circuit's hiQ ruling into question, the Supreme Court nevertheless sent hiQ back to the court of appeals for reconsideration in light of Van Buren.

Unsurprisingly then, the Ninth Circuit found that Van Buren merely reinforced its earlier conclusion: no authorization is required to access a public website, so scraping that website likely cannot be access without authorization, no matter what the website owner thinks about it. The court explained that the Supreme Court’s “gates up-or-down inquiry” applies when a website requires authorization such as a username and password, writing that “if authorization is required and has been given, the gates are up; if authorization is required and has not been given, the gates are down.” But “applying the ‘gates’ analogy to a computer hosting publicly available webpages, that computer has erected no gates to lift or lower in the first place.”

The CFAA is Fixed Now, Right? Right?

Both the new ruling in hiQ and Van Buren are victories that place important limits on the scope of the CFAA, but unfortunately the CFAA remains a vague law that gives prosecutors and private parties significant discretion to attack security researchers, journalists, and follow-on innovators.

Most importantly, both decisions scrupulously avoid defining the contours of what counts as "authorization" in the CFAA. Congress passed the CFAA to address computer "break-ins"—malicious hacking—and EFF has long argued that violations of the law should involve circumvention of effective technical barriers. Whether you call the requirement of a technical authorization "gates down" or something else, computer owners should not get to invoke the power of the CFAA based merely on a written agreement or a cease-and-desist letter.

Also, despite the Ninth Circuit’s clear-eyed approach to public websites, the hiQ opinion includes a disappointing reference to the possibility of a “trespass to chattels” claim against scrapers. As EFF helped establish in the California Supreme Court’s Intel v. Hamidi ruling, that ancient common law tort at the minimum cannot apply to situations where there is no harm to a computer or any proprietary right in data. Here, LinkedIn does not even claim to own its users’ data, so it’s difficult to see how it could win a trespass to chattels argument. 

Despite the narrowness of these opinions, though, there’s reason to be hopeful that courts will continue to cut back at the CFAA’s overbreadth. EFF will continue to fight as well, through our work on the Coders Rights Project, our attempts to reduce barriers to innovation and interoperability, and our support for online investigative journalism.

Related Cases: Van Buren v. United States; hiQ v. LinkedIn
Andrew Crocker

Mobile MitM: Intercepting Your Android App Traffic On the Go

2 months 1 week ago

Note: This post provides technical guidance only. Testing described in this post is done at the reader’s own risk and should only be conducted on devices and networks that you have permission to test on.

Introduction

In order to audit the privacy and security practices of the apps we use on a daily basis, we need to be able to inspect the network traffic they are sending. An app asking for permission to access your location may only use it to share your location with your friends, or it may be tracking your every move. Without knowing exactly what traffic is being sent, you'd never know. Traditionally, this has been the job of dynamic analysis: running the app and capturing traffic as the user interacts with it. A typical setup might involve a test device where the app runs, connected to a wireless access point running mitmproxy, Burp Suite, or something similarly tasked with recording traffic. An additional control laptop might be added to the mix, connected to the test device via USB, to run adb commands on the device or overload Java methods using the dynamic instrumentation toolkit Frida. HTTPS traffic can be intercepted in this way by overloading the app's calls to Java's TrustManager and providing our own, which accepts the proxy certificate that we provide. In combination, this setup provides a powerful way to analyze traffic in a stationary, controlled setting.

But what if we don't have the luxury of a testing lab? What if the app behavior changes based on your location, or interaction with the outside world? For instance, if you use an app to rent a car or unlock a door to a shared workplace, the real-time behavior of the app will be different from what you can replicate in a lab. For these kinds of complex interactions, a roaming Machine-in-the-Middle (MitM) setup is needed: all three components of the previous setup (test device, interceptor, and control device) must be consolidated into a single device running the software required for all three roles. If the app being audited is a form of disciplinary technology – that is, a surveillance app that one person installs on the device of another person – then the auditor will also need to surreptitiously capture traffic being sent by the app, which may pose additional testing complications.

This post will detail the steps involved in configuring an Android device to audit the traffic of any app installed on it, requiring no other device to be physically present. The device will have to be rooted in order to install the required software. All of the software required in this post is free of cost and open source, requiring no investment beyond the device itself. The end result will allow the user to open an app in a specialized way that allows the traffic to be logged, without attaching extraneous devices or requiring the device to be connected to any specific network or access point.

Requirements
  1. A rooted Android device
  2. A basic understanding of the Linux command line
  3. A basic understanding of IPv4 networking
  4. A PC or VM with Linux (we use Debian 11 x86_64 below) for cross-compilation steps during setup

We breeze through some of the steps below for the sake of brevity, but almost all are well-documented elsewhere. Where there isn’t sufficient documentation available, we’ve gone into further detail and provided screenshots.

Setup Overview

In the setup phase, the intention is to allow the user to launch an app in a way that lets its traffic be intercepted. The user should not have to issue any commands themselves to do this. We will assume there is a third-party auditor involved in the setup, who we want to be able to remotely access the device and issue commands. This allows the auditor, who has a degree of technical proficiency, to be separate from the user, who may have special access to app functionality (such as logging in with their credentials). The user should only be involved in the process to open and interact with the desired app. In this example, the app ID is com.example.android - substitute the ID of the app you want to audit.

To add a further complication to our setup, we want this to be as automated a process as possible. The user should not have to manually start services upon device boot.

In order to give the third-party auditor access to the device from a remote location, we will have to deal with the likelihood of network switching and of being behind a NAT router without a publicly routable IP address. Perhaps the simplest way to resolve this issue is to connect the device to a WireGuard VPN, which we will detail below. If the auditor and the device operator are one and the same, this step will not be necessary: they can issue commands themselves to prepare for interception locally.

To open the app in a way that allows interception, we’ll need frida-server daemonized on the phone and frida-tools available to run locally. A frida-server module is available for the great low-level Android tool Magisk which automatically runs the server at boot. Allowing frida-tools to run locally is a bit more complicated. We will have to deploy a full Linux distribution on the device, and install frida-tools on top of that.

To allow the user to easily launch the app in an interceptable way, we will set up an SSH server on the Linux container which runs a custom frida command when a user with a specific SSH key connects to it. We will then use the popular Android SSH client ConnectBot to generate this SSH key and trigger launching the app.

Finally, we’ll need mitmproxy installed on the device to view and record the traffic we encounter.

We recommend the device remain plugged in during this setup phase to avoid various complications where Android will background processes to save battery life.

Clearly, a lot of these tools operate at a low level within the Android OS. This being the case, the first step is to install a custom Android OS which gives us this level of access.

Flashing the Required OS / Apps

First, you’ll have to flash a recovery image to your test device. We recommend TWRP.

Next, boot into recovery mode. You can find instructions specific to your device.

Connect your device to a PC or VM, and make sure adb is installed. For Debian-based distros, it's as simple as apt-get install adb.
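
As a quick sanity check (a minimal sketch; output varies by device), confirm the device is visible over adb before proceeding:

adb devices   # the device's serial number should appear; in recovery mode it may be listed as "recovery"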

Next, you'll have to download, push, and flash the following (example commands follow the list):

  1. LineageOS. Again, reference the installation guide for your device to download the correct build.
  2. Magisk. This should be a similar flash process to LineageOS above.
  3. (Optional) Open GApps. This may or may not be required, depending on the specific app you’re auditing. Note: these apps are closed-source.
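
A rough sketch of the push-and-flash step, assuming TWRP and placeholder filenames (substitute the exact builds for your device):

adb push lineage-<version>-<device>-signed.zip /sdcard/
adb push Magisk-<version>.apk /sdcard/Magisk.zip   # Magisk flashes as a zip; renaming the APK works
# In TWRP: Install -> select each zip in turn -> Swipe to confirm Flash
# Alternatively, use TWRP's Advanced -> ADB Sideload mode:
adb sideload lineage-<version>-<device>-signed.zip
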
Setup within Android

Reboot the device into the newly installed LineageOS, and connect to your network.

If you are testing an app which requires installation from the Google Play Store, you may want to connect the device to your Google account for convenience. This is not strictly required. You can use the apkeep tool we developed to bypass this requirement and install the app of your choice without Google Play being installed on the device, as in the example below.
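
A minimal sketch using apkeep (the app ID is a placeholder, and the default download source may vary):

apkeep -a com.example.android .   # downloads com.example.android.apk to the current directory
adb install com.example.android.apk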

Install F-Droid from its website. You’ll be prompted to allow installing applications from the browser.

Configuring WireGuard

The following steps are only required if you want to allow a third party (such as a researcher) to remotely monitor connections.

On a VPS of your choice, set up a WireGuard peer (server) to connect to. Follow these instructions for a breakdown of the process.
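
As a rough sketch of the server side (every address, port, and key below is a placeholder), a minimal /etc/wireguard/wg0.conf on the VPS might look like:

[Interface]
Address = 10.66.0.1/24
ListenPort = 51820
PrivateKey = <server-private-key>

[Peer]
# the Android test device
PublicKey = <device-public-key>
AllowedIPs = 10.66.0.2/32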

Open F-Droid and install the WireGuard app. You’ll be prompted to allow installing applications from F-Droid.

Open WireGuard and set up a connection to your VPS WireGuard instance. For example, the tunnel configuration in the app might look like this (a sketch; addresses, keys, and endpoint are placeholders matching the server sketch above):
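
[Interface]
Address = 10.66.0.2/32
PrivateKey = <device-private-key>

[Peer]
PublicKey = <server-public-key>
Endpoint = vps.example.com:51820
AllowedIPs = 10.66.0.0/24
PersistentKeepalive = 25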

Once this is done, connect to the VPS peer and allow the permission for the app to act as a VPN. In Settings -> Network & internet -> Advanced -> VPN, set WireGuard to be an Always-on VPN.

Now, as long as the device is connected to the internet, the third party will be able to reach it. This is useful in the mobile context where there is frequent switching of network connections.

Configuring ConnectBot

ConnectBot is an Android SSH client, and will allow our user to launch the app in a specific way that allows monitoring.

Open F-Droid and install the ConnectBot app.

In ConnectBot, generate a new SSH keypair.

Note the public key, as you will need it later.

Next, add a new host to connect to:

Protocol: ssh
android@localhost
Nickname: MitM App
Use pubkey authentication: <choose the key you just generated>


Installing SimpleSSHD

We will need SimpleSSHD to transfer files from the Linux container later.

Open F-Droid and install the SimpleSSHD app.

Open SimpleSSHD and start the SSH daemon. In the app settings, enable “Start on Boot.”

Installing Magisk and Frida Server

Now you will want to install a Magisk module to start the Frida server upon device boot.

First, download the latest release of magisk-frida to your device. This may fail to download the correct zip file in the default browser. If this happens, try downloading it in another browser.

Previously, we flashed Magisk to our device when we were in recovery mode. This should have added a Magisk app to our installed apps. Open it, and continue to install the full app when asked.

Installing the full Magisk app will overwrite the bootstrap app. Open Magisk, and go to the Modules tab. Tap “Install from Storage” and select the zip you previously downloaded to install it. You will need to reboot the device.

Deploying Linux on the Device

We will need to install a full distribution of Linux on the device in order to launch apps via Frida locally in a way that allows interception of network traffic.

Install Linux Deploy from its GitHub repo.

Before configuring or installing Linux from the default configuration, a few settings need to be changed. Tap the lower-left corner of the screen to go into the settings for this configuration, and change the distribution to Debian stable. The image size should be at least 4 GB. Enable the init system and the SSH server. Copy the password for the user as well.

Now you can select “install” from the top-right menu, then “start” to start the container.

Lastly, go into the app-wide settings and enable autostart. We’ve found it convenient to change the autostart delay to 10 seconds.

Android Linux Container

Using ConnectBot, you should be able to log in to the Linux container with the profile we previously set up and the password you copied in a prior step.

Use the following commands to install our dependencies:

sudo apt install tmux python3-venv python3-dev gcc-arm-linux-gnueabihf python3-pyasn1 python3-flask python3-urwid libxml2-dev libxslt-dev libffi-dev python3-pip libssl-dev zlib1g-dev iptables rustc
pip install mitmproxy prompt_toolkit pygments

and allow us to connect to the host Android SSH instance with a new SSH key:

ssh-keygen   # enter, enter, enter
scp -P 2222 .ssh/id_rsa.pub localhost:./authorized_keys

In the SimpleSSHD app, you will see a new temporary password to enter at the prompt. Next:

editor ~/.ssh/authorized_keys

Switch back to the ConnectBot keystore and copy the public key we previously noted. Using this public key, enter the following (all on one line) into the editor and save:

command="/home/android/frida_*/bin/frida -H 127.0.0.1 -f com.example.android -l frida-android-repinning.js --no-pause",no-agent-forwarding,no-x11-forwarding,no-port-forwarding CONNECTBOT_PUBKEY_HERE

Lastly, create the frida-android-repinning.js script referenced above in the home directory. You can find the script here. Special thanks to @pcipolloni for writing it.

You can log out of the container. Now, when we connect to our local Linux host with ConnectBot, it will automatically attempt to issue a command to open our desired app with frida. The next step is to install the frida-tools within our Linux container.

Compiling Frida Tools for Android

Unfortunately, we run into some problems when installing frida-tools the typical way, via pip:

ImportError: /home/android/.local/lib/python3.9/site-packages/_frida.cpython-39-arm-linux-gnueabihf.so: wrong ELF class: ELFCLASS64

It looks like installing the tools via pip on an Android platform isn't supported. Fortunately, there is another way to build and install: we can compile the tools directly from the Frida repository. We've found compiling directly on an Android device to take a prohibitively long time, so in this example we've opted to cross-compile on a more powerful machine. Here, we're using a Debian 11 x86_64 build environment. Even for arm64 / aarch64 devices, we've had better luck compiling to armhf. Instructions for devices with Intel-based chipsets will be different.

sudo dpkg --add-architecture armhf
sudo apt update
sudo apt install curl git npm pkg-config gcc-arm-linux-gnueabihf g++-arm-linux-gnueabihf libpython3.9-dev:armhf
git clone --recurse-submodules https://github.com/frida/frida.git
cd frida && make tools-linux-armhf

Once this is finished, we will have frida-tools available in a format runnable on the device. Copy it over:

scp -r build/frida_thin-linux-armhf/ android@device_ip:.

Finally, back on the Android device, run the following line:

ln -s /home/android/frida_thin-linux-armhf/bin/* .local/bin/

This should add the frida-tools binaries to your $PATH so you can run them directly in a shell. You can test if it is connecting to the local frida-server by running

frida-ps -H 127.0.0.1

If you see a list of processes, frida is working properly. If you see an error, something has gone wrong.

Real-Time Interception

With setup complete, we have all we need to start a real-time interception. The following steps are run every time an interception is made.

With WireGuard set up to auto-connect at boot, the auditor should be able to SSH to the phone without a problem. We want to ensure that the auditor is able to reconnect to any commands running on the device should they become disconnected. Luckily, we've installed the terminal multiplexer tmux above.

Start a new tmux session with tmux new -s interception. Here, we'll set up iptables rules to redirect any traffic of the desired app to our mitmproxy instance. In Android, each app has a unique UID assigned to it. To find it, you can reference the file /data/system/packages.xml from an Android terminal (not within the Linux container). Alternatively, simply run the app and, within the Linux container, issue ps aux | grep com.example.android.
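
For example, from a rooted Android shell, a quick way to pull the UID out of packages.xml (a sketch; the app ID and resulting UID are placeholders):

su
grep 'com.example.android' /data/system/packages.xml
# look for the userId attribute, e.g.: <package name="com.example.android" ... userId="10154" ...>

With this UID handy, we can issue the following commands: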

APP_UID=10154   # modify this line
sudo iptables-legacy -t nat -A OUTPUT -p tcp --dport 80 -m owner --uid-owner $APP_UID -j DNAT --to 127.0.0.1:8080
sudo iptables-legacy -t nat -A OUTPUT -p tcp --dport 443 -m owner --uid-owner $APP_UID -j DNAT --to 127.0.0.1:8080
mitmproxy --mode transparent

iptables will now redirect only the traffic generated by our desired app, ignoring all other system traffic.

In a second tmux session, we will install the forged certificate generated by mitmproxy into the host system. Copy the certificate:

cat ~/.mitmproxy/mitmproxy-ca-cert.cer

Then install the certificate as the shell user:

ssh -p 2222 localhost
su - shell
echo "PASTED_CERTIFICATE_HERE" > /data/local/tmp/cert-der.crt
chmod o+r /data/local/tmp/cert-der.crt

Now that we have the certificate installed where the host Android OS can access it, we can test whether it all works! Back in the Linux container, run:

frida -H 127.0.0.1 -f com.example.android -l frida-android-repinning.js --no-pause

Now switch to the mitmproxy tmux session and observe traffic rolling on through.

Now that we've confirmed everything works together seamlessly, we won't have to run the frida command manually. Just have the auditor set up the iptables rules and mitmproxy to run after boot (or automate this with the init system in the Linux container, as sketched below). Then, the user can open the app when they want it audited.
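
One way to automate this is a small startup script run by the container's init system (a sketch only; the paths, the UID, and the init integration will vary with your Linux Deploy configuration):

#!/bin/sh
# hypothetical init script for the interception container
APP_UID=10154   # the audited app's UID
iptables-legacy -t nat -A OUTPUT -p tcp --dport 80 -m owner --uid-owner $APP_UID -j DNAT --to 127.0.0.1:8080
iptables-legacy -t nat -A OUTPUT -p tcp --dport 443 -m owner --uid-owner $APP_UID -j DNAT --to 127.0.0.1:8080
# run mitmproxy detached in tmux; the auditor can attach later with: tmux attach -t interception
tmux new-session -d -s interception 'mitmproxy --mode transparent'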

Conclusion

Lots of apps that we use every day spy on us without our knowledge. One way to fight back is to expose the spying by analyzing an app's network traffic. Testing an app in a controlled lab is not always possible or desirable, but as we have demonstrated, we can still perform an audit of an app and observe how its behavior might change when run on the go. We hope this demonstration will prove fruitful for those investigating the security and privacy properties of apps in real-world situations.

Bill Budington

California May Require Low-Cost Broadband from Subsidized Networks

2 months 1 week ago

Through the pandemic, the state of California passed a number of bills that resulted in a once-in-a-generation, multi-billion dollar investment to solve the digital divide. The California Public Utilities Commission (CPUC), in response, initiated a proceeding to explore the rules, expectations, and standards necessary to turn available funds into real infrastructure that benefits Californians for decades to come.

In a recent proposed decision, the CPUC established a bare minimum offering of $40 for 50/50 Mbps service on broadband networks fully subsidized by taxpayers. This is a move in the right direction, and EFF fully supports the CPUC's analysis and decision. We find it appropriate for the government to set expectations for networks that are completely paid for by the taxpayer.

A Robust Low-Cost Offering Protects Consumers From Price Gouging and Ensures Their Taxpayer Dollars Go Back to Serving Them

A recent poll by Consumer Reports found that 80% of Americans view broadband access to be as important as water and electricity. Americans depend on their internet connection for employment, education, entertainment, community, health, and much more. Unfortunately, most Americans do not have a choice in internet service providers (ISPs) and are trapped in situations of monopolistic exploitation, paying high prices for what can be viewed as an essential service. 

Recognizing this situation, the CPUC established a bare minimum offering of $40 for 50/50 Mbps to ensure that an affordable option is always present, preventing exploitation.

ISPs, such as AT&T, asserted to the legislature that they cannot operate a fully subsidized network with a basic low-cost tier, and that such an offering would harm consumers and prevent them from closing the digital divide. These assertions are categorically false. It is not that an ISP (again, fully subsidized) cannot operate a network with a basic low-cost offering; it is that some of these ISPs (some of which are among the most indebted companies in the world) do not want to operate a network that cannot overcharge to yield outsized profits.

Consider that "at cost" delivery (meaning the pure cost of providing broadband) on a fully paid-off network has been demonstrated to be as low as $3 a month per household. In the city of Chattanooga, Tennessee, for example, public data from the municipal ISP (which has completely paid off its construction debt) indicated that providing free 100/100 Mbps broadband to all low-income families with children in public school had a negligible impact on its costs.

Furthermore, the state is willing to finance up to nearly $10,000 per household, at the taxpayer's expense, to enable gigabit-capacity fiber connectivity to virtually every Californian. Construction is the most expensive part of provisioning broadband. For the state to shoulder the cost of construction not only rids carriers of any construction debt obligations, but also allows them to provision service at a monthly price that doesn't need to account for paying off that debt. Additionally, EFF's extensive cost-model studies and research in this space find that unsubsidized fiber networks can reach very far into rural markets today. A fully paid-off fiber network with potentially one-tenth the operations and maintenance cost of legacy networks, and negligible costs to upgrade further, should have very few actual new costs that are passed on to the consumer. That is what makes it appropriate for the CPUC rules to require long-term low-cost offerings for these networks.

Dozens of local private and public providers in California have already stepped up to deliver fiber to the homes of all Californians, without exception, even in rural areas. Outside of California, the states that emphatically embraced local small businesses and local governments, to the exclusion of large private national carriers, have only benefited from that choice. For example, North Dakota has no national private carriers, and more than two-thirds of its residents already have affordable gigabit fiber-to-the-home connectivity.

The CPUC's bare minimum offering of $40 for 50/50 Mbps as a guaranteed affordable service is thus reasonable, given the low cost of operating a fully subsidized network. The ISPs' concerns are dubious at best, designed to misinform those without the proper context.

Setting Expectations Establishes Controls Against Waste, Fraud, and Abuse 

Setting expectations, rules, and rigorous standards for infrastructure paid for by taxpayers helps ensure that California does it right, solving the digital divide issues for decades to come. If the government takes a hands-off approach, it risks squandering this once-in-a-generation opportunity. 

Take, for example, the FCC's Rural Digital Opportunity Fund (RDOF), which allowed speculative wireless deployments to substitute for proven fiber-to-the-home options. The FCC awarded a total of $885 million to SpaceX's Low Earth Orbit (LEO) satellite service to deliver 100/20 Mbps to about 640,000 locations. Even in the best-case scenario, where SpaceX's Starlink fleet is at full size, has the maximum stated throughput capacity, and only serves RDOF locations, more than half of RDOF subscribers are predicted to have congested service by 2028. Fiber would have lasted them potentially into the 22nd century. Taking into account that SpaceX's satellites will likely also serve non-RDOF subscribers, and that the fleet may never reach full size or maximum stated throughput, degraded service would likely arrive even sooner. An in-depth analysis comparing unproven deployment methods against proven ones, like cooperatives, municipal networks, and local rural providers, would have revealed this.

The FCC's lack of expectations, rules, and rigorous standards meant RDOF monies went to companies that will likely fail to deliver future-proof networks, or even service ready for today's needs. The CPUC has charted a different course. The legislature should support the CPUC's efforts to ensure California's investment in its own future pays off.

The California Legislature’s Commitment to Delivering 21st Century Fiber Optic Connectivity for All 

The passage of SB 156/AB 156 is the first time in telecommunications policy that a state invested and made available enough funding to provide not just basic broadband service, but future-proof fiber infrastructure capable of delivering multiple gigabits of capacity, and of enabling 5G and successor wireless advancements, to all its citizens. California made a commitment to deliver high-quality and affordable service to all.

The only way to make good on that commitment is with regulatory expertise from the CPUC, reliance on local communities to partner with the state, and an understanding that the monopolistic ISPs have intentionally neglected our communities for decades. The ISPs opposed SB 156 and have taken active interest in hindering all steps of its implementation. Their concerns should be taken with skepticism.

Delivering universal 21st-century-ready broadband access will require courage from our legislators, expertise from our regulators, and hard work from our communities. The CPUC's bare minimum offering of $40 for 50/50 Mbps on broadband networks fully subsidized by taxpayers is a step in the right direction. Californians should have had reliable, affordable broadband yesterday; we must now work to ensure we set a path for them to get it in the future.

Chao Liu

Let's Encrypt Wins Levchin Prize For Work On Internet Security

2 months 1 week ago

SAN FRANCISCO—Let’s Encrypt—a project of the nonprofit Internet Security Research Group (ISRG), which is supported by the Electronic Frontier Foundation (EFF) and other sponsors—won the prestigious international Levchin Prize for significant contributions to real-world cryptography.

Let’s Encrypt is part of the effort to encrypt the entire internet as a means of maximizing privacy and security online. In 2013, 28 percent of page loads happened with HTTPS protection; today, at least 80 percent are protected—a significant cultural shift away from the outmoded belief that only banks and password-access sites need encryption.

EFF participates in the Let’s Encrypt project by maintaining and improving Certbot, the most popular software for getting and installing security certificates, designed to help people who run their own websites. EFF also works alongside Let’s Encrypt staff to develop Boulder, the software that runs Let’s Encrypt.

“Receiving the Levchin Prize was an honor and great recognition for Let’s Encrypt and the work we’ve done to improve the internet,” said Josh Aas, Executive Director of ISRG and Let’s Encrypt. “We’re grateful to longtime collaborators like EFF for helping us get here!”

The Levchin Prize, administered by the International Association for Cryptologic Research and bestowed during the Real World Crypto 2022 conference this week in Amsterdam, honors major innovations in cryptography that have had a significant impact on the practice of cryptography and its use in real-world systems. Two awards are given annually, each with a cash prize of $10,000; the prize was established in 2016 by a generous donation from Max Levchin, a long-term supporter of real-world cryptography. Previous winners include The Tor Project as well as Moxie Marlinspike and Trevor Perrin, creators of Signal.

EFF and the University of Michigan started work in 2012 on a protocol to automatically issue and renew HTTPS certificates. By then, EFF already had been campaigning to encrypt the web for two years and saw that the difficulty and expense that website owners faced to get certificates were too burdensome, blocking millions of small websites from the HTTPS-encrypted web of the future. At the same time, staff at Mozilla were working on a similar project to start a free and automated certificate authority. The two projects teamed up to create what would become the ISRG and Let’s Encrypt.

"As someone who saw the shift from HTTP to HTTPS as a web developer, and as the Certbot project’s manager today, I am proud of the work this team has done to make HTTPS ubiquitous on the web," said EFF Director of Engineering Alexis Hancock. "Encrypting web traffic is one of the clearest, strongest gains in internet security of the past decade. I am pleased that EFF had such a big role in bringing it about and honored that the Levchin Prize judges chose Let's Encrypt this year."

Contact: Alexis Hancock, Director of Engineering, Certbot, alexis@eff.org
Josh Richman

Civil Liberties Groups Urge Social Media Platforms to Better Protect Free Flow of Information in Crisis Zones

2 months 1 week ago
Russia's Invasion of Ukraine Underscores Need to Plan for Emergencies and Treat All Crisis Zones Even-Handedly

SAN FRANCISCO—Whether in Ukraine or in other crisis zones around the globe, social media platforms have a duty to ensure that people have access to the free flow of life-saving information, according to a statement issued today by 31 international human rights and civil liberties organizations, including the Electronic Frontier Foundation (EFF).

“As a global community of civil society actors, we do not demand a one-size-fits-all approach to responding to human rights crises,” the groups said in the statement. “What we are asking platforms to do is to invest more time and effort in improving their operations now, not when unfolding violence gets into the media spotlight and it is often already too late to act.”

It has become increasingly clear that platforms have followed the same playbook in Ukraine as they have elsewhere: surface-level or extractive relationships with civil society; insufficient support for local languages and lack of understanding of local context; and responsiveness to media pressure, not civil society pressure or human rights concerns. The Russian invasion of 2022 was a re-escalation of events that began in 2014, and platforms should have been better prepared.

The statement issued Wednesday calls upon platforms to be better prepared going forward, and urges them to address structural inequalities in how they treat different countries, markets, and regions. Specifically, the statement calls upon platforms to provide:

  1. Real human rights due diligence: Platforms should engage in ongoing and meaningful human rights due diligence globally, prioritizing for immediate review their operations in those countries and regions whose inhabitants are at risk of mass killings or grave human rights violations. 
  2. Equitable investment: Platform investments in policy, safety, and integrity must be determined by the level of risk they pose to human rights, not just by the commercial value of a particular country or whether they are located in jurisdictions with enforceable regulatory powers. 
  3. Meaningful engagement: Platforms must build meaningful relationships with civil society globally that are not based merely on extracting information to improve products, but that also provide civil society with meaningful opportunities to shape platform tools and policies. 
  4. Linguistic equity in content moderation: Platforms must hire adequate numbers of content moderators and staff for every language in which they provide services. They must fully translate all of their policies into all the languages in which they operate. 
  5. Increased transparency: Platforms should increase transparency and accountability in their content moderation practices. The Santa Clara Principles, which were updated and elaborated in 2021, provide concrete guidance for doing so. 
  6. Clarity about so-called “Terrorist and Violent Extremist Content” (TVEC): Platforms should be fully transparent regarding any content guidelines or rules related to the classification and moderation of “terrorism” and “extremism,” including how they define TVEC, exceptions to those rules, and how the company determines when to make such exceptions. Platforms should push back against attempts by governments to use the TVEC label to silence dissent and independent reporting, and should be clear about how their TVEC policies relate to other policies such as incitement to violence. 
  7. Multi-stakeholder debriefs: When platforms take extraordinary actions or are forced to engage in a "surge response" to emergencies, they must also take stock afterwards to evaluate and share what they've learned.  

“With this statement, we wanted to express solidarity with Ukrainian civil society while pushing social media platforms to do better around the world,” said Dia Kayyali, Associate Director of Advocacy for Mnemonic. “Ukraine, Yemen, India, Sri Lanka, Myanmar, Syria, Sudan—the list of places where platforms need to learn from their failures and be prepared to invest in human rights going forward is far too long. After many years of pressure from global civil society, including dozens of open statements from impacted communities, there is no longer any excuse not to be prepared. We look forward to working with platforms on implementing our demands.” 

“We stand with Ukrainians and with all people in crisis zones who rely upon the free flow of information to survive,” said Jillian C. York, EFF's Director for International Freedom of Expression. “Social media platforms must recognize that all too often their services are misused to both spread misinformation and block from view desperately needed factual information, including evidence of war crimes and other gross human rights violations.  These companies must take real steps to ensure that their policies are applied even-handedly and transparently and that their efforts continue after the immediate media spotlight moves on.” 

Maksym Dvorovyi, Legal Counsel for Digital Security Lab Ukraine, said an inconsistent approach to content moderation has been a problem since Russia invaded and annexed Crimea in 2014.

“Over the years, Ukrainian users suffered from coordinated reporting of social media posts by Russians and a lack of the social media platforms' desire to combat this problem,” he said. “Amid a non-transparent approach of assigning moderators to deal with a certain type of reported comments, misperceptions emerged and spread among the Ukrainian society about the review of Ukrainian content by Russian-speaking moderators (lacking knowledge of Ukrainian language and context), or by intermediaries' ‘Moscow offices’ (often non-existent). Thus, at least in the mind of the Ukrainian users, platforms were biased when dealing with Ukrainian cases under the advice of their predominantly Russian staff."

Read the full statement here: https://www.eff.org/document/letter-social-media-platforms-crisis-zones

Signatories to the statement include:

  • Access Now
  • Association for Progressive Communications (APC)
  • Australian Muslim Advocacy Network (AMAN)
  • Center for Democracy & Technology (CDT)
  • Civil Liberties Union for Europe (Liberties)
  • Chayn
  • Derechos Digitales
  • Digital Action
  • Digital Africa Research Lab
  • Digital Security Lab Ukraine
  • Digital Rights Foundation
  • Doublethink Lab
  • Electronic Frontier Foundation (EFF)
  • European Sex Workers' Rights Alliance (ESWA)
  • Fight for the Future
  • Global Forum for Media Development (GFMD)
  • Global Project Against Hate and Extremism
  • Global Voices Advox
  • INSM Network for Digital Rights in Iraq
  • Jordan Open Source Association
  • Miaan Group
  • Mnemonic
  • New America's Open Technology Institute
  • Ranking Digital Rights
  • Social Media Exchange (SMEX)
  • Taraaz
  • The Dangerous Speech Project
  • WITNESS
  • Woodhull Freedom Foundation
  • Zašto ne (Bosnia and Herzegovina)
  • 7amleh- The Arab Center for the Advancement of Social Media
Contact: Jillian C. York, Director for International Freedom of Expression, jillian@eff.org
Josh Richman

EFF and Partners to Ninth Circuit Court of Appeals: Retaliatory Investigation of Twitter Chills First Amendment Rights

2 months 2 weeks ago

Censorship doesn't always look like a black line across a document, or a clear order to remove a piece of content. Websites can feel pressured without the government ever issuing a clear directive about which speakers they host or what content they carry. The First Amendment recognizes that speech can often be "chilled" in other ways–for example, by a burdensome governmental investigation. In an amicus brief filed yesterday, EFF, the Center for Democracy and Technology, and R Street urged the Ninth Circuit Court of Appeals to take the case "en banc" and protect Twitter from a retaliatory investigation by Texas Attorney General Ken Paxton.

On January 8, 2021, two days after the January 6 riots at the U.S. Capitol, Twitter banned then-President Trump from the platform, citing a "risk of further incitement of violence." Five days later, Attorney General Paxton issued a Civil Investigative Demand (CID) to Twitter (and other major online platforms) for, among other things, any documents relating to its terms of use and content moderation practices. Paxton explicitly connected his investigation to Twitter's decision. The CID alleged "possible violations" of Texas's deceptive practices law. 

The demand subjected Twitter’s internal discussion about content moderation rules or decisions to discovery under the CID and second-guessing by AG Paxton. This put Twitter in a difficult position and pressured it to minimize its legal, reputational, and financial risks by self-censoring along the lines indicated by AG Paxton. Twitter sued Paxton, claiming that he was “abusing his authority as the highest law-enforcement officer of the State of Texas to intimidate, harass, and target Twitter in retaliation for Twitter’s exercise of its First Amendment rights.” 

Last week, a panel of judges on the Ninth Circuit wrongly ruled that Twitter cannot sue Paxton until a possible enforcement action at the conclusion of Paxton's investigation, or until the CID is enforced. But as our brief to the Ninth Circuit says, "even pre-enforcement, threatened punishment of speech has a chilling effect." Since the previous panel got this wrong, a more comprehensive "en banc" hearing is needed. From the moment it was issued, the CID chilled Twitter from exercising its First Amendment-protected right to engage in content moderation. Requiring the company to endure even more retaliation by Paxton before it can sue harms Twitter's First Amendment rights. 

From the brief:

An investigation and CID from a state attorney general for documents about a host’s content moderation practices—particularly when coupled with the attorney general’s critical public statements about the host’s content moderation decisions—send a strong message of disapproval and threat of legal consequences for the host if it continues its “disfavored” content moderation actions. Hosts targeted by a CID as part of a state attorney general’s retaliatory investigation will fear harsh legal consequences if they continue content moderation practices like those that sparked the investigation. In the face of such retaliation, a host may believe that the state attorney general will treat it more leniently or drop an investigation entirely if it ceases the content moderation practices with which the attorney general disagrees.

Paxton’s investigation is part of a trend of government officials in the United States using investigations to pressure or punish hosts for making content moderation decisions with which they disagree. This is bad for everyone: access to online platforms with different rules and environments generally benefits users. The Ninth Circuit’s decision risks encouraging this unconstitutional trend of government officials investigating hosts for content moderation decisions with which they disagree. There are certainly content moderation issues on online platforms, which have rightfully been criticized for removing benign posts, censoring human rights activists and journalists, and other bad content moderation practices—as we noted in our brief. But a chilling government investigation is not the right way to resolve those issues. 

Jason Kelley

Podcast Episode: Making Hope, with Adam Savage

2 months 2 weeks ago

The joy of tinkering, making, and sharing is part of the human condition. In modern times, this creative freedom too often is stifled by secrecy as a means of monetization - from non-compete laws to quashing people’s right to repair the products they’ve already paid for.

Adam Savage—the maker extraordinaire best known from the television shows MythBusters and Savage Builds—is an outspoken advocate for the right to repair, to tinker, and to put creativity and innovation to work in your own garage. He says a fear-based approach to invention, in which everyone thinks secrecy is the path to a big payday, is exhausting and counterproductive.

Savage speaks with EFF's Cindy Cohn and Danny O'Brien about creating a world in which we incrementally keep building on each other's work, keep iterating the old into new, and keep making things better through collaboration.

[Embedded audio player: https://player.simplecast.com/16bba9c8-9883-401f-8c54-60f61e4ed2d1] Privacy info. This embed will serve content from simplecast.com


You can also find the MP3 of this episode on the Internet Archive.

In this episode you’ll learn about:

  • How cosplay symbolizes what’s best about the instincts to make and share
  • Why it’s better to live in the Star Trek universe than the Star Wars universe
  • Balancing the desire for profit with wide dissemination of ideas that benefit society and culture
  • Building a movement to encourage more people to be makers, and getting the law out of the way

Adam Savage got his start as a model maker and special effects artist for movies including Galaxy Quest, Star Wars: Episode II—Attack of the Clones, The Mummy, and The Matrix Reloaded, among many others. In 2003, he became the host of The Discovery Channel program MythBusters, in which he and co-hosts debunked or confirmed popular myths through testing and experiments. Today he runs Adam Savage’s Tested, a website and YouTube channel that provides “a community playground for makers and curious minds” by exploring the intersection of science, popular culture, and emerging technology, showing how we are all makers.

Music

Music for How to Fix the Internet was created for us by Reed Mathis and Nat Keefe of BeatMower.

This podcast is licensed Creative Commons Attribution 4.0 International, and includes music licensed Creative Commons Attribution 3.0 Unported by their creators; you can find their names and links to their music in our episode notes.

Resources

Right to Repair

Patents and Patent Trolls

Anti-circumvention provisions of the DMCA Section 1201

Transcript

Adam: One of my favorite communities is the cosplay community.  And what I find in that community is almost no gatekeeping and a really incredible open sharing of information.

One of the things I just love about going to pop culture cons and finding great cosplayers is they'll be like, "I took a Cricut and then I modified it and I added this software and I did this thing. Then I sprayed it with rubber and here's how it came out." Then someone else builds off of that.

A few years ago on the replica props forum, there was a guy who posted a popular robot costume and he had done a beautiful version of it.

It was really amazing. It had all these moving parts and it was gorgeous. People immediately started saying, "Can you share some of your build pictures? We'd love to see how this was built." He was like, "No." People just stopped paying attention to his threads, because that wasn't what you sign up for the forum for. I haven't seen hide nor hair of him in years, because that's not what we're about.

Cindy: That's Adam Savage, who you may know from the shows MythBusters and Savage Builds, among others. If you've watched any of his work, it's no surprise that Adam is a strong advocate for the right to repair, to tinker, and to put creativity and innovation to work in your own garage.

Danny:  Adam is going to tell us how we can build a movement to encourage more people to be makers and how the law can get out of the way.

Cindy: I'm Cindy Cohn, EFF's Executive Director.

Danny:  I'm Danny O'Brien, special advisor to the EFF. Welcome to How to Fix the Internet, a podcast of the Electronic Frontier Foundation.

Cindy: You are, I would say, one of the world's most famous makers, but tell us, how did this start? How did you get into the idea that you were going to go build things?

Adam: I grew up in it. It was the water that I swam in growing up. My father was a painter, first and foremost, but he was also a "mad man" back in the late '50s and early '60s, in advertising. He actually got out of advertising and raised our family by doing animated interstitials for a new children's show called Sesame Street in the early '70s.

He was also a DIYer. He built a deck on the back of our house three times. I think by the third try, he got it right.

So, yah. I was always raised with the example that you built your environment.

Cindy: So, you're an inventor and you hang around with people who see technology as a space for invention as well, and who are creating and using and inventing. What kind of obstacles are you seeing in the kind of technological space of invention? Are they different or the same as the ones you see kind of in the more traditional putting wood together with nails kind of invention?

Adam: Well, I mean, I guess the nice thing about putting wood together with nails is it's being ignored by patent trolls.

It's like everyone gets so terrified to tell you their idea because we have this fictional economy that's supposedly based on ideas. But it's not, and that's a fiction. You'll have people like, "I'd love to tell you about something, but you've got to sign this big thing." That sort of fear-based approach to invention is exhausting, because everyone thinks that secrecy is the route to their payday. Instead of: we'll incrementally keep on iterating and making this better as we collaborate and move forward.

Cindy: That's such an important point. Really, you've got to blame the lawyers for this: the idea that the only way to get paid for something is by not sharing it is really at odds with how society develops. We stand on the shoulders of giants.

Danny:  I think that there's always this interesting kind of confusion too. When you talk to Silicon Valley originals, people who built this stuff, they learnt by working at The Homebrew Computer Club and sharing everything. That the two Steves, Jobs and Wozniak, would turn out there and show their working. Then I think some other people perceive it as, they have to come here and then go, "No, you need to sign my NDA. I can't talk about it." You go, "Everybody talks about everything."

I think one of the things that you see from your work is that, invention and creation doesn't have to be a professional endeavor. There's this whole wider community of students and just part-timers. I think that there's a big wash between those two things, right?

Adam: Yeah. It's funny, one of the best hardware, software engineers that I know won't work for a big company because he likes the freedom of taking weird freelance jobs and having total control over what he's working on. 

There's a lot of folks out there like that, and it's such an important sort of hidden economy of moving us forward.

Cindy: The open source community has really developed this idea and there's a whole community around it, but it still flies a little under the radar, I think from the top line stories we tell ourselves about invention and progress.

Adam: Jamie Hyneman and I spoke at the RSA Conference here in San Francisco a few years ago.

Danny:  The big cryptography-

Adam: Yeah. The big crypto-

Danny:  ... conference. Yeah.

Adam: ... conference. I think the year after us, Stephen Colbert made a big splash at that conference. That's how people might remember it. But when we were there, I told a joke that died in the room. They said, "Do you have any security myths that you'd love to bust?" I said, "Sure. How about that security through obscurity is idiotic and doesn't ever work." Crickets, and two laughs in the room.

Danny:  We're not supposed to talk about that. Security through obscurity being the idea that you can protect something by making it secret. That if you don't reveal how the lock works, then the lock will magically be protected from lock pickers.

Adam: Well, and the absurdity of this is that these locks that the companies are hiding are effectively password 123456. But they're making it illegal for your brain to receive that ... I mean, they'd love to make it illegal for your brain to receive that information. That the password is so simple. That's not security, it's theater.

Cindy: That's a huge chunk of the problem that EFF has with the anti-circumvention provisions of the DMCA, which is a mouthful. But Section 1201 of the DMCA basically says, if there's something copyrightable that's surrounded by a fence, you can't break the fence and you can't teach anybody else how to break the fence, regardless of how ineffective the fence actually is. We've seen this. We've seen this law used against people replacing their printer ink by themselves, and against people creating garage door openers that interact with garages so you don't have to pay the zillion dollars to the manufacturer to get something that's actually quite simple. I'm wondering, do you have any experience with this? Has this happened to you?

Adam: I actually find hope in every corner of recent events. Like, the anti-circumvention rules for hospital equipment are absolutely absurd when you see hospitals with hundreds of thousands of dollars of equipment they can't use for want of a washer or a bolt or some little bit of software. The farmers' fight for the right to repair is a very exciting development, as is states continuing to pass anti-non-compete laws. I think those are all part and parcel of the same thing: the absurdity of these companies' attempts to extract money from parts of the chain that are before the end, and of consumers going, "This doesn't work for anybody."

Cindy: I recently did an event with the right to repair people coming out of the Consumer Electronics Show, where we chose the Worst in Show. The John Deere tractor that gives you a warning light if you try to open its hood, as if something horrible may be happening, was one of the big winners. I focused more on the privacy ones, but John Deere is really on such a bad track with actual farmers who want to be able to fix their own equipment. It's insane.

Adam: The one part of the John Deere story I haven't seen recently is, a few years ago, they were getting dinged for hoovering up all the data about what farmers were doing with their tractors and then selling that. Are they still doing that?

Cindy: Yeah. In fact, there was an FTC complaint that was just filed against them by the iFixit people, who are good friends of EFF, and a bunch of farmers. I mean, farmers have strategies for how they want to grow the most things, and they're really not crazy about John Deere having access to all of that information.

The company that sells you your tractor ought not be surveilling you.

Adam: It ought not to know more about you than you know.

Danny:  When you create a law that says, this part of something you own is hived off from you, there's going to be all kinds of rotten things going on in that. This is something we tried to explain when they introduced DRM, digital rights management, into web browsers. Web browsers are where hackers go to break into your computer. That's the most vulnerable part. They wanted to create a little ticking time bomb in these browsers that would be locked away, where not only would it be hard for you to work out what was going on (because it's to stop people copying movies, and they don't want hackers looking and working out how to do that), but it would also be illegal for security researchers to even probe it, for these same 1201 reasons.

Adam: Right. It's funny, because one of the founders of the EFF, John Perry Barlow, wrote that beautiful essay so many years ago about information wanting to be free. I remember the first time I read it, I was thinking of it as a kind of hippie, anti-capitalist, like it's all granola. I loved the stance, but I now think that it was actually the opposite. He was writing that essay to set a frame of, you're never going to put these genies back in their respective bottles. You have to build a system that understands that this information's going to leak and we're going to need cooperative strategies for being able to make sure we have privacy and liberty and that we can profit from our inventions.

Cindy: Yeah. Free as in speech, not free as in beer, I believe, is one of the ways that…

Adam: Oh my God, I want that as a T-shirt.

Adam: One of my favorite communities is the cosplay community. Which, to be fair, when I first dove into cosplay for real, around 2008, 2009, I found a community that was very striated, with some people at a very high level, some people at a medium level, and some people at the bottom that no one was paying attention to. A lot of gatekeeping about what was and wasn't cosplay. And now what I find in that community is almost no gatekeeping and a really incredible open sharing of information.

Danny:  I think another thing that's really interesting about the cosplay community and also the fan fiction community, is that they're both examples of areas where people don't take it seriously to begin with. I think that makes them very vulnerable from a legal point of view. Like in the mid 2000s at EFF, we spent a lot of time working with fanfic writers because they were getting legally harassed about whether it was okay to use the characters from books. One of the reasons why that happened, I think, was because people said, "Well, that's not a thing. That's not real fiction. You're just riffing." Same with remix culture. Same with mixing music in the '90s and '80s. Again, I think it's this weird transition where people do this for fun and also because it's captivating, but it's in a legal gray area. If you're not careful, it gets thrown into that legal gray area and you just get a whole amazing community of creativity that becomes hip hop, that becomes endless sequels of Star Wars and so forth.

Adam: You have some wonderful artists illuminating that. Kirby Ferguson's incredible documentary, Everything is a Remix, which he keeps on updating and keeping current, I just love that. Just the fertility of that sharing of ideas. One of the things that I have also encountered is, I once played poker with the head counsel for Lucasfilm, and he was telling me, this is 20 years ago, but he was like, "We know everybody who's making stuff out there. We know all the stormtrooper costume makers. We know who's making what." He said, "We're not interested in stopping someone from making them. We just don't want someone to turn it into a multimillion dollar business."

Which is fair enough, actually, but they wouldn't ever say that publicly, and therein lies the problem. It's like, remember the whole legal needle we tried to thread of allowing doctors to say "I'm sorry" without incurring legal liability for saying "I'm sorry"? This is the problem we have in our culture, of, well, we can't really admit publicly that we do this. But Lucasfilm was very clear on how much the fandom supported and helped Lucasfilm in its mission.

Cindy: Yeah. I think this points a way to some stuff that we could fix pretty easily. Fair use is a kind of complicated legal doctrine, but it doesn't have to be. Companies could just say, "We think this is fair use. This is ..." Just put it outside of this realm where you have to come ask for permission, you have to sign a document, you have to do all this thing. It's just fair use. 

Or lawyers with overbroad trademark arguments. This idea that if you don't come down hard on anybody who uses your trademark, somehow you're going to lose your trademark. That is actually not a big threat for most of the people who are wielding it around. It's too bad that that's not happening with more grace from a lot of companies.

Adam: Humans, we are ludicrously inconsistent. We believe everything we believe fiercely right up until we're angry. Then we're leaning on our car horns to get you to move.

Cindy: Yeah.

We're fixing the internet in this podcast. Let's assume that all of the obstacles to makers get out of the way, what do we get in this world? What does it look like if we get this right?

Adam: How do I say this? It's like the hundredth monkey. Once you take a bit of technology and it gets out to five great people, really cool stuff happens. But once it gets to 100 people, you can't even predict where it's going to go and what sort of amazing things are going to come out of that.

People are incredible. The human mind is an unbelievable machine. The idea that we're going to pretend that commerce can somehow inhibit human invention is a fantasy. That's where you get the dystopia. I mean, that's why I love Star Trek, people. I grew up on Star Wars, it's in my bones. But if you ask me where I want to live, it's in the Star Trek universe, not the Star Wars universe.

Cindy: Yeah. Me too.

Adam: I think that drones were a really great example of a real net benefit to everyone to the largest degree. I actually appreciated ... I don't know. It felt like the FAA has really been attempting to find a reasonable path through to drone usage. At the very beginning, every TV show had like the drone shot and now you see the technology maturing, people are using it in more interesting ways. I'm not noticing drone shots as often. It's like you get this brand new toy and there's only one way to use it until everyone sort of figures out the subtleties of the execution.

Cindy: Yeah. I think drones are a good example. EFF has been involved in some work with the FAA around this, especially around the bigger drones and privacy issues. But I do think that drones are one of these things that are getting cheaper and cheaper, and people are figuring out more things to do with them. Some of them are annoying and we might need some regulation, but a lot of them are really cool. Again, back to our farmers: a farmer's ability to fly over their fields is one of the things that's really exciting about taking this technology and making it usable by people for their everyday lives.

Adam: I mean, the people we're fighting against here are people who would love to have patented the idea of using a drone for real estate photography, and then nobody else can do that without paying them. That's mind-bendingly stupid.

Cindy: Yeah. It's funny how IP, intellectual property, comes up a lot in this space because it is often a set of things that are trying to create artificial scarcity around things that are not scarce. Getting that balance right is really important because not everything should be scarce in our society.

Adam: Years ago, I actually read through, I can't remember what author inspired me to do it, but I read through both the Copyright Act and the Patent Act from the founding of America. They're both really, really clear. They're both crystal clear in their language about how much ideas that go out into the world are a net benefit to society and culture as a whole. Let everyone make a little money at the beginning, let everyone else benefit from the invention at the end. It's such a beautiful plan. It just feels like we've been chipping away at it for 200 years.

Adam: I did some work with the Obama administration, helping them promote making around the country and around the world. One of the most amazing places I visited was Elizabeth Forward High School in Pennsylvania, where they'd gotten a grant for some 3D printers, and they gave these 3D printers to a young teacher and said, "You have a 3D printing class now." She didn't know how to use them. She didn't know what to do with them. She got together with the students, a bunch of Christian girls, 13, 14 years old, and she took them across the street to an assisted living facility, where each student adopted one of the residents of the facility and spent a semester iterating a 3D printed thing that would improve the life of that resident.

Cindy: That's fabulous.

Adam: I can't think of a more beautiful curriculum, and it was completely organic based on the teacher's interactions with the printers and the students and the assisted living facility. That to me is the utopic future we can shoot for. Where it's not like someone's going to come in and monetize all those inventions. It's just that we're going to know about them and we can expand upon those things and keep on taking the lessons from that iteration. Again, improve the lives of the residents, not just of the facility, but of earth.

Isn't that beautiful? I was so moved by the story as they told it to me and realizing there's no greater lesson for a young industrial designer, whether or not you're going to end up being an industrial designer, than iterating a product with somebody who is vastly different from you in their life and experience. That, right there, we know from our vantage point how much value there is for a young person to go through that experience. To me, that's what a real shop class curriculum should be.

Cindy: The sense of power, the sense of being heard and listened to, and this creative conversation. We talked with a guy named Zach Latta who runs a thing called Hack Club and has kids learning how to do a lot of these skills.  This is really something that helps our future generations and it helps our current generations really have agency over their lives.

Danny:  I think the goal here is to move the impediments away from that kind of direct connection. I think if we want a utopia where those sort of things happen on a daily basis, well, we have to sit and work out, what does the law look like that means that someone can do that? Someone can just go and make a thing for someone else without there being interference from people worried that they're going to lose money because of it or liability in particular directions. I think that's how you get to there.

Adam: I also think, within rapid manufacturing, it is as William Gibson so succinctly said: the future is here, it's just not evenly distributed. Most marginalized communities still don't have access to 3D printing and rapid manufacturing, while every White kid in school has access to a 3D printer, give or take. I sit on the board of Nation of Makers, and one of the specific tracks we're going to have at the next Nation of Makers Conference, NOMCON, is around racial and social justice and activism. Because these are really fertile areas. When we only allow one community to have the benefits of some new technology, we're not exploiting what it could bring us as a culture and as a society. We have to include all of these other voices in order to reap the full benefits.

Cindy: Absolutely. The other lesson that we've learned from this podcast over and over again is that communities know what they need best. The best ideas come from within these communities. It's not just that the rest of us don't get the benefit of these things; the things that might be developed, even with good intentions to try to assist a community, are not going to be as good a fit because they don't come from it.

Adam: My friend, Sonya Pryor-Jones, in Cleveland has a wonderful organization called Mantles and Makers. She is working on building a house in a neighborhood in Cleveland that will be a community maker space with multiple tracks and multiple stories where kids and locals can come and learn about this and dive into this plasma pool and be part of this invention cycle.

Cindy: That's so great. Because one of the other things that we see at EFF is that there's little pockets of makers. Every community has pockets of makers. I grew up in a small town in Iowa. We had the gearheads, the people who were always working on their cars. That's a maker community. As technology makes some kinds of making a lot easier and a lot ... Those communities, they already exist. We don't have to build them. We just have to protect them from the law and from kind of misunderstanding of what they're up to and then they will grow.

Adam: Well, and I love the strange bedfellows of your average power user of a hacker space turning out to have the same goal as a farmer in Iowa for the use and utility and repurposing of their equipment. I'm fascinated by that. I love that convergence of the most disparate life experiences yet coming onto the same plane for the same battle.

Cindy: Well, and I think it really speaks to making being a universal value. It expresses itself differently in different communities, but you're right. One of the fun things about EFF is, we got a Section 1201 exemption for tractors and stuff. We took on John Deere and we won in the Copyright Office. But just finding those pockets all around the world of people who have a specific maker need, and working with them to try to build the law. I do think that the farmers are going to be a huge piece. There's federal law pending on right to repair. Joe Biden has said that he's supportive. There are a lot of people who are really talking about this now. Honestly, I think we have the farmers to thank, because they enlivened a whole other piece of our representative government around an issue that probably didn't feel like it was relevant to that community until recently.

Adam: Well, and the ethos of being a farmer, which is so deeply baked into American culture, turns out to be the ethos that we're fighting for. Which is, I can repair it with number three fencing wire if I need to. There's a joke that they make in ... I'm modifying a joke in New Zealand. It's, if it can't be fixed with number eight fencing wire, it can't be fixed.

Cindy: It's how I feel about duct tape.

Danny:  We'll just fix the law with the fencing wire.

Adam: Number eight fencing wire?

Danny:  Yeah.

Cindy: Yeah. Just go to D.C. with a bunch of it and be like, "Here, we have what you need."

I feel like there's more of us and we're louder and we're stronger, but this is not a battle that's going to be won automatically. This is one of those things that seems so obvious to the people who are on our side of it that it ought to just happen. The fight we have, and are going to have, around this is going to continue.

Danny:  Some people see the future as being that we just rent things. We just have subscriptions. We don't own anything, and that will be very freeing. I sense you do not feel the same way. That actually ownership is an important component of having control over your technology. What is that difference? Where does that come from?

Adam: That's fascinating. I mean, I was really surprised that I was so happy moving to a subscription model for Adobe Photoshop, for instance. I was pissed off about it at first, and then when I went through it, I realized that actually, in the aggregate, I was saving money on this, I was having more up-to-date software, and I didn't have to worry about it. On that front, that model worked out beautifully for me. But you're right, we're at this point where I have songs I downloaded into my iTunes 15 years ago that are just gone, because through some update, Apple just replaced them with something else.

I don't have enough personal time to go chase all those genies and put them back into the bottle, but it makes me really sad because it is this sort of very subtle mind fight against society about what ownership really is. The idea that I could have this pocket knife and not be able to drill a hole through it to do something important that I wanted to do because somehow there's some label on it that says you're not allowed to modify this thing, nobody wants a future in which we can't do that. There's no way in which that's a net benefit to anybody. We'd have hospitals full of this year's equipment that worked and last year's equipment that didn't.

Cindy: We use ownership as a shorthand for control and power. I think that for me, whether it's licensed or whether it's owned is important because it signals where the power lies and where things lie. As we learn over and over again, often shifting to licensing means surveillance. Because you can't control what people do unless you're watching what people do, and so surveillance ends up being a piece of this. I think that ownership, it's tremendously important for some things, but it's important because of the power dynamic. You could create a license that doesn't have that power dynamic. It's just that so few companies do.

Adam: It's the flip side of the liberty to do what you want with the stuff that you have. It's this really interesting way in which fair use and copyright dovetail immediately with liberty and freedom. I think those values about liberty and the use of our objects are completely nonpartisan. I don't think we'd find Republicans ... I mean, I'm very much a left leaning liberal, but I don't think I'd find a Republican that would disagree with me about farmers being able to fix their own equipment. That actually gives me hope for the future. That, again, that image of the farmer and where they are in American culture is so powerful, it highlights deeply the absurdity of the position of John Deere.

Cindy: Right to repair and tinker and all of those kinds of things are, in some ways, deeply cross-partisan. It's this idea that you fix your own truck. I mean, again, I grew up in rural Iowa. The idea that you fix your own truck is not a partisan idea. It's deeply, deeply embedded in the kind of self-respect, self-control, power, and autonomy ideas that, in some ways, get voiced a lot more by people on the right. I think they're widely shared though.

Adam: Well, I mean, I'll point out another hidden economy that someone is going to try and monetize at some point. Which is, I was trying to fix one of my bathroom faucets a couple weeks ago, and I called up a 10-year-old Grohe service manual video of how to repair this faucet. We all love YouTube for that, because for anything you want to fix, whether it's this thing or your video camera, there's someone who's taken it apart, whether they're in Micronesia or Australia. But the moment someone figures out that they can make a bunch of dough from that, I'm sure you would see companies start to say, "No, you can't put up those repair videos. You can only put up these repair videos." We all lose when that happens.

Cindy: It's definitely the case. I mean, one of the biggest copyright verdicts in the world was a case where Oracle sued people who were granting access to internal materials about how you fix Oracle's servers, and the copyright damages were huge. I do think this is one area where the more you get into a purely digital situation, the harder it is to convince people that it's the same problem. If it's using genuine GM parts to fix your GM truck, most people know that that's an optional thing. That you don't have to do that, and that that's GM trying to make more money off of you. Don't get confused by the fact that one is hardware and one is software or digital; it's the same thing. Being able to fix your own stuff or fix stuff yourself is a value that shouldn't turn on whether you're talking hardware or software.

Cindy: What are the values that we are going to protect when we protect the right to repair and the right to tinker? I think we've said this a little, but I'd love to drill down a little bit.

Adam: To me, the word that comes up first is sharing. The sharing economy is an economy and just like any economy, it can benefit everybody if you treat it and grow it well like a house plant. That involves us getting past our own egos, moving past our fantasies of ruling industry and realizing that the things we invent can make things better for each other. But it doesn't mean that if you invent one widget that's popular, you should live the rest of your life in perfect sultan-like comfort.

Cindy: Yeah, absolutely. Well, Adam, thank you so much. This was just delightful and also very heartening. That this community is alive and well. We've got obstacles we need to get out of their way and kind of crazy things that are happening that are blocking people, but the community that you're so deeply a part of is thriving. That's good news.

Adam: Well, through organizations that I get to work with, like Nation of Makers and Mantles and Makers, I'm hoping to help raise a generation or two of digital natives who understand and swim in this language and can build the next generation's sharing economy.

Cindy: That's wonderful. I can't wait. I can't wait for all the stuff they're going to give us.

Adam: I know.

Cindy: ... stuff at EFF.

Danny:  Cool things. Thank you so much.

Adam: I'm so glad you guys are out there. Like I said, I believe in your mission. Send me in coach, I'll testify before Congress, whatever you need, I'm here for you EFF.

Cindy: You know I'm going to take you up on that. That's wonderful. Thank you.

Adam: Well, thank you guys.

Cindy: Well, that was just such great fun. The thing that really comes out in talking to Adam is how making is a culture. It's not just an individual endeavor. It's not just somebody in their garage all by themselves. That person may exist, but they're part of a community, part of a fabric of how people are engaging with their stuff, digital and non-digital, and of how these communities set norms for themselves.

Danny:  Yeah. There was a thread going through it, I think, of this connection between commerce and the world of technology, and the world of technology amongst enthusiasts and hackers and hobbyists. Which of course is a really strong connection in computing technology. But it's nice to sort of see it made explicit elsewhere. In both of those cases, one of the themes is that openness and sharing is actually beneficial. It's beneficial not just to everyone, but also to each individual endeavor.

It is very confusing where we get this idea of secrecy as the key to a payday. I think anybody who's spent any time in the development of technology knows there's all of this sort of sharing of knowledge that goes on. That closed knowledge really isn't good for an individual and isn't good for society as a whole. When we all participate in the development of technology, things naturally get better.

Cindy: Yeah. I really love how Adam grounds this in an all-American kind of context. I mean, I believe, and I think Adam would agree, that the urge to be a maker is universal. It's part of our basic humanity, but the American story has a lot of resonance. Farmers are makers. The automotive hackers, when I was a kid we called them gearheads, the people who would mess around with their cars, they make it better for all the rest of us, and they are just a central piece of some core American identity. Adam makes that case really well: this is a piece of the do-it-yourself ethos that cuts across all sorts of other ideological differences people might have.

Danny:  Yeah. I think that the shared idea here is sort of incremental improvement and incremental improvement in society. I mean, we have it embedded in our idea of technology and the idea that technology gets better, but we should have it for everything. We should be constantly aspiring to improve the society we have.

Like the idea that you don't need to have a contract. You don't need to have a really dicty copyright regime when what you are actually trying to develop is a much looser kind of liberal idea of, well, we know that it's okay for you to ... It's good if you are building Star Wars costumes or something like that, as long as you're not making a million dollars out of it. That means we don't have to use the heavy hand of the law to extinguish every possible transgression while people are experimenting in this area.

Cindy: Yeah. I also appreciate how he really grounds this in the idea of liberty, of doing what you want in the kind of classic good way. The liberty to take your stuff and make it better and make it fit you. It's all a piece of what I really appreciate about Adam, which is his optimism. 

Danny:  Yeah. I think people underestimate how powerful it is just to be optimistic. If you actually believe, oh maybe we can fix this, then you can go a long way. Of course, makers by definition must be optimistic because they're always convinced they're going to be able to fix or build something in the end. Otherwise, they wouldn't start.

Cindy: I think you fundamentally have to be optimistic if you're trying to build a better future. Whether that's a better future by adding something cool to this tool or by making a cool cosplay costume, or in the work that we do a lot, trying to make the rules of the road and the law better. There's an optimism that has to be built in that Adam just shines through with. 

Well, thank you so much for joining us today.

Danny:  If you've enjoyed this episode, please visit eff.org/podcast where you'll find more. You can learn about the issues and you can donate to become an EFF member.

Members are the only reason we can do this work. Plus you can get cool stuff like an EFF hat or an EFF hoodie or even an EFF camera cover for your laptop. Music for How to Fix the Internet was created for us by Reed Mathis and Nat Keefe of BeatMower. This podcast is licensed under the Creative Commons Attribution 4.0 International License, and it includes music licensed under the Creative Commons Attribution 3.0 Unported license by their creators. You can find their names and links to their music in our episode notes or on our website at eff.org/podcast. How to Fix the Internet is supported by the Alfred P. Sloan Foundation's Program in Public Understanding of Science and Technology. I'm Danny O'Brien.

Cindy: I'm Cindy Cohn.


Josh Richman

The Catalog of Carceral Surveillance: Tablet Advertising That Can Also Issue Discipline

2 months 2 weeks ago

This post is part of our series "The Catalog of Carceral Surveillance: Exploring the Future of Incarceration Technology." You can read more about the role of patents in our post, “The Catalog of Carceral Surveillance: Patents Aren't Products (Yet).”

Imagine you've been arrested and are sitting in county lock-up. You need to make arrangements for bail, a lawyer, and a caretaker for your kids or pets. Maybe you need someone to bring your prescription or you need to talk to your AA sponsor. On top of that, you're traumatized by the invasive booking process and scared to the bone of what might happen to you, all too aware that many people wind up injured or dead while awaiting trial.

An officer hands you a digital tablet and assures you that you can use it to communicate to sort out your affairs. It's a glimmer of a lifeline… but then you try to use the device.

A pop-up opens on the tablet's screen, and you're forced to watch a commercial for a shady bail bond firm before being allowed to access the video call app. When your family member picks up, you both have to sit through another advertisement. When you finally get to talk, both you and your relative have the logo of a local law firm hovering over your shoulders, like the worst Zoom background ever. Throughout the call, your conversation is interrupted with periodic video advertisements that you have to watch to keep the line open. There's also an ever-present scrolling text ad, and another ad bouncing around the screen trying to get your attention.

Suddenly the call is cut off, the tablet freezes up, and an alert blinks on the screen: your privileges have been revoked because an algorithm determined someone on the video flashed a gang sign. It must have been a mistake, but the message says you have to go through the appeal process to have your account reinstated.

This horrific but fictional scenario is based on a very real patent, titled "Advertisements in controlled-environment communication systems using tablet computing devices," that was granted to prison communications vendor Securus in 2016. The patent describes a system that, among other things, would display pop-up advertisements during video calls based on "characteristics" of the callers and the interests of the advertiser.

It's important to note that patents on their own are not products or even an indication of a company's plans. They're ideas that companies have developed and want to get a commercial lock on before a competitor does. Companies may stake a claim on an invention by patenting it, without ever building the system they describe.

Nevertheless, the fact that companies are devoting resources to developing these ideas should serve as an early warning sign of what may be just over the horizon.

We reached out to Securus about this patent. A spokesperson responded via email: “Securus values its Intellectual Property Portfolio, although we may not currently be practicing this patent, we are continuously looking to provide ways to reduce the costs of communications while allowing more contact between incarcerated individuals and their friends and families.”

Although Securus filed for this patent in 2014, which may seem a long time ago, it's important to view this patent in the context of the long, slow-moving efforts to reform prison payphone costs. For years, detention facilities and telecommunications vendors have gouged imprisoned people and their families with exorbitant phone rates—$5.74 for a 15-minute call—while some detention facilities have required phone providers to pay them kickbacks called "site commissions."

With some of those practices coming to an end, an advertising-based model could be one way to replace the lost profits. Securus itself now supports rate caps and moving to a subscription-based model. Although Securus has not mentioned advertising in this context, we have seen advertising layered onto subscription models in streaming and other industries. And the patent certainly provides a roadmap for how Securus might incorporate advertising into its communications systems.

Whatever Securus' intentions are for filing the patent, the language used in it invites prison and jail officials to view inmates as a source of revenue, even if they're indigent.

"This situation may arise in a law enforcement context when an arrestee, prisoner, or other detainee, who is being held in a holding cell, jail, prison or other law enforcement facility, does not have a trustee account, calling account, prepaid account or other means by which to pay for telephone calls," Securus writes in the introduction to its patent. "The detainee is then unable to initiate calls, but usually desires to make telephone calls. This results in lost revenue for the law enforcement facility. Depending upon the nature of the law enforcement facility, the detainee may be offered free calls. For example, a recent arrestee may be able to make free telephone calls to try to secure legal representation or bail. These free calls also represent lost revenue to the law enforcement facility and or telecommunication vendor."

The system envisioned by the patent is a digital interface jam-packed with advertisements targeting both the person behind bars and the person they're trying to reach on the other side. The patent describes several different possibilities for when it might deploy ads: before a person can initiate an app, background advertisements behind the speakers on a video call, audio ads that run during a call, a scrolling text ad under the images, and even an option for advertisers to purchase an ad "that moves or slides around the screen," because "then the viewing party may be more likely to notice and watch the advertisement." The ads aren't always random, either: the system proposed by the patent allows for serving ads based on location, demographics, or other unarticulated "parameters."

Since the patent was ostensibly written to cover all scenarios in which advertising could be displayed during a video call, it doesn’t necessarily mean that all of these ad systems would be run simultaneously during any given call. If implemented, however, the patent appears to leave almost no opportunity for people to communicate without some form of advertisement potentially intruding on that conversation.
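
To make the patent's targeting mechanics concrete, here is a minimal, hypothetical Python sketch of parameter-based ad selection of the kind the patent describes. Everything in it (the function, the fields, the sample ads) is invented for illustration; none of it comes from Securus' actual code or products.

```python
# Hypothetical illustration only: not Securus code, just a toy model of
# serving ads based on location, demographics, or other "parameters."

from dataclasses import dataclass, field

@dataclass
class Ad:
    advertiser: str
    placement: str              # e.g. "popup", "background", "audio", "scrolling"
    targeting: dict = field(default_factory=dict)  # invented targeting fields

def select_ads(caller: dict, inventory: list[Ad]) -> list[Ad]:
    """Return every ad whose targeting parameters match this caller's profile."""
    return [
        ad for ad in inventory
        if all(caller.get(key) == value for key, value in ad.targeting.items())
    ]

inventory = [
    Ad("Bail bond firm", "popup", {"facility_state": "TX"}),
    Ad("Local law firm", "background"),  # untargeted: shown to every caller
]

# The caller's location and demographics decide which ads interrupt the call;
# nothing in the caller's profile can turn the ads off.
print(select_ads({"facility_state": "TX", "age_bracket": "25-34"}, inventory))
```

Even this toy version shows the asymmetry: every parameter selects an ad for the caller, and no parameter lets the caller opt out.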

Studies have consistently shown that incarcerated people are healthier and better able to re-enter society if they maintain meaningful relationships, through communication and visitation, with a support network on the outside, including family and friends. This patent raises the question of how meaningful those communications will be if they are drowning in invasive advertising.

This patent imagines a system that is simultaneously an advertising broker and an automated prison guard, with an integrated "communications monitoring system" that listens in on calls and issues discipline when a person violates policies. As Securus writes in the patent: "[The] [c]ommunication monitoring engine monitors calls for improper content, such as gang signs, pornography, sexual content, and criminal activity, and/or may monitor calls for feedback on advertisements."

When the system detects "improper content," it would disconnect the communication "and/or alert a monitoring agent or authority." The system could also "document revocation of the resident's privileges by recording the violation in the resident's record or file in the resident database."

The patent provides a specific scenario (for clarity, we've removed numbers that referred to parts of a diagram):

For example, an inmate may be advised of a system's flexible one-strike rule on improper content when the inmate is provided with the inmate's identification number and PIN. By using the system, the inmate agrees to the monitoring and also to comply with the content restrictions. When the inmate violates the one-strike rule, the system is flexible in how the inmate's access to the system will be limited. For example, the first violation may result in a warning from the interactive communication engine, a one-day suspension from use of the telecommunication system, or a complete revocation of telecommunication privileges. While the system is flexible in how the inmate is disciplined for violations of the improper content rule, the system strictly enforces the improper content rule. In some embodiments, the communication monitoring engine will also send the suspected communication to prison officials for review. Upon review, the prison official may reinstate the inmate's privileges by overwriting the violation in the resident database.
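
To see how little human judgment sits in that loop, here is a minimal, hypothetical Python sketch of the "one-strike" flow the patent describes: detect, disconnect, record, discipline. The classifier verdicts, penalty ladder, and record format are all invented for illustration and are not drawn from any real Securus system.

```python
# Hypothetical illustration only: a toy version of the patent's "flexible
# one-strike rule," with an automated classifier's verdicts standing in for
# the "communication monitoring engine." All names and structures are invented.

from dataclasses import dataclass

@dataclass
class Frame:
    timestamp: float
    flagged: bool   # pretend verdict from an automated content classifier

PENALTIES = ["warning", "one_day_suspension", "full_revocation"]

def monitor_call(frames: list[Frame], record: dict) -> str:
    """Scan a call; on any flag, cut the call off and discipline automatically."""
    for frame in frames:
        if frame.flagged:                                   # detect
            record["violations"].append(frame.timestamp)    # record
            step = min(len(record["violations"]), len(PENALTIES)) - 1
            record["status"] = PENALTIES[step]              # discipline
            return "disconnected"                           # cut off mid-call
    return "completed"

record = {"violations": [], "status": "active"}
# A single false positive, say a misread hand gesture, ends the call and
# marks the account before any human reviews the decision.
print(monitor_call([Frame(1.0, False), Frame(2.0, True)], record), record)
```

Note that human review enters only after the fact, when an official may "reinstate the inmate's privileges"; the discipline itself is automatic.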

The concept described in the patent should indeed raise red flags. Automated systems used in investigations, from face recognition to automatic license plate readers to acoustic gunshot detection, frequently make mistakes, often with a disproportionate impact on people of color. The patent doesn't specify how this monitoring would occur. One could easily imagine how an algorithm monitoring jail calls could mistake affectionate slang for a threat, or confuse American Sign Language gestures with gang signs.

There are many good reasons for incarcerated people to have access to digital communications and resources, especially in an age of Covid restrictions. These systems should always center the humanity of the person incarcerated and the value that comes from communicating with their loved ones, family, and community, not the revenue potential for the facility or the vendor.

Unfortunately, the U.S. incarceration system has a long track record of imposing predatory fees on incarcerated people and their loved ones in an effort to supplement their budgets and generate profits for vendors.

With this history in mind, we hope that this patent, aimed at a new form of profiting from the needs of imprisoned people, remains confined to paper. 

Dave Maass

Brighter Stars and Persistent Gaps Mark Paraguay's New “Who Defends Your Data” Report

2 months 2 weeks ago

Paraguay's leading broadband service providers last year upped their commitments to users to be transparent about their privacy policies and to adopt accessibility practices, but most fell short on disclosing information about government requests seeking user data, according to the new edition of TEDIC's ¿Quién Defiende Tus Datos? ("Who Defends Your Data").

The report reveals a troubling trend among Paraguayan internet and telecommunications providers: most don't publish reports with statistical data on such requests or disclose the procedures they follow when handing users' data to authorities. What's more, companies still resist making a public commitment to notify users about government data demands. Although this is usually a difficult commitment to get from companies evaluated in the region, we have seen some strides in Chile, Colombia, and Argentina. In Paraguay's report, however, all service providers failed in this category.

This undermines users’ ability to make informed choices about which companies they should entrust their data to. Our reliance on internet connection providers to browse, access information online, and communicate with others puts vast amounts of highly sensitive data into the hands of service providers. TEDIC’s report shows they must address this weakness by giving users more information about how these requests are handled and revealing to what extent they have users’ back when the government demands their data.

The new report looked at publicly available documents and policies of five companies: Tigo (Millicom), Claro (América Móvil), Personal, Copaco, and Vox. All have been assessed by TEDIC since the report's first edition in 2017. Since then, Claro has shown notable advances in the privacy policy and transparency report categories, and Tigo has also improved its privacy and data protection policies' marks over the years. In this 2022 edition, Tigo kept its leadership position while Claro stayed firm in second place. Copaco, which didn't receive any score in the last edition, shared third place with Personal this year. Finally, Vox lagged far behind, getting only a minimum score in one of the evaluated categories.

Paraguay’s full study is available in Spanish

Regarding privacy and data protection policies, TEDIC’s study checked whether companies provided clear and easily accessible information about personal data collection, processing, and sharing with third parties, as well as retention times and security practices. All companies but Vox scored for publishing their privacy and data protection policies and earned at least partial stars for accessibility features on their websites. Both Claro and Tigo received a full star. Personal and Copaco earned only a quarter star, as their policies are generic and don’t contain relevant details on what personal information is collected and stored, for how long, and how third parties and authorities can have access to and use customers’ data.

Unlike other countries in Latin America, Paraguay still doesn't have a comprehensive data protection law in force. TEDIC and allies have been working to push forward legislation that can fill this gap and ensure robust data protection principles and safeguards for all Paraguayans. Yet companies don't have to wait for the bill to be approved to increase transparency about how they process their users' personal information. In fact, the broadband providers in the report should have already taken that on.

Paraguay's report also analyzed whether companies publish transparency reports with information about government requests for users' data, and whether such reports provide additional detail on the types of data requested, the requesting authorities, and the justification for the requests. Only Tigo and Claro scored in this category, earning half stars for reports published by their parent companies—Millicom and América Móvil, respectively. Neither company provided all the required details: they do not break out figures for government requests received in Paraguay, and their reports instead aggregate all government data demands received across the South American countries in which they operate. Paraguay's branches of both companies should follow the example of other South American units that publish local transparency reports, like Tigo in Colombia and Claro in Chile. Also, the Millicom and América Móvil reports are not available on their local providers' websites, which should be fixed.

As for publishing the law enforcement guidelines they follow when responding to user data requests, again only Claro and Tigo received points, for documents disclosed by their parent companies. América Móvil published for the first time a specific global report with information on the procedures it follows before responding to government data demands and the applicable legal frameworks in each country. In turn, Millicom's report presents its global guidelines for assisting law enforcement without breaking down information by country. That's why the company received only a half star in the category, as in the previous year's edition.

Judicial authorization and user notification remained the lowest-scoring categories. Companies still don't make any public commitment to notify their users about government data requests, and only Tigo explicitly states that a judicial order is needed before handing the content of users' communications to authorities. América Móvil's report fails to make this clear in its section on Paraguay, even though sections on other countries in the report say a judicial order is needed. No company publicly commits to requiring a judicial order before delivering users' metadata to authorities. This occurs mainly because of a problematic Supreme Court ruling that didn't consider law enforcement access to telephone metadata without a court order to be a violation of Paraguay's constitutional privacy safeguards. As TEDIC explains in the report, such an interpretation runs afoul of constitutional and legal protections, as well as inter-American privacy standards that apply both to metadata and to communications content.

Finally, regarding human rights policies, all companies received at least a quarter star for public campaigns providing information about or training in issues like digital security or privacy, or for joining sectoral or multistakeholder initiatives aligned with the promotion and defense of human rights.

TEDIC’s Paraguay ¿Quién Defiende Tus Datos? series of reports is part of a region-wide project, inspired by EFF’s Who Has Your Back, aimed at encouraging companies to be more transparent and better protect user privacy to garner a competitive advantage in Latin America and Spain. Fundación Karisma has recently launched their new edition for Colombia.

Veridiana Alimonti

The Latest Threat to Independent Online Creators Is the Filter Mandate Bill

2 months 2 weeks ago

Copyright filters are having a bit of a moment in Washington D.C. The Copyright Office is moving ahead with a process to determine what, if anything, constitutes a standard technical measure (STM) that platforms would have to accommodate. And, if that proves too onerous, Congress has Big Content’s back with the Strengthening Measures to Advance Rights Technologies Copyright Act.

This filter mandate bill would task the Copyright Office with designating technical measures (DTMs instead of STMs) that internet services must use to address copyright infringement. Both the Copyright Office proceeding and this bill have the potential to result in the same thing: more copyright filters.

For those who make and share things online, be it through scripted and edited videos or livestreams, filters have routinely been a huge problem. Right now, the only silver lining has been that American law doesn’t require any service to have a filter. YouTube, Facebook, and Twitch use these tools voluntarily, to terrible effect, but they are not doing so under any legal requirement. That means that at least sometimes when they mess up, they can take whatever measures necessary to fix the problem.

And they mess up a lot. Automated systems cannot tell the difference between lawful expression and infringement. YouTube’s system flagged static as copyrighted material five separate times. Facebook can’t tell the difference between different classical musicians playing public domain pieces. And Twitch has completely failed its users in its implementation of anything resembling copyright rules.

Both the Copyright Office proceeding and the rules imagined by the filter mandate bill could result in a series of new, required automated systems.

If a company’s risk is only lowered by having a filter, then the company will want a filter that is oversensitive; the danger of a copyright suit brought by a billion-dollar company looms much larger in the risk equation than ruining the livelihood of independent creators.

That’s why we need to make sure that Congress hears from the many independent creators who don’t want filters, as opposed to the few multi-billion-dollar corporations that want them.

Join us and the Organization for Transformative Works this Friday for a Copyright for Creators Town Hall. We’ll update everyone on what’s going on in D.C. and answer your questions.

EFF At Home: Copyright for Creators Town Hall on Filter Mandates
Friday, April 8, 2022 at 11:30 AM Pacific Time
Streaming Discussion with Q&A

RSVP NOW

Katharine Trendacosta

Google Fights Dragnet Warrant for Users’ Search Histories Overseas While Continuing to Give Data to Police in the U.S.

2 months 3 weeks ago

Google is fighting back against a Brazilian court order to turn over data on all users who searched for specific terms, including the name of a well-known elected official and a busy downtown thoroughfare. (Brief in Portuguese / English*) While we applaud Google for challenging this digital dragnet search in Brazil, it must also stand up for the rights of its users against similar searches in the U.S. and elsewhere.

Background: Keyword Search Warrants

Keyword search warrants like the one in Brazil are far broader than traditional search warrants described in the Fourth Amendment to the U.S. Constitution. The Fourth Amendment requires police to establish probable cause to search a particular place or seize a particular person or thing before the court authorizes the warrant. But keyword search warrants don’t start with a suspect person or device. Instead, they require Google to comb through the search histories of all of its users, including users who are not logged into a Google account when they search.

Keyword warrants allow the police to identify anyone and everyone who may have searched for particular terms, on the off-chance that one of those people could have been involved in the crime. Like better-known geofence warrants, keyword warrants allow police to conduct a fishing expedition and sweep up data on innocent people, turning them into criminal suspects. Police are using both types of expansive, suspicionless searches with increasing frequency.

Google Takes a Stand Against Keyword Search Warrants—in Brazil

The Brazilian case arises out of the assassination of Rio de Janeiro City Councilor Marielle Franco. Franco was murdered, along with her driver, Anderson Gomes, near Rio de Janeiro in 2018. It was a terrible crime that stirred up public outcry.

As part of the investigation into the assassination, police ordered Google to trawl through its users’ search histories, scanning for searches of certain terms—including the name of a heavily trafficked street in Rio de Janeiro (“Rua dos Inválidos”), Franco’s name, and the name of a nonprofit cultural space intended to support Black women (Casa das Pretas), where Franco had participated in an event earlier on the day she was killed. The order required Google to turn over identifying data about all users who searched for these and other related terms over the course of four days.

Google has challenged this order, eventually appealing it all the way to Brazil’s Supreme Federal Court, arguing that this kind of indiscriminate search violates the Brazilian constitution. (Google’s brief in Portuguese / English*) As Google rightly explains, the warrant is wildly overbroad. The search terms would all have been popular and common queries, and many people are likely to have used them—including citizens and journalists interested in the activities of a city councilor, or people interested in collaborating with or receiving support from the nonprofit cultural center Casa das Pretas.

This keyword search warrant is especially egregious, given the sheer number of people likely caught in its dragnet, but even a narrower warrant should trigger human rights concerns. These types of warrants inevitably sweep in users whom police have no reason to believe were involved in the crime, and they give police unbridled discretion to determine which of these people to target for further investigation. In the Fourth Amendment framework, the unbridled discretion inherent in keyword search warrants, like geofence warrants, makes them an unconstitutional “general warrant.”

As Google emphasized in its brief, this case in Brazil has far-reaching implications. This method of investigating transforms a platform intended to provide access to information into a tool for the government to collect highly revealing private data from innocent people. And Google receives thousands of law enforcement orders to provide user data in Brazil each year, affecting tens of thousands of users. If Brazil’s Supreme Court signs off on dragnet keyword searches, the number of impacted users could skyrocket.

Google Fails to Challenge Keyword Search Warrants in the U.S.

Keyword search orders are becoming increasingly common in the U.S.—but Google seemingly hasn’t fought nearly as hard to protect the privacy of its U.S. users. We aren’t aware of any cases in which Google has pushed back against keyword search warrants in the U.S. In fact, we have no idea how many keyword warrants Google receives or how it responds to them at all, because Google has kept that information entirely secret. That secrecy surrounding keyword warrants contrasts with Google’s recent reporting on geofence warrants; Google has now shared the number of geofence warrants it receives and the three-step process it uses to respond to them.

It's remarkable that Google has taken a strong stand in favor of user privacy in Brazil. But this problem isn’t limited to one country, and Google could do much more to protect its users. Google can and should take proactive steps to address the highly revealing capacity of its databases and adopt robust data minimization measures governing how user data is processed and how long it is stored. And it should take a stand in the courts to protect users in the U.S. and other countries from dragnet keyword searches, just like it’s doing in Brazil.

* The official copy of the brief that Google submitted to the Brazilian court is only available in Portuguese. We used an online tool to translate the brief into English so there may be some inaccuracies in translation.

Naomi Gilens

Podcast Episode: Your Tax Dollars At Work

2 months 3 weeks ago

Democracy means allowing everyday people to have their voices heard on public matters involving their communities. One of the goals of civic technology is to allow a more diverse group of people to have input on government affairs through the use of technology and the internet. 

Beth Noveck, author of Solving Public Problems and Director of the Governance Lab, chats with EFF's Cindy Cohn and Danny O'Brien about how civic technology can enhance people's relationship with the government and help improve their communities.

You can also find the MP3 of this episode on the Internet Archive.

One challenge that governments have is finding out what problems are plaguing their citizens and then figuring out how to solve them. But without input from community members, governments can't figure out what problem they are trying to solve. One of the promises of civic technology is to allow for a more diverse group of citizens, from those who cannot attend in-person forums to marginalized communities, to raise awareness about problems in their areas, and offer solutions that they believe would be beneficial.

"It really helps when we can show how citizen engagement can actually lead to diverse people giving useful contributions that can actually help to inform a process"

But, as Beth explains, governments have some work to do to allow this civic technology to work—from educating or hiring government officials who know how to use this technology to ensuring that communities know these outlets exist. 

In this episode you’ll learn about:

  • What civic technology is and how it can be used to approach and fix public problems while enhancing the relationship between people and their government. 
  • The importance of deciding what problem you are trying to solve before working on a solution.
  • Ways that civic technology can ensure that the government is held accountable for its actions. 
  • How we can build civic technology tools to increase inclusion, specifically for those who have been marginalized or previously left out of the conversation.
  • Why civic technology allows for more people to get engaged in their democracy.
  • The good and bad that can come with governments increasing their knowledge of technology.

Beth Noveck is a professor at Northeastern University, where she directs the Burnes Family Center for Social Change and Impact and its partner project, The Governance Lab (The GovLab). She is the author of Solving Public Problems: How to Fix Our Government and Change Our World. You can find Beth on Twitter @bethnoveck.

If you have any feedback on this episode, please email podcast@eff.org.

Music

Music for How to Fix the Internet was created for us by Reed Mathis and Nat Keefe of BeatMower.

This podcast is licensed Creative Commons Attribution 4.0 International, and includes the following music licensed Creative Commons Attribution 3.0 Unported by their creators: 

  • Drops of H2O (The Filtered Water Treatment ) by J.Lang Ft: Airtone
    http://dig.ccmixter.org/files/djlang59/37792
  • Xena's Kiss / Medea's Kiss by mwic
    http://dig.ccmixter.org/files/mwic/58883
  • Kalte Ohren by Alex Ft: starfrosch & Jerry Spoon
    http://dig.ccmixter.org/files/AlexBeroza/59612
  • Come Inside by Snowflake Ft: Starfrosch, Jerry Spoon, Kara Square, spinningmerkaba
    http://dig.ccmixter.org/files/snowflake/59564rr4
  • Come Inside by Zep Hurme Ft: snowflake
    http://dig.ccmixter.org/files/zep_hurme/59681

Transcript

Beth: Lakewood, Colorado. A community of about, I think, 110,000, 120,000 people outside of Denver, where they have one urban planner. So he was getting really tired of saying to people, "Sorry, I can't do much more about climate change. Sorry, I can't do much more about sustainability. I'm just one guy." So instead what he did was turn to the internet to organize and crowdsource what ended up being 20,000 residents to run what has become 500 different sustainability projects in the community of Lakewood, and now they've replicated it across four cities in Colorado.

Cindy: That's Beth Noveck. She's one of the leading thinkers on how we can use technology to make government and democracy better. Beth's most recent book is called Solving Public Problems.

Danny: Beth is a professor at Northeastern University and the Director of the Governance Lab and also the Chief Innovation Officer for the state of New Jersey. Today, we're going to talk to Beth about how government can get better input from citizens and how we can leverage technology that the government uses to improve our lives and our democracy.

Cindy: I'm Cindy Cohn, EFF's Executive Director.

Danny: And I'm Danny O'Brien special advisor to EFF. Welcome to How to Fix the Internet, a podcast of the Electronic Frontier Foundation.

Cindy: Beth, thank you so much for coming on our program.

Beth: Hi, it's great to be here. I'm just delighted.

Cindy: Well, we've talked on this podcast a lot about the dangers of government using technology tools that can add to discrimination, to racism, to other societal problems. And one thing I really like about your work is that you're really focusing on how we can approach public problems better. Can you give us an example of this?

Beth: Oh, goodness. Yes. I mean, first of all, the work that you do is so important to protect us from all the dangers, both from technology and from the government's overreach into technology. But because you have that field so well covered, I think it's very important that some of the rest of us also focus on the positive applications of technology, as many social innovators, civic tech users, and programmers actually do: looking at how we can use technology to improve how government works, to fix public problems, to deepen democracy.

I think we saw a lot of that during COVID. So whether it's the symptom trackers that allowed, for example, over three-quarters of a million people in New Jersey to actually track their own symptoms and then be able to know what do I do about it, whether I have a cough or a cold or a sneeze or a fever, but at the same time to contribute data back into the process. So I think if we weren't aware of it before, and there are plenty of examples I can give you from pre-COVID, all of us have become sadly aware of the role that data and technology can play in really saving lives, as they've done during the pandemic.

Cindy: This is one of the areas where EFF was pretty involved in the early days with some of these trackers, trying to keep them focused on the problem and not have issue creep. Because one of the things we know is that if you build it, they will come. If you gather a bunch of data about humans, people are going to find, and governments are going to find, lots of uses for it. And trying to cabin this to the specific health crisis was something that I know you did some of, and we did a lot of as well.

Beth: Those issues about responsible uses of tech and data that actually safeguard privacy, that treat people ethically and humanely are hugely important. I do think that there is a significant problem with what Evgeny Morozov used to call solutionism. The idea that if we  build it, they will come. That there's an app for that, that you can create the solution. What I find in the work that I do is that far too often, we are building solutions without being clear on the problem that we're trying to solve.

And so the really good uses of tech and data are ones that start, first and foremost, by defining the problem that we're trying to solve. So let me give you a quick example, sticking with COVID for a moment here, although I think we'd all like to stop talking about COVID.

Cindy: One day.

Beth: One day. I hope that I'm dating this conversation, that COVID will be in the rear view mirror at some point. But if we're talking about things like why people do or don't wear masks, if the problem is that you're giving out masks because you think masks are too expensive for people to wear, for example. That's a solution, but only to one problem. If the root cause of the reason why people aren't wearing masks is because politicians are telling them that COVID doesn't exist, or politicians are telling them that wearing masks is somehow a virtue signal of the wrong politics, then simply giving out more masks is not going to be the solution. So I think that's a stark example, but we need to really focus on what the root causes of the problem are so that we can come up with solutions, especially tech solutions. 


Cindy: That's an area where you've done a lot of thinking. I think of this as: how do you get citizen input into governmental questions? That's something I know you've thought a lot about, and you've got some good examples I'd love for you to talk about.

Beth: The best way to understand the problem and therefore to build the right solutions is to talk to actual humans about what they need. What designers like to call human centered design. What we used to just call citizen engagement, frankly, or public engagement. But we're seeing some pretty exciting examples in the civic tech world of ways in which institutions are turning to technology to make that engagement with citizens easier.

So again, there are lots of wrong uses, or undeveloped and unfulfilled uses. You guys will know, but maybe not everybody here knows, that since 1946 we've had a law in this country that has encouraged citizens to comment on regulatory rulemaking. But in the past, Congress would pass at most a few hundred pieces of legislation a year. Regulatory agencies passed thousands of regulations about everything from clean air, to clean water, to the width of doorways, to make them wide enough for a wheelchair to get through, or last year there was a rulemaking on ceiling fans.

I mean, every aspect of life is governed by regulatory agencies at the federal or state level, and you had a right to participate. But for the last 20 years, the federal government in particular has turned to using new technology, creating a website called Regulations.gov, where you can comment. Still, nobody knows about it. Nobody uses it. It's not very well designed. It sometimes leads to hundreds of thousands of comments.

Some of you may have commented, for example, on the FCC's net neutrality rulemaking a few years ago, or maybe on snowmobiles in national parks a few years before that. But there are lots of new examples of people turning to new technology, including machine learning, to make those engagement processes much, much, much more relevant and, I think, to create new opportunities for people to do things.

Cindy: EFF sent a lot of people to that network neutrality rulemaking, and it was also filled with fraudulent entries, but overall it was ignored by the FCC at the time, led by Ajit Pai. So civic tech that doesn't work, or doesn't actually result in government listening, can in some ways be worse than no tech at all.

Danny: That involvement is not evenly spread out – if we use a tech-based involvement tool, then some issues like net neutrality will attract lots of input, while others don't.

Beth: In Helsinki, they developed civic tech. Actually, a global civic tech group developed a series of tools to help the city create 147 different sustainability goals, to have full data transparency about how the city is accomplishing those goals, to get citizens to give input on those sustainability projects, and to hold the government accountable. These are just some examples of the ways in which I think the most progressive and forward-thinking institutions are turning to new technology to really tap not just the wisdom of the crowd in traditional ways that we've thought about it, like giving comments on a rule, but actually to take action.

Danny: There's a huge chunk of my mind that's cheering you on at this point, because I've always enjoyed this use of technology in government, and this is what the internet was expected to do. I know as an activist that everybody is competing to give greater visibility to their cause, as they should in these consultations. But that ends with a real risk of a distortion of what's being said. For instance, people who don't have access to this technology don't get a proportionate voice. Is that something that you think has progressed, and are solutions beginning to appear to tackle that?

Beth: So you're absolutely right again that we need to be critical about how we do these things. Design matters. Not all participation is created equal. So, for example, there were really great intentions in the city of Madrid. They were really path-breaking. After the Occupy movements around the world and student protests in Madrid, they created a platform called Decide Madrid, which does two things. One is it helps do participatory budgeting, where citizens can dictate some portion of the budget: should we build a soccer pitch or a playground or a school in my neighborhood? And that works pretty well. But they had this component where citizens can propose legislation to the city council. So they had 400,000 people sign up for this. That's a huge number of people. A lot of goodwill and effort. And in the first few years of this project, the number of citizen proposals that actually turned into legislation?

Cindy: Zero.

Beth: Correct. Zero. It's not because of bad intentions. It's not because of lobbyist manipulation. It's not a malfeasance problem. It's just a really badly designed process, both legally and technically; it doesn't work very well. And then on top of that, to your earlier point, I think so many of us in the civic tech community, especially before George Floyd and the Black Lives Matter protests, were very content to focus on building something. Again, if we build it, they'll come, and not really look at who's coming and who's showing up.

Cindy: I think that's a good point; it's important that we build civic tech tools in a way that increases inclusion, and especially increases the inclusion of people who have too often been marginalized or left out of the conversation, whether because of their race or language skills or where they live or economic background.

Danny: So what are the incentives for governments to do this right? 

Beth: So first and foremost, the best incentive, both for government and for citizens, is that it's relevant. So it's got to be useful. We for a long time thought about engagement as a nice-to-have, as a democratic right and value. But let me analogize here to the world of open data. As somebody who's worked in the open data and transparency field for a long time, many of us argued that opening up government data is the right thing to do. But things shifted when we began to explain to political leaders that open data actually helps us to solve problems.

But I think it really helps the selling point when we can show how citizen engagement can actually lead to diverse people giving useful contributions that can actually help to inform a process.

And when you can construct a process that's actually manageable and efficient for well-intentioned public servants, who only have so many hours in the day, it actually works. It works well. We get good ideas. We get good contributions. It's why participatory budgeting has taken off and is now in 1,500 communities around the world: because it's an efficient, easy-to-manage process that leads to real outcomes. You get a decision at the end of it about how you spend X% of the budget. And as a result, many, many cities have dramatically increased, tenfold, the budgets that they're actually devoting to these citizen feedback processes, because it just works well, it's efficient, and it fits within a 9:00 to 5:00 day.

Danny: “How to Fix the Internet” is supported by The Alfred P. Sloan Foundation’s Program in Public Understanding of Science. Enriching people’s lives through a keener appreciation of our increasingly technological world and portraying the complex humanity of scientists, engineers, and mathematicians.

Cindy: Let's go to this good future. Let's say that these ideas end up getting implemented across the United States and around the world. What would it look like for me as a citizen or me as a governmental official in some place?

Beth: So let me say I am relentlessly optimistic, or I wouldn't get out of bed. Again, I want to double down on just the fact that that's why my contribution dollars go to EFF: to ensure the basic preconditions of a free internet and freedom of speech and civil liberties that allow us to imagine this future. Without those preconditions, we don't get the future that I'm talking about.

Beth: So the future looks like the opportunity for more of us to get engaged in the life of our democracy and the life of our government, in the ways that we want to, around the topics that interest us, not as a one-size-fits-all. You may remember early experiments in the US and the UK and elsewhere with things like e-petitions. They didn't really work very well, but there was just the concept that you could say, "Hey, here's what I want you to focus on," and sign your name to it. And yes, people suggested goofy stuff, like we should build a Death Star. And the most popular petition, do you remember what it was?

Cindy: I think it was marijuana legalization. Wasn't it?

Beth: Incorrect, that was the second or third most popular. The most popular was that we should deport Justin Bieber back to Canada. The second most popular was the Death Star. And I believe, and forgive me, I may have my data wrong here, but in the top three was marijuana legalization. I do think, not the Justin Bieber one, but the marijuana petitions were really important as far as signaling how public opinion had changed on that issue, and they did significantly contribute to the subsequent legalization and decriminalization of marijuana across so many states in the United States.

So coming back to your question on what does the future look like? It's lots of different ways of getting involved, knowing what those ways are and knowing the ways in which those can make a difference in your city. 

So, a quick example here. I did some work with my colleagues in Mexico, where we worked with the city of San Pedro Garza García, Mexico, on running a process whereby citizens were asked to co-design solutions to urban challenges with the city. It was so successful because it was efficient and it led to real outcomes: projects that got implemented, new green spaces, a new commuting plan designed to promote carpooling and reduce the number of people driving to school, all kinds of things around trash.

What that led to was the city going: we really like this, because it helps us get better ideas in, it gets citizens engaged in collective action to do things in their own community, and it's more effective. So we're going to institutionalize it in legislation and make this one of the core ways that we do business.

Cindy: I hear you really loud and strong. What you have to do is show the community and the governmental officials that, for the governmental officials, it makes things easier and you do things better, and for the citizenry, that if you invest the time, it'll be worth it. I'm wondering if you've seen some good examples of how to get more marginalized people to be able to participate. People who may be working a couple of jobs, have a couple of kids, and don't have a lot of extra time to take on another job.

Beth: So we've had a lot of conversations over the years about the ways in which tech can be exclusionary because of the digital divide, and about wanting to ensure that people have face-to-face opportunities. But the other side of that coin is that, in fact, for busy people who have to take care of an elderly parent or a small child and who work a job, or increasingly two jobs, technology actually provides an opportunity for greater equity. I worked on a project in the last year, together with the New Jersey Department of Education, to ask what ended up being close to 20,000 parents, students, and teachers about the major challenges they're facing. What to them are the most important issues in connection with education during COVID? To do that process, we did three kinds of engagement. First, we ran an open engagement, using a simple open source platform called All Our Ideas. We made it anonymous to participate, because a lot of people don't want to give the government a lot of information about themselves, and they want to be able to speak freely. And we wanted to create something that was very fast. Then we ran another process whereby we selected a representative sample, but we didn't just go representative. We said, knowing the disparate impacts of COVID on black and brown communities, we oversampled so that we had 50% black and brown participants, over-selecting for people who came from school districts that had a disproportionate number of children receiving free school lunch or school breakfast.

And to ensure that it was equitable and fair, we paid people for their participation, because in that case we asked an hour of their time for what amounted to, you can think of it as, a focus group on steroids: one that used machine learning to sort what people told us very, very rapidly, to visualize it and help you see what people's opinions were, and to sort it based on different demographics.

And to organize all of this, we recruited, again, a representative sample of 20 kids from around the country to design and run the program. It was a program about education, after all, and about schools. So who are the real experts but kids? It would be foolish not to include them and their perspective.

So I think long story short, it's about paying people for their time and it's about ensuring that we combine representativeness with self selection. 

Danny: I think certainly a few years ago, it was the case that there was a real knowledge gap in governments all around the world. I do feel that's changed.

Part of the way that changed was with the Obama administration really picking up and running with the idea of bringing in people from the tech industry. But that has its downsides too, because they're kind of a constituency of their own, and you really don't want, in this particular age, government and big tech to be the same people or friendly together. Do you think governments are getting better at having that tech knowledge, and is that tech knowledge helping or hindering their ability to regulate and control big tech?

Beth: The White House having a chief technology officer and a chief information officer then led to agencies creating those roles, which trickled down into states creating those roles and local governments creating things like my job as chief innovation officer, and cities doing the same thing. So there is a proliferation. There's much better talent. There's much more of an appreciation for the need for that talent and for those skills than there's ever been before. But we need those skills to be much, much more widespread. And to your point, Danny, we don't want it just to be a revolving door between Silicon Valley and Washington. We want those skills to be much more dispersed and to be shared by people who have different kinds of background and training.

But right now our government, the public sector, which is the largest employer in the country, doesn't have any requirement that people know anything about the internet, or about tech, or about data. I think that needs to change. We need to make those skills much, much more distributed, just as we need to get more people into these roles in government at the same time.

Cindy: I know you've done a lot of thinking about skills sharing and other kinds of things, and the idea that we shouldn't just take government employees as they are when they first get hired, but should really have paths forward for them to get more skills in this area. And we really don't have that right now at the level that we need. Are there some good examples that you have of places where this is starting to work?

Beth: The federal government has started; within the Office of Personnel Management, for example, they've created courses in civic tech and courses in human-centered design. Unfortunately, the way the federal government works, they're forced to charge a lot for those courses. It's a fee-for-service model where they're required to get reimbursed, which creates a real disincentive for that kind of learning. In the state of New Jersey, one of the first things I did as the chief innovation officer was to create a platform that we call our innovation skills accelerator, and to provide free video-based courses in these new ways of working as well. So we're trying to introduce people to the idea of using data and using community wisdom to solve problems differently.

Danny: So what I think we're getting from what you were describing is that there are all these really exciting seeds and projects that may not be super well known yet, but really show the promise of a digitally enabled government. What's your most optimistic vision of how this plays out?

Beth: Well, the optimistic vision, of course, in certain ways when it comes to service delivery, is that the government recedes almost into the background. Instead of the horror stories about waiting in line at motor vehicles that everybody likes to tell, it becomes the case that, because the government has your tax data and knows a lot about you, you automatically get the benefits to which you're entitled. Your tax return should be automatic and your driver's license should be renewed automatically. And if you are homeless or in need of food assistance, again, you should be getting those benefits automatically. And when you have a question, you can type into a chatbot and get an answer, 24 hours a day.

So in other words, the kind of customer service that you expect from a good company is the same kind of customer service you should be getting from government. It's accountable to us as to how that's done, but that we're taking advantage of the best of technology to really allow us to solve problems and to do so together by co-creating solutions with government.

Cindy: I love this vision, both of the government receding from the places where it's a pain point now, and of your engagement with the government stepping up in the areas where you're actually empowered by governmental action. I like both of these pieces of the vision. Thank you so much, Beth, for coming and talking to us and giving us so many good examples. I mean, this has really been a heartening conversation about the possibilities here.

Beth: First of all, I'm just so delighted that you had me on, thank you so much. I think we do have to be relentlessly optimistic because we have to fight for this future.

Sometimes that positive future can really get lost and can feel really utopian, but the good news is it's happening, and it's happening in pockets and in places all around the world. We just have to figure out how to bring more awareness to it, and we have to know how to demand it and make it part of how we do business.

Cindy: Wow, this is so great. I mean, this is exactly why this podcast is called "How to Fix the Internet": because I feel, similarly, that you can't make a better future unless you can envision it. In so many places, and of course EFF is in the heart of some really hard fights all around the world, people seem to have been losing that vision. And so we intentionally wanted to bring in people like you, who are helping us to have a picture of a fixed internet so that we can take the steps necessary to build it. So thank you again.

Beth: Thank you.

Danny: Thanks a lot. 

Cindy: That was just such a great conversation, and I feel so inspired, because I feel like Beth really gets the need to have this balance: the need to make sure that government uses tech tools that protect our privacy and serve us, while at the same time making sure that they actually meet the needs of the people involved.

I also really like that her vision of the future involves making government smaller in people's lives in the ways that it's annoying, and bigger in people's lives in the ways that let them participate. The annoyances are things that make people legitimately concerned that the government isn't very functional and is basically wasting their time; she wants to reduce those, using machine learning and other tools, at the same time that we're increasing the side of government where we're actually participating and we have power.

Danny: I guess my naive understanding of where this whole area has been is that it's about increasing representation, so I think it was great to get an update on what that means now. In the '90s it was just like, "Oh yeah, we'll be able to click buttons and vote for everything." And Beth brings a sophistication to it, about how it's actually tricky to get people involved in a representative way, particularly people who don't have the time of day to spend on this kind of thing, and how you can do tricks like oversampling of different groups and offering incentives rather than just taking who turns up.

Cindy: Yeah. I think that was really great, and there are some good ideas in there about how you compensate people for their time when they're making their voices heard. And I also loved, of course, getting the high school kids involved in how to actually think about what we need for our schools.

Danny: I also really like this idea that she touched on at the end, which is that government is less a kind of thing outside of your experience, or it shouldn't be. It should be the place where we have the discussion about the important things. Someone has to make a decision about these things, and in a democratic society we need at least to have a place where we do that. I hadn't really thought of technology being the thing that transforms government in that way. The other thing that really struck me is that we don't get to hear these positive stories about where technology is working.

I think that's partly because that conversation is usually about critiques. Right? 

Cindy: Yeah. It's important to be critical, but I agree with you that at this point the criticisms get all the attention and the success stories don't get enough, so we need to rebalance that. And there are a lot of ideas out there, not only for people as citizens, but also for the folks in our audience who have technical skills and passion: ideas for how you can get involved and make your government better for you and your neighbors and your family.

Cindy: Thanks so much to our guest, Beth Noveck, for coming to talk to us and sharing so many examples of how we can fix public problems using technology. Not just the dangers of doing that, but the exciting possibilities.

Danny: And thank you for joining us. If you have feedback on this episode, please do email us at podcast@eff.org. We read every email. If you like what you hear, follow us on your favorite podcast player. We've got lots more episodes in store this season.

Music for How to Fix the Internet was created for us by Reed Mathis and Nat Keefe of BeatMower. This podcast is licensed Creative Commons Attribution 4.0 International, and includes music licensed Creative Commons Attribution 3.0 Unported by their creators. You can find their names and links to their music in our episode notes, or on our website at eff.org/podcast
How to Fix the Internet is supported by The Alfred P. Sloan Foundation's Program in Public Understanding of Science and Technology. I’m Danny O’Brien.

Cindy: And I'm Cindy Cohn.

 

Christian Romero

Anatomy of an Android Malware Dropper

2 months 3 weeks ago

Recently at EFF’s Threat Lab, we’ve been focusing a lot on the Android malware ecosystem and providing tools for its analysis. We’ve noticed that a lot of samples of Android malware in the tor-hydra family have surfaced, masquerading as banking apps to lure unsuspecting customers into installing them. In this post, we will take one such sample and analyze it using open-source tools available to anyone.

At a Glance

The sample we’ll be looking at was first seen on March 1st, 2022. This particular malware presents itself as the banking app for BAWAG, a prominent financial institution in Austria. Upon first run, the app prompts the user to grant it the “accessibility services” permission, which gives an app broad access to read the screen and mimic user interaction. Once the permission is granted, the app backgrounds itself. Any attempt by the user to uninstall the app is prevented by the app interrupting and closing the uninstall dialogs. Attempting to open the app again also fails—nothing happens.


Analyzing the Android Package (APK)

AndroidManifest.xml

The Android app manifest file contains a list of permissions, activities, and services that an app provides. If an activity is not listed in the app manifest, the app can’t launch that activity. Using an Android static analysis tool like jadx or apktool we can take a look at the manifest XML. The malware app’s manifest asks for a wide range of permissions, including the ability to read and send SMS messages (a common way for malware to propagate), request installation and deletion of packages, read contacts, initiate calls, and request the aforementioned accessibility service. In addition, a number of classes are referenced which are not defined anywhere in our jadx-reversed code:

  • com.ombththz.ufqsuqx.bot.components.commands.NLService
  • com.ombththz.ufqsuqx.bot.components.injects.system.InjAccessibilityService
  • com.ombththz.ufqsuqx.bot.components.locker.LockerActivity
  • com.ombththz.ufqsuqx.bot.components.locker.LockerActivity$DummyActivity
  • com.ombththz.ufqsuqx.bot.components.screencast.ScreencastService
  • com.ombththz.ufqsuqx.bot.components.screencast.ScreencastStartActivity
  • com.ombththz.ufqsuqx.bot.components.screencast.UnlockActivity
  • com.ombththz.ufqsuqx.bot.components.socks5.Socks5ProxyService
  • com.ombththz.ufqsuqx.bot.HelperAdmin$MyHomeReceiver
  • com.ombththz.ufqsuqx.bot.PermissionsActivity
  • com.ombththz.ufqsuqx.bot.receivers.MainReceiver
  • com.ombththz.ufqsuqx.bot.sms.ComposeSmsActivity
  • com.ombththz.ufqsuqx.bot.sms.HeadlessSmsSendService
  • com.ombththz.ufqsuqx.bot.sms.MmsReceiver
  • com.ombththz.ufqsuqx.bot.sms.SmsReceiver
  • com.ombththz.ufqsuqx.core.injects_core.Screen
  • com.ombththz.ufqsuqx.core.injects_core.Worker
  • com.ombththz.ufqsuqx.core.PeriodicJobReceiver
  • com.ombththz.ufqsuqx.core.PeriodicJobService
  • com.ombththz.ufqsuqx.MainActivity
  • info.pluggabletransports.dispatch.service.DispatchReceiver
  • info.pluggabletransports.dispatch.service.DispatchService
  • info.pluggabletransports.dispatch.service.DispatchVPN
  • org.torproject.android.service.OrbotService

The fact that the manifest references activities, services, and receivers that it wants to run without defining them is the first indication that we are dealing with an “Android dropper.”
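For reference, this kind of manifest inspection can be reproduced with a jadx invocation along the following lines; the file and directory names are placeholders of our own choosing, not from the original sample:

# Decompile the APK: jadx writes decompiled Java under sources/ and the
# decoded manifest to resources/AndroidManifest.xml in the output directory.
jadx -d decompiled sample.apk
less decompiled/resources/AndroidManifest.xml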

Unpacking Android Droppers

An Android dropper is malware which obfuscates its behavior by hiding its payload and only decoding and loading the code it needs at runtime. As Ahmet Bilal Can explains, this makes it harder for AV and security researchers to detect the malware by including “reflection, obfuscation, code-flow flattening and trash codes to make [the] unpacking process stealthy.” While stealthy, the steps the malware takes to hide itself can still be detected and subverted with a little help from the dynamic instrumentation toolkit Frida. Frida is able to inject itself into the control-flow of a running app, introducing its own code. This can be helpful to detect typical methods malware uses to disguise itself and load the underlying payload. In this case, we can use a short script to detect that Java classes are being loaded dynamically:

var classLoader = Java.use('java.lang.ClassLoader');
var loadClass = classLoader.loadClass.overload('java.lang.String', 'boolean');
loadClass.implementation = function(str, bool){
    console.log("== Detected ClassLoader usage ==");
    console.log("Args: ", str, bool);
    return this.loadClass(str, bool);
}

Running this code, we get

$ frida -U -f com.ombththz.ufqsuqx -l class-loader-usage.js --no-pause
     ____
    / _  |   Frida 15.1.16 - A world-class dynamic instrumentation toolkit
   | (_| |
    > _  |   Commands:
   /_/ |_|       help      -> Displays the help system
   . . . .       object?   -> Display information about 'object'
   . . . .       exit/quit -> Exit
   . . . .
   . . . .   More info at https://frida.re/docs/home/
Spawned `com.ombththz.ufqsuqx`. Resuming main thread!
[Android Emulator 5554::com.ombththz.ufqsuqx]->
== Detected ClassLoader usage ==
Args:  com.honey.miletes.k false
== Detected ClassLoader usage ==
Args:  android.support.v4.content.FileProvider false
== Detected ClassLoader usage ==
Args:  com.ombththz.ufqsuqx.App false
== Detected ClassLoader usage ==
Args:  com.ombththz.ufqsuqx.MainActivity false
== Detected ClassLoader usage ==
Args:  com.ombththz.ufqsuqx.core.injects_core.Worker false
== Detected ClassLoader usage ==
Args:  com.ombththz.ufqsuqx.bot.PermissionsActivity false
== Detected ClassLoader usage ==
Args:  org.torproject.android.service.OrbotService false

Our missing classes are indeed being loaded dynamically!

Previous iterations of tor-hydra malware dynamically loaded a dex file (an Android Dalvik executable file), which could be seen with adb logcat, and used the syscall unlink to delete that file, which would be seen in an strace call. For this app, we can use the command

monkey -p com.ombththz.ufqsuqx -c android.intent.category.LAUNCHER 1 && set `ps -A | grep com.ombththz.ufqsuqx` && strace -p $2

to see the syscalls in real time. We did not observe unlink being used in this sample, so this iteration is doing something different. Java provides a method in java.io.File called delete, which will not trigger the unlink syscall. Using the following script, we can detect when that method is called, log the file the app attempted to delete, and turn the call into a non-operation:

var file = Java.use("java.io.File");
file.delete.implementation = function(a){
    console.log("=> Detected and bypassed Java file deletion: ", this.getAbsolutePath());
    return true;
}

The first few files deleted are of interest:

=> Detected and bypassed Java file deletion:  /data/user/0/com.ombththz.ufqsuqx/tyfkjfUjju/HjIgfhjyqutIhjf/tmp-base.apk.gjGyTF88583765359401054429.88g
=> Detected and bypassed Java file deletion:  /data/user/0/com.ombththz.ufqsuqx/tyfkjfUjju/HjIgfhjyqutIhjf/dfGgIgyj.HTgj
=> Detected and bypassed Java file deletion:  /data/user/0/com.ombththz.ufqsuqx/tyfkjfUjju/HjIgfhjyqutIhjf/base.apk.gjGyTF81.88g
=> Detected and bypassed Java file deletion:  /data/user/0/com.ombththz.ufqsuqx/tyfkjfUjju/HjIgfhjyqutIhjf
=> Detected and bypassed Java file deletion:  /data/user/0/com.ombththz.ufqsuqx/shared_prefs/multidex.version.xml.bak
=> Detected and bypassed Java file deletion:  /data/user/0/com.ombththz.ufqsuqx/shared_prefs/pref_name_setting.xml.bak
=> Detected and bypassed Java file deletion:  /data/user/0/com.ombththz.ufqsuqx/files
=> Detected and bypassed Java file deletion:  /data/user/0/com.ombththz.ufqsuqx/shared_prefs/prefs30.xml.bak
=> Detected and bypassed Java file deletion:  /data/user/0/com.ombththz.ufqsuqx/files/all_tor.zip

Once we issue an adb pull to download the base.apk.gjGyTF81.88g file from the device, we can use jadx again to determine that this includes the missing class definitions referenced in the manifest.
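Concretely, that step might look like the following. The on-device path comes from the deletion log above, the local output directory name is our own choice, and this assumes a rooted device or emulator where the app's data directory is readable:

# Pull the unpacked payload off the device, then decompile it with jadx.
adb pull /data/user/0/com.ombththz.ufqsuqx/tyfkjfUjju/HjIgfhjyqutIhjf/base.apk.gjGyTF81.88g
jadx -d payload-src base.apk.gjGyTF81.88g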

Investigating the Unpacked Payload

Looking into these files, there is a string obfuscation method that appears thousands of times throughout the code, unaltered from instance to instance:

private static String $(int i, int i2, int i3) {
    char[] cArr = new char[i2 - i];
    for (int i4 = 0; i4 < i2 - i; i4++) {
        cArr[i4] = (char) ($[i + i4] ^ i3);
    }
    return new String(cArr);
}

Wherever we see a call which looks like $(166, 217, 28670) in the code, it refers to this function and uses the $ variable in the same scope to return a string. We can use a Java sandbox like this one to define the locally-scoped $ variable, the $ method, and print out the decoded string.
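As a minimal sketch of that approach, the snippet below drops the method into a standalone class. The contents of the $ array, the arguments, and the XOR key here are hypothetical values chosen for illustration; the real ones must be copied from the scope of each call in the reversed source:

// Hypothetical example: with this array and key, $(0, 4, 0x6ff6) decodes to "http".
public class Decoder {
    // In the real sample, paste the locally-scoped $ array from the reversed class.
    private static final char[] $ = {0x6f9e, 0x6f82, 0x6f82, 0x6f86};

    // Copied from the obfuscator: XOR-decodes the slice $[i..i2) with key i3.
    private static String $(int i, int i2, int i3) {
        char[] cArr = new char[i2 - i];
        for (int i4 = 0; i4 < i2 - i; i4++) {
            cArr[i4] = (char) ($[i + i4] ^ i3);
        }
        return new String(cArr);
    }

    public static void main(String[] args) {
        System.out.println($(0, 4, 0x6ff6)); // prints "http"
    }
}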

In sources/com/ombththz/ufqsuqx/bot/network/TorConnectionHelper.java we see a method which looks like a promising lead called loadAdminUrl. Decoding the $(556, 664, 4277) call, we get a base64-encoded onion address:

http://loa5ta2rso7xahp7lubajje6txt366hr3ovjgthzmdy7gav23xdqwnid.onion/api/mirrors

This address is available over the Tor network and contains a base64-encoded URL which references the command and control (C&C) server, the server from which the malware operator issues commands. The author of this post reached out to the Tor Project on March 7th to inform them of this C&C server. On app bootstrap, the app connects to the Tor network using code lifted from Orbot in order to discover the C&C server, and then promptly drops the Tor connection. When we first did this investigation, the domain referenced was yuuzzlllaa.xyz, but it has since changed to zhgggga.in. When accessed, the C&C server presents a login page for its administrator.

One of the main features of the Tor network is censorship-resistance. If you can access the Tor network, you can access information and websites that cannot easily be taken down because of the way the network is architected. This is a good thing for dissidents in censorship regimes or whistleblowers trying to get privileged information to reporters: the services they rely on will be available even if their adversaries don’t want them to be. This is a double-sided coin, though—in this case malware is also able to direct victims’ devices to C&C servers in a way that can’t be taken down. There is no way to have one without the other and keep the integrity of the network intact. In this case, the clearnet domain yuuzzlllaa.xyz was presumably taken down after being reported and then the malware operator spun up another domain at zhgggga.in without much interruption of the malware command and control. In these cases, reporting malicious C&C domains seems like a game of whack-a-mole: as soon as you take one down, the next pops up.

In the file com/ombththz/ufqsuqx/bot/DexTools.java we see an interesting method, run(), which loads a stage-2 payload from the /payload path on the admin C&C URL. This is a dex file which can be decoded by jadx, revealing an app ID of com.fbdevs.payload. Unfortunately for our analysis, this file contains mostly uninteresting and non-malicious code.

Looking at the com/ombththz/ufqsuqx/bot/components/ path, many of the components seem to be inherited directly from the Android BianLian malware, an excellent analysis of which can be found here. One component not included in that previous iteration is under the socks5 path; it opens a proxy server to a specified host in order to receive commands and launch attacks. All the components are activated and controlled by the C&C server through a Firebase Cloud Messaging (FCM) connection, allowing messages to target specific devices.

Fighting Back Against Malware

Despite the relatively state-of-the-art techniques employed to thwart analysis, a few powerful, publicly accessible open-source tools were enough to interrupt the control flow and reverse engineer this sample. More complex malware will detect hardware profiles, determine that it is being run in an emulator, and change its behavior to further hide its core functionality. Still other samples will deploy malicious code in deeper-stage payloads in an attempt to further bury their true behavior. But this sample shows how a few simple steps can peel those layers back and eventually reveal the control flow of a new class of malware. Moving forward, other samples in this class can be analyzed in much the same way to track changes in the ecosystem and how malware developers respond to attempts to mitigate their effectiveness.

Analyzing malware and tracing its evolution is important for fighting back against it. Not only does it result in better signatures that anti-virus software can use to protect users, it also helps us understand what protections are necessary at the operating-system level and guides platform security recommendations. Sometimes it can lead to C&C servers being shut down, giving the targets of the botnets some much-needed reprieve. And lastly, it gives users insight into what software is running on their devices so they can take back control.

Bill Budington

The NDO Fairness Act Is an Important Step Towards Transparency

2 months 3 weeks ago

The First Amendment guarantees the right to speak about your own involvement in court proceedings. Yet the Stored Communications Act currently allows the government to prevent electronic communications companies from notifying their users when they receive law enforcement orders for customer data. These gag orders can silence companies for any period a court deems appropriate, even indefinitely, with no fixed end date at all. This leaves the targets of the collection and the broader public unaware of the surveillance and unable to challenge it in court. EFF has long objected to the government’s use of indefinite gag orders to silence communications platforms.

That’s why we recently sent a letter to the House Committee on the Judiciary in support of H.R. 7072, the Nondisclosure Order (NDO) Fairness Act. This bill takes important steps toward bringing transparency and accountability to the federal government’s use of sweeping gag orders accompanying requests for user data, and we appreciate Chair Nadler and the other co-sponsors for addressing these important issues.

The legislation does away with indefinite gag orders, limiting the duration of nondisclosure orders to a maximum of 30 days and allowing the government to seek extensions only in 30-day increments. The NDO Fairness Act also requires courts, before issuing nondisclosure orders, to explain in writing why notice of the collection would be substantially likely to result in harm, and to narrowly tailor orders to avoid complete bans on speech wherever possible. This is a much more demanding standard than the current requirement that courts find there is “reason to believe” that such harm “may” occur. And the legislation puts in place important measures to ensure greater transparency around the government’s use of these secretive orders, both for targeted individuals and the larger public, including by requiring the government to notify targets of surveillance that their communications were intercepted and to publish an annual report that provides information about the use of surveillance under Section 2703.

These reforms are a welcome step forward in reforming the secrecy surrounding electronic surveillance and bringing the Stored Communications Act closer in line with constitutional guarantees. The bill would be even stronger if it provided a more accessible path for individuals to seek remedies for government violations of this law, and we look forward to working with the Committee to enact these and other reforms.

India McKinney

Day of Action for Antitrust: Our Rights Are Tied to Having Choices

2 months 3 weeks ago

Today, EFF joins a diverse coalition of civil society and tech companies to call on Congress to pass strong anti-monopoly rules for the Internet. We do this because it has long been EFF’s belief that users have the right to make their own choices—and the current state of Big Tech has taken many of our choices from us.

Join the Day of Action

Tell Congress to support competition

We live in a world that increasingly requires us to be online. The promise of all this technology was that barriers would be lowered, allowing more people to exercise their rights—especially rights related to speech. For those who work in securing rights for others, activists and journalists, for example, this has been an invaluable change.

However, that promise has been broken by the rise of a few unassailable companies. No longer does something replace Facebook the way Facebook replaced MySpace. Instead, Facebook buys Instagram and WhatsApp to avoid being replaced. And then being on Facebook stops being a choice and becomes a requirement for communicating with some segments of the population. It’s difficult to leave if those people are your friends and family, but it’s impossible if you are a small business that needs to reach a certain group of people.

If you can’t be found by a Google search, you might as well not exist. And if your app can’t get into an Apple or Google app store, it also may as well not exist.

All of this runs counter to a basic principle of the internet: you get to decide. If you learn that there’s a vulnerability in iMessage, you should be able to uninstall it. If you are an activist or journalist who could be targeted, that is a potentially lifesaving choice you need to be able to make. And if you simply want a different app to take iMessage’s place on your phone, you should get to make that choice.

If you choose to use the Apple store it should be because you trust its safety determinations, not simply because it ranges from difficult to impossible to do otherwise. If you choose to use Google search it should be because it gives the best service, not simply because alternatives are harder to access. And if you are on Facebook it should be because it’s the best social network, not simply because it keeps buying up its competitors and shoving everything under the “Meta” brand.

Services need to go back to competing for our time by being the best, rather than the only. Our security is increased when multiple companies are working to be the safest, most secure service. Our privacy is increased when one company isn’t able to track us across our search, email, video watching, map, and health data.

The Open App Markets Act would give us back control of our mobile devices by forcing companies to let us decide what is on the devices we paid hundreds of dollars to own. The ACCESS Act would allow us to choose to leave the biggest services by promoting interoperability—allowing us to stay in contact with people on the largest services without being forced to sign over our information to those services. The Platform Competition and Opportunity Act would increase oversight on acquisitions by the biggest tech companies, giving new services a fair chance to grow into real competition instead of being bought up by a giant.

Join us in this day of action by emailing your representatives about these bills today. And if you have a business or organization that would like to join the day, visit antitrustday.org.

Join the Day of Action

Tell Congress to support competition

Katharine Trendacosta

California: Speak Up For Biometric and Student Privacy

2 months 3 weeks ago

California has shown itself to be a national privacy leader. But there is still work to do. That’s why EFF is proud to sponsor two bills in this year’s legislative session—both with the co-sponsorship of Privacy Rights Clearinghouse—that would strengthen privacy protections in the state. These bills target two particularly pernicious forms of data collection. Both will be heard on April 5 in the California Senate Judiciary Committee, and we’re asking Californians to tell committee members to pass these bills.

Advancing Biometric Privacy

Authored by Senator Bob Wieckowski, S.B. 1189 requires private entities to obtain your opt-in consent before collecting your biometric information. Biometric information is incredibly sensitive and, by its very nature, is tied immutably to our identities. While you can change a password, you can’t easily change your face, the rhythm of your walk, or the ridges of your fingerprints. Despite this, some companies collect this information without asking first—by, for example, taking faceprints from every person who walks into a store. They may then go on to share or sell that information.

This is wrong. People should have control over who they trust with their biometric information. And companies must be held accountable if they break that trust. Like the landmark Illinois Biometric Information Privacy Act (BIPA), S.B. 1189 gives individuals the right to sue companies that violate the law. This is the same type of provision that allowed Facebook users in Illinois to take the company to task for collecting their faceprints without permission. That case ended in a $650 million settlement for Illinois’ Facebook users.

This bill has the support of a broad range of both California and national organizations active on surveillance issues, which speaks to the importance of implementing this privacy protection. The Greenlining Institute, Media Alliance, Oakland Privacy, the Consumer Federation of California, the Consumer Federation of America, Consumer Action, and Fairplay are all in support. If you'd like to join them in supporting this bill, take our action to support S.B. 1189.

Take Action

Speak up for California Biometric Privacy

Protecting Student Privacy

EFF is also proud to sponsor S.B. 1172, the Student Test Taker Privacy Protection Act (STTPPA), a first-of-its-kind piece of legislation aimed at curbing some of the worst practices of remote proctoring companies. Authored by Senator Dr. Richard Pan, this bill places limits on what proctoring companies collect and provides students the right to their day in court for privacy violations. There has been a 500% increase in the use of these proctoring tools during the pandemic—in 2020, more than half of higher education institutions used remote proctoring services and another 23% were considering doing so.

Proctoring companies have also suffered data breaches, and federal lawmakers and California’s Supreme Court have raised questions about proctoring company practices. But no meaningful data protections have been put into place to protect the privacy of test takers. Given their widespread use, proctoring companies must be held accountable, and this bill will do that.

The STTPPA directs proctoring companies not to collect, use, retain, or disclose test takers’ personal information except as strictly necessary to provide proctoring services. If a company violates that rule, the student can take the proctoring company to court. This simple bill gives those directly harmed by privacy violations—test takers—the opportunity to protect their data and privacy.

Leading student and privacy advocates have lent their support to the bill, including: Center for Digital Democracy, Citizens Privacy Coalition of Santa Clara County (CPCSCC), Common Sense, Fairplay, The Greenlining Institute, The Parent Coalition for Student Privacy, Media Alliance, and Oakland Privacy.

Take Action

Speak up for California Student Privacy

If you believe that companies should have limits on the information they collect and that people should have ways to hold them accountable, please tell the California Senate Judiciary Committee to vote “yes” on S.B. 1189 and S.B. 1172.

Hayley Tsukayama

Public.Resource.Org Can Keep Freeing the Law: Court Allows Posting Public Laws And Regulations Online

2 months 3 weeks ago
Private entities lose bid to control and profit from how we learn about the law

SAN FRANCISCO—As part of its ongoing work to ensure that people can know and understand the laws they live under, Public.Resource.org, a nonprofit organization, on Thursday vindicated its ability to publicly post important laws online in standard formats, free of copy protections and cumbersome user interfaces.

The win for Public Resource—represented by the Electronic Frontier Foundation (EFF) with co-counsel Fenwick & West and David Halperin—in the U.S. District Court for the District of Columbia reinforces the critical idea that our laws belong to all of us, and we should be able to find, read, and comment on them free of registration requirements, fees, and other roadblocks.

“This is a crucial victory for the public as well as Public Resource,” said Corynne McSherry, EFF’s legal director. “We are pleased that the court recognized and affirmed that no private entity should be able to dictate how we learn about and comment on the law.”

The American Society for Testing and Materials (ASTM), National Fire Protection Association Inc. (NFPA), and American Society of Heating, Refrigerating, and Air-Conditioning Engineers (ASHRAE) are among organizations that develop private sector codes and standards aimed at advancing public safety, ensuring compatibility across products and services, facilitating training, and spurring innovation. ASTM, for example, has developed more than 12,000 standards used in fields ranging from consumer products to construction to medical services and devices. Federal, state, and local governments have incorporated by reference thousands of such standards into law, making them binding upon everyone.

Public Resource—a tiny California nonprofit founded by open records advocate Carl Malamud, whose mission is to make government more accessible—acquires and posts online a wide variety of public documents that the public should have ready access to but often does not, such as nonprofits’ tax returns, government-produced videos, and codes and standards incorporated into law by reference.

“Technical standards incorporated into law are some of the most important rules of our modern society,” said Malamud. “In a democracy, the people must have the right to read, know, and speak about the laws by which we choose to govern ourselves.”

ASTM, NFPA, and ASHRAE sued Public Resource in 2013 for copyright and trademark infringement and unfair competition.

In a decision issued Thursday, U.S. District Judge Tanya S. Chutkan agreed that Public Resource’s sharing of the vast majority of standards is a lawful fair use, although she ruled that Public Resource should not use the plaintiffs’ trademarked logos in such posts.

“In today’s world, the ability to access our laws online, without paywalls or technical barriers, is vital,” said EFF attorney Mitch Stoltz. “That’s why this fair use decision, which allows Public Resource to continue its work, is so important.”

EFF and Mr. Malamud have worked together to free the law for public use for many years. EFF and its co-counsel also represented Public Resource in a separate but similar lawsuit filed in 2014 by the American Educational Research Association Inc., the American Psychological Association, and the National Council on Measurement in Education. Those groups dropped their lawsuit in October 2020.

And Public Resource also prevailed in a 2013 lawsuit that EFF filed on its behalf against a sheet metal and air conditioning contractor group that tried to force Public Resource to take down a federally mandated standard on air-duct leakage. The group backed down and agreed to publicly affirm that it will no longer claim copyright in the standards.

For the opinion:
https://eff.org/document/astm-v-publicresourceorg-summary-judgment-opinion

For background on the case:
https://www.eff.org/cases/publicresource-freeingthelaw

Contact: Corynne McSherry, Legal Director, corynne@eff.org

Joshua Richman

Dream Job Alert: EFF Seeks a Lawyer with Patent and Copyright Experience 

2 months 3 weeks ago

We’ve got an amazing opportunity for a litigator to join EFF’s legal team. If you have experience and interest in patent policy or litigation and want to represent the public interest in patent and copyright policy, we want to hear from you. 

EFF has always stood up for the freedom to innovate and learn. When the U.S. patent system doesn’t promote those freedoms, we step in to fight back. Just take a look at some of our work in 2021:

  • We fought in court for transparency and open records in the patent system
  • We filed an amicus brief defending the most important review system for granted patents against an attempt to have it thrown out, leading to a victory at the U.S. Supreme Court 
  • We spoke up for users at the U.S. Patent and Trademark Office, opposing new types of design patents that would have hurt computer graphics designers
  • We worked to defend essential limits on patentable subject matter
  • We supported a bill that would help the public find out the true owners of litigated patents
  • We supported a bill to close loopholes that have let some patent trolls avoid having their patents reviewed by the U.S. Patent and Trademark Office

We have a long history of advocating for a proportionate, transparent, and accountable patent system in the courts and before Congress and the PTO. And when internet users and small businesses get threatened by patent trolls with bogus software patents, EFF is often their first stop for truthful information, legal advice, and advocacy help. 

The lawyer who steps into this role will have a wide-ranging array of responsibilities: all aspects of litigation, blogging and writing for a general audience, public speaking and media appearances, and some direct client counseling. 

We’re looking for interest and experience in patent law, as well as related fields such as copyright. EFF works on cutting-edge issues that frequently touch on multiple areas, so half or more of your work will be in these related fields rather than in patent practice alone.

Please check out our job description and apply today! We’re seeking a candidate with at least 3 years of litigation experience, although we’re open to more. If you are an attorney with questions about this role, please email me at kit@eff.org.

Kit Walsh