To Best Serve Students, Schools Shouldn’t Try to Block Generative AI, or Use Faulty AI Detection Tools

3 months 1 week ago

Generative AI gained widespread attention earlier this year, but one group has had to reckon with it more quickly than most: educators. Teachers and school administrators have struggled with two big questions: should the use of generative AI be banned? And should a school implement new tools to detect when students have used generative AI? EFF believes the answer to both of these questions is no.

AI Detection Tools Harm Students

For decades, students have had to defend themselves from an increasing variety of invasive technology in schools—from disciplinary tech like student monitoring software, remote proctoring tools, and comprehensive learning management systems, to surveillance tech like cameras, face recognition, and other biometrics. “AI detection” software is a new generation of inaccurate and dangerous tech that’s being added to the mix.

AI detection tools such as GPTZero and TurnItIn claim they can determine (with varying levels of accuracy) whether a student’s writing was likely to have been created by a generative AI tool. But these detection tools are so inaccurate as to be dangerous, and have already led to false charges of plagiarism. As with remote proctoring, this software looks for signals that may not indicate cheating at all. For example, these tools are more likely to flag writing as AI-created when the word choice is fairly predictable and the sentences are less complex—and as a result, research has already shown that false positives are more frequent for some groups of students, such as non-native speakers.
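
To see why that kind of signal produces false positives, consider a deliberately simplified sketch of the surface statistics such detectors tend to rely on. This is not any vendor’s actual algorithm—GPTZero’s and TurnItIn’s methods are proprietary—just an illustration of how low vocabulary variety and uniform sentence lengths can make plain, simple human writing score as “AI-like.”

    # Toy illustration (not GPTZero's or TurnItIn's actual method) of the kind
    # of surface statistics "AI detectors" lean on: low vocabulary variety and
    # uniform sentence lengths raise the score, which is why plain, simple
    # human writing can be falsely flagged.
    import re
    import statistics

    def naive_ai_score(text: str) -> float:
        sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
        words = re.findall(r"[a-z']+", text.lower())
        if len(sentences) < 2 or not words:
            return 0.0
        lengths = [len(s.split()) for s in sentences]
        burstiness = statistics.pstdev(lengths) / statistics.mean(lengths)
        variety = len(set(words)) / len(words)  # type-token ratio
        # Lower burstiness and lower vocabulary variety => higher "AI-likeness".
        return round(max(0.0, 1.0 - (burstiness + variety) / 2), 2)

    simple_human_text = (
        "I like my school. I like my teacher. I like my friends. "
        "We read books. We write stories. We play games."
    )
    print(naive_ai_score(simple_human_text))  # scores high despite being human-written

A real detector is more sophisticated, but the failure mode is the same: the score measures how predictable text is, not whether a human wrote it.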


There is often no source document to prove one way or another whether a student used AI in writing. As AI writing tools improve and are able to reflect all the variations of human writing, the possibility that an opposing tool will be able to detect whether AI was involved in writing with any kind of worthwhile accuracy will likely diminish. If the past is prologue, then some schools may combat the growing availability of AI for writing with greater surveillance and increasingly inaccurate disciplinary charges. Students, administrators, and teachers should fight back against this. 

If you are a student wrongly accused of using generative AI without authorization for your school work, the Washington Post has a good primer for how to respond. To protect yourself from accusations, you may also want to save your drafts, or use a document management system that does so automatically.

Bans on Generative AI Access in Schools Hurt Students

Before AI detection tools were more widely available, some of the largest districts in the country, including New York City Public Schools and Los Angeles Unified, had banned access to large language model AI tools like ChatGPT outright due to cheating fears. Thankfully, many schools have since done an about-face and are beginning to see the value in teaching about these tools instead. New York City Public Schools lifted its ban after only four months, and the number of schools with a policy and curriculum that includes them is growing. New York City Public Schools’ Chancellor wrote that the school system “will encourage and support our educators and students as they learn about and explore this game-changing technology while also creating a repository and community to share their findings across our schools.” This is the correct approach, and one that all schools should take. 

This is not an endorsement of generative AI tools, as they have plenty of problems, but outright bans only stop students from using them while physically in school—where teachers could actually explain how they work and their pros and cons—and obviously won’t stop their use the majority of the time. Instead, they will only stop students who don’t have access to the internet or a personal device outside of school from using them. 

These bans are not surprising. There is a long history of school administrators and teachers blocking the use of new technology, especially around the internet. For decades after calculators became accessible to the average student, educators argued about whether they should be allowed in the classroom. Schools have banned search engines; they have banned Wikipedia. All of these tools have a potentially useful place in education, and teachers are well-positioned to explain their nuances. If a tool is effective at creating shortcuts for students, then teachers and administrators should consider emphasizing how it works, what it can do, and, importantly, what it cannot do (and, in the case of many online tools, what data it may collect). Hopefully, schools will take a different trajectory with generative AI technology.

Artificial intelligence will likely impact students throughout their lives. The school environment presents a good opportunity to help them understand some of the benefits and flaws of such tools. Instead of demonizing it, schools should help students by teaching them how this potentially useful technology works and when it’s appropriate to use it. 

Jason Kelley

Speaking Freely: Agustina Del Campo

3 months 1 week ago

Agustina Del Campo is the Director at the Center for Studies on Freedom of Expression and Access to Information (CELE) at the University of Palermo in Buenos Aires, Argentina. She holds a law degree from Universidad Catolica Argentina and an LL.M. in International Legal Studies from American University Washington College of Law.

Agustina has extensive experience in human rights training, particularly as it relates to freedom of expression and the press in the Inter-American human rights system. She has taught and lectured in several Latin American countries and the U.S.

EFF’s Executive Director Cindy Cohn caught up with Agustina at RightsCon 2023 in Costa Rica. In this brief but powerful exchange Agustina discusses how, though free speech has a bad rap these days, it is inherent in any advocacy agenda aimed at challenging – and changing – the status quo and existing power dynamics.

Cindy Cohn: Would you introduce yourself?

Sure, I’m Agustina Del Campo and I direct the Center for Studies on Freedom of Expression and Access to Information (CELE) in Argentina.

Cohn: First, what does free speech mean to you?

Free speech means a lot of things to me, but it basically means the power to bring unpopular ideas to the mainstream. That is what free speech means to me. It’s the power of advocating for something.

Cohn: Wonderful. How do you think online speech should or shouldn’t be regulated?

Well, I think it should or shouldn’t be regulated in the same way that offline speech should or shouldn’t be regulated. The power of speech is basically not the power to share popular ideas, but the power to share unpopular ideas, and popular ideas are online and offline and they have an impact online and offline. We’ve been discussing the limits and the possibilities and the opportunities and the challenges for speech offline for a number of years, so I think in whatever we decide to do in online speech we should at least bear in mind the discussions that we had prior to getting to this new technology and new tools.

Cohn: I know you’ve told me in the past that you’re a feminist and, obviously you live in Argentina, so you come from the Global Majority. Often we are told that free speech is a white western concept—how do you react to that accusation?

It’s interesting, in a lot of countries the freedom of expression agenda has been somewhat lost. It’s an unpopular time for freedom of expression. A lot of that unpopularity may be due to this association precisely—the freedom of expression agenda as a white male, middle-aged kind of right—and there’s a lot of anger to this place that freedom of expression has. My immediate reaction is the fact that you can have an advocacy agenda for women, for abortion rights, for anything basically, the fact that you were able to bring vulnerable populations to the mainstream conversation, the fact that we are sensitive to gender, to pronouns, to indigenous populations, to children’s needs—it’s largely the product of people fighting for the possibilities of those groups and voices to be heard. It wasn’t long ago that in my country and in my region, Latin America, there was a very conservative regime in a lot of countries where a lot of these claims that today are mainstream and popular and shared were unspeakable. You could not raise them anywhere. It is freedom of expression that has facilitated and allowed those discussions to flourish to become what they are. The fact that a lot of those agendas, the feminist agenda, the most vulnerable populations’ agendas are now really established in a lot of countries and flourishing took a lot of fighting from freedom of expression advocates so that those voices could be heard. The fact that we’re winning doesn’t mean we’ll always be. And we need to protect the guarantees and rights that allowed us to get to where we are now.

Cohn: That is so perfect. I think I just want to stop there. I wish I could put that on posters.

Cindy Cohn

Low Budget Should Not Mean High Risk: Kids' Tablet Came Preloaded with Sketchyware

3 months 2 weeks ago

It’s easy to get Android devices from online vendors like Amazon at different price points. Unfortunately, at the lower price points it is also easy to end up with an Android device carrying malware. Several factors contribute to this: multiple devices manufactured in the same facility, a lack of security standards when choosing components, and a lack of quality assurance and scrutiny by the vendors that sell these devices. We investigated a tablet bought from the online vendor Amazon that appeared to have malware on it: a Dragon Touch KidzPad Y88X 10 kids’ tablet. As of this post, the tablet in question is no longer listed on Amazon, although it was available for the majority of this year.

Dragon Touch KidzPad Y88X 10

It turns out malware was present, with an added bonus of pre-installed riskware and a very outdated parental control app. This is a major concern since this is a tablet marketed for kids.

Parents have plenty of worry and concern about how their kids use technology as it is. Ongoing conversations and negotiations about the time spent on devices happen in many households. Potential malware or riskware should not be a part of these concerns just because you purchased a budget Android tablet for your child. It just so happens that some of the parents at EFF conduct security research. But this is not what it should take to keep your kid safe.

“Stock Android”

To understand this issue better, it's useful to know what “stock Android” means and how manufacturers approach choosing an OS. The Android operating system is open sourced by Google and officially known as the "Android Open Source Project," or AOSP. The source code is stripped down and doesn't even include Google apps or the Google Play Store. Most phones or tablets you purchase with Android are AOSP with layers of customization—a “skinned” version of AOSP. Even Google’s current flagship phone, the Pixel, does not come with stock Android.

Even though custom Android distributions or ROMs (Android Read Only Memory) can come with useful features, others can come with “bloatware” or unwanted apps. For example, in 2019 when Samsung pre-installed the Facebook app on its phones, the only option was to “disable” the app. Worse, in some cases custom ROMs can come with pre-installed malware. Android OEMs (original equipment manufacturers) can pre-install apps that have high-level privileges and may not be as obvious as an icon you can see on your home screen. It's not just apps, though. New features provided with AOSP may be severely delayed in custom ROMs if the device manufacturer isn't diligent about porting them in. This could be for reasons like hardware limitations or simply not prioritizing updates.

Screen Time for Sketchyware

Similar to an Android TV box we looked into earlier this year, we found the now notorious Corejava malware directories on the Dragon Touch tablet. Unlike that Android TV box, this tablet didn’t come rooted. However, we could see that the directories “/data/system/Corejava” and “/data/system/Corejava/node” were present on the device, indicating that Corejava was active in this tablet’s firmware.
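
As a rough illustration of the kind of check involved, the sketch below probes a connected device over adb for those indicator directories. It assumes the adb tool is installed and on your PATH and that the shell can read /data/system (on production, non-rooted firmware this usually requires elevated privileges), so treat it as a starting point rather than a definitive test.

    # Hedged sketch: look for the Corejava indicator directories on a device
    # attached over adb. Assumes `adb` is installed and that the shell can
    # read /data/system; "not visible" may mean either absent or unreadable.
    import subprocess

    INDICATORS = ["/data/system/Corejava", "/data/system/Corejava/node"]

    def directory_visible(path: str) -> bool:
        # `[ -d path ]` exits successfully only if the directory exists
        # and is visible to the current shell user.
        result = subprocess.run(
            ["adb", "shell", f"[ -d {path} ] && echo present || echo absent"],
            capture_output=True, text=True,
        )
        return "present" in result.stdout

    for path in INDICATORS:
        print(path, "-> FOUND" if directory_visible(path) else "-> not visible")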

We didn’t originally suspect this malware was present; links to other manufacturers and odd requests made from the tablet prompted us to take a closer look. We first booted up this Dragon Touch tablet in May 2023, after the Command and Control (C2) servers that Corejava depends on were taken down, so any attempts to download malicious payloads, if active, wouldn't work (for now). Given the lack of “noise” from the device, we suspect that this malware indicator is, at minimum, a leftover remnant of “copied homework” from hasty production, or, at worst, something left in place for possible future activity.

The tablet also came preloaded with Adups (which was also found on the Android TV boxes) in the form of “firmware over the air” (FOTA) update software, packaged as an application called “Wireless Update.”

Adups has a history of being malware, but “clean” versions do exist, and one of those “clean” versions was on this tablet. Given its history and its extensive system-level permissions to download whatever application it wants from the Adups servers, it still poses a concern. Adups comes preinstalled in this Dragon Touch firmware: if you factory reset the device, the app will return. There’s no way to uninstall or disable this variant of Adups without technical knowledge and being comfortable with the command line. Using OTA software with such a fraught history is a very questionable decision for a children’s tablet.
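
For readers who are comfortable with the command line, here is a rough sketch of the kind of adb workflow that can remove or disable a preinstalled package for the primary user. The package name shown (com.adups.fota) is a commonly reported Adups identifier, not one we are asserting for this specific tablet, so confirm what “Wireless Update” actually resolves to on your device first; and remember that a factory reset will bring the app back.

    # Hedged sketch of removing/disabling a preinstalled OTA updater over adb.
    # com.adups.fota is an assumed package name -- verify it on your device.
    import subprocess

    SUSPECT = "com.adups.fota"  # assumption; substitute the real package name

    def adb(*args: str) -> str:
        return subprocess.run(["adb", *args], capture_output=True, text=True).stdout

    # 1. List system packages and look for "adups" or "fota" in the output.
    print(adb("shell", "pm", "list", "packages", "-s"))

    # 2. Remove the package for user 0 only (the APK stays on the system
    #    partition), or fall back to disabling it if uninstall is refused.
    print(adb("shell", "pm", "uninstall", "-k", "--user", "0", SUSPECT))
    print(adb("shell", "pm", "disable-user", "--user", "0", SUSPECT))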

Connecting the Dots

The connection between the infected Dragon Touch and the Android TV box we previously investigated was closer than we initially thought. After seeing a customer review for an Android TV box for a company at the same U.S. address as Dragon Touch, we discovered Dragon Touch is owned and trademarked by one company that also owns and distributes other products under different brand names.

This group, which registered multiple brands and shared an address with Dragon Touch, sold the same tablet we looked at in other online markets, like Walmart. This same entity apparently once sold the T95Z model of Android TV boxes under the brand name “Tablet Express,” along with devices like the Dragon Touch tablet. The T95Z was in the family of TV boxes investigated after researchers started taking a closer look at these types of devices.

With the widespread use of these devices, it’s safe to say that any Android devices attached to these sellers should be met with scrutiny.

Privacy Issues

The Dragon Touch tablet also came with a very outdated version of the KIDOZ app pre-installed. The app touts being “COPPA Certified” and claims it “turns phones & tablets into kids friendly devices for playing and learning with the best kids’ apps, videos and online content.” This version operates as a kind of mini operating system in which you can download games and apps and configure parental controls.

We noticed the referrer for this app was “ANDROID_V4_TABLET_EXPRESS_PRO_GO.” “Tablet Express” is no longer an operational company, so it appears Dragon Touch repurposed an older version of the KIDOZ app. KIDOZ only distributes its app to device manufacturers to preload on devices for kids; it's not in the Google Play Store.

This version of the app still collects and sends data to “kidoz.net” on usage and physical attributes of the device. This includes information like device model, brand, country, timezone, screen size, view events, click events, log time of events, and a unique “KID ID.” In an email, KIDOZ told us that the “calls remain unused even though they are 100% certified (COPPA),” in reference to the information sent to their servers from the app. The older version still has an app store of very outdated apps as well. For example, we found a drawing app, "Kids Paint FREE," attempting to send exact GPS coordinates to an ad server. The ad server this app calls no longer exists, but some of the apps in the KIDOZ store are still operational despite having deprecated code. This leakage of device-specific information over primarily HTTP (insecure) web requests can be targeted by bad actors who want to siphon information either on the device or by obtaining these defunct domains.

Several security vendors have labeled the version of the KIDOZ app we reviewed as adware. The current version of KIDOZ is less of an issue since the internal app store was removed, so it's no longer labeled as adware. Thankfully, you can uninstall this version of KIDOZ. KIDOZ does offer the latest version of its app to OEMs, so ultimately the responsibility lies with Dragon Touch. When we reached out to KIDOZ, they said they would follow up with various OEMs to offer the latest version of the app.

Simple racing games from the old KIDOZ app store asking for location and contacts.

Malware and riskware come in many different forms. The burden of remedy for pre-installed malware and sketchyware falling to consumers is absolutely unacceptable. We'd like to see some basic improvements for how these devices marketed for children are sold and made:

  • There should be better security benchmarks for devices sold in large online markets, especially devices packaged to appear safe for kids.
  • If security researchers find malware on a device, there should be a more effective path to remove these devices from the market and alert customers.
  • There should be a minimum standard for Android devices sold, requiring a baseline of the security and privacy features available from AOSP. For instance, this Dragon Touch kids’ tablet is running Android 9, which is now five years old; Android 14 is the latest stable OS at the time of this report. (A quick way to check a device’s own Android version and patch level is sketched below.)
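
As a small aside for parents or researchers, here is a minimal sketch (assuming adb is installed and USB debugging is enabled on the device) of how to read a device’s Android version and security patch level, two quick indicators of how stale its firmware is.

    # Minimal sketch: read the Android release and security patch level of a
    # device attached over adb. Assumes adb is on PATH and debugging is enabled.
    import subprocess

    def getprop(prop: str) -> str:
        out = subprocess.run(["adb", "shell", "getprop", prop],
                             capture_output=True, text=True)
        return out.stdout.strip()

    print("Android version:", getprop("ro.build.version.release"))
    print("Security patch :", getprop("ro.build.version.security_patch"))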

Devices with software that has a malicious history, along with out-of-date apps that leak children’s data, create a larger scope of privacy and security problems that deserve closer scrutiny than they currently receive. It took over 25 hours to assess all the issues with this one tablet. Since this was a custom Android build, the only possible source of documentation was the company, and there wasn’t much. We were left to look at the breadcrumbs left on the image instead, such as custom system-level apps, chip-processor-specific quirks, and pre-installed applications. In this case, following the breadcrumbs allowed us to make the needed connections to how this device was made and the circumstances that led to the sketchyware on it. Most parents aren't security researchers and do not have the time, will, or energy to think about these types of problems, let alone fix them. Online vendors like Amazon and Walmart should start proactively catching these issues and invest in better quality and source checks on the many consumer electronics in their markets.

Investigated Apps, Logs, and Tools List:

APKs (Apps):

Logs:

Tools:

  • Android Debug Bridge (adb) and Android Studio for shell and emulation.
  • Logcat for app activity on device.
  • MOBSF for initial APK scans.
  • JADX GUI for static analysis of APKs.
  • PiHole for DNS requests from devices.
  • VirusTotal for graphing connections to suspicious domains and APKs.

EFF Director of Investigations Dave Maass contributed research to this report.

Alexis Hancock

EFF Urges FTC to Address American Resellers of Malware on Android TV Set-Top Boxes

3 months 2 weeks ago
Regulators must step in to halt the sale to consumers of devices that are known to be compromised by malware.

SAN FRANCISCO—The Federal Trade Commission (FTC) must act to halt sales by Amazon, AliExpress, and other resellers of Android television set-top boxes and mobile devices manufactured by AllWinner and RockChip that have been pre-infected with malware before ever reaching consumers, the Electronic Frontier Foundation (EFF) urged Tuesday in a letter to FTC commissioners.

“We believe that the sale of these devices presents a clear instance of deceptive conduct: the devices are advertised without disclosure of the harms they present. They also expose the buyers to an unfair risk which starts after simply powering the device on and connecting it to the internet,” EFF’s letter says. “Here, where products are sold containing real malware at the point of sale, issuing sanctions to the resellers will provide a powerful incentive for them to pull these products from the market and protect their customers.” 

When first connected to the internet, these infected devices immediately start communicating with botnet command and control servers, the letter explains. Then they connect to a vast click-fraud network—in which bots juice advertising revenue by producing bogus ad clicks—which a recent report by HUMAN Security dubbed BADBOX. This operates in the background of the device, unseen by the buyers; even if buyers do find out about it, they can’t do much to regain control of their devices without extensive technical know-how. 

The malware also lets its makers, or those to whom they sell access, use buyers’ internet connections as proxies—meaning that any nefarious deeds will look as though they came from the buyers, possibly exposing them to significant legal risk. 

Despite widespread reporting on these compromised devices, they are still being sold by Amazon, AliExpress, and other vendors. 

“We believe the resellers of these devices bear some responsibility for the broad scope of this attack and for failing to create a reliable pathway for researchers to notify them of these issues,” the letter reads. “While it would be impractical for resellers to run comprehensive security audits on every device they make available, they should pull these devices from the market once they are revealed and confirmed to include harmful malware.” 

The HUMAN Security report found the malware is a variant of the Triada trojan, installed between the time when a Chinese company manufactured the devices and when they are provided to resellers. This constitutes a supply-chain attack on consumer-based Internet of Things devices, so EFF also sent its letter to Cybersecurity and Infrastructure Security Agency Director Jen Easterly. 

“This is the very essence of consumer protection: ensuring that the products we bring into our homes aren’t preset to be hijacked for malicious purposes,” said EFF Senior Staff Technologist William Budington. “We urge the Federal Trade Commission to take swift action.” 

For EFF's letter to the FTC: https://www.eff.org/document/11-14-2023-eff-letter-ftc-re-malware-android-tv-set-top-boxes

For the HUMAN Security report: https://www.humansecurity.com/learn/blog/badbox-peachpit-and-the-fraudulent-device-in-your-delivery-box  

For more background on the problem: https://www.eff.org/deeplinks/2023/05/android-tv-boxes-sold-amazon-come-pre-loaded-malware 

Tags: threat lab, cybersecurity

Contact: William Budington, Senior Staff Technologist, bill@eff.org
Josh Richman

To Address Online Harms, We Must Consider Privacy First

3 months 2 weeks ago

Every year, we encounter new, often ill-conceived, bills written by state, federal, and international regulators to tackle a broad set of digital topics ranging from child safety to artificial intelligence. These scattershot proposals to correct online harm are often based on censorship and news cycles. Instead of this chaotic approach that rarely leads to the passage of good laws, we propose another solution in a new report: Privacy First: A Better Way to Address Online Harms.

In this report, we outline how many of the internet's ills have one thing in common: they're rooted in the business model of widespread corporate surveillance online. Dismantling this system would not only be a huge step forward for our digital privacy; it would also raise the floor for serious discussions about the internet's future.

What would this comprehensive privacy law look like? We believe it must include these components:

  • No online behavioral ads.
  • Data minimization.
  • Opt-in consent.
  • User rights to access, port, correct, and delete information.
  • No preemption of state laws.
  • Strong enforcement with a private right of action.
  • No pay-for-privacy schemes.
  • No deceptive design.

A strong comprehensive data privacy law promotes privacy, free expression, and security. It can also help protect children, support journalism, protect access to health care, foster digital justice, limit private data collection to train generative AI, limit foreign government surveillance, and strengthen competition. These are all issues on which lawmakers are actively pushing legislation—both good and bad.

Comprehensive privacy legislation won’t fix everything. Children may still see things that they shouldn’t. New businesses will still have to struggle against the deep pockets of their established tech giant competitors. Governments will still have tools to surveil people directly. But with this one big step in favor of privacy, we can take a bite out of many of those problems, and foster a more humane, user-friendly technological future for everyone.

Corynne McSherry

Reauthorizing Mass Surveillance Shouldn’t be Tied to Funding the Government

3 months 2 weeks ago

Section 702 is the controversial and much-abused mass surveillance authority that expires in December unless Congress renews it. EFF and others have been working hard to get real reforms into the law and have opposed a renewal, and now, we’re hearing about a rushed attempt to tie renewal to funding the government. We need to stop it.

In September, President Biden signed a short-term continuing resolution to fund the government and prevent a full shutdown. This week Congress must pass another funding bill to make sure a shutdown doesn’t happen again. But this time, we understand that Congress may attach a "clean" short-term renewal of Section 702 to that must-pass bill—essentially kicking the can down the road, as it has done before.

The program was intended to collect communications of people outside of the United States, but because we live in an increasingly globalized world, the government retains a massive trove of communications between Americans and people overseas. Increasingly, it’s this U.S. side of digital conversations that domestic law enforcement agencies trawl through—all without a warrant.

This is not how the government should work. Lawmakers should not take an unpopular, contested, and dangerous piece of legislation and slip it into a massive bill that, if opposed, would shut down the entire government. No one should have to choose between funding the government and renewing a dangerous mass surveillance program that even the federal government admits is in need of reform.

EFF has signed onto a letter with a dozen organizations opposing even a short-term reauthorization of a program as dangerous as 702 in a piece of vital legislation. The letter says:

“In its current form, this authority is dangerous to our liberties and our democracy, and it should not be renewed for any length of time without robust debate, an opportunity for amendment, and — ultimately — far-reaching reforms. Allowing a short-term reauthorization to be slipped into a must-pass bill would demonstrate a blatant disregard for the civil liberties and civil rights of the American people.”

For months, EFF and a large coalition of civil rights, civil liberties, and racial justice groups have been fighting the renewal of Section 702. Just last week, a group of privacy-minded Senators and Representatives introduced the Government Surveillance Reform Act, which would impose some much-needed safeguards and oversight on a historically out-of-control surveillance program. Section 702 is far too powerful, invasive, and dangerous to renew cleanly as a matter of bureaucratic necessity; it must be renewed with massive reforms or not at all. Sneaking something this important into a massive must-pass bill is dishonest and a slap in the face to all people who care about privacy and the integrity of our digital communications. 

Matthew Guariglia

S.T.O.P.: Putting a Check on Unchecked Local N.Y. Government Surveillance

3 months 2 weeks ago

Recently I got the chance to speak with longtime Electronic Frontier Alliance member Surveillance Technology Oversight Project (S.T.O.P.). They’ve got a new Advocacy Manager, Kat Phan, and exciting projects are coming down the pike! Kat took some time to share with EFF how things are looking for STOP, their many successes, education & advocacy work, and how people from across the country can plug-in and support.

Can you share how S.T.O.P. came to be, got started, and its mission?   

S.T.O.P. as an organization grew from the belief that emerging surveillance technologies pose an unprecedented threat to public safety and the promise of a free society. Our executive director, Albert Fox Cahn, started S.T.O.P. in 2019 to address the long-ignored threat of state and local government surveillance. While federal advocates spent years at loggerheads over federal surveillance powers, the growth of local police surveillance, particularly by the NYPD, often went unchecked. S.T.O.P. started with an understanding that digital surveillance has played a key role in the historic criminalization of BIPOC, Muslim, and immigrant communities. Building an intersectional coalition, we began to educate New Yorkers on the disparate and discriminatory impact police surveillance has on Muslim Americans, immigrants, the LGBTQ+ community, Indigenous peoples, communities of color, and disabled people. These local collaborations, which enable us to share resources for anti-surveillance work and advocate for legislation with the support of impacted community members, form the backbone of our mission – which is to systematically dismantle the local surveillance apparatus here in New York City, as well as to build a model for dismantling local surveillance across the United States.  

What have been some of the issues you've concentrated on and could you walk us through a timeline of some of your early successes to more recent?  

Unveiling the NYPD’s sprawling surveillance systems has been a huge chunk of our work thus far. The department is notoriously opaque about the surveillance technologies at their disposal, obscuring how they disproportionately surveil Black, Latinx, and Muslim New Yorkers. One of S.T.O.P.’s earliest successes was the passage of the Public Oversight of Surveillance Technology (POST) Act, which ordered the department to disclose information about their use of surveillance to the public. The POST Act allowed S.T.O.P. and Legal Aid Society to uncover nearly $3 billion in formerly hidden NYPD surveillance contracts. However, the NYPD regularly violates the POST Act, systematically and unlawfully refusing to comply with requests for information related to its use of surveillance technology.  

Our impact litigation extends beyond revealing records to the public. We have also successfully represented survivors of surveillance abuse, suing government agencies and their vendors to end surveillance practices. In 2020, we took the NYPD to court, forcing the department to end its Islamophobic “hijab ban” policy, which required arrestees to remove head coverings for mugshots and fueled the NYPD’s facial recognition database.   

Establishing privacy protections around health and location data has been another major focus of our work. The year following S.T.O.P.’s launch, the Covid-19 pandemic hit. We quickly responded to New York City’s proposed contact tracing system, working to ensure that data collected to “flatten the curve” was not put to other uses or shared with law enforcement agencies or other third parties. In anticipation of the Dobbs decision in 2022, we conducted similar rapid response work, publishing Pregnancy Panopticon: Abortion Surveillance After Roe, a widely-viewed white-paper report warning pregnant people and reproductive advocates of the risks posed by digital surveillance.  

Our other wins include helping outlaw K-12 facial recognition technology, drafting more than 20 bills, publishing more than 140 op-eds, drafting and releasing dozens of whitepapers, testifying before lawmakers in New York and nationally dozens of times, and more.  

S.T.O.P. is getting a lot of visibility online due to your work on Voyager Labs and the NYPD. Can you shed light on this work?   

Our organization found that in 2018, NYPD entered a nearly $9 million contract with Voyager Labs, an AI-based data surveillance firm which scrapes and monitors social media data. Voyager Labs claims its products can predict future crimes by using spyware, creating fake social media profiles, and making inaccurate predictions on suspects for criminal activity based on social media connections, location tracking, and other forms of data surveillance. For instance, Voyager Labs has claimed that its AI can assign prediction scores to social media users on “ties or affinity for Islamic fundamentalism or extremism” or “provide an automated indication of individuals who may pose a risk.”    

In response to this, S.T.O.P. has continued to fight NYPD’s exploitation of social media monitoring and digital “stop and frisk” practices. We are a leading advocate of the “Stop Online Police Fake Accounts and Keep Everyone Safe” (STOP FAKES) Act in New York State. This first-of-its-kind legislation would ban police from leveraging fake social media accounts to surveil New Yorkers. We have also introduced a bill in the City Council which would dissolve NYPD’s so-called “gang database” and prohibit the future use of surveillance practice predicated on association.  

Can you tell us about some of your other current projects?   

S.T.O.P. juggles a multitude of projects developing anti-surveillance resources for local advocates and directly impacted community members. For example, Guilt By Association, a white-paper report recently released by S.T.O.P. detailing how police databases use non-criminal criteria, such as neighborhoods, peer groups, and clothing, as a reason to surveil Black and Latinx youth, supports our work with the GANGS Coalition to dissolve the NYPD database and prohibit the future use of surveillance practices predicated on association. And our latest report, The Kids Won’t Be Alright: The Looming Threat of Child Surveillance Laws, will inform the curriculum for our youth-focused privacy trainings.   

What's next on the horizon for STOP?   

Our upcoming legislative work will heavily focus on passing a set of anti-surveillance and privacy laws at both the state and City Council level. At the state level, there is “Banning Big Brother: New York’s Surveillance Sanctuary State Blueprint”, a 10-bill package which includes first-in-the-nation bans on geofence warrants and fake police social media profiles. You can learn more about that campaign through its website: https://www.banbigbro.tech/.   

At the local level, S.T.O.P. just relaunched our Ban the Scan campaign to introduce a city-wide ban on government use of biometric surveillance and pass two bills in the Council that would prohibit landlords and places of public accommodations from using biometric surveillance, such as facial recognition technology. You can learn more about that campaign through its website, banthescan.org, when it launches on October 17th.   

Do opportunities exist for people to get involved? How can people contact and support your work?  

Because our team and work thrive when we are connected to community work and can work alongside other organizations or groups similarly invested in privacy protections, S.T.O.P. welcomes collaboration on public and digital events and campaigns. For those who would prefer to be involved in our campaigns in an individual capacity, S.T.O.P. welcomes support as a volunteer, junior board member, intern, or legal fellow. We know that lived experiences are the most informative when it comes to demanding change and encourage folks from all backgrounds to join the fight to abolish all systems of mass surveillance.  

What supports our work is not only direct participation as a staff member or volunteer, but donations and contributions to fuel our fight against mass surveillance (www.stopspying.org/donate). Additionally, community partnerships are key to our sustainment, and working with organizations near and far helps us share our resources and build a network of allies in our mission to end governmental abuse of surveillance technologies.

Christopher Vines

Debunking the Myth of “Anonymous” Data

3 months 2 weeks ago

Today, almost everything about our lives is digitally recorded and stored somewhere. Each credit card purchase, personal medical diagnosis, and preference about music and books is recorded and then used to predict what we like and dislike, and—ultimately—who we are. 

This often happens without our knowledge or consent. Personal information that corporations collect from our online behaviors sells for astonishing profits and incentivizes online actors to collect as much as possible. Every mouse click and screen swipe can be tracked and then sold to ad-tech companies and the data brokers that service them. 

In an attempt to justify this pervasive surveillance ecosystem, corporations often claim to de-identify our data. This supposedly removes all personal information (such as a person’s name) from the data point (such as the fact that an unnamed person bought a particular medicine at a particular time and place). Personal data can also be aggregated, whereby data about multiple people is combined with the intention of removing personal identifying information and thereby protecting user privacy. 

Sometimes companies say our personal data is “anonymized,” implying a one-way ratchet where it can never be dis-aggregated and re-identified. But that promise rarely holds—“anonymous” data rarely stays that way. As Professor Matt Blaze, an expert in the field of cryptography and data privacy, succinctly summarized: “something that seems anonymous, more often than not, is not anonymous, even if it’s designed with the best intentions.” 

Anonymization…and Re-Identification?

Personal data can be considered on a spectrum of identifiability. At the top is data that can directly identify people, such as a name or state identity number, which can be referred to as “direct identifiers.” Next is information indirectly linked to individuals, like personal phone numbers and email addresses, which some call “indirect identifiers.” After this comes data connected to multiple people, such as a favorite restaurant or movie. The other end of this spectrum is information that cannot be linked to any specific person—such as aggregated census data, and data that is not directly related to individuals at all like weather reports.

Data anonymization is often undertaken in two ways. First, some personal identifiers, like our names and social security numbers, might be deleted. Second, other categories of personal information might be modified—such as obscuring our bank account numbers. For example, the Safe Harbor provision contained in the U.S. Health Insurance Portability and Accountability Act (HIPAA) requires that only the first three digits of a zip code be reported in scrubbed data.

However, in practice, any attempt at de-identification requires removal not only of your identifiable information, but also of information that can identify you when considered in combination with other information known about you. Here's an example: 

  • First, think about the number of people that share your specific ZIP or postal code. 
  • Next, think about how many of those people also share your birthday. 
  • Now, think about how many people share your exact birthday, ZIP code, and gender. 

According to one landmark study, these three characteristics are enough to uniquely identify 87% of the U.S. population. A different study showed that 63% of the U.S. population can be uniquely identified from these three facts.
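
A rough back-of-envelope calculation (our own illustration, not the methodology of the studies cited above) shows why these three mundane attributes are so identifying: within a typical ZIP code, the number of possible birthdate and gender combinations far exceeds the number of residents, so most combinations point to at most one person.

    # Back-of-envelope arithmetic (illustrative assumptions, not the cited
    # studies' method): how many people in a typical ZIP code share an exact
    # birth date and gender?
    residents_per_zip = 10_000      # assumed typical ZIP code population
    birthdates = 365 * 80           # day of year x plausible birth years
    genders = 2
    combinations = birthdates * genders

    expected_sharers = residents_per_zip / combinations
    print(f"{combinations:,} possible combinations per ZIP code")   # 58,400
    print(f"~{expected_sharers:.2f} residents per combination")     # ~0.17

Real populations are not spread evenly across those combinations, which is why the measured figures land at 63–87% rather than 100%, but the intuition holds: a handful of “harmless” attributes is usually enough to single someone out.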

We cannot trust corporations to self-regulate. The financial benefit and business usefulness of our personal data often outweighs our privacy and anonymity. By re-obtaining the real identity of the person involved (direct identifier) alongside that person’s preferences (indirect identifier), corporations are able to continue profiting from our most sensitive information. For instance, a website that asks supposedly “anonymous” users for seemingly trivial information about themselves may be able to use that information to build a unique profile of an individual. 

Location Surveillance

To understand this system in practice, we can look at location data. This includes the data collected by apps on your mobile device about your whereabouts: from the weekly trips to your local supermarket to your last appointment at a health center, an immigration clinic, or a protest planning meeting. The collection of this location data on our devices is sufficiently precise for law enforcement to place suspects at the scene of a crime, and for juries to convict people on the basis of that evidence. What’s more, whatever personal data is collected by the government can be misused by its employees, stolen by criminals or foreign governments, and used in unpredictable ways by agency leaders for nefarious new purposes. And all too often, such high tech surveillance disparately burdens people of color.  

Practically speaking, there is no way to de-identify individual location data since these data points serve as unique personal identifiers of their own. And even when location data is said to have been anonymized, re-identification can be achieved by correlating de-identified data with other publicly available data like voter rolls or information that's sold by data brokers. One study from 2013 found that researchers could uniquely identify 50% of people using only two randomly chosen time and location data points. 

Done right, aggregating location data can work towards preserving our personal rights to privacy by producing non-individualized counts of behaviors instead of detailed timelines of individual location history. For instance, an aggregation might tell you how many people’s phones reported their location as being in a certain city within the last month, but not the exact phone number and other data points that would connect this directly and personally to you. However, there’s often pressure on the experts doing the aggregation to generate granular aggregate data sets that might be more meaningful to a particular decision-maker but which simultaneously expose individuals to an erosion of their personal privacy.  
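
To make the distinction concrete, here is a minimal sketch (with invented example records) of the difference between raw location rows, which tie a device to a time and place, and an aggregate that keeps only a count of distinct devices per city and month.

    # Minimal sketch: raw location pings versus a privacy-preserving aggregate.
    # The records below are invented for illustration.
    raw_pings = [
        {"device_id": "a1", "city": "Oakland",  "month": "2023-10"},
        {"device_id": "a1", "city": "Oakland",  "month": "2023-10"},
        {"device_id": "b2", "city": "Oakland",  "month": "2023-10"},
        {"device_id": "c3", "city": "San Jose", "month": "2023-10"},
    ]

    # Count distinct devices per (city, month) -- no per-person timeline survives.
    devices_seen = {}
    for ping in raw_pings:
        devices_seen.setdefault((ping["city"], ping["month"]), set()).add(ping["device_id"])

    aggregate = {key: len(devices) for key, devices in devices_seen.items()}
    print(aggregate)  # {('Oakland', '2023-10'): 2, ('San Jose', '2023-10'): 1}

The finer the buckets become—exact coordinates, single hours, tiny populations—the closer the “aggregate” drifts back toward individual tracking, which is the granularity pressure described above.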

Moreover, most third-party location tracking is designed to build profiles of real people. This means that every time a tracker collects a piece of information, it needs something to tie that information to a particular person. This can happen indirectly by correlating collected data with a particular device or browser, which might later correlate to one person or a group of people, such as a household. Trackers can also use artificial identifiers, like mobile ad IDs and cookies to reach users with targeted messaging. And “anonymous” profiles of personal information can nearly always be linked back to real people—including where they live, what they read, and what they buy.

For data brokers dealing in our personal information, our data can either be useful for their profit-making or truly anonymous, but not both. EFF has long opposed location surveillance programs that can turn our lives into open books for scrutiny by police, surveillance-based advertisers, identity thieves, and stalkers. We’ve also long blown the whistle on phony anonymization.

As a matter of public policy, it is critical that user privacy is not sacrificed in favor of filling the pockets of corporations. And for any data sharing plan, consent is critical: did each person consent to the method of data collection, and did they consent to the particular use? Consent must be specific, informed, opt-in, and voluntary. 

Paige Collings

It’s Time to Oppose the New San Francisco Policing Ballot Measure

3 months 2 weeks ago

San Francisco Mayor London Breed has filed a ballot initiative on surveillance and policing that, if approved, would greatly erode our privacy rights, endanger marginalized communities, and roll back the incredible progress the city has made in creating democratic oversight of police’s use of surveillance technologies. The measure will be up for a vote during the March 5, 2024 election.

Specifically, the ballot measure would erode San Francisco’s landmark 2019 surveillance ordinance which requires city agencies, including the police department, to seek approval from the democratically-elected Board of Supervisors before it acquires or deploys new surveillance technologies. Agencies also need to put out a full report to the public about exactly how the technology would be used. This is an important way of making sure people who live or work in the city have a say in policing technologies that could be used in their communities.

However, the new ballot initiative attempts to gut the 2019 surveillance ordinance. The measure says “…the Police Department may acquire and/or use a Surveillance Technology so long as it submits a Surveillance Technology Policy to the Board of Supervisors for approve by ordinance within one year of the use or acquisition, and may continue to use that Surveillance Technology after the end of that year unless the Board adopts an ordinance that disapproves the Policy…” In other words, police would be able to deploy any technology they wished for a full year without any oversight, accountability, transparency, or semblance of democratic control.


This ballot measure would turn San Francisco, like many other cities in the United States, into a laboratory where police are given free rein to use the most unproven, dangerous technologies on residents and visitors without regard for criticism or objection. That’s one year of police having the ability to take orders from faulty and racist algorithms. One year in which police could potentially contract with companies that buy up and sift through geolocation data from millions of cellphones.

In the summer of 2020, in response to a mass Black-led movement against police violence that swept the nation, Mayor Breed said, “If we’re going to make real significant change, we need to fundamentally change the nature of policing itself…Let’s take this momentum and this opportunity at this moment to push for real change.” A central part of that vision was “ending the use of police in response to non-criminal activity; addressing police bias and strengthening accountability; [and] demilitarizing the police.”

It appears that Mayor Breed has turned her back on that stance and, with the introduction of her ballot measure, instead embraced increased surveillance and decreased police accountability. But there is something we can do about this! It’s time to get the word out about what’s at stake during the March 5, 2024 election and urge voters to say NO to increased surveillance and decreased police accountability.

There’s more: this Monday, November 13, 2023 at 10:00am PT, the Rules Committee of the Board of Supervisors will meet to discuss upcoming ballot measures, including this awful policing and surveillance ballot measure. You can watch the Rules Committee meeting here, and most importantly, the live feed will tell you how to call in and give public comment. Tell the Board’s Rules Committee that police should not have free rein to deploy dangerous and untested surveillance technologies in San Francisco. 

Matthew Guariglia