San Francisco’s Board of Supervisors Grants Police More Surveillance Powers

2 months 2 weeks ago

In a 7-4 vote, San Francisco’s Board of Supervisors passed a 15-month pilot program granting the San Francisco Police Department (SFPD) more live surveillance powers. This was despite the objections of a diverse coalition of community groups and civil rights organizations, residents, the Bar Association of San Francisco, and even members of the city’s Police Commission, a civilian oversight body composed of mayoral and Board appointees. The ordinance, backed by the Mayor and the SFPD, enables the SFPD to access live video streams from private non-city cameras for the purpose of investigating crimes, including misdemeanor and property crimes. Once the SFPD gets access, it can continue live streaming for 24 hours. The ordinance authorizes such access by consent of the camera owner or by court order.

Make no mistake, misdemeanors like vandalism or jaywalking happen on nearly every street of San Francisco on any given day—meaning that this ordinance essentially gives the SFPD the ability to put the entire city under live surveillance indefinitely.

This troubling ordinance also allows police to surveil “significant events,” loosely defined as large or high-profile events, “for placement of police personnel.” This essentially gives police a green light to monitor—in real-time—protests and other First Amendment-protected activities, so long as they require barricades or street closures associated with public gatherings. The SFPD has previously been caught using these very same cameras to surveil protests following George Floyd’s murder, and the SF Pride Parade, facts that went unaddressed by the majority of Supervisors who authorized the ordinance.

The Amendments

During the hearing, Supervisor Hillary Ronen introduced two key amendments to mitigate the ordinance’s civil liberties impacts. The first would have prohibited the SFPD from live monitoring public gatherings unless there was an imminent threat of death or bodily harm. It failed on a 4-7 vote, with the supervisors splitting along the same lines as on the ordinance itself.

The second, which succeeded, imposed stronger reporting requirements on the SFPD’s use of live surveillance and required the appointment of an independent auditor to evaluate the pilot program’s efficacy. This amendment was needed to ensure that an independent entity, rather than the SFPD itself, assesses the pilot program’s data to determine exactly how, when, and why these new live monitoring powers were used.

What’s This All About?

During the hearing, several of the Supervisors talked about how San Franciscans are worried about crime, but they failed to articulate how giving police live monitoring abilities addresses those fears.

And in fact, many of the examples that both the SFPD and the Supervisors who voted for this ordinance pointed to are the types of situations where live surveillance would not help. Some Supervisors pointed to retail theft or car break-ins as examples of why live surveillance is needed. But under the ordinance, an officer would need to first seek permission from an SFPD captain and then go to a camera owner to request access to live surveillance—steps that would take far longer than the seconds or minutes these incidents take to unfold. And if police have reason to believe a crime is about to occur at a particular location, it makes far more sense to send an officer than to go through the process of getting permission to live monitor a camera, which carries the risk of putting an intersection or a pharmacy under constant police surveillance for no reason.

Moreover, as Supervisor Shamann Walton pointed out, police have always been able to get historical footage of crimes simply by sending a request to the camera’s owner—this is especially true of the thousands of Business Improvement District/Commercial Benefit District cameras from which police have long been obtaining historic footage to build cases or gather evidence. So other than a desire to actively watch large swaths of the city, it’s unclear how live monitoring helps police get anything they couldn’t already get by sending a simple request after the fact.

Which leads us to the sad conclusion that this ordinance isn’t really about the safety of San Franciscans—it’s about security theater. It’s about putting voters at ease that something, anything is being done about crime—even if that move has no discernible effect on crime and, in fact, actively threatens to harm San Francisco’s activists and most vulnerable populations.

A Heartfelt Thank You

A very large coalition pushed back against this ordinance. Without their efforts and the efforts of many other San Franciscans who weighed in during public comment, neither the 15-month sunset date for the pilot nor the independent audit provision would have been possible.

Commendations should also be heaped upon Supervisors Chan, Preston, Ronen, and Walton for their brave stand at the Board of Supervisors meeting, their sharp critique and questioning of the legislation, and their willingness to listen to concerned community members.

Watching the Watchers

Because this bill has a sunset provision that requires it to be renewed 15 months from now, we have another chance to put on our boots, dust off our megaphones, and fight like hell to protect San Franciscans from police overreach. In the meantime, and along with our coalition, we’ll be monitoring for violations and tracking the data that the SFPD produces. And we’ll be there in 15 months to hopefully prevent the reauthorization of this dangerous ordinance. 

Related Cases: Williams v. San Francisco
Matthew Guariglia

Lawsuit: SMUD and Sacramento Police Violate State Law and Utility Customers’ Privacy by Sharing Data Without a Warrant

2 months 2 weeks ago
The public power utility and police racially profiled Asian communities in the illegal data-sharing scheme.

SACRAMENTO—The Sacramento Municipal Utility District (SMUD) searches entire zip codes’ worth of people’s private data and discloses it to police without a warrant or any suspicion of wrongdoing, according to a privacy lawsuit filed Wednesday in Sacramento County Superior Court.

SMUD’s bulk disclosure of customer utility data turns its entire customer base into potential leads for police to chase and has particularly targeted Asian homeowners, says the lawsuit filed by the Electronic Frontier Foundation (EFF) and law firm Vallejo, Antolin, Agarwal, and Kanter LLP on behalf of plaintiffs the Asian American Liberation Network, a Sacramento-based nonprofit, and Khurshid Khoja, an Asian American Sacramento resident, SMUD customer, cannabis industry attorney, and cannabis rights advocate. 

“SMUD’s policies claim that ‘privacy is fundamental’ and that it ‘strictly enforces privacy safeguards,’ but in reality, its standard practice has been to hand over its extensive trove of customer data whenever police request it,” said EFF Staff Attorney Saira Hussain. “Doing so violates utility customers’ privacy rights under state law and the California Constitution while disproportionately subjecting Asian and Asian American communities to police scrutiny.”

Utility data has historically provided a detailed picture of what occurs within a home. The advent of smart utility meters has only enhanced that image. Smart meters provide usage information in increments of 15 minutes or less; this granular information is beamed wirelessly to the utility several times each day and can be stored in the utility’s databases for years. As that data accumulates over time, it can provide inferences about private daily routines such as what devices are being used, when they are in use, and how this changes over time.

The California Public Utilities Code says public utilities generally “shall not share, disclose, or otherwise make accessible to any third party a customer’s electrical consumption data ....” except “as required under federal or state law.” The California Public Records Act prohibits public utilities from disclosing consumer data, except “[u]pon court order or the request of a law enforcement agency relative to an ongoing investigation.” 

“Privacy, not discrimination, was what SMUD promised when it rolled out smart meters,” said Monty Agarwal, EFF’s co-counsel at Vallejo, Antolin, Agarwal, and Kanter LLP.

Yet SMUD in recent years has given protected customer data to the Sacramento Police Department, which asked for it on an ongoing basis—without a warrant or any other court order, and without suspicion of any particular resident—to find possible illicit cannabis grows. The program has been highly lucrative for the city: Sacramento Police in 2017 began issuing large penalties under a new city ordinance to owners of properties where cannabis is found, and levied nearly $100 million in fines in just two years.

About 86 percent of those penalties were levied upon people of Asian descent. The lawsuit alleges that officials intentionally designed their mass surveillance to have this disparate impact on Asian communities. The complaint details how a SMUD analyst who provided data to police excluded homes in a predominantly white neighborhood, as well as how one police architect of Sacramento’s program removed non-Asian names on a SMUD list and sent only Asian-sounding names onward for further investigation.  

“SMUD and the Sacramento Police Department’s mass surveillance program is unlawful, advances harmful stereotypes, and overwhelmingly impacts Asian communities,” said Megan Sapigao, co-executive director of the Asian American Liberation Network. “It’s unacceptable that two public agencies would carelessly flout state law and utility customers’ privacy rights, and even more unacceptable that they targeted a specific community in doing so.”

“California voters rejected discriminatory enforcement of cannabis laws in 2016, while the Sacramento Police Department and SMUD conduct illegal dragnets through utility customer data to continue these abuses to this day,” Khoja said. “This must stop.”

For the complaint:

Contact: Saira Hussain, Senior Staff Attorney; Aaron Mackey, Senior Staff Attorney
Josh Richman

How to Ditch Facebook Without Losing Your Friends (Or Family, Customers or Communities)

2 months 2 weeks ago

Today, we launch “How to Ditch Facebook Without Losing Your Friends” - a narrated slideshow and essay explaining how Facebook locks in its users, how interoperability can free them, and what it would feel like to use an “interoperable Facebook” of the future, such as the one contemplated by the US ACCESS Act.

Watch the video on the Internet Archive

Watch the video on YouTube

Millions of Facebook users claim to hate the service - its moderation, both high-handed and lax, its surveillance, its unfair treatment of the contractors who patrol it and the publishers who fill it with content - but they keep on using it.

Both Facebook and its critics have an explanation for this seeming paradox: people use Facebook even though they don’t like it because it’s so compelling. For some critics, this is proof that Facebook has perfected an “addictive technology” with techniques like “dopamine loops.” Facebook is rather fond of this critique, as it integrates neatly with Facebook’s pitch to advertisers: “We are so good at manipulating our users that we can help you sell anything.”

We think there’s a different explanation: disgruntled Facebook users keep using the service because they don’t want to leave behind their friends, family, communities and customers. Facebook’s own executives share this belief, as is revealed by internal memos in which those execs plot to raise “switching costs” for disloyal users who quit the service.

“Switching costs” are the economists’ term for everything you have to give up when you switch products or services. Giving up your printer might cost you all the ink you’ve bulk-purchased; switching mobile phone OSes might cost you the apps and media you paid for. 

The switching cost of leaving Facebook is losing touch with the people who stay behind. Because Facebook locks its messaging and communities inside a “walled garden” that can only be accessed by users who are logged into Facebook, leaving Facebook means leaving behind the people who matter to you (hypothetically, you could organize all of them to leave, too, but then you run into a “collective action problem” - another economists’ term describing the high cost of getting everyone to agree to a single course of action).

That’s where interoperability comes in. Laws like the US ACCESS Act and the European Digital Markets Act (DMA) aim to force the largest tech companies to allow smaller rivals to plug into them, so their users can exchange messages with the individuals and communities they’re connected to on Facebook - without using Facebook.

“How to Ditch Facebook Without Losing Your Friends” explains the rationale behind these proposals - and offers a tour of what it would be like to use a federated, interoperable Facebook, from setting up your account to protecting your privacy and taking control of your own community’s moderation policies, overriding the limits and permissions that Facebook has unilaterally imposed on its users.

You can get the presentation as a full video, or a highlight reel, or a PDF or web-page. We hope this user manual for an imaginary product will stimulate your own imagination and give you the impetus to demand - or make - something better than our current top-heavy, monopoly-dominated internet.

Cory Doctorow

Giving Big Corporations “Closed Generic” Top-Level Domain Names to Run as Private Kingdoms Is Still a Bad Idea

2 months 2 weeks ago

No business can own the generic word for the product it sells. We would find it preposterous if a single airline claimed exclusive use of the word “air,” or a broadband service tried to stop its rivals from using the word “broadband.” Until this year, it seemed settled that the internet’s top-level domain names (like .com, .org, and so on) would follow the same obvious rule. Alas, ICANN (the California nonprofit that governs the global domain name system) seems intent on taking domains in a more absurd direction by revisiting the thoroughly discredited concept of “closed generics.”

In a nutshell, closed generics are top-level domain names using common words, like “.car.” But unlike other TLDs like “.com,” a closed generic TLD is under the control of a single company, and that company controls all of the domain names within the TLD. This is a terrible idea, for all of the same reasons it has failed twice already. And for one additional reason—defenders of open competition and free expression should not have to fight the same battle a third time.

Closed Generics Rejected and Then Resurrected

The context of this fight is the “new generic top-level domains” process, which expanded the list of “gTLDs” from the original six (.com, .net, .org, .edu, .gov, and .mil) to the 1,400 or so in use today, like .hot, .house, and .horse. In 2012, during the first round of applications to operate new gTLDs, some companies asked for complete, exclusive control over domains like .baby, .blog, .book, .cars, .food, .mail, .movie, .music, .news, .shop, and .video, plus similar terms written in Chinese characters. Most of the applicants were among the largest players in their industries (like Amazon for .book and Johnson & Johnson for .baby).

The outcry was fierce, and ICANN was flooded with public comments. Representatives of domain name registrars, small businesses, non-commercial internet users, and even Microsoft urged ICANN to deny these applications.

Fortunately, ICANN heeded the public’s wishes, telling the applicants that they could operate these top-level domains only if they allowed others to register their own names within those domains. Amazon would not be the sole owner of .book, and Google would not control .map as its private fiefdom. (Some TLDs that are non-generic brand names like .honda, .hermes, and .hyatt were given to the companies that own those brands as their exclusive domains, and some like .pharmacy are restricted to a particular kind of business . . . but not one business.)

A working group within the ICANN community continued to debate the “closed generics” issue, but the working group’s final report in 2020 made no recommendation. Both the supporters and opponents of closed generics tried to find some middle ground, but there was none to be found that protected competition and prevented monopolization of basic words.

That’s where things sat until early this year, when the Chairman of the ICANN Board, out of the blue, asked two bodies that don’t normally make policy to conduct a “dialogue” on closed generics: the ICANN GNSO Council (which oversees community policymaking for generic TLDs) and the ICANN Government Advisory Committee (a group of government representatives which, as its name indicates, only “advises”). The Board hasn’t voted on the issue, so it’s not clear how many members actually support moving forward.

The Board’s letter was followed up a few days later by a paper from ICANN’s paid staff. It claimed to be a “framing paper” on the proposed dialogue. But in reality, the paper presented a slanted and one-sided history of the issue, suggesting incorrectly that closed generics were “implicitly” allowed under previous ICANN policies. The notion of “implicit” policy is anathema to a body whose legitimacy depends on open, transparent, and participatory decision-making. What’s more, the ICANN staff paper gives no weight to a huge precedent – one of ICANN’s largest waves of global public input, which was almost unanimously opposed to closed generics.

As the ICANN Board (or at least some of its members) tries to start a “dialogue” that would keep the closed generics proposal alive, the staff paper went even further and tried to pre-determine the outcome of that dialogue, by suggesting that some closed generic domains would have to be allowed, as long as lawyers for the massive companies that seek to control those domains could come up with convincing “public interest goals.”

As a result, the land rush for new private kingdoms at the highest level of the internet’s domain name system appears poised to begin again.

Still a Bad, Pro-Monopoly Idea

The problems with giving control of every possible domain name within a generic top-level domain to a single company are the same as they were in 2012 and in 2020.

First, it’s out of step with trademark law. In the US and most countries, businesses can’t register a trademark in the generic term for that kind of business. That’s why a computer company and a record label can get trademarks in the name “Apple,” but a fruit company cannot. Some trademark attorneys in the ICANN community have suggested that the US Supreme Court’s decision in the case means that trademarks in generic words are now fair game, but that’s misleading. The Supreme Court ruled that adding “.com” to a generic word might result in a valid trademark—but the applicant still has to show with evidence that the public associates that domain name with a particular business, not a general category. And that’s still difficult and rare. If trademark law doesn’t allow companies to “own” generic words, as part of a domain name or otherwise, then ICANN shouldn’t be giving a single company what amounts to ownership over those words as top-level domains.

Second, closed generics are bad policy because they give an unfair advantage to businesses that are already large and often dominant in their field. Control of a new gTLD doesn’t come cheap—the application fee alone is several hundred thousand dollars, and ongoing fees to ICANN are also high. Allowing a bookstore owner named Garcia to run a website at her own .book address is a powerful tool for building a new independent business with its own online identity. A business with a memorable, descriptive domain name is less dependent on its placement in Google’s search results, or Facebook’s news feed. If, instead, only Amazon could create websites that ended in .book, the small businesses of the world would lose that competitive boost, and the image of Amazon as the only online bookseller would be even more durable.

Third, closed generics would blast a big hole in the pro-competitive firewall at the heart of ICANN: the rule that registries (the wholesalers like Verisign who operate top-level domains) and registrars (the retailers like Namecheap who register names for internet users) must remain separate. That rule dates from ICANN’s founding in 1998, and was designed to break a monopoly over domain names. The structural separation rule, which is relatively easy to enforce, helps stop new monopolists from arising in the domain name business. Exclusive control over a generic top-level domain would mean that single companies would act as the registry and the sole registrar for a top-level domain.

The Public Doesn’t Need Closed Generics, and “Public Interest” Promises Don’t Work in ICANN-Land

The ICANN Board’s letter shared the GAC’s 2013 suggestion that closed generics should be allowed if they could be structured to “serve the public interest.” But which “public” might that be? There’s no reason why giving full control of a generic TLD to a single company would serve internet users better than a domain that’s open to all (or at least all members of a particular business or profession). The justifications we’ve seen boil down to arguing that someone, somewhere will come up with an innovative use for a closed generic domain. That simply begs the question, while not explaining how exclusive control is a necessary feature.

On top of that, ICANN does not have a good track record of holding domain registries to the “public interest” promises they make—its enforcement mechanism is slow, cumbersome, and tends to embroil ICANN in content moderation issues, which is something the organization is rightfully forbidden to do.

No More Sequels

Over the decade-plus of ICANN’s project to expand the top-level domains, no company has been allowed to operate a generic TLD as its private kingdom. And despite two rounds of heated debate, the community has not come up with a plan for doing this well or fairly.

It’s time to stop.

The only motive behind the continuing push for “compromise” on the closed generics issue is the wealthiest players’ desire to control the internet’s basic resources. ICANN should put its foot down at last, put the closed generics idea on the shelf, and leave it there.

Mitch Stoltz


2 months 2 weeks ago

Puzzlemaster Aaron Steimle of the Muppet Liberation Front contributed to this post.

Every year, EFF joins thousands of computer security professionals, tinkerers, and hobbyists for Hacker Summer Camp, the affectionate term used for the series of Las Vegas technology conferences including BSidesLV, Black Hat, DEF CON, and more. EFF has a long history of standing with online creators and security researchers at events like these for the benefit of all tech users. We’re proud to honor this community’s spirit of curiosity, so each year at DEF CON we unveil a limited edition EFF member t-shirt with an integrated puzzle for our supporters (check the archive!). This year we had help from some special friends.

"The stars at night are big and bright down on the strip of Vegas"

For EFF’s lucky 13th member t-shirt at DEF CON 30, we had the opportunity to collaborate with iconic hacker artist Eddie the Y3t1 Mize and the esteemed multi-year winners of EFF’s t-shirt puzzle challenge: Elegin, CryptoK, Detective 6, and jabberw0nky of the Muppet Liberation Front.

Extremely Online members' design with an integrated challenge.

The result is our tongue-in-cheek Extremely Online T-Shirt, an expression of our love for the internet and the people who make it great. In the end, one digital freedom supporter solved the final puzzle and stood victorious. Congratulations and cheers to our champion cr4mb0!

But How Did They Do It?

Take a guided tour through each piece of the challenge with our intrepid puzzlemasters from the Muppet Liberation Front. Extreme spoilers ahead! You’ve been warned…


Puzzle 0

The puzzle starts with the red letters on the shirt on top of a red cube. Trying common encodings won’t work, but a quick Google search of the letters will return various results containing InterPlanetary File System (IPFS) links. The cube is also the logo for IPFS. Thus, the text on the shirt resolves to the following IPFS hash/address:


QR codes have a standard format and structure that requires the large squares to be placed in three of the four corners. With this in mind, the image can be seen as four separate smaller squares, with the two middle ones overlapping at the large square in the center. These squares can be reconstructed into a valid QR code using an image editing program.


Resolves to

This site contains two groups of text: the first paragraph contains four lines that start with the same letters, and the second paragraph looks like Base64-encoded information. Notice that the four lines in the first paragraph all start with the same letters as the text on the shirt. These are also IPFS addresses of the remaining puzzles.

Puzzle 1


Wordle players will immediately recognize the style of the puzzle. You can use a wordlist and some regular expressions / pattern matching to identify the only possible solution to this puzzle. Note that the first five words also act as a hint to the theme of each puzzle answer: space/stars.
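As a sketch of that wordlist-filtering approach (the wordlist and the letter constraints below are invented for illustration; the actual puzzle’s clues differ), a regular expression pins down the green letters while set arithmetic rules out the grays:

```python
import re

# Toy stand-in wordlist; a real solve would load a full dictionary file.
words = ["crane", "shine", "spine", "swine", "opine"]

# Hypothetical constraints: 's' green in position 1, "ine" green in
# positions 3-5, and 'c', 'h', 'w' ruled out as grays.
pattern = re.compile(r"^s.ine$")
grays = set("chw")

# Keep only words that fit the pattern and avoid every gray letter.
candidates = [w for w in words if pattern.match(w) and not set(w) & grays]
print(candidates)  # → ['spine']
```

With a full dictionary, the same two filters typically narrow the field to a single word.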


Puzzle 2 Challenge Text

Word on the street is that the font of youth is the key.

[Flight enabling bird feature.] + [Short resonant tones, often indicating a correct response.] + [First Fermat Prime]

55rhyykkqisq 4ubhYpYfwg 5pYrmmkks6qi prkuy6qlf eakjZjk4a rhXkgwy6iqhrddb

This puzzle consists of some cryptic clues and a line of ciphertext. First, consider the wording of the initial line: “Word on the street is that the font of youth is the key.” These clues should indicate that the solver will need to look into Microsoft Word Fonts.

Next, to decode the clues in the second line:

  1. Flight enabling bird feature = WING
  2. Short resonant tones, often indicating a correct response = DINGS
  3. First Fermat Prime = 3


Decoding the Cipher Text

55rhyykkqisq 4ubhYpYfwg 5pYrmmkks6qi prkuy6qlf eakjZjk4a rhXkgwy6iqhrddb

The solver now knows that the ciphertext has something to do with Microsoft Word and the Wingdings 3 font. Typed out in Wingdings 3 font, each character results in some type of arrow. The characters are categorized as arrows as follows:

UP: XYhpr5
DOWN: iqs60
LEFT: Zbdftv
RIGHT: aceguw4

Using these arrows as instructions to a pen, one can draw shapes that resemble letters. Each word of the ciphertext should map to a single letter, with a new plot starting after each space.
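The character-to-arrow translation can be scripted. A minimal sketch (note the sets above are case-sensitive and cover only the characters listed in this write-up; anything outside them is marked `?` here, since the full Wingdings 3 glyph table includes more characters):

```python
# Direction sets taken from the write-up above (case-sensitive).
DIRS = {}
for chars, d in [("XYhpr5", "U"), ("iqs60", "D"),
                 ("Zbdftv", "L"), ("aceguw4", "R")]:
    for c in chars:
        DIRS[c] = d

def strokes(word):
    """Translate one ciphertext word into a sequence of pen strokes."""
    return "".join(DIRS.get(c, "?") for c in word)

# Second word of the ciphertext: this stroke sequence traces one letter.
print(strokes("4ubhYpYfwg"))  # → RRLUUUULRR
```

Plotting each stroke sequence as pen movements draws one letter per ciphertext word.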


Reading the drawn shapes as letters – the solution: MIMOSA

Puzzle 3

Puzzle solution:

"The name of the game isn’t Craps" and the picture of a person snapping their fingers are references to the game "Snaps." The puzzle uses the rules of Snaps transferred onto a Craps board. Snaps is a game where a clue-giver uses statements and finger-snapping to spell out a well-known name.

Looking at the differences between the given board and a standard Craps board indicates which components are meant to give clues. In a game of Snaps, vowels are indicated by the number of snaps, translated here as the number of pips shown on the colored die. Consonants are indicated with the first letter of a statement given by the clue-giver. On this board, "COME," "NOT PASS BAR," "PASS LINE," and "HOW TO PLAY" have been added or altered, indicating that these statements give the necessary consonants C, N, P, and H by taking the first letter of each statement, as in the game Snaps. The dice have been colored, giving the numbers 1-4 which in Snaps indicate the vowels A, E, I, and O. To order these elements, the rainbow circles to the left of the dice have been colored with the corresponding colors, giving the answer PHOENICIA.

Final answer: PHOENICIA

Puzzle 4

Puzzle Solution:

Unlike the previous puzzles, this image does not take up the entire page, indicating that there might be more information available by inspecting the HTML. Doing so shows that the embedded image has the file name "OrangeJuicePaperFakeBook.jpg." Deconstructing this, "OrangeJuicePaper" clues the word "pulp" and "FakeBook" clues the word "fiction," letting the solver know the puzzle's theme will revolve around the movie Pulp Fiction.

The image itself is hiding information steganographically, and the information can be extracted using the tool steghide. Using steghide on OrangeJuicePaperFakeBook.jpg with no password will write the file QuartDeLivreAvecDuFromage.txt, containing a long series of binary strings of length 8.

'Quart de livre avec du fromage' is 'quarter pounder with cheese' in French. "Do you know what they call a quarter pounder with cheese in Paris?" is a quote from Vincent Vega in Pulp Fiction.

The binary numbers within the file are the ASCII representation of letters and spaces, and can be converted using any of the many tools available upon searching for "binary ASCII converter." Converting the file contents gives legible but nonsensical results:

overconstructed efficiencyapartments coeffect jeffs counterefforts phosphatidylethanolamines eye effed I nonefficient aftereffects theocracy teachereffectiveness inefficaciousnesses a ineffervescibility psychoneuroimmunologically superefficiency coefficientofacceleration o toxic jeffersonian teffs differentialcoefficient milkshake propulsiveefficiency effulges bad lockpick effed upper nonrevolutionaries revolutionarinesses teffs temperaturecoefficient maleffect effable foe butterflyeffect eerie tranquillizing magnetoopticaleffect jeffs plantthermalefficiency nulls rappers I effectiveresistance

These words aren't used directly, but instead the length of each word is relevant. Converting each word to its character count, and then converting that character count to its letter of the alphabet gives: othenyceallitsarzoyaelewithcheersevigcoentevegas
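The length-to-letter step is easy to verify mechanically. A minimal check on the first five words (this simple mapping assumes no word exceeds 26 letters):

```python
words = ("overconstructed efficiencyapartments coeffect "
         "jeffs counterefforts").split()

# Word length n becomes the n-th letter of the alphabet (15 → 'o', etc.).
letters = "".join(chr(ord("a") + len(w) - 1) for w in words)
print(letters)  # → othen
```

Running the same transform over the full list yields the string above.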

"They call it a royale with cheese" is another quote from Vincent Vega, also the answer to the previous quote ("Do you know what they call a quarter pounder with cheese in Paris?").

Looking at othenyceallitsarzoyaelewithcheersevigcoentevegas, it contains "they call it a royale with cheese," followed by "vigcent vega." The extra characters mixed in spell 'ones zeroes,' which is a hint that each of the nonsensical words should be converted to a one or a zero themselves. But how? Looking back at the original image, it shows that the EFF score is 1 and the DEF CON score is 0—so represent each word containing the letters "EFF" with a 1, and all other words with a 0. This gives a new binary string, which can itself be again converted to ASCII, giving the ciphertext ymgdzq.
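This step can be reproduced end to end: flag each word on whether it contains "eff", pack the resulting bits into bytes, and decode them as ASCII:

```python
# The 48 nonsensical words recovered from the steganographic file.
WORDLIST = """overconstructed efficiencyapartments coeffect jeffs
counterefforts phosphatidylethanolamines eye effed I nonefficient
aftereffects theocracy teachereffectiveness inefficaciousnesses a
ineffervescibility psychoneuroimmunologically superefficiency
coefficientofacceleration o toxic jeffersonian teffs
differentialcoefficient milkshake propulsiveefficiency effulges bad
lockpick effed upper nonrevolutionaries revolutionarinesses teffs
temperaturecoefficient maleffect effable foe butterflyeffect eerie
tranquillizing magnetoopticaleffect jeffs plantthermalefficiency
nulls rappers I effectiveresistance""".split()

# EFF scored 1 and DEF CON scored 0, so "eff"-words become 1-bits.
bits = "".join("1" if "eff" in w.lower() else "0" for w in WORDLIST)

# 48 words → 48 bits → six 8-bit ASCII characters.
ciphertext = "".join(chr(int(bits[i:i + 8], 2))
                     for i in range(0, len(bits), 8))
print(ciphertext)  # → ymgdzq
```

The six recovered characters are the ciphertext for the final decryption step.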

Going back to the quote derived from counting the number of characters in each word, note that Vincent was intentionally misspelled as Vigcent. This is a clue to use a Vigenère cipher to decrypt this new ciphertext with key vega.

Applying the Vigenère cipher to the text 'ymgdzq' with key 'vega' gives the solution: DIADEM
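A standard Vigenère decryption (shift each ciphertext letter back by the corresponding key letter, modulo 26) confirms the answer:

```python
def vigenere_decrypt(ciphertext, key):
    """Decrypt lowercase a-z text by subtracting the repeating key, mod 26."""
    plain = []
    for i, c in enumerate(ciphertext):
        shift = ord(key[i % len(key)]) - ord("a")
        plain.append(chr((ord(c) - ord("a") - shift) % 26 + ord("a")))
    return "".join(plain)

print(vigenere_decrypt("ymgdzq", "vega"))  # → diadem
```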

Bonus Easter Egg: The first character of each non-eff word in the wordlist results in: opeitapotmblunrfetnri, which anagrams to muppet liberation front.


The final block of text is encoded in Base64. Decoding it reveals that the data starts with "Salted__", an artifact of encrypting using OpenSSL.
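The "Salted__" marker is easy to spot programmatically. A quick sketch (the encoded payload below is a synthetic placeholder, not the puzzle's actual data):

```python
import base64

def openssl_salt(b64_text):
    """Return the 8-byte salt if the data is OpenSSL 'Salted__' output."""
    raw = base64.b64decode(b64_text)
    if raw.startswith(b"Salted__"):
        return raw[8:16]
    return None

# Synthetic example: 'Salted__' header + 8-byte salt + fake ciphertext.
sample = base64.b64encode(b"Salted__" + b"\x01" * 8 + b"fakedata").decode()
print(openssl_salt(sample))  # → b'\x01\x01\x01\x01\x01\x01\x01\x01'
```

OpenSSL derives the actual decryption key from the passphrase and this salt.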

Concatenate the answers from the four previous puzzles in alphabetical order to create the passphrase that will be used to decrypt the text. With the block of text placed in a file called final.enc, the openssl command to decrypt the text is as follows:

$ openssl aes-256-cbc -d -in final.enc -out final.txt
enter aes-256-cbc decryption password: DiademMimosaPeacockPhoenicia

Decrypting it reveals the solution to the puzzle:

"On behalf of EFF and Muppet Liberation Front,

congratulations on solving the puzzle challenge!

Email the phrase 'The stars at night are big and bright down on the strip of Vegas' to"


EFF is deeply thankful to the Muppet Liberation Front members for creating this puzzle and Eddie the Y3t1 for designing the artwork. After all, how can we fight for a better digital future without some beauty and brainteasers along the way? The movement for digital rights depends on cooperation and mutual support in our communities, and EFF is grateful to everyone on the team!

Happy Hacking!

Aaron Jue

It’s Time For A Federal Anti-SLAPP Law To Protect Online Speakers

2 months 3 weeks ago

Our country’s fair and independent courts exist to resolve serious disputes. Unfortunately, some parties abuse the civil litigation process to silence others’ speech, rather than resolve legitimate claims. These types of censorious lawsuits have been dubbed Strategic Lawsuits Against Public Participation, or SLAPPs, and they have been on the rise over the past few decades. 

Plaintiffs who bring SLAPPs intend to use the high cost of litigation to harass, intimidate, and silence critics who are speaking out against them. A deep-pocketed plaintiff who files a SLAPP doesn’t need to win the case on the merits—by putting financial pressure on a defendant, along with the stress and time it takes to defend a case, they can take away a person’s free speech rights. 

Fortunately, a bill introduced in Congress today, the SLAPP Protection Act of 2022 (H.R. 8864), aims to deter vexatious plaintiffs from filing these types of lawsuits in federal court.



To stop lawsuits that are meant to harass people into silence, we need strong anti-SLAPP laws. When people get hit with a lawsuit because they’re speaking out on a matter of public concern, effective anti-SLAPP law allows for a quick review by a judge. If it’s determined that the case is a SLAPP, the lawsuit gets thrown out, and the SLAPP victim can recover their legal fees. 

In recent years, more states have passed new anti-SLAPP laws or strengthened existing ones.  Those state protections are effective against state court litigation, but they don’t protect people who are sued in federal court. 

Now, a bill has been introduced that would make real progress in stopping SLAPPs in federal courts. The SLAPP Protection Act would provide strong protections to nearly all speakers who discuss issues of public concern. It would also create a process allowing most SLAPP victims in federal court to recover their legal fees from the people who bring the suits. (Here’s our blog post and letter supporting the last federal anti-SLAPP bill that was introduced, more than seven years ago.) 

“Wealthy and powerful corporate entities are dragging citizens through meritless and costly litigation, to expose anyone who dares to stand up to them to financial and personal ruin,” said bill sponsor Rep. Jamie Raskin (D-MD) at a hearing yesterday in which he announced the bill. 

SLAPPs All Around 

SLAPP lawsuits in federal court are increasingly being used to target activists and online critics. Here are a few recent examples: 

Coal Ash Company Sued Environmental Activists

In 2016, activists in Uniontown, Alabama—a poor, predominantly Black town with a median per capita income around $8,000—were sued for $30 million by a Georgia-based company that put hazardous coal ash into Uniontown’s residential landfill. The activists were sued over statements on their website and Facebook page that said things like the landfill “affected our everyday life” and “You can’t walk outside, and you can not breathe.” The plaintiff settled the case after the ACLU stepped in to defend the activist group. 

Shiva Ayyadurai Sued A Tech Blog That Reported On Him

In 2016, technology blog Techdirt published articles disputing Shiva Ayyadurai’s claim to have “invented email.” Techdirt founder Mike Masnick was hit with a $15 million libel lawsuit in federal court. Masnick fought back in court and his reporting remains online, but the legal fees had a big effect on his business. 

Logging Company Sued Greenpeace 

In 2016, environmental non-profit Greenpeace was sued along with several individual activists by Resolute Forest Products. Resolute sued over blog post statements such as Greenpeace’s allegation that Resolute’s logging was “bad news for the climate.” (After four years of litigation, Resolute was ordered to pay nearly $1 million in fees to Greenpeace—because a judge found that California’s strong anti-SLAPP law should apply.) 

Pipeline Company Sued Environmental Activists

In 2017, Greenpeace, Rainforest Action Network, the Sierra Club, and other environmental groups were sued by Energy Transfer Partners because they opposed the Dakota Access Pipeline project. Energy Transfer said that the activists’ tweets, among other communications, amounted to a “fraudulent scheme” and that the oil company should be able to sue them under RICO anti-racketeering laws, which were meant to take on organized crime. 

Congressman Sued His Twitter Critics 

In 2019, anonymous Twitter accounts were sued by Rep. Devin Nunes, then a Congressman representing parts of Central California. Nunes used lawsuits to attempt to unmask and punish two Twitter users who used the handles @DevinNunesMom and @DevinCow to criticize his actions as a politician. Nunes filed these actions in a state court in Henrico County, Virginia. The location had little connection to the case, but Virginia’s lack of an anti-SLAPP law has enticed many plaintiffs there. 

The Same Congressman Sued Media Outlets For Reporting On Him

Over the next few years, Nunes went on to sue many other journalists who published critical articles about him, using state and federal courts to sue CNN, The Washington Post, his hometown paper the Fresno Bee, and NBC. 

Fast Relief From SLAPPs

The SLAPP Protection Act meets EFF's criteria for a strong anti-SLAPP law. It would be a powerful tool for defendants hit with a federal lawsuit meant to take away their free speech rights. If the bill passes, any defendant sued for speaking out on a matter of public concern would be allowed to file a special motion to dismiss, which will be decided within 90 days. If the court grants the speaker’s motion, the claims are dismissed. In many situations, speakers who prevail on an anti-SLAPP motion will be entitled to their legal fees. 

The bill won’t reduce protections under state anti-SLAPP laws, either. So in cases where the state law may be as good, or even stronger, the current bill will become a floor, not a ceiling, for the rights of SLAPP defendants. 

EFF has been defending the rights of online speakers for more than 30 years. A strong federal anti-SLAPP law will bring us closer to the vision of an internet that allows anyone to speak out and organize for change, especially when they speak against those with more power and resources. Anti-SLAPP laws enhance the rights of all. We hope Congress passes the SLAPP Protection Act soon. 



Joe Mullin

Members of Congress Urge FTC to Investigate Fog Data Science

2 months 3 weeks ago

In the week since EFF and the Associated Press exposed how Fog Data Science purchases geolocation data on hundreds of millions of digital devices in the United States and maps it for cheap, easy-to-use mass surveillance by police, elected officials have voiced serious concerns about this dangerous tech.

In a strong letter to Lina Khan, the chair of the Federal Trade Commission (FTC), Rep. Anna Eshoo of California on Tuesday criticized the “significant Fourth Amendment search and seizure concerns” raised by Fog and urged the FTC to investigate fully. As public records obtained by EFF show, police often use Fog’s mass surveillance tools without a warrant, in violation of our Fourth Amendment rights.

Eshoo wrote:

“The use of Fog is also seemingly incompatible with protections against unlawful search and seizure guaranteed by the Fourth Amendment. Consumers do not realize that they are potentially nullifying their Fourth Amendment rights when they download and use free apps on their phones. It would be hard to imagine consumers consenting to this if actually given the option, yet this is functionally what occurs.”

Eshoo also pointed out the new threat that Fog’s surveillance tool poses to people seeking reproductive healthcare. Police in a state where abortion has been criminalized could potentially use Fog’s Reveal tool, without a warrant, to draw a geofence around a health clinic across state lines in a state where abortion is not criminalized, allowing them to see if any phones there return to their state. “In a post Roe v. Wade world, it’s more important than ever to be highly mindful of how tools like Fog Reveal may present new threats as states across the country pass increasingly draconian bills restricting people’s access to abortion services and targeting people seeking reproductive healthcare,” Eshoo wrote.

The FTC recently sued another company selling geolocation data, Kochava, a commendable step to hold the company accountable for its unfair practices.

Eshoo is not alone. Senator Ron Wyden said in a tweet about Fog’s ability to facilitate mass surveillance, “Unfortunately, while it’s outrageous that data brokers are selling location data to law-enforcement agencies, it’s not surprising.”

We echo Eshoo’s request that the FTC conduct a full and thorough investigation into Fog Data Science. We continue to urge Congress to act quickly to regulate this out-of-control industry that jeopardizes our privacy, and allows police to conduct warrantless mass surveillance.  

Matthew Guariglia

The Fight to Overturn FOSTA, an Unconstitutional Internet Censorship Law, Continues

2 months 3 weeks ago

More than four years after its enactment, FOSTA remains an unconstitutional law that broadly censored the internet and harmed sex workers and others by chilling their ability to speak, organize, and access information online.

And the fight to overturn FOSTA continues. Last week, two human rights organizations, a digital library, a sex worker activist, and a certified massage therapist filed their opening brief in a case that seeks to strike down the law for its many constitutional violations.

Their brief explains to a federal appellate court why FOSTA is a direct regulation of people’s speech that also censors online intermediaries that so many rely upon to speak—classic First Amendment violations. The brief also details how FOSTA has harmed the plaintiffs, sex workers, and allies seeking to decriminalize the work and make it safer, primarily because of its vague terms and its conflation of sex work with coercive trafficking.

“FOSTA created a predictable speech-suppressing ratchet leading to ‘self-censorship of constitutionally protected material’ on a massive scale,” the plaintiffs, Woodhull Freedom Foundation, Human Rights Watch, The Internet Archive, Alex Andrews, and Eric Koszyk, argue. “Websites that support sex workers by providing health-related information or safety tips could be liable for promoting or facilitating prostitution, while those that assist or make prostitution easier—i.e., ‘facilitate’ it—by advocating for decriminalization are now uncertain of their own legality.”

FOSTA created new civil and criminal liability for anyone who “owns, manages, or operates an interactive computer service” and creates content (or hosts third-party content) with the intent to “promote or facilitate the prostitution of another person.” The law also expands criminal and civil liability to classify any online speaker or platform that allegedly assists, supports, or facilitates sex trafficking as though they themselves were participating “in a venture” with individuals directly engaged in sex trafficking.

FOSTA doesn’t just seek to hold platforms and hosts criminally responsible for the actions of sex-traffickers. It also introduces significant exceptions to the civil immunity provisions of one of the internet’s most important laws, 47 U.S.C. § 230. These exceptions create new state law criminal and civil liability for online platforms based on whether their users' speech might be seen as promoting or facilitating prostitution, or as assisting, supporting or facilitating sex trafficking.

The plaintiffs are not alone in viewing FOSTA as an overbroad censorship law that has harmed sex workers and other online speakers. Four friend-of-the-court briefs filed in support of their case this week underscore FOSTA’s disastrous consequences. 

The Center for Democracy & Technology’s brief argues that FOSTA negated the First Amendment’s protections for online intermediaries and thus undercut the vital role those services provide by hosting a broad and diverse array of users’ speech online.

“Although Congress may have only intended the laudable goal of halting sex trafficking, it went too far: chilling constitutionally protected speech and prompting online platforms to shut down users’ political advocacy and suppress communications having nothing to do with sex trafficking for fear of liability,” CDT’s brief argues.

A brief from the Transgender Law Center describes how FOSTA’s breadth has directly harmed lesbian, gay, transgender, and queer people.

“Although FOSTA’s text may not name gender or sexual orientation, FOSTA’s regulation of speech furthers the profiling and policing of LGBTQ people, particularly TGNC people, as the statute’s censorial effect has resulted in the removal of speech created by LGBTQ people and discussions of sexuality and gender identity,” the brief argues. “The overbroad censorship resulting from FOSTA has resulted in real and substantial harm to LGBTQ people’s First Amendment rights as well as economic harm to LGBTQ people and communities.”

Two different coalitions of sex worker advocacy and harm reduction groups filed briefs in support of the plaintiffs that show FOSTA’s direct impact on sex workers and how the law’s conflation of consensual sex work with coercive trafficking has harmed both victims of trafficking and sex workers.

A brief led by Call Off Your Old Tired Ethics (COYOTE) of Rhode Island published data from its recent survey of sex workers showing that FOSTA has made sex trafficking more prevalent and harder to combat.

“Every kind of sex worker, including trafficking survivors, have been impacted by FOSTA precisely because its broad terms fail to distinguish between different types of sex work and trafficking,” the brief argues. The brief goes on to argue that FOSTA’s First Amendment problems have “made sex work more dangerous by curtailing the ability to screen clients on trusted online databases, also known as blacklists.”

A brief led by Decriminalize Sex Work shows that “FOSTA is part of a legacy of federal and state laws that have wrongfully conflated human trafficking and adult consensual sex work while overlooking the realities of each.”

“The limitations on free speech caused by FOSTA have essentially censored harm reduction and safety information sharing, removed tools that sex workers used to keep themselves and others safe, and interrupted organizing and legislative endeavors to make policies that will enhance the wellbeing of sex workers and trafficking survivors alike,” the brief argues. “Each of these effects has had a devastating impact on already marginalized and vulnerable communities; meanwhile, FOSTA has not addressed nor redressed any of the issues cited as motivation for its enactment.”

The plaintiffs’ appeal marks the second time the case has gone up to the U.S. Court of Appeals for the District of Columbia. The plaintiffs previously prevailed in the appellate court when it ruled in 2020 that they had the legal right, known as standing, to challenge FOSTA, reversing an earlier district court ruling.

Members of Congress have also been concerned about FOSTA’s broad impacts, with senators introducing the SAFE SEX Workers Study Act for the last two years, though it has not become law.

The plaintiffs are represented by Davis Wright Tremaine LLP, Walters Law Group, Daphne Keller, and EFF.

Related Cases: Woodhull Freedom Foundation et al. v. United States
Aaron Mackey

San Francisco Police Must End Irresponsible Relationship with the Northern California Fusion Center

2 months 3 weeks ago

In yet another failure to follow the rules, the San Francisco Police Department is collaborating with the regional fusion center with nothing in writing—no agreements, no contracts, nothing—governing the relationship, according to new records released to EFF in its ongoing complaint against the agency.

This means that there is no document in place that establishes the limits and responsibilities for sharing and handling criminal justice data or intelligence between SFPD and the fusion center and other law enforcement agencies who access sensitive information through its network.

SFPD must withdraw immediately from any cooperation with the Northern California Regional Information Center (NCRIC). Any moment longer it continues to collaborate with NCRIC puts sensitive data and the civil rights of Bay Area residents at severe risk.

Fusion centers were started in the wake of 9/11 as part of a Department of Homeland Security program to improve data sharing between local, state, tribal, and federal law enforcement agencies. There are 79 fusion centers across the United States, each with slightly different missions and responsibilities, ranging from generating open-source intelligence reports to monitoring camera networks. NCRIC historically has served as the Bay Area hub for sharing data across agencies from automated license plate readers (ALPRs), face recognition, social media monitoring, drone operations, and "Suspicious Activity Reports" (SARs).

NCRIC requires all participating agencies to sign a data sharing agreement and non-disclosure agreement ("Safeguarding Sensitive But Unclassified Information"), which is consistent with federal guidelines for operating a fusion center. EFF has independently confirmed with NCRIC staff that SFPD has not signed such an agreement. This failure is even more surprising considering that SFPD has had two liaisons assigned to the fusion center and the police chief has served as chair of NCRIC's executive board.

In December 2020, EFF filed a public records request under the San Francisco Sunshine Ordinance, following a San Francisco Chronicle report suggesting that an SFPD officer had submitted a photo of a suspect to the fusion center's email list and received in response a match generated by face recognition, which would potentially violate San Francisco's face recognition ban. We sought records related to this particular case, but more generally, we sought communications related to other requests for photo identification submitted by SFPD, communications about face recognition, and any agreements between SFPD and NCRIC.

When SFPD failed to comply with our records request, we filed a complaint with the San Francisco Sunshine Ordinance Task Force, the citizen body assigned to oversee violations of open records and meetings laws. Many new documents were released, and the task force found that SFPD had violated both the Sunshine Ordinance and the California Public Records Act. One document was missing, though: the fusion center agreement.

New records released in the complaint now explain why: no such agreements exist. SFPD didn't sign any, according to multiple emails sent between staff.

SFPD can't simply solve this problem by signing the boilerplate agreement tomorrow. Any formal partnership or data-sharing relationship with NCRIC would have to go through the process required by the city's surveillance oversight ordinance, which requires public input into such agreements and the board of supervisors’ approval. SFPD should expect public opposition to its involvement with the fusion center, just as there was opposition to its involvement in the FBI's Joint Terrorism Task Force.

Even if that process were to move forward, the public must be involved in crafting the exact language of the agreement. For example, when the Bay Area Rapid Transit (BART) Police Department pursued an agreement with NCRIC, the grassroots advocacy group Oakland Privacy (an Electronic Frontier Alliance member) helped negotiate an agreement with stronger considerations for civil liberties and privacy.

This isn't the first time SFPD has played fast and loose with data regulations. EFF is currently suing the department for accessing a live camera network to spy on protesters without first following the process required by the surveillance oversight ordinance. EFF has also filed a second Sunshine Ordinance complaint against SFPD for failing to produce a mandated ALPR report in response to a public records request.

This latest episode re-emphasizes that SFPD has not earned the trust of the people when it comes to its use of technology and data. SFPD should be cut off from NCRIC immediately, and the Board of Supervisors should treat any claim about accountability from SFPD with skepticism. SFPD has proven it doesn't believe rules matter, and that should always be a deal-breaker when it comes to surveillance. 

Related Cases: Williams v. San Francisco
Dave Maass

EFF’s “Cover Your Tracks” Will Detect Your Use of iOS 16’s Lockdown Mode

2 months 3 weeks ago

Apple’s new iOS 16 offers a powerful tool for its most vulnerable users. Lockdown Mode reduces the avenues attackers have to hack into users’ phones by disabling certain often-exploited features. But while Lockdown Mode provides a solid defense against intrusion, it is also trivial to detect that the feature is enabled on a device. Our web fingerprinting tool Cover Your Tracks has incorporated detection of Lockdown Mode and alerts users when we determine they have it enabled.

Over the last few years, journalists, human rights defenders, and activists have increasingly become targets of sophisticated hacking campaigns. With a small cost to usability, at-risk populations can protect themselves from commonly used entry points into their devices. One such entry point is downloading remote fonts when visiting a webpage. iOS 16 in Lockdown Mode disallows remote fonts from being loaded from the web, which would otherwise have the potential to allow access to a device by exploiting the complex ways fonts are rendered. However, it is also easy to use a small piece of JavaScript code on the page to determine whether the font was blocked from being loaded.

While a large win for endpoint security, this is also a small loss for privacy. Lockdown Mode is unlikely to be used by many people compared to the millions who use iOS devices, and as such it makes those who do enable it stand out from the crowd as people who need extra protection. Web fingerprinting is a powerful technique for determining a user's browsing habits, circumventing the normal mechanisms users have to avoid tracking, such as clearing cookies.

Make no mistake: Apple’s introduction of this powerful new protection is a welcome development for those that need it the most. But users should also be aware of the information they are exposing to the web while using this feature.

Bill Budington

U.S. Federal Employees Can Take A Stand for Digital Freedoms

2 months 3 weeks ago

It’s that time of the year again when the weather starts to cool down and the leaves start to turn all different shades and colors. More importantly, it is also time for U.S. federal employees to pledge their support for digital freedoms through the Combined Federal Campaign (CFC)!

The pledge period for the CFC is underway and EFF needs your help. Last year, U.S. federal employees raised over $34,000 for EFF through the CFC, helping us fight for free expression, privacy, and innovation on the internet so that we can help create a better digital future.

The Combined Federal Campaign is the world’s largest and most successful annual charity campaign for U.S. federal employees and retirees. Since its inception in 1961, the CFC fundraiser has raised more than $8.6 billion for local, national, and international charities. This year’s campaign runs from September 1 to January 14, 2023. Be sure to make your pledge for the Electronic Frontier Foundation before the campaign ends!

U.S. federal employees and retirees can give to EFF by going to and clicking the DONATE button to give via payroll deduction, credit/debit, or an e-check! If you have a renewing pledge, you can increase your support as well. Be sure to use EFF’s CFC ID #10437. Scan the QR code below to easily make a pledge!

This year’s CFC campaign theme builds on 2020’s “You Can Be The Face of Change.” U.S. federal employees and retirees give through the CFC to change the world for the better, together. With your support, EFF can continue our strides toward a diverse and free internet that benefits all of its users.

With support from those who pledged to EFF last year, we have: rung alarm bells about a police equipment vendor’s now-thwarted plan to arm drones with tasers in response to school shootings, pushed back against government involvement in content moderation on social media platforms, and developed numerous digital security guides for those seeking and offering abortion resources after the overturning of federal protections for reproductive rights.

Federal employees have a tremendous impact on the shape of our democracy and the future of civil liberties and human rights online. Support EFF today by using our CFC ID #10437 when you make a pledge!

Christian Romero

EFF to California Governor: Protect Abortion Data Privacy

2 months 3 weeks ago

In the wake of the Supreme Court’s Dobbs decision, anti-choice sheriffs and bounty hunters will try to investigate and punish abortion seekers based on their internet browsing, private messaging, and phone app location data. Legislators must act now to protect this personal data. Reproductive justice requires data privacy.

That’s why EFF urges California Governor Gavin Newsom to sign A.B. 1242, authored by Assemblymember Rebecca Bauer-Kahan. This bill would protect the data privacy of people seeking abortion, by limiting how California-based entities disclose abortion-related information. Some of the bill’s requirements include the following:     

  • California courts would be prohibited from authorizing wiretaps, pen registers, and other searches for the purpose of enforcing out-of-state laws against abortions that are lawful in California.
  • California businesses that provide electronic communication services, such as email and private messaging, would be prohibited from, in California, providing information in response to out-of-state legal process that arises from anti-abortion laws.
  • California businesses that provide electronic communication services or remote computing services would be prohibited from disclosing communications content and metadata in response to an out-of-state warrant that arises from anti-abortion laws.
  • California government agencies would be prohibited from providing information to any individual or out-of-state agency regarding an abortion lawfully performed in California.

This bill is a strong step forward. But more is needed. Congress and the states must enact comprehensive consumer data privacy legislation, like the federal “My Body, My Data” bill, that limits how businesses collect, retain, use, and share our data. Legislators also must enact new limits on police obtaining personal data from businesses, like banning dragnet police demands to identify all people who visited the same place or used the same keyword search term.

EFF also supports two other California bills that would protect the data privacy of vulnerable people who seek medical sanctuary in California. S.B. 107 would protect trans youths who visit the state to obtain gender-affirming care, and A.B. 2091 would protect people who visit the state to obtain an abortion.

You can read our letter urging California’s Governor to sign A.B. 1242 here.

Adam Schwartz

VICTORY: Slack Offers Retention Settings to Free Workspaces

2 months 4 weeks ago

In a victory for users, Slack has fixed its long-standing retention problems for free workspaces. Instead of holding onto your messages on its servers for as long as your workspace exists, Slack is now giving free workspace admins the option to automatically delete all messages older than 90 days. This basic ability to decide which information Slack should keep and which information it should delete should be available to all users, and we applaud Slack for making this change.

The new retention settings for free accounts were announced in a July blog post and are effective as of September 1st. Follow Slack’s steps to change retention in your own workspaces, or share them with your workspace admin.

Since 2018, we have urged Slack to recognize its higher-risk users and take more steps to protect them. While Slack is intended for use in white-collar office environments, its free version has proven useful for abortion rights activists, get-out-the-vote phone banking organizers, unions, and other political organizing and activism efforts.

Some might argue that the mismatch between enterprise tool design and wider use cases means Slack is simply the wrong tool for high-risk activists. But for many people, especially small and under-resourced organizations, Slack is the most viable option: it’s convenient, easy to use without extensive technical expertise, and already familiar to many.

Enterprise companies have the prerogative to charge more money for an advanced product, but best-practice privacy and security features should not be restricted to those who can afford to pay a premium. Slack’s decision to do the right thing and offer basic retention settings more widely is especially important because the people who cannot afford enterprise subscriptions are often the ones who need strong security and privacy protections the most. 

Gennie Gebhart

FTC Sues Location Data Broker

2 months 4 weeks ago

Phone app location data brokers are a growing menace to our privacy and safety. All you did was click a box while downloading an app. Now the app tracks your every move and sends it to a broker, which then sells your location data to the highest bidder.

So three cheers for the Federal Trade Commission for seeking to end this harmful marketplace! The FTC recently sued Kochava, a location data broker, alleging the company violated a federal ban on unfair business practices. The FTC’s complaint against Kochava illustrates the dangers created by this industry.

Kochava harvests and monetizes a staggering volume of location data. The company claims that on a monthly basis, it provides its customers access to 94 billion data points arising from 125 million active users. The FTC analyzed just one day of Kochava’s data, and found 300 million data points arising from 60 million devices.

Kochava’s data can easily be linked to identifiable people. According to the FTC:

The location data provided by Kochava is not anonymized. It is possible to use the geolocation data, combined with the mobile device’s MAID [that is, its “Mobile Advertising ID”], to identify the mobile device’s user or owner. For example, some data brokers advertise services to match MAIDs with ‘offline’ information, such as consumers’ names and physical addresses.

Even without such services, however, location data can be used to identify people. The location data sold by Kochava typically includes multiple timestamped signals for each MAID. By plotting each of these signals on a map, much can be inferred about the mobile device owners. For example, the location of a mobile device at night likely corresponds to the consumer’s home address. Public or other records may identify the name of the owner or resident of a particular address.

Kochava’s location data can harm people, according to the FTC:

[T]he data may be used to identify consumers who have visited an abortion clinic and, as a result, may have had or contemplated having an abortion. In fact, … it is possible to identify a mobile device that visited a women’s reproductive health clinic and trace that mobile device to a single-family residence.

Likewise, the FTC explains that the same data can be used to identify people who visit houses of worship, domestic violence shelters, homeless shelters, and addiction recovery centers. Such invasions of location privacy expose people, in the words of the FTC, to “stigma, discrimination, physical violence, emotional distress, and other harms.”

The FTC Act bans “unfair or deceptive acts or practices in or affecting commerce.” Under the Act, a practice is “unfair” if: (1) the practice “is likely to cause substantial injury to consumers”; (2) the practice “is not reasonably avoidable by consumers themselves”; and (3) the injury is “not outweighed by countervailing benefits to consumers or to competition.”

The FTC lays out a powerful case that Kochava’s brokering of location data is unfair and thus unlawful. We hope the court will rule in the FTC’s favor. Other location data brokers should take a hard look at their own business model or risk similar judicial consequences.

The FTC has recently taken many other welcome actions to protect people’s digital rights. Last month, the agency announced it is exploring new rulemaking against commercial surveillance. Earlier this year, the FTC fined Twitter for using account security data for targeted ads, brought lawsuits to protect people’s right-to-repair, and issued a policy statement against edtech surveillance.

Adam Schwartz

EFF to Ninth Circuit: Social Media Content Moderation is Not "State Action"

2 months 4 weeks ago

Former EFF intern Shashank Sirivolu contributed to this blog post.  

Social media users who have sued companies for deleting, demonetizing, and otherwise moderating their content have tried several arguments that this violates their constitutional rights. Courts have consistently ruled against them because social media platforms themselves have the First Amendment right to moderate content. The government and the courts cannot tell them what speech they must remove or, on the flip side, what speech they must carry. And when the government unlawfully conspires with or coerces a platform to censor a user, the user should only be able to hold the platform liable for the government’s interference in rare circumstances.  

In some cases, based on the “state action” doctrine, courts can treat a platform’s action as that of the government. This may allow a user to hold the platform liable for what would otherwise be the platform’s private exercise of its First Amendment rights. These cases are rare and narrow. “Jawboning,” or when the government influences content moderation policies, is common. We have argued that courts should only hold a jawboned social media platform liable as a state actor if: (1) the government replaces the intermediary’s editorial policy with its own, (2) the intermediary willingly cedes its editorial implementation of that policy to the government regarding the specific user speech, and (3) the censored party has no remedy against the government.  

To ensure that the state action doctrine does not nullify social media platforms’ First Amendment rights, we recently filed two amicus briefs in the Ninth Circuit in Huber v. Biden and O'Handley v. Weber. Both briefs argued that these conditions were not met, and the courts should not hold the platforms liable under a state action theory.  

In Huber v. Biden, the plaintiff accused Twitter of conspiring with the White House to suspend her account for violating the company’s policy against disseminating harmful and misleading information related to COVID-19. Our brief argued that the plaintiff’s theory was flawed for several reasons. First, the government did not replace Twitter’s editorial policy with its own, but, at most, advised the company about its concerns regarding the harm of misinformation about the virus. Second, Huber does not allege that the government ever read, much less talked to Twitter about, the tweet at issue. Finally, because Huber brought a claim against the government directly, she may have a remedy for her claim.  

In O’Handley v. Weber, the plaintiff accused Twitter of conspiring with the California Secretary of State to censor and suspend his Twitter account for violating the company’s policies regarding election integrity. In direct response to concerns about election interference in the 2016 Presidential election, the California Legislature established the Office of Election Cybersecurity within the California Secretary of State's office. While the Office of Election Cybersecurity notified Twitter about one of the plaintiff’s tweets that it believed contained potential misinformation, there is nothing unconstitutional about the government speaking about its concerns to a private actor. And even if the government did cross the line, O'Handley did not demonstrate that this one notification got Twitter to cede its editorial decision-making to the government. Rather, Twitter may have considered the government’s view but ultimately made its own decision to suspend O’Handley. Finally, because O’Handley brought a claim against the Secretary of State directly, he may have a remedy. 

While it is important that internet users have a well-defined avenue for holding social media companies liable for harmful collaborations with the government, it must be narrow enough to preserve the platforms’ First Amendment rights to curate and edit their content. Otherwise, users themselves will end up being harmed because they will lose access to platforms with varied forums for speech. 

Mukund Rathi

Arizona Law Tramples People’s Constitutional Right to Record Police

3 months ago
EFF, two Arizona chapters of the National Lawyers Guild, Poder in Action, and Mass Liberation AZ filed a brief in federal court opposing the government's attempt to thwart police accountability.

SAN FRANCISCO–A new Arizona law that bans people from recording videos within eight feet of police violates the constitutional rights of legal observers, grassroots activists, and other Arizonans, says a brief filed Friday in federal court by the Electronic Frontier Foundation (EFF), two Arizona chapters of the National Lawyers Guild (NLG), Poder in Action, and Mass Liberation AZ.

“Arizonans routinely hold police accountable throughout their communities by recording them within eight feet,” said Mukund Rathi, an EFF attorney and Stanton Fellow focusing on free speech litigation. “Arizonans use these recordings to document police activity at protests, expose false charges against protesters, and inform the public of police racism and misconduct. Everyone must be free to use mobile devices and social media to record and publish the news, including how police use their powers.”

The new law makes it a crime, punishable by up to a month in jail, to record videos within eight feet of law enforcement activity. The law was signed by Gov. Doug Ducey in July and is scheduled to take effect Sept. 24.

Several news organizations and the American Civil Liberties Union of Arizona sued last month to prevent the law from going into effect, arguing it “creates an unprecedented and facially unconstitutional content-based restriction on speech about an important governmental function.”

The friend-of-the-court brief filed Friday by EFF and its collaborators agrees, and helps illustrate the potential impact of the law by detailing how the grassroots organizations create and use recordings to hold police accountable and keep their communities free and safe.

The groups, represented by EFF and co-counsel Kathleen E. Brody of Phoenix, told the court the new law harms efforts by legal observers and others to exercise their fundamental right to record police activity. Protest activity often occurs within eight feet of police, and sightlines of police activity are often obscured at greater distances. Also, officers often move closer to protesters and those who are recording videos, essentially creating the crime under this law. Video recordings also are more accurate, detailed, and shareable than written note-taking.

"Police and prosecutors in Maricopa County have arrested and falsely charged hundreds of protesters for their free expression in recent years," said Lola N'sangou, Executive Director of Mass Liberation AZ, a Black-led abolitionist group based in south Phoenix and organizing throughout Arizona. "Scores of these protesters faced false felony charges that were later dropped, in many cases due to recordings filmed within eight feet of the arrests and surrounding circumstances. Without these recordings, most of these protesters would have spent decades in prison. One protester faced 100.5 years on completely fabricated charges." 

The case is Arizona Broadcasters Association v. Brnovich, 2:22-cv-01431 in the U.S District Court, District of Arizona.

For the EFF, NLG, Poder in Action, and Mass Liberation AZ amicus brief:

For the underlying complaint:

For more on the right to record:

Contact: Mukund Rathi, Stanton Legal Fellow
Josh Richman

Honoring Peter Eckersley, Who Made the Internet a Safer Place for Everyone

3 months ago

With deep sadness, EFF mourns the loss of our friend, the technologist, activist, and cybersecurity expert Peter Eckersley. Peter worked at EFF for a dozen years and was EFF’s Chief Computer Scientist for many of those. Peter was a tremendous force in making the internet a safer place. He was recently diagnosed with colon cancer and passed away suddenly on Friday. 

The impact of Peter’s work on encrypting the web cannot be overstated. The fact that transport layer encryption on the web is so ubiquitous that it's nearly invisible is thanks to the work Peter began. It’s a testament to the boldness of his vision that he decided that we could and should encrypt the web, and to his sheer tenacity that he kept at it despite disbelief from so many, and a seemingly endless series of blockages and setbacks. There is no doubt that without Peter’s relentless energy, his strategy of cheerful cajoling, and his flexible cleverness, the project would not have even launched, much less succeeded so thoroughly.

While encrypting the web would have been enough, Peter played a central role in many groundbreaking projects to create free, open source tools that protect the privacy of users’ internet experience by encrypting communications between web servers and users. Peter’s work at EFF included privacy and security projects such as Panopticlick, HTTPS Everywhere, Switzerland, Certbot, Privacy Badger, and the SSL Observatory.

His most ambitious project was probably Let’s Encrypt, the free and automated certificate authority, which entered public beta in 2015. Peter had been incubating the project for several years, but was able to leverage the famous “smiley face” image from the Edward Snowden leaks showing where SSL was added and removed, to build a coalition that actually made it happen. Let’s Encrypt fostered the web’s transition from non-secure HTTP connections that were vulnerable to eavesdropping, content injection, and cookie stealing, to the more secure HTTPS, so websites could offer secure connections to their users and protect them from network-based threats. 

By 2017 it had issued 100 million certificates; by 2021, about 90% of all web page visits used HTTPS. As of today it has issued over a billion certificates to over 280 million websites. 

Peter joined EFF as a staff technologist in 2006, when the role was largely to advise EFF’s lawyers and activists so that our work was always technically correct and smart. His passion at the time was the mismatch between copyright law and how the Internet functions, and he finished his PhD while at EFF. Soon, Peter and EFF’s first staff technologist Seth Schoen began to see ways they could leverage small hacks to existing internet infrastructure to build technologies that spur more security and freedom online, as well as ensure that the internet serves everyone. They began to build technical projects, recruited and hired some of the internet's most innovative technologists, and before long created EFF’s Technology Projects Team as a full pillar of EFF’s work.

Peter helped launch Switzerland, a tool to tell users when their ISP was interfering with their web traffic, and helped create a movement for open wireless networks. He also documented violations of net neutrality, advocated for keeping modern computer platforms open, and was a driving force behind the campaign against SOPA/PIPA internet blacklist legislation, after a call from his friend Aaron Swartz. The list goes on and on and includes advising EFF lawyers and activists on all manner of litigation and lobbying efforts.

We'll never forget the gleam in his eye as Peter started talking about his latest idea, nor his wide smile as he kept working to find a way to overcome obstacles and often almost bodily carry his ideas into being. He had the gift of being able to widen the aperture of any problem, giving a perspective that could help see patterns and options that were previously invisible. His single-minded passion could sometimes lead him to step on toes and gloss over problems, but his heart and vision never wavered from what would best serve humanity as a whole. We’ll also never forget the time he secretly built a gazebo on the roof of EFF, or his puckish fashion sense—one year we made special red EFF-logo socks for the entire staff to honor his style.

Peter left EFF in 2018 to focus on studying and calling attention to the malicious use of artificial intelligence and machine learning. He founded the AI Objectives Institute, a collaboration between major technology companies, civil society, and academia, to ensure that AI is designed and used to benefit humanity.

Peter’s vision, audacity, and commitment made the web, and the world, a better place. We will miss him.


Cindy Cohn

Hollywood’s Insistence on New Draconian Copyright Rules Is Not About Protecting Artists

3 months ago

Stop us if you’ve heard these: piracy is driving artists out of business. The reason they are starving is because no one pays for things, just illegally downloads them. You wouldn’t steal a car. These arguments are old and being dragged back out to get support for rules that would strangle online expression. And they are, as ever, about Hollywood wanting to control creativity and not protecting artists.

Box office numbers have remained pretty consistent, except when a global pandemic curtailed theater visits. The problem facing Hollywood is the same one it has faced since its inception: greed.

From the fever-pitch moral panic of the early 2000s, discussions about "piracy" disappeared from pop culture for about a decade. It’s come back, both from the side explaining why and the side that wants everyone punished.

Illegal downloading and streaming are not the cause of Hollywood’s woes. They’re a symptom of a system that is broken for everyone except the few megacorporations and the billionaires at the top of them.  Infringement went down when the industry adapted and gave people what they wanted: convenient, affordable, and legal alternatives. But recently, corporations have given up on affordability and convenience.

The Streaming Hellscape

It’s not news to anyone that the video streaming landscape has, in the last few years, become unnavigable. Finding the shows and movies you want has become a treasure hunt where, when you find the prize, you have to fork over your credit card information for it. And then the prize could disappear at any moment.

Rather than having a huge catalog of diverse studio material, which is what made Netflix popular to begin with, convenience has been replaced with exclusivity. But people don’t want everything a single studio offers. They want certain things. But just like the cable bundles that streaming replaced, a subscription fee isn’t for just what you want, it’s for everything the company offers. And it feels like a bargain to pay for all of it when a physical copy for one thing costs the same as a month’s subscription.

Except that paying for every service isn’t affordable. There are too many and they all have one or two things people want. So you can rotate which ones you pay for every so often, which is inconvenient, or just swallow the cost, which is not affordable. And none of that guarantees that what you want is going to be available. Content appears and disappears from streaming services all the time.

Disney removed Avatar from Disney+ because it is re-releasing it in theaters ahead of the sequel. Avatar is a 13-year-old movie, and rereleasing it in theaters should be a draw because of the theater-going experience. Avatar shouldn’t have to be removed from streaming since its major appeal is what it looks like on a big screen in 3D. But Disney isn’t taking the chance that the moviegoing experience of Avatar alone will get people to pay. It’s making sure people have to pay extra—either by going to the theater or paying for a copy.

And that’s when the content even has a physical form.

After the Warner Bros. merger with Discovery, the new owners wasted almost no time removing things from the streaming service HBO Max, including a number of things that were exclusive to the streaming service. That means there is no place to find copies of the now-removed shows. People used to joke that the internet was forever—once something was online it could not be removed. But that’s not the case anymore. Services that go under take all of their exclusive media with them. Corporate decisions like this remove things from the public record.

It’s a whole new kind of lost media, and like lost media of the past, it’s only going to be preserved by those individuals who did the work to make and save copies of it, often risking draconian legal liability, regardless of how the studio feels about that work.

When things are shuffled around, disappeared, or flat out not available for purchase, people will make their own copies in order to preserve it. That is not a failure of adequate punishment for copyright infringement. It’s a failure of the market to provide what consumers want.

It’s disingenuous for Hollywood’s lobbyists to claim that they need harsher copyright laws to protect artists when it’s the studios that are busy disappearing the creations of these artists. Most artists want their work to find an audience and the fractured, confusing, and expensive market prevents that, not the oft-alleged onslaught of copyright infringement.

Hollywood Cares About Money, Not Artists

There’s a saying that, in various forms, prevails within the creative industry. It goes something like “Art isn’t made in Hollywood. Occasionally, if you get very lucky, it escapes.”

Going back to Warner Bros. and HBO Max: another decision made by the new management was to cancel projects that were largely finished. This included a Batgirl movie, which had a budget of $90 million. The decision was made so that the studio could take a tax write-off, against the wishes of its star and directors, who said, “As directors, it is critical that our work be shown to audiences, and while the film was far from finished, we wish that fans all over the world would have the opportunity to see and embrace the final film themselves. Maybe one day they will insha’Allah.”

The point is that Hollywood isn’t in the art business. It’s in the business business. It is never trying to pay artists, it’s always trying to find a way to keep money out of artists’ hands and in the corporate coffers. There’s a reason “Hollywood accounting” has a Wikipedia entry. It’s an industry infamous for arguing that a movie that made a billion dollars at the box office actually made no money, all to keep from paying the artists involved.

Traditional movie making is a unionized endeavor. Basically everyone involved save the studio has a guild or union. That means that there are minimum standards for the employment contracts that studios have to meet. New technology is attractive to studios because it isn’t covered by those union agreements. They can ignore the demands of labor and then, if the unions threaten to refuse to work with them, they get to negotiate new terms. That’s why the Writers Guild went on strike in 2007.

The new streaming landscape also allowed studios to mistreat their below-the-line workers: everyone who is not an actor, producer, writer, or director. So, most people. IATSE, the union that represents most of those workers, overwhelmingly authorized a strike over working conditions. They particularly called out how streaming projects paid them less, even when those projects had budgets larger than those of traditional media.

Streaming has ruined the ability of writers to make a livable wage off of a job, and has all but eliminated mentoring and on-set experience, contrary to the desires of the actual people who make the shows. Instead of investing in writers, studios push for more “efficient” models that make writing jobs harder to get and producing experience nearly impossible.

So when Hollywood lobbyists argue for draconian copyright laws “for artists,” it should ring especially hollow.

What they want is exclusive control. That includes the ability to constantly charge for access, which means preventing people from having their own copies. Hollywood has fought against audiences having their own copies for as long as the technology has existed. They sued to eliminate VCRs, and when they lost, they started selling tapes. They sued the makers of DVRs, and when they lost again, they opened up to video-on-demand. And now, streaming has given them what they’ve always wanted: complete control over copies of their work. No one owns a copy of the material they watch on a streaming service; they get only a license to watch it for a temporary period.

This way, the studios can make you pay for something every month instead of once. They can take it down so you can’t watch it at all. They can edit things post-release, losing some of the history of the creation. And without copies available to own, they prevent creative newcomers from exercising their right to make fair use of it. All of this is anti-artist.

Studios want to point to an outside reason for their actions. Copyright infringement is convenient that way. And when they endorse draconian legislation like the filter mandates of the Strengthening Measures to Advance Rights Technologies Copyright Act, that is why. But when infringement happens, it’s a symptom of a market not meeting demand, not the cause of the problem.

Take Action

Tell your senators to oppose The Filter Mandate

Katharine Trendacosta

How Ad Tech Became Cop Spy Tech

3 months ago

This article is part of EFF’s investigation of location data brokers and Fog Data Science. Be sure to check out our issue page on Location Data Brokers.

If a company wants to advertise something to you on the internet, it first has to know who you are and what you like to buy. There are many different approaches to gathering this data, but all generally have one goal in common: they link you with the data generated by your devices.

If law enforcement wants to track you via data generated by your devices, it first has to know where to find that data and how it links to you. As it turns out, these goals align quite strongly with those of advertisers.

You can probably guess where this is going.

A multi-billion dollar industry of advertising data brokers sells sensitive data gathered from people’s phones to a wide range of clientele, including the U.S. military, federal law enforcement agencies and, as EFF has learned, state and local law enforcement. This is especially problematic because many law enforcement agencies have argued, erroneously, that they don’t need a warrant to buy people’s location data from data brokers.

And there’s one key digital advertising technology that Fog and other data brokers have turned into a police surveillance technology: the ad ID. Although Android and iOS call it different things, an advertiser identifier (ad ID for short) is a random string of letters and numbers generated for your device and attached to bundles of data generated by the apps and websites you use. These bundles of data often include private information about you, such as your year of birth, gender, what search terms you use, and perhaps most importantly for law enforcement, your location. When your device sends this data along, it’s often bought by data brokers to be repackaged and resold.

Since each of these bundles of data has your unique ad ID attached to it, data brokers can later group them together to form a more complete picture of your behavior. Without an ad ID, the data brokers and their law enforcement customers would have a much harder time tracking individuals in the sea of datapoints.
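To make that grouping concrete, here is a minimal, hypothetical sketch in Python (the record layout and function name are our own assumptions; real broker feeds are far messier): given a pile of scattered records, keying on the ad ID reassembles each device's time-ordered movement history.

```python
from collections import defaultdict

def group_by_ad_id(data_points):
    """Cluster scattered (ad_id, timestamp, lat, lon) records into
    per-device movement histories, sorted by time.

    Illustrative only: the point is that the random-looking ad ID is
    exactly what lets a broker stitch isolated points into one trail.
    """
    devices = defaultdict(list)
    for ad_id, ts, lat, lon in data_points:
        devices[ad_id].append((ts, lat, lon))
    for history in devices.values():
        history.sort()  # chronological order per device
    return dict(devices)
```

Strip the ad ID from those records and this one-liner of a join becomes much harder, which is why the identifier matters so much to both brokers and police.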

Because ad IDs are randomly generated, data brokers like Fog Data Science like to claim that the data they sell doesn’t contain personally identifiable information (PII). This, as multiple studies have shown, is bogus. Ad IDs, because they allow disparate data points to be grouped into an individual’s pattern of movement, can make it trivially easy to identify where a person sleeps at night, goes to work during the day, which bars they frequent, and much more. It takes just a few location hits to identify a person. Police know this. For example, in documents obtained by EFF, a police officer in St. Louis wrote regarding Fog: “There is no PI [personal information] linked to the [device ID]. (But, if we are good at what we do, we should be able to figure out the owner).”

Ad IDs are a crucial part of the online advertising ecosystem. Without them (or a similar technique for fingerprinting devices), it’s hard to imagine the data brokers’ current business model continuing to function. Certainly, companies like Fog and Venntel would find it much harder to sell individual devices’ location data to law enforcement, which would be a huge win for people’s privacy.

A world without ad IDs isn’t hard to imagine, either. Starting in iOS 10.0, Apple began providing an option to “zero out” a device’s ad ID, and recently this option was enabled by default for all iOS users. Analytics data suggests that in the wake of iOS 14.5, 96% of U.S. users opted out of tracking, effectively disabling ad IDs for iOS altogether. Google’s Android also has an option to remove your ad ID, but it’s still not enabled by default. Until all phone manufacturers disable this pernicious feature for good, there are some easy steps you can take to disable your ad ID.

Additionally, we need new laws that limit how corporations process our data, and how police acquire that data from businesses.

Read more about Fog Data Science:

Will Greenberg

Fog Revealed: A Guided Tour of How Cops Can Browse Your Location Data

3 months ago

This article is part of EFF’s investigation of location data brokers and Fog Data Science. Be sure to check out our issue page on Location Data Brokers.

In Part 1 of our series on Fog Data Science, we saw how, when you give some apps permission to view your location, that data can end up being packaged and sold to numerous other companies. Fog Data Science is one of those companies, and it has created a sleek search engine called Fog Reveal that lets cops browse through that location data as easily as they would browse Google Maps.

In this article, we’ll be taking a deep dive into Fog Reveal’s features. Although accounts for Reveal are typically only available to police departments, we were able to analyze the app’s public-facing code to get a better understanding of how it works, how it’s used, and what it looks like when cops get warrantless access to your location data.

What We Found

Fog Reveal’s main page, allowing users to create geofenced device queries anywhere in the U.S.

Fog Reveal offers law enforcement a powerful and incredibly invasive tool for sifting through huge datasets of phone location data. Reveal’s workflow allows cops to perform “geofenced” device searches, i.e. a search for all devices in a specified region on a map, and then find all other locations those devices were at other times. A PowerPoint presentation we received from the Chino Police Department describes how cops use these features to identify so-called “bed-down” locations and build up “patterns of life” for devices’ owners. These features clearly undercut Fog’s claim that their product only contains “anonymized” data with “no PII [personally identifiable information].”

We also discovered that Reveal’s frontend code contains the traces of a much more powerful “federal” featureset, which would allow users to further deanonymize data by revealing device Advertiser IDs, IP addresses, and other phone details. As we will discuss, we do not know if these features are currently in use, but regardless, they demonstrate how simply showing a few more data fields can make a data aggregation tool like this much more invasive.


To properly interpret our findings, it’s important to understand what kind of software Fog Reveal is, and to explain our research methodology. Fog Reveal, like Google Maps, is a web application that runs in your browser. To research its functionality, we locally reconstructed the app based on the web resources available upon visiting the site. This was possible because, upon loading the page, without logging in or even clicking anything, the site automatically requests nearly all the JavaScript and HTML needed by the fully functional app. Throughout this document, we’ll refer to the JavaScript and HTML pulled from Reveal as the “frontend” or “frontend code,” and to its server-side application as the “backend” or “backend code.”

By saving Reveal’s frontend files and organizing them into directories mirroring their original URL paths, we made a local reproduction of the site’s resources. From there, we wrote a mock backend server to serve the files and handle API calls made by the frontend, and then systematically worked out the format of data expected from that API. Once this was done, we had a semi-functional local reproduction of Reveal that made no requests to Fog’s actual server, and yet allowed us to explore its features.
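As a rough illustration of what such a mock backend looks like, here is a minimal Python sketch. To be clear, everything specific in it is our invention for illustration: the directory name, the `/api/` prefix, and the JSON response shape are guesses, not Fog's actual API, and the "signals" it returns are randomly generated.

```python
import json
import random
from http.server import BaseHTTPRequestHandler, HTTPServer
from pathlib import Path

FRONTEND_DIR = Path("reveal_frontend")  # saved JS/HTML, mirroring original URL paths

def fake_signal():
    """Return one randomly generated location 'signal' (no real data)."""
    return {
        "deviceId": "%032x" % random.getrandbits(128),
        "lat": round(random.uniform(37.70, 37.81), 6),   # somewhere in SF
        "lon": round(random.uniform(-122.52, -122.36), 6),
        "timestamp": random.randint(1_600_000_000, 1_660_000_000),
    }

class MockBackend(BaseHTTPRequestHandler):
    """Serves saved frontend assets; answers API calls with fake JSON.

    No path sanitization or auth -- strictly for local research use.
    """
    def do_GET(self):
        local = FRONTEND_DIR / self.path.lstrip("/")
        if local.is_file():                      # a saved frontend asset
            self.send_response(200)
            self.end_headers()
            self.wfile.write(local.read_bytes())
        elif self.path.startswith("/api/"):      # hypothetical API prefix
            body = json.dumps([fake_signal() for _ in range(10)]).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

# To run locally: HTTPServer(("localhost", 8000), MockBackend).serve_forever()
```

The real work, as described above, was iterating on the fake responses until the frontend stopped erroring and rendered its full interface.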

Because our mock server isn’t an exact replica of Reveal’s actual backend, we should preface this article by saying that our findings here only apply to the frontend code, as our mock server’s functionality is based on educated guesses and only returns fake location data. Consequently, it’s possible that our local reproduction’s behavior differs from Fog’s actual application. Where appropriate, we will cite the relevant frontend code (which we've made available on DocumentCloud) and point out where uncertainties remain, and in general will describe our estimation of Fog Reveal’s actual features with as few assumptions about the backend as possible.

With that out of the way, let’s now take a look at our findings on Fog Reveal’s features. All of the data depicted in the following document, including latitude/longitude coordinates and IP addresses, are fake data generated randomly by our mock backend server. All screenshots are of our reconstructed app, not of Fog’s production app.

Making a query

After signing into Reveal, the user is presented with a Google Maps view of the US, as well as a toolbox at the top-right of the screen:


Users (most likely law enforcement) can zoom in on a location of interest, and use the toolbox to draw a geofence.

Reveal’s frontend shows several tools for drawing geofences, the most basic of which is just a circle:


Here, we’ve targeted the EFF offices in San Francisco.

If this isn’t specific enough, users can also draw arbitrary shapes to carve out a more detailed geofence:


We’ve now excluded EFF’s neighbors, as well as our patio.

The frontend limits the size of these geofence queries, although those limits are quite large. For example, the circle tool permits queries with a radius of up to 2500 meters1, covering nearly 20 square kilometers when performing a “signal search.” It’s possible that the backend imposes further limitations.
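The 20-square-kilometer figure follows directly from the 2500-meter radius limit in the frontend code:

```python
import math

# Area of the largest circular geofence the frontend circle tool allows:
# a circle with a 2500-meter radius.
radius_m = 2500
area_km2 = math.pi * radius_m ** 2 / 1_000_000  # convert m^2 to km^2
print(f"{area_km2:.1f} km^2")  # roughly 19.6 square kilometers
```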

The user can also specify a date and time range for their query, and it seems these ranges can stretch back over several months: a copy of Fog Reveal’s user manual received from the Greensboro Police Department claims that date/time ranges can extend up to 90 days, and can be searched “back to Jun[e] of 2017”.

Query Results

After specifying a geofence and date/time range, the user can run their query. Queries return a set of data points, referred to as “signals” in the user manual, which represent where a device was at a given point in time2. The user can then do further analysis on these signals, such as grouping them by the device that produced them, or displaying the path taken by the device over time:


Our query results show 10 signals originating from 2 separate devices.


By grouping the signals by the device that produced them, Reveal can trace out their path over time, giving us a view into how the device’s owner was moving that day.
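To illustrate what this grouping amounts to, here is a sketch of the analysis in Python. The record fields (`device_id`, `timestamp`, `lat`, `lon`) are our own naming for illustration, not Reveal’s actual schema, and the data is fake:

```python
from collections import defaultdict

def trace_paths(signals):
    """Group signals by the device that produced them, then sort each
    group chronologically to trace the device's path over time."""
    by_device = defaultdict(list)
    for s in signals:
        by_device[s["device_id"]].append(s)
    for points in by_device.values():
        points.sort(key=lambda s: s["timestamp"])
    return {dev: [(s["lat"], s["lon"]) for s in pts]
            for dev, pts in by_device.items()}

# Fake signals from two devices near the EFF office.
signals = [
    {"device_id": "A", "timestamp": 2, "lat": 37.79, "lon": -122.39},
    {"device_id": "B", "timestamp": 1, "lat": 37.80, "lon": -122.40},
    {"device_id": "A", "timestamp": 1, "lat": 37.78, "lon": -122.41},
]
paths = trace_paths(signals)
```

Even this toy version shows why the feature is sensitive: once signals are attributed to a single device, its movements over time fall out almost for free.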

As an aside, in this example we've been using the EFF office in San Francisco, which, coincidentally, was once the location of a Planned Parenthood clinic. While we do not have evidence that Fog or its law enforcement customers are using Reveal to search for people who’ve sought reproductive healthcare, it’s nevertheless conceivable that it could be used this way: we have examples of cops using Reveal to search individual buildings, as well as examples of other data brokers selling the location data of Planned Parenthood patients (though SafeGraph stopped this practice after the story broke). After the Supreme Court’s decision to overturn Roe v. Wade, and as states across the country pass increasingly draconian bills restricting access to abortion, it’s important to consider that Reveal and tools like it represent a new threat to people seeking reproductive healthcare.

Digging deeper with device queries

The frontend code suggests that Fog creates unique internal identifiers for devices, called “Fog IDs” (or “registration IDs,”3 which we understand to be the same as Fog’s “device registration number”). These unique identifiers can be queried directly, allowing users to get all signals produced by a device within a certain period of time, regardless of whether those signals fell inside the original geofence:


In the user manual, this feature is called a “device query” and is described as including data from the device’s “local, regional or global travel.” The user manual also describes a feature called “common device queries”, which allow the user to determine “if any devices are common to multiple locations.”

Federal features

If certain user parameters are set4, Reveal updates its logo to display “Reveal Federal” and enables the frontend to request a much more powerful suite of query tools from the backend. The frontend code suggests that this may occur if the user is a member of federal law enforcement5, but because we have no public records mentioning any such federal users, we don’t know for sure which users (if any) this applies to. For the purposes of this document, we will refer to these hypothetical users as federal users.

Federal users have access to an interface for converting between Fog’s internal device IDs (“FOG IDs”) and a device’s actual Advertiser ID6:


This is eyebrow-raising for a couple of reasons. First, if this feature is operational, it would contradict assurances made in a sample state search warrant Fog sends to customers that FOG IDs can’t be converted back into Advertiser IDs. Second, if users could retrieve the Advertiser IDs of all devices in a query’s results, Reveal would become far more capable of unmasking the identities of those devices’ owners: because anyone with access to a device can read its Advertiser ID, law enforcement could verify whether a specific person’s device was part of a query’s results.

Additionally, when a federal user views the devices in their results, the frontend is designed to show a great deal more information7 about each device than it shows non-federal users. Assuming that the backend provides this data, a federal user could view device information such as:

  • User Agent
  • Browser Family
  • Browser Version
  • OS Family
  • OS Version
  • Device Family
  • Device Brand
  • Device Model
  • Whether the device belongs to an EU Resident
  • Last Seen IP Addresses


Federal users are also given an interface to query for signals/devices based on one or more IP addresses:


Connections to Venntel

Many of the features we analyzed in this article are powered by API calls that reference Venntel, a major player in the data broker industry and a DHS contractor. Although Fog’s engineers could have named these API endpoints arbitrarily, the way they function does suggest that Venntel is a source of location and device data for Reveal.

Notably, when a Reveal user performs any geofenced device query, that query is submitted to the URL path /Venntel/GetLocationData. Additionally, queries for specific device locations send a request to /Venntel/GetDeviceLocationData, and when a federal user makes a request for more device details, the frontend sends a request to /Venntel/GetDeviceDetails. This means that nearly all frontend requests having to do with searching device or location data are prefixed with “Venntel”. And this wouldn’t be the only connection between Fog and Venntel: many of the records EFF has received point to a close link between the two companies.
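The Venntel-prefixed paths observed in the frontend code can be summarized in a small table. The purpose descriptions are from our analysis of the frontend, and the helper function is purely illustrative of how our mock server recognized these calls:

```python
# API paths observed in Reveal's frontend code, mapped to the feature
# each one powers (descriptions reflect our analysis, not Fog's docs).
VENNTEL_ENDPOINTS = {
    "/Venntel/GetLocationData": "geofenced signal/device query",
    "/Venntel/GetDeviceLocationData": "query for a specific device's locations",
    "/Venntel/GetDeviceDetails": "federal-only device detail lookup",
}

def is_venntel_call(path):
    """True if a frontend request path carries the Venntel prefix."""
    return path.startswith("/Venntel/")
```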


As we've seen, Fog Reveal provides law enforcement a powerfully invasive tool for searching huge swaths of commercially available location data. With a few clicks, its users can find not only the devices present in a location, but also everywhere else each of those devices went during other time periods. Its federal featureset, whether currently in use or not, demonstrates how much more invasive the tool could become by revealing just a handful of additional fields.

If you’re not happy about the idea of your location data possibly being sold to companies like Fog, we don’t blame you. Luckily, there’s an easy step you can take to make it much harder for data brokers and companies like Fog to tie your location data to your device: disabling Ad ID tracking on your phone. Beyond that, we believe that there are changes needed at both the technical and legal levels to prevent this kind of invasive data collection and usage. To learn more, check out our other articles in this series on data brokers.


Will Greenberg