FBI Warning on IoT Devices: How to Tell If You Are Impacted


On June 5th, the FBI released a PSA titled “Home Internet Connected Devices Facilitate Criminal Activity.” This PSA largely references devices impacted by the latest generation of BADBOX malware (as named by HUMAN’s Satori Threat Intelligence and Research team), which EFF researchers encountered primarily on Android TV set-top boxes. However, the malware has also impacted tablets, digital projectors, aftermarket vehicle infotainment units, picture frames, and other types of IoT devices.

One goal of this malware is to turn the devices of unsuspecting buyers into network proxies, potentially making them hubs for various criminal activities and putting the devices’ owners at risk of scrutiny from authorities. The malware is particularly insidious because it comes pre-installed out of the box on devices sold by major online retailers such as Amazon and AliExpress. If you search “Android TV Box” on Amazon right now, many of the impacted models are still being sold by sellers of opaque origin. The continued sale of these devices even led us to write an open letter to the FTC, urging it to take action on resellers.

The FBI listed some indicators of compromise (IoCs) in the PSA to help consumers tell whether they were impacted. But the average person isn’t running network detection infrastructure at home and can’t be expected to know which IoCs reveal that a device is generating “unexplained or suspicious Internet traffic.” Here, we attempt to give more comprehensive background on these IoCs. If you find any of them on devices you own, we encourage you to follow through by contacting the FBI’s Internet Crime Complaint Center (IC3) at www.ic3.gov.

The FBI lists these IoCs:

  • The presence of suspicious marketplaces where apps are downloaded.
  • Requiring Google Play Protect settings to be disabled.
  • Generic TV streaming devices advertised as unlocked or capable of accessing free content.
  • IoT devices advertised from unrecognizable brands.
  • Android devices that are not Play Protect certified.
  • Unexplained or suspicious Internet traffic.

The following adds context to the list above, as well as some additional IoCs we have seen in our research.
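
The last item, “unexplained or suspicious Internet traffic,” is the hardest for a non-expert to evaluate. If your router, or a tool like Pi-hole, can export a log of the DNS queries made on your network, a few lines of Python can check that log against the command-and-control domains published as IoCs in the BADBOX 2.0 report. This is a rough sketch, not a turnkey detector: the domains below are placeholders (substitute the real IoC domains from the report), the log format is assumed to be one hostname per line, and an empty result does not prove a device is clean.

```python
# Sketch: scan an exported DNS query log (one hostname per line, as many
# routers and tools like Pi-hole can export) against a list of IoC domains.
# The domains below are PLACEHOLDERS, not real IoCs -- substitute the
# command-and-control domains published in the BADBOX 2.0 report.
IOC_DOMAINS = {
    "example-ioc-1.invalid",  # placeholder
    "example-ioc-2.invalid",  # placeholder
}

def suspicious_queries(log_path: str) -> list[str]:
    """Return hostnames from the log matching an IoC domain or a subdomain of one."""
    hits = []
    with open(log_path) as f:
        for line in f:
            host = line.strip().lower().rstrip(".")
            if any(host == d or host.endswith("." + d) for d in IOC_DOMAINS):
                hits.append(host)
    return hits

if __name__ == "__main__":
    for host in suspicious_queries("dns_queries.txt"):
        print("possible IoC match:", host)
```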

Play Protect Certified

“Android devices that are not Play Protect certified” refers to any device from a brand or partner not listed here: https://www.android.com/certified/partners/. Google subjects devices to compatibility and security tests before admitting them to the Play Protect program, though the criteria for inclusion on the list are not made fully transparent outside of Google. The list also changes over time: the tablet brand we researched was de-listed during our work. This IoC largely encompasses “IoT devices advertised from unrecognizable brands,” and the list includes international brands and partners as well.

Outdated Operating Systems

Another issue we saw was badly outdated Android versions. For reference, Android 16 has just started rolling out, while Android 9–12 were the versions we most commonly encountered on these devices. This could be a result of “copied homework” from older legitimate Android builds. Such devices often ship with their own update software, which is a problem in its own right: in addition to whatever it legitimately downloads and updates on the device, it can deliver second-stage payloads that further infect it.

You can check which version of Android you have by going to Settings and searching for “Android version”.
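
If the device has USB debugging enabled and you have Android’s adb tool installed on a computer, you can also pull the version, along with the brand and model names relevant to the sections above and below, directly from the device. The brand can then be compared against Google’s certified partners list linked above. A minimal sketch, assuming adb is on your PATH; it queries only standard Android system properties:

```python
# Sketch: read basic identifiers from an Android device over adb.
# Assumes USB debugging is enabled on the device and the `adb` tool is
# installed on your computer; all property names below are standard
# Android system properties.
import subprocess

def getprop(name: str) -> str:
    """Return the value of an Android system property via `adb shell getprop`."""
    result = subprocess.run(
        ["adb", "shell", "getprop", name],
        capture_output=True, text=True, check=True,
    )
    return result.stdout.strip()

if __name__ == "__main__":
    print("Android version:", getprop("ro.build.version.release"))
    print("Brand:          ", getprop("ro.product.brand"))
    print("Manufacturer:   ", getprop("ro.product.manufacturer"))
    print("Model:          ", getprop("ro.product.model"))
```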

Android App Marketplaces

We’ve previously argued that the availability of different app marketplaces leads to greater consumer choice, and that users can choose alternatives even more secure than the Google Play Store. While this is true, the FBI’s warning about suspicious marketplaces is also prudent. Avoiding “downloading apps from unofficial marketplaces advertising free streaming content” is sound (if somewhat vague) advice for set-top boxes, yet the recommendation comes without further guidance on how to identify suspicious marketplaces on other Android IoT platforms. Best practice is to investigate any app store used on an Android device separately, and to be aware that a suspicious Android device can come with preloaded app stores that mimic the functionality of legitimate ones while containing unwanted or malicious code.
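
There is no foolproof way to spot a malicious preloaded store, but enumerating what is installed is a reasonable first step. As a sketch, assuming the same adb setup as above, the standard `pm list packages -3` command lists packages installed outside the stock system image; unfamiliar “app store” or “updater” packages in that output deserve a closer look. Note that components a vendor has buried in the system partition will not show up here.

```python
# Sketch: list packages installed outside the stock system image via adb,
# to surface unfamiliar app stores or updaters. Assumes the same adb setup
# as the sketch above. Deciding which package names are suspicious is a
# manual step, and system-partition components won't appear in this output.
import subprocess

def third_party_packages() -> list[str]:
    """Return third-party package names via `adb shell pm list packages -3`."""
    result = subprocess.run(
        ["adb", "shell", "pm", "list", "packages", "-3"],
        capture_output=True, text=True, check=True,
    )
    # Output lines look like "package:com.example.app"
    return sorted(
        line.split(":", 1)[1] for line in result.stdout.splitlines() if ":" in line
    )

if __name__ == "__main__":
    for pkg in third_party_packages():
        print(pkg)
```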

Models Listed in the BADBOX 2.0 Report

We also recommend looking up the device names and models listed in the BADBOX 2.0 report. We investigated the T95 models alongside the other independent researchers who initially found this malware. Many model names can be grouped into families sharing the same letters but different numbers: these operations iterate fast, but their naming conventions are often lazy in this respect. If you’re not sure which model you own, you can usually find it on a sticker somewhere on the device, or pull it over adb as in the sketch above. If that fails, you may be able to find it on the original receipt or in your order history.

A Note from Satori Researchers:

“Below is a list of device models known to be targeted by the threat actors. Not all devices of a given model are necessarily infected, but Satori researchers are confident that infections are present on some devices of the below device models:”

List of Potentially Impacted Models

Broader Picture: The Digital Divide

Unfortunately, the only way to be sure that an Android device from an unknown brand is safe is not to buy it in the first place. Though initiatives like the U.S. Cyber Trust Mark are welcome developments intended to encourage demand-side trust in vetted products, recent shake-ups in federal regulatory bodies mean the future of this assurance mark is unknown. As a result, people who face budget constraints and have trouble affording top-tier digital products for streaming or other connected purposes may rely on cheaper imitation products that are not only rife with vulnerabilities but can even come preloaded with malware out of the box. This puts them disproportionately at legal risk when these devices use the buyer’s home internet connection as a proxy for nefarious or illegal purposes.

Cybersecurity, and trust that the products we buy won’t be used against us, is essential: not just for those who can afford name-brand digital devices, but for everyone. While we welcome the IoCs that the FBI has listed in its PSA, more must be done to protect consumers from the myriad dangers their devices can expose them to.

Alexis Hancock

Weekly Report: Multiple Vulnerabilities in Apache Tomcat

Apache Tomcat contains multiple vulnerabilities. The issue can be resolved by updating the product to a fixed version. For details, refer to the information provided by the developer.

Why Are Hundreds of Data Brokers Not Registering with States?


Written in collaboration with Privacy Rights Clearinghouse

Hundreds of data brokers have not registered with state consumer protection agencies. These findings come as more states are passing data broker transparency laws that require brokers to provide information about their business and, in some cases, give consumers an easy way to opt out.

In recent years, California, Texas, Oregon, and Vermont have passed data broker registration laws that require brokers to identify themselves to state regulators and the public. A new analysis by Privacy Rights Clearinghouse (PRC) and the Electronic Frontier Foundation (EFF) reveals that many data brokers registered in one state aren’t registered in others.

Among companies that registered in at least one state, 291 did not register in California, 524 did not register in Texas, 475 did not register in Oregon, and 309 did not register in Vermont. These numbers come from registry data gathered in early April 2025.
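
Conceptually, the comparison behind these numbers is a set difference across the four state registries. Below is a minimal sketch of that logic, assuming each registry has been exported to a text file with one normalized company name per line; the file names are illustrative, not PRC/EFF’s actual pipeline, and in practice normalizing company names across registries is the hard part, which this glosses over.

```python
# Sketch of the cross-registry comparison: load each state's registry as a
# set of normalized company names and report, for each state, how many
# companies registered elsewhere are missing from it. File names and the
# one-name-per-line format are illustrative assumptions.
def load_registry(path: str) -> set[str]:
    with open(path) as f:
        return {line.strip().lower() for line in f if line.strip()}

states = {
    "California": load_registry("california.txt"),
    "Texas": load_registry("texas.txt"),
    "Oregon": load_registry("oregon.txt"),
    "Vermont": load_registry("vermont.txt"),
}

registered_anywhere = set().union(*states.values())  # in at least one state
for state, registered in states.items():
    missing = registered_anywhere - registered
    print(f"{state}: {len(missing)} brokers registered elsewhere but not here")
```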

PRC and EFF sent letters to state enforcement agencies urging them to investigate these findings. More investigation by states is needed to determine whether these registration discrepancies reflect widespread noncompliance, gaps and definitional differences in the various state laws, or some other explanation.

New data broker transparency laws are an essential first step to reining in the data broker industry. This is an ecosystem in which your personal data taken from apps and other web services can be bought and sold largely without your knowledge. The data can be highly sensitive like location information, and can be used to target you with ads, discriminate against you, and even enhance government surveillance. The widespread sharing of this data also makes it more susceptible to data breaches. And its easy availability allows personal data to be obtained by bad actors for phishing, harassment, or stalking.

Consumers need robust deletion mechanisms to remove their data stored and sold by these companies. But the potential registration gaps we identified threaten to undermine such tools. California’s Delete Act will soon provide consumers with an easy tool to delete their data held by brokers—but it can only work if brokers register. California has already brought a handful of enforcement actions against brokers who failed to register under that law, and such compliance efforts are becoming even more critical as deletion mechanisms come online.

It is important to understand the scope of our analysis.

This analysis only includes companies that registered in at least one state. It does not capture data brokers that completely disregard state laws by failing to register in any state. A total of 750 data brokers have registered in at least one state. While harder to find, shady data brokers who have failed to register anywhere should remain a primary enforcement target.

This analysis also does not claim or prove that any of the data brokers we identified broke the law. While the definition of “data broker” is similar across states, there are variations that could require a company to register in one state but not another. To take one example, a data broker registered in Texas that only brokers the data of Texas residents would not be legally required to register in California. To take another, a data broker that registered with Vermont in 2020 but then changed its business model and is no longer a broker would not be required to register in 2025. More detail on the variations in data broker laws is outlined in our letters to regulators.

States should investigate compliance with data broker registration requirements, enforce their laws, and plug any loopholes. Ultimately, consumers deserve protections regardless of where they reside, and Congress should also work to pass baseline federal data broker legislation that minimizes collection and includes strict use and disclosure limits, transparency obligations, and consumer rights.

Clarification: We sent a letter to the Vermont Attorney General listing "309 data brokers that apparently have either not registered or re-registered" in Vermont. One of the companies listed in the Vermont letter, InfoPay, Inc., most recently registered as a data broker in Vermont on Jan. 13, 2025, according to current Vermont records. This registration did not appear in our own data when we originally gathered it from the state’s website on April 4-8, 2025.

Read more here:

California letter

Texas Letter

Oregon Letter

Vermont Letter

Spreadsheet of data brokers

Mario Trujillo

[Opinion] Reporters Address Cases That Don’t Sit Right, Faster Than Fact-Checking (Toshikuni Kinoshita)

There is a saying that you can tell what kind of person someone is by looking at their bookshelf. In essence, the sum of the information a person has encountered throughout their life shapes their character. If the source information is dubious, then naturally what it shapes can only turn out unreliable as well. So what is the situation surrounding the mass media, which sends people vast amounts of information every day? In a piece I submitted to the latest issue of the newsletter “JCJ Kanagawa,” I wrote that the reality seems considerably lacking in fairness. Here, drawing on that piece once more, …
JCJ

Major Setback for Intermediary Liability in Brazil: Risks and Blind Spots


This is the third post of a series about internet intermediary liability in Brazil. Our first post gives an overview of Brazil's current internet intermediary liability regime, set out in a law known as "Marco Civil da Internet," the context of its approval in 2014, and the beginning of the Supreme Court's judgment of such regime in November 2024. Our second post provides a bigger picture of the Brazilian context underlying the court's analysis and its most likely final decision. 

Update: Brazil’s Supreme Court analysis of the country’s intermediary liability regime concluded on June 26, 2025. With three dissenting votes in favor of Article 19’s constitutionality, the majority of the court deemed it partially unconstitutional. Part of the perils we pointed out have unfortunately come true. The ruling does include some (though not all) of the minimum safeguards we had recommended. The full content of the final thesis approved is available here. Some highlights:

  • Importantly, the court preserved Article 19’s regime for crimes against honor (such as defamation).
  • However, the notice-and-takedown regime set in Article 21 became the general rule for platforms’ liability for user content.
  • Platforms are liable regardless of notification for paid content they distribute (including ads) and for posts shared through “artificial distribution networks” (i.e., networks of bots).
  • The court established a duty of care regarding the distribution of specific serious unlawful content. This monitoring-duty approach is quite controversial, especially considering the varied nature and size of platforms. Fortunately, the ruling emphasizes that platforms can only be held liable for these specific contents without a previous notice if they fail to tackle them in a systemic way (rather than for individual posts).
  • Article 19’s liability regime remains the general rule for applications like email services, virtual conference platforms, and messaging apps (regarding inter-personal messages).

The entirety of the ruling, including the full content of the Justices’ votes, will be published in the coming weeks.

The court’s examination of Marco Civil’s Article 19 began with Justice Dias Toffoli in November last year. We explained here the cases under trial, the reach of the Supreme Court’s decision, and Article 19’s background in Marco Civil’s approval in 2014. We also highlighted some aspects and risks of the vote of Justice Dias Toffoli, who considered the intermediary liability regime established in Article 19 unconstitutional.

Most of the justices have agreed to find this regime at least partially unconstitutional, but differ on the specifics. Relevant elements of their votes include: 

  • Notice-and-takedown is likely to become the general rule for platforms' liability for third-party content (based on Article 21 of Marco Civil). Justices still have to settle whether this applies to internet applications in general or if some distinctions are relevant, for example, applying only to those that curate or recommend content. Another open question refers to the type of content subject to liability under this rule: votes pointed to unlawful content/acts, manifestly criminal or clearly unlawful content, or opted to focus on crimes. Some justices didn’t explicitly qualify the nature of the restricted content under this rule.   

  • If partially valid, the need for a previous judicial order to hold intermediaries liable for user posts (Article 19 of Marco Civil) remains in force for certain types of content (or certain types of internet applications). For some justices, Article 19 should be the liability regime in the case of crimes against honor, such as defamation. Justice Luís Roberto Barroso also considered this rule should apply for any unlawful acts under civil law. Justice Cristiano Zanin has a different approach. For him, Article 19 should prevail for internet applications that don’t curate, recommend or boost content (what he called “neutral” applications) or when there’s reasonable doubt about whether the content is unlawful.

  • Platforms are considered liable for ads and boosted content that they deliver to users. This was the position held by most of the votes so far. Justices did so either by presuming platforms’ knowledge of the paid content they distribute, holding them strictly liable for paid posts, or by considering the delivery of paid content as platforms’ own act (rather than “third-party” conduct). Justice Dias Toffoli went further, including also non-paid recommended content. Some justices extended this regime to content posted by inauthentic or fake accounts, or when the non-identification of accounts hinders holding the content authors liable for their posts.   

  • Monitoring duty of specific types of harmful and/or criminal content. Most concerning is that different votes establish some kind of active monitoring and likely automated restriction duty for a list of contents, subject to internet applications' liability. Justices have either recognized a “monitoring duty” or considered platforms liable for these types of content regardless of a previous notification. Justices Luís Roberto Barroso, Cristiano Zanin, and Flávio Dino adopt a less problematic systemic flaw approach, by which applications’ liability would not derive from each piece of content individually, but from an analysis of whether platforms employ the proper means to tackle these types of content. The list of contents also varies. In most of the cases they are restricted to criminal offenses, such as crimes against the democratic state, racism, and crimes against children and adolescents; yet they may also include vaguer terms, like “any violence against women,” as in Justice Dias Toffoli’s vote. 

  • Complementary or procedural duties. Justices have also voted to establish complementary or procedural duties. These include providing a notification system that is easily accessible to users, a due process mechanism where users can appeal against content restrictions, and the release of periodic transparency reports. Justice Alexandre de Moraes also specifically mentioned algorithmic transparency measures. 

  • Oversight. Justices also discussed which entity or oversight model should be used to monitor compliance while Congress doesn’t approve a specific regulation. They raised different possibilities, including the National Council of Justice, the General Attorney’s Office, the National Data Protection Authority, a self-regulatory body, or a multistakeholder entity with government, companies, and civil society participation. 

Three other justices have yet to present their votes to complete the judgment. As we pointed out, the ruling will both decide the individual cases that entered the Supreme Court through appeals and the “general repercussion” issues underlying these individual cases. For addressing such general repercussion issues, the Supreme Court approves a thesis that orients lower court decisions in similar cases. The final thesis will reflect the majority of the court's agreements around the topics we outlined above. 

Justice Alexandre de Moraes argued that the final thesis should equate the liability regime of social media and private messaging applications to the one applied to traditional media outlets. This disregards important differences between the two: even if social media platforms curate content, they handle a massive volume of third-party posts, mainly organized through algorithms. Although such curation reflects business choices, it does not equate these platforms to media outlets that directly create content or individually purchase specific content from approved independent producers. This is even more complicated with messaging applications, seriously endangering privacy and end-to-end encryption.

Justice André Mendonça was the only one so far to preserve the full application of Article 19. His proposed thesis highlighted the necessity of safeguarding privacy, data protection, and the secrecy of communications in messaging applications, among other aspects. It also indicated that judicial takedown orders must provide specific reasoning and be made available to platforms, even if issued within a sealed proceeding. The platform must also have the ability to appeal the takedown order. These are all important points the final ruling should endorse. 

Risks and Blind Spots 

We have stressed the many problems entangled with broad notice-and-takedown mandates and expanded content monitoring obligations. Extensively relying on AI-based content moderation and tying it to intermediary liability for user content will likely exacerbate the detrimental effects of these systems’ limitations and flaws. The perils and concerns that grounded Article 19's approval remain valid and should have led to a position of the court preserving its regime.  

However, given the judgment’s current stage, there are still some minimum safeguards that justices should consider or reinforce to reduce harm.

It’s crucial to put in place guardrails against the abuse and weaponization of notification mechanisms. At a minimum, platforms shouldn’t be liable following an extrajudicial notification when there’s reasonable doubt concerning the content’s lawfulness. In addition, notification procedures should ensure that notices are sufficiently precise and properly substantiated, indicating the content’s specific location (e.g., a URL) and why the notifier considers it illegal. Internet applications must also provide reasoned justification and adequate appeal mechanisms for those who face content restrictions.

On the other hand, holding intermediaries liable for individual pieces of user content regardless of notification, by massively relying on AI-based content flagging, is a recipe for over-censorship. Adopting a systemic-flaw approach could at least mitigate this problem. Moreover, justices should clearly set apart private messaging applications, as mandated content-based restrictions would erode secure, end-to-end encrypted implementations.

Finally, we should note that justices generally didn’t distinguish large internet applications from other providers when detailing liability regimes and duties in their votes. This is one major blind spot, as it could significantly impact the feasibility of decentralized alternatives to Big Tech’s business models, entrenching platform concentration. Similarly, despite criticism of platforms’ business interest in monetizing and capturing user attention, the court’s debates largely failed to address the pervasive surveillance infrastructure lying underneath Big Tech’s power and abuses.

Indeed, while justices have called out Big Tech’s enormous power over the online flow of information – over what’s heard and seen, and by whom – the consequences of this decision may actually deepen that powerful position.

It’s worth recalling a line from Aaron Swartz in the film “The Internet’s Own Boy,” comparing broadcasting and the internet: “[…] what you see now is not a question of who gets access to the airwaves, it’s a question of who gets control over the ways you find people.” As he puts it, today’s challenge is less about who gets to speak and more about who gets to be heard.

There’s an undeniable source of power in operating the inner rules and structures by which the information flows within a platform with global reach and millions of users. The crucial interventions must aim at this source of power, putting a stop to behavioral surveillance ads, breaking Big Tech’s gatekeeper dominance, and redistributing the information flow.  

That’s not to say that we shouldn’t care about how each platform organizes its online environment. We should, and we do. The EU Digital Services Act, for example, established rules in this sense, leaving the traditional liability regime largely intact. Rather than leveraging platforms as users’ speech watchdogs by potentially holding intermediaries liable for each piece of user content, platform accountability efforts should broadly look at platforms’ processes and business choices. Otherwise, we will end up focusing on monitoring users instead of targeting platforms’ abuses. 

Veridiana Alimonti