Federal Judge Makes History in Holding That Border Searches of Cell Phones Require a Warrant
With United States v. Smith (S.D.N.Y. May 11, 2023), a district court judge in New York made history by becoming the first to rule that a warrant is required for a cell phone search at the border, “absent exigent circumstances” (although other district courts have wanted to do so).
EFF is thrilled about this decision, given that we have been advocating for a warrant for border searches of electronic devices in the courts and Congress for nearly a decade. If the case is appealed to the Second Circuit, we urge the appellate court to affirm this landmark decision.

The Border Search Exception as Applied to Physical Items Has a Long History
U.S. Customs & Border Protection (CBP) asserts broad authority to conduct warrantless, and often suspicionless, device searches at the border, which includes ports of entry at the land borders, international airports, and seaports.
For a century, the Supreme Court has recognized a border search exception to the Fourth Amendment’s warrant requirement, allowing not only warrantless but also often suspicionless searches of luggage and other items crossing the border.
Warrantless device searches at the border, and the significant invasions of privacy they represent, are only increasing. In Fiscal Year 2022, CBP conducted an all-time high of 45,499 device searches.
The Supreme Court has not yet considered the application of the border search exception to smartphones, laptops, and other electronic devices that contain the equivalent of millions of pages of information detailing the most intimate details of our lives—even though we asked it to back in 2021.

Circuit Courts Have Narrowed the Border Search Exception’s Application to Digital Data
Federal appellate courts, however, have considered this question and circumscribed CBP’s authority.
The Ninth Circuit in United States v. Cano (2019) held that a warrant is required for a device search at the border that seeks data other than “digital contraband” such as child pornography. Similarly, the Fourth Circuit in United States v. Aigbekaen (2019) held that a warrant is required for a forensic device search at the border in support of a domestic criminal investigation.
These courts and the Smith court were informed by Riley v. California (2014). In that watershed case, the Supreme Court held that the police must get a warrant to search an arrestee’s cell phone.

The Smith Court Rightly Applied the Riley Balancing Test
In our advocacy, we have consistently argued that Riley’s analytical framework should inform whether the border search exception applies to cell phones and other electronic devices. This is precisely what the Smith court did: “In holding that warrants are required for cell phone searches at the border, the Court believes it is applying in straightforward fashion the logic and analysis of Riley to the border context.”
In Riley, the Supreme Court applied a balancing test, weighing the government’s interests in warrantless and suspicionless access to cell phone data following an arrest, against an arrestee’s privacy interests in the depth and breadth of personal information stored on modern cell phones.
In analyzing the government’s interests, the Riley Court considered the traditional reasons for authorizing warrantless searches of an arrestee’s person: to protect officers from an arrestee who might use a weapon against them, and to prevent the destruction of evidence.
The Riley Court found only a weak nexus between digital data and these traditional reasons for warrantless searches of arrestees. The Court reasoned that “data on the phone can endanger no one,” and the probability is small that associates of the arrestee will remotely delete digital data.
The Riley Court also detailed how modern cell phones can in fact reveal the “sum of an individual’s private life,” and thus individuals have significant and unprecedented privacy interests in their cell phone data.
On balance, the Riley Court held that the traditional search-incident-to-arrest exception to the warrant requirement does not apply to cell phones.
The Smith court properly applied the Riley balancing test in the border context, noting that travelers’ privacy interests in their digital data are also significant:
Just as in Riley, the cell phone likely contains huge quantities of highly sensitive information—including copies of that person’s past communications, records of their physical movements, potential transaction histories, Internet browsing histories, medical details, and more … No traveler would reasonably expect to forfeit privacy interests in all this simply by carrying a cell phone when returning home from an international trip.
In analyzing the government’s interests in gaining warrantless access to cell phone data at the border, the Smith court considered the traditional justifications for the border search exception: in the words of the judge, “preventing unwanted persons or items from entering the country.” In particular, the government has a strong interest in conducting warrantless searches of luggage and other containers to identify goods subject to customs duty (import tax) and items considered contraband or that would otherwise be harmful if brought into the country such as drugs or weapons.
Considering these traditional rationales for the border search exception in the context of modern cell phones, the Smith court concluded that the government’s “interest in searching the digital data ‘contained’ on a particular physical device located at the border is relatively weak.”
The court focused on the internet and cloud storage, stating: “Stopping the cell phone from entering the country would not … mean stopping the data contained on it from entering the country” because any data that can be found on a cell phone—even digital contraband—“very likely does exist not just on the phone device itself, but also on faraway computer servers potentially located within the country.” This is different from physical items, which, if searched without a warrant, may be efficiently interdicted and thereby actually prevented from entering the country.
The Smith court further explained:
To be sure, that data may contain information relevant to the Government’s determination as to whether a person should be allowed entry, but the Government has little heightened interest in blocking entry of the information itself, which is the historical basis for the border search exception.
Thus, the Smith court concluded:
Because the government’s interests in a warrantless search of a cell phone’s data are thus much weaker than its interests in warrantless searches of physical items, and a traveler’s privacy interests in her cell phone’s data are much stronger than her privacy interests in her baggage, the Court concludes that the same balancing test that yields the border search exception cannot support its extension to warrantless cell phone searches at the border.

EFF’s Work Is Making a Difference
The Smith court’s application of Riley’s balancing test is nearly identical to the arguments we’ve made time and time again.
The Smith court also cited Cano, in which the Ninth Circuit engaged extensively with EFF’s amicus brief even though it didn’t go as far as requiring a warrant in all cases. The Smith court acknowledged that no federal appellate court “has gone quite this far (although the Ninth Circuit has come close).”
We’re pleased that our arguments are moving through the federal judiciary and finally being embraced. We hope that the Second Circuit affirms this decision and that other courts—including the Supreme Court—are courageous enough to follow suit and protect personal privacy.
EU’s Proposed Cyber Resilience Act Raises Concerns for Open Source and Cybersecurity
The EU is in the middle of the amendments process for its proposed Cyber Resilience Act (CRA), a law intended to bolster Europe’s defenses against cyber-attacks and improve product security. This law targets a broad swath of products brought to market intended for European consumers, including Internet of Things (IoT) devices, desktop computers, and smartphones. It places requirements on device manufacturers and distributors with regards to vulnerability disclosure, and introduces new liability regulations for cybersecurity incidents.
EFF welcomes the intention of the legislation, but the proposed law will penalize open source developers who receive any amount of monetary compensation for their work. It will also require manufacturers to report actively exploited, unpatched vulnerabilities to regulators. This requirement risks exposing the knowledge and exploitation of those vulnerabilities to a larger audience, furthering the harms this legislation is intended to mitigate.

Threats to Open Source Software
Open source software serves as the backbone of the modern internet. Contributions from developers working on open source projects such as Linux and Apache, to name just two, are freely used and incorporated into products distributed to billions of people worldwide. This is only possible through revenue streams which reward developers for their work, including individual donations, foundation grants, and sponsorships. This ecosystem of development and funding is an integral part of the functioning and securing of today’s software-driven world.
The CRA imposes liability for commercial activity that brings vulnerable products to market. Though recital 10 of the proposed law exempts not-for-profit open source contributors from what is considered “commercial activity,” and thus from liability, the exemption defines commercial activity much too broadly. Any open source developer soliciting donations or charging for support services for their software is not exempted, and is thus liable for damages if their code inadvertently contains a vulnerability that is then incorporated into a product, even if they did not produce that product themselves. Open source contributors typically write software and make it available as an act of good will and gratitude to others who have done the same; under the CRA, such developers would face liability risk if they receive even a tip for their work. Smaller organizations that produce open source code for the public benefit may see their entire operation legally challenged simply because they lack the funds to cover their risks. This will push developers and organizations to abandon these projects altogether, damaging open source as a whole.
We join others in raising this concern and call on the CRA to further exempt individuals providing open source software from liability, including when they are compensated for their work.

Vulnerability Disclosure Requirements Pose a Cybersecurity Threat
Article 11 of the proposed text requires manufacturers to disclose actively exploited vulnerabilities to the European Union Agency for Cybersecurity (ENISA) within 24 hours. ENISA would then be required to forward fine details of these vulnerabilities on to the Member States’ Computer Security Incident Response Teams (CSIRTs) and market surveillance authorities. Intended as a measure for accountability, this requirement incentivizes product manufacturers with a lackluster record on product security to actively pursue and mitigate vulnerabilities. However well intended, it will likely have unintended consequences for manufacturers who do prioritize product security. Companies often treat vulnerabilities with serious security implications for consumers as well-guarded secrets until fixes are properly applied and deployed to end devices, and a proper fix can take weeks or even months to develop and deploy.
The short time-frame will disincentivize companies from applying “deep” fixes that correct the root cause of a vulnerability in favor of “shallow” fixes that only address its symptoms. Deep fixes take time, and a 24-hour clock on the initial response will encourage sloppy, patchwork remediation.
The second effect will be that a larger set of agencies and people will be made aware of the vulnerability quickly, which will greatly expand the risk of exposure of these vulnerabilities to those who may want to use them maliciously. Government knowledge of a range of software vulnerabilities from manufacturers could create juicy targets for hacking and espionage. Manufacturers concerned about the security outcomes for their customers will have little control or insight into the operational security of ENISA or the member-state agencies with knowledge of these vulnerabilities. This reporting requirement increases the risk that the vulnerability will be added to the offensive arsenal of government intelligence agencies. Manufacturers should not have to worry that reporting flaws in their software will result in furthering cyber-warfare capabilities at their expense.
An additional concern is that the reporting requirement does not include public disclosure. For consumers to make informed decisions about their purchases, details about security vulnerabilities should be provided along with security updates.
Given the substantial risks that this requirement poses, we call on European lawmakers to abstain from mandating inflexible deadlines for fixing security issues, and to require that detailed vulnerability reports be issued to ENISA only after vulnerabilities have been fixed. In addition, detailed public disclosure of security fixes should be required. For companies that have shown a lackluster record on product security, more stringent requirements may be imposed, but this should be the exception, not the rule.

Further Protections for Security Researchers
Good-faith security research—which can include disclosure of vulnerabilities to manufacturers—strengthens product security and instills confidence in consumers. We join our partner organization EDRi in calling for a safe harbor for researchers involved in coordinated disclosure practices. This safe harbor should not imply that other forms of disclosure are harmful or malicious. An EU-wide blanket safe harbor will give assurance to security researchers that they will not come under legal threat by doing the right thing.

Start With a Good First Step
The Cyber Resilience Act is intended to strengthen cybersecurity for all Europeans. However, without adopting changes to the proposed text, we fear aspects of the act will have the opposite effect. We call on the European Commission to take the concerns of the open source community and security professionals seriously and amend the proposal to address these serious concerns.
To Save the News, We Must Ban Surveillance Advertising
This is part three of an ongoing, five-part series. Part one, the introduction, is here. Part two, about breaking up ad-tech companies, is here.
The ad-tech industry is incredibly profitable, raking in hundreds of billions of dollars every year by spying on us. These companies have tendrils that reach into our apps, our televisions, and our cars, as well as most websites. Their hunger for our data is insatiable. Worse still, a whole secondary industry of “brokers” has cropped up that offers to sell our purchase histories, our location data, even our medical and court records. This data is continuously ingested by the ad-tech industry to ensure that the nonconsensual dossiers of private, sensitive, potentially compromising data that these companies compile on us are as up-to-date as possible.
Commercial surveillance is a three-step process:
- Track: A person uses technology, and that technology quietly collects information about who they are and what they do. Most critically, trackers gather online behavioral information, like app interactions and browsing history. This information is shared with ad tech companies and data brokers.
- Profile: Ad tech companies and data brokers that receive this information try to link it to what they already know about the user in question. These observers draw inferences about their target: what they like, what kind of person they are (including demographics like age and gender), and what they might be interested in buying, attending, or voting for.
- Target: Ad tech companies use the profiles they’ve assembled, or obtained from data brokers, to target advertisements. Through websites, apps, TVs, and social media, advertisers use data to show tailored messages to particular people, types of people, or groups.
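The track-profile-target pipeline above can be sketched as a few lines of Python. This is a purely conceptual illustration; every function and field name here is hypothetical, and no real ad-tech system or API is being modeled.

```python
# Conceptual sketch of the three-step commercial surveillance pipeline.
# All names are hypothetical; no real ad-tech API is being modeled.

def track(event_log):
    """Step 1: collect behavioral signals from a device or browser."""
    return {"events": event_log}

def profile(tracked, known_dossier):
    """Step 2: link new signals to an existing dossier and draw inferences."""
    dossier = dict(known_dossier)
    dossier["events"] = dossier.get("events", []) + tracked["events"]
    # Inference is deliberately crude: any observed signal becomes a trait.
    dossier["interests"] = sorted({e["topic"] for e in dossier["events"]})
    return dossier

def target(dossier, ads_by_interest):
    """Step 3: pick ads matching the inferred traits."""
    return [ads_by_interest[i] for i in dossier["interests"] if i in ads_by_interest]

events = [{"topic": "running"}, {"topic": "travel"}]
dossier = profile(track(events), known_dossier={})
ads = target(dossier, {"running": "shoe ad", "travel": "airline ad"})
```

The point of the sketch is that the dossier, not the content being viewed, is the unit the system operates on: every event a person generates feeds the profile that follows them everywhere.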
This data-gathering and processing is the source of innumerable societal harms: it fuels employment discrimination, housing discrimination, and is a pipeline for predatory scams. The data also finds its way into others’ hands, including the military, law enforcement, and hostile foreign powers. Insiders at large companies exploit data for their own benefit. It’s this data that lets scam artists find vulnerable targets and lets stalkers track their victims.
Our entire digital environment has been warped to grease the skids for this dragnet surveillance. Our mobile devices assign tracking identifiers to us by default, and these unique identifiers ripple out through physical and digital spaces, tracking us to the most minute degree.
All of this is done in the name of supporting culture and news. The behavioral advertising industry claims that it can deliver more value to everyone through this surveillance: advertisers get to target exactly who they want to reach; publishers get paid top dollar for matching exactly the right user with exactly the right ad; and users win because they are only ever shown highly relevant ads tailored to their interests.
Of course, anyone who’s ever used the internet knows that this is hogwash. Advertisers know that they are being charged billions of dollars for ads that are never delivered. Publishers know that billions of dollars advertisers spend on ads meant to run alongside their content never reach them.
And as to the claim that users “like ads, so long as they are relevant,” the evidence is very strong that this isn’t true and never was. Ad-blocking is the most successful consumer boycott in human history. When Apple gave iPhone users a one-click opt-out to block all surveillance ads, 96 percent of users clicked the button (presumably, the other four percent were confused, or they work for ad-tech companies).
Surveillance advertising serves no one except creepy ad-tech firms; for users, publishers and advertisers, surveillance ads are a bad deal.
Getting rid of surveillance ads doesn’t mean getting rid of ads altogether. Despite the rhetoric that “if you’re not paying for the product, you’re the product,” there’s no reason to believe that the mere act of paying for products will convince the companies that supply that product to treat you with respect.
Take John Deere tractors: farmers pay hundreds of thousands of dollars for large, crucial pieces of farm equipment, only to have their ability to repair them (or even complain about them) weaponized and monetized against them.
You can’t bribe a company into treating you with respect: companies respect you to the extent that they fear losing your business, or being regulated. Rather than buying our online services and hoping that this so impresses tech executives that they treat us with dignity, we should ban surveillance ads.
If surveillance ads are banned, advertisers will have to find new ways to let the public know about their products and services. They’ll have to return to the techniques that advertisers used for centuries before the very brief period in which surveillance advertising came to dominate: they’ll have to return to contextual ads.
A contextual ad is targeted based on the context in which it appears: what article it runs alongside, or in which publication. Rather than following users around to target them with ads, contextual advertisers seek out content that is relevant to their messages and place ads alongside that content.
Historically, this was an inefficient process, hamstrung by the need to identify relevant content before it was printed or aired. But the same real-time bidding systems used to place behavioral ads can be used to place contextual ads, too.
The difference is this: rather than a publisher asking a surveillance company like Google or Meta to auction off a reader on its behalf, the publisher would auction off the content and context of its own materials.
That is, rather than the publisher saying “What am I bid for the attention of this 22 year old, male reader who lives in Portland, Oregon, is in recovery for opioid addiction, and has recently searched for information about gonorrhea symptoms?” the publisher would say, “What am I bid for the attention of a reader whose IP address is located in Portland, Oregon, who is using Safari on a recent iPhone, and who is reading an article about Taylor Swift?”
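The contrast between the two auction styles can be sketched in Python. All field names here are hypothetical (real exchanges define their own schemas, such as OpenRTB); the point is simply what data each bid request has to carry.

```python
# Illustrative sketch of the two auction styles; field names are invented,
# not taken from any real RTB specification.

def behavioral_bid_request(user_profile, page):
    """Behavioral auction: the request is built around the *user*."""
    return {
        "user": user_profile,      # age, location, inferred interests, ...
        "page": page["url"],       # the content is secondary
    }

def contextual_bid_request(page, coarse_signals):
    """Contextual auction: the request is built around the *content*."""
    return {
        "topic": page["topic"],    # e.g. an article about Taylor Swift
        "publication": page["site"],
        "signals": coarse_signals, # coarse, non-identifying: region, device
    }

page = {"url": "https://example.com/swift",
        "topic": "music/taylor-swift",
        "site": "example.com"}
req = contextual_bid_request(page, {"region": "Portland, OR",
                                    "device": "iPhone/Safari"})
# The contextual request carries no per-user dossier at all.
```

The structural difference is the whole argument: the contextual request can be assembled entirely by the publisher from its own content and a few coarse signals, with no surveillance dossier anywhere in the pipeline.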
There are some obvious benefits to this. First things first: it doesn’t require surveillance. That’s good for readers, and for society.
But it’s also good for the publisher. No publisher will ever know as much about readers’ behavior as an ad-tech company does; but no ad-tech company will ever know as much about a publisher’s content as the publisher does. That means it will be much, much harder for ad-tech companies to lay claim to a large slice of the publisher’s revenue, and much, much easier for publishers to switch ad-tech vendors if anyone tries it.
That means that publishers will get a larger slice of the context ads pie than they do when the pie is filled with surveillance ads.
But what about the size of the pie? Will advertisers pay as much to reach readers who are targeted by context as they do when the targeting is behavioral?
Not quite. The best research-driven evidence we have so far indicates that advertisers will generally pay about five percent less for context-based targeting than for behavioral targeting.
But that doesn’t mean that publishers will get paid less: even if advertisers insist on a five percent discount to target based on context, a much greater share of the ad spending will reach the publishers. The largest ad-tech platforms currently take more than half of that spending, a figure they’re only able to attain because their monopoly power over behavioral data gives them a stronger negotiating position than publishers have.
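The arithmetic behind this claim can be made concrete. The roughly five percent discount and the more-than-half ad-tech share of behavioral spending come from the figures above; the 20 percent contextual fee is a pure assumption, used only to illustrate a weaker ad-tech negotiating position.

```python
# Back-of-envelope illustration. The ~5% contextual discount and the >50%
# ad-tech share of behavioral spending come from the text; the 20%
# contextual fee is ASSUMED for illustration only.

ad_spend_behavioral = 1.00
ad_spend_contextual = 0.95       # advertisers pay ~5% less (per the text)

adtech_cut_behavioral = 0.50     # platforms take more than half (per the text)
adtech_cut_contextual = 0.20     # assumed weaker ad-tech bargaining position

publisher_behavioral = ad_spend_behavioral * (1 - adtech_cut_behavioral)  # 0.50
publisher_contextual = ad_spend_contextual * (1 - adtech_cut_contextual)  # 0.76

print(publisher_behavioral, publisher_contextual)
```

Under these illustrative numbers, the publisher’s take rises from 50 to 76 cents on the dollar even though the total pie shrinks by five percent.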
But more importantly: if ad tracking was limited to users who truly consented to it, almost no one would see any ads, because users do not consent to tracking.
This was amply demonstrated in 2021, when Apple altered iOS, the operating system that powers iPhones and iPads, to make it easy to opt out of tracking. 96 percent of Apple users opted out, costing Facebook over $10 billion in lost revenue in the first year.
Unfortunately, Apple continues to track its users in order to target ads at them, even if those users opt out. But if the US were to finally pass a long-overdue federal privacy law with a private right of action and require real consent before tracking, the revenue from surveillance ads would fall to zero, because almost no one is willing to be tracked.
This is borne out by the EU experience. The European Union’s General Data Protection Regulation (GDPR) bans surveillance for the purpose of ad-targeting without consent. The US-based ad-tech giants long refused to comply with this rule, but they are finally being forced to do so.
Not everyone has flouted the GDPR. The Dutch public broadcaster NPO served targeted ads only to users who consented to them, which meant it served virtually no targeted ads. Eventually, NPO switched to context ads and saw a massive increase in ad revenues, partly because the context ads worked about as well as surveillance ads, but mostly because almost no one had seen its surveillance ads, while everyone saw its context ads.
Killing surveillance ads will make surveillance companies worse off. But everyone else (readers, journalists, publishers, and even advertisers) will be much better off.