Copyright Bullying vs. Religious Freedom

8 hours 4 minutes ago

The government should not help a religious institution to punish or deter members from inquiring about their faith. Yet, once again, the Watch Tower Bible and Tract Society is trying to use flimsy copyright claims to exploit the special legal tools available to copyright owners in order to unmask anonymous online speakers. And, once again, EFF has stepped in to urge the courts not to give Watch Tower’s attempts the force of law, with the help of local counsel Jonathan Phillips of Phillips & Bathke, P.C.

EFF’s client, J. Doe, is a member of the Jehovah’s Witnesses who became interested in the history of the organization’s public statements, and how they’ve changed over time. They created research tools to analyze those documents and ultimately created a website, JWS Library, allowing others to use those tools and verify their findings through an archive that included documents suppressed by the church. Doe and others discovered prophecies that failed to come true, erasure of a leader’s disgrace, increased calls for obedience and donations, and other insights about the Jehovah’s Witnesses’ practices. Doe also used machine translation on a foreign-language document to help the community understand what the church was saying to different audiences, and to shed light on potential changes in the organization’s attitude toward dissent.

Within the church, dissent or even asking questions has often been punished by labeling members as apostates and ostracizing—or “disfellowshipping”—them. As a result, Doe and others choose to speak anonymously to avoid retaliation that could cost them family, friendship, and professional relationships.

There is no law against questioning the Jehovah’s Witnesses. Instead, Watch Tower argues that Doe’s activities constitute copyright infringement and seeks to use the special process provided in the Digital Millennium Copyright Act (DMCA) to unmask them. It sent DMCA subpoenas to Google and Cloudflare, seeking information that would help it uncover Doe’s identity.

The problem for Watch Tower is that Doe’s research and commentary are clear fair uses allowed under copyright law. The First Amendment does not permit the unmasking of anonymous speakers based on such weak claims. Indeed, the First Amendment protects anonymous speakers precisely because some would be deterred from speaking if they faced retribution for doing so.

EFF stands with those who question the claims of those in power and who share the tools and knowledge needed to do so. We urge the judges in the Southern District of New York to quash these improper subpoenas and not allow copyright to be used to suppress important, legitimate speech.

Kit Walsh

AI Impact Summit: Nothing new on the horizon?

8 hours 54 minutes ago
Derechos Digitales attended the India AI Impact Summit to take part in important activities and, within days of its closing, conducted an urgent assessment. The promises of civil society…
Jamila Venturini y Juan Carlos Lara

Think Twice Before Buying or Using Meta’s Ray-Bans

10 hours 7 minutes ago

Over the last decade or so, the tech industry has tried, and mostly failed, to make “smart glasses”—tech-infused glasses with cameras, AI, maps, displays, and more—a thing. But over the past year, products like Meta’s Ray-Ban Display Glasses and Oakley’s Meta Glasses have gone from a curious niche to the mainstream.

Before you strap a dashcam to your face and sprint out into the world filming everything and everyone in your life, there are some civil liberties and privacy concerns to consider.

Meta is the biggest company that makes these sorts of glasses, and its partnerships with Ray-Ban and Oakley are the most popular options, so we’ll mostly be focusing on them here. Others, like models from Snapchat, are similar in form but far less ubiquitous. But Meta won’t hold this space for long: Google has already announced a partnership with Warby Parker for its “AI-powered smart glasses,” and there are rumors of a competing product from Apple.

With that, let’s dive into some of the considerations you should make before purchasing a pair.

If You’re Thinking About Buying Smart Glasses

You’re likely not the only one who can see (and hear) your footage

The photos and videos you record with most smartglasses will likely be stored online at some point in the process. On Meta’s offerings, unless you are livestreaming, media you capture when you press the camera button stays on the glasses until you import it onto your phone; by default, it is imported automatically into the Meta AI mobile app, which is required to set up the glasses.

You can’t use any AI features locally on the glasses, so anytime you use them—like when you say, “Hey Meta, start recording”—the footage is fed to Meta. You can use the glasses entirely without the Meta AI app, but considering you can’t easily download footage from the glasses to your phone without it, most people will likely use the app.

Some videos are fed to Meta for AI training, and we know that at least in some cases those videos go through human review. An investigation by Swedish newspapers found that workers were reviewing and annotating camera footage, including all sorts of sensitive videos: nudity, sex, and trips to the bathroom. Meta told the BBC that this is in accordance with its terms of use, which state:

In some cases, Meta will review your interactions with AIs, including the content of your conversations with or messages to AIs, and this review may be automated or manual (human).

This all means that Meta and its third-party contractors will have access to at least some of what you record, and it’s very hard as a user to know where footage goes, who will have access to it, and what they will do with it. When you save footage to your phone’s camera roll, which is where the Meta AI app stores content, it might also be sent to Apple’s or Google’s servers, depending on your settings. Employees at those companies could then access that media, and it could be shared with law enforcement.

Recorded audio from conversations with Meta AI is also saved by default, and if you don’t like that, tough luck: you have to go in and manually delete the recordings every time you say something.

Filming all the time is even more privacy invasive than you think

A common argument in favor of the cameras in smartglasses is that phones and handheld cameras can do all of this too, and it’s never been a problem.

But smartglasses are designed to resemble regular glasses, to the point where most reviews note that friends didn’t notice the cameras embedded in them. They’re designed to be invisible to those being recorded, apart from a small indicator light when they’re recording video (which cheap hacks can disable). By contrast, it is usually obvious that a person is recording when they pull a phone out of their pocket and point it at someone else.


Moreover, constant recording of everything in public spaces can create all sorts of potential privacy problems, some more obvious than others. This is another way that cameras on glasses differ from cameras on phones: it is far easier to continuously record one’s surroundings with the former. If you record continuously, you might happen to catch someone entering a passcode or password on their phone or laptop at a coffee shop, or broadcast someone’s bank details while you’re standing in line at an ATM. That doesn’t even begin to get into smartglasses being intentionally used for less socially responsible ends. And some people may forget to turn off their smartglasses when they enter a private space like a bathroom.

And if you find yourself caught on someone’s camera, you have little recourse. If you do notice a stranger recording you, it’s up to you to intervene and ask not to be included in the footage, which can easily turn awkward or confrontational.

Our expectations of privacy shift when we’re in public, but bystanders in many cases will still have privacy interests. Public spaces are a place where you will be seen, but that shouldn’t mean it’s suddenly okay to catalog and identify everyone.

Consider the company’s track record and public statements

Meta, Google, Apple—perhaps one benefit of all the major tech companies entering this market is that we already have a good idea of how much they tend to respect the privacy of their users or the openness of their platforms. Spoiler: it’s often not much.

Meta has a long history of privacy invasive technologies and practices. We’ve heard rumblings that Meta hopes to add face recognition to its smartglasses, preferably, “during a dynamic political environment where many civil society groups that we would expect to attack us would have their resources focused on other concerns.” Yikes. This is a monumentally bad idea that should be abandoned by Meta and any of its competitors considering a similar feature. But regardless of whether they launch this feature, it’s a pretty clear indication of where Meta wants these sorts of devices to go. 

If You Have Smartglasses Already

Opt out of sharing with Meta where you can

You can disable a couple of the features where unnecessary data is sent to Meta. In the Meta AI app, under the device settings, there’s a privacy page where you can disable sharing additional data, and more importantly, turn off “Cloud media,” where your photos and videos are sent to Meta’s cloud for processing and temporary storage. 

Decide your use-case and stick to it

These glasses can be useful for filming a variety of activities. We’ve seen fascinating scenes of tattoo artists doing their work (with the client’s permission), and it doesn’t take a stretch of the imagination to see how people might use them to film extreme sports. Even on an everyday level, you might find them useful for capturing holidays, birthdays, and all sorts of other private occasions.

But if you buy these glasses for a specific, mostly private purpose, it is probably best to stick to that, instead of wearing them everywhere and recording everything you do.

Follow the rules of businesses and social expectations

You often have a right to record in public spaces, but that doesn’t mean other people will like it. Businesses, including restaurants and stores, may want nothing to do with continuous filming and may either post a sign asking you not to use smartglasses, or ask you to stop. This may reflect the preferences not just of the business owner, but the people around you. And don’t use glasses to record when you enter other people’s private spaces like bathrooms or changing rooms.

It’s also a good idea to check in with friends and family before tapping that record button at a social gathering. Some people may not be as comfortable with these glasses as they are with other recording equipment.

Consider blurring strangers if you’re going to upload video

Blurring video footage isn’t an easy task, but if you’re considering uploading footage from something like a protest, it may be worth the effort to do so (apps like Meta’s Edits simplify this process, as do some other video sites, like YouTube). Some people don’t want the government to see their faces at protests, and might be afraid to attend if other people are uploading their faces.


It would be better if Meta leveraged its AI features to offer this kind of blurring automatically, especially for livestreaming. It’s not that outlandish a request, since the company already appears to blur faces automatically in footage it captures for annotation, though not always reliably. After all, Google began redacting faces in Street View years ago, following privacy concerns from groups like EFF.

Resist face recognition

Adding facial recognition technology to smartglasses would obliterate the privacy of everyone. We cannot let companies push face recognition into these glasses, and as a user, you should make your voice clear that this is not something you want.

Smartglasses don’t have to be used to decimate the privacy of anyone you encounter during the day. There are legitimate uses out there, but it’s up to those who use them to respect the social norms of the spaces they enter and the people they encounter.

Thorin Klosowski

The Government Must Not Force Companies to Participate in AI-powered Surveillance

10 hours 31 minutes ago

The rapidly escalating conflict between Anthropic and the Pentagon, which started when the company refused to let the government use its technology to spy on Americans, has now gone to court. The Department of Defense retaliated by designating the company a “supply chain risk” (SCR). Now, Anthropic is asking courts to block the designation, arguing that the First Amendment does not permit the government to coerce a private actor to rewrite its code to serve government ends.

We agree.

As EFF, the Foundation for Individual Rights and Expression, and multiple other public interest organizations explained in a brief filed in support of Anthropic’s motion, the development and operation of large language models involve multiple expressive choices protected by the First Amendment. Requiring a company to rewrite its code to remove guardrails means compelling different expression, a clear constitutional violation. Further, the public record shows that the SCR designation is intended to punish the company both for pushing back and for its CEO’s public statements explaining that AI may supercharge surveillance practices that current law has proven ill-equipped to address.

As we also explain, the company’s concerns about how the government will use its technology are well-founded. The U.S. government has a long history of illegally surveilling its citizens without adequate judicial oversight based on questionable interpretations of its Constitutional and statutory obligations. The Department of Defense acquires vast troves of personal information from commercial entities, including individuals’ physical location, social media, and web browsing data. Other government agencies continue to collect and query vast quantities of Americans’ information, including by acquiring information from third party data brokers.

A growing body of social science research illustrates the chilling effects of these pervasive activities. Fearing retribution for unpopular views, dissenters stay silent. And AI only exacerbates the problem. AI can quickly analyze the government’s massive datasets or combine that information with data scraped off the internet, purchased through the commercial data broker market, or from local police surveillance devices and use all of that data to construct a comprehensive picture of a person’s life and infer sensitive details like their religious beliefs, medical conditions, political opinions, or even sex partners. For example, an agency could use AI to infer an individual’s association with a particular mosque based on data showing that they visited its website, followed its social media accounts, and were located near the mosque during religious services. AI can also deanonymize online speech by using public information to unmask anonymous users.

It is easy to conceive how an agency, a government employee with improper intent, or a malicious hacker could exploit these capabilities to monitor public discourse, preemptively squelch dissent, or persecute people from marginalized communities. Against this background and absent meaningful changes to the governing national security laws and judicial oversight structure, it is entirely reasonable for Anthropic—or any other company—to insist on its own guardrails.

Without action from Congress, the task of protecting your privacy has fallen in large part to Big Tech—something no one wants, including Big Tech. But if Congress won’t do it, companies like Anthropic must be allowed to step in, without facing retribution.

Corynne McSherry

[B] “The Iran War Is a Prelude to the Final Battle”: U.S. Commanders’ Far-Right Christian Remarks Unsettle Soldiers

12 hours 53 minutes ago
A report related to the earlier post on how American religiosity shapes attitudes toward war [1] was published on March 3 on the website of an organization called the MRFF (Military Religious Freedom Foundation) [2]. According to it, regarding the U.S.-Israeli military attacks on Iran, commanders across many branches of the U.S. military are committed Christian fundamentalists who claim that President Trump is a harbinger of the Second Coming of Christ and that the war with Iran is a prelude to Armageddon (the final battle). (Eiichiro Ochiai)
日刊ベリタ

[Recommended Book] Takayoshi Kise, “Rolling the World Through Bookmaking: How an Anti-Hate Publisher Fights”—an adventure tale of resisting hate rather than drifting with inertia, and of pioneering a new path in publishing. By Naoki Kato (nonfiction writer)

14 hours 10 minutes ago
The subtitle reads “How an Anti-Hate Publisher Fights.” If that leads you to expect an earnest story of a principled publisher opposing discrimination, you would be wrong. This is a thrilling adventure tale about publishing: the story of its protagonist, Takayoshi Kise, journeying onward with two companions, finding distinctive kindred spirits among authors and bookstores along the way. As surely as apples fall from trees, the world really does divide into “conscientious publishers that print only a few hundred copies of good books” and “publishers that sell bad books (hate books, for example) by the truckload.” But under the “paradigm shift” banner of going “from ‘koro’ (rollers) to wheels,” the…
JCJ