Information and Communications Council, Telecommunications Business Policy Committee: Joint Meeting of the Universal Service Policy Committee (37th) and the Working Group on the Calculation of Subsidies and Contributions under the Universal Service System for Broadband Services (7th)

4 weeks ago
Ministry of Internal Affairs and Communications

Meta Oversight Board’s Latest Policy Opinion a Step in the Right Direction

4 weeks ago

EFF welcomes the latest and long-awaited policy advisory opinion from Meta’s Oversight Board, which calls on the company to end its blanket ban on the use of the Arabic-language term “shaheed” when referring to individuals listed under Meta’s policy on dangerous organizations and individuals, and we call on Meta to fully implement the Board’s recommendations.

Since the Meta Oversight Board was created in 2020 as an appellate body designed to review select contested content moderation decisions made by Meta, we’ve watched with interest as the Board has considered a diverse set of cases and issued expert opinions aimed at reshaping Meta’s policies. While our views on the Board's efficacy in creating long-term policy change have been mixed, we have been happy to see the Board issue policy recommendations that seek to maximize free expression on Meta properties.

The policy advisory opinion, issued Tuesday, addresses posts referring to individuals as 'shaheed,' an Arabic term that closely (though not exactly) translates to 'martyr,' when those same individuals have previously been designated by Meta as 'dangerous' under its dangerous organizations and individuals policy. The Board found that Meta’s approach to moderating content that uses the term to refer to individuals designated under that policy—a policy that covers both government-proscribed organizations and others selected by the company—substantially and disproportionately restricts free expression.

The Oversight Board first issued a call for comment in early 2023, and in April of last year, EFF partnered with the European Center for Not-for-Profit Law (ECNL) to submit comment for the Board’s consideration. In our joint comment, we wrote:

The automated removal of words such as ‘shaheed’ fail to meet the criteria for restricting users’ right to freedom of expression. They not only lack necessity and proportionality and operate on shaky legal grounds (if at all), but they also fail to ensure access to remedy and violate Arabic-speaking users’ right to non-discrimination.

In addition to finding that Meta’s current approach to moderating such content restricts free expression, the Board noted that any restrictions on freedom of expression that seek to prevent violence must be necessary and proportionate, “given that undue removal of content may be ineffective and even counterproductive.”

We couldn’t agree more. We have long been concerned about the impact of corporate policies and government regulations designed to limit violent extremist content on human rights and evidentiary content, as well as journalism and art. We have worked directly with companies and with multistakeholder initiatives such as the Global Internet Forum to Counter Terrorism, Tech Against Terrorism, and the Christchurch Call to ensure that freedom of expression remains a core part of policymaking.

In its policy recommendation, the Board acknowledges the importance of Meta’s ability to take action to ensure its platforms are not used to incite violence or recruit people to engage in violence, and that the term “shaheed” is sometimes used by extremists “to praise or glorify people who have died while committing violent terrorist acts.” However, the Board also emphasizes that Meta’s response to such threats must be guided by respect for all human rights, including freedom of expression. Notably, the Board’s opinion echoes our previous demands for policy changes, as well as those of the Stop Silencing Palestine campaign initiated by nineteen digital and human rights organizations, including EFF.

We call on Meta to implement the Board’s recommendations and ensure that future policies and practices respect freedom of expression.

Jillian C. York

Speaking Freely: Robert Ssempala

4 weeks ago

*This interview has been edited for length and clarity. 

Robert Ssempala is a longtime press freedom and social justice advocate. He serves as Executive Director at Human Rights Network for Journalists-Uganda, a network of journalists in Uganda working to enhance the promotion, protection, and respect of human rights by defending journalists and building their capacity to effectively exercise their constitutional rights and fundamental freedoms for collective campaigning through the media. Under his leadership, his organization has supported hundreds of journalists who have been assaulted, imprisoned, and targeted in the course of their work.

 York: What does free speech or free expression mean to you?

 It means being able to give one’s opinions and ideas freely without fear of reprisals or of facing criminal sanctions, and without being concerned about how another feels about those ideas or opinions. Sometimes even if it’s offensive, it’s one’s opinion. For me, it’s entirely about how one wants to express themselves; it’s all about having the liberty to speak freely.

 York: What are the qualities that make you passionate about free expression?

 For me, it is the light for everyone when they’re able to give their ideas and opinions. It is having a sense of liberty to have an idea. I am very passionate about listening to ideas, about everyone getting to speak what they feel is right. The qualities that make me passionate about it are that, first, I’m from a media background. So, during that time I learned that we are going to receive the people’s ideas and opinions, disseminate them to the wider public, and there will be feedback from the public about what has come out from one side to the other. And that quality is so dear to my heart. And second, it is a sense of freedom that is expressed at all levels, in any part of the country or the world, being the people’s eyes and ears, especially at their critical times of need.

 York: I want to ask you more about Uganda. Can you give us a short overview of what the situation for speech is like in the country right now?

 The climate in Uganda is partly free and partly not free, depending on the nature of the issues at hand. Those that touch civil and political rights are very highly restricted, and this has attracted so many reprisals for those who seek to express themselves that way. I work for the Human Rights Network for Journalists-Uganda (HRNJ-Uganda), which is a non-governmental media rights organization, so we monitor and document annually the incidents, trends, and patterns touching freedom of expression and journalists’ rights. Most of the cases that we have received, documented, and worked on stem from civil and political rights. We receive fewer of those that touch economic, social, and cultural rights. So depending on where you’re standing, those media houses and journalists that are critically independent and venture into investigative practices are highly targeted. They have been attacked physically, their gadgets have been confiscated and sometimes even damaged deliberately. Some have lost their jobs under duress because a majority of media ownership in this country is held by the political class or leans toward the ruling political party. As such, they want to be seen to be supportive of the regime, so they kind of tighten the noose on all freedom of expression spaces within media houses and prevail over their journalists. This by any measure has led to heightened self-censorship.

 But also, those journalists that seem to take critical lines are targeted. Some are even blacklisted. We can say from the looks of things that times around political campaigns and elections are the tightest for freedom of expression in this country, and most cases have been reported around such times. We normally have elections every five years. So every three years after an election, electioneering starts. And that’s when we see a lot of restrictions coming from the government through its regulation bodies like the Uganda Communications Commission, which is the communications regulator in my country. Also from the Media Council of Uganda, which was put in place by an act of Parliament to oversee the practices of media. And from the police or security apparatus in this country. So it’s a very fragile environment within which to practice. The journalists operate under immense fear and there are very high levels of censorship. The law has increasingly been used to criminalize free speech. That’s how I’d describe the current environment.

 York: I understand that the Computer Misuse Act as well as cybercrime legislation have been used to target journalists. Have you or any of your clients experienced censorship through abuse of computer crime laws?

 We have a very Draconian law called the Computer Misuse Amendment Act. It was amended just last year to make it even worse. It has been now the walking stick of the proponents of the regime that don’t want to be subjected to public scrutiny, that don’t want to be held accountable politically in their offices. So abuses of public trust and power of their offices are hidden under the Computer Misuse Amendment Act. And most journalists, most editors, most managers have been, from time to time, interrogated at the Criminal Investigations Directorate of the police over what they have written about the powerful personalities especially in the political class – sometimes even from the business class – but mainly it’s from the political class. So it is used to insulate the powerful from being held accountable. Sadly, most of these cases are politically motivated. Most of them have not even ended up in courts of law, but have been used to open up charges against the media practitioners who have, from time to time, kept reporting and answering to the police for a long time without being presented to court or that are presented at a time when they realize that the journalists in question are becoming a bit unruly. So these laws are used to contain the journalists.

 Since most of the stories that have been at the highlight of the regime have been factual, they have not had reason to run to court, but the effect of this is very counterproductive to the journalists’ independence, to their ability to concentrate on more stories – because they’re always thinking about these cases pending before them. Also, media houses now become very fearful and learn how to behave so as not to be caught up in many cases of that nature. So the Computer Misuse Act, criminal defamation, and now the most recent one, the Anti-Homosexuality Act (AHA) – which was passed by Parliament with very drastic clauses – are clawback legislation for press freedom in Uganda. The AHA in itself fundamentally affected the practice of journalism. The legislation falls short of drawing a clear distinction between what amounts to promotion or education [with regards to sharing material related to homosexuality]. Yet one of the crucial roles of the media is to educate the population about many things, but here, it’s not clear when the media is promoting and when it is educating. So it wants to slap a complete blackout on discussing LGBTQI+ issues in the country. So, this law is very ambiguous and therefore susceptible to abuse at the expense of freedom of speech.

 And it also introduces very drastic sanctions. For instance, if one writes about homosexuality their media operating license is revoked for ten years. And I’m sure no media house can stand up again after ten years of closure and can still breathe life. Also, the AHA generalizes the practice of an individual journalist. If, for instance, one of your journalists writes something that the law looks at as against it, the entire media house license is revoked for ten years, but also you’re imprisoned for five years – you as the writer. In addition, you receive a hefty fine of the equivalent of 1 billion Uganda shillings, that’s about 250,000 euros. Which is really too much for any media house operating in Uganda.

 So that alone has created a lot of fear to discuss these issues, even when the law was passed in such a rushed manner with total disregard for the input of key stakeholders like the media, among others. As a media rights organization, we had looked at the draft bill and we were planning to make a presentation before the Parliamentary Committee. But within a week they closed all public hearings, which limited the space for engagement. Within a few days the law had been written, presented again, and then assented to by the President. No wonder it’s being challenged in the Constitutional Court. This is the second time actually that such a law has been challenged. Of course, there are many other laws, like the Anti-Terrorism Act, which does not clearly define whether a journalist who speaks to a person engaged in subversive activities is engaging in terrorism. The law presupposes that before interviewing a person or before hosting them in your shows, you must have done a lot of background checks to make sure they have not engaged in such terrorism acts. So if you do not, the law here imposes criminal liability on the talk show host for promoting and abetting terrorism. And if there’s a conviction, the ultimate punishment is being sentenced to death. So these couple of laws are really used to curtail freedom of expression.

 York: Wow, that’s incredible. I understand how this impacts media houses, but what would you say the impact is on ordinary citizens or individual activists, for example?

 The amended Computer Misuse Act is restrictive and inhibitive to freedom of expression in regards to citizen journalism. It introduces such stringent conditions, like, if I’m going to record a video of you, say I’m a journalist, a citizen journalist or an activist who is not working for a media house, I must seek your permission before I record you in case you’re committing a crime. The law presupposes that I have no right to record you and later on disseminate the video without your explicit permission. Notably, the law is silent on the nature of the admissible permission, whether it is an email, SMS, WhatsApp, voice note, written note, etc. Also, the law presupposes that before I send you such a video, I must seek your permission as the intended recipient of the said message. For instance, if I send you an email and you think you don’t need it, you can open a case against me for sending you unsolicited information. Unsolicited information – that’s the word that’s used.

 So the law is so amorphous in this nature that it completely closes out the liberty of a free society where citizens can engage in discussions, dialogues, or give opinions or ideas. For instance, I could be a very successful farmer, and I think the public could benefit from my farming practices, and I record a lot of what I do and I disseminate those videos. Somebody who receives this, wherever they are, can run to court and use this amended Computer Misuse Act to open up charges against me. And the fines are also very hefty compared to the crimes that the law talks about. So it is so evident that the law is killing citizen journalism, dissent, and activism at all levels. The law does not seem to cater to a free society where the individual citizens can express themselves at any one time, can criticize their leaders, and can hold them accountable. In the presence of this law, we do not have a society that can hold anyone accountable or that can keep the powerful in check. So the spirit of the law is bad. The powerful fence themselves off from the ordinary citizens that are out there watching and not able to track their progress of things or raise red flags through the different social media platforms. But we have tried to challenge this law. There is a group of us, 13 individual activists and CSOs that have gone to the Constitutional Court to say, “this law is counterproductive to freedom of expression, democracy, rule of law and a free society.” We believe that the court will agree with us given its key function of promoting human rights, good governance, democracy, and the rule of law.

 York: That was my next question – I was going to ask, how are people fighting against these laws?

 People are very active in terms of pushing back and to that extent we have many petitions that are in court. For instance, the Computer Misuse Amendment is being challenged. We had the Anti-pornographic Act of 2014 which was so amorphous in its nature that it didn’t clearly define what actually amounts to pornography. For instance, if I went around people in a swimming pool in their swimming trunks and took photos and carried those in the newspaper or on TV, that would be promoting pornography. So that was counterproductive to journalism so we went to court. And, fortunately, a court ruled in our favor. So the citizens are really up in arms to fight back because that’s the only way we can have civic engagements that are not restricted through a litany of such laws. There has been civic participation and engagement through mass media, dialogues with key actors, among others. However, many are afraid to speak out due to fear of reprisals, having seen the closure of media houses, the arrest and detention of activists and journalists, and the use of administrative sanctions to curtail free expression.

 York: Are there ways in which international groups and activists can stand in solidarity with those of you who are fighting back against these laws?

 There’s a lot of backlash against organizations, especially local ones, that tend to work a lot with international organizations. The government seems to be far more threatened by the international eye than by local eyes; recently it banned the UN Human Rights Office, which had to wind up business and leave the country. The same happened to the Democratic Governance Facility (DGF), a basket fund of embassies and the EU that was the biggest funding entity for civil society. And actually for the government, too, because they were empowering citizens, you know, empowering the demand side to heighten its demand for services from the supply side. The government said no and they had to wind up their offices and leave. This has severely crippled the work of civil society, media, and, generally, governance.

The UN played an important role before they left and we now have that gap. Yet this comes at a time when our national Uganda Human Rights Commission is at its weakest due to a number of structural challenges characterizing it. The current leadership of the Commission is always up in arms against the political opposition for accusing the government of committing human rights excesses against its members. So we do our best to work with international organizations through sharing our voices. We have an African Hub, like the African IFEX, where the members try to replicate voices from here. In that way we do try a lot, but it’s not very easy for them to come here and do their practices. Just like you will realize, a lot of foreign correspondents, foreign journalists, who work in Uganda are highly restricted. It’s a tug of war to have their licenses renewed, because it’s politically handled. Licensing was taken away from the professional body, the Media Council of Uganda, and given to the Media Centre of Uganda, which is a government mouthpiece. So for the critical foreign correspondents their licenses are rarely renewed. When it comes to election times most of them are blocked from even coming here to cover the elections. The international media development bodies can help to build the capacities of our media development organizations, facilitate research, provide legal aid support, and engage the government on the excesses of the security forces and some emergency responses for victims, among others.

 York: Is there anything that I didn’t ask that you’d like to share with our readers?

 One thing I wanted to add is about trying to have an international focus on Uganda in the build-up to elections. There’s a lot of havoc that happens to the citizens, but most importantly, to the activists and human rights defenders. Whether cultural activists or media activists – a lot happens. And most of these things are not captured well, because it is prior to the peak of campaigns or there is fear by the local media of capturing such situations. So by the time we get international attention, sometimes the damage is really irreparable and a lot has happened, as opposed to if there had been that international focus from the world. To me, that should really be captured, because it would mitigate a lot of what has happened.

 

Jillian C. York

[Focus] A Taiwan contingency "basically will not happen"; if China and Taiwan unify, it would be under one country, two systems; Taiwan and Okinawa stand in strikingly similar positions. Hiroshi Onishi gives an online lecture (by Masahiro Hashizume)

4 weeks 1 day ago
 Lai Ching-te of the Democratic Progressive Party (DPP), who won Taiwan's presidential election in January, will take office as the new president in May. The DPP administration, which keeps its distance from China, will have lasted 12 years, including the eight years of former President Tsai Ing-wen. How will Chinese President Xi Jinping, who calls for unification with Taiwan, respond? In an online lecture on March 2, co-hosted by the JCJ and the Japan AALA (Asia-Africa-Latin America) Solidarity Committee, Hiroshi Onishi (pictured), professor emeritus of Keio University and Kyoto University who has covered Taiwan's elections, spoke about the situation of Taiwan, caught between the US-China confrontation, and about "China-Taiwan unification," among other topics. The biggest..
JCJ

Podcast Episode: About Face (Recognition)

4 weeks 1 day ago

Is your face truly your own, or is it a commodity to be sold, a weapon to be used against you? A company called Clearview AI has scraped the internet to gather (without consent) 30 billion images to support a tool that lets users identify people by picture alone. Though it’s primarily used by law enforcement, should we have to worry that the eavesdropper at the next restaurant table, or the creep who’s bothering you in the bar, or the protestor outside the abortion clinic can surreptitiously snap a pic of you, upload it, and use it to identify you, where you live and work, your social media accounts, and more?

[Embedded audio player] Privacy info. This embed will serve content from simplecast.com

   

(You can also find this episode on the Internet Archive and on YouTube.)

New York Times reporter Kashmir Hill has been writing about the intersection of privacy and technology for well over a decade; her book about Clearview AI’s rise and practices was published last fall. She speaks with EFF’s Cindy Cohn and Jason Kelley about how face recognition technology’s rapid evolution may have outpaced ethics and regulations, and where we might go from here. 

In this episode, you’ll learn about: 

  • The difficulty of anticipating how information that you freely share might be used against you as technology advances. 
  • How the all-consuming pursuit of “technical sweetness” — the alluring sensation of neatly and functionally solving a puzzle — can blind tech developers to the implications of that tech’s use. 
  • The racial biases that were built into many face recognition technologies.  
  • How one state's 2008 law has effectively curbed how face recognition technology is used there, perhaps creating a model for other states or Congress to follow. 

Kashmir Hill is a New York Times tech reporter who writes about the unexpected and sometimes ominous ways technology is changing our lives, particularly when it comes to our privacy. Her book, “Your Face Belongs To Us” (2023), details how Clearview AI gave facial recognition to law enforcement, billionaires, and businesses, threatening to end privacy as we know it. She joined The Times in 2019 after having worked at Gizmodo Media Group, Fusion, Forbes Magazine and Above the Law. Her writing has appeared in The New Yorker and The Washington Post. She has degrees from Duke University and New York University, where she studied journalism. 

Resources: 

What do you think of “How to Fix the Internet?” Share your feedback here

Transcript

KASHMIR HILL
Madison Square Garden, the big events venue in New York City, installed facial recognition technology in 2018, originally to address security threats. You know, people they were worried about who'd been violent in the stadium before, or perhaps the Taylor Swift model of, you know, known stalkers, wanting to identify them if they're trying to come into concerts.

But then in the last year, they realized, well, we've got this system set up. This is a great way to keep out our enemies, people that the owner, James Dolan, doesn't like, namely lawyers who work at firms that have sued him and cost him a lot of money.

And I saw this, I actually went to a Rangers game with a banned lawyer and it's, you know, thousands of people streaming into Madison Square Garden. We walk through the door, put our bags down on the security belt, and by the time we go to pick them up, a security guard has approached us and told her she's not welcome in.

And yeah, once you have these systems of surveillance set up, it goes from security threats to just keeping track of people that annoy you. And so that is the challenge of how do we control how these things get used?

CINDY COHN
That's Kashmir Hill. She's a tech reporter for the New York Times, and she's been writing about the intersection of privacy and technology for well over a decade.

She's even worked with EFF on several projects, including security research into pregnancy tracking apps. But most recently, her work has been around facial recognition and the company Clearview AI.

Last fall, she published a book about Clearview called Your Face Belongs to Us. It's about the rise of facial recognition technology. It’s also about a company that was willing to step way over the line. A line that even the tech giants abided by. And it did so in order to create a facial search engine of millions of innocent people to sell to law enforcement.

I'm Cindy Cohn, the Executive Director of the Electronic Frontier Foundation.

JASON KELLEY
And I'm Jason Kelley, EFF’s Activism Director. This is our podcast series How to Fix the Internet.

CINDY COHN
The idea behind this show is that we're trying to make our digital lives BETTER. At EFF we spend a lot of time envisioning the ways things can go wrong — and jumping into action to help when things DO go wrong online. But with this show, we're trying to give ourselves a vision of what it means to get it right.

JASON KELLEY
It's easy to talk about facial recognition as leading towards this sci-fi dystopia, but many of us use it in benign - and even helpful - ways every day. Maybe you just used it to unlock your phone before you hit play on this podcast episode.

Most of our listeners probably know that there's a significant difference between the data that's on your phone and the data that Clearview used, which was pulled from the internet, often from places that people didn't expect. Since Kash has written several hundred pages about what Clearview did, we wanted to start with a quick explanation.

KASHMIR HILL
Clearview AI scraped billions of photos from the internet -

JASON KELLEY
Billions with a B. Sorry to interrupt you, just to make sure people hear that.

KASHMIR HILL
Billions of photos from the public internet and social media sites like Facebook, Instagram, Venmo, LinkedIn. At the time I first wrote about them in January 2020, they had 3 billion faces in their database.

They now have 30 billion and they say that they're adding something like 75 million images every day. So a lot of faces, all collected without anyone's consent and, you know, they have paired that with a powerful facial recognition algorithm so that you can take a photo of somebody, you know, upload it to Clearview AI and it will return the other places on the internet where that face appears along with a link to the website where it appears.

So it's a way of finding out who someone is. You know, what their name is, where they live, who their friends are, finding their social media profiles, and even finding photos that they may not know are on the internet, where their name is not linked to the photo but their face is there.

JASON KELLEY

Wow. Obviously that's terrifying, but is there an example you might have of a way that this affects the everyday person? Could you talk about that a little bit?

KASHMIR HILL

Yeah, so with a tool like this, um, you know, if you were out at a restaurant, say, and you're having a juicy conversation, whether about your friends or about your work, and it kind of catches the attention of somebody sitting nearby, you assume you're anonymous. With a tool like this, they could take a photo of you, upload it, find out who you are, where you work, and all of a sudden understand the context of the conversation. You know, a person walking out of an abortion clinic, if there's protesters outside, they can take a photo of that person. Now they know who they are and the health services they may have gotten.

I mean, there's all kinds of different ways. You know, you go to a bar and you're talking to somebody. They're a little creepy. You never want to talk to them again. But they take your picture. They find out your name. They look up your social media profiles. They know who you are.
On the other side, you know, I do hear about people who think about this in a positive context, who are using tools like this to research people they meet on dating sites, finding out if they are who they say they are, you know, looking up their photos.

It's complicated, facial recognition technology. There are positive uses, there are negative uses. And right now we're trying to figure out what place this technology should have in our lives and, and how authorities should be able to use it.

CINDY COHN
Yeah, I think Jason's, like, ‘this is creepy’ is very widely shared, I think, by a lot of people. But, you know, the name of this show is How to Fix the Internet. I would love to hear your thinking about how facial recognition might play a role in our lives if we get it right. Like, what would it look like if we had the kinds of law and policy and technological protections that would turn this tool into something that we would all be pretty psyched about in the main rather than, you know, worried about in the main.

KASHMIR HILL
Yeah, I mean, so some activists feel that facial recognition technology should be banned altogether. Evan Greer at Fight for the Future, you know, compares it to nuclear weapons, saying there are just too many possible downsides, that it's not worth the benefits and it should be banned altogether. I kind of don't think that's likely to happen just because I have talked to so many police officers who really appreciate facial recognition technology, think it's a very powerful tool that, when used correctly, can be such an important part of their tool set. I just don't see them giving it up.

But when I look at what's happening right now, you have these companies like not just Clearview AI, but PimEyes, Facecheck, Eye-D. There's public face search engines that exist now. While Clearview is limited to police use, these are on the internet. Some are even free, some require a subscription. And right now in the U.S., we don't have much of a legal infrastructure, certainly at the national level, about whether they can do that or not. But there's been a very different approach in Europe where they say that citizens shouldn't be included in these databases without their consent. And, you know, after I revealed the existence of Clearview AI, privacy regulators in Europe, in Canada, in Australia, investigated Clearview AI and said that what it had done was illegal, that they needed people's consent to put them in the databases.

So that's one way to handle facial recognition technology: you can't just throw everybody's faces into a database and make them searchable, you need to get permission first. And I think that is one effective way of handling it. Privacy regulators, actually inspired by Clearview AI, issued a warning to other AI companies saying, hey, just because there's all this information that's public on the internet, it doesn't mean that you're entitled to it. There can still be a personal interest in the data, and you may violate our privacy laws by collecting this information.

We haven't really taken that approach in the U.S. as much, with the exception of Illinois, which has this really strong law that's relevant to facial recognition technology. When we have gotten privacy laws at the state level, it says you have the right to get out of the databases. So in California, for example, you can go to Clearview AI and say, hey, I want to see my file. And if you don't like what they have on you, you can ask them to delete you. So that's a very different approach, uh, to try to give people some rights over their face. And California also requires that companies say how many of these requests they get per year. And so I looked, and in the last two years fewer than a thousand Californians have asked to delete themselves from Clearview's database, and you know, California's population is very much bigger than that, I think, you know, 34 million people or so, and so I'm not sure how effective those laws are at protecting people at large.

CINDY COHN
Here’s what I hear from that. Our world where we get it right is one where we have a strong legal infrastructure protecting our privacy. But it’s also one where if the police want something, it doesn’t mean that they get it. It’s a world where control of our faces and faceprints rests with us, and any use needs to have our permission. That’s the Illinois law called BIPA, the Biometric Information Privacy Act, or the foreign regulators you mention.
It also means that a company like Venmo cannot just put our faces onto the public internet, and a company like Clearview cannot just copy them. Neither can happen without our affirmative permission.

I think of technologies like this as needing to have good answers to two questions. Number one, who is the technology serving - who benefits if the technology gets it right? And number two, who is harmed if the technology DOESN’T get it right?

For police use of facial recognition, the answers to both of these questions are bad. Regular people don’t benefit from the police having their faces in what has been called a perpetual line-up. And if the technology doesn’t work, people can pay a very heavy price of being wrongly arrested - as you document in your book, Kash.

But for facial recognition technology allowing me to unlock my phone and manipulate apps like digital credit cards, I benefit by having an easy way to lock and use my phone. And if the technology doesn’t work, I just use my password, so it’s not catastrophic. But how does that compare to your view of a fixed facial recognition world, Kash?

KASHMIR HILL
Well, I'm not a policymaker. I am a journalist. So I kind of see my job as: here's what has happened, here's how we got here, and here's how different people are dealing with it and trying to solve it. One thing that's interesting to me, you brought up Venmo, is that Venmo was one of the very first places that the kind of technical creator of Clearview AI, Hoan Ton-That, talked about getting faces from.

And this was interesting to me as a privacy reporter because I very much remembered this criticism that the privacy community had for Venmo that, you know, when you've signed up for the social payment site, they made everything public by default, all of your transactions, like who you were sending money to.

And there was such a big pushback saying, Hey, you know, people don't realize that you're making this public by default. They don't realize that the whole world can see this. They don't understand, you know, how that could come back to be used against them. And, you know, some of the initial uses were, you know, people who were sending each other Venmo transactions and like putting syringes in it and you know, cannabis leaves and how that got used in criminal trials.

But what was interesting with Clearview is that Venmo actually had this iPhone on their homepage on Venmo.com and they would show real transactions that were happening on the network. And it included people's profile photos and a link to their profile. So Hoan Ton-That sent this scraper to Venmo.com and it would just, he would just hit it every few seconds and pull down the photos and the links to the profile photos and he got, you know, millions of faces this way, and he says he remembered that the privacy people were kind of annoyed about Venmo making everything public, and he said it took them years to change it, though.

JASON KELLEY
We were very upset about this.

CINDY COHN
Yeah, we had them on our, we had a little list called Fix It Already in 2019. It wasn't little, it was actually quite long, for kind of major privacy and other problems in tech companies. And the Venmo one was on there, right, in 2019, I think, was when we launched it. In 2021, they fixed it, but right in between was when all that scraping happened.

KASHMIR HILL
And Venmo is certainly not alone in terms of forcing everyone to make their profile photos public, you know, Facebook did that as well. But it was interesting: when I exposed Clearview AI and said, you know, here are some of the companies that they scraped from, Venmo and also Facebook and LinkedIn and Google, those companies sent Clearview cease and desist letters and said, hey, you know, you violated our terms of service in collecting this data, we want you to delete it. And people often ask, well, then what happened after that? And as far as I know, Clearview did not change their practices. And these companies never did anything else beyond the cease and desist letters.

You know, they didn't sue Clearview. Um, and so it's clear that the companies alone are not going to be protecting our data, and they've pushed us to be more public, and now that is kind of coming full circle in a way that I don't think people, when they were putting their photos on the internet, were expecting.

CINDY COHN
I think we should start from the source, which is, why are they gathering all these faces in the first place, the companies? Why are they urging you to put your face next to your financial transactions? There's no need for your face to be next to a financial transaction, even in social media and other kinds of situations, there's no need for it to be public. People are getting disempowered because there's a lack of privacy protection to begin with, and the companies are taking advantage of that, and then turning around and pretending like they're upset about scraping, which I think is all they did with the Clearview thing.

Like there's problems all the way down here. But from our perspective, the answer isn't to make scraping, which is often overly limited, even more limited. The answer is to try to give people back control over these images.

KASHMIR HILL
And I get it, I mean, I know why Venmo wants photos. I mean, when I use Venmo and I'm paying someone for the first time, I want to see that this is the face of the person I know before I send it to, you know, @happy, you know, nappy on Venmo. So it's part of the trust, but it does seem like you could have a different architecture. So it doesn't necessarily mean that you're showing your face to the entire, you know, world. Maybe you could just be showing it to the people that you're doing transactions with.

JASON KELLEY
What we were pushing Venmo to do was what you mentioned was make it NOT public by default. And what I think is interesting about that campaign is that at the time, we were worried about one thing, you know, that the ability to sort of comb through these financial transactions and get information from people. We weren't worried about, or at least I don't think we talked much about, the public photos being available. And it's interesting to me that there are so many ways that public defaults, and that privacy settings can impact people that we don't even know about yet, right?

KASHMIR HILL
I do think this is one of the biggest challenges for people trying to protect their privacy: it's so hard to anticipate how information that you, you know, kind of freely give at one point might be used against you or weaponized in the future as technology improves.

And so I do think that's really challenging. And I don't think that most people, when they're kind of freely putting photos on the internet, their face on the internet, were anticipating that the internet would be reorganized to be searchable by face.

So that's where I think regulating the use of the information can be very powerful. It's kind of protecting people from the mistakes they've made in the past.

JASON KELLEY
Let’s take a quick moment to say thank you to our sponsor. “How to Fix the Internet” is supported by The Alfred P. Sloan Foundation’s Program in Public Understanding of Science and Technology. Enriching people’s lives through a keener appreciation of our increasingly technological world and portraying the complex humanity of scientists, engineers, and mathematicians. And now back to our conversation with Kashmir Hill.

CINDY COHN
So a supporter asked a question that I'm curious about too. You dove deep into the people who built these systems, not just the Clearview people, but people before them. And what did you find? Are these like Dr. Evil, evil geniuses who intended to, you know, build a dystopia? Or are there people who were, you know, good folks trying to do good things, who either didn't see the consequences of what they're looking at or were surprised at the consequences of what they were building?

KASHMIR HILL
The book is about Clearview AI, but it's also about all the people that kind of worked to realize facial recognition technology over many decades.
The government was trying to get computers to be able to recognize human faces in Silicon Valley before it was even called Silicon Valley. The CIA was, you know, funding early engineers there to try to do it with those huge computers which, you know, in the early 1960s weren't able to do it very well.

But I kind of like went back and asked people that were working on this for so many years when it was very clunky and it did not work very well, you know, were you thinking about what you are working towards? A kind of world in which everybody is easily tracked by face, easily recognizable by face. And it was just interesting. I mean, these people working on it in the ‘70s, ‘80s, ‘90s, they just said it was impossible to imagine that because the computers were so bad at it, and we just never really thought that we'd ever reach this place where we are now, where, basically, like, computers are better at facial recognition than humans.

And so this was really striking to me, that, and I think this happens a lot, where people are working on a technology and they just want to solve that puzzle, you know, complete that technical challenge, and they're not thinking through the implications of what if they're successful. One philosopher of science I talked to, Heather Douglas, called this technical sweetness.

CINDY COHN
I love that term.

KASHMIR HILL
This kind of motivation where it's just like, I need to solve this. The kind of Jurassic Park dilemma where it's like, it'd be really cool if we brought the dinosaurs back.

So that was striking to me, and all of these people that were working on this, I don't think any of them saw something like Clearview AI coming. And when I first heard about Clearview, this startup that had scraped the entire internet and kind of made it searchable by face, I was thinking there must be some, you know, technological mastermind here who was able to do this before the big companies, the Facebooks, the Googles. How did they do it first?

And what I would come to figure out is that, you know, what they did was more of an ethical breakthrough than a technological breakthrough. Companies like Google and Facebook had developed this internally and, shockingly, you know, for these companies that have released many kind of unprecedented products, they decided facial recognition technology like this was too much and they held it back and they decided not to release it.

And so Clearview AI was just willing to do what other companies hadn't been willing to do. Which I thought was interesting and part of why I wrote the book is, you know, who are these people and why did they do this? And honestly, they did have, in the early days, some troubling ideas about how to use facial recognition technology.

So one of the first deployments of Clearview AI, before it was called Clearview AI, was at the Deploraball, this kind of inaugural event around Trump becoming president. And they were using it because it was going to be this gathering of all these people who had supported Trump, the kind of MAGA crowd, of which some of the Clearview AI founders were part. And they were worried about being infiltrated by Antifa, which I think is how they pronounce it, and so they wanted to run a background check on ticket buyers and find out whether any of them were from the far left.

And apparently this smartchecker worked for this and they identified two people who kind of were trying to get in who shouldn't have. And I found out about this because they included it in a PowerPoint presentation that they had developed for the Hungarian government. They were trying to pitch Hungary on their product as a means of border control. And so the idea was that you could use this background check product, this facial recognition technology, to keep out people you didn't want coming into the country.

And they said that they had fine tuned it so it would work on people that worked with the Open Society Foundations and George Soros because they knew that Hungary's leader, Viktor Orban, was not a fan of the Soros crowd.

And so for me, I just thought this just seemed kind of alarming that you would use it to identify essentially political dissidents, democracy activists and advocates, that that was kind of where their minds went to for their product when it was very early, basically still at the prototype stage.

CINDY COHN
I think that it's important to recognize these tools, like many technologies, they're dual use tools, right, and we have to think really hard about how they can be used and create laws and policies around them, because I'm not sure that you can use some kind of technological means to make sure only good guys use this tool to do good things and that bad guys don't.

JASON KELLEY
One of the things that you mentioned about sort of government research into facial recognition reminds me that shortly after you put out your first story on Clearview in January of 2020, I think, we put out a website called Who Has Your Face, which we'd been doing research for for, I don't know, four to six months or something before that, that was specifically trying to let people know which government entities had access to your, let's say, DMV photo or your passport photo for facial recognition purposes, and that's one of the great examples, I think, of how sort of like Venmo, you put information somewhere that's, even in this case, required by law, and you don't ever expect that the FBI would be able to run facial recognition on that picture based on like a surveillance photo, for example.

KASHMIR HILL
So it makes me think of two things, and one is, you know, as part of the book I was looking back at the history of the US thinking about facial recognition technology and setting up guardrails or for the most part NOT setting up guardrails.

And there was this hearing about it more than a decade ago. I think actually Jen Lynch from the EFF testified at it. And it was like 10 years ago when facial recognition technology was first getting kind of good enough to get deployed. And the FBI was starting to build a facial recognition database and police departments were starting to use these kind of early apps.

It troubles me to think about, just knowing the bias problems that facial recognition technology had at that time, that they were kind of actively using it. But lawmakers were concerned and they were asking questions about whose photo is going to go in here? And the government representatives who were there, law enforcement, at the time they said, we're only using criminal mugshots.

You know, we're not interested in the goings about of normal Americans. We just want to be able to recognize the faces of people that we know have already had encounters with the law, and we want to be able to keep track of those people. And it was interesting to me because in the years to come, that would change, you know, they started pulling in state driver's license photos in some places, and it, it ended up not just being criminals that were being tracked or people, not always even criminals, just people who've had encounters with law enforcement where they ended up with a mugshot taken.

But that is the kind of frog boiling of ‘well, we'll just start out with some of these photos, and then, you know, we'll add in some state driver's license photos, and then we'll start using a company called Clearview AI that's scraped the entire internet, um, you know, everybody on the planet, in this facial recognition database.’

So it just speaks to this challenge of controlling it, you know, this kind of surveillance creep where once you start setting up the system, you just want to pull in more and more data and you want to surveil people in more and more ways.

CINDY COHN
And you tell some wonderful stories or actually horrific stories in the book about people who were misidentified. And the answer from the technologists is, well, we just need more data then. Right? We need everybody's driver's licenses, not just mugshots. And then that way we eliminate the bias that comes from just using mugshots. Or you tell a story that I often talk about, which is, I believe the Chinese government was having a hard time with its facial recognition, recognizing black faces, and they made some deals in Africa to just wholesale get a bunch of black faces so they could train up on it.

And, you know, to us, talking about bias in a way that doesn't really talk about comprehensive privacy reform and instead talks only about bias ends up in this technological world in which the solution is putting more people's faces into the system.

And we see this with all sorts of other biometrics where there's bias issues with the training data or the initial data.

KASHMIR HILL
Yeah. So this is something, so bias has been a huge problem with facial recognition technology for a long time. And really a big part of the problem was that they were not getting diverse training databases. And, you know, a lot of the people that were working on facial recognition technology were white people, white men, and they would make sure that it worked well on them and the other people they worked with.

And so we had, you know, technologies that just did not work as well on other people. One of those early facial recognition technology companies I talked to, which was in business, you know, in 2000, 2001, was actually used at the Super Bowl in Tampa in 2000 and in 2001 to secretly scan the faces of football fans looking for pickpockets and ticket scalpers.

That company told me that they had to pull out of a project in South Africa because they found the technology just did not work on people who had darker skin. But the activist community has brought a lot of attention to this issue that there is this problem with bias and the facial recognition vendors have heard it and they have addressed it by creating more diverse training sets.

And so now they are training their algorithms to work on different groups and the technology has improved a lot. It really has been addressed and these algorithms don't have those same kind of issues anymore.

Despite that, you know, the handful of wrongful arrests that I've covered, where, um, people are arrested for the crime of looking like someone else, uh, they've all involved people who are black. One woman so far, a woman who was eight months pregnant, arrested for carjacking and robbery on a Thursday morning while she was getting her two kids ready for school.

And so, you know, even if you fix the bias problem in the algorithms, you're still going to have the issue of, well, who is this technology deployed on? Who is this used to police? And so yeah, I think it'll still be a problem. And then there's just these bigger questions of the civil liberty questions that still need to be addressed. You know, do we want police using facial recognition technology? And if so, what should the limitations be?

CINDY COHN
I think, you know, for us in thinking about this, the central issue is who's in charge of the system and who bears the cost if it's wrong. The consequences of a bad match are much more significant than just, oh gosh, the cops for a second thought I was the wrong person. That's not actually how this plays out in people's lives.

KASHMIR HILL
I don't think most people who haven't been arrested before realize how traumatic the whole experience can be. You know, I talk about Robert Williams in the book who was arrested after he got home from work, in front of all of his neighbors, in front of his wife and his two young daughters, spent the night in jail, you know, was charged, had to hire a lawyer to defend him.

Same thing, Porcha Woodruff, the woman who was pregnant, taken to jail, charged, even though the woman who they were looking for had committed the crime the month before and was not visibly pregnant, I mean it was so clear they had the wrong person. And yet, she had to hire a lawyer, fight the charges, and she wound up in the hospital after being detained all day because she was so stressed out and dehydrated.

And so yeah, when you have people that are relying too heavily on the facial recognition technology and not doing proper investigations, this can have a very harmful effect on, on individual people's lives.

CINDY COHN
Yeah, I mean, one of my hopes is that, you know, those of us who are involved in tech trying to get privacy laws passed and other kinds of things passed can have some knock-on effects on trying to make the criminal justice system better. We shouldn't just be coming in and talking about the technological piece, right?

Because it's all a part of a system that itself needs reform. And so I think it's important that we recognize, um, that as well and not just try to extricate the technological piece from the rest of the system. And that's why I think EFF's come to the position that governmental use of this is so problematic that it's difficult to imagine a world in which it's fixed.

KASHMIR HILL
In terms of talking about laws that have been effective, we alluded to it earlier, but Illinois passed this law in 2008, the Biometric Information Privacy Act, a rare law that moved faster than the technology.

And it says if you want to use somebody's biometrics, like their face print or their fingerprint or their voice print, you need to get their consent as a company, or you'll be fined. And so Madison Square Garden is using facial recognition technology to keep out security threats and lawyers at all of its New York City venues: The Beacon Theater, Radio City Music Hall, Madison Square Garden.

The company also has a theater in Chicago, but they cannot use facial recognition technology to keep out lawyers there because they would need to get their consent to use their biometrics that way. So it is an example of a law that has been quite effective at kind of controlling how the technology is used, maybe keeping it from being used in a way that people find troubling.

CINDY COHN
I think that's a really important point. I think sometimes people in technology despair that law can really ever do anything, and they think technological solutions are the only ones that really work. And, um, I think it's important to point out that, like, that's not always true. And the other point that you make in your book about this that I really appreciate is the Wiretap Act, right?

Like the reason that a lot of the stuff that we're seeing is visual and not voice - you can do voice prints too, just like you can do face prints, but we don't see that.

And the reason we don't see that is because we actually have very strong federal and state laws around wiretapping that prevent the collection of this kind of information except in certain circumstances. Now, I would like to see those circumstances expanded, but it still exists. And I think that, you know, recognizing that we do have legal structures that have provided us some protection, even as we work to make them better, is kind of an important thing for people who kind of swim in tech to recognize.

KASHMIR HILL
Laws work is one of the themes of the book.

CINDY COHN
Thank you so much, Kash, for joining us. It was really fun to talk about this important topic.

KASHMIR HILL
Thanks for having me on. It's great. I really appreciate the work that EFF does and just talking to you all for so many stories. So thank you.

JASON KELLEY
That was a really fun conversation because I loved that book. The story is extremely interesting and I really enjoyed being able to talk to her about the specific issues that we sort of see in this story, which I know we can apply to all kinds of other stories and technical developments and technological advancements that we're thinking about all the time at EFF.

CINDY COHN
Yeah, I think that it's great to have somebody like Kashmir dive deep into something that we spend a lot of time talking about at EFF and, you know, not just facial recognition, but artificial intelligence and machine learning systems more broadly, and really give us the history of it and the story behind it so that we can ground our thinking in more reality. And, you know, it ends up being a rollicking good story.

JASON KELLEY
Yeah, I mean, what surprised me is that I think most of us saw that facial recognition sort of exploded really quickly, but it didn't, actually. A lot of the book, she writes, is about the history of its development and, um, you know, we could have been thinking about how to resolve the potential issues with facial recognition decades ago, but no one sort of expected that this would blow up in the way that it did until it kind of did.

And I really thought it was interesting that her explanation of how it blew up so fast wasn't really a technical development as much as an ethical one.

CINDY COHN
Yeah, I love that perspective, right?

JASON KELLEY
I mean, it’s a terrible thing, but it is helpful to think about, right?

CINDY COHN
Yeah, and it reminds me again of the thing that we talk about a lot, which is Larry Lessig's articulation of the kind of four ways that you can control behavior online. There's markets, there's laws, there's norms, and there's architecture. In this system, you know, we had norms that were driven across.

The thing that Clearview did, she says, wasn't a technical breakthrough, it was an ethical breakthrough. I think it points the way towards, you know, where you might need laws.
There's also an architecture piece though. You know, if Venmo hadn't set up its system so that everybody's faces were easily made public and scrapable, you know, that architectural decision could have had a pretty big impact on how vast this company was able to scale and where they could look.

So we've got an architecture piece, we've got a norms piece, we've got a lack of laws piece. It's very clear that a comprehensive privacy law would have been very helpful here.

And then there's the other piece about markets, right? You know, when you're selling into the law enforcement market, which is where Clearview finally found purchase, that's an extremely powerful market. And it ends up distorting the other ones.

JASON KELLEY
Exactly.

CINDY COHN
Once law enforcement decides they want something, I mean, when I asked Kash, you know, like, what do you think about ideas about banning facial recognition? Uh, she said, well, I think law enforcement really likes it. And so I don't think it'll be banned. And what that tells us is this particular market can trump all the other pieces, and I think we see that in a lot of the work we do at EFF as well.

You know, we need to carve out a better space such that we can actually say no to law enforcement, rather than, well, if law enforcement wants it, then we're done in terms of things, and I think that's really shown by this story.

JASON KELLEY
Thanks for joining us for this episode of how to fix the internet.
If you have feedback or suggestions, we'd love to hear from you. Visit EFF.org/podcast and click on listener feedback. While you're there, you can become a member, donate, maybe pick up some merch, and just see what's happening in digital rights this week and every week.

This podcast is licensed Creative Commons Attribution 4.0 International, and includes music licensed Creative Commons Attribution 3.0 Unported by their creators.

In this episode, you heard Kalte Ohren by Alex featuring Starfrosh and Jerry Spoon.

And Drops of H2O (The Filtered Water Treatment) by J.Lang, featuring Airtone.

You can find links to their music in our episode notes, or on our website at eff.org/podcast.

Our theme music is by Nat Keefe of BeatMower with Reed Mathis.

How to Fix the Internet is supported by the Alfred P. Sloan Foundation's program in public understanding of science and technology.

We’ll see you next time.

I’m Jason Kelley.

CINDY COHN
And I’m Cindy Cohn.

Josh Richman

[B] Revisiting mountainous Chin State: pro-democracy armed groups expand their governance. Report from the front lines in Myanmar (7), by DM

4 weeks 1 day ago
The scenery of Chin State, which I visited again after two months, had completely changed. The green mountains and seas of clouds had become a world of gray and brown. Wildfires and slash-and-burn farming had spread ink-black across the mountains, and around the evergreens, withered undergrowth and autumn-colored leaves wove together. It was not only the natural landscape that had changed. At the militia training camp I revisited, the revolutionary songs of the young recruits and the bustle of mealtimes were gone; it had become a recuperation facility for new soldiers who had lost parts of their legs to landmines. The thirty-odd militia members who had completed their training there had scattered to various places: to fight on the front lines, to serve as itinerant medics, or to receive more full-fledged military training. The anticipation and plans I had harbored for this reunion were blown away. Regardless of the stories drawn by reporters who occasionally come from outside, a desperate struggle for survival was unfolding on the battlefield.
Nikkan Berita

Rights watered down in draft privacy and data protection bill in Namibia

4 weeks 1 day ago

Some of the areas of specific human rights concern raised about the 2022 draft were underdeveloped consent provisions, the almost complete absence of protections for data subjects, and the absence of carve-outs for journalistic, artistic and academic data collection and processing.

lori

[B] Exhibition of paintings by a Myanmar refugee artist (Tokyo), from April 11

4 weeks 1 day ago
An exhibition of paintings by Maung Maung Tinn, a Myanmar refugee artist, will be held for four days starting on the 11th of next month in Shinjuku, Tokyo. On the 13th, the third day, a lecture is planned by Kyaw Kyaw Soe, a Burmese-language newscaster for NHK's international broadcasting service, and others. (Kai Fujigaya)
Nikkan Berita