Luke Stark argues that facial recognition should be treated as the “plutonium of AI” – something so dangerous that its use should be carefully controlled and limited. If you follow the news, you’ll know that we’re currently treating it as the carbon dioxide of AI: a byproduct of profit-making that doesn’t look too awful on its own, until you realize its buildup could very well cause something catastrophic. Activists have worried about this pending catastrophe for a while, but lots of big money supports facial recognition, and its backers have thrown up a smokescreen of distractions – in one case, Facebook denied that its photo-tagging software in fact recognized faces (!) – in order to lull everyone into accepting it.
One of the worst offenders is a secretive company called Clearview, whose business model is to scrape the web for all the pictures it can find and then sell the resulting technology to law enforcement. The company even has an international presence: in one disturbing instance, the Washington Post documents the use of its technology by Ukrainians to identify dead Russian soldiers by way of their Instagram and other social media accounts, and then sometimes to contact their families. More generally, the Post revealed internal documents showing that the company's database is nearing 100 billion images and that "almost everyone in the world will be identifiable." They're going all-in; the Post reports that "the company wants to expand beyond scanning faces for the police, saying in the presentation [obtained by the WP] that it could monitor 'gig economy' workers and is researching a number of new technologies that could identify someone based on how they walk, detect their location from a photo or scan their fingerprints from afar."
Clearview is also one of a cohort of companies that have been sued for violating Illinois’ Biometric Information Privacy Act (BIPA). BIPA, uniquely among American laws, requires opt-in consent before companies can use people’s biometric information (the Facebook case is central to my argument in this paper (preprint here); for some blog-level discussion see here and here). Of course, BIPA is a state-level law, so its protections do not automatically extend to anyone who lives outside of Illinois. That’s why yesterday’s news of a settlement with the ACLU is really good news. The Guardian reports:
Facial recognition startup Clearview AI has agreed to restrict the use of its massive collection of face images to settle allegations that it collected people’s photos without their consent. The company in a legal filing Monday agreed to permanently stop selling access to its face database to private businesses or individuals around the US, putting a limit on what it can do with its ever-growing trove of billions of images pulled from social media and elsewhere on the internet. The settlement, which must be approved by a federal judge in Chicago, will end a lawsuit brought by the American Civil Liberties Union and other groups in 2020 over alleged violations of an Illinois digital privacy law. Clearview is also agreeing to stop making its database available to Illinois state government and local police departments for five years. The New York-based company will continue offering its services to federal agencies, such as US Immigration and Customs Enforcement, and to other law enforcement agencies and government contractors outside Illinois.
Of course, the company denies the allegations in the lawsuit, and insists that it was just in the process of rolling out a “consent-based” product. Ok, sure! This is still a win for privacy and for one of the very few pieces of legislation in the U.S. that has any chance of limiting the use of biometric data.