People make snap judgments about those they see for the first time, mentally categorizing someone as friendly, threatening, trustworthy, etc. Most of us know that those impressions are idiosyncratic and suffused with cultural biases along race, gender, and other lines. So obviously I know what you’re thinking… we need an AI that does that, right? At least that’s what this new PNAS paper seems to think (h/t Nico Osaka for the link). The authors start right in with the significance statement:
“We quickly and irresistibly form impressions of what other people are like based solely on how their faces look. These impressions have real-life consequences ranging from hiring decisions to sentencing decisions. We model and visualize the perceptual bases of facial impressions in the most comprehensive fashion to date, producing photorealistic models of 34 perceived social and physical attributes (e.g., trustworthiness and age). These models leverage and demonstrate the utility of deep learning in face evaluation, allowing for 1) generation of an infinite number of faces that vary along these perceived attribute dimensions, 2) manipulation of any face photograph along these dimensions, and 3) prediction of the impressions any face image may evoke in the general (mostly White, North American) population.”
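For those wondering what the mechanics look like: as far as I can tell, the core trick is to learn, for each attribute, a direction in a face generator’s latent space and then slide a face’s latent code along that direction. Here is a minimal sketch of the idea in Python; the generate_face stub, the 512-dimensional latent size, and the random “trustworthiness” direction are placeholders of mine, not the authors’ actual model or data:

```python
import numpy as np

# Hypothetical sketch of latent-space attribute manipulation.
# In the real system the generator is a deep network (StyleGAN2-style)
# and the attribute direction is fit to human first-impression ratings.

LATENT_DIM = 512
rng = np.random.default_rng(0)

def generate_face(w: np.ndarray) -> np.ndarray:
    """Stand-in for a face generator: latent code -> HxWx3 image array.
    Here it just returns noise with an image-like shape."""
    return rng.random((256, 256, 3))

# A (made-up) unit-length "trustworthiness" direction in latent space.
trust_direction = rng.standard_normal(LATENT_DIM)
trust_direction /= np.linalg.norm(trust_direction)

# Start from some face's latent code...
w = rng.standard_normal(LATENT_DIM)

# ...and push it along the direction: larger alpha means the model predicts
# observers will rate the face as more "trustworthy"; negative, less.
for alpha in (-3.0, 0.0, 3.0):
    face = generate_face(w + alpha * trust_direction)
    print(f"alpha={alpha:+.1f} -> image shape {face.shape}")
```

The point of the sketch is not fidelity to the paper’s pipeline but how little machinery “make this face look more trustworthy” requires once the direction has been learned.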
Let’s maybe think about this for a minute, yes? Because we know that people form these impressions on unsound bases!
First, generative adversarial networks are already able to produce fake faces that are indistinguishable from real ones. Those fake faces can now be manipulated to appear more or less trustworthy, hostile, friendly, etc. That’s going to be useful when, for example, you’re making fake political ads. Already six years ago, one “Melvin Redick of Harrisburg, Pa., a friendly-looking American with a backward baseball cap and a young daughter, posted on Facebook a link to a brand-new website,” saying on June 8, 2016, that “these guys show hidden truth about Hillary Clinton, George Soros and other leaders of the US. Visit #DCLeaks website. It’s really interesting!” Of course, both Melvin Redick and the site he pointed to were complete fabrications by the Russians. Now we can make Melvin look more trustworthy, and Clinton less so.
Second, the ability to manipulate existing face photos is a disaster-in-waiting. Again, we have seen crude efforts at this before – making Obama appear darker than he is, for example. But now news photos could be altered to make Vladimir Putin appear trustworthy, or Mr. Rogers untrustworthy. This goes nowhere good, especially when combined with deepfake technology that already takes people out of their contexts and puts them into new ones (so far, disproportionately, women pasted into porn videos, though the Russians recently tried to produce a deepfake of Zelensky surrendering; fortunately, that one was done sloppily).
Third, and I think this one is possibly the scariest, what about scanning images to see whether someone will be assessed as trustworthy? AI-based hiring is already under-regulated! Now employers will run your photo through the software and make customer-service hiring decisions based on who customers will perceive as trustworthy. What could go wrong?
All of this, of course, assumes that this sort of software actually works. The history of physiognomic AI, which uses all sorts of supposedly objective cues to determine personality and which is basically complete (usually racist) bunk, suggests that the science is probably not as good as the article makes it out to be. So maybe we’re lucky and this algorithm does not actually work as advertised. Of course, the fact that AI software is garbage doesn’t preclude its being used to make people’s lives miserable. Just consider the bizarre case of VibraImage.
But don’t worry. The PNAS authors are aware of ethics, noting that “the framework developed here adds significantly to the ethical concerns that already enshroud image manipulation software:”
“Our model can induce (perceived) changes within the individual’s face itself and may be difficult to detect when applied subtly enough. We argue that such methods (as well as their implementations and supporting data) should be made transparent from the start, such that the community can develop robust detection and defense protocols to accompany the technology, as they have done, for example, in developing highly accurate image forensics techniques to detect synthetic faces generated by SG2. More generally, to the extent that improper use of the image manipulation techniques described here is not covered by existing defamation law, it is appropriate to consider ways to limit use of these technologies through regulatory frameworks proposed in the broader context of face-recognition technologies.”
Yes, the very effective American regulation of privacy does inspire confidence! Also, “There is also potential for our data and models to perpetuate the biases they measure, which are first impressions of the population under study and have no necessary correspondence to the actual identities, attitudes, or competencies of people whom the images resemble or depict.”
Do you think? As Luke Stark put it, facial recognition is the “plutonium of AI”: very dangerous and with very few legitimate uses. This algorithm belongs in the same category, and should similarly be regulated like nuclear waste. For example, as Ari Waldman and Mary Anne Franks have written, one of the problems with deepfakes is that the fake version gets out there on the internet, and it is nearly impossible to make it go away (if you even know about it). Forensic software gets there too late, and those without resources aren’t going to be able to deploy it anyway. Lawsuits are even less useful, since they’re time-consuming and expensive to pursue, and lots of defendants won’t be jurisdictionally available or have pockets deep enough to make the chase worth it. In other words, not everybody is going to be able to defend themselves like Zelensky, who both warned about deepfakes and was able to produce video of himself not surrendering. In the meantime, fake and shocking content generally spreads faster and further than real news. After all, “engagement” is the business model of social media. Further, to the extent that people stay inside filter bubbles (like Fox News), they may never see the forensic corrections, and even if they do, they probably won’t believe that the real version is real.
And as for reinforcing existing biases, Safiya Noble has already written a whole book on how algorithms that guess what people probably think of someone can do just that.