By Gordon Hull
Facial recognition technology is shaping up to be a privacy mess. An early example of why is photo-tagging on Facebook. The privacy problem was noted a while ago by Woody Hartzog and Frederic Stutzman: “once a photo is tagged with an identifier, such as a name or link to a profile, it becomes searchable … making information visible to search significantly erodes the protection of obscurity, and, consequently, threatens a user’s privacy” (47; on obscurity, recall here and here). Some time ago, I noted litigation surrounding Illinois’ Biometric Information Privacy Act (BIPA). BIPA basically establishes notice-and-consent rules for companies that collect and use biometric information. For example, it stipulates that “no private entity may collect, capture, purchase, receive through trade, or otherwise obtain a person's or a customer's biometric identifier or biometric information, unless it first” informs the person in question of what’s happening and what the entity is doing with the data, and then obtains a written release (740 ILCS 14/15(b)). As I suggested there, this regime is subject to the obvious problems with notice-and-consent privacy, but companies like Facebook are resisting providing even that de minimis protection for their customers. In a landmark ruling last year, the Illinois Supreme Court upheld the law’s statutory damages provision.
More generally, and as parallel federal litigation underscores, BIPA presents a significant threat to FB. The issue in question is precisely photo-tagging. As the 9th Circuit described the process:
“In 2010, Facebook launched a feature called Tag Suggestions. If Tag Suggestions is enabled, Facebook may use facial-recognition technology to analyze whether the user’s Facebook friends are in photos uploaded by that user. When a photo is uploaded, the technology scans the photo and detects whether it contains images of faces. If so, the technology extracts the various geometric data points that make a face unique, such as the distance between the eyes, nose, and ears, to create a face signature or map. The technology then compares the face signature to faces in Facebook’s database of user face templates (i.e., face signatures that have already been matched to the user’s profiles). If there is a match between the face signature and the face template, Facebook may suggest tagging the person in the photo” (Patel v. Facebook, 6).
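Abstracting from the opinion’s description, a minimal sketch of that four-step pipeline might look like the following. This is purely illustrative – Facebook’s actual system is proprietary, and every name, type, and matching rule here is my invention:

```python
from dataclasses import dataclass

@dataclass
class DetectedFace:
    signature: tuple  # the "face signature or map" computed from the face region

def detect_faces(photo):
    """Step 1: scan the uploaded photo and detect any faces it contains."""
    return photo.get("faces", [])  # stand-in for a real face detector

def matches(signature, template):
    """Step 3: compare a signature to a stored user face template."""
    return signature == template  # a real system would use a similarity score

def suggest_tags(photo, face_templates):
    """Steps 2-4: extract each face's signature, compare it to stored
    templates, and suggest tagging any user whose template matches."""
    suggestions = []
    for face in detect_faces(photo):
        sig = face.signature  # step 2: extract the face signature
        suggestions += [uid for uid, tpl in face_templates.items()
                        if matches(sig, tpl)]
    return suggestions

# Usage: one stored template, one matching face in the uploaded photo.
photo = {"faces": [DetectedFace(signature=(0.1, 0.2, 0.3))]}
print(suggest_tags(photo, {"friend_123": (0.1, 0.2, 0.3)}))  # ['friend_123']
```

As we’ll see, the legal dispute concerns what happens inside the signature step: whether it expressly measures facial geometry or does something else.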
A group of representative Illinois FB users sued the company and filed for class certification, arguing that “Facebook violated sections 15(a) and 15(b) of BIPA by collecting, using, and storing biometric identifiers (a “scan” of “face geometry,” id. 14/10) from their photos without obtaining a written release and without establishing a compliant retention schedule” (Patel v. FB, 7).
Facebook has attacked the law from virtually every possible angle. First, it has tried to prevent its users from being certified as a class with standing to sue. The 9th Circuit has not been receptive to FB’s argument here: in Patel v. Facebook it upheld the District Court’s class certification last August, though it did grant a stay of its ruling on Oct. 30, pending FB’s appeal to the Supreme Court.
Second, FB argued that the plaintiffs had not suffered any “concrete injury in fact,” citing the Supreme Court’s recent ruling in Spokeo v. Robins (2016), which establishes that, to have standing, plaintiffs must show “(a) concrete and particularized; and (b) actual or imminent, not conjectural or hypothetical” injury. The bulk of Patel v. FB is accordingly devoted to showing that BIPA protects concrete (substantive) and not merely procedural rights (i.e., that privacy is a substantive right), and that “the specific procedural violations alleged in this case actually harm, or present a material risk of harm to, such interests” (Patel v. FB, 18, quoting the standard in Spokeo).
This is all enormously important, but below the surface lies a potentially even more interesting issue: does FB’s software violate the law – i.e., does it collect and process biometric identifiers? As far as I can tell, this question is currently in limbo pending resolution of the other appeals. The district court outlined the issue as follows, before declaring it a “quintessential dispute of fact for the jury to decide”:
“Plaintiffs' case turns in large measure on whether Facebook collects and stores scans of face geometry. While the parties have no serious disagreement about the literal text of Facebook's source code, they offer strongly conflicting interpretations of how the software processes human faces. Plaintiffs say the technology necessarily collects scans of face geometry because it uses human facial regions to process, characterize, and ultimately recognize face images. Facebook disagrees and says the technology has no express dependency on human facial features at all. Rather, according to Facebook, the technology ‘learns for itself what distinguishes different faces and then improves itself based on its successes and failures, using unknown criteria that have yielded successful outputs in the past’” (In re Facebook Biometric Info. Privacy Litig., 2018 U.S. Dist. LEXIS 810448, p. 8).
In short: does AI software “recognize” faces? Without ruling on the merits, the Court seems somewhat skeptical of FB’s defense that it does not:
“Facebook makes a stab at avoiding trial by suggesting that "scan" in "scan of face geometry" necessarily connotes an express measurement of human facial features -- for instance, "a measurement of the distance between a person's eyes, nose, and ears." Dkt. No. 299 at 20 (internal quotation omitted). But the argument is of no moment because it merely begs the question of what, in fact, happens in the operation of the technology. In addition, the word "scan" does not bear the definitional freight Facebook seeks to impose. BIPA does not specifically define it, and the ordinary meaning of "to scan" is to "examine" by "observation or checking," or "systematically . . . in order to obtain data especially for display or storage." Merriam-Webster's Collegiate Dictionary at 1107-08 (11th ed. 2003). "Geometry" is also understood in everyday use to mean simply a "configuration," which in turn denotes a "relative arrangement of parts or elements." Id. at 524, 261. None of these definitions demands actual or express measurements of spatial quantities like distance, depth, or angles. In addition, the Illinois legislature's decision to use the word "scan" rather than "record" does not indicate that express measurements are required, and limiting scans of face geometry to techniques that literally measure distances, depths, and angles cannot be squared with the legislature's clear intent to regulate emergent biometric data collection technology in whatever specific form it takes” (11-12).
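To make the competing characterizations concrete, here is a toy contrast (mine, not drawn from any actual system): a signature built from express geometric measurements of named features, versus one produced by a learned mapping with no hand-coded reference to facial features at all.

```python
import numpy as np

# Two hypothetical ways of producing a "face signature." Neither is
# Facebook's actual code; the legal question is whether both count as
# a "scan of face geometry" under BIPA.

def explicit_geometry_signature(landmarks):
    """The narrow reading FB urges: express measurements of named
    facial features, e.g. distances between labeled landmark points."""
    return np.array([
        np.linalg.norm(landmarks["left_eye"] - landmarks["right_eye"]),
        np.linalg.norm(landmarks["nose"] - landmarks["left_ear"]),
        np.linalg.norm(landmarks["nose"] - landmarks["right_ear"]),
    ])

def learned_signature(face_pixels, weights):
    """FB's description of its own system: a learned mapping from raw
    pixels to a feature vector, with no hand-coded reference to eyes,
    noses, or ears. A single linear layer stands in for a deep network;
    the weights would have been tuned on past successes and failures."""
    return np.tanh(weights @ face_pixels.ravel())
```

On Facebook’s account, its system is like the second function: nothing in the code names eyes, noses, or ears. On the plaintiffs’ account, both functions take a face and return a representation of its configuration, which – on the court’s reading of “scan” and “geometry” – may be all the statute requires.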
A recent and very helpful paper on AI in the law by Mala Chatterjee and Jeanne Fromer seems to me to get at the heart of this. Chatterjee and Fromer point to an emerging need to know “whether machines can have mental states, or - at least - something sufficiently like mental states for the purposes of the law” (1888). Their focus is primarily on copyright, and the act of infringement specifically, but they underscore that mental state requirements are ubiquitous in law, covering everything from torts to crimes. Their innovation is to invoke, via John Searle and David Chalmers, the philosophy-of-mind distinction between functionalist and phenomenal accounts of mental states. As they summarize, “Chalmers demonstrates that individual human mental states can be analyzed either in terms of what he calls their psychological properties—their functional role in producing behavior, or what they do—or their phenomenal properties—their conscious quality, or how they feel” (1906). This produces what we should probably call a latent ambiguity in the law around mental states:
“for each of the law’s mental state requirements, it remains an open question whether the law ultimately seeks to track the conscious or functional properties of the states in question. Because the law has primarily been designed for human actors, for whom the conscious and the functional typically coincide, this is a question we have principally been able to avoid until now” (1907).
This is not the old question of whether “machines think” – it’s the point that, at least from a legal standpoint, you need to figure out what the law is trying to track, and then figure out whether the machine is doing whatever is necessary to track that. In other words, “If the law is concerned only with functional properties, then these properties could very well be possessed by the states of a nonhuman machine” (1907). As I suggested recently, I think latent ambiguities in law – there because the technology at the time the law was written didn’t require distinguishing two interpretations – are rampant in privacy. This is a really, really good example, because ordinary usage of terms like “facial recognition” doesn’t need to distinguish between a functionalist and a phenomenalist account. It doesn’t need to, because when a human recognizes faces, or when a human programs a machine to do so using some sort of top-down algorithm (one based on rules provided by humans: what Luciano Floridi once called “good old-fashioned AI”), they’re doing what the statute describes. But does one have to do it that way?
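A top-down recognizer in Floridi’s “good old-fashioned” sense is easy to sketch (hypothetical again: the feature names and tolerance are human-supplied, invented here for illustration):

```python
# A toy "good old-fashioned AI" recognizer: a human supplies both the
# measurements and the matching rule. The feature names and tolerance
# are hypothetical, chosen for illustration only.

def gofai_match(measurements, template, tolerance=0.05):
    """Match when every hand-picked measurement (eye distance,
    nose-to-ear distance, ...) is within a human-chosen tolerance."""
    return all(
        abs(measurements[feature] - template[feature]) <= tolerance
        for feature in template
    )

# Usage: explicit, named geometric features on both sides.
stored = {"eye_distance": 0.42, "nose_to_left_ear": 0.61}
observed = {"eye_distance": 0.44, "nose_to_left_ear": 0.60}
print(gofai_match(observed, stored))  # True
```

Everyone would agree that this sort of thing measures face geometry; the open question is whether the statute also reaches systems that dispense with the hand-written rules.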
Consider BIPA’s definition of a “biometric identifier,” which it says “means a retina or iris scan, fingerprint, voiceprint, or scan of hand or face geometry. Biometric identifiers do not include writing samples, written signatures, photographs, human biological samples used for valid scientific testing or screening, demographic data, tattoo descriptions, or physical descriptions such as height, weight, hair color, or eye color.”
Facebook’s defense here depends on the functional/phenomenal distinction, and on using it to resolve the law’s ambiguity. FB’s argument, in essence, is that (a) facial recognition (or biometric identification) necessarily designates a phenomenal state, which would have to be replicated in a machine, and (b) that its software is only functional.
The argument has to concede that the software is functional (otherwise it wouldn’t work). But is it merely functional? Of course the algorithm is proprietary and buried under fifty tons of trade secret law, but let’s assume for the sake of argument that it does not do exactly the same thing that an embodied human being does when they identify a face (I’ll get back to this point in a future post, because I don’t think it’s actually that easy to make this assumption do the argumentative work it needs to. For now, assuming arguendo…). FB is then basically making a version of Searle’s Chinese Room argument: there’s a lot of processing going on inside, but there’s no Chinese being spoken.
The defense seems to me to rely on a third, unstated premise: (c) that what the software (for lack of a better way of putting this) does metaphysically determines the correct standard for adjudicating the case – in other words, that (a) is relevant. Another approach would ask what the law is trying to do. Indeed, from a legal standpoint, the metaphysical question of whether machines think is nearing “transcendental nonsense” territory; whether machines think might itself be better treated functionally, in terms of what we want the law to do. Chatterjee and Fromer offer one hypothesis for how to resolve the legal question:
“As … it might be that the law is interested in conscious properties of mental states when it seeks to treat the actor in question as a rightsholder (such as in copyright authorship) or an autonomous and responsible agent (such as in criminal punishment). But in contexts in which the law is seeking simply to protect the rights or interests of others from the actor (such as copyright infringement), functionality might be all that matters” (1915-16).
If we make that distinction here, FB had better hope its standing arguments fare better than they have so far: the functionality of photo-tagging is its selling point, and, as the company complains repeatedly, the statutory damages are designed to be big enough to get its attention.