By Gordon Hull
Last time, I suggested that a recent paper by Mala Chatterjee and Jeanne Fromer is very helpful in disentangling what is at stake in Facebook’s critique of Illinois’ Biometric Information Privacy Act (BIPA). Recall that BIPA requires consent before collecting biometric identifiers, and a group of folks sued FB over phototagging. Among FB’s defenses is the claim that its software doesn’t depend on human facial features; rather it “learns for itself what distinguishes different faces and then improves itself based on its successes and failures, using unknown criteria that have yielded successful outputs in the past.” (In re Facebook Biometric Info. Privacy Litig., 2018 U.S. Dist. LEXIS 810448, p. 8). Chatterjee and Fromer apply the phenomenal/functional distinction from philosophy of mind to the question of how mental state requirements in law apply to AI, with an extended case study of liability for copyright infringement. Basically, there’s an ambiguity buried in the mental state requirements, and we need to decide – probably on a case-by-case basis – whether the law’s objective is better served by a phenomenal or functional account of the mental state in question.
In applying the distinction, I suggested that we assume for the sake of argument that the software does not do the same thing that an embodied human being does when they identify a face. In other words, I was suggesting that we accept arguendo that the software in question does not achieve the same phenomenal state as one of us does when we recognize a face. I also said that I think this assumption, while clearly correct in a literal sense, may not be able to do as much work as it needs to. Here’s why.
It should be fairly clear that the experience of recognizing Pierre in a café is not identical across different people, or probably even for the same person at different times. For that to be true, the molecular structure and electrical activity in their respective brains would have to be identical, which isn’t going to be the case. It’s also not clear that we don’t “learn[] for [ourselves] what distinguishes different faces and then improve[] [ourselves] based on [our] successes and failures, using unknown criteria that have yielded successful outputs in the past,” just like FB. After all, if you ask me why I recognize somebody, I will produce some criteria – but if it’s somebody I know, it’s not as though I consciously apply those criteria as a rule. Neither the FB system nor I am using the old-fashioned “AI” of an ELIZA program. It would therefore at least require some argument to say that I recognize the face by means of those criteria, rather than offering them as a post hoc explanation. Indeed, recognition does not appear to be a “conscious” process in the relevant sense at all. So that can’t be the issue.
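To make that contrast a bit more concrete, here is a minimal sketch in Python of the difference between recognition by explicitly stated criteria (the ELIZA-style picture) and recognition by opaque, learned features. Everything in it – the function names, the numbers, the idea of comparing “embeddings” – is an invented illustration, not a description of Facebook’s actual system.

```python
# Hypothetical sketch: two ways a system might "recognize" Pierre.
# Names and values are illustrative only.

from math import dist

# (1) Rule-based recognition: the criteria are explicitly written down
# and applied as rules, so they can be stated and inspected.
KNOWN_RULES = {
    "Pierre": {"eye_distance_cm": 6.2, "nose_length_cm": 5.0},
}

def match_by_rules(measurements, tolerance=0.3):
    """Return a name if the stated criteria are satisfied."""
    for name, rules in KNOWN_RULES.items():
        if all(abs(measurements[k] - v) <= tolerance for k, v in rules.items()):
            return name
    return None

# (2) Learned recognition: a trained model turns each face into a feature
# vector ("embedding"); identity is whichever known vector is closest.
# The "criteria" live implicitly in the learned weights that produced the
# vectors, not in any stated rule.
KNOWN_EMBEDDINGS = {
    "Pierre": [0.12, -0.83, 0.45, 0.07],  # stand-in for a learned vector
}

def match_by_learned_embedding(embedding, threshold=0.6):
    """Return the nearest known identity if it is close enough in feature space."""
    best_name, best_distance = None, float("inf")
    for name, known in KNOWN_EMBEDDINGS.items():
        d = dist(embedding, known)
        if d < best_distance:
            best_name, best_distance = name, d
    return best_name if best_distance <= threshold else None
```

The point of the toy example is only this: in the second case you can ask the system “why did you say Pierre?” and the most honest answer is “because this vector was close to that one,” which is structurally a lot like my post hoc “well, it’s the eyes, I suppose” – a report offered after the fact rather than a rule consciously applied.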