By Gordon Hull
AI systems are notoriously opaque black boxes. In a now-standard paper, Jenna Burrell dissects this notion of opacity into three versions. The first is when companies deliberately hide information about their algorithms: to fend off competitors, to maintain trade secrets, and to guard against having their algorithms gamed, as happens with Search Engine Optimization techniques. The second is when reading and understanding code is an esoteric skill, so the systems remain opaque to all but a very small number of specially trained individuals. The third form is unique to ML systems, and boils down to the point that they generate internal networks of connections that don’t reason like people. Looking into the mechanics of a system for recognizing handwritten numbers, or even a spam detection filter, wouldn’t produce anything that a human could understand. This form of opacity is also the least tractable, and there is a lot of work trying to establish how ML decisions could be made either more transparent or at least more explicable.
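To get a feel for this third, ML-specific kind of opacity, here is a minimal sketch (my own illustration, not Burrell’s; the dataset and model choices are arbitrary) of what “looking into the mechanics” of a handwritten-digit recognizer actually turns up: matrices of learned weights with no human-readable rationale attached.

```python
# Minimal sketch: inspect the internals of a small digit classifier.
# (Illustrative only; the architecture and dataset are arbitrary choices.)
from sklearn.datasets import load_digits
from sklearn.neural_network import MLPClassifier

X, y = load_digits(return_X_y=True)  # 8x8 images of handwritten digits
model = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0)
model.fit(X, y)

# The "mechanics" are just these arrays of numbers, layer by layer:
for i, weights in enumerate(model.coefs_):
    print(f"layer {i} weight matrix shape: {weights.shape}")
print("a few raw weights:", model.coefs_[0][0, :5])
```

The point is not that the numbers are hidden; they are right there. They just don’t add up to anything a human would recognize as a reason.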
Joshua Kroll argues instead that the quest for potentially impossible transparency distracts from what we might more plausibly expect from our ML systems: accountability. After all, they are designed to do something, and we could begin to assess them according to the internal processes by which they are developed to achieve their design goals, as well as by empirical evidence of what happens when they are deployed. In other words, we don’t need to know exactly how the system can tell a ‘2’ from a ‘3’ as long as we can assess whether it does, and whether the objective it serves is a nefarious one.
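By way of contrast with the previous sketch, here is what that sort of empirical assessment might look like in practice (again my own illustration, not Kroll’s): the model stays a black box, and we audit its outputs against held-out data rather than its internals.

```python
# Sketch of black-box assessment: no peeking at weights, just measuring
# whether the system does what it was designed to do.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import accuracy_score, confusion_matrix

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0)
model.fit(X_train, y_train)
preds = model.predict(X_test)

# Can it in fact tell a '2' from a '3'? The confusion matrix answers that
# without any account of how it does so.
print("overall accuracy:", accuracy_score(y_test, preds))
print(confusion_matrix(y_test, preds))
```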
I’ve thought for a while that the philosophy of law literature offers some help in understanding what accountability means here. For example, a famous thought experiment features a traffic accident caused by a bus. We have two sources of information about this accident. One is an eyewitness who is 70% reliable and says that the bus was blue. The other is the knowledge that 70% of the buses in the area at the time were blue. Epistemically, these ought to be equal – in both cases, you can say with 70% confidence that the blue bus company is liable for the accident. But we don’t treat them as the same: as David Enoch and Talia Fisher elaborate, most people prefer the witness to the statistical number. This is presumably because when the witness is wrong, we can inquire into what went wrong. When the statistic is wrong, it’s not clear that anything like a mistake even happened: the statistics operate at a population level, and when applied to individuals, statistical probability will be wrong 30% of the time, so we have to expect that. It seems to me that our desire for what amounts to an auditable result is the sort of thing that Kroll is pointing to.
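A quick simulation (mine, using only the 70% figures from the thought experiment) makes the epistemic parity concrete: relying on the base rate alone and relying on the witness alone each get the wrong answer about 30% of the time.

```python
# Blue bus thought experiment: compare two decision strategies over many
# simulated accidents. Both error rates come out around 30%.
import random

random.seed(0)
N = 100_000
base_rate_errors = 0
witness_errors = 0

for _ in range(N):
    bus_is_blue = random.random() < 0.70          # 70% of local buses are blue
    if not bus_is_blue:                           # strategy 1: always blame
        base_rate_errors += 1                     # the blue bus company
    # strategy 2: trust a witness who reports the true color 70% of the time
    witness_says_blue = bus_is_blue if random.random() < 0.70 else not bus_is_blue
    if witness_says_blue != bus_is_blue:
        witness_errors += 1

print(f"base-rate strategy wrong: {base_rate_errors / N:.1%}")
print(f"witness strategy wrong:   {witness_errors / N:.1%}")
```

The error rates are the same; what differs is that only the witness’s failures leave something to audit.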