By Gordon Hull
Last time, I introduced a number of philosophy of law examples in the context of ML systems and suggested that they might be helpful in thinking differently, and more productively, about holding ML systems accountable. Here I want to make the application specific.
So: how do these examples translate to ML and AI? I think one lesson is that we need to specify what exactly we are holding the algorithm accountable for. For example, if we suspect an algorithm of unfairness or bias, it is necessary to specify precisely what the nature of that bias or unfairness is – for example, that it is more likely to assign high-risk status to Black defendants (for pretrial detention purposes) than to white ones. Even specifying fairness in this sense can be hard, because there are conflicting accounts of fairness at play. But assuming that one can settle that question, we don’t need to specify tokens, or individual acts of unfairness (or demand that each of them rise to the level where it would individually create liability), in order to demand accountability of the algorithm or the system that deploys it – we know that the system will have treated defendants unfairly, even if we don’t know which ones. This is basically a disparate impact standard; recall that one of the original and most cited pieces on how data can be unfair was framed precisely in terms of disparate impact.
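To make the group-level framing concrete, here is a minimal Python sketch of what a disparate impact check might look like. The function names and the toy data are hypothetical illustrations, not drawn from any real system or dataset:

```python
# A minimal sketch of the group-level (disparate impact) framing.
# All names and numbers here are hypothetical illustrations.

def high_risk_rate(flags: list[bool]) -> float:
    """Fraction of a group assigned high-risk status."""
    return sum(flags) / len(flags)

def disparate_impact_ratio(protected: list[bool], reference: list[bool]) -> float:
    """Ratio of high-risk rates between a protected group and a reference group.

    A ratio far from 1.0 signals a group-level disparity, even though
    no single classification can be identified as the unfair one.
    """
    return high_risk_rate(protected) / high_risk_rate(reference)

# Hypothetical audit data: per-defendant high-risk flags, by group.
black_defendants = [True, True, True, False, True]    # 80% flagged high-risk
white_defendants = [True, False, False, False, True]  # 40% flagged high-risk

print(disparate_impact_ratio(black_defendants, white_defendants))  # 2.0
```

The point of the sketch is that the disparity is a property of the aggregate rates, not of any token classification: the ratio can be computed, and contested, without ever pointing to a particular defendant who was misclassified.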
Further, given the difficulties of individual actions in such cases (litigation costs, as well as getting access to the algorithms, which defendants will claim as trade secrets), it seems wrong to channel accountability through tort liability and demand that individuals prove the algorithm discriminated against them. How could they? The situation is like the blue bus: if a group of people is 80% likely to reoffend or skip bail, we know that 20% of that group will not, and there is no “error” for which the system can be held accountable in any individual case. Policymakers need to conduct regular audits or other supervisory activity designed to ferret out this sort of problem, and demand accountability at the systemic level.
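The blue bus arithmetic can be made concrete with a short simulation. The group size, base rate, and variable names below are hypothetical, assuming an 80% group-level risk estimate applied uniformly to everyone flagged:

```python
# A minimal sketch of the 'blue bus' point: with an 80% group-level risk
# estimate, roughly 20% of detained defendants will not in fact reoffend,
# yet no individual detention is identifiably the 'error'. Numbers hypothetical.
import random

random.seed(0)
GROUP_SIZE = 1000
TRUE_REOFFENSE_RATE = 0.8  # assumed base rate for the flagged group

# Everyone in the group receives the same high-risk prediction...
predictions = [True] * GROUP_SIZE
# ...but only about 80% actually reoffend.
outcomes = [random.random() < TRUE_REOFFENSE_RATE for _ in range(GROUP_SIZE)]

false_positives = sum(pred and not out for pred, out in zip(predictions, outcomes))
print(f"Detained but non-reoffending: {false_positives} "
      f"(~{false_positives / GROUP_SIZE:.0%} of the group)")
# The aggregate harm is visible (roughly 200 people), but ex ante no
# individual could prove the prediction was wrong in their case, which is
# why accountability has to operate at the systemic (audit) level.
```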