In a second court ruling on the NSA’s metadata collection program, Judge Pauley rejected virtually all of the arguments raised by the ACLU and other plaintiffs against the program. This opinion thus stands opposed to Judge Leon’s ruling of a few weeks before (my analysis of that is here). Here I want to look at Judge Pauley’s opinion, in the context of my original question about data and information as concepts in thinking about privacy in the era of big data.
Like Judge Leon, Judge Pauley spends considerable time on the question of standing: does the statutory regime underlying the data collection program permit citizens to challenge it? Also like Judge Leon, he says that it does not. In other words – and this is something I want to discuss in a future post – the only sort of challenge a citizen can make to the data collection program is a constitutional one. That said, the way Judge Pauley characterizes the standing question is perhaps worth a remark here: he straightforwardly claims that there is no way the analytics part of the program can proceed without first having all the metadata at hand. As he says, “there is no way for the government to know which particle of telephony metadata will lead to counterterrorism information” (35). This is the data/information distinction at work: the data by itself (or in a vacuum) is meaningless – and may even be meaningless forever – but you cannot even know whether it will rise to the level of information until after you run the analytics (hence my claim that privacy arrives too late). In this, I think, big data is charting new territory, insofar as older kinds of surveillance did not extensively collect material that was not obviously meaningful in some way or another (I actually think that older case law suggests that some material was considered per se informational. That will be the topic of my next post).
In any case, the generalization of this principle isn’t encouraging: it basically says that all data needs to be available for collection, because we can never know in advance which data will turn out to be meaningful. The same would hold for material on the information side of the divide, since the information that someone regularly visits the home of a known drug dealer might yield entirely different information if treated as a data point in the NSA’s analytics. Once the analytics are what turn data into information, I think it follows straightforwardly that it will be virtually impossible to establish a (non-arbitrary) limiting principle on data collection.
For the Constitutional (4th Amendment) question, Judge Pauley relies on the “bedrock holding” of Smith v. Maryland that “an individual has no legitimate expectation of privacy in information provided to third parties” (39). Notice, of course, that we’ve slipped into calling the material “information” rather than “data.” As a result, the data/information distinction gets deployed in two ways. If it’s something I voluntarily provide that’s meaningful to me (a phone number, say), then I lose privacy protection. At the same time, the government can say that the metadata is not meaningful in isolation, and so deserves no privacy protection. The former logic is juridical and the latter biopolitical. I realize that this is a lose-lose from a citizen’s point of view, but I think it’s significant that both logics are operative at the same time.
When Judge Pauley turns to whether to grant a preliminary injunction against the NSA, he has little difficulty denying it. The security interests are overwhelming, he argues, and “the effectiveness of bulk telephony metadata collection cannot be seriously disputed” (48). Of course, this effectiveness is precisely what is disputed by the ACLU. But if we assume that the program is effective, the result follows immediately, since privacy interests always lose when weighed against security; Dan Solove has influentially argued that this is partly because issues like security are social, while privacy is interpreted as an individual right. To have a chance, privacy would have to name a social value worth protecting. To do so, we are going to have to quit looking for a “visceral injury”: “at the end of the day, privacy is not a horror movie, and demanding more palpable harms will be difficult in many cases. Yet there is still a harm worth addressing, even if it is not sensationalistic.” Solove proposes that the erosion of social trust between businesses and consumers (the context is data-sharing with the NSA) is one such harm. From the point of view I am developing here, I mainly want to notice the mismatch between the juridical right and the biopolitical imperative. To avoid the sort of result to which Judge Pauley was led, we need some biopolitical, population-level reasons why these sorts of NSA programs are problematic.