“Privacy Act. [Federal] Agencies shall, to the extent consistent with applicable law, ensure that their privacy policies exclude persons who are not United States citizens or lawful permanent residents from the protections of the Privacy Act regarding personally identifiable information.”
The Act does not require that protection extend to those who are neither citizens nor lawful permanent residents; Crouch provides the context, noting that “some agencies, however, have been providing aspects of privacy-act protections to non-citizens and permanent residents. The order appears to force agencies to stop that approach and instead expand governmental data collection and dissemination of information related to non-Americans.” In other words, expand the surveillance state as much as possible. I don’t see how this expansion can avoid collecting data on citizens, since citizens routinely interact with non-citizens who are in the U.S. lawfully or otherwise (for example, as students, H-1B workers, or under other programs), and so a threat to the privacy of non-citizens is indirectly a threat to the privacy of everyone in the United States.
In a second court ruling on the NSA’s metadata collection program, Judge Pauley rejected virtually all of the arguments that the ACLU and other plaintiffs raised against it. This opinion thus stands opposed to Judge Leon’s ruling of a few weeks before (my analysis of that is here). Here I want to look at Judge Pauley’s opinion in the context of my original question about data and information as concepts for thinking about privacy in the era of big data.
In a previous post, I suggested that the concept of privacy is going to prove inadequate as a protection against big data. This is the case for structural reasons: the concept of privacy is designed to protect information (understood either as information that is inherently intimate, or as control over the dissemination of information), whereas big data operates at what one might call a sub-information level. It siphons up enormous amounts of data, which becomes meaningful information only after it is analyzed in the context of vast amounts of other data. As a result, big data knows everything about us, even though we have neither consented nor not-consented to the release of the information that condemns us.
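To make the “sub-information” point concrete, here is a toy sketch in Python (the numbers, the directory, and the analysis are all hypothetical illustrations, not any actual program): each record in isolation says almost nothing, but cross-referenced against other data it becomes information that nobody knowingly disclosed.

```python
from collections import Counter

# Hypothetical call metadata: (caller, callee) pairs.
# Any single record, taken alone, carries almost no semantic content.
calls = [
    ("+12025550143", "+18002738255"),
    ("+12025550143", "+18002738255"),
    ("+12025550143", "+12025550190"),
    ("+12025550143", "+18002738255"),
]

# Hypothetical reference data obtained elsewhere: a reverse directory.
directory = {
    "+18002738255": "suicide prevention hotline",
    "+12025550190": "oncology clinic",
}

# Only at the analysis step does data become information.
for callee, n in Counter(c for _, c in calls).items():
    print(f"{n} call(s) to {callee} ({directory.get(callee, 'unknown')})")

# The output reveals, e.g., repeated calls to a suicide hotline:
# 'information' the caller never knowingly released to anyone.
```

The point is structural: consent operates at the level of each individual record, while meaning emerges only at the level of the aggregate.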
Today I want to leave that aside for the moment, and develop some background by way of a Foucauldian reading of Judge Leon’s recent decision issuing a preliminary injunction against the NSA’s collection of vast amounts of telephone metadata on American citizens. In subsequent posts, I will offer a reading of Judge Pauley’s decision upholding the NSA program and an earlier Supreme Court decision that gets at the issue before returning to the question of privacy. Although the analysis here is based on court cases and government programs, the intention is ultimately to make a more general point.
A federal judge today ruled that some of the NSA’s broad, warrantless collection of data from American citizens – particularly of so-called ‘metadata,’ the routing information for phone calls (what phone numbers have been in contact with each other, and so on), which can itself be very damaging – did not violate the Constitution. This ruling contradicts an earlier federal court ruling that the data collection was unconstitutional, and the issue seems likely headed to the Supreme Court.
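For concreteness, here is a schematic sketch of what such a metadata record might contain (the field names and values are illustrative, not any carrier’s or the NSA’s actual schema); note that none of the call’s content appears.

```python
# A schematic call-detail record: routing data only, no call content.
# Field names are illustrative, not an actual carrier or NSA schema.
call_record = {
    "caller": "+12025550143",        # originating number
    "callee": "+12025550117",        # receiving number
    "start": "2013-12-16T23:14:02",  # when the call was placed
    "duration_sec": 312,             # how long it lasted
    "cell_site": "DC-0419",          # rough location of the handset
}
# What is absent: no audio, no transcript, no names. 'Just' metadata,
# yet the fields above already say who talked to whom, when, and where.
```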
If we set aside the details of this case for the moment, it seems to me that an important set of issues around big data is emerging: namely, that the concept of ‘privacy’ is no good at all at slowing it down. I don’t, however, think that the problem is either the frequently announced ‘end of privacy’ or the so-called ‘privacy paradox’ (that people say they value privacy but then act as if they don’t). Rather, I think the problem is more basic.
Today’s opinion correctly reports a fact about privacy law: if I voluntarily disclose information to a third party – any third party – I lose any Fourth Amendment claim to privacy over that information, no matter how many times it changes hands afterwards. That’s a problem, one well captured by Helen Nissenbaum’s work on privacy as ‘contextual integrity’ (or see the original paper here), which argues that moving information out of one context and into another can very well change the appropriate norms for sharing it. But the pairing of ‘voluntary’ and ‘information’ also suggests that I have some cognizance of the semantic content of what I am sharing. I may not know why information is valuable to someone else, but I at least know what that information is.
Big data challenges that. We have no idea of the meaning of this material we are providing as we go about our daily lives, or even that it is meaningful: we are providing data, not information. The NSA case is exemplary:
Suppose that you want to defend someone or some institution from criticism that it has engaged in unacceptable behavior of type t. Here’s a common rhetorical strategy, understood by all professional pundits. First, you define some spectrum relevant to t. Then you find a way to identify demons at the right- and left-hand ends of that spectrum that will allow you to place your hero in the rational middle. It helps if one of the demons can be associated – even if unfairly – with actual people, preferably people who are already demonized by your likely readers. The demon balancing the first one needn’t be real people at all; you can use some vague phrase that merely suggests demonizable extremism. Such non-referential vaguery is useful because it allows you to suggest to readers that real critics fit this extreme, without having to actually defend claims about what they really say. Next, you assume the middle, with high fanfare and moral certainty. Finally, you rhetorically assimilate your spectrum to three discrete points: the point occupied by you and your hero, the crazies on one end, and all the critics on the other.
Voilà: hero defended, without ever having to address any of the substantive criticisms. And no one who can be written off as a crazy, spectrum-ending demon needs to be engaged with seriously.
Brian Leiter comments in typically acerbic style on an excerpt in the Guardian from Daniel Dennett’s latest book, Intuition Pumps and Other Tools for Thinking, titled “Daniel Dennett’s Seven Tools for Thinking”: “A curious list; not clear Dennett has always honored all of them!”
What Leiter doesn’t notice, though, is that Dennett violates one of his principles in explaining another! Dennett’s last tool is “beware of deepities.” He explains a deepity as “a proposition that seems both important and true – and profound – but that achieves this effect by being ambiguous. On one reading, it is manifestly false, but it would be earth-shaking if it were true; on the other reading, it is true but trivial. The unwary listener picks up the glimmer of truth from the second reading, and the devastating importance from the first reading, and thinks, Wow! That’s a deepity.”
Dennett then offers two examples. The first is the claim that “love is just a word.” The second, he says, is not “quite so easily analyzed”: “Richard Dawkins recently alerted me to a fine deepity by Rowan Williams, the then archbishop of Canterbury, who described his faith as ‘a silent waiting on the truth, pure sitting and breathing in the presence of the question mark’.” Dennett concludes: “I leave the analysis of this as an exercise for you.”
The university’s Board of Trustees proposed and unanimously approved the fingerprinting policy earlier this fall as a way to better protect minors on campus from potential criminals, reportedly following the Jerry Sandusky sex abuse case that rocked Pennsylvania State University in 2011. “It’s prudent for us to do our due diligence and make sure we don’t hire people like that,” Florida Gulf Coast President Wilson Bradshaw told local media last month.