By Gordon Hull
In a series of articles (and a NYT op-ed; my $.02 on that is here), Woody Hartzog and several co-authors have been developing the concept of “obscurity” as a partial replacement for “privacy.” The gist of the argument, as explained by Hartzog and Evan Selinger in a recent anthology piece (“Obscurity and Privacy” (= OP, pagination to the SSRN version)), is that “obscurity is the idea that information is safe - at least to some degree - when it is hard to obtain or understand” (OP 2). This is because “we should not underestimate how much of a deterrent effort can be” (OP 2), and information that is hard to understand imposes similar costs in terms of effort. They argue that obscurity functions better as a concept than privacy, in part because it avoids the binarism associated with the public/private dichotomy:
“Because activities that promote obscurity can limit who monitors our disclosures without being subject to explicit promises of confidentiality, the tendency to classify information in binary terms as either ‘public’ or ‘private’ is inadequate. It lacks the nuance needed to describe a range of empirically observable communicative practices that exist along a continuum” (OP 4)
The public/private dichotomy has been the object of sustained criticism, in part because it does not track how people live their lives. For example, U.S. privacy law tends to regard information that an individual has voluntarily disclosed once as no longer private, as if the context of disclosure doesn’t matter at all. The obscurity argument is designed to start with this basic thought: we share information all the time, and do so with the expectation that others will manage it appropriately. This is Helen Nissenbaum’s point about the “contextual integrity” of information; it is also the starting point of Ari Waldman’s recent reformulation of privacy as trust. The general stability of these informational and contextual flows is behind Lior Strahilevitz’s “social networks” account of privacy and its violation, as well as Dan Solove’s account of how the sudden viral spread of information online occasions the need to rethink reputation.
In earlier work (“Obscurity by Design” (= OBD)), Hartzog and Frederic Stutzman emphasize the way that obscurity protection can be integrated into the design of sociotechnical systems (the legal arguments are detailed in “The Case for Online Obscurity” (= “Case for”)). It is here that some of the real advantages of the obscurity framing become apparent. If the concept is privacy, driven by a public/private dichotomy, there is an almost inexorable movement toward our unsustainable status quo: information is public if individuals voluntarily disclose it, and so the best way to protect your privacy is to get you to consent to information disclosure after we tell you what we will do with it; these two components ensure that the disclosure is voluntary.
I won’t dwell on the many infirmities of this procedure here (I did so here). What Hartzog and Stutzman illustrate is the far greater power of obscurity to protect people’s natural privacy intuitions. Focusing on social media, they note that most of us treat communications as obscure by default, a condition that we import from offline practices. For example, the reason we talk openly to our friends at restaurants is that we know that eavesdroppers are likely to lack enough relevant information to fully understand the conversation, or at least that it would be hard enough for them to get that information that they probably wouldn’t bother. More technically, “stripping context from information reduces its clarity and increases the obscurity of information by reducing the number of people who are likely to understand the meaning of the disclosure” (OBD 401).
This is why, as they note, data aggregation can be so troubling: it allows the reconstruction of context. Similar worries apply to metadata: you may not understand my phone conversation, but if you know that I am calling a suspected terrorist number, you’re going to be inclined to interpret it in certain ways (Margaret Hu has a chilling discussion of some of the mischief that governments get up to with this sort of thing, and the due process failures involved). The heuristic also explains what’s so troubling about phototagging:
“Social network sites or sharing sites like Facebook often promise to respect both the user’s privacy and her or his privacy settings. An important function of some of these websites is the ability to tag photos. Once a photo is tagged with an identifier, such as a name or link to a profile, it becomes searchable. According to our conceptualization, making information visible to search significantly erodes the protection of obscurity, and, consequently, threatens a user’s privacy. Thus, if a website promised to respect a user’s privacy and privacy settings, a destruction of online obscurity would constitute a breach of that promise” (“Case for,” 47)
More generally, they adduce four factors that can guide practical assessment of the obscurity of information; these are meant to help adumbrate the continuum of obscurity. The factors are:
“Information is obscure online if it exists in a context missing one or more key factors that are essential to discovery or comprehension. We have identified four of these factors: (1) search visibility, (2) unprotected access, (3) identification, and (4) clarity. This definition draws upon the previously detailed theoretical and empirical research and requires some explication” (“Case for,” 32).
They stress that these factors should be treated both contextually and on a case-by-case basis.
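To make the factors a little more concrete, here is a minimal sketch in Python of how they might be operationalized as a rough checklist. The `Disclosure` class and the counting rule are my own illustrative assumptions, not anything the authors propose; their point is precisely that the assessment is contextual rather than mechanical.

```python
from dataclasses import dataclass

@dataclass
class Disclosure:
    """A disclosed piece of information, described by the four factors from
    "The Case for Online Obscurity": search visibility, unprotected access,
    identification, and clarity."""
    search_visible: bool       # discoverable via search engines or site search?
    access_unprotected: bool   # reachable without passwords, friending, or paywalls?
    identified: bool           # linked to a name, profile, or other identifier?
    clear: bool                # intelligible without additional context?

def missing_factors(d: Disclosure) -> int:
    """Count how many of the four factors are missing.

    On the authors' definition, information is obscure if its context is
    missing one or more factors essential to discovery or comprehension;
    the numeric count is purely my own illustrative shorthand, since they
    insist on contextual, case-by-case judgment rather than an index.
    """
    return sum(not f for f in (d.search_visible, d.access_unprotected,
                               d.identified, d.clear))

# Example: an untagged photo on a friends-only profile is missing three of
# the four factors, so it sits well toward the obscure end of the continuum.
photo = Disclosure(search_visible=False, access_unprotected=False,
                   identified=False, clear=True)
print(missing_factors(photo))  # -> 3
```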
Sociotechnical systems can embed normative preferences, and Hartzog and Stutzman’s point is that we can embed preferences for a default of obscurity. That can work through technical means like blocks on search engines and behavioral cues such as the placement of privacy-protecting technologies or setting obscurity-protective defaults. It can also work through laws and regulatory regimes (the interaction between various forms of regulation is of course an important part of understanding the affordances of sociotechnical systems generally, as emerges quite clearly in both the STS literature and the literature around Lessig’s Code).
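By way of illustration, here is a hedged sketch of what obscurity-protective defaults might look like in a hypothetical platform’s post settings, including a standard way of signaling to search engines not to index a page. The `PostSettings` names and values are my own assumptions rather than a description of any actual system or of Hartzog and Stutzman’s design requirements.

```python
from dataclasses import dataclass

@dataclass
class PostSettings:
    """Hypothetical per-post settings for a platform built around obscurity
    by design: every default leans toward obscurity, and making a post more
    discoverable requires an affirmative choice by the user."""
    audience: str = "friends"            # not "public": limits unprotected access
    search_indexable: bool = False       # excluded from search engines by default
    allow_tag_suggestions: bool = False  # no automatic identification via tagging
    show_real_name: bool = False         # display a handle rather than a legal name

def robots_headers(settings: PostSettings) -> dict:
    """Translate the search setting into an HTTP hint for crawlers.

    'X-Robots-Tag: noindex' is a real, widely honored header; the
    surrounding glue is illustrative only.
    """
    if settings.search_indexable:
        return {}
    return {"X-Robots-Tag": "noindex, noarchive"}

# Obscurity is the starting point; visibility is opt-in.
print(robots_headers(PostSettings()))  # {'X-Robots-Tag': 'noindex, noarchive'}
```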
It seems to me that the focus on obscurity also helps to work through some normative issues around privacy. Start with a standard anti-privacy complaint: “I have nothing to hide!” As Dan Solove long ago demonstrated, this complaint isn’t really true – everybody has something to hide – it’s rather a misguided outburst expressing the idea that security is more important than privacy. Solove emphasizes that this debate is rigged against the privacy advocate (Priscilla Regan corroborates that this happens in legislation), since security is legible as a social value while privacy tends to be cast as a merely individual interest; hence his insistence on treating privacy as a social value as well. The obscurity framing also shows that a public/private dichotomy leads to precisely the “nothing to hide” problem, because it presents all information as either public OR private. This in turn allows data collectors and FBI agents to group social security numbers with various skeletons in the closet having to do with drug dealing, and then to use the horrors of drug dealing to delegitimate the entire concept of the private.
What the conversation lacks, in short, is nuance. But nuance is precisely the virtue of obscurity. Again, Hartzog and Stutzman: “Obscurity is more flexible than some conceptualizations of privacy and also more feasible to implement” (OBD, 388). Or, Hartzog and Selinger: privacy “lacks the nuance needed to describe a range of empirically observable communicative practices that exist along a continuum.” Hence, “by acknowledging the nuances of the obscurity continuum, it becomes possible to appreciate that many contemporary privacy debates are probably better understood if re-classified as concern over losing obscurity” (OP 4).
What that means here is that there are a variety of reasons that information – both “public” and “private” – might be obscure. Some of them are technical, and those technical limitations are contingent and malleable. For example, criminal convictions are a lot easier to find on the Internet than they were when you had to visit an obscure county courthouse (this is one of Nissenbaum’s original examples in developing her contextual integrity theory; Hartzog goes into the public records cases in great detail). When that happens, it becomes necessary to think about the moral reasons for the obscurity, and whether those are the sorts of reasons we’d want to endorse. Back in Code, Lawrence Lessig talked about how technological affordances revealed “latent ambiguities” in law. He applies the concept briefly to privacy (p. 214), though I think the clearer example is fair use in copyright (p. 189). Technological advances enable fairly precise metering of access to copyrighted materials, and very granular control over that access. So what happens to fair use? It turns out that we never had to get clear about why we had fair use. There are at least two options. It could be that fair use basically existed because some uses were too costly and inconsequential for copyright owners to police. Or it could be because there’s a normative good to allowing fair use; for example, it might be good for democracy to allow educational copying.
Which brings me back to privacy and a couple of concluding points. First, as I argued in the context of Hartzog and Selinger’s op-ed, the economic discourse tends to get you to a lack of privacy. The cheaper and easier surveillance and data collection get, the more likely it is to happen and the less obscure we’re going to be. This is like Lessig’s example of fair use conceived as an efficient way to deal with the inefficiencies of policing all use. On that view, privacy was an efficient default because surveillance was inefficient. The important point is that we don’t have to conceptualize the world through economics (no matter how many times somebody from Chicago says we do)! The increased technological capacities of surveillance and the loss of obscurity it entails highlight the poverty of limiting theoretical discourse to economics.
I don’t know if obscurity will necessarily fare better than privacy in this regard, but obscurity has a leg up in at least two ways. On the one hand, its value neutrality – that it’s not always already associated in many people’s minds with hiding something shameful – pushes back against the negative normative valence often assigned to privacy. On the other hand, that obscurity isn’t subject to the public/private binary, i.e., that it allows for nuance, makes it a better starting point for a nuanced normative conversation and a plurality of theoretical viewpoints. The concept tends to resist reduction.
Next time, I’ll say more about the second point, which is about latent ambiguity and the politics of obscurity and privacy.