By Gordon Hull
In a recent paper, Karen Yeung introduces the concept of a ‘hypernudge’ to capture the way Big Data intensifies design-based ‘nudges’ as a form of regulation. Yeung’s discussion draws partly on debates about Internet regulation, partly on the literature on design, and partly on legal scholarship around privacy and big data. Her basic argument is that, in the context of big data:
“Despite the complexity and sophistication of their underlying algorithmic processes, these applications ultimately rely on a deceptively simple design-based mechanism of influence – ‘nudge.’ By configuring and thereby personalizing the user’s informational choice context, typically through algorithmic analysis of data streams from multiple sources claiming to offer predictive insights concerning the habits, preferences and interests of targeted individuals (such as those used by online consumer product recommendation engines), these nudges channel user choices in directions preferred by the choice architect through processes that are subtle, unobtrusive, yet extraordinarily powerful” (119)
Ordinary nudging technologies – she cites the humble speed bump – are static. In contrast, the sorts of nudges provided by data analytics are dynamic, continuously and invisibly updating the choices a user sees. They work both by making decisions automatically, based on what users have done or can be predicted to do, and by guiding decision-making through what choices are made available (and how they are presented). Because of both the dynamism and the invisibility, data-driven nudges can be incredibly powerful in comparison to their static cousins.
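To make the static/dynamic contrast concrete, here is a minimal, purely illustrative sketch of the kind of feedback loop Yeung describes: the set of options a user sees is re-ranked after every interaction, using a profile inferred from that user’s own behavior. The catalog, the scoring rule, and the profile update below are all invented for illustration; they are not drawn from Yeung’s paper or from any actual recommendation system.

```python
# Illustrative sketch only: a toy "hypernudge" loop in which the options a user
# sees are re-ranked after every interaction, based on a profile inferred from
# that user's own past choices. All names and numbers are hypothetical.

from collections import defaultdict

CATALOG = {
    # option id -> feature weights (invented)
    "news_a": {"politics": 1.0, "outrage": 0.8},
    "news_b": {"politics": 0.9, "analysis": 0.7},
    "cat_video": {"entertainment": 1.0},
}

def update_profile(profile, chosen_option, learning_rate=0.3):
    """Shift the user's inferred preferences toward whatever they just chose."""
    for feature, weight in CATALOG[chosen_option].items():
        profile[feature] += learning_rate * weight
    return profile

def rank_options(profile):
    """Score each option against the current profile and sort; the user sees
    only the resulting ordering (the 'choice architecture'), never the scores."""
    def score(option):
        return sum(profile[f] * w for f, w in CATALOG[option].items())
    return sorted(CATALOG, key=score, reverse=True)

# A static nudge (a speed bump) is the same for everyone, every time.
# Here the ordering shifts, invisibly, with each choice the user makes.
profile = defaultdict(float)
for choice in ["news_a", "news_a", "cat_video"]:
    print(rank_options(profile))   # what the user is shown before this choice
    update_profile(profile, choice)
```

The point of the sketch is simply that the “architecture” is recomputed per person, per interaction, which is exactly what a speed bump cannot do.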
Yeung’s paper also enables one to advance a couple of points in the context of information ethics.
First, Yeung situates her discussion in terms of the debate around the ethical permissibility of nudges. In particular, the liberalist critique of nudging is that it is manipulative: the motives are bad, the nudge is deceptive (insofar as the process is designed to manipulate users’ emotions rather than engage their rational thought processes), and the technique is not transparent. She then argues that notice and consent policies are unlikely to successfully deal with these problems, especially for hypernudging. Both the diagnosis and the critique of notice and consent seem right. It’s worth noting that the defense of nudges (architecture/control) as a means of regulation depends on their being static. Thus Edward Cheng offers an argument in favor of architectural/structural over ‘fiat’-based regulation (the latter includes traditional legal prohibitions). Structural regulation has higher compliance rates (speed bumps vs. speeding tickets), which helps to solidify norms, and it avoids arbitrary enforcement problems (police enforcement of traffic laws). Even if you’re inclined to accept the argument, it seems to me that there is a plausible case that these advantages depend on the regulatory regime being static. This would be particularly true in the case of fairness: dynamic hypernudging, by design, treats people differently, in as precisely individuated a manner as possible. In addition, one of the presumptive advantages of both fiat and structural law, as Cheng describes them, is the visibility of regulation. Laws are promulgated, and speed bumps are highly visible. In the case of hypernudges, neither holds.
Second, hypernudges underscore that big data needs to be seen as a practice of subjectification. Hypernudges do not just offer subjects choices; they create subjects. Yeung makes good use of Julie Cohen’s work in this regard; as Cohen notes, liberalist notions of subjectivity, which treat subjects as exogenous to their information environments, not only miss what is interesting (and disturbing) about practices like hypernudging; they actively make it harder to see, by denying the basic point that subjectivity is not exogenous. There is a complex interplay of truth and power here. Most basically, hypernudges tend to have a truth function in that they structure not just the environment, but the information environment. The information environment is directly constitutive of the choices I can make: if I do not know something is there, I cannot choose or not choose it. Correlatively, the information environment creates the “truth” of the world. If I rely on Facebook for my news, then the news “is” whatever Facebook’s algorithm serves up. If I search Google for information about something, then the truth of that something is whatever shows up on the first couple of pages of search results. In both cases I could go elsewhere, but both platforms carefully set things up so that I probably won’t. At least two additional results follow.
On the one hand, efforts to structure individuals’ choice environments actively shape what they can do and what sorts of actions become habituated into their subjective preferences and tendencies. Of course all architectural regulation does this to an extent; if you put a wall next to me, I can’t walk in that direction (I will accept the correction of the economist: you raise the cost of doing so to the extent that it is not rational to try). But hypernudges make this process invisible and dynamic: I don’t know that (or how) the list of search results I see is determined by a range of data that Google has processed. Unlike with a wall, I do not automatically imagine an “other side.” Not only that, but because the nudging is dynamic, it is as if the wall subtly moves me a little further to the right with every step I take. If the wall-moving algorithm is good enough, I may not even notice the iterative changes. In that regard, the behaviors that hypernudges encourage are much easier to habituate than the ones that walls and speed bumps encourage, and so hypernudges are better techniques of subjectification. If I am driving in a mostly empty parking lot, I still tend to go around the speed bumps. But I almost never conduct Google searches in an incognito window. I don’t even bother to scroll past the first page or two.
On the other hand, to the extent that these hypernudges are predictive, they actively use individuals’ basic personality traits to present a choice environment designed to manipulate those very traits. There’s evidence that Facebook is doing just that. The way hypernudges structure choice environments is thus even more subtle than with ordinary nudges, because it cannot be traced to any specific prior behavior. Everyone knows about crude targeted advertising, where the shoe you looked at on the Internet follows you around. But predictive targeted advertising will know if you’re introverted, and serve you a range of products based on that. It can target you based on what you might do, not what you’ve done.
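As a purely hypothetical sketch of that difference: rather than reacting to a specific past act (the shoe that follows you around), a predictive system first infers a trait score from behavioral signals and then selects a pitch keyed to that trait. The trait model, the signals, the threshold, and the ad copy below are all invented; the only point is that the targeting keys on a prediction about the person rather than on anything they have actually done.

```python
# Hypothetical sketch: predictive (trait-based) targeting, as opposed to
# retargeting a specific past action. The trait model and copy are invented.

def infer_introversion(signals):
    """Toy 'prediction': map behavioral signals to an introversion score in [0, 1]."""
    score = 0.5
    score += 0.2 if signals.get("late_night_browsing") else 0.0
    score -= 0.2 if signals.get("frequent_group_photos") else 0.0
    return max(0.0, min(1.0, score))

def pick_ad(signals):
    """Choose the pitch that matches the predicted trait, not any past purchase."""
    if infer_introversion(signals) > 0.6:
        return "Quiet night in? Noise-cancelling headphones, 20% off."
    return "Get the crew together: group tickets on sale now."

print(pick_ad({"late_night_browsing": True}))
```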
Of course any regulatory structure has to imagine the individuals it regulates; the autonomous subject of liberalism is one way of doing that. To the extent that model is a good one, it really only has traction in discussing fiat-based regulation, because that regulation often imagines the autonomous subject of liberalism as its object. But in the case of nudges, and in particular in the case of hypernudges, the autonomous liberal subject is not apt. This is why notice and consent rules do such a poor job of dealing with techniques that change the choice environment. And it’s why the discussion of big data makes questions of subjectification urgently salient.