By Gordon Hull
In a recent paper, Karen Yeung introduces the concept of a ‘hypernudge’ to capture how Big Data intensifies design-based ‘nudges’ as a form of regulation. Yeung’s discussion draws partly on discussions of Internet regulation, partly on the literature on design, and partly on legal literature around privacy and big data. Her basic argument is that, in the context of big data:
“Despite the complexity and sophistication of their underlying algorithmic processes, these applications ultimately rely on a deceptively simple design-based mechanism of influence – ‘nudge.’ By configuring and thereby personalizing the user’s informational choice context, typically through algorithmic analysis of data streams from multiple sources claiming to offer predictive insights concerning the habits, preferences and interests of targeted individuals (such as those used by online consumer product recommendation engines), these nudges channel user choices in directions preferred by the choice architect through processes that are subtle, unobtrusive, yet extraordinarily powerful” (119)
Ordinary nudging technologies – she cites the humble speed bump – are static. In contrast, the nudges provided by data analytics are dynamic, continuously and invisibly updating the choices a user sees. They work both by making decisions automatically, based on what users have done or can be predicted to do, and by guiding decision-making through shaping which choices are available (and how they are presented). Because of this combination of dynamism and invisibility, data-driven nudges can be incredibly powerful in comparison to their static cousins.
Yeung’s paper also enables one to advance a couple of points in the context of information ethics.