By Gordon Hull
We don’t access the internet directly – it’s always through some sort of intermediary software. For that reason, it matters – a lot – what the intermediary does, and what kind of interactivity it promotes. Concern about this dates at least to a paper written in 1996 (and finally published in 2000) by Lucas Introna and Helen Nissenbaum called “Why the Politics of Search Engines Matters.” More recently, Tarleton Gillespie has emerged as a major voice in these debates: his book, Wired Shut, makes a strong case against Digital Rights Management techniques, and his more recent “Politics of Platforms” makes an argument analogous to Introna and Nissenbaum’s for platforms like Facebook. Indeed, internal FB studies seem to bear these concerns out: Facebook discovered that it could influence voter participation with simple “get out the vote” reminders sent to some users but not others. Effects of that size could easily swing a tight election. Gillespie and Kate Crawford have a new paper out that makes the argument in the context of “flagging” content.
The significance of Crawford and Gillespie’s paper lies in a couple of areas. First, it shows how a very complicated process of abstraction and reduction occurs in the translation of communications between human beings into “data” that websites and other internet corporate entities can handle (the difference is ontological, as Mary Beth Mader points out in Foucauldian terms). In a sense, this is a necessary response to data overload, given the astonishing amount of content being uploaded to YouTube all the time, but it’s also a radically decontextualizing and (therefore) depoliticizing move, for the only possible response to content that bothers one is to “flag” it as “offensive.” Does this flag mean that you find it deeply offensive, or only a little bit? Does it mean you think it’s offensive but should be preserved online? Does it mean you think it’s important but posted somewhere that kids can, but shouldn’t, see it? And who died and put you in charge, anyway? Nobody knows. However problematic they were, we don’t really have general interest intermediaries anymore to make such editorial calls, as Cass Sunstein has said repeatedly. As Crawford and Gillespie point out, the problem is even worse because most people don’t flag the content at all: does that mean they like it? Find it inoffensive? Offensive but not worth kicking up a fuss about? We simply don’t, and cannot, know, from the software mechanism employed. In short, the flag communicates a little more than “Yo!” But not a lot.
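To make the reduction concrete, here is a minimal, purely illustrative sketch – the class names, field names, and reason codes are hypothetical, not drawn from the paper or from any actual platform’s schema – of roughly what a flag records, and what it cannot:

```python
from dataclasses import dataclass, field
from enum import Enum
import time

# Hypothetical reason codes: even "proliferating submenus of vocabulary"
# collapse an objection into one label from a short, fixed list.
class FlagReason(Enum):
    SEXUAL_CONTENT = "sexual content"
    VIOLENCE = "graphic violence"
    HATEFUL = "hateful or abusive"
    SPAM = "spam or misleading"

@dataclass
class FlagEvent:
    """Roughly everything a platform retains about an objection."""
    content_id: str
    flagger_id: str
    reason: FlagReason
    timestamp: float = field(default_factory=time.time)

# What no field can record: how strongly the viewer objects, whether they
# think the content should nonetheless be preserved, whether their quarrel
# is with the rules themselves, or why everyone else stayed silent.
```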
As Marx long ago recognized, this sort of decontextualization is at the heart of capitalist value extraction: the worker does something – some specific human activity, like driving a nail – but the system records that only as an expenditure of labor power (you can get this from the 1844 Manuscripts or the more complete theory in Capital: the point is consistent across his work). The specifics of what the worker does don’t matter to the system one bit. Here, users help content providers police the internet, but they also indicate, in this crude way, what they want to see. The specifics of that desire, though, disappear completely in a haze of Benthamite hedons. Any sort of nuance drops out immediately, and we are taught to fetishize the presence or absence of flags as markers of quality. As Crawford and Gillespie put it:
“But more importantly, flags speak only in a narrow vocabulary of complaint. A flag, at its most basic, indicates an objection. User opinions about the content are reduced to a set of imprecise proxies: flags, likes or dislikes, and views. Regardless of the proliferating submenus of vocabulary, there remains little room for expressing the degree of concern or situating the complaint, or taking issue with the rules. There is not, for example, a flag to indicate that something is troubling, but nonetheless worth preserving. The vocabulary of complaint does not extend to protecting forms of speech that may be threatening, but are deemed necessary from a civic perspective. Neither do complaints account for the many complex reasons why people might choose to flag content, but for reasons other than simply being offended. Flags do not allow a community to discuss that concern, nor is there any trace left for future debates.”
When you combine this with those who don’t flag, the epistemic value drops to near zero:
“The number of flags a piece of content receives generally represents a tiny fraction of the total number of views. The degree to which flagged content should be understood as having concerned the community depends, in principle, on some understanding of what proportion of users were disturbed. Views might be a useful stand-in metric here, but for understanding the users’ response to a piece of content, they are even more narrow than the flag as an expressive gesture. When, for example, a user does not flag a video they just watched, this absence could represent full-throated approval, or tacit support, or ambivalence. Some of these users may have in fact been offended, but did not bother to flag, or did not know they are expected to, or did not know it was “for” them. Some might not believe it would make a difference. Others may have been offended, but also believed politically that the content should remain, or that the site shouldn’t be removing content at all. Invariably, the population of non-flaggers is a murky mix of some or all of these. But all the site has, at most, is an aggregate number of views, perhaps paired with some broad data about use patterns. This makes views, as a content moderation concern, an unreadable metric. Neither views nor flags can be read as a clear expression of the user community as a whole.”
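To see why the metric is unreadable, try a toy calculation with made-up numbers (nothing here comes from the paper or from any real platform):

```python
# Made-up numbers: the only aggregates the platform actually sees.
views = 250_000
flags = 180

flag_rate = flags / views
print(f"flag rate: {flag_rate:.3%}")   # ~0.072% of viewers flagged

# The same aggregate is consistent with radically different audiences;
# nothing in the data lets us apportion the silent majority among these.
readings_of_silence = [
    "full-throated approval",
    "tacit support",
    "ambivalence",
    "offended, but didn't bother to flag",
    "offended, but didn't know flagging was 'for' them",
    "offended, but doubted it would make a difference",
    "offended, but opposed to removal on principle",
]
print(f"{views - flags:,} non-flaggers, {len(readings_of_silence)} equally available readings")
```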
And all of that is before questions of strategic flagging (imagine a conservative group systematically flagging all “liberal” content) or corporate manipulation emerge.
All of that matters in part because it gets at a central problem in debates about internet commons regimes. Neoclassical economists like Harold Demsetz say that property and market regimes evolve when societies become big and diverse enough that markets are the most efficient way to order people’s use of resources – in other words, when the cost of the market system (in terms of legal rules, police and so on) is lower than the value the system brings. Other regimes are of course possible: commons-based regimes, for example, where social norms and other non-legal structures govern the exchange of goods and use of resources (these are the commons grazing regimes of pre-modern England, which got fenced off in the enclosure movement of the 1600s). Absent property regimes, we are told, people will overconsume resources because they don’t bear the full cost of their own overconsumption.
But commons-based regimes are supposed to address these problems through social norms and other non-market features. The problem with such regimes is that they don’t seem to scale very well; Robert Ellickson’s famous examples involve small groups of ranchers. When these groups get too big, the interpersonal interactions that sustain the norms standing in for law break down, and markets, with their Hayekian decentralization, become more efficient.
A significant strand of internet scholarship has sought to displace, or at least nuance, this model by arguing, essentially, that the internet solves a lot of the scalability problems because it allows the right kind of communication without small, tight-knit communities. The most famous example in this literature is no doubt Yochai Benkler’s The Wealth of Networks, which makes the sustained normative and economic case for commons-based regimes in at least some cases. I am a huge admirer of Benkler’s work, and it’s picked up as promising in places like Hardt and Negri’s Multitude. But I also wonder if the political economy that Crawford and Gillespie point to in the case of flagging shows the limits of scalability to be more serious than Benkler takes them to be. Benkler’s primary examples of functional rating systems are Slashdot and Wikipedia. Even if we assume that both of those work perfectly – and Crawford and Gillespie, although they favor the Wikipedia model over the YouTube one, have their reservations – they suddenly seem small and insular compared to the ubiquitous “flag content” buttons across the Web.
The other thing this movement into flagging content shows is how control techniques migrate from one area to another. Foucault called this the “swarming of the disciplines” in the “Panopticism” chapter of Discipline and Punish; it might equally be evidence of the “surveillant assemblage” discussed in more recent work. In this case, the motivation for algorithmic procedures for removing offending content derives from two “safe harbor” provisions in federal law, both designed to encourage internet service providers to host third-party content. One, in the Communications Decency Act, shields providers who screen out or remove material they deem objectionable; the other, in the DMCA, requires that content accused of violating copyright be removed promptly. To avail themselves of the safe harbor protections, websites have to remove content first and examine it later. Material alleged to violate copyright can be appealed through a counter-notice, but few users know the process exists, it takes a minimum of two weeks, and it’s the website’s decision what to do in the end anyway. Since the website has a strong legal incentive to maintain immunity from a copyright suit, not a lot of that content goes back up. So there’s not a lot of due process there, and users have absolutely none of the power. Crawford and Gillespie do a really nice job showing how these flagging algorithms have expanded outward from the enforcement of legal norms to the creation of new ones.
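The incentive structure is easy to caricature in code. The sketch below is just that – a caricature of the “remove first, examine later” logic described above, with invented names and a made-up waiting period standing in for the statutory one; it is not a rendering of the statute or of any site’s actual system:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta
from typing import Optional

COUNTER_NOTICE_WAIT = timedelta(days=14)  # roughly the two-week minimum noted above

@dataclass
class Content:
    content_id: str
    visible: bool = True
    takedown_received: Optional[datetime] = None

def handle_takedown_notice(item: Content) -> None:
    """Remove first: the safe harbor rewards speed, not accuracy."""
    item.visible = False
    item.takedown_received = datetime.now()

def handle_counter_notice(item: Content, site_willing_to_restore: bool) -> None:
    """Examine later: restoration is slow and, in practice, discretionary."""
    if item.takedown_received is None:
        return
    if datetime.now() - item.takedown_received < COUNTER_NOTICE_WAIT:
        return  # nothing happens for at least two weeks
    # Even after the wait, keeping the material down is the safer way to
    # preserve immunity, so the site's own judgment is the last word.
    item.visible = site_willing_to_restore
```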
Anyway, I’m not the first person to recommend this paper – I read it because of Rebecca Tushnet’s review here – but I do recommend it.