I’m currently teaching a summer gen-ed class on “Ethical Issues: Technology,” and when I teach this class, I always make a point of discussing Facebook early on. Specifically, this time we’re talking about the “is Facebook making us lonely” question, using a piece from The Atlantic and a critique of it that appeared a few days later on Slate. But I always try to include time for talking about Facebook in general. And my students say more or less what the research says: almost all of them are on it, and they use it mainly to keep up with and enhance offline social networks. They gain considerable social capital from their use of it. But they also don’t like it all that much. They resent the constantly changing settings, and they’re getting fairly cynical about FB as a business. They don’t much like having to untag themselves from photos all the time. They tend to think FB either takes them for granted or even takes advantage of them. More than one said they’d leave if they could figure out how. And they do worry about privacy. All of that is anecdotal, of course, but it’s been a pretty consistent response for a few years now.
A great deal of the value of a company like FB is its network: as with telephones, the more people who use the network, the more valuable your connection to it is. This is part of why students don’t have much of an exit option, as leaving FB would basically give them the SNS equivalent of a one-phone system. FB then rubs it in: there’s no way to export all of your material from it to another system, so a decision to leave is a decision to leave behind however many hours of socializing and networking. This sort of state of affairs led Tiziana Terranova to note – before FB – that websites extract a lot of surplus value from the users who produce them simply because of this network effect.
This year, I had a realization about surplus value and privacy, prompted by this piece by Julie Cohen. As Cohen notes, the idea of a market for privacy is an odd one: although consumers and end users are the source of information, the market for information is actually between the company that collects it and somebody else, an advertiser for example. Other things being equal, the less privacy users have, the more successful the market is. If this is the case, then there’s a much deeper sense in which surplus value enters the picture. FB’s business model strikingly parallels the model of industrial capital that Marx diagnosed: the entire operation of sites like FB is designed to extract value from users in the form of personal information, and then to “pay” them in something that costs FB less than the value of that information. Let’s call that “wages,” but it really amounts to social capital and enjoyment. The difference between the value of the commodified information and what users get in return represents surplus value for FB, which it then either uses to buy other companies or to pay its investors. It’s a good trick, because for sites like FB, what users want often isn’t monetizable, so from the point of view of investors, the costs of keeping them happy are pretty low.
Following Marx, then, we can note that FB has every incentive to extract as much information as possible, and to “pay” users as little as it can get away with (and, as long as there’s an industrial reserve army of teenagers always around the corner, attrition of existing users isn’t a big problem). It has every reason to depress wages by discouraging users from withholding information, a task it achieves in part by making privacy preferences both hard to actuate and frequently changing. In that sense, the discovery that only 37% of FB users have privacy settings that match their expectations is actually good news for the corporation. Not only that, the network value of FB further conspires to depress wages. On the one hand, the more social capital users draw from Facebook, the less able they are to leave, because of the negative effects that departing FB would have on their offline lives. On the other hand, the bigger the network, the greater the percentage of that social capital that comes nearly gratis to FB. In sum, we should expect FB to spend most of its energy on expanding its number of customers, and not on satisfying the ones it has, since that’s the best way to increase its surplus value.
If all this is right, a couple of additional provocative points follow. First, the workers in the Marxian industrial proletariat didn’t destroy their bodies and minds in horrible factory jobs because they wanted to. They worked because they didn’t have any choice (“nothing to sell but their labor”). For that to happen, capital has to be continuously engaged in a process of primitive accumulation (or, as David Harvey calls it, “accumulation by dispossession”), through which workers are deprived of whatever of value they have. In England, this was achieved by the enclosure movement and the widespread privatization of land formerly held in common. James Boyle has provocatively suggested that the strengthening of intellectual property laws functions as an “enclosing of the commons of the mind.” Something similar, perhaps, is happening here, where people are dispossessed of their “privacy” and all the other values that get lumped together with it: intimacy, dignity, autonomy, etc. After enough social surveillance and total visibility, all we have left to offer is our information (this is definitely a gendered phenomenon: after all, it is women who are most accustomed to being watched all the time, and it was camgirls who first faced some of these questions. And obviously, once we’re talking about surveillance, race is hugely important).*
Second, there’s the question of what to do. danah boyd proposed in exasperation that, to the extent that FB presents itself as a utility, it should be regulated as such. James Grimmelmann suggested that FB could be evaluated under product liability laws. Another possibility suggests itself here: privacy protection is appropriately analogized to, or maybe even best conceptualized as, labor law. Certainly the Lochner-era reverence for the freedom to contract is the same, as Cohen noted a while ago. But we should also notice that the few significant efforts to protect privacy online have been efforts to protect children. From this point of view, COPPA – which restricts website marketing and the extraction of information from children under 13 – is in effect a child labor law. The labor-law conception would obviously need working out in detail, but it does have the advantage that when things are put that way, it seems a whole lot more reasonable to restrict what FB can do with our information than it does when the question is framed as “notice and consent.” In other words, we’re accustomed to a wide variety of restrictions on what employers should be allowed to do to employees. There are of course a lot of companies that don’t comply with labor laws, and of course ALEC and Koch-funded groups spend lots of money complaining that labor laws reduce their freedom (they do! That’s the point). But most people intuitively understand that there need to be limits on what the boss can do to you. On this reading, privacy (in this context, anyway) should be about redressing the power imbalance between companies like Facebook and the end users who are locked into using their products, for the same reason that child labor laws are there to redress the power imbalance between coal miners and the mine owners.
[* This might also provide some traction on thinking about what privacy is. Cohen, who knows her Foucault and Deleuze, proposes that, in a nutshell, privacy offers shelter to enable the individuation of people. Her point is that individuation is a process, not a starting point. This is a huge topic; here, I just want to note that some of the problems in conceptualizing privacy might look pretty different under the lens suggested here.]