By Gordon Hull
Facebook’s opaque advertising practices are in the news (again) because it was apparently the vehicle through which some of the Russian attempts to meddle in the 2016 election were routed. This piece by Sam Biddle on The Intercept is well worth reading, as it makes the case that the public needs to know more about Facebook’s advertising practices. The company has admitted to a little of what happened, but it has studiously failed to answer most of the relevant questions, hiding behind vague, well-lawyered blog posts. Biddle concludes:
“It’s reassuring that Facebook is cooperating with the ongoing Russia-related probes. But this is bigger than Russia, bigger than Hillary Clinton, and bigger than 2016. Should Facebook continue to simply allude to its ominous potential rather than sharing it in full, there’s only one good option left: Bring in Mark Zuckerberg and have him sworn in live on C-SPAN. No spokespeople required.”
This is not a new issue; what Facebook does to your news feed is a paradigmatic example of the black box society at work. Worried? Facebook routinely publishes research designed to exonerate itself from whatever concerns its black box might engender; for example, it published papers in both 2012 and 2015 purporting to prove that it did not contribute to the development of online echo chambers. As I argued at the time, these results are misleading not just for reasons internal to the research, but also because they spread the myth that anything involving big data, machine learning, or a humongous N somehow produces unvarnished, self-interpreting truth. This serves to insulate companies like Facebook from the scrutiny that accompanies what Tarleton Gillespie identified as the “politics of platforms” back in 2010 and what Helen Nissenbaum and Lucas Introna identified as an analogous issue with portal sites all the way back in 2000.

When a company is the means through which people experience the Internet or their social interactions with other users, how that company curates its data matters. But two things are certain: the data is carefully curated, and the companies curating the data aren’t talking. That is not to say they never reveal what they might be up to: under the guise of proving that the emotional contagion effect could work across remote networks (and did not require face-to-face interaction), Facebook basically telegraphed that it manipulates users’ news feeds to elicit (presumably positive) emotional states. danah boyd captured a lot of what is at stake, pointing out (again, a while ago) that Facebook presents itself as a public utility, but demands that it be free from the regulations that guarantee that utilities serve the public interest.
If I’ve over-emphasized how long some of these worries have been percolating, it’s because we shouldn’t be surprised to learn that FB was an (unwitting?) part of the 2016 election debacle. We should be alarmed and even outraged. But then we should remember that Zeynep Tufekci pointed three years ago to research proving that FB could swing a tight election if it wanted – and it could do so with nearly certain impunity. Tufekci:
“A biased platform could decide to use its own store of big data to model voters and to target voters of a candidate favorable to the economic or other interests of the platform owners. For example, a study published in Nature found that civic “go vote” messages that were targeted in Facebook through users’ social networks (thanks to a voting encouragement app deployed by Facebook) resulted in a statistically significant increase in voter turnout among those targeted, compared with a similar “go vote” message that came without such embedding in social ties (Bond, et al., 2012). A platform that wanted to manipulate election results could, for example, model voters who were more likely to support a candidate it preferred and then target a preponderance of such voters with a “civic” message narrowcast so that most of the targets were in the desired target group, with just enough thrown in from other groups to make the targeting less obvious. Such a platform could help tilt an election without ever asking the voters whom they preferred (gleaning that information instead through modeling, which research shows is quite feasible) and without openly supporting any candidate. Such a program would be easy to implement, practically undetectable to observers (since each individual only sees a portion of the social media stream directed [at them] and nobody sees the totality of messages in the whole platform except the platform owners), easily deniable (since the algorithms that go into things like Facebook’s news feed are proprietary and closely guarded secrets), and practically unconfirmable.”
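To see why Tufekci calls such a program “easy to implement,” here is a minimal, purely illustrative Python sketch of the biased narrowcasting she describes. Nothing in it is Facebook’s actual code or data: the support scores stand in for whatever voter model a platform might build, select_targets and deliver_prompt are hypothetical helpers, and the 90/10 split between modeled supporters and “cover” recipients is an arbitrary choice meant to show how a little noise obscures the skew.

```python
import random

# Toy user records: (user_id, support_score), where support_score stands in
# for the platform's modeled probability that the user favors the preferred
# candidate. In a real system this would come from a model trained on
# behavioral data; here it is just random noise.
random.seed(42)
users = [(uid, random.random()) for uid in range(10_000)]


def select_targets(users, n_targets, favored_share=0.9, threshold=0.6):
    """Pick recipients for a 'civic' go-vote prompt.

    Most targets (favored_share of them) are drawn from users modeled as
    likely supporters (score >= threshold); the rest come from everyone
    else, which makes the skew harder to notice from the outside.
    """
    likely = [u for u in users if u[1] >= threshold]
    others = [u for u in users if u[1] < threshold]

    n_favored = int(n_targets * favored_share)
    n_cover = n_targets - n_favored

    targets = random.sample(likely, min(n_favored, len(likely)))
    targets += random.sample(others, min(n_cover, len(others)))
    return targets


def deliver_prompt(user_id):
    # Stand-in for injecting an "It's election day - go vote!" card into
    # this user's feed.
    pass


targets = select_targets(users, n_targets=1_000)
for uid, _score in targets:
    deliver_prompt(uid)

# No individual recipient sees anything but their own feed, so the partisan
# skew in who received the prompt is invisible to them.
supporters = sum(score >= 0.6 for _, score in targets)
print(f"prompt shown to {len(targets)} users; {supporters} modeled as supporters")
```

The point is not the code but the asymmetry it illustrates: each recipient sees only an innocuous civic prompt, while only the platform can see who was and was not shown it.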
I am not saying that FB swung the election, even if advertising routed through FB was part of what did (and we don’t even really know that). What I am saying is that it is time to start properly regulating companies like Facebook. The public needs to know something about the convoluted methods by which its access to the world is curated. Platform companies like FB are making billions of dollars pretending to be neutral conduits for public conversation, even when they most assuredly are not.