One of the more perplexing things about the Trump presidency is why it exists in the first place: he took office having lost the popular vote by a wide margin, and with one of the smaller electoral college margins in memory. The win also defied virtually all of the pre-election polling and commentary: almost no one (except Michael Moore) predicted the outcome correctly, and on his victory lap, Trump himself admitted that he thought he was going to lose. So a lot of us have tried to figure out what happened (I continue to think the election was about white supremacy, though that doesn’t explain how Trump got the white supremacists to the ballot box; I’ve also wondered about the libertarian candidates and Clinton’s staggering failure to take the Good News about the auto bailout to the rust belt states). Others have wondered about the Comey letter, Russian hacking, and so on. Now there’s another possibility: the Trump campaign’s use of big data, as reported in this chilling article on Motherboard.
In May, a 13-year-old named Izabel Laxamana took a selfie wearing a sports bra and leggings, and sent it to a boy at her school. When school administrators heard about the picture, they contacted her parents. What happened next defies easy comprehension: delivering on a threatened punishment for breaking his social media rules, Izabel’s father cut off her hair. He then made a video of Izabel with her hair (in a pile on the floor), demanding that she say breaking his rules hadn’t been worth it. The video found its way to social media. Two days later, Izabel jumped off an overpass, and a day later she died from her injuries. The reasons for Laxamana’s suicide are of course complex; the shaming may or may not have been among them (and the father may or may not be the one who posted the video to social media).* But the videoed retaliatory haircut seems to be real. In a recent piece in Slate, Amanda Hess catalogues the sudden re-emergence of this medieval phenomenon – literally medieval; women were punished by having their hair cut off, often in public – and situates it as part of a more general re-emergence of the public shaming of teenagers by their parents:
There has recently been considerable discussion of the phenomenon of internet shaming. Two important recent contributions were the (admirable, brave) TED talk by Monica Lewinsky and the publication of Jon Ronson’s book So You’ve Been Publicly Shamed. Lewinsky’s plight mostly pre-dates the current all-pervasiveness of the internet in people’s lives, but she was arguably one of the first victims of this new form of shaming: shaming that takes on world-wide(-web) proportions, no longer confined to the locality of a village or a city. Pre-internet, people could move to a different city, if need be to a different country, and start over. Now only changing your name would do, to avoid being ‘googled down’ by every new person or employer you meet.
As described in Ronson’s book (excerpt here, interview with Ronson here), lives can be destroyed by an internet shaming campaign (the main vehicle, judging from his stories, seems to be Twitter). Justine Sacco, formerly a successful senior director of corporate communications at a big company, had her life turned upside down as a result of one tweet (quite unfortunate, perhaps, though in a sense also possibly making an anti-racist point): “Going to Africa. Hope I don’t get AIDS. Just kidding. I’m white!” From there on, her life became a tragedy of Kafkaesque proportions, and she is only one of the many people who faced similar misfortunes discussed in Ronson’s book. Clearly, people truly delight in denouncing someone as ‘racist’, as in Sacco’s case; it probably makes them feel they are making a contribution (albeit a small one) to a cause they feel strongly about. But along the way, for the sake of ‘justice’, they drag through the dirt someone whose sole ‘crime’ was to post a joke of debatable tastefulness on Twitter. And who has never said anything unfortunate on the internet, something they later came to regret?
If you haven't already, you should read yesterday's Stone article in the NYT by Justin McBrayer entitled "Why Our Children Don't Believe There Are Moral Facts." There, McBrayer bemoans the ubiquity of a certain configuration of the difference between "fact" and "opinion" assumed in most pre-college educational instruction (and, not insignificantly, endorsed by the Common Core curriculum). The basic presumption is that all value claims-- those that involve judgments of good and bad, right and wrong, better and worse-- are by definition "opinions" because they refer to what one "believes," in contradistinction to "facts," which are provable or disprovable, i.e., True or False. The consequence of this sort of instruction, McBrayer argues, is that our students come to us (post-secondary educators) not believing in moral facts, predisposed to reject moral realism out of hand. Though I may not be as quick to embrace the hard version of moral realism that McBrayer seems to advocate, I am deeply sympathetic to his concern. In my experience, students tend to be (what I have dubbed elsewhere on my own blog) "lazy relativists." It isn't the case, I find, that students do not believe their moral judgments are true-- far from it, in fact-- but rather that they've been trained to concede that the truth of value judgments, qua "beliefs," is not demonstrable or provable. What is worse, in my view, they've also been socially and institutionally conditioned to think that even attempting to demonstrate/prove/argue that their moral judgments are True-- and, correspondingly, that the opposites of those judgments are False-- is très gauche at best and, at worst, unforgivably impolitic.
A few months ago, I noticed an interesting and telling interaction between a group of academic philosophers. A Facebook friend posted a little note about how one of her students had written to her about having encountered a so-called "Gettier case," i.e., a case of acquiring a true belief for invalid reasons. In the email, the student described having been told the 'right time' by a broken clock. The brief discussion that broke out in response to my friend's note featured a comment from someone noting that the broken clock example is originally due to Bertrand Russell. A little later, a participant in the discussion offered the following comment:
Even though the clock case is due to Russell, it's worth noting that "Gettier" cases were present in Nyāya philosophy in India well before Russell, for instance in the work of Gaṅgeśa, circa 1325 CE. The example is of someone inferring that there is fire on a faraway mountain based on the presence of smoke (a standard case of inference in Indian philosophy), but the smoke is actually dust. As it turns out, though, there is a fire on the mountain. See the Tattva-cintā-maṇi or "Jewel of Reflection on the Truth of Epistemology." [links added]
Over on Cyborgology, my colleague Robin James has a post up about Taylor Swift’s promotion of her new album. James focuses on two moments in that promotion: on the one hand, Swift has removed her music from the free streaming part of Spotify, on the grounds that it insufficiently compensates her (and others’) labor in producing it. On the other hand, she released a video, “Blank Space,” that plays more like an interactive video game. On James’ argument, both of these strategies amount to an effort on Swift’s part to control and otherwise dictate the terms of her affective labor. On the surface of it, that’s laudable enough, and certainly the Internet can readily be seen as an enormously complex vehicle for extracting surplus value from its users by getting them to work for free. As Terry Hart tirelessly points out on Copyhype, Silicon Valley makes a lot of money off of other people’s work, and shockingly little of that money finds its way back to the content industries: Silicon Valley obscures (and does not compensate) the enormous amount of affective labor on which it depends.
There are two important posts up today elsewhere in the philosophical blogosphere that deserve your attention, both of which raise the question of how those of us in the profession at large can support those members who, because of activism or simply their social position, are vulnerable to various official and non-official forms of retaliation.
Above the fold, I will simply point readers to the Open Letter of Support "for people in our profession who are suffering various trials either as victims of harassment or as supporters of victims," published on DailyNous by John Greco, Don Howard, Michael Rea, Jonathan Kvanvig, and Mark Murphy; and to NewAPPS emeritus blogger Eric Schliesser's more concrete suggestion about how to address the retaliatory deployment of legal means against complainants. Both pieces deserve to be read and reflected upon.
In what follows, I'll say a bit more about my sense of the importance of both pieces, and the larger phenomenon of retaliation against those contesting the inequitable state of the profession.
... in Turkey. I suppose by now no one should be surprised by what Recep Tayyip Erdogan is capable of, but this is definitely a new low. Below is a short BBC video narrating the chronology of events, and here is a piece in the Guardian from the point of view of those fighting back against the suppression of internet freedom in Turkey (H/T Lucas Thorpe for both).
I invite well-informed readers to offer further details on the situation in the comments below.
Writing at the Atlantic, Ian Bogost develops the concept of “hyperwork” to describe the constantly-on conditions of work in contemporary society. The gist of the argument is that we (technology users, anyway) are overworked because we are doing a lot of jobs. As he puts it, “No matter what job you have, you probably have countless other jobs as well. Marketing and public communications were once centralized, now every division needs a social media presence, and maybe even a website to develop and manage. Thanks to Oracle and SAP, everyone is a part-time accountant and procurement specialist. Thanks to Oracle and Google Analytics, everyone is a part-time analyst.” And that’s before we get to try to manage email. Most of these extra jobs aren’t paid, but the loss of money is not nearly as alarming as the loss of time.
At Cyborgology, my colleague Robin James takes up one point that Bogost does not make: that the new jobs we are all working are, by and large, jobs traditionally held by women or other minorities, for which traditionally “feminine” attributes of caring and nurturing are useful. She wonders aloud whether the phenomenon of hyperwork will thus alter our notions of femininity.
Another point to which Bogost gestures but that needs more emphasis relates to what Tiziana Terranova, Paolo Virno, Franco Berardi, Antonio Negri and others of the Italian “autonomist” school of thought call “cognitive capitalism,” which is basically a Marxist interpretation and critique of the “net economy.”
Last week, Neil Sinhababu had a great post here at New APPS picking up on an attempted explanation for why members of the so-called Generation Y seem so dissatisfied with their lives (if indeed they are). The latter post has been receiving a fair share of attention at the usual places (Facebook, Twitter), and though admittedly funny, it seems to suffer precisely from the limitation pointed out by Neil: it treats the problem mostly as a psychological one, pertaining to the individual sphere (including, of course, the parents component, as any good Freudian would have it), thus disregarding the significant economic changes that took place in recent decades. However, I do want to disagree with Neil’s quick dismissal of the non-negligible role that the article claims for new technologies such as Facebook and social media in general in the phenomenon. Neil says:
And I'm suspicious of explanations in terms of the special properties of social media -- mostly it gives you a new way to do kinds of social interaction that have been around forever.
I’m a newcomer to the world of Facebook, having joined only in June of last year. I resisted for as long as I could, until I gave in to the pull of being kept abreast of what people are up to, of posting pictures of my kids for family and friends scattered around the world, and possibly also of some less noble motivations. For the most part, it’s been a pleasurable experience: I came to realize that there is a lot of meaningful philosophical interaction going on on Facebook (whereas mathematicians, for example, seem to prefer to hang out at G+), and I got in touch with many philosophers with whom I had previously had little to no contact. But like any newcomer to anything, I had to learn the rules of the game, in particular what to post and what not to post. (Which is not to say that I’ve achieved mastery at this point; far from it, as discussed below the fold.)
We already know that with a social graph at its disposal, a mood graph would give Facebook an incredible edge over its competitors for customizing ads and recommendations, as well as predicting users’ future feelings. But consider this: Even if you’re someone who doesn’t share anything, Facebook could potentially reverse-engineer your emotional persona by filling in the blanks from your like-minded friends’ emotional states. In other words, the more your friends emote and translate their soulful moments into basic data points, the more Facebook can determine what makes you tick, too.
In short, thanks to persuasive interface design and non-transparent algorithms, we may be providing emotional labor without even knowing it.
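To make the "filling in the blanks" idea concrete, here is a minimal sketch of graph-based mood inference by simple neighbor averaging. Everything in it is a hypothetical illustration: the friendship graph, the mood scores, and the averaging rule are invented for the example. Facebook's actual algorithms are not public; this is only a sketch of the general mechanism the quoted passage describes.

```python
# Hypothetical illustration: inferring a non-sharing user's "mood" from
# friends' shared moods via simple neighbor averaging on a friendship graph.
# This is NOT Facebook's actual (non-public) algorithm, only a sketch of
# the general idea of "filling in the blanks" from like-minded friends.

# Toy friendship graph: user -> set of friends (all names are made up).
friends = {
    "alice": {"bob", "carol", "dave"},
    "bob": {"alice", "carol"},
    "carol": {"alice", "bob", "dave"},
    "dave": {"alice", "carol"},
}

# Moods that some users chose to share, scored from -1.0 (negative)
# to +1.0 (positive). Alice shares nothing at all.
shared_mood = {"bob": 0.6, "carol": 0.8, "dave": -0.2}

def infer_mood(user, friends, shared_mood):
    """Estimate a non-sharing user's mood as the plain average of
    their friends' shared moods."""
    observed = [shared_mood[f] for f in friends[user] if f in shared_mood]
    if not observed:
        return None  # no signal to borrow from
    return sum(observed) / len(observed)

print(infer_mood("alice", friends, shared_mood))  # ~0.4: mildly positive
```

Even this crude rule exhibits the asymmetry at issue: the non-sharing user contributes no data points of her own, yet still ends up with an inferred emotional profile, built entirely out of her friends' emotional labor.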