The FCC decided today to treat the Internet as a public utility and therefore to enforce net neutrality. This means that ISPs won’t be able to favor one form of content over another by offering (for example) higher transmission rates for a fee. It also means that ISPs can’t interfere with the transmission of content they don’t like (say, by a competitor). Assuming it holds up in court (and the major telecom companies are prepared to spend a lot of money trying to get it overturned), this is a big deal.
Readers may recall that last December we co-hosted an open letter in opposition to a draconian law that had been instituted in Macedonia, substantially abridging the autonomy of the country's universities (more info here). The letter ended up with more than 100 signatures, of which more than 50 came through New APPS.
A little while ago, I received an email update about the situation from Katerina Kolozova, Professor of Philosophy, Gender Studies and Sociology at University American College-Skopje. The news is good: the law has been suspended, and negotiations have begun between the government and the 'Student Plenums' that have been organizing against the law.
Professor Kolozova writes:
Dear friends, Thank you so much for supporting us! And it hasn't been in vain. The plenums have won today: the law on higher education against which we have protested for months, against which we have been occupying universities, writing legal analyses we had no place to present except the social media, combating the Government propaganda through arguments presented on our blogs, Twitter and Facebook, the law which practically killed the university autonomy has been abolished. The Parliament voted a three year moratorium two weeks ago, and today the negotiations between the ministries and the plenums kicked off based on a concept proposed by the Plenums.
Thank you again for your signatures of support. They helped incredibly!
I would like to congratulate all the organizers, especially those on the ground who have worked for months to prevent this law from taking effect, but also those who have been working internationally to support them. And I would like to echo Professor Kolozova's thanks to all who signed on here and elsewhere in support of the campaign.
I'm teaching Wittgenstein this semester--for the first time ever--to my Twentieth-Century Philosophy class. My syllabus requires my students to read two long excerpts from the Tractatus Logico-Philosophicus and Philosophical Investigations; bizarrely enough, in my original version of that exalted contract with my students, I had allotted one class meeting to a discussion of the section from the Tractatus. Three classes later, we are still not done; as you can tell, it has been an interesting challenge thus far.
Cultural moral relativism is the view that what is morally right and wrong varies between cultures. According to normative cultural moral relativism, what varies between cultures is what really is morally right and wrong (e.g., in some cultures, slavery is genuinely permissible, in other cultures it isn't). According to descriptive cultural moral relativism, what varies is what people in different cultures think is right and wrong (e.g., in some cultures people think slavery is fine, in others they don't; but the position is neutral on whether slavery really is fine in the cultures that think it is). A strong version of descriptive cultural moral relativism holds that cultures vary radically in what they regard as morally right and wrong.
A case can be made for strong descriptive cultural moral relativism. Some cultures appear to regard aggressive warfare and genocide as among the highest moral accomplishments (consider the book of Joshua in the Old Testament); others (ours) think aggressive warfare and genocide are possibly the greatest moral wrongs of all. Some cultures celebrate slavery and revenge killing; others reject those things. Some cultures think blasphemy is punishable by death; others take a more liberal attitude. Cultures vary enormously on women's rights and obligations.
However, I reject this view. My experience with ancient Chinese philosophy is the central reason.
In an earlier post, I took some initial steps toward reading Foucault’s last two lecture courses, The Government of Self and Others (GS) and The Courage of Truth (CT), in which he studies the ancient Greek concept of parrhesia. As I noted last time, one of the things Foucault finds is a concern on the part of the Greeks that philosophy achieve effects in the world, and not remain at the level of “mere logos.”
Here, I want to say more (warning: lots more. Long post coming!) about that framework as it plays out in Foucault’s discussion of Plato in GS. In particular, I want to look at his reading of Plato’s Seventh Letter. I have to confess that I hadn’t read the Letter until this week, despite having read quite a bit of ancient Greek philosophy. I suspect that I’m not alone. This is in part because its authorship has been contested, but also no doubt because the text is completely at odds with most of the rest of Plato’s corpus. On the surface of things, the Letter is a sort of apologia: Plato is explaining his own conduct in relation to Dion and Dionysius of Syracuse, where he consents to offer advice – parrhesia – and becomes embroiled in the feuding between Dion and Dionysius by trying to mediate on Dion’s behalf. Why did he respond to the call? Because:
This is a moderated thread. So there can be no question that Leiter at least had to deliberately press ‘publish’ on this comment. It is less clear, as his own comment further down indicates, that he had fully thought through the implications of doing so.
Brian Leiter said...
Yes, I suppose I should not have approved #2, but I've been approving almost everything. On the other hand, Johnson is a very public and rather noxious presence in philosophy cyberspace, so I'm not surprised there is interest.
I’m sure we’re all glad to know that Brian has some standards (he didn’t approve everything, after all). Still, what he did approve seems to merit some comment.
The speculation about the reasons for Leigh’s ability to secure a second job in professional philosophy is untoward, given that a) she is non-tenured, b) she has not in any way been credibly accused or even suspected of professional misconduct, and c) the characterization of her current position is inaccurate. Publishing this comment and thereby generating a public sense that Leigh does not deserve her current employment is at very least an obvious instance of bullying on Brian’s part (and fits his by now well established pattern of directing this sort of attention toward junior, precariously employed members of the profession).
In what has to be one of the great whoppers of his entire blogging career, Brian goes on to justify leaving such a comment up by validating a more general interest in the question of why someone who is, in his view, “a very public and rather noxious presence in philosophy cyberspace” should have a job.
To say that the implicit standard in 2) risks implicating Brian himself is rather obvious. More interestingly, it seems to be perhaps as candid an admission as we are likely to get from Brian that he sees nothing wrong with harassing people he doesn’t like if he can possibly pull it off. And so we find him abusing the pretext of discussing ‘issues in the profession’ to pursue his own petty little vendetta.
Older data in sociology suggest that the prestige of the PhD-granting department is one of the main factors in hiring decisions (the other is the selectivity of the undergraduate institution). The authors conclude (rather dryly) that "job placement in sociology values academic origins over performance."
Some six years ago, shortly after I had been appointed to its faculty, the philosophy department at the CUNY Graduate Center began revising its long-standing curriculum; part of its expressed motivation for doing so was to bring its curriculum into line with those of "leading" and "top-ranked" programs. As part of this process, it invited feedback from its faculty members. As a graduate of the Graduate Center's Ph.D. program, I thought I was well placed to offer some hopefully useful feedback on its curriculum, and so I wrote to the faculty mailing list, doing just that. Some of the issues raised in my email are, I think, still relevant to academic philosophy. Not everybody agreed with its contents; some of my cohort didn't, but in any case, perhaps this might provoke some discussion.
As you know, I was the gentleman that made that remark in a private facebook thread with a close friend. If I recall correctly, people in that thread were asking about whether certain kinds of thought experiments were typically referred to as “Gettier Cases”. I said that they were, despite how inaccurate or uninformative it might be to do so, in part because of the alternative traditions you cite. I’m sorry you interpreted my remark as silencing my friends on facebook. Personally I believe that philosophers should abandon the notion of “Gettier cases” and that the practice of labeling thought experiments in this way should be discouraged. If you are interested, I have recently argued for this in two articles here (http://philpapers.org/rec/BLOGCA) and here (http://philpapers.org/rec/TURKAL).
A few months ago, I noticed an interesting and telling interaction between a group of academic philosophers. A Facebook friend posted a little note about how one of her students had written to her about having encountered a so-called "Gettier case", i.e., the student had acquired a true belief for invalid reasons. In the email, the student described being told the 'right time' by a broken clock. The brief discussion that broke out in response to my friend's note featured a comment from someone noting that the broken clock example is originally due to Bertrand Russell. A little later, a participant in the discussion offered the following comment:
Even though the clock case is due to Russell, it's worth noting that "Gettier" cases were present in Nyāya philosophy in India well before Russell, for instance in the work of Gaṅgeśa, circa 1325 CE. The example is of someone inferring that there is fire on a faraway mountain based on the presence of smoke (a standard case of inference in Indian philosophy), but the smoke is actually dust. As it turns out, though, there is a fire on the mountain. See the Tattva-cintā-maṇi or "Jewel of Reflection on the Truth of Epistemology." [links added]
We’ve all heard that regulations are bad, because they interfere with businesses doing what they want (rules about dumping toxic chemicals get in the way of dumping toxic chemicals. Laws against murder hamper the business model of assassins. And so on.). New North Carolina Senator Thom Tillis made the media rounds this week for some odd remarks he made on the topic. When asked to name a regulation he thought was bad, he came up with… the rule that restaurant employees wash their hands after visiting the toilet. He then proposed that it would be better to have restaurants state whether or not employees have to wash their hands, and then “let the market” take care of it.
There are two obvious problems here, both of which have been pointed out a lot. One is that there’s a public health issue. The other is that he hasn’t actually reduced regulation: he’s just replaced a public health rule with a rule about signage. I actually think the second point is interesting, well beyond the “gotcha!” treatment it got, because it perfectly illustrates something about neoliberalism: it doesn’t think regulations that create markets are regulations (or, if you prefer, regulating to create markets is good, other regulations are bad. This is the same mindset that concludes that the hyper-regulated Chicago futures markets are unregulated). The cleanliness of restaurant operations is not something consumers can know much about on their own, since they don’t do things like follow employees to the restroom. In this sense restaurant sanitation is a credence good (you have to believe the restaurant; you can’t inspect the product before you buy it). Since dirty food preparation can make people very sick, rational consumers should be willing to pay more for the knowledge that their food is safely prepared. But since they won’t be in any position to know about food safety, except (maybe) for places they’ve eaten before, we can expect market failure until some mechanism arrives to help consumers make their decisions.
I cringe, I wince, when I hear someone refer to me as a 'philosopher.' I never use that description for myself. Instead, I prefer locutions like, "I teach philosophy at the City University of New York", or "I am a professor of philosophy." This is especially the case if someone asks me, "Are you a philosopher?". In that case, my reply begins, "Well, I am a professor of philosophy...". Once, one of my undergraduate students asked me, "Professor, what made you become a philosopher?" And I replied, "Well, I don't know if I would go so far as to call myself a philosopher, though I did get a Ph.D. in it, and...". You get the picture.
Yesterday, in my Twentieth Century Philosophy class, we worked our way through Bertrand Russell's essay on "Appearance and Reality" (excerpted, along with "The Value of Philosophy" and "Knowledge by Acquaintance and Knowledge by Description" from Russell's 'popular' work The Problems of Philosophy.) I introduced the class to Russell's notion of physical objects being inferences from sense-data, and then went on to his discussions of idealism, materialism, and realism as metaphysical responses to the epistemological problems created by such an understanding of objects. This discussion led to the epistemological stances--rationalism and empiricism--that these metaphysical positions might generate. (There was also a digression into the distinction between necessary and contingent truths.)
At one point, shortly after I had made a statement to the effect that science could be seen as informed by materialist, realist, and empiricist conceptions of its metaphysical and epistemological presuppositions, I blurted out, "Really, scientists who think philosophy is useless and irrelevant to their work are stupid and ungrateful." This was an embarrassingly intemperate remark to have made in a classroom, and sure enough, it provoked some amused twittering from my students, waking up many who were only paying partial attention at that time to my ramblings.
Sometimes, when I talk to friends, I hear them say things that to my ears sound like diminishments of themselves: "I don't have the--intellectual or emotional or moral--quality X" or "I am not as good as Y when it comes to X." They sound resigned to this self-description, this self-understanding. I think I see things differently; I think I see ample evidence of the very quality they seem to find lacking in themselves. Sometimes, I act on this differing assessment of mine, and rush to inform them they are mistaken. They are my friends; their lowered opinion of themselves must hurt them, in their relationships with others, in their ability to do the best they can for themselves. I should 'help.' It seems like the right thing to do. (This goes the other way too; sometimes my friends offer me instant correctives to putatively disparaging remarks I make about myself.)
Foucault’s last lecture courses at the Collège de France – recently published as The Government of Self and Others [GS] and The Courage of Truth [CT] – are interesting for a number of reasons. One is of course that they offer one of the best glimpses we have of where his thought was going at the very end of his life; he died only months after delivering the last seminar in CT, and there is every reason to believe that he both knew that he was dying, and why. There’s a lot to think about in them, at least some of which I hope to talk about here over a periodic series of posts. Here I want to say something introductory about the material, and look at Foucault’s critique of Derrida in it.
The lectures contain a sustained investigation of parrhesia, the ancient Greek ethical practice of truth-telling. “Speaking truth to power” is the closest modern term we have for such a practice, though you don’t have to get very far into the lectures to realize how richly nuanced the topic is, and how many different ways it manifests itself in (largely pre-Socratic) Greek thought and literature. The lectures also contain a number of references to contemporary events and people (from the beginning: GS starts with Kant, before going back to the Greeks), and it’s hard to put CT down without a sense that, had there been another year of lectures, Foucault would have been more explicit in assessing the implications of the study of Greek parrhesia today.
In recent weeks, there has been much discussion on journal editorial practices at a number of philosophy blogs. Daily Nous ran an interesting post where different journal editors described (with varying degrees of detail) their editorial practices; many agree that the triple-anonymous system has a number of advantages and, when possible, should be adopted.* (And please, let us just stop calling it ‘triple-blind’ or ‘double-blind’, given that there is a perfectly suitable alternative!) Jonathan Ichikawa, however, pointed out (based on his experience with Phil Studies) that we must not take it for granted that a journal’s stated editorial policies are always de facto implemented. Jonathan (correctly, to my mind) defends the view that it is not desirable for a journal editor to act as a (let alone the sole) referee for a submission.
With this post, I want to bring up for discussion what I think is one of the main issues with the peer-reviewing system (I’ve expressed other reservations before: here, here, and here), namely the extreme difficulties journal editors encounter in finding competent referees willing to take up new assignments. Until two years ago, my experience with the peer-review system was restricted to the role of author (and I, like everybody else, got very frustrated with the months and months it often took journals to handle my submissions) and the role of referee (and I, like so many others, got very frustrated with the constant stream of referee requests reaching my inbox). Two years ago I became one of the editors of the Review of Symbolic Logic, and thus acquired a third perspective, that of the journal editor. I can confirm that it is one of the most thankless jobs I’ve ever had.