Posted by Gordon Hull on 29 September 2023 at 11:59 in Gordon Hull, Intellectual property and its discontents | Permalink | Comments (0)
This article from Gizmodo reports on research done over at Mozilla. Newer cars – the ones that connect to the internet and have lots of cameras – are privacy disasters. Here’s a paragraph to give you a sense of the epic scope of the disaster:
“The worst offender was Nissan, Mozilla said. The carmaker’s privacy policy suggests the manufacturer collects information including sexual activity, health diagnosis data, and genetic data, though there’s no details about how exactly that data is gathered. Nissan reserves the right to share and sell “preferences, characteristics, psychological trends, predispositions, behavior, attitudes, intelligence, abilities, and aptitudes” to data brokers, law enforcement, and other third parties.”
Nissan’s response tells you everything that’s wrong with current privacy legislation:
““When we do collect or share personal data, we comply with all applicable laws and provide the utmost transparency,” said Lloryn Love-Carter, a Nissan spokesperson. “Nissan’s Privacy Policy incorporates a broad definition of Personal Information and Sensitive Personal Information, as expressly listed in the growing patchwork of evolving state privacy laws, and is inclusive of types of data it may receive through incidental means.””
Let’s translate. Nissan is probably compliant. Also, privacy compliance is a joke. Also, compliance apparently only requires that you receive NOTICE that they take your data AND CONSENT to that policy, probably merely by driving the vehicle. Also, they probably reserve the right to change their privacy policies unilaterally, at will. Also, they almost certainly do not let you opt-out of any of it while CONSENTING by driving the car. It’s a very special kind of “contract” and “consent.” Also, how do they know about your sex life? Also, even if you have sex in the car, there is basically no answer to that question that is not beyond creepy!
As you may have guessed, NOTICE AND CONSENT is an utter sham and has been for a while. The Gizmodo article spells out some of the particular absurdities here – for example, you may not want to ride in one of these cars either, as passengers are “users” deemed to have CONSENTED to the privacy policy. Your driver should probably provide you NOTICE beforehand! “A number of car brands say it’s the driver’s responsibility to let passengers know about their car’s privacy policies—as if the privacy policies are comprehensible to drivers in the first place.” No wonder folks are cynical and resigned about corporate privacy – they’re manipulated into it by corporations. Also, they’re confused, frustrated and angry about the fact that they don’t actually get to consent.
This is the best example I’ve seen of all that in a while, and a crystal-clear indicator of why we need not just new privacy legislation (we do!) but a new direction (more real regulation, less soft compliance and "notice and consent" fig-leaves).
PS – not picking only on Nissan:
“Other brands didn’t fare much better. Volkswagen, for example, collects your driving behaviors such as your seatbelt and braking habits and pairs that with details such as age and gender for targeted advertising. Kia’s privacy policy reserves the right to monitor your “sex life,” and Mercedes-Benz ships cars with TikTok pre-installed on the infotainment system, an app that has its own thicket of privacy problems.”
Posted by Gordon Hull on 14 September 2023 at 07:00 in Big Data, Gordon Hull, Privacy | Permalink | Comments (0)
By Gordon Hull
I’ve been developing (first, second, third, fourth) some reflections on what Foucault means by a reference to “Chardino-Marxism,” a disturbing trend that he credits Althusser with “courageously fighting.” The real opposition point seems to be Roger Garaudy, a PCF intellectual who was a leader in the effort to establish a post-Stalinist humanist Marxism, and who had a real sympathy for religion. Last time, I traced some of Garaudy’s sources on religion to Engels. Some of what Garaudy says also sounds like it’s coming straight from the Russian Marxist Anatoly Lunacharsky’s Religion and Socialism (1908). The claim here is categorically not that Garaudy read Lunacharsky – as will become evident in a minute, I think that’s highly unlikely. What I do want to underscore is that there is a coherent line of thought behind Garaudy’s religious impulse. As I’ll note when I get back to Garaudy and Althusser, there is a very specific political context to Althusser’s attacks on Garaudy, having to do with the latter’s role in the PCF and his effort to use a humanist Marxism as a (from Althusser’s point of view, failed) alternative to Stalinism.
I know very little about Lunacharsky (Wikipedia here), but apparently he was tolerated by Lenin (despite being heavily criticized), and fell out of favor under Stalin. He died before he could be repressed, but in 1936-8, his memoirs were banned and he was erased from the official histories of communism. He enjoyed something of a revival after Stalin’s death. Religion and Socialism is very obscure now: Google Books reports a Yiddish translation (!) as well as a Spanish one from the 1970s. It’s not been translated into English or French. Marxists.org refers only to his later works in English and in French, and he doesn’t even show up on the German part of the site.
Most of the work available on Lunacharsky now seems to be attributable to patient work by Roland Boer (upon whom I am completely dependent here). Religion and Socialism fell out of favor due to Lenin’s denunciation after it was published, was left out of Lunacharsky’s collected works, and was reduced to a few copies. Here is Boer in his paper on the text:
Continue reading "On Foucault on “Chardino-Marxism,” part 5: Lunacharsky" »
Posted by Gordon Hull on 07 September 2023 at 07:00 in Foucault, Gordon Hull | Permalink | Comments (0)
By Gordon Hull
This one has been percolating a while… Steven Thaler’s AI created a picture (below the fold), and Thaler has been using it to push for the copyrightability of AI-generated material. That endeavor has been getting nowhere, and a DC District Court just ruled on the question of “whether a work generated autonomously by a computer falls under the protection of copyright law upon its creation,” in the same way as a work generated by a person. Copyright attaches to human work very generously – this blog post is copyrighted automatically when I write it, and so are doodles you make on napkins. You get lots of extra protections and litigation benefits if you register, but registration is not a requirement for copyright in itself. Per 17 U.S.C. Sec. 102, copyright subsists in “original works of authorship fixed in any tangible medium of expression, now known or later developed, from which they can be perceived, reproduced, or otherwise communicated, either directly or with the aid of a machine or device.” Given this, it’s not hard to see why someone would want to know whether AI could be an “author” in the relevant sense.
The Court ruled that “United States copyright law protects only works of human creation.” This is not a surprise. The central argument is that “Copyright is designed to adapt with the times. Underlying that adaptability, however, has been a consistent understanding that human creativity is the sine qua non at the core of copyrightability, even as that human creativity is channeled through new tools or into new media.” Indeed, “human authorship is a bedrock requirement of copyright.” The Court both cites historical precedent and grounds the requirement in the purpose of copyright, which is constitutionally to incentivize the creation of new works:
Continue reading "AI Is not a (Copyright) Author (at least not today)" »
Posted by Gordon Hull on 24 August 2023 at 07:00 in Gordon Hull, Intellectual property and its discontents | Permalink | Comments (0)
By Gordon Hull
The last couple of times (here then here), I’ve started trying to work through a disparaging reference in the mid-1960s Foucault to “Chardino-Marxism.” Foucault is associating it with Marxist humanism, and comparing it unfavorably to the Althusserian alternative. As I noted, the name Foucault uses is Teilhard de Chardin, but the consistent target of the Foucault-aligned theorists appears to be Roger Garaudy.
So why, exactly, might Teilhard appeal to Marxism? More precisely, in what sense would Teilhard appeal to Garaudy? In a 1969 paper, Ladis KD Kristof offers some context (for Kristof’s remarkable life, see the memorial notice here). The “Phenomenon Teilhard” was widely discussed within the Soviet bloc countries, and within the USSR as early as 1962; a Russian translation of Teilhard’s Phenomenon of Man appeared in 1965. Kristof suggests that the initial Marxist attraction to Teilhard lies simply in that he has a world view – something they can respect, as opposed to (for example) American positivism or empiricism. More specifically, Teilhard: (a) has a scientific worldview, in that he has a Baconian belief that science can solve all problems; (b) has an evolutionary worldview, arguably even more so than Marx. On Kristof’s account, the difference is first in scope: Teilhard’s evolution is cosmic and Marx’s human.
This leads to a second fundamental difference: following Engels in Anti-Dühring, Marxists think that when man [I am following 1960s usage here – this is the generic “man”] starts taking control of nature (= making history), that is the final qualitative change, and that future changes are quantitative. Teilhard, on the other hand, thinks that the end of the process of what he calls “hominization” will involve a qualitative leap. However, both camps are fundamentally anthropocentric in that “man” is the focus throughout. Finally, (c) Marxism involves a movement of faith: if one is struggling for the revolution, this requires a prior faith that one can effect progress and so forth; in this, there is a convergence with Teilhard’s optimism. Something of the sense of all this is conveyed in the following (long) passage from Teilhard’s Future of Mankind (I’m getting it from Kristof, who quotes part of it):
Continue reading "On Foucault on “Chardino-Marxism,” Part 3" »
Posted by Gordon Hull on 10 August 2023 at 07:00 in Foucault, Gordon Hull | Permalink | Comments (0)
By Gordon Hull
Last time, I noted that mid-late 1960s Foucault aligned himself in favor of Althusser’s work on Marx, and against what he called “Chardino-Marxism,” which turns out to be a shorthand for humanist Marxism, in particular any efforts to synthesize Marx and Teilhard de Chardin, as well as (or rather, as exemplified by) the work of PCF intellectual Roger Garaudy. Foucault’s opposition to “humanism” is well-known, but his differentiation of Marxism into less-desirable humanist varieties and more-desirable Althusserian ones is less so, and so I want to pursue the Chardino-Marxism critique further, because it helps us understand the context in which the humanist critique appears, as well as Foucault’s subsequent efforts to position himself relative to Marxism in the 1970s (obligatory self-promotion: my foray into that is here).
In the 1966 interview, “L’homme est-il mort? [Is man dead?]”, Foucault gives as clear a position statement as I’ve seen on all of this. The interview is roughly contemporaneous with Order of Things, and certainly the more detailed exposition of humanism and Marxism’s place in it there needs to be taken into account in any full discussion. The interviewer had asked if Foucault differentiated among different kinds of humanism, naming Sartre. Foucault responds that “if you set aside the facile humanism that Teilhard and Camus represent, the problem of Sartre appears completely different.” Foucault then stops talking about Sartre and offers a general characterization: “humanism, anthropology and dialectical thought are related. What ignores man, is contemporary analytic reason which we saw born with Russell, [and] which appears in Levi-Strauss and the linguists.” On the other hand, dialectics, Foucault says, promotes the idea that the human being “will become an authentic and true man.” That is, it “promotes man to man and, to this extent, it is indissociable from humanist morality. In this sense, the great officials of contemporary humanism are evidently Hegel and Marx” (D&E I, 569). So we are back to the Lindung interview, where Foucault accuses Garaudy of having indiscriminately “picked up everything from Hegel to Teilhard de Chardin” (discussed last time), though with perhaps an emerging sense of what that lineage looks like.
Continue reading "What is Chardino-Marxism and Why does Foucault Care? (part 2)" »
Posted by Gordon Hull on 03 August 2023 at 07:00 in Foucault, Gordon Hull | Permalink | Comments (0)
By Gordon Hull
In a 1966 interview with Madeleine Chapsal, Foucault proposes that “our task currently is to definitively liberate ourselves from humanism” and offers the following example:
“Our task is to free ourselves definitively from humanism, and it is in this sense that our work is political work, insofar as all the regimes of the East or West pass out their bad goods under the flag of humanism. We have to denounce all these mystifications, like today, inside the Communist Party, where Althusser and his courageous companions are struggling against “Chardino-Marxism.”” (33)
Other than the fusion of Teilhard de Chardin and Marx, what is “Chardino-Marxism” and why does Foucault care?
Teilhard was a Jesuit priest and scientist who tried to reconcile his work in paleontology with Christianity. I’m dependent on Wikipedia for this, so I’ll just let you read the description there of his main work, the Phenomenon of Man:
“His posthumously published book, The Phenomenon of Man, set forth a sweeping account of the unfolding of the cosmos and the evolution of matter to humanity, to ultimately a reunion with Christ. In the book, Teilhard abandoned literal interpretations of creation in the Book of Genesis in favor of allegorical and theological interpretations. The unfolding of the material cosmos is described from primordial particles to the development of life, human beings and the noosphere, and finally to his vision of the Omega Point in the future, which is "pulling" all creation towards it. He was a leading proponent of orthogenesis, the idea that evolution occurs in a directional, goal-driven way. Teilhard argued in Darwinian terms with respect to biology, and supported the synthetic model of evolution, but argued in Lamarckian terms for the development of culture, primarily through the vehicle of education.”
Teilhard is also known for developing the concept of “noosphere” to refer to the space of human reason and a transcendence of biology; this concept had some uptake among early Internet theorists. Apparently Teilhard was very trendy in the 1950s and 1960s. Foucault mentions him occasionally, and always to associate him with humanism. Later in the interview quoted above, and in response to a question about whether the direction he was taking philosophy didn’t appear “cold and rather abstract,” Foucault exclaims: “it is humanism that is abstract!” and adds:
“What makes me angry about humanism is that it is now this screen behind which the most reactionary thought takes refuge, where monstrous and unthinkable alliances are formed: they want to combine Sartre and Teilhard, for example. In the name of what? Of man! Who would dare speak ill of man? And yet, the effort currently being made by people of our generation is not done in order to claim man [a]s against knowledge and against technology, but is precisely to show that our thought, our life, our manner of being, even our most everyday manner of being, are part of the same systematic organisation and therefore depend on the same categories as the scientific and technical world.” (34-5, translation slightly revised. Emphases original).
In a 1968 interview with Yngve Lindung that appeared in Stockholm (D&E #54; I can’t find a translation), Foucault elaborates more on the topic. Asked if structuralism was opposed to Marxism, he acknowledges that “it is true that there are certain Marxists who are declared anti-structuralists,” but adds that “at the same time, we need to say that there are a large number of Marxists, among the youngest and let’s say the most dynamic, who on the contrary feel very close to structuralist research.” He explains:
“In general, one can say that we have to deal today with a soft, bland, humanist Marxism which tries to pick up everything that traditional philosophy has been able to say from Hegel up to Teilhard de Chardin [qui essaie de ramasser tout ce que la philosophie traditionnelle a pu dire depuis Hegel jusqu’à Teilhard de Chardin]. This Marxism is anti-structuralist insofar as it is opposed to structuralism’s having put in question the old values of bourgeois liberalism. Then we have an opposing group of Marxists that we could call anti-revisionists and for whom the future of Marxist thought and of the communist movement indeed requires that one reject all this eclecticism, all this interior revisionism, all this peaceful coexistence concerning the plan of ideas, and these Marxists are instead structuralists” (D&E I, 682-3 (2 vol version)).
Later in the interview he names the PCF intellectual Roger Garaudy in this regard. Now, Garaudy claims to be inspired by structuralism. But he also claims to be a humanist. Foucault responds that “I don’t believe that one could reasonably pretend that Garaudy is a Marxist.” He then adds that “it doesn’t in any way surprise me that Garaudy wants to gather [recueillir] what he is able to call a concrete structuralism and humanism. He has picked up everything from Hegel to Teilhard de Chardin. He will pick me up too [Il a tout ramassé depuis Hegel jusqu’à Teilhard de Chardin. Il me ramassera aussi]” (684).
The objection is to the eclecticism, and one can imagine Garaudy as having produced what one might call a scavenger Marxism. There is definitely this sense to Garaudy. I’ll say more in a later post, but in Marxism in the Twentieth Century (1966), for example, he writes that “structuralism can, like cybernetics [!], be one of the ways of comprehending the world and of conceiving man and his action, which corresponds the best to the spirit of our time, to the development of a new humanism: this will be precisely the humanism of which Marx was the pioneer, integrating all that was won by Graeco-Roman humanism and Judeo-Christian humanism, and going beyond both in a new synthesis of nature and man, of the external world and subjectivity, of necessary law and liberty” (75).
That’s a lot! Foucault wants nothing of it. Later in the Lindung interview he says the following, which is worth quoting at length:
“The situation of the French left is still dominated by the presence of the communist party. The current problematic at its interior is essentially the following: should the party, theoretically and politically, make itself the agent of peaceful coexistence, which politically drives [entraîne] a sort of neutralization of the conflict with the U.S., and which comports, from an ideological point of view, with an attempt at ecumenicism thanks to which all the important ideological currents in Europe and in the world are found [retrouveraient] more or less reconciled? It is clear that people like Sartre and Garaudy work toward this peaceful coexistence between diverse intellectual currents, and they precisely say: but we don’t have to abandon humanism, but we don’t have to abandon Teilhard de Chardin, but existentialism is also a little bit right, but structuralism too, if only it wasn’t doctrinaire, but concrete and open to the world. Opposed to this current, which puts coexistence at the first rank, you have a current that the ‘right wing people’ call doctrinaire, neostalinist and Chinese. This tendency inside the PCF is an attempt to reestablish a Marxist theory of politics, of science and of philosophy which is a consequential theory, ideologically acceptable, in accord with the doctrine of Marx. It is this attempt which is at this moment effected [opérée] by the intellectuals of the left wing of the party, and they are more or less regrouped around Althusser. This structuralist wing is the left. You understand now what this maneuver of Sartre and Garaudy consists in, knowing how to pretend that structuralism is a typical ideology of the right. It lets them designate as accomplices of the right those who are in reality to their left. It lets them consequently present themselves as the only true representatives of the French left and communists. But this is only a maneuver” (686).
In this sense, Teilhard seems to be less of interest to Foucault than what Garaudy in particular is doing with Teilhard within the PCF. Teilhard is a stalking horse for Marxist humanism, which (like other humanisms) is to be combatted.
Next time, I’ll pick up from here…
Posted by Gordon Hull on 27 July 2023 at 07:00 in Foucault, Gordon Hull | Permalink | Comments (0)
By Gordon Hull
Last time, I followed up on a reference in Bernard Dionysius Geoghegan’s Code to Foucault’s short text “Message ou bruit” (1966). Here I want to trace out some of the political implications of that text, or at least to suggest a path from it to some of his later work in the 1970s and current forms of political resistance.
One of Foucault’s emergent interests in philosophy of language is in pragmatics and speech act theory. I don’t know enough about his reading in the relevant time period to know if this exactly tracks his growing interest in politics, but by a 1978 lecture in Japan (which I discussed here) he is able to say that:
“Perhaps one could see that there is still a certain possibility for philosophy to play a role in relation to power, which would be a role neither of foundation nor of renewal of power. Perhaps philosophy can still play a role on the side of counter-power, on the condition that this role does not consist in exercising, in the face of power, the very law of philosophy, on the condition that philosophy stops thinking of itself as prophesy, on the condition that philosophy stops thinking of itself either as pedagogy, or as legislation, and that it gives itself the task to analyse, clarify, and make visible, and thus intensify the struggles that develop around power, the strategies of the antagonists within relations of power, the tactics employed, the foyers of resistance, on the condition in sum that philosophy stops posing the question of power in terms of good and bad, but rather poses it in terms of existence. The question is not: is power good or bad, legitimate or illegitimate, a question of right or morality? Rather, one should simply try to relieve the question of power of all the moral and juridical overloads that one has placed on it, and ask the following naïve question, which has not been posed so often, even if a certain number of people have actually posed it for a long while: what do power relations fundamentally consist in?” (192)
Foucault’s emphases here – from the emphasis on local struggles to the rejection of prophetic thinking – should be familiar. He then immediately suggests that “We have known for a long time that the role of philosophy is not to discover what is concealed, but rather to make visible what precisely is visible, which is to say to make appear what is so close, so immediate, so intimately connected with ourselves that we cannot perceive it.” This is recognizable as a reference to Wittgenstein – specifically, the remark in Philosophical Investigations 127 that “the work of the philosopher consists in assembling reminders for a particular purpose.”
Posted by Gordon Hull on 20 July 2023 at 07:00 in Biopolitics, Foucault, Gordon Hull | Permalink | Comments (0)
By Gordon Hull
Last time, I offered a quick synopsis of Bernard Dionysius Geoghegan’s excellent new book Code. Here, I’d like to track one specific Foucault reference in it. Geoghegan takes Lévi-Strauss’s Savage Mind as a central text in the ambivalence French theorists came to feel about American communication theories, and he notes that the book “occasioned a broader reassessment of the human sciences marked by a new ascent of ‘coding’ as a key concept poised to dislocate and perhaps dissolve, existing scientific hierarchies” (152). He adds:
“Learning to code – that is, to cast cultural objects in terms of codes, relays, patterns, and systems – did more than reframe existing knowledge in cybernetic jargon. It also reflected a growing cynicism toward existing cultural and scientific nodes. From the 1960s onward, the semiotic task of deciphering obscure ‘codes’ in culture, politics, and science overtook the structuralist project. This crypto-structuralism shifted emphasis from the neutral connotations of ‘communication’ to antagonistic notions of code …. If these terms furthered the technocratic project of US foundations, they also set in motion a radical critique of scientific neutrality. Beneath the neutral science, something ‘savage’ lurked.” (152-3).
Geoghegan cites Lacan, Barthes and the Tel Quel group (on which see Danielle Marx-Scouras’s excellent study). He also quietly footnotes Foucault’s “Message ou bruit [message or noise]” “for a critical discussion of these same terms by Foucault” (215n81).
Continue reading "Signal or Noise? Foucault and Communication Theory (Part 1)" »
Posted by Gordon Hull on 13 July 2023 at 07:00 in Foucault, Gordon Hull | Permalink | Comments (0)
By Gordon Hull
I made myself wait until I was settled into the summer to read Bernard Dionysius Geoghegan’s Code: From Information Theory to French Theory. It was absolutely worth the wait. Code offers a look into the role of cybernetic theory in the development of postwar French theory, especially structuralism and what Geoghegan calls “crypto-structuralism.” The story starts in the Progressive Era U.S., with the emergence of technocratic forms of government and expertise “against perceived threats of anarchy and communism” and the “progressive hopes to submit divisive political issues for neutral technical analysis” (25). This governance-as-depoliticization then generates the postwar emphasis on cybernetics and information theory. Along the way, it picks up and reorganizes psychology and anthropology in figures like Margaret Mead and Gregory Bateson, as the emerging information theory disciplines are given extensive funding by “Robber Baron philanthropies” (and later, covertly of course, by the CIA). This then sets the stage for postwar cybernetic theory and the careful cultivation (again, substantially by philanthropies and the CIA) of intellectuals like Roman Jakobson and Lévi-Strauss.
This is not a story I’d heard before – and I get the impression that almost no one has, at least not in philosophy, which is why this book is so important – and the details are fascinating. It makes a compelling case for the need for those of us who work on the post-war French to get a handle on cybernetic theory in particular, especially because of the link to structuralism (more on that in a moment). It calls to mind some of Katherine Hayles’ work – I’m thinking of How We Became Posthuman and My Mother Was a Computer – that probably needs rereading in this context.
Continue reading "Reading List: Bernard Dionysius Geoghegan, Code" »
Posted by Gordon Hull on 06 July 2023 at 13:14 in Deleuze (and Guattari, sometimes), Foucault, French and Francophone, Gordon Hull, History of philosophy | Permalink | Comments (0)
In the face of the general disaster of the Supreme Court’s Republican majority and its ongoing power grab in the student loan case, I worry that the damage of the LGBTQ Wedding Website decision, 303 Creative LLC v. Elenis, will get overlooked. It seems to me, based mainly on a reading of Justice Sotomayor’s dissent, that the real forerunner of 303 Creative is a case mentioned nowhere in the decision or dissent: Burwell v. Hobby Lobby (2014). Recall that in Burwell, the Court ruled that the Hobby Lobby Corporation could not be compelled by the Affordable Care Act to provide contraceptive coverage as part of its employees’ healthcare coverage, on account of the corporation’s religious beliefs. At the time, I noted that Hobby Lobby seemed very happy to avail itself of things like police and fire protection. I don’t usually quote myself in blog posts, but here’s what I said at the time:
“Hobby Lobby is a large, big-box retail chain that employs over 13,000 people. If those people (or others like them) didn’t exist or refused to work for Hobby Lobby, the corporation would go out of business immediately and the owners would have to find something else to do. Hobby Lobby, Inc. takes advantage of the publicly-provided roads that its employees, managers, and customers take to get to its stores and that its owners use to get to their corporate offices. Those offices were erected with the protection of enforceable building codes that make sure they don’t fall down, and that try to make sure that everyone can evacuate them in the event of a fire. Hobby Lobby, Inc. also takes advantage of municipally provided services, including the installation of stormwater systems that deal with the massive runoff caused by big-box stores’ parking lots. Hobby Lobby, Inc. also takes advantage of local police and fire services that protect their investment in their stores. All of these things are provided substantially by property taxes paid by everyone living in the municipalities where the owners exercise their freedom to open a store. Hobby Lobby, Inc. also freely avails itself of services provided by state and federal taxes, such as the Interstate highways on which it can transport its goods (highways which have to be widened at great public expense when suburbanization creates new local markets for its stores). Hobby Lobby, Inc. also has no moral objections to taking advantage of the national defense system that keeps its stores safe from foreign intervention, or the publicly funded legal system that allowed them to challenge the ACA and that enables them to recover money from those who owe them. No, in general, it seems that Hobby Lobby, Inc. depends quite a lot on the society in which it does business, even as its owners seek to excuse themselves from its rules. In the meantime, Hobby Lobby’s owners also take advantage of the legal structure governing corporations (Hobby Lobby, Inc. isn’t a sole proprietorship!), such as the fact that they aren’t personally liable for any bad things that their corporation might do. In other words, Hobby Lobby’s owners get to identify with the corporation when it’s a matter of religious belief, but not when doing so is inconvenient.”
It was this line of thought that I most remembered when reading Justice Sotomayor’s dissent in 303 Creative. She notes that:
Continue reading "The Supreme Court’s Disappearing Public" »
Posted by Gordon Hull on 30 June 2023 at 14:53 in Gordon Hull | Permalink | Comments (0)
Large Language Models (LLMs) like ChatGPT are well-known to hallucinate – to make up answers that sound pretty plausible, but have no relation to reality. That of course is because they’re designed to produce text that sounds about right given a prompt. What sounds kind of right may or may not be right, however. ChatGPT-3 made up a hilariously bad answer to a Kierkegaard prompt I gave it and put a bunch of words into Sartre’s mouth. It also fabricated a medical journal article to support a fabricated risk of oral contraception. ChatGPT-4 kept right on making up cites for me. It has also defamed an Australian mayor and an American law professor. Let’s call this a known problem. You might even suggest, following Harry Frankfurt, that it’s not so much hallucinating as it is bullshitting.
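One practical upshot (my own suggestion, not something from the reporting above): treat any citation an LLM hands you as unverified until it resolves in a bibliographic database. Here's a minimal sketch using Crossref's public works API – Crossref is just my choice of lookup service, and absence from it is only a weak signal, since books and many humanities venues are poorly covered:

```python
# A minimal sketch of one defensive habit against hallucinated citations:
# look the cited title up in Crossref before trusting it. The suspect
# citation below is a placeholder, and the check is heuristic -- a miss
# means "verify by hand," not "definitely fabricated."

import requests

def crossref_lookup(title, rows=3):
    """Return the top Crossref matches for a cited title."""
    resp = requests.get(
        "https://api.crossref.org/works",
        params={"query.bibliographic": title, "rows": rows},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["message"]["items"]

suspect = "Some Article Title the Chatbot Just Gave You"
for item in crossref_lookup(suspect):
    found_title = (item.get("title") or ["<untitled>"])[0]
    print(found_title, "--", item.get("DOI"))
# If nothing resembling the cited title and author comes back, assume the
# reference is made up until you can locate it yourself.
```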
Microsoft’s Bing chatbot-assisted search puts footnotes in its answers. So it makes sense to wonder if it also hallucinates, or if it does better. I started with ChatGPT today and asked it to name some articles by “Gordon Hull the philosopher.” I’ll spare you the details, but suffice it to say it produced a list of six things that I did not write. When I asked it where I might read one of them, it gave me a reference to an issue of TCS that included neither an article by me nor an article of that title.
So Bing doesn’t have to be spectacular to do better! I asked Bing the same question and got the following:
Continue reading "Bing also hallucinates, even with footnotes" »
Posted by Gordon Hull on 04 May 2023 at 21:20 in Gordon Hull | Permalink | Comments (1)
By Gordon Hull
In the previous two posts (here and here) I’ve developed a political account of authorship (according to which whether we should treat an AI as an author for journal articles and the like is a political question, not one about what the AI is, or whether its output resembles human output), and argued that AIs can’t be properly held accountable. Here I want to argue that AI authorship raises social justice concerns.
That is, there are social justice reasons to expand human authorship that are not present in AI. As I mentioned in the original post, researchers like Liboiron are trying to make sure that the humans who put effort into papers, in the sense that they make them possible, get credit. In a comment to that post, Michael Muller underlines that authorship interacts with precarity in complex ways. For example, “some academic papers have been written by collectives. Some academic papers have been written by anonymous authors, who fear retribution for what they have said.” Many authors have precarious employment or political circumstances, and sometimes works are sufficiently communal that entire communities are listed as authors. There are thus very good reasons to use authorship strategically when minoritized or precarious people are in question. My reference to Liboiron is meant only to indicate the sort of issue in the strategic use of authorship to protect minoritized or precarious individuals, and to gesture to the more complex versions of the problem that Muller points to. The claim I want to make here is that, as a general matter, AI authorship isn’t going to help those minoritized people, and might well make matters worse.
If anything, there’s a plausible case that elevating an AI to author status will make social justice issues worse. There are at least two ways to get to that result, one specific to AI and one more generally applicable to cognitive labor.
Posted by Gordon Hull on 20 February 2023 at 16:56 in Gordon Hull | Permalink | Comments (0)
As if Sartre didn't produce enough words all by himself!
ChatGPT's response to the following prompt is instructive for those of us who are concerned about ChatGPT being used to cheat. Read past the content of the answer to notice the made-up citations. The "consciousness is a question..." line is in fact in the Barnes translation of Being and Nothingness, but is actually a term in the glossary provided by the translator (so it's not on p. 60 - it's on p. 629). Where did the AI find this? I'm guessing on the Wikipedia page for the book, which has a "special terms" section that includes the quote (and attributes it to Barnes. I should add as an aside that Barnes puts it in quote marks, but doesn't reference any source). The "separation" quote is, as far as I can tell, made up whole cloth. It does sound vaguely Sartrean, but it doesn't appear to be in the Barnes translation, and I can't find it on Google. It's also worth pointing out that neither quote is from the section about the cafe - both page numbers are from the bad faith discussion.
I don't doubt that LLMs will get better (etc etc etc) but for now, bogus citations are a well-known hallmark of ChatGPT. Watch it make up quotes from Foucault (and generally cause him to turn over in his grave) here.
Continue reading "ChatGPT Putting Words in Sartre's Mouth" »
Posted by Gordon Hull on 17 February 2023 at 17:28 in Gordon Hull | Permalink | Comments (0)
By Gordon Hull
As I argued last time, authorship is a political function, and we should be applying that construction of it to understand whether AI should be considered an author. Here is a first reason for doing so: AI can’t really be “accountable.”
(a) Research accountability: The various journal editors all emphasize accountability. This seems fundamentally correct to me. First, it is unclear what it would mean to hold AI accountable. Suppose the AI fabricates some evidence, or cites a non-existent study, or otherwise commits something that, were a human to do it, would count as egregious research misconduct. For the human, we have some remedies that ought, at least in principle, to discourage such behavior. A person’s reputation can be ruined, their position at a lab or employer terminated, and so on. None of those incentives would make the slightest difference to the AI. The only remedy that seems obviously available is retracting the study. But there are at least two reasons that’s not enough. First, as is frequently mentioned, retracted studies still get cited. A lot. Retraction Watch even keeps a list of the top-10 most cited papers that have been retracted. The top one right now is a NEJM paper published in 2013 and retracted in 2018; it had 1905 cites before retraction and 950 after. The second place paper is a little older, published in 1998 and retracted in 2010, and has been cited more times since its retraction than before. In other words, papers that are bad enough to be actually retracted cause ongoing harm; a retraction is not a sufficient remedy for research misconduct. If nothing else, some AI trained on the literature is going to find and cite it. And all of this is assuming something we know to be false, which is that all papers with false data (etc) get retracted. Second, it’s not clear how retraction disincentivizes an AI any more than any other penalty. In the meantime, there is at least one good argument in favor of making humans accountable for the output of an AI: it incentivizes them to check its work.
Continue reading "Some Reasons to be Skeptical of AI Authorship, Part 2: Accountability" »
Posted by Gordon Hull on 13 February 2023 at 07:00 in Gordon Hull | Permalink | Comments (0)
By Gordon Hull
Large Language Models (LLMs) like ChatGPT burst into public consciousness sometime in the second half of last year, and ChatGPT’s impressive results have led to a wave of concern about the future viability of any profession that depends on writing, or on teaching writing in education. A lot of this is hype, but one issue that is emerging is the role of AI authorship in academic and other publications; there’s already a handful of submissions that list AI co-authors. An editorial in Nature published on Feb. 3 outlines the scope of the issues at hand:
“This technology has far-reaching consequences for science and society. Researchers and others have already used ChatGPT and other large language models to write essays and talks, summarize literature, draft and improve papers, as well as identify research gaps and write computer code, including statistical analyses. Soon this technology will evolve to the point that it can design experiments, write and complete manuscripts, conduct peer review and support editorial decisions to accept or reject manuscripts”
As a result:
“Conversational AI is likely to revolutionize research practices and publishing, creating both opportunities and concerns. It might accelerate the innovation process, shorten time-to-publication and, by helping people to write fluently, make science more equitable and increase the diversity of scientific perspectives. However, it could also degrade the quality and transparency of research and fundamentally alter our autonomy as human researchers. ChatGPT and other LLMs produce text that is convincing, but often wrong, so their use can distort scientific facts and spread misinformation.”
The editorial then gives examples of LLM-based problems with incomplete results, bad generalizations, inaccurate summaries, and other easily-generated problems. It emphasizes accountability (for the content of material: the use of AI should be clearly documented) and the need for the development of truly open AI products as part of a push toward transparency.
Continue reading "Some Reasons to be Skeptical of AI Authorship, Part 1: What is an (AI) Author?" »
Posted by Gordon Hull on 06 February 2023 at 07:00 in Big Data, Foucault, Gordon Hull | Permalink | Comments (2)
By Gordon Hull
Last time, I introduced a number of philosophy of law examples in the context of ML systems and suggested that they might be helpful in thinking differently, and more productively, about holding ML systems accountable. Here I want to make the application specific.
So: how do these examples translate to ML and AI? I think one lesson is that we need to specify what exactly we are holding the algorithm accountable for. For example, if we suspect an algorithm of unfairness or bias, it is necessary to specify precisely what the nature of that bias or unfairness is – for example, that it is more likely to assign high-risk status to Black defendants (for pretrial detention purposes) than to white ones. Even specifying fairness in this sense can be hard, because there are conflicting accounts of fairness at play. But assuming that one can settle that question, we don’t need to specify tokens or individual acts of unfairness (or demand that each of them rise to the level where they would individually create liability) to demand accountability of the algorithm or the system that deploys it – we know that the system will have treated defendants unfairly, even if we don’t know which ones (this is basically a disparate impact standard; recall that one of the original and most cited pieces on how data can be unfair was framed precisely in terms of disparate impact).
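To make that concrete, here is a minimal sketch (with invented counts, not real COMPAS data) of what a group-level specification of unfairness looks like – you compare the rate at which each group receives the adverse label, rather than hunting for individual tokens of discrimination:

```python
# A toy sketch of group-level (disparate-impact style) accountability:
# the counts are invented for illustration. No individual misclassification
# needs to be identified; the disparity shows up in the rates themselves.

def high_risk_rate(high_risk, total):
    return high_risk / total

rates = {
    "Black defendants": high_risk_rate(high_risk=450, total=1000),
    "white defendants": high_risk_rate(high_risk=230, total=1000),
}

for group, rate in rates.items():
    print(f"{group}: labeled high risk at rate {rate:.2f}")

# One conventional summary, adapted from the employment-law "four-fifths
# rule": the further the ratio of the two rates falls below 0.8, the
# stronger the presumptive evidence of disparate impact.
ratio = min(rates.values()) / max(rates.values())
print(f"ratio of the two rates: {ratio:.2f}")
```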
Further, given the difficulties of individual actions (litigation costs, as well as getting access to the algorithms, which defendants will claim as trade secrets) in such cases, it seems wrong to channel accountability through tort liability and demand that individuals prove the algorithm discriminated against them (how could they? The situation is like the blue bus: if a group of people is 80% likely to reoffend or skip bail, we know that 20% of that group will not, and there is no “error” for which the system can be held accountable). Policymakers need to conduct regular audits or other supervisory activity designed to ferret out this sort of problem, and demand accountability at the systemic level.
Posted by Gordon Hull on 03 November 2022 at 11:44 in Big Data, Gordon Hull | Permalink | Comments (0)
By Gordon Hull
AI systems are notoriously opaque black boxes. In a now standard paper, Jenna Burrell dissects this notion of opacity into three versions. The first is when companies deliberately hide information about their algorithms, to avoid competition, maintain trade secrets, and to guard against gaming their algorithms, as happens with Search Engine Optimization techniques. The second is when reading and understanding code is an esoteric skill, so the systems will remain opaque to all but a very small number of specially-trained individuals. The third form is unique to ML systems, and boils down to the argument that ML systems generate internal networks of connections that don’t reason like people. Looking into the mechanics of a system for recognizing handwritten numbers or even a spam detection filter wouldn’t produce anything that a human could understand. This form of opacity is also the least tractable, and there is a lot of work trying to establish how ML decisions could be made either more transparent or at least more explicable.
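To see why this third form is the least tractable, here is a toy example of my own (not one from Burrell's paper): train a small network on handwritten digits and then inspect what it has "learned."

```python
# A toy illustration of Burrell's third form of opacity (my example, not
# hers): the trained model's "knowledge" is just arrays of real numbers,
# and nothing in them reads off as a human-intelligible rule for telling
# one digit from another.

from sklearn.datasets import load_digits
from sklearn.neural_network import MLPClassifier

digits = load_digits()
model = MLPClassifier(hidden_layer_sizes=(16,), max_iter=500, random_state=0)
model.fit(digits.data, digits.target)

print("training accuracy:", round(model.score(digits.data, digits.target), 3))
for i, w in enumerate(model.coefs_):
    print(f"layer {i} weight matrix shape: {w.shape}")
print("a few learned weights:", model.coefs_[0][0][:5])
# The model classifies well, but its "reasons" are these thousands of
# uninterpreted parameters.
```

The point is not that the numbers are secret – they are right there in front of you – but that having them in hand explains nothing.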
Joshua Kroll argues instead that the quest for potentially impossible transparency distracts from what we might more plausibly expect from our ML systems: accountability. After all, they are designed to do something, and we could begin to assess them according to the internal processes by which they are developed to achieve their design goals, as well as by empirical evidence of what happens when they are employed. In other words, we don’t need to know exactly how the system can tell a ‘2’ from a ‘3’ as long as we can assess whether it does, and whether that objective is serving nefarious purposes.
I’ve thought for a while that there’s potential help for understanding what accountability means in the philosophy of law literature. For example, a famous thought experiment features a traffic accident caused by a bus. We have two sources of information about this accident. One is an eyewitness who is 70% reliable and says that the bus was blue. The other is the knowledge that 70% of the buses that were in the area at the time were blue. Epistemically, these ought to be equal – in both cases, you can say with 70% confidence that the blue bus company is liable for the accident. But we don’t treat them as the same: as David Enoch and Talia Fisher elaborate, most people prefer the witness to the statistical number. This is presumably because when the witness is wrong, we can inquire what went wrong. When the statistic is wrong, it’s not clear that anything like a mistake even happened: the statistics operate at a population level; when applied to individuals, the use of statistical probability will be wrong 30% of the time, and so we have to expect that. It seems to me that our desire for what amounts to an auditable result is the sort of thing that Kroll is pointing to.
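The blue bus equivalence is easy to check numerically. Under the usual idealizing assumptions (a 50/50 prior over bus colors, and a witness whose reliability doesn't depend on which color the bus actually was – both assumptions are mine, for the sake of the illustration), Bayes' theorem gives exactly the same 70% either way:

```python
# A quick check of the "blue bus" point: on simple idealizing assumptions,
# the 70%-reliable eyewitness and the 70% base-rate statistic license
# exactly the same credence -- yet we treat the two very differently.

def posterior_blue_given_witness(prior_blue, reliability):
    """P(bus was blue | witness says 'blue'), by Bayes' theorem.

    The witness reports the true color with probability `reliability`,
    whatever the color actually was.
    """
    says_blue_if_blue = reliability * prior_blue
    says_blue_if_other = (1 - reliability) * (1 - prior_blue)
    return says_blue_if_blue / (says_blue_if_blue + says_blue_if_other)

p_statistical = 0.70  # 70% of the buses in the area were blue
p_witness = posterior_blue_given_witness(prior_blue=0.5, reliability=0.70)

print(f"credence from the statistic alone: {p_statistical:.2f}")  # 0.70
print(f"credence from the witness:         {p_witness:.2f}")      # 0.70
```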
Posted by Gordon Hull on 25 October 2022 at 09:26 in Big Data, Gordon Hull | Permalink | Comments (0)
In the previous two posts (first, second), I took up the invitation provided by a recent paper by Daniele Lorenzini to develop some thoughts on the relationship between Foucault’s thought and theorizing around epistemic injustice. In particular, Miranda Fricker’s account both draws heavily from Foucault and pushes back against his historicism to advocate for a more a-historical normative ground for the theory: testimonial injustice “distorts” who someone is. Last time, I looked at some of Foucault’s own work in the lectures leading up to Discipline and Punish to develop a sense of how both “truth” and “power” are relevant – and distinguishable – in that work, even as they both are historical constructs. In particular, following Lorenzini, we can distinguish between “x is your diagnosis” and “therefore, you ought to do y.” Here I begin with the complexity introduced in Foucault’s work by his addition of an embryonic genealogy of truth practices.
Let’s begin with the Psychiatric Power lectures, where Foucault had been talking about the strange role of science (and its personification in the doctor) in the governance of asylums. There, when speaking of the historical contingency of the modern scientific enterprise, Foucault writes:
Continue reading "Foucault and Epistemic Injustice (Part 3)" »
Posted by Gordon Hull on 17 October 2022 at 12:52 in Foucault, Gordon Hull | Permalink | Comments (0)
Now published in Critical Review. Here's the abstract:
Foucault distanced himself from Marxism even though he worked in an environment—left French theory of the 1960s and 1970s—where Marxism was the dominant frame of reference. By viewing Foucault in the context of French Marxist theoretical debates of his day, we can connect his criticisms of Marxism to his discussions of the status of intellectuals. Foucault viewed standard Marxist approaches to the role of intellectuals as a problem of power and knowledge applicable to the Communist party. Marxist party intellectuals, in his view, had developed rigid and universal theories and had used them to prescribe action, which prevented work on the sorts of problems that he uncovered—even though these problems were central to the development of capitalism.
The paper is an attempt to cut a path through some (mostly 1970s) texts to get a handle on what Foucault is doing with his inconsistent references to Marx and Marxism. There's a complex tangle of issues here, many related to the vicissitudes of the reception of Marx, and I hope that others will be able to add to our understanding of them and the period.
A huge thanks to Shterna Friedman, whose editorial work resulted in a much better article. Also, my paper is going to be part of a special issue of Critical Review on Foucault - the other papers should be appearing relatively soon.
Posted by Gordon Hull on 10 October 2022 at 08:32 in Foucault, Gordon Hull | Permalink | Comments (0)
By Gordon Hull
Last time, I took the opportunity provided by a recent paper by Daniele Lorenzini to develop some thoughts on the relationship between Foucault’s thought and theorizing around epistemic injustice. Lorenzini’s initial point, with which I agree fully, is that Fricker’s development of epistemic injustice is, on her own terms, incompatible with Foucault because she wants to maintain a less historicized normative standpoint than Foucauldian genealogy allows. Epistemic injustice, on Fricker’s reading, involves a distortion of someone’s true identity. Lorenzini also suggests that Foucault’s late work, which distinguishes between an epistemic “game of truth” and a normative/political “regime of truth,” offers the distinction Fricker’s theory needs, by allowing one to critique the regime of truth dependent on a game of truth. In terms of Foucault’s earlier writings, he does not fully reduce knowledge to power, in the sense that it can be useful to analytically separate them. Here I want to look at a couple of examples of how that plays out in the context of disciplinary power.
Consider the case of delinquency, and what Foucault calls the double mode of disciplinary power (Discipline and Punish, 199): a binary division into two categories (sane/mad, etc.) and then the coercive assignment of individuals into one group or the other. The core modern division is between normal and abnormal, and we have a whole “set of techniques and institutions for measuring, supervising and correcting the abnormal” (199). The delinquent, then, is defined epistemically or juridically (in other words, as a matter of science or law; as I will suggest below, Foucault thinks that one of the ways that psychology instituted itself as a science was by successfully blurring the science/law distinction), and then things are done to her. This is the sort of gap that epistemic injustice theory, at least in its testimonial version, needs: in Fricker’s trial example, there is the epistemic apparatus of “scientific” racism, and then there is the set of techniques that work during the trial. Both of these can be targets of critique, but testimonial injustice most obviously works within the second of the two.
Continue reading "Foucault and Epistemic Injustice (Part 2)" »
Posted by Gordon Hull on 04 October 2022 at 07:00 in Foucault, Gordon Hull | Permalink | Comments (0)
By Gordon Hull
Those of us who have both made extensive use of Foucault and made a foray into questions of epistemic injustice have tended to sweep the question of the relation between the two theoretical approaches under the rug. Miranda Fricker’s book, which has basically set the agenda for work on epistemic injustice, acknowledges a substantial debt to Foucault, but in later work she backs away from the ultimate implications of his account of power on the grounds that his historicism undermines the ability to make normative claims. In this, her argument makes a fairly standard criticism of Foucault, whose “refusal to separate power and truth” she aligns with Lyotard’s critique of metanarratives (Routledge Handbook of Epistemic Injustice, 55). As she describes her own project:
“What I hoped for from the concept of epistemic injustice and its cognates was to mark out a delimited space in which to observe some key intersections of knowledge and power at one remove from the long shadows of both Marx and Foucault, by forging an on-the-ground tool of critical understanding that was called for in everyday lived experience of injustice … and which would rely neither on any metaphysically burdened theoretical narrative of an epistemically well-placed sex-class, nor on any risky flirtation with a reduction of truth or knowledge to de facto social power” (Routledge Handbook, 56).
On this reading, then, Marxism relies too much on ideology-critique, on the one hand, and on privileging the position of women/the proletariat (or some other singular subject position), on the other. Foucault, for his part, goes too far in the opposite direction and reduces away the normative dimension altogether.
In a new paper, Daniele Lorenzini addresses the Foucault/Fricker question head-on, centrally focusing on the critique of Foucault’s supposed excessive historicism. Lorenzini’s contribution, to which I will return later, is to suggest that Foucault’s later writings (1980 and forward) distinguish between “games” of truth and “regimes” of truth. The distinction is basically illustrated in the following sentence: “I accept that x and y are true, therefore I ought to do z.” The game of truth is the epistemic first half of the sentence, and the “regime” of truth – the part that governs human behavior – is the second half, the “therefore I ought…” On this reading, genealogy is about unpacking and bringing to light the tendency of the “therefore” to disappear as we are governed by its regime, and about unpacking the power structures that make it operate. In other words, genealogy doesn’t collapse questions of truth and power; rather, it allows us to separate them by showing that a given game of truth does not entail the regime of truth that goes with it.
Continue reading "Foucault and Epistemic Injustice (part 1)" »
Posted by Gordon Hull on 27 September 2022 at 05:02 in Foucault, Gordon Hull | Permalink | Comments (0)
From the Department of Shameless Self-Promotion, here is the abstract for my new paper, "Dirty Data Labeled Dirt Cheap: Epistemic Injustice in Machine Learning Systems:"
"Artificial Intelligence (AI) and Machine Learning (ML) systems increasingly purport to deliver knowledge about people and the world or to assist people in doing so. Unfortunately, they also seem to frequently present results that repeat or magnify biased treatment of racial and other vulnerable minorities, suggesting that they are “unfair” to members of those groups. However, critique based on formal concepts of fairness seems increasingly unable to account for these problems, partly because it may well be impossible to simultaneously satisfy intuitively plausible operationalizations of the concept and partly because fairness fails to capture structural power asymmetries underlying the data AI systems learn from. This paper proposes that at least some of the problems with AI’s treatment of minorities can be captured by the concept of epistemic injustice. I argue that (1) pretrial detention systems and physiognomic AI systems commit testimonial injustice because their target variables reflect inaccurate and unjust proxies for what they claim to measure; (2) classification systems, such as facial recognition, commit hermeneutic injustice because their classification taxonomies, almost no matter how they are derived, reflect and perpetuate racial and other stereotypes; and (3) epistemic injustice better explains what is going wrong in these types of situations than does (un)fairness."
The path from idea to paper here was slow, but I hope the paper is convincing on the point that the literature on epistemic injustice can offer some needed resources for understanding harms caused by (some kinds of) AI/algorithmic systems.
Posted by Gordon Hull on 20 June 2022 at 05:00 in Gordon Hull | Permalink | Comments (0)
By Gordon Hull
UPDATE: 6/14: Here's a nice takedown ("Nonsense on Stilts") of the idea that AI can be sentient.
I don’t remember where I read about an early text-based chatbot named JULIA, but it was likely about 20 years ago. JULIA played a flirt, and managed to keep a college student in Florida flirting back for something like three days. The comment in whatever I read was that it wasn’t clear if JULIA had passed a Turing test, or if the student had failed one. I suppose this was inevitable, but it appears now that Google engineer Blake Lemoine is failing a Turing test, having convinced himself that the natural language processing (NLP) system LaMDA is “sentient.”
The WaPo article linked above includes discussion with Emily Bender and Margaret Mitchell, which is exactly right, as they’re two of the lead authors (along with Timnit Gebru) on a paper (recall here) that reminds everyone that NLP is basically a string prediction task: it scrapes a ton of text from the Internet and whatever other sources are readily available, and gets good at predicting what is likely to come next, given a particular input text. This is why there’s such concern about bias being built into NLP systems: if you get your text from Reddit, then for any given bit of text, what’s likely to come next is racist or sexist (or both). The system may sound real, but it’s basically a stochastic parrot, as Bender, Gebru and Mitchell put it.
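To make the string-prediction point concrete, here is a toy sketch of my own (nothing like LaMDA’s actual architecture, and every name and number in it is made up): count which word tends to follow which in a tiny corpus, then extend a prompt with the most frequent continuation.

```python
# Toy next-word prediction from bigram counts (illustrative only, not how LaMDA works).
from collections import Counter, defaultdict

corpus = "the cat sat on the mat . the cat chased the dog . the dog sat on the rug .".split()

# Count, for each word, how often each other word follows it.
following = defaultdict(Counter)
for word, nxt in zip(corpus, corpus[1:]):
    following[word][nxt] += 1

def continue_text(prompt, n_words=5):
    words = prompt.split()
    for _ in range(n_words):
        candidates = following.get(words[-1])
        if not candidates:
            break
        words.append(candidates.most_common(1)[0][0])  # pick the most frequent next word
    return " ".join(words)

print(continue_text("the cat"))
# Whatever comes out reflects only the statistics of the training text --
# which is why a model trained on Reddit will tend to reproduce Reddit's biases.
```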
So point one: LaMDA is not sentient, any more than ELIZA and JULIA were sentient, but chatbots are getting pretty good at convincing people they are. Still, it’s disturbing that the belief is spreading to people like Lemoine who really, really ought to know better.
Posted by Gordon Hull on 13 June 2022 at 13:30 in Gordon Hull | Permalink | Comments (0)
By Gordon Hull
As a criterion for algorithmic assessment, “fairness” has encountered numerous problems. Many of these emerged in the wake of ProPublica’s argument that Broward County’s pretrial detention system, COMPAS, was unfair to black suspects. To recall: In 2016, ProPublica published an investigative piece criticizing Broward County, Florida’s use of a software program called COMPAS in its pretrial detention system. COMPAS produced a recidivism risk score for each suspect, which could then be used in deciding whether someone should be detained prior to trial. ProPublica’s investigation found that, among suspects who were not rearrested prior to their trial, black suspects were much more likely to have been rated “high risk” for rearrest than white suspects. Conversely, among suspects who were arrested a second time, white suspects were more likely to have been labeled “low risk” than black ones. The system thus appeared to be discriminating against black suspects. The story led to an extensive debate (for an accessible summary with cites, see Ben Green’s discussion here) over how fairness should be understood in a machine learning context.
The debate basically showed that ProPublica focused on outcomes and demonstrated that the system failed to achieve separation fairness, which is satisfied when all groups subject to the algorithm’s decisions receive the same false positive and false negative rates. The system failed because black suspects who were not rearrested were much more likely than white suspects to have been labeled “high risk,” i.e., to be false positives. In response, the software vendor argued that the system made fair predictions because, among those classified in the same way (high or low risk), both racial groups exhibited the predicted outcome at the same rate. In other words, among those classified as “high risk,” there was no racial difference in how likely they were to actually be rearrested. The algorithm thus satisfied the criterion of sufficiency fairness. In the ensuing debate, computer scientists arrived at a proof that, except in very limited cases, it is impossible to satisfy both separation and sufficiency fairness criteria simultaneously.
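To see how the two criteria can come apart, here is a minimal sketch with hypothetical confusion-matrix counts (invented numbers, not the COMPAS data): the two groups have equal precision among those labeled “high risk” (sufficiency), but very different false positive rates (separation), because their underlying base rates of rearrest differ.

```python
# Illustrative sketch of separation vs. sufficiency fairness with made-up counts.

def rates(tp, fp, fn, tn):
    fpr = fp / (fp + tn)   # false positive rate: labeled "high risk" but not rearrested
    fnr = fn / (fn + tp)   # false negative rate: labeled "low risk" but rearrested
    ppv = tp / (tp + fp)   # precision: of those labeled "high risk," share actually rearrested
    return fpr, fnr, ppv

# Hypothetical counts for two groups with different base rates of rearrest (40% vs. 30%).
group_a = rates(tp=300, fp=200, fn=100, tn=400)
group_b = rates(tp=120, fp=80,  fn=180, tn=620)

for name, (fpr, fnr, ppv) in [("A", group_a), ("B", group_b)]:
    print(f"group {name}: FPR={fpr:.2f}  FNR={fnr:.2f}  PPV={ppv:.2f}")
# Output: equal PPV for both groups (sufficiency holds) but unequal FPR (separation fails).
# When base rates differ, the impossibility results show you generally cannot have both.
```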
In the meantime, on the philosophy side, Brian Hedden has argued that a provably fair algorithm could nonetheless be shown to potentially violate 11 of 12 possible fairness conditions. In a response piece, Benjamin Eva showed the limits of the twelfth with a different test and proposed a new criterion:
Continue reading "Base Rate Tracking often can’t fix algorithmic fairness" »
Posted by Gordon Hull on 19 May 2022 at 08:53 in Big Data, Gordon Hull | Permalink | Comments (0)
Luke Stark argues that facial recognition should be treated as the “plutonium of AI” – something so dangerous that its use should be carefully controlled and limited. If you follow the news, you’ll know that we’re currently treating it as the carbon dioxide of AI, a byproduct of profit-making that doesn’t look too awful on its own until you realize its buildup could very well cause something catastrophic to happen. Activists have worried about this pending catastrophe for a while, but lots of big money supports facial recognition, so its backers have thrown up a smokescreen of distractions – in one case, Facebook denied that its phototagging software in fact recognized faces (!) – in order to lull everyone into accepting it.
One of the worst offenders is a secretive company called Clearview, whose business model is to scrape the web for all the pictures it can find and then sell the technology to law enforcement. The company even has an international presence: in one disturbing instance, the Washington Post documents the use of its technology by Ukrainians to identify dead Russian soldiers by way of their Instagram and other social media accounts, and then sometimes to contact their families. More generally, the Post revealed internal documents showing that the company's database is nearing 100 billion images and that "almost everyone in the world will be identifiable." They're going all-in; the Post reports that "the company wants to expand beyond scanning faces for the police, saying in the presentation [obtained by the WP] that it could monitor 'gig economy' workers and is researching a number of new technologies that could identify someone based on how they walk, detect their location from a photo or scan their fingerprints from afar."
Clearview is also one of a cohort of companies that has been sued for violating Illinois’ Biometric Information Privacy Act (BIPA). BIPA, uniquely among American laws, requires opt-in assent for companies to use people’s biometric information (the Facebook case is central to my argument in this paper (preprint here); for some blog-level discussion see here and here). Of course, BIPA is a state-level law, so its protections do not automatically extend to anyone who lives outside of Illinois. That’s why yesterday’s news of a settlement with the ACLU is really good news. The Guardian reports:
Facial recognition startup Clearview AI has agreed to restrict the use of its massive collection of face images to settle allegations that it collected people’s photos without their consent. The company in a legal filing Monday agreed to permanently stop selling access to its face database to private businesses or individuals around the US, putting a limit on what it can do with its ever-growing trove of billions of images pulled from social media and elsewhere on the internet. The settlement, which must be approved by a federal judge in Chicago, will end a lawsuit brought by the American Civil Liberties Union and other groups in 2020 over alleged violations of an Illinois digital privacy law. Clearview is also agreeing to stop making its database available to Illinois state government and local police departments for five years. The New York-based company will continue offering its services to federal agencies, such as US Immigration and Customs Enforcement, and to other law enforcement agencies and government contractors outside Illinois.
Of course, the company denies the allegations in the lawsuit, and insists that it was just in the process of rolling out a “consent-based” product. Ok, sure! This is still a win for privacy and for one of the very few pieces of legislation in the U.S. that has any chance of limiting the use of biometric data.
Posted by Gordon Hull on 10 May 2022 at 08:48 in Big Data, Gordon Hull, Privacy | Permalink | Comments (0)
People make snap judgments about those they see for the first time – mentally categorizing someone as friendly, threatening, trustworthy, etc. Most of us know that those impressions are idiosyncratic, and suffused with cultural biases along race, gender and other lines. So obviously I know what you’re thinking… we need an AI that does that, right? At least that’s what this new PNAS paper seems to think (h/t Nico Osaka for the link). The authors start right in with the significance:
“We quickly and irresistibly form impressions of what other people are like based solely on how their faces look. These impressions have real-life consequences ranging from hiring decisions to sentencing decisions. We model and visualize the perceptual bases of facial impressions in the most comprehensive fashion to date, producing photorealistic models of 34 perceived social and physical attributes (e.g., trustworthiness and age). These models leverage and demonstrate the utility of deep learning in face evaluation, allowing for 1) generation of an infinite number of faces that vary along these perceived attribute dimensions, 2) manipulation of any face photograph along these dimensions, and 3) prediction of the impressions any face image may evoke in the general (mostly White, North American) population”
Let’s maybe think for a minute, yes? Because we know that people make these impressions on unsound bases!
First, adversarial networks are already able to produce fake faces that are indistinguishable from real ones. Those fake faces can now be manipulated to appear more or less trustworthy, hostile, friendly, etc. When you make fake political ads, for example, that’s going to be useful. Already 6 years ago, one “Melvin Redick of Harrisburg, Pa., a friendly-looking American with a backward baseball cap and a young daughter, posted on Facebook a link to a brand-new website,” saying on June 8, 2016 that “these guys show hidden truth about Hillary Clinton, George Soros and other leaders of the US. Visit #DCLeaks website. It’s really interesting!” Of course, both Melvin Redick and the site he pointed to were complete fabrications by the Russians. Now we can make Melvin look trustworthy, and Clinton less so.
Second, the ability to manipulate existing face photos is a disaster-in-waiting. Again, we have seen crude efforts at this before – making Obama appear darker than he is, for example. But here news photos could be altered to make Vladimir Putin appear trustworthy, or Mr. Rogers untrustworthy. This goes nowhere good, especially when combined with deepfake technology that already takes people out of their contexts and puts them in other ones (disproportionately, so far, women pasted into porn videos, though the Russians also recently tried to produce a deepfake of Zelensky surrendering; fortunately, that one was done sloppily).
Third, and I think this one is possibly the scariest, what about scanning images to see whether someone will be assessed as trustworthy? AI-based hiring is already under-regulated! Now employers will run your photo through the software and make customer-service hiring decisions based on who customers will perceive as trustworthy. What could go wrong?
All of this of course assumes that this sort of software actually works. The history of physiognomic AI, which uses all sorts of supposedly objective cues to determine personality and which is basically complete (usually racist) bunk, suggests that the science is probably not as good as the article makes it out to be. So maybe we’re lucky and this algorithm does not actually work as advertised. Of course, the fact that AI software is garbage doesn't preclude its being used to make people's lives miserable. Just consider the bizarre case of VibraImage.
But don’t worry. The PNAS authors are aware of ethics, noting that “the framework developed here adds significantly to the ethical concerns that already enshroud image manipulation software:”
“Our model can induce (perceived) changes within the individual’s face itself and may be difficult to detect when applied subtly enough. We argue that such methods (as well as their implementations and supporting data) should be made transparent from the start, such that the community can develop robust detection and defense protocols to accompany the technology, as they have done, for example, in developing highly accurate image forensics techniques to detect synthetic faces generated by SG2. More generally, to the extent that improper use of the image manipulation techniques described here is not covered by existing defamation law, it is appropriate to consider ways to limit use of these technologies through regulatory frameworks proposed in the broader context of face-recognition technologies.”
Yes, the very effective American regulation of privacy does inspire confidence! Also, “There is also potential for our data and models to perpetuate the biases they measure, which are first impressions of the population under study and have no necessary correspondence to the actual identities, attitudes, or competencies of people whom the images resemble or depict.”
Do you think? As Luke Stark put it, facial recognition is the “plutonium of AI:” very dangerous and with very few legitimate uses. This algorithm belongs in the same category, and should similarly be regulated like nuclear waste. For example, as Ari Waldman and Mary Anne Franks have written, one of the problems with deepfakes is that the fake version gets out there on the internet, and it is nearly impossible to make it go away (if you even know about it). Forensic software gets there too late, and those without resources aren’t going to be able to deploy it anyway. Lawsuits are even less useful, since they're time-consuming and expensive to pursue, and lots of defendants won't be jurisdictionally available or have pockets deep enough to make the chase worth it. In other words, not everybody is going to be able to defend themselves like Zelensky, who both warned about deepfakes and was able to produce video of himself not surrendering. In the meantime, faked and shocking things generally get diffused faster and further than real news. After all, “engagement” is the business model of social media. Further, to the extent that people stay inside filter bubbles (like Fox News), they may never see the forensic corrections, and they probably won’t believe the real one is real, even if they do.
And as for reinforcing existing biases, Safiya Noble already wrote a whole book on how algorithms that guess what you’re probably thinking about someone can do just that.
Posted by Gordon Hull on 25 April 2022 at 15:49 in Big Data, Gordon Hull | Permalink | Comments (0)
in refusing to grant copyright registration to an AI creation. I suspect this one will be litigated for a while, since the person who has been trying to get protection for the picture has declared that limiting copyright to human authors would be unconstitutional (I also think it would be pretty entertaining to watch somebody try to float that argument in front of the current Supreme Court). A good article on why this sort of thing is going to be a problem, and an interesting way of parsing law's traditional 'mental state' requirement, is here.
Posted by Gordon Hull on 18 February 2022 at 15:55 in Gordon Hull, Intellectual property and its discontents | Permalink | Comments (0)
If you want to use their website, that is; WaPo has the story here. But it's one of those public/private partnerships where data leaks and hacks and thefts happen. To their credit, the Post went to Joy Buolamwini, whose work proved that facial recognition systems work best on white men and worst on Black women. But even a perfectly functioning system is frightening. First, it would unquestionably worsen the divide between those who have good Internet access and those who don't, making convenient access to tax records contingent on having a sufficient income and computing skills. Also, of course, facial recognition is bad: the potential for misuse is so great, and the record so permanent, that Woody Hartzog and Evan Selinger argue it ought to be legally impossible to consent to it. (For an overview of the debate, see Selinger and Leong here.)
One of the biggest problems with data is that it gets leaked and hacked, of course, but another big problem is that companies sell it to pretty much whoever shows up with cash. The company handling IRS facial recognition claims it will turn data over to law enforcement, but the Post says there's no federal law proscribing what the company can do with it. And the IRS is switching companies and authentication strategies because of a massive data breach at Equifax a few years ago. So it's not like nobody has ever heard of a data breach.
Oh, and ID.me, the company getting the contract, totally wants to sell you stuff:
"But advertising is a key part of ID.me’s operation, too. People who sign up on ID.me’s website are asked if they want to subscribe to “offers and discounts” from the company’s online storefront, which links to special deals for veterans, students and first responders. Consumer marketing accounts for 10 percent of the company’s revenue."
What could possibly go wrong? Well, if you look up the ID.me privacy policy, you discover that most of the usual things can go wrong. For example, they don't police third-party use of the data, which they encourage you to opt in to:
"To avoid any confusion, Users should understand that, while we own and operate the Service and Website, we do not own or operate websites owned and operated by third parties who may avail themselves of the ID.me Service (collectively referred to hereafter as the “Third-Party Websites”). This Privacy Policy is intended to inform Users about our collection, use, storage, and disclosure, destruction and disposal of information that we collect or record in the course of providing the Website and the ID.me Service. Please note, we are not responsible for the privacy practices of Third-Party Websites and they are under no obligation to comply with this Privacy Policy. Before visiting Third-Party Websites, and before providing the User’s ID.me or any other information to any party that operates or advertises on Third-Party Websites, Users should review the privacy policy and practices of that website to determine how information collected from Users will be handled. Please further note, depending on a User’s particular interaction with us (e.g., Users who solely navigate the Website versus Users who create an account and use the ID.me Service at Third-Party Websites), different portions of this policy may apply to Users at different times."
Also, they reserve the right to change their privacy policy at any time, and it's your job to check it frequently to see whether they have:
"If we decide to change this Privacy Policy, we will post those changes to this page so that you are aware of what information we collect, how we use it, and under what circumstances, if any, we disclose it. We reserve the right to modify this Privacy Policy at any time, so please review it frequently. If we make material changes to this policy, we will notify you here, by email, or by means of notice on our home page."
That's item 1 on the policy. Nothing else matters. This is typical corporate privacy boilerplate that lets them do whatever they want with your facial biometric information. Good job IRS!
Posted by Gordon Hull on 27 January 2022 at 20:58 in Gordon Hull, Privacy | Permalink | Comments (0)
The SCOTUS decision yesterday striking down OSHA’s vaccine mandate is based on some of the most sophomoric reasoning the Court has issued in a long time. And I am aware of what Court I’m talking about. The gist of the argument is that OSHA is only authorized to enact safety rules that protect people at their place of work. But this is a public health rule, because Covid also occurs outside the workplace, ergo etc.
But of course work is one of the main places you can get Covid, as Justin Feldman documents (he also shows that the predominance of workplace transmission helps to explain the disproportionate impact on non-white folks). The fact that vaccination also protects you outside of work is nice, but not the point. I have a ladder at home. I don’t know the OSHA rules, but I bet there are some covering the construction and use of ladders at work. If those rules cause ladder manufacturers to make a safer product, that also protects me at home. But it would be a little hard to argue that such a standard doesn’t meet the statutory mandate of protecting people who use ladders in their occupation (the dissent cites several more such examples). What’s wrong with positive externalities?
The Court opines:
“It is telling that OSHA, in its half century of existence, has never before adopted a broad public health regulation of this kind—addressing a threat that is untethered, in any causal sense, from the workplace.”
Well, duh. We haven’t had a global pandemic like Covid during the existence of OSHA! In the meantime, if you read court opinions very often, you learn to expect documentation of bold factual assertions like that one. But there is no footnote explaining how there is no causal relation between the threat of Covid and the workplace. That’s because a credible such footnote cannot be written. As the dissent points out, “because the disease spreads in shared indoor spaces, it presents heightened dangers in most workplaces,” citing OSHA’s documentation of the risks and reminding the majority that courts are supposed to be deferential in cases like this. Congress even allocated money to OSHA to address workplace hazards (dissent, p. 8). In short,
“The agency backed up its conclusions with hundreds of reports of workplace COVID–19 outbreaks—not just in cheek-by-jowl settings like factory assembly lines, but in retail stores, restaurants, medical facilities, construction areas, and standard offices.” (dissent, p. 9)
We also know that SCOTUS doesn’t even believe its own rhetoric about workplace risk: the justices are all vaccinated, all but Gorsuch wore masks to oral arguments on this case (prompting Sotomayor to participate from her chambers), and court policy is that arguing attorneys have to take a Covid test the day before, and argue remotely if positive. Attorneys are also supposed to wear KN95 masks when in the Courtroom except when actually speaking. One of the attorneys arguing against the mandate even had to appear remotely because he had Covid! So workplace safety is apparently a thing that SCOTUS has heard of – it’s just not one they deem fit to extend to workers who have less control over their environment.
In the meantime, Gorsuch took the time to write a concurrence tediously saying that states might have authority for public health, and that the nondelegation doctrine “ensures democratic accountability by preventing Congress from intentionally delegating its legislative powers to unelected officials.” Perhaps now is the time to remember that SCOTUS is unelected, and seems to enjoy its own antidemocratic powers quite a bit: this is the Court that ordered the Biden administration to reinstate the Remain in Mexico policy, even though that’s foreign policy, traditionally the province of the democratically elected executive (remember, the Court kept trying to greenlight Trump’s border wall with the fake border caravan emergency, even though Congress specifically withheld funding for it). This is also the same Justice Gorsuch who was appointed by the minoritarian Senate at the invitation of Donald Trump because Mitch McConnell refused to consider the nomination made by the democratically elected president at the time of the vacancy. (Gorsuch also pontificates about the “major questions doctrine,” which is supposed to intervene when an “agency may seek to exploit some gap, ambiguity, or doubtful expression in Congress’s statutes to assume responsibilities far beyond its initial assignment.” But since the Court made no effort to prove that a vaccination mandate would not improve workplace safety, and instead tries to show that the mandate improves safety everywhere, this rhetoric should be filed under the “I’m going to cite myself in anti-regulatory rulings in the future” dept.)
There is one bit of hope in the opinion, in this paragraph:
“That is not to say OSHA lacks authority to regulate occupation-specific risks related to COVID–19. Where the virus poses a special danger because of the particular features of an employee’s job or workplace, targeted regulations are plainly permissible. We do not doubt, for example, that OSHA could regulate researchers who work with the COVID–19 virus. So too could OSHA regulate risks associated with working in particularly crowded or cramped environments. But the danger present in such workplaces differs in both degree and kind from the everyday risk of contracting COVID–19 that all face. OSHA’s indiscriminate approach fails to account for this crucial distinction—between occupational risk and risk more generally—and accordingly the mandate takes on the character of a general public health measure, rather than an “occupational safety or health standard.” 29 U. S. C. §655(b) (emphasis added).” (slip op, p. 7)
The Biden administration should immediately institute revised standards mandating vaccination in places with disproportionately high Covid rates. There’s been research on that; as CNBC reports of the study:
“The top five occupations that had higher than a 50% mortality rate increase during the pandemic include cooks, line workers in warehouses, agricultural workers, bakers and construction laborers.”
Feldman links to some other high risk groups. But the Biden administration needs to immediately call the Court’s bluff. Will SCOTUS reverse itself here and go full-on Lochner and declare that the baking profession is unregulable?
Marx had lots of words for how the capitalist class treated the lives of workers as disposable. Engels had the better expression: “social murder.” How many workers did the right-wing majority in SCOTUS kill yesterday? “OSHA estimated that in six months the emergency standard would save over 6,500 lives and prevent over 250,000 hospitalizations” (dissent, p. 11), and that number was derived before Omicron emerged. As the dissent sums it up:
“Underlying everything else in this dispute is a single, simple question: Who decides how much protection, and of what kind, American workers need from COVID–19? An agency with expertise in workplace health and safety, acting as Congress and the President authorized? Or a court, lacking any knowledge of how to safeguard workplaces, and insulated from responsibility for any damage it causes?”
There’s definitely a separation of powers problem emerging, but it’s not the one the Court’s conservatives want you to think about.
Posted by Gordon Hull on 14 January 2022 at 09:20 in Gordon Hull | Permalink | Comments (0)