Actuarial Judgment and Decision-Making Machines: Introducing Rouvroy’s Algorithmic Governmentality

The relation between machines and judgment in the legal sense is already partly actualized in the form of “actuarial” judgment: in parole boards, for example, the “judges” are handed a computer-produced risk probability derived from preexisting statistics and the convict’s behavioral pattern. The “judgment” they render thus ought not to be considered on a par with legal judgment in its non-actuarial, more common form. Antoinette Rouvroy, the Belgian philosopher of law who coined the phrase “algorithmic governmentality,” has given an insightful talk on the subject, which I believe is available in written form, entitled “Governmentality in an Age of Autonomic Computing: Technology, Virtuality, and Utopia” (there is also another talk, “Algorithmic Governmentality and the End(s) of Critique,” available on the web for free).
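To make the mechanism concrete: what the board receives is, structurally, something like the following sketch (in Python; the feature names, weights, and numbers are invented for illustration and are not drawn from any actual parole instrument). The point is that the “judgment” arrives as a probability computed from population statistics, not as a deliberation about the singular case.

```python
# A minimal, hypothetical sketch of how an "actuarial" risk score is produced.
# All feature names, weights, and values below are invented for illustration;
# they do not come from any real parole instrument.

import math

# Weights of the kind a logistic-regression model fitted on historical
# recidivism data might yield (illustrative only).
WEIGHTS = {
    "prior_convictions": 0.35,
    "age_at_release": -0.04,
    "disciplinary_reports": 0.25,
}
INTERCEPT = -1.0

def recidivism_risk(record: dict) -> float:
    """Return a probability of reoffending for one standardized record."""
    z = INTERCEPT + sum(WEIGHTS[k] * record[k] for k in WEIGHTS)
    return 1.0 / (1.0 + math.exp(-z))  # logistic link: raw score -> probability

convict = {"prior_convictions": 3, "age_at_release": 42, "disciplinary_reports": 1}
print(f"risk estimate handed to the board: {recidivism_risk(convict):.2f}")
```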

She meticulously analyzes the forms of governmentality that increasingly depend upon predictions made from large data-sets by autonomic machines. Examples include the risk-assessment of individuals to identify likely terrorists or offenders early on and preempt crime, as well as the already mentioned actuarial decision-making in certain legal settings. The most important issue here, as in every discussion of data-technologies, is the notion of protocol and the “standardization” of the data produced by different profiling resources, which range from Facebook to state-run polls and profiling projects. Not only is the human being reduced to a (huge) number of data-fields (name, age, …) processable by “intelligent” and autonomic machines (defined by their autonomous decision-making; common examples are AI enemy players in games and shopping-bots, not to mention Google’s PageRank and Facebook’s now (in)famous News Feed), but the data thus produced are standardized according to protocols and pooled together to form data-mines as “complete” as possible, making for more “accurate” predictions, whether about potential criminality in the “risk society” or the personalization of ads and services.
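What “standardization according to protocols” amounts to in practice can be illustrated with a small, hypothetical sketch: profile fragments from two invented sources (a social-network export and an administrative registry, with made-up field names) are mapped onto one shared schema and pooled into a single, more “complete” record.

```python
# A hedged sketch of the standardization-and-pooling step described above.
# Source formats, field names, and the merging rule are all invented.

COMMON_SCHEMA = ("name", "age", "location", "interests")

def from_social_profile(raw: dict) -> dict:
    """Map a (hypothetical) social-network export onto the common schema."""
    return {
        "name": raw.get("display_name"),
        "age": raw.get("age"),
        "location": raw.get("last_checkin"),
        "interests": raw.get("liked_pages", []),
    }

def from_state_registry(raw: dict) -> dict:
    """Map a (hypothetical) administrative record onto the same schema."""
    return {
        "name": raw.get("full_name"),
        "age": raw.get("age"),
        "location": raw.get("registered_address"),
        "interests": [],
    }

def pool(*profiles: dict) -> dict:
    """Merge standardized fragments into one profile, keeping the first filled value."""
    merged = {key: None for key in COMMON_SCHEMA}
    merged["interests"] = []
    for p in profiles:
        for key in COMMON_SCHEMA:
            if key == "interests":
                merged["interests"].extend(p.get("interests") or [])
            elif merged[key] is None:
                merged[key] = p.get(key)
    return merged

social = from_social_profile({"display_name": "J. Doe", "age": 34,
                              "last_checkin": "Berlin", "liked_pages": ["running"]})
registry = from_state_registry({"full_name": "Jane Doe", "age": 34,
                                "registered_address": "Berlin"})
print(pool(social, registry))
```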

The decisions and predictions made by these autonomous agents are the result of turning the human being (and the world) into a black box whose internal life, intentionality, and states of mind are simply made not to exist, at least where it matters. They are thus not in any way comparable to human judgment, although their end-results can be made to approach human judgment to determinable degrees. The most concise way to describe the difference between the two is to say that machinic decision-making knows nothing of the “excluded middle” and (perhaps) of syllogism in general: it is absolutely singular and does not pass through the universal-individual continuum characteristic of judgment.
