Actuarial Judgment and Decision-Making Machines: Introducing Rouvroy’s Algorithmic Governmentality

The relation between machines and judgment in the legal sense is already somewhat actualized in “actuarial” judgment: in parole boards, for example, the “judges” are given a computer-produced risk-probability based on preexisting statistics and the convict’s behavioral pattern. The “judgment” they render thus ought not to be considered on a par with legal judgment in its non-actuarial, more common form. Antoinette Rouvroy, the philosopher of law who coined the phrase “algorithmic governmentality,” has given an insightful talk on the subject, which I believe is available in written form, entitled “Governmentality in an Age of Autonomic Computing: Technology, Virtuality, and Utopia” (there is also another talk, “Algorithmic Governmentality and the End(s) of Critique,” available on the web for free).

She meticulously analyzes the forms of governmentality that are increasingly dependent upon predictions made from large data-sets by autonomic machines. Examples include the risk-assessment of individuals to identify likely terrorists or offenders early on and preempt crime, as well as the already mentioned actuarial decision-making in certain legal settings. The most important issue here, as in every discussion of data-technologies, is the notion of protocol and the “standardization” of the data produced by different profiling resources (these include Facebook as well as state-run polls and profiling projects). Not only is the human being reduced to a (huge) number of data-fields (name, age, …) processable by “intelligent” and autonomic machines (the latter defined by their autonomous decision-making; common examples are AI enemy players in games and shopping-bots, not to mention Google’s PageRank and Facebook’s now (in)famous News Feed), but the data thus produced are standardized according to protocols and pooled together to form data-mines as “complete” as possible, making for more “accurate” predictions, whether about potential criminality in the “risk society” or the personalization of ads and services.
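To make the actuarial logic concrete, here is a minimal sketch of how such a risk-probability might be produced: a weighted sum over standardized data-fields, squashed into a “probability” by the logistic function. Every field name and weight below is invented for illustration; real actuarial instruments are proprietary and far more elaborate.

```python
import math

# Hypothetical, illustrative only: an "actuarial" risk score as a weighted
# sum of standardized data-fields. Field names and weights are invented.
WEIGHTS = {"prior_offenses": 0.8, "age": -0.05, "employment": -0.6}
BIAS = -1.0

def risk_score(profile):
    """Map a dict of standardized fields to a recidivism 'probability'."""
    z = BIAS + sum(WEIGHTS[k] * profile.get(k, 0.0) for k in WEIGHTS)
    return 1.0 / (1.0 + math.exp(-z))  # logistic squashing into (0, 1)

print(round(risk_score({"prior_offenses": 3, "age": 25, "employment": 0}), 3))
```

Note what the sketch makes visible: the “judgment” is exhausted by arithmetic over fields; nothing of the convict’s inner life enters the computation.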

The decisions and predictions made by these autonomous agents are the result of turning the human being (and the world) into a black-box whose internal life, intentionality, and states of mind are simply made not to exist, at least where it matters. They are thus not in any way comparable to human judgment, although their end-results can be made to approach human judgment to determinable degrees. The most concise way to describe the difference between the two is to say that machinic decision-making knows nothing of the “excluded middle” or (perhaps) of syllogism in general: it is absolutely singular and does not pass through the universal-individual continuum characteristic of judgment.

Occasionalism and AI

There is a most subtle Occasionalism at work in the cluster-concept of AI. It is well known that what first led Descartes to the idea of Occasionalism (for anyone reading this and not familiar with the term, it is, roughly put, a cosmo-theological doctrine that attributes all causation to an omnipotent God who constantly intervenes to ensure the workings of the world) was humans’ lack of knowledge of their bodies’ movements: how does one move one’s arm without knowing how, without even knowing what physiological processes are at work? For a philosopher with the highest regard for the cogitative capacities of humans, acting without knowledge is a scandal; the idea of an omniscient God intervening between my will (to move my hand) and its fulfillment (my hand moving), acting as the cause, sets things right again, for Descartes’ God knows the what and the how of every act.

Now let us fast-forward to the present. In the preface to Ethem Alpaydın’s Introduction to Machine Learning, 2nd ed., we read that one of the two principal reasons for using machine learning (a sub-field of AI, it can be implemented in different fashions, from simple transfer functions to neural networks to genetic algorithms) is the inability of humans to “explain their expertise.” Language recognition is one such case. In a sense, the creation of intelligence can be seen as the final piece of what Freud termed the “Prosthetic God” humans are forever building. Some philosophers object to the use of the term “intelligence” for merely algorithmic behavior; but what about instances of artificial behavior for which no human-devised algorithm exists? I would also like to ask what forms of intentionality are valid in cases where we don’t know what we know, meaning of course the tasks we perform without knowing how to perform them.
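A minimal sketch of what this “inexplicable expertise” looks like in code: a nearest-neighbour classifier acquires its decision rule purely from labelled examples, and at no point does anyone write the rule down. The data points below are invented for the illustration.

```python
# Learning without explaining: 1-nearest-neighbour classification.
# The "rule" is never stated; it is implicit in the stored examples.
def nearest_neighbour(train, query):
    """train: list of ((x, y), label); return the label of the closest point."""
    def dist2(p, q):
        return (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2
    return min(train, key=lambda item: dist2(item[0], query))[1]

examples = [((0, 0), "a"), ((0, 1), "a"), ((5, 5), "b"), ((6, 5), "b")]
print(nearest_neighbour(examples, (1, 1)))   # falls with the "a" cluster
print(nearest_neighbour(examples, (5, 4)))   # falls with the "b" cluster
```

The classifier behaves as if it knew the boundary between the two regions, yet no such boundary was ever formulated, by human or machine.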

There is also another point I’d like to discuss, and that is the issue of the smart fire-detector (Jürgen Lawrenz had asked whether a “smart” fire-detector that calls your cellphone in case of a fire can be deemed really intelligent or capable of judgment). While fully agreeing that such a gadget can in no way be deemed intelligent, I still feel it is important to remember that machines and technical objects were unable to respond to changes in their milieu before Wiener’s feedback-based cybernetic devices. The very possibility of “learning,” even learning as bare memory accumulation (though there is also change in behavior), is of recent origin. All this aside, I too believe that the most important feature distinguishing human consciousness from the (perhaps) intelligent machine is judgment, something the latter is incapable of, but also something that it renders more or less obsolete.

Defined in terms of subsuming the particular under the universal, that is, defined in loosely Kantian terms, judgment is a human process, and a costly one. Plato was somewhat right in dismissing the Sophists for their attempts at creating shortcuts (“short-circuits,” in Stiegler’s terminology), claiming that universal knowledge was essential to understanding and judgment. Such knowledge could only be amassed by years of study and learning, and was as such a very costly thing. This is one of the reasons why the earlier AI models based on human judgment failed to yield any efficiency. We must not forget that machines are built to be efficient, and the human process of judgment cannot be a successful model; instead, we get the less-than-judgment machines: the cybernetic machines operating at a level below representation and knowledge.

The Unknown Masterpiece: “Heart String Marionette” and Authenticity

One would think that in the era of sharing and the omniscience of the “interweb” there would be at least one or two reviews of any given film recently released. I started searching for a review after I incidentally got my hands on a copy of the animated feature film Heart String Marionette; I was so impressed with and in awe of this film that I simply took it for granted that it had won several awards and was a celebrated work of art, at least in the right corners. Do a search yourselves and you will discover that there is not a single review or rating, even on sites like IMDb. This post, however, is not a speculative attempt at a pathology of aesthetic reception and Internet fame. I am currently working on several essays and reviews on HSM in order to draw much-deserved attention to it, at least in philosophical quarters. Here I will place a piece of a work in progress, hoping to get some feedback.

The film is made freely available by its generous director, M dot Strange. Watch Heart String Marionette on YouTube.


Monads and Cybernetic Organs

Leibniz’s monads do not have “windows” through which they could perceive the world; their milieu is unknowable to them. Yet they continue to function in perfect “harmony” with one another, which is crucial, given that according to Leibniz the whole world is made up of monads. Leibniz, whose invention of the integral calculus and of life insurance alone might make him a fit candidate for progenitor of the modern world (in which we are still living, although a bit less each day), explained the monads’ blindness by reference to their perfect design: being omni-scient and ditto-potent, God factored the world in its entirety into the workings of each monad, and it is as such that each could be said to contain or reflect the whole world. The cult of design, the cult of the engineer, is only an extension of the ideas that gave birth to the Monadology. Leibniz is the progenitor of the modern world of engineering and design; his is a revival of mathematics in its true meaning: fore-knowledge. Heidegger explains the modern age as the time when the “principle of sufficient reason (ratio)” holds sway, a principle first stated by Leibniz (“nothing exists without a reason”). It is the same Leibniz who comes up with the idea now known as algorithmic complexity: understanding as compression. He is the prime representative of modern efficiency.
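The idea of understanding as compression can be shown in miniature, with invented data: a lawful sequence admits a description far shorter than itself, while a random-looking one does not.

```python
import random
import zlib

# Understanding-as-compression in miniature: a sequence governed by a law
# compresses drastically; noise barely compresses at all.
regular = ("0123456789" * 100).encode()                    # 1000 lawful bytes
random.seed(0)
irregular = bytes(random.randrange(256) for _ in range(1000))  # 1000 noisy bytes

print(len(zlib.compress(regular)) < 50)     # the law stands in for the data
print(len(zlib.compress(irregular)) > 900)  # no shorter description exists
```

To grasp the law of the first sequence is, in effect, to possess its short description; the second admits no such grasp.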

It was not until WWII that modelling-based prediction gained an alternative: real-time, feedback-driven prediction. Cybernetics was born; the idea to end all ideas. For an introduction to what comes next, see my presentation at the General Organology Conference held at the University of Kent on 20-22 November 2014 (the link will be up soon).
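A toy example of the feedback-driven alternative, assuming nothing beyond Wiener’s basic scheme: the controller below never models the room it regulates; it merely measures the error against a setpoint and corrects, step after step. The constants are arbitrary, chosen for the sketch.

```python
# Negative feedback in the Wiener mould: no model of the milieu,
# only repeated measurement and proportional correction of the error.
def regulate(temp, setpoint=20.0, gain=0.5, steps=20):
    for _ in range(steps):
        error = setpoint - temp   # measure the deviation from the goal
        temp += gain * error      # correct proportionally; no "why", only "how much"
    return temp

print(round(regulate(10.0), 2))   # converges on the setpoint
```

The loop reaches its goal without containing any representation of what a room, heat, or fire is; this is precisely the displaced blindness discussed below.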

With the cybernetic organ (prosthesis), we are facing the exact opposite of a Leibnizian monad. The Leibnizian monad is the result of ultimate design, the kind of design only a God could be capable of: EVERYTHING has been factored in so that it functions smoothly without needing to see anything at all; it has no windows and yet functions perfectly. The cybernetic device is the opposite of that. Not that it does away with blindness, as so many capitalists and would-be technopriests might wish; rather, it displaces the blindness: instead of being a blind work of perfect design, it becomes a seeing work of no design that turns a blind eye to the essence of its “object.” Instead of the blind monad, we get the black-box. What silently disappears in the process is human-ness: the capacity for thought and the possibility of science as knowledge of causes, of the “why” and the “what.”

The Vigilante of Faith: Kierkegaard and the Equalizer

Fuqua’s film version of The Equalizer is only a more recent and perhaps more outstanding sample of a wide array of films and books that essentially share a plot about an apparently normal person who, when trouble arises, reveals himself as not-so-normal, as a hero. Of course there is always the idea that everyone can be a hero, but this must not be confused with the philosophy that stands behind The Equalizer and its ilk.


Another Example of the Spreading Domination of the Cybernetic Organon

I recently came upon an interesting book. Strategy without Design is only another example of the emerging, increasingly visible logic that is the Cybernetic Organon: it essentially argues that self-organized, unintentional, collaborative work towards a goal is more efficient than intentionally designed strategy: leave it in God’s hands, god being that which controls the things I don’t understand. There is a sort of new faith at work, only this time the God in question is much more occult and yet much more mundane: it is the god of the cybernetic prosthesis. Cyber-capitalism is now long on its way to becoming the Cartesian God revived in the image of the circuit-breaker: Malebranche might have been wrong in the 17th century, but he is completely in the right in the era of connective-capitalism. Occasionalism is vindicated: the revived God makes possible, establishes, and breaks all the connections in the planetary network that life now becomes; it lies between every two nodes, a protocol through which all flows must pass. A decentered God, the God of Descartes.

My Newest Published Article


My review of Paul Virilio’s The Great Accelerator was published yesterday by Marx and Philosophy.

I believe its greatest merit lies in its analysis of flash-trading, or program-trading, as it relates to the dehumanization of time and the resultant choc de la confiance (“shock of confidence”). Also of interest are the widespread and popular phenomena analyzed by Virilio: although the author himself does not mention it, they are actually proof that the trends he outlined decades ago in such books as The Aesthetics of Disappearance are now commonplace.

On Autonomic Computation, Writing, and Plato

The widely discussed objections that Plato brings to bear on writing can be read in a different way, in terms of machine learning. On this reading, his statements indict writing qua external memory, hence qua machine, for its inability to adapt itself to changes in its environment. The latter, in the case of writing, usually involves a “conversation”: the questions different readers/users might put to the text/machine at different times. The meaning-machine of writing is not capable of learning: however the Derridean school might attempt to present it otherwise, writing, like all pre-cybernetic technics, like the Turing Machine, is unable to engage with and learn from its environment. As externalized memory, writing remains a static memory, unable to update itself, to extend and grow through experience. Plato would have to wait until after the Second World War to see the upgraded, dynamic memory emerging, first and essentially, in the feedback mechanisms of cybernetic prostheses.
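The contrast can be sketched in a few lines, with invented names and data: a “written” memory is frozen at the moment of authoring, while a cybernetic memory revises itself with every interaction.

```python
# Static vs. dynamic externalized memory. The class names and the toy
# question-and-answer data are invented for this illustration.
class WrittenText:
    def __init__(self, content):
        self._content = dict(content)   # fixed at the moment of writing
    def ask(self, question):
        # Plato's complaint: pressed with a new question, the text
        # can only repeat what was inscribed in it.
        return self._content.get(question, "the text repeats itself")

class CyberneticMemory(WrittenText):
    def learn(self, question, answer):
        self._content[question] = answer   # feedback updates the store

scroll = WrittenText({"what is virtue?": "virtue is knowledge"})
print(scroll.ask("what is justice?"))    # static: no new answer, ever

organ = CyberneticMemory({"what is virtue?": "virtue is knowledge"})
organ.learn("what is justice?", "giving each his due")
print(organ.ask("what is justice?"))     # revised through "experience"
```

The scroll answers every unforeseen question with the same silence; the cybernetic organ extends its store with each exchange, which is all that its “learning” amounts to.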