The General Organology Conference Videos are Online, My Presentation Included

The Noötechnics YouTube channel has made almost all of the talks given at The General Organology Conference available. The conference, held in November 2014 at Kent University to honor and celebrate the 20th anniversary of the publication of Technics and Time, vol. 1, included keynote speeches by Bernard Stiegler, Maurizio Lazzarato, Antoinette Rouvroy, and other prominent scholars. I was honored to be one of the speakers at the conference, presenting my paper “Cybernetics as the Efficient Organon: the Obsolescence of Knowledge and Subjectivity.” Thanks to the founders of Noötechnics and to all involved for making all of this freely available.

I have embedded here the video of my own presentation, as well as the video of its discussion by Stiegler, Alexander Wilson, and others. Please see Noötechnics’ YouTube channel for the other videos.


Occasionalism and AI

There is a most subtle Occasionalism at work in the cluster-concept of AI. It is well known that what first leads Descartes to the idea of Occasionalism (for anyone reading this and not familiar with the term, roughly put, it is a cosmo-theological doctrine that attributes all causation to an omnipotent God who constantly intervenes to ensure the workings of the world) is humans’ lack of knowledge of their bodies’ movements: how does one move one’s arm without knowing how, without even knowing what physiological processes are at work? For a philosopher with the highest regard for the cogitative capacities of humans, acting without knowledge is a scandal; the idea of an omniscient God intervening between my will (to move my hand) and its fulfillment (my hand moving), acting as the cause, sets things right again, for Descartes’ God knows the what and the how of every act.

Now let us fast-forward to the present. In the preface to Ethem Alpaydın’s Introduction to Machine Learning, 2nd ed., we read that one of the two principal reasons for using machine learning (as a sub-field of AI, it can be implemented in different fashions, from simple transfer functions to neural networks to genetic algorithms) is the inability of humans to “explain their expertise.” Language recognition is one such case. In a sense, the creation of intelligence can be seen as the final piece of what Freud termed the “Prosthetic God” that humans are forever building. Some philosophers object to the use of the term “intelligence” for merely algorithmic behavior; but what about instances of artificial behavior for which no human-devised algorithm exists? I would also like to ask what forms of intentionality are valid in cases where we don’t know what we know, meaning of course the tasks we perform without knowing how to perform them.
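To make the point concrete, here is a deliberately toy sketch in Python (the phrases, labels, and nearest-neighbour approach are my own invention for illustration, not an example taken from Alpaydın): no one writes an explicit rule separating greetings from farewells; the rule we cannot articulate is stood in for by a handful of labelled examples.

```python
# A toy illustration of "learning from examples instead of explicit rules".
# The data and the 1-nearest-neighbour method are invented for illustration.

from collections import Counter
import math

# Labelled examples: the "expertise" lives in the examples, not in any rule.
examples = [
    ("hello there", "greeting"),
    ("good morning", "greeting"),
    ("hi how are you", "greeting"),
    ("goodbye for now", "farewell"),
    ("see you later", "farewell"),
    ("farewell my friend", "farewell"),
]

def features(text):
    """Represent a phrase as a bag of words (a crude numeric stand-in)."""
    return Counter(text.lower().split())

def distance(a, b):
    """Euclidean distance between two bag-of-words vectors."""
    keys = set(a) | set(b)
    return math.sqrt(sum((a.get(k, 0) - b.get(k, 0)) ** 2 for k in keys))

def classify(text):
    """Label a new phrase by its nearest labelled example (1-nearest-neighbour)."""
    f = features(text)
    nearest = min(examples, key=lambda pair: distance(f, features(pair[0])))
    return nearest[1]

print(classify("hello good evening"))    # -> greeting
print(classify("goodbye see you soon"))  # -> farewell
```

The programmer never states what makes a phrase a greeting; the behavior is induced from the examples, which is precisely the situation in which, per Alpaydın, machine learning is reached for.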

There is also another point that I’d like to discuss, and that is the issue of the smart fire-detector (Jürgen Lawrenz had asked whether a “smart” fire-detector that calls your cellphone in case of a fire can be deemed really intelligent or capable of judgment). While fully agreeing that such a gadget can in no way be deemed intelligent, I still feel it is important to remember that machines and technical objects were unable to respond to changes in their milieu before Wiener’s feedback-based cybernetic devices. The very possibility of “learning”, even learning as bare memory accumulation (though there is also change in behavior), is of recent origin. All this aside, I too believe that the most important capacity distinguishing human consciousness from the (perhaps) intelligent machine is judgment, something the latter is incapable of, but also something that it renders more or less obsolete.
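What such a feedback-based device does, as opposed to a merely reactive mechanism, can be shown in a few lines. The sketch below is a thermostat-style proportional controller; the scenario and the numbers are invented for illustration and are not taken from Wiener, but the loop of sensing the milieu, comparing the reading with a goal, and adjusting one’s own output is the cybernetic gesture in miniature.

```python
# A minimal negative-feedback loop: sense the milieu, compare with a goal,
# adjust the output. The thermostat scenario and all numbers are invented.

SETPOINT = 20.0   # desired room temperature in degrees Celsius
GAIN = 0.5        # how strongly the heater responds to the sensed error

def heater_output(measured_temp):
    """Proportional control: act on the deviation between goal and milieu."""
    error = SETPOINT - measured_temp
    return max(0.0, GAIN * error)   # the device heats; it never cools

# A crude simulation: the room leaks heat toward a 10 C exterior and gains
# heat from the heater, whose output is itself a function of the room's state.
temp = 14.0
for step in range(10):
    heating = heater_output(temp)           # sense, compare, adjust
    temp += heating - 0.1 * (temp - 10.0)   # invented thermal model of the room
    print(f"step {step}: temperature {temp:.2f} C, heater output {heating:.2f}")
```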

Defined in terms of subsuming the particular under the universal, that is, defined in loosely Kantian terms, judgment is a human process, and a costly one. Plato was somewhat right in dismissing the Sophists for their attempts at creating shortcuts (“short-circuits”, in Stiegler’s terminology), claiming that universal knowledge was essential to understanding and judgment. Such knowledge could only be amassed through years of study and learning, and was as such a very costly thing. This is one of the reasons why the earlier AI models based on human judgment failed to yield any efficiency. We must not forget that machines are built to be efficient, and the human process of judgment cannot be a successful model. Instead, we get the less-than-judgment machines: cybernetic machines operating at a level below representation and knowledge.