The catatonic knight, the “passional” knight, stands in the field of snow staring at the red blood amidst the white expanse; he has forgotten where he is or who he is or why he came to be there. Only one subjectivity at a time; only a super-plastic, re-adjustable, “adaptive” set of behaviors in each given situation. Deleuze’s Perceval is a cybernetic machine: he is taken out of the quest-milieu and into that of romance, a catatonia waiting for further assignments of action-milieus. He is capable of forgetting, completely, even his own name, and what is much more important, even his low-level bodily habits, and so becomes the emptiest subject, container, de-calibrated even of former habits to allow for maximum potentiality of becoming: if you become nothing, you can become everything. This is why it is only Perceval, the…
We are all Antigones now; effective desubjectifications have become vulnerabilities, points where the core of the human-as-behavioral-bundle can be hacked into. In Cybernetic Capitalism, affect is the most political aspect of human life, as it provides a direct, fast, streamlined tool for bypassing the calcified habits that make up the mostly-bodily core of each person’s identity. In a world where subjectivity has become an extremely fluid commodity, externalized and marketed, non-conscious habits and bodily memories become the root of continuity and the sole means of identification. The cultural obsession with images of the amnesiac and of those fleeing old lives to start new selves is a clear symptom of the evaluation of the human in terms of the machine’s ability to radically alter and adapt itself again and again (Cronenberg’s A History of Violence comes to mind, where the identity of the protagonist is revealed in/by his body’s fast reactions in a violent situation).
I have always tried to provide a concrete and fact-based analysis of the technologies that lie at the base of Cybernetic Capitalism, and in this paper I am going to show how the Cybernetic Organon (Rahebi 2015) redefines efficiency, intelligence, and creativity in machinic terms, thus creating an impossible demand on proletarized humans to meet cybernetic standards and forms of creativity and fluidity. The Cybernetic Organism (e.g. deep-learning neural networks such as Google DeepMind’s AlphaGo) is a fluid, re-programmable, self-organizing, and autonomous intelligence that can just as easily adapt and specialize as it can reset and re-initialize its patterns for a new round of training and calibration in a new milieu or for a new task. It is this form of forgetting that, though desired and even demanded by Cybernetic Capitalism, cannot be achieved by a biological entity whose habit-formation is hardly reversible and in whom creativity, in its machinic sense of radical fluidity, meets the meaty, biological barrier of germinating habits set in cellular permanence.
This cybernetic fluidity (as opposed to biological-neural plasticity) short-circuits the entire process of individuation (between the universal and the individual) and renders it obsolete as the cybernetic organ shifts between fully specialized singularity (complete adaptation to the milieu) and the blank potentiality of the fully generic. This is what underlies Stiegler’s conceptions of desublimation and the short-circuits of dis-individuation and it shows why the proletarization of the spirit must be framed in terms of plasticity, habits, and cybernetic intelligence.
I will show how the biological organism, including the human individual, is incapable of the fluidity and Thanatotic creativity that is the hallmark of the Cybernetic Organism, owing to the irreversibility of habit-formation and learning as forms of subjectification and identification, and that, despite the claims of Deleuze and Deleuzians, radical becomings and Burroughs-like BwOs are biologically invalid and only serve to further the “immanent ideologies” of Cybernetic Capitalism.
Finally, I will come to the issue of affect and affective vulnerability: coming up against this biological barrier of “inefficiency,” Cybernetic Capitalism tries to improve its control-and-consumption mechanisms through the manipulation of affects as forms of desubjectification. From conditioning soldiers to incentivizing consumers, it relies on the de-rationalizing, evacuating power of affects to bypass the built-in defenses of the habituated biological organism.
*The illustration, “Well Connected,” is by Ebrahim Zargari-Marandi, part of his New Monstrosities Project.
The Neurotics of Yore: Cyber-Schizos vs Germinal Neuroses
“There are no neurotics anymore, and not just according to the DSM-IV and V. When Deleuze and Guattari were writing Anti-Oedipus, their call for schizophrenia, for the emancipation of desire-flows, seemed most revolutionary, even idealistic or utopian at times. When Nick Land wrote his controversial texts in the 1990s, things had changed, and Land was perhaps one of the first to see how deeply the Deleuzian concepts of schizophrenia, of Becoming and the Body without Organs, were connected to Cybernetic Capitalism.
In this chapter I will argue that the Schizo, the emancipatory model of non-subjective (non-individuated) singularity, is already here, living next door, ordering a customized bicycle online. The Schizo has been here for a while now, to the detriment of all things neurotic-normal.
If neurosis is indeed a form of behavioral learning mechanism, a habit-contraction mechanism at the lowest levels of the psyche, a subjectifying, individuating process of response-limitation, then we must realize that Cybernetic Capitalism, the “prosumer” culture, has no use for the neurotic, just as it has no room for such outdated processes as individuation. The similarities between Deleuzian literature and the self-help books now available are not really random; the call to creativity and self-curation goes beyond a nice figure of speech. The market cannot afford a neurotic, stuck in a rut, her consumption choices as limited as her capacity to adapt to change. While the neurotics of yore came up with the New Deal and lifetime jobs, the schizophrenics (a statistical norm today) have come up with precarious labor and with millennials who conceive of jobs as short-term stints. The obsession with the apocalypse in the entertainment sector is the most recent manifestation of the majority view of machinic humanity. The message in all those high-budget films is clear enough: if everything changes in an instant, will you adapt (be cybernetic, schizophrenic) or will you perish in your old ways?
I will argue that neurosis qua limit case of habit-formation and behavioral subjectification is still at play as a force or an “attractor” among others, but that it has succumbed to other forces, to the schizophrenic-consumerist attractors, and is now limited to very basic levels of individuation. We do not yearn nostalgically for the neurotic times to return, nor are we comfortable with the remnants of neurotic formations in philosophy (the linguistic turn, for example). What we have to do is examine the somatic levels of habit-formation for indications of the emergence of new ideas or modes of being.”
P.S. The Neurotic Turn has added two more great philosophers to its list of contributors: Benjamin Noys and Patricia Reed will also be included in the book, alongside Graham Harman, Nick Land, Sean McGrath, C. W. Johns, Katerina Kolozova, John Russon, Alex Nevil, and a host of other distinguished scholars.
Abstract of a chapter for Mittelstadt and Floridi’s The Ethics of Biomedical Big Data. Although the chapter was accepted, it was never finished. The abstract was written in April 2015.
In this chapter I will attempt to show how the rise of Big Data necessitates a drastic revision of the notion and elements of informed consent. The “Fourth Paradigm” and its concurrent biomedical practices, such as data-driven science and the automation of Evidence-Based Practice, have changed the very foundations of the biomedical sciences; these changes must also be reflected in their ethics.
The most important change of perspective brought about by these new forms of medical practice is the proliferation of statistically observed, evidence-based treatments and protocols that promise great efficiency. In light of the ever-growing flood of data, the question becomes: can the physician provide adequate information for the patient’s informed consent when she does not know the mechanism of action of the treatment under discussion? Or rather, what are the contents of the information that must be provided regarding the treatment? Is that information composed only of data (statistical rates of success or side-effects), or does it necessarily contain some form of (subjective) medical expertise or scientific understanding that is not equivalent to data?
The question “can the physician explaining the treatment to the patient be replaced with graphical representations of the available data?” is not a rhetorical one after the “fourth revolution.” The ever-increasing automation of evidence-based practice will necessarily prevent the physician from acquiring and providing subjective, expert information on newer treatments. Taking things even further, we will find ourselves faced with the fact that the same acceleration and accumulation of data will put efficient treatment forever ahead of the scientific, causal explanation of the condition and its treatment and the discovery of its mechanism of action. Whether or not this will render the theoretical aspects of the biomedical sciences obsolete and trivial remains to be seen, but one thing is certain: the possibility of confounding has increased as never before. The issue cannot be reduced to a rehashing of the “computers make errors” cliché; the widespread forms of “epistemological uncertainty” (Renée Fox) point to a much more paradigmatic problem. In fact, the most important reason for worrying about the insufficiency of data-based consent and the possibility of confounding is that, within this paradigm, there are no reasons to be concerned about it: the increasingly efficiency- and performance-based attitude of the biomedical sciences translates into the prioritizing of proficiency and efficiency in prediction over scientific verity.
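The worry about confounding can be made concrete with a toy sketch (all numbers and variable names here are illustrative assumptions, not drawn from any medical dataset): a hidden common cause makes a marker an excellent predictor of an outcome that it does not cause at all, and a purely performance-oriented system has no reason to notice.

```python
import random

random.seed(0)

# Hidden confounder Z drives both the "treatment marker" X and the
# outcome Y. A data-driven predictor will find X highly predictive of Y
# even though X has no causal effect on Y whatsoever.
n = 1000
z = [random.random() for _ in range(n)]          # hidden confounder
x = [zi + random.gauss(0, 0.1) for zi in z]      # marker, caused by Z
y = [zi + random.gauss(0, 0.1) for zi in z]      # outcome, caused by Z

def pearson(a, b):
    """Plain Pearson correlation coefficient."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    cov = sum((ai - ma) * (bi - mb) for ai, bi in zip(a, b))
    sa = sum((ai - ma) ** 2 for ai in a) ** 0.5
    sb = sum((bi - mb) ** 2 for bi in b) ** 0.5
    return cov / (sa * sb)

r = pearson(x, y)
print(round(r, 2))  # strong correlation, zero causation
```

Prediction succeeds, explanation fails: nothing in the data alone flags X as causally inert.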
Given all of these considerations, a revision of the content of the information necessary for the patient’s informed consent is exigent. I believe it is high time we ask ourselves where we stand with the data/information problem, since surely lives are at stake.
Originally written as an abstract for a conference in May 2015.
There is a deep affinity between the dead and the machinic when it comes to creativity, the creation of novelty; an affinity that is not reducible to their shared character of non-life. If creativity is the creation of an alterity, an act of spontaneity and a whim through which an other “without genealogy” (Malabou 2012, p. 3) comes to be, and if an organ(ism) creates itself into something new through an instant of self-determination, then by necessity that act is a near-death experience; there is something of death in every spontaneous (self-)creation. Spinoza mentions the zombie-poet Góngora in his illustration of a death that is not actually dying but the emergence of new structures, new “ratios of motion and rest” (Spinoza and Parkinson 2000) between the parts, of novelty in a body that can no longer be the same. The amnesiac is a popular and necessary figure in the anthropology of creativity, yet one which pales against the cybernetic machine.
The Noötechnics YouTube channel has made almost all of the talks given at the General Organology conference available. The conference, held in November 2014 at the University of Kent to honor and celebrate the 20th anniversary of the publication of Technics and Time, vol. 1, included keynote speeches by Bernard Stiegler, Maurizio Lazzarato, Antoinette Rouvroy, and other prominent scholars. I was honored to be one of the speakers at the conference, presenting my paper “Cybernetics as the Efficient Organon: The Obsolescence of Knowledge and Subjectivity.” Thanks to the founders of Noötechnics and all involved for making this freely available.
I have embedded here the video of my own presentation, as well as the video of its discussion by Stiegler, Alexander Wilson, and others. Please see Noötechnics’ YouTube channel for the other videos.
The relation between machines and judgment in the legal sense is already somewhat actualized in “actuarial” judgment: in parole boards, for example, the “judges” are given a computer-produced risk probability based on preexisting statistics and the convict’s behavioral pattern. The “judgment” they render thus really ought not to be considered on a par with legal judgment in its non-actuarial, more common form. Antoinette Rouvroy, the philosopher of law who coined the phrase “algorithmic governmentality,” has given an insightful talk on the subject, which I believe is available in written form, entitled “Governmentality in an Age of Autonomic Computing: Technology, Virtuality, and Utopia” (there is also another talk, “Algorithmic Governmentality and the End(s) of Critique,” available on the web for free).
She meticulously analyzes the forms of governmentality that are increasingly dependent upon predictions made from large data-sets by autonomic machines. Examples include the risk-assessment of individuals in order to identify likely terrorists or offenders early on and preempt crime, as well as the already-mentioned actuarial decision-making in certain legal settings. The most important issue here, as in every discussion of data technologies, is the notion of protocol and the “standardization” of the data produced by different profiling sources (these include Facebook as well as state-related polls and profiling projects). Not only is the human being reduced to a (huge) number of data-fields (name, age, …) processable by “intelligent” and autonomic machines (the latter defined by their autonomous decision-making; common examples are AI enemy players in games and shopping-bots, not to mention Google’s PageRank and Facebook’s now (in)famous News Feed), but the data thus produced are to be standardized according to protocols and pooled together to form data-mines as “complete” as possible, making for more “accurate” predictions, whether about potential criminality in the “risk society” or for the personalization of ads and services.
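The “protocol” step described above can be sketched minimally (the schema, field names, sources, and mappings here are all hypothetical): heterogeneous profiling records are forced into one standardized set of data-fields so that they can be pooled and processed by the same machines.

```python
# Hypothetical shared protocol: every source must be flattened into
# exactly these data-fields before pooling.
SCHEMA = ("name", "age", "risk_score")

def standardize(record: dict, mapping: dict) -> dict:
    """Rename source-specific fields to the shared protocol fields;
    fields the source lacks become None rather than halting the pipeline."""
    renamed = {mapping.get(k, k): v for k, v in record.items()}
    return {field: renamed.get(field) for field in SCHEMA}

# Two imaginary sources with incompatible field names.
social_media = {"display_name": "A. N. Other", "age": 34}
state_poll = {"full_name": "A. N. Other", "risk": 0.12}

pooled = [
    standardize(social_media, {"display_name": "name"}),
    standardize(state_poll, {"full_name": "name", "risk": "risk_score"}),
]
print(pooled)
```

The point of the sketch is the loss it makes visible: whatever in the person does not fit a SCHEMA field simply does not exist for the pooled data-mine.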
The decisions and predictions made by these autonomous agents are the result of turning the human being (and the world) into a black-box whose internal life, intentionality, and states of mind are simply made not to exist, at least where it matters. They are thus not in any way comparable to human judgment, although their end-results can be made to approach human judgment to determinable degrees. The most concise way to describe the difference between the two is to say that machinic decision-making knows nothing of the “excluded middle” and (perhaps) of syllogism in general: it is absolutely singular and bypasses the universal-individual continuum characteristic of judgment.
There is a most subtle Occasionalism at work in the cluster-concept of AI. It is well known that what first leads Descartes to the idea of Occasionalism (for anyone reading this and not familiar with the term, roughly put it is a cosmo-theological doctrine that attributes all causation to an omnipotent God who constantly intervenes to ensure the workings of the world) is humans’ lack of knowledge of their bodies’ movements: how does one move one’s arm without knowing how, without even knowing what physiological processes are at work? For a philosopher with the highest regard for the cogitative capacities of humans, acting without knowledge is a scandal; the idea of an omniscient God intervening between my will (to move my hand) and its fulfillment (my hand moving), acting as the cause, sets things right again, for Descartes’ God knows the what and the how of every act.
Now let us fast-forward to the present. In the preface to Ethem Alpaydın’s Introduction to Machine Learning, 2nd ed., we read that one of the two principal reasons for using machine learning (as a sub-field of AI, it can be implemented in different fashions, from simple transfer functions to neural networks to genetic algorithms) is the inability of humans to “explain their expertise.” Language recognition is one such case. In a sense, the creation of intelligence can be seen as the final piece of what Freud termed the “prosthetic God” humans are forever building. Some philosophers object to the use of the term “intelligence” for merely algorithmic behavior; but what about instances of artificial behavior for which no human-devised algorithm exists? I would also like to ask what forms of intentionality are valid in cases where we do not know what we know, meaning of course the tasks we perform without knowing how we perform them.
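The idea of expertise extracted from examples rather than explained can be illustrated with the simplest learning machine. The following toy sketch (it models nothing like language recognition; the data and constants are illustrative) shows a perceptron arriving at the rule for logical AND from labeled examples alone, without the rule ever being written down by anyone.

```python
# Labeled examples of logical AND; the "expertise" is only implicit in them.
examples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

w = [0.0, 0.0]  # weights, initially knowing nothing
b = 0.0         # bias
lr = 0.1        # learning rate

for _ in range(20):                       # a few passes over the data
    for (x1, x2), target in examples:
        out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
        err = target - out
        w[0] += lr * err * x1             # nudge weights toward the rule
        w[1] += lr * err * x2
        b += lr * err

learned = {inp: (1 if w[0] * inp[0] + w[1] * inp[1] + b > 0 else 0)
           for inp, _ in examples}
print(learned)
```

The final weights encode the rule, yet nothing in them reads as an explanation; in that precise sense the machine, like the Cartesian arm-mover, acts without knowing how.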
There is also another point I would like to discuss: the issue of the smart fire-detector (Jürgen Lawrenz had asked whether a “smart” fire-detector that calls your cellphone in case of a fire can be deemed really intelligent or capable of judgment). While fully agreeing that such a gadget can in no way be deemed intelligent, I still feel it is important to remember that machines and technical objects were unable to respond to changes in their milieu before Wiener’s feedback-based cybernetic devices. The very possibility of “learning,” even learning as bare memory accumulation (though there is also change in behavior), is of recent origin. All this aside, I too believe that the most important feat distinguishing human consciousness from the (perhaps) intelligent machine is judgment, something the latter is incapable of, but also something that it renders more or less obsolete.
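What a Wiener-style feedback device amounts to can be sketched in a few lines (the set-point, gain, and “temperature” below are illustrative assumptions, not a model of any real thermostat): the device senses its milieu, compares the reading to a desired value, and acts on the error, something no pre-cybernetic machine could do.

```python
def run_thermostat(temp, setpoint=20.0, gain=0.5, steps=30):
    """Negative feedback loop: each step corrects a fraction of the error."""
    history = []
    for _ in range(steps):
        error = setpoint - temp      # sense the milieu, compare to the goal
        temp += gain * error         # corrective action proportional to error
        history.append(temp)
    return history

history = run_thermostat(temp=5.0)
print(round(history[-1], 3))  # settles at the set-point
```

The behavior changes in response to the milieu, yet at no point does anything like representation, knowledge, or judgment enter the loop.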
Defined in terms of subsuming the particular under the universal, that is, defined in loosely Kantian terms, judgment is a human process, and a costly one. Plato was somewhat right in dismissing the Sophists for their attempts at creating shortcuts (“short-circuits,” in Stiegler’s terminology), claiming that universal knowledge was essential to understanding and judgment. Such knowledge could only be amassed by years of study and learning and was as such a very costly thing. This is one of the reasons why the earlier AI models based on human judgment failed to yield any efficiency. We must not forget that machines are built to be efficient, and the human process of judgment cannot be a successful model; instead, we get the less-than-judgment machines: the cybernetic machines operating at a level below representation and knowledge.
Leibniz’s monads do not have “windows” through which they could perceive the world; their milieu is unknowable to them. Yet they continue to function in perfect “harmony” with one another, which is crucial, given that according to Leibniz the whole world is made up of monads. Leibniz, whose invention of the integral calculus and of life insurance alone might make him a fit candidate for progenitor of the modern world (in which we are still living, although a bit less each day), explained the monads’ blindness through reference to their perfect design: being omni-scient and ditto-potent, God factored the world in its entirety into the workings of each monad, and it is as such that each can be said to contain or reflect the whole world. The cult of design, the cult of the engineer, is only an extension of the ideas that gave birth to the Monadology. Leibniz is the progenitor of the modern world of engineering and design; his is a revival of mathematics in its true meaning: fore-knowledge. Heidegger explains the modern age as the time when the “principle of sufficient reason (ratio)” holds sway, a principle first stated by Leibniz (“nothing exists without a reason”). It is the same Leibniz who comes up with the idea now known as algorithmic complexity: understanding as compression. He is the prime representative of modern efficiency.
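The idea of understanding as compression admits a quick, rough illustration (using a general-purpose compressor as a crude stand-in for “grasping the rule”): data governed by a short rule can be radically compressed, while patternless data cannot.

```python
import os
import zlib

# A string fully described by a tiny rule ("repeat 'ab' 500 times")
# versus bytes with no rule to grasp.
patterned = b"ab" * 500
patternless = os.urandom(1000)

short = len(zlib.compress(patterned))    # tiny: the rule is "understood"
long_ = len(zlib.compress(patternless))  # roughly as long as the data itself
print(short < long_)
```

In this Leibnizian sense, to understand something is to possess a description of it shorter than the thing itself; the patternless bytes, admitting no such description, remain un-understood.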
It was not until WWII that modelling-based prediction gained an alternative: real-time, feedback-driven prediction. Cybernetics was born; the idea to end all ideas. For an introduction to what comes next, see my presentation at the General Organology conference held at the University of Kent, 20-22 November 2014 (the link will be up soon).
With the cybernetic organ (prosthesis), we are facing the exact opposite of a Leibnizian monad. The Leibnizian monad is the result of ultimate design, the kind of design only a God could be capable of: EVERYTHING has been factored in so that it functions smoothly without needing to see anything at all; it has no windows and yet functions perfectly. The cybernetic device, however, is the opposite of that. Not that it does away with blindness, as so many Capitalists and would-be technopriests might wish; rather, it displaces the blindness: instead of a blind work of perfect design, it becomes a seeing work of no design that turns a blind eye to the essence of its “object.” Instead of the blind monad, we get the black-box. What silently disappears in the process is human-ness: the capacity for thought and the possibility of science as knowledge of causes, of the “why” and the “what.”
I believe its greatest merit lies in its analysis of flash-trading or program-trading as it relates to a dehumanization of time and the resultant choc de la confiance (shock to confidence). Also of interest are the widespread and popular phenomena analyzed by Virilio: although the author himself does not mention it, they are proof that the trends he outlined decades ago in such books as The Aesthetics of Disappearance have now become commonplace.