My previous entry for CEO. Another entry has just been published (https://wp.me/p85ndU-uX) on complexity, algorithmic information theory, data compression, and Leibniz's Compressor-God.
This is the abstract of my forthcoming presentation at the 10th Beyond Humanism Conference.
We are all Antigones now: affects, as forms of effective desubjectification, have become vulnerabilities, points where the core of the human-as-behavioral-bundle can be hacked into. In Cybernetic Capitalism, affect is the most political aspect of human life, as it provides a direct, fast, streamlined tool for bypassing the calcified habits that make up the mostly-bodily core of each person's identity. In a world where subjectivity has become an extremely fluid commodity, externalized and marketed, non-conscious habits and bodily memories become the root of continuity and the sole means of identification. The cultural obsession with images of the amnesiac and of those fleeing old lives to start new selves is a clear symptom of the evaluation of the human in terms of the machine's ability to radically alter and adapt itself again and again (Cronenberg's A History of Violence comes to mind, where the identity of the protagonist is revealed in/by his body's fast reactions in a violent situation).
I have always tried to provide a concrete and fact-based analysis of the technologies that lie at the base of Cybernetic Capitalism, and in this paper I am going to show how the Cybernetic Organon (Rahebi 2015) redefines efficiency, intelligence, and creativity in machinic terms, thus placing an impossible demand on proletarized humans to meet cybernetic standards and forms of creativity and fluidity. The Cybernetic Organism (e.g. deep learning neural networks like Google DeepMind's AlphaGo) is a fluid, re-programmable, self-organizing, and autonomous intelligence that can just as easily adapt and specialize as it can reset and re-initialize its patterns for a new round of training and calibration in a new milieu or for a new task. It is this form of forgetting that, though desired and even demanded by Cybernetic Capitalism, cannot be achieved by a biological entity whose habit-forming is hardly reversible and in whom creativity, in its machinic sense of radical fluidity, meets the meaty, biological barrier of germinating habits set in cellular permanence.
This cybernetic fluidity (as opposed to biological-neural plasticity) short-circuits the entire process of individuation (between the universal and the individual) and renders it obsolete, as the cybernetic organism shifts between fully specialized singularity (complete adaptation to the milieu) and the blank potentiality of the fully generic. This is what underlies Stiegler's conceptions of desublimation and the short-circuits of dis-individuation, and it shows why the proletarization of the spirit must be framed in terms of plasticity, habits, and cybernetic intelligence.
I will show how the biological organism, including the human individual, is incapable of the fluidity and Thanatotic creativity that is the hallmark of the Cybernetic Organism, owing to the irreversibility of habit-formation and learning as forms of subjectification and identification; and that, despite the claims of Deleuze and Deleuzians, radical becomings and Burroughs-like BwOs are biologically invalid and only serve to further the “immanent ideologies” of Cybernetic Capitalism.
Finally, I will come to the issue of affect and affective vulnerability: coming up against this biological barrier of “inefficiency”, Cybernetic Capitalism tries to improve its control-and-consumption mechanisms through the manipulation of affects as forms of desubjectification. From conditioning soldiers to incentivizing consumers, it relies on the de-rationalizing, evacuating power of affects to bypass the built-in defenses of the habituated biological organism.
Abstract submitted to a conference in Parma; rejected.
Artificial Life, Artificial Unconscious
Mohammad Ali Rahebi
The Unconscious is the Body
Cybernetics and the Cartesian Problem
The way the body works without knowledge or consciousness has been a complex and certainly consequential problem since Descartes at the latest. The question of how we do something without knowing how we are in fact doing so was a scandal that drove Cartesians to Occasionalism (a concept that is perhaps more true now than ever, with cloud computing and distributed computation).
Since then we have come very far indeed, but it is in AI that we feel the real force of this problem again.
Social cognition, qua language-based and mainly representational, is a highly inefficient mode of signal processing, or of computation in general. As Minsky (despite his own strict adherence to representational models, strangely enough) said:
It’s mainly when our other systems start to fail that we engage the special agencies involved with what we call “consciousness.”
As I shall try to show while discussing the Peircean notion of a “community of believers” and the way the Cybernetic schema manages to overcome its necessity, the most efficient manifestations of artificial intelligence are not representational, as is the case with the recent success of neural networks and machine learning. In fact, if we are to investigate artificial life instead of artificial consciousness (a failed project; consciousness qua delay is simply obsolete in machinic terms), we have to look at the body, at the habitual, that is to say at the non-representational, non-knowledge-based mechanisms and automations that are not language-based in the least.
In Computing Nature, a relatively recent book edited by Raffaela Giovagnoli (one of the organizers), the issue of alternative computation models to Turing Machines was broached and discussed to some extent. Here I will argue that Cybernetics is one such alternative model, one that has been growing, from its origin as a strange interdisciplinary field in the late 1940s, to become the dominant model of computation. Although they are not named as such, the models that are not algorithmic and termination-oriented all operate on the basis of the “cybernetic schema of intelligibility.” These are very familiar processes for all of us, being ubiquitous in “smart” machines and smart software. Giovagnoli et al. describe such cybernetic computation as processes where
The main criterion of success of this computation is not its termination, but its behavior – response to changes, its speed, generality and flexibility, adaptability, and tolerance to noise, error, faults, and damage.
If consciousness is to be taken as linguistic and representational, and as such mediated by the social in its historicity and theoretical bias, Cybernetics would have to be seen as an artificial unconscious, and is in fact much closer to artificial life than such AI trends as GOFAI or even most strands of embodied cognition.
In French philosophy, under the influence of Derrida and his reading of Plato, the problem of technology has been tied to that of writing, as a form of Ur-Technics that contains the essence of all future technics qua mnemotechnics. This idea has also been taken up by Bernard Stiegler, who has taken it further by discussing all technologies as means of “retention” and “protention.” All of this starts with Plato, however, for in his famous Phaedrus he comes to attack writing as a dangerous method, a supplement to spoken language which, unbeknownst to its users, has deleterious effects on memory as well as on truth and on the polis. We are not here concerned with all that; what is of interest is something he mentions in passing, namely that if you were to write down something that is true at one moment (‘it is day’), it would become false the next (as night fell), because unlike the speaker, the writing does not have the ability to correct itself according to its surrounding reality, its environment. In the same manner, he says that when you write down the teachings of some philosopher, the written text will not be able to answer new questions or clarify obscure passages when asked to do so, in contrast to the person using living language.
This is, in fact, the same thing that distinguishes Cybernetics completely from all the technics that came before: cybernetic machines (think of your smartphone) can change their displayed content and their behavior in response to changes in their environment. Unlike previous forms of technology, which might have been mnemotechnics (or not), cybernetic technologies are adaptive, self-modifying, robust. In fact, this same Platonic problem occurs in Descartes (of whom we will not have time to speak) and later in Turing. In his famous “On Computable Numbers,” Turing states that although certain numbers might be uncomputable for the Turing Machine, they might very well be computable for human mathematicians. The reason for this is that the human mathematician is capable of revising and changing “their strategy” completely, from the ground up, while the Turing Machine, the representational computing machine, cannot change its own behavior when faced with an unsolvable problem. It does not have the ability to change its actions spontaneously and in response to its specific problem-environment. This is of course recognizable as the same Platonic problem that we encountered with writing and other technological artifacts. They are Leibnizian in principle, relying on some pre-established harmony, on the stability of the operations in the environment, and are thus rendered absolutely inoperative once the smallest change occurs.
The cybernetic machine, moreover, does not start from a human-defined state or representation but is an immanent, non-representational computing machine whose operations (e.g. in the case of a neural network running Big Data analyses) cannot even be comprehended by a human observer. As long as there is no need for the cybernetic machine to interact with a human, there is no need for representational information; data is much more efficient.
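The contrast drawn above, between a machine bound to a pre-established harmony and one that revises its own state in response to its environment, can be illustrated with a deliberately toy sketch. The class names and numbers below are my own illustrative assumptions, not anything from the cybernetics literature:

```python
# A toy contrast between "Leibnizian" and "cybernetic" computation,
# in the loose sense used in the passage above. Purely illustrative.

class FixedPredictor:
    """Relies on a pre-established harmony: a value inscribed once, offline."""
    def __init__(self, value):
        self.value = value

    def predict(self):
        return self.value  # never revises itself, however wrong it becomes


class AdaptivePredictor:
    """Revises its own state in response to feedback from its environment."""
    def __init__(self, value=0.0, rate=0.5):
        self.value = value
        self.rate = rate

    def predict(self):
        return self.value

    def update(self, observed):
        # Move the internal state toward what the environment actually shows.
        self.value += self.rate * (observed - self.value)


# The environment drifts: what was true ("it is day") becomes false.
signal = [1.0] * 5 + [10.0] * 20

fixed = FixedPredictor(1.0)
adaptive = AdaptivePredictor(1.0)
for observed in signal:
    adaptive.update(observed)

print(fixed.predict())               # stays at 1.0, like Plato's written sentence
print(round(adaptive.predict(), 2))  # has tracked the change, close to 10.0
```

The point of the sketch is only the asymmetry: the fixed machine still answers with its original inscription after the environment has changed, while the adaptive one has no fixed inscription at all, only a behavior.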
Peirce, Habits, Neural Networks
Finally, I will present some of my current research on the relation between the American Pragmatist philosophies, the very important yet undervalued concept of habits, and the recently most successful manifestation of the cybernetic schema, namely Neural Networks.
The abstract of my contribution to C. W. Johns’ The Neurotic Turn, a collection of essays on neurosis and its reconceptualization. This follows Johns’ own work on neurosis and its re-purposing as a philosophical concept, most recently his Neurosis and Assimilation: Contemporary Revisions on the Life of the Concept (see this post for a brief general introduction).
*The illustration, “Well Connected,” is by Ebrahim Zargari-Marandi, part of his New Monstrosities Project.
The Neurotics of Yore: Cyber-Schizos vs Germinal Neuroses
“There are no neurotics anymore; and not just according to the DSM-IV and V. When Deleuze and Guattari were writing Anti-Oedipus, their call for schizophrenia and the emancipation of desire-flows seemed most revolutionary, even idealistic or utopian at times. When Nick Land wrote his controversial texts in the 1990s, things had changed, and Land was perhaps one of the first to see how deeply the Deleuzian concepts of Schizophrenia, of Becoming and the Body without Organs, were connected to Cybernetic Capitalism.
In this chapter I will argue that the Schizo, the emancipatory model of non-subjective (non-individuated) singularity, is already here, living next door, ordering a customized bicycle online. The Schizo has been here for a while now, to the detriment of all things neurotic-normal.
If neurosis is indeed a form of behavioral learning mechanism, a habit-contraction mechanism at the lowest levels of the psyche, a subjectifying, individuating process of response-limitation, then we must realize that Cybernetic Capitalism, the “prosumer” culture, has no use for the neurotic, just as it has no room for such outdated processes as individuation. The similarities between Deleuzian literature and the self-help books now available are not really random; the call to creativity and self-curation goes beyond a nice figure of speech. The market cannot afford a neurotic, stuck in a rut, her consumption choices as limited as her capacity to adapt to change. While the neurotics of yore came up with the New Deal and lifetime jobs, the schizophrenics (a statistical norm today) have come up with precarious labor and millennials who conceive of jobs as short-term stints. The obsession with the apocalypse in the entertainment sector is the most recent manifestation of the majority view of machinic humanity. The message in all those high-budget films is clear enough: if everything changes in an instant, will you adapt (be cybernetic, schizophrenic) or will you perish in your old ways?
I will argue that neurosis qua limit case of habit-formation and behavioral subjectification is still at play as a force or an “attractor” among others, but that it has succumbed to other forces, to the schizophrenic-consumerist attractors, limited to very basic levels of individuation. We do not yearn nostalgically for the neurotic times to be back, nor are we comfortable with the remnants of neurotic formations in philosophy (the linguistic turn, for example). What we have to do is to examine the somatic levels of habit-formation for indications of the emergence of new ideas or modes of being.”
P.S. The Neurotic Turn has added two more great philosophers to its contributors: Benjamin Noys and Patricia Reed will also be included in the book, alongside Graham Harman, Nick Land, Sean McGrath, C. W. Johns, Katerina Kolozova, John Russon, Alex Nevil, and a host of other distinguished scholars.
Photo post by @biblioklept. Source: Untitled — Zdzisław Beksiński
The Neurotic Turn, edited by Charles Johns and featuring the work of Graham Harman, Nick Land, and John Russon, among others, is where my most recent work, “The Neurotics of Yore,” will be published. In this upcoming book, selected scholars present different re-conceptualizations of the once-popular idea of neurosis in a philosophical register. This is a tentative Table of Contents (my emphasis):
Conrad Hamilton – Neurosis in America
Charles Johns – The Neurotic Turn
Mike Ardoline – Neurosis and the Impossibility of Meta-Philosophy
Dany Nobus – Antrozoological Neurosis: On the Trials of Domestication and the Psychology of Happy Pets
Nick Land – Neurosys: On the Fictional Psychopathology of Abstract Horror
Christopher Ketcham – Neurosis: Asymmetry and Infinity
Mohammad-Ali Rahebi – The Neurotics of Yore
Katerina Kolozova – Anorexia Nervosa and Capitalism
Graham Harman – Freud’s Wolf Man in an Object-Oriented Light
Sean McGrath – A Schellingian Take on the Difference between Neurosis and Psychosis
John Russon – Neurosis
Patricia Friedrich – Neurosis, Obsession and Dis-identification relief
Bernardo Kastrup – Physicalism and Neo-Darwinism as Neurotic Defense Mechanisms
Roderick Orner – Stepping Beyond Our Omnipotence: Neurosis As Branding The Incomprehensible
Petteri Pietikainen – Magic and Loss: Modalities of the Nervous Mind
My next post will introduce the themes of my own contribution to the book.
This is the most recent work by Charles William Johns, the editor of The Neurotic Turn, an anthology featuring selected scholars (among them Graham Harman and Nick Land; also myself) reinventing the concept of neurosis for a philosophical afterlife.
Neurosis and Assimilation is Johns’s third book to deal with neurosis and its re-conceptualization. As part of my research on the subject, I will be referring to this book for the novel insight it affords by breaking the monopoly of psychoanalysis over the notion of neurosis and re-purposing it as a tool of philosophy.
Here is the abstract:
This book deals with the possibility of an ontological and epistemological account of the psychological category ‘neurosis’. Intertwining thoughts from German idealism, Continental philosophy and psychology, the book shows how neurosis precedes and exists independently from human experience and lays the foundations for a non-essentialist, non-rational theory of neurosis; in cognition, in perception, in linguistics and in theories of object-relations and vitalism. The personal essays collected in this volume examine such issues as assimilation, the philosophy of neurosis, aneurysmal philosophy, and the connection between Hegel and Neurosis, among others. The volume establishes the connection between a now redundant psycho-analytic term and an extremely progressive discipline of Continental philosophy and Speculative realism.
See also Springer’s own page.
Abstract of a chapter of Mittelstadt and Floridi’s Ethics of Biomedical Big Data. Although accepted, this chapter was never finished. The abstract was written in April 2015.
In this chapter I will attempt to show how the rise of Big Data necessitates a drastic revision of the notion and elements of informed consent. The “Fourth Paradigm” and its concurrent biomedical practices, such as data-driven science and the automation of Evidence-Based Practice, have changed the very foundations of the biomedical sciences, and these changes must also be reflected in their ethics.
The most important change of perspective brought about by these new forms of medical practice is the proliferation of statistically observed, evidence-based treatments and protocols that promise great efficiency. In light of the ever-growing flood of data, the question becomes: can the physician provide adequate information for the patient’s informed consent when she does not know the mechanism of action of the treatment under discussion? Or rather, what are the contents of the information that must be provided regarding the treatment? Is that information composed only of data (statistical rates of success or side-effects), or does it necessarily contain some form of (subjective) medical expertise or scientific insight that is not equivalent to data?
The question “can the physician explaining the treatment to the patient be replaced with graphical representations of the available data” is not a rhetorical one after the “fourth revolution”. The ever-increasing automation of evidence-based practice will necessarily incapacitate the physician from gaining and providing subjective, expert information on newer treatments. Taking things even further, we will find ourselves faced with the fact that the same acceleration and accumulation of data will put efficient treatment forever ahead of the scientific, causal explanation of the condition and its treatment and the discovery of its mechanism of action. Whether or not this will render the theoretical aspects of the biomedical sciences obsolete and trivial remains to be seen, but one thing is certain: the possibility of confounding has increased as never before. The issue cannot be reduced to a rehashing of the “computers make errors” cliché; the widespread forms of “epistemological uncertainty” (Renée Fox) point to a much more paradigmatic problem. In fact, the most important reason for worrying about the insufficiency of data-based consent and the possibility of confounding is that there are no reasons to be concerned about it: the increasingly efficiency/performance-based attitude of the biomedical sciences translates into the prioritizing of proficiency and efficiency in prediction over scientific verity.
Given all of these considerations, a revision of the content of the information necessary for the patient’s informed consent is urgent. I believe it is high time we asked ourselves where we stand with the data/information problem, since lives are surely at stake.
Originally written as an abstract for a conference in May 2015.
There is a deep affinity between the dead and the machinic when it comes to creativity, the creation of novelty; an affinity that is not reducible to their shared character of non-life. If creativity is the creation of an alterity, an act of spontaneity, a whim through which an other “without genealogy” (Malabou 2012, p. 3) comes to be, and if an organ(ism) creates itself into something new through an instant of self-determination, then by necessity that act is a near-death experience; there is something of death in every spontaneous (self-)creation. Spinoza mentions the zombie-poet Góngora in his illustration of a death that is not actually dying but the emergence of new structures, new “ratios of motion and rest” (Spinoza and Parkinson 2000) between the parts, of novelty in a body that can no longer be the same. The amnesiac is a popular and necessary figure in the anthropology of creativity, yet one which pales against the cybernetic machine.
Presented at the London Conference in Critical Thought on June 26, 2015.
At the heart of “big data ideology” lies its claim to an immanence (to the very lives of persons) of which human thought is incapable. It is with reference to the computational ability of real-time data processing that the proponents of big data advertise a sense of humanity and singularized individuality (personalized ads, precision medicine) without the inevitable bias of subjective human thought. It is in the name of this immanence, as Rouvroy noted, that reflective, critical thinking is short-circuited as transcendent and obsolete, if not “dangerous” or “reactionary”. The elimination of reflection is far from limited to the sphere of government/governance: it is the same claim to immanence (a principle of the cybernetic organon, of which big data and algorithmic governmentality are the most recent manifestations) that underlies the so-called “fourth paradigm” in the sciences, replacing causal and explanatory theorization with real-time predictive modeling, where hypotheses are replaced with transfer functions and parameter setting. As more scientific objects are replaced with black boxes of high “reliability,” the question of truth, as well as the questions of why and what, is laid aside, and with it the human capacity for critical reflection. Assisted (read: assailed) by data-based decision algorithms of all kinds and bombarded with visual stimuli, the thinking subject is short-circuited as data is connected directly to her unconscious body, desublimating desires into drives. The dividual, celebrated as the digital savior of neoliberalism, gives new meaning to Guattari’s concept of “machinic enslavement.”