Machine Learning and the Project of Autonomy
Technology in the philosophy of Cornelius Castoriadis
“Every society creates its own world, internal and external, and of this creation technique is neither an instrument nor a cause; it is a dimension… an everywhere dense sub-set. For it is present at every point at which the society constitutes what is, for it, the real-rational.”
Cornelius Castoriadis, Crossroads in the Labyrinth, p. 244
Asking the question “Can machines imagine?” will most likely elicit a bemused response bordering on ridicule. How could the cold mechanical calculations of a computer ever resemble imagination — that most irrational and creative aspect of mind?
Probing the role technology plays in the creation of the financial imaginary breaks down the ostensible opposition between imagination and technology, making the question of machinic imagination seem less bizarre. And if the two concepts are rearticulated in relation to one another, a further issue arises: what are the consequences of the technical dimension of the imaginary for any emancipatory politics that aims to call the existing state of things into question?
The following is an initial, speculative sketch of how we might address these matters with the help of philosopher of the imagination Cornelius Castoriadis, a thinker who devoted his life’s work to an attempt to systematically elucidate the creativity inherent in all regions of being and its political ramifications.
Being as creation
According to Castoriadis, every society creates a world for itself. For him, society “exists by instituting the world as its world, or its world as the world” (The Imaginary Institution of Society, p. 186). Society creates (institutes) a social imaginary around which it organizes reality into a ‘world’. This social imaginary is understood as a matrix of interconnected significations, or meanings, conforming to an internal (self-constituted) logic, according to which ‘reality’ is ‘presentified’ by the social to itself as a world. Castoriadis calls this “being-for-itself” (World in Fragments, p. 143). This mode of being-for-itself specific to the social-historical fits within the more general regime of being-for-itself of Castoriadis’ cosmology (discussed below in more detail).
Being-for-itself has an expressly political dimension for Castoriadis. Although it does not necessarily equate with a reflexive, conscious awareness on the part of society of its own self-creation, being-for-itself conditions the potential for such an auto-critical mode of being, the realisation of which is the aim of what Castoriadis refers to as “the revolutionary project of autonomy.” Far from being a finality, a utopian end, an autonomous society would be an endless project of renewal driven by political praxis (knowledge and action). As such, a concrete description of an autonomous society is not possible, beyond the idea that genuine autonomy implies the consistent, self-reflexive creation of norms and institutions that allow the continuous flourishing of autonomy for everyone, i.e. the free capacity to act, live, and create (institute).
The opposite concept, heteronomy, is much easier to grasp, however, and can therefore function as a negative definition of autonomy. According to Castoriadis, heteronomy refers to the situation in which a society is blind to its own self-creative mode of being and displaces the source of its creation outside itself: onto God or the gods in a religious society, for example. The being-for-itself of contemporary capitalist society is thus heteronomous, as it locates the source of its creation within the deterministic framework of the economy/market.
Considering Castoriadis’ theory in light of technological developments of the past few decades, what is the role of current digital technologies in this autonomy/heteronomy binary? Do computationally-driven processes perpetuate heteronomy and impede the possibility of autonomy through the closure of auto-creation from conscious human access?
Machine Learning and Finance
Investigation into the technologies at work within finance capital can afford us a partial view of how the financial imaginary determines reality according to its logic of calculation, and of the impediments to access produced by these technologies. Doing so highlights the politically problematic nature of closure, which can be traced through efforts to map the ways in which finance impacts other areas of social and cultural life.
The operation of the finance industry is heavily dependent on computational processes, which silently drive everything from high-frequency trading, credit checks, and pre-trade analysis to more general market analysis for longer-term investment decisions. The extent of white-collar automation offers a glimpse of where the industry is heading. Since the year 2000, for example, Goldman Sachs has reduced the number of traders on its cash equity desk from 600 to just two, aided by trading software supervised by 200 computer engineers. While a certain level of oversight remains, as this case shows, trading has become a largely automated process, and the role of computation is near totalizing.
Of course, it is one thing to highlight such instances of the automation of repetitive calculations. In these cases the automated decisions and actions reproduce existing economic and financial models and beliefs. It seems uncontroversial, then, to suggest that the world of finance is, to this extent, automatically reproduced by machines. But is it possible to go further and speculate that those machinic processes could produce, that is, create, the financial imaginary? To answer in the affirmative would mean moving beyond an analysis of the simple concretization of the imaginary institutions of finance through automation (i.e. through mere mechanical reproduction). Put otherwise: rather than being a neutral medium through which social institutions are reinstantiated, does finance capital’s machinic regime of classification and relational organization instead contribute to the construction of the imaginary by creating new social significations?
Given the accelerating development of machine learning applications across the finance industry, such a thesis becomes thinkable. Machine learning is a set of techniques in computer science aimed at building computers that ‘learn’, i.e. that program themselves with a minimum of human input in order to achieve maximum efficiency, often through statistical analysis of large amounts of data collected from the social world.
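To make the notion of ‘learning’ concrete, consider a minimal, illustrative sketch of a program that adjusts its own parameter from data rather than having the rule explicitly coded. The data, the learning rate, and the hidden rule are all invented for illustration; nothing here models any real financial system:

```python
# A minimal sketch of 'learning from data': a one-parameter model that
# adjusts itself by gradient descent rather than being explicitly programmed.
# All values here are illustrative assumptions.

def learn_slope(data, steps=1000, lr=0.01):
    """Fit y = w * x to (x, y) pairs by minimising squared error."""
    w = 0.0  # the program's single adjustable parameter
    for _ in range(steps):
        # gradient of the mean squared error with respect to w
        grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
        w -= lr * grad  # the self-adjustment, i.e. the 'learning' step
    return w

# Synthetic observations generated by the hidden rule y = 3x;
# the program recovers the rule without it ever being written into the code.
observations = [(x, 3 * x) for x in range(1, 6)]
w = learn_slope(observations)
print(round(w, 2))  # → 3.0
```

The point of the toy is only this: the behaviour of the final program is fixed by the data it was exposed to, not by a rule a human wrote down in advance.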
One example of the latest applications of machine learning in finance is the case of the data analytics company Kensho. In the words of its CEO, Kensho offers a service that answers questions like “How do defense, or oil, or airline stocks react to ballistic missile tests by North Korea?” Applying machine learning techniques to the analysis of huge amounts of data gathered from numerous sources, Kensho surveys global events to find correlations with stock and asset prices to advise on portfolio management. These global events cover the full range available in the big data society: “natural disasters, political developments, corporate earnings announcements, product launches and FDA drug approvals” (ibid). What the example of Kensho highlights is the way in which action within the financial field is increasingly predicated upon knowledge produced by computer generated patterning and correlation, which would otherwise be incomputable for the human brain alone.
The generative nature of machine learning disrupts the notion of technique as mere automated mechanical reproduction. Although machine learning programs are trained on data sets provided by human programmers, when applied to real-world interactions they can produce unexpected results. The 2016 defeat of the world champion Go player Lee Sedol by Google DeepMind’s program ‘AlphaGo’ is an example of the creative potential of machine learning. Go is regarded as a highly intuitive and creative game, and in order to win AlphaGo had to be able to replicate the creativity of a Go champion. During the second match against Sedol, AlphaGo played a move that has gone down in history, not because it would lead to the first victory of a computer against a 9-dan (highest-ranking) Go player (AlphaGo had already won the first game), but because it was a new, unforeseeable move that no high-level player would ever make. The move was based on a set of calculations beyond the capacity of a human mind, and it has since changed the way Go strategists think about the game.
In the case of finance, the process of learning opens technical mechanisms to the contingency of the social world. Machine learning programs process and reorder the data received from the social world, making inferences according to their internal computational logic. This, in turn, has consequences for action in and upon the social world, whose effects rebound on the programs themselves in a positive feedback loop. Machine learning is therefore not assembly-line (re)production. Instead, it automates the iterative process of knowledge production and the implementation of that knowledge in action; in other words, a computational augmentation of praxis. As in the example of AlphaGo, machine learning applied to financial decisions could lead to unforeseeable events. However, unlike the game of Go, financial markets have a much greater impact on the organization of the world. It is in this cybernetic interaction with the social environment, it might be argued, that machine-learning processes have the potential to contribute to the generation of the financial imaginary.
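The feedback loop just described can be caricatured in a few lines of code. Everything in the sketch below, the initial price shock, the learning rule, and the market-impact parameter, is a stylised assumption made for illustration, not a model of real markets:

```python
# A schematic sketch of a cybernetic feedback loop: a trading rule that
# learns from prices while its own trades move the prices it learns from.
# All dynamics are stylised assumptions, not a market model.

def feedback_market(steps=50, impact=0.05):
    """Simulate a learner whose actions feed back into its observations."""
    history = [100.0, 100.5]  # an initial price with a small upward shock
    belief = 0.0
    for _ in range(steps):
        # 'learning': the belief tracks the average drift observed so far
        drift = (history[-1] - history[0]) / (len(history) - 1)
        belief = 0.9 * belief + 0.1 * drift
        # 'action': trading on the belief moves the next observed price
        history.append(history[-1] + impact * belief)
    return history

prices = feedback_market()
# The small initial drift is sustained by the loop: observation and
# action are no longer separable stages but one circuit.
```

The toy shows only the structural point: once the learner's output is part of its own input, the trajectory of the system is produced by the loop itself, not by either side alone.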
This is certainly not to suggest the disappearance of human involvement altogether; machine learning techniques still require considerable human input and parametrization. And even if most of any particular process were to be automated (such as in end-to-end automated trading), such processes take place within a broader system of human-machine interactions. Nevertheless, the extent to which machine learning might suspend the privileged position of the human as the sole creator of the social imaginary is a vital question for the philosophical debate concerning imagination/imaginary and, consequently, the project of autonomy. An imaginary that is, even to a limited extent, computationally-created poses significant problems.
The internal processing of machine learning is often black-boxed (as in the case of neural networks) and therefore exceeds current human comprehension. Taking this into consideration, how can we be sure that we would be fully aware of, let alone have access to, the full range of social significations produced by these processes, even while they have effects on the social world? The imaginative capacity required for the existence of autonomy has always been understood as a human biological capacity. Yet if the self-creative capacity of the social were to be partially directed by creative forces not fully translatable into the language of our human biological imagination, how could the latter act on the former? To bring about an autonomous society would thus require overcoming this apparent abyss between the artificial and the biological. The difficulty of this task becomes immediately apparent when one considers the independence of technology in the field of finance.
The opacity of the financial decision
Consider the sheer volume of machine-to-machine interaction that forgoes all human involvement except for high-level macro decisions. High-frequency trading, for instance, is now the dominant method of financial trading, accounting for up to fifty-five percent of trading volume in the US equity market, meaning that approximately seventy-five million trades per day involve no human interaction. Might it be possible that social significations are produced by these machinic interactions operating outside the domain of human access? In other words, might this complex artificial ecology give rise to a situation in which these machinic interactions primitively constitute their own “domains of classes, properties, and relations” (Logic of Magmas and the Question of Being, p. 309), i.e. what Castoriadis calls “signification”?
Machine learning is a productive area for such an investigation because the act of learning, especially when unsupervised, presents a potential situation in which a technical mode of being-for-itself is constituted, that is, a mode of being that creates a world-for-itself through a closed ordering of the world. Learning in the applied context of finance, for example, requires responding and acting in relation to the contingency of the social world, which implies an internal ordering (constitution or formation) of the world as its own (informational) world through the process of signification. Speaking speculatively, certain machinic significations may well be produced within this learning-action process, albeit a form of signification that would be meaningless to human understanding. Both technically and ontologically, the complexity of the interactions within machine ecologies like high-frequency trading surpasses the human capacity to disentangle the code and provide explanatory models for certain decisions made by these machines. If so, would the closure that comes with this being-for-itself create a problem for human (political) autonomy?
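The idea that an unsupervised process can constitute its own ‘domains of classes’ without human-given labels can be illustrated with a toy clustering routine. The data, the choice of two clusters, and the naive initialisation are all arbitrary assumptions for the sake of the sketch:

```python
# A toy illustration of unsupervised learning constituting its own
# 'classes': k-means clustering groups points without any human-given
# labels. Data and the choice of k are illustrative assumptions.

def kmeans_1d(points, k=2, iterations=20):
    """Partition 1-D points into k clusters around learned centroids."""
    centroids = points[:k]  # naive initialisation from the first k points
    for _ in range(iterations):
        clusters = [[] for _ in range(k)]
        # assignment step: each point joins its nearest centroid's class
        for p in points:
            nearest = min(range(k), key=lambda i: abs(p - centroids[i]))
            clusters[nearest].append(p)
        # update step: each class redefines its own centre
        centroids = [sum(c) / len(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return centroids, clusters

data = [1.0, 1.2, 0.8, 9.0, 9.5, 8.7]
centroids, clusters = kmeans_1d(data)
# Two 'classes' emerge that no programmer specified in advance.
```

However crude, the routine makes the speculative point tangible: the categories it ends up with are artefacts of its own internal ordering of the data, not classifications handed to it from outside.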
Castoriadis can help us here, as he presents a way to escape traditional distinctions between the technological and the social. In his stratified ontology, the technical is an immanent dimension of the creation of the social-historical world-for-itself. This aligns him with other philosophies of technology — such as those found in the work of Gilbert Simondon, Donna Haraway, Bruno Latour, or Bernard Stiegler — that emphasize the co-constitutive relation of humans and technology.
In today’s computational culture, the independence of the technical dimension of society demands further consideration of the technical side of the human-machine coupling. And although Castoriadis’ emphasis was always on the human capacity to create worlds, both at the level of psyche and the social, it is still compatible with a reading that grants a (quasi-)independence to technological becoming.
This is because the being-for-itself of the social is but one region of being in a larger cosmology. Castoriadis presents a stratified schema of being-for-itself that also includes (1) the living being, (2) the psyche, (3) the social individual, and (4) society. Each region of the for-itself maintains a certain degree of interiority and closure from the whole, while at the same time participating in a sort of generic universality within the whole (World in Fragments, p. 150).
Castoriadis adds two more regions, (5) human subjectivity, and (6) autonomous society. These last two are distinguished by a self-reflexivity and radical openness that allows them to put their own creative being into question, that is, they have the capacity for autonomy of self-determination (as discussed above).
Returning to our question regarding the technical dimension of the social, might we suggest the existence of a seventh region of being-for-itself: (7) the technical? With software that can learn and programme itself (e.g. machine learning and genetic algorithms), and semi-autonomous technical systems like those found in finance, such a proposition seems plausible. Crucially, however, even though we are referring to machinic automation, this seventh region of the for-itself is closer to regions (1) to (4), in that it is not autonomous in the self-reflexive sense of (5) and (6). The distinction is between an (auto-constitutive) autonomy of oversight and an (auto-legislative) autonomy of determination. As Castoriadis explains, the term autonomy, as he uses it, designates “the state in which ‘someone’ — singular subject or collectivity — is explicitly and, as far as possible, lucidly (not ‘blindly’) author of its own law.” The implication is that “this singular or collective ‘someone’ can modify that law, knowing that it is doing so” (Logic of Magmas and the Question of Being, p. 308).
This being said, to go too far and assume that technological being-for-itself means an absolute separation of technological becoming from the human dimension of the social would be to miscomprehend the relational constitution of social-historical being. Considering the participation of each region of being in the generic universality of the whole suggests a way out of the apparent dead-end of technological closure. Investigation into the technical dimension of the human-technical coupling reveals the artificiality immanent to the becoming of the social-historical imaginary. Implied in this is the technological-artificiality of the human in general, because (as Castoriadis reminds us) human individuals are socially produced, and technology is an “everywhere dense subset” of society. Beginning again from the position of the human-technical we can therefore re-assess our self-creation, our artificiality, and from there explore the imaginary institutions around which society organizes itself with a deeper comprehension of the technical dimension of the process of institution (even while closure is inevitable at certain scales). This means unpacking the technological-human relation at the level of the social imaginary, in order to recognize the artificial other within ourselves and begin to critically reflect and act upon our self-creation.
Returning to finance, this sketch should by no means be interpreted as an assertion that machine learning lends any necessarily emancipatory potential to finance capital, nor to capitalism in general. It does, nevertheless, suggest the need to think about how the technological dimension of the financial imaginary instantiates and is instantiated by its own modes and logics of interaction and calculation. Accepting the proposition of an ontological auto-generation of the technical system of finance provides a different method for analyzing the composition of the financial imaginary. With such an analytical framework, we might begin to take stock of the modes of action possible within the field of the financial imaginary in an age of computation.
Conrad Moriarty-Cole is a PhD student and associate lecturer at Goldsmiths College, University of London. His work explores the politics of the decision in computational culture, through a development of the concepts of imagination and technology in the philosophy of Cornelius Castoriadis and Gilbert Simondon.