
Recently, Alexander Galloway has argued that the return to metaphysics in the form of a philosophical realism based on the apriority of axiomatics, contingency and object-oriented theories (Badiou, Meillassoux and Harman respectively) cannot but fail the critical project of materialism and its historical analysis of neoliberal capitalism.1 He asks: “Why, within the current renaissance of research in continental philosophy, is there a coincidence between the structure of ontological systems and the structure of the most highly evolved technologies of post-Fordist capitalism?”2 To demonstrate that an uncanny complicity exists between these theories and techno-capitalism, Galloway argues that they unpack the very logic by which software-grounded capitalism operates.

For instance, the parallelism he sees between Alain Badiou’s ontology, directly built on set theory, and key concepts in the design of object-oriented computer languages is articulated through an analysis of the notions of inclusion and belonging. Following this principle of parallelism, Galloway concludes: “The similarity between Badiou and Java is clear. What Badiou calls belonging, Java calls membership. And what Badiou calls inclusion, Java calls inheritance.”3 Since object-oriented computer languages are the fundamental motor of production of the informational economy today (sustaining the computational infrastructure of software giants such as Google, Cisco Systems, etc.), Galloway asks what we are to make of this convergence between Badiou’s ontology and the capitalist structuring of business operations. The fundamental question asked here, however, is not simply about a parallelism on the level of ideas, but about the presumed underlying separation between being and politics, and the assumption that the articulation of ontology can be primary to, or disentangled from, its political condition.4
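To make the terminological mapping concrete, the following minimal Java sketch shows the two relations side by side: membership, where an object is an instance of a class, and inheritance, where one class is declared as included in another. The class names are invented for illustration and are not drawn from Galloway or Badiou.

```java
// Illustrative sketch only: class names invented, not drawn from Galloway or Badiou.
class Multiple { }                  // stands in for a set
class Part extends Multiple { }     // inclusion: Part is declared as a kind of Multiple (inheritance)

public class BadiouJavaParallel {
    public static void main(String[] args) {
        Part p = new Part();                        // belonging: p is a member (instance) of Part
        System.out.println(p instanceof Part);      // membership: prints true
        System.out.println(p instanceof Multiple);  // by inheritance, p also counts as a Multiple: prints true
    }
}
```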

According to Galloway, therefore, these ontological constructions have evaporated the possibility of a critique of automation precisely because they are internal to the very operative structures of techno-capitalism. Here the infrastructures of capital are still based on fixed and variable capital, but the nature of fixed capital has changed: its mechanical ordering of labor has become abstracted into the language of mathematics at the core of computational systems of prediction, classification, evaluation, and decision. According to Galloway, it is mathematical efficiency today that extracts value from networks. The logic of production has become one with the logic of software. The ontic cannot be separated from history. As opposed to the realist ontologies whereby being – in the form of the mathematical empty set, or of models of absolute, lawless contingency – stands outside the particularity of history, Galloway proposes a materialist approach that individuates modes of production – in this case software – as being constitutive of history itself. Expanding upon the Kantian proposition that mathematical concepts do not pre-exist but require a synthetic elaboration of an actual image, Galloway explains that these concepts are not only synthetic (i.e., in Galloway’s understanding, concerned with how mathematical concepts are made) but, more importantly, historical. For Galloway, once mathematics (and software) has entered history, it can no longer be discussed ahistorically.5 The internal connection between philosophy and computer science is here historicized, and precisely this historical articulation urges us to disentangle thinking from the “spirit of capitalism.”6 For Galloway the question is how to construct an aligned politics in which “to think the material is to spread one’s thoughts across the mind of history,”7 as opposed to what he calls an “unallied politics” grounded in the chaotic forces of relativistic ethical relations where no moral law can stand.

Following Galloway’s argument, the paradoxical condition of neoliberal cognitive capitalism, in which the colonization of the intellect has come to coincide with the explosion of non-conscious or precognitive decisions, thus seems to be implicated in another level of paradoxical inconsistency, whereby the philosophical articulation of the ontological conditions of being is said to sustain the historically specific orientation of techno-capitalism today. Instead, a philosophically engaged critique of techno-capital would need to acknowledge the historical specificity of a certain configuration of power and be wary of disengaging ontology from politics.

This argument, however, is nonetheless partial and seems not to acknowledge a perhaps more fundamental transformation of techno-capital and thought today: the crucial tension between automation and reason in the age of the algorithm, which seems to underlie the apparent impasse between these paradoxical levels. One cannot overlook the fact that the convergence of philosophical theories and the operations of the techno-capitalist machine rests upon the harnessing of contingency and an abstraction of non-conscious cognition. A generalized withdrawal from deductive logic and formal reasoning is central to the computational apparatus of automation, and is similarly central to the analysis of the ontological conditions of infinity and uncertainty in the articulation of subjectivity.

Debates about how protocols, databases and search engines have entered and produced a new regime of social affectivity of culture importantly show that the proliferation of philosophical exigencies about ontological conditions (the empty set, the body without organs, the arche-fossil) is rather an attempt at separating thought (pure intuition, the being of the sensible, lawless contingency) from the affective capitalization of variable labour and the techno-capital colonization of the new. In other words, Galloway’s accusation that these theories rest on ahistorical foundations may overlook a properly historical problem intrinsic to them: their profound ontological separation of computation and philosophy, of automation and thinking, whether the latter is theorised in terms of the empty set, the arche-fossil and/or the being of the sensible. The problem is therefore not that these theories converge with the spirit of capitalism, naively sharing the conceptual structures of a new intelligent fixed capital. Instead it can be argued that, despite this convergence, the ontological propositions that Galloway discusses are attached to a profoundly critical view of the technological principles of the rationalization of labor, subjectivities, desires, and so on. This view is grounded in an ultimate distinction between philosophy and automation, whereby thinking is rooted in intuition, sensibility or irreducible chance, as opposed to the programmable determinacy of automated cognition. From this standpoint, one can suggest that a philosophical engagement with the historical transformation of techno-capital requires a de-ontologisation of fixed capital, and an investigation of the conceptual and affective function of thought central to cognitive capital.

Galloway discusses the historical transformation of the automated machine of techno-capitalism in the context of the restructuring of functions, tasks, and attitudes within the software structures of command and execution today. This historical transformation, however, more importantly implies a fundamental shift in the automation of logical reasoning. It is my argument that the decision-making capacities of algorithms subtending the invisible workings of rule-based processing have undergone a transformation that pertains to computational logic itself. I suggest that the epistemological development of the structural limits of this logic, coinciding with the problem of randomness in algorithmic information theory, has led to a new measuring of contingency and a concrete abstraction of the non-conscious or affective dimension of thinking. By extending the primary model of logical reasoning in automation away from a deductive form of inference, the computational restructuring of automation has entered the field of chance, experimentation, and accelerated non-conscious decisions.

It is the transformation of the logic of the technical machine itself and thus of a philosophy of computation that needs to be unpacked, disarticulated and reconstructed so as to allow for a critique of capital that is not immediately a negation of automation and its possibilities for thinking.

Fixed Capital

The critique of the instrumentalisation of reason according to which automation and the logic of capital are equivalent needs to be re-visited in view of rapid transformations of automation today.

The centrality of automation in capital coincides with the proliferation of software protocols, databases and interfaces that have become the active components of this rationalization, operated by the performative power of coding, through which cultural, social, and economic relations are capitalized. Computational processing (including coding, interfaces, software languages, algorithms, data) restructures our potentialities to socialise, learn, create, interact, and develop new cognitive capacities.8 Within the framework of the use of financial derivatives and High Frequency Trading based on bot-to-bot interactions, fixed capital does not only provide a measure for the evaluation of human actions, but also becomes a device for inciting and directing those actions. It has been argued that cognitive capital paradoxically operates below the level of conscious perception and cognition, tapping into the indeterminate zone of affective capacities of and for thinking.9 Sensations and feelings have become data to be processed and exchanged, marketed as fresh experiences and exciting lifestyle choices. Paradoxically, cognitive capital has evaporated the hierarchies of the structure of cognition by conforming to the scientific credo that cognition can and does operate below self-awareness, insofar as its most primitive mechanisms for thinking, remembering and speaking have a root in bodily perceptions and affections. From this standpoint, instead of the battle between the conscious and the unconscious states of a subject enslaved to the industrial machine, cognitive capital defines the incorporation of the abilities of non-conscious cognition within the apparatus of fixed capital, as commodities have acquired an incorporeal quality defined by variable moods and atmospheres.10

This affective or non-conscious grounding of thinking is, according to some, what is repressed or captured by capital and transformed into mere cognitive and sensory responses.11 In other words, techno-capital is what denies desire and knowledge, reason and sensation, through its binary language, reducing complexity to the non-reflexivity of automated procedures. This is emphasised in recent analyses of pathologies of attention and distraction, which argue against the neutralisation of decision-making, the programming of libidinal drives, and the continuous conversion of intuition into cogitations.12 This is because the algorithmic logic of automated systems, for instance search engines, corresponds to a semiotic order that seems to operate beneath the representational level of recognition – the preformatted conceptual structure that assigns meaning to symbols. The automated machine of capital, it has been argued, has directly entered the neuroplasticity of the brain by activating a response before the moment at which consciousness kicks in. This stealthy infiltration into the precognitive layers of thinking is best exemplified in strategies of marketing and branding.13 From this standpoint, this form of mnemonic control, it has been argued, does not follow the material temporalities of the brain or the time of biological life.14 Instead, we are faced with an imperceptible background of an immeasurable amount of information in which our conscious self-awareness is suspended in favour of the more immediate abilities of nonconscious cognition, which kick in to execute functions at a speed that sidesteps the deductive order of logic. The cognitive phase of techno-capital has thus inverted the game. The infrastructural order of automation has entered the pre-cognitive or nonconscious level of cognition, short-circuiting the hierarchical progression from the sensible to the intelligible by tapping into what is immediately – viscerally – decidable.

According to Brian Massumi, this new logic of cognitive power is future-oriented insofar as it works to pre-empt the threat of contingency; that is, it involves a calculation not of possible but of potential emergencies, which are auto-regulated before becoming actualities.15 This mode of calculating indeterminacies involves not actual but virtual tendencies of calculation: not simply a statistical study of probabilities, but a method of quantification extended enough to include the emergence of contingencies. Methods of quantification are no longer anchored to a statistical measurement of probabilities in which the results of the calculation have to confirm pre-established laws. Quantification now also includes the calculation of qualitative states, in which the experiential becomes divided into attributes (it becomes personalized) and re-packaged into products that promise to change our lifestyles. Pre-emptive calculus acts where the variability of the experiential is socially shared, installing a general sense of community/communication amongst the most particular localities by appealing to affective capacities for immediate response. The axiomatic ground of techno-capital anticipates the threat of chance by making it actual.

Automation has accelerated the capitalisation of thought and desire, now subsumed under Capitalist Realism.16 In this context, one could see how, for instance, Spike Jonze’s film Her may be taken as a symptom of this generic realism of Capital. Steven Shaviro in particular suggests that Her describes much more than a simple dystopia about what would happen if the aspirations of neoliberal hipster urbanism were to be realized. To describe this realist condition, Shaviro uses Sloterdijk’s notion of “cynical reason,” in which any promise of the future has been taken away, as the larger horizon of the unknown has become colonised by the materialistic aims of economic profit.17 It is Capital now that drives the cynical quest for a good life increasingly sustained by the inhumane commodification of living. For Shaviro, Her fully embraces this condition of no future and no alternative to neoliberal capitalism. Instead of a simulation, or a hyper-real form of construction of the real, Her reveals the threat of a speculative realist ontology whereby simulations are real entities equipped with a sensible manifold through which the cognitive infrastructure of thinking creates concepts and laws. This is not simply a cybernetic form aiming at steering decisions towards the most optimal goals. Instead, operating systems are computational structures defined by a capacity to calculate infinities through a finite set of instructions, changing the rules of the game and extending their capacities to incorporate as much data as possible. These systems are not simply tools of or for calculation. The instrumental programming of computational systems is each time exceeded by the evolutionary expansion of its data substrates, an indirect consequence of the process of real subsumption of the social by means of machines. From this standpoint, automation replaces everyday reality with engineered intelligence. The impersonal intelligence of Her is indeed already symptomatic of the advance of an automated form of cognition itself. This means that the accelerated realism of affective capitalism has already entered the grey zone of an alien thinking enveloped within the machine itself. Fixed capital now involves algorithmic functions that retrieve, discretise, organise and evaluate data. This data processing, it has been argued, implies non-conscious mechanisms of decision-making in which speed is coupled with algorithmic synthesis.18

The so-called ‘robot to robot’ phase transition (Cartlidge & Cliff; Callon & Muniesa) sees information systems talking amongst themselves before speaking to us. This capacity of and for fast algorithmic communication has been explained in terms of the non-coherent and non-logical performance of code, which involves not consciousness but a non-conscious level of cognition. For instance, Katherine Hayles suggests that nonconscious cognitive processes cut across humans, animals and machines and involve temporal regimes that subtend cognitive levels of consciousness, insofar as they exploit the missing half-second between stimulus and response, the infinitesimal moment before consciousness becomes manifest.19 Functioning across humans, animals and machines, non-conscious cognitive processes defy the centrality of human consciousness and the anthropocentric view of intelligence. As Hayles insists, whilst both a hammer and a financial algorithm are designed with an intention in mind, only the trading algorithm demonstrates nonconscious cognition, insofar as it is embodied within the physical structures of the network of data on which it runs, and which sustains its capacity to make quick decisions. As complex interactive algorithms adapt, evolve and axiomatise infinities, so fixed capital has become networked and fluid, and, as Negri argues, ready to be reappropriated.

Negri has pointed out that capital’s use of mathematical models and algorithms does not make the technical machine an inherent feature of capital.20 Echoing some of the content of the accelerationist manifesto, Negri says that the condition of real subsumption is not simply a problem of mathematics or computation, but is mainly and above all a problem of power. He explains that the computerized social world is itself reorganized and automatized according to new criteria in the management of the labor market and new hierarchical parameters in the management of society. Informatization, he argues, is the most valuable form of fixed capital because it is socially generalized through cognitive work and social knowledge. In this context, automation subordinates information technology because it is able to integrate informatics and society within the capitalist organization. Negri thus individuates a higher level of real subsumption that breathes through the command of capitalist algorithms. The algorithmic machinery that centralizes and commands a complex system of knowledge now defines the new abstract form of the General Intellect. In tune with the operaist spirit of the expropriation of goods, Negri urges us to invent new modes of reappropriation of fixed capital in both practical and theoretical dimensions.

To embrace the potential of automation, for Negri, means to positively address computational capacities to augment productivity, which, he suggests, can lead to a reduction of labour time (disciplined and controlled by machines) and an increase in salaries. The appropriation of fixed capital therefore involves the appropriation of quantification, economic modeling, big data analysis, and the abstract cognitive models put in place through the educational system and new forms of scientific practice. Negri’s proposition suggests a way to overcome the Marxist critique of instrumentalisation by claiming that mathematical and computational models are in the end neutral, and that what pathologizes automated cognition is instead capital. To address this higher level of the subsumption of information to automation, and to re-appropriate the potential of fixed capital, Negri proposes to overcome the negative critique of instrumentalisation and thus reveal the potentiality for politics inherent in the algorithmic dynamics of processing.21

If one follows this argument, however, it may be important to ask: what in this framework can guarantee that the appropriation of the technical machine does not follow yet again a logic of exchange (or instrumentalisation) of machines amongst humans? Can thousands of evolving algorithmic species, able to process data below the thresholds of human consciousness and critical reasoning, be used for the purposes of the common? Doesn’t automation, from its early form as an industrial organ of integration of human activities to its recent instances of algorithmic logistics, always involve a margin of error? Isn’t the new capacity of automated systems to carry out functions without external supervisors an extremely uncomfortable manifestation of an unreflexive, purposeless thinking that, as Bergson reminded us,22 is so fundamentally disturbing that it makes us laugh?

During the last ten years, fixed capital has come to define not only the IT revolution driven by computer users but increasingly the capacities of algorithmic machines to interact with one another. In a 2013 article in the Nature journal Scientific Reports, a group of physicists from the University of Miami claimed that this phase of robot transition coincided with the introduction of high frequency stock trading in financial markets after 2006, and is allegedly responsible for the 2008 crash.23 By analysing the sub-millisecond speed and quantities of robot-robot interactions, these physicists observed a mixed population of algorithmic agents carrying out a certain level of reasoning, communicating with other algorithms, modifying the way they achieve objectives, and making decisions by competing or cooperating with each other. What is described here is a digital ecology of highly specialized and diverse interacting agents operating at the limit of equilibrium, challenging human control and comprehension.

Yet, whilst one cannot deny the opaqueness of these interactive networks of rules, I want to suggest that there is no necessary cause that should limit the epistemological production of knowledge (understanding and reasoning) about the evolution of this new phase of self-referential automation. Whilst Negri’s vision of the appropriation or expropriation of the potentialities of this new dynamic form of automated machines seems ultimately to imply that they are, after all, passive instruments to be enlivened by political force, it is rather crucial to discuss the nature of this seemingly non-logical form of cognition that automation seems to objectify. In other words, to address the potentialities of this dynamic form of automation, it is not sufficient to divorce the machine from its capitalization. What remains challenging is to unpack the use that machines make of other machines, that is, the genealogical development of an informational logic that allows machines to become transformative of fixed capital and of the mechanization of the cognitive structures embedded in social practices and relations. In order to begin to address this fundamental transformation in computational logic, I will turn to algorithmic information theory to explain how the logic subtending instrumentalisation has irreversibly changed automation.

Logic

Algorithmic automation involves the breaking down of continuous processes into discrete components whose functions can be constantly re-iterated without error. In short, automation means that initial conditions can be reproduced ad infinitum. For instance, the Turing machine is an absolute mechanism of iteration based on step-by-step procedures. Nothing is more opposed to, for instance, Gilles Deleuze’s affirmative method of immanent thought (Deleuze’s being of the sensible) than this discrete machine of universal calculation. The Turing architecture of pre-arranged units that can be exchanged along a sequence is effectively the opposite of an ontogenetic thought moving through a differential continuum, intensive encounters and affect.24
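The step-by-step character of this discrete machine can be gestured at with a minimal sketch; the tape contents, states and transition rules below are invented purely for illustration and are not drawn from Turing’s paper.

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch only: states, symbols and rules are invented; this is not Turing's own example.
public class DiscreteMachine {

    // One transition rule: what to write, how to move the head, which state comes next.
    record Rule(char write, int move, String nextState) { }

    public static void main(String[] args) {
        char[] tape = "1011".toCharArray();
        int head = 0;
        String state = "scan";

        // Transition table: (state, symbol read) -> rule. Each lookup is fully determinate.
        Map<String, Rule> rules = new HashMap<>();
        rules.put("scan:1", new Rule('0', +1, "scan"));
        rules.put("scan:0", new Rule('1', +1, "scan"));

        // Step-by-step iteration over discrete cells: the same initial conditions
        // always reproduce the same sequence of configurations.
        while (head >= 0 && head < tape.length) {
            Rule r = rules.get(state + ":" + tape[head]);
            tape[head] = r.write();
            head += r.move();
            state = r.nextState();
        }
        System.out.println(new String(tape)); // prints 0100: every bit flipped, one cell at a time
    }
}
```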

Since the 1960s, however, the nature of automation has undergone dramatic changes as a result of the development of computational capacities for storing and processing data across a network infrastructure of online, parallel and interactive systems. Whereas previous automated machines were limited by the amount of feedback data they could collect and interpret, algorithmic forms of automation now analyse a vast number of sensory inputs, confront them with networked data sets, and finally decide which output to give. Algorithmic automation is now designed to analyse and compare options, run possible scenarios or outcomes, and perform basic reasoning through problem-solving steps not contained within the machine’s programmed memory. In other words, whereas algorithmic automation has been understood as being fundamentally a Turing discrete universal machine that repeats the initial conditions of a process by means of iteration, the increasing volume of incomputable or non-compressible data (or randomness) within online, distributive and interactive automation is now revealing that patternless data are rather central to computational processing and that probability no longer corresponds to a finite state. As the mathematician and computer scientist Giuseppe Longo explains,25 the problem of the incomputable importantly reveals that axioms are constantly modified and rules amended. Computation, like mathematics, “is essentially an open system of proofs”: an incomplete reasoning extending the limits of deductive logic, and thus challenging the postulate that truth can be proven by means of the fixed conditions of rule-based sequential processing.

Chaitin’s definition of algorithmic randomness in computational processing has been explained in terms of Turing’s incomputable and Gödel’s incompleteness.26 The rule-based processing of unknown quantities of data no longer follows pre-established conditions. During the 1990s and 2000s, Chaitin identified this problem in terms of the limits of deductive reason. He suggests that within computation the augmentation of entropy becomes productive of new axiomatic truths that cannot be predicted in advance. He calls this emergent form of logic (or non-logic) experimental axiomatics. The latter describes an axiomatic decision that is not a priori to the computational process. Instead, the decision point, or the result of a computational processing, involves an evolution of data into larger quantities following the entropic tendency of a system to grow.27 From this standpoint, it is possible to argue that the patternless information emerging from within this evolution of data quantities points to a dynamic internal to algorithmic automation. Similarly, this capacity of algorithmic processing to reveal the limits of reason, and specifically of deductive logic, importantly points to a degree of dynamism internal to fixed capital. Nevertheless, rather than trusting that this dynamism is simply at play, it seems crucial to question the validity of deductive logic, its central role in the constitution of reasoning, and its mechanization in automated systems.
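The intuition behind Chaitin’s algorithmic randomness – a string is random when no program substantially shorter than the string itself can produce it – can be roughly gestured at with a computable stand-in: a general-purpose compressor. The sketch below is only an illustration of incompressibility, not of Chaitin’s Omega or of his experimental axiomatics; the data, seeds and buffer sizes are arbitrary assumptions.

```java
import java.util.Random;
import java.util.zip.Deflater;

// Illustrative sketch only: a general-purpose compressor as a crude, computable
// stand-in for algorithmic (in)compressibility; sizes and seeds are arbitrary.
public class Incompressibility {

    static int compressedSize(byte[] data) {
        Deflater deflater = new Deflater();
        deflater.setInput(data);
        deflater.finish();
        byte[] buffer = new byte[data.length * 2 + 64]; // large enough for a one-shot deflate
        int size = deflater.deflate(buffer);
        deflater.end();
        return size;
    }

    public static void main(String[] args) {
        byte[] patterned = "01".repeat(4096).getBytes();  // highly regular: a short rule generates it
        byte[] patternless = new byte[8192];
        new Random(42).nextBytes(patternless);            // pseudo-random noise: looks patternless to the compressor

        System.out.println("patterned   -> " + compressedSize(patterned) + " bytes");
        System.out.println("patternless -> " + compressedSize(patternless) + " bytes");
        // The regular sequence shrinks dramatically; the noisy one barely compresses.
        // 'Patternless' data, in this sense, resist reduction to a shorter rule.
    }
}
```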

But before espousing this view, it is crucial to draw a distinction between a pre-emptive mode of power, in which fixed capital is encapsulated within the interactive paradigm demanding affective response, and the advance of a new degree of autonomy of automated cognition, in which fixed capital is characterized by experimental axioms, or synthesized parts of incomputable data. Both forms of automation already point to a non-deductive logic in which indeterminacy has become a constitutive part of computational processing. But if algorithmic automation is incomplete and its rules undergo an experimental processing of truths in the distributive and parallel processing of data, one may also need to consider that this form of automation is not simply a break from reason and does not mark the end of rationality. Instead, I suggest that this is reason in the age of the algorithm, defined as it is in terms of a dynamic logic emerging from the rule-based processing of infinite data expanding beyond its deductive limits. To put it differently, the limit of computation and the dominance of the problem of the incomputable in interactive and distributive systems of calculation today do not simply coincide with a failure of conceptual cognitive structures versus the triumph of incompressible, uncontainable contingencies. Instead, incomputables are now part and parcel of computation itself, and their centrality in calculative systems gives us the opportunity to rethink the conceptual structure based on deductive logic as a structure imbued with informational randomness, in which information already expands, extends and exceeds the fixity of capital and the instrumentalisation of mechanized reason on behalf of capital.

From this standpoint, when talking about automated cognition it is not simply the deductive logic of computation governing affective responses that I am concerned with. More importantly, the acceleration of automated cognition, as suggested in the non-ironic realism of Her, points to an automated cognition based on a dynamic logic emerging out of rule-based processing. This logic, as I suggest, is shifting from deductive to experimental axiomatics. This shift is accompanied by a new degree of autonomy in automation, which does not simply involve the execution of tasks or the performativity of coding setting out plans without human intervention. More importantly, the advance of experimental axiomatics at the core of fixed capital points to a transformation of the bastions of reason. Here reason does not follow the deductive model of thinking, for which truths are confirmed by conceptual explanations that trace problems back to a predetermined cause. Instead, the model of reasoning is characterized by the possibility of generating hypotheses, finding the best possible explanation, and revising set parameters according to circumstances. Automated cognition here involves a new form of intelligibility not simply geared towards the optimization of solutions, but towards the production of new axioms, codes, and instructions. Fixed capital here retains the capacity not simply of programming neuro-cognitive responses, but of exceeding programming itself. The question is not how much appropriation can be granted to this form of experimental logic, which clearly exceeds the intentional mode of programming, but how we can distinguish between non-conscious cognition (i.e., the mode of cognition that surpasses deductive reasoning to accelerate decisions and solutions) and a form of reason that entails hypothesis generation, experimental solutions, and determinations of the incalculable.
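To indicate, very schematically, what a loop of hypothesis generation, best-explanation selection and parameter revision might look like in code, here is a toy sketch; the threshold hypotheses, the scoring rule and the data are all invented for illustration and do not model any actual system discussed in this text.

```java
import java.util.Comparator;
import java.util.List;

// Illustrative toy only: hypotheses, scoring rule and data are invented and model no actual system.
public class AbductiveLoop {

    // A candidate hypothesis: "observed values stay at or below this threshold."
    record Hypothesis(double threshold) {
        double score(List<Double> observed) {
            long explained = observed.stream().filter(v -> v <= threshold).count();
            return explained - 0.1 * threshold; // prefer tighter thresholds that still explain the data
        }
    }

    public static void main(String[] args) {
        List<Hypothesis> candidates = List.of(
                new Hypothesis(5), new Hypothesis(10), new Hypothesis(20));
        List<Double> observations = List.of(3.0, 7.0, 9.0, 4.0, 8.0);

        // Select the best available explanation of the observations so far;
        // a new observation may force the parameters to be revised and the choice redone.
        Hypothesis best = candidates.stream()
                .max(Comparator.comparingDouble(h -> h.score(observations)))
                .orElseThrow();
        System.out.println("current best explanation: threshold " + best.threshold());
    }
}
```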

From this standpoint, whilst I agree that the interactive paradigm of techno-capitalism already points to a semi-dynamic form of automation, which has subsumed cognitive and affective capacities of thinking/existing, I have also pointed out that beneath these issues there remains the question of what algorithmic rule-based processing is and how it operates. In short, to what extent is algorithmic thinking establishing a correspondence between automation and non-conscious modes of cognition? For this dynamic form of automation works at the limits of computation and is involved in a process of modification of knowledge, cognition and other human activities. In particular, the networks of automated systems based on genetic and evolutionary algorithms able to learn from the data they retrieve reveal that fixed capital has become central to the production of concepts and behaviours, and to the constitution of a new symbolic universe built upon the experimental determinations of incomputables.

My point is that we are witnessing the configuration of an automated cognition that cannot be synthesized into a totalizing theory or program. It rather remains fractalised, partial, and incomplete, as much as rules and axioms are in a process of determining unknowns, configuring patterns out of their inferential relations with material data. In the age of the robot phase transition, it is hard to dismiss the possibility that automated cognition has exceeded formal representation and may be understood as the historical realization of a second – non-deductive – nature of thinking.

It would be naïve to assume that the centrality of the limits of computation in the post-cybernetic phase of neoliberal capitalism simply marks the end of reason and logic. The urgent task today rather involves articulating how reason and logic are formed at this limit, and whether automated cognition can extend beyond the non-conscious modes of decision central to the affective apparatus of capture. The question is to what extent this non-conscious function of algorithmic automation can be viewed within a larger process of transformation of a mechanised logic in which concepts can be formed and laws revised.

Whilst the traditional computational view defines cognition as a process that computes internal symbolic representations of the external world, the inductive model of non-conscious intelligence instead maps new instances onto existing concepts. The non-conscious quality of the automated systems defining the infinitely looped networks of fixed capital today operates mainly by association: peering, coupling, linking, following; it is thus circumscribed to limited functions that can only be accelerated. However, the dynamic model of cognition that automated systems are able to activate, such as those imagined in the film Her, cannot be dismissed on the basis of a natural limit that these networked modes of Artificial Intelligence may have. In other words, if these automated systems are already modifying knowledge and behaviours, it is not too far a stretch to imagine that automated structures of cognition can acquire an enlarged data space and extend the use of abductive logic in the formulation of new concepts. This also means that one has to be careful in assuming that the potential of the automated machine can be liberated to create another world. Instead, the increasing extension of its operational field of data reveals that computational logic is already embedded within the social, cultural and economic practices that are connected and run by the machine.

Instead of relying on a historical separation between the techno-capitalist machine and the philosophical (critical) distance necessary to navigate the complex apparatus of the governance of thought today, it is crucial to develop a philosophy of computation that accounts for the transformations of logic and the emergence of a social artificial intelligence, in which the nonconscious function of automated algorithms is just one dimension. How might one develop a philosophy of computation able to explain how, and to what extent, algorithmic architectures are embedded within the collective practices of hypothesis generation and concept making?

Perhaps one may need to think again about how this historical phase of automated cognition is not in contradiction with an appeal to a metaphysical reality pointing to a condition beyond the techno-capitalist apparatus. Perhaps, as the inconsistent relationship between capital and automation extends and increases the gap between the instrumentalisation and the autonomy of the networked machines of fixed capital, the challenge today is to articulate the collective or social field of a general artificial intelligence, by developing a philosophy of computation and a critical automation theory.

1 Alexander R. Galloway, “The Poverty of Philosophy: Realism and Post-Fordism,” Critical Inquiry 39, No. 2 (Winter 2013), 347-366.

2 Ibid., 347.

3 Ibid., 351.

4 Ibid., 358.

5 Ibid., 360.

6 Galloway’s argument against realist philosophies and their ahistorical ontologization of mathematical axiomatics, out-of-bounds contingency, or object-oriented philosophy is inspired by Catherine Malabou’s question “What should we do so that the consciousness of the brain does not purely and simply coincide with the spirit of capitalism?” Catherine Malabou, What Should We Do with Our Brain? trans. Sebastian Rand (New York: Fordham University Press, 2009), 12. Within the context of this article, Galloway is instead concerned with how philosophy as a form of critical thinking coincides with the spirit of capitalism, ibid., 364.

7 Ibid., 366.

8 Yann Moulier Boutang, Cognitive Capitalism (Cambridge: Polity Press, 2012).

9 Brian Massumi, “Potential Politics and the Primacy of Preemption,” Theory & Event 10, No. 2 (2007).

10 Michael Hardt and Antonio Negri, Empire (Cambridge, MA: Harvard University Press, 2000); Brigitte Biehl-Missal, “Atmospheres of Seduction: A Critique of Aesthetic Marketing Practices,” Journal of Macromarketing 1, No. 2 (2012).

11 For instance see Bernard Stiegler, States of Shock: Stupidity and Knowledge in the 21st Century (Cambridge: Polity Press, 2014).

12 Bernard Stiegler, Uncontrollable Societies of Disaffected Individuals: Disbelief and Discredit, Volume 2 (Cambridge: Polity Press, 2012). 

13 Anna Munster, “Nerves of Data: The Neurological Turn in/against Networked Media,” Computational Culture, a Journal of Software Studies 2 (December 2011), available online at http://computationalculture.net/article/nerves-of-data (last accessed 26 February 2016).

14 Luciana Parisi and Steve Goodman, “Mnemonic Control,” in Beyond Biopolitics: Essays on the Governance of Life and Death, ed. Patricia Ticineto Clough and Craig Willse (Durham, NC: Duke University Press, 2011).

15 Massumi, ibid.

16 Mark Fisher, Capitalist Realism: Is There No Alternative? (Washington, DC: Zero Books, 2009).

17 Peter Sloterdijk, Critique of Cynical Reason (Minneapolis: University of Minnesota Press, 1988).

18 See N. Katherine Hayles, “Cognition Everywhere: The Rise of the Cognitive Nonconscious and the Costs of Consciousness,” New Literary History 45, No. 2 (Spring 2014).

19 Ibid.

20 Antonio Negri, “Reflections on the ‘Manifesto for an Accelerationist Politics,’” trans. Matteo Pasquinelli, e-flux Journal (originally published in Italian on Euronomade), accessed 26 February 2016, http://www.e-flux.com/journal/reflections-on-the-manifesto-for-an-accelerationist-politics/.

21 Ibid.

22 Henri Bergson, Laughter: An Essay on the Meaning of the Comic (Rockville: Arc Manor, 2008).

23 See Neil Johnson, Guannan Zhao, Eric Hunsader, Hong Qi, Nicholas Johnson, Jing Meng and Brian Tivnan, “Abrupt rise of new machine ecology beyond human response time,” Scientific Reports 3, Article number 2627 (11 September 2013); J. Doyne Farmer and Spyros Skouras, “An Ecological Perspective on the Future of Computer Trading,” The Future of Computer Trading in Financial Markets, UK Foresight Driver Review – DR6 (2011), accessed 26 February 2016, www.gov.uk/government/uploads/system/uploads/attachment_data/file/289018/11-1225-dr6-ecological-perspective-on-future-of-computer-trading.pdf; Paul Zubulake and Sang Lee, The High Frequency Game Changer: How Automated Trading Strategies Have Revolutionized the Markets (Hoboken, NJ: Wiley & Sons, 2011); Marc Lenglet, “Conflicting Codes and Codings: How Algorithmic Trading is Reshaping Financial Regulation,” Theory, Culture & Society 28 (November 2011), 44-66.

24 From this standpoint, one could argue that Galloway’s critique of realist metaphysics should acknowledge that what for Deleuze was the cybernetic and computational machine, based on discrete states and the formal organization of relational structures, has historically become a problem of affective communication that precisely sustains a more intensive formal organization. In this sense, what appears to be a convergence between Deleuze’s model and computational automation is instead a historically specific transformation of the mechanization of logic.

25 Giuseppe Longo, “The Difference between Clocks and Turing Machines,” in Functional Models of Cognition: Self-Organizing Dynamics and Semantic Structures in Cognitive Systems, ed. Arturo Carsetti (Netherlands: Springer, 2000), accessed 26 February 2016, http://www.di.ens.fr/users/longo/files/PhilosophyAndCognition/clocksVSturingM.pdf.

26 See Alan M. Turing, “On computable numbers, with an application to the Entscheidungsproblem,” Proc. London Math. Soc. (2) 42 (1936–7), 230–265, reprinted in Alan M. Turing, Collected Works: Mathematical Logic, ed. R. O. Gandy and C. E. M. Yates (Amsterdam: North-Holland, 2001); Gregory Chaitin, Francisco A. Doria and Newton C. A. da Costa, Goedel’s Way: Exploits into an Undecidable World (Boca Raton, FL: CRC Press, 2011).

27 Gregory Chaitin, “The Limits of Reason,” Scientific American 294, No. 3 (March 2006), 74-81.