“Kein Mittel ist nur Mittel.”

[No means is just a means.]

– Günther Anders, Die Antiquiertheit des Menschen. Über die Seele im Zeitalter der zweiten industriellen Revolution

 

“Die Stille ist das Kind der Mechanisierung; und das Unberührte der ungeheure Rest, der übrigbleibt, wenn die Menschheit sich darauf beschränkt, auf vorgeschriebenen Straßen zu fahren oder gar auf Geleisen.”

[Silence is the child of mechanization; and the untouched is the monstrous remainder that is left over when mankind confines itself to driving on prescribed streets, or even on tracks.]

– Günther Anders, Philosophische Stenogramme

In their much-discussed article “Ich habe nur gezeigt, dass es die Bombe gibt” (English: “The data that turned the world upside down”),[1] journalists Mikael Krogerus and Hannes Grassegger described how the British company Cambridge Analytica (CA) allegedly helped Donald Trump win the US election. Drawing on OCEAN, a scale of personality traits, and on a method developed by Polish psychologist Michal Kosinski that combines demographic with personal online data,[2] CA claims to be able to create character profiles of potential voters in order to generate personalized political advertising. On these grounds, the firm promises to deliver the right messages to the right people at the right time: ads can be varied according to OCEAN’s five criteria of “openness,” “conscientiousness,” “extraversion,” “agreeableness,” and “neuroticism.” Through this, so its promise goes, the company can choose between 175,000 different versions of a political ad and, in the end, massively influence not only voters but elections as a whole. As propagated on its website, it uses big data and advanced psychographics “to grow audiences, identify key influencers” in order to “move people to action.”
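To make the ad-variation logic described by Krogerus and Grassegger more concrete, here is a minimal, purely illustrative sketch in Python. The trait names are the standard Big Five (OCEAN); the voter scores, the "pick the dominant trait" rule, and the ad texts are invented for illustration and are not based on CA's actual, undisclosed system.

```python
# Illustrative only: trait-keyed ad variation in the spirit of psychographic
# targeting. Scores, selection rule, and ad copy are hypothetical.

OCEAN_TRAITS = ["openness", "conscientiousness", "extraversion",
                "agreeableness", "neuroticism"]

# Hypothetical ad variants, each keyed to the trait it is meant to address.
AD_VARIANTS = {
    "openness":          "Imagine a different future: vote for change.",
    "conscientiousness": "A plan you can rely on. Read the full program.",
    "extraversion":      "Join thousands at the rally this weekend!",
    "agreeableness":     "A candidate who listens to families like yours.",
    "neuroticism":       "Worried about your safety? Here is our answer.",
}

def pick_ad(profile: dict) -> str:
    """Return the ad variant matching the voter's highest-scoring trait."""
    dominant = max(OCEAN_TRAITS, key=lambda trait: profile.get(trait, 0.0))
    return AD_VARIANTS[dominant]

if __name__ == "__main__":
    # A made-up profile, as might be inferred from demographic and online data.
    voter = {"openness": 0.3, "conscientiousness": 0.4, "extraversion": 0.2,
             "agreeableness": 0.5, "neuroticism": 0.8}
    print(pick_ad(voter))  # prints the "neuroticism"-keyed variant
```

Crossing even a handful of such trait dimensions with demographic attributes quickly multiplies into the tens or hundreds of thousands of distinct ad versions the firm boasts about.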

In both the Anglophone and German-speaking contexts, Krogerus and Grassegger’s report on CA was a viral scoop. Its provocative headline and its narrative featuring a tragic main character (Kosinski, whose research opened up a digital Pandora’s box) seemed to offer a welcome explanation for the otherwise inexplicable election outcome in the U.S. For if a company is able — using only 300 recorded Facebook likes — to predict the behavior of a person better than his or her most intimate partner, it seemed quite probable that such a company would be able to manipulate voter behavior on a large scale.

Yet, shortly after many users had shared, liked, tweeted, and posted the article, the first critical responses emerged, doubting the credibility and the sources of the article itself. An intense wave of protest arose against Krogerus and Grassegger’s claim that CA had substantially facilitated the rise of Trump. Unfortunately, this partly justifiable critique regarding the objectivity of the report often went hand in hand with its rejection in toto, with loaded reactions effectively preventing more nuanced approaches from being heard. This wave of noise predictably led to the suppression of a much-needed debate on the dangers of voter manipulation. More generally, it seemed that, within the milieu of a questionable technology, the possibility of critique itself was rendered rather precarious.

In hindsight, the overheated polarity between affirmation and negation, which eruptively wafted through the Internet in the article’s aftermath, not only reflects the attention-seeking economy of social media. Even more so, it is an indication of problems embedded in the medium’s technical design: seemingly, its programmed architecture of binary 1s and 0s manages to transform complex discourses into rapidly formed, polarized attunements. Facebook’s algorithmic design seems, almost naturally, to reinforce such a one-click culture, a culture that prefers explosive titles over content, pictures over text, and crazy cat videos over differentiated arguments. The “social”[3] network has obviously succeeded in establishing a virtual space of information, but it is one that reinforces the tendency to reduce debates to two sides of the same coin.

In the course of such modes of binarization, another aspect has become all the more evident: namely, that the countercultural promise of digital-democratic self-determination, alongside the often-proclaimed emancipatory power of networks, seems most questionable these days. To better grasp such processes of disillusionment and the interplay between freedom and control, mere analyses of the status quo and further appeals for rational discourse seem rather insufficient. Instead, it is vital to examine the very fundaments of contemporary (technological) truth regimes and to question their heritage. In this vein, it seems particularly essential to understand the entwinement of economic, viz. neoliberal — and, obviously, also political — interests with those cybernetic modes that are reflected both in the design of social networks and in the programs of firms such as Cambridge Analytica.

Order from Noise: The Cybernetic Hypothesis

Hannes Grassegger, one of the authors of the Das Magazin article, provides a vital hint in this regard: “What we are confronted with here is a completely opportunist actor,” he recently explained, “whose main interest is not even the election itself.” That is, for tech companies such as CA it is rather irrelevant whom they target, whose political agenda or conviction they support, or how they potentially influence societies. The only thing that matters is that their method is being used and paid for. Thus, while the multioptional applicability and the often-proclaimed ideological neutrality of technologies such as CA’s (think, for instance, of Mark Zuckerberg’s early attitude regarding the fake-news controversy) surely provide a good sales argument, they clearly provoke realpolitik problems (it is worth noting that right-wing computer scientist Robert Mercer reportedly has a $10m stake in the company). Indeed, inherent in the agendas of big-data companies is not only a prevalent neoliberal opportunism, but an (a-)morality of feasibility, an availability to be used by a variety of actors, actants, and ideologies. Beyond the more trivial insights that homo faber’s tools are always ambivalent and that the aims of companies are naturally profit-oriented, a more fundamental development thus becomes apparent in this context: namely, that within the horizon formed out of economic interests, political ambitions, and technological possibilities, a new mode of governing is constituted, one which associates cybernetic control knowledge with the neoliberal spirit.

CA’s proclaimed methods thus mirror what the authorial collective Tiqqun has described as a “cybernetic hypothesis,” their term for a Western-centric, technocratic grasp of reality, one which conceives of biological, physical, and social conduct and habits as programmable. In this conception, the individual is pictured, they argue, “as a fleshless envelope, the best possible conductor of social communication, the locus of an infinite feedback loop which is made to have no nodes.” According to Tiqqun, the cybernetic hypothesis need not even conceive of individuals as cyborgs or machines to be able to control them, nor does it have to operate with disciplinary imperatives. It is essential, rather, that cybernetic governmentality operate precisely not like a unidirectional machine à gouverner, but that it instead focus on quasi-autonomous self-regulation and adaptive behavior. Thus, what suffices for effective steering and the implementation of control is the subtle influencing of desires, the setting of incentives, the acceleration of processes of self-learning — in short, the exertion of what Bernard Stiegler recently termed “psychopower.” In their poignant The Cybernetic Hypothesis, Tiqqun are very clear on how cybernetic concepts such as feedback, self-regulation, and information have tacitly colonized the social imaginary throughout recent decades. It thus seems only logical that contemporary concepts of the “self” are thoroughly loaded with terms such as autopoietic functioning, self-optimization, or self-perfection.[4] In this vein, and with the sole remaining concrete utopias being fabricated by the Elon Musks and Tim O’Reillys of Silicon Valley, it is not surprising that contemporary sociopolitical visions seem bound to technical concepts. Indeed, it seems that the more our everyday life is designed by Palo Alto revolutionaries, the less conceivable any outside of the “open machine” becomes. Even worse: usually, such an “outside” is not even pictured as desirable, let alone feasible, and this holds not only for those believing in the peacemaking power of self-driving cars, but also, increasingly, for politicians. In a present seemingly determined by chaotic contingency, “systematic,” so-called “smart” solutions come to affect the political with ever-greater intensity. This in turn only helps to entrench the largely technical modus operandi established by cybernetic governmentality.

It is worth mentioning that such a form of subtle government is, however, no invention of the 21st century. It was in 1928 that Propaganda, the ultimate Bible of marketing, was published by Edward Bernays, nephew of Sigmund Freud and inventor of public relations. There he clarified that “the conscious and intelligent manipulation of the organized habits and opinions of the masses is an important element in democratic society.” Propaganda,[5] marketing, and the management of public relations are, says Bernays, necessary, for “vast numbers of human beings must cooperate in this manner if they are to live together as a smoothly functioning society.” With the advent of new big-data technologies, the battleground of incentives has expanded significantly. While there are indeed no reliable indications proving the efficacy of CA’s promises, it is telling that Facebook experimented with its own ability to enhance voter participation during the 2010 midterm elections: Facebook secretly built an application into millions of user profiles to check whether the influence of close friends might enhance their likelihood of voting. Indeed, those who saw that their friends had voted showed a 0.39 percent higher voting participation than those who did not.[6] While this percentage might not appear significant at first glance, the total number suffices to change election outcomes, at least if the margins between the parties are narrow: out of the 61 million users involved in the 2010 experiment, 340,000 more turned out to vote. Thus, the lack of reliable data on CA’s effectiveness should not lead us to simply assume that big-data targeting has no effect on voter behavior.
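A back-of-the-envelope calculation, using only the two figures cited above (0.39 percent and 61 million users), illustrates why even a sub-one-percent effect matters at this scale:

```python
# Simple scaling of the effect size reported for Facebook's 2010 experiment.
# The two inputs below are the figures cited in the text; nothing here is
# taken from the study itself.

participants = 61_000_000   # users included in the 2010 experiment
effect = 0.0039             # 0.39 percent higher turnout among those who
                            # saw that their friends had voted

additional_votes = participants * effect
print(f"{additional_votes:,.0f}")   # roughly 238,000 additional votes

# The 340,000 figure quoted in the text is higher because it reportedly also
# counts indirect mobilization spreading through users' friends, which is
# precisely the point of the experiment. Either number is of the order of
# magnitude that can decide a race when margins are narrow.
```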

Technics and its “Neutrality”

In his magnum opus, Die Antiquiertheit des Menschen (English: The Outdatedness of Human Beings), German philosopher Günther Anders writes: “Nothing is more misleading than the … philosophy of technics, which claims that tools were ‘morally neutral’ in the first place, that they were readily available to arbitrary use, and that the only thing that mattered was how we used them. This very widespread thesis,” Anders concludes, “needs to be fought. For, it eo ipso grants precedence to any apparatus whilst naming philosophy a latecomer; since it presumes that the formulation of the moral problem always only needs to be established retroactively.”[7] Such a critique could readily be extended to new media, for technologies such as big data appear never to be merely technologies; rather, what is preinstalled in them is a logic of control or, at least, a preceding momentum of power.[8] The latter, it seems, is only intensified throughout the current process of digitalization: today, goes the crypto-theological mantra of all the Zuckerbergs & Co., everything ought to be connected, and anyone who criticizes this process becomes an instance of noise, a source of disturbance.

This dynamic became all the more obvious vis-à-vis the digital excitement about the Das Magazin article, which contributed little to fostering a constructive discussion about the sense and nonsense of political big-data targeting. Interestingly, the reactions within the German-speaking context, where the article was first published, closely resembled the later reception in the Anglophone media: affirmation and wide sharing in the beginning were soon overrun by “expert” voices — digital consultants as well as technophile bloggers and authors — who relativized the crowd’s early reactions and finally led to a dismissal of the article as the kind of “data science mythology” that lends too much credence to “voodoo marketing in action.”[9] Unsurprisingly, such “voodoo” terminology was more than happily shared on Twitter by Tim O’Reilly, notably one of the greatest defenders of concepts such as “government as platform” and of algorithmic governance more generally. There he wrote: “Excellent analysis debunking the stories about CA’s role in the election.” Even Cathy O’Neil, author of Weapons of Math Destruction and a nuanced critic of algorithmic regulation, relativized the story’s narrative, noting that political campaigns never “play nice” and that no one has a “monopoly on voter manipulation.”

Similar arguments had dominated the debate in the German-speaking media weeks earlier. In the end, the only question examined was whether the article itself was fake news, while the actual problems alluded to therein were hardly taken into account. In hindsight, a whole range of the “counter-arguments” put forward against the article seem shallow, to say the least. One such shallow counterclaim was that CA’s control could not be all-encompassing, since contents such as the article about CA’s doings could still freely circulate. The shallowness of this counter-critique can readily be seen by remembering, as mentioned above, that cybernetic control functions precisely because it is not total. A similarly shallow critique was made by tech reporter and Digitalista cofounder Elisabeth Oberndorfer, who argued that CA was not as successful as it had been portrayed in the article and that, in any case, Hillary Clinton had used similar big-data tools. Regarding the Anglo-American reception, a Buzzfeed article later made the same claim: “In several articles, Nix has said the company has 5,000 pieces of data on every American adult. As creepy as that may sound, however, it’s common.” Yet, rather than relativize the findings, should this not all the more intensify our concerns about what is obviously a new normalcy currently being established in U.S. digital realpolitik?

Eventually, the upshot of the uproar was the article’s characterization as “a leftist conspiracy theory” by one of Germany’s biggest media platforms. In the U.S., these allegations were echoed in the aforementioned Buzzfeed article, which promised to finally reveal the “truth” about the whole CA chatter while, at the same time, accusing an article written in support of the Das Magazin narrative of being a “conspiracy.” However, the “truth” revealed therein was limited to anonymous sources claiming that CA does not and did not function as the Swiss report, or CA strategist Nix, had contended. While the Buzzfeed article rightfully weakens CA’s own marketing narrative and the way it had been adopted by the Swiss reportage, it concurrently downplays the potential dangers of voter targeting by generalizing psychotechnical targeting and data collection. To cite one last counterclaim, a commentary in the Neue Zürcher Zeitung analogized “fake news” with what it termed the “journalistic text with facts,” concluding that the article’s viral career had finally rendered obvious that “even intellectuals are part of filter bubbles.”[10] Most dubious, however, was the commentary’s undifferentiated conclusion: “the polemical critique of big data is equally shortsighted and as populist as the agitations effused by Trump supporters.”

More nuanced assessments, such as those written by net activist, researcher, and author Wolfie Christl (“There is strong evidence indicating that data-based microtargeting can systemically increase or decrease voter participation of particular groups of people”), by Cambridge scholar Vesselin Popov (“even within the European legal framework, CA-technologies could cause enormous harm”), and in objective journalistic accounts,[11] did not reach a comparable level of user attention.[12] In turn, this led to an overall absence of debate regarding the fact that campaign staffs are considering hiring firms such as CA at all in order to influence citizens’ voting preferences, that is, their voting behavior. The most popular counterargument against the Das Magazin article – that CA does not yet function as precisely as it proclaims to – was not evaluated critically at all, and this despite the fact that tech experts in particular should know well that big-data technologies develop their efficiency successively, i.e., throughout the process of their practical application and the scaling of enormous amounts of data (Google’s driverless car being one example). Thus it was that, because of their late arrival, even such well-argued articles did not help the authors of the Das Magazin article. Confronted with a tidal wave of harsh critiques, the only thing left for them was to relativize all of the article’s claims – even those that were not in need of relativization.

Contextually, it is vital to underscore, as net critic and data activist Geert Lovink emphasizes, that social media lacks actual media, that is, the “curatorial element of human labor.” Regarding the CA debate, this decisive aspect was not only echoed in the reactions to the article, but also in the absence of any political discourse within the platform itself. “The fact that Facebook offers a platform for political content alongside everyday banalities,” claims German media theorist Roberto Simanowski, “does not hide the fact that its technical and social dispositif, through the fostering of unreflexive forms of communication and everyday related communicative contents, is opposed per se to a culture of political discussion.”[13] 

Hype & Counter-Hype: “Democracy as Data”

In this vein, the dynamics of reception regarding the CA report point to the preprogrammed logic of the medium within which both hype and counter-hype took place. In a three-step, crypto-dialectical cybernetic process, the article was first met with feedback (sharing, then counterarguments), soon reached the stage of adaptation, and eventually culminated in the system’s self-regulation (leveling). In retrospect, the polarization resembles a fake debate that ultimately served only to entrench the technopolitical status quo. To be sure, not every dissenting voice was absorbed by the well-pleased, newly reconciled cloud of negated negation. Yet, in a time that conceives of data as the “21st century’s raw material,”[14] the debate on CA seems to mirror a system that proceeds not cautiously, but instead very, very smoothly. Meanwhile, Michal Kosinski is continuing his research, with papers such as “Mining Big Data to Extract Patterns and Predict Real-Life Outcomes,” aimed at “greatly” improving “our understanding of individuals,” recently published in high-impact journals.

While the cybernetic agendas of tech companies surely follow a neoliberal-economic calculus these days, it is becoming increasingly evident that big data also promises to increase the effectiveness of governments. Indeed, it is certainly no coincidence that President Obama’s OpenGov initiatives were advised by Eric Schmidt and Jared Cohen (Google/Alphabet Inc.), as well as by Beth Noveck (a TED talker and author of Smart Citizens, Smarter State: The Technologies of Expertise and the Future of Governing), who idealizes social-network mechanisms as new, practicable models of (political) governance. Hence, not only are election campaigns technologically mediated; everyday political governing, too, quite unaccidentally mirrors the imaginative horizons of very early theories of the state that envisioned forms of data politics as early as the beginning of the 1960s. In this vein, current digital governance (from nudging and digital agendas to the tendency toward what Evgeny Morozov recently termed “solutionism”) can indeed be read as a revenant, in the history of ideas, of theories of political cybernetics put forward decades ago by Karl Deutsch, the little-known Eberhard Lang, and David Easton.

These political theorists propagated a politics largely centered on the recording and subtle steering of the general will at a time when cybernetics was still largely non-applicable to political systems. Based on a socially implemented feedback system centered on adaptive behavior, Deutsch, Lang & Co. sought to gain immediate access to the people’s will in order to effectively control the masses — that is, to create order from noise.[15] Strikingly, the ideal of political cybernetics was precisely not to reinforce political controversy or to stimulate a productive dissensus, let alone to empower people to have their voices heard on a public agora. Rather, it was to insert extensive information into a greater system with the aim of regulating it. Vital to them was a simple formula which is currently experiencing a renaissance in Silicon Valley: the more communication and information, the more democratic the system — irrespective of content and semantics.

While political cyberneticists writing back then still lacked the practical, technical instruments to actualize their feedback fantasies (most significant in this respect is probably management cyberneticist Stafford Beer’s failure to implement cybernetics in Allende’s Chilean economy in the early 1970s), it is about time to analyze in what sense processes of cybernetization are affecting contemporary politics and the political as such. Telling in this respect is political scientist and strategist Parag Khanna’s most recent Technocracy in America, which suggests a model of “direct technocracy” in connection with “info-states” — in short, “democracy as data.” Khanna’s suggestion for a politics post-Trump is that experts (not politicians) decide policies on the basis of data. This, according to him, will ideally lead to states finally achieving their “long-term goals.” In this vein, Khanna is sure that a “devolved world of info-states is indeed the surest path to a more genuinely democratic world.” It is important to note the ultimately cybernetic assumption here: connectivity automatically leads to greater freedom and democracy. One of the examples Khanna has in mind is China; tellingly, however, the book makes no mention whatsoever of China’s plans to introduce an all-encompassing “social credit system” to reinforce “social” behavior, to construct an “honest mentality and credit levels of the entire society” via digital control and nudging. Although such “citizen scores” still seem an impossibility in Western societies, two of Barack Obama’s top advisors on technology and terrorism issues, the aforementioned Eric Schmidt and Jared Cohen, provide a rather alarming hint in their cyber-manifesto The New Digital Age: “To be sure, there will be people who resist adopting and using technology. …Yet a government might suspect that people who opt out completely… are more likely to break laws, and as a counterterrorism measure, that government will build [a] kind of ‘hidden people’ registry. …If you don’t have any registered social-networking profiles or mobile subscriptions… you might be considered a candidate for such a registry. You might also be subjected to a strict set of new regulations that includes rigorous airport screening or even travel restrictions.”

However, we should not only be worried about the permanent potential of a totalitarian use of data here. Equally problematic is the fact that cybernetic concepts of the political replace the democratic notion of “having a share in something” with mere “participation,” and justice with equal access, thus constituting a form of reduction that ultimately leads to the vanishing of politics as we know it.[16] Most worrisome, however, is the programmatic substitution of the concepts of “the social” and “society” by the mere notion of “networks.” In this vein, as Occupy co-creator Micah White recently argued, it must be questioned whether emancipatory resistance can be successful at all in the long term if it increasingly relies on technical network solutions, because such solutions transform “political engagement [into] a matter of clicking a few links.” The fact that, as White puts it, digital activism has partly “adopted the logic of the marketplace” complicates matters further and indirectly indicates a lack of imaginative and sociopolitical alternatives.

In his book Ordo ab Chao, German media philosopher and mathematician Dieter Mersch explains such limited imagination by referring to what he terms “a fundamental misjudgment,” that is, the supposition that “nets and channels obtain basic democratic potential.”[17] Doubting that nonhierarchical spaces could be built by the mere use of network models, Mersch claims instead that “the opposite is the case: They [i.e., the networks] are regimes of authorization, regimes of dressage. They are such precisely through their openness. …If the saying of their democratization is meaningful at all, then it is at best in the sense of an egalization of control, its interiorization of self-access. …Democracy, which has always incorporated a theory of having a share in something, of resistance, perverts its own sense, since now the term of resistance clashes with noise. Its exclusion lies in the interest of the optimization of the nets, which, in turn, enhances the illusion of freedom and participation. Actually, it reinforces… factual departicipation.”[18] An outside to the seemingly closed circle of technical regulation seems all but excluded from Mersch’s diagnosis; that is, to him, networks are hardly democratically re-programmable, and a look at data realpolitik tends to prove him right: the future hopes of contemporary politics seem centered solely on ever more precise devices, ever more transparent techniques, ever better big-data analyses, and so on.

To name one European example of this quite fundamental tendency: in the context of the controversial drafts of the new EU General Data Protection Regulation, the German government has just advocated abrogating the appropriation of compiled data. Most recently, Chancellor Angela Merkel warned that Germany will likely become a digital “developing nation” due to its “excessive data protection.” Such statements reflect not only the questionable possibilities manufactured by surveillance capitalism, but also a more general cybernetic colonization of politics. To be sure, Merkel’s, or the aforementioned Eric Schmidt’s, technopolitical imaginaries do not directly amount to a cybernetic state of control. However, the horizon of ideas currently adopted by politicians, as well as the growing political aspirations of tech geeks, seems clearly headed in an ultimately technological direction. This can be seen not only in the well-known affiliation between Trump and Peter Thiel,[19] but also in recent speculations regarding Mark Zuckerberg’s potential presidential ambitions — ambitions that seem not entirely far-fetched given his appeals to “community governance” and his use of pseudo-political vocabulary in his most recent manifesto. Conversely, on a mission to cultivate contacts between Denmark and Apple, Google & Co., the Danish government has just revealed its intent to appoint a “digital ambassador.”

What becomes obvious in this regard is thus the far-reaching absence of what German philosopher Günther Anders termed “moral fantasy.”[20] What he meant by this was the capacity to envision the long-term consequences of the technically constructed and, in particular, to question the destructive potentials of technology’s efficacy. Today such a stance would, in the first place, imply refusing to leave the design of digital infrastructure, interfaces, and applications solely to engineers. For, if algorithms to a certain extent embody the moral values of their programmers,[21] the fascination with technology’s possibilities ought to be accompanied by professional skepticism — a task to be fulfilled not only by philosophers but also by political scientists, sociologists, political theorists, and the humanities in general.

To conclude: next to the need for a debate regarding the ownership of, and the power of disposition over, our data, there is an urgent necessity to engage with the crucial question of how the digital affects the very character of the political itself and politics as such. In this vein, it is essential not to confuse networks with a politically sustainable concept of the social per se. Rather, what is needed is a debate on the relation between politics and technics that goes beyond the usual poles of blind technophilia on the one hand and unproductive ressentiment on the other. This, however, also necessitates thinking through certain revolutionary shifts that the aforementioned Günther Anders had already diagnosed at the beginning of the 1980s: that insofar as technics has started to intervene in political structures, it is, from day to day, less the case that technics develops “within the political frame. On the contrary, then, an actual revolution takes place. That is: at that point, technics’ significance gains the upper hand to such an extent that the political happenings eventually occur within its frame.”[22]

***

(Shortly before this article was finished, the inventor of the WWW, Sir Tim Berners-Lee, published an open letter in which he claims that online political advertising is being used in “unethical ways,” urging that it “needs transparency and understanding.” Furthermore, he calls for putting a fair level “of data control back in the hands of the people.” In the letter, Berners-Lee poses a fundamental question: “Targeted advertising allows a campaign to say completely different, possibly conflicting things to different groups. Is that democratic?”)

***

This essay is a modified and translated version of the article “‘Democracy as Data?’ Über Cambridge Analytica und die ‘moralische Phantasie’,” which was published by MERKUR on February 6th, 2017.

 

Footnotes:

Some quotes in the text were published only in German and have been translated by the authors.

[1] The original appeared in the Swiss Das Magazin, the English version on Vice’s Motherboard.

[2] For a critical and extensive analysis of Kosinski’s model in light of data mining and predictive analytics, see Wolfie Christl and Sarah Spiekermann’s detailed study Networks of Control, Vienna: Facultas, pp. 13ff. (available online).

[3] Its “social-ity” is particularly manifest not only in the technically orchestrated election campaigns in the US, but also in the way the Das Magazin article was “debated.”

[4] Cf. Ulrich Bröckling, “Über Feedback. Anatomie einer kommunikativen Schlüsseltechnologie,” in: Die Transformation des Humanen. Beiträge zur Kulturgeschichte der Kybernetik, ed. by Erich Hörl and Michael Hagner, Frankfurt a.M.: Suhrkamp, pp. 326–347.

[5] It ought to be noted that in 1928 the term did not yet carry the negative connotations it does today.

[6] Cf. Felix Stalder, Kultur der Digitalität, Frankfurt a.M.: Suhrkamp, p. 224.

[7] Günther Anders, Die Antiquiertheit des Menschen. Band II. Über die Zerstörung des Lebens im Zeitalter der dritten industriellen Revolution, Munich: C.H. Beck, pp. 216–217 (citation translated by the authors).

[8] It is in this respect that media theorists often refer to the military origins of communications techniques.

[9] Most recently, the New York Times quoted CA executives as saying that the firm had never used psychographics in the Trump campaign, and that it was not involved in the pro-Brexit “leave.EU” campaign either (the latter statement might, however, need to be seen in the context of the recently launched inquiry by the ICO [the British Information Commissioner’s Office] into CA’s potential misuse of private data during the “leave.EU” campaign). Yet, shortly before the NYT article was published, CA CEO Alexander Nix repeated his earlier stance (i.e., that CA did play a major role in the US election using psychographics, particularly by targeting voters in the swing states) during a keynote speech at the Online Marketing Rockstars Festival 2017. CA’s role during the US election and the “leave.EU” campaign thus remains opaque, with the decisive efficacy of the proclaimed methods still unproven. Moreover, it is unclear to what extent, if at all, the firm actually used Kosinski’s “OCEAN” method to create voter profiles during the campaigns. Yet equally unproven is the claim that CA did not play a decisive role during the campaigns and that the proclaimed psychographics do not work as precisely as the firm claims.

[10] It is at this point that one might justifiably ask whether any statistics are available regarding who read the article, for which particular reasons, and out of which concerns a particular user shared it. Not everyone might have read it as the long-awaited monocausal explanation for why Trump won the election.

[11] See, for instance, the articles written by Frankfurter Allgemeine Zeitung journalist Uwe Ebbinghaus (“European law does not prohibit Cambridge Analytica’s data analyses as consistently as many try to convince themselves these days”) and Adrienne Fichter (Neue Zürcher Zeitung; “According to the philosopher Jürgen Habermas, the trend of ‘messengerisation’ also threatens the indispensable precondition for the formation of the public sphere: the principle of boundlessness, that is, that no one is excluded from the discourse.”)

[12] This would not be much of a problem, were it not the case that more than 40% of all users nowadays retrieve their information exclusively from social networks.

[13] Roberto Simanowski, Facebook-Gesellschaft, Berlin: Matthes & Seitz, p. 155 (citation translated by the authors).

[14] Cf. Francis Maude and, most recently, Angela Merkel.

[15] Cf. Karl Deutsch, The Nerves of Government: Models of Political Communication and Control, New York: The Free Press; Eberhard Lang, Zu einer kybernetischen Staatslehre, Salzburg: Pustet; Eberhard Lang, Staat und Kybernetik. Prolegomena zu einer Lehre vom Staat als Regelkreis, Salzburg: Pustet; David Easton, A Systems Analysis of Political Life, New York: John Wiley & Sons.

[16] Khanna, who describes his suggestion of technocratic governance as a “democracy without politics,” is just the most recent example of this.

[17] Dieter Mersch, Ordo ab chao, Zurich: diaphanes, p. 55 (citation translated by the authors).

[18] Ibid., p. 56 (citation translated by the authors).

[19] Thiel is the founder of, and a shareholder in, Palantir, a company for big-data-based surveillance technologies with a specific focus on anti-terror analyses. According to The Intercept, Palantir’s new intelligence system “Investigative Case Management” will “assist in President Donald Trump’s efforts to deport millions of immigrants from the United States.”

[20] Günther Anders, Die Antiquiertheit des Menschen, Vol. I, Über die Seele im Zeitalter der zweiten industriellen Revolution, Munich: C.H. Beck, p. 273.

[21] As Cathy O’Neil writes: “Models are opinions embedded in mathematics.” (Cathy O’Neil, Weapons of Math Destruction. How Big Data Increases Inequality and Threatens Democracy, London: Allen Lane, p. 21.)

[22] Günther Anders, Die Antiquiertheit des Menschen, Vol. 2, Über die Zerstörung des Lebens im Zeitalter der dritten industriellen Revolution, Munich: C.H. Beck, p. 108.