Public Seminar Review, Volume 1, Issue 2, Second Semester/Summer 2014

Public Seminar Review: Second Semester and Summer Issue

The second semester of the Public Seminar is over, and the papers are now in, presented in this, our second issue. Here you will find short and long essays, supplemented by visual presentations, around five major themes: Capitalism and its Alternatives, Democracy and its Enemies, Identities, the Arts and Literature, and Media, Memory and Miscellaneous. Note, though, that the pieces in fact address one another across these categories, as they consider “fundamental problems of the human condition and pressing problems of the day, using the broad resources of social research,” staying true to the mission statement of Public Seminar and to the scholarly and public project of our academic home, The New School for Social Research. – Jeffrey Goldfarb

In This Issue

Section: Capitalism and its Alternatives

Top 10 List of Best American Historical Writing
By Eli Zaretsky

Autonomous Politics and Liberal Thought-Magic
By Nick Montgomery

The Politics of Public Debt
By Wolfgang Streeck

Critical Theory After the Anthropocene
By McKenzie Wark

The Women Did It?
By Ann Snitow and Victoria Hattam

Further Reflections on Feminists and the Left
By Eli Zaretsky

Look Out Kids: On the New and Next Left
By Jeremy Varon

John Dewey’s Encounter with Leon Trotsky
By Richard J. Bernstein

The Creative Class Rises Again
By Vince Carducci

Birth of Thanaticism
By McKenzie Wark

Shifting Geographies Rather Than Defections
By Ann Snitow and Victoria Hattam

What's Left?
By Eli Zaretsky

Slaves: The Capital that Made Capitalism
By Julia Ott

Capitalism Studies: A Manifesto
By Julia Ott and William Milberg

On the Heilbroner Center's Manifesto
By Cinzia Arruzza and Omri Boehm

Is This Still Capitalism?
By McKenzie Wark

Ernesto Laclau, 1936-2014
By Robin Blackburn

Has Capitalism Seen Its Day?
By Wolfgang Streeck

Consumption and the Social Condition
By Tim Rosenkranz

Starbucks Goes to College
By Bryant Simon

The Politics of the Sharing Economy
By Trebor Scholz

Anarchism and Feminism: Toward a Happy Marriage?
By Chiara Bottici

The Capitalism of Affects
By Cinzia Arruzza

Section: Democracy and its Enemies

Hannah Arendt, Constitutionalism and the Problem of Israel/Palestine
By Andrew Arato

Sweeping the Sand Out of the Desert: From Verwoerd to Prawer
By Hilla Dayan

Ariel Sharon (1928-2014)
By Irit Dekel

EuroMaidan Politics
By Kateryna Ruban

The Social Condition: Religion and Politics in Israel
By Jeffrey C. Goldfarb

A Response to Goldfarb’s Review Article on "Theocratic Democracy" by Nachman Ben Yehuda
By Omri Boehm

Brazil, June 2013, Act II
By Gustavo Hessmann Dalaqua

Preaching to the Choir: The Crimea and Putin’s Domestic Audience
By Yana Gorokhovskaia

Solidarity with Ukraine against Putin’s Reality
By Michael D. Kennedy

Egypt’s Constitutional Mess and Solutions from South Africa
By Keith Harrington

The Terrorism that Netanyahu Supports
By Yossi Gurvitz

Heidegger's Black Notebooks: Extreme Silencing
By Thomas Meyer

The War on Fascism
By Eli Zaretsky

Conceptions of Corruption, Its Causes, and Its Cure
By Alan Ryan

Israel's Right-Wingers Have Problems with Facts
By Carlo Strenger

The Brazilian Discontents behind the World Cup Stage
By Mariana Prandini Assis

In Support of PODEMOS
By Public Voices

Adding Injustice to Injury
By Public Voices

When Neo-Fascism Was Power in Argentina
By Federico Finchelstein

Terrorist Rule of Law in Israel
By Omri Boehm

The Strongest Terrorist Organization in the Middle East
By Yossi Gurvitz

The Other Victim under the Rubble of Gaza
By Benoit Challand

Gaza: How Will It End?
By Nahed Habiballah

Reflections on Critical Responses to the Tragedy of Gaza
By Jeffrey C. Goldfarb

Operation Protective Edge and Just War Theory
By Piki Ish-Shalom

On Bitter Satisfaction
By Karolina S. Follis

Death, Destruction, and the Israeli Turn to the Right
By Iddo Tavory

Hamas and the Israeli Ruling Coalition Are Not Collaborators
By Ross Poole

Proportionality and the Diaspora in Operation Protective Edge
By Ilan Zvi Baron

Reflections on a Revolutionary Imaginary and Round Tables
By Elzbieta Matynia

Section: Identities

Making Sense of Place
By Christiane Wilke

Ferguson and Fatherhood: My Turn to Give The Talk
By Edward E. Baptist

Rolezinho: Politics in Brazil's Shopping Malls?
By Mariana Prandini Assis

Hard Lessons on Rape Culture: Dispatch from Brazil
By Mariana Prandini Assis and Ana Carolina Ogando

Sex and the Super Bowl
By Monique Trauger

Chelsea Manning Performing Gender
By Fabienne Malbois

Immigrant Mothers as Agents of Change
By Agata Lisiak

Don't Worry... Be Happy!
By Jeremy Safran

It’s All in the Mind – Or is It?
By Jeremy Safran

Physics Envy
By Jeremy Safran

McMindfulness
By Jeremy Safran

Talking about Gaza in Psychoanalysis
By Eyal Rozmarin

Are We Really Such Beasts?
By Hakan Altinay

Section: The Arts and Literature

Film and Myth
By Eli Zaretsky

Edge of Tomorrow: Cinema of the Anthropocene
By McKenzie Wark

Rebooting RoboCop
By Chris Crews

The Book of Job as Community Theater
By Mark Larrimore

English Psycho
By McKenzie Wark

Big Data, Little Music
By Nancy Weiss Hanrahan

A Conversation with Krzysztof Czyzewski
By Ariel Merkel and Zeyno Ustun

Public Seminar and Public Seminar Review are copyright © The Editorial Board of Public Seminar, all rights reserved.

Section: Capitalism and its Alternatives

Recently the net has seen various ten-best lists of works in American history. I’d like to propose one of my own, but first I want to explain my rationale. American historical writing was transformed in the 1960s by two things: the realization that slavery and racism were the foundations of American history, and the enormous achievements of such Marxist historians and social thinkers as Eric Hobsbawm, E. P. Thompson and Immanuel Wallerstein. Works produced during and after the sixties have given us a dramatically new picture of the country, and my list reflects this. In addition, I chose works that look at American capitalism as a whole, not works that narrowly confine themselves to particular “fields,” like political history, economic history and so forth. Finally, I have chosen works that help us to see the United States in a global perspective even if the works are not themselves comparative or transnational. Here is my list:

1. Winthrop Jordan, White Over Black. Perry Miller, of course, remains central, but Jordan shows how Elizabethan England and the American colonies blended Puritanism and race. The work also brilliantly deploys Freudian thought, without resorting to jargon. A closely related book, also great, is Edmund Morgan’s American Slavery, American Freedom.

2. Ernest Tuveson, Redeemer Nation, one of the great books of the sixties now largely forgotten. Tuveson shows how the Augustinian contempt for the “city of man” was transformed in the Reformation and how this shaped the US sense of absolute good and evil. I would supplement Tuveson with Richard Hofstadter’s great essay “The Paranoid Style in American Politics.”

3. To cover the American Revolution, many suggest Gordon Wood’s celebratory accounts. By contrast, I would use an old book but a great one, Robert Palmer’s Age of the Democratic Revolution. What Palmer shows is that the American Revolution was one in a series of eighteenth century revolutions, and thus helps us de-provincialize our history.

4. Charles Sellers, The Market Revolution shows that both the market and capitalism were historical phenomena, and that both were intrinsically connected, on the one hand, to slavery and, on the other, to family life, and thereby to gender.

5. David Brion Davis, Slavery and Human Progress. There is an embarrassment of riches on slavery and race, but Davis links the slavery question to the rise of capitalism.

6. Eugene Genovese’s Roll, Jordan, Roll, the single greatest work on slavery, powerful because it shows that even that most horrendous of human institutions was based on human relationships.

7. Barrington Moore, Social Origins of Dictatorship and Democracy. Moore’s chapter on the American Civil War, which Moore calls “the second American Revolution,” remains nonpareil, and again has the benefit of situating the Civil War comparatively, linking it to the English Revolution, the French Revolution, Prussian state-building and the like. I would supplement Moore with the great sections on the US in Eric Hobsbawm’s Age of Capital, particularly the discussion of the Gold Rush.

8. William Cronon, Nature’s Metropolis: Chicago and the Great West for the role of the railroads, the city and the environment.

9. Alice Kessler-Harris, Out to Work. Simultaneously a classic of women’s history and of labor history, it throws a piercing light on the whole history of the working class. Alternatively, Linda Gordon’s Woman’s Body, Woman’s Right, taking us into the intimacies of the sexual bed.

10. George Chauncey, Gay New York, which transforms our understanding of urban history, of the twenties, of our greatest metropolis and of our sexual natures.

Admittedly, there are no works for the most recent period, which we are only beginning to conceptualize. A leading contender for me would be Daniel Rodgers, Age of Fracture. Of course, Natasha Zaretsky’s No Direction Home: The American Family and National Decline has a special place in my heart.

Anarchism is often dismissed as incoherent, naïve, and ineffective. This is Nancy Fraser’s position in a recent article called “Against Anarchism.” Fraser’s criticisms are worth engaging not because they’re particularly perceptive or unique, but because they’re exceedingly common: these are some of the reasons that people dismiss anarchism all the time. What is it about anarchism that’s so threatening to people like Nancy Fraser? I think Fraser (and many others) are actually threatened by what I’ll call “autonomous politics,” which is both narrower and broader than anarchism, encompassing currents of Marxism, indigenism, queer politics, feminism, and anarchism. My suspicion is that Fraser hates autonomous politics not because it’s ineffective or undemocratic, but because it undermines her whole worldview and political project. Autonomous politics destabilizes liberalism, opening up more productive ways of thinking and relating.

Fraser’s broad argument is that democratic politics works on “two tracks.” On the first track, “publics in civil society generate public opinion,” and on the second track “political institutions make authorized and binding decisions to carry them out.” Chief among these formal institutions is the State, and she explains that anarchists reject this second track, because they think “the administrative logics of the political system are bound to colonize the independent energies of society.” Fraser’s charge is that this single-track politics is fundamentally undemocratic: anarchist politics becomes isolated, unaccountable, and vanguardist without engagement on this second track.

So are anarchists accountable (like a good liberal) or are they unaccountable (and therefore undemocratic)? Will you be a good citizen, or a bad outsider? This is liberal thought-magic: the strange spell that funnels everything back into “State” and “public,” making it difficult to imagine any other kind of politics. There is no escape, no alternative.

I think the current of anarchism that’s particularly threatening to Fraser is the one that dissipates the spell of liberal thought-magic. Some currents of anarchism (and other radical political traditions) aren’t simply anti-State or anti-institutional: they point to the ways that institutions always pull us back into relation to these organizations, like black holes. Autonomous politics short-circuits the relationship between formal institutions and publics, enabling new, open-ended relationships and practices to emerge, which just don’t fit into the liberal framework.

This makes autonomous politics—practices and actions that don’t aim at reforming institutions or mobilizing publics—frustrating, confusing, and menacing to liberal thought-magicians. Autonomous politics isn’t just “outside” Fraser’s two tracks; it threatens to undermine the whole edifice and break the spell. How?

First, the persistence of autonomous politics is a reminder that the modern conceptions of “State” and “civil society” are only a few centuries old. Part of the thought-magic is to insist that life beyond the State is nasty, brutish and short, and it will continue to be, without the rigidities of the two liberal tracks. But before and beyond and after the State, there was (and is) an incredible diversity of ways that people organize themselves, resolve conflicts, engage with neighbours and more distant ties, and relate to land and their home places. This infinite complexity is politics, and it will always be more complex than liberal thought-magic.

Fraser gestures briefly at “isolated indigenous communities struggling to subsist off the grid,” lumping them in with “relatively privileged but downwardly mobile youth.” These are the main subscribers to autonomous politics, she thinks (the rest of us know better). Of course, insisting on the necessity of the State probably doesn’t sound as good to undocumented workers, prisoners, indigenous land defenders, and others being crushed, criminalized or erased by the State and other modern institutions. But it’s not just about being privileged (or not) by the State and its politics: it’s also about the effect on our political imagination; this is what makes liberal thought-magic so magical.

Second, autonomous politics threatens the role of the liberal political theorist: liberal magicians make recommendations for how things should be, in terms of the “proper” relationship between formal institutions and publics. This liberal thought-magic is always augmented by admitting that formal institutions are not really all that democratic and responsive: that’s all the more reason to keep trying to make them better.

However, my experience has been that from the perspective of folks trying to change things—even people trying to influence formal institutions—the role of the liberal political theorist isn’t much use. It encourages us to see everything in terms of the two tracks: State and public, and encourages us to answer the abstract question of how things should be.

With this in mind, I should situate myself: I’ve spent lots of time reading about liberal politics, and I was once firmly under its spell. I can’t say that’s all gone and I see everything clearly, but I’ve become critical of liberalism (obviously) and I’ve found other forms of thought-magic (including currents of anarchism) more useful in thinking through the ways I relate to people, and to the political projects I’m part of. I’ve developed priorities and values that don’t make sense from the perspective of the dual tracks of State and public. I don’t have a replacement for Fraser’s thought-magic because I’m trying to inhabit (and be open to) a diversity of traditions and encounters, beyond Fraser’s “two tracks.”

Third, autonomous politics threatens to proliferate the tracks of politics. There are not one, or two, but many tracks, relationships and actors. Many of the most prominent and radical tendencies of anarchism, feminism, indigenism, and queer politics gesture at the infinity of political “tracks.” Not all of these tracks are “publics” or “formal institutions”; these categories erase the complexity of allegiances, alliances, tensions, anxieties, adversaries, and enemies that criss-cross contemporary political actions and groups.

Autonomist politics is often perceived as isolationism by people like Fraser, who conflate isolationism with a refusal to engage with the State and other institutions on their own terms. Police, bureaucrats, politicians, and other institutional representatives have no a priori legitimacy or authority here; it’s up in the air: they might be obeyed, attacked, engaged or ignored. This is not because autonomous politics embraces an anything-goes nihilism: its currents often point to authorities and values that are erased by liberal thought-magic, such as family, community, indigenous nationhood, ecosystems, and non-humans. And it is because autonomous politics enables new (and old) relationships, alliances, solidarities and connections.

Autonomy doesn’t just mean separation. Warding off the myopic two tracks of State-public interchange enables other relationships and practices to emerge: it becomes possible to think and act differently. I’m sure Fraser would have no problem jamming these emergent values and solidarities back into the liberal paradigm: it’s some powerful magic. But for many people, the spell is losing its power. It’s increasingly obvious that States and other formal institutions are not only undemocratic; they’re increasingly designed to absorb, placate, divide, and destroy grassroots movements while defending the exploitative status quo.

Autonomist politics appears more realistic here, rather than naive: we need to relate to each other, figure things out together, and struggle together, without guarantees.

This is the text of the Heuss Lecture (with audio of the Q & A below), delivered as part of the General Seminar series in the Wolff Conference Room of The New School for Social Research at 6 E. 16th St. in New York on December 11, 2013.

From the 1970s on, public debt increased more or less steadily in most, if not all, OECD countries, as it never had in peacetime. The rapid rise in public indebtedness was a general, not a national phenomenon, although in some countries, especially ones with low levels of inflation like West Germany, it began earlier than in others (Streeck 2011). In this essay I will emphasize the cross-national commonalities rather than the national specifics of the transformation of the “tax state” (Schumpeter 1991 [1918]) into a debt state and from there, at present, into a consolidation state.[1] My argument focuses on the family of countries that adopted a regime of democratic capitalism, or capitalist democracy, after the Second World War, combining institutionalized mass participation in government with capitalist property relations and a market economy. By placing the current fiscal crisis of democratic-capitalist political economies in a historical context — in other words, treating it as a step in a historical sequence, not as a single event — I hope to shed light on the underlying dynamics of the crisis, beyond what static-technical theories of public finance have to offer.

The context within which I will situate the fiscal crisis of contemporary democratic states I conceive as a process of capitalist development. By this I mean the historical trajectory that led to the neoliberal revolution after the 1970s and abolished the “mixed economies” (Shonfield 1965; Shonfield and Shonfield 1984) of the three postwar decades, resulting in a more or less continuously growing role of markets including international markets in political-economic governance. In line with Schumpeter’s early research program of “fiscal sociology” (Schumpeter 1991 [1918]), I discuss public finance as both an indicator of and a causal factor in an evolving relationship between political rule and the economy, or more precisely, between the democratic state and modern capitalism.[2] Approaching the politics of public debt in this way, I will show that political-economic theories in the tradition of Public Choice, which attribute the rise in government debt to an inherent tendency of democracies to “live beyond their means”, cannot account for the fiscal crisis of today. After rejecting what I call the democratic failure theory, and based on the record of the last four decades, I will present a list of proximate causes accounting for the rise in state indebtedness and relate them to what I consider, for the purposes of my narrative, the ultimate cause behind them. That cause, I will argue, is the long-term decline in the growth performance of advanced capitalist economies and their subsequent inability to honor the promises of economic and human progress on which their legitimacy depended and depends.[3]

Following my analysis of the genealogy of the current crisis of public finance, I will turn to the five years that have passed since the near-crash of the global financial system in 2008, to outline what I perceive to be a new politics of debt management by consolidation. As I will argue, this includes a profound restructuring of the democratic-capitalist political economy in continuation of the neoliberal transformation of the last two decades of the twentieth century, in the direction of a state that is “leaner,” less interventionist, and, in particular, less receptive to popular demands for redistribution than was the case for states of the postwar period.[4] Special attention will be paid to the relationship between the politics of government debt on the one hand and social and economic inequality on the other.

Democratic Failure?

Democratic capitalism is a historically recent phenomenon. It became firmly institutionalized as a political regime only after 1945 under the international hegemony of the New Deal United States and, at least in Europe, built on social-democratic traditions (among many others, Ruggie 1982; Judt 2005; Reich 2007; Judt 2009). In democratic capitalism, governments are expected to intervene in markets to secure social justice and stability as defined and demanded by a voting majority. This is because without political correction of a Keynesian and Beveridgean kind, free markets tend to give rise to cumulative advantage, also known as the “Matthew effect” (Merton 1968), which would make them unpalatable to a democratically empowered citizenry.[5]

Average public indebtedness among OECD countries more than doubled in the roughly four decades between the 1970s and 2010 from about 40 percent of GDP to more than 90 percent (for a sample of twelve major OECD countries, see Figure 1). As pointed out, increasing public debt was a general phenomenon in almost all countries of democratic capitalism. Differences between countries did exist, but in a longitudinal perspective, they reduce mostly to time lags and appear of minor significance in light of the universal nature of the process. Note that the rise of indebtedness was halted in the mid-1990s for about a decade, to resume only in 2008, the first year of an apparently never-ending financial crisis when state indebtedness started its steepest incline of the period under observation. I will return to this later.

Economic-institutionalist theories in the tradition of writers like James Buchanan attribute the increase in public debt since the 1970s to an inherent tendency of political democracy to overspend, caused by short-sightedness of voters and opportunism of politicians (Buchanan 1958; Buchanan and Tullock 1962; Buchanan and Wagner 1977). Where Public Choice transmutes into a theory of democratic failure, the claim is that public deficits and public debt are due to majoritarian electoral pressure from below for redistribution through public spending. In the following I will argue that this account, based on highly stylized hypothetical assumptions on “rational” behavior under democratic conditions, appears highly implausible when the increase in public debt is placed in the context of other events and developments that happened in the OECD world during the same period. This is because the growth of public debt was accompanied by a steady decline in both democratic mobilization and the distributional position of mass publics, pointing to a secular contraction of the power resources and redistributive capacities of the very democratic politics that are held responsible by theories of “public choice” for the rise in public indebtedness since the 1970s.

As to democratic power resources, participation in national elections in the OECD world peaked in the 1960s when it was as high as 84 percent on average for 22 countries (Figure 2). From there it dropped continuously from decade to decade and reached 73 percent in the eleven years from 2000 to 2011 (Schäfer and Streeck 2013). Unionization attained its highest postwar level in the 1970s and then began to fall everywhere (for six major countries, see Figure 3).[6] A third form of mass political participation, “industrial action”, also known as strikes, practically ended in the 1980s (see Figure 4, which omits Italy where strikes were extremely frequent in the 1970s but ceased almost entirely in the 1980s).

The decay of popular participation in redistributive politics was associated with a continuous loss in the distributional position of popular majorities. Unemployment increased everywhere as governments withdrew from the postwar promise of politically guaranteed full employment. Today, unemployment rates between five and ten percent are considered normal in capitalist democracies (Figure 5), de-unionization and often painful “reforms” of social security systems notwithstanding.[7] Even Sweden, the classical country of full-employment labor market policy, has since the end of the 1990s been content with a “natural” level of unemployment hovering between six and nine percent (Mehrtens 2013). In parallel, income inequality increased steadily in most countries until the middle of the first decade of the 2000s (Figure 6). One factor behind this was a massive decline of the wage share almost everywhere (Duménil and Lévy 2004; Kristal 2010; Ryner 2012), caused by a lasting decoupling of wage increases from increases in productivity. This was, not surprisingly, most pronounced in the United States, where by the end of the 1970s average hourly earnings ceased to follow productivity, embarking on a long stagnation while productivity continued to go up. Increases in household incomes during the period in question were solely due to higher participation of women in the labor market (Kochan 2013, Figure 7).[8]

Summing up, the rise of public debt — the arrival of the debt state — took place alongside a neoliberal revolution in the postwar political economy. At a time when democratic-redistributive intervention in capitalist markets became ineffectual on many fronts, increasing public debt is unlikely to be explained by excessive democratic power on the part of voters and workers. In fact, rather than electorates extracting unearned incomes from the economy, growing government indebtedness in OECD nations was accompanied by a lasting decline in the distributional position of popular majorities, which in turn was associated with a secular decay of the power resources (Korpi 1983) of redistributive democracy.

Proximate Causes, Ultimate Cause

To account for the increase in government debt across a wide range of countries over an extended period of time, it seems useful to draw on the proven distinction between proximate and ultimate causes (Thierry 2005). The parallel build-up of debt in capitalist democracies was produced by a variety of specific factors that, while often interrelated, differed between countries and over time. All of these proximate causes, however, point back to one common, ultimate cause: a secular decline in economic growth in the democratic-capitalist OECD world. Seen from this perspective, the accumulation of public debt since the 1970s appears as part of a variegated response of countries and actors to declining growth and to the pressures on the politics of rich capitalist democracies that resulted and result from it.

The following, incomplete list includes some of the most important proximate causes of the rise of public debt during the period in question.

1. Public debt began to increase in the mid-1970s, and in particular in the early 1980s, as a result of an OECD-wide recession which activated automatic fiscal stabilizers and, in some countries, called forth “Keynesian” stimulus spending. The “Second Oil Crisis” in 1979 caused higher expenses on unemployment benefits and active labor market policy while lowering public revenues, especially from payroll taxes. The same was true for the contraction of employment following the deflationary monetarist policy of the U.S. central bank under Volcker after 1979, with interest rates at times exceeding 20 percent, and the British turn to monetarism under Margaret Thatcher. Generally, the revocation of the postwar commitment to politically guaranteed full employment — a commitment that had begun to cause high and rising inflation at the end of postwar growth — and the acceptance on the part of governments of a residual level of unemployment as a natural condition was bound to put pressure on public finance as long as retrenchment of the postwar welfare state had not yet been accomplished.

2. The end of both growth and inflation led to a sharp increase in tax resistance, first in the United States and then elsewhere in the OECD world. In response, several countries passed tax reforms to eliminate what is called “bracket creep”: the movement of tax payers into higher income tax rates with rising nominal incomes. In subsequent years, “globalization” and the resulting international tax competition (Genschel and Schwarz 2013) motivated tax cuts for high income earners and corporations.[9] Emblematic for this was the tax reform during Ronald Reagan’s first period of office (1981-1985), which together with deflation and an unprecedented arms build-up was instrumental in causing the most dramatic rise in government debt since the Second World War (Greider 1981; Stockman 1986). While tax revenue had until the mid-1970s by and large kept pace with public spending, by the late 1980s it began to stagnate until it started declining after the end of the century (Figures 8 and 9). By 2010, taxation levels were back where they had been two decades earlier.

3. The 1990s were a time when OECD nations managed to bring down public spending in an effort to match it to stagnant and indeed declining tax revenue (as seen in Figure 8). In part, this was made easier by the end of the Communist bloc and the “peace dividend” it wrought. But it was also due to deep reforms of welfare state institutions. It seems reasonable to consider welfare state reform as a time-lagged response to the rise in social spending after the end of politically guaranteed unemployment. Retrenchment of social protection was championed in particular by the Clinton administration which, following its defeat in the mid-term elections of 1994, vowed to “end welfare as we know it.” In Germany, welfare reform was delayed by unification as the West-German social policy regime was translated one-to-one to the Neue Länder (Streeck and Trampusch 2006). A decade later, however, the social-democratic Schröder government passed the so-called Hartz IV legislation. Depending on the country, welfare state reform did not always and necessarily result in lower aggregate spending, at least not immediately; it did, however, cut individual entitlements in reaction to rising numbers of long-term unemployed and other recipients of social assistance. The 1990s, which may be described as a first period of fiscal consolidation, show that mass democracies, if placed under enough economic pressure and with voters sufficiently demobilized, are quite capable of curtailing social protection and generally imposing economic hardship on a majority of voters in the interest of “sound finance.”

4. By the late 1990s, a country like the United States had achieved a budget surplus (Pierson 1998; 2001). This did not last long, however, as it was soon to be wiped out after 2001 by deep tax cuts combined with a steep increase in military spending, very much on the model of the first Reagan administration. Given that the “Bush tax cuts,” as they came to be called, overwhelmingly benefited corporations and the very rich (Hacker and Pierson 2011), they cannot possibly be attributed to an excess of redistributive democracy.[10] Quite to the contrary, the restored public deficit was used as an argument for further cuts in public expenditure, as military spending was untouchable and higher taxes on high incomes politically infeasible. Current debates on balancing the U.S. federal budget continue to focus almost exclusively on the so-called “entitlements,” in particular to social security and health care. Generating a public deficit by simultaneously cutting taxes and raising military spending corresponds to the strategic concept of the ultra-liberal American Right as organized by the anti-tax activist Grover Norquist. The strategy is summed up in the slogan “starving the beast,” the beast being the residual welfare state of the post-New Deal United States.[11]

5. The financial crisis of 2008 caused the greatest hike in public indebtedness ever, due to the immense costs of both the rescue of the financial system and the stimulus spending required for keeping national economies from collapsing (for a selection of countries see Figure 10). Like tax cuts for the rich, “Star Wars” and the invasions of Afghanistan and Iraq, the absorption after 2008 of unsustainable private debt by the state as a debtor of last resort cannot be attributed to irresponsible greed among voters and politicians. The emergency measures taken in 2008 wiped out all of the – politically very costly – accomplishments of the consolidation efforts of the 1990s and restored the level of public debt to the trend line for the forty-year period beginning in the mid-1970s. Contrary to public choice theory, the most dramatic leap in public indebtedness since the 1970s was a case of failure, not of democracy but of capitalism, in particular in its new form of financial capitalism.

How are the various proximate causes of the fiscal crisis of rich democracies related? The common ultimate cause behind the proximate causes operating along the trajectory of the public debt build-up was, I suggest, the declining growth performance of the OECD world (Figure 11). After 1974, average real growth per year in OECD countries over five-year periods fluctuated between two and three percent, apart from two peaks at the end of the 1980s and the 1990s when it rose to between three and four percent, albeit only for a short time. Thereafter, in the one-and-a-half decades since 1998, i.e., beginning ten years before the Great Recession, average growth rates declined almost steadily until they bottomed out at zero in 2010. In addition, with the end of inflation in the 1980s the automatic devaluation of public debt ended as well. Moreover, low growth during the same period resulted in average unemployment rates between six and seven percent. After 1998 it also kept debt ratios high, although budget deficits almost disappeared in 2002-2008 due to consolidation efforts. They were, of course, to come back with a vengeance as a result of the financial crisis.

Pulling together ultimate cause and proximate causes: weak economic growth induced governments and central banks in the 1970s — with the exception of the Bundesbank after 1974 — to accommodate wage pressures in order to preserve employment, which resulted in inflation. Monetary stabilization in the 1980s to end stagflation produced unemployment and thereby upset the fiscal balance of social security systems; it also added to tax resistance, which was facilitated by “globalization” enabling mobile assets to migrate between jurisdictions. Globalization also called forth “supply-side policies,” including tax relief for corporations and the rich. It furthermore inspired financial deregulation, or “financialization” (Krippner 2011), in an attempt to restart the capitalist profit engine, especially in Anglo-American countries. As we know now, this did not really work, and growth rates under financialization continued to decline. In the end, when the strategy collapsed in the Great Recession[12], it turned out to have produced pseudo-growth at best.

Over time, insufficient growth gave rise to a sequence of different crisis configurations, with (I) high inflation and low debt in the 1970s followed, from 1980 to 1993, by (II) low inflation and public and private debt rising simultaneously, and from 1994 to 2007 by (III) low inflation, receding public debt, and further increasing private debt. Since 2008, we continue to see (IV) low inflation, now combined with slightly declining private debt and further increasing public debt (Figure 12 for the U.S.; the pattern for other countries is essentially the same, with variations reflecting contingent national circumstances). Overall, the increase in public debt was part of a general rise of indebtedness in capitalist countries, which coincided with low growth. Thus the aggregate debt burden of the United States, comprising the debt of government, households, and non-financial as well as financial corporations, doubled in four decades from four-and-a-half to nine times the country’s GDP (Figure 13), of which government debt accounted for only a small share. The fact that the rise in government debt since the 1980s was embedded in a simultaneous rise in aggregate debt[13] tends to be overlooked in public discourse, in particular where fiscal problems are attributed to a failure of democracy. Growing overall indebtedness — the accelerating investment of savings and freely created fiat money in promises of future repayment effectively conditional on economic growth — would appear to be an insufficiently understood aspect of contemporary capitalist development.

Re-Building Confidence

The crisis of 2008 marked the beginning of a new era in the politics of public debt, and generally in the relationship between global capitalism and the state system. As states accepted vastly increased indebtedness in order to rescue their national economies from the fallout of the collapse of the financial industry, investors in public debt became doubtful whether governments would ever be able to honor their unprecedented financial obligations, and whether public debt might have reached a point where states would find it more in their interest to default than to pay up. Declining investor confidence found expression, among other things, in a flurry of changing judgments on national public finances, meted out by the three U.S. rating agencies, and in rising and fluctuating risk premiums on government bonds. Not surprisingly, economists went to work to calculate the debt level beyond which a country would cease to be solvent because its debt would render its economy unable to grow (Reinhart and Rogoff 2010).[14]

It soon turned out, however, that the matter was more complicated. Apparently, if there was a critical threshold, it was different for different countries. The United States continued to be charged a risk premium close to what “the markets” require from Germany, even though its government has long refused to address the country’s decades-old “double deficit.” Rather than on specific numbers, discussions began to focus on intangibles like the trustworthiness of a country’s politics and the confidence it inspired in the psychology of owners of financial assets. In a more technical language, what was looked for were credible commitments on the part of countries to servicing their debt, come what may. I suggest that it is in this context that the rise of austerity as a political imperative for — some — debtor countries must be seen.

The politics of public debt may be conceived in terms of a distributional conflict between creditors and citizens (Streeck 2013, 117-132). Both have claims on public funds, the ones in the form of commercial contracts and the others of rights of citizenship. In a democracy, citizens may elect a government responsive to them but “irresponsible” from the viewpoint of financial markets, one that in the extreme case expropriates its creditors by annulling its debt. As accumulated debt grows and investors must be more cautious as to where they put their money, creditors will seek guarantees that this will not happen to them – that their claims will always be given priority over those of citizens, for example of pensioners demanding the pension that state and employers promised to them when they were workers.

“Structural reform” of domestic spending to cut the “entitlements” of the citizenry is one important way of reassuring creditors that their money will be safe.[15] Another is institutional change, such as balanced budget amendments to national constitutions, or international obligations to honor commercial before political debt. I consider extracting credible commitments of this kind — where there is broad space for creativity with respect to their concrete form[16] — as the driving force of the transformation of the debt state of the last third of the twentieth century into what I call the consolidation state of the future.

Looking at Europe[17], what is peculiar here is that the intended restoration of investor confidence takes place not just in national but also in international politics, through a deep restructuring of the European state system as demanded by both the European Union and, in particular, the European Monetary Union. To reassure creditors, states agree to tight mutual surveillance, for example under the Fiscal Pact, tying each other’s hands to rule out default and constraining one another to get fit for debt service. This involves far-reaching sacrifices of national sovereignty in exchange for arrangements amounting de facto to a mutualization of public debt, to guarantee bond holders that they will be paid even if a member state were to become insolvent. Since debt mutualization cannot be popular with voters in countries that would have to pay for it, it is typically not done in the light of day but rather inside the entrails of the European Central Bank, whose President has famously vowed “to do whatever it takes to preserve the euro.”[18][19]

How much and what kind of “confidence” the “markets” must be provided with by debt states is far from understood. Clearly creditors will not complain if states, fearing the fear of the markets, do more than would in fact be necessary. Since international capital markets are not subject to competition law, it also cannot be precluded that institutional investors will collectively drive up the price of their trust. States, in turn, may use financial regulation to force certain categories of investors, like insurance companies, to buy and hold their bonds. The strategic games that are being played here will not end once the current crisis is declared over, if it ever is. States will for a long time be dependent on financial markets, even with consolidated finances, if only for refinancing their remaining debt (which will be considerable for many years). In any case, financial markets may need government debt as a safe haven for investment. Bargaining over the rebuilding of the democratic state in the face of high debt, at the national as well as international level, will therefore not cease, with citizens trying to defend their social rights and creditors threatening higher risk premiums unless the primacy of their titles is firmly established in international treaties and national fiscal regimes and constitutions.

As is increasingly being noted, building investor confidence by way of imposing austerity on national economies may not in all circumstances achieve its objective. Austerity may impede economic growth by cutting demand, rather than promoting it by, among other things, creating “rational expectations” on the part of the “real economy” for low taxes and higher growth in the future. Apparently, as claimed by Blyth (2013) and others (Boyer 2012), expansionary austerity has never really worked in a financial crisis. While austerity may shift an increasing share of a society’s resources from citizens to creditors, it may also shrink the sum total of available resources. Obviously the second effect could, in particular in the longer run, outweigh the first, as low growth might undo whatever confidence may have been gained through austerity.

Public Debt and Social Inequality

The build-up of public debt since the 1970s was in complex ways connected to the increase in economic inequality that occurred at the same time, and this holds true also for the current politics of consolidation. As growth rates declined and unemployment became endemic in the OECD world after the end of inflation, the wage and income spread increased, and so did public spending. Dwindling unionization and the “withering away of the strike” (Ross and Hartman 1960) contributed their share to rising income inequality (Western and Rosenfeld 2011). Tax collection became more difficult due to growing resistance, and later also because of international tax competition in an increasingly open global economy. Public revenues fell as a result, further adding to public deficits and public debt. Distributional gains on the part of capital and of segments of the middle classes, made possible by a growing low-wage sector and less progressive taxation, produced a savings overhang that was looking for safe investment opportunities. Tax reforms aimed at dissuading firms and high earners from exiting to less demanding jurisdictions reinforced this, expanding both the demand for and the supply of sovereign credit. In the 1990s at the latest, governments found it necessary to allow the financial industry to expand far beyond traditional limits, among other things by creating new credit instruments benefiting states increasingly dependent on borrowing at favorable rates. Financialization in itself added to income inequality, both between sectors and within (Palley 2008; Tomaskovic-Devey and Lin 2011).

States borrowing from their citizens instead of taxing them make another, independent contribution to economic and social inequality. Owners of financial assets who can lend to the state what it would otherwise confiscate earn interest on what remains their capital. They may also leave their wealth to their offspring, especially where inheritance taxes have been cut or abolished for fear of taxpayer exit. A complementary effect, incidentally, is at work under “privatized Keynesianism,” where liberalized credit serves to replace social assistance or supplement low wages. The result is that the poor have to repay with interest what would have been their wage or social benefit with better employment, stronger trade unions, and more public intervention (Mertens 2013).

Moreover, as the debt state in its current form as consolidation state reassures its creditors that their claims to public funds will take precedence over the claims of citizens, it essentially expropriates social rights and politically created entitlements intended to protect social cohesion. Privatization of public services and a reduction of public social investment make for less egalitarian access to resources essential for equality of opportunity in an advanced “knowledge society.” As a result, social mobility for future generations is likely to diminish, as is already the case in the United States (Karabel 2012). With consolidation continuing, patterns of public spending will follow tax systems in becoming less progressive.

Concluding Remarks

What is coming? We have seen how the emerging consolidation state is cutting itself down, through public austerity and the progressive privatization of infrastructures and social services. The question is whether this will restore economic growth and democratic legitimacy to post-2008 capitalism. Seeking to achieve these goals, as in the past two decades, by relying on a lax monetary policy and a bloated financial sector apt at any time to produce new bubbles may at a minimum be risky, and it could easily become self-destructive when another “rescue” like that of 2008 is needed but may by then have become impossible (Stockman 2013).[20] The alternative, the neoliberal reform cure, which requires stripping society of its remaining defenses and throwing it into the icy waters of an untamed market economy in the hope that it will eventually start swimming, may be rejected by the voting public as long as there still is one. The result may be a political stand-off, as in Italy or France, which is not likely to encourage economic growth either.

What if a resumption of growth, as implied by older traditions of political economy, would require more public investment rather than less, and perhaps also a reversal of the apparently inexorable trend toward ever more inequality (Stiglitz 2012)? In this case, the declining capacity of politics to contain the plundering of the public sphere and the apparently unending self-enrichment of the already unendingly rich may pose a problem not just for democracy, but also for the economy – see the super rich among the Greeks who are abandoning Greece in droves, availing themselves of free international capital markets to take their money to the safe havens of Wall Street or the City of London; or the Russian and Ukrainian “oligarchs” who, having expropriated their fellow-citizens in post-communist primitive accumulation, are abandoning them to their domestic misery. What we are seeing here may be the beginning of the fate of economic elites becoming divorced from that of the economies-cum-societies from which they have derived their riches, decoupling the fortunes of the rich and their families from the prosperity, or the lack of it, of normal people.

Does this sound outlandish? Consider the current state of the distributional game in the United States, a country that, unlike Ukraine or China, is still considered a democracy. According to Emmanuel Saez, in 2010, the Year Two after the crisis, at a time of high unemployment and record public debt, 93 percent of all income gains in the U.S., i.e., almost the entire amount by which the national income increased, went to the top one percent of the income distribution. What is more, the top 0.01 percent, about 15,000 households, received more than a third, 37 percent, of those income gains (Saez 2012).[21] There is no reason not to call this an asset stripping operation of epic dimensions perpetrated by a tiny minority benefiting, among other things, from the deepest tax cuts in history. Why should the new oligarchs be interested in their countries’ future productive capacities and present democratic stability if, apparently, they can be rich without it, processing back and forth the synthetic money produced for them at no cost by a central bank for which the sky is the limit, at each stage diverting from it hefty fees and unprecedented salaries, bonuses and profits as long as it is forthcoming — and then leave their country to its remaining devices and withdraw to some privately owned island?

REFERENCES

Blyth, Mark, 2013: Austerity: The History of a Dangerous Idea. Oxford: Oxford University Press.

Boyer, Robert, 2012: The four fallacies of contemporary austerity policies: the lost Keynesian legacy. Cambridge Journal of Economics. Vol. 36, No. 1, 283-312.

Buchanan, James M., 1958: Public Principles of Public Debt: A Defense and Restatement. Homewood, Ill.: Richard R. Irwin, Inc.

Buchanan, James M. and Gordon Tullock, 1962: The Calculus of Consent: Logical Foundations of Constitutional Democracy. Ann Arbor: University of Michigan Press.

Buchanan, James M. and Gordon Tullock, 1977: The Expanding Public Sector: Wagner Squared. Public Choice. Vol. 31, 147-150.

Buchanan, James M. and Richard E. Wagner, 1977: Democracy in Deficit: The Political Legacy of Lord Keynes. New York: Academic Press.

Crouch, Colin, 2009: Privatised Keynesianism: An Unacknowledged Policy Regime. British Journal of Politics & International Relations. Vol. 11, No. 3, 382-399.

Duménil, Gérard and Dominique Lévy, 2004: Capital Resurgent: Roots of the Neoliberal Revolution. Cambridge, Mass.: Harvard University Press.

Genschel, Philip and Peter Schwarz, 2013: Tax Competition and Fiscal Democracy. In: Schäfer, Armin and Wolfgang Streeck, eds., Politics in the Age of Austerity. Cambridge: Polity.

Greider, William, 1981: The Education of David Stockman. The Atlantic, December 1981.

Hacker, Jacob and Paul Pierson, 2011: Winner-Take-All Politics: How Washington Made the Rich Richer — and Turned Its Back on the Middle Class. New York City: Simon & Schuster Paperbacks.

Herndon, Thomas, Michael Ash and Robert Pollin, 2013: Does High Public Debt Consistently Stifle Economic Growth? A Critique of Reinhart and Rogoff. Political Economy Research Institute Working Paper Series No. 322. Amherst, Mass.: University of Massachusetts Amherst.

Judt, Tony, 2005: Postwar: A History of Europe Since 1945. London: Penguin.

Judt, Tony, 2009: What Is Living and What is Dead in Social Democracy? The New York Review of Books, December 17, 2009, 86-96.

Karabel, Jerome, 2012: Grand Illusion: Mobility, Inequality, and the American Dream, The Huffington Post.

Kenworthy, Lane, 2007: Egalitarian Capitalism: Jobs, Incomes, and Growth in Affluent Countries. New York, NY: Russell Sage.

Kochan, Thomas A., 2013: The American Jobs Crisis and Its Implications for the Future of Employment Policy: A Call for a New Jobs Compact. International Labor Relations Review. Vol. 66, No. 2, 291-314.

Korpi, Walter, 1983: The Democratic Class Struggle. London: Routledge and Kegan Paul.

Krippner, Greta R., 2011: Capitalizing on Crisis: The Political Origins of the Rise of Finance. Cambridge: Harvard University Press.

Kristal, Tali, 2010: Good Times, Bad Times: Postwar Labor’s Share. American Sociological Review. Vol. 75, No. 5, 729-763.

Mehrtens, Philip, 2013: Staatsentschuldung und Staatstätigkeit: Zur Transformation der schwedischen politischen Ökonomie. Doctoral Dissertation. Köln: Universität Köln und Max-Planck-Institut für Gesellschaftsforschung.

Mertens, Daniel, 2013: Privatverschuldung in Deutschland: Zur institutionellen Entwicklung der Kreditmärkte in einem exportgetriebenen Wachstumsregime. Doctoral Dissertation. Köln: Wirtschafts- und Sozialwissenschaftliche Fakultät. Universität Köln und Max-Planck-Institut für Gesellschaftsforschung.

Merton, Robert K., 1968: The Matthew Effect in Science. Science. Vol. 159, No. 3810, 56-63.

Palley, Thomas I., 2008: Financialisation: What it is and Why it Matters. IMK Working Paper No. 4/2008. Düsseldorf: Institut für Makroökonomie und Konjunkturforschung.

Pierson, Paul, 1998: The Deficit and the Politics of Domestic Reform. In: Weir, Margaret, ed., The Social Divide: Political Parties and the Future of Activist Government. Washington, D.C., New York: Brookings Institution Press and Russell Sage Foundation, 126-178.

Pierson, Paul, 2001: From Expansion to Austerity: The New Politics of Taxing and Spending. In: Levin, Martin A. et al., eds., Seeking the Center: Politics and Policymaking at the New Century. Washington D.C.: Georgetown University Press, 54-80.

Reich, Robert B., 2007: Supercapitalism. New York: Alfred A. Knopf.

Reinhart, Carmen M. and Kenneth S. Rogoff, 2010: Growth in a Time of Debt. American Economic Review: Papers & Proceedings. Vol. 100, May 2010, 573-578.

Ross, A. M. and P. T. Hartman, 1960: Changing Patterns of Industrial Conflict. New York: Wiley.

Ruggie, John Gerard, 1982: International Regimes, Transactions and Change: Embedded Liberalism in the Postwar Economic Order. International Organization. Vol. 36, No. 2, 379-399.

Ryner, Magnus, 2012: The (I)PE of Falling Wage-Shares: Situating Working Class Agency. Prepared for Presentation at the Inaugural Conference of the Sheffield Political Economy Research Institute (SPERI) ‘The British Growth Crisis: The Search for a New Model’ Sheffield, UK, July 17, 2012. Unpublished Manuscript.

Saez, Emmanuel, 2012: Striking it Richer: The Evolution of Top Incomes in the United States (Updated with 2009 and 2010 estimates).

Schäfer, Armin and Wolfgang Streeck, 2013: Introduction. In: Schäfer, Armin and Wolfgang Streeck, eds., Politics in the Age of Austerity. Cambridge: Polity.

Schratzenstaller, Margit, 2011: Vom Steuerwettbewerb zur Steuerkoordinierung in der EU? WSI-Mitteilungen. Vol. 64, No. 6, 304-313.

Schratzenstaller, Margit, 2013: Für einen produktiven und solide finanzierten Staat. Determinanten der Entwicklung der Abgaben in Deutschland. Studie im Auftrag der Abteilung Wirtschafts- und Sozialpolitik der Friedrich-Ebert-Stiftung. Bonn: Friedrich-Ebert-Stiftung.

Schumpeter, Joseph A., 1991 [1918]: The Crisis of the Tax State. In: Swedberg, Richard, ed., The Economics and Sociology of Capitalism. Princeton: Princeton University Press, 99-141.

Shonfield, Andrew, 1965: Modern Capitalism: The Changing Balance of Public and Private Power. London and New York: Oxford University Press.

Shonfield, Andrew and Suzanna Shonfield, 1984: In Defense of the Mixed Economy. Oxford: Oxford University Press.

Stiglitz, Joseph E., 2012: The Price of Inequality: How Today’s Divided Society Endangers Our Future. New York: W. W. Norton.

Stockman, David A., 1986: The Triumph of Politics: How the Reagan Revolution Failed. New York: Harper and Row.

Stockman, David A., 2013: State-Wrecked: The Corruption of Capitalism in America. New York Times, March 31, 2013.

Streeck, Wolfgang, 2011: The Crises of Democratic Capitalism. New Left Review. No. 71, Sept/Oct 2011, 5-29.

Streeck, Wolfgang, 2013: Gekaufte Zeit: Die vertagte Krise des demokratischen Kapitalismus. Berlin: Suhrkamp.

Streeck, Wolfgang and Christine Trampusch, 2006: Economic Reform and the Political Economy of the German Welfare State. In: Dyson, Kenneth and Stephen Padgett, eds., The Politics of Economic Reform in Germany: Global, Rhineland or Hybrid Capitalism? Milton Park, Abingdon: Routledge, 60-81.

Thierry, B., 2005: Integrating proximate and ultimate causation: Just one more go! Current Science. Vol. 89, No. 7, 1180-1184.

Tomaskovic-Devey, Donald and Ken-Hou Lin, 2011: Income Dynamics, Economic Rents and the Financialization of the US Economy. American Sociological Review. Vol. 76, No. 4, 538-559.

Western, Bruce and Jake Rosenfeld, 2011: Unions, Norms, and the Rise in U.S. Wage Inequality. American Sociological Review. Vol. 76, No. 4, 513-537.

NOTES

[1] For an elaboration see Streeck (2013, 164ff., passim). An alternative term would be austerity state.

[2] “The public finances are one of the best starting points for an investigation of society, especially but not exclusively of its political life. The full fruitfulness of this approach is seen particularly at those turning points, or epochs, during which existing forms begin to die off and to change into something new. This is true both of the causal significance of fiscal policy (insofar as fiscal events are important elements in the causation of all change) and of the symptomatic significance (insofar as everything that happens has its fiscal reflection).” (Schumpeter 1991 [1918], 110)

[3] For the purpose of this treatment I will consider declining growth as exogenous.

[4] This is essentially what Pierson (1998; 2001) refers to as an “austerity regime”.

[5] In other words, democratic capitalism implies a politics with a redistributive-egalitarian bent; indeed with reference to the postwar political formation in the West one could just as well speak of egalitarian capitalism (Kenworthy 2007). One implication is that not every political interference with market outcomes is “democratic” as the term is used here; for example, for the Bush tax cuts to be passed, democracy as we know it had to be anaesthetized rather than activated.

[6] Figure 3 does not include Sweden where union density was traditionally the highest in the world. Including it would have distorted the scale. Apart from this, the Swedish trajectory was very much in line with the other countries, except that the decline started later. At the beginning of the 1990s, union density in Sweden was still above 80 percent; by 2011, in about two decades, it had fallen to 68 percent.

[7] The average rate of unemployment in the OECD was 2.2 percent from 1960 to 1973, after which it increased steadily, reaching 7.1 percent in 1990-2001. From 2002 to 2008 it stood at 5.8 percent, only to rise again to 6.6 percent between 2009 and 2012.

[8] Kochan refers to the historical watershed of the late 1970s as the breaking of the postwar “social contract” (Kochan 2013).

[9] For Europe, see Schratzenstaller (2011).

[10] Redistribution to the poor had by this time already been privatized, i.e., relocated to deregulated financial markets where citizens were allowed to make up for stagnant incomes by taking up ever riskier loans (Crouch 2009). After 2008, these loans ended up to a large extent on the public balance sheet.

[11] That tax cuts for the well-to-do cause public deficits, which are then used to argue the need for cuts in social welfare spending, is by no means limited to the United States. The same pattern was effective in Europe, including Germany (Schratzenstaller 2013), where the losses in revenue caused by the Schröder tax reform were for several years the only reason why the federal government was unable to achieve a balanced budget. The deficit later became a central argument for the Hartz reform of the German welfare state.

[12] Because it had encouraged an unsustainable leveraging of capitalist economies.

[13] The general picture remains the same if the debt of the financial sector is excluded.

[14] While there has been considerable excitement recently on a calculation error and the method of sample construction in Reinhart and Rogoff’s 2010 paper (Herndon et al. 2013), what should have caused consternation much earlier is their idea, mechanistic if nothing else, of a “one size fits all” general debt threshold for all countries, regardless of political and economic circumstances – not to mention that high debt may be the effect of low growth rather than vice versa.

[15] “Lloyd Blankfein, the head of giant investment bank Goldman Sachs, has said the UK must stick with its austerity plan or face a negative reaction from global investors. In an interview with the BBC, he said he would like it if the UK could ease the pace of the squeeze on spending. But Mr. Blankfein … said if you have a deficit that choice is taken away from you because markets will react.” BBC News, April 23, 2013

[16] Means of restoring creditor confidence may include a low general level of public spending, low taxes and a lean state, a de-unionized economy, all major political parties subscribing to fiscal rectitude and committed to a healthy financial industry, and the like.

[17] The U.S. is a special case, not only because of its role as global central banker of last resort. In U.S domestic politics, it seems firmly settled that the demands of creditors will always have priority over the rights of citizens, and that explicit debt will be served no matter what, even if implicit debt has to be given a “haircut” if necessary. This has not been as clear in Europe, at least not until a few years ago.

[18] “Within our mandate, the ECB is ready to do whatever it takes to preserve the euro. And believe me, it will be enough.” Verbatim of the remarks made by Mario Draghi, at the Global Investment Conference in London, 26 July 2012. Website of the European Central Bank, read April 27, 2013.

[19] Today, easy ECB money and ECB credit and purchasing programs produce a convergence of risk premiums while at the same time salvaging banks; this, however, cannot last forever.

[20] That is, the need at some point to exit from quantitative easing.

[21] As summarized by Steven Rattner in the New York Times, March 25, 2012: “In 2010, as the nation continued to recover from the recession, a dizzying 93 percent of the additional income created in the country that year, compared to 2009 — $288 billion — went to the top 1 percent of taxpayers, those with at least $352,000 in income. That delivered an average single-year pay increase of 11.6 percent to each of these households. Still more astonishing was the extent to which the super rich got rich faster than the merely rich. In 2010, 37 percent of these additional earnings went to just the top 0.01 percent, a teaspoon-size collection of about 15,000 households with average incomes of $23.8 million. These fortunate few saw their incomes rise by 21.5 percent. The bottom 99 percent received a microscopic $80 increase in pay per person in 2010, after adjusting for inflation. The top 1 percent, whose average income is $1,019,089, had an 11.6 percent increase in income.”

1. One does not have to look far to find intellectuals trained in the humanities, even the social sciences, who feel the need to ‘critique’ the concept of the Anthropocene. Clearly, since we did not invent this concept, it must somehow be lacking! And yet rarely does one find them trying the inverse procedure: what if we took the Anthropocene as that which critiques the state of critical thought? Maybe it is our concepts that are to be found lacking…

2. Even to understand the Anthropocene in its own terms calls for a certain ‘vulgarity’ of thought. The Anthropocene is about the consequences of the production and reproduction of the means of existence of social life on a planetary scale. The Anthropocene calls for the definitive abandonment of the privileging of the superstructures, as the sole object of critique. The primary object of thought is something very basic now: the means of production of social life as a whole.

3. It seems likely that the Anthropocene as a kind of periodization more or less corresponds to the rise of capitalism. But it is no longer helpful, even if that is the case, to tarry among critical theories that only address capitalism and have nothing to say about other periods, other modes of production. The Anthropocene may be brief, but the Holocene is long. A much longer temporality is called for. It is ironic that critical theory, so immune in other ways to ‘anthropocentrism’, nevertheless insists on thinking in merely human time scales.

4. To even know the Anthropocene calls on the expertise of many kinds of scientific knowledge and an elaborate technical apparatus. Those who have led the charge in raising alarm about the Anthropocene have been scientific workers. Those who attempt to deny its significance do so through mystifications which, it must be acknowledged, nevertheless draw on critiques of science. Critical theory need not submit itself to scientific knowledge, but it needs to accept its existence and the validity of its methods. One has to know when one’s tactics, even if correct in themselves, put one on the wrong side of history.

5. Means for enduring the Anthropocene are not going to be exclusively cultural or political, let alone theological. They will also have to be scientific and technical. A united front of many kinds of knowledge and labor is absolutely necessary. To imagine that the ‘political’ or ‘revolution’ or ‘communism’ will now work the miracles they have so signally failed to work in the last two centuries is a charming habit of thought, but not a useful one. In the domain of praxis everything is yet to be invented.

6. And so it is not enough to just critique the Anthropocene with the tired old theory toolbox handed down now for more than one generation through the graduate schools. The Anthropocene is a standing rebuke to the exhaustion of those hallowed texts. Let’s have done with answering all contingencies with the old quotations from Freud and Heidegger, Lukács and Benjamin, Althusser and Foucault. It is time for critical theory to acknowledge its conservative habits – and to break with them.

7. At a minimum, the Anthropocene calls on critical theory to entirely rethink its received ideas, its habituated traditions, its claims to authority. It needs to look back in its own archive for more useful critical tools. Ones that link up with, rather than dismiss or vainly attempt to control, forms of technical and scientific knowledge. The selective tradition needs to be selected again. The judgments of certain unquestioned authorities need for once to be questioned.

8. And in the present, it is time to work transversally, in mixed teams, with the objective of producing forms of knowledge and action that are problem-centered rather than tradition- and discipline-centered. Critical thought avoids the inevitable fate of becoming hypocritical theory when it takes its problems from without, from the world of praxis, rather than from within its own discursive games. The Anthropocene is the call from without to pay attention to just such problems.

9. It is time, in short, for critical theory to be as ‘radical’ in its own actual practice of thought as it advertises. Let’s have done with the old masters and their now rather old-timey concerns. Let’s start with the problem before us, whose name is the Anthropocene.

We are living through dark times. Many lament the decline of a vibrant Left in American politics; why the right has been ascendant for the past quarter century is a matter worth extensive exploration. Zaretsky’s “Rethinking the Split Between Feminists and the Left,” however, both underestimates the deep roots of the American right and overestimates the power of feminism (Perlstein, Lowndes). In doing so, Zaretsky makes it difficult to rethink the possibilities and obstacles for the Left now. Zaretsky’s account of feminist politics runs amok because of the ways in which he links feminism with madness and distances it from radicalism and race. Let us untangle the ways in which Zaretsky puts these elements in play in ways that distort past, present, and future.

Let us begin at the beginning — with madness. Zaretsky starts with a fine recognition of Shulamith Firestone (1945-2012), and her work as a radical feminist thinker (The Dialectic of Sex) and activist. Shortly after Firestone wrote her astonishing book, she was diagnosed as a paranoid schizophrenic and Zaretsky, quite rightly, calls this tragic. But then, using Susan Faludi’s piece in The New Yorker about Firestone’s difficult life as his source, he continues: “Faludi touches on a related topic.” A related topic? In the first of several such slippages in the piece, it turns out that the split between the Women’s Liberation Movement and the New Left was “tragic,” too, and that, like Firestone’s illness, the psychological atmosphere in which she worked was also “truly mad.”

This pathologizing of the women’s movement seems to cordon women of the Left off from the Left more generally, since it is these crazy women who abandon the New Left. These linkages give rise to this extraordinary thought: “As I see it, it is partly thanks to this split that there is no Left in the United States today.” Men of the New Left have been saying this with varying degrees of thoughtfulness or bitterness or hurt feelings for decades. This idea does not qualify as “rethinking.” “The separate or autonomous women’s movement” did introduce its own forms of distortion, isolation, or contradiction but, as at some points Zaretsky concedes, these were not so different from the kinds of schisms and failures of community to be found in the New Left, Old Left, Occupy, etc.

Zaretsky offers to exonerate the women’s movement from being motivated by “anger and irrationality,” but the conjunction of these words repeats the slippage in the argument. Feminist anger was neither “irrational” nor “mad.” Moreover, we thought we could well afford this rage. With the historical ignorance so common at that time, radical feminists of 1968-to-1970 thought the Civil Rights and Left movements from which Women’s Liberation arose were there to stay. The goal was to deepen Left values and analysis, to add feminists’ new understanding of how domination works in the private spheres of life.

It’s with some sorrow that we entirely disagree with Zaretsky’s judgment that women’s departure was a definitive wound that helped take down the New Left. Alas, there is no way that women were key, or important, or recognized as needed in the New Left. Women were what they call in the army “non-essential personnel.” Ann knows; she was there. Women would talk and — now famously recorded in a hundred memoirs — a man had to say the same thing to get that idea heard. Mostly, women didn’t say much. Why bother? In this atmosphere (too normalized to be recognized as mad), neither men nor women thought women were important actors in the movement. But we trust Zaretsky to be observing something. What?

As he eloquently grants, women were the helpers, the servers, the lovers. Could he mean that men needed this support much more than men ever knew or acknowledged or women ever dared to claim? This would make sense, but then what becomes of the subsequent argument that it was specifically women’s departure that was the tragedy? Surely this is a group tragedy, a shared pathology, what Dorothy Dinnerstein called “sexual arrangements and human malaise.” The New Left weakened and lost momentum and appeal to a large constituency for many reasons, and the mixed messages of feminism take their place in this long list.

Zaretsky moves from feminists’ break with the New Left to consider the “decline of a radical tendency within feminism” more generally. Though obviously radical feminism has been weakened, diluted, misunderstood, and co-opted by everything from Sarah Palin’s campaign to neoliberal business identities like those described in Lean In, in which women do everything — run corporations, have happy families with un-neglected kids, etc., etc. — the sad fact is that feminism in America was never a Left movement. Ann remembers regretting this; in many groups she was called “the politico” for her Left identification. As Zaretsky says, whole “ancient and vulnerable institutions” were being swept away, and feminist mobilizations in response reached far beyond the Left locations like trade unions or universities and landed: in Black women’s softball teams in the Midwest; in alternative counter-culture spaces that sometimes included Left fellow travelers but often not; in anti-domestic violence movements where Marxist ideology was never mentioned. Though one key cause of violence, that women were viewed as property, was right there in Engels, discussions of domestic violence didn’t mention him. The Left vocabulary and basic structure of ideas came and went in the astonishing proliferations of feelings, analysis, mobilizations that can only — and only loosely — be called “women’s movements,” “feminisms” — all with the “s” added.

Zaretsky is passionate about the awfulness of the “trashing” of leaders in the Women’s Liberation Movement. Again he uses Faludi: trashing was “like a cancer,” though he concedes that some feminists criticized this madness at the time. This kind of infighting was indeed common, particularly in New York, but a look at the varied memoir essays in The Feminist Memoir Project (eds. DuPlessis and Snitow) might offer a more nuanced picture in which trashing is not at the center of early women’s movement groups or organizing projects. Zaretsky himself seems to feel he’s gone too far in naming trashing as unique to women’s movements or without internal critique. He shifts to a comparison between American movements, which puritanically focus on behavior, and European ones more focused on structure. Agreed. But the example Zaretsky gives of “the personal is political” as one more typically American, private, individualistic complaint flattens the complex history of this concept. “The personal is political” was the great structural insight the women’s movement offered. It was meant to reconfigure public and private, a useful project for Leftists everywhere. Its later incarnation as “the personal is sufficient in itself” as a form of politics is indeed a falling away from earlier movement intentions. But this is a complex generational-historical shift that lines up with the loss of Left political movements in general — as, in some places, Zaretsky notes himself. The proliferation of feminist ideas into neoliberal sites is but one aspect of feminism’s extraordinary reach in many directions, not only as a cooptation by neoliberals but as a new consciousness available in many other progressive forms of politics in the United States today.

The words Zaretsky cites from Barbara Deming tell a large part of this story: “Our lives, women’s lives, are not real to [men] — except in so far as they support the lives of men.” The breakdown in many relationships and institutions made such a thing sayable. And for better and worse, the Left will have to confront the fundamentally different shape of the political facing the young today.

Finally, Zaretsky does some puzzling footwork about why — unlike the autonomous women’s movement — African American movements are not guilty of abandoning the Left and adopting a narrow identity politics. Where the New Left operated with a conception of linked fate, in which none would be free until all were free, feminists, according to Zaretsky, shifted from solidarity to identity politics. Here a different logic is said to prevail in which the personal is political morphs into liberation for some, but not for all. Multiple oppressions within the identity frame, according to Zaretsky, were challenged separately rather than together. It is this shift from solidarity to identity that Zaretsky identifies as the blow feminists launched against the Left.

What then of race? In the final paragraph of his post, Zaretsky turns to race and is quick to specify that African Americans did not follow the same misguided identity politics that derailed radical feminists. “The civil rights movement was not simply a movement for rights, but also aimed at destroying a racially organized state, namely the Jim Crow South. As a result, the civil rights movement was the expression of a group struggling collectively for its origins as a people. This gave black power a different valence than the other movements of the sixties such as the student movement, the antiwar movement and women’s liberation.” The slippage here from “Civil Rights” to “Black Power” is startling since Black Power organizers often faced the same critique Zaretsky aims at radical feminists here. Malcolm X, Stokely Carmichael and others often were attacked as “separatists” damaging the drive towards “racial equality” of earlier phases of the movement. Exactly what distinguishes black power from the radical feminists in Zaretsky’s argument is not at all clear. Zaretsky hints that there is a relation to American nationalism that makes movements for racial equality less individualistic and identity based. But the distinction is tough to maintain.

Numerous accounts of sixties politics have documented the artificiality of sorting identities into neatly bounded demographic groups. What exactly is it that makes women an identity and race a people? Rather than trying to maintain this distinction, we suggest that more can be gained from attending to the ways in which lives and politics traverse multiple identities (Omolade, Lowe, Cohen). Identities, movements and states are necessarily heterogeneous social formations. All identities are vulnerable to divisive appeals; all can be framed in ways that link identities into polyglot forms. The difference lies not in the identity itself but in the politics through which peoples are mobilized. There is no way to inoculate race from the critique Zaretsky has launched against women. Better to specify the ways in which all identities are vulnerable to narrow appeals.

Paradoxically, Zaretsky’s arguments are anchored in the very ideas of identity he seeks to critique; he uses the demographics of groups to describe a far-flung politics that doesn’t resolve clearly into the generalizations he is making. The subtext seems to be, white feminists abandoned broad progressive goals and chose individualism and a separation from the Left’s larger visions and desires. Feminisms (of both white and black women) were more messy and ambitious then and now than Zaretsky allows.

Much is at stake in our disagreement with Zaretsky. As all historians know, how one narrates the past sets future possibilities. The splits Zaretsky describes are a common retrospective closure of the story. Feminists’ break with the Left was never so fast and clean as Zaretsky claims. Political change rarely occurs in such an orderly manner. Recapturing the heterogeneity of the feminist movement, and attending to its enduring entanglements with the American Left, requires that we rethink the dominant narrative of the Sixties that Zaretsky rearticulates here. Doing so changes our sense of past resources and future possibilities of progressive politics in the decades ahead.

“The Women Did It?” by Ann Snitow and Victoria Hattam correctly argues that we need to understand the conflicts and splits of the late nineteen sixties if we are to build a New Left today. Today’s Left is rooted in the decisions and turning points of that time, and it will be hard to build something new until we come to grips with our past. However, Ann and Vicky (for we are all friends) frame the issues wrongly in that they are essentially concerned with blaming and defending. They reiterate that the men of the New Left really were sexist, and that the women of the New Left really had not meant to destroy the New Left in creating women’s liberation. This is not the way to think about it.

To be sure, I would be a fool indeed to “blame” women for the demise of the New Left, as Ann and Vicky suggest I do. The women’s movement of the late sixties was akin to a natural force, a great river of emotion and eloquent power; who would blame a river? No, my purpose was to understand the breakup of the New Left historically, not morally. In this regard I will take up and develop three themes that my original post raised: 1) the historical significance of the women’s liberation that emerged in the late sixties; 2) the difference between Black Power and feminism, and therefore the origins and nature of identity politics; and 3) how to think about a Left today, specifically in its relation to capitalism, feminism and identity politics. I will take these up in turn.

My basic idea, which underlies all three of these points, is that we need to understand the sixties in the context of a much longer conception of the Left. In my view, we have to think of the Left as having gone through three phases: the movements against slavery and for radical self-government of the eighteenth-century democratic revolutions; the movement against capitalism and for socialism that predominated in the late nineteenth and most of the twentieth century; and the movements for personal and sexual emancipation and participatory democracy that emerged in the 1960s. All three of these moments need to be distinguished from mainstream, rights-based liberalism. Rather, they constitute a history and a tradition of their own. The women’s liberation movement that emerged in the sixties, as well as the gay liberation movement that emerged at the same time, was part of this tradition. These movements are impossible to imagine except as part of the New Left, and they cannot be assimilated to liberal feminism, suffragism or the idea of equal rights per se, which is what Snitow and Hattam in effect do. In my post, I was trying to explore what happened to the essentially revolutionary impulse that lay behind these great innovations of the sixties’ Left.

Ann and Vicky ignore the most important sentence in my original post: “Whatever failings the men of the New Left had, and they were many, it is far more reasonable to conclude that women left the Left because they wanted to, than because male sexism drove them out.” The point I was making is that the historical significance of women’s liberation goes way beyond correcting male behavior but rather pointed to a new stage in the evolution of the Left, one centered on themes of personal liberation including sexual liberation that could not be reduced to the earlier anti-capitalist Left. I viewed and still view thinkers like Firestone not as writing defensive responses to male sexism but rather as expressing new possibilities for human freedom, rooted as much in the history of modern art and literature and in the history of psychoanalysis as in Left wing politics. Not for the first time in history, great things and terrible things occurred together. Rather than “blaming” women, the absurdity that Ann and Vicky accuse me of, I was trying to understand what place the emergence of women’s liberation had in what, after all, is the decisive event of our time: the world historical defeat of the Left, a defeat which dates to the late sixties, and in which women’s liberation played a distinct but obviously particular and limited role. My next step in understanding this defeat has to do with the distinction between Black Power and women’s liberation.

The reason this distinction is important has to do with the way in which the Left of the sixties was defeated. It was not exploded and destroyed, as for example communism was — in most accounts — exploded and destroyed. Rather, the ideas and innovations of the New Left were accepted by the liberal mainstream but in a particular, shrunken and distorted form: they triumphed as meritocracy, but not as equality. Understanding the difference between these two is fundamental to understanding my argument. Meritocracy is based on the market and on liberal principles of equal rights. Thus we learned since the sixties not to discriminate against Blacks or Jews or gays or women in hiring for a job, or in electing a politician or — the example Ann and Vicky use — in deciding who should speak at a meeting. I do not discount the liberal principle of equal rights that underlies meritocracy; on the contrary, it is a basis on which any Left of the future must rely. However, it is wholly insufficient for the achievement of justice in any robust, comprehensive sense. Thus our present sensitivity to discrimination against women, Blacks, etc., entirely coincides with a world pretty completely run by and for bankers and rentiers, a world — alien to the impulses that animated the sixties — in which there are two entirely different educational systems, housing systems, opportunity structures and health care systems — for the one percent and for the ninety-nine percent. My piece was an attempt to understand how rampant inequality and intense sensitivity to meritocratic discrimination could so easily and neatly coincide. That is why I introduced the distinction between Black power and women’s liberation, and thereby the question of “identity politics.”

In my post I needed to address the idea that it was the Black Power movement and not the women’s movement that introduced identity politics, for example under the rubric, “Black is beautiful.” My point, however, was that African-American identity politics or Black nationalism was completely different from the identity politics that emerged in the sixties precisely because it was — just that — nationalism. In other words, just as the Irish developed national feelings against the English, the Czechs against the Austro-Hungarian Empire, and the Jews against European anti-Semitism, so African-Americans developed a separate sense of national identity as well. Black power, then, was not the birth of identity politics of the sort that has prevailed since the late sixties; it was rather an understandable expression of national consciousness for a people who had been oppressed as a separate people, a nation.

To understand identity politics, by contrast, we need to understand the new forms of consciousness — specifically of universalism — born in the New Left and the ways in which these forms were transformed into the meritocracy with which we live today. My thinking in this regard was influenced by Kristin Ross’s book on May 68, which introduced the term “dis-identification.” According to Ross,

“May ’68 had little to do with the social group — students or ‘youth’ — who were its instigators. It had much more to do with the flight from social determinants, with displacements that took people out of their location in society, with a disjunction, that is, between political subjectivity and the social group. What is forgotten when May ’68 is forgotten seemed to have less to do with the lost habits of this or that social group than it did with a shattering of social identity that allowed politics to take place.”

This quality of passing beyond one’s social determinants marked a new stage in the evolution of the Left and made possible the unique contribution of the New Left — solidarity with people very different from oneself, such as Vietnamese peasants or Mississippi sharecroppers. This solidarity was not based on class membership, as the old Left had been, nor on individual rights and meritocracy, as liberalism is. Rather, it was qualitatively new in that it was both universalist and based on deep, internal identifications. Women’s liberation departed from historic feminism in that it continued the New Left idea — it sought to free women from being defined by their social determinants, especially women’s place within the family. Still, its implications were ambiguous. On the one hand, women’s liberation took the classic New Left critique of capitalist oppression into its roots in private property and the patriarchal family. By rejecting women’s historic role of self-sacrifice, a role rooted in women’s place in the family, it was also critiquing the most powerful social determinant in human history, the family. This was a potentially enormous turning point in the evolution of the New Left, one that had everything to do with the changes in capitalism and the decline in democracy that was then underway. However, in the event, it was also and soon primarily assimilated to the neo-liberal ideology of equal rights, meritocracy and individual choice, along with identity politics. This failure to grasp the truly revolutionary ideas and values that the Left had created, and to turn them into a continuing radical presence in American life, was the collective failure of an entire generation of Leftists — women as well as men. The result was to allow an essentially conservative liberalism to control the agenda, leading to today’s all-too-obvious era of stasis, psychological depression and loss of direction.

This brings us to the most important point, namely how to build a Left today. First, we should remember that the break-up of the New Left in the late sixties and early seventies was not a unique occurrence. Abolitionists — the first American Left — dropped the ball after slavery was abolished (except for a few individuals like Wendell Phillips). Similarly, socialism never escaped the traumatic experience of Stalinism, a terrible weight that still holds back the obvious need to move beyond one-sided market solutions to organizing our global condition. Occupy Wall Street completely changed political discourse in the United States by inventing the brilliant trope of the ninety-nine percent, but also lost control of that discourse, allowing the Obamaesque pablum of “opportunity” and “Costco” to supplant Occupy’s insights into injustice and exploitation. The first thing we can learn from examining this history is that we need a continuing radical presence, not just an episodic one.

Secondly, the second stage in the history of the Left was correct in foregrounding capitalism. No movement that does not grasp this can call itself a Left. What occurred in the 1960s was the culmination of a long-term psychological revolution unleashed by the rise of capitalism and the corresponding decline in the role of the family as a productive unit. I have called this revolution “personal life,” and discussed both its significance and its limits in various works. To be sure, meritocracy remains an important ideal, but we have to recognize, as previous generations did, that capitalism generates structural inequality, and is completely compatible with identity politics, and the rejection of discrimination. In fact it thrives on these political forms. The critique of capitalism is indispensable for the Left.

Finally, our deepest values are universalist. Part of the value of the upheaval of the sixties lay in the rejection of national identity and the forging of ties across borders. Obviously, one understands that women may want to be with women in discussing certain matters, or that gays want to discuss certain matters with gays, but our deepest politics involves overcoming all forms of division and developing a concept of universal emancipation. We have to recognize that without our contribution — that of a Left — the problems that have surfaced recently in terms of the nature of our economic system, the paralysis of our politics and the flirting with ecological disaster that characterizes today’s world, will only get worse. We have a precious heritage, including the upheavals of the sixties, and we have to guard and advance it.

Bundled into Eli Zaretsky’s unmistakable claim that second wave feminism was substantially to blame for the undoing of the 60s-era Left is another curious charge: that no American Left exists today, or has for a long time [“Rethinking the Split Between Feminists and the Left”]. In their response, Ann Snitow and Vicky Hattam expose the flimsy basis and maladroit construction of the first charge [“The Women Did It?”]. While adding to their case, I address mostly the second. I do so not as one who “was there” in the 1960s but as both a scholar of the period and an activist since the 1980s in what I’ve always considered the Left. Zaretsky’s rebuttal of the Snitow/Hattam response further confuses his original argument while modestly improving its terms. I deal with it briefly at the end.

Uniting both of Zaretsky’s claims is a dismissive view of the experiences and perspectives of others. Second wave feminists might feel proud of their efforts to establish battered women’s shelters, health and day care collectives, rape crisis centers, alternative schools, peace camps, and more accepting versions of the family. But feminists’ most fateful action, Zaretsky insists, was to eat their own through internal schism, while taking down the Left with their separatist fury. Defined by its most extreme tendencies in its most charged settings — and regardless of feminists’ recollection of their own movement — feminism is for Zaretsky the hidden culprit in a great American tragedy.

Today’s activists in diverse struggles might likewise bristle at Zaretsky’s implication that their work to change the world, whatever their sacrifice, amounts to little because not graced with the majestic vision of a Left knowable only by someone of his generation, when the last meaningful radicalism existed. It’s hard enough fighting the powers that be. One shouldn’t have to contend with being told by an ostensible ally that you don’t count within some mystified designation of political authenticity.

* * *

At stake in Zaretsky’s harsh judgments are not wounded feelings but what kind of analysis does justice to the achievements, flaws, and complexities of past movements and helps the Left to flourish today. Consider, in this light, Zaretsky’s sweeping claim:

[I]t is partly thanks to this split [between feminism and the New Left] that there is no Left in the United States today. We do, of course, have protest movements of all sorts, but no Left in the more emphatic sense of a social and intellectual tendency capable of understanding American capitalism as a whole.

What does it mean to be capable of such awesome wisdom, and when last were Americans so blessed with it? Zaretsky’s argument depends on his ability to define and locate this capacity historically.

Zaretsky presumes that the New Left indeed had it, but little supports his generosity. The Port Huron Statement, the founding document of the quintessential New Left organization Students for a Democratic Society (SDS), was mostly a liberal treatise that assailed alienation and injustice, while demanding for individuals greater power to shape their destiny. Capitalism was barely in its sights. Later on, SDS’s Education Research and Action Project, in which members lived among and sought to organize the poor, yielded mostly a new appreciation of the terrible effects of poverty and the difficulty of cross-class alliances, not any grand revelation about capitalism.

New Leftists of the late 1960s cast about for such illumination as they sought models for making revolution. These ranged from “New Working Class” theory, holding that in a late capitalist society knowledge workers displace in importance and function the proletariat; to the canned binary of wage labor versus capital, prompting anachronistic embraces of the industrial working class; to a view of (white) American workers as a global “labor aristocracy” on the wrong side of anti-imperialism and history. And, in an irony Zaretsky surely must recall, the more insistently Marxist the New Left grew — the more it claimed a holistic critique of capitalism — the more dogmatic, primitive, and shrill its analysis typically became. With some exceptions, New Left veterans mostly cringe when recalling the movement’s high Marxist phase.

The Civil Rights and Black Power movements, whatever their deep insight into American society, also proved poor bearers of some master perspective on capitalism. In the first place, the Civil Rights Movement was hardly socialist in analysis or aspiration, no matter the attention Dr. King and others paid to economic injustice. Even the Black Panthers often demanded simply greater inclusion in the social welfare state, while some Black Power adherents saw Black capitalism as key to self-determination. And in no one’s rhetoric was Marxist-Leninist-Maoist sloganeering, from which Black militants were hardly immune, any triumph of understanding.

Even Herbert Marcuse — the great New Left lodestar — was flummoxed by the capitalism of his day. Assuming the objective integration of workers into capitalist prosperity, he ceased to view them as an insurgent force. He therefore advocated a new kind of revolution waged on moral and aesthetic, not conventionally material, grounds. But he also felt that revolution without the working class was “unimaginable.” He never solved this quandary.

To be sure, the New Left’s Marxist tilt had some benefit. Understanding the Vietnam War in terms of imperialism was a true breakthrough that enabled a structural analysis of American militarism and an internationalist purview. Some middle-class New Leftists reinvented themselves as factory workers, bringing a new militancy and ideological depth to a generation of labor struggle. But all too often, radicals’ dedication to a class (or anti-imperialist) politics was rolled up in a fantasy of a moribund US capitalism in its final death throes. A common refrain of erstwhile militants is that they “underestimated the resiliency of capitalism.” Ya think?

Less glibly, one may conclude that the New Left’s Marxism was, at best, as much a liability as an asset. Whatever one’s score sheet, the New Left emerged from the Sixties mostly confused about capitalism and class, which the voguish New Communism of the mid-1970s did little to cure. And feminism was scarcely to blame for the fuzzy (if also doctrinaire) thinking.

Thrown back at itself, Zaretsky’s logic further undermines his autopsy of the Left’s alleged demise. To his proposition that without a robust critique of capitalism there is no Left, one could counter that without an analysis of patriarchy — and the sexual division of labor especially — there is no true critique of capitalism. How, then, did feminists ruin a political trajectory by exposing its blind spots? They might instead be credited for helping to rescue it by urging that it be more perspicacious. So too, African Americans in the Sixties charged that without a thorough understanding of race there could be no apprehension of American power, economic or otherwise. Queer people made similar claims with respect to the marginalization of issues of sexuality within various political tendencies. Behind what Zaretsky likely sees as the Balkanization of the Left on group lines often lay a trenchant awareness of the partial nature of any grand analysis and the necessary limits of a politics based on it.

Seeking to exalt the New Left, Zaretsky instead nearly defines it out of existence by setting as the criterion for belonging something so elusive and abstract as an understanding of “capitalism as a whole.” A more useful understanding of the New Left, influential among scholars, is as a “movement of movements.” In this capacious view, the New Left spans the student, youth, and antiwar movements; the Civil Rights, Black Power, Native American and Chicano/a struggles; second wave feminism and gay liberation; and so forth. This description is also problematic, as it obscures whether members of these movements at the time saw themselves as part of “the New Left” (Blacks rarely did, as the term generally referred to white students and youth) and minimizes the tension between them.

Setting aside issues of nomenclature, this image of “a movement of movements” gives cause to question Zaretsky’s vision of feminism as simply a self-segregating movement apart, hostile to others. Rather, second wave feminism largely behaved as a movement alongside others. Among movements, there was surely tension but also mutual influence and solidarity. And as a practical matter — whether by virtue of the intersectionality Snitow and Hattam note or simple inclination — countless individual feminists participated in multiple struggles. Consider Roxanne Dunbar-Ortiz, author of a well-known memoir. She participated in women’s groups, wrote feminist tracts, and called out the sexism of comrades; went to Cuba on the Venceremos Brigades; worked with the Panthers and Young Lords; and ran with the Revolutionary Union. For many, in sum, the fabled “split” was hardly a clean break but rather the expansion of contexts, scenes, and idioms in which to be political.

Zaretsky’s reductive view of both the Left and feminism feeds his most strained charge. To him, whites in the heavily male, early New Left exhibited “a shattering of social identity and a reaching out at the deepest possible level to achieve solidarity with people utterly unlike oneself,” epitomized by white participation in Freedom Summer and horror at the napalming of Vietnamese. Feminists, by contrast, were myopically focused on their “own” oppression, to the exclusion of that of others.

From one side of the coin, the courage and solidarity whites exhibited in Freedom Summer were the exception, not the rule. In no way should the white New Left as a whole be defined by what were among its greatest heroics during the integrationist phase of the Black struggle. White politicos were continually dogged by both the perception and reality that they could not easily renounce an embedded racial privilege to work graciously and effectively in multi-racial coalition. SNCC, after all, kicked whites out, telling them to organize “their own.”

Indeed, any rich understanding of the era must appreciate as well the incredible difficulty of “shattering social identity.” Scholarship on the 1960s overflows with accounts of the debilitating hierarchies of race, class, gender, and sexuality within and between all sectors of the Left. Put otherwise, the solidarity Zaretsky touts hardly transcended all divisions. A middle-class antiwar organizer who wept for napalmed children, Penny Lewis’s superb new book about the working class and the antiwar movement reminds us, could still have little clue as to how to talk about the war to workers, whose widespread dislike of the conflict could have fueled greater protest. From the Sixties, today’s activists need to know the sources and dynamics of division, not just the alleged perils of separatism.

From the other side, Zaretsky ignores how solidarity among women could be the basis for a broad politics of justice, such as when women organized qua women against rape, domestic violence and war; or environmental threats; or in support of the rights of women globally. (In a contemporary analogue, Code Pink uses gender solidarity as a basis to address everything from drone strikes in Pakistan, to war in Syria, to skewed federal budget priorities.) How is a woman working to prevent another woman from being sexually assaulted merely an expression of a solipsistic “identity politics,” as opposed to a genuine, even universalist magnanimity? Doesn’t everyone have the right not to be violently attacked? Moreover, it is its own myopia to view, for example, domestic violence — surely the most pervasive, literal violence in American daily life, often with drastic consequences for male perpetrators — as simply a women’s issue. Zaretsky’s taxonomy of the meaning and reach of various kinds of struggle is so parochial that it becomes untenable.

* * *

Seeing the Left as a “movement of movements” enables one, at last, to properly recognize a post-60s Left, which has likewise been a plurality of struggles. This was certainly the Left I knew coming of age as a student activist in the mid-1980s. We had our constellation of often interconnected causes, from Central American solidarity, to divestment from apartheid South Africa, to anti-CIA activism, to campaigns against sexual assault. Moreover, we thought ourselves to be proudly participating in the 1960s legacy of anti-racist, anti-imperialist, anti-sexist activism and advancing the broad cause of justice. It was not mere “protest.”

As in the 1960s, the women rightly complained that so-called “women’s issues” were never given the attention they deserved from campus men, seemingly more concerned with the distant, suffering other than with oppressions closer to home, in which they may be more directly implicated. Sometimes with men, but often without, the women threw themselves into the front lines of struggles such as protecting abortion clinics from right-wing zealots. “The Left,” as we experienced it, was strengthened and not diminished by that commitment. And what mattered to us was doing the work, not what political label we adopted.

My activist birth is my own story, worthy in itself of no special attention. I invoke it here to stage a concluding point. In any setting, milieu, and era since the 1960s one finds similar constellations of causes, taken up by countless individuals and groups sometimes winning real change. As the assemblage of such struggles, the Left of course still exists, and every generation of activists deserves plaudits for its contributions, even as its failings may be criticized.

To be sure, in its pluralism the American Left often lacks a common analytic compass and unity of purpose. At real cost, efforts since the Sixties to found a multi-issue, national, radical youth organization akin to SDS have foundered. So too, the relative inattention to issues of political economy following the decline of the late-1990s’ alter-globalization movement, which had synergized diverse struggles, was a loss.

But it would be unduly cynical to read the Left’s heterogeneity as primarily the effect of the system’s divide-and-conquer cunning, or a failure of radical resolve, or neo-liberal dissipation. In part, it reflects the extraordinary diversity of the United States, ribboned with multiple and intersecting lines of difference and possessing a strong doctrinal commitment to freedom. As a historic consequence of slavery and the need for collective emancipation, the United States developed as well a pronounced emphasis on group rights. This cross-hatch has also been, we must recognize, deeply enabling, beyond even the country’s borders. Not by accident, both second wave feminism and gay liberation took root first in the United States, and they are — as they helped catalyze similar movements elsewhere — among the country’s greatest exports. (Saying so is not to deny the validity of critiques of cultural imperialism often leveled at Western feminism.) Inspired in part by decolonization efforts, the Civil Rights and Black Power movements in turn fed the struggles elsewhere of peoples and groups, whether national, sub-national, or subaltern.

Zaretsky is doubtless right that the advent of a robust, long-haul movement to address the deep structures of neo-liberalism would be a good thing. In his book Why America Needs a Left he has important ideas on the subject, and I hope it gains a wide audience. However, if he has any wish of being an intellectual leader of that effort — and of moving young people especially — he would do well to recall true leadership’s requirement, considered an article of faith by the Civil Rights giant Bob Moses: that leaders inspire, empower, and make those they want to mobilize feel good about their efforts. Instead, Zaretsky’s essay dismisses and diminishes even presumed allies.

Look out kids.

Rebuttal to Eli Zaretsky’s response to Ann Snitow and Victoria Hattam

Zaretsky devotes most of his rebuttal to claiming that he was misunderstood by Snitow and Hattam and to further intoning his history of the Left. Conspicuous in the new text is that it contradicts what was stated in the first. In his original essay Zaretsky insists “there is no Left today.” Yet the rebuttal casually refers to “today’s Left.” The original piece describes “the split between radical feminism and the New Left” as a “significant moment” in “the global defeat of the Left.” The rebuttal nonetheless upbraids Snitow and Hattam for somehow thinking that he “‘blames’ women for the demise of the New Left.” His initial essay plainly claims that second wave feminism was constituted by the self-separation of women from the New Left to found a movement of their own. He now writes that “The women’s liberation movement that emerged in the sixties, as well as the gay liberation movement. . . are impossible to imagine except as part of the New Left.” It is difficult to respond to a discourse that flip-flops this way, so I’ll focus only on two of Zaretsky’s most persistent and important points.

In the rebuttal, Zaretsky renews his efforts to spare African Americans the taint of “identity politics” that he reserves exclusively for feminists. Doubling down on the idea of Blacks as a nation, his distinction grows no more convincing. Black nationalism as a strong commitment to formal, institutional and even territorial sovereignty was an important, but still a minority, ideology among African-Americans in the 1960s. It had, as part of a comprehensive Black Power ethos, broader appeal as a commitment to racial pride, self-determination, and power, both within one’s community and in relation to the white establishment. For some, this group feeling took the form of an overtly “national” consciousness and nationalist demands. But for many others it did not, and the historic window in which large numbers of Blacks framed their experience in terms of nationalism quickly closed.

Zaretsky would do well to note that Peniel Joseph and other scholars of Black Power locate its legacy in efforts to represent the African American experience in curricula of all kinds; establish African American studies programs and Africana centers at universities; bust and reassemble various canons; celebrate Black culture; and win political office — not for the sake of liberal “inclusion” but as an expression of power. Feminism of course echoed much of this valuable work with respect to women. This congruence diminishes both the value of and rationale for trying to declare one movement nationalism and the other something less legitimate. More accurate would be to conclude that Blacks, women, and other self-defining groups, if in different ways, have each participated in the good and potential limitations of a group-based politics. (This is not to exempt white men from particularism, as their de facto political interests are commonly masked as universalism.)

Beneath his hortatory tone, Zaretsky at root sensibly utters a Left-wing version of the Bill Clinton slogan: “It’s the economy, stupid!” This stance holds that all political gains will be highly partial if they do not address inequality and its neo-liberal infrastructure. To this, Zaretsky adds the fascinating observation that even as commitments to formal equality and the cultural censure of rank prejudice are in some respects increasing, economic inequality is rising. From this he posits the compatibility and even collusion between liberalism and neo-liberalism, limiting the value of bids for greater rights.

About this analysis, three things. The first is that it can be overstated, especially if one minimizes how subjectivity and experience shape priorities. The demand for greater tolerance may seem to Zaretsky and others a politics of a lesser order. But if you are a black man who does not want to be profiled by police or killed because some white thought you were in the wrong neighborhood, issues of prejudice remain vital — literally life and death, as the massive mobilization protesting the Trayvon Martin murder and verdict grasped. Moreover, there is a new slew of racialized codes, some of which barely try to conceal their racism.

The second is that a Left based in the systemic critique of capitalism Zaretsky favors will have to attach itself to concrete campaigns and causes — none of which promise capitalism’s undoing — if it is to be durable. Occupy radicals accomplished an enormous amount with their “no demands,” cry-of-the-heart protest of today’s capitalism. But Occupy’s evident inability to convert that perspective into sustained, concrete projects (on a large scale at least) surely contributed to its dissipation.

Finally, the whole opposition of an ostensibly universalistic politics of economic equality versus a particularistic politics of identity and group rights is often a false one, given how issues of economy and identity interpenetrate. In this intersection exist enormous opportunities to address systemically and further energize issues around which people are already mobilized. The push for higher wages, for example, is a gender and race issue, given that women — often of color and undocumented — increasingly dominate the lowest wage sectors. Racialized mass incarceration, around which activism is rapidly growing, has economic dimensions, from the relationship of poverty to crime, to the privatization of security, to the exploitation of prison labor. The struggle for immigration reform, which has produced stunningly powerful protests, speaks to the labor migration and other dislocations of global capitalism.

The challenges and possibilities of making connections are near endless. Telling others, as Zaretsky does, what kind of politics is worthy of the title “the Left” does little to actually mobilize passion and insight. Making the Left relevant to new constellations of causes, while carving out new issues and approaches, may be its future.

This lecture was the keynote address at a conference dedicated to Dewey in Mexico held in Mexico City in 2012.

The 1930s was one of the most eventful and productive decades in Dewey’s life. He published more than a half dozen books including Logic: The Theory of Inquiry. It was during this decade that he sharpened his understanding of radical democracy and a renascent liberalism. He interrupted his scholarly work to travel to Mexico as the Chair of the Trotsky Commission — or to give its full title, “The Commission of Inquiry into the Charges Made against Leon Trotsky in the Moscow Trials.” To appreciate the role that Dewey played in the Commission and the significance of his subsequent intellectual exchange with Trotsky, we need to understand the context of his thinking and activities. Dewey began the decade in the midst of the Depression with a sharp critique of what was going on in the United States. Citing a few of his passages gives something of the pungency of his criticisms of the failures of American capitalism. In 1933, addressing the economic situation in the United States and the steps needed for recovery, he wrote:

What are the most evident sore spots of the present? The answer is clear. Unemployment, extreme inequality in the distribution of national income; … a crazy, cumbrous, inequitable tax system that puts the burden on the producer, and the ultimate consumer, and lets off the parasites, exploiters and the privileged, — who ought to be relieved entirely of their gorged excess, … a vicious and incompetent banking system (Later Works [LW] 9: 64).

Although written in 1933, it might just as easily have been written in 2014. And in focusing on the crisis of liberalism, Dewey argued that a doctrine that had once been a rallying point for a demand for equality, toleration, and social justice had become an ideology for defending the status quo of “laissez faire” capitalism.

[T]he crisis of liberalism was a product of historical events. Soon after liberal tenets were formulated as eternal truths, it became an instrument of vested interests in opposition to further social change, a ritual of lip service, or else was shattered by new forces that came in. Nevertheless, the ideas of liberty, of individuality and of freed intelligence have an enduring value, a value never more needed than now. (LW 11: 35)

Dewey called for a “renascent liberalism,” a radical liberalism that categorically rejects any appeal to violence. Yet, Dewey observed, those “who decry the use of violence are themselves willing to resort to violence and are ready to put their will into operation. Their fundamental objection is to change in the economic institution that now exists, and for its maintenance they resort to the use of the force that is placed in their hands by this very institution… Force, rather than intelligence, is built into the procedures of the existing social system, regularly as coercion, in times of crises as overt violence. The legal system, conspicuously in its penal aspect, more subtly in civil practice, rests upon coercion.” (LW 11: 45).

Liberalism must now become radical, meaning by ‘radical’ perception of the necessity of thoroughgoing changes in the set-up of institutions and corresponding activity to bring the changes to pass. For the gulf between what the actual situation makes possible and the actual state itself is so great that it cannot be bridged by piecemeal policies undertaken ad hoc. The process of producing the changes will be, in any case, a gradual one. But ‘reforms’ that deal now with this abuse and now with that without having a social goal based upon an inclusive plan, differ entirely from effort at re-forming, in its literal sense, the institutional scheme of things. The liberals of more than a century ago were denounced in their time as subversive radicals, and only when the new economic order was established did they become apologists for the status quo or else content with social patchwork. If radicalism be defined as perception of the need for radical change, then today any liberalism which is not also radicalism is irrelevant and doomed. (LW 11: 45).

In theory and practice, Dewey was a radical critic of the abuses of American capitalism — a left critic of The New Deal. He was not innocent about power. He felt that both existing parties — the Democratic and the Republican parties — were only “errand boys” of big business. He (unsuccessfully) argued for the need for a new party to take up “the business of educating people until the dullest and the most partisan see the connection between economic life and politics. Its business is to make the connection between political democracy and industrial democracy as clear as the noon-day sun.” [1]

But unlike some of his fellow liberals, Dewey became increasingly skeptical and critical of really existing communism. He had visited the Soviet Union in 1928 and was favorably impressed (especially by the experiments in education), but by the early 1930s he became a sharp and persistent critic. He thought that communism posed a serious threat to his vision of a radical democratic liberalism. In 1934 he joined Morris Cohen and Bertrand Russell in stating explicitly “Why I am Not a Communist.” He opposed the dogmatism of an ideology that “has made the practical traits of the dictatorship of the proletariat and over the proletariat, the suppression of civil liberties of all non-proletarian minorities, integral parts of the standard communist faith and dogma” (LW 9: 91-2). He rejected the absolute determinism of a Communist theory of history and the inevitability of class war. Dewey pulled no punches: having personally experienced the ruthless attacks of Communists, he found their methods of dispute extremely repugnant.

Fair play, elementary honesty in the representation of facts, and especially of the opinions of others, are something more than ‘bourgeois virtues.’ They are traits that have been won only after long struggle. They are not deep-seated in human nature even now — witness the methods that brought Hitlerism to power. The systematic, persistent and seemingly intentional disregard of these things by Communist spokesmen in speech and press, the hysteria of their denunciations, their attempts at character assassination of their opponents, the misrepresentation of the views of the ‘liberals’ to whom they also appeal for aid in their defense campaigns, their policy of ‘rule or ruin’ in their so-called united front activities, their apparent conviction that what they take to be the end justifies the use of any means if only those means promise to be successful — all these, in my judgment, are fatal to the very end which official Communism professes to have at heart. (LW 9: 94).

Indeed, already in 1934, Dewey saw the parallels between what was happening in the U.S.S.R. and the growth of fascism in Italy and Germany. “As an unalterable opponent of Fascism in every form, I cannot be a Communist” (LW 9: 93).

What is distinctive and admirable about Dewey in the early 1930s is the combination of a sharp critique of the excesses of American capitalism and Soviet Communism with a passionate commitment to a vision of a radical democracy. Dewey practiced what he firmly believed. This became evident when Dewey agreed to chair the Commission of Inquiry into the charges made against Leon Trotsky in the Moscow trials. Popular front liberals tended to downplay the significance of these purges, but not Dewey. Dewey was severely attacked for agreeing to chair the Commission — there were even threats on his life. Dewey made it clear he was defending “Trotsky’s right to a public trial, although I have no sympathy with what seems to me abstract ideological fanaticism.” So Dewey, at the age of 78, set aside his work on his Logic, and made the arduous trip to Mexico City, where he chaired the hearings in Coyoacán that consisted of thirteen sessions held between April 10 and 17. Strictly speaking, the inquiry was not a trial. The Commission sought to ascertain the veracity of the charges that had been made against Trotsky and his son in Stalin’s trumped-up Moscow trials. As Dewey stated in the opening session, the Commission “is here in Mexico neither as a court nor as a jury. … Our sole function is to ascertain the truth as far as is humanly possible” (LW 11: 306). The transcript shows just how active Dewey was in carrying out this task. Ironically, for all the criticism of the pragmatist conception of truth, Dewey before, during, and after the inquiry defended the importance of ascertaining the truth. I find it both moving and consistent with his character that Dewey concluded his opening remarks of the first session with the following personal declaration.

Speaking finally not for the commission but for myself, I had hoped that a chairman might be found for these preliminary investigations whose experience better fitted him for the difficult and delicate task to be performed. But I have given my life to the work of education, which I have conceived to be that of public enlightenment in the interests of society. If I finally accepted the responsible post I now occupy, it was because I realized that to act otherwise would be to be false to my life work. (LW 11: 309)

The following September, the Dewey Commission issued a summary of its findings and concluded: “We therefore find the Moscow trials to be frame-ups. We therefore find Trotsky and Sedov not guilty.”[2] After the publication of the Commission’s findings, Not Guilty, the attacks on Dewey became even more vicious. He was called a “fascist,” “a tool of reaction.” A letter appeared in the New Masses signed by many prominent American intellectuals warning that Dewey was being used by Trotskyists. And Dewey, who had long been a contributor to the New Republic, resigned from the editorial board because he felt it took an equivocal stance on the Moscow purges instead of forthrightly condemning them. In response to those “liberals” who questioned the work of the Commission, Dewey wrote, “For if liberalism means anything, it means complete and courageous devotion to freedom of inquiry” (LW 11: 318). In the Soviet Union, Dewey — who after his 1928 visit had been praised as a sympathetic friend — was now condemned as “the mouthpiece of modern imperialistic reaction, the ideologist of American imperialism.” [3]

Although Dewey consistently defended the right of Trotsky to have a fair hearing, he was not sympathetic with Trotsky’s ideological convictions. The opportunity for an intellectual confrontation with Trotsky came after the findings of the commission were published. In June 1938, Trotsky published his famous polemical article “Their Morals and Ours” in The New International. The editors invited Dewey to reply in their August issue. Dewey’s reply is short but sharp. A careful analysis of it reveals a great deal about Dewey’s understanding of and commitment to a radical democratic vision.

Dewey begins by noting that the relation of means and ends has been not only a long-standing issue in morals but also a “burning issue in political theory and practice” (LW 13: 149). Dewey, in his firm but judicious manner, once again condemns those who defend Stalin “on the grounds that the purges and prosecutions, perhaps even with a certain amount of falsification, were necessary to maintain the alleged socialistic régime of that country.” But he is just as critical of those who wanted to condemn Trotsky simply because he was a Marxist and who claimed that if Trotsky had been in power he would have used “any means whatever that seemed necessary to achieve the end involved in dictatorship by the proletariat” (LW 13: 349). Trotsky, in his article, had brought to the fore the explicit discussion of means and ends in social action. Dewey finds common ground with Trotsky in rejecting “absolutistic ethics based on the alleged deliverances of conscience, or a moral sense, or some brand of eternal truths. …” (LW 13: 350). Dewey “holds that the end in the sense of consequences provides the only basis for moral ideas and action, and therefore provides the only justification that can be found for means employed.” The specific thesis advanced by Trotsky that Dewey discusses is the following: “A means can be justified only by its end. But the end in turn needs to be justified. From the Marxian point of view, which expresses the historic interests of the proletariat, the end is justified if it leads to increasing the power of man over nature and to the abolition of power of man over man” (LW 13: 350). [4]

Here is where Dewey digs in. Dewey notes that “end” covers here two things — “the final end and the ends that are themselves means to this final end.” Dewey is here referring to a distinction that is not only relevant to his critique of Trotsky, but absolutely central to his own philosophy. For Dewey consistently argued for the interdependence of means and ends. There is no absolute distinction here. On the contrary, means are constitutive of ends — and what are taken as ends may well be the means to further ends. Why is this conception of the interdependence of means and ends so important for Dewey? In regard to Trotsky’s thesis, it bears on his claim: “That which is permissible, we answer, which really leads to the liberation of mankind.”

Were the latter claim consistently adhered to and followed through it would be consistent with the sound principle of interdependence of means and end. Being in accord with it, it would lead to scrupulous examination of the means that are used, to ascertain what their actual objective consequences will be as far as it is humanly possible — to show that they do ‘really’ lead to the liberation of mankind. (LW 13: 350-51)

If the question is raised about the justification of means, then the first task is to understand as clearly as one can (“as far as it is humanly possible”) what the actual consequences of the means will be. This cannot be “deduced” from any a priori principles or claims about the “laws of history.” And here we see the double significance of the idea of an end.

As far as it means consequences actually reached, it is clearly dependent upon means used, while measures in their capacity of means are dependent upon the end in the sense that they have to be viewed and judged on the ground of their actual objective results. On this basis, an end-in-view represents or is an idea of the final consequences, in case the idea is formed on the ground of the means that are judged to be most likely to produce the end. The end-in-view is thus itself a means for directing action — just as a man’s idea of health to be attained or a house to be built is not identical with end in the sense of actual outcome but is a means for directing action to achieve that end. (LW 13: 351)

We need to distinguish two senses of end — end in the sense of the consequences that actually follow from our action and an end-in-view. The end-in-view is the imagined or conceived end that we adopt to guide our actions. It is the present means for directing action. But it is crucial to distinguish this role of an end-in-view as a means for directing action from the objective consequences of the means that are adopted. Why? Because when it comes to any action — especially political action — we cannot make a categorical distinction between means and end. Indeed, Dewey consistently argued that if one seeks to achieve or further democratic ends, then this demands the employment of democratic means. Democratic means are constitutive of democratic ends. It is crucial to emphasize the difference between anticipated consequences and actual consequences for two reasons. First, because when we evaluate a means, we must evaluate the anticipated consequences as carefully as we can. And this is an issue open to debate and public controversy. Second, we must always be alert to the disparities that can arise between anticipated consequences and actual consequences. The relation of means and ends is not only interdependent; it is dynamic — not static. Ends-in-view guide our actions. They demand that we anticipate the objective consequences of our actions. But when there is a disparity between anticipated consequences and actual consequences, we are required to alter our ends-in-view.

An individual may hold, and quite sincerely believe as far as his personal opinion is concerned that certain means will ‘really’ lead to a professed and desired end. But the real question is not one of personal belief but of the objective grounds upon which it is held: namely the consequences that will actually be produced by them. (LW 13: 351)

This demands judgment and public debate about anticipated objective consequences and a real willingness to alter our ends-in-view in light of actual consequences. So from Dewey’s perspective, if Trotsky were consistent in his claim that “dialectical materialism knows no dualism of means and end,” then he ought to consider the various means without a “fixed preconception of what they must be.” But this is not the course adopted by Trotsky. He writes: “The liberating morality of the proletariat is of a revolutionary character. … It deduces a rule of conduct from the laws of development of society, thus primarily from the law of all laws.” (LW 13: 351)

Here we can locate what is perhaps the most fundamental difference between Dewey’s fallibilistic pragmatism and the Marxism professed by Trotsky (and many others). Dewey, like all the thinkers in the pragmatic tradition, is profoundly skeptical and critical of a conception of “science” that seems to owe more to nineteenth-century German conceptions of Wissenschaft (with its suggestion of necessity and finality) than to the actual practice of experimental science. All scientific hypotheses and theories in the natural and social disciplines are fallible and open to public criticism and revision. If we refuse to carefully and publicly evaluate different means for achieving our goals, then we are violating the most elementary principles of inquiry. And if we do not acknowledge that any scientific claim is open to revision and criticism in light of further evidence and argument, then we are abandoning scientific inquiry. To speak of “the law of all laws of social development” is sheer dogmatism. Trotsky, in effect, is dogmatically taking the class struggle as the only means for achieving the “liberation of mankind” without a careful critical examination of the meaning and actual consequences of “class struggle.” Despite Trotsky’s claim that “dialectical materialism knows no dualism between means and ends,” Dewey shows that Trotsky presupposes just such a dualism.

For the choice of means is not decided upon the ground of an independent examination of the measures and policies with respect to their actual objective consequences. On the contrary, means are ‘deduced’ from an independent source, an alleged law of history which is the law of all laws of social development. (LW 13: 352)

Dewey’s doctrine of the interdependence of means and ends does not rule out the role that such a struggle may play in furthering democratic ends-in-view. But if these means are to be justified, they must be justified “by an examination of actual consequences of its use, not deductively.” “It is one thing to say that class struggle is a means of attaining the end of liberation. It is a radically different thing to say that there is an absolute law of class struggle which determines the means to be used” (LW 13: 353). It follows from Dewey’s analysis that we must also be critical of taking a vague abstraction as if it specified a concrete end. For the very meaning of what Trotsky takes to be the final end that does not need justification — “the liberation of mankind” — is itself open to public discussion and criticism. What precisely do we mean by “the liberation of mankind”? There is something desperately wrong with thinking that there are “final ends” that are not subject to critical evaluation. To speak as if we can simply dogmatically specify “final ends” is to remove these ends from public criticism.

Dewey makes a further point. To speak of the liberation of mankind as an end to be striven for is to speak about a moral end. “No scientific law can determine a moral end save by deserting the principle of the interdependence of means and end.”

A Marxian may sincerely believe that class struggle is the law of social development. But quite aside from the fact that the belief closes the door to further examination of history — just as an assertion that the Newtonian laws are the final laws of physics would preclude further search for physical laws — it would not follow, even if it were the scientific law of history, that it is the means to the moral goal of the liberation of mankind. That it is such a means has to be shown not by ‘deduction’ from a law but by examination of the actual relations of means and consequences; an examination in which given the liberation of mankind as end, there is free and unprejudiced search for the means by which it can be attained. (LW 13: 353)

The point that Dewey emphasizes goes beyond his dispute with Trotsky. Dewey called for the application of experimental scientific procedures in dealing with moral and political issues. But he certainly did not think we can read off from science — whether the natural or the social sciences — the moral goals for which we ought to strive. Insofar as Trotsky’s conception of science is one that reveals — once and for all — what are supposed to be the “laws of history and social development,” he is guilty of confusing what we can learn from science with what ought to be our moral ends-in-view. Scientific knowledge is relevant in articulating and defending our moral ends-in-view, but the appeal to science is never sufficient to justify our moral vision and the moral ends-in-view that we seek to achieve. Here again Dewey is criticizing not only Trotsky’s appeal to “the law of all laws of social development,” but also a deeply flawed conception of science.

Dewey concludes his sharp critique by charging that Trotsky, in avoiding one form of absolutism, plunges into another form of absolutism.

The only conclusion that I am able to reach is that in avoiding one kind of absolutism Mr. Trotsky has plunged into another kind of absolutism. There appears to be a curious transfer among orthodox Marxists of allegiance from the ideals of socialism and scientific methods of attaining them (scientific in the sense of being based on the objective relations of means and consequences) to the class struggle as the law of historical change. Deduction of ends set up, of means and attitudes, from this law as the primary thing makes all moral questions, that is, all questions of the end to be finally attained, meaningless. To be scientific about ends does not mean to read them out of laws, whether the laws are natural or social. (LW 13: 354)

I have analyzed Dewey’s response to Trotsky for several reasons. Dewey is frequently criticized for his “wooly” prose, but his trenchant critique illustrates how precise, perceptive, and polemical Dewey could be. He raises some of the most searching questions about what is presupposed and obscured in the doctrine that “the end justifies the means.” He questions the very ideas of science, law, and history that underlie Trotsky’s understanding of the relation of means and ends. When we read his critique of Trotsky in the context of his thinking and actions during the 1930s, we can see how Dewey is an exemplar of the committed radical liberal democrat who refuses to be seduced by any form of dogmatism. Dewey was also a consistent and persistent critic of the abuses of capitalism. He condemned severe economic inequality and the rapacious character of unfettered capitalism. He feared that money and power were undermining what is most vital in democracy. He chided those who appealed to an outdated “liberalism” to defend the status quo. He called for a radical liberalism that demanded “a social goal based on an inclusive plan.” But unlike some popular front “liberals,” Dewey had no illusions about Communism and what was happening in the Soviet Union under Stalin. And he had no patience with those who wanted to sacrifice truth to what they took to be political expediency. Dewey was viciously attacked from the right and the left, but he had the courage of his convictions. His willingness to chair and take an active role in the Trotsky inquiry showed how seriously he took the values of truth, fairness, and toleration that he took to be fundamental for a fighting liberalism. He had no sympathy with the ideas professed by Trotsky, but he defended Trotsky’s right to a fair hearing. He had no patience with those who ignored or downplayed the horrors of the Moscow trials and purges.
Dewey knew that in times of crisis there is an enormous temptation to abandon democratic means, to resort to violence, to use any means possible to achieve one’s ends. But he exposed and resisted this temptation. He never wavered in his conviction that there is a dynamic interdependence of democratic means and democratic ends — and that both means and ends-in-view need to be constantly rethought in light of actual consequences. In opposition to Trotsky on means and ends, Dewey states:

The fundamental principle of democracy is that the ends of freedom and individuality for all can be attained only by means that accord with those ends. … There is intellectual hypocrisy and moral contradiction in the creed of those who uphold the need for at least a temporary dictatorship of a class as well as in the position of those who assert that the present economic system is one of freedom of initiative and of opportunity for all. … A democratic liberalism that does not recognize these things in thought and action is not awake to its own meaning and to what that meaning demands. (LW 11:298)

I have frequently said that we cannot turn to Dewey to solve our current problems. But I believe that he can serve as a source of inspiration. He exemplifies what is best in our democratic liberal tradition. He had the courage to stand up against his critics on the right and left. He expressed his outrage about the injustices and failures of American capitalism and called for radical reform of economic and political institutions. He supported protest movements against the abuses of capitalism. At the same time, he had no illusions about “really existing communism,” especially Stalinist totalitarianism. He was not afraid to stand up to those “liberals” who equivocated about the scandals of the Moscow purges. He refused to compromise on the principle that creative democracy can be achieved only by democratic means.

Hannah Arendt spoke about living in dark times. Dark times occur when there is a debasement of speech and action, when “light is extinguished by ‘credibility gaps’ and ‘invisible governments,’ by speech that does not disclose what is but sweeps it under the carpet, by exhortations, moral and otherwise, that under the pretext of upholding old truths, degrade all truth to meaningless triviality” (Arendt 1995: viii). Arendt went on to say “that even in the darkest of times we have a right to expect some illumination, and that such illumination may well come less from theories and concepts than from the uncertain, flickering, often weak light that some men and women, in their lives and works, will kindle under almost all circumstances and shed over the time span that was given them on earth” (Arendt 1995: ix). We are now living through “dark times.” Dewey’s life, works, and deeds — especially as exemplified in that other dark period, the 1930s — provide the type of illumination that is so badly needed today as we face new threats to the democratic ideals that Dewey cherished and to which he dedicated his life’s work.

NOTES

[1] “Democracy Joins the Unemployed,” a speech delivered on July 2, 1932, cited in Westbrook 1991: 443.

[2] Not Guilty: Report of the Commission of Inquiry into the Charges Made Against Leon Trotsky in the Moscow Trials (1938: xv).

[3] Quoted in Westbrook 1991: 482.

[4] Trotsky’s passage continues: “That is permissible … which really leads to the liberation of mankind. Since this end can be achieved only through revolution, the liberating morality of the proletariat of necessity is endowed with a revolutionary character. It irreconcilably counteracts not only religious dogmas but all kinds of idealistic fetishes, these philosophic gendarmes of the ruling class.” Steven Lukes argues that Trotsky exhibits what Lukes labels the “paradox” of Marxism: “[W]hat is striking about Marxism is its apparent commitment to both the rejection and the adoption of moral criticism and exhortation.” Lukes 1985: 4.

REFERENCES

Arendt, Hannah (1995) Men in Dark Times. Harcourt Brace & Company, New York.

Dewey, John, The Later Works, 1925-1953. Southern Illinois University Press, Carbondale, Ill.

Lukes, Steven (1985) Marxism and Morality. Clarendon Press, Oxford.

Not Guilty: Report of the Commission of Inquiry into the Charges Made Against Leon Trotsky in the Moscow Trials (1938) Harper & Brothers, New York.

Westbrook, Robert (1991) John Dewey and American Democracy. Cornell University Press, Ithaca, N. Y.

When first published in 2002, The Rise of the Creative Class quickly established its author Richard Florida as an urban policy and business management guru. The Rise of the Creative Class heralded the emergence of a new class of worker who promised to lead the economy, and along with it the rest of society, to unprecedented levels of prosperity. The creative class, according to Florida, included scientists, engineers, artists, designers, media producers, and others whose primary function is “to create new ideas, new technology and/or creative content.” They are abetted in this endeavor by a whole host of high-level information workers — doctors, lawyers, accountants, educators, and the like — who draw upon complex bodies of knowledge to solve difficult problems that require high degrees of autonomy. To mark a decade of influence, the book was re-released in 2012 in a substantially updated version, The Rise of the Creative Class, Revisited, now out in paperback.

Based on statistical modeling of US Census information, demographic surveys, and economic data, Florida’s theory holds that metropolitan regions with high concentrations of creative class workers tend to outperform other areas not so well endowed. Specifically, Florida’s research zeros in on what he terms the “three T’s” of economic development: technology, talent, and tolerance. The last indicator is based on the presence of so-called bohemians — musicians, writers, designers, and other arty types — and gays in a community. Together the three T’s comprise the “creativity index” that measures a region’s economic potential as a result of its supply of “creative capital.”

The creative class thesis soon became the rationale behind a number of urban redevelopment projects, particularly in the Midwest where cities that were once paragons of America’s productive might have struggled to find a place in the postindustrial economy. Municipal officials, corporate CEOs, foundation staff, and other policy wonks embraced the concept, citing Florida in their efforts to promote arts and culture districts and otherwise jump start their local creative economies. One such example was former Michigan Governor Jennifer Granholm’s 2003 Cool Cities initiative aimed at providing grants and other resources to the Rust Belt cities of Flint, Saginaw, and Detroit in hopes of bringing them back from near extinction.

As much as the book had its adherents among policymakers, it equally had its detractors on both sides of the political spectrum. Conservatives decried the valorization of bohemian and “alternative” lifestyles while liberals denounced the apparent glossing over of the thorny issues of rising inequality and race. More academic readers questioned Florida’s argument for its lack of precision in defining the composition of the creative class meaningfully and for his research methodology. Florida spends a good part of The Rise of the Creative Class, Revisited responding to his critics.

Florida asserts that for the most part experience has borne him out. The regions he predicted to do well generally have done so, even factoring in the financial upheavals of 2008. He further finds the creative class thesis applicable in the global context as well. He leaves out any direct discussion of the critique by economist Ann Markusen, whose competing concept of “creative placemaking” is more modest in its claims and seems to be more solidly grounded empirically. (Markusen’s white paper on creative placemaking, written in collaboration with Ann Gadwa for the National Endowment for the Arts, sets out the concept.) It should be acknowledged, though, that the significance of place does factor highly in Florida’s analysis, but in a broader context. One area he does pay more attention to is inequality, adding a new section at the end devoted to the topic. However, even there he notes that he originally wanted a chapter on inequality in the earlier edition of the book but was dissuaded from it by his publisher, who told him that the manuscript was already too long.

Florida has leveraged the creative class concept into big league consulting and punditry. His clients include Fortune 500 companies such as IBM, BMW, and Philips. He is in demand around the world as a speaker and is now a senior editor at The Atlantic magazine where he co-founded and edits The Atlantic Cities website. This is America, after all, where turning nothing into money is a time-honored tradition, so he can’t be faulted for cashing in. Still, there is cause to be circumspect about it all.

Florida’s theory is actually a pretty grand thesis. And I’m convinced that most of those who claim to have adopted it actually haven’t read it, or at least have misunderstood it. A big problem is that they don’t seem to recognize what Florida freely acknowledges about statistical research, namely that correlation does not imply causation. In this case the presence of the creative class (a fuzzy concept to be sure) in a metropolitan area is associated with economic growth but may not in fact be the root cause of it. There may be what statisticians term intervening variables at work. Many proponents seem to have just picked up the “creative” buzzwords and run with them, often to perdition. By the same token, the critics likely haven’t read Florida either; they’re really criticizing the use that others who haven’t read him have made of his work.

In a nutshell, Florida is one-upping Karl Marx, casting the creative class as the rightful inheritors of the fruits of the Earth. The creative class is distinct from the service class, who occupy low level McJobs with virtually no upside potential, and the working class, whose prospects have been and continue to be in decline. Like Marx’s proletariat, the creative class is currently a class in itself — a class having a common relation to the means of production — in need of evolving the collective consciousness of a class for itself — a class organized in pursuit of its own interests.

To help them accomplish that mission, Florida sets forth a “Creative Compact,” a new and improved New Deal akin yet ostensibly superior to the social compact of the 1930s, ’40s, and ’50s that led to the last golden age of broadly experienced prosperity. Florida’s compact calls for the “creatification” of everyone in order to unleash their greatest potential. This is achieved essentially by doing a lot of the things the old New Deal and its subsequent iterations purported to do, such as broadening access to educational opportunity, promoting diversity, strengthening the social safety net, reviving cities, etc., only more so.

The Creative Class, Revisited cites a lot of social science literature as part of its argument (Weber and Durkheim in addition to Marx, as well as Mark Granovetter, Daniel Bell, Arlie Russell Hochschild, and others), but there’s a big sociological question being left on the table, specifically the question of agency. In sociology, agency is the capacity to act, individually or collectively, in accordance with one’s will. It is typically juxtaposed to structure, the social patterns and institutions (i.e., customs, ideologies, class, gender, ethnicity, etc.) that constrain that capacity. In Marx, structure is the capitalist system, which ties the capacity to act to one’s position in relation to the means of production. Both capitalist and worker are constrained in different ways with different potential outcomes by the relentless drive for profit.

At several points in The Creative Class, Revisited, Florida references Fordism, the economic and social system of standardized industrial production, named for Henry Ford, that engendered broadly shared prosperity for a good part of the twentieth century. Fordism, in Florida’s telling, was “dumb growth,” growth that presumed more and more material output as de facto the best marker of prosperity. Fordism was a key driver of productivity in the Organizational Age, the age of large hierarchical bureaucracies and the conformist identities that were necessary to keep the system going. But that system neglected to account for such externalities as sustainability and personal self-actualization. For Florida, the death knell of the Fordist/Organizational Age was the Great Recession of 2008.

The creative economy by contrast is supposedly smart growth; what we lack is the proper metric by which to assess it. The Creativity Index is Florida’s attempt to develop such a metric, which in addition to productivity takes into consideration happiness and well-being. (How it does that is unclear. Is having a high presence of bohemians and gays in one’s community necessarily correlative to increased happiness and well-being?) But other than references here and there to globalization and the role of information within it, the structure under which the creative economy operates is left unstated.

In truth there have been a number of attempts to define the current structure going back several decades. Among the earliest is Michel Aglietta’s theory of capitalist regulation, first published in France in 1976. Regulation theory is part of a broader analysis of contemporary capitalism gathered under the rubrics post-Fordism and neoliberalism. Prominent researchers include Giovanni Arrighi, David Harvey, the Italian autonomists, and others too numerous to mention here. This structure indeed contrasts with the Fordist/Organizational regime in many of the ways Florida describes, but it maintains relationships of power he doesn’t examine.

The creative class is crucial to the post-Fordist system, but within that structure many researchers would find the extent of its agency to be questionable. McKenzie Wark’s A Hacker Manifesto, for example, similarly casts what Florida terms creatives (Wark calls them hackers) as a class in itself but counterposes it to a “vectoralist” class, a reconstituted elite of capitalists so named for their control over the nodes and networks of information and thus capital flows. One of the vectoralists’ primary tools for separating hacker-creatives from the fruits of their labor is the regime of intellectual property that induces producers to sign away their copyrights, patents, and trademarks for a fraction of their true value.

Then there’s the case of the prosumer, the mash-up of producer and consumer roles under Web 2.0 whereby users of social media such as Facebook, Pinterest, Instagram, and the like entertain themselves and their friends by creating and sharing content for free while being sold to advertisers for a profit that accrues to vectoralists like Mark Zuckerberg, Sheryl Sandberg, Marissa Mayer, Jack Dorsey, and their venture capitalist partners.

Finally, there’s what Luc Boltanski and Eve Chiapello term “the new spirit of capitalism,” an economic and ideological order under which we get to work for little or no pay in return for the privilege of self-expression in the manner of the Romantic artist starving in the garret. (For an excellent critique of the business mantra “Do what you love; love what you do,” see this essay by Miya Tokumitsu in Jacobin.)

And all of this is as much potentially subject to outsourcing to lower-cost production zones in lesser-developed parts of the world as Fordist manufacturing jobs have been, as many members of the creative class have discovered as of late, much to their chagrin.

To his credit, Florida does take note of the precarious nature of creative class work. He calls for more security and greater equity at several points in the book and the Creative Compact is his blueprint for getting there. He expresses hope for the potential of a broadly shared prosperity with the concluding statement that “every single human is creative.” Even the notion of the creative class as a class in need of developing a collective consciousness as class for itself hints at the specter that haunts The Creative Class, Revisited, the specter of the commonwealth that continues to elude us.

This article was originally published on March 12, 2014 in Motown Review of Art.

I don’t know why we still call it capitalism. It seems to be some sort of failure or blockage of the poetic function of critical thought.

Even its adherents have no problem calling it capitalism any more. Its critics seem to be reduced to adding modifiers to it: postfordist, neoliberal, or the rather charmingly optimistic ‘late’ capitalism. A bittersweet term, that one, as capitalism seems destined to outlive us all.

I awoke from a dream with the notion that it might make more sense to call it thanatism, after Thanatos, son of Nyx (night) and Erebos (darkness), twin of Hypnos (sleep), as Homer and Hesiod seem more or less to agree.

I tried thanatism out on Twitter, where Jennifer Mills wrote: “yeah, I think we have something more enthusiastically suicidal. Thanaticism?”

That seems like a handy word. Thanaticism: like a fanaticism, a gleeful, overly enthusiastic will to death. The slight echo of Thatcherism is useful also.

Thanaticism: a social order which subordinates the production of use values to the production of exchange value, to the point that the production of exchange value threatens to extinguish the conditions of existence of use value. That might do as a first approximation.

Bill McKibben has suggested that climate scientists should go on strike. The Intergovernmental Panel on Climate Change released its 2013 report recently. It basically says what the last one said, with a bit more evidence, more detail, and worse projections. And still nothing much seems to be happening to stop thanaticism. Why issue another report? It is not the science, it’s the political science that’s failed. Or maybe the political economy.

In the same week, BP quietly signaled its intention to fully exploit the carbon deposits to which it owns the rights. A large part of the value of the company, after all, is the value of those rights. To not dig or suck or frack carbon out of the ground for fuel would be suicide for the company, and yet to turn it all into fuel and have that fuel burned, releasing the carbon into the air, puts the climate into a truly dangerous zone.

But that can’t stand in the way of the production of exchange value. Exchange value has to unreel its own inner logic to the end: to mass extinction. The tail that is capital is wagging the dog that is earth.

Perhaps it’s no accident that the privatization of space appears on the horizon as an investment opportunity at just this moment when earth is going to the dogs. The ruling class must know it is presiding over the depletion of the earth. So they are dreaming of space-hotels. They want to not be touched by this, but to still have excellent views.

It makes perfect sense that in these times agencies like the NSA are basically spying on everybody. The ruling class must know that they are the enemies now of our entire species. They are traitors to our species being. So not surprisingly they are panicky and paranoid. They imagine we’re all out to get them.

And so the state becomes an agent of generalized surveillance and armed force for the defense of property. The role of the state is no longer managing biopower. It cares less and less about the wellbeing of populations. Life is a threat to capital and has to be treated as such.

The role of the state is not to manage biopower but to manage thanopower. From whom is the maintenance of life to be withdrawn first? Which populations should fester and die off? First, those of no use as labor or consumers, and who have ceased already to be physically and mentally fit for the armed forces.

Much of these populations can no longer vote. They may shortly lose food stamps and other biopolitical support regimes. Only those willing and able to defend death to the death will have a right to live.

And that’s just in the over-developed world. Hundreds of millions now live in danger of rising seas, desertification and other metabolic rifts. Everyone knows this: those populations are henceforth to be treated as expendable.

Everybody knows things can’t go on as they are. It’s obvious. Nobody likes to think about it too much. We all like our distractions. We’ll all take the click-bait. But really, everybody knows. There’s a good living to be made in the service of death, however. Any hint of an excuse for thanaticism as a way of life is heaped with Niagaras of praise.

We no longer have public intellectuals; we have public idiots. Anybody with a story or a ‘game-changing’ idea can have some screen time, so long as it either deflects attention from thanaticism, or better – justifies it. Even the best of this era’s public idiots come off like used car salesmen. It is not a great age for the rhetorical arts.

It is clear that the university as we know it has to go. The sciences, social sciences and the humanities, each in their own ways, were dedicated to the struggle for knowledge. But it is hard to avoid the conclusion, no matter what one’s discipline, that the reigning order is a kind of thanaticism.

The best the traditional knowledge disciplines can do is to focus tightly on some small, subsidiary problem, to avoid the big picture and look at some detail. That no longer suffices. Traditional forms of knowledge production, which focus on minor or subsidiary kinds of knowledge, are still too dangerous. All of them start to discover the traces of thanaticism at work.

So the university must be destroyed. In its place, a celebration of all kinds of non-knowledge. Whole new disciplines are emerging, such as the inhumanities and the antisocial sciences. Their object is not the problem of the human or the social. Their object is thanaticism, its description and justification. We are to identify with, and celebrate, that which is inimical to life. Such an implausible and dysfunctional belief system can only succeed by abolishing its rivals.

All of which could be depressing. But depression is a subsidiary aspect of thanaticism. You are supposed to be depressed, and you are supposed to think that’s your individual failing or problem. Your bright illusory fantasy-world is ripped away from you, the thanatic reality is bared, and you are supposed to think it’s your fault. You have failed to believe. See a shrink. Take some drugs. Do some retail therapy.

Thanaticism also tries to incorporate those who doubt its rule, making over their critique as new iterations of thanatic production. Buy a hybrid car! Do the recycling! No, do it properly! Separate that shit! Again, it’s reduced to personal virtue and responsibility. It’s your fault that thanaticism wants to destroy the world. It’s your fault as a consumer, and yet you have no choice but to consume.

“We later civilizations… know too that we are mortal,” Valéry wrote in 1919. At that moment, after the most vicious and useless war hitherto, such a thing could appear with some clarity. But we lost that clarity. And so: a modest proposal. Let’s at least name the thing after its primary attribute.

This is the era of the rule of thanaticism: the mode of production of non-life. Wake me when it’s over.

We share Eli Zaretsky’s desire to understand the trajectory of the Left past, present, and future. We disagree with him over the nature of the Left itself and with his account of the dynamics of political change. Where Zaretsky looks to the longue durée and to political breaks as sources of current decline, we argue that the Left was always a more protean political formation in which lines of affiliation and disagreement were porous and changing. Finally, we insist that if we are to understand the fate of the Left we must put it in dynamic relation with the actions of capital. Without expanding the political field, we mis-specify the geographies of political action, then and now.

Contested Fields Rather Than Defection

In the early 70s, small groups, Marxist-Feminist I-IV, formed to think about how Marxism and feminism might be put together. At one of those meetings, Snitow remembers the anthropologist Sherry Ortner rather regretfully saying that her Marxism wasn’t an adequate frame for what she was seeing in her research. “There’s something phobic in men’s actions toward women.” Left women struggled in many locations with exclusions that made no rational sense. Men seemed to be irrational in their need for the public sphere to be male only.

But Zaretsky reiterates in his latest post his point that it was feminism and its wild move to separation that was irrational: “The women’s movement of the late sixties was akin to a natural force, a great river of emotion and eloquent power; who would blame a river?” This kind of tin ear is not what one expects from a colleague who is familiar with feminist theory, with the long effort to analyze the many ways in which sexism and the family work. By equating women with nature, with flowing emotion, and by seeing their force as overpowering and with no rational restraint, we arrive at a distortion of New Left feminist history. (Consider the image reversed: “The men of the New Left were like a natural force, a mighty stream of emotional outrage; they can’t be blamed for the ways their utopian yearning for unity caused the New Left to seriously, even tragically, underestimate the forces impeding this unity, nor for the ways in which ‘unity’ itself is a problematic wish.”) But in some senses, Zaretsky’s river has its truth: those were indeed wildly flowing times, and both men and women were caught up in visions of new freedoms and new desires.

At the other end of the spectrum from great rivers, Zaretsky sees early second wave feminist thought as having a narrow focus on identity. But many classic feminist texts contain multiple strands that get flattened in Zaretsky’s liberation-to-identity politics retelling. Early feminist work ranged widely from the economy to the family, from the shape of the public sphere to various, shifting forms of private life.

Out of the spate that was 60s politics, Zaretsky’s impulse is to sort out radical movements one from another, trying to establish bright lines among what were more muddied, heterogeneous political phenomena: First, there are the “liberal” ones unable to effect radical change, only demanding small accommodations against discrimination, lack of opportunity, etc., changes that are easily assimilated and co-opted by the system. Second, there is real radicalism, which always has capitalism in its sights. Zaretsky blames the Left for “allowing” (his verb) these different tendencies to run into each other. If only we had the power to “allow!” Many other forces are always at work defining the possible.

In Zaretsky’s sorting device, he places the “movements for personal and sexual emancipation and participatory democracy” inside his idea of a radical New Left. The Women’s Liberation Movement and the Gay Liberation Movement, he says, were part of an authentic radicalism; “they cannot be assimilated to liberal feminism, suffragism, or the idea of equal rights per se, which is what Snitow and Hattam in effect do.” Alas, not only can all these movements be assimilated; they often are. We agree with Zaretsky that liberation movements have the potential to demand radical change but there is no guarantee that these regroupings will go beyond the “identity politics” Zaretsky deplores. Some do, some do not develop into potentially radical political struggles. The new groupings require the development of their own radical political analyses and practices.

Complex movements include many locations, and there is constant cross-over of ideas and of individuals. For example, many radicals have chosen “rights” struggles in these thirty-plus years of reaction; they have seen “rights” as the only discourse in town and have tried to extend rights territory to include systemic social change (for example, the National Economic and Social Rights Initiative [NESRI]). They try to push liberal opportunities towards radical outcomes in a time when an organized Left has been absent, for many reasons beyond the scope of this post.

We need empathy with how radicals try to climb, step by step, in very restrictive situations, and the ways in which they are sometimes forced back down into a mere caricature of their once freely evolving demands. That a powerful movement like feminism is constantly being co-opted, turned into private rather than social goods in a grossly privatizing corporate world, assimilated into values its radical thinkers abhor, should go without saying. Such afterlives of radical feminist demands — or call them such aggressive reinterpretations of those demands — are signs of both feminism’s various defeats and its extraordinary successes. To give an example of this kind of slide-around, post-1989 feminism in East Central Europe was often anti-Left, given recent communist history. Initially, many people were happy to greet liberation as a free marketplace. But already this formation is changing. Disenchantment with once-lionized Reaganism has given rise to an indigenous New Left, especially among the young. Alas, there’s a separation between these New Leftist ideals and a growing regional feminism that replicates some of what happened in the U.S. Feminism travels, and a vigorous Left feminist movement (of both men and women this time) would require constant rethinking, recombining of interests, and reimagining of effective on-the-ground strategies.

Zaretsky says we ignored his most important sentence: “…Women left the Left because they wanted to [not primarily] because male sexism drove them out.” Zaretsky again betrays a desire to parse and split phenomena that we think need to be analyzed together. Male sexism and women’s separatism are related, not alternative paths. Separation had many meanings in those roiling times — from lesbian self-realization to new political forms that could include once quiet and isolated women in new ways. But how Zaretsky gets from those partial, or sexual, or angry, separations to the idea that this became the damaging “identity politics” that prevented our collectively reaching a next stage of revolution, seems to us to require quite a leap.

In fact, radical feminism was precisely the opposite of identity politics. Many early activists thought of female identity as something others had done to them, the very mechanism of their oppression. Initially, radical feminism was about what Zaretsky says the New Left dreamed of, an escape from being over-defined by “social determinants.” It should be obvious to everyone on the Left: “Passing beyond one’s social determinants” was much easier for some than for others. Dare we turn Zaretsky’s argument on its head? Maybe the failure of the New Left was its inability to see how many couldn’t act on that dream of existential freedom and universal community. There was damage, lack, restriction, anger, disappointment, gross exclusion. The New Left didn’t take seriously enough the consequences of living in a system which did indeed, as Zaretsky says, generate “structural inequality” and separate people from each other. More recent radical movements, some of which Jeremy Varon has described so well in his post, have rejected an ideal of oneness, seeking other forms of fundamental, radical resistance that don’t require a single vision.

We agree with Zaretsky that “Women’s Liberation goes way beyond correcting male behavior” and that feminists wanted (and some still want) “a new stage in the evolution of the Left, one centered on themes of personal liberation including sexual liberation.” We agree, too, that Firestone’s The Dialectic of Sex is not an anti-male text but a utopian vision of liberation. There is no way we said that an end to discrimination or the chance to speak more at meetings was what we were after. Meritocracy!? No. Once again, being able to speak was not the far-off goal but the merest gateway. Many women formed their own groups to hear each other for the first time, but, new pleasure that it was, once again speech was merely an entry point for the chance to entertain new, radical ways of thinking and for constructing effective political action. Women speaking has changed the prospects for radical politics.

Capital

Early feminists, and everyone on the Left, did not fully understand how flexible capitalism and its global projects were, changing and shifting into new phases of organization and development. This was a general failure to recognize where we were, though some in all camps began work on analyzing the new situation (Eli Zaretsky, Judith Stacey). Zaretsky seems to think that if the Left had held together and not succumbed to various particularist identity struggles that…well, what? It would have survived and successfully opposed the surge of self-protective changes capitalists made in response to new crises in their system, basic changes beginning just as the New Left was eating its own in internecine battles, and just as the long post-war growth in the U.S. took a major hit, around 1973?

For example, how should feminism have responded? It would have been great if Left feminist analysis of that time had understood the macro changes in the works. Feminism — and feminist analysis — was then and is now polyglot. Sometimes liberal feminism bloomed into much more radical projects (The Radical Future of Liberal Feminism, Zillah Eisenstein [1981]). And sometimes revolutionary, utopian ideas got whittled down by long-term backlash. As everyone on the Left knows, truly alternative visions are often swallowed up in their own ardor or vitiated by the need to compromise. As Zaretsky says himself, the Left was “often” defeated. Defeat is indeed something that should call forth analysis. But the assumption that we (the Left, feminism, etc.) made mistakes that led to our demise gives us all too much power. As Marx said, it’s capitalism that is the wild flood. That we lacked a perfect vision or strategy is only a small part of the story of how we were often swept away.

The New Left assumed that capitalism was the true and only enemy. Rereading those early feminist texts (we recommend this) we find that many feminists thought the Marxist interpretation of women’s subordination was not so much wrong as insufficient. (Ann remembers many meetings on the theme: “Is women’s subordination a secondary or a primary contradiction?”) Left men hoped the inferiorizing of women would wither away with the end of capitalism. Many feminists saw this as unlikely, and at the very least as requiring a long, long wait.

The conceit of the American Left broadly conceived is that we set the terms of American politics across the twentieth century. The historic and analytic narrative has long been framed as the rise and fall of the New Deal and Civil Rights coalitions, with scant attention to people, movements, and institutions that operated beyond that frame. Only in the last two decades has there been any sustained attempt to understand the rise of the Right. Doing so has widened the analytic lens and forced the Left to contend with the limits of its own political agency.

The Left’s decline must be analyzed in dynamic relation with shifts in capital. The move to post-Fordist production, offshoring, globalization, the abandonment of the gold standard, and changing supply chains redrew the boundaries of political possibility in ways that dwarf the separatist skirmishes within the New Left. The work of scholars such as Michael Piore and Charles Sabel, Harley Shaiken, Gary Herrigel, Katherine Stone, Richard Locke, Wolfgang Streeck, Julie Graham/Katherine Gibson, and Robert Meister might be useful here. This is a diverse, even disparate list, but all shed light on the dynamic nature of capital, and all press the point that Left politics needs to be reconsidered in relation to dynamic economic and social formations. Any account that limits its analysis to the dynamics within the Left will be inadequate. Indeed, many have gone further and suggested that the economy has always been a heterogeneous and protean cultural formation, one that requires a rethinking of industrialization as well as of contemporary economic conditions (Timothy Mitchell, Charles Sabel and Jonathan Zeitlin, Michel Callon, Gerald Berk, Adam Sheingate, Bethany Moreton, Julia Ott). The difficult task when mapping broad social change of the sort that captures Zaretsky’s imagination is to see the dynamic interplay between economic and social forces. The defection argument that structures Zaretsky’s account does not do justice to this interplay.

We sympathize with Zaretsky’s dream: “our deepest politics involve overcoming all forms of division and developing a concept of universal emancipation.” This is utopian thinking, an important part of all movements, though, obviously, activists never arrive at this amniotic bliss, this universal freedom. To be sure, identity alone has major flaws as the foundation of structural change. It does indeed lend itself to piece-of-the-existing-pie thinking. But thinking about connection has changed in interesting ways, and utopia is often figured now as agonistic. In feminist theory, the ideal of the One has been edged out by many constructions of “difference” meant to confront the weaknesses of unity. Unity represses struggle and the right to fundamentally disagree. These are among the wild, flowing freedoms radicals want.

Jeremy Varon’s interesting and important response raises three questions: 1) What do we mean by a “Left”? 2) How are we to understand the New Left’s break-up and, specifically, the relation of the women’s movement to that break-up? And 3) How are we to evaluate the Left today? Let me start with the third and work backwards.

I do not believe we can properly speak of a Left today. Jeremy’s view of a plurality of different movements working independently but parallel to one another avoids all the important questions. A Left needs coherence and direction. It needs leaders, organizations, its own counter-public spheres, some sense of the values that distinguish it from the mainstream. It needs a coherent analysis of such basic ruling-class institutions as the Democratic Party, the universities, and the so-called public sphere. Obviously I am not advocating a vanguard party, or a mass party of the Debsian sort. But to speak of the huge diversity of present protest movements that might be termed progressive as a Left stretches the term beyond reason.

A good example both of the potential of a Left today and of its weakness lies in the brilliant but short life of Occupy Wall Street. On the one hand, in inventing the figure of the 1%, OWS gave the faltering Obama campaign the language it needed to achieve its historic reelection victory in 2012. But even as OWS brought the themes of class and inequality back into American politics, it built no institutions, created no journals, devised no distinct line of analysis, had no presence at the Democratic Convention, did not establish a place inside universities, and vanished almost as quickly as it appeared. Varon praises this as an example of the happy-go-lucky, antinomian, “no demands” spirit of the contemporary Left. I regard it as a tragic lost opportunity. As we look ahead to the Hillary Clinton presidential run, we see again that we are not even in a position to discuss how to situate ourselves, yet it is only through a genuine turn to the Left that America can address its present problems.

Turning now to the break-up of the New Left, Varon claims I “blame” the women’s movement for the breakup. This would be absurd. As long as the history of feminism is discussed in terms of a language of blame, recrimination and defense we will never make progress. It is definitely the case that 70s feminism stood for a separate women’s movement, and not for building on the initiatives of the old and new left. It is also the case that there were costs to that decision, such as the loss in continuity between the older socialist lefts and the newer cultural movements emerging in the sixties. Varon ascribes to me a whole set of silly stereotypes, including a supposed idealization of the sixties Left (“Ah, Golden Youth — how sweet it was”), and an insistence on economic inequality, to which Varon reduces the question of capitalism. What is most important is that we need to have an historical understanding of the sixties and seventies, since that is the seedbed of today’s crisis, and an understanding of that crisis is the most important task of any Left today.

In fact, the sixties were a turning point in the tradition of revolution and the Left, which had come down from the eighteenth century. There had been two previous epochs in that tradition: that of the eighteenth century, which was centered on the question of self-government and, especially, the abolition of slavery and that of the nineteenth and early twentieth centuries, which had been centered on anti-capitalism, socialism and communism. A third phase, centered on sexuality, gender and identity, and with a strong antinomian current, started in the sixties and we still don’t have a good enough handle on how to understand it. Varon has a lot of fun mocking my supposedly grandiose ambitions for a conception of “capitalism as a whole.” But what I am calling for, namely an historically based understanding of the present, including its roots in the sixties and seventies, is nothing more than intellectuals in previous centuries had. Most importantly, there is a dominant narrative concerning the sixties out there, namely that the good part came in learning to respect women and Blacks. A Left is necessary to deepen that narrative by showing how respect for women and Blacks involves the whole of modern society, just as the abolitionists showed that slavery pervaded every aspect of society and the socialists and communists showed that class structured every aspect of society.

This brings us finally to the question of what is a Left? My point, which I elaborated on in my book, is that in the United States the very idea of a Left is a recent one, invented in the thirties and forties and continued in the sixties. The politics that has prevailed since then is a politics that claims that we do not need a Left. The true difference between Varon and me is over this question. For Jeremy many separate movements constitute a Left. In my view, neo-liberalism rests on ideas of diversity and difference such as Varon celebrates. We need a Left that relates the different moments of protest to one another, and tries to point them in a common direction, that of equality or justice, which we will not attain simply by extending markets.

Racialized chattel slaves were the capital that made capitalism. Most theories of capitalism set slavery apart as something utterly distinct, because under slavery workers do not labor for a wage. Yet new historical research reveals that for centuries a single economic system encompassed both the plantation and the factory.

At the dawn of the industrial age commentators like Rev. Thomas Malthus could not envision that capital — an asset that is used but not consumed in the production of goods and services — could compound and diversify its forms, increasing productivity and engendering economic growth. Yet, ironically, when Malthus penned his Essay on the Principle of Population in 1798, the economies of Western Europe already had crawled their way out of the so-called “Malthusian trap.” The New World yielded vast quantities of “drug foods” like tobacco, tea, coffee, chocolate, and sugar for world markets. Europeans worked a little bit harder to satiate their hunger for these “drug foods.” The luxury-commodities of the seventeenth century became integrated into the new middle-class rituals like tea-drinking in the eighteenth century. By the nineteenth century, these commodities became a caloric and stimulative necessity for the denizens of the dark satanic mills. The New World yielded food for proletarians and fiber for factories at reasonable (even falling) prices. The “industrious revolution” that began in the sixteenth century set the stage for the Industrial Revolution of the late eighteenth and nineteenth centuries.

But the “demand-side” tells only part of the story. A new form of capital, racialized chattel slaves, proved essential for the industrious revolution — and for the industrial one that followed.

The systematic application of African slaves in staple export crop production began in the sixteenth century, with sugar in Brazil. The African slave trade populated the plantations of the Caribbean, landing on the shores of the Chesapeake at the end of the seventeenth century. African slaves held the legal status of chattel: moveable, alienable property. When owners hold living creatures as chattel, they gain additional property rights: the ownership of the offspring of any chattel, and the ownership of their offspring, and so on and so forth. Chattel becomes self-augmenting capital.

While slavery has existed in human societies since prehistoric times, chattel status had never been applied so thoroughly to human beings as it would be to Africans and African-Americans beginning in the sixteenth century. But this was not done easily, especially in those New World regions where African slaves survived, worked alongside European indentured servants and landless “free” men and women, and bore offspring — as they did in Britain’s mainland colonies in North America.

In the seventeenth century, African slaves and European indentured servants worked together to build what Ira Berlin characterizes as a “society with slaves” along the Chesapeake Bay. These Africans were slaves, but before the end of the seventeenth century they were not chattel, not fully. Planters and overseers did not treat them much differently from their indentured servants. Slaves and servants alike were subject to routine corporeal punishment. Slaves occupied the furthest point along a continuum of unequal and coercive labor relations. Even so, 20% of the Africans brought into the Chesapeake before 1675 became free, and some of those freed even received the head-right — a plot of land — promised to European indentures. Some of those free Africans would command white indentures and own African slaves.

To the British inhabitants of the Chesapeake, Africans looked different. They sounded different. They acted different. But that was true of the Irish, as well. Africans were pagans, but the kind of people who wound up indentured in the Chesapeake weren’t exactly model Christians. European and African laborers worked, fornicated, fought, wept, birthed, ate, died, drank, danced, traded with one another, and with the indigenous population. Neither laws nor customs set them apart.

And this would become a problem.

By the 1670s, large landowners — some local planters, some absentees — began to consolidate plantations. This pushed the head-rights out to the least-productive lands on the frontier. In 1676, poor whites joined forces with those of African descent under the leadership of Nathaniel Bacon. They torched Jamestown, the colony’s capital. It took British troops several years to bring the Chesapeake under control.

Ultimately, planter elites thwarted class conflict by writing laws and by modeling and encouraging social practices that persuaded those with white skin to imagine that tremendous social significance — inherent difference and inferiority — lay underneath black skin. New laws regulated social relations — sex, marriage, sociability, trade, assembly, religion — between the “races” that those very laws, in fact, helped to create.

The law of chattel applied to African and African-descended slaves to the fullest extent on eighteenth century plantations. Under racialized chattel slavery, master-enslavers possessed the right to torture and maim, the right to kill, the right to rape, the right to alienate, and the right to own offspring — specifically, the offspring of the female slave. The exploitation of enslaved women’s reproductive labor became a prerogative that masters shared with other white men. Any offspring resulting from rape increased the master’s stock of capital.

Global commerce in slaves and the commodities they produced gave rise to modern finance, to new industries, and to wage-labor in the eighteenth century. Anchored in London, complex trans-Atlantic networks of trading partnerships, insurers, and banks financed the trade in slaves and slave-produced commodities. Merchant-financiers located in the seaports all around the Atlantic world provided a form of international currency by discounting the bills of exchange generated in the “triangle trade.” These merchant-financiers connected British creditors to colonial planter-debtors. Some of the world’s first financial derivatives — cotton futures contracts — traded on the Cotton Exchange in Liverpool. British industry blossomed. According to Eric Williams, the capital accumulated from the transatlantic trade in slaves and slave-produced commodities financed British sugar refining, rum distillation, metal-working, gun-making, cotton manufacture, transportation infrastructure, and even James Watt’s steam engine.

After the American Revolution, racialized chattel slavery appeared — to some — as inconsistent with the natural rights and liberties of man. Northern states emancipated their few enslaved residents. But more often, racialized chattel slavery served as the negative referent that affirmed the freedom of white males. In Notes on the State of Virginia (1785), Thomas Jefferson — who never freed his enslaved sister-in-law, the mother of his own children — postulated that skin color signaled immutable, inheritable inferiority:

It is not their condition then, but nature, which has produced the distinction… blacks, whether originally a distinct race, or made distinct by time and circumstances, are inferior to the whites in the endowments both of body and mind … This unfortunate difference of colour, and perhaps of faculty, is a powerful obstacle to the emancipation of these people.

Even so, the former plantation colonies of the Upper South stood in a sorry state after Independence, beset by plummeting commodity prices and depleted soils. After the introduction of the cotton gin in 1793, these master-enslavers found a market for their surplus slave-capital.

The expanding cotton frontier needed capital and the Upper South provided it. Racialized chattel slavery proved itself the most efficient way to produce the world’s most important crop. The U.S. produced no cotton for export in 1790. In the antebellum period, the United States supplied most of the world’s most traded commodity, the key raw ingredient of the Industrial Revolution. Thanks to cotton, the United States ranked as the world’s largest economy on the eve of the Civil War.

From about 1790 until the Civil War, slave-traders and enslavers chained 1 million Americans of African descent into coffles and marched or shipped them down to southeast and southwest states and territories. They were sold at auction houses located in every city in the greater Mississippi Valley.

Capital and capitalist constituted one another at auction. At auction, slaves were stripped and assaulted to judge their strength and their capacity to produce more capital or to gratify the sexual appetites of masters. Perceived markers of docility or defiance informed the imaginative, deeply social practice of valuing slave-capital. In this capital market, Walter Johnson reveals, slaves shaped their sale and masters bought their own selves.

After auction, reconstituted coffles traveled ever deeper into the dark heart of the Cotton Kingdom and, after 1836, into the new Republic of Texas. Five times more slaves lived in the United States in 1861 than in 1790, despite the abolition of the transatlantic slave trade in 1808 and despite the high levels of infant mortality in the Cotton Kingdom. Slavery was no dying institution.

By 1820, the slave-labor camps that stretched west from South Carolina to Arkansas and south to the Gulf Coast allowed the United States to achieve dominance in the world market for cotton, the most crucial commodity of the Industrial Revolution. At that date, U.S. cotton was the world’s most widely traded commodity. Without those exports, the national economy as a whole could not acquire the goods and the credit it required from abroad.

And the Industrial Revolution that produced those goods depended absolutely on what Kenneth Pomeranz identifies as the “ghost acres” of the New World: those acres seeded, tended, and harvested by slaves of African descent. Pomeranz estimates that if, in 1830, Great Britain had had to grow for itself, on its own soil, the calories that its workers consumed as sugar, or if it had had to raise enough sheep to replace the cotton it imported from the United States, this would have required no less than an additional 25 million acres of land.

In New England and (mostly) Manchester, wage workers tended the steam-powered machines that spun raw cotton into thread and wove thread into cloth. Once a luxury good, cotton cloth now radically transformed the way human beings across the globe outfitted themselves and their surroundings. Manchester and Lowell discovered an enormous market in the same African-American slaves who grew, tended, and cleaned raw cotton, and in the same workers who operated the machines that spun and wove that cotton into cloth. According to Seth Rockman’s forthcoming book, Plantation Goods and the National Economy of Slavery, the ready-made clothing industry emerged in response to planters’ demand for cheap garments to clothe their slaves.

The explosion in cotton supply did not occur simply because more land came under cultivation. It came from increased productivity, as new work by Ed Baptist illustrates. The Cotton Kings combined the bullwhip with new methods of surveilling, measuring, and accounting for the productivity of the enslaved, radically reorganizing patterns of plantation labor. Planter-enslavers compelled their slave-capital to invent ways to increase their own productivity — think of ambidextrous Patsey in Solomon Northup’s Twelve Years a Slave. At the end of every day, the overseer weighed the pickings of each individual, chalking up the numbers on a slate. Results were compared to each individual’s quota. Shortfalls were “settled” in lashes. Later the master copied those picking totals into his ledger and erased the slate (both slate and ledger mass-produced by burgeoning new industries up North). Then he set new quotas. And the quotas always increased. Between 1800 and 1860, Baptist finds, productivity increases on established plantations matched those of the workers who tended the spinning machines in Manchester over the same period.

Slavery proved crucial in the emergence of American finance. Profits from commerce, finance, and insurance related to cotton and to slaves flowed to merchant-financiers located in New Orleans and mid-Atlantic port cities, including New York City, where a global financial center grew up on Wall Street.

Cotton Kings themselves devised financial innovations that channeled the savings of investors across the nation and Western Europe to the Mississippi Valley. Cotton Kings, slave traders, and cotton merchants demanded vast amounts of credit to fund their ceaseless speculation and expansion. Planter-enslavers held valuable, liquid collateral: 2 million slaves worth $2 billion, a third of the wealth owned by all U.S. citizens, according to Ed Baptist. With the help of firms like Baring Brothers, Brown Brothers, and the Rothschilds, the Cotton Kings sold bonds to capitalize new banks from which they secured loans (pledging their slaves and land as collateral). These bonds were secured by the full faith and credit of the state that chartered the bank. Even as northern states and European empires emancipated their own slaves, investors from these regions shared in the profits of the slave-labor camps in the Cotton Kingdom.

The Cotton Kings did something that neither Freddie, nor Fannie, nor any of the “too big to fail” banks managed to do. They secured an explicit and total government guarantee for their banks, placing taxpayers on the hook for interest and principal.

It all ended in the Panic of 1837, when the bubble in southeastern land and slaves burst. Southern taxpayers refused to pay the debts of the planter-banks. Southern states defaulted on those bonds, hampering the South’s ability to raise money through the securities markets for more than a century. Cotton Kings would become dependent as individuals on financial intermediaries tied to Wall Street, firms like Lehman Brothers (founded in Alabama).

It didn’t take very long for the flow of credit to resume. By mid-century, racialized chattel slavery had built not only a wealthy and powerful South. It had also given rise to an industrializing and diversifying North. In New England, where sharp Yankees once amassed profits by plying the transatlantic slave trade — and continued to profit by transporting slave-produced commodities and insuring the enslaved — new industries rose up alongside the textile mills. High protective tariffs on foreign manufactures made the products of U.S. mills and factories competitive in domestic markets, especially in markets supplying plantations.

After the Erie Canal opened in 1825, the North slowly began to reorient towards timber and coal extraction, grain production, livestock, transportation construction, and the manufacture of a vast array of commodities for all manner of domestic and international markets. Chicago supplanted New Orleans. By the 1850s, industrial and agricultural capitalists above the Mason-Dixon line no longer needed cotton to the same extent that they once did. With the notable exception of Wall Street interests in New York City, Northerners began to resist the political power — and the territorial ambitions — of the Cotton Kings. Sectional animosity set the stage for the Civil War.

But up to that point, slave-capital proved indispensable to the emergence of industrial capitalism and to the ascent of the United States as a global economic power. Indeed, the violent dispossession of racialized chattel slaves from their labor, their bodies, and their families — not the enclosure of the commons identified by Karl Marx — set capitalism in motion and sustained capital accumulation for three centuries.

Adapted from a lecture in the team-taught course “Rethinking Capitalism” at The New School for Social Research.

It seems odd now to recall that up until a few years ago, the concept of capitalism had largely fallen out of favor as a subject of academic inquiry and critique. Most scholars in the humanities and social sciences regarded the term as too broad, too vague, too encumbered by associations with either Marxism or laissez-faire. Following the collapse of the Soviet Union, capitalism could be taken for granted, it seemed. No person or nation could escape the discipline of efficient, spontaneous, self-regulating, globalizing markets.

Economists cut economies loose from society, institutions, culture, and history. They repositioned their discipline upon models that assumed that rational, utility-maximizing individual parts represented and explained the behavior of the economy-as-a-whole. Many social scientists — especially in political science — embraced these rational-actor models. Others joined historians and humanities scholars in the “cultural turn.” They struck out for new worlds of culture, those ever-shifting systems of language and meaning, symbols and signifiers, identity and consciousness that produce and reproduce power. In doing so, however, these academics largely abandoned questions of class and ceded the terrain of economics.

The New School for Social Research swam against these intellectual currents, unafraid of large structures, long processes, broad comparisons, and big questions. Led by colleagues like Robert Heilbroner, Eric Hobsbawm, David Gordon, Charles Tilly, and Louise Tilly, NSSR faculty presumed that capitalism must be explained, not assumed. Capitalism — in its myriad manifestations across time and space — remained a central concern, both as a broad analytic concept and as a subject of nuanced social, political, and historical inquiry. The New School for Social Research continued to expose capitalisms to critical and ethical scrutiny, embracing the new methods of cultural analysis to interrogate their power.

Drawing upon these NSSR intellectual traditions, the Robert L. Heilbroner Center for Capitalism Studies at the New School for Social Research brings together students and faculty from across The New School for interdisciplinary conversations around theoretical approaches to and analytic methods for the study of capitalisms. Affiliated faculty and students work in diverse and innovative fields including the history of capitalism, economic sociology, international political economy, heterodox economics, critical theory, economic anthropology, and science and technology studies.

Capitalism is a social process. Institutions, history, and cultural context shape the specific form that capitalism assumes in any given place at any particular moment. The Center for Capitalism Studies identifies power relations — whether organized by state policy and laws, structured by social norms and institutions, articulated in ideology, or embedded within racial, gender and class relations — as critical determinants of economic outcomes. We recognize the capacity of economic theories — such as those concerning the rational actor, efficient markets, and the primacy of shareholders — to operate as political ideologies and to shape the reality they purport to describe. We apprehend capitalism as both a system fundamentally grounded in violence and the most effective engine for bettering the material condition of mankind ever known.

The Robert L. Heilbroner Center for Capitalism Studies seeks to develop a common language with which capitalism can be understood, analyzed, interpreted, and engaged — with rigor, with precision, and in a manner that is accessible to the broadest possible audience. Our graduate and undergraduate courses examine the basic logic of capitalism (as conceived by a range of theorists), its various historically contingent forms, and its ability to structure our political possibilities and creative endeavors. Our program supports diverse inquiries into the major structuring force in contemporary society, posing questions both timeless and pressing:

  • When and how does capitalism emerge and develop?
  • What is the relation between capitalism and democracy?
  • How is the commodification of human relationships best historicized, analyzed, and interpreted?
  • Can a capitalist society preserve cultural, religious, and linguistic diversity and sustain inclusivity in the face of globalized systems of commerce, labor, and finance?
  • When does poverty evidence capitalist exploitation and when does it indicate an absence of capitalist development or inclusion?
  • What assumptions and norms undergird the metrics and indicators used to measure economic performance and well-being at the level of the individual, the firm, the nation, and the globe?
  • What is economic value, how is it created, how is it recognized, and for whom does it exist?
  • Can capitalism rest upon any ethical and moral foundation apart from individual self-interest?
  • What impact does the distribution of income, wealth and indebtedness have on macroeconomic performance and on the condition of human capital in a given capitalist society?
  • What modes of finance best support innovation and the equitable distribution of its benefits?
  • What might be the alternatives to capitalism?

The 2007-8 financial crisis — the precariousness and inequality it revealed, the stagnation and disillusionment it wrought — revived academic interest in capitalism. The Robert L. Heilbroner Center for Capitalism Studies aims to develop theoretical and analytic tools that can help us to envision and to instantiate different and better capitalisms — local and global — for the future. A more generous, egalitarian, patient, deliberate, and accountable form of capitalism must begin with incisive and interdisciplinary social inquiry, without which policy change cannot be successful.

In our opinion, the document drafted by Julia Ott and Will Milberg for the new Robert L. Heilbroner Center for Capitalism Studies should be the beginning of a debate among NSSR faculty about the Center’s mission rather than a final manifesto. There are many claims in the document with which we wholeheartedly agree: the pressing necessity to return to discussing and analyzing large structures, long processes, and big questions; the idea that capitalism must be a central object of study and concern; the interpretation of capitalism as a social process; the identification of various power relations as critical determinants of economic outcomes; and the acknowledgment that economic theories operate as political ideologies. Further, we agree with Ott and Milberg that capitalism “should not be assumed.” However, we think that it should not be only “explained,” as the present document suggests, but also, by the same token, criticized. Critique, indeed, is a constitutive part of the explanation of social phenomena and processes, and explaining capitalism without criticizing it does amount to assuming it.

It seems to us that the document’s assumption of capitalism becomes clear, for example, in its concluding normative political proposal. It reads:

The Robert L. Heilbroner Center for Capitalism Studies aims to develop theoretical and analytic tools that can help us to envision and to instantiate different and better capitalisms — local and global — for the future. A more generous, egalitarian, patient, deliberate, and accountable form of capitalism must begin with incisive and interdisciplinary social inquiry, without which policy change cannot be successful.

While we do not object in principle to reforms that could improve the living conditions of millions, we cannot but wonder whether envisioning a more humane and generous capitalism as the only logical alternative to the current situation should be assumed as the Center’s mission. On the contrary, we think that a wider range of critical stances should be incorporated into the Center’s mission statement, allowing room, among other things, for stances that positively reject the capitalist assumption. Moreover, the New School for Social Research should not, in our opinion, run the risk of turning the Center into another think tank for policy building within the capitalist horizon.

In spite of the vast array of literature on alternative economic models, the research questions enumerated in the document do not invite comprehensive study of the systemic alternatives to capitalism. None of the questions listed addresses the closely related ecological crisis or the pressing contradiction between capitalist accumulation and the planet’s ecological preservation. None of the questions listed mentions the connection between capitalism and war, colonial and neo-colonial expansion, or the ongoing expropriation of people from their land and from the commons. In other words, we don’t think that the document’s abstract recognition that capitalism is “violent” does enough to take a critical stance towards its dangers. Finally, none of the questions on the list mentions racialization, forced migration, and gender inequality. We believe that explaining the relations between capitalism and such forms of expropriation, displacement, ecological destruction, war, and inequality is necessary. Are these relations constitutive of capitalism or merely contingent? In other words, is there such a thing as what the document describes as “humane” capitalism, or is “generous, egalitarian” capitalism a contradiction in terms? We believe that the task of the New School’s Center for Capitalism Studies should be explicitly addressing these as open questions.

As already said, the critique of capitalism should be constitutive of its explanation, starting from the presupposition that capitalism is not only a social process, but also a social relation. In other words, capitalism is not a machine: it is the product of our own activity and practices, organized through specific social relations. To be sure, we are confident that such views as expressed above will be welcome as part of the repertoire of the Heilbroner Center. This is only continuous with the democratic and critical culture that has characterized the New School from its very beginnings. But this is not enough. If the Center for Capitalism Studies aspires to promote a pluralistic path of inquiry, to explain and criticize capitalism without assuming it, radical critiques of capitalism should be included as part of the Center’s mission — not just tolerated as an exception to it. We hope that a debate on different methods of explanation and critique will open up, and that a final manifesto will answer these concerns.

Is ‘capitalism’ an adequate term to describe the currently dominant mode of production? I think there would be wide consensus, at least at the New School for Social Research, that it is. But is ‘capitalism’ an adequate description for the leading edge of production? I get the sense that, despite their differences, those who want a social science of capitalism (Ott and Milberg), and those who want a critique of it that points toward anti-capitalist alternatives (Boehm and Arruzza), might actually agree that, if the currently hegemonic forces continue to prevail, what is in store for the planet is more capitalism. My question to both would be: what would count as signs of a possible new mode of production?

It seems to me that without some notional sense both of origins and of potential end-points, the concept of ‘capitalism’ risks becoming a bit too totalizing and ahistorical. Whether in the mode of critique or the mode of analysis, research will be drawn to what appears continuous in the object of study, and fail to notice things that point towards transformations into something else.

‘Capitalism’ does indeed seem to be back as an analytic object. In my own little world, the ‘cultural turn’ treated too much interest in the mode of production as vulgar Marxism. So too did what one might call the ‘political turn’. In this year when both Stuart Hall and Ernesto Laclau have passed away, we might want to not forget what was valuable in the cultural and political turns, of which they are avatars. But at the same time, it is worth pointing out that both took the underlying economic form of capitalism to be more or less unchanged. What was interesting and new was at the cultural or political ‘level’ of the social formation.

The return of capitalism as an object comes at a time when both of the possible historical ‘grand narratives’ about its future have receded. It’s hard to sustain much faith in bureaucratic socialist planned economies as any kind of alternative. Khrushchev did not bury us with ‘red plenty.’ Also in retreat are various ‘third way’ narratives about how capitalism has passed over into some more sophisticated world where class struggle and ideology are dead. And so, strikingly, the narrative imaginary that is most widely shared is that capitalism just goes on and on, getting worse or better, depending on one’s point of view.

One sign of strain in the concept is the frequent addition of modifiers. Periodization has become internal to the concept rather than external. And so we have: neoliberal capitalism, postfordist capitalism, communicative capitalism, biopolitical capitalism, cognitive capitalism, semio-capitalism, not to mention the persistence of the charmingly named ‘late’ capitalism. But is there not a certain failure of imagination in merely adding a qualifier to return strange new observable features of the social formation to the familiar ground of ‘capitalism’? This seems to me to run the risk not of explaining but of explaining away the object of both critique and analysis.

There’s other work that a concept can do: not to explain or explain away, but to defamiliarize, to make strange the seemingly self-evident. It’s in that spirit that, as a thought experiment, I have been trying to find signs that there might be some quite other mode of production nascent in the old capitalist one. What if what was emerging was not more capitalism, not even really a kind of capitalism, but something worse? Something with all of the worst features of capitalism: exploitation, inequality, instability, class polarization, ecological crisis, but also some things not well captured by the analytics or critique of ‘capitalism.’

It might help to define the concept a bit more at this point. By ‘capitalism’ I mean one thing in particular: a social formation with a ruling class which dominates it by the ownership of the means of production. This is of course not the only way to define capitalism, but it is the sense in which I mean it here. My question, then, is: could there be an emergent ruling class that does not dominate the social formation through the ownership of the means of production, but through some other means?

If one looks, for example, at Fortune 500 companies, it’s striking how many don’t really own much by way of the means of production in any traditional sense. That icon and relic of Fordism – the Ford Motor Company – still owns the factories that make its products. Yet as became evident in 2008, Detroit was actually making a big slice of its profits not from making cars but from financial services.

Apple does not really make much of its own products. Neither does Microsoft. A lot of that is contracted out. These are companies that control the production cycle by controlling brands and intellectual property. The drug companies do make things, but manufacturing accounts for a tiny fraction of their value.

What matters in most of these cases is owning the brand, the patents and the copyrights, together with those vectors along which information is gathered, analyzed and made effective. The latter is the path pioneered by Google, whose chief asset is the capacity to gather and analyze data.

Take the biggest of the Fortune 500 companies, Walmart: does it really dominate its sector because it owns more box stores? Or is its dominance more a matter of superior logistics? Walmart figured out how to manage distribution on a hitherto unimaginable scale by managing data. Amazon cherry-picked certain kinds of product for which a logistics operation could work without retail stores at all.

Perhaps what is going on is a kind of power that has less to do with owning the means of production, and thereby controlling the value cycle, as in capitalism. Perhaps it is more about owning the means of mediation, thereby controlling the means of production and hence the value cycle. The actual production can be outsourced, and manufacturing firms will have to compete for the privilege of making products with someone else’s intellectual property embedded in them, sold under someone else’s brand.

Certain key parts of production may well be retained, or even acquired. Google is scooping up firms that actually make things in the fields of robotics and the ‘internet of things.’ But the vast extension of so-called intellectual property in the last half century, combined with ever more efficient ways of communicating and managing data, means that a tremendous amount of power can now reside in simply owning the means of mediation.

Why would this be something other than just more capitalism? Let’s have a closer look at what happened to the means of production. Marx was writing in that great era where steam power transformed the forces of production. The worker no longer controlled the tool; the machine controlled the worker. Workers became interchangeable. The physical actions of the worker could be captured and quantified, and as Marx so grandly showed, the worker only received a fraction of the value of their labor.

Is that still how the forces of production work? For a lot of workers, yes. Other power sources have replaced steam, but the worker is still controlled by the machine, and the worker’s labor is controlled and quantified by it. Estimates of the number of industrial workers in China vary, but it is at least 80 million. If anything, this is the great era of the industrial forces of production.

Industrialization shifted power away from a landowning class who collected rents on agricultural land from farmers to a class that owned these new means of production and collected profits by exploiting labor. The question would be whether the locus of power might be shifting once again.

What is interesting about more contemporary developments in the forces of production is that they are less about capturing value from physical labor and more about capturing data from any activity whatsoever. It is no longer the case that the only ‘efficient’ signal is the price signal. What if there was a mode of production based not on capturing surplus value but on capturing surplus information?

This might be as much the case with Walmart as with Google. Sure, Walmart captures surplus value from its workers, but it also captures surplus information from its workers, its customers, its suppliers, even from the movement of its goods as well. This information is ‘surplus’ in the sense that for every bit of data given away a whole raft of data and metadata remains proprietary.

Google merely perfects this. All it is really giving away is access to some particular information – which Google did not itself have to produce. What it gets is the aggregate patterns that it can extract from all of that data. And of course these days it has not only all that search data, but basic telemetry on the movements of any human who carries a cellphone running its Android OS.

None of this, one should hasten to add, is ‘immaterial’. Can we just admit that this was a terrible (non)concept? Just as it took an incredible amount of infrastructure to seize power from the old landlord class, so too seizing power from a capitalist class and vesting it elsewhere takes a powerful infrastructure, one no longer about making and distributing things but about controlling that making and distributing.

In short, considered in a really vulgar way, in terms of the forces of production, maybe there’s something new going on. Some of the relations of production look familiar. This is still an economy that appears to have markets and prices, firms and profits and so on. But perhaps power is shifting away from owning the means of production, which merely extract surplus value from labor, toward owning the means of mediation, by which a surplus can be extracted from any activity at all.

So far discussion of the Heilbroner center text has oscillated around the old grand narratives. But perhaps there are some minor stories out there that point to different emerging realities.

 

It is with great sadness that we learn of the death of Ernesto Laclau, the outstanding Argentinean political philosopher, at the age of 78. Ernesto had a heart attack in Seville, where he was giving a lecture. He was the author of landmark studies of Marxist theory and of populism as a political category and social movement. In highly original essays and books he demonstrated the far-reaching implications of the thought of Antonio Gramsci, probed the assumptions of Marxism, and illuminated the modern history of Latin America, rejecting simplistic schemas linked to notions of dependency and populism.

After studying in Buenos Aires Ernesto came to Britain in the early 1970s, where he lectured at the University of Essex and later founded the Centre for Theoretical Studies. The Centre ran a very successful postgraduate programme, attracting students from around the world. In the 1970s Ernesto made his mark with his critique of the so-called “dependency school” of Latin American political economists such as Fernando Henrique Cardoso.

In 1985 Ernesto published the best-selling Hegemony and Socialist Strategy, a book co-authored with his Belgian wife Chantal Mouffe whom he met at Essex.

Ernesto and Chantal used the work of Antonio Gramsci to reject what they saw as the reductionism and teleology of much Marxist theory. Though sometimes calling himself a “post-Marxist” and an advocate of “radical democracy” Ernesto insisted that he remained a radical anti-imperialist and anti-capitalist. His criticisms of Marx and Marxism were made in a constructive spirit, and without a hint of rancour.

As a young man Ernesto had been attracted to Argentinian Trotskyism and its rejection of Stalinism. He was also preoccupied with understanding the Peronist popular movement, with its strong trade union following. Ernesto published a pioneering essay on populism in the 1970s, and followed it up with On Populist Reason in 2005, a work which sought to explain the democratic and anti-imperialist impulses sweeping Latin America in the wake of the electoral victories of Hugo Chavez and other leftist standard-bearers in a dozen countries.

In Ernesto’s view radical democracy did not spurn electoral and representative politics but defined itself by drawing the mass of citizens into political life and ensuring that the national wealth was dedicated to real improvements in the living conditions of all. Ernesto was recognised as a keynote thinker with invitations to address the Argentine national assembly and to act as a roving ambassador for his native country.

Chantal also brought to the work they published together her own experience with social movements, especially the women’s movement.

Ernesto was recognised as a leading thinker in Latin America but also as an intellectual star in the academic world, co-authoring Contingency, Hegemony, Universality with Slavoj Žižek and Judith Butler in 2000. He gave courses at a string of leading universities in Europe and the Americas, including Northwestern and the New School for Social Research. Ernesto became emeritus professor at Essex in 2003, but the Centre he established continues its work.

In March this year, Ernesto was invited to give a lecture at the Argentine embassy in London to mark the publication of his latest book, The Rhetorical Foundations of Society. At the dinner which followed Ernesto was in excellent form leading the company in the singing of revolutionary songs, with special emphasis on those associated with the Italian partisan movement. It is a memory of political good cheer which all who were present will cherish.

A version of this article was first published in VersoBooks.com/blogs.

There is a widespread sense today that capitalism is in critical condition, and more so than ever since the end of the Second World War. Looking back, the crash of 2008 was only the latest in a long sequence of political and economic disorders that began with the end of postwar prosperity in the mid-1970s. Successive crises turned out to be ever more severe, spreading more widely and rapidly through an increasingly interconnected global economy. Global inflation in the 1970s was followed by rising public debt in the 1980s, and fiscal consolidation in the 1990s was accompanied by a steep increase in private sector indebtedness (Streeck 2011; 2013a). For four decades now, disequilibrium has more or less been the normal condition of OECD capitalism, both at the national and the global levels. In fact, with time, the crises of postwar capitalism have become so pervasive that they are increasingly perceived as more than just economic in nature, in a rediscovery of the older notion of a capitalist society: of capitalism as a social order and way of life, vitally dependent on uninterrupted progress of private capital accumulation.

Crisis symptoms are many, among them three long-term trends in the trajectories of rich, “advanced,” highly industrialized — or better, increasingly deindustrialized — capitalist economies. The first is a persistent decline in the rate of economic growth, recently aggravated by the events of 2008 (Figure I). The second, associated with the first, is an equally persistent increase in overall indebtedness among leading capitalist economies, where governments, private households, and non-financial as well as financial firms have, over four decades, piled up financial obligations with no end in sight (for the U.S., see Figure II). And third, economic inequality, of both income and wealth, has been on the rise for several decades now (Figure III), alongside rising debt and declining growth.

Steady growth, sound money and a modicum of social equity, spreading some of the benefits of capitalism to those without capital, were long considered prerequisites for a capitalist political economy commanding the legitimacy it needs. What must be most alarming from this perspective is that the three critical trends I have mentioned may be mutually reinforcing. There is mounting evidence that increasing inequality may be one of the causes of declining growth, as inequality both impedes improvements in productivity and weakens demand. Low growth, in turn, reinforces inequality (OECD 2013) by intensifying distributional conflict, making concessions to the poor more costly for the rich, and making the rich insist more than before on strict observance of the St. Matthew principle governing free markets: “For unto every one that hath shall be given, and he shall have abundance: but from him that hath not shall be taken even that which he hath” (Matthew 25:29, King James Version).[1] Also, rising debt, while failing to halt the decline of economic growth, adds to inequality through the structural changes associated with financialization. Financialization, in turn, helped compensate wage earners and consumers for the growing income inequality caused by stagnant wages and cutbacks in public services (Crouch 2009; Streeck 2011; 2013a).

Can what appears to be a vicious circle of downward trends continue forever? Are there counterforces that might break it up — and what will happen if they fail to materialize, as they have for almost four decades now? Historians inform us that crises are nothing new under capitalism, and may in fact be required for its longer-term good health. But what they are talking about are cyclical movements or random shocks after which capitalist economies can move into a new equilibrium, at least temporarily. What we are seeing today as we look back, however, is a continuous process of gradual decay, protracted but apparently all the more inexorable. Recovery from the occasional Reinigungskrise, or cleansing crisis, is one thing; breaking a concatenation of intertwined long-term trends quite another. Assuming that ever-lower growth, ever-higher inequality and ever-rising debt are not indefinitely sustainable and may together issue in a crisis that is systemic in nature — one whose character we have difficulty even imagining — can we see signs of an impending reversal?

Here the news is not good. Five years have passed since 2008, the culmination so far of the postwar crisis sequence. When memory of the abyss was still fresh, demands and blueprints for “reform” to protect the world from a replay abounded. International conferences and summit meetings of all kinds chased one another, but half a decade later hardly anything has come out of them (Mayntz 2012; Admati and Hellwig 2013). In the meantime, the financial industry, where the disaster originated, has made a full recovery: profits, dividends, salaries and bonuses are back where they were, while re-regulation got stuck in international negotiations and domestic lobbying. Governments, first and foremost that of the United States, have remained firmly in the grip of the money-making industries. These, in turn, are being generously provided with cheap cash, created out of thin air on their behalf by their friends in the central banks — prominent among them the former Goldman Sachs man Mario Draghi at the helm of the European Central Bank — money that they sit on or invest in government debt. Half a decade after Lehman, growth remains anemic, as do labor markets; unprecedented liquidity fails to jump-start the economy; and inequality reaches ever more astonishing heights as the little growth is appropriated by the top one percent of income earners — and its lion’s share by a small fraction of them (Saez 2012; Alvaredo et al. 2013).

Little reason indeed to be optimistic. For some time now, OECD capitalism has been kept going by liberal injections of fiat money, under a policy of monetary expansion whose architects know better than anyone else that it cannot continue forever. In fact, several attempts were made in 2013 to get off the tiger, in Japan as well as in the U.S., but when stock prices plunged in response, “tapering,” as it came to be called, was postponed for the time being. In mid-June, the Bank for International Settlements (BIS) in Basel, the mother of all central banks, declared that “quantitative easing” had to come to an end. In its Annual Report, the Bank pointed out that central banks had, in reaction to the crisis and the slow recovery, expanded their balance sheets, “which are now collectively at roughly three times their pre-crisis level — and rising” (Bank for International Settlements 2013, 5). While this had been necessary to “prevent financial collapse,” now the goal had to be “to return still-sluggish economies to strong and sustainable growth.” This, however, was beyond the capacities of central banks, which

… cannot enact the structural economic and financial reforms needed to return economies to the real growth paths authorities and their publics both want and expect. What central bank accommodation has done during the recovery is to borrow time… But the time has not been well used, as continued low interest rates and unconventional policies have made it easy for the private sector to postpone deleveraging, easy for the government to finance deficits, and easy for the authorities to delay needed reforms in the real economy and in the financial system. After all, cheap money makes it easier to borrow than to save, easier to spend than to tax, easier to remain the same than to change (ibid.).

Apparently this view was shared even by the Federal Reserve under Bernanke. By the end of the summer, it once more seemed to be signaling that the time of easy money was coming to an end. In September, however, the expected return to higher interest rates was again put off. The reason given was that “the economy” looked less “strong” than hoped for. Immediately, global stock prices went up. The real reason why a return to more conventional monetary policies is so difficult is, of course, one that an international institution like the BIS is freer to spell out than a — still — more politically exposed national central bank. It is that, as things stand, the only alternative to sustaining capitalism by means of an unlimited money supply is trying to revive it through neoliberal economic reform, as summarily characterized in the second subtitle of the BIS’s Annual Report: “Enhancing flexibility: a key to growth” (Bank for International Settlements 2013, 6) — the bitter medicine for the many, combined with higher incentives for the few.[2]

A problem with democracy

It is here at the latest that discussion of the crisis and the future of modern capitalism must turn to democratic politics. Capitalism and democracy had long been considered adversaries, until the postwar settlement seemed to have accomplished their reconciliation. Well into the twentieth century, owners of capital had been afraid of democratic majorities abolishing private property, while workers and their organizations expected capitalists to finance a return to authoritarian rule in defense of their privileges. Only in the Cold War world after 1945 did capitalism and democracy seem to be birds of a feather (Lipset 1963 [1960]), as economic progress made it possible for working-class majorities to accept a free-market, private-property regime, in turn making it appear that democratic freedom was inseparable from and indeed depended on the freedom of markets and profit-making. Today, however, doubts about the compatibility of a capitalist economy with a democratic polity have powerfully returned. Among ordinary people, there is now a pervasive sense that politics can no longer make a difference in their lives, as reflected in common perceptions of deadlock, incompetence and corruption among what seems an increasingly self-contained and self-serving political class united in their claim that “there is no alternative” to them and their policies. One result is declining electoral turnout together with high voter volatility, growing electoral fragmentation due to the rise of “populist” protest parties, and pervasive government instability.[3]

The legitimacy of postwar democracy was based on the premise that states had a capacity to intervene in markets and correct their outcomes in the interest of citizens. Decades of rising inequality have cast doubt on this, as has the impotence of governments before, during and after the crisis of 2008. In response to their growing irrelevance in a global market economy, governments and political parties in OECD democracies more or less happily looked on as the “democratic class struggle” (Korpi 1983) turned into post-democratic politainment (Crouch 2004). In the meantime, the transformation of the capitalist political economy from postwar Keynesianism to neoliberal Hayekianism progressed smoothly (Streeck 2013a): from a political formula for economic growth through redistribution from the top to the bottom, to one expecting growth from redistribution from the bottom to the top. Egalitarian democracy, regarded under Keynesianism as economically productive, is considered a drag on efficiency under contemporary Hayekianism, where growth is to derive from insulating markets, and the cumulative advantage they entail, against redistributive political distortion.

A central topic of current anti-democratic rhetoric is the fiscal crisis of the contemporary state, as reflected in the astonishing increase in public debt since the 1970s (Figure IV). Growing public indebtedness is attributed to electoral majorities living beyond their means by exploiting their societies’ “common pool,” and to opportunistic politicians buying the support of myopic voters with money that they do not have.[4] However, that the fiscal crisis was unlikely to have been caused by an excess of redistributive democracy can be seen from the fact that the buildup of government debt coincided with a decline in electoral participation, especially at the lower end of the income distribution, with shrinking unionization, the disappearance of strikes, welfare state cutbacks and, as noted, exploding income inequality (Streeck 2013b). What the deterioration of public finances was related to was declining overall levels of taxation (Figure V) and tax systems becoming less progressive, as a result of “reforms” of top income and corporate tax rates (Figure VI). In fact, by replacing tax revenue with debt, governments contributed further to inequality as they offered secure investment opportunities to those whose money they would or could no longer confiscate and had to borrow instead. Unlike taxpayers, buyers of government bonds continue to own what they pay to the state and in fact collect interest on it, typically paid out of increasingly less progressive taxation; they also can pass it on to their children. Moreover, rising public debt can be and is being utilized politically to argue for cutbacks in state spending and privatization of public services, further restraining redistributive democratic intervention in the capitalist economy.

Institutional protection of the capitalist market economy from democratic interference has advanced greatly in recent decades.[5] Trade unions are on the decline everywhere and have in many countries been rooted out, above all in the United States. Economic policy has widely been turned over to independent — i.e., democratically unaccountable — central banks concerned above all with the health and goodwill of financial markets.[6] In Europe, national economic policies, including wage-setting and budget-making, are increasingly governed by supranational agencies like the European Commission and the European Central Bank that are beyond the reach of popular democracy. Effectively this de-democratizes European capitalism without, of course, de-politicizing it.

Still, doubts continue among the profit-dependent classes as to whether democracy will, even in its emasculated, post-democratic version, allow for the neoliberal “structural reforms” necessary for their regime to recover. Like ordinary citizens, although for the opposite reasons, elites are losing faith in democratic government and its suitability for rebuilding societies in line with market pressures for unimpeded technocratic decision-making and unlimited adaptability of social structures and ways of life. Public Choice’s disparaging view of democratic politics as corruption of market justice in the service of opportunistic politicians and their clientele has become common sense among elite publics, as has the belief that market capitalism cleansed of democratic politics will not only be more efficient but also virtuous and responsible.[7] Countries like China are complimented for their authoritarian political systems being so much better equipped than majoritarian democracy with its egalitarian bent to deal with what are claimed to be the challenges of “globalization” (Bell 2006; Berggruen and Gardels 2012) — a rhetoric that is beginning conspicuously to resemble the celebration among capitalist elites during the interwar years of German and Italian fascism and even Stalinist communism for their apparently superior economic governance.

For the time being, the neoliberal mainstream’s political utopia is a “market-conforming democracy,”[8] devoid of market-correcting powers and supportive of “incentive-compatible” redistribution from the bottom to the top. Although that project is already far advanced in both Western Europe and the United States, its promoters continue to worry that the political institutions inherited from the postwar compromise may at some point be repossessed by popular majorities, in a last-minute effort to block progress toward a neoliberal solution of the crisis. Elite pressures for economic neutralization of egalitarian democracy therefore continue unabated, in Europe in the form of a continuing relocation of political-economic decision-making to supranational institutions such as the European Central Bank and summit meetings of government leaders.

Based on a lecture presented in the course “Rethinking Capitalism.”

NOTES

[1] The “Matthew effect” was discovered as a social mechanism by Robert K. Merton (1968). The
technical term is cumulative advantage. On cumulative advantage in free markets see also Piketty (2014).

[2] And even that may be less than promising in countries like the United States and Britain, where it is hard to see what neoliberal “reforms” could
still be implemented.

[3] See several chapters in Schäfer and Streeck (2013).

[4] This is the Public Choice view of the fiscal crisis, as powerfully put forward by James Buchanan and his school (see for example Buchanan and Tullock 1962).

[5] A practical demonstration that capitalism can do better — i.e., be more capitalist — without democracy was initiated in 1973 by Henry Kissinger and
the CIA, in cooperation with the local financial elite, when they removed the elected socialist President of Chile from office in order to clear the
way for a successful field experiment in Chicago economics. The coup inaugurated the neoliberal revolutions during the subsequent era of
“globalization.”

[6] One often forgets that most central banks, including the Bank for International Settlements, have long been or still are in part under private
ownership. For example, the Bank of England and the Bank of France were nationalized only after 1945. Central bank “independence,” as introduced by
many countries in the 1990s, may be seen as a form of re-privatization under public ownership.

[7] Of course, as Colin Crouch (2011) has pointed out, neoliberalism in its really existing form is
a politically deeply entrenched oligarchy of giant multinational firms.

[8] The expression is from Angela Merkel.

“In a series of posts, Jeff Goldfarb and I [Iddo Tavory] have been sketching an outline for the study of the social condition — the predictable dilemmas that haunt social life. We argue that one of the core intellectual missions of sociology is to account for the ways in which social patterns set up these dilemmas that actors experience as crucial for their lives and how they define themselves.”

I have been following this inquiry into the social condition for a while, and I suggest that it will further our understanding of this condition if we take seriously the daily dramas of consumption, both as comedy and tragedy. “Say Yes to the Dress” is one of these social dramas, based on the very premise that buying a wedding dress really matters, that people do not make their consumption decisions lightly.

“Say Yes to the Dress” portrays one of the existential dilemmas women in the age of consumer society face. It is an emotional roller coaster of wonder, judgment, deliberation, budgeting, frustration and decision. “Say Yes to the Dress” is a reality-TV show on TLC. For some, the show might look like a scene straight out of Theodor Adorno’s nightmare of “mass deception,” the display of the human tragedy in a world of commodities. But “Say Yes to the Dress” also presents, in 60-minute segments, why the critique of consumer culture misses the point: Commodities are more than the meaningless, exchangeable representations critical theory makes them out to be. Instead, commodities mean everything to people. We cry, laugh, scream, or fight over them and we triumph or fail through them.

Of course “Say Yes to the Dress” is an edited and selective social drama, following a similar script each episode. The bride comes into the wedding dress shop with her entourage (family and friends). The consultant clarifies the parameters of the desired dress, first with the bride alone: What does she want, what is her budget? Then, the two pick some options in a dressing room. The bride dresses, and the trial begins. She has to face her family and friends, who judge her dream dress, taking it apart. As this process goes on, personal choice becomes collective negotiation, a struggle between the self and its public perception. Tears of frustration and joy mix, culminating in the final decision for the “perfect” wedding dress, when the bride-to-be says “Yes!” to the dress.

How we read the social drama that unfolds in “Say Yes to the Dress” makes all the difference. Should we shrug at the superficiality of the act? Shake our heads over all the energy, money and emotions spent on the selection of a dress that is worn only for one day, and that probably has thousands of look-alikes around the country? Or should we suspend our judgment and really try to understand what is going on here? Why do people fight for hours over dress length, color, beading and décolleté? One answer might be that the struggle over the perfect wedding dress is as much a struggle over what kind of bride one will be, what family one will have, what life one will live. “Say Yes to the Dress,” from this perspective, captures one of these crucial moments in life, when past and future meet in the events and choices of the present.

That “Say Yes to the Dress” airs on TLC, The Learning Channel, seems like a practical joke, but it is more than that. It shows the social conditioning that goes into our daily, and not so daily, consumption dramas. Of course there is socialization in the very act of “really wanting” the “perfect” wedding dress. But this should not take away from the act itself by rendering it meaningless. Instead, it should sensitize us to what goes on in these personal and social dramas, to how they connect cultural desires, economic valuations and the weaving of the social fabric in one act of emotional decision-making. To look for the social condition in consumption means taking people’s decisions seriously, because they do.

The problem for us consumers is not that consumption is fake. Instead consumption, like the buying of a wedding dress, is, for lack of a better word, “real.” In today’s hyper-commodified world, the social condition of society becomes one of ever-expanding choices, symbolic meanings, and experiences. This produces personal satisfaction, but also anxiety, because choosing wrong has serious consequences for our “selves” and our social being. Choice and anxiety become simultaneously problem and principle of consumption, a social condition worthy of critical, but also respectful attention.

A version of this article was first published in DeliberatelyConsidered.com.

I wrote a book about Starbucks a few years ago, so my email started to buzz with Google alerts when the company announced that it would help to provide free education for its employees. The New York Times, the Huffington Post, and Business Week, among others, jumped on the story. A day or so after the announcement, Starbucks CEO Howard Schultz appeared on the Daily Show with Jon Stewart, winning mad praise from the host for having “venti balls” to make such a bold move.

As Starbucks officials explained it, the deal offered to reimburse employees for a portion, not all, of their tuition, but only for online classes hosted by Arizona State University’s Web server. Starbucks publicists talked about the company’s “unique” and forward-looking mission to build a people-based corporation that valued individuals and communities as much as profit. They identified their new benefit as an investment in the future, for corporate America and for the nation. That’s when it hit me — again — that Starbucks wasn’t looking ahead; it was looking backward, mimicking an older model of labor-management relations.

Labor historians like myself are very familiar with corporate efforts to appear as benevolent employers. Each year between World War I and the Great Depression, for example, a denim manufacturer in Greensboro, North Carolina owned by the Jewish Cone family gave their workers Christmas hams. Throughout the year, they sponsored company baseball teams and marching bands, health clinics and adult education classes. They paid their employees a little more than their competitors across town and built sturdier houses than the mill owners down the road. From the Cones’ day down to our own, labor historians have called these kinds of actions welfare capitalism.

Employers ranging from auto titan Henry Ford to the humbler Cones invested in the welfare of their employees to encourage their workers to identify with the firm, and not with each other. Even more, they wanted the men and women on the line to stay on the job. Training new employees was a costly expense that cut into profits, so they sought to discourage employees from seeking another job.

Welfare capitalism acted, then as now, as the carrot that accompanied the stick of hard-nosed anti-unionism. Companies spent heavily so that workers would feel grateful and appreciative — and deferential — towards management. Welfare capitalism’s goal has always been to create a stable and tractable labor force.

This same goal animates Starbucks’ new tuition policy. Much like the Christmas ham of yore, Starbucks portrays college support as a gift and not a benefit. “Everyone … of our partners (as Starbucks calls its employees),” proclaimed Schultz, “should have an opportunity to complete college.” He promised to do what he could to make that happen. In return, he hoped to gain the loyalty of his workforce, admitting to the New York Times that he believed that the college plan would “lower attrition, … increase performance, … [and] … attract and retain better people.”

But he couldn’t simply trust that his employees would see him as a generous man at the helm of a generous company and then behave according to Starbucks’ wishes. So the college plan that Schultz helped to create featured its own subtle forms of coercion. Under the program, Starbucks workers must pay a large chunk of their first and second years of college, and then, and only then, does the company’s support kick in. Under this plan, Starbucks’ student-employees now have so much more to lose if they depart, experience a reduction in their hours, or find themselves laid off. They’ll lose their pay and they’ll find their academic progress forestalled.

One thing they won’t lose is their student debts. Starbucks will foot the bill for the last two years of school, and as it announced to the press, it won’t require employees to work at Starbucks after they receive their degrees. (As if allowing them to reenter the job market at a higher skill level were some sort of magnanimous act.) While ASU will offer Starbucks workers a discounted rate, the cost of those first couple of years would still be somewhere between $6,500 and $10,000. That’s not much for college, but it is a lot for a barista earning between $8 and $10 per hour and typically working less than 30 hours a week. Many would have to take out loans to pay for the classes. That, in turn, would make it harder to leave Starbucks and to leave ASU, where the student-workers would already have accumulated numerous credits, credits that might not easily transfer to other colleges and universities. For Starbucks, a dependable labor force is an indebted one.

The new Starbucks program replaced an older company plan that gave employees $1,000 to attend any college of their choice. Now they don’t have a choice; it is Arizona State University online or nothing. This might generate new opportunities to attend college, but it also narrows baristas’ education options. ASU offers 40 online undergraduate programs, leading to degrees in Organizational Leadership, Justice Studies, and Global Health. But if a Starbucks student wanted to major in Women’s Studies, Math, or Asian-American Studies, they would have to go somewhere else and pay for it themselves.

Online education fits Starbucks (and McDonald’s and WalMart’s) “flexible” labor model. Classes and meetings with professors aren’t held at fixed times. They need not interfere with baristas’ wildly varying and unpredictable schedules, which Starbucks managers create to match the hours the company needs. Most of the classes, moreover, will be taught, no doubt, by non-tenured adjunct professors, themselves forced into flexible work by budget cuts, austerity regimes, and the erosion of academic work. Who knows? Now perhaps underemployed history and philosophy Ph.D.s will find it convenient to apply for a barista position so that they can re-tool through degree programs in Food Industry Management or Nursing. No matter their class, race, gender, or level of educational attainment, increasing numbers of precariously-employed Americans confront for-profit, online education and high levels of student debt.

“There’s no doubt,” Howard Schultz said when he rolled out the college program, “that inequality within the country has created a situation where many Americans are left behind.” Schultz clearly implied that his online initiative, in which his company will receive tax breaks in exchange for tuition payments, would address the problem. But it won’t. The history of labor in the mid-twentieth century — in the United States and across the industrialized world — demonstrates that only rising real wages and a strong union movement can address these pressing issues. But Schultz isn’t interested in that past or its possible lessons for our future. No, he wants to return the country to another past, the one before the New Deal. The one before the postwar years of rising real median wages and high rates of unionization, when working people bought houses and went on vacations and sent their kids with their own money (and state support) to college.

Schultz seeks to return us instead to the age of welfare capitalism, where companies tried to buy off their workers with limited options, gilded gifts, and sugary — or should I say venti — pronouncements.

“As best we can tell, the politics of the venture capital elite boils down to fending off higher taxes, keeping labor costs low and reducing the ‘burden’ of government regulation. … Silicon Valley could start by putting a stop to pretending that the sharing economy is about anything other than making a killing.” – Andrew Leonard

If you’ve heard about companies like Airbnb, Zipcar, Skype, Uber, Getaround, and Lyft, and you know a bit about crypto-currencies, you get the picture. The “sharing economy” is just as exhilarating and vexing as the Web 2.0 meme was nine years ago.

I am all there with Arun Sundararajan, professor at Stern School of Business at NYU, who describes walking down the street in New York City, musing on all the parked cars that remain unused ninety-two percent of the time. He gets it right; it seems awfully inefficient, even wasteful. Why couldn’t he just pick up one of those vehicles, run an errand, return the car to that same spot thirty minutes later, clip a twenty dollar bill under the sunshade, and be done?

But then he claims that such emerging marketplaces can perfectly self-regulate and should be left to their own devices. Sharon Ciarella, Vice President of Amazon Mechanical Turk, made a similar argument: Mechanical Turk workers would just vote with their feet — they could not be tricked into performing exploitative work. All good here; no intervention needed.

Not so fast. It is surprising that crowdmilking practices on Amazon’s Mechanical Turk still have not raised red flags in the offices of regulators. Based on these examples, it should be clear how sorely regulation is needed. I agree with Evgeny Morozov who pointed out that the so-called “sharing economy” is nothing but the logical continuation of crowdsourcing. A company like Uber is not free from those dynamics. There is a reason that taxi fares are regulated; it prevents abuse. I rode in a town car recently and was quoted $16 for my trip. Through Uber, the same trip would have cost between $21 and $27.

But it is also all so electrifying. Uber is valued at 10 billion dollars, and Airbnb, a company founded in 2008, is valued higher than the Hyatt hotel chain. Airbnb offers as many rooms as Intercontinental, which has 4600 hotels with 120,000 employees in over 100 countries. It took Intercontinental sixty years to build this business empire. Hyatt and Intercontinental had to hire architects and build up an enormous infrastructure. And then here comes Airbnb, which offers an impressive 500,000 listings in 33,000 cities in more than 192 countries. So far, Airbnb has hosted 8.5 million guests without ever laying a brick. All they have is an app; it’s a logistics company. Are we looking at a secret plot, a covert p2p takeover? Companies in the “sharing economy” can only function because they are using your “assets,” your resources: your car (Bla Bla Car, Getaround), your apartment (Airbnb), and your computing power (Skype).

But the exploitative basis of such business ventures is overshadowed by their obvious appeal to consumers. In the sharing economy, surviving without a job suddenly seems to be in reach; people can now rent out all of their “assets.” The rush to wield unused capacity (“Buy a bigger car because now you’ll be able to rent it out!”) can quickly move from a bit of welcome extra cash to being a requirement. “Why don’t you just lend your car, drill, parking lot, blender, and house once your unemployment benefits run out?” And the sharing economy can even feed you. A new app lets you share your leftovers with strangers.

The appeal of the sharing economy to the consuming public is solidified through its rhetoric. Rachel Botsman and Roo Rogers’s book What’s Mine is Yours, published by Harper Business, is a frequent reference in the sharing movement, which, contrary to what its title suggests, has nothing to do with infrastructure socialism or Richard Barbrook’s “cybernetic communism.” The “sharing economy” comes with slogans like “Sharing is the New Buying,” “Sharing is Growing,” and “Sharing is Mainstream,” and vocabulary like love (!), open innovation, trust, co-creation, co-design, and mass customization. This sentimentality is easy to understand. Take the example of Etsy. Many people have a soft spot for this site and community. Who wouldn’t immediately line up when they hear about a culture that is community-centric, based on trust, sustainability, a novel type of horizontality, “a new social operating system based on unused value,” generosity, and a culture that is against wastefulness and for responsible consumption and completely new marketplaces?

Now, though I may give the impression that I want to sign up for commons-based peer production, I also want to be crystal clear about which part of this conversation I want no part of. There is a difference between non-market practices and greed-free businesses like Craigslist and Fairnopoly on the one hand and corporations like Airbnb or Uber that profit from peer-to-peer interactions on the other. Again, I support peer production and sharing practices but I am vexed by attempts to subsume them into the new corporate hype of “the sharing revolution” that comes with calls to make the world a better place and comparisons to the Arab Spring and Occupy Wall Street. What seems to be completely missing from the discussion about the “sharing economy” is a distinction between the shifts of markets (and labor practices) to the Internet and the surprising victories of market incumbents like Airbnb, Lyft, and Uber on the one hand, and commons-oriented peer production and greed-free companies on the other.

The market-oriented companies of the former group have given rise to a culture in which all “sharing”-oriented practices end up characterized by the celebratory Californian Ideology, complete with mandatory meditation, cheers, and group hugs followed by business pitches. Close your eyes and wake up in San Francisco circa 1995, then turn on an episode of HBO’s Silicon Valley (spiritual advisor to business tycoon: “You clearly have a great understanding of humanity”).

I am not sympathetic when practices and projects like Shareable and Wikipedia or — more generally — collaborative lifestyles with people exchanging resources such as food, skills, or time, are mixed up with often exploitative practices such as crowdsourcing or massively commercial online learning platforms that can be linked to the closure of community colleges. What Yochai Benkler defines as commons-based peer production shouldn’t be thrown into one pot with mass customization sites like MyTwinn, Gemvara, or Zyrra and co-design a la 99Designs. It is not to be mixed up with developers designing unique features for platforms in the app economy. SETI@home would not hit the “like” button on AMT, usertesting.com, mob4hire, or 99tests. The high-minded values of genuine commons-based production should not be confused with the user exploitation inherent in the practices of a company like Airbnb, which is not concerned with the fact that hosts who are rent-subsidized can be evicted on the grounds that they obviously did not need all that extra space. And do we really want to wave our lighters through the dark evening skies for the newly gained ability not to buy a table anymore but just to get the parts and assemble it ourselves?

Let me be a bit more concrete. Economists now project a division of society, where a superclass of ten to fifteen percent of the population makes over one million dollars a year and the rest makes between $5,000 and $10,000 annually. The numbers and percentages may differ but there is a pretty coherent vision here of the overdeveloped world being transformed in a way that cuts out the middle class. Computer scientist Jaron Lanier has argued that the Internet is directly complicit in this transformation.

How would a society be able to move in such a direction? It would require an extreme de-skilling of large parts of the population and the reorganization of labor in a way that makes extremely low-paid work available for the vast majority of the population. The rhetoric of austerity can do wonders in this regard. As long as everybody understands that they have to tighten their belts, the ruling class can cut wages and transform the nature of work for the rest of us. This reorientation of labor and society also entails a prejudice against prospective workers who possess credentials or deep skills that could serve as bargaining instruments. If the goal is to undermine the educational system and replace it with a system of learning that does not allow for credentials and actual deep learning, then distributed learning (i.e. education not dependent upon a physical location) would be a highly efficient instrument on the way to a society without a middle class. To be clear, I’m not against massively open online courses in general, but I’m deeply skeptical about the commercialization of connected learning.

There is really a tricky undercurrent here that sometimes gets forgotten. It is phenomenal, admittedly, that Airbnb and Skype could “disrupt” entire industries (to use business lingo for a moment) without having to build infrastructure, relying instead on existing “wasted resources.” And it is true that consumers seem to benefit from these services. But let’s also acknowledge that this means that people have to open their homes, that the nature of the private has completely changed, and that life itself changes when your apartment turns into a B&B and you become an innkeeper.

If the goal is to undermine the traditional nature of employment and make the gig economy go viral, then businesses like TaskRabbit, Uber, or Lyft should be celebrated. If one thinks that the model of micro-entrepreneurship is something to aspire to, then all of that makes perfect sense.

At the end of their book, Rachel Botsman and Roo Rogers write: “The status quo is being replaced by a movement. Peer-to-peer is going to become the default way people exchange things, whether it is space, staff, skills, or services.” Statements like that could come straight out of the Occupy Wall Street handbook. But incumbents in several areas of the economy in fact reiterate this activist rhetoric in the context of an industry takeover. It is fantastic that co-creation is possible on Amazon.com, where authors can self-publish with the help of platforms like Blurb. At the same time, self-published authors who make it onto Amazon’s best-seller list also make only a fraction of what they would have made in the context of traditional publishing. Therefore, I would not hold up companies like Lulu, Blurb, Moo, Threadless, or Amazon’s book reviews as shining examples of co-creation, powered by the living labor of intrinsically motivated producers. And I cannot even start to address here the harsh mistreatment of workers in Amazon warehouses all over the world.

One thing is clear: there is something irresistible and important about commons-based peer production. But what is compelling is not that millions in revenue have shifted from the owners of the Intercontinental hotel chain to the youthful owners of Airbnb or that a completely new breed of business has taken hold. What matters are collectives and greed-free economic practices that are infused with values relating to ecological concerns.

In “When Push Comes to Pull,” David Bollier defines the pull economy as being based on demand rather than supply, an economy that is built by like-minded individuals who “pull” the goods and services that they want on their own terms. Nobody needs to create the demand for me to want a place to stay in when I’m in another city. That need already exists and services like home sharing and boat sharing meet that need. Yes, there are new marketplaces for labor, things, ideas, and money, but we should look closely and see whether they are addressing issues of income inequality or if they are just about delivering the next Jeff Bezos.

Community-based tool lending libraries, bike and car sharing initiatives, meal exchanges (e.g., to feed the Walmart employees who can’t afford a Thanksgiving dinner) or potlucks, peer-to-peer land initiatives, personal fabrication with 3D printers, open hardware, the free exchange app Yerdle, and even team-buying services like the Chinese tuangou set the needle of our moral compass in a much better direction than platforms that expropriate and capitalize on our labor.

Value creation is no longer bound to corporate wage-labor. The value that is created through the collaborative economy is based on social connectedness, it is based on communities, it is based on connectivity; it is grounded in the ubiquitous use of mobile phones, collaboration, and economies of scale.

Free labor may not be the problem itself but at the same time, I am not interested in being a wheel on the bandwagon of any soon-to-be billionaire incumbent.

Some have argued that the marriage between Marxism and feminism ended up an unhappy one: by reducing the problem of women’s oppression to the single factor of economic exploitation, Marxism risks dominating feminism precisely in the same way in which men in a patriarchal society dominate women (Sargent 1981). An account of the latter’s oppression needs to take into account a multiplicity of factors, each with its own autonomy, without attempting to reduce them to one all-explaining source — be it the extraction of surplus value in the workplace or unpaid shadow work in the household. There seems to be something intrinsically multifaceted in the oppression of women — so much so that women’s and gender studies programs are all, inevitably, interdisciplinary ones.

The question then arises whether feminism could not find a better partner in anarchism. Despite the fact that anarchism and Marxism often traveled the same path and even converged in workers’ struggles, the major difference between them is that anarchist thinkers work with a more variegated notion of domination, one that emphasizes the existence of forms of exploitation that cannot be reduced to economic factors — be they political, cultural or, we should add, sexual. Hence also anarchism’s happier marriage with feminism: if the relationship between Marxism and feminism has overall been characterized as a dangerous liaison (Arruzza 2010), which reproduced the same logic of domination occurring between the two sexes, then the relationship between feminism and anarchism seems to be a much more convivial encounter. Historically, the two have converged so often that some have argued that anarchism is by definition feminism (Kornegger, “Anarchism: The Feminist Connection,” in R. Graham, ed., 2007). The point is not simply to register that, from Mikhail Bakunin to Emma Goldman, and with the only (possible) exception of Proudhon, anarchism and feminism often went hand in hand. This historical fact signals a deeper theoretical affinity. You can be a Marxist without being a feminist, but you cannot be an anarchist without being a feminist at the same time. Why not?

If anarchism is a philosophy that opposes all hierarchies, including those that cannot be reduced to economic exploitation, it has to oppose the subjection of women, too, for otherwise it is incoherent with its own principles. Most anarchist thinkers work with a conception of freedom which is best characterised as a “freedom of equals” (Bottici 2014), according to which I cannot be free unless everyone else is equally free, because even if I am the master, the relationship of domination in which I participate will enslave me as much as the slave herself — it is the paradox of domination that even a philosopher like Rousseau, who was neither a self-declared anarchist nor a feminist, strongly emphasized.

But if I cannot be free unless I live surrounded by people who are equally free, that is, unless I live in a free society, then the subjection of women cannot be reduced to something that concerns only a part of society: a patriarchal society will be fundamentally oppressive for both sexes, precisely because I cannot be free on my own. And this is something that we tend to forget: patriarchy is oppressive for everybody, not only for women.

So if it is true that anarchism has to be by definition feminism, does the opposite hold? Can there be feminists who are not anarchists? Clearly, historically speaking, many feminist movements were not anarchist. However, some feminists claimed that feminism, in particular the second-wave feminism of the 1970s, was anarchist in its deep structure and aspirations. According to Peggy Kornegger (2007), for instance, radical feminists of this period were unconscious anarchists both in their theories and their practices. The structure of women’s groups (e.g., consciousness-raising groups), with their emphasis on small groups as the basic organizational unit, on the personal which is political, and on spontaneous direct action, bore a striking resemblance to typically anarchistic forms of organization (ibid., 494).

But even more striking is the conceptual convergence with the conception of freedom that I have described above. For instance, Kornegger affirms that “liberation is not an insular experience” because it can occur only in conjunction with all other human beings (ibid., 496), which, again, means that freedom cannot but be a freedom of equals. However, this also implies that one cannot fight patriarchy without fighting all other forms of hierarchy, be they economic or political. As Kornegger (2007:493) again put it, “feminism does not mean female corporate power or a woman president: it means no corporate power and no president.”

Otherwise stated, feminism does not simply mean that women should take the place occupied by men (which would be a rather phallic form of feminism); rather, women should fight to radically subvert the logic of domination where sexism, racism, economic exploitation, and political oppression reciprocally reinforce one another, although with different forms and modalities in different contexts. This holds even more so today, in a globalizing world where different forms of oppression and exploitation, whether based on gender, sex, race, or class, sustain each other. Perhaps the greatest contribution of third-wave feminism is that it pointed toward the need for a multifaceted analysis of domination, with its emphasis on post-colonialism and intersectionality. If by feminism we understand simply the fight for formal equality between men and women, we risk creating new forms of oppression. We run the risk that equality between men and women will signify only that women must take positions once reserved for white bourgeois males, thus further reinforcing mechanisms of oppression rather than subverting them. For instance, if we take the emancipation of white women to mean simply entering the public sphere on an equal footing with men, this, in turn, may imply that somebody else has to replace these women in their households. But for the immigrant woman who replaces the white housewife in providing domestic care, this is not liberation: she merely exits her household in order to enter into another one as a waged laborer. In the current predicament, the emancipation of some (white) women directly risks meaning the oppression of other (immigrant, black, or southern) women, if feminism does not aim at dissolving all forms of hierarchy, whether they are entrenched in gender, class, or racial oppression.

To conclude, maybe feminism has not historically always been anarchist, but it should be, because it should aim at subverting all forms of domination — be they sexist, economic, or political. Feminism, today more than in the past, cannot mean only women rulers or women capitalists: it means no rulers and no capitalism.

In her groundbreaking book about emotional labor, The Managed Heart, Arlie Russell Hochschild suggests that emotions are not simply stored in us waiting to be expressed: they are also produced and managed. The notion and practice of affects management, both privately and socially, are not specific to capitalism. Hellenistic philosophers made up a new word to convey this very idea: metriopatheia, from pathos, affect, and metrios, a word that conveys both the notion of measure and that of moderation. As Foucault correctly noted, the management or negotiation of pathē in Greek and Roman philosophers, and in particular in the Stoics, is a constitutive part of a process of subject formation, utilizing what Foucault calls techniques of the self, through which a specific and historically determined subject constitutes himself as capable of self-determination and self-mastery through a process that was social and individual at the same time. Generally speaking, the notion that pathē are the expression of an authentic self was to a large extent foreign to Greek and Roman philosophy: on the contrary, the very word pathē expresses passivity and conveys the idea that the subject undergoes affects and experiences them as forced upon him.

The social management of affects is not an invention of capitalism and does not, as such, characterize capitalism in a specific way. In other words, when we address the problem of affects under capitalism, we should be very careful to avoid the risk of thinking that the problem lies in the capitalist intrusion into our hearts, in an opposition between, for example, the authenticity and naturalness of our private affects and their forced and normative display or regulation dictated by capitalist social relations. On the contrary, we may even think that a robust notion of the privacy of affects as characterizing what it means to be a unique individual arises with capitalism and modernity. 

If this is true, then we need more analytical work in order to understand what exactly is specific to the managed heart under capitalism. For this purpose, I would like to suggest at least three factors that contribute to a specifically capitalist form of affects management.

The first: as shown by Hochschild’s work, under capitalism affects become, like other capabilities, a set of skills produced and regulated in such a way as to be sold as a commodity sui generis, that strange commodity that is labor power. That is, affects need to be included among the physical and intellectual resources that the worker sells for a wage: this transformation of affects into a crucial component of the commodity labor power cannot but have important consequences for a person’s self-perception and experience.

The second factor, strictly connected with the first, is what I would call affects fetishism. That is, precisely because affects have become marketable skills, they undergo to a large extent the same process characterized by Marx as commodity fetishism: they become things, detachable from their subject, that mediate the relations among people. This appears most clearly in the use of affects in marketing, where the display of specific affects is employed for the sake of creating further affects, for example desire, self-identification, lust, and ambition, to be attached to commodities. Another recent instance of such a phenomenon is what recent studies, which have provoked considerable controversy because of their intrusive methods, have defined as emotional contagion on social media. In other words, we witness here a divorce between affects and people’s living and organic experience.

The third, crucial factor has to do with the contradiction between the two phenomena that I have just mentioned, on the one hand, and, on the other, the fact that one of the main anthropological transformations determined by capitalist modernity is precisely the constitution of the individual as the subject of unique, irreducible, never entirely expressible, and essentially private emotions and feelings. To clarify, what I am suggesting is that the transformation of our social relations and form of life under capitalism has produced both sets of phenomena at the same time: on the one hand we are interpellated to recognize and accept our “true” emotions as the expression of our inner and most authentic self; on the other hand, our emotions are detached from us and constructed as interchangeable and measurable things that can be exchanged on the market or as skills that add to our labor power. The estrangement experienced by people providing commodified affective labor lies precisely in this contradiction between qualitative uniqueness and quantitative equalization through exchange, between concrete living experience and abstract affective labor, between autonomy and heteronomy. This contradiction, however, should not be conceptualized as a contradiction between naturalness and artificiality, authenticity and inauthenticity, but rather between two different forms of experience that are both socially mediated and that are both part of what it means to live in a capitalist society.

While I want to challenge the idea of a complete privacy and naturalness of affects, I want to insist, at the same time, on the fact that these two forms of experience are actually different and contribute in very different ways to the process of subject formation and to the way a subject perceives herself. I am insisting on this point, because I think that we should avoid two parallel dangers. On the one hand, the danger of trying to find resources for resistance and struggle in artificial and romantic ideas of authenticity and naturalness. On the other hand, the danger of thinking that the form of social mediation in the management of our affects is fundamentally the same in all spheres of society.

To conclude, in very broad terms, I would suggest that decommodifying affects should be both our goal and a means of resistance and struggle, without for this reason falling into a romantic ideal of authenticity. It is not a matter of defending private authenticity versus social reification, but rather of mediating, shaping, and managing our affects through more humane social relations.

Section: Democracy and its Enemies

The following was the keynote lecture at the XXVII Encuentro Internacional de Ciencias Sociales in Guadalajara, Mexico, December 5, 2013.

On October 3, 2013 the Supreme Court of Israel ruled that there is no Israeli identity, since there is “objectively” no Israeli ethnicity. The 21 litigants will have to continue having the designation “Jewish” in their official files (coded into their identity cards!), instead of “Israeli” as they desired. Against their own wish, they will not be able to share a common citizenship identity with Arab citizens of Israel, in a state that continues to be identified as that of an ethnicity, the Jewish people. Some of the consequences of that identification are well known. Thus, for example, if I wished to ask for Israeli citizenship and membership in the citizen body to which the state is said to belong, namely the Jewish millet, I would be able to do so, though I have never lived in Israel and practice no religion. Many who have lived all their lives in that country would not be able to do the same, unless they converted to Judaism. Even if married according to Islamic law, Arab citizens do not have the right to permanently settle their spouses in Israel. But even I would not be able to marry or divorce in Israel, unless I followed the rules, requirements and rituals of orthodox religious law. And I could not marry a non-Jew in any case.

As all those who have seen the film Hannah Arendt must realize, the great political theorist’s relationship to Israel was deeply ambivalent. She believed that the idea of the modern nation state with an ethnic definition of belonging was at the root of modern (as against traditional) anti-Semitism, and she strongly rejected the idea that its victims, the Jews, living together with another people, should establish such a state themselves. (The Jewish Writings p. 352) Moreover, to put the matter in her own concepts if not words, even if Israel’s formation was an act of liberation (whether from colonial rule or, more broadly, the European nation states) it was not followed by the constitution of freedom. The founders, as is well known, were unable to produce, as required by the UN resolution of 1947 and the Israeli Declaration of Independence, a written constitution with equal fundamental rights for all then living in the territory. The constituent assembly, elected also with Arab participation, was converted into the first Knesset, whose simple majority chose to abandon the project of constitution making in favor of basic laws that would be produced gradually, by ordinary parliaments. Arendt’s devastating critique of constitution making as an act of government, instead of the mobilized and enlightened action of “the people” themselves, deserves to be well known, however that difference between the two approaches is to be understood procedurally.

There are three possible explanations of why Israel failed to produce an entrenched documentary constitution in 1948. The first (and best known) is currently represented by my former student Hanna Lerner, who argues that it was the inability of secular and religious actors to agree concerning fundamental rights and the relationship of religion and state. While the latter relations, and in particular the surrender of family law to the Orthodox rabbinate, were informally agreed upon by the religious Agudat Israel and the secular Mapai of Ben Gurion in the status quo agreements of 1947, to formalize, e.g., a “prohibition of intermarriage” in a written constitution would supposedly have been highly embarrassing and unacceptable at least to the secular side. This is the explanation accepted by Hannah Arendt (Eichmann in Jerusalem p. 7), but it is not the only relevant one. The second explanation focuses on the power interests of Ben Gurion himself. He was no different than other liberation leaders in the Anglophone colonial world, who believed that it was a Westminster type of parliamentary sovereignty, incompatible with entrenchment, fundamental rights and judicial review, that would give a dominant party and its leader the most power. While Nehru in the end gave way on this matter, probably because of the internal pluralism of his party, and agreed to a well entrenched constitution with a strong table of rights and formally codified constitutional review, Ben Gurion, whose dominance over his party was greater than Nehru’s, did not. Finally, I believe there was also the problem of Arab-Jewish relations. Given the fundamental Zionist idea of a Jewish state, it was unacceptable to formally establish equal rights for all citizens, which in the future could perhaps lead to a state of two peoples, or one of all its citizens whatever their ethnic belonging.

The last motivation is the most important because it touches not only the inability to agree on a constitution, but the prior difficulty of constituting a democratic constituent power. It is not obvious, in other words, whether those participating in such power should be Jews in Israel, Jews and Arabs in Palestine, or Jews in and outside Israel whose state it is supposed to be. The fact that in 1948 a whole multitude was still waiting in displaced persons’ camps to make the journey to Palestine was an important consideration for many of the founders of the state. This last consideration hardly applies today, when relatively few seek to emigrate to Israel, yet it is reflected in the Supreme Court decision of October 3 this year, recognizing Jews rather than Israelis as the state’s relevant constituency.

It would not be difficult to combine the three explanations, since after all the actors responsible were many and they could have different interests and priorities. In any case the result still stands, in spite of the efforts of the Supreme Court in the 1990s, under Aharon Barak, to turn a couple of very weakly entrenched basic laws into the functional equivalent of a written constitution with rights in their center. Barak even attempted to provide a new interpretation of the definitional phrase “Jewish and democratic state” by arguing that “Jewish” could not contain any element of meaning that was not consistent with democracy. The October 3 decision, made by a court with very different members, reveals his failure, however brave the effort. What would be ordinarily regarded in democratic polities as the imputed constituent subject of a democratic constitution, the Israeli people, does not exist, according to the court. An actor admittedly has been put into its empty place, namely the Jewish people, but even if it existed it does not seem to have the will and the capability of becoming that subject any more than any other people essentially defined. Thus a formal, documentary, entrenched, and enforceable constitution still has to wait.

Hannah Arendt does not mention Israel even once in her important book on constitutions, On Revolution (1963), and mentions the failure of constitution making only once in Eichmann in Jerusalem, published the same year, blaming the issue of religious law. She points out the “breathtaking … naivete” with which the prosecution in the Eichmann trial denounced the prohibition of intermarriage in the Nuremberg Laws of 1935. Yet even for the most courageous of 20th Century thinkers, that trial was apparently not the right time “to tell the Jews what was wrong with the laws and institutions of their own country.” (p. 8) Is it too much of a stretch, however, to assume that her contemporary critique of constitutions produced by incumbent governments, without much legitimacy, applied even more to Israel than to the postwar European states?

In fact, the very sentence just quoted indicates that, performatively, Arendt was less hesitant than the journalists she mentions to tell Israelis what was wrong with their country. She did much more than that in the 1940s in terms of predictions and proposals. After some involvement in Zionist circles and their politics in the 1930s, in the 40s she became a determined critic of both labor Zionist and Revisionist visions for the future. Her first critique of the Zionists, as early as 1937-38, was complex, but its focus was on the substantialization or essentialization of the Jewish people, on the one hand, and the absence of an autonomous politics based on independence from other powers and with clearly defined friend-enemy relations, on the other. Later, she implied that Zionism did make up for its lack of politics in the 40s, but in doing so if anything intensified the essentialism, based on a German-inspired rather than French political form of nationalism. (The Jewish Writings pp. 366-367) Arendt regarded the nation state as an obsolete and endangered form that would be replaced by larger units, empires or federations. Only the latter would allow the survival and flourishing of small peoples like the Jews. (The Jewish Writings p. 371) Yet, according to her provocative thesis, it was the program of the Revisionists, who were in her view close to being neo-fascists, that was in the end adopted by the whole movement and above all by Ben Gurion in the project of a sovereign Jewish ethno-national state. (see the piece on Begin in The Jewish Writings p. 417ff and “Zionism Reconsidered” in The Jewish Writings p. 343ff; 351)

Arendt did not wish to submit easily to this outcome; passivity was not part of her character. Close to Judah Magnes, after initially opposing him, Arendt went on to propose an Arab-Jewish confederation in Palestine, a variant of his idea of a bi-national state included in a larger federation of the Middle East. (J. Magnes, “Toward Peace in Palestine,” Foreign Affairs, 1943; The Jewish Writings pp. 336, 400, 441, 446) She was to call Magnes the “conscience of the Jewish people” (p. 451ff) and she always shared his abhorrence of the absurd notion of a people without land searching for, and later actually settling in, a land supposedly without people. Only the moon is a place without people, she repeatedly wrote. But there were two main features of Magnes’s 1943 proposal of bi-nationalism within a federation of Ottoman successor states that she strongly opposed. Within the bi-national Palestinian state the Jews would be at best a large minority, and within the federation as a whole a small one. This would make the Jews a vulnerable minority within an “Arab empire.” Secondly, to avoid violations of minority rights, Magnes sought to establish Anglo-American alliance guarantees. Such protection by imperialist powers, resembling the similar role once played by absolutist princes, would make the Jewish minority their client and agent, a terrible status in a world where she anticipated de-colonization. For Arendt, this idea of the bi-national state destroyed the meaning of federation, which was supposed to be made up of “different people with equal rights.” (p. 336) It retained, according to her, the homogenizing and exclusionary logic of the nation state, which made the so-called minority problem impossible to solve on the basis of equality.

When she partially revised her opinion with respect to the Magnes scheme, her conception of a confederation, replacing bi-nationalism, represented an attempt to save the core principle of federation. It was also a transposition to Arabs and Jews of an earlier notion according to which a non-territorial Jewish people would be a member of a European federation. It was not clear how that idea would work without falling into rigid consociationalism, as in Lebanon. She was, however, happy, if prematurely so, to discover the idea of a confederation with Jerusalem as its common capital among Jewish, Arab and UN proposals in 1948. (p. 408ff.; 446-7) The idea was in any case free of all Eurocentrism, even as she continued to strongly support a federalization of Europe itself, on political rather than economic grounds. (J. Butler review; compare The Jewish Writings pp. 129-131 vs. 447, 450) Once in the United States (after 1941) she came to be even more impressed by the federal idea, which she partially misunderstood as one bringing together distinct and differentiated peoples that nevertheless defined their citizenship in purely political rather than ethnic or national terms. The same federal idea was to be central both to her constitutional proposal and to the thesis of On Revolution, but of course nothing could be further from it than the unitary state form, resembling the Westminster model, established by the founders of Israel. If this state was created in a French way, resembling the making of the Third Republic, it was in one fundamental respect different from the English and French nation states. Israel’s founders envisaged an “essentialist” ethno-religious rather than a political identity, and thus a form of state power limited by religious law, an inheritance from the Ottoman era preserved by the British Mandate: the millet system. (cf. Yuksel Sezgin, The Israeli Millet System) Hannah Arendt rejected this form of self-definition and self-limitation.

If her encounter with Zionism helped deepen Arendt’s critique of the nation state, so sharp in The Origins of Totalitarianism (1950: on Israel, p. 290, 299), and helped develop her notion of federalism, central to On Revolution, these works remain highly relevant to the problem of the Israeli state and constitution, at least in a diagnostic sense. If Israel plays no role in the second of these works, and the constitution is mentioned only once in Eichmann, it may have been because by the early 1960s she was no longer confident of being able to propose a solution to the constitutional problems of this state, already established. She hated to engage in the production of abstract, politically irrelevant utopias. Yet I believe that the theory of On Revolution, deeply linked to American solutions and to the concept of “revolution,” bears some of the burden for the omission. After briefly reviewing this theory, I will try to show that earlier she did possess concepts that would still be essential for a just solution of the now historical conflict. On these, a missing chapter of On Revolution, one on Israel, could have been based.

In the classical doctrine, whether biblical or republican, the foundation of the new polity presupposes violence, generally by a prophet or a political law-giver. Hannah Arendt was deeply disturbed by this assumption, and sought to go beyond the role of dictating violence with two distinctions. The first was between liberation and constitution, which she saw as two dimensions of a revolutionary process. Where liberation could be and generally was violent, constitution, if it was to be successful, had to be the act of a plurality of actors persuading one another and making binding promises regarding the future. The second distinction was between the United States and France, where the first succeeded at producing a stable constitution guaranteeing at least constitutionally limited government and the second failed. Ultimately, at the bottom of this difference were, according to her, two different conceptions of the constituent power. In the case of the U.S., whether in the states or the Union, the constituent power was already constituted as the power of small republics, whether townships or states. In France, the constituent power of the people or the nation was understood to be in the state of nature, limited by no rules or pre-existing procedures. It was this doctrine that allowed French political agents, whether the Assemblée constituante, the Convention nationale or one man, stepping into the empty place of the sacralized king, to engage in constitution making by government entities that could produce no legitimacy or stability. Arguably the first and all subsequent Israeli Knessets too occupied that place, even if the task of producing a unified documentary constitution has never been resumed. None of the 11 basic laws, including the ones relied on by Justice Barak, nor their 3 revisions, were produced through a wider, more participatory constituent process. (See Justice M. Cheshin in the United Mizrahi decision, who imagines such a process in the classical European terms of an extraordinary constituent assembly.) After liberation from colonialism, Israel thus chose a generically French path rather than the federal American one, quite contrary to what Judah Magnes and Hannah Arendt, among others, proposed.

The American path would have meant not only federation in two senses, process and outcome, but also the establishment of an entrenched, enforceable written constitution with a strong table of fundamental rights. Israel was not alone in disregarding the American paradigm. While some of its substantive aspects have been incredibly influential, it is fair to say that the procedures of its first emergence were never successfully adopted elsewhere. The latter involved the making of the constitution in several stages in which none of the actors usurped the imputed sovereignty of the people, a process in which the drafting assembly possessed no power other than to recommend a constitution to other instances. As Condorcet and Sieyès already realized at the time of the French Revolution, the historical preconditions stressed by Arendt for an already constituted pouvoir constituant, namely small self-governing republics, did not exist in France. But even where such bodies did exist, as in many countries of Latin America, they could not play the role assigned to them by Arendt. We can see the reason in the chapter of On Revolution (chapter 6) where she seeks to overcome “American exceptionalism.” I have in mind her famous theory of the councils and soviets that she proposes as the alternative to the modern political parties produced by modern revolutions. As inspiring as her inaccurate history of these political bodies may be, she has to implicitly admit that direct democratic councils, whether in France, Russia or Hungary, could not complete a revolution that she defined not merely as the exercise, but also as the institutionalization of public freedom, namely a constitution. Many years before, anticipating her later theory of the “tragedy” of direct democratic councils, she had to admit the same failure for the kibbutzim in Palestine. Bracketing reservations concerning their apolitical and even authoritarian aspects, she saw these as the only creative force ideologically predisposed to institute either a bi-national state or a confederation of two peoples, but she was forced to see the political weakness of grassroots self-government when opposed to the modern party movement, especially given their self-imposed “abstention from politics.” (Young-Bruehl, Hannah Arendt, p. 97, 139 vs. p. 229; The Jewish Writings p. 395, 348-9)

What she did not realize, then or later, is that it was precisely the setting of a revolutionary rupture, or in her language the violence of liberation, that greatly privileged a disciplined party organization such as Lenin’s against loose democratic organizations that tended to define politics primarily in the local sense. Tocqueville, whom she greatly admired, did realize this, and understood the political vacuum produced by revolution as the birthplace of a new authoritarian alternative. She unfortunately did not fully understand his complex explanation of American exceptionalism in constitutional development, which relied, among its main components, on the concept of “revolutionary results without having had a revolution.”

Fortunately, Arendt’s proposal for a confederation of two peoples within a larger federation, genetically related to her conception in On Revolution, does not stand or fall with her direct democratic or radical republican theory of agency. Yet a confederation or federation does require agents capable of federating. If the confederation is to be democratic or republican, the agents that produce it must themselves be democratic or republican, or at least strongly attached to an outcome based on free democratic competition under constitutional restraints. Because of the long history of omissions of British imperialism, as Arendt knew so well, such agents were missing on both the Jewish and the Arab side. While Arendt claims that the Jewish Yishuv was an effective proto-state, primarily in the form of the Histadrut (The Jewish Writings p. 436-437), it was a top-down, bureaucratic organization capable of producing effective military force, without any democratic experience. Admittedly, the UN Resolution of 1947 ending the British Mandate aimed at the creation of two states based on democratic competition and equal rights under constitutions. Yet the incredible rapidity of the British departure left no time for such entities to organize themselves. The horrors of the Indian partition had a similar cause, with the difference that Indian political actors by then had long political experience and were well organized. Such political experience was missing in Palestine on both sides. This is why Hannah Arendt rather reluctantly joined Magnes in supporting the failed plan for a temporary UN trusteeship, in an uncommonly tough essay condemning the increased influence of Jewish terrorism, which aimed to expel Arabs and to make negotiations between the two peoples impossible. (“To Save a Jewish Homeland,” 1948; The Jewish Writings p. 397-400) In any case, the departure of the British created the kind of power vacuum Tocqueville noted as characteristic of modern revolutions.
If it was a liberation, there were two subjects of liberation capable of entering into the space of power, with mutually incompatible claims, as Arendt noted following Magnes. We know the outcome. On the Jewish side, those representing the claims of a modern ethno-national state easily defeated the small groups representing views such as Arendt’s, at the same time that they militarily triumphed over Arab actors making similar exclusionary claims. Given British support for at least some of the Arab forces, the process could still be represented as liberation by Jews, even as it became al-Nakba, the catastrophe, to Arab Palestinians. But it decisively failed as a process of constitution, and that brings us back to today, when there is neither a constitution nor constitutionalism, either for the Israel of the original armistice lines, or for what has been called the Israeli control system, which now includes also the West Bank.

After the Israeli state was established, but before she saw the need to rehabilitate the concept of revolution, Arendt began “Peace or Armistice in the Near East?” (1950), dedicated to the memory of Judah Magnes, with the following lines:

Peace, as distinguished from an armistice, cannot be imposed by the outside; it can only be the result of negotiations, of mutual compromise, and eventual agreement between Jews and Arabs. … A good peace is usually the result of negotiation and compromise, not necessarily a program. (The Jewish Writings p. 423; 427)

I leave to one side that today we would say Israelis and Palestinians instead of Jews and Arabs. The lines retain their validity. Since President Sadat made his trip to Jerusalem in 1977, and in spite of the fact that promises regarding the Palestinians were not kept by Israel, there have been serious negotiations among the parties, Israelis and Palestinians. At least once, at Taba in 2001, they came close to success, vitiated by the closeness of Israeli elections. Today, talks have been restarted, but the chances of an agreement seem slim in the face of obvious sabotage by the settler-dominated present Israeli government, indicated by the continued building of settlements that tend to destroy or transform the very object of negotiation, and potentially by Hamas, which has been left outside, though it controls Gaza and holds the allegiance of many in the West Bank. Yet there is no alternative to these negotiations except, as Hannah Arendt knew as well, renewed violence that leads nowhere.

But what are these negotiations to be about, and how are they to be conducted? To the extent that the parties think they are negotiating only over territory and control over people, they cannot be right. They are also negotiating over a constitution or constitutions. Whether the result is supposed to be one state or two, any possible entity that emerges will still be a deeply divided society on ethnic as well as religious grounds. Such societies need written, relatively rigid, enforceable constitutions more than all others. Moreover, peace between any new entities, the survival and even deepening of a modus vivendi, and the adherence to solemn promises made during the negotiations will depend on both constitutional and international law arrangements that need to be codified along with plausible enforcement. It would be futile to make an agreement concerning state structures without providing at least the main principles for regime forms.

Of course the problem of state structure has to have priority. At the moment it is quite unclear whether the result of the negotiations in the end will be two states or one, as many on both the Israeli right and left think inevitable, and it is not up to outsiders to make this choice. They can, however, point out that the same problem would exist for either option, because it will be neither possible nor desirable to create ethnically homogeneous states either way. Here Hannah Arendt remains a good guide, and helps us see once again that the choice of a nation state entails the next bad choice, between second-class citizenship and expulsion. (The Jewish Writings p. 343-344, 347, 352) Even outsiders can point out, furthermore, as Magnes and Arendt did, that there are options that can avoid such a bad choice, ranging from bi-nationalism to consociationalism, or even a confederation based not entirely on territory but also on political organization. It is not up to us to decide whether political or economic integration of new units should come first, but we can point out that both would be needed for either to be stable and lasting.

It is true that if outsiders came up with concrete and detailed plans, they would wind up only repeating Arendt’s own demonstrable contradictions in these matters, and her frequent changes of perspective, which were a function of the rapidly changing context on the ground. In the end she did warn against being attached to plans and formulae instead of focusing on genuine negotiations. (The Jewish Writings p. 427) Yet can the actors do without at least the outline of a plan that would make coming to an agreement attractive to both sides? It can perhaps be said that the idea common to Magnes and Arendt, of a confederation or federation within a larger confederation or federation, remains normatively the most promising one for Israel/Palestine and its region. Whatever its troubles today, the EU, composed of former violent enemies, does indicate that structures such as these are possible. The possibility of a federal Europe was an important part of Arendt’s vision in the 1940s. However mistaken she was about the potentially federal nature of the British Commonwealth (too weak once empire was gone), the federalism of the U.S. (not based on a union of distinct peoples) and the Soviet Union (integrated through one-party rule rather than voluntary union), the example of Europe, then only a dream but today widely imitated, shows that she was potentially right. Might this example not be attractive not only to Israelis and Palestinians, but also to the other states of the region, each today replete with the minority problems she associated with the nation state?

Can there be an international political role in achieving the desired end? Hannah Arendt, as a previous quotation indicates, believed in federation as the autonomous product only of those who seek to federate. Yet, even with the British omissions of the Mandate in her clear memory, she was willing to accept a temporary trusteeship for Palestine, in order to reduce antagonism and to enable actors capable of fair negotiations to emerge and develop. (The Jewish Writings p. 399-400) Even the forming of the European Union occurred under a defense umbrella provided primarily by the United States. For Israel/Palestine, however, the issue is different. While the Palestinians have lost their Soviet sponsor, with only very tenuous replacements, Israel is indeed under an American umbrella of protection. One of Hannah Arendt’s main criticisms of Zionism was aimed at the desire, inherited from the older assimilationist trend in Judaism, to be under the protection of some “big brother”: first the Ottomans, then Great Britain, with America next, and for a time the Soviet Union, before America’s protective role was settled on after the Suez fiasco of 1956. (ibid. 392 and elsewhere) This is why she insisted on the autonomous agreement only of the parties themselves, representing the peoples of Palestine and Israel. Yet might not the removal of the protective umbrella have the same effect as a trusteeship? Or, instead of the protective umbrella, should not soft power be applied to both sides, to facilitate the making of a just and equitable solution whose terms would emerge from the parties themselves?

Are these parties and their representatives capable of coming to any serious agreement? Toward the end of his life, Edward Said, whose views in many respects resemble those of Hannah Arendt, became a strong critic of attempts at Palestinian liberation relying on terror. Arendt too was an uncompromising critic of Jewish terrorism. But Said was fortunate enough to live in a period in which a new method linking peace and constitution making emerged. He was highly impressed by the post-revolutionary effort in South Africa to devise a democratic system based on equality through negotiations between former enemies who thereby became political opponents. If this was possible in South Africa, given the heritage of apartheid, it must also be possible in Israel/Palestine, he thought. I think he was right, even if he misunderstood the negotiated, many-stage character of the South African process. In particular, he was in my view deeply mistaken in rejecting negotiations on the Oslo pattern. He may of course have been right that only new actors, I would say on both sides, are capable of acting in an entirely new manner. Yet, still under the influence of the populist idea of constitution making, Said believed that the key new actor must be the Palestinian people, who could generate new institutions from below through an elected constituent assembly. But the Palestinian people is no less an essentialist abstraction than the Jewish people. It is unclear, for example, whether the constitution-making task Said had in mind was based on the constituent power only of people in the West Bank, or whether he meant to include Arab citizens of Israel, and also Palestinian exiles. (From Oslo to Iraq and the Road Map: Essays, 2004, p. 48-49, 186-187, 192)

Originally Hannah Arendt too seemed to believe in a unitary people constituted by Schmittian friend-enemy relations, which she at one time saw as the meaning of the political. Precisely as a result of her brave confrontation with the Arab-Jewish conflict, she came to understand that politics could not be based on essentialized friend-enemy relations without self-contradiction, and that the major political task was the mutual persuasion of former enemies to become opponents in a pluralistic field. (compare The Jewish Writings p. 56-57 vs. 351, 359) A historical compromise producing a federation or confederation, especially one within a larger federal entity, can be the work only of many actors, within each people and across a series of negotiating tables. This is the lesson of the recent transitions to democracy, which inspired Edward Said, as it should inspire us.

The Prawer-Begin Plan was shelved. But the idea that you can forcibly transfer an indigenous population and determine where it can legally reside looks and smells like a plan pulled from the dusty drawer of Hendrik Verwoerd, architect of apartheid South Africa. And that didn’t work out so well.

Sadly, it was too early to celebrate the downfall of the Prawer-Begin Plan. The victory of suspending the Knesset vote following the “day of rage” protests on November 30 was short-lived. The dark, threatening cloud of ethnic cleansing still hovers over the Negev’s Bedouin population. Nevertheless, Prawer’s suspension was the culmination of a grassroots mobilization that took months, years actually, to climax and grab public attention. It was a historic achievement.

The good news is that for the first time, the Jewish Israeli public woke up to the sound of a clear, well articulated and well organized Bedouin-led resistance movement.

The bad news is that it was immediately perceived, particularly by Israeli liberals, as ingratitude and a failure to comply with what the government successfully framed as the most enlightened, generous and historically fair settlement of land disputes in the Negev. Lieberman and fellow Jewish supremacists naturally opposed the plan from the very beginning, considering it too generous and predicting its demise from the outset. But it is more interesting to take a closer look at the “enlightened” expressions of disappointment about the Bedouin protests. It is the liberal state of mind that will likely provide the moral justification for forced removals in the near future. Meirav Arlosoroff’s analysis in Haaretz immediately following the day of rage is a case in point.

What is the difference between the massive confiscations of Arab land for Jewish settlements in the 1950s and the Prawer Plan today? There’s no difference, according to Arlosoroff. Prawer follows the well-known formula of confiscation, state control over population distribution and forced removals. However, whereas in the 1950s the welfare of Jews was the government’s main concern, today’s plan has only the Bedouin in mind. They had better take the hand that reaches out to them, from a government that is finally willing and ready to invest, develop and save them from their backwardness. Arlosoroff is attuned and empathetic to the Bedouins’ plight: they have a long memory and the wounds of the 1950s are still open, but their resistance is irrational.

Why not leave “dark spots of degradation, backwardness, crime and poverty” when you can actually profit from resettlement? she wonders.

Had the Bedouin read Haaretz, they would have by now recognized the benefits of upgrade-and-exit strategies. Small Bedouin settlements, much like small start-up companies, are for the most part unsustainable. They have to merge or disappear. From her neoliberal standpoint, Prawer’s population-merger plan only makes sense.

But the fact is that the story Arlosoroff tells Haaretz readers about the benevolent government of Israel is simply a lie.

The Prawer Plan looks and smells like it was just taken out of Hendrik Verwoerd’s dusty drawer. Verwoerd, the infamous “architect” of apartheid, conceived the idea of forcefully moving populations to government-built designated areas. The idea that the government determines where an entire population can legally reside, delimiting “for its development” a restricted area considered legitimate only by hostile authorities and not by those living under their thumb, is truly a relic of a dark era.

With the passing of Nelson Mandela, it’s worthwhile remembering how this grand apartheid experiment in systematic domestic ethnic cleansing actually ended. It was a catastrophe, and not only for the population; as far as policy makers are concerned it was a grand planning failure as well. According to some estimates, around 8 million people became internal refugees of apartheid, to this day populating some of the largest shanty towns in the world. No tourist arriving at Cape Town International Airport can miss the ubiquitous and huge “squatters camps” along the main highway. The Prawer Plan, in whatever new incarnation it reappears, will doubtlessly have similar results.

Contrary to what Arlosoroff claims, the Israeli government, not the Bedouin, is turning its back on history by refusing to recognize the 37 villages in which more than 80,000 out of its 200,000 Arab citizens reside. These are not the small “dark spots” that Arlosoroff depicts.

Activists from Tarabut — an Arab-Jewish grassroots movement that studied the plan and is active in resisting Prawer — did the arithmetic: the average number of residents in a so-called unrecognized village in the Be’er Sheva area is 1,740, three times the average number of residents (309) in the Jewish villages of the same area.

The process of forced Judaization of the Negev has already created a geographical landscape of apartheid eerily familiar to anyone who has ever traveled across South Africa. Since 1997, 59 so-called “loners’ farms” have been built by families with friends in high places, beginning when the Israeli government enabled a new breed of pioneers-cum-start-up professionals to take over huge swaths of land and generously assisted them with infrastructure.

Arlosoroff does not even bother mentioning the desert bloomers with their organic cheeses made from boutique goats. It is apparently irrelevant that for those selling the authentic Zionist experience to the occasional weekend flock of Jacuzzi-dwellers in romantic eco-friendly retreat cabins, the government allowed illegal settlement without permits or master plans. This went on for years until a new law retroactively legalized them wholesale in July 2010.

Meanwhile, the state has been forced to reassess how to go about cleaning and removing the primary environmental hazard affecting the Jewish “loners” – the human Bedouin hazard.

In light of the scandalously heavy handed police response and arbitrarily long arrests of activists participating in the “day of rage,” it is likely that we will witness an escalation in the form of continued systematic destruction of Bedouin dwellings, accompanied by more and more arbitrary arrests and incarcerations targeting not only activists, but the Bedouin population as a whole.

The worst case would be if the state goes through with forced removals combined with forced confinement in government-planned dwellings. These may look less like the restrictions on movement placed on people in the apartheid townships of yore, and more like the “open jails” Israel already operates for African refugees. We must take into consideration that this is a population perceived as criminal not only by the authorities, but also in the Israeli public consciousness.

But even the worst repression will ultimately fail, just as the grand experiment in South Africa miserably failed. The human hazard is not going to simply disappear from view. Israel’s land regime produces chronic subversion, from public housing “squatters” to tent city dwellers, to refugees marching in defiance of the “open jail.” With the Bedouin demanding their human rights and dignity, that subversion will only increase.

A South African apartheid politician once likened the attempt at removing native populations to a Sisyphean task: “like sweeping the water out of the sea with a broom.” In local parlance this resembles a government effort to sweep the sand out of the desert with a broom. The determination to get rid of entire communities at the cost of great human suffering has certainly not disappeared; it may have even intensified after the protest and the “leftist” disappointment that stemmed from it. It is also painfully clear that the government’s massive investment is in its sweeping effort and not in the Negev’s population itself. But just as in South Africa, destructive and disastrous as it was, this investment is futile and will most likely fail to achieve its goals.

Another English version of this article first appeared in +972.

Ariel Sharon was perhaps the last Israeli soldier-statesman whose life was framed by the Zionist myth of martyrology. Although there is surely no shortage of commanders who are mythical figures and became politicians in contemporary Israel, Sharon joins an exclusive club of mythic men in the history of Zionism whose lives ended mysteriously, untimely, not in war, and/or whose death stories were contested and ambiguous. Theodor Herzl died young, and is rumored to have suffered from syphilis. Joseph Trumpeldor died protecting Tel Chai in 1920 and, as the myth holds (Yael Zerubavel provides a detailed account), said before dying, “never mind, it is good to die for our country.” Yitzhak Rabin was assassinated in 1995 by a national-religious law student. Yasser Arafat, whom Israel tried, and in the end probably succeeded, to poison or otherwise kill during his long career, died in 2004. Rafael Eitan, a former chief of staff and politician, was pulled by a wave into the sea at the Ashdod harbor, where he was a project manager (2004). And Sharon was in a coma for eight years starting in January 2006. His social death was blurred, extended even beyond the span of “the king’s two bodies.” Shortly after his stroke, streets and institutions were already named after him.

The eulogies on his death and the description of the funeral reflected this ambivalence, which combined inglorious and glorious military chapters, including his resignation after being found responsible for the massacre in the Sabra and Shatila refugee camps during the first Lebanon war, and the change he made as a prime minister which led him to support the withdrawal from Gaza.

Sharon was described in the Israeli press as a farmer, a commander and a statesman, similar to the depictions of Yitzhak Rabin, Raphael Eitan, and Moshe Dayan.

Dayan’s death was never ambiguous as was true for the others, but his life was marked by mythical dramas, commitment to agriculture and the love of archeological finds (many of which he stole).

Sharon was also placed alongside other figures such as Rehavam (“Gandhi”) Zeevi, advocate of the “transfer” of the Palestinians and head of the “Moledet” Party, who was assassinated by members of the Popular Front for the Liberation of Palestine in 2001, while serving as minister of tourism.

The elusiveness of the end of life for notably many mythical figures of the Zionist movement in Israel says much about the materials out of which those myths are made: sacrifice, fighting, and the sanctification of death, on the one hand, and labor, agriculture, and husbandry, on the other. Both sets of mythemes are part and parcel of the totality of sacrifice for the land, versions of which we encounter also with David Ben Gurion.

The Israeli and international press understood these tensions in their depictions of Sharon’s death. Sharon, the 11th prime minister of Israel, held the image of the Sabra that Dayan and Rabin had. The Israeli press combined the images of “Arik, an Uzi in his hand, a bandage on his head and a lamb draped over his shoulders.” (Aluf Benn, Ha’aretz, 13.1.14)

The German foreign minister Steinmeier said he was an “indefatigable protector of his beloved fatherland Israel.” Merkel called him a patriot. UN Secretary-General Ban Ki-moon stated that he was a “hero of his people,” while Prime Minister Benjamin Netanyahu eulogized: “Arik: your special contribution to Israeli security will be remembered in our history.” President Shimon Peres said: “you were the shoulder on which the security of our people leaned…your footprint is in every hill and valley. You harvested them with the sickle and protected them with the sword.”

Both the Frankfurter Allgemeine Zeitung (FAZ) and the (Berlin) Tagesspiegel quoted Palestinian sources on his role in war crimes. The central article in the FAZ (13.1.2014) was titled “a war maker and a farmer.” American Vice-President Joe Biden described him as “a complicated man in complicated times and a complicated neighborhood,” referring more to the present challenges that Israel faces than to Sharon’s mythological “total commitment” to security. Referring to, but not held captive by, Sharon’s place in the pantheon of Zionist leaders hovering over the performance of more prosaic leaders today, the international press presented him as an Israeli leader with a controversial career. The Israeli response, both in the eulogies themselves and in their reportage, stayed close to the myth of Sharon, insisting on his incomparability even when the myth was called into question.

“The following video contains graphic content, which may be disturbing for some viewers,” says NYTimes.com about a video of the protests in Ukraine. Yes, politics — if by “politics” we do not mean debates of “experts” and TV celebrities who represent political parties — is disturbing, and not only in Ukraine.

Yet, in Ukraine, politics has come back. Hundreds of thousands of people have been on the streets for two months already protesting the government. What started in November 2013 as a protest against President Viktor Yanukovych’s decision not to sign the Association Agreement with the EU has very quickly turned into a protest against the entire regime, the whole system of power from the President to a local police officer.

The first violence used against the protesters on November 30 showed that the government hates to see the faces of those who do not like it. After two months of mass protests, when two to three hundred thousand people were peacefully marching in the streets of Kyiv, the government turned to “legal” measures. On January 16, the pro-government Parliament passed a package of unconstitutional laws that criminalized every form of public protest. Moreover, they criminalized political competition, making an opposition victory practically impossible.

What happened next was another turn in the struggle against a corrupt government that protects itself with internal military and police forces. After the passage of the laws that criminalized everyone on the Maidan, the opposition leaders were at a loss and could not clearly explain what to do. Some of the protesters turned to violence against the police, and the police shot at the protesters, killing four and injuring hundreds, all the while declaring that they do not use firearms.

“Law enforcement” agents were also kidnapping the wounded from the hospitals. One of them, Ihor Lutsenko, survived and told of how he and another man, who was later found dead, were tortured and interrogated by people close to the police. Another victim of kidnapping, Dmytro Bulatov, is the organizer of the AutoMaidan, a car protest that started as rallies on wheels and has since been patrolling the streets of Kyiv in search of the paid thugs who cooperate with the police and attack protesters. Bulatov was tortured for eight days and then dumped in a forest near Kyiv.

The videos of police brutality and torture prove what happens to those who become their hostages. Arrested protesters are heavily beaten at police stations, while the wounded are seized from hospitals or on the way there. In official negotiations with the opposition, the government acts as a terrorist. It openly threatens that the courts can either set the arrested free or put them in prison for months and years, where they will be tortured and maybe even killed. The brutal officers of Berkut, special forces trained to suppress mass public activities, are often called “zvirstvo,” the beastly, by Ukrainians. For fun, the beastly take pictures with their victims. In a video that appeared on YouTube a few days ago, a young man was stripped naked in the snow and photographed by policemen while being beaten. On Facebook, shots from this video are often compared to pictures of Nazi actions in concentration camps. This comparison reflects the way in which the protests were radically politicized, following the logic of Carl Schmitt.

The Berkut and the protesters are absolute enemies to each other, and enemies must die: even if the enemy is your neighbor, relative, or just a suffering human being. The kidnapped Lutsenko described his interrogators as people who really believed in what they were doing. They believe that the protesters on the Maidan are paid to be there, taking orders from the “master” who designed the evil plan the interrogators are trying to uncover.

On the other side, there are many rumors that the Berkut on the streets of Kyiv consists of Russian special forces units — this is how people imagine the enemy that does not belong to “us.”

The struggle quickly spread beyond the Maidan and has already come to the houses of Berkut families: in the city of Kryvyi Rih, people destroyed the car and house of a member of the Berkut who is now serving in Kyiv. The Berkut and the police in general have lost their monopoly on violence. After brutal kidnappings and torture, no one believes in the right of the police to arrest. What people are doing now is trying to rescue those who have been and will be arrested: staying on watch in hospitals, protesting at police stations, and mobilizing advocates and parliamentarians to the courts.

From the beginning, the three opposition leaders — Arsenii Yatseniuk, Vitalii Klychko, and Oleh Tiahnybok — were uncertain in their actions and were gradually losing the support of the people on the Maidan. After the violence erupted in Kyiv on January 19, people outside the capital, first in western and later in central and eastern Ukraine, began occupying government buildings. The protesters were hardly led by the uncertain calls of the opposition leaders to create an alternative to the government. As in Kyiv, the large-scale public protests turned to radical action, such as occupying buildings and fighting with the police, when small radical groups took the initiative and started acting.

As the state structure apparently is crumbling, many fear a split, the threat of civil war, and Russian intervention to restore order, quite a familiar scenario. For many media outlets, the map showing the location of occupied buildings demonstrates the tired thesis of “two Ukraines” — a western one that is pro-EU and supports the opposition, and an eastern one that is pro-Russia and supports Yanukovych. Another version of the story is western nationalists or even fascists against a pro-Putin east. This is a simplistic vision that first of all universalizes language as the dividing factor and essentializes cultural differences, while ignoring other causes of political and social division and unity. Significantly, the first victim of police brutality was a young man from the Dnipropetrovsk region, one of the core areas of government support.

While many speak about the “civil society” being formed on the Maidan, what I see is not the emergence of this vague buzzword but the very powerful momentum of self-organization, which not only makes the Maidan function but also aims to replace state institutions of coercion. The Maidan’s kitchen and medical services are the best-known examples, but the AutoMaidan is more obvious evidence that the protesters can go further.

The government started bringing hundreds of paid thugs to the streets of Kyiv to beat up the protesters and create chaos in the city. Known as “titushky,” these men are unemployed youth or low-paid workers who are eager to earn 20-30 dollars and do not mind fighting or other illegal activity. In fact, they are able to direct their own frustrations with the country’s political and economic situation into physical action; sadly they do not realize (or do not care) that they are being played by the very people who bear responsibility for Ukraine’s plight.

The AutoMaidan has turned into a street patrol that aims to seize groups of titushky. For taking up the role of the discredited police, many of them have been severely beaten and arrested, their cars destroyed.

Another example of well-organized protesters is the soccer “hooligans” all over Ukraine and ultra-right organizations like Pravyi Sektor, who last week started fighting with the police in Kyiv. Their radicalism attracts many people who are not throwing Molotov cocktails themselves but are ready to back them up, considering violence against their “enemies” appropriate in this situation. There are calls for the next step: to free people from police stations and prisons by breaking into them and setting them on fire. Ultras from Donetsk, the hometown of Yanukovych, and other eastern regions unexpectedly also voiced support for the local Maidan in their city. These radical people are used to fighting the police, and now they are fighting on the side of the protesters against similar local groups of titushky.

Soccer hooligans were a strong organized force before the Maidan, but now they have become a driving force of violent protest, praised by the opposition leaders for their bravery. In the words of the famous Ukrainian writer Serhii Zhadan, ultras are fans of soccer clubs, not of the oligarchs who own them.

At the same time, a group of protesters, mainly associated with leftist movements, has organized a watch in Kyiv hospitals to prevent wounded protesters from being taken to police departments. Kyiv-Mohyla Academy, the oldest and most esteemed university in Ukraine, is on strike, and the university building has been turned into a hospital where the wounded are safe from the police. These are only some examples of the self-organization that shapes the picture of the Maidan.

Those who throw Molotov cocktails and stones at the Berkut and consider themselves “right,” and those who prevent the kidnapping of the wounded from hospitals and consider themselves “left,” all belong to a Maidan that is much bigger and more diverse than is often presented, especially in the Western media.

Listening to the endless chants “Slava Ukraïni!” — “Heroiam slava!” — “Smert’ voroham!” (“Glory to Ukraine,” “Glory to our heroes,” “Death to our enemies”), which originated in the nationalist struggle of the first half of the twentieth century, raises much concern about the direction of the protests. One of the three leading opposition parties is the nationalist Svoboda (Freedom). Its protesters toppled the statue of Lenin last December. Their symbols of the anti-Soviet guerrilla struggle are the most visible at the protests. The two other parties, Batkivshchyna (Motherland) and UDAR (Ukr. Democratic Alliance for Reform), are more liberally oriented, or rather comprise a mixture of rhetoric without a clear political program that can be defined as “right.”

People on the Maidan are disappointed with the three figures who are still fighting to be the solitary leader and are still in negotiations with the president, even after they have realized that all negotiations come to a deadlock. The two weeks of negotiations were used by the government as a mere decoration of democracy; their content demonstrated that the president and his government are not going to share their power with the opposition.

Now both sides are waiting. The protesters are unable to bridge the political power of the street and its political representation. Yanukovych controls the majority of the parliament; thus the fate of the new government and the amnesty laws is in his hands. But the Maidan on the streets of Kyiv and other cities keeps him in a deadlocked position, especially as he still has to address mounting economic problems and the danger of default.

The Maidan is the power of the people on the streets, separate from all political parties. Its ultimate power is that politicians do not control it. At the same time, people realize that without political leaders they cannot solve this crisis: hence they boo them yet still listen to their speeches.

Many see the Maidan as a version of the Occupy movement: self-organized, without hierarchy, without a clear political representative of the public demands. The people who come every day to the Maidan after work, who live there in the occupied buildings, who bring money and medication to the wounded, cannot be described as “right” or “left,” as, for better or worse, these categories are not relevant for the Maidan now, even if in many accounts the Maidan is more about nationalist Ukrainians who believe in “Ukraine above all,” one of the popular slogans, and sing the national anthem every hour. For other commentators, it is a revolution of the middle class that disregards the old East–West division. Yet most of the protesters are the poor, the majority in Ukraine.

From a Marxist perspective, no class can be found on the Maidan as a revolutionary subject, but neither is the nation the revolutionary subject, even if that is a popular nationalist claim. If we do not imagine the people in Ukraine as a class or a nation, then their struggle loses teleology and the conviction of inevitable victory. Different groups of people are gaining new knowledge of how to struggle for their rights, be it how to build barricades or how to protect themselves in solidarity. They may succeed, or not, but in either case they will have to start from the beginning over and over again. Like every revolution, it is an unfinished project, and its outcome in the coming days or months will satisfy those who struggle for change only for a while.

Once I commit myself to a new theoretical project, I start realizing how my reading can illuminate it. Sometimes this involves a concerted effort. Thus these days I am re-reading Georg Simmel with an intuition that he can be a key theoretical guide in understanding the social condition. But sometimes it is just a matter of reading something of general interest and realizing that it contributes to my project. Thus I thought of my exploration with Iddo Tavory of the unresolvable dilemmas built into the social fabric when I was reading Nachman Ben Yehuda’s book, Theocratic Democracy: The Social Construction of Religious and Secular Extremism.

Ben Yehuda, my old friend and colleague, studies in this book Jewish extremism in the Jewish state. He investigates deviance in the religious community as a way to analyze the conflict between the religious and the secular in Israel. Israel’s central religious and political commitments, as a matter of the identity of the national community, pose serious problems. Not only has the recognition of Israel as a Jewish democratic state become a key demand and obstacle in negotiations with the Palestinians; it has also become a problematic challenge to the relationships among Israeli Jews, all topics which Ben Yehuda explores. His central findings are presented in part 2 of Theocratic Democracy, on the deviance and non-conformity of the ultra-orthodox, and part 3, on cultural conflict in the media.

In part 2, a selection of “illustrative events and affairs” is presented, among many others: a 1958 affair surrounding the building of a swimming pool for mixed, male and female, bathing (in the ultra-orthodox, or Haredi, rendering, the “abomination pool”), and the 1981 ultra-orthodox attack on an archaeological dig at the City of David, near the Old City of Jerusalem, leading to a series of conflicts and ultimately resulting in the fine-tuning of the law of archaeology. During 1985-6, there were Haredi attacks on advertising posters. Further, there were attacks on movie theaters open on the Saturday Sabbath, and many other attacks against free secular activity understood as an abomination according to religious orthodoxy. Most heart-rending are the reports on controversies revolving around the question of who is a Jew (and therefore who has full citizenship rights in the Jewish state). Some of the tensions have had less to do with principle and more to do with raw politics and corruption: thus, the decade-long controversy concerning the Aryeh Deri scandal. I particularly liked Chapter 7, “Themes of Deviance and Unconventionality,” which presents media reports from 1948 to 1998 in alphabetical order, from “Archeological Excavations” to “Violence in the Family.” Using the alphabet demonstrates how broad and deep his selected examples go.

Ben Yehuda’s careful analysis of how these various events and affairs were reported differently in the secular and the religious mass media is especially important. He shows how tensions between subgroups in the society are perpetuated by how the groups perceive their connections and conflicts, and by how these are reported. Thus, for example:

In 1987 Yediot Aharonot reported:

A yeshiva student spat on a woman soldier because of an immodest dress and called her ‘slut.’ A police officer arrested the offender….About 30 other yeshiva students ….attacked the police officers in order to free the arrested yeshiva student. Police arrested 10 of them.

Hamodea’s version was that the yeshiva student was arrested because he ‘was badly offended when [he saw] that near the [Western Wall] a woman soldier …offended the holiness of the place in public. The yeshiva student was arrested when he expressed his protest.’

Agency and responsibility are reversed, confirming each side in its attitude towards the other. I am struck by how in this, and the many other media accounts Nachman reviews, the fundamental tension in the Democratic Jewish state is reinforced by the media, and this is not necessarily the result of bad will or tendentiousness (though it may sometimes be).

Many critics of Israel take from this situation proof that the secular Zionist project is fundamentally flawed, moving religious Zionists to emphasize the religious side of the Jewish state and secular critics toward post- or anti-Zionism (especially when considering the Palestinian problem). My friend is less radical, more moderate and modest in his appraisal.

Ben Yehuda believes that the conflict in “a theocratic democracy [by which he means ‘a democracy with strong theocratic colors in some areas’] … can be managed, mitigated and handled, but it cannot be ‘solved’ at a reasonable social price.” And that this implies “instability, never-ending negotiations, and chronic tensions, and requires politicians to have the dexterity and skills to keep such a political structure viable.”

As for the Israeli case, I remain perplexed. Considering that the fundamental tragedy includes the Palestinians, I think that the conflict may be beyond the dexterity and skills of any politician, and that the relationship between democracy and religion is truly complex. But as I read Theocratic Democracy, a difficult title naming a very difficult political situation, I think that my friend is exploring a very important general issue: his is a case study of the inherent tension between religion and politics, an important element of the social condition, and he is right that the only way to understand the social condition is by muddling through, theoretically and politically. There are no easy answers.

On the one hand, societies in general are based on common cultural commitments and understanding, and these are quite often religious, while on the other, the conflation of politics and religion makes an autonomous politics impossible.

Israelis struggle with this, as do many political communities. Tocqueville considered this social condition in the opening chapters of volume 2 of Democracy in America. It is a fundamental problem in the Muslim world, obviously in Egypt today. Perhaps the case of Turkey demonstrates, at least until recently, that political leaders with dexterity and skills can address it. But even in the U.S., which Tocqueville believed had resolved the religion-politics dilemma long ago, the conflicts persist: sometimes as comedy, as revealed in the season of the “war on Christmas,” but looming as tragedy, as Catholics, Jews and Muslims, among others, have at times been excluded by some from full citizenship in our society’s history.

I particularly appreciate that Nachman addresses an important case of the social condition revealing complexity, eschewing easy theoretical and political answers.

A version of this article was first published in Deliberately Considered.

We all know the problem of the Holy Roman Empire, which was neither holy, nor Roman, nor an empire. I have a similar worry about books like Theocratic Democracy as case studies of Israel. I haven’t read the book, to be sure; and I have no doubt that Israel is interesting as a case study of political theology. But the Jewish State is, in my judgment, neither democratic nor theocratic.

The case that Israel isn’t a democracy is the more obvious one (though frequently debated). I’ll skip this issue here. It would be more interesting to say why we shouldn’t refer to Israel as a theocracy.

If by “theocracy” we mean a state in which religious authority is important politically and even constitutionally, then Israel can be called a theocracy. But this usage would be too loose: given the nature of monotheistic religions and monotheistic authority, a state is helpfully understood as a theocracy only if its principal authority is religious. This doesn’t seem to be the case in Israel. (Famously, Zionism was originally a- or even anti-religious. And while religion is on the rise in Israel as throughout the world, religious authority hasn’t yet replaced Israel’s secular political institutions.)

Now, because Zionism came to understand Judaism as a culture rather than a religion, it is tempting to regard Israel as a non-neutral liberal state. On this view, shared by many, the Jewish State is Jewish in the same sense that Italy is Italian and Germany is German. If this is so, there’s certainly no point in talking of Israel as a theocracy; and, assuming we agree that liberalism doesn’t require cultural neutrality, there’s no problem at all in the concept of a Jewish liberal state. Unfortunately, even wise political philosophers such as Avishai Margalit and Moshe Halbertal have argued for this position, making it much too easy for us Israelis to pretend we can go down this path. (I hope to write something longer about this point on a different occasion.)

It is annoying to Israelis like myself to admit, but Jewish identity in Israel is defined first and foremost on the basis of race. It is both necessary and sufficient to have the right genes in order to be recognized as a Jew in Israel. Necessary: for if you observe Jewish Law — and/or follow and identify with Jewish culture — but don’t have the right genes, you will not be officially recognized as Jewish. Sufficient: for no matter what religion or culture you follow and identify with, if you do have the right genes, you would be officially recognized. Since Judaism is the main organizing principle of Israeli politics, the correct category for analyzing Israel qua a Jewish state is as an ethnocracy.

There are only two ways of making exceptions to genes, and these tell us much about the nature of the rule. One possibility is converting to Judaism by the strict measures of Jewish orthodoxy. Those are exceptionally harsh: the idea is that one can in principle become Jewish even without the genes, but only under rare situations. (Famously, the flexible standards of reform American Judaism are banned in Israel.) The other, much more “lax” way of becoming Jewish, is having more or less the right genes and joining the IDF, which recently initiated expedited conversion programs. (Usually, you’d enroll in such a program if you’re an Israeli Russian immigrant who’s been drafted into obligatory military service, but your Jewishness isn’t completely clear.)

The only way to speak of Israel as a theocracy, in this light, is to argue that religious authority decides exclusively about one’s race. But this is only partially true: the Jewish state does recognize your Jewishness also if your father is Jewish and your mother isn’t. For these and similar reasons it seems to me that in order to understand what it means, politically and sociologically, for Israel to be Jewish, we need to focus less on religion (and culture) and more on race. An Ethnocratic Democracy, then? That book still needs to be written.

Now that 2013 is over, it seems safe to say that the major event of last year in Brazil was the series of demonstrations that took place all over the country in June. What triggered the protests was a small rise in the cost of public transportation. On June 1st, fares increased by R$0.20 in the city of São Paulo. On June 13th, a group of university students was severely beaten by the military police on Avenida Paulista. Many journalists witnessed the beating. Most protesters were injured, and two journalists almost lost their eyes. The beating was broadcast on national television and across social networks. Brazilians were appalled by the police brutality in São Paulo, and thenceforth demonstrations spread throughout the nation.

Police violence has been common in Brazil for many years and has not been a big concern for most Brazilians. One has only to think of Captain Nascimento, an unorthodox police officer played by Wagner Moura in Elite Squad, the all-time biggest box-office hit in Brazilian cinema. In order to achieve justice, Captain Nascimento is not afraid to disrespect the law. He hits drug users, tortures favelados, and is fierce with his wife. His ferocious manner notwithstanding, Captain Nascimento was elected a national hero by Veja, Brazil’s most popular magazine. The majority of Brazilians seemed, therefore, to approve of police violence. Perhaps this is why Geraldo Alckmin, São Paulo’s governor, did not hesitate to demand a stronger police reaction against demonstrators on June 12th.

What Alckmin forgot, however, is that Captain Nascimento’s methods were adopted mainly against black Brazilians in poor districts. Students from São Paulo University and journalists were spared from that kind of treatment. When Brazil’s middle class saw on the news that people like them were being beaten by the police, they were not pleased. This time police violence was not okay. This time the middle class could actually relate to those who were suffering state violence. The same police violence that was condoned in favelas was condemned on Avenida Paulista. The way we reacted to the awful images taken on June 13th reveals how deeply entrenched social exclusion is in our country. Not all lives are of equal worth in Brazil, and some citizens are more worthy of protection than others.

Although they initially galvanized Brazilians into action, state violence and public transportation were not the only causes of the demonstrations. The police stopped being violent with demonstrators, and the bus-fare increases were revoked. Even so, the demonstrations continued. Suddenly, people started discussing politics on the streets. Until very recently, most Brazilians regarded politics as something alien, a boring play enacted by actors whose names they could barely remember, in a far-off city in the middle of the cerrado, viz., Brasília. To be sure, political alienation was also what took so many people to the streets last June.

What happened in June 2013 was a massive political initiation for many Brazilians. People discovered that political power is not solely the prerogative of bureaucrats and politicians. In a democracy, political power belongs to the dêmos, the people. Brazilians are newly aware of their power, and that is why June 2013 will happen again in June 2014, the month when the whole world will be looking at us because of the World Cup. According to every poll taken so far, the majority of Brazilians supported the protests last year. June 2013 indeed left us a great legacy. Congress rejected PEC 37, a bill that threatened the separation of powers in our constitution, and decided that petroleum royalties will be destined exclusively for public education and health. One has only to wait and see what political gifts June 2014 has in store for Brazil.

On February 28th, the Federal Council, Russia’s upper house, granted Vladimir Putin’s request to use military force in Ukraine. By that time, Russian troops stationed at the Black Sea Naval Base in Crimea had already left their garrisons and secured the area. Russian forces now effectively occupy the Crimea, which is a semi-autonomous and self-governing region of Ukraine with a majority ethnically Russian population.

In response, the U.K., France, the U.S. and Canada have announced that they are suspending their preparatory meetings for the G8 summit due to take place in Sochi this summer. On March 1st, the UN Security Council held an emergency meeting on the crisis in Ukraine. President Barack Obama has warned that Russia’s actions will have “costs.” As several academic and media sources have noted, Russia is potentially in violation of the 1994 Budapest Memorandum on Security Assurances, which guaranteed the territorial integrity and sovereignty of Ukraine in exchange for the country’s denuclearization. The official rationale for military intervention used by the Kremlin and repeated at the UNSC meeting by Russia’s ambassador to the UN, Vitaly Churkin – namely, to protect “Russian citizens and compatriots” in Ukraine through the deployment of troops “on the territory of Ukraine” not “against Ukraine” – is vague and not in line with international norms such as the responsibility to protect (the R2P doctrine). The problem is that there is no clear way to punish Russia for an incursion into Ukraine. An open military confrontation with European powers is unlikely, and Russia’s resource exports, like the natural gas it ships to Europe, make economic pressure ineffective in the short term. The prospects for a peaceful resolution that would leave Ukraine intact are grim. As President George W. Bush’s former deputy national security adviser, James F. Jeffrey, has said, “There is nothing we can do to save Ukraine at this point.”

Putin’s actions seem, from the perspective of the West, to be a clear opportunistic land grab of a region that, despite its long association with Russia, held a referendum in 1993 to join the newly independent country of Ukraine. Putin’s exact geopolitical motivations are unclear. He may hope to use the Crimea as a bargaining chip in negotiations with the new Ukrainian government or simply be taking advantage of post-revolutionary chaos to lay claim to a long desired peninsula. Whatever the reason, this action effectively wastes any international goodwill generated as a result of the Sochi Winter Olympics. It doesn’t look good from the outside, but what does it look like from the inside?

It is worth noting at the outset that not all Russians support war with Ukraine. This weekend saw anti-war protests in Moscow and other Russian cities. Russian Twitter users even developed a new hashtag, #НетВойны or #NoWar, which is being used by activists and politicians to rally support for Ukraine and the EuroMaidan movement. Nevertheless, there is considerable support for Putin’s actions among the Russian population. According to a February survey carried out by the Levada Center, a Moscow-based non-governmental research organization, only 16% of respondents sympathized with the protesters on the Maidan, while 36% were outraged by their actions. Furthermore, 43% of respondents characterized what was happening in Kiev as a “coup” against the elected government, and 45% thought that the protests were a result of “Western influence.”

Putin is playing to his domestic audience and the state-controlled media is helping him craft the message. As I have pointed out before, state-controlled media, especially television, is an important element in Russia’s competitive authoritarian regime. Although the Russian media, and its role in promoting regime candidates, is a favorite topic of discussion among academics and analysts during election time, it has recently gained attention for its coverage of the events taking place in Ukraine. So what are the Russian people hearing?

First, Russians are being told that ethnic Russians or Russian speakers in Crimea and Eastern Ukraine are fearful for their safety. Media reports have quoted a claim made by the deputy speaker of the Federal Council that as many as 143,000 refugees from Ukraine have arrived in the Belgorodsky region of Russia. Second, Russians are hearing that Russian language and culture are being attacked throughout Ukraine; the Russian language, spoken by many in Southeastern Ukraine, has lost its official status, and many statues of Lenin as well as monuments to WWII soldiers have been toppled or vandalized in cities around the country. Russians are also hearing that the citizens of Crimea and other regions in the Southeast are either welcoming Russian troops or calling for Russian intervention. Just today, the chief of the Ukrainian navy surrendered the Sevastopol headquarters after defecting to Crimea and siding with the pro-Russian leader there. Finally, and most importantly, Russian people are hearing not only that the new government in Kiev came to power as the result of a coup, but that the coup was supported by right-wing nationalists and fascists. Rhetoric about the threat of fascism appeals to the mythology that exists in Russia surrounding the liberation of Ukraine and WWII. In this context, the Crimea is especially symbolically important given that the port of Sevastopol was retaken from the Nazis in 1944 by the Soviet Army at great cost. Along with these messages, Russians are also hearing their top politicians proclaim: “Russians and Ukrainians are one nation.”

None of this justifies Putin’s actions. But it explains how the message about Ukraine is being manipulated for the domestic audience. It is worth noting that a similar framing occurred in the lead-up to the five-day armed confrontation with Georgia in 2008. More recently, Western concerns about the role of anti-gay laws at the Sochi Olympics were reframed in Russia as unfair criticism of a highly touted international event. Now warnings by the UN and by American, Canadian, British, and French officials about the crisis in Crimea are being similarly twisted by the Russian media into a message of European and Western aggression in an area of the world that Russia considers to be within its immediate sphere of influence. While what the West says about Ukraine certainly matters, for Putin, what Russians hear about Ukraine matters even more.

We should not be surprised by differences over how to respond to the Russian invasion of Ukraine. Understanding the reasons for those differences is one critical step toward formulating an effective response. Recognizing both real policy options and the equal importance of political signals is the second. Moving too fast is dangerous in the short run, but not moving at all is the most dangerous course in the long run. And inaction is precisely what Germany’s leadership promises.

We should not be surprised that the authorities of Germany, the Netherlands, France, Italy and Spain explicitly resist calls for trade sanctions. The leaderships of Austria and Hungary are likely with them. London seems more concerned with its financial prospects than with European well-being. Putin has been pursuing a policy of diplomatic divide and conquer within the EU, sweetened with economic deals powered by the energy business. Critical studies often explain corporate power and practice by analyzing interlocking directorates. It’s time progressives used the same methods to understand Russia’s post-Soviet imperialist strategy, and the willingness of European elites to buy into it.

Although Chancellor Merkel may report that Putin is out of touch with reality, Putin has constructed a business reality in which Germany, England, and others are deeply and increasingly implicated. And that reality finds expression in calls for more diplomacy, more fact-finding missions, more OSCE engagement in the face of Russia’s invasion of Ukraine. That is just what Putin wants. It gives him even more time to consolidate what has by now become a fait accompli. De facto if not de jure, Russia has Crimea. And Putin seeks more: a fully subordinated Ukraine, fractured into autonomous regions easier for imperial manipulation.

Germany and the like-minded are avoiding tough responses because they are living in and accepting Putin’s reality. That’s dangerous in the long run, for Putin’s reality is ultimately based on the rule of force, not the rule of law, on the convenience of the lie and not the search for the truth. Ukraine was trying to build something different.

EuroMaidan and its extensions rebuilt Ukrainian society. Although it had its political class, its methods were not unlike those of the Occupy movement. It was an alternative public, maybe even a “revolution in reverse,” to use David Graeber’s terminology. It tried to model in protest the kind of society it sought to establish for the nation. While it had its limits, it certainly fared well in comparison to the regime it eventually overthrew. While some EuroMaidan activists may have pulled down Lenin statues and thrown Molotov cocktails, the Yanukovych regime won any contest for brutality with its snipers and its torturers. That regime kidnapped hospital patients and assigned them to prison cells without health care. EuroMaidan was a revolution in the name of dignity and rights. It overthrew a dictator. It’s insulting to debate whether the new government is constitutional, for EuroMaidan made a revolution against Yanukovych’s intransigence and brutality. Only 1989 managed to square that legal-revolutionary circle.

Of course EuroMaidan also harbored those whose politics I detest.

We should analyze critically and diminish politically all those who seek to restore fascism’s appeal, whether in its crude anti-Semitism or its celebrations of almighty leaders. At the same time, we should not fall prey to those who use the invocations of Bandera and other World War II fighters by some of EuroMaidan’s activists to define the whole movement’s politics. Russia has been deploying its considerable political technology to demonize the leadership that came out of EuroMaidan as fascists, thugs, and nationalists, in part to disguise its own fascist behavior. After all, what can be more fascist than to use Hitler’s techniques to justify war?

On the day before German forces invaded Poland in 1939, Nazi operatives dressed in Polish uniforms staged an attack on the German radio station at Gleiwitz (today’s Gliwice). Poles and other East Europeans know this trick all too well, and see it in Putin’s forces today. Since the invasion of Crimea, Ukrainian soldiers have by and large resisted the impulse to fight back. With this kind of strategic non-violence, itself a legacy of the EuroMaidan revolution, Putin lost his justification for invasion. Instead, he relies on lies and provocations to get what he wants. He sends Russians to pose as Ukrainians to provoke clashes. He doctors digital media to imply mass oppression of Russian-speaking citizens. He creates the image of chaos so that he can rescue ethnic brethren. He denies that Russian-speaking Ukrainians might not want to live in a Russia defined by Putin’s reality. And Putin can rely on the deposed president, Viktor Yanukovych, to request the rescue of the Ukrainian nation from an unconstitutional takeover by the forces of EuroMaidan.

Who would want to be defended by such a lying and brutal regime?

It could not be clearer that the New Ukraine EuroMaidan promised is the kind of society the world wants as its partner, and the kind Ukrainians would prefer to a warfare-based state. Nor could it be clearer that the kind of order Putin wishes to install abroad, and imposes at home, is a risk to all. Too many invoke Munich 1938 as a parallel. While I see the justification for the parallel, I can’t justify the call to war. At the same time, I am glad Poland has brought NATO together.

Poland has requested a meeting of NATO ambassadors under Article 4 of the North Atlantic Treaty. This cannot, and should not, be read as preparation for NATO’s war with Russia. While some will point to NATO’s superiority in overall capacity, the alliance does not have sufficient solidarity and will to go to war. That’s good. But it does need sufficient coordination and commitment to use its capacity for war to deter further aggression. It also needs to be careful: brinksmanship could spark unwanted conflict.

Poland may be playing an incredibly smart hand here. Its allies in the European Union and NATO know that Poland and Lithuania have been the most aggressive in defending the New Ukraine. These countries also know Russian political technologies better, or, at least, have the least stomach for them. The NATO countries bordering Ukraine should invoke Article 4 to prepare for war in case Russia decides Crimea is not enough and has a hard time stopping. Ukraine’s NATO neighbors should also prepare for war in order to move those allies still mired in Putin’s reality toward more aggressive non-military actions.

It has been repeated time and again by commentators. Impose sanctions now. Focus on Russia’s ruling class, and not just the men with their hands on the triggers and on the gas meters. Yes, freeze their accounts in Western banks, but also deny them and their families the visas that enable them to travel to this decadent Europe they so disdain in their public pronouncements, and so love in their private moments.

Those sanctions would only reinforce the punishment global markets have already imposed on the Russian economy. Today the ruble fell to its lowest trading value against the dollar ever. Russia’s MICEX index lost more than 10% of its value in a single day. Gazprom stock took an even bigger hit. Those declines could be exacerbated through the rule of law. What would happen if an extensive and systematic investigation of money laundering took place across Europe, with a focus on Russia’s ruling class?

It is said that America should play the lead here. In the absence of leadership from Germany, France, and the United Kingdom, it must. And it should support Poland, Sweden, Lithuania, and the other parts of Europe that choose not to be defined by Putin’s reality.

Even as political authorities act and markets collapse, it’s time for the publics of Europe and beyond to show their solidarity with Ukrainians struggling to defend their nation from invasion, and with Russians struggling to save their nation from war’s destruction.

I admire those courageous Russians who dare protest Putin’s war. The demonstrations against this war in Russia are not overwhelming, but they are incredibly brave. They coordinate, in part, around a hashtag circulating in Russia: #нетвойне, “No to War.” There is also some pretty compelling people-to-people diplomacy going on. Ukrainian students communicate directly with Russian students in this YouTube video: “We ask you to tell your leaders not to kill us.”

I also admire those Ukrainians who are now prepared to defend their nation. I admire their bravery, but I also admire their savvy. I admire their social media publicizing all of Putin’s lies, and I admire their willingness to sign up to fight Putin’s aggression. But they cannot win by themselves. And I pray that they don’t have to fight any more.

One might hope that demonstrations will grow as the costs of this criminal aggression in Ukraine become more apparent to the Russian public. One might hope that Russia’s oligarchs will recognize the risk Putin’s reckless intervention poses to the entire Russian economy, and to their way of life, and do something about it. But all of that depends on real solidarity with the New Ukraine, with that society whose virtues were so evidently being born on EuroMaidan. It depends on the European Union and NATO finding a common voice in severe sanctions. We can’t risk war. But we should prepare for it. If Putin’s reality defines the world, we will have to wage it.

Completed: March 3, 2014 9:00 pm

Of the many important lessons the Egyptian people might take away from their 2014 constitutional referendum, three certainly stand out in stark relief: first, that the military owns the product of the plebiscite and must also own the political consequences; second, that no constitution or government will enjoy true legitimacy without a national reconciliation effort; and third, that the pathway out of Egypt’s transitional morass might in fact begin at the other end of the continent in South Africa.

When the government of ousted Egyptian president Mohammed Morsi sent its constitution to a public referendum in December 2012, it would have been a tall order to find a more emblematic case study in how not to establish a democratically legitimate national charter. In a desperate effort to jam through a constitution that would ensconce its role in governance, the Brotherhood made several strategic blunders that virtually ensured the showdown that led to Morsi’s ouster: using strong-arm majoritarian tactics to dominate the process; engaging in an ugly legal and constitutional battle with the other powerful government institutions; and rigging the constitution to entrench Islamist control over the political system.

But as in so many other instances over the past year, the Egyptian military has once again trounced the Brotherhood. Not to be outdone in sheer ignorance of, or contempt for, proper democratic process, the Supreme Council of the Armed Forces (SCAF) has taken the Morsi formula for flawed constitution making to new lows. Far from a democratically legitimate charter, the generals have given the Egyptian people a document written by unelected representatives, stained with the blood of the political opposition, and finally rubber-stamped by a mere fraction of the public in a plebiscite that could hardly be termed free or fair.

The junta’s epic transitional fumble should come as no surprise. The SCAF has remained the country’s primary power broker and ruler-of-last-resort since the start of the revolution, and good governance has eluded it from the start. The first SCAF misfire was its constitutional declaration of March 2011. Intended as a sort of interim constitution to facilitate governance prior to parliamentary elections, the declaration was effectively the original sin from which so many transitional disasters (including the Brotherhood’s moment of misrule) have ensued. Had it had the nation’s best interest in mind, the SCAF might have listened to a broad array of political and revolutionary actors and instituted a pluralistic and transparent national dialogue to hash out an interim constitution and a set of principles for an eventual permanent charter. Coalitions of such actors, including the so-called Egyptian National Council, had in fact made attempts to institute such a process. Under the leadership of figures such as Tahany El Gebaly, a Supreme Constitutional Court justice, the short-lived National Council produced a comprehensive document containing 30 constitutional principles and a list of 21 “basic rights and freedoms” to be protected in addition to those already outlined in the 1971 constitution.

Despite its strengths, the National Council effort was hobbled in the end by the lack of official engagement from the Muslim Brotherhood or the Supreme Council of the Armed Forces. Given its organizational superiority, the former preferred to hold elections before engaging in constitution making, whereas the generals appeared perfectly content to do what is typical of post-revolutionary power holders and rule by decree. Thus, far from a roadmap for democratic change, the SCAF’s “constitutional declaration” of March 2011 was effectively a resurrected version of the suspended Mubarak-era constitution, gussied up with a few amendments that had been crafted in a closed technocratic committee and rubber-stamped in a plebiscite.

Minus a consensual constitutional roadmap, it is hardly surprising that the Brotherhood and their Islamist allies pursued a winner-takes-all, majoritarian approach to constitution writing following their big wins in the winter 2012 parliamentary elections. The Islamist domination of both the selection of the Constituent Assembly and its proceedings was a direct consequence of the failure of the military, as steward of the revolution, to establish a transitional process that reflected the revolution’s pluralist values. Polarization and tensions only deepened in the ensuing months as competing powers engaged in an escalating series of reprisals and counter-reprisals, beginning with the disbanding of the Islamist parliament through court orders just prior to the election of president Mohammed Morsi. In an effort to shield his office and his Constituent Assembly from further attacks, Morsi drastically overcorrected, granting himself sweeping powers and touching off the “Tamarod” protest movement that eventually culminated in his July 2013 ouster.

Thus instead of providing a foundation for national unity, Egypt’s first round of constitution-making became a Rubicon of national division and sectarian hostility. Here a wise leadership might have seized on Morsi’s ouster as an opportunity to change course and pursue a path of national reconciliation, but not the SCAF. Instead, the generals and their interim government decided to double down on the old recipe for disaster and wage open war on the Brotherhood with bullets, draconian decrees, and constitutional clauses. Regardless of one’s attitude toward the Brotherhood and its politics, its deep roots in Egyptian society and major electoral support remain undeniable political realities. Accordingly, the military’s efforts to ban the movement as a terrorist threat and to establish a democratically legitimated constitution without Islamist support are nothing short of folly. Indeed, given that the democratic bona fides of the new Egyptian constitution are arguably more dubious than those of the Brotherhood’s charter, and given that it was approved with only 39 percent of the electorate casting ballots, it is likely that a future Egyptian government will have to contend with another major legitimation crisis like the one that brought down Morsi. Should that occur, or should Egyptian authorities try to prevent it, a drastically different transitional approach is needed, and that’s where the South African model comes in.

Whereas the Egyptian experience has been panned as “a case study in how not to initiate a constitution-writing process,” the South African process has been widely lauded as an international democratic exemplar. Key to its success was its structure, as constitutional scholar Andrew Arato has illustrated:

This process was composed of two major stages, with a democratic election between them, involving the making of two constitutions, and with a multi-party negotiating forum or roundtable, a constitutional assembly and a constitutional court as its three main institutional agents.

This South African approach, which Arato refers to as the “Round Table” model, has indeed become so paradigmatic that it has now seemingly become a de facto normative framework according to which all major democratic transitions are expected to unfold. Yet, as Arato also points out, the Round Table’s usefulness in South Africa and elsewhere such as post-communist Hungary was also largely a function of very specific political circumstances that forestalled the alternative transitional approaches of revolution or reform. In the South African case, for example, the Apartheid government lacked the necessary legitimacy to enact successful reforms, whereas Mandela’s African National Congress and its allies lacked the ability to seize state power by revolutionary force. Inclusive, protracted, multi-party negotiations were thus the only viable path toward a stable democratic state.

Given that a revolutionary uprising was apparently a viable option for democratic forces in Egypt, one might object that the Round Table approach is not applicable there. Yet, to reference Arato once more, the overarching virtue of the Round Table may be its usefulness as an effective tool for addressing major legitimacy deficits such as the one facing Egypt. Moreover, one might argue that despite the critical role it has played in facilitating two revolutionary coups, the SCAF nevertheless remains a powerful remnant of the old order standing in the way of a new one. Seen in this light, the political situation in Egypt becomes not unlike that which necessitated the use of the Round Table model in South Africa. On the one hand you have an establishment power (the military) that is seemingly incapable of advancing a reform process that could enjoy broad democratic legitimacy; on the other hand you have opposition forces that are incapable of achieving a revolutionary overthrow of that establishment power. In other words, Egypt’s revolutionary actors will likely have available neither the possibility of using force to seize state power directly from the military nor the ability to control the transition process on their own.

Of course, given the passage of the military’s constitution, another major legitimation crisis would be required at this point before we could see the implementation of the Round Table model in a complete form. However, new elections are right around the corner, and should the next government come to appreciate the necessity of taking steps to shore up its constitutional legitimacy, it might seek to apply aspects of the Round Table process in the following way:

  • Convene a multi-party forum to facilitate sectarian reconciliation and foster a national dialogue on constitutional reform. Inclusivity and pluralism would require unbanning the Brotherhood and bringing its members to the table alongside representatives of the SCAF, the judiciary, and a broad array of progressive opposition-party and civil-society delegates. The importance of adequate representation by progressive voices in the constitutional revision process is underscored by Zaid Al-Ali, who argues that the progressives are the only political force with a “convincing vision of the future,” and the only force that has not yet had a crack at drafting a constitution.
  • Arrange a public constitutional outreach and listening campaign to solicit broad-based feedback on the amendments and ensure that the outcome reflects the interests of all Egyptians. This type of effort was a key innovation employed in South Africa to ensure direct public consultation on the draft constitution, and is far more democratic than a mere plebiscite.
  • Produce a set of amendments to the 2014 constitution based on the roundtable negotiations and public-listening tour. The amendments should be approved by something like a two-thirds majority of the forum members and then sent to a public referendum in accordance with the military constitution’s amendment provisions.

Now, considering the powers afforded to the military in the new constitution, its obvious lack of interest in accepting curbs on that power, and the fact that key political groups such as Tamarod and the National Salvation Front back military strongman General Abdel Fattah al-Sisi as a presidential candidate, it might be overly optimistic to imagine a new government proactively initiating such a process. As such, the Egyptian people may need to take matters into their own hands yet again with another round of mass protests. Except this time, the protesters would need to make the institution of a Round Table transitional forum their primary demand, and they would need to direct that demand at the military rather than the regime du jour.

It is perhaps a tall order given the authorities’ increasingly harsh crackdowns on dissent, but if there is one thing the Egyptian people have taught us in the last few years, it is that we should never underestimate their courage and extraordinary power to effect change.

This article was first published in Truthout.

Benjamin Netanyahu often speaks of terrorism. He built his career on the unfounded claim that he’s a terrorism expert (his book Terrorism: How the West Can Win is, in fact, composed of articles by other experts), and he makes sure to speak of terrorism over and over again. After “Iran” and “Holocaust,” “terrorism” is probably the most frequently used term in Netanyahu’s vocabulary.

Yet, when speaking of terrorism, Netanyahu makes sure to address only the Palestinian and Muslim varieties. In his vocabulary, there’s no Jewish terrorism. The rest of the world has by now understood that this isn’t quite accurate, and the American State Department recently made waves by reporting on Jewish terrorism (mistakenly identified as “Price Tag” activities — but more on this below) as equivalent to Palestinian terrorism. According to the State Department, 399 incidents of Jewish terrorism were registered in 2013 alone. But these are not all the incidents; these are only those known to the UN and some civil rights organizations. Most incidents actually go unreported, because Palestinians are by now weary of reporting them.

For the sake of comparison, the international anti-Semitism report, which was published last week, found that all the tens of millions of anti-Semites in the world — in any event, we are told that this is the number — managed to organize only 594 anti-Semitic incidents. France, which leads the reports of such incidents, was especially striking, with 116 incidents. That is, in 2013 the Israeli settlers and their helpers managed to produce two-thirds of the number of hate crimes that anti-Semites worldwide produced, and more than three times as many as the anti-Semitic crimes committed in France.

Which, you must admit, is pretty impressive. It puts the situation of the Palestinians in perspective, compared to, say, that of France’s Jews. And yes: Israeli violence against Palestinians is terrorism. It is common among settlers and their helpers to argue that Price Tag activities are nothing more than graffiti writing. This is bullshit. The graffiti attracts more attention in the media, but the acts we’re talking about are in most cases the setting of Palestinian property on fire. If you set fire to a cross in the backyard of a Black family in the U.S., that is terrorism. The graffiti itself is also terrorism, for it sends a message: we came here at night, and we sneaked out. Tonight, we didn’t set your house on fire while you were sleeping. But you shouldn’t count on being so lucky next time.

Price Tag activities attract the media’s attention both in Israel and worldwide, but most acts of Jewish terrorism are not Price Tag activities at all. The latter usually take place after the rare occasions on which the IDF takes this or that action against the settlers, and they are intended to terrorize not so much the Palestinians as the Israeli soldiers. Their message is: if you (the soldiers) go on like this, we will set the whole area on fire, and the military will have to pay for the occupation above the normal price. In contrast to Price Tag activities, most Jewish terrorist activities do not send their message to the military. They are intended to terrorize the Palestinians — to keep the Palestinians off their territory. One obvious case is that of the outpost settlement Adei Ad: its settlers have managed to produce 96 criminal incidents on record, 21 of them involving actual physical violence against Palestinians; the rest involved harm to property. The result is that the Palestinian villagers who had been living around this outpost settlement have abandoned their homes in large numbers. The settlers, in turn, have taken over more and more of the land. Who said that terrorism doesn’t succeed?

One reason why this day-to-day terrorism isn’t reported in the media is that it is carried out with the full support of IDF soldiers. The story of an IDF soldier who interrupted the lynching of two Palestinian civilians by two settlers from Yitzhar, only to tell the attackers “that’s enough,” hasn’t made it to the Israeli media, but such occurrences are much more common than Price Tag activities. Attacks like this happen almost on a daily basis and, when our Judeo-Nazis are especially angry, even more frequently. IDF soldiers protect these terrorists; they do not arrest them.

And what does Netanyahu’s government do about this? Nothing. Sometimes they’ll mention that Price Tag activities must be condemned; but the settlers’ daily violence serves this government. Israel has already accustomed the world to the fact that it’s going to take over the larger “settlement clusters,” but Israel’s main aim is still — as it has been for generations — to annex as much as possible of Area C (i.e., those territories in the West Bank that remain under Israel’s exclusive administrative and military control). The thousands of acres in the West Bank whose nationalization and annexation the Israeli Defense Minister has just announced will almost certainly turn into settlement lands.

In other words, this government needs the settlers’ terrorism in order to expand its future hold on this territory: every territory the Palestinians give up on is taken over by the settlers. The American State Department speaks explicitly of terrorism in this regard, and it names the terrorist organizations. Among them, it identifies HaKol HaYehudi (The Jewish Voice) as a terror organization. The operators of The Jewish Voice are known: two of them were recently charged with incitement, but not with terrorism. The State Department likens The Jewish Voice to Kahane Chai, a group considered a terrorist organization in the U.S.; in Israel it was labeled a terrorist group in 1994, after the Goldstein massacre, and it was considered as such until after Rabin’s assassination.

In other words, until Netanyahu came to power. Afterwards, interest in this organization faded. Its old activists, who were held for a while in administrative detention, have continued to operate in other organizations under different names, while Israel’s government has decided to ignore them. But not everybody has done so: the U.S. denied a visa to Michael Ben-Ari, who was elected as a Member of Knesset for the Ihud Leumi party, because of his involvement with Kahanist activities. The U.S. hasn’t forgotten. Netanyahu’s Israel has.

A few months ago, the Israeli defense establishment recommended defining Price Tag activities as terrorism. Netanyahu, in an unprecedented move, rejected the recommendation and replaced it with empty words: he defined the offending organizations as “an unlawful alliance.” This was sheer bullshit: there is no such alliance; there are no properties that can be confiscated; there is no “pyramid structure” of these organizations that one could attack.

There is, however, an exception. There is an illegal union one could act against: the village council of the extremist settlement outpost Yitzhar. I’ve written in the past about this council’s decision not to disclose to the police the identity of Yitzhar’s pogromists, and about the fact that the police did not particularly care. That, in turn, emboldened Yitzhar’s people: their council is now going to vote on the question of whether settlers should be allowed to hurt IDF soldiers and/or Israeli police officers. Let’s run the usual thought experiment and try to imagine what would happen if, say, members of the city council of Nazareth (the Arab city in Israel) suggested a vote in which some of them supported hurting Israeli security officers, while others explained why this could actually be a good idea. All of them would be arrested in their sleep, in a police raid widely covered in the Israeli media. To the Yitzhar village council, needless to say, this isn’t going to happen.

Why did Netanyahu refuse to denounce Jewish terrorism as terrorism? Prime Minister Sharon changed the discourse in a moment when he spoke of the traitor Eden Natan-Zada as a Jewish terrorist. Why can’t Netanyahu pronounce the words “Jewish terrorist”? First, there’s the official excuse: if Israel admitted that there are Israeli terrorists, this could harm its international image.

But this excuse doesn’t work any longer. As we saw, the State Department just announced the existence of Jewish terrorists and, outside of Israel, this seems obvious. So again, why can’t Netanyahu recognize their existence?

Here one must say something very clearly. When I speak of recognizing Jewish terrorism as terrorism, I do not at all mean to suggest that the Shin Bet should receive extra authority in order to treat the settlers as if they were Palestinians. I do not support administrative detention (of anybody), or the denial of legal defense. (Just last Wednesday, a couple in Yitzhar was arrested under such a procedure because they were suspected of lending their car to participants in Price Tag activities. They were held for a relatively short while, but this criminal practice, in which the state makes people disappear and holds them without granting them basic rights, is expanding.)

As I’ve written repeatedly, the solution is not oppression in the name of “security.” What is needed is better investigation and more professional investigators. The ability to hold people in detention — while threatening their children — as reportedly happened to the couple from Yitzhar this last weekend, doesn’t lead to better intelligence. It can at most produce more confessions, but these are of necessity without value. I’m not even speaking of the inhuman threats that the police reportedly made against the woman: threats that her children would be taken away.

What needs to be done, then, isn’t an increase in administrative detentions, torture of children (yes, yes: torture of children), or any of these crimes that our security forces constantly employ in the West Bank. What needs to happen is something else: security officers must begin to realize that a large portion of settlers are terrorists. They must realize that when they see a Jewish terrorist attacking a Palestinian, their job isn’t to protect the former; their job isn’t to tell him “this is enough,” but rather to arrest him — and, if he is an immediate threat to human life, also to shoot him. Police investigators must realize this. They will have to learn how one investigates crimes against Palestinians and they will have to treat such crimes as political crimes — which is to say, as terrorism. At the moment, their success rate in investigating such crimes is virtually zero. Moreover, the courts, too, will have to realize that this is terrorism. In order for all this to happen, Israel’s leaders must use the prominent stage that they have (what people call in English the “bully pulpit”) and lead a true public campaign.

But, of course, this won’t happen. Netanyahu’s problem isn’t fear of damaging the hasbara public relations campaign. Obviously, a serious attempt by Israel’s government to fight Jewish terrorism would help the hasbara rather than damage it. If, in four months, Israel’s police and Shin Bet were able to announce proudly that they had arrested (say) 30 Jewish terrorists, and that they had gathered enough evidence to charge them with terrorism, this would prove to the world that Israel is serious about stopping Jewish terrorism.

Why, then, will this not happen? Because in the end, Netanyahu is a supporter of Jewish terrorism. Because in the end, he is surrounded by settlers and other religious Zionists who don’t understand why Jewish terrorism is a problem. Because in the end, he believes in the settlers’ vocation, and he has no interest in the rights of the Palestinians. Because in the end, the people who are responsible for Jewish terrorism — whether by financing and organizing it, such as Beni Katzover and Gershon Mesika, or by preaching in its favor, like Dov Lior and Gintzburg — are connected to Netanyahu’s inner circle. A serious Shin Bet investigation into the roots of Jewish terrorism could end pretty badly for several of the people who are constant guests in Netanyahu’s house. Therefore, such an investigation will never take place. This is the place to mention, by the way, that the settlers have put much pressure on Netanyahu to choose a Shin Bet head who will suit them. Just as Netanyahu chose a Government Legal Counsel who would not make too many waves — and a State Comptroller who knows on whose good side he should remain — so he chose a Shin Bet head who knows what his boss doesn’t want to hear.

And therefore, despite the constant background noises about Yitzhar and Price Tag activities, the settlers’ terrorism will continue as usual.

Translated from a post on the Israeli blog “Ha’Haverim Shel George” (“Friends of George”).

The Black Notebooks (Die Schwarzen Hefte), containing Martin Heidegger’s assorted thoughts from the 1930s and 40s, throw new light on the self-aggrandizement into totalitarianism of the most German of all philosophers.

The Freiburg professor of philosophy was not yet 50 years old when, in 1937 and 1938, he retraced his way of thought (Denkweg): he collated the manuscripts of his various books, talks and lectures in a factual (sachlich) and discerning manner, with a view to ascertaining how all of it should be continued, including a publication strategy. Buoyed by the feeling that he had already achieved the “authentic” breakthrough by 1936, as he wrote to his brother Fritz in 1948, he was henceforward convinced of his ability to lead Western philosophy into a form of “thinking” purified by a history of being and event (or enowning) (seins- und ereignisgeschichtlich geläutert) and thus freed of the ballast of history and its decadent offshoot, historicity.

Heidegger’s constant and revisionist revisiting of his 1927 magnum opus Being and Time was motivated by a single question. This was the question of historical being (Seyn) and, therein contained, how this historical being, which even superficially was set off from the being (Sein) of the philosophical tradition, could be thought of in its complex relationship to what is (zum Seienden). Based on the legacy of a select few pre-Socratics, this necessary “different beginning” was the result of an exclusive insight. Heidegger saw how modernity concentrated the entirety of its efforts toward calculative thinking. This was an error rooted in Plato and Aristotle, culminating in Descartes and carried on by Hegel and Nietzsche.

Published under the title Überlegungen (meditations), a title which Heidegger himself chose, the “notebooks” have a precisely assigned place within these constellations: “What is documented in these notebooks […] partly is a rendering of the basic moods (Stimmungen) of enquiry and of pointers (Weisungen) to the most distant horizons of attempted thinking. While these thoughts have seemingly emerged according to the moment (Augenblick), they all bear the mark of incessant efforts to address the only question.”

The Überlegungen are first referenced in a lecture manuscript from the summer semester of 1932. Alongside the convoluted work “officially entitled Contributions to Philosophy (Beiträge zur Philosophie) and essentially entitled Of the Event or From Enowning (Vom Ereignis),” the Überlegungen are part of this project. They clarify, explain, specify, give life guidance and carry through what is otherwise only suggested. The Contributions were published in 1989, which means that we have known of the Überlegungen for 25 years. But only now is it actually possible to assess their relevance to Heidegger’s thought.

Spanning the years from 1931 to 1941 and from 1945 until the early 1970s, certain “notebooks” were at times kept simultaneously. The publication of the first three volumes (of no more than 37) of the oilcloth-bound black notebooks already lends the Gesamtausgabe (complete works) definite shape. Clearly, this neither means that the hitherto extant 30,000 pages can only be understood in the light of the Überlegungen nor that they represent the pinnacle of his achievement. And yet, it is through a coupling of the Überlegungen (specifically, the volumes spanning the years 1938 to 1941) with the extensive studies undertaken during the 1930s and 40s that the core of Heidegger’s movement of thought can be exposed.

Why is this? Just like the contemporaneous works, the Überlegungen follow the idea of the “leading question” (Leitfrage), i.e. the question of being, while the “fundamental question” (Grund-Frage) enquires into the “truth” of “historical being (Seyn).” With a view not only to getting a decent grasp of the demarcation between the new way of thinking and that preceding it, but also to pushing his own project forward, Heidegger always formulates his insights in two ways: on the one hand, with critical intent, and on the other, with a constructively and radically different form and language.

When he writes a text on the so-called “event,” reference to the Überlegungen is meant to keep the reader from feeling that he or she has already understood it all. Thus, in explaining the “event,” he writes: “The highest thing that must be possible to say must become an extreme silencing. Silencing [or reticence] authentically as silence bearing. But is the logic of silencing not the betrayal of all and nothingness? Certainly if, like logic, it were hitherto ‘read’ and obeyed.” According to his precisely calculated system of references, in a simultaneous upward and downward movement, the Überlegungen are to make it possible to abandon the old rails and ride new ones.

That is not merely incomprehensible at first glance; it is the way it is supposed to be. To enter the Heideggerian hall of mirrors is to be confronted with a decision: either you run out, because you take it all to be nonsense anyway — and some rather bright people rank among these — or you run with Heidegger through the labyrinth of his own making, maybe reaching the “clearing” (Lichtung), were “historical being” (Seyn) to show itself once more after its last appearance to Hölderlin. Then you can judge whether “thinking” could actually lead the way out of the ever-worsening disaster of our modern wrongdoings. It all sounds like a privatissimum, like a fantastical thought exercise, like a construct designed to keep others from being able to follow. And the few who nonetheless do are constantly served new obstacles, thereby coming to the conclusion that, in the best case, the goal lies along the path. But in no way can we leave things at that — Heidegger was always very serious. And we should be no less so.

From the very outset, resentment, deep insights, and uniquely fascinating analyses about the philosophical tradition are coupled with a tendency toward cultural criticism and almost impenetrable rhetoric. When, in December of 1929, Heidegger was ranked third in the race to occupy Ernst Troeltsch’s vacant chair in Berlin, the appointments commission drew a similar conclusion: “Martin Heidegger’s name has of late been on everyone’s lips. Even if the scientific merits of his literary achievements so far have been a matter of controversy, it is certain that he has original thoughts and, above all, that he has a strong and magnetic personality. At the same time, even his admirers admit that, among the many students who flock to him, no one actually understands him. He is currently in a crisis. We must await its outcome.”

The first volume of the Überlegungen does a better job of documenting how Heidegger hopes to solve the “crisis” than anything previously extant. There, he follows a three-pronged strategy: a re-appropriation of undistorted pre-Socratic thought; an ingenious analysis of the levels of philosophy’s self-mistaking (Selbstverfehlung); and, finally, the alignment of his own radicalism with political and cultural phenomena outside the academy, something we might call reality (Wirklichkeit).

Heidegger himself felt the “crisis.” For him, it was clear that its solution could only be found in the mode of a struggle against both oneself and external enemies. Moments of despair (Verzweiflung) had to be overcome; half-baked political alternatives, such as the extremely conservative Die Tat circle, were in his opinion to be rejected; and rival interpreters had to be banished from the field. It then became imperative to find a new language, one available from 1936 onwards. Given the task at hand, a blend of clear-sighted energy, hubris, and despair runs through every page of the Überlegungen.

In 1939, Heidegger takes stock of the many years dedicated exclusively to the clarification of his own project: “Thinking in purely ‘metaphysical’ terms, from 1930 until 1934 I took National Socialism to be the possibility for a transition and interpreted it as such. This was an underestimation of the ‘movement’ not only in terms of its authentic powers and inner necessities, but also in terms of its own estimation of size and type of size.” Based on his insights into the “hominization (Vermenschung) of mankind in self-certain rationality,” the “historical-technical” and the “complete mobilization,” what follows is this: “Only from the full recognition (Einsicht) of my previous error regarding National Socialism’s essence and historical force of essence (geschichtliche Wesenskraft) does the necessity of its affirmation follow and this is so on the basis of thinking (aus denkerischen Gründen).”

At this point, from around 1938 or 1939 onwards, Heidegger sees himself confronted with what he terms the “Jewry” (Judentum). He conceives of it exclusively in the mode of the National Socialist collective singular form, which he thought through “in the manner of thinking” (denkerisch). These collective singulars do not satisfy their own expectations and therefore also fall prey to the final battle of modernity. Alongside the Jewry stand Americanism, Bolshevism, and National Socialism. It is a value judgment understood in terms of the history of being and enowning, in contrast to the account given by Peter Trawny in his otherwise commendable book. Sticking closely to the motto that “‘radicalism’ is in true (echt) essence the preservation (Bewahrung) of the origin,” those who represent the “eternal race (Volk)” — in his opinion, this is a competition between Germans and “Jews” — must be sent off (verwiesen) into the confines (Schranken) of the history of being and event. Their “temporary increase in power” was made possible because “Western metaphysics” weakened itself, thus offering an “anchor point” to the “spread of an otherwise empty rationality and calculative faculty (Rechenfähigkeit).”

It is striking that the “Jew ‘Freud’” should be just as bad as the National Socialist psychologists. Jewish or non-Jewish, they are Cartesians. But once the final battle has been fought, the collective singulars and their carriers will vanish, letting “concealed Germanness” reveal itself. Before that happens, you’d better watch out. After all, “thievery (Räuberwesen) and banditry can take on various guises.”

By 1941, Bolshevists, such as the “Jew Litvinov,” have been sent off to the confines: “The onset of the war against Bolshevism has finally rid many Germans of the burden they felt as a result of our purportedly all too close ties to Russia.” But only later generations would be able to appreciate the proper relevance of the “document,” i.e. Hitler’s speech of June 22, 1941. It is therefore no wonder that the notes end with a remark on Jewry: this is not a “racialist” (rassische) but a “metaphysical question regarding a type of humanity (Menschentümlichkeit) that can absolutely freely take on the world-historical ‘task’ of uprooting all that is (alles Seienden) from being.”

Attempts to defend these remarks have been made in two ways. Some point to the fact that Heidegger kept these notes secret during National Socialism. (In fact, Fritz Heidegger gave excerpts of the Überlegungen to an American researcher in 1978, and since 1999 they have been available for viewing in an archive.)

But this only sharpens the question of why they were published at all. The Black Notebooks, as Heidegger called them only from the 1970s onwards, were from the outset considered a constitutive part of his entire opus. They were to expand the horizon of what had already been published. If they were marginal, they should have appeared with the lectures, which, after all, only served to prepare for the essential. Clearly, Heidegger thought that the idea of a final battle of modernity, i.e. the translation of what was occurring in reality into the language pertaining to the thinking of historical being and event, would be sufficient to immunize his Überlegungen against the real National Socialists and their actions. But quite the contrary is true: his making particular reference to unalterable occurrences that let the raging of “machinations” seem a cathartic spectacle (Schauspiel) linked the project to the events of the time (Zeitläuften).

According to the second strategy of defense, Heidegger later distanced himself, as shown in the revision of the “other beginning.” That is extremely difficult to address. For instance, there is a long elegy to the dead German soldier in the volume entitled Event-Thinking (Ereignis-Denken). Its stern tone makes one shudder. Then there is his silence regarding the Shoah. One might ask whether what was said here about the “silence-bearing” (Er-Schweigung) might also apply to the Holocaust; that silence could be the expression of a higher form of recognition for what happened. From what we know today, the question can only be answered in the negative.

The various French and German positions on this issue remain all too determined by reflexes of yesteryear: “kill him off” (erledigen) or “undying loyalty” — incriminating quotes on the one side, reference to the significance of the “whole” Heidegger on the other. Things cannot be left at that. Not only given Heidegger’s history of reception, not merely because this is a German question of principle — rather, it is because of his self-aggrandizement into a totalitarianism of thinking that we are obligated to scrutinize his Überlegungen most precisely. The first round of scrutiny cannot but conclude that Heidegger weakened thinking decisively.

First published in Süddeutsche Zeitung and translated by Philip Schauss for Public Seminar from the original German.

By my title, “The War on Fascism,” I do not mean the war between the US, the Soviet Union and Great Britain, on the one hand, and Nazi Germany, Mussolini’s Italy and imperial Japan on the other, the war that took place between 1939 and 1945. Rather I mean an unspoken war on the concept of fascism that increasingly characterizes our understanding of World War Two and informs discussion of contemporary problems, such as Ukraine. Although the term “fascism” is still in use today, it generally refers to real or supposed dictatorships, such as those of Saddam Hussein or Vladimir Putin, and has lost its original connotation, that of an authoritarian but still capitalist state. Because the original meaning of “fascism” was aimed not at dictatorship, but at the relation between dictatorship and private property and market power, the term had a critical or self-reflective character. Understanding the loss of this character can help us understand the history by which present political discussions, for example those concerning Putin, have become impoverished.

The concept of fascism was originally not a critical one. Rather, when the concept originated in Mussolini’s Italy, it had a positive connotation: it meant the resurgence of authority, the strengthening of the state, or the creation of a unified national will. After the emergence of Nazism in the late twenties, however, the concept took on its negative connotations. During World War Two the “Popular Front” — the alliance between the Soviet Union, Britain and the United States against Italy, Germany and Japan — was defined in terms of “the defeat of fascism.” However, fascism was thought to exist not only in Italy and Germany but also as a tendency in our own societies, taking such forms as racism, anti-Semitism and the suppression of civil liberties. Whereas World War One had unfolded without any real historical explanation among the masses of people, World War Two’s “struggle against fascism” was widely understood to reflect a long-term crisis of capitalism, which had surfaced in the thirties’ Depression, and which had empowered dictators like Hitler and Mussolini. The Americans, British and Free French, then, fought World War Two not only to defeat fascism but also to reform and improve their own societies, in other words to complete and extend the pre-war New Deal and its European counterparts. Even in the Soviet Union many Bolsheviks expected that the end of the war would produce greater freedoms and an end to the purges and gulags, most of which dated to the late twenties and thirties as well. Even after World War Two, the concept of fascism was used to describe the repression of free speech in America, the distortions of the public sphere, or the increasing American propensity toward military solutions, especially in Vietnam. By that point the term may well have lost its meaning, but it still recalled a time when the struggle against dictatorship and the struggle to reform capitalism were seen as compatible or mutually reinforcing struggles.

After the defeat of Hitler and Mussolini, Americans faced the problem of redefining their relations with the Soviet Union. On the one hand, the Soviet Union had lost nearly thirty million of its citizens in defeating Nazism. On the other hand, the Soviet Union remained a backward, authoritarian state, which utilized terror even after the denunciations of Stalin and the efforts to reform the system that began in the 1950s. The so-called “Cold War” provided the context for this redefinition. Then, as now, Russia was isolated, fearful, inward-turned and striving to impose its will on its near abroad, especially Eastern Europe. Then, as now, the US was triumphalist, blind to the values of other civilizations, and intent on extending its law and property system to every corner of the earth. The theory of “totalitarianism,” which equated Nazism and Communism, developed in that context. The theory caught a truth, namely that Nazism and Communism were both dictatorships that rejected the Anglo-American rights tradition. But the theory also suppressed historical efforts to grasp the very different tragedies that twentieth-century Russia and Germany had experienced.

Most importantly, unlike the earlier concept of fascism, the concept of totalitarianism was uncritical; it sought to deflect criticism from the United States and to identify anti-democratic and repressive forces as coming wholly from outside. After the formulation of this idea, the understanding that World War Two involved both a struggle against dictatorship and a struggle to reform capitalism disappeared. In its place came the War in Vietnam, and the “war on terror,” including the invasions of Afghanistan and Iraq. Communists were equated with Nazis, Saddam Hussein with Hitler, Al-Qaeda with fascism. The meaning of the twentieth century was seen to lie in the triumph of liberal democracy, including markets. The effort to reform capitalism, or to transform it into a system more fitting to human needs and values, whether in the form of the New Deal, the Keynesian Revolution or socialism, appeared less central, eventually disappearing to a large extent.

To see how the redefinition of fascism as totalitarianism lies behind contemporary discussions of Russia and Ukraine, consider the influence of Timothy Snyder’s 2010 Bloodlands. A work of prodigious scholarship and tendentious politics, Bloodlands informs Snyder’s widely circulated descriptions of Ukraine as the site of a suffering democratic revolution, and of Putin as a contemporary version of the classic dictator. Bloodlands is the next logical step in the evolution of the totalitarianism thesis in that it not only assimilates Nazism to Communism, but blames Communism for Nazism.

According to Snyder, the formative event of the twentieth century was the starvation of the Ukrainians, which followed the deKulakization campaigns of 1928-9. And, to be sure, those were crimes that all subsequent progressives will have to understand. But Snyder also suggests that Hitler took Stalin as his ideal, modeling the killing of the Jews on the killing of the Ukrainians. Hitler, then, becomes a “second Stalin.” Putin, of course, is directly in this lineage. Missing from Snyder’s account are such factors as the role of World War One in generating monstrous state apparatuses able to deploy mass violence, the anti-Semitism stirred up by that war and central to Nazism but not to Bolshevism, and the Western “appeasement” policies, which date from the end of World War One and which sought to peacefully settle Germany’s borders with France while leaving Germany’s Eastern borders (and therefore the path to war with the Soviet Union) open to violence. Anyone who wants to follow up the ways in which Snyder’s book comes close to eliminating not only fascism but also anti-Semitism from twentieth-century history should read Richard Evans’ superb review in The London Review of Books, November 4, 2010.

Ideas have consequences, and the most powerful ideas of the twentieth century were historical ideas, among which capitalism and fascism were two of the most important. What is most striking about the present-day public sphere is the absence of such ideas and their replacement by ahistorical moral stereotypes. The reinterpretation of the meaning of World War Two, so that it lost its connection to domestic reform and became one in a series of struggles against an evil outside, was an important moment in this decline. The purpose of this reinterpretation was to discredit another powerful twentieth-century idea, the idea of a Left, which was also an historical idea referencing a movement of thought and action far more powerful and long-lasting than that of socialism, but not excluding socialism either. While the idea of fascism is important chiefly for understanding twentieth-century history, the idea of the Left is central for understanding the world today. Yet the two ideas are closely linked. Indeed, the beginnings of wisdom lie in going back to 1945 and trying to understand how, in the last thirty or forty years, mostly under Democratic Presidents, the United States squandered one of the greatest triumphs of the twentieth century: the destruction of fascism, a triumph in which a global Left played a leading role.

This is a very brisk walk through a topic that should be taken slowly and treated in depth, but inevitably therefore at much greater length. Not the least of the reasons for engaging with it so briefly is that the institutions, if not always the practice, of Britain, the United States, and other liberal democracies today reflect efforts to rein in corruption that began in the eighteenth and nineteenth centuries, but which drew on very ancient arguments about the individual and institutional failings that rot individual character and bring about the downfall of states by weakening their ability to resist foreign attack, or by turning accountable republican government into some form of tyranny. More recent arguments focus on the economic cost of corruption, leading some writers to distinguish quite sharply between political and economic corruption. It seems plausible to see them as two faces of one phenomenon: politicians enriching themselves by extracting favors from individuals and businesses, and the latter offering favors to politicians whose “friendship” they need. A greater focus on the economic costs seems apt for developing countries which cannot afford the damage to the welfare of their population, and a greater focus on the erosion of decent democratic governance more apt for rich countries such as Italy or the United States.

I begin with two observations. The first and most obvious is that in order to decide what is and what is not corrupt behavior we need an antecedent view about what good behavior is like and how it comes to be corrupted. Discussions of corruption are invariably and rightly embedded in discussions of good governance. Since corruption frequently refers to the corruption of politicians by rich men seeking favors or the shaking down of rich men by agents of the state seeking to enrich themselves, any discussion of corruption requires a firm grip on what we think an uncorrupt political system looks like and what a “clean” economy looks like. “The opposite of contemporary Russia” is a good start but lacking in detail and sophistication. Universal corruption, of the sort endemic to communist Romania, where school teachers, doctors, police, and administrators all expected to be bribed by those who needed their assistance, is the paradigm of a “dirty” economy and political system.

The second is that questions about corruption are enmeshed in questions about individual character; the notion of an objectively perfectly decent person engaging in objectively utterly corrupt behavior makes little sense. The notion of a person who is perfectly decent by their own lights engaging in behavior that an observer thinks corrupt is entirely intelligible; but we find it a stretch to think that they are really perfectly decent and yet “objectively” corrupt. At best, we shall think it is just about excusable, or a stain on their character that we can understand being overlooked in the circumstances in which they are operating. Warren Hastings defended his conduct in India with the observation that when he considered the opportunities he had for enriching himself at the expense of the native population, he was astonished at his own moderation. American politicians in the Gilded Age might have said the same. I make this very simple point because anyone interested in institutional design will wish to think of ways in which decent persons not endowed with superhuman strength of character can be induced to behave in an uncorrupt fashion, and not be subjected to too much temptation to behave badly.

I have no general theory of good governance or of sturdy and incorruptible character, let alone how to render them universal. Robespierre’s soubriquet of the “sea-green incorruptible” reminds us that we may sometimes wish that people would be corrupted to the extent of not doing what they think to be their duty, either by simple good nature or by venality — prison officers who somehow lost well-connected political prisoners under the apartheid regime in South Africa come to mind. There are many platitudes about good governance that are too often honored in the breach and could be enforced more effectively by a combination of better institutional design and greater savagery in punishing breach of trust, but they are platitudes and need no belaboring. What may need more belaboring is how difficult good institutional design actually is, and how beset with vicious circles its implementation can be. It is easy to declare ourselves in favor of transparency and accountability, as I am, but it is very hard to create mechanisms of accountability that cannot be circumvented, subverted, or ignored. Anticorruption regulations also need to be supported by the local political culture if they are not to be dead letters. Making demands of public opinion raises large issues of time and attention; human beings have short attention spans, limited powers of concentration, and a host of different matters calling for their attention. Walter Lippmann’s Phantom Public is almost 90 years old but could have been written yesterday, and its title tells us all we need to know. No standard model of accountability is wholly capable of dealing with those constraints. Familiar devices meet some problems — random spot checks can ensure that nobody is skimming at the checkout — but with an outfit the size of Citibank or Walmart, such devices are hard to organize and easy to undermine. 
With enterprises such as arms manufacturers facing third world governments and a host of intermediaries expecting their payoffs, the job becomes almost impossible.

Transparency is certainly needed to suppress corruption; accountability is impossible if those who stand in temptation’s way don’t have to produce accurate and complete accounts, don’t run the risk of facing public hearings on the accuracy of their accounts, aren’t properly audited, and so on. Up to the point of simple information overload, making all this public is a useful procedure. It can be a pain in the neck if you are running something like a small college and have to spend a lot of your time explaining where the college’s income goes and where it comes from in the first place, but it is income entrusted to you for a purpose, and those who directly or indirectly contribute to it have a right to know that it’s spent on what it ought to be spent on. But transparency takes one only so far.

Consider the contemporary American political system. Although there are areas of the funding of political campaigns where more transparency would help — preventing donors of large sums of money from hiding behind a veil of anonymity — there is no great difficulty in discovering who has funded particular members of Congress. Nor is there any great difficulty in connecting that funding to the voting records of those members of Congress. The difficulty is that there is no consensus on what corruption is. A substantial part of the population thinks it is the job of a member of Congress to lobby the government on behalf of the particular districts and economic interests to which they have given their allegiance. One person will think that receiving $300,000 from the makers of medical devices and then doing one’s damnedest to get the Affordable Care Act’s tax on such devices repealed is corrupt — essentially, selling one’s vote. Another may think that since the device maker operates in the district you represent, your duty is to fight for the firm and its employees. To put it another way, flagrant, gold-plated, universally condemned corruption is something that greater transparency would root out. Contestable corruption is another matter; clean-up campaigns work only when there is a moral consensus behind them, and it is an understatement to say that we do not live in an age of moral consensus.

This is an excerpt from an article published in Social Research, Vol. 80: No. 4: Winter 2013.

Israel’s right-wingers never stop providing spectacular examples of the all-too-human tendency to avoid facts that contradict their worldview. Two weeks ago I showed how the Anti-Defamation League’s anti-Semitism survey demonstrates the falsity of Netanyahu & Co.’s favorite theory that anti-Semitism is the source of criticism of Israel. The ADL’s study shows the opposite: European criticism of Israel’s occupation is negatively correlated with anti-Semitic attitudes. Countries like Sweden and Britain, which are almost devoid of such attitudes, criticize Israel most strongly, whereas countries that Netanyahu & Co. consider friends harbor high levels of anti-Semitism.

The ADL’s survey produced one result that, while not unexpected, certainly requires further thought and analysis: Arab countries have by far the highest rate of anti-Semitic attitudes, at 74% of the population. There seems to be a high correlation with Islam, because Malaysia, a country with a large Muslim majority, also shows a very high percentage, 61% (unfortunately there are no data for Pakistan). Within the Arab world, Palestinians in the West Bank and Gaza lead with a staggering 93%.

A recent Haaretz editorial blasted Netanyahu for using these figures to argue that the Palestinian Authority is a hotbed of anti-Semitism that distorts reality and blindly attacks both Jews in general and Israel in particular. The editorial argues that the high incidence of anti-Semitism is explained by 47 years of occupation, and that the government should act to reduce this anti-Semitism rather than use it as a pretext to avoid reaching an agreement with the Palestinians.

While the editorial makes an important point, it does not address the much more complicated question of why the rest of the Arab world shows such incredibly high proportions of anti-Semitic attitudes. Simply blaming Israel’s occupation of the West Bank will not do. The question requires much more complex analysis, taking into account a multitude of factors: primarily the Arab world’s abysmal failure to adapt to modernity, which Bernard Lewis pointed out long ago, and the failure of the Islamic world to develop viable political programs, as the French sociologist Olivier Roy has shown in a series of books.

I therefore definitely think that the Haaretz editorial is not faultless. But Aryeh Eldad has graded the editorial with an F for logic on the following grounds: he claims that it simply does not take into account 3,500 years of Jew-hatred. Eldad asks whether the Egyptian pharaohs’ injunction to kill all Jewish firstborn and Haman’s anti-Semitic manifesto, which argues that there is a people that keeps to itself, doesn’t respect the kingdom’s customs, and should therefore be eradicated, are also due to Israel’s occupation of Palestinian territory, before going on to quote other examples.

Prof. Eldad is a professor of medicine, and I am told he has been a conscientious physician, researcher, and teacher (to the best of my knowledge he is not currently practicing). I am sure that in medical research and in treating patients he made sure to follow scientific methodology carefully.

If he grades Haaretz with an F for logic, it is certainly justified to have a look at the cogency of his own argument. Has he checked whether the incidence of hatred toward Jews throughout history was higher than toward other minorities in similar conditions? He might do well to look at Yuri Slezkine’s The Jewish Century, where he will find a powerful argument to the contrary.

Furthermore, Prof. Eldad’s opening salvo suffers from a very simple problem. There is virtual unanimity among historians and archaeologists that the Jews were never enslaved in Egypt and that the Exodus never took place. The story most likely emerged about 700 years later, when the relevant portions of the Bible were written.

Prof. Eldad might do well to read Finkelstein and Silberman’s The Bible Unearthed and Richard Elliott Friedman’s Who Wrote the Bible? for starters. Eldad doesn’t have to swallow these researchers’ conclusions, but he might do well to have a look at the vast literature they cite; on that occasion he will find that the overwhelming majority of scholars consider the Book of Esther (from which the tale of Haman derives) a historical novella rather than a historical account.

Could Eldad therefore explain why he grades Haaretz with an F for logic when he doesn’t follow the most basic rules of ascertaining his facts before making use of them?

I do not take Jew-hatred in its many forms, including modern anti-Semitism (a term that dates from the last third of the 19th century), lightly. And I find it deeply disturbing that the Islamic world shows such horrendously high figures for anti-Jewish beliefs and attitudes. Trying to understand this phenomenon requires serious research combining a variety of disciplines. Developing policies to counteract the surge of anti-Semitism in the Islamic world requires looking at the facts carefully, weighing the evidence, and making use of the most sophisticated armamentarium of extant theoretical approaches. I suggest Scott Atran’s Talking to the Enemy as an excellent starting point.

As for Prof. Eldad, I would suggest that he use his scientific training to make sure that his political commentary is informed by an absolute minimum of intellectual integrity. Before grading Haaretz with an F for logic, he should make sure that he doesn’t commit blunders unacceptable from a first-year student in any discipline, and that certainly do not befit a man of learning, even if he comes from Israel’s extreme right.

This article was originally published on May 28, 2014 in the author’s “Strenger than Fiction” blog at Haaretz.

In a few days, the eyes of millions of people around the world will be fixed on their TV screens, following a ball rolling across some shining green field in Brazil. They will be expecting to witness one of the most exciting World Cups in history; after all, we Brazilians live in the country of football. But probably very few of these spectators know that more than 250,000 Brazilians had their rights violated and their lives harshly disrupted in the process of making such a sporting spectacle possible. Entire communities were evicted to build the facilities for the games and the infrastructure to receive the tourists. The slums and peripheral neighborhoods of major cities were militarized in a process euphemistically referred to as “pacifying.” Workers were displaced, injured, and killed building the new stadiums required by FIFA, while their labor rights went unobserved.

And now, children and teenagers face the danger of sexual exploitation (distinct from prostitution, which should be completely decriminalized). Homeless people are violently repressed and “cleaned” from the prime city areas. Protesters and activists are criminalized — and to add insult to injury, the popular cries against what the World Cup represents and presents to the country have been dismissed as an unintelligible lack of patriotism and hatred for the game that is seen as a defining category of our identity.

However, the Brazilian working class, social movements, and popular sectors of society have not been silenced by such dismissive attacks. In a wave of protests that peaked in June 2013, people have been taking to the streets to make their discontent explicit, claiming back what belongs to us: our freedom to decide what path we want to take. This week, metro and train workers in São Paulo began an open-ended strike demanding higher wages. In the same city, the Homeless Workers Movement (MTST) mobilized more than 10,000 people in a peaceful march to the Arena Corinthians stadium, which is to host the opening game of the World Cup next week, demanding that the government increase funding for public transport, health, education, and low-income housing, as well as a lifetime pension for the families of the construction workers who died during stadium construction. In Belo Horizonte, students occupied the administration’s office at the Federal University of Minas Gerais to protest the decision to close the campus on the days when the nearby Mineirão stadium will host games, turning the campus into FIFA territory. And the indigenous peoples, who staged a protest against the landlords’ lobby atop the National Congress’s dome at the end of May, promise further political actions in the coming weeks. These are just a few instances of the countless demonstrations bursting out all over Brazil, in large and small cities alike, expressing people’s discontent with the status quo and aiming to destabilize it. The broad array of issues brought to light by protesters demonstrates that this is not only about the World Cup; it is about something larger and deeper, of which the football spectacle is just the most visible and immediate component.

Indeed, the current wave of revolts expresses disapproval of a neoliberal model of development of which mega-events such as the World Cup and the Olympics are a small, albeit relevant, part. And contrary to what the Marxist writer Antonio Negri claimed last week in an interview with a Brazilian newspaper, this is not a shift from the politics of revolutionary transformation put in place by former President Lula to a new politics of mega-events advanced by his successor, Dilma Rousseff. On the contrary, this is all part of the same developmental model fostered by the Workers’ Party, of which both Lula and Dilma are key representatives. Such a model relies on the privatization of the commons, the flexibilization of work relations, and the institutionalization of laws of exception that not only suspend constitutional rights but also criminalize all forms of political dissent. Meanwhile, it promotes the interests of transnational corporations and actors, such as FIFA, which are exempt from any form of democratic accountability. Within this framework, the World Cup is no different from mining activities or the construction of dams: it is “accumulation by dispossession,” to use David Harvey’s term.

Hundreds of thousands of families were displaced in order to make space for new stadiums and facilities that none of them can afford to access. Workers in the informal sector have been pushed away from the areas where the matches will take place and the tourists will circulate, losing their source of income in favor of FIFA’s authorized dealers. Public funds were employed for the renovation and construction of infrastructure that will generate profits for private companies. These developments are not anomalies, however, but part of a larger trend, and it is against this model that the marginalized and oppressed are now fighting in Brazil. Hopefully, their cries will appear on the TV screens of the millions who will be following the matches around the world. Because, contrary to what Maradona once said, this time “la pelota se manchó” — the ball was stained!

One of the most appalling and discouraging outcomes of the recent European elections has been the rise and affirmation of a number of far-right, xenophobic, populist parties in Eastern and Northern Europe and in France. This has been largely the outcome of years of austerity policies and crisis, which have deteriorated the conditions of life for millions of people across the continent. In this discouraging scenario, the most promising novelty has been, in addition to Syriza’s electoral victory in Greece, the birth and astonishing success (7.9% of the vote) of a new organization: Podemos (Spain). Podemos was created only a few months ago, in March, by leftist activists associated with the 15-M movement, and it inherited the spirit and organizational methods of the Indignados movement. It opposes austerity policies and defends the welfare state and social rights from the neoliberal attack supported by both center-left and center-right coalitions across Europe. Moreover, it is an interesting and powerful experiment in radical democracy, one which might have decisive consequences for the renewal of the European left and its culture. For these reasons, I decided to join the signatories of this statement in support of Podemos. Defending social rights and reclaiming radical democracy is the only effective answer to the rise of a dangerous xenophobic, homophobic, and misogynist far right.

In the coming weeks, PS will publish a more detailed, first-person account of the experience of Podemos. Cinzia Arruzza

* * *

In the wake of the European elections, we want to celebrate the emergence of PODEMOS as a political alternative in Spain. With almost no resources, just four months after its foundation, PODEMOS has managed to garner impressive popular support, winning eight percent of the vote and becoming the third political force in 23 of the 40 main cities in the country. While the politics of austerity are turning Southern Europe into a desolate landscape, it is encouraging that more and more people are willing to rise up and fight for democracy, their social rights, and popular sovereignty. Even more so, it is deeply inspiring that they are willing to contest the mandates of the financial and political elites through new, radically democratic means.

PODEMOS has managed to build upon the cycle of popular uprisings that have spread worldwide since 2011, demanding a democracy that is worthy of its name. It has done so by empowering the people’s political participation, holding open primary elections, elaborating a participatory political program, and constituting more than 400 circles and popular assemblies worldwide in support of the initiative. PODEMOS relies exclusively on crowdfunding and popular donations, refusing to receive any funding from the financial institutions that are responsible for the crisis, and all its expenses can be viewed online. All its representatives will be revocable and subject to a strict limitation of their mandates, their privileges, and their salaries.

PODEMOS’s political program, elaborated with the contributions of thousands of citizens, makes manifest a hope shared by millions around the world: to break with the neoliberal logic of austerity and the dictatorship of debt; to establish a fair distribution of wealth and labor among all; to radically democratize all instances of public life; to defend public services and social rights; and to end the impunity and corruption that have turned the European dream of liberty, equality, and fraternity into the nightmare of an unjust, cynical, and oligarchic society.

This election has shown that the disaffection and malaise created by the policies of the Troika are a breeding ground for fascist and xenophobic forces. It is urgent, therefore, that the message of hope expressed by PODEMOS spread across all our countries: the message of the resistance of a people who will not succumb to passivity, who reclaim instead a power exclusively their own, the democratic capacity of all to decide on what is common, on the matters that determine the lives of all.

This is why we express our support of this initiative, of its open and participatory method, hoping that its efforts will materialize and spread throughout many other countries in Europe and the world.

In solidarity,

Gilbert Achcar
Jorge Alemán
Cinzia Arruzza
Étienne Balibar
Brenna Bhandar
Bruno Bosteels
Wendy Brown
Hisham Bustani
Judith Butler
Fathi Chamkhi
Noam Chomsky
Mike Davis
Erri De Luca
Costas Douzinas
Eduardo Galeano
Michael Hardt
Srećko Horvat
Robert Hullot-Kentor
Sadri Khiari
Naomi Klein
Chantal Mouffe
Aristeidis Mpaltas
Yasser Munif
Antonio Negri
Jacques Rancière
Leticia Sabsay
Mixalis Spourdalakis
Nicos Theotocas
Alberto Toscano
Slavoj Žižek

This statement was originally published on the Apoyo Internacional a PODEMOS website.

Our colleague, Zeyno Ustun, is back in Istanbul this month. We corresponded about the situation there on the occasion of the anniversary of the Gezi protests. She reports political paralysis with maximum police presence and sent a report from Amnesty International that she judges to summarize the situation accurately. Zeyno came across the following piece in Revolution News. It is re-posted here with permission. –Jeff Goldfarb

The repression of peaceful protest and the use of abusive force by police continues unabated one year after the Gezi Park protests.

Across Turkey, more than 5,500 people have been prosecuted in connection with the Gezi Park protests.

Only five prosecutions have been brought against nine police officers, despite hundreds of complaints of police abuses.

Medical associations, doctors, and other civil servants have faced sanctions and prison sentences for their alleged care of injured protesters.

Social media users are on trial and facing prison sentences for sharing information about the protests.

New laws restrict access to social media and criminalize the provision of emergency medical care during protests.

One year on from the Gezi Park protests, the government’s approach to demonstrations is as abusive as ever while impunity for police violence is rampant, Amnesty International said in a report on June 9.

“The Turkish authorities have been relentless in their crackdown on protesters — be it police violence on the streets or by prosecuting them through the courts. Meanwhile the police enjoy near total impunity. The message is clear: peaceful demonstrations will not be tolerated,” said Salil Shetty, Secretary General of Amnesty International. “Just in the last ten days, demonstrations across Turkey to mark the anniversary of the Gezi Park protests were banned and arbitrarily and brutally dispersed with tear gas, water cannons, and beatings. The government must change course, allow peaceful protest, and ensure accountability for police abuses.”

Amnesty International’s report, Adding Injustice to Injury: One Year On from the Gezi Park Protests in Turkey, examines developments following the small protest against the destruction of the park in central Istanbul that spiraled into nationwide anti-government demonstrations. It calls on the Turkish authorities to end impunity for human rights abuses by law enforcement officials and to guarantee the right to peaceful assembly.

Eight thousand people were injured during the Gezi Park protests and 11 people died as a result of police violence, but investigations into police abuses have stalled, been obstructed, or closed.

Only five separate prosecutions have been brought against police officers to date. In stark contrast, more than 5,500 people face prosecution for organizing, participating in, or supporting the Gezi Park protests. Many are being prosecuted for nothing more than peacefully exercising their right to freedom of assembly. Protest organizers are being prosecuted for “founding a criminal organization” while scores have been charged with unsubstantiated terrorism offences. “The government must revise the law on demonstrations, remove excessive restrictions on where and when demonstrations can take place, and repeal provisions used to criminalize peaceful protest,” said Andrew Gardner, Amnesty International’s researcher on Turkey.

Doctors have been disciplined and, in two cases, criminally prosecuted for providing first aid in makeshift medical clinics during the Gezi Park protests. In January 2014, the government introduced legislative amendments that could be used to support criminal punishment of those who provide emergency medical treatment during protests.

In a crude violation of the right to freedom of expression, criminal investigations have been started against commentators who documented the protests. These investigations were followed by random prosecutions of people posting opinions on social media during the protests. Increased powers to shut down websites have been introduced.

“One year on from the Gezi Park protests, the Turkish authorities seem to be firmly set on the path of intolerance, conflict, and polarization. Unless checked, this will lead to further violations of human rights in the country,” said Salil Shetty. “It is not too late for the government to change course. However, this requires the political will to acknowledge legitimate grievances and reach out to the disaffected; to accept criticism and to respect the right to freedom of assembly; to stay the prosecution of peaceful protesters and to ensure accountability for police abuses.”

* * *

Cases:

On June 3, 2013, Hakan Yaman was beaten up and thrown on a fire by four riot police officers and a person in plain clothes operating next to a water cannon vehicle. A witness recorded the incident on his mobile phone. Despite the number of the water cannon vehicle being visible in the video, the Istanbul police authorities have failed to reveal the identities of the officers assigned to work alongside it.

Five members of Taksim Solidarity, a coalition of over 100 NGOs, political groups, and professional bodies that came together to oppose the redevelopment of Gezi Park, stand accused of “founding a criminal organization,” “provoking others to participate in an unauthorized demonstration,” and “refusing to disperse from an unauthorized demonstration.” There is no evidence in the indictment that the five participated in or incited violence, or engaged in any other conduct not protected by human rights law. All five face up to 15 years’ imprisonment.

Twenty-nine young people in Izmir are on trial for “inciting the public to break the law” via social media posts. Three of the defendants are additionally charged with defaming the Prime Minister. The case is based entirely on tweets that were sent about the first weekend of the protests. They provide information, such as available wireless passwords and locations where the police were using force against demonstrators, or contain opinions and messages of support for the demonstrations. None of the tweets in the indictment contains any incitement to, or indication of participation in, violence. A number of the tweets are said to defame the Prime Minister, who intervened in the case and is listed as a “victim.” After two hearings, the case was postponed until July 14, 2014.

This article was originally published by Revolution News.

After forty years, and though more historical research is needed on the presidency of Isabel Perón (1974-1976), what we know today leads us to consider her Peronist government one of the most violent in the violent history of Argentina. To be sure, political violence was quite extensive prior to the death of her husband, President General Juan Perón. Violence was unleashed before and after 1974, on both the left and the right of the political spectrum. But the state violence generated by the Peronist government of Isabel Perón acted as a sort of historical preamble to the “Dirty War” of the military junta that ruled the country after toppling her in early 1976.

Commanded by the most powerful minister of Isabel Perón’s administration, José López Rega, the neo-fascist organization Triple A acted as a paramilitary arm of the Peronist government. Between July and September 1974, Triple A murdered 60 people, producing the gruesome statistic of one person killed every 19 hours. Flush with state funds, and with strong links to the security forces and the world of Peronist trade unionism, the Triple A cold warriors, who openly stated that “the best enemy is the dead enemy,” recognized Isabel Perón as their leader. In the history of global fascism, Isabel Perón has the dubious distinction of being the first female leader of a neo-fascist organization.

This was a peculiar turn, because the differences between Peronism and classical fascism had always been important. In fact, Peronism was never a fascist movement but a post-fascist political formation that consolidated after 1945. Its emergence as the first post-war populist regime marked precisely General Juan Perón’s rejection of dictatorship as a model of government. Perón was at that time the leader of a military dictatorship that called for, and won, presidential elections. The movement associated with his name emerged as a post-fascist rejection of fascist violence. Instead, Peronism created an electoral democracy between 1946 and 1955 that was characterized by low levels of political violence. In contrast, the military regime that, in the name of “freedom,” overthrew Peronism in 1955 was undoubtedly far more violent and repressive than the classic Peronism of 1946-1955. It is clear from these facts that Peronism was not fascism. It was a form of authoritarian populism that expanded the social participation of citizens at the same time that it curtailed some political freedoms.

How, then, can we explain the belated Peronist engagement with Triple A and its neo-fascist violence? About 900 Argentinian citizens were killed by a Peronist organization supported by the Peronist state. In fact, Juan Perón had always maintained an admiration for fascism, even as he strongly rejected it in practice. And from 1943 on, he had recurrently used fascists and neo-fascists for dirty jobs. When General Perón died in 1974, his wife, then vice-president, continued this Peronist tradition.

The New York Times reported in those years that the “spirit of fascism lives in Isabel Peron’s regime.” But what is the legacy of the government of Isabel Perón? Her relationship with fascism marks a unique moment, a momentary break, in which the Peronism that was originally based precisely on a blunt populist rejection of fascist violence seemed to move backward along the path of its emergence as the first democratic populist regime after 1945. In 1974, the Peronist government seemed to undo this populism by forming what some historians today see as a prologue to the state terrorism of the dictatorship. The populist synthesis of democracy and authoritarianism was lost, and only authoritarian violence remained.

There is not much relation between the rule of Isabel Perón and the current Peronism of Argentina. The latter implicitly rejects the legacy of the Peronist right of the presidencies of Juan and Isabel Perón. However, it does so without critically and historically inscribing them in the history of the multiple metamorphoses of Peronist populism.

To mark its rich contradictions, we must also remember that many of the victims of Isabel Perón and her Triple A warriors were themselves Peronists.

The historical analysis of this Peronist experiment with neo-fascism in the 1970s might help us understand the different ways and changing shapes of Peronism, its amazing transformations and contradictory contents throughout the history of modern populism in Argentina. These mutations include Peronism’s encounters with the populist Latin American left, its engagement with neoliberalism and the “Washington consensus” in the 1990s under Peronist President Carlos Menem, and its current “national and popular” phase.

The para-state terrorism that preceded the Dirty War (1976-1983) was one of the many historical developments of Peronism. It was one of its multiple possibilities.

Prime Minister Benjamin Netanyahu’s response to the kidnapping and murder of a Palestinian teenager in East Jerusalem — apparently, in retaliation for the recent kidnapping and murder of three Israeli teenagers in the West Bank — was a public call to Israelis to “refrain from taking the law into their own hands.” This message, delivered by the Prime Minister both personally and through his spokesmen, is very revealing.

On a first look, it is nothing but a laconic statement — a sober appeal to the nation in a moment of escalating violence that’s alarming even by Israeli standards.

On a second look, it contains an embarrassing mistake. Kidnapping and murdering an innocent Palestinian teenager has nothing to do with “taking the law into one’s own hands.” It is a criminal terrorist act per se; not a moment in which, say, an injured civilian violates the rule of law by punishing a criminal that the state has failed to punish.

On a third look, and the most accurate of all, Netanyahu’s statement contains no embarrassing mistake; rather, it reveals the embarrassing truth. For indeed state terrorism against Palestinians, executed for revenge, retaliation, and the general intimidation of the occupied Palestinian population, is officially carried out by the Israeli state. It is a form of terrorism that Israelis consider legal.

There are several examples of legal state terrorism in Israel — officially approved by the Israeli Supreme Court of Justice — but the clearest case is the IDF’s practice of house demolitions.

In the last few days, three houses were demolished by the IDF: the houses of Marwan Qawasmeh and Amer Abu Eishe, the Hamas activists suspected of kidnapping and murdering the three Israeli teens; and the house of Ziad Awad, a Hamas activist suspected of murdering an Israeli police officer last spring. Of course, house demolition even of convicted terrorists is contrary to justice, international law, and the Geneva Convention, for it necessarily aims at punishing civilians who have not been convicted of (or even charged with) a crime: the terrorist’s family and close circle. Such demolitions are intended to intimidate the civilian population, to deter them from supporting terrorists and guerrilla fighters. Such deterrence by intimidation of a civilian population is, by definition, state terrorism.

What is especially disturbing about terrorist state practices in Israel is the Supreme Court’s relation to them. Indeed, the recent demolitions have been officially considered and approved by the Court, as were previous demolitions, but the recent ruling is all the more offensive because, in this case, Qawasmeh, Abu Eishe, and Awad have not even been convicted of a crime. Qawasmeh and Abu Eishe have disappeared, and Hamas, the organization to which they apparently belong, hasn’t taken responsibility for the murder. Awad was captured in the arrest waves immediately following the murder, and was only now charged with a crime.

To the human rights organizations that petitioned Israel’s Supreme Court, the Justices replied that the demolitions are intended in the first place not as “punishment” but as “deterrence” of the Palestinian population. In other words, because the IDF isn’t punishing people for crimes they did not commit, but using them to intimidate Palestinians and deter them from cooperating with terrorists and guerrilla fighters, the demolitions are legal. (It is at least worth mentioning here that this is the same Supreme Court over which the celebrated Aharon Barak used to preside: a Court recognized within Israel and internationally as the last “leftist” Israeli resort, the last functioning guard of Israel’s democratic rule of law. Yet Aharon Barak, too, approved in his day dozens if not hundreds of demolitions of Palestinian houses.) As of this morning, Haaretz reports that the IDF is preparing for a massive demolition operation in which dozens of Palestinian houses are supposed to come down.

This puts Netanyahu’s statement from yesterday in the right context. There is a rule of law in Israel, but it is a terrorist rule of law. It is the right of the state, and the state only, to exercise terrorism against the Palestinian population. Israeli civilians should leave terrorist activity to the IDF, the Israel Police, and the other official security branches: they ought not to take the law into their own hands.

The IDF deliberately chooses to attack the families of Hamas activists. This is a war crime. Let every Hebrew mother know that her son serves in a terrorist organization.[1]

“Ladies and gentlemen, good morning, this is the news broadcast. Az Adin Al-Kassam fighters took responsibility this morning for the bombing of the house of Captain Motti, an IDF platoon commander, on Hanarkisim Street in Tel Aviv. Captain Motti’s wife, Ariela, was killed in the bombing, along with Yair, his 2-year-old son, Sigalit, his 1-year-old daughter, Shlomit, Motti’s 64-year-old mother, and Yaron, a 23-year-old neighbor who was just visiting the family. Three nearby apartments on Hanarkisim Street caught fire, and eight neighbors were hospitalized with varying degrees of injury. According to Hamas’s statement, they knew that Captain Motti was not present in the house at the time. The bombing, they say, was necessary in order to clarify to Captain Motti — whom they consider a wanted terrorist for participating in the bombing of a tunnel in Gaza — that he has no home to return to.”

Of course, the statement is fictional. Does it sound ridiculous to you?

This is exactly the type of announcement that the IDF has started to release to the press in the last couple of days. Of course, the military makes sure to do so incognito: anonymous “senior officers” release statements to the press. Here, for example, is a recent report from Yisrael Hayom, the semi-official mouthpiece of the Prime Minister’s Office:

“Israel escalated the operation in Gaza yesterday, hoping to force Hamas commanders to blink first and plead for a cease-fire. As part of this initiative, the military decided to double the number of targets attacked. Attacks now include deliberate strikes on the private homes of all the commanders of Hamas fighters in Gaza, Han Yunnes and Rafach. The idea is clear: to create a ‘losing price’ and pressure from family members, who would be scared to lose their whole world.” (My emphasis — Y.G.) [2]

Nachum Barnea wrote something similar this morning in Yediot Acharonot:

“The Egyptian military, which did not fight the tunnels in the days of Mubarak and Morsi, manages to eliminate them in the days of General Sissi. A house under which a tunnel is discovered is destroyed — together with its inhabitants. The prospering smuggling industry, which has sustained Hamas and enabled its armament initiatives, was fatally injured.” [3]

Indeed, a “senior officer in the Israeli Air Force” said something very similar to Haaretz. In other words, the IDF is intentionally briefing the media about its current house-destruction policy: it is intentionally sending a message. Of course, it is important to notice that Nachum Barnea doesn’t quote his sources: Haaretz quotes a “senior Air Force officer,” and Yisrael Hayom speaks of “senior officers in the IDF and the Shin Bet.” Why, if this briefing is so intentional, doesn’t the IDF’s spokesman announce the policy? Why does no senior IDF officer take responsibility for it?

Because this policy is a war crime. An officer who stood behind it would mark himself as a target for future international lawsuits. It is legitimate for the IDF to attack military targets, but the targets it is now striking are clearly civilian. It is not a Hamas commander — but his family members — whom they shoot, using them to put pressure on Hamas and on him. Less politely put, the IDF is taking his family as hostages, and is killing these hostages in large numbers.

As B’Tselem pointed out earlier in this campaign — before the killing of many more civilians — a flying IDF terrorist used a multi-million-dollar jet fighter this morning to kill Nur Marwan Almajidi, another 10-year-old girl, from Rafach. The Palestinian casualties now total 105 (150 by the time of translation). Private houses are not a legitimate military target. Behind the balanced words, “violation of international humanitarian law,” there’s another concept, which is more to the point: war crime.

In earlier days, the IDF had a permanent excuse: it said that the killing of Palestinian civilians is never intentional, that the military only targets areas from which rockets are fired, despite knowing that civilians might be killed. The claim was, and still is: Hamas is hiding behind civilians. The IDF, up until recently, always insisted that it is not interested in killing civilians — the civilians only insist on getting into the path of its missiles and bombs.

Much Israeli propaganda used to ride on this alleged moral difference between the IDF and Hamas: while Hamas shoots at Israeli civilians and hides behind Palestinian civilians, the IDF protects Israeli civilians and only kills Palestinians as a side effect. The IDF, Israeli thinkers, and Israeli propaganda vehicles built a whole moral theory around the alleged legitimacy of this fighting code. Somehow, the numbers, i.e., the fact that the IDF kills about 100 Palestinian civilians for every 1 Israeli civilian casualty, were conveniently left out of this moral theory. But now something new is occurring. Killing civilians is no longer announced as a mistake — it is now a policy. The IDF has stopped the pretense and the make-believe. If Hamas shoots inaccurate weapons into civilian populations but can claim in its defense that it doesn’t have accurate weaponry (to be sure, this wouldn’t help: this too is a war crime), the IDF uses accurate weapons and is now finally confessing that it uses them to target civilians.

This shouldn’t surprise us. The IDF’s fighting philosophy has always included targeting civilians. The famous Retaliation Operations, which still stand at the heart of the IDF’s fighting ethos, were deliberate attacks on Palestinian civilian villagers, attempting “to put pressure on the PLO and the other resistance movements,” or in clearer words: to terrorize the Palestinians.

In the fifties, the Israeli Air Force hijacked a civilian Syrian airplane in order to negotiate the release of a few Israeli intelligence officers who had been caught after crossing the border into Syria. It is possible that this was the first hijacking of a civilian airplane (a method later used by Palestinians against Israelis repeatedly). The Prime Minister at the time, Moshe Sharet, was shocked when he heard of the hijacking and immediately instructed the IDF to release the hostages; but this does not change the fact that the IDF acted as a terrorist organization, and that this expressed something of a broader mentality.

In the War of Attrition on the Egyptian border, after the IDF failed to answer the Egyptians’ persistent attacks on its outposts in the Sinai Desert, it started to bomb and shell the Egyptian towns on the Suez Canal — that is, it transferred the fighting to civilians.

Similarly, in Beirut in 1982, our flying fighters killed thousands of uninvolved civilians. More or less the whole Israeli strategy in Southern Lebanon, culminating in operations such as Din VeChshbon and Invey Zaam, was based on targeting civilians: we will shoot the Shi’ite villagers; they will in turn put pressure on Beirut, which will in turn put pressure on Hezbollah. The IDF didn’t know how to deal with Hezbollah directly, and its commanders were fed up with losing on the battlefield.

The same logic is active now. The IDF doesn’t know how to effectively hit Hamas commanders, and it is reluctant to send troops in. In order to overcome this “dilemma,” it bombs their private homes from the air and kills their families. This policy is more wide-ranging than commonly imagined: according to details released by OCHA (the UN’s humanitarian coordination agency), the Israeli Air Force has demolished 70 houses in the current campaign, along with 342 living units that were severely hit. Approximately 2,000 people have become homeless as a result. Furthermore, Israeli fighters have also attacked 5 clinics and injured 18 medical workers. These data were released three days ago.

Israel would have demanded international outrage, and for good reason, if Hamas had started blowing up the private houses of IDF officers, especially if Hamas knew full well that the officers were not there at the time of the bombing. Israel would have argued, again correctly, that this is terrorism. But this is exactly the policy that Israel now adopts. If you didn’t have enough reasons to refuse to serve in the IDF, here is the most recent one. Service in the IDF is direct and indirect cooperation with a policy of taking hostages and killing them. Let every Hebrew mother know that she is sending her son to serve in a terrorist organization.

NOTES

[1] “Let every Hebrew mother know that her son serves under commanders who are worthy of commanding” is a slogan coined by Ben-Gurion and still widely in use by Israeli commanders, for example in IDF officer-training programs.

[2] Israel Hayom [Israel Today] is a daily that was created by Netanyahu’s right-wing American donor, Sheldon Adelson. Its editors are unmistakably biased in support of Netanyahu — on security and diplomatic issues, but also in internal Israeli politics. It is one of the most widely read newspapers in Israel.

[3] A top mainstream Israeli journalist, writing for Yediot Acharonot, Israel’s largest-circulation centrist daily newspaper.

This article first appeared in Friends of George and was translated from the original Hebrew by Omri Boehm. All footnotes are by the translator.

For more than ten days, the Gaza Strip has again been under attack by Israel, and although missiles are fired every day by Palestinian factions into Israel, the casualties are massively (if not uniquely) on the Palestinian side. Twenty-four hours after the beginning of a sustained ground operation on Thursday, July 17, one may rightly fear that the number of victims, like the dozen Palestinian children killed in the past week, will increase to indecent proportions.

This renewed operation by Israel, initially hidden behind the media frenzy of the World Cup and now of the Malaysian plane shot down over Ukraine, is another round of collective punishment against Gazans — a gesture that has not gathered much international attention despite the gravity of the situation.

The point is not only to count the number of dead bodies under the rubble of Gaza, be they Hamas and Islamic Jihad fighters or civilians and children. There is another important casualty under the wreckage caused by the Israeli bombings after the failed ceasefire on Wednesday, July 16, one to which commentators have paid little or no attention: words.

And not just any words. These unusual casualties are the declarations made on Wednesday, July 16 by leaders of Hamas, when there were still hopes for a ceasefire, which signaled an important political opening on the side of the Islamist faction. A careful look at them shows that Hamas has been forced to change its position since the ousting of President Morsi in Egypt and the consequent weakening of the Muslim Brotherhood in the region. Statements published in Palestinian news outlets declared that Hamas was willing to sign a ceasefire with Israel provided that its conditions were met.

What were Hamas’s conditions and what do they tell us about the possibility of a path toward a negotiated exit from this new escalation?

Hamas’s requests started with very practical improvements, such as access by Palestinian farmers to the buffer zone along Gaza’s borders to cultivate their land; reinstatement of the six-nautical-mile limit for Palestinian fishers; the reopening of the Gaza seaport and airport under UN supervision; and the reopening of the Rafah crossing under international control. But the requests went well beyond practicalities and contained an important although overlooked novelty: Hamas called for substantial negotiation with Israel, as opposed to just a short-term ceasefire. Indeed, Hamas was willing to offer a 10-year truce, provided that Gazans were granted entry permits into Israel, East Jerusalem, and the West Bank, and that Israel promised not to interfere in the Fatah-Hamas reconciliation process.

This is in itself very significant for a movement that for years refused to engage in meaningful political negotiations with Israel. It shows that Hamas’s leaders have developed a vision that comes de facto close to some of the botched Oslo negotiations of the 1990s, with their endless and fruitless discussions between Arafat and Israel about the creation of a “safe corridor” between Gaza and the West Bank. Not only is Hamas now willing to talk, but it is also announcing its willingness from a platform that was once that of Arafat and Fatah. It is striking (and surprisingly overlooked) that Hamas is now operating within a logic of ad interim negotiations — proposing a 10-year truce window — when many had sharply criticized Arafat for accepting an ad hoc, temporary solution rather than aiming at a radical solution on the basis of international law and UN resolutions.

These proposals represent quite a break, considering that for years Hamas hid behind a sibylline and vague suggestion of coexistence, of a “100-year truce” between an Israeli state along the lines of the 1949 armistices and a Palestinian state made of the West Bank, East Jerusalem, and the Gaza Strip. For many Israelis and commentators this was the sign of a double-talk strategy on the part of Hamas: showing a willingness to live next to Israel (yet falling short of recognizing its existence) while simultaneously indulging a permanent temporariness (what happens after the 100 years?) that potentially incubated inimical sentiments towards Israel.

In a conflict where each word and formulation matters, it is quite surprising that these offers (or requests) have not generated more in-depth analyses. It might also be the case that for some, especially in the current right-wing Israeli government, it is better to hide these proposals under the rubble of Gaza, thereby ensuring that a negotiated solution remains only a distant — a very distant — prospect.

As I work on my dissertation, I try to isolate myself from the present and dig into the past, in the hope that something will be revealed. However, I can’t help but be dragged back to the reality on the ground, because the topic of the past is relevant to the reading of the present.

Throughout this new round of atrocities and violence that is spreading throughout Palestine and Israel, one thing keeps ringing in my ears. According to the Israeli media, there was no choice but to hit the Gaza Strip with all the might of the Israeli Army because the violence that took the form of rocket fire came from the Gaza Strip. And to my amazement, the “civilized” world has reiterated that the State of Israel, as a sovereign and free country, has a right — even an obligation — to strike back against belligerence and protect its civilian population.

The question that must be asked is: who determines the operating chronological framework? According to Israel, the belligerence began with the firing of rockets. On the other hand, one could argue instead that the belligerence started when the State of Israel occupied land and started building on it and transferring its citizens to live in that land while at the same time stripping the Palestinians of the means to self-determination.

To this day, the only internationally recognized border of the State of Israel is the one outlined in the Partition Plan for historic Palestine in 1947. Several wars have occurred since that time, with the result that the State of Israel found itself in a position of power that enabled it to grab more land and put the Palestinian population under siege.

The Palestinian population has been divided and discriminated against, and it continually faces new and creative oppressive measures that are meticulously crafted by the Israeli state. From the building of the Separation Wall to the disengagement from the Gaza Strip, an intricate network of checkpoints, surveillance, and control characterizes the Israeli bureaucratic and military system that immobilizes and terrorizes a whole nation.

Israel has a right to defend itself, and this is its justification for indiscriminately firing sophisticated and lethal weaponry into the Gaza Strip. The rising number of innocent civilians killed by this aggressive reaction is, apparently, of no importance.

In Discipline and Punish, Foucault explains that in modern nation-states, visible violence and torture were no longer welcomed, and thus killing was no longer performed as a collective spectacle. Punishment was transformed into a subtle and hidden enterprise. With the sophisticated technology of modern arms, a whole neighborhood, even a whole city, can be wiped out at the touch of a button.

The death of innocents is reduced to the result of a depersonalized, video-game-like action, and consequently it garners apathetic reactions. What else could explain the silence about what is happening in the Gaza Strip? Those people are in a cage, and they are targeted in a “hunger games” kind of fashion. Even if the innocents want to escape this horrific situation, they have no way out. In the meantime, across the border, the Israeli public has found a new entertaining pastime: meeting in a public space overlooking the Gaza Strip, with popcorn and beach chairs, to watch and cheer the bombardment of its civilians.

There is no space here to give a detailed description of the lives and deaths of those who are being killed, especially as the number is rising. We can only acknowledge the biased reaction of the world that supports the belligerent power that has perfected the art of collective punishment.

One is left only with questions: What will be the end of it all? Do the officials of the State of Israel have any vision of how the conflict with Palestinians will end? Do they think that oppressive and violent action in all its forms will make the Palestinian people disappear? Are they seeing what these appalling and discriminatory, racist actions are doing to their own people?

Israeli state violence is penetrating the Israeli citizenry; for a long time now the Israel Defense Forces has cooperated with Jewish settlers in the West Bank to impose violence and inflict terror on the Palestinian population. Many such violations have been documented on video through B’Tselem’s “camera project” and through other private initiatives. The culture of violence is spreading throughout the country, and it is further strengthened by the Prime Minister and his government, even as he urges the Jewish population not to take “the law into their own hands.” According to Netanyahu, the Israeli legal system allows for collective punishment of the Palestinian population even if they have not committed any act of violence. Thus, vengeance is legitimate, but he prefers to exercise it through the “proper” channels. After all, it is harder to criticize a state, since it has a legitimate monopoly on the use of violence.

Thus acts such as the vandalizing of Palestinian property, the terrorizing of a whole population, chants of “Death to the Arabs,” and the taking of the law into one’s own hands — ultimately leading to the burning alive of a Palestinian teenager — are overlooked by the Israeli state. Soon this demagogic mob reaction will infect the whole of Israeli society, which has been taught by its government that those who have the muscle are always in the right.

As the years progress, I am becoming convinced that most people can’t walk, chew gum, and think at the same time.* Why did people who were highly critical of American capitalism feel compelled to overlook the atrocities associated with Stalinism? Why did other people critical of Soviet power look favorably upon the “authoritarian” but reliably anti-communist Latin American dictatorships as part of the free world? And to get to my present discomfort, why do those who are highly critical of Israeli actions in Gaza and the West Bank ignore the terrorist tactics of Hamas? And why is it that those who are concerned with Palestinian terrorism ignore the deeply problematic qualities of the order of things in Israel today?

As editor of Public Seminar, I’m thinking about this having received private correspondence from colleagues who worry about the series of highly critical pieces we have published on Israel: Yossi Gurvitz’s scathing criticisms of Israeli propaganda, arguing that Netanyahu supports terrorism and that the IDF is the largest terrorist organization in the Middle East, and Omri Boehm’s demonstration of how the words of Benjamin Netanyahu reveal the logic of a terrorist regime. I have reservations about some of the implications of Gurvitz and Boehm, but I think they do reveal a crucial point. Israel’s policies toward Palestinians are deeply problematic on both sides of the Green Line and in the Gaza Strip, with greater brutality in the occupied territories than in Israel proper, yielding the greatest suffering in Gaza today as I write. This is underscored even more directly in Nahed Habiballah’s telling cri de coeur.

Yet, I worry as I read about the tragic events and as I have suggested in my replies to Boehm’s piece. While I think it is important to both recognize and thoroughly analyze the deeply problematic qualities of Israeli policies and practices, especially as the war in Gaza escalates, I think it is also important to recognize that both the ruling coalition in Israel and Hamas present military solutions to problems that ultimately must be addressed politically, and because of this, they share responsibility for the escalating inhumane death and destruction. They are collaborators.

Netanyahu needs Hamas to rationalize systematic domination of the Palestinians of the West Bank and Gaza, and to exclude the Palestinian citizens of Israel proper. Hamas depends on Netanyahu and the Israel Defense Forces to counteract its waning popularity in Gaza, as support from Egypt disappears and turmoil spreads in Syria, Iraq and beyond. Those who continue to shoot rockets, ineffective as they have been, from Gaza into Israel, and those who systematically work to eradicate the capacity to launch these rockets, are allies in their terrorism. Instead of addressing the enduring political challenges, the pursuit of justice, dignity and decency for the people of Israel–Palestine, Hamas launches rockets and Israel shoots back with great force and sends in the troops.

Thus, while I basically agree with the arguments of Boehm and Gurvitz, and find Habiballah’s perspective compelling, I think something substantial is missing: a more critical understanding of the meaning of Hamas’s military actions. We should critically consider the united terrorist front. When we don’t do this, criticism of one party of terror can and often does become apology for the other. We need to chew gum, walk and think at the same time.

Thus, while I don’t generally agree with J. Goldberg on the conflict, I think he raised an important issue when he asked “Is Hamas trying to get Gazans killed?” But when that question is opened, it is also important, indeed crucial, to pay close attention to how the killing of Gazans and the mass arrests and harassment of Palestinians on the West Bank have been based upon deliberate lies of Israeli officials. This has led to escalating collective outrage and a pointless search for kidnapped hitchhiking Israeli teenagers, long after the authorities knew the boys had been killed, as J.J. Goldberg has demonstrated.

J. Goldberg notes that the Hamas rocket attacks assured Israeli counterattacks, leading to the deaths of many Gazans, militants along with innocent civilians. The cynicism involved in this maneuver is lamentable, to say the least. J. Goldberg further ponders what would have happened if, years ago, the Palestinians had used the Israeli withdrawal from Gaza not as a base for rocket attacks, but as an opportunity to carve out a zone of self-rule and economic development in the fashion of the Kurds in Iraq. They could have established state structures and constituted civil society institutions for the Palestinian public good and against the Israeli adversary. As someone who has observed how “acting as if one lived in a free society” worked to foster a fundamental and radical transformation in Central Europe, this makes sense to me. There is the very real power of the powerless, the power of what I call the politics of small things.

Yet one must face hard facts. J.J. Goldberg reveals how Israeli officialdom cultivated and heightened a broad public alarm, fostering hatred, providing a shield for mass arrests in the West Bank, and preparing Israelis for war in Gaza. Here too the cynicism is striking. Instead of seeking to quiet hatred and conflict, the authorities stoked them. It is a policy of acting as if there is a commitment to peace and reconciliation, covering a policy of overwhelming military action, as we are now observing. J. and J.J. illuminate a significant problem, and as a result there seems to be no exit.

But it only seems so. They, we, can begin anew.

There are glimmers of hope against hopelessness. This is how I read Benoit Challand’s analysis of a shift in Hamas’s wording, which suggests constructive moves towards a peaceful resolution of the conflict, and how I understand our Israeli colleagues’ protest petition, which in no uncertain terms says no to the destructive military action in Gaza. There are people who are addressing the pressing political problems through politics and not through the barrel of a gun.

This leads me to want to act. I think a letter should be circulated around the world expressing solidarity with the Israeli protesters, and with those politicians on both sides of the conflict, where they can be found, who are committed to the power and importance of words working towards peace. More about this very soon, I hope.

*The original phrase is how the press reported then-President Lyndon Johnson’s evaluation of the intellectual capacities of future President Gerald Ford. For more on this, read Gerald Ford’s obituary in The Guardian.

I teach Just War Theory (JWT). I defend it strongly, as a necessary moral guideline for world politics, in classes full of cynical students, Israeli-raised students, many of whom went through the grinding machine of the occupation (themselves grinding Palestinians at checkpoints, in night arrests, and the like); students who fluently speak the language of power. But at times I myself see the dark, the political abysses in which JWT becomes almost nothing but a scholastic exercise, like debating how many angels (or, for that matter, demons) can dance on the head of a pin.

Think of the recurrent explosions of violence in Gaza/Israel, the current phase of which is called by Israel “Operation Protective Edge.” The recurrence of the violence has a fundamental importance that should not be overlooked in applying JWT to Operation Protective Edge. How, in terms of JWT, are we to evaluate the justness of the Israeli operation? Can it be at all just, with all the Palestinian victims, so many of whom are noncombatant civilians? But can it be at all unjust, when Israelis are facing intensive missile attacks? Arguably, JWT provides us the moral tools to evaluate the operation normatively. In its jus ad bellum dimension, JWT establishes six strict principles for determining if and when a war is just: 1. just cause, 2. right intention, 3. legitimate authority, 4. last resort, 5. probable chance of success, and 6. proportionality. All look very clear, so how is it that the Israeli operation is seen so differently around the world? Think of the harsh accusations in Public Seminar as compared to official statements by world leaders. Yes, political leaders can be cynical sometimes, but the purism of some moralists is no better normative teacher than cynicism; those purists who hold the too-easy high moral ground look at the world from a nowhere point that allows them to see nothing.

Surely Israel has a just cause to defend itself against missile attacks, and the other JWT criteria can also be met. Even, I’ll immediately add — risking being ostracized by the moralist academics who are my community, to whom I want to belong — with regard to the crucial criterion of proportionality. One should not count bodies to decide if a war is proportional or disproportionate, just or unjust (at the time of writing, almost 40 Israelis had been killed, the great majority of them combatants, as compared to more than 800 Palestinians, many of them noncombatants). Counting bodies can fit the biblical imperative of an eye for an eye, a tooth for a tooth (or it may be suitable to General Westmoreland and his like, who substituted PR for strategy). To be proportional, a war or operation should not be excessive as compared to the threat posed to the state. And being attacked by hundreds of lethal missiles calls for harsh measures, even like the ones conducted by Israel.

So why am I doubtful of Israel’s acts? It is the aforementioned “recurrent” that rings the alarm bells for me. This operation is the third that Israel has conducted in Gaza in the past six years. Preceding the current Operation Protective Edge were Operations Cast Lead (2008-2009) and Pillar of Defense (2012). How should we account for this recurrence? What has Israel done following the previous rounds of violence to prevent future ones? Nothing whatsoever.

It still besieges Gaza and denies the Palestinians a decent life. Observing that, I hate what I see when I look in the mirror. I hate myself, hate my country, and hate what it has become and what it represents. From here the easiest step is to condemn the operation, judging it unjust, maybe even a war crime, a repeated series of war crimes. Thus I can happily join my peers and the high moral ground they occupy. But then my look wanders to my kids (and yes, mentioning kids is always manipulative. But then again: 1. they are indeed my kids, and I care for them; and 2. both sides manipulatively use kids, their tragedies and shattered dreams and lives). I see my kids and remember that they have the moral right to be defended against harm, and that it is their country that owes them this defense; yes, the same country that I feel has betrayed me and my values. The State of Israel owes my kids (and other Israeli noncombatant civilians) defense and security against that which is so terribly wrong, namely the Palestinians’ firing of hundreds of missiles targeted at Israeli civilians; something so terribly wrong that it surely amounts to a war crime.

So then what? What is it that I expect my country, the State of Israel, to do? Nothing less than defend its citizens, if necessary with another harsh military operation. And here I part ways with the purists among the moralists. I find myself lost in uncertainty, with no trust in a moral theory that offers no real, effective guidelines. Because on the one hand, it is clear that Israel did not pursue all possible measures beforehand to avoid the necessity of resorting to a military operation. Hence the condition of war as a last resort is not really met. But on the other hand, it is doubtful whether Hamas offers any real prospects for the success of such diplomatic measures, and Israeli citizens, kids and adults alike, deserve to be defended against missile attacks, no matter the reason they were launched.

It is here that the right of self-defense comes to life. It is here that just cause joins hands with the other criteria to justify Israel’s action. But it is here, also, that just war theory becomes an immoral instrument, a rally round the cause; a legitimizing instrument of war in which people on both sides perish, mostly Palestinians, who are trapped between the hammer and the anvil, between Hamas and the IDF.

Does this mean that JWT has lost its relevance in our world of power politics? That is a far too sweeping conclusion. JWT is still very much relevant, and rightly so. But maybe, like any theory, it needs scope conditions, and perhaps intractable conflicts like the Israeli-Palestinian conflict are beyond its scope (this is mostly true regarding jus ad bellum, not jus in bello, the dimension of JWT that stipulates the rules prescribing permissible military conduct). The purpose of JWT is to decrease the number of wars by limiting the circumstances in which we deem war necessary, hence permissible. But intractable conflicts, which by definition are continuously unresolved (unresolvable?), do not lend themselves to this functionality of JWT. They go on erupting in vicious cycles, almost blind to rational calculations and moral evaluations. It is with these kinds of intractable conflicts that JWT loses its functionality and breaks down. When JWT is supposedly employed in intractable conflicts, it often serves rhetorically as a masquerade of permissibility and legitimacy. JWT provides excuses that dress immorality in moral clothes and provide justifications for what is unjustifiable.

JWT enables leaders on both sides to excuse themselves from even attempting to resolve the intractable conflict. Instead of investing the political resources and political will needed for the difficult task of addressing the sources and reasons of the conflict, the leaders prefer using JWT to inflict ever more harm on the other side. And opportunities are abundant, from rockets launched at noncombatants to targeted (and not so targeted) assassinations. The spectrum of acts is indefinite, and so is the never-ending temptation of retaliation and revenge dressed in robes of just causes. Israel besieges Gaza; Hamas (and/or other splinter groups) launches rockets at Israel, which launches and relaunches its never-ending operations.

Therefore, the leaders of both Israel and Hamas are, if not war criminals through and through, at least political crooks leading their peoples to misery and doom under the moral guise of JWT, instead of under its critical gaze.

The verdict was more forceful than expected. On July 24, 2014 the European Court of Human Rights in Strasbourg handed down two unanimous rulings in the cases of Al Nashiri v. Poland and Husayn (Abu Zubaydah) v. Poland. The cases concerned the extraordinary rendition by the CIA of two terrorism suspects to a secret detention site in Poland. Both men alleged that in December of 2002, during the early phase of the Bush administration’s “War on Terror,” they were secretly transferred to Poland, where they were tortured while being held, for nine and six months respectively, in an unacknowledged detention facility.

Having examined the evidence, including expert statements and findings of several international inquiries (but not documents from the Polish investigation, which the Polish government withheld), the Court found that the applicants’ allegations were sufficiently convincing. It ruled that by “acquiescing to and conniving in the CIA’s High Value Detainee program,” the Polish state violated several articles of the European Convention on Human Rights and Fundamental Freedoms, notably Article 3, which prohibits torture and inhuman or degrading treatment or punishment.

The Court noted that the interrogations carried out at the facility in Stare Kiejkuty were the “exclusive responsibility of the CIA.” “It was unlikely,” the holding reads, “that the Polish officials had witnessed or known exactly what had happened inside the facility.” Their culpability, according to the Court, lies in their failure to ensure that persons under Polish jurisdiction were not subjected to treatment prohibited by the Convention. The Court also ruled that the Polish investigation into the allegations of ill-treatment was ineffective, and that the applicants had thus been denied an “effective remedy.” Most observers had expected that the judges would condemn the inadequacy of the Polish proceedings in the case, which have been conducted in secret and without conclusion for the past six years. But the far-reaching nature of the rulings surprised commentators and seems to have caught the Polish government off guard. The Court ordered Poland to pay the men, who are presently being held in Guantánamo, damages in the sum of €100,000 each, and additionally €30,000 in costs and expenses to Al-Nashiri. Members of the Polish government have already hinted at the possibility of appeal.

“I receive this verdict with bitter satisfaction,” said Mikolaj Pietrzak, one of Al-Nashiri’s attorneys. He noted that the Polish state could have avoided the judgment by conducting an effective investigation itself, but regrettably chose not to. Now the Court’s decision carries major implications for other European states that collaborated with the Bush administration’s secret programs (related cases against Lithuania and Romania are pending). But to regard it as the final settling of accounts in the nearly decade-long debate concerning Poland’s co-responsibility for torture would be a mistake.

Human rights advocates, including lawyers directly involved in the case in Poland, rightly praised the decision as “historical,” “the best possible,” and a “comprehensive condemnation of the CIA, the black site program and Poland’s role in it.” In the words of Anne Brasseur, the President of the Parliamentary Assembly of the Council of Europe, the verdict is a sign of progress in the “onward march of truth.” But to stick with Brasseur’s phrase, truth still has some way to go. In the United States, no court has ever acknowledged the program of extraordinary renditions. Similarly, Polish authorities never admitted that they hosted a black site prison, and some of the politicians who condoned its establishment still enjoy thriving careers while continuing to refuse to concede that Poland’s complicity with torture may constitute a problem.

In the US Senate, a 6,000-page report on CIA detention and interrogation practices awaits declassification. In Poland, where I have closely followed the issue since 2005, the status of documents and other evidence in the case is shrouded in mystery. We do not know who exactly authorized the landings of rendition flights, who granted the CIA access to the remote government-owned villa and on what terms, who took care of on-site logistics, and who assisted with the cover-up. We do not know either what happened with the $15 million in cash that the CIA allegedly paid for Polish cooperation.

Ever since the Washington Post and Human Rights Watch revealed in November of 2005 that the US chose Poland, Romania, and Lithuania as its partners in the interrogations of terrorism detainees, the Polish public has encountered persistent obfuscation from its leaders. The reticence of officials was remarkably consistent from the left to the right of the political spectrum. It was on the watch of the former President Aleksander Kwaśniewski and the former Prime Minister Leszek Miller that the CIA flew terrorism suspects in and out of Poland. Both of them have a long record of public denials and assertions that whatever may have happened, they only ever acted in the best interest of the state. Kwaśniewski and Miller are erstwhile colleagues in the party that passes in Poland as the mainstream “left,” but on this issue (and this issue only), they have found support across the political spectrum all the way to the far right.

This state of official denial has persisted in spite of several high profile international inquiries, notably two Council of Europe reports on extraordinary renditions in Europe compiled by the Swiss Senator Dick Marty and the Fava Report issued by the European Parliament’s body, which investigated the use of European countries by the CIA for secret prisoner operations. Following a political shift in Poland in 2007, the then-new Prime Minister Donald Tusk initiated a domestic criminal investigation to determine if public officials abused their powers by allowing the establishment of an extraterritorial zone under the control of another state’s jurisdiction. It is this investigation that has now been condemned by the European Court.

And here we are now, in 2014, in a paradoxical situation that has been brewing for a long time. On the one hand, international law carries an absolute prohibition of torture. The principled rejection of inhuman and degrading treatment or punishment by democratic states is as close to a universal international consensus as it gets. Practice may fail to live up to principle but that does not invalidate the principle. On the other hand, however, we have a democratic government in Europe, which is sending a clear signal to its constituents that sometimes torture is not only permissible, but it may be in fact the right thing to do.

“I wish this issue [Poland’s cooperation with the CIA] was never leaked,” said Polish Foreign Minister Radoslaw Sikorski in a TV interview on the day the Strasbourg judgment was announced. “There are things between allies, and between secret services, which ought to remain secret.” He also warned against “excessive sympathy for the terrorists who pretend to be victims.” It is, of course, impossible to say what proportion of Poles would fully concur. However, an unscientific survey of online comments section of Polish papers on the left and on the right suggests that there is no shortage of those who declare that terrorists deserved the treatment they got and worse, and who hope that Poland will reject the Strasbourg judgment and never pay the damages.

No matter what happens next, people who hold such views will not change them overnight, and will not change them in response to the ruling of a Court perceived by many as distant and ideologically suspect. But what could begin the difficult work of delegitimizing such discourse would be a strong political response coming from the government itself.

Józef Pinior, an independent liberal senator and former Member of the European Parliament, who stood out among Polish politicians as the most outspoken critic of the policy of denial, noted at a press conference that what is needed is “a very clear declaration of the Prime Minister that Poland will not tolerate violations of international law on its territory, that there will be no torture and no private prisons.” He called for a swift completion of the domestic investigation and added that “generations of Polish opposition fought for Poland to be governed by the rule of law, for freedom from torture in prisons, and for freedom from arbitrary detention by the political police.” Pinior’s long-held position has been judicially vindicated in Strasbourg, but surely he and other Poles outraged by what the government did in their name would have preferred for justice to be served closer to home. The European Court has offered satisfaction, but the bitterness is certain to linger.

One of the most depressing aspects of the current war in Gaza is the repetition of images in discourse about the conflict. “Defensive Edge,” “Pillar of Defense,” and “Cast Lead” all bleed into each other. Images of death and destruction recur across patriotic monikers that stand as a monument to the limited inventiveness of the national copywriters. Nothing much seems to change. And yet, with every iteration of death and destruction, Israel’s political culture turns more and more to the right.

This is felt most acutely by Israeli Arabs, but it is also being increasingly felt by left-wing Jews in Israel. There are, of course, striking differences between the two experiences. For Arab Israelis, the situation has become increasingly frightening, especially in Jerusalem. Recently, two Arabs from Jerusalem were attacked, and both are now hospitalized in serious condition. They were asked for a cigarette at the tram station, and when the assailants recognized their Arabic accents, about twelve men set upon the two with bats and crowbars.

Nothing like this has happened to my leftist-Jewish friends. One narrowly avoided being beaten up at a rally in Tel Aviv, but this was pretty minor; the barrage of anti-left calls such as “death to the leftists,” “leftists to the gas chambers!,” etc., has yet to be acted upon. Again and again, when I talk to leftist Israeli Jews, I hear a very different story than the one I hear about Israeli Arabs, one that is surely less threatening than crowbars: they are not taken seriously.

When leftist Jews talk about the situation, people look at them as if they are delirious, as if they were speaking an unintelligible language. I am not speaking, mind you, about reactions from extreme right-wing home-grown fascists, but rather from regular run-of-the-mill Israelis. They cannot fathom how my friends think Israel should enter negotiations with Hamas (“They all want to kill us,” “every truce is a ruse”), and while they note that Gazan children dying is a sad thing (they say so with detached empathy), they are convinced that it is obviously Hamas’s fault and, therefore, that there is not even a relevant argument.

Obsessing over Facebook posts, I have seen some of my outspoken Israeli friends being mocked, and not knowing how to answer, opting for relatively safe clichés such as “we must hope.” I suggested to a friend that Hamas’s attack tunnels, while horrific, are politically convenient targets, as they allow Israel to proclaim it had “achieved its goals” whenever it chooses to withdraw. In response, he stopped the conversation and said I was “a radical” and that there was no point talking to me. A left-voting colleague told a friend that although “this is a tragedy,” she decided not to demonstrate against the operation, since “one should not do so in a time of war.”

These are small things. Each one by itself seems minor. But they are crucial. A political culture is, partly, the horizon of the things one can say and still be considered a legitimate voice in an ongoing conversation. One of the things that is happening in Israel, deepening from year to year, is that the horizon of the intelligible is shifting to the right. There are still left-wing journalists — notably, Amira Hass and Gideon Levy — who are being heard. But they are being heard by fewer and fewer people. For most Israelis, their writing is no longer a political threat from a legitimate political voice; it is simply the ranting and babbling of irrelevant lunatics, self-hating Jews, auto-anti-Semites.

Hamas, as others have noted here, is a convenient enemy for the Israeli right wing. Enough of its leadership iterates that any peace is illusory and that the goal will always be the destruction of Israel. With such enemies, it is easy to forget that a large chunk of the Israeli right-wing ruling coalition would deny Palestinians any sort of statehood, easy to overlook the increasing power of the settlers’ messianic vision, and easy to minimize what is happening to Israeli political culture even beyond the avowed right-wing. But with every round of conflict, a new taken-for-granted is etched. It is a taken-for-granted that shrugs off the death of Palestinian children while it transforms the left from a viable political opponent to a pitiful group of madmen. It may not be very useful to prognosticate, but the intensification of this new political culture does not bode well for the future.

Jeffrey Goldfarb argues that if we criticize the behavior of one group, we should not turn a blind eye to the behavior of another. He complains that the contributions of Yossi Gurvitz, Omri Boehm, and Nahed Habiballah to this seminar, while effective in their criticisms of the policies and practices of Israel, ignore the terroristic tactics of Hamas. The truth is, he suggests using a phrase of Omri Boehm, that both Israel (or at least its ruling coalition) and Hamas are “collaborators” in terrorism. Insofar as they both seek “military solutions to problems that ultimately must be addressed politically … they share responsibility for the escalating inhumane death and destruction.”

Jeff’s initial point is a good one. There are good moral as well as political reasons for Palestinians and their supporters to look critically at the tactics of their political leaders — not only of Hamas but also of Fatah. But to move from this to the idea that Hamas and Israel are “collaborators in terrorism” and that they “share responsibility” is absurd. When we are discussing political conflicts, we need to bear in mind what the conflict is about. In South Africa in the apartheid era, both the security forces and the ANC were guilty of torture and murder. But it remains the case that the security forces were fighting to maintain a grotesquely unjust system and the ANC was struggling to overthrow that system. To speak of them as “collaborators in terrorism,” or as “sharing responsibility,” would be patently absurd. So too in the case of Palestine and Israel.

Talk of “collaboration” and “sharing” also obscures the fundamental inequality in the relationship between Hamas and Israel. Israel is by far the most powerful country in the region; it has a technologically advanced army and murderously efficient security forces. It obscures the ill treatment, oppression, and humiliation that Israel inflicts on the people of Gaza on a daily basis. It turns a blind eye to the Israeli blockade and the fact that nearly two million people are literally imprisoned in Gaza. There is nothing remotely comparable on the other side. We must take into account the nature and extent of this inequality if we are to understand the different modalities of violence in play in the struggle between Palestinians and Israel. The term “terrorism” here, and elsewhere, stands in the way, not merely of understanding the nature of struggles between unequals, but also of coming to terms with what is morally at issue in these struggles.

Despite Boehm’s brilliant demonstration of the “terrorism” of Israeli law, my suggestion is that it would be a good idea to avoid the use of the term “terrorism” for the time being. This should not diminish our horror at murdered children, families destroyed (whether by poorly aimed rockets or “surgical” interventions), the torture of suspects, and so on. But it would be a step towards placing these in a wider and more pervasive spectrum of horrors. And it might lessen the temptation towards a moral absolutism that precludes an understanding of and negotiation with groups that are labeled “terrorist.”

Goldfarb speaks mostly of Hamas. He ignores the Unity Pact between Fatah and Hamas, now only two months old. No doubt the weakness of Hamas was one of the main motivations, but it was undoubtedly a step towards a less intransigent attitude towards Israel. This pact was rejected by both Israel and the USA. As usual, the term “terrorist” was made to do a lot of work. Netanyahu responded by tightening controls on the Gaza border and announcing without evidence that the Hamas leadership was responsible for the murder of the three Yeshiva students and that Hamas would be made to pay. It was clear that his policy was to destroy the alliance between the PLO and Hamas, even though some measure of political unity is necessary if there is to be negotiation, let alone a political settlement, between Israel and Palestine.

Until I read Benoit Challand’s contribution to this forum, I did not know of the negotiating position put forward by Hamas, which was ignored both by Netanyahu and the mainstream media. Much of the content is familiar to those who have followed the situation in Gaza. Nevertheless, for Netanyahu and his supporters to have tried to explain why it is not possible to allow farmers and fishing communities to pursue their livelihoods and for internationally supervised entry to and exit from Gaza, would have meant going beyond the language of terror. But as Benoit emphasizes, what was even more important was the suggestion of a long-term truce enabling a period of negotiation, and thus a movement beyond the cyclical violence of retribution and revenge. That this was not rejected but in fact ignored is further evidence of Netanyahu’s commitment to a morally untenable status quo.

As far as military euphemisms go, Operation Protective Edge is not the worst offender. As any reliable voice will point out, Israel faces significant danger from Hamas and its various factions. The threats posed by missile attacks and by deadly incursions through a significant tunneling network out of Gaza into Israel are real, and they are serious. It is unreasonable to expect Israel to do nothing about them indefinitely. Yet self-defense does not mean that anything goes. Therein lies the problem. The population density of Gaza is high, and so the risk of harming civilians in any military attack is great. Any military strike, no matter how precise, will almost certainly hit civilians. Assuming, for a moment, that this war has been taken as a last resort, one of the remaining moral questions about this war becomes one of proportionality. Are the attacks proportionate to the desired outcome? In other words, is the risk to Palestinian civilians mitigated by the intended aim of an Israeli military strike? There has been some excellent commentary already on the topic of Just War, proportionality, and Operation Protective Edge. However, something is being missed when we focus on the normative language of Just War and do not pay enough attention to the moral terrain on which decisions are based. As the scholar Piki Ish-Shalom of Hebrew University writes, “purists who hold the too easy high moral ground look at the world from a nowhere point … [that] allows them to see nothing.”

It is tempting for those of us in the Diaspora to take an easy moral ground: not necessarily that of the purist, but that of the detached person who does not have to live with the consequences of action or inaction. However, Israel is not Las Vegas. What happens in Israel no longer stays in Israel, and Diaspora Jews cannot, as Sigal Samuel writes, have it both ways:

“Dear Diaspora Jews, I’m sorry to break it to you, but you can’t have it both ways. You can’t insist that every Jew is intrinsically part of the Israeli state and that Jews are also intrinsically separate from, and therefore not responsible for, the actions of the Israeli state.”

Diaspora Jews are not responsible for what the Israeli government does. However, to the extent that Diaspora Jews and Diaspora Jewish organizations affirm the importance of Israel as the Jewish State for Diaspora Jews and as a consequence defend Israel uncritically, Diaspora Jews can be held morally accountable for this support and its consequences. The Diaspora does not stand from nowhere, but from a place that really matters.

First, in response to Operation Protective Edge, there have been increasing numbers of attacks against Jews in the Diaspora. The correlation here is not to say that Israel is responsible for the behavior of anti-Semites, but that Israeli military actions are not isolated in their repercussions. As the Jewish State, Israel needs to take into account how its security policies may be decreasing the security of the Jews in the Diaspora and raising fear among Diaspora Jews. It is disturbingly ironic when the Jewish State, the State that was built to protect and offer security to the Jewish people, contributes to raising Jewish insecurity. Diaspora Jews are, in this sense, not detached observers. A serious discussion on this issue is urgently needed.

Second, in a world where Israel is verbally condemned from all sides, where from Europe, the continent that once tried to rid the world of the Jewish people, come voices condemning Jews who defend themselves, it is up to the Diaspora to speak hard words to Israel, because the Diaspora may be the only critical voice that Israel will listen to. One of these hard words is proportionality, although not for the reasons usually written about. The Israeli government and the Israeli military regularly claim to abide by international law. If this is so, however, then the firing of artillery into densely populated areas (as recently reported by the New York Times) is a serious violation.

I have never served in the military. I have never been in battle. Nevertheless, I can appreciate that when facing enemy fire, the pressure of the moment requires fast action and swift judgment. Be that as it may, soldiers are professionals, and all soldiers are expected to act according to the rules of war. This is partly what distinguishes military soldiers from other types of combatants: their training and the responsibility that each soldier is expected to bear. The troubling question, consequently, is why Israeli military officers were prepared to use artillery in heavily populated urban areas.

The first answer could be as simple as context and timing. The soldiers were under attack, they needed to respond in self-defense, and only artillery was available at that moment. I am prepared to accept such an answer, although it does not excuse a possible violation of international law. If the law was broken, those who broke it need to be held responsible.

The second answer, which I find more troubling, is that some Israeli soldiers did not care. The euphemism “mowing the grass” is a disgusting phrase to describe Israeli security policy, but it accurately describes the increasing de-humanization of the Palestinians by Israel. People are not grass. Morality is easy if it applies only to people we like. It is when we have to deal with those who are not like us, and those with whom we disagree, that morality becomes difficult and our moral being is revealed. The occupation of the West Bank, the ongoing conflict with the Palestinians, and the anti-Israel rhetoric that so regularly comes out of Arab countries have all contributed to Israelis viewing the Palestinian people not as a people, but as a problem. Sustaining the military occupation of the West Bank has resulted in Israeli soldiers de-humanizing their Palestinian neighbors and taking these opinions into post-military life. The anti-Arab and anti-Palestinian racism that has taken hold of Israeli society is deeply disturbing. The troubling conclusion is that even if there were military alternatives to using artillery to attack an area with a known UN school in the line of fire, the soldiers who ordered the attack may not have cared. They may not have cared because they were under fire and because they did not care about the Palestinian civilians that were in harm’s way.

I really hope that this is not the case, that the artillery attacks were just terrible accidents, isolated incidents of indiscriminate firing. But a part of me thinks that there may be more to it. Indeed, in the second attempt to rescue Lt. Hadar Goldin, the Israeli army instituted its Hannibal procedure, which is intended to prevent Israeli soldiers from being captured and which may involve massive use of force. In this case, the extensive and indiscriminate use of fire resulted in the death of 130 Palestinians.

This violent and deadly conflict is not mowing the grass, or weeding, or any kind of gardening. It is, as the soldiers on the front lines and their families know, a war. If this war is ever to end and offer a long-term solution, Israelis and Diaspora Jews need to start caring more about the Palestinian people as human beings. At issue here is more than the need to be compassionate. At issue is what type of people Diaspora Jews want to be when we always hedge our statements about Palestinian suffering by reference to Palestinian terror, and what type of people Israeli Jews want to be. Israel has, alas, made it acceptable for Jews to be racists. This war may only further exacerbate the moral damage racism is doing to the Jewish people. The bombings of the UN schools were most likely disproportionate. They were disproportionate in regard to the risk to civilians, and they were disproportionate in regard to the risk they posed to Israel’s and Diaspora Jewry’s moral compass. Legitimize the attack on a school, and we de-humanize not only those who were killed and injured, but ourselves as well.

This is the prepared text answering the question “What do we really know about transitions to democracy?” for the General Seminar of The New School for Social Research, March 19, 2014.

It was a quarter of a century ago, in 1989, that a new kind of revolutionary imaginary emerged, one that promises a new beginning, and demonstrates the possibility of comprehensive systemic change without bloodshed. Velvet or otherwise un-radical, this kind of revolution has become a site of tangible hope, a site in which words have power, where people regain their dignity, and realize their agency through instruments other than weapons. Negotiated revolution is not an oxymoron, but it is still an extraordinary event, as dictatorships are by definition opposed to any spirit of dialogue and compromise.

The shift from the logic of revolution to the logic of negotiation had been tested in Spain in 1975 and Chile in 1988, and it made possible the negotiated transitions in Poland and Hungary in 1989, and in South Africa in 1993. And yes, it was not a miracle, but it was new, and as Arendt suggested, it did appear in the guise of a miracle. The very fact that the new formula was developed locally, setting in motion a mechanism for negotiating the transformation of a dictatorship into a democracy, may be the most precious political accomplishment of an otherwise dark century rife with wars, genocide, and an array of modern despotisms, the termination of which has too often been left to the mercy of multinational institutions and alliances.

No matter how widely spread the new imaginary has become, the transition to a meaningful and enduring democracy, never an easy project, has a chance to succeed only if it is initiated and owned by the local people, and sustained by their voices, imbued as they are with their respective histories, cultures, and economies. And we saw this, whether at the front gate of the Gdansk Shipyard or in Tahrir, Taksim, or Maidan. People who gathered there saw themselves above all and for the first time as citizens, and indeed the squares, activated by a newly arisen public realm, have become both sites and narratives of societal hope.

The emergence of such a realm creates the conditions for dialogue, engaged conversation, negotiation, and compromise deeply invested in the democratic promise, but this is only the first act. How to move from here, and how to jump-start change when the storming of the Bastille is collectively taken out of the equation?

I like to think of the furnishing of democracy as beginning with a particular piece of furniture, the round table, which becomes the main prop of the drama I am talking about. It was exactly 25 years ago, in the spring of 1989, that this table facilitated the dismantling of the one-party system in Poland. It was made on special order in a furniture factory near Warsaw, was about 26 feet in diameter, and accommodated 57 people. As much as it is often an actual piece of furniture, the Round Table itself is above all a conventional act, an idiom of political compromise, which is both a site of, and a powerful instrument for, the release of political performativity. Whether real or symbolic, with no privileged seats, it has the effect of safeguarding equality in communication when the word crosses the barriers between the speech zones of the participating parties. Such participation, mediated by a reasoned and informed exchange, implies the possibility of learning, of self-transformation on the part of those participating, and therefore, the possibility of compromise. And finally it establishes the grounds for a new order, and marks the beginning of the long, tedious, and less thrilling process of building the new democratic order.

In Poland, or in South Africa, the two cases I know best, the Round Table provided tools for institutionalizing a dialogue between those who held dictatorial power and those social movements which — though still illegal, and often represented by people just back from prison or exile, and labeled enemies of the state — were now acknowledged by the regime, however reluctantly, as the only ones able to bring credibility to the proposed dialogue and an eventual contract. Many years later Tadeusz Mazowiecki, who had become the first Prime Minister in democratic Poland, remembered: “It is really uncanny that we sat down with them at the round table, but we did.”

The two Round Tables brought together a pragmatically motivated, but until recently a rather unlikely assembly of modern subjects, half of whom, the oppressed, were well aware of having been stripped of their basic rights and capabilities as citizens. The other half, the oppressors, acknowledged — even if reluctantly — that they were the keepers of a system whose very existence depended on excluding large parts of society from participation in the political decision-making process, and therefore from access to the resources and capacities needed to advance the well-being of both the community and its individual members.

The Round Table institutionalizes dialogue by providing a space of appearance (a concrete temporal and spatial framework), by authorizing and legitimizing the actors, by necessitating the drafting of a script, by establishing rules for the conduct of negotiations, and by foreseeing the need for a contingency infrastructure in which a lack of agreement or specific stalemates can be dealt with.

In this situation the agreement expected to be produced at the Round Table represents more than what it states. The Round Table itself, however extraordinary it may have appeared, becomes an event staged according to agreed-upon conventions. And it is this very aspect of the whole arrangement that makes its performativity possible, as it acquires the status of an effective practice endowed with power for jump-starting a change of the political system. I would like to think that this is not only a real alternative to tanks and bullets, but also a kind of force that can help recover the lost dignity of people and their identity as citizens.

The round table talks that facilitated the democratic transformation of Poland’s one-party state took two months: they began in February and concluded in April 1989, at a time when the communist system in the region still seemed — even if no longer robust — certainly irreversible. Yet clearly those talks were far less visually spectacular and telegenic than the joyous crowds hammering at the Berlin Wall half a year later. The talks that brought an end to apartheid in South Africa, though lengthy (extended over twenty-three months and interrupted by dramatic and unplanned intermissions), did not produce stunning images either: certainly nothing comparable to those one-person, one-vote images of the winding lines of people waiting to vote for the first time in their lives. The real work of hammering out such agreements is simply not mass-media-friendly, even if it is a critical space of appearance, in which both subjugators and subjugated are devising and testing a new formula for sweeping political change. Most important: the launching of such a dialogue is not the result of the “good will” of the ruling regime, but of a combination of factors, one of them being recognition by the regime of a creative, emancipatory invincibility demonstrated by society, the other party to the negotiations.

Of course, the cases of Spain, Chile, Poland, and South Africa are hardly analogous. The one thing they had in common was, generally speaking, the ostentatiously non-democratic character of their regimes, which were otherwise very different from each other. What may seem a paradox at first glance is that while in Poland it was the hegemonic communist party that was the ultimate confiscator of civil and human rights, in Spain and in South Africa it was the outlawed communist party that acted against their respective dictatorships of fascism and racial apartheid.

Perhaps the most important question concerns the prerequisites for entering into a process of negotiating change. What does it take for a dictatorship to bend enough to open, and to open up for a Round Table or any other idiom that might facilitate a dialogue with an ignored society and its outlawed civic structures? What can persuade the oppressed — in fact the very people, often yesterday’s political prisoners, who are known for their indomitable tenacity — to sit at the same table with their oppressors?

First, it is important to observe that in such cases the ancien regime is usually in the process of weakening. Its core ideological motivations are long gone, or they are disoriented; it has trouble paying its bills and dealing with social unrest, and it loses its few foreign supporters. Fascism in Spain began to deteriorate in the 1960s. Communism in Poland lost face once and for all in 1981 when, unwilling to broaden the public sphere, it imposed martial law. In South Africa, the economic sanctions and international isolation began to take their toll on the apartheid government in the mid-1980s. In each case it took approximately one decade for the ancien regime to realize that it could no longer manage crises, and that the existing institutions of public life were unable to bring stability (let alone creativity!) to the economic, political, and cultural realms. Such governments still have considerable force at their disposal; so they can stay in power but do little else.

A second element facilitating the Round Table is less frequently discussed: the precarious state of the anti-regime movements, the valiant society itself: its organizations and its leadership showing visible signs of fatigue. And it is precisely because of this kind of balance of weakness on both sides that the Round Table is not only possible but in fact unavoidable. Adam Michnik, in a lecture delivered at the annual Democracy & Diversity Institute, organized by the New School’s TCDS in Krakow, Poland, July 1999, put it this way:

Negotiations are possible when the resistance of the democratic opposition is strong enough that the dictatorship cannot destroy it completely, and when the dictatorship is strong enough that the opposition cannot overthrow it from one day to the next. The weakness of both sides becomes the national opportunity.

Joe Slovo, the legendary father of the South African Communist Party, announced boldly:

We are negotiating because towards the end of the 80s we concluded that as a result of escalating crisis, the apartheid power bloc was no longer able to continue ruling in the old way and was genuinely seeking some break with the past. At the same time, we were clearly not dealing with a defeated enemy and even a revolutionary seizure of power by the liberation movement could not be realistically posed.[1]

In both Poland and South Africa the actual negotiations were preceded by years of cautious contacts and informal, often failed, communication between the adversaries. The gradual regaining of real subjectivity — the process enabling members of society to become the agents of their own lives, which in the case of South Africa meant non-racial democracy — was part and parcel of the negotiation process, and radiated well beyond the space and the actors of the talks themselves.

The usual prerequisites for launching a dialogue are the freeing of political prisoners (like Michnik or Mandela), a stipulation that the negotiations will be preceded by, or will include, the legalization of outlawed organizations (the Communist Party in Spain, Solidarity in Poland, the ANC and other liberation movements in South Africa), and that they will establish freedom of speech and information. In South Africa an important condition was that both sides renounce violence, which in the case of liberation movements had meant armed struggle, and in the case of the apartheid regime had meant the use of specialized state security forces. A separate and very sensitive stipulation concerned the past, i.e., crimes perpetrated by the dictatorship and sometimes by the liberation movements: namely, a tacit understanding on both sides that a successful Round Table would exclude guillotines or Nuremberg Trials.

As the launching of a dialogue between enemies is a daunting task, an external third party, serving as promoter, guardian, or intermediary in the process, usually assists it. Interestingly, those are often surprising or even unlikely allies. In South Africa they were the Afrikaner nationalists, or more specifically the verligte wing of the governing National Party, enlightened Afrikaner intellectuals, mostly academics, but still loyal to the nationalist outlook. It was they who initiated and cultivated the early clandestine contacts with the ANC leaders in the late 80s, and it was they who within their own party started a discussion on the necessity of reforms.[2] Both in Spain and in Poland the third parties that facilitated the dialogue were pre-modern institutions deriving their own legitimacy not from the people, but from divinity, the institution of the monarchy and the Catholic Church, respectively. Still, perhaps one should not wonder: after all, it was precisely these forces that in the past had paid the highest price in the course of modern revolutions.

The Polish and the South African negotiations were each additionally facilitated by an unusually favorable external context. In the case of Poland it was the only foreign context that mattered to the dependent societies of the communist bloc: the Soviet Union, where Gorbachev’s policy of perestroika and glasnost disoriented the hard-liners in the communist party and severely shook their self-confidence, while encouraging the society.

The end of the Cold War, the democratic transitions in Eastern Europe, and the collapse of the Soviet Union had an impact on the situation in South Africa as well. These developments not only further weakened support for the apartheid regime in some corners of the world (it could no longer exploit the anti-communist fears of its few remaining foreign allies), but also terminated the support extended by the Soviet Union to the communist party of South Africa, an important actor in the anti-apartheid movement. Moreover, the Gorbachev reforms lessened the suspicions of the Pretoria regime that the anti-apartheid movement was directed from Moscow.

Finally: the would-be negotiators both in Poland and in South Africa worked in a climate influenced by the presence of a new global actor, an increasingly influential human rights community, expressing itself through overlapping networks of non-governmental, transnational organizations monitoring abuses and systematically reporting them to key international institutions and to the world at large.

Even if not particularly telegenic, the Round Table process in Poland was an intense 59-day-long political drama with over 400 performers (a panoply of negotiating teams representing both sides), taking place sometimes simultaneously on three round stages, where three separate ensembles debated the problems of the economy and social policy, trade union pluralism, and political reforms.

Though politically representative, the Polish Round Table was absurdly, and disappointingly, gender exclusive. Only five women were invited as negotiators at the three main Tables — so women represented just a shade more than 1 percent of the Round Table cast. In South Africa, just five percent of the negotiators were women, also a very low number given the vibrant women’s organizations there, and the attention given to gender issues by the liberation movements. Yet — unlike those in Poland — women were mobilized across racial and political lines by their absence at the negotiating table, and launched a nation-wide campaign to claim their civil rights and to fight against their conspicuous political marginalization.

As successful as these two Round Tables were, sharp criticisms of the agreements emerged very quickly in both Poland and South Africa, denouncing them as dirty deal-making, as a conspiracy in each case between elites.

I like Michnik’s rejoinder in that Krakow lecture:

The path of negotiations brings many disappointments, bitterness, and a sense of injustice and unfulfillment. But it does not bring victims. Disappointed are those who are, after all, alive.

NOTES

[1] This is also why Mandela and De Klerk, despite continuous threats to the fragile process of negotiations coming from all sides, kept resuming the talks. The image, sometimes translated as “mutual siege” (or mutual dependence), was brought up by Jeremy Cronin in his lecture in Cape Town on January 17, 2006, at the annual Democracy & Diversity Institute organized by the New School for Social Research in collaboration with IDASA. Cronin, who took part in the CODESA talks as a representative of the newly un-banned South African Communist Party (SACP), was at the time of the lecture the deputy secretary of the SACP, and a Member of Parliament representing the ANC.

[2] I’ve been cautioned about emphasizing the one, verligte–related narrative by one of the members of this group itself, Professor André du Toit, who took part in the 1987 meeting with the ANC in Dakar.

Section: Identities

When I was in primary school, there were two street names in my hometown that I always got wrong. My teacher looked at me with disbelief and worry when I called the street next to the school Wolgaster Straße.

My geography skills improved dramatically after 1989, when the street names finally caught up with me. I grew up in the German Democratic Republic; call it DDR, GDR, or East Germany. The street names my teachers insisted on were Wilhelm Pieck Allee (Allee means promenade) and Otto Grotewohl Allee, named after the first President and Prime Minister of my dear republic. At home, I had learned to refer to these streets as Wolgaster Straße (Straße means street) and Anklamer Straße. Wolgast and Anklam are nearby cities. If you go to Wolgast, you leave the city via Wolgaster Straße. These street names are neat mnemonic devices; they point to nearby places. My pre-1989 teacher was not worried about my lack of knowledge. She must have known that the names I used were from a different time. For her, remembering the wrong name was worse than forgetting the (politically) correct name. After 1989, the old names returned.

Since then, I never got in trouble over street names again – that is, until I moved to Berlin for part of my sabbatical. It was my first time living in Berlin. My parents grew up in this city. In their twenties, they moved away. Their memories of the city are from the 1970s. When I talk about places, subway stops, and streets in Berlin, my mother often has no idea what I am talking about. Danziger Straße? Torstraße? Where would that be? These places are not even in the former West Berlin; they are in the East. My parents knew them and yet don’t recognize them. Danziger Straße used to be called Dimitroffstraße when my mother roamed these quarters.

The obsession with naming and renaming streets pre-dates the East German state. I recently finished reading Hans Fallada’s amazing novel Alone in Berlin (Jeder stirbt für sich allein), which tells a story of futile anti-Nazi resistance and includes a 1944 street map of Berlin. This 1944 map includes evidence of then new names, such as Hermann Göring Straße. The name did not last for long, of course. Nowadays the street is called Ebertstraße, after the Weimar Republic President Friedrich Ebert.

The current Nordbahnhof (Northern Station) was called Stettiner Bahnhof until 1950. Sections of the current Torstraße used to be Elsässer Straße (Alsace Street) and Lothringer Straße (Lorraine Street) between 1871 and 1951, when it became Wilhelm Pieck Straße, the name by which my mother should know it. In the wake of the wars with France, Alsace and Lorraine had been claimed as parts of Germany. These claims were embedded in the Berlin streetscape via street names.

It does not need explaining that names such as Hermann Göring Straße disappear. But what crimes, one might ask, did cities like Danzig and Stettin commit? They used to be German cities in a tenuous way. Now they are Gdansk and Szczecin, Polish cities. The Stettiner Bahnhof was the place to catch a train to Stettin when it was part of the German state. Today, trains leaving for Szczecin depart from the new Central Station (two hours, about 30 Euros, www.bahn.de). Nordbahnhof has been demoted to a simple subway stop.

The Berlin streetscape contains layers of references to a broader European geography and to political desires. Train stations were named after the destinations of departing trains. Streets were named after places in Europe (Bornholm, Stockholm and Oslo), after places that the roads were leading towards (Prenzlauer Allee, Potsdamer Straße), and as a way of making claims to cities and places as German (Stettin, Danzig, Elsaß, and Lothringen).

After the end of the Second World War, the East German state renamed streets and places to remind the people of newly important politicians, but also in order to erase the references to places that were no longer German. Thus, Leipziger Straße kept its name, but Danziger Straße became Dimitroffstraße. Likewise, Stettin station did not become Szczecin station, but Northern Station. It is as if forgetting that Danzig and Stettin ever existed was preferable to remembering the new names of these cities.

Which of the East German names for streets and places remain? Nordbahnhof is still Nordbahnhof, Rosa Luxemburg Straße remains, but Dimitroffstraße and Wilhelm Pieck Straße ceased to be. Names that do not sound too “East German Communist” stayed: Nordbahnhof. Dimitroff and Pieck had to go. Rosa Luxemburg and Friedrich Engels, however, managed to stay.

Berlin is a city with layers of names, as I keep seeing in conversations with older people from East Germany. Recently, I started to feel old as well. I went to get a haircut, talked to the hairdresser, and found out that we grew up in the same city. I asked her which school she attended, and she said “Hanseschule.” I don’t know any school by that name. I know all the schools of the city by their official East German names; and only some of them by their post-transformation names. When I graduated, in 1997, the name changes were still fresh. So I asked the hairdresser about the name of her school before 1989. She didn’t know. That’s when I felt old. I had just asked the kind of question that my mother asks me when she tries to understand where I work, shop, eat, and visit people in Berlin.

My inability to translate from new to old names in Berlin (partially remedied by Wikipedia and a great online collection of old city maps), the hairdresser’s inability to remember the old name of her school, and my inability to remember the new name of her school made us speechless. We cannot talk about places that we have no common name for. Talking about cities, schools and streets in East Germany, you have to translate between old, new, and very old.

Beneath the surface, we East Germans of different generations speak different languages. We need to remember names that are no longer and names that are not yet. Such acts of memory and translation are crucial, for otherwise it is impossible to relate to the cities, the schools and the wider world of different generations.

A version of this article was first published in Deliberately Considered.

Recently, I took my son to the doctor for his 13-year-old checkup. “He’s 5’8”, she told me, “and he hasn’t even begun his growth spurt yet.” I was also a late bloomer: I’m 6’1” now, but at his age I was 5’2”. Looking at the chart, I could see there was an even chance he’d hit 6’4” in the next few years.

I knew it was time for The Talk.

My son doesn’t get out so much. As with most middle-class kids his age, the problem isn’t getting him off the corner; it’s getting him off the computer. My son, however, is African-American.

I’m not. But I’m not stupid. I know that in this country, a large, young, African-American man is at risk every time he goes out in the world. And even if his personality and my middle-class resources can keep him away from many of the dangerous situations that other young people might create, there is another danger. That’s the one I can’t do anything to control.

So we talked.

I explained to him that he must understand that some cops will see him as a danger no matter what he is or isn’t doing. No matter how he dresses or talks or what grades he gets. That he must not expect, if he should be stopped or questioned, that the police officer he encounters is rational. He should see the cop as a dangerous, terrified animal, halfway eager to resolve his fear by attacking. Thus: no sudden moves. Do nothing but follow explicit instructions. Do not struggle or argue. Do not give any information about any other person except your parents. Do not believe any promises the police make. If they take you into custody, continue to repeat “I want to call my dad,” and when you are older, “I want to call my lawyer.”

This was four days before Michael Brown was shot.

Once, while hiking, my son and I saw a black bear. It was medium-sized, so, in other words, as big as the two of us combined. But we knew what to do. No sudden moves. We kept our distance. We protected ourselves.

The bear didn’t have a semi-automatic Glock, and Mike Brown wasn’t allowed to keep his distance. Darren Wilson pursued him, shot him repeatedly even while Brown’s hands were raised high in surrender.

There’s nothing new about the talk I gave my son. The parents of young African-American men have had to give versions of that talk for centuries. As a historian of slavery, I’ve read the testimonies of men and women who survived slavery. They told of how their elders tried to tell them what they could and couldn’t do if they hoped to live: how to bear torture, how to make a day’s quota of cotton picking, how to evade violence. Sometimes, even how to run away.

There’s nothing special about me having to give The Talk, either. I might be the first white male Cornell professor father to have to give this talk to his African-American son, but I doubt even that uniqueness. Meanwhile, women — mothers and others, not always black — have given the talk, countless times.

Perhaps women who give the talk can’t fully understand the tumultuous interior experience of adolescent masculinity; that I can do. But while I can listen to my friends explaining their experience of rage at the suspicion, violence, and humiliation that was regularly directed at them by the police, I can’t fully experience it.

I’m lucky for that privilege. I’m not sure I would’ve survived. And yet, at the moment when it starts, every adult who has to give The Talk is in the same dilemma. We love this young man, more than we love our own lives. We have worked for years to bring him to this point. We don’t want to see the fear of white racism take his brilliant joy from him. Yet now we must tell him that we cannot protect him from the people who are officially supposed to protect him. We cannot even honestly guarantee that the information we give him will enable him to protect himself from them.

Four days later, I heard about Michael Brown.

I had wild thoughts. I would equip my son with a GoPro camera that would automatically livestream any possible encounters with the law back to a secure server. But even though I think cameras attached to cops on the job are one of the few good ideas to come out of this disaster, we can’t attach cameras to our adolescent children 24/7. Nor should we.

My most frightening feeling, however, said this: No one out there values my son’s life. And that’s true. When the chips are down, law enforcement agencies and legislatures and the broad white public identify more with the fear and aggression of the police than with the terror and rage that young black people and their parents have to experience. History justifies the parents’ fear: from slavery to Reconstruction night riders, to Jim Crow lynchers, to the massive increase in policing and incarceration that has shaped the United States so powerfully over the course of my lifetime.

Then, over the last week, I saw the Ferguson protestors.

I was struck by the young men who dressed like corner guys from back where I grew up. By the young women who marched, wearing respectable clothes or the kind of clothes that the vast babble of social media tries to mock as ratchet. Together they marched in the face of lines of cops playing stormtrooper. They refused to run. They threw tear gas canisters back. They stared down men with machine guns. They helped each other stagger out of billowing chemical clouds. They weren’t afraid to break into a McDonald’s to get milk to pour on the weeping faces of their friends, and they weren’t afraid to stand in front of another store to protect it from looters.

I noticed the older people, too, including the community leaders who helped temper the edge of the anger. This was “The Talk,” too, in the midst of a set of actions so improvisational that no one knew what was coming next. This was the elders loving the young people.

What was in the young men and women that made my feelings change from fear to righteous anger? Was it their love? In the moment of facing the militarized police, it looked to me like they loved each other. Like they loved Michael Brown, even if they never knew him. And they loved themselves enough to get out there and take the risk of protest — not a suicidal one, for that isn’t self-love — but a risk that allowed them to save their own joy in being alive. Their own joy in defying the world that sometimes can only desire them but can’t seem to love them. Their own joy in loving themselves enough to stand up for their own infinite value.

They also showed knowledge, the knowledge of exactly how to confront. Here’s a middle finger for you, stormtroopers, they say. Here’s two. Don’t act like you don’t see them through your gas mask visor. Yes, you can find Mike Brown walking down the street on an August day. You can make him beg for his life and kill him anyway. You can leave his body baking on the street for four hours. But you can’t kill all of us out here together, they seemed to be saying. You can run us off the streets tonight, but we will be back tomorrow.

You’re so afraid of us that you come here by the hundred, in armor. You can lock us up and twist the system of representative government so that white cops and mayors rule a black town, gerrymander the districts to cripple the President. But we will keep coming, generation after generation. Three hundred and ninety-five years in, and just by doing this, we are winning and you are losing. So their courage said.

That’s what I thought I saw, anyway. What I think I’m hearing on Twitter. And it made me sad, and it made me angry, and it also buoyed me up, for the story here runs from generation to generation. It made me remember a young enslaved man named John, beaten by a Georgia overseer whose booted kicks shattered John’s eye socket. An older man named Glasgow went to John, held John’s head between his arm and chest, and with the other hand applied a splint of wax and cloth to hold the bones in place. And he whispered to John something he’d learned: there was a place outside of slavery.

John didn’t lose the eye. One day, after the bone healed, he hid on a cotton ship, and found his way out of the slave South. He published his story. His words helped mobilize the confrontation that led to the U.S. Civil War. That in turn brought slavery to an end. And that reminds me that if he learns from the deep, continually renewed tradition of resistance how to protect his joy while also learning to protect his life, my son will find his way to strike a blow in his time as well.

Since last December, Brazilian shopping malls have become the stage for a new style of youth gathering: the rolezinho. Roughly translated as “little excursions” or outings, rolezinhos can be characterized as planned meetings (organized via social networks) of a large group of youth from poor neighborhoods, with the intent of seeing each other, flirting, eating and drinking at McDonald’s, taking pictures to post on Facebook, and simply having fun. This can be considered a collective action with direct links to at least two different issues that characterize contemporary Brazilian society.

First, rolezinhos cannot be understood without taking into account the almost nonexistence of public spaces for leisure and enjoyment. Coupled with the historic negligence of the Brazilian state to the population’s right to recreation, the ongoing privatization and destruction of the few existent public spaces of the kind leads to the curious situation in which shopping malls and, particularly, their food courts and parking lots, become places for hundreds of young people to hang out.

Second, the country’s economic growth in the last decade, with its emphasis on consumption, dramatically changed the social landscape, reinforcing the notion that in order to be someone, one needs to possess material goods, more specifically, branded merchandise. This last element is emphasized by the musical genre known as “ostentatious funk,” embraced by young Brazilians living in the periphery of big cities, particularly in São Paulo (many of whom take part in the rolezinhos). Commonly framed as the more acceptable version of the Brazilian funk genre, the lyrics of “ostentatious funk,” as well as the video clips produced by the MCs, cultivate a mode of life that places value on consumption. Wearing certain brands of clothing, driving certain cars, and drinking certain liquors would altogether provide status, access to women and, most importantly, entrance into a differentiated social group. As an aside, there are serious gender issues to be analyzed and critiqued within the universe of “ostentatious funk.” Women are usually placed in the same hierarchy and role as any other object for consumption, and very few of them work as MCs. The gender dynamics characterizing this domain certainly impact the rolezinhos. Nonetheless, it is beyond the scope of this essay, as a first attempt to examine such a complex social phenomenon, to address the gender questions embedded in it.

In this context, there is nothing uncommon about young people from the outskirts of one of the richest (and most unequal) Brazilian cities deciding to hang out in the shopping malls. Besides associating this particular mode of consumption with social status, the teenagers taking part in the rolezinho do not want to be locked up at home on the weekends, as pointed out by 20-year-old Jefferson Luís, one of the organizers.

Uncommon, nonetheless, is the effect such an action causes when the participants choose to do it collectively in large groups. The first rolezinho brought together no fewer than six thousand teenagers at a mall in Itaquera, on the outskirts of São Paulo, on December 7. They were met with fear and panic from both the shops’ owners and other customers, followed by violent police repression. Since this first event, the rolezinhos have become a fever, drawing together hundreds (sometimes thousands) of youth at various malls on the outskirts of São Paulo and other major cities in Brazil. At the same time, they ignited a violent response from the administration of the shopping malls. These have resorted not only to private security, but also to the state police force – in many cases legitimated by judicial decisions – either to keep the youth literally out of these spaces by locking the doors and deciding on an individual (racially biased) basis who is allowed in, or to welcome them with tear gas, rubber bullets and, in the most extreme cases, arrest.

Different framings, from the radical left to the most extreme right, have been used to read and interpret this new social phenomenon. I would like to put forward a different way of comprehending the rolezinho as political, one that does not depend upon the intention of the participants (who clearly want to have a good time). Nor do I want to turn them from victims into heroes. Rather, the argument advanced here relies on the meaning of the action itself vis-à-vis established social norms.

Brazilian society has long been understood as one whose foundations led to multiple forms of segregation. Take, for example, the case of race, which plays a very important role in the rolezinhos. Brazil was the last country in the Americas to abolish slavery, in 1888. Despite some attempts to formulate the nation as a model of racial democracy, due to its mixed population and the absence of institutionalized segregation, the reality is that racism pervades every dimension of Brazilian society. While more than half of the population defines itself as black or brown, their average income, according to IPEA, a governmental research organization, is slightly less than half that of whites. The majority of the population in the poorest areas of the large cities, the slums, is black. Access to a university degree only became a tangible aspiration for black and brown Brazilians after the introduction of affirmative action in public universities. Finally, the rate of homicides among the young black population is alarming, and much of it constitutes summary executions by the police force.

Another clear example of segregation, also crucial for understanding the rolezinhos, is found in urban development. The design of the Brazilian urban landscape portrays the deep inequalities that characterize our society: while upper-class neighborhoods have access to facilities, implement renovation and conservation plans and are served by a variety of public services, the poor areas exhibit precarious living conditions. On a certain level, one can claim that our cities display, through their streets, squares, buildings and public services, the differentiated citizenship discussed by James Holston, characteristic of our socio-political heritage. Formally, citizenship is universal and inclusive, but when it comes to the benefits linked to citizenship, especially social rights, only a small portion of the population enjoys them fully. Urban space in Brazil mirrors the unequal distribution of wealth and the political exclusion of the lower classes.

To a certain extent, the economic and social development of the country in the last decade intervened on those two axes of segregation by providing, on the one hand, social goods that allow for upward mobility, such as education, and, on the other, by increasing the consumption power of the working classes. Nonetheless, the already well-established social norms, along with these material forms of segregation, remained in place. These norms, which are constitutive parts of la police in Rancière’s terms, organize society and arrange bodies by defining “the allocation of ways of doing, ways of being, and ways of saying, and sees that those bodies are assigned by name to a particular place and task,” thereby instituting “an order of the visible and the sayable.”

In Brazil, these norms are legitimated, to a great extent, by the myth of racial democracy, largely accepted by a population that most of the time abides by such rules of propriety. In this sense, the so-called “differentiated citizenship” is not only accepted, but also guides the ways in which people organize and manage their lives as well as locate themselves socially.

The rolezinhos constitute the moment when black and brown teenagers decide to collectively occupy sanitized and disciplined spaces of consumption – a consumption which in the first place was not meant for them – in order to make it a locus of enjoyment and fun on their own terms – a form of leisure, linked to a lifestyle much celebrated by “ostentatious funk,” so far segregated and misrecognized. By doing so, they disrupt those very norms, putting into question the police order and exposing the great fallacy of the myth of racial democracy. And this disruption causes fear and hatred. They are bodies occupying spaces and reclaiming a form of citizenship that was not meant for them. And this is precisely why, independently of the initial intentions of their participants, the rolezinhos are political: they are a disruption of the police order. As Rancière formulates it, not only is the police order hierarchical, it also relies on the assumption of inequality. Politics, on the contrary, is founded on the premise of equality. It challenges, disrupts, and interrupts the easy permanence of the police order.

One could counter-argue that the rolezinhos cannot be understood as a dissensus because they aim for inclusion in one of the constitutive spaces of the contemporary police order: the space of neoliberal consumption. However, I am not claiming that politics is pure or devoid of contradictions. Rather the opposite: politics is impure and paradoxical; it blends with the police order without ever merging with it. The politics of the rolezinho is located precisely in its impurity. By aiming to exercise their neoliberal right to enjoy a life of consumption and fun outside the limits of the ghetto, black and brown Brazilian teenagers expose and call into question the very norms of segregation that remain intact in all other spaces of social life. If these norms have not been tamed even by the rules of the neoliberal market, with all its promises of freedom and equality as consumers, one can imagine where they stand in every other social realm. It is time to take a rolezinho into these spaces!

A version of this article was first published in The Dissident Voice.

“I don’t deserve to be raped! No one deserves to be.” These were the words printed on the signs made by thousands of Brazilian women who decided to join a massive online campaign launched through Facebook some weeks ago. The campaign protested against the highly misogynist views made public in a recent survey conducted by the Institute of Applied Economic Research (IPEA). The data showed that 58% of those interviewed either completely or partially agree that if women knew how to behave, there would be fewer cases of rape; that 65.1% agree with the statement that “battered women who stay with their partners like to suffer violence”; and that 26% of Brazilians agree with the statement that “women wearing clothes showing their bodies deserve to be attacked.”

While methodological problems with the survey were pointed out at the time, and IPEA recognized faults in its data presentation, we believe that attempts to qualify the scientific value of the research should in no way obscure the more structural issues it raised. That is to say, the data, and the ensuing sexist reactions to the online campaign, show that such beliefs point not only towards the naturalization of violence, particularly in gender matters, but also to the possibility of a conservative backlash in the country. Indeed, the stronghold of patriarchy is made all too clear by the overall opinions captured in IPEA’s study, as well as in previous surveys.

According to another survey, conducted by Fundação Perseu Abramo (FPA) in 2010, when asked whether they had suffered any type of violence, 40% of the women claimed they had. Corroborating these findings, in 2012 the Brazilian Forum of Public Security reported that rapes have been steadily rising. In 2013, the Secretary for the Ministry on Women’s Policies (SPM), Eleonora Menicucci, announced that every 12 seconds a woman suffers violence in Brazil. FPA’s survey echoed the same beliefs articulated in IPEA’s. These beliefs support a culture of blaming women for their own victimization. It is as if we are constantly challenged to fulfill the duty of protecting ourselves.

While most of us who know a little about the contemporary social landscape in Brazil agree that we are far from a scenario of deep transformation in gender relations, we were still surprised by the data mentioned above, particularly those of us who have been closely following the struggles and achievements of the feminist movements in the country. The surprise derives from what we identify as a gender mainstreaming paradox. On the one hand, we have witnessed and are still witnessing an expansion of feminist discourses and subjects, the mainstreaming of gender in public policies, the amplification of spaces of feminist intervention, as well as a greater participation of women in the public sphere. We have a woman occupying the presidency of the country. This scenario supports our claim that an ongoing challenge to the main tenets of patriarchy is taking place. On the other hand, surveys such as those cited above serve to remind us that the growing rate of violence against women (and other historically excluded groups such as blacks and LGBTQI people) and the widespread misogynist discourses in the media reveal a wave of conservative and sometimes even reactionary forces. These forces work not only to impede further achievements, but also to reverse what feminist and other emancipatory struggles have accomplished thus far. Our question, then, is how do we make sense of this gender mainstreaming paradox? What are the features of the contemporary Brazilian socio-political horizon from which both forces of liberation and traditionalism spring forth? Finally, what role should the feminist movements play in such a context?

A cycle of protests erupted in the country in June 2013, starting with Movimento Pelo Passe Livre’s demands for reversing the increase in bus fares in São Paulo. The protests that ensued took social movements, the government, media, academics and society at large by surprise, given their rapid expansion to various cities, the array of issues being brought forth (from the left to the right of the political spectrum), the use of social media to increase participation, and the alarming violence used by the police forces. While many scrambled to make sense of the protests, unseen in such numbers since the calls for the impeachment of former President Collor in the early 1990s, what they signaled was a growing dissatisfaction with ever-present inequalities, despite the past two governments’ claimed efforts to target them through compensatory policies. The forms of protest escalated to incorporate the claims of those targeting the so-called mega events, housing movements and occupations, unions, student movements, and various articulations of feminism, all in an environment in which traditional and new social movements met the new “new social movements.”

Amidst such diffusion of struggles, the feminist movements played a central role leading up to the recent protests. Not only were women of all ages present at the demonstrations, but so was a focus on the varying dimensions of gender oppression. Issues such as the right to abortion and the alarming rates of sexual violence were explicitly part of the debates generated in the context of the June Protests. But the fight against patriarchy did not happen overnight. While during the military dictatorship and the struggle for redemocratization the feminist movement remained very much grassroots, at the margins of the institutional arena and, at times, underground, from the 1990s onward it was confronted with a dual shift in strategies. The democratic state allowed for greater dialogue and collaborative work, paving the way for a growing institutionalization. Neoliberal reforms, on the other hand, propelled the process of NGOization, as feminist organizations became instrumental in providing the expert gender knowledge required by international agencies. In our view, these transformations provide a key for understanding the gender mainstreaming paradox that characterizes contemporary Brazilian society.

Indeed, the feminist movements can now be mapped onto two spaces. First, there is what we call an institutional domain. Within this domain, not only have historical agendas of the movements been incorporated into the discourse and the policies of the state, but leading figures have also come to occupy relevant positions within the state’s bureaucracy, generating our version of femocrats. At first sight, it seems that feminism has finally achieved some of its historical goals by securing a permanent space within the structure of the state, from where it can influence decisions towards gender justice. Second, there is a social space, where feminists from different backgrounds, age groups, educational and economic levels come together to challenge the patriarchal forces and androcentrism that pervade everyday life in Brazil. While this social domain can be traced back through most of our history, when feminists did not have access to the state and organized themselves primarily at the grassroots, we sense, nonetheless, that the feminist movements in Brazil are experiencing a moment of transformation, characterized by the expansion and diversity of claims, actors and spaces of intervention.

Nonetheless, no matter the diversity of new issues, strategies and discourses brought forth by feminists, and the strength of the actions stemming from these two spaces, institutional and social, the dismal fact is that they are dealing with the same kinds of stereotypes their sisters of the past experienced as early as the so-called first wave in Brazil. This is the gender mainstreaming paradox. And that is why we should not underestimate how a political culture, sustained and reinvented by new forms of patriarchy, still configures a social imaginary built upon conservative values regarding women’s autonomy. Feminists continue to be labeled as socially and morally deviant, as lesbians and angry women. Domestic violence is still framed as a private issue. Women’s moral character is still judged by the way they dress, move their bodies and present themselves in public. And our objectification continues to be the best-selling market strategy. All these formulations continue to resonate, and they are captured not only by the data with which we opened this essay, but also by advertising campaigns (such as the 2014 Adidas shirts for the World Cup) and speeches given by public representatives.

Take, for example, the statements made by the then President of the House of Representatives’ Committee on Human Rights and Minorities, Marco Feliciano, who claimed in 2013 that the feminist movement has led to the prevalence of homosexuality and the demise of family values in Brazilian society. Behind this argument is the centuries-old belief, strongly articulated by religious groups, that a woman’s primary role is that of caretaker within the family and that heterosexuality is not only natural, but the norm to be strictly followed. Despite the fury such statements provoked among activists, the moral panic that such politicians and religious leaders masterfully evoke when discussing gender issues has been the perfect backdrop for reactionary forces.

So what exactly is happening when women and girls claiming their right to freedom, autonomy and control over their bodies are met with discourses that legitimate the very violence they are fighting against? We believe that in order to answer this question, we need to revisit the idea of patriarchal masculinity or, in other words, the construction of a model of masculinity that not only relies on fixed gender roles, but also places them on a hierarchy. When this type of manhood is put into question and shamed, both through the various gender policies implemented by the state and the occupation of the streets, internet and other public spaces by new feminist bodies, violence erupts, in its discursive, visual and physical forms, as a mechanism of protecting patriarchal forms of authority, which work concomitantly in the public and private spheres.

The data, which have stirred much public debate recently, should be a constant warning to the feminist movements that, in some form or other, as historian Dipesh Chakrabarty has affirmed, we are never completely disassociated from the past, as we continue to inhabit and reproduce many of its beliefs and practices. The challenge posed by the gender mainstreaming paradox is then to articulate new forms of struggle against our very old enemies. In order to do so, we believe two strategies are necessary. First, and despite the gains achieved with the incorporation of some of our demands into official policies, it is necessary to take a step back from institutionalized arenas and reclaim our autonomy vis-à-vis the state. Only by doing so will we be able to critically examine and oppose the limited responses given to issues we have historically addressed. Second, we need to engage with women at the grassroots level, on an everyday basis, hearing their concerns and building strategies for collective action. If new feminist interventions, such as the Slut Walks and the feminist presence in the blogosphere, expand actions to previously neglected domains, it is necessary to acknowledge that they reach out to very specific groups of Brazilian society. In contrast, the gender-based violence we confront on an everyday basis is pervasive, which is why our struggles need to be as well.

Every year the issue of gender and sexual stereotyping is highlighted at the Super Bowl and in the minutes of much-hyped commercials surrounding the game. Be it macho football players, sexy cheerleaders, slick yet still macho men in fancy cars, a sexy Danica Patrick, macho beer drinkers, or sexy female beer drinkers, static femininity and masculinity are displayed, suggesting to us all what kind of men and women we should be.

Following this grand display of gender duality, there is an annual critique of femininity, generally in response to the halftime show, with camps divided between those who see female sexuality as an autonomous choice of empowerment and those who see it as curtailed by consumerism, which objectifies the participants.

If, as a nation, we are going to talk about female sexuality at this time of year, it is about time we open the discussion up to include men. Michael Kimmel stands out as someone who shows us time and time again how masculinity is taken for granted and overlooked. And in November, on NPR’s Morning Edition, Frank Deford spoke of the damage football causes both physically and emotionally. A television series on the new Esquire network about football teams of 8-year-olds, Friday Night Tykes, is an extreme comment on football and masculinity, both in the show’s subject matter and in how it is packaged. Just as we are so readily upset about the limited space women are allowed to inhabit in media, so too should we be appalled by the static and limited portrayal of masculinity, not only of men, but of very young boys.

On the upside, there are at least moments of breakdown in the gender dynamic, which occasionally coincide with the halftime show. Prince’s amazing performance during Super Bowl XLI comes to mind. He embodies a masculine persona in stark contrast to the machismo football culture feeds off of. A moment like this interrupts the “real-man” narrative that surrounds football and touts that to be big, aggressive, and dominating is to be a real man. The reinforcement of the “realness” of a particular kind of man, to the exclusion of all others, helps us see that a “real man” as such is a fiction. Furthermore, this unrealistic ideal needs constant replication and reinforcement to perpetuate itself. Men, when they are not pressured to conform to an unattainable and harmful masculinity, can inhabit any masculinity (or femininity) they find desirable.

Yet, for the most part, Prince is marginal, and macho men prevail. The gender caricatures presented to us every year at the Super Bowl are not changing. We have to work hard to change our relationship to them.


Watch: Prince’s “Purple Rain” Super Bowl halftime performance (video hosted at Veoh.com).

San Francisco’s Pride Parade will take place on 29 June and will bring together activists for LGBT rights under the rallying cry of “Color our world with pride.” As usual, various officially recognized groups will take part in the march. But this year, among the multicolored sections of the parade, curious onlookers will be able to make out some people carrying a banner bearing an image. Among those marching, some will be there to defend and represent the colors, the figure and the appearance of someone who will be notable by her absence: Chelsea Manning, who will not be able to walk with them.

Locked up in the military correctional facility of Fort Leavenworth, Kansas, Manning will have served the first months of a 35-year sentence by the time the San Francisco Pride comes around. Why was she imprisoned? For sending WikiLeaks documents she had obtained while working as an intelligence analyst for the American army in Iraq.

Given that Chelsea Manning is being held prisoner and out of the public gaze, she will have to be represented symbolically in order for the marchers to physically become one with her. And yet this famous prisoner is already a public figure, or rather an emblem. All on her own, Chelsea Manning already stands for the struggle for the recognition of LGBT identities. The organizing committee recently awarded her the honorary title of Grand Marshal for the 2014 Parade, as well as a new role, that of “public emissary.”

A double cause in person

Chelsea Manning’s support network was quick to make this information public on its website, encouraging everyone to show their support in the following terms:

This year we have more reason to celebrate the SF Pride 2014 parade in San Francisco: Heroic WikiLeaks whistleblower Chelsea Manning is being honored as an official Grand Marshal!

This lighthearted call to join the Pride Parade demonstrates that Chelsea Manning today embodies a double cause: that of transgender people and that of whistleblowers. And yet, when soldier Manning appeared before the Court Martial at Fort Meade (Maryland) on August 21, 2013, to hear the judgment that brought the trial to an end and sent him to prison, he was a man who answered to the name of Bradley.

In addition and despite her repeated requests since being sent to prison, the American army has still not given her permission to undertake the hormone therapy that would feminize her body — let’s not forget that up until 2011 the lives of gay and lesbian soldiers were governed by “Don’t ask, Don’t tell.” Finally, although a legal ruling enabled Bradley Manning to take the name Chelsea on April 23 2014, her/his papers still say s/he is a man.

So how did Chelsea Manning manage to take control of her existence, perform a gender identity that is recognizable and recognized by others, become an emblematic public figure and then a vehicle for political action?

The Today Show, Chelsea Manning’s birthplace

In order to answer these questions, we need to go back to August 22 2013 when Chelsea Manning was born, thanks to NBC’s Today Show. That was the day when Bradley Manning, in detention at the time, sent the television channel a press release in which he announced that he felt he had always been a woman and wished from now on to be known as “Chelsea.”

This self-declaration of gender identity was in fact pronounced by the Today Show journalist: when interviewing Manning’s attorney, she read out part of the press release. For a brief moment then, the journalist became Manning’s spokesperson and through her voice the whole world discovered that Manning was a male to female transsexual.

Once the request had been made, the journalist and the attorney continued the interview by referring to Manning as a woman. As the first people she had publicly asked to now address her as a female, they both responded positively, lending weight to her self-declaration of her identity. So it was that a television studio became the place where a gender transition occurred.

Activity on Wikipedia: Manning, Bradley or Chelsea?

But Manning’s announcement had lasting effects well beyond this limited arena. Within the confines of the global scene that is the online encyclopedia Wikipedia, there were energetic debates between contributors about whether or not the article called “Bradley Manning” should be renamed to reflect this newly acquired femaleness.

On the English-language version of Wikipedia, which has the most participants, discussion started on August 22 2013 and lasted about ten days. Debate was intense. Those opposed to a change were victorious since the name predominantly used by American and British media was deemed the most authentic. At that time in the English-speaking world, Manning was still most often referred to as “Bradley” and it was accepted that this form of identification should continue. However, following another discussion, at the beginning of October 2013 the page took on the new title “Chelsea Manning,” a name which had by then become publicly recognized, many media sources making use of it to refer to Manning.

Before the 22 August 2013 Today Show, among the inhabitants of the world that we all know, we counted and could identify a certain Bradley Manning. By transforming the gender identity of the American soldier, the television program gave birth to Chelsea Manning. The existence of Chelsea Manning was then recognized by various speakers: journalists, English-speaking contributors to Wikipedia, members of Manning’s support network and so on. Manning herself, helped by several artists, drew her own portrait in order to give a face to the person bearing the name “Chelsea,” who as a result became even more recognizable.

As long as this name is spoken and repeated and as long as the body to which it refers is represented, the gender identity re-presented by Chelsea Manning will make up “a fragment of our experience of the world” (Paul Ricœur, Oneself as Another). Except for those who consult the French-language Wikipedia, which more or less reproduces the world as it is depicted by the French media: Chelsea Manning still doesn’t exist there and remains to this day “Bradley.”

In Chelsea Manning’s cinemascope

Having met with a positive reception in the “public realm” (Hannah Arendt, The Human Condition), Manning’s announcement to the Today Show deeply transformed the world to which we refer. Her announcement was diffracted in public, creating political realignments that follow the outlines of linguistic and national communities. How was that possible?

Chelsea Manning has turned her prison cell into a projection room for her own image. Although surrounded on all sides by high walls, this cinemascope has opened up the possibility of putting perspectives into perspective, or in other words of an infinite number of mirrors. For the screen on which this figure is refracted has the crystal-clear size, thickness and depth of the public realm. Which is why the disturbing gender of Manning also works to reveal the many forms that can be taken by the sense of what is right in our democracies.

Wikipedia is made up of a collection of linguistic communities which correspond to national territories enforcing a more or less liberal policy. Articles on Wikipedia are underpinned by a varied and complex normative infrastructure, which organizes collective participation. Thrown into this universal and yet heterogeneous arena, the figure of Chelsea Manning projects a harsh light on the differentiated way in which this online encyclopedia allows subjective rights a place. Rights whose multicolored flag will be proudly carried at the 2014 San Francisco Pride Parade.

Originally published in sociopublique, translated from French by Joy Charnley.

In a recent interview with the German daily Frankfurter Allgemeine Zeitung, Elisabeth Badinter bemoans new trends in motherhood emerging in France. The French feminist observes that growing numbers of French mothers — though they are still a minority, she is quick to add — are becoming preoccupied with childrearing practices: they are overprotective, sacrifice too much for the (perhaps largely imagined) good of their offspring, lose themselves in motherhood. Immediately, I thought this sounded a lot like a description of Matka Polka, the archetypal Polish mother — that is, until I read the next sentence, in which Badinter concludes that French mothers are becoming more like the German ones.

When faced with a description of something profoundly familiar that is presented as inherent to a culture or a place different from our own, we can pause and reflect on how similar some national discourses really are, despite their exclusive labels. We may even find some consolation in this revelation, think: Phew, it’s not just us.

Both in my research and in my own life I wonder if these moments when we discuss different cultural definitions of stereotypical motherhood happen often enough to really change the popular way of thinking about motherhood in national terms. The same goes for the differences in performing motherhood across cultures and nations — like American mothers in France, or Jewish mothers in the US, Polish mothers in Germany. Reading or hearing about them, we may compare them to our own practices, try to see them in a different light, but is it enough to influence our behaviors and values in a meaningful way?

It makes you wonder when, where, and how dominant discourses on motherhood can truly be challenged. Why would they need to be challenged? And who could challenge them?

When I started asking myself these questions, I thought of people who on a daily basis engage with discourses on motherhood and mothering practices other than those into which they have been socialized. I thought of immigrant mothers. As they literally step out of the dominant national ideals of motherhood (by physically leaving their country of origin) into other national ideals of motherhood, immigrant mothers are in a unique position to question both ideals. Immigrant mothers could thus be considered ideal agents of change.

Immigrant mothers as agents of change

Now, the word immigrant is never neutral. It is context specific and can designate various meanings, but it is always about power, about class, about exclusion. Immigrants’ class position is often difficult to define, as it changes not only over time, but also across space (between the sending and receiving countries, between the city and the countryside). For immigrants, class is fluid. Its fluidity can manifest in various ways, through emancipation or through what one could call class degradation. Although class ambiguity as well as class diversity among immigrants seems rather obvious in the world around us, it tends to be overlooked. Politicians and media alike often refer to immigrants as if they were miraculously homogenous groups, distinguished solely by nationalized or ethnicized categories. It is therefore still necessary to stress that not all immigrants belong to the same class. Some of them may not even be considered immigrants at all, but expats or internationals. Me, for instance: I’ve been living in Berlin for nearly nine years, but I would never call myself an immigrant. Instead, I think of myself as a Berliner, JFK style. Ich bin Wahlberlinerin — a Berliner by choice.

I focus on immigrant mothers here and in my current research project, but this should not imply that only biological mothers can perform the everyday practices of childrearing. I would prefer to include people who have close emotional relationships with the children they take care of and who regularly drop off and pick up at kindergartens and schools, do playtimes, feed, bathe, dress, help with homework, and take kids to doctor’s appointments, after-school sports activities, shoe shopping, etc. These practices can, of course, be performed by more than one person, regardless of their biological relation to the child and regardless of their gender. The word “mother” seems unnecessarily exclusive. A better word, proposed by bell hooks in her groundbreaking article “Revolutionary Parenting,” published in her book Feminist Theory: From Margin to Center, would be “childrearer.” The fact remains, however, that in most cases I encounter in my research and private observations, these childrearers are indeed mothers.

In the TRANSFORmIG project at Humboldt University we investigate how immigrants develop competences to operate within new societies and cultures, and study whether these newly acquired intercultural skills and attitudes transfer between individuals in various geographical locations. Specifically, we look at Poles living in Germany and the UK. Among Polish immigrants — and, as substantial literature on gender and migration demonstrates, also among other immigrants — it is actually mothers who perform childrearing practices on a regular basis, thus engaging with their environments in the receiving countries in very specific and crucial ways. Furthermore, when immigrant mothers share their observations, experiences, and new skills with significant others back home, it is mostly other women, often mothers, to whom they talk and display their childrearing practices. Women thus are particularly instrumental in maintaining transnational networks and transmitting cultural capital. Despite the obvious limitations, I will then continue to use the word “mother,” but with the understanding that childrearing practices can (and should) be performed by non-mothers alike.

By talking about immigrant mothers as agents of change, I hope to bring attention to the individual agency of people who are often reduced wholesale to victim groups. Over and over again, immigrant mothers are presented in popular discourses in Europe as disadvantaged, isolated victims of the patriarchal societies they come from (but, unsurprisingly, not of the patriarchal societies of the receiving countries).

I want to reclaim the idea of human agency from neoliberal newspeak. The neoliberal rhetoric we are by now so used to hearing from European politicians reduces individual agency to market-related self-determination and, consequently, as Wendy Brown poignantly notes, reduces social problems to individual problems with market solutions. This, however, is only one of the possible uses of the phrase — a cynical and corrupt one — and should not obscure the fact that, yes, human beings are capable of producing change through deeds, big and small. And that’s a good thing.

Small things and potential for change

National discourses continue to impact our lives and everyday practices in ways that are discriminatory and limiting. Not only do they often exclude people of other nations and minorities, but they also tend to be highly normative and tailored to the (upper) middle classes. Increasingly, national discourses go hand in hand with neoliberal discourses. Privileging the dominant nation and classes (not necessarily dominant in numbers, but in political, social, and cultural importance) leads to a further deepening of social inequalities, both within individual states and between them. Stepping outside one’s familiar territory, behaviors, institutions, and norms — which is what happens through migration — may actually help people reassess and rethink their practices and the discourses that impact them.

In The Politics of Small Things, Jeffrey Goldfarb claims that “when people freely meet and talk to each other as equals, reveal their differences, display their distinctions, and develop a capacity to act together, they create power.” He focuses, in an Arendtian fashion, on “acting and speaking in each other’s presence” (discussing, for example, how in communist Poland people would meet at a kitchen table and talk with each other as though they lived in a free country). The concept of the politics of small things has been immensely inspiring and useful in my thinking about mothering practices and the potential for change they entail. What I would add, however, is that change can and does occur even when the act of speech is not involved; social change can also happen through display and observation.

All the everyday practices in which immigrant mothers engage — playground visits, subway rides, grocery store shopping, preschool pick-ups — can potentially lead them to rethink the ways in which they do certain things. Immigrant mothers cannot but reflect on the small things they see and hear and on the conversations and other exchanges they participate in. Faced with diversity, they may revise some of their earlier conceptions of motherhood and mothering practices.

It is not only immigrant mothers, however, who thus are affected by the power of small things. Through displaying their motherhood, immigrant mothers show others (immigrants and non-immigrants alike) how differently certain practices related to childrearing can be performed, how feelings related to childrearing can also be expressed. Their various audiences are not passive consumers of display, but they actively engage in creating new meanings of and for motherhood.

The potential for change that resides in everyday practices reaches even further than the places in which these practices are performed, affecting people beyond the neighborhood playground. Immigrants function transnationally; they are transmigrants. Affordable communication and transportation allow for more regular contact with families and friends back home. Immigrants talk to their significant others about their experiences in their new countries, cities, and neighborhoods, and during their visits home they display the knowledge and practices they have acquired through migration.

The politics of small things, as Goldfarb insists in his eponymous book, is “a potential component of everyday life.” The challenge is to identify the practices that carry that potential. Or, rather, not to overlook the potential in small things that tend to be too easily dismissed as unimportant. This, admittedly, is not exactly an easy task. It is a challenge I welcome as I embark on my new research project on immigrant mothers as agents of change, a project that finds inspiration in and celebrates the power of small things.

Over the last decade the field of positive psychology has become a burgeoning area of research within academic psychology. Well-known figures in positive psychology include Martin Seligman (developer of the learned helplessness model of depression and a past president of the American Psychological Association), Mihaly Csikszentmihalyi (creator of the construct of flow), and Daniel Gilbert (author of the widely acclaimed Stumbling on Happiness). The field of positive psychology focuses on developing a scientific understanding of positive human experiences and virtues. Important research areas include happiness, optimism, fulfillment, compassion, and gratitude. The field positions itself in contrast to traditional approaches to mental health, which focus on psychopathology and treating mental illness. The roots of positive psychology can be traced to the field of humanistic psychology, which peaked in popularity during the 1960s. Well-known pioneers of humanistic psychology included Abraham Maslow, Carl Rogers, and Fritz Perls.

The field’s roots can be traced back even further to the American pop culture emphasis on the power of positive thinking (e.g., Norman Vincent Peale and Dale Carnegie). Earlier foundations for these traditions can be found in the “New Thought Movement” that swept the United States in the mid-nineteenth century through the influence of figures like Mary Baker Eddy (the founder of Christian Science) and Phineas Quimby, a New England mesmerist and popular healer. There is a thread of continuity linking all of these traditions, which all share a positive, optimistic perspective and, in one way or another, emphasize the power of the mind to influence both psychological and physical health.

American culture is known for its optimistic quality. The common stereotype that contrasts the positive, optimistic American sensibility with the darker, world-weary European one is not without some validity. At one level, optimism is an important American “natural resource.” It inspired the development of one of the world’s first modern democracies and provided a haven for immigrants fleeing lives of persecution, oppression, and poverty in their homelands. Ideally, America is the land of equal opportunity — a classless society, where hard work allows anyone to lead the type of lifestyle that was once reserved for the privileged aristocracy.

But we all know that this ideal masks a very different reality. The discrepancy between the wealthy and the poor is greater in the United States than virtually any other developed country. America’s self-image as the “land of equal opportunity” obscures the fact that there are massive inequities in the social and economic conditions into which people are born, and, further, it provides an easy justification for blaming the underprivileged for their own problems. Rather than reforming social policies that perpetuate the discrepancies between the privileged and underprivileged, the myth of equal opportunity can be too easily translated into an equation between poverty and moral failure.

Similarly, the assumptions that we all have the ability to be happy and that happiness is a “good” in its own right become translated into a moral imperative to be happy. This leads to an insidious type of oppression that marginalizes and silences those who are suffering from psychological problems or physical illness, and judges them as failures or implicitly as morally inadequate. There is a limited tolerance for sadness and other painful emotional experiences. One of the many concerns that critics raise about the latest edition of the Diagnostic and Statistical Manual of Mental Disorders (DSM-5) is that diagnostic criteria for a number of psychiatric disorders are being broadened to the point that painful experiences once considered part and parcel of everyday living will now qualify people for psychiatric diagnoses. Sadness becomes “depression.” Depression becomes a form of illness to be treated with medication, or evidence of a failure to take responsibility for one’s life.

Evidence of a link between a positive psychological attitude and recovery from various illnesses becomes translated into a moral imperative to stay cheerful in the face of chronic illness. Cancer becomes a “gift” — an opportunity to learn a much needed lesson. In Bright-Sided: How Positive Thinking is Undermining America (2009), written in the wake of her own struggle with breast cancer, the journalist Barbara Ehrenreich critiques what she refers to as our “relentless promotion of positive thinking” in America. In her words:

“Americans are a ‘positive’ people. This is our reputation as well as our self-image. We smile a lot and are often baffled when people from other cultures do not return the favor. In the well-worn stereotype, we are upbeat, cheerful, optimistic, and shallow, while foreigners are likely to be subtle, world-weary, and possibly decadent…. Surprisingly, when psychologists undertake to measure the relative happiness of nations, they routinely find that Americans are not, even in prosperous times and despite our vaunted positivity, very happy at all. A recent meta-analysis of over a hundred studies of self-reported happiness worldwide found Americans ranking only twenty third, surpassed by the Dutch, the Danes, the Malaysians, the Bahamians, the Australians, and even the supposedly dour Finns.”

On a personal note Ehrenreich speaks about her tremendous sense of isolation while struggling with breast cancer because of the cultural pressure to deal with her experience in a “positive way.” For example, she tells us that at one point she posted a statement on an online breast cancer support group bulletin board that conveyed some of her despair and anger. In response Ehrenreich reports receiving a “chorus of rebukes.”

People often speak of Freud as having a pessimistic perspective on human beings. He theorized that there is an inherent conflict between instinct and civilization, and he emphasized the importance of acknowledging and accepting the hardships, cruelties, and indignities of life, without the consolation of illusory beliefs. In an oft-paraphrased statement, he argued that the goal of psychoanalysis is one of transforming neurotic misery into ordinary human unhappiness. This can be interpreted as a pessimistic perspective. But it can also be viewed as a realistic and profoundly liberating perspective — not unlike the Zen perspective, which holds that enlightenment involves letting go of the fantasy of escaping the realities of everyday life. Essentially, what Freud was arguing was that the goal of life is not to eliminate those aspects of existential suffering that are an inevitable part of the human condition, but rather to help people to live more wisely — to reduce the extent to which they unconsciously inflict suffering on themselves.

So what could possibly be wrong with the growing interest in positive psychology? At one level, there is something very much right about it. Just as the humanistic tradition of the 1960s was an important corrective to the conservative and pathologizing aspects of the psychoanalysis of the times, as well as the mechanistic aspects of the behavioral tradition, positive psychology’s focus on happiness and achieving beneficial states of mind is a potential force for good. Yet at the same time there is something missing from it — a tragic or ironic sensibility. Positive psychology fails to grapple fully with the painful aspects of life with its inevitable sorrows, losses, and indignities.

A version of this article first appeared in Psychology Today’s “Straight Talk.”

A recent New York Times article reports that a new study published in the New England Journal of Medicine found that patients receiving one of the most commonly performed forms of knee surgery (arthroscopic surgery to repair a torn meniscus) did no better than those receiving a placebo treatment. In the study, patients with meniscus tears were randomly assigned either to the standard surgical procedure or to a sham surgical procedure, which involved making an incision without touching the meniscus. One year following treatment, the majority of patients in both the genuine surgical and placebo conditions reported feeling better. Moreover, the majority of patients in both conditions said that they would undergo the same treatment again. The authors of the study conclude that their findings (taken together with similar findings from previous studies) raise important questions regarding best practice standards of care for the treatment of knee problems.

From my perspective, the finding that the majority of patients in the placebo condition experienced the procedure as helpful is just as important and perhaps more conceptually intriguing. While placebo effects have been spoken about in the medical literature since the 1920s, it was not until the 1950s that researchers began to systematically use placebo controls in treatment effectiveness studies. Historically, researchers have been primarily interested in the placebo effect as a foil – a chemically inert agent that any “genuine” treatment should be able to outperform. It turns out, however, that the placebo is a relatively stubborn foil. In recent years placebo research has come into its own as a field of investigation.

There is now a large and growing body of evidence that placebo treatments can have a positive impact on a range of problems including pain conditions, gastric disorders, irritable bowel syndrome, chronic fatigue syndrome, Parkinson’s disease, psoriasis and other skin disorders, allergies, migraines, depression, and anxiety disorders. What are the mechanisms through which placebos work? From a psychological perspective we know that a range of factors, including patients’ expectations of benefiting from the treatment and the quality of the relationship with the healer, play roles in mediating placebo effects. In many respects, however, research on the underlying psychological mechanisms is still in its infancy. At a biological level the evidence indicates that placebo effects are associated with changes in neurotransmitter levels. And finally there is a growing body of evidence that placebo effects are associated with changes in fMRI patterns. For example, placebo-induced pain control is associated with activity in areas of the brain that are activated when pain control is achieved through the use of narcotics.

Given the growing evidence regarding both the impact and mechanisms of placebo effects, clinicians and researchers are confronted with an interesting ethical question. Is it ethically justifiable to treat patients using a substance or procedure when we know that its impact is attributable to placebo effects? One can certainly make the argument that any treatment that involves the use of deception is ethically unacceptable. In practice, however, it is not at all uncommon for doctors to intentionally use treatments that they know are placebos. Surveys show that up to 50% of physicians report using placebos in their practices.

In an effort to find ways of getting around this ethical problem, a few studies have been conducted which examine whether it is possible to treat problems with placebos while eliminating the element of deception. For example, in a 2010 study Ted Kaptchuk and colleagues at Harvard Medical School administered a placebo treatment to patients with irritable bowel syndrome (IBS) and told them that they were being given “placebo pills made of an inert substance, like sugar pills, that have been shown in clinical studies to produce significant improvement in symptoms through mind-body self-healing processes.” They found that these patients experienced twice as much relief from their symptoms as patients who received no treatment.

Now note that in this study Kaptchuk and colleagues avoided the use of deception by being explicit about the fact that the pill consisted of an inert substance. It’s important to bear in mind, however, that they told patients that the evidence suggests that these inert pills are effective. Given the finding that the patients did benefit from the placebo, some of the questions that emerge are: 1) what did patients take away from the researchers’ communications (i.e., to what extent did they expect the chemically inert pills would help them), 2) to what extent did the researchers believe that the placebo would be helpful under these conditions, 3) to what extent were the researchers’ true beliefs implicitly communicated to patients, and 4) did the researchers’ beliefs about the effectiveness of the placebo mediate its effectiveness in any way? Addressing these types of questions will be helpful in further clarifying the psychological and social processes through which placebo treatment works.

These are troubling times for the mental health field in the United States. A variety of historical developments have paved the road to the current predicament. Following World War II, the federal government and a growing mental health lobby began an unprecedented expansion of mental health services. This expansion in many respects continued over the next 30 years. It was not until the 1970s that American psychiatry underwent its first major crisis of the postwar era. This crisis was precipitated by a number of factors, including: the growing evidence of the lack of reliability of psychiatric diagnosis, the anti-psychiatry movement that was in keeping with the counter-cultural ethos of the 1960s, and a growing crisis of confidence regarding psychiatry’s status as a genuine medical specialty. All of these factors led to the development of the third edition of the official diagnostic manual for psychiatry (DSM-III), which purged it of most of its “pseudoscientific” psychoanalytic influences, conveyed an aura of scientific respectability, and helped to galvanize a biological turn in psychiatry (or, more accurately, a pendulum swing back in the direction of a long-established tradition of biological psychiatry).

On the heels of its scientific and biological makeover, American psychiatry entered into a new era of respectability and profitability. Neuro-chemical models of psychopathology proliferated. The federal government was willing to spend money on biologically oriented psychiatry research, and perhaps most importantly, the apparent successes of new psychotropic medications became a goldmine for pharmaceutical companies. True, psychiatrists had to sacrifice much of their interest in psychotherapy. But for many this was a small price to pay in order to be able to feel like real doctors.

But the cracks are now beginning to show. The internal controversies about DSM-5 (the latest edition of the official diagnostic manual for psychiatry), led by psychiatry insiders including Robert Spitzer and Allen Frances (both of whom chaired previous DSM task forces), made news in the mainstream media. Even though many such controversies had taken place on a smaller scale with the development of DSM-III and DSM-IV, the public was beginning to suspect that the emperor had no clothes.

To add insult to injury, there was a growing body of evidence that many of the claims for the miraculous powers of the new generation of psychiatric medications had been massively inflated, which diminished the pharmaceutical companies’ willingness to invest their money in research and development relevant to this area. Add to this the fact that we are in the midst of the deepest and longest-lasting economic downturn since the Great Depression, and that our national healthcare costs have become unsustainable. It is, then, no surprise that hospitals are being forced to merge and slash costs any way they can. And when it comes to making decisions about where to slash budgets, psychiatry departments (even those that have turned their backs on “the talking cure”) are the weak links in the chain.

Now I suppose psychiatry’s plight could lead a lesser psychologist than myself to experience the guilty pleasure of schadenfreude. After all, why should I worry about the plight of psychiatry? Didn’t American psychiatry prohibit the training of so-called “lay psychoanalysts” (psychologists and other non-medically trained psychoanalysts) until 1988? And nobody forced psychiatrists to abandon the field of psychoanalysis, or to forgo extensive training in psychotherapy of any type in residency programs. And if they want to turn the field of psychotherapy over to psychologists and social workers, so that they can spend their time prescribing medications to more seriously ill patients — so be it.

Yet all is not well in the house of psychology either. Many of the same forces (or at least similar ones) that are wreaking havoc upon psychiatry are in one way or another affecting the field of psychology as well, and many of the changes taking place within psychiatry are having an important impact on psychology and other mental health disciplines. The first force to be reckoned with is a common malady — “physics envy.” Just as many psychiatrists want to be real doctors, many psychologists want to be real scientists. This has always been an important influence on the development of American psychology, but my sense is that these days it is a force that is increasingly impinging on the discipline (or at least on clinical psychology) in problematic ways.

There is a strong movement afoot to push the training of future clinical psychologists in a more “science based” direction. Proponents of this movement lament the fact that too few clinicians in the real world use “evidence based” treatments such as cognitive therapy, comparing the current state of clinical psychology to the “pre-scientific state of American Medicine at the time of the Flexner report in the 20th century.” Never mind the fact that the claim that cognitive therapy is “evidence based” and other therapies are not is based on a serious misreading of the empirical literature. By way of addressing the problem, they advocate for a more widespread acceptance of an alternative to the American Psychological Association accreditation system that would only accredit clinical psychology programs that are considered “science based” in nature. This emphasis on “science” is reflected in both the name of the new accreditation body, i.e., the Academy of Psychological Clinical Science (APCS), and the training model for clinical psychology that it enshrines, i.e., the clinical science model.

The APCS has become one of the dominant forces determining the direction that training in clinical psychology is likely to take in the future. And the clinical science training model appears poised to replace the scientist-practitioner model as the most common in clinical psychology.

What are the differences between these two models?

The scientist-practitioner model, established when clinical psychology first emerged as a distinct field (following World War II), holds that clinical psychologists should be well trained in both clinical practice and research, and that the goal is to integrate or bridge these two worlds in one’s professional activity — whether as a clinician, a researcher, or both. The goals are that 1) clinical research should be meaningfully informed by, and relevant to, real-world clinical practice, and 2) clinicians’ real-world practices should be informed by their experiences as scholars and researchers.

In contrast, the clinical science model de-emphasizes or abandons the goal of integrating clinical practice and research, and instead places an overarching emphasis on “contributing to knowledge” by conducting empirical research and publishing it in professional journals. In fact, one of the major criteria for accreditation by the APCS consists of demonstrating that both faculty and students in the program have good track records of publishing research in peer-reviewed journals and attracting external funding.

Needless to say, the majority of clinical science programs train students in cognitive therapy (to the virtual exclusion of other therapeutic approaches). But perhaps even more important is the fact that the curricula of clinical science programs place very little emphasis on providing students with clinical training. In some sense this is understandable. This de-emphasis of clinical training inevitably implies a belief that clinical skills can be easily acquired without extensive training. But even from a purely practical perspective, a Ph.D. student in clinical psychology who is prolific enough to have a first-rate publication record and a good track record of securing external funding by the time he or she graduates is going to have very little time for clinical training. At the present time more than fifty clinical psychology programs have been accredited by the APCS, and the number is growing rapidly. Many Directors of Clinical Psychology have told me that although they continue to maintain their accreditation with the American Psychological Association, they are also seeking accreditation through the APCS because they see it as the “wave of the future.”

What are some of the practical implications of this development? Increasingly, the clinical psychologists who end up with faculty positions in clinical psychology programs will have had very little clinical training prior to graduating, and will be highly unlikely to maintain clinical practices once they graduate. In some respects this development is simply an intensification of a trend that has been under way for years, widening the chasm between practicing clinicians and the academic clinical psychologists who train clinical psychology students in graduate school. Yet it is important to note that this intensification of a long-term trend will have serious implications for the future of clinical psychology and of clinical treatment.

Increasingly, the clinical research that is published in professional journals will become less and less relevant to clinicians in the real world as the proportion of academic researchers who know anything about real-world clinical practice decreases. Further, future clinical psychologists trained in clinical science programs will be less likely to become skilled clinicians.

Note also another development taking place in the mental health field: the growing emphasis that both psychiatry and psychology are placing on brain science research at the expense of other important fields of research, including psychological, social, and cultural analysis. A reductionist monoculture is emerging, threatening to subsume psychiatry and psychology into the field of neuroscience.

“McMindfulness.” I came across this term for the first time today. I wish I had coined it. It would be nice to be able to make a claim to originality. But coming across the term is almost good enough. It provides a name for a phenomenon that I didn’t even know needed one, and it makes it real. I don’t think anyone knows who coined this term. It’s kind of like “neoliberalism.” Suddenly there is a name for something you know is a problem — an important problem that can be difficult to put your finger on.

Mindfulness practice is a meditative discipline, originating in Buddhism, that involves the cultivation of a type of present-centered, nonjudgmental awareness of the ongoing flow of one’s emerging experience. While mindfulness enjoyed some popularity in the 1960s as a countercultural phenomenon, in recent years it has surged into mainstream prominence, embraced with gushing enthusiasm by both popular culture and mainstream psychology.

So what is McMindfulness? It’s the marketing of mindfulness practice as a commodity that is sold like any other commodity in our brand culture. “Mindfulness really works.” It reduces stress, cures depression and anxiety, and manages pain. We know so because research proves it. Never mind the fact that up until recently there was no research comparing the effectiveness of mindfulness to anything else. Never mind the fact that the research that has compared mindfulness-based cognitive therapy to traditional cognitive therapy (the latter allegedly being the evidence based treatment of choice) finds that the emperor has no clothes. And never mind the fact that there is no solid evidence that traditional cognitive therapy is more helpful than any other bona fide form of psychotherapy (including the “discredited pseudoscience” of psychoanalysis).

That’s not the point. McMindfulness is a stock on the rise. A brand that promises to deliver. It satisfies spiritual yearnings without being a religion. It’s backed by brain scientists at Harvard and MIT. It’s magic without being magical. It even transforms corporate culture and increases market share! Now that’s worth paying for.

Don’t get me wrong. I believe in mindfulness. I would not have practiced it in one form or another for the last forty years if I didn’t. But isn’t mindfulness practice supposed to be, as the Zen teachers of old used to say, nothing special?

Then again, mind you, why would the Zen students of old have undergone such hardships to seek the teachings of reclusive Zen masters if they really believed that they had nothing special to offer? Clearly a canny pitch if ever there was one.

So hasn’t mindfulness always been marketed? I suppose in one sense the answer is “yes.” People have bought and sold things since the beginning of time. Street merchants have always hawked their wares. Yogis have always claimed to have supernatural powers. The Buddha promised enlightenment as the end of suffering. Saint Paul promised salvation. Freud said that psychoanalysis could transform neurotic misery into ordinary suffering — not exactly a hard pitch. But good salesmen know that many customers don’t respond well to hard pitches. And whatever Freud’s limitations may have been as the leader of a new movement, he certainly managed to instill faith in his disciples. But there is something different about the selling of mindfulness these days. That’s what makes it McMindfulness. McMindfulness is the marketing of a constructed dream; an idealized lifestyle; an identity makeover.

I’m not saying that mindfulness practice doesn’t work. It’s not as simple as that. In order to understand what mindfulness does and does not do for people, we need to understand the desires, needs and yearnings that the successful marketing of mindfulness taps into. We need to think about the role and function of the self-help industry in our culture. We need to remember that psychotherapy is a type of secular religion. And we need to remember that psychotherapy needs to be marketed in our culture, just as medications need to be marketed. Of course the profits in the psychotherapeutic industry are negligible relative to those in the pharmaceutical industry. But psychotherapists do need to pay their rent. And then, of course, there is the marketing of ideas. Developers of new brands of psychotherapy don’t make a fortune, but there is always the social capital that comes with developing a successful therapy brand.

And students of mindfulness are doing something fashionable. It’s reminiscent of seeing a psychoanalyst in New York during the 1950s, but it’s even hotter than that. And it does help them — as much as anything else does.

Politics is usually absent from explicit discourse in psychoanalysis. I say explicit, because obviously we all live in socio-political contexts that signify and structure our roles in any setting. There is a long history to the retreat of politics and political thought from the psychoanalytic clinic. But suffice it to say that, especially since psychoanalysis became a Central European refugee in the post-WWII anti-socialist US, everyone has been careful. The illusion of scientism and the ideology of neutrality have been a good defense.

But days like these complicate the picture. Being a New York-based Jewish-Israeli therapist with a sizable cohort of Jewish-Israeli patients, nowadays there is no avoiding discussing current events. My patients’ family and friends are living in varying degrees of alert and panic from flying rockets. Rockets that for the most part do not hit but are intercepted and loudly exploded above their heads. Soldiers that might be one’s brother or cousin or nephew are killed and wounded. There is new vehemence to the public discussion in Israel which everyone follows, and obvious manipulation by the government. And there is constant exposure to the carnage inflicted by Israel on Gaza. Destroyed neighbourhoods and lives and immense suffering are on everyone’s mind and conscience. No one can keep away. People are relentlessly engaged and deeply troubled. They feel scared and angry and guilty and ashamed. At the same time they are often isolated in environments that are either ignorant of the events or avoidant. They want to talk about what’s going on, to figure out how they feel and where they stand. Some of them have an urge to go there. Some feel Israel is slipping away as a place of attachment and belonging. They are anxious to imagine solutions. Most of all, they want to not be alone when all of this is happening. And they need me to be present in many ways, including as a trusted compatriot with an opinion. I do have my very strong feelings and opinions. But what is the place of it all in our analytic work?

I refuse to treat such concerns as only grist for the mill of a purely psychological interaction. Considering talk about war, our war, a matter for psychological exploration should be a part of what we do. But it would be oblivious and unethical to end there. When people speak about the reality and politics of a war in which they are intimately implicated, they are not only enacting personal dramas, although they invariably do. They are also wondering, sometimes desperately, how one ought to live in this world as one of many, as a social, ethical being with collective identities. Life is singular and social. Subjectivity is made in intimacy and in public. When we talk about the war we talk about personal experiences, about identifications with victims and perpetrators, about the conflicts of power and fear and empathy and shame that are already part of one’s being as a subject. For my Israeli patients there are always also formative memories. Fathers absent for months while enlisted in conflicts, like those of ’67 and ’73, the fear and worry and excitement of a child exposed to such circumstances, and often one’s own army service with its dilemmas, and excess and danger. Deep and sometimes dissociated traumatic conundrums come to life again.

But when we talk about the war, we are also talking about belonging and alienation, about conforming and resisting, about our place in a collective that has its own history and conflicts, and anxieties and madness. There is not a single Israeli I know who does not feel implicated and accountable for what’s going on in Israel and Gaza these days, in both the personal and political sense. There is a set of collective identifications, and a complicated feeling of collective responsibility, and an urgent need to make sense and explain. There is, in other words, sharp awareness that one is a political being, that the very meaning and feeling one has of oneself, that one’s very existence, take place in a social-political universe. And this universe is undergoing a violent upheaval. We should be able to address this register of human experience in our work.

But if psychoanalysis has had a good hundred years developing ways to think about the traditionally “psychological,” we have nearly nothing with which to address the experience and ethics of life in the sometimes routine, sometimes exploding social-political register. How to explore, what to share, when, if ever, to take a stand? In other words, what is the knowledge or insight one aims for, and what is the ethics one stands for as a psychoanalyst?

It has been my conclusion that if I want to address the questions of the political in psychoanalysis, I need to move beyond psychoanalysis. To engage critical theories that attempt to account for the link between subjectivity and the social. To apply social thinking to my effort to understand the conscious and unconscious dilemmas that impel and oppress what appears, what perhaps masks, as purely subjective. Psychoanalysis’ most basic premise is that people are helped by making the unconscious conscious, recognizing and dispelling certain kinds of apparent givens, creating new potentials for signification and agency. And so, in this context, my concern for knowledge is a concern vis-a-vis what might be construed, implicitly or explicitly, as the socially unconscious. Or looked at from another angle, ideology. In the context of war, of our war, the historical and current work of ideology is so brutally apparent, that it is actually not difficult to see and talk about. Every singular perception, every singular experience, is built around a massively ingrained and perpetuated collective narrative. What “they” want, what “we” want, what “they” are doing to “us,” what “we” don’t want to but must do to “them.” The government pronounces, the media bombards, family and friends rehearse; competing traumatic histories and pleas for historical justice.

There are variations and nuances, but when it comes to people’s sense of “we,” there is an inherent need of and therefore surrender to collectively, politically and ideologically generated narratives. In some ways, vis-a-vis the collective and its circumstances, it is as if people remain eternal children, in need of parental guidance in a confusing and dangerous world. For this reason, despite the apparent ease of recognizing collective ideology and manipulation, it has been my experience that questioning collective identifications is often harder, more troubling, than doing so in the context of one’s relations with one’s family. I have written elsewhere that collective identity seems sometimes as fundamental and as complicated as gender or sexual identity. It is no accident that the violence of the public discourse in Israel these days ties together loyalty and sexuality, that women protesting the militant mainstream are threatened with rape, that men who lean too far to the left are called treasonous homos, that on a radical right-wing Facebook page a woman offered to reward soldiers on leave with sex (she apologized that it could be only up to ten a day). To challenge a patient to become critical of the collective narratives around which his identity is organized, and which on days like these amplify and harden into a shield, is akin to asking him to become politically queer. It is doubly difficult since the cost is deeply felt and the gain is unclear. The rationale for trying in such a direction is also unclear. After all, it is not my role to unsettle what makes my patients feel secure.

But the people I see, none of them actually feels secure. The war is exploding the discontents of our civilized living. There is a great deal of confusion. Identifications that are usually stable or shifting in slow motion are rattled, become uncertain, emotionally urgent. There is attachment to old truths, but also a need and a will to question. There is a paradoxical reach for both the rehearsed and the new. And so we talk about “us” and “them,” about the collective narratives that infuse and constrain our existence, about the fear and confusion and anger and shame, and it helps. It helps to open up and make sense of what seems unavoidably tragic in the present. It helps, I believe, in the long run, in giving us all a sense of greater freedom and greater responsibility vis-a-vis life’s hard choices. In times like this the fact that the personal is political is painfully evident. The psychoanalysis we do these days involves talking about the rockets and the tunnels, about the finances of Hamas and the tactics of the Israeli army, about the cynical positions of the US and the EU and the Arab world. We talk about the destruction, so much destruction, and so many dead and wounded people. We talk about collective fear and collective trauma, and collective loss and collective ambition. There is immense helplessness in the face of overwhelming historical forces and criminal politics and miserable, lost people everywhere. You and I, us and them. Who and what do we feel for? Who and what do we reject? There is immense helplessness, but also resolution, chaos but also meaning. We are looking for a place from which to build our own sense of self, while still trapped facing history’s pile of debris as it keeps growing skyward. We appreciate the loss and the hope that arise when we disentangle from the oppressive truths that surround us. And we do it together, perhaps making up a new kind of together, with meaning in both the subjective and the political registers. 
This seems to me the right thing to do these days.

There is a relentless barrage of narratives about our supposed beastly nature and conduct. Since childhood, we have all watched animals routinely tear off each other’s limbs in countless nature documentaries meant to show us that survival at any cost is the natural order of life. We are fascinated by House of Cards, from which we infer that only suckers play by the book and uphold standards of decency. Many of us stumbled across the political theory of Thomas Hobbes in school; he told us that man is a wolf to other men and that the only way to rein in the beast is to resign oneself to a larger beast — the Leviathan. We also recall that Adam Smith advised us not to rely on the charity of the butcher and the grocer for our meal, but on their self-interest. We watched Scorsese’s The Wolf of Wall Street or Costa-Gavras’s Le Capital, and they confirmed that self-interest knows no bounds. International relations experts thunder that great powers have always been dangerous actors, and they will not be embarrassed to continue to be dangerous and irresponsible; others complain that too many Americans do not have the stomach for raw American might, and want their national power sautéed in moral purpose. It has become impossible to leaf through a daily newspaper without encountering stories of genocide, corruption, rape, and multiple other manifestations of humanity’s beastly nature.

And yet, none of these convinced us to be beastly as individuals. Take the Ultimatum Game: This is a game where a person is given $100, and is told to offer a split to a second person. It is called the ultimatum game because the second person has no say on what the split is and receives, in effect, an ultimatum: she has the option either to accept the split, or reject the split, in which case neither of them gets anything. If we were all convinced of each other’s beastly nature, we would expect the most common split to be $99 for the first person and $1 for the second person. The first person would be foolish to offer anything more than $1, as she is expected to do nothing other than maximize her gain, and the second would be foolish to turn down $1, as that is better than what she had a minute ago. Yet, 30 years of conducting this experiment in all corners of the world reveals that this is not at all what we do. The average split that people offer is 55-45; it is not quite 50-50, but close enough. What is more revealing is that splits worse than 75-25 are routinely rejected by people in the second position, a thoroughly irrational move, if maximizing our self-interest is indeed the only metric we have. It seems that many among us are ready to pay a personal price to oppose gross unfairness. In these experiments, participants are not given to believe that the players are related, or somehow are part of the same community; neither are they given information that their performance in the game will become public knowledge and a part of their reputation. People are not primed, in other words, to defend fairness in order to benefit from a fair system or a good personal reputation in the long run. Instead, we seem to understand innately the importance of fairness without being lectured about it.

There is another variant of this experiment, where again $100 is given to a person, who is told to split it with a second person, but this time around the second person has no right to turn it down, and therefore no veto. In this version, called the Dictator Game, the average split is 70-30, and a quarter of the people give the second person $50 or more, even though there is no immediate material punishment to a 100-0 split. So what is going on? Could it be that we are not selfish brutes after all?
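
For readers who want the payoff logic of the two games laid out concretely, here is a minimal sketch in Python. The function names and the two responder rules are mine, invented for illustration; the empirical splits quoted above come from the lab experiments, not from this toy model.

```python
# Payoff logic of the Ultimatum and Dictator games (illustrative only).

def ultimatum_game(offer, accept):
    """Proposer keeps 100 - offer; responder gets `offer` if she accepts.
    If she rejects, both get nothing -- the responder's only lever."""
    if accept(offer):
        return 100 - offer, offer
    return 0, 0

def dictator_game(offer):
    """Same split, but the recipient has no veto."""
    return 100 - offer, offer

# A purely self-interested responder accepts any positive amount...
rational = lambda offer: offer > 0
# ...while responders observed in practice routinely reject splits
# worse than 75-25 as grossly unfair, at a cost to themselves.
fairness_minded = lambda offer: offer >= 25

print(ultimatum_game(1, rational))         # (99, 1)
print(ultimatum_game(1, fairness_minded))  # (0, 0): unfairness punished at a cost
print(dictator_game(30))                   # (70, 30): the average observed split
```

The point of the sketch is how little machinery the "rational" prediction needs, and how clearly the observed behavior departs from it.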

Fortunately, scholars did not stop asking questions about human nature after Hobbes. Edward Wilson, for example, has shown that while egoistic individuals have an evolutionary advantage, so do solidaric groups. Could that be why we oppose blatant unfairness at a personal cost and act far more generously than crude selfishness would dictate? Robert Axelrod set out to discover how cooperation emerges without central authority. He designed simulation experiments wherein strategies that start with cooperation and reciprocate both cooperation and non-cooperation proved to be the most successful and resilient. In other words, having some faith in our fellow humans is not foolish, but rational. Elinor Ostrom has demonstrated how we achieve cooperation and rein in selfish free riders without a Leviathan, and won a Nobel Prize for her work. She chronicles how belonging to the same normative and social communities, attending the same cafés and bars, and building reputation through the same channels all provide formidable venues for binding covenants. Other experiments have proven that we are susceptible to the gaze of our peers. When a photograph of a pair of eyes is placed over a donation box for the office coffee machine, contributions increase substantially. In addition to a commitment to an ethics of reciprocity, it seems we have learned to be attentive to the gaze and regard of our peers, and to avoid their loathing.
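
Axelrod’s finding can be compressed into a few lines. The sketch below is not his tournament, only a stand-in: two hand-picked strategies play a repeated prisoner’s dilemma with the standard textbook payoffs, and the reciprocating strategy ("tit for tat", the actual winner of Axelrod’s tournaments) opens with cooperation and then mirrors the opponent’s last move.

```python
# Standard prisoner's-dilemma payoffs: (my points, their points)
# for each pair of moves, C = cooperate, D = defect.
PAYOFF = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
          ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}

def tit_for_tat(my_hist, their_hist):
    """Cooperate first, then mirror the opponent's previous move."""
    return 'C' if not their_hist else their_hist[-1]

def always_defect(my_hist, their_hist):
    return 'D'

def play(strat_a, strat_b, rounds=10):
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        a = strat_a(hist_a, hist_b)
        b = strat_b(hist_b, hist_a)
        pa, pb = PAYOFF[(a, b)]
        score_a += pa
        score_b += pb
        hist_a.append(a)
        hist_b.append(b)
    return score_a, score_b

# Two trusting reciprocators earn 30 points each over ten rounds;
# two defectors earn only 10 each. Some faith is rational.
print(play(tit_for_tat, tit_for_tat))
print(play(always_defect, always_defect))
```

Against a pure defector, tit for tat loses a little in the first round and then stops being exploited, which is why reciprocating strategies proved so resilient in Axelrod’s round-robin tournaments.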

There is yet another experiment that tests the rhythms of our cooperative temperament. In this experiment, called the Public Goods Game, five or more people are each given an allowance of $100. They are told that any voluntary contribution they make to a common pot will be increased by 50%, and the accumulated sum will be evenly distributed back to each member of the group. As you can infer from previous studies, some people contribute a good deal; others contribute little or nothing. Experiments have shown that the average contributions in the first round coalesce around one-third of each allowance. When this game is played in more than one round, voluntary contributions go down. We are ready to be solidaric, but we do not want to be made fools of; when we see people contributing less than we do and still benefiting from our generosity, we adjust our contributions downward. Two things have proved to be effective in raising and sustaining voluntary contributions: (1) allowing participants to punish selfish members and (2) communication among participants. The former should not surprise us. We have seen in the previous experiments that we have a propensity to punish unfair members, even if it involves a cost to ourselves. And the latter should not surprise us either; after all, we learn, produce, and reproduce norms by talking about them. In A Cooperative Species, their book on human reciprocity, Bowles and Gintis observe that our linguistic capabilities allow us as a species to formulate social norms, communicate these norms to newcomers, alert others to their violation, and organize coalitions to punish the violators. Communication elicits and elucidates norms.
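
The arithmetic of a single round, as described above, makes the free-rider’s temptation visible. This is a toy calculation of my own, using the rules in the text (contributions increased by 50%, pot divided evenly); the specific contribution figures are hypothetical.

```python
# One round of the Public Goods Game: each contribution to the pot is
# increased by 50%, and the pot is then divided evenly among all
# players, contributors and free riders alike.

def public_goods_round(contributions, allowance=100, multiplier=1.5):
    pot = sum(contributions) * multiplier
    share = pot / len(contributions)
    return [allowance - c + share for c in contributions]

# Five players; four contribute about a third of their allowance
# (the average observed in first rounds), one free-rides.
payoffs = public_goods_round([33, 33, 33, 33, 0])
print(payoffs)  # contributors end with ~106.6, the free rider with ~139.6
```

The free rider ends the round ahead of everyone else, which is precisely why, without punishment or communication, contributions decay over repeated rounds: nobody wants to be made a fool of.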

The moral of the story is that we are not the beasts we are told we should feel free to be. Through evolution and successive generations of renegotiation, we have forged a strong ethics of fairness and reciprocity. It also seems that each viable society has some members who are more willing and ready to oppose unfairness, even if such opposition involves a personal cost. The fact that they are not always the majority does not seem to deter them, and that is a good thing, as we see that even minorities can be effective guardians of fair play. In our “nice-guys-finish-last” popular culture, we are prodded to think of these guardians of fairness as suckers, but they are in truth the custodians of vital civility and decency, without which the rest of our systems and societies crumble. It is not that such peer pressure can solve all problems; it is rather that without such mechanisms, it would be impossible to have a functional system at all. Since fairness, trust, and other pro-social dispositions are important and precious components of any social system, we ought to have mechanisms to celebrate and reward actors and practices that replenish their stock, and loathe those who drain it. The newest wave of research does try to ascertain precisely how small a group of people is still capable of policing the laggards, and what level of intensity of the gaze and condemnation is required. Regardless, it seems clear that we all have a say in the particular constellation of norms and conventions that are at play around us. We are all complicit, for better or worse, in the conventions that govern us.

To be sure, neither are we angels. If we were all cooperation-prone and guided by long-term interests, our societies and systems would look a lot different than the status quo. Narratives of our angelic nature are as much caricatures as those based on our beastly nature. Such simplistic models are neither accurate nor helpful. We both cooperate and compete. The interesting questions are, when do we opt for one or the other, and how does the precise mix change both over time and in response to particular incentives and feedback loops?

Idealists have at times been described as cynics-in-the-waiting, who have not yet been mugged by reality. Some have indeed discovered that the noble frameworks they used to make sense of the world and to help guide their own actions had too little correspondence to facts. Overly romantic notions of human nature serve no real purpose and are often an impediment. Our frameworks have to cohere reasonably well with real human proclivities and need to be able to account for incidences of foul play. Otherwise, disappointments will lead some to retract their commitments to fairness, and may cause them to swing wildly to the other end. The acumen to be cherished would be to nudge people toward a greater commitment to fairness, while also inoculating them against disappointment.

Getting this right is even more important today as we sail into uncharted waters of increasing interdependence. We now live in a world where what happens in one of its parts affects lives in other parts. CO2 emissions, infections, financial products, radiation leaks, and novel ideas from one part of the world have significant consequences for others. These centripetal dynamics are pushing us together and intermixing our destinies. Our lives are no longer solely authored by us, but are being co-authored with others. How we share that authorship is anything but obvious. I can think of no question more important than the one that asks what sort of rapport we wish to have with the billions of others with whom we share our planet and destinies, if not our countries.

Here, too, we come across serendipitous pockets of decency. When asked whether their country should obey international law even when their governments think doing so is against the national interest, a stunning 58% of the people around the world answered in favor of international law. What is even more remarkable is that when asked whether others in their society agree with them, that same 58% reported that they were a minority in their own society. Any conversation about how to live in an increasingly interdependent world has to start with recognizing that we are likely to encounter decency far more frequently than we are told is the case. How to seize on that encouraging baseline, and how to strengthen and grow — and also how not to overburden — that pro-social temperament is the vital question. For centuries, we built trust over time through face-to-face interactions and through trial and error. We all have finely tuned notions of who to trust and under what conditions. Trust and other conventions involving people in faraway places will need to be forged through different and novel methods. We will need to develop, with exceptional creativity, the sincerity and disclosure of which scale and the pressure of time deprive us. It’s a shame we have not, so far, received much help from mighty popular culture in this journey.

When I was in primary school, there were two street names in my hometown that I always got wrong. My teacher looked at me with disbelief and worry when I called the street next to the school Wolgaster Straße.

My geography skills improved dramatically after 1989, when the street names finally caught up with me. I grew up in the German Democratic Republic; call it DDR, GDR, or East Germany. The street names my teachers insisted on were Wilhelm Pieck Allee (Allee means promenade) and Otto Grotewohl Allee, named after the first President and Prime Minister of my dear republic. At home, I had learned to refer to these streets as Wolgaster Straße (Straße means street) and Anklamer Straße. Wolgast and Anklam are nearby cities. If you go to Wolgast, you leave the city via Wolgaster Straße. These street names are neat mnemonic devices; they point to nearby places. My pre-1989 teacher was not worried about my lack of knowledge. She must have known that the names I used were from a different time. For her, remembering the wrong name was worse than forgetting the (politically) correct name. After 1989, the old names returned.

Since then, I never got in trouble over street names again – that is, until I moved to Berlin for part of my sabbatical. It was my first time living in Berlin. My parents grew up in this city. In their twenties, they moved away. Their memories of the city are from the 1970s. When I talk about places, subway stops, and streets in Berlin, my mother often has no idea what I am talking about. Danziger Straße? Torstraße? Where would that be? These places are not even in the former West Berlin; they are in the East. My parents knew them and yet don’t recognize them. Danziger Straße used to be called Dimitroffstraße when my mother roamed these quarters.

The obsession with naming and renaming streets pre-dates the East German state. I recently finished reading Hans Fallada’s amazing novel Alone in Berlin (Jeder stirbt für sich allein), which tells a story of futile anti-Nazi resistance and includes a 1944 street map of Berlin. This 1944 map includes evidence of then new names, such as Hermann Göring Straße. The name did not last for long, of course. Nowadays the street is called Ebertstraße, after the Weimar Republic President Friedrich Ebert.

The current Nordbahnhof (Northern Station) was called Stettiner Bahnhof until 1950. Sections of the current Torstraße used to be Elsässer Straße (Alsace Street) and Lothringer Straße (Lorraine Street) between 1871 and 1951, when it became Wilhelm Pieck Straße, the name by which my mother should know it. In the wake of the wars with France, Alsace and Lorraine had been claimed as parts of Germany. These claims were embedded in the Berlin streetscape via street names.

It does not need explaining that names such as Hermann Göring Straße disappear. But what crimes, one might ask, did cities like Danzig and Stettin commit? They used to be German cities in a tenuous way. Now they are Gdansk and Szczecin, Polish cities. The Stettiner Bahnhof was the place to catch a train to Stettin when it was part of the German state. Today, trains leaving for Szczecin depart from the new Central Station (two hours, about 30 Euros, www.bahn.de). Nordbahnhof has been demoted to a simple subway stop.

The Berlin streetscape contains layers of references to a broader European geography and to political desires. Train stations were named after the destinations of departing trains. Streets were named after places in Europe (Bornholm, Stockholm and Oslo), after places that the roads were leading towards (Prenzlauer Allee, Potsdamer Straße), and as a way of making claims to cities and places as German (Stettin, Danzig, Elsaß, and Lothringen).

After the end of the Second World War, the East German state renamed streets and places to remind the people of newly important politicians, but also in order to erase the references to places that were no longer German. Thus, Leipziger Straße kept its name, but Danziger Straße became Dimitroffstraße. Likewise, Stettin station did not become Szczecin station, but Northern Station. It is as if forgetting that Danzig and Stettin ever existed was preferable to remembering the new names of these cities.

Which of the East German names for streets and places remain? Nordbahnhof is still Nordbahnhof, Rosa Luxemburg Straße remains, but Dimitroffstraße and Wilhelm Pieck Straße ceased to be. Names that do not sound too “East German Communist” stayed: Nordbahnhof. Dimitroff and Pieck had to go. Rosa Luxemburg and Friedrich Engels, however, managed to stay.

Berlin is a city with layers of names, as I keep seeing in conversations with older people from East Germany. Recently, I started to feel old as well. I went to get a haircut, talked to the hairdresser, and found out that we grew up in the same city. I asked her which school she attended, and she said “Hanseschule.” I don’t know any school by that name. I know all the schools of the city by their official East German names; and only some of them by their post-transformation names. When I graduated, in 1997, the name changes were still fresh. So I asked the hairdresser about the name of her school before 1989. She didn’t know. That’s when I felt old. I had just asked the kind of question that my mother asks me when she tries to understand where I work, shop, eat, and visit people in Berlin.

My inability to translate from new to old names in Berlin (partially remedied by Wikipedia and a great online collection of old city maps), the hairdresser’s inability to remember the old name of her school, and my inability to remember the new name of her school made us speechless. We cannot talk about places that we have no common name for. Talking about cities, schools and streets in East Germany, you have to translate between old, new, and very old.

Beneath the surface, we East Germans of different generations speak different languages. We need to remember names that are no longer and names that are not yet. Such acts of memory and translation are crucial, for otherwise it is impossible to relate to the cities, the schools and the wider world of different generations.

A version of this article was first published in Deliberately Considered.

Section: The Arts and Literature

Two films frequently cited together on the best films lists for 2013 were Gravity and All is Lost. As many reviewers noted, the films featured isolated individuals up against the cold, impersonal forces of the universe — the dark void of outer space for Sandra Bullock in Gravity and the dark depths of the Indian Ocean for Robert Redford in All is Lost. Less noted was a crucial difference between the two films: Sandra Bullock survives and Robert Redford dies. Intrinsically connected to these outcomes is another difference: Gravity is the story of a woman; All is Lost is the story of a man. Through examining this difference we can learn how contemporary film achieves its effects through mobilizing unconscious mythic and archetypal images, especially those concerning gender.

In both films the main character is faced with the ultimate existential crisis: imminent death. In both films the characters are resourceful and draw on considerable inner resources in their struggle to survive. In both films the essence of the struggle lies in the characters’ efforts to connect with other human beings. In Gravity, Sandra Bullock uses her inner connections to two objects to keep herself from giving up. These objects — her long-deceased daughter and her recently deceased co-pilot, George Clooney — are dead in reality but alive in her psyche. In All is Lost Robert Redford struggles to get the attention of the only human world within sight, the passing container ships. Bullock’s rich inner dialogues keep her courage up as she struggles to reach earth. Redford, by contrast, remains an unnoticed speck and perishes at sea.

The Bullock character mobilizes a set of classic mythic images concerning women, namely their intrinsically object-related character, which relates to their role as mothers. The Bullock character is never really alone, but is always with another; this is her strength and this is why she comes to earth in a birth image, emerging (even evolving) out of the sea. The Redford character mobilizes an equally powerful myth concerning maleness: he is not only alone in a shipwrecked vessel, he has actually chosen to be alone by sailing the Indian Ocean with technical instruments but with no human companions. While Gravity is suffused with Bullock’s interior monologue, there are no words in All is Lost, except the poignant letter that Redford puts in a bottle, expressing the monumentality of his struggle to survive and what seems to be a lost or fractured marriage in the past. We learn everything about Bullock’s relations because she is her relationships; we learn almost nothing about Redford because he is so desperately alone.

While these images of man and woman, of masculinity and femininity, are ancient, they take a special form today, which can be grasped historically. They are the product of seventies’ feminism, and especially of its struggle with psychoanalysis — the core intellectual and in many ways spiritual struggle of that movement. Behind the Bullock woman lie thinkers like Nancy Chodorow, Carol Gilligan and Dorothy Dinnerstein, who described women as essentially object-related and men as essentially isolated and alone. Contemporary philosophers like Judith Butler have struggled to escape the gender essentialism perpetuated by seventies’ feminists, but ultimately fail because they retain the ideological attack on the “lone horseman,” the “Cartesian ego,” the “pathos of heterosexuality” and the like.

I will not try to argue here for the superiority of my own view, grounded in Freud’s emphasis on bisexuality: the way that both sexes go back and forth between male and female objects, and the way in which every sexual relation, whether heterosexual or homosexual, needs to be understood as consisting of four people, not two. But I think I can point out why All is Lost is the better of the two movies by bringing a third, also similar, movie into the discussion, namely Captain Phillips.

In Captain Phillips we again see an individual (Tom Hanks) struggling against fatally overwhelming odds, but in this case the problem is human and social, namely Somali piracy. Captain Phillips demonstrates a powerful reality of the modern world, namely the way in which literally billions of dollars in military — naval — power can be used to save a single individual. As an American who travels a lot, I am very well aware of what it means to have American power at my back. However, simplistic patriotism is undone by one unforgettable scene in the movie when the Hanks character suggests to the pirate leader (peerlessly played by Barkhad Abdi) that there may be better ways to make money than piracy, and Abdi replies “No. Maybe in America, but not here.” Similarly in All is Lost, we get a sense that what Redford finally is up against are the huge Maersk container ships, which glide past with only a handful of humans on board, and which exemplify Marx’s characterization of the fetishism of commodities. By contrast, Gravity remains the most mythic of the three, the least accessible to a critical reading or an historical analysis, and this may reflect its deeper roots in the still unresolved upheavals of the nineteen seventies.

This post has since been included in a broader piece on Anthropocene cinema.

Watching the previews for the summer movies, I find they all seem to belong to the genre of the Anthropocene. They all seem to be narratives about a civilization confronting limits of its own making. Some movies respond by stressing the glorious expenditure of energy, burning it up with images of fast cars, fast planes, fast women. And guns, lots of guns. Others opt for apocalypse. If the present cannot go on infinitely expanding, then it can only collapse. No qualitative change can be imagined in narrative form. After us, the deluge: Louis XV’s prediction democratized.

Edge of Tomorrow is an interesting variation. Yes, it’s a Tom Cruise action sci-fi concoction, but these are not without their charms. Tom’s face provides the machinic sheen against which robots and other otherwise all too techy images come to seem warm and somehow human. There’s a creepy shot of his right ear that keeps returning, again and again, with weird stretch marks, as if someone had shrouded a Mills grenade in cling-wrap.

Setting Cruise aside, Edge of Tomorrow is interesting for a few reasons. The story’s mechanic is pure video game. Cruise and his co-star have the special property, bestowed on them accidentally by invading ‘aliens’, of starting the action over again, every time they die. Edge of Tomorrow lives out – and dies out – a desire for do-overs, for digital time. The time of the edit suite, as well as of the video game, where real time is not duration but measured and metered time. As if Bergson had it backwards, and pure duration were more an after-image of clock time and clock speed.

Edge of Tomorrow is about video game time, where death is not final, not an end, but rather a beginning, a do-over. Tom and his co-star do time over and over, trying to beat the aliens, clearing levels, backtracking out of dead-ends, all the way up to the boss level. But the time against which they fight is not duration, it is rather the historical time of the Anthropocene. It’s a human wave assault, by the most advanced flesh-tech of this civilization, against the very limits it has itself created.

It is not an exaggeration to call this historical time one of civilization. The aliens have conquered Europe. Russia and China are holding them at bay. The decisive battle is a re-staging of D-Day, across the English Channel. The movie charmingly presents the Brits as nothing more than a front for American imperial power. But in a way all of the current variants of capitalism as a civilization confront the same enemy.

It is of course unseemly to talk about civilizations. That whole language has belonged since its inception to the apologists of empire. So one has to imagine, when I use the word civilization, that I say it the way Charles Fourier would – pausing to spit in the middle of such a long and encumbered term.

The fantasy, then, is that the digital time of this civilization – be it capitalism still, or something worse – has within its power the ability to overcome the almost shapeless, formless, seething tentacle menace. One which curiously seems to have some sort of mimetic power. It doubles us and confounds us. It erupts from the earth or out of the sea, or appears out of nowhere in the sky. It’s an almost molecular enemy. It is techy, like us, and yet not. It is perhaps the shadow image of our own forces of production, mediating between earth and air and water, and bringing fire. It is very scary except in those moments when the filmmakers lose their nerve and give it a face.

Tom and co-star alone confront this alien with the digital power of do-over time, getting beaten again and again, restarting the game each time. The co-star is Emily Blunt. She is the perfect embodiment of the weaponized woman. We see her tanned and oiled arms as she does push-ups in a black-ops chic sleeveless number, the camera lingering just a bit over her ass. The casting is a masterstroke. Blunt plays the global archetype of the stiff-upper-lip-Brit, mixed in with a bit of thorny English rose. The femme-gun doesn’t really do feelings. Blunt’s performance is so on-point that she makes Cruise seem almost human.

I won’t give away where they confront the boss alien, but it is in a landscape under water. Weird weather is a feature of a lot of movies of the Anthropocene. It can be caused by anything at all, except the emission of greenhouse gases from the collective labors of this civilization. This is key. The cinema of the Anthropocene is about anything but the causes of the Anthropocene. But it is very candid about its effects.

So the boss-alien is confronted in old Europe, from which this civilization’s mode of production sprang. We see old Europe under water, as indeed in a way it already is, in the future already pre-set for it.

Cruise and Blunt: perfect names for our heroes, for the two affects that dominate the action. And of course they win. There may be a point to this. If we could prefigure all of the permutations of the narrative resources of this civilization, run through them all, have all our futures over and done with in advance, we might be done with this whole narrative formation. Perhaps we need to play this game till we get bored with it. Perhaps we will get bored with it soon enough to discover that its digital time does not accord with the historical time of the Anthropocene. That other time is out there, like a formless alien.

As someone who grew up with Paul Verhoeven’s original 1987 RoboCop, I can’t help but feel the movie’s dystopic and critical social commentary was lost in its reboot. What was once a critical, dystopic film exploring the dangers of unchecked corporate power has become a soft endorsement of corporate warfare. Yet some elements of the remake do provide useful insights into our changing social politics that are worth considering. The evolution of RoboCop reveals how both capitalism and imperialism have changed and deepened their hold on our cultural imaginary, in less than pleasant ways.

There are basically four key sets of players in the original story: Omni Consumer Products or OmniCorp (OCP), the mega corporation which builds RoboCop, runs the Detroit police force and plans to construct a corporate utopia called “Delta City” in the ruins of old Detroit; Alex Murphy, the Detroit cop who becomes RoboCop; the police department run by OCP; and crime boss Clarence Boddicker, who is in cahoots with OCP executive Dick Jones. The connections among these players were central to the original story. By removing or sidelining them in the new movie, a telling critical edge has been lost.

In the original story, OCP takes over the collapsing police department, ostensibly due to rampant crime and the crumbling economy. This explains the significance of the abandoned factory where the story starts and ends, as well as why the Detroit police union votes to strike against OCP. But in the remake there is no mention that Detroit’s economy is collapsing, that OCP is actually facilitating this process in order to build its new Delta City, or that the police force is run and controlled by OCP. Instead OCP is shown as an actor separate from the police. By removing these tensions in the original plot, questions about the dangers of growing private security forces, critiques of forced urban “redevelopment” for the wealthy, and the class politics of unions and labor strikes have all but disappeared from discussion. Instead there is a forced performance from Samuel L. Jackson playing media pundit Pat Novak, host of The Novak Element, who is basically a black Bill O’Reilly caricature, there to remind us how fucking awesome America is, was and always will be. In stereotypical fashion he attacks liberals more concerned with robot “feelings” than saving American lives abroad.