Wednesday, October 19, 2016

The Future Hiding in Plain Sight

Carl Jung used to argue that meaningful coincidences—in his jargon, synchronicity—were as important as cause and effect in shaping the details of human life. Whether that’s true in the broadest sense, I’ll certainly vouch for the fact that they’re a constant presence in The Archdruid Report. Time and again, just as I sit down to write a post on some theme, somebody sends me a bit of data that casts unexpected light on that very theme.

Last week was a case in point. Regular readers will recall that the theme of last week’s post was the way that pop-culture depictions of deep time implicitly erase the future by presenting accounts of Earth’s long history that begin billions of years ago and end right now. I was brooding over that theme a little more than a week ago, chasing down details of the prehistoric past and the posthistoric future, when one of my readers forwarded me a copy of the latest Joint Operating Environment report by the Pentagon—JOE-35, to use the standard jargon—which attempts to predict the shape of the international environment in which US military operations will take place in 2035, and mostly succeeds in providing a world-class example of the same blindness to the future I discussed in my post.

The report can be downloaded in PDF form here and is worth reading in full. It covers quite a bit of ground, and a thorough response to it would be more the size of a short book than a weekly blog post. The point I want to discuss this week is its identification of six primary “contexts for conflict” that will shape the military environment of the 2030s:

“1. Violent Ideological Competition. Irreconcilable ideas communicated and promoted by identity networks through violence.” That is, states and non-state actors alike will pursue their goals by spreading ideologies hostile to US interests and encouraging violent acts to promote those ideologies.

“2. Threatened U.S. Territory and Sovereignty. Encroachment, erosion, or disregard of U.S. sovereignty and the freedom of its citizens from coercion.” That is, states and non-state actors will attempt to carry out violent acts against US citizens and territory.

“3. Antagonistic Geopolitical Balancing. Increasingly ambitious adversaries maximizing their own influence while actively limiting U.S. influence.” That is, rival powers will pursue their own interests in conflict with those of the United States.

“4. Disrupted Global Commons. Denial or compulsion in spaces and places available to all but owned by none.” That is, the US will no longer be able to count on unimpeded access to the oceans, the air, space, and the electromagnetic spectrum in the pursuit of its interests.

“5. A Contest for Cyberspace. A struggle to define and credibly protect sovereignty in cyberspace.” That is, US cyberwarfare measures will increasingly face effective defenses and US cyberspace assets will increasingly face effective hostile incursions.

“6. Shattered and Reordered Regions. States unable to cope with internal political fractures, environmental stressors, or deliberate external interference.” That is, states will continue to be overwhelmed by the increasingly harsh pressures on national survival in today’s world, and the resulting failed states and stateless zones will spawn insurgencies and non-state actors hostile to the US.

Apparently nobody at the Pentagon noticed one distinctly odd thing about this outline of the future context of American military operations: it’s not an outline of the future at all. It’s an outline of the present. Every one of these trends is a major factor shaping political and military action around the world right now. JOE-35 therefore assumes, first, that each of these trends will remain locked in place without significant change for the next twenty years, and second, that no new trends of comparable importance will emerge to reshape the strategic landscape between now and 2035. History suggests that both of these are very, very risky assumptions for a great power to make.

It so happens that I have a fair number of readers who serve in the US armed forces just now, and a somewhat larger number who serve in the armed forces of other countries more or less allied with the United States. (I may have readers serving with the armed forces of Russia or China as well, but they haven’t announced themselves—and I suspect, for what it’s worth, that they’re already well acquainted with the points I intend to make.) With those readers in mind, I’d like to suggest a revision to JOE-35, which will take into account the fact that history can’t be expected to stop in its tracks for the next twenty years, just because we want it to. Once that’s included in the analysis, at least five contexts of conflict not mentioned by JOE-35 stand out from the background:

1. A crisis of legitimacy in the United States. Half a century ago, most Americans assumed as a matter of course that the United States had the world’s best, fairest, and most democratic system of government; only a small minority questioned the basic legitimacy of the institutions of government or believed they would be better off under a different system. Since the late 1970s, however, federal policies that subsidized automation and the offshoring of industrial jobs, and tacitly permitted mass illegal immigration to force down wages, have plunged the once-proud American working class into impoverishment and immiseration. While the wealthiest 20% or so of Americans have prospered since then, the other 80% of the population has experienced ongoing declines in standards of living.

The political impact of these policies has been amplified by a culture of contempt toward working class Americans on the part of the affluent minority, and an insistence that any attempt to discuss economic and social impacts of automation, offshoring of jobs, and mass illegal immigration must be dismissed out of hand as mere Luddism, racism, and xenophobia. As a direct consequence, a great many working class Americans—in 1965, by and large, the sector of the public most loyal to American institutions—have lost faith in the US system of government. This shift in values has massive military as well as political implications, since working class Americans are much more likely than others to own guns, to have served in the military, and to see political violence as a potential option.

Thus a domestic insurgency in the United States is a real possibility at this point. Since, as already noted, working class Americans are disproportionately likely to serve in the military, planning for a domestic insurgency in the United States will have to face the possibility that such an insurgency will include veterans familiar with current counterinsurgency doctrine. It will also have to cope with the risk that National Guard and regular armed forces personnel sent to suppress such an insurgency will go over to the insurgent side, transforming the insurgency into a civil war.

As some wag has pointed out, the US military is very good at fighting insurgencies but not so good at defeating them, and the fate of Eastern Bloc nations after the fall of the Soviet Union shows just how fast a government can unravel once its military personnel turn against it. Furthermore, since the crisis of legitimacy is driven by policies backed by a bipartisan consensus, military planners can only deal with the symptoms of a challenge whose causes are beyond their control.

2. The marginalization of the United States in the global arena. Twenty years ago the United States was the world’s sole superpower, having triumphed over the Soviet Union, established a rapprochement with China, and marginalized such hostile Islamic powers as Iran. Those advantages did not survive two decades of overbearing and unreliable US policy, which not only failed to cement the gains of previous decades but succeeded in driving Russia and China, despite their divergent interests and long history of conflict, into an alliance against the United States. Future scholars will likely consider this to be the worst foreign policy misstep in our nation’s history.

Iran’s alignment with the Sino-Russian alliance and, more recently, overtures from the Philippines and Egypt, track the continuation of this trend, as do the establishment of Chinese naval bases across the Indian Ocean from Myanmar to the Horn of Africa, and most recently, Russian moves to reestablish overseas bases in Syria, Egypt, Vietnam, and Cuba. Russia and China are able to approach foreign alliances on the basis of a rational calculus of mutual interest, rather than the dogmatic insistence on national exceptionalism that guides so much of US foreign policy today. This allows them to offer other nations, including putative US allies, better deals than the US is willing to concede.

As a direct result, barring a radical change in its foreign policy, the United States in 2035 will be marginalized by a new global order centered on Beijing and Moscow, denied access to markets and resources by trade agreements hostile to its interests, and will have to struggle to maintain influence even over its “near abroad.” It is unwise to assume, as some current strategists do, that China’s current economic problems will slow that process. Some European leaders in the 1930s, Adolf Hitler among them, assumed that the comparable boom-bust cycle the United States experienced in the 1920s and 1930s meant that the US would be a negligible factor in the European balance of power in the 1940s. I think we all know how that turned out.

Here again, barring a drastic change in US foreign policy, military planners will be forced to deal with the consequences of unwelcome shifts without being able to affect the causes of those shifts. Careful planning can, however, redirect resources away from global commitments that will not survive the process of marginalization, and toward securing the “near abroad” of the United States and withdrawing assets to the continental US to keep them from being compromised by former allies.

3. The rise of “monkeywrenching” warfare. The United States has the most technologically complex military in the history of war. While this is normally considered an advantage, it brings with it no shortage of liabilities. The most important of these is the vulnerability of complex technological systems to “monkeywrenching”—that is, strategies and tactics targeting technological weak points in order to degrade the capacities of a technologically superior force.  The more complex a technology is, as a rule, the wider the range of monkeywrenching attacks that can interfere with it; the more integrated a technology is with other technologies, the more drastic the potential impacts of such attacks. The complexity and integration of US military technology make it a monkeywrencher’s dream target, and current plans for increased complexity and integration will only heighten the risks.

The risks created by the emergence of monkeywrenching warfare are heightened by an attitude that has deep roots in the culture of US military procurement:  the unquestioned assumption that innovation is always improvement. This assumption has played a central role in producing weapons systems such as the F-35 Joint Strike Fighter, which is so heavily burdened with assorted innovations that it has a much shorter effective range, a much smaller payload, and much higher maintenance costs than competing Russian and Chinese fighters. In effect, the designers of the F-35 were so busy making it innovative that they forgot to make it work. The same thing can be said about many other highly innovative but dubiously effective US military technologies.

Problems caused by excessive innovation can to some extent be anticipated and countered by US military planners. What makes monkeywrenching attacks by hostile states and non-state actors so serious a threat is that it may not be possible to predict them in advance. While US intelligence assets should certainly make every effort to identify monkeywrenching technologies and tactics before they are used, US forces must be aware that at any moment, critical technologies may be put out of operation or turned to the enemy’s advantage without warning. Rigorous training in responding to technological failure, and redundant systems that can operate independently of existing networks, may provide some protection against monkeywrenching, but the risk remains grave.

4. The genesis of warband culture in failed states. While JOE-35 rightly identifies the collapse of weak states into failed-state conditions as a significant military threat, a lack of attention to the lessons of history leads its authors to neglect the most serious risk posed by the collapse of states in a time of general economic retrenchment and cultural crisis. That risk is the emergence of warband culture—a set of cultural norms that dominate the terminal periods of most recorded civilizations and the dark ages that follow them, and play a central role in the historical transformation to dark age conditions.

Historians use the term “warband” to describe a force of young men whose only trade is violence, gathered around a charismatic leader and supporting itself by pillage. While warbands tend to come into being whenever public order collapses or has not yet been imposed, the rise of a self-sustaining warband culture requires a prolonged period of disorder in which governments either do not exist or cannot establish their legitimacy in the eyes of the governed, and warbands become accepted as the de facto governments of territories of various sizes. Once this happens, the warbands inevitably begin to move outward; the ethos and the economics of the warband alike require access to plunder, and this can best be obtained by invading regions not yet reduced to failed-state conditions, thus spreading the state of affairs that fosters warband culture in the first place.

Most civilizations have had to contend with warbands in their last years, and the record of attempts to quell them by military force is not good. At best, a given massing of warbands can be defeated and driven back into whatever stateless area provides them with their home base; a decade or two later, they can be counted on to return in force. Systematic attempts to depopulate their home base simply drive them into other areas, causing the collapse of public order there. Once warband culture establishes itself solidly on the fringes of a civilization, history suggests, the entire civilized area will eventually be reduced to failed-state conditions by warband incursions, leading to a dark age. Nothing guarantees that the modern industrial world is immune from this same process.

The spread of failed states around the periphery of the industrial world is thus an existential threat not only to the United States but to the entire project of modern civilization. What makes this a critical issue is that US foreign policy and military actions have repeatedly created failed states in which warband culture can flourish:  Afghanistan, Iraq, Syria, Libya, and Ukraine are only the most visible examples. Elements of US policy toward Mexico—for example, the “Fast and Furious” gunrunning scheme—show worrisome movement in the same direction. Unless these policies are reversed, the world of 2035 may face conditions like those that have ended civilization more than once in the past.

5. The end of the Holocene environmental optimum. All things considered, the period since the final melting of the great ice sheets some six millennia ago has been extremely propitious for the project of human civilization. Compared to previous epochs, the global climate has been relatively stable, and sea levels have changed only slowly. Furthermore, the globe six thousand years ago was stocked with an impressive array of natural resources, and the capacity of its natural systems to absorb sudden shocks had not been challenged on a global level for some sixty-five million years.

None of those conditions remains the case today. Ongoing dumping of greenhouse gases into the atmosphere is rapidly destabilizing the global climate, and triggering ice sheet melting in Greenland and Antarctica that promises to send sea levels up sharply in the decades and centuries ahead. Many other modes of pollution are disrupting natural systems in a galaxy of ways, triggering dramatic environmental changes. Meanwhile breakneck extraction is rapidly depleting the accessible stocks of hundreds of different nonrenewable resources, each of them essential to some aspect of contemporary industrial society, and the capacity of natural systems to cope with the cascading burdens placed upon them by human action has already reached the breaking point in many areas.

The end of the Holocene environmental optimum—the era of relative ecological stability in which human civilization has flourished—is likely to be a prolonged process. By 2035, however, current estimates suggest that the initial round of impacts will be well under way: shifting climate belts causing agricultural failure, rising sea levels imposing drastic economic burdens on coastal communities and the nations to which they belong, rising real costs for resource extraction driving price spikes and demand destruction, and increasingly intractable conflicts pitting states, non-state actors, and refugee populations against one another for remaining supplies of fuel, raw materials, topsoil, food, and water.

US military planners will need to take increasingly hostile environmental conditions into account. They will also need to prepare for mass movements of refugees out of areas of flooding, famine, and other forms of environmental disruption, on a scale exceeding current refugee flows by orders of magnitude. Finally, since the economic impact of these shifts on the United States will affect the nation’s ability to provide necessary resources for its military, plans for coping with cascading environmental crises will have to take into account the likelihood that the resources needed to do so may be in short supply.

Those are the five contexts for conflict I foresee. What makes them even more challenging than they would otherwise be, of course, is that none of them occur in a vacuum, and each potentially feeds into the others. Thus, for example, it would be in the national interest of Russia and/or China to help fund and supply a domestic insurgency in the United States (contexts 1 and 2); emergent warbands may well be able to equip themselves with the necessary gear to engage in monkeywrenching attacks against US forces sent to contain them (contexts 4 and 3); disruptions driven by environmental change will likely help foster the process of warband formation (contexts 5 and 4), and so on.

That’s the future hiding in plain sight: the implications of US policies in the present and recent past, taken to their logical conclusions. The fact that current Pentagon assessments of the future remain so tightly fixed on the phenomena of the present, with no sense of where those phenomena lead, gives me little hope that any of these bitter outcomes will be avoided.

****************
There will be no regularly scheduled Archdruid Report next week. Blogger's latest security upgrade has made it impossible for me to access this blog while I'm out of town, and I'll be on the road (and my backup moderator unavailable) for a good part of what would be next week's comment cycle. I've begun the process of looking for a new platform for my blogs, and I'd encourage any of my readers who rely on Blogger or any other Google product to look for alternatives before you, too, get hit by an "upgrade" that makes it more trouble to use than it's worth.

Wednesday, October 12, 2016

An Afternoon in Early Autumn

I think it was the writer John McPhee who coined the term “deep time,” and the late paleontologist Stephen Jay Gould who popularized it, as a name for the vast panorama opened up to human eyes by the last three hundred years or so of discoveries in geology and astronomy. It’s a useful label for an even more useful concept. In our lives, we deal with time in days, seasons, years, decades at most; decades, centuries and millennia provide the yardsticks by which the life cycles of human societies—that is to say, history, in the usual sense of that word—are traced.

Both these, the time frame of individual lives and the time frame of societies, are anthropocentric, as indeed they should be; lives and societies are human things and require a human measure. When that old bamboozler Protagoras insisted that “man is the measure of all things,” though, he uttered a subtle truth wrapped in a bald-faced lie.* The subtle truth is that since we are what we are—that is to say, social primates who have learned a few interesting tricks—our capacity to understand the cosmos is strictly limited by the perceptions that human nervous systems are capable of processing and the notions that human minds are capable of thinking. The bald-faced lie is the claim that everything in the cosmos must fit inside the perceptions human beings can process and the notions they can think.

(*No, none of this has to do with gender politics. The Greek language, unlike modern English, had a common gender-nonspecific noun for “human being,” anthropos, which was distinct from andros, “man,” and gyne, “woman.” The word Protagoras used was anthropos.)

It took the birth of modern geology to tear through the veil of human time and reveal the stunningly inhuman scale of time that measures the great cycles of the planet on which we live. Last week’s post sketched out part of the process by which people in Europe and the European diaspora, once they got around to noticing that the Book of Genesis is about the Rock of Ages rather than the age of rocks, struggled to come to terms with the immensities that geological strata revealed. To my mind, that was the single most important discovery our civilization has made—a discovery with which we’re still trying to come to terms, with limited success so far, and one that I hope we can somehow manage to hand down to our descendants in the far future.

The thing that makes deep time difficult for many people to cope with is that it makes self-evident nonsense out of any claim that human beings have any uniquely important place in the history of the cosmos. That wouldn’t be a difficulty at all, except that the religious beliefs most commonly held in Europe and the European diaspora make exactly that claim.

That last point deserves some expansion here, not least because a minority among the current crop of “angry atheists” have made a great deal of rhetorical hay by insisting that all religions, across the board, point for point, are identical to whichever specific religion they themselves hate the most—usually, though not always, whatever Christian denomination they rebelled against in their adolescent years. That insistence is a fertile source of nonsense, and never so much as when it turns to the religious implications of time.

The conflict between science and religion over the age of the Earth is a purely Western phenomenon.  Had the great geological discoveries of the eighteenth and nineteenth centuries taken place in Japan, say, or India, the local religious authorities wouldn’t have turned a hair. On the one hand, most Asian religious traditions juggle million-year intervals as effortlessly as any modern cosmologist; on the other, Asian religious traditions have by and large avoided the dubious conviction, enshrined in most (though not all) versions of Christianity, that the Earth and everything upon it exists solely as a stage on which the drama of humanity’s fall and redemption plays out over a human-scaled interval of time. The expansive Hindu cosmos with its vast ever-repeating cycles of time, the Shinto concept of Great Nature as a continuum within which every category of being has its rightful place, and other non-Western worldviews offer plenty of room for modern geology to find a home.

Ironically, though, the ongoing decline of mainstream Christianity as a cultural influence in the Western world hasn’t done much to lessen the difficulty most people in the industrial world feel when faced with the abysses of deep time. The reason here is simply that the ersatz religion that’s taken the place of Christianity in the Western imagination also tries to impose a rigid ideological scheme not only on the ebb and flow of human history, but on the great cycles of the nonhuman cosmos as well. Yes, that would be the religion of progress—the faith-based conviction that human history is, or at least ought to be, a straight line extending onward and upward from the caves to the stars.

You might think, dear reader, that a belief system whose followers like to wallow in self-praise for their rejection of the seven-day creation scheme of the Book of Genesis and their embrace of deep time in the past would have a bit of a hard time evading its implications for the future. Let me assure you that this seems to give most of them no trouble at all. From Ray Kurzweil’s pop-culture mythology of the Singularity—a straightforward rewrite of Christian faith in the Second Coming dolled up in science-fiction drag—straight through to the earnest space-travel advocates who insist that we’ve got to be ready to abandon the solar system when the sun turns into a red giant four billion years from now, a near-total aversion to thinking about the realities of the deep time ahead of us is astonishingly prevalent among those who think they’ve grasped the vastness of Earth’s history.

I’ve come to think that one of the things that feeds this curious quirk of collective thinking is a bit of trivia to be found in a great many books on geology and the like—the metaphor that turns the Earth’s entire history into a single year, starting on January 1 with the planet’s formation out of clouds of interstellar dust and ending at midnight on December 31, which is always right now.

That metaphor has been rehashed more often than the average sitcom plot. A quick check of the books in the study where I’m writing this essay finds three different versions, one written in the 1960s, one in the 1980s, and one a little more than a decade ago. The dates of various events dance around the calendar a bit as new discoveries rewrite this or that detail of the planet’s history, to be sure; when I was a dinosaur-crazed seven-year-old, the Earth was only three and a half billion years old and the dinosaurs died out seventy million years ago, while the latest research I know of revises those dates to 4.6 billion years and 65 million years respectively, moving the date of the end-Cretaceous extinction from December 24 to December 26—in either case, a wretched Christmas present for small boys. Such details aside, the basic metaphor remains all but unchanged.

There’s only one problem with it, but it’s a whopper. Ask yourself this: what has gotten left out of that otherwise helpful metaphor? The answer, of course, is the future.

Let’s imagine, by contrast, a metaphor that maps the entire history of life on earth, from the first living thing on this planet to the last, onto a single year. We don’t know exactly when life will go extinct on this planet, but then we don’t know exactly when it emerged, either; the most recent estimate I know of puts the origin of  terrestrial life somewhere a little more than 3.7 billion years ago, and the point at which the sun’s increasing heat will finally sterilize the planet somewhere a little more than 1.2 billion years from now. Adding in a bit of rounding error, we can set the lifespan of our planetary biosphere at a nice round five billion years. On that scale, a month of thirty days is 411 million years, a single day is 13.7 million years, an hour is around 571,000 years, a minute is around 9514 years, and a second is 158 years and change. Our genus, Homo,* evolved maybe two hours ago, and all of recorded human history so far has taken up a little less than 32 seconds.

(*Another gender-nonspecific word for “human being,” this one comes from Latin, and is equally distinct from vir, “man,” and femina, “woman.” English really does need to get its act together.)

That all corresponds closely to the standard metaphor. The difference comes in when you glance at the calendar and find out that the present moment in time falls not on December 31 or any other similarly momentous date, but on an ordinary, undistinguished day—by my back-of-the-envelope calculation, it would be September 26.

I like to imagine our time, along these lines, as an instant during an early autumn afternoon in the great year of Earth’s biosphere. Like many another late September day, it’s becoming uncomfortably hot, and billowing dark clouds stand on the horizon, heralds of an oncoming storm. We human mayflies, with a lifespan averaging maybe half a second, dart here and there, busy with our momentary occupations; a few of us now and then lift our gaze from our own affairs and try to imagine the cold bare fields of early spring, the sultry air of summer evenings, or the rigors of a late autumn none of us will ever see.

With that in mind, let’s put some other dates onto the calendar. While life began on January 1, multicellular life didn’t get started until sometime in the middle of August—for almost two-thirds of the history of life, Earth was a planet of bacteria and blue-green algae, and in terms of total biomass, it arguably still is.  The first primitive plants and invertebrate animals ventured onto the land around August 25; the terrible end-Permian extinction crisis, the worst the planet has yet experienced, hit on September 8; the dinosaurs perished in the small hours of September 22, and the last ice age ended just over a minute ago, having taken place over some twelve and a half minutes.
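For readers who want to check the arithmetic, the mapping is simple to reproduce. The short Python sketch below is my own illustration, using the round figures given above (a five-billion-year lifespan for the biosphere, life beginning 3.7 billion years ago, a 365-day calendar year); because everything is rounded, its dates can land a day or so off the figures in the text.

```python
from datetime import date, timedelta

# The "great year": the biosphere's full five-billion-year lifespan
# mapped onto a single 365-day calendar year starting January 1.
LIFESPAN_YEARS = 5.0e9      # total run of life on Earth, rounded
ORIGIN_YEARS_AGO = 3.7e9    # origin of life, years before present
DAYS_IN_YEAR = 365

YEARS_PER_DAY = LIFESPAN_YEARS / DAYS_IN_YEAR  # ~13.7 million years
YEARS_PER_HOUR = YEARS_PER_DAY / 24            # ~571,000 years
YEARS_PER_SECOND = YEARS_PER_HOUR / 3600       # ~158 years and change

def calendar_date(years_ago):
    """Map an event (years before present) to its date in the great year."""
    elapsed = ORIGIN_YEARS_AGO - years_ago     # years since life began
    day_number = elapsed / YEARS_PER_DAY       # days past January 1
    # Any non-leap year works as a template calendar.
    return date(2001, 1, 1) + timedelta(days=day_number)

print(calendar_date(0))       # the present: late September
print(calendar_date(470e6))   # life moves onto land: late August
print(calendar_date(252e6))   # end-Permian extinction: early September
print(calendar_date(65e6))    # end-Cretaceous extinction: late September
```

Running it puts the present in the last week of September and the end-Cretaceous extinction a few days earlier, matching the text's figures to within a day or two.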

Now let’s turn and look in the other direction. The last ice age was part of a glacial era that began a little less than two hours ago and can be expected to continue through the morning of the 27th—on our time scale, they happen every two and a half weeks or so, and the intervals between them are warm periods when the Earth is a jungle planet and glaciers don’t exist. Our current idiotic habit of treating the atmosphere as a gaseous sewer will disrupt that cycle for only a very short time; our ability to dump greenhouse gases into the atmosphere will end in less than a second as readily accessible fossil fuel reserves are exhausted, and it will take rather less than a minute thereafter for natural processes to scrub the excess CO2 from the atmosphere and return the planet’s climate to its normal instability.

Certain other consequences of our brief moment of absurd extravagance will last longer.  On our timescale, the process of radioactive decay will take around half an hour (that is to say, a quarter million years or so) to reduce high-level nuclear waste all the way to harmlessness. It will take an interval of something like the same order of magnitude before all the dead satellites in high orbits have succumbed to the complex processes that will send them to a fiery fate in Earth’s atmosphere, and quite possibly longer for the constant rain of small meteorites onto the lunar surface to pound the Apollo landers and other space junk there to unrecognizable fragments. Given a few hours of the biosphere’s great year, though, everything we are and everything we’ve done will be long gone.

Beyond that, the great timekeeper of Earth’s biosphere is the Sun. Stars increase in their output of heat over most of their life cycle, and the Sun is no exception. The single-celled chemosynthetic organisms that crept out of undersea hot springs in February or March of the great year encountered a frozen world, lit by a pale white Sun whose rays gave far less heat than today; the oldest currently known ice age, the Cryogenian glaciation of the late Precambrian period, was apparently cold enough to freeze the oceans solid and wrap most of the planet in ice. By contrast, toward the middle of November in the distant Neozoic Era, the Sun will be warmer and yellower than it is today, and glacial eras will likely involve little more than the appearance of snow on a few high mountains normally covered in jungle.

Thus the Earth will gradually warm through October and November. Temperatures will cycle up and down with the normal cycles of planetary climate, but each warm period will tend to be a little warmer than the last, and each cold period a little less frigid. Come December, most of a billion years from now, as the heat climbs past one threshold after another, more and more of the Earth’s water will evaporate, be split by sunlight into hydrogen and oxygen in the upper atmosphere, and bleed off into space; the Earth will become a desert world, with life clinging to existence at the poles and in fissures deep underground, until finally the last salt-crusted seas run dry and the last living things die out.

And humanity? The average large vertebrate genus lasts something like ten million years—in our scale, something over seventeen hours. As already noted, our genus has only been around for about two hours so far, so it’s statistically likely that we still have a good long run ahead of us. I’ve discussed in these essays several times already the hard physical facts that argue that we aren’t going to go to the stars, or even settle other planets in this solar system, but that’s nothing we have to worry about. Even if we have an improbably long period of human existence ahead of us—say, the fifty million years that bats of the modern type have been around, some three and a half days in our scale, or ten thousand times the length of all recorded human history to date—the Earth will be burgeoning with living things, and perfectly capable of supporting not only intelligent life but rich, complex, unimaginably diverse civilizations, long after we’ve all settled down to our new careers as fossils.
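For readers who want to check these conversions, here is a minimal sketch of the compressed timescale the essay uses. The constant below is my own inference from the text’s figure of “ten million years… something over seventeen hours,” which works out to roughly 575,000 calendar years per scaled hour; the essay never states that number directly.

```python
# Rough converter for the "great year" timescale used in this essay.
# ASSUMPTION: ~575,000 real years per scaled hour, back-calculated from
# the text's "ten million years ... something over seventeen hours."

YEARS_PER_SCALED_HOUR = 575_000

def to_scaled_hours(years: float) -> float:
    """Convert real years to hours on the compressed 'great year' scale."""
    return years / YEARS_PER_SCALED_HOUR

def to_scaled_days(years: float) -> float:
    """Convert real years to days on the compressed scale."""
    return to_scaled_hours(years) / 24

# The essay's own checkpoints, reproduced approximately:
print(to_scaled_hours(10_000_000))  # average vertebrate genus: ~17 hours
print(to_scaled_days(50_000_000))   # age of modern-type bats: ~3.6 days
print(to_scaled_hours(250_000))     # decay of high-level waste: ~half an hour
```

The same constant reproduces the other figures in the passage to within rounding, which is about as much precision as a calendar metaphor can bear.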

This does not mean, of course, that the Earth will be capable of supporting the kind of civilization we have today. It’s arguably not capable of supporting that kind of civilization now.  Certainly the direct and indirect consequences of trying to maintain the civilization we’ve got, even for the short time we’ve made that attempt so far, are setting off chains of consequences that don’t seem likely to leave much of it standing for long. That doesn’t mean we’re headed back to the caves, or for that matter, back to the Middle Ages—these being the two bogeymen that believers in progress like to use when they’re trying to insist that we have no alternative but to keep on stumbling blindly ahead on our current trajectory, no matter what.

What it means, instead, is that we’re headed toward something that’s different—genuinely, thoroughly, drastically different. It won’t just be different from what we have now; it’ll also be different from the rigidly straight-line extrapolations and deus ex machina fauxpocalypses that people in industrial society like to use to keep from thinking about the future we’re making for ourselves. Off beyond the dreary Star Trek fantasy of metastasizing across the galaxy, and the equally hackneyed Mad Max fantasy of pseudomedieval savagery, lies the astonishing diversity of the future before us: a future potentially many orders of magnitude longer than all of recorded history to date, in which human beings will live their lives and understand the world in ways we can’t even imagine today.

It’s tolerably common, when points like the one I’ve tried to make here get raised at all, for people to insist that paying attention to the ultimate fate of the Earth and of our species is a recipe for suicidal depression or the like. With all due respect, that claim seems silly to me. Each one of us, as we get out of bed in the morning, realizes at some level that the day just beginning will bring us one step closer to old age and death, and yet most of us deal with that reality without too much angst.

In the same way, I’d like to suggest that it’s past time for the inmates of modern industrial civilization to grow up, sprout some gonads—either kind, take your pick—and deal with the simple, necessary, and healthy realization that our species is not going to be around forever. Just as maturity in the individual arrives when it sinks in that human life is finite, collective maturity may just wait for a similar realization concerning the life of the species. That kind of maturity would be a valuable asset just now, not least because it might help us grasp some of the extraordinary possibilities that will open up as industrial civilization finishes its one-way trip down the chute marked “decline and fall” and the deindustrial future ahead of us begins to take shape.

Wednesday, October 05, 2016

The Myth of the Anthropocene

To explore the messy future that modern industrial society is making for itself, it’s necessary now and again to stray into some of the odd corners of human thought. Over the decade and a bit that this blog has been engaged in that exploration, accordingly, my readers and I have gone roaming through quite an assortment of topics—politics, religion, magic, many different areas of history, at least as many sciences, and the list goes on. This week, it’s time to ramble through geology, for reasons that go back to some of the basic presuppositions of our culture, and reach forward from there to the far future.

Over the last few years, a certain number of scientists, climate activists, and talking heads in the media have been claiming that the Earth has passed out of its previous geological epoch, the Holocene, into a new epoch, the Anthropocene. Their argument is straightforward: human beings have become a major force shaping geology, and that unprecedented reality requires a new moniker. Last I heard, the International Commission on Stratigraphy, the scholarly body that authorizes formal changes to that end of scientific terminology, hasn’t yet approved the new term for official use, but it’s seeing increasing use in less formal settings.

I’d like to suggest that the proposed change is a mistake, and that the label “Anthropocene” should go into whatever circular file holds phlogiston, the luminiferous ether, and other scientific terms that didn’t turn out to represent realities. That’s not because I doubt that human beings are having a major impact on geology just now; far from it. My reasons are somewhat complex, and will require a glance back over part of the history of geology—specifically, the evolution of the labels we use to talk about portions of the past. It’s going to be a bit of a long journey, but bear with me; it matters.

Back in the seventeenth century, when the modern study of geology first got under way, the Book of Genesis was considered to be an accurate account of the Earth’s early history, and so geologists looked for evidence of the flood that plopped Noah’s ark on Mount Ararat. They found it, too, or that’s what people believed at the time. By and large, anywhere you go in western Europe, you’ll be standing on one of three things: the first is rock, the second is an assortment of gravels and compact tills, and the third is soil. With vanishingly few exceptions, where they overlap, the rock is on the bottom, the gravels and tills are in the middle, and the soil is on top. Noting that some of the gravels and tills look like huge versions of the sandbars and other features shaped by moving water, the early geologists decided the middle layer had been left by the Flood—that’s diluvium in Latin—and so the three layers were named Antediluvian (“before the flood”), Diluvian, and Postdiluvian (“after the flood”).

So far, so good—except then they started looking at the Antediluvian layer, and found an assortment of evidence that seemed to imply that really vast amounts of time had passed between different layers of rock. During the early eighteenth century, as this sank in, the Book of Genesis lost its status as a geology textbook, and geologists came up with a new set of four labels: Primary, Secondary, Tertiary, and Quaternary. (These are fancy ways of saying “First, Second, Third, and Fourth,” in case you were wondering.) The Quaternary layer consisted of the former Diluvian and Postdiluvian gravels, tills, and soil; the Tertiary consisted of rocks and fossils that were found under those; the Secondary was the rocks and fossils below that, and the Primary was at the bottom.

It was a good scheme for the time; on the surface of the Earth, if you happen to live in western Europe and walk around a lot, you’ll see very roughly equal amounts of all four layers. What’s more, they  always occur in the order just given.  Where they overlap, the Primary is always under the Secondary, and so on; you never find Secondary rocks under Primary ones, except when the rock layers have obviously been folded by later geological forces. So geologists assigned them to four different periods of time, named after the layers—the Primary Era, the Secondary Era, and so on.

It took quite a bit of further work for geologists to get a handle on how much time was involved in each of these eras, and as the results of that line of research started to become clear, there was a collective gulp loud enough to echo off the Moon. Outside of India and a few Native American civilizations, nobody anywhere had imagined that the history of the Earth might involve not thousands of years, but billions of them. As this sank in, the geologists also realized that their four eras were of absurdly different lengths. The Quaternary was only two million years long; the Tertiary, around sixty-three million years; the Secondary, around one hundred eighty-six million years; and the Primary, from there back to the Earth’s origin, or better than four billion years.

So a new scheme was worked out. The Quaternary era became the Quaternary period, and it’s still the Quaternary today, even though it’s not the fourth of anything any more. The Tertiary also became a period—it later got broken up into the Paleogene and Neogene periods—and the Tertiary (or Paleogene and Neogene) and Quaternary between them made up the Cenozoic (Greek for “recent life”) era. The former Secondary era became the Mesozoic (“middle life”) era, and was divided into three periods; starting with the most recent, these are the Cretaceous, Jurassic, and Triassic. The former Primary era became the Paleozoic (“old life”) era, and was divided into six periods; again, starting with the most recent, these are the Permian, Carboniferous, Devonian, Silurian, Ordovician, and Cambrian. The Cambrian started around 542 million years ago, and everything before then—all three billion years and change—was tossed into the vast dark basement of the Precambrian.

It was a pretty good system, and one of the things that was pretty good about it is that the periods were of very roughly equal length. Thus the Paleozoic had twice as many periods as the Mesozoic, and it lasted around twice as long. The Mesozoic, in turn, had three times as many complete periods as the Cenozoic did (counting the old undivided Tertiary as one—the Quaternary has just gotten started, remember)—and it’s around three times as long. I don’t know how many of my readers, as children, delighted in the fact that the whole Cenozoic era—the Age of Mammals, as it was often called—could be dropped into the Cretaceous period with room to spare on either end, but I did. I decorated one of my school notebooks with a crisp little drawing of a scoreboard that read DINOSAURS 3, MAMMALS 1. No, nobody else got the joke.

In recent decades, things have been reshuffled a bit more.  The Precambrian basement has been explored in quite some detail, and what used to be deliciously named the Cryptozoic eon has now sadly been broken up into Proterozoic and Archean eons, and divided into periods to boot. We can let that pass, though, because it’s the other end of the time scale that concerns us. Since Cenozoic rock makes up so much of the surface—being the most recently laid down, after all—geologists soon broke up the Tertiary and Quaternary periods into six shorter units, called epochs: from first to last, Eocene, Oligocene, Miocene, Pliocene, Pleistocene, and Holocene. (These are Greek again, and mean “dawn recent, few recent, some recent, many recent, most recent,” and “entirely recent”—the reference is to how many living things in each epoch look like the ones running around today.) Later, the Eocene got chopped in two to yield the Paleocene (“old recent”) and Eocene. Yes, that “-cene” ending—also the first syllable in Cenozoic—is the second half of the label “Anthropocene,” the human-recent.

The thing to keep in mind is that an epoch is a big chunk of time. The six of them that are definitely over with at this point lasted an average of almost eleven million years apiece. (For purposes of comparison, eleven million years is around 2200 times the length of all recorded human history.) The exception is the Holocene, which is only 11,700 years old at present, or only about 0.1% of the average length of an epoch. It makes sense to call the Holocene an epoch, in other words, if it’s just beginning and still has millions of years to run.
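The arithmetic in that paragraph is easy to verify. Here is a short sketch using approximate epoch boundary ages from the modern geologic time scale; the specific dates are my addition, not figures the essay supplies.

```python
# Checking the epoch arithmetic above, with approximate boundary ages
# in millions of years ago (Ma) from the modern geologic time scale.
# ASSUMPTION: recorded history is taken as roughly 5,000 years.

epochs_ma = {              # (start, end) in Ma
    "Paleocene":   (66.0, 56.0),
    "Eocene":      (56.0, 33.9),
    "Oligocene":   (33.9, 23.0),
    "Miocene":     (23.0, 5.3),
    "Pliocene":    (5.3, 2.58),
    "Pleistocene": (2.58, 0.0117),
}

durations = [start - end for start, end in epochs_ma.values()]
average_myr = sum(durations) / len(durations)
print(average_myr)                                   # ~11 million years apiece

recorded_history_years = 5_000
print(average_myr * 1e6 / recorded_history_years)    # ~2200x recorded history

holocene_years = 11_700
print(holocene_years / (average_myr * 1e6))          # ~0.001, i.e. about 0.1%
```

The last line is the one worth noticing: the Holocene so far amounts to roughly a thousandth of an average epoch.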

If in fact the Holocene is over and the Anthropocene is under way, though, the Holocene isn’t an epoch at all in any meaningful sense. It’s the tag-end of the Pleistocene, or a transition between the Pleistocene and whichever epoch comes next, whether that be labeled Anthropocene or something else. You can find such transitions between every epoch and the next, every period and the next, and every era and the next. They’re usually quite distinctive, because these different geological divisions aren’t mere abstractions; the change from one to another is right there in the rock strata, usually well marked by sharp changes in a range of markers, including fossils. Some long-vanished species trickle out in the middle of an epoch, to be sure, but one of the things that often marks the end of an epoch, a period, or an era is that a whole mess of extinctions all happen in the transition from one unit of time to the next.

Let’s look at a few examples to sharpen that last point. The Pleistocene epoch was short as epochs go, only a little more than two and a half million years; it was a period of severe global cooling, which is why it’s better known as the ice age; and a number of its typical animals—mammoths, sabertooth cats, and mastodons in North America, giant ground sloths and glyptodons in South America, cave bears and woolly rhinoceroses in Europe, and so on—went extinct all at once during the short transition period at its end, when the climate warmed abruptly and a wave of invasive generalist predators (i.e., your ancestors and mine) spread through ecosystems that were already in extreme turmoil. That’s a typical end-of-epoch mess.

Periods are bigger than epochs, and the end of a period is accordingly a bigger deal. Let’s take the end of the Triassic as a good example. Back in the day, the whole Mesozoic era routinely got called “the Age of Reptiles,” but until the Triassic ended it was anybody’s guess whether the dinosaurs or the therapsid almost-mammals would end up at the top of the ecological heap. The end-Triassic extinction crisis put an end to the struggle by putting an end to most of the therapsids, along with a lot of other living things. The biggest of the early dinosaurs died off as well, but the smaller ones thrived, and their descendants went on to become the huge and remarkably successful critters of the Jurassic and Cretaceous. That’s a typical end-of-period mess.

Eras are bigger than periods, and they always end with whopping crises. The most recent example, of course, is the end of the Mesozoic era 65 million years ago. Forty per cent of the animal families on the planet, including species that had been around for hundreds of millions of years, died pretty much all at once. (The current theory, well backed up by the data, is that a good-sized comet slammed into what’s now the Yucatan peninsula, and the bulk of the dieoff was over in just a few years.) Was that the worst extinction crisis ever? Not a chance; the end of the Paleozoic 251 million years ago was slower but far more ghastly, with around ninety-five per cent of all species on the casualty list. Some paleontologists, without undue exaggeration, describe the end-Paleozoic crisis as the time Earth nearly died.

So the landscape of time revealed to us by geology shows intervals of relative stability—epochs, periods, and eras—broken up by short transition periods. If you go for a walk in country where the rock formations have been exposed, you can literally see the divisions in front of you: here’s a layer of one kind of rock a foot or two thick, laid down as sediment over millions of years and then compressed into stone over millions more; here’s a thin boundary layer, or simply an abrupt line of change, and above it there’s a different kind of rock, consisting of sediment laid down under different climatic and environmental conditions.

If you’ve got a decent geological laboratory handy and apply the usual tests to a couple of rock samples, one from the middle of an epoch and the other from a boundary layer, the differences are hard to miss. The boundary layer made when the Mesozoic ended and the Cenozoic began is a good example. The Cretaceous-Paleogene boundary layer is spiked with iridium, from space dust brought to earth by the comet; it’s full of carbon from fires that were kindled by the impact over many millions of square miles; and the one trace of life you’ll find is a great many fungal spores—dust blown into the upper atmosphere choked out the sun and left most plants on Earth dead and rotting, with results that rolled right up the food chain to the tyrannosaurs and their kin. You won’t find such anomalies clustering in the rock sample from the middle of the epoch; what you’ll find in nearly every case is evidence of gradual change and ordinary geological processes at work.

Now ask yourself this, dear reader: which of these most resembles the trace that human industrial civilization is in the process of leaving for the rock formations of the far future?

It’s crucial to remember that the drastic geological impacts that have inspired some scientists to make use of the term “Anthropocene” are self-terminating in at least two senses. On the one hand, those impacts are possible because, and only because, our species is busily burning through stores of fossil carbon that took half a billion years for natural processes to stash in the rocks, and ripping through equally finite stores of other nonrenewable resources, some of which took even longer to find their way into the deposits we mine so greedily. On the other hand, by destabilizing the climate and sending cascading disturbances in motion through a good-sized collection of other natural cycles, those impacts are in the process of wrecking the infrastructure that industrial society needs to go its merry way.

Confronted with the tightening vise between accelerating resource depletion and accelerating biosphere disruption, the vast majority of people in the industrial world seem content to insist that they can have their planet and eat it too. The conventional wisdom holds that someone, somewhere, will think of something that will allow us to replace Earth’s rapidly emptying fuel tanks and resource stocks, on the one hand, and stabilize its increasingly violent climatic and ecological cycles, on the other.  That blind faith remains welded in place even as decade after decade slips past, one supposed solution after another fails, and the stark warnings of forty years ago have become the front page news stories of today. Nothing is changing, except that the news just keeps getting worse.

That’s the simple reality of the predicament in which we find ourselves today. Our way of life, here in the world’s industrial nations, guarantees that in the fairly near future, no one anywhere on the planet will be able to live the way we do. As resources run out, alternatives fail, and the destructive impacts of climate change pile up, our ability to influence geological processes will go away, and leave us once more on the receiving end of natural cycles we can do little to change.

A hundred million years from now, as a result, if another intelligent species happens to be around on Earth at that time and takes an interest in geology, its members won’t find a nice thick stratum of rock marked with the signs of human activity, corresponding to an Anthropocene epoch. They’ll find a thin boundary layer, laid down over a few hundred years, and laced with exotic markers: decay products of radioactive isotopes splashed into the atmosphere by twentieth-century nuclear bomb testing and nuclear reactor meltdowns; chemical markers showing a steep upward jolt in atmospheric carbon dioxide; and scattered freely through the layer, micron-thick streaks of odd carbon compounds that are all that’s left of our vast production of plastic trash. That’s our geological legacy: a slightly odd transition layer a quarter of an inch thick, with the usual discontinuity between the species in the rock just below, many of whom vanish at the transition, and the species in the rock just above, who proliferate into empty ecological niches and evolve into new forms.

In place of the misleading label “Anthropocene,” then, I’d like to propose that we call the geological interval we’re now in the Pleistocene-Neocene transition. Neocene? That’s Greek for “new recent,” representing the “new normal” that will emerge when our idiotic maltreatment of the planet that keeps us all alive brings the “old normal” crashing down around our ears. We don’t call the first epoch after the comet impact 65 million years ago the “Cometocene,” so there’s no valid reason to use a label like “Anthropocene” for the epoch that will dawn when the current transition winds down. Industrial civilization’s giddy rise and impending fall are the trigger for the transition, and nothing more; the shape of the Neocene epoch will be determined not by us, but by the ordinary processes of planetary change and evolution.

Those processes have been responding to the end of the so-called Holocene—let’s rename it the Late Pleistocene, given how extremely short it turned out to be—in the usual manner.  Around the world, ice caps are melting, climate belts are shifting, acid-intolerant species in the ocean are being replaced by acid-tolerant ones, and generalist species of animals such as cats, coyotes, and feral pigs are spreading rapidly through increasingly chaotic ecosystems, occupying vacant ecological niches or elbowing less flexible competitors out of the way. By the time the transition winds down a few centuries from now, the species that have been able to adapt to new conditions and spread into new environments will be ready for evolutionary radiation; another half a million years or so, and the Neocene will be stocked with the first preliminary draft of its typical flora and fauna.

It’s entertaining, at least to me, to speculate about what critters will roam the desert sands of Kansas and Nebraska or stalk their prey in the forests of postglacial Greenland. To many of my readers, though, I suspect a more pressing question is whether a certain primate called Homo sapiens will be among the common fauna of the Neocene. I suspect so, though of course none of us can be sure—but giving up on the fantasy that’s embodied in the label “Anthropocene,” the delusion that what our civilization is doing just now is going to keep on long enough to fill a geological epoch, is a good step in the direction of our survival.