Wednesday, July 22, 2015

The Cimmerian Hypothesis, Part Two: A Landscape of Hallucinations

Last week’s post covered a great deal of ground—not surprising, really, for an essay that started from a quotation from a Weird Tales story about Conan the Barbarian—and it may be useful to recap the core argument here. Civilizations—meaning here human societies that concentrate power, wealth, and population in urban centers—have a distinctive historical trajectory of rise and fall that isn’t shared by societies that lack urban centers. There are plenty of good reasons why this should be so, from the ecological costs of urbanization to the buildup of maintenance costs that drives catabolic collapse, but there’s also a cognitive dimension.

Look over the histories of fallen civilizations, and far more often than not, societies don’t have to be dragged down the slope of decline and fall. Rather, they go that way at a run, convinced that the road to ruin must inevitably lead them to heaven on earth. Arnold Toynbee, whose voluminous study of the rise and fall of civilizations has been one of the main sources for this blog since its inception, wrote at length about the way that the elite classes of falling civilizations lose the capacity to come up with new responses for new situations, or even to learn from their mistakes; thus they keep on trying to use the same failed policies over and over again until the whole system crashes to ruin. That’s an important factor, no question, but it’s not just the elites who seem to lose track of the real world as civilizations go sliding down toward history’s compost heap, it’s the masses as well.

Those of my readers who want to see a fine example of this sort of blindness to the obvious need only check the latest headlines. Within the next decade or so, for example, the entire southern half of Florida will become unfit for human habitation due to rising sea levels, driven by our dumping of greenhouse gases into an already overloaded atmosphere. Low-lying neighborhoods in Miami already flood with sea water whenever a high tide and a strong onshore wind hit at the same time; one more foot of sea level rise and salt water will pour over barriers into the remaining freshwater sources, turning southern Florida into a vast brackish swamp and forcing the evacuation of most of the millions who live there.

That’s only the most dramatic of a constellation of climatic catastrophes that are already tightening their grip on much of the United States. Out west, the rain forests of western Washington are burning in the wake of years of increasingly severe drought, California’s vast agricultural acreage is reverting to desert, and the entire city of Las Vegas will probably be out of water—as in, you turn on the tap and nothing but dust comes out—in less than a decade. As waterfalls cascade down the seaward faces of Antarctic and Greenland glaciers, leaking methane blows craters in the Siberian permafrost, and sea level rises at rates considerably faster than the worst case scenarios scientists were considering a few years ago, these threats are hardly abstract issues; is anyone in America taking them seriously enough to, say, take any concrete steps to stop using the atmosphere as a gaseous sewer, starting with their own personal behavior? Surely you jest.

No, the Republicans are still out there insisting at the top of their lungs that any scientific discovery that threatens their rich friends’ profits must be fraudulent, the Democrats are still out there proclaiming just as loudly that there must be some way to deal with anthropogenic climate change that won’t cost them their frequent-flyer miles, and nearly everyone outside the political sphere is making whatever noises they think will allow them to keep on pursuing exactly those lifestyle choices that are bringing on planetary catastrophe. Every possible excuse to insist that what’s already happening won’t happen gets instantly pounced on as one more justification for inertia—the claim currently being splashed around the media that the Sun might go through a cycle of slight cooling in the decades ahead is the latest example. (For the record, even if we get a grand solar minimum, its effects will be canceled out in short order by the impact of ongoing atmospheric pollution.)

Business as usual is very nearly the only option anybody is willing to discuss, even though the long-predicted climate catastrophes are already happening and the days of business as usual in any form are obviously numbered. The one alternative that gets air time, of course, is the popular fantasy of instant planetary dieoff, which gets plenty of attention because it’s just as effective an excuse for inaction as faith in business as usual. What next to nobody wants to talk about is the future that’s actually arriving exactly as predicted: a future in which low-lying coastal regions around the country and the world have to be abandoned to the rising seas, while the Southwest and large portions of the mountain west become more inhospitable than the eastern Sahara or Arabia’s Empty Quarter.

If the ice melt keeps accelerating at its present pace, we could be only a few decades from the point at which it’s Manhattan Island’s turn to be abandoned, because everything below ground level is permanently flooded with seawater and every winter storm sends waves rolling right across the island and flings driftwood logs against second-story windows. A few decades more, and waves will roll over the low-lying neighborhoods of Houston, Boston, Seattle, and Washington DC, while the ruined buildings that used to be New Orleans rise out of the still waters of a brackish estuary and the ruined buildings that used to be Las Vegas are half buried by the drifting sand. Take a moment to consider the economic consequences of that much infrastructure loss, that much destruction of built capital, that many people who somehow have to be evacuated and resettled, and think about what kind of body blow that will deliver to an industrial society that is already in bad shape for other reasons.

None of this had to happen. Half a century ago, policy makers and the public alike had already been presented with a tolerably clear outline of what was going to happen if we proceeded along the trajectory we were on, and those same warnings have been repeated with increasing force year by year, as the evidence to support them has mounted up implacably—and yet nearly all of us nodded and smiled and kept going. Nor has this changed in the least as the long-predicted catastrophes have begun to show up right on schedule. Quite the contrary: faced with a rising spiral of massive crises, people across the industrial world are, with majestic consistency, doing exactly those things that are guaranteed to make those crises worse.

So the question that needs to be asked, and if possible answered, is why civilizations—human societies that concentrate population, power, and wealth in urban centers—so reliably lose the capacity to learn from their mistakes and recognize that a failed policy has in fact failed.  It’s also worth asking why they so reliably do this within a finite and predictable timespan: civilizations last on average around a millennium before they crash into a dark age, while uncivilized societies routinely go on for many times that period. Doubtless any number of factors drive civilizations to their messy ends, but I’d like to suggest a factor that, to my knowledge, hasn’t been discussed in this context before.

Let’s start with what may well seem like an irrelevancy. There’s been a great deal of discussion down through the years in environmental circles about the way that the survival and health of the human body depends on inputs from nonhuman nature. There’s been a much more modest amount of talk about the human psychological and emotional needs that can only be met through interaction with natural systems. One question I’ve never seen discussed, though, is whether the human intellect has needs that are only fulfilled by a natural environment.

As I consider that question, one obvious answer comes to mind: negative feedback.

The human intellect is the part of each of us that thinks, that tries to make sense of the universe of our experience. It does this by creating models. By “models” I don’t just mean those tightly formalized and quantified models we call scientific theories; a poem is also a model of part of the universe of human experience, so is a myth, so is a painting, and so is a vague hunch about how something will work out. When a twelve-year-old girl pulls the petals off a daisy while saying “he loves me, he loves me not,” she’s using a randomization technique to decide between two models of one small but, to her, very important portion of the universe, the emotional state of whatever boy she has in mind.
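For what it’s worth, that randomization technique can be spelled out in a few lines of code. This is only a toy sketch, and the petal count is a made-up illustration rather than anything botanical.

```python
import random

# The daisy game as a randomizer: she alternates "he loves me" / "he loves
# me not" with each petal, so the parity of the petal count decides between
# the two competing models. The petal count here is a made-up illustration.
petals = random.randint(15, 40)
model = "he loves me" if petals % 2 == 1 else "he loves me not"
print(f"{petals} petals: {model}")
```

The point is simply that a source of randomness, not evidence, is doing the deciding.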

With any kind of model, it’s critical to remember Alfred Korzybski’s famous rule: “the map is not the territory.” A model, to put the same point another way, is a representation; it represents the way some part of the universe looks when viewed from the perspective of one or more members of our species of social primates, using the idiosyncratic and profoundly limited set of sensory equipment, neural processes, and cognitive frameworks we got handed by our evolutionary heritage. Painful though this may be to our collective egotism, it’s not unfair to say that human mental models are what you get when you take the universe and dumb it down to the point that our minds can more or less grasp it.

What keeps our models from becoming completely dysfunctional is the negative feedback we get from the universe. For the benefit of readers who didn’t get introduced to systems theory, I should probably take a moment to explain negative feedback. The classic example is the common household thermostat, which senses the temperature of the air inside the house and activates a switch accordingly. If the air temperature is below a certain threshold, the thermostat turns the heat on and warms things up; if the air temperature rises above a different, slightly higher threshold, the thermostat turns the heat off and lets the house cool down.
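For readers who like to see such things written out, here is a minimal sketch of that feedback loop in code; the thresholds and the crude heat-gain-and-loss model are illustrative assumptions, not the behavior of any particular device.

```python
# Minimal sketch of a thermostat's negative feedback loop. The thresholds
# and the toy heat-gain/heat-loss model are illustrative assumptions.

TURN_ON_BELOW = 66.0   # "too cold" threshold (degrees F)
TURN_OFF_ABOVE = 70.0  # "too hot" threshold, slightly higher

def update_furnace(temperature, furnace_on):
    """Decide whether the furnace should run, given the current reading."""
    if temperature < TURN_ON_BELOW:
        return True        # too cold: turn the heat on
    if temperature > TURN_OFF_ABOVE:
        return False       # warm enough: turn the heat off
    return furnace_on      # between the thresholds: leave things alone

# A toy simulation: the house leaks heat to the outdoors and gains it from
# the furnace; the feedback keeps the temperature inside a narrow band.
temperature, furnace_on = 72.0, False
for hour in range(24):
    furnace_on = update_furnace(temperature, furnace_on)
    temperature += (1.5 if furnace_on else 0.0) - 0.8  # gain minus loss
    print(f"hour {hour:2d}: {temperature:5.1f} F, furnace {'on' if furnace_on else 'off'}")
```

The gap between the two thresholds is what keeps the furnace from flipping on and off every few minutes; the feedback is “negative” because it corrects deviations in either direction instead of amplifying them.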

In a sense, a thermostat embodies a very simple model of one very specific part of the universe, the temperature inside the house. Like all models, this one includes a set of implicit definitions and a set of value judgments. The definitions are the two thresholds, the one that turns the furnace on and the one that turns it off, and the value judgments label temperatures below the first threshold “too cold” and those above the second “too hot.” Like every human model, the thermostat model is unabashedly anthropocentric—“too cold” by the thermostat’s standard would be uncomfortably warm for a polar bear, for example—and selects out certain factors of interest to human beings from a galaxy of other things we don’t happen to want to take into consideration.

The models used by the human intellect to make sense of the universe are usually less simple than the one that guides a thermostat—there are unfortunately exceptions—but they work according to the same principle. They contain definitions, which may be implicit or explicit: the girl plucking petals from the daisy may not have an explicit definition of love in mind when she says “he loves me,” but some set of beliefs and expectations about what those words imply underlies the model. They also contain value judgments: if she’s attracted to the boy in question, “he loves me” has a positive value and “he loves me not” has a negative one.

Notice, though, that there’s a further dimension to the model, which is its interaction with the observed behavior of the thing it’s supposed to model. Plucking petals from a daisy, all things considered, is not a very good predictor of the emotional states of twelve-year-old boys; predictions made on the basis of that method are very often disproved by other sources of evidence, which is why few girls much older than twelve rely on it as an information source. Modern western science has formalized and quantified that sort of reality testing, but it’s something that most people do at least occasionally. It’s when they stop doing so that we get the inability to recognize failure that helps to drive, among many other things, the fall of civilizations.

Individual facets of experienced reality thus provide negative feedback to individual models. The whole structure of experienced reality, though, is capable of providing negative feedback on another level—when it challenges the accuracy of the entire mental process of modeling.

Nature is very good at providing negative feedback of that kind. Here’s a human conceptual model that draws a strict line between mammals, on the one hand, and birds and reptiles, on the other. Not much more than a century ago, it was as precise as any division in science: mammals have fur and don’t lay eggs, reptiles and birds don’t have fur and do lay eggs. Then some Australian settler met a platypus, which has fur and lays eggs. Scientists back in Britain flatly refused to take it seriously until some live platypuses finally made it there by ship. Plenty of platypus egg was splashed across plenty of distinguished scientific faces, and definitions had to be changed to make room for another category of mammals and the evolutionary history necessary to explain it.

Here’s another human conceptual model, the one that divides trees into distinct species. Most trees in most temperate woodlands, though, actually have a mix of genetics from closely related species. There are few red oaks; what you have instead are mostly-red, partly-red, and slightly-red oaks. Go from the northern to the southern end of a species’ distribution, or from wet to dry regions, and the variations within the species are quite often more extreme than those that separate trees that have been assigned to different species. Here’s still another human conceptual model, the one that divides trees from shrubs—plenty of species can grow either way, and the list goes on.

The human mind likes straight lines, definite boundaries, precise verbal definitions. Nature doesn’t. People who spend most of their time dealing with undomesticated natural phenomena, accordingly, have to get used to the fact that nature is under no obligation to make the kind of sense the human mind prefers. I’d suggest that this is why so many of the cultures our society calls “primitive”—that is, those that have simple material technologies and interact directly with nature much of the time—so often rely on nonlogical methods of thought: those our culture labels “mythological,” “magical,” or—I love this term—“prescientific.” (That the “prescientific” will almost certainly turn out to be the postscientific as well is one of the lessons of history that modern industrial society is trying its level best to ignore.) Nature as we experience it isn’t simple, neat, linear, and logical, and so it makes sense that the ways of thinking best suited to dealing with nature directly aren’t simple, neat, linear, and logical either.

 With this in mind, let’s return to the distinction discussed in last week’s post. I noted there that a city is a human settlement from which the direct, unmediated presence of nature has been removed as completely as the available technology permits. What replaces natural phenomena in an urban setting, though, is as important as what isn’t allowed there. Nearly everything that surrounds you in a city was put there deliberately by human beings; it is the product of conscious human thinking, and it follows the habits of human thought just outlined. Compare a walk down a city street to a walk through a forest or a shortgrass prairie: in the city street, much more of what you see is simple, neat, linear, and logical. A city is an environment reshaped to reflect the habits and preferences of the human mind.

I suspect there may be a straightforwardly neurological factor in all this. The human brain, so much larger relative to body weight than the brains of most of our primate relatives, evolved because having a larger brain provided some survival advantage to those hominins who had it, in competition with those who didn’t. It’s probably a safe assumption that processing information inputs from the natural world played a very large role in these advantages, and this would imply, in turn, that the human brain is primarily adapted for perceiving things in natural environments—not, say, for building cities, creating technologies, and making the other common products of civilization.

Thus some significant part of the brain has to be redirected away from the things that it’s adapted to do, in order to make civilizations possible. I’d like to propose that the simplified, rationalized, radically information-poor environment of the city plays a crucial role in this. (Information-poor? Of course; the amount of information that comes cascading through the five keen senses of an alert hunter-gatherer standing in an African forest is vastly greater than what a city-dweller gets from the blank walls and the monotonous sounds and scents of an urban environment.) Children raised in an environment that lacks the constant cascade of information natural environments provide, and taught to redirect their mental powers toward such other activities as reading and mathematics, grow up with cognitive habits and, in all probability, neurological arrangements focused toward the activities of civilization and away from the things to which the human brain is adapted by evolution.

One source of supporting evidence for this admittedly speculative proposal is the worldwide insistence on the part of city-dwellers that people who live in isolated rural communities, far outside the cultural ambit of urban life, are just plain stupid. What that means in practice, of course, is that people from isolated rural communities aren’t used to using their brains for the particular purposes that city people value. These allegedly “stupid” countryfolk are by and large extraordinarily adept at the skills they need to survive and thrive in their own environments. They may be able to listen to the wind and know exactly where on the far side of the hill a deer waits to be shot for dinner, glance at a stream and tell which riffle the trout have chosen for a hiding place, watch the clouds pile up and read from them how many days they’ve got to get the hay in before the rains come and rot it in the fields—all of which tasks require sophisticated information processing, the kind of processing that human brains evolved doing.

Notice, though, how the urban environment relates to the human habit of mental modeling. Everything in a city was a mental model before it became a building, a street, an item of furniture, or what have you. Chairs look like chairs, houses like houses, and so on; it’s so rare for humanmade items to break out of the habitual models of our species and the particular culture that built them that when this happens, it’s a source of endless comment. Where a natural environment constantly challenges human conceptual models, an urban environment reinforces them, producing a feedback loop that’s probably responsible for most of the achievements of civilization.

I suggest, though, that the same feedback loop may also play a very large role in the self-destruction of civilizations. People raised in urban environments come to treat their mental models as realities, more real than the often-unruly facts on the ground, because everything they encounter in their immediate environments reinforces those models. As the models become more elaborate and the cities become more completely insulated from the complexities of nature, the inhabitants of a civilization move deeper and deeper into a landscape of hallucinations—not least because as many of those hallucinations get built in brick and stone, or glass and steel, as the available technology permits. As a civilization approaches its end, the divergence between the world as it exists and the mental models that define the world for the civilization’s inmates becomes total, and its decisions and actions become lethally detached from reality—with consequences that we’ll discuss in next week’s post.

Wednesday, July 15, 2015

The Cimmerian Hypothesis, Part One: Civilization and Barbarism

One of the oddities of the writer’s life is the utter unpredictability of inspiration. There are times when I sit down at the keyboard knowing what I have to write, and plod my way through the day’s allotment of prose in much the same spirit that a gardener turns the earth in the beds of a big garden; there are times when a project sits there grumbling to itself and has to be coaxed or prodded into taking shape on the page; but there are also times when something grabs hold of me, drags me kicking and screaming to the keyboard, and holds me there with a squamous paw clamped on my shoulder until I’ve finished whatever it is that I’ve suddenly found out that I have to write.

Over the last two months, I’ve had that last experience on a considerably larger scale than usual; to be precise, I’ve just completed the first draft of a 70,000-word novel in eight weeks. Those of my readers and correspondents who’ve been wondering why I’ve been slower than usual to respond to them now know the reason. The working title is Moon Path to Innsmouth; it deals, in the sidelong way for which fiction is so well suited, with quite a number of the issues discussed on this blog; I’m pleased to say that I’ve lined up a publisher, and so in due time the novel will be available to delight the rugose hearts of the Great Old Ones and their eldritch minions everywhere.

None of that would be relevant to the theme of the current series of posts on The Archdruid Report, except that getting the thing written required quite a bit of reference to the weird tales of an earlier era—the writings of H.P. Lovecraft, of course, but also those of Clark Ashton Smith and Robert E. Howard, who both contributed mightily to the fictive mythos that took its name from Lovecraft’s squid-faced devil-god Cthulhu. One Howard story leads to another—or at least it does if you spent your impressionable youth stewing your imagination in a bubbling cauldron of classic fantasy fiction, as I did—and that’s how it happened that I ended up revisiting the final lines of “Beyond the Black River,” part of the saga of Conan of Cimmeria, Howard’s iconic hero:

“‘Barbarism is the natural state of mankind,’ the borderer said, still staring somberly at the Cimmerian. ‘Civilization is unnatural. It is a whim of circumstance. And barbarism must always ultimately triumph.’”

It’s easy to take that as nothing more than a bit of bluster meant to add color to an adventure story—easy but, I’d suggest, inaccurate. Science fiction has made much of its claim to be a “literature of ideas,” but a strong case can be made that the weird tale as developed by Lovecraft, Smith, Howard, and their peers has at least as much claim to the same label, and the ideas that feature in a classic weird tale are often a good deal more challenging than those that are the stock in trade of most science fiction: “gee, what happens if I extrapolate this technological trend a little further?” and the like. The authors who published with Weird Tales back in the day, in particular, liked to pose edgy questions about the way that the posturings of our species and its contemporary cultures appeared in the cold light of a cosmos that’s wholly uninterested in our overblown opinion of ourselves.

Thus I think it’s worth giving Conan and his fellow barbarians their due, and treating what we may as well call the Cimmerian hypothesis as a serious proposal about the underlying structure of human history. Let’s start with some basics. What is civilization? What is barbarism? What exactly does it mean to describe one state of human society as natural and another unnatural, and how does that relate to the repeated triumph of barbarism at the end of every civilization?

The word “civilization” has a galaxy of meanings, most of them irrelevant to the present purpose. We can take the original meaning of the word—in late Latin, civilisatio—as a workable starting point; it means “having or establishing settled communities.” A people known to the Romans was civilized if its members lived in civitates, cities or towns. We can generalize this further, and say that a civilization is a form of society in which people live in artificial environments. Is there more to civilization than that? Of course there is, but as I hope to show, most of it unfolds from the distinction just traced out.

A city, after all, is a human environment from which the ordinary workings of nature have been excluded, to as great an extent as the available technology permits. When you go outdoors in a city,  nearly all the things you encounter have been put there by human beings; even the trees are where they are because someone decided to put them there, not by way of the normal processes by which trees reproduce their kind and disperse their seeds. Those natural phenomena that do manage to elbow their way into an urban environment—tropical storms, rats, and the like—are interlopers, and treated as such. The gradient between urban and rural settlements can be measured precisely by what fraction of the things that residents encounter is put there by human action, as compared to the fraction that was put there by ordinary natural processes.

What is barbarism? The root meaning here is a good deal less helpful. The Greek word βαρβαροι, barbaroi, originally meant “people who say ‘bar bar bar’” instead of talking intelligibly in Greek. In Roman times that usage got bent around to mean “people outside the Empire,” and thus in due time to “tribes who are too savage to speak Latin, live in cities, or give up without a fight when we decide to steal their land.” Fast forward a century or two, and that definition morphed uncomfortably into “tribes who are too savage to speak Latin, live in cities, or stay peacefully on their side of the border” —enter Alaric’s Visigoths, Genseric’s Vandals, and the ebullient multiethnic horde that marched westwards under the banners of Attila the Hun.

This is also where Conan enters the picture. In crafting his fictional Hyborian Age, which was vaguely located in time between the sinking of Atlantis and the beginning of recorded history, Howard borrowed freely from various corners of the past, but the Roman experience was an important ingredient—the story cited above, framed by a struggle between the kingdom of Aquilonia and the wild Pictish tribes beyond the Black River, drew noticeably on Roman Britain, though it also took elements from the Old West and elsewhere. The entire concept of a barbarian hero swaggering his way south into the lands of civilization, which Howard introduced to fantasy fiction (and which has been so freely and ineptly plagiarized since his time), has its roots in the late Roman and post-Roman experience, a time when a great many enterprising warriors did just that, and when some, like Conan, became kings.

What sets barbarian societies apart from civilized ones is precisely that a much smaller fraction of the environment barbarians encounter results from human action. When you go outdoors in Cimmeria—if you’re not outdoors to start with, which you probably are—nearly everything you encounter has been put there by nature. There are no towns of any size, just scattered clusters of dwellings in the midst of a mostly unaltered environment. Where your Aquilonian town dweller who steps outside may have to look hard to see anything that was put there by nature, your Cimmerian who shoulders his battle-ax and goes for a stroll may have to look hard to see anything that was put there by human beings.

What’s more, there’s a difference in what we might usefully call the transparency of human constructions. In Cimmeria, if you do manage to get in out of the weather, the stones and timbers of the hovel where you’ve taken shelter are recognizable lumps of rock and pieces of tree; your hosts smell like the pheromone-laden social primates they are; and when their barbarian generosity inspires them to serve you a feast, they send someone out to shoot a deer, hack it into gobbets, and cook the result in some relatively simple manner that leaves no doubt in anyone’s mind that you’re all chewing on parts of a dead animal. Follow Conan’s route down into the cities of Aquilonia, and you’re in a different world, where paint and plaster, soap and perfume, and fancy cookery, among many other things, obscure nature’s contributions to the human world.

So that’s our first set of distinctions. What makes human societies natural or unnatural? It’s all too easy  to sink into a festering swamp of unsubstantiated presuppositions here, since people in every human society think of their own ways of doing things as natural and normal, and everyone else’s ways of doing the same things as unnatural and abnormal. Worse, there’s the pervasive bad habit in industrial Western cultures of lumping all non-Western cultures with relatively simple technologies together as “primitive man”—as though there’s only one of him, sitting there in a feathered war bonnet and a lionskin kilt playing the didgeridoo—in order to flatten out human history into an imaginary straight line of progress that leads from the caves, through us, to the stars.

In point of anthropological fact, the notion of “primitive man” as an allegedly unspoiled child of nature is pure hokum, and generally racist hokum at that. “Primitive” cultures—that is to say, human societies that rely on relatively simple technological suites—differ from one another just as dramatically as they differ from modern Western industrial societies; nor do simpler technological suites correlate with simpler cultural forms. Traditional Australian aboriginal societies, which have extremely simple material technologies, are considered by many anthropologists to have among the most intricate cultures known anywhere, embracing stunningly elaborate systems of knowledge in which cosmology, myth, environmental knowledge, social custom, and scores of other fields normally kept separate in our society are woven together into dizzyingly complex tapestries of knowledge.

What’s more, those tapestries of knowledge have changed and evolved over time. The hokum that underlies that label “primitive man” presupposes, among other things, that societies that use relatively simple technological suites have all been stuck in some kind of time warp since the Neolithic—think of the common habit of speech that claims that hunter-gatherer tribes are “still in the Stone Age” and so forth. Back of that habit of speech is the industrial world’s irrational conviction that all human history is an inevitable march of progress that leads straight to our kind of society, technology, and so forth. That other human societies might evolve in different directions and find their own wholly valid ways of making a home in the universe is anathema to most people in the industrial world these days—even though all the evidence suggests that this way of looking at the history of human culture makes far more sense of the data than does the fantasy of inevitable linear progress toward us.

Thus traditional tribal societies are no more natural than civilizations are, in one important sense of the word “natural;” that is, tribal societies are as complex, abstract, unique, and historically contingent as civilizations are. There is, however, one kind of human society that doesn’t share these characteristics—a kind of society that tends to be intellectually and culturally as well as technologically simpler than most, and that recurs in astonishingly similar forms around the world and across time. We’ve talked about it at quite some length in this blog; it’s the distinctive dark age society that emerges in the ruins of every fallen civilization after the barbarian war leaders settle down to become petty kings, the survivors of the civilization’s once-vast population get to work eking out a bare subsistence from the depleted topsoil, and most of the heritage of the wrecked past goes into history’s dumpster.

If there’s such a thing as a natural human society, the basic dark age society is probably it, since it emerges when the complex, abstract, unique, and historically contingent cultures of the former civilization and its hostile neighbors have both imploded, and the survivors of the collapse have to put something together in a hurry with nothing but raw human relationships and the constraints of the natural world to guide them. Of course once things settle down the new society begins moving off in its own complex, abstract, unique, and historically contingent direction; the dark age societies of post-Mycenean Greece, post-Roman Britain, post-Heian Japan, and their many equivalents have massive similarities, but the new societies that emerged from those cauldrons of cultural rebirth had much less in common with one another than their forbears did.

In Howard’s fictive history, the era of Conan came well before the collapse of Hyborian civilization; he was not himself a dark age warlord, though he doubtless would have done well in that setting. The Pictish tribes whose activities on the Aquilonian frontier inspired the quotation cited earlier in this post weren’t a dark age society, either, though if they’d actually existed, they’d have been well along the arc of transformation that turns the hostile neighbors of a declining civilization into the breeding ground of the warbands that show up on cue to finish things off. The Picts of Howard’s tale, though, were certainly barbarians—that is, they didn’t speak Aquilonian, live in cities, or stay peaceably on their side of the Black River—and they were still around long after the Hyborian civilizations were gone.

That’s one of the details Howard borrowed from history. By and large, human societies that don’t have urban centers tend to last much longer than those that do. In particular, human societies that don’t have urban centers don’t tend to go through the distinctive cycle of decline and fall ending in a dark age that urbanized societies undergo so predictably. There are plenty of factors that might plausibly drive this difference, many of which have been discussed here and elsewhere, but I’ve come to suspect something subtler may be at work here as well. As we’ve seen, a core difference between civilizations and other human societies is that people in civilizations tend to cut themselves off from the immediate experience of nature to a much greater extent than the uncivilized do. Does this help explain why civilizations crash and burn so reliably, leaving the barbarians to play drinking games with mead while sitting unsteadily on the smoldering ruins?

As it happens, I think it does.

As we’ve discussed at length in the last three weekly posts here, human intelligence is not the sort of protean, world-transforming superpower with limitless potential it’s been labeled by the more overenthusiastic partisans of human exceptionalism. Rather, it’s an interesting capacity possessed by one species of social primates, and quite possibly shared by some other animal species as well. Like every other biological capacity, it evolved through a process of adaptation to the environment—not, please note, to some abstract concept of the environment, but to the specific stimuli and responses that a social primate gets from the African savanna and its inhabitants, including but not limited to other social primates of the same species. It’s indicative that when our species originally spread out of Africa, it seems to have settled first in those parts of the Old World that had roughly savanna-like ecosystems, and only later worked out the bugs of living in such radically different environments as boreal forests, tropical jungles, and the like.

The interplay between the human brain and the natural environment is considerably more significant than has often been realized. For the last forty years or so, a scholarly discipline called ecopsychology has explored some of the ways that interactions with nature shape the human mind. More recently, in response to the frantic attempts of American parents to isolate their children from a galaxy of largely imaginary risks, psychologists have begun to talk about “nature deficit disorder,” the set of emotional and intellectual dysfunctions that show up reliably in children who have been deprived of the normal human experience of growing up in intimate contact with the natural world.

All of this should have been obvious from first principles. Studies of human and animal behavior alike have shown repeatedly that psychological health depends on receiving certain highly specific stimuli at certain stages in the maturation process. The experiments by Harry Harlow, who showed that monkeys raised with a mother-substitute wrapped in terrycloth grew up more or less normal, while those raised with a bare metal mother-substitute turned out psychotic even when all their other needs were met, are among the more famous of these, but there have been many more, and many of them can be shown to affect human capacities in direct and demonstrable ways. Children learn language, for example, only if they’re exposed to speech during a certain age window; lacking the right stimulus at the right time, the capacity to use language shuts down and apparently can’t be restarted.

In this latter example, exposure to speech is what’s known as a triggering stimulus—something from outside the organism that kickstarts a process that’s already hardwired into the organism, but will not get under way until and unless the trigger appears. There are other kinds of stimuli that play different roles in human and animal development. The maturation of the human mind, in fact, might best be seen as a process in which inputs from the environment play a galaxy of roles, some of them of critical importance. What happens when the natural inputs that were around when human intelligence evolved get shut out of the experiences of maturing humans, and replaced by a very different set of inputs put there by human beings? We’ll discuss that next week, in the second part of this post.

Wednesday, July 08, 2015

Darwin's Casino

Our age has no shortage of curious features, but for me, at least, one of the oddest is the way that so many people these days don’t seem to be able to think through the consequences of their own beliefs. Pick an ideology, any ideology, straight across the spectrum from the most devoutly religious to the most stridently secular, and you can count on finding a bumper crop of people who claim to hold that set of beliefs, and recite them with all the uncomprehending enthusiasm of a well-trained mynah bird, but haven’t noticed that those beliefs contradict other beliefs they claim to hold with equal devotion.

I’m not talking here about ordinary hypocrisy. The hypocrites we have with us always; our species being what it is, plenty of people have always seen the advantages of saying one thing and doing another. No, what I have in mind is saying one thing and saying another, without ever noticing that if one of those statements is true, the other by definition has to be false. My readers may recall the way that cowboy-hatted heavies in old Westerns used to say to each other, “This town ain’t big enough for the two of us;” there are plenty of ideas and beliefs that are like that, but too many modern minds resemble nothing so much as an OK Corral where the gunfight never happens.

An example that I’ve satirized in an earlier post here is the bizarre way that so many people on the rightward end of the US political landscape these days claim to be, at one and the same time, devout Christians and fervid adherents of Ayn Rand’s violently atheist and anti-Christian ideology.  The difficulty here, of course, is that Jesus tells his followers to humble themselves before God and help the poor, while Rand told hers to hate God, wallow in fantasies of their own superiority, and kick the poor into the nearest available gutter.  There’s quite precisely no common ground between the two belief systems, and yet self-proclaimed Christians who spout Rand’s turgid drivel at every opportunity make up a significant fraction of the Republican Party just now.

Still, it’s only fair to point out that this sort of weird disconnect is far from unique to religious people, or for that matter to Republicans. One of the places it crops up most often nowadays is the remarkable unwillingness of people who say they accept Darwin’s theory of evolution to think through what that theory implies about the limits of human intelligence.

If Darwin’s right, as I’ve had occasion to point out here several times already, human intelligence isn’t the world-shaking superpower our collective egotism likes to suppose. It’s simply a somewhat more sophisticated version of the sort of mental activity found in many other animals. The thing that supposedly sets it apart from all other forms of mentation, the use of abstract language, isn’t all that unique; several species of cetaceans and an assortment of the brainier birds communicate with their kin using vocalizations that show all the signs of being languages in the full sense of the word—that is, structured patterns of abstract vocal signs that take their meaning from convention rather than instinct.

What differentiates human beings from bottlenosed porpoises, African gray parrots, and other talking species is the mere fact that in our case, language and abstract thinking happened to evolve in a species that also had the sort of grasping limbs, fine motor control, and instinctive drive to pick things up and fiddle with them, that primates have and most other animals don’t.  There’s no reason why sentience should be associated with the sort of neurological bias that leads to manipulating the environment, and thence to technology; as far as the evidence goes, we just happen to be the one species in Darwin’s evolutionary casino that got dealt both those cards. For all we know, bottlenosed porpoises have a rich philosophical, scientific, and literary culture dating back twenty million years; they don’t have hands, though, so they don’t have technology. All things considered, this may be an advantage, since it means they won’t have had to face the kind of self-induced disasters our species is so busy preparing for itself due to the inveterate primate tendency to, ahem, monkey around with things.

I’ve long suspected that one of the reasons why human beings haven’t yet figured out how to carry on a conversation with bottlenosed porpoises, African gray parrots, et al. in their own language is quite simply that we’re terrified of what they might say to us—not least because it’s entirely possible that they’d be right. Another reason for the lack of communication, though, leads straight back to the limits of human intelligence. If our minds have emerged out of the ordinary processes of evolution, what we’ve got between our ears is simply an unusually complex variation on the standard social primate brain, adapted over millions of years to the mental tasks that are important to social primates—that is, staying fed, attracting mates, competing for status, and staying out of the jaws of hungry leopards.

Notice that “discovering the objective truth about the nature of the universe” isn’t part of this list, and if Darwin’s theory of evolution is correct—as I believe it to be—there’s no conceivable way it could be. The mental activities of social primates, and all other living things, have to take the rest of the world into account in certain limited ways; our perceptions of food, mates, rivals, and leopards, for example, have to correspond to the equivalent factors in the environment; but it’s actually an advantage to any organism to screen out anything that doesn’t relate to immediate benefits or threats, so that adequate attention can be paid to the things that matter. We perceive colors, which most mammals don’t, because primates need to be able to judge the ripeness of fruit from a distance; we don’t perceive the polarization of light, as bees do, because primates don’t need to navigate by the angle of the sun.

What’s more, the basic mental categories we use to make sense of the tiny fraction of our surroundings that we perceive are just as much a product of our primate ancestry as the senses we have and don’t have. That includes the basic structures of human language, which most research suggests are inborn in our species, as well as such derivations from language as logic and the relation between cause and effect—this latter simply takes the grammatical relation between subjects, verbs, and objects, and projects it onto the nonlinguistic world. In the real world, every phenomenon is part of an ongoing cascade of interactions so wildly hypercomplex that labels like “cause” and “effect” are hopelessly simplistic; what’s more, a great many things—for example, the decay of radioactive nuclei—just up and happen randomly without being triggered by any specific cause at all. We simplify all this into cause and effect because just enough things appear to work that way to make the habit useful to us.

Another thing that has much more to do with our cognitive apparatus than with the world we perceive is number. Does one apple plus one apple equal two apples? In our number-using minds, yes; in the real world, it depends entirely on the size and condition of the apples in question. We convert qualities into quantities because quantities are easier for us to think with.  That was one of the core discoveries that kickstarted the scientific revolution; when Galileo became the first human being in history to think of speed as a quantity, he made it possible for everyone after him to get their minds around the concept of velocity in a way that people before him had never quite been able to do.
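To spell that shift out with a made-up example: treating speed as a quantity means defining it as distance covered divided by elapsed time, so a rider who covers 12 miles in 3 hours is moving at 12 divided by 3, or 4 miles per hour, a number that can then be compared, averaged, and calculated with like any other. The figures are arbitrary; the point is that a quality (“fast,” “slow”) has been traded for a quantity.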

In physics, converting qualities to quantities works very, very well. In some other sciences, the same thing is true, though the further you go away from the exquisite simplicity of masses in motion, the harder it is to translate everything that matters into quantitative terms, and the more inevitably gets left out of the resulting theories. By and large, the more complex the phenomena under discussion, the less useful quantitative models are. Not coincidentally, the more complex the phenomena under discussion, the harder it is to control all the variables in play—the essential step in using the scientific method—and the more tentative, fragile, and dubious the models that result.

So when we try to figure out what bottlenosed porpoises are saying to each other, we’re facing what’s probably an insuperable barrier. All our notions of language are social-primate notions, shaped by the peculiar mix of neurology and hardwired psychology that proved most useful to bipedal apes on the East African savannah over the last few million years. The structures that shape porpoise speech, in turn, are social-cetacean notions, shaped by the utterly different mix of neurology and hardwired psychology that’s most useful if you happen to be a bottlenosed porpoise or one of its ancestors.

Mind you, porpoises and humans are at least fellow-mammals, and likely have common ancestors only a couple of hundred million years back. If you want to talk to a gray parrot, you’re trying to cross a much vaster evolutionary distance, since the ancestors of our therapsid forebears and the ancestors of the parrot’s archosaurian progenitors have been following divergent tracks since way back in the Paleozoic. Since language evolved independently in each of the lineages we’re discussing, the logic of convergent evolution comes into play: as with the eyes of vertebrates and cephalopods—another classic case of the same thing appearing in very different evolutionary lineages—the functions are similar but the underlying structure is very different. Thus it’s no surprise that it’s taken exhaustive computer analyses of porpoise and parrot vocalizations just to give us a clue that they’re using language too.

The takeaway point I hope my readers have grasped from this is that the human mind doesn’t know universal, objective truths. Our thoughts are simply the way that we, as members of a particular species of social primates, like to sort out the universe into chunks simple enough for us to think with. Does that make human thought useless or irrelevant? Of course not; it simply means that its uses and relevance are as limited as everything else about our species—and, of course, every other species as well. If any of my readers see this as belittling humanity, I’d like to suggest that fatuous delusions of intellectual omnipotence aren’t a useful habit for any species, least of all ours. I’d also point out that those very delusions have played a huge role in landing us in the rising spiral of crises we’re in today.

Human beings are simply one species among many, inhabiting part of the earth at one point in its long lifespan. We’ve got remarkable gifts, but then so does every other living thing. We’re not the masters of the planet, the crown of evolution, the fulfillment of Earth’s destiny, or any of the other self-important hogwash with which we like to tickle our collective ego, and our attempt to act out those delusional roles with the help of a lot of fossil carbon hasn’t exactly turned out well, you must admit. I know some people find it unbearable to see our species deprived of its supposed place as the precious darlings of the cosmos, but that’s just one of life’s little learning experiences, isn’t it? Most of us make a similar discovery on the individual scale in the course of growing up, and from my perspective, it’s high time that humanity do a little growing up of its own, ditch the infantile egotism, and get to work making the most of the time we have on this beautiful and fragile planet.

The recognition that there’s a middle ground between omnipotence and uselessness, though, seems to be very hard for a lot of people to grasp just now. I don’t know if other bloggers in the doomosphere have this happen to them, but every few months or so I field a flurry of attempted comments by people who want to drag the conversation over to their conviction that free will doesn’t exist. I don’t put those comments through, and not just because they’re invariably off topic; the ideology they’re pushing is, to my way of thinking, frankly poisonous, and it’s also based on a shopworn Victorian determinism that got chucked by working scientists rather more than a century ago, but is still being recycled by too many people who didn’t hear the thump when it landed in the trash can of dead theories.

A century and a half ago, it used to be a commonplace of scientific ideology that cause and effect ruled everything, and the whole universe was fated to rumble along a rigidly invariant sequence of events from the beginning of time to the end thereof. The claim was quite commonly made that a sufficiently vast intelligence, provided with a sufficiently complete data set about the position and velocity of every particle in the cosmos at one point in time, could literally predict everything that would ever happen thereafter. The logic behind that claim went right out the window, though, once experiments in the early 20th century showed conclusively that quantum phenomena are random in the strictest sense of the word. They’re not caused by some hidden variable; they just happen when they happen, by chance.

What determines the moment when a given atom of an unstable isotope will throw off some radiation and turn into a different element? Pure dumb luck. Since radiation discharges from single atoms of unstable isotopes are the most important cause of genetic mutations, and thus a core driving force behind the process of evolution, this is much more important than it looks. The stray radiation that gave you your eye color, dealt an otherwise uninteresting species of lobefin fish the adaptations that made it the ancestor of all land vertebrates, and provided the raw material for countless other evolutionary transformations:  these were entirely random events, and would have happened differently if certain unstable atoms had decayed at a different moment and sent their radiation into a different ovum or spermatozoon—as they very well could have. So it doesn’t matter how vast the intelligence or complete the data set you’ve got, the course of life on earth is inherently impossible to predict, and so are a great many other things that unfold from it.
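Here’s a minimal sketch of that point, assuming nothing beyond the standard exponential model of decay; the half-life and atom count are arbitrary made-up values, not data about any real isotope.

```python
import math
import random

# Toy model of radioactive decay: each atom's lifetime is drawn at random
# from an exponential distribution. Half-life and atom count are arbitrary.
HALF_LIFE = 10.0   # arbitrary time units
N_ATOMS = 5

def decay_times():
    rate = math.log(2) / HALF_LIFE
    return sorted(round(random.expovariate(rate), 1) for _ in range(N_ATOMS))

# Two runs from identical "initial conditions" (same isotope, same number of
# atoms) yield different histories, because the timing of each decay is pure
# chance rather than the output of some hidden clockwork.
print(decay_times())   # one possible history
print(decay_times())   # another, equally lawful, history
```

No amount of additional data about the starting state will make those two printed histories come out the same.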

With the gibbering phantom of determinism laid to rest, we can proceed to the question of free will. We can define free will operationally as the ability to produce genuine novelty in behavior—that is, to do things that can’t be predicted. Human beings do this all the time, and there are very good evolutionary reasons why they should have that capacity. Any of my readers who know game theory will recall that the best strategy in any competitive game includes an element of randomness, which prevents the other side from anticipating and forestalling your side’s actions. Food gathering, in game theory terms, is a competitive game; so are trying to attract a mate, competing for social prestige, staying out of the jaws of hungry leopards, and most of the other activities that pack the day planners of social primates.
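Here’s a toy version of that game-theoretic point, with every detail invented for the sake of illustration: in a matching-pennies sort of game, a player who always makes the same choice is exploited completely by an opponent who watches for patterns, while a player who randomizes can’t be.

```python
import random

# Matching pennies as a toy example. The "predictor" wins a round when it
# guesses its opponent's choice, and it predicts by assuming the opponent
# will repeat its last move. Everything here is an illustrative assumption.

def predictor_win_rate(opponent_strategy, rounds=10000):
    rng = random.Random(42)        # fixed seed so the demo is repeatable
    guess, wins = "heads", 0
    for _ in range(rounds):
        choice = opponent_strategy(rng)
        if choice == guess:
            wins += 1
        guess = choice             # naive predictor: expect a repeat
    return wins / rounds

always_heads = lambda rng: "heads"
randomized = lambda rng: rng.choice(["heads", "tails"])

print(f"win rate against a predictable player: {predictor_win_rate(always_heads):.2f}")  # ~1.00
print(f"win rate against a randomizing player: {predictor_win_rate(randomized):.2f}")    # ~0.50
```

Randomness is valuable here precisely because it denies the other side a usable model of your behavior.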

Unpredictability is so highly valued by our species, in fact, that every human culture ever recorded has worked out formal ways to increase the total amount of sheer randomness guiding human action. Yes, we’re talking about divination—for those who don’t know the jargon, this term refers to what you do with Tarot cards, the I Ching, tea leaves, horoscopes, and all the myriad other ways human cultures have worked out to take a snapshot of the nonrational as a guide for action. Aside from whatever else may be involved—a point that isn’t relevant to this blog—divination does a really first-rate job of generating unpredictability. Flipping a coin does the same thing, and most people have confounded the determinists by doing just that on occasion, but fully developed divination systems like those just named provide a much richer palette of choices than the simple coin toss, and thus enable people to introduce a much richer range of novelty into their actions.
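One rough way to put a number on “richer palette,” and it’s my own back-of-the-envelope framing rather than anything claimed by the traditions themselves: if each method is treated as a single uniform random draw, the possibilities can be counted in bits.

```python
import math

# Back-of-the-envelope comparison of "randomness palettes." Treating each
# method as one uniform random draw is a simplifying assumption; spreads
# and changing lines would multiply the possibilities further.
methods = {
    "coin toss": 2,
    "single tarot card (78-card deck)": 78,
    "I Ching hexagram (64 hexagrams)": 64,
}
for name, outcomes in methods.items():
    print(f"{name}: about {math.log2(outcomes):.1f} bits per draw")
```

A coin gives one bit per throw; a single tarot card gives a bit more than six.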

Still, divination is a crutch, or at best a supplement; human beings have their own onboard novelty generators, which can do the job all by themselves if given half a chance.  The process involved here was understood by philosophers a long time ago, and no doubt the neurologists will get around to figuring it out one of these days as well. The core of it is that humans don’t respond directly to stimuli, external or internal.  Instead, they respond to their own mental representations of stimuli, which are constructed by the act of cognition and are laced with bucketloads of extraneous material garnered from memory and linked to the stimulus in uniquely personal, irrational, even whimsical ways, following loose and wildly unpredictable cascades of association and contiguity that have nothing to do with logic and everything to do with the roots of creativity. 

Each human society tries to give its children some approximation of its own culturally defined set of representations—that’s what’s going on when children learn language, pick up the customs of their community, ask for the same bedtime story to be read to them for the umpteenth time, and so on. Those culturally defined representations proceed to interact in various ways with the inborn, genetically defined representations that get handed out for free with each brand new human nervous system.  The existence of these biologically and culturally defined representations, and of various ways that they can be manipulated to some extent by other people with or without the benefit of mass media, make up the ostensible reason why the people mentioned above insist that free will doesn’t exist.

Here again, though, the fact that the human mind isn’t omnipotent doesn’t make it powerless. Think about what happens, say, when a straight stick is thrust into water at an angle, and the stick seems to pick up a sudden bend at the water’s surface, due to differential refraction in water and air. The illusion is as clear as anything, but if you show this to a child and let the child experiment with it, you can watch the representation “the stick is bent” give way to “the stick looks bent.” Notice what’s happening here: the stimulus remains the same, but the representation changes, and so do the actions that result from it. That’s a simple example of how representations create the possibility of freedom.

In the same way, when the media spouts some absurd bit of manipulative hogwash, if you take the time to think about it, you can watch your own representation shift from “that guy’s having an orgasm from slurping that fizzy brown sugar water” to “that guy’s being paid to pretend to have an orgasm, so somebody can try to convince me to buy that fizzy brown sugar water.” If you really pay attention, it may shift again to “why am I wasting my time watching this guy pretend to get an orgasm from fizzy brown sugar water?” and may even lead you to chuck your television out a second story window into an open dumpster, as I did to the last one I ever owned. (The flash and bang when the picture tube imploded, by the way, was far more entertaining than anything that had ever appeared on the screen.)

Human intelligence is limited. Our capacities for thinking are constrained by our heredity, our cultures, and our personal experiences—but then so are our capacities for the perception of color, a fact that hasn’t stopped artists from the Paleolithic to the present from putting those colors to work in a galaxy of dizzyingly original ways. A clear awareness of the possibilities and the limits of the human mind makes it easier to play the hand we’ve been dealt in Darwin’s casino—and it also points toward a generally unsuspected reason why civilizations come apart, which we’ll discuss next week.

Wednesday, July 01, 2015

The Dream of the Machine

As I type these words, it looks as though the wheels are coming off the global economy. Greece and Puerto Rico have both suspended payments on their debts, and China’s stock market, which spent the last year in a classic speculative bubble, is now in the middle of a classic speculative bust. Those of my readers who’ve read John Kenneth Galbraith’s lively history The Great Crash 1929 already know all about the Chinese situation, including the outcome—and since vast amounts of money from all over the world went into Chinese stocks, and most of that money is in the process of turning into twinkle dust, the impact of the crash will inevitably proliferate through the global economy.

So, in all probability, will the Greek and Puerto Rican defaults. In today’s bizarre financial world, the kind of bad debts that used to send investors backing away in a hurry attract speculators in droves, and so it turns out that some big New York hedge funds are in trouble as a result of the Greek default, and some of the same firms that got into trouble with mortgage-backed securities in the recent housing bubble are in the same kind of trouble over Puerto Rico’s unpayable debts. How far will the contagion spread? It’s anybody’s guess.

Oh, and on another front, nearly half a million acres of Alaska burned up in a single day last week—yes, the fires are still going—while ice sheets in Greenland are collapsing so frequently and forcefully that the resulting earthquakes are rattling seismographs thousands of miles away. These and other signals of a biosphere in crisis make good reminders of the fact that the current economic mess isn’t happening in a vacuum. As Ugo Bardi pointed out in a thoughtful blog post, finance is the flotsam on the surface of the ocean of real exchanges of real goods and services, and the current drumbeat of financial crises is symptomatic of the real crisis—the arrival of the limits to growth that so many people have been discussing, and so many more have been trying to ignore, for the last half century or so.

A great many people in the doomward end of the blogosphere are talking about what’s going on in the global economy and what’s likely to blow up next. Around the time the next round of financial explosions start shaking the world’s windows, a great many of those same people will likely be talking about what to do about it all.  I don’t plan on joining them in that discussion. As blog posts here have pointed out more than once, time has to be considered when getting ready for a crisis. The industrial world would have had to start backpedaling away from the abyss decades ago in order to forestall the crisis we’re now in, and the same principle applies to individuals.  The slogan “collapse now and avoid the rush!” loses most of its point, after all, when the rush is already under way.

Any of my readers who are still pinning their hopes on survival ecovillages and rural doomsteads they haven’t gotten around to buying or building yet, in other words, are very likely out of luck. They, like the rest of us, will be meeting this where they are, with what they have right now. This is ironic, in that ideas that might have been worth adopting three or four years ago are just starting to get traction now. I’m thinking here particularly of a recent article on how to use permaculture to prepare for a difficult future, which describes the difficult future in terms that will be highly familiar to readers of this blog. More broadly, there’s a remarkable amount of common ground between that article and the themes of my book Green Wizardry. The awkward fact remains that with the global banking industry showing every sign of freezing up the way it did in 2008, and credit for land purchases therefore likely to stay out of reach of most people for years to come, the article’s advice may have arrived rather too late.

That doesn’t mean, of course, that my readers ought to crawl under their beds and wait for death. What we’re facing, after all, isn’t the end of the world—though it may feel like that for those who are too deeply invested, in any sense of that last word you care to use, in the existing order of industrial society. As Visigothic mommas used to remind their impatient sons, Rome wasn’t sacked in a day. The crisis ahead of us marks the end of what I’ve called abundance industrialism and the transition to scarcity industrialism, as well as the end of America’s global hegemony and the emergence of a new international order whose main beneficiary hasn’t been settled yet. Those paired transformations will most likely unfold across several decades of economic chaos, political turmoil, environmental disasters, and widespread warfare. Plenty of people got through the equivalent cataclysms of the first half of the twentieth century with their skins intact, even if the crisis caught them unawares, and no doubt plenty of people will get through the mess that’s approaching us in much the same condition.

Thus I don’t have any additional practical advice, beyond what I’ve already covered in my books and blog posts, to offer my readers just now. Those who’ve already collapsed and gotten ahead of the rush can break out the popcorn and watch what promises to be a truly colorful show.  Those who didn’t—well, you might as well get some popcorn going and try to enjoy the show anyway. If you come out the other side of it all, schoolchildren who aren’t even born yet may eventually come around to ask you awed questions about what happened when the markets crashed in ’15.

In the meantime, while the popcorn is popping and the sidewalks of Wall Street await their traditional tithe of plummeting stockbrokers, I’d like to return to the theme of last week’s post and talk about the way that the myth of the machine—if you prefer, the widespread mental habit of thinking about the world in mechanistic terms—pervades and cripples the modern mind.

Of all the responses that last week’s post fielded, those I found most amusing, and also most revealing, were those that insisted that of course the universe is a machine, so is everything and everybody in it, and that’s that. That’s amusing because most of the authors of these comments made it very clear that they embraced the sort of scientific-materialist atheism that rejects any suggestion that the universe has a creator or a purpose. A machine, though, is by definition a purposive artifact—that is, it’s made by someone to do something. If the universe is a machine, then, it has a creator and a purpose, and if it doesn’t have a creator and a purpose, logically speaking, it can’t be a machine.

That sort of unintentional comedy inevitably pops up whenever people don’t think through the implications of their favorite metaphors. Still, chase that habit further along its giddy path and you’ll find a deeper absurdity at work. When people say “the universe is a machine,” unless they mean that statement as a poetic simile, they’re engaging in a very dubious sort of logic. As Alfred Korzybski pointed out a good many years ago, pretty much any time you say “this is that,” unless you implicitly or explicitly qualify what you mean in very careful terms, you’ve just babbled nonsense.

The difficulty lies in that seemingly innocuous word “is.” What Korzybski called the “is of identity”—the use of the word “is” to represent  =, the sign of equality—makes sense only in a very narrow range of uses.  You can use the “is of identity” with good results in categorical definitions; when I commented above that a machine is a purposive artifact, that’s what I was doing. Here is a concept, “machine;” here are two other concepts, “purposive” and “artifact;” the concept “machine” logically includes the concepts “purposive” and “artifact,” so anything that can be described by the words “a machine” can also be described as “purposive” and “an artifact.” That’s how categorical definitions work.

Let’s consider a second example, though: “a machine is a purple dinosaur.” That utterance uses the same structure as the one we’ve just considered.  I hope I don’t have to prove to my readers, though, that the concept “machine” doesn’t include the concepts “purple” and “dinosaur” in any but the most whimsical of senses.  There are plenty of things that can be described by the label “machine,” in other words, that can’t be described by the labels “purple” or “dinosaur.” The fact that some machines—say, electronic Barney dolls—can in fact be described as purple dinosaurs doesn’t make the definition any less silly; it simply means that the statement “no machine is a purple dinosaur” can’t be justified either.
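The logic of those two statements can be spelled out with nothing fancier than set inclusion. In the sketch below, a concept is modeled as the set of features it logically includes; the particular feature assignments are toy assumptions of mine, chosen only to mirror the examples above.

```python
# A concept is modeled here as the set of features it logically includes.
# The feature assignments are toy assumptions, mirroring the examples above.
MACHINE     = {"purposive", "artifact"}
PURPLE_DINO = {"purple", "dinosaur"}

def every_a_is_b(concept_a, concept_b):
    """'Every A is a B' holds when concept A includes all the features B requires."""
    return concept_b <= concept_a

print(every_a_is_b(MACHINE, {"purposive", "artifact"}))  # True: a sound categorical definition
print(every_a_is_b(MACHINE, PURPLE_DINO))                # False: "a machine is a purple dinosaur" fails

# A particular thing is just a bundle of features. An electronic Barney doll
# happens to satisfy both concepts, so "no machine is a purple dinosaur" fails too.
barney_doll = {"purposive", "artifact", "purple", "dinosaur"}
print(MACHINE <= barney_doll and PURPLE_DINO <= barney_doll)  # True
```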

With that in mind, let’s take a closer look at the statement “the universe is a machine.” As pointed out earlier, the concept “machine” implies the concepts “purposive” and “artifact,” so if the universe is a machine, somebody made it to carry out some purpose. Those of my readers who happen to belong to Christianity, Islam, or another religion that envisions the universe as the creation of one or more deities—not all religions make this claim, by the way—will find this conclusion wholly unproblematic. My atheist readers will disagree, of course, and their reaction is the one I want to discuss here. (Notice how “is” functions in the sentence just uttered: “the reaction of the atheists” equals “the reaction I want to discuss.” This is one of the few other uses of “is” that doesn’t tend to generate nonsense.)

In my experience, at least, atheists faced with the argument about the meaning of the word “machine” I’ve presented here pretty reliably respond with something like “It’s not a machine in that sense.” That response takes us straight to the heart of the logical problems with the “is of identity.” In what sense is the universe a machine? Pursue the argument far enough, and unless the atheist storms off in a huff—which admittedly tends to happen more often than not—what you’ll get amounts to “the universe and a machine share certain characteristics in common.” Go further still—and at this point the atheist will almost certainly storm off in a huff—and you’ll discover that the characteristics that the universe is supposed to share with a machine are all things we can’t actually prove one way or another about the universe, such as whether it has a creator or a purpose.

The statement “the universe is a machine,” in other words, doesn’t do what it appears to do. It appears to state a categorical identity; it actually states an unsupported generalization in absolute terms. It takes a mental model abstracted from one corner of human experience and applies it to something unrelated.  In this case, for polemic reasons, it does so in a predictably one-sided way: deductions approved by the person making the statement (“the universe is a machine, therefore it lacks life and consciousness”) are acceptable, while deductions the person making the statement doesn’t like (“the universe is a machine, therefore it was made by someone for some purpose”) get the dismissive response noted above.

This sort of doublethink appears all through the landscape of contemporary nonconversation and nondebate, to be sure, but the problems with the “is of identity” don’t stop with its polemic abuse. Any time you say “this is that,” and mean something other than “this has some features in common with that,” you’ve just fallen into one of the core boobytraps hardwired into the structure of human thought.

Human beings think in categories. That’s what made ancient Greek logic, which takes categories as its basic element, so massive a revolution in the history of human thinking: by watching the way that one category includes or excludes another, which is what the Greek logicians did, you can squelch a very large fraction of human stupidities before they get a foothold. What Alfred Korzybski pointed out, in effect, is that there’s a metalogic that the ancient Greeks didn’t get to, and logical theorists since their time haven’t really tackled either: the extremely murky relationship between the categories we think with and the things we experience, which don’t come with category labels spraypainted on them.

Here is a green plant with a woody stem. Is it a tree or a shrub? That depends on exactly where you draw the line between those two categories, and as any botanist can tell you, that’s neither an easy nor an obvious thing. As long as you remember that categories exist within the human mind as convenient handles for us to think with, you can navigate around the difficulties, but when you slip into thinking that the categories are more real than the things they describe, you’re in deep, deep trouble.
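A few lines of code make the point about where the boundary actually lives. The cutoff heights and the plants listed below are arbitrary stand-ins of mine, not botany; what matters is that the label flips when the human-chosen threshold moves, while the plant stays exactly as it was.

```python
# The category boundary lives in the classifier, not in the plant.
# Heights and cutoffs below are arbitrary stand-ins, not botanical criteria.
plants = {"hawthorn": 6.0, "rowan": 12.0, "lilac": 3.5}   # mature height in meters

def label(height_m, tree_cutoff_m):
    return "tree" if height_m >= tree_cutoff_m else "shrub"

for cutoff in (5.0, 8.0):
    print(f"cutoff {cutoff} m:",
          {name: label(height, cutoff) for name, height in plants.items()})
# The hawthorn flips from "tree" to "shrub" as the cutoff moves; the plant
# hasn't changed, only the place where we chose to draw the line.
```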

It’s not at all surprising that human thought should have such problems built into it. If, as I do, you accept the Darwinian thesis that human beings evolved out of prehuman primates by the normal workings of the laws of evolution, it follows logically that our nervous systems and cognitive structures didn’t evolve for the purpose of understanding the truth about the cosmos; they evolved to assist us in getting food, attracting mates, fending off predators, and a range of similar, intellectually undemanding tasks. If, as many of my theist readers do, you believe that human beings were created by a deity, the yawning chasm between creator and created, between an infinite and a finite intelligence, stands in the way of any claim that human beings can know the unvarnished truth about the cosmos. Neither viewpoint supports the claim that a category created by the human mind is anything but a convenience that helps our very modest mental powers grapple with an ultimately incomprehensible cosmos.

Any time human beings try to make sense of the universe or any part of it, in turn, they have to choose from among the available categories in an attempt to make the object of inquiry fit the capacities of their minds. That’s what the founders of the scientific revolution did in the seventeenth century, by taking the category of “machine” and applying it to the universe to see how well it would fit. That was a perfectly rational choice from within their cultural and intellectual standpoint. The founders of the scientific revolution were Christians to a man, and some of them (for example, Isaac Newton) were devout even by the standards of the time; the idea that the universe had been made by someone for some purpose, after all, wasn’t problematic in the least to people who took it as given that the universe was made by God for the purpose of human salvation. It was also a useful choice in practical terms, because it allowed certain features of the universe—specifically, the behavior of masses in motion—to be accounted for and modeled with a clarity that previous categories hadn’t managed to achieve.

The fact that one narrowly defined aspect of the universe seems to behave like a machine, though, does not prove that the universe is a machine, any more than the fact that one machine happens to look like a purple dinosaur proves that all machines are purple dinosaurs. The success of mechanistic models in explaining the behavior of masses in motion proved that mechanical metaphors are good at fitting some of the observed phenomena of physics into a shape that’s simple enough for human cognition to grasp, and that’s all it proved. To go from that modest fact to the claim that the universe and everything in it are machines involves an intellectual leap of pretty spectacular scale. Part of the reason that leap was taken in the seventeenth century was the religious frame of scientific inquiry at that time, as already mentioned, but there was another factor, too.

It’s a curious fact that mechanistic models of the universe appeared in western European cultures, and became wildly popular there, well before the machines did. In the early seventeenth century, machines played a very modest role in the life of most Europeans; most tasks were done using hand tools powered by human and animal muscle, the way they had been done since the dawn of the agricultural revolution eight millennia or so before. The most complex devices available at the time were pendulum clocks, printing presses, handlooms, and the like—you know, the sort of thing that people these days use instead of machines when they want to get away from technology.

For reasons that historians of ideas are still trying to puzzle out, though, western European thinkers during these same years were obsessed with machines, and with mechanical explanations for the universe. The latter ranged from the plausible to the frankly preposterous—René Descartes, for example, proposed a theory of gravity in which little corkscrew-shaped particles went zooming up from the earth to screw themselves into pieces of matter and yank them down. Until Isaac Newton, furthermore, theories of nature based on mechanical models didn’t actually explain that much, and until the cascade of inventive adaptations of steam power that ended with James Watt’s epochal steam engine nearly a century after Newton, the idea that machines could elbow aside craftspeople using hand tools and animals pulling carts was an unproven hypothesis. Yet a great many people in western Europe believed in the power of the machine as devoutly as their ancestors had believed in the power of the bones of the local saints.

A habit of thought very widespread in today’s culture assumes that technological change happens first and the world of ideas changes in response to it. The facts simply won’t support that claim, though. As the history of mechanistic ideas in science shows clearly, the ideas come first and the technologies follow—and there’s good reason why this should be so. Technologies don’t invent themselves, after all. Somebody has to put in the work to invent them, and then other people have to invest the resources to take them out of the laboratory and give them a role in everyday life. The decisions that drive invention and investment, in turn, are powerfully shaped by cultural forces, and these in turn are by no means as rational as the people influenced by them generally like to think.

People in western Europe and a few of its colonies dreamed of machines, and then created them. They dreamed of a universe reduced to the status of a machine, a universe made totally transparent to the human mind and totally subservient to the human will, and then set out to create it. That latter attempt hasn’t worked out so well, for a variety of reasons, and the rising tide of disasters sketched out in the first part of this week’s post unfolds in large part from the failure of that misbegotten dream. In the next few posts, I want to talk about why that failure was inevitable, and where we might go from here.

Wednesday, June 24, 2015

The Delusion of Control

I'm sure most of my readers have heard at least a little of the hullaballoo surrounding the release of Pope Francis’ encyclical on the environment, Laudato Si. It’s been entertaining to watch, not least because so many politicians in the United States who like to use Vatican pronouncements as window dressing for their own agendas have been left scrambling for cover now that the wind from Rome is blowing out of a noticeably different quarter.

Take Rick Santorum, a loudly Catholic Republican who used to be in the US Senate and now spends his time entertaining a variety of faux-conservative venues with his signature flavor of hate speech. Santorum loves to denounce fellow Catholics who disagree with Vatican edicts as “cafeteria Catholics,” and announced a while back that John F. Kennedy’s famous defense of the separation of church and state made him sick to his stomach. In the wake of Laudato Si, care to guess who’s elbowing his way to the head of the cafeteria line? Yes, that would be Santorum, who’s been insisting since the encyclical came out that the Pope is wrong and American Catholics shouldn’t be obliged to listen to him.

What makes all the yelling about Laudato Si a source of wry amusement to me is that it’s not actually a radical document at all. It’s a statement of plain common sense. It should have been obvious all along that treating the air as a gaseous sewer was a really dumb idea, and in particular, that dumping billions upon billions of tons of infrared-reflecting gases into the atmosphere would change its capacity for heat retention in unwelcome ways. It should have been just as obvious that all the other ways we maltreat the only habitable planet we’ve got were guaranteed to end just as badly. That this wasn’t obvious—that huge numbers of people find it impossible to realize that you can only wet your bed so many times before you have to sleep in a damp spot—deserves much more attention than it’s received so far.

It’s really a curious blindness, when you think about it. Since our distant ancestors climbed unsteadily down from the trees of late Pliocene Africa, the capacity to anticipate threats and do something about them has been central to the success of our species. A rustle in the grass might indicate the approach of a leopard, a series of unusually dry seasons might turn the local water hole into undrinkable mud: those of our ancestors who paid attention to such things, and took constructive action in response to them, were more likely to survive and leave offspring than those who shrugged and went on with business as usual. That’s why traditional societies around the world are hedged about with a dizzying assortment of taboos and customs meant to guard against every conceivable source of danger.

Somehow, though, we got from that to our present situation, where substantial majorities across the world’s industrial nations seem unable to notice that something bad can actually happen to them, where thoughtstoppers of the “I’m sure they’ll think of something” variety take the place of thinking about the future, and where, when something bad does happen to someone, the immediate response is to find some way to blame the victim for what happened, so that everyone else can continue to believe that the same thing can’t happen to them. A world where Laudato Si is controversial, not to mention necessary, is a world that’s become dangerously detached from the most basic requirements of collective survival.

For quite some time now, I’ve been wondering just what lies behind the bizarre paralogic with which most people these days turn blank and uncomprehending eyes on their onrushing fate. The process of writing last week’s blog post on the astonishing stupidity of US foreign policy, though, seems to have helped me push through to clarity on the subject. I may be wrong, but I think I’ve figured it out.

Let’s begin with the issue at the center of last week’s post, the really remarkable cluelessness with which US policy toward Russia and China has convinced both nations that they have nothing to gain from cooperating with a US-led global order, and are better off allying with each other and opposing the US instead. US politicians and diplomats made that happen, and the way they did it was set out in detail in a recent and thoughtful article by Paul R. Pillar in the online edition of The National Interest.

Pillar’s article pointed out that the United States has evolved a uniquely counterproductive notion of how negotiation works. Elsewhere on the planet, people understand that when you negotiate, you’re seeking a compromise where you get whatever you most need out of the situation, while the other side gets enough of its own agenda met to be willing to cooperate. To the US, by contrast, negotiation means that the other side complies with US demands, and that’s the end of it. The idea that other countries might have their own interests, and might expect to receive some substantive benefit in exchange for cooperation with the US, has apparently never entered the heads of official Washington—and the absence of that idea has resulted in the cascading failures of US foreign policy in recent years.

It’s only fair to point out that the United States isn’t the only practitioner of this kind of self-defeating behavior. A first-rate example has been unfolding in Europe in recent months—yes, that would be the ongoing non-negotiations between the Greek government and the so-called troika, the coalition of unelected bureaucrats who are trying to force Greece to keep pursuing a failed economic policy at all costs. The attitude of the troika is simple: the only outcome they’re willing to accept is capitulation on the part of the Greek government, and they’re not willing to give anything in return. Every time the Greek government has tried to point out to the troika that negotiation usually involves some degree of give and take, the bureaucrats simply give them a blank look and reiterate their previous demands.

That attitude has had drastic political consequences. It’s already convinced Greeks to elect a radical leftist government in place of the compliant centrists who ruled the country in the recent past. If the leftists fold, the neofascist Golden Dawn party is waiting in the wings. The problem with the troika’s stance is simple: the policies they’re insisting that Greece must accept have never—not once in the history of market economies—produced anything but mass impoverishment and national bankruptcy. The Greeks, among many other people, know this; they know that Greece will not return to prosperity until it defaults on its foreign debts the way Russia did in 1998, and scores of other countries have done as well.

If the troika won’t settle for a negotiated debt-relief program, and the current Greek government won’t default, the Greeks will elect someone else who will, no matter who that someone else happens to be; it’s that, after all, or continue along a course that’s already caused the Greek economy to lose a quarter of its precrisis GDP, and shows no sign of stopping anywhere this side of failed-state status. That this could quite easily hand Greece over to a fascist despot is just one of the potential problems with the troika’s strategy. It’s astonishing that so few people in Europe seem to be able to remember what happened the last time an international political establishment committed itself to the preservation of a failed economic orthodoxy no matter what; those of my readers who don’t know what I’m talking about may want to pick up any good book on the rise of fascism in Europe between the wars.

Let’s step back from specifics, though, and notice the thinking that underlies the dysfunctional behavior in Washington and Brussels alike. In both cases, the people who think they’re in charge have lost track of the fact that Russia, China, and Greece have needs, concerns, and interests of their own, and aren’t simply dolls that the US or EU can pose at will. These other nations can, perhaps, be bullied by threats over the short term, but that’s a strategy with a short shelf life.  Successful diplomacy depends on giving the other guy reasons to want to cooperate with you, while demanding cooperation at gunpoint guarantees that the other guy is going to look for ways to shoot back.

The same sort of thinking in a different context underlies the brutal stupidity of American drone attacks in the Middle East. Some wag in the media pointed out a while back that the US went to war against an enemy 5,000 strong, we’ve killed 10,000 of them, and now there are only 20,000 left. That’s a good summary of the situation; the US drone campaign has been a total failure by every objective measure, having worked out consistently to the benefit of the Muslim extremist groups against which it’s aimed, and yet nobody in official Washington seems capable of noticing this fact.

It’s hard to miss the conclusion, in fact, that the Obama administration thinks that in pursuing its drone-strike program, it’s playing some kind of video game, which the United States can win if it can rack up enough points. Notice the way that every report that a drone has taken out some al-Qaeda leader gets hailed in the media: hey, we nailed a commander, doesn’t that boost our score by five hundred? In the real world, meanwhile, the indiscriminate slaughter of civilians by US drone strikes has become a core factor convincing Muslims around the world that the United States is just as evil as the jihadis claim, and thus sending young men by the thousands to join the jihadi ranks. Has anyone in the Obama administration caught on to this straightforward arithmetic of failure? Surely you jest.
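That arithmetic of failure is easy enough to put in a toy model: suppose each strike kills a handful of fighters while its collateral damage recruits somewhat more. Every number below is invented purely for illustration.

```python
# Toy recruitment model; every number is invented purely for illustration.
fighters            = 5_000
strikes_per_year    = 400
kills_per_strike    = 5
recruits_per_strike = 8    # assumed blowback from each strike's collateral damage

for year in range(1, 6):
    fighters += strikes_per_year * (recruits_per_strike - kills_per_strike)
    print(f"after year {year}: roughly {fighters:,} fighters")
# Whenever recruitment per strike exceeds kills per strike, every additional
# strike grows the movement it's aimed at, no matter how high the body count.
```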

For that matter, I wonder how many of my readers recall the much-ballyhooed “surge” in Afghanistan several years back.  The “surge” was discussed at great length in the US media before it was enacted on Afghan soil; talking heads of every persuasion babbled learnedly about how many troops would be sent, how long they’d stay, and so on. It apparently never occurred to anybody in the Pentagon or the White House that the Taliban could visit websites and read newspapers, and get a pretty good idea of what the US forces in Afghanistan were about to do. That’s exactly what happened, too; the Taliban simply hunkered down for the duration, and popped back up the moment the extra troops went home.

Both these examples of US military failure are driven by the same problem discussed earlier in the context of diplomacy: an inability to recognize that the other side will reliably respond to US actions in ways that further its own agenda, rather than playing along with the US. More broadly, it’s the same failure of thought that leads so many people to assume that the biosphere is somehow obligated to give us all the resources we want and take all the abuse we choose to dump on it, without ever responding in ways that might inconvenience us.

We can sum up all these forms of acquired stupidity in a single sentence: most people these days seem to have lost the ability to grasp that the other side can learn.

The entire concept of learning has been so poisoned by certain bad habits of contemporary thought that it’s probably necessary to pause here. Learning, in particular, isn’t the same thing as rote imitation. If you memorize a set of phrases in a foreign language, for example, that doesn’t mean you’ve learned that language. To learn the language means to grasp the underlying structure, so that you can come up with your own phrases and say whatever you want, not just what you’ve been taught to say.

In the same way, if you memorize a set of disconnected factoids about history, you haven’t learned history. This is something of a loaded topic right now in the US, because recent “reforms” in the American  public school system have replaced learning with rote memorization of disconnected factoids that are then regurgitated for multiple choice tests. This way of handling education penalizes those children who figure out how to learn, since they might well come up with answers that differ from the ones the test expects. That’s one of many ways that US education these days actively discourages learning—but that’s a subject for another post.

To learn is to grasp the underlying structure of a given subject of knowledge, so that the learner can come up with original responses to it. That’s what Russia and China did; they grasped the underlying structure of US diplomacy, figured out that they had nothing to gain by cooperating with that structure, and came up with a creative response, which was to ally against the United States. That’s what Greece is doing, too.  Bit by bit, the Greeks seem to be figuring out the underlying structure of troika policy, which amounts to the systematic looting of southern Europe for the benefit of Germany and a few of its allies, and are trying to come up with a response that doesn’t simply amount to unilateral submission.

That’s also what the jihadis and the Taliban are doing in the face of US military activity. If life hands you lemons, as the saying goes, make lemonade; if the US hands you drone strikes that routinely slaughter noncombatants, you can make very successful propaganda out of it—and if the US hands you a surge, you roll your eyes, hole up in your mountain fastnesses, and wait for the Americans to get bored or distracted, knowing that this won’t take long. That’s how learning works, but that’s something that US planners seem congenitally unable to take into account.

The same analysis, interestingly enough, makes just as much sense when applied to nonhuman nature. As Ervin Laszlo pointed out a long time ago in Introduction to Systems Philosophy, any sufficiently complex system behaves in ways that approximate intelligence.  Consider the way that bacteria respond to antibiotics. Individually, bacteria are as dumb as politicians, but their behavior on the species level shows an eerie similarity to learning; faced with antibiotics, a species of bacteria “tries out” different biochemical approaches until it finds one that sidesteps the antibiotic. In the same way, insects and weeds “try out” different responses to pesticides and herbicides until they find whatever allows them to munch on crops or flourish in the fields no matter how much poison the farmer sprays on them.
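That species-level “learning” can be caricatured in a few lines of simulated selection. Every parameter below, from population size to mutation rate to kill rates, is an invented toy of mine rather than microbiology; the point is simply that resistance spreads without any individual cell figuring anything out.

```python
import random

# Toy model of a bacterial population under antibiotic pressure.
# Every parameter here is an invented illustration, not microbiology.
POP, GENERATIONS = 10_000, 30
MUTATION_RATE    = 1e-3    # chance a daughter cell picks up resistance
KILL_RATE        = {"susceptible": 0.9, "resistant": 0.1}

population = ["susceptible"] * POP
for gen in range(GENERATIONS):
    # The antibiotic kills most susceptible cells and few resistant ones.
    survivors = [b for b in population if random.random() > KILL_RATE[b]]
    if not survivors:
        break
    # Survivors divide back up to carrying capacity; daughters sometimes mutate.
    population = []
    for parent in random.choices(survivors, k=POP):
        resistant = parent == "resistant" or random.random() < MUTATION_RATE
        population.append("resistant" if resistant else "susceptible")
    if gen % 10 == 0:
        print(f"generation {gen}: "
              f"{population.count('resistant') / POP:.1%} resistant")
# No individual cell figures anything out, yet the population as a whole
# reliably ends up resistant: behavior that looks, from outside, like learning.
```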

We can even apply the same logic to the environmental crisis as a whole. Complex systems tend to seek equilibrium, and will respond to anything that pushes them away from equilibrium by pushing back the other way. Any field biologist can show you plenty of examples: if conditions allow more rabbits to be born in a season, for instance, the population of hawks and foxes rises accordingly, reducing the rabbit surplus to a level the ecosystem can support. As humanity has put increasing pressure on the biosphere, the biosphere has begun to push back with increasing force, in an increasing number of ways; is it too much to think of this as a kind of learning, in which the biosphere “tries out” different ways to balance out the abusive behavior of humanity, until it finds one or more that work?
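The rabbits-and-foxes example can be sketched the same way, with a crude discrete predator-prey model. The coefficients below are arbitrary textbook-style toys chosen only to show the push-back, not field data.

```python
# A crude discrete predator-prey sketch (Lotka-Volterra prey growth capped by a
# carrying capacity). All coefficients are arbitrary toys, not field data.
rabbits, foxes = 1000.0, 20.0
BIRTH, CAPACITY, PREDATION, CONVERSION, FOX_DEATH = 0.08, 2000.0, 0.001, 0.0002, 0.05

for month in range(121):
    if month % 24 == 0:
        print(f"month {month:3d}: {rabbits:6.0f} rabbits, {foxes:5.1f} foxes")
    d_rabbits = BIRTH * rabbits * (1 - rabbits / CAPACITY) - PREDATION * rabbits * foxes
    d_foxes   = CONVERSION * rabbits * foxes - FOX_DEATH * foxes
    rabbits  += d_rabbits
    foxes    += d_foxes
# A season that favors rabbits feeds more foxes, and the larger fox population
# then pulls the rabbit surplus back toward what the ecosystem can support.
```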

Now of course it’s long been a commonplace of modern thought that natural systems can’t possibly learn. The notion that nature is static, timeless, and unresponsive, a passive stage on which human beings alone play active roles, is welded into modern thought, unshaken even by the realities of biological evolution or the rising tide of evidence that natural systems are in fact quite able to adapt their way around human meddling. There’s a long and complex history to the notion of passive nature, but that’s a subject for another day; what interests me just now is that since 1990 or so, the governing classes of the United States, and some other Western nations as well, have applied the same frankly delusional logic to everything in the world other than themselves.

“We’re an empire now, and when we act, we create our own reality,” neoconservative guru Karl Rove is credited as saying to reporter Ron Suskind. “We’re history’s actors, and you, all of you, will be left to just study what we do.” That seems to be the thinking that guides the US government these days, on both sides of the supposed partisan divide. Obama says we’re in a recovery, and if the economy fails to act accordingly, why, rooms full of industrious flacks churn out elaborately fudged statistics to erase that unwelcome reality. That history’s self-proclaimed actors might turn out to be just one more set of flotsam awash on history’s vast tides has never entered their darkest dream.

Let’s step back from specifics again, though. What’s the source of this bizarre paralogic—the delusion that leads politicians to think that they create reality, and that everyone and everything else can only fill the roles they’ve been assigned by history’s actors?  I think I know. I think it comes from a simple but remarkably powerful fact, which is that the people in question, along with most people in the privileged classes of the industrial world, spend most of their time, from childhood on, dealing with machines.

We can define a machine as a subset of the universe that’s been deprived of the capacity to learn. The whole point of building a machine is that it does what you want, when you want it, and nothing else. Flip the switch on, and it turns on and goes through whatever rigidly defined set of behaviors it’s been designed to do; flip the switch off, and it stops. It may be fitted with controls, so you can manipulate its behavior in various tightly limited ways; nowadays, especially when computer technology is involved, the set of behaviors assigned to it may be complex enough that an outside observer may be fooled into thinking that there’s learning going on. There’s no inner life behind the facade, though.  It can’t learn, and to the extent that it pretends to learn, what happens is the product of the sort of rote memorization described above as the antithesis of learning.
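That “rigidly defined set of behaviors” can literally be written down as a lookup table. Here’s a minimal sketch of a machine in the sense just defined, with states and inputs made up for the occasion; the one thing it can never do is add a row to its own table.

```python
# A machine in the sense defined above: a fixed lookup table of behaviors.
# States and inputs are made up for illustration; nothing here ever changes
# in response to experience, which is the whole point.
TRANSITIONS = {
    ("off", "switch"): ("on",  "start humming"),
    ("on",  "switch"): ("off", "fall silent"),
    ("on",  "jam"):    ("off", "grind to a halt"),
}

def step(state, event):
    # An input not in the table produces nothing at all; the machine cannot improvise.
    return TRANSITIONS.get((state, event), (state, "nothing"))

state = "off"
for event in ["switch", "jam", "switch", "sweet-talking"]:
    state, behavior = step(state, event)
    print(f"{event!r} -> {state} ({behavior})")
```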

A machine that learned would be capable of making its own decisions and coming up with a creative response to your actions—and that’s the opposite of what machines are meant to do, because that response might well involve frustrating your intentions so the machine can get what it wants instead. That’s why the trope of machines going to war against human beings has so large a presence in popular culture: it’s exactly because we expect machines not to act like people, not to pursue their own needs and interests, that the thought of machines acting the way we do gets so reliable a frisson of horror.

The habit of thought that treats the rest of the cosmos as a collection of machines, existing only to fulfill whatever purpose they might be assigned by their operators, is another matter entirely. Its origins can be traced back to the dawning of the scientific revolution in the seventeenth century, when a handful of thinkers first began to suggest that the universe might not be a vast organism—as everybody in the western world had presupposed for millennia before then—but might instead be a vast machine. It’s indicative that one immediate and popular response to this idea was to insist that other living things were simply “meat machines” who didn’t actually suffer pain under the vivisector’s knife, but had been designed by God to imitate sounds of pain in order to inspire feelings of pity in human beings.

The delusion of control—the conviction, apparently immune to correction by mere facts, that the world is a machine incapable of doing anything but the things we want it to do—pervades contemporary life in the world’s industrial societies. People in those societies spend so much more time dealing with machines than they do interacting with other people and other living things without a machine interface getting in the way that it’s no wonder this delusion is so widespread. As long as it retains its grip, though, we can expect the industrial world, and especially its privileged classes, to stumble onward from one preventable disaster to another. That’s the inner secret of the delusion of control, after all: those who insist on seeing the world in mechanical terms end up behaving mechanically themselves. Those who deny all other things the ability to learn lose the ability to learn from their own mistakes, and lurch robotically onward along a trajectory that leads straight to the scrapheap of the future.