Wednesday, December 10, 2014

Dark Age America: The Sharp Edge of the Shell

One of the interesting features of blogging about the twilight of science and technology these days is that there’s rarely any need to wait long for a cogent example. One that came my way not long ago via a reader of this blog—tip of the archdruidical hat to Eric S.—shows that not even a science icon can get away with asking questions about the rising tide of financial corruption and dogmatic ideology that’s drowning the scientific enterprise in our time.

Many of my readers will recall Bill Nye the Science Guy, the star of a television program on science in the 1990s and still a vocal and entertaining proponent of science education. In a recent interview, Nye was asked why he doesn’t support the happy-go-lucky attitude toward dumping genetically modified organisms into the environment that’s standard in the United States and a few other countries these days. His answer is that their impact on ecosystems is a significant issue that hasn’t been adequately addressed. Those who know their way around today’s pseudoskeptic scene won’t be surprised by the reaction from one of Discover Magazine’s bloggers: a tar and feathers party, more or less, full of the standard GMO industry talking points and little else.

Nye’s point, as it happens, is as sensible as it is scientific: ecosystems are complex wholes that can be thrown out of balance by relatively subtle shifts, and since human beings depend for their survival and prosperity on the products of natural ecosystems, avoiding unnecessary disruption to those systems is arguably a good idea. This eminently rational sort of thinking, though, is not welcomed in corporate boardrooms just now.  In the case under discussion, it’s particularly unwelcome in the boardrooms of corporations heavily invested in genetic modification, which have a straightforward if shortsighted financial interest in flooding the biosphere with as many GMOs as they can sell.

Thus it’s reasonable that Monsanto et al. would scream bloody murder in response to Nye’s comment. What interests me is that so many believers in science should do the same, and not only in this one case. Last I checked, “what makes the biggest profit for industry must be true” isn’t considered a rule of scientific reasoning, but that sort of thinking is remarkably common in what passes for skepticism these days. To cite an additional example, it’s surely not accidental that there’s a 1.00 correlation between the health care modalities that make money for the medical and pharmaceutical industries and the health care modalities that the current crop of soi-disant skeptics consider rational and science-based, and an equal 1.00 correlation between those modalities that don’t make money for the medical and pharmaceutical industries and those that today’s skeptics dismiss as superstitious quackery.

To some extent, this is likely a product of what’s called “astroturfing,” the manufacture of artificial grassroots movements to support the agendas of an industrial sector or a political faction. The internet, with its cult of anonymity and its less than endearing habit of letting every discussion plunge to the lowest common denominator of bullying and abuse, was tailor-made for that sort of activity; it’s pretty much an open secret at this point, or so I’m told by the net-savvy, that most significant industries these days maintain staffs of paid flacks who spend their working hours searching the internet for venues to push messages favorable to their employers and challenge opposing views. Given the widespread lack of enthusiasm for GMOs, Monsanto and its competitors would have to be idiots to neglect such an obvious and commonly used marketing tactic.

Still, there’s more going on here than ordinary media manipulation in the hot pursuit of profits. There are plenty of people who have no financial stake in the GMO industry who defend it fiercely from even the least whisper of criticism, just as there are plenty of people who denounce alternative medicine in ferocious terms even though they don’t happen to make money from the medical-pharmaceutical industrial complex. I’ve discussed in previous posts here, and in a forthcoming book, the way that faith in progress was pressed into service as a substitute for religious belief during the nineteenth century, and continues to fill that role for many people today. It’s not a transformation that did science any good, but its implications as industrial civilization tips over into decline and fall are considerably worse than the ones I’ve explored in previous essays. I want to talk about those implications here, because they have a great deal to say about the future of science and technology in the deindustrializing world of the near future.

It’s important, in order to make sense of those implications, to grasp that science and technology function as social phenomena, and fill social roles, in ways that have more than a little in common with the intellectual activities of civilizations of the past. That doesn’t mean, as some postmodern theorists have argued, that science and technology are purely social phenomena; both of them have to take the natural world into account, and so have an important dimension that transcends the social. That said, the social dimension also exists, and since human beings are social mammals, that dimension has an immense impact on the way that science and technology function in this or any other human society.

From a social standpoint, it’s thus not actually all that relevant that the scientists and engineers of contemporary industrial society can accomplish things with matter and energy that weren’t within the capacities of Babylonian astrologer-priests, Hindu gurus, Chinese literati, or village elders in precontact New Guinea. Each of these groups has been assigned a particular social role, the role of interpreter of Nature, by their respective societies, and each of them is accorded substantial privileges for fulfilling the requirements of their role. It’s therefore possible to draw precise and pointed comparisons between the different bodies of people filling that very common social role in different societies.

The exercise is worth doing, not least because it helps sort out the far from meaningless distinction between the aspects of modern science and technology that unfold from their considerable capacities for doing things with matter and energy, and the aspects of modern science and technology that unfold from the normal dynamics of social privilege.  What’s more, since modern science and technology weren’t around in previous eras of decline and fall but privileged intellectual castes certainly were, recognizing the common features that unite today’s scientists, engineers, and promoters of scientific and technological progress with equivalent groups in past civilizations makes it a good deal easier to anticipate the fate of science and technology in the decades and centuries to come.

A specific example will be more useful here than any number of generalizations, so let’s consider the fate of philosophy in the waning years of the Roman world. The extraordinary intellectual adventure we call classical philosophy began in the Greek colonial cities of Ionia around 585 BCE, when Thales of Miletus first proposed a logical rather than a mythical explanation for the universe, and proceeded through three broad stages from there. The first stage, that of the so-called Presocratics, focused on the natural world, and the questions it asked and tried to answer can more or less be summed up as “What exists?”  Its failures and equivocal successes led the second stage, which extended from Socrates through Plato and Aristotle to the Old Academy and its rivals, to focus its attention on different questions, which can be summed up just as neatly as “How can we know what exists?”

That was an immensely fruitful shift in focus. It led to the creation of classical logic—one of the great achievements of the human mind—and it also drove the transformations that turned mathematics from an assortment of rules of thumb to an architecture of logical proofs, and thus laid the foundations on which Newtonian physics and other quantitative sciences eventually built.  Like every other great intellectual adventure of our species, though, it never managed to fulfill all the hopes that had been loaded onto it; the philosopher’s dream of human society made wholly subject to reason turned out to be just as unreachable as the scientist’s dream of a universe made wholly subject to the human will. As that failure became impossible to ignore, classical philosophy shifted focus again, to a series of questions and attempted answers that amounted to “given what we know about what exists, how should we live?”

That’s the question that drove the last great age of classical philosophy, the age of the Epicureans, the Stoics, and the Neoplatonists, the three philosophical schools I discussed a few months back as constructive personal responses to the fall of our civilization. At first, these and other schools carried on lively and far-reaching debates, but as the Roman world stumbled toward its end under the burden of its own unsolved problems, the philosophers closed ranks; debates continued, but they focused more and more tightly on narrow technical issues within individual schools. What’s more, the schools themselves closed ranks; pure Stoic, Aristotelian, and Epicurean philosophy gradually dropped out of fashion, and by the fourth century CE, a Neoplatonism enriched with bits and pieces of all the other schools stood effectively alone, the last school standing in the long struggle Thales kicked off ten centuries before.

Now I have to confess to a strong personal partiality for the Neoplatonists. It was from Plotinus and Proclus, respectively the first and last great figures in the classical tradition, that I first grasped why philosophy matters and what it can accomplish, and for all its problems—like every philosophical account of the world, it has some—Neoplatonism still makes intuitive sense to me in a way that few other philosophies do. What’s more, the men and women who defended classical Neoplatonism in its final years were people of great intellectual and personal dignity, committed to proclaiming the truth as they knew it in the face of intolerance and persecution that ended up costing no few of them their lives.

The awkward fact remains that classical philosophy, like modern science, functioned as a social phenomenon and filled certain social roles. The intellectual power of the final Neoplatonist synthesis and the personal virtues of its last proponents have to be balanced against its blind support of a deeply troubled social order; in all the long history of classical philosophy, it never seems to have occurred to anyone that debates about the nature of justice might reasonably address, say, the ethics of slavery. While a stonecutter like Socrates could take an active role in philosophical debate in Athens in the fifth century BCE, furthermore, the institutionalization of philosophy meant that by the last years of classical Neoplatonism, its practice was restricted to those with ample income and leisure, and its values inevitably became more and more closely tied to the social class of its practitioners.

That’s the thing that drove the ferocious rejection of philosophy by the underclass of the age, the slaves and urban poor who made up the vast majority of the population throughout the Roman empire, and who received little if any benefit from the intellectual achievements of their society. To them, the subtleties of Neoplatonist thought were irrelevant to the increasingly difficult realities of life on the lower end of the social pyramid in a brutally hierarchical and increasingly dysfunctional world. That’s an important reason why so many of them turned for solace to a new religious movement from the eastern fringes of the empire, a despised sect that claimed that God had been born on earth as a mere carpenter’s son and communicated through his life and death a way of salvation that privileged the poor and downtrodden above the rich and well-educated.

It was as a social phenomenon, filling certain social roles, that Christianity attracted persecution from the imperial government, and it was in response to Christianity’s significance as a social phenomenon that the imperial government executed an about-face under Constantine and took the new religion under its protection. Like plenty of autocrats before and since, Constantine clearly grasped that the real threat to his position and power came from other members of his own class—in his case, the patrician elite of the Roman world—and saw that he could undercut those threats and counter potential rivals through an alliance of convenience with the leaders of the underclass. That’s the political subtext of the Edict of Milan, which legalized Christianity throughout the empire and brought it imperial patronage.

The patrician class of late Roman times, like its equivalent today, exercised power through a system of interlocking institutions from which outsiders were carefully excluded, and it maintained a prickly independence from the central government.  By the fourth century, tensions between the bureaucratic imperial state and the patrician class, with its local power bases and local loyalties, were rising toward a flashpoint.  The rise of Christianity thus gave Constantine and his successors an extraordinary opportunity.  Most of the institutions that undergirded patrician power were linked to Pagan religion; local senates, temple priesthoods, philosophical schools, and other elements of elite culture normally involved duties drawn from the traditional faith. A religious pretext to strike at those institutions must have seemed as good as any other, and the Christian underclass offered one other useful feature: mobs capable of horrific acts of violence against prominent defenders of the patrician order.

That was why, for example, a Christian mob in 415 CE dragged the Neoplatonist philosopher Hypatia from her chariot as she rode home from her teaching gig at the Academy in Alexandria, cudgeled her to death, cut the flesh from her bones with sharpened oyster shells—the cheap pocket knives of the day—and burned the bloody gobbets to ashes. What doomed Hypatia was not only her defense of the old philosophical traditions, but also her connection to Alexandria’s patrician class; her ghastly fate was as much the vengeance of the underclass against the elite as it was an act of religious persecution. She was far from the only victim of violence driven by those paired motives, either. It was as a result of such pressures that, by the time the emperor Justinian ordered the last academies closed in 529 CE, the classical philosophical tradition was essentially dead.

That’s the sort of thing that happens when an intellectual tradition becomes too closely affiliated with the institutions, ideologies, and interests of a social elite. If the elite falls, so does the tradition—and if it becomes advantageous for anyone else to target the elite, the tradition can be a convenient target, especially if it’s succeeded in alienating most of the population outside the elite in question.

Modern science is extremely vulnerable to such a turn of events. There was a time when the benefits of scientific research and technological development routinely reached the poor as well as the privileged, but that time has long since passed; these days, the benefits of research and development move up the social ladder, while the costs and negative consequences move down. Nearly all the jobs eliminated by automation, globalization, and the computer revolution, for example, used to go to people at the bottom end of the job market. In the same way, changes in US health care in recent decades have benefited the privileged while subjecting most others to substandard care at prices so high that medical bills are the leading cause of bankruptcy in the US today.

It’s all very well for the promoters of progress to gabble on about science as the key to humanity’s destiny; the poor know that the destiny thus marketed isn’t for them.  To the poor, progress means fewer jobs with lower pay and worse conditions, more surveillance and impersonal violence carried out by governments that show less and less interest in paying even lip service to the concept of civil rights, a rising tide of illnesses caused by environmental degradation and industrial effluents, and glimpses from afar of an endless stream of lavishly advertised tech-derived trinkets, perks and privileges that they will never have. Between the poor and any appreciation for modern science stands a wall made of failed schools, defunded libraries, denied opportunities, and the systematic use of science and technology to benefit other people at their expense. Such a wall, it probably bears noting, makes a good surface against which to sharpen oyster shells.

It seems improbable that anything significant will be done to change this picture until it’s far too late for such changes to have any meaningful effect. Barring dramatic transformations in the distribution of wealth, the conduct of public education, the funding for such basic social amenities as public libraries, and a great deal more, the underclass of the modern industrial world can be expected to grow more and more disenchanted with science as a social phenomenon in our culture, and to turn instead—as their equivalents in the Roman world and so many other civilizations did—to some tradition from the fringes that places itself in stark opposition to everything modern scientific culture stands for. Once that process gets under way, it’s simply a matter of waiting until the corporate elite that funds science, defines its values, and manipulates it for PR purposes, becomes sufficiently vulnerable that some other power center decides to take it out, using institutional science as a convenient point of attack.

Saving anything from the resulting wreck will be a tall order. Still, the same historical parallel discussed above offers some degree of hope. The narrowing focus of classical philosophy in its last years meant, among other things, that a substantial body of knowledge that had once been part of the philosophical movement was no longer identified with it by the time the cudgels and shells came out, and much of it was promptly adopted by Christian clerics and monastics as useful for the Church. That’s how classical astronomy, music theory, and agronomy, among other things, found their way into the educational repertoire of Christian monasteries and nunneries in the dark ages. What’s more, once the power of the patrician class was broken, a carefully sanitized version of Neoplatonist philosophy found its way into Christianity; in some denominations, it’s still a living presence today.

That may well happen again. Certainly today’s defenders of science are doing their best to shove a range of scientific viewpoints out the door; the denunciation meted out to Bill Nye for bringing basic concepts from ecology into a discussion where they were highly relevant is par for the course these days. There’s an interesting distinction between the sciences that get this treatment and those that don’t: the sciences being flung aside are those that focus on observation of natural systems rather than control of artificial ones, and any science that raises doubts about the possibility or desirability of infinite technological expansion can expect to find itself shivering in the dark outside in very short order. (This latter point applies to other fields of intellectual endeavor as well; half the angry denunciations of philosophy you’ll hear these days from figures such as Neil deGrasse Tyson, I’m convinced, come out of the simple fact that the claims of modern science to know objective truths about nature won’t stand up to fifteen minutes of competent philosophical analysis.)

Thus it’s entirely possible that observational sciences, if they can squeeze through the bottleneck imposed by the loss of funding and prestige, will be able to find a new home in whatever intellectual tradition replaces modern scientific rationalism in the deindustrial future. It’s at least as likely that such dissident sciences as ecology, which has always raised challenging questions about the fantasies of the manipulative sciences, may find themselves eagerly embraced by a future intellectual culture that has no trouble at all recognizing the futility of those fantasies. That said, it’s still going to take some hard work to preserve what’s been learnt in those fields—and it’s also going to take more than the usual amount of prudence and plain dumb luck not to get caught up in the conflict when the sharp edge of the shell gets turned on modern science.

Wednesday, December 03, 2014

Dark Age America: The Fragmentation of Technology

It was probably inevitable that last week’s discussion of the way that contemporary science is offering itself up as a sacrifice on the altar of corporate greed and institutional arrogance would net me a flurry of responses insisting that I must hate science.  This is all the more ironic in that the shoddy logic involved in that claim also undergirded George W. Bush’s famous and fatuous insistence that the Muslim world is riled at the United States because “they hate our freedom.”

In point of fact, the animosity felt by many Muslims toward the United States is based on specific grievances concerning specific acts of US foreign policy. Whether or not those grievances are justified is a matter I don’t propose to get into here; the point that’s relevant to the current discussion is that the grievances exist, they relate to identifiable actions on the part of the US government, and insisting that the animosity in question is aimed at an abstraction instead is simply one of the ways that Bush, or for that matter his equally feckless successor, has tried to sidestep any discussion of the means, ends, and cascading failures of US policy toward the Middle East and the rest of the Muslim world.

In the same way, it’s very convenient to insist that people who ask hard questions about the way that contemporary science has whored itself out to economic and political interests, or who have noticed gaps between the claims about reality made by the voices of the scientific mainstream and their own lived experience of the world, just hate science. That evasive strategy makes it easy to brush aside questions about the more problematic dimensions of science as currently practiced. This isn’t a strategy with a long shelf life; responding to a rising spiral of problems by insisting that the problems don’t exist and denouncing those who demur is one of history’s all-time bad choices, but intellectuals in falling civilizations all too often try to shore up the crumbling foundations of their social prestige and privilege via that foredoomed approach.

Central to the entire strategy is a bit of obfuscation that treats “science” as a monolithic unity, rather than the complex and rather ramshackle grab-bag of fields of study, methods of inquiry, and theories about how different departments of nature appear to work that it actually is. There’s no particular correlation between, let’s say, the claims made for the latest heavily marketed and dubiously researched pharmaceutical, on the one hand, and the facts of astronomy, evolutionary biology, or agronomy on the other; and someone can quite readily find it impossible to place blind faith in the pharmaceutical and the doctor who’s pushing it on her, while enjoying long nights observing the heavens through a telescope, delighting in the elegant prose and even more elegant logic of Darwin’s The Origin of Species, or running controlled experiments in her backyard on the effectiveness of compost as a soil amendment. To say that such a person “hates science” is to descend from meaningful discourse to thought-stopping noise.

The habit of insisting that science is a single package, take it or leave it, is paralleled by the equivalent and equally specious insistence that there is this single thing called “technology,” that objecting to any single component of that alleged unity amounts to rejecting all of it, and that you’re not allowed to pick and choose among technologies—you have to take all of it or reject it all. I field this sort of nonsense all the time. It so happens, for example, that I have no interest in owning a cell phone, never got around to playing video games, and have a sufficiently intense fondness for books printed on actual paper that I’ve never given more than a passing thought to the current fad for e-books.

I rarely mention these facts to those who don’t already know them, because it’s a foregone conclusion that if I do so, someone will ask me whether I hate technology.  Au contraire, I’m fond of slide rules, love rail travel, cherish an as yet unfulfilled ambition to get deep into letterpress printing, and have an Extra class amateur radio license; all these things entail enthusiastic involvement with specific technologies, and indeed affection for them; but if I mention these points in response to the claim that I must hate technology, the responses I get range from baffled incomprehension to angry dismissal.

“Technology,” in the mind of those who make such claims, clearly doesn’t mean what the dictionary says it means.  To some extent, of course, it amounts to whatever an assortment of corporate and political marketing firms want you to buy this week, but there’s more to it than that. Like the word “science,” “technology” has become a buzzword freighted with a vast cargo of emotional, cultural, and (whisper this) political meanings.  It’s so densely entangled with passionately felt emotions, vast and vague abstractions, and frankly mythic imagery that many of those who use the word can’t explain what they mean by it, and get angry if you ask them to try.

The flattening out of the vast diversity of technologies, in the plural, into a single monolithic shape guarded by unreasoning emotions would be problematic under any conditions. When a civilization that depends on the breakneck exploitation of nonrenewable resources is running up against the unyielding limits of a finite planet, with resource depletion and pollution in a neck-and-neck race to see which one gets to bring the industrial project to an end first, it’s a recipe for disaster. A sane response to the predicament of our time would have to start by identifying the technological suites that will still be viable in a resource-constrained and pollution-damaged environment, and then shift as much vital infrastructure to those as possible with the sharply limited resources we have left. Our collective thinking about technology is so muddled by unexamined emotions, though, that it doesn’t matter how obviously necessary such a project might be: it remains unthinkable.

Willy-nilly, though, the imaginary monolith of “technology” is going to crumble, because different technologies have wildly varying resource requirements, and they vary just as drastically in terms of their importance to the existing order of society. As resource depletion and economic contraction tighten their grip on the industrial world, the stock of existing and proposed technologies faces triage in a continuum defined by two axes—the utility of the technology, on the one hand, and its cost in real (i.e., nonfinancial) terms on the other. A chart may help show how this works.


This is a very simplified representation of the frame in which decisions about technology are made. Every kind of utility from the demands of bare survival to the whims of fashion is lumped together and measured on the vertical axis, and every kind of nonfinancial cost from energy and materials straight through to such intangibles as opportunity cost is lumped together and measured on the horizontal axis. In an actual analysis, of course, these variables would be broken out and considered separately; the point of a more schematic view of the frame, like this one, is that it allows the basic concepts to be grasped more easily.

The vertical and horizontal lines that intersect in the middle of the graph are similarly abstractions from a complex reality. The horizontal line represents the boundary between those technologies which have enough utility to be worth building and maintaining, which are above the line, and those which have too little utility to be worth the trouble, which are below it. The vertical line represents the boundary between those technologies which are affordable and those that are not. In the real world, those aren’t sharp boundaries but zones of transition, with complex feedback loops weaving back and forth among them, but again, this is a broad conceptual model.

The intersection of the lines divides the whole range of technology into four categories, which I’ve somewhat unoriginally marked with the first four letters of the alphabet. Category A consists of things that are both affordable and useful, such as indoor plumbing. Category B consists of things that are affordable but useless, such as electrically heated underwear for chickens. Category C consists of things that are useful but unaffordable, such as worldwide 30-minute pizza delivery from low earth orbit. Category D, rounding out the set, consists of things that are neither useful nor affordable, such as—well, I’ll let my readers come up with their own nominees here.

Now of course the horizontal and vertical lines aren’t fixed; they change position from one society to another, from one historical period to another, and indeed from one community, family, or individual to another. (To me, for example, cell phones belong in category B, right next to the electrically heated chicken underwear; other people would doubtless put them somewhere else on the chart.) Every society, though, has a broad general consensus about what goes in which category, which is heavily influenced, though by no means entirely controlled, by the society’s political class.  That consensus is what guides its collective decisions about funding or defunding technologies.
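For readers who find code easier to parse than charts, here is a minimal sketch of the same quadrant model in Python. The technologies, utility and cost scores, and threshold values are all invented for illustration; the point is the structure, not the numbers.

```python
# A toy version of the triage chart: each technology gets a utility score and a
# real (nonfinancial) cost score, both on arbitrary 0-100 scales, and the two
# thresholds stand in for the horizontal and vertical lines on the chart.
# All names and numbers below are invented for illustration.

def categorize(utility, cost, utility_threshold, cost_threshold):
    """Return the chart quadrant (A, B, C, or D) for a single technology."""
    useful = utility >= utility_threshold
    affordable = cost <= cost_threshold
    if useful and affordable:
        return "A"   # useful and affordable
    if affordable:
        return "B"   # affordable, but not worth the trouble
    if useful:
        return "C"   # worth having, but out of reach
    return "D"       # neither useful nor affordable

technologies = {
    "indoor plumbing":                  (90, 30),
    "heated underwear for chickens":    (5, 10),
    "orbital 30-minute pizza delivery": (60, 99),
}

for name, (utility, cost) in technologies.items():
    print(name, "->", categorize(utility, cost,
                                 utility_threshold=40, cost_threshold=70))
```

Move either threshold and the same entry lands in a different box; that, in miniature, is what the shifting lines on the following charts represent.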


With the coming of the industrial revolution, both of the lines shifted substantially from their previous position, as shown in the second chart. Obviously, the torrent of cheap abundant energy gave the world’s industrial nations access to an unparalleled wealth of resources, and this pushed the dividing line between what was affordable and what was unaffordable quite a ways over toward the right hand side of the chart. A great many things that had been desirable but unaffordable to previous civilizations swung over from category C into category A as fossil fuels came on line. This has been discussed at great length here and elsewhere in the peak oil blogosphere.

Less obviously, the dividing line between what was useful and what was useless also shifted quite a bit toward the bottom of the chart, moving a great many things from category B into category A. To follow this, it’s necessary to grasp the concept of technological suites. A technological suite is a set of interdependent technologies that work together to achieve a common purpose. Think of the relationship between cars and petroleum drilling, computer chips and the clean-room filtration systems required for their manufacture, or commercial airliners and ground control radar. What connects each pair of technologies is that they belong to the same technological suite. If you want to have the suite, you must either have all the elements of the suite in place, or be ready to replace any absent element with something else that can serve the same purpose.

For the purpose of our present analysis, we can sort out the component technologies of a technological suite into three very rough categories. There are interface technologies, which are the things with which the end user interacts—in the three examples just listed, those would be private cars, personal computers, and commercial flights to wherever you happen to be going. There are support technologies, which are needed to produce, maintain, and operate the interface technologies; they make up far and away the majority of technologies in a technological suite—consider the extraordinary range of technologies it takes to manufacture a car from raw materials, maintain it, fuel it, provide it with roads on which to drive, and so on. Some interface technologies and most support technologies can be replaced with other technologies as needed, but some of both categories can’t; we can call those that can’t be replaced bottleneck technologies, for reasons that will become clear shortly.

What makes this relevant to the charts we’ve been examining is that most support technologies have no value aside from the technological suites to which they belong and the interface technologies they serve. Without commercial air travel, for example, most of the specialized technologies found at airports are unnecessary. Thus a great many things that once belonged in category B—say, automated baggage carousels—shifted into category A with the emergence of the technological suite that gave them utility. Category A thus ballooned with the coming of industrialization, and it kept getting bigger as long as energy and resource use per capita in the industrial nations kept on increasing.

Once energy and resource use per capita peak and begin their decline, though, a different reality comes into play, leading over time to the situation shown in the third chart.


As cheap abundant energy runs short, and it and all its products become expensive, scarce, or both, the vertical line slides inexorably toward the left. That’s obvious enough. Less obviously, the horizontal line also slides upwards. The reason, here again, is the interrelationship of individual technologies into technological suites. If commercial air travel stops being economically viable, the support technologies that belong to that suite are no longer needed. Even if they’re affordable enough to stay on the left hand side of the vertical line, the technologies needed to run automated baggage carousels thus no longer have enough utility to keep them above the horizontal line, and down they drop into category B.
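Extending the same toy model, here is one way that suite effect might be sketched in code: support technologies are treated as having negligible utility of their own, and borrow the utility of the suite they serve for as long as that suite’s interface technology stays affordable. Every name and number here is invented; it’s a sketch of the mechanism, not a claim about real costs.

```python
# Toy extension of the triage model: a suite stays viable only while its
# interface technology remains affordable, and its support technologies borrow
# the suite's utility for as long as it lasts. All names and numbers are
# invented for illustration.

suites = {
    "commercial air travel": {
        "interface": ("passenger flights", 95),   # (name, real cost)
        "support":   ["baggage carousels", "ground control radar"],
    },
}

# utility each support technology would have on its own, outside any suite
standalone_utility = {"baggage carousels": 5, "ground control radar": 10}

def category_a(cost_threshold, utility_threshold=40):
    """Technologies still useful and affordable at a given cost threshold."""
    survivors = []
    for suite in suites.values():
        name, cost = suite["interface"]
        if cost <= cost_threshold:
            # suite is viable: interface and support technologies all stay in A
            survivors.append(name)
            survivors.extend(suite["support"])
        else:
            # suite collapses: support technologies fall back on their
            # negligible standalone utility and drop below the line
            survivors.extend(tech for tech in suite["support"]
                             if standalone_utility[tech] >= utility_threshold)
    return survivors

print(category_a(cost_threshold=100))  # the whole suite survives
print(category_a(cost_threshold=80))   # the suite is gone, and so is its support equipment
```

Run with a generous cost threshold, the whole suite survives; tighten it past the interface technology’s cost and the support technologies drop out with it, even though nothing about them has changed.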

That’s one way that a technology can drop out of use. It’s just as possible, of course, for something that would still have ample utility to cost too much in terms of real wealth to be an option in a contracting society, and slide across the border into category C. Finally, it’s possible for something to do both at once—to become useless and unaffordable at something like the same time, as economic contraction takes away both the ability to pay for the technology and the ability to make use of it.

It’s also possible for a technology that remains affordable, and participates in a technological suite that’s still capable of meeting genuine needs, to tumble out of category A into one of the others. This can happen because the costs of different technologies differ qualitatively, and not just quantitatively. If you need small amounts of niobium for the manufacture of blivets, and the handful of niobium mines around the world stop production—whether this happens because the ore has run out, or for some other reason, environmental, political, economic, cultural, or what have you—you aren’t going to be able to make blivets any more. That’s one kind of difficulty if it’s possible to replace blivets with something else, or substitute some other rare element for the niobium; it’s quite another, and much more challenging, if blivets made with niobium are the only thing that will work for certain purposes, or the only thing that makes those purposes economically viable.

It’s habitual in modern economics to insist that such bottlenecks don’t exist, because there’s always a viable alternative. That sort of thinking made a certain degree of sense back when energy per capita was still rising, because the standard way to get around material shortages for a century now has been to throw more energy, more technology, and more complexity into the mix. That’s how low-grade taconite ores with scarcely a trace of iron in them have become the mainstay of today’s iron and steel industry; all you have to do is add fantastic amounts of cheap energy, soaring technological complexity, and an assortment of supply and resource chains reaching around the world and then some, and diminishing ore quality is no problem at all.

It’s when you don’t have access to as much cheap energy, technological complexity, and baroque supply chains as you want that this sort of logic becomes impossible to sustain. Once this point is reached, bottlenecks become an inescapable feature of life. The bottlenecks, as already suggested, don’t have to be technological in nature—a bottleneck technology essential to a given technological suite can be perfectly feasible, and still out of reach for other reasons—but whatever generates them, they throw a wild card into the process of technological decline that shapes the last years of a civilization on its way out, and the first few centuries of the dark age that follows.

The crucial point to keep in mind here is that one bottleneck technology, if it becomes inaccessible for any reason, can render an entire technological suite useless, and compromise other technological suites that depend on the one directly affected. Consider the twilight of ceramics in the late Roman empire. Rome’s ceramic industry operated on as close to an industrial scale as you can get without torrents of cheap abundant energy; regional factories in various places, where high-quality clay existed, produced ceramic goods in vast amounts and distributed them over Roman roads and sea lanes to the far corners of the empire and beyond it. The technological suite that supported Roman dishes and roof tiles thus included transport technologies, and those turned out to be the bottleneck: as long-distance transport went away, the huge ceramic factories could no longer market their products and shut down, taking with them every element of their technological suite that couldn’t be repurposed in a hurry.

The same process affected many other technologies that played a significant role in the Roman world, and for that matter in the decline and fall of every other civilization in history. The end result can best be described as technological fragmentation: what had been a more or less integrated whole system of technology, composed of many technological suites working together more or less smoothly, becomes a jumble of disconnected technological suites, nearly all of them drastically simplified compared to their pre-decline state, and many of them jerry-rigged to make use of still-viable fragments of technological suites whose other parts didn’t survive their encounter with one bottleneck or another.  In places where circumstances permit, relatively advanced technological suites can remain in working order long after the civilization that created them has perished—consider the medieval cities that got their water from carefully maintained Roman aqueducts a millennium after Rome’s fall—while other systems operate at far simpler levels, and other regions and communities get by with much simpler technological suites.

All this has immediate practical importance for those who happen to live in a civilization that’s skidding down the curve of its decline and fall—ours, for example. In such a time, as noted above, one critical task is to identify the technological suites that will still be viable in the aftermath of the decline, and shift as much vital infrastructure as possible over to depend on those suites rather than on those that won’t survive the decline. In terms of the charts above, that involves identifying those technological suites that will still be in category A when the lines stop shifting up and to the left, figuring out how to work around any bottleneck technologies that might otherwise cripple them, and getting the necessary knowledge into circulation among those who might be able to use it, so that access to information doesn’t become a bottleneck of its own.

That sort of analysis, triage, and salvage is among the most necessary tasks of our time, especially for those who want to see viable technologies survive the end of our civilization, and it’s being actively hindered by the insistence that the only possible positive attitude toward technology is sheer blind faith. For connoisseurs of irony, it’s hard to think of a more intriguing spectacle. The impacts of that irony on the future, though, are complex, and will be the subject of several upcoming posts here.

Wednesday, November 26, 2014

Dark Age America: The Suicide of Science

Last week’s discussion of facts and values was not as much of a diversion from the main theme of the current sequence of posts here on The Archdruid Report as it may have seemed.  Every human society likes to think that its core cultural and intellectual projects, whatever those happen to be, are the be-all and end-all of human existence. As each society rounds out its trajectory through time with the normal process of decline and fall, in turn, its intellectuals face the dismaying experience of watching those projects fail, and betray the hopes so fondly confided to them.

It’s important not to underestimate the shattering force of this experience. The plays of Euripides offer cogent testimony of the despair felt by ancient Greek thinkers as their grand project of reducing the world to rational order dissolved in a chaos of competing ideologies and brutal warfare. Fast forward most of a millennium, and Augustine’s The City of God anatomized the comparable despair of Roman intellectuals at the failure of their dream of a civilized world at peace under the rule of law. 

Skip another millennium and a bit, and the collapse of the imagined unity of Christendom into a welter of contending sects and warring nationalities had a similar impact on cultural productions of all kinds as the Middle Ages gave way to the era of the Reformation. No doubt when people a millennium or so from now assess the legacies of the twenty-first century, they’ll have no trouble tracing a similar tone of despair in our arts and literature, driven by the failure of science and technology to live up to the messianic fantasies of perpetual progress that have been loaded onto them since Francis Bacon’s time.

I’ve already discussed, in previous essays here, some of the reasons why such projects so reliably fail. To begin with, of course, the grand designs of intellectuals in a mature society normally presuppose access to the kind and scale of resources that such a society supplies to its more privileged inmates.  When the resource needs of an intellectual project can no longer be met, it doesn’t matter how useful it would be if it could be pursued further, much less how closely aligned it might happen to be to somebody’s notion of the meaning and purpose of human existence.

Furthermore, as a society begins its one-way trip down the steep and slippery chute labeled “Decline and Fall,” and its ability to find and distribute resources starts to falter, its priorities necessarily shift. Triage becomes the order of the day, and projects that might ordinarily get funding end up out of luck so that more immediate needs can get as much of the available resource base as possible. A society’s core intellectual projects tend to face this fate a good deal sooner than other, more pragmatic concerns; when the barbarians are at the gates, one might say, funds that might otherwise be used to pay for schools of philosophy tend to get spent hiring soldiers instead.

Modern science, the core intellectual project of the contemporary industrial world, and technological complexification, its core cultural project, are as subject to these same two vulnerabilities as were the corresponding projects of other civilizations. Yes, I’m aware that this is a controversial claim, but I’d argue that it follows necessarily from the nature of both projects. Scientific research, like most things in life, is subject to the law of diminishing returns; what this means in practice is that the more research has been done in any field, the greater an investment is needed on average to make the next round of discoveries. Consider the difference between the absurdly cheap hardware that was used in the late 19th century to detect the electron and the fantastically expensive facility that had to be built to detect the Higgs boson; that’s the sort of shift in the cost-benefit ratio of research that I have in mind.

A civilization with ample resources and a thriving economy can afford to ignore the rising cost of research, and gamble that new discoveries will be valuable enough to cover the costs. A civilization facing resource shortages and economic contraction can’t. If the cost of new discoveries in particle physics continues to rise along the same curve that gave us the Higgs boson’s multibillion-Euro price tag, for example, the next round of experiments, or the one after that, could easily rise to the point that in an era of resource depletion, economic turmoil, and environmental payback, no consortium of nations on the planet will be able to spare the resources for the project. Even if the resources could theoretically be spared, furthermore, there will be many other projects begging for them, and it’s far from certain that another round of research into particle physics would be the best available option.

The project of technological complexification is even more vulnerable to the same effect. Though true believers in progress like to think of new technologies as replacements for older ones, it’s actually more common for new technologies to be layered over existing ones. Consider, as one example out of many, the US transportation grid, in which airlanes, freeways, railroads, local roads, and navigable waterways are all still in use, reflecting most of the history of transport on this continent from colonial times to the present. The more recent the transport mode, by and large, the more expensive it is to maintain and operate, and the exotic new transportation schemes floated in recent years are no exception to that rule.

Now factor in economic contraction and resource shortages. The most complex and expensive parts of the technostructure tend also to be the most prestigious and politically influential, and so the logical strategy of a phased withdrawal from unaffordable complexity—for example, shutting down airports and using the proceeds to make good some of the impact of decades of malign neglect on the nation’s rail network—is rarely if ever a politically viable option. As contraction accelerates, the available resources come to be distributed by way of a political free-for-all in which rational strategies for the future play no significant role. In such a setting, will new technological projects be able to get the kind of ample funding they’ve gotten in the past? Let’s be charitable and simply say that this isn’t likely.

Thus the end of the age of fossil-fueled extravagance means the coming of a period in which science and technology will have a very hard row to hoe, with each existing or proposed project having to compete for a slice of a shrinking pie of resources against many other equally urgent needs. That in itself would be a huge challenge. What makes it much worse is that many scientists, technologists, and their supporters in the lay community are currently behaving in ways that all but guarantee that when the resources are divided up, science and technology will draw the short sticks.

It has to be remembered that science and technology are social enterprises. They don’t happen by themselves in some sort of abstract space insulated from the grubby realities of human collective life. Laboratories, institutes, and university departments are social constructs, funded and supported by the wider society. That funding and support doesn’t happen by accident; it exists because the wider society believes that the labors of scientists and engineers will further its own collective goals and projects.

Historically speaking, it’s only in exceptional circumstances that something like scientific research gets as large a cut of a society’s total budget as it does today.  As recently as a century ago, the sciences received only a tiny fraction of the support they currently get; a modest number of university positions with limited resources provided most of what institutional backing the sciences got, and technological progress was largely a matter of individual inventors pursuing projects on their own nickel in their off hours—consider the Wright brothers, who carried out the research that led to the first successful airplane in between waiting on customers in their bicycle shop, and without benefit of research grants.

The transformation of scientific research and technological progress from the part-time activity of an enthusiastic fringe culture to its present role as a massively funded institutional process took place over the course of the twentieth century. Plenty of things drove that transformation, but among the critical factors were the successful efforts of scientists, engineers, and the patrons and publicists of science and technology to make a case for science and technology as forces for good in society, producing benefits that would someday be extended to all. In the boomtimes that followed the Second World War, it was arguably easier to make that case than it had ever been before, but it took a great deal of work—not merely propaganda, but actual changes in the way that scientists and engineers interacted with the public and met their concerns—to overcome the public wariness toward science and technology that made the mad scientist such a stock figure in the popular media of the time.

These days, the economic largesse that made it possible for the latest products of industry to reach most American households is increasingly a fading memory, and that’s made life a good deal more difficult for those who argue for science and technology as forces for good. Still, there’s another factor, which is the increasing failure of institutional science and technology to make that case in any way that matters.

Here’s a homely example. I have a friend who suffered from severe asthma. She was on four different asthma medications, each accompanied by its own bevy of nasty side effects, which more or less kept the asthma under control without curing it. After many years of this, she happened to learn that another health problem she had was associated with a dietary allergy, cut the offending food out of her diet, and was startled and delighted to find that her asthma cleared up as well.

After a year with no asthma symptoms, she went to her physician, who expressed surprise that she hadn’t had to come in for asthma treatment in the meantime. She explained what had happened. The doctor admitted that the role of that allergy as a cause of severe asthma was well known. When she asked the doctor why she hadn’t been told this, so she could make an informed decision, the only response she got was, and I quote, “We prefer to medicate for that condition.”

Most of the people I know have at least one such story to tell about their interactions with the medical industry, in which the convenience and profit of the industry took precedence over the well-being of the patient; no few have simply stopped going to physicians, since the side effects from the medications they received have been reliably worse than the illness they had when they went in. Since today’s mainstream medical industry makes so much of its scientific basis, the growing public unease with medicine splashes over onto science in general. For that matter, whenever some technology seems to be harming people, it’s a safe bet that somebody in a lab coat with a prestigious title will appear on the media insisting that everything’s all right; some of the time, the person in the lab coat is right, but it’s happened often enough that everything was not all right that the trust once reposed in scientific experts is getting noticeably threadbare these days.

Public trust in scientists has taken a beating for several other reasons as well. I’ve discussed in previous posts here the way that the vagaries of scientific opinion concerning climate change have been erased from our collective memory by one side in the current climate debate.  It’s probably necessary for me to reiterate here that I find the arguments for disastrous anthropogenic climate change far stronger than the arguments against it, and have discussed the likely consequences of our civilization’s maltreatment of the atmosphere repeatedly on this blog and in my books; the fact remains that in my teen years, in the 1970s and 1980s, scientific opinion was still sharply divided on the subject of future climates, and a significant number of experts believed that the descent into a new ice age was likely.

I’ve taken the time to find and post here the covers of some of the books I read in those days. The authors were by no means nonentities. Nigel Calder was a highly respected science writer and media personality. E.C. Pielou is still one of the most respected Canadian ecologists, and the book of hers shown here, After the Ice Age, is a brilliant ecological study that deserves close attention from anyone interested in how ecosystems respond to sudden climatic warming. Windsor Chorlton, the author of Ice Ages, occupied a less exalted station in the food chain of science writers, but all the volumes in the Planet Earth series were written in consultation with acknowledged experts and summarized the state of the art in the earth sciences at the time of publication.

Since certain science fiction writers have been among the most vitriolic figures denouncing those who remember the warnings of an imminent ice age, I’ve also posted covers of two of my favorite science fiction novels from those days, which were both set in an ice age future. My younger readers may not remember Robert Silverberg and Poul Anderson; those who do will know that both of them were serious SF writers who paid close attention to the scientific thought of their time, and wrote about futures defined by an ice age at the time when this was still a legitimate scientific extrapolation.

These books exist.  I still own copies of most of them, and any of my readers who takes the time to find one will discover, in each nonfiction volume, a thoughtfully developed argument suggesting that the earth would soon descend into a new ice age, and in each of the novels, a lively story set in a future shaped by the new ice age in question. Those arguments turned out to be wrong, no question; they were made by qualified experts, at a time when the evidence concerning climate change was a good deal more equivocal than it has become since, and the more complete evidence that was gathered later settled the matter; but the arguments and the books existed, many people alive today know that they existed, and when scientists associated with climate activism insist that they didn’t, the result is a body blow to public trust in science.

It’s far from the only example of the same kind. Many of my readers will remember the days when all cholesterol was bad and polyunsaturated fats were good for you. Most of my readers will recall drugs that were introduced to the market with loud assurances of safety and efficacy, and then withdrawn in a hurry when those assurances turned out to be dead wrong. Those readers who are old enough may even remember when continental drift was being denounced as the last word in pseudoscience, a bit of history that a number of science writers these days claim never happened. Support for science depends on trust in scientists, and that’s become increasingly hard to maintain at a time when it’s unpleasantly easy to point to straightforward falsifications of the kind just outlined.

On top of all this, there’s the impact of the atheist movement on public debates concerning science. I hasten to say that I know quite a few atheists, and the great majority of them are decent, compassionate people who have no trouble accepting the fact that their beliefs aren’t shared by everyone around them. Unfortunately, the atheists who have managed to seize the public limelight too rarely merit description in those terms.  Most of my readers will be wearily familiar with the sneering bullies who so often claim to speak for atheism these days; I can promise you that as the head of a small religious organization in a minority faith, I get to hear from them far too often for my taste.

Mind you, there’s a certain wry amusement in the way that the resulting disputes are playing out in contemporary culture. Even diehard atheists have begun to notice that whenever Richard Dawkins opens his mouth, a dozen people decide to give religion a second chance. Still, the dubious behavior of the “angry atheist” crowd affects the subject of this post at least as powerfully as it does the field of popular religion. A great many of today’s atheists claim the support of scientific materialism for their beliefs, and no small number of the most prominent figures in the atheist movement hold down day jobs as scientists or science educators. In the popular mind, as a result, these people, their beliefs, and their behavior are quite generally conflated with science as a whole.

The implications of all these factors are best explored by way of a simple thought experiment. Let’s say, dear reader, that you’re an ordinary American citizen. Over the last month, you’ve heard one scientific expert insist that the latest fashionable heart drug is safe and effective, while three of your drinking buddies have told you in detail about the ghastly side effects it gave them. You’ve heard another scientific expert denounce acupuncture as crackpot pseudoscience, while your Uncle Henry, who messed up his back in Iraq, got more relief from three visits to an acupuncturist than he got from six years of conventional treatment. You’ve heard still another scientific expert claim yet again that no qualified scientist ever said back in the 1970s that the world was headed for a new ice age, and you read the same books I did when you were in high school and know that the expert is either misinformed or lying. Finally, you’ve been on the receiving end of yet another diatribe by yet another atheist of the sneering-bully type mentioned earlier, who vilified your personal religious beliefs in terms that would probably count as hate speech in most other contexts, and used an assortment of claims about science to justify his views and excuse his behavior.

Given all this, will you vote for a candidate who says that you have to accept a cut in your standard of living in order to keep research laboratories and university science departments fully funded?

No, I didn’t think so.

In miniature, that’s the crisis faced by science as we move into the endgame of industrial civilization, just as comparable crises challenged Greek philosophy, Roman jurisprudence, and medieval theology in the endgames of their own societies. When a society assigns one of its core intellectual or cultural projects to a community of specialists, those specialists need to think, hard, about the way that  their words and actions will come across to those outside that community. That’s important enough when the society is still in a phase of expansion; when it tips over its historic peak and begins the long road down, it becomes an absolute necessity—but it’s a necessity that, very often, the specialists in question never get around to recognizing until it’s far too late.

Thus it’s unlikely that science as a living tradition will be able to survive in its current institutional framework as the Long Descent picks up speed around us. It’s by no means certain that it will survive at all. The abstract conviction that science is humanity’s best hope for the future, even if it were more broadly held than it is, offers little protection against the consequences of popular revulsion driven by the corruptions, falsifications, and abusive behaviors sketched out above. What Oswald Spengler called the Second Religiosity, the resurgence of religion in the declining years of a culture, could have taken many forms in the historical trajectory of industrial society; at this point I think it’s all too likely to contain a very large dollop of hostility toward science and complex technology. How the scientific method and the core scientific discoveries of the last few centuries might be preserved in the face of that hostility will be discussed in a future post.

Wednesday, November 19, 2014

Facts, Values, and Dark Beer

Over the last eight and a half years, since I first began writing essays on The Archdruid Report, I’ve fielded a great many questions about what motivates this blog’s project. Some of those questions have been abusive, and some of them have been clueless; some of them have been thoughtful enough to deserve an answer, either in the comments or as a blog post in its own right. Last week brought one of that last category. It came from one of my European readers, Ervino Cus, and it read as follows:

“All considered (the amount of weapons—personal and of MD—around today; the population numbers; the environmental pollution; the level of lawlessness we are about to face; the difficulty to have a secure form of life in the coming years; etc.) plus the ‘low’ technical level of possible development of the future societies (I mean: no more space flight? no more scientific discovery about the ultimate structure of the Universe? no genetic engineering to modify the human genome?) the question I ask to myself is: why bother?

“Seriously: why one should wish to plan for his/her long term survival in the future that await us? Why, when all goes belly up, don't join the first warlord band available and go off with a bang, pillaging and raping till one drops dead?

“If the possibilities for a new stable civilization are very low, and it's very probable that such a civilization, even if created, will NEVER be able to reach even the technical level of today, not to mention to surpass it, why one should want to try to survive some more years in a situation that becomes every day less bright, without ANY possibilities to get better in his/her lifetime, and with, as the best objective, only some low-tech rural/feudal state waaay along the way?

“Dunno you, but for me the idea that this is the last stop for the technological civilization, that things as a syncrothron or a manned space flight are doomed and never to repeat, and that the max at which we, as a species and as individuals, can aspire from now on is to have a good harvest and to ‘enjoy’ the same level of knowledge of the structure of the Universe of our flock of sheeps, doesen't makes for a good enough incentive to want to live more, or to give a darn if anybody other lives on.

“Apologies if my word could seem blunt (and for my far than good English: I'm Italian), but, as Dante said:

“Considerate la vostra semenza:
fatti non foste a viver come bruti,
ma per seguir virtute e canoscenza.”
 (Inferno - Canto XXVI - vv. 112-120)

“If our future is not this (and unfortunately I too agree with you that at this point the things seems irreversibles) I, for one, don't see any reason to be anymore compelled by any moral imperative... :-(

“PS: Yes, I know, I pose some absolutes: that a high-tech/scientific civilization is the only kind of civilization that enpowers us to gain any form of ‘real’ knowledge of the Universe, that this knowledge is a ‘plus’ and that a life made only of ‘birth-reproduction-death’ is a life of no more ‘meaning’ than the one of an a plant.

“Cheers, Ervino.”

It’s a common enough question, though rarely expressed as clearly or as starkly as this. As it happens, there’s an answer to it, or rather an entire family of answers, but the best way to get there is to start by considering the presuppositions behind it. Those aren’t adequately summarized by Ervino’s list of ‘absolutes’—the latter are simply restatements of his basic argument.

What Ervino is suggesting, rather, presupposes that scientific and technological progress are the only reasons for human existence. Lacking those—lacking space travel, cyclotrons, ‘real’ knowledge about the universe, and the rest—our existence is a waste of time and we might as well just lie down and die or, as he suggests, run riot in anarchic excess until death makes the whole thing moot. What’s more, only the promise of a better future gives any justification for moral behavior—consider his comment about not feeling compelled by any moral imperative if no better future is in sight.

Those of my readers who recall the discussion of progress as a surrogate religion in last year’s posts here will find this sort of thinking very familiar, because the values being imputed to space travel, cyclotrons et al. are precisely those that used to be assigned to more blatantly theological concepts such as God and eternal life. Still, I want to pose a more basic question: is this claim—that the meaning and purpose of human existence and the justification of morality can only be found in scientific and technological progress—based on evidence? Are there, for example, double-blinded, controlled studies by qualified experts that confirm this claim?

Of course not. Ervino’s claim is a value judgment, not a statement of fact.  The distinction between facts and values was mentioned in last week’s post, but probably needs to be sketched out here as well; to summarize a complex issue somewhat too simply, facts are the things that depend on the properties of perceived objects rather than perceiving subjects. Imagine, dear reader, that you and I were sitting in the same living room, and I got a bottle of beer out of the fridge and passed it around.  Provided that everyone present had normally functioning senses and no reason to prevaricate, we’d be able to agree on certain facts about the bottle: its size, shape, color, weight, temperature, and so on. Those are facts.

Now let’s suppose I got two glasses, poured half the beer into each glass, handed one to you and took the other for myself. Let’s further suppose that the beer is an imperial stout, and you can’t stand dark beer. I take a sip and say, “Oh, man, that’s good.” You take a sip, make a face, and say, “Ick. That’s awful.” If I were to say, “No, that’s not true—it’s delicious,” I’d be talking nonsense of a very specific kind: the nonsense that pops up reliably whenever someone tries to treat a value as though it’s a fact.

“Delicious” is a value judgment, and like every value judgment, it depends on the properties of perceiving subjects rather than perceived objects. That’s true of all values without exception, including those considerably more important than those involved in assessing the taste of beer. To say “this is good” or “this is bad” is to invite the question “according to whose values?”—which is to say, every value implies a valuer, just as every judgment implies a judge.

Now of course it’s remarkably common these days for people to insist that their values are objective truths, and values that differ from theirs objective falsehoods. That’s a very appealing sort of nonsense, but it’s still nonsense. Consider the claim often made by such people that if values are subjective, that would make all values, no matter how repugnant, equal to one another. Equal in what sense? Why, equal in value—and of course there the entire claim falls to pieces, because “equal in value” invites the question already noted, “according to whose values?” If a given set of values is repugnant to you, then pointing out that someone else thinks differently about those values doesn’t make them less repugnant to you.  All it means is that if you want to talk other people into sharing those values, you have to offer good reasons, and not simply insist at the top of your lungs that you’re right and they’re wrong.

To say that values depend on the properties of perceiving subjects rather than perceived objects does not mean that values are wholly arbitrary, after all. It’s possible to compare different values to one another, and to decide that one set of values is better than another. In point of fact, people do this all the time, just as they compare different claims of fact to one another and decide that one is more accurate than another. The scientific method itself is simply a relatively rigorous way to handle this latter task: if fact X is true, then fact Y would also be true; is it? In the same way, though contemporary industrial culture tends to pay far too little attention to this, there’s an ethical method that works along the same lines: if value X is good, then value Y would also be good; is it?

Again, we do this sort of thing all the time. Consider, for example, why it is that most people nowadays reject the racist claim that some arbitrarily defined assortment of ethnicities—say, “the white race”—is superior to all others, and ought to have rights and privileges that are denied to everyone else. One reason why such claims are rejected is that they conflict with other values, such as fairness and justice, that most people consider to be important; another is that the history of racial intolerance shows that people who hold the values associated with racism are much more likely than others to engage in activities, such as herding their neighbors into concentration camps, which most people find morally repugnant. That’s the ethical method in practice.

With all this in mind, let’s go back to Ervino’s claims. He proposes that in all the extraordinary richness of human life, out of all its potentials for love, learning, reflection, and delight, the only thing that can count as a source of meaning is the accumulation of “‘real’ knowledge of the Universe,” defined more precisely as the specific kind of quantitative knowledge about the behavior of matter and energy that the physical sciences of the world’s industrial societies currently pursue. That’s his value judgment on human life. Of course he has the right to make that judgment; he would be equally within his rights to insist that the point of life is to see how many orgasms he can rack up over the course of his existence; and it’s by no means obvious why one of these ambitions is any more absurd than the other.

Curiosity, after all, is a biological drive, one that human beings share in a high degree with most other primates. Sexual desire is another such drive, rather more widely shared among living things. Grant that the fulfillment of some such drive can be seen as the purpose of life, why not another? For that matter, why not more than one, or some combination of biological drives and the many other incentives that are capable of motivating human beings?

For quite a few centuries now, though, it’s been fashionable for thinkers in the Western world to finesse such issues, and insist that some biological drives are “noble” while others are “base,” “animal,” or what have you. Here again, we have value judgments masquerading as statements of fact, with a hearty dollop of class prejudice mixed in—for “base,” “animal,” etc., you could as well put “peasant,” which is of course the literal opposite of “noble.” That’s the sort of thinking that appears in the bit of Dante that Ervino included in his comment. His English is better than my Italian, and I’m not enough of a poet to translate anything but the raw meaning of Dante’s verse, but this is roughly what the verses say:

“Consider your lineage;
You were not born to live as animals,
But to seek virtue and knowledge.”

It’s a very conventional sentiment. The remarkable thing about this passage, though, is that Dante was not proposing the sentiment as a model for others to follow. Rather, this least conventional of poets put those words in the mouth of Ulysses, who appears in this passage of the Inferno as a damned soul frying in the eighth circle of Hell. Dante has it that after the events of Homer’s poem, Ulysses was so deeply in love with endless voyaging that he put to sea again, and these are the words with which he urged his second crew to sail beyond all known seas—a voyage which took them straight to a miserable death, and sent Ulysses himself tumbling down to eternal damnation.

This intensely equivocal frame story is typical of Dante, who delineated as well as any poet ever has the many ways that greatness turns into hubris, that useful Greek concept best translated as the overweening pride of the doomed. The project of scientific and technological progress is at least as vulnerable to that fate as any of the acts that earned the damned their places in Dante’s poem. That project might fail irrevocably if industrial society comes crashing down and no future society will ever be able to pursue the same narrowly defined objectives that ours has valued. In that case—at least in the parochial sense just sketched out—progress is over. Still, there’s at least one more way the same project would come to a screeching and permanent halt: if it succeeds.

Let’s imagine, for instance, that the fantasies of our scientific cornucopians are right and the march of progress continues on its way, unhindered by resource shortages or destabilized biospheres. Let’s also imagine that right now, some brilliant young physicist in Mumbai is working out the details of the long-awaited Unified Field Theory. It sees print next year; there are furious debates; the next decade goes into experimental tests of the theory, and proves that it’s correct. The relationship of all four basic forces of the cosmos—the strong force, the weak force, electromagnetism, and gravity—is explained clearly once and for all. With that in place, the rest of physical science falls into place step by step over the next century or so, and humanity learns the answers to all the questions that science can pose.

It’s only in the imagination of true believers in the Singularity, please note, that everything becomes possible once that happens. Many of the greatest achievements of science can be summed up in the words “you can’t do that;” the discovery of the laws of thermodynamics closed the door once and for all on perpetual motion, just as the theory of relativity put a full stop to the hope of limitless velocity. (“186,282 miles per second: it’s not just a good idea, it’s the law.”) Once the sciences finish their work, the technologists will have to scramble to catch up with them, and so for a while, at least, there will be no shortage of novel toys to amuse those who like such things; but sooner or later, all of what Ervino calls “‘real’ knowledge about the Universe” will have been learnt; at some point after that, every viable technology will have been refined to the highest degree of efficiency that physical law allows.

What then? The project of scientific and technological progress will be over. No one will ever again be able to discover a brand new, previously unimagined truth about the universe, in any but the most trivial sense—“this star’s mass is 1.000000000000000000006978 greater than this other star,” or the like—and variations in technology will be reduced to shifts in what’s fashionable at any given time. If the ongoing quest for replicable quantifiable knowledge about the physical properties of nature is the only thing that makes human life worth living, everyone alive at that point arguably ought to fly their hovercars at top speed into the nearest concrete abutment and end it all.

One way or another, that is, the project of scientific and technological progress is self-terminating. If this suggests to you, dear reader, that treating it as the be-all and end-all of human existence may not be the smartest choice, well, yes, that’s what it suggests to me as well. Does that make it worthless? Of course not. It should hardly be necessary to point out that “the only thing important in life” and “not important at all” aren’t the only two options available in discussions of this kind.

I’d like to suggest, along these lines, that human life sorts itself out most straightforwardly into an assortment of separate spheres, each of which deals with certain aspects of the extraordinary range of possibilities open to each of us. The sciences comprise one of those spheres, with each individual science a subsphere within it; the arts are a separate sphere, similarly subdivided; politics, religion, and sexuality are among the other spheres. None of these spheres contains more than a fraction of the whole rich landscape of human existence. Which of them is the most important? That’s a value judgment, and thus can only be made by an individual, from his or her own irreducibly individual point of view.

We’ve begun to realize—well, at least some of us have—that authority in one of these spheres isn’t transferable. When a religious leader, let’s say, makes pronouncements about science, those have no more authority than they would if they came from any other more or less clueless layperson, and a scientist who makes pronouncements about religion is subject to exactly the same rule. The same distinction applies with equal force between any two spheres, and as often as not between subspheres of a single sphere as well:  plenty of scientists make fools of themselves, for example, when they try to lay down the law about sciences they haven’t studied.

Claiming that one such sphere is the only thing that makes human life worthwhile is an error of the same kind. If Ervino feels that scientific and technological progress is the only thing that makes his own personal life worth living, that’s his call, and presumably he has reasons for it. If he tries to say that that’s true for me, he’s wrong—there are plenty of things that make my life worth living—and if he’s trying to make the same claim for every human being who will ever live, that strikes me as a profoundly impoverished view of the richness of human possibility. Insisting that scientific and technological progress are the only acts of human beings that differentiate their existence from that of a plant isn’t much better. Dante’s Divina Commedia, to cite the obvious example, is neither a scientific paper nor a technological invention; does that mean that it belongs in the same category as the noise made by hogs grunting in the mud?

Dante Alighieri lived in a troubled age in which scientific and technological progress were nearly absent and warfare, injustice, famine, pestilence, and the collapse of widely held beliefs about the world were matters of common experience. From that arguably unpromising raw material, he brewed one of the great achievements of human culture. It may well be that the next few centuries will be far from optimal for scientific and technological progress; it may well be that the most important thing that can be done by people who value science and technology is to figure out what can be preserved through the difficult times ahead, and do their best to see that these things reach the waiting hands of the future. If life hands you a dark age, one might say, it’s probably not a good time to brew lite beer, but there are plenty of other things you can still brew, bottle and drink.

As for me—well, all things considered, I find that being alive beats the stuffing out of the alternative, and that’s true even though I live in a troubled age in which scientific and technological progress show every sign of grinding to a halt in the near future, and in which warfare, injustice, famine, pestilence, and the collapse of widely held beliefs are matters of common experience. The notion that life has to justify itself to me seems, if I may be frank, faintly silly, and so does the comparable claim that I have to justify my existence to it, or to anyone else. Here I am; I did not make the world; quite the contrary, the world made me, and put me in the irreducibly personal situation in which I find myself. Given that I’m here, where and when I happen to be, there are any number of things that I can choose to do, or not do; and it so happens that one of the things I choose to do is to prepare, and help others prepare, for the long decline of industrial civilization and the coming of the dark age that will follow it.

And with that, dear reader, I return you to your regularly scheduled discussion of decline and fall on The Archdruid Report.

Wednesday, November 12, 2014

Dark Age America: The Hoard of the Nibelungs

Of all the differences that separate the feudal economy sketched out in last week’s post from the market economy most of us inhabit today, the one that tends to throw people for a loop most effectively is the near-total absence of money in everyday medieval life. Money is so central to current notions of economics that getting by without it is all but unthinkable these days.  The fact—and of course it is a fact—that the vast majority of human societies, complex civilizations among them, have gotten by just fine without money of any kind barely registers in our collective imagination.

One source of this curious blindness, I’ve come to think, is the way that the logic of money is presented to students in school. Those of my readers who sat through an Economics 101 class will no doubt recall the sort of narrative that inevitably pops up in textbooks when this point is raised. You have, let’s say, a pig farmer who has bad teeth, but the only dentist in the village is Jewish, so the pig farmer can’t simply swap pork chops and bacon for dental work. Barter might be an option, but according to the usual textbook narrative, that would end up requiring some sort of complicated multiparty deal whereby the pig farmer gives pork to the carpenter, who builds a garage for the auto repairman, who fixes the hairdresser’s car, and eventually things get back around to the dentist. Once money enters the picture, by contrast, the pig farmer sells bacon and pork chops to all and sundry, uses the proceeds to pay the dentist, and everyone’s happy. Right?

Well, maybe. Let’s stop right there for a moment, and take a look at the presuppositions hardwired into this little story. First of all, the narrative assumes that participants have a single rigidly defined economic role: the pig farmer can only raise pigs, the dentist can only fix teeth, and so on. Furthermore, it assumes that participants can’t anticipate needs and adapt to them: even though he knows the only dentist in town is Jewish, the pig farmer can’t do the logical thing and start raising lambs for Passover on the side, or what have you. Finally, the narrative assumes that participants can only interact economically through market exchanges: there are no other options for meeting needs for goods and services, no other way to arrange exchanges between people other than market transactions driven by the law of supply and demand.
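To make that textbook story and its hidden assumptions concrete, here is a minimal sketch of my own; the goods and the matching of wants are hypothetical, invented for illustration rather than taken from any textbook. In the barter version, each participant offers exactly one good, wants exactly one good, and can only trade through bilateral swaps, so a deal happens only if a complete chain of matching wants can be found.

```python
# A minimal sketch, with hypothetical goods, of the "double coincidence of
# wants" chain that the textbook barter story relies on. Each participant
# offers one good and wants one good; a trade happens only if every link
# in the chain can be matched up.

def find_barter_chain(participants, start, goal):
    """Depth-first search for a chain of swaps starting from `start` and
    ending with someone who offers `goal`."""
    def search(person, visited):
        my_offer = participants[person][0]
        for other, (offer, want) in participants.items():
            if other in visited or want != my_offer:
                continue  # `other` doesn't want what `person` has to trade
            if offer == goal:
                return [person, other]
            rest = search(other, visited | {other})
            if rest:
                return [person] + rest
        return None
    return search(start, {start})

# (offer, want) for each participant -- made-up goods for illustration
participants = {
    "pig farmer":     ("pork", "dental work"),
    "carpenter":      ("garage", "pork"),
    "auto repairman": ("car repair", "garage"),
    "hairdresser":    ("haircut", "car repair"),
    "dentist":        ("dental work", "haircut"),
}

print(find_barter_chain(participants, "pig farmer", "dental work"))
# ['pig farmer', 'carpenter', 'auto repairman', 'hairdresser', 'dentist']
# With money, no such search is needed: the pig farmer sells pork to anyone
# and simply pays the dentist in cash.
```

The point of the sketch is simply that the whole search problem exists only because the model forbids mixed livelihoods, foresight, and non-market arrangements in the first place.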

Even in modern industrial societies, these three presuppositions are rarely true. I happen to know several pig farmers, for example, and none of them are so hyperspecialized that their contributions to economic exchanges are limited to pork products; garden truck, fresh eggs, venison, moonshine, and a good many other things could come into the equation as well. For that matter, outside the bizarre feedlot landscape of industrial agriculture, mixed farms raising a variety of crops and livestock are far more resilient than single-crop farms, and thus considerably more common in societies that haven’t shoved every economic activity into the procrustean bed of the money economy.

As for the second point raised above, the law of supply and demand works just as effectively in a barter economy as in a money economy, and successful participants are always on the lookout for a good or service that’s in short supply relative to potential demand, and so can be bartered with advantage. It’s no accident that traditional village economies tend to be exquisitely adapted to produce exactly that mix of goods and services the inhabitants of the village need and want.

Finally, of course, there are many ways of handling the production and distribution of goods and services without engaging in market exchanges. The household economy, in which members of each household produce goods and services that they themselves consume, is the foundation of economic activity in most human societies, and still accounted for the majority of economic value produced in the United States until not much more than a century ago. The gift economy, in which members of a community give their excess production to other members of the same community in the expectation that the gift will be reciprocated, is immensely common; so is the feudal economy delineated in last week’s post, with its systematic exclusion of market forces from the economic sphere. There are others, plenty of them, and none of them require money at all.

Thus the logic behind money pretty clearly isn’t what the textbook story claims it is. That doesn’t mean that there’s no logic to it at all; what it means is that nobody wants to talk about what it is that money is actually meant to do. Fortunately, we’ve discussed the relevant issues in last week’s post, so I can sum up the matter here in a single sentence: the point of money is that it makes intermediation easy.

Intermediation, for those of my readers who weren’t paying attention last week, is the process by which other people insert themselves between the producer and the consumer of any good or service, and take a cut of the proceeds of the transaction. That’s very easy to do in a money economy, because—as we all know from personal experience—the intermediaries can simply charge fees for whatever service they claim to provide, and then cash in those fees for whatever goods and services they happen to want.
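As a rough back-of-the-envelope illustration, with fee rates invented for the sake of the arithmetic rather than drawn from any real supply chain, the ease of charging fees in a money economy means that cuts compound: even modest percentages taken by each intermediary in a chain leave the producer with a surprisingly small share of what the final consumer pays.

```python
# Back-of-the-envelope arithmetic with invented fee rates: the share of the
# consumer's dollar that actually reaches the producer after a chain of
# intermediaries each takes its cut of the money passing through.

def producer_share(fee_rates):
    """Multiply out the fractions left after each intermediary's cut."""
    share = 1.0
    for rate in fee_rates:
        share *= (1.0 - rate)
    return share

# Hypothetical chain: distributor 15%, retailer 30%, payment processor 3%,
# advertising platform 10%.
fees = [0.15, 0.30, 0.03, 0.10]
print(f"Producer keeps {producer_share(fees):.1%} of the retail price.")
# -> Producer keeps 51.9% of the retail price; the rest goes to intermediaries.
```

The same arithmetic, run in reverse, is part of what makes the money-free transaction described next so hard for an intermediary to get a grip on.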

Imagine, by way of contrast, the predicament of an intermediary who wanted to insert himself into, and take a cut out of, a money-free transaction between the pig farmer and the dentist. We’ll suppose that the arrangement the two of them have worked out is that the pig farmer raises enough lambs each year that all the Jewish families in town can have a proper Passover seder, the dentist takes care of the dental needs of the pig farmer and his family, and the other families in the Jewish community work things out with the dentist in exchange for their lambs—a type of arrangement, half barter and half gift economy, that’s tolerably common in close-knit communities.

Intermediation works by taking a cut from each transaction. The cut may be described as a tax, a fee, an interest payment, a service charge, or what have you, but it amounts to the same thing: whenever money changes hands, part of it gets siphoned off for the benefit of the intermediaries involved in the transaction. The same thing can be done in some money-free transactions, but not all. Our intermediary might be able to demand a certain amount of meat from each Passover lamb, or require the pig farmer to raise one lamb for the intermediary per six lambs raised for the local Jewish families, though this assumes that he either likes lamb chops or can swap the lamb to someone else for something he wants.

What on earth, though, is he going to do to take a cut from the dentist’s side of the transaction?  There wouldn’t be much point in demanding one tooth out of every six the dentist extracts, for example, and requiring the dentist to fill one of the intermediary’s teeth for every twenty other teeth he fills would be awkward at best—what if the intermediary doesn’t happen to need any teeth filled this year? What’s more, once intermediation is reduced to such crassly physical terms, it’s hard to pretend that it’s anything but a parasitic relationship that benefits the intermediary at everyone else’s expense.

What makes intermediation seem to make sense in a money economy is that money is the primary intermediation. Money is a system of arbitrary tokens used to facilitate exchange, but it’s also a good deal more than that. It’s the framework of laws, institutions, and power relationships that creates the tokens, defines their official value, and mandates that they be used for certain classes of economic exchange. Once the use of money is required for any purpose, the people who control the framework—whether those people are government officials, bankers, or what have you—get to decide the terms on which everyone else gets access to money, which amounts to effective control over everyone else. That is to say, they become the primary intermediaries, and every other intermediation depends on them and the money system they control.

This is why, to cite only one example, British colonial administrators in Africa imposed a house tax on the native population, even though the cost of administering and collecting the tax was more than the revenue the tax brought in. By requiring the tax to be paid in money rather than in kind, the colonial government forced the natives to participate in the money economy, on terms that were of course set by the colonial administration and British business interests. The money economy is the basis on which nearly all other forms of intermediation rest, and forcing the native peoples to work for money instead of allowing them to meet their economic needs in some less easily exploited fashion was an essential part of the mechanism that pumped wealth out of the colonies for Britain’s benefit.

Watch the way that the money economy has insinuated itself into every dimension of modern life in an industrial society and you’ve got a ringside seat from which to observe the metastasis of intermediation in recent decades. Where money goes, intermediation follows:  that’s one of the unmentionable realities of political economy, the science that Adam Smith actually founded, but was gutted, stuffed, and mounted on the wall—turned, that is, into the contemporary pseudoscience of economics—once it became painfully clear just what kind of trouble got stirred up when people got to talking about the implications of the links between political power and economic wealth.

There’s another side to the metastasis just mentioned, though, and it has to do with the habits of thought that the money economy both requires and reinforces. At the heart of the entire system of money is the concept of abstract value, the idea that goods and services share a common, objective attribute called “value” that can be gauged according to the one-dimensional measurement of price.

It’s an astonishingly complex concept, and so needs unpacking here. Philosophers generally recognize a crucial distinction between facts and values; there are various ways of distinguishing them, but the one that matters for our present purposes is that facts are collective and values are individual. Consider the statement “it rained here last night.” Given agreed-upon definitions of “here” and “last night,” that’s a factual statement; all those who stood outside last night in the town where I live and looked up at the sky got raindrops on their faces. In the strict sense of the word, facts are objective—that is, they deal with the properties of objects of perception, such as raindrops and nights.

Values, by contrast, are subjective—that is, they deal with the properties of perceiving subjects, such as people who look up at the sky and notice wetness on their faces. One person is annoyed by the rain, another is pleased, another is completely indifferent to it, and these value judgments are irreducibly personal; it’s not that the rain is annoying, pleasant, or indifferent, it’s the individuals who are affected in these ways. Nor are these personal valuations easy to sort out along a linear scale without drastic distortion. The human experience of value is a richly multidimensional thing; even in a language as poorly furnished with descriptive terms for emotion as English is, there are countless shades of meaning available for talking about positive valuations, and at least as many more for negative ones.

From that vast universe of human experience, the concept of abstract value extracts a single variable—“how much will you give for it?”—and reduces the answer to a numerical scale denominated in dollars and cents or the local equivalent. Like any other act of reductive abstraction, it has its uses, but the benefits of any such act always have to be measured against the blind spots generated by reductive modes of thinking, and the consequences of that induced blindness must either be guarded against or paid in full. The latter is far and away the more common of the two, and it’s certainly the option that modern industrial society has enthusiastically chosen.

Those of my readers who want to see the blindness just mentioned in full spate need only turn to any of the popular cornucopian economic theorists of our time. The fond and fatuous insistence that resource depletion can’t possibly be a problem, because investing additional capital will inevitably turn up new supplies—precisely the same logic, by the way, that appears in the legendary utterance “I can’t be overdrawn, I still have checks left!”—unfolds precisely from the flattening out of qualitative value into quantitative price just discussed.  The habit of reducing every kind of value to bare price is profitable in a money economy, since it facilitates ignoring every variable that might get in the way of making money off  transactions; unfortunately it misses a minor but crucial fact, which is that the laws of physics and ecology trump the laws of economics, and can neither be bribed nor bought.

The contemporary fixation on abstract value isn’t limited to economists and those who believe them, nor is its potential for catastrophic consequences. I’m thinking here specifically of those people who have grasped the fact that industrial civilization is picking up speed on the downslope of its decline, but whose main response to it consists of trying to find some way to stash away as much abstract value as possible now, so that it will be available to them in some prospective postcollapse society. Far more often than not, gold plays a central role in that strategy, though there are a variety of less popular vehicles that play starring roles in the same sort of plan.

Now of course it was probably inevitable in a consumer society like ours that even the downfall of industrial civilization would be turned promptly into yet another reason to go shopping. Still, there’s another difficulty here, and that’s that the same strategy has been tried before, many times, in the last years of other civilizations. There’s an ample body of historical evidence that can be used to see just how well it works. The short form? Don’t go there.

It so happens, for example, that in there among the sagas and songs of early medieval Europe are a handful that deal with historical events in the years right after the fall of Rome: the Nibelungenlied, Beowulf, the oldest strata of Norse saga, and some others. Now of course all these started out as oral traditions, and finally found their way into written form centuries after the events they chronicle, when their compilers had no way to check their facts; they also include plenty of folktale and myth, as oral traditions generally do. Still, they describe events and social customs that have been confirmed by surviving records and archeological evidence, and offer one of the best glimpses we’ve got into the lived experience of descent into a dark age.

Precious metals played an important part in the political economy of that age—no surprises there, as the Roman world had a precious-metal currency, and since banks had not been invented yet, portable objects of gold and silver were the most common way that the Roman world’s well-off classes stashed their personal wealth. As the western empire foundered in the fifth century CE and its market economy came apart, hoarding precious metals became standard practice, and rural villas, the doomsteads of the day, popped up all over. When archeologists excavate those villas, they routinely find evidence that they were looted and burnt when the empire fell, and tolerably often the archeologists or a hobbyist with a metal detector has located the buried stash of precious metals somewhere nearby, an expressive reminder of just how much benefit that store of abstract wealth actually provided to its owner.

That’s the same story you get from all the old legends: when treasure turns up, a lot of people are about to die. The Volsunga saga and the Nibelungenlied, for example, are versions of the same story, based on dim memories of events in the Rhine valley in the century or so after Rome’s fall. The primary plot engine of those events is a hoard of the usual late Roman kind,  which passes from hand to hand by way of murder, torture, treachery, vengeance, and the extermination of entire dynasties. For that matter, when Beowulf dies after slaying his dragon, and his people discover that the dragon was guarding a treasure, do they rejoice? Not at all; they take it for granted that the kings and warriors of every neighboring kingdom are going to come and slaughter them to get it—and in fact that’s what happens. That’s business as usual in a dark age society.

The problem with stockpiling gold on the brink of a dark age is thus simply another dimension, if a more extreme one, of the broader problem with intermediation. It bears remembering that gold is not wealth; it’s simply a durable form of money, and thus, like every other form of money, an arbitrary token embodying a claim to real wealth—that is, goods and services—that other people produce. If the goods and services aren’t available, a basement safe full of gold coins won’t change that fact, and if the people who have the goods and services need them more than they want gold, the same is true. Even if the goods and services are to be had, if everyone with gold is bidding for the same diminished supply, that gold isn’t going to buy anything close to what it does today. What’s more, tokens of abstract value have another disadvantage in a society where the rule of law has broken down: they attract violence the way a dead rat draws flies.

The fetish for stockpiling gold has always struck me, in fact, as the best possible proof that most of the people who think they are preparing for total social collapse haven’t actually thought the matter through, and considered the conditions that will obtain after the rubble stops bouncing. Let’s say industrial civilization comes apart, quickly or slowly, and you have gold.  In that case, either you spend it to purchase goods and services after the collapse, or you don’t. If you do, everyone in your vicinity will soon know that you have gold, the rule of law no longer discourages people from killing you and taking it in the best Nibelungenlied fashion, and sooner or later you’ll run out of ammo. If you don’t, what good will the gold do you?

The era when Nibelungenlied conditions apply—when, for example, armed gangs move from one doomstead to another, annihilating the people holed up there, living for a while on what they find, and then moving on to the next, or when local governments round up the families of those believed to have gold and torture them to death, starting with the children, until someone breaks—is a common stage of dark ages. It’s a self-terminating one, since sooner or later the available supply of precious metals or other carriers of abstract wealth is spread thin across the available supply of warlords. This can take anything up to a century or two before we reach the stage commemorated in the Anglo-Saxon poem “The Seafarer:” Nearon nú cyningas ne cáseras, ne goldgiefan swylce iú wáeron (No more are there kings or caesars or gold-givers as once there were).

That’s when things begin settling down and the sort of feudal arrangement sketched out in last week’s post begins to emerge, when money and the market play little role in most people’s lives and labor and land become the foundation of a new, impoverished, but relatively stable society where the rule of law again becomes a reality. None of us living today will see that period arrive, but it’s good to know where the process is headed. We’ll discuss the practical implications of that knowledge in a future post.