One Postmodern Sentence

The connections intellectuals make among the mooring notions of our age always surprise me; they demonstrate the malleability of the virtual circle we build from experience and fallible reasoning. It sometimes seems as though almost any claim may be advanced as reasonable. And that is an indictment of logic. Or it might be until one uses that same logic a bit more rigorously. It bears repeating that a logical expression is logical all the way down, so to speak. It does not self-destruct under close and patient scrutiny that calls the specious assertion into doubt. This healthy distrust of what Plato called the love of our own opinions was codified by Karl Popper as the principle of falsifiability. What Popper advised in respect to natural science should be broadened into a more generalized strategy for testing our judgments. We are all eager to fall in love with our own point of view, and for that reason we should skeptically examine it.

This conclusion was reinforced by a perusal of the introduction of David Bentley Hart’s “The Beauty of the Infinite: The Aesthetics of Christian Truth”. This is a serious scholarly work. Hart earned his master’s from Cambridge and his PhD from the University of Virginia and has won plaudits for his published works defending Christianity from atheism. I don’t intend to confront the book’s thesis. I can’t, having been defeated by the introduction alone. What stymied my effort was the writer’s presentation, which seemed composed of a fusillade of unsupported assertions so dense, vague, and twisted that no reasoned response could follow my effort to mine the text for comprehensibility. I find Hart’s sentences to be something other than discursive prose: their density, disregard for clarity, and floating referents place them closer to journaling than argument. It seems fair to mention at this point that Hart should expect nothing more, for the precondition of his theoretical position is a rejection of the very notion of disinterested rational appraisal. I cannot assert this all that strongly, for Hart assumes the assent he presumably pursues without resorting to anything as prosaic as warrant. It is unsurprising that Hart, a postmodernist to be sure, would advance claims supporting that peculiar position, but the question inevitably arises of why he bothers to take so much paper and ink and time to make his case. Finding his core argument is rather like attempting to find the best restaurant meal in East Timor: one has to negotiate unfamiliar terms, confusing directions, and unsubstantiated opinion in equal measure. It may be that his introduction covers ground already so familiar to his presumed reader that Hart feels no obligation to justify or even do more than sketch out his frame of reference. But surely this does a disservice to both the skeptic and the generalist. Postmodernism is hardly a fait accompli, and Hart must appreciate that faith is even less of one, and defensible aesthetic theories least of all. I finished his introduction with only a sense of Hart’s self-satisfaction to reward me for my effort. And that is a shame, for one expects an analysis to produce either assent or rejection, but the vapidity of his language allows the reader neither option. To disengage from Hart’s fundamentally postmodern assertions is to reject notions that float through this culture, but to assent is equally problematic, for one can hardly know what her assent commits her to. Hart, a postmodern theologian, advances positions so far beyond his justifications that one might be signing away her core convictions by accepting on faith any one of his conjectures.

So what can be done with a work whose terms are too squishy to define and whose “arguments” cannot be bothered with correspondence justifications that might be examined by his reader? How can one approach an analysis whose central axiom is that logical justifications are merely rhetorical devices used to exert power over the reader? Why his argument, such as it is, should escape the same charge is just one more mystery. What can we make of this gelatinous glob of aesthetics and religion, illuminated only by the black light of postmodern theory? How can we know before launching into chapter one that engagement with his ideas might prove worth the enormous effort his style demands? Or how can we throw down the book in disgust simply because it makes demands of us that seem absurd? Perhaps the dross is worth the gold.

To illustrate as well as resolve this problem, one that characterizes postmodern thought in general, I will take a single sentence of Hart’s introduction as an object of serious analysis with the goal of both framing the difficulty of more extensive assessment and of exposing the suppositions that underlie what can only be called the effluvium of words that comprise his effort. I ask you to accept that this one sentence is not exceptional but rather entirely indicative of Hart’s presentation. I chose it because it is somewhat independent of what precedes and follows it. Please trust that nothing in its vicinity adds to its intelligibility. I also argue that this sentence is what we might call a postmodern one in that in terms of structure and argument, it is entirely representative of a mode of thought that disdains but fails to replace reason and discursive language in what we can only assume is a rhetorical exercise. At any rate, that may be its intent. I really am not sure.

So here it is.

The great project of “modernity” (the search for comprehensive metanarratives and epistemological foundations by way of a neutral and unaided rationality, available to all reflective intellects, and independent of cultural and linguistic conditions) has surely foundered; “reason” cannot inhabit language (and it certainly has no other home) without falling subject to an indefinite deferral of meaning, a dissemination of signification, a play of nonsense and absence, such that it subsists always in its own aporias, suppression of sense, contradictions, and slippages; and “reason” cannot embody itself in history without at once becoming irrecoverably lost in the labyrinth of time’s interminable contingencies (certainly philosophy has no means of defeating such doubts).

 The greatest compliment one can give another is to listen attentively to her declarations, straining to comprehend them, and once understood, to weigh them, apply them to experience, and then engage in a deliberative conversation with their author so as to produce a consensual claim to truth, goodness, or beauty justified by some correspondence to a shared reality. But this sentence denies that possibility at the outset, for it rejects access to rationality by “reflective intellects,” and it questions reason’s role in making sense of the declarative contents of the sentence because reason must fall prey to the inadequacies of present culture, etymology, and manipulative intent. So even without careful explication, the sentence seems to deny itself what it intends to proclaim: a declarative truth. If modernity cannot succeed in such an effort, why should postmodernity (or Christianity or any aesthetic proclamation) be permitted to succeed? Metaphorically, this sentence seems to exemplify the old conundrum of the office worker advising the new hire, “Believe me when I tell you no one in this office can be trusted.”

But let us grant it the dignity of a serious effort at comprehension anyway. The sentence seems to me to make no fewer than fifteen truth claims.

 

  1. That “modernity” has committed to a “great project,” that project being a search for comprehensive metanarratives and epistemological foundations.
  2. That modernity argues for neutral and unaided rationality as the means to succeed in this great project.
  3. That modernity argues that such rationality is available to all reflective intellects.
  4. That such rationality claims to be independent of all cultural and linguistic conditions.
  5. That the effort outlined in contentions 1-4 has “surely foundered.”
  6. That reason cannot “inhabit” language.
  7. That reason “certainly has no other home” than language.
  8. That any attempt by reason to “inhabit” language subjects it to an indefinite deferral of meaning.
  9. That any attempt by reason to “inhabit” language subjects it to a dissemination of signification.
  10. That such a dissemination of signification is, in apposition, “a play of nonsense and absence.”
  11. That such a play causes reason’s attempt to inhabit language always to subsist in its own aporias.
  12. That such a play causes reason’s attempt to inhabit language always to subsist in suppression of sense.
  13. That such a play causes reason’s attempt to inhabit language always to subsist in contradictions and slippages.
  14. That reason’s attempt to “embody itself in history” must result in its “at once becoming irrecoverably lost in the labyrinth of time’s interminable contingencies.”
  15. That philosophy has “no means of defeating such doubts.”

 

So what are we to do with these fifteen truth claims? Old-fashioned modernity would ask us to do what Hart advises us cannot be done: examine each truth claim in light of its warrant to determine our judgment of its truth. But what would postmodernity advise as an alternative? My understanding has always been that it would ask us to enfold these truth claims as a jellyfish would a morsel of food: envelop them into our virtual circle, examine them for the bitter taste of personal contradiction, and either make them flesh of our flesh or spit them out like gristle. But Hart doesn’t seem so forthright in his expectation: he presents these claims as self-evident and universal historical truth, though what makes them so escapes me, as does the entire postmodern schema of justification. So how do we approach these fifteen declarations? His flabby syntax combines with the blizzard of truth claims to render parts of the sentence almost a poetic utterance, yet one chained to history and philosophy. Do we explicate it as historical truth or appreciate it as an aesthetic unity? If the former, do we take it to be a rhetorical rather than an analytic use of language, and if the latter, what aesthetic theory do we use to apprehend it? Given its discursive inadequacy, how much creative construction must the reader bring to the sentence to vivify it with meaning?

I can’t know how to answer these questions without some sense of what the sentence actually does mean. What follows is my best shot.

 

  1. It is a postmodern charge that modernity’s project is the search for metanarratives and epistemological foundations. This certainly is not how modernism would characterize its own foundations. The notion that the great paradigms of modernity are merely narratives is a pernicious one, for explanatory models are far more than merely stories we tell ourselves (for more, please see the blog entry of July 14, 2014, “Tall Tales”). Modernism accepts as axiomatic that the epistemological foundations of analysis are universal and rooted in human nature (for more, please see the blog entry of August 4, 2014, “The Tyranny of Rationality”).
  2. The only remarkable point of the sentence’s second contention is that rationality might be “aided.” Hart offers no referent in the vicinity of this sentence for that possibility, though one might expect him to champion divine guidance as the aid modernity rejects, and modernity certainly does reject it. It is equally a truism that modernity unapologetically offers rationality, along with rationally examined experience, as the means of finding truth in reality (for more, please see the blog entry of February 2, 2014, “Modernism’s Midwives”).
  3. Modernity does offer the universality of reason as a key to unlocking the secrets of reality. It is actually an odd charge to level against rationality, for, if true, it equalizes access to truth and goodness, an intention postmodernism often denies to modernity. In this as in other foundational convictions, modernism has often lost its way in practice, but the appeal to universal rights demonstrates the very self-correcting quality of rationality that Hart denies in this sentence.
  4. The alternative is pretty awful. If rationality were dependent on cultural or other conditions, postmodernism would have no reason to charge modernism with hypocrisy for asserting the superiority of some cultures, races, or genders. That modernists were hypocritical in their judgments of gender, culture, or race might assault either their suppositions that reason is the universal qualifier for judgment or their conclusion that some were more capable of exercising it than others. It cannot, as this sentence implies, call both into question. If Hart subscribes to the postmodern position that rationality is a product of experience and is therefore idiosyncratic, he ends in a relativism that justifies oppression and exploitation on grounds of cultural determinism and a subjectivism that robs individuals of the means to resolve their differences.
  5. This unexamined and unsupported contention is patently false. It seems to question the triumphs of science, the avatar of modernism. Let those who proclaim modernism’s failure try to forgo the benefits of its most muscular project: medical, theoretical, mathematical, technological. On what grounds can postmodernism indict natural science as the royal road to those truths about reality that its methodology supports? I am convinced that three possible sources of this disdain exist for the postmodernist. First, a narrow focus on the failures of the human sciences might deceive a poorly educated observer into considering the human sciences representative of science in general. Second, an historicist excavation of science’s overreach (of which the human sciences are the detritus) over the last two centuries might mistake the hubris of scientism for the actual activities of today’s natural scientist, which is rather like lumping Comte with Curie. The only areas in which such scientism survives today are the human sciences and popular cosmology. Third, postmodernists might champion some pseudo-science alternative as a means of justifying truth or goodness claims that true science as now defined simply cannot warrant. But in this position as well postmodernism seeks to have it both ways: share in the reflected glory of successful scientific endeavor through the use of arcane terms and fanciful theories without subjecting them to the rigors of the scientific method. True science can never resolve goodness issues; its method forbids judgments based on non-material and non-quantitative factors. Postmodernism, on the other hand, remains true to its roots in philosophy, human science, and pseudo-science and so embraces what seems almost a mockery of scientific trappings in its attachment to philology, psychology, sociology, constructivist education, and the embarrassments of Freudianism and Marxism. How charming that Hart’s weighty, macroscopic condemnation of two centuries of Western thought should merit merely a throwaway line without a shred of warrant! Such an act of intellectual vandalism seems only possible for one either preaching to the choir or from the throne of Peter. But a postmodernist religionist cannot embrace both the postmodern disdain for authority and the traditional religious elevation of it.
  6. I don’t know what this truth claim means. The metaphorical and poetical preferences of postmodernism are evidence either of intentional misdirection or a self-delusion about the nature of truth. I challenge any reader of Hart’s prose to explain discursively what it means for reason to inhabit language. I don’t know what the words mean. If he means that language can only roughly approximate reasoning, he may have a point, which is why the natural sciences use the language of mathematics to frame and instantiate their truth claims. If he means that non-mathematical language is insufficient to delineate the truths of reality, he certainly has a point, but what of it? The only language capable of revealing that inadequacy is the language that produces it, so unless he wishes to either find a better formulation or stop making truth claims about reality that are not mathematical, he should simply remind us that all such claims, including his own, are to be considered as only provisionally true, and not merely because of the insufficiencies of language but more so because of the uncertainty of their warrant. But then Hart seems not to worry about warrant.
  7. I don’t know what this claim means. It may restate for some reason the one above, or perhaps Hart is gauzily referencing the difficulty of communicating our truth claims to others as opposed to that of framing them for ourselves. Such problems were far more thoughtfully teased out by that early prophet of modernity, Francis Bacon, who appropriately saw them as serious but not fatal impediments to rational examination of experience.
  8. I also don’t know the meaning of this claim. In what sense is meaning “deferred”? What necessitates such deferral? What would resolve it? If language is inadequate to the task of discursively clarifying or communicating our understanding of reality, how can such an inadequacy ever be resolved? What are we to make of the language of this claim? If Hart is referencing the postmodern insistence that language creates rather than contains meaning, he should say so, though such a charge would destroy rather than defer the denotative function of language, something that may mark Hart’s prose but doesn’t seem to have fatally wounded discursive communication more generally. But I am only guessing as to what this claim means.
  9. I am also guessing on this one. If “dissemination” refers to a democratization of meaning based on culture or other group experience, then I suppose the problem references issues of validity suggested by declaration #4. How postmodernism could resolve such claims befuddles me. Modernism would subject them to logical analysis. Linguistic claims that purport to deconstruct the hidden power relationships between signifier and signified have opened language to a psychological critique that psychology as a science is not equal to, though it has given postmodernists that good old thrill of revelation of hidden agendas so characteristic of human science paradigms. If language is not a clear container into which we pour meaning, it is also not infinitely malleable or used inevitably as a weapon. While it is true that the claim to truth is also a claim to power, the power that ensues derives from the truth rather than the claim itself, for possession of truth confers the potential to choose goods accruing from the accurate comprehension of reality. Nothing nefarious or curtained in that. In the physics sense, power is merely the capability to accomplish work, the first stage of which is discerning what work is to be accomplished. It is no accident that every claim to truth is also a claim to power. It is no evil either.
  10. I don’t understand this contention. Do you? Perhaps a use of language even more figurative than the rest of the sentence?
  11. If verbalized truth claims are always attended by doubt, then I would agree with this claim, for the discursive power of ordinary language will never be up to the task of delineating any element of reality beyond a reasonable doubt. Whether such a shadow of a doubt should disqualify a truth claim from rational consideration is a valid epistemological question, as is the degree of certainty required to validate a truth claim. That such uncertainty accompanies a truth claim is hardly an indictment of a modernism that continually subjected its own claims to precisely this question, the resolution lying in the use of more sophisticated (mathematical) language for certain kinds of issues and increased levels of doubt attached to those less empirical. At any rate, such a charge is hardly either a new one or one modernism has not confronted. One might think a proper response to an unavoidable lack of certainty would be an effort to minimize doubt by increasing rational confidence. But perhaps figurative obfuscation is another way to go.
  12. I have no idea what this claim means. How is “sense” meant? If the thought refers to “sensation” as an alternative to reasoning, then this argument harks back to the entire idealist objection to rationality. This would constitute another macroscopic contention to which millions of words have already been devoted, one disputed by naturalists and their natural science heirs, an argument hardly resolved by another throwaway line that simply assumes without warrant what it also thinks everybody knows. If by “sense” Hart means our faculties for making sensations intelligible, then to argue that the only means we possess to frame our picture of reality does fatal violence to that reality suggests that we can neither say anything valid nor reason gainfully, in which case we can stop torturing ourselves with his writing. But I may be maligning Hart beyond his deserts by reading “sense” in either meaning, so I will only malign him for one more muddy expression.
  13. This claim is false, as neither empirical reasoning nor positivism nor even ordinary reasoning about experience is self-refuting. Let postmodernism refute empirical science and the interlocking subject disciplines it has developed, its use of mathematical language to extend reasoning into theory, and its production of working technology. In ordinary language, one can posit logical questions that conduce to self-contradiction, and critics make much of the difficulties posed by Heisenberg to science and Gödel to formal reasoning. If this kind of thing is what Hart refers to, let us accept his point for the sake of argument, though doing so means ignoring the success of science. The challenge then lies in positing an alternative to a rational sense of operations that proves more effective at resolving issues of truth, goodness, and beauty. Even more basically, it lies in satisfying Hume’s claim that even reason’s success in divorcing cause and effect ontologically would do nothing to dissuade us from its epistemological necessity. As I wrote recently (please see my blog entry “Theology’s Cloud of Unknowing” of October 27, 2014), my accepting determinism will not deter me from exercising what I take to be my free will. And as much as postmodernists condemn reason, they cannot forgo it, though Hart demonstrates how their disdain damages their use of it.
  14. If you understand this truth claim, please let me know. I can’t even hazard a guess as to what he means by this one. There seems a bit of a lilt in it, though. Poetry?
  15. If his mode of thought in this sentence can be charitably described as philosophical, this final assertion seems Hart’s strongest by far, for his presentation is bad philosophy and even worse discursive language. Of course, his own position is not the one he is assailing, and his triumphant flourish at the end of the sentence seems to be the claim that modernism has no means of erasing the doubts raised by postmodernists, doubts poorly alluded to in the sentence under examination. In this he is correct, for the failings of modernism are everywhere to be seen. The cadre of French theorists who nailed the jello of postmodernism to history’s wall in the 1970s were conducting what they saw as a post-mortem. Though poorly understood at the time, modernism had died on the battlefields of the First World War, or at least a variant of it had. The twentieth century was a miserable and sustained effort to resurrect from the ruins adequate warrants for claims to truth, goodness, and beauty in the face of what was widely perceived as modernism’s failures (for a fuller explanation of this event, please see my book “What Makes Anything True, Good, Beautiful? Challenges to Justification”). But modernism did not die, though it changed, in part in response to academics’ century-long flirtation with postmodernism. Its resilience lay in its relentless self-critique, one that began with the first pitiful attempts by rationalist philosophers to conduct another post-mortem, this one on the cataclysmic collapse of authority in the Reformation. Modernism certainly was never the intellectual bully its critics on the right (religionists nostalgic for a vanished authority) or left (postmodernists sensitive to its hypocrisies and inconsistencies) saw it as being. Rather, from its birth in the seventeenth century it was a desperate attempt to warrant truth, goodness, and beauty claims by some consensual means that might bind persons to each other with even a modicum of the force of lost authority, an attempt subject to endless self-criticism and relentless reductionism. Hart for once uses just the right language to describe philosophy’s failure to replace religious authority, for rationalism truly found no means to defeat the objections of its opponents, only to confront and minimize them. Reason and closely examined experience, the warrants modernism advanced, could never offer the certainty of Gnostic claims, nor in the messy amalgamation of Romantic and modernist warrants embraced by Victorians could they counter charges of bad faith and hypocrisy. The enduring appeal of scientism is a current example of modernism’s failure to project a defensible framework of moral justification, and religionists are quick to identify it as a threat, probably because it is rooted in a nostalgia for authority’s lost certainty, such nostalgia characterizing their position in general. Nor has modernism produced clarity in regard to evaluations of quality (for more on its problems with goodness issues, please see my blog entry of December 12, 2013, “Is Goodness Real?”). And it has not risen to the challenge of constructing an unambiguous aesthetic, though Kant laid a solid foundation for that effort. Modernism’s struggles are likely to continue. It sloughs off accretions and inconsistencies clumsily and often embarrassingly, as it did the hypocrisies of Victorianism that fueled the postmodern critique. Worse, its failures and its efforts to correct them dominate the zeitgeist.
Its demand for rational consistency and honest examination of experience open it to relentless criticism from reactionary advocates of authority and from post-structuralist champions of the virtual circle proclaiming their own warrants for truth and goodness claims. But neither of these approaches can succeed, for the history of the Reformation laid bare authority’s bankruptcy in the face of challenges from within its mode of warrant. Authority simply cannot resolve disagreement from other authority. Nor can the virtual circle model of postmodernism resolve conflicts among cultures or between individuals except by recourse to coercion. No critic of modernity has been harder on reason as warrant than modernists themselves, and this relentless application of the principle of falsifiability remains our best hope of finding truth in the face of the kind of obscurantism we see in Hart’s cited sentence.

 


Theology’s Cloud of Unknowing

No part of the quest for knowledge of truth, goodness, and beauty matters as much as the search for God. It is only in the last year that I have found myself comfortable in that effort for two reasons: I have resolved to my own satisfaction some difficulties inherent in religious commitment, and I am beginning to understand the categories used by religious apologists.

The greatest proof of God’s existence and nature I can discover is neither the ontological nor any of the cosmological proofs. It is the existence of free will in the face of determinism. Frankly, it puzzles me that this argument is not used more extensively by religious apologists. On the contrary, atheists and agnostics hurl scientific determinism in the face of those who wish to claim that God acts in the world. I have argued the futility of religionists disputing determinism in the observable universe (please see “Religionists Fighting the Wrong Battle” of July 6, 2014). It seems a fool’s errand to deny determinism, for that would demand denying the truth of scientific discoveries based upon it. But these are pretty difficult to repudiate since they include not only the eerie correlation between mathematics and empirical research, but also the amazing interlocking bases in the truth claims of all the natural science disciplines. And don’t forget the deal breaker: the technological marvels that science has given mankind. Put simply, to deny determinism in the physical world is to deny that science works. Those who wish to argue that God acts in the world must refute the counterargument that the world unwinds itself in completely predictable fashion, such predictability constituting the lodestone of all scientific endeavor. Now at this point it might seem that I am switching sides, for the argument just given is the atheistic one: God’s action cannot be found in reality because determinism is irrefutable. Allow me to spring my trap now. The stronger you make the determinism case, the more you also make the case that God does indeed act upon reality. For to argue is to choose a side. And to choose is implicitly to deny determinism in favor of free will. Not even the most committed scientist can deny that she chooses her field of study, her theoretical and experimental efforts, and the conclusions she draws from them. The greatest refutation of cosmic determinism is our own sense of freedom.

Now I will confess that I was stymied at this point for the last decade or so, dismissing this sense of freedom as an error and a self-delusion, the same kind of error we make when we trust our direct perception of sense data or when we assume the virtual circle we create from applying reason to sense data is reality itself. I dismissed our sense of free will as just another hiccup in the epistemological/ontological linkage.

Only it isn’t. I am perfectly willing to relinquish any claim to free will, at least in the abstract. Logically, I can hardly do otherwise, for every libertarian and compatibilist argument that attempts to reconcile determinism with free will has failed. As we are indubitably material substances, I am perfectly willing to accept that we are as determined in our choices as the most inert object of scientific inquiry. But no scientist committed to such an inquiry would be able to reconcile her trust in determinism with the simple truth that she cannot avoid feeling free to commit to that trust. The great mystery is not that we are determined but that we feel free despite our knowledge to the contrary. I can no more stop struggling over my choices of what to judge true, good, and beautiful than I can stop my own heart from beating. My brain seems designed to recognize the natural freedom that lies at the center of my humanity, just as it seems compelled to exercise the preferential freedom involved in weighing choices as it yearns for the flourishing that accrues to wise action (for more on these three levels of freedom, please see my blog entry of November 20, 2013, “Our Freedom Fetish”).

Now this truth leads me to one of two conclusions. Either I truly am free, and along with others like me therefore the only free things in a deterministic universe, or I merely feel free as a condition of my own consciousness along with others like me, also demonstrating some odd uniqueness of human nature impossible for other material substances in the universe to duplicate (so far as we know and with a minor caveat for very modest choice-making among higher mammals). I have come to realize that it hardly matters which of these options pertains. Either true freedom or the inescapable sense of it serves as a proof of human uniqueness. Granted, the notion that human beings alone among created matter are actually free would argue for the existence of the soul and place us in contiguity to God. But the other option also works. Even if my perceived freedom is an illusion, one has to ask why that particular illusion? Why can I not escape moral responsibility for my judgments? I could tell myself my options are limited by heredity or environment, but that would do nothing to remove either my sense of moral responsibility or of culpability for wrong choices. C. S. Lewis once remarked that our sense that reality is unfair is proof enough that we possess some sense of divine justice, but I would argue that such an understanding rests on a prior sense of what is due, and even such a vague moral outlook is equally convincing evidence of our uniqueness. So it hardly matters whether we are truly free or err in our sense of moral freedom. Human beings are choice-making machines, yet it hardly makes evolutionary sense that we expend so much energy agonizing over illusory choices when instinct would prove a far more efficient director of our actions. We don’t live in the kind of world many religionists would prefer: one in which everything operates directly on God’s orders, resulting in a miraculous and therefore incoherent reality that would frustrate any rational agent’s attempts to choose well. So we live neither in a world where we do not feel free nor in one where everything else does. We see two foundational oddities at work here: our sense of freedom and reality’s enslavement to determinism. The clincher is the third oddity that marks the connection between the two: the way these antitheses work together to bestow upon us a sense of rationality that guides our moral choosing. Nothing in the strongest case to be made for determinism forbids God’s action in the one area of reality we cannot help feeling exempt from determinism: our sense of moral freedom. Both this sense and reality’s determinism seem signs of the kind of Creator who choreographed the dance between the universe’s determinism and our ability to make choices in it.

As Kant said, the starry heavens above me and the moral law within me. That point of view might seem irrefutable, at least until one reads Karen Armstrong’s “The Case for God.” Her richly sourced investigation (I count 374 entries in her bibliography) makes a rather different argument: we can know nothing of God’s nature using reason or reasoned experience. Whatever we learn entails more negative capability than positive knowledge. It is a curious argument for several reasons.

First, it is odd that such a claim is structured as an argument. Armstrong traces a long tradition of mistrusting ratio (reason) as a means of comprehension of spiritual reality, though she acknowledges its success in the kinds of endeavors ordinary life hands us. She prefers muthos (myth) as a means of religious knowledge. This sort of effort does not merely call for a rejection of reason as a means of knowing. Armstrong seems to think its success requires real affronts to our rational capacity; disorientation, contradiction, paradox, koans, and self-neglect are her route to God, one that deliberately frustrates the reasoning we apply to the rest of our experience. She does so for the very good reason that nothing else in our existence bears the slightest resemblance to the ineffability of the divine, and our natural inclination to use the tools of ordinary knowing tends to reduce God to something more familiar and pedestrian, an idol. Her exhaustive historical account shows that reduction to be a constant temptation for religionists, one quite understandable since she acknowledges reason’s central role in human activity. Her approach owes something to Kant’s aesthetic theories, and in her conclusion, Armstrong explicitly compares religion and art. Kant thought aesthetic reasoning to be fundamentally different from practical reasoning because it recognizes the unique quality of aesthetic objects that exist for neither practical use nor classification. Armstrong makes a parallel argument for our thinking about God, saying that we cannot regard God as a being, for that would mistakenly place the divine in a class with other beings. She differs from Kant only in arguing that we are incapable of thinking about God at all. For that reason she also favors what might be seen as imaginative alternatives to reason: myth, metaphor, analogy, poetry, visual arts, and music. She devotes a good deal of attention to these qualities in the holy texts of the world’s religions. The third route to religious knowledge Armstrong highlights involves the importance of will: commitment, ritual, prayer, altruism as an antidote to egoism, and meditation.

She charges Christianity with two historical rational errors. The first came with efforts to standardize Christian doctrine in the first few centuries after Christ. Old Testament writings and New Testament candidates for orthodoxy were gradually aligned so as to give logical force to claims for Jesus’ divinity, something Armstrong argues was never implicit in earlier Christianity. Even so, she charges that interpreting scripture as historical and inerrant truth was only made normative after the Enlightenment. Like other religious apologists who view science as an affront to religion (please see the blog posting on “Tao and the Myth of Religious Return” of October 13, 2014), Armstrong sees religious fundamentalism as a defensive response to the aggressive assaults of positivist science. Interestingly, she argues that this response has distorted and threatens to destroy religion since such a defense attempts to rationalize religious belief and place it on an equal footing with other means of warrant more suited to practical reasoning than theology (please see “The Latest Creationism Debate” of February 16, 2014). In any case, she recognizes neither modernity as an ad hoc response to the self-destruction of authority in the Reformation nor postmodernism as a fundamental challenge to religious faith, though to her credit she does see the human sciences as a threat to contemporary religion if only because the clergy are so eager to wrap themselves in the reflected glory of science. I must add that the same motive moves the human sciences in their imitation of the hard sciences (for more, please see my blog posting of February 9, 2014, “The Calamity of the Human Sciences”).

Other facets of Armstrong’s analysis are also troubling. First, her argument about God’s creation of the universe ex nihilo indicates that this “invention” of theology somehow negates any possibility for inferring the nature of the creator from the creation, but why should that be so? Both St. Paul and St. Augustine make explicit that we can indeed rationally infer something of God from the nature of the universe, and the assumption that it differs from the divinity that made it does nothing to invalidate that connection, so long as we never forget that we can only draw imperfect conclusions from imperfect reasoning applied to an imperfect creation. Second, she raises but hardly settles the issue of what the alternative approaches to religion she champions can warrant. She repeatedly argues that religious faith is useful as a source of comfort in the face of misfortune and death, that believers have found a life of altruism to be richer than one of self-centeredness. But she never argues that muthos reveals or can reveal any real truths about the nature of divinity or morals, nor that the pragmatic benefits of religion are anything other than a gratifying illusion. Third, Armstrong repeatedly fails to distinguish between muthos as an extension of reason and as an alternative to it, citing testimony from thinkers as divergent as Thomas Aquinas and John Calvin, Aristotle and Paul Tillich. This is quite a crucial question that characterizes two wildly different traditions in all organized religions, but Armstrong’s eagerness to advance her case blinds her to the distinction. My own sense of faith is that its proper role is to extend reason to the corona of uncertain truth claims we simply cannot warrant with confidence (please see the blog entry of September 11, 2013, “Religion and Truth”). I also doubt that what Armstrong recommends can be accomplished, for we are too reliant on reason to make sense of existence to ignore its dictates in any one sphere of activity, especially one so central as theology (for more on this, please see my blog entry for August 4, 2014, “The Tyranny of Rationality”). Further, it seems that the only way Armstrong can claim that muthos is equal or superior to the kinds of truths ascertainable by reason and reasoned experience is to warrant it in a purely coherence sense if only because the kinds of intuitions such efforts justify are so deeply personal. But a coherence warrant for a correspondence truth contains the seeds of its own disintegration (please see the blog entry of January 12, 2014, “Can Belief Be Knowledge?”) as well as the grossest sort of intolerance for differing interpretations. I find it deeply disturbing that in her entire analysis, Armstrong never once considers the power of authority as warrant for truth and goodness claims, but instead seems to validate psychological need as a sufficient justification for embracing the truth of religious claims. It seems too obvious to mention, but since she doesn’t, I will: people embrace all sorts of things regardless of the truth in pursuit of psychic balm. I hesitate to charge her with bad faith, but if innocent of that charge, she surely is guilty of sloppy categorization, for the question of whether faith supplements or supplants reason is one of the core questions of theology. Adherents of either tradition would surely resent being lumped with their opponents as Armstrong repeatedly does.

As one who has struggled for decades with the apparent irrationality of religious belief, I found perhaps too much comfort in Armstrong’s assertions that the core texts of religious dogma were never meant to provide rational warrant for religious faith, that their power lay in some allegorical, analogical, mythical, or poetic meaning (for more on the problems such a view engenders, please see my posting of October 2, 2013, “The Problem of Metaphor in Religion”). The message I take from such an argument is that any search for correspondence warrant stronger than authority in primal religious texts is doomed to failure, and that any exegesis is as much a creative as an explanatory endeavor.

So we are left with the plodding work of inference based on the nature of creation and the moral sense that shapes human nature. Perhaps Armstrong is correct in her central contention that we can know no essentials of the deepest mystery and pervading immanence of the Creator, but our minds seem ordered by both the determinist nature of creation and our unique sense of freedom to make the attempt.


Tao and the Myth of Religious Return

I’ve noticed an odd theme running through conversations and my reading over the last few months as I seek clarity on the nature of religious knowledge. Setting aside psychological, pragmatic, and utilitarian arguments for belief and focusing instead on how believers justify the core claims of their faith, I’ve found a surprisingly consistent common thread, an historical narrative that parallels the loss of Eden in Genesis. Only the serpent in this garden is science.

This is not just a version of the bumper-sticker mentality: “The Bible says it/I believe it/That settles it.” It does not stoop to denying the determinism that underpins the scientific enterprise, which is an affront to reason as well as science (please see “Religionists Fighting the Wrong Battle,” blog posting of July 6, 2014). Nor does it resemble the misguided attempt to establish some parity between the reliability of religious and scientific methodologies (see “Latest Creationism Debate,” blog posting of February 16, 2014), an effort foredoomed to failure. Its attack is far more subtle, respectable, and powerful. These mythmakers deserve a thoughtful response.

Perhaps the most impressive phrasing was given by Alasdair MacIntyre in his magisterial work on ethics, “After Virtue.” C.S. Lewis covers some of the same ground in his most direct polemic, “The Abolition of Man.” Chesterton, Newman, Eliot, Tolstoy, Solzhenitsyn, Maritain, and a host of other very respected authors make their own versions of the same case, each differing in some details but all agreeing in essentials.

The story they tell is this. Something vital has been lost to culture, stolen by the revolution in thought begun by Descartes at the beginning of the seventeenth century. His attempt to establish objectivity and autonomy for our pursuit of knowledge was misguided hubris that launched the scientific enterprise and the Enlightenment, which in toto have rained catastrophe on western culture. Our fading hope for reprieve can only lie in a return to traditional values informed by religious truth, rejection of materialism, and repudiation of scientific theories of man.

Altogether, it is a good story. Some of it is even true.

The first thing to notice is precisely that it is a story, one with the requisite moral. In fact, it is a very old story, as old as the Epic of Gilgamesh and Noah’s flood. It is the story of Eden, of the Pharisees and Jesus, of Augustine’s two cities. Equally telling, it is the story of Plato’s cave, of Aeneas, and of Lewis’s beloved Norse sagas. The tale of the wrongly chosen path and of human hubris is both an archetype and a touchstone. It informed the entire Romantic era’s love of all things medieval. It inspires the young through tales of Atlantis as it characterizes their grandfathers’ fond recollections of misspent youth. It surprised me to find so much unanimity among philosophers, theologians, cultural commentators, and poets about the centrality of narrative to an understanding of truth, at least until I recalled how saturated they were by Romanticism as it was filtered through the artifices of the Victorian era and how antagonistic they were to the discursive language of science.

But pegging its roots does not dispute its truth. And it goes without saying that nearly all modernist literature of the first half of the twentieth century was colored by just this sense of loss and diminishment. My issue is not with the loss per se. It is with the nature of that loss.

For the regret expressed by the mythmakers is rooted in historical and ethical generalizations that cannot face real scrutiny. I count seven serious errors in their analysis, any one of which would prove a fatal blow to their version of events.

 1. It is clearly untrue that there was some homogeneous value system that scientific thinking ultimately attacked and is in the process of destroying. What C.S. Lewis calls “the Tao” as a shorthand for traditional values was no more coherent than the lost America conservatives wish to resurrect based on wholesome television series of the 1950s. No single moral system characterized world or western culture before the scientific revolution. One need only consider the challenge medievalism posed to classical culture to see how fragmented western ethical history was in the anno Domini era, not to mention in other parts of the world. Probably the most sourced of the works I’ve read recently is Michael Aeschliman’s “The Restitution of Man,” which marshals thinkers as diverse as Cicero and Samuel Johnson to its cause. That they would have been surprised to find their views lumped together would be an understatement. The deeply religious authors who present this myth of moral unanimity need hardly have looked beyond their own Christian faith for disproof of their contentions, for the bloodbath of the Reformation is sufficient proof that no moral position went unchallenged during that miserable era when religions warred over divergent moral outlooks. What could they be thinking to claim otherwise?

I have yet to see a straightforward answer to this question from any of these thinkers, but I think I can provide one that they conspicuously avoid supplying. While we see no less moral controversy before Descartes than we’ve seen since, the grounds of the argument have shifted. The unanimity was not in the truth and goodness claims offered by pre-modern thinkers. It was in their mode of justification. What writers like MacIntyre and Chesterton wish to return to is the power of authority as a warrant (granted, they locate the source of that authority in different places, for MacIntyre the culture and for Chesterton dogma, but both revere tradition). It was authority that the eighteenth century attacked and defeated, something only made possible by the glaring deficiencies religious authority made manifest during the awful decades of the Reformation (for more on those deficiencies, please see my series of postings from January 2014).

2. These critics of modern science treat its rise as an unprovoked challenge to traditional values rather than a desperate attempt to find alternatives following their collapse in the Reformation (for more, please see my posting “Modernism’s Midwives” of February 2, 2014). But by ignoring the causes for, say, Descartes’ efforts to find consensual warrants for truth claims amidst the ruins of the French Reformation, they also overestimate his success and underrate the power of later attacks on his method. One can hardly blame writers like Chesterton for missing the postmodern revolt that was emerging in his own time. Perhaps he might have seen more clearly how modernism’s warrants, reason and closely examined experience, were assaulted by their very modes of analysis in ways that authority could never withstand if he had realized that the tradition he most revered was authority itself. I think a critic as brilliant as Lewis would have recognized, and abjured, the postmodern revolt on modernism and indeed might have then traced back the roots of his unease, but his death in 1963 came before the brilliant formulations of postmodernism that mainly emerged in the 1970s, themselves explaining events that had been sorting themselves out since the turn of the twentieth century (for more on this process, please see my book “What Makes Anything True, Good, Beautiful? Challenges to Justification”). Why MacIntyre, writing in 1984, failed to see it befuddles me. By rooting values and moral duties in culture, he seems to find agreement with postmodernism, though how he could avoid the charge of cultural relativism that tarnishes their arguments escapes me too. I assume the nostalgic mythologizers derived what they took to be proof from yet another error, but one that has deep truths buried within.

 3. They assumed that the “science” that emerged from the birth of empiricism in the seventeenth century is synonymous with modern science, so the boasts of a Bacon or Comte might be seen as proofs of scientistic hubris. This issue requires a bit of teasing out, though. First, the entire period since the nineteenth century has been a continuing effort to refine what counts as valid scientific experience. This entire effort has been reductive in the extreme, rooting out pseudo-sciences and outliers as it builds disciplinary paradigms and establishes links across fields of study. No one can deny that the early proponents of empirical processes were guilty of hubris, but one glory of their method of warrant is that it is self-correcting, and the boasts of early natural science have long been muted as science has matured in the last century. No one who understands true science can argue today that its efforts are guided by any greater value than respect for truth. On all other values, true science must remain mute for the simple reason that its focus on material, measurable, and mathematical substance provides no means to warrant moral claims. Had its critics clearly differentiated truth claims, which science handles exceedingly well within its sphere of competence, from goodness claims, they would recognize that morality was safe from science.

But we might pardon them for their error, especially when we see how slow science itself has been to learn its own limits. In this, the mythmakers have their strongest point, for the human sciences have proved guilty of all their charges: hubristic, value-laden, misleading, and a threat to every other means of knowing (for more, please see “The Calamity of the Human Sciences” blog entry of February 9, 2014).

4. But the mythologists here make their fourth mistake in that they don’t seem to distinguish the human from the natural sciences. The former justify all their charges and have done so since the Enlightenment first championed “the science of man” as an extension of “the science of nature.” Their failure to distinguish the two might lead us to the conclusion that these writers are just a bunch of old men who yearn for the good old days. There’s some truth in that, as there is power in their charge that the “human sciences” are neither science nor humanizing. But any natural scientist could have told them that. Real science resents “soft science” basking in the reflected glow of true science at least as much as traditionalist humanists do.

But it is not likely that academics trained in the arts and humanities would seek out the counsel of the college of sciences. For the first two-thirds of the twentieth century, they attempted a flanking attack that drew its power from some of the more gruesome scientific accomplishments of that difficult time. This tactic might be called the “Mary Shelley” argument: that natural science freed from moral restraint would create abominations. It did. The killing fields of the Great War faded into nightmare only to be replaced by the genocides that followed, and then by total war and the specter of the mushroom cloud. We may be too close to that era to appreciate how powerfully these prophecies affected culture during the Cold War. Scientistic dystopias of the “Brave New World” or “1984” variety may have seemed possible, even likely, but history has shown them to be the fifth mistake of the reactionary mythmakers. 5. The technology produced by the scientific revolution has not diminished human flourishing; the consensus judgment is that it has improved life. At any rate, the technologies that natural science has wrought are so woven into the fabric of world culture today that it is far more difficult to imagine a successful Luddite rebellion now than it would have been a century ago.

The mythmakers were more perceptive in tracing some of the cultural products of a technocentric culture. Perhaps it was natural that they would characterize the popular view of science in terms reminiscent of the laity and the clergy. After all, this was their preferred social structure. They were correct in seeing the layman as befuddled and overawed by the new priesthood of scientists, viewing their accomplishments as equally mysterious and inexplicable. This credulity is a major motive for the human sciences’ efforts to ape the terminology and training of the natural sciences, though, of course, without their successes. It is perhaps more the laymen than the scientists who merit the charge of scientism, for an exaggeration of the capabilities of science analogous to magic can only succeed for the outsider. No practitioner of modern natural science could perpetuate such a hoax from within the discipline. Coming from pseudo-scientists and some practitioners of soft science, such overblown claims, with their echoes of the early prophets of true science, might impress nonspecialists for a while. I should add that popular cosmology is sometimes guilty of that charge, particularly when it attempts to confront questions of the universe’s origin that depend on shaky theoretical underpinnings. What this popular scientism does lead to is a misunderstanding of the nature of moral thinking, but it is an error the mythmakers share. The layman yearns for moral certitude somehow produced through the methodology of science. The mythmakers are right to condemn this as a false hope, for no “ought” can ever derive from even the most certain “is.” Or to put it in liberal arts terms, no imperatives from the indicative. Every capability of science requires a moral injunction to direct it, and that injunction can never be derived from the science that serves it. Medical science may extend life, but it may not decree that life ought to be extended. But the mythmakers’ nostalgia for the moral directives of religious authority is, like their historical narrative itself, a longing for a myth. Authority of any kind must founder on disagreement. It cannot resolve dispute within its own mode of warrant (for more on why, please read my blog on “The Fragility of Religious Authority” of September 18, 2013). Neither empirical science nor religious authority can provide certain moral guidance in a multicultural climate. What can?

I consider expertise to be an admirable guide to judgments of quality as well as to issues of truth that yield to repeated and studied experience, but I must agree with the mythmakers that expertise is difficult to come by in the rough-and-tumble of undifferentiated experience. So in that sense, they are right to condemn the mid-twentieth century’s obsession with soulless professionalization and mass efficiency, though I should quickly add that an increasingly complex society cannot survive without bureaucracy and middle managers.

6. Still, the mythmakers’ ominous charge that in the absence of religious morality, “efficiency experts” and technocrats would by default be the moral arbiters of mass culture proved to be yet another error on their part. Wouldn’t you agree that is a role more likely assumed by commercial artists and entertainers in today’s culture? At any rate, expertise, though a powerful justification for many kinds of truth and goodness claims, can only have a tenuous hold on moral ones, and then only the expertise derived from a long life well lived. Neither scientists nor technical experts have replaced bishops, ministers, or mullahs as moral arbiters.

The “Myth of Religious Return” so prized by conservative literati is a good story for sure. But like all narratives, it suffers in any effort to translate it into discursive language (please see blog entry “Tall Tales” of July 14, 2014 for more). Without a doubt, the failure of their analysis can be traced to the dawn of modernism, a thought revolution spurred entirely by the dismal failure of religious authority over the century and a half of Reformation conflict. But the mythmakers failed as miserably in understanding their own age, and this serious mistake constitutes their final misjudgment. 7. They failed to appreciate the postmodern revolution that rejected modernism at the dawn of the twentieth century in favor of group identities molded by spurious claims of social science, existential Romanticism, utilitarianism, and American Pragmatism. We are certainly still suffering the consequences of postmodern moral thinking (please see my blog posting “Postmodernism and its Discontents” of July 7, 2013), and some of the strongest objections conservatives raise to the current moral climate are valid objections to postmodern thinking. Yet thanks to the Enlightenment revolution, itself condemned by both the mythmaking reactionaries and the postmodern nihilists, morality is still seen as the most prized possession of the individual’s rational will in pursuit of what it calls good. Many of us exercise that will by choosing to respect religious authority, though less completely than in ages past. Rather than yearn for some mythic, medieval paradise lost, religionists must compete for the moral assent of their adherents as adults, not as children cowering in fear. The intellectual revolution that freed reason from authority also established the sacred right of the individual to choose her own moral outlook justified by her own moral warrant. The mythmakers are certainly correct in asserting that this hasn’t always gone so well, but moral agency does not preclude error any more than it perpetuates it.

 

 


The Tyranny of Rationality

My argument today can be summarized thus: we all are deeply, deterministically rational.

Since we partake of three realities (brute reality, our constructed image of it, and the language we use to convey our knowledge to others), it seems appropriate to examine this claim in reference to each. I do not mean to say that the external reality, the tree that falls in the forest, is rational. Brute reality simply exists, and any character attributed to it requires an interpreting mind. So if we should decide that, yes, the external world is rational, what we are saying is shorthand for what we really should be saying: that we can apply our rational faculties to the substance and events of the world with some confidence, knowing that the predictions and explanations we produce will prove accurate. Further, if they prove inaccurate, we know that our rational faculties can locate a more accurate prediction or take into account some previously hidden factor that will then explain to our satisfaction the way of the world.

Now this congruence between brute reality and our own thinking about it is really very mysterious, for there is no good reason why creatures produced by brute reality should be able to unlock its secrets as well as we do. From the way I’ve framed the two realities referenced thus far, you might assume that the correspondence I have mentioned is rooted in empiricism, the natural sciences. And who can doubt that the disciplines of the hard sciences have proven the exemplars of unlocking the mysteries of nature with the key of human reason? Nothing at all surprising there. You might wonder why I bother bringing up such a cliché.

My answer is that the methods of the hardest of the natural sciences, while profoundly rational, are only more rigorous applications of something deeply rooted in all human experience, something we can no more shuck than the wetness of rain. Even our stoutest protests against rationality, the ecstatic cries of mystics and the Kafkaesque wails of nihilists, are logical shafts of light in a metaphysical darkness and no less attempts to build a working model of reality than applied particle physics. The difference only makes sense in consideration of the process.

Long before neurologists began their contemporary struggle to map the brain, philosophers attempted to probe the mind and its workings. The pioneers of this effort, the first epistemologists, sought to answer the question of how the mind represents brute reality. They quickly discarded the Aristotelian model of direct perception despite its dominance in the thinking of the 17th century. The sort of naïve assumption that we perceive the world complete and entire, that our senses present to us a “true” picture of reality, is a hard one to dismiss, for it is our default approach to experience. But it is patently false. From the vanishing point in art to dreams, hallucinations, and apparent time compression and expansion based on our level of enjoyment, we don’t need to look very hard to find that our perception of brute reality is something different from the reality itself. And don’t even bring into this issue the limits of perception as exemplified in quantum theory and general relativity!

John Locke’s representative theory of perception gave us our second reality, an internal reconstruction of the external one that in his view was a pretty effortless reproduction constructed by the mind. Locke argued that all of our thinking is composed of perceptions and reflections upon them. Of course, that notion of a “reality movie” playing in our heads does little to explain either the misperceptions that the brain seems so often guilty of or the bothersome truth that we don’t all seem to perceive reality the same way. Fast forward a generation to George Berkeley’s famous question about the falling tree. How do we know the movie playing in our heads is an accurate representation? All we have access to is the movie.

Berkeley thus builds the most important edifice in epistemology: the perceptual wall, an impenetrable barrier that separates brute reality from whatever we take it to be in our minds. We build a creative representation of reality from perceptions and reflections and have only experience to guide our choices. In fact, as Immanuel Kant famously observed at the end of the eighteenth century, whatever model we build is formed not only by our senses but also by the mental structures in our minds that pick and choose among them to shape our creation. This is an active process of choosing, sorting, and assembling perceptions so as to build a working model of reality inside the perceptual wall. Sense data bombard the mind, and just as we can pick out a familiar voice in a noisy room or see a foreground object while ignoring all the others in our field of vision, our minds sort through the barrage of perceptions that our senses transmit to produce a working model of reality. How different is this process from our default notion of direct perception, and how likely is it that our minds will build a model that makes sense to us regardless of its fidelity to the entire picture presented to it? And, disturbing thought, how likely is it that we will all build anything like the same reality from differing experiences?

But here’s the catch. Kant famously insisted that the mechanism for that construction, the sorting device for the incoming data stream, must be profoundly and completely rational. His famous categories of experience were mental sorting and assembling mechanisms that inevitably present to us a rational world. This is why every cause seems to have an effect and every effect a cause, why the world presents itself as both unity and diversity, and why quantity seems so ubiquitous in physical reality. These are simply the way we see things. The way we must see things. We have to remind ourselves ad nauseam that correlation is not causation simply because we are programmed to read causation into every effect we observe. We see constellations in random star positions, animals in cloud formations, and purpose in chaos because that is what we have to see. It seems as plain as the nose on our face unless we check our naïve assumptions at the door. The world is not rational. We are. And we can’t help being.

But wait. There’s more. Just as our moment-by-moment experience of reality is composed of the assemblage of innumerable sense data inputs orchestrated by a mental process, so too is the composite, ongoing picture of reality these experiences produce. We don’t merely act in the world. We respond to it, and that response is a product of an ongoing reflection that orients us to experience. We don’t just think we know the momentary truth of this instant. We know reality. We must think this way so as to navigate our way through the truths that allow us to choose all the goods we come to value, whatever they may be. It is this picture of reality and our place in it that comprises our own virtual circle.

In previous blogs I have discussed the virtual circle at some length (please see post of August 6, 2013). It bears repeating that our ability to find some correspondence between our experience and external reality is only testable by experience and that our efforts to improve those tests have brought us the scientific method. Its essence is an effort to improve the reliability of experience and our reflections upon it. We have other tests, of course. To determine correspondence between an unknowable reality and our picture of it, we may rely on expertise, authority, or undifferentiated experience. But all of these truth tests are inherently rational. We know experts have deeply examined the repeated experiences that produce their expertise. We trust authority in one field because it has proven trustworthy in others (a questionable assumption, as I explore in my post of January 20, 2014). We (mistakenly) assume a new experience can be examined in light of an old one. Please notice that I am not claiming parity for undifferentiated experience and a scientific experiment. The latter has intentionally confronted the issues of unreliability that plague the former and has attempted remedies for them. What I am claiming is that our conscious assemblage of reality, our virtual circle, is composed of rationally constructed truth claims. Their correspondence is, of course, always in doubt (we cannot guarantee the tree has fallen, after all, only that we have heard it), but the truth tests I have mentioned produce sufficient warrant for us to judge these claims as true (July 9, 2013). It is also defensible, though not certain, that the human operating system that presents such sense data constructions to us operates as a guarantor of intersubjectivity so that we may compare our correspondence claims constructively to those made by others.

But, of course, our constructed reality consists of far more than simple correspondences to material reality. What about correspondences to conceptualizations? How does our mind construct, for instance, true impressions of abstractions like justice or love? And what about the purpose of all this construction? How do we define, limit, and choose the good?

In this zeitgeist, to claim that such things are correspondence constructions, meaning they have some objective reality, is going way out on a limb. I have attempted to make that argument in prior posts (December 10, 2013), but even if you embrace our culture’s attachment to the subjective quality of conceptualizations, and especially conceptualization of goodness, I can still claim with confidence that your subjective experiences are rational, or at least, that you deem them so. Here is why.

In any attempt to find the true or the good, we engage in an act of comparison. In correspondence truth tests, we examine the percept in our minds against the brute reality we seek to know. Is that a Mercedes or an Audi? Even if we embrace the impermeability of the perceptual wall, we still examine our truth claims in comparison to the virtual circle of truths we have already accepted as true. Is this queasy feeling in my gut what I call nausea? This act of analysis so central to every truth and goodness claim cannot help but be a rational one, and it characterizes each moment of consciousness. It builds a second level of rationality over the foundation of Kant’s sense data theory of perception, this one a conscious and comparative one.

An unfortunate consequence of the acceptance of the perceptual wall is the epistemological viewpoint termed phenomenology. This school takes seriously the impermeability of the perceptual wall and argues for the radical subjectivity of all experience. Its adherents take their name from Kant’s famous assertion that we can never know things-as-they-are (“noumena”) but only things-as-they-appear (“phenomena”). We can only see the inside of the perceptual wall, digesting phenomena as they appear in the mind. Perhaps this argument would have been taken less seriously if it hadn’t followed upon the heels of Romanticism, with its perceptual wall-piercing valuation of intuition as a divine source of insight. Question that level of certainty by doubting either the reality of intuition or its divine source and you are left with something far less convincing: the total subjectivity of experience. This bleak picture of humankind’s fruitless search for truth and goodness lent its emotional force to the twentieth century’s infatuation with postmodernism (July 30, 2013).

But note that even in this bleak and black view, we see the light of reason. For phenomenology is founded on Kantian metaphysics, Romanticism on a valuation of intuition as a reliable means of knowledge, postmodernism on a cobbled-together set of reactions to unsettling events in the first decades of the twentieth century. Despite their claims to the contrary, the source of all philosophy is the search for wisdom: the true conditions of reality. And if Teresa of Avila, St. John the Divine, and Franz Kafka find those conditions to extend far beyond the reach of correspondence knowledge, meaning beyond the reach of the third level of reality I mentioned at the beginning of this entry, the use of language, that is still fine. For their beliefs do not render their rational appraisal of reality incorrect. They extend it, perhaps to realms that others might not see or appreciate. In “The Idea of the Holy,” Rudolf Otto makes clear that any concept, even of the numinous if such a thing is possible, is rational.

But even if it isn’t, even if a conversion experience, a horrifying ordeal, a drug-induced revelation that changes your life, cannot be conceptualized as experienced, it must still be incorporated into the virtual circle. It still must comprise its own piece of our picture of reality. And that process too must be a rational one. For the only way we can construct that picture is to examine it according to the rule of either the principle of non-contradiction or, if we are more rigorous in our thinking, the principle of logical entailment. The mental process of turning a unique experience into a bit of the virtual circle must be an act of conceptualization and thus a rational act. I am certainly not claiming that we all succeed in this effort, nor that we apply very much rigor to the process. The haze of beliefs that extends our knowledge like a sun’s corona is often poorly examined in light of the knowledge we have already accepted, for instance. But even so, note the act of rational comparison that lies at the center of the effort. Perhaps mental health professionals might find a continuum of rationality from the integrated personality to the psychopath. I doubt the latter considers her virtual circle very much compromised. We all think our conception of the world pretty sensible, and each thinks her own the best for the simple reason that she would choose another if it seemed more true or good.
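For readers who want the contrast between those two standards in bare notation, here is a minimal sketch in standard propositional logic. The symbols are my own shorthand rather than anything the argument above depends on: B stands for the set of beliefs already accepted, and q for the new belief a fresh experience presses upon us.

```latex
% Non-contradiction: no belief may be held together with its denial.
\neg (p \land \neg p)

% The looser test for admitting q: adding it to what we already accept
% must not allow a contradiction to be derived.
B \cup \{q\} \nvdash \bot

% The stricter test of logical entailment: q must actually follow from
% what we already accept.
B \vdash q
```

Most of us settle for the middle test most of the time; the last is the rigor I concede above that we rarely apply.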

Perhaps logicians will find fault with my argument, insisting that rationality is not a matter of degree and that it indicates some absolute proficiency. I cannot disagree that formal logic establishes a rigor absent in less rigid formulations, but certainly at least some of the difference is attributable to the third reality of language rather than the second reality of the virtual circle. But just as expertise is a less perfect form of the rational application of experience than empiricism, so too is ordinary logic a dilution of the methodology of formal logic, and for the same reasons. We accept expertise because we cannot frame many experiences in the light of experimental science, and we accept the limitations of experts because that is about the best we can do, just as we accept ordinary reasoning because we cannot frame ordinary experience with the mathematical structures so admired by formal logicians. Dilute that comparison still further and observe that we subject our beliefs to the far less rigorous tests of non-contradiction because we cannot subject them to the truth tests of correspondence. The lesson should be clear. We are rational beings. Rather than eschew that inherent rationality, we should embrace it and apply the most rigorous tests to our perceptions and reflections that they will withstand. We cannot escape conceptualizing our thinking about truth, goodness, and beauty, and in seeking warrants, we cannot escape the reasoning that must accompany such thinking.


Prejudice and Privilege

I really dislike looking into matters of race, but you don’t have to scratch the surface of any American problem very hard to find race eating away just under the surface, complicating solutions and, worse, analysis. It is this subterranean process, this rotting under the polished surface of our ideals, that has given rise to the relatively new popularity of the notion of privilege as racism. I wish to examine this view of racism in relation to the traditional notion of prejudice.

Their etymology indicates a difference with major implications. “Prejudice” is rooted in the active voice: it means literally “to prejudge,” presumably without sufficient evidence. “Privilege” comes to us through French from Latin roots meaning “private” and “law.” To be granted a privilege is a passive, not an active, matter. We may assume the favor was sought, but its conferral was not within the recipient’s power to procure, in contrast to the power we have in exercising our judgment actively to show prejudice.

Now the use of these two terms today tracks their etymology, and this distinction is hugely important in the current racial climate for two different reasons, both of which make remedying racism more difficult. First, the concept of personal responsibility is a bedrock moral principle, and that is difficult to connect to privilege as racism. Consequently, we tend to underestimate the degree to which our actions are determined by prior conditions (please see my posting of March 23, 2014) and overestimate our moral freedom in present ones, thus leading to the second problem: we consider our accomplishments to be entirely our own and resist crediting others for even a part of our success.

Contrast this haze of shared responsibility to active prejudice. An act of prejudice is an error committed by the thinker. It is within her power to remedy. It is not only a cultural offense but also a rational one. It connotes sloppy thinking even when the prejudice is positive. For unless a class of things or persons is definitionally exclusive (“All bachelors are unmarried”), one may not reasonably apply even accurate group classifications to individuals, not to mention the difficulties inherent in forming those classifications (a danger postmodernists blithely ignore for some reason; please see post of March 30, 2014). But a privilege is a gift over which the recipient has no control. In the sense social critics use the word, white privilege describes a thing unearned, an accident of birth, a booster rocket for economic and social ascent denied to others. The term is thus not only passive but also inevitably comparative. From its beginning, the private law benefiting some was implicitly to be contrasted with the public law relatively penalizing others.

So the use of the word privilege changes the nature of a charge of racism. First, the accused may have done nothing in contrast to the implicit irrationality of a prejudicial judgment. She may bear no personal moral responsibility. She is merely the beneficiary of an unearned advantage that she may have neither asked for nor been aware of. Second, the charge insinuates that any advantage thus conferred also must penalize others. Finally, we face the most relevant issue that derives entirely from her degree of moral responsibility: what is she supposed to do about it? Let us attempt some calibration.

First, the power of charging prejudice is inextricably linked to the rational error it names. Racism is morally offensive in part because it is stupid, and holding the moral high ground in the discussion cannot be separated from intellectual superiority. Racists are ignorant, uneducated, unable to grasp nuance. They make sweeping generalizations that are wildly inaccurate and then attempt to paint individuals with them. A long tradition of seeking empirical or rational means to justify racist judgments, from pseudo-Aristotle to Thomas Jefferson to Charles Murray, attempts to invalidate the association between racism and ignorance, but its existence only reinforces the connection, for all such attempts are now regarded with disdain by intellectuals who are unwilling to sever it or to take any such effort seriously. This intimate connection cannot be carried over to the new racism of privilege, for privilege reveals no requisite flaws in its recipient whatsoever. Southern freedom riders in the 1960s were as guilty as the dog handlers who attacked them if both were white. Indeed, the notion of privilege immediately conjures up a consequential guilt that might have motivated the former and enraged the latter. But should those risking their lives to end racism be charged with racism because they receive benefits from a system they actively oppose? Is such a charge warranted? And must white privilege disadvantage blacks?

We must assume that such privilege derives not from the absolute advantage conferred by being white but from the relative disadvantage the privilege implies for being black. We will call this kind of privilege disparative privilege. Now this relationship requires some investigation on three counts: first, what constitutes the privilege; second, how does it redound to the disadvantage of blacks; and third, must any conception of privilege be built upon comparative relationships?

Conservative whites wish to dismiss the whole notion of white advantage, and with it the notion of white guilt, by insisting that whatever advantage they received was earned rather than given, though they wish to be vague about whether they earned it or their forebears did. At most, they point to cultural values that encourage their success: family structure, emphasis on education, and work ethic. They rightly accentuate the self-discipline their success required, the acceptance of deferred gratification and commitment. But the essence of privilege as racism is precisely the charge that advantages were not earned, that they accompanied skin color as a gift. So we step into a minefield of moral ambiguity, for I cannot be responsible for a harm I did not commit, nor can I be asked to feel guilt for a benefit I earned. To be fair, these character traits are not confined or exclusive to “white culture,” and to say they are is simply prejudice. To claim that being black automatically results in cultural disadvantage in regard to these prerequisites for success is a claim I can’t imagine any unbiased cultural observer would wish to make. Nor are these automatic socioeconomic markers, for lazy scions of wealthy families and ungrateful second-generation Americans are clichés that belie any guaranteed conferral of privilege. So much for any sweeping comparisons. But let’s face it. The conditions for success are certainly better established in some socioeconomic environments than in others, and of the multitudinous strands in the tapestry of any success story, many are woven without effort, simply by the expectations of others that form our horizon of possibilities. Still, it is a gross act of prejudice to see white privilege as an unearned gift which white America takes for granted… at least until one compares it to being black in America. Only by comparison does the generalization hold undeniable truth. White guilt derives not from privilege but from prejudice as surely as the tail of the coin implies a head. Compared to the lot of blacks in this country, not only in the past but in the present, every white person now living was born on third base, and whatever her positive efforts, all were built to a degree upon a scaffold of exploitation. There is no denying that any comparison of white and black privilege will lead to one conclusion: whites still reap unearned privilege and blacks unearned privation because of skin color, and this legacy of active prejudice is a moral stain. White persons are like the boss’s son who may start in the mail room and may work hard but who will never know how much of his success is due to the accident of birth or to what degree that success has kept less lucky fellow workers from rising as they might have in an equal race. Who can doubt that the future occupant of the corner office will pass on the fruits of his success to his children just as his future underlings will hand off their lesser luck to theirs?

But note in the example that while the goods are absolute, the harms are all relative. Let us try to think of privilege in an absolute sense. Considered simply as unearned gifts, privileges are everywhere; we are awash in them. We did not earn the social order that benefits us, the political system that frees and equalizes us, the economic system that enriches us, the family that nurtures us, the knowledge that guides us, the beliefs that give meaning to our lives, and on and on. While it makes sense for us to be grateful for these blessings, I can think of no reason we should feel guilty for them. In this large sense, anyone who lives in conditions allowing her to meet her human needs in the world is privileged, for it is by the satisfaction of our needs that our lives are fulfilled, and the conditions for that satisfaction have been well established thanks to the conventions of civilized life (for more on this, please see my post of November 13, 2013). If we have a loving family, dear friends, education, civil order, productive work, and the like, it seems to me we have the goods we are by nature designed to have, and the moral response to that is satisfaction and gratitude, not guilt. What is more, these are limitless goods. There are more than enough of these blessings to go around, and my having a sufficiency in a working civil society in no way limits your access. We do not compete for all privileges.

But we do compete for those blessings that are limited, and then we are forced to face both the universality of our needs and the pain their absence produces. The most impoverished citizen in this country is privileged compared to the 84% of Liberians living on less than one dollar per day and the most unjust political jurisdiction here is utopia compared to life in Syria. If comparative privilege imposes a consequential guilt, then we all have a moral duty to ameliorate the living conditions of the poor regardless of their location. But do we? Let us put religious injunctions aside for the moment, though they impose their own moral duties, so that we may confront the central question that a relativistic concept of white privilege and white guilt implies: does relative economic and political privilege inevitably impose moral obligation? Let us refine the question: does privilege impose an obligation even when disparity is not a consequence of the privilege?

Let me acknowledge and laud the sentiment evoked in kind-hearted persons by seeing suffering. We wish to make it better. But a clear-eyed view of this natural desire also compels us to see how conditioned it is by our degree of privilege, as exemplified by the coat-drives-for-pets mentality of some wealthy enclaves. And just as any suffering tugs at the heartstrings, another nearly universal response is simply to turn away. What we happen to see disturbs us, so we simply refuse to see it so as not to be disturbed. This accounts, I think, for at least a part of the gated community mentality that seems so prevalent in rich neighborhoods. The moral principle of ought-implies-can (that moral obligation follows only from the ability to act) comes into play here as well. An indiscriminate and wholesale equality of goods would be impossible to conceive, much less to achieve (please see my posting of December 3, 2013 for more). The Soviet Union was a case study of that failure. Despite the efforts of thinkers like John Stuart Mill to objectify such sentiments and thereby impart to them some moral valuation, an effort that collapsed in a thicket of evaluating pleasures and pains, we would do well to remember that justice does not require an equality of degree of all perceived goods. For we perceive many things as good and value them differently, and there are too few Maseratis to go around. Rather, it is the sufficient distribution of goods necessary to meet our human needs that is required, otherwise called an equality of kind. This operation is a profoundly rational one, the tugging at our heartstrings notwithstanding. We are left, then, with the suspicion that some kinds of privilege and guilt are not handmaidens of wellbeing while some are, and that some wellbeing is earned rather than gifted. So why should guilt be intertwined with privilege like snakes on a caduceus?

But that is just the issue, isn’t it? For in our time, in America, and for economic and political privilege especially, the relationship is always partly causal. Some linkages are as thick as wisteria vines squeezing the columns of antebellum mansions. Some family wealth, wealth that produced all the goods it is capable of buying for succeeding generations, was built on the direct foundation of the importation and perpetuation of slavery. Other privileges are less easily traced. It is said the target of the fourth hijacked plane on September 11, 2001 was either the White House or the Capitol. What kind of loss to our national pride would that have inflicted? Both edifices were built by slaves. When a group of people are disadvantaged by color and so denied an equal wage or vote or voice in social policy sufficient to deprive them of the satisfaction of their needs, someone will reap the benefits, and the misalignment of power being what it is, the odds are that someone already has the sufficiency that justice requires. To the degree that white America has harvested this kind of white privilege, it deserves to feel white guilt. And so long as the privilege is maintained, so too is the guilt, and so too is the moral obligation to correct the moral harm. Reparations for slavery would not clear the slate, for the vestiges of racism would continue, producing continuing disparity and white privilege.

But just to be clear, should I feel guilty for having parents who valued education and instilled a work ethic because others were less fortunate in their choice of parents? White guilt must be measured by the racism that relatively advantages one group over another, not the absolute goods consonant with universal human needs that some received and others did not. Social scientists may attempt to lay all cultural differences at the feet of some ancestral economic exploitation, but such an indictment seems too sweeping to be justified by science and too Marxist to be embraced by most interpreters, though it is consistent with the postmodern emphasis on culture as the creator of identity (please see posting of July 30, 2013 for more). If a narrower difference maps the battleground of white privilege and white guilt, then fight the battle there. But let liberals leave out injunctions of religious duty, emotivist objections to inequalities of degree, and claims that all privilege imposes guilt. Let conservatives put away their blind pride in winning a rigged game and their contempt for those losing it. While privilege may broaden our view, it shouldn’t change our focus. Prejudice is still the villain of the piece, still a moral obloquy and intellectual failure. So long as its effects ripple through the culture, white guilt is its proper consequence.

That conclusion applies also to other kinds of privilege. To the degree that they were unearned benefits gained at the cost of sexism, colonialism, imperialism, and the like, we might expect to see coinages such as male privilege, first-world privilege, and heterosexual privilege, each with its attendant train of guilt. And to the degree that these disparities still hinder exploited people from satisfying their needs while easing the lives of their exploiters, active amelioration is the only morally justifiable response.

So what is active amelioration? What is to be done? Since justice is defined as “to each her due,” it seems clear that justice demands that those unjustly advantaged should be those who make reparations after, of course, performing the required triage. For if all vestiges of racism were magically removed from our society today, we would still be left with the inequities it has long produced, both privilege and privation. Repairing these inequities is not so difficult as egalitarians might imagine, since justice requires not only an equality of kind but also the inequality of degree that the exercise of our preferential freedom produces (for more on this, please see my post of November 20, 2013). The shame has never been that some have an excess. Rather it is that the prejudice that helped produce the excess has also produced a deficiency for its victims. It is daunting to accept that the same arguments that produce white guilt hold sway in regard to other kinds of privilege, leading to other moral obligations, but there it is. Since the exercise of influence over other sovereign nations is a governmental function, we as citizens should move our government to act on our behalf in accord with the limitations of the ought-implies-can principle of morality.

This is what is to be done: we have the obligation to root out disparative privilege in all of its other manifestations by actively opposing prejudice in our own circle and by favoring governmental action to produce the equality of kind that justice demands. And let us remember also to be thankful for the absolute privileges we enjoy but did not earn.


One-Armed Economics and Wealth Creationism

I venerate expertise as a truth warrant. In judgments of correspondence goodness evaluating quality, it can substitute for clearly defined standards (please see my post of October 15, 2013 for more). Because expertise is to some degree built upon experience, it is a deeply flawed justification for truth and goodness claims, though its reliance on rational examination of experience raises its reliability. That being said, the criteria for developing expertise depend on the subject. The human sciences have compiled a dismal record in this regard, in part because of weaknesses inherent in their fields (see post of February 9, 2014 for more) and in part because their roots in academia encourage professional disagreement. Nevertheless, the hardest of the soft sciences and one of the few that is based upon quantitative analysis is “the dismal science,” economics.

Because it is a human science, it is built on the shifting sand of conflicting paradigms. We see broad disagreement about essential subject matter along the political spectrum, but even economists embracing capitalism splinter in their premises and the conclusions built upon them. Pit a disciple of Hayek against a Keynesian and watch the sparks fly, justifying Harry Truman’s famous preference for a one-armed economist who wouldn’t say, “on the other hand….” The general unpreparedness of economists for the crash of 2008 does not speak well of their predictive powers. The psychic hot line did better. So call economics an immature science, one step below respectable status. Even keeping that caveat in mind, I cannot help respecting economists for their devotion to data, something all too rare in the human sciences, and I respect their analytic method, flawed though it may be by the theories that dictate it. They know so much more than I do. I only participate in the economy. I do not profess to understand it. My field is epistemology, so I am painfully aware of what I do not know, but the concepts of wealth and job creation as some economists define them confuse me. Perhaps an expert can set me straight.

 I took a dollar out of my wallet the other day, and right above George Washington’s head in a kind of corona, a girlish hand had printed in red marker the name “Maria.” That got me thinking about how many hands that bill had gone through since Maria had first claimed it for her own, and how many transactions had been facilitated by its existence. Now I’ve heard conservative pundits and a few economists insist that wealth can only be created by free enterprise, that government can only transfer rather than build wealth. Since the money comes from tax revenues, they say, it is merely changing hands, not creating value in the way private enterprise does. This seems obvious when given their favorite example of a Steve Jobs inventing the iPhone. There was nothing and then there was something that people were willing to pay for. This is true wealth creation, as in creation ab initio, a making that seems almost divine. Advocates of this position contrast that kind of wealth making with the confiscatory policies of government taxation that only moves money around after, of course, squandering a large percentage in fraud and waste. So when government spends, it is spending not only what the earner would have spent more wisely but also what it did not create. The assumption is that the taxpayer created the wealth by her labor just as Steve Jobs created the iPhone. So taxation is not only wasteful in that it adds an unnecessary drain for money to go down. It is also parasitic in that it adds nothing to the economic basket. Have I got that argument right?

 But Maria’s dollar tells another story. Does your job “create” wealth from nothing in the sense of inventing value? More likely it does what all those people who passed on Maria’s dollar do: perform a service that the payer considers worth a dollar. Whether that service is performed in the public or private sector is irrelevant. The taxpayer pays for a service that government provides just as she pays for a babysitter or a taxi or a pizza. I will admit that I can choose whether to purchase these things and that I have no such choice in government spending except, of course, through my vote. But look at it another way in respect to other purchases. I have no choice about fulfilling any of my economic needs. That’s why they are called “needs” (For more on this, please see my post of November 13, 2013). Can I refuse the grocer, the hospital, the landlord? You may respond that I can choose another provider to meet my needs if dissatisfied and such freedom is denied me in regard to government services. This is an undeniable burr under the saddle of the cowboy libertarian wing of the conservative cause. But let us examine this irritant more closely.

There are two good reasons for the monopolies that government “enjoys” in performing its services, and these reasons affect but also transcend economics.

 First, since the raison d’etre of government is justice rather than profit, it must retain a monopoly of power so as to be the final arbiter in its goal of providing justice to meet those needs that citizens cannot meet by their own efforts. The legitimate scope of such efforts, I must add, is limited to those needs citizens cannot satisfy for themselves. These fall into two broad categories: those too expansive for any individual to provide (such as defense) and those that might be skewed into injustice by gross inequalities (such as the court system and the legal rights of minorities).

Secondly, the issue of its efficiency and desirability in regard to any particular but necessary service is skewed by the simple truth that government is often the provider of last resort for these essential services, performing public functions (like fire protection and other disaster relief) that no private enterprise would undertake because they could never prove profitable. Florida provided another example after hurricanes ravaged the state in 2004. Private insurers deserted the state, which was obligated to go into the insurance business to provide needed coverage for homeowners.

These two realities put the lie to the claim by some economists that government only transfers rather than creates wealth. It provides services citizens cannot provide for themselves, services that meet human needs and that may demand an attention to justice over profit.

 Now it is worth discussing whether government can perform some of these services as well as a private entity, but this is a question of relative efficiency, not absolute wealth creation. But in considering efficiency, you can be sure that no corporation will pursue these kinds of opportunity unless profit is factored into the job, profit that adds to the cost of providing the service, profit that blinds the provider to issues of justice in provision of services. Does it really matter whether Maria’s dollar passes through government hands by way of taxation or into the till of a business if it then goes to pay some employee for doing necessary work? By the logic of those who deny the value of public expenditures, the education private colleges provide has value while that provided by public colleges does not. Can anyone claim that a nurse working at a VA hospital provides no valuable service in comparison to one working at a for-profit hospital? Does this make any sense?

So much for the absolute claim that public dollars cannot create wealth. But conservative economists might then simply pivot to the question of efficiency, subtracting from that wealth the costs incurred by government’s incompetence, leading to the same conclusion by means of a different subterfuge. For if the admitted economic value is reduced by the waste, fraud, and inefficiency they think inherent in government spending, the net sum might still be zero. This is certainly a different argument from the definitional claim that the public sector is a drain, and it resonates. The Soviet Union was hardly an exemplar of socialism, but it surely was a model of waste and bureaucracy, and conservatives are on much more solid ground in condemning the poor performance of some government bureaucracies at all levels. But let us examine this point critically as well, for it is based on two false claims.

The first is that government is for some reason inherently wasteful, but that conclusion relies on a cost/benefit analysis appropriate to private rather than public enterprise. Put another way, “wasteful” in capitalism refers to cost versus profits, but as the goal of public enterprise is not profit, at least not the profit that can be quantified into dollars, on what grounds can it be termed wasteful? Unless critics are willing to use a broader standard of value, they can hardly objectively judge the public sector, and to use the standard of value appropriate to private enterprise is grossly distorting. 

Underlying the charge, though, is a psychological theory of motivation that capitalism’s champions mistakenly take to be indisputable. They attribute the poor performance of government services less to weak oversight than to the slackness of the public sector’s work ethic. No reasonable person can argue against the profit motive as an incentivizer of efficiency, but it is carrying the argument to absurdity to view it as the only incentive, as Ronald Reagan did in his infamous contrast between public and private workers: “The best minds are not in government. If any were, business would steal them away.” I doubt that even Steve Jobs found profit more motivating than his own love of discovery and invention. Millions of dedicated police officers, firefighters, teachers, and public servants are moved to do their duties by their commitment to the general welfare rather than the size of their paycheck. Bureaucracy is not necessarily a pejorative term. This is not to dismiss charges of waste and incompetence nor to diminish good faith efforts to make government more effective, only to challenge the presuppositions of those who seek to discredit it by inept comparisons. It is only an inspiring Chamber of Commerce vision of wealth that sees it as created from nothing by the ingenuity of the human spirit in pursuit of profit.

Maybe not so inspiring as we might wish, as the ugliness of Ayn Rand’s portrayals demonstrates. Her “arguments” as phrased in her novels are certainly created out of nothing (please see last week’s post on the dangers of fiction-as-reality). But more to the point, they are adolescent fantasies. It is hard to decide whether they are more objectionable for their Romantic excess or their childish ingratitude. Even a moment’s cool thought after reading Rand’s overheated prose should make it obvious that Steve Jobs did not build Apple from nothing. It hardly detracts from his genius to note that he relied on the education, protection, and facilitation that his parents, his community, and his government provided to apply his genius. What would the iPhone have looked like if Jobs had labored away in a slum in Somalia? Rand’s heroes hardly made themselves (though I am not sure their parents would have wanted to claim them), and while her warnings of the dangers of collectivism were on target against a Communism that championed a stupid equality of degree, they amount to an elephant swatting gnats in today’s liberty-loving America (for more on the battle between liberty and equality, see posts of November 20 and December 3, 2013). The self-made man is a staple of the American dream, perhaps because it is a dream to imagine anyone being totally responsible for his own success.

A stronger point of leverage against government as a wealth creator might target simple overreach. I mentioned earlier that government’s positive obligations in justice focus on broad needs for the general welfare and narrower needs to arbitrate competing interests. The emphasis must always be on needs that individuals cannot provide for themselves. This is an inherently hazy category. Its components are built upon the universality of human needs (see post of November 13, 2013) that introduces an equality of kind (see post of December 3, 2013) that imposes obligations on government (see posts of February 23 and March 2, 2014). But it is not merely the ambiguity and difficulty of the topic that lead both liberals and conservatives to avoid facing it. Both sides have reasons to blur the issue.

Liberals refuse to face the thorny issue of individual responsibility. Though each of my needs confers a right, the satisfaction of most of those needs is my own obligation. It is, in truth, my core duty as an adult human being (see post of November 6, 2013). Should I fail through my own error, government as the collective will of my fellow citizens is under no obligation to repair my situation unless I am unable to repair it for myself. I understand that Christian values in the U.S. have tinted many persons’ views of this sort of thing, but all those conservatives who think this a Christian country might want to differentiate their Christian from their political duty (though they seem loath to face either: see below), and liberals who wish to use government resources to satisfy any unmet needs whatsoever might want to clarify in their own minds which are government’s duty and which are each citizen’s. Conservatives have a point about the “nanny state” that liberals rather wish to ignore. Look at it this way: to treat adults who should go about the business of satisfying their own needs as children the rest of us must care for is pure paternalism: insulting and crippling to those we seek to help. It is also a waste of public resources in that individuals are not only responsible but also more efficient in these efforts than those who seek to ameliorate their condition for them. It violates the only duty of government in that it is inherently unjust both to the individual and to the citizens who attempt to do for her what she should do for herself. Liberals need to face the matter of individual responsibility squarely.

Conservatives have a different motive for blurring the issue of needs, for to base government upon their satisfaction would call into question the social contract justification for government and with it the majoritarian argument that has long delivered injustice to minorities. More pragmatically, it would cost more money, for to finance a government seriously committed to the general welfare (a term I define as meeting needs that individuals cannot meet by their own efforts) would socialize some efforts now undertaken for profit. The absurd cost, inadequate distribution, and poor outcomes of American health care offer one prime example from among many. The net result would be to change our value system from the orientation that wealth determines worth to a respect for the equality of kind rooted in our common humanity, an innocuous enough notion that you would think Christians as well as champions of human rights would subscribe to, yet one many conservatives find threatening.

A related wealth creation story lauds the positive contributions of the job creators in our economy, those who stimulate the economy by providing employment for the ninety percent of us who work for a wage. The argument hinges on the more basic notion discussed above: that wealth is created out of nothing only by those operating within the free enterprise system. See, Apple had two employees in 1976 and 45,000 last year. Each of those well-paying positions only exists because of Jobs and Wozniak. They are not only wealth creators. They are also job creators. Again, it is hard to argue with this. Something came into being as a result of their genius that did not exist before, and by dint of their hard work and smarts, that new thing has created both wealth and jobs. Surely, job creators deserve recognition and reward for their efforts. This version of events is convincing, yet it seems just a bit truncated and simplistic. There’s more to it than just invention and production. All those sleek computers and phones and tablets are great products, no doubt, but all those high paying jobs and profits were not created solely from production. There is also the little matter of consumption. Even the paragons of job creation could not have made their companies or built their wealth or hired their workers without a demand for their products. And demand depends on the health of the economy. No titan of the marketplace could work her magic in a failing society, which brings us right back to the necessity of government not just as a wealth creator but as a job creator. Just as most wealth creation is not ab initio but derives from the providing of a desired service or product, so too does job creation depend on the consumer’s purchasing power and the health of the economy. This health is a dance between private enterprise and public policy. See the way the stock market embraces the Federal Reserve and vice versa! Yet from the way conservatives portray their version of a job creator, one would think he pays salaries from his own pocket rather than from the operating expenses of his company, but then maybe that impression is enhanced by the ridiculous salaries paid these self-styled giants of the marketplace. A moment’s thought should uncover that the real job creators for the bulk of the economy are the middle-class consumers whose income provides the demand that increases the cash flow that creates the jobs. This healthy cycle characterizes any productive economy. To single out the employer as the lynchpin of this cycle is to distort its nature. The United States has one of the highest levels of economic inequality in the developed world (but we are more equal than Mexico and Turkey. Yay!). One may make a legion of moral arguments about what various stakeholders in our economy deserve, leading to interesting discussions about minimum wage and CEO compensation, but as a purely pragmatic matter, the real job creators in our economy, meaning consumers, can hardly perform their part in the economic cycle if this level of income inequality continues. But the conservative moral argument disputes this pragmatic one. We may discern a number of reasons for the widening gap between rich and poor since the 1970s, but surely the position that employers have a more important role in the economy than workers, and the concomitant conclusion that they have a moral right to a larger slice of the pie than at any time in our history (excepting the ominously significant year of 1928), are largely responsible for the current disparity.

In moral philosophy, we see the concept of “ought implies can,” a very valuable check on the applicability of moral principles. It is fine to say that such and so moral principle should apply, but the argument is defeated before it begins if no way exists to apply it. It is worth asking if the conservative argument about job creators introduces the ought/can issue. In other words, should moral principle bow to pragmatic necessity? Because consumers are as necessary a part of the business cycle as employers, should that be the end of the discussion? Does their practical necessity as purchasers trump the moral argument for the superiority of job creators in creating wealth? I would argue no, for no amount of pragmatic limitation would tamp down the position that bosses should prosper disproportionately to their employees as much as business cycles will allow because of their greater contribution to the general welfare. Granted, this concession would at least recognize the moral worth of workers to some degree, which would be a decided improvement over the current blindness that elevates employers to godlike status. But even in the healthiest of economies, some pragmatic positions require moral interrogation: that workers are interchangeable drones; that shareholders matter more than employees; that profit is the only real product of any business; that crony capitalism, influence peddling, and corporate welfare are acceptable governmental functions; and that rigging the system so as to deliver obscene wealth to a fortunate few while denying fairness to the rest satisfies the obligation of government to deliver equal justice under the law. We not only can do better as a society, but we ought to.

 Which brings me right back to economics as a science. I began this entry by admiring its devotion to data and quantitative analysis. I will end it by pointing out the most glaring reason why economics as now constructed can never be a real science: it can never apportion proper value to the human concerns that the economy should serve. The goal of science must be to find truth. It does not have the means to find goodness within its modes of warrant, and questions of value are always questions of goodness. I have frequently discussed the wise blindness that science must bring to its objects of analysis (most recently on July 6, 2014) in order to provide a reliable warrant for its truth claims. Like its sister human sciences, economics can never provide that warrant, which is why even those of us lacking in expertise should apply our reasoning to its provenance.


Tall Tales

Even while I studied and taught literature, I was always troubled by the loose linkage between stories and reality. I am not talking about the reality depicted in the stories themselves. It has always struck me as right and proper to object when they violate their own premises. This might be as simple as the continuity errors that eagle-eyed viewers always point out in movies. Look: her glass was half-empty, and in the next shot it is three-quarters full. A more serious failing is the deus ex machina that rectifies a failing story line at its climax. Or perhaps a character acts entirely contrary to her nature without sufficient cause, leading the reader to scratch her head in bemusement or throw the book down in disgust. Still, the bar we set for fiction is pretty low. It need not mirror life, an act critics call mimesis, so long as it remains true to its own premises. If pigs can speak to spiders in the first chapter, if choruses burst into song in the first act, the observer asks only that the same rules apply later, and if things change, that the change is explained so as to allow the work to remain internally consistent. Stories exist in their own world, and that is what pleases us about them.

Only they don’t and it doesn’t. Three-year-olds can effortlessly navigate the gulf between created reality, the made-up world of fiction, and the common reality we all participate in, but something odd evidently happens to grown-ups, and the problem only grows more serious with education. Sophisticated critics and professors of literature engage in an interesting sleight-of-hand in examining the relationship between real and imagined. If confronted outright with a request to define the connection per se, they will deny any explicit linkage because even a moment’s thought will introduce the iron curtain that divides the real from the imagined. But five minutes later they are enthusiastically dissecting mob mentality in Billy Budd or the moral implications of the Grand Inquisitor chapter in The Brothers Karamazov. They seem at least to sense the problem, so they seek cover by referring to Melville’s or Dostoevsky’s view of things, but what do they expect their often-captive listeners to do with the analysis they are conducting? Are we to confine the novel’s meaning to the fictional world of the nineteenth-century whaler or a Russian Orthodox monastery? Are we to infer that these two brilliant creative geniuses have nothing to say to the common reality they inhabited – or perhaps to the one we now inhabit – that their brilliance is curtained by the imaginary worlds they created, worlds so rich and dimensioned that we can drag ourselves back to reality only by an effort of will and once returned remain strangely lost, with one foot in the real world and the other in the somehow richer world pinned to the pages or etched on the DVD? Adolescents emerge from the theater lobby with plans to play quidditch or kick-box or buy an assault rifle. Adults finish Macbeth with a richer understanding of the perils of ambition. Really?

I’ve been bothered by this issue for many years, but like so many other super-macroscopic cultural issues, it seemed few others shared my concerns. But in a recent TED talk, the famous sociobiologist E.O. Wilson was asked about our zeitgeist’s obsession with narratives, an issue about which he expressed some concern. I think it time to delve into my own discomfort.

Like so many other big-picture problems, this one suffers from a poverty of appropriate terminology. Wilson observed that our evolutionary bias is toward confronting reality. But, of course, we can’t do that directly, for before “getting” it, we must construct our version of that reality, a process I have termed the virtual circle in these posts (please see August 13, 2013 for a fuller explanation). An entirely accurate construction that mirrors reality in perfect detail is something I define as the virtuous circle, an unattainable goal yet one we cannot help pursuing and compositing in all of our truth, goodness, and beauty claims (for more on this one, check out October 2 and 7, 2013). The mimetic process that goes on in our minds as we attempt this construction occurs constantly and may be considered the perpetual goal of all of our perceptual and most of our rational efforts. Our struggle to identify the true, engage our natural and preferential freedom to choose the good, and negotiate the difficulties of appreciating the beautiful occupies most of the moments of our lives. But not all. Wilson implied that the created reality of narratives provides us with just what we have been exhaustively searching for: a consistent and comprehensible world that we gradually come to understand but, unlike our own, one that we merely observe rather than feel forced to make choices in. By that logic, we jump on continuity errors and narrative inconsistency with an almost feral anger, for our minds are led to think of these created worlds as being the thing we most seek: a mimesis without self-contradiction. Further, our poor brains, exhausted by the creative effort of half discovering and half creating a common reality that we but poorly understand, gratefully drink in the balm of the fiction we indulge in, absorbing at one level the completeness of this intelligible but imaginary reality yet simultaneously engaging in the same kind of logical analysis that we bring to every conscious moment of our existence. We naturally do so. When I taught literature, I had to remind students that nothing in the created reality of a piece of fiction occurs by chance. Everything is intentional in the made-up world. They found that notion incredible because real life doesn’t work that way. That intentionality is what we yearn for in common reality. It energizes the world’s religions as, I suspect, it sustains the scientist’s trust in her method. There is an authority in creating an imaginary world fully furnished with simulacra of the one we are assured a greater Author created, and there is comfort in turning past the title page or watching the opening credits, knowing that what follows has order and meaning. We delight in immersing ourselves and trusting that world for a very good reason: it differs from the real one, so often dull, bewildering, and meaningless. If narrative literature existed only to produce that delight, it would have paid its way in this weary world. That immersion into another world, into the author’s consciousness, justifies what Robert Coles termed “the call of stories” in his excellent book of the same name. But it seems our brains are made so as to ask more of literature than it can deliver. We cannot help but mine stories for meaning, to ask them to cross over.

No wonder children emerge from the theater’s cocoon with their fingers pointed and thumbs cocked. No wonder critics knit the imagined world of the narrative not only to their own virtual circle but to the virtuous one, drawing out of the created world lessons for the one we all inhabit. Though the process is a natural one, the imaginative interweaving is facilitated by the ease with which students of literature deal with figurative language, especially metaphor (an issue I’ve addressed in another context in these pages on October 2, 2013). They are used to thinking of one thing in connection with another, but the relationships they establish are necessarily imprecise and allusive. And so they shy away from a discursive and frank appraisal of the relationship between created and common reality. The term they would use to describe such an attempt is reductionist (a disparaging term whose reputation I attempted to salvage in these pages on September 3, 2013). I see nothing amiss in asking the critic to state with some precision what relationship the events in a fictional world have to the reader’s participation in the real one. If the author intends to communicate some wisdom about common reality through narrating the experiences of characters or the voice of some speaker, fine. It strikes me that we often communicate our experiences in just that way, intending to impart some wisdom to our listener. But that common sense view runs into two major roadblocks in regard to fiction.

The first involves the limitations of experience. Of the correspondence truth tests, experience is both the most commonly applied and the most unreliable. (For a fuller explanation of why, please see my posting of October 7, 2013.) We build our virtual circles largely from experience, which explains both why we find reality so difficult to comprehend and why the transference from fictional experience to virtual circle seems so natural. But experience’s limitations should caution us against this temptation. As a justification for truth and goodness claims, experience suffers from contextualization. It is necessarily unique and unreplicable, so the lessons learned from each experience are only loosely applicable to later and similar ones. And as experiences are perceptually registered, they are altered in ways we cannot be conscious of, since sense data is filtered pre-consciously so as to present us with a fully constructed picture of reality. This distortion is profound enough to support postmodern charges that experience itself must be private and subjective. I argue in opposition to this charge that our reasoning about experience may produce a degree of intersubjectivity that allows us some broad consensus. This universal reasoning faculty applied to subjective experience is the anchor that moors us to a common reality. But what do we reason about when we think about a movie or novel? What facts of experience can we accept as true in this created world?

The second problem concerns the intent of the creator. A novel or a movie is a work of art, subject to conventions governing its genre and aesthetic considerations that shape its substance and style. Those who ask that it also somehow convey truth in its storyline are asking it to accomplish a second and divergent goal, for as we all know to our sorrow, unvarnished common reality could rarely be mistaken for art. To make it so, the creator must distort reality as a sculptor molds her clay.

To illustrate the issue as it affects literature, contrast a reaction to a biography with a response to a piece of serious fiction. I just finished Christopher Hitchens’ short biography of Thomas Jefferson and David McCullough’s fine life of John Adams. Both works were polished pieces of craft with distinctive styles and authorial expertise. Both followed genre conventions for biography. I consider both to be fine artistic creations. Both paint a fairly unflattering portrait of Jefferson. While I may have quibbled with a few incidents or details, I approached their portraits of the sage of Monticello with equanimity. Here were expositions of another life and another time. Like all actual lives in all times, there were loose ends and unknown motives, questions and inconsistencies. Whatever telescoping of perspective or framing of events, whatever intentional omission or sharp focus each author effected, I judged to be in service to the attempt to convey a true account of a real life that I was free to further investigate and confirm or dispute. In contrast, at the same time I was reading Edith Wharton’s novel The House of Mirth, whose storyline chronicles the rise and fall of a Gilded Age social climber, Lily Bart. It was as rich in historical detail as the biographies, with a comprehensive picture of the social environment of her times. The machinations forced upon Lily by her class and gender roles were as deeply affecting as they were exotic to this twenty-first-century male reader. But at the novel’s end, what was I to do with these insights? I had entered a richly furnished late Victorian room and had trawled the minds of all its denizens, had observed their triumphs and bitter falls, and upon closing the book had stored it all in memory. What part of that memory may I think real? Edith Wharton actually inhabited rooms like those she portrayed, joined the social elite, and undoubtedly was acquainted with many a nouveau riche. But what of it? Her admirers will claim that her novel gives us the “flavor” of that life, or an “insight” into it that somehow translates into knowledge of it. But they ask for too much, for at the same instant they wish for the novel to also exist as a unique aesthetic object, one crafted intentionally to produce an emotional response. These purposes are of necessity in conflict. Two examples of that inevitable conflict should suffice to make my point. First, we know from the title page on that something will happen in The House of Mirth, and – wonder of wonders – it will happen to the central character! That is a piece of luck! Funny how life turns out differently. Furthermore, somehow the novel’s reader will know enough about the characters and events of the novel to get a pretty rounded picture of not only what happens but why and how and, even more miraculously, some causes and effects of the events depicted. Now even if we are willing to suspend disbelief sufficiently to correlate these unlikely findings with reality, we face yet another problem regarding the insight we are being handed: we are being asked to trust that the author is skilled enough in her authorial craft to accomplish her artistic ends and at the same time observant enough of real truths in the real world to communicate an experience truthfully and reliably, and not just any old experience, but one that conveys some essential truth that cannot be communicated discursively (if it could be, it would be, for discursive language is far easier to employ than writer’s craft).
How are we to judge any single feature of her novel as a piece of experiential truth? After all, every representation is from some angle dictated by aesthetic rather than experiential requirements, and we cannot know how accurately any imaginative creation mirrors what it reflects. If Wharton had slipped in some anachronism as a private joke, would I have known? If she grossly exaggerated Lily’s paralysis in the face of rigid gender roles to drive home some private grievance or authorial machination, if she employed some Dickensian character or plot twists to dramatize her storyline, if her unflattering foray into the consciousness of her male figures was prompted by some misanthropic impulse to stereotype…. How would I know? Can I ever separate my memory of the watering holes of fin de siècle society imparted by her creativity from the histories of the era I have studied? Should I try? I hear a great deal of talk about “artistic truth,” “theme,” and “deeper meaning” in discussions of literature. I would like a clearer understanding of just what knowledge such ideas entail, not to mention the more difficult issue of how such truth claims are warranted. How do artistic genius and authorial skill translate into depth of knowledge of what is grandiosely termed “the human condition”? (For more on aesthetic judgment, please see my post of December 13, 2013.)

As a finished artistic creation, the novel stands on its own to enfold us as a unique intentional work crafted to produce the “disinterested delight” that Kant said characterizes all works of art. I get that. But just because it has those qualities, I question any “lessons” the work can offer us: lessons about history, sociology, psychology, or, in the words of the English teacher, “life.” My brain cannot help but form the same synthesis with this imaginary and created world as it does with the mimesis of the real world I construct as my virtual circle. After all, mirroring reality is what it does for a living. Though natural, such an effort is, I am convinced, delusionary and dangerous, and it should be resisted rather than embraced.

Plato recognized the danger. In Book X of The Republic, he envisions a utopia without creative arts. We largely discount his warning today unless we buy into his theory of forms, whose architecture allowed him to see artistic creation as a mirror of a mirror. Since common reality for Plato was merely a reflection of the ideal, any artistic creation that fulfills a mimetic role must reflect common reality, thus distancing the observer even farther from contemplation of the ideal. One hardly needs to subscribe to the Platonic vision to make that complaint. Consider the Augustinian objections to secular literature still exerting their force in the closure of the theaters during the English Commonwealth, or Jefferson’s well-known quarrel with reading fiction.

As in so many things, Aristotle disagreed with Plato, and at least a whiff of his argument attends every subsequent effort to find truth in the narrative arts. The power of fiction, according to Aristotle’s schema in The Poetics, is to distill the essence of experience rather than any particular and therefore unique perspective. Just as he envisioned our knowledge of abstractions to be gradually constructed from multiple exposures to their instantiations, so too did he see the artist’s role as distilling experience into its essential archetype, the defining characteristic of the essences portrayed in the narrative. Macbeth is an imaginary king as Shakespeare portrays him, but his approach to gaining and retaining power typifies a certain type of monarch, or so we like to think. The muthos of the play, its essential story, is thus both imaginative and didactic. The author both creates and instructs. The audience responds to effective archetyping with catharsis, which Aristotle saw as an emotional purging. We might grow as callous to blood as Macbeth himself if we actually knew him, but when watching him onstage we retain just enough emotional distance to explore regicide as an idea and experience our response to it as a vicarious emotion. So we are double winners, Aristotle claims. We derive the emotional charge of involvement with the intellectual depth of detachment. We end the narrative emotionally spent but rationally energized. Aristotle’s arguments are powerful, but they fail to bridge the gap between the imaginary and the real. Certainly, what we experience in Macbeth is a powerful emotional ride that leaves us exhausted well before Macbeth loses his head. But only the catharsis is real, not the manipulated events that produce it, and what experiential truth can derive from events that are so clearly manufactured? I do not mean to say that immortal characters and events cannot be consensually discovered in great literature. We approach our Willy Lomans and Don Corleones with too much reverence to claim that our emotional response to narratives cannot build immortal archetypes. But these are cardboard cutouts compared to any living person. Their power derives from the crispness of their definition, and that clarity is entirely a product of their being merely artifacts, framed by intent. As for the intellectual power we derive from experiencing their fictional world, I would argue that it is precisely these singular great characters and storylines and the profound implications they generate that produce the greatest intellectual dissent among critics and literary experts. Our emotional response is molded by the intelligence creating fictional narratives – this is, after all, a world created to elicit it – but our attempt to interrogate that response and extrapolate its significance to common reality must splinter into private conviction and public conjecture when it crashes against the wall between created and common reality. Archetypes there may well be and catharses they may well produce, but when we attempt to derive real-world truths from them and put those truths into discursive language, we enter the thicket of controversy that fuels a hundred academic journals and a thousand websites.
The deepest wells of the narrative arts, the Hamlets, the madeleines, the Rosebuds, the monolith, which in Aristotle’s schema should produce the deepest and most powerful consensual truths, lead instead to the most vociferous dispute among experts who try to frame those truths in the discursive language of the academic article or popular essay. Why is that? Could it be that the “truths” thus communicated about “the human condition” are as numinous as a religious conversion? Could it be that no reliable truths about “real life” can be produced by the portrayal of an unreal one?

That the narrative arts must serve mimetic purposes seemed relatively undisputed until the Romantics refocused the spotlight upon themselves. But this was hardly an improvement since it necessitated exchanging the universal for the personal, with all the attendant temptations of private experience proffered as artistic genius. These nineteenth-century obsessions were magnified by the growth of popular culture and the rise of literacy, cheap publication methods, and universal education. By the twentieth century the new narrative forms of film and television guaranteed the ascendancy of the narrative not only as art form but as educational tool. And a new philosophy emerged to spotlight the narrative form, to place it at the very center of its premises. I have often written about the early twentieth-century transition from modernism to postmodernism in these pages (please see posts of July 22 and 30, 2013 for more). Postmodernism’s veneration of creativity and subjectivity was matched only by its disdain for empirical science and rationality. When joined to the new technologies that celebrated the narrative form, it stimulated a powerful effort to link created and common reality.

Its focus on creativity, criticism, and irony guaranteed that its approaches would be heterodox, so it took a while for postmodernism as a movement to reach full steam. Its groping for consistency coincided with the maturation of both the movie and the television industries into the powerful social forces we see today, and no one familiar with either could deny the countercurrent of sappy Romanticism that characterizes not only these media but also popular literature (for more on the formation of one hybrid archetype of this era, the antihero, see my post of November 26, 2013). Postmodernists embraced the individualist and subjectivist biases of the Romantics, along with a near deification of the artistic rebel. Their mature theory could be discerned in the works of a cadre of mainly French intellectuals by the mid-1970s. They were academics and littérateurs who found fertile soil for their theories, and indeed often communicated them, in literature rather than in philosophy. In appealing to literature to carry philosophical weight, they were honoring a long tradition that included Freud’s grounding his theories in Greek mythology and John Dewey’s reliance on Rousseau’s Emile to support his Progressivist educational theories. But the postmodernists sought even more pride of place for the narrative form. In their terminology, the great historical movements of the modern age were grand narratives, merely widely accepted stories that cultures tell themselves to justify the status quo. Abstract and discursive political, religious, and moral theory is thus dismissed as mere storytelling with all of its fictionalizing. Ironically, postmodernists value another kind of story: the mini-narratives of individuals and of previously neglected and oppressed groups. They taught a generation of aspiring literature instructors to seek out truth in these untold stories. But note the difference between historians reading letters from Tuskegee Airmen and movies celebrating their service in World War II. In the parlance of the movie trailer, “based on a true story” is simply another way of saying, “not true.” Postmodernists also advocated subjecting what they scornfully called the canon of dead, white male authors to a critique using deconstruction, whose purpose was to mine their fiction and poetry for evidence of grand narratives perpetuating exploitative social orders. While racist, homophobic, misogynistic, and capitalist undercurrents certainly swirl through the fiction of the canon, and while avid students pride themselves on uncovering them, I find it disturbing that they assume their molding influence on readers without asking what I think is the more basic question. Yes, readers are seeing in Tennyson or Hugo a disturbing misogyny, but so what? Yes, readers then and now should not have their prejudices confirmed, but not merely because we prefer our prejudices to theirs but because these imaginative works are neither sociological investigations nor psychological confessionals. Perhaps every human creation from cave paintings to kewpie dolls screams a political manifesto, but for my money the meaning is brought to the reading rather than derived from it. Deconstructing fiction is said to have given critics what they have always desired: equal partnership in artistic creation. I doubt if serious academics would have gone for it if they hadn’t already accepted the claim that fiction qualifies as another form of philosophy.
But the effort to find truth in fiction was only the first step, leading inevitably to the goal of all truth claims: finding goodness. Let no one think the postmodern method is morally neutral despite an implicit rejection of objective moral standards, for its program of social reform is built on the model of the human sciences (not a good idea at all; please see my post of September 2, 2013 for why). In substituting sociology and psychology for the kind of pure aesthetic John Ruskin favored, postmodernists transform imaginative works into covertly polemical ones, replacing one exaggerated influence with another, using the narrative form to pursue weighty political ends it could never support. They mock Ruskin’s Romantic pretensions as the merest fluff while mining literature for justification for their moral crusade (I am entirely sympathetic to their egalitarian agenda, by the way, though I find their warrants unforgivably simplistic; please see my post of November 20, 2013). The truth content of created reality is simply too insubstantial to carry the weight of their analyses. They are shooting at bubbles in the air to fill them with rhetorical lead, but the bubbles merely dissolve, and with them goes the cathartic power of narrative media.

So what’s the harm? The child finishes the Harry Potter novel and eagerly reaches for the next in the series. The crowd files out of the movie theater marveling at the latest computer animation effect. The reader closes The Great Gatsby still envisioning the green light at the end of the pier. No harm there, only a rich emotional immersion into a created world. But it is so hard to resist the next step. Presidents mimic action heroes. Romance novel fans inspect their snoring husbands with disdain. Serious and intelligent students learn to seek truth in the deconstruction of the latest serious novel, yet they find no critical consensus on the wisdom it purportedly conveys. E.O. Wilson complained in his TED talk that our attraction to narrative prompts us to seek a simplistic, spurious intelligibility in the world around us. We want good guys and bad guys like in the movies. We yearn for happily ever after like in the fairy tales. We yearn for the omniscience of the novelist’s world. In doing so, we disdain the open-ended complexity of the natural sciences, the hard work of sustained commitment, the doubt and uncertainty of finding truth and choosing the good. We want our stories to be real and reality to be as silky smooth as a heroine’s cheek. But that cannot be.
