Theology’s Cloud of Unknowing

No part of the quest for knowledge of truth, goodness, and beauty matters as much as the search for God. It is only in the last year that I have found myself comfortable in that effort for two reasons: I have resolved to my own satisfaction some difficulties inherent in religious commitment, and I am beginning to understand the categories used by religious apologists.

The greatest proof of God’s existence and nature I can discover is neither the ontological nor any of the cosmological proofs. It is the existence of free will in the face of determinism. Frankly, it puzzles me that this argument is not used more extensively by religious apologists. On the contrary, atheists and agnostics hurl scientific determinism in the face of those who wish to claim that God acts in the world. I have argued the futility of religionists disputing determinism in the observable universe (please see “Religionists Fighting the Wrong Battle” of July 6, 2014). It seems a fool’s errand to deny determinism, for that would demand denying the truth of scientific discoveries based upon it. But these are pretty difficult to repudiate since they include not only the eerie correlation between mathematics and empirical research, but also the amazing interlocking bases in the truth claims of all the natural science disciplines. And don’t forget the deal breaker: the technological marvels that science has given mankind. Put simply, to deny determinism in the physical world is to deny that science works. Those who wish to argue that God acts in the world must refute the counterargument that the world unwinds itself in completely predictable fashion, such predictability constituting the lodestone of all scientific endeavor. Now at this point it might seem that I am switching sides, for the argument just given is the atheistic one: God’s action cannot be found in reality because determinism is irrefutable. Allow me to spring my trap now. The stronger you make the determinism case, the more you also make the case that God does indeed act upon reality. For to argue is to choose a side. And to choose is implicitly to deny determinism in favor of free will. Not even the most committed scientist can deny that she chooses her field of study, her theoretical and experimental efforts, and the conclusions she draws from them. The greatest refutation of cosmic determinism is our own sense of freedom.

Now I will confess that I was stymied at this point for the last decade or so, dismissing this sense of freedom as an error and a self-delusion, the same kind of error we make when we trust our direct perception of sense data or when we assume the virtual circle we create from applying reason to sense data is reality itself. I dismissed our sense of free will as just another hiccup in the epistemological/ontological linkage.

Only it isn’t. I am perfectly willing to relinquish any claim to free will, at least in the abstract. Logically, I can hardly do otherwise, for every libertarian and compatibilist argument that attempts to reconcile determinism with free will has failed. As we are indubitably material substances, I am perfectly willing to accept that we are as determined in our choices as the most inert object of scientific inquiry. But no scientist committed to such an inquiry would be able to reconcile her trust in determinism with the simple truth that she cannot avoid feeling free to commit to that trust. The great mystery is not that we are determined but that we feel free despite our knowledge to the contrary. I can no more stop struggling over my choices of what to judge true, good, and beautiful than I can stop my own heart from beating. My brain seems designed to recognize the natural freedom that lies at the center of my humanity, just as it seems compelled to exercise the preferential freedom involved in weighing choices as it yearns for the flourishing that accrues to wise action (for more on these three levels of freedom, please see my blog entry of November 20, 2013, “Our Freedom Fetish”).

Now this truth leads me to one of two conclusions. Either I truly am free, and I and others like me are therefore the only free things in a deterministic universe, or I merely feel free as a condition of my own consciousness, along with others like me, which also demonstrates an odd uniqueness of human nature impossible for other material substances in the universe to duplicate (so far as we know, and with a minor caveat for the very modest choice-making of higher mammals). I have come to realize that it hardly matters which of these options pertains. Either true freedom or the inescapable sense of it serves as a proof of human uniqueness. Granted, the notion that human beings alone among created matter are actually free would argue for the existence of the soul and place us in contiguity to God. But the other option also works. Even if my perceived freedom is an illusion, one has to ask: why that particular illusion? Why can I not escape moral responsibility for my judgments? I could tell myself my options are limited by heredity or environment, but that would do nothing to remove either my sense of moral responsibility or of culpability for wrong choices. C. S. Lewis once remarked that our sense that reality is unfair is proof enough that we possess some sense of divine justice, but I would argue that such an understanding rests on a prior sense of what is due, and even such a vague moral outlook is equally convincing evidence of our uniqueness. So it hardly matters whether we are truly free or err in our sense of moral freedom. Human beings are choice-making machines, but it hardly makes evolutionary sense that we expend so much energy agonizing over illusory choices when instinct would prove a far more efficient director of our actions. We don’t live in the kind of world many religionists would prefer: one in which everything operates directly on God’s orders, resulting in a miraculous and therefore incoherent reality that would frustrate any rational agent’s attempts to choose well. So we live neither in a world where we do not feel free nor in one where everything else does. We see two foundational oddities at work here: our sense of freedom and reality’s enslavement to determinism. The clincher is the third oddity that marks the connection between the two: the way these antitheses work together to bestow upon us a sense of rationality that guides our moral choosing. Nothing in the strongest case to be made for determinism forbids God’s action in the one area of reality we cannot help feeling exempt from determinism: our sense of moral freedom. Both this sense and reality’s determinism seem signs of the kind of Creator who choreographed the dance between that determinism and our ability to make choices within it.

As Kant said, the starry heavens above me and the moral law within me. That point of view might seem irrefutable, at least until one reads Karen Armstrong’s “The Case for God.” Her richly sourced investigation (I count 374 entries in her bibliography) makes a rather different argument: we can know nothing of God’s nature using reason or reasoned experience. Whatever we learn entails more negative capability than positive knowledge. It is a curious argument for several reasons.

First, it is odd that such a claim is structured as an argument. Armstrong traces a long tradition of mistrusting ratio (reason) as a means of comprehending spiritual reality, though she acknowledges its success in the kinds of endeavors ordinary life hands us. She prefers muthos (myth) as a means of religious knowledge. This sort of effort does not merely call for a rejection of reason as a means of knowing. Armstrong seems to think its success requires real affronts to our rational capacity: disorientation, contradiction, paradox, koans, and self-neglect are her route to God, one that deliberately frustrates the reasoning we apply to the rest of our experience for the very good reason that nothing else in our existence bears the slightest resemblance to the ineffability of the divine. Our natural inclination to use the tools of ordinary knowing tends to reduce God to something more familiar and pedestrian: an idol. Her exhaustive historical account shows that reduction to be a constant temptation for religionists, one quite understandable since she acknowledges reason’s central role in human activity. Her approach owes something to Kant’s aesthetic theories, and in her conclusion Armstrong explicitly compares religion and art. Kant thought aesthetic reasoning fundamentally different from practical reasoning because it recognizes the unique quality of aesthetic objects, which exist for neither practical use nor classification. Armstrong makes a parallel argument for our thinking about God, saying that we cannot regard God as a being, for that would mistakenly place the divine in a class with other beings. She differs from Kant only in arguing that we are incapable of thinking about God at all. For that reason she also favors what might be seen as imaginative alternatives to reason: myth, metaphor, analogy, poetry, visual arts, and music. She devotes a good deal of attention to these qualities in the holy texts of the world’s religions. The third route to religious knowledge Armstrong highlights involves the importance of will: commitment, ritual, prayer, altruism as an antidote to egoism, and meditation.

She charges Christianity with two historical rational errors. The first came with efforts to standardize Christian doctrine in the first few centuries after Christ. Old Testament writings and New Testament candidates for orthodoxy were gradually aligned so as to give logical force to claims for Jesus’ divinity, something Armstrong argues was never implicit in earlier Christianity. The second error came much later: interpreting scripture as historical and inerrant truth, she charges, was only made normative after the Enlightenment. Like other religious apologists who view science as an affront to religion (please see the blog posting on “Tao and the Myth of Religious Return” of October 13, 2014), Armstrong sees religious fundamentalism as a defensive response to the aggressive assaults of positivist science. Interestingly, she argues that this response has distorted and threatens to destroy religion, since such a defense attempts to rationalize religious belief and place it on an equal footing with other means of warrant more suited to practical reasoning than theology (please see “The Latest Creationism Debate,” February 16, 2014). In any case, she recognizes neither modernity as an ad hoc response to the self-destruction of authority in the Reformation nor postmodernism as a fundamental challenge to religious faith, though to her credit she does see the human sciences as a threat to contemporary religion, if only because the clergy are so eager to wrap themselves in the reflected glory of science. I must add that the same motive moves the human sciences in their imitation of the hard sciences (for more, please see my blog posting of February 9, 2014, “The Calamity of the Human Sciences”).

Other facets of Armstrong’s analysis are also troubling. First, her argument about God’s creation of the universe ex nihilo indicates that this “invention” of theology somehow negates any possibility of inferring the nature of the creator from the creation, but why should that be so? Both St. Paul and St. Augustine make explicit that we can indeed rationally infer something of God from the nature of the universe, and the assumption that it differs from the divinity that made it does nothing to invalidate that connection, so long as we never forget that we can only draw imperfect conclusions from imperfect reasoning applied to an imperfect creation. Second, she raises but hardly settles the issue of what the alternative approaches to religion she champions can warrant. She repeatedly argues that religious faith is useful as a source of comfort in the face of misfortune and death, and that believers have found a life of altruism to be richer than one of self-centeredness. But she never argues that muthos reveals or can reveal any real truths about the nature of divinity or morals, nor that the pragmatic benefits of religion are anything other than a gratifying illusion. Third, Armstrong repeatedly fails to distinguish between muthos as an extension of reason and muthos as an alternative to it, citing testimony from thinkers as divergent as Thomas Aquinas and John Calvin, Aristotle and Paul Tillich. This is a crucial distinction, one that separates two wildly different traditions in all organized religions, but Armstrong’s eagerness to advance her case blinds her to it. My own sense of faith is that its proper role is to extend reason to the corona of uncertain truth claims we simply cannot warrant with confidence (please see “Religion and Truth,” September 11, 2013). I also doubt that what Armstrong recommends can be accomplished, for we are too reliant on reason to make sense of existence to ignore its dictates in any one sphere of activity, especially one so central as theology (for more on this, please see my blog entry for August 4, 2014, “The Tyranny of Rationality”). Further, it seems that the only way Armstrong can claim that muthos is equal or superior to the kinds of truths ascertainable by reason and reasoned experience is to warrant it in a purely coherence sense, if only because the kinds of intuitions such efforts justify are so deeply personal. But a coherence warrant for a correspondence truth contains the seeds of its own disintegration (please see “Can Belief Be Knowledge?” of January 12, 2014) as well as the grossest sort of intolerance for differing interpretations. I find it deeply disturbing that in her entire analysis Armstrong never once considers the power of authority as a warrant for truth and goodness claims, but instead seems to validate psychological need as a sufficient justification for embracing the truth of religious claims. It seems too obvious to mention, but since she doesn’t, I will: people embrace all sorts of things regardless of the truth in pursuit of psychic balm. I hesitate to charge her with bad faith, but if innocent of that charge, she surely is guilty of sloppy categorization, for the question of whether faith supplements or supplants reason is one of the core questions of theology. Adherents of either tradition would surely resent being lumped with their opponents as Armstrong repeatedly does.

As one who has struggled for decades with the apparent irrationality of religious belief, I found perhaps too much comfort in Armstrong’s assertions that the core texts of religious dogma were never meant to provide rational warrant for religious faith, that their power lay in some allegorical, analogical, mythical, or poetic meaning (for more on the problems such a view engenders, please see my posting of October 2, 2013, “The Problem of Metaphor in Religion”). The message I take from such an argument is that any search for correspondence warrant stronger than authority in primal religious texts is doomed to failure, and that any exegesis is as much a creative as an explanatory endeavor.

So we are left with the plodding work of inference based on the nature of creation and the moral sense that shapes human nature. Perhaps Armstrong is correct in her central contention that we can know no essentials of the deepest mystery and pervading immanence of the Creator, but our minds seem ordered by both the determinist nature of creation and our unique sense of freedom to make the attempt.


Tao and the Myth of Religious Return

I’ve noticed an odd theme running through conversations and my reading over the last few months as I seek clarity on the nature of religious knowledge. Setting aside psychological, pragmatic, and utilitarian arguments so as to focus on how believers justify the core claims of their faith, I’ve found a surprisingly consistent common thread: an historical narrative that parallels the loss of Eden in Genesis. Only the serpent in this garden is science.

This is not just a version of the bumper sticker mentality: “The Bible says it/I believe it/That settles it.” It does not stoop to denying the determinism that underpins the scientific enterprise, a denial that affronts reason as well as science (please see “Religionists Fighting the Wrong Battle,” blog posting of July 6, 2014). Nor does it resemble the misguided attempt to establish some parity between the reliability of religious and scientific methodologies (see “The Latest Creationism Debate,” blog posting of February 16, 2014), an effort foredoomed to failure. Its attack is far more subtle, respectable, and powerful. These mythmakers deserve a thoughtful response.

Perhaps the most impressive phrasing was given by Alasdair MacIntyre in his magisterial work on ethics, “After Virtue.” C.S. Lewis covers some of the same ground in his most direct polemic, “The Abolition of Man.” Chesterton, Newman, Eliot, Tolstoy, Solzhenitsyn, Maritain, and a host of other very respected authors make their own versions of the same case, each differing in some details but all agreeing in essentials.

The story they tell is this. Something vital has been lost to culture, stolen by the revolution in thought begun by Descartes in the first half of the seventeenth century. His attempt to establish objectivity and autonomy for our pursuit of knowledge was misguided hubris that launched the scientific enterprise and the Enlightenment, which in toto have rained catastrophe on western culture. Our fading hope for reprieve can only lie in a return to traditional values informed by religious truth, rejection of materialism, and repudiation of scientific theories of man.

Altogether, it is a good story. Some of it is even true.

The first thing to notice is precisely that it is a story, one with the requisite moral. In fact, it is a very old story, as old as the Epic of Gilgamesh and Noah’s flood. It is the story of Eden, of the Pharisees and Jesus, of Augustine’s two cities. Equally telling, it is the story of Plato’s cave, of Aeneas, and of Lewis’s beloved Norse sagas. The tale of the wrongly chosen path and of human hubris is both an archetype and a touchstone. It informed the entire Romantic era’s love of all things medieval. It inspires the young through tales of Atlantis as it characterizes their grandfathers’ fond recollections of misspent youth. It surprised me to find so much unanimity among philosophers, theologians, cultural commentators, and poets about the centrality of narrative to an understanding of truth, at least until I recalled how saturated they were by Romanticism as it was filtered through the artifices of the Victorian era and how antagonistic they were to the discursive language of science.

But pegging the story’s roots does not refute its truth. And it goes without saying that nearly all modernist literature of the first half of the twentieth century was colored by just this sense of loss and diminishment. My issue is not with the loss per se. It is with the nature of that loss.

For the regret expressed by the mythmakers is rooted in historical and ethical generalizations that cannot face real scrutiny. I count seven serious errors in their analysis, any one of which would prove a fatal blow to their version of events.

1. It is clearly untrue that there was some homogeneous value system that scientific thinking ultimately attacked and is in the process of destroying. What C.S. Lewis calls “the Tao” as a shorthand for traditional values was no more coherent than the lost America conservatives wish to resurrect based on wholesome television series of the 1950’s. No single moral system characterized world or western culture before the scientific revolution. One need only consider the challenge medievalism posed to classical culture to see how fragmented western ethical history was in the Christian era, not to mention in other parts of the world. Probably the most heavily sourced of the works I’ve read recently is Michael Aeschliman’s “The Restitution of Man,” which marshals thinkers as diverse as Cicero and Samuel Johnson to his cause. That they would have been surprised to find their views lumped together would be an understatement. The deeply religious authors who present this myth of moral unanimity need hardly have looked beyond their own Christian faith for disproof of their contentions, for the bloodbath of the Reformation is sufficient proof that no moral position went unchallenged during that miserable era when religions warred over divergent moral outlooks. What could they be thinking to claim otherwise?

I have yet to see a straightforward answer to this question from any of these thinkers, but I think I can provide one that they conspicuously avoid supplying. While we see no less moral controversy before Descartes than we’ve seen since, the grounds of the argument have shifted. The unanimity was not in the truth and goodness claims offered by pre-modern thinkers. It was in their mode of justification. What writers like MacIntyre and Chesterton wish to return to is the power of authority as a warrant (Granted, they locate the source of that authority in different places, for MacIntyre the culture and for Chesterton dogma, but both revere tradition). It was authority that the eighteenth century attacked and defeated, something only made possible by the glaring deficiencies religious authority made manifest during the awful decades of the Reformation (for more on those deficiencies, please see my series of postings from January, 2014).

2. These critics of modern science treat its rise as an unprovoked challenge to traditional values rather than a desperate attempt to find alternatives following their collapse in the Reformation (for more, please see my posting “Modernism’s Midwives” of February 2, 2014). But by ignoring the causes for, say, Descartes’ efforts to find consensual warrants for truth claims amidst the ruins of the French Reformation, they also overestimate his success and underrate the power of later attacks on his method. One can hardly blame writers like Chesterton for missing the postmodern revolt that was emerging in his own time. Perhaps he might have seen more clearly how modernism’s warrants, reason and closely examined experience, were assaulted by their very modes of analysis in ways that authority could never withstand if he had realized that the tradition he most revered was authority itself. I think a critic as brilliant as Lewis would have recognized, and abjured, the postmodern revolt against modernism and indeed might have then traced back the roots of his unease, but his death in 1963 came before the brilliant formulations of postmodernism that mainly emerged in the 1970’s, themselves explaining events that had been sorting themselves out since the turn of the twentieth century (for more on this process, please see my book “What Makes Anything True, Good, Beautiful? Challenges to Justification.”). Why MacIntyre, writing in 1984, failed to see it befuddles me. By rooting values and moral duties in culture, he seems to find agreement with postmodernism, though how he could avoid the charge of cultural relativism that tarnishes their arguments escapes me too. I assume the nostalgic mythologizers derived what they took to be proof from yet another error, but one that has deep truths buried within it.

3. They assumed that the “science” that emerged from the birth of empiricism in the seventeenth century is synonymous with modern science, so the boasts of a Bacon or a Comte might be seen as proofs of scientistic hubris. This issue requires a bit of teasing out, though. First, the nineteenth century began a continuing effort to refine what counts as valid scientific experience. That effort has been reductive in the extreme, rooting out pseudo-sciences and outliers as it builds disciplinary paradigms and establishes links across fields of study. No one can deny that the early proponents of empirical processes were guilty of hubris, but one glory of their method of warrant is that it is self-correcting, and the boasts of early natural science have long been muted as science has matured over the last century. No one who understands true science can argue today that its efforts are guided by any greater value than respect for truth. On all other values, true science must remain mute for the simple reason that its focus on material, measurable, and mathematical substance provides no means to warrant moral claims. Had its critics clearly differentiated truth claims, which science warrants exceedingly well within its sphere of competence, from goodness claims, they would have recognized that morality was safe from science.

But we might pardon them for their error, especially when we see how slow science itself has been to learn its own limits. In this, the mythmakers have their strongest point, for the human sciences have proved guilty of all their charges: hubristic, value-laden, misleading, and a threat to every other means of knowing (for more, please see “The Calamity of the Human Sciences” blog entry of February 9, 2014).

4. But the mythologists here make their fourth mistake in that they don’t seem to distinguish the human from the natural sciences. The former justify all their charges and have since the Enlightenment first championed “the science of man” as an extension of “the science of nature.” Their failure to distinguish the two might lead us to the conclusion that these writers are just a bunch of old men who yearn for the good old days. There is some truth in that, as there is power in their charge that the “human sciences” are neither science nor humanizing. But any natural scientist could have told them that. Real science resents “soft science” basking in the reflected glow of true science at least as much as traditionalist humanists do.

But it is not likely that academics trained in the arts and humanities would seek out the counsel of the college of sciences. For the first two-thirds of the twentieth century, they attempted a flanking attack that drew its power from some of the more gruesome scientific accomplishments of that difficult time. This tactic might be called the “Mary Shelley” argument: that natural science freed from moral restraint would create abominations. It did. The killing fields of the Great War faded into nightmare only to be replaced by the genocides that followed, and then by total war and the specter of the mushroom cloud. We may be too close to that era to appreciate how powerfully these prophecies affected culture during the Cold War. Scientistic dystopias of the “Brave New World” or “1984” variety may have seemed possible, even likely, but history has shown them to be the fifth mistake of the reactionary mythmakers.

5. The technology produced by the scientific revolution has not diminished human flourishing; the consensus judgment is that it has improved life. At any rate, the technologies that natural science has wrought are so woven into the fabric of world culture today that it is far more difficult to imagine a successful Luddite rebellion now than it would have been a century ago.

The mythmakers were more perceptive in tracing some of the cultural products of a technocentric culture. Perhaps it was natural that they would characterize the popular view of science in terms reminiscent of the laity and the clergy. After all, this was their preferred social structure. They were correct in seeing the layman as befuddled and overawed by the new priesthood of scientists, viewing their accomplishments as equally mysterious and inexplicable. This credulity is a major motive for the human sciences’ efforts to ape the terminology and training of the natural sciences, though, of course, without their successes. It is perhaps more the laymen than the scientists who merit the charge of scientism, for an exaggeration of the capabilities of science analogous to magic can only succeed for the outsider. No practitioner of modern natural science could perpetuate such a hoax from within the discipline. Coming from pseudo-scientists and some practitioners of soft science, such overblown claims, with their echoes of the early prophets of true science, might impress laymen and other nonspecialists for a while. I should add that popular cosmology is sometimes guilty of that charge, particularly when it attempts to confront questions of the universe’s origin that depend on shaky theoretical underpinnings. What this popular scientism does lead to is a misunderstanding of the nature of moral thinking, but it is an error the mythmakers share. The layman yearns for moral certitude somehow produced through the methodology of science. The mythmakers are right to condemn this as a false hope, for no “ought” can ever derive from even the most certain “is.” Or to put it in liberal arts terms, no imperatives from the indicative. Every capability of science requires a moral injunction to direct it, and that injunction can never be derived from the science that serves it. Medical science may extend life, but it may not decree that life ought to be extended. But the mythmakers’ nostalgia for the moral directives of religious authority is, like their historical narrative itself, a longing for a myth. Authority of any kind must founder on disagreement. It cannot resolve dispute within its own mode of warrant (for more on why, please read my blog on “The Fragility of Religious Authority” of September 18, 2013). Neither empirical science nor religious authority can provide certain moral guidance in a multicultural climate. What can?

I consider expertise to be an admirable guide to judgments of quality as well as to issues of truth that yield to repeated and studied experience, but I must agree with the mythmakers that expertise is difficult to come by in the rough-and-tumble of undifferentiated experience. So in that sense, they are right to condemn the mid-twentieth century’s obsession with soulless professionalization and mass efficiency, though I should quickly add that an increasingly complex society cannot survive without bureaucracy and middle managers.

6. Still, the mythmakers’ ominous charge that in the absence of religious morality, “efficiency experts” and technocrats would by default become the moral arbiters of mass culture proved to be yet another error on their part. Wouldn’t you agree that is a role more likely assumed by commercial artists and entertainers in today’s culture? At any rate, expertise, though a powerful justification for many kinds of truth and goodness claims, can have only a tenuous hold on moral ones, and even then only the expertise derived from a long life well lived. Neither scientists nor technical experts have replaced bishops, ministers, or mullahs as moral arbiters.

The “Myth of Religious Return” so prized by conservative literati is a good story for sure. But like all narratives, it suffers in any effort to translate it into discursive language (please see the blog entry “Tall Tales” of July 14, 2014 for more). Without a doubt, the failure of their analysis can be traced to their misreading of the dawn of modernism, a thought revolution spurred entirely by the dismal failure of religious authority over the century and a half of Reformation conflict. But the mythmakers failed as miserably in understanding their own age, and this serious mistake constitutes their final misjudgment.

7. They failed to appreciate the postmodern revolution that rejected modernism at the dawn of the twentieth century in favor of group identities molded by spurious claims of social science, existential Romanticism, utilitarianism, and American Pragmatism. We are certainly still suffering the consequences of postmodern moral thinking (please see my blog posting “Postmodernism is its Discontents” of July 7, 2013), and some of the strongest objections conservatives raise to the current moral climate are valid objections to postmodern thinking. Still, thanks to the Enlightenment revolution, itself condemned by both the mythmaking reactionaries and the postmodern nihilists, morality is still seen as the most prized possession of the individual’s rational will in pursuit of what it calls good. Many of us exercise that will by choosing to respect religious authority, though less completely than in ages past. Rather than yearn for some mythic, medieval paradise lost, religionists must compete for the moral assent of their adherents as adults, not as children cowering in fear. The intellectual revolution that freed reason from authority also established the sacred right of the individual to choose her own moral outlook justified by her own moral warrant. The mythmakers are certainly correct in asserting that this hasn’t always gone well, but moral agency does not preclude error any more than it perpetuates it.

 

 


The Tyranny of Rationality

My argument today can be summarized thus: we all are deeply, deterministically rational.

Since we partake of three realities (brute reality, our constructed image of it, and the language we use to convey our knowledge to others), it seems appropriate to examine this claim in reference to each. I do not mean to say that the external reality, the tree that falls in the forest, is rational. Brute reality simply exists, and any character attributed to it requires an interpreting mind. So if we should decide that, yes, the external world is rational, what we are saying is shorthand for what we really should be saying: that we can apply our rational faculties to the substance and events of the world with some confidence, knowing that the predictions and explanations we produce will prove accurate. Further, if they prove inaccurate, we know that our rational faculties can locate a more accurate prediction or take into account some previously hidden factor that will then explain to our satisfaction the way of the world.

Now this congruence between brute reality and our own thinking about it is really very mysterious, for there is no good reason why creatures produced by brute reality should be able to unlock its secrets as well as we do. From the way I’ve framed the two realities referenced thus far, you might assume that the correspondence I have mentioned is rooted in empiricism, the natural sciences. And who can doubt that the disciplines of the hard sciences have proven the exemplars of unlocking the mysteries of nature with the key of human reason? Nothing at all surprising there. You might wonder why I bother bringing up such a cliché.

My answer is that the methods of the hardest of the natural sciences, while profoundly rational, are only more rigorous applications of something deeply rooted in all human experience, something we can no more shuck than the wetness of rain. Even our stoutest protests against rationality, the ecstatic cries of mystics and the Kafkaesque wails of nihilists, are logical shafts of light in a metaphysical darkness and no less attempts to build a working model of reality than applied particle physics. The difference only makes sense in consideration of the process.

Long before neurologists began their contemporary struggle to map the brain, philosophers attempted to probe the mind and its workings. The pioneers of this effort, the first epistemologists, sought to answer the question of how the mind represents brute reality. They quickly discarded the Aristotelian model of direct perception despite its dominance in the thinking of the seventeenth century. The sort of naïve assumption that we perceive the world complete and entire, that our senses present to us a “true” picture of reality, is a hard one to dismiss, for it is our default approach to experience. But it is patently false. From the vanishing point in art to dreams, hallucinations, and apparent time compression and expansion based on our level of enjoyment, we don’t need to look very hard to find that our perception of brute reality is something different from the reality itself. And don’t even bring into this issue the limits of perception as exemplified in quantum theory and general relativity!

John Locke’s representative theory of perception gave us our second reality, an internal reconstruction of the external one that in his view was a pretty effortless reproduction constructed by the mind. Locke argued that all of our thinking is composed of perceptions and reflections upon them. Of course, that notion of a “reality movie” playing in our head does little to explain either the misperceptions that the brain seems so often guilty of or the bothersome truth that we all don’t seem to perceive reality the same way. Fast forward a generation to George Berkeley’s famous question about the falling tree. How do we know the movie playing in our heads is an accurate representation? All we have access to is the movie.

Berkeley thus builds the most important edifice in epistemology: the perceptual wall, an impenetrable barrier that separates brute reality from whatever we take it to be in our minds. We build a creative representation of reality from perceptions and reflections and have only experience to guide our choices. In fact, as Immanuel Kant famously observed at the end of the eighteenth century, whatever representation we build is formed not only by our senses but by the mental structures in our minds that pick and choose among them to shape our creation. This is an active process of choosing, sorting, and assembling perceptions so as to build a working model of reality inside the perceptual wall. Sense data bombard the mind, and just as we can pick out a familiar voice in a noisy room or see a foreground object while ignoring all the others in our field of vision, our minds sort through the barrage of perceptions that our senses transmit to produce a working model of reality. How different is this process from our default notion of direct perception, and how likely is it that our minds will build a model that makes sense to us regardless of its fidelity to the entire picture presented to it? And, disturbing thought, how likely is it that we all will build anything like the same reality from differing experiences?

But here’s the catch. Kant famously insisted that the mechanism for that construction, the sorting device for the incoming data stream, must be profoundly and completely rational. His famous categories of experience were mental sorting and assembling mechanisms that inevitably present to us a rational world. This is why every cause seems to have an effect and every effect a cause, why the world presents itself as both unity and diversity, and why quantity seems so ubiquitous in physical reality. These are simply the way we see things. The way we must see things. We have to remind ourselves ad nauseam that correlation is not causation simply because we are programmed to read causation into every effect we observe. We see constellations in random star positions, animals in cloud formations, and purpose in chaos because that is what we have to see. It seems as plain as the nose on our face unless we check our naïve assumptions at the door. The world is not rational. We are. And we can’t help being.

But wait. There’s more. Just as our moment-by-moment experience of reality is composed of the assemblage of innumerable sense data inputs orchestrated by a mental process, so too is the composite, ongoing picture of reality these experiences produce. We don’t merely act in the world. We respond to it, and that response is a product of an ongoing reflection that orients us to experience. We don’t just think we know the momentary truth of this instant. We know reality. We must think this way so as to navigate our way through the truths that allow us to choose all the goods we come to value, whatever they may be. It is this picture of reality and our place in it that comprise our own virtual circle.

In previous blogs I have discussed the virtual circle at some length (please see the post of August 6, 2013). It bears repeating that our ability to find some correspondence between our experience and external reality is only testable by experience and that our efforts to improve those tests have brought us the scientific method. Its essence is an effort to improve the reliability of experience and our reflections upon it. We have other tests, of course. To determine correspondence between an unknowable reality and our picture of it, we may rely on expertise, authority, or undifferentiated experience. But all of these truth tests are inherently rational. We know experts have deeply examined the repeated experiences that produce their expertise. We trust authority in one field because it has proven trustworthy in others (a questionable assumption, as I explore in the post of January 20, 2014). We (mistakenly) assume a new experience can be examined in light of an old one. Please notice that I am not claiming parity for undifferentiated experience and a scientific experiment. The latter has intentionally confronted the issues of unreliability that plague the former and has attempted remedies for them. What I am claiming is that our conscious assemblage of reality, our virtual circle, is composed of rationally constructed truth claims. Their correspondence is, of course, always in doubt (we cannot guarantee the tree has fallen, after all, only that we have heard it), but the truth tests I have mentioned produce sufficient warrant for us to judge these claims as true (July 9, 2013). It is also defensible, though not certain, that the human operating system that presents such sense data constructions to us operates as a guarantor of intersubjectivity so that we may compare our correspondence claims constructively to those made by others.

But, of course, our constructed reality consists of far more than simple correspondences to material reality. What about correspondences to conceptualizations? How does our mind construct, for instance, true impressions of abstractions like justice or love? And what about the purpose of all this construction? How do we define, limit, and choose the good?

In this zeitgeist, to claim that such things are correspondence constructions, meaning they have some objective reality, is going way out on a limb. I have attempted to make that argument in prior posts (December 10, 2013), but even if you embrace our culture’s attachment to the subjective quality of conceptualizations, and especially conceptualization of goodness, I can still claim with confidence that your subjective experiences are rational, or at least, that you deem them so. Here is why.

In any attempt to find the true or the good, we engage in an act of comparison. In correspondence truth tests, we examine the percept in our minds against the brute reality we seek to know. Is that a Mercedes or an Audi? Even if we embrace the impermeability of the perceptual wall, we still examine our truth claims in comparison to the virtual circle of truths we have already accepted as true. Is this queasy feeling in my gut what I call nausea? This act of analysis so central to every truth and goodness claim cannot help but be a rational one, and it characterizes each moment of consciousness. It builds a second level of rationality over the foundation of Kant’s sense data theory of perception, this one a conscious and comparative one.

An unfortunate consequence of the acceptance of the perceptual wall is the epistemological viewpoint termed phenomenology. This school takes seriously the impermeability of the perceptual wall and argues for the radical subjectivity of all experience. Its adherents take their name from Kant’s famous assertion that we can never know things-as-they-are (“noumena”) but only things-as-they-appear (“phenomena”). We can only see the inside of the perceptual wall, digesting phenomena as they appear in the mind. Perhaps this argument would have been taken less seriously if it hadn’t followed on the heels of Romanticism, with its perceptual wall-piercing valuation of intuition as a divine source of insight. Question that level of certainty by doubting either the reality of intuition or its divine source and you are left with something far less convincing: the total subjectivity of experience. This bleak picture of humankind’s fruitless search for truth and goodness lent its emotional force to the twentieth century’s infatuation with postmodernism (July 30, 2013).

But note that even in this bleak and black view, we see the light of reason. For phenomenology is founded on Kantian metaphysics, Romanticism on a valuation of intuition as a reliable means of knowledge, postmodernism on a cobbled-together set of reactions to unsettling events in the first decades of the twentieth century. Despite their claims to the contrary, the source of all philosophy is the search for wisdom: the true conditions of reality. And if Theresa of Avila, St. John the Divine, and Franz Kafka find those conditions to extend far beyond the reach of correspondence knowledge (meaning beyond the reach of the third level of reality I mentioned at the beginning of this entry, the use of language), that is still fine. For their beliefs do not render their rational appraisal of reality incorrect. They extend it, perhaps to realms that others might not see or appreciate. In “The Idea of the Holy,” Rudolf Otto makes clear that any concept, even of the numinous if such a thing is possible, is rational.

But even if it isn’t, even if a conversion experience, a horrifying ordeal, or a drug-induced revelation that changes your life cannot be conceptualized as experienced, it must still be incorporated into the virtual circle. It must still form its own piece of our picture of reality. And that process too must be a rational one. For the only way we can construct that picture is to examine it according to either the principle of non-contradiction or, if we are more rigorous in our thinking, the principle of logical entailment. The mental process of turning a unique experience into a bit of the virtual circle must be an act of conceptualization and thus a rational act. I am certainly not claiming that we all succeed in this effort, nor that we apply very much rigor to the process. The haze of beliefs that extends our knowledge like a sun’s corona is often poorly examined in light of the knowledge we have already accepted, for instance. But even so, note the act of rational comparison that lies at the center of the effort. Perhaps mental health professionals might find a continuum of rationality from the integrated personality to the psychopath. I doubt the latter considers her virtual circle very much compromised. We all think our conception of the world pretty sensible, and each thinks her own the best for the simple reason that she would choose another if it seemed more true or good.

Perhaps logicians will find fault with my argument, insisting that rationality is not a matter of degree and that it indicates some absolute proficiency. I cannot disagree that formal logic establishes a rigor absent in less rigid formulations, but certainly at least some of the difference is attributable to the third reality of language rather than the second reality of the virtual circle. But just as expertise is a less perfect form of the rational application of experience than empiricism, so too is ordinary logic a dilution of the methodology of formal logic, and for the same reasons. We accept expertise because we cannot frame many experiences in the light of experimental science, accepting the limitations of experts because that is about the best we can do, just as we accept ordinary logic because we cannot frame ordinary experience with the mathematical structures so admired by formal logicians. Dilute that comparison still further and observe that we subject our beliefs to the far less rigorous test of non-contradiction because we cannot subject them to the truth tests of correspondence. The lesson should be clear. We are rational beings. Rather than eschew that inherent rationality, we should embrace it and apply to our perceptions and reflections the most rigorous tests they will withstand. We cannot escape conceptualizing our thinking about truth, goodness, and beauty, and in seeking warrants, we cannot escape the reasoning that must accompany such thinking.


Prejudice and Privilege

I really dislike looking into matters of race, but you don’t have to scratch the surface of any American problem very hard to find race eating away just under the surface, complicating solutions and, worse, analysis. It is this subterranean process, this rotting under the polished surface of our ideals, that has given rise to the relatively new popularity of the notion of privilege as racism. I wish to examine this view of racism in relation to the traditional notion of prejudice.

Their etymology indicates a difference with major implications. While “prejudice” is rooted in the active voice (it literally means “to prejudge,” presumably without sufficient evidence), “privilege” derives, through French, from Latin roots meaning “private” and “law.” To be granted a privilege is a passive, not an active, act. We may assume that the favor was sought, but its reception was not within the power of the recipient to procure, in contrast to the power we have when we exercise our judgment actively to show prejudice.

Now the use of these two terms today tracks their etymology, and this distinction is hugely important in the current racial climate for two different reasons, both of which make remedying racism more difficult. First, the concept of personal responsibility is a bedrock moral principle, and that is difficult to connect to privilege as racism. Consequently, we tend to underestimate the degree to which our actions are determined by prior conditions (please see my posting of March 23, 2014) and overestimate our moral freedom in present ones, thus leading to the second problem: we consider our accomplishments to be entirely our own and resist crediting others for even a part of our success.

Contrast this haze of shared responsibility to active prejudice. An act of prejudice is an error committed by the thinker. It is within her power to remedy. It is not only a cultural offense but also an offense against reason. It connotes sloppy thinking even when the prejudice is positive. For unless a class of things or persons is definitionally exclusive (“All bachelors are unmarried”), one may not reasonably apply even accurate group classifications to individuals, not to mention the difficulties inherent in forming those classifications (a danger postmodernists blithely ignore for some reason; please see the post of March 30, 2014). But to receive a privilege is a gift the recipient has no control over. In the sense social critics use the word, white privilege describes a thing unearned, an accident of birth, a booster rocket for economic and social ascent denied to others. The term is thus not only passive but also inevitably comparative. From its beginning, the private law benefiting some was implicitly to be contrasted with the public law relatively penalizing others.

So the use of the word privilege changes the nature of a charge of racism. First, the accused may have done nothing in contrast to the implicit irrationality of a prejudicial judgment. She may bear no personal moral responsibility. She is merely the beneficiary of an unearned advantage that she may have neither asked for nor been aware of. Second, the charge insinuates that any advantage thus conferred also must penalize others. Finally, we face the most relevant issue that derives entirely from her degree of moral responsibility: what is she supposed to do about it? Let us attempt some calibration.

First, the power of charging prejudice is inextricably linked to the rational error it commits. Racism is morally offensive in part because it is stupid, and holding the moral high ground in the discussion cannot be separated from intellectual superiority. Racists are ignorant, uneducated, unable to grasp nuance. They make sweeping generalizations that are wildly inaccurate and then attempt to paint individuals with them. A long tradition of finding empirical or rational means to justify racist judgments from pseudo-Aristotle to Thomas Jefferson to Charles Murray attempts to invalidate the association between racism and ignorance, but its existence only reinforces the connection, for all such attempts are now regarded with disdain by intellectuals who are unwilling to sever it or to take any such effort seriously. This intimate connection cannot be carried over to the new racism of privilege, for privilege reveals no requisite flaws in its recipient whatsoever. Southern freedom riders in the 1960’s were as guilty as the dog handlers who attacked them if both were white. Indeed, the notion of privilege immediately conjures up a consequential guilt that might have motivated the former and enraged the latter. But should those risking their lives to end racism be charged with prejudice because they receive benefits from a system they actively oppose? Is such a charge warranted? And must white privilege disadvantage blacks?

We must assume that such privilege derives not from the absolute advantage conferred by being white but from the relative disadvantage the privilege implies for being black. We will call this kind of privilege disparative privilege. Now this relationship requires some investigation on three counts: first, what constitutes the privilege; second, how does it redound to the disadvantage of blacks; and third, must any conception of privilege be built upon comparative relationships?

Conservative whites wish to dismiss the whole notion of white advantage, and with it the notion of white guilt, by insisting that whatever advantage they received was earned rather than given, though they wish to be vague about whether they earned it or their forebears did. At most, they point to cultural values that encourage their success: family structure, emphasis on education, and work ethic. They rightly accentuate the self-discipline their success required, the acceptance of deferred gratification and commitment. But the essence of privilege as racism is precisely the charge that advantages were not earned, that they accompanied skin color as a gift. So we step into a minefield of moral ambiguity, for I cannot be responsible for a harm I did not commit, nor can I be asked to feel guilt for a benefit I earned. To be fair, these character traits are not confined or exclusive to “white culture,” and to say they are is simply prejudice. To claim that being black automatically results in cultural disadvantage in regard to these prerequisites for success is a claim I can’t imagine any unbiased cultural observer would wish to make. Nor are these automatic socioeconomic markers, for lazy scions of wealthy families and ungrateful second-generation Americans are clichés that belie any guaranteed conferral of privilege. So much for any sweeping comparisons. But let’s face it. The conditions for success are certainly better established in some socioeconomic environments than in others, and of the multitudinous strands in the tapestry of any success story, many are woven without effort, simply by the expectations of others that form our horizon of possibilities. Still, it is a gross act of prejudice to see white privilege as an unearned gift which white America takes for granted… at least until one compares it to being black in America. Only by comparison does the generalization hold undeniable truth. White guilt derives not from privilege but from prejudice as surely as the tail of a coin implies a head. Compared to the lot of blacks in this country, not only in the past but in the present, every white person now living was born on third base, and whatever her positive efforts, all were built to a degree upon a scaffold of exploitation. There is no denying that any comparison of white and black privilege will lead to one conclusion: whites still reap unearned privilege and blacks unearned privation because of skin color, and this legacy of active prejudice is a moral stain. White persons are like the boss’s son who may start in the mail room and may work hard but who will never know how much of his success is due to the accident of birth nor to what degree that success has kept less lucky fellow workers from rising as they might have in an equal race. Who can doubt that the future occupant of the corner office will pass on the fruits of his success to his children just as his future underlings will hand off their lesser luck to theirs?

But note in the example that while the goods are absolute, the harms are all relative. Let us try to think of privilege in an absolute sense. Considered as unearned gifts, privileges surround us; we are awash in them. We did not earn the social order that benefits us, the political system that frees and equalizes us, the economic system that enriches us, the family that nurtures us, the knowledge that guides us, the beliefs that give meaning to our lives, and on and on. While it makes sense for us to be grateful for these blessings, I can think of no reason we should feel guilty for them. In this large sense, anyone who lives in conditions allowing her to meet her human needs in the world is privileged, for it is by the satisfaction of our needs that our lives are fulfilled, and the conditions for that satisfaction have been well established thanks to the conventions of civilized life (for more on this, please see my post of November 13, 2013). If we have a loving family, dear friends, education, civil order, productive work, and the like, it seems to me we have the goods we are by nature designed to have, and the moral response to that is satisfaction and gratitude, not guilt. What is more, these are limitless goods. There is more than enough of these blessings to go around, and my having a sufficiency in a working civil society in no way limits your access. We do not compete for all privileges.

But we do compete for those blessings that are limited, and then we are forced to face both the universality of our needs and the pain their absence produces. The most impoverished citizen in this country is privileged compared to the 84% of Liberians living on less than one dollar per day and the most unjust political jurisdiction here is utopia compared to life in Syria. If comparative privilege imposes a consequential guilt, then we all have a moral duty to ameliorate the living conditions of the poor regardless of their location. But do we? Let us put religious injunctions aside for the moment, though they impose their own moral duties, so that we may confront the central question that a relativistic concept of white privilege and white guilt implies: does relative economic and political privilege inevitably impose moral obligation? Let us refine the question: does privilege impose an obligation even when disparity is not a consequence of the privilege?

Let me acknowledge and laud the sentiment evoked in kind-hearted persons by seeing suffering. We wish to make it better. But a clear-eyed view of this natural desire also compels us to see how conditioned it is by our degree of privilege, as exemplified by the coat-drives-for-pets mentality of some wealthy enclaves. And just as any suffering tugs at the heartstrings, another nearly universal response is simply to turn away. What we happen to see disturbs us, so we simply refuse to see it so as not to be disturbed. This accounts, I think, for at least a part of the gated community mentality that seems so prevalent in rich neighborhoods. The moral principle of ought-implies-can– that moral obligation only follows the ability to act– comes into play here as well. An indiscriminate and wholesale equality of goods would be impossible to conceive, much less to achieve (please see my posting of December 3, 2013 for more). The Soviet Union was a case study of that failure. Despite the efforts of thinkers like John Stuart Mill to objectify such sentiments and thereby impart to them some moral valuation– an effort that collapsed in a thicket of evaluating pleasures and pains– we would do well to remember that justice does not require an equality of degree of all perceived goods. For we perceive many things as good and value them differently, and there are too few Maseratis to go around. Rather, it is the sufficient distribution of goods necessary to meet our human needs that is required, otherwise called an equality of kind. This operation is a profoundly rational one, the tugging at our heartstrings notwithstanding. We are left, then, with the suspicion that some kinds of privilege and guilt are handmaidens of wellbeing and some are not, and that some wellbeing is earned rather than gifted. So why should guilt be intertwined with privilege like snakes on a caduceus?

But that is just the issue, isn't it? For in our times and in America and for economic and political privilege especially, the relationship is always partly causal. Some linkages are as thick as wisteria vines squeezing the columns of antebellum mansions. Some family wealth, wealth that produced all the goods it is capable of buying for succeeding generations, was built on the direct foundation of the importation and perpetuation of slavery. Other privileges are less easily traced. It is said the target of the fourth hijacked plane on September 11, 2001, was either the White House or the Capitol. What kind of loss to our national pride would that have inflicted? Both edifices were built by slaves. When a group of people are disadvantaged by color and so denied an equal wage or vote or voice in social policy sufficient to deprive them of the satisfaction of their needs, someone will reap the benefits, and the misalignment of power being what it is, the odds are that someone already has the sufficiency that justice requires. To the degree that white America has harvested this kind of white privilege, it deserves to feel white guilt. And so long as the privilege is maintained, so too is the guilt, and so too is the moral obligation to correct the moral harm. Reparations for slavery would not clear the slate, for the vestiges of racism would continue, producing continuing disparity and white privilege.

But just to be clear, should I feel guilty for having parents who valued education and instilled a work ethic because others were less fortunate in their choice of parents? White guilt must be measured by the racism that relatively advantages one group over another, not by the absolute goods consonant with universal human needs that some received and others did not. Social scientists may attempt to lay all cultural differences at the feet of some ancestral economic exploitation, but such an indictment seems too sweeping to be justified by science and too Marxist to be embraced by interpreters, though it is consistent with the postmodern emphasis on culture as the creator of identity (please see posting of July 30, 2013 for more). If a narrower difference maps the battleground of white privilege and white guilt, then fight it there. But let liberals leave out injunctions of religious duty, emotivist objections to inequalities of degree, and claims that all privilege imposes guilt. Let conservatives put away their blind pride in winning a rigged game and their contempt for those losing it. While privilege may broaden our view, it shouldn't change our focus. Prejudice is still the villain of the piece, still a moral obloquy and intellectual failure. So long as its effects ripple through the culture, white guilt is its proper consequence.

That conclusion applies also to the other kinds of privileges. To the degree that they were unearned benefits at the cost of sexism, colonialism, imperialism, and the like, we might expect to see coinages of terms such as male privilege, first-world privilege, and heterosexual privilege, with their attendant trains of guilt. And to the degree that these disparities still hinder exploited people from satisfying their needs while easing the lives of their exploiters, active amelioration is the only morally justifiable response.

So what is active amelioration; what is to be done? Since justice is defined as “to each her due,” it seems clear that justice demands that those unjustly advantaged should be those who make reparations after, of course, performing the required triage. For if all vestiges of racism were magically removed from our society today, we would still be left with the inequities it has long produced, both privilege and privation. Repairing these inequities is not so difficult as egalitarians might imagine since justice requires not only an equality of kind but also the inequality of degree that the exercise of our preferential freedom produces (for more on this, please see my post of November 20, 2013). The shame has never been that some have an excess. Rather it is that the prejudice that helped produce the excess has also produced a deficiency for its victims. It is daunting to accept that the same arguments that produce white guilt hold sway in regard to other kinds of privilege, leading to other moral obligations, but there it is. Since the exercise of influence over other sovereign nations is a governmental function, we as citizens should move our government to act on our behalf in accord with the limitations of the ought-implies-can principle of morality.

This is what is to be done: we have the obligation to root out disparity-producing privilege in all of its other manifestations by actively opposing prejudice in our own circle and by favoring governmental action to produce the equality of kind that justice demands. And let us remember also to be thankful for the absolute privileges we enjoy but did not earn.


One-Armed Economics and Wealth Creationism

I venerate expertise as a truth warrant. In judgments of correspondence goodness evaluating quality, it can substitute for clearly defined standards (please see my post of October 15, 2013 for more). Because expertise is to some degree built upon experience, it is a deeply flawed justification for truth and goodness claims, though its reliance on rational examination of experience raises its reliability. That being said, the criteria for developing expertise depend on the subject. The human sciences have compiled a dismal record in this regard, in part because of weaknesses inherent in their fields (see post of February 9, 2014 for more) and in part because their roots in academia encourage professional disagreement. Nevertheless, the hardest of the soft sciences and one of the few that are based upon quantitative analysis is "the dismal science," economics.

Because it is a human science, it is built on the shifting sand of conflicting paradigms. We see broad disagreement about essential subject matter along the political spectrum, but even economists embracing capitalism splinter in their premises and the conclusions built upon them. Pit a disciple of Hayek against a Keynesian and watch the sparks fly, justifying Harry Truman's famous preference for a one-armed economist who wouldn't say, "on the other hand…." The general unpreparedness of economists for the crash of 2008 does not speak well of their predictive powers. The psychic hot line did better. So call economics an immature science, one step below respectable status. Even keeping that caveat in mind, I cannot help respecting economists for their devotion to data, something all too rare in the human sciences, and I respect their analytic method, flawed though it may be by the theories that dictate it. They know so much more than I do. I only participate in the economy. I do not profess to understand it. My field is epistemology, so I am painfully aware of what I do not know, but a few of the concepts of wealth and job creation as some economists define them confuse me. Perhaps an expert can set me straight.

I took a dollar out of my wallet the other day, and right above George Washington's head in a kind of corona, a girlish hand had printed in red marker the name "Maria." That got me thinking about how many hands that bill had gone through since Maria had first claimed it for her own, and how many transactions had been facilitated by its existence. Now I've heard conservative pundits and a few economists insist that wealth can only be created by free enterprise, that government can only transfer rather than build wealth. Since the money comes from tax revenues, they say, it is merely changing hands, not creating value in the way private enterprise does. This seems obvious given their favorite example of a Steve Jobs inventing the iPhone. There was nothing and then there was something that people were willing to pay for. This is true wealth creation, as in creation ab initio, a making that seems almost divine. Advocates of this position contrast that kind of wealth making with the confiscatory policies of government taxation that only moves money around after, of course, squandering a large percentage in fraud and waste. So when government spends, it is spending not only what the earner would have spent more wisely but also what it did not create. The assumption is that the taxpayer created the wealth by her labor just as Steve Jobs created the iPhone. So taxation is not only wasteful in that it adds an unnecessary drain down which money disappears. It is also parasitic in that it adds nothing to the economic basket. Have I got that argument right?

But Maria's dollar tells another story. Does your job "create" wealth from nothing in the sense of inventing value? More likely it does what all those people who passed on Maria's dollar do: perform a service that the payer considers worth a dollar. Whether that service is performed in the public or private sector is irrelevant. The taxpayer pays for a service that government provides just as she pays for a babysitter or a taxi or a pizza. I will admit that I can choose whether to purchase these things and that I have no such choice in government spending except, of course, through my vote. But look at it another way with respect to other purchases. I have no choice about fulfilling any of my economic needs. That's why they are called "needs" (for more on this, please see my post of November 13, 2013). Can I refuse the grocer, the hospital, the landlord? You may respond that I can choose another provider to meet my needs if dissatisfied, and that such freedom is denied me in regard to government services. This is an undeniable burr under the saddle of the cowboy libertarian wing of the conservative cause. But let us examine this irritant more closely.

There are two good reasons for the monopolies that government "enjoys" in performing its services, and these reasons touch upon economics but transcend it.

First, since the raison d'être of government is justice rather than profit, it must retain a monopoly of power so as to be the final arbiter in its goal of providing justice. The legitimate scope of such efforts, I must add, is limited to those needs citizens cannot satisfy for themselves. These fall into two broad categories: those too expansive for any individual to provide (such as defense) and those that might be skewed into injustice by gross inequalities (such as the court system and the legal rights of minorities).

Second, the question of government's efficiency and desirability in providing any particular but necessary service is skewed by the simple truth that government is often the provider of last resort, performing public functions (like fire protection and other disaster relief) that no private enterprise would undertake because they could never prove profitable. Florida provided another example after hurricanes ravaged the state in 2004. Private insurers deserted the state, which was obligated to go into the insurance business to provide needed coverage for homeowners.

These two realities put the lie to the claim by some economists that government only transfers rather than creates wealth. It provides services citizens cannot provide for themselves, services that meet human needs and that may demand an attention to justice over profit.

Now it is worth discussing whether government can perform some of these services as well as a private entity, but this is a question of relative efficiency, not absolute wealth creation. But in considering efficiency, you can be sure that no corporation will pursue these kinds of opportunities unless profit is factored into the job, profit that adds to the cost of providing the service, profit that blinds the provider to issues of justice in the provision of services. Does it really matter whether Maria's dollar passes through government hands by way of taxation or into the till of a business if it then goes to pay some employee for doing necessary work? By the logic of those who deny the value of public expenditures, the education private colleges provide has value while that provided by public colleges does not. Can anyone claim that a nurse working at a VA hospital provides no valuable service in comparison to one working at a for-profit hospital? Does this make any sense?

 So much for the absolute claim that public dollars cannot create wealth. But conservative economists might then simply pivot to the question of efficiency, subtracting from that wealth the costs incurred by government’s incompetence, leading to the same conclusion by means of a different subterfuge. For if the admitted economic value is reduced by waste, fraud, and inefficiency they think inherent in government spending, the net sum might still be zero. This is certainly a different argument from the definitional claim that the public sector is a drain, and it resonates. The Soviet Union was hardly an exemplar of socialism, but it surely was a model of waste and bureaucracy, and conservatives are on much more solid ground in condemning the poor performance of some government bureaucracies at all levels. But let us examine this point critically as well, for it is based on two false claims.

The first is that government is for some reason inherently wasteful, but that conclusion relies on a cost/benefit analysis appropriate to private rather than public enterprise. Put another way, “wasteful” in capitalism refers to cost versus profits, but as the goal of public enterprise is not profit, at least not the profit that can be quantified into dollars, on what grounds can it be termed wasteful? Unless critics are willing to use a broader standard of value, they can hardly objectively judge the public sector, and to use the standard of value appropriate to private enterprise is grossly distorting. 

Underlying the charge, though, is a psychological theory of motivation that capitalism's champions mistakenly take to be indisputable. They charge the poor performance of government services less to weak oversight than to a slack work ethic among public workers. No reasonable person can argue against the profit motive as an incentivizer of efficiency, but it is carrying the argument to absurdity to view it as the only incentive, as Ronald Reagan did in his infamous contrast between public and private workers: "The best minds are not in government. If any were, business would steal them away." I doubt that even Steve Jobs found profit more motivating than his own love of discovery and invention. Millions of dedicated police officers, firefighters, teachers, and public servants are moved to do their duties by their commitment to the general welfare rather than the size of their paycheck. Bureaucracy is not necessarily a pejorative term. This is not to dismiss charges of waste and incompetence nor to diminish good-faith efforts to make government more effective, only to challenge the presuppositions of those who seek to discredit it by inept comparisons. It is only an inspiring Chamber of Commerce vision of wealth that sees it as created from nothing by the ingenuity of the human spirit in pursuit of profit.

Maybe not so inspiring as we might wish, as the ugliness of Ayn Rand's portrayals demonstrates. Her "arguments" as phrased in her novels are certainly created out of nothing (please see last week's post on the dangers of fiction-as-reality). But more to the point, they are adolescent fantasies. It is hard to decide whether they are more objectionable for their Romantic excess or their childish ingratitude. Even a moment's cool thought after reading Rand's overheated prose should make it obvious that Steve Jobs did not build Apple from nothing. It hardly detracts from his genius to note that he relied on the education, protection, and facilitation that his parents, his community, and his government provided in order to apply it. What would the iPhone have looked like if Jobs had labored away in a slum in Somalia? Rand's heroes hardly made themselves (though I am not sure their parents would have wanted to claim them), and while her warnings of the dangers of collectivism were on target against a Communism that championed a stupid equality of degree, they amount to an elephant gun aimed at gnats in today's liberty-loving America (for more on the battle between liberty and equality, see posts of November 20 and December 3, 2013). The self-made man is a staple of the American dream, perhaps because it is a dream to imagine anyone being totally responsible for his own success.

A stronger point of leverage against government as a wealth creator might target simple overreach. I mentioned earlier that government's positive obligations in justice focus on broad needs for the general welfare and narrower needs to arbitrate competing interests. The emphasis must always be on needs that individuals cannot provide for themselves. This is an inherently hazy category. Its components are built upon the universality of human needs (see post of November 13, 2013), which introduces an equality of kind (see post of December 3, 2013), which in turn imposes obligations on government (see posts of February 23 and March 2, 2014). But it is not merely the ambiguity and difficulty of the topic that lead both liberals and conservatives to avoid facing it. Both sides have reasons to blur the issue.

Liberals refuse to face the thorny issue of individual responsibility. Though each of my needs confers a right, the satisfaction of most of those needs is my own obligation. It is, in truth, my core duty as an adult human being (see post of November 6, 2013). Should I fail through my own error, government as the collective will of my fellow citizens is under no obligation to repair my situation unless I am unable to repair it for myself. I understand that Christian values in the U.S. have tinted many persons' views of this sort of thing, but all those conservatives who think this a Christian country might want to differentiate their Christian from their political duty (though they seem loath to face either: see below), and liberals who wish to use government resources to satisfy any unmet needs whatsoever might want to clarify in their own minds which are government's duty and which are each citizen's. Conservatives have a point about the "nanny state" that liberals rather wish to ignore. Look at it this way: to treat adults who should go about the business of satisfying their own needs as children the rest of us must care for is pure paternalism: insulting and crippling to those we seek to help. It is also a waste of public resources in that individuals are not only responsible but also more efficient in these efforts than those who seek to ameliorate their condition for them. It violates the only duty of government in that it is inherently unjust both to the individual and to the citizens who attempt to do for her what she should do for herself. Liberals need to face the matter of individual responsibility squarely.

Conservatives have a different motive for blurring the issue of needs, for to base government upon their satisfaction would call into question the social contract justification for government and with it the majoritarian argument that has long delivered injustice to minorities. More pragmatically, it would cost more money, for to finance a government seriously committed to the general welfare– a term I define as meeting needs that individuals cannot meet by their own efforts– would socialize some efforts now undertaken for profit. The absurd cost, inadequate distribution, and poor outcomes of American health care are one prime example from among many. The net result would be to change our value system from the orientation that wealth determines worth to a respect for the equality of kind rooted in our common humanity, an innocuous enough notion that you would think Christians as well as champions of human rights would subscribe to, yet one many conservatives find threatening.

A related wealth creation story lauds the positive contributions of the job creators in our economy, those who stimulate the economy by providing employment for the ninety percent of us who work for a wage. The argument hinges on the more basic notion discussed above: that wealth is created out of nothing only by those operating within the free enterprise system. See, Apple had two employees in 1976 and 45,000 last year. Each of those well-paying positions only exists because of Jobs and Wozniak. They are not only wealth creators. They are also job creators. Again, it is hard to argue with this. Something came into being as a result of their genius that did not exist before, and by dint of their hard work and smarts, that new thing has created both wealth and jobs. Surely, job creators deserve recognition and reward for their efforts. This version of events is convincing, yet it seems just a bit truncated and simplistic. There's more to it than just invention and production. All those sleek computers and phones and tablets are great products, no doubt, but all those high-paying jobs and profits were not created solely from production. There is also the little matter of consumption. Even the paragons of job creation could not have made their companies or built their wealth or hired their workers without a demand for their products. And demand depends on the health of the economy. No titan of the marketplace could work her magic in a failing society, which brings us right back to the necessity of government not just as a wealth creator but as a job creator. Just as most wealth creation is not ab initio but derives from the provision of a desired service or product, so too does job creation depend on the consumer's purchasing power and the health of the economy. This health is a dance between private enterprise and public policy. See the way the stock market embraces the Federal Reserve and vice versa! Yet from the way conservatives portray their version of a job creator, one would think he pays salaries from his own pocket rather than from the operating expenses of his company, but then maybe that impression is enhanced by the ridiculous salaries paid these self-styled giants of the marketplace. A moment's thought should uncover that the real job creators for the bulk of the economy are the middle-class consumers whose income provides the demand that increases the cash flow that creates the jobs. This healthy cycle characterizes any productive economy. To single out the employer as the lynchpin of this cycle is to distort its nature. The United States has one of the highest levels of economic inequality in the developed world (but we are more equal than Mexico and Turkey. Yay!). One may make a legion of moral arguments about what various stakeholders in our economy deserve, leading to interesting discussions about minimum wage and CEO compensation, but as a purely pragmatic matter, the real job creators in our economy, meaning consumers, can hardly perform their part in the economic cycle if this level of income inequality continues. But the conservative moral argument disputes this pragmatic one. We may discern a number of reasons for the widening gap between rich and poor since the 1970s, but surely the position that employers have a more important role in the economy than workers and the concomitant conclusion that they have a moral right to a larger slice of the pie than at any time in our history (excepting the ominously significant year of 1928) are largely responsible for the current disparity.

In moral philosophy, we see the concept of "ought implies can," a very valuable check on the applicability of moral principles. It is fine to say that such and such a moral principle should apply, but the argument is defeated before it begins if no way exists to apply it. It is worth asking if the conservative argument about job creators introduces the ought/can issue. In other words, should moral principle bow to pragmatic necessity? Because consumers are as necessary a part of the business cycle as employers, should that be the end of the discussion? Does their practical necessity as purchasers trump the moral argument for the superiority of job creators in creating wealth? I would argue no, for no amount of pragmatic limitation would tamp down the position that bosses should prosper disproportionately to their employees as much as business cycles will allow because of their greater contribution to the general welfare. Granted, this concession would at least recognize the moral worth of workers to some degree, which would be a decided improvement over the current blindness that elevates employers to godlike status. But even in the healthiest of economies, some pragmatic positions require moral interrogation: that workers are interchangeable drones; that shareholders matter more than employees; that profit is the only real product of any business; that crony capitalism, influence peddling, and corporate welfare are acceptable governmental functions; and that rigging the system so as to deliver obscene wealth to a fortunate few while denying fairness to the rest satisfies the obligation of government to deliver equal justice under the law. We not only can do better as a society, but we ought to.

 Which brings me right back to economics as a science. I began this entry by admiring its devotion to data and quantitative analysis. I will end it by pointing out the most glaring reason why economics as now constructed can never be a real science: it can never apportion proper value to the human concerns that the economy should serve. The goal of science must be to find truth. It does not have the means to find goodness within its modes of warrant, and questions of value are always questions of goodness. I have frequently discussed the wise blindness that science must bring to its objects of analysis (most recently on July 6, 2014) in order to provide a reliable warrant for its truth claims. Like its sister human sciences, economics can never provide that warrant, which is why even those of us lacking in expertise should apply our reasoning to its provenance.


Tall Tales

Even while I studied and taught literature, I was always troubled by the loose linkage between stories and reality. I am not talking about the reality depicted in the stories themselves. It has always struck me as right and proper to object when they violate their own premises. This might be as simple as the continuity errors that eagle-eyed viewers always point out in movies. Look: her glass was half-empty, and in the next shot it is three-quarters full. A more serious flaw is the deus ex machina that rectifies a failing story line at its climax. Or perhaps a character acts entirely contrary to her nature without sufficient cause, leading the reader to scratch her head in bemusement or throw the book down in disgust. Still, the bar we set for fiction is pretty low. It need not mirror life, an act critics call mimesis, so long as it remains true to its own premises. If pigs can speak to spiders in the first chapter, if choruses burst into song in the first act, the observer only asks that the same rule apply later, and if things change, that the change is explained so as to allow the work to remain internally consistent. Stories exist in their own world, and that is what pleases us about them.

Only they don't and it doesn't. Three-year-olds can effortlessly navigate the gulf between created reality, the made-up world of fiction, and the common reality we all participate in, but something odd evidently happens to grown-ups, and the problem only grows more serious with education. Sophisticated critics and professors of literature engage in an interesting sleight-of-hand in examining the relationship between real and imagined. If confronted outright with a request to define the connection per se, they will deny any explicit linkage because even a moment's thought will introduce the iron curtain that divides the real from the imagined. But five minutes later they are enthusiastically dissecting mob mentality in Billy Budd or the moral implications of the Grand Inquisitor chapter in The Brothers Karamazov. They seem at least to sense the problem, so they seek cover by referring to Melville's or Dostoevsky's view of things, but what do they expect their often-captive listeners to do with the analysis they are conducting? Are we to confine the novel's meaning to the fictional world of the nineteenth-century whaler or a Russian Orthodox monastery? Are we to infer that these two brilliant creative geniuses have nothing to say to the common reality they inhabited– or perhaps to the one we now inhabit– that their brilliance is curtained by the imaginary worlds they created, worlds so rich and dimensioned that we can drag ourselves back to reality only by an effort of will and once returned remain strangely lost, with one foot in the real world and the other in the somehow richer world pinned to the pages or etched on the DVD? Adolescents emerge from the theater lobby with plans to play quidditch or kick-box or buy an assault rifle. Adults finish Macbeth with a richer understanding of the perils of ambition. Really?

I’ve been bothered by this issue for many years, but like so many other super-macroscopic cultural issues, it seemed few others shared my concerns. But in a recent TED talk, the famous sociobiologist E.O. Wilson was asked about our zeitgeist’s obsession with narratives, an issue about which he expressed some concern. I think it time to delve into my own discomfort.

Like so many other big-picture problems, this one suffers from a poverty of appropriate terminology. Wilson observed that our evolutionary bias is toward confronting reality. But, of course, we can’t do that directly, for before “getting” it, we must construct our version of that reality, a process I have termed the virtual circle in these posts (please see August 13, 2013 for a fuller explanation). An entirely accurate construction that mirrors reality in perfect detail is something I define as the virtuous circle, an unattainable goal yet one we cannot help pursuing and compositing in all of our truth, goodness, and beauty claims (for more on this one, check out October 2 and 7, 2013). The mimetic process that occurs in our minds as we attempt this construction occurs constantly and may be considered the perpetual goal of all of our perceptual and most of our rational efforts. Our struggle to identify the true, engage our natural and preferential freedom to choose the good, and negotiate the difficulties of appreciating the beautiful occupies most of the moments of our lives. But not all. Wilson implied that the created reality of narratives provides us with just what we have been exhaustively searching for: a consistent and comprehensible world that we gradually come to understand, but unlike our own, one that we merely observe rather than feel forced to make choices in. By that logic, we jump on continuity errors and narrative inconsistency with an almost feral anger, for our minds are led to think of these created worlds as being the thing we most seek: a mimesis without self-contradiction. Further, our poor brains, exhausted by the creative effort of half discovering and half creating a common reality that we but poorly understand, gratefully absorb the balm of the fiction we indulge ourselves in, absorbing at one level the completeness of this intelligible but imaginary reality yet simultaneously engaging in the same kind of logical analysis that we bring to every conscious moment of our existence. We naturally do so. When I taught literature, I had to remind students that nothing in the created reality of a piece of fiction occurs by chance. Everything is intentional in the made-up world. They found that notion incredible because real life doesn’t work that way. That intentionality is what we yearn for in common reality. It energizes the world’s religions as, I suspect, it sustains the scientist’s trust in her method. There is an authority in creating an imaginary world fully furnished with simulacra of the one we are assured a greater Author created, and there is comfort in turning past the title page or watching the opening credits, knowing that what follows has order and meaning. We delight in immersing ourselves and trusting that world for a very good reason: it differs from the real one, so often dull, bewildering, and meaningless. If narrative literature only existed to produce that delight, it would have paid its way in this weary world. That immersion into another world, into the author’s consciousness, justifies what Robert Coles termed “the call of stories” in his excellent book of the same name. But it seems our brains are made so as to ask more of literature than it can deliver. We cannot help but mine stories for meaning, to ask them to cross over.

No wonder children emerge from the theater’s cocoon with their fingers pointed and thumbs cocked. No wonder critics knit the imagined world of the narrative not only to their own virtual circle but to the virtuous one, drawing out of the created world lessons for the one we all inhabit. Though the process is a natural one, the imaginative interweaving is facilitated by the ease with which students of literature deal with figurative language, especially metaphor (an issue I’ve addressed in another context in these pages on October 2, 2013). They are used to thinking of one thing in connection with another, but the relationships they establish are necessarily imprecise and allusive. And so they shy away from a discursive and frank appraisal of the relationship between created and common reality. The term they would use to describe such an attempt is reductionist (a disparaging term whose reputation I attempted to salvage in these pages on September 3, 2013). I see nothing amiss in asking the critic to state with some precision what relationship the events in a fictional world have to the reader’s participation in the real one. If the author intends to communicate some wisdom about common reality through narrating the experiences of characters or the voice of some speaker, fine. It strikes me that we often communicate our experiences in just that way, intending to impart some wisdom to our listener. But that common sense view runs into two major roadblocks in regard to fiction.

The first involves the limitations of experience. Of the correspondence truth tests, experience is both the most commonly applied and the most unreliable. (For a fuller explanation of why, please see my posting of October 7, 2013.) We build our virtual circles largely from experience, which explains both why we find reality so difficult to comprehend and why the transference from fictional experience to virtual circle seems so natural. But experience's limitations should caution us against this temptation. As a justification for truth and goodness claims, experience suffers from contextualization. It is necessarily unique and unreplicable, so the lessons learned from each experience are only loosely applicable to later and similar ones. And as experiences are perceptually registered, they are altered in ways we cannot be conscious of, since sense data is filtered pre-consciously so as to present us with a fully-constructed picture of reality. This distortion is profound enough to support postmodern charges that experience itself must be private and subjective. I argue in opposition to this charge that our reasoning about experience may produce a degree of intersubjectivity that allows us some broad consensus. This universal reasoning faculty applied to subjective experience is the anchor that moors us to a common reality. But what do we reason about when we think about a movie or novel? What facts of experience can we accept as true in this created world?

The second problem concerns the intent of the creator. A novel or a movie is a work of art, subject to conventions governing its genre and aesthetic considerations that shape its substance and style. Those who ask that it also somehow convey truth in its storyline are asking it to accomplish a second and divergent goal, for as we all know to our sorrow, unvarnished common reality could rarely be mistaken for art. To make it so, the creator must distort reality as a sculptor must mold her clay.

To illustrate the issue as it affects literature, contrast a reaction to a biography with a response to a piece of serious fiction. I just finished Christopher Hitchens' short biography of Thomas Jefferson and David McCullough's fine life of John Adams. Both works were polished pieces of craft with distinctive styles and authorial expertise. Both followed genre conventions for biography. I consider both to be fine artistic creations. Both paint a fairly unflattering portrait of Jefferson. While I may have quibbled with a few incidents or details, I approached their portraits of the sage of Monticello with equanimity. Here were expositions of another life and another time. Like all actual lives in all times, there were loose ends and unknown motives, questions and inconsistencies. Whatever telescoping of perspective or framing of events, whatever intentional omission or sharp focus each author effected, I judged to be in service to the attempt to convey a true account of a real life that I was free to further investigate and confirm or dispute. In contrast, at the same time I was reading Edith Wharton's novel The House of Mirth, whose storyline chronicles the rise and fall of a social climber in the Gilded Age, Lily Bart. It was as rich in historical detail as the biographies, with a comprehensive picture of the social environment of her times. The machinations forced upon Lily by her class and gender roles were as deeply affecting as they were exotic to this twenty-first-century male reader. But at the novel's end, what was I to do with these insights? I had entered a richly furnished late Victorian room and had trolled the minds of all its denizens, had observed their triumphs and bitter falls, and upon closing the book had stored it all in memory. What part of that memory may I think real? Edith Wharton actually inhabited rooms like those she portrayed, joined the social elite, and undoubtedly was acquainted with many a nouveau riche. But what of it? Her admirers will infer that her novel will give us the "flavor" of that life, or an "insight" into it that somehow translates into knowledge of it. But they ask for too much, for at the same instant they wish for the novel to also exist as a unique aesthetic object, one crafted intentionally to produce an emotional response. These purposes are of necessity in conflict. Two examples of that inevitable conflict should suffice to make my point. First, we know from the title page on that something will happen in The House of Mirth, and– wonder of wonders– it will happen to the central character! That is a piece of luck! Funny how life turns out differently. Furthermore, somehow the novel's reader will know enough about the characters and events of the novel to get a pretty rounded picture of not only what happens but why and how and, even more miraculously, some causes and effects of the events depicted. Now even if we are willing to suspend disbelief sufficiently to correlate these unlikely findings with reality, we face yet another problem regarding the insight we are being handed: we are being asked to trust that the author is skilled enough in her authorial craft to accomplish her artistic ends and at the same time observant enough of real truths in the real world to communicate an experience truthfully and reliably, and not just any old experience, but one that conveys some essential truth that cannot be communicated discursively (if it could be, it would be, for discursive language is far easier to employ than writer's craft).
How are we to judge any single feature of her novel as a piece of experiential truth? After all, every representation is from some angle dictated by aesthetic rather than experiential requirements, and we cannot know how accurately any imaginative creation mirrors what it reflects. If Wharton had slipped in some anachronism as a private joke, would I have known? If she grossly exaggerated Lily's paralysis in the face of rigid gender roles to drive home some private grievance or authorial machination, if she employed some Dickensian character or plot twists to dramatize her storyline, if her unflattering foray into the consciousness of her male figures was prompted by some misanthropic impulse to stereotype…. How would I know? Can I ever separate my memory of the watering holes of fin de siècle society imparted by her creativity from the histories of the era I have studied? Should I try? I hear a great deal of talk about "artistic truth," "theme," and "deeper meaning" in discussions of literature. I would like a clearer understanding of just what knowledge such ideas entail, not to mention the more difficult issue of how such truth claims are warranted. How do artistic genius and authorial skill translate into depth of knowledge of what is grandiosely termed "the human condition"? (For more on aesthetic judgment, please see my post of December 13, 2013.)

As a finished artistic creation, the novel stands on its own, enfolding us as a unique intentional work that produces the "disinterested delight" that Kant said characterizes all works of art. I get that. But just because it has those qualities, I question any "lessons" the work can offer us: lessons about history, sociology, psychology, or, in the words of the English teacher, "life." My brain cannot help but form the same synthesis with this imaginary and created world as it does with the mimesis of the real world I construct as my virtual circle. After all, mirroring reality is what it does for a living. Though the effort is natural, I am convinced it is delusionary and dangerous and should be resisted rather than embraced.

Plato recognized the danger. In Book X of The Republic, he envisions a utopia without creative arts. We largely discount his warning today unless we buy into his theory of forms, whose architecture allowed him to see artistic creation as a mirror of a mirror. Since common reality for Plato was merely a reflection of the ideal, any artistic creation that fulfills a mimetic role must reflect common reality, thus distancing the observer even farther from contemplation of the ideal. One hardly needs to subscribe to the Platonic vision to make that complaint. Consider Augustinian objections to secular literature still exerting their force in the closure of theaters during the English Commonwealth and Jefferson’s well-known quarrel with reading fiction.

As in so many things, Aristotle disagreed with Plato, and at least a whiff of his argument attends every subsequent effort to find truth in the narrative arts. The power of fiction according to Aristotle’s schema in The Poetics is to distill the essence of experience rather than any particular and therefore unique perspective. Just as he envisioned our knowledge of abstractions to be gradually constructed of multiple exposures to their instantiation, so too did he see the artist’s role to distill the essence of experience into its essential archetype, the defining characteristic of the essences portrayed in the narrative. Macbeth is an imaginary king as Shakespeare portrays him, but his approach to gaining and retaining power typifies a certain type of monarch, or so we like to think. The muthos of the play, its essentials, are thus both imaginative and didactic. The author both creates and instructs. The audience responds to effective archetyping with catharsis, which Aristotle saw as an emotional purging. We might grow as callous to blood as Macbeth himself if we actually knew him, but we retain our emotional distance when watching him onstage just enough to explore regicide as an idea and experience our response to it as a vicarious emotion. So we are double winners, Aristotle claims. We derive the emotional charge of involvement with the intellectual depth of detachment. We end the narrative emotionally spent but rationally energized. Aristotle’s arguments are powerful, but they fail to bridge the gap between the imaginary and the real. Certainly, what we experience in Macbeth is a powerful emotional ride that leaves us exhausted well before Macbeth loses his head. But only the catharsis is real, not the manipulated events that produce it, and what experiential truth can derive from events that are so clearly manufactured? I do not mean to say that immortal characters and events cannot be consensually discovered in great literature. We approach our Willy Lomans and Don Corleones with too much reverence to claim that our emotional response to narratives cannot build immortal archetypes. But these are cardboard cutouts compared to any living person. Their power derives from the crispness of their definition, and that clarity is entirely a product of their being merely artifacts, framed by intent. As for the intellectual power we derive from our experiencing their fictional world, I would argue that it is precisely these singular great characters and storylines and the profound implications they generate that produce the greatest intellectual dissent among critics and literary experts. Our emotional response is molded by the intelligence creating fictional narratives– this is after all a world created to elicit it– but our attempt to interrogate that response and extrapolate its significance to common reality must splinter into private conviction and public conjecture when it crashes against the wall between created and common reality. Archetypes there may well be and catharses they may well produce, but when we attempt to derive real-world truths from them and put those truths into discursive language, we enter the thicket of controversy that fuels a hundred academic journals and a thousand websites. 
The deepest wells of the narrative arts (the Hamlets, the madeleines, the Rosebuds, the monolith), which in Aristotle's schema should produce the deepest and most powerful consensual truths, lead instead to the most vociferous dispute among experts who try to frame those truths in the discursive language of the academic article or popular essay. Why is that? Could it be that the "truths" thus communicated about "the human condition" are as numinous as a religious conversion? Could it be that no reliable truths about "real life" can be produced by the portrayal of an unreal one?

That the narrative arts must serve mimetic purposes seemed relatively undisputed until the Romantics refocused the spotlight upon themselves. But this was hardly an improvement since it necessitated exchanging the universal for the personal, with all the attendant temptations of private experience proffered as artistic genius. These nineteenth century obsessions were magnified by the growth of popular culture and the rise of literacy, cheap publication methods, and universal education. By the twentieth century the new narrative forms of film and television guaranteed the ascendency of the narrative not only as art form but as educational tool. And a new philosophy emerged to spotlight the narrative form, to place it at the very center of its premises. I have often written about the early twentieth century transition from modernism to postmodernism in these pages (please see posts of July 22 and 30, 2013 for more). Its veneration of creativity and subjectivity was matched only by its disdain for empirical science and rationality. When joined to the new technologies that celebrated the narrative form, it stimulated a powerful effort to link created and common reality.

Its focus on creativity, criticism, and irony guaranteed that its approaches would be heterodox, so it took a while for postmodernism as a movement to reach full steam. Its groping for consistency coincided with the maturation of both the movie and the television industries into the powerful social forces we see today, and no one familiar with either could deny the countercurrent of sappy Romanticism that characterizes not only these media but also popular literature (for more on the formation of one hybrid archetype of this era, the antihero, see my post of November 26, 2013). Postmodernists embraced the individualist and subjectivist biases of the Romantics, along with a near deification of the artistic rebel. Their mature theory could be discerned in the works of a cadre of mainly French intellectuals by the mid-1970s. They were academics and littérateurs who found fertile soil for their theories, and indeed often communicated them, in literature rather than in philosophy. In appealing to literature to carry philosophical weight, they were honoring a long tradition that included Freud's grounding his theories in Greek mythology and John Dewey's reliance on Rousseau's Emile to support his Progressivist educational theories. But the postmodernists sought even more pride of place for the narrative form. In their terminology, the great historical movements of the modern age were grand narratives, merely widely accepted stories that cultures tell themselves to justify the status quo. Abstract and discursive political, religious, and moral theory is thus dismissed as mere storytelling with all of its fictionalizing. Ironically, postmodernists value another kind of story: the mini-narratives of individuals or of previously neglected and oppressed groups. They taught a generation of aspiring literature instructors to seek out truth in these untold stories. But note the difference between historians reading letters from Tuskegee Airmen and movies celebrating their service in World War II. In the parlance of the movie trailer, "based on a true story" is simply another way of saying, "not true." Postmodernists also advocated subjecting what they scornfully called the canon of dead white male authors to a critique using deconstruction, whose purpose was to mine their fiction and poetry for evidence of grand narratives perpetuating exploitative social orders. While racist, homophobic, misogynistic, and capitalist undercurrents certainly swirl through the fiction of the canon, and while avid students pride themselves on uncovering them, I find it disturbing that they assume their molding influence on readers without asking what I think is the more basic question. Yes, readers are seeing in Tennyson or Hugo a disturbing misogyny, but so what? Yes, readers then and now should not have their prejudices confirmed, but not merely because we prefer our prejudices to theirs but because these imaginative works are neither sociological investigations nor psychological confessionals. Perhaps every human creation from cave paintings to kewpie dolls screams a political manifesto, but for my money the meaning is brought to the reading rather than derived from it. Deconstructing fiction is said to have given critics what they have always desired: equal partnership in artistic creation. I doubt if serious academics would have gone for it if they hadn't already accepted the claim that fiction qualifies as another form of philosophy.
But the effort to find truth in fiction was only the first step, leading inevitably to the goal of all truth claims: finding goodness. Let no one think the postmodern method is morally neutral despite an implicit rejection of objective moral standards, for its program of social reform is built on the model of the human sciences (not a good idea at all; please see my post of September 2, 2013 for why). In substituting sociology and psychology for the kind of pure aesthetic John Ruskin favored, postmodernists transform imaginative works into covertly polemical ones, replacing one exaggerated influence with another, using the narrative form to pursue weighty political ends it could never support. They mock Ruskin's Romantic pretensions as the merest fluff while mining literature for justification for their moral crusade (I am entirely sympathetic to their egalitarian agenda, by the way, though I find their warrants unforgivably simplistic. Please see my post of November 20, 2013). The truth content of created reality is simply too insubstantial to carry the weight of their analyses. They are shooting at bubbles in the air to fill them with rhetorical lead, but the bubbles merely dissolve, and with them goes the cathartic power of narrative media.

So what’s the harm? The child finishes the Harry Potter novel and eagerly reaches for the next in the series. The crowd files out of the movie theater marveling at the latest computer animation effect. The reader closes The Great Gatsby still envisioning the green light at the end of the pier. No harm there, only a rich emotional immersion into a created world. But it is so hard to resist the next step. Presidents mimic action heroes. Romance novel fans inspect their snoring husbands with disdain. Serious and intelligent students learn to seek truth in the deconstruction of the latest serious novel, yet they find no critical consensus on the wisdom it purportedly conveys. E.O. Wilson complained in his TED talk that our attraction to narrative prompts us to seek a simplistic, spurious intelligibility in the world around us. We want good guys and bad guys like in the movies. We yearn for happily ever after like in the fairy tales. We yearn for the omniscience of the novelist’s world. In doing so, we disdain the open-ended complexity of the natural sciences, the hard work of sustained commitment, the doubt and uncertainty of finding truth and choosing the good. We want our stories to be real and reality to be as silky smooth as a heroine’s cheek. But that cannot be.

Standard

Religionists Fighting the Wrong Battle

I came across the following article not long ago, and made a close reading of its arguments, many of them broadly Christian in outline, on the subject of material determinism. Original in black, my comments in bold italics. I hope they don’t overly detract from the flow.

From Catholic Answers Magazine (Volume 19, Number 4)
Determined to Deny Your Freedom
By: Peter A. Kwasniewski

“Determinism” is not an everyday word, but we feel the effects of this philosophical view every day—usually in the unspoken assumptions of popular scientific journalism and critiques of religion. It is helpful to be aware of what this view involves and why it is untenable.
Determinism in its most general sense could be described as the theory that the history of the world—all events and their order of occurrence—is fixed and unitary. In other words, there is only one possible history of the world down to every last detail. There are several types of determinism: logical determinism, theological determinism, biological determinism, scientific determinism. In this article I will concentrate on this last and most familiar form. Scientific determinism stems from a belief that modern science, especially physics, has successfully proved that all reality is material and operates according to fixed laws of action and reaction.

No. This demonstrates a basic misunderstanding of science, which could never prove something so ambitious to its own satisfaction, much less yours. What you describe is the self-limitation of all scientific inquiry: it can only work with material phenomena. To see science as holding the philosophical position that any event of any sort is fully explicable (and thus, in principle, predictable) by a pre-existing chain of physical events necessitating it is also mistaken, though this is a fine philosophical definition of determinism. Science can only use the tools it has, so full explanations as science sees them must be couched in the language and employ the tools that science provides. For instance, why I get cancer and you don’t is explicable by science to a degree, but no researcher would claim that explanation is complete in a metaphysical sense, nor would any claim that an empirical explanation is ever satisfactorily completed, since explanations always lead to further questions. The core issue here is that science applies the philosophical position of determinism to a very limited sphere of study, one confined to perceptual experience. You seem to be setting science up for a fall with these hyperbolic definitions.

In a world where science has been elevated to the status of a quasi-religion and its spokesmen to the rank of high priests, we are bound to encounter people who hold this position. It is well to note that the attitude or frame of mind underlying it strikes at the root of religion as such, impeding conversations about anything—God and the human soul, Christ and the Church, sin and grace, even good and evil—that is not strictly empirical or susceptible of laboratory analysis.

This may be true if scientists are unwilling to admit the limitations of their method, but most seem all too willing to acknowledge them. Scientism, not science, fits the definitions you give here, and scientism is easy to refute. Yes, science “impedes” conversations about anything non-empirical, just as it ought to, for the empirical is its sole sphere of action. But nothing stops religionists from diving right in.

Science Explains It All . . .
This view found its rudimentary expressions in the writings of René Descartes, Francis Bacon, Galileo Galilei, Isaac Newton and their contemporaries, but attained a dogmatic consistency in the blatant materialism of Thomas Hobbes, Julien Offray de La Mettrie, Voltaire, and Baron Paul Henri d’Holbach. These writers exaggerated the reach of physical science and claimed that experimental physics was the model for a total explanation of reality.

Yes, don’t forget Poincaré and other thinkers who championed what passed for science in the nineteenth century. This sense of unbounded optimism is always a temptation for science, but the view you reference was especially popular in the era before science was as strictly defined as it is now. Bear in mind that the province and power of “science” as an activity have tightened over the years, and the trust that eighteenth- and nineteenth-century writers placed in the future of science has since been challenged by the increasingly rigorous requirements of empirical research. What Voltaire thought of science is as irrelevant to its current status as what Plotinus thought of religion. Pompous self-importance and inflated truth claims are particular temptations of the human sciences, but these are not what we think of when we think of scientific success.

Later on, Charles Darwin’s theory fed into this powerful stream. His godless account of biological diversity showed itself well adapted for integration into a larger philosophy of scientific determinism.

Slanderous. Read the last sentence of On the Origin of Species! Still, just as Copernicus removed the necessity for angelic rotation of the spheres, Darwin removed the necessity for the Great Watchmaker to establish biological diversity and complexity. I suspect biology will soon remove the necessity of God from biological creation, and cosmology is already working on a universe originating from nothing. But what is the alternative? To reject evolution and modern cosmology in favor of miraculous creationism and geocentrism? I have written extensively in these pages of the temptations of coherentism, whose virtual circles of personal truths need only be supported by the principle of non-contradiction, and of the immediate contradiction religionists face when they attempt to claim absolutist truth about external reality using these same means (see posts beginning on September 11, 2013 for more on this). It is certainly defensible to make truth claims about the transcendent reality we cannot know so long as they cohere with what we do know, but to prefer private belief to correspondence knowledge violates not only the proofs of correspondence but also the only proof open to coherentists. It is a childish error of willfulness.

The rapid and spectacular advance of technology, born from the marriage of modern physics and capitalism, seemed to verify beyond all doubt the materialistic mentality behind both.

I had no idea physics and capitalism were even dating! Seriously, how are they connected? They may both be godless in their subject areas, but the pews are filled with practitioners of both on the Sabbath, so these fields conduce to atheism no more insistently than any others. Yet again, it seems to me science’s silence on metaphysics might be interpreted as an admirable restraint on a subject it can never know.

Given that people nowadays have been more or less habituated by textbooks, teachers, and news media to accept scientific determinism as fact, the apologist should start by explaining that the position is essentially a belief or dogma. It cannot be deduced from empirical knowledge, which must always be imperfect (no scientist would dare to claim that he knows or could know all the “laws of nature” and all the data required to predict future events).

Again, odd definitions. “Deduction” means to draw a conclusion not explicit in the premise, so it strikes me as wrong to say truth cannot be deduced from empirical knowledge (though the use of “fact” in the passage shows a lamentable mental sloppiness). The methodology of science ensures that it is the best means available to arrive at certain kinds of truths. The error here is to consider absence of evidence as evidence of absence. The empirical endeavor simply cannot speak to the issues you value. Its silence should not be interpreted as a rejection of your value system but rather as a blindness to it. I think your quarrel should be with modern theology, which stupidly accepts the narrow focus of empiricism as a kind of limitation or criticism of religion’s core values while at the same time modeling its theology and pastoralism on the flimsiest of human sciences. The use of “dogma” is interesting, for it references a common criticism of the scientific enterprise: that it pretends to be based on fact yet is actually predicated on non-verifiable assumptions. On that charge it is clearly guilty. Let us call these assumptions “axioms” rather than “dogma” so that we may avoid religious equivalencies. The principle of non-contradiction is one such axiom, as is the inductive method. These assumptions rest on a deeper axiom: that reality is fundamentally rational (Heisenberg and Gödel have taken aim at that deeper axiom, an attack crucial to postmodern critiques of natural science). These axioms are working premises rather than dogma. They underlie the investigation, but as early twentieth-century theories of quantum mechanics and general relativity have revealed, they are open to criticism and revision using the very same techniques scientists employ in their everyday pursuits. Contrast this pragmatism with religion’s absolutist reliance on inerrant and revealed truth. Imagine how religionists might respond to assaults on their dogma! You don’t have to. Study the Reformation.

It cannot be considered self-evident because it contradicts the experience of freedom, which has more weight than any theory.

Nice point, but the phenomenological sense of freedom does not necessitate an ontological reality. We may feel free without being free.

The one who puts forward determinism as a universal explanation lays it down a priori, that is, as an axiom and without sufficient evidence.

Now this is just silly. It is neither an axiom nor a priori, but an inference drawn from experience. You may argue it is the wrong inference, but to call it a priori is simply an error of definition. Once we see the claim to determinism as an a posteriori conclusion validated by a near infinitude of experiential instances, the truth of determinism as a foundation stone of empiricism seems unassailable.

Empirical science can never go beyond the boundaries of the measurable or observable, and, as a consequence, is simply unqualified to make judgments about the existence or non-existence of anything beyond its limited field.

Yes! Now with that in mind, go back and revise everything you have accused it of till now. Don’t malign it for ignoring religion and God when it cannot do otherwise and don’t take its silence on metaphysical issues as dismissal.

. . . Or Maybe Not
Let us consider seven instances where scientific determinism founders.
1. It is meaningless to speak of universal “laws of nature” unless they have been instituted by a lawgiver. Matter, as such, is not capable of giving laws of behavior to itself. That means that material things are not the source of these laws; rather, they presuppose laws when they act and react in an intelligible manner.

No. A “law” of nature is simply a large explanatory hypothesis that answers the “how” of some physical question. You are correct in saying that things cannot create laws: these are products of human interpretation. We say the law of gravity dictates the attraction of two masses in a vacuum, but the masses don’t know the law. They do act according to its dictates as we would if falling toward the earth from an airplane. But a law of nature is merely a logical explanation of phenomena, not a prescription dictating behavior, and as such it requires a mind to make the analysis. Such analysis provides real predictive power. Nothing in the nature of natural laws necessitates an external source or creator; all that is required is a mind to explicate the law based on careful observation and rational analysis. Sounds pretty determinist to me, for what is a natural law but a replicable prediction?

Moreover, how did material things come to exist, not merely as matter, but as matter functioning within a system that leads to the formation of stable and orderly structures? Do atoms just mysteriously “know” where to go to in order to make up a certain molecule in a certain kind of organism?

Also a red herring. Does water know to freeze at 32 degrees Fahrenheit or boil at 212? Again, matter responds to forces acting upon it in predictable ways. That means determinism. Otherwise, we couldn’t be accurate in our predictions and science would collapse into magic. The totality of these actions produces a “system,” which magnifies the predictability and therefore the determinism operating in the system. The system doesn’t predict or know it is a system. We do. Theoretical cosmologists and theologians may find in such macroscopic interlocking order evidence for divine intervention that empiricists might never observe in the microscopic determinism of phenomena. But, I hasten to add, such confirmation could never be scientific simply because the explanation involves notions not open to perception and – pity poor science – it can only deal with phenomena.

The materialist will have sophisticated answers, of course, about how one system gives rise to another and how this environment happens to be suited to that reaction or result. But buried in the fancy language is the same problem: “begging of the question.” They have assumed that which is supposed to be demonstrated.

Huh? Is this a blurry version of the cosmological proofs?

2. A living animal (or one of its organs) is obviously and radically different from a dead animal (or dead organ) even though the material stuff out of which they are made seems to be the same. Therefore, some principle other than and greater than the material parts must exist to account for the life of a living thing. This principle, according to the Western tradition, is the soul. Both Aristotle and St. Thomas Aquinas teach that plants, animals, and especially human persons are animated beings (from anima, soul). It is the soul in each organism that contributes its distinctive nature and controls its activities. The presence of a soul in living things testifies against the materialism that usually accompanies scientific determinism.

You are on higher ground here, as you are not challenging science but transcending it, which seems to me your best bet. Science certainly can describe the difference between a live and a dead body, but it must be silent on the existence of the soul that animates it. Ockham’s Razor does play into this, though. We don’t need angels pushing the planets around the sun when we can offer centripetal force, though they still might be doing that. We don’t need Apollo driving his chariot across the sky, though he still might be there. So it is with an animating force. Organisms are more than matter: life requires energy transfers through biological and chemical systems that cease to function at death, and this empirical physical change adequately explains the difference between life and death. Maybe there is a soul involved in this arrangement, but Ockham’s Razor makes it unnecessary for pathologists to explain fully the perceptual factors involved. Unfortunately, you cannot prove the existence of a soul any more than your opponents can disprove it, so your argument is unlikely to be a powerful one against active opposition.

3. The human intellect has a unique power: It is capable of knowing simultaneously things that are mutually exclusive. For example, hot and cold are properties of a body (physical object) and cannot exist at the same time in the same respect; a body can either be so hot or so cold, but not at once perfectly hot and perfectly cold. The intellect, however, in knowing hot knows also cold, and in fact knows the one in and through the other. Your mind can be all hot and all cold, inasmuch as you are able to grasp these opposites at the same time. More than that, intellect conceives of hotness and coldness, which are more than mere degrees belonging to some body—they are essences, “whatnesses.” These reflections help show that the intellect is not a body, for something is seen to be true of it that can be true of no body whatsoever.
Now, because the intellect has a power over opposites or contraries that no physical organ has, and because it attains a knowledge of universal things that stand beyond the scope of any sense power, the intellect must be immaterial. Since matter is the very cause of a thing’s being corruptible (i.e., able to break down and fall apart), the intellect in itself is incorruptible—it will never break down and fall apart. Hence the soul of man, insofar as it is intellectual, is immortal. What is more, the soul is not subject to opposition from or coercion by material causes. In other words, no body can make you change your mind, unless your mind changes itself. This is a powerful sign that the intellect (or better, the intellectual soul, which includes free will), has its feet planted in the material world by way of the sense powers, but holds its head aloft in a spiritual world where the stakes are truth and falsehood, good and evil.

Well, that was a long trip! This Platonic argument was countered by Aristotle, who argued that the Forms were neither perfect nor divine but were rather conceptualizations built up from experience. The argument was developed ad nauseam by the “realist” opponents of nominalism in the high Middle Ages, but as no convincing resolution settled the question, it seems to me all the old nominalist and conceptualist arguments can still be made against this one. My position is that a concept like justice is not a real object any more than hatred is – though I hold that both are objects of thought that allow us to recognize and discuss their nature – but that does not mean that either is divine in any way. This is not to disprove the Platonic argument. It can neither be proved nor disproved. But as a completely natural counterargument suffices to explain the phenomena in question, this argument against determinism is not necessarily convincing. By the way, it is a huge leap to go from conceptualism to immortality, so I would say your argument makes some unsupported jumps along the way here even if the initial Platonic points prove defensible. For instance, it certainly is not logically necessary for a nonmaterial substance to have qualities entirely contrary to those of a material one. That one thing is contrary to another in one sense does not necessitate its being contrary in all. Men and women are opposites in regard to gender but identical in many other respects. So too may immaterial qualities be like material ones in some respects but not all. At any rate, conceptualist notions of qualities like “consciousness” posit an objective reality rooted in that consciousness rather than in some Platonic realm.
4. The determinist claim that free will is an illusion flies in the face of our immediate and unshakable awareness of freedom over moral actions. It undermines praise and blame, reward and punishment, and the practice of justice, which renders to each what he deserves. If man is not the free cause of his actions, how can he be praised for defending his family from crime, or punished for murdering a fellow human being? All social life and jurisprudence is founded on the fact of moral freedom, which we know with a certainty far greater than any scientific hypothesis commands. Some people use the expression “pre-scientific knowledge” to refer to the fundamental experience of the natural world and of ourselves that not only must come before, but must dominate the interpretation of, all subsequent knowledge. Some scientific theories are reminiscent of a man on a ladder sawing off the planks that support him, or a tightrope walker ready to sever the cord that holds him up.

You seem to mistake wishing for knowing here. I would love to think I am free. Also that I will not die. Both of these convictions are strong in me. But if I face facts squarely, I know I will die. And if I look at the various arguments against determinism in regard to free will, I find that compatibilism, libertarianism, and the various other attempts to find room for human freedom are wishes rather than arguments. I concede that we feel free to choose, but we are wrong about many things we come to naturally, and even unpleasant truths must be faced. Still, I think you have this one backward. If we have free will, we are not determined. But the fact that we feel we have free will is not proof that we are not determined. I wish it were.

5. Nothing is a cause unless it has power to cause. No physical thing gives itself power to cause, but always receives this power from something else. Moreover, no physical thing is the cause of its own being, but exists only as a result of prior beings. Thus, for each cause, one must seek the source of its causality; for each being, one must seek the source of its existence. If there is not, prior to all physical causes, a non-physical origin of the power of causality, then nothing could ever begin to cause and nothing would in fact occur. Posterior causes depend on prior causes; if there is not, prior to all physical beings, a non-physical origin of their existence, then nothing would exist—all of which is absurd. The existence and causality of material things therefore depends entirely on a perfectly immaterial uncaused cause of both being and motion—namely, God. Far from doing away with God, scientific determinism cannot make any sense at all without implicitly assuming him—or rather, without arbitrarily transferring divine attributes to matter and chance.

This is certainly what the Deists say, and it is a strong argument. Thanks, Aquinas. It seems right to argue that the only way to break the causal chain is to posit an uncaused cause. If the universe had no moment of creation, then everything that could happen would already have happened and we would have reached maximum entropy. Cosmology is doing a bang-up job with multiverses and so on to push the moment of original creation back, but infinity is a long time and I doubt that cosmology will ever be equal to taking that on. Still your strongest point, but I would like to stress Aquinas’s point, one echoed by C.S. Lewis, that calling a creator “God” does nothing to imbue Him with any of the other qualities we like to attribute to the Judeo-Christian divinity. And if we seek to follow Paul’s advice in Romans 1:20 and infer the nature of the divinity from the universe it created, I cannot imagine we would produce the Judeo-Christian God, particularly if we don’t begin with the Biblical version of creation. Any honest induction from the nature of creation should produce a creator who loves material diversity far more than individuals, a conclusion supported both by the size and complexity of material reality and by the fragility and waste implicit in biological evolution.

6. The exponent of scientific determinism is guilty of a dramatic inconsistency between his thinking and his life. His dogma tells him that he is not free, that he is not responsible for his actions, and similarly that nobody else is free or responsible; yet in his life he behaves as a free person towards other free persons, exacts duties of himself and others, and shows mercy or cries out for justice when wrong has been done. His dogma tells him that his wife and children are basically automatons, yet, if he is a good man, he loves them and could never actually believe that the unique relationship he has with them—the experiences they have shared, the meeting of his future wife, their marrying and rearing children—is no more than a lockstep parade of meaningless atoms.

Not really. This seems a corollary to argument four and the same counterarguments apply. We know we are going to die, but nonagenarians keep on eating nonetheless. So do twenty-year-olds whose death in cosmic spans will follow immediately. Why act in a way that violates the basic facts of existence? Because we have hope, which is all this point offers. I am convinced that free will is tied to the Kantian categories, specifically causation, in questions of goodness, so though we feel we possess natural freedom, and therefore bear the burden of choosing, we are actually determined. (Please see my postings of March 16 and 23, 2014 for more.) Since we don’t know what the determining influence is, we feel it doesn’t exist. But people were bound by gravity long before anyone knew what it was. In any case, our beliefs often seem at odds with the known facts of existence, but the conflict should challenge our beliefs rather than the facts.

7. If someone asserts that determinism is true, has he come to understand something true about reality as a whole? If so, how can this truth, which is universal, timeless, and independent of all particular events, be merely an effect of material causes? It already reaches into a domain no longer subject to—indeed totally outside of—the strict chain of physical cause and effect to which the theory appeals. There is no room for truth as such in the world of the determinist; the man who says “determinism is true” refutes himself in the very act of speaking.

This is a restating of your third argument but is much weaker. Conceptualism offers a long history of explanation for terms such as “truth” that traces back to Aristotle’s original rejection of the divine origin of the Forms. David Hume does a clear job of catching one up on the progress from the Greeks to the Scottish Enlightenment, though not much has been worked out since, but the argument you give here would also make unicorns and Klingons not only real but of divine origin. I would add that the man who says, “Determinism is false,” should never expect his car to start in the morning. Consider the literally thousands of chemical, electrical, mechanical, metallurgical, and physical processes involved in an automobile’s creation and operation, from the making of the internal combustion engine to the refining of gasoline, from the radiator to the exhaust gases. Each is based on determinism, as is every bit of natural science. The notion that God made a miraculous universe died a well-deserved death beginning in the Renaissance. Its adherents seem not to appreciate that a miraculous universe would be a literally chaotic one, one not open to reason and experience: in other words, the medieval world. To trash human reasoning so willingly seems to me a deeply ungrateful act of betrayal of the divinity believers profess to worship.

Nevertheless, the apologist should bear in mind that determinism, as a quasi-religious dogma, is passionately and stubbornly clung to by its adherents, who have often, so to speak, pre-determined the outcome of the dispute before it even gets under way. An apologist is more likely to be successful with ordinary people who have given credit to determinism only because it is repeated ad nauseam in textbooks and the media. Their half-hearted endorsement of it, or of some aspects of it, is thus more easily shaken.

I am afraid you are guilty of this charge, and the success you desire smacks more of religious proselytizing than of a deep investigation into the methodology of science. I once again challenge you and those you seek to convert to visualize the appalling consequences of living in a universe unintelligible to reason, for surely that would characterize one devoid of material determinism. Such a universe would hardly be a “cosmos,” an orderly creation, but rather the “Pandemonium” Milton envisioned as the realm of devils.

Reviewing the weak theories that attempt to rob us of our freedom, we might well desire to cry out again with St. Paul: “For freedom Christ has set us free; stand fast therefore, and do not submit again to a yoke of slavery” (Gal. 5:1); “Now the Lord is the Spirit, and where the Spirit of the Lord is, there is freedom” (2 Cor. 3:17).

I admire Dr. Kwasniewski for confronting the issue in such a disciplined way, and his characterization of science does have some validity in regard to the excesses of scientism, which is certainly a problem that science and its popular adherents sometimes fail to confront sufficiently. But since the line was drawn long ago by Popper and others, there seems no excuse for scientists or educated laymen to cross it. For an opponent of science to attack the excesses of scientism seems to me an act of bad faith, for the two have long been disentangled, and the proper sphere of science should by now be well appreciated by practitioners and the rest of us who profit by its technology, rigorous subject disciplines, and deep view of material reality, none of which necessarily dissolves faith and all of which may serve to deepen and broaden it. My most serious objection to the premise of this article is that it chooses the worst possible point of attack. Can anyone take seriously an assault on material determinism in this age of scientific triumph? As I have tried to make clear, such an assault hammers at the very foundation stone of the entire empirical enterprise (and any rational enterprise as well). Such a reactionary appeal to pre-Reformation epistemological positions would have been hopeless in 1700. It seems worse than quixotic to pitch it today in the face of the manifold daily proofs we see of the predictive power of the natural sciences. Science exists because of its explanatory power. Its truth is confirmed by the interlocking paradigms of its discrete subject disciplines, its reliance on the precision and logic of mathematics as its language platform, and, most obviously, its technological marvels. All of these rest squarely and exclusively on the truth of determinism. Why religionists fight this lost cause puzzles me, particularly when they can use some of these same arguments to contrast material determinism with human freedom, attributing the latter to a difference of kind only explicable by postulating the existence of a soul. Dr. Kwasniewski gets half of the argument right – our sense of freedom argues for a spark of the divine – but to grant that same freedom to the material universe by denying determinism puts him in the same boat with those he attacks, only he accuses them of wishing to make man matter without freedom while he wishes to make all matter free. Religionists might consider broadening the gulf between humans and nature rather than narrowing it. Science cannot help itself: it only examines perceptual reality and so must limit its study to our material being. Its glory is that it accepts its limitations and excels within them. Religionists should agree to accept theirs, and stop fighting for a cause long since lost. They might still triumph if they pick their battles more wisely.
