Divine Justice

Justice is classically defined as “to each her due,” a definition both necessary and sufficient to examine various applications of the term: distributive, retributive, moral, legal, and sociological. But what about our impression of God’s justice? Can we say anything meaningful about divine justice and does such an attempt cast any light whatsoever on our imperfect efforts here and now? As in so many issues of truth and goodness, the answer to that question depends on the warrant one chooses to apply.

The essential issue concerns what is due. The Book of Job attempts to resolve the problem of evil by establishing a distance between human and divine justice. Job and those who know him calculate his deserts based on his actions and his character: the story makes quite clear that Job is a good and blameless man who fears God. By any rational calculation, he is due God’s favorable judgment. But as Satan observes, perhaps it has been this very blessing from God that has kept Job on the straight and narrow. Withdraw it, Satan charges, and his goodness would evaporate. What follows is a litany of miseries visited upon the poor man by the devil, but with God’s consent. By the end of the nightmare, as Job lies cowering in the dust, covered in rags and boils and praying to die, the reader must share with Job the conviction that here is a God who cares not at all about what His creatures deserve. For those who would seek to salvage some goodness for a God who visits such trials on the innocent, that hope is crushed by a lovely but terrifying divine peroration. It goes on and on. “Where wast thou when I laid the foundations of the earth?….Knowest thou the ordinances of heaven?….Wilt thou also disannul my judgment? Wilt thou condemn me that thou mayest be righteous?” The thrust of God’s response may be summarized simply: God decides what is due and therefore what is just. Our own sense of justice must be abandoned, surrendered to God’s, no matter how arbitrary we may find it.

This moral imperative is hardly confined to one Old Testament book. It permeates Biblical narratives as salt permeates the Dead Sea and stifles in every case the merely human appraisal. Adam and Eve are ejected from paradise and their descendants condemned to lives of toil and suffering even though our unhappy parents “knew not evil” in the garden and so lacked the moral sense to grasp the enormity of their disobedience to divine command. Abraham was prepared to commit filicide, an almost unthinkable evil, in what turns out to be a test of his obedience. David’s son by Bathsheba is killed to punish David for his adultery. The Pharaoh’s heart is hardened by God. John the Baptist is decapitated. Jesus is crucified. The lesson of Job is seemingly difficult for believers to learn but is so often repeated that none can escape it. In the classic formulation of the issue as framed by Plato in The Euthyphro, the Biblical response to the question of whether the gods’ commands are good because the gods command them or because they are of themselves good is unequivocal. We find a thoroughly consistent response, as summarized in Isaiah: “I form the light and create darkness, I make peace and create evil; I, the LORD, do all these things.”

Martin Luther summarized what might be called this orthodox Biblical response for Christians. “For were this justice such as could be adjudged as just by the human understanding, it were manifestly not divine and would differ in nothing from human justice. But since God is true and single, yea in His entirety incomprehensible and inaccessible to human reason, it is right, nay it flows necessarily, that His justice is also incomprehensible.” Put simply: “Who are you to judge me?” Taken at face value, such a declaration should settle the issue of divine justice conclusively. Any God who boasts of having put evil into creation is one who attacks our sense of what is due at its foundation, introducing contradiction into what humans assume to be the rational appraisal of value so necessary in any application of human justice. For what we are due in human terms involves a spectrum of deserts wherein the reward should fit the accomplishment or the punishment the crime. The manifold exceptions so clearly laid out in Biblical narratives, not to mention God’s repeated bald claims of responsibility for the creation of evil itself, make clear that in Plato’s terms, the Biblical God decrees goodness as He sees fit, in violation of any human conception of the term and of all rational conceptions of justice. His sense of justice is fundamentally not our own.

Now this realization poses a problem for believers, and not only modern ones. Evidently, even the writers of these narratives found it impossible to reconcile their own rational sense of what is due with the bald pronouncements of divine authority. And this is a problem of warrant. In previous posts I have examined both the problem of religious authority (please see posting of September 18, 2013) and our utter reliance on our own rationality (August 4, 2014) to make sense of our truth and goodness claims. What the conflict between authority and reason produced in Biblical authors and continues to produce in their adherents is a kind of bifocal view of divine justice. Though the text answers Plato’s question in The Euthyphro one way, the subtext responds with the other. God seems both bound and unbound by His own goodness. He either makes or allows evil, but then how are we to reconcile such ambivalence with His beneficence? The conundrum is resolved by a renewed appeal to divine justice that seeks to comfort the rational sense that simply cannot be satisfied with Biblical accounting and will not accept it.

For each of the Biblical narratives that establishes divinity’s unlimited prerogative to do to His creatures as He wishes, that baldly reminds the reader that actions are just because God defines them that way, we find a restoration and satisfaction of what Luther calls our “merely human” sense of justice. So though Adam and Eve are condemned, their heirs are promised redemption. Abraham’s knife is stayed, though only at the last moment, so Isaac is not sacrificed to God’s will but is instead the inheritor of greatness and prosperity. Job is rewarded with greater wealth and happiness than he had lost. Jesus is resurrected and covered in glory. But as much as each story grants God total latitude, the parts all add up to a glorious whole in which the evil each character suffers is defeated, and goodness–rationally understood–is rewarded. The omnipotent thumb will balance the scales of justice, and the good will find eternal joy while the evil suffer torment for their sins. Sanity is restored to a cosmos seemingly deprived of it by a supernatural order that corrects the injustices of this world with an entirely rational justice in the next.

So the divine “order of things” that seems so disordered is seen by believers as making sense only when tuned by the music of the divine sphere. In this, their opponents evidently concur, for the existential literature and philosophy of the last century is characterized by what can only be called a sense of petulance at the injustice of creation, an injustice that a godless and uncreated universe could never foster (for more on this sense of betrayal, please see “Freedom, Antihero, and Zeitgeist” posting of November 26, 2013). But if unbelievers want to have their cake and eat it too, believers seem equally determined to be inconsistent. They stress the absolute power of divine arbitrary will while seeking to chain God to human rationality as Biblical authors did: by restoring rational standards of justice through appeal to an overarching supernatural order. When this appeal fails to appease a natural human appetite for justice in this life, they resort to a trump card they have already discarded: divine mystery. Why did one hundred infants die in some natural disaster while one wizened octogenarian survived? Why did a loved one die an agonizing death from bone cancer? In short, why does natural evil have its way with the best of us and unjust contentment shine forth from the wrongdoer? Their answer echoes Job. It is God’s will. Which is to say, a mystery. Is it any wonder that those who “hunger and thirst for righteousness” find such an answer unsatisfactory, particularly since such a detailed explication of God’s nature has been so carefully drawn by the dogmas of the religionists who now appeal to divine caprice, and since part of that dogma explains the divine order that guarantees “to each her due”? Perhaps the careening between incompatible conceptions of divine justice might suggest to believers and unbelievers alike that there are some very basic categories of human understanding that remain unsatisfied with either conception.
How can a just God allow injustice? Is His justice so unlike our own? Or is the appearance of injustice an illusion? Or will justice be delivered in the next world? Isn’t the honest answer that we simply do not know, that an acceptance of the transcendence of divine justice must be more overarching than we wish to accept? Isn’t such an admission of our ignorance of the divine more honest than the confident assertion of doctrines that embrace both God’s justice and His arbitrariness? When we are reduced to confronting the tears of the oppressed with a shrug, what allows us to then speak at such length of our putative knowledge of God’s purpose and plan? Why is it hubris to confront the inconsistency of divine justice yet not hubris to explain it away in so disjointed an effort? Haven’t ten thousand splinterings of Christian orthodoxy confirmed our suspicion that authority is not up to the explanatory task it attempts, that only what Augustine called “the natural light” of our reasoning can find the line between the knowable and the mysteries of faith that we so desperately seek?

It is, I suppose, possible to posit a kind of Averroism on this question, to assert as Luther did that we cannot evaluate divine justice in human terms, that some unknown fundamental difference between God’s sense of what is due and our own dooms any human attempt to make sense of this question. But such a “cure” strikes me as far worse than the “disease” of confessing our ignorance. Justice is a fundamental human characteristic, central to any moral rule and to our attempt to apply it. To posit a divine “justice” that operates on some foreign principle would not do violence to a human conception of justice, for we cannot live without it, but it would thoroughly doom any effort to establish a connection with the divine and a moral universe imitative of it. Imagine a God who actually did leave Job in rags and boils, who delighted in Abraham’s filicide, who acted as viciously as the gods of Olympus. It seems a Platonic sense of justice must be as much a characteristic of divinity as omnipotence and omniscience. An unjust god is an affront to our concept of the divine and indicates not blasphemy against the Biblical God but an error in our conception of Him. The only correctives are a relentless humility coupled with a tireless application of reason, God’s scale on earth.

 

 


Tangled Terms

For those interested in how persons choose to justify their claims to truth, goodness, or beauty, conversations prove as rich in meaning as poetry, though perhaps more elegy than ode. Because it is far easier to make our declarations than warrant them, we often find ourselves tied into a Gordian knot of self-contradiction when asked to explain ourselves. We have history to thank for this conundrum. Our age endorses two contradicting kinds of warrant: personal and public or, to use the more formal terms, coherence and correspondence (for more, see blog entry of August 6, 2013, What Is the Virtual Circle?). Yet the language we use to make truth claims is not up to the task of clarifying which kind of warrant we are appealing to, drenching each declarative sentence in a murky ambiguity both confounding and tempting. While navigating these claims tends to confound the attentive listener who wishes to investigate the declaration’s truth, the ability to transition effortlessly, even unconsciously, between the two modes tempts careless speakers to make claims they could never justify to another. I’ve harvested a few common turns of phrase from recent conversations and readings that illustrate the problem. Perhaps you’ve noticed them too.

 

“I know my opinion is true…”

Either you actually don’t know it, or the opinion or belief you’re expressing is not what it seems to be. We call a broad range of truth claims “opinions” that are actually preferences or expressions of taste. It certainly is possible to know such things in a coherence sense, meaning they confirm or fail to contradict all the other personalized truths we construct in our minds that can be justified in no other way. For instance, I know that I am hungry at this moment. Calling that sensation an opinion is more an insult to the term than to my appetite; my uttering it to you is an insult to your attention. So should the speaker claim that kind of opinion true, the listener might wonder why she bothered to speak it. Perhaps she was talking to herself. Perhaps she was making a correspondence claim and casually misused the word “opinion” when she might have more clearly used the indicative term “judgment.” On the other hand, she might have spoken correctly, and when challenged, might have shot back in defense and confusion, viz…

 

“I have a right to my opinion.”

It is doubtful that anyone would bother wasting this phrase–with the requisite degree of indignation that accompanies it–on a mere coherence truth claim involving personal taste, emotional response, or interior sensation. This kind of claim bursts from an injured dignity. The speaker feels under attack. The implication of this comment is that all opinions ought to be respected, perhaps equally. But this feeling reflects the confusion implicit in the word opinion. When the word is used in a coherence sense, everyone certainly does have such a right, since coherence truth requires the speaker to assign her own degree of value to all putative warrants. Should she accord sacred status to revelations or dreams or any other perception or reflection, her listener must respect that right, so long as the collective assemblage of private truths betrays no contradiction and makes no correspondence claim. The problem arises when said opinion goes beyond the confines of the speaker’s own mind and presumes to state a truth about the common reality we all share. For claims to truth, goodness, or beauty in common reality are subject to an entirely different set of justifications. For these claims, simple non-contradiction is insufficient, and the “opinion” opens itself to the kinds of warrants used in correspondence (for details on the use of warrants, see blog entry of March 10, 2015, “What Counts as Justification?”). In these cases, the speaker might more correctly claim that she has a right to be wrong. Confusion about the use of the word opinion derives from the flaccidity of its meaning. Still, it could be worse. She might be referencing your point of view, in which case the preferred comment seems to be…

 

“That’s only your opinion.”

This is the flip side of the previous comment. If that one was fueled by injured dignity, this one runs on contemptuous dismissal, blithely indifferent to the contradiction produced by applying the former to my opinions and this one to yours. It discounts any differing truth claim on the grounds that it derives from differing perceptions and idiosyncratic reasoning, in effect accusing it of carrying only coherentist meaning. It goes without saying that the speaker of this comment disagrees with the claim. Obviously, were she to examine its nature to determine whether it can be warranted by correspondence, she would be required to find the warrant for her disagreement, but why go to the trouble when dismissing it out of hand by casting it as a meaningless expression of taste or preference disposes of it far more thoroughly? We see a similarly lazy tack with related expressions, for instance…

 

“My logic versus your logic.”

This kind of comment vivifies the postmodern notion that consciousness is fundamentally private and incommunicable, justifying a thorough coherentism (for more background on this issue, please see my blog entry of July 30, 2013, “Postmodernism Is Its Discontents.”) The phenomenological position has a long heritage, but its insistence that reasoning is molded by experience or culture is refuted by the universality of mathematics and the worldwide standardization of scientific disciplines, just as its fundamental claim that objective reality is unknowable is disproven by, among many other things, the triumph of technology. Since even devout coherentists find these correspondence truths difficult to reject, they must construct some inventive twists in their virtual circles to maintain their objections. Coherentists are thus handicapped by a requirement for epistemological purity that their opponents can avoid. Correspondentists who defend the intersubjectivity or even the objectivity of experience are not required by their position to claim that all reality is knowable by reason or common experience (the historical errors of science’s champions in claiming more territory for correspondence knowledge than it can support have been responsible for both the hubris of scientism and the disaster of the human sciences), but coherentists are compelled by the exclusionary nature of their argument to reject the possibility of any objective knowledge. The result is that modernists may embrace a place for both correspondence and coherence, but postmodernists must reject correspondence in its entirety or introduce contradiction into a virtual circle whose only truth test is the presence of non-contradiction. One of the many embarrassments of that absolute rejection is that they must somehow reject the truth of the scientific enterprise. One way is to denigrate science as one more kind of propaganda by saying…

 

“Culture is composed of the stories we tell of ourselves.”

This one is particularly slippery because, after all, at least some of culture is exactly that. But the perniciousness of this claim lies in the privileged status it accords to stories at the expense of the other kinds of truth and goodness claims that form culture: e.g. moral values, traditions, laws, and theories. The root of the dichotomy that puts narrative on one side of the scale and other kinds of conceptual constructs on the other can be traced to the fabrication of mature postmodern theory by the poststructuralist movement of the 1970s. Their opposition to modernism and its errors was complete; they were deeply contemptuous of the rationalizations for exploitation that modernism had endorsed: imperialism, colonialism, racism, classism, capitalism, and misogyny. Postmodern critics dismissed the rickety and elaborate (and self-contradictory) justifications modernism had erected to excuse its exploitations as mere stories the powerful tell themselves, grand narratives with all the freight of fictionalizing and world-creating that stories require. Of course, demoting a theory to a story not only diminished the theory; it also elevated the story, something the littérateurs who composed postmodernism thought perfectly splendid. But no matter how sustained the effort at epistemological equivalency or how insistently an entertainment culture places narratives before us, stories are neither reality nor adequately suited to explain it (for more on why, please see my blog entry of July 14, 2014, “Tall Tales.”) We grow blind to moral reasoning and the lessons of experience when we mistake stories for them, and so blur all judgments into opinion, all analysis into parable. It is little wonder that the urge to question everything is deeply ingrained in our culture today, and so we…

 

“Challenge authority!”

This mantra has been the theme of the last five hundred years, so one might think it tried and true. From the Protestant Reformation that spawned modernism through the postmodern thought revolution that challenged it at the dawn of the twentieth century, intellectual history has been a roundelay of questions, objections, and rephrasings of accepted wisdom. But one needn’t know any history to observe this in action: each person’s life is a microcosm of the same process as we first believe, then trust, and finally interrogate the truth, goodness, and beauty claims that we were born into. I cannot say this maturation process has always been as forthright and aggressive as it is today, in a world where yesterday’s verities are today’s ironies, but surely something like it has molded more and more lives since the sixteenth century. So why do I in turn challenge the authority of the phrase? I think it blurs both the nature of authority and the possibility of challenging it. First, it is tempting to throw all “received wisdom” into the same basket, making it easy to assume that all the “truth” we learned as children or from the culture is equally dependent on authority. An unlikely coalition of religionists and postmodernists supports this view, the former to elevate authority and the latter to denigrate it. In an earlier post (January 20, 2014, “Authority, Trust and Knowledge”) I referenced C.S. Lewis’s claim that our acceptance of science is built on the authority of the scientific community. No less an authority than John Paul II in his encyclical Fides et Ratio makes the same claim. But surely the paradigms of science are quite different from the pronouncements of parents. The facts of the history text are of sterner stuff than the musings of the first-year teacher citing them.
One may choose to accept anything on authority, but when appeal to a stronger correspondentist warrant is available, we have the choice to reach for it, and only our own limitations of time and energy inhibit the effort. Such strong warrants for truth and goodness claims are often accessible: empiricism, logic, or expertise supports so many of the truths we choose to take “on authority.” This is quite different from those truth and goodness claims for which authority is the only possible warrant, such as those supporting religious dogma. I think religionists blur the difference between authority and stronger correspondence warrants because they seek to elevate authority to the level of these justifications, perhaps deliberately, perhaps because their own acceptance convinces them of its legitimacy. Quite the opposite motive moves postmodernists, who proudly seek to challenge authority wherever they find it, seeing it as an implement of cultural control masking simple power relationships. Their motive is to sap the strength of stronger justifications, reducing all to the level of authority, a source of power they feel very comfortable confronting. Historically, their ease derives from the earlier challenges to religious and political authority in the name of modernism, efforts whose modes postmodernists appropriated for their own attacks on modernism itself (not that modernism ever shrank from attacking its own modes of knowing as well). There is still a whiff of the lone protester blocking the line of tanks in Tiananmen Square about this exhortation to challenge authority, an invitation to bravely man the barricades. Come on! After five hundred years of attacks, authority has all the starch of a wet paper bag, its fatal weakness being its inability as justification to resolve challenge on its own terms.
We might miss that weakness if we fail to isolate authority from stronger modes of justification, which are indeed much more difficult to attack and which contain within their own methodology the means to resolve challenge. Or we might overlook how tiresomely often authority is used as an easily wielded form of cultural control just as postmodernists charge. This effort is understandable, for nothing in our complex and bureaucratic zeitgeist is easier than setting up a top-down “authority.” Only it can’t work that way for two simple reasons: acceptance must be granted from below, so to speak, and it is always tentative in a society nurtured on questioning everything. As often as it is set up, it gets knocked down simply as a function of its nature as warrant. For when we “challenge authority,” we break the reed of trust that props it up. Should we appeal to a stronger warrant, authority fails, for example, as religion retreats in the face of competing scientific claims or as parents accede to the wisdom of childrearing experts. Should we withdraw assent in the name of a competing authority, the effect is the same, for what can our original authority offer as a counter? Should it proclaim its righteousness more loudly, more insistently, more violently in the face of challenge? This, of course, explains the absurdity of religious conflict. It is no great thing to challenge authority, for the challenge alone dissolves authority as justification for truth and goodness claims. Challenging power is a different and harder matter, but the decision to do so derives from the knowledge that such an act would accomplish some perceived good, and determining the good thrusts us right back into issues of warrant. The problem then gets harder: finding goodness is a far greater challenge than finding truth, particularly by means of either authority or pragmatic convenience. 
Of the three categories of goodness– instrumental, qualitative, and moral– the search for moral goodness is the most difficult of all, in part because in an age that reveres science the way earlier ages revered religion, we find empiricism deaf to our pleas for clarity about moral goodness. Science can only approach the perceptual. The great failure of our age is the failure of moral judgment, so it is no wonder we wish to find someone of whom we can say…

 

“She is a moral authority.”

But she is only if you trust her to be. We see two bottlenecks in this claim, each limiting the qualifications of those who can claim moral authority. The issue of authority is the same as in the example above. No one can claim authority over you without your permission, for authority by nature is power freely granted through an act of trust. To belabor this point for clarity, let me distinguish between someone being granted authority over you and your freely granting it. Of course, all kinds of persons assume positions of authority in complex social arrangements: policemen and principals, clergy and clerical workers at the DMV. But until you accept their power over you willingly, trusting in its legitimacy, they can have no authority. We all know what happens right after you reject, say, the authority of the policeman. But the power of the gun and badge is not synonymous with the authority most citizens grant the person who carries them. Our bowing to power is an act of pragmatic calculation rather than moral reasoning. It is the polar opposite of granting authority, though one might not know it from observing our response to the demands of either.

And that introduces a wrinkle, for on what grounds should you grant any authority? It makes little sense to say you grant authority on the authority of the one you grant it to, does it? So how do you decide whom you should trust with moral authority? This thorny question has been answered for all of us by our childhood trust in parents and other adults, a trust logically granted in one area because it had earlier been earned in others. Children trust their parents to handle their childhood needs, one of which is moral suasion. So all parents, or at least all good parents, qualify as moral authorities, at least to the children who freely grant their trust. Is the situation analogous for adults, who have learned the lessons of human fallibility all too well? It seems so, for adults grant moral authority with the same kind of rational assent, perhaps leavened with a stronger approval of the institution that supports its moral leader. So we willingly give our trust to pope, patriarch, ayatollah, or shaman in moral issues, and also to police, judges, and civil magistrates, because the institutions they represent have provided us with a moral compass we approve of. When we investigate the basis of such assent, and especially when our trust is violated, human fallibility being what it is, moral authority seems a pretty precarious thing, principally because of the weakness of authority as a warrant for truth and goodness claims.

But then we face the second bottleneck: we are granting moral authority, and in this most difficult of goodness issues, the problems of authority are multiplied by the elusiveness of moral certainty. We face immediately the collapse of the stronger correspondence warrants for truth and goodness. Empiricism cannot guide us, for moral goodness is not perceptible, removing all such questions from science’s purview. Science is the strongest warrant for many truths, but it must be blind and dumb to moral issues. Expertise is not very useful for moral questions either, if only because morality is too contextualized to conform to the demands of repeated, similar experiences that the development of expertise requires. We have seen three moral systems based on logic–virtue ethics, utilitarianism, and duty ethics–but these efforts to systematize moral agency are complex to negotiate and less than perfect in practice. It is true that many, perhaps most, persons today rely on the weakest of correspondence warrants for truth and goodness: undifferentiated experience. In practice, this total reliance on the demands of the moment, as calculated by a fairly random attribution of value to the factors involved, is indistinguishable from moral pragmatism or even emotivism. It is little wonder that moral authority still retains its attraction despite the ravages of the last five hundred years. It offers certainty and clarity, provided it is not challenged; but, of course, in a diverse and divisive moral landscape, it always will be challenged. And even the slightest consideration of the competing warrant of a challenge destroys the trust necessary to maintain any authority, especially the moral kind. Religious fundamentalists of all stripes seem to forget this. They seek a premodern position for authority, a return to the trust that nourished it for the ten centuries before the Reformation. They seek to overturn a reality in which…

 

“People will believe what they want to believe.”

Since many persons equate “belief” with “opinion,” the intent of this one is probably similar to dismissing a truth claim as only an opinion, but perhaps something more attaches to this expression. The use of “belief” ups the stakes, for while “opinion” is a pretty meaningless term in our culture, “belief” still retains at least a hint of theology about it. We all call our religious convictions “beliefs,” not “opinions.” And this is appropriate, for the etymology of “belief” includes an emotional attachment to whatever we claim to be true, or good, or beautiful. I really do believe what I want to believe. Now this affection is not an impediment to my claiming coherence knowledge. The freedom to select from a nearly infinite buffet of perceptions and reflections limited only by the weak restraint of non-contradiction implies exactly the kind of affective choosing that coherence permits and “belief” signifies. I may believe myself capable of great feats of courage or acts of self-sacrifice in such a schema. I think it true because I wish it true as I assemble my virtual circle.

But surely not all of our beliefs are wishes on the order of Santa Claus or the Tooth Fairy. Many of our beliefs are negative ones. Atheists disbelieve in a caring Father in the sky. Suicides believe life is hopeless. By what definition could these dark thoughts ever be judged “wishes”? Though counterintuitive, the term still applies, I think. In terms of warrant for correspondence truth, beliefs lie just outside the borders of knowledge, meaning that their warrants are too weak to be considered “true by a preponderance of the evidence” simply because either the evidence or our reasoning about it is insufficient. (Of course, beliefs are perfectly satisfactory as coherence justifications for the kinds of personal truths that build a virtual circle.) The trick is to apply the right warrant to the correct belief. The question of whether God exists is hardly a coherence one, so the nonbeliever and the theist face the same correspondence knowledge issue that cannot be answered in this life. Each seeks to extend her knowledge of what can be answered, entering the realm of belief. So long as each recognizes the limits of assertions based on their respective beliefs, no foul. (Two key problems are translating belief into authority and revelation into correspondence truth. For more on this issue, please see the blog entry of September 11, 2013, “Truth in Religion.”) Since we cannot know by a preponderance of the evidence, we must believe something simply to complete our picture of the world; we must supplement the virtuous circle, that conceptual construct of real truths about the world that we have discovered, with arcs of the virtual one that we must construct. These constructions constitute our beliefs. There is no escape from this effort.
The agnostic who finds wisdom in proclaiming her ignorance still must act, and her actions must give the lie to her claim of indifference (for more on this issue, please see my blog entry “The Lure of the Will to Believe” of July 2, 2013). This irresistible urge to complete the virtuous circle, to know reality even when that completion requires construction rather than discovery, characterizes all belief. It is perhaps the strongest of our impulses, for it allows us to do what we are made for: choosing the good. Since that choice requires first knowing the reality that contextualizes the choice so as to frame options, we are moved to knowledge, and then to the belief that completes it. This impulse drives us to curiosity about the world and allows us to learn from experience, but it also tempts us to premature closure of our virtual circle in what is surely the mistaken judgment that it is the virtuous one. So, yes, belief is what we wish to think true if only because the extension of our knowledge into that murky territory is our only means of claiming comprehension about our environment and opening it to instrumental, preferential, and moral goodness, even if what we do know pushes our beliefs into the depths of despair.

This constructive effort is considerably more creative for the coherentist who may structure her beliefs as knowledge verified by non-contradiction and integrated into her virtual circle by whatever degree of logical rigor she wishes to apply. It is more circumscribed in efforts capable of correspondence justification, for the judgment I must bring to my effort to secure knowledge requires a deliberative attempt at objectivity expressly at odds with the license granted in coherence. In truth, the judgments we make in attempting to build the virtuous circle regard the temptations of belief as obstacles to knowledge, for all kinds of biases tempt us, in Plato’s words, to fall in love with our own opinions. So the correspondentist faces a border battle in regard to beliefs. Embraced too uncritically, they tempt her to hubris and premature closure, to exaggerated claims to knowledge of the reality we all share, yet she must still embrace beliefs at some point to complete her picture of how reality operates so that she might choose the good as she understands it. And the border between knowledge and belief is a moving one as we grow in wisdom. So how do we grow wise in arbitrating the ever-shifting boundary between our knowledge and beliefs? The best advice I have seen is to follow Karl Popper’s warning: seek anomalies in our own judgments and the beliefs we build upon them. Search our cherished truths for falsifiability. Subject our supposed virtuous circle to all the skeptical objections we can throw against it. As unpleasant as errors are to find, the discovery of each is a corrective to the temptations of premature closure.

We cannot help but to complete the shapes of what we only dimly perceive in the foggy distance, and that bit of construction must derive from what we can see in the foreground. So we properly base our beliefs upon our knowledge. But even this process of cautious completion, extrapolation, and exploration reminds us of the temptations of belief, for we mold the unknown so as to conform to the known despite all of our caution. Just as medieval mapmakers drew great white spans upon their charts yet chose to confine them by the continental outlines they knew, so must we shape the unknown by the reality we know. St. Paul had it right: faith really is the evidence of things not seen. Modesty should caution us on that score not to claim more evidence than we are given, for we are seeing through a glass darkly, no matter what we wish to believe.

It may be a psychological truism that people will believe what they want to believe, but it turns out to be a very poor moral directive. Better far to say that people should believe what is true, recognizing that beliefs extend rather than constitute knowledge, at least for correspondentists. Coherentists face a rather different problem, for their freedom to claim their beliefs and opinions as certain knowledge exacts its ironic price. The postmodern battle cry, “Perception is reality,” must always end in the stalemate of “my truth versus your truth.”


What Counts as Justification?

Try this. The next time a friend makes some simple declarative comment, ask her this simple question: “How do you know?” Now this exercise should be enlightening for you as well as your friend if you take a moment to examine it. First, you might ask yourself why you picked this particular declaration to challenge. Next, you might pay attention to the tone of your friend’s response. Finally, notice its content. All three observations should prove educational.

As to the first, it seems a peculiarity of our nature that we never notice a need to justify our truth claims unless we find them challenged. Did you disagree with the remark you chose to question? The odds are that you did, and that propensity reveals something about our desire for consensus and civility that is at one level very positive. Unless there’s something wrong with us, we don’t wish to be confrontational, annoying, and fractious. We typically don’t seek out discord. Our alarm bells don’t go off when conversations either confirm or fail to challenge our virtual circles, the complex of mutually supportive truth and goodness claims that make sense of our world. (On the down side of this inclination, our failure to interrogate our virtual circles allows us an unjustified conviction that they are, in truth, the virtuous circle, the single accurate understanding of what reality really is.) Our reluctance to challenge others and ourselves could be a social survival mechanism or mental indolence. I am convinced that we have a little unconscious mental calculator that clicks off all the positive reinforcements we receive for our own views of the true and the good. Perhaps this consensus gentium helps us to sustain them. In any case, why would we want to ruffle feathers and disrupt our own placidity by seeking justification for declarations we find congenial?

Of course, we and those we converse with could be wrong. We could be wrong as we think the 9-11 hijackers were when they discussed the pleasures of the afterlife as they steered their Boeing 767 into the World Trade Center. Wrong as the Germans who cheered the Nuremberg Laws of 1935, the Papal Court at the trial of Galileo, or the defenders of Jim Crow in the American South in the 1950s. For social consensus is hardly proof or justification, though it moves mores and conventions. I consider it a kind of authority, and in prior posts I have attempted to explain why it is a very poor kind of justification for our truth and goodness claims. It fails as soon as it faces serious challenge. Allow me to be clear. Its failure in no way disproves the truth it attempts to justify. The collapse of consensus is a consequence of disagreement, meaning a failure of authority to justify the truth claim. It is simple. Challenge dissolves authority. Some other warrant must be sought if the claim is to be defended. Your asking the simple question, “How do you know?” is exactly the kind of challenge authority fears if only because it opens the door to disagreement.

The second thing to notice in this little exercise is the tone of your friend’s response to this simple question. If it hints at some defensiveness, you might back off. But why would she object to a simple request to explain why she thinks something to be true? After all, she put it out there, making the truth claim, structuring a sentence so as to say “this is true” or “this is good” or even “this is beautiful.” If you agree, wouldn’t it be interesting to gather more proof for your own judgment, and if you don’t, wouldn’t it be even more interesting to understand why someone thinks differently, perhaps with enough rationale to help you arrive at some little arc of the virtuous circle? So you asked, and since she bruited the truth claim to begin with, isn’t she under some obligation to respond?

You would think so, but perhaps you might back off a bit if you also consider that her justification might very quickly veer into very personal territory. Then again, it might not. Oddly, that depends not at all on the nature of the declaration itself but rather on the nature of the justification she uses in her own mind to make sense of it, to convince herself that what she said to you, what she really thinks, reflects some truth. Should her warrant fall into the realm of correspondence, she might reference an article she read or a comment made by her podiatrist or mechanic, or maybe an experience she had that she feels no hesitation to relate. That is the way correspondentist justifications go. They are external, singular, and common, so if you then reply with a conflicting kind of warrant provided by an article you read, an expert you consulted, or an experience you had, the two of you will find further conversation fruitful as you compare the strength of the warrants you used in your truth claims. But perhaps the response to your question will be different. Your friend pauses as she searches for the right language to express an entirely different kind of warrant, one far more personal and less easily communicated. In this case she justifies her declaration not by some correspondentist warrant but by a coherentist one. Support for her declaration is not something external but something deeply personal, composed of the entirety of her experiences and reflections on them: in short, her values and her world view. For you to ask her to justify her truth claim is synonymous with asking her to justify herself and her understanding of her world. What is more, if she should attempt such a feat, how should any real conflict in your viewpoints be reconciled? What is most certain about her coherentist virtual circle is that it is different from yours. How could it not be? 
She has had a lifetime of different experiences and the lessons they teach to build it, and if she relies on her personal truths with any regularity, she is likely to regard the logical rigor with which she has constructed her world view to be also a matter of personal preference. To challenge her claim is to challenge everything that makes sense of her world and the means she’s used to build that sense.

And that brings us to the third point. What do you, her sympathetic listener, do with her declaration? On what grounds do you, can you, respond? Should you find her warrant to be correspondentist, you may without fear proceed to a discussion of the merits of the claim itself. You may offer evidence to support or dispute her position with every confidence that such a discussion will not imperil your friendship and perhaps will improve your mutual understanding of the issue under discussion. But should you see her argument as coherentist, you must tread more lightly. For a supporting argument adds no heft to hers. As it is based on a virtual circle and therefore a personalized warrant, the addition of another from an infinity of private warrants will do little to support her argument unless, of course, she treasures that little consensus calculator I mentioned earlier. And should you disagree, what of it? The same reality ensues. She can not only discount your argument as simply a variation of experience, as she should, but may also find you challenging her entire world view and the intellectual or emotional means she employs to structure it, for to find an anomaly in one part of the virtual circle is to introduce anomaly into its entirety. You are not merely challenging a truth claim, it seems, but also the entire identity of the person who made it. No wonder our disagreements are often so corrosive!

 

To put it plainly, correspondence arguments must be based on one of five justifications.

Empirical (scientific) warrants are the strongest simply because they make use of techniques designed to minimize the errors our reasoning is most prone to. First, they limit themselves intentionally to a very narrow range of perceptual experiences. Even within that narrow range, they further limit the quantity of specific experiences under review. They attempt to replicate those experiences to eliminate variance. They use precise language to explain what the experience has taught, which is why science is fond of the only infinitely precise language available: mathematics and the statistical methods it employs. They reason based on the experience, building explanatory hypotheses on closely examined experience rather than attempting to force experience to mirror preconceived judgments. The edifice of science over the last two centuries has been built on this knowledge foundation. But such efforts are limited in scope. Science can address goodness issues of utility—it can tell us the best materials to use in constructing an elevator cable—but it simply cannot speak to issues of quality or moral goodness or beauty because its methods require perceptual and measurable data.

And so we must seek logical verification of many of our experiences, attempting to find an ordered series of deductions derived from similar experiences or logical necessity. Such thinking, typified by a geometric theorem, is universal and ordered, but the underlying structure is provided by human reasoning rather than standardized experience as produced by scientific experimentation. For the last century, British philosophers have devoted themselves to ever more rigorous use of logical structure in their pursuit of the indubitable. The result has been a perverse discrediting of reason as a means to truth. But if we define knowledge as “truth by a preponderance of the evidence,” we can salvage reasoning as a means to knowledge while still accepting its openness to challenge. We can verify a logical correspondence argument to a listener, settling possible dispute by appeal to a universal reasoning process.

As experiences are so variable, we often rely on expertise to give us correspondence justification. The careful application of universal reasoning to varied experience may produce reliable judgments about truth and goodness, though these are admittedly less reliable than empirical or strictly logical applications because they must take the variability of experience into account, filtering those experiences through a universal logic derived from careful analysis. The kinds of experience that produce expertise are easy to identify: they must be different enough to allow thoughtful analysis to reveal distinction but similar enough to distill the essence of the thing being studied. And there must be many of them. I was pleased to see Malcolm Gladwell addressing this requirement in Outliers, though his standard of ten thousand hours of practice and study seems a bit rigid for a warrant that covers so many kinds of activities. Though it may require that much time to become an expert programmer, perhaps something less might be asked of a short-order cook, though the same sort of mental effort might be required to perfect skill and technique. It goes without saying that experts speak with a strong warrant only in their own field of expertise.

Failing in such efforts to examine experience, we fall upon the lesser kinds of justification: authority, the kind of cultural or traditional support excoriated above. It offers several unique strengths as a warrant for correspondence claims. First, it is easily accessible. We are all familiar with its power, for we were all children obligated to accept the truth and goodness claims of parents, teachers, and other adults. Their warrant was our trust in their persons or the institutions that placed them in our path. We accepted their declarations not because we had proof by a preponderance of the evidence but because they had demonstrated their reliability in earlier claims. But as any parent of a first child knows, that trust is often undeserved, at least in the beginning. For authorities are not necessarily experts. A strange extension of this kind of warrant might be termed cultural authority, the kind of diffused assurance that “everyone thinks so, and so it must be true.” As mentioned above, this kind of trust in the culture is ridiculously easy to call into question, for history has shown it to be as insubstantial as the music of the spheres.

Even more spindly is undifferentiated experience, the weakest kind of correspondence support, yet the one used most frequently by ordinary persons to support their truth claims. It is the weakest because it rests its conclusions on a clear falsehood: that some present experience can be understood in light of some past one. It is false because the context of each experience must be different even if the experience proved similar to a prior one. But even if the context of some undifferentiated experience should prove identical—say the thirty-fifth toss of a coin—one difference remains: time. No two experiences can ever be identical for that reason alone, not to mention issues of context, so the conclusions we draw from prior experiences applied to present ones must always be suspect. Still, despite this fatal flaw, we all use undifferentiated experience promiscuously simply because it is often all we have to go on in the blur of choices about truth and goodness that face us each day.

A correspondentist faces further challenges as she approaches the frontiers of her knowledge, that hazy landscape where her means of warrant are insufficient to provide truth as correspondence must define it: knowledge by a preponderance of the evidence. If she is thoughtful in analyzing what she knows versus what she believes, she will find her beliefs to be as much construction as discovery. The etymology of belief gives away its flaw when asserted as truth: it signifies an attachment or preference quite at odds with the dispassionate judgment required for assessing claims to truth, goodness, or beauty. Properly applied, belief uses the bases of knowledge to project logically consistent beliefs about those subjects we cannot know: whether aliens exist, how the universe came to be, what happens after death, and so on.

 

It is, of course, possible to reject these five correspondentist arguments in favor of a coherentist one, finding justification in the logical concordance of any and all perceptions granted credence by the thinker. This virtual circle of coherent truth claims is far more comprehensive yet also more personalized than any single correspondence claim. It may include any number of conveyances of putative truth: experiences, emotions, beliefs, imagination, intuition, and whatever else the coherentist chooses to value as a means to meaning. He gets to decide for himself what degree of rigor to bring to his efforts to harmonize his own beliefs, values, and truths. In this attempt, his sole limitation and thus his sole warrant is the principle of non-contradiction, a recognition that something cannot be both true and untrue simultaneously. The truth limit to his declarations is set at whatever degree of rigor he determines as appropriate, and at that limit he must be consistent in the kinds of claims he makes about truth, goodness, and beauty. If he is logically insistent, he might even extend this requirement to logical entailment, meaning he is obligated to accept not only those claims now consistent with his virtual circle, his personalized reality, but also any future claims that prove consistent with it. He may find any number of inputs congenial to such an effort to stabilize claims. No one may gainsay his efforts or critique the fastidiousness of his construction. Even the logic he brings to his sole means of justification is beyond the critical reach of another, for “it makes sense to me” is as much the credo of the coherentist as “it is true for me.” His reality is his perception and both are of his own creation. Of course, such flexibility comes at a price. The coherentist can never critique another’s virtual circle, so he must be tolerant of disagreement.
Should his friend assert a conflicting declaration, he is obligated by the principle of non-contradiction to reply with a polite recitation of his own views, no matter how they might agree or conflict with another’s, with no attempt at challenge or reconciliation. Such a conversation, a hallmark of this postmodern age, would be guaranteed to be congenial if unenlightening.

I will leave it to you, reader, to decide if such conversations are worth having.


Theocracy and the Commandments

Chief Justice of the Alabama Supreme Court Roy Moore is in the headlines again, this time for ordering probate judges in the state to disregard Federal Court orders to perform homosexual marriages. You might remember his name from his 2003 refusal to remove a monument to the Ten Commandments from Montgomery’s Supreme Court grounds. His is far from the only voice seeking to build a secular society on the Ten Commandments. After all, the U.S. Supreme Court building contains not one but two statues of the ancient lawgiver, both prominently featuring the stone tablets that were his gift to the Israelites and by implication to all of us. So we might be forgiven for asking why Judge Moore’s efforts to place God’s law front and center have produced so much heat from supporters and detractors. Do religionists think we need laws against stealing, killing, and libel? Do detractors think we don’t need such laws? There is much more here than meets the eye.

If we examine the commandments themselves, we find immediately a formal problem, as the relevant passages in Exodus are broken by various denominations in different ways, so that Orthodox Christians, Talmudic Jews, Roman Catholics, Lutherans, and Calvinists literally have different sets of the Decalogue. So the first challenge might be which actual sentences to enact into law. The Supreme Court neatly splits that baby by depicting the iconic tablets and an equally iconic Moses inspired by Michelangelo, but leaving the tablets conveniently blank. So the first question to pose to Judge Moore would have to be: which set would you engrave into law? But this is not insurmountable. A generic formulation could easily be put together that gets to the gist of every denomination’s values, provided none is too insistent on a literal reading of its own Scripture.

Once we prepare to chisel them in stone, we face immediately our second challenge. Laws against murder, theft, libel and slander: these are easy. But what do we gain by erecting prohibitions against these offenses? We already have those enshrined in current law. We hardly need God’s thunder to convince us that these acts are wrong. We can reason out the harms these violations do to a civil society, so we might in the interests of saving taxpayer dollars let the stonemason leave these aside. But in preparing the rest, we face evidentiary problems. For instance, consider the “coveting” proscriptions that make up either one or two commandments, depending on your denomination. As coveting is a sin committed in the darkness of the twisted heart, it would be a challenge to prove it a crime in a court of law, so again in the interest of avoiding expensive and unwinnable cases, we might instruct our stonecutter to leave these off our tablets as well.

So now we are down to five or six commandments (again depending on denomination), and here we come to the real meat of the matter. It is perfectly feasible that courts by precedent or reference to codified law could define acts that violate these remaining five or six commandments. So let us put them up in the public square. For instance, adultery is easy and is, in fact, considered a crime in military codes of justice, so we could simply criminalize it in state law in the same language, which would merely resurrect laws that once were on the books and have since been jettisoned in our secularized society. Sadly, that law alone might fill courtrooms to overflowing. More crowds would follow other criminalizing. Dishonoring the Sabbath might be a bit tougher to codify since we’d have to either specify whether the Sabbath falls on Friday, Saturday, or Sunday (which would incommode Jews, Seventh Day Adventists, or other Christians) or alternatively carefully define what would constitute “dishonor” regardless of which day one attends religious services. I assume citizens might be arrested for failing to attend their registered religious services, but then lawyers might have to litigate whether those sleeping or demonstrating lack of attention in these services meet the legal definition of “honoring.” We used to have laws against cursing, so a little research should have no trouble reminding the citizenry of which words cross the line into “in vain” territory and which ones keep us safely within the law. I can see some free speech zealots causing trouble on this one. The recent videos of ISIS partisans sledgehammering the museum exhibits in Mosul provide instruction on the law forbidding “graven images”; ISIS cited exactly that proscription in their vigorous efforts to demolish irreplaceable Assyrian and Babylonian artifacts.
I predict some informative courtroom debate on whether film, video, People Magazine, or the face of the $20 bill is more “graven.”

Ridiculous? This cannot be what Judge Moore and his supporters really want, though it clearly is what they say they want. A moment’s thought should help us see that instilling the Decalogue as the basis of law would be divisive as well as redundant, unenforceable, or unconstitutional. But if the supporters of this movement don’t want that, they clearly want something like that. So let us interpret their desires figuratively yet respectfully.

It seems what these religious appeals truly aspire to is a more moral America, and religionists like Judge Moore are not ashamed to use positive law to achieve it. In this pursuit, they are already more honest, and more correct, than their postmodernist opponents. Religionists see the civil law as a moral imperative. Their argument is that every law is also a moral injunction. “Thou shalt not run a red light!” Law prescribes an outcome that we consider to be a public good. At least some of religionists’ opponents advance the ridiculous assertion, “You can’t legislate morality.” Ironically, persons who say this are thinking of morality exactly as Judge Moore would have them think: as something given from on high, something absolute, endorsed by the divine. As the examples taken from the Ten Commandments show, you certainly can’t legislate some kinds of morality: those that concern the intentions of our hearts or those that can’t be demonstrated to show clear public benefits. But as the overlap of religious proscriptions against murder, libel, and theft with their secular counterparts demonstrates, you certainly can legislate some morality. In fact, legislating morality is all laws do. Even traffic rules that are purely conventional, like whether to drive on the left or the right-hand side of the road, assume a moral dimension when generalized in the public interest. Obviously, it could hardly matter less in itself which side we choose, so long as we all do the same thing. So some conventional laws in themselves have no inherent moral content. They differ from laws based on simple moral reasoning like those against murder or theft, which bind us, as it were, based on individual moral responsibility. Wanton killing would be wrong even if it were not against the law. Our reasoning about equity tells us that.
Still, it is against the law because tolerating it would permit public harm, and in prohibiting public harm all laws, even conventional ones, have a moral component.

Postmodernists think all laws to be in essence like driving on one side of the road or the other. The notion that all laws are purely conventional and arbitrary, determined by societal agreement alone, is one that religionists understandably abhor, and so they seek the sanction of divine approval as a sign of moral heft, sincerely thinking morality itself would dry up and blow away if not reinforced by religion. Their fear is understandable. For most of human history, religious authority was indeed the only moral warrant available (for a more detailed analysis of the bases for moral goodness, please see my blog posting of October 22, 2013, “What Do We Mean by Morality?”). Many persons, perhaps most persons, still see it that way. No doubt, fans of Judge Moore see it that way. I certainly cannot doubt their sincerity. But I think their view is demonstrably in error as a matter of historical truth, moral judgment, and practicality.

Defenders of a religiously based morality remind us that our Founding Fathers considered this a Christian nation founded on Christian principles. This is a half-truth or perhaps a meaningless one. Of course, the white, British, male creators of this nation were inescapably steeped in the culture that produced their consciousness, and that culture was Judeo-Christian by the traditions of two thousand years. By that standard even the most avowedly anti-Christian among them, Thomas Jefferson, acted “on Christian principles.” One might with equal truth assert them all Greco-Roman on that basis, for these neoclassical, Enlightenment figures were as familiar with Epicurus as with Ecclesiastes. Though nurtured in the Christian tradition, they were anything but traditional in their view of it, at least as regards the authority of law. Two proofs that the Fathers were not quite so Christian as Judge Moore would prefer caution us against assuming a traditionalist Christian orientation as a basis of our nation’s legal status.

First, their public pronouncements were far less explicitly Christian than religionists might like, though anyone can certainly cherry pick Christian comments in their letters and private communications. If they regarded divine command, and specifically Christian morality, to be necessary for the guidance of the nation, why did they not more explicitly set it in the town square? Why resort to what the Supreme Court prefers to call “formal religiosity” in their official commentary? Washington’s first inaugural is a brief gem of public speaking that explicitly appeals to God’s beneficence and guidance but fails to mention Jesus. It is likely that early patriots like John Adams saw no conflict between their faith and their reason and thought each sufficient to bolster the other in pursuit of public morals.

But if that were so, you might ask, why not explicitly summon both to the aid of the new nation in its battle against lawlessness and disunion at a time when its moral authority was in doubt among enemies at home and abroad? Any reading of their struggles in those dark years, first against the British and then against the collapse of their great experiment into factionalism and disunion, would argue for an explicit appeal not to some Deistic divine providence but to divine command, to Christian warrant for secular law. Yet instead we see the Establishment Clause. It is an antithesis, an explicit rejection of all governments’ previous basis for legitimacy and a ringing rejection of Judge Moore’s appeals to divine command. Our Founding Fathers knew their Judeo-Christian heritage far better than we, and because they knew their history, they also knew that the one great pillar of law that this heritage relied upon was God’s mighty voice proclaiming absolute law. Why in heaven’s name did they resort to the kind of Rotary Club generic religious appeal that Washington used in his first inaugural? And why in God’s name did they forbid using religion to fortify their novel and frequently besieged rule of law?

They knew something we may have forgotten. They knew it because it was fresh in their memories, and they wrote the Establishment Clause with this knowledge uppermost in their minds. It was knowledge both historical and moral.

They remembered the Reformation. Jefferson knew the punishment for atheism in Virginia was death. This was the consensual expectation of his time, for how could a common government rooted in denominational truth and goodness claims live in a society of religious diversity? Other founders’ parents endured the sectarian power struggles of Catholic Maryland and Anglican Virginia, of Puritan Massachusetts and Baptist Rhode Island, of Quaker Pennsylvania and Anglican England. Their forebears suffered the persecutions of James I that forced the Pilgrims to the Mayflower or the scorched earth of Cromwell’s invasion of Ireland, of Bloody Mary’s purges and Henry VIII’s loyalty oaths. And on the continent, the Huguenots and Dutch Reformed, St. Bartholomew’s Day, and Wallenstein’s cannons, all the way back to Luther and the Peasants’ Revolt. Even further– these founders were educated men, remember– they recalled the Hussites and Torquemada and the Lollards. They had studied the destruction of the Knights Templar, the suppression of the Cathars, the Fourth Crusade. The whole sordid history of sectarian conflict was the bloodied soil into which they drove the stake of the Establishment Clause. Why else did they establish history’s first experimental and consciously created state built on a novelty: the moral appeal of law based upon the rational consent of the governed rather than the command of God? Power flowing up from citizens rather than down from on high. No political entity had ever done that before. Why take the chance?

I have seen no condensation of their moral reasoning, so I will attempt it here. They recognized the Achilles’ heel of divine command, of all religious authority. They could hardly fail to recognize what the bloodied history of the Reformation had finally made manifest. Religious authority could offer no constructive response to challenge. Why? Any investigation of the Reformation instead revealed a radical splintering of responses, exemplified by the multiplicity of sects that emerged from it (one educated estimate of the number of Christian denominations today is 10,000, but I have no idea how that figure is compiled or whether it is accurate). Why is that? The answer revolves around the nature of warrant. Religious authority, the kind that justified both the divine command of dogma and the divine right of monarchs like Henry IV of France, and, obviously, the Ten Commandments, depends for its justification solely on the trust of those who accept it (for more on this process, please see blog posting of September 18, 2013, “The Fragility of Religious Authority”). That trust is a very weak kind of objective knowledge, easily disrupted by contrarian claims of competing authority (however, it is a very strong kind of private conviction, leading to inevitably ugly contention [please see blog entry of January 26, 2014, “Premodern Authority”]). So, though commitment to the truth and goodness claims of the authority might seem solid when trust is given, it can be easily revoked when trust is cast into doubt. Note that the other kinds of warrant– empirical, expertise, and experience– do not suffer from the same wobbly support but contain within their mode of warrant the means of settling disputes involving truth and goodness claims. For instance, two experts might discuss their findings and agree that one or the other possesses greater knowledge of the specifics of their disagreement and thus settle their dispute.
Even undifferentiated experience allows us to revisit issues in doubt and throw further experience at the problem in an effort to resolve it peacefully. But two authorities in conflict cannot resolve their disagreement, for neither can offer further proof of its case, and the presence of dispute itself, of course, tempts trust to weaken. Collapse follows quickly and catastrophically, often leading to a transfer of trust to another authority. This is the story of the Reformation. It is also the story of the current conflict between Shi’ite and Sunni, between radical Islam and the secular West, as it must be in every conflict that disputes the specific contents of divine command, as it would be in any effort to enshrine the Decalogue as the basis for civil society. Such crises of authority are epistemologically insoluble.

This lesson was difficult to learn. Ten generations of horrendous bloodshed in Europe were the price of learning that authority itself was the problem. Just as any posting of the Ten Commandments as a basis for positive law would instantly face insuperable issues of sectarian controversy, any appeal to Divine Command must fail in the face of anything but a religiously monolithic society.

Our Founding Fathers knew or intuited this. They established their experiment on a firmer ground than authority. In the spirit of the Enlightenment whose representatives they were, they built a state upon the enlightened self-interest of their constituents. Their state, with all its safeguards against unchecked power and unrepresentative rule, was a Rubik’s Cube of rationality: a nation constructed of reason, appealing unabashedly to reasonable citizens for its moral power rather than to the Almighty. It was unprecedented.

The era of the founding saw other formal attempts to build moral systems on reason and reasoned experience in explicit rejection of divine command and divine right. Three moral systems reliant on reason continue to vie for assent: virtue ethics, Utilitarianism, and duty ethics (for more on these models, see blog posting of October 30, 2013, “Three Moral Systems”). Other and more flawed appeals to reason derived from or trended toward postmodernism: social contract theory and pragmatism come to mind. These later efforts extended the founders’ brave experiment and ensured that there could be no return to an appeal to religious authority with all its certainty and unanimity, no matter how nostalgic that return might prove.

Judge Moore’s quest to honor his religious beliefs is admirable, but his fears of a moral collapse are misdirected. The modernism that produced our form of government is not his enemy, though believers understandably take it to be (for more on their position and its errors, please see blog entry of October 13, 2014, “Tao and the Myth of Religious Return”). They should refocus their opposition on the postmodern movement contemptuous of objective morality that seeks to equate all knowledge with opinion, all truth with culture, and all morality with nihilism.


When is Civil Disobedience Justified?

Two fairly mundane recent news articles have prompted me to examine the problem of warranting civil disobedience over the force of law.

The first has become increasingly familiar. According to the local newspaper, a twenty-nine-year-old man was arrested on multiple counts for threatening a municipal water company employee who wished to read his meter. The suspect was “wearing full body armor with a knife strapped to his chest, a Kevlar helmet with a face mask and armed with pepper spray.” After a brief standoff he was physically subdued by a law officer and arrested. Questioning revealed he was a member of the Sovereign Citizens movement and had had multiple previous scrapes with the law. Followers of the movement think “they can decide which laws to obey and which to ignore,” according to the Southern Poverty Law Center. He also told arresting officers he hoped to “go to Pakistan in two years so he could fight and maybe die for something ‘that was good and true.’” But he had been found psychologically unfit for military service by the U.S. government.

The second is more trivial, but perhaps reveals more clearly the scope of the issue I intend to examine. It is against the law in our municipality to set off fireworks, a law more honored in the breach than the observance over the years. But our neighborhood association decided this New Year’s Eve to enforce a more scrupulous adherence to the ban by hiring off-duty police officers to ticket offenders. This effort prompted lively debate on our neighborhood’s Facebook page between those supporting these efforts and those opposed. The president of our association framed the law-and-order side. “Last I checked, we are a country of laws, not of men…. It’s amazing to me that some would try to pick and choose which laws they would abide by…No responsible person should encourage illegal activity….” The opposition argued two points in response. The first was a defense of cultural tradition. “I try to live like it was the good old days. I have no bad intentions.” A more spirited defense engaged the president’s point directly. “We are a nation of laws. But we are a nation of individual freedoms and rights too.” To which his sparring partner replied, “What about respecting the rights of others to have peace and quiet? Lots of folks find fireworks offensive and a safety hazard.” Thereafter, in what has become typical of social media, the argument degenerated. The disputants clearly had different conceptions of what constitutes a right.

Both of these examples illustrate a real confusion about law and citizenship, a tangle that only gets knotted tighter by considerations of admirable civil disobedience versus civic responsibility. In any conflict between rights and law, where are the boundaries and where is the sweet spot that balances the interests of some against the interests of others? More to the point, are interests synonymous with rights? It is troubling to suppose that the ideal is always in the middle– that some fulcrum exists that finds the balance point– for in a totalitarian state that point might be far too close to the interests of the state while in a libertarian dream world, it might celebrate all the chaos of a gold rush boomtown. The notion that we can’t pick and choose which laws to obey rings true, yet we erect monuments to our Founding Fathers, whose picking and choosing spawned a new pattern demanding obedience. Ditto for Gandhi, Walesa, Mandela, and King. So we certainly cannot say what parents always say to their children: a good citizen always obeys the law.

In my last entry I examined some alternatives warranted by other justifications for legal power, rejecting divine command and legal positivism on the grounds that these warrants provide no means of ensuring minority rights or respecting civil disobedience. That cannot be said of social contract theory. As the U.S. Constitution exemplifies, defined rights can be specified in written or tradition-bound compacts, though these rights would of necessity be civil rather than human rights and thus open to a charge of cultural relativism that would do little to arbitrate disagreements between polities that define rights differently. We see that kind of conflict frequently on today’s international scene. But this problem is not by itself sufficient to condemn a social contract that builds in protections for minorities and allows mechanisms for civil disobedience. But we might ask of the framers of such a constitution the same question raised by the two examples cited earlier: what properly limits state power? It is good to have a Bill of Rights, but why these and not others and why ten and not a hundred? It is all very fine to sanctify free speech as a guaranteed civil right, but why should it be, other than that our wise Founding Fathers decided to add it to our compact of government (but only after a bruising debate and with serious dissent as revealed in The Federalist Papers)? The kernel of social contract is that government is a purely conventional invention resulting from a voluntary association of like-minded individuals in pursuit of common interests, so it seems reasonable that had their interests been different, their contract might have taken an entirely different form. Viewed in this light, there is nothing sacred about the rights enshrined in the U.S. 
charter, and so long as we embrace the fiction of the social contract as justification for government, we’ll have to look elsewhere for some foundation for our rights and some reason to violate positive law when we think they have been abrogated (for a fuller exposition of this issue, please see my blog entry of November 13, 2013, “Where Do Rights Originate?”).

It is, of course, quite possible to take seriously the notion that we are not at all bound to law except by our own consent as the examples cited at the beginning of this essay argue. This, after all, underlies the myth of the original social contract– and recall that Locke also argued that each of us renegotiates that contract as soon as we decide to accept the law’s dictates– so what was good for our ancestors should be good enough for us. But that notion quickly produces the dyspeptic realization that any government is an impingement on our freedom. This certainly seems the position of the originator of the term “civil disobedience.” Thoreau’s famous line about that government being best that governs least vivifies the libertarian position that any path out of the state of nature involves reluctant sacrifice of freedom in exchange for security. I think the widespread acceptance of this view explains a big chunk of today’s resentment of government. That resentment seems an inevitable outcome of taking social contract seriously as does the brute libertarianism that follows. But a careful reading of Civil Disobedience raises doubts about whether civil libertarians should make Thoreau their patron saint. And if we reject the warrant Thoreau used to derive his view of government, what befalls the argument for civil disobedience that flows from it?

It is certainly true that his resistance to government constitutes an embrace of the state of nature, one literally demonstrated by his years at Walden Pond. But I argue that the state of nature Thoreau embraces is as far from that envisioned by the framers of the theory as can be conceived. So though Thoreau stands with those who resist government, his argument is less against government per se than against government justified by social contract. Libertarians think he desires anarchy. He does not. Writing in his homebuilt cabin in the Massachusetts winter of 1847, Thoreau was not merely objecting to government overreach; he was objecting to government by majority, by consensus, by contract, and by convention. He argues, “Any man more right than his neighbors constitutes a majority of one already.” Recall that in his original framing of the state of nature, Hobbes had dismissed the notion of “right” as a subjective delusion. Everyone acts to secure what he thinks right, said Hobbes, and it is in this pursuit that he inevitably comes into conflict with his neighbor, such a war of all against all producing the need for the leviathan of government power. Thoreau is clearly not imagining that kind of natural condition, nor does he envision that kind of barbaric outcome. He is adamant in denying majoritarian rule: “[The state] can have no pure right over my person and property but what I concede to it.” He can deny it only because he imagines the demands of social order to be far more dangerous than disagreements among individuals. Why should he have come to a conclusion so different from the social contract theorists who inspired our Founding Fathers?

The answer can be found in the sixty-one years separating them from him. Specifically, two moral revolutions changed the theoretical field, the first advanced by Immanuel Kant and the second by the Romantic revolution he inspired.

It was Kant who argued for moral autonomy as the antithesis and antidote to the tyranny of authority that had dominated political theory during the millennia of divine command. In Critique of Practical Reason, he advanced the argument that each of us constitutes a moral universe whose laws we must actively engage and accept and for which we may be held responsible. Even should we attempt to abdicate that responsibility and assent to authority, he reminds us that it is our own reason that judges the authority worthy of our respect. For Kant, it is this preferential freedom that marks us as moral agents worthy of radical respect. We see the echo of Kant’s argument in Thoreau’s.

But we see something else as well: Thoreau’s confidence in his own moral rectitude was more absolute than Kant, with all his admiration for practical reason, could have imagined. Throughout Thoreau’s writings, and especially in Walden and Civil Disobedience, we see a disdain for convention coupled with an almost strutting confidence in his own moral superiority. The former fuels Thoreau’s contempt for social contract theory, which at its center regards political arrangements as conveniences invented to procure the desires of the majority. Thoreau even goes so far as explicitly to condemn William Paley’s The Principles of Moral and Political Philosophy (1785) precisely because it, like social contract theory, bases government on grounds of practical convenience rather than justice (though Paley also skewers social contract theory for its other falsities, among them the myth of the state of nature). But echoing through Thoreau’s writing is the deeper appeal to justice, to right, and to divine providence combined with an absolute confidence that he, and not that mass of men living in quiet desperation, might decide which laws satisfy justice and which offend it. What grounds such a belief?

It grows from the same confidence that sees anarchy as avoidable and wisdom as universally available if not universally embraced. Thoreau could reject the contractarian argument that conflict among citizens is inevitable for the same reason he could reject Kant’s careful constructions about the limits of reason: Thoreau embraced the Romantic pantheism that was flooding Western culture in the first half of the nineteenth century, and he could assert with confidence, “What I think is right is right; the heart’s emphasis is always right.” The receptive heart in tune with nature could hardly avoid knowing the good and adhering to it as well. With such ready access to indubitability, who could blame him for his disgust with those who would put self-interest above justice? To determine justice, one must ignore compacts and laws in favor of the higher and more intuitive knowledge that the god-in-nature provides, so long as one can clear away the delusionary and conflicting propaganda advanced by the majority. This is Thoreau’s answer to the thorny question of when one is entitled to practice civil disobedience.

Seen in the cynical light of a postmodern age, Thoreau’s Romantic warrant seems quaint and pitiable. Facing it prompts a Tooth Fairy kind of moment. If we can’t trust god-in-nature to reveal to our intuitions when laws are unjust and we also can’t trust a social contract rooted in a fiction, founded on convention, and moored to no fixed conception of rights, and if positivist and divine command theory make no provision whatsoever for retention of rights, what can we trust to litigate the defensibility of any act of civil disobedience?

Only natural rights theory provides a solid grounding for answering such a question because it is the only justification for law that begins with individual rights based on needs and sees civil participation as one such need. Rights theory is the political wing of a much more thorough ethical structure whose morality is guided by the development of the habitual disposition to satisfy needs (a broader outline is provided in my blog entry of November 6, 2013, “A Virtue Ethics Primer”). The determination of needs and the rights that entitle us to their satisfaction are relatively simple matters of interrogating what we choose to call a need: is the achievement of this desire necessary for flourishing? If we answer affirmatively, we then ask further: is this desire instrumental to some other and more fundamental desire? Does this more basic desire meet the criteria of all human needs: universal and transcultural (both temporally and geographically)? Does the satisfaction of this desire make impossible the satisfaction of some other need? One of the central tenets of virtue ethics is that the requirements for a fully human life are mutually supportive rather than in conflict, accumulating to the summum bonum, the totality of goods necessary for living a complete human life. While they cannot all be satisfied simultaneously, and indeed for much of human history many may not have been capable of satisfaction at all– consider sanitary or medical needs throughout most of history– their satisfaction remains a standing claim in each generation until it is finally met.

The self-questioning that accompanies our attempt to recognize and fulfill needs anchors the intellectual virtue component of the moral system. It is no small thing to negotiate our desires since, as choice-making machines, persons exist in a perpetual pursuit of the good, regardless of how they might define it. Plato was certainly correct when he said we always choose what we consider good at the moment of choosing; otherwise, we would choose an alternative. Virtue ethics makes a point of identifying the truly good so that our desires might reflect our actual needs rather than degenerate into a mad scramble of conflicting wants or manufactured materialist desires.

By now it should be obvious that natural rights theory would champion human needs as human rights: we are entitled to try to meet our needs. They are our due and their satisfaction our moral duty. And since justice may be defined as “to each her due,” we are entitled to a political system that facilitates rather than impedes their satisfaction. This explains why a social contract view of political organization could not be more wrong. Government is neither conventional nor built from expedience. It is not an invention at all, but has always formed a buttress to human happiness. It is a crucial component of the quest for persons’ satisfaction of their needs, operating impartially to arbitrate conflicts among citizens’ desires and actively meeting those that citizens can satisfy only through collective action. Government’s sole function is to deliver justice to each citizen. A perfectly moral government would manage its retributive and distributive roles so assiduously that any failure to meet any citizen’s needs, meaning any failure to secure rights, would be the result of that citizen’s own moral failings. Anything less would be unjust. Anything different would justify civil disobedience by individuals or minorities.

There is more to it, of course. It goes without saying that no government satisfies such a lofty function all that well, whether from its own incompetencies or from the failure of citizens either to understand or accede to its purpose. It faces thorny issues of distributive justice for citizens unable to satisfy their own needs (for more on this, please see my blog entry of December 3, 2013, “The Riddle of Equality”). It must balance the just demands of liberty and equality (for more on this issue, please see “Our Freedom Fetish,” blog entry of November 20, 2013). And even when it is able to handle such delicacies, it finds itself trying to identify and balance citizens’ real needs in such a way that they both get what they need and are satisfied with the process.

The two examples that began this essay typify aspects of the task. The Sovereign Citizen who viewed his liberty as outweighing justice and who displayed a Thoreauvian confidence in his own judgment exemplifies an unjust act of civil disobedience. His rights were untrammeled by the governmental agents he opposed, and even had they been violated, he could not have justified threatening the life of another person to avenge so slender a transgression. The fireworks issue is a bit more interesting, since the right to recreation claimed by the cherry bombers in this case opposes the right to a night’s rest claimed by their opponents. But since neither party would suffer much from so slight a deprivation (a few hours’ lost sleep versus the loss of a little recreational excitement), it seems fair to say that this instance hardly rises to the level of a civil disobedience issue. Is it therefore safe to say that cultivating the habit of respect for law should trump the tradition of New Year’s Eve fireworks? Like so many social media squabbles, this one seems unworthy of arousing irascibility, much less civil unrest. Applying the needs-based justification of natural rights theory alerts us not only to when civil disobedience is a moral duty but also to when– and why– it is unwarranted.


Preliminary Thoughts on Civil Disobedience: Natural Rights Issues

In a previous post (“Where Do Rights Originate?” [November 13, 2013]) I examined the four traditional justifications for the law, finding unambiguous support for rights only in the warrant that recognizes them as morally and chronologically antecedent to the demands of order. The natural rights theory is a unique view of political organization because it ties rights to justice, which it defines in the traditional manner as “to each her due.” It rejects the claims of divine command and legal positivism, which provide no means to justify civil disobedience. We see echoes of these positions in the admonition to obey legal authority unequivocally because it represents God’s will or settled law. But such inflexibility provides no opportunity for individual conscience or minority claims against that very law. Some of the defenders of the fourth and most popular justification for legality, social contract theory, argue that it makes space for rights. It is possible that constitutions and compacts might institutionalize opportunities for individual and minority rights, but nothing in the nature of social contract theory makes that possibility more than one option among many. The consequence is that social contractarians view the morality of civil disobedience as an issue of critical mass: a movement gains legitimacy when it persuades sufficient numbers of citizens to support it and thereby gains the attention of the majority. In this it echoes the moral system of Utilitarianism, unsurprising in that theories of utility arose contemporaneously with theories of social contract. But isn’t this notion of majoritarian warrant incomplete? Can we really defend the notion that Martin Luther King’s appeal to civil rights achieved legitimacy only after the Civil Rights Act of 1964 was passed? If not then, when? On what grounds did King’s movement attract its first followers?
Are we to think King’s appeals had no moral legitimacy when he spoke to one hundred parishioners of the Ebenezer Baptist Church in 1948 and only gained it when he spoke to half a million in Washington in 1963? If numbers legitimize legal power, then how could the “rights” of any minority be protected, much less individual dissent?

It seems clear that any notion of civil disobedience must first delineate the rights whose violation justifies acts of resistance. The natural rights theory argues that civil disobedience is appropriate when two conditions exist. First, the rights of any individual have been intentionally violated by civil functionaries regardless of positive law. Second, legal recourse to redress the grievance has been exhausted. The first condition eliminates the kind of offenses that devolve from external circumstance rather than legal power: the lack of resources that produces famine, for instance. The second links acts of civil resistance to the sophistication of legal processes designed to reveal and ameliorate violations of rights. So a mature democracy might have built-in mechanisms of recall, referendum, and initiative that allow citizens to challenge governmental power that less functional governments might omit, resulting in a delay of protest in the former case and an acceleration of what positivists call criminal action in the latter. I have in mind the contrast between the march from Selma in the U.S. led by Dr. King in 1965 and the attacks by Mr. Mandela’s followers on the South African police beginning in 1976. But even if we apply these two conditions, ambiguities abound. It is no small thing to break the law.

Let us eliminate the confusion resulting from justifications for law that make no provision for the existence of rights in themselves and so discard divine command, social contract, and legal positivism in favor of natural rights theory. Still, even when we begin our justification for law by enshrining rights as foundational, where do we place the moral boundaries of civil disobedience in pursuit of rights?

As far as I have been able to discover, the answer to that question depends on our answer to another that defines the origin and therefore the extent of rights. And this plants us in a swampy history. The progression of rights theory has been anything but linear, which helps explain the wide variations in delineating them.

It is ironic that we can credit Aristotle’s Nicomachean Ethics with both establishing the connections that root natural rights theory and illustrating its structural challenges, of which I will focus on four.

As a political philosophy, natural rights theory nests itself in a lovely matrix of ontological and epistemological structure (some of that structure is outlined in blog entries of November 6, 2013, “A Virtue Ethics Primer,” and October 15, 2013, “What Do We Mean by Good?”). Viewed in that larger context, natural rights theory is but the political wing of a broader moral theory, virtue ethics, a model deeply rooted in experience. But the same experiential basis that grounds this structure in reality makes possible errors in judgments evaluating that reality. Aristotle is notorious for thinking slavery, misogyny, and aristocracy normative simply because they existed in every society he observed. It is tempting to regard what is universally experienced as morally proper, which only goes to show that the balance between what we do and what we should do is never easy to pin down. A moral outlook that privileges our “natural” inclinations over moral duty tends toward subjectivism and emotivism, whereas one that ignores our proclivities leads us into a sterile duty ethic. The trick is to acknowledge our desires without kowtowing to them and in that act of discrimination find a convincing motivation for self-improvement. Natural rights theory is unique among the four warrants for law mentioned earlier in that it is grounded neither in purely conventional arrangements nor in decrees justified by absolute authority but is instead built up from the aggregate of real persons’ actual experience. So how could Aristotle have regarded practices we now regard as abhorrent, such as slavery and oppression of women, as morally acceptable merely because they characterized the 158 states he examined in composing his moral theory? More to the point, if the universality of a practice is an indicator that it is “natural,” on what grounds can it be condemned as immoral? This is the first issue natural rights theory has to resolve.
It does so by beginning not with the practice but with reasoning about what need it serves to satisfy. For instance, education of young people to prepare them for productive work is a universal practice that serves vital human needs. No justification can exist to deny that education to any person anywhere. Such denial would constitute a violation of rights. To deprive a person of this need because of gender, class, race, or religion would be tantamount to classifying that person as less than human.

A second problem for rights theory involves variations in practices between cultures. Rights devolve from needs, and needs reveal themselves in particular social arrangements that vary by time and place. The problem then becomes how to tease out universal needs from the varied cultural practices that satisfy them. Concerning the need for education, we see this issue playing out today in regard to controversies involving child labor, particularly in traditional agrarian cultures. For generations, children assisted their families in farm work both as contributions to their welfare and as a form of practical education. Is child labor of this type exploitative? Answering this question is less difficult than it might seem if we embrace an ethic that grounds a common moral framework firmly in universal reason applied to varied cultural means of satisfying needs. If the workload is tailored to the child’s age and abilities, if this kind of traditional education will still apply to her culture once she is grown, and if her other needs are also being met, such an education can indeed be regarded as morally appropriate, one culture’s means of satisfying a universal human need. Consider all the ways children have been and are educated in our world today. Respect for cultural diversity is certainly warranted so long as customs prove innocuous. For example, no people’s cuisine is superior to another’s so long as each satisfies human nutritional needs. Beginning with such universal needs provides the psychological grounding in reality that moral systems require, while evaluating their satisfaction in moral duty elevates our preferential freedom beyond simple and private preference (for more on this balancing act, please see “The Problem of Moral Pragmatism,” blog entry of March 19, 2014).

A third problem for the moral theorist seeking to ground ideals in observation of experience involves not variations between cultures but rather variations within them. Cultural arrangements are adapted to the natural differences of the individuals who enter into them, introducing further variability into practical moral reasoning. Aristotle regarded normalcy as a term of statistical rather than moral judgment, as his famous observations on left-handedness demonstrate (his judgment has been adapted in the current climate to champion equality for the LGBT community). His astute comment that anyone living outside of society must be either god or beast acknowledges the range of social engagement his observations revealed as “normal.” It seems self-evidently wrong to tell a person who is perfectly content living a solitary life that she lacks an essential component of happiness, yet this is what virtue ethics insists upon. This counterintuitive judgment is based on the summum bonum conception of human flourishing that refuses to define the goal of human existence in terms of contentment. On the other hand, virtue ethics also recognizes the myriad ways individuals can satisfy their needs, which opens possibilities for cultural diversity in the application of rights theory denied to other theories of law.

The three issues for rights theory mentioned above (universality, cultural difference, and individual difference) all require a fine-tuned examination of needs and the many ways they can be met. This examination poses a final, macroscopic problem. As the failures of the human sciences of psychology and sociology have more recently revealed (please see blog entry of February 9, 2014, “The Calamity of the Human Sciences”), we could hardly consider even Aristotle’s careful observations of human society to rise to the level of scientific validity. For all its valuation of reason as the defining characteristic of human beings, rights theory can make no appeal to our culture’s most esteemed warrant for truth claims. It is not scientific. No moral theory can be, for morality concerns itself with questions of goodness that no empirical methodology can answer. Without recourse to scientific verification, any moral theory is hobbled in this age of science worship, but even leaving status issues aside, rights theory can at best appeal to a rational warrant to support its claims, and Aristotle’s errors on that count should caution us to consider it warranted by only a preponderance of the evidence. Surely, its three competitors can claim no more and, as mentioned, can offer no integral defense of civil disobedience.

These difficulties for natural rights theory were compounded by historical developments that modified its rational warrant. Fleshed out by the Epicureans and the Stoics in the classical age, the theory shed some of Aristotle’s caste worship as it explicitly tied itself to universal human reason, inclining toward history’s first intrinsically democratic moral structure. The great Stoic Epictetus was a freed slave who saw reason as an inescapable basis for moral and political participation. Unlike Christianity, which saw acceptance of morality as a conscious choice leading to damnation or salvation, Stoicism regarded rational moral valuation as something all persons must participate in, either well or badly. It is unsurprising that Christianity absorbed elements of Stoicism into its dogma, but in doing so the warrant for moral choosing devolved from reason to authority (for more on the relative strengths of correspondence warrants for truth and goodness claims, please see my blog entry of October 7, 2013, “Better, Blended Systems of Knowledge”). Late medieval theorists sought to reconcile authority and reason, but without success, for their modes of warrant are incompatible (for reasons why, please see the blog entry of September 11, 2013, “Religion and Truth”). It still seems jarring to read Aquinas’s towering logical arguments built on the shaky foundations of the authority of Church fathers (the oil-and-water conflict of authority and reason is more fully explored in my blog entry of September 18, 2013, “The Fragility of Religious Authority”). This is not to champion reason alone as a superior warrant. As Descartes would later reveal, pure reason forms an indubitable warrant for truth and goodness claims, but we can thank David Hume for demonstrating that such claims devoid of experience must be sterile.
But Aristotle had demonstrated the fallibility of reason applied to experience, and this hybrid justification would prove no match for divine authority’s claims to certainty. Thus divine command swept away the Stoic’s attachment to reasoned experience as a foundation for ethics and law. More’s the pity.

The crisis of justification that changed everything was the Protestant Reformation. I’ve written previously of it as a kind of nuclear winter of warrant, a catastrophe whose pathology tainted every truth, goodness, and beauty claim previously warranted by the authority of church and God (for more on this, please see my blog entry for January 26, 2014, “Premodern Authority”). It should be unsurprising that the first revival of rights theory was tinged with divine command. Members of the Roundhead army led by John Lilburne made claims for natural rights rooted in Biblical authority as early as 1641. But as was so typical during even the last horrific chapter of Reformation history, the tangle of authority and reason as warrant could not as yet be made straight, and Lilburne’s appeal to the Levellers subsided into the general chaos of the English interregnum.

It was resurrected, after a fashion, by the social contract theorists who finally succeeded in separating political justification from divine command beginning in the 1660s. But in seeking a myth to derive political power from the people, thinkers like Hobbes, Locke, and Rousseau removed their faith from the Bible and placed it in a spurious state of nature in which “rights” were taken to mean “license.” For these contractarians, our natural freedom gave each of us unlimited rights over ourselves and our property, and everyone else’s too. This trampling of definition still confuses contemporary understandings of rights and results in the mistaken notion that rights can be conferred by constitutions and compacts and that they can be both specified and limited by such documents and traditions. It also introduced the lamentable and erroneous conclusion that the establishment of government of itself requires the surrender of natural rights in favor of protection, the degree of surrender to be specified by contract. This notion compares most unfavorably to the natural rights justification that sees political organization as a natural means of satisfying needs and so as a guarantor of rather than a threat to our rights. Jefferson only perpetuated the contractarian error. He thought rights derived from our earliest political associations and that their legitimacy stemmed from their traditional and consensual nature. This view was dangerous, for it gave a culture an unlimited capacity to grant or revoke rights dependent on its constituent values, a view entirely consistent with the social contract theory Jefferson embraced but far removed from the classical Stoic position of inviolable natural rights rooted in universal and individual reason. In his view of rights, Jefferson seems more than usually confused, for he bemoaned the overthrow of Anglo-Saxon tribal customs by the Normans as a violation of natural rights.
Yet the customs that led to the Magna Carta were as much Norman as Saxon, though neither tradition recognized natural rights as universal and, being blended, could hardly have rooted them in tribal custom. It need not be added that the author of the Declaration of Independence and the “inventor” of the only three “rights” most Americans can name did not recognize them as universal either, despite his claim that they applied to “all men,” or at least three-fifths of all men (for more on the relationship between social contract theory and rights, please see “Our Freedom Fetish,” November 20, 2013).

Add these historical confusions to the definitional issues discussed earlier, and you can easily see why delineating rights may seem a pretty squishy enterprise. Eleanor Roosevelt gave it a go after World War II, and indeed, the United Nations’ Universal Declaration of Human Rights of 1948 stands as a monument to the articulation and enumeration of human rights. It is far from perfect, as must be all things assembled by a committee, but it succeeds at deriving rights from needs and in seeing the hallmark of needs as universality, perennially present regardless of culture or epoch. Granted, it too often mistakes instrumental goods for the moral goods they are used to satisfy: e.g., the demand for vacation time is an instrumental good reflecting the moral need for adequate time for recreation, better served by limiting working hours for the world’s workers. But I daresay that if persons think about what they are saying when they defend “human rights,” they are probably thinking of this document.

Or maybe they aren’t. Postmodern political theory invokes a conventional source of rights heavily indebted to the contractarian framework yet committed to a positivist conception of law as morally neutral and justified only by power. It is not pleasant to watch persons holding this position attempt to square cultural practice with human rights: should they defend the deeply traditional “mini-narratives” of honor killings and ritual female genital cutting on anti-imperialist grounds or the universal rights of feminism and freedom of religion that challenge cultural tradition? While their distrust of established power relationships might move them to wholesale approval of civil disobedience, postmodernists must also admit that not all flouting of law is equally admirable.

Finally, one more distinction among terms needs to be drawn. Civil rights are those enshrined in law and reflect a positivist specificity peculiar to each jurisdiction. Their connection to human rights is clearly a derivative one. The best definition of justice is “to each her due.” Since the sole role of government is to deliver justice to its citizens, the determination of what is due depends entirely on the human rights of the citizenry; these rights, translated into positive law dictating retributive and distributive justice, guarantee civil rights. Natural rights theory holds as foundational the moral principle that such rights cannot be abrogated and that depriving any citizen of them violates justice and thus the purpose of government. This radical respect both defines the grounds for just civil disobedience and legitimizes individuals and minorities in actions to defend or procure their rights.

One Postmodern Sentence

The connections intellectuals make among the mooring notions of our age always surprise me; they demonstrate the malleability of the virtual circle we build from experience and fallible reasoning. It sometimes seems as though almost any claim may be advanced as reasonable. And that is an indictment of logic. Or it might be until one uses that same logic a bit more rigorously. It bears repeating that a logical expression is logical all the way down, so to speak. It does not self-destruct under the close and patient scrutiny that calls specious assertions into doubt. This healthy distrust of what Plato called the love of our own opinions was codified by Karl Popper as the principle of falsifiability. What Popper advised in respect to natural science should be broadened into a more generalized strategy for testing our judgments. We are all eager to fall in love with our own point of view, and for that reason we should examine it skeptically.

This conclusion was reinforced by a perusal of the introduction of David Bentley Hart’s “The Beauty of the Infinite: The Aesthetics of Christian Truth.” This is a serious scholarly work. Hart earned his master’s from Cambridge and his PhD from the University of Virginia and has won plaudits for his published works defending Christianity from atheism. I don’t intend to confront the book’s thesis. I can’t, having been defeated by the introduction alone. What stymied my effort was the writer’s presentation, which seemed composed of a fusillade of unsupported assertions so dense, vague, and twisted that no reasoned response could follow my effort to mine the text for comprehensibility. I find Hart’s sentences to be something other than discursive prose: their density, disregard for clarity, and floating referents place them closer to journaling than argument. It seems fair to mention at this point that Hart should expect nothing more, for the precondition of his theoretical position is a rejection of the very notion of disinterested rational appraisal. I cannot assert even this very strongly, for Hart assumes the assent he presumably pursues without resorting to anything as prosaic as warrant. It is unsurprising that Hart, a postmodernist to be sure, would advance claims supporting that peculiar position, but the question inevitably arises of why he bothers to take so much paper and ink and time to make them. Finding his core argument is rather like attempting to find the best restaurant meal in East Timor: one has to negotiate unfamiliar terms, confusing directions, and unsubstantiated opinion in equal measure. It may be that his introduction covers ground already so familiar to his presumed reader that Hart feels no obligation to justify or even do more than sketch out his frame of reference. But surely this does a disservice to both the skeptic and the generalist.
Postmodernism is hardly a fait accompli, and Hart must appreciate that faith is even less of one, and defensible aesthetic theories least of all. I finished his introduction with only a sense of Hart’s self-satisfaction to reward me for my effort. And that is a shame, for one expects an analysis to produce either assent or rejection, but the vapidity of his language allows the reader neither option. To disengage from Hart’s fundamentally postmodern assertions is to reject notions that float through this culture, but to assent is equally problematic, for one hardly can know what her assent commits her to. Hart, a postmodern theologian, advances positions so far beyond his justifications that one might be signing away her core convictions by accepting on faith any one of his conjectures.

So what can be done with a work whose terms are too squishy to define and whose “arguments” cannot be bothered with correspondence justifications that might be examined by his reader? How can one approach an analysis whose central axiom is that logical justifications are merely rhetorical devices used to exert power over the reader? Why his argument, such as it is, should escape the same charge is just one more mystery. What can we make of this gelatinous glob of aesthetics and religion, illuminated only by the black light of postmodern theory? How can we know before launching into chapter one that engagement with his ideas might prove worth the enormous effort his style demands? Or how can we throw down the book in disgust simply because it makes demands of us that seem absurd? Perhaps the dross is worth the gold.

To illustrate as well as resolve this problem, one that characterizes postmodern thought in general, I will take a single sentence of Hart’s introduction as an object of serious analysis with the goal both of framing the difficulty of more extensive assessment and of exposing the suppositions that underlie what can only be called the effluvium of words that comprises his effort. I ask you to accept that this one sentence is not exceptional but rather entirely indicative of Hart’s presentation. I chose it because it is somewhat independent of what precedes and follows it. Please trust that nothing in its vicinity adds to its intelligibility. I also argue that this sentence is what we might call a postmodern one in that, in structure and argument, it is entirely representative of a mode of thought that disdains but fails to replace reason and discursive language in what we can only assume is a rhetorical exercise. At any rate, that may be its intent. I really am not sure.

So here it is.

The great project of “modernity” (the search for comprehensive metanarratives and epistemological foundations by way of a neutral and unaided rationality, available to all reflective intellects, and independent of cultural and linguistic conditions) has surely foundered; “reason” cannot inhabit language (and it certainly has no other home) without falling subject to an indefinite deferral of meaning, a dissemination of signification, a play of nonsense and absence, such that it subsists always in its own aporias, suppression of sense, contradictions, and slippages; and “reason” cannot embody itself in history without at once becoming irrecoverably lost in the labyrinth of time’s interminable contingencies (certainly philosophy has no means of defeating such doubts).

 The greatest compliment one can give another is to listen attentively to her declarations, straining to comprehend them, and once understood, to weigh them, apply them to experience, and then engage in a deliberative conversation with their author so as to produce a consensual claim to truth, goodness, or beauty justified by some correspondence to a shared reality. But this sentence denies that possibility at the outset, for it rejects access to rationality by “reflective intellects,” and it questions reason’s role in making sense of the declarative contents of the sentence because reason must fall prey to the inadequacies of present culture, etymology, and manipulative intent. So even without careful explication, the sentence seems to deny itself what it intends to proclaim: a declarative truth. If modernity cannot succeed in such an effort, why should postmodernity (or Christianity or any aesthetic proclamation) be permitted to succeed? Metaphorically, this sentence seems to exemplify the old conundrum of the office worker advising the new hire, “Believe me when I tell you no one in this office can be trusted.”

But let us grant it the dignity of a serious effort at comprehension anyway. The sentence seems to me to make no fewer than fifteen truth claims.

 

  1. That “modernity” has committed to a “great project,” that project being a search for comprehensive metanarratives and epistemological foundations.
  2. That modernity argues for neutral and unaided rationality as the means to succeed in this great project.
  3. That modernity argues that such rationality is available to all reflective intellects.
  4. That such rationality claims to be independent of all cultural and linguistic conditions.
  5. That the effort outlined in contentions 1-4 has “surely foundered.”
  6. That reason cannot “inhabit” language.
  7. That reason “certainly has no other home” than language.
  8. That any attempt by reason to “inhabit” language subjects it to an indefinite deferral of meaning.
  9. That any attempt by reason to “inhabit” language subjects it to a dissemination of signification.
  10. That such a dissemination of signification is, in apposition, “a play of nonsense and absence.”
  11. That such a play causes reason’s attempt to inhabit language always to subsist in its own aporias.
  12. That such a play causes reason’s attempt to inhabit language always to subsist in suppression of sense.
  13. That such a play causes reason’s attempt to inhabit language always to subsist in contradictions and slippages.
  14. That reason’s attempt to “embody itself in history” must result in its “at once becoming irrecoverably lost in the labyrinth of time’s interminable contingencies.”
  15. That philosophy has “no means of defeating such doubts.”

 

So what are we to do with these fifteen truth claims? Old-fashioned modernity would ask us to do what Hart advises us cannot be done: examine each truth claim in light of its warrant to determine our judgment of its truth. But what would postmodernity advise as an alternative? My understanding has always been that it would ask us to enfold these truth claims as a jellyfish would a morsel of food: envelop it into our virtual circle, examine it for the bitter taste of personal contradiction, and either make it flesh of our flesh or spit it out like gristle. But Hart doesn’t seem so forthright in his expectation: he presents these claims as self-evident and universal historical truth, though what makes them so escapes me as does the entire postmodern schema of justification. So how do we approach these fifteen declarations? His flabby syntax combines with the blizzard of truth claims to render parts of the sentence almost a poetic utterance, yet one chained to history and philosophy. Do we explicate it as historical truth or appreciate it as an aesthetic unity? If the former, do we take it to be a rhetorical rather than an analytic use of language, and if the latter, what aesthetic theory do we use to apprehend it? Given its discursive inadequacy, how much creative construction must the reader bring to the sentence to vivify it with meaning?

I can’t know how to answer these questions without some sense of what the sentence actually does mean. What follows is my best shot.

 

  1. It is a postmodern charge that modernity’s project is the search for metanarratives and epistemological foundations. This certainly is not how modernism would characterize its own foundations. The notion that the great paradigms of modernity are merely narratives is a pernicious one, for explanatory models are far more than merely stories we tell ourselves (for more, please see the blog entry of July 14, 2014, “Tall Tales”). Modernism accepts as axiomatic that the epistemological foundations of analysis are universal and rooted in human nature (for more, please see blog entry of August 4, 2014, “The Tyranny of Rationality”).
  2. The only remarkable point of the sentence’s second contention is that rationality might be “aided.” Hart offers no referent in the vicinity of this sentence for that possibility, though one might expect him to champion divine guidance as the aid modernity rejects, and modernity certainly does reject it. It is equally a truism that modernity unapologetically offers rationality, along with rationally examined experience, as the means of finding truth in reality (for more please see blog entry of February 2, 2014, “Modernism’s Midwives”).
  3. Modernity does offer the universality of reason as a key to unlocking the secrets of reality. It is odd to frame this as an accusation, for, if true, it equalizes access to truth and goodness, an intention postmodernism often denies to modernity. In this as in other foundational convictions, modernism has often lost its way in practice, but the appeal to universal rights demonstrates the very self-correcting quality of rationality that Hart denies in this sentence.
  4. The alternative is pretty awful. If rationality were dependent on cultural or other conditions, postmodernism would have no reason to charge modernism with hypocrisy for asserting the superiority of some cultures, races, or genders. That modernists were hypocritical in their judgments of gender, culture, or race might assault either their suppositions that reason is the universal qualifier for judgment or their conclusion that some were more capable of exercising it than others. It cannot, as this sentence implies, call both into question. If Hart subscribes to the postmodern position that rationality is a product of experience and is therefore idiosyncratic, he ends in a relativism that justifies oppression and exploitation on grounds of cultural determinism and a subjectivism that robs individuals of the means to resolve their differences.
  5. This unexamined and unsupported contention is patently false. It seems to question the triumphs of science, the avatar of modernism. Let those who proclaim modernism’s failure try to forgo the benefits of its most muscular project: medical, theoretical, mathematical, technological. On what grounds can postmodernism indict natural science as the royal road to those truths about reality that its methodology supports? I am convinced that three possible sources of this disdain exist for the postmodernist. First, a narrow focus on the failures of the human sciences might deceive a poorly educated observer into considering the human sciences representative of science in general. Second, an historicist excavation of science’s overreach (of which the human sciences are the detritus) over the last two centuries might mistake the hubris of scientism for the actual activities of today’s natural scientist, which is rather like lumping Comte with Curie. The only areas in which such scientism survives today are the human sciences and popular cosmology. Third, postmodernists might champion some pseudo-science alternative as a means of justifying truth or goodness claims that true science as now defined simply cannot warrant. But in this position as well postmodernism seeks to have it both ways: share in the reflected glory of successful scientific endeavor through the use of arcane terms and fanciful theories without subjecting them to the rigors of the scientific method. True science can never resolve goodness issues; its method forbids judgments based on non-material and non-quantitative factors. Postmodernism, on the other hand, remains true to its roots in philosophy, human science, and pseudo-science and so embraces what seems almost a mockery of scientific trappings in its attachment to philology, psychology, sociology, constructivist education, and the embarrassments of Freudianism and Marxism.
How charming that Hart’s weighty, macroscopic condemnation of two centuries of Western thought should merit merely a throwaway line without a shred of warrant! Such an act of intellectual vandalism seems possible only for one either preaching to the choir or speaking from the throne of Peter. But a postmodernist religionist cannot embrace both the postmodern disdain for authority and the traditional religious elevation of it.
  6. I don’t know what this truth claim means. The metaphorical and poetical preferences of postmodernism are evidence either of intentional misdirection or a self-delusion about the nature of truth. I challenge any reader of Hart’s prose to explain discursively what it means for reason to inhabit language. I don’t know what the words mean. If he means that language can only roughly approximate reasoning, he may have a point, which is why the natural sciences use the language of mathematics to frame and instantiate their truth claims. If he means that non-mathematical language is insufficient to delineate the truths of reality, he certainly has a point, but what of it? The only language capable of revealing that inadequacy is the language that produces it, so unless he wishes to either find a better formulation or stop making truth claims about reality that are not mathematical, he should simply remind us that all such claims, including his own, are to be considered as only provisionally true, and not merely because of the insufficiencies of language but more so because of the uncertainty of their warrant. But then Hart seems not to worry about warrant.
  7. I don’t know what this claim means. It may restate for some reason the one above, or perhaps Hart is gauzily referencing the difficulty of communicating our truth claims to others as opposed to that of framing them for ourselves. Such problems were far more thoughtfully teased out by that early prophet of modernity, Francis Bacon, who appropriately saw them as serious but not fatal impediments to rational examination of experience.
  8. I also don’t know the meaning of this claim. In what sense is meaning “deferred”? What necessitates such deferral? What would resolve it? If language is inadequate to the task of discursively clarifying or communicating our understanding of reality, how can such an inadequacy ever be resolved? What are we to make of the language of this claim? If Hart is referencing the postmodern insistence that language creates rather than contains meaning, he should say so, though such a charge would destroy rather than defer the denotative function of language, something that may mark Hart’s prose but doesn’t seem to have fatally wounded discursive communication more generally. But I am only guessing as to what this claim means.
  9. I am also guessing on this one. If “dissemination” refers to a democratization of meaning based on culture or other group experience, then I suppose the problem references issues of validity suggested by declaration #4. How postmodernism could resolve such claims befuddles me. Modernism would subject them to logical analysis. Linguistic claims that purport to deconstruct the hidden power relationships between signifier and signified have opened language to a psychological critique that psychology as a science is not equal to, though it has given postmodernists that good old thrill of revelation of hidden agendas so characteristic of human science paradigms. If language is not a clear container into which we pour meaning, it is also not infinitely malleable or used inevitably as a weapon. While it is true that the claim to truth is also a claim to power, the power that ensues derives from the truth rather than the claim itself, for possession of truth confers the potential to choose goods accruing from the accurate comprehension of reality. Nothing nefarious or curtained in that. In the physics sense, power is merely the capability to accomplish work, the first stage of which is discerning what work is to be accomplished. It is no accident that every claim to truth is also a claim to power. It is no evil either.
  10. I don’t understand this contention. Do you? Perhaps a use of language even more figurative than the rest of the sentence?
11. If verbalized truth claims are always attended by doubt, then I would agree with this claim, for the discursive power of ordinary language will never be up to the task of delineating any element of reality beyond a reasonable doubt. Whether such a shadow of a doubt should disqualify a truth claim from rational consideration is a valid epistemological question, as is the degree of certainty required to validate a truth claim. That such uncertainty accompanies a truth claim is hardly an indictment of a modernism that continually subjected its own claims to precisely this question, the resolution lying in the use of more sophisticated (mathematical) language for certain kinds of issues and increased levels of doubt attached to those less empirical. At any rate, such a charge is hardly new, nor is it one modernism has not confronted. One might think a proper response to an unavoidable lack of certainty would be an effort to minimize doubt by increasing rational confidence. But perhaps figurative obfuscation is another way to go.
12. I have no idea what this claims. How is “sense” meant? If the thought refers to “sensation” as an alternative to reasoning, then this argument harks back to the entire idealist objection to rationality. This would constitute another macroscopic contention to which millions of words have already been devoted, one disputed by naturalists and their natural science heirs, an argument hardly resolved by another throwaway line that simply assumes without warrant what it also thinks everybody knows. If by “sense” Hart means our faculties for making sensations intelligible, then to argue that the only means we possess to frame our picture of reality does fatal violence to that reality suggests that we can say nothing valid nor reason gainfully, in which case we can stop torturing ourselves with his writing. But I may be maligning Hart beyond his deserts by reading “sense” in either meaning, so I will only malign him for one more muddy expression.
13. This claim is false, as neither empirical reasoning nor positivism nor even ordinary reasoning about experience is self-refuting. Let postmodernism refute empirical science and the interlocking subject disciplines it has developed, its use of mathematical language to extend reasoning into theory, and its production of working technology. In ordinary language, one can posit logical questions that conduce to self-contradiction, and critics make much of the difficulties posed by Heisenberg to science and Gödel to formal reasoning. If this kind of thing is what Hart refers to, let us accept his point for the sake of argument, though doing so means ignoring the success of science. The challenge then lies in positing an alternative to a rational sense of operations that proves more effective at resolving issues of truth, goodness, and beauty. Even more basically, it lies in satisfying Hume’s claim that even reason’s success in divorcing cause and effect ontologically would do nothing to dissuade us from its epistemological necessity. As I wrote recently (please see my blog entry “Theology’s Cloud of Unknowing” of October 27, 2014), my accepting determinism will not deter me from exercising what I take to be my free will. And as much as postmodernists condemn reason, they cannot forego it, though Hart demonstrates how their disdain damages their use of it.
  14. If you understand this truth claim, please let me know. I can’t even hazard a guess as to what he means by this one. There seems a bit of a lilt in it, though. Poetry?
15. If his mode of thought in this sentence can be charitably described as philosophical, this final assertion seems Hart’s strongest by far, for his presentation is bad philosophy and even worse discursive language. Of course, his own position is not the one he is assailing, and his triumphant flourish at the end of the sentence seems to be the claim that modernism has no means of erasing the doubts raised by postmodernists, doubts poorly alluded to in the sentence under examination. In this he is correct, for the failings of modernism are everywhere to be seen. The cadre of French theorists who nailed the jello of postmodernism to history’s wall in the 1970s were conducting what they saw as a post-mortem. Though poorly understood at the time, modernism had died on the battlefields of the First World War, or at least a variant of it had. The twentieth century was a miserable and sustained effort to resurrect from the ruins adequate warrants for claims to truth, goodness, and beauty in the face of what was widely perceived as modernism’s failures (for a fuller explanation of this event, please see my book What Makes Anything True, Good, Beautiful: Challenges to Justification). But modernism did not die, though it changed, in part in response to academics’ century-long flirtation with postmodernism. Its resilience lay in its relentless self-critique, one that began with the first pitiful attempts by rationalist philosophers to conduct another post-mortem, this one on the cataclysmic collapse of authority in the Reformation. Modernism certainly was never the intellectual bully its critics on the right (religionists nostalgic for a vanished authority) or left (postmodernists sensitive to its hypocrisies and inconsistencies) saw it as being.
Rather, from its birth in the seventeenth century it was a desperate attempt to warrant truth, goodness, and beauty claims by some consensual means that might bind persons to each other with even a modicum of the force of lost authority, an attempt subject to endless self-criticism and relentless reductionism. Hart for once uses just the right language to describe philosophy’s failure to replace religious authority, for rationalism truly found no means to defeat the objections of its opponents, only to confront and minimize them. Reason and closely examined experience, the warrants modernism advanced, could never offer the certainty of Gnostic claims, nor in the messy amalgamation of Romantic and modernist warrants embraced by Victorians could they counter charges of bad faith and hypocrisy. The enduring appeal of scientism is a current example of modernism’s failure to project a defensible framework of moral justification, and religionists are quick to identify it as a threat, probably because it is rooted in a nostalgia for authority’s lost certainty, such nostalgia characterizing their position in general. Nor has modernism produced clarity in regard to evaluations of quality (for more on its problems with goodness issues, please see my blog entry of December 12, 2013, “Is Goodness Real?”). And it has not risen to the challenge of constructing an unambiguous aesthetic, though Kant laid a solid foundation for that effort. Modernism’s struggles are likely to continue. It sloughs off accretions and inconsistencies clumsily and often embarrassingly, as it did the hypocrisies of Victorianism that fueled the postmodern critique. Worse, its failures and its efforts to correct them dominate the zeitgeist. Its demand for rational consistency and honest examination of experience open it to relentless criticism from reactionary advocates of authority and from post-structuralist champions of the virtual circle proclaiming their own warrants for truth and goodness claims.
But neither of these approaches can succeed, for the history of the Reformation laid bare authority’s bankruptcy in the face of challenges from within its mode of warrant. Authority simply cannot resolve disagreement from other authority. Nor can the virtual circle model of postmodernism resolve conflicts among cultures or between individuals except by recourse to coercion. No critic of modernity has been harder on reason as warrant than modernists themselves, and this relentless application of the principle of falsifiability remains our best hope of finding truth in the face of the kind of obscurantism we see in Hart’s cited sentence.