We can phrase the question in any number of ways. Of what use is morality? If it is not useful, why bother with it? If it isn’t useful, is that enough to call it “good”? But doesn’t everyone get to decide what is useful for herself? Could it possibly be useful or good in some sense of these words that differs from our ordinary understanding? Morality (huh!): what is it good for?
The simplest terms give us the most trouble, as we see here. Morality certainly is a category of goodness (see “What Do We Mean by ‘Good’?”) but is distinct from other meanings, specifically utility and quality (see “What Do We Mean by ‘Morality’?”). Whatever morality is, we know it is excluded when we describe plague as a good means of population control or laugh over a good cartoon. Why does morality not suit these examples and yet apply to adopting orphans? All moral choices may be good, but not all good choices are moral. That much is clear.
Things get murky thereafter. To call an act moral seems to appeal to some inherent quality independent of circumstance. Though we no doubt desire different things, somehow the moral thing seems inherently desirable on its own account. It seems to impose a unique duty on the choice not present in the other two senses of goodness, something not only desired but desirable, not only useful to our own ends but admirable, good of itself. The connotation forces us to ask just what that added ingredient can be and why it pertains to some putative goods and not to others. What factor sets moral goodness above and beyond utility?
I wish to challenge the existence of such a factor, at least as it characterizes what might be called public morality: choices involving strangers, either civil or social. This rather clumsy term includes laws, each of which specifies a public good, and mores, public values that do not rise to the importance of laws. Laws and mores must be distinguished from the morality that orders the private beliefs that govern our relationships with family and friends. Love governs these intimate relationships, or at least it should. Reciprocity may or may not apply. That variability is a problem. We know all families do not operate by the same rules whereas justice must arbitrate our interactions equitably in the public sphere. Recent history shows that confusing private and public morality is fraught with peril (see “Belief in the Public Square”), but our discussion is so primitive on this issue that we have a hard time imagining any alternative to expanding our private behavior with those we love to strangers whom we do not. Yet it is clear that we require some adaptation to carve out common space for acceptable public behavior in law and custom, and not only for actions but also for the warrants persons use to justify them (see “What Counts as Justification?”). But that is quite difficult because our present contretemps is about far more than which acts we think moral. It is rooted in what we think qualifies as a reason to think so, and that more foundational quarrel spurs us to uncover the assumptions that rumble deep beneath the roiling surface of contemporary life (see “The Axioms of Moral Systems”). Persons begin to build their understanding of morality from these unquestioned assumptions in early life and find what they have built in childhood too rickety and jury-rigged to stand adult scrutiny and adult needs. Further, they seem unaware from the outset that their moral worlds isolate them from others as surely as space isolates planets.
By what rules can their moral interactions be played out in the town square?
Though our disagreements on particulars are manifold, when we trace them to their shaping axioms, we find only three fundamental moral assumptions at play.
The oldest claims the most certainty. I call it a premodern moral outlook because its formative moral source is religious authority. I have explored both why this outlook has fallen into disfavor (see “Premodern Authority”) and why it must fail to unify Western societies and increasingly all others as well. Its greatest strength must also prove its fatal flaw: in offering moral certainty, it must denigrate not only competing authorities but any other mode of warrant because it can find no means to refute them within its mode of warrant (see “The Fragility of Religious Authority”).
A second source of morality taps a venerable tradition, seeking moral truth in wisdom derived from careful reasoning about experience that seeks to universalize the lessons so learned. This mode of warrant taps the power of reasoned experience to root morality in individual moral autonomy (see “The Tyranny of Rationality”), rejecting the irreconcilable claims of competing authorities in favor of often ingenious resolutions to moral conflict (see “Why Invent a Social Contract?”) that appeal to the thoughtful citizen. This modernist outlook regards reasoning as our only common means of determining moral goodness in and between cultures (see “Modernism’s Midwives”).
This has not gone so well, as the last two centuries have demonstrated (see “Modernism and Its Discontents”), and so modernism has been explicitly rejected by postmodernists, who deny the universality of reason in favor of the variability of experience, even to the point of claiming mind to be molded by identity, and therefore private (see “Postmodernism Is Its Discontents”). This movement is a recent one, having only matured since the 1970s and having been disseminated at first in academia and only gradually into Western popular culture, largely through the need of the human sciences to consider will to be as determined as physical reality. Postmodernists also vehemently reject the authoritarian claims of religious absolutists in favor of a pragmatic response to the vagaries of experience or a relativism that sees morality as a social creation or a veiled means of exploitation.
These three outlooks are explicitly exclusive of each other, reflecting the historical conflicts that made them necessary. Modernism and postmodernism are adaptations to massive public failures of warrant. They were cobbled together in desperate efforts to find cultural consensus during two nightmarish epochs of disorientation: the Reformation and the early twentieth century (see “My Argument in Brief”). The formation of the later mode was inevitably a rebuke to the former’s catastrophic failure. That they now coexist somehow in contemporary culture indicts it and its members who shift thoughtlessly from one mode of warrant to another, employing whatever axioms suit their private intent as they pursue interests in a public sphere, justifying this public moral claim by reference to authority and that one by appeal to “common sense,” and yet another by mention of private experience or expediency. Such is the moral life of the consumer.
Many persons attempt a more consistent outlook, difficult as that may be in a culture of such contraries. But in their pursuit of moral goods, they also face implicit contradictions that overthrow the attempt and contribute to even more obdurate discord.
Let us define pre-modernists as those who trust the institutional authority of either doctrine or doctrinal interpretations of sacred texts. For most of Western history, their axiom powered public morality, for it required public subscription and ensured a unanimity of social values that seems both desirable and distant in our fractious political life. Pre-modernists are definitionally persons who trust in the combined truth and goodness claims of an inerrant divine command, regardless of how they view its source or transmission. Their trust requires a forfeiture of their own powers to arbitrate conflict, for they have surrendered agency to the authority that decrees an absolute morality on their behalf. Unlike religious believers who retain agency and withhold trust so as to pick and choose from among a banquet of preferences, whether religious or secular or some amalgamation of both, congregants ground their moral universe in categorical goods that they and all moral agents should choose regardless of their own inclinations. These are set in opposition to the hypothetical goods that govern choices involving utility and quality in the rest of their lives. They see a clear distinction between the moral absolutism that defines these goods for them and the rational ability, called moral agency, which allows them to choose useful and qualitative goods for themselves. The difference may be expressed this way: for them, moral choosing involves the expression “X is good.” Hypothetical goods invoke a different and rational framing of the issue: “If you want Y, then Z is good.” Hypothetical goods in their analysis must always involve issues of utility or quality and cannot be moral. “If you want to drive a nail, a hammer is a good tool.” “If you want your taxes done accurately, Stanley is a good accountant to choose.” The dichotomy between morality and utility to their minds is definitional. 
Hypothetical choices are the function of rational agency acting for prudential reasons to achieve instrumental goals. Authoritarians see moral choices as commanded, absolute, non-negotiable, and disinterested. These qualities give such choices both their universality and their certainty since the goods they claim are a divine command free of self-interested calculations. But their strength must also prove their undoing, at least beyond a univocal culture that embraces them. Their warrant requires trust and cannot survive doubt for the simple reason that authority can provide no resolution to challenge from other authority. “Thou shalt have no other gods before me” is a necessary commandment for a warrant rooted in authority. So it should be clear from the nature of their common mode of warrant that other religions must be rejected. What may be less clear is that either modern or postmodern warrants must be equally or even more fiercely rejected, not so much because they are immoral but because they are amoral in believers’ opinion. Evangelical Christians speak out against Sharia law. They recognize a competing moral authority. They reject secular humanism, the orientation of modernism, and identity theory, the face of postmodernism, because these replace God’s judgment with human reason and categorical morality with utility. To their minds, such calculations cannot be moral by their very nature. Not only are they unavoidably situational and hypothetical, depending on the desires of the moral agents who choose them, but they are also necessarily contingent and laden with doubt because they rely on fallible thinking of what is true and even more questionable preferences of what is good. They ask how fallen human reason can guide a private morality, much less bind others to a public one.
They cannot understand how doubters can reject the linked categoricality of truth and goodness claims that they trust so deeply, whose truth is guaranteed by divine goodness and whose goodness is assured by their inerrant truth. Surrendering judgment has given them the gift of trust in the authorities who guarantee certainty, and they simply cannot think like those who have chosen to retain moral agency for their own hypothetical interests. Trust will simply not allow it, and doubting trust must lead to apostasy. So congregants find themselves forced to reject not only competing faiths as unauthorized challenges to their moral frameworks but also systems built along entirely different lines. Their moral axiom therefore can tolerate neither categorical nor hypothetical challenge. So long as their moral assumptions go unquestioned, their institutional authorities must dictate their public morality. Their axiom must force them to reject any attempt at compromise when morality enters public life.
But this purity of moral outlook is contradicted in Western religions by what can only be seen as an intentional conflation in the Bible between two kinds of goodness congregants wish to keep separate: utility and morality. In both testaments, the morally absolute is alloyed with the prudentially useful, the categorical with the hypothetical, external moral agency with appeals to internal rational autonomy, rewards for trust with requests for rational approval. Adam and Eve are explicitly commanded to obey. When they rationally question that command (or are led to question it by the tempter who had shown them the difference between good and evil), they are punished. Abraham is thought a paragon of obedience for refusing to question a horrific command. Job is explicitly told that humanity must not question the deity who creates both good and evil. Jesus consistently emphasizes that he conveys his Father’s will and not his own. Biblical text reminds us of the sanctity of the divine command as the fount of moral authority and condemns any attempt to reduce morality to hypothetical or prudential status. Believers are clearly cautioned against pitting their own rational sense against the will of God, against reducing divine intent to a puny human scale. But the subtext inevitably challenges that clear distinction, introducing not only hypothetical but even prudential incentives to moral choice. Each instance of obedience or disgrace is followed by divine reward or punishment entirely consistent with our ordinary rational and prudential notions. Adam and Eve are punished with mortality’s pains. Isaac is spared and Abraham becomes the father of a nation. Job’s blessings are doubled. Pharaoh is destroyed. Jesus is resurrected. The good prosper. The evil suffer. Reality is enlarged to the supernatural, but ordinary justice still prevails. Faith does not require a rejection of our prudential and hypothetical reasoning, only an enlargement of scale.
The congregant, we may assume, listens and structures her choice to commit around a legion of internalized rational arguments involving reward and punishment, culminating in eternal joy or damnation. She wishes to glorify God and to be rewarded for it. The bright line between the categorical and the hypothetical is blurred, as is the source of moral agency. Is her moral life decided strictly by categorical command, which places it in a separate sphere from every other calculation of goodness she derives in every moment of life but which gives it the sheen of certainty? Does she surrender doubt as a little child moved by unwavering trust, ineffable awe, and grace-filled love? Or is it a calculation of ultimate utility, enlarged by a trust in the unseen, yet even more rationally calculated by the eternal stakes Pascal reminds us of? If true, what could possibly be more reasonable than obeying the dictates of our own moral reasoning? But even submitting authority to such an examination must challenge the trust that is authority’s only support, must remove moral agency from infallible divine command to all the vagaries of reason and experience by submitting it to the mind of the congregant to decide. It must subject the absolute, external, categorical command to internal, rational, hypothetical moral agency. Once done, even a resubmission of commitment must stem from that source. That calculus cannot be unthought. Can we sever the morality of such choices from their utility? I cannot see how. Once her intentions are examined, the honest pre-modernist must admit that her private morality is motivated at least in part by a rational and hypothetical calculus. If these commandments are valid, I should obey them. She may find her morality thus diminished, but she can no longer regard the moral as categorically different from the useful.
This smudging of the bright moral line introduces another problem for Christians in particular, for not only does their faith ostensibly reject hypothetical and prudential morality in favor of divine command, it also condemns selfishness in favor of emulation of Christ’s self-sacrifice. To admit to prudential motives is bad enough. To also admit that those motives might be self-interested is something Christians must regard as fundamentally immoral, and so the history of religion is tinged with efforts to reconcile or conflate categorical and hypothetical goodness, morality and prudence, selfless faith and desire for salvation. This split motivation would not be a problem in other contexts. We may admire the badge and still fear the gun. But it dilutes both the categorical and the disinterested nature of pure moral absolutism. Adulterating divine command with self-interested and hypothetical calculations in dogma and holy writ challenges the congregant’s axiom that moral thinking should be divorced from prudential considerations. Depending on her thoughtfulness, this might prove a solvent to her trust. But it is a boon for our pursuit of a public morality because it allows prudential considerations to share moral space with categorical ones when churched persons interact with those who reject their doctrines.
I do not mean to single out religionists as being the only obstacles to the effort to build a public moral system. Postmodernists and modernists bring their own implicit tensions to the mix.
Postmodernism’s axioms assume one’s nature as well as one’s mode of reasoning must be molded by experience and as a result must be a social construct or at least a subjective one. That variability thus requires tolerance as the only universal social good, all others being culturally determined. In practice, postmodernism’s ethic must always be a pragmatic one. The contrast between the categorical certainty of religious morality and postmodernists’ toleration for an experienced reality that builds identity is a constant source of tension for both sides. Congregants find secular culture flaccid and rudderless. Pragmatists defuse that charge, touting the nimble contortionism of a morality that suits the moment, and they remind doctrinal authority that they can accept what it offers so long as religion is considered one source of belief from among many, all arbitrated by the moral agent’s own interests. They embrace their autonomy and responsibility to the point of creating a personalized morality suited to their experience that may include religious doctrines so long as they are shorn of their institutional authority (see “What Is the Virtual Circle?”). They deeply resent the notion of a discovered morality as the unwarranted imposition of power and self-interest hidden in the folds of tradition. They seek to uncover buried contradictions in their opponents’ moral structure, the effort reflecting their only mode of justification: non-contradiction. But they too are trapped by their moral axioms, for they can provide no means of resolving discord in public life consistent with granting themselves the widest possible latitude. Their interactions with strangers take the form of surrenders or triumphs, for both parties are obligated by nothing beyond private hypothetical and prudential intentions that inevitably must clash without a means of reconciliation. For the religionist, no morality is prudential. For the pragmatist, all must be. 
For the Christian, no moral choice may pursue self-interest. For the pragmatist, all may. The rules of the virtual circle require adherence to the principle of non-contradiction, but the logical stringency of applying that principle is as much a personal choice as the expedients it governs. The result for the postmodernist is a moral system with all the inertia of balls in a lottery machine as choices bounce through the tempests of desire.
Modernists bring their own distortions to the mix as well, theirs involving the nature of the law. Their long, long struggle to find replacement warrants for truth and goodness claims in the misery of the Reformation was analogous to sewing the parachute after tumbling from the plane. One unfortunate attempt to ground public morality was the invention of a justification for government intended to replace the fragments of divine right that the Reformation had splintered. Compliant with modernist respect for moral autonomy and reasoning about experience, the social contract theory was an ad hoc effort to make sense of the Reformation chaos while shifting the source of moral agency from God to the individual. Thinkers made that effort work, after a fashion, and it was enshrined in the modern nation state. But imagining persons’ natural state to be free and independent required citizens to view government itself as an unnatural and conventional imposition on freedom (see “Alienation of Civic Affection”). The social contract enshrined the tyranny of the majority and rooted social goods in whatever compact the majority saw fit to impose on the minority, defining rights as originating in the contract and so subject to revision or revocation by the state. This pragmatic and shifting accommodation did invoke individual moral autonomy, the hallmark of modernism, as the root of morality. What could not grow from that root was a mature public morality. To see civil society as a compromise imposed on us by our forefathers as we pursue our own private goods drains all civility from society. To see such pursuits as endlessly competitive and in perpetual disputation requires government to assume the role of arbiter and enforcer in a savage competition of private moralities. The founders chose the Reformation era, that brutal age of cultural collapse when authority revealed its fatal weakness, that scarring epoch of desperate crisis, as humankind’s natural state. 
It’s as though the children of history’s worst divorce convinced their descendants for all time that such nastiness is the essence of marriage.
Unbridled capitalism, colonialism, and imperialism grew from that state-of-nature morality, eventually collapsing under the gravity of the vast hypocrisies of elites that still proclaimed their authority to justify their power and so exercised it in plain violation of reason (see “The Victorian Rift”). These tensions multiplied over the course of the nineteenth century as the new popular culture pitched a dreamy Romanticism whose own promise of moral certitude rested in Gnostic revelation and a consequent rejection of the universality of reason that structured modern life. Romanticism invoked a private pantheist intuition as a guarantor of truth and goodness, communicated through powerful emotion rather than a cumbrous ratiocination. Modernism thus was critiqued from both sides. Its attachment to reason revealed unimaginable wonders like Einstein’s 1905 theory of relativity. It might produce an endless parade of technological magic, but these futuristic victories were won at the cost of an empirical tightening of its intellectual rules of engagement that cast doubt upon common sense, common in the sense of both universal and accessible. At the same time, its premises were simultaneously undermined by a popular appeal to a Romanticist private certainty about public goods that drenched popular culture. Certain intuition clashed viciously with anachronistic authority even more vehemently imposed by institutions threatened on all sides. These stressors finally erupted at the turn of the twentieth century into our second great intellectual suffocation and produced the postmodernist response that now vies for dominance.
I have painted a dismal picture of the explicit and implicit conflicts that plague our public life, but it is not a hopeless one. One thread runs through all these grinding disagreements that can knit a more consensual social fabric, but it requires us to limit one term and expand another.
The limitation must address our understanding of morality so as to draw a distinction between a public and private ethos. We have no term to distinguish the two, and postmodern culture disdains even the use of the term “moral” as a vestige of disgraced authority, preferring instead the term “appropriate” to pass judgment on public behavior. This is a ham-handed effort to ground morality in something, but I am unsure who decides such things or on what basis we all should agree on what is or is not “appropriate.” Rather than indulge such circumlocutions, we might simply refine our understanding of “public morality” to a more consensual meaning.
Given the incompatibility and internal contradictions of the three axioms of private morality discussed above, I suggest we think of public morality less as a set of prescriptions of value and more as a kind of language, ignoring for now any judgment or warrant for its ultimate worth. Language is an entirely pragmatic arrangement. Any language that communicates its speaker’s intent will do. Sign language functions for the deaf, mathematics for the scientist, and secret syllables for childhood siblings. If it serves the speaker’s intent, it is a good language. Here we see not only a hypothetical but also a prudential view that establishes a criterion of simple utility. Just as they may choose their own language in their own homes, persons in private life may engage any moral system that satisfies whatever criteria they establish including a categorical or utterly pragmatic one (leaving aside any judgment of the objective success of these implementations to meet their own standards of value). But when they carry that same moral language into the public sphere, they face a problem of translation. Regardless of either the worth or the prudential success of their system in private, they must find themselves adding some further pragmatic considerations to their public acts just as speakers of a household language must adapt to the lingua franca of the larger world.
Congregants will object that this subordinates morality to utility, but as their own traditions set that precedent, and since their categorical values are simply incoherent to others in the culture, they may find sufficient incentive to consider public morality in hypothetical terms as the means to merely public ends. To demand a categorical commitment from those who speak a different moral language denies them the responsibility to warrant morality for themselves, thus offending the freedom that makes adult commitment itself possible. A common moral language presupposes a common point of contact, and that point must be a hypothetical calculus focused on consensual goods.
Religious authority will also likely view such a reorientation as a surrender to secularism and relentless moral relativism. Their fears in this regard are well founded, for today’s pragmatism invokes a simplistic rule of utility. What limits social accommodation? When traditionalists feared “death panels” of bureaucrats charged with the power to euthanize the elderly, they were giving vent to this fear of the slippery slope of social utility, something religionists also frequently raise in ongoing arguments on abortion. If public morality is simply a sort of common language of behavior, what sets the limits of what is permissible if not the eternal commands of the deity? Modernists who look to constitutions and courts for the foundational underpinnings of public morality can offer no assurance either, for these suppose what the social contract supplies: simple majority rule. If public life involves the interaction of x number of strangers, each pursuing private goods that must lead to conflict, and if each person’s ultimate arbiter is her own moral autonomy, what alternative can we find to simply adding up the total and casting it as what Rousseau called “the general will”? Another era demonstrated the product of that notion of social utility: the Reign of Terror of 1793. Democracy has been cast as the solution to the problem of social conflict for so long that it is now unquestioned conventional wisdom. But even its advocates recognize its potential for tyrannizing the minority, suppressing the same individual moral autonomy that is in theory its source, and depriving the few of all so that the many might have more. If our common moral language is pragmatic and if it must respond blindly to majority will, what can safeguard a minority from Jim Crow or Nuremberg Laws or the genocide of First Nations (see “Preliminary Thoughts on Civil Disobedience: Natural Rights Issues”)?
The idea that religious absolutism must be the only remedy for pragmatic relativism is a canard, but it is a challenging one to refute. The solution lies not in rejecting pragmatic public moral utility but in taking it more seriously. It asks its adherents to choose based on the intended consequence they value without regard to any larger theory of value. The result is inevitably conflict in the public sphere. They operate out of what may be considered a private moral language of their own. But the notion of intended consequence in a public setting must be different from that practiced privately, for the simple reason that no private aims can ever be achieved should persons pursue in public the cross-purposes their private desires dictate. We see proof of that in the verge-of-violence moral permissiveness prevalent in Western societies today. Without regard to the content of morality, we can assert with confidence that the actual consequences fail to mirror whatever private intended consequences pragmatists bring to the public sphere. This is analogous to drivers employing their own rules of the road. They might be fine rules per se, but their employment would guarantee no one would arrive safely at her destination. Continuing the language analogy, carrying private pragmatism into the public sphere would see persons inventing their own denotations and syntax in conversations with others. Religious absolutists are correct to fear such idiosyncratic moral thinking. What seems missing from the language analogy is some anchoring mechanism. Language is infinitely malleable, as we see in observing the evolution of a word like “gay.” But public morality requires some framework to limit the pragmatic accommodations that pre-modernists fear and the rest of us ought to.
That framework can be found if pragmatists simply take their own value system more seriously. It would call for an expansion of the meaning of pragmatism. In public encounters with strangers, what might be called their consequential horizon must grow to include collaborative ventures whose utility is affected by the choices of all involved. Immediate private circumstances are insufficient to allow public choosing. So pragmatists must reorient. This requires more discipline in public choosing than they expend in their private pursuits, calling for both a longer-term view and a more inclusive one. To accord equal value to others’ moral autonomy in the public sphere changes pragmatism even more than it changes moral absolutism, for it requires abandoning a seat-of-the-pants brand of moral navigation. Pragmatists might respond that they are already forced to do that to make their way in the world. So they would likely view an expanded pragmatism as a conventional set of accommodations with no intrinsic moral value whatsoever. Does it matter whether we drive on the left side of the road or the right, so long as we all commit to agree? An enlarged consequential horizon to pragmatists would not seem morality at all, but rather simple utility centered on public behavior. They would share with religionists the conviction that public morality is not really moral at all.
But just as religionists find a true hypothetical morality hidden beneath their categorical commandments, so too will pragmatists find their devotion to toleration to be more than it seems. A longer consequential horizon produces some strange similarities to morality as traditionally composed. The person who violates that conventional structure of traffic order by driving on the wrong side of the road commits an illegal act that we would be justified in also calling immoral. Would you consider driving through a school zone at one hundred miles an hour an inappropriate act of intolerance of pedestrians or a moral failure? Should a coworker on a vital project insist on using his own private language in your group endeavors, would you consider it only a personal choice or a moral lapse? Believers find their categorical moral imperatives contain a hidden hypothetical and prudential character. Pragmatists likewise find their hypothetical and prudential calculations contain a truly moral imperative when enlarged to the public sphere. For both sides, that enlargement requires a broadening of perspective. In our eagerness to fulfill our desires in the public sphere, we seek maximal circumstantial freedom to accommodate varying circumstance. Yet as we look beyond the private pursuits derived from such calculations to a larger consequential horizon, honesty would compel us to admit that if some of us have an unlimited amount, others will find they have little or none (see “The Riddle of Equality”). While private pragmatism might find this objection restrictive of private aims, a public one must acknowledge equity as the means to expand individual freedom in the public sphere. An extended consequential horizon now may pragmatically define justice as a maximal circumstantial liberty limited by equity.
A moral term, if ever there was one, has emerged from simply extending a pragmatic consequential horizon beyond the immediate moment and transforming our grudging tolerance for other individuals’ desires into an active embrace of equity as a public good. Equitable laws expand freedom by forbidding the license that would threaten it. It is in the nature of mores not only to restrain behavior but also to prescribe it, and by that highlighting to further expand the moral freedom that defines human nature in opposition to the self-interest operative in our private desires (see “Our Freedom Fetish”). A final benefit of public morality is that such a clear determination of public goods assists all participants in both identifying and attaining goods that are simply not attainable by private means. The common moral language only works if everyone speaks it. The rules of the road only work at all if they work for all. But this is only possible if all expand their own moral sense and agree to the constituents of terms like “the general welfare” (see “Two Senses of the Common Good”).
Is this a transformation of pragmatic self-interest? I would argue it to be the same acknowledgement of reality that religionists must face when prompted by a broader appraisal of their own motivations, only pragmatists move toward the prudential from the direction of narrow self-interest while religionists arrive at the same point through an examination of their putative disinterest. Both movements require a more rational appraisal culminating in a public hypothetical imperative focused on a utility of furthest ends (see “The Utility of Furthest Ends”). The same merger of the pragmatic and the moral allows the pragmatist to adopt a public moral language beyond what is “appropriate.” It incidentally helps explain both the irresistible appeal of the language of morality to pragmatists and their inarticulate efforts to justify that language. Their stumblings reflect a reluctance to accept what any true moral appeal requires: that rationality is universal rather than the product of environment. But this admission is thoroughly modernist in its orientation, depending on a proper use of hypothetical and prudential reasoning by individuals acting in the public sphere, one that makes no concession to what postmodernists consider the environmental or personal origins of human identity and appeals instead to a universalist sense of conduct justified by rational analysis open to all moral agents.
We should not find it surprising that this universalist view traces the thinking of the founders of the U.S. system. But I wish utterly to reject the two axioms that guided their thinking and the specific forms of public morality such thinking produced. In rooting morality in the individual rather than any collective entity, they embodied modernism’s rejection of categorical religious absolutism in favor of the individual as the atomic unit of all moral thinking. But their own recent history had shown irreconcilable discord among those individuals. The nature of those disagreements reflected the failures of religious authority that had underwritten all moral claims, including those for government power. Their first error was in assuming that even hypothetical justifications must break down, leading to anarchy or tyranny or a return to an absolutist state because individuals must pursue private goals in the public sphere just as believers during the bloody Reformation pursued private revelations. They considered anarchy our natural political condition and government a conventional and imperfect arrangement preferable to total circumstantial freedom. Their second mistake lay in assuming as a result that the only way persons could be driven into political associations was through agreements by the majority that channeled the “natural” moral disputes among the private pursuits of citizens into the broad highway of majority rule. Most votes win, and the fifty percent minus one who lost might console themselves that whatever condition the social contract reduced them to was still preferable to the anarchy that modernism assumed to be their only alternative and their natural condition. These errors were the result of a breathless historical analysis undertaken amidst the wreckage of authority. To their credit, the founders established a polity that was meant to be self-correcting and perfectible through amendment to the social contract.
To their shame, they defined no criteria to gauge or even define improvement, for the simple reason that they regarded private moral aims as inevitably in conflict. They saw moral autonomy as inviolable. In that, they were right. But they thought its private dictates to be an unfailing source of public dispute. In that, they were dead wrong. If it seems at this point we are far from a more perfect union, a good part of the blame must be placed on the two mistaken assumptions that guided the founders’ thinking.
They cannot be faulted for failing to predict the fault lines that the next two centuries would cut into their vision and the postmodernist quake those faults would produce. They can hardly be blamed for not seeing the future. But why did they so misread the past?
Had they seen the cataclysm of the Reformation and the birth of their own moral warrants more lucidly, perhaps they might have begun their experiment in self-government from a less embittered beginning. Consider a thoroughly pragmatic view of public morality, though one informed by a reasonable consequential horizon. If we grant that persons act over a lifetime to fulfill their admittedly disparate desires, we would reach two conclusions. First, that many of these desires are common to all moral agents. It is this commonality that forces reason to embrace the need for equity and derives from that capacity and that natural equality the human dignity that warrants any theory of natural rights. These universal desires move common morality to consensus. This is not some airy and empty and high-blown fluff like the “rights” enunciated by Jefferson, but actual goods that form the spine of any workable moral vision. I have explored that question previously at some length (see “Needs Anchor Morality”). Second, that only a few of these pursuits are facilitated by government, but those few that are can be met by no other means (see “Where Do Rights Originate?”). The principle of subsidiarity argues that our pursuits should be met by what we might call the simplest source competent to fulfill them, that principle in itself a conclusion derived from a rational analysis of the satisfaction of needs. Government is that simplest source for a subset of universal needs (see “Natural Law and the Legality of Human Rights”). The founders were utterly mistaken in assuming that government must be purely conventional and artificial and therefore might be defined only by majority will. Like other social institutions, it is purely natural and exists to satisfy those human needs that individuals, families, and friendships in all their variety cannot meet. Those needs constitute public goods.
What channels human desire into those broad highways mentioned earlier is not the coincidental overlap of conflicting pursuits but varied means of satisfying the same universal needs. Properly conceived, government performs its necessities just as families and individuals perform theirs (see “Foundations of the Law: an Appetizer”). What are those shared needs conducive to the general welfare? We identify those necessities through prudential and hypothetical reasoning in our common spaces. We apply universal reasoning to variable individual experience in the modernist model. The result will be a public morality fit for public use. Of what use is that? It promotes our flourishing. And that is good.