It seems to me we can answer the title question in one of three ways.
First, we can adopt a skeptical or cynical response: goodness does not have objective existence. Now this approach can be coarse or fine.
The coarse view holds that goodness is an empty term filled with will. In Shakespeare’s words, “There is nothing either good or bad, but thinking makes it so.” We choose to call what we like or desire good and what we dislike or don’t want evil. In this view goodness is a projection of attachment. Its champions think utility to be good but also so situationally dependent as to discourage external judgment. “You just had to be there” is their motto. A tire iron is good for putting on a spare or knocking someone over the head. A good painting is one you either treasure or can sell at a profit. You decide. Others can’t or shouldn’t.
As a means of premature closure, this answer is splendid. If we can dismiss goodness as arbitrary preference, we will find ourselves under no obligation to either seek it or follow its dictates, and that is freeing. I should think it would come as no surprise that this view is most popular among postmodernists who embrace a pragmatist outlook grafted onto an ironic detachment that regards institutions and authorities as power grabs on individual autonomy. To value a personal sense of utility and maximal autonomy is their response.
It is impossible to oppose all standards of judgment, but a coarse skepticism about goodness compels a complete dismissal of every general rule and the institutional traditions that proclaim and enforce them. This argument is hopelessly compromised by its own standards. Postmodernists see hypocrisy and self-contradiction as the only truly moral offenses (see “Postmodernism’s Unsettling Disagreements”) since even the degree of logical rigor they bring to consideration is in their view determined by the same environment that forms preferences. They think their flexibility to be good not just for themselves but for everyone. This elevation of tolerance and sincerity is itself a universalist moral position that implies a whole range of logically entailed beliefs that carry even more moral force. Their acceptance of one ontological reality over another attests to a preference for truth over falsehood and utility over purposelessness (see “Our Freedom Fetish”). This contradicts their moral solipsism. Their argument stipulates such things as good or less bad than the alternatives. Their condemnation of others’ judgment that goodness has real existence requires an objective ontology their axioms deny. Expressing their preference implicitly makes the case for a common reasoning faculty; otherwise, they would not see a benefit in arguing it. They also violate their own valuations of utility unless they live alone on a desert island, for working with others in the pursuit of mutually advantageous goods has universal utility that their experience reinforces daily. I will use that wedge to convince them of a better position.
The less coarse skeptical view still holds goodness as a relative term, but adheres to a frank preference for social utility as a desideratum, including the practical benefit of mores and positive laws that lubricate persons’ interaction. So goodness in their world is a means to immediate social ends. To their minds it is like driving on the right-hand side of the road: a convention that has no meaningful content whatsoever except that it facilitates getting you to where you are going. They agree with their more cynical allies in valuing utility, but theirs is a social rather than a personal valuation, one in which private esteem must bow to public will. Murder is wrong because we all agree it is. Should we think differently, as citizens did during World War II when indiscriminate bombing killed civilians by the hundreds of thousands, then murder would be acceptable as collateral damage to a larger social good, however decided. In democracies, that is called the common good, though calling it that implies both its real existence and a universality in defining it that is obviously lacking (see “Two Senses of the Common Good”). Other means of social organization have their own means to determine value, but all are governed by their own social norms (see “Which Clash of Civilizations?”).
This fine-grained skepticism strikes a responsive chord. We cannot doubt that mores serve useful ends, though in weighing them the cynic might question whether these are outweighed by the irreconcilable disagreements that must arise over whose version of utility must win. But this concession to their point should not obscure another: we must also ask whether preferences only serve social utility or whether they seek to or actually do accomplish other ends, including real and objective goods irrespective of public will.
Whether coarse or fine, these skeptics regard goodness as a quality persons award to an element of their experience. It is an opinion they hold, not an independent quality of a thing like its weight or color. Goodness resides as a term of approbation in the mind of the viewer. These skeptics might be called nominalists. As a general term nominalism considers reality to be composed of individual items and experiences. Any clumping or abstraction deriving from our efforts to categorize these discrete entities is unwarranted. With this limitation, any claim to ontological goodness would be ridiculous. But preferential nominalists who ostensibly value the equality of choice are most unlikely to be consistent, valuing their own culture, preference, or opinion over those they disagree with and attempting a defense their position denies.
One group that skeptics would certainly quibble with is those at the opposite end of the spectrum. Idealists consider goodness to have quiddity: an existence in reality as firm as any material thing. They reject seeing it merely as a product of experience or opinion, for they know that any such characterization tends to subjectivism. They too argue from a fine or a coarse perspective. Both grow from an implicitly religious root that argues for ontological goodness.
The fine view is as old as the Academy. Plato drew the outlines by seeing “the good” as an ideal and separate reality that ordinary existence only dimly remembers and seeks to resuscitate. This appeal to the ideal summons belief in a divinely inspired order as the guarantor not only of the ideal but of persons’ access to it. As Aquinas said, “A thing is good so long as it has being.” So existence is objectively better than non-existence because existence increases potentiality toward purposes and ends built into reality. When argued as goodness, “potentiality” becomes “freedom,” but the ontological argument would hold even if there were no humans in the universe. Being is better than nothingness because being allows stuff to happen. This is obviously closely related to a second argument for ontological goodness: the argument for order. We see this argument implied in science. Mercury is good even if it has no impact on any human because its existence helps order the solar system, the Milky Way, and so on. This is a refinement of the argument from being, but it is an important one. It sees nature as a single unity in which physical order is good because it allows everything to exist both as things in themselves and as part of a larger order. Think of the ecological movement that sees nature as good and human intrusion as a general moral evil. That cultural view explicitly embraces the ontological goodness of the natural order while also rejecting the disruption humans impose by virtue of what is viewed as a misuse of their free will. Of course, this depends for its persuasive power on the argument from order, an essentially religious view of natural goodness and human sinfulness going back to the Garden of Eden, all supercharged by the Romantics in the early 1800s.
This argument is rationally permissible if not rationally entailed, but it requires a commitment of belief to imagine an ontologically solid ideal reality that we somehow dimly access in our determinations of the good. But there is a much simpler way to make its case. This is the coarse view favoring ontological goodness. Moral absolutism finds it much easier to argue for the ontological reality of goodness. Another name for that thing is God. By this view, all manifestations of goodness are expressions of the divine will communicated as commands to persons. Religionists regard the famous Socratic question of whether the gods create or are subservient to goodness as a misunderstanding of identity, for in their minds goodness and divine are like “light” and “bright”: two aspects of one thing. Goodness is but a different name for divine will manifesting itself in human life. It doesn’t take a moral skeptic to question how such an argument can be reasonably defended, for it relies on proofs about reality that it seems reluctant to provide. It is, of course, possible that persons who make the argument possess perceptions that others lack or reasoning powers beyond their peers, but as they seem unable to demonstrate these faculties to an uncommitted observer, it seems more likely that their valuation relies on some less definitive and more personalized conception. We must seek the source of this sensus divinitatis elsewhere. Such a view requires only the exercise of preferential freedom in one’s construction of reality, an operation otherwise known as belief (see “Knowledge, Trust, and Belief”). This simultaneity flavors the truth of an experience with a preference, thereby corrupting the dispassionate ratiocinative determination of truth that in general should be completed and grasped before being submitted to a subsequent preference dependent on it.
This process differs dramatically from judgments of truth and goodness that separate the mental act into a prior grasp of truth that then guides a subsequent act of preference. Belief stacks the rational deck so as to combine determinations of truth and goodness into a single mental operation, thereby granting a seeming certainty to the outcome. Understanding of truth in this pursuit is inextricably bound to preference, leaving no gap into which reason might insert its demand for judgment. Belief becomes its own warrant, its goodness upheld by its truth and vice versa. In ordinary reasoning, we must know a situation before finding options for choice that then are filtered for preference, but beliefs short-circuit this rational sequence, bolstering a sense of certainty. Believers have no problem vivifying the independent existence of evil, which they define as the absence of goodness, a negation of the divine presence rather than the positive personification of an ontological quality. This connects well with their elevation of being as good and denigration of non-being as evil, only they see all of this as communicated through dogma, revelation, or some religious authority.
Both the coarse and the fine defense of real goodness suffer from the same defect: they cannot withstand logical challenge. Believers fill in the epistemological blanks with their own preferences, but these vary with the belief and with the believer, so their effort to advance their cause, though it cannot be refuted in situations in which belief is properly employed, can be defended neither to unbelievers nor to other believers with different beliefs (see “The Fragility of Religious Authority”). Proper beliefs, those confronting issues whose reality we cannot grasp, cannot be refuted because they are permissible to reason. Aliens may exist. God may guide the human heart. Preference moves belief in such cases, and multiple possibilities remain open to belief, none of which may be either confirmed or refuted. Improper beliefs, those made in a rejection of available judgments of truth, are erroneous because they slur determinations of truth prematurely so as to make judgments of goodness impossible.
You can imagine the confusion that arises when the postmodern skeptic confronts the religious absolutist on any question concerning goodness or vice versa!
I find it easy to refute the nominalist position simply for its internal contradictions. I have never met a thorough moral nominalist, nor have I heard a defense of nominalism that can evade the simple charge that such a defense must be considered by its maker as a better position than that which it opposes. “Better” may not indicate ontological goodness, but it does evoke a standard that the speaker accepts. It is less easy for a disinterested investigator to reject the idealist argument for the simple reason that properly limited beliefs are impossible to confirm or deny by reason free from desire. Depending on the manner in which idealism confronts issues of divine justice, it might be made as convincing to a disinterested listener as it is to true believers, but to my knowledge no one, not even the great Aquinas, has made that case sufficiently to subject it to judgment (see “Divine Justice”). That’s why it’s called faith. To exercise religious belief as a real doxastic venture without falling into insupportable claims to knowledge on one side or a facile agnosticism on the other requires balancing on a razor’s edge (see “Religious Knowledge as Mobius Strip”). That may be asking too much, but both sides seem oblivious to the issue. Believers evidently don’t think they need it and skeptics don’t seem to have the patience to follow the argument or the disinterest to withhold judgment until they do.
There is a middle ground on goodness that falls between a pragmatic nominalism and a fideist idealism. Conceptualism withholds judgment on ontological goodness, preferring to endorse an epistemological position instead, one that regards goodness as a possession of the mind. Goodness exists as abstract mental construction. It resembles any abstraction in this view. When we speak of justice, for instance, are we thinking of an empty expression of preference, a real thing in the world, or something else? Do we regard it as an attribute as real as color or size? Or do we callously assume it to be an issue of pure opinion, like taste in food or fashion? Or do we more callously yet see justice as the velvet glove that power puts on to commit its depredations? Is justice a thing or a delusion? It seems to be neither, doesn’t it, but rather the descriptor we apply to an experience in which persons receive their due. It seems a judgment of quality that over time and with thought becomes a categorization of similitude of experience. So it goes with the other concept we call goodness. At its simplest, it is the judgment we bring to the utility of one preference over another. An elevator is a good conveyance for vertical rather than horizontal travel in a building. I think that is a pretty solid and universal judgment of utility-as-goodness. When we compare an experience to some standard of quality we can justify publicly and consistently, we apply judgments of quality. And when we apply some ethical standard to an experience, we judge its morality. All of these operations are both intensely rational and necessarily provisional. Their rationality derives from the overlay of standard to situation, an operation that calls for fine discrimination of similitude and differentia so as to enact a fair categorization. 
It also derives from the prior development of the standard itself, the conceptualization that is the only real existence of goodness per se that is available to the human mind. If such a thing existed in reality, idealism would be entirely defensible. If no conceptualization could be derived by reason or if its existence could not be publicly confirmed or defended, then the skeptics, subjectivists, cultural relativists, or materialists would carry the day. Nominalism would triumph as the only option available, thereby condemning all activities that invoke and evaluate conceptualizations of goodness. Preference would be universally acknowledged to be as arbitrary as a throw of the dice. Even personal utility could stake no claim to goodness, for if utility is denied as an abstract standard of value, every preference must prove arbitrary. But let us allow nominalism a bit of leeway by allowing it to defend at least a pragmatic outlook, one that refuses even to endorse utility itself as a standard but that also takes preferential freedom seriously enough to acknowledge that preference opens choice to at least some rational arbitration even if it is highly idiosyncratic (see “The Problem of Moral Pragmatism”). This is an extremely liberal definition of either “standard” or “rational,” so it forms a most generous understanding of nominalism. So if our judgments of conceptual goodness, unlike our beliefs about it, are to carry their weight, the objections of this kind of nominalism have to be defeated. If that can be managed, all more rigorous definitions of nominalism would also fail, and with them all skeptical objections to conceptualizations of goodness.
Two concerns arise, either of which, if accepted, must prove fatal to conceptualism in general and to judgments of goodness in particular. Both relate to the issue of objectivity of concepts. First, how can standards of goodness be determined to be objective if such standards are merely private conceptualizations of value or opinion? The second objection applies to the overlay of experience upon these standards. Even should their existence be conceded, doesn’t the uniqueness of experience doom any effort to apply any standard so as to derive a publicly defensible judgment of goodness?
We apply standards of goodness to a wide range of experiences, and it is true that many of these are nonsense. I hope I have demonstrated why any proper belief is disqualified from consideration. Beliefs are not judgments. This is not to question their truth (properly limited beliefs must always by definition be permissible), but it does challenge the availability of knowledge that admits the possibility of standards. Expressions of taste must also be discarded. You may think pistachio is the best ice cream or Citizen Kane the worst film, but unless you can articulate the standards by which you propose these expressions as judgments or can cite repeated thoughtful experiences that have refined your opinions into judgments, they deserve to be kept to yourself or stated as naïve and private preferences. Now this could go wrong in either of two ways. Standards may be available, but you may not know them. Perhaps a set of mutually-agreed-upon standards is available to movie critics or creamery owners that lays out some path to judgment. I’ve not seen them, so any expression of preference I apply is bound to be merely opinion and therefore private. Or the subject simply does not lend itself to replicable standards of thoughtful repetitive experience in the first place because it is either unique or so repetitious that thought can add nothing valuable to our understanding. In either case a confident assertion of quality is just so much chin music. When art critics argue about the quality of a work, their positions would be bolstered by some reference to the standards they are employing.
On the other hand, some standards are so obvious that we implicitly agree to apply them to any judgment of goodness. This situation begins with the simplest judgments of utility, judgments so obvious as to be invisible. We would all agree that a hammer is a good tool to drive a nail while a throw pillow is not. But this universalism leads us in ever tighter spirals to ever more specific standards of quality. Civil engineers have a pretty good idea of the best material from which to construct an elevator cable. Judges have a clear conception of the nature of positive law that governs their rulings. A whole range of human activities lend themselves to the development of expertise so as to determine standards applicable to particular activities or that allow a set of similar experiences to be rationally examined over time (see “Expertise”). So the position that no common conceptualizations of goodness can be defended on the basis of their standards of judgment is clearly mistaken in judgments of both utility and quality.
It survives, I think, for three reasons. First, careless thinkers toss out their opinions like confetti, expecting their listeners to consider them serious judgments of goodness. A whole range of pseudo-experts pontificate on experiences without adequate standards, effectively appealing not to our reason but to their own authority for our concurrence. Second, believers, especially religious ones, propagate moral beliefs as judgments, appealing to authority or their own private revelations for the absolutist mandates they profess. The Ten Commandments certainly are standards of goodness, but their clarity must prove compelling only to those who embrace the authority that has transmitted them. Other authorities transmit other moral imperatives that bind other believers. The moral element of goodness is particularly vexing in that believers regard it as beyond ordinary hypothetical calculations of value such as are used in judgments of utility and so defend their religious values as categorically binding on all persons, rather than merely hypothetically binding (see “Religion and Truth”). The substance of their beliefs is something followers of other religious traditions must deny, and the categorical obligation is additionally denied as a source of morality by unbelievers. Third, persons are extraordinarily careless in their claims to goodness, typically assuming their own positions to be universally true and characterizing those they disagree with as meaningless or subjective. Even the great Immanuel Kant could not convincingly demonstrate that standards of taste exist or could be judged or improved if they did, so the average person can hardly be expected to do better and can be forgiven for universalizing her taste or opinion. But saying it does not make it so. This self-deception is facilitated by the careless use of the terms we apply to such matters. Even educated persons use belief, judgment, and opinion indiscriminately, slurring the responsibilities for warrant the proper use of each term implies.
It would be difficult to compile a spectrum of standards of judgment for goodness. Such an effort would be both contentious and clarifying, particularly to idealists and nominalists who would find their comforting conclusions repudiated by such a spectrum: the idealist because no such standard would prove anything more than a rational construction generally accepted and the nominalist because her own acceptance would reveal her blanket skepticism to be unsupportable.
She might still take refuge in the uniqueness of experience. For even if such standards could be defended for at least some goods, skeptics might with good reason argue that experiences are too variable and private to apply the standards to. What does the art critic who opines on a work say to the artist who defends its originality against the critic’s opinion? How can the champion of standards reply to the person who simply says, “Your standard does not apply to my experience”? The very existence of standards flattens experience, smooths out its quirks and contexts, and disengages private evaluation. The gymnast may respond to the judges’ scores with her own memories of endless hours of practice and self-discipline, her own self-confidence and that of her coach, and so on. The thief responds to the law with tales of abuse and neglect or of need and privation. The hedonist tells the moralist that he enjoys a life of self-indulgence.
Since choices of goodness are perforce engagements of preferential freedom, we should have no doubt that persons can choose whatever they value in their own experience and call it good. Their freedom to choose is not the issue. It is instead whether an open-minded appraisal will convince them that they should choose by some standard of value that their own reason will mandate to their own experience.
That conviction cannot begin with any appeal to experience alone. Everyone’s is different, and we can count on differences even when persons share experiences. Those are rooted in memory of prior experience as the lens through which present experience is viewed as well as more prosaic differences of perception and discrimination. Postmodernists are eager to defend the sanctity of their own minds and the uniqueness of their own experience, at least until they begin pontificating on gendered, ethnic, racial, or class backgrounds as makers of identity. This simplistic notion of the impact of environment on personhood is doubly wrong even when one discounts its simultaneous defense of the sanctity of individualism. First, it is true that experience does forge identity, and as participants in many cultures each of us navigates that process differently. But what allows that navigation cannot be the formative pressure of culture itself. We participate in too many for a monolithic influence to be felt, and we choose to embrace some elements and reject others in defiance of environmental prescription. Though it is in postmodernists’ interest to oversimplify the molding force of broad cultures, the actual motivation for each of us must be our own reasoning faculty employed constantly in the exercise of preference. Certainly, it is influenced by culture, but its ability to negotiate and learn from experience is the common quality that allows communication, grasps conceptualizations, and permits public judgments of truth and goodness to be defended. A strictly cultural formation must be rejected, but the common channels through which we negotiate our experience testify to some other common source of navigation. If experience is unique and if preference is framed by it, what guides preference to common utility and to the common standards that allow defensible judgments of quality, and what allows a recognition of expertise in at least some kinds of experience?
If experience is indeed private, how can we communicate it even adequately to others? I argue the common factor to be our reason.
Human rationality is enormously complex. It has to be to find categories of understanding in the data stream our senses present to the mind (see “The Tyranny of Rationality”). A sustained attack on the objectivity of goodness in favor of contextualization and the relativism it implies is a hallmark of postmodernism, but our species-specific means of interpreting that context offers the possibility of intersubjectivity in preferences even if we reject the possibility of objectivity in our goodness choices. To be clear, the case for a common objectivity in reasoning about truth is made by the worldwide applicability of mathematics and the scientific method, by the interlinking disciplines of empirical science, by the acceptance of denotations in language, by submission to expertise as a logically sequenced key to knowledge or skill, and by the irresistible compulsory quality of rational understanding. So we know that universal reasoning about matters of truth is possible, though by no means easy. But it is so much harder to make the case for that same reasoning capacity in issues of goodness that we must loosen the criteria. Simply put, our reason interprets private experience in common ways. “Intersubjectivity” makes no appeal to ontological goodness at all, arguing merely that persons share something in their personhood that allows a commonality of preference similar to the uniformity of reasoning that allows universal mathematics. By this principle, we may seek out universal goods rather than objective goods not because they exist in reality but in the functioning of our common reasoning about diverse experience. This is certainly a more tentative judgment than the existence of objective truth, but it is essential if public morality is to exist as anything greater than a ruthless struggle of each against all. Still, we must acknowledge the difficulty of making the case, for wishing doesn’t make it so.
It is so easy to find difference in human experience. That, after all, is the core of tribalism, which is merely an older name for cultural relativism. But dig deeper, beyond actions, and note the universality of our natural and preferential freedoms, the source of natural and human rights (see “Natural Law and the Legality of Human Rights”). Notice the universality of our needs as opposed to the particularity of our desires (see “Needs Anchor Morality”). Notice the common endeavors that mark communities, that have always marked public life. It is ironic that the contentious theorists of what we now think of as postmodernism often began as academics studying linguistics. Their drift to post-Structuralism marked a preference for the trees rather than the forest, as Noam Chomsky has tirelessly argued. Were postmodernists correct in thinking us so divergent, our language would communicate only connotations rather than denotations. To be sure, their revolt was largely against the perversions of modernism produced in large part by an anachronistic and delusional deference to authority, which is another way of characterizing their entire movement as a flat rejection of ontological goodness. In discounting conceptualizations as universal, they went too far. For all the pedantry and human science trappings of their turgid prose, their arguments only skimmed the surface of human experience (see “One Postmodern Sentence”). We construct reality using the common conceptual tools our minds provide to us, presenting an intelligible mimesis before consciousness even engages to grasp it. We cannot help but see unity and number, before and after. We will argue about which are causes and which effects, but it will never occur to us to doubt that such things operate in reality. But that reality of itself provides none of these conceptualizations. Causes exist in phenomena no more than effects do, than number or unity does. Uninterpreted reality is but a jumble of events. 
The unnoticed falling tree cannot be known to make a sound. Time is simply the conceptualization of an order of events. But reality does not provide that order or the time that marks its sequence or the causal relationships that make sense of it. The human mind does. Nominalists will argue that all goodness is subjective, but their argument itself testifies that they not only cannot help thinking in universal terms but also that their thinking must always seek rationality even if they fail to find it. No religious believer can think of the commandments of her faith as being merely private instruction. She reaches for a common moral framework even against the passionate disavowal of her listener and sees a divine order while her listener sees only a natural one. Neither sees things-as-such, which is why they can constructively engage in an argument about the nature of the order that neither could deny. We cannot help but conceptualize events in common modes of reasoning because we cannot help filtering private experience according to universal intuitions that apply perceptions to the mind. Determining truth is the means to preference, making conceptualizations of goodness the most important ones we can think about as we instantiate preference after preference in our ordinary experience.
So I can convince you a hammer is a better tool than a throw pillow to drive a nail and a throw pillow a better tool than a hammer to take a nap. And I can probably convince you that hummus makes a better meal than hay and that a cardiologist can better perform heart surgery than a barista. Though not all expertise applies to issues of utility, a preference for true expertise must always prove universally useful. We may justify public conceptualizations of goodness in utility and in quality by these means. But that still leaves the very big question hanging: are universal moral standards possible in the same sense as those governing expertise or judgments of quality? What concepts of common human reasoning applied to unique human experience could mold a defensible set of standards governing moral behavior, standards that appeal neither to immediate utility as subjective appraisal nor to absolute dictates by authority or revelation? Can conceptions of morality be defended as universal judgment applicable to all persons even after acknowledging that experiences are irretrievably private and unique? Can they be defended to an open mind with the same methods that judgments of utility or expertise might appeal to? Can such conceptualizations defeat nominalist and idealist preferences?
Before we answer, we should look around us to see if such a thing already exists. We might seek such a conceptualization in positive law. After all, every law is a pronouncement of public morality, telling us what is and is not open to action (see “Foundations of the Law: An Appetizer”). But if we seek a complete universal morality in positive law, we must fail. First, it certainly does not cover all moral issues, thank goodness. Second, it frames its own justifications in a variety of contradictory ways, and persons accede to it for a variety of reasons. Some warrants for law are strictly positivist and rooted in the law’s ability to punish, imposing a universal utility based on fear alone. Some are strictly contractarian, finding law’s power in constitutions that are in theory and practice every bit as conventional and utilitarian as legal positivism (see “Why Invent a Social Contract?”). And some find power in remnants of divine command and authority. These various warrants also power cultural mores, and this leads to a third problem: no positive law is universal (see “The Axioms of Moral Systems”). All apply only to limited jurisdictions. Many persons will turn immediately to religious authority, but such standards have no means of defeating other standards justified by other authority or of converting those who reject authority in toto. Others refer to religious belief or revelation, but as these are not judgments, their conceptualizations of moral goodness are closed to public appraisal and therefore to common reasoning.
We will find other candidates that do appeal to reason, but so far none has secured universal assent for its moral claims (see “Three Moral Systems”). Their failure tempts the nominalist to fortify her skepticism and the religious idealist to return to her dogma, but this will only renew the intransigence that marks present culture.
For moral universalism to convince absolutists, it must abandon centuries of attack on authority. That battle began with the Protestant Reformation and ended definitively with World War I. What remains is nostalgia mingled with resentment over what the rejection of authority has wrought (see “Tao and the Myth of Religious Return”). The coarser skepticism that claims total subjectivism and the finer one that values social utility alone will never share the belief system that allows religionists to think of goodness as real. But conceptualists can advance a version of moral goodness as a universal possession of reason so long as it is based on what might be called the “utility of furthest ends” as a common value system, particularly for public morality (see “A Utility of Furthest Ends”). But that idea depends on two suppositions: first, that such a conception is available to reason and, second, that it is universal rather than subjective or relative to culture. These are tall orders for any conception of morality.
The first goal challenges the subjectivity of moral reasoning, an axiom of postmodernism that mandates both identity politics and cultural relativism. Its root is the elasticity of minds molded by either private or cultural experience. If experience is private and reason formed by it, no common ethical ground can be claimed on which to build a universal morality founded upon a utility of furthest ends. This evidently was the thinking of the founders of utilitarianism as a theory of moral behavior. They considered persons’ pursuits always moved by immediate utility, though John Stuart Mill in particular tried to prescribe universal goods without success. If reasoning is indeed an effect of experience rather than its shaper, no public articulation of moral standards would prove possible, unless one embraced a kind of general will formed by class, race, gender, religion, or nationality as coercive and brute fact. But such a generic view of the self ignores the plethora of communities that persons belong to as well as the substantial differences that remain even among persons largely sharing a single one. What accounts for that? One answer might be other cultures or accretions of them, but how are these arbitrated if not by individual will and reason? This argument might dissuade relativists, but it seems to strengthen ethical subjectivism, leading to the private morality of the virtual circle (see “What Is the Virtual Circle?”). But at this point a general conceptualist argument must also force postmodernists to acknowledge that a priori universalist conceptualizations like mathematics seem to appeal to a universal reasoning capacity. Conceptualists might also point to a posteriori ones like the scientific method, a rational mode of determining truth that transcends culture and subjective preference and engages a common reasoning faculty.
The skeptic will respond that morality is not amenable to quantification or to empirical science (see “The Limits of Empirical Science”). That is true, but we see these same universal faculties at work in the development of codes of conduct for proper application of expertise. These, like the scientific method, go beyond immediate utility to prescribe universal goods as conceptualizations governing professional practice. Not all such efforts are open even to verbalization of standards, yet expertise clearly exists in some fields and clearly functions across cultures, implying a universality of value that at least challenges cultural relativism. But this can only take us so far, beyond pragmatism but still clearly utilitarian. If morality could be made subject to the decisions of experts, it already would have been, so we may still entertain some doubt about an adequate conceptualization of a non-religious universal morality, but a disinterested listener would have to admit the arc of reason now bends toward the possibility.
Another way to approach the issue is through the perfection of preference in individual choosing that loosely resembles the development of expertise. My experiences do allow me to sharpen my concepts, to learn from my mistakes, to improve my preferences by means of clearer thinking or better evidence just as I may in considering justice as a qualitative concept of goodness. It is true that I could never claim to grow to expertise in that effort as a jurist might, for my experiences are too broad and different to reach that level, but my conception of justice will still improve with experience in a way that could both profit from disputation and give others profit. Part of that will in all likelihood lead to a more nuanced and generalized notion of justice’s role in utility, in my seeing it as involving equity and qualities beyond simple fairness as I think through what I and other persons are due. Central to that thinking is the notion of equity, that my own desires are objectively no more important than others’ desires, and this realization moves me inexorably toward considerations of the nature of justice. This operation is peculiarly similar to expertise, though it cannot claim the similarity of experiences that conduce to that admirable ability. Competence in such thinking is possible. It will produce categorical understandings of the kinds of relationships open to equity and justice and those that are not. Inevitably, this categorical thinking will produce a moral bullseye that prescribes justice for strangers and relationships of love for friends and family. The categorical files will fill with prudential judgments that apply them to undistilled experiences, each different yet bound by the categorical moral distinctions that allow them to be constructive of further competencies (see “The Moral Bullseye”). My moral horizon will naturally extend from the immediate to a wider scope.
Over time, better evidence and more nuanced thought might contribute to a utility of furthest ends in which public values maximize goods universally assented to as I argue for my position and learn from the arguments of those whose experience prompts other conclusions. Who can doubt that such a good-faith effort coupled with humility would move me over time to a clearer conception, and should that effort span generations, move us all? Goodness as a moral concept is complex and multi-faceted, but as I have to employ my preferential freedom to identify it and my circumstantial freedom to act upon my preference, it seems I cannot escape the effort to refine my conceptual understanding in that direction (see “What Do We Mean by ‘Morality’?”).
If we can arrive at common judgments of a priori truths like mathematics and a posteriori truths like the scientific method and linguistic denotation, and if we can arrive at common judgments of deferred utility necessary for the exercise of expertise, and if we can employ common standards of quality in many fields of human endeavor, we must ask if a similar intersubjectivity can steer our universal natural and preferential freedom to common moral judgment. We are obstructed in this effort by the ontological associations traditionally brought to morality by religious dogma. For centuries, believers considered morality governed by categorical duties rather than the hypothetical considerations operative in judgments of utility and quality. This separation was so powerful that even nominalists embraced it, putting morality beyond the reach of hypothetical reasoning and treating it as a fundamentally different kind of goodness whose independent existence they denied. But conceptual definitions of morality as a “utility of furthest ends” erase that bright distinction, considering morality simply as the broadest and most inclusive sort of preferential standard, reliant not on categorical commands or imperatives but on hypotheticals tied to common human experience.
Religionists will bridle at that effort because of their elevation of their own beliefs, as will postmodernists who point to the perceptual wall as a guarantor of the privacy of each person’s experience. But their objections still face the challenges of ontology and objectivity. We cannot know the categorical goods that religionists believe in any more than they can, and that makes their private profession an impossible basis for public morality. Postmodernists have proved that point ad nauseam, echoing the modernists who proved it far more capably two centuries earlier. A catalogue of wholesome human pursuits considered hypothetically and intersubjectively might yet prove compatible with at least some of the dogmas that believers accept, though its warrant must always reside in the moral agent rather than in something external to her. That might be the bigger issue for believers who yearn for external, categorical, absolute, and authoritative truth in moral thinking. The problem for postmodern nominalists would be a different one: to abandon a tabula rasa view of human nature in which culture or environment must prove determinative in favor of one that respects the moral autonomy of each person enough to also expect the responsibility that moral freedom imposes as a claim right. While the contents of a utility of furthest ends must be reasoned rather than commanded, we have reason to think it would tamp down the moral chaos that now dominates Western cultures while finally establishing a true warrant for universal rights (see “Natural and Political Rights”). Whereas the believer will find it difficult to embrace any moral hypotheticals, the nominalist will find it equally hard to embrace any that are neither immediate nor “appropriate” to culture. Believers seek moral eternity while non-believers seek moral immediacy.
In private preferences, they will continue these pursuits, but they have good reason to consider a public posture that neither betrays their values nor forces them upon others in violation of moral agency.
It is a fair question whether this common conceptualizing ability so famously explored by Immanuel Kant can weigh private experience with enough common categorizing to allow us to overcome the privacy of experience or the tribalism of culture. We are only now learning of the neurological bases for mental conceptualizations and undoing a century of damage by human sciences that considered environment decisive in forming mind (see “The Calamity of the Human Sciences”). The presumptions of Marx, Freud, and the thousands of acolytes who followed their lead are in a long decline as natural science finally finds the means to explore the operation of the brain from the inside out. The use of fMRI technology, work on artificial intelligence and genetics, and the rise of purely observational human sciences will not resolve the problems we face, but they will undo some of the damage earlier generations of scientists have inflicted upon moral progress. Postmodernism, which is built upon false assumptions rooted in plausible observational theories glossed by scientism, should also wither with the advancement of neurological science and the species-specific discoveries it brings (see “The Limits of Empirical Science”). We are learning how the brain is shaped and even formed by environmental factors, not in the isolation of individual experience but in organic structures rooted in physiology that dictate a common nature beyond self, beyond tribe. But none of this will lead to a moral science, for empiricism’s power lies in its truth-finding rather than in determining anything beyond immediate utility. We have been led far astray by the dominance of the human sciences over the last two centuries, which have advanced moral theories disguised as science and falsely linked to it.
We may hope we have put such scientism behind us and that we may celebrate empiricism’s truth-finding power as a spur to defensible rational constructions of public morality built upon our common human nature. Part of that nature seems an inability to relinquish moral responsibility for preference, itself a universal human trait no matter how we plot nurture on the graph of human nature. To be clear, we know that concepts are electro-chemical transmissions between neurons in the brain. But we read them as ideas presented to the mind, and because they make sense of our environment and open experience to preference, we should accord conceptualizations the highest place in human reasoning (see “Toward a Public Morality”).
All of this only opens the possibility of universal public morality. It does not guarantee it. The lessons of modernism should convince us that viewing moral preference as a value-neutral and conventional operation is asking for too little and that a premodern attachment to beliefs warranted by authority is asking for too much. We should not take from these failures the lesson that goodness is an obscure divine reality or the pragmatic term for what we happen to desire. Universal utility is easy. Universal standards of quality are a bit harder. The Goldilocks requirements for universal morality have yet to be arbitrated, but that arbitration cannot begin until we hammer out the axioms by which the argument can occur.