Major Contentions
- Examining human preference reveals two separate preconscious presentations to the mind: sense data presents a mimesis of reality as a completed picture, and natural freedom draws options for preference; both occur without conscious attention.
- The act of severance is an active separation and slowing of these presentations so that judgments of truth are made conscious before determinations of preference are permitted.
- The core operation of empirical science involves just this effort, which accounts for both the truth-finding power of its methodology and its incapacity to prescribe goods by the same method it uses so well to find truth in a limited number of experiences.
- Learning this lesson has been exceedingly difficult for natural science; those who have failed to learn it practice scientism, and the human sciences do so most disturbingly.
- Because determinations of truth and goodness are intimately related in everyday life, non-scientists have a particularly difficult time practicing an act of severance, preferring to automate the process of choosing goods through premature closure, allowing goodness preferences to taint what should be a prior process of determining truth.
- We see this in all three species of goodness: quality, utility, and morality.
- Our opinions of quality are often instantaneous expressions of approval, empty preferences that others may safely ignore; but this broad claim to opine on subjects of which we have neither expertise nor competence encourages a dismissal of both as warranted means of judging truth and goodness.
- Determinations of utility are prompted by desire in momentary experience, which may easily automate preferences just as it automates determinations of truth, reducing presentations of choice to instantaneous responses to stimuli, among them the desire that characterizes premature closure.
- But utility invariably involves a question of time, for immediate desires are often in conflict with longer-term ones that are not as quickly presented to consciousness; engaging an act of severance may allow desires to be more capably arbitrated and satisfied.
- This possibility is denied by the most popular method of preference now available: pragmatism, which is well-suited to a consumerist culture that relies on purely private valuation of utility.
- John Dewey and Jean Piaget disseminated pragmatism as an innovation of the human sciences to American students at the beginning of the twentieth century; it was a gloss on utilitarianism, a nineteenth-century moral philosophy that elevated pleasure as the highest good, urging a calculation of pleasures and pains as the arbiter of choice.
- Pragmatism rejected even this interruption in desire, claiming that determinations of truth ought to be arbitrated by personal desire in each moment.
- Pragmatist theorists regarded desires as the ends for which all experiences are the means and thought that the truth of these experiences could be made conformable to the desires we seek to satisfy.
- This view of truth as the “cash value” of immediate use and of desire as the fixed good that determines cash value gets ends and means backward, for an experience is the fixed reality of any moment from which to choose possible goods; therefore, the goods we desire are malleable, not the experience that presents them to choice.
- A pragmatic orientation will accelerate choosing and prioritize immediate desire because it ensures premature closure, but it will foreclose considerations of long-term utility and habituate us to exploit every experience for immediate gain, thereby punctuating goods in the moment rather than as moral ends.
- Contemporary axiomatic confusions cause us to conflate knowledge and belief, but distinguishing the two terms will alert us to the role of premature closure in commitments to belief, for beliefs are always judgments corrupted by desire.
- Clearly, pragmatism qualifies as a belief-driven approach to experience, and it is consistent with the virtual circle orientation of postmodernism, which privileges private beliefs as true knowledge, though knowledge of a purely private kind.
- This orientation to belief is irreconcilable with premodernists’ respect for religious beliefs; though private, their intimations are mistaken by premodernists for trust in religious authority and so are taken to be publicly defensible.
- Given this confusion, it is unlikely that premodernist and postmodernist views of belief are reconcilable in the public square, though both indulge a premature closure.
- Religious belief is, however, a form of premature closure that reason may permit because some religious questions can neither be answered nor permanently deferred to an act of severance.
- This difficulty obscures the value of the act of severance to moral reasoning, which operates by a different principle from its operations in quality and utility.
- Morality is unique in requiring an act of severance before experience rather than after.
- Since morality definitionally seeks to establish systematic ends of preference in a train of experience rather than seeking goods within a single experience, it is clear that morality establishes its own utility of furthest ends for which all subsequent preferences of utility are only the means.
- This deferral implies that premature closure is incompatible with true moral freedom; the act of severance imparts integrity to the moral agent.
No one will dispute that human beings are choice-making animals. Our neurological circuits seem structured to construct, without our active awareness, a picture of reality from the torrent of sense data that pours into our brains in every moment of consciousness. It is naïve to take that composite to be reality itself, but we all think it anyway unless we are brought up short by some anomaly in the picture our brains present to us. Usually, we glide along in blissful ignorance of how much our mind adds to the reality we cannot directly access, for the operation of categorical rationality makes reality seem more comprehensible. We cannot avoid seeing reality in terms of quantity, quality, relation, and modality even if we have no idea what these categories mean or how they function. The human operating system does all that for us. This preconscious construction means that persons are inherently rational beings, served by a brain that makes reality easier to deal with (see “The Tyranny of Rationality”).
Why do that, and why disguise it from consciousness? Perhaps all this preconscious construction is designed to save us time, so we can move expeditiously to the real business of life: choosing the goods that experience offers us (see “What Do We Mean by ‘Good’?”). But here too we are given a shortcut. No sooner do we gain knowledge of the content of a particular experience than our brain gives us a second gift: it presents us with options for choosing. Slow down any conscious moment and you will find that by the time an experience finally rises to conscious attention, it has already been fully assembled into a simulacrum of the reality of that moment and has already been intuitively examined for its possibilities, which float up into consciousness as choices for our choosing. Yet again, our mind saves us the effort of laboriously analyzing experience to reveal what it can offer us. This natural freedom is fully as automated as consciousness itself, opening possible goods to our choosing and popping them into consciousness ready to be rank-ordered and considered, which is generally our first conscious response to experience (see “Our Freedom Fetish”). It really is impressive when you slow everything down to examine it.
That is my intent in this analysis: to make you fully aware of that boundary between two uniquely human mental operations that generally go unexamined despite their repetition in every moment of experience. I wish to isolate our determination of the truth of an experience from that of its possible goods and in doing so to inspect — and ultimately to recommend — the act of severance that separates what is nearly always a seamless and preconscious operation. It turns out that this marker figures prominently in a surprising number of important topics despite being almost universally unnoticed.
One way to understand the nature of science, for instance, is to see its history as a slow realization of the necessity of an act of severance in determining truth. Only by cutting off self-interest and bias, by refusing to consider what might come of the knowledge science gains as it gains it, by intentionally slowing down experience, can the scientist arrive at the truth which is science’s primary ambition (see “The Latest Creationism Debate”). Because determinations of truth are so intimately related to those of goodness in common experience, this lesson proved very difficult for scientists to learn. A strict methodology was gradually standardized over four centuries with but one purpose: to narrow the focus of a single experience so that it can be carefully observed and manipulated. What are the hallmarks of natural science but observation and experimentation: efforts to responsibly process an experience or carefully limit its variables prior to any regard for the uses to which that experience may be put? This process also requires great rigor in reportage so as to represent accurately the truth of the experience under investigation. Everything about the scientific method is an attempt to delay conclusions and to forestall considerations of use until the experience is thoroughly understood. It is ironic that the history of science is characterized by repeated efforts to deny this process, to use the same method by which empiricism finds truth to prescribe uses to which that truth might be put (see “The Limits of Empirical Science”). That effort is a skyhook and doomed to fail. What gives science its power is an explicit act of severance, a recognition that choosing goods cannot be made empirical because the search cannot rely on analysis of observable phenomena and careful reportage of data. Natural freedom infers possible goods from experience; they are not contained within it because they lie in the uncertain possibility of some utility in the future. Nothing about this potential is observable or quantifiable or inherent to the event. The experimental drug may cure cancer, but the notion that cancer ought to be cured lies entirely beyond the scope of research. We often hear that public policy ought to be data driven and guided by science, but though that effort surely can yield reliable data, the purposes to which that data is put cannot be so derived.
This problem was first recognized in the Enlightenment by David Hume, who noted that no prescription can ever be drawn from even the most perfect description. It must be imposed by some external process of valuation. Moral theorists summarize this as the “no ought from is” problem. Natural science only mastered this lesson during its slow professionalization in the nineteenth and twentieth centuries. It is a hard lesson to learn. First, this is not the way consciousness naturally operates. As noted, a determination of the truth in the moment is merely the means to grab its goods, and since we do this so effortlessly, committing to an act of severance proved difficult and unnatural. Second, empiricism certainly sees its own process of discovery as a real good, but when thought through, scientists learned that their process actually inverts the normal means-and-ends calculus in which truth proves only the means to goodness. Though non-scientists find it hard to accept, scientists only think their methodology good because it is a useful means to find the truth they seek, and if a new means to that end suddenly appeared, they would surely abandon their current practice. One of the most difficult truths that natural scientists have mastered is that their method of verification itself repeatedly cautions them to be modest, even tentative, in their declarations. No scientific proof is ever certain, and every discovery only uncovers more, and more difficult, questions lurking behind it. Every discipline has suffered through scientific revolutions in which the entire basis of its study has had to be revised. The need to avoid scientism, the temptation to expand science’s truth-finding powers beyond what its method warrants into the possible goods that might follow, has proved a most difficult lesson to learn.
The human sciences have never learned it. In the salad days of “natural philosophy,” when the methodology of the experimental sciences was itself a process of trial and error, the “science of nature” was thought to include “the science of man,” an empirical approach to human behavior. But from their beginnings, the human sciences abandoned the act of severance, for their ambition from the start was not only to understand human nature but to perfect it. This indeed has remained the unstated goal of those “sciences” that touch upon human will, from psychology to economics and from anthropology to education. Their history is the ongoing saga of trying to discover desirable behaviors by means of scientific inquiry. While this might seem innocuous and even admirable, the method is self-sabotaging. It reveals not only a crucial distinction from natural science but also the dangerous temptation of ignoring the act of severance and blending determinations of truth with those of goodness. The great mystery of human nature is the centrality of felt preferential freedom that derives from our natural freedom to see choice in experience (see “The Essential Transcendental Argument”). The black box of free will remains closed to empirical investigation, a profound mystery (see “The Determinism Problem”). The economist will fill that mystery with her own theory of capital formation or labor value; the criminologist with slanted observations of the justice system suited to her paradigm. The desire that marks all preference proves too difficult for the human sciences to resist.
This lesson only gradually became the accepted practice of true empiricism. The hard sciences employ their paradigms to standardize practice according to established evidence and data-driven theory, but in the soft sciences, paradigms multiply and no single understanding orders the disciplines as practitioners riff like jazz musicians on the findings of their peers. Theories proliferate as social scientists quibble, and progress stalls. Is it any wonder that the human sciences have utterly failed to produce the social progress that is their goal (see “The Calamity of the Human Sciences”)?
There are surely other reasons for their failure beyond ignoring a necessary act of severance, including the difficulty of gathering reliable data from test animals who are themselves not predictable and who are difficult to experiment upon for ethical reasons. But the human sciences’ dismal failure is further evidence that the act of severance is worth serious examination. The sciences show us that employing it facilitates finding the truth that is the means of realizing the goods we value and that ignoring it sabotages our effort to find both truth and goodness. Blurring the act of severance, combining it with a simultaneous and often preconscious presentation of options to preference, definitely speeds up our choosing and further automates it. Employing the act of severance slows everything down and is surely more laborious even if we are not working to empirical standards, but I intend to show that it is a worthwhile effort.
Because empiricism is fairly well understood procedurally, it makes a decent introduction to the desirability of the act of severance. Or, to put it negatively, it argues against a premature closure in which determinations of truth and goodness are made simultaneously. When we examine the species of goodness, we find premature closure infecting determinations of goodness everywhere.
Take our evaluations of quality, for example. We feel entirely comfortable with instant appraisals of aesthetics and relative merit. We look at a painting or a person, a movie or a defendant, and an automatic impression leaps into consciousness regardless of whether we understand what we are experiencing. This painting is a good one, that person is ugly, this movie is great, that defendant is guilty. When the jury first sees the accused, every member forms an immediate opinion of her guilt or innocence before evidence or legal niceties are understood. We may know nothing of art, but we somehow assume we know when a painting is good or bad (see “Three Portraits”). Really? These instant impressions are nearly meaningless snap emulsions of truth and goodness masquerading as judgments. We only come to recognize this when our empty preferences are challenged. If a differing opinion that is only another disguised inclination is declared, we feel entirely convinced it is wrong, which is just what the person hearing our opinion will think of ours. This kind of expression may do little harm because it is so easily dismissed, except in cases where such premature closure ought to be deferred to an act of severance. No one cares whether you think a movie is good or bad because what you are really saying is simply that you did or did not like it. But when you apply the same process to a defendant or a job applicant or an investment opportunity, which the intuitive use of premature closure will surely encourage, the stakes are raised. And when expertise in judgments of quality deserves our respect, the empty opinions that premature closure permits can do real harm (see “Expertise”).
The vast majority of our experiences concern another species of goodness, utility. We ask experience to offer us opportunities. We wish to make use of them to our own ends. If we are clever, we try to steer our experiences in the direction of utility so as to provide a wide scope of opportunities that we might exploit to satisfy our desires. Most of the thousands of preferences we make every day are assessments of possible use, and Western societies have developed a vast consumer culture to increase our options. This banquet of choice certainly seems well-suited to an operating system that privileges rapidity and automates experiences, quickly intuiting what is most useful to us in each one so as to exploit opportunities for gratification.
But here too, a little thought might raise a caution flag. For the problem of utility involves not only the number of possible preferences an experience offers but also the span of time in which they might prove useful. As any third-grader will attest, an education is the most useless thing imaginable, at least while one is in the third grade. Our desires are so often at war with each other that utility is not quite as easy to assess as premature closure might lead us to think. Do we want that jelly donut in winter or that beach body in June, the sweet surrender to anger in the moment or the count-to-ten patience that nourishes a happy marriage, the snooze alarm or the shower before work? Utility is not such a simple thing when we slow down the act of preference to think just a bit harder about our desires. And, as mentioned, slowing down preference, avoiding premature closure, is just what the act of severance allows because it ensures a clearer understanding of the truth of an experience before options are permitted to present themselves, giving us time to be more thoughtful about ordering them. Engaging the act of severance will allow us to make fewer snap decisions and more thoughtful ones.
But that will probably not be a problem if you practice the most common method of preference now in fashion. Since the turn of the twentieth century and the postmodern revolution it ushered in, a philosophy championing immediate utility called pragmatism, the “only American moral system,” has surfed a wave of worldwide influence to become a fitting partner for a cynical and consumerist society (see “Postmodernism Is Its Discontents”). When John Dewey and Jean Piaget supervised the dissemination of pragmatism into the new American high school curriculum at the beginning of the twentieth century, hopes were high that the human sciences could replace discredited religious authority as the source of moral consensus at a crisis point in Western history (see “The Victorian Rift”). This tipping point for modernist axioms of moral commitment had been a long time coming (see “The Axioms of Moral Commitment”). The contractarian model that had transferred political power to individuals had done nothing to restore the moral consensus that religious wars had sundered, and had in truth made consensus more difficult (see “Two Versions of the Common Good”).
The nineteenth century saw the first attempt to make utility a public morality with the development of utilitarianism, which at least was consistent with democracy in a way that autocracy could not be (see “Three Moral Systems”). And utilitarianism provided an opportunity to see public morality in a new way, as a private calculation of value which, when aggregated, might translate easily to a democratic process. And unlike the pragmatism that would grow from it, utilitarianism asked moral agents to delay preference until they could calculate the horizon of consequences that a choice might present to experience. At first, this was intended to affect all preferences. Persons were expected to calculate the amount of pleasure a choice would provide minus the pains that might accompany that pleasure in a “hedonistic calculus” that would advise each choice. When such calculations were aggregated, legislatures might apply a similar reckoning for all citizens. The clumsiness of this method, performed for thousands of preferences daily, presaged the utter failure of utilitarianism as a generalized moral method, although it still has its adherents. Attempting to employ it very quickly reminded persons that it was more trouble than it was worth, that gratifying instantaneous desire would be quicker and only a little more hazardous to our ambitions than the more introspective and demanding utilitarian calculus. Enter Dewey at the turn of the twentieth century with a streamlined, high-speed version of utility suited to this new age of very rapid change.
What Dewey and the other proximate inventors of the method, the psychologist William James and the chemist-philosopher Charles Peirce, proposed, and what Piaget, yet another psychologist, popularized in public schools, was a kind of utilitarianism lite. What, asked Dewey, makes an experience true? Our involvement with reality is true for us only insofar as it facilitates our growth and development, when it opens other options to choice, and when it allows us to get what we want. Truth, he said, was “the cash value” of experience, merely a device you could use to secure what you desire. Such an elastic view of reality fully acknowledged and endorsed our proclivity to automate choice, not opposing premature closure but elevating it as a guiding principle, making the most immediate use of a situation its criterion of value. After all, who can predict what changes will alter future consequences, and if these cannot be known, why slow down choosing? Why not speed it up to satisfy more desires? This new “science of social adaptation” permeated American primary and secondary schooling through newly established colleges of education and from there went on to dominate capitalist cultures everywhere (see “The Problem of Moral Pragmatism”). Rather than delay the preference of goodness until an experience is fully understood, pragmatism advised that we arrange our comprehension of experience so as to facilitate any possible goods of the moment that we might clutch, coloring our subjective reality with the hues of our desire so as to satisfy them most conveniently. Pragmatism by definition sees life as too fluid and changeable to anticipate and consequences as constantly rejiggered to satisfy present desire. Whereas utilitarianism had benefited from the status of its champions as brilliant philosophers, its successor could boast of being the product of psychological science, indulging what it claimed to be the neurological structures of the human mind while lacking all empirical knowledge of the brain.
In truth, pragmatism was anything but a science and could hardly be called more than an instinctive response to change, an adaptation to environmental challenge that gave full sanction to premature closure. It was unavoidably distortive of experience, for if we enter the moment with the expectation that it will conform to our will and provide us with what we want in that instant, we will most surely cast it as we wish to see it, disregard what confounds our hopes, and so contaminate the outcome in favor of the desire that moves us to immediate preference. This sort of pragmatism is recycled every generation or so, most recently as “the law of attraction,” the counterfactual assumption that deeply held desire will magically produce hoped-for consequences.
A pragmatic worldview makes two related errors: first, it takes desires to be the ends for which all experiences are the means; second, it assumes these means can be made changeable, plastic, and private to facilitate securing our desires. This approach to experience views desire as the fixed part of the experience and reality as conformable to it. In Dewey’s words, the truth we draw out of reality is its “cash value.” Experiences are merely the means to get what we want in the moment. This is backward. The true variable in experience is not the reality we face, for it will not submit to change until we understand where we may insert the lever of our desire so as to cause change to happen. No, the real variable, the element in experience we may control, is our own desire, which is invariably of our own invention and at least potentially under our preferential control. We can concede that pragmatism is a flexible and intensely personal way to view desires without conceding in the slightest that it is a desirable way. By elevating premature closure, it hinders our fulfillment rather than hastening it, but that realization will likely escape its practitioners, who are moved to ever more frantic exertions to adapt to the untoward consequences it ensures.
While examining the interplay between desire and reality, I should also discuss a term related to but significantly different from pragmatism. Regardless of what we think of premature closure, there is one kind of response to reality in which we simply cannot avoid it. That area involves the proper employment of belief. This is a topic rife with confusion, for we have no clear understanding of the nature of belief, nor of how it differs from knowledge or opinion (see “Knowledge, Trust, and Belief”). To further confuse matters, we often see pragmatists refer to their desires as beliefs, and epistemologists sometimes define knowledge as justified, true belief. We don’t have to step into this epistemic morass now and can skirt terminological confusion by defining belief as judgment corrupted by desire. Whether you declare that you believe in divine providence or that you locked the car, you are conveying a mixture of private opinion and hope, derived partially from experience and partially from ignorance. If you knew, you would not have to believe, and if you believe in spite of what you know or can know, you are indulging in willful ignorance. We all know we are going to die, but we refuse to believe it in an act of self-indulgence.
Now this is a fraught topic in contemporary life because our axioms of commitment are so scrambled. Premodernists are great defenders of the universal truth of their beliefs, confusing them with an earned trust that might justify them, and are entirely willing to impose their alloy of opinion and desire even on those who believe differently (see “Can Religious Belief be Knowledge?”). Postmodernists, ever alert to hypocrisy and bad faith, are quite willing to point out the hypocrisy of those who do not know and yet claim absolute knowledge or, worse, who prefer their beliefs to scientifically demonstrable judgments. But postmodernists suffer their own difficulties with beliefs, since their often-unconscious adoption of a virtual circle mentality elevates every kind of belief to indubitability, though only for themselves (see “What Is the Virtual Circle?”). Both premodernists and postmodernists are devotees of belief, though of different sorts and for different purposes, and the axioms of commitment both employ in pursuit of their own version of the good allow beliefs to exert a premature closure on the possibilities of knowledge. Both deny the act of severance in favor of their own desires. In this respect, they differ not at all from the pragmatists who color every experience with whatever they desire and disregard the possibilities of true knowledge that an act of severance might offer them.
But even the most careful thinker will come to the point where knowledge is not enough, and one of the most difficult epistemic puzzles we all face is to know where that marker lies in some of life’s most important questions. Yes, it does no harm to toss our beliefs about aliens or future events, about ghosts or Bigfoot, into the public square. We can’t know the truth of such matters and, so long as we avoid tinfoil hats and moonlit cemeteries, these open questions can be opined upon without any harm. But admiring our own inventive powers of belief ought not teach us to prefer such empty beliefs to knowledge when knowledge can be found. The self-indulgence of climate deniers, anti-vaxxers, and Dark Web conspiracy theorists is facilitated by the premodern and postmodern axioms of commitment that so privilege a wholesale profession of indiscriminate belief. This is the self-indulgence of committing to impermissible beliefs, which lies at one extreme.
At the other is an abstention from belief when it ought to be indulged. How long do we abstain from religious commitment in the absence of religious knowledge, for instance? Given the enormous difficulties of justifying religious faith, we are tempted to think agnosticism, a simple refusal to decide on core questions of religion, demonstrates an admirable restraint from premature closure (see “Religious Knowledge as Mobius Strip”). We can hardly choose the good in an experience of the divine when we have no idea of its nature or truth (see “Awe”). How can we reach the point where an act of severance is warranted, where we can look at core religious questions and say, “Now we have sufficient knowledge to offer a reasoned judgment on religious experience”? The agnostic will correctly say that moment will never arrive, but I think her wrong to conclude that an abstention from belief is therefore warranted. William James was a founder of pragmatism, a deep distortion of the relation between truth and goodness, but his advocacy of premature closure was absolutely on target for resolving the seemingly impossible paradox of when to commit to religious belief. Here is the moment when desire ought to steer experience, for the simple reason that abstaining from desire in this question is tantamount to dismissing a vital question of human existence entirely (see “The Lure of the Will to Believe”). If we wait until an act of severance is warranted in such matters, we will defer the issue permanently, which is synonymous with deciding it is not worth our attention. In experience, agnosticism expresses itself in behavior that cannot be distinguished from atheism. For a very few of life’s questions, an uncompassed doxastic venture seems not only possible but desirable (see “To What Extent Can Uncompassed Doxastic Ventures Guide True Moral Commitments?”). Of course, this rare employment of premature closure will tempt persons in the present environment to privilege all beliefs and all pragmatic choices, which is the far greater danger to preferential freedom.
To this point, I have concentrated on the interplay of experience, desire, and judgment and have generally advocated an act of severance as the means to make judgments of experience more dependable. But this is in one sense a distortion of preferential freedom. It is true that we engage this process constantly in every moment of experience, but it is also true that we do not live only in a series of single experiential moments. The most important occasion for engaging an act of severance is not present experience but future ones, and this must focus our attention on the most important species of goodness: morality.
Morality is a unique kind of goodness, part utility, part quality, yet something more. It may be defined as a quality that characterizes a utility of furthest ends as a systematic and final end of preference. Morality can be thought of as the ultimate preference. That this end be systematic is implied in its finality, for a variability of ends cannot be ultimate, nor can it systematically guide preference. And these two qualities of morality — as a systematic preference that is the end of all others — have particular resonance for the act of severance.
So far, I have tried to view the act of severance as a delay in choosing, as a punctuated pause in any moment of experience dividing our judgments of its truth from our preferences of the goods that truth offers. This suggests that the act of severance makes goodness choices only after truth is discerned. But when we engage our moral preference, when we seek a final end that directs all other preferences in experience, we do it before confronting any particular experience. Now this anticipatory choosing obviously cannot precede all experience, for if it did it would tap an a priori moral knowledge that we do not possess. We are only potentially moral creatures, and we cannot know what ends are worth choosing without a trial and error process that involves lots of experience. At some point in the train of utility, as we grow to understand ourselves and our world, we may come to realize that so long as immediate utility is the goal — meaning so long as present desire determines preference — we will enslave ourselves to our own inconstancy. While this serves pragmatic materialism, it cannot allow a moral existence because it is neither ultimate nor systematic. Rather, morality by definition establishes its own utility of furthest ends: an ultimate goal as an end for all experiences which establishes the utility of each experience thereafter (see “The Utility of Furthest Ends”). Even during the immature phase before a moral end has been established, the act of severance will serve us well, for it assists us in a more dispassionate appraisal of our desires that cannot fail to uncover their capriciousness. Nothing in preferential freedom mandates moral responsibility, but if it should occur — in defiance of materialist pursuits and axiomatic dissonance in contemporary life — moral freedom will mandate an act of severance that seeks the good not in any one experience but in all of them, that sees experience less as event than as progression toward some end that our own moral agency defines not in any one experience but in preparation for choosing in all of them (see “What Do We Mean by ‘Morality’?”).
Premature closure is incompatible with true moral freedom, which is indistinguishable from moral responsibility. The act of severance that enables us to define a truly moral preference subordinates what is desired to what is desirable and in doing so, frees the moral agent from the tyranny of premature closure and imparts a unanimity of intent captured by the word integrity.