To know is to have a grasp of the truth warranted by a rational consideration of a preponderance of evidence. That definition packs lots of meaning, and unpacking it leads to lots of uncertainty. The first layer requires us to see the necessary rationalism in any claim to knowledge, consisting as it does in comparing a unique element of experience against a categorical definition (“The Tyranny of Rationality”). To even begin that effort, we must isolate some nugget of experience from the ordinary flow of life as an act of analysis in itself. That only begins the inspection. Then to say “I know” is to imply “by my reasoning,” because no matter what we think knowledge to be, this claim for the element we’re examining must meet that definition, so the process of classification, of including or excluding a truth claim as knowledge, cannot be other than rational. To warrant that claim is yet another rational act, for no matter what kind of justification we admire, our claim to knowledge must meet it, and that is a judgment of quality that must appeal to the mind (see “What Counts as Justification?“). Another judgment is required to consider what qualifies as a “preponderance” of whatever we think to be “evidence.” With all that reasoning, we should find it unsurprising that we disagree about what we know, since reasoning is a pit of error, but one thing we cannot dispute is that only reasoning can pull us out of that pit and into knowledge. Lots of thinkers have thrown us lines to pull us out.
Plato considered knowledge a kind of remembering of the ideal forms of things through reasoning about their representations in reality. By this schema, “truth” is an absolute quality of the ideal and “knowledge” our imperfect grasp of it. Aristotle thought we reached these representations differently: through multiple exposures to individual things that conveyed something of their common qualities to our minds. That distilling process allows conceptions, the skeletal structure of all knowledge, to take root, to be fleshed out by context. Regardless of how we approach it, all thinking about knowledge must argue for a dualism. This bifocal view, in which we seek for our understanding to reflect, however imperfectly, some other thing both real and elusive, is a constant feature of theories of knowledge, though epistemologists differ about what that elusive something is and therefore about how it is to be claimed as knowledge. The judgment that knowledge must always be an imperfect mirror of something else allows us to accept a lower standard of perfection in claiming it. It is almost never self-evident, certain, or Truth. Considering our truth claims to be judgments allows us to test them against a mental yardstick, to hold them as more or less reliable, from a very few certainties to tenuous judgments warranted by such weak evidence or such spotty reasoning that they merge with empty opinion or expressions of taste. Using such a “scale of judgment” opens our claims of knowledge to continuing tests of better evidence or better reasoning. Because our claims are always provisional, we should not grow overly attached to the things we think we know, for we may have to revise them tomorrow. So to say, “I know this to be true,” is also to imply, “But I may be proven wrong.” Though anxiety-producing, such tentativeness keeps our eyes open to error and to the revision it should provoke.
That is a whole lot of complexity, but it is merely the prelude to the thing we really want: we must always consider our knowledge of the true to be only the means to the more challenging end of choosing the good. Seen in this light, all that thinking as we pry into truth is only a lever we can use to force some goodness out of experience, a means to an end. And that second effort requires yet another round of reasoning, this one made even more difficult by the problem of specification, the purely conceptual nature of the three categories of goodness: utility, quality, and morality (see “What Do We Mean by ‘Good’?”). The reality that produces the goodness we seek is not the reality in which we choose it. It lies in the future. We intend for some present moment to produce a change that will make a future moment better, but even in the most optimistic interpretation, we cannot know that what we intend to occur actually will occur, nor can we know that it will produce the good we desire if it does.
If this process of rational analysis sounds complex and exhausting and impossible to complete with success, let me add three further challenges that carry it to the edge of the absurd. First, it is something we do thousands of times every day. Second, it is the thing we do that most marks us as human beings worthy of dignity and respect. It is the primary human occupation. We choose. We choose the true and from that we choose the good. Over and over and unremittingly. Third, though each choice is a little maze in itself that challenges judgment, choices accumulate in a fractal web of consequence and responsibility that maps out a life wasted or well-lived. Our preferences direct us and ultimately define us as one unique being among billions, yet the common thread in that infinite variety of patterns of life is that we all choose.
Clearly, that is a very hard thing to do well because it requires such fine discrimination and so many separate acts of judgment that have to be fitted to the complexities of the moment, the weightiness of the good being sought, and our own attention span. Rather than choose an example, allow me to attempt to capture the concept of what is happening in that most important of human concerns: the act of preference. To do that, we must slow the process way, way down and seek its categorical essentials.
The first duty of choosing is comprehending the reality we face. That effort relies on perception and reflection to construct a mimetic representation of the experience we enter. We seek to know what it consists of. Our rational nature inclines us to synthesize reality so as to clarify the relevant relationships among the discrete entities experience presents to consciousness. The mind presents a rationally constructed picture of reality from the data stream of experience, picking and choosing perceptions from the barrage that assaults our senses. From this raw clay, our reasoning preconsciously constructs the knowledge of our environment, what we naively call “reality.” This process is invisible to us, for we are not aware of our mind sorting sense data to build some construction of its bits and pieces, wrapped as a present to our conscious reasoning. Stop reading for a moment and focus on only one sense. Other candidates present themselves to sight, yet you choose to see or ignore them. Other sounds are available to hearing, but until you attend to listening, they are mute. The mind directs its intentionality to the task at hand and effortlessly constructs a picture of reality that begins the progression of consciousness. This is choosing about which we know nothing. We have no more access to the raw data of experience than we do to the data stream that activates our computer screen when we first turn it on. A picture simply appears from somewhere, which is an even more amazing thing about consciousness than it is for computers. Once that mimesis floats into consciousness, the mind can work on it in its first conscious act of reason, but consider that it has been working behind the scenes already in assembling what conscious reasoning is about to take apart by analysis to begin the process of comprehension.
It may not surprise you to learn that the mind wants to continue this autonomic task, to act on autopilot, to present not only the picture but also the meaning of it. Let us call this temptation premature closure, the urge to continually automate the tasks of judgment.
Premature closure tempts us to oversimplify that picture in a continuation of the way consciousness oversimplifies the sense data pouring into the brain. Persons suffering from hypersensitivity evidently have a very loosely woven filter that allows too much data in. Much more common is the opposite tendency: we assume we know what we are dealing with before we understand it. Premature closure weaves that net of perceptual awareness too fine, forcing what ought to be an intentional act of awareness into a continuation of the intuitions that begin it. We may ignore the exigencies of the moment or consider some past picture as the reliable pattern for the present one. The problem is complicated by the autonomic functions of habit and sense-data discrimination that simplify preference, sometimes dangerously so. Certainly, our prior experiences color future ones, but that realization must move reason to caution us against automatically assuming that we understand what is and must always be a unique context, for each moment of experience differs from all others in ways that might not be obvious. Our search for truth in experience is always challenged by the uniqueness of the moment, but we won’t know that unless we force ourselves to be alert to the anomalies that make this moment like no other.
Immediately after forming the conception, we begin structuring our options for choice, lining them up as candidates for our manipulation. This is natural freedom at work, and it characterizes universal human nature. The freedom consists in our framing of possible goods from our construction of the picture we think we see. It is astonishing that this natural freedom presents its choices without mental effort. My mind simply presents options once it thinks it has grasped the situation. But the free ride may end at this point, and the work of conscious reasoning must begin. What follows pretty quickly is another rational operation that ranks our choices by preference according to whatever standard of value we employ at the moment. This is preferential freedom. Whereas natural freedom is a gift of human functionality like the senses that frame it, preferential freedom is at least partly volitional and can be interrogated and cultivated. By whatever standard of value we use, we lean toward the good. Acting on preference is a third operation that employs our circumstantial freedom. Of the three freedoms employed in choosing, only the first is concerned with truth. The second and third invariably move us toward goodness as a consequence of the determination of truth (see “Our Freedom Fetish“). None of this description should be taken to prescribe any particular standard of either truth or goodness. It is intended to outline a process that seems only partially under the control of conscious reasoning, though like computer code, it also seems rational through and through. Why struggle to figure it out? Because we must choose the goods we value in each moment. And this responsibility also presents its temptation to premature closure.
Since knowing reality only sets the table for a banquet of options that our nature automatically presents to preferential freedom, or to put it more formally, since learning a truth is only the means to the goodness choices that follow, we face even greater temptations to premature closure as natural freedom operates on the reality we think we know. On what grounds ought we to choose? How should we rank order the options for choice that every experience presents? It is entirely understandable that many persons will find that a meaningless question, for to their way of thinking, the mind chooses its goods as effortlessly as it presents them to choice. No effort is required to see opportunity in context, and, they say, no effort either to choose it. They view desire as having already rank ordered that choice, presenting it to preference in an intuitive act fully as natural as presenting the range of options itself. So they view preference as equally robotic. We don’t have to work to see choice, and we don’t have to work to choose, they say, because desire works on the unconscious mind to do that for us. And despite the admonitions of the Buddha, no one is free of desire. So an option is good because it tops the list of available options, because for reasons lost in the murk of the preconscious, we want it. No one has to work very hard to feel that pull. Perhaps we ignore the whisper of experience telling us that we ought to revise or reverse the preferential operation that says, “It is good because I desire it,” for only a little experience reminds us that desire on autopilot often steers us wrong. But strong cultural winds often blow that warning away (see “The Problem of Moral Pragmatism”). Again, premature closure operates to distort judgment, if not of the true, then of the good.
Allow me to highlight two moments in this operation when desire might operate upon preference in ways that thwart our own goals. The first occurs at some point in the chain of experiences when we look up from the everyday and squint into the future and determine what long-distance goals we desire. Without reference to any particular choice, we might construct conceptual goods that we wish to pursue. We admire them in the abstract, quite apart from their presence in any moment of experience. Our admiration for them as ends moves us to decide that these are goods worth procuring, not to pursue any preference in the flow of experience but rather because they exist as some future goal. We decide that this something is worth our effort. Perhaps we wish to be rich one day or see Jesus in heaven or become the most popular person in the class. These goals do not arise in the hypotheticality of experience, in the flow of sifting a moment for use. Rather they direct the flow in advance as a categorical desideratum to which momentary preferences are subjected. We decide a good is worthy of pursuit not as the context of this experience but as the directing intention for some or all.
This categorical pursuit of what we think good can, obviously, be biased by all the temptations of premature closure in comprehending or contemplating those moments of experience that have caused us to think it desirable. But a categorical goal also can actively bias the experience it confronts after being considered directive. Confirmation bias may slant either operation so as to foreclose a critical examination of experience or the choices we think it presents. It is possible for a prior categorical commitment to color what perceptions provide so as to deliver an opportunity to pursue the good we have committed ourselves to. But notice that this kind of yearning is disturbingly similar to what happens in the moment as we mine experience for its more immediate goods: a distortion of reality we are not conscious of to facilitate a good we earnestly desire. Because these operations may take place beneath the threshold of awareness, we may commit this error again and again without realizing its cost to our life projects. This ignorance is unlikely to produce bliss.
It occurs because we ignore the act of severance required for us to commit to determining the truth of an experience prior to ordering it for hypothetical or even categorical preference. This is a very efficient way to ruin the act of choice because it emulsifies two sequential and separate acts of judgment into a single expression of desire. Because we want this experience to yield this good, we bias our comprehension to produce it. This mistake speeds up what ought to be a separate and later act of preference, allowing our desire to predetermine not only what we choose but also the nature of the situation that allows its choosing. We enter the experience determined to have it end with our achieving a desire. Why is that a bad idea? Because we cannot know what options reality offers us until we understand it. Only when we grasp its unique context, when we extend our perceptions to reason on them competently and then engage in honest reflection upon the options this unique moment offers, only then can we choose according to whatever categorical or hypothetical values we prize. Because categorical goods, the ends that direct future experience, can only grow from present experience, the need for the act of severance required to distill them is greater than it is when seeking ordinary utility by hypothetical reasoning. And just as this bias operates to prejudice categorical goods of quality and morality, it operates to slant hypothetical goods of utility. This violates the essential act of severance that has us knowing the true before choosing the good. To emulsify judgment with desire, to see what we wish to see rather than what reality offers us so as to get on to the goods we desire: this kind of premature closure defines one kind of mental operation. Its signature element is a confirmation bias based on desire. It has a very familiar name: belief (see “Can Religious Belief Be Knowledge?”).
To believe is already to be invested, already to have taken sides: quite the opposite of the dispassionate, ratiocinative process required for judgment to produce reliable knowledge of truth, either categorically as an overall director of experience or hypothetically as a judgment in its flow. Belief ignores a necessary prelude to seeking goodness. To judge is to approach experience with an open mind, to construct a mimesis of reality with care and attention appropriate to the moment, to inspect the resultant options, and only then use that construction to choose the good. The process ought to be rational through and through. Because belief automates and directs preference by desire, it corrupts hypothetical choices of simple utility. We want what we want because we want it, because it serves the immediate end of closure, because it eases preference. Because we choose constantly, we can course-correct the unwanted consequences of this desire-reward impulse when we find our motives thwarted. But if we do that often enough, we look back at the crooked trail of preference we’ve indulged and seek a larger-scale map that makes sense of our frustrations. The true danger of belief comes when we attempt to indulge the same desire to guide categorical choosing. Belief does its greatest damage to one kind of categorical good: morality.
A categorical good is moral if it meets a definitive set of conditions dictated by the inescapable rationality of preferential freedom (see “What Do We Mean by ‘Morality’?“). Moral goals have three essential qualities. First, they are ends in themselves rather than means to other ends. Why pursue riches? We can only suppose it is to achieve something for which riches are the means. If so, then that something is the proper moral goal rather than the riches that make it possible. Morality is the categorical goal that backstops all others. When we run out of answers to the question “why prefer this?” and our answer is “because it is good in itself,” we have reached a moral goal. Second, the categorical end provides a reliable guide to preference, directing it without contradiction. Consistency is the most fundamental rule of reason. This implies that moral goals must be systematic. Otherwise, they would force insoluble conflicts in directing hypothetical choice. Finally, a moral end must be inclusive. It should direct preferences not only in one area but in all, so as to give reason to choices not only of morality but also of quality and utility, which are in the final analysis the means to moral ends. Notice how all three of these requirements rely on conceptual rationality, and this seems apt because the entire fabric of felt human freedom is woven of threads of rational preference. It is no accident that this quality guarantees persons the dignity that is the root source of all rights and responsibilities.
So what happens when premature closure of categorical ends is biased by the desire inherent in belief? We are often told that people will believe what they want to believe, and this throwaway comment is quite literally true. We are attached to our beliefs. We prefer them to beliefs professed by others. We don’t judge our beliefs to be true; we desire them to be true. A perfect example of the difference in the two terms is our reaction to our own mortality. We know we will die, but most of us refuse to believe that we will because we have no desire to die.
The employment of belief combines, accelerates, and distorts the operations of natural and preferential freedom into a single act of choice. It combines what should be separate stages of rational determination into one operation in which the recognition of truth is arbitrated by desire. This speeds up the often laborious process of comprehension, of course, so as to filter out discordant notes that might indicate anomaly, but automating it, allowing desire to top the list of choices natural freedom presents to the mind, inclines us to structure the reality we face so as to facilitate the preferences we are simultaneously engaging. Blinding us to anomaly means crippling our capacity to recognize truth as an instrumental means to the very goodness that we seek.
We can render more or less of this process conscious and intentional. Employing belief as a habitual means of discrimination renders almost the entire operation as autonomic as the assembly of sense data that begins it, moving everything along by a smooth flow of intention down a customary channel of desire. Bias moves preference without revealing anomaly and without serious, conscious consideration of the reliability of either the reality or the goodness it presents to choice. Alternatively, the entire process might be made more intentional, which certainly slows choosing down as we seek out the errors in our own judgments, correct them, and then attempt to apply a categorical moral structure to unique context in a search for consistency. But note that one process is laborious and the other speedy. One is tenuous and anxiety-producing, the other soothing to desire. If beliefs are strong, they warp perceived reality into a pleasing shape, eliminating by an act of will the act of severance and conscious reasoning. Why prefer the hard road to the easy one, particularly for what is without question the activity that occupies nearly all of our waking hours? I hope the answer is obvious. Belief offers a false utility, a shortcut to choosing that promises what it cannot deliver: a broad highway taking us to the fulfillment of our desires.
Since desire can enter preference before experience, either as a categorical end adopted as a conceptual goal or as a hypothetical act of premature closure that violates the act of preference, we ought to examine why belief is even more destructive to categorical ends than to hypothetical ones. If we are determined to have some categorical belief guide experience, we are seeing it as a worthy end, presumably by a reasoned allocation of value. But this presents a problem to the process that must arbitrate it. Belief is necessarily warped by desire. If it tends toward premature closure in hypothetical reasoning about a single experience, what might we expect from making it a categorical guide to reasoning about all? Can biased reasoning, the very definition of belief, be relied upon to produce worthy ends for preferential reasoning to pursue any more than it can reveal a serviceable utility to hypothetical ones (see “The Utility of Furthest Ends”)? Allow me to put the question in more prosaic terms: can beliefs form reliable moral ends? Since both the process of preference and the moral sense are so deeply rational, beliefs that compromise their operation by introducing bias seem hopelessly at odds with the requirements of moral agency itself.
I have slowed the act of preference to show how belief distorts our grasp of experience. That distortion can be an advantage when reality is deliberately contorted by, say, the creation of fiction. Coleridge’s advice to engage in “a willing suspension of disbelief” when entering the world of imagination turns judgment on its head to remind us that fiction is something entirely different from real life (see “Tall Tales”). While it is true that suspending disbelief is not quite the same as embracing belief, both require a lapse of critical judgment so as to open oneself to ready commitment. One of the great joys of fiction is the release of self-discipline that allows us to immerse ourselves in imaginary worlds bound only by the consistency of their own premises. But the heady temptation to transfer belief from fiction to life is a fool’s errand. That emotional swan dive was a hallmark of the Romantics and still exerts its reckless charm, but it does little to help one face facts or even find them in the hurly-burly of ordinary life.
Our confusions are not new. Plato famously defines knowledge as “justified, true belief” in the dialogue Theaetetus, and that definition has stuck. But as Wittgenstein pointed out, one can never know whether one’s beliefs are true, and the definition leaves the degree of justification open. Perhaps that is because the clear intent of Plato’s dialogue is to show that such a definition is not sufficient to the task of defining knowledge, but then Plato never gives a better one. Perhaps he wished to emphasize by his use of belief the desire that we should feel for knowledge in general, but that same attachment must compel us to withhold our judgments until we have examined them by the best reasoning we can apply to the best evidence available before claiming knowledge as true by a preponderance of both.
Ordinary language gives us clues about the nature of belief. For instance, we use the phrase “believe that” to indicate a single act of commitment to the truth of a proposition we yearn towards. What do we mean, then, when we say we “believe in” magic or love at first sight? Certainly, that same mixture of truth and goodness applies. We believe in what we yearn to be true. But “belief in” implies a broader scope of commitment. To believe in Santa Claus is also to believe that he lives at the North Pole, gives gifts to good children on Christmas, and has a reindeer named Rudolph. “Belief in” subsumes a whole range of truth and goodness claims into one package, tying them together with various cords of desire and logical consistency, only adding to the twisted quality of the claim’s warrant. This only becomes apparent when we string together incompatible sets of beliefs that strike us as discordant or even ridiculous. Why is it odd to say, “I believe in salvation by grace alone and alien abduction”? This gets even stranger when beliefs are thoughtlessly mingled with judgments appealing to public warrant. To say, “I believe in romantic love and the scientific method,” should remind us that the things we believe and the things we know are separate categories of experience. But that does not seem to be the common view. In 1951, Edward R. Murrow began a radio program called “This I Believe.” It recorded celebrities giving heartfelt testimonials to long lists of life lessons that invariably began with statements of belief, nearly always mingled with commonplace truisms and quirky stabs at originality. The program was recently revived for public radio, but it seems the intervening decades have improved the sound quality of the transmission far more than the thought quality of the speakers.
The hard edges of reality should discourage a too-ready reliance on belief. We recognize that in our colloquial usage. If I ask if you locked the car, and you reply that you believe you did, I will conclude you intend some doubt, but also that you project some desire that you won’t have to go back to check on it. Should I reply that I trust that you did, I am indicating that you have consistently locked it in the past and so I can be confident you’ve done the same in this instance.
Trust and belief are fundamentally different mental operations. Trust is a single, publicly defensible transfer of agency to a personal or institutional authority justified by past experience (see “What Counts as Justification?”). It involves a prior judgment of truth that then moves a preference. We recognize that dichotomy when we say, “Trust must be earned.” We see it when the prison system selects trusties: prisoners whose past model conduct gives confidence they will abide by the rules. The implication is that we do not bestow our trust until we see some experiential pattern that convinces us of a truth in reality: that some future action will prove consistent with some past ones. This is a simple rational judgment, the kind we use moment to moment. Should we be asked why we have given our assent, we would be able to explain the prior incidents that merited it. Such a conviction relies on what is an admittedly weak warrant, undistilled experience, but since we use it so often, we will not quibble. We trust it. It is more than a wish, hope, or desire. To extend trust is a fairly momentous act, for giving it recognizes an authority that relieves us of our primary human operation: the exercise of our own agency. We transfer it to the authority that we trust to make wiser decisions on our behalf than we could make ourselves. This might produce more unease in adults if they had not all experienced just this kind of operation in childhood. So we all know what it is to trust. For adults in contemporary life, the extension of trust is tentative. We are not comfortable giving up the agency that gives us our dignity and rights, and only a frequent inspection of our own forfeiture allows us to continue the surrender (see “Authority, Trust, and Knowledge”).
In personal affairs, a continuation of trust over time becomes so automatic and so soothed by confirming experiences that, should it be reappropriated and inspected for validity, it would be found to be something far more dependable than the mere expectation that some future behavior can be predicted from a few past instances. Of course, human behavior is far too capricious for anyone to claim full knowledge even of oneself (see “Expertise”). Because today’s axioms of moral commitment have so damaged trust, personal authority today more resembles an alternating cycle of inspection and surrender as trust passes through doubt and renewal. Such active sanction does at least have the advantage of producing knowledge even if it corrodes trust, depending on whether inspection confirms the surrender of agency or questions it. If the latter, authority is rejected.
Belief is quite a different animal. To believe is to project faith into the future, to imbue an experience with something of ourselves, to find this projection in congruence with others of our values and beliefs. Such coherence would in all likelihood be difficult to disentangle from the hopes we pin to our declaration of truth or goodness, for its confirmation lies in the future rather than the past and is conformable to desire. The etymology of “belief” implies a level of hope for a desired outcome about which one must assume some doubt. This prejudice is quite different from the dispassionate reasoning process involved in extending trust. When the term is used properly, it indicates coherentist opinion justified by the principle of non-contradiction. Not that there is anything wrong with that warrant when properly employed. We live in hope for a better future. But it would be wrong to call any belief a kind of correspondentist knowledge, wrong to equate it with an earned trust, and most of all wrong to use it as a foundation upon which to base future judgments of truth or goodness. Belief truly is that fragile thing with feathers. The difference in meaning between the two terms should give us sufficient incentive to use each one properly, not only to communicate to others the level of confidence we bring to our declarations, but even more vitally to recognize it ourselves (see “Religious Knowledge as Mobius Strip“). The issue is vastly complicated by the religious associations persons bring to the use of belief. The same word used to express doubt about your car keys becomes a profession of the deepest commitment when used in a credal sense. The same prejudicial longing for the evidence of things not seen defines religious belief. Kierkegaard observed that if faith were knowledge, it would no longer be necessary to believe. I know the Pythagorean theorem is true. I have no need to believe it.
We may repair the terminological confusion by applying a finer filter to issues of public knowledge and private belief than we are accustomed to seeing. This requires finding markers in that foggy frontier between knowledge and belief. We find beliefs permissible so long as they do not contradict our knowledge. It is permissible to believe that aliens live on Titan. It is likely not permissible to believe they live in your tool shed. A stronger confidence may be expressed by demonstrating a belief to be logically entailed, meaning an acceptance of its truth is mandated by prior knowledge even though we cannot use that knowledge to examine the belief itself. That an orphan child had a mother is a reasonable inference that is entailed by what we know about mothers and children even though we may have no knowledge whatsoever about the circumstances of the child’s birth. It is tempting to think logically entailed beliefs to be real knowledge, a judgment that is surely defensible so long as we think of logic in the universal sense we apply to geometric theorems or formal reasoning.
Unfortunately, the axioms of contemporary culture allow reasoning to be as spineless as an earthworm and as personalized as tastes in fashion (see “The Axioms of Moral Systems”). Conclusions entailed by a postmodern sense of subjective reason are indefensible in the public square, and so must still be considered as beliefs rather than as true knowledge in that venue (see “What Is the Virtual Circle?”). No such qualms should attend those judgments that are justified by such proofs of judgment as expertise or empirical science. Such declarations have risen above beliefs and may be reliably touted as knowledge, though like all such claims they are subject to revision through better reasoning or evidence. We typically believe permissible claims, believe or trust logically entailed ones, and know justified ones, though such neat divisions are challenged by the contemporary axiomatic disarray.
Proper use is an indication of clear thinking and a readiness to engage appropriate justification. Just as the correct use of judgment or opinion indicates the speaker’s confidence in warranting her declaration appropriate to its means of verification and communicates to her listener a clearer notion of her thinking on the subject, proper use of trust and belief will force at least some interrogation of the speaker’s own warrants for declarations that sit, at best, at the frontier of her knowledge. And this has very strong implications for public morality, for the many strands of desire that project beliefs as supposed moral goods must be both privately derived and privately warranted. This confronts religious belief most strongly, for many persons think their religious morality sufficient grounds for public trust. Such thinking ignores the serious justificatory differences between belief and trust in religious institutions, not least because trust requires a surrender of the very moral agency that belief holds as its sole source of validity. Persons who say they trust the Bible as a guide to moral behavior cannot also impose their own interpretations of its divine commands on others who believe differently. This has proved such an impediment to orthodoxy that institutional religion has found itself forced to either sanction private interpretations as revelation, effectively eliminating the possibility of using the Bible as moral arbiter of public dispute; set up some authoritative interpreter of its moral demands, leading to Reformation-style clashes of authority; or insist on a nearly nonsensical literalism that the metaphorical structures of religion were never meant to convey (see “The Problem of Metaphor in Religion”). Trust once powered religious authority. In Western society, that age has passed and is unlikely to be revived (see “Tao and the Myth of Religious Return“). I hope it is obvious that belief never has guided public morality except in short-lived totalitarian dictatorships.
My argument here faces stiff headwinds, I admit. Most persons use trust and belief pretty interchangeably and unless challenged will hardly pause to investigate their own claims to knowledge of either. If they don’t take their own declarations seriously enough to consider justifying them, why should we? Let us grant that much of what we read, hear, and say is as serious as a snore, but let us also admit that at least some of what we value most deeply touches on issues of trust and belief, so the terms are worth getting clear in our heads, especially if we think them a kind of knowledge.
If this doesn’t seem quite right to you, it may be because another lens may change the focal length of belief, granting it equivalency, even superiority, to those things we can warrant by means of correspondence truth tests, and that way of viewing belief is popular today. By this axiom of knowledge, all of our perceptions and reflections must be filtered through a personal perspective that colors them indelibly with the very prejudice that tints all belief, so no truth or goodness claims may be approached objectively or dispassionately. Belief in this view transforms all claims to knowledge into matters of preference, favoring will as the director of truth. This approach, like so many knowledge concepts, can be traced to Immanuel Kant’s separation of noumena, things in themselves, from phenomena, things as we know them, and to his contention that we can only know the phenomenal side of reality, the mirror rather than what it reflects. Phenomenalism takes this dichotomy seriously and argues that the perceptual wall that sustains the distinction between what is real and what is known must be impassable. If true, this would make a correspondence between mind and world impossible and would thrust all judgment into subjectivism or relativism. Such an obstacle would color all knowledge and make it uniquely our own, a generalization of belief that postmodernism fully embraces. This outlook inhales all sorts of private percepts as personally valid knowledge, building an organic virtual circle whose coherence becomes the only means of judging one’s truths. Those intuitions that form inside the perceptual wall of each person’s mind might be expected to conform to her prior conceptions (see “Postmodernism and Its Discontents”). Since this virtual circle embraces experience as the source of identity, it also regards the reasoning that issues such judgments as a personal creation, making the principle of non-contradiction a very weak truth test indeed.
But it is the only one available to the postmodernist who must rely on it for warrant. In theory, this should not be too much of a problem, for the same creativity that produces each person’s virtual circle must compel her to exercise toleration for those who assemble reality by use of other experiences, so a vapid acceptance of disagreement seems baked into the postmodern outlook that has no means of warranting one construction of reality over another (see “Postmodernism’s Unsettling Disagreements”). All beliefs should be respected in this view.
Now this enlightened openness might seem attractive until one of two outcomes ensues. The first is personal: the postmodern believer faces some conflict with someone who assembles reality differently. The second is cultural: since postmodernism views culture as determinative of identity, we should regard cultural disputes as merely subjective disagreements writ large, subject to the same quandary in the face of the inevitable dispute that must follow private experience interpreted by subjective intellect. This is a bad theory, for it both creates dispute and destroys any means of resolving it. Disagreement must end in domination or surrender. This inevitable but unfortunate conclusion has led postmodernism to fetishize both the use and the concealment of power as the only means of resolving conflict. Beginning with Schopenhauer and culminating with Nietzsche, we find a rich literature exploring this subject, one that the post-structuralists brought to a lustrous shine in their vociferous and interminable disputes over correspondence truths, the possibility of which they theoretically denied.
This approach to belief and knowledge has vast currency today, not least because of the power of the human sciences that have sustained and directed it (see “The Calamity of the Human Sciences”) and the limitations of the true sciences that disprove its central thesis without providing a means of better directing public morality (see “The Limits of Empirical Science”). The incapacity of this strongest proof of public truth to settle disputes concerning public goods has opened opportunities to postmodern moral theories despite their incoherence. It has also provoked a deep nostalgia in premodernists for a revival of religious authority, which is unlikely to happen. Because the contretemps has dragged on for a century now with no consensus emerging, public trust in institutional authority has largely been replaced by private and provisional commitments to institutions easily revocable and invariably tentative.
It may sound as though I am discouraging belief, but I am not. I am discouraging three approaches to belief. First, we should call it what it is. We only believe what we cannot know by a preponderance of the evidence. We cannot claim belief to be true either because we want it to be or because we are so skeptical of knowledge that we think everything equally subjective. Our passion cannot be a warrant for correspondence knowledge. Second, we should recognize that belief in authority, or authority built upon beliefs, is merely belief writ large and is no more reliable than the beliefs held by individuals. Popularity may suppress doubt, but it cannot resolve it. “Because everybody thinks so” has been proven wrong so many times it is a wonder it is ever said without a rueful smile. If everyone on planet earth believed in God or didn’t, the truth would remain as before. Majoritarian belief is just as uninformed as individual belief. It’s just louder. It is as foolish to doubt it from contrarian motives as to endorse it from mental laziness. No reason means no reason. Third, belief may get us beyond an impasse of choosing, but its true danger lies in what comes after we move on to other considerations. The real problem lies not in accepting some belief as true or good but in using that acceptance as evidence for future judgments. Belief is the rotten step in reason’s ascent to truth, and to expect it to support further judgment asks it to carry more weight than it can bear.
We can deal with that weakness in two constructive ways.
First, we can seek other means of support. For example, an earned trust may be a weak basis for knowledge of truth and goodness, rooted as it is in undistilled experience, but it avoids the prejudice that characterizes belief. Moral authorities can earn trust, whether they be personal or institutional, and it is reasonable for us to give what has been earned. But that is unlikely to happen in this age of generalized suspicion. Given the zealotry with which we protect our individual agency, we are unlikely to surrender to trust. Many institutions — the family, government, societal hierarchies — can appeal to our sanction even when we are unwilling to extend trust, and while this may cause authorities discomfort, it does allow for active sanction to improve institutions through critical interactions. This is how modernism finally learned to interact with institutional authority, though authority invariably found such sanction unsatisfactory. Empiricism and expertise have proved capable of discovering some kinds of truth, and a universal reasoning competence should prove able to guide thinking persons to moral consensus once we agree to the axioms of commitment that will underwrite them. We begin life with such helplessness that our trust forms before our judgment does. Once reason becomes operational, it always seeks stronger evidence, and that is often found, so trust is revoked and agency proclaimed. Both our reasoning and our evidence tend toward refinement as we mature, and it is no small thing to replace childish beliefs with better evidence and reasoning as we grow in wisdom.
But we never grow wise enough to outgrow our need for belief. No matter how carefully we try to find firmer footing, at times all we can do is to embrace both beliefs and the uncertainties they signify. We face many aspects of reality that we simply cannot know. The temptation in the face of such inscrutability is premature closure, filling in the blanks of our knowledge of the true with the sirens of desire. Perhaps the deepest well of wisdom involves knowing when that alloy of natural and preferential freedom is permissible and when our quest for truth and search for anomaly should continue. It should be clear that supporting our choices by belief should be the last rather than the first task of judgment, that we must defer belief to the uncharted frontier beyond knowledge rather than admit it to the public square. That beautiful, fragile bubble of belief is still necessary for those live options about which we simply cannot achieve knowledge: questions of unknowable future outcomes, issues of the afterlife, problems of ultimate meaning. At the fringes of what we can know, beyond what we can hope to achieve with better reasoning or evidence, lies a corona of beliefs illuminated dimly by those things we do know. The forms of beliefs shift as the light of our knowledge changes, highlighting some and throwing others into deeper shadow. This frontier where knowledge must end, where it has no choice but to touch upon pure desire, seems the proper domain of belief.