- The meaning of “science” has evolved to mean a specific kind of judgment based upon the replicability and communicability of very limited experiences.
- Science only gradually became disentangled from religion, philosophy, and expertise.
- The collapse of religious authority in the Reformation required an alternative mode of justification.
- Over the course of the nineteenth century, empirical science became standardized, subdivided, and professionalized.
- Empirical science affirms the truth of modernist axioms privileging universal reason applied to private experience.
- The scientific method intentionally denies the efficacy of combining truth and goodness claims, demanding an act of severance that discourages both premature closure and the pursuit of immediate utility for its judgments.
- Empirical science may be distinguished from both pseudo-science and the human sciences by its reliance on paradigms and mathematical language.
- In the last decades of the nineteenth century, empirical science retreated from its expansive claims to realms of knowledge that are not amenable to its method.
- Natural science’s success and exclusivity shut out imitators that nevertheless seek to mimic its methodology and products.
- Because empirical science cannot produce moral truth, its success has failed to provide public moral guidance.
- As the purest product of modernist axioms of commitment, empiricism’s failures have exacerbated a contemporaneous collapse of institutional authority and produced a moral vacuum after World War I.
- Empiricism’s discoveries, methodologies, and technological products have eroded public reliance on common sense.
- These failures have further expanded a reliance on private moral principles.
- Because empirical science is the purest expression of modernist principles, it has been largely opposed by postmodernism, which rejects the efficacy of universal reason, privileges private experience, and holds that reasoning is formed by it.
- Though ordinary experience cannot duplicate science’s methodology, it can adapt certain elements to refine prudential reasoning, which can identify and justify moral goals.
The gulf between our truth claims and our warrants for them is so frequently ignored that it is a pleasure to turn to a means of knowing that was built entirely on warranting its declarations. Empiricism, commonly known as natural science, is the most clearly defined contemporary justification for truth precisely because its methodology focuses powerfully on verifying its truth claims. Its clarity and practical value have earned the cultural respect it has received. For over a century it has been the most powerful weapon in the contemporary arsenal of verification (see “What Makes It True?”). Consequently, it has deservedly received thoughtful attention from academics and popular media. The happy result is a fair understanding of science as a justification for truth claims, and considering how poorly other warrants fare, that is a good sign, for we may use science’s processes as a partial model for more pedestrian efforts. But twentieth-century history has shown that effort full of dangerous temptations, so we should not think science a perfect model (see “What Counts as Justification?”). Our conception of empiricism may be more transparent than other means of warrant, but it is neither properly limited nor correctly related to less prestigious means of justification, so I turn to that task here.
Should you ask me to define “science,” I should have to ask which era you wish to discuss, for few terms have seen more variation over the centuries. To the ancients, science meant simply a systematic approach: they spoke of “scientific mathematics” or the science of horsemanship or archery, indicating organized thought or skill. The term retained that sense into the long era of authority’s dominance, essentially following Aristotle’s three-fold division into natural philosophy, mathematics, and theology, though the term grew suffused with a respect for custom that stifled independence and novelty. One change from classical thinking did occur in medievalism: theology was not a separate subject of the curriculum. It did not need to be, for it suffused every subject studied. Its justification, authority, offered the means by which serious thinkers could trust claims to truth and goodness. No alarm was raised at the possibility of rational contradiction for the simple reason that trust requires a surrender of the very rational agency the thinker might use to discover anomaly.
By the thirteenth century, Thomas Aquinas had quite unintentionally begun a dissolution of an unquestioning trust in authority, though he was careful to confine his investigation to what Catholic institutional authority allowed. But even that limited venture into his own agency produced a discordant element, for he thought natural reason could be further divided into two distinct mental operations: apprehension and rationality. Later thinkers would call apprehension by another name: perception. Even later, that term would be broadened into its current sense: experience. Aquinas did not exclude any of the traditional applications of science-as-organized-thought: one could have a science of God or of mathematics independent of the perceptions. This might seem a strange oversight on the part of the most thorough thinker in history, but Aquinas was hardly the first to think apprehension involves some “inner sight” that reveals indisputable truths through the operations of the supernatural on the mind. This notion of insight is obvious in Plato and constitutes a route to knowledge that religious authority has always sanctioned so long as it could be translated into dogma. So long as “science” included this capacity, it must prove subservient to the gnostic revelations that insight might claim and authority must formalize. Aquinas certainly could not establish a separation, for the Dominican priest thought natural reasoning must always serve the ends of Church authority. His great contribution was to subject even an inner light to natural reasoning, an analysis later centuries would prove it could not support. From his day forward, reason and perception would be linked. The next seven centuries would be tasked with defining the workings of that link and properly limiting it. That process is not finished.
At this point, I should submit a footnote to this analysis of Aquinas’s contribution to science. We like to work backward from the ostensibly superior vantage point of our current view, but that effort is inescapably flawed by the search for the “golden thread of correctness,” the contemporary lens that we bring to our inspection. We imagine some modern intuition must have imperfectly informed Aquinas’s efforts in preparation for the deep wisdom that is our own judgment of the matter. We try to separate Pythagoras’s geometry from his numerology or Newton’s physics from his Biblical symbology. We consider Galileo the brave prophet of science crucified by medieval religion, ignoring that he dedicated his most famous work to the Pope and continually submitted his earlier theories for religious approval. Even those stalwart prophets of modernism, Descartes, Locke, and Berkeley, claimed we could have certain, intuitive knowledge of God but could never have any certainty of our natural knowledge. So we should remember that even the greatest prophets of science accepted a set of warrants entirely different from our own, one in which supernatural truths impressed themselves upon reason, and authority guided it to certainty. That is certainly not our view of things, but we should not applaud our own excellence unless we can explain why belief is not superior (see “Can Belief Be Knowledge?”). The best corrective for our selective memory is to recall the six years Johannes Kepler spent attempting to squeeze Mars into a circular orbit despite accurate evidence to the contrary. His motive: a perfect God would only create perfectly circular orbits, not inferior ellipses. That was in 1609, at the dawn of what history would call “The Scientific Revolution.”
The crisis of apprehension came in the awful chaos that began with Martin Luther in 1517. Our penchant for the golden thread should not blind us to the miserable desperation of the Reformation effort to replace authority’s certainty, for all of its indubitability was shattered over the next two centuries (see “Premodern Authority”). We see continuity looking backward, but those living through those millennial horrors saw none in their future. Their ultimate warrant had collapsed, and they viewed all truth and goodness claims that had made sense of their world with equal suspicion and without a means of discrimination. One cannot live that way, and so reflexively they sought a replacement. That search in the beginning was naturally aimed at the kind of certainty authority had conferred, and it expected to find equally irrefutable supports for its claims to truth and goodness. Those could not appear, though that realization was far too long in coming. Even the pre-Socratics had claimed to know truth and goodness through the wisdom of orthodoxy: customary opinion. But with custom crumbling and authority destroyed by its own methodological weakness, a need coursed through the culture like a parching thirst (see “The Fragility of Religious Authority”). A historical view must comfort us that at least a partial substitute was eventually found, but in that dismal age of religious contention, no such golden thread could be woven into some new fabric of meaning. In the wake of the Reformation, clarity could only come from further failure.
Modernism’s defining axiom is to base all truth and goodness claims on universal reason and private experience. These by definition reject authority, for to subject it to rational inspection quite dissolves and replaces the trust that is authority’s engine, and they locate the power to arbitrate claims to truth and goodness in the individual. Its earliest defenders were epistemologists: theoreticians of the means by which mind knows reality. Medievalism had favored a teleological ontology: an explanation for the structure of reality infused with purpose. In that system, no doubt could attend the means of transmission, for its truth and goodness claims were guaranteed and certain to those who had placed their trust in the authority that warranted it. To trust in the truth of divine command was to accept the entirety of its explanation for reality as well as those who translated that explanation for human consumption. Any dissolution of trust involved only a relocation to another authority who claimed a truth more worthy of trust but appealing to the same justification (see “Authority, Trust, and Knowledge”). Until the Reformation, truth and goodness were as certain as they could be, and the social order derived from those certainties a brute reality closed to inquiry. To be sure, there were apostates, but they were either crushed or domesticated by the machinery of tradition. Aristotle, that paragon of direct observation, was shrunk into just another authority by the late medieval era. It was this reputation as “the philosopher” that Francis Bacon so objected to. No competing explanation could match authority’s surety and comprehensiveness, and it was only after all that fell into ruins that any replacement warrant could be built.
These new epistemologists of modernism found themselves treading on marshy ground, for no sooner did their reason and experience produce a replacement explanation than some other and more careful thinker critiqued it based upon some combination of his own perceptions and his reflections on them (see “Modernism’s Midwives”). This new mode of thinking, this empirical cannibalism, seemed entirely unlike the rock-ribbed certainty of authority. It was perpetual revolution, constant doubt, ongoing uncertainty. It has been that way ever since.
Only a total collapse of authority could have exchanged a confident warrant for one so tentative. It suffered at first in its contrast with an authority that was discredited yet sorely missed. It was that yearning, the echo of which today is merely nostalgia, that has forced the constriction of science, always tempting the either/or of faith over reason (see “Tao and the Myth of Religious Return”). But this same temptation served to push modernist thinkers to extraordinary care in their self-critique, seeking the lost Eden of certainty through means that must keep them always in the wilderness of doubt. The warrants that guided their search would repeatedly prove themselves flawed. Private experience must be filtered through individual judgment before it could be bruited aloud; then it must be communicated through an idiomatic language poorly suited to reality’s complexity, and then poured through a second experiential filter to be heard, understood, and agreed to. And this must always be subject to a reasoning that is enticed by bias and subject to partial comprehension. How could any truth or goodness claim survive a rational critique of so defective a process as making a declaration?
Perhaps this context explains an initial attraction to reason divorced from experience. This was the Cartesian solution. But those who placed their bets on a strict rationalism found themselves boxed out of the search fairly early. By the end of the eighteenth century, the argument that an ever stricter reasoning process might lead to certain knowledge had foundered upon the sterility of reasoning independent of perception. What can the mind reflect upon other than the products of experience? This route might find knowledge as certain as religious doctrine, but without an appeal to experience of what use might it be in guiding choice? Today’s formal logicians and logical positivists are still testing answers to that question.
Examined experience proved more promising. The earliest impulse in that direction was to examine the nature of experience itself, the kind of uncritical and everyday perception and reflection we all use constantly. The results were ugly, particularly when set against the crystalline logic of the geometric theorem or the theosophical infallibility of Church fathers. For a “mere empiricism” as expressed by an early modern thinker was nothing more than undistilled experience, a casual acceptance of sense data, the kind of thinking Aristotle joined to direct perception to categorize the world (see “Stereotypes and Categories”). It was the cumulative and corrective work of almost a century to recognize how defective and simplistic that view had been. One simply has to contrast the view of experience advanced by John Locke in his Essay Concerning Human Understanding (1689) with Kant’s Critique of Pure Reason (1781) to plumb these depths. A pure rationalism might be the clearest loser from this effort, but undistilled experience was also shredded by this generational effort to think through the reliability of our perceptions and reasoning.
The natural philosophers of the seventeenth and eighteenth centuries used their critiques to refine their thinking and confine their conclusions to those that repetition and extremely close analysis could reveal. Experience, of course, resists that effort, for every one we encounter is uniquely sited in what filmmakers might call its frame. To see an experience truly, we must limit our focus to its elemental constituents. That examination requires a reliable repetition of the experience, for in the first go-round we cannot know what is important about it, how to isolate it from its frame, or how to find the natural fault lines of analysis. That is hard, for that narrowing of focus must use reason to zoom into only those relevant factors that constitute the experience’s essential core and exclude extraneous ones, repeating the experience until study or skill has made it understood and thought has unwrapped its components. It was reason that required this narrowing of experience and made it possible, but that same reason prompted the skeptic David Hume to explain why the effort could not reach certainty. One element can never be duplicated: the time that shapes the frame of the experience. It must change as we seek repetition and that dissolves the repeatability of the experience and with it our confidence to forecast its causes and effects. Just ask any gambler about that. This is the reason why undistilled experience is the most unreliable of correspondence truth tests, though we use it more promiscuously than it deserves (see “Pure, Poor Systems of Knowledge”). But even the closest attention, Hume observed, must introduce at least a similar doubt. Causality, the root of all science, could never be found in the reality science observes but only in the explanations science infers. The modernist effort to replace authority’s certainty with closely examined experience must fail. The degree of uncertainty this factor introduces took time to recognize and negotiate. 
Proto-scientists of the eighteenth century developed the skills of close observation through national or royal societies to which they presented their results. Controlling the frame of their investigation to what could be repetitively observed became part of their effort. Otherwise, their fellow seekers would find reason to object to their findings. Narrower was always better. That also took time to work through as students of natural philosophy became naturalists and then scientists by means of an increasing standardization of practice. Though it had failed in its epistemological quest for certainty, empiricism continued to pile up reams of new discoveries, launch revolutionary technologies, and pitch new theories that challenged tradition. Was its continued acceptance merely a product of its pragmatic success even in the face of the doubt its own processes had revealed?
That might be a valid historical conclusion, but another oddity emerged from adolescent science. “Narrower” must always imply “deeper,” and if recent developments have revealed anything, it is that there is no end to the profound discoveries of the natural sciences. But a strange thing was revealed by this sharpening of focus that convinced even skeptics that this new methodology was succeeding. From ornithology to optics, from astronomy to zoology, findings in one field supplemented those in another to assemble a multilayered jigsaw puzzle reflecting the very reality that philosophy had thought inhospitable to certain knowledge. True, new puzzles also emerged, but as subject disciplines subdivided, the volume of available knowledge in each field of study increased beyond the competence of any single mind to master. These deeper refinements also became correlative, and then subdivided yet again. Every naturalist now found himself restricted to a single subdivision of empirical knowledge defined by the capacity of his own mind and at the same time found it necessary to master at least the outlines of those disciplines that enmeshed with his so as to grasp relevant connections. When these natural philosophers looked up from their study, they found kindred communities knitting other research together. Deeper and wider. The edifice of natural science was built fragment by fragment by the efforts of these laborers to construct the grand mirror of nature by the sustained attention to the structure of each small piece. We have to see their labors as fed by the doubt that hollowed out their reliance on reasoning about experience. Their devotion and care had the salubrious effect of cementing the entire explanatory structure more tightly to the reliability that had always been its goal, but it came at the expense of removing that structure from the access of non-specialists.
As expertise in scientific study became specialized over the course of the nineteenth century, it discredited the undistilled experience that had launched its quest, subjecting it to a doubt proportional to science’s growing esteem and distinguishing science from common sense. Every scholarly effort to make the empirical process more reliable in the face of its quest for certainty and every rational objection raised to that effort subjected commonsense reasoning to disrepute and further removed understanding from ordinary thought. The loud defenses of common sense we see in nineteenth century literature would have been unnecessary a century before. A century later, they were indefensible.
By the last decades of the eighteenth century, this thoughtful criticism had produced two disjunctive responses. The first was a full-throated rejection of the idea that experience needed correcting coupled with a revival of belief in insight to guide it. This movement rejected everything about modernism except its distrust of authority. It was called Romanticism. It rejected empiricism directly, arguing that it must “murder to dissect” and thus kill the meaning of the very experience it sought to explain. If discrediting science was Romanticism’s goal, it could not hope to prevail against the wave of technology that the nineteenth century produced, but it nevertheless left to Western culture a legacy of distrust of both science and expertise coupled with an enthusiasm for raw experience that popular culture has elevated to a fetish. The second and more successful response was to double down on the empirical bet, to further tighten its methodology and reject outliers to its operation, in short to professionalize. Before they could become true empiricists, these proto-scientists had to invent a process for doing science and refine it so as to repair the deficiencies that experience and reasoning seemed heir to. That effort transformed the casual naturalist of 1800 into the laboratory researcher of 1900 and required a methodological expertise that science students still find quite formidable (see “Expertise”). Humanities students might follow simpler paths steeped in Romantic values. For almost the entirety of the nineteenth century, these two responses to the inadequacies of modernist axioms would duel for the soul of the culture, introducing a schizophrenia into Victorianism that has not dissipated entirely (see “The Victorian Rift”).
But its very success made expertise a moving target. Any medical student today can explain why. Aristotle had categorized everything: planets and politics, gardens and governments. And it is still true that rational categorization lies at the heart of both expertise and natural science (their difference is a late nineteenth century development I will explain shortly). But before it could succeed in the effort to make sense of experience, expertise had to limit it to something reason could grapple with, and that was and is an ever larger problem as we grow in knowledge. Probably the last person to attempt an omnivorous take on knowledge was the famed Prussian explorer Alexander von Humboldt (died 1859), whose quixotic stab at encyclopedic knowledge, Kosmos, is seldom read today. There could be no second Aristotle.
What emerged instead in the late Victorian era was even further constriction of both subject and method. Empiricists toiled to perfect their method, using their tools not only to build but also to reject those raw materials and techniques that failed to meet their increasingly stringent requirements, especially fanciful creations of popular Romantic imagination. These have enjoyed rather a renaissance thanks to the internet in our own day. Perhaps it was to delineate their profession from the quackery of pseudo-science that scientists in the nineteenth century and the next built up their methodological walls not only for the entire structure of the scientific enterprise but for each discipline within it. For every chemist there was the eager alchemist seeking the philosopher’s stone who found himself shut out of serious consideration. For every naturalist teasing out the structure of the very air there was another seeking the phlogiston that allowed combustion or the ether that propelled gravity through space. For every anatomy student there were ten seekers of the physical location of the soul. The gold had to be separated from the dross.
Anyone could tell these perfectionists that experience was a fallible teacher and expertise elusive. Fortunately for their quest, their professionalism happened upon techniques inherent to their means of justification that allowed self-correction. Reasoning may be uncertain, but it possesses the excellent quality of being open to analysis. In its purest form, mathematics, reasoning is so clear that one thinker can review her process of logic and so discover and correct error at any point in a superbly clear sequence. Math’s clarity derives from its artificiality: all of its reasoning is pure because none is subject to the ambiguities accompanying experience. Empirical reasoning cannot match that level of artificial certainty, but it can discover and correct errors in perception and reasoning about it by taking care in its processes. Authority can find no means to do that. Dispute must dissolve the trust that is its sole substitute for evidence. Undistilled experience cannot do that either, for it cannot allow us the space to attempt to repeat what is, after all, a unique set of experiential inputs. We cannot replicate an experience, but we can attempt to limit the variables that make it unique, and so replicate an experiment. This enslavement to causation forces empiricists to be skeptical about their core rational principle linking causes and effects. Its corrective must always be the old saw, “correlation is not causation” (see “The Determinism Problem”). Our ability to break an experience into discrete components and attempt to repeat it allows us to test our notions of its causes or effects. None of this process is certain, nor is it easy to master. Yet the nineteenth century saw it mastered. By the sheer luck of having trusted universal reasoning applied to individual experience, modernism had found a lock-and-key of truth, though its opening proved a very slow process plagued by manifold errors.
Empiricism found it had to settle for an imperfect judgment and was forced to warrant truth not as certainty but “by the preponderance of the evidence.” In assaying that evidence, empiricism emerged from expertise so gradually that most observers could not mark the moment of their separation.
I offer two possibilities to define it. Perhaps it was the acceptance of the only infinite and most precise language available to humans. One can be an expert at many things whose mastery may require no communication. But empirical science found its process of peer review and the interconnectedness of its disciplines so fundamental to its progress that it was forced to find a language suited to the precision of its efforts. We recognize “true” natural science today in part through the employment of the language of mathematics. At the frontier of disciplines, we will find a cadre of researchers doing theoretical work that relies extensively, even solely, on the use of mathematics to extend theory in the field. That same recognizable structure of the discipline introduces a second means to distinguish true science from its many imitators. The concept of the paradigm was pioneered by Thomas Kuhn in 1962 to explain the consensual nature of scientific disciplines. He described it as a community agreeing to the overarching theoretical basis of the discipline. This has turned out to be a defining quality of a truly scientific effort as opposed to the many that aspire to science but fail, particularly when coupled with Karl Popper’s theory of falsifiability, which argues for the employment of constructive doubt in scientific endeavor.
These conclusions were made necessary by the continuing professionalism of natural science. As its store of knowledge increased radically over the course of the nineteenth century, it evolved into something greater than a quilted reflection of correspondence reality, far more than a body of knowledge. It became a means of warranting knowledge of reality, a verb rather than a noun. As the nineteenth century progressed, the active voice of that verb centered on the academic community, specifically as German researchers structured it. Their process for certifying practitioners, defining operating paradigms through creation of disciplinary departments, and structuring experimental procedures became the model for the western world. Kuhn’s view of science as a collective of interacting disciplinary communities was the most important outcome of the German model of science.
The process by which it operates today is most crudely known as the scientific method. That terminology is out of fashion today, for it discounts the intuitional and unorthodox leaps of supposition that have always characterized the empirical process in favor of a static procession through definitive steps that connects to neither the communitarian work of today’s scientist nor her often intuitive hypotheses. Still, the scientific method has something to offer the witness to the empirical enterprise. First, it begins with evidence rather than theorizing. I will concede the connection is cyclical and not linear, but if the human sciences have taught us anything, it is that hypothesizing too soon must multiply theories and tempt us to premature closure, a too-eager squeezing of evidence into the paradigmatic box. The unfortunate and inevitable result is a splintering of paradigms conforming to unsupported conclusions on the part of the researcher. We can broadly categorize pseudo-science by this proliferation of paradigms in contrast with the gate-keeping use of overarching theories characteristic of the natural sciences. A second qualifier is falsifiability. A true science easily envisions tests of its judgments that, if failed, would disqualify them. Of course, that failure would then force a revision of hypothesis. Success would be a falsifiable test that the hypothesis passes. A sure sign of a failed science would be the inability of the explanation to be falsified as, say, Freudianism and Marxism have demonstrated in their miserable histories. This failure does not refute their truth, only their claim to be scientific, but in terms of warrant, that is debilitating enough. It is a sign of the pseudo-scientist that she will not take “no” for an answer.
No contrary evidence would satisfy the disciples of Marxism or Freudianism, for the theory itself projects such failure as further proof, either because the theory is too broad or the evidence too diffuse. So if a child is neglected by his mother or indulged by her, he is likely to show some maternal transference. If a communist revolution fails, it only proves its untimeliness and presages the eventual world revolution. If deflation or inflation follows some monetary policy, either proves a predicted effect. And so on. Finally, we may see true science operating according to some standard operational method explicable by mathematical language. None of these qualifiers is definitive. They are merely indicative. But they are necessary, for the long and slow evolution of science, evolving as much by disqualification of theory as proof of it, has also been marked by shoddy imitation and reflected glory.
Empiricism’s maturation in the nineteenth century was resisted, and its newfound self-confidence increased that resistance. I’ve mentioned the failing efforts of Romantics to dominate the popular culture. They warned against this “new science” that sought to domesticate nature. Their hackles were particularly raised by the claims of future omnipotence in the positivist arguments of Comte, St. Simon, and Laplace. These cheerful materialists demonstrate how far science had come, especially when we contrast it with the desperate tentativeness that had attended its beginnings. Adolescent science suffered from all of the defects of youth: arrogance, excess ambition, and self-inflicted harm. It was always resisted by religious authority and the traditions it challenged, always suspected of replacing dogma by a different reading of the mind of God. Reactionaries retrospectively and falsely accused empiricism of intentionally destroying the authoritarian structures it had been forced to replace. They still do that. But the combination of its technological and explanatory success surely tempted its practitioners to all the excesses Mary Shelley warned of in her famous novel Frankenstein. Those warnings ring out today about the threat of artificial intelligence and genetic experimentation. Now as then, Romantic doubters and reactionary religionists are silenced by empirical triumph. Did not its practitioners impose their will on the rest of the world? What goods could they not accomplish?
As the twentieth century approached, empirical science seemed the final vindication of modernist axioms, sufficiently triumphant to tempt practitioners to hubristic predictions and utopian promises. Modern became synonymous with progress. But the pride and the false hopes it raised in the popular imagination presaged its fall. Defenders of tradition waged a last counterattack as the fin de siècle approached and from the pulpit and the parliament decried wholesale revolutions in modes of life, objections that had only grown more plaintive since Edmund Burke’s dark predictions of a hundred years before. The only novelty of these reactionary responses was their millennialist intensity. Traditionalists had warred on science from its infancy, beginning with the Catholic Inquisition and Index and continuing in the Romantic musings of Vico, Dilthey, and Newman. But the culture’s infatuation with modernism lowered the pitch of that protest, producing the elegiac tone we see in the twentieth-century writings of Lewis, Belloc, and Chesterton. We can see it in today’s evangelical movement (see “Theology’s Cloud of Unknowing”). This is both a knee-jerk reaction to change and an ongoing appeal to the gnostic, but what unifies it is a rejection of the modernist warrants of which empiricism is the clearest example in favor of a warrant utterly at odds with them. The conservative effort to resuscitate authority was doomed by the attractions of individualism and new technologies. But that hardly prevents today’s champions from making the same case that Pope Pius IX made in his infamous “Syllabus of Errors” in 1864. Change, whether political, cultural, or intellectual, was seen by religious reactionaries as the enemy and empiricism as the agent of change. Its dominance swept all of western culture into revolution, the greatest of which was the industrial one.
The late nineteenth century and early twentieth finally witnessed the pace of change outrun tradition’s capacity to adapt. Reactionary resistance might have had a better chance if its two most powerful defenders had not so disagreed on the issue of warrant. Though both traditionalists and Romantics were offended by the scientific outlook and its elevation of reason over insight, they could never present a united front because they could not reconcile their views on authority. Romantics despised and distrusted it while traditionalists could trust nothing else. But empiricism was in no position to exploit the disputes of its enemies, for its champions found themselves growing ever more alienated from those who sought truth by ordinary thinking about everyday experience. The new popular media attempted some bland reconciliation at the end of the nineteenth century, an oil-and-water emulsion of traditional authority, private belief, common sense, and empirical method. There was nothing wrong with these means of knowing truth except the grinding clash of their warrants, each of which denigrated the others, producing a cumulative effect of endless intellectual strife perfectly suited to an age of increasing anxiety.
The defects and incompleteness of each only made the search for reliable means of justifying truth claims more urgent as the nineteenth century grew old and Victorianism revealed its hypocrisies in the face of accelerating change. All of these tensions came to crisis in World War I. It proved to be the defining moment of our age, reorienting and limiting justificatory options, producing cultural crises as contentious as the Reformation wars that had crippled authority four hundred years earlier. Empiricism was the pivot in this new crisis.
World War I provided convincing and final proof of the utter bankruptcy of authority. The old grey men who sent millions of their young countrymen to die in the trenches could hardly justify either the cause of the slaughter or its scientific efficiency. Nor could they reconcile the rage of industrial workers, oppressed peasants, and exploited native peoples to pretensions of civilization and progress. An oppressive social order stereotyped the female half of their own populations. Their appeal to tradition collapsed in plain sight, completing a decline that had begun four hundred years before. If trust in tradition had failed, so too did the rationalizations of the war’s apologists, and so modernism began its century of decline. The whiskered ministers and experts who met at Versailles to hammer out a new world order could not appreciate how literally their intentions would be realized. In their defense it should be observed that the sober old men who had brought the world to war failed to appreciate the modernist critical tradition that subjected not only its own methodology but also their hypocrisies to critical inspection. They were the last generation of authority. The next would be the lost generation, orphaned by the Great War, who had witnessed a world gone mad.
Empiricism seemed to supply both reason for and refutation of modernism’s failure.
On the one hand, it had established an ever-widening gulf between the operation of ordinary reasoning upon ordinary experience and the specialized workings of natural science. Its professionalization increased the reliability and prestige of science even as it denigrated the reasoning about ordinary life known as common sense. The war and its aftermath seemed to confirm the inadequacy of such naive thinking once and for all. Truth about reality seemed not to be common in two senses: it was not universal and it was not ordinary. The nineteenth century had had its Darwin and Marx to cast ordinary understanding into doubt with theories of formative forces hidden in plain sight in nature and society. But that was just an appetizer for the radical assaults on ordinary reason launched at the beginning of the new century by Freud and Einstein. Could the mind really contain recesses closed to consciousness? Could the very fabric of space and time be as illusory as a magician’s hands? Even scientists were shaken by their own efforts, as Einstein was by Planck’s quantum theory and Freud by Jung’s theory of the collective unconscious. In the wake of the war, physics’ version of relativity translated into relativism, the plasticity of reason in private experience.
These revelations opened a reality that was nothing like the one ordinary experience perceived, yet it seemed upheld by all the weight of the only universal means of justification left. As incredible as empiricism’s findings might seem, they were repeatedly confirmed by other empirical work within and between disciplines. They revealed a reality too weird to be understood by everyday thinking, the same common sense that had so failed in the war. Fortunately, one needn’t inspect the research or read the journals. Technology alone proved the truth of science, and it plowed through the culture in the first decades of the twentieth century in an orgy of invention. Empirical science proved that reason and closely examined experience did indeed make reality universally intelligible, but they had to be refined far beyond what can be achieved in ordinary life and by commonsense thinking.
But this assault on ordinary reasoning masked a much deeper and more threatening danger: a collapse of the means to justify goodness, the end for which finding truth is merely the means. This too reached its crisis in the wake of the Great War. As the most reliable guarantor of truth, empiricism was also entangled in what had become a moral crisis afflicting all of western civilization, and through its domination of other cultures, the entire world.
It had been simmering since the Reformation. The great strength of religious authority is that it binds its truth claims to service of its goodness claims, specifically moral claims. The same warrant that supports the truth of God’s omnipotence supports the imperatives of the Ten Commandments. Both are sustained by the same trust. The collapse of authority had severed that linkage, and goodness claims were cast into a doubt that modernism found difficult to resolve. So long as authority maintained some cultural dominance, it could continue to claim the moral high ground, but it faced an ongoing erosion by the challenges of modernist warrants, the smashing of trust by warring religious factions, the contradicting truth claims of natural science, and the piling up of hypocrisies that authority perpetrated on a rapidly changing social landscape. Even before the Great War, the contrapuntal chorus of hallowed tradition had devolved into the croaking of disputing claimants for the public’s trust.
Modernism had long recognized the pressing need to compensate for authority’s moral collapse. Philosophy had taken up the challenge in that desperate seventeenth century when thinkers sought to replace religious absolutism as the source of moral goodness. It too was brought low by the constrictions of empiricism, for its lone hope was the power of universal reasoning applied to private experience. But even the most rigorous of its champions, Immanuel Kant, had found reason inadequate to pierce the perceptual wall to comprehend reality. Pure reasoning also faced antinomies that challenged its own mode of verification, such as the incompatibility of the determinism so essential to empiricism and the free will so essential to moral agency. Kant’s duty ethics, perhaps philosophy’s crowning ethical achievement, provided insufficient motivation for ethical behavior and proved incompletely self-evident to universal reasoning, and so it failed to ground a public morality (see “The Essential Transcendental Argument”). The moral crisis intensified as authority and common sense suffered their defeats in the nineteenth century. The anachronistic appeals of the Romantics demonstrated a deep desire for certain knowledge of goodness without a submission to reason. But their pretty pantheist dreams had been wrung dry by empiricist revelations of the true state of things and by empiricists who decried their presumptions. Drifting on Walden Pond, Thoreau had seen an inspiring interplay of natural order. Darwin revealed it to be a desperate struggle for survival. Comte predicted a scientistic omniscience, Nietzsche a world of lunatic self-delusion. Modernism was suffering its own schism. Empiricism was the lever prying apart ordinary sense and scientific reasoning. In this new dichotomy, where could room be found for morality? These disputes only deepened the longstanding moral crisis.
For all their fundamental disagreements, authority and the new professional science shared one requirement: both called for persons to forsake the workings of their own common sense. Congregants were forced to arbitrate their allegiance to any authoritarian moral demand so as to embrace it in terms they could accept, thus dissolving its warrant by diluting their trust and subjecting it to the operation of their own reasoning. This is an insoluble problem for religious authority even today. An appeal to the autonomy of the moral agent in determinations of goodness is as fundamental to modernism as it is foreign to religious authority. But scientific progress continually reminded the general public of the limits of common sense’s capacity to know truth. Lacking that capacity, what could guide moral preference but the same deficient reasoning? World War I was the final blow, revealing the moral bankruptcy of many sources of authority and tradition, and finally bringing the crisis to full popular consciousness. In the end, Darwin’s and Nietzsche’s analyses prevailed, for the Great War illustrated their nightmare visions in graphic terms.
The first two decades of the twentieth century saw a frenzied search for meaning in every cultural field against a crumbling defense of tradition in full rout. Amidst all the novelty that marked the era, empiricism seemed as solid as a bulldozer, grinding away old verities and leaving piles of miraculous discoveries and inventions. Since its methods seemed to promise reliable justification of universal truths, why could it not also be the sole universal guide to moral preference?
This conclusion was a snare and a delusion, but it had shadowed the whole history of empirical progress. The wholesale shift to empirical warrants for goodness had begun with positivist promises of future glory and blossomed as simplistic versions of complex scientific paradigms were offered as modern solutions to age-old moral dilemmas. If nature prescribes improvement in species, should we not expect that same process in the social order? What noun should follow “scientific” but “progress”? Science proves it! The war conclusively disproved it. Nietzsche, Spengler, Kafka, Flaubert, and Dostoevsky had questioned the whole notion of cultural progress. After the war they seemed prescient. But the lure of scientism remained powerful as the finger-pointing began. The term implies a dismissal of all warrants other than empirical ones for public truth claims, but it also denotes a reliance on science to identify and evaluate true goods worthy of human pursuit. So in the face of the final dissolution of authority and the triumph of specialized reasoning, the culture looked to science both to identify moral ends and to measure progress in meeting them. It had proven capable of a similar success in pursuit of truth even as other sources failed, so in the disillusionment following the war and in the continuing churning of culture produced in good part by the technology empiricism made possible, why wouldn’t it be the new arbiter of morality?
Its practitioners, now working in a professionalized activity, recognized that natural science could never take on that task. Its long, constrictive adolescence had demonstrated that science was incapable of finding the full range of truths that must always form the context of any search for the good. Granted, it had proven itself unsurpassed in those experiences open to its methods, but that alone severely limited its operation. So many human interests go beyond perception, connect to other values, or cannot be quantified or repeated. These limitations mean that many experiences simply vanish from scientific consideration. And even those that prove amenable to the rigors of science’s means of inspection must be viewed through the peepholes of each discipline’s paradigm. All of these methodological limits boded ill for any hope that empiricism could take up the task of moral guidance.
But though natural scientists by the turn of the twentieth century were aware of their own limitations, one particular branch of science continued to promise to guide persons to moral truth. It was the bundle of studies involving human choice and will. Granted, this seems the correct focus, since moral preference must involve those same faculties. Viewed in the light of natural science’s self-imposed restrictions, it is perfectly natural that these human sciences stepped in to fill the moral vacuum. Because science’s limits took so long to be understood, the implications of these limits for the human sciences took even longer, for they are still not fully appreciated. In part, this failure can be traced to the birth of natural science in the seventeenth century. Remember that it began as a simple, systematic search for knowledge, so all sorts of subjects were considered suitable, and those that concerned human well-being already had a long history when science was only beginning. So from its start, the Scientific Revolution had its echo, the Enlightenment, whose clear promise was human perfection made by “the science of man.” Even in these early stabs at modernism, these subjects confronted moral questions far better served by philosophical inquiry than by empirical methods, especially as the meaning of “empirical” evolved. As Victorianism witnessed the constriction of empirical standards, it saw correlative efforts to develop “human sciences” as reliable as natural science. Economics, sociology, anthropology, education, ethnology, and especially psychology attempted to develop the expertise and the experimental processes that would yield truths on a par with those produced by empirical studies. Their practitioners modeled their disciplines on the hard sciences, assumed their place in university curricula, and shared science’s growing prestige. But they saw their function a bit differently.
They thought themselves always on the cusp of fulfilling the Enlightenment dreams of progress and perfection. These are moral goals whose attainment requires defining moral ends (see “What Do We Mean by ‘Morality’?”). The positivists practiced what would become human sciences, as did the founders of great social movements like Marxism, fascism, and utilitarianism. In the post-World War I climate of moral desperation, their theories permeated the culture through and through. If hard science could find truth in nature, then surely the human sciences could do the same where it really counts, in human activities, all in pursuit of mental and social health. The cultural turn to the human sciences began well before the war, but in war’s awful wake they promised to fill the empty space left by the sterility of the natural sciences, the collapse of common sense, and the disintegration of institutional authority. All kinds of lessons were to be gleaned from the work of these kinds of inquiry, they argued. Just as organisms followed a natural path to progress by natural selection, so too would social development lead to communism. Just as ecosystems reach a healthy balance of interests, so too would the invisible hand of capitalism produce social concord. Just as evolution favors the best adapted species, so too would social Darwinism allow the strong to flourish and the weak to fail. Never mind that these and a thousand other prognoses of the human sciences inevitably conflicted. If truths in space and time are relative to the observer, why should moral truth not be relative to culture?
But something was awry in this sort of thinking, something that had always been wrong with the effort to translate hard science into soft. In their way, the human sciences suffered their own humiliations in the constrictions of empiricism despite their best efforts to imitate their more clear-sighted peers. The very process that improved natural science’s capabilities to warrant its truth claims, the methodology that led to its greatest success, disqualified human science, and in doing so utterly impugned its claims to warrant moral truth (see “The Calamity of the Human Sciences”).
The problem involves different senses of “goodness.” Empiricism had learned a very hard lesson during its constriction from systematic thinking to professional activity. The rigor involved in its operation is aimed at discovering reliable truths by eliminating the errors commonly made in experience. Of the many errors ordinary reasoning commits, the worst is the temptation of premature closure, the entirely natural desire to decide upon truths in experience so that preferences may be exercised, to prove those preferences true despite conflicting evidence. As determining truth is merely the means to choosing what we think good, our temptation in ordinary experience is to distort our understanding so as to make that choosing easier. Nothing simplifies choosing like confirming biases or limiting data or disqualifying anomaly. We cannot achieve certainty, so every judgment must be made by applying our reasoning to a preponderance of the evidence, which must always involve approximation. In undistilled experience, we frequently fudge the meaning of “preponderance” so as to make our choosing easier. That decision introduces beliefs and biases into our considerations that then impact the goodness choices that invariably follow. The great innovation of the professionalization of science was the severing of truth and goodness judgments in favor of a strict focus on an entirely different sense of goodness, utility (see “What Do We Mean by ‘Good’?“). The scientific method is a tortured means of denying to practitioners the comfort of premature closure confirming prior hypotheses and conducing to further preference. That painful process is employed because it is useful to discover truth. Empiricism is indeed unique among warrants in the rigor of its operations, but these are exclusively derived from their utility in finding the true.
Its findings only confirm that dedication, for the discoveries of science are valued by practitioners for their truth in the case of pure science and their immediate use in applied science. Should practitioners find a more useful means to finding truth, they would abandon their former methods immediately (see “The Latest Creationism Debate”). This dedication to truth is pure, at least in theory, though as a human activity even empiricism is subject to the lure of premature closure, which is why its methodology is so severe and its process of peer review so rigorous. But finding truth is a different thing from finding moral goodness. Science is quite adept at determining a good drug to cure disease, but it is utterly impotent to determine that disease should be cured. Morality thus must direct science from the outside, so to speak, but it can neither dictate science’s practice nor be confirmed by it. And that is unnatural to our desires. Nothing more characterizes human nature than the linkage of truth and goodness (see “Truth and Goodness Do a Dance”). The professionalization of science has had to cut that link. What works is to find truth without seeking to gratify preconceptions, without thinking about purpose or verifying theory or implications for future preference, and without care for consequence. That finding then directs a subsequent search for utility, to put the truth just discovered to some practical use. But nothing within that search for either prior truth or subsequent utility can in any way consider imperceptible influences on empirical judgments of truth or goodness. To put that another way, no scientific operation can determine, define, measure, or produce moral prescription. Truth governs the direction of natural science, and that dedication disqualifies using the term “science” for the so-called “soft sciences” altogether, unless we footnote the term as a reversion to its classical meaning.
But even Aristotle would disown the pretensions the human sciences displayed in the nineteenth and twentieth centuries.
At a truth-finding level, they have always been stymied by a kind of mystery of their faith, for at the center of their studies must always be the antinomy of human free will. The “black box” of freedom can never be open to perceptual study except in a neurological sense that might fully interpret “brain” without ever touching upon “mind.” Worse, it can never be made subject to the predictiveness that is the sine qua non of empiricist effort. Every human science must face the unpredictability of individual free will in experience, a force not even its possessor may be said to understand and one whose operation can never be perceived from the outside (see “A Preface to the Determinism Problem”). This same mystery motivates human behavior in ways that must stymie causality and therefore prediction. The sweeping generalizations of the human sciences as a result are not empirical because they violate the determinism that allows prediction and so can be neither predictive nor verifiable. Unless severely restricted in scope, they do not even qualify as available to expertise (see “Expertise“). They attract us with their explanatory scope, but built into their theories is the escape hatch of human agency and its resulting unpredictability. Most crucially, they are not falsifiable. They are constructed so that contradictory evidence can easily be incorporated into hypotheses flexible enough to accommodate any evidential anomaly created by the black box of human freedom. And practitioners find that irresistible. The paradigms of soft science theories inevitably fracture into competing and contradictory explanations because their evidence is ambiguous, their theories amorphous, and their hypotheses slanted to their desires. There are no “pure human sciences” because the human will that is their focus can never be seen purely, and the experiments that might test it would violate the moral dignity of the potential subject.
Without truth, they must also lack the utility that makes natural science so potent. I doubt we could name a single unambiguously useful product of the human sciences. Their work provokes contention rather than gratitude, in part because they thrive in academic environments that fertilize creativity and argument. Their manifold failures should have discredited them sooner and would have if a moral vacuum had not permeated Western culture.
But if they cannot reveal truth, they cannot reliably guide moral preference. Their inadequacy makes space for the intrusion of norms that corrupt their efforts, and these norms can never be empirical because they concern the ends of choosing, which are moral, not perceptual. They are in a grammatical sense imperative rather than indicative. But because they have always concerned human interests, practitioners in the human sciences have never been able to see that, permeating their theories with premature closure shaped by their prior moral preferences and tainting their hypotheses with untestable assumptions. They will deny this, but their manifold paradigms cry out for a preference of the conceptual over the empirical in theory, and their sloppy observations of human behavior beg for a fill-in-the-blanks supposition of human intent. The broadness of their disciplines inevitably touches upon related but equally mysterious aspects of individual agency, adding further variables to their studies. Because they wish for their theories to be true, they nudge observation and evidence so as to make them so. Like the believer who sees God’s mercy in the one survivor of the church collapse that kills hundreds, they see the proof of their chosen paradigm in the evidence they cherry-pick from their observations, and in finding it they contend with other practitioners advancing rival paradigms in the same field with the same kinds of evidence. So do the human sciences more resemble religion than natural science? They do in binding their truth claims to prior determinations of moral goodness. No one can read Marx without detecting his nostalgia for a lost medieval corporatism or Freud without seeing his hostility to religion. Durkheim’s lust for academic acceptance, Mead’s admiration for sexual freedom, von Mises’s fear of communism: human scientists not only cannot enter the black box of human will but cannot escape its intrusion into their theoretical frameworks.
Yet their theories became the unreliable moral guides of the twentieth century and their contentious “expertise” the director of its public life. Their scientism has misdirected public morality for a century.
We have suffered through so many of these depredations that it is difficult to choose a single example. But we can find human pseudo-science in all the great tragedies of the twentieth century. Which view of “human perfectionism” should we pluck from history to illustrate its failures? Examine Marxism’s grand narratives built upon historicism and sociology, Freudianism’s utterly fascinating but by now thoroughly discredited theories of staged development, fascism’s basis in biased ethnology and anthropology, or capitalism’s blasé acceptance of boom/bust cycles and inevitable wealth disparities as the price to be paid for material advancement. Perhaps the most pertinent example of the attempt to mingle scientific principles with moral weight is one that entered the culture at the dawn of the twentieth century, an effort that still permeates and corrupts contemporary educational efforts. Pragmatism was launched in that crazed frenzy of novelty that bracketed World War I. At least in the form advanced by William James and John Dewey, its solution was to view life as a cycle of experimentation, each moment of which might contain its hypotheses and clinical trials. This version was arguably most influential in American educational theory, for it coincided with the spread of compulsory high school education and directed it in the form of Progressivism and Constructivism, thereby entering every American classroom and every American mind in the second half of the twentieth century.
Pragmatism sought to perfect utilitarianism, a popular Victorian moral code that sought to quantify value in a ludicrously pseudo-scientific effort to lend moral gravitas to random desire (see “Three Moral Systems”). Its empirical trappings notwithstanding, utilitarianism was merely subjectivism in fancy dress. Pragmatists recommended adopting that focus on moral flexibility and perfecting it through the application of empirical principles to determinations of use, so that persons might frame their experience as the scientist might and so make choices more productive of their happiness. It was a clever integration, but it managed to combine the worst elements of the two systems it sought to synthesize. Utilitarianism had failed for the same reason human sciences in general fail: because its champions could never isolate their reasoning about truth from their biases and preferences, could never impose upon their method a consistent purpose to their choosing, so persons in practice felt free to choose in the moment whatever immediate outcome they desired without overall self-direction or full awareness of consequences. Think about millions of adherents operating from the conflicts produced by such a system of morality. In practice, it was neither moral nor systematic. Like utilitarianism, pragmatism accepted the “cash value” of whatever ends persons happened to choose by whatever they happened to desire, but then added the impossible twist of pursuing those ends by means of the scientific method. It sought to make an empirical experiment from undistilled experience, to do it reflexively and quickly in the flood of daily life. But such an effort must be refuted by the entire history of nineteenth century science that had levered apart those two kinds of experience so as to more solidly ground its truth claims and which in that process had excised moral considerations from its perceptual study. 
Pragmatism as a moral system proved itself anything but practical, at least if persons wished either to direct experience to some consistent end or to evaluate it by some means beyond their present desire. Without an external guiding moral principle and in the chaos of experience that in no way resembles the laboratory, how could persons either choose what they consider good or act with the dispassionate distance required of the scientist? Wouldn’t they more likely mingle their preferences with their judgments, operate from a perpetual state of confirmation bias, and in the tumult of unanticipated consequences utterly fail to achieve the goals they might desire at one moment but might change in the next? It is hard to say whether such a plan more insults morality or science, but it is certain that pragmatism as an ethical system was neither. Dewey and James thought more as philosophers than as social scientists and more as social scientists than as empiricists. Pragmatism could claim only one point in its favor: it was well-suited to a materialist culture of kaleidoscopic preference and moral neutrality (see “The Problem of Moral Pragmatism”). It filled the moral vacuum in the culture at large while contributing to the failures of twentieth-century American education, or rather it justified such failure by removing the means to identify the moral goods culture and education should pursue. The effort to produce an “empiricism lite” was assimilated as just one more modernist failure into postmodern theory after the 1970s (see “Postmodernism Is Its Discontents”).
Fast forward to our own day and our own crying need for moral clarity. The human sciences have retreated somewhat from the grandiose promises of the last century just as true science has largely renounced its adolescent pretensions to scientism. The soft sciences are increasingly absorbed into the hard or have learned to restrict their focus to the observable and avoid the grand theories that have so misguided popular morality. Practitioners who are willing to narrow their fields to closely studied and repeatable experiences can develop a real expertise about some issue relative to human will and produce statistically valuable truths, though their theories can never be predictive for any person regardless of how thoroughly applicable they may be to groups, nor can they ever direct individual preferential freedom to the good (see “Our Freedom Fetish”). We must look elsewhere to fill our empty moral spaces, for moral agency is inevitably rooted in individual reason. We should ask if empiricism, for all its truth-finding prowess, can offer any direction whatsoever for its own efforts or for ours as we pursue the good. Could its strict devotion to utility and its deferment of moral preference somehow lead to a moral framework that might learn from its methods even if it cannot be found through them? The question only echoes in a void, for science’s credo must always be to pursue what is true and most immediately useful by the strictest rational methods. The history of the last century reminds us of the danger in that near-sightedness, for science is a blind beast of enormous power, at the moment one without reins. We cannot expect it to exercise restraint or wisdom in directing its own pursuits. Those rely on moral decisions imposed upon it, decisions it must be blind to as a condition of its competent operation. It certainly cannot be guided by human sciences and must not be allowed to revert to the scientism that marked its adolescence.
In the face of its moral blankness, we should ask the obvious: what moral goods should it pursue, what moral limits should direct its dangerous power, and what moral values should interpret its results? We can only answer with the same universal principles that guide other activities that produce laws and mores, but aren’t these sources equally tainted by the collapse of institutional authority and the weaknesses of ordinary reasoning science has revealed?
The current zeitgeist has rather clarified the issue through the last hundred years of blind stumbling. Three axioms guide current moral thinking: traditional authority, postmodernism, and modernism (see “The Axioms of Moral Systems”). The first two hook all truth and goodness claims to one warrant, though each identifies a different one. Authority claims that whatever delivers truth in reality will guarantee moral goodness based on it as part of a single employment of trust. Postmodernism’s virtual circle claims that same linkage but broadens the possible origins of its reliance to whatever sources individuals or cultures judge non-contradictory to whatever degree of rigor they accept in pursuit of subjectivist or culturally relativist morality (see “What Is the Virtual Circle?”). History has proved that authority’s and postmodernism’s attempt to overlay truth and goodness judgments must lead to irreconcilable conflict and confusion, especially in public morality. Modernism may rely on empiricism, expertise, competence, or undistilled experience to guide its truth and goodness claims, though it must be said that each of these seeks to match reason and experience to a particular context, so they are hardly interchangeable or always complementary. The jury is still out on modernism, though the evidence submitted in the last two centuries has not been promising.
The catastrophes accompanying the human sciences confirm the lesson we learn from the long view of authority and postmodernism. Human sciences also mingle their truth and goodness claims with socially disruptive effect. Empiricism’s greatest success has derived from severing those claims, focusing obsessively on first finding truth in those experiences conducive to its methods and only then seeking utility, but that same rigorous process excludes many truth judgments and all moral considerations from its operation. Expertise follows the same standard of separation, operating upon a broader but far from complete range of human experience. This is immensely frustrating, for these two methods have been extraordinarily effective in their efforts to find truth and chase utility. Undistilled experience has proved, both by the constriction of science and by the failures of pragmatism, to provide only weak direction for human flourishing. Like expertise and empiricism, it must be guided from the outside. Some moral principle must shape and guide these efforts, must not only direct one’s own experience toward fulfillment but also resolve conflicts with others in public spaces. Even examining the recent history of empiricism alone must convince us of that need.
A rational examination of empiricism cannot find that principle, of course, but it can reveal one or two lessons that should assist the search.
First, it proves that reasoning about experience has the capacity to be not only intersubjective, meaning understandable to others, but also objectively true. Science’s core process, language, and products are intersubjective and intercultural. Empiricism’s matchless success has demonstrated its universality through its reliance on mathematics and the standardization of its methodology. It is possible to view mathematics as a sterile and artificial game. Even so, that every human mind can play it speaks strongly to the common operating system of human reasoning. Any mature mind can comprehend it by applying reason to its rules of operation. The same may be said for the powerful interconnectivity of the empirical sciences revealed by natural scientists working from Rome to Rio and from Boston to Beijing. It is possible to contend with Kant that these interlaced interpretations owe more to the structure of the human mind than to the shape of the universe. But technology reveals not only that these empirical structures accord with human reasoning but also that they interact with reality itself in ways that are predictable and transformative. This proves modernism’s stated premise that human reason can reveal truths about the world. Because they are perceptual, these truths cannot be moral. Science’s success comes by an intentional blindness to moral goodness, but that same success makes rationalist articulations of moral goodness to direct science necessary, though it does not prove them possible. All science has proved is that reason applied to experience can reveal objective truths about the world.
But because they necessarily link their truth and goodness claims, the other two axioms of contemporary morality have fared more poorly.
Postmodernism has been forced by empiricism’s success to retreat from its Nietzschean assaults on the human capacity for finding truth, but it remains attractive to the culture as a theory of morality because it maximizes moral freedom, though it also discourages moral consensus (see “Postmodernism’s Unsettling Disagreements”). The virtual circle meshes pragmatism and cultural relativism with equal lubricity. But like the human sciences that so inform its theories and like religious belief, postmodernism must tie its truths to its values. Its theories of morality become incoherent if its theory of knowledge is contradicted as it has been by the success of empiricism (see “One Postmodern Sentence”). Empirical science has disproved the truth claims that serve as the means to warrant postmodernist moral theory. Reason is capable of universality. It need not be determined by culture, private experience, or historical epoch. It can direct experience, at least as the scientific method structures it, rather than being directed by experience. The universality of science proves that postmodernism is also wrong about language. If communication is not quite a clear vessel into which we pour our private reasoning, it is also not an opaque and heuristic isolation ward. Reason can discover truths about a deterministic world and communicate them intersubjectively not only to other persons but to other cultures. And it can operate upon the world so as to change it in predictable ways. We are not prisoners of our own or our culture’s perceptual wall. Science proves that the human mind processes reason universally and knows reality.
Authority’s claim to infallible truth is also challenged by science’s success. The only way mind can know reality is if reality is deterministic, not miraculous. The same authority that sanctioned tradition has approved not only obvious errors of truth but also grievous offenses against persons, moral offenses odious enough to produce both modernism and postmodernism as responses. The same moral authority that could not engender trust after the wars of the Reformation cannot hope to be resuscitated in the wake of its failures since then. Defenders of other authorities will condemn the perpetrators of these crimes against humanity as inauthentic authorities and false prophets, but what distinguishes the trust of their own supporters from that of their enemies? How are congregants to know when their trust is misdirected? When authority has blessed so much falsehood and so much error, how can it avoid an erosion of trust? Hope for religious revival only illustrates the vacuum of moral judgment that a transfer of trust to authority must create.
No one alive today can resolve public conflicts over these issues in postmodernism’s or authority’s favor without resubmitting the question to the same warrant both reject: universal reason. And no moral appeal can succeed unless it is directed to the only source capable of arbitrating it: individual moral agency.
This conclusion does not solve the problem of moral judgment but rather resubmits it to our reason for consideration even in the face of the last two centuries’ suspicions about common sense. Can modernism’s solution of reasoning about experience be revived despite the failures of human science and the postmodernism it spawned, the hypocrisies of disgraced authority, and modernism’s own relentless self-criticism?
I wish to argue for a version of common sense divorced from its historical errors as a form of prudential reasoning, meaning specifically an application of universal reasoning to private experience. The distance empiricism has put between its methods and ordinary reasoning has produced an inimitable success, but ordinary reasoning can close that gap not by attempting simplistic versions of the scientific method but by employing its core lesson: the separation of our judgments of truth from our preferences. This requires the antithesis of pragmatic and religious moral thinking. Rather than leaping to conclusions of truth derived from preference, bias, belief, or the “cash value” of immediate use, a practice that must invest judgments of truth with the distortions of desire to the confusion of both, our judgments of truth should be decided independently of and prior to the consideration of consequences those judgments might produce. Judgments of truth compose the prior conditions that judgments of goodness rely on for their accuracy. I refer here to hypothetical goods, those that mine experience for its utility. The invaluable gift science gives to common sense is the necessity of employing an act of severance that divorces judgments of truth from what ought to be consequent judgments of goodness, the exercise of which includes an active search for anomaly (see “The Act of Severance”). This is the antithesis of the confirmation bias that must follow a linkage of truth and goodness warrants by hypothetical desire. The ratiocinative distance required of judgment ought to be untainted by bias, preference, or the urge for premature closure. This imposes a quarantine on pragmatic desire, total reliance on authority, and temptations to prejudice. It requires effort in practice, but it rewards that effort with a clear-eyed framing of preference that follows determinations of truth, especially if moral ends are categorically determined in advance.
This extra effort also has its public benefit, for it removes the truth-seeker from what Plato called “the love of his own opinions” to allow for critical distance to consider other points of view without undue affection for his own, particularly in public spaces (see “Belief in the Public Square”). Neither postmodern views that privilege private conceptions as sacred nor authoritarian ones that do the same for religious dogma can allow that kind of openness to dissent, revision, and improvement of judgment. Both rather ensure the opposite: an irrational attachment that defies anomaly and defends its attachments despite contrary evidence. Finally, the act of severance clarifies the nature of moral direction so as to distinguish it from hypothetical temptations of utility. Kant regarded these temptations as so elemental to preference that morality must be completely separated from context, that categorical reasoning alone must guide deliberations of its nature rather than hypothetical thinking that must always be tempted to self-interest. He understood that nearly all judgments are tied to use, so they must always be tempted to premature closure so as to speed up our choices in experience and provide a spurious sense of certainty. The categorical imperative that guided Kant’s duty ethics did achieve that level of severance from utility, but it also provided too little incentive for persons to pursue it beyond an abstract devotion to duty. But surely moral reasoning is capable of arriving at an act of severance that divorces moral ends from hypotheticality without entirely removing them from common human concerns (see “Functional Natural Law and the Legality of Human Rights”)!
We have no guarantee that such an effort will lead us to moral truth or goodness. On the contrary, we have good reason for skepticism. First, no moral remedy can repair an unkind fate. Second, experience is variable, private, and self-interested. Third, even given clear moral goals to prefer, prudential reasoning is inexact and prone to error no matter how competently exercised. Think of prudence as a form of competence on the continuum of reasoning about experience. Empiricism lies at one extreme of rigor, and undistilled experience, the kind that blurs our days, at the other. The complexity of the one provides more reliable truths than the nimbleness of the other, but then that nimbleness is necessary for efficient choosing in daily life. Ordinary thinking must be both reliable and nimble as far as the situation allows. “Good enough” is not a property of truth, and if it were, not even empiricism could capture it since no truth in experience can be certain. But prudential reasoning is good enough to produce reliable judgments true to a preponderance of the evidence if leavened with a good-faith search for anomaly and a practiced effort to shut out preferential considerations in judgments of truth. Practice is a key contributor to its application, for prudence is a learned skill that only gradually results in competence. It is the effort of a mind accepting moral goals without regard to experience, and then applying whatever categorical end it has chosen only after seeking truth so as to mine all the moral goodness that is available to context. As such, prudential reasoning must be as reliable and nimble as the situation allows. It is analogous in this way to measurement. If I am laying out a running route, I needn’t bring a tape measure. That kind of precision is mismatched to the quality of judgment I seek when I measure my three miles.
Prudence requires a match between speed and reliability in judgment, framing its constituents by discerning its essentials, and fitting the judgment to the experience. While that is far from the laboratory, it is also far from the breathless multitasking that assaults daily living. Those truths inform and direct moral choice as a subsequent judgment.
But what of that lever that empiricism applied to commonsense reasoning, demonstrating its inadequacy to the complexity of reality? How could any ordinary reasoning prove up to the task of finding and pursuing categorical morality? Three considerations give us some hope that it can. First, despite reality’s complexity, natural science has demonstrated its competence to control nature to human ends, at least to a degree. That attests to the power of human reason to know the reality that frames its moral concerns even when knowledge is provisional and open to self-correction. Second, the commonsense reasoning so discredited by empirical science was both tainted by traditional considerations and too similar to ordinary experience. Its jumbling of truth and goodness judgments was anything but a prudent use of judgment, and it was further confused by the misleading and pseudo-scientific claims of the human sciences throughout the twentieth century. Third, many of the empirical studies that cast doubt upon commonsense reasoning do not impact human experience at all. I can hold the knowledge that my desk is solid and that it is mostly space at the atomic level as compatible judgments. Both are true, the former at the perceptual and the latter at the theoretical level. Students are sometimes shocked to hear that the Newtonian physics they learned in high school is incorrect at speeds approaching that of light, but since their classrooms rarely approach that speed, the simpler Newtonian physics provides accurate judgments for the classroom experience. Both relativistic and Newtonian physics are true, but at different levels of complexity. Euclidean geometry works on paper, Cartesian in space, and multi-dimensional geometry in a theoretical space-time at the edge of contemporary physics. All are true for the conditions in which they are employed.
The conditions for prudential reasoning are entirely conducive to its practice, provided it is employed with appropriate respect for the provisional truths it reveals and with an openness to improved reasoning or evidence as examined experience allows.
Its guiding principle cannot be utility or quality, for these goals always exist in service to some end beyond them. And that goal must be articulated prior to judging the truth of any particular experience and certainly prior to seeking its immediate use. The systematic end decided prior to any particular hypothetical one, the end for which all judgments of truth and utility are merely the means, is a moral judgment. A working moral theory directs all experience to its chosen end, but the situational truths that experience submits for inspection ought each to be dispassionately judged so that they may then reliably produce results conducing to hypothetical utility or quality, all in service to a directing morality. This is quite a responsibility, and its gravity is the source of both human dignity and human rights, for all of this conscientious choosing lands firmly in the lap of the individual moral agent (see “Needs and Rights”). That is the lesson we might have learned from the triumphs of science had the hypocrisies of authority, the errors of the human sciences, and the even deeper errors of postmodernism not intruded. A prudential reasoning about experience demands a ruthless self-correction that only universal reason is capable of providing, not only for our private pursuits but also for the public morality that must guide science, law, and custom (see “Natural Law and the Legality of Human Rights”).
No matter how demanding such a task seems in the abstract, we have no choice but to take it on, for we face choice at every moment of consciousness. And to echo the desperate pleas of an earlier moral crisis, what other resource do we have to guide moral choosing than our own best thinking? Perhaps through learning from history and our experiential errors, we can develop the skill of prudential reasoning. That skill may prove that we are not only capable of determining truth but also of sharpening our judgments to warrant them clearly, thus better shaping our options. “Better shaping” means moral choosing, even to a “utility of the furthest ends” that conduces to our flourishing (see “The Utility of Furthest Ends”).
The line of “good enough” in our commonsense judgments stops well short of empirical justifications, but for all their triumphs, even scientists must prove only good enough to reveal the truth and utility they so value. The contingent determinism they endorse, which grants them the predictability that powers their hypotheses, is belied by the sense of freedom they exercise in advancing them, yet somehow they soldier on, using the same prudential reasoning in approaching their disciplines as we should apply to our own judgments in experience.