The Limits of Empirical Science


  • Natural science is the only field of knowledge with consensual warrants
  • The definition of “science” is time-sensitive, having once meant only “systematic approach.”
  • What could be called “systematic” is also time-bound and evolved in the premodern era.
  • Thomas Aquinas began the restriction of meaning by distinguishing “rationality” from “apprehension,” though he thought revelations could be apprehended by the soul.
  • Dividing “systematic approaches” to knowledge as Aquinas did opened the door to further investigations of the nature of organized knowledge and produced multiple controversies.
  • In addressing these, we must avoid the “golden thread” reading that views all prior knowledge as a runway to our current understandings and therefore minimizes interpretations that were once dominant and are now considered outdated.
  • One such turn was the Protestant Reformation that assumed persons are capable of using faith to comprehend spiritual truths in defiance of institutional authority.
  • The Reformation was more than an epistemic crisis; it was also a desperate moral catastrophe, for divine authority had underwritten and mutualized all truth and goodness claims from before civilization’s recorded history.
  • Luther’s revolt opened the door to modernism, whose axioms of commitment held individual rational and moral autonomy as essential; unfortunately, eight generations and twenty million lives were lost in discovering and verifying these foundations for every assertion of truth or goodness.
  • The greatest loss was to trust, the sole axiomatic basis for authority, for now uses of power had to be sanctioned rather than trusted; this reliance on individual experience and universal reason became the axioms of modernity.
  • Because institutions are essential and their assumptions poorly understood, the decline of authority was not finalized until the twentieth century; in the meantime, a constant epistemic and moral conflict between authority and individual agency created social churn.
  • A golden thread history would ignore most of this confusion to focus on “the birth of science” that began with modernism, but even the founders of modern science could not limit their “science” to our contemporary definition, meaning that “science” was subject to the epistemic and moral conflicts that characterized modernism more generally.
  • “Systematic” investigation of organized knowledge produced empirical cannibalism, a generational conversation on epistemic truth; because authority’s truth and goodness claims had been mutually justifying, the empirical cannibals found it necessary to entertain deep doubts about the certainty of any knowledge not built upon experience.
  • But as experience is inherently privately processed, this restriction introduced still more uncertainty into the issue of “reliable knowledge” as the empirical thinkers engaged in a sustained critique of each other’s thinking through the seventeenth and eighteenth centuries.
  • This focus on experience required that it be “distilled,” subject to more rigorous limitations so as to eliminate sources of uncertainty and open to rational criticism, but while this effort was superior to “common sense” understandings of experience, the uniqueness of each moment and the privacy of our processing of it meant experience could never produce certain knowledge.
  • Empirical cannibalism produced such skepticism that naturalists were forced to extraordinary care in their observations and experimentations, leading to the standardization of the scientific method, a process that occupied most of the nineteenth century.
  • Despite deserved skepticism about the unreliability of experience, the specialization of ‘natural philosophy’ produced deep knowledge that slowly became linked into a spectacular, quilted, coherent, and justified reflection of reality itself.
  • Further, when the same rigor was introduced to experimentation, it produced technology of great social utility, again indicating that emerging empirical processes were applicable to reality and were therefore “reliable knowledge.”
  • What emerged in the second half of the nineteenth century was “true science” in a contemporary sense as natural science professionalized and further restricted itself.
  • Its maturation and technological products ensured this emerging scientific practice would have a role to play in the culture war between modernist and premodernist axioms of commitment that occupied the sixteenth through the nineteenth centuries.
  • One such development was a denial of science’s methodology and products that grew into the first mass movement in history; Romanticism was a revival of the connection between intuition and revelation that empiricism had denied.
  • A second response to natural science’s success was the slow disillusionment with “commonsense reasoning” as sufficient to interpret the complexities of reality and guide agency to moral choosing.
  • Both Romanticism and the discrediting of common sense were responses to the gradual self-restrictions of a scientific methodology that found reliable knowledge so difficult to attain that all moral issues must be put aside; Romanticism was a revival of “certain knowledge” through intuition; its appeal was heightened by the simultaneous discrediting of common sense.
  • By the mid-nineteenth century, Romanticism had begun to reveal its incoherencies, though it remains a force in contemporary cultures because of its simplistic access to value and its anti-authoritarian reliance on individual intuition, thus flattering agency; simultaneously, commonsense reasoning was further discredited by a series of theoretical interpretations that touched on nearly every moral and epistemic presumption that persons make in exercising choice.
  • The refinements of natural science in the second half of the nineteenth century continually restricted the kinds of experience it was capable of analyzing; they had to be perceptible, measurable, limitable, and reportable in order to be proper subjects for scientific study.
  • These limitations gradually produced a separation of expertise and empiricism, with expertise producing less reliable knowledge over a wider range of experiences.
  • As science professionalized, its work became bounded by disciplinary divisions and educational specialization in each discipline’s paradigms; it also adopted a strict peer review process of verifying results; in sum, science by the end of the nineteenth century had become a universal process of warranting claims to knowledge rather than merely a compendium; it became the greatest proof of modernist axioms of commitment.
  • This disciplinary process excluded outliers that became the pseudo-sciences that now populate the Internet.
  • By 1900, authority was entering its last crisis as empiricism was entering its century of triumph.
  • Modern axioms had triumphed over premodern ones in part because science and its technologies confirmed their utility; but the professionalization of science simultaneously began a retreat from expansive promises of a coming era of “scientific morality,” though modernism seemed impotent to provide any other kind.
  • This crisis centers on science’s inability to systematize judgments of quality and morality; some scientists and nearly all of the general public in technologically advanced nations were ignorant of this incapacity, and so they embraced scientism, the mistaken belief that science can provide guidance to qualitative questions of truth or goodness.
  • This quality problem was exacerbated by the expectations that empirical science had raised in the public mind: its ability to dig deeply into complexity, to deal systematically with whatever was found, and to fill needs of social utility even when common sense and authority had failed proved as corrosive to trust as the Reformation had been.
  • By 1920, three crises came together into one civilizational crisis: the utter failure of trust in authority clarified by World War I, the accelerating pace of societal change brought on largely by science’s products, and the utter inadequacy of commonsense thinking to resolve institutional quandaries and public moral direction, particularly in the face of scientific theories that were deeply counterintuitive, like Darwinism, Marxism, special relativity, and Freudianism.
  • The 1920s was the era of liberation from tradition, though this effort also continued throughout the century; it greatly damaged modernist axioms of commitment, all documented by the postmodernist analysis that emerged in the 1970s.
  • The twentieth century developed into a campaign of constant change, but with no public moral end, the direction of change and its directors became the stuff of endless culture wars, shooting wars, and revolutions, many based upon vague assertions of “rights” to individual self-authentication.
  • At the center of these societal conflicts was the moral basis of institutionalism itself: was it to be trust, interactive sanction, or total rejection of bad faith?
  • With authority’s demise, twentieth century intelligentsia conducted an autopsy on trust based upon a distrust of all exercises of power, particularly institutional power, but at the heart of this investigation was an unanswered question on publicly warranting claims to goodness.
  • Natural science’s success had resulted from a strict act of severance in which truth becomes the sole object of the investigation, meaning that utility of what was discovered had to be applied subsequently, but what methodology could supply that external source if not science itself?
  • This has produced a dualism of scientistic approaches. The first assumes that natural sciences like neurology and genetics will produce in time a “moral science,” which is a contravention of contemporary scientific principles and is likely to be a fool’s errand; a second cadre takes the act of severance seriously enough to deny science access to moral issues entirely and so judges that such matters must not be capable of systematic resolution at all; both responses disqualify empiricism from producing or even sanctioning public moral consensus.
  • One branch of science has eagerly embraced scientism: the human sciences.
  • They are founded on an attempted empirical analysis of felt human freedom and empirical prescriptions to direct and perfect it, both of which are impossible for them to achieve; they exemplify scientism today.
  • Though sharing science’s prestige, human sciences failed to unify their paradigms or develop metrics applicable to individual subjects, and from their beginnings in the Enlightenment, their interests always centered on the perfection of human nature.
  • Their work ignored the act of severance that natural science mastered in the nineteenth century; nevertheless, the public eagerly embraced their “scientific” analyses because they were broadly explanatory of social phenomena, simplistic, and morally prescriptive.
  • Practitioners in the human sciences were unable to limit their biases and their societal interests; in addition, ethical concerns limited the use of human subjects in experimentation; both were impediments to true empirical processes.
  • Every catastrophic societal error of the twentieth century is in some sense traceable to the human sciences; perhaps the most pernicious influences were efforts to develop “scientific” and “moral” solutions like communism, fascism and pragmatism to define human needs.
  • Human sciences have been successful in developing expertise in narrowed segments of human interest subject to the requirements of expertise and also in large scale population studies in which quantification can be applicable and even predictive, though without empirical reliability.
  • The influence of the human sciences in the twentieth century has camouflaged the incapacity of natural science to arrive at judgments of goodness, leaving natural science to proceed largely unguided or misguided in its pursuits.
  • Despite its limitations, natural science has some lessons to teach us: first, that the modernist axiom of universal reason is applicable to experience if sufficiently distilled, though these lessons cannot be moral ones; secondly, that employing an act of severance in ordinary experience can increase the reliability of nonempirical judgments.

A Term Seeking a Meaning

The gulf between our truth claims and our warrants for them is so frequently ignored that it is a pleasure to turn to a means of knowing that was built entirely on warranting its declarations. Empiricism, commonly known as natural science, is the most clearly defined contemporary justification for truth precisely because its methodology focuses powerfully on verification. Its clarity and practical value have earned the cultural respect it has received. For over a century it has been the most powerful weapon in the contemporary arsenal of verification (see “What Makes It True?”). Consequently, it has deservedly received thoughtful attention from academics and popular media. The happy result is a fair understanding of science as a justification for truth claims, and considering how poorly other warrants fare, that is a good sign, for we may use science’s processes as a partial model for more ordinary pursuits. But twentieth century history has shown that transfer to be full of dangerous temptations, so we should not think science a perfect model for all claims to truth (see “What Counts as Justification?”). Our conception of empiricism may be more transparent than other means of warrant, but the usual confusions attend its relation to less prestigious means of justification or to other human interests, so I turn to that task here.

Should you ask me to define “science,” I should have to ask the era you wish to discuss, for few terms have seen more variation over the centuries. To the ancients, it was thought to be simply some systematic approach. They spoke of “scientific mathematics” or the science of horsemanship or archery, indicating thought or skill (as late as 1927, the famed phenomenologist Martin Heidegger called his efforts in phenomenology an “epistemic science”). The term retained that sense of a simple focused study into the long era of authority’s dominance, essentially following Aristotle’s three-fold division of knowledge into natural philosophy, mathematics, and theology, though the term grew suffused with a respect for custom that stifled independence and novelty. One change from classical thinking did occur in medievalism. Theology was not a separate subject of the curriculum. It did not need to be, for it suffused every subject studied. Its justification, authority, offered the means by which serious thinkers could accept public claims to truth and goodness by a submission of trust (see “Knowledge, Trust, and Belief”). No alarm was raised at the possibility of rational contradiction for the simple reason that trust requires a surrender of the very rational agency the thinker might use to discover inconsistency.

By the thirteenth century, Thomas Aquinas had quite unintentionally begun a dissolution of an unquestioning trust in authority, though he was careful to enunciate a deference that his reason might have questioned had it been allowed to dominate his devotion. But even that limited venture into his own agency produced a discordant element, for he thought natural reason could be further divided into two distinct mental operations: apprehension and rationality. He maintained that all knowledge came through the senses, but did not restrict apprehension to sense data. Later thinkers would further limit apprehension to a contemporary understanding, calling it perception. In the nineteenth century that term came to include apprehension in the Thomistic sense, at least by those who used it to include intuitions that combine with perceptions to produce our current conceptual preference: experience.

Aquinas did not exclude any of the traditional applications of science-as-organized-thought: one could have a science of scripture or of mathematics independent of the perceptions. His attempt to underwrite faith by reason might seem a strange oversight on the part of the most thorough thinker in history, but Aquinas was hardly the first to think apprehension involves some “inner sight” that reveals indisputable truths through the operations of the supernatural on the mind. This conception of insight is obvious in Plato and constitutes a route to knowledge that religious authority has always sanctioned so long as it could be translated into dogma leading to public trust. So long as “science” included this capacity, it must prove subservient to the gnostic revelations that insight might claim and authority must formalize. Aquinas certainly could not establish a separation, for the Dominican priest thought natural reasoning must always serve the ends of Church authority, an acceptance he explicitly endorses throughout his monumental Summa Theologica. His great contribution was not to question authority but to think it consonant with close thinking about its nature, to subject even an inner light to natural reasoning. This proved to be an analysis later centuries could not support. From his day forward, reason and perception would be linked. The next seven centuries would be tasked with defining the workings of that link and properly limiting it. That process is not completed.

Even this truncated overview of natural science’s origins might tempt us to a common mistake. We like to work backward from the ostensibly superior vantage point of our current view, but that effort is inescapably flawed by the search for the “golden thread of correctness,” the contemporary lens that we bring to our inspection. We imagine some modern intuition must have imperfectly informed Aquinas’s efforts in preparation for the deep wisdom that is our own judgment of the matter, transforming him into an unwitting prophet of a coming scientific revolution. This cherry-picking is understandable for any historical retrospective, but it tends to foreshorten the generational courage required to challenge tradition and the manifold ways — many of them outliers to our own views — that individuals might frame such a challenge. We try to separate Pythagoras’s geometry from his belief in the divine quality of numerology or Aristotle’s careful observations of nature and politics from his conclusions that slavery and women’s inferiority were natural states. I mention this now because the greatest impetus to the development of empirical science as we know it today was the result of one such wrong turn, so our mistaken focus on the “golden thread” of evolving scientific enlightenment nearly always misses the meaning of that wrong turn as a result. One way to challenge religious authority is to sanction your own agency, rely upon the evidence of your own senses, learn from your own experiences, test your own conclusions. These are all science’s ways and every one of them was presaged by Aquinas and the scholastic movement he represented. But that path was not the one Western society chose to take in the first great revolution against authority in history.

Modernism’s Proudest Product

The crisis of apprehension came in the awful chaos that began with Martin Luther in 1517. It was not a revolt of reason against authority but of private belief, religious revelation, total gnostic conviction. The revolution that Luther began was as hostile to empirical progress as authority had been. He did not begin his revolution that way. When Luther spoke out against Catholic authority at the Diet of Worms in 1521, he defiantly threw down his own reasoning as superior. The “Ninety-Five Theses” that he had nailed to the church door in Wittenberg four years earlier was a catalogue of authority’s offenses against rationality, a self-evident chronicle of contradictions and hypocrisies. The appeal was to the laity’s reasoned experience of Catholicism’s many abuses. One only need read it once to understand what outrages the clergy had long perpetrated against Christendom. And at Worms, Luther made that appeal explicit, arguing that one ought not go against his own reason, that one “could do no other” than to respect it.

If we pick up that golden thread, we nod and agree, for that is surely our view of things. And we can then trace out the fabric of natural science being woven from what follows: to Galileo and his moons of Jupiter and to Copernicus, Kepler, and Newton. Ever upward!

But this is nothing like what happened, and for us to understand what did, we have to step outside our axioms of commitment, see Luther’s moment not in our terms but in his (see “The Axioms of Moral Systems”). Only by understanding that the full story is still being written can we avoid the hubris that has led us to our present moment.

For religious authority had underwritten every truth and goodness claim since before history was written, as the Pyramid of Cheops attests. Our penchant for the golden thread should not blind us to the miserable desperation of the Reformation effort to replace authority’s certainty. All of its indubitability was shattered, all trust was assaulted, over the next two centuries. The problem was that no other warrant could replace the trust that had illuminated every truth and goodness claim by divine light (see “Premodern Authority”). Luther himself later disavowed his own stand at Worms (“Reason is a whore, the greatest enemy faith has; it never comes to the aid of spiritual things, but more frequently than not struggles against the divine Word, treating with contempt all that emanates from God”). Worse, he endorsed the annihilation of German peasants who sought to bring their own understanding of the Bible’s truth to the Holy Roman Empire, resulting in the loss of 200,000 lives. Other believers saw other truths in the gossamer webs of their belief, a pluralism of private revelations and thoughtful apprehensions that clerical authority had forbidden and one that the trusting congregant, like Aquinas, would never consider. Following Luther’s deference to his own reading of sola scriptura, they espoused their own beliefs as revealed and inerrant truth superior to corrupted institutional authority and worthy of popular trust (see “A Problem with Sacred Texts”). But there were so many to choose from! Protestantism’s signature departure from Catholic clerical authority was a belief in the self-sufficiency of the Biblical word. But belief is inherently private and cannot automatically be converted to public trust (see “Religion and Truth”). But whether among Protestant revolutionaries or against their Catholic foes, the next eight generations found no sole scriptural message sufficient to bring disputants to a single Christian truth.
The Wars of the Reformation (1519-1688) cost twenty million lives. The worst of it is that no one knew how to rebuild trust, not only in religious faith but in all public declarations of truth or goodness.

We see continuity when we trace the golden thread, but people living through those millennial horrors saw none in their future. Authority of any stripe cannot simply repair the dissolution of trust that doubt inspires, and therefore it cannot have trust resubmitted without restoring agency to the doubting mind. Once rational agency is restored — and in the Reformation, that means it was simply gained after millennia of automated, childlike submission — the individual seeks some equally reliable authority to trust. This is the intuitive recourse once autonomy is acquired. But when a thousand clamoring voices, many of them former institutional authorities, are all shouting in your ear demanding your submission, which can be trusted? More simply, what can bring order to chaos when every hierarchy’s light of truth and goodness has been snuffed out? Where can one turn in a blackout of truth and the utter darkness of moral perplexity?

No reconciliation or remediation could be found until religious authority’s ultimate warrant had collapsed into chaos and the submission of trust could be stamped out. The disorder hardly ended at the church’s doors, for congregants must now view all truth and goodness claims that had made sense of their world with equal suspicion and without a means of discrimination since no one could say what God wanted. Or rather everyone was saying it.

One cannot live that way, and so reflexively persons sought a replacement for evaporated trust. The sole criterion in the beginning was the same level of certainty authority had conferred, and so any replacement for religion’s authority was expected to find equally irrefutable supports for its claims to truth and goodness. Those could not appear, though that realization was far too long in coming. In a traditional society, precedent sets the agenda. From the time of the pre-Socratics, institutional authority had claimed to know truth and goodness through the wisdom of orthodoxy: customary opinion. But with custom crumbling and authority destroyed by its own methodological weakness, a need coursed through the culture like a parching thirst (see “The Fragility of Religious Authority”). A historical view must comfort us that in the end at least a partial substitute was eventually found, but in the midst of that contention, no such golden thread could be woven into some new fabric of meaning and value. In the wake of the Reformation, clarity could only come from further failure. I wish to argue that at least some of that failure plagues us today.

What only gradually emerged from the chaos was not science, was nothing like it. It is true that Galileo said that only observation could find truth, and we applaud his timing the pendulum motion of chandeliers using his own pulse. But we ought not to forget where he performed this experiment: in Pisa’s cathedral. Nor ought we to ignore the intrusions of religious intuition in Galileo’s own thinking and in that of the other prophets of modern science. Picture him dedicating his speculative theories to the Pope and later denying them in the face of the Inquisition. Consider the six years Johannes Kepler spent attempting to squeeze Mars into a circular orbit despite convincing evidence to the contrary. His motive: a perfect God would only create perfectly circular orbits, not inferior ellipses. That was in 1609, at the dawn of what history would call “The Scientific Revolution.” Even the paragon of modern science was not immune. Isaac Newton spent the last years of his life seeking the numerical code God had hidden in the text of the Bible. Those we most associate with the birth of modern thinking, the philosophers who invented epistemology itself, were hardly free from a supernaturalism that summoned them to trust. Descartes, Locke, and Berkeley all claimed we could have certain, intuitive knowledge of God but could never have any certainty of our natural knowledge. Over generations, both points have been tested. Both are still contested and still disrupt public consensus today. So we should remember that even the greatest prophets of science accepted a set of warrants entirely different from our own, one in which supernatural truths impressed themselves upon reason, and divine revelation guided it to certainty. That is certainly not our view of things, but we should not applaud our own excellence unless we can explain why such belief is not superior, all sanction of the golden thread aside (see “Can Belief Be Knowledge?”).

It was only by trial and tears that what we think of as natural science was conceived from the travails of contested religious authority. And it was only by trial and error that the greatest boast of modernism became natural science itself. That took centuries, and in the process of refining its own search for truth, empiricism faced its own set of internal trials and external error.

Modernism’s defining axiom is to base all truth and goodness claims on universal reason and private experience. These by definition reject authority — to subject it to rational inspection quite dissolves and replaces the trust that is the engine of authority — and locate the power to arbitrate claims to truth and goodness in the individual. Modernism’s earliest defenders in the seventeenth century were epistemologists: theoreticians of the means by which mind knows reality. Medievalism had favored a teleological ontology: an explanation for the structure of reality infused with divine purpose. In that system, no doubt could attend the means of transmission, for truth and goodness claims were guaranteed and certain to those who had placed their trust in the authority that warranted them. To trust in the truth of divine command was to trust the entirety of its explanation for reality and the institutions that translated it for human consumption. Before the Reformation, any dissolution of trust involved only a relocation to another authority who claimed a truth more worthy of trust but appealing to the same ultimate authority (see “Authority, Trust, and Knowledge”). Until the Reformation, truth and goodness were as certain as they could be, and the social order derived from those certainties a brute reality closed to inquiry. To be sure, there were apostates, but they were either crushed or domesticated by the machinery of tradition. “The Apology of Socrates” is elegant testimony to the voluntary deference of rational and moral agency to civil and religious authority. I have mentioned the same subordination in Thomas Aquinas. Aristotle, that paragon of direct observation, was shrunk into just another authority by the late medieval era. It was this reputation as “the philosopher” that Francis Bacon, progenitor of the scientific method, so objected to in the seventeenth century.
It was only after all trust had fallen into ruins that any replacement warrant could be built. No competing explanation could match that surety and comprehensiveness. These new epistemologists of modernism found themselves treading on marshy ground, for no sooner did their reason and experience produce a replacement explanation than some other and more careful thinker critiqued it based upon some combination of his own experiences and his reflections on them (see “Modernism’s Midwives”). This new mode of thinking, this empirical cannibalism, seemed entirely unlike the rock-ribbed certainty of authority. It was perpetual revolution, constant doubt, ongoing uncertainty. It’s been that way ever since.

Only a total collapse of authority could have exchanged a confident warrant for one so tentative. It suffered at first in its contrast with authority that was discredited yet sorely missed. It was that yearning, the echo of which today is merely nostalgia, that has forced the constriction of science, always tempting the either/or of faith over reason. But this same temptation served to push modernist thinkers to extraordinary care in their self-critique, seeking the lost Eden of certainty through means that must keep them always in the wilderness of doubt. The warrants that guided their search would repeatedly prove themselves flawed. Private experience must rely on the senses before it can be filtered through individual judgment so that it can be bruited aloud. And then it must be communicated through an idiomatic language poorly suited to reality’s complexity, and yet again poured through a second experiential filter to be heard, understood, and agreed to. Finally, it has to be synthesized into a broadly conceptual understanding of truth that can be consensually embraced by universal reasoning. And this risky effort must always be subject to a process that is biased by the privacy of experience and deficiencies in reasoning and so is prone to prejudice, self-deception, and imperfect comprehension. How could any truth or goodness claim survive a rational critique of such a defective process as making a declarative sentence?

Perhaps this context explains an initial attraction to reason divorced from experience. This was the Cartesian solution. But those who placed their bets on a strict rationalism found themselves boxed out of the search fairly early. By the early eighteenth century, the argument that a stricter reasoning process might lead to certain knowledge had foundered upon the sterility of reasoning independent of perception. What can the mind reflect upon other than the products of experience? This route might find knowledge as certain as religious doctrine, but without an appeal to experience of what use might it be in guiding choice? Today’s formal logicians and logical positivists are still testing answers to that question.

Examined experience proved more promising. The earliest impulse in that direction was to examine the nature of experience itself, the kind of uncritical and everyday perception and reflection we all use constantly. The results were ugly, particularly when set against the crystalline logic of the geometric theorem or the theological infallibility of Church fathers. For a “mere empiricism” as expressed by an early modern thinker was nothing more than undistilled experience, a casual acceptance of sense data, the kind of thinking Aristotle joined to direct perception to categorize the world (see “Stereotypes and Categories”). It was the cumulative and corrective work of almost a century to recognize how defective and simplistic that view had been. One simply has to contrast the view of experience advanced by Locke in his Essay Concerning Human Understanding (1689) with Kant’s Critique of Pure Reason (1781) to plumb these depths. A pure rationalism might be the clearest loser from this effort, but undistilled experience was also cast into greater doubt by this generational effort to think through the reliability of perception and reasoning.

The natural philosophers of the seventeenth and eighteenth centuries used their critiques to refine their thinking and confine their conclusions to those that repetition and extremely close analysis could reveal. Experience, of course, resists that effort, for every one we encounter is uniquely sited in what filmmakers might call its frame. To see an experience truly, we must limit our focus to its elemental constituents. That examination requires a reliable repetition of the experience, for in the first go-round we cannot know what is important about it, how to isolate it from its context, or how to find the natural fault lines of analysis. That is hard, for that narrowing of focus must use reason to zoom into only those relevant factors that constitute the experience’s essential core and exclude extraneous ones, repeating the experience until study or skill has made it understood and thought has unwrapped its components. It was reason that required this narrowing of experience and made it possible, but that same reason prompted the skeptic David Hume to explain why the effort could not reach certainty. One element can never be duplicated: the time that shapes the context of the experience. It must change as we seek repetition, and that dissolves reliability and with it our confidence to forecast causes and effects. Just ask any gambler about that. This is the reason why undistilled experience is the most unreliable of correspondence truth tests, though we use it more promiscuously than it deserves because it is always at hand. We compliment ourselves by calling this common sense (see “Pure, Poor Systems of Knowledge”).

But even a loose analysis, as David Hume observed, must introduce more doubt. Causality, the root of all science, could never be found in the reality science observes but only in the explanations science infers: “…we are never sensible of any connexion betwixt causes and effects, and ’tis only by our experience of their constant conjunction that we can arrive at any knowledge of this relation.” The modernist effort to replace authority’s certainty with closely examined experience must fail. The degree of uncertainty this factor introduces took time to recognize and negotiate, but it did have the salutary effect of forcing students of nature to abandon undistilled experience and attend to very narrow slices of experience minutely observed and catalogued. Proto-scientists of the eighteenth century developed these skills of close observation through national or royal societies to which they presented their results. Controlling the frame of their investigation to what could be repetitively observed became part of their effort. Otherwise, their fellow seekers would find reason to object to their findings. Narrower was always better. That also took time to work through as students of natural philosophy became naturalists and then scientists by means of an increasing standardization of practice in the nineteenth century. Though it had failed in its epistemological quest for certainty, empiricism continued to pile up reams of new discoveries, launch revolutionary technologies, and pitch new theories that challenged tradition throughout the 1800s before erupting in a phantasm of invention in the next century. Was its continued acceptance merely a product of its pragmatic success even in the face of the doubt its own processes had revealed?

That might be a valid historical conclusion, but another oddity emerged from even immature scientific endeavors. “Narrower” must always imply “deeper,” and if recent developments have revealed anything, it is that there is no end to the profound discoveries of the natural sciences. But a strange thing was revealed by this sharpening of focus that convinced even skeptics that this new methodology was succeeding. From ornithology to optics, from astronomy to zoology, findings in one field supplemented those in another to assemble a multilayered jigsaw puzzle reflecting the very reality that philosophy had thought inhospitable to certain knowledge. True, new puzzles also emerged, but they arose from an ever deeper investigation that uncovered ever more layers of physical reality. One effect of this exponential growth of knowledge that emerged in the first half of the nineteenth century was specialization. As subject disciplines subdivided, the volume of available knowledge in each field of study increased beyond the competence of any single mind to master. These deeper refinements also became correlative, and then subdivided yet again. Every naturalist now found himself restricted to a single subdivision of empirical knowledge defined by the capacity of his own mind. At the same time he found it necessary to master at least the outlines of those disciplines that enmeshed with his own so as to grasp relevant connections. When the natural philosopher looked up from his study, he found kindred communities knitting other research together. Deeper and wider. The edifice of natural science was built fragment by fragment by the efforts of these laborers to construct the grand mirror of nature through sustained attention to the structure of each small piece. We have to see their labors as fed by the doubt that hollowed out their reliance on reasoning about experience, a stimulus to ever more careful investigation.

Naturalists’ devotion and care had the salubrious effect of cementing their entire explanatory structure more tightly to the reliability that had always been its goal, but it came at the expense of removing that structure from the access of non-specialists. As expertise in scientific study became specialized over the course of the nineteenth century, it discredited the undistilled experience that had launched its quest, subjecting it to a doubt proportional to science’s growing esteem and permanently separating science from common sense. Every scholarly effort to make the empirical process more reliable in its quest for certainty and every rational objection raised to that effort subjected ordinary reasoning to disrepute and further removed specialists’ understanding from ordinary thought. The loud defenses of common sense we see in nineteenth century literature would have been unnecessary a century before. A century later, they were indefensible.

The Promise and Perils of Professionalization

By the last decades of the eighteenth century, modernism had already begun displaying its moral vacuity. The thoughtful criticism of naturalists, modernism’s vanguard and apparent epitome, prompted two disjunctive responses in an increasingly literate culture in the West. The first was a full-throated rejection of the idea that ordinary experience requires correcting by careful reasoning, or by any reasoning at all. This was essentially a revival of belief. It embraced two-thirds of modernism’s core axiom: its individualism and its universalism. These are natural antitheses, and it took an ideology to emulsify them for public consumption, an effort that occupied public attention for all of the nineteenth century. It came to be called Romanticism. Rejecting modernism’s reliance on reason as the key to knowledge, Romantics returned private revelation to the center stage of understanding in very much the same way that the Reformation revolutionaries had done. It was the first mass movement of literate society in history. The Reformation’s reformers had fought against religious orthodoxy and authority. In a veiled compliment to the triumph of modernist axioms, Romantics saw the modernist reliance on reason, including science, as a new orthodoxy, a new authority.

This was all wrong, but the Romantics’ error was also a testimony to the remnant power of extant authority and the compromises the Enlightenment had made with tradition in its spasmodic adoption of modernism. By 1800, incompatibilities had begun to reveal their mutual repulsions. Though the consequences were hypocritical at worst and inconsistent at best, the concessions that participatory institutionalism made to traditional authority were more than irritants to public concord. The bargain was implicit. Both sides needed it, for modernism had not provided either certainty or moral clarity, so it relied upon institutional authority’s collaboration to disguise its own axiomatic defects. But despite their steadfast resistance to modernity, the old authorities found they needed to collaborate as well, if only to accommodate the rapidity of societal change powered by technology. Trust was perennially under assault as the social environment required individuals to respond and as modernist axioms became foundational for new institutions. As Romanticism launched its protest, this mélange of motives was unclear to all parties, but the resultant moral disjunction was growing more obvious by the year. Seen in this light, the Romantic revolt of 1800 was a prequel to the far more damaging crisis that would come a century later.

In its mature expression after 1815, Romanticism offered its consumers more than protest. It provided an alternative, not common sense but uncommon sensitivity to the entirety of experience. Though the distinction between undistilled reasoning and empirical logic had not yet been articulated, Romantics warned of the danger of both. They rejected the power of reason entirely. This meant that they must confront empiricism, arguing that it must “murder to dissect” and thus kill the meaning of the very experience it sought to explain by its analytic method. And unlike the naturalists who needed the patronage of established authorities, Romantics would have no truck with authority of any stripe, thus inspiring Thoreau’s war cry: “What I think is right is right: the heart’s emphasis is always right.” Romantics favored another route to truth they considered not only more reliable than science but indubitable. Certain knowledge had been the failed quest of empiricism from the beginning, but Romantics could offer their rapt readers more than sensibility to certain truth by their technique of receptivity to interiority. They promised certain goodness. Their method? Hardly the painstaking analysis of the naturalist but rather an omnivorous consumption of all of experience, and they relied on intuition to filter and refine it to perfect understanding. I won’t go into the loaded history of that term here, but I will refer you to other essays that explore the surprising roots of the Romantic movement and its still-powerful reverberations to our own day (see “Modernism and Its Discontents”). Intuition remains a defining influence in contemporary life (see “Postmodernism’s Unsettling Disagreements”).

The Romantics’ conception of its workings owed much to Immanuel Kant, who unfortunately used the word “intuition” to describe the product of the preconscious rational sorting of sense data the mind does before presenting its completed mimesis of reality to consciousness. Romantics borrowed a fuzzy grasp of the term to propose it to be not a species-specific brain function but rather the whispers of a God-in-nature to the sensitive soul, a revelation of absolute truth and goodness that enmeshed every person in a universal web of Nature whose vibrations guide preference inerrantly to truth and moral perfection. The wider the scope of experience or the more intense or exotic one could make it, the more awe it might produce and the clearer the pantheist voice of God (see “Awe”). By this method, Romanticism’s fans (the etymology traces to “a divinely-inspired insanity”) might avert the fate of the Reformation’s personalized insights, which made every reader of Holy Writ his own pope. Romantics’ scripture was read through the heart and presented in all of experience, but it offered the charm of universality without the work of thought combined with the thrill and utter conviction of divine revelation.

Romanticism was unsustainable, for the freshness of such a search for novel experience eventually stales, as most adolescents discover. By the middle of the nineteenth century Romanticism had been so formalized and domesticated in Western life that it might have been unrecognizable to its early champions. If discrediting science was Romanticism’s goal, it could not hope to prevail against the wave of technology that the nineteenth century produced: the greatest revolution in living in human history began simultaneously with the Romantic revolt. Even though their revolt was domesticated by its own inconsistencies and the clearcutting march of technology, the Romantics left to Western culture a legacy of distrust of both science and expertise coupled with an enthusiasm for raw experience that popular culture has elevated to a fetish.

It also posed a challenge that emerging natural science had to combat in the larger culture if only because it formed an alternative means of understanding experience. That challenge was likely not the major reason that naturalists during Romanticism’s heyday doubled down on the empirical bet, tightening and standardizing their methodology and beginning a century-long effort to reject outliers to its operation. Their painstaking refinements in observation, experimentation, and quantification were more a product of empirical cannibalism, the relentless self-critique that pushes true science into greater precision and deeper discovery.

To become true professionals, these proto-scientists had to invent a process for doing science and refine it to repair the deficiencies that individual experience and reasoning are heir to. Their gradual perfection of a methodology to limit an experience severely, repeat it as closely as possible, analyze its essentials and accidents, hypothesize, test and measure, and communicate the results was hugely successful.

Seeing broadly was never science’s project. Its depth of analytic effort transformed the casual naturalist of 1800 into the laboratory researcher of 1900 and required a methodological expertise that science students still find quite formidable and that non-scientists frequently misunderstand. Even today, humanities students embrace broad understanding rather than depth in their academic work, an inheritance of Romantic values quite unlike the specialized training and professional preparation of students of the sciences. This is but a remote echo of an undeclared battle for hearts and minds (perhaps better said, for hearts or minds) that dominated nineteenth century life. These two responses to the inadequacies of modernist moral prescription would duel for the soul of the culture, introducing a schizophrenia into its everyday life that has not dissipated entirely in ours (see “The Victorian Rift”).

Its very success in the 1800s made science’s growing expertise a moving target. Any medical student today can explain why. Aristotle had categorized everything: planets and politics, gardens and governments. And it is still true that rational categorization lies at the heart of both expertise and natural science. Naturalists, many of them dedicated amateurs with the wealth and social connections to devote their time and resources to their hobby, only gradually came to be experts in beetles or bats or benthic exploration. Many dilettantes spent their lives amidst the odor of alcohol and decay, moving from animal species to botany to chemical compounds, but the dedicated naturalist made his fame from specializing. Darwin, for instance, was an avid beetle collector who made his early scientific reputation with an exhaustive taxonomy of barnacles before beginning to focus on “the war of the species.” Gradually, specialization became the norm as low-hanging experimental and observational pickings were plucked, presented, and absorbed into various disciplinary divisions of scientific orthodoxy. The audience was the learned societies, which devoured technical studies in print much as academia does today. But these often deeply complex summations of long hours of painstaking work also packed lecture halls, where an eager public raptly followed expositions of geology or entomology that would no doubt put today’s audiences to sleep.

Almost imperceptibly, the dilettante became the expert. Before it can succeed in the effort to make sense of experience, expertise has to limit it to something reason can grapple with and experience can attempt to replicate. That was and is an ever larger problem as we grow in knowledge. Probably the last person to attempt an encyclopedic grasp of nature was the famed Prussian explorer Alexander von Humboldt (died 1859), whose quixotic stab at all empirical knowledge, Kosmos, is seldom read today. There could be no second Aristotle.

Not every phenomenon proved amenable to the growing expertise of practitioners, and it was the work of a century of effort to reveal which experiences were and which were not. Fortunately for their quest, proto-scientists’ trial-and-error stabs at professionalism happened upon techniques inherent to their means of justification that allowed self-correction. Reasoning may be uncertain, but it possesses the excellent quality of being open to analysis. In its purest form, mathematics, reasoning is so clear that one thinker can review her process of logic and so discover and correct error at any point in a superbly clear sequence. Math’s clarity derives from its artificiality: all of its reasoning is pure because none is subject to the ambiguities accompanying experience. Empirical reasoning cannot match that level of artificial certainty, but it can discover and correct errors in perception and reasoning about it by taking care in its processes. Authority can find no means to do that. Dispute must dissolve the trust that is the sole substitute for evidence. Undistilled experience cannot do that either, for it cannot allow us the space to attempt to repeat what is, after all, a unique set of experiential inputs. Practitioners found they could not replicate an experience, but they could learn to limit the variables that make it unique, and so conduct an experiment.

Our categorical reliance on causation, for instance, forces empiricists to be skeptical about their core rational principle linking causes and effects. Its corrective must always be the old saw, “correlation is not causation” (see “The Determinism Problem”). Our ability to break an experience into discrete components and attempt to repeat it allows us to test our notions of its causes or effects. None of this process is certain, nor is it easy to master. Yet the nineteenth century saw it mastered. By the sheer luck of having sanctioned universal reasoning applied to individual experience, modernism had found a lock-and-key of knowledge, though its opening proved a very slow process plagued by manifold errors and ongoing challenges.
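That old saw can be made concrete in a few lines. The sketch below is entirely my own illustration, no part of the historical record: two quantities move together only because a hidden third factor drives both, and an “experiment” that sets one of them independently of that factor dissolves the correlation.

```python
import random
import statistics

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

random.seed(0)
n = 5000

# Observation: a hidden common cause z drives both a and b.
z = [random.gauss(0, 1) for _ in range(n)]
a = [zi + random.gauss(0, 0.3) for zi in z]
b = [zi + random.gauss(0, 0.3) for zi in z]
observed = pearson(a, b)  # strongly correlated, yet neither causes the other

# Experiment: intervene, setting a at random regardless of z.
# b still follows z alone, so the correlation should vanish.
a_intervened = [random.gauss(0, 1) for _ in range(n)]
experimental = pearson(a_intervened, b)

print(f"observed r = {observed:.2f}, experimental r = {experimental:.2f}")
```

The observed coefficient comes out large while the experimental one hovers near zero, which is all the gambler’s lesson amounts to: only controlled repetition with the confounding context held apart, not mere co-occurrence, licenses a causal inference.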

Natural science found it had to settle on an imperfect judgment by a reasoned examination of the preponderance of evidence. Its standards for “preponderance” evolved just as its practices did. In assaying that evidence, empiricism emerged from expertise so gradually that most observers could not mark the moment of their separation and many contemporaries still cannot.

I offer two possibilities to define it, both involving the fastidiousness of the empirical enterprise that had the welcome effect of further focusing the expert’s attention.

The first was the acceptance of the only infinite and most precise language available to humans. One can be an expert at many things whose mastery may require no communication. But empirical science found its process of peer review and the interconnectedness of its disciplines so fundamental to its progress that it was forced to find a language suited to the precision of its efforts. We recognize “true” natural science today in part through the employment of the language of mathematics. At the frontier of disciplines, we always find a cadre of researchers doing theoretical work that relies extensively, even solely, on the use of mathematics to extend theory in the field.

That same recognizable structure of the discipline introduces a second means to distinguish true science from expertise and even less rigorous practitioners. It was a restriction of subject matter dictated by the need for quantification and verification. The concept of the paradigm was pioneered by Thomas Kuhn in 1962 to explain the consensual nature of scientific disciplines. He described it as a community agreeing to the overarching theoretical basis of the discipline. This has turned out to be a defining quality of a truly scientific effort, particularly when coupled with Karl Popper’s theory of falsifiability, which argues for the employment of constructive doubt in scientific endeavor. One advantage of this specificity is that it allows empirical science a gatekeeping mechanism that every aspiring scientist implicitly absorbs in her training.

Rapid professionalization resulted. As its knowledge quotient increased radically over the course of the nineteenth century, true science evolved into something greater than a quilted reflection of material reality, far more than a body of knowledge. It became a means of warranting knowledge, a verb rather than a noun. As the nineteenth century progressed, the active voice of that verb centered on the academic community, specifically as German researchers structured it. Their process for certifying practitioners, defining operating paradigms through creation of disciplinary departments in higher education, and structuring experimental procedures became the model for the western world. Kuhn’s view of science as a collective of interacting disciplinary communities was the most important outcome of the German model of science.

As expert naturalists toiled in the mid-nineteenth century to perfect their processes of observation and experimentation so as to reveal new truths about material reality, they were engaged in the mirroring practice of disqualifying old ones. Their growing expertise in using new and better tools in their work also forced them to reject those raw materials and techniques that failed to meet their increasingly stringent requirements, especially fanciful creations of popular Romantic imagination. For every dedicated amateur chemist there was the eager alchemist seeking the philosopher’s stone who found himself shut out of serious consideration. For every naturalist teasing out the structure of the very air there was another seeking the phlogiston that allowed combustion or the ether that carried light through space. For every anatomy student there were ten seekers of the physical location of the soul. As late as 1900, The Times of London offered a ten-thousand-pound prize to anyone who could prove the soul’s existence. Emanations and auras were photographed; dying patients were weighed at the moment of their passing. Impermeable cloth was laid over their bodies to capture the escaping spirit. These bizarre “experiments” are perhaps not considered as odd today as they were before the rise of the Internet. Someone is probably still working on efforts of this kind. But that was to be expected when everything could be considered scientific until what qualifies as “science” was made clear to practitioners and their education was standardized. Perhaps it was to delineate their profession from quackery that scientists in the nineteenth century and the next built up their methodological walls, not only for the entire structure of the scientific enterprise but for each discipline within it. The gold had to be separated from the dross.

In contemporary empirical science, academic certification, peer-reviewed publications, and professional associations all perform the gate-keeping function that excludes pseudoscience practitioners. That distinction is quite clear to those inside their scientific disciplines, but the line is not so clear to those prowling the frontiers of discovery. It may be argued that pseudoscience still performs a valuable service to true science. On the margins, there may be something to telepathy or hypnosis that scientists will one day allow as they have accepted acupuncture despite its violation of medical orthodoxy. In earth sciences, for instance, plate tectonics was a fringe theory until new evidence proved it a paradigm-changing basis for volcanology and seismology. Nearly every scientific discipline has undergone a similar paradigm shift, nearly always from ordinary practice in a discipline to what those ordinary practitioners dismissed as “pseudoscience” when it was done by moonstruck visionaries. So pseudoscience amateurs occasionally do break through to “real” scientists as they apply what is now a process of immense power to unlock the secrets of nature.

Hubris and Scientism

Were I weaving this history whole cloth from the golden thread, I would continue lauding empiricism’s dominance into and through the twentieth century, when its triumph was total. This is the story of science any educated person knows well. From subatomic quanta to multiverses, natural science has discovered ever deeper layers of complexity in the natural order. Its methodology is unsurpassed in discovering relationships of structure and change that are so rational they can be quantified by the ultimately sensitive logical language of mathematics. And not only discovered but predicted, for at the extremes of empiricism in the refined atmosphere of theoretical science, it is mathematics itself that models natural events too small, too fast, or too distant to observe or too vast to bring into the laboratory. Confirmation followed confirmation through the glory days of relativistic physics, electron microscopy, radio telescopy, x-rays, functional imaging, and on to computers that can crunch all of their data. These technologies allowed not only discovery but also dramatic improvements in human welfare that deserve the immense praise natural science has earned in public estimation. Toiling away in their fog of focus, empiricists may not fully see how thoroughly their method has touched the general public.

The public fascination with science in the twentieth century involved more celebrity than depth of understanding. At the dawn of the twenty-first, Time magazine named the most important persons of the last five hundred and last one hundred years. The overall winner was Gutenberg, the technologist who allowed Luther to launch the modernist movement. The twentieth century winners were two theorists: Einstein and Freud, the thinkers who explained physical and psychic phenomena so complex that they have never been observed directly. Even so, the throughline of the last few centuries confirms this thesis: the less the public could gauge scientific success using ordinary reasoning, the more they seemed to admire it for the technologies that contemporary life finds indispensable.

At this point, a favorable history of science might end with sober warnings that would otherwise be lost in the paean of praise, if only as a caution against science’s self-importance. But even these admonitions would contain an empirical appeal. What else can solve global climate change but new technologies and energy sources? What can resolve the threat of nuclear war but better alarms and defenses? What can forestall the dreaded singularity: that moment in the near future when artificial intelligence machines attain intelligence superior to the persons who programmed them? This last question must stop us cold because its nature is different. Science has offered no answer. It cannot even comprehend the question’s meaning. How does it define “intelligence”? What constitutes “superior” intelligence?

In 1953, IBM scientist Arthur Samuel invented a computing machine that could play checkers. Forty-three years later, IBM’s Deep Blue computer defeated the world’s chess champion, Garry Kasparov. Fifteen years after the chess match that shocked the world, an even more dazzling display of artificial intelligence occurred when IBM’s Watson computer defeated all-time Jeopardy champion Ken Jennings. IBM is now working on quantum computing to increase speed and capacity to challenge the eighty-six billion neurons in the human brain. You would think that by this time and with this sustained effort, some academic researcher would be capable of quantifying just what the singularity is so that we can notice when some AI computer manages to pass it and thereby, it is feared, threaten humanity. But no. The best metric we have is the famous Turing test: can a machine fool a human being in normal conversation? What kind of a metric is that? We might notice that this hardly qualifies as Deep Blue level brainpower. It is kindergarten inductive logic, if that. Why can’t science better gauge what intelligence is or when the singularity will happen?

This Singularity Problem is a challenge for science in the same sense that science has challenged modernism itself over the course of the 20th century. This pivot point concerns quality. So long as measurable truth itself is the goal, the solution to making more reliable judgments is fairly straightforward. Observations are inherently imprecise, so use mathematical language to make them more precise. Undistilled experience presents a succession of unique moments in the flow of daily life, so carefully structure repetitions and disqualify outlying results. Hypotheses can be tested, narrowed, and tested again. Laws, theories, and paradigms are challenged by new evidence or insoluble anomalies and so are revised or replaced, often in what Kuhn calls a “scientific revolution.” All this is inherent in what has now become “normal science.” But what happens when events are jumbled and unclear, conclusions uncertain, and induction as rough as a carpenter’s callus? In other words, when we judge not the quantity in life but its quality, and when we exercise this judgment not in the lab but by the kind of rough-and-ready induction every person uses every conscious minute of every day?

This kind of problem concerns ordinary judgments of truth suitable to conceptualizing the meaning of a concept like “intelligence” based on various private experiences. But the Singularity Problem offers a second and even more difficult challenge that involves what constitutes superior intelligence. Once one knows what intelligence is, one then must make a second judgment about the moment when a machine has more of it than a human being. Or is that better of it? But notice something crucial to this second appraisal. It cannot take place until the first one has been completed. We cannot know what is better until we know what is. And that too is a judgment of quality, but of an entirely different kind. It is a judgment of goodness rather than truth, and as even thinking about the Singularity Problem shows us, adequate judgments of the quality of goodness are much more difficult than those of truth, in part because they rely upon the adequacy of those prior judgments. Both of these challenges are central to the tragedy of the last century, when modernist axioms began to fail just as premodern ones did four hundred years earlier. This difficulty can be subsumed under an umbrella term, the quality problem: empirical science cannot answer questions of qualitative truth or qualitative goodness. For all of its triumphs, the empiricist faces these most difficult problems in human affairs as helpless as a naked, newborn babe. The twentieth century is the era when science came to great power, but like the tragic hero, it is the achievement of its greatest ambitions that precipitates the dizzying fall. Holding this understanding of the quality problem for empirical science in mind, let us return to its pivotal role in the history of the last century.

By 1900, three waves were aligning to aim their combined energies against advanced societies. First, authority’s final crisis had waxed and trust had waned, throwing the premodern institutions that relied upon trust into crisis. Second, empirical science had demonstrated that reason and closely examined experience did indeed make reality universally intelligible, and it had accelerated its delivery of technological marvels as proof. This success masked science’s quality problem. Finally, empiricism’s complexity kept putting its discoveries and processes of justifying truth claims beyond the reach of commonsense thinking, the kind modernist institutions relied upon to address the quality problem in government, education, diplomacy, and ethics. These three waves were epistemic challenges the twentieth century failed to understand or resolve, but as truth determinations must always precede and shape judgments of goodness, the real crisis was one of public moral consensus, which must be built upon discoverable public truths that are necessarily judgments of quality.

Keep in mind that none of this would have been fatal to qualitative judgments of either truth or goodness in another age because such judgments would have been far less essential to the premodern mind suspended in a womb of trust. Authority had always slowed social change so as to soothe anxieties that might dissolve trust, a once-successful effort we can now scarcely imagine, and communities would turn reflexively to its reassurances in turbulent times. Over the course of the nineteenth century, the same modernist axioms that had made space for empiricism had splintered authority, and by the turn of the twentieth century, trust had been rent. By World War I, the harmonious voices of authority had devolved into a croaking chorus of competitors for what little trust remained.

The only novelty of authorities’ strident demands for trust by the fin de siècle was their fugue-like repetition. This was to be their swan song, but traditional institutions had been tuning up for this finale for centuries. Traditionalists had warred on science from its infancy, beginning with the Catholic Inquisition and Index and continuing in the Romantic musings of Vico, Dilthey, and Newman. But the culture’s infatuation with modernism prompted such enthusiasm that religious revivalists were forced into an elegiac tone of nostalgia that resonates in the twentieth-century writings of Lewis, Belloc, and Chesterton. We can see it in today’s evangelical movement as a conscious rejection of modernist values and an anachronistic and fragmented embrace of the premodern (see “Theology’s Cloud of Unknowing”). It had long been a knee-jerk reaction to change and an ongoing appeal to the gnostic, and what unifies it even now in its variety is a rejection of modernist warrants — of which empiricism is the clearest example — in favor of one that is utterly at odds with them. Today’s religious nostalgists make the same case that Pope Pius IX made in his infamous “Syllabus of Errors” in 1864 (see “Tao and the Myth of Religious Return”). It is a lost hope to see tradition as a single authority worthy of trust. It is little wonder that change — political, cultural, or intellectual — has always been seen by religious reactionaries as the enemy and empiricism as the advance guard of the assault.

Traditionalists’ fears proved far more achievable than their desires. Empirical science’s dominance by 1900 had swept all of Western culture into revolution, the greatest of which was the industrial one. Old governmental structures built upon cooperative interchanges and the downward flow of power had long been challenged by new institutions championing the individual and change over social stasis. Democracy confronted nobility. Capitalist competition suppressed feudal corporatism. Money trumped birth. No compromise was possible between two orders of living, but time favored the modern over the premodern way of life if only because technology had so ruptured traditional ways. In the midst of the whirlwind, few saw the whole picture and fewer yet saw its cause: a growing vacuum of public moral consensus that had first emerged in the Reformation. Victoria was crowned in 1838. Her reign of more than six decades would see the professionalization of empiricism and the crisis of authority synchronize their effects into a single civilizational crisis over what might warrant public value in the absence of trust. Authority and its hierarchies wobbled as old institutions issued their final, failing appeals for public trust in the face of radical change and the triumph of modernist axioms. Trust could not survive such a whirlwind, and common sense could not reconcile the innumerable contradictions, hypocrisies, inconsistencies, and lapses in the moral no-man’s land that signaled the old order coming apart in the storm of technological change.

In the new century, all eyes turned to the one unambiguous success of the age, modernists with hope, premodernists with dread. Science was revolutionizing Western life and thought. With authority’s demise, it would be called upon to direct it. Moral confusion was driven by the same impetus that had launched modernism to begin with — the utter failure of religious authority — but it was now worsened by authority’s mismatch to a replacement set of deeply buried assumptions, the modernist axioms of commitment, that had sought to repair its failure. Tradition demanded submission, the surrender of the capacity to choose, and deference to longstanding practice. But for centuries, scientific discoveries and the technologies they allowed had revolutionized everyday life and implicitly upset past structures of social comity, opening new options. And always they showcased the power of human reason in experience. But it is precisely this application of undistilled reason to experience that the quality problem exposes and that empirical science must be impotent to resolve despite its wonders. Ultimately, the dispute was a moral stand-off: authority lacked the social trust to dictate, and empirical science lacked the means. So what could warrant public moral consensus?

In the face of the final dissolution of authority and the triumph of specialized reasoning, the culture looked to science both to identify moral ends and to measure progress in meeting them. It had proven capable of a similar success in pursuit of truth even as other sources failed, so in the continuing churn of culture produced in good part by the technology empiricism made possible, why couldn’t it be the new arbiter of social utility? Truth about reality seemed not to be common in two different senses: it was not universally distributed, and it was not ordinary. Scientific discoveries had forced this realization. The nineteenth century had had its Darwin and Marx to cast traditional verities into doubt with theories of formative forces hidden in plain sight in nature and society. What heated the popular imagination to the boiling point as the new century dawned was science’s promise to provide equally revolutionary solutions to social problems. Public interest fastened on simplistic versions of complex paradigms that popular media then offered as modern solutions to age-old moral dilemmas. All of these new approaches seemed wrapped in a fog of exoticism and gnostic revelation that promised the unveiling of new routes to societal improvement. If nature prescribes improvement in species, should we not expect the same process in the social order? And if that process was a ruthless competition for dominance, then human nature must be helpless to do otherwise. If one animal displaces another, ought not superior races displace inferior ones?

As expansive and ambitious as nineteenth-century social prescriptions like social Darwinism and communism had seemed, they were merely the appetizer for the radical assaults on reason itself launched at the beginning of the new century by Freud and Einstein. As incredible as empiricism’s findings might seem, they were repeatedly confirmed by other empirical work within and between disciplines. They revealed an external reality too weird to be understood by ordinary minds judging ordinary experience. And those minds were evidently prey to unconscious (or is it preconscious or subconscious?) influences that warp our understanding even before we apply it. Even scientists were shaken by their own efforts, as Einstein was by Planck’s quantum theory and Freud by Jung’s theory of the collective unconscious. Could the mind really contain recesses closed to consciousness? Could reality itself hold hidden realms closed to perception? The new science of psychology found nothing “common” about the dark recesses of the unconscious mind, and the Copenhagen Interpretation of quantum theory combined with general relativity induced the dizzying realization that the very fabric of space and time must be as illusory as a magician’s sleight of hand.

Grossly simplified versions of these theories circulated through the popular media. They were treated as voyages of discovery, and their inventors as the new conquistadors opening undiscovered continents to human knowledge. Any reader could see that natural science’s professionalized wonders had opened a reality that was nothing like the one ordinary experience perceived, yet one verified by all the strength of modernism’s proudest project. Whether one was looking out or looking in, everyday life seemed a thin veneer covering unplumbed depths. Freud published The Interpretation of Dreams in 1899. Never the modest one, he had asked his publisher to put its publication date as 1900 to emphasize its futurist vision. Einstein published his theory of special relativity in 1905. From the beginning of the twentieth century, psychology’s understanding of the unconscious and physics’ theory of relativity began their assimilation into popular culture as relativism, the plasticity of reason in private experience (see “What is the Virtual Circle?”).

Some practitioners, now working in a professionalized activity, recognized that natural science was being misunderstood. Its long, constrictive adolescence had demonstrated that true empiricism was not capable of finding the full range of truths that must always form the context of any search for the good. Granted, it had proven itself unsurpassed in those experiences open to its methods, but that alone severely limited its operation. So many human interests go beyond perception, connect to other values, or cannot be quantified or repeated. These limitations mean that many experiences simply vanish from scientific consideration. And even those that prove amenable to the rigors of science’s means of inspection must be viewed through the peepholes of each discipline’s paradigm. All of these methodological limits boded ill for empiricism’s task of moral guidance. It is still not apparent to most educated persons that natural science cannot replace religious orthodoxy, that its method must deny trust, challenge any authority, and refuse to seek the good by a methodology so competent to seek the truth.

Not all empiricists appreciated this limitation because their methodology had succeeded so well through centuries of refinement, an ignorance exacerbated by the demanding academic training necessary to enter the profession, which left little room in the science curriculum for any other subject knowledge. It is therefore not surprising that some scientists were mesmerized by the golden thread of their vocation’s progress. Also, amid a generalized moral crisis, they participated in the revolutionary change that sought their guidance. If they failed to understand the qualitative implications of the truths they discovered, we can be certain that the general public could not understand them either. And we ought not be surprised that neither could achieve perspective on the dangers these sorts of temptations posed to public moral comity.

All of these tensions came to crisis in World War I (1914–1918). It proved to be the defining moment of the new century, reorienting and limiting justificatory options and producing cultural crises as crippling as the Reformation wars that had challenged authority four hundred years earlier. World War I provided convincing and final proof of the utter bankruptcy of authority. The traditional orders of government, militarists, and clergy who had sent millions of their young countrymen to die in the trenches could hardly justify either the cause of the slaughter or its scientific efficiency. Nor could they reconcile the resentments of industrial workers, oppressed peasants, and exploited native peoples to pretensions of civilization and progress. An oppressive social order stereotyped the female half of their own populations. By 1919, their appeal to tradition had collapsed in plain sight amid war, plague, and totalitarian threat, completing a decline that had begun four hundred years before.

If trust had failed, so too did the reasoning of the war’s apologists who had sought to “make the world safe for democracy.” If popular cultures assumed they could turn away from tradition and seize public direction by the light of their own commonsense reasoning, the World War I era began to show them that there was no straight path to public moral comity. Having finally and convincingly vanquished authority, modernists standing in the ruins of a shattered order could at last confront the bankruptcy of trust they had long worked to expose. But the moment also revealed their own hypocrisies and accommodations to those institutions that had exploited the public trust: crown, church, caste, race, gender, and deference to old power. Every exercise of authority was now ripe for the modernist critique, and few saw any difference between the participatory institutions that modernism had launched and the traditional institutions that operated on trust. Both lay open to private agency. The stage was now set for science to step forward and lead the players…right into the problem of quality.

The 1920s were the denouement. At the apex of its influence, modernism began its century of decline. The wizened diplomats who met at Versailles to hammer out a new world order could not appreciate how literally their intentions would be realized. They were the last generation of authority. The next would be the lost generation, orphaned by the Great War, who would witness a world going mad. It was a decade of frenetic novelty in every field of cultural endeavor, accompanied by explosive economic and geopolitical challenges. Technology and literacy paved the way for the world consciousness that we take for granted today. Narrative media began their work of framing reality by showcasing a new kind of hero rebelling against a world without meaning (see “Freedom, Antihero, and Zeitgeist”). Nietzsche had seen it coming, and other disillusioned Romantics began to see it too. Who can whisper inerrant moral truths to intuitions if God is dead? Popular art was the vehicle that most viscerally communicated this empty horizon. It had begun with the Darwinian reaction against Romantic sentimentality in the last decades of the previous century. Literature called it “realism” and “naturalism.” After the horrors of the war, the nearly universal artistic aim was to wipe away every last vestige of tradition and sentiment. It slowly matured into an aesthetic, alienism, whose intent was to distance the present from the past: to war against traditional norms and forms, self-delusion, and hypocrisy (see “Three Portraits”).

Rebellion in the name of personal and artistic freedom was both exhilarating and desperate, an unmistakable turning of a corner. But to what end? The war that had cost twenty million lives was in retrospect an exercise as futile as the crossing of no-man’s land against the machine gun and the howitzer over barbed wire, all technology’s products. The cause of this slaughter was such a mystery that the U.S. Congress’s Nye Committee was still investigating it seventeen years later. The Spanish Flu pandemic of 1918–19 cost another fifty million lives, its cause also unknown. The body count of twentieth-century slaughters continued through the Russian Revolution, World War II, the Chinese Revolution, and too many brushfire wars, anticolonial movements, and civil wars to mention, all of them accumulating to the century’s most accurate neologism: genocide.

It was the era of various “liberation movements” and “rights” campaigns. All were vigorous choreographies of that most fundamental of rights now fully and publicly expressed: the freedom to become something else. But what else (see “Needs and Rights”)? The deeper one probes into the history of the twentieth century, the clearer its moral anomie and vacuity. Seen at this scale and in retrospect, the emptiness of this grand vision of liberation from restraint had been fully predicted by premodern nostalgists defending old political, religious, economic, or societal institutions, a process that had begun with Edmund Burke’s warnings on the French Revolution. These efforts were always too little and too late, but they were powerful enough to launch repeated authoritarian movements over the course of the twentieth century in the doomed hope of reviving public trust, or at least public order or common purpose (see “Recent Authoritarianism”).

By the 1920’s, the full scope of this new epistemic crisis began to break into popular consciousness. The autopsy for trust, the lifeblood of civilized life since it first stumbled out of its earliest river cradlelands, could not begin until authority lay lifeless on the examining table. Trust had to die to be investigated and its unique qualities as a public moral warrant to be inspected by the same rational agency that had finally killed it. The postmortem occupied intellectuals and alienist artists for most of the twentieth century. Here is what they discovered.

Trust is unique among public justifications. The same warrant that supports the truth of God’s omnipotence supports the imperatives of the Ten Commandments. Both are sustained by the same submission of private rational and moral agency. What now is an intensely personalized commitment we cannot even articulate was once the grammar of public values and the maker of private identity. Its language was all imperatives, declaratives, and exclamatory fealty to institutions that denied interrogatives. Modernism freed individuals to find their own truth, but the gift, in Sartre’s words, “condemned man to be free” to find his own good as well. The collapse of authority had severed that linkage, and goodness claims were cast into a doubt that modernism had found no means to resolve. So long as authority retained some cultural currency, it could continue to claim the moral high ground, but it could not hold it against the challenges of modernist warrants, the smashing of trust by warring religious factions, the contradictory truth claims of natural science, and the piling up of hypocrisies that authority and its Quisling modernist collaborators perpetrated on a rapidly changing social landscape.

The autopsy on trust was conducted by a cadre of mainly French intellectuals between and just after the wars, but their analyses were fractured by conceptual disagreements from the start (see “One Postmodern Sentence”). It finally resolved into a working replacement for modernist axioms of commitment by the 1970s, but this postmodern critique remained more diagnostic than curative. It merely added another epistemic nostrum to a century that required stronger moral medicine to heal its public disorder. It failed to solve the quality problem of scientistic modernism and the trust problem of premodernism and merely added its own suspicions, which have permeated secular societies. It is science’s fate to be at the center of the postmodern problem as it has been of so many others because the diagnostic language of the postmodern critique borrows the methodology, if not the rigor, of empirical science to frame what is in essence a Romantic protest. If one strips away the gnostic quality of that protest, it becomes very clear that even if this critique describes qualitative truth by a pseudoscientific methodology, it certainly cannot even begin to prescribe the goodness that description must allow. Seeing why that is true will help us understand why science has been so monumentally effective at finding certain kinds of truth but also why it must prove entirely incompetent to define how those truths ought to be exploited.

If the nineteenth-century professionalization of science is to be understood, it must be seen as an expansion of empiricism’s power to find truth coupled with a contraction of the methods by which it could be found. It was just this consensual process of “doing” science that made it so successful. As we have seen, this scientific methodology intensified an experience’s focus to minimize the variables and approximations of undistilled experience and to maximize the application of reason to that very narrow but very deep focus on one aspect of one experience. One of the hardest variables to eliminate was confirmation bias, the very natural human desire to have the experiment confirm the presumptions one brings to it: the hypotheses that are being tested, the laws and theories that it will confirm or contradict. We are not by nature dispassionate participants in our own lives. Commonsense reasoning is inferior to empirical processes in large part because we are so used to exploiting an experience for its goods, mining the moment for its utility. The quests for truth and goodness are so intertwined in ordinary experience that we frequently don’t even notice that we think it good, for instance, to go on to another sentence after we see the period marking this one. We live in a constant cycle of understanding an experience so as to utilize it, a process that marks almost every conscious moment of experience (see “Truth and Goodness Do a Dance”). To separate our judgments of the truth of a moment from the good we can derive from it is so alien to our nature that it takes an intense focus just to notice it, much less to defer it. We cannot consider when artificial intelligence will prove superior until we can define what intelligence is: the Singularity Problem again. First truth, then goodness.
It is precisely this control that most characterizes the difference between empirical thinking and commonsense reasoning: true science abstains entirely from judgments of the good so as to avoid biasing its rigorous judgments of the true. Engaging this act of severance between knowing the true and seeking the good has proved the key to the lock of scientific truth. It is only by engaging it that science’s methodology has uncovered so many of nature’s secrets (see “The Act of Severance”).

You will notice some fundamental differences between judging truth and goodness. The most basic is that every determination of goodness concerns a reality that does not exist. It is in the future and so is contingent, non-material, and indeterminate. These are just the kinds of characteristics that empiricism is least able to engage. They are, in a word, qualitative factors entirely unapproachable by the methodology that has given science its greatest success. The irony is one entirely suited to the twentieth century: popular sanction of science’s capacity to resolve our current moral crisis was founded upon science’s mastery of the act of severance, but this mastery prohibits science from even seeing, much less solving, that moral crisis.

The self-effacement of science from judgments of quality simply removes such questions from empirical existence, the equivalent of the old joke about the man searching under a street lamp for a quarter he had lost down the street “because the light is better here.” This poses a problem for the strict empiricist who recognizes the act of severance but not the necessity for some other kind of qualitative judgment. For instance, the Nobel laureate physicist Frank Wilczek asks a profound question in his 2021 work, Fundamentals: Ten Keys to Reality. “What is it, in the physical universe, that embodies ‘human purposes’?” He answers his own question thus: “If we try to define ‘human purposes’ precisely, we risk a rapid plunge into the murky depths of vague metaphysics.” For him and for most other natural scientists of the twentieth century, the necessity for qualitative judgments of goodness simply vanishes because the act of severance removes it from empirical consideration.

But not every physical scientist recognizes the quality problem or thinks questions of goodness invisible to science. A typical approach is neuroscientist Sam Harris’s promise of “a science of good and evil” as a chapter title in his 2004 bestseller The End of Faith. His conclusion is that “a scientific understanding of the link between intentions, human relationships, and states of happiness would have much to say about the nature of good and evil and about the proper response to the moral transgressions of others.” If you are unfamiliar with the problem of quality and sympathetic to the golden thread of scientistic progress, you will nod in agreement at Harris’s optimistic prediction that empiricism can tell us something valuable about “states of happiness.” And if you have not considered the act of severance, as Harris obviously has failed to do, you will look forward to the era when a science of morality will establish codes of ethics for all of us just as it has for the many medical specialties with which he is familiar. And you will defer to scientists even when their decisions have nothing to do with the empirical process.

Some brain scientists predict that the era of fMRI machines and advances in mapping the brain’s workings will prove the mind simply an illusion and self-agency an impediment to progress. All such problems would be shown to be as illusory as the music of the spheres once that pesky “hard problem” of felt human freedom could be explained by deterministic science. Harris has something to say about that as well: “Although we can find no room for it in the causal order, the notion of free will is still accorded a remarkable deference in philosophical and scientific literature, even by scientists who believe (sic) that the mind is entirely dependent on the workings of the brain.” Think of the mental knots one has to tie to imagine that scientists will one day decide to say that their power to decide is a self-delusion.

Scientists deeply immersed in their specialties, spending their entire lives deciding upon empirical truths, can perhaps be forgiven for forgetting that their hypotheses are proof of the very freedom that the experiments proving them deny. The “philosophical literature” Harris disparages has engaged the free will problem, in some cases with frankness and subtlety. Immanuel Kant’s famous third antinomy explores this paradox clearly. Kant demonstrates convincingly that felt human freedom, what he calls “spontaneity,” will never yield to deterministic prediction (see “The Essential Transcendental Argument”). This is not, of course, an empirical judgment. It is a judgment of quality, but anyone who reads Kant must be aware it is based on something other than common sense.

If it is not empirical and it is not commonsense reasoning, what can we call the kind of thoughtful yet non-empirical judgment that Kant employs? It seems deeply inductive and yet is not quantitative. Rather, it probes deeply into both quality and experience, but not the kind of experience the laboratory examines. Kant’s thinking seems the opposite of the laboratory’s painstaking elimination of variables in isolating the essential elements of a single experience. On the contrary, Kant’s brand of reasoning seeks the essential elements of all of experience, meaning everyone’s. But how can the phenomenological particularity of each person’s experience be somehow assimilated into any composited, and more importantly warranted, truth about experience itself? Would such a statement be meaningful despite the warnings of philosophers of language like Wittgenstein, pragmatists like James, and skeptics like Wilczek?

If we have any hope of finding public moral guidance, we must seek some thinking space between the limitations of undistilled reasoning and of empirical science, of private realities and scientistic probability. That effort must begin with improved clarity about the conceptual nature of the conversation that will produce usable results, one that respects the potential of the modernist axioms of universal reasoning and private experience. This has been my purpose in this history of science: to establish a categorical understanding of what it is and can do as well as what it is not and cannot do. So let us try using some inductive analysis and conceptualization to investigate the nature of the problem of quality and the act of severance for science. This is neither empirical rigor nor expert judgment, but if conceptual limits are rationally complementary, it can be thought a competent judgment by a preponderance of the evidence. In matters of public morality, that will have to be sufficient.

On the one hand, some empiricists recognize the quality issue and consider it insuperable. They endorse Wilczek’s view: natural science cannot find goodness, and if it cannot find it, it either does not exist or cannot be found. Others ignore or minimize science’s qualitative limits and the act of severance, as illustrated by Harris’s plan for a “science of morality” that replaces commonsense reasoning with research-based guidance.

Something interesting happens as soon as the observer seeks to utilize results rather than simply know them: she must choose a desirable outcome, decide upon some use for the truth she has uncovered in the natural world. But nothing in that world nor in her efforts to this point can impart what outcome she ought to seek. No matter how well she comprehends the truth of an experience or how deeply she analyzes its constituents, nothing in physical reality can tell her what to do with what she discovers. To put it bluntly, nothing in the scientific process can transform a description into a prescription. Or even more simply, we can find no ought from is. The researcher must introduce some qualitative good into the process that has served her so well to find the truth, and that judgment can never be empirical (see “What Do We Mean by ‘Good’?”).

The implications of this step away from scientific competence are seldom appreciated for the chasm it represents, even by practitioners. Frequently it is such a small step that it goes unnoticed. For example, applied science is distinct from pure science because it seeks to improve the world rather than simply know it. And while this task may appear a natural consequence of the scientific process, it is not at all empirical. Metallurgy, chemistry, and physics may provide the knowledge the materials scientist needs to construct a suitable elevator cable, but sizing the cable can happen only after an architect decides that an elevator would be good to install and a cable good to support it. The physician values her patient’s health, but none of her medical acumen can inform her that health is worth valuing. Even the scientific method itself is simply a convenient tool for finding quantifiable truth whose practice can never be verified by its own standards. Should some other means prove better, scientists would adopt it (see “The Latest Creationism Debate”). But deciding upon what is better cannot be an experimental result. “Better” is a goodness term. This much is clear: every good is imported into the empirical process. None is found by it.

No one graced by the wealth of technology in contemporary life would deny that the utility of science’s products is the real reason it is regarded as the greatest success of the modernist axioms of commitment. Technology made life better by lessening manual labor, producing affordable products, improving human health, speeding transportation, and in a thousand other ways. Of course, it has not been an unmixed blessing, but nothing in the science that produces that technology allows us to make that judgment. Without this input, natural science would be an inert observational process producing nothing but a clearer picture of the material reality that surrounds us. Every scientist would resemble the astronomer peering at some unimaginably remote galaxy: a reality to be seen but not affected, studied but not made use of.

This separation between reality and its uses was initially ignored for two simple reasons. First, proto-science had no real methodology, so no guardrails were at first available to limit its practices, as has been discussed. Second, in its early applications, the need for discovery and invention was more obvious than the means to satisfy it. Early technologies were responses to the most obvious of human needs in health, labor, and safety, and were moved by common sense and pragmatic motives accessible to anyone inhabiting the experiences they improved. If a sailing rig or a hull’s shape allowed cargo to be moved more quickly or safely, if a plow could be fashioned of a stronger material that would less often break upon the buried stone, if an infectious disease might be averted by sweet-scented herbs, the simplest inductive reason alone could make the obvious connection between the truth thus known and the good it satisfied. All science at first was applied science.

Its professionalization began to break that simplest of bonds between knowing the truth and exploiting the goods it offers to preferential freedom. As discovery piled upon discovery and invention upon invention in an ever-increasing spiral of complexity, as disciplines began to deepen and separate and practitioners to immerse their entire lives in their fields of study, few thought to question their own competence to use the same methodologies so valuable in finding truth to find its utility as well. This gradually led to a covert and largely unnoticed failure that we now call scientism: not a truth-finding but a goodness-seeking assumption deeply buried within emerging scientific practice. It has haunted the last two centuries of scientific progress. It increased in importance as science began its long professionalization in the nineteenth century, a process that proved the inadequacy of common sense. In one of those tragic historical coincidences that produced our present crisis, it emerged just as authority began its final decline in the last half of the nineteenth century. So it was a predictable response to a society starving for public moral direction and a resounding smack-down to traditional authority’s hostility to empiricism’s reliance on rational autonomy. It was also an answer to the Romantic movement’s absurd reliance on a pantheistic God breathing inerrant truth into sincere hearts, imparting certain moral clarity as it did so. But here is the rub: neither traditional authority nor Romantic pantheism could be refuted by the methods each employed. They were coherent truth-and-value-seeking systems. Scientism is not scientific and is actually refuted by the very science it seeks to set up as its arbiter of value.
Only the long, slow adolescence of natural science and the simultaneous wasting away of public moral authority permitted scientism to expand like a heated gas into the role of public moral prescriber.

Human Science

But that fundamental incapacity was enormously complicated because one particular sort of science did promise to guide persons to moral truth, to ignore the problem of quality and the act of severance as well as the black box of human free will. As the twentieth century unfolded its successive catastrophes, true empiricists continued their retreat from prescription even as they continued their descriptive triumphs. But another branch dove right into the empty pool of moral vacuity. This was a bundle of studies whose subject explicitly concerns what deterministic natural science must ignore or deny: human freedom. While it is indisputably true that this is the proper concern of moral prescription, since moral preference must involve those same faculties, I hope it is obvious that it is not at all available to empirical science. Viewed in the light of natural science’s self-imposed restrictions, it is understandable that these human sciences stepped in to fill the moral vacuum. Tragically, that does not make their intrusion any more beneficial (see “The Calamity of the Human Sciences”). Because science’s limitations took so long to be understood even by practitioners, recognizing qualitative limits and the act of severance took even longer for the human sciences. Their unavoidable dependence on scientism is still not fully appreciated by practitioners, who bask in the reflected success of natural science, nor is it acknowledged by the public.

In part, this failure can be traced to the birth of natural science in the seventeenth century. Remember that it began as a simple, systematic search for knowledge, so all sorts of subjects were considered suitable, and the search for moral improvement had been a “science” of systematic study from long before real science began. It seemed natural, then, that the Scientific Revolution should have its echo in the Enlightenment, whose clear promise was human perfection, “the science of man.” Even in these early stabs at modernism, these subjects confronted moral questions far better served by philosophical inquiry than by empirical methods, especially as the meaning of “empirical” evolved. As Victorianism witnessed the constriction of empirical standards, it saw correlative efforts to develop “human sciences” as reliable as natural science.

In economics, sociology, criminology, anthropology, education, ethnology, and especially psychology, these fields attempted to develop the expertise and the experimental processes that would yield reliable knowledge of truth on a par with that produced by empirical studies. Their practitioners modeled their disciplines on the hard sciences, and they assumed their place in university curricula and shared science’s growing prestige. But they saw their function a bit differently. They thought themselves always on the cusp of fulfilling Enlightenment dreams of progress and perfection. These are moral goals whose attainment requires defining moral ends (see “What Do We Mean by ‘Morality’?”). They practiced what would become human sciences, as did the founders of great social movements like Marxism, fascism, and utilitarianism. In the post-World War I climate of moral desperation, their theories permeated the culture through and through. If hard science could find truth in nature, then surely the human sciences could do the same where it really counts: in human nature, all in pursuit of mental and social health. The natural turn to the human sciences began well before World War I, but in the war’s awful wake they promised to fill the empty space left by the growing rigor of the natural sciences, the collapse of common sense, and the final collapse of religious authority.

Diverse and even contradictory lessons were to be gleaned from the work of these inquiries because of their generality and completeness. Capitalists looked to the inevitable rise of social Darwinism while communists embraced its inevitable fall. Relativity taught that spatial relationships were affected by the observer’s position, so phenomenologists argued that subjectivity was just as situational, while Constructivist educators pushed for making every classroom a laboratory of societal adaptation. Never mind that these and a thousand other prognoses of the human sciences inevitably conflicted. If truths in space and time are relative to the observer, why should moral truth not be relative to culture or a mysterious upwelling from the dark depths of the unconscious?

It is impossible for the human sciences to avoid stepping over the act of severance for the simple reason that the central concern of each is the human person, whose felt preferential freedom is her most precious possession. So it was impossible for them to avoid scientism. At a truth-finding level, they have always been stymied by the quality problem. Discoveries in the natural sciences only set the table for preferential freedom to satisfy itself. Finding truth is the necessary prelude for choosing among its possible goods, a choice closed to empirical inquiry and too situational to be open to expertise. So if the human sciences have to step over that chasm separating truth and goodness, they are sure to stumble if they think they can use the methodology of true empiricism. The “black box” of the human will can never be open to perceptual study except in a neurological sense that might fully interpret brain without ever touching upon mind, that might experimentally confront motivation but be blind to intention. Every human science must face the unpredictability of individual free will in experience, a force not even its possessor may be said to understand and one whose operation can never be perceived from the outside or predicted in advance of preference (see “A Preface to the Determinism Problem”). This same mystery motivates human behavior in ways that must stymie causality and therefore prediction.

The sweeping generalizations of the human sciences are as a result not empirical, for they violate the determinism that allows prediction and so can be neither predictive nor verifiable. Unless severely restricted in scope, they do not even qualify as available to expertise (see “Expertise”). They attract us with their broad explanatory power, but built into their theories is the escape hatch of human agency and its resulting unpredictability. Most crucially, they are not falsifiable. They are constructed so that contradictory evidence can easily be incorporated into hypotheses flexible enough to accommodate any evidential anomaly introduced by the black box of human freedom. And practitioners find that elasticity of theory an irresistible lure to their biases. That same lack of rigor allows a constant churn of theoretical structures. Their inadequacy makes space for the intrusion of external norming that corrupts their efforts because these norms can never be empirical. In sum, the paradigms of human science inevitably fracture into competing and contradictory explanations because their evidence is ambiguous, their theories amorphous, and their hypotheses slanted by confirmation bias. It is only human to see what verifies and ignore what contradicts our understanding. Empirical science took four hundred years to learn that lesson, but the human sciences are only beginning to learn it in our own day.

And there are practical obstacles also. There are no “pure human sciences” because the human will that is their focus can never be seen purely and the experiments that might test it would violate the moral dignity of the potential subject. Because they are poorly predictive of individual will, they also lack the utility that makes natural science discoveries so potent. I doubt we could name a single unambiguously useful product of human science paradigms but can name a thousand errors and societal harms. Their work provokes contention rather than gratitude, in part because they thrive in academic environments that fertilize creativity and dispute. Their manifold failures should have discredited them sooner and would have if a moral vacuum had not permeated Western culture.

The human sciences concern the ends of choosing, ends that are moral, not perceptual. But because they have always concerned human interests, practitioners in the human sciences have never been able to see that, permeating their theories with premature closure shaped by their prior moral preferences and tainting their hypotheses with untestable assumptions. They will deny this, but their manifold paradigms cry out for a preference of the conceptual over the empirical in theory, and their sloppy observations of human behavior beg for a fill-in-the-blanks supposition of human intent in practice.

The broadness of their disciplines inevitably touches upon related but equally mysterious aspects of individual agency, adding further variables to their studies. Because they wish their theories to be true, they nudge observation and evidence so as to make them so. Like the believer who sees God’s mercy in the one survivor of the church collapse that kills hundreds, they see the proof of their chosen paradigm in the evidence they cherry-pick from their observations, and in finding it, they contend with other practitioners advancing other paradigms in the same field with the same kinds of evidence. So do the human sciences more resemble religion than natural science? They do in binding their truth claims to prior determinations of moral goodness. No one can read Marx without detecting his nostalgia for a lost medieval corporatism or Freud without seeing his hostility to religion. Durkheim’s lust for academic acceptance, Mead’s admiration for sexual freedom, von Mises’s fear of communism: human scientists not only cannot enter the black box of others’ wills but cannot escape its intrusion into their own theoretical frameworks. Nevertheless, their theories became the unreliable moral guides of the twentieth century and their contentious “expertise” the director of its public life. Their scientism has misdirected public morality for a century.

We have suffered through so many of these depredations that it is difficult to choose a single example. But we can find human pseudoscience sparking all the great tragedies of the twentieth century. Which view of “human perfectionism” should we pluck from history to illustrate its failures? Examine Marxism’s grand narratives built upon historicism and sociology, Freudianism’s utterly fascinating but by now thoroughly discredited theories of staged development, fascism’s biased ethnology and anthropology, or capitalism’s blasé acceptance of boom/bust cycles and inevitable wealth disparities as the price to be paid for material advancement. Each example elevates some dream of personal or social perfection unmoored from empirical evidence under the scientistic heading of “progress.”

Perhaps the most pertinent example of the attempt to mingle scientific principles with moral weight is one that entered the culture at the dawn of the twentieth century, an effort that still permeates and corrupts contemporary educational efforts. Pragmatism was launched in that crazed frenzy of novelty that bracketed the World Wars. At least in the form advanced by William James, Charles Peirce, and John Dewey, its solution was to view life as a cycle of experimentation, each moment of which might contain its hypotheses and clinical trials. Even should we think undistilled experience capable of this absurdity, there is more. Since animal adaptation to environment is a Darwinian hypothesis, Pragmatists argued that the true purpose of education was “societal adaptation” rather than critical thought, thus facilitating a cultural conformism deeply at odds with the alienated aesthetic of popular culture. This approach was arguably most influential in American educational theory, for it coincided with the spread of compulsory high school education and the founding of teachers’ colleges. It entered American life as Progressivism and Constructivism, permeating American classrooms and every American mind in the second half of the twentieth century.

Pragmatism sought to perfect utilitarianism, a popular Victorian consequentialist value system that sought to quantify goodness in a ludicrously pseudo-scientific effort to lend moral gravitas to random desire (see “Three Moral Systems”). Its empirical trappings notwithstanding, utilitarianism was merely subjectivism in fancy dress. Pragmatists recommended adopting a focus on moral flexibility and perfecting it through the application of empirical principles to determinations of use, so that persons might frame their experience as the scientist might and so make choices more productive of their happiness. It was a clever integration, but it managed to combine the worst elements of what it sought to synthesize. Utilitarianism had failed for the same reason human sciences in general fail: because its champions could never isolate their prescriptions from their biases and preferences, could never base their “experiments” on quantitative data or find the means to isolate variables in experiments on their human subjects. The resultant theories were so elastic that persons in practice felt free to choose in the moment whatever immediate outcome they desired without overall self-direction or full awareness of consequences. Think of millions of adherents operating amid the conflicts produced by such a system of morality. In practice, it was neither moral nor systematic. Like utilitarianism, Pragmatism accepted the “cash value” of whatever ends persons happened to choose by whatever they happened to desire at the moment they were observed. It dignified such randomness as a “science of happiness.” But such an effort in ordinary experience must be refuted by the entire history of nineteenth-century science, which had refined experience so as to more solidly ground its truth claims and in that process had excised moral considerations from its perceptual study.

Because it ignored the act of severance and the problem of quality while posing as an empirical approach to happiness, Pragmatism as a moral system proved itself anything but practical, at least if persons wished either to direct experience to some consistent end or to evaluate it by some means beyond their present desire. Without an external guiding moral principle and in the chaos of experience that in no way resembles the laboratory, how could persons either choose what they consider good or act with the dispassionate distance required of the scientist? Wouldn’t they more likely mingle their preferences with their judgments, operate from a perpetual state of confirmation bias, and in the tumult of unanticipated consequences utterly fail to achieve the goals they might desire at one moment but change in the next? It is hard to say whether such a plan insults morality or science more, but it is certain that Pragmatism as an ethical system was neither.

Dewey and James thought more as philosophers than as social scientists and more as social scientists than as empiricists. Pragmatism could claim only one point in its favor: it was well-suited to a materialist culture of kaleidoscopic preference and moral neutrality (see “The Problem of Moral Pragmatism”). It worsened the moral vacuum in the public square while contributing to the failures of twentieth-century American education, or rather it justified such failure by removing the means to identify the moral goods culture and education should pursue. The effort to produce an “empiricism lite” was assimilated as just one more modernist failure into postmodern theory after the 1970s (see “Postmodernism Is Its Discontents”).

Fast forward to our own day and our own crying need for moral clarity. The human sciences have retreated somewhat from the grandiose promises of the last century, though in doing so they still parrot natural science’s adolescent scientism. The soft sciences are increasingly absorbed into the hard or have learned to restrict their focus to the observable and statistical. In essence, they are more successful as statisticians of large numbers than as analysts of individual agency. This shift forces many but not all such disciplines to avoid the grand prescriptions that have so misguided popular morality. Practitioners who are willing to narrow their fields to closely studied and repeatable experiences can develop a real expertise about some issue relative to human will and produce statistically valuable truths, though their theories can never be predictive for any person, however thoroughly applicable they may be to groups. So it should be equally apparent that they cannot employ empirical methods to direct individual preferential freedom to the good (see “Our Freedom Fetish”).

It is surely true that human scientists, like empiricists in the natural sciences, are frequently granted a superficial authority in moral matters even today by non-specialists. But if you have followed the story of authority’s decline, it ought to be clear that whatever trust authority now permits is superficial and easily overridden in an atmosphere of individual autonomy and private agency. Persons often incorporate “commonsense” distortions of scientific principles to backstop preexisting desires and then assume them to be “on the authority of science.” Because scientism is not generally understood, its vacuity is not limited to uneducated persons in Western societies but is generalized in cultures wherein persons are always hungry for pragmatic justification for their own desires.

Natural science is our most powerful truth-finding process, one which forces moral vacuity. But its success poses a real threat if that power is not subject to external moral guidance. The history of the last century reinforces that danger. Science is a blind beast of enormous power, at the moment one without reins.  We cannot expect it to exercise restraint or wisdom in directing its own pursuits. Those rely on moral decisions imposed upon it that it must be blind to as a condition of its competent operation. It certainly cannot be guided by human sciences and must not be allowed to revert to the scientism that marked its adolescence.  In the face of its moral blankness, we should ask the obvious: what moral goods should it pursue, what moral limits should direct its great power, and what moral values should interpret its results? We can only answer with the same universal principles that guide other activities that produce laws and mores, but aren’t these sources equally tainted by the collapse of institutional authority, the resultant loss of public comity, and the weaknesses of ordinary reasoning science has revealed?

We must look elsewhere to fill our missing public consensus, for moral agency is inevitably rooted in individual rational agency. We know this is unlikely to be surrendered and, if threatened by the need for public order, will likely defend its prerogatives. Before we utterly forsake hope that empiricism, for all its truth-finding prowess, can offer any assistance at all in our search for public moral consensus, we need to draw from it one final and feeble hope. Could its strict subservience to the utility of its methodology and its deferment of goodness considerations somehow assist in constructing a universal moral framework, one that might learn from its methods even if it cannot be found by employing them? The question only echoes in the laboratory, for science’s credo must always be to pursue its truths by the strictest rational methods and the best available evidence. When Newton was asked what force compelled gravity to obey its laws, his answer proved most prescient: “I frame no hypotheses.”

But because we require public comity, we must make hypotheses that will allow public consensus while respecting private human dignity. And since private morality must be a “systematic end of preference,” the only resource persons have to locate it is their reason, Romantic intuitions notwithstanding (see “The Tyranny of Rationality”). If Romanticism’s fate in the last half of the nineteenth century and all of the twentieth has not convinced today’s Romantics, the existentialists, of its failures, they must prove impervious to understanding. The only way reason can hope to reach the public consensus so desperately needed in our civic life is to align private morality with a public one open to universal assent (see “Functional Natural Law and the Universality of Human Rights”). That will require a far better understanding of justice (see “Justice Is Almost Everything”). Some moral principle must shape and guide these efforts; it must not only shape one’s own experience to fulfillment but also resolve conflicts with others in public spaces. A rational examination of empiricism cannot find that principle, as has been shown, but it can reveal two lessons that should assist the search.

First, it proves that reasoning about experience has the capacity to be not only intersubjective, meaning understandable to others, but also objectively true and universally assented to. Science’s core process, language, and products are intersubjective and intercultural. Empiricism’s matchless success has demonstrated its universality through its reliance on quantification and the standardization of its methodology. It is possible to view mathematics as a sterile and artificial game. Even so, that every human mind can play it speaks strongly to the common operating system of human reasoning. Any mature mind can comprehend it by applying reason to its rules of operation. The same may be said for the powerful interconnectivity of the empirical sciences revealed by natural scientists working from Rome to Rio and from Boston to Beijing. As Kant said, these interlaced interpretations owe more to the structure of the human mind than to the shape of the universe, but we must remember that Kant regarded that mind as a species-specific functionality. The evidence goes beyond the common mental framework that humans share and that mathematical reasoning and the universality of the scientific method have proved. Technology reveals not only that these empirical structures accord with human reasoning but also that they interact with reality itself in ways that are predictable and transformative. This proves modernism’s stated premise that human reason can reveal truths about the world. Because they are perceptual, these truths cannot be moral. Science’s success comes by severing its practices from moral goodness, but that same success makes rationalist articulations of moral goodness to direct science necessary, though it does not prove them possible. All science has proved is that reasoning applied to experience can reveal descriptive truths about the world.
Unfortunately, it has also proved that reasoning must be highly refined and experience highly limited for those truths to be fully warranted. And since this same demonstration has also shown the incompetency of commonsense reasoning about undistilled experience, we have lost any confidence that universal truth can be found outside the laboratory. And since moral truth cannot be found inside it, we have fortified our private beliefs and abandoned the search for public moral consensus.

This has been an unforced error caused by accidents of history and human failures. It cannot be repaired until it is understood that private beliefs are entirely inadequate to warrant public commitments and that the kinds of justifications cobbled together as the Reformation began tearing apart society are too crude and expedient to withstand sustained scrutiny or induce moral reasoning (see “Why Invent a Social Contract”). Even just law is now regarded by political libertarians as an indefensible imposition on private freedom (see “When Is Civil Disobedience Justified?”). Even this last bastion of civic identity can boast no unifying moral foundation (see “Foundations of the Law: an Appetizer”). Natural science has washed its hands of goodness claims as the price of its success, and both human science and ersatz expertise wreaked sustained disasters on the twentieth century as they failed to cope with the axiomatic divisions now working through worldwide cultures (see “My Argument in Brief”). The present social disorder is making these issues more obvious. There is some danger that this realization will stimulate a nostalgic turn to authoritarian rule to restore order and moral direction, and we do find states that have reached the crisis point making that turn over the last century. It is not a return, however. Recent authoritarianism has been revived in the shadow of empirical success, which shows the value of retaining rational autonomy. Contemporary authoritarians have had to adapt to a social climate that is inimical to trust of any stripe and to the reliance on private beliefs that has accompanied materialism, Pragmatism, and postmodern aesthetics. Trust is not inherent to the virtual circle mentality that is necessarily self-referential to the point of solipsism (see “Belief in the Public Square”).

Because empirical science must be blind to these issues of value and social utility, it cannot rescue us from the present moral crisis. But we need not be blind to its second lesson. Its dedication to the act of severance, the careful separation of judgments of truth from the judgments of goodness that result from finding truth, is science’s unheralded contribution to this moment in history. This is the antithesis of the scientism that so corrupted twentieth-century moral consensus, the human sciences, and the substitutes for public morality like materialism and Pragmatism that now suffuse contemporary cultures (see “Cultural Consensus”). So we must not think we can fill our moral vacuum by a simple inversion of science’s process but rather by the two lessons it has to teach seekers of public moral consensus. It is ironic that empiricism could only succeed when values were sucked out of its process, and the larger society that empiricism has so affected can only succeed when values are injected back into it (see “A Utility of Furthest Ends”).