Psychology Today once ran a story on the major influences on therapists’ professional approaches. Guess what most impacted their theoretical outlook. Their dissertation research? Learned papers in their field of specialization? Esteemed colleagues or lengthy seminars? Not even close. These behavioral scientists listed their personal experiences as the factor most determinative in their professional lives regardless of the level of their education or their years of practice. As they offered their analyses and therapeutic advice from the lofty eminence of their professional standing, their wisdom sprang from no fount more substantial than the accidents of some moment from their own past, subject to all the particularities of context and personality, all tinted by the imperfections of memory. Such was the nature of their “expertise.”
Perhaps psychology is too easy a target. But as we survey the history of that peculiar branch of empirical imitation known as the human sciences, we find a rich mine of similar vacuity. Some brilliant observer termed the study of sociology “the excruciating elaboration of the obvious” after reading a study confirming that paraplegic hitchhikers were picked up less frequently than able-bodied ones. And who can forget the red faces of economists as they surveyed the wreckage of the 2008 financial collapse? Whatever plagues psychology also casts its pall on other soft sciences. And therein hangs a twentieth century tale.
First, as always, it is useful to clarify what we are talking about. The human or soft sciences are a set of relatively recent disciplines whose scope and methodology were fixed in the 1900’s. Their subject matter differs from the natural or hard sciences that are their predecessors and their inspiration in this way: the human sciences attempt to explain and determine the causes and effects of human behavior. That requires them to do what all science must do: make their objects of study predictable by means of their methodology. What happens to a person’s body as she jumps from the airplane is physics. What happens to her biological functioning is any of a related group of medical sciences: neurology, physiology, cardiology, biochemistry, hematology, etc. Forensics, chemistry, and aerodynamics explain what happens when her parachute fails to open. All of these studies exclude the issue of why she jumped. That is the domain of the soft sciences. When we try to use empirical methodology to examine “the black box” of choice-making, we enter the human sciences. Besides those mentioned—psychology, sociology, economics — we might add history, criminology, anthropology, education, ethnology, and many other subsets. In some ways these soft sciences resemble the hard sciences. They make use of a specialized vocabulary that excludes outsiders. They branch into sub-disciplines as the hard sciences do. They often set up experiments and publish papers in academic journals. They form departments on college campuses and award undergraduate and graduate degrees. To a casual observer, the human sciences may seem to be just another of the many subdivisions of the empirical quest to subject nature to the scientific method. But don’t be fooled. The emulation of the natural sciences by the human sciences was intentional, even well-intentioned, but it proved quixotic and delusional in the long run and has caused incalculable historical harm.
These threads are only untangled through some historical review. Empirical science as we define it was introduced by the Renaissance. Its emphasis was on close examination of experience. To understand its success, we must see its maturation over the next five centuries as the result of a series of subtractions of focus and procedure and a multiplication of safeguards against what the great prophet of modern science, Francis Bacon, called the four idols, the blinders that hinder reliable knowledge. These idols in today’s terms are the limitations imposed by our perceptions, by hypothesizing, by communicating, and by theorizing from our present range of knowledge. We forget how wide-eyed the Renaissance masters were as they viewed the landscape of experience after a millennium confined to the still life of religious authority. Literally everything was open to critical investigation, and it was not until the late nineteenth century that science fully put aside studies that failed to meet its increasingly stringent criteria. Newton, after all, spent his last years studying alchemy and Biblical prophecy with the same critical method he brought to the inverse square law. It was only gradually that subjects considered unfit for empirical study were weeded out of serious consideration. The reductionist process was dictated neither by importance nor by interest. The limits were imposed by the means of justification, the warrant that scientists used to verify the truth of their findings. The distinction only gradually emerged: the word scientific meant quantitative and predictable. If phenomena did not lend themselves to quantitative measurement and qualitative limitations of focus, if they could not be replicated to enhance the reliability of results, or if they yielded results that could not be predicted, they failed to meet the bar of true empirical science.
Instead, they were relegated to the netherworld of the pseudo-sciences, covering a huge range of disciplines from hypnosis to acupuncture to spirit detection to phrenology: all still subjects of serious study by amateurs, but all banished from the field of professional inquiry over the course of contraction that marked the rise of the natural sciences. Thus they simply vanished from empirical view, evaporating from its considerations and thereby falling beneath the dignity of professional investigation (see “The Limits of Empirical Science”). This sorting process continues to this day, but the classifications hardened sufficiently by 1900 for scientists to draw fairly rigid exclusionary walls around their fields. Remember that this process has taken five centuries to mature and evolved entirely from the application of the scientific method in a sustained attempt to make experience more reliable.
But we face a chicken-and-egg problem on this score. The methods of science are valued only for their utility. They have produced astonishing results. You can see these at three different levels, all equally impressive. First, the interlocking subject disciplines of the hard sciences have proceeded from observation to theory about reality, and the disciplinary theories all confirm each other (the lone exception, the rift between quantum mechanics and gravitation in physics, is being whittled at today). The guiding theoretical basis of each subject discipline is its paradigm. It channels work in the field into productive investigations and excludes outlier theories that challenge it. It unifies experimental and observational work. Now this might be seen as a creative act rather than a discovery of the fundamental unity of nature; that argument fails because the paradigms of one discipline must accord with those of related fields. So, for instance, biology is built on the fundamental chemistry of the periodic table, and each subdivision of the biological sciences investigates its own slice of reality, all building mutually supportive theories upon the dominant paradigms of the field. This interlocking relationship strongly supports the accuracy of each field of study. A second impressive indicator of science’s success is the increasing complexity of its studies, revealing layer after layer of often surprising truths about the material structures of reality. These deep observational and experimental findings often overturn entire paradigms and reorient the life work of experts in the field. At this frontier of investigation, it is not the connections among disciplines that impress us but the abstraction within each field, with experimental verification often resulting from rather than producing theoretical mathematical models.
This networking and depth of study are impressive enough to confirm the natural sciences as the royal road to truth, yet what matters most to ordinary folks is the final verification of scientific success: its technology. The greatest material advances in human history have coincided with the maturation of the sciences, and that maturation may be summarized as a learned process of minimizing experience while maximizing reasoning about it, something possible in a laboratory experiment and utterly impossible for undistilled experience (see “What Counts as Justification?”). It is not the experience that makes science work, for experiences by nature are unreplicable, unique, and nested so deeply in context that they prove resistant to analysis. It is analysis itself, the creation of experiences so simple and lacking in variables that they seem made up more of mind than of world, that makes empiricism and its wonders possible. Who can doubt its truth-finding power?
The human sciences were at least initially a part of this mix. Natural philosophy, the eighteenth century term for empirical science, included in its purview the science of man, and efforts to shine the spotlight of close inquiry on human behavior at first seemed to offer dazzling promise. Those same philosophers who taught budding scientists how to reason about experience, the first epistemologists, felt quite at home exploring the mind and theorizing on human nature and the means to improve it. But from the beginning, this new stab at Enlightenment galloped off in the wrong direction, for its quest was not merely to understand human behavior but also to perfect it. The reason for that error is implicit in their idea of reasoning itself, for to reason on an experience requires a level of effort well beyond the rough-and-ready thinking most of us use in undistilled experience. So any conception of proper human functioning Enlightenment thinkers might propose would require improvements in human conduct unavailable to common experience. With that premise, it seems unsurprising that from its very beginnings, the science of man engaged in a quest more moral than empirical. Unfortunately, studies in the natural sciences that would dazzle the world were in their infancy in the eighteenth century, and so what we would now call the human sciences, pseudo-sciences, and the natural sciences were indistinguishable at first (see “Modernism’s Midwives”). Locke could write on the predominant influence of environment on childhood development, on the origin of government in a mythic state of nature, and a hundred other subjects, convinced he was engaged in the same kind of investigation that his contemporary van Leeuwenhoek was undertaking with his microscope.
It would take another two centuries for the natural sciences to sort out valid investigations as those in which conditions could be observed and measured, variables limited, experiments conducted, predictions confirmed, and results repeated. Over the course of the next two hundred years, empiricism would shed its ancestry in philosophy and theology and embrace mathematics as the infinitely precise language of its practice. But the human sciences over the same span of time would do none of these things. Even so, they would stand proudly in the reflected light of empirical science and dazzle the lay world with the reasonableness and scope of their theories, encourage social experimentation, and agitate for change while wearing the scientist’s scowl of disinterest.
About that scowl… Empiricism learned a powerful lesson in its two-hundred-year adolescence. Truth may indeed fall to its assaults, or at least some kinds of truth. Part of its ascent to maturity involved figuring out just what kinds of truth science’s increasingly stringent methods could grapple with. But a harder lesson lay hidden in the shadow of that effort, and it is still being dragged into the light today. The entire empirical enterprise might be considered a torturous effort to avoid the pitfalls of premature closure. All of our efforts in undistilled experience see truth as merely a means to an end, that end being the exercise of preferential freedom to choose the goods truth opens to choice (see “Our Freedom Fetish”). Our default process is to connect determinations of truth with preference, to see them as paired, and in the case of belief as a single emulsifying operation (see “Knowledge, Trust, and Belief”). We rush through our determinations of truth in ordinary experience so as to exercise whatever preference context allows, ignoring the inconvenient truth that we cannot know what an experience offers until we know what the experience is. Empiricism learned this hard lesson: the temptation to rush considerations of truth to get to the preferences that truth confers is often so powerful that it distorts reason itself. The scientific method restrains conclusions and the factors that might bias them, focusing instead on an act of severance, closing the door even to considering the utility or quality of a discovery until the scientist is assured in her mind that she understands what she is perceiving. Only then can immediate utility be sought, the other species of goodness typically being far beyond the scope of science’s efforts (see “What Do We Mean by ‘Good’?”).
You can see why scientific work in a particular field is called a discipline, for it requires immense effort to seek only truth without regard for preference and, even when truth is discovered, to use it only for immediate utility. It is a lesson contrary to the impulse to alloy judgments of truth and goodness (see “Truth and Goodness Do a Dance”). But it has never been learned by the human sciences for three reasons. First, their investigative techniques must always be shoddy because they cannot perceive human will, the crucial variable in all of their studies. It is the dark energy of all of their research, yet it powers all human activity. All the slipshod methodology that marks the history of human science follows from that failure: the inability to limit their objects of study and the consequent slippages into other areas of human concern. Second, they cannot experiment upon their subjects in the way that true science does. Ethical considerations alone limit the kind of laboratory methodology that human science can employ to limit variables, use mathematics to quantify hypotheses, and control experimental outcomes for subjects. Finally and most importantly, the human sciences cannot make individual human behavior determinative so as to make it predictable, and so they cannot validate hypotheses, at least not for the individual objects of their study. That has not kept them from trying, nor, when the attempt fails, from obscuring the failure in order to continue the farcical effort to be taken as empirical. But at what cost?
The effort to deny individuals their preferential freedom, their felt liberty to choose from the possible goods natural freedom presents to choice, to reduce persons to things governed by contingent determinism, is central to human science’s project of perfecting some aspect of human nature. I hope the irony is clear: their method necessarily denies the essence of what makes us human and is blind to what consciousness demonstrates in every moment of every person’s life: we are free to choose. Further, the effort to render human nature scientific also denies what is essential to science. Empiricism’s success has resulted from the avoidance of premature closure in favor of the act of severance. Human sciences have not been able to scrub goodness issues from their studies. And the final irony combines the first two: because they ignore that act of severance, human scientists have been entirely unable to escape the lure of their own moral preferences, and so they have produced grand visions of human perfection and slanted their “experimental” results from the beginning so as to pursue and validate them. The “science of man” was always a quest for human perfectibility, and “enlightenment” is still the goal of the human sciences. What they have produced in abundance are errors, hypocrisies, and historic failures, the greatest of which is to obscure the fundamental nature of moral reasoning and responsibility in favor of a determinism neither the observer nor the subject would consent to. The twentieth century was an ordeal of consequences from their mistaken denial of persons’ felt preferential freedom and a prolonged quest for environmental or genetic factors causing human determinism (see “A Preface to the Determinism Problem”). But if persons are merely things, objects to be manipulated by externals, in what direction ought they be moved? What hypothesis, theory, or behavioral paradigm answers that question?
Of course, true science could not respond to what is a profoundly moral question. But human science has always tried even if the effort required a subversive insertion of values distortive of both science and morality.
We can see moral considerations shaping the outlines of every human science. Economics attempts to structure healthy monetary policies without explicit regard for what constitutes healthy societies, ignoring that monetary policies ought to be one source of societal health. Psychology from its beginnings attempted to define mental illness while refusing to define moral health, thereby offering to measure how far the arrow misses the mark without knowing where the mark is. None of this is acknowledged, of course, yet the subversive effort is inescapable. Every economist must insert his own surreptitious notion of societal health to tease out a theory of economic wellbeing; every psychologist must whisper the theoretical location of her psychic bullseye. These could never be enunciated, of course, because stating them must reveal them as moral ambitions directive of science, not laws of nature derived from science. Nothing more clearly reveals the absurdity of this kind of effort than the attempt in the United States to paint education as a kind of psychological and sociological science. So let us track its devolution from the science of man to fallacious social science as a microcosm of the evolution of human sciences in general.
It is perhaps forgivable that Jean Jacques Rousseau’s vision of the social contract should have inspired the French Revolution’s Reign of Terror of 1793. The truth-telling power of natural philosophy was in its infancy, and Rousseau was a thoughtful and witty critic of the social order of the ancien regime; his take on the new Romanticism that was sweeping popular Western culture was original and powerful. Nor is it surprising that Rousseau pushed Immanuel Kant into articulating the revolution in philosophy that was The Critique of Pure Reason. That one philosopher should challenge another is part of the great conversation of ideas stretching back to the Hellenic Age. What is unfathomable to me is that Rousseau should be regarded as the father of American educational theory in the twentieth century. Now here is a puzzle. Rousseau’s narrative advocating “natural education,” Emile, provoked lasting debate about the best means of educating children. Nothing about it was remotely scientific. Perhaps it was more a goad to the old order than a purposeful study, for its readers had to ignore the historical truths that Rousseau fathered five children, all of whom were sent to orphanages as infants; that he had no expertise whatsoever in tutoring; and that his theories were the product only of his own childhood experiences. Furthermore, Emile was a piece of fiction, about as far from a controlled experiment as can be imagined. The question of whether children should be allowed to follow their interests or be inculcated with habits and general knowledge, or whether the two approaches can be combined, is certainly worth serious discussion and debate among all interested parties. Ultimately, it is a question for practical judgment, one for experienced and competent educators and parents to hash out by trial and error. But it is not a subject suited to scientific study, and Rousseau was not the scientist to conduct it.
That task fell to John Dewey. In 1894 the pragmatist psychologist was invited to chair the new department of Philosophy and Psychology at the University of Chicago. Note the title of the department: the human sciences were on the cusp of eminence, essentially facing the monumental decision of whether to accept alignment with the sciences or with philosophy. That explosive potential would mark the century to follow, in part because Victorianism was well into its final spasms, having produced a half-century of paradoxes that accumulated into a crisis for traditional institutions and a loss of confidence in common sense that produced authority’s final moral crisis (see “The Victorian Rift”). Dewey was a key player in this drama, for his position in what is often called the only American philosophical movement was that the truth or goodness of any declaration depended on its usefulness to the thinker, that usefulness to be decided by a life of “practical experimentation” (see “The Problem of Moral Pragmatism”). To be fair, the flaccidity of pragmatism as a moral system was anticipated by an earlier and more obviously pseudo-scientific effort, utilitarianism, so the quest for a kind of “scientific morality” had been underway for a century when Dewey launched his effort to mold culture by the latest theories of the most modern psychological practice (see “Three Moral Systems”). Note both his preference and his error here, for they molded American education for the next hundred years. First, his inclination was clearly and permanently toward science, and he saw every reason that our social existence should be ordered on fully empirical principles. But note too the absurdity of considering undistilled experience as a kind of empirical experiment.
The essence of empiricism and the source of the power of the natural sciences is the isolation of variables to assist close observation, the replication of results through repeated trials to filter out the faults of perception and preconception, and measurement to clarify cause and effect. It is this painstaking process that has produced the glories of natural science mentioned above. And Dewey was suggesting what? That we can emulate this success in ordinary experience, the kind of activity in which perceptions whiz by like oncoming traffic and reasoning faces a kaleidoscope of judgments every second? What neurological or experimental evidence could legitimize such a sweeping recommendation? And he was offered a university chair at a great American institution to do it? How can this kind of lunacy be explained?
I have argued previously that the turn of the twentieth century was marked by fevered attempts to remedy the defects of modernism made manifest by the contradictions of Victorianism culminating in the coming of World War I (see “Postmodernism Is Its Discontents”). John Dewey’s success was one of those attempts; it exploited the technological and theoretical revolutions that natural science was producing, harnessing its power to dreams of social perfection. Dewey’s notions were directly rooted in Rousseau’s musings on natural and spontaneous education, and so yet another grotesque Victorian hybrid of Romanticism and rationalism entered the mainstream, this time attempting to nourish children’s curiosity as the touchstone of a new social science, designed to produce a nation of pragmatists.
It was called Educational Progressivism, and it was launched at the nation’s first laboratory school, founded at Dewey’s insistence in 1896 at the University of Chicago. As the new century dawned, colleges of education sprang up around the country, embracing the primacy of “scientific” pedagogy built on the Progressive model. Dewey himself vociferously denigrated dissenters as “traditionalists” and “obstructionists,” and so the aerodynamic vision of “adaptation” and “socialization” raced into public view and into the new K-12 public school systems backed by the full force of the new social sciences. The movement splintered in its application (such splintering is both a flaw and a hallmark of human sciences lacking dominant paradigms), but by the 1930’s it was called Constructivism, based on theories by the Swiss psychologist Jean Piaget (who was the director of the Rousseau Institute in Geneva while publishing his theories, incidentally). Its modified goals could be summarized in two precepts: education serves psychological and sociological goods, and all knowledge results from the personal construction of the learner. Constructivism is still the default academic model in U.S. schools. But note the pragmatic implications of its goals and the empirical tone of its means of achieving them. To say that knowledge is constructed rather than transmitted is to strike at the traditional heart of the goal of education. The continuity of Western culture and the critical abilities required to improve it are no longer central to our understanding of education. This abandonment tracked a larger cultural rejection of modernism and an embrace of novelty in the first decades of the twentieth century in all cultural pursuits from architecture to poetics. If the teacher is no longer competent to transmit and translate knowledge for her students, if instead she is to open possibilities to their imaginative efforts to build the world anew, she must become the facilitator of... what?
Whatever the child currently values? And how is progress or promotion to be measured if success is to be determined by the satisfaction or social adaptation of the learner? The jargon this orthodoxy inspired in educational circles made the teacher “the guide on the side rather than the sage on the stage.” As students create their own virtual circles of truth, goodness, and beauty, they become the controllers of curriculum and the arbiters of relevance in the classroom (see “What Is the Virtual Circle?”). The current obsession with standards-based education is a direct repudiation of Constructivism. It is surprising to me that this current educational battle has not been more generally recognized and its roots uncovered. Though Piaget was the most cited author in education journals in the twentieth century, his four cognitive stages of development were discarded in light of more sophisticated cognitive research by the century’s end (see “Empathy: a Moral Hazard”). Like Rousseau, he had done no structured research to posit these stages, basing them instead on his parenting experiences with his own three children. So there is some progress, for at least Piaget was present for them! Strangely, the erosion of the “scientific” basis for Constructivism seemed to provoke little doubt in its social science adherents, possibly because those who major in education have little subject-discipline knowledge. I predict that their superficial understanding of the roots and implications of the theory, its pseudo-scientific formulation, and its application in American classrooms for nearly a century through the influences of colleges of education and professional licensure will one day be exposed as one of the great causes of the failures of American public education.
And so at the beginning of the twenty-first century, we face this strange question: what are the odds of an eighteenth century French Romantic philosopher steering the course of American public education in the twentieth century?
Stranger things have happened. And far worse. The same bizarre combination of Romantic yearning and human sciences produced the two greatest scourges of the last century: communism and fascism. It was the Romantic philosopher Hegel who inspired Marx to posit the dialectical process as the invisible hand of cultural determinism in opposition to that other one, thereby setting up a liberal/conservative debate that somehow never questions either hand’s dehumanizing assumptions. And so a philosophical theory was again transmuted into a human science “law” that binds human will and denies human freedom. And again, the original “scientific” paradigm was modified by successors, this time by Leninism, Trotskyism, and Maoism into successor theories all competing for the crown of “scientific socialism.” Their cost in the U.S.S.R. and Southeast Asia was over one hundred million dead, excluding those who died in World War II. On that score, it was the philosopher Herder who molded the Romantic notion of German exceptionalism that then was “confirmed” by ethnology and the new sociology of Emile Durkheim and Max Weber into a theory of the uniqueness of the Aryan race and their supposed persecution by the descendants of Roman Europe. Another sixty million died in that crusade. And so the “new men” of the new human sciences marched confidently toward a future of social perfection first prophesied by the philosophers of the Enlightenment, inventing in their pursuit a word unknown until 1944: “genocide.”
What cannot be tallied are all the distortions of ordinary life that resulted from the falsities of the human sciences. Consider the work of Sigmund Freud, who published his landmark The Interpretation of Dreams in 1900 in the heat of the great crisis of modernism’s axioms of commitment (see “The Axioms of Moral Systems”). The medical doctor who had begun his career flirting with hypnosis and cocaine as cures for female hysteria trolled the shallows of pseudo-science from the beginning, and his reliance on myths as models of human behaviors introduced powerful Romantic influences to his theories of the unconscious. His later works, in particular, effaced the line between science and philosophy, though it is hard to trace any period in which Freud drew it clearly. Again, his work was subjected to divisive reinterpretation by later generations of psychologists, partially resulting from his own sense of persecution, producing what seems a perennial problem for the human sciences: a profusion of competing and mutually contradictory paradigms, as indicated by the Psychology Today absurdity mentioned at the beginning of this essay. Freud’s theories have thoroughly saturated popular culture. Knowing references to unconscious motivations, sublimations, transferences, and repressions permeate talk shows and earnest coffee house confessionals. We are all amateur psychoanalysts today. The only problem: none of Freud’s theories have been empirically verified. No neurological scientist has located the region of the brain where the unconscious resides. It is every bit as metaphorical a creation as dogmatic theology, as creative as fiction, as suggestive and as allusive as folklore. In derivation, prediction, and confirmation, it is nothing like true science. Psychologists have overwhelmingly abandoned Freudian psychoanalysis for theories that not only contradict Freud but also each other.
All theories about the id, ego, and superego, about the collective unconscious or transactional analysis or logotherapy: all are arrows shot at the missing moral bullseye of human wellbeing. None is true science.
But the false sense of power they confer, the illusory control over the levers of the byzantine machinery that is contemporary life, has been their greatest selling point as well as their greatest danger. In the first half of the twentieth century especially, all the high-sounding theories and arcane terminology were swallowed whole by a credulous public desperate to replace the lost moral authority of church and other trusted institutions, to yoke the efficiency of science to the good of mankind, or at least to rein in the blind beast of change. Behavioral scientists were the new priesthood, an all-conquering army of lab-coated experts backed by the latest creations of marketing genius (see “Expertise”). These efficiency experts were also the vanguard of what Hannah Arendt termed “the banality of evil,” soulless bureaucrats who ran the death camps with the same mechanical economy as they ran the new corporate hierarchies. They were the proxies of progress, the harbingers of the new age, whether they served Madison Avenue, the Pentagon, or the Comintern, and to a shell-shocked public stunned by the pace of change and the horrors change had inspired, they seemed the very embodiment of the future. It is not a minor issue to point out that their “scientific approach” offered a method rather than a moral. They were the distillation of “the science of man” precipitated out of behavioral science and easily mixed with whatever purpose they were put to, pure efficiency without direction. That had to be imposed from outside. Of course, as the 1960’s demonstrated, once science’s functionaries were moved to unscientific purposes, their expertise morphed into advocacy, their methodology into madness, and the unexamined morality of their premises dissolved the crystalline purity of their operations. They were the embodiment of Shelley’s Dr. Frankenstein, not the medical doctors she had feared but the doctoral graduates she could never have anticipated, doomed by the hybrid nature of their ambitions to fail, to mar the human perfectibility they had hoped to complete.
There is a repetitive storyline here. Philosophy inspires Romantic yearnings that are then systematized into human science, all while natural science rises in public esteem. These "new sciences" are then transmuted into competing successor theories that revivify versions of the original theory and produce the very opposite of the social perfection they envisioned. So what are the lessons? We may draw several.
Most obviously, we should beware of the poisonous conflation of Romantic philosophy and pseudo-science. It seems that combining the grand visions and yearnings of Romanticism with the trappings of secular priesthood granted to science produces no end of mischief in the world. Every issue examined by human science is worthy of thoughtful and careful conversation among educated persons. But Romantic emotionalism and human science dogmatism are anathema to the kind of conversation required to find the proper place for student-centered learning, economic justice, national identity, and healthful psychology. Perhaps our error was in our approach more than in our ambitions, for our history is full of examples of social advancement made possible by the yearnings of social reformers. We have been blinded by the power of science and by the failures of institutional authority into accepting truth and goodness claims that seem reasonable and comprehensive, yet are far beyond the power of science to confirm. This process is the very definition of scientism, the unfortunate hagiography of empirical study that regards it as the only road to truth. Hats off to the philosopher of science Karl Popper, whose theory of falsifiability should be required reading for every educated person. Popper warns us that the test of these constructivist visions is not their reasonableness. The "Clue Problem" is to blame. Given a limited number of facts about experiences, a variety of rational explanations can be found to link them into a causal sequence, each seeming reasonable and even revelatory. Popper wisely advises that we ignore the logical attraction of these theories in favor of something less self-evident: their falsifiability. We should ask ourselves under what circumstances our rational construction might be proved false. What set of facts or judgments could conclusively prove the error of our theory? 
Note that the theory need not actually be proven false, for that would end it, but it must be subject to tests that might disprove it. Popper's lessons, incidentally, retain their power in applications beyond science, for the temptations of system-building practiced by postmodernists lead to the desire for premature closure and the self-love that characterize belief in our own explanatory models.
Consider two examples. On the one hand, we have Einstein predicting the apparent shift in the position of a distant star caused by the passage of its light near the gravitational well of our sun. Should the star be observed in an unshifted position, or should its position be shifted by significantly more or less than the predicted 1.75 arc-seconds, a minuscule but precise figure, his theory of general relativity would have to be revised or abandoned. When Arthur Eddington confirmed Einstein's prediction during the solar eclipse of May 1919, Einstein became the twentieth century's most important figure, for his theory had passed the test of falsifiability. Time magazine also ranked Sigmund Freud among the century's most important figures for his work in psychology. Popper discusses one of Freud's erstwhile followers, Alfred Adler, diagnosing a patient without personal contact and arguing for the accuracy of his diagnosis on the basis of his "thousandfold experience." But as Popper notes, the diagnosis was so elastic that it could be modified to suit any of the patient's personal circumstances (of which Adler knew nothing) and retroactively claimed to be correct. This self-deluding quality is characteristic of practitioners of the human sciences. Their theories are too vague to be falsifiable. We can charitably attribute their self-deceptions to the catharsis that accompanies their first exposure to the social science paradigm they embrace. It is nothing less than a conversion experience in which their understanding is reoriented by the breathtaking scale of the paradigm they adopt (see "Can Belief Be Knowledge?"). Of course, thinking of their attachment to their theories in religious terms establishes a knowledge equivalency between the human sciences and religious belief that neither side seems willing to embrace. Not surprisingly, current pastoral care more resembles the couch than the confessional as the clergy immerse themselves in Jungian pseudo-science for no other reason than that it puts a shine on their own prior pursuits.
With the memory of the miserable twentieth century receding, we now ask after the future of the human sciences. If their defining quality is their concern with the "black box" of human motivation, they face insurmountable odds in the current zeitgeist, and whatever reputation they can salvage must involve historical revisionism rather than empirical progress (see "A Preface to the Determinism Problem"). The future without question belongs to the hard sciences. Neurology and genetics are even now replacing "mind" with "brain," thanks to the invention of fMRI in the last decade of the twentieth century and to advances in computer science leading to artificial intelligence. These will restore the act of severance to studies of human behavior and brain processes and will reveal the species-specific qualities of the human animal. Ironically, the defining one is a felt sense of preferential freedom that confounds deterministic efforts, at least for individual persons. Predictive human science is the problem, for descriptive studies of human activities in the aggregate can yield important statistical data. Just to be clear, no human science has ever been able to predict the preferences of any single person, though it can describe the behaviors of large groups mathematically and so make rough large-scale predictions of generalized preference. But letting go of prescription has not been easy. Psychologists initiated into the breathtaking predictive power of Freudianism perverted the entire twentieth century before letting go of their paradigms, which were actually moral theories rather than empirically verifiable predictions. Unable to open that black box, they next resorted to behaviorism, the study of observable behavior rather than motivation, with a concomitant reluctance to advance moral prescription. To favor observation over prescription was a step natural philosophy took in the late Renaissance, but for the human sciences this was a case of better late than never.
Now traditional human scientists might regard such a retreat from prediction as a total surrender of the field. Freud certainly would have. The mystery of human consciousness is far too little understood to be the subject of scientific study, but we can observe and draw fruitful conclusions about what people actually do. We can call that psychology and still ignore the core questions of motivation and consciousness, the lure of control that prediction implies. And, glory of glories, we can do real experiments in behaviorism that yield quantitative data and allow us to form real and useful descriptions and categorizations. But behaviorism fell into the errors of scientism from its Skinnerian beginnings. Because it could not study human will, it attempted to deny its importance in the recurrent quest to be considered truly empirical. Thankfully, it has largely been replaced by cognitive theories that put human reasoning and preferential freedom at the center of their efforts. This kind of thing is less science than competent observation, but it takes the reality of felt free will seriously enough to do some real good. It is possible to describe a condition we call "depression" as a set of symptoms loosely applied to individuals. We will have to await neurological science's determination of its medical causes, but therapists can apply this general diagnosis to a given individual as a very rough type of deductive reasoning to suggest remedies that individuals must decide upon. Economists may find practical ways to manage the Federal Reserve so as to limit the boom and bust cycles that seem inherent to capitalist systems, even though each set of conditions, like each person suffering from depression, is a unique case not subject to deterministic prediction.
If one looks at a listing of the human sciences and considers how each field could do something similar, she will quickly see both the potential and the severe limitations of the human sciences, limitations serious enough that these fields must either narrow their concerns sharply so as to develop at least a technical expertise or combine with more traditional, if less venerated, studies that have always dealt with advocacy. Religion and psychology have a natural affinity. Sociology and ethics do as well. Economics cannot be divorced from political science, which naturally relies on a humanist study of history and a broad knowledge of epistemology. This retreat from the purely scientific might have avoided the spectacular failure of economists to build entirely predictive models of healthy economies without consideration of the non-quantitative factors conducive to societal health, or the efforts of sociologists to advocate for "healthy" social structures, as Margaret Mead so shamelessly did in her studies of the Samoans. From Freud's dismissal of religion as an illusion, to Timothy Leary's advocacy of LSD, to competing economists advocating for future social health rather than analyzing present policy, it seems human scientists are just too human to be as objective as science demands. Against Popper's advice, they fall in love with their beliefs and claim them as knowledge.
We have lessons to learn about justification as well. The most powerful warrant for our correspondence truth claims is the empirical one. The weakest is that of undistilled experience. That is why I view with befuddlement Dewey’s attempt to conflate one with the other and see the dawn of the twentieth century as an age of epic ignorance and wild experimentation by those who should have known better (see “Postmodernism’s Unsettling Disagreements”). We have yet to rise above those issues. Perhaps if the prophets of the human sciences had indulged in less jargonizing and abstracting, if their ambitions had been less utopian, or if they had practiced greater humility, they might have encouraged the rational discussion of issues that produce real human progress. Or maybe they were too infected with Romantic passion for that, an infection they thought they were curing with their pseudo-science.
The best news about this particular and dreadful chapter in Western history is that it seems to be ending. I don't often credit postmodernists, but despite their sordid reliance on the terminology and methodology of the human sciences, their worldview forbids any admiration of empiricism's supposed objectivity, and in the case of the soft sciences their suspicions are well warranted. Only this morning, I heard an economist urge that we distrust so-called experts in his own field when they advocate for economic policy. As they say, straight from the horse's mouth. Every day, the true empirical sciences continue their relentless advance into the unknown, appropriating the traditional objects of the human sciences with the same inexorable efficiency they once used on philosophy and religion. By engaging the act of severance, science seems to have learned its lesson about moral prescription, and so it abstains. But acknowledging this truth returns us to the position of the very first Enlightenment philosophers, sadder and perhaps wiser, but still lacking in moral direction. As the human sciences are absorbed into true empirical ones, some moral prescription will have to be employed for public morality, and it will have to be more rational and consistent than postmodernism and more focused on morality than scientism will allow. Its first task must be to resolve societal discord while still respecting the preferential freedom and individual responsibility that are the most prized possessions of the human person. That begins with reversing the depredations of the human sciences.