Psychology Today once ran a story on the major influences on therapists’ professional approaches. Guess what most impacted their theoretical outlook. Their dissertation research? Learned papers in their field of specialization? Esteemed colleagues or lengthy seminars? Not even close. These behavioral scientists listed their personal experiences as the factor most determinative in their professional lives regardless of the level of their education or their years of practice. As they offered their analyses and therapeutic advice to their patients from the lofty eminence of their professional standing, their wisdom sprang from no fount more substantial than the accidents of some moment from their own past, subject to all the particularities of context and personality, all tinted by the imperfections of memory. Such was the nature of their “expertise.”
Perhaps psychology is too easy a target. But as we survey the history of that peculiar branch of empirical imitation known as the human sciences, we find a rich mine of similar absurdity. Some brilliant observer termed the study of sociology “the excruciating elaboration of the obvious” after reading a study confirming that paraplegic hitchhikers were picked up less frequently than able-bodied ones. And who can forget the red faces of economists as they surveyed the wreckage of the 2008 financial collapse, a moment immortalized by Jon Stewart’s pitiless skewering of Mad Money’s Jim Cramer? Whatever plagues psychology also casts its pall on other soft sciences. And therein hangs a twentieth century tale.
First, as always, it is useful to clarify what we are talking about. The human or soft sciences are a set of relatively recent disciplines whose scope and methodology were fixed in the 1900s. Their subject matter differs from the natural or hard sciences that are their predecessors and their inspiration in this way: the human sciences study elements of human free will and the behavior related to it. What happens to a person’s body as she jumps from the airplane is physics. What happens to her biological functioning is any of a related group of medical sciences: neurology, physiology, cardiology, biochemistry, hematology, etc. Forensics, chemistry, and aerodynamics explain what happens when her parachute fails to open. All of these studies exclude the issue of why she jumped. That is the domain of the soft sciences. When we try to use empirical methodology to examine “the black box” of choice-making, we enter the human sciences. Besides those mentioned—psychology, sociology, economics—we might add criminology, anthropology, education, ethnology, and many other subsets. In some ways these soft sciences resemble the hard sciences. They make use of a specialized vocabulary that excludes outsiders. They branch into sub-disciplines as the hard sciences do. They often set up experiments and publish papers in academic journals. They form departments on college campuses and award undergraduate and graduate degrees. To a casual observer, the human sciences may seem to be just another of the many subdivisions of the empirical quest to subject nature to the scientific method. But don’t be fooled. The emulation of the natural sciences by the human sciences was intentional and perhaps originally well-intentioned, but it proved quixotic and delusional in the long run and has caused incalculable historical harm.
These threads are only untangled through some historical review. Empirical science as we define it was introduced by the Renaissance. Its emphasis was on close examination of experience. To understand its success, we must see its maturation over the next five centuries as a result of a series of subtractions of focus and procedure and a multiplication of safeguards against what the great prophet of modern science, Francis Bacon, called the four idols, the blinders that hinder reliable knowledge. These idols in today’s terms are the limitations imposed by our perceptions, by hypothesizing, by communicating, and by theorizing from the present range of knowledge. We forget with what wide-eyed wonder the Renaissance masters viewed the landscape of experience after a millennium confined to the still life of religious authority. Literally everything was open to critical investigation, and it was not until the nineteenth century that science put aside studies that failed to meet its increasingly stringent criteria. Newton, after all, spent his last years studying astrology and Biblical prophecy with the same critical method he brought to the inverse square law. It was only gradually that subjects considered unfit for empirical study were weeded out of serious consideration. The process was dictated neither by importance nor by interest. The limits were imposed by the means of justification, their warrant. The distinction only gradually emerged: the word scientific meant quantitative and predictable. If phenomena did not lend themselves to counting and measurement or if they yielded results that could not be predicted, they failed to meet the bar of true empirical science.
Instead, they were relegated to the netherworld of the pseudo-sciences, covering a huge range of disciplines from hypnosis to acupuncture to spirit detection to phrenology: all still subjects of serious study by amateurs, but all banished from the field of professional inquiry over the course of the contraction that marked the rise of the natural sciences. Or they simply vanished from empirical view, evaporating from its considerations and, to the champions of the belief system called scientism, thereby falling beneath the dignity of serious investigation (see “The Limits of Empirical Science”). This sorting process continues to this day, but the classifications have hardened sufficiently for scientists to draw fairly rigid exclusionary walls around their fields. Remember that this process has taken five centuries to mature and evolved entirely from the application of the scientific method in a sustained attempt to make experience more reliable.
But we face a chicken-and-egg problem on this score. The methods of science are valued not for their own sake but because they work. They have produced astonishing results. You can see these in three different orders of magnitude, all equally impressive. First, the interlocking subject disciplines of the hard sciences have proceeded from observation to theory about reality, and the theories all confirm each other (the exception of the unreconciled quantum and gravitational forces in physics is being whittled at today). The guiding theoretical basis of each subject discipline is its paradigm. It channels work in the field into productive investigations and excludes outlier theories that challenge it. It unifies experimental and observational work. Now this might be seen as a construction rather than a discovery of the fundamental unity of nature, but that argument fails because the paradigms of one discipline must accord with those of related fields. So, for instance, biology is built on the fundamental chemistry of the periodic table, and each subdivision of the biological sciences investigates its own slice of reality, all building mutually supportive theories upon the dominant paradigms of the field. This interlocking relationship strongly supports the accuracy of each field of study. A second impressive indicator of science’s success is the increasing complexity of its studies, revealing layer after layer of often surprising truths about the material structures of reality. These deep observational and experimental findings often overturn entire paradigms and reorient the life work of experts in the field. At this frontier of investigation, it is not the connections among disciplines that impress us but the abstraction within each field, with experimental verification often resulting from rather than producing theoretical mathematical models.
This networking and depth of study are impressive enough to confirm the natural sciences as the royal road to truth, yet what matters most to ordinary folks is the final verification of scientific success: its technology. The greatest material advances in human history have coincided with the maturation of the sciences, and that maturation may be summarized as a learned process of minimizing experience while maximizing reasoning about it, something possible in a laboratory experiment and utterly impossible for undistilled experience. It is not the experience that makes science work, for experiences are unreplicable, unique, and nested so deeply in context that they prove resistant to analysis. It is analysis itself, the creation of experiences so simple and lacking in variables that they seem made up more of mind than of world, that makes empiricism and its wonders possible. Who can doubt its truth-finding power?
The human sciences were at least initially a part of this mix. Natural philosophy, the eighteenth century term for empirical science, included in its purview the science of man, and efforts to shine the spotlight of close inquiry on human behavior at first seemed to offer dazzling promise. Those same empirical philosophers who taught budding scientists how to reason about experience felt quite at home exploring the mind of man and theorizing on his nature and the means to perfect it. But from the beginning, this new stab at Enlightenment galloped off in the wrong direction. Unfortunately, the studies in the natural sciences that would dazzle the world were in their infancy in the eighteenth century, and so what we would now call the human sciences, the pseudo-sciences, and the natural sciences were indistinguishable at first (see “Modernism’s Midwives“). Locke could write on the predominant influence of environment on childhood development, on the origin of government in a mythic state of nature, and on a hundred other subjects, convinced he was engaged in the same kind of investigation that Van Leeuwenhoek was undertaking with his microscope. It would take another two centuries for the natural sciences to sort out valid investigations as those in which conditions could be observed and measured, variables limited, experiments conducted, predictions confirmed, and results repeated. Over the course of the next two hundred years, empiricism would shed its ancestry in philosophy and theology and embrace mathematics as the infinitely precise language of its practice. But the human sciences over the same span of time would do none of these things. Even so, they would stand proudly in the reflected light of empirical science and dazzle the lay world with the reasonableness and scope of their theories, encourage social experimentation, and agitate for change while wearing the scientist’s scowl of disinterest.
About that scowl… Empiricism learned a powerful lesson in its two hundred year adolescence. Truth may indeed fall to its assaults, or at least some kinds of truth. Part of its ascent to maturity involved figuring out just what kinds of truth science’s increasingly stringent methods could grapple with. But a harder lesson lurked in the shadow of that effort. The entire empirical enterprise might be considered a tortuous effort to avoid the pitfalls of premature closure. All of our efforts to discover truth in undistilled experience see truth as merely a means to an end, that end being the exercise of preferential freedom to choose the goods truth opens to choice. Our default process is to connect determinations of truth with preference, to see them as paired, and in the case of belief, as a single emulsifying operation. Empiricism learned the hard lesson that the temptation of shutting off reasoning about the reality we face so as to choose the goods that lie beyond it, premature closure, is often so powerful that it distorts what should be a prior and separate operation of finding truth. The scientific method restrains conclusions and the factors that might bias them, focusing instead on an act of severance, closing the door to even considering the utility or quality of a discovery until the scientist is assured in her mind that she understands what she is perceiving. Only then can immediate utility be sought, the other species of goodness typically being far beyond the purview of science’s efforts. That operation is now central to hard or natural science. But it has never been learned by the human sciences, for three reasons. First, their investigative techniques must always be shoddy because they cannot perceive human will, the central, crucial variable in all of their studies.
All the slipshod methodology that marks the history of human science follows from that failure: the inability to limit their objects of concern and the consequent slippage into other areas of human concern. Secondly, human science cannot experiment upon its subjects in the way that true science does. Ethical considerations alone restrict the kind of laboratory methodology it can employ to limit variables, quantify hypotheses mathematically, and control experimental outcomes for its subjects. Finally and most importantly, the human sciences concern human preferences at the foundational level. Each human science studies some aspect of being human, and at the heart of that enterprise is the black box that defines the various disciplines, the defining quality of the human person. It is her preferential freedom, her felt liberty to choose from the putative goods her natural freedom presents to her choice (see “Our Freedom Fetish“). But science’s success has derived precisely from not doing that, from avoiding premature closure in pursuit of goods in favor of the act of severance so as to pursue truth. From their very beginnings, the human sciences have been entirely unable to escape the lure of their own preferences, and so they have produced grand visions of human perfection and slanted their “experimental” results so as to pursue them. The “Science of Man” was always a quest for human perfectibility, and “enlightenment” was always the goal of the human sciences. Finding truth was never their raison d’être, so they have never escaped the trap of premature closure, never satisfied their moral quest, and in the process have never succeeded in finding predictive truths about the subjects of their studies.
It is perhaps forgivable that Jean Jacques Rousseau’s vision of the social contract should have inspired the French Revolution’s Reign of Terror of 1793. The truth-telling power of natural philosophy was in its infancy, and Rousseau was a thoughtful and witty critic of the social order of the ancien régime; his take on the new Romanticism that was sweeping popular Western culture was original and powerful. Nor is it surprising that Rousseau pushed Immanuel Kant into articulating the revolution in philosophy that was The Critique of Pure Reason. That one philosopher should challenge another is part of the great conversation of ideas stretching back to the Hellenic Age. What is unfathomable to me is that Rousseau should be regarded as the father of American educational theory in the twentieth century. Now here is a puzzle. Rousseau’s fictional narrative advocating “natural education,” Emile, provoked lasting debate about the best means of schooling. This debate ignored the historical truths that Rousseau fathered five children, all of whom were sent to orphanages as infants, and that he had no expertise whatsoever in tutoring. The question of whether children should be allowed to follow their interests or be inculcated with habits and general knowledge is certainly worth serious discussion and debate among all interested parties. But that is not what happened.
In 1894 the Pragmatist philosopher John Dewey was invited to chair the new department of Philosophy and Psychology at the University of Chicago. Note the title of the department: the human sciences were on the cusp of eminence, essentially facing the monumental decision of whether to align themselves with the sciences or with philosophy. That explosive potential would mark and mar the century to follow, in part because Victorianism was well into its final spasms (see “The Victorian Rift”). Dewey was a key player in this drama, for his position in what is often called the only American philosophical movement was that the truth or goodness of any declaration depended on its usefulness to the thinker, that usefulness to be decided by a life of “practical experimentation” (see “The Problem of Moral Pragmatism“). Note both his preference and his error here, for they molded American education for the next hundred years. First, his inclination was clearly and permanently toward science, and he saw every reason that our social existence should be ordered on scientific principles. But note too the absurdity of considering undistilled experience a kind of empirical experiment. The essence of empiricism and the source of the power of the natural sciences is the isolation of variables to assist close observation, the replication of results through repeated trials to filter out the faults of perception and preconception, and measurement to clarify cause and effect. It is this painstaking and excruciating process that has produced the glories of natural science mentioned above. And Dewey was suggesting what? That we can emulate this success in ordinary experience, the kind of activity in which perceptions whiz by like oncoming traffic and reasoning faces a kaleidoscope of judgments every second? What neurological or experimental evidence could legitimize such a sweeping recommendation?
And he was offered a university chair at a great American institution to do it? How can this kind of lunacy be explained?
I have argued previously (“Postmodernism Is Its Discontents”) that the turn of the twentieth century was marked by fevered attempts to remedy the defects of modernism made manifest by the contradictions of Victorianism culminating in the coming of World War I. John Dewey’s success was one of those attempts; it exploited the technological and theoretical revolutions that natural science was producing, harnessing its power to dreams of social perfection. Dewey’s notions were directly rooted in Rousseau’s musings on natural education, and so yet another grotesque Victorian hybrid of Romanticism and rationalism entered the mainstream, this time attempting to nourish children’s curiosity as the touchstone of a new social science.
It was called Educational Progressivism, and it was launched at the nation’s first laboratory school, founded at Dewey’s insistence in 1896 at the University of Chicago. As the new century dawned, colleges of education sprang up around the country, embracing the primacy of “scientific” pedagogy modeled on the Progressive model. Dewey himself vociferously denigrated obstructionists as “traditionalists,” and so the notion of natural education promoting socialization seeped into the new K-12 public school systems across the nation, backed by the full force of the new social sciences. The movement splintered in its application (such splintering is at once a flaw and a hallmark of human sciences lacking dominant paradigms), but by the 1930s it was called Constructivism, structured by the Swiss psychologist Jean Piaget (who, incidentally, was the director of the Rousseau Institute in Geneva while publishing his theories). Its modified goals could be summarized in two precepts: education should serve psychological and sociological goods, and all knowledge results from the personal construction of the learner. Constructivism is still the default academic model in U.S. schools. But note the postmodern implications of its goals and the empirical tone of its means of achieving them. To say that knowledge is constructed rather than discovered is to strike at the traditional heart of the goal of education. The continuity of Western culture and the critical abilities required to improve it are no longer central to our understanding of education. This abandonment tracked a larger cultural rejection of modernism and embrace of novelty in the first decades of the twentieth century in all cultural pursuits from architecture to poetics. If the teacher is no longer the competent guide to current constructions of knowledge for her students, if instead she is to open possibilities to their imaginative efforts to build the world anew, she must become the facilitator of… what?
Whatever the child currently values? And how is progress or promotion to be measured if success is to be determined by the satisfaction or social adaptation of the learner? The jargon this orthodoxy inspired in educational circles made the teacher “the guide on the side rather than the sage on the stage.” As students create their own virtual circles of truth, goodness, and beauty (see “What Is the Virtual Circle?“), they become the controllers of curriculum and the arbiters of relevance in the classroom. The current obsession with standards-based education is a direct repudiation of Constructivism. It is surprising to me that this current educational battle has not been more generally recognized and its roots uncovered. No one should be surprised that Piaget’s four cognitive stages of development were discarded in light of more sophisticated cognitive research (see “Empathy: a Moral Hazard”). Yet strangely, the erosion of the “scientific” basis for Constructivism seemed to provoke little doubt in its social science adherents, possibly because those who major in education have little subject-discipline knowledge. I predict that the roots of the theory, its pseudo-scientific formulation, and its application in American classrooms for nearly a century through the influence of colleges of education and professional licensure will one day be exposed as one of the great causes of the failure of American public education. And so at the beginning of the twenty-first century, we face this strange question: what are the odds of an eighteenth century French Romantic philosopher steering the course of American public education in the twentieth century?
Stranger things have happened. And far worse. The same bizarre combination of Romantic yearning and human science produced the two greatest scourges of the last century: communism and fascism. It was the Romantic philosopher Hegel who inspired Marx during his days in Berlin to posit as a fundamental law of historical science the dialectical process of feudalism, capitalism, and communism. And so a philosophical theory was again transmuted into a human science “law” that binds human will. And again, the original “scientific” paradigm was modified by successors, this time by Leninism, Trotskyism, and Maoism into successor and competing theories. And again, the theory proved fatally flawed. To offer another example, it was the philosopher Herder who molded the Romantic notion of German exceptionalism that was then “confirmed” by ethnology and the new sociology of thinkers like Max Weber into a theory of the uniqueness of the Aryan race and its supposed persecution by the descendants of Roman Europe. And so the “new men” of the new sciences marched confidently toward a future of social perfection first prophesied by the philosophers of the Enlightenment, inventing in their pursuit a word unknown until 1944: genocide.
A final example is to be seen in the work of Sigmund Freud, who published his landmark The Interpretation of Dreams in 1900. The medical doctor who had begun his career flirting with hypnosis and cocaine as cures for female hysteria trolled the shallows of pseudo-science from the beginning, and his reliance on myths as models of human behavior introduced powerful Romantic influences into his theories of the unconscious. His later works, in particular, effaced the line between science and philosophy, though it is hard to trace any period in which Freud drew it clearly. Again, his work was subjected to divisive reinterpretation by later generations of psychologists, partially as a result of his own sense of persecution, producing what seems a perennial problem for the human sciences: a profusion of competing and mutually contradictory paradigms, as indicated by the Psychology Today absurdity mentioned at the beginning of this essay.
There is a repetitive storyline here. Philosophy inspires Romantic yearnings that then are systematized into human science, all occurring as natural science rises in public esteem. These “new sciences” then are transmuted into competitive successor theories that revivify versions of the original theory and produce the very opposite of the social perfection they envisioned. So what is the lesson? We may draw several.
Most obviously, we should beware of the poisonous conflation of Romantic philosophy and the human sciences. It seems that combining the grand visions and yearnings of Romanticism with the trappings of secular priesthood granted to science produces no end of mischief in the world. Every issue examined by human science is worthy of thoughtful and careful conversation by educated persons. But Romantic emotionalism and human science dogmatism are anathema to the kind of conversation required to find the proper place for student-centered learning, economic justice, national identity, and healthful psychology. Perhaps our error was in our approach more than in our ambitions, for our history is full of examples of social advancement made possible by the yearnings of social reformers. We have been blinded by the power of science into accepting truth and goodness claims that seem reasonable and comprehensive, yet are far beyond the power of science to confirm. This process is the very definition of scientism, the unfortunate hagiography of empirical study that regards it as the only road to truth. Hats off to the philosopher of science Karl Popper, whose theory of falsifiability should be required reading for every educated person. Popper’s lessons, incidentally, retain their power in applications beyond science, for the temptations of systematizing practiced by postmodernists and by each of us in the building of our understanding of reality must lead to desires for premature closure and the self-love that characterizes belief in our own explanatory models. Popper warns us that the test of these constructivist visions is not their reasonableness. The “Clue Problem” is to blame: given a limited number of facts about experiences, a variety of rational explanations can be found to link them into a causal sequence, each seeming reasonable and even revelatory.
Popper wisely advises that we ignore the logical attraction of these theories in favor of something less self-evident: their falsifiability. We should ask ourselves under what circumstances our rational construction might be proved false. What set of facts or judgments could conclusively prove the error of our theory? Note that the theory need not actually be proven false, for that would end it; rather, it must be subject to tests that might disprove it.
Consider two examples. On the one hand, we have Einstein predicting the apparent shift in the position of a distant star caused by the passage of its light near the gravitational well of our sun. Should it be observed in a non-shifted position or should its position be shifted beyond 1.75 arc-seconds, a minuscule but precise prediction, his theory of general relativity would have to be revised or abandoned. When Arthur Eddington confirmed Einstein’s prediction in the solar eclipse of May 1919, Einstein became the twentieth century’s most important figure, for his theory had passed the test of falsifiability. The second most important, according to Time magazine, was Sigmund Freud for his work in psychology. Popper discusses one of Freud’s followers, Alfred Adler, diagnosing a patient without personal contact and arguing for the accuracy of his diagnosis “on the basis of my great experience.” But as Popper notes, the diagnosis was so flaccid that it could be modified to suit any of the patient’s personal circumstances—Adler was uninformed on such things—and retroactively claimed to be correct. This self-deluding quality is characteristic of practitioners of the human sciences. Their theories are too vague to be falsifiable. We can charitably attribute their self-deceptions to the catharsis that accompanies their first exposure to the social science paradigm they embrace. It is nothing less than a conversion experience in which their virtual circles are reoriented by the breathtaking scale of the paradigm they adopt (for more on this justification issue, see “Can Belief Be Knowledge?“). Of course, thinking of their attachment to their virtual circles in religious terms establishes a knowledge equivalency between human sciences and religious belief that neither side would embrace.
A more obscure question, but one of great importance in our current climate, concerns the fate of the human sciences themselves. If the defining quality of their nature is their concern with the “black box” of human motivation, then they face insurmountable odds in the current zeitgeist, and whatever reputation they can salvage must look backward rather than forward (see “A Preface to the Determinism Problem”). The future without question belongs to the hard sciences. Neurology and genetics are even now supplanting psychology, thanks in part to the invention of fMRI in the last decade of the twentieth century. Predictive human science is the problem, for descriptive studies of human activities in the aggregate can yield important statistical data. Just to be clear, no human science can predict the preferences of any person, though it can describe the behaviors of large groups of persons mathematically and so make large-scale predictions (though not too successfully, it must be noted). But letting go of prescription has not been easy. Psychologists initiated into the breathtaking predictive power of Freudianism consumed the entire twentieth century in learning to let go of their paradigms, which were actually moral theories rather than empirically verifiable predictions. Rather than try to open that black box, they next resorted to behaviorism, the study of observable behavior rather than motivation. In essence, they favored what could be observed. Now traditional human scientists might regard that as a total surrender of the field. Freud certainly would have. To my mind, the mystery of human consciousness is far too little understood to be the subject of scientific study, but we can observe and draw fruitful conclusions about what people actually do. We can call that psychology and still ignore the core questions of motivation and consciousness.
And, glory of glories, we can do real experiments in behaviorism that yield quantitative data and allow us to form real and useful descriptions and categorizations. But behaviorism fell into the errors of scientism from its Skinnerian beginnings. Because it could not study human will, it attempted to deny its importance. Thankfully, it has largely been replaced with cognitive theories that put human reasoning and preferential freedom at the center of their efforts. This kind of thing is less science than what used to be the province of religion or ethics, but it takes the reality of felt free will seriously enough to do some good. If one looks at a listing of the human sciences and thinks about how each field could do something similar, she will quickly see both the potential and the severe limitations of the human sciences, limitations serious enough to warrant combining them with more traditional if less venerated studies. It might be best, though, to ignore the spectacular failure of economists to build predictive models and the efforts of anthropologists to advocate for “healthy” social structures, as Margaret Mead so shamelessly did in her studies of the Samoans. From their very beginnings, the human sciences have had a problem isolating the true from the good. From Freud’s characterization of religion as “the history of a delusion,” to Timothy Leary’s advocacy of LSD, to competing economists advocating rather than analyzing policy proposals, it seems human scientists are just too human to be as objective as science demands. Against Popper’s advice, they fall in love with their beliefs and claim them as knowledge.
We have lessons to learn about justification as well. The most powerful warrant for our correspondence truth claims is the empirical one. The weakest is that of undistilled experience (see “What Counts as Justification?“). That is why I view with befuddlement Dewey’s attempt to conflate one with the other and see the dawn of the twentieth century as an age of epic ignorance and wild experimentation by those who should have known better (see “Postmodernism’s Unsettling Disagreements”). Perhaps if the prophets of the human sciences had indulged in less jargonizing and abstracting, they might have encouraged the rational discussion of issues that produce real human progress. Or maybe they were too infected with Romantic passion for that, an infection they thought they were curing with their pseudo-science.
The best news about this particular and dreadful chapter in Western history is that it seems to be ending. I don’t often credit postmodernists, but despite their sordid reliance on the terminology and methodology of the human sciences, their worldview forbids any admiration of empiricism’s supposed objectivity, and in the case of the soft sciences, their suspicions are well-warranted. Only this morning, I heard an economist and author advise that we distrust the so-called experts in his field. As they say, from the horse’s mouth. Every day, the true empirical sciences continue their relentless advance into the unknown, appropriating the traditional objects of the human sciences with the same inexorable efficiency they once used on philosophy and religion.