Modernism and Its Discontents

Major Contentions

  • Though the term “modern” is broad, I mean it in the sense of the domination of unique axioms of rational and moral commitment different from those employed in earlier and later eras.
  • Because they were inspired by — and ultimately challenged by — other foundational convictions, modernist axioms have always suffered from attacks on their legitimacy; additionally, their nature requires an ongoing self-critique that has revealed modernism’s deficiencies while masking its successes.
  • Modernism was framed by Martin Luther’s revolt against Catholicism in 1517; he made it clear his conscience was captive only to his own reason, that he would either be convinced or remain an outlaw in defiance of religious authority.
  • Though we find nothing extraordinary in his rebellion now, institutional authorities could hardly begin to process a justification for truth or goodness based merely on reasoned, private experience in opposition to dogma, established order, and trust in institutional authority; their bewilderment and the horrid reaction it endorsed reflect their own premodernist axioms of commitment.
  • We might see Luther’s revolt in terms of an heroic rebel speaking his own truth to power, fighting the hypocrisies of vested interest and institutionalized privilege; that viewpoint reflects yet another set of assumptions and marks a postmodern set of axioms that characterize contemporary life.
  • When seen in this light, an axiom takes on the quality of a set of presuppositions, rarely examined, that determine the perspective we use to understand the warrant for an experience or to prescribe its potential goods.
  • The dominant premodern warrant was trust, considering institutions to be formative of individual identity; a bestowal of trust definitionally transfers the power to decide to the authority considered better able to arbitrate issues of truth, goodness, and beauty than the individual who bestows it; this willing surrender is inconceivable to postmodern thinkers, who regard all such bestowals as a theft of agency made in bad faith whose real agenda is maintenance of disparities of power; between these fundamental oppositions, the first as old as civilization and the second the product of the miserable twentieth century, modernism found its footing.
  • In his stand at the Diet of Worms in 1521, Luther appealed to universal reasoning based upon his own private experience as a revocation of trust and a reacquisition of the power to decide; he implicitly appealed to others’ reasoning to reveal the same inconsistencies and hypocrisies in Catholic authority that he had discovered; this appeal was fundamentally inconsistent with premodernism because it placed the capacity for judgment in the individual rather than in the corporate body and it appealed to a common reasoning faculty to make sense of Biblical truth.
  • Unfortunately, Luther’s axioms could not be comprehended by the religious authorities whose failures had inspired them because reasoning on the meaning of scripture relied on a prior revelation that must always be private when subjected to “the priesthood of all believers,” rendering all such convictions acts of private belief rather than universal reasoning; this inherent flaw in religious conviction crippled the potential of the modernist axioms that would grow from Luther’s revolt and prolonged the bloody conflicts that resulted from it.
  • Luther’s revolution began a century and a half of savage religious contention in which institutional authority itself suffered a wholesale loss of trust without revealing any other compelling warrants for social comity and moral direction; those warrants had to be shaped in the maelstrom of the collapse of all social order over the course of the sixteenth and seventeenth centuries, hammered out in the midst of ongoing epistemological crisis.
  • Though the distinction between belief and knowledge has not been consensually accepted to this day, the abject failures of institutional authorities were made abundantly clear over eight generations of struggle, leading to the relocation of rational and moral agency in individuals, transforming subjects into citizens in the process.
  • By the end of the Reformation, authority’s inability to resolve conflict without forfeiting trust proved as insoluble for public morality as revelation’s private nature proved for public consumption; these manifest failures gave modernism the space to mature but also composited the judgment that it relied upon with the desire characteristic of religious beliefs.
  • René Descartes attempted to eliminate this taint by appealing to the intersubjective quality of human reasoning; he argued that our thinking, not our believing, can prove universal, though he mistrusted experience as inescapably private.
  • Francis Bacon accepted the Cartesian solution of universal reasoning, but he sought to minimize the subjectivity of private experience by standardizing the processes of experience itself so as to create a more universal process of perception; ultimately, this effort produced the scientific method, but first it had to accept a lower degree of certainty than religious authority had provided; establishing a minimal standard proved extremely difficult, a point revivalists of religious authority never fail to notice and one always disputed by modernists.
  • Over the course of the seventeenth century modernism began to be constructed on the twin pillars of universal reasoning and examined experience, but every assertion was open to challenge on the same grounds that founded it; modernist cannibalism was the process of critical examination and modification of its warrants and sustained criticism of its axioms.
  • One fertile area of critique was the reliability of human perception and of the reasoning that made sense of it; examinations of human nature suffered from the “black box” of human preferential freedom and temptations to “perfect” its operations; ultimately, these temptations produced one of the continuing embarrassments of modernist axioms: the pseudo-scientific efforts of the human sciences, most obviously exemplified by social contract theory.
  • What began as good-faith efforts to mimic the emerging natural sciences eventually saw the human sciences indulge in imaginative prescriptions for human perfection as simulacra of morality in the twentieth century, which proved fatal both to modernist axioms and the effort to create postmodernist ones.
  • At the same time, modernist axioms succeeded beyond expectations in the natural sciences, producing deep explanations and bountiful technologies; but by perfecting its methodologies over the nineteenth and twentieth centuries, empiricism starkly revealed its incapacity for moral prescription while also highlighting the failures of other modernist warrants to produce them.
  • These failures were magnified by the continuing appeals of premodern authority to trust even in the face of modernist warrants that rejected authority’s maxims, producing ongoing hypocrisies, exploitations, and contradictions that modernist cannibals were all too eager to highlight and postmodernist ones to exploit.
  • Together, the misdirection of the human sciences and the contradictions of institutional authorities produced the moral crisis so clearly revealed by World War I; the conflicts that characterized the twentieth century attested to modernism’s failures and launched the postmodern revolution.
  • Literacy and mass cultures, particularly popular entertainment, allowed postmodern values to permeate Western societies even before postmodernist axioms were fully articulated by the last third of the twentieth century.
  • While postmodern values confirmed modernist emphases on individual experience, they considered reasoning the product of that experience rather than its director; this change inspired both existential angst and the identity politics that followed it.
  • These changes were a direct result of “modernist cannibalism,” particularly the epistemological contention that perception must be inescapably separated from reality by a perceptual wall.
  • This logical conclusion was partially obscured by the power of empirical science to bridge that gap by a formulaic constriction of experience to formal observation and experimentation, but though this effort increased the prestige of science, it also eroded the value of “common sense” as a reliable guide to experience, which devalued moral reasoning.
  • This crisis of modernism eventually produced a postmodern rejection of intersubjectivity, but before the twentieth century crisis, there was an earlier attempt to pierce the “perceptual wall,” the nineteenth century movement called Romanticism.
  • This effort relied on intuition and emotion to provide a certainty that reasoning could not match; its impact on early mass cultures was incalculable, and because its greatest influence coincided with both imperial expansion of European powers and the birth of true mass cultures, Romanticism became the first worldwide cultural value system.
  • Romanticism faded in the last half of the nineteenth century, in part because of the bulldozing power of empirical discovery that challenged its pantheist assumptions and in part because Victorian cultures attempted to domesticate its means to truth and moral certainty.
  • Romanticism was the last universal effort to prescribe moral truth; its failure was magnified by the awesome powers of natural sciences and magnified yet again by the final breakdown of institutional authority, all leading to the civilizational crisis that was World War I.
  • This crisis centered on a single question, “What is progress?”
  • Since this question requires an answer involving goodness — and ultimately moral judgment — it forced a full awareness of the vacuity of possible responses.
  • This failure was highlighted by the final collapse of religious authority, a slow-motion disaster that had begun with Martin Luther’s attack on institutional morality in 1517, but this reallocation of agency was finalized in the first decades of the twentieth century.
  • The catastrophes of the twentieth century were magnified by the abysmal failure of the human sciences to replace religious authority as arbiters of moral truth and procurers of progress.
  • One way to view this disaster is to see it as a conceptual field of battle over moral truth, with postmodernism championing a radical individual agency creating personal standards of value versus a modernist position rooted in the human sciences, the former proclaiming radical freedom and the latter radical contingent determinism.
  • A false reconciliation emerged with the view of the unconscious as the mediating mechanism of preference; it was claimed to be both scientifically sourced and privately directive; it is difficult to gauge whether its fictive existence was more damaging to real science or real morality.
  • The twentieth century was one long attempt to bridge the gap that the act of severance creates between what we think we know and how we employ it to find the goods we value.
  • The very odd outcome of this effort was an enshrinement of belief as the medium of moral commitment, with premodernists considering it to be an attachment to a transcendent moral system and postmodernists thinking it an intricately constructed structure of privately experienced reality; this shared devotion to private belief cannot produce a reconciliation because premodern religionists mistake their beliefs for trust in authority and postmodernists are axiomatically hostile to all authority.
  • This conflict over the true nature of belief has yet to reach its crisis point, which will arise when religious authority is definitively and universally rejected and the human sciences alternative is similarly denied as inconsistent with felt preferential freedom; at that point, advances in true empirical work in neurology, genetics, and artificial intelligence will more clearly reveal what is universal to human preference, and empiricism’s act of severance will force a concession to species-specific universal reasoning to engage in a competent search for universal human rights discovered, defined, and defended through modernist axioms of commitment.

We call all sorts of things “modern” intending approval. We think modern appliances and modern medicine to be admirable things. We applaud innovation and novelty, often for their own sakes. The meaning I have in mind is narrower and more specific: I refer to “modernism” as the domination of unique axioms of rational and moral commitment distinguished from others by their history and content. These axioms color, code, and qualify the basic terms we all use to understand the world and our place in it. This implies a difference from other times and other values that underwrite other justifications, though the distinctions are so poorly understood that these same terms have quite different meanings when used in the sense these other axioms require (see “The Axioms of Moral Systems”).

The modernism I have in mind has had a hard go: conceived in controversy, born in agony, matured in confusion, and now fading into senescence. Not a single year of its life has passed without assaults on its legitimacy and dissection of its premises. It has hurt its own case by remaining in a state of arrested development, magnifying its failures while failing to flaunt its successes. It has not recognized its own powers or has used them badly and now lies prone on a field of battle its enemies have chosen. In the early twenty-first century, its future looks as bleak as the prospect for public moral consensus, which is not a coincidence since its passing must kill the only means to achieve the common good (see “Two Senses of the Common Good”). In this essay, I wish to analyze some of the signal problems that have tracked its history. These complicate our own.

Modernism as an axiom of justification began with Martin Luther’s stand at the Diet of Worms in 1521. In his famous response to the Catholic Church’s demand that he recant the heretical positions he had first taken four years earlier, Luther staked out the boundaries of modern thought.

I cannot submit my faith either to the pope or to the council, because it is as clear as noonday that they have fallen into error and even into glaring inconsistency with themselves. If, then, I am not convinced by proof from Holy Scripture, or by cogent reasons, if I am not satisfied by the very text I have cited, and if my judgment is not in this way brought into subjection to God’s word, I neither can nor will retract anything; for it cannot be either safe or honest for a Christian to speak against his conscience. Here I stand; I cannot do otherwise; God help me!

If his words move us to shrug, our complacency only shows how completely this language has been absorbed by Western societies today, so much so that we cannot imagine another way to be. But think about Luther’s stand at Worms from the position of his listeners, Roman Catholic scholars and prelates whose trust in authority was so profound that they literally could not understand how a single and unaided mind could dare to oppose it. They frankly could not see how one undirected will could even contemplate opposing a millennium of dogma and the collective wisdom of sixty generations of exegesis. And what could this Luther bring to bear to reveal “error and…inconsistency” but his own critical inspection? Of what was this faculty composed? His own reason? Based upon what authority? His own experience?

Most of us today would not think to ask these questions because we operate from a different set of assumptions. Today we think this all an issue of power. They were strong and Luther weak. They could bring the apparatus of state and ecclesiastical resources to crush this heroic rebel who dared to speak his own truth in the face of their putative authority, which when we inspect it with contemporary eyes must only consist of their ability to dominate. We observe the scene of the tonsured priest in simple black robes staring down the bank of scarlet and bejeweled envoys of the status quo. Martin was speaking his own truth, telling his own story, fighting inequality like that later Martin we so admire. They were defending their own self-interest, preserving their own privilege. We might imagine ourselves acting in the same way in the same position. If this analysis captures our view of that confrontation, it also perpetuates our own peculiar axiom of commitment, one so different that neither Luther nor his inquisitors could understand it.

When we inspect Luther at Worms, we see the same event through different lenses. These separate justificatory assumptions color the judgments that must flow from them. The members of the Diet represent a premodern world built upon tradition (see “Premodern Authority”). We observers would likely challenge that narrative because we employ a postmodern set of assumptions that denies tradition its authority and authority our trust.

That might affect our view of the matter in two contrasting ways. We might think our view more correct and sophisticated because it uncovers hidden machinations that even Luther’s interlocutors might have been unaware of, reflecting psychological truths they could not have known. Or we might think this is all an issue of perspective, of self-interest. But is the truth of Luther’s stand at Worms merely a conflict of viewpoints? Would any party at the Diet maintain that its own view was merely a matter of which side of the argument it happened to benefit, or would they find that assumption dishonorable and absurd? If a truth can be found in their contest, does it lie with champions of traditional authority or with its challengers or in some third space between them? The answer you give depends less on the situation than on what you bring to it. Axioms of commitment do not concern the truth one examines so much as the means by which one inspects it. If the details of any issue may be compared to files, the axioms we bring to the issue may be compared to the folder we put them into, the way we choose to understand them.

Those red-robed clerics at Worms carried one set of axioms into the Bishop’s Palace. Their premodern outlook privileged authority, sacred and secular, and expected individuals to surrender their own capacity to decide in deference to the complex of traditional institutions that must finally trace its will to God’s. Even framing it in this way would have been foreign to them. I use this language because their framing would be foreign to us. They could hardly sanction their own ability to decide. Another name for abandoning that capacity, of surrendering their own agency, is trust: authority’s only justification (see “Authority, Trust, and Knowledge”).

We doubt their trust, see them bowing to power, acting in self-interest, in defense of established privilege. They saw little or none of what we see because their trust was so absolute as to disallow analysis. What mattered was less their consent than their unthinking acceptance of established order as God-ordained. Consent implies an inspection that trust axiomatically disallows.

We find their outlook inconceivable, which explains the seeming remoteness of the medieval world and its inhabitants from our experience. Our outlook sees dispute rooted in disparities of power and the pursuit of equality in defiance of tradition and the institutions that perpetuate it. What Luther represented at Worms was something quite different from either of these perspectives, though as its harbinger, he could not quite grasp the extent of the revolution he was starting. His was the first modernist rebellion. It marks a third and distinct kind of axiom.

He sanctioned his own judgment as superior to reams of authoritative texts. To do that, he had to endorse his own agency, prefer his own understanding to tradition and authority. Now history since Luther has made plain that persons can do that in all kinds of ways, but Luther chose a particular one that became the core of the modernist axiom. His standard was universal reason. Luther made clear that his trust had been sundered by hypocrisy and contradictions. His Ninety-Five Theses, which started the Protestant Reformation in 1517, highlighted two kinds of inconsistencies: those involving contradictory doctrines and those involving disjunctions between what Catholic authority decreed and how its representatives acted, hypocrisies so severe that no mere protest or reform could remedy them. Crucially, by detailing them in a public setting, he assumed others would be convinced to think as he thought. By stating specific objections and justifying them by his own reasoning and his experience as an Augustinian priest in Wittenberg, in a domain loyal to the Holy Roman Emperor whose crown was bestowed by the Pope, Luther began a two-century struggle to elevate universal reason applied to private experience as the route to truth and goodness, one capable of tempting other congregants to stake their own claims founded upon their own reasoned experience. In taking his stand, Luther declared himself the first modernist thinker.

He isn’t really known for that for the very simple reason that he failed to understand the implications of what he was demanding. The world that he faced was thoroughly premodern, a Christendom whose ultimate arbiter must always be institutions competent to arbitrate all dispute by means of a legally static hierarchy of power. Crucially though, inhabitants thoroughly embraced its legitimacy as eternal and divinely ordained. They may have denied that any single power source was the ordained representative of God on earth, and like ourselves were often tempted to pursue pure power with a reverent face. Even if tempted to rebel, they simply reallocated trust rather than revoked it. They could not question that authority, though perhaps not any particular representative of it, was acting in God’s name to work God’s will in the world. And Luther was staring down these same representatives demanding that his own reasoning might prove sufficient to overthrow the whole divine apparatus of Catholicism, that he was right and every artery and vessel that acted “by the grace of God” was therefore wrong. By what right could he claim such a thing?

What happened next proved he did not understand what he was asking. The Protestant Reformation exploded into a paroxysm of rational agents all challenging not only Catholic orthodoxy but each other’s revelations. From 1521 through 1648 religious wars pitted nations, dynasties, towns, and neighbors against each other. Luther lived through the first twenty-five years of it, tirelessly distributing pamphlets and German translations of sacred texts, owning the revolution he had wrought. But very early on, he began to regret the way he had wrought it. “Reason is a whore, the greatest enemy faith has,” was his later judgment on his own means of judging. And so modernism faced its first crisis within its own creator.

We may think of this issue as its birth pangs. It could not be delivered from the religious authority whose failures had created it. Premodernism had long since settled the private versus public constituents of religious commitment by elevating dogmatic authority as supreme over private revelation. Luther assumed he had added a new ingredient to the mix: reasoning on the authority of the Bible only after having received a divine insight. His view of redemptive grace and divine revelation meant that any reader of the Holy Word would be moved to its truth, that the reasoning he had held up as a paragon ought to be used only after revelation had planted its seed of perfect truth in the mind. His faith in the divine transmission of inerrant truth thus relocated from passive trust in Catholic authority to private belief in the revealed truth of scripture, the now infamous sola scriptura. This relocation effectively made every reader of scripture his own pope. That turned out very badly. Ten generations of Reformation horrors, slaughterous European wars, and the twenty million killed in the uncompromising conflicts of protest and retrenchment — not to mention the scores of millions slaughtered since in defense of private belief against sacred dogma — tell us Luther was wrong about that. Inerrant, consensual truth could not derive from revelation interpreted by reason. This traces to the essential nature of private belief as an emulsion of judgment and desire. Infant or adult baptism, transubstantiation or consubstantiation, seven sacraments or two or none, faith or works: every belief is a kind of attachment, truly a faith rather than a knowing (see “Knowledge, Trust, and Belief”). This was one battle line of the Reformation.

Not only did the distinction between belief and knowledge have to be teased out — something we have yet to complete — but the even more fundamental question of agency, both rational and moral, required resolution. Who decides on what is true and good? Luther at first saw it simply: the individual with God’s guidance does. The Catholic Church, particularly after the Council of Trent convened in 1545, saw it differently and reinforced institutional authority as the guarantor of truth and goodness, explicitly throwing individual judgment under the coming train of conflicts in favor of tradition and established power, doubling down on the premodern axiom. One only needs to inspect the independence of Roman Catholics, often in opposition to their own dogma and leadership, to recognize how thoroughly contemporary belief has rejected the fiats of that Council. What is crystal clear now was only a dim premonition then. Reformation struggles over the course of generations could find no means to reconcile the zero-sum question of agency: one cannot half surrender it or half retain it for one’s own use. And the fundamentally religious nature of the struggle meant the stakes could not be higher since they involved eternal bliss or damnation as well as the less vital resolutions to the conflicts we wage here below. It is ironic that authority retained the position it had always held since the fall of Rome. Its intellectual position was consistent and provided something individual agency tied to belief could not challenge: a clear public moral warrant bolstered by the authoritative voice of established institutions. That meant certainty and a unity of truth and goodness claims into which no individual could insert the lever of compromise or private truth. Belief could provide a private certainty, to be sure. The burnings of worshipers in their houses, the mass slaughters, the endless succession of new prophets: believers were willing to stake their lives on the truth and goodness claims they declared and frequently were martyred because of them, only to be replaced by yet another generation of prophets with new private revelations and new enmities. Belief could not match the public moral power of authority, and that provoked the second crisis of emerging modernism.

From the way I have framed the crisis so far, you might wonder why modernism was not simply snuffed out in its infancy, strangled by a jealous authority that could easily demonstrate the futility of killing for a private faith and as easily prove the advantages of a public one to the common weal. But authority suffered its own crisis during the Reformation years even as it attempted to extinguish the spark of private rebellion. It faced a forfeiture of the trust that was its only means of warrant. As generations of religious war demonstrated the impossibility of proving private and competing revelations, adherents found themselves forced to translate heretical beliefs into authoritative dogmas for second-generation followers, forced in other words to turn faith into doctrine, complete with sacred texts and guiding ritual. These new orthodoxies of Lutheranism and Calvinism and Anabaptism and a hundred others were in their turn adopted by rebellious civil authorities for their own utility in wars against opposing princes and prelates. By the third generation of Reformation unrest, authority found itself in an analogous situation to private belief: challenged by differing interpretations of divine will, differing orthodoxies, and opposing claims with no means of reconciliation. Public morality now found itself torn by the same dissension that had gripped private belief, though no clear differentiation of the two was possible in the tumult. Over succeeding generations of conflict, trust was eroded until a simple transfer to another authority, which had been the thousand-year resolution of crises of trust in medieval Europe, was understood by combatants to be utterly futile. This was a tipping point not for any one authority but for all, a perfect analogue to the ongoing crisis of belief (see “The Fragility of Religious Authority”). For what can any authority appeal to but the trust of its beneficiary? And how was that beneficiary to arbitrate appeals to trust when the existence of an opposing voice itself must challenge trust, must wear it down until nothing is left but fanaticism or suspicion? It is crucial to understand what this erosion means to the congregant. She had surrendered her capacity to judge to the authority she acknowledged to be more capable of guiding truth and goodness preferences than she was. That is what trust means (see “Authority, Trust, and Knowledge”). When doubt arises, she must arbitrate it by appeal to the capacity she has willingly surrendered, and that requires a re-appropriation of her own ability to decide. How can an authority combat that re-appropriation of agency? By demanding trust? This is not to say that persons riding the Reformation whirlwind could see any of this clearly, though it is all to be read in Luther’s Table Talk and Calvin’s Institutes of the Christian Religion and the pronouncements of the Council of Trent. Authority’s inability to resolve conflicts without forfeiting trust proved as insoluble for public morality as revelation’s private nature proved for private convictions. And this impotence gave modernism a bit of space in which to grow.

Its first tottering steps were made by René Descartes in the early seventeenth century. Living in the ruins of the French Reformation, in the shadow of the Huguenots and the St. Bartholomew’s Day Massacre, he confronted the splintering effects of private revelation and public failures of trust (see “Modernism’s Midwives”). His solution was Luther’s at Worms: rational consistency. But because he turned the spotlight away from the contentions of religious revelation, he could reasonably appeal to what was universal and common to our ordinary reasoning. We shouldn’t start with the hard problems of God’s will, he said. We should start with the intersubjective awareness we share: we can all reason alike. The genius who gave us Cartesian geometry argued that the operations of mathematics prove just what generations of religious conflict had cast doubt upon: we share a reasoning capacity that operates, or at least can operate, in the same way for each of us. We all can comprehend a geometry theorem. We all understand denotations in language. It is our thinking rather than our believing that is universal. We can build on that, not so we will agree on everything, but on enough things to find civil concord and common purpose. Our logic will guide us there.

We would not have Descartes’ solution, half of modernism’s axiom, without Luther’s elevation of individual agency and the atomizing effects of the crisis of religion in the Reformation. By turning away from religion — though not too far away since he also claimed we can have a priori knowledge of God that at least allows for the claims of revelation — Descartes sought to find common ground among warring factions. But was it to be found in the formulae of mathematics only? He was suspicious of claiming common experience even when interpreted by that universal reasoning. The Reformation had confirmed persons see realities quite differently. Descartes imagined a deceptive genie who might warp perceptions as happens when we dream or hallucinate. Luther had trusted Biblical authority, but the Reformation had shown that to mean different things to different people. Descartes thought the same of experience. Reason might reveal common truths, but experience, like belief itself, must be private.

But what is his vaunted universal mind to reason on if not experience? This was the view of his contemporary, Francis Bacon. The English Reformation had been less bloody to this point than the continental wars of religion, though that would change a generation after his death, so Bacon was less daunted by the deficiencies of experience. His Novum Organum (1620) squarely faced modernism’s continuing challenge, which was the unreliability of private experience. As always, the problem was rooted in religion. Divine command offered the indisputable advantage of certainty. Its truth and goodness claims were warranted by the same source, an emulsion that could not be dissolved, and this lent them an indubitability that was comforting to trust. And authority had dominated public morality for so long that this standard of certainty was considered the seal of truth and the minimal standard of morality. In today’s moral chaos, this still strikes some persons as the most powerful argument for religion just as it did in the early years of Reformation turmoil. That kind of certainty explains both the ferocity of the religious wars and the agonizing slowness of alternatives to emerge from them. But this amalgam of truth and goodness is brittle rather than strong once examined by reason, for the very act of examination requires rational agency, tempting doubt of both truth and moral prescription. If reasoned experience hoped to replace private belief and public authority, it required a different and lower standard of truth and goodness that also allowed for some gradual improvement. Bacon showed how that could be done. Experience might reveal consensual truths even though doubtful, but these could be improved with more careful thought, Bacon argued, even to the point where they might rival religious confidence, might prove as true as religious dogma. “If a man will begin with certainties, he will end in doubts; but if he will be content to begin with doubts, he will end in certainties.” Bacon’s project was to explain how that might be accomplished through regulating experience so as to limit errors of perception, language, theory, and convention. He cautioned specifically against established authority, against simply accepting truths second-hand, insisting that evidence alone must prove convincing and thereby implicitly focusing not just on reasoning but on the means to improve its quality. Ironically, he condemned contemporaries’ reverence for Aristotle, that father of reasoning on experience, arguing that no one’s authority alone ought to be decisive or confer a spurious certainty. That could only come from more careful attention.

In one stroke Bacon had found modernism’s greatest power and greatest weakness. Its power lay in careful thinking on experience. When prophets preached on their own revelation in preference to another’s, they destroyed as they built. The orthodoxies of authority could not respond constructively to critiques without damaging trust. But reason seemed to build upon the products of other minds; to reason cumulatively meant using its own methodology not only to construct but to reconstruct what had gone before, to make it stronger. Challenge belief and you destroy or replace it. Challenge authority and you damage or entrench it. But challenge an analysis with a better one and you improve it. The shoulders of giants indeed! Improve it, yes, but not to certainty. Bacon was right about refining reason to improve judgments, but he was dead wrong about improving them to certainty. Modernism was forced to face the uncertainty of all of its judgments. The issue always could be framed as distancing it from the Certain Truth that other and competing methodologies might claim to find by other means.

Bacon, credited with founding the scientific method, was more inspiration than inventor. But our assessment of his lack of sophistication is just another way of commenting on how far modernism had to mature before it might build satisfactory axioms of commitment. Still, in prophesying what would one day be called “science,” he was dead on target. It simply took a long time to get there.

With universal reason firmly in hand as one axiom and examined experience as the other, modernist thinking entered a long era of refinement in the late seventeenth century. It asked the right questions: of what does reason consist and how does it process experience? Its answers at first seemed encouraging. The first philosophical empiricists began reasoning about reason: how does logical thought achieve and verify truth? Sophistication brought theoretical complexity which, while thoroughly reasonable, was based so loosely on so few indisputable truths that it soon engendered controversies reminiscent of the Reformation furies over belief and for the same reason: theories about the mind were speculative, easily warped by the desire to find confirming evidence. The early philosophical epistemologists found many things to agree on and many more to fight over. What could the mind know a priori, meaning before experience colored it, and once perceptions etched their patterns on the brain, how were they recorded and translated into experience and then interpreted by conscious reason? Later theorists critiqued their elders, using rational analysis to confirm and refute. This modernist cannibalism worked well when facts were readily available to perception, less well when reason alone was asked to tie together too few consensual truths into some synthesis. By the late seventeenth century, it was becoming clear that discovering the natural world was a fairly straightforward process of ever tighter rules of observation and experiment, all of which revealed new wonders explaining the parts and processes that make up what we all perceive. But perception itself, the means by which we know the world, became a larger and larger problem. It was clear that the senses presented a picture of reality to the mind, but how could the mind know it to be an accurate one? Naturalists found pragmatic checks on their desires and imagination that over two centuries became a true methodology: very close observation, limiting variables, repeating an experience, describing it in arcane and often mathematical language, and opening it to peer review. But theorists who dealt with studying human affairs found theory very much easier to construct and proof very much rarer than their naturalist colleagues. A paucity of observable facts slowed the confirmation process. Ethical considerations impeded experimental efforts. Most frustrating was the bothersome intrusion of human will into this new science of man. Confirmation of natural events proved a reliable check on emerging natural science. It led organically to predictions of outcomes whose accuracy confirmed the hypotheses these naturalists were proposing. The act of looking more closely at experience so as to improve reliability led gradually to putting contingent determinism at the very center of the scientific enterprise. It allowed testable hypotheses, laws, theories, and later, paradigms. But students of human nature and behavior found their subjects frustratingly unpredictable, prone to follow their own ideas rather than the prescriptions of their examiners. This was less of an impediment than contemporary thinking might imagine since the first efforts of these Enlightenment thinkers were highly theoretical explanations of the processes of perception and reflection owing nothing to the physical functioning of the human brain. And so another error wormed its way into modernist axioms: the intrusion of beliefs into human sciences’ theories. 
From the start, practitioners of the human sciences sought human perfection through modernist axioms that were displacing the more ancient religious ones. It is doubtful Enlightenment thinkers realized they were again falling into the temptations of belief, their studies warped from the beginning by confirmation bias.

Naturalists sought the truths of nature, but human science from the first efforts of the philosophers sought to perfect human nature. It was apparent to the dullest sense that universal reasoning was more hope than reality. There are innumerable ways to err in arithmetic but only one correct solution, many more ways to think poorly than well. Implicit in a reliance on reasoning is that reason will be done well, which is an ideal attainment, even a moral goal. Modernist axioms were never psychological descriptions of how persons act. From their very beginnings, they were prescriptions for seeing the world well and clearly. As an exhausted Europe emerged from the Reformation in the late seventeenth century, it was manifest that enlightenment defined less the nature than the goal of the philosophers. The reality was always less tidy. So advocates of the new science of man set about the challenging business of perfecting mankind.

The most urgent task was to find a means of political and social affiliation reliant on these new axioms of commitment and free from the baggage of divine command. Rejecting authority implied rejecting all the myriad props it had provided for social harmony. But how could persons hope to use reason to make sense of present societal relations that still relied on the authority that must appropriate the agency reason requires, and more to the point, how could such an effort identify societal goods worthy of common pursuit? The Enlightenment sought and found an answer. It was called the social contract, a theoretical creation that saw political power arising from a prehistoric compact in which persons conceded power to a state in return for civil order. Theory required that individuals must possess political power in an initial state to give it away to government, so theory had to imagine a state of nature in which persons exercised total power over themselves so that they could cede some of it to rulers and so have a stake in a government not beholden to the divine. It was complete nonsense, of course, pure theory, with no connection to historical events or political realities. But it did meet the theoretical criteria for an individualist society in which persons’ own agency guides their political decisions in their own self-interest. The social contract was a convenient fiction that has transformed itself into a kind of metaphor for democratic government, though its inventors thought it quite literal. In standard modernist cannibalist fashion, the theory was rejiggered by numerous thinkers: inspired by Plato, first mentioned by Cicero, pitched by utopian religious communities in the Reformation, and given full form at last by Thomas Hobbes in his Leviathan of 1651. It then was reincarnated in various forms, by Locke and Jefferson, by Rousseau, and in the twentieth century by John Rawls. Each version was more inventive than the last, and each was purely speculative and ahistorical. Contractarianism came to embody the contradictions inherent in the human sciences. For all its falseness and manipulativeness, it survived because it provided something modernism needed as a desert needs rain: a justification for moral goods not reliant on religion, one respecting individual rational and moral agency, derived from experience. But it, like the human sciences of which it was a part, was from its beginning a canard: moral philosophy posing as political science, and the more confidence true science earned from its discoveries and inventions, the more desperately the human sciences clung to a reflected prestige. That deception’s ramifications grew harmful in the nineteenth century and nearly fatal in the twentieth as the imaginative prescriptions of human science percolated into public culture as simulacra of morality. The grand theorizing of the positivist sociologists and psychologists, of the criminologists and ethnologists, the educators and the economists moved cultures, deeply impressed by the powers of science and increasingly aware of the hypocrisies of institutional authorities, toward the fulfillment of their societal prescriptions. The grand philosophical theories of Smith and Locke in the eighteenth century inspired the even grander scientific pretensions of Saint-Simon and Comte, Marx and Spencer, Toynbee and Huxley, Bentham and Mill, Weber and Nietzsche in the nineteenth.
By the turn of the twentieth century, religious authority had nearly completed its long fall from public grace and natural science its long adolescence, and the public uncritically embraced the prescriptions of unabashed enthusiasts of pseudo-science and social progress: Freud and Jung, Keynes and von Mises, Dewey and Piaget, Spengler and Lenin, and a long line of others. Two things changed to magnify the damage of these “social scientist” prescriptions for the public weal. First, technology so permeated and disrupted societal norms that a turn to the ill-defined human sciences seemed an almost imperceptible shift from a comprehensive reliance on science in general. Second, older sources of public guidance had catastrophically failed, most clearly in the lunacy of World War I but more generally in the clash of axioms that Victorianism spread throughout the world in its imperial conquests (see “The Victorian Rift”). Not only did true science grow more prestigious than more traditional sources of finding truth; these older options also declined in power and prestige. The twentieth century was the mature phase of this evolution that had begun with the Reformation.

Its many failures can be traced to the sanction granted to the human sciences in particular and to the undirected power of science in general as well as to the growing distrust not only of institutional religion but of all institutional power. The moral vacuum was partially filled by the misdirection of the human sciences, but it was also shaped by one of human science’s products that grew over the course of the twentieth century in academia, particularly in French universities among ethnologists, anthropologists, and psychologists. Using the arcane terminology of the social sciences, it reached maturity in the 1970s and provided a theoretical grounding for a broadly scientistic theory of social analysis called postmodernism. In its debt to the human sciences, this movement began as modernist. It valued individualism, in truth, even more than modernism had. Because it matured in an era of capitalist domination, imperialist and racialist oppression, and the first stirrings of feminism, its values were egalitarian. Because institutional authority had continued to insist on its right to seek public trust in denial of modernist axioms of individual agency over the entire course of modernist evolution, postmodern critics were hypervigilant in calling out hypocrisy, the betrayal of the aspirations of modernism. Its objections were ultimately political, for the clearest hypocrisy of modernism was a political one. Its contractarian theories not only prompted a libertarian defense of individual freedom in defiance of political and social equality, but they all but guaranteed the oppression of the minority despite their being rooted in founding principles of individual right. The hypocrisies were everywhere apparent. Why did authorities not oppose the gross inequalities that freedom produced? Modernism had defended slavery, had denied the female halves of its populations the rights its theories had championed, had exploited the laboring many to enrich the idle few. The theories of modernist social scientists prescribed a coming social perfection, but the realities of Western societies had betrayed those promises. Certainly, natural science had produced technological wonders that had made material abundance possible, but it had also abused the environment and produced world-killing weaponry all too often employed over the course of the twentieth century. Scientific socialism had cost ninety-four million lives while defending ridiculous social experiments it claimed were warranted by empirical evidence. Fascism had killed sixty million thanks to pseudo-scientific ethnology, anthropology, and economic theories. Colonialism and imperialism had spread on theories of social Darwinism and racial phenotypes (and these were abetted by missionary motives approved by religious authority). Theories of the subconscious or unconscious permeated popular cultures despite having absolutely no empirical evidence to support what were actually poetic allegories. The twentieth century saw the triumph of empirical science, it is true, but thanks to the human sciences also the permeation of theory into every facet of Western life. Postmodernism was born in protest over these realities, but even its protests were framed as theory, and it could offer few solutions to modernism’s failures beyond a utopian equality of degree (see “The Riddle of Equality”).

Because it was a child of the human sciences with their manifold and contentious paradigms, postmodernism was always steeped in theory, though its subtleties dribbled away as its values percolated into popular cultures. Growing literacy and mass entertainment eventually allowed postmodern values to flood Western life (see “Postmodernism Is Its Discontents”). That flood came primarily from two sources: academia and popular aesthetics (see “Three Portraits”). The same percentage of Americans had completed college in 2015 as had completed high school in 1950: around 34%. They were exposed to postmodern thinking in both, thanks to constructivist educational theories and deconstructivist literary ones. If postmodernism drenched literate society in the last half of the twentieth century, it reached a far wider audience through not only the narrative and pictorial arts but also the music scene. Its broad message of heroic alienation against mass cultures reinforced half of modernism’s axiom and repudiated the other. Postmodernism confirmed the power of individual will in innumerable ways in popular and academic cultures. But it rejected the possibility of universal reasoning, arguing that experience must color and finally form reasoning, an explicit rejection of the possibility of a universal rational faculty. Initially, the conformism this produced was seen as a nearly unavoidable danger because “the culture” was thought the source of all the evils of tradition and bourgeois values and therefore an ongoing threat to personal freedom. But by the 1980s, postmodernism had so infused every aspect of Western cultures that its message began to change, to regard cultural pressure as irresistible, even desirable, in forming private identity. This eventually led to identity theory, the claim that irresistible cultural forces mold the mini-cultures that postmodernism championed in opposition to the grand narratives of exploitation pitched by entrenched power. In both iterations, though, postmodern values clashed with modernism’s elevation of universal reason. Why did that happen?

Blame their axiom of commitment and the modernist cannibalism that came of it. As thinkers, and later social scientists, began dissecting the nature of experience, their first conclusion, that experience might also be universal, got tangled up in their investigation of reasoning. They found perception must interpret experience, and they could find no means to make perception as universal as the reasoning they thought must derive from it. On the contrary, by the end of the eighteenth century, they had proved to their own satisfaction that perception must always be private and that it colored experience in ways both unknowable and fatal to any conception of “common experience.” The question remaining was how impermeable this perceptual wall, this individual filter of all experience, must be. At one extreme they could see — and we can confirm — the power of science to make experience subject to common reasoning. Its processes can only accomplish that by constricting experiences to their elemental constituents: limiting variables, attempting to replicate and study one experience in a laboratory setting. The results confirmed that universal reasoning was indeed possible, but only by the most careful application of the scientific method to only the most limited of experiences. But what of that “common sense” of the Enlightenment philosophers? How could ordinary reasoning possibly confront the astonishing depth and complexity of physical reality that the professionalization of science was beginning to reveal?

That question prompted two responses, at first quite distinct but by the twentieth century mingled in ways we all recognize.

The first was to find other ways to penetrate the perceptual wall: beyond science, beyond even reason itself. This effort produced history’s first mass movement: Romanticism. By the 1820s, it had permeated society much as postmodernism would a century later. Drawing on old traditions rejected by modernist thinkers — gnostic revelation, private intuition, emotional intensity, and fideism — this quasi-religious movement could celebrate individual agency and reject religious authority yet still claim revelation. It envisioned truth penetrating the perceptual wall not by some tortuous and fallible reasoning process but by a direct transmission of divine certainty to the human heart verified by deep feelings. This recycling of Luther’s positions was disguised by the pantheist quality of the divine in Romantic theories, ironically tracing to the preconscious sorting mechanisms of sense-data that Kant, that most stringent of modernists, had argued for in a revolutionary synthesis of modernist cannibalism. He called that presorted composite of sense-data swimming up to conscious reasoning by a fatal word: intuition. It was the project of two generations of German Romantic philosophers and a century of human scientists to define what that entirely theoretical and hugely contentious thing, intuition, might be and where it came from. The Romantics offered an answer that contradicted Kant. The meticulous rationalist had claimed it was impossible to know experience with certainty since by definition the preconscious is unavailable to analysis. The Romantics knew: it came from a divine web of nature that spoke infallible truth and inerrant goodness to the receptive heart. It is impossible to convey how thoroughly Romanticism penetrated Western cultures in the first half of the nineteenth century, at least in part because we have seen so many other popular waves wash over narrative and aesthetic arts since. This first one moved societies not only in Europe and America but also in those lands that imperialism and colonialism dominated. Romanticism was attractive and exhilarating, at least at first, but it also grated against the beleaguered power of religious authority, producing hybrid responses like Methodism and Pietism. And its exhilarating revelations seemed far more suited for youth than maturity, so as the nineteenth century grew old, Romanticism aged with it, its novelty fading and its deficiencies revealed.

It was simply overwhelmed by the scientific miracles of the nineteenth century. Ordinary life was transformed by discovery and technology more in the nineteenth century than in all earlier ones combined. Scientific progress meant modernist axioms. No “Romantic science” was conceivable, and the tensions ordinary persons felt from the tug of two opposing ways of knowing truth and directing preference scarred Victorian life in ways we have inherited but have failed to grasp. By the second half of the nineteenth century, the failings of institutional authority were a preoccupation of social critics, Romanticism had lost its youthful vigor, and science was effacing old ways of living and creating new ones in an erratic response to its inventions. But that produced the greatest crisis of modernism, one we have yet to face. The growing professionalization of science based on the model of German academia continued to distance empirical thought from common sense, challenging its capacity for moral prescription. Granted, it produced technical marvels and astonishing discoveries peeling back the mysteries of reality, prompting continued optimism for progress. That left only one question for empiricism to answer: what is progress?

It required an answer, but how was one to be found, warranted, and publicly agreed to? It is a question of goodness (see “What Do We Mean by ‘Good’?”). If asked with even a modicum of insistence, it will eventually lead to questions of morality (see “What Do We Mean by ‘Morality’?”). Answers were available, of course, but they were proving both theoretically and pragmatically challenging to justify. The great consensual answer to questions of goodness had always been religious authority, but modernism had been eroding its axioms since the Reformation, and by the end of the nineteenth century, new modes of life, scientific progress, and individual autonomy were submerging its moral power. Science was assaulting common sense, the ordinary reasoning we apply to questions of preference. This left room for the human sciences to fill the moral space with deterministic prescriptions for social perfection. Psychology would attempt that task for individuals, but it faced the same axiomatic contradiction all human sciences did even as they grew in prestige. Modernism’s axiom of individual agency directly contradicts the possibility of predicting what individuals will prefer, and implicit in that axiom is the moral value that persons ought to act as rational and moral agents.

The twentieth century would be torn by the struggle between the human sciences and individual agency just as the seventeenth century had been torn by the conflict between agency and traditional authority. The slow collapse of all institutional authority in the twentieth century has been the unwelcome outcome of that struggle. But that collapse made autonomy itself the battleground between human science’s desire for scientific status and individuals’ desire to direct their own preferences by their own standards of value. The perceptual wall formed the conceptual field of battle: postmodernism tugged toward a radical individual agency creating personal standards of value, while a modernist position rooted in science upheld universal reasoning alongside a contingent determinism that denied the freedom required to exercise it.

Influenced by Romantic depictions of pantheistic intuitions, writers in the last half of the twentieth century found themselves arguing for personal values floating up from some preconscious assembly of sense-data. As is common when big ideas percolate through many minds, this Kantian view of a constructed reality blended with an insistence that the perceptual wall must be impermeable: that the phenomenal awareness of the senses can never confirm the ontological reality they depict, and if experiences are altered as we become aware of them, we cannot know it because we have no access to a reality free of perception. The Romantics had hoped to create space for divine revelation in that mysterious operation, but empirical science was pushing divine will out of a public picture of material reality. What was left was eventually a composite of unconscious mental creations inside the perceptual wall, guiding a sacrosanct personal agency in its preferences (see “What Is the Virtual Circle?”). Empiricism had proved its power to find some kinds of truth, and that power implied a challenge to every other way of knowing, but after the horrors of World War I, what remained to ground goodness besides utility, or morality besides private preference exercised within the perceptual wall (see “The Limits of Empirical Science”)? The options so clearly cancelled each other out that personal preference was reduced to materialist consumerism and complete pragmatism (see “The Problem of Moral Pragmatism”).

It was a long and twisting road to create modernism, and it was an even more tortuous one to discredit it. For one thing, there is always science, clearing the field, calmly presenting its complex analysis of material reality. By the twentieth century, it had outgrown its adolescent pretensions. Not only could it not prescribe goodness; its operation required that the normal fusion of truth and goodness considerations characteristic of belief be deferred or denied by an act of severance that concerns itself with discovering truth without regard to any consideration of preference. The twentieth century was one long attempt to bridge the gap the act of severance creates between what we think we know and how we employ it to find the goods we value (see “Truth and Goodness Do a Dance”). This effort is complicated by a reversion to belief as an emulsion of truth and goodness, still valued by postmodernists as a residual private means to moral goodness and by remnant religious authority as a means to resuscitate public moral commitment. Despite their common faith in belief, we cannot expect consensus in public values from this effort because of core disagreements about the validity of private moral agency versus institutional authority. Though modernism has seemingly failed to enshrine its axiom of universal reasoning in public morality, the triumph of its other axiom, individual agency, seems complete. The need for public moral consensus will not be denied, though, and at this point neither premodern nor postmodern axioms of commitment will succeed, because they privilege belief; nor will modernism’s ad hoc creation of contractarian government, because it is founded on a moral neutrality that majoritarian will must expand to fill. That will is at this point torn between conceptions of private belief arguing for total individual autonomy and for total surrender to authority. That is not sustainable. The crying need for public moral consensus is moving toward crisis in the twenty-first century as human science is forced to surrender the illusion that it can prescribe social goods. If persons are determined by genetics or environment, they will never admit to it (see “The Determinism Problem”).

Three developments will direct the resolution of this emerging crisis. First, a composite popular culture is even now erasing cultural isolation as it challenges the last remnants of religious authority around the world. Its triumph is inevitable, but its values of themselves will not be moral (see “Cultural Consensus”). Second, two developments, true science’s work on genetics and on artificial intelligence, will revive conceptions of universal rational agency after its long eclipse caused by human science’s search for respectability, and this will help end postmodernism as a viable moral theory, abetted by postmodernism’s own inconsistencies (see “Postmodernism’s Unsettling Disagreements”). That work will challenge and defeat postmodern notions of cultural identity, though it will merely reveal what is universal to human preference rather than prescribe the means to improve it. A moral center is still waiting to be consensually embraced by an emerging world culture. Finally, contractarian principles of majoritarian rule will over time be replaced with a greatly clarified grasp of what individuals require to flourish. This will be combined with the material abundance needed to eliminate the scarcity that has produced so much classism and racism, exploitation and envy, libertarian excess, and egalitarian pipedreams. This movement is already underway under the broad heading of a search for human rights (see “Functional Natural Law and the Legality of Human Rights”).

Preferential freedom will never be easy, and cultural consensus will always prove challenging (see “Our Freedom Fetish”). The historical legacy we have inherited is a perverse one. We all sense that, just as we sense the moral vacuum that contentious voices now seek to fill. Our public moral progress depends on reasoning together, and doing that requires a common notion of reasoning well. Our history has obscured that, but it is a responsibility we cannot continue to defer (see “The Tyranny of Rationality”).
