When is Civil Disobedience Justified?

Two fairly mundane recent news articles have prompted me to examine the problem of warranting civil disobedience over the force of law.

The first has become increasingly familiar. According to the local newspaper, a twenty-nine-year-old man was arrested on multiple counts for threatening a municipal water company employee who wished to read his meter. The suspect was “wearing full body armor with a knife strapped to his chest, a Kevlar helmet with a face mask and armed with pepper spray.” After a brief standoff he was physically subdued by a law officer and arrested. Questioning revealed he was a member of the Sovereign Citizens movement and had had multiple previous scrapes with the law. Followers of the movement think “they can decide which laws to obey and which to ignore,” according to the Southern Poverty Law Center. He also told arresting officers he hoped to “go to Pakistan in two years so he could fight and maybe die for something ‘that was good and true.’” But he had been found psychologically unfit for military service by the U.S. government.

The second is more trivial, but perhaps reveals more clearly the scope of the issue I intend to examine. It is against the law in our municipality to set off fireworks, a law more honored in the breach than the observance over the years. But our neighborhood association decided this New Year’s Eve to enforce a more scrupulous adherence to the ban by hiring off-duty police officers to ticket offenders. This effort prompted lively debate on our neighborhood’s Facebook page between those supporting these efforts and those opposed. The president of our association framed the law-and-order side. “Last I checked, we are a country of laws, not of men…. It’s amazing to me that some would try to pick and choose which laws they would abide by…No responsible person should encourage illegal activity….” The opposition argued two points in response. The first was a defense of cultural tradition. “I try to live like it was the good old days. I have no bad intentions.” A more spirited defense engaged the president’s point directly. “We are a nation of laws. But we are a nation of individual freedoms and rights too.” To which his sparring partner replied, “What about respecting the rights of others to have peace and quiet? Lots of folks find fireworks offensive and a safety hazard.” Thereafter, in what has become typical of social media, the argument degenerated. The disputants clearly had different conceptions of what constitutes a right.

Both of these examples illustrate a real confusion about law and citizenship, a tangle that only gets knotted tighter by considerations of admirable civil disobedience versus civic responsibility. In any conflict between rights and law, where are the boundaries and where is the sweet spot that balances the interests of some against the interests of others? More to the point, are interests synonymous with rights? It is troubling to suppose that the ideal is always in the middle– that some fulcrum exists that finds the balance point– for in a totalitarian state that point might be far too close to the interests of the state while in a libertarian dream world, it might celebrate all the chaos of a gold rush boomtown. The notion that we can’t pick and choose which laws to obey rings true, yet we erect monuments to our Founding Fathers, whose picking and choosing spawned a new pattern demanding obedience. Ditto for Gandhi, Walesa, Mandela, and King. So we certainly cannot say what parents always say to their children: a good citizen always obeys the law.

In my last entry I examined the alternative justifications for legal power, rejecting divine command and legal positivism on the grounds that these warrants provide no means of ensuring minority rights or of respecting civil disobedience. That cannot be said of social contract theory. As the U.S. Constitution exemplifies, defined rights can be specified in written or tradition-bound compacts, though these rights would of necessity be civil rather than human rights and thus open to a charge of cultural relativism that would do little to arbitrate disagreements between polities that define rights differently. We see that kind of conflict frequently on today’s international scene. But this problem is not by itself sufficient to condemn a social contract that builds in protections for minorities and allows mechanisms for civil disobedience. Still, we might ask of the framers of such a constitution the same question raised by the two examples cited earlier: what properly limits state power? It is good to have a Bill of Rights, but why these and not others, and why ten and not a hundred? It is all very fine to sanctify free speech as a guaranteed civil right, but why should it be, other than that our wise Founding Fathers decided to add it to our compact of government (but only after a bruising debate and with serious dissent as revealed in The Federalist Papers)? The kernel of social contract is that government is a purely conventional invention resulting from a voluntary association of like-minded individuals in pursuit of common interests, so it seems reasonable that had their interests been different, their contract might have taken an entirely different form. Viewed in this light, there is nothing sacred about the rights enshrined in the U.S. charter, and so long as we embrace the fiction of the social contract as justification for government, we’ll have to look elsewhere for some foundation for our rights and some reason to violate positive law when we think they have been abrogated (for a fuller exposition of this issue, please see my blog entry of November 13, 2013, “Where Do Rights Originate?”).

It is, of course, quite possible to take seriously the notion that we are not at all bound to law except by our own consent, as the examples cited at the beginning of this essay argue. This, after all, underlies the myth of the original social contract– and recall that Locke also argued that each of us renegotiates that contract as soon as we decide to accept the law’s dictates– so what was good for our ancestors should be good enough for us. But that notion quickly produces the dyspeptic realization that any government is an impingement on our freedom. This certainly seems the position of the originator of the term “civil disobedience.” Thoreau’s famous line about that government being best that governs least vivifies the libertarian position that any path out of the state of nature involves a reluctant sacrifice of freedom in exchange for security. I think the widespread acceptance of this view explains a big chunk of today’s resentment of government. That resentment seems an inevitable outcome of taking social contract seriously, as does the brute libertarianism that follows. But a careful reading of Civil Disobedience raises doubts about whether civil libertarians should make Thoreau their patron saint. And if we reject the warrant Thoreau used to derive his view of government, what befalls the argument for civil disobedience that flows from it?

It is certainly true that his resistance to government constitutes an embrace of the state of nature, one literally demonstrated by his years at Walden Pond. But I argue that the state of nature Thoreau embraces is as far from that envisioned by the framers of the theory as can be conceived. So though Thoreau stands with those who resist government, his argument is less against government per se than against government justified by social contract. Libertarians think he desires anarchy. He does not. Writing in his homebuilt cabin in the Massachusetts winter of 1847, Thoreau was not merely objecting to government overreach; he was objecting to government by majority, by consensus, by contract, and by convention. He argues, “Any man more right than his neighbors constitutes a majority of one already.” Recall that in his original framing of the state of nature, Hobbes had dismissed the notion of “right” as a subjective delusion. Everyone acts to secure what he thinks right, said Hobbes, and it is in this pursuit that he inevitably comes into conflict with his neighbor, the resulting war of all against all producing the need for the leviathan of government power. Thoreau is clearly not imagining that kind of natural condition, nor does he envision that kind of barbaric outcome. He is adamant in denying majoritarian rule: “[The state] can have no pure right over my person and property but what I concede to it.” He can deny it only because he imagines the demands of social order to be far more dangerous than disagreements among individuals. Why should he have come to a conclusion so different from the social contract theorists who inspired our Founding Fathers?

The answer can be found in the sixty-one years separating them from him. Specifically, two moral revolutions changed the theoretical field, the first advanced by Immanuel Kant and the second by the Romantic revolution he inspired.

It was Kant who argued for moral autonomy as the antithesis and antidote to the tyranny of authority that had dominated political theory during the millennia of divine command. In Critique of Practical Reason, he advanced the argument that each of us constitutes a moral universe whose laws we must actively engage and accept and for which we may be held responsible. Even should we attempt to abdicate that responsibility and assent to authority, he reminds us that it is our own reason that judges the authority worthy of our respect. For Kant, it is this preferential freedom that marks us as moral agents worthy of radical respect. We see the echo of Kant’s argument in Thoreau’s.

But we see something else as well: Thoreau’s confidence in his own moral rectitude was more absolute than Kant, with all his admiration for practical reasoning, could have imagined. Throughout Thoreau’s writings, and especially in Walden and Civil Disobedience, we see a disdain for convention coupled with an almost strutting confidence in his own moral superiority. The former fuels Thoreau’s contempt for social contract theory, which at its center regards political arrangements as conveniences invented to procure the desires of the majority. Thoreau even goes so far as explicitly to condemn William Paley’s The Principles of Moral and Political Philosophy (1785) precisely because it, like social contract theory, bases government on grounds of practical convenience rather than justice (though Paley also skewers social contract theory for its other falsities, among them the myth of the state of nature). But echoing through Thoreau’s writing is the deeper appeal to justice, to right, and to divine providence, combined with an absolute confidence that he, and not the mass of men living in quiet desperation, might decide which laws satisfy justice and which offend it. What grounds such a belief?

It grows from the same confidence that sees anarchy as avoidable and wisdom as universally available if not universally embraced. Thoreau could reject the contractarian argument that conflict among citizens is inevitable for the same reason he could reject Kant’s careful constructions about the limits of reason: Thoreau embraced the Romantic pantheism that was flooding Western culture in the first half of the nineteenth century, and he could assert with confidence, “What I think is right is right; the heart’s emphasis is always right.” The receptive heart in tune with nature could hardly avoid knowing the good and adhering to it as well. With such ready access to indubitability, who could blame him for his disgust with those who would put self-interest above justice? To determine what justice demands, one must ignore compacts and laws in favor of a higher and more intuitive knowledge that the god-in-nature provides, so long as one can first clear away the delusionary and conflicting propaganda advanced by the majority. This is Thoreau’s answer to the thorny question of when one is entitled to practice civil disobedience.

Seen in the cynical light of a postmodern age, Thoreau’s Romantic warrant seems quaint and pitiable. Facing it prompts a Tooth Fairy kind of moment. If we can’t trust god-in-nature to reveal to our intuitions when laws are unjust and we also can’t trust a social contract rooted in a fiction, founded on convention, and moored to no fixed conception of rights, and if positivist and divine command theory make no provision whatsoever for retention of rights, what can we trust to litigate the defensibility of any act of civil disobedience?

Only natural rights theory provides a solid grounding for answering such a question because it is the only justification for law that begins with individual rights based on needs and sees civil participation as one such need. Rights theory is the political wing of a much more thorough ethical structure whose morality is guided by the development of the habitual disposition to satisfy needs (a broader outline is provided by my blog entry of November 06, 2013, “A Virtue Ethics Primer”). Needs, and the rights that entitle us to their satisfaction, are relatively simple to identify by interrogating what we choose to call a need: is the achievement of this desire necessary for flourishing? If we answer affirmatively, we then ask further: is this desire instrumental to some other and more fundamental desire? Does this more basic desire meet the criteria of all human needs: universal and transcultural (both temporally and geographically)? Does the satisfaction of this desire make impossible the satisfaction of some other need? One of the central tenets of virtue ethics is that the requirements for a fully human life are mutually supportive rather than in conflict, accumulating to the summum bonum, the totality of goods necessary for living a complete human life. While these needs cannot all be satisfied simultaneously, and indeed for much of human history may not have been capable of satisfaction at all– consider sanitary or medical needs throughout most of history– they remain needs in each generation until finally met.

The self-questioning that accompanies our attempt to recognize and fulfill needs anchors the intellectual virtue component of the moral system. It is no small thing to negotiate our desires since, as choice-making machines, persons exist in a perpetual pursuit of the good, regardless of how they might define it. Plato was certainly correct when he said we always choose what we consider good at the moment of choosing; otherwise, we would choose an alternative. Virtue ethics makes a point of identifying the truly good so that our desires might reflect our actual needs rather than degenerate into a mad scramble of conflicting wants or manufactured materialist desires.

By now it should be obvious that natural rights theory would champion human needs as human rights: we are entitled to try to meet our needs. They are our due and their satisfaction our moral duty. And since justice may be defined as “to each her due,” we are entitled to a political system that facilitates rather than impedes their satisfaction. This explains why a social contract view of political organization could not be more wrong. Government is neither conventional nor built from expedience. It is not an invention at all, but has always formed a buttress to human happiness. It is a crucial component of the quest for persons’ satisfaction of their needs, operating impartially to arbitrate conflicts among citizens’ desires and actively meeting those that citizens can satisfy only through collective action. Government’s sole function is to deliver justice to each citizen. A perfectly moral government would manage its retributive and distributive roles so assiduously that any failure to meet any citizen’s needs, meaning any failure to secure rights, would be the result of that citizen’s own moral failings. Anything less would be unjust. Anything different would justify civil disobedience by individuals or minorities.

There is more to it, of course. It goes without saying that no government satisfies such a lofty function all that well, whether from its own incompetencies or from the failure of citizens either to understand or to accede to its purpose. It faces thorny issues of distributive justice for citizens unable to satisfy their own needs (for more on this, please see my blog entry of December 3, 2013, “The Riddle of Equality”). It must balance the just demands of liberty and equality (for more on this issue, please see “Our Freedom Fetish,” blog entry of November 20, 2013). And even when it is able to handle such delicacies, it finds itself trying to identify and balance citizens’ real needs in such a way that they both get what they need and are satisfied with the process.

The two examples that began this essay typify aspects of the task. The Sovereign Citizen who viewed his liberty as outweighing justice and who displayed a Thoreauvian confidence in his own judgment exemplifies an unjust act of civil disobedience. His rights were untrammeled by the governmental agents he opposed, and even had they been violated, he could not have justified threatening the life of another person to redress so slender a transgression. The fireworks issue is a bit more interesting since the right to recreation claimed by the cherry bombers in this case opposes the right to a night’s rest claimed by their opponents. But since neither party would suffer much by such a slight deprivation (a few hours’ loss of sleep versus the loss of a little recreational excitement), it seems fair to say that this instance hardly rises to the level of a civil disobedience issue. Is it therefore safe to say that developing the habit of respect for law should trump the tradition of New Year’s Eve fireworks? Like so many social media squabbles, this one seems unworthy of arousing irascibility, much less civil unrest. Applying the needs-based justification of natural rights theory alerts us not only to when civil disobedience is a moral duty but also to when– and why– it is unwarranted.


Preliminary Thoughts on Civil Disobedience: Natural Rights Issues

In a previous post (“Where Do Rights Originate?” [November 13, 2013]) I examined the four traditional justifications for the law, finding unambiguous support for rights only in the warrant that recognizes them as morally and chronologically antecedent to the demands of order. The natural rights theory is a unique view of political organization because it ties rights to justice, which it defines in the traditional manner as “to each her due.” It rejects the claims of divine command and legal positivism that provide no means to justify civil disobedience. We see echoes of these positions in the admonition to obey legal authority unequivocally because it represents the god’s will or settled law. But such inflexibility provides no opportunity for individual conscience or minority claims against that very law. Some defenders of the fourth and most popular justification for legality, social contract theory, argue that it makes space for rights. It is possible that constitutions and compacts might institutionalize opportunities for individual and minority rights, but nothing in the nature of social contract theory makes that possibility more than one option among many. The consequence is that social contractarians view the morality of civil disobedience as an issue of critical mass: a movement gains legitimacy when it persuades sufficient numbers of citizens to support it and thereby gains the attention of the majority. In this it echoes the moral system of Utilitarianism, unsurprising in that theories of utility arose contemporaneously with theories of social contract. But isn’t this notion of majoritarian warrant incomplete? Can we really defend the notion that Martin Luther King’s appeal to civil rights only achieved legitimacy after the Civil Rights Act of 1964 was passed? If not then, when? On what grounds did King’s movement attract its first followers? Are we to think King’s appeals had no moral legitimacy when he spoke to one hundred parishioners of the Ebenezer Baptist Church in 1948 and that they gained it only when he spoke to a quarter of a million in Washington in 1963? If numbers legitimize legal power, then how could the “rights” of any minority be protected, much less individual dissent?

It seems clear that any notion of civil disobedience must first delineate the rights whose violation justifies acts of resistance. The natural rights theory argues that civil disobedience is appropriate when two conditions exist. First, the rights of any individual have been intentionally violated by civil functionaries, regardless of positive law. Second, legal recourse to redress the grievance has been exhausted. The first condition eliminates the kind of offenses that devolve from external circumstance rather than legal power: the lack of resources that produces famine, for instance. The second links acts of civil resistance to the sophistication of legal processes designed to reveal and ameliorate violations of rights. So a mature democracy might have built-in mechanisms of recall, referendum, and initiative that allow citizens to challenge governmental power, mechanisms that less functional governments might omit, resulting in a delay of protest in the former case and an acceleration of what positivists call criminal action in the latter. I have in mind the contrast between the Selma to Montgomery march led by Dr. King in the U.S. in 1965 and the attacks by Mr. Mandela’s followers on the South African police beginning in 1976. But even if we apply these two conditions, ambiguities abound. It is no small thing to break the law.

Let us eliminate the confusion resulting from justifications for law that make no provision for the existence of rights in themselves and so discard divine command, social contract, and legal positivism in favor of natural rights theory. Still, even when we begin our justification for law by enshrining rights as foundational, where do we place the moral boundaries of civil disobedience in pursuit of rights?

As far as I have been able to discover, the answer to that question depends on our answer to another that defines the origin and therefore the extent of rights. And this plants us in a swampy history. The progression of rights theory has been anything but linear, which helps explain the wide variations in delineating them.

It is ironic that we can credit Aristotle’s Nicomachean Ethics both with establishing the connections that root natural rights theory and with illustrating its structural challenges, of which I will focus on four.

As a political philosophy, natural rights theory nests itself in a lovely matrix of ontological and epistemological structure (some of that structure is outlined in blog entries of November 6, 2013, “A Virtue Ethics Primer,” and October 15, 2013, “What Do We Mean by Good?”). Viewed in that larger context, natural rights theory is but the political wing of a broader moral theory, virtue ethics, a model deeply rooted in experience. But the same experiential basis that grounds this structure in reality makes possible errors in the judgments evaluating that reality. Aristotle is notorious for thinking slavery, misogyny, and aristocracy normative simply because they existed in every society he observed. It is tempting to regard what is universally experienced as morally proper, which only goes to show that the balance between what we do and what we should do is never easy to pin down. A moral outlook that privileges our “natural” inclinations over moral duty tends toward subjectivism and emotivism, whereas one that ignores our proclivities leads us into a sterile duty ethic. The trick is to acknowledge our desires without kowtowing to them and in that act of discrimination find a convincing motivation for self-improvement. Natural rights theory is unique among the four warrants for law mentioned earlier in that it is grounded neither in purely conventional arrangements nor in decrees justified by absolute authority but is instead built up from the aggregate of real persons’ actual experience. So how could Aristotle have regarded practices we now regard as abhorrent, such as slavery and oppression of women, as morally acceptable merely because they characterized the roughly 158 states whose constitutions he examined in composing his moral theory? More to the point, if the universality of a practice is an indicator that it is “natural,” on what grounds can it be condemned as immoral? This is the first issue natural rights theory has to resolve. It does so by beginning not with the practice but with reasoning about what need it serves to satisfy. For instance, education of young people to prepare them for productive work is a universal practice that serves vital human needs. No justification can exist to deny that education to any person anywhere. Such denial would constitute a violation of rights. To deprive a person of this need because of gender, class, race, or religion would be tantamount to classifying that person as less than human.

A second problem for rights theory involves variations in practices between cultures. Rights devolve from needs, and needs reveal themselves in particular social arrangements that vary by time and place. The problem then becomes how to tease out universal needs from the varied cultural practices that satisfy them. Concerning the need for education, we see this issue playing out today in controversies involving child labor, particularly in traditional agrarian cultures. For generations, children assisted their families in farm work both as a contribution to their welfare and as a form of practical education. Is child labor of this type exploitative? Answering this question is less difficult than it might seem if we embrace an ethic that grounds a common moral framework firmly in universal reason applied to the varied cultural means of satisfying needs. If the workload is tailored to the child’s age and abilities, if this kind of traditional education will still apply to her culture once she is grown, and if her other needs are also being met, such an education can indeed be regarded as morally appropriate and as one culture’s means of satisfying a universal human need. Consider all the ways children have been and are now educated around the world. Respect for cultural diversity is certainly warranted so long as customs prove innocuous. For example, no people’s cuisine is superior to another’s so long as each satisfies human nutritional needs. Beginning with such universal needs provides the psychological grounding in reality that moral systems require, while framing their satisfaction as moral duty elevates our preferential freedom beyond simple and private preference (for more on this balancing act, please see “The Problem of Moral Pragmatism,” blog entry of March 19, 2014).

A third problem for the moral theorist seeking to ground ideals in observation of experience involves not variations between cultures but rather variations within them. Cultural arrangements are adapted to the natural differences of the individuals who enter into them, introducing further variability into practical moral reasoning. Aristotle regarded normalcy as a term of statistical rather than moral judgment, as his famous observations on left-handedness demonstrate (his judgment has been adapted in the current climate to champion equality for the LGBT community). His astute comment that anyone living outside of society must be either god or beast acknowledges the range of social engagement his observations revealed as “normal.” It seems self-evidently wrong to tell a person who is perfectly content living a solitary life that she lacks an essential component of happiness, yet this is what virtue ethics insists upon. This counterintuitive judgment is based on the summum bonum conception of human flourishing that refuses to define the goal of human existence in terms of contentment. On the other hand, virtue ethics also recognizes the myriad ways individuals can satisfy their needs, which opens possibilities for cultural diversity in the application of rights theory denied to other theories of law.

The three issues for rights theory mentioned above– universality, cultural difference, and individual difference– all require a fine-tuned examination of needs and the many ways they can be met. This examination poses a final, macroscopic problem. As the failures of the human sciences of psychology and sociology have more recently revealed (please see blog entry of February 9, 2014, “The Calamity of the Human Sciences”), we could hardly consider even Aristotle’s careful observations of human society to rise to the level of scientific validity. For all its valuation of reason as the defining characteristic of human beings, rights theory can make no appeal to our culture’s most esteemed warrant for truth claims. It is not scientific. No moral theory can be, for morality concerns itself with questions of goodness that no empirical methodology can answer. Without recourse to scientific verification, any moral theory is hobbled in this age of science worship, but even leaving status issues out of it, rights theory can at best appeal to a rational warrant to support its claims, and Aristotle’s errors on that count should caution us to consider it warranted at best by only a preponderance of the evidence. Surely, its three competitors could claim no more and, as mentioned, can offer no integral defense of civil disobedience.

These difficulties for natural rights theory were compounded by historical developments that modified its rational warrant. Fleshed out by the Epicureans and the Stoics in the classical age, the theory shed some of Aristotle’s caste worship as it explicitly tied itself to universal human reason, inclining toward history’s first intrinsically democratic moral structure. The great Stoic Epictetus was a freed slave who saw reason as an inescapable basis for moral and political participation. Unlike Christianity, which saw acceptance of morality as a conscious choice leading to damnation or salvation, Stoicism regarded rational moral valuation as something all persons must participate in, either well or badly. It is unsurprising that Christianity embraced elements of Stoicism into its dogma, but in doing so the warrant for moral choosing devolved from reason to authority (for more on the relative strengths of correspondence warrants for truth and goodness claims, please see my blog entry of October 7, 2013, “Better, Blended Systems of Knowledge”). Late medieval theorists sought to reconcile authority and reason, but without success, for their modes of warrant are incompatible (for reasons why, please see the blog entry of September 11, 2013, “Religion and Truth”). It still seems jarring to read Aquinas’s towering logical arguments built on the shaky foundations of the authority of the Church fathers (the oil-and-water conflict of authority and reason is more fully explored in my blog entry of September 18, 2013, “The Fragility of Religious Authority”). This is not to champion reason alone as a superior warrant. As Descartes would later argue, pure reason forms an indubitable warrant for truth and goodness claims, but we can thank David Hume for demonstrating that such claims devoid of experience must be sterile. And Aristotle had already demonstrated the fallibility of reason applied to experience, so this hybrid justification would prove no match for divine authority’s claims to certainty. Thus divine command swept away the Stoics’ attachment to reasoned experience as a foundation for ethics and law. More’s the pity.

The crisis of justification that changed everything was the Protestant Reformation. I’ve written previously of it as a kind of nuclear winter of warrant, a catastrophe whose pathology tainted every truth, goodness, and beauty claim previously warranted by the authority of church and God (for more on this, please see my blog entry for January 26, 2014, “Premodern Authority”). It should be unsurprising that the first revival of rights theory should be tinged with divine command. John Lilburne and other members of the Roundhead army made claims for natural rights rooted in Biblical authority as early as 1641. But as was so typical during even this last horrific chapter of Reformation history, the tangle of authority and reason as warrant could not yet be made straight, and Lilburne’s appeal, taken up by the Levellers, subsided into the general chaos of the English interregnum.

It was resurrected, after a fashion, by the social contract theorists who finally succeeded in separating political justification from divine command beginning in the 1660s. But in seeking a myth to derive political power from the people, thinkers like Hobbes, Locke, and Rousseau removed their faith from the Bible and placed it in a spurious state of nature in which “rights” were taken to mean “license.” For these contractarians, our natural freedom gave each of us unlimited rights over ourselves and our property, and everyone else’s too. This trampling of definition still confuses contemporary understandings of rights and results in the mistaken notion that rights can be conferred by constitutions and compacts and that they can be both specified and limited by such documents and traditions. It also introduced the lamentable and erroneous conclusion that the establishment of government of itself requires the surrender of natural rights in favor of protection, the degree of surrender to be specified by contract. This notion compares most unfavorably to the natural rights justification that sees political organization as a natural means of satisfying needs and so as a guarantor of rather than a threat to our rights. Jefferson only perpetuated the contractarian error. He thought rights derived from our earliest political associations and that their legitimacy stemmed from their traditional and consensual nature. This view is dangerous, for it gives a culture an unlimited capacity to grant or revoke rights depending on its constituent values, a view entirely consistent with the social contract theory Jefferson embraced but far removed from the classical Stoic position of inviolable natural rights rooted in universal and individual reason. In his view of rights, Jefferson seems more than usually confused, for he bemoaned the overthrow of Anglo-Saxon tribal customs by the Normans as a violation of natural rights. Yet the customs that led to the Magna Carta were as much Norman as Saxon; neither tradition recognized natural rights as universal, and, being blended, they could hardly have rooted rights in tribal custom. It need not be added that the author of the Declaration of Independence and the “inventor” of the only three “rights” most Americans can name did not recognize them as universal either, despite his claim that they applied to “all men,” or at least three-fifths of all men (for more on the relationship between social contract theory and rights, please see “Our Freedom Fetish,” November 20, 2013).

Add these historical confusions to the definitional issues discussed earlier, and you can easily see why delineating rights may seem a pretty squishy enterprise. Eleanor Roosevelt gave it a go after World War II, and indeed, the United Nations’ Universal Declaration of Human Rights of 1948 stands as a monument to the articulation and enumeration of human rights. It is far from perfect, as must be all things assembled by a committee, but it succeeds at deriving rights from needs and in seeing the hallmark of needs as universality, perennially present regardless of culture or epoch. Granted, it too often mistakes instrumental goods for the moral goods they are used to satisfy: e.g., the demand for vacation time is an instrumental need that reflects the moral need for adequate time for recreation, a need better served by limiting working hours for the world’s workers. But I daresay that if persons think about what they are saying when they defend “human rights,” they are probably thinking of this document.

Or maybe they aren’t. Postmodern political theory invokes a conventional source of rights heavily indebted to the contractarian framework yet committed to a positivist conception of law as morally neutral and justified only by power. It is not pleasant to watch persons holding this position attempt to square cultural practice with human rights: should they defend the deeply traditional “mini-narratives” of honor killings and ritual female genital mutilation on anti-imperialist grounds or the universal rights of feminism and freedom of religion that challenge cultural tradition? While their distrust of established power relationships might move them to wholesale approval of civil disobedience, postmodernists must also admit that not all flouting of law is equally admirable.

Finally, one more differentiation of terms needs to be added. Civil rights are those enshrined in law and reflect a positivist specificity peculiar to each jurisdiction. Their connection to human rights is clearly a derivative one. The best definition of justice is “to each her due.” Since the sole role of government is to deliver justice to its citizens, the determination of what is due is entirely dependent on the human rights of the citizenry; these rights, translated into positive law dictating retributive and distributive justice, guarantee civil rights. Natural rights theory holds as foundational the moral principle that such rights cannot be abrogated, and that depriving any citizen of these rights violates justice and thus the purpose of government. This radical respect both defines the grounds for just civil disobedience and legitimizes individuals and minorities in actions to defend or procure their rights.


One Postmodern Sentence

The connections intellectuals make among the mooring notions of our age always surprise me; they demonstrate the malleability of the virtual circle we build from experience and fallible reasoning. It sometimes seems as though almost any claim may be advanced as reasonable. And that is an indictment of logic. Or it might be until one uses that same logic a bit more rigorously. It bears repeating that a logical expression is logical all the way down, so to speak. It does not self-destruct under the close and patient scrutiny that calls a specious assertion into doubt. This healthy distrust of what Plato called the love of our own opinions was codified by Karl Popper as the principle of falsifiability. What Popper advised in respect to natural science should be broadened into a more generalized strategy for testing our judgments. We are all eager to fall in love with our own point of view, and for that reason we should skeptically examine it.

This conclusion was reinforced by a perusal of the introduction of David Bentley Hart’s “The Beauty of the Infinite: The Aesthetics of Christian Truth”. This is a serious scholarly work. Hart earned his master’s degree from Cambridge and his PhD from the University of Virginia and has won plaudits for his published works defending Christianity from atheism. I don’t intend to confront the book’s thesis. I can’t, having been defeated by the introduction alone. What stymied my effort was the writer’s presentation, which seemed composed of a fusillade of unsupported assertions so dense, vague, and twisted that no reasoned response could follow my effort to mine the text for comprehensibility. I find Hart’s sentences to be something other than discursive prose: their density, disregard for clarity, and floating referents place them closer to journaling than argument. It seems fair to mention at this point that Hart should expect nothing more, for the precondition of his theoretical position is a rejection of the very notion of disinterested rational appraisal. I cannot assert this all that strongly, for Hart assumes the assent he presumably pursues without resorting to anything as prosaic as warrant. It is unsurprising that Hart, a postmodernist to be sure, would advance claims supporting that peculiar position, but the question inevitably arises of why he bothers to take so much paper and ink and time to make it. Finding his core argument is rather like attempting to find the best restaurant meal in East Timor: one has to negotiate unfamiliar terms, confusing directions, and unsubstantiated opinion in equal measure. It may be that his introduction covers ground already so familiar to his presumed reader that Hart feels no obligation to justify or even do more than sketch out his frame of reference. But surely this does a disservice to both the skeptic and the generalist. Postmodernism is hardly a fait accompli, and Hart must appreciate that faith is even less of one, and defensible aesthetic theories least of all. I finished his introduction with only a sense of Hart’s self-satisfaction to reward me for my effort. And that is a shame, for one expects an analysis to produce either assent or rejection, but the vapidity of his language allows the reader neither option. To disengage from Hart’s fundamentally postmodern assertions is to reject notions that float through this culture, but to assent is equally problematic, for one can hardly know what her assent commits her to. Hart, a postmodern theologian, advances positions so far beyond his justifications that one might be signing away her core convictions by accepting on faith any one of his conjectures.

So what can be done with a work whose terms are too squishy to define and whose “arguments” cannot be bothered with correspondence justifications that might be examined by his reader? How can one approach an analysis whose central axiom is that logical justifications are merely rhetorical devices used to exert power over the reader? Why his argument, such as it is, should escape the same charge is just one more mystery. What can we make of this gelatinous glob of aesthetics and religion, illuminated only by the black light of postmodern theory? How can we know before launching into chapter one that engagement with his ideas might prove worth the enormous effort his style demands? Or how can we throw down the book in disgust simply because it makes demands of us that seem absurd? Perhaps the dross is worth the gold.

To illustrate as well as resolve this problem, one that characterizes postmodern thought in general, I will take a single sentence of Hart’s introduction as an object of serious analysis, with the goal both of framing the difficulty of more extensive assessment and of exposing the suppositions that underlie what can only be called the effluvium of words that comprises his effort. I ask you to accept that this one sentence is not exceptional but rather entirely indicative of Hart’s presentation. I chose it because it is somewhat independent of what precedes and follows it. Please trust that nothing in its vicinity adds to its intelligibility. I also argue that this is what we might call a postmodern sentence in that, in terms of structure and argument, it is entirely representative of a mode of thought that disdains but fails to replace reason and discursive language in what we can only assume is a rhetorical exercise. At any rate, that may be its intent. I really am not sure.

So here it is.

The great project of “modernity” (the search for comprehensive metanarratives and epistemological foundations by way of a neutral and unaided rationality, available to all reflective intellects, and independent of cultural and linguistic conditions) has surely foundered; “reason” cannot inhabit language (and it certainly has no other home) without falling subject to an indefinite deferral of meaning, a dissemination of signification, a play of nonsense and absence, such that it subsists always in its own aporias, suppression of sense, contradictions, and slippages; and “reason” cannot embody itself in history without at once becoming irrecoverably lost in the labyrinth of time’s interminable contingencies (certainly philosophy has no means of defeating such doubts).

 The greatest compliment one can give another is to listen attentively to her declarations, straining to comprehend them, and once understood, to weigh them, apply them to experience, and then engage in a deliberative conversation with their author so as to produce a consensual claim to truth, goodness, or beauty justified by some correspondence to a shared reality. But this sentence denies that possibility at the outset, for it rejects access to rationality by “reflective intellects,” and it questions reason’s role in making sense of the declarative contents of the sentence because reason must fall prey to the inadequacies of present culture, etymology, and manipulative intent. So even without careful explication, the sentence seems to deny itself what it intends to proclaim: a declarative truth. If modernity cannot succeed in such an effort, why should postmodernity (or Christianity or any aesthetic proclamation) be permitted to succeed? Metaphorically, this sentence seems to exemplify the old conundrum of the office worker advising the new hire, “Believe me when I tell you no one in this office can be trusted.”

But let us grant it the dignity of a serious effort at comprehension anyway. The sentence seems to me to make no fewer than fifteen truth claims.

 

  1. That “modernity” has committed to a “great project,” that project being a search for comprehensive metanarratives and epistemological foundations.
  2. That modernity argues for neutral and unaided rationality as the means to succeed in this great project.
  3. That modernity argues that such rationality is available to all reflective intellects.
  4. That such rationality claims to be independent of all cultural and linguistic conditions.
  5. That the effort outlined in contentions 1-4 has “surely foundered.”
  6. That reason cannot “inhabit” language.
  7. That reason “certainly has no other home” than language.
  8. That any attempt by reason to “inhabit” language subjects it to an indefinite deferral of meaning.
  9. That any attempt by reason to “inhabit” language subjects it to a dissemination of signification.
  10. That such a dissemination of signification is, in apposition, “a play of nonsense and absence.”
  11. That such a play causes reason’s attempt to inhabit language always to subsist in its own aporias.
  12. That such a play causes reason’s attempt to inhabit language always to subsist in suppression of sense.
  13. That such a play causes reason’s attempt to inhabit language always to subsist in contradictions and slippages.
  14. That reason’s attempt to “embody itself in history” must result in its “at once becoming irrecoverably lost in the labyrinth of time’s interminable contingencies.”
  15. That philosophy has “no means of defeating such doubts.”

 

So what are we to do with these fifteen truth claims? Old-fashioned modernity would ask us to do what Hart advises us cannot be done: examine each truth claim in light of its warrant to determine our judgment of its truth. But what would postmodernity advise as an alternative? My understanding has always been that it would ask us to enfold these truth claims as a jellyfish would a morsel of food: envelop them into our virtual circle, examine them for the bitter taste of personal contradiction, and either make them flesh of our flesh or spit them out like gristle. But Hart doesn’t seem so forthright in his expectation: he presents these claims as self-evident and universal historical truth, though what makes them so escapes me, as does the entire postmodern schema of justification. So how do we approach these fifteen declarations? His flabby syntax combines with the blizzard of truth claims to render parts of the sentence almost a poetic utterance, yet one chained to history and philosophy. Do we explicate it as historical truth or appreciate it as an aesthetic unity? If the former, do we take it to be a rhetorical rather than an analytic use of language, and if the latter, what aesthetic theory do we use to apprehend it? Given its discursive inadequacy, how much creative construction must the reader bring to the sentence to vivify it with meaning?

I can’t know how to answer these questions without some sense of what the sentence actually does mean. What follows is my best shot.

 

  1. It is a postmodern charge that modernity’s project is the search for metanarratives and epistemological foundations. This certainly is not how modernism would characterize its own foundations. The notion that the great paradigms of modernity are merely narratives is a pernicious one, for explanatory models are far more than merely stories we tell ourselves (for more, please see the blog entry of July 14, 2014, “Tall Tales”). Modernism accepts as axiomatic that the epistemological foundations of analysis are universal and rooted in human nature (for more, please see blog entry of August 4, 2014, “The Tyranny of Rationality”).
  2. The only remarkable point of the sentence’s second contention is that rationality might be “aided.” Hart offers no referent in the vicinity of this sentence for that possibility, though one might expect him to champion divine guidance as the aid modernity rejects, and reject it modernity certainly does. It is equally a truism that modernity unapologetically offers rationality, along with rationally examined experience, as the means of finding truth in reality (for more, please see blog entry of February 2, 2014, “Modernism’s Midwives”).
  3. Modernity does offer the universality of reason as a key to unlocking the secrets of reality. It is an odd thing to frame this as an accusation, for, if true, it equalizes access to truth and goodness, an intention postmodernism often denies to modernity. In this as in other foundational convictions, modernism has often lost its way in practice, but the appeal to universal rights demonstrates the very self-correcting quality of rationality that Hart denies in this sentence.
  4. The alternative is pretty awful. If rationality were dependent on cultural or other conditions, postmodernism would have no reason to charge modernism with hypocrisy for asserting the superiority of some cultures, races, or genders. That modernists were hypocritical in their judgments of gender, culture, or race might assault either their suppositions that reason is the universal qualifier for judgment or their conclusion that some were more capable of exercising it than others. It cannot, as this sentence implies, call both into question. If Hart subscribes to the postmodern position that rationality is a product of experience and is therefore idiosyncratic, he ends in a relativism that justifies oppression and exploitation on grounds of cultural determinism and a subjectivism that robs individuals of the means to resolve their differences.
  5. This unexamined and unsupported contention is patently false. It seems to question the triumphs of science, the avatar of modernism. Let those who proclaim modernism’s failure try to forgo the benefits of its most muscular project: medical, theoretical, mathematical, technological. On what grounds can postmodernism indict natural science, the royal road to those truths about reality that its methodology supports? I am convinced that three possible sources of this disdain exist for the postmodernist. First, a narrow focus on the failures of the human sciences might deceive a poorly educated observer into considering the human sciences representative of science in general. Second, an historicist excavation of science’s overreach (of which the human sciences are the detritus) over the last two centuries might mistake the hubris of scientism for the actual activities of today’s natural scientist, which is rather like lumping Comte with Curie. The only areas in which such scientism survives today are the human sciences and popular cosmology. Third, postmodernists might champion some pseudo-science alternative as a means of justifying truth or goodness claims that true science as now defined simply cannot warrant. But in this position as well postmodernism seeks to have it both ways: to share in the reflected glory of successful scientific endeavor through the use of arcane terms and fanciful theories without subjecting them to the rigors of the scientific method. True science can never resolve goodness issues; its method forbids judgments based on non-material and non-quantitative factors. Postmodernism, on the other hand, remains true to its roots in philosophy, human science, and pseudo-science and so embraces what seems almost a mockery of scientific trappings in its attachment to philology, psychology, sociology, constructivist education, and the embarrassments of Freudianism and Marxism. How charming that Hart’s weighty, macroscopic condemnation of two centuries of Western thought should merit merely a throwaway line without a shred of warrant! Such an act of intellectual vandalism seems possible only for one either preaching to the choir or speaking from the throne of Peter. But a postmodernist religionist cannot embrace both the postmodern disdain for authority and the traditional religious elevation of it.
  6. I don’t know what this truth claim means. The metaphorical and poetical preferences of postmodernism are evidence of either intentional misdirection or self-delusion about the nature of truth. I challenge any reader of Hart’s prose to explain discursively what it means for reason to inhabit language. I don’t know what the words mean. If he means that language can only roughly approximate reasoning, he may have a point, which is why the natural sciences use the language of mathematics to frame and instantiate their truth claims. If he means that non-mathematical language is insufficient to delineate the truths of reality, he certainly has a point, but what of it? The only language capable of revealing that inadequacy is the language that produces it, so unless he wishes either to find a better formulation or to stop making truth claims about reality that are not mathematical, he should simply remind us that all such claims, including his own, are to be considered only provisionally true, and not merely because of the insufficiencies of language but more so because of the uncertainty of their warrant. But then Hart seems not to worry about warrant.
  7. I don’t know what this claim means. It may, for some reason, restate the one above, or perhaps Hart is gauzily referencing the difficulty of communicating our truth claims to others as opposed to that of framing them for ourselves. Such problems were far more thoughtfully teased out by that early prophet of modernity, Francis Bacon, who appropriately saw them as serious but not fatal impediments to rational examination of experience.
  8. I also don’t know the meaning of this claim. In what sense is meaning “deferred”? What necessitates such deferral? What would resolve it? If language is inadequate to the task of discursively clarifying or communicating our understanding of reality, how can such an inadequacy ever be resolved? What are we to make of the language of this claim? If Hart is referencing the postmodern insistence that language creates rather than contains meaning, he should say so, though such a charge would destroy rather than defer the denotative function of language, something that may mark Hart’s prose but doesn’t seem to have fatally wounded discursive communication more generally. But I am only guessing as to what this claim means.
  9. I am also guessing on this one. If “dissemination” refers to a democratization of meaning based on culture or other group experience, then I suppose the problem references issues of validity suggested by declaration #4. How postmodernism could resolve such claims befuddles me. Modernism would subject them to logical analysis. Linguistic claims that purport to deconstruct the hidden power relationships between signifier and signified have opened language to a psychological critique that psychology as a science is not equal to, though it has given postmodernists that good old thrill of revelation of hidden agendas so characteristic of human science paradigms. If language is not a clear container into which we pour meaning, it is also not infinitely malleable or used inevitably as a weapon. While it is true that the claim to truth is also a claim to power, the power that ensues derives from the truth rather than the claim itself, for possession of truth confers the potential to choose goods accruing from the accurate comprehension of reality. Nothing nefarious or curtained in that. In the physics sense, power is merely the capability to accomplish work, the first stage of which is discerning what work is to be accomplished. It is no accident that every claim to truth is also a claim to power. It is no evil either.
  10. I don’t understand this contention. Do you? Perhaps a use of language even more figurative than the rest of the sentence?
  11. If verbalized truth claims are always attended by doubt, then I would agree with this claim, for the discursive power of ordinary language will never be up to the task of delineating any element of reality beyond a reasonable doubt. Whether such a shadow of a doubt should disqualify a truth claim from rational consideration is a valid epistemological question, as is the degree of certainty required to validate a truth claim. That such uncertainty accompanies a truth claim is hardly an indictment of a modernism that continually subjected its own claims to precisely this question, the resolution lying in the use of more sophisticated (mathematical) language for certain kinds of issues and increased levels of doubt attached to those less empirical. At any rate, such a charge is hardly either a new one or one modernism has not confronted. One might think a proper response to an unavoidable lack of certainty would be an effort to minimize doubt by increasing rational confidence. But perhaps figurative obfuscation is another way to go.
  12. I have no idea what this claims. How is “sense” meant? If the thought refers to “sensation” as an alternative to reasoning, then this argument harkens back to the entire idealist objection to rationality. This would constitute another macroscopic contention to which millions of words have already been devoted, one disputed by naturalists and their natural science heirs, an argument hardly resolved by another throwaway line that simply assumes without warrant what it also thinks everybody knows. If by “sense” Hart means our faculties for making sensations intelligible, then to argue that the only means we possess to frame our picture of reality does fatal violence to that reality suggests that we can say nothing valid or reason gainfully, in which case we can stop torturing ourselves with his writing. But I may be maligning Hart beyond his deserts by reading “sense” in either meaning, so I will only malign him for one more muddy expression.
  13. This claim is false, as neither empirical reasoning nor positivism nor even ordinary reasoning about experience is self-refuting. Let postmodernism refute empirical science and the interlocking subject disciplines it has developed, its use of mathematical language to extend reasoning into theory, and its production of working technology. In ordinary language, one can posit logical questions that conduce to self-contradiction, and critics make much of the difficulties posed by Heisenberg to science and Gödel to formal reasoning. If this kind of thing is what Hart refers to, let us accept his point for the sake of argument, though doing so means ignoring the success of science. The challenge then lies in positing an alternative to a rational sense of operations that proves more effective at resolving issues of truth, goodness, and beauty. Even more basically, it lies in satisfying Hume’s claim that even reason’s success in divorcing cause and effect ontologically would do nothing to dissuade us from its epistemological necessity. As I wrote recently (please see my blog entry “Theology’s Cloud of Unknowing” of October 27, 2014), my accepting determinism will not deter me from exercising what I take to be my free will. And as much as postmodernists condemn reason, they cannot forego it, though Hart demonstrates how their disdain damages their use of it.
  14. If you understand this truth claim, please let me know. I can’t even hazard a guess as to what he means by this one. There seems a bit of a lilt in it, though. Poetry?
  15. If his mode of thought in this sentence can be charitably described as philosophical, this final assertion seems Hart’s strongest by far, for his presentation is bad philosophy and even worse discursive language. Of course, his own position is not the one he is assailing, and his triumphant flourish at the end of the sentence seems to be the claim that modernism has no means of erasing the doubts raised by postmodernists, doubts poorly alluded to in the sentence under examination. In this he is correct, for the failings of modernism are everywhere to be seen. The cadre of French theorists who nailed the jello of postmodernism to history’s wall in the 1970’s was conducting what they saw as a post-mortem. Though poorly understood at the time, modernism had died on the battlefields of the First World War, or at least a variant of it had. The twentieth century was a miserable and sustained effort to resurrect from the ruins adequate warrants for claims to truth, goodness, and beauty in the face of what was widely perceived as modernism’s failures (For a fuller explanation of this event, please see my book “What Makes Anything True, Good, Beautiful? Challenges to Justification”). But modernism did not die, though it changed, in part in response to academics’ century-long flirtation with postmodernism. Its resilience lay in its relentless self-critique, one that began with the first pitiful attempts by rationalist philosophers to conduct another post-mortem, this one on the cataclysmic collapse of authority in the Reformation. Modernism certainly was never the intellectual bully its critics on the right (religionists nostalgic for a vanished authority) or left (postmodernists sensitive to its hypocrisies and inconsistencies) saw it as being. Rather, from its birth in the seventeenth century it was a desperate attempt to warrant truth, goodness, and beauty claims by some consensual means that might bind persons to each other with even a modicum of the force of lost authority, an attempt subject to endless self-criticism and relentless reductionism. Hart for once uses just the right language to describe philosophy’s failure to replace religious authority, for rationalism truly found no means to defeat the objections of its opponents, only to confront and minimize them. Reason and closely examined experience, the warrants modernism advanced, could never offer the certainty of Gnostic claims, nor in the messy amalgamation of Romantic and modernist warrants embraced by Victorians could they counter charges of bad faith and hypocrisy. The enduring appeal of scientism is a current example of modernism’s failure to project a defensible framework of moral justification, and religionists are quick to identify it as a threat, probably because it is rooted in a nostalgia for authority’s lost certainty, such nostalgia characterizing their position in general. Nor has modernism produced clarity in regard to evaluations of quality (for more on its problems with goodness issues, please see my blog entry of December 12, 2013, “Is Goodness Real?”). And it has not risen to the challenge of constructing an unambiguous aesthetic, though Kant laid a solid foundation for that effort. Modernism’s struggles are likely to continue. It sloughs off accretions and inconsistencies clumsily and often embarrassingly, as it did the hypocrisies of Victorianism that fueled the postmodern critique. Worse, its failures and its efforts to correct them dominate the zeitgeist.
Its demand for rational consistency and honest examination of experience opens it to relentless criticism from reactionary advocates of authority and from post-structuralist champions of the virtual circle proclaiming their own warrants for truth and goodness claims. But neither of these approaches can succeed, for the history of the Reformation laid bare authority’s bankruptcy in the face of challenges from within its mode of warrant. Authority simply cannot resolve disagreement from other authority. Nor can the virtual circle model of postmodernism resolve conflicts among cultures or between individuals except by recourse to coercion. No critic of modernity has been harder on reason as warrant than modernists themselves, and this relentless application of the principle of falsifiability remains our best hope of finding truth in the face of the kind of obscurantism we see in Hart’s cited sentence.

 


Theology’s Cloud of Unknowing

No part of the quest for knowledge of truth, goodness, and beauty matters as much as the search for God. It is only in the last year that I have found myself comfortable in that effort for two reasons: I have resolved to my own satisfaction some difficulties inherent in religious commitment, and I am beginning to understand the categories used by religious apologists.

The greatest proof of God’s existence and nature I can discover is neither the ontological nor any of the cosmological proofs. It is the existence of free will in the face of determinism. Frankly, it puzzles me that this argument is not used more extensively by religious apologists. On the contrary, atheists and agnostics hurl scientific determinism in the face of those who wish to claim that God acts in the world. I have argued the futility of religionists disputing determinism in the observable universe (please see “Religionists Fighting the Wrong Battle” of July 6, 2014). It seems a fool’s errand to deny determinism, for that would demand denying the truth of scientific discoveries based upon it. But these are pretty difficult to repudiate since they include not only the eerie correlation between mathematics and empirical research, but also the amazing interlocking bases in the truth claims of all the natural science disciplines. And don’t forget the deal breaker: the technological marvels that science has given mankind. Put simply, to deny determinism in the physical world is to deny that science works. Those who wish to argue that God acts in the world must refute the counterargument that the world unwinds itself in completely predictable fashion, such predictability constituting the lodestone of all scientific endeavor. Now at this point it might seem that I am switching sides, for the argument just given is the atheistic one: God’s action cannot be found in reality because determinism is irrefutable. Allow me to spring my trap now. The stronger you make the determinism case, the more you also make the case that God does indeed act upon reality. For to argue is to choose a side. And to choose is implicitly to deny determinism in favor of free will. Not even the most committed scientist can deny that she chooses her field of study, her theoretical and experimental efforts, and the conclusions she draws from them. The greatest refutation of cosmic determinism is our own sense of freedom.

Now I will confess that I was stymied at this point for the last decade or so, dismissing this sense of freedom as an error and a self-delusion, the same kind of error we make when we trust our direct perception of sense data or when we assume the virtual circle we create from applying reason to sense data is reality itself. I dismissed our sense of free will as just another hiccup in the epistemological/ontological linkage.

Only it isn’t. I am perfectly willing to relinquish any claim to free will, at least in the abstract. Logically, I can hardly do otherwise, for every libertarian and compatibilist attempt to preserve free will in the face of determinism has failed. As we are indubitably material substances, I am prepared to accept that we are as determined in our choices as the most inert object of scientific inquiry. But no scientist committed to such an inquiry would be able to reconcile her trust in determinism with the simple truth that she cannot avoid feeling free to commit to that trust. The great mystery is not that we are determined but that we feel free despite our knowledge to the contrary. I can no more stop struggling over my choices of what to judge true, good, and beautiful than I can stop my own heart from beating. My brain seems designed to recognize the natural freedom that lies at the center of my humanity, just as it seems compelled to exercise the preferential freedom involved in weighing choices as it yearns for the flourishing that accrues to wise action (for more on these three levels of freedom, please see my blog entry of November 20, 2013, “Our Freedom Fetish”).

Now this truth leads me to one of two conclusions. Either I truly am free, and I and others like me are therefore the only free things in a deterministic universe, or I merely feel free as a condition of my own consciousness along with others like me, which also demonstrates some odd uniqueness of human nature impossible for other material substances in the universe to duplicate (so far as we know and with a minor caveat for very modest choice-making for higher mammals). I have come to realize that it hardly matters which of these options pertains. Either true freedom or the inescapable sense of it serves as a proof of human uniqueness. Granted, the notion that human beings alone among created matter are actually free would argue for the existence of the soul and place us in contiguity to God. But the other option also works. Even if my perceived freedom is an illusion, one has to ask: why that particular illusion? Why can I not escape moral responsibility for my judgments? I could tell myself my options are limited by heredity or environment, but that would do nothing to remove either my sense of moral responsibility or of culpability for wrong choices. C. S. Lewis once remarked that our sense that reality is unfair is proof enough that we possess some sense of divine justice, but I would argue that such an understanding rests on a prior sense of what is due, and even such a vague moral outlook is equally convincing evidence of our uniqueness. So it hardly matters whether we are truly free or err in our sense of moral freedom. Human beings are choice-making machines, but it hardly makes evolutionary sense that we expend so much energy agonizing over illusory choices when instinct would prove a far more efficient director of our actions. We don’t live in the kind of world many religionists would prefer: one in which everything operates directly on God’s orders, resulting in a miraculous and therefore incoherent reality that would frustrate any rational agent’s attempts to choose well. So we live neither in a world where we do not feel free nor in one where everything else does. We see two foundational oddities at work here: our sense of freedom and reality’s enslavement to determinism. The clincher is the third oddity that marks the connection between the two: the way these antitheses work together to bestow upon us a sense of rationality that guides our moral choosing. Nothing in the strongest case to be made for determinism forbids God’s action in the one area of reality we cannot help feeling exempt from determinism: our sense of moral freedom. Both this sense and reality’s determinism seem signs of the kind of Creator who choreographed the dance between the universe’s determinism and our ability to make choices in it.

As Kant said, the starry heavens above me and the moral law within me. That point of view might seem irrefutable, at least until one reads Karen Armstrong’s “The Case for God.” Her richly sourced investigation –I count 374 entries in her bibliography– makes a rather different argument: we can know nothing of God’s nature using reason or reasoned experience. Whatever we learn entails more negative capability than positive knowledge. It is a curious argument for several reasons.

First, it is odd that such a claim is structured as an argument. Armstrong traces a long tradition of mistrusting ratio (reason) as a means of comprehending spiritual reality, though she acknowledges its success in the kinds of endeavors ordinary life hands us. She prefers muthos (myth) as a means of religious knowledge. This sort of effort does not merely call for a rejection of reason as a means of knowing. Armstrong seems to think its success requires real affronts to our rational capacity; disorientation, contradiction, paradox, koans, and self-neglect are her route to God, one that deliberately frustrates the reasoning we apply to the rest of our experience for the very good reason that nothing else in our existence bears the slightest resemblance to the ineffability of the divine, and our natural inclination to use the tools of ordinary knowing tends to reduce God to something more familiar and pedestrian, an idol. Her exhaustive historical account shows that reduction to be a constant temptation for religionists, one quite understandable since she acknowledges reason’s central role in human activity. Her approach owes something to Kant’s aesthetic theories, and in her conclusion, Armstrong explicitly compares religion and art. Kant thought aesthetic reasoning to be fundamentally different from practical reasoning because it recognizes the unique quality of aesthetic objects that exist for neither practical use nor classification. Armstrong makes a parallel argument for our thinking about God, saying that we cannot regard God as a being, for that would mistakenly place the divine in a class with other beings. She differs from Kant only in arguing that we are incapable of thinking about God at all. For that reason she also favors what might be seen as imaginative alternatives to reason: myth, metaphor, analogy, poetry, visual arts, and music. She devotes a good deal of attention to these qualities in the holy texts of the world’s religions. The third route to religious knowledge Armstrong highlights involves the importance of will: commitment, ritual, prayer, altruism as an antidote to egoism, and meditation.

She charges Christianity with two historical rational errors. The first came with efforts to standardize Christian doctrine in the first few centuries after Christ. Old Testament writings and New Testament candidates for orthodoxy were gradually aligned so as to give logical force to claims for Jesus’ divinity, something Armstrong argues was never implicit in earlier Christianity. Even so, she charges that interpreting scripture as historical and inerrant truth was only made normative after the Enlightenment. Like other religious apologists who view science as an affront to religion (please see the blog posting on “Tao and the Myth of Religious Return” of October 13, 2014), Armstrong sees religious fundamentalism as a defensive response to the aggressive assaults of positivist science. Interestingly, she argues that this response has distorted and threatens to destroy religion since such a defense attempts to rationalize religious belief and place it on an equal footing with other means of warrant more suited to practical reasoning than theology (Please see “The Latest Creationism Debate”, February 16, 2014). In any case, she recognizes neither modernity as an ad hoc response to the self-destruction of authority in the Reformation nor postmodernism as a fundamental challenge to religious faith, though to her credit she does see the human sciences as a threat to contemporary religion if only because the clergy are so eager to wrap themselves in the reflected glory of science. I must add that the same motive moves human sciences in their imitation of the hard sciences (for more, please see my blog posting of February 9, 2014, “The Calamity of the Human Sciences”).

Other facets of Armstrong’s analysis are also troubling. First, her argument about God’s creation of the universe ex nihilo indicates that this “invention” of theology somehow negates any possibility for inferring the nature of the creator from the creation, but why should that be so? Both St. Paul and St. Augustine make explicit that we can indeed rationally infer something of God from the nature of the universe, and the assumption that it differs from the divinity that made it does nothing to invalidate that connection, so long as we never forget that we can only draw imperfect conclusions from imperfect reasoning applied to an imperfect creation. Second, she raises but hardly settles the issue of what the alternative approaches to religion she champions can warrant. She repeatedly argues that religious faith is useful as a source of comfort in the face of misfortune and death, that believers have found a life of altruism to be richer than one of self-centeredness. But she never argues that muthos reveals or can reveal any real truths about the nature of divinity or morals, nor that the pragmatic benefits of religion are anything other than a gratifying illusion. Third, Armstrong repeatedly fails to distinguish between muthos as an extension of reason and as an alternative to it, citing testimony from thinkers as divergent as Thomas Aquinas and John Calvin, Aristotle and Paul Tillich. This is a crucial question that divides two wildly different traditions in all organized religions, but Armstrong’s eagerness to advance her case blinds her to the distinction. My own sense of faith is that its proper role is to extend reason to the corona of uncertain truth claims we simply cannot warrant with confidence (Please see September 11, 2013, “Religion and Truth”). I also doubt that what Armstrong recommends can be accomplished, for we are too reliant on reason to make sense of existence to ignore its dictates in any one sphere of activity, especially one so central as theology (for more on this, please see my blog entry for August 4, 2014, “The Tyranny of Rationality”). Further, it seems that the only way Armstrong can claim that the truths of muthos are equal or superior to the kinds of truths ascertainable by reason and reasoned experience is to warrant them in a purely coherence sense, if only because the kinds of intuitions such efforts justify are so deeply personal. But a coherence warrant for a correspondence truth contains the seeds of its own disintegration (please see January 12, 2014: “Can Belief Be Knowledge?”) as well as the grossest sort of intolerance for differing interpretations. I find it deeply disturbing that in her entire analysis, Armstrong never once considers the power of authority as warrant for truth and goodness claims, but instead seems to validate psychological need as a sufficient justification for embracing the truth of religious claims. It seems too obvious to mention, but since she doesn’t, I will: people embrace all sorts of things regardless of the truth in pursuit of psychic balm. I hesitate to charge her with bad faith, but if innocent of that charge, she surely is guilty of sloppy categorization, for the question of whether faith supplements or supplants reason is one of the core questions of theology. Adherents of either tradition would surely resent being lumped with their opponents as Armstrong repeatedly does.

As one who has struggled for decades with the apparent irrationality of religious belief, I found perhaps too much comfort in Armstrong’s assertions that the core texts of religious dogma were never meant to provide rational warrant for religious faith, that their power lay in some allegorical, analogical, mythical, or poetic meaning (for more on the problems such a view engenders, please see my posting of October 2, 2013, “The Problem of Metaphor in Religion”). The message I take from such an argument is that any search for correspondence warrant stronger than authority in primal religious texts is doomed to failure, and that any exegesis is as much a creative as an explanatory endeavor.

So we are left with the plodding work of inference based on the nature of creation and the moral sense that shapes human nature. Perhaps Armstrong is correct in her central contention that we can know no essentials of the deepest mystery and pervading immanence of the Creator, but our minds seem ordered by both the determinist nature of creation and our unique sense of freedom to make the attempt.


Tao and the Myth of Religious Return

I’ve noticed an odd theme running through conversations and my reading over the last few months as I seek clarity on the nature of religious knowledge. Setting aside psychological, pragmatic, and utilitarian arguments for belief and focusing instead on how believers justify the core claims of their faith, I’ve found a surprisingly consistent common thread, an historical narrative that parallels the loss of Eden in Genesis. Only the serpent in this garden is science.

This is not just a version of the bumper sticker mentality: “The Bible says it/I believe it/That settles it.” It does not stoop to denying the determinism that underpins the scientific enterprise, a denial that is an affront to reason as well as science (please see “Religionists Fighting the Wrong Battle,” blog posting of July 6, 2014). Nor does it resemble the misguided attempt to establish some parity between the reliability of religious and scientific methodologies (See “Latest Creationism Debate,” blog posting of February 16, 2014), an effort foredoomed to failure. Its attack is far more subtle, respectable, and powerful. These mythmakers deserve a thoughtful response.

Perhaps the most impressive phrasing was given by Alasdair MacIntyre in his magisterial work on ethics, “After Virtue.” C.S. Lewis covers some of the same ground in his most direct polemic, “The Abolition of Man.” Chesterton, Newman, Eliot, Tolstoy, Solzhenitsyn, Maritain, and a host of other very respected authors make their own versions of the same case, each differing in some details but all agreeing in essentials.

The story they tell is this. Something vital has been lost to culture, stolen by the revolution in thought begun by Descartes at the beginning of the seventeenth century. His attempt to establish objectivity and autonomy for our pursuit of knowledge was misguided hubris that launched the scientific enterprise and the Enlightenment, which in toto have rained catastrophe on western culture. Our fading hope for reprieve can only lie in a return to traditional values informed by religious truth, rejection of materialism, and repudiation of scientific theories of man.

Altogether, it is a good story. Some of it is even true.

The first thing to notice is precisely that it is a story, one with the requisite moral. In fact, it is a very old story, as old as the Epic of Gilgamesh and Noah’s flood. It is the story of Eden, of the Pharisees and Jesus, of Augustine’s two cities. Equally telling, it is the story of Plato’s cave, of Aeneas, and of Lewis’s beloved Norse sagas. The tale of the wrongly chosen path and of human hubris is both an archetype and a touchstone. It informed the entire Romantic era’s love of all things medieval. It inspires the young through tales of Atlantis as it characterizes their grandfathers’ fond recollections of misspent youth. It surprised me to find so much unanimity among philosophers, theologians, cultural commentators, and poets about the centrality of narrative to an understanding of truth, at least until I recalled how saturated they were by Romanticism as it was filtered through the artifices of the Victorian era and how antagonistic they were to the discursive language of science.

But pegging its roots does not dispute its truth. And it goes without saying that nearly all modernist literature of the first half of the twentieth century was colored by just this sense of loss and diminishment. My issue is not with the loss per se. It is with the nature of that loss.

For the regret expressed by the mythmakers is rooted in historical and ethical generalizations that cannot face real scrutiny. I count seven serious errors in their analysis, any one of which would prove a fatal blow to their version of events.

1. It is clearly untrue that there was some homogeneous value system that scientific thinking ultimately attacked and is in the process of destroying. What C.S. Lewis calls “the Tao” as a shorthand for traditional values was no more coherent than the lost America conservatives wish to resurrect based on wholesome television series of the 1950’s. No single moral system characterized world or western culture before the scientific revolution. One need only consider the challenge medievalism posed to classical culture to see how fragmented western ethical history was in the Christian era, not to mention in other parts of the world. Probably the most sourced of the works I’ve read recently is Michael Aeschliman’s “The Restitution of Man,” which marshals thinkers as diverse as Cicero and Samuel Johnson to his cause. That they would have been surprised to find their views lumped together would be an understatement. The deeply religious authors who present this myth of moral unanimity need hardly have looked beyond their own Christian faith for disproof of their contentions, for the bloodbath of the Reformation is sufficient proof that no moral position went unchallenged during that miserable era when religions warred over divergent moral outlooks. What could they be thinking to claim otherwise?

I have yet to see a straightforward answer to this question from any of these thinkers, but I think I can provide one that they conspicuously avoid supplying. While we see no less moral controversy before Descartes than we’ve seen since, the grounds of the argument have shifted. The unanimity was not in the truth and goodness claims offered by pre-modern thinkers. It was in their mode of justification. What writers like MacIntyre and Chesterton wish to return to is the power of authority as a warrant (Granted, they locate the source of that authority in different places, for MacIntyre the culture and for Chesterton dogma, but both revere tradition). It was authority that the eighteenth century attacked and defeated, something only made possible by the glaring deficiencies religious authority made manifest during the awful decades of the Reformation (for more on those deficiencies, please see my series of postings from January, 2014).

2. These critics of modern science treat its rise as an unprovoked challenge to traditional values rather than a desperate attempt to find alternatives following their collapse in the Reformation (For more, please see my posting “Modernism’s Midwives” of February 2, 2014). But by ignoring the causes for, say, Descartes’ efforts to find consensual warrants for truth claims amidst the ruins of the French Reformation, they also overestimate his success and underrate the power of later attacks on his method. One can hardly blame writers like Chesterton for missing the postmodern revolt that was emerging in his own time. Perhaps he might have seen more clearly how modernism’s warrants, reason and closely examined experience, were assaulted by their very modes of analysis in ways that authority could never withstand if he had realized that the tradition he most revered was authority itself. I think a critic as brilliant as Lewis would have recognized– and abjured– the postmodern revolt against modernism and indeed might have then traced back the roots of his unease, but his death in 1963 came before the brilliant formulations of postmodernism that mainly emerged in the 1970’s, themselves explaining events that had been sorting themselves out since the turn of the twentieth century (for more on this process, please see my book “What Makes Anything True, Good, Beautiful? Challenges to Justification”). Why MacIntyre, writing in 1984, failed to see it befuddles me. By rooting values and moral duties in culture, he seems to find agreement with postmodernism, though how he could avoid the charge of cultural relativism that tarnishes their arguments escapes me too. I assume the nostalgic mythologizers derived what they took to be proof from yet another error, but one that has deep truths buried within.

3. They assumed that the “science” that emerged from the birth of empiricism in the seventeenth century is synonymous with modern science, so the boasts of a Bacon or Comte might be seen as proofs of scientistic hubris. This issue requires a bit of teasing out, though. First, the nineteenth century began a continuing effort to refine what counts as valid scientific experience. This entire effort has been reductive in the extreme, rooting out pseudo-sciences and outliers as it builds disciplinary paradigms and establishes links across fields of study. No one can deny that the early proponents of empirical processes were guilty of hubris, but one glory of their method of warrant is that it is self-correcting, and the boasts of early natural science have long been muted as science has matured in the last century. No one who understands true science can argue today that its efforts are guided by any greater value than respect for truth. On all other values, true science must remain mute for the simple reason that its focus on material, measurable, and mathematical substance provides no means to warrant moral claims. Had its critics clearly differentiated truth claims, which science handles exceedingly well within its sphere of competence, from goodness claims, they would have recognized that morality was safe from science.

But we might pardon them for their error, especially when we see how slow science itself has been to learn its own limits. In this, the mythmakers have their strongest point, for the human sciences have proved guilty of all their charges: hubristic, value-laden, misleading, and a threat to every other means of knowing (for more, please see “The Calamity of the Human Sciences” blog entry of February 9, 2014).

4. But the mythologists here make their fourth mistake in that they don’t seem to distinguish the human from the natural sciences. The former justify all their charges and have since the Enlightenment first championed “the science of man” as an extension of “the science of nature.” Their failure to distinguish the two might lead us to the conclusion that these writers are just a bunch of old men who yearn for the good old days. There’s some truth in that, just as there is power in their charge that the “human sciences” are neither science nor humanizing. But any natural scientist could have told them that. Real science resents “soft science” basking in the reflected glow of true science at least as much as traditionalist humanists do.

But it is not likely that academics trained in the arts and humanities would seek out the counsel of the college of sciences. For the first two-thirds of the twentieth century, they attempted a flanking attack that drew its power from some of the more gruesome scientific accomplishments of that difficult time. This tactic might be called the “Mary Shelley” argument: that natural science freed from moral restraint would create abominations. It did. The killing fields of the Great War faded into nightmare only to be replaced by the genocides that followed, and then by total war and the specter of the mushroom cloud. We may be too close to that era to appreciate how powerfully these prophecies affected culture during the Cold War. Scientistic dystopias of the “Brave New World” or “1984” variety may have seemed possible, even likely, but history has shown them to be the fifth mistake of the reactionary mythmakers. 5. The technology produced by the scientific revolution has not diminished human flourishing; the consensus judgment is that it has improved life. At any rate, the technologies that natural science has wrought are so woven into the fabric of world culture today that it is far more difficult to imagine a successful Luddite rebellion now than it would have been a century ago.

The mythmakers were more perceptive in tracing some of the cultural products of a technocentric culture. Perhaps it was natural that they would characterize the popular view of science in terms reminiscent of the laity and the clergy. After all, this was their preferred social structure. They were correct in seeing the layman as befuddled and overawed by the new priesthood of scientists, viewing their accomplishments as equally mysterious and inexplicable. This credulity is a major motive for the human sciences’ efforts to ape the terminology and training of the natural sciences, though, of course, without their successes. It is perhaps more the laymen than the scientists who merit the charge of scientism, for an exaggeration of the capabilities of science analogous to magic can only succeed for the outsider. No practitioner of modern natural science could perpetuate such a hoax from within the discipline. Coming from laymen, pseudo-scientists, and some practitioners of soft science, such overblown claims, with their echoes of the early prophets of true science, might impress nonspecialists for a while. I should add that popular cosmology is sometimes guilty of that charge, particularly when it attempts to confront questions of the universe’s origin that depend on shaky theoretical underpinnings. What this popular scientism does lead to is a misunderstanding of the nature of moral thinking, but it is an error the mythmakers share. The layman yearns for moral certitude somehow produced through the methodology of science. The mythmakers are right to condemn this as a false hope, for no “ought” can ever derive from even the most certain “is.” Or to put it in liberal arts terms, no imperatives from the indicative. Every capability of science requires a moral injunction to direct it, and that injunction can never be derived from the science that serves it. Medical science may extend life, but it may not decree that life ought to be extended. But the mythmakers’ nostalgia for the moral directives of religious authority is, like their historical narrative itself, a longing for a myth. Authority of any kind must founder on disagreement. It cannot resolve dispute within its own mode of warrant (for more on why, please read my blog on “The Fragility of Religious Authority” of September 18, 2013). Neither empirical science nor religious authority can provide certain moral guidance in a multicultural climate. What can?

I consider expertise to be an admirable guide to judgments of quality as well as to issues of truth that yield to repeated and studied experience, but I must agree with the mythmakers that expertise is difficult to come by in the rough-and-tumble of undifferentiated experience. So in that sense, they are right to condemn the mid-twentieth century’s obsession with soulless professionalization and mass efficiency, though I should quickly add that an increasingly complex society cannot survive without bureaucracy and middle managers.

6. Still, the mythmakers’ ominous charge that in the absence of religious morality, “efficiency experts” and technocrats would by default be the moral arbiters of mass culture proved to be yet another error on their part. Wouldn’t you agree that this is a role more likely assumed by commercial artists and entertainers in today’s culture? At any rate, expertise, though a powerful justification for many kinds of truth and goodness claims, can only have a tenuous hold on moral ones, and even then only the expertise derived from a long life well lived. Neither scientists nor technical experts have replaced bishops, ministers, or mullahs as moral arbiters.

The “Myth of Religious Return” so prized by conservative literati is a good story for sure. But like all narratives, it suffers in any effort to translate it into discursive language (Please see blog entry “Tall Tales” of July 14, 2014 for more). Without a doubt, the failure of their analysis can be traced to their misreading of the dawn of modernism, a thought revolution spurred entirely by the dismal failure of religious authority over the century and a half of Reformation conflict. But the mythmakers failed as miserably in understanding their own age, and this serious mistake constitutes their final misjudgment. 7. They failed to appreciate the postmodern revolution that rejected modernism at the dawn of the twentieth century in favor of group identities molded by spurious claims of social science, existential Romanticism, utilitarianism, and American Pragmatism. We are certainly still suffering the consequences of postmodern moral thinking (Please see my blog posting “Postmodernism is its Discontents” of July 7, 2013), and some of the strongest objections conservatives raise to the current moral climate are valid objections to postmodern thinking. Still, thanks to the Enlightenment revolution, itself condemned by both the mythmaking reactionaries and postmodern nihilists, morality is still seen as the most prized possession of the individual’s rational will in pursuit of what it calls good. Many of us exercise that will by choosing to respect religious authority, though less completely than in ages past. Rather than yearn for some mythic, medieval paradise lost, religionists must compete for the moral assent of their adherents as adults, not as children cowering in fear. The intellectual revolution that freed reason from authority also established the sacred right of the individual to choose her own moral outlook justified by her own moral warrant. The mythmakers are certainly correct in asserting that this hasn’t always gone so well, but moral agency does not preclude error any more than it perpetuates it.

 

 


The Tyranny of Rationality

My argument today can be summarized thus: we all are deeply, deterministically rational.

Since we partake of three realities (brute reality, our constructed image of it, and the language we use to convey our knowledge to others), it seems appropriate to examine this claim in reference to each. I do not mean to say that the external reality, the tree that falls in the forest, is rational. Brute reality simply exists, and any character attributed to it requires an interpreting mind. So if we should decide that, yes, the external world is rational, what we are saying is shorthand for what we really should be saying: that we can apply our rational faculties to the substance and events of the world with some confidence, knowing that the predictions and explanations we produce will prove accurate. Further, if they prove inaccurate, we know that our rational faculties can locate a more accurate prediction or take into account some previously hidden factor that will then explain to our satisfaction the way of the world.

Now this congruence between brute reality and our own thinking about it is really very mysterious, for there is no good reason why creatures produced by brute reality should be able to unlock its secrets as well as we do. From the way I’ve framed the two realities referenced thus far, you might assume that the correspondence I have mentioned is rooted in empiricism, the natural sciences. And who can doubt that the disciplines of the hard sciences have proven the exemplars of unlocking the mysteries of nature with the key of human reason? Nothing at all surprising there. You might wonder why I bother bringing up such a cliché.

My answer is that the methods of the hardest of the natural sciences, while profoundly rational, are only more rigorous applications of something deeply rooted in all human experience, something we can no more shuck than the wetness of rain. Even our stoutest protests against rationality, the ecstatic cries of mystics and the Kafkaesque wails of nihilists, are logical shafts of light in a metaphysical darkness and no less attempts to build a working model of reality than applied particle physics. The difference only makes sense in consideration of the process.

Long before neurologists began their contemporary struggle to map the brain, philosophers attempted to probe the mind and its workings. The pioneers of this effort, the first epistemologists, sought to answer the question of how the mind represents brute reality. They quickly discarded the Aristotelian model of direct perception despite its dominance in the thinking of the seventeenth century. The sort of naïve assumption that we perceive the world complete and entire, that our senses present to us a “true” picture of reality, is a hard one to dismiss, for it is our default approach to experience. But it is patently false. From the vanishing point in art to dreams, hallucinations, and apparent time compression and expansion based on our level of enjoyment, we don’t need to look very hard to find that our perception of brute reality is something different from the reality itself. And don’t even bring into this issue the limits of perception as exemplified in quantum theory and general relativity!

John Locke’s representative theory of perception gave us our second reality, an internal reconstruction of the external one that in his view was a pretty effortless reproduction constructed by the mind. Locke argued that all of our thinking is composed of perceptions and reflections upon them. Of course, that notion of a “reality movie” playing in our head does little to explain either the misperceptions that the brain seems so often guilty of or the bothersome truth that we don’t all seem to perceive reality the same way. Fast forward a generation to George Berkeley’s famous question about the falling tree. How do we know the movie playing in our heads is an accurate representation? All we have access to is the movie.

Berkeley thus builds the most important edifice in epistemology: the perceptual wall, an impenetrable barrier that separates brute reality from whatever we take it to be in our minds. We build a creative representation of reality from perceptions and reflections and have only experience to guide our choices. In fact, as Immanuel Kant famously observed at the end of the eighteenth century, whatever structure we build is formed not only by our senses but by the mental structures in our minds that pick and choose among them to shape our creation. This is an active process of choosing, sorting, and assembling perceptions so as to build a working model of reality inside the perceptual wall. Sense data bombard the mind, and just as we can pick out a familiar voice in a noisy room or see a foreground object while ignoring all the others in our field of vision, our minds sort through the barrage of perceptions that our senses transmit to produce a working model of reality. How different is this process from our default notion of direct perception, and how likely is it that our minds will build a model that makes sense to us regardless of its fidelity to the entire picture presented to it? And– disturbing thought– how likely that we all will build anything like the same reality from differing experiences?

But here’s the catch. Kant famously insisted that the mechanism for that construction, the sorting device for the incoming data stream, must be profoundly and completely rational. His famous categories of experience were mental sorting and assembling mechanisms that inevitably present to us a rational world. This is why every cause seems to have an effect and every effect a cause, why the world presents itself as both unity and diversity, and why quantity seems so ubiquitous in physical reality. These are simply the way we see things. The way we must see things. We have to remind ourselves ad nauseam that correlation is not causation simply because we are programmed to read causation into every effect we observe. We see constellations in random star positions, animals in cloud formations, and purpose in chaos because that is what we have to see. It seems as plain as the nose on our face unless we check our naïve assumptions at the door. The world is not rational. We are. And we can’t help being.

But wait. There’s more. Just as our moment-by-moment experience of reality is composed of the assemblage of innumerable sense data inputs orchestrated by a mental process, so too is the composite, ongoing picture of reality these experiences produce. We don’t merely act in the world. We respond to it, and that response is a product of an ongoing reflection that orients us to experience. We don’t just think we know the momentary truth of this instant. We know reality. We must think this way so as to navigate our way through the truths that allow us to choose all the goods we come to value, whatever they may be. It is this picture of reality and our place in it that comprise our own virtual circle.

In previous blogs I have discussed the virtual circle at some length (please see post of August 6, 2013). It bears repeating that our ability to find some correspondence between our experience and external reality is only testable by experience and that our efforts to improve those tests have brought us the scientific method. Its essence is an effort to improve the reliability of experience and our reflections upon it. We have other tests, of course. To determine correspondence between an unknowable reality and our picture of it, we may rely on expertise, authority, or undifferentiated experience. But all of these truth tests are inherently rational. We know experts have deeply examined the repeated experiences that produce their expertise. We trust authority in one field because it has proven trustworthy in others (a questionable assumption, as I explore in my post of January 20, 2014). We (mistakenly) assume a new experience can be examined in light of an old one. Please notice that I am not claiming parity for undifferentiated experience and a scientific experiment. The latter has intentionally confronted the issues of unreliability that plague the former and has attempted remedies for them. What I am claiming is that our conscious assemblage of reality, our virtual circle, is composed of rationally constructed truth claims. Their correspondence is, of course, always in doubt (we cannot guarantee the tree has fallen, after all, only that we have heard it), but the truth tests I have mentioned produce sufficient warrant for us to judge these claims as true (July 9, 2013). It is also defensible, though not certain, that the human operating system that presents such sense data constructions to us operates as a guarantor of intersubjectivity so that we may compare our correspondence claims constructively to those made by others.

But, of course, our constructed reality consists of far more than simple correspondences to material reality. What about correspondences to conceptualizations? How does our mind construct, for instance, true impressions of abstractions like justice or love? And what about the purpose of all this construction? How do we define, limit, and choose the good?

In this zeitgeist, to claim that such things are correspondence constructions, meaning they have some objective reality, is going way out on a limb. I have attempted to make that argument in prior posts (December 10, 2013), but even if you embrace our culture’s attachment to the subjective quality of conceptualizations, and especially conceptualization of goodness, I can still claim with confidence that your subjective experiences are rational, or at least, that you deem them so. Here is why.

In any attempt to find the true or the good, we engage in an act of comparison. In correspondence truth tests, we examine the percept in our minds against the brute reality we seek to know. Is that a Mercedes or an Audi? Even if we embrace the impermeability of the perceptual wall, we still examine our truth claims in comparison to the virtual circle of truths we have already accepted as true. Is this queasy feeling in my gut what I call nausea? This act of analysis so central to every truth and goodness claim cannot help but be a rational one, and it characterizes each moment of consciousness. It builds a second level of rationality over the foundation of Kant’s sense data theory of perception, this one a conscious and comparative one.

An unfortunate consequence of the acceptance of the perceptual wall is the epistemological viewpoint termed phenomenology. This school takes seriously the impermeability of the perceptual wall and argues for the radical subjectivity of all experience. Its adherents take their name from Kant’s famous assertion that we can never know things-as-they-are (“noumena”) but only things-as-they-appear (“phenomena”). We can only see the inside of the perceptual wall, digesting phenomena as they appear in the mind. Perhaps this argument would have been taken less seriously if it hadn’t followed upon the heels of Romanticism, with its perceptual wall-piercing valuation of intuition as a divine source of insight. Question that level of certainty by doubting either the reality of intuition or its divine source and you are left with something far less convincing: the total subjectivity of experience. This bleak picture of humankind’s fruitless search for truth and goodness lent its emotional force to the twentieth century’s infatuation with postmodernism (July 30, 2013).

But note that even in this bleak and black view, we see the light of reason. For phenomenology is founded on Kantian metaphysics, Romanticism on a valuation of intuition as a reliable means of knowledge, postmodernism on a cobbled-together set of reactions to unsettling events in the first decades of the twentieth century. Despite their claims to the contrary, the source of all philosophy is the search for wisdom: the true conditions of reality. And if Teresa of Ávila, St. John the Divine, and Franz Kafka find those conditions to extend far beyond the reach of correspondence knowledge –meaning beyond the reach of the third level of reality I mentioned at the beginning of this entry, the use of language– that is still fine. For their beliefs do not render their rational appraisal of reality incorrect. They extend it, perhaps to realms that others might not see or appreciate. In “The Idea of the Holy,” Rudolf Otto makes clear that any concept, even of the numinous if such a thing is possible, is rational.

But even if it isn’t, even if a conversion experience, a horrifying ordeal, a drug-induced revelation that changes your life, cannot be conceptualized as experienced, it must still be incorporated into the virtual circle. It still must comprise its own piece of our picture of reality. And that process too must be a rational one. For the only way we can construct that picture is to examine it according to the rule of either the principle of non-contradiction or, if we are more rigorous in our thinking, the principle of logical entailment. The mental process of turning a unique experience into a bit of the virtual circle must be an act of conceptualization and thus a rational act. I am certainly not claiming that we all succeed in this effort, nor that we apply very much rigor to the process. The haze of beliefs that extends our knowledge like a sun’s corona is often poorly examined in light of the knowledge we have already accepted, for instance. But even so, note the act of rational comparison that lies at the center of the effort. Perhaps mental health professionals might find a continuum of rationality from the integrated personality to the psychopath. I doubt the latter considers her virtual circle very much compromised. We all think our conception of the world pretty sensible, and each thinks her own the best for the simple reason that she would choose another if it seemed more true or good.

Perhaps logicians will find fault with my argument, insisting that rationality is not a matter of degree and that it indicates some absolute proficiency. I cannot disagree that formal logic establishes a rigor absent in less rigid formulations, but certainly at least some of the difference is attributable to the third reality of language rather than the second reality of the virtual circle. But just as expertise is a less perfect form of rational application of experience than empiricism, so too is ordinary logic a dilution of the methodology of formal logic and for the same reasons. We accept expertise because we cannot frame many experiences in the light of experimental science, accepting the limitations of experts because that is about the best we can do, just as we accept ordinary logic because we cannot frame ordinary experience with the mathematical structures so admired by formal logicians. Dilute that comparison still further and observe that we subject our beliefs to the far less rigorous tests of non-contradiction because we cannot subject them to the truth tests of correspondence. The lesson should be clear. We are rational beings. Rather than eschew that inherent rationality, we should embrace it and apply the most rigorous tests to our perceptions and reflections that they will withstand. We cannot escape conceptualizing our thinking about truth, goodness, and beauty, and in seeking warrants, we cannot escape the reasoning that must accompany such thinking.


Prejudice and Privilege

I really dislike looking into matters of race, but you don’t have to scratch the surface of any American problem very hard to find race eating away just under the surface, complicating solutions and, worse, analysis. It is this subterranean process, this rotting under the polished surface of our ideals, that has given rise to the relatively new popularity of the notion of privilege as racism. I wish to examine this view of racism in relation to the traditional notion of prejudice.

Their etymology indicates a difference with major implications. While “prejudice” is rooted in the active voice– it means literally “to prejudge,” presumably without sufficient evidence– “privilege” derives, through French, from Latin roots meaning “private” and “law.” To be granted a privilege is a passive act, not an active one. We may assume the favor was sought, but its reception was not within the recipient’s power to procure, in contrast to the power we exercise when we actively judge with prejudice.

Now the use of these two terms today tracks their etymology, and this distinction is hugely important in the current racial climate for two different reasons, both of which make remedying racism more difficult. First, the concept of personal responsibility is a bedrock moral principle, and that is difficult to connect to privilege as racism. Consequently, we tend to underestimate the degree to which our actions are determined by prior conditions (please see my posting of March 23, 2014) and overestimate our moral freedom in present ones, thus leading to the second problem: we consider our accomplishments to be entirely our own and resist crediting others for even a part of our success.

Contrast this haze of shared responsibility to active prejudice. An act of prejudice is an error committed by the thinker. It is within her power to remedy. It is not only a cultural offense but also an intellectual one. It connotes sloppy thinking even when the prejudice is positive. For unless a class of things or persons is definitionally exclusive (“All bachelors are unmarried”), one may not reasonably apply even accurate group classifications to individuals, not to mention the difficulties inherent in forming those classifications (a danger postmodernists blithely ignore for some reason. Please see post of March 30, 2014). But to receive a privilege is to receive a gift over which the recipient has no control. In the sense social critics use the word, white privilege describes a thing unearned, an accident of birth, a booster rocket for economic and social ascent denied to others. The term is thus not only passive, but also inevitably comparative. From its beginning, the private law benefiting some was implicitly to be contrasted with the public law relatively penalizing others.

So the use of the word privilege changes the nature of a charge of racism. First, the accused may have done nothing in contrast to the implicit irrationality of a prejudicial judgment. She may bear no personal moral responsibility. She is merely the beneficiary of an unearned advantage that she may have neither asked for nor been aware of. Second, the charge insinuates that any advantage thus conferred also must penalize others. Finally, we face the most relevant issue that derives entirely from her degree of moral responsibility: what is she supposed to do about it? Let us attempt some calibration.

First, the power of charging prejudice is inextricably linked to the rational error prejudice commits. Racism is morally offensive in part because it is stupid, and holding the moral high ground in the discussion cannot be separated from intellectual superiority. Racists are ignorant, uneducated, unable to grasp nuance. They make sweeping generalizations that are wildly inaccurate and then attempt to paint individuals with them. A long tradition of finding empirical or rational means to justify racist judgments– from pseudo-Aristotle to Thomas Jefferson to Charles Murray– attempts to invalidate the association between racism and ignorance, but its existence only reinforces the connection, for all such attempts are now regarded with disdain by intellectuals unwilling to sever it or to take any such effort seriously. This intimate connection cannot be carried over to the new racism of privilege, for privilege reveals no requisite flaws in its recipient whatsoever. Freedom riders in the South in the 1960s were as guilty as the dog handlers who attacked them if both were white. Indeed, the notion of privilege immediately conjures up a consequential guilt that might have motivated the former and enraged the latter. But should those risking their lives to end racism be charged with racism because they receive benefits from a system they actively oppose? Is such a charge warranted? And must white privilege disadvantage blacks?

We must assume that such privilege derives not from the absolute advantage conferred by being white but from the relative disadvantage the privilege implies for being black. We will call this kind of privilege disparative privilege. Now this relationship requires some investigation on three counts: first, what constitutes the privilege; second, how it redounds to the disadvantage of blacks; and third, whether any conception of privilege must be built upon comparative relationships.

Conservative whites wish to dismiss the whole notion of white advantage– and with it the notion of white guilt– by insisting that whatever advantage they received was earned rather than given, though they wish to be vague about whether they earned it or their forebears did. At most, they point to cultural values that encourage their success: family structure, emphasis on education, and work ethic. They rightly accentuate the self-discipline their success required, the acceptance of deferred gratification and commitment. But the essence of privilege as racism is precisely the charge that advantages were not earned, that they accompanied skin color as a gift. So we step into a minefield of moral ambiguity, for I cannot be responsible for a harm I did not commit, nor can I be asked to feel guilt for a benefit I earned. To be fair, these character traits are not confined or exclusive to “white culture,” and to say they are is simply prejudice. That being black automatically results in cultural disadvantage in regard to these prerequisites for success is a claim I can’t imagine any unbiased cultural observer would wish to make. Nor are these automatic socioeconomic markers, for lazy scions of wealthy families and ungrateful second-generation Americans are clichés that belie any guaranteed conferral of privilege. So much for any sweeping comparisons. But let’s face it. The conditions for success are certainly better established in some socioeconomic environments than in others, and of the multitudinous strands in the tapestry of any success story, many are woven without effort, simply by the expectations of others that form our horizon of possibilities. Still, it is a gross act of prejudice to see white privilege as an unearned gift which white America takes for granted… at least until one compares it to being black in America. Only by comparison does the generalization hold undeniable truth. White guilt derives not from privilege but from prejudice as surely as the tail of the coin implies a head. Compared to the lot of blacks in this country– not only in the past but in the present– every white person now living was born on third base, and whatever her positive efforts, all were built to a degree upon a scaffold of exploitation. There is no denying that any comparison of white and black privilege will lead to one conclusion: whites still reap unearned privilege and blacks unearned privation because of skin color, and this legacy of active prejudice is a moral stain. White persons are like the boss’s son who may start in the mail room and may work hard but who will never know how much of his success is due to the accident of birth nor to what degree that success has kept less lucky fellow workers from rising as they might have in an equal race. Who can doubt that the future occupant of the corner office will pass on the fruits of his success to his children just as his future underlings will hand off their lesser luck to theirs?

But note in the example that while the goods are absolute, the harms are all relative. Let us try to think of privilege in an absolute sense. Considered simply as unearned gifts, privileges surround us; we are awash in them. We did not earn the social order that benefits us, the political system that frees and equalizes us, the economic system that enriches us, the family that nurtures us, the knowledge that guides us, the beliefs that give meaning to our lives, and on and on. While it makes sense for us to be grateful for these blessings, I can think of no reason we should feel guilty for them. In this large sense, anyone who lives in conditions allowing her to meet her human needs in the world is privileged, for it is by the satisfaction of our needs that our lives are fulfilled, and the conditions for that satisfaction have been well established thanks to the conventions of civilized life (for more on this, please see my post of November 13, 2013). If we have a loving family, dear friends, education, civil order, productive work, and the like, it seems to me we have the goods we are by nature designed to have, and the moral response to that is satisfaction and gratitude, not guilt. What is more, these are limitless goods. There is more than enough of these blessings to go around, and my having a sufficiency in a working civil society in no way limits your access. We do not compete for all privileges.

But we do compete for those blessings that are limited, and then we are forced to face both the universality of our needs and the pain their absence produces. The most impoverished citizen in this country is privileged compared to the 84% of Liberians living on less than one dollar per day, and the most unjust political jurisdiction here is utopia compared to life in Syria. If comparative privilege imposes a consequential guilt, then we all have a moral duty to ameliorate the living conditions of the poor regardless of their location. But do we? Let us put religious injunctions aside for the moment, though they impose their own moral duties, so that we may confront the central question that a relativistic concept of white privilege and white guilt implies: does relative economic and political privilege inevitably impose moral obligation? Let us refine the question: does privilege impose an obligation even when disparity is not a consequence of the privilege?

Let me acknowledge and laud the sentiment evoked in kind-hearted persons by seeing suffering. We wish to make it better. But a clear-eyed view of this natural desire also compels us to see how conditioned it is by our degree of privilege, as exemplified by the coat-drives-for-pets mentality of some wealthy enclaves. And just as any suffering tugs at the heartstrings, another nearly universal response is simply to turn away. What we happen to see disturbs us, so we simply refuse to see it so as not to be disturbed. This accounts, I think, for at least a part of the gated-community mentality that seems so prevalent in rich neighborhoods. The moral principle of ought-implies-can– that moral obligation only follows from the ability to act– comes into play here as well. An indiscriminate and wholesale equality of goods would be impossible to conceive, much less to achieve (please see my posting of December 3, 2013 for more). The Soviet Union was a case study of that failure. Despite the efforts of thinkers like John Stuart Mill to objectify such sentiments and thereby impart to them some moral valuation– an effort that collapsed in a thicket of evaluating pleasures and pains– we would do well to remember that justice does not require an equality of degree of all perceived goods. For we perceive many things as good and value them differently, and there are too few Maseratis to go around. Rather, it is the sufficient distribution of goods necessary to meet our human needs that is required, otherwise called an equality of kind. This operation is a profoundly rational one, the tugging at our heartstrings notwithstanding. We are left, then, with the suspicion that some kinds of privilege and guilt are handmaidens of wellbeing and some are not, and that some wellbeing is earned rather than gifted. So why should guilt be intertwined with privilege like snakes on a caduceus?

But that is just the issue, isn’t it? For in our times, in America, and for economic and political privilege especially, the relationship is always partly causal. Some linkages are as thick as wisteria vines squeezing the columns of antebellum mansions. Some family wealth, wealth that produced all the goods it is capable of buying for succeeding generations, was built on the direct foundation of the importation and perpetuation of slavery. Other privileges are less easily traced. It is said the target of the fourth hijacked plane on September 11, 2001 was either the White House or the Capitol. What kind of loss to our national pride would that have inflicted? Both edifices were built by slaves. When a group of people are disadvantaged by color and so denied an equal wage or vote or voice in social policy sufficient to deprive them of the satisfaction of their needs, someone will reap the benefits, and the misalignment of power being what it is, the odds are that someone already has the sufficiency that justice requires. To the degree that white America has harvested this kind of white privilege, it deserves to feel white guilt. And so long as the privilege is maintained, so too is the guilt, and so too is the moral obligation to correct the moral harm. Reparations for slavery would not clear the slate, for the vestiges of racism would continue, producing continuing disparity and white privilege.

But just to be clear, should I feel guilty for having parents who valued education and instilled a work ethic because others were less fortunate in their choice of parents? White guilt must be measured by the racism that relatively advantages one group over another, not by the absolute goods consonant with universal human needs that some received and others did not. Social scientists may attempt to lay all cultural differences at the feet of some ancestral economic exploitation, but such an indictment seems too sweeping to be justified by science and too Marxist to be embraced by interpreters, though it is consistent with the postmodern emphasis on culture as the creator of identity (please see posting of July 30, 2013 for more). If a narrower difference maps the battleground of white privilege and white guilt, then fight it there. But let liberals leave out injunctions of religious duty, emotivist objections to inequalities of degree, and claims that all privilege imposes guilt. Let conservatives put away their blind pride in winning a rigged game and their contempt for those losing it. While privilege may broaden our view, it shouldn’t change our focus. Prejudice is still the villain of the piece, still a moral obloquy and an intellectual failure. So long as its effects ripple through the culture, white guilt is its proper consequence.

That conclusion applies also to other kinds of privilege. To the degree that they were unearned benefits gained at the cost of sexism, colonialism, imperialism, and the like, we might expect to see coinages such as male privilege, first-world privilege, and heterosexual privilege, with their attendant trains of guilt. And to the degree that these disparities still hinder exploited people from satisfying their needs while easing the lives of their exploiters, active amelioration is the only morally justifiable response.

So what is active amelioration? What is to be done? Since justice is defined as “to each her due,” it seems clear that justice demands that those unjustly advantaged should be the ones to make reparations– after, of course, performing the required triage. For if all vestiges of racism were magically removed from our society today, we would still be left with the inequities it has long produced, both privilege and privation. Repairing these inequities is not so difficult as egalitarians might imagine, since justice requires not only an equality of kind but also the inequality of degree that the exercise of our preferential freedom produces (for more on this, please see my post of November 20, 2013). The shame has never been that some have an excess. Rather, it is that the prejudice that helped produce the excess has also produced a deficiency for its victims. It is daunting to accept that the same arguments that produce white guilt hold sway in regard to other kinds of privilege, leading to other moral obligations, but there it is. Since the exercise of influence over other sovereign nations is a governmental function, we as citizens should move our government to act on our behalf, in accord with the limitations of the ought-implies-can principle of morality.

This is what is to be done: we have the obligation to root out disparative privilege in all of its other manifestations by actively opposing prejudice in our own circle and by favoring governmental action to produce the equality of kind that justice demands. And let us remember also to be thankful for the absolute privileges we enjoy but did not earn.
