Stereotypes and Categories

In the eternal battle between liberty and equality, I am generally in favor of putting my thumb on the equality side of the scales if only because it is perpetually outweighed by the attractions of liberty, particularly in our current zeitgeist (for more on this battle, please see my posts of November 20 and December 3, 2013). And I deeply admire the commitment to equality that seems to characterize educated young people in our culture, though I abjure the group identity theories that seem to shape their commitment. The thread that runs through so many conversations in this social fabric seems to depend on a definition of stereotyping that confuses me, particularly in regard to its use of categorization as a related term. The imprecision and negative connotations of the former seem to stain the latter, and that is a shame.

My understanding of “stereotyping” is that it means prejudging an individual according to some group characteristic, or forming a generalization about a group from inadequate exposure to its members. The injustice of such prejudice seems obvious. You probably are familiar with the wag’s distinction between categorization and stereotyping: the latter involves a judgment you disagree with. But that approach robs both words of their meanings and renders productive conversations impossible. Besides, it misses the point. Whether the stereotype is complimentary or insulting is irrelevant to the real issue, for the error is rooted in the prejudice itself regardless of whether the judgment involved is positive or negative. Good stereotypes are as objectionable as bad ones because they constitute a kind of mental sloth: we assume that we know something about the individual that we cannot know, based on some group categorization that may or may not be true. But I think this understanding of the term has become controversial. Something else is afoot, something having nothing to do with demanding sound reasoning. Whatever the cause, this whole subject has provoked distaste and avoidance. The reaction among committed egalitarians is to abstain from any kind of categorization whatsoever, but I wonder if this may be an overreaction.

This may be one of those eddies of self-contradiction that swirl through postmodernism, for postmodernism has rooted its theories in the kind of group identity that Marx made infamous. Educated young people who may or may not be aware of postmodernism’s influence on their thinking (being educated means being immersed in the zeitgeist’s current obsessions) are at once devoted to analyzing the influence of composite “cultures” of gender, race, economic class, demographic group and so on, and highly resistant to being accused of having their own consciousness formed by the shaping mechanisms they apply to others. I mean no denigration in pointing out this inconsistency, for it is no different from Marx thinking himself immune to the bourgeois mindset or Freud admiring the power of his ego ideal. We think ourselves above the fray. But though postmodern culture constructs its understandings by deconstructing the hidden influences of the group on the individual, it faces contrary impulses from popular media and commercialism that glorify existential freedom. The zeitgeist gives as it takes away.

One more point about current culture needs mentioning as preface. The distinction between the powerful and the powerless also plays into the dynamic of stereotyping, for postmodernism’s premises leave it no means to resolve conflict other than the naked or disguised exercise of power. The French revolutionaries and academics who systematized postmodernism came of age during the anti-colonial and Cold War conflicts that pitted the industrialized and capitalist world against people of poverty and of color. The theorists were apostles of equality, but their worldview was also grounded in a phenomenology of experience that valued nurture over nature. After all, it was their opponents who preached the doctrine of racial, ethnic, gender, and class superiority, so postmodern opposition to the natural superiority of any social category was assured. The problem was that the very phenomenology they took as determinative of truth and goodness claims denied them the appeal to justice that claims to equality need to rely on (for more on why, see July 30 and November 13, 2013). I notice that educated young people frequently substitute “appropriate” for “good,” as in “That kind of behavior is not appropriate.” Culture determines etiquette rather than morality determining the good. But this creates a recurring problem, for why should equality be “appropriate” and prejudice “inappropriate” when the goodness of social norms is hollowed out by the same differing experiences that putatively create cultural identity in the first place? Gordon Gekko’s motto, “Greed is good,” might simply reflect Wall Street culture, where greed is appropriate. This knotty problem could be resolved by turning resolutely away from issues of correspondence justice in favor of simply equalizing the liquidity of power, though why this effort should be any more a correspondence good than an outright appeal to justice is never explained. We may assume such an arrangement might be more than appropriate.

These postmodern influences have shifted the working meaning of “stereotyping” toward unfavorable judgments about groups or their members that perpetuate current power relationships. But this emphasis drags in the perfectly innocent word “categorization,” because some valid categories do indeed perpetuate current power relationships simply by acknowledging their existence. The problem is that every biased jerk in the world tossing off insults about this or that group is convinced that he is merely explicating a valid categorization. The appropriate response seems to be to avoid any negative group categorization in the interests of social reform, but such efforts also stymie reform, for they discourage empirical research into the social ills that produce the power inequalities reformers seek to ameliorate, and they discourage the subsequent appeals to logic and expertise that would repair them.

This kind of thing can get silly. A female professor is accused of being a “male hegemonist” for asking a boy to help two girls move a heavy table. A statistician is called elitist for citing the income statistics of single-parent families in a sociological journal. A criminologist is called bigoted for analyzing felony convictions by race. At the other extreme, this writer feels perfectly comfortable accusing Wall Street financiers of being greedy. What do we call stereotyping the powerful? How can stereotyping individuals in privileged classes ever be appropriate? Can we accept theories of social determinism for others while still considering ourselves free to make moral choices? Can the legal system make sense of this affront to responsibility?

Aristotle considered accurate classification the pinnacle of rational thought, built from the close analysis of multiple and thoughtful exposures to individual representatives from which we produce our conception of the class. As some classifications are innocuous (all unmarried men have no wives), we may assume that our current abhorrence of stereotyping will still allow us some categorization. So are we limited only to those that allow of no possible negative interpretation? On whose judgment: the speaker’s, the one characterized, or the culture’s? Can nutritional science condemn obesity as a health problem without indicting obese persons? Should single parents feel slandered by census data compiled by demographers demonstrating material disadvantages to children of single-parent homes? An understanding of stereotyping would forbid any reader of those statistics from drawing a conclusion about any particular household, but is it appropriate to draw the conclusions themselves, particularly if they point out some social cost?

The question of whether even valid categorizations of groups should be avoided or minimized for fear of lapsing into prejudice is a different one from whether such characterizations, even if valid, reveal anything about the individual members of the group. The latter issue is as easy to answer as the former is difficult. It is a blatant injustice to judge the individual by the group’s characteristics, even if those characteristics are accurate for the group, unless the group characteristics are definitive for each of its members and can be attributed to the aggregate without prejudice. To do otherwise is stereotyping. But must we also shy away from any investigation of group characteristics, perhaps just to be safe? Are we that prone to prejudice? To answer that question, we must explore one more reason why group characteristics are so frequently poisoned by prejudice. What cultural pressure molds this kind of prejudice despite all the sustained efforts of the postmodernist push toward egalitarianism?

The dark energy that sustains group prejudices flows from a much older influence, older than culture itself, indeed older than humanity, for it characterizes all social animals. This tribalism begins wholesomely with familial identification. Aristotle saw the blueprint of the political state in this first natural social unit, but he also devoted some attention in the Politics to the difficulties of enlarging that unit from the clan, the extended family, to the city-state. Our affiliation with family is as natural as the imprinting instinct of babies, but it takes some cultural pressure for that instinct to be broadened to strangers. Virtue ethics finds the impetus for this extension of attachment in our less obvious needs. The child reaches for her mother for sustenance, and the citizen reaches out to her polity for protection, civil order, education, opportunity for meaningful work, and so on. Though these needs are natural, they also require a rational consideration of the risk/benefit ratio of trusting strangers who are not driven by instinctual drives of protection and self-sacrifice. Our ambivalence toward the other is at the root of tribalism, as tribalism is at the root of the group prejudices that demean and dehumanize. Cultural forces can fight these instinctual prejudices to a standstill, but only by relentless education, generation after generation, that establishes or demonstrates bonds among unrelated persons in polities that conduce in some explicable way to the meeting of their needs. Unfortunately, the group cultures so established are frequently held together by opposition to more distant cultures representing the other: the larger social or political group is analogized to the natural unit, the family, and the culture established in opposition is set up as the other.

And so it goes, as sororities issue bids, urban neighborhoods mark turf, country clubs publish membership guidelines, religious organizations celebrate heritage, racial minorities taunt each other, fundamentalists debate God’s will, countries fracture along ethnic fault lines, and nations build patriotism by demonizing those across their borders. If you leave out the family, all of this us-versus-them is cultural creation, though all of it is modeled on the instinctual tribalism that has the baby shrink away from the unfamiliar face. Every social identity produces an incentive to group prejudice if only because our categorization of the other sees him as unwholesome and inhumane. That instinctual dark mistrust can only be countered by continued efforts to humanize the other, to expand our social and political self-identity so as to incorporate him. We can and do embrace strangers through familiarization (note the etymology) that depends on interweaving our efforts to satisfy our needs (virtue ethics defines these needs specifically; for more, please see my posting of November 6, 2013) with the efforts of those we might normally mistrust as strangers. This effort has motivated some of the major world religions from the beginning, though it is perhaps honored more in theory than in practice as they have struggled with the alienating effects of heresy in the face of their exaggerated claims to knowledge (for more, please see entries for January 12 and 20, 2014). More than a glimmer of hope for human progress can be found in the increasing scope of commercial webs of mutual interest over the last century and of social networking in recent decades.

The effort to dehumanize the other is abetted by the prejudicial categorizations that educated culture so discourages in the name of equality. But obviously more than equality is involved in both the problem and the solutions now being tried, which perhaps explains the current aversion to group categorizations in general. As much as I admire the motivation behind this effort, the cultural confusion engendered by conflating “stereotype” with “category” will only make the task more difficult.

The Determinism Problem

Having introduced the common anomaly termed the determinism problem last time, I hope to investigate it more thoroughly this week, with the goal of resolving it to my own satisfaction, and, I hope, to yours. Simply put, the problem is that we know we live in a contingently determined reality, and yet we feel free to choose, granting us a liberty nothing else in the universe seems to possess. Either our sense of freedom is wrong, or our sense that the universe is contingently determined is wrong, or some means exists to reconcile the two. Immanuel Kant called this kind of problem an antinomy: the presentation to experience of two convincing but contradictory “truths.” I agree with his judgment that reality is unitary and cannot be contradictory, so something must give in this antinomy.

 

But maybe we’re wrong. We take it as a given that the principle of non-contradiction cannot be violated in our conceptions of either truth or goodness. To a correspondentist in pursuit of the virtuous circle, the complex of truths that signals a complete understanding of reality, it is an axiom that reality cannot be self-contradictory but must instead constitute a single unity whose components fit together like jigsaw puzzle pieces. Indeed, the model of the interlocking paradigms of the natural sciences is both a metaphor for and a piece of that unity. The existence of anomaly is thus prima facie evidence of an error of truth in discovering the virtuous circle. To a coherentist who constructs her own virtual circle out of whatever elements she finds most useful, the principle of non-contradiction serves an even more vital function, for in her construction the coherentist has only that principle against which to test her personalized truth and goodness claims (please see my entry of August 6, 2013 for more on this). The existence of an antinomy as crucial as the free will/determinism problem poses challenges to both models of justification, correspondence and coherence, not to mention to the core constituents of science, logic, and mathematics. So the first relevant question to ask is this: can we live with fundamental antinomies like the determinism problem, or do they require rational or experiential resolution?

We have seen the impact of powerful antinomies in the past, and it isn’t pretty. The two greatest knowledge crises that have destroyed consensus and led to long-term societal disruption were each rooted in such anomalies: the Protestant Reformation of the sixteenth and seventeenth centuries and the postmodern revolt of the twentieth. The Reformation pitted traditional authorities against each other, revealing in the process the structural weakness of authority once challenged (please see my postings of September 11 and 18, 2013 for more). The fundamental challenge to authority’s successors as warrant, reason and reasoned experience, was precisely the charge that these warrants could not reveal reliable truths about reality. For example, one cause of the postmodern deconstruction of modernism after World War I was the seeming irrationality of Einstein’s theory of relativity, which presented to the world an empirical explanation at odds with what had been thought the plain evidence of reasoned experience. Both of these shattering antinomies were resolved eventually: authority collapsed as warrant for truth and goodness claims over the seventeenth and eighteenth centuries, to be replaced by reason and empiricism, which were themselves challenged by the relativism that Einstein’s theory seemed to sanction (please see my postings of July 22 and July 30, 2013 for more) and the postmodernism that relativism legitimized. Indeed, the twentieth century seemed to consist of a volley of assaults on what the Victorians called “good sense.” The anomie inspired by Freud’s theory of the unconscious, Gödel’s incompleteness theorems in mathematics, and Heisenberg’s uncertainty principle in quantum physics are other examples; the latter made even Einstein a little queasy, but not as queasy as general relativity made all of western culture. It is informative to think of these kinds of assaults on reasoning, all profoundly rational, as contrasts to the most influential theoretical efforts of the Victorians (Darwin, Marx, Hegel, Toynbee), all attempts at rational synthesizing. The rawest anomaly of all, though, was the specter of the most civilized world powers engaged in the senseless butchery of World War I.

In the face of these knowledge crises and the central role of antinomies in them, we might be more concerned about the free will/determinism issue than we are, particularly since the dispute is of such long standing. While its influence can be traced through postmodernism, with all its baggage of received identity, through the Romantic excess of existentialism as a reply, and through the elegiac and ironic tone of modern literary media, I think its impact has been somewhat muted by the undeniable and universal response that we seem to accept the deterministic nature of reality while still feeling free.

Only sincerely religious people should feel comfortable about that. One reconciliation many persons find convincing is religious absolutism, the conviction that our free will is simply a product of human uniqueness. We are free because of our special place in creation, as persons with souls rather than things that are caused, and that is the end of it. We feel free and struggle over our moral duties because we are free and responsible. Our defining quality is precisely that natural freedom to recognize choices in the maelstrom of experience and to choose from among them, a freedom instilled by the creator but one that exacts its price of moral responsibility. This kind of compatibilism resolves the anomaly in a manner closed to the hard determinist, who can find no empirical evidence of soul, creator, or freedom in material reality yet who could never deny the awareness of choice that underlies not only her own nature but also the scientific enterprise itself. And allow me to add one more counterintuitive argument in favor of the absolutist cause. This compatibilist theory not only demands that we have free will to choose as a condition of our moral responsibility; it also requires that the rest of creation be determined. In order for us to carry moral responsibility for our choices, we must be able to project their likely outcomes, a condition assured by the determinism that has made scientific progress possible. A reality in which many things were free would be one in which outcomes could not be predicted, so our survival, not to mention our salvation, depends on the very determinism that empirical science has revealed to be at the core of creation itself. It is this predictability of cause and effect that gives weight to our moral choices. Religionists thus can point not only to what Kant called “the moral law within me” but also to the orderliness of the cosmos (a word whose root means “order”), in Kant’s words “the starry sky above me,” to resolve the anomaly. Though this resolution lies beyond the realm of knowledge and its justifications, and therefore beyond the bounds of my efforts in this blog, falling as it does into the realm of the beliefs that extend our knowledge, I find it very persuasive.

Of course, there are others. Recent philosophers of science frequently find themselves tipping toward the hard determinism that underlies all scientific efforts, while modernist ethicists often fell on the theological side of the debate discussed above. Having investigated twentieth-century compatibilist efforts as well as earlier modernist approaches, I think I have found a view that accomplishes three objectives. First, it resolves my own discomfort while still falling within the realm of knowledge. Second, the resolution is consistent with the absolutist religionist arguments listed above, though it does not rely on them. Third, the resolution it offers is also consistent with the correspondence proofs of judgment, among them empiricism and reason, therefore violating neither my own virtual circle, the complex of knowledge and beliefs I accept as coherent truths, nor the virtuous circle, those truths justified by the correspondence proofs of judgment I have often defended in the past (for the most concise statement, see October 7, 2013; these are more fully developed in my book, What Makes Anything True, Good, Beautiful: Challenges to Justification).

My compatibilist position is framed by Immanuel Kant’s fundamental argument that free will is a rational concept, one that cannot be proved by our subjective perception of it in experience. This dualism is fundamental to Kantian epistemology: the notion that we can never know pure concepts, noumena, but only their application in experience, phenomena. Since experience is sure to be partial, subjective, and contextual, Kant thought it inadequate to serve as the base for a claim to certain knowledge of a concept as powerful as free will. So though we certainly feel free, we cannot know that we are free. On the contrary, we know that the phenomenological world is not free but is determined.

Our notions of determinism invariably depend on the validity of the cause-effect relationship, the overwhelming conviction that all events are effects of prior causes and causes of subsequent effects. But David Hume made clear in his sense-data theories of perception that this crucial temporal relationship is not ontologically demonstrable, meaning that we can never perceive causes or effects in nature. Our minds must add them to events. The relationships are purely rational rather than material. This argument so rattled Kant that he was moved to posit a range of rational operations working pre-consciously to assemble sense data into a coherent picture of reality and present it to our minds as reality itself. This is why we can only know phenomena. Our minds have already added their rationalizing ingredients to reality before allowing us to perceive it, so we are unable to disentangle the noumena of reality from the rational reconstruction that our mind assembles from sense data. And a key ingredient of that recipe is cause-effect. Reality is full of causes and effects because we highlight them in the data stream that is the phenomenological reality we experience. Or rather, we see them as already highlighted thanks to the sifting actions of the Kantian categories. Of course, it is the task of the empirical enterprise to test and make explicit the pre-conscious connections we form among the objects of experience. But whatever tests we cook up to do that (and the scientific method is the crown jewel of such efforts), Kant argued that the fundamental objects of perception we manipulate in such conscious operations are not open to dispute; they are in fact the products of a common human operating system that grants us common access to phenomena. All of these pre- and post-conscious operations are fundamentally rational. Our sense of freedom, and particularly of moral freedom, derives from this distinctive human rationality.

Note that this arrangement allows for just the kind of dualism the free will/determinism problem poses for any compatibilist solution. We may be determined or we may be free. That is an ontological question beyond the scope of rational investigation. But our categorical response is always to feel free, to identify choice, which is as fundamental to our framing of experience as the principle of causation. When a ball rolls into the room, my cat’s eyes follow the ball. Mine turn to see who tossed it.

We may further investigate what constitutes this sense of freedom. The classical free will argument posits “forking paths.” One framing is that a person is free if she can do otherwise, if she can take either path. But this rather blurs the issue, because how we respond to choice is the last step in a three-step process, each step of which allows for a different kind of freedom. Merely identifying choice is both uniquely human and demonstrably rational. It constitutes the natural freedom that is as central to framing experience as causality. Try not to recognize your options in any situation for even a moment. The inevitable forking path that the phenomenal world brings to our attention vivifies the essence of any claim to freedom, choice, and begs for analysis. This peering each way constitutes a second level of freedom, preferential freedom: the rational ranking of options as good, better, and best, or bad, worse, and worst, by whatever standard of goodness we choose to employ. This preferential freedom to recognize the relative good in our options concerns issues of utility, quality, or morality (please see my entry of October 13, 2013 for more). When we have picked the path we think best, for whatever reason and about whatever choice, we then must choose to go down it. This circumstantial freedom is the visible sign of choice, for it is the choice to act.

So in the antinomy of determinism/free will, which of the three kinds of freedom are we postulating? Which is necessary for us to be free? Let us begin an answer by demanding everything. Let us suppose freedom means doing whatever one wills. But that cannot be, for all of us might will to fly, to be invisible, never to grow old. If we cannot be free unless we have total freedom, we can never claim to be free, so our standard of freedom must accept limitations; but how much freedom is necessary? At the other extreme is the case of the prisoner in chains who can still think of his mother. Here we see no circumstantial freedom at all, only a preferential freedom of quality. And in the case of a foundering ship whose passengers face the choice of drowning or jettisoning their baggage, does their natural freedom to recognize even a painful choice make them free, even if they dither and fail to prefer one option to another, never mind actually getting around to acting on their preferences?

In disentangling what is required for us to be free, we face a grounding question: is their choice an ontological reality? From our objective perception as observers in a passing hot air balloon, we see their ship approaching the shallows and hear their cries of terror and calculate the ship’s draft, only a bit deeper than the trough of the waves crashing over the reef. But in this presentation of reality, where is the choice presented but in their minds? Surely, this is Kant’s phenomenological reality of choice, to see experience in terms of possible goods, of forking paths. And though some paths are identified as a result of conscious reasoning that expands options—else what’s an education for?—some choices will always present themselves with the same irresistibility as those waves threatening those passengers, inevitable products of experience. Sartre was correct in according persons freedom regardless of how desperate their circumstance. There is always the forking path. No matter how awful and bleak the future appears, one can never offer the plaintive cry, “I had no choice.”  One always has the option of not choosing. Or one always may choose suicide.

It is true that what we actually prefer and actually do with this natural freedom may be determined by psychology, morality, or physics. Preference and action can be thwarted or coerced, and so are not always under our control. But governments, advertisers, and psychologists can never deprive us of the natural freedom that is our birthright as reasoning creatures; though it too can be broadened by experience, it exists as a simple product of our rational human nature as it goes about constructing reality from sense data.

It is likely that preferential and circumstantial freedom can never be shown to be as free as natural freedom; their exercise will always be alloyed by determinist factors and may one day be shown to be entirely determined. That is certainly the view of the hard determinists who wish to resolve this issue by trusting in the eventual triumph of empirical science. Even now, we can always find determinist features in any preferential or circumstantial choice. The child was spoiled and screamed when she wasn’t given ice cream. The genius was home-schooled. The teacher never vacationed in Fiji. But influences alone are not enough to show the total lack of freedom that determinism requires in order to triumph. For even if determinists denigrate natural freedom because it does not require an active commitment of the will, they face serious difficulties in allocating influences over preferential choice. The factors that influence us to prefer one fork to another are complex in three dimensions: number, interrelatedness, and intensity. Some of these are determinist by virtue of ontological past structures and events. Others are indeterminist as causative factors until brought into consideration once the forking path is isolated through the use of natural freedom. Still others are indeterminist as causative factors and operate beneath the threshold of consciousness. Perhaps we can find some room in this mélange of influences for preferential freedom?

Soft determinism assumes we have some control over at least some of these factors; if we have any control whatsoever over even one of them, we are preferentially free, because our preference cannot be determined in advance. That may not seem like much in comparison to the total freedom we so desire, but remember that the bar for freedom here is very low, particularly when you remember that the determinist camp includes everything else in existence, all of which, according to empirical science, is contingently determined in its entirety, with the exception of subatomic particles operating according to Heisenberg’s uncertainty principle. It might be worth thinking about whether the single little free agency we are now contemplating is analogous to a Heisenberg particle, in terms of being a tiny part of a much more complex whole, and even more so in that its influences seem not only undetermined but indeterminable. Given the complexity of the determinative influences on preferential freedom, can the moral agent really say she has any control if only a tiny part of her choice, a part whose power she may not be able to judge or control, is free? On the other hand, that one preferential Heisenberg particle is enough to disprove determinism, since its influence cannot be predicted. If this is true, we are neither free nor determined. So what has become a common rearguard action to isolate a speck of freedom in a sea of determinism, the effort called soft determinism, fails to support preferential freedom even if it succeeds in disproving determinism. To my mind, this invalidates the “ratio theory” of limited free will and responsibility. Perhaps this explains why compatibilists with scientific leanings assume determinism will one day carry the day, for they share with soft determinists doubts about the viability of free will in preferential choice.

To return to hard determinism in regard to preference and action, we can envision what these determinists also predict: a computer whose algorithms factor in all the influences on my choice mentioned above, performing its calculations at the same speed as the human mind and arriving at the same outcome. Now suppose that program spits out its completely accurate prediction the moment before, the moment of, or the moment after my mind makes its choice. Imagine it is also able to print out all the factors it considered in its analysis for my review. None of this is too far beyond even present-day predictions. Now I would like you to consider what I or anyone else would do with these pronouncements.

 

If, a moment before I choose, my mind is informed of the computer’s predicted choice, this information becomes a new input that changes the forking paths of my natural freedom, offering me other choices to consider, other paths to prefer, and other actions to pursue. And so the mind and the computer continue their game, perhaps to infinity, the “deterministic” analysis reduced to one more forking path my natural freedom brings to consciousness. If the computerized prediction is delivered at the same moment I form my preference, the “illusion” of free will goes on as before, and the deterministic success of the computer is viewed by the human subject as a kind of parlor game. Granted, if the decision is distasteful, the forking paths trailing off into the gloom, I might bow to the computer’s predictive power, but that too would constitute a preference, a presentation of natural freedom to my reason that I must factor into my preferences. If the computer’s prediction is delivered to me in a sealed envelope after I have chosen and the prediction proves accurate, I might examine the printout explaining my choices, shrug, and say, “I knew all that.” Or upon reviewing it, I might find fascinating insights into my own decision processes that might affect future choosing. No scenario would in the slightest deprive me of the “illusion” that I am free to choose, and the existence of the determining factor would simply add to the landscape of choice that makes up my natural freedom.
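
For readers who like the thought experiment made concrete, here is a minimal sketch, in Python, of the first scenario. Everything in it is invented for illustration: the “predictor” is just a stand-in that reports the agent’s current strongest preference, and the “agent” is stipulated to treat any disclosed forecast as one more input worth defying. The point is only to show how a prediction announced before the choice becomes part of the choice.

```python
def predict(state):
    # Hypothetical "perfect" determinist model: it simply reports the option
    # the agent currently ranks highest, given everything it can see.
    return max(state["options"], key=state["preferences"].get)

def choose(state):
    # The agent's deliberation. A disclosed forecast is just more data: this
    # stipulated agent treats defying the forecast as a newly attractive path.
    forecast = state.get("disclosed_prediction")
    if forecast is not None:
        alternatives = [o for o in state["options"] if o != forecast]
        if alternatives:
            return alternatives[0]
    return max(state["options"], key=state["preferences"].get)

state = {"options": ["tea", "coffee"], "preferences": {"tea": 2, "coffee": 1}}

for round_number in range(3):
    forecast = predict(state)
    state["disclosed_prediction"] = forecast   # the prediction is announced first
    decision = choose(state)
    print(f"round {round_number}: predicted {forecast}, chose {decision}")
    # The act of defiance is itself new information about the agent, so the
    # predictor must re-model a now-changed agent, and the regress continues.
    state["preferences"][decision] += 1
```

On these stipulations the forecast and the decision never converge; the disclosed prediction keeps reopening the very fork it was supposed to close.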

 

So it seems a non-religious solution to the determinism problem involves three logical conclusions (it must be mentioned that accepting these in no way diminishes the charm of religious solutions and may add to them by virtue of applying universality to issues of moral freedom, a prerequisite for absolute morality).

(1) When speaking of freedom, we are talking about three modes of choice: natural, preferential, and circumstantial. Natural freedom is inviolable and is, by the way, the foundation of any claim to human rights, an argument I made in this space on November 13, 2013. Preferential freedom is the prerequisite for any moral freedom and responsibility we claim. Its existence is dependent on natural freedom but is also subject to more determinist influence. Circumstantial freedom is both the most visible and the most determined and is the focus of legal responsibility. Our sense of outrage over slavery, for instance, is due to its denial of circumstantial freedom to its victims. While this liberty to act is the focus of much of our conversation about the determinism problem, it is important to note that the other two freedoms are determinants of action and so should claim more of our attention.

(2) Our sense of moral responsibility derives from the natural freedom to recognize choice and from the preferential freedom to choose from among the options recognized, so at this level we are both free and determined. The influences that shape our preferences are pre-conscious, unconscious, and conscious, and involve complexities of sense-data perception not open to analysis, so it is unlikely that we will either recognize or submit to a determinist solution even if such a solution is ontologically demonstrable.

(3) Though determinism cannot be shown to be total, it certainly can be and has been shown to be influential in preferential and circumstantial freedom, so an alloyed view of the antinomy is likely to continue. So long as scientism remains a temptation in our culture, we are likely to overestimate the empirical means of investigating the composite nature of freedom and determinism and to minimize, mistakenly, the rational bases of natural and preferential freedom. The result will be to obscure the origin of human rights in the equality of kind that is our natural freedom, to minimize the role of conscious reasoning in judgments of quality and morality, and to focus exclusively, and wrongly, on circumstantial freedom as the only freedom that matters in questions of human action.

See you next week.

S. Del

A Preface to the Determinism Problem

I am bothered by an anomaly in my moral universe and hope to investigate it over the next several weeks. At issue is a contradiction between the central pillar of science, that the material universe is contingently deterministic, and what I take to be a universal human orientation to that universe: we are free to choose. On the one hand you have to weigh recent history’s most impressive human achievement, the glorious accomplishments of the natural sciences, all based on the predictability of their objects of study. Empiricism is nothing without the laws of causality that produce testable predictions about everything from subatomic particles to the background radiation from the Big Bang. Causality… hypothesis… confirmation… determinism. Everything we see around us operates according to laws of nature that have been discovered, deepened, and linked over the last five hundred years. In essence, these laws (with an exception or two I will discuss below) forbid freedom to their objects of study. But then there is the bothersome problem of… us. Our free will is the essence of the human experience, so much so that our species is called “wise man” for our ability to discern choices, weigh them, and choose the best from among them. We are choice-making machines programmed with natural freedom to see choices, preferential freedom to choose among them, and circumstantial freedom to act. Since the Enlightenment, the human sciences have been stymied by that freedom, and it has crippled their fevered attempts to be taken as valid empirical studies on a par with the natural sciences. So you can see my dilemma. It seems I must embrace two contradictory positions, which violates the fundamental truth test of the virtual circle, the principle of non-contradiction (please see my posting of August 6, 2013 for more on this). I dislike discovering anomaly, but I dislike ignoring it more.

The free will/determinism question is an old one in philosophy, dating back to the pre-Socratics, but the triumphs of natural science have cast it in a particularly bright light over the last century or so. As with all intractable philosophical problems, we have seen a multitude of attempts to resolve the apparent contradiction, and they fall into three basic categories. Empiricists typically embrace a determinist outlook that regards our sense of freedom as a delusion. Despite the spectacular failure of the human sciences to crack the nut of free will, determinists assume that advances in neurology, genetics, and other medical sciences will one day lay to rest the antique notion that we are free to choose and will reveal not only the mechanisms that compel our behavior but those that deceive us into thinking we control it. Those who argue that our freedom is real rather than delusory are called libertarians, but I will try to avoid that term because of its political associations. Among the free will advocates, religious absolutists argue as they have since the time of Zoroaster: the reason humans act differently from all other things in creation is that they are different. The existence of the soul, the spark of the divine in each person, elevates humans above the material and the determinism that molds it, lifting us to share in the divine, lending us a bit of the uncaused cause that moves itself without determinism. The third position, compatibilism, attempts to reconcile these contrarieties, arguing that we are in some way both determined and free. My real purpose in this investigation is to delve into what I take to be a convincing argument for a certain kind of compatibilism and to look into its implications for issues of warrant, but these waters are deep, so this week I will set up the issue and explore some of the complications it poses for the virtuous circle, a true grasp of the unity that is reality.

It may well be that the empiricists have got it right, that one day we will look back on this controversy with the same smug superiority with which we now view Egyptian astronomy or the humors theory of medicine. Certainly, the odds are on science’s side, for it has rolled back the mysteries of the material universe with metronomic regularity. But this particular issue has a catch that might make science’s task well-nigh impossible. Consider for a moment what we now know about the brain chemistry of strong emotion, say depression or falling in love. Does it make the slightest bit of difference to those in either emotional state to speak of neurotransmitters or serotonin? Does anyone really care if science maps out the brain structures of parental love or criminality? Yes, we care enormously if these things can be affected. Imagine taking anti-depressants or aphrodisiacs to resolve emotional or psychological issues. You don’t have to imagine. Anti-depressants are among the most prescribed drugs in the U.S. today. But should the determinists scope out the mental self-delusions that produce our sense of moral freedom, what would they do with that information? Note that I am not discussing the pragmatist response to such discoveries, only the truth issues involved. Brave New World solutions would hardly seem improvements over the self-deception that we are free, would they? Imagine for a moment that you could be convinced that your freedom and the responsibility that comes of it are the result of this or that mental error or process, and that you actually act like every other thing that exists, only now you know why and how. What would happen to your vaunted moral freedom and responsibility then? Would anyone willingly give up the one thing that makes us most human? Give it up for what? To be a thing rather than a person? I wager you would instantly choose a response contrary to empirical prediction, and if that were the real prediction and you were told of it, you would choose a third response, and so on. We are cussed characters, aren’t we?

For these reasons, I am most skeptical that empirical science will ever resolve the free will debate as it has so many others. We are blessed or doomed always to feel free and always to feel responsibility for that freedom even if it could be shown that we are not. I must mention one exception to the exclusivity of human free will in the material universe. Heisenberg’s uncertainty principle seems to offer to subatomic particles the freedom which physics denies to the things these particles constitute. Though arguments have been proffered by physicists that the indeterminability of subatomic particles might somehow conduce to human freedom, no real connection has been made, nor has anyone argued convincingly that their unpredictable randomness constitutes real freedom as humans define it.

We might embrace the opposite conclusion with more success, and here is one of those rare moments in our culture when the arguments of religious absolutists carry more water than those of the hardest of natural scientists. For the life of me, I cannot understand why religionists don’t go to this well more often. For five centuries they have given ground to the empirical sciences, so much so that they are left with very little ground on which to stand (please see my postings of February 16, 2014 and July 16, 2013 for more on this). When I was young, religionists challenged science to explain the origin of life. They don’t do that much anymore, and that question has joined Zeus’s thunderbolts, Paley’s watch, and Thoreau’s Mother Nature on the discard pile of arguments for religion. In our own day, we are seeing cosmologists challenge the “first cause” argument, postulating a “universe from nothing” that removes the divine from the act of material creation. Yet in the face of defeats that began with the trial of Galileo in 1633, religious absolutists have this trump card to play: why are we free? Every argument of religion versus science can swing on the simple truth that there is an argument, with the attendant implication that we are free to decide in favor of one view or the other. And that very freedom constitutes a convincing argument that we are unlike everything else that science studies, as the woes of the human sciences have repeatedly revealed. There is nothing anomalous about claiming that we are the only free things in the universe precisely because we are not merely things, and every scientist knows in his heart of hearts that his free will invalidates the very foundation of the scientific enterprise, at least in regard to himself as an object of study. Why don’t religionists use this argument more insistently?

Granted, their position entails a belief that extends knowledge as all proper beliefs do. In this case the knowledge is that we alone are free in ways like nothing else we know of. Should the reason for that sense of moral freedom implicate God, it would take us not only beyond the proper sphere of science but also beyond the limits of  knowledge and into that corona of beliefs that project knowledge’s light into the misty distances we cannot know (for more on that issue, please see my posting of January 12, 2014). In terms of our coherence truths that build virtual circles, there may be much more to think about in this matter, but the quest for correspondence truth that completes the virtuous circle must stop our inquiry here.

But correspondence offers no impediment to investigating the third option, compatibilism. We have seen a number of attempts to square this particular circle, to find some way for us to be both determined and free. Some of these efforts recall Descartes’ solution to the mind/body problem. In answer to how spirit can interact with matter, Descartes chose the brain’s pineal gland as the organ that transfers our will to our body. Somehow, he thought that a small material thing would be more responsive than a large one to what he took to be the whispers of spirit impulse, though why soul should have such weakling power was never explained. Perhaps a closer parallel is Luther’s notion of unearned grace that offers salvation yet must still be freely accepted, an act in conflict with his anti-Pelagian notions of human depravity. It seems to me that all such efforts to imbue us with just a touch of freedom only diminish the scale of the problem rather than resolve it. If we have any freedom at all to choose, we are not determined, even if that freedom involves only our natural and not our circumstantial freedom. When Thomas Hobbes questioned whether passengers on a sinking ship facing the choice to jettison their baggage to keep her afloat could be called free, he answered that the ability to choose among even coerced options constitutes freedom. Others imagine a prisoner chained in a dungeon who can still think of his mother. Deprived of all action, he is still free to think (please see my posting of November 20, 2013 for more on the nature of freedom). It seems to me the human sciences have been pitching this kind of limited and hybrid freedom since Locke’s tabula rasa, and the influences of genetics and environment have been of continuing interest to sociologists and psychologists. But the unspoken assumption underlying these kinds of investigations, indeed all attempts at compatibilism from the empirical side, is that the kinds of influences we can see—the sinking ship, the chains, economic class, gender, race, nationality, culture—are only the tip of the iceberg and that continued empirical effort will excavate other directive forces. The trend in this sort of compatibilism is toward more determinism, conducing to the extinguishing of free will itself.

As part of their rear-guard efforts to maintain relevance in the face of science’s triumphs, philosophers have come at the issue of compatibilism from a different direction, insisting that our sense of freedom may be a phenomenological quirk in the human psyche rather than an ontological reality. In this view they may be taken to be less sanguine about an eventual resolution than the empiricists and less comfortable about the distinction between person and thing than religionists, arguing that all we can hope for is a clearer understanding of the mental operations that somehow submerge this contradiction in our consciousness rather than highlight it. The issue has pragmatic and legal implications, for the criminal justice system struggles with allocating responsibility in its deliberations on guilt or innocence, but to my mind the most important connections touch on our justifications for our truth and goodness claims. I will assay what I take to be the strongest warrant for philosophical compatibilism next week.

It does intrigue me that we are not more bothered by the anomaly, though. After all, the greatest intellectual revolutions of recent memory, Marxism and Freudianism, were rooted in determinism, and the postmodern deconstructionism that has sought to raze modernism since World War I is built upon a foundation of contingent determinism and anti-rationalism. Our freedom fetish (please see my post of November 20, 2013) is a backlash against that postmodern charge of determinism, so the issue certainly has contemporary and pragmatist resonance. Yet as deeply intertwined as the issue is with the history of the last century, we all still act as though our freedom and the responsibility it entails are indubitable. So our choices seem to boil down to these three. We can openly embrace our freedom and boldly proclaim ourselves the only free things in the universe, with all the implications for theology that proclamation inspires. We can bow to the triumphal march of empirical science in its quest to bring free will into the sphere of determinism while still doubting that success in the endeavor will disabuse us of the sense that we are free, a process that seems to be underway at the moment, but one with disturbing implications for our orientation toward science, religion, and the kinds of warrants we use to justify our claims to truth and goodness. Or we can investigate some philosophical explanations for the anomaly while keeping our eyes wide open to what those explanations mean for our declarations of value. I’ll pursue this third option next week.

Until then,

S. Del

The Problem of Moral Pragmatism

A number of people I know are proud to consider themselves moral pragmatists. They seem to think their outlook superior to either a rigid ideology or a religious orthodoxy. I get the impression they view themselves as more ethically agile in this very busy world than those burdened with rules and systems. In this postmodern zeitgeist it is all too easy to see the inconsistencies and hypocrisies of those who profess to follow creeds and movements. After all, the great moral crusades of our age have done their work, haven’t they, and those that have failed have deserved to. And pragmatism offers the indubitable advantage of all virtual circles: it is contextual, personalized, and therefore, according to its adherents, beyond criticism. “You do what you have to do.”

I think they’re wrong: pragmatism, though awfully easy to do, is impossible to do well. As a moral methodology, it is neither a methodology nor moral. It takes a bit of patience to see why.

 

We can first dispose of Pragmatism as a truth theory. We may use William James’ words as suggestive of Pragmatism as epistemology: “Ideas … become true just in so far as they help us to get into satisfactory relations with other parts of our experience…. Any idea upon which we can ride, any idea that will carry us prosperously from any one part of our experience to any other part, linking things satisfactorily, working securely, saving labor; is true for just so much, true in so far forth, true instrumentally.” We can easily see the roots of the coherentist virtual circle in this vision, for it sets up usefulness as the sole criterion for truth. I find this position both too expansive and too restrictive. It is too expansive in that it allows utility to determine truth, a consequence Bertrand Russell famously pressed with his Santa Claus example. Whatever serves our interests thereby becomes true, with no means of distinguishing immediate from long-term consequences or of resolving conflicts between individuals’ interests. It is too restrictive in that, by equating truth with use, it cuts off from our concern any question not currently instrumental and pits the instrumental value of a judgment against our means of determining that value. It has famously been charged that Pragmatism values scientific methodology but would spurn the pure science that directs so much of scientific research. Or perhaps it would stimulate that research, but for reasons divorced from the intellectual curiosity that should initiate investigations. I have previously commented on the opposite problem, so clearly illustrated by John Dewey’s emphasis on treating ordinary experience as a kind of empirical experiment (please see my posting of February 9, 2014), a task both impossible to implement and certain to lead to all the errors of scientism. We are still enduring the results of a zeitgeist in which only scientific results are considered valid, one result being that all moral issues—those seeking truths about goodness rather than the data of experience—are reduced to the very subjectivist and experiential valuation that Pragmatists accord to their moral theory.

So what’s wrong with that? In returning to the Pragmatist moral outlook, we can place its arguments in the tradition of Bentham’s Utilitarianism and see their impact on later twentieth-century movements like relativism and subjectivism. The essence of all these arguments is that moral theory is synonymous with, rather than separate from, instrumentalism. In other words, we think something morally good because we find it provides us with a means to an end that we find useful. It is important that we define “instrumentalism” clearly, because much depends on our understanding of the term.

We use the term “good” in an instrumental sense when we actually mean “useful.” So a hammer is a good tool to drive a nail. A bridge is a good bridge if it carries the traffic it was built to carry. Such appraisals are prosaic, provided we can judge a thing’s performance against its final cause, the purpose for which it was made. In some cases, we can see a problem in limiting moral judgment to the instrumental. As Dostoevsky’s Raskolnikov demonstrated, an axe is a good weapon with which to murder a pawnbroker. So clearly, instrumentalism and morality are not always synonymous. But while some instrumental judgments are clearly not moral, it is still an open question whether all moral questions are instrumental. Pragmatists base their moral theory on an affirmative reply. Are they right?

We may assume that religious moralists would reply that they aren’t. Their absolute morality stipulates the divinity’s will as determinative rather than the moral agent’s. Religions command. They do not contextualize or subjectivize or find their justifications in experience. Now there are powerful counterarguments to this position, most notably those raised by Plato in the Euthyphro and by Kant; I explored their arguments against divine command morality on September 11 and 18, 2013. Even if they find those arguments unconvincing, I think divine command adherents would still find obedience to God’s will more than arbitrary or contextual, though they would surely admit that the lure of a happy afterlife is a strong instrumental stimulus to obedience. So are the Pragmatists correct in seeing religious morality as instrumentalist?

 I don’t think they are. Were religionists obeying what they take to be God’s will solely for their own salvation, were they attempting to placate an angry God to lessen His wrath, then their moral values might be called instrumentalist. But that does not seem to be the intended motive of their moral behavior. They wish to please God, to make Him happier, never mind the bothersome theological implications of that wish. Though their initial allegiance might be based on warrants of logic as Kant and Plato charged, they maintain that allegiance on grounds of authority, trusting in the truth of the dogmas or holy books that stipulate what is good. While they are always open to the charge of self-interest, their morality sets a higher goal, and the lip service religionists pay it indicates they find it worthy of value.

This distinction introduces what might be a jangling thought to Pragmatists. Do they operate out of a moral theory or a psychological one? Certainly, people often act on instrumental motives, but is Pragmatism claiming that they must do so or that they should? I would say the answer to that question is a split decision, with some Pragmatists espousing their outlook as a desirable moral position and others claiming that other supposed moral systems devolve in practice into instrumental, contextual choices regardless of the moral agent’s judgments. I suspect we could clarify the nature of this debate by pressing Pragmatists for their answer to this question. Should they say, “Well, that is what people actually do regardless of what they claim to be doing,” I think we have ample grounds for challenge. That leaves those who consider Pragmatism a real moral position, albeit a very, very flexible one.

 To those folks, I would respond with a series of questions.

First, “Are you judging your present intentions or anticipated outcomes?” The Utilitarian tradition from which Pragmatism evolved was an avowedly consequentialist moral system, meaning it judged the morality of action based on outcomes rather than intentions. Granted, as one faces the moral fork in the road, she only has her intentions. She cannot know even immediate outcomes with any certainty. Pragmatists might acknowledge this truth by honoring the intentions that motivate their own choices, but one of the many deficiencies of their outlook is that they cannot offer the same option to other moral agents facing other kinds of forks in the road, simply because they tout the contextualized nature of moral agency and can never be in someone else’s mind at the moment of moral choice. In practice, this limitation on intentionality vivifies Longfellow’s sour maxim: “We judge ourselves by what we feel capable of doing; others judge us by what we have already done.” Of course, the first test of any moral system’s worth is the principle of equity, and this bit of self-deception is the first violation of equity Pragmatism stands accused of. There are others.

Even should we leap that powerful psychological hurdle and insist on the equity that must be the foundational rock of morality, we face insurmountable difficulties on the issue of consequences. Frankly, I don’t see any solution other than self-deception for this one. As William James put it, “’The true’, to put it very briefly, is only the expedient in the way of our thinking, just as ‘the right’ is only the expedient in the way of our behaving. Expedient in almost any fashion; and expedient in the long run and on the whole, of course.” But here the problem for Pragmatism becomes coterminous with the failure of Utilitarianism. How can you possibly project consequences sufficient to make a moral choice? Of course, you see the immediate consequences loitering right there at the fork in the road, and their immediacy might make them loom larger in your moral valuation, just as egotism falsely exaggerates the importance of persons near us because they are near us. In truth, equity demands that the egotist value the stranger’s justice as much as his own or that due his loved ones. Such is the nature of equity. The Pragmatist faces a similar confusion, for the issues at the moment of choice might seem more important than the more distant consequences, many of which she could never project from her current moral vantage point. The Pragmatist John Dewey made a seeming virtue of setting up moral choice as the solving of immediate problems, the removal of “indeterminacy” from the present environment by choosing expedients that resolve or clarify the problem situation. But we all have experienced “solutions” that were well-intentioned but based on incomplete awareness of problems or ignorance of consequence; these might have seemed the best instrumental choice at the moment and yet made bad situations worse. To value the short term because it is the short term may very well be the definition of “short-sighted,” and it accounts for the unsavory connotations of words like “expedient.” The Utilitarian Jeremy Bentham sought to calculate in advance all the goods and bads a choice would produce. He called the goods “hedons” and the bads “dolors.” Such tender neologisms could not resolve the impossibility of forecasting the rippling effects of choice beyond the event horizon of prediction. This weakness of Utilitarianism was not repaired by Bentham’s successor, John Stuart Mill. The Pragmatists of the next generation resolved the issue simply by ignoring it and elevating expedience as the moral criterion for choosing! Their claim to practicality seems particularly inappropriate in light of their failure to resolve this simple issue of eventual consequence. Now I will grant that this curtain of ignorance falls over all moral systems. Deontological systems like Kant’s view this issue as so debilitating as to make only intentionality, rather than consequences, a factor in morality. You don’t have to go that far to see the moral hazard of valuing only immediate consequences, which is what Pragmatists pride themselves on. Indeed, that decision is the worst one available precisely because it limits questions of valuation to the most immediate of consequences without regard for the consistency that would result from applying rule-based moral judgments.

Second, “How do you resolve moral conflict?” The Pragmatist considers contextualization and personal experience indispensable determinants of moral choice. We can imagine that no two individuals face the same choice even in the same situation, for their prior experience and virtual circles will so mold the plastic circumstance they face as to produce different moral problems and options. The result is sure to be conflict and disagreement, not only on moral choice but even on identifying moral context. So in social and political dispute, whose framing of expediency and the moral solutions it offers should prevail? Indeed, the Pragmatists themselves disagreed on the answer to this one. The first Pragmatist, Charles Peirce, recognized the difficulty of resolving moral conflict and chose community standards as the arbiter of disagreement, but this only worsens the problem. First, should we assume that larger communities trump smaller ones, the same majoritarian solution proposed by Utilitarians? In such cases, what operates to protect the moral values of minorities, particularly since Pragmatism uses the same kind of value system for determining truth? And if individuals have to concede to community values, doesn’t that erase the contextualization and individualization that Pragmatists value above all else? It is telling that later Pragmatists did not embrace Peirce’s solution to the problem of moral conflict. William James and John Dewey both appealed to the wisdom of the scientific method to resolve controversy, but questions of moral goodness are precisely the kinds of questions that empiricism cannot answer.

I think Pragmatism is a very popular moral position today, in part because we lead such frantic lives and in part because it seems to fit postmodern value systems that forbid any real and objective moral structure. In truth, the coherentist virtual circle that postmodernism projects as the model of intellectual and moral evolution is so incoherent and self-contradictory that only a moral outlook as disjointed as moral Pragmatism could be made a part of it.

 Till next week,

S. Del

 

 

 


Economic Justice

In this entry, I hope to address a moral question that troubles me greatly: why is anyone else entitled to the product of my labor if they do not provide for themselves?

Last week’s entry highlighted the hypocrisy of free market advocates defending both the concept of equality of opportunity and inequality of outcome. The latter is guaranteed by the natural inequalities inherent in our natures and nurtures, but these inequalities doom any possibility of the equal starting line that is a fundamental feature of the American Dream and the great selling point of unfettered capitalism. Appeals to fairness are difficult to judge, for they rely on a comparative calculus of relative merit between individuals, but an appeal to the natural rights of citizens to have their economic needs met allows a calculation of the equality of kind necessary for all to meet their needs without demanding a crushing equality of degree that characterizes failed egalitarian experiments.

The sticking point in this argument is the question of responsibility. It is all very well to say that all citizens in a polity deserve the distributive justice that allows them the equality of kind necessary to fulfill their needs. But the question remains: who carries the responsibility to meet that need: the state, meaning the collective will of the citizenry, or the individual in need?

A core argument of natural rights theory is that all needs are also rights, and these include economic needs (for a fuller explanation of natural rights theory, please see my postings of November 3 and December 6, 2013). A polity that deprives citizens of the equality of kind that allows them to meet those needs is an unjust system violating their human rights. Most of us find a polity with a gross disparity between rich and poor, one in which some wallow in luxury while others starve or go homeless, morally objectionable. These cases are easy. It is a bit more difficult to pass judgment on those polities characterized by general scarcity, those in which few citizens are able to satisfy their needs for the goods of human life. The applicable moral principle in this situation calls for an equitable distribution of goods, so in the absence of distributive justice for all, equity for all must suffice in the spirit of an equality of kind, though it is worth mentioning that needs neither lessen nor lose their urgency no matter how long they go unsatisfied.

In the current climate of materialist excess, a generalized scarcity seems unlikely except in systems that are chronically underdeveloped, and in those cases the deprivation is likely to include a more comprehensive catalogue of unmet needs than merely economic ones. Nothing could be clearer than the moral imperative to repair this deficiency. But who effects the repairs? This macro scale of hardship runs parallel to a situation within a rich polity in which some citizens lack the equality of kind that justice demands while others enjoy a surfeit. The question suggested by these parallel cases can be stated as follows: are rich countries obligated to act positively to meet the economic needs of poor ones; are rich citizens in a developed country obligated to act positively to meet the economic needs of poor citizens? In simple terms, the question is this: who has the moral obligation to help those who lack the equality of kind they are entitled to?

To answer that question requires a little weed whacking. The question is a moral one. And its answer depends on the warrant one employs in her moral judgment. The three absolutist moral systems that command universal love and charity—Judaism, Islam, and Christianity—would provide a clear answer to the question. Their morality frames social justice issues as obligations of the haves to improve the lot of the have-nots. I have attempted in earlier posts to untangle the disjunction between universal charity and justice (on September 11 and December 10, 2013). To an outsider, it seems the command to love one’s neighbor as oneself invalidates any appeal to what that neighbor is personally due in justice. Islam specifies alms-giving as one of its five pillars, and both Christianity and Judaism regard a tithe, a percentage of income donated to the less fortunate, as just, perhaps as repayment for God’s blessings of prosperity. This seems something of a pragmatic compromise in the case of Christianity, for surely Jesus’ self-sacrifice, the model of Christian love, is rooted in the transparent injustice of giving up his own life for the sins of the world. In any case, the distance between a 10% tithe and the self-abnegation that is at the root of Christian love makes any conjoining of the terms “Christianity” and “social justice” problematic. I would prefer an answer to this question that appeals to moral universalism rather than moral absolutism because reason is more ubiquitous—and more consistent—than a particular religion’s traditions. So let us set aside the precepts of religion in the issue of economic justice in favor of giving to each her due, the very essence of justice and the foundation of virtue ethics.

This choice returns us to our conundrum though, for while religious charity and “social justice” may place a high value on active giving, justice insists on a scrupulous adherence to the principle of “what is due,” and that requires more than a bit of wisdom in the arena of distributive justice.

For example, opponents of the “death tax” insist upon its unfairness. But while they may argue that all taxation is theft, they have a stronger case against the income tax than against the inheritance tax. Their logic is that they are entitled to the rewards of their own labor. But it is only hubris and ingratitude, not to mention hypocrisy, to claim that the government that provided, at its own expense, the conditions for their capitalist success should receive nothing for its labors in providing those conditions. And if they wish to sustain their argument, they can hardly claim that their heirs are due what they have not labored for. I am not arguing for a confiscatory “death tax,” though one could be argued for on the grounds of economic justice. Nothing could better exemplify the principle of the “equal starting line” than an estate tax that redistributes opportunity to each generation. I ask only what capitalists themselves insist upon: that economic rewards should be commensurate with personal achievement, something the scions of the rich cannot claim as easily as their inheritance. But this is only one among innumerable examples of the thorniness of economic justice.

We may discern three rough categories here. The first identifies economic inequality of kind rooted in undeserved circumstance. The largest single cause of bankruptcy in the U.S. is medical expenses. Children suffer because of their parents’ drug use. Good workers are laid off. The moral resolution for this category is clear to all but the most stringent libertarians: here is a class of people who deserve economic assistance sufficient to produce an equality of kind with their fellow citizens.

While exploring this first category—and while enjoying what I assume would be wide agreement that economic assistance sufficient to guarantee an equality of kind is warranted—we should explore the problems implicit in the state, the aggregate will of its constituent citizenry, providing economic goods to some of its people rather than simply granting them room to procure them for themselves. Make no mistake. Providing for our economic needs is our own responsibility as adults. At least some of the reluctance we all feel about this issue derives from the unseemliness of treating adults as we would treat children: as people unable to provide for their own needs. The very young, the very ill, the very old, the very unlucky: for all of those who undeniably qualify for public assistance, whether it be Aid to Families with Dependent Children, Supplemental Nutrition Assistance, Medicare, Medicaid, or unemployment insurance, we recognize a deficiency in their freedom, one produced by the circumstances that rob them of the ability to do what all adults must do. In these cases, we recognize the moral imperative justice requires, but we also recognize that the costs of providing it involve more than money.

The costs are murkier when considering the second category, an inequality of kind caused in part by circumstance and in part by the responsible individual. What is the statute of limitations on youthful mistakes or poor investment decisions for a worker’s 401(k) account? Those filing Chapter 7 bankruptcies are “forgiven” their debts, and after ten years the bankruptcy drops from their credit reports, allowing them to reestablish good credit. We no longer throw scofflaws into debtors’ prisons. Only those too wealthy to notice their own poor economic decisions, the extremely cautious, or the very young can claim never to have made poor money choices. Nearly all economic assistance programs set criteria for qualification and attempt to motivate their beneficiaries to become independent, which is just as it should be and not merely for economic reasons. Because categories one and two are not exclusive, some recipients of public assistance suffer a mix of misfortunes, some circumstantial and others for which they are responsible. But who wishes to suffer the close scrutiny of Big Brother to apportion relative responsibility in these instances? Critics of government assistance seem all too ready to assume that much of it is undeserved. That is certainly true, but would they want a government paternalistic enough to ferret out proportional responsibility in cases where misfortune and carelessness are entangled? I suspect a government that intrusive would also earn their ire, so the question becomes whether the decision on public assistance errs on the side of parsimony or generosity, of giving too little or too much. Both would be unjust, but to unjustly deprive people of economic goods they need seems more damaging to me than to unjustly reward some whose hardship was partially their own doing.

Of course, some make their own hard bed. The final category, an inequality of kind caused by the responsible party, includes those who are in financial straits because of big and repeated poor decisions. Gamblers, addicts, criminals, dropouts: people who seem intent on placing their lives on the cliff’s edge also need all the goods that make us human and may illustrate their need for these things more obviously than those in the other categories. Those who play by the rules understandably resent the claims of those in the third category and perhaps the second. Why should they remediate the mistakes of those who refuse to look after their own best interests?

Strictly speaking, justice does not demand that others remedy our own errors, so the argument for economic justice cannot support supplying persons with the economic goods they should provide or should have provided for themselves, for this provision reduces beneficiaries to the status of minors dependent on adults for their sustenance and unjustly deprives those who act responsibly of what is their due. In practice these issues are so contextualized as to require the wisdom of Solomon—the moral issue depends on the degree to which the supplicant falls into one of the three categories listed above. That is the variable. The constant in all such cases is the equality of kind all need, and it cannot be negotiated or discounted. As contextualized as each issue is, these same kinds of situational appeals to the standard of justice are a constant feature of our judicial system. No federal regulations can guarantee justice in every situation, but just as justice is the unvarying aim in civil and criminal cases, so should it be the continual objective in the arena of distributive justice. But consider how clearly positive law defines justice in civil and criminal law and how contentious such definitions become in regard to economic goods!

I think the reason issues like these are often painfully opaque is our confusion about the nature of economic justice. Notions of fairness introduce unnecessary complexity into this dispute since fairness must arbitrate one’s due relative to another’s, and that is a quagmire. Justice concerns itself with the satisfaction of needs, equalities of kind, and leaves to fairness the arbitration of inequalities of degree. That distinction eases the calculus somewhat, though it still compels us to face squarely the injustice of providing economic goods to those whose poor choices left them in need. I suppose we could apply a Dickensian standard to such decisions as a community, never forgive lapses of personal judgment, and as a consequence allow people to live without the satisfaction of their human needs. This was the kind of thinking that moved Immanuel Kant to advocate for the death penalty: people deserve the consequences of choices freely made no matter how dire. But I am troubled by the easy use of “freely” in such cases. And here we face an issue that the criminal courts have confronted for half a century, one guaranteed by the human sciences’ blurring of the meaning of “freedom.” The courts still act as though persons are completely responsible for their criminal conduct, yet they face testimony from criminologists, sociologists, psychologists, and social workers that explicitly calls that freedom into doubt. We stand on the threshold of a whole new wave of neurological and genetic evidence that will further cloud what once was the cornerstone of criminal trials: that persons are responsible for their actions because they are free to choose otherwise. All of morality rests on this preferential freedom, and the ad hoc efforts to weigh limits on this freedom in order to apportion relative responsibility in terms of guilt and sentencing have hardly clarified the issue. The essence of all issues of justice is that persons get what they are due, but curtailed freedom means curtailed responsibility, so what is due in questions of both retributive and distributive justice becomes much more difficult to apportion. To what degree do we hold persons who have failed to exercise prudence in the satisfaction of their own needs responsible for that failure?

Ironically, it is political conservatives who carry the banner of Enlightenment liberalism in this quest, upholding the eighteenth century concept of individual responsibility. Political liberals are much more influenced by postmodern notions of group identity and the social creation of personal consciousness, and so they find it difficult to allocate the degree of freedom and therefore responsibility that influences behavior. The difference between the conservative view and the liberal one does not arise in regard to the first category I discussed earlier because both sides hold harmless the individual in need. Only doctrinaire libertarians would disagree that these persons should receive public assistance. The second category presents more gradations of personal responsibility and freedom, though in times of plenty, both sides might see the justice in public assistance simply because the hardships that require it are partly the result of fate, not fault. Both sides would disagree on the constituents and the justice involved in the third group, though. Conservatives seem to think most of  “the 47%,” those who receive some government assistance, are malingerers and frauds working the system, members of the third category who do not deserve a safety net. Am I wrong to think they would feel more generous to the other two groups? There seems nothing but anecdotal evidence on either side of this debate. Reliable statistics seem much harder to find than rancid stories, but certainly the qualifications for all kinds of assistance are designed to winnow out those who cheat the system.

The current level of distrust indicates they could use some improvement. One disturbing aspect of the Affordable Care Act is the requirement that emotional and mental disturbances be treated the same as physical ailments for purposes of insurance coverage, but when this allows a flood of applicants to claim Social Security Disability Benefits, can anyone blame conservatives for crying foul, considering how imprecise and changeable these human science diagnoses of such problems as depression are? On the other side, liberals rightly point the finger at corporate America’s opposition to raising the minimum wage to a livable amount. Many of the working poor require government assistance not because they don’t work hard but because their employers have transferred the responsibility for meeting their economic needs to the government.

My own preference in these questions is probably in line with that of most other citizens. I prefer to err on the side of generosity if our resources permit it, if only because I know how common it is to make mistakes in life, how hard to see their nature in the moment and how easy in retrospect. But even if the distinction between “kind” and “degree” were applied more strictly than I am comfortable with, it would bring much-needed clarity to questions of economic justice.

Enjoy your week.

S. Del

 

 

 


Income Inequality

A good moral system must be like a good hiker’s compass. If too finely calibrated, it will swing wildly every time she clambers over a rock. If too dampened, it will point sluggishly in one direction no matter how the trails fork and twist. Any moral system worth its salt should be flexible enough to apply to the range of human choice yet stern enough to direct action in the flux of experience. It must accept our often-pathetic psychological forays into self-delusion, wish fulfillment, and misperception and yet still force us to rise to our best selves. And it should be simple enough to actually use.

 A good test of our moral compass has been laid before us this year by commentators and politicians. The issue of income inequality allows us to clarify and test some common terms that almost immediately lead us into the tall weeds of moral dilemma. As is so often the case, the way we choose to define our terms will determine our position on the issue. The temptation to slant the definitions to reduce the moral ambiguities involved or to exaggerate the moral heft of our starting position is particularly strong in an issue like this because we know going in that no moral system will produce a univocal solution to the problem. And that is what makes it such a good moral litmus test.

The key term to be defined is inequality. It is a simple and brute judgment that such inequality exists in our culture as it has in all previous ones. On December 3, 2013, I argued that no egalitarian economic system has been successful or could be because individuals are so clearly unequal in their talents and in the cultivation of those talents. I referenced several historical efforts that invariably produced a reactionary return to inequality, concluding that only the most onerous expenditure of state power could produce real equality, and that effort always proves so crippling to liberty as to provoke the inevitable backlash. So let us stipulate that if we regard income inequality as a moral problem, we should not regard income equality as a solution. A moral system that offers impossible solutions to difficult problems is unworthy of our attention, though the historical examples cited in my earlier post indicate disagreement on that point.

So if the solution to the problem of income inequality is not income equality, what is it? Let us swing in the opposite direction and ask the obvious question. What makes income inequality a problem? If it is a problem, what is the nature of the problem? And if income equality, which seems the obvious remedy, is not the solution, what is?

 In a free enterprise system, neither equality of opportunity nor equality of outcome would exist. The failure of the former would be guaranteed by the lack of the latter. Opportunity derives from maximal development of talent, but the conditions of that development are never distributed evenly in any society. Why? Because prior outcomes advantage some over others, particularly in a generational view. One only need compare the products of public versus private elementary and secondary schooling in this country or the potential for success that children of poverty face in competition with their wealthier peers.  It is certainly possible to argue either that such inequality is inevitable and therefore beyond moral judgment or that it is a kind of trans-generational justice: the children properly benefit from the worthiness of their parents. The problem is that the latter argument violates one of the central premises of the free enterprise system that produces the inequality: that individuals are entitled to equal opportunity and deserve nothing more or less than the full rewards of their labors. And this judgment applies equally to the children of poverty and privilege. When confronted with this truth and the implied hypocrisy that accompanies it, defenders of free enterprise fall back on the argument that such inequalities are inevitable and therefore beyond the power of morality to address. And here they make a stronger case. For nothing is more obviously true than natural inequality and even the most tender-hearted liberals must face the futility of attempting to achieve an egalitarian condition.

So they too retreat to a fall-back position, arguing that our economy, while necessarily unequal, should at least be structured to produce more fairness. This is the position that takes aim at the growing disparity between the 1% and the 99%, between CEOs and their employees, and so on. But the appeal to fairness is fraught with ambiguity. The problem is partially due to the meaning of the term, for “fairness” always implies a comparison between two things or persons. Doesn’t it mean “distributed according to relative merit”? I hear absurd claims by executives that they actually work 204 times harder than their median workers, which justifies the current CEO/worker pay disparity. These guys have obviously never put on a roof or poured a highway. They would be on firmer ground if they insisted that they have 204 times the responsibility of their employees, that their pay is warranted by their crucial decisions and the stakes involved, though perhaps that argument would fail in regard to police officers, nurses, and firemen. But put that aside for a moment and examine on what grounds one determines “fair pay.” Is it as simple as what the market decides? Is economic fairness synonymous with laissez-faire? And if it isn’t, what determines fairness?

 We can abjure crony capitalism and big money’s influence over our tax codes and laws as inimical to democracy on the same grounds as those stated above: that defenders of capitalism cannot defend equality of opportunity and this kind of rigging the system without hypocrisy. Nothing could be clearer than the patent unfairness of the kinds of outcomes we see around us, outcomes that guarantee inequalities of opportunity for large swaths of the population. Let us be clear about the problem: even if the outcomes are the products of fair competition, they destroy the putative individual equality of opportunity that is a pillar of capitalism. This leaves the champions of current capitalism with four options: defend, deny, deflect, or defuse.

 They can defend the fairness of the current system by insisting that the appraisal of the unfairness of the system is an error, that it distributes economic rewards fairly as presently structured, that disparities in income and wealth are reflections of relative merit implied by the definition of “fairness.” In a working system, they say, talent and work ethic will produce the kinds of disparities we see around us. This was the defiant battle cry of Wall Street executives who increased their wealth after the crash of 2008. They consider themselves agile risk takers who take profit where they find it just as capitalism dictates. In this defense, the narrative is that we still have an economy open to the American dream, that anyone can still make it big, that economic mobility is still possible, etc. In order for this argument to hold, they must also deny that economic inequality is an insurmountable obstacle to self-advancement for those at the bottom, and they can always find some tech genius or mogul who pulled himself up by his own bootstraps. Next, they can deflect criticism by pointing to other unfair practices in the workplace, especially deformations produced by the federal government. This deflection may work on its own to explain why some are not succeeding in our economy: witness the focus on the deficit and government spending. Or it may combine with a spirited defense of free enterprise: the government safety net not only feeds the deficit but robs lower income workers of the incentive to enter the market. If they find no willing takers for these approaches, the final resort will be to attempt to defuse their critics by acknowledging the inequalities in the present system but insisting that these are the inevitable by-products of the system we operate by, so issues of fairness cannot arise. One might as well complain that baldness is unfair, they say.

Some truth adheres to all of these arguments, and their interrelation and breadth make them difficult to isolate or refute. But all is not lost to the liberal cause, for the great bugbear of our age is less moral evil—something we find difficult to define—than hypocrisy, and the argument for fairness need look no further to prove its point. Capitalism does not set up a system in which handicaps to opportunity may be overcome only by those with extraordinary talents or persistence. It does not seek to award the fruits of enterprise according to some gender or racial bias. It does not idealize an economic race in which some competitors must begin far behind others. Its premise is equality of opportunity: an equal shot at the American dream. To accept anything less is to participate in a system that is less than free enterprise promises, and, in actuality, one that is less than free. The irreducible prerequisite for capitalism is the equal starting line, and in pursuit of that goal held sacred by its defenders, a great deal of the stain of inequality can be bleached out of the system.

But by no means all. The issue then becomes a question of how much is enough. Ever skilled in the logical fallacy of post hoc ergo propter hoc, defenders of free markets will charge that the interference in the economy necessary to even out the starting line will distort what should be a free market, never mind that these efforts are themselves attempts to repair a far greater distortion. And they will level a more accurate charge: that these efforts edge us toward the slippery slope of total equality, and we know where that leads.

All of this can be avoided by a small but essential change of emphasis. Or rather two changes. First, in the face of the distortive effects on equality of opportunity that unfettered free enterprise produces, why accept the premise that the free market system itself should ever be the ideal? Now defenders of unfettered capitalism will argue that we really have no other choice because interference with the system will always distort the invisible hand of supply and demand, that those who would interfere desire nothing less than the egalitarianism that ends in communism, and so the inequalities of opportunity produced by capitalism are simply the cost of doing business. Of course, they aren’t the ones paying that cost. But their argument is retrograde and has been disproven by the history of the last century. From the Sixteenth Amendment, proposed in 1909 and ratified in 1913, to the estate tax of 1916, we have a venerable history of tinkering with the markets to increase equality of opportunity. For nearly a century, it has been a bit silly to call the United States a free market economy. Is it merely a coincidence that this same century has seen our mixed-market system become the world’s dynamo? The second change requires us to go beyond terms like “fairness” and “income inequality.” We should unabashedly claim the moral high ground in this issue by framing it as one of economic justice.

 I am not being platitudinous or elevated here. I am arguing that using “fairness” as a basis for addressing this issue only takes the argument so far before running down the slippery slope to communism.

Here’s why. It is all very well to demand economic fairness of opportunity, provided you embrace the premises of unfettered capitalism. The problem in not embracing them is that there seems to be no place to stop demanding “economic equality” short of the failed experiment first proposed by Karl Marx. And indeed, defenders of the flawed system we now employ exploit that slippery slope in establishing their own uncompromising demands. If you want to enter that arena, you may argue for equality of opportunity, but even that is a blunt instrument because free market zealots can respond that inequality is simply the cost of doing business and ask what better system you can offer.

 So change the terms of the debate. The key is to define terms consistently. If “justice” means “to give to each her due,” then begin the economic debate by asking, “What economic goods are due to everyone?” The answer must be the satisfaction of their economic needs, those goods and services that everyone needs to live a full human existence (These topics were developed more fully in my entries of November 6, 13, 20, 2013.) The state, representing the combined interests of its citizens, should regulate markets only to the extent of establishing conditions that allow all citizens to procure the satisfaction of their economic needs. It is each citizen’s responsibility to complete that effort on her own. The establishment of this minimum baseline and the conditions necessary for acquiring its components are the obligations of the state in regard to the economy, along with providing these components for those who are incapable of acquiring them on their own. This is a true equality of opportunity rather than the total equality that capitalists properly dismiss as a utopian dream. So long as all within the polity have these basic human needs met, differences in affluence are to be celebrated as fair acquisitions. The necessary terms to distinguish these differences are kind and degree.

 So long as all members of a polity have an equality of kind that fulfills their economic requirements, inequalities of degree are to be celebrated. This is the equality Jefferson referenced in the Declaration of Independence: all are created equally human and equally entitled to the environment that facilitates the fulfillment of their needs. The same distinction holds in regard to political power. We all have equal rights (a right is merely the recognition of a human need that justice [what we are due] guarantees) to participate in our political system and be judged in court. This is our political equality of kind which justice requires, yet officeholders clearly have more political power than ordinary citizens and so have the inequality of degree that justice also requires as a function of their office. When we seek out the kind and degree distinction, we see it everywhere. It is exemplified by the old joke: “What do you call the physician who graduated at the bottom of his med school class? Doctor.” Certainly, the public defender and the limousine lawyer may have different degrees of competency, but their licensure is identical. Degree and kind are everywhere.

It is probable that this defense of our right to what we need will be seen by capitalists as being on that old commie slippery slope, but the distinction of kind and degree should establish the proper brake. It is perfectly appropriate that schooling differ by wealth so long as all schools fulfill their intended purpose: to deliver the kind of education adequate to adult responsibilities. Fancy private schools are just, so long as public schools meet this need (right) for adequate education. When public schools decline to the point where illiterate students are allowed to graduate, they no longer provide the equality of kind necessary to validate their existence, and the polity they serve is just in demanding that they be improved to adequacy. It is not just in demanding that private schools degrade their product to some similar level or sacrifice an adequate education for their own students. That would be fair but not just, because it would judge the merit of the better school only in relation to that of the poorer one. Why do we require and provide education through high school and not through college? What determines qualifications for the Supplemental Nutrition Assistance Program or the length of time the unemployed may draw assistance? What determines the top nominal tax rate? Why impose an estate tax?

 If one ignores the distinction between kind and degree and attempts to negotiate these issues on grounds of absolute equality or fairness, which always concerns relative distributions according to merit, she will end in confusion or moral ambiguity. In terms of economic justice, the kind and degree distinction is fundamental.

 But this is not to call it a panacea. Even if we begin with the kind/degree distinction, we still face difficult issues of personal responsibility and fairness. These will be the subjects of next week’s entry.


The Latest Creationism Debate

If you missed the creationism/evolution debate on February 4th between Ken Ham of the Creation Museum in Kentucky and Bill Nye, the Science Guy, you might want to find it online. I thought it inspiring to see two intelligent and obviously sincere men engage in civil dispute on such an important question. Their arguments and evidence were carefully considered and shed some light on the intractability of the issue and its connection to larger knowledge concerns.

In brief, Mr. Ham defended the literalist Biblical interpretation that has come to be called creation science. His efforts focused on the hypotheses that the earth is only six thousand years old, that Noah’s flood was a worldwide event, and other factual claims, all in defense of the Protestant Bible’s authority. Mr. Nye chose not to attack the Bible but to defend the findings of empirical scientists that warrant the truth claim that the earth is far older. I found it surprising that Mr. Ham was willing to engage the empirical warrant on its own terms, generally conceding the truth of the evidence. He argued that the differences between his view and Mr. Nye’s are purely matters of interpretation. His conclusion was that scientists practice their own sort of belief system, one that inclines them toward a mistaken atheistic construction of events. This charge co-opted Mr. Nye’s major point of attack, which was that knowledge and belief should be kept separate. I think it is worthwhile to take Mr. Ham’s charge seriously and investigate it and his larger argument from the standpoint of justification.

 It is a venerable accusation. The position that science as a discipline cannot be proved by the methods of science but instead must be warranted on grounds of belief is an old one; it is frequently advanced by postmodernists who seek to take science down a notch, to show it to be a flawed human endeavor guided by values rather than evidence. And it is certainly true that values like those prizing the scientific method cannot be derived empirically. No values can, for science can never prescribe the good. It must always be handed the ends before it can investigate the best means to achieve them. It is a valid question to what extent that limitation proves a problem for the empirical enterprise, how distortive are the lenses that the scientific community uses to magnify its objects of study, how much of its work is construction rather than discovery of reality. In its typical iteration, the charge views science as a top-down, paradigmatic enterprise in which values color findings rather than the bottom-up, evidentiary pyramid taught to generations of science students.  

 Two points are advanced by those who, like Mr. Ham, wish to disabuse science of its self-importance.

The first is that even when followed rigorously, the scientific enterprise does not begin with evidence but with interpretation. This charge was leveled by Thomas Kuhn in The Structure of Scientific Revolutions in 1962. The scientist does not begin her investigations by looking at evidence. She begins by choosing what evidence to look for, by discriminating among all the sense data available to her to find those worth investigating. But that act is partisan, for she must have some sense of the relative value of the evidence available to decide what to study, and that valuation necessarily must precede her observation. Critics like Mr. Ham charge that this preconception is a kind of belief. It directs observation rather than derives from it. Kuhn charged that the scientific community in each discipline chooses what is worth looking at, thereby polluting the vaunted sense of objectivity scientists claim as the foundation of their work. Mr. Ham did not make this charge per se, though he might have. I think he knew that making it would also attack the evidence he tried to appropriate for creationism, for the accusation obviously calls into question the evidentiary basis of the scientific method, and he argues that the evidence is not in dispute. I think the top-down argument against scientific objectivity is a poor one. Kuhn’s charge that the scientific community is like any other culture that shapes views is certainly true, and the indoctrination the acolytes of science undergo in their professional education bears unsettling similarity to religious training. But the charge that the professional community absorbs the values guiding its research practice, including its theoretical basis, in the same way is insulting to mature practitioners whose observational and experimental practice is a constant quest for falsification and implication. If Mr. Ham’s and Dr. Kuhn’s charge were true, scientific paradigms would undergo revolution as rarely and with as little consensual agreement as religious heresy, but that is not what happens. In religious reformation, new religions rise while old ones survive, leading to schism and bloodshed. I have written at length in these blogs of the problem of religious authority. By Dr. Kuhn’s own argument in The Structure of Scientific Revolutions, paradigm shifts in the sciences involve a generational shift of the entire community of practitioners, who then adopt the new paradigm en masse. They do so because the evidentiary basis better supports the new paradigm than the old one, and it is this evidentiary basis that guides their judgments about which evidence “counts” in their investigations. So Mr. Ham’s charge is partially true. The “superstructure” of science—its laws, theories, and paradigms—is not in itself empirically verifiable. But he couldn’t be more wrong in equating its foundation in logic with his own in authority. Laws, theories, and paradigms begin with judgments rather than beliefs, and that makes all the difference (please see my post of January 20, 2014, for more on this crucial point).

Mr. Ham’s second point of attack seemed senseless to me. He claimed that science focuses exclusively on the natural world, rejecting out of hand supernatural causes for natural events. I have dealt with this charge in previous posts, especially on July 16 and September 11, 2013. It strikes me as such an absurd judgment as to make me doubt his understanding of the scientific enterprise in toto. What else are scientists to do but study phenomena? If confronted with a true miracle, wouldn’t you expect scientists to be rendered mute or to seek out natural causes, even far-fetched ones? Neither their methodology nor the laws, theories, and paradigms that derive from it could offer a response to the supernatural any more than we humans could respond to an event in a sixth dimension. Perhaps Mr. Ham was bemoaning science’s failure to even entertain the possibility of supernatural causation for the laws of nature themselves, a kind of meta-science that corresponds to his argument that the laws of reasoning are of divine origin. This is a subject worth exploring (please see my post of December 29, 2013, for more on that issue), but by theologians or philosophers rather than scientists. To fault science, whose methods must be perceptual, for avoiding the imperceptible seems worse than mistaken. Perhaps Mr. Ham’s intent was to condemn the scientism so rampant in our culture, the view that the methods of science are not only the royal road but the only road to truth. If so, good on him, but one need hardly make the leap to authority on that account. As in the response to his earlier charge, it is important to note that there are more reliable correspondence roads to truth, ones that avoid both the weaknesses of authority (see September 18, 2013) and the limitations of empirical science (July 22, 2013).

The deeper accusation Mr. Ham made in the debate went unanswered by Mr. Nye, who perhaps is too devoted a scientist to credit it. It was a magnification of his earlier charge about the subjectivism of initial scientific observation, only on a vast scale: that the entire “belief system” of empirical science and the secular view of reality it posits are equivalent to the belief system of Biblical inerrancy. It seems to me there are two ways to respond to this charge: the logical and the psychological. Both approaches refute Mr. Ham’s argument.

The logical position references a host of correspondence justifications, the most formidable of which is the vast difference between authority and empirical warrants. When authority is undisputed, it certainly seems rock-solid. I investigated this issue from several aspects in earlier posts, the most pertinent of which are those of September 18 and December 29, 2013. But Mr. Ham’s authority is certainly not undisputed, differing from the Roman Catholic variant of Christianity and from non-Christian doctrines as well as from Mr. Nye’s agnosticism. Authority challenged is authority dissolved in correspondence truth issues, for authority’s warrant is the trust of its beneficiary, and Mr. Nye is unlikely to trust Mr. Ham’s Book of Genesis. When that book conflicts with empirical evidence, Mr. Ham attempts reconciliation on the Bible’s terms while still trying to respect the empirical evidence and the method that produced it. But despite his mental gymnastics, to embrace the scientific enterprise when its methodology accords with one’s beliefs and to reject it when it doesn’t violates even the weakest of coherence truth justifications, the principle of non-contradiction. To avoid this problem, Mr. Ham posits a false distinction between the explanatory modes of science when applied to the present and to the past, but this does nothing to bolster his case other than to confuse Mr. Nye and the rest of us who see no distinction between the kind of observation a biologist does with living tissue and a geologist with ancient rock. No difference exists between “observational” and “historical” science. Mr. Ham says we cannot observe the past, so scientists build causal explanations out of the whole cloth of their atheist belief system. Then he proceeds to show a slide of the layering of the Grand Canyon and another of distant nebulae. But what are we seeing if not the geologic and astronomic past? And what are “historical” scientists interpreting but present-day observational evidence laid down in the past? It doesn’t help his case that Mr. Ham references errors in On the Origin of Species as one example of “historical science” error and present-day theories of evolutionary catastrophism as another. So when he points out the errors of “historical science,” does he mean the mistakes earlier proto-scientists made in their theories, such as Lamarckian evolution or Aristotle’s theories of spontaneous generation, or current theories about the past? No one but the most die-hard Luddite can reject science today, and Mr. Ham’s attempts to split hairs to preserve his reliance on Biblical inerrancy while also accepting the blessings of the scientific enterprise seem meant more to bolster his faith than to challenge the “beliefs” of science. The effort smacks of either confusion or bad faith.

From a psychological standpoint, it is worthwhile to investigate his accusation from a less flawed position than his own. Is Mr. Nye’s faith in science in any way like Mr. Ham’s faith in Genesis? If the answer is even marginally positive, we have a knowledge problem, for the judgments we form about our truth claims require a level of disinterest that no belief can aspire to, since beliefs by definition signal an attachment, affection, or personal stake that is inimical to the proper exercise of judgment. We face an old issue here, one first raised by Plato, who saw knowledge as “justified true belief.” Note the incompatibility of the words used in Plato’s definition. This introduces the psychological side of the question. No one watching the debate can deny that Mr. Nye is a passionate believer in the scientific process, and on several occasions both he and Mr. Ham explicitly compared their attachments, the former to empiricism, the latter to revealed Christianity. Wouldn’t that acknowledgement lend credence to Mr. Ham’s charge that science is a belief system as religion is? And wouldn’t that equivalency tarnish the putative objectivity of science? Wouldn’t Mr. Nye admit that he believes in science, loves it even? And wouldn’t that be enough to verify Mr. Ham’s charge?

Well, maybe. Certainly, those guilty of scientism—and Mr. Nye may very well be among them—would earn Mr. Ham’s accusation, for scientism insists that the methods of science are not only the best path to truth but the only path. And that seems contradicted both by the applicability of other justifications to materialist questions science cannot yet answer and by the success of these less certain modes of warrant when applied to experiences that can never meet science’s standards for isolation of variables, repeatability of results, predictability, and quantification. If we dismiss the adherents of scientism and simply consider the practitioners in the field, can we equate their beliefs with religious faith? I would argue that we cannot. The methods of science are tools that yield pragmatic results. The concordance of paradigms across scientific disciplines, the surprising revolutions within these disciplines wrought from the work of practitioners, and the profusion of technology produced as a result of their work all attest to the pragmatic success of the scientific method. Now you could make the same claim about Mr. Ham’s beliefs. I recently came across exactly this point at the climax of the novel Life of Pi, and the literature is rife with testimonials to the health benefits of religious belief and so on. Mr. Nye would willingly confess that science is a tool for truth in his mind, one he would happily abandon if it proved misleading or false. But religionists like Mr. Ham would never say that their faith is merely a means to some pragmatic end. They too regard their belief as a tool for truth, but their trust in its authority would preclude their abandoning it in favor of some other warrant. Rather, they would regard anomaly as their own error and seek to reconcile it with their belief. I have said previously that the conversion experience of believers reorients their entire virtual circle, meaning their psychological investment is so complete as to discourage any entry of anomaly. Mr. Ham admitted as much when he claimed no evidence would diminish his faith. When asked the same question, Mr. Nye responded that just one anomaly would diminish his. It seems clear that the beliefs each professes are fundamentally different. Mr. Nye justifies and then believes. Mr. Ham does the opposite. And that makes all the difference.

One other psychological note struck me about the debate. I found each man’s sense of wonder to be inspiring. Mr. Ham repeatedly indicated that he felt awe in what he knows and believes (I am not sure that he would be able to separate these two aspects of truth, though I would argue he should if he wants to understand the limits of his correspondence knowledge). He had no trouble leaping from what he regards as the true to the good and the beautiful. Many of his declarations began with a scientific claim and then proceeded smoothly to a theological explanation for the material reality he observes. It was clear that he was in awe of the metaphysical explanation for the physical reality he studies. He was stunned by the beauty of the answers the Book of Genesis provides, answers providing harmony and unity and order to all aspects of reality. Mr. Nye also was clearly moved by the beauty of what he observed, but it was the questions that inspired him rather than the answers. He frequently responded to the questions of his opponent by saying, “I don’t know” or “That is a mystery.” Mr. Ham never responded that way and seemed to regard Mr. Nye’s ignorance as a comparative weakness. It seems the two men saw not only truth but also its relation to belief differently, yet both also saw great beauty in material reality. I couldn’t help but be reminded of Karl Popper’s comments on the centrality of anomaly to our claims to knowledge, comments about the human sciences that I referenced last week and the week before. A high tolerance for anomaly seems to mark the mind eager for answers. A low tolerance indicates one comfortable in embracing questions.

 Check out the debate online.

 S. Del

 

 
