Prejudice and Privilege

I really dislike looking into matters of race, but you don't have to scratch any American problem very hard to find race eating away just under the surface, complicating solutions and, worse, analysis. It is this subterranean process, this rotting under the polished surface of our ideals, that has given rise to the relatively new popularity of the notion of privilege as racism. I wish to examine this view of racism in relation to the traditional notion of prejudice.

Their etymology indicates a major difference with major implications. While "prejudice" is rooted in the active voice (it literally means "to prejudge," presumably without sufficient evidence), "privilege" derives, via French, from Latin roots meaning "private" and "law." To be granted a privilege is passive, not active. We may assume the favor was sought, but its reception was not within the recipient's power to procure, in contrast to the power we exercise when we actively apply our judgment to show prejudice.

Now the use of these two terms today tracks their etymology, and this distinction is hugely important in the current racial climate for two different reasons, both of which make remedying racism more difficult. First, the concept of personal responsibility is a bedrock moral principle, and that is difficult to connect to privilege as racism. Consequently, we tend to underestimate the degree to which our actions are determined by prior conditions (please see my posting of March 23, 2014) and overestimate our moral freedom in present ones, thus leading to the second problem: we consider our accomplishments to be entirely our own and resist crediting others for even a part of our success.

Contrast this haze of shared responsibility to active prejudice. An act of prejudice is an error committed by the thinker. It is within her power to remedy. It is not only a cultural offense but also a rational one. It connotes sloppy thinking even when the prejudice is positive. For unless a class of things or persons is definitionally exclusive ("All bachelors are unmarried"), one may not reasonably apply even accurate group classifications to individuals, not to mention the difficulties inherent in forming those classifications (a danger postmodernists blithely ignore for some reason; please see post of March 30, 2014). But to receive a privilege is to be given a gift over which the recipient has no control. In the sense social critics use the word, white privilege describes a thing unearned, an accident of birth, a booster rocket for economic and social ascent denied to others. The term is thus not only passive but also inevitably comparative. From its beginning, the private law benefiting some was implicitly to be contrasted with the public law relatively penalizing others.

So the use of the word privilege changes the nature of a charge of racism. First, the accused may have done nothing, in contrast to the implicit irrationality of a prejudicial judgment. She may bear no personal moral responsibility. She is merely the beneficiary of an unearned advantage that she may have neither asked for nor been aware of. Second, the charge insinuates that any advantage thus conferred must also penalize others. Finally, we face the most relevant issue, one that derives entirely from her degree of moral responsibility: what is she supposed to do about it? Let us attempt some calibration.

First, the power of charging prejudice is inextricably linked to the rational error it commits. Racism is morally offensive in part because it is stupid, and holding the moral high ground in the discussion cannot be separated from intellectual superiority. Racists are ignorant, uneducated, unable to grasp nuance. They make sweeping generalizations that are wildly inaccurate and then attempt to paint individuals with them. A long tradition of seeking empirical or rational means to justify racist judgments, from pseudo-Aristotle to Thomas Jefferson to Charles Murray, attempts to invalidate the association between racism and ignorance, but its existence only reinforces the connection, for all such attempts are now regarded with disdain by intellectuals unwilling to sever the link or to take any such effort seriously. This intimate connection cannot be carried over to the new racism of privilege, for privilege reveals no requisite flaws in its recipient whatsoever. Freedom riders in the 1960s were as guilty as the dog handlers who attacked them if both were white. Indeed, the notion of privilege immediately conjures up a consequential guilt that might have motivated the former and enraged the latter. But should those risking their lives to end racism be charged with racism because they receive benefits from a system they actively oppose? Is such a charge warranted? And must white privilege disadvantage blacks?

We must assume that such privilege derives not from the absolute advantage conferred by being white but from the relative disadvantage the privilege implies for being black. We will call this kind of privilege disparative privilege. Now this relationship requires some investigation on three counts: first, what constitutes the privilege; second, how does it redound to the disadvantage of blacks; and third, must any conception of privilege be built upon comparative relationships?

Conservative whites wish to dismiss the whole notion of white advantage– and with it the notion of white guilt– by insisting that whatever advantage they received was earned rather than given, though they wish to be vague about whether they earned it or their forebears did. At most, they point to cultural values that encourage their success: family structure, emphasis on education, and work ethic. They rightly accentuate the self-discipline their success required, the acceptance of deferred gratification and commitment. But the essence of privilege as racism is precisely the charge that advantages were not earned, that they accompanied skin color as a gift. So we step into a minefield of moral ambiguity, for I cannot be responsible for a harm I did not commit, nor can I be asked to feel guilt for a benefit I earned. To be fair, these character traits are not confined or exclusive to "white culture," and to say they are is simply prejudice. To claim that being black automatically results in cultural disadvantage in regard to these prerequisites for success is a claim I can't imagine any unbiased cultural observer would wish to make. Nor are these automatic socioeconomic markers, for lazy scions of wealthy families and ungrateful second-generation Americans are clichés that belie any guaranteed conferral of privilege. So much for any sweeping comparisons. But let's face it. The conditions for success are certainly better established in some socioeconomic environments than in others, and of the multitudinous strands in the tapestry of any success story, many are woven without effort, simply by the expectations of others that form our horizon of possibilities. Still, it is a gross act of prejudice to see white privilege as an unearned gift which white America takes for granted… at least until one compares it to being black in America. Only by comparison does the generalization hold undeniable truth. White guilt derives not from privilege but from prejudice as surely as the tail of a coin implies a head. Compared to the lot of blacks in this country– not only in the past but in the present– every white person now living was born on third base, and whatever her positive efforts, all were built to a degree upon a scaffold of exploitation. There is no denying that any comparison of white and black privilege will lead to one conclusion: whites still reap unearned privilege and blacks unearned privation because of skin color, and this legacy of active prejudice is a moral stain. White persons are like the boss's son who may start in the mail room and may work hard but who will never know how much of his success is due to the accident of birth nor to what degree that success has kept less lucky fellow workers from rising as they might have in an equal race. Who can doubt that the future occupant of the corner office will pass on the fruits of his success to his children just as his future underlings will hand off their lesser luck to theirs?

But note in the example that while the goods are absolute, the harms are all relative. Let us try to think of privilege in an absolute sense. Considered as unearned gifts, privileges are everywhere: we did not earn the social order that benefits us, the political system that frees and equalizes us, the economic system that enriches us, the family that nurtures us, the knowledge that guides us, the beliefs that give meaning to our lives, and on and on. While it makes sense for us to be grateful for these blessings, I can think of no reason we should feel guilty for them. In this large sense, anyone who lives in conditions allowing her to meet her human needs in the world is privileged, for it is by the satisfaction of our needs that our lives are fulfilled, and the conditions for that satisfaction have been well established thanks to the conventions of civilized life (for more on this, please see my post of November 13, 2013). If we have a loving family, dear friends, education, civil order, productive work, and the like, it seems to me we have the goods we are by nature designed to have, and the moral response to that is satisfaction and gratitude, not guilt. What is more, these are limitless goods. There are more than enough of these blessings to go around, and my having a sufficiency in a working civil society in no way limits your access. We do not compete for all privileges.

But we do compete for those blessings that are limited, and then we are forced to face both the universality of our needs and the pain their absence produces. The most impoverished citizen in this country is privileged compared to the 84% of Liberians living on less than one dollar per day, and the most unjust political jurisdiction here is utopia compared to life in Syria. If comparative privilege imposes a consequential guilt, then we all have a moral duty to ameliorate the living conditions of the poor regardless of their location. But do we? Let us put religious injunctions aside for the moment, though they impose their own moral duties, so that we may confront the central question that a relativistic concept of white privilege and white guilt implies: does relative economic and political privilege inevitably impose moral obligation? Let us refine the question: does privilege impose an obligation even when disparity is not a consequence of the privilege?

Let me acknowledge and laud the sentiment evoked in kind-hearted persons by seeing suffering. We wish to make it better. But a clear-eyed view of this natural desire also compels us to see how conditioned it is by our degree of privilege, as exemplified by the coat-drives-for-pets mentality of some wealthy enclaves. And just as any suffering tugs at the heartstrings, another nearly universal response is simply to turn away. What we happen to see disturbs us, so we simply refuse to see it so as not to be disturbed. This accounts, I think, for at least a part of the gated community mentality that seems so prevalent in rich neighborhoods. The moral principle of ought-implies-can– that moral obligation follows only from the ability to act– comes into play here as well. An indiscriminate and wholesale equality of goods would be impossible to conceive, much less to achieve (please see my posting of December 3, 2013 for more). The Soviet Union was a case study of that failure. Despite the efforts of thinkers like John Stuart Mill to objectify such sentiments and thereby impart to them some moral valuation– an effort that collapsed in a thicket of evaluating pleasures and pains– we would do well to remember that justice does not require an equality of degree of all perceived goods. For we perceive many things as good and value them differently, and there are too few Maseratis to go around. Rather, it is the sufficient distribution of goods necessary to meet our human needs that is required, otherwise called an equality of kind. This operation is a profoundly rational one, the tugging at our heartstrings notwithstanding. We are left, then, with the suspicion that some kinds of privilege and guilt are not handmaidens of wellbeing and some are, and that some wellbeing is earned rather than gifted. So why should guilt be intertwined with privilege like snakes on a caduceus?

But that is just the issue, isn't it? For in our times and in America, and for economic and political privilege especially, the relationship is always partly causal. Some linkages are as thick as wisteria vines squeezing the columns of antebellum mansions. Some family wealth, wealth that produced all the goods it is capable of buying for succeeding generations, was built on the direct foundation of the importation and perpetuation of slavery. Other privileges are less easily traced. It is said the target of the fourth hijacked plane on September 11, 2001 was either the White House or the Capitol. What kind of loss to our national pride would that have inflicted? Both edifices were built by slaves. When a group of people is disadvantaged by color and so denied an equal wage or vote or voice in social policy sufficient to deprive them of the satisfaction of their needs, someone will reap the benefits, and the misalignment of power being what it is, the odds are that someone already has the sufficiency that justice requires. To the degree that white America has harvested this kind of white privilege, it deserves to feel white guilt. And so long as the privilege is maintained, so too is the guilt, and so too is the moral obligation to correct the moral harm. Reparations for slavery would not clear the slate, for the vestiges of racism would continue, producing continuing disparity and white privilege.

But just to be clear, should I feel guilty for having parents who valued education and instilled a work ethic because others were less fortunate in their choice of parents? White guilt must be measured by the racism that relatively advantages one group over another, not the absolute goods consonant with universal human needs that some received and others did not. Social scientists may attempt to lay all cultural differences at the feet of some ancestral economic exploitation, but such an indictment seems too sweeping to be justified by science and too Marxist to be embraced by interpreters, though it is consistent with the postmodern emphasis on culture as the creator of identity (please see posting of July 30, 2013 for more). If a narrower difference maps the battleground of white privilege and white guilt, then fight it there. But let liberals leave out injunctions of religious duty, emotivist objections to inequalities of degree, and claims that all privilege imposes guilt. Let conservatives put away their blind pride in winning a rigged game and their contempt for those losing it. While privilege may broaden our view, it shouldn't change our focus. Prejudice is still the villain of the piece, still a moral obloquy and an intellectual failure. So long as its effects ripple through the culture, white guilt is its proper consequence.

That conclusion applies also to other kinds of privilege. To the degree that they were unearned benefits gained at the cost of sexism, colonialism, imperialism, and the like, we might expect to see coinages like male privilege, first-world privilege, and heterosexual privilege, each with its attendant train of guilt. And to the degree that these disparities still hinder exploited people from satisfying their needs while easing the lives of their exploiters, active amelioration is the only morally justifiable response.

So what is active amelioration? What is to be done? Since justice is defined as "to each her due," it seems clear that justice demands that those unjustly advantaged should be those who make reparations after, of course, performing the required triage. For if all vestiges of racism were magically removed from our society today, we would still be left with the inequities it has long produced, both privilege and privation. Repairing these inequities is not so difficult as egalitarians might imagine, since justice requires not only an equality of kind but also the inequality of degree that the exercise of our preferential freedom produces (for more on this, please see my post of November 20, 2013). The shame has never been that some have an excess. Rather, it is that the prejudice that helped produce the excess has also produced a deficiency for its victims. It is daunting to accept that the same arguments that produce white guilt hold sway in regard to other kinds of privilege, leading to other moral obligations, but there it is. Since the exercise of influence over other sovereign nations is a governmental function, we as citizens should move our government to act on our behalf in accord with the limitations of the ought-implies-can principle of morality.

This is what is to be done: we have the obligation to root out disparative privilege in all of its manifestations by actively opposing prejudice in our own circle and by favoring governmental action to produce the equality of kind that justice demands. And let us remember also to be thankful for the absolute privileges we enjoy but did not earn.

One-Armed Economics and Wealth Creationism

I venerate expertise as a truth warrant. In judgments of correspondence goodness evaluating quality, it can substitute for clearly defined standards (please see my post of October 15, 2013 for more). Yet because expertise is to some degree built upon experience, it is a deeply flawed justification for truth and goodness claims, though its reliance on rational examination of experience raises its reliability. That being said, the criteria for developing expertise depend on the subject. The human sciences have compiled a dismal record in this regard, in part because of weaknesses inherent in their fields (see post of February 9, 2014 for more) and in part because their roots in academia encourage professional disagreement. Nevertheless, the hardest of the soft sciences, and one of the few based upon quantitative analysis, is "the dismal science," economics.

Because it is a human science, it is built on the shifting sand of conflicting paradigms. We see broad disagreement about essential subject matter along the political spectrum, but even economists embracing capitalism splinter in their premises and the conclusions built upon them. Pit a disciple of Hayek against a Keynesian and watch the sparks fly, justifying Harry Truman's famous preference for a one-armed economist who wouldn't say, "on the other hand…." The general unpreparedness of economists for the crash of 2008 does not speak well of their predictive powers. The psychic hotline did better. So call economics an immature science, one step below respectable status. Even keeping that caveat in mind, I cannot help respecting economists for their devotion to data, something all too rare in the human sciences, and I respect their analytic method, flawed though it may be by the theories that dictate it. They know so much more than I do. I only participate in the economy. I do not profess to understand it. My field is epistemology, so I am painfully aware of what I do not know, but I have a few issues with the concepts of wealth and job creation as some economists define them that confuse me. Perhaps an expert can set me straight.

I took a dollar out of my wallet the other day, and right above George Washington's head, in a kind of corona, a girlish hand had printed in red marker the name "Maria." That got me thinking about how many hands that bill had gone through since Maria had first claimed it for her own, and how many transactions had been facilitated by its existence. Now I've heard conservative pundits and a few economists insist that wealth can only be created by free enterprise, that government can only transfer rather than build wealth. Since the money comes from tax revenues, they say, it is merely changing hands, not creating value in the way private enterprise does. This seems obvious when given their favorite example of a Steve Jobs inventing the iPhone. There was nothing and then there was something that people were willing to pay for. This is true wealth creation, as in creation ex nihilo, a making that seems almost divine. Advocates of this position contrast that kind of wealth making with the confiscatory policies of government taxation that only move money around after, of course, squandering a large percentage in fraud and waste. So when government spends, it is spending not only what the earner would have spent more wisely but also what it did not create. The assumption is that the taxpayer created the wealth by her labor just as Steve Jobs created the iPhone. So taxation is not only wasteful in that it opens an unnecessary drain for money to go down. It is also parasitic in that it adds nothing to the economic basket. Have I got that argument right?

But Maria's dollar tells another story. Does your job "create" wealth from nothing in the sense of inventing value? More likely it does what all those people who passed on Maria's dollar do: perform a service that the payer considers worth a dollar. Whether that service is performed in the public or private sector is irrelevant. The taxpayer pays for a service that government provides just as she pays for a babysitter or a taxi or a pizza. I will admit that I can choose whether to purchase these things and that I have no such choice in government spending except, of course, through my vote. But look at it another way with respect to other purchases. I have no choice about fulfilling any of my economic needs. That's why they are called "needs" (for more on this, please see my post of November 13, 2013). Can I refuse the grocer, the hospital, the landlord? You may respond that I can choose another provider to meet my needs if dissatisfied and that such freedom is denied me in regard to government services. This is an undeniable burr under the saddle of the cowboy libertarian wing of the conservative cause. But let us examine this irritant more closely.

There are two good reasons for the monopolies that government "enjoys" in performing its services, and though these reasons affect economics, they also transcend it.

First, since the raison d'être of government is justice rather than profit, it must retain a monopoly of power so as to be the final arbiter in its goal of providing justice. The legitimate scope of such efforts, I must add, is limited to those needs citizens cannot satisfy for themselves. These fall into two broad categories: those too expansive for any individual to provide (such as defense) and those that might be skewed into injustice by gross inequalities (such as the court system and the legal rights of minorities).

Second, the issue of government's efficiency and desirability in providing any particular but necessary service is skewed by the simple truth that government is often the provider of last resort, performing public functions (like fire protection and other disaster relief) that no private enterprise would undertake because they could never prove profitable. Florida provided another example after hurricanes ravaged the state in 2004. Private insurers deserted the state, which was obligated to go into the insurance business to provide needed coverage for homeowners.

These two realities give the lie to the claim by some economists that government only transfers rather than creates wealth. It provides services citizens cannot provide for themselves, services that meet human needs and that may demand an attention to justice over profit.

Now it is worth discussing whether government can perform some of these services as well as a private entity, but this is a question of relative efficiency, not absolute wealth creation. But in considering efficiency, you can be sure that no corporation will pursue these kinds of opportunities unless profit is factored into the job, profit that adds to the cost of providing the service and blinds the provider to issues of justice. Does it really matter whether Maria's dollar passes through government hands by way of taxation or into the till of a business if it then goes to pay some employee for doing necessary work? By the logic of those who deny the value of public expenditures, the education private colleges provide has value while that provided by public colleges does not. Can anyone claim that a nurse working at a VA hospital provides no valuable service in comparison to one working at a for-profit hospital? Does this make any sense?

So much for the absolute claim that public dollars cannot create wealth. But conservative economists might then simply pivot to the question of efficiency, subtracting from that wealth the costs incurred by government's incompetence, leading to the same conclusion by means of a different subterfuge. For if the admitted economic value is reduced by the waste, fraud, and inefficiency they think inherent in government spending, the net sum might still be zero. This is certainly a different argument from the definitional claim that the public sector is a drain, and it resonates. The Soviet Union was hardly an exemplar of socialism, but it surely was a model of waste and bureaucracy, and conservatives are on much more solid ground in condemning the poor performance of some government bureaucracies at all levels. But let us examine this point critically as well, for it is based on two false claims.

The first is that government is for some reason inherently wasteful, but that conclusion relies on a cost/benefit analysis appropriate to private rather than public enterprise. Put another way, “wasteful” in capitalism refers to cost versus profits, but as the goal of public enterprise is not profit, at least not the profit that can be quantified into dollars, on what grounds can it be termed wasteful? Unless critics are willing to use a broader standard of value, they can hardly objectively judge the public sector, and to use the standard of value appropriate to private enterprise is grossly distorting. 

Underlying the charge, though, is a psychological theory of motivation that capitalism's champions mistakenly take to be indisputable. They charge the poor performance of government services less to weak oversight than to the slackness of its work ethic. No reasonable person can argue against the profit motive as an incentivizer of efficiency, but it is carrying the argument to absurdity to view it as the only incentive, as Ronald Reagan did in his infamous contrast between public and private workers: "The best minds are not in government. If any were, business would steal them away." I doubt that even Steve Jobs found profit more motivating than his own love of discovery and invention. Millions of dedicated police officers, firefighters, teachers, and public servants are moved to do their duties by their commitment to the general welfare rather than the size of their paycheck. Bureaucracy is not necessarily a pejorative term. This is not to dismiss charges of waste and incompetence nor to diminish good-faith efforts to make government more effective, only to challenge the presuppositions of those who seek to discredit it by inept comparisons. It is only an inspiring Chamber of Commerce vision of wealth that sees it as created from nothing by the ingenuity of the human spirit in pursuit of profit.

Maybe not so inspiring as we might wish, as the ugliness of Ayn Rand's portrayals demonstrates. Her "arguments" as phrased in her novels are certainly created out of nothing (please see last week's post on the dangers of fiction-as-reality). But more to the point, they are adolescent fantasies. It is hard to decide whether they are more objectionable for their Romantic excess or their childish ingratitude. Even a moment's cool thought after reading Rand's overheated prose should make it obvious that Steve Jobs did not build Apple from nothing. It hardly detracts from his genius to note that he relied on the education, protection, and facilitation that his parents, his community, and his government provided in order to apply that genius. What would the iPhone have looked like if Jobs had labored away in a slum in Somalia? Rand's heroes hardly made themselves (though I am not sure their parents would have wanted to claim them), and while her warnings of the dangers of collectivism were on target against a Communism that championed a stupid equality of degree, they amount to an elephant swatting gnats in today's liberty-loving America (for more on the battle between liberty and equality, see posts of November 20 and December 3, 2013). The self-made man is a staple of the American dream, perhaps because it is a dream to imagine anyone being totally responsible for his own success.

A stronger point of leverage against government as a wealth creator might target simple overreach. I mentioned earlier that government's positive obligations in justice focus on broad needs for the general welfare and more pinpointed needs to arbitrate competing interests. The emphasis must always be on needs that individuals cannot provide for themselves. This is an inherently hazy category. Its components are built upon the universality of human needs (see post of November 13, 2013) that introduces an equality of kind (see post of December 3, 2013) that imposes obligations on government (see posts of February 23 and March 2, 2014). But it is not merely the ambiguity and difficulty of the topic that lead both liberals and conservatives to avoid facing it. Both sides have reasons to blur the issue.

Liberals refuse to face the thorny issue of individual responsibility. Though each of my needs confers a right, the satisfaction of most of those needs is my own obligation. It is, in truth, my core duty as an adult human being (see post of November 6, 2013). Should I fail through my own error, government as the collective will of my fellow citizens is under no obligation to repair my situation unless I am unable to repair it for myself. I understand that Christian values in the U.S. have tinted many persons' views of this sort of thing, but all those conservatives who think this a Christian country might want to differentiate their Christian from their political duty (though they seem loath to face either: see below), and liberals who wish to use government resources to satisfy any unmet needs whatsoever might want to clarify in their own minds which are government's duty and which are each citizen's. Conservatives have a point about the "nanny state" that liberals rather wish to ignore. Look at it this way: to treat adults who should go about the business of satisfying their own needs as children the rest of us must care for is pure paternalism, insulting and crippling to those we seek to help. It is also a waste of public resources in that individuals are not only responsible for these efforts but also more efficient in them than those who seek to ameliorate their condition for them. It violates the only duty of government in that it is inherently unjust both to the individual and to the citizens who attempt to do for her what she should do for herself. Liberals need to face the matter of individual responsibility squarely.

Conservatives have a different motive for blurring the issue of needs, for to base government upon their satisfaction would call into question the social contract justification for government and with it the majoritarian argument that has long delivered injustice to minorities. More pragmatically, it would cost more money, for to finance a government seriously committed to the general welfare– a term I define as meeting needs that individuals cannot meet by their own efforts– would socialize some efforts now undertaken for profit. The absurd cost, inadequate distribution, and poor outcomes of American health care are one prime example among many. The net result would be to change our value system from the orientation that wealth determines worth to a respect for the equality of kind rooted in our common humanity, an innocuous enough notion that you would think Christians as well as champions of human rights would subscribe to, yet one many conservatives find threatening.

A related wealth creation story lauds the positive contributions of the job creators in our economy, those who stimulate the economy by providing employment for the ninety percent of us who work for a wage. The argument hinges on the more basic notion discussed above: that wealth is created out of nothing only by those operating within the free enterprise system. See, Apple had two employees in 1976 and 45,000 last year. Each of those well-paying positions exists only because of Jobs and Wozniak. They are not only wealth creators. They are also job creators. Again, it is hard to argue with this. Something came into being as a result of their genius that did not exist before, and by dint of their hard work and smarts, that new thing has created both wealth and jobs. Surely, job creators deserve recognition and reward for their efforts. This version of events is convincing, yet it seems just a bit truncated and simplistic. There's more to it than just invention and production. All those sleek computers and phones and tablets are great products, no doubt, but all those high-paying jobs and profits were not created solely from production. There is also the little matter of consumption. Even the paragons of job creation could not have made their companies or built their wealth or hired their workers without a demand for their products. And demand depends on the health of the economy. No titan of the marketplace could work her magic in a failing society, which brings us right back to the necessity of government not just as a wealth creator but as a job creator. Just as most wealth creation is not ex nihilo but derives from providing a desired service or product, so too does job creation depend on the consumer's purchasing power and the health of the economy. This health is a dance between private enterprise and public policy. See the way the stock market embraces the Federal Reserve and vice versa! Yet from the way conservatives portray their version of a job creator, one would think he pays salaries from his own pocket rather than from the operating expenses of his company, but then maybe that impression is enhanced by the ridiculous salaries paid these self-styled giants of the marketplace. A moment's thought should uncover the real job creators for the bulk of the economy: the middle-class consumers whose income provides the demand that increases the cash flow that creates the jobs. This healthy cycle characterizes any productive economy. To single out the employer as the linchpin of this cycle is to distort its nature. The United States has the highest level of economic inequality in the developed world (but we are more equal than Mexico and Turkey; yay!). One may make a legion of moral arguments about what various stakeholders in our economy deserve, leading to interesting discussions about minimum wage and CEO compensation, but as a purely pragmatic matter, the real job creators in our economy, meaning consumers, can hardly perform their part in the economic cycle if this level of income inequality continues. But the conservative moral argument disputes this pragmatic one. We may discern a number of reasons for the widening gap between rich and poor since the 1970s, but surely the position that employers have a more important role in the economy than workers, and the concomitant conclusion that they have a moral right to a larger slice of the pie than at any time in our history (excepting the ominously significant year of 1928), are largely responsible for the current disparity.

In moral philosophy, we see the concept of "ought implies can," a very valuable check on the applicability of moral principles. It is fine to say that such and such a moral principle should apply, but the argument is defeated before it begins if no way exists to apply it. It is worth asking if the conservative argument about job creators introduces the ought/can issue. In other words, should moral principle bow to pragmatic necessity? Because consumers are as necessary a part of the business cycle as employers, should that be the end of the discussion? Does their practical necessity as purchasers trump the moral argument for the superiority of job creators in creating wealth? I would argue no, for no amount of pragmatic limitation would tamp down the position that bosses should prosper disproportionately to their employees as much as business cycles will allow because of their greater contribution to the general welfare. Granted, this concession would at least recognize the moral worth of workers to some degree, which would be a decided improvement over the current blindness that elevates employers to godlike status. But even in the healthiest of economies, some pragmatic positions require moral interrogation: that workers are interchangeable drones; that shareholders matter more than employees; that profit is the only real product of any business; that crony capitalism, influence peddling, and corporate welfare are acceptable governmental functions; and that rigging the system so as to deliver obscene wealth to a fortunate few while denying fairness to the rest satisfies the obligation of government to deliver equal justice under the law. We not only can do better as a society, but we ought to.

 Which brings me right back to economics as a science. I began this entry by admiring its devotion to data and quantitative analysis. I will end it by pointing out the most glaring reason why economics as now constructed can never be a real science: it can never apportion proper value to the human concerns that the economy should serve. The goal of science must be to find truth. It does not have the means to find goodness within its modes of warrant, and questions of value are always questions of goodness. I have frequently discussed the wise blindness that science must bring to its objects of analysis (most recently on July 6, 2014) in order to provide a reliable warrant for its truth claims. Like its sister human sciences, economics can never provide that warrant, which is why even those of us lacking in expertise should apply our reasoning to its provenance.

Tall Tales

Even while I studied and taught literature, I was always troubled by the loose linkage between stories and reality. I am not talking about the reality depicted in the stories themselves. It has always struck me as right and proper to object when they violate their own premises. This might be as simple as the continuity errors that eagle-eyed viewers always point out in movies. Look: her glass was half-empty, and in the next shot it is three-quarters full. A more serious failing is the deus ex machina that rectifies a failing story line at its climax. Or perhaps a character acts entirely contrary to her nature without sufficient cause, leading the reader to scratch her head in bemusement or throw the book down in disgust. Still, the bar we set for fiction is pretty low. It need not mirror life, an act critics call mimesis, so long as it remains true to its own premises. If pigs can speak to spiders in the first chapter, if choruses burst into song in the first act, the observer only asks that the same rule apply later, and if things change, that the change be explained so as to allow the work to remain internally consistent. Stories exist in their own world, and that is what pleases us about them.

Only they don't and it doesn't. Three-year-olds can effortlessly navigate the gulf between created reality, the made-up world of fiction, and the common reality we all participate in, but something odd evidently happens to grown-ups, and the problem only grows more serious with education. Sophisticated critics and professors of literature engage in an interesting sleight-of-hand in examining the relationship between real and imagined. If confronted outright with a request to define the connection per se, they will deny any explicit linkage, because even a moment's thought will introduce the iron curtain that divides the real from the imagined. But five minutes later they are enthusiastically dissecting mob mentality in Billy Budd or the moral implications of the Grand Inquisitor chapter in The Brothers Karamazov. They seem at least to sense the problem, so they seek cover by referring to Melville's or Dostoevsky's view of things, but what do they expect their often-captive listeners to do with the analysis they are conducting? Are we to confine the novel's meaning to the fictional world of the nineteenth-century whaler or a Russian orthodox monastery? Are we to infer that these two brilliant creative geniuses have nothing to say to the common reality they inhabited– or perhaps to the one we now inhabit– that their brilliance is curtained by the imaginary worlds they created, worlds so rich and dimensioned that we can drag ourselves back to reality only by an effort of will and, once returned, remain strangely lost, with one foot in the real world and the other in the somehow richer world pinned to the pages or etched on the DVD? Adolescents emerge from the theater lobby with plans to play quidditch or kick-box or buy an assault rifle. Adults finish Macbeth with a richer understanding of the perils of ambition. Really?

I've been bothered by this issue for many years, but as with so many other super-macroscopic cultural issues, it seemed few others shared my concerns. Then, in a recent TED talk, the famous sociobiologist E.O. Wilson was asked about our zeitgeist's obsession with narratives, an issue about which he expressed some concern. I think it time to delve into my own discomfort.

Like so many other big-picture problems, this one suffers from a poverty of appropriate terminology. Wilson observed that our evolutionary bias is toward confronting reality. But, of course, we can't do that directly, for before "getting" it, we must construct our version of that reality, a process I have termed the virtual circle in these posts (please see August 13, 2013 for a fuller explanation). An entirely accurate construction that mirrors reality in perfect detail is something I define as the virtuous circle, an unattainable goal yet one we cannot help pursuing and compositing in all of our truth, goodness, and beauty claims (for more on this one, check out October 2 and 7, 2013). The mimetic process that takes place in our minds as we attempt this construction occurs constantly and may be considered the perpetual goal of all of our perceptual and most of our rational efforts. Our struggle to identify the true, engage our natural and preferential freedom to choose the good, and negotiate the difficulties of appreciating the beautiful occupies most of the moments of our lives. But not all. Wilson implied that the created reality of narratives provides us with just what we have been exhaustively searching for: a consistent and comprehensible world that we gradually come to understand but, unlike our own, one that we merely observe rather than feel forced to make choices in. By that logic, we jump on continuity errors and narrative inconsistency with an almost feral anger, for our minds are led to think of these created worlds as being the thing we most seek: a mimesis without self-contradiction. Further, our poor brains, exhausted by the creative effort of half discovering and half creating a common reality that we but poorly understand, gratefully accept the balm of the fiction we indulge ourselves in, absorbing at one level the completeness of this intelligible but imaginary reality yet simultaneously engaging in the same kind of logical analysis that we bring to every conscious moment of our existence. We naturally do so. When I taught literature, I had to remind students that nothing in the created reality of a piece of fiction occurs by chance. Everything is intentional in the made-up world. They found that notion incredible because real life doesn't work that way. That intentionality is what we yearn for in common reality. It energizes the world's religions as, I suspect, it sustains the scientist's trust in her method. There is an authority in creating an imaginary world fully furnished with simulacra of the one we are assured a greater Author created, and there is comfort in turning past the title page or watching the opening credits, knowing that what follows has order and meaning. We delight in immersing ourselves in and trusting that world for a very good reason: it differs from the real one, so often dull, bewildering, and meaningless. If narrative literature existed only to produce that delight, it would have paid its way in this weary world. That immersion into another world, into the author's consciousness, justifies what Robert Coles termed "the call of stories" in his excellent book of the same name. But it seems our brains are made so as to ask more of literature than it can deliver. We cannot help but mine stories for meaning, to ask them to cross over.

No wonder children emerge from the theater’s cocoon with their fingers pointed and thumbs cocked. No wonder critics knit the imagined world of the narrative not only to their own virtual circle but to the virtuous one, drawing out of the created world lessons for the one we all inhabit. Though the process is a natural one, the imaginative interweaving is facilitated by the ease with which students of literature deal with figurative language, especially metaphor (an issue I’ve addressed in another context in these pages on October 2, 2013). They are used to thinking of one thing in connection with another, but the relationships they establish are necessarily imprecise and allusive. And so they shy away from a discursive and frank appraisal of the relationship between created and common reality. The term they would use to describe such an attempt is reductionist (a disparaging term whose reputation I attempted to salvage in these pages on September 3, 2013). I see nothing amiss in asking the critic to state with some precision what relationship the events in a fictional world have to the reader’s participation in the real one. If the author intends to communicate some wisdom about common reality through narrating the experiences of characters or the voice of some speaker, fine. It strikes me that we often communicate our experiences in just that way, intending to impart some wisdom to our listener. But that common sense view runs into two major roadblocks in regard to fiction.

The first involves the limitations of experience. Of the correspondence truth tests, experience is both the most commonly applied and the most unreliable. (For a fuller explanation of why, please see my posting of October 7, 2013.) We build our virtual circles largely from experience, which explains both why we find reality so difficult to comprehend and why the transference from fictional experience to virtual circle seems so natural. But experience's limitations should caution us against this temptation. As a justification for truth and goodness claims, experience suffers from contextualization. It is necessarily unique and unreplicable, so the lessons learned from each experience are only loosely applicable to later and similar ones. And as experiences are perceptually registered, they are altered in ways we cannot be conscious of, since sense data is filtered pre-consciously so as to present us with a fully constructed picture of reality. This distortion is profound enough to support postmodern charges that experience itself must be private and subjective. I argue in opposition to this charge that our reasoning about experience may produce a degree of intersubjectivity that allows us some broad consensus. This universal reasoning faculty applied to subjective experience is the anchor that moors us to a common reality. But what do we reason about when we think about a movie or novel? What facts of experience can we accept as true in this created world?

The second problem concerns the intent of the creator. A novel or a movie is a work of art, subject to conventions governing its genre and aesthetic considerations that shape its substance and style. Those who ask that it also somehow convey the truth in its storyline are asking it to accomplish a second and divergent goal, for as we all know to our sorrow, unvarnished common reality could rarely be mistaken for art. To make it so, the creator must distort reality as a sculptor must mold her clay.

To illustrate the issue as it affects literature, contrast a reaction to a biography with a response to a piece of serious fiction. I just finished Christopher Hitchens' short biography of Thomas Jefferson and David McCullough's fine life of John Adams. Both works were polished pieces of craft with distinctive styles and authorial expertise. Both followed genre conventions for biography. I consider both to be fine artistic creations. Both paint a fairly unflattering portrait of Jefferson. While I may have quibbled with a few incidents or details, I approached their portraits of the sage of Monticello with equanimity. Here were expositions of another life and another time. Like all actual lives in all times, there were loose ends and unknown motives, questions and inconsistencies. Whatever telescoping of perspective or framing of events, whatever intentional omission or sharp focus each author effected, I judged to be in service to the attempt to convey a true account of a real life that I was free to further investigate and confirm or dispute. In contrast, at the same time I was reading Edith Wharton's novel The House of Mirth, whose storyline chronicles the rise and fall of a social climber in the Gilded Age, Lily Bart. It was as rich in historical detail as the biographies, with a comprehensive picture of the social environment of her times. The machinations forced upon Lily by her class and gender roles were as deeply affecting as they were exotic to this twenty-first-century male reader. But at the novel's end, what was I to do with these insights? I had entered a richly furnished late Victorian room and had trawled the minds of all its denizens, had observed their triumphs and bitter falls, and upon closing the book had stored it all in memory. What part of that memory may I think real? Edith Wharton actually inhabited rooms like those she portrayed, joined the social elite, and undoubtedly was acquainted with many a nouveau riche. But what of it? Her admirers will infer that her novel will give us the "flavor" of that life, or an "insight" into it that somehow translates into knowledge of it. But they ask for too much, for at the same instant they wish for the novel also to exist as a unique aesthetic object, one crafted intentionally to produce an emotional response. These purposes are of necessity in conflict. Two examples of that inevitable conflict should suffice to make my point. First, we know from the title page on that something will happen in The House of Mirth, and– wonder of wonders– it will happen to the central character! That is a piece of luck! Funny how life turns out differently. Furthermore, somehow the novel's reader will know enough about its characters and events to get a pretty rounded picture of not only what happens but why and how and, even more miraculously, some causes and effects of the events depicted. Now even if we are willing to suspend disbelief sufficiently to correlate these unlikely findings with reality, we face yet another problem regarding the insight we are being handed: we are being asked to trust that the author is skilled enough in her authorial craft to accomplish her artistic ends and at the same time observant enough of real truths in the real world to communicate an experience truthfully and reliably, and not just any old experience, but one that conveys some essential truth that cannot be communicated discursively (if it could be, it would be, for discursive language is far easier to employ than writer's craft).
How are we to judge any single feature of her novel as a piece of experiential truth? After all, every representation is from some angle dictated by aesthetic rather than experiential requirements, and we cannot know how accurately any imaginative creation mirrors what it reflects. If Wharton had slipped in some anachronism as a private joke, would I have known? If she grossly exaggerated Lily's paralysis in the face of rigid gender roles to drive home some private grievance or authorial machination, if she employed some Dickensian character or plot twists to dramatize her storyline, if her unflattering foray into the consciousness of her male figures was prompted by some misanthropic impulse to stereotype…. How would I know? Can I ever separate my memory of the watering holes of fin de siècle society imparted by her creativity from the histories of the era I have studied? Should I try? I hear a great deal of talk about "artistic truth," "theme," and "deeper meaning" in discussions of literature. I would like a clearer understanding of just what knowledge such ideas entail, not to mention the more difficult issue of how such truth claims are warranted. How does artistic, creative genius and authorial skill translate into depth of knowledge of what is grandiosely termed "the human condition"? (For more on aesthetic judgment, please see my post of December 13, 2013.)

As a finished artistic creation, the novel stands on its own to enfold us as a unique intentional work to produce the "disinterested delight" that Kant said characterizes all works of art. I get that. But just because it has those qualities, I question any "lessons" the work can offer us: lessons about history, sociology, psychology, or, in the words of the English teacher, "life." My brain cannot help but form the same synthesis with this imaginary and created world as it does with the mimesis of the real world I construct as my virtual circle. After all, mirroring reality is what it does for a living. Though natural, such an effort is, I am convinced, delusionary and dangerous, and it should be resisted rather than embraced.

Plato recognized the danger. In Book X of The Republic, he envisions a utopia without creative arts. We largely discount his warning today unless we buy into his theory of forms, whose architecture allowed him to see artistic creation as a mirror of a mirror. Since common reality for Plato was merely a reflection of the ideal, any artistic creation that fulfills a mimetic role must reflect common reality, thus distancing the observer even farther from contemplation of the ideal. One hardly needs to subscribe to the Platonic vision to make that complaint. Consider Augustinian objections to secular literature still exerting their force in the closure of theaters during the English Commonwealth and Jefferson’s well-known quarrel with reading fiction.

As in so many things, Aristotle disagreed with Plato, and at least a whiff of his argument attends every subsequent effort to find truth in the narrative arts. The power of fiction, according to Aristotle's schema in The Poetics, is to distill the essence of experience rather than any particular and therefore unique perspective. Just as he envisioned our knowledge of abstractions to be gradually constructed of multiple exposures to their instantiation, so too did he see the artist's role as distilling experience into its essential archetype, the defining characteristic of the essences portrayed in the narrative. Macbeth is an imaginary king as Shakespeare portrays him, but his approach to gaining and retaining power typifies a certain type of monarch, or so we like to think. The muthos of the play, its essentials, are thus both imaginative and didactic. The author both creates and instructs. The audience responds to effective archetyping with catharsis, which Aristotle saw as an emotional purging. We might grow as callous to blood as Macbeth himself if we actually knew him, but we retain our emotional distance when watching him onstage, just enough to explore regicide as an idea and experience our response to it as a vicarious emotion. So we are double winners, Aristotle claims. We derive the emotional charge of involvement with the intellectual depth of detachment. We end the narrative emotionally spent but rationally energized. Aristotle's arguments are powerful, but they fail to bridge the gap between the imaginary and the real. Certainly, what we experience in Macbeth is a powerful emotional ride that leaves us exhausted well before Macbeth loses his head. But only the catharsis is real, not the manipulated events that produce it, and what experiential truth can derive from events that are so clearly manufactured? I do not mean to say that immortal characters and events cannot be consensually discovered in great literature. We approach our Willy Lomans and Don Corleones with too much reverence to claim that our emotional response to narratives cannot build immortal archetypes. But these are cardboard cutouts compared to any living person. Their power derives from the crispness of their definition, and that clarity is entirely a product of their being merely artifacts, framed by intent. As for the intellectual power we derive from experiencing their fictional world, I would argue that it is precisely these singular great characters and storylines and the profound implications they generate that produce the greatest intellectual dissent among critics and literary experts. Our emotional response is molded by the intelligence creating fictional narratives– this is after all a world created to elicit it– but our attempt to interrogate that response and extrapolate its significance to common reality must splinter into private conviction and public conjecture when it crashes against the wall between created and common reality. Archetypes there may well be and catharses they may well produce, but when we attempt to derive real-world truths from them and put those truths into discursive language, we enter the thicket of controversy that fuels a hundred academic journals and a thousand websites.
The deepest wells of the narrative arts (the Hamlets, the madeleines, the Rosebuds, the monolith), which in Aristotle’s schema should produce the deepest and most powerful consensual truths, lead instead to the most vociferous disputes among experts who try to frame those truths in the discursive language of the academic article or popular essay. Why is that? Could it be that the “truths” thus communicated about “the human condition” are as numinous as a religious conversion? Could it be that no reliable truths about “real life” can be produced by the portrayal of an unreal one?

That the narrative arts must serve mimetic purposes seemed relatively undisputed until the Romantics refocused the spotlight on the artist. But this was hardly an improvement, since it necessitated exchanging the universal for the personal, with all the attendant temptations of private experience proffered as artistic genius. These nineteenth century obsessions were magnified by the growth of popular culture and the rise of literacy, cheap publication methods, and universal education. By the twentieth century the new narrative forms of film and television guaranteed the ascendancy of the narrative not only as art form but as educational tool. And a new philosophy emerged to spotlight the narrative form, to place it at the very center of its premises. I have often written about the early twentieth century transition from modernism to postmodernism in these pages (please see posts of July 22 and 30, 2013 for more). Postmodernism’s veneration of creativity and subjectivity was matched only by its disdain for empirical science and rationality. When joined to the new technologies that celebrated the narrative form, it stimulated a powerful effort to link created and common reality.

Its focus on creativity, criticism, and irony guaranteed that its approaches would be heterodox, so it took a while for postmodernism as a movement to reach full steam. Its groping for consistency coincided with the maturation of both the movie and the television industries into the powerful social forces we see today, and no one familiar with either could deny the countercurrent of sappy Romanticism that characterizes not only these media but also popular literature (for more on the formation of one hybrid archetype of this era, the antihero, see my post of November 26, 2013). Postmodernists embraced the individualist and subjectivist biases of the Romantics, along with a near deification of the artistic rebel. Their mature theory could be discerned in the works of a cadre of mainly French intellectuals by the mid-1970’s. They were academics and littérateurs who found fertile soil for their theories, and indeed often communicated them, in literature rather than in philosophy. In appealing to literature to carry philosophical weight, they were honoring a long tradition that included Freud’s grounding his theories in Greek mythology and John Dewey’s reliance on Rousseau’s Emile to support his Progressivist educational theories.

But the postmodernists sought even more pride of place for the narrative form. In their terminology, the great historical movements of the modern age were grand narratives, merely widely accepted stories that cultures tell themselves to justify the status quo. Abstract and discursive political, religious, and moral theory is thus dismissed as mere storytelling with all of its fictionalizing. Ironically, postmodernists value another kind of story: mini-narratives, the stories of individuals or of previously neglected and oppressed groups. They taught a generation of aspiring literature instructors to seek out truth in these untold stories. But note the difference between historians reading letters from Tuskegee Airmen and movies celebrating their service in World War II. In the parlance of the movie trailer, “based on a true story” is simply another way of saying “not true.”

Postmodernists also advocated subjecting what they scornfully called the canon of dead, white male authors to a critique using deconstruction, whose purpose was to mine their fiction and poetry for evidence of grand narratives perpetuating exploitative social orders. While racist, homophobic, misogynistic, and capitalist undercurrents certainly swirl through the fiction of the canon, and while avid students pride themselves on uncovering them, I find it disturbing that they assume their molding influence on readers without asking what I think is the more basic question. Yes, readers see in Tennyson or Hugo a disturbing misogyny, but so what? Readers then and now should not have their prejudices confirmed, yet that is true not merely because we prefer our prejudices to theirs but because these imaginative works are neither sociological investigations nor psychological confessionals. Perhaps every human creation from cave paintings to kewpie dolls screams a political manifesto, but for my money the meaning is brought to the reading rather than derived from it. Deconstructing fiction is said to have given critics what they have always desired: equal partnership in artistic creation. I doubt that serious academics would have gone for it if they hadn’t already accepted the claim that fiction qualifies as another form of philosophy.
But the effort to find truth in fiction was only the first step, leading inevitably to the goal of all truth claims: finding goodness. Let no one think the postmodern method is morally neutral despite its implicit rejection of objective moral standards, for its program of social reform is built on the model of the human sciences (not a good idea at all; please see my post of September 2, 2013 for why). In substituting sociology and psychology for the kind of pure aesthetic John Ruskin favored, postmodernists transform imaginative works into covertly polemical ones, replacing one exaggerated influence with another, using the narrative form to pursue weighty political ends it could never support. They mock Ruskin’s Romantic pretensions as the merest fluff while mining literature for justification for their moral crusade (I am entirely sympathetic to their egalitarian agenda, by the way, though I find their warrants unforgivably simplistic; please see my post of November 20, 2013). The truth content of created reality is simply too insubstantial to carry the weight of their analyses. They are shooting at bubbles in the air to fill them with rhetorical lead, but the bubbles merely dissolve, and with them goes the cathartic power of narrative media.

So what’s the harm? The child finishes the Harry Potter novel and eagerly reaches for the next in the series. The crowd files out of the movie theater marveling at the latest computer animation effect. The reader closes The Great Gatsby still envisioning the green light at the end of the dock. No harm there, only a rich emotional immersion into a created world. But it is so hard to resist the next step. Presidents mimic action heroes. Romance novel fans inspect their snoring husbands with disdain. Serious and intelligent students learn to seek truth in the deconstruction of the latest literary novel, yet they find no critical consensus on the wisdom it purportedly conveys. E.O. Wilson complained in his TED talk that our attraction to narrative prompts us to seek a simplistic, spurious intelligibility in the world around us. We want good guys and bad guys like in the movies. We yearn for happily ever after like in the fairy tales. We crave the omniscience of the novelist’s world. In doing so, we disdain the open-ended complexity of the natural sciences, the hard work of sustained commitment, the doubt and uncertainty of finding truth and choosing the good. We want our stories to be real and reality to be as silky smooth as a heroine’s cheek. But that cannot be.


Religionists Fighting the Wrong Battle

I came across the following article not long ago and made a close reading of its arguments, many of them broadly Christian in outline, on the subject of material determinism. The original text and my comments alternate below; I hope my interruptions don’t overly detract from the flow.

From Catholic Answers Magazine (Volume 19, Number 4)
Determined to Deny Your Freedom
By: Peter A. Kwasniewski

“Determinism” is not an everyday word, but we feel the effects of this philosophical view every day—usually in the unspoken assumptions of popular scientific journalism and critiques of religion. It is helpful to be aware of what this view involves and why it is untenable.
Determinism in its most general sense could be described as the theory that the history of the world—all events and their order of occurrence—is fixed and unitary. In other words, there is only one possible history of the world down to every last detail. There are several types of determinism: logical determinism, theological determinism, biological determinism, scientific determinism. In this article I will concentrate on this last and most familiar form. Scientific determinism stems from a belief that modern science, especially physics, has successfully proved that all reality is material and operates according to fixed laws of action and reaction.

No. This demonstrates a basic misunderstanding of science, which could never prove something so ambitious to its own satisfaction, much less yours. What you describe is the self-limitation of all scientific inquiry: it can only work with material phenomena. To see science as holding the philosophical position that any event of any sort is fully explicable (and thus, in principle, predictable) by a pre-existing chain of physical events necessitating it is also mistaken, though that is a fine philosophical definition of determinism. Science can only use the tools it has, so full explanations as science sees them must be couched in the language and employ the tools that science provides. For instance, why I get cancer and you don’t is explicable by science to a degree, but no researcher would claim that explanation is complete in a metaphysical sense, nor would any claim that any empirical explanation whatsoever is ever satisfactorily completed, as explanations always lead to further questions. The core issue here is that science applies the philosophical position of determinism to a very limited sphere of study, all of it involving perceptual experience. You seem to be setting science up for a fall with these hyperbolic definitions.

In a world where science has been elevated to the status of a quasi-religion and its spokesmen to the rank of high priests, we are bound to encounter people who hold this position. It is well to note that the attitude or frame of mind underlying it strikes at the root of religion as such, impeding conversations about anything—God and the human soul, Christ and the Church, sin and grace, even good and evil—that is not strictly empirical or susceptible of laboratory analysis.

This may be true if scientists are unwilling to admit the limitations of their method, but most seem all too willing to acknowledge them. Scientism, not science, fits the definitions you give here, and scientism is easy to refute. Yes, science “impedes” conversations about anything non-empirical, just as it ought to, for that is its sole sphere of action. But nothing stops religionists from diving right in.

Science Explains It All . . .
This view found its rudimentary expressions in the writings of René Descartes, Francis Bacon, Galileo Galilei, Isaac Newton and their contemporaries, but attained a dogmatic consistency in the blatant materialism of Thomas Hobbes, Julien Offray de La Mettrie, Voltaire, and Baron Paul Henri d’Holbach. These writers exaggerated the reach of physical science and claimed that experimental physics was the model for a total explanation of reality.

Yes, don’t forget Poincaré and other thinkers who championed what passed for science in the nineteenth century. This sense of unbounded optimism is always a temptation for science, but the view you reference was especially popular in the era before science was as strictly defined as it is now. Bear in mind that the province and power of “science” as an activity have tightened over the years, and the trust that eighteenth and nineteenth century writers placed in the future of science has since been challenged by the increasingly rigorous requirements of empirical research. What Voltaire thought of science is as irrelevant to its current status as what Plotinus thought of religion. Pompous self-importance and inflated truth claims are particular temptations of the human sciences, but these are not what we think of when we think of scientific success.

Later on, Charles Darwin’s theory fed into this powerful stream. His godless account of biological diversity showed itself well adapted for integration into a larger philosophy of scientific determinism.

Slanderous. Read the last sentence of On the Origin of Species! Still, just as Copernicus removed the necessity for angelic rotation of the spheres, Darwin removed the necessity for the Great Watchmaker to establish biological diversity and complexity. I suspect biology will soon remove the necessity of God from biological creation, and cosmology is already working on a universe originating from nothing. But what is the alternative? To reject evolution and modern cosmology in favor of miraculous creationism and geocentrism? I have written extensively in these pages of the temptations of coherentism, whose virtual circles of personal truths need only be supported by the principle of non-contradiction, and of the immediate contradiction religionists face when they attempt to claim absolutist truth about external reality using these same means (see posts beginning on September 11, 2013 for more). It is certainly defensible to make truth claims about the transcendent reality we cannot know so long as they cohere with what we do know, but to prefer private belief to correspondence knowledge violates not only the proofs of correspondence but also the only proof open to coherentists. It is a childish error of willfulness.

The rapid and spectacular advance of technology, born from the marriage of modern physics and capitalism, seemed to verify beyond all doubt the materialistic mentality behind both.

I had no idea physics and capitalism were even dating! Seriously, how are they connected? They may both be godless in their subject areas, but the pews are filled with practitioners of both on the Sabbath, so these fields conduce to atheism no more insistently than any others. Yet again, it seems to me science’s silence on metaphysics might be interpreted as an admirable restraint on a subject it can never know.

Given that people nowadays have been more or less habituated by textbooks, teachers, and news media to accept scientific determinism as fact, the apologist should start by explaining that the position is essentially a belief or dogma. It cannot be deduced from empirical knowledge, which must always be imperfect (no scientist would dare to claim that he knows or could know all the “laws of nature” and all the data required to predict future events).

Again, odd definitions. “Deduction” means drawing a conclusion not explicit in the premises, so it strikes me as wrong to say truth cannot be deduced from empirical knowledge (though the use of “fact” in the passage shows a lamentable mental sloppiness). The methodology of science ensures that it is the best means available to arrive at certain kinds of truths. The error here is to treat absence of evidence as evidence of absence. The empirical endeavor simply cannot speak to the issues you value. Its silence should not be interpreted as a rejection of your value system but rather as a blindness to it. I think your quarrel should be with modern theology that stupidly accepts the narrow focus of empiricism as a kind of limitation or criticism of religion’s core values while at the same time modeling its theology and pastoralism on the flimsiest of human sciences.

The use of “dogma” is interesting, for it references a common criticism of the scientific enterprise: that it pretends to be based on fact yet is actually predicated on non-verifiable assumptions. On that charge it is clearly guilty. Let us call these assumptions “axioms” rather than “dogma” so that we may avoid religious equivalencies. The principle of non-contradiction is one such axiom, as is the inductive method. These assumptions rest on a deeper axiom: that reality is fundamentally rational (Heisenberg and Gödel have taken aim at that deeper axiom, an attack crucial to postmodern critiques of natural science). These axioms are working premises rather than dogma. They underlie the investigation, but as early twentieth century theories of quantum mechanics and general relativity have revealed, they are open to criticism and revision using the very same techniques scientists employ in their everyday pursuits. Contrast this pragmatism with religion’s absolutist reliance on inerrant and revealed truth. Imagine how religionists might respond to assaults on their dogma! You don’t have to. Study the Reformation.

It cannot be considered self-evident because it contradicts the experience of freedom, which has more weight than any theory.

Nice point, but the phenomenological sense of freedom does not necessitate an ontological reality. We may feel free without being free.

The one who puts forward determinism as a universal explanation lays it down a priori, that is, as an axiom and without sufficient evidence.

Now this is just silly. It is neither an axiom nor a priori, but a conclusion drawn from experience. You may argue it is the wrong conclusion, but to call it a priori is simply an error of definition. Once we see the claim to determinism as an a posteriori conclusion validated by a near infinitude of experiential instances, the truth of determinism as a foundation stone of empiricism seems unassailable.

Empirical science can never go beyond the boundaries of the measurable or observable, and, as a consequence, is simply unqualified to make judgments about the existence or non-existence of anything beyond its limited field.

Yes! Now with that in mind, go back and revise everything you have accused it of till now. Don’t malign it for ignoring religion and God when it cannot do otherwise and don’t take its silence on metaphysical issues as dismissal.

. . . Or Maybe Not
Let us consider seven instances where scientific determinism founders.
1. It is meaningless to speak of universal “laws of nature” unless they have been instituted by a lawgiver. Matter, as such, is not capable of giving laws of behavior to itself. That means that material things are not the source of these laws; rather, they presuppose laws when they act and react in an intelligible manner.

No. A “law” of nature is simply a large explanatory hypothesis that answers the “how” of some physical question. You are correct in saying that things cannot create laws: these are products of human interpretation. We say the law of gravity dictates the attraction of two masses in a vacuum, but the masses don’t know the law. They do act according to its dictates as we would if falling toward the earth from an airplane. But a law of nature is merely a logical explanation of phenomena, not a prescription dictating behavior, and as such it requires a mind to make the analysis. Such analysis provides real predictive power. Nothing in the nature of natural laws necessitates an external source or creator; all that is required is a mind to explicate the law based on careful observation and rational analysis. Sounds pretty determinist to me, for what is a natural law but a replicable prediction?

Moreover, how did material things come to exist, not merely as matter, but as matter functioning within a system that leads to the formation of stable and orderly structures? Do atoms just mysteriously “know” where to go to in order to make up a certain molecule in a certain kind of organism?

Also a red herring. Does water know to freeze at 32 degrees or boil at 212? Again, matter responds to forces acting upon it in predictable ways. That means determinism. Otherwise, we couldn’t be accurate in our predictions and science would collapse into magic. The totality of these actions produces a “system” which magnifies the predictability and therefore the determinism operating in the system. The system doesn’t predict or know it is a system. We do. Theoretical cosmologists and theologians may find in such macroscopic interlocking order evidence for divine intervention that empiricists might never observe in the microscopic determinism of phenomena. But, I hasten to add, such confirmation could never be scientific simply because the explanation involves notions not open to perception and– pity poor science– it can only deal with phenomena.

The materialist will have sophisticated answers, of course, about how one system gives rise to another and how this environment happens to be suited to that reaction or result. But buried in the fancy language is the same problem: “begging of the question.” They have assumed that which is supposed to be demonstrated.

Huh? Is this a blurry version of the cosmological proofs?

2. A living animal (or one of its organs) is obviously and radically different from a dead animal (or dead organ) even though the material stuff out of which they are made seems to be the same. Therefore, some principle other than and greater than the material parts must exist to account for the life of a living thing. This principle, according to the Western tradition, is the soul. Both Aristotle and St. Thomas Aquinas teach that plants, animals, and especially human persons are animated beings (from anima, soul). It is the soul in each organism that contributes its distinctive nature and controls its activities. The presence of a soul in living things testifies against the materialism that usually accompanies scientific determinism.

You are on higher ground here, for you are not challenging science but transcending it, which seems to me your best bet. Science certainly can describe the difference between a live and a dead body, but it must be silent on the existence of the soul that animates it. Ockham’s Razor does play into this, though. We don’t need angels pushing the planets around the sun when we can offer centripetal force, though they still might be doing that. We don’t need Apollo driving his chariot across the sky, though he still might be there. So it is with an animating force. Organisms are more than matter. Life requires energy transfers through biological and chemical systems that cease to function at death, and this empirical physical change adequately explains the difference between life and death. Maybe there is a soul involved in this arrangement, but Ockham’s Razor makes it unnecessary for pathologists to fully explain the perceptual factors involved. Unfortunately, you cannot prove the existence of a soul any more than your opponents can disprove it, so your argument is unlikely to be a powerful one against active opposition.

3. The human intellect has a unique power: It is capable of knowing simultaneously things that are mutually exclusive. For example, hot and cold are properties of a body (physical object) and cannot exist at the same time in the same respect; a body can either be so hot or so cold, but not at once perfectly hot and perfectly cold. The intellect, however, in knowing hot knows also cold, and in fact knows the one in and through the other. Your mind can be all hot and all cold, inasmuch as you are able to grasp these opposites at the same time. More than that, intellect conceives of hotness and coldness, which are more than mere degrees belonging to some body—they are essences, “whatnesses.” These reflections help show that the intellect is not a body, for something is seen to be true of it that can be true of no body whatsoever.
Now, because the intellect has a power over opposites or contraries that no physical organ has, and because it attains a knowledge of universal things that stand beyond the scope of any sense power, the intellect must be immaterial. Since matter is the very cause of a thing’s being corruptible (i.e., able to break down and fall apart), the intellect in itself is incorruptible—it will never break down and fall apart. Hence the soul of man, insofar as it is intellectual, is immortal. What is more, the soul is not subject to opposition from or coercion by material causes. In other words, no body can make you change your mind, unless your mind changes itself. This is a powerful sign that the intellect (or better, the intellectual soul, which includes free will), has its feet planted in the material world by way of the sense powers, but holds its head aloft in a spiritual world where the stakes are truth and falsehood, good and evil.

Well, that was a long trip! This Platonic argument was countered by Aristotle, who argued that the Forms were neither perfect nor divine but were rather conceptualizations built up from experience. The argument was developed ad nauseam by the “realist” opponents of nominalism in the high Middle Ages, but as no convincing resolution ever settled the question, it seems to me all the old nominalist and conceptualist arguments can still be made against this one. My position is that a concept like justice is no more a real object than hatred is, though I hold that both are objects of thought that allow us to recognize and discuss their nature; that does not mean that either is divine in any way. This is not to disprove the Platonic argument. It can neither be proved nor disproved. But as a completely natural counterargument suffices to explain the phenomena in question, this argument against determinism is not necessarily convincing. BTW, it is a huge leap to go from conceptualism to immortality, so I would say your argument makes some unsupported jumps along the way even if the initial Platonic points prove defensible. For instance, it certainly is not logically necessary for a nonmaterial substance to have entirely contrary qualities to a material one. That one thing is contrary to another in one sense does not necessitate its being contrary in all. Men and women are opposites in regard to gender but identical in many other respects. So too may immaterial qualities be like material ones in some respects but not all. At any rate, conceptualist notions of qualities like “consciousness” posit an objective reality rooted in that consciousness rather than in some Platonic realm.
4. The determinist claim that free will is an illusion flies in the face of our immediate and unshakable awareness of freedom over moral actions. It undermines praise and blame, reward and punishment, and the practice of justice, which renders to each what he deserves. If man is not the free cause of his actions, how can he be praised for defending his family from crime, or punished for murdering a fellow human being? All social life and jurisprudence is founded on the fact of moral freedom, which we know with a certainty far greater than any scientific hypothesis commands. Some people use the expression “pre-scientific knowledge” to refer to the fundamental experience of the natural world and of ourselves that not only must come before, but must dominate the interpretation of, all subsequent knowledge. Some scientific theories are reminiscent of a man on a ladder sawing off the planks that support him, or a tightrope walker ready to sever the cord that holds him up.

You seem to mistake wishing for knowing here. I would love to think I am free. Also that I will not die. Both of these convictions are strong in me. But if I face facts squarely, I know I will die. And if I look at the various arguments against determinism in regard to free will, I find that compatibilism, libertarianism, and other attempts to find room for human freedom are wishes rather than arguments. I concede that we feel free to choose, but we are wrong about many things we come to naturally, and even unpleasant truths must be faced. Still, I think you have this one backward. If we have free will, we are not determined. But the fact that we feel free is not proof that we are not determined. Wish it were.

5. Nothing is a cause unless it has power to cause. No physical thing gives itself power to cause, but always receives this power from something else. Moreover, no physical thing is the cause of its own being, but exists only as a result of prior beings. Thus, for each cause, one must seek the source of its causality; for each being, one must seek the source of its existence. If there is not, prior to all physical causes, a non-physical origin of the power of causality, then nothing could ever begin to cause and nothing would in fact occur. Posterior causes depend on prior causes; if there is not, prior to all physical beings, a non-physical origin of their existence, then nothing would exist—all of which is absurd. The existence and causality of material things therefore depends entirely on a perfectly immaterial uncaused cause of both being and motion—namely, God. Far from doing away with God, scientific determinism cannot make any sense at all without implicitly assuming him—or rather, without arbitrarily transferring divine attributes to matter and chance.

This is certainly what the Deists say, and it is a strong argument. Thanks, Aquinas. It seems right to argue that the only way to break the causal chain is to posit an uncaused cause. If the universe had no moment of creation, then everything that could happen would already have happened and we would have reached perfect entropy. Cosmology is doing a bang-up job with multiverses and so on to push the moment of original creation back, but infinity is a long time, and I doubt that cosmology will ever be equal to taking that on. This is still your strongest point, though I would like to stress Aquinas’s own observation, one echoed by C.S. Lewis, that calling a creator “God” does nothing to imbue Him with any of the other qualities we like to attribute to the Judeo-Christian divinity. And if we seek to follow Paul’s advice in Romans 1:20 and infer the nature of the divinity from the universe it created, I cannot imagine we would produce the Judeo-Christian God, particularly if we don’t begin with the Biblical version of creation. Any honest induction from the nature of creation should produce a creator who loves material diversity far more than individuals, a conclusion supported both by the size and complexity of material reality and by the fragility and waste implicit in biological evolution.

6. The exponent of scientific determinism is guilty of a dramatic inconsistency between his thinking and his life. His dogma tells him that he is not free, that he is not responsible for his actions, and similarly that nobody else is free or responsible; yet in his life he behaves as a free person towards other free persons, exacts duties of himself and others, and shows mercy or cries out for justice when wrong has been done. His dogma tells him that his wife and children are basically automatons, yet, if he is a good man, he loves them and could never actually believe that the unique relationship he has with them—the experiences they have shared, the meeting of his future wife, their marrying and rearing children—is no more than a lockstep parade of meaningless atoms.

Not really. This seems a corollary to argument four, and the same counterarguments apply. We know we are going to die, but nonagenarians keep on eating nonetheless. So do twenty-year-olds, whose death in cosmic spans will follow immediately. Why act in a way that violates the basic facts of existence? Because we have hope, which is all this point offers. I am convinced that free will is tied to the Kantian categories, specifically causation, in questions of goodness, so though we feel we possess natural freedom, and therefore bear the burden of choosing, we are actually determined (please see my postings of March 16 and 23, 2014 for more). Since we don’t know what the determining influence is, we feel it doesn’t exist. But people were bound by gravity long before anyone knew what it was. In any case, our beliefs often seem at odds with the known facts of existence, but the conflict should challenge our beliefs rather than the facts.

7. If someone asserts that determinism is true, has he come to understand something true about reality as a whole? If so, how can this truth, which is universal, timeless, and independent of all particular events, be merely an effect of material causes? It already reaches into a domain no longer subject to—indeed totally outside of—the strict chain of physical cause and effect to which the theory appeals. There is no room for truth as such in the world of the determinist; the man who says “determinism is true” refutes himself in the very act of speaking.

This is a restating of your third argument, but a much weaker one. Conceptualism offers a long history of explanation for terms such as “truth” that traces back to Aristotle’s original rejection of the origin of the Forms. David Hume does a clear job of catching one up on the progress from the Greeks to the Scottish Enlightenment, though not much has been worked out since; but the argument you give here would also make unicorns and Klingons not only real but of divine origin. I would add that the man who says, “Determinism is false” should never expect his car to start in the morning. Consider the literally thousands of chemical, electrical, mechanical, metallurgical, and physical processes involved in an automobile’s creation and operation, from the making of the internal combustion engine to the refining of gasoline, from radiator to exhaust gases. Each is based on determinism, as is every bit of natural science. The notion that God made a miraculous universe died a well-deserved death beginning in the Renaissance. Its adherents seem not to appreciate that a miraculous universe would be a literally chaotic one, one not open to reason and experience: in other words, the medieval world. To trash human reasoning so willingly seems to me a deeply ungrateful act of betrayal of the divinity believers profess to worship.

Nevertheless, the apologist should bear in mind that determinism, as a quasi-religious dogma, is passionately and stubbornly clung to by its adherents, who have often, so to speak, pre-determined the outcome of the dispute before it even gets under way. An apologist is more likely to be successful with ordinary people who have given credit to determinism only because it is repeated ad nauseam in textbooks and the media. Their half-hearted endorsement of it, or of some aspects of it, is thus more easily shaken.

I am afraid you are guilty of this charge, and the success you desire smacks more of religious proselytizing than of a deep investigation into the methodology of science. I once again challenge you and those you seek to convert to visualize the appalling consequences of living in a universe unintelligible to reason, for surely that would characterize one devoid of material determinism. Such a universe would hardly be a “cosmos,” an orderly creation, but rather the “Pandemonium” Milton envisioned as the realm of devils.

Reviewing the weak theories that attempt to rob us of our freedom, we might well desire to cry out again with St. Paul: “For freedom Christ has set us free; stand fast therefore, and do not submit again to a yoke of slavery” (Gal. 5:1); “Now the Lord is the Spirit, and where the Spirit of the Lord is, there is freedom” (2 Cor. 3:17).

I admire Dr. Kwasniewski for confronting the issue in such a disciplined way, and his characterization of science does have some validity in regard to the excesses of scientism, which is certainly a problem that science and its popular adherents sometimes fail to confront sufficiently. But as the lines were drawn long ago by Popper and others, there seems no excuse for scientists or educated laymen to cross that line. For an opponent of science to attack the excesses of scientism seems to me an act of bad faith, for the two have long been disentangled, and the proper sphere of science should by now be well appreciated by practitioners and by the rest of us who profit from its technology, rigorous subject disciplines, and deep view of material reality, none of which necessarily dissolves faith and all of which may serve to deepen and broaden it.

My most serious objection to the premise of this article is that it chooses the worst possible point of attack. Can anyone take seriously an assault on material determinism in this age of scientific triumph? As I have tried to make clear, such an assault hammers at the very foundation stone of the entire empirical enterprise (and of any rational enterprise as well). Such a reactionary appeal to pre-Reformation epistemological positions would have been hopeless in 1700. It seems worse than quixotic to pitch it today in the face of the manifold daily proofs we see of the predictive power of the natural sciences. Science exists because of its explanatory power. Its truth is confirmed by the interlocking paradigms of its discrete subject disciplines, its reliance on the precision and logic of mathematics as its language platform, and, most obviously, its technological marvels. All of these rest squarely and exclusively on the truth of determinism.

Why religionists fight this lost cause puzzles me, particularly when they can use some of these same arguments to contrast material determinism with human freedom, attributing the latter to a difference of kind only explicable by postulating the existence of a soul. Dr. Kwasniewski gets half of the argument right (our sense of freedom argues for a spark of the divine), but to grant that same freedom to the material universe by denying determinism puts him in the same boat with those he attacks, only he accuses them of wishing to make man matter without freedom while he wishes to make all matter free. Religionists might consider broadening the gulf between humans and nature rather than narrowing it. Science cannot help itself: it only examines perceptual reality and so must limit its study to our material being. Its glory is that it accepts its limitations and excels within them. Religionists should agree to accept theirs and stop fighting for a cause long since lost. They might still triumph if they pick their battles more wisely.


Stereotypes and Categories

In the eternal battle between liberty and equality, I am generally in favor of putting my thumb on the equality side of the scales if only because it is perpetually outweighed by the attractions of liberty, particularly in our current zeitgeist (for more on this battle, please see my posts of November 20 and December 3, 2013). And I deeply admire the commitment to equality that seems to characterize educated young people in our culture, though I abjure the group identity theories that seem to shape their commitment. The thread that runs through so many conversations in this social fabric seems to depend on a definition of stereotyping that confuses me, particularly in regard to its use of categorization as a related term. The imprecision and negative connotations of the former seem to stain the latter, and that is a shame.

My understanding of the meaning of “stereotyping” is to prejudge an individual according to some group characteristic or to form some generalization of the group from an inadequate exposure to its members. The injustice of such prejudice seems obvious. You probably are familiar with the wag’s distinction between categorization and stereotyping: the latter involves a judgment you disagree with. But that approach robs both words of their meanings and renders productive conversations impossible. Besides, it misses the point. Whether the stereotype is complimentary or insulting is irrelevant to the real issue, for the error is rooted in the prejudice itself regardless of whether the judgment involved is positive or negative. Good stereotypes are as objectionable as bad ones because they constitute a kind of mental sloth: we assume that we know something about the individual that we cannot know, based on some group categorization that may or may not be true. But I think this understanding of the term is now controversial. Something else is afoot, something having nothing to do with demanding sound reasoning. Whatever the cause, this whole subject has provoked distaste and avoidance. The reaction among committed egalitarians is to abstain from any kind of categorization whatsoever, but I wonder if this may be an overreaction.

This may be one of those eddies of self-contradiction that swirl through postmodernism, for it has rooted its theories in the kind of group identity that Marx made infamous. Educated young people who may or may not be aware of postmodernism’s influence on their thinking (being educated means being inundated in the zeitgeist’s current obsessions) are both devoted to analyzing the influences of composited “cultures” of gender, race, economic class, demographic groups, and so on, and highly resistant to being accused of having their own consciousness formed by the shaping mechanisms they apply to others. I mean no denigration in pointing out this inconsistency, for it is no different from Marx thinking himself immune to the bourgeois mindset or Freud admiring the power of his ego ideal. We think ourselves above the fray. But though postmodern culture constructs its understandings on deconstructing the hidden influences of the group on the individual, it faces contrary impulses from popular media and commercialism that glorify existential freedom. The zeitgeist gives as it takes away.

One more point about current culture needs mentioning as preface. The distinction between the powerful and the powerless also plays into the dynamic of stereotyping, for postmodernism’s premises leave it no means to resolve conflict other than the naked or disguised exercise of power. The French revolutionaries and academics who systematized postmodernism came of age during the anti-colonial and Cold War conflicts that pitted the industrialized and capitalist world against people of poverty and of color. The theorists were apostles of equality, but their worldview was grounded also in a phenomenology of experience that valued nurture over nature. After all, it was their opponents who preached the doctrine of racial, ethnic, gender, and class superiority, so postmodern opposition to the natural superiority of any social category was assured. The problem was that the very phenomenology they took as determinative of truth and goodness claims denied them the appeal to justice that claims to equality need to rely on (for more on why, see July 30 and November 13, 2013). I notice that educated young people frequently substitute “appropriate” for “good,” as in “That kind of behavior is not appropriate.” Culture determines etiquette rather than morality determining the good. But this creates a recurring problem, for why should equality be “appropriate” and prejudice “inappropriate” when the goodness of social norms is hollowed out by the same differing experiences that putatively create cultural identity in the first place? Gordon Gekko’s motto, “Greed is good,” might simply reflect Wall Street culture, where greed is appropriate. This knotty problem could be resolved by resolutely turning away from issues of correspondence justice in favor of simply equalizing the liquidity of power, though why this effort should be any more a correspondence good than an outright appeal to justice is never explained. We may assume such an arrangement might be more than appropriate.

These postmodern influences have shifted the working meaning of “stereotyping” toward unfavorable judgments about groups or their members that perpetuate current power relationships. But this emphasis drags in the perfectly innocent word “categorization,” simply because some valid categories do indeed perpetuate current power relationships merely by acknowledging their existence. The problem is that every biased jerk in the world tossing off insults about this or that group is convinced that he is merely explicating a valid categorization. The appropriate response seems to be to avoid any negative group categorization in the interests of social reform, but such efforts also stymie reform because they discourage empirical research into some of the social ills that produce the power inequalities reformers seek to ameliorate, along with the subsequent appeals to logic and expertise that would repair them.

This kind of thing can get silly. A female professor is accused of being a “male hegemonist” for asking a boy to help two girls move a heavy table. A statistician is called elitist for citing income statistics of single-parent families in a sociological journal. A criminologist is called bigoted for analyzing the race of felony convictions. At the other extreme, this writer feels perfectly comfortable accusing Wall Street financiers of being greedy. What do we call stereotyping the powerful? How can stereotyping individuals in privileged classes ever be appropriate? Can we accept theories of social determinism for others while still considering ourselves free to make moral choices? Can the legal system make sense of this affront to responsibility?

Aristotle considered accurate classification the pinnacle of rational thought, one composed of close analysis of multiple exposures to individual representatives from which we produce our conception of the class. As some classifications are innocuous– all unmarried men have no wives– we may assume that our current abhorrence of stereotyping will still allow us some categorization. So are we limited only to those that admit of no possible negative interpretation? On whose judgment: the speaker’s, the one characterized, or the culture’s? Can nutritional science condemn obesity as a health problem without indicting obese persons? Should single parents feel slandered by census data compiled by demographers demonstrating material disadvantages to children of single-parent homes? An understanding of stereotyping would forbid any reader of those statistics to draw any conclusion about any particular household from these data-based conclusions, but is it appropriate to draw the conclusions themselves, particularly if they point out some social cost?

The question of whether even valid categorizations of groups should be avoided or minimized for fear of lapsing into prejudice is a different one from whether those characterizations, even if valid, reveal anything about the individual members of the group. The latter issue is as easy to answer as the former is difficult. It is a blatant injustice to judge the individual by the group’s characteristics, even if those characteristics are accurate for the group, unless they are definitive for each of its members and the aggregate categorization was itself formed without prejudice. To do otherwise is stereotyping. But must we also shy away from any investigation of group characteristics, perhaps just to be safe? Are we that prone to prejudice? To answer that question, we must explore one more element: why are group characteristics so frequently poisoned by prejudice despite the sustained postmodernist push toward egalitarianism? What cultural pressure molds this kind of prejudice?

The dark energy that sustains group prejudices comes from a much older influence, older than culture itself, indeed older than humanity, for it characterizes all social animals. This tribalism begins wholesomely with familial identification. Aristotle saw the blueprint of the political state in this first natural social unit, but he also devoted some attention in Politics to the difficulties of enlarging that unit from the clan, the extended family, to the city-state. Our affiliation with family is as natural as the imprinting instinct of babies, but it requires some cultural pressure for that instinct to be broadened to strangers. Virtue ethics finds the impetus for this extension of attachment in our less obvious needs. The child reaches for her mother for sustenance, and the citizen reaches out to her polity for protection, civil order, education, opportunity for meaningful work, and so on. Though these needs are natural, they also require a rational consideration of the risk/benefit ratio of trusting strangers who are not driven by instinctual drives of protection and self-sacrifice. Our ambivalence toward the other is at the root of tribalism, as tribalism is at the root of the group prejudices that demean and dehumanize.

Cultural forces can fight these instinctual prejudices to a standstill, but only by relentless education, generation after generation, that establishes or demonstrates bonds among unrelated persons in polities that conduce in some explicable ways to meeting their needs. Unfortunately, the group cultures so established are frequently held together by opposition to more distant cultures representing the other, and so the larger social or political group thus established is analogized to the natural unit, the family, while the culture standing in opposition is set up as the other. And so it goes, as sororities issue bids, urban neighborhoods mark turf, country clubs publish membership guidelines, religious organizations celebrate heritage, racial minorities taunt each other, fundamentalists debate God’s will, countries fracture along ethnic fault lines, and nations build patriotism by demonizing those across their borders. If you leave out the family, all of this us-versus-them is cultural creation, though all of it is modeled on the instinctual tribalism that has the baby shrink away from the unfamiliar face.

Every social identity produces an incentive to group prejudice if only because the essential nature of our categorization of the other sees him as unwholesome and inhumane. That instinctual dark mistrust can only be countered by continued efforts to humanize the other, to expand our social and political self-identity so as to incorporate him. We can and do embrace strangers through familiarization (note the etymology) that depends on interweaving our efforts to satisfy our needs (virtue ethics defines these needs specifically; for more, please see my posting of November 6, 2013) with the efforts of those we might normally mistrust as strangers. This effort has motivated some of the major world religions from the beginning, though it is perhaps honored more in theory than in practice, as religions have struggled with the alienating effects of heresy in the face of their exaggerated claims to knowledge (for more, please see entries for January 12 and 20, 2014). More than a glimmer of hope for human progress can be found in the increasing scope of commercial webs of mutual interest over the last century and the social networking of recent decades.

The effort to dehumanize the other is abetted by the prejudicial categorizations that educated culture so discourages in the name of equality. But obviously more is involved than equality in both the problem and the solutions now being tried, which perhaps explains the current aversion to group categorizations in general. As much as I admire the motivation behind this effort, the confusion engendered by the conflation of “stereotype” and “category” will only make the task more difficult.


The Determinism Problem

Having introduced the common anomaly termed the determinism problem last time, I hope to investigate it more thoroughly this week, with the goal of resolving it to my own—and, I hope, to your—satisfaction. Simply put, the problem is that we know we live in a contingently determinist reality, and yet we feel free to choose, granting us a liberty nothing else in the universe seems to possess. Either this sense of freedom is wrong, or our sense that the universe is contingently determined is wrong, or some means exists to reconcile the anomaly. Immanuel Kant called this problem an antinomy, the submission of two convincing but contradictory “truths” to experience. I agree with his judgment that reality is unitary and cannot be contradictory, so something must give in this antinomy.

But maybe we’re wrong. We take it as a given that the principle of non-contradiction cannot be violated in our conceptions of either truth or goodness. To a correspondentist in pursuit of the virtuous circle, the complex of truths that signals a complete understanding of reality, it is an axiom that reality cannot be self-contradictory but must instead constitute a single unity whose components fit together like jigsaw puzzle pieces. Indeed, the model of the interlocking scientific paradigms of the natural sciences is both a metaphor for and a piece of that unity. The existence of anomaly is thus prima facie evidence of an error of truth in discovering the virtuous circle. To a coherentist who constructs her own virtual circle of whatever elements she finds most useful, the principle of non-contradiction serves an even more vital function, for in her construction the coherentist has only that principle against which to test her personalized truth and goodness claims (please see my entry of August 6, 2013 for more on this). The existence of an antinomy as crucial as the free will/determinism problem poses challenges to both models of justification, correspondence and coherence, not to mention to the core constituents of science, logic, and mathematics. So the first relevant question to ask is this: can we live with fundamental antinomies like the determinism problem, or do they require rational or experiential resolution?

We have seen the impact of powerful antinomies in the past, and it isn’t pretty. The two greatest knowledge crises that have destroyed consensus and led to long-term societal disruption were each rooted in such antinomies: the Protestant Reformation of the sixteenth and seventeenth centuries and the postmodern revolt of the twentieth. The Reformation pitted traditional authorities against each other, revealing in the process the structural weakness of authority challenged (please see my postings of September 11 and 18, 2013 for more). The fundamental challenge to authority’s successors as warrants, reason and reasoned experience, consisted of precisely the charge that these warrants could not reveal reliable truths about reality. For example, one cause of the postmodern deconstruction of modernism after World War I was the seeming irrationality of Einstein’s theory of relativity, which presented to the world an empirical explanation at odds with what had been thought the plain evidence of reasoned experience. Both of these shattering antinomies were resolved eventually: authority collapsed as warrant for truth and goodness claims over the seventeenth and eighteenth centuries, to be replaced by reason and empiricism, which were themselves challenged by the relativism that Einstein’s theory seemed to sanction (please see my postings of July 22 and July 30, 2013 for more) and the postmodernism that relativism legitimized. Indeed, the twentieth century seemed to consist of a volley of assaults on what the Victorians called “good sense.” The anomie inspired by Freud’s theory of the unconscious, Gödel’s incompleteness theorems in mathematics, and Heisenberg’s uncertainty principle in quantum physics is another example; the last of these made even Einstein a little queasy, but not as queasy as general relativity made all of western culture. It is informative to think of these kinds of assaults on reasoning, all profoundly rational, as contrasts to the most influential theoretical efforts of the Victorians (Darwin, Marx, Hegel, Toynbee), all attempts at rational synthesizing. The rawest anomaly of all, though, was the specter of the most civilized world powers engaged in the senseless butchery of World War I.

In the face of these knowledge crises and the central role of antinomies in them, we might be more concerned about the free will/determinism issue than we are, particularly since it has been disputed for so long. While its influence can be traced in postmodernism, with all its baggage of received identity, in the Romantic excess of existentialism offered as a reply, and in the elegiac and ironic tone of modern literary media, I think its impact has been somewhat muted by a nearly universal response: we seem to accept the deterministic nature of reality while still feeling free.

Only sincerely religious people should feel comfortable with that. One reconciliation many persons find convincing is religious absolutism, the conviction that our free will is simply a product of human uniqueness. We are free because of our special place in creation, as persons with souls rather than things that are caused, and that is the end of it. We feel free and struggle over our moral duties because we are free and responsible. Our defining quality is precisely that natural freedom to recognize choices in the maelstrom of experience and to choose from among them, a freedom instilled by the creator but one that exacts its price of moral responsibility. This kind of compatibilism resolves the anomaly in a manner closed to the hard determinist, who can find no empirical evidence of soul, creator, or freedom in material reality yet who could never deny the awareness of choice that underlies not only her own nature but also the scientific enterprise itself. And allow me to add one more counterintuitive argument in favor of the absolutist cause. This compatibilist theory not only demands that we have free will to choose as a condition of our moral responsibility; it also requires that the rest of creation be determined. In order for us to carry moral responsibility for our choices, we must be able to project their likely outcomes, a condition assured by the very determinism that has made scientific progress possible. A reality in which many things were free would be one in which outcomes could not be predicted, so our survival, not to mention our salvation, depends on the determinism that empirical science has revealed to be at the core of creation itself. It is this predictability of cause and effect that gives weight to our moral choices. Religionists thus can point not only to what Kant called “the moral law within me” but also to the orderliness of the cosmos (a word whose etymology is “order”), in Kant’s words “the starry sky above me,” to resolve the anomaly. Though this resolution lies beyond the realm of knowledge and its justifications, and therefore beyond the bounds of my efforts in this blog, falling as it does into the realm of the beliefs that extend our knowledge, I find it very persuasive.

Of course, there are others. Recent philosophers of science frequently find themselves tipping toward the hard determinism that underlies all scientific efforts, while modernist ethicists often fell on the theological side of the debate discussed above. Having investigated twentieth-century compatibilist efforts as well as earlier modernist approaches, I think I have found a view that accomplishes three objectives. First, it resolves my own discomfort while still falling within the realm of knowledge. Second, that resolution is consistent with the absolutist religionist arguments listed above, though it does not rely on them. Third, the resolution it offers is also consistent with correspondence proofs of judgment, among them empiricism and reason, therefore violating neither my own virtual circle, the complex of knowledge and beliefs I accept as coherent truths, nor the virtuous circle, those truths justified by the correspondence proofs of judgment I have often defended in the past (for the most concise statement of which, see my posting of October 7, 2013; these are more fully developed in my book, What Makes Anything True, Good, Beautiful: Challenges to Justification).

My compatibilist position is framed by Immanuel Kant’s fundamental argument that free will is a rational concept that cannot be proved by our subjective perceptions of it in experience. This dualism is fundamental to Kantian epistemology: the notion that we can never know pure concepts, noumena, but only their application in experience, phenomena. Since experience is sure to be partial, subjective, and contextual, Kant thought it inadequate to serve as the base for a claim to certain knowledge of a concept as powerful as free will. So though we certainly feel free, we cannot know that we are free. On the contrary, we know that the phenomenological world is not free but determined.

Our notions of determinism invariably depend on the validity of the cause-effect relationship, the overwhelming conviction that all events are effects of prior causes and causes of subsequent effects. But David Hume made clear in his sense-data theories of perception that this crucial temporal relationship is not ontologically demonstrable, meaning that we can never perceive causes or effects in nature. Our minds must add them to events; the relationships are purely rational rather than material. This argument so rattled Kant that he was moved to theorize a range of rational operations that work pre-consciously to assemble sense data into a coherent picture of reality and present it to our minds as reality itself. This is why we can only know phenomena. Our minds have already added their rationalizing ingredients to reality before allowing us to perceive it, so we are unable to disentangle the noumena of reality from the rational reconstruction our mind assembles from sense data. And a key ingredient of that recipe is cause-effect. Reality is full of causes and effects because we highlight them in the data stream that is the phenomenological reality we experience. Or rather, we see them as already highlighted thanks to the sifting actions of the Kantian categories. It is, of course, the task of the empirical enterprise to test and make explicit the pre-conscious connections we form among the objects of experience. Whatever tests we cook up to do that (and the scientific method is the crown jewel of such efforts), Kant argued that the fundamental objects of perception we manipulate in such conscious operations are not open to dispute; they are in fact the products of a common human operating system that grants us common access to phenomena. All of these pre- and post-conscious operations are fundamentally rational. Our sense of freedom, and particularly of moral freedom, derives from this distinctive human rationality.

Note that this arrangement allows for just the kind of dualism that the free will/determinism problem demands of any compatibilist solution. We may be determined or we may be free; that is an ontological question beyond the scope of rational investigation. But our categorical response is always to feel free, to identify choice, which is as fundamental to our framing of experience as the principle of causation. When a ball rolls into the room, my cat’s eyes follow the ball. Mine turn to see who tossed it.

We may further investigate what constitutes this sense of freedom. The classical free will argument posits “forking paths”: one is free if she could do otherwise, if she could take either path. But this framing blurs the issue, because how we respond to choice is only the last step in a three-step process, each step of which allows for a different kind of freedom. Merely identifying choice is both uniquely human and demonstrably rational; it constitutes the natural freedom that is as central to framing experience as causality. Try not to recognize your options in any situation for even a moment. The inevitable forking path that the phenomenal world brings to our attention vivifies the essence of any claim to freedom, which is choice, and invites analysis. This peering down each path constitutes a second level of freedom, preferential freedom: the rational choosing from among options judged good, better, best, or bad, worse, worst by whatever standard of goodness we choose to employ. This preferential freedom to recognize the relative good in our options concerns issues of utility, quality, or morality (please see my entry of October 13, 2013 for more). When we have picked the path we think best, for whatever reason and about whatever choice, we then must choose to go down it. This circumstantial freedom is the visible sign of choice, for it is the choice to act.
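For readers who want the three-step process laid out operationally, here is a minimal sketch in Python. Everything in it, the function names, the situations, and the scoring rule, is my own invention for illustration; it is a toy pipeline under those assumptions, not a claim about how minds actually compute.

```python
# A toy pipeline for the three freedoms described above: recognize
# options (natural), rank them by a chosen standard (preferential),
# then commit to one (circumstantial). All names are illustrative.

def natural_freedom(situation: str) -> list[str]:
    """Recognize the forking paths a situation presents."""
    forks = {
        "foundering ship": ["jettison the baggage", "keep the baggage", "dither"],
        "prisoner in chains": ["think of his mother", "despair"],
    }
    return forks.get(situation, ["act", "refrain"])  # there is always a fork

def preferential_freedom(options: list[str], goodness: dict[str, int]) -> str:
    """Rank the recognized options by some standard of goodness."""
    return max(options, key=lambda option: goodness.get(option, 0))

def circumstantial_freedom(preference: str, can_act: bool) -> str:
    """Act on the preference, unless circumstance forbids action."""
    return preference if can_act else "no action possible"

options = natural_freedom("foundering ship")
preference = preferential_freedom(options, {"jettison the baggage": 2, "dither": -1})
print(circumstantial_freedom(preference, can_act=True))  # jettison the baggage
```

Note how the prisoner’s case falls out of the model: with can_act set to False the third step returns nothing actionable, yet the first two functions still yield results, which is just the separation of the three freedoms argued for above.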

So in the antinomy of determinism/free will, which of the three kinds of freedom are we postulating? Which is necessary for us to be free? Let us begin an answer by demanding everything. Suppose freedom means doing whatever one wills. But that cannot be, for all of us might will to fly, to be invisible, or never to grow old. If we cannot be free unless we have total freedom, we can never claim to be free, so our standard of freedom must accept limitations; but how much freedom is necessary? At the other extreme is the case of the prisoner in chains who can still think of his mother. Here we see no circumstantial freedom at all, only a preferential freedom of quality. And in the case of a foundering ship whose passengers face the choice of drowning or jettisoning their baggage, does their natural freedom to recognize even a painful choice make them free, even if they dither and fail to prefer one option to the other, never mind actually acting on their preferences?

In disentangling what is required for us to be free, we face a grounding question: is their choice an ontological reality? From our objective perch as observers in a passing hot air balloon, we see their ship approaching the shallows, hear their cries of terror, and calculate the ship’s draft, only a bit deeper than the trough of the waves crashing over the reef. But in this presentation of reality, where is the choice presented but in their minds? Surely this is Kant’s phenomenological reality of choice: to see experience in terms of possible goods, of forking paths. And though some paths are identified as a result of conscious reasoning that expands options (else what’s an education for?), some choices will always present themselves with the same irresistibility as those waves threatening those passengers, inevitable products of experience. Sartre was correct in according persons freedom regardless of how desperate their circumstances. There is always the forking path. No matter how awful and bleak the future appears, one can never offer the plaintive cry, “I had no choice.” One always has the option of not choosing. Or one always may choose suicide.

It is true that what we actually prefer and actually do with this natural freedom may be determined by psychology, morality, or physics. These latter freedoms are evitable: they can be taken from us and so are not always under our control. But governments, advertisers, and psychologists can never deprive us of the natural freedom that is our birthright as reasoning creatures, and though it too can be broadened by experience, it exists as a simple product of our rational human nature as it goes about constructing reality from sense data.

It is likely that preferential and circumstantial freedom can never be shown to be as free as natural freedom; their exercise will always be alloyed by determinist factors and may one day be shown to be entirely determined. That is certainly the view of the hard determinists, who wish to resolve the issue by trusting in the eventual triumph of empirical science. Even now we can always find determinist features in any preferential or circumstantial choice: the child was spoiled and screamed when she wasn’t given ice cream; the genius was home-schooled; the teacher never vacationed in Fiji. But influences alone are not enough to show the total lack of freedom that determinism requires in order to triumph. For even if determinists denigrate natural freedom because it requires no active commitment of the will, they face serious difficulties in allocating influences over preferential choice. The factors that influence us to prefer one fork to another are complex in three dimensions: number, interrelatedness, and intensity. Some of these factors are determinist by virtue of ontological past structures and events. Others are indeterminist as causative factors until brought into consideration of preference once the forking path is isolated through the use of natural freedom. Still others are indeterminist as causative factors and operate beneath the threshold of consciousness. Perhaps we can find some room in this mélange of influences for preferential freedom.

Soft determinism assumes we have some control over at least some of these factors; if we have any control whatsoever over even one of them, we are preferentially free, because our preference cannot then be determined in advance. That may not seem like much in comparison to the total freedom we desire, but remember that the bar for freedom here is very low, particularly when you recall that the determinist camp claims everything else in existence, all of which, according to empirical science, is contingently determined in its entirety, with the exception of subatomic particles operating under Heisenberg’s Uncertainty Principle. It is worth asking whether the single little free agency we are now contemplating is analogous to a Heisenberg particle: a tiny part of a much more complex whole, and even more so in that its influence seems not only undetermined but indeterminable. Given the complexity of the determinative influences on preferential freedom, can the moral agent really say she has any control if only a tiny part of her choice, a part whose power she may not be able to judge or direct, is free? On the other hand, that one preferential Heisenberg particle is enough to disprove determinism, since its influence cannot be predicted. If so, we are neither free nor determined. So the common rearguard action that seeks to isolate a speck of freedom in a sea of determinism, the effort called soft determinism, fails to support preferential freedom even as it succeeds in disproving determinism. To my mind, this invalidates the “ratio theory” of limited free will and responsibility. Perhaps this explains why compatibilists with scientific leanings assume determinism will one day carry the day, for they share with soft determinists doubts about the viability of free will in preferential choice.
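The claim that one undetermined influence is enough to defeat prediction of the whole can be made concrete with a toy calculation. In the sketch below every name and weight is my own invented assumption: ninety-nine influences are fixed and fully known to the predictor, one is undetermined, and the preference is simply the sign of their weighted sum.

```python
import random

# Toy model: a preference is the sign of a weighted sum of influences.
# Ninety-nine influences are fixed and fully known to the predictor;
# one influence is undetermined and sampled only at the moment of choice.
FIXED_INFLUENCES = [0.02] * 99      # their sum, 1.98, is settled in advance
UNDETERMINED_WEIGHT = 5.0           # the lone free factor can tip the balance

def prefer() -> str:
    undetermined = random.uniform(-1.0, 1.0)   # the "Heisenberg" influence
    total = sum(FIXED_INFLUENCES) + UNDETERMINED_WEIGHT * undetermined
    return "right" if total >= 0 else "left"

# The predictor knows 99 of 100 influences, yet repeated runs diverge:
print([prefer() for _ in range(10)])

# Caveat: if the fixed influences dominated (say their sum were 10.0),
# the choice would be determined after all; the free factor matters only
# when its weight can carry the sum across the decision threshold.
```

The caveat in the final comment is the allocation problem described above: whether the one free factor ever carries enough weight to matter is exactly what we may be unable to judge.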

To return to hard determinism in regard to preference and action, we can envision what these determinists predict: a computer whose algorithms factor in all the influences on my choice mentioned above, performing its calculations at the same speed as the human mind and arriving at the same outcome. Now suppose that program delivers its completely accurate prediction the moment before, the moment of, or the moment after my mind decides. Imagine it is also able to print out all the factors it considered in its analysis for my review. None of this is too far beyond even present-day capabilities. Now consider what I or anyone else would do with these pronouncements.

If the prediction arrives a moment before I decide and my mind is informed of the computer’s determined choice, this information becomes a new input that changes my forking paths of natural freedom, offering me other choices to consider, other paths to prefer, and other actions to pursue. And so the mind and the computer continue their game, perhaps to infinity, the “deterministic” analysis reduced to one more forking path my natural freedom brings to consciousness. If the computerized prediction is delivered at the same moment I form my preference, the “illusion” of free will goes on as before, and the deterministic success of the computer is viewed by the human subject as a kind of parlor game. Granted, if the decision is distasteful, the forking paths trailing off into the gloom, I might bow to the computer’s predictive power, but that too would constitute a preference, a presentation of natural freedom to my reason that I must factor into my choosing. If the computer’s prediction is delivered to me in a sealed envelope after I have chosen and the prediction proves accurate, I might examine the printout explaining my choices, shrug, and say, “I knew all that.” Or upon reviewing it I might find fascinating insights into my own decision processes that could affect future choosing. No scenario would in the slightest deprive me of the “illusion” that I am free to choose, and the existence of the determining factor would simply add to the landscape of choice that makes up my natural freedom.
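The first scenario’s regress can be sketched as a tiny program. Here the predictor and the subject share one deliberation rule, and announcing a prediction appends a new input; the contrarian rule and all names are my own toy assumptions, not a model of any actual mind.

```python
# Toy model of the announced-prediction regress. Assumption: the
# computer models the subject perfectly, so both apply the same
# deliberation rule; announcing a prediction adds a new input.

def deliberate(evidence: list[str]) -> str:
    """One shared rule for predictor and subject alike.
    Toy contrarian clause: a freshly announced prediction becomes
    one more forking path, and this subject chooses against it."""
    if evidence and evidence[-1].startswith("told:"):
        return "right" if evidence[-1] == "told:left" else "left"
    return "left"  # default path absent any announcement

evidence = ["spoiled as a child", "home-schooled"]
for step in range(4):
    prediction = deliberate(evidence)             # the computer's forecast
    evidence = evidence + [f"told:{prediction}"]  # the announcement is new data
    choice = deliberate(evidence)                 # the subject re-deliberates
    print(f"round {step}: predicted {prediction!r}, chose {choice!r}")

# Every announcement flips the choice, so no announced prediction is
# ever accurate: the forecast is reduced to one more forking path.
```

The point is not that minds are contrarian string-matchers; it is that a prediction delivered in advance changes the very inputs from which it was computed, so it may have no stable, self-confirming value at all.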

So it seems a non-religious solution to the determinism problem involves three logical conclusions. (It must be mentioned that accepting these in no way diminishes the charm of religious solutions and may even add to them by applying universality to issues of moral freedom, a prerequisite for absolute morality.)

(1) When speaking of freedom, we are talking about three modes of choice: natural, preferential, and circumstantial. Natural freedom is inviolable, and it is, by the way, the foundation of any claim to human rights, an argument I made in this space on November 13, 2013. Preferential freedom is the prerequisite for any moral freedom and responsibility we claim; its existence depends on natural freedom but is also subject to more determinist influence. Circumstantial freedom is both the most visible and the most determined and is the focus of legal responsibility. Our sense of outrage over slavery, for instance, is due to its denial of circumstantial freedom to its victims. While this liberty to act is the focus of much of our conversation about the determinism problem, it is important to note that the other two freedoms are determinants of action and so should claim more of our attention.

(2) Our sense of moral responsibility derives from the natural freedom to recognize choice and from the preferential freedom to choose from among the options it presents, so at this level we are both free and determined. The influences that shape our preferences are pre-conscious, unconscious, and conscious, and involve complexities of sense-data perception not open to analysis, so it is unlikely that we will either recognize or submit to a determinist solution even if such a solution is ontologically demonstrable.

(3) Though determinism cannot be shown to be total, it certainly can be, and has been, shown to be influential in preferential and circumstantial freedom, so an alloyed view of the antinomy is likely to persist. So long as scientism remains a temptation in our culture, we are likely to overestimate the empirical means of investigating the composite nature of freedom and determinism and mistakenly to minimize the rational bases of natural and preferential freedom. The result will be to obscure the origin of human rights in the equality of kind that is our natural freedom, to minimize the role of conscious reasoning in judgments of quality and morality, and to focus exclusively, and wrongly, on circumstantial freedom as the only freedom that matters in questions of human action.

See you next week.

S. Del


A Preface to the Determinism Problem

I am bothered by an anomaly in my moral universe and hope to investigate it over the next several weeks. At issue is a contradiction between the central pillar of science, that the material universe is contingently deterministic, and what I take to be a universal human orientation to that universe: we are free to choose. On the one hand, you have to weigh recent history’s most impressive human achievement, the glorious accomplishments of the natural sciences, all based on the predictability of their objects of study. Empiricism is nothing without the laws of causality that produce testable predictions about everything from subatomic particles to the background radiation from the Big Bang. Causality… hypothesis… confirmation… determinism. Everything we see around us operates according to laws of nature that have been discovered, deepened, and linked over the last five hundred years. In essence, these laws (with an exception or two I will discuss below) forbid freedom to their objects of study. But then there is the bothersome problem of… us. Our free will is the essence of the human experience, so much so that our species is called “wise man” for our ability to discern choices, weigh them, and choose the best from among them. We are choice-making machines programmed with natural freedom to see choices, preferential freedom to choose, and circumstantial freedom to act. Since the Enlightenment, the human sciences have been stymied by that freedom, and it has crippled their fevered attempts to be taken as valid empirical studies on a par with the natural sciences. So you can see my dilemma. It seems I must embrace two contradictory positions, and this violates the fundamental truth test of the virtual circle, the principle of non-contradiction (please see my posting of August 6, 2013 for more on this). I dislike discovering anomaly, but I dislike ignoring it more.

The free will/determinism question is an old issue in philosophy dating back to the pre-Socratics, but the triumphs of natural science have cast it in a particularly bright light over the last century or so. As with all intractable philosophical problems, we have seen a multitude of attempts to resolve the apparent contradiction, falling into three basic categories. Empiricists typically embrace a determinist outlook that regards our sense of freedom as a delusion. Despite the spectacular failure of the human sciences to crack the nut of free will, determinists assume that advances in neurology, genetics, and other medical sciences will one day lay to rest the antique notion that we are free to choose and will reveal not only the mechanisms that compel our behavior but those that deceive us into thinking we control it. Those who argue that our freedom is real rather than delusory are called libertarians, but I will try to avoid that term because of its political associations. Among the free will advocates, religious absolutists argue as they have since the time of Zoroaster: the reason humans act differently from all other things in creation is that they are different. The existence of the soul, the spark of the divine in each person, elevates humans above the material and the determinism that molds it, lifting us to share in the divine, lending us a bit of the uncaused cause that moves itself without determinism. The third position attempts to reconcile these contrarieties and is called compatibilism, arguing that we are in some way both determined and free. My real purpose in this investigation is to delve into what I take to be a convincing argument for a certain kind of compatibilism and to look into its implications for issues of warrant, but these waters are deep, so this week I will set up the issue and explore some of the complications it implies for the virtuous circle, a true grasp of the unity that is reality.

It may well be that the empiricists have got it right, that one day we will look back on this controversy with the same smug superiority with which we now view Egyptian astronomy or the humors theory of medicine. Certainly the odds are on science’s side, for it has rolled back the mysteries of the material universe with metronomic regularity. But this particular issue has a catch that might make science’s task well-nigh impossible. Consider for a moment what we now know about the brain chemistry of strong emotion, say depression or falling in love. Does it make the slightest bit of difference to those in either emotional state to speak of neurotransmitters or serotonin? Does anyone really care if science maps out the brain structures of parental love or criminality? Yes, we care enormously if these things can be affected. Imagine taking anti-depressants or aphrodisiacs to resolve emotional or psychological issues. You don’t have to imagine: anti-depressants are among the most prescribed drugs in the U.S. today. But should the determinists scope out the mental self-delusions that produce our sense of moral freedom, what would they do with that information? Note that I am not discussing the pragmatist response to such discoveries, only the truth issues involved. Brave New World solutions would hardly seem improvements over the self-deception that we are free, would they? Imagine for a moment that you could be convinced that your freedom and the responsibility that comes of it are the result of this or that mental error or process, and that you actually act like every other thing that exists, only now you know why and how. What would happen to your vaunted moral freedom and responsibility then? Would anyone willingly give up the one thing that makes us most human? Give it up for what? To be a thing rather than a person? I wager you would instantly choose a response contrary to the empirical prediction, and if that were the real prediction and you were told of it, you would choose a third response, and so on. We are cussed characters, aren’t we?

For these reasons, I am most skeptical that empirical science will ever resolve the free will debate as it has so many others. We are blessed or doomed always to feel free and always to feel responsibility for that freedom, even if it could be shown that we are not free. I must mention one exception to the exclusivity of human free will in the material universe: Heisenberg’s uncertainty principle seems to offer subatomic particles the freedom that physics denies to the things those particles constitute. Though physicists have proffered arguments that the indeterminability of subatomic particles might somehow conduce to human freedom, no real connection has been made, nor has anyone argued convincingly that their unpredictable randomness constitutes real freedom as humans define it.

We might embrace the opposite conclusion with more success, and here is one of those rare moments in our culture when the arguments of religious absolutists carry more water than those of the hardest of natural scientists. For the life of me, I cannot understand why religionists don’t go to this well more often. For five centuries they have given ground to the empirical sciences, so much so that they are left with very little ground on which to stand (please see my postings of February 16, 2014 and July 16, 2013 for more on this). When I was young, religionists challenged science to explain the origin of life. They don’t do that much anymore, and that question has joined Zeus’s thunderbolts, Paley’s watch, and Thoreau’s Mother Nature on the discard pile of arguments for religion. In our own day, we are seeing cosmologists challenge the “first cause” argument, postulating a “universe from nothing” that removes the divine from the act of material creation. Yet in the face of defeats that began with the trial of Galileo in 1633, religious absolutists have this trump card to play: why are we free? Every argument of religion versus science can swing on the simple truth that there is an argument, with the attendant implication that we are free to decide in favor of one view or the other. And that very freedom constitutes a convincing argument that we are unlike everything else that science studies, as the woes of the human sciences have repeatedly revealed. There is nothing anomalous about claiming that we are the only free things in the universe precisely because we are not merely things, and every scientist knows in his heart of hearts that his free will invalidates the very foundation of the scientific enterprise, at least in regard to himself as an object of study. Why don’t religionists use this argument more insistently?

Granted, their position entails a belief that extends knowledge, as all proper beliefs do; in this case the knowledge is that we alone are free in ways like nothing else we know of. Should the reason for that sense of moral freedom implicate God, it would take us not only beyond the proper sphere of science but also beyond the limits of knowledge and into that corona of beliefs that projects knowledge’s light into the misty distances we cannot know (for more on that issue, please see my posting of January 12, 2014). In terms of the coherence truths that build virtual circles, there may be much more to think about in this matter, but the quest for the correspondence truth that completes the virtuous circle must stop our inquiry here.

But correspondence offers no impediment to investigating the third option, compatibilism. We have seen a number of attempts to square this particular circle, to find some way for us to be both determined and free. Some of these efforts recall Descartes’ solution to the mind/body problem. In answer to how spirit can interact with matter, Descartes chose the brain’s pineal gland as the organ that transfers our will to our body. Somehow, he thought a small material thing would be more responsive than a large one to what he took to be the whispers of spirit impulse, though why soul should have such weakling power was never explained. Perhaps a closer parallel is Luther’s notion of unearned grace that offers salvation, which we must still choose to accept, an act in conflict with his anti-Pelagian notions of human depravity. It seems to me that all such efforts to imbue us with just a touch of freedom only diminish the scale of the problem rather than resolve it. If we have any freedom at all to choose, we are not determined, even if that freedom involves only our natural and not our circumstantial freedom. When Thomas Hobbes questioned whether passengers on a sinking ship facing the choice to jettison their baggage to keep her afloat could be called free, he answered that the ability to choose among even coerced options constitutes freedom. Others imagine a prisoner chained in a dungeon who can still think of his mother: deprived of all action, he is still free to think (please see my posting of November 20, 2013 for more on the nature of freedom). It seems to me the human sciences have been pitching this kind of limited and hybrid freedom since Locke’s tabula rasa, and the influences of genetics and environment have been of continuing interest to sociologists and psychologists. But the unspoken assumption underlying these kinds of investigations, indeed all attempts at compatibilism from the empirical side, is that the kinds of influences we can see (the sinking ship, the chains, economic class, gender, race, nationality, culture) are only the tip of the iceberg and that continued empirical effort will excavate other directive forces. The trend in this sort of compatibilism is toward more determinism, conducing to the extinguishing of free will itself.

As part of their rear-guard efforts to maintain relevance in the face of science’s triumphs, philosophers have come at the issue of compatibilism from a different direction, insisting that our sense of freedom may be a phenomenological quirk of the human psyche rather than an ontological reality. In this view they may be taken to be less sanguine about an eventual resolution than the empiricists and less comfortable with the distinction between person and thing than the religionists, arguing that all we can hope for is a clearer understanding of the mental operations that somehow submerge this contradiction in our consciousness rather than highlight it. The issue has pragmatic and legal implications, for the criminal justice system struggles with allocating responsibility in its deliberations on guilt or innocence, but to my mind the most important connections touch on the justifications for our truth and goodness claims. I will assay what I take to be the strongest warrant for philosophical compatibilism next week.

It does intrigue me that we are not more bothered by the anomaly, though. After all, the greatest intellectual revolutions of recent memory, Marxism and Freudianism, were rooted in determinism, and the postmodern deconstructionism that has sought to raze modernism since World War I is built upon a foundation of contingent determinism and anti-rationalism. Our freedom fetish (please see my post of November 20, 2013) is a backlash against that postmodern charge of determinism, so the issue certainly has contemporary and pragmatist resonance. Yet as deeply intertwined as the issue is with the history of the last century, we all still act as though our freedom and the responsibility it entails are indubitable. So our choices seem to boil down to these three. We can openly embrace our freedom and boldly proclaim ourselves the only free things in the universe, with all the implications for theology that inspires. We can bow to the triumphal march of empirical science in its quest to bring free will into the sphere of determinism while still doubting that success in the endeavor will disabuse us of the sense that we are free, a process that seems to be underway at the moment, but one with disturbing implications for our orientation toward science and religion and for the kinds of warrants we use to justify our claims to truth and goodness. Or we can investigate some philosophical explanations for the anomaly while keeping our eyes wide open to what those explanations mean for our declarations of value. I’ll pursue this third option next week.

Until then,

S. Del
