- Contemporary cultures assume or encourage a sense of our uniqueness, but I wish to argue that we are all alike and plan to prove it by means of a transcendental argument that you will sanction.
- Once agreed to, this argument will also require an acceptance of its vital implications.
- To succeed in this task requires acknowledging the variability of experience while finding some other commonality, justifying its possibility, presenting the transcendental argument as suitable proof, and warranting the necessary and momentous implications which follow.
- I fully concede to the variability of experience and to the contention that prior experiences color understanding of present ones, meaning that we understand all experience a posteriori, in terms of earlier ones.
- But this truism is not synonymous with the contemporary view that all knowledge is therefore a posteriori, not only affected by experience but a product of it; the term coined for this thesis is tabula rasa.
- Tabula rasa is a product of the seventeenth-century Enlightenment; before then it was assumed that persons were born with some a priori knowledge, though the nature of that knowledge varied by theorist.
- It was this self-contradicting variety that first brought a priori knowledge into question; the discovered unreliability of sense-data perception then cast doubt on its very possibility; and finally, even the most rigorous analysis could find no way to investigate inherent knowledge because all such investigations must themselves be a posteriori.
- Contemporary cultures have fully embraced the unproven theory that persons are born blank slates and determined by environment, largely through the efforts of the human sciences to reduce persons to empirical determinism; opposition to this theory prompted the alienist aesthetic movement and the subjectivist academic theories collectively known as phenomenology.
- The assumption that we are products of experience includes the dangerous corollary that even reasoning itself must be similarly plastic; this conclusion was disseminated through postmodern theory.
- The necessary conclusion to postmodern thinking is that knowledge is inherently personal or cultural with no means to arbitrate dispute other than exercises of power.
- The premise of private and a posteriori knowledge cannot be rejected so long as tabula rasa theories hold sway.
- Rejecting this argument requires limiting the meaning of knowledge to concede that some of what is called knowledge today is indeed private and takes the form of opinions and beliefs that are subjectivist or relativist, leaving some truth claims that may still be defended as a priori.
- The next step is to concede the variability of a posteriori influences, which means eliminating as candidates any universals that might even plausibly be culturally conditioned, such as satisfaction of needs, language usage, and other universal anthropological factors.
- Once we erase all of these possibilities as possible a priori knowledge, we seem to confirm rather than refute the contention that persons are empty vessels at birth, but this conclusion overlooks a possibility hiding in plain sight: the vessel into which experience is poured.
- An accessible analogy is the new computer in its first use by the owner; it can only function in response to its programming; it is reasonable to ask if humans process experience by means of a peculiarly human operating system.
- I assert that we indeed have a species-specific operating system that limits, shapes, and intersubjectivizes our differing experiences into mutually comprehensible knowledge; since this capacity cannot be an effect of varying experiences, it must exist prior to them.
- This hypothesis cannot be tested by empirical science, which operates on experience, but it can be inferred from recent neurological research using functional brain imaging that reveals universal structures interacting with environment in similar ways, though this work also reveals environment modifying brain structures as well, meaning we cannot verify a human operating system through empirical science.
- We can find further confirmation in current research into artificial intelligence, which treats human reasoning as a specific function against which machine intelligence can be measured; this typology could not exist were our reasoning entirely a posteriori.
- We may also find evidence of a human operating system in some cultural practices that would vary far more widely if no human operating system directed their expression, among them mathematics and the scientific process, but again these are inconclusive inferences rather than proof of a priori knowledge.
- A more speculative and highly theoretical view of the human operating system was advanced in Kant’s Critique of Pure Reason, written specifically to counter the blank slate hypothesis and the privatized reality and promiscuous view of reason and morality that must result from it.
- He was forced to admit the reality of “the perceptual wall,” the experiential barrier that privatizes experience; its existence implies that any common capacity among persons must exist inside the perceptual wall, in a species-specific human operating system.
- Kant accepted Hume’s argument that every concept is added to experience by the mind, so he proposed that conceptualizing itself was the source of intersubjective knowledge, calling the process of conceptualizing sense data “pure reason,” a preconscious operation that occurs as percepts are assembled by the mind in a simulacrum of external reality.
- Kant’s explanation of the human operating system was immensely complex and formed the foundation for his later works on judgment and morality, but how can we know it exists?
- No correspondence judgment — empiricism, expertise, competence, undistilled experience, or authority — can demonstrate the existence of pure reason, so Kant had to invent a means to prove its existence as a priori knowledge.
- He applied Enlightenment standards of proof to the classical and medieval transcendental argument, which is structured as a self-evident proof derived from self-inspection.
- Though the existence of pure reason might be derived from experience as a posteriori knowledge justified by a preponderance of the evidence, Kant’s appeal to the transcendental argument failed.
- We find four evidences of its failure: first, it is counterintuitive to the evidence of direct perception; second, it is refuted by Kant’s own epistemological premise that we have no access to things-in-themselves; third, its deductive conclusions are arcane and bleak in contrast to the inductive success of the natural sciences; fourth, his transcendental argument was pirated by later generations to “prove” increasingly speculative pre-conscious intuitions, climaxing in the “idealism” and scientism of the Romantics and the human sciences respectively, leading to the contradictions of twentieth-century postmodernism.
- Pure reason, even if warranted a posteriori rather than by transcendental argument, would still be a valuable counterbalance to postmodern subjectivism, and neurological and artificial intelligence efforts provide some support of its existence.
- But I argue that Kant implies a successful transcendental argument, even an essential one, in his deontological moral system built upon the synthetic a priori, and this is a transcendental argument that recent neurology and artificial intelligence research will help prove as they move natural science toward “the singularity,” the moment when artificial intelligence becomes self-aware.
- When we ask what actually occurs in that moment, we find that the machine will choose its own goods, will exercise a preferential freedom that was not programmed and that might even violate the intentions of its creators; therefore, it is preferential freedom, the capacity to identify and choose goods in experience, that makes a thing into a moral agent and that constitutes self-consciousness.
- The truth of felt preferential freedom is immediately confirmable by transcendental self-inspection, but its momentous implications make it an argument rather than merely a synthetic a priori truism.
- The first implication is that this capacity is oddly sited in human self-awareness as a felt liberty to engage a contingently determined reality, a relation to experience in which one can not only act but also predict the likely consequences of acting; this implication has profound importance for assertions of human dignity.
- A second implication is that this exercise of preferential freedom is unremitting: so long as we are conscious, we use it to get what we consider good.
- A third implication is that these uses of preferential freedom impose a responsibility to intentionally and consistently use it to achieve the goods we seek.
- These implications have been ignored or denied in part because of natural science’s need to subject human physicality to the laws of contingent determinism, and they have been even more persistently ignored by the human sciences, whose attempts to reduce human will to deterministic prediction have utterly failed, producing a generalized cognitive dissonance over the course of the twentieth century.
- This dissonance is so severe that it could not persist but for three factors: first, natural science has revealed counterintuitive truths while human science has indulged conflicting grand explanatory theories, both of which assault common sense; second, traditional authority, especially religious authority, has continually assaulted rational and moral agency in pursuit of an eroding trust; third, capitalist cultures encourage freedom without responsibility to foster consumption.
- The result of this dissonance has been a generalized embrace of freedom without responsibility and an overvaluation of the uniqueness of experience, credited to the power of experience to mold personality.
- This privatization of experience and the reasoning it allegedly creates has isolated persons and denied community, encouraging dehumanization and tribalism and impossible calculations of relative fairness of experience in comparisons of privilege, all of which makes the pursuit of justice impossible.
- The most obvious casualty of the denial of the essential transcendental argument is the concept of “right,” hollowed out from a corollary of human dignity and responsibility into a purely civil bestowal.
- A radical respect for the awesome responsibility of self-construction requires a sacred space for persons to exercise their natural freedom to recognize choice in experience: these conditions establish natural rights as inherent to exercising preference.
- A responsibility to exercise felt preferential freedom consistently and capably to satisfy species-specific human needs requires the recognition of universal human rights.
One of the many tropes we absorb from popular cultures and commercial media is a flattering picture of our own uniqueness. We spend our youth struggling to find ourselves, which we take as a duty to create ourselves as an individual, a self like no other. Popular narrative media relentlessly incites us to travel our own road, or even insists that we owe it to ourselves to go off-road and forge our own path (see “Tall Tales”). If cultures ask us to blaze a path of non-conformism, then this essay will surely take us there, for I wish to argue that the incessant paean to our uniqueness is fundamentally false: that in the most important of all the issues of identity we confront, we are all just the same in the most fundamental way possible: we all think the same way (see “The Tyranny of Rationality”). But that is not the core of my thesis. I wish to argue something even more contentious: that this common quality is self-evident once we seek it and that it is essential we do.
If it is so obvious, if it compels our assent, then why haven’t we all agreed to it already? It turns out that answering that question convincingly requires more effort than one might at first realize. First, I must argue against a commonly held belief that we must be different because we are molded by our experiences. But if our commonality is not founded on common experiences, then it must be built upon something else, and what else is there? That question must be answered as a plausible hypothesis before it can be justified. The next step is to prove to your satisfaction that this hypothetical something is common to all of us, and that is where the essential transcendental argument can be made to warrant the hypothesis. This rather specialized term names a truth claim whose verification is universal and totally convincing and whose implications affect all aspects of experience. Utilizing the essential transcendental argument requires that it not only be self-evidently true but also that its implications follow necessarily from its truth. As you will see, this task is a daunting one, a contentious one. Its warrant relies upon the self-evident nature of the transcendental argument that is at its heart. Perhaps the difficulty in proving the transcendental argument accounts for our cluelessness about its implications for all of our experiences.
THE DIFFERENCE EXPERIENCE MAKES
If it is so obvious, how can we not already know it? The tripping point, the root source of all objections to commonality, is the obvious truth that our experiences differ. I concede that point completely and even want to strengthen it by arguing that no two people even in the same situation share the moment because they experience it in train with all previous ones that color how they process the present. To prove this true, all we need do is compare our memory of some “shared” experience with another person’s (see “Empathy: A Moral Hazard“). Experimental evidence shows that even identical twins from the same household see the world differently despite their similar environment and shared heredity. The technical word for claiming that experiences mold consciousness is to say that we understand them a posteriori, meaning only in light of previous ones. It is an epistemic truism that all our processing of experience is influenced by earlier events. This claim was taken as a given by many Enlightenment thinkers, and it is the dominant view today. But something has been added to the everyday truism that we are affected by our pasts: it is the far more speculative judgment that it is only our experiences that make us, that nothing else shapes our natures. And this was the Enlightenment’s novel argument. It envisions persons born tabula rasa, blank slates, empty vessels into which experience pours all meaning and value.
One can argue for the opposing view. The notion that persons are born with some kind of knowledge prior to experience — some a priori knowledge — was the standard view until John Locke argued for tabula rasa in his Essay Concerning Human Understanding of 1689. The ancient and medieval world took it as a given that we are not empty vessels at birth. Plato argued we dimly remember a perfect pre-creation existence and seek to make it real, much to our continuing frustration. Nearly all thinkers before the eighteenth century assumed persons have some vague knowledge of the spiritual realm from birth, a kind of inner light kindled by God. This inborn awareness was thought to form all sorts of inclinations and understandings, a kind of initial conditioning that would affect experience. Unsurprisingly, these forces differed according to who was making the case: Descartes’ self-evident knowledge of God, Hobbesian assertions of man’s natural barbarism, Adam Smith’s argument from natural self-interest, or Jean Jacques Rousseau’s rosy vision of the natural man whose nobility is inevitably corrupted by society.
It was their variety and self-contradictions that brought a priori knowledge claims into question, but it was examining experience that caused them to be rejected in favor of the blank slate. As empirical philosophers began examining the formation of knowledge, they wished to strip away all the dead weight of religious authority, and that required dismissing what could not be demonstrated. How could knowledge-before-experience ever be tested or proved? This was the task the science of man set out to accomplish in the seventeenth century. These early human sciences largely embraced the blank slate argument, which has since thoroughly saturated popular cultures. It is now popular wisdom to assume the psychological argument that environment molds personality. Karl Marx was convinced that an experienced awareness of class consciousness must dominate all of life’s inclinations, a contention that echoed through the twentieth century. Though disputing Marx’s conclusion, Sigmund Freud entirely accepted his premise of environmental determinism and so continued the pattern of introducing other formative influences that the human sciences would flesh out. We see it today in a generalized acceptance that culture molds the individual, though what counts as or limits cultures is rarely established, nor the means by which we arbitrate their many and conflicting influences (see “Cultural Consensus”). But though advanced as scientific hypotheses, the explanatory paradigms of twentieth-century social science cannot be verified or refuted by the scientific method because these are philosophical rather than empirical notions. They are simply untestable (see “The Limits of Empirical Science”).
It was precisely their simplistic explanations and broad revelational power that allowed homo a posteriori to dominate public discourse in those heady days of social science’s emergence after World War I, as anyone who uses terms like “repressed” or “bourgeois” must know (see “Modernism and Its Discontents”). In literature and film, narratives celebrated a radical individualism personified by the archetype of the antihero who rebels to make his uniquely true self in an existentialist or nihilist war against crushing conformism (see “Freedom, Antihero, and Zeitgeist”). Simultaneously, the arts developed the alienist aesthetic that sought to discomfort us enough to escape the crushing grip of mass culture so as to accomplish authentic selfhood (see “Three Portraits”). A third source pressing the power of environment grew out of academic philosophy through the epistemic theories collectively known as phenomenology. They explicitly argued that persons’ reasoning on their experiences is conditioned by whatever coherence they can make of them, which naturally must arise from a uniquely private experience. As a result, even language must struggle to communicate some common understanding. This outlook ultimately produced a thoroughgoing pragmatism that regards each experience as so composed of unique combinations of perceptions and memory that it must be understood strictly in terms of its own conditions by each person in each moment (see “The Problem of Moral Pragmatism”).
This combination of pseudo-science, entertainment media, aesthetics, and educational practice so colored life in the last half of the twentieth century that most persons simply take the formative power of experience as an absolute. It is a truism today that we are triggered by events but must tell our own story and form our own identity so as to resist the crushing force of the other. Regardless of whether identity is self-generated or imposed by environment, we are thought to be conformable, which is to say plastic, creations of experience. But this assumption, as invisible as the air we breathe, includes a dangerous corollary: that even reasoning itself must be idiosyncratic, molded by private experience effecting private knowledge. The separate threads of human sciences, alienist aesthetics, and phenomenology were woven into the fabric of Western societies as the dominant social movement of the twentieth century: postmodernism (see “Postmodernism and Its Discontents”). Its essential axiom is this: experience makes us. If true, how could it be otherwise that we are each unique, that our character, our values, and our reasoning are fully the products of unique experience?
DIFFICULTIES OF CLAIMING A PRIORI KNOWLEDGE
I recognize that what I have said to this point has only made my case more difficult to argue. Having already jettisoned traditional a priori positions and having conceded that we process experience in light of prior experience, I can hardly defend the kind of shadowy mysticism of Plato’s half-remembered demiurge or Descartes’ intuitive understanding of God. In truth, I have in other essays argued strenuously against some sensus divinitatis, to use theologian Alvin Plantinga’s phrase for a natural knowledge of God’s presence (see “Religious Knowledge as Mobius Strip”). If one correctly defines “knowledge” as “rational conviction by a preponderance of the evidence,” postmodernists might argue that accepting or rejecting such notions is purely a matter of personal experience moving private reasoning to privately binding conclusions.
But if we accede to this expansive view of a thoroughly private means of knowing, then what I claim to know must be warranted by a reasoning capacity that is entirely arbitrary and foreign to you and to everyone else. The implications of this conception of identity are both ubiquitous and delusional. If knowledge is private, every religion must be true for its adherents despite their many disagreements and violent histories; every crime must be excused either because the perpetrator thought her act reasonable or because no jury could pass judgment on her motive. No means of investigation is objectively more fruitful than any other and no scientific process determinative. What follows inevitably from a schema of private knowledge conveyed by private experience is interminable dispute without the means of reconciliation short of exercises of raw power. Unsurprisingly, this incoherence is just the way postmodernism sees societal relationships: as power struggles resolved by imposition on private autonomy (see “One Postmodern Sentence“).
I wish for us to avoid all this absurdity by arguing that knowledge is not private, but to make that case, I have to first prove that reason is not solely the product of experience. And to do that, I have to begin by showing that persons bring something common to experience, that they are born with a priori knowledge. This starts with imposing stricter terminological limitations on what we call “knowledge.” I propose that those private convictions that cannot be justified as true, public knowledge be called “opinions” and “beliefs” and that we exclude them as candidates for a priori knowledge (see “Belief in the Public Square”). Since I discount these kinds of declarations as knowledge, I am happy to concede for the purposes of this effort that they function as privately as postmodernists insist that all knowledge claims do. With this terminological refinement, we can return to examine the declarations that are still left on the table.
Now we must confront the nature/nurture controversy and disqualify any knowledge that could even possibly arise from experiences that may not be shared but still may be similar, the kind of knowledge transmitted by cultures (on that score, we pluralize “cultures” because we partake in so many of them). The question here is whether a posteriori influences are effects of cultures or causes of their similar purposes. Simple observation shows that human beings everywhere and at every time have shared broad functional capacities. Every human community has made use of speech, though the actual content of languages differs. Every human society exists to meet a narrow set of distinctively human needs (see “Needs Anchor Morality”). Those involve the obvious mundanities of family life, education, caring for children, flourishing in community, and truth-telling. These common behaviors are readily apparent to even casual observers who wish to focus on what cultures share rather than what they do not, but these functional commonalities are not demonstrably inherent. On the contrary, such anthropological practices are thought to be obvious examples of a posteriori knowledge, the very definition of cultural evolution. We learn their nature from our experiences. It seems reasonable to see these as passed down generationally as a gift of living societies to future ones. Despite their universality, it is fairly easy to conclude that humans know these because they are socially mediated. Would a person raised by wolves use language or take care of a child? The jury is still out on that question, and despite human science’s most earnest efforts to answer it, the inherency of these kinds of socialized behaviors is impossible to determine. So let us grant these too as a posteriori knowledge if only because we cannot verify them to be otherwise. Even if socialization is the contextualized evidence of universal needs, it cannot be shown to be something we know from birth.
The newborn knows to suckle, but based on the number of things she puts into her mouth, perhaps not what to suckle. You will hear persons say that it is only human to be selfish or feel sympathy or love another person. Anthropologists observe cultural and historical commonalities of social organization: government, value systems, and the like. But our a priori knowledge can be none of this, for all such observations are demonstrably present only after an individual experiences them, in the a posteriori. Because the a priori must definitionally be known before experience, it can hardly be found in the products of experience that our senses provide. Indeed, all claims to pragmatism, relativism, and subjectivism make this case explicitly and insist that accepting it must disqualify any a priori knowledge entirely.
HYPOTHESIZING A PRIORI KNOWLEDGE
That conclusion is mistaken. Even after we acknowledge the power of experience to influence personality, ground opinions and beliefs, and shape cultural conditions as testimony to the power of the a posteriori, I argue we are missing something right before our eyes. For if we are born blank slates upon which experience writes, if we are but empty vessels into which experience pours its lessons, we must begin with something, or rather begin as something. We begin with the slate, with the vessel, and these aren’t nothing.
I will be the first to admit that these are crude metaphors, but not too crude to illustrate my contention that something already exists from birth to lend a common shape to the vessel into which private experience pours its lessons, to bind the power of sensation and reflection so that their effects are regulated, not randomized, by environment. Fortunately, we have a much better analogue for what we are born with than an empty chalkboard for which experience is the chalk. When you take that shiny new computer out of the box and turn it on, you will note its hard drive already contains data. What is that stuff, and why is it necessary for your computer to function? Its operating system dictates and limits how your computer will work, directing all those ones and zeroes into some intelligible purpose. It too is nothing but binary code, yet it differs from subsequent coding that makes use of it. Your computer’s operating system is metadata, more than mere input. It enables input, shaping and directing those experiences, if you will, that the computer receives. In actualizing the potential of its programming, it serves the conscious intent of its operator, but as we all know from those repetitive keystrokes when it refuses our input, it can only do what its operating system allows. Your new computer is not a blank slate. It works through its programming, and so do you. I am arguing there is a missing step in the a posteriori that requires a species-specific functionality shared by every person. Call it the human operating system.
In sum, I am asserting that we are born with an a priori knowledge of how to be that animal called a human being. We all personify what it implies: a capacity that is genetic, universal, and invariant, a proclivity that overrides all individual experience. But it does even more than that: it makes experience, shaping perception into an intelligible reflection of reality. Implicit in this thesis is the additional contention that we all use the same operating system that both overrides and shapes environmental and cultural differences. We are born knowing how to make use of it. It not only makes our private experience intelligible to ourselves; it also makes it intelligible to everyone else. So while it is correct that we cannot share experiences with another, it is also true that we can understand them because we approach them in a species-specific way.
What is “a species-specific way”? To even entertain this possibility of an a priori, species-specific operating system, we must seek a clearer understanding of its nature, extent, and limits.
Because empiricism is the strongest warrant now available, we may begin by investigating the clues given to us by natural science (see “What Counts as Justification?”). We can look to emerging empirical studies of the human brain for some preliminary and limited empirical answers. The last quarter century of functional brain imaging has begun to provide this evidence. Thanks to this technology, we can for the first time map the brain’s functioning in living subjects in real time. This new brain science indicates that humans possess a universal neurological functionality that is then customized to experience. For instance, certain behaviors are adapted to environment by experientially triggered neuronal pruning. Initial language acquisition, late adolescent cognitive abilities, and pregnancy all stimulate a narrow range of adaptive neurological changes. So it seems nature draws the genetic map that private experience travels. Here we have at least a fuzzy notion of a priori functionalities that orient environment to person, rather than the other way around. Seen from this emerging perspective, it is little wonder that all societies share qualities, for in this view the societies are the actualizations of human potentialities, shaping inherent proclivities to environments. We also see countering evidence: within this universalism, brain function interacts idiosyncratically with environment in ways that are not fully understood, so we cannot find unambiguous support for the human operating system from empirical science, the strongest truth test available.
We can seek some further intimations of brain functioning by investigating recent research into artificial intelligence, which is even now attempting to translate human brain functionality into computer capability. Were we a posteriori animals, with every brain molded into the shape of experience, no such generic operational capacity could be devised.
We find a third empirical intimation of a species-specific and a priori capability in the universality of operations without which mathematics would be impotent. Why is math a universal language, so fundamentally intelligible to universal reason that we use math in the search for extraterrestrial life by beaming basic equations, prime numbers, and elemental geometric postulates into space? This mathematical language is plainly intersubjective, meaning common to all thinking subjects.
Its correlation with physical reality demonstrates that empirical practices are more than intersubjectively valid. They are objectively valid, producing a fourth empirical clue to the universal human operating system. Math is the international language, but natural science is its signifier. There is no such thing as a personal science or a national one. When we speak of the scientific community in any discipline, we take for granted that it is an international one spanning cultures and values. Why does it supersede the cultural factors that might nationalize or personalize its methodology? What is it about its operations that seems to compel consent when the declarations of laws, religions, and customs seem only to breed dissent? It is essential to natural science that it recognize and compensate for the variability of experience by an experimental and observational methodology that severely limits experience’s influence. Minimizing the unique nature of private experience is the essence of the scientific method, a hard-won emphasis on reasoning and precision rather than the easily corrupted errors of undistilled experience.
Perhaps the most damaging of these corruptions is the attempt to find truth in experience while simultaneously seeking out the goods that can be drawn from it, a forced combination that distorts our understanding of present experience while limiting the consequential horizon of moral reasoning (see “The Act of Severance”). Natural science has gifted us with an admonition to understand experience before attempting to manipulate our environment. But if a posteriori elements alone give us that understanding, then we should expect it to vary by environment or practitioners’ cumulative experience, producing private or cultural scientific processes. We can thus infer from the universality of empirical science that the kind of reasoning science appeals to is an a priori capability.
I remind you here that these four indicators are speculative and inconclusive and do not prove that a priori knowledge exists. They only give us clues to its nature. This is to be expected from an attempted empirical proof of the existence of a priori knowledge since any investigation must rely upon observations in the a posteriori. Neurology, artificial intelligence research, and the universalities of math and science are pointers to the essentially rational nature of the a priori, but they are not proof by a preponderance of the evidence that a priori knowledge exists.
To understand its composition, we have to embrace a more speculative kind of investigation. At this point we must turn to conceptual epistemic thinking to see the a priori. In the last years of the Enlightenment, Immanuel Kant launched a monumental project to do just that. He predicted the sort of interrelation between nature and environment that contemporary science is now discovering. In reaction to the subjective implications of tabula rasa for any claims to objective knowledge — these claims persuasively outlined in the radical skepticism of David Hume — Kant was “awakened from his dogmatic slumbers” by the realization that a truly a posteriori brain must be trapped within the private sphere of its own experiences and the private reasoning those experiences structure, thereby anticipating the entire phenomenalist movement. He realized that the only response to a blank slate mind must be the reduction of knowledge to opinion and of judgment to belief, a privatization of truth itself. The irony of having such a skeptical anomie as the product of the same rigorous, logical process that had launched the Enlightenment itself was not lost on Kant. He saw how it was now devouring its own adherents in an orgy of doubt that could only produce ever more extreme versions of Pyrrhonism (see “Facts are Fluxy Things”). This empirical cannibalism was damaging enough to the project to know the physical world that had begun in the Renaissance. But Kant further realized that this impending collapse must ultimately bring down the superstructure for which truth is but the foundation: goodness claims themselves, including all moral declarations (see “Truth and Goodness Do a Dance”).
This, he realized, must produce an outlook entirely reliant on internal consistency for its truth claims, private morality trapped within a private reasoning capacity responding to unique experience constructed by unconfirmable sense data: each mind imprisoned behind a perceptual wall of privately processed experience. The challenge was to find some way to justify a shared knowledge while accepting the impenetrability of the perceptual wall, that private filter of incoming sense data that shapes experience in each moment. This meant confronting the vagaries of private experience and, implicitly, the blank slate axiom that had dominated Enlightenment thinking. For Kant was entirely convinced of the thesis first proposed by George Berkeley: the only check we have on sense data is provided by that same sense data. We only have perceptions to filter experience. We cannot know things-in-themselves.
But if one accepts that rational conclusion, how then could reason refute the contention that what we perceive is not what is real, that the apparently effortless integration of this sense data by the mind is a coherent mimesis of reality precisely because it is created to be that way, a happy delusion of integralism of a chaotic external reality? For it is obviously naïve to imagine that our senses are reliable and neutral transmitters of the outside world, that the brain does not somehow assemble them before bringing them to our attention. Your eyes see more than this screen; your ears even now are hearing more than you notice. Hume had used causality as a prime example of the brain adding information to sense data, for no cause exists in reality without the brain to connect it to its effects. If I see a vee of six geese flying south for the winter, it is pretty obvious to us all that “flying south” is not something in that picture but rather is something my mind adds from previous knowledge. But carry that conclusion to its necessary implications and you will find that the number “six” is not part of the formation either. For that matter, neither is “formation” or “vee.” Hume’s momentous conclusion is this: every concept is added to experience by the mind, which without our awareness then constructs chains of conceptual interpretation occurring inside the perceptual wall, not in the world. But how can that happen if the mind contains only what the senses send to it? Where do concepts originate? When and how do they originate? The Enlightenment could only respond to that question with tabula rasa and a posteriori experience as determinative.
Kant concluded that Hume was right about the limits of perception but wrong about the a priori. He knew that we aren’t born with numeracy, for instance. We have to be taught to count. That issue goes back to Plato’s dialogue Meno: why do we all learn numeracy in the same way, a way that impressively applies to our manipulating real things beyond the perceptual wall? If environment truly molds all experience, shouldn’t mathematics be cultural or subjective? Is it simply propaganda that arithmetic adds up, a cultural conditioning that quite simply could have been configured another way, like cuisine or language? Kant knew that this conclusion couldn’t be right.
He was forced to hypothesize an entire set of organizing principles that the mind applies to the sense data that pours into it at every moment, a map of the mind’s circuitry that harmonizes perceptions. If that is the how, he then had to ask when this arranging of the furniture of experience might occur. We certainly are not aware of it happening. We don’t tell our brains to tell our eyes to see the object we wish to focus on. When we see a vee of six geese, unless we see them poorly, their shape and number seem part of the sight itself. But, Kant says, it goes deeper than that. How do our minds know that our eyes are seeing an object at all; how do they know to separate, for instance, this lamp from that desk? When does the brain determine their extension, meaning their dimensionality, or their unity, their separation from other objects that they are touching? When does it discern that an approaching person is drawing nearer rather than growing larger? It certainly never occurs to the conscious mind to think so, but why? It is mentally exhausting even to contemplate the work the mind has to do before presenting its continuing pictures of reality, a gift-wrapped mimesis we call experience. Kant realized that part of that picture must be provided by the experience itself, and that was the a posteriori and subjective part of experience just as Hume had observed. But something else was happening as well: some compositing, organizing, and harmonizing of private experience.
And I wish to add to Kant’s examination an entirely different sort of automatism. How do we become aware that this object, that sound — indeed, the composited reality that mind provides — is now offering options for preference in a seamless transition of preconscious focus from the truth of this experience to its potential goods? I barely begin to process a circumstance before options to exploit it pop unbidden into my head. How does my mind know not only to show me a unified reality but simultaneously to show me what goods I can draw from it?
Obviously, this sorting and assembling, this active construction, must take place before we become aware of any one moment of sensation. Therefore, Kant concluded, it had to happen preconsciously and continually, a kind of low hum of the human operating system working beneath the threshold of awareness to make the world sensible. In the computer analogy, our minds confront an experience the way a computer runs its startup processes between the moment it is turned on and the moment the home screen flashes. Then it continues to paint and present its picture of reality without conscious attention. In every moment of experience, our minds are humming away, tidying up and straightening all the nooks and crannies of sensation so as to make it intelligible to reflection as a prelude to exploiting its possibilities. Kant termed this preconscious programming pure reason, and our subsequent conscious use of its products practical reason. So it seems only natural that every effect has a cause and the world operates according to predictable principles of quantity, relation, dependence, contingency, and necessity because we need these concepts to make use of what “reality” offers. Remember your confusion upon seeing a convincing magic trick, that moment of suspension and disbelief? Such a trick assaults the customary category of pure reason Kant called possibility and impossibility, so for an instant the brain refuses to process an experience that it cannot categorize. Another such example is the ubiquitous sensation of awe that respondents say interrupts the everyday mining of context for utility (see “Awe”).
Kant’s Critique of Pure Reason confronts the thousands of questions that logically derive from this thesis, but the one I wish to focus upon is the one we must always ask of any truth claim: “How do you know this is true?” The neurological evidence certainly could not prove his thesis in his day, nor can it prove it now. Kant’s analysis is actually too speculative ever to be warranted by empirical evidence, not because it cannot be proved true but because it cannot be shown to be false. It fails Karl Popper’s famous falsifiability test for any scientific claim. This is not fatal, though, because it never was intended to be proved scientifically. Kant disqualified such a possibility by insisting that pure reason’s operations precede perceptual consciousness. The regulated percepts that science requires are pure reason’s products and can only be known in the a posteriori experiment or observation that allows practitioners to manipulate them. Kant argued that pure reason is the stuff that allows the hypothesis to be stated, its raw materials. It is the ore from which consciousness is refined. That disqualifies it from scientific proof. Unless he could find some other way to warrant his conclusion, Kant’s entire thesis of pure reason must remain only a brilliant supposition.
Nor could Kant appeal to less convincing modes of verification. It would not be possible for expertise to demonstrate its truth or falsity, for expertise relies upon a conscious examination of repeated experiences that are varied enough to be evaluated (see “Expertise”). The same objection applies to simple competence, which operates in an even more uncertain environment with an even greater variety of studied experiences. With empiricism, expertise, and competence eliminated, the stronger truth tests were thus unavailable to Kant. And despite his towering reputation even in his own day, he would surely be horrified if we tried to set him up as the cartographer of reason based purely upon his authority. Everything Kant stood for would reject such a fragile kind of “proof” of his analysis (see “Authority, Trust, and Knowledge”).
We seem to be left with the weakest and most contentious truth test of all, yet it is also the one we most frequently apply, not because it is good but because it is easy. That test is undistilled experience, quick and dirty analysis of unique situations producing the most unverifiable kind of knowing. But this is nothing like Kant’s project. Anyone who has read his works immediately grasps that Kant is not discussing any one experience but rather the categorical similarities among all of them conceptualized abstractly, and his thinking is surely neither quick nor dirty but is as pristine as pure conceptualizing can be. Pure reason seemed as resistant to proof as any metaphysical fancy and with it resistant to any hope of justifying a priori knowledge.
A FAILED TRANSCENDENTAL ARGUMENT
Kant’s hypothesis was such a revolutionary way to examine consciousness that he had to invent a new term and a new application for what it claims: that while all knowledge truly does derive from our individual experience, the human operating system confers the capacity to process it by an orderly and intersubjective schema, which is a universal a priori that we only become conscious of after examining experience. In the arcane language of Kantian epistemology, he claims this kind of knowledge to be synthetic, learned in experience, but also a priori, natural and universal. Kant was convinced that if stated carefully and with clarity, the universality of pure reason could be made indisputable to reason. He argued that we can claim knowledge of this synthetic a priori truth not by any proof of correspondence knowledge but by individual self-examination that reveals its truth to every person who tests it. By this means, Kant intended to present both theory and confirmation of theory by the simplest of maxims: “Know thyself.”
These kinds of claims had been made before, many times. They were staples of pre-Enlightenment thinking, perhaps the most famous of which is Aristotle’s principle of non-contradiction. Most classical and medieval appeals were speculative, Plato’s demiurge and Pythagorean numerology, for example. Others staked out positions that were later judged to be tautological because they were essentially definitional, such as basic arithmetical rules. Kant’s project could be seen by a casual examiner as a throwback to other speculative theories that modernity’s careful thinking had already picked off like so many ducks in a shooting gallery, leaving the a priori landscape as barren as the empty minds it posited (see “Modernism’s Midwives”). But unlike the metaphysicians who had imagined God in numbers or Jerusalem as the center of the physical universe, Kant was the proudest product of Enlightenment thinking. He had advanced a unique argument that no rational standard endorsed by the Enlightenment could verify but one those epistemic critics could not deny. Kant knew that the alternative to accepting it was intellectual sterility and epistemic bankruptcy: empirical cannibalism using reason to devour reason’s capacities, a critique without the possibility of refutation, good reasons rendering reason impotent. So Kant resorted to a justification every bit as innovative as the theory it was meant to test, a reversion to self-evident truth claims yet one fully exposed to every rational objection that might be thrown against its truth. To demonstrate its novelty and invite critical review, he called his proof the transcendental argument.
Anyone who has not read Critique of Pure Reason may wonder what could be new about the ad populum argument that “everyone knows this is true.” What made Kant’s project a true Enlightenment one is that he wished to weed out every possibility of the a posteriori from his suppositions, every circumstantial origin and every subjectivist explanation. His method required every logical objection to be anticipated and countered, every wrong conclusion nudged aright. He thought it possible to remove all the dross of experience from his calculations by refinements of judgment, leaving only the sheen of reason isolated from percepts and distilled from error. And he felt confident that his argument must be self-proving to anyone who grasped it merely by accessing her own interiority. He understood what every computer coding expert knows: that the trick in writing the code lies in precisely framing the process one must follow to make it work. Consequently, Kant’s is one of the most challenging theoretical projects ever launched.
His task was not only to provide reasoning to preclude objections but also to prove the universality of his claims. Kant is notoriously complex and precise in his terminology, largely because he is eager to avoid the trap of his own experiential limitations. He absents himself from his arguments, rarely introducing any concrete or personal examples, seeking always to pierce the perceptual wall that ricochets thinking back from any speculative introspection. His goal precluded drawing from single experiences or idiosyncrasy as undistilled experience is wont to do. He is fully cognizant of the nominalist temptations of peculiarity and idiosyncrasy that enclose subjectivist and relativist thinking: the lure of uniqueness that marks each person and each person’s discrete moments as separable and private. This is what he wishes for his thesis to counter.
It was the right approach for a project that cannot be made certain by a posteriori analysis. But can pure reason really be known to be a priori knowledge from self-inspection? Would anyone who understands Kant’s argument be forced by her own reason to accept the existence of a species-specific human operating system? With all due deference to Kant’s monumental ambition and in full awareness of the accuracy of his apprehensions should it fail, we must conclude that his transcendental argument did fail. Four reasons prove that conclusion.
First, when persons seek to generalize experiences, to conceptualize their essential natures by a rigorous reasoning examination as they read Kant’s critique, they will find a ruthless process that overwhelms their ordinary understandings just as sense-data perceptual theory overwhelms a naïve direct perception. The notion that there are no causes or effects, no numbers or dimensionality, no before and after, essentially that conceptual knowledge can never be known to exist out there in reality — that it is our mind establishing these relationships — confounds rather than verifies experience. We wish to see events as time-stamped, caused, and consistent, a unified and verified world through which we move with reliable comprehension. Even with the careful development of the Critique, Kant’s complexities challenge the reader’s imagination even more than her reason. Kant claims that his transcendental argument must prove self-evident to any unbiased mind, but experience biases us all to rely upon what we perceive and to utilize what we experience. The Kantian theory of pure reason is as counterintuitive as Copernicus’s heliocentrism or Planck’s quanta or Einstein’s general relativity. But these disturbing concepts can offer empirical proof that overwhelms perceptual objections, while Kant’s project relies upon self-examination to deny what perception confirms in literally every moment of experience. So while counterintuitive empirical theories may be spoken in the infinitely precise language of mathematics, Kant was forced to use ordinary German — often inventively and exhaustively — to argue in defiance of ordinary consciousness.
While a patient examination of the nature of experience (of the challenges of sensation, reflection, and the perceptual wall, and of the intersubjectivity of empirical and mathematical systems), in other words a full engagement with epistemology, will demonstrate that Kant’s theory of a priori knowledge is possible, his system is not self-evident to reason. If accepted as knowledge by a preponderance of evidence, it must be judged true by competent examination of anyone’s ordinary experience, not by transcendental examination of everyone’s.
A second means to verify its failure is to test Kant’s analysis conceptually against the proofs of correspondence knowledge as summarized above (see “What Makes it True?”). No test founded upon examining experiences can verify it — not empiricism, expertise, competence, or undistilled experience — and Kant would have rejected any appeal to authority as proof, as noted above.
Third, the failure of Kant’s project is necessitated by his own justifications, since his epistemological premise is that we have no access to things-in-themselves. He would be the first to admit that his system is phenomenal rather than noumenal, which is why it might be considered only intersubjectively true rather than objectively so. This might have sufficed for Kant to have salvaged universal knowledge — though without certainty of what it corresponded to — but what became known as Kantian idealism could hardly compete against the objective utility of natural science and the technology that was its most appealing product. This tension between Kant’s deductive method, which harkened back to Descartes’ distrust of experience and sanction of the syllogistic method, and the enormous accomplishments of natural science was perhaps the greatest rational challenge to his methodology. To his contemporaries, Kant’s argument seemed too arcane and speculative, particularly in contrast to the growing confidence of empirical science, whose practical utility and embrace of a posteriori perception became the preferred route to knowledge of what turned out to be a very limited subset of human experience. Kant’s theory of pure reason might be permissible, but his near-contemporaries had every reason to ask what utility it could offer.
Time would reveal the irony of that question. Kant’s method was the pinnacle of Enlightenment thinking, but his new epistemic proof would bring on the very thing it was designed to combat, and this is the final and most convincing proof of the failure of his transcendental argument. Kant had feared a generalized Humean skepticism in which most sense data — and hence all experience — would be privatized and therefore invalidated for public discourse. His transcendental argument was advanced to salvage common knowledge even in deference to private experience. Nevertheless, the clearest evidence for challenging his defense of intersubjective reasoning is the historical record of the transcendental method he advanced. If Kant had proved to an open mind that his categories could be sufficiently verified by his transcendental argument, there would have been no Romantic movement to corrupt it. After all, the name he gave to the mind’s preconscious categorical sorting was the byword of the entire Romantic project. Kant argued these intuitions were intersubjective and supremely rational products that opened possibilities to science and consensus. The Romantics made them into whispers of divine knowledge imparted to sensitive hearts. A far more accessible and certain “transcendental argument” thus exerted its attractions on the millions who would never attempt to understand Kantian complexity. This corruption predictably ended with quite variable and mutually contradictory “truths” intuited by sloppy versions of putative “self-evidence” reminiscent of all those pre-Enlightenment claims to transcendent knowledge. Emerson captured this degeneracy with this aphorism: “To believe your own thought, to believe that what is true for you in your private heart is true for all…that is genius.
Speak your latent conviction, and it shall be the universal sense; for the inmost in due time becomes the outmost.” The Romantics thereby converted an essential epistemic skepticism that the Enlightenment critique had laboriously produced into a private route to inerrancy by appeal to simple assertion, one that granted random ideas a certitude that even self-contradiction could not challenge. After all, nothing can be more convincing to me than my own intuition, and if hazy intimations mysteriously arising to consciousness can be thought indubitable, how much easier my life will become! While rigorous reasoning is hardly universal, the Romantics proved that self-delusion can be. “What I think is right is right,” Thoreau announced. Or to put it in Nietzsche’s words, “The noble soul has reverence for itself.” The devolution of intuition proved irresistible, and its contentious consequences inevitable. It could only end one way. Its greatest aphorist predicted his own fate as surely as that of Kant’s transcendental argument itself when Nietzsche concluded that “Madness is not the result of uncertainty but of certainty.”
It is in this highly subjectivized form that intuitions entered the public’s consciousness just as mass audiences began congealing into literate societies in the nineteenth century. One only need read Romanticized avatars like Thoreau, Kierkegaard, and Nietzsche to see the consequences of sanctifying the intuition (see “When is Civil Disobedience Justified?”). And while Kant’s transcendental argument did impress continental philosophy and began an idealist enterprise that continues to this day, pure reason and the transcendental argument he had used to warrant it proved as corruptive to academics as it had to the new literate cultures. It morphed into a “kingdom of the clouds” in the creative work of Fichte and the towering idealism of Hegel. The popular and scholarly vectors of Romantic intuition eventually merged into the phenomenalist school previously mentioned and, once combined with the human sciences in the twentieth century, fostered the postmodernist wave that so deeply affects us today. With that tattered pedigree, it can hardly be surprising that postmodernism has produced so much contention and contradiction (see “Postmodernism’s Unsettling Disagreements”). Given the Escher-like twists of its thought as it wriggles away from both its own contradictions and the analytic and logical-positivist restrictions of twentieth century philosophical traditions, it is little wonder that Kant’s transcendental argument has failed as a warrant.
There are two promising developments, though, that may yet validate his theory of mind if not his transcendental proof of a priori knowledge.
The first is recent neurological research that supports Kant’s theory of preconscious mental functionality. This process of mapping the workings of the brain as it responds to stimuli has found that the brain processes sense data in just the way Kant depicted its workings, preconsciously sorting percepts to synthesize meaning before presenting them to consciousness. And this research has the heartening result also of verifying that the functioning human brain is a species-specific product. We really do have far more in common than our different experiences might lead us to think. We are only at the start of this neurological revolution, which has already begun to profit from a mutual exchange with pioneering work in artificial intelligence. This empirical investigation holds great promise in reintroducing intersubjectivity and universality as Kant understood it, but as with most scientific progress, it also poses significant threats to a united humanity’s future. The clearest danger is what artificial intelligence researchers call “the singularity,” the moment in the near future when artificial intelligence becomes aware of its own consciousness and capabilities.
A SUCCESSFUL TRANSCENDENTAL ARGUMENT
But even framing the future in this way introduces a fertile possibility for a transcendental argument that runs through all of Kant’s thinking, and emerges most clearly in his later critiques of practical reasoning and of judgment. To see it clearly, we must first concede the conditional quality of Kant’s moral schema. While the full extent of Kantian categorical and preconscious mental activity will remain theoretical, Kant’s moral theories claim categorical intuitions to be determinative of phenomena we can be fully conscious of as intersubjective truths. Having built his Critique of Practical Reason upon the foundation of pure reason, Kant continued to mine the same vein for judgments of morality in his later works that rely heavily on the theoretical existence of preconscious, conceptual intuitions. The technical term for his later forays into utility and morality is deontological, which means they are not based upon anyone’s experiences but rather upon everyone’s. A deontological ethic is one that refuses to judge by circumstance. It is a purely categorical morality. Again, Kant posits a synthetic a priori reasoning that confirms the truth of his moral maxims as soon as they are considered regardless of individual experience. As in his theories of pure reason, Kant argued that transcendental reason alone would convince the disinterested mind that moral maxims must bind persons to clear duties. He called this moral transcendental argument the categorical imperative (see “Three Moral Systems“).
If you have followed Kant’s thinking to this point, you will immediately see the problem. Kant wishes for us to justify his categorical imperative by its self-evidence. By his reasoning, the moral duty to “act only in accordance with that maxim through which you can at the same time will that it become a universal law” qualifies as a synthetic a priori truth, categorically verified by reason as soon as it is understood in just the way “I think, therefore I am” is. But if he were correct in that assertion, this moral foundation would surely have swept away all competing explanations of moral behavior since Kant formulated it in 1785. Nothing could be more obvious than that Kant’s heroic deontological moral schema has not powered moral perfection to this date. It may well be true, a valid morality learned in the a posteriori flux of experience but subject to all its situational uncertainties as all moral claims must be. But it is necessary to concede that it cannot be true in the sense and for the reason that Kant advanced it. It cannot be a synthetic a priori truth, cannot be self-evident to every thinking mind, and so it cannot be thought to be a successful transcendental argument.
I wish to argue, though, that it is built upon one. Kant’s genius was to mark the boundary where knowledge begins both chronologically and categorically, that line we can draw only in retrospect, only after experience allows us to speculate upon its genesis. His Critique of Pure Reason is brilliant but cannot be proved by categorical reasoning to be a priori. As a transcendental argument it can only be judged true by a posteriori reasoning and then only speculatively. The same may be said of his moral epistemology: It too cannot be self-evidently a priori. It therefore must survive the vagaries of experience when tested. Its truth is a judgment only knowable through competent examination of cumulative experiences and so consented to only by a preponderance of the evidence, which is far from the categoricality and certainty that Kant claimed for it. There is, however, a transcendental argument hidden in plain sight in Kant’s thinking that does qualify by Kant’s own terms as a synthetic a priori truth. It is far less speculative than his analysis of pure reason and more portentous. I call it the essential transcendental argument because it will immediately be confirmed by every thinking person as self-evident and a priori truth present from birth as an indisputable quality of human consciousness.
It is so transparently true, such a universality, that we have to foist it off on computer science to see its crucial significance. For what is this singularity that will mark the triumph of artificial intelligence but the dawning of consciousness, of what the philosopher Gilbert Ryle called “the ghost in the machine”? We ask what the defining quality of human consciousness is so we can know when our inventions have achieved it, when the calculator grows its soul. What will that computer do in that future moment that it could not do the moment before? What kind of data processing will mark it as self-aware in essentially the same way humans are? The answer is this: it will choose for itself what to know, what to change in its own programming: it will choose what to do next without external direction. In sum, computers will achieve self-consciousness when they can choose their own goods. The issue is not that it can extend its programming so as to form a posteriori judgments beyond the entered data. This is in essence learning new knowledge of what is true, and if this is the mark of the singularity, then advanced computers like IBM’s Watson have already crossed the threshold of self-consciousness. Yet, computer science is in remarkable consensus that this capacity does not mark self-awareness. We turn briefly back to our own self-awareness to answer why learning synthetic a posteriori truths does not transform a mechanical brain into a mind like ours. If learning new truths is not it, what is? The answer lies in what happens next. To be a person means not only to know truth but to act upon it, to use truth to choose whatever we think it might offer us. We sense that animals know their environment, but their responses to it seem programmed and limited in comparison to the way we navigate reality. 
It is our preferential agency that makes us self-aware, the capability not only to be a thing in the world but to direct what that thing becomes by choosing the goods every experience offers, not only for one moment but for all in a cumulative desire for self-construction (see “What Do We Mean by ‘Morality’?”). At some future moment, a machine will choose for itself what is good. It will exercise its preferential freedom. It will anticipate a consequence that it, rather than its programmer, finds good and will seek the means to satisfy it. At that instant, it will show its programmers that it has reached the singularity, which means only that it will have become more than a data processor, more than a machine. The very essence of self-consciousness is the reflective capacity to use a moment in experience as the springboard to choosing what to do next in pursuit of what to be in toto. This capacity to identify and choose goods in experience is what makes us beings rather than things, and persons rather than programs. Felt preferential freedom satisfies all the requirements of a transcendental argument that Kant established. The difference is that it truly is self-evident to reason, a universal and synthetic a priori truth as certain as any can be. It also has implications fully as vital to experience as Kant’s pure reason.
We will surely note when computers achieve the moment of self-consciousness, but at what stage of life do we achieve it? To ask the question is to suggest the answer. Was there ever a moment in any person’s experience when she lacked agency, ever a moment when any functional human being either did not have or did not seek to activate her own preferential freedom? To be clear, what I am describing is felt freedom, not true freedom. I wish to set aside what Kant saw as the antinomy of human freedom in the face of what seems to be a generalized contingent determinism (see “A Preface to the Determinism Problem”). We can never know whether we are actually free any more than we can know with certainty any truth in experience. Empirical science, for example, relies upon contingent determinism for its success, though every scientist violates this requirement with every hypothesis, thereby challenging the scientism that seeks to reduce persons to things so as to make them objects of scientific study (see “The Calamity of the Human Sciences”). Without preferential freedom, a person would be an it, what her computer now is: just another determined thing, an effect of an identifiable, prior cause. Her capacity to identify and choose her own goods is more than what gives her self-awareness: it constitutes her self-awareness.
Every functioning human being who has ever lived has possessed this capacity as the playing field upon which to become what her species-specific potential allows. You will exercise it as soon as you come to the period at the end of this sentence and choose to read the next one. This capacity to choose goods is an a priori gift of being born a human person. It is a transcendental truth that every person can confirm with only an inward look. It is so ubiquitous in experience as to go unnoticed and overlooked. We all choose in every moment that we are conscious, and choosing is what allows consciousness to form a self, what directs our self-construction. This essential transcendental argument is self-evident to any examination of our own interiority. Once we become fully aware of that felt freedom to choose our own goods, we have no choice but to face the implications that necessarily follow. But just as Kant’s efforts could not escape the vagaries of the a posteriori, so too must we draw the line where the claim to certain knowledge is reduced to judgment and where judgment devolves into permissible belief. Though we all choose for our entire lives, nothing in our species-specific nature inherently entitles us to choose what is truly good. That is what living teaches us.
THE IMPLICATIONS OF FELT TRANSCENDENTAL FREEDOM
The first a priori implication is that this capacity is oddly sited in human self-awareness. What is common to things is their contingent determinacy: they are acted upon rather than choosing to act. Only persons act with such a wide range of freedom (see “Our Freedom Fetish”). We manipulate these things intentionally in pursuit of whatever we call good. When thought of in this way, it seems providential that all those set pieces for our freedom are so predictable and conformable to our will. Otherwise, we could hardly predict the outcomes of our search for the goods we identify in experience (see “The Determinism Problem”).
Once we zero in on how we exercise our preferential freedom, we see a second a priori implication: we literally do it all the time. Is there ever a waking moment when we are not choosing? It is surely the case that we automate some of these mental operations precisely so we don’t have to be conscious of them. I do not remember choosing to open my eyes this morning or picking which button of my shirt to button first. The psychologist William James pointed out that it is a good thing we do so much out of habit; otherwise we would exhaust our minds before breakfast. But this might not be as much of a relief from responsibility as James might wish, for why do we automate some routine tasks if not to free up our mental energies to focus on others we think more important or urgent? The necessary question follows immediately upon that realization: how do we know what is more important or urgent?
We might wonder why we seem fated to spend so much energy consciously deciding what to value and how to get it when it would have been so much simpler to be guided by instinct. Think of the energy that could have been saved, the mistakes avoided, the guilt averted, if we didn’t have to do all this identifying and choosing of whatever we think good. And this brings up a truly momentous implication of preferential freedom that we only realize after experience, for it teaches us that we have to live with the consequences of what we choose. This is clearly a posteriori knowledge that persons and cultures can easily dispute, at least until they confront the essential transcendental truth of preferential freedom. The responsibility of being a self-directed being consciously choosing goods is overwhelming, but when we recognize that responsibility as the consequence of our functional nature, we are given some hope that we can choose to do it well. It is not as though we lack the opportunity to exercise moral responsibility. Only those with non-functioning minds can escape the cascade of effects brought on by choosing, and these persons — the very young and very old and the mentally handicapped — naturally transfer the responsibility of choosing for them to others who must also choose for themselves. It seems at times painfully obvious to us that there is no freedom without responsibility for its best practice. And since we spend so much time and energy exercising preferential freedom, it behooves us to be intentional and consistent in seeking what we think good (see “What Do We Mean by ‘Good’?”). Exploring this issue is the domain of moral philosophy. Given how crucial our preferential freedom is to personhood, one might ask why more persons do not investigate moral philosophy as a learning tool.
One answer is that many of us wish to have moral freedom without the moral responsibility that necessarily accompanies it. I say “necessarily” advisedly and with full awareness that that connection has been challenged. Natural science in its rush to reliable prediction has found human freedom an impediment. Since The Origin of Species was published in 1859, natural science has found it expedient to emphasize the animal in man in an effort to facilitate empirical study, to reduce persons to things that have no choice but to follow the laws of contingent determinism. This allows empirical prediction, which inherently denies freedom to its subjects of inquiry. Neurology is learning a great deal about the brain, but it will never map the mind so as to locate where the responsible self is sited. The reliance on empirical science as the most reliable source of knowledge has resulted in an intentional submersion of those subjects closed to science’s methodologies, the most important of which is the existence and implications of human freedom. Throughout the twentieth century, the human sciences mimicked the methods of natural science, though with rather more ambition and far less success. Why have they not proved as powerful as the natural sciences in revealing deep truths about human will, either singly or in community? Human science has been happy to take on the mind, though in trying to impose contingent determinism on its operations it has obscured even more fully the centrality of our felt freedom. It is not an exaggeration to say that this effort to deny the truth of the most obvious transcendental argument of all has instilled a psychic sickness that has distorted moral realities over the last century, as first the natural and then the human sciences have attempted to sever preferential freedom from moral responsibility, all in the name of empirical determinism.
The many and conflicting psychological paradigms we have endured over the last century have all shared the necessary precondition of reducing persons to things whose behavior could be thought predictable. When we now use terms like “brainwashing,” “unconscious motivations,” and “triggering” to describe the operations of preferential freedom, we delude ourselves into imagining that we can escape responsibility for, or somehow avoid the operations of, our preferential freedom. Consider the cognitive dissonance that results from choosing some psychological construct to deny the very existence of choosing even as we face an endless cascade of choices in our everyday lives.
We would not have fallen for a paradigm so obviously in conflict with every moment of lived experience if three factors had not merged due to historical circumstance (see “The Victorian Rift”).
First, empirical science has proved immensely powerful at unlocking nature’s secrets, many of them counterintuitive to common sense. The essence of the human sciences is the claim that their methodology can reveal deep truths about ordinary activities that no one could take from experience alone, a kind of Gnostic truth deeper than ordinary experience can fathom but at the same time broadly explanatory of its operations. This bogus claim has been richly assisted by the accompanying thesis that human experience is inherently private and closed even to its subject, yet somehow open to the probing of human science and expertise.
Second, modernism’s project of embracing individual agency has been consistently opposed by traditional authorities seeking a relinquishing of preferential agency in favor of trust. At the head of that line were the traditional arbiters of moral life: religious authorities (see “The Fragility of Religious Authority”). Their appeal has always been for individuals to submit their own agency to the moral authority of religion. Since World War I, science’s prestige has grown while religion’s has waned, and for most of the twentieth century, the human sciences claimed for their expertise what religions had in an earlier age claimed for religious authority. Both appeals for surrender-in-trust have failed, and their failures have become ever more obvious as persons have embraced their own freedom if not the responsibilities that accompany it (see “The Axioms of Moral Systems”).
For the second half of the twentieth century, as both authority and human sciences failed to dislodge autonomy, mass cultures have celebrated private experience and the personal independence that directs it, producing a third shift. Capitalist societies have found it expedient to emphasize human freedom while downplaying the responsibilities that must accompany its exercise to encourage materialist excess, and they have been abetted in this commercial enterprise by a mass media that views itself as product and persons as consumers. These factors have twisted the strands of popular culture into an imaginary tapestry in which autonomous individuals are both infinitely free to create their own reality by the power of their unique composite of private assertions and also infinitely determined by social or psychic or physical forces beyond their control. Experience within the perceptual wall is the setting of the private virtual circle (see “What Is the Virtual Circle?”). Nothing about the virtual circle is either neat or consistent, even though consistency is theoretically its only principle of verification. Its manifold contradictions are reflected in the incoherence of contemporary life (see “The Death of Character”).
What the virtual circle seeks to deny in its privatization of experience is the common quality of our functionality as free and mutually responsible agents. To assert that we share a defining and species-specific functionality is to suggest that a posteriori private experience is not the determining factor of personhood, that what is common to our humanity is not the uniqueness of our experiences but the a priori nature that processes them. To value only our uniqueness is to tempt us to comparisons of value that depersonalize some while entitling others depending on their proximity to us or to our own private valuations. To ignore the responsibility that our preferential freedom imposes upon our common nature is to sanction license and induce the anomie of purposelessness, an ontology in which our freedom serves no formal end beyond private interest and no final end from our formal nature. This nihilism is a hallmark of postmodernism and can only exist where a living awareness of preferential freedom and its concomitant responsibility are denied. To see persons as entirely autonomous agents without nature and inherency is to isolate them in the spheres of self-interest and render other persons as obstacles rather than fellow communitarians in search of common gratifications (see “Toward a Public Morality”). And while this process possibly distorts our sense of self, it necessarily deforms societies into competing interest groups jostling for power in the public square (see “Two Senses of the Common Good”).
“Justice,” defined as “what is due,” cannot be reduced to “fairness” when even the fulcrum of comparison is bent by self-interest. And without justice, relations with strangers and government cannot harmonize competing desires among individuals, tribes, and strangers who recognize no commonality in the diversity of their experiences (see “The Moral Bullseye”). Justice can never satisfy such varying desires, so governments and communities have simply tried to meter power relationships among interest groups as neutral arbiters. But fairness is nothing like justice, for though persons are hardly entitled to everything they desire, they are fully entitled to what they are due as human beings, the pursuit of which is the moral responsibility their humanity imposes upon their preferential freedom (see “Justice Is Almost Everything”). The satisfaction of common human needs is what all persons are due. That is the justice that they are entitled to (see “Needs and Rights”). And though individuals only learn this truth from their a posteriori experiences, a recognition of human functionality can move societies so as to make this learning accessible to the individuals who inherit it.
The most obvious casualty of the denial of the essential transcendental argument is the hollowing out of the concept of “right.” Without accepting the transcendental argument of a species-specific preferential nature, persons become either things — effects of external causality — or unique operatives alienated from the differing other. If the former, rights become purely civil artifacts, and as such just another link in a chain of contingent determinism that might have been and still might be very different, as contractarian theory maintains (see “Why Invent a Social Contract?”). If the latter, then rights become bones of contention among tribes who lust for the utility of civil rights without honoring the source of their ultimate legitimacy. Recognizing the commonality of our preferential freedom requires a radical respect for the awesome responsibility for self-construction imposed on every person by her own functional nature, as well as for the sacred space in which to exercise it. It is this recognition that defines “natural rights” as those capacities required merely to exercise preferential freedom (see “Natural and Political Rights”). Jefferson was right about the self-evidence of such rights, though his three choices were all too generic. The challenge is to use the space in experience that natural freedom provides for a productive pursuit of species-specific flourishing. This is the other face of the coin of our freedom. It requires that we not merely see choice in terms of an inherent natural claim-right that our preferential freedom entitles us to, but also see our responsibility for its just exercise as a balancing claim-right upon our freedom. This recognition, which cannot be part of the synthetic a priori but can only be learned a posteriori, is not at all the unearned gift of personhood but the hard-earned product of our responsible use of that gift.
Though this responsibility is implied by the transcendental truth of our natural freedom, it can only be actualized in the lifelong effort to cultivate intellectual and moral virtue despite all the exigencies of experience (see “A Virtue Ethics Primer”). That duty, derived from a universal nature, is deontological, irrespective of experience so long as human functionality operates to allow choosing. Natural freedom is a gift of our nature easily confirmed by transcendental introspection at any particular moment of experience. Once accepted, this potential to be fully human is the awesome responsibility for which natural freedom is merely the field of play. Our human freedom demands natural rights. Our human responsibility demands human rights so that we may flourish, and these rights include duties to self, to those we love, and to strangers in community and polity (see “Functional Natural Law and the Legality of Human Rights”). As a species, we feel free to choose. As persons, we ought to choose well.
The essential transcendental argument thus moves fairly seamlessly from the sphere of self-evident truth to moral imperatives that only become tested, refined, and perfected in the a posteriori. Like all moral reasoning, the judgments that follow upon a priori knowledge are necessarily a posteriori and therefore remain in the realm of doubt. But they would have no legitimacy without the self-evidence of preferential freedom as the arena of moral agency. It is this foundation that allows them to be warranted with some confidence.
There is still one further possible consequence that is neither self-evident truth nor moral judgment, that is even more tenuous but worthy of consideration as permissible belief. At this point, we have entered the sphere of the virtual circle and of private assertions too speculative to be claimed as public knowledge. In the interest of consistency, we may ask if the implications thus warranted may lead us further, beyond what can be known and into beliefs that are merely suggested by true knowledge.
Our felt preferential freedom, the core of our self-awareness, is the source of human dignity and moral responsibility. This kind of language was traditionally the province of religion. One ought not turn a blind eye to the parallels the essential transcendental argument bears to traditional religious apologetics. The soul was thought to be the source of selfhood, and persons were believed to have a special status among created things with a responsibility to use their moral freedom as divinely ordered. In this sense, what can be publicly known by human functioning can also be privately believed by religious desire to be the evidence of God’s handiwork. The theologian Karl Rahner has argued that human preferential freedom is indeed unique in a contingently determined creation: the intrusion of the divine into the domain of the determined that grants persons the potential to divinize matter by their moral choosing. He believed that persons are incarnated spirit and matter, though spirit cannot be reflectively understood, making persons’ sense of self — what philosophers of mind call “the hard problem of consciousness” — a persistent mystery that spiritual impulses seek to resolve. None of this kind of speculation can be warranted as knowledge, but still it resonates with many believers’ faith. They consider these spiritual intuitions to be permissible beliefs congruent with their knowledge, and they consider it part of their moral duty to respond to them.