After the Fall

Somewhere beyond fate and reason, the real work of being human begins.

I was almost eleven years old when I first heard about the accident at Three Mile Island. It happened in 1979, near Harrisburg, Pennsylvania. The news that spring was grim. The country was scared. Had there been a meltdown? How much radiation had escaped? It was a good time to have a U.S. president trained as a nuclear engineer. Though the country lived in the atomic age and knew that nuclear fission could be controlled and harnessed to produce energy (or devastatingly unleashed in the form of bombs and missiles), many people were unfamiliar with its mechanics.

At the time, my father ran a nuclear engineering firm in Seattle. Like President Carter, he had worked under Admiral Rickover in the Atomic Energy Commission, the forerunner of the Nuclear Regulatory Commission. My father calmed my fears by explaining how reactors were designed, how the process worked, what constituted safe levels of radiation, the potential issues, and the relative scope of the problem. Being an engineer meant anticipating what could go wrong and looking for ways to make things safer. It meant being intimately aware of design limitations, the physics of failure, materials strength, and system dynamics. In fact, he had invented a fail-safe device for a company that supplied the cranes used to lift massive pieces of equipment in building nuclear power plants: if the worst-case scenario happened, his device would prevent disaster.

Anticipating news stories for the thirtieth anniversary of the Three Mile Island accident, the Office of Public Affairs of the U.S. Nuclear Regulatory Commission released a backgrounder report. In one carefully worded sentence, the report summarizes the seriousness of the event: “Although the TMI-2 plant suffered a severe core meltdown, the most dangerous kind of nuclear power accident, it did not produce the worst-case consequences that reactor experts had long feared.” In one sense, it sounds as if we got lucky. The NRC identifies the seemingly insignificant initial failure that set the crisis in motion: “The accident began about 4:00 a.m. on March 28, 1979, when the plant experienced a failure in the secondary, non-nuclear section of the plant. The main feedwater pumps stopped running, caused by either a mechanical or electrical failure, which prevented the steam generators from removing heat.” The key phrases are written in the passive voice—they do not identify any people who actively did anything wrong. The failures represented a lack of some important component; things did not perform the way they were supposed to perform.

Creating a controlled nuclear fission chain reaction to generate electricity is a powerful example of applied scientific thinking and engineering—a challenge of Promethean proportions—and of what human design is capable of achieving. And yet, with so many fail-safe systems, so many attempts to plan ahead, things still went wrong.

The verb “fail” and its cognate noun “failure” trace their descent back to words that pertain to deception, such as the Latin fallere, “to deceive.” The English word sin (in Latin, peccatum) also denotes failure, being at fault, being guilty—being culpable, a word that comes from another Latin term, culpa, likewise denoting failure, fault, and blameworthiness (ill intention). Aside from the connotation of defect or a lack of some vital competency, fail is also related to the words fall and fault. This etymology reveals our culture's deep ambivalence about failure and its source: on the one hand, failure comes from without—one is deceived, tripped up; on the other, failure comes from within—an absence or shortcoming, an insufficiency or inability.

Moral philosophers have long framed the question of responsibility as a matter of free will. As far back as Aristotle, the question of causation—why things happen—has been broken down into subcategories. Aristotle described a material cause (what a thing is made of); an efficient cause (how a thing is created); a formal cause (the idea that guided the shaping of the thing); and a final cause (the purpose or intention for creating the thing). In those terms, something can fail materially, efficiently, formally, or finally. From a moral standpoint, failures of the first three orders might be considered matters of negligence or ignorance; real censure is usually reserved for the last—faulty intention. In that sense, it is the thought that counts. Indignation arises from the assumption that there was ill intention at work, that someone intended to do wrong or harm; evil intentions are what stir condemnation.

In the classroom, I am often faced with my students' sense that life is a test of character, that one is either a winner or a loser, and that failure of any kind threatens to reveal one's ultimate worth. They often identify modern thought with a scientific, humanistic outlook, one that is tolerant, enlightened, and reasonable—the counterpoint to an intolerant, sin-obsessed Christianity that drearily speaks of utter depravity and the “fallen-ness” of creation. Of course, the fullness of life is seldom as clear-cut as students assume it to be, and failure has long been on the minds of poets and theologians, who have written about it with a great deal of wisdom and compassion.

For many of my students who first encounter the Reformation theology of John Calvin, one of the strangest and initially most repugnant concepts they run up against is the doctrine of original sin, first formulated by the fourth-century patristic theologian Augustine of Hippo and famously rearticulated and expanded by Calvin to assert the utter depravity of the human condition. Calvin's doctrine of predestination only exacerbates the problem. To my students the doctrine sounds like a setup. “We have no choice!” they declare. “How can God blame people if they are destined to fail—by design? What kind of god puts a forbidden tree in paradise? Why not do what Zeus did to Tantalus and make the tree shrink away from Adam and Eve if they tried to touch it?”

Calvin knew that his doctrine of predestination sounded strange. In his systematic theology, the Institutes, he admits as much, observing that people “consider nothing more unreasonable, than that, of the common mass of mankind, some should be predestinated to salvation, and others to destruction.” In his defense, Calvin cites a long litany of scripture in which God explicitly favors some people with special treatment—for example, Abraham, Isaac, Jacob, and their descendants. But if God plays favorites, what happens to responsibility? If I am doomed to failure and have no hope of succeeding, why try?

In one sense, Calvin found spiritual truth and psychological relief by rejecting the notion that humans, through their actions, can save themselves. He said, in effect, stop trying to be perfect—it is futile—and moreover, God does not expect it. God knows all too well the ins and outs of human frailty. No matter how low one goes, no matter how many mistakes one makes, no matter how badly one fails—especially when it is one's own damn fault—one can still experience redemption through God's grace. Getting a helping hand can easily seem like a well-deserved reward for good behavior and worthiness when the times are good; getting a helping hand during bad times after one has patently failed is another matter.

And yet, one can't help but wonder, Job-like, why God would subject people, repeatedly, to such difficult tests, why put off salvation, why make it so hard? In the seventeenth and eighteenth centuries the question of God's relationship to the existence of evil was called the problem of theodicy. John Milton frames the question well in Paradise Lost:

... what cause
Mov'd our Grand Parents in that happy State,
Favour'd of Heav'n so highly, to fall off
From their Creator, and transgress his Will ...

Milton answers his question by imagining a celestial war between God and Satan—a drama played out on a cosmic scale. Satan works through the power of persuasion, while God, committed to free will, leaves Adam and Eve to their own devices.

Who first seduc'd them to that foul revolt?
Th' infernal Serpent; he it was, whose guile
Stird up with Envy and Revenge, deceiv'd
The Mother of Mankind ...

According to Milton, Eve falls prey to Satan's flattery, and Adam chooses of his own free will to follow her; failure is framed in the figure of an antagonist who preys upon human frailty. Evil becomes an active force to be resisted and fought. Satan, the deceiver, personifies failure as a kind of elemental deception—the sense that the world is constituted in ways that mislead the unwary, tempt the weak, fool the credulous, and swindle the greedy. The role of a moral person is, then, to resist Satan's power, or at least to find common cause with those who are taken in.

As the Puritans were despairing of England—seeing the hand of Satan at work—and leaving the Old World behind to start fresh in New England, a new generation of thinkers, scholars, mathematicians, and philosophers set sail for a utopian New Atlantis of their own, a world permeated by reason and grounded in method. Francis Bacon, G. W. Leibniz, Alexander Pope, Benjamin Franklin, and other Enlightenment luminaries repudiated notions of Satan and of cosmic battles for human souls. They looked to science and mechanics to instruct people about the way the world worked. Error, for them, was the result of ignorance and superstition. They acquitted God by arguing that He had conceived of the world as He did to maximize good and minimize evil. Lesser evils existed only insofar as they enabled a higher good. Or as Alexander Pope put it in An Essay on Man (1733–34):

All Nature is but Art, unknown to thee;
All chance, direction, which thou canst not see;
All discord, harmony not understood;
All partial evil, universal good:
And, spite of pride, in erring reason's spite,
One truth is clear, whatever is, is right.

In other words, truth would set one free from the fears and superstitions of the imagination, just as daylight reveals the absence of monsters under the bed. Lightning is not God's wrath delivered to those who misbehave; it results from a buildup of electrical charge in storm clouds, discharged in a sudden arc to the ground. Recognizing error, they reasoned, ought to inspire further study to improve one's knowledge and thereby improve the system—demonstrating the best of Enlightenment optimism and progress.

As Voltaire brilliantly illustrated in his satirical novella Candide, however, there is something disingenuous and dissatisfying about Pope's silver-lined conclusion that all is for the best. Voltaire asks us to consider the people who innocently suffer from the failings of others. Maybe some good will come of it in the long run—but why did they have to be hurt in the process, and meanwhile, what is one supposed to do? And for God's sake, how about a little wine, a little oil to tend their wounds? Voltaire asks what the philosophes offer by way of consolation, what can lift one up when one is in the depths, de profundis. How might one find solace in old theological doctrines in a world schooled in reason and inspired by the progress of science?

I had a chance to revisit Calvin's theology in graduate school through the eyes of two scholars, Friedrich Schleiermacher and Charles Taylor, both of whom attempted to translate theology into a modern vernacular. Schleiermacher was a nineteenth-century German Protestant theology professor, and Taylor is a twenty-first-century Canadian Catholic philosophy professor. Though very different, both writers begin with the premise that some of the most interesting and important conversations happen outside the parameters of what humans can control. Both thinkers draw heavily on the human experience of finitude, of being limited, of being frail, fallible, and subject to all sorts of injury and illness. Being human, one gets tired, hungry, sick—one feels the limits of one's energy, one's knowledge.

Schleiermacher begins his monumental work of systematic theology, The Christian Faith, with a concept called “the feeling of absolute dependence” (das schlechthinnige Abhängigkeitsgefühl), or “God consciousness.” The idea is that when one is most aware of God's presence, one is also aware of one's relative limitations. Indeed, the more one is mindful of one's own limitations, the more appealing the representation of the infinite becomes.

As a teacher I often have times when I sit on the sidelines and observe while students hash it out on their own, when they have to go it alone. In those moments, I am most aware of how limited my reach is—even if I wanted to, I could not protect them from making mistakes or keep them from failing. Students need to fall down and pick themselves up again, just as they need to graduate and start life on their own. And yet as they go off, potentially to fall or fail on their own, I can't help but hope there will be someone there to see them and give them a hand up—that someone will watch over them. Schleiermacher identifies in such feelings the foundations of religious consciousness. Mortals predictably fall away from this consciousness, as Schleiermacher well knew. People get busy, get distracted, become too self-absorbed, too enamored of their ability to shape outcomes, to plan and control. People forget the limitations of being human. For Schleiermacher, anything that distracts one from one's essential dependency, vulnerability, and insufficiency is tantamount to sin. In that sense, even feelings of guilt, failure, and despair can distract one from “God consciousness.”

With the backdrop of twentieth-century events to consider, Charles Taylor takes on Enlightenment optimism more directly, but he argues essentially the same thing as Schleiermacher: that the Age of Reason overlooked important aspects of being human. In Sources of the Self, Taylor argues that rational, scientific thinking encourages a kind of mathematical calculus to help one make decisions. When faced with a problem, engineer-minded thinkers look to the data, apply the appropriate formulas, do the analysis, and identify the optimal solution. Seems reasonable. And yet such a reliance on objective methods assumes that one will never be put in the position of having to make a real choice, a choice that defines and commits one in a way that cannot be justified by probability or reason. The objective mode is an attempt to abdicate moral accountability, to find another form of justification—the justification of reason. It assumes that doing the math, crunching the numbers, considering all the options covers one's bases.

The problem, Taylor argues, is that there are times and situations that force one to choose between competing goods, forgoing one when choosing the other. No matter what I choose, I have to let something go. To help one student means to abandon another. To teach these kids means that I don't teach those kids; I cannot do both, nor can I justify my decision. No calculus can absolve me from having to make choices—the kind of choices that will define me. Taylor observes that science offers little language with which to make sense of the human condition, to describe the experience of having to take a leap of faith and make defining choices. Taylor worries that the modern insistence on individual self-reliance, scientific proof, and rationality is self-stifling and self-mutilating—a self-inflicted “spiritual lobotomy.” Hope and faith and prayer may not be strictly reasonable, and yet they reflect some of the highest aspirations and sentiments of humanity.

Schleiermacher and Taylor maintain that when people are most aware of their humanity the real work of theology begins. Many of the most valuable experiences—love, wisdom, insight, peace of mind, epiphany, spiritual transformation—seem to arrive unannounced, unsought, unwarranted from without: welcome and needed, but not the result of one's careful planning and deliberation. The most valuable experiences seem largely out of one's control and, in some respects, are undeserved. What else is hope, asks Taylor, if not the “promise of a divine affirmation of the human, more total than humans can ever attain unaided”?

As vulnerable as humans are, the realization that life is uncertain often comes as a surprise. It is almost as if, through daily interactions with technology and the feeling that comes from inhabiting thoroughly engineered and controlled spaces, one begins to feel in control of one's circumstances. Failure becomes existential, weighty, consequential, and shocking: an affront to human dignity. Through the ages, theologians have emphasized that the Renaissance belief in human dignity, however laudable in some spheres, can leave one blind to the grace, forgiveness, redemption, and compassion that can develop from understanding human frailty. Dignity can help one keep one's head high, but it can also lead to pride and a feeling of entitlement and privilege that is nowhere guaranteed by the constitution of the universe.

Further into its backgrounder report, the Nuclear Regulatory Commission reflects on the long-term impact of the Three Mile Island accident. Here the authors change tack and admit that the “accident was caused by a combination of personnel error, design deficiencies, and component failures.” They do not blame impersonal forces, deception, or ill will; there were no insurmountable technical problems, only mundane, everyday failures. They acknowledge that people actively made mistakes, and, as a result, the NRC saw the need to improve regulations and oversight to make them “broader and more robust,” and to see that the management of the plants “was scrutinized more carefully.” These were things that the commission could control. But there were other, deeper consequences that, so far, have been beyond anyone's control. Beyond the real damage and dangers of the accident, NRC authors state that one of the unfortunate results of the accident was that “public fear and distrust increased.”

Some might argue that the NRC downplayed the risks involved in nuclear energy. Perhaps. But what strikes me is the commission's recognition that a fear of failure can be just as dangerous as overconfidence in one's ability to control technology. Certainly, given the vulnerability of life on Earth, caution is necessary. However, the fear of failing can also lead one to avoid taking risks, or to overestimate the odds of something going wrong and therefore put off the day of real learning.

Being willing to try, despite knowing that I might fail, implies a recognition of radical insufficiency: I can't keep bad things from happening. There are no guarantees, and I am not able to control all the variables. In my moments of strength and self-sufficiency I can feel in control and deserving of the success that comes with it. If I prosper, I attribute it to my wits and industry. When I fail, I feel most unworthy and alone. But strangely, these moments of defeat are the times when I am most willing to acknowledge that I cannot go it alone, to hope for support. And if help arrives, I am most capable of accepting and appreciating it.

A crisis demands the best of me even as it reveals disturbing shortcomings. It can also bring into view unknown resources and unexpected connections. A person confronted by profound failure might develop a faith that results from having survived failure—not a confidence in Providence, that things are all for the best or that there will always be a silver lining, but a faith that appreciates how completely dependent we are on others.

Back in the Garden, perhaps it was not the forbidden fruit that opened the eyes of Adam and Eve and brought a blush to their cheeks but, rather, the knowledge that they had failed, when for the first time they realized what it was like to be fully human.
