The Housing Crisis in the Humanities
On June 5, 1995, President Bill Clinton and Secretary of Housing and Urban Development Henry Cisneros announced that the golden age for home buying had dawned: gone were the days of archaic financial regulations and arduous application processes for low-income citizens. Their innovative strategies would eliminate the barriers of high down payments, high closing costs, and high credit-score requirements. They also intended to address housing shortages, limited renovation funds, lengthy loan-approval wait times, and complex applications. In their press conference, they described homeownership as a panacea that could elevate the character of America’s citizens, not just their credit scores. This moral improvement would, in turn, remedy a whole host of social ills. President Clinton reminisced that Hillary finally agreed to marry him after he purchased a cute starter home. Those who cared about instilling family values, he argued, ought to support legislation that would help families acquire homes.
Perhaps President Clinton’s compassion was well-intentioned; maybe he really could feel our pain. But the road to hell is paved with good intentions, and the fundamental economic laws of supply and demand still apply. It turned out that flooding the market with cheap, accessible cash created a vast pool of eager, underqualified home buyers whose numbers far exceeded the supply of homes available for purchase. Predictably, the surge in demand sent home prices soaring, especially in neighborhoods with decent school districts and local amenities. It also turned out that, once lenders realized they could diffuse their risk by bundling shoddy “subprime” mortgages and selling them on the secondary market as mortgage-backed securities, they approved loans to every applicant regardless of creditworthiness or ability to repay. The investment bankers who purchased the securities convinced credit agencies to rubber-stamp high valuations on them, even though the underlying assets were garbage.
Well, you know the story: boom went the dynamite. The tipping point occurred when banks began offering negatively amortized, interest-only, subprime, adjustable-rate mortgages. (Translation: loans with adjustable interest rates on which low-credit borrowers could pay the bank nothing each month and simply roll the accrued interest into their ballooning mortgages.) It turns out that homes can’t appreciate indefinitely, and when prices fell, mortgages went underwater and lower-income folks suffered foreclosure, bankruptcy, and ruined credit. And to rub salt in the wound, the federal government bailed out the very institutions that facilitated this wealth-transfer scheme.
What does the financial crisis of 2008 have to do with a project on the genealogies of modernity? I suggest we have a “housing bubble” in the humanities, one created by how we periodize subject matter relative to narratives of modernity.
Margreta de Grazia has shown that three nineteenth-century historiographies—G.W.F. Hegel’s theodicy, Jacob Burckhardt’s cultural history, and Karl Marx’s economic prehistory—all designated the Renaissance as the “inaugural epoch of the modern.” In the process, she argues, they transformed modernity from a deictic—a word whose meaning depends on context, such as “tomorrow” or “you”—into a term that signified a definitive temporal moment. Previously, when people designated something as modern, they generally juxtaposed it with the ancients, implied a contest between the two, and insinuated that one age was superior. Affixing modernity to a distinct period and place intensified the pejorative connotations associated with its predecessor, the Middle Ages. Consider how medieval can connote barbarism, cruelty, artlessness, or unsophistication, while modern can imply that something is innovative, nuanced, sophisticated, hip. Even scholars keenly aware of the value judgments embedded in the terms can fall into the trap. Early in his career, C.S. Lewis disparaged literature filled with the supernatural: “Why—damn it—it’s medieval!” Although he later regretted his “chronological snobbery,” he retained the intellectual habit of thinking in such terms and defined the age of Tudor literature as a “drab age.” To this day, the field of Tudor literature labors to throw off this nomenclature. Lewis’s example corroborates what James Simpson and Brian Cummings have observed: the mere fact of thinking along these lines simultaneously reifies the false binary and fails to scrutinize it.
We cannot entirely scrap the notion of modernity as such, nor can we prevent cultural authorities from making appeals to it. For one thing, periodization seems to be innate; to avoid breaking time into segments and distinguishing between eras, we would have to surrender our perception of time’s passage, which we seem unable to do. “We cannot not periodize,” as Fredric Jameson reminds us. For another, there are genuine watershed moments in history when things do change, such that it is plausible to tell before-and-after narratives that have some validity and utility. Nevertheless, if Bruno Latour is correct, modernity’s project of severing the present from the past altogether—of rupturing history’s continuum and beginning a completely new era—can’t be fully realized, so in that sense, we have never really been modern.
The consequences of concretizing modernity to a particular place and time are enormous. When modernity lost its modularity, it became an asset to be fought over, a territory to be controlled. Modernity became the cool neighborhood that scholars jockeyed to buy into. Efforts to locate one’s subject matter close to modernity were claims for disciplinary relevance, even “salvation,” as de Grazia puts it. It did not matter that the claims were far-fetched or that modernity’s purported characteristics are just as common in other eras. (This is why my field is now known as “early modern” literature instead of “Renaissance” literature, as if the new term were any less condescending.) This approach is intellectual gentrification built upon premises as faulty as the assumption that home values will always appreciate. Within such a framework, we have no explanation for why medieval thinkers constantly thought of themselves as modern or decried their contemporaries for “modern” ideas, or why thinkers as old as Augustine conceptualized all humanity as living in the Middle Ages (“in hoc interim saeculo”)—in the interim age between the cities of God and man. We cannot evaluate pre-modern literature without implying that it is little more than an awkward dress rehearsal before the real performance, which began when Shakespeare came onto the scene. But this framing diminishes medieval thinkers’ and artists’ creativity and skill, obscures their influence upon the “modern” masters who followed, and forecloses a productive dialogue between the two. I suggest we eschew this hubris, begin instead with a posture of circumspection and humility, and remember two things. First, the historical periods we have are not incontestable edicts from Father Time but invented heuristics that constrain our inquiry and generate the questions we can ask or even imagine.
Second, we should not underestimate the shaping force that whatever periodization schematics we select will exert upon us and our conclusions. We make the epochs, but then the epochs make us.
Daniel Zimmerman is a Ph.D. Candidate at the University of Virginia who specializes in early modern literature, dramatic historiography, and the theology of the English Reformation.