A Genealogy of Illness Cost Coverage in the United States
Getting sick can be expensive. For the last 150 years, civil society, government, and market actors have devised strategies to help individuals and families cover the expenses and lost income that illness brings. To this day, Americans rank illness-related costs as a top political priority, and many are advocating for comprehensive reform of healthcare financing. To understand this discontent and chart a way forward, it is imperative to examine the history, politics, and values that have shaped the current system for covering illness-related costs.
The earliest method of addressing illness-related costs in the United States involved industrial sickness funds. The purpose of these funds was not primarily to finance healthcare services but to replace a portion of income lost to illness or injury. Members contributed a small percentage (often 1%) of their income to the fund and received approximately 50% of their salary if they could not work. The funds were most prominent between 1880 and the end of World War II, and by World War I they covered nearly 40 percent of non-agricultural workers. Mutual aid societies and unions frequently organized the funds, though some employers also established them. A key characteristic of the funds was a “community rated” financial structure in which each member paid the same percentage of their income and received the same percentage of their wages in the event of illness. The sickness funds were largely voluntary, even those officially established by employers. In many ways, the sickness funds formalized the concept of “passing the hat” on the shop floor. The community rating and voluntary nature of the funds were consistent with an ethic of mutual aid and solidarity. The funds often organized social events meant to strengthen this sense of mutualism and solidarity among the members. These events were not just for camaraderie; they were essential for managing “moral hazard,” the tendency of members to feign or prolong illness in order to collect benefits. By fostering social norms and obligations to fellow members, the funds discouraged fraudulent claims.
An important shift in illness-related cost coverage emerged during the Great Depression. In response to a significant, income-driven reduction in demand for healthcare services, many hospital and physician groups introduced prepaid plans that granted beneficiaries access to a predetermined amount of care each year. The first plan of this kind was designed by Baylor Hospital and provided a group of teachers with up to 21 days of inpatient care for $6 per person annually. This model laid the groundwork for the Blue Cross and Blue Shield (hereafter “the Blues”) plans that persist today. Though commercially oriented, these plans retained elements of the community and mutual aid focus of the sickness funds by maintaining community rating and non-profit status. However, they marked a significant departure from previous models of illness-related cost coverage: they were designed to finance healthcare services directly rather than merely protect families against income lost to illness.
Prior to World War II, commercial insurance companies rarely offered health insurance. Two major changes transformed the commercial health insurance market in the 1930s and 1940s. First, because predicting illness risk had historically been quite challenging, illness-related coverage was perilous for insurers. Advances in the actuarial sciences during these decades allowed insurers to estimate illness-related costs, giving them a more predictable cost structure and making it easier to set premiums that ensured a consistent profit. In particular, commercial companies could now offer lower-cost products to low-risk beneficiaries who were effectively “overpaying” within community-rated plans. The result of these actuarial advances was a significant migration from community-rated sickness funds and the Blues plans into commercial, for-profit health insurance. Second, during World War II, the Emergency Price Control Act allowed President Roosevelt to freeze industrial wages. Unable to raise wages, companies instead offered employees additional fringe benefits like private health insurance. At the beginning of World War II, only 9% of Americans had commercial health insurance; by 1950, that figure had risen to nearly 50%. By mid-century, in contrast, sickness funds were nearly non-existent. Ultimately, this change enshrined the profit motive in illness coverage.
The rise of commercial insurance not only marked a change in market dynamics and profit motivation; it ushered in a new way of conceptualizing illness-related cost coverage. The understanding of illness coverage moved away from a system grounded in mutual aid and solidarity and toward the sharing of abstract, individual risk with strangers who could now be viewed as sources of probabilistic threat within the context of commercial contracts. In many ways, this new marketplace transformed sickness-related cost coverage from a source of care for one’s neighbor into a form of individualized self-protection. It is for this very reason that religious communities like the Amish, who argue that commercial insurance degrades their community-animating virtue of mutual aid, refuse to purchase insurance.
As commercial health insurance became an established employment benefit, post-War America invested heavily in healthcare innovation. Between 1941 and 1951, investment in medical research increased from $18 million to $181 million, fostering rapid technological advancement in the healthcare system. Simultaneously, the Hill-Burton Act of 1946 provided significant federal grants for hospital construction. More hospitals meant more hospital utilization and higher expenses, as physicians were incentivized to fill the brand-new hospital beds and to use increasingly sophisticated technology. All told, these investments led to significant surges in healthcare expenditures: annual per capita healthcare costs rose from $103 in 1929 to $146 in 1960 and to over $1,800 by 1985.
As costs escalated, policy debate focused on expanding fair access to illness-related cost coverage. The employment-based insurance system left many vulnerable populations without coverage, and it became increasingly apparent that older and poorer Americans could not bear the escalating costs of healthcare, now viewed as indispensable for health. In 1965, the federal government established the Medicare and Medicaid programs to provide coverage for the elderly and the poor. This marked the first time that government had taken a substantial role in illness-related coverage. However, given the strength of commercial insurance companies and Americans’ wariness of socialized medicine, the United States refrained from establishing a fully government-funded system.
By the early 1970s, with the advent of the Medicare and Medicaid programs, the basic structure of America’s illness-related cost coverage had largely been set. Healthcare services would generally be covered through a risk-transference model organized around employer-based private insurance, a smaller individual private insurance market, and government insurance programs, with the mutualist sector’s role largely eliminated. Over the last fifty years there has been little structural innovation; reform efforts have focused primarily on controlling costs and increasing access to the existing insurance system. Federal and private insurers have developed various administrative tools to control costs by managing beneficiary utilization, such as prospective payment systems and utilization management. However, these efforts have been largely ineffective. Because health insurance schemes were primarily oriented toward funding a rapidly increasing demand for healthcare services, beneficiaries, physicians, and hospitals had little appetite for service restrictions. Costs therefore continued to escalate and were largely passed on to beneficiaries in the form of higher health insurance premiums.
Due in part to these rising costs, nearly 20% of the population remained without formal coverage into the early 21st century. Starting in the 1960s, numerous attempts were made to establish universal coverage, but all failed until Congress passed the Affordable Care Act (ACA) in 2010. The ACA increased insurance access by expanding the Medicaid program and providing subsidies for purchasing commercial insurance within government-regulated marketplaces. Though it lacked Republican support, the ACA was intended as a compromise: it leveraged commercial insurance to increase the system’s fairness while avoiding a single-payer, government-run system that would contradict Americans’ commitment to corporate enterprise and freedom of choice. The ACA also limited insurance companies’ ability to risk rate and established minimum essential benefits. Though these measures reduced uninsurance rates, they restricted plan availability and contributed to cost escalation. Because of persistently high costs, approximately 10% of the population remains uninsured. And even among the insured, high costs remain a nagging issue.
Over the last 150 years, strategies for covering illness-related costs have undergone significant transformation. The focus shifted from safeguarding individual and family income against illness-related work loss to ostensibly improving health by funding the expansion of the healthcare system. This evolution did facilitate increased access to more technologically sophisticated services, primarily through government provision and corporate fringe benefits. At the same time, however, the decline of sickness funds and early community-rated plans transformed a system rooted in voluntarism and mutual aid within civil society into one driven by corporate profits and by government compulsion and bureaucracy. Future renewal of the healthcare system may involve recovering the ethic of solidarity and mutuality that, however imperceptibly, still lies at the roots of the current system.
Grant Martsolf is Professor and UPMC Health System Chair at the University of Pittsburgh School of Nursing. His research interests include policy related to advanced practice nursing, men in health care, and the role of mutualism in America’s social safety net. He is the Director of the Headwaters Project, an initiative at the University of Pittsburgh dedicated to the promotion of the flourishing person as the central animating principle of the applied sciences.