Knowledge in the Time of Cholera
In this book, I explain how debates over medical professionalization in the nineteenth century—conflicts over issues like licensing, board of health composition, government recognition of alternative medical sects—became epistemological, in the sense that underlying these specific issues was the animating question of what constituted legitimate medical knowledge. Professional struggle was inextricably tied to fundamental intellectual debates waged on the level of epistemology, in which the very identity of what constituted a medical fact was at stake. Or more accurately, these professional debates were made epistemological through a confluence of broad social changes, which enabled epidemics like cholera, and alternative medical movements, which seized the opportunities afforded by cholera to force allopathic medicine to justify its expertise in epistemological terms. Cholera and quacks joined to foment epistemological angst for allopathic medicine. And the eventual character of the profession would be inscribed with their indelible marks, as the allopaths would have to solve their riddles to achieve professional authority.
By emphasizing epistemological struggle and change, I provide a framework to better understand the professionalization of U.S. medicine and, in doing so, offer a more nuanced account of a key (if not the key) cause of the exceptionalism of the U.S. medical system. When professionalization is no longer viewed as flowing directly from medical discoveries, and instead is seen as evolving according to the vicissitudes of epistemic politics, a very different story of the professionalization of American medicine emerges. It is a story of missed opportunities, of intellectual roads not taken, and of significant contributions by alternative medical movements that have all but disappeared from our historical consciousness. It is a story of the recurrent failures of allopathic physicians to reconcile their professional aspirations with the democratic ideals of American culture, and the repeated acceptance and recognition of alternative movements by government institutions. It is a story of the consolidation of professional authority by allopathic physicians through a strategy that circumvented the state—and public oversight—by securing the financial support of private philanthropies. But foremost, it is a story about the long-standing tension between the logic of professionalization and the ideals of democracy out of which an exceptional U.S. medical profession was born—one that raises difficult questions about the role of professions in democratic cultures.
HEROIC DISCOVERIES, MISLEADING STORIES
Conventional accounts of U.S. medical professionalization link professional authority to improvements in medical knowledge without paying attention to the epistemological changes that underlie these “improvements.” In these narratives, the allocation of authority follows on the heels of scientific discoveries, properly meted out according to the merit of the knowledge attained. We can call these accounts, somewhat crudely, “truth-wins-out narratives.” The logic of these narratives, often referred to as the “diffusion model” (Latour 1987), holds that ideas, by their self-evident truth, force people to assent to them. Advocates of these discoveries are subsequently accorded requisite acclaim and authority, often in the form of professional privileges.
In the case of the professionalization of U.S. medicine, the truth-wins-out narrative has two variations. In its most basic form, professional authority was awarded to those who subscribed to true ideas, who made the key “discoveries” in the new science of bacteriology. “True” ideas won out over lesser, partial truths. Professionalization followed the imperatives of scientific progress, with the consolidation of professional power achieved through the gradual, rational incorporation of scientific principles into medical practice (see, for example, Bulloch 1979; de Kruif 1996; Duffy 1993; Rutkow 2010). The second variation of the truth-wins-out narrative shares the logic of scientific progress but adds the element of efficacy. Here professional authority is achieved not only through the power of ideas but also through the results they obtain, as developments in medical knowledge led to more effective therapeutic or sanitary interventions, which legitimated the professional authority of physicians. The germ theory rapidly led to new cures that justified its assumptions and solidified the authority of doctors.
The seductive common sense of the truth-wins-out narrative muddies how we understand the history of the U.S. medical profession, the progress in medicine generally, and the origins of the tremendous authority afforded doctors in the United States. It ignores the heterogeneity of nineteenth-century medical thought, reducing professionalization to a single linear process. Rather than exploring the ascendancy of one way of thinking about medicine, it tells of a single march toward progress in medical knowledge. It obscures significant ideas once entertained but ultimately discarded and dismisses alternative medical movements as aberrations or mere repositories of ignorance, that is, if they are even considered at all. Intellectual developments are decontextualized, depoliticized, and presented as evolving according to the dictates of disinterested research rather than as part of a specific professional project. And the dissemination of bacteriological ideas is given no explanation other than a vague appeal to the propulsion of their self-evident truth: “By this time [1880] the news of Koch’s discoveries had spread to all of the laboratories of Europe and had crossed the ocean and inflamed the doctors of America” (de Kruif 1996 [1926], 119). Exercising what E. P. Thompson (1968, 12) terms the “condescension of posterity,” the truth-wins-out narrative naturalizes professional authority, transforming the flux of the nineteenth century into something of a predetermined outcome.
While the truth-wins-out narrative is rarely laid out as explicitly as this, its underlying logic persists in our understandings of professionalization, as it squares with commonsense notions of the development of science, notions which the sociology of science has spent decades combating. But the history of knowledge does not conform to the strictures of common sense, and the truth-wins-out narrative cannot bear the weight of historical scrutiny. Take the two key discoveries in the history of cholera lauded in the truth-wins-out narrative—John Snow’s discovery of the waterborne nature of the disease in 1855 and Robert Koch’s identification of the cholera microbe in 1884. These true ideas, rapidly accepted, gave birth to bacteriology and scientific medicine, which led to the disease’s demise. A tidy story, yes, but it’s wrong. Snow’s famous cholera map can only be considered a tipping point in the debate over the etiology of cholera by reading history backward. The map, almost an afterthought in Snow’s work, had little effect in convincing skeptics of the validity of the contagion theory (Koch 2005; Vinten-Johansen et al. 2003). There is almost no mention of it in American allopathic journals prior to the twentieth century. Likewise, Koch’s widely reported discovery of Vibrio cholerae did not provide the decisive “win” for the bacteriological model of cholera (Rosenkrantz 1985; Warner 1991), as it was beset with inconsistencies that fostered widespread skepticism (Rothstein 1992, 267). And in terms of combating cholera, the bacteriological model did not produce much in the way of improvements in therapeutics (unlike diphtheria or rabies, no widely used cholera vaccine was ever embraced) or prevention (effective sanitary improvements were done in the name of the now discredited miasmic theory of disease)2 (Dubos 1987; Duffy 1990; McKeown 1976, 1979). 
These issues—the ambiguity surrounding the theory initially and the lag between the promise of the germ theory and its results—are not just evident in the history of cholera; generally, the biomedical model only yielded significant therapeutic advantages in the 1930s, long after it was accepted by allopaths as legitimate (Spink 1978). Thus, in 1892, the year of the final U.S. cholera epidemic and the dawn of allopathic professional control, the efficacy of the bacteriological model existed largely in its promise. This messy, ambiguous historical record of cholera thus raises two questions: how did this disease come to be understood as the work of a microbe, and how did this understanding get folded into the professional project of allopathic medicine?
Despite these problems, the truth-wins-out narrative has proven obstinately resilient, even as historians have challenged it on a number
of grounds (see Grob 2002; Warner 1997, 1998). Were it restricted to publications of the American Medical Association (AMA) or the myths doctors tell themselves, it might not be much of a concern. The problem is that its assumptions insinuate themselves into more critical sociological analyses of professionalization. As such, it is not enough to dismiss it as merely “hagiographic” (Warner 1997, 2).
It is not surprising that older functionalist accounts of professionalization embrace the truth-wins-out logic. When professions are viewed as arising to fulfill some preexisting societal need or structural imperative (see Parsons 1964), it is difficult to maintain the critical distance necessary to challenge the science that justifies such a role. But what of the more critical sociological research that arose in opposition to functionalism? These critical accounts depict professionalization not as a functional response to a societal need but rather as a political process that involves winning allies and creating a strong organizational infrastructure to promote professional goals.3 Still, even this research inadvertently reproduces a version of truth-wins-out logic. Here the issue is reproduction through neglect. In reacting against the truth-wins-out narrative, which gives undue power to ideas, critical analyses tend to ignore ideas altogether, mustering organizational and political explanations for the professionalization of medicine (e.g., Berlant 1975; Freidson 1970, 1988; Larson 1977). Through this silence, they unintentionally reproduce misguided assumptions of the truth-wins-out narrative; focused on the organizational infrastructure of professions, they neglect the intellectual infrastructure. Ideas come to serve merely as window dressing for the real politicking happening behind the scenes.
The power of the truth-wins-out logic is displayed in the way it insinuates itself into the preeminent sociological treatment of the U.S. medical profession, Paul Starr’s The Social Transformation of American Medicine (1982). Critical of both functionalism and purely organizational accounts of professionalization, Starr seeks to integrate organizational and cultural factors, recognizing professional authority as dependent upon force and persuasion (Starr 1982, 13). According to Starr, the AMA was able to consolidate professional authority once Jacksonian egalitarianism gave way to the Progressive Era’s embrace of scientific expertise. Seizing the zeitgeist, the AMA offered more effective ideas and carried out adroit political strategies to achieve professional power.
Starr’s work rightly remains the foremost sociological account of American medical professionalization. However, while Starr’s analytical approach of integrating both cultural and organizational factors is laudable, his historical analysis unfortunately reduces culture to an external context (e.g., Jacksonian democracy or Progressivism). He invokes ambiguous phrases to explain these macro-cultural mechanisms (e.g., “on the shoulders of broad historical forces” [140]). Moreover, he reproduces the logic of the truth-wins-out narrative by treating bacteriological discoveries as self-evident, ignoring the ways that their supporters worked to make them appear so. His respect for the cultural authority of science is so firm that his history suggests that once scientists got their facts straight, medicine was ineluctably transformed in ways that allowed physicians to capitalize “naturally” on the latest discovery or breakthrough. In reducing culture to an external context, Starr never turns his critical eye toward the actual production of medical knowledge.4 For Starr, the achievement of professional legitimacy is seen as the outcome of the removal of an external cultural barrier and the elucidation of crucial facts rather than a project to create legitimacy for a particular vision of medical science.5 This is not to pick on Starr; these analytical failings are common. If the poverty of the truth-wins-out logic can penetrate good sociological analyses, the oversight is systematic, the blind spot widespread.
Whither Epistemology?
The persistence of the truth-wins-out logic in the professions literature underscores the need for a more rigorous engagement with the sociology of knowledge. The existing accounts of the U.S. medical profession, both the more hagiographic versions and critical sociological analyses, fail to investigate epistemological change as an object of analysis, as a phenomenon in need of an explanation. Absent such a focus, they suffer from a basic misconception as to the nature of knowledge. Epistemological assumptions are seen as somehow timeless and outside of history. But it does not take a verdant historical imagination to see that often what is widely accepted as true in one period is dismissed as false in the next. Standards of truth change over time. History is strewn with the carcasses of discarded ideas once embraced as truth. Less apparent, but more significant, is that epistemological systems themselves have histories, waxing, waning, and even disappearing altogether. The life and death of ideas is not merely a matter of better or truer ideas supplanting older ones, but also of the emergence of entirely new ways of thinking.
The notion of a general linear progress of knowledge, derided by sociologists of science and challenged by historians, is undermined by the historical plurality of assumptions regarding the standard of truth against which ideas are judged. Ideas are promoted (and demoted) against the backdrop of basic assumptions about the nature of knowledge. To say an idea is accepted as true is to point out that it meets the standards of good knowledge of a particular epistemological system that develops out of social networks of thinkers (Collins 2000). Absent some basic agreement as to what constitutes legitimate knowledge, no such assessment is possible. When these assumptions change—when epistemological standards and values are jettisoned for new ones—the previous era’s ideas must be translated, accommodated, or discarded. Ideas only make sense—and in turn can only be evaluated—from within an epistemological system. This is not meant to dismiss ideas as socially constructed or to relativize all knowledge claims; it is merely to contextualize truth claims, to embed ideas—and the evaluation of these ideas—within their historical-epistemological context.
But what accounts for the adoption of one epistemology over another? Changes on the level of epistemology cannot be explained away by appeals to truth and falsity. In a fundamental sense, the adoption of one epistemology over another is a matter of collective agreement, a typically tacit acceptance of the basic standards for evaluating knowledge claims. Put differently, when the metric for assessing truth claims changes, the ascendancy of one metric over another cannot, by nature, be determined by appeals to truth. It is not a matter of truth versus error, rational versus irrational, but rather of socially mediated choice that arises from the interaction between social actors. It was this matter of acceptance of an epistemological system based on the laboratory—and the conflict out of which it emerged—that was crucial in shaping the professionalization of U.S. medicine during the nineteenth century.
Given the centrality of this epistemological change—and the overwhelming empirical evidence in the historical record of these changes—why has epistemology fallen out of historical accounts of the professionalization of U.S. medicine? Why has the truth-wins-out narrative proven so durable? First, some of the oversight can be attributed to the paradoxical fact that even though epistemological debates are fundamental, they operate subtly. People tend not to discuss epistemological issues; rather they are the background factors that become manifest in specific debates. Doctors rarely fought over the nature of medical knowledge explicitly, but issues of the legitimacy and usefulness of certain forms of knowledge arose consistently in their specific debates over cholera. Pay too much attention to these surface knowledge debates and the epistemological subtext goes unnoticed.
Second, even when accounts of the professionalization of U.S. medicine do acknowledge epistemological changes, they commonly reverse the temporal relation between ideas and epistemology. Epistemological change is viewed as following from new discoveries. A microbe is seen; the lab is embraced. This, however, inverts temporal directionality. Before a microbe can be seen or produced in the lab, there must exist a predisposition to seeing it, an adoption of particular epistemological assumptions that would enable physicians and researchers to recognize a discovery as such. Epistemological commitments precede facts, not the other way around. Inverting this temporality renders invisible the role of epistemological change in the production of knowledge and the social organization of knowing.
Finally, this oversight stems from a basic lack of a historical imagination when it comes to epistemology. Commonsense, taken-for-granted notions of truth and falsity assume an ahistorical view of knowledge and truth. When one construes the standards of truth as timeless, knowledge is only relevant to the story of professionalization insofar as doctors made new medical discoveries that measured up to these standards. But this assumption is unwarranted, as standards of truth change over time. The dustbin of history is filled with previously recognized true ideas now deemed false, and there is no metaphysical warrant to assume this will not be the case in the future (Putnam 1995, 192). An epistemological system held as universal in one era gets supplanted in the next by another system that sports the same pretenses to timeless universality. From within such systems, standards seem universal, but taking the long view, we see that they are fundamentally historical.
In the past two decades, research in historical epistemology has challenged the timelessness of truth standards, temporalizing many of the attributes of knowledge and, in turn, offering a social and cultural understanding of epistemological shifts (see Biagioli 1994; Daston 1992; Daston and Galison 2010; Davidson 2001; Dear 1992; Fuller 2002; Ginzburg 1980, 1992; Jonsen and Toulmin 1988; Porter 1988; Poovey 1998; Schweber 2006; Shapin 1994; Shapin and Schaffer 1985; Toulmin 1992).6 Historical epistemology starts from the premise that “basic epistemological categories such as cause, explanation, and objectivity are historically variable and can be studied in the same way as other types of scientific claims” (Schweber 2006, 229). Concepts like Foucault’s notion of epistemes (2002), Ian Hacking’s styles of reasoning (1985), and even Kuhn’s paradigms (1996)7 serve to highlight the changing standards of truth and falsity. Epistemological units like the “fact” typically perceived as universal are shown to have a history (Poovey 1998).