AgBioForum
Volume 4 // Number 3 & 4 // Article 3
A Primer On Risk: An Interdisciplinary Approach To Thinking About Public Understanding Of Agbiotech
Lee Wilkins
University of Missouri-Columbia
This essay outlines some of the major social-science-based understandings in the fields of risk assessment, risk perception, and risk communication, emphasizing the theoretical strengths and weaknesses of these various approaches to "thinking about" risk. It stresses the interdisciplinary nature of the field and the role that scientific expertise about risk should and should not play in democratic societies. It concludes with a brief overview of how these findings might apply to the specific case of agbiotech, particularly in light of cultural understandings about the role of science in modern life.
Key words: risk communication; risk; risk perception; mass communication; politics; public understanding of science.
The field of risk—whether it is defined as risk assessment, risk perception, risk communication, or risk management—is a relatively young one. It has no ready academic home, although scientists, engineers, social scientists, and humanists have made important contributions to the collective project. Nevertheless, almost all scholars concerned with risk find themselves immersed in fields that are outside their original academic disciplines. The inherently interdisciplinary nature of the project has led to struggles over what should count as evidence and which set of discipline-based understandings should dominate. Simultaneously, scholars acknowledge that sets of findings emerging from individual academic fields do not make much sense unless they are placed into a larger context. This essay seeks to provide that context without suggesting that findings from one field should represent a dominant paradigm. The goal is integration, synthesis, and the understanding of complex relationships, rather than cause-effect modeling or proof beyond a statistical doubt.

However, there is an important caveat to the foregoing. This essay does not eschew democracy. Following the tradition of Kuhn (1962) and others, this essay asserts that science knows something, but not everything. Further, it assumes that many of the scientific findings surrounding questions of risk (for example, is humanity currently living on a planet that is undergoing global climatic changes?) are cutting edge, equivocal, and contradictory. Scientists probably will not be able to resolve global warming (or many other issues relating to risk) before voters and consumers have to make choices. That is where the complex process of democracy comes in. As scholars of risk management have noted, democratic decisions about risk tend to take into account a variety of points of view, economic, social, and political factors, and scientific analysis; these decisions may involve multiple levels of public awareness and governmental decision making, and wrestle with thorny issues such as environmental justice. If this essay has a point of view, it is that people, working through elites but not in response to the desires of elites, have both the right and responsibility to make these sorts of decisions.

The Beginning: The Development And Role Of Risk Assessment

Probably the first, and among the most crucial, conceptualizations of risk emerged from engineers who had to determine whether and under what conditions human-created systems would break down. Risk assessment, in this sense, is the strictly mathematical estimate of the likelihood that a particular mechanism or system will malfunction within certain use parameters. Engineers agree that all mechanical systems will malfunction, in what sociologist Charles Perrow (1984) has called "normal accidents." The goal of a mathematical risk assessment is to provide a specific audience—other engineers, policy makers, automobile drivers—with some notion of the likelihood that a particular sort of breakdown will occur over time. This numeric expression of failure is referred to as the "base rate," a concept that is crucial in understanding risk perception as well as risk assessment. Although risk assessment generally focuses on failure rates, it may also include comparisons of potential risks with the benefits of investing in a particular technology, environmental impact statements, and, in some cases, the development of "worst case" failure scenarios. It is also important to note that risk assessment does not equate with causality—the automobile insurance industry rates individual drivers but cannot account for accidents caused by drunk driving.

Risk assessment, then, deals with issues of probability, and as such it is subject to the same sorts of robust understandings and limitations as any other statistical analysis. For example, risk assessment can be highly effective when there are many similar data points from which to build a probabilistic understanding. Insurance companies have literally millions of data points from which to construct the probability—or risk assessment—that particular sorts of drivers will be involved in particular sorts of collisions. Insurance rates are set, effectively and profitably, by this sort of risk assessment. (Readers who are the parents of teenage drivers will understand the impact of this kind of risk assessment in personal and financial terms.) Engineers provided the foundation for the original understanding, but risk assessment may also be applied to natural occurrences. For example, because of fifty years of "data points" as well as ongoing scientific investigation, the United States National Weather Service can fairly successfully predict the "storm track" that individual hurricanes will take as they approach North America. The weather service can provide a fairly accurate, geographically based assessment of the risk posed by individual hurricanes. The numbers associated with risk assessment make it seem solid and reliable. However, the rules that govern probability also govern risk assessment—and the attendant problems are more serious in some instances than in others.

First, for risk assessment to accurately express a risk, a sufficient number of data points generally are required. Moreover, like every other exercise in probability, after a certain point, collecting more data will not lead to a better or more accurate risk assessment. However, some systems (nuclear reactors, for example) produce relatively few data points. Each year that a nuclear reactor functions according to design specifications could be one data point (referred to in risk assessment as "one reactor year"), as could a nuclear reactor incident (minor malfunction) or accident (more major problem). But the fewer data points there are to represent either proper functioning or a breakdown, the less reliable mathematical risk assessment will be. (Those readers familiar with statistical analysis will understand that this is a general finding from that field, not one confined exclusively to risk.) Scientists and engineers have a lot of experience with cars, boats, mines of various sorts, and so forth. These are technologies that have a history. Scientists and engineers have less experience with nuclear power or earthquakes—each system or event provides only a limited number of data points upon which to make a mathematical assessment.
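To make the data-point problem concrete, consider a minimal sketch in Python (an illustration added for this discussion, not part of the original analysis). It assumes a simple binomial model in which each observation period, such as a single reactor year or driver-year, independently ends in failure with the same unknown probability; the counts are invented. The observed failure rate is identical in every case, but the confidence interval around that rate narrows only as experience accumulates.

```python
# A minimal sketch of why few data points make a numeric risk assessment less
# reliable. It assumes a simple binomial model: each observation period (e.g.,
# one "reactor year" or one driver-year) independently ends in failure with the
# same unknown probability. The counts below are illustrative only.
import math

def wilson_interval(failures: int, trials: int, z: float = 1.96) -> tuple[float, float]:
    """Approximate 95% Wilson score interval for an observed failure rate."""
    if trials == 0:
        return (0.0, 1.0)
    p = failures / trials
    denom = 1 + z**2 / trials
    center = (p + z**2 / (2 * trials)) / denom
    half = z * math.sqrt(p * (1 - p) / trials + z**2 / (4 * trials**2)) / denom
    return (max(0.0, center - half), min(1.0, center + half))

# The same observed rate (1 failure per 1,000 observation periods) looks very
# different depending on how much experience stands behind it.
for failures, trials in [(1, 1_000), (10, 10_000), (1_000, 1_000_000)]:
    low, high = wilson_interval(failures, trials)
    print(f"{failures:>5} failures in {trials:>9,} periods: "
          f"rate = {failures / trials:.4f}, 95% CI [{low:.5f}, {high:.5f}]")
```

With a single failure in a thousand periods, the plausible range for the true rate spans more than an order of magnitude; with a million periods of experience, the kind of record an insurer enjoys, the estimate becomes tight enough to price against.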

Second, the more complicated any particular device or system is, the more potential there is for a variety of things to break down. The development of a probabilistic risk assessment is based on the assumption that scientists and engineers will be able to accurately detail most failure scenarios. The more complex the system (what some refer to as "highly interactive"), the more difficult it is to anticipate potential failures or routes to failure. Chemical plants are a good example. When engineers designed the chemical plant in Bhopal, India, they engineered the structure so that water would not be able to seep into some parts of the plant. Since water "could not" get into those chemical tanks, the engineers assumed there was no risk that such an event could occur. However, the cause of the 1984 Bhopal chemical plant disaster—and the resulting deaths and injuries—was water in the tank (Wilkins, 1987). Those who compute risk assessments for complex systems, such as chemical plants, do not expect to anticipate every possible failure, only those failures that are more likely to occur. However, something unexpected sometimes does happen, and numeric risk assessment cannot and does not even attempt to account for it. The most complicated systems, of course, are not mechanical. The human body and global climate represent fundamentally interactive systems, systems about which it is more difficult to compute a meaningful numeric risk assessment because of many low-probability potential interactions.

Third, some systems are engineered in such a way that particular sorts of breakdowns can occur without the entire system grinding to a halt. Engineers refer to these as "loosely coupled systems." An automobile is an example of a loosely coupled system—the brakes may fail while the steering continues to function. Loosely coupled systems, in general, are not as subject to catastrophic failure as are tightly coupled systems—the sort of system where failure in one area can lead to a system-wide breakdown. Nuclear power plants are considered tightly coupled systems. Engineers have attempted to compensate for this tight coupling by designing many fail-safe mechanisms into nuclear plants. However, if a fail-safe mechanism fails to function, or is disconnected for some reason (as at the Chernobyl nuclear power plant), then a tightly coupled system is more likely to fail catastrophically.
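The difference that fail-safe layers make, and the cost of disconnecting one, can be sketched with a toy calculation. This is an illustration added here, with invented probabilities and an assumption of independent failures; real plants are highly interactive, which is precisely where such independence assumptions break down.

```python
# A back-of-the-envelope sketch of why fail-safe layers help a tightly coupled
# system, and why disconnecting one matters so much. It assumes independent
# failures and uses invented probabilities; it is not a model of any real plant.
from math import prod

def catastrophic_failure_prob(p_initiating: float, p_layer_fails: list[float]) -> float:
    """Probability an initiating fault defeats every remaining fail-safe layer."""
    return p_initiating * prod(p_layer_fails)

p_init = 1e-2                  # hypothetical chance of an initiating fault per year
layers = [1e-2, 1e-2, 1e-2]    # hypothetical chance each fail-safe fails on demand

print("All fail-safes active:     ",
      catastrophic_failure_prob(p_init, layers))        # ~1e-08
print("One fail-safe disconnected:",
      catastrophic_failure_prob(p_init, layers[:-1]))   # ~1e-06
```

In this toy model, removing a single layer raises the chance of a catastrophic sequence a hundredfold; the deeper problem, as the Bhopal case shows, is that the listed layers may not cover the failure route that actually occurs.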

Finally, numeric risk assessment generally must assume that the "science" of a particular risk is settled. However, if there is variation in the scientific analysis of particular systems, then the risk assessment itself may not be able to accurately express newly emerging knowledge. The potential harm that certain chemicals may do in the human body is an excellent example. In the early 1970s, technology could measure only parts per million; risk assessments were computed based on this understanding. However, when it became possible to measure parts per billion, and new scientific tests were done, standards for some chemicals changed radically. In addition, new scientific work (for example, recent studies about the impact of lead in the human body) leads to new risk assessments as the weight of evidence shifts.

In sum, numeric risk assessment has all the virtues and weaknesses of any other form of analysis based on probability. Lots of data points and well understood systems can and often do produce reliable risk assessments. Highly interactive systems, tightly coupled systems, or systems that are both introduce much more "noise" into the calculation. Fewer data points, or emerging as opposed to settled science, can make things more difficult. As a result, in order for a risk assessment of many sorts of systems to be developed, certain assumptions must be made. Change the assumptions, and depending on how sensitive the assessment is to its underlying assumptions, the risk assessment may change. Moreover, any assessment will still not take into account all the possibilities for failure or success. The result: something that looks firmly grounded in empirical evidence is in reality a single point on a continuum of probabilities. That single bit of information (the sound work and intentions of scientists and engineers notwithstanding) is sometimes not enough on which to base a good decision. Many such decisions will be made, of necessity, in the market or in the political and social arena (Changnon, 2000).
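To make the sensitivity point concrete, here is a tiny sketch with invented numbers, not drawn from the essay: the same question, the chance of at least one failure over a 40-year operating life, yields answers roughly a factor of ten apart when the assumed annual failure probability shifts by a factor of ten.

```python
# A tiny sensitivity check with invented figures: how much does an assessment of
# "risk over a 40-year operating life" move if the assumed annual failure
# probability shifts by a factor of ten?
def lifetime_risk(annual_p: float, years: int = 40) -> float:
    """Chance of at least one failure over the period, assuming independent years."""
    return 1 - (1 - annual_p) ** years

for annual_p in (1e-4, 1e-3):
    print(f"assumed annual probability {annual_p:g}: "
          f"lifetime risk = {lifetime_risk(annual_p):.1%}")
```

Neither figure is wrong; each simply encodes a different assumption, which is why the proposition below asks that those assumptions be stated.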

Proposition One: To understand risk assessment, it is important to know what assumptions have been made in developing the numeric expression of risk. In democratic societies, everyone from public officials to scientists and engineers should be able and willing to answer such a question.

Risk Communication: A First Attempt

Risk assessments, of course, were done for a purpose. In the early 1970s, the United States Environmental Protection Agency (EPA) was charged with overseeing the country's drinking water and communicating what it found to the public. The agency, staffed with scientists expert in evaluating water cleanliness, decided that one way to do this was to publicize the risk assessment of various chemicals in the nation's water supply. The scientists assumed that rational citizens, when presented with technically based risk assessments, would pay attention to them and ultimately agree with the government that the water was generally safe to drink or, in cases of contamination, would follow the appropriate instructions for boiling, and so on. The government would inform the public about the science of the issue and the public would believe the agency and respond accordingly. This approach is what communication scholars would call a "unidirectional sender-message-receiver-behavior" model of communication.

The resulting attempts were unsuccessful. Despite the best efforts of the agency, many people said they did not understand the risk assessments, others neglected to pay any attention to them, and yet others said they did not believe the assessments. Some people followed orders to boil, but others did not. Many people questioned whether the smallest amounts of some chemicals (arsenic, for example) were safe, despite the risk assessments. The result was a decade of efforts that were not as effective as the agency hoped and the public demanded (Krimsky & Plough, 1988).

One root of the early problems with risk communication was that hard scientists, who generally developed the risk assessments, assumed that people behaved much like atoms and molecules. Add oxygen and some energy to hydrogen, and water would be produced. Every time. However, social scientists work from a different understanding. Research from those disciplines, whether it has emerged from political science, sociology, psychology, anthropology, or mass communication, has found that different people respond differently to the same inputs. Although the hard scientists were disturbed by the quality and type of public response that agency communications often evoked, social scientists could have accurately predicted the outcome: it reflected one of the foundational understandings of the various disciplines. The goals for social scientists then became to understand why different people reacted differently to the same risk assessment, to develop ways of communicating about risk that would take these predictable differences into account and, hence, to help the EPA and others construct messages that were more effective. The question of different reactions fueled risk research in the 1980s.

A second problem with early risk communication efforts was the assumption of a unidirectional sender-message-receiver-behavior model of communication. Scholars from several disciplines (primarily mass communication and psychology) had abandoned such an approach decades earlier, because of work finding that communication patterns were far more complicated than this model could explain, and that certain behaviors were extraordinarily difficult to alter (Lowery & DeFleur, 1986). As risk communication scholars became involved in the field, they began to question whether the appropriate goal of risk communication was homogeneous public response to government-based interpretation of scientific fact, or whether a more multi-directional and multi-faceted response to risk communication was needed.

Both of these problems encouraged the addition of a social science paradigm as a way to understand risk. The initial goal was to discover the predictable patterns in individual responses to risk.

Proposition Two: People given the same information appear to think and behave differently. These differences appear to arise not from science, but from the complex system that is humanity.

Proposition Three: Risk messages can flow from the sender back to the receiver or among receivers. Democratic cultures need to be prepared to understand this complicated reality.

Risk Perception: Understanding Differences, Raising New Questions

During the 1980s, psychologists, sociologists, anthropologists, and mass communication scholars investigated risk in a number of ways, primarily by using surveys, case studies, and analyses of media content. The bulk of this work was subsumed under the heading of "risk perception." One key to understanding work in risk perception is the concept of "lay rationality"—the notion that average people, who have no specific scientific expertise and limited life experience with many sorts of risk, will perceive and evaluate risk differently than those who are expert (such as scientists and engineers). As research continued, some scholars went further, suggesting that expert rationality, as characterized by numeric risk assessment, failed to account for important dimensions of risk that lay rationality appeared to consider. Things were beginning to get complicated.

Psychologist Paul Slovic was among the first to identify a risk perception heuristic—a predictable pattern in how people think about risk. Although there are many nuances in this work, the following are among its central findings:

  • When thinking about risk, people do not consider base rates (Fischhoff et al., 1981). In other words, a rational person, when getting ready to drive a car, does not consider that her statistical chances of being involved in an accident are one in three. Instead, most people assume their chances are zero or should be zero. The result is that people often underestimate the likelihood of common risks (e.g., driving a car) and overestimate the likelihood of others (e.g., becoming ill from small amounts of certain chemicals in the water supply).

  • Regardless of base rate, people appear to be more willing to accept risks they know than to accept risks with which they are unfamiliar. This attitude accounts for many people being willing to live in earthquake-prone California or the tornado-ridden Midwest (Drabek, 1986).

  • There is a constellation of qualities about individual risks that allows people to categorize them as either "acceptable" or "dread" (Fischhoff et al., 1981; Slovic, Fischhoff, & Lichtenstein, 1980). An acceptable risk is thought of as a risk over which one can exercise control (I am a good driver), is familiar (I drive a car every day), is voluntary (I can decide not to drive my car), tends not to be catastrophic (I am more likely to sustain minor injuries while driving a car than to die as a result of an accident), and is containable (if some other driver runs into me, the injuries will involve few people). Dread risk, on the other hand, summons different qualities: it is an activity over which I can exercise little individual control (being an airplane passenger), is relatively unfamiliar (I seldom fly in airplanes), is involuntary (I have no choice but to get my water from the city system), has real potential to be catastrophic (even though statistically more people survive than die in airline crashes, if the airplane I am riding in falls out of the sky, I am dead), is uncontrollable (radiation from the nuclear plant will spread beyond its immediate environs, and may even spread generationally), and is unfair (I am more likely to sustain bad consequences from a particular mechanism or event than are those who live in a different neighborhood, a different state, or are of a different social class).

  • Any elite, whether governmental, scientific, or industrial, will have a much more difficult time getting people to accept a risk that has many of the qualities associated with dread risk, than it will getting people to accept risks that could not be characterized in this way.

Additional findings include the following:

  • People exercise expert rationality only in knowledge domains with which they are familiar. In unfamiliar circumstances, even scientists and engineers perceive risk through the lens of lay rationality (Wilkins & Patterson, 1991).

  • With most risks, women are slightly more risk averse than men, and the young are less risk averse than their elders.

  • The riskiest thing to be—in any culture—is poor (Douglas & Wildavsky, 1982).

  • People tend to evaluate the risk of anything nuclear differently than other risks. "Things nuclear" are more dreaded.

  • People thinking about risk tend to employ what psychologists call the "fundamental attribution error." In other words, they tend to think of the cause of a particular outcome as the result of individual human acts, rather than the result of a larger mechanical, political, or social system (Wilkins & Patterson, 1987). For example, people tend to assume that bad driving causes auto accidents, not poorly engineered and maintained highways or badly engineered cars.

  • People tend to overestimate risks that receive extensive news coverage. For example, most people believe they are more likely to be struck by lightning than to be seriously injured in an auto accident, even though the statistics indicate the reverse. Getting struck by lightning is a news story; most auto accidents receive no news coverage (Quarantelli, 1986).

  • People can and do weigh risks against benefits. Communities can decide to assume certain risks (for example, agreeing to become the site of a prison because jobs will be created). However, such decisions seldom mirror numeric risk assessments; they are far more likely to reflect many of the qualities of lay rationality. These decisions are much more the result of political and social negotiation than an automatic acceptance of a numeric risk assessment would suggest (Krimsky & Plough, 1988).

  • Contrary to popular belief, news accounts of various risks or disasters tend to underestimate life and property damage (Scanlon, Tukko, & Morton, 1978).

  • News accounts equalize perspectives on risk—the citizen who opposes the siting of a waste disposal incinerator in his neighborhood is just as likely to be quoted as the scientist who says the facility will not add to air or groundwater pollution (Krimsky & Plough, 1988).

  • News stories actually function as part of a two-way communication process between those who originated a message about risk and the public who receives it. Government officials, particularly those at the local level, learn a lot about public response to risk by reading the news (Friedman, Dunwoody, & Rogers, 1999; Singer & Endreny, 1987).

  • Some behaviors are extraordinarily difficult to alter, despite accurate risk assessment and voluminous risk communication. Changing sexual behavior in light of the AIDS virus, quitting smoking, and adopting a heart-healthy lifestyle all make enormous sense in terms of risk assessment. However, in terms of risk perception, it is far more difficult to induce behavior change—or even belief change—than the statistics alone would suggest (Wilkins & Patterson, 1991).

  • Culture helps define risk. The same activity may be viewed as very risky in one culture, but completely acceptable in another. Furthermore, cultural understandings are fueled by narrative and myth as well as science. Children in Florida, for example, were asked to explain how they thought of a hurricane. The result: kids thought a hurricane was similar to the cyclone in The Wizard of Oz, a wonderful visual image that is significantly at odds with reality, particularly when warning and evacuation messages are the issue (Anderson, 1997; Douglas & Wildavsky, 1982; Hansen, 1993).

  • People have an intuitive understanding that some systems are very interactive. They tend to distrust risk assessment that does not consider this issue, even though they cannot mathematically explain the results of potential interactions (Hilgartner & Bosk, 1988; Krimsky & Plough, 1988).

All of this work by scholars in many disciplines ultimately resulted in a new view of the role of risk communication.

Proposition Four: Lay rationality, far from being inferior to scientific expertise, works with what scholars have termed an expanded vocabulary of risk that includes questions of culture, history and ethics.

Proposition Five: One goal for risk communication should be public discussion, involving various federal and state government agencies, scientists and engineers, local governments and citizens. Risk communication that seeks to circumvent this need for public discussion tends to be unsuccessful and, in many cases, counterproductive from the point of view of those who develop the message.

Applying Constructions Of Risk To Agrobiotechnology

In some sense, the history of risk associated with agrobiotechnology mirrors developments in the larger fields of risk assessment, risk perception, risk communication, and risk management. Indeed, one of the first case studies involving agrobiotechnology—the introduction of the ice minus bacterium in California in 1982-1984—provides a clear set of lessons that also mirror developments in the literature (Krimsky & Plough, 1988). In that case, the University of California and a biotech company worked with the federal government to allow the test introduction of the organism into the environment (the bacterium reduced frost damage to crops), only to be met with substantial community opposition, long delays, public discussion during which farmers realized they might have trouble selling any crop that had been so treated (thus providing an economic rationale for not adopting the technology), and an emerging understanding that the public needs time to process complicated issues. The ice minus case suggested that different stakeholders had different needs and expectations from risk assessment and risk communication, expectations which the political process—particularly the process of government regulation and oversight—had to fulfill, however imperfectly. Equally informative was what did not work: the university's attempt to engage in highly technical, one-directional risk communication, with the goal of gaining public acceptance of the test. Indeed, had the university understood the impact of risk perception and the role of culture in defining risk, it might have produced a far different set of risk communications, and done so long before any field test was considered.

Agrobiotechnology combines many elements of already existing risks, although industry has been sometimes slow to acknowledge this reality. In some cases, agrobiotechnology could be considered an acceptable risk—it has some qualities of voluntariness, food tends to be familiar, and there may be some important benefits associated with it—although the risks may be more long-term from the consumer's point of view. However, agrobiotechnology also has some of the qualities of dread risk—I can decide what to eat, but not whether to eat; while food is familiar, genetic technology is much less so; how much control an individual can assert over agbiotech appears to be unknown. Agrobiotechnology is also introduced into one of the most highly interactive, but loosely coupled, of systems: the natural environment. Thus, some of the insights developed in the field of risk as defined through natural hazards would seem to apply, which means that questions about warning and the impact of warning messages—what may be subsumed under the notion of "labeling"—would seem very important, if any sort of positive public response is desired. Public opinion polling, on both sides of the Atlantic, supports this view. Whether it is in Europe or the US, about one fourth (22%) of those polled over the past 20 years have had consistent and deep concerns about the introduction of agrobiotechnology into the environment. Indeed, as more than one scholar has noted, the issue is not why the technology has faced European resistance, but rather why the US has been so accepting (Priest, 2001).

However, because agrobiotechnology deals with the natural world, perceiving its risks and communicating about them summons powerful cultural understandings (Priest, 2001; Turney, 1998). Those understandings, although they may not be rational in the scientific sense, do employ the thinking of lay rationality, with its particular emphasis on culture, history, and ethics. These understandings attach to what biologists would describe simply as a molecule, DNA, but the repercussions extend to human social systems.

Instead of a piece of hereditary information, it [DNA] has become the key to human relationships and the basis of family cohesion. Instead of a string of purines and pyrimidines, it has become the essence of destiny and the source of social difference. Instead of an important molecule, it has become the secular equivalent of the human soul. Narratives of genetic essentialism are omnipresent in popular culture, here explaining evil and predicting destiny, there justifying institutional decisions. They reverberate in public debates about identity and race, in court decisions about child custody and criminal responsibility, and in ruminations about the meaning of life (Turney, 1998, p. 218).

These sorts of connections, of course, extend to other forms of life, particularly the foods people eat. Thus, the outbreak of "mad cow" disease in Europe, the potential harm to the monarch butterfly ecosystem (one has to wonder what the outcome would be if it were rattlesnake habitat that were threatened), and the discovery that genetic material from the US Midwest had "migrated" to fields in Mexico, are historically and culturally connected. Science, and even the academy, may see such events as separable, but in the expanded vocabulary of risk perception, they are both meaningful and related.

There is also a political dimension at work; government-controlled societies (China, for example) have mandated the use of agrobiotechnology. Citizens of any democracy may be justifiably suspicious of such governmental mandates; it certainly smacks of involuntary risk. Culture, or what scholars call the social amplification of risk, weaves its own non-scientific web here.

On the other hand, it also seems likely that a cultural tendency to consider genetic heritage—along with distinctions between animals and people, between the living and the dead, and between the animate and the inanimate—to be sacred may have predisposed us to the kind of public outcry that met later developments such as cloning and terminator technology on the agricultural side... once those were propelled into the public arena. Once a threshold of awareness was crossed as to the power of modern biotechnology to upset the established biological order, the perceived threat to an established cultural... order would almost inevitably create this kind of backlash effect (Priest, 2001, p. 79).

The lessons from the field of risk may not provide either the proponents or the opponents of agrobiotechnology with a clear path to a clear goal. But the field itself does suggest that when the science is so unsettled, and when cultural and political understandings help to define the issues, open public debate offers all stakeholders the best chance of achieving their desired goals. Of course, this probably means that all stakeholders will have to compromise. However, as the history of the field of risk indicates, compromises often produce acceptance of risk where a more unilateral approach would not. This leads me to a final proposition:

Proposition Six: No one risk is exactly like any other. But science and culture can work through a political and social process to move technology forward in particular ways, to wait and see in others, and to abandon technologies in still others. All likely outcomes will involve compromise.

References

Anderson, A. (1997). Media, culture and the environment. New Brunswick, NJ: Rutgers University Press.

Changnon, S.A. (Ed.). (2000). El Niño 1997-1998: The climate event of the century. Oxford, UK: Oxford University Press.

Douglas, M., & Wildavsky, A. (1982). Risk and culture: An essay on the selection of technological and environmental dangers. Berkeley: University of California Press.

Drabek, T. (1986). Human system responses to disaster: An inventory of sociological findings. New York: Springer-Verlag.

Fischhoff, B., Lichtenstein, S., Slovic, P., Derby, S.L., & Keeney, R.L. (1981). Acceptable risk. Cambridge, UK: Cambridge University Press.

Friedman, S., Dunwoody, S., & Rogers, C. (1999). Communicating uncertainty: Media coverage of new and controversial science. Mahwah, NJ: Erlbaum.

Hansen, A. (1993). The mass media and environmental issues. Leicester, UK: University of Leicester Press.

Hilgartner, S., & Bosk, C. (1988). The rise and fall of social problems: A public arenas model. American Journal of Sociology, 94(1), 53-78.

Krimsky, S., & Plough, A. (1988). Environmental hazards: Communicating risk as a social process. Dover, MA: Auburn House.

Kuhn, T. (1962). The structure of scientific revolutions. Chicago: University of Chicago Press.

Lowery, S., & DeFleur, M. (1986). Milestones in mass communication research. New York: Longman.

Perrow, C. (1984). Normal accidents: Living with high-risk technologies. New York: Basic Books.

Priest, S.H. (2001). A grain of truth. Oxford, UK: Rowman & Littlefield Publishers, Inc.

Quarantelli, E.L. (1986). Disaster studies: An historical analysis of the influences of basic sociology and applied use on the research done in the last 35 years. Newark, DE: University of Delaware, Disaster Research Center.

Scanlon, J., Tukko, R., & Morton, G. (1978). Media coverage of crises: Better than reported, worse than necessary. Journalism Quarterly, 55, 66-72.

Singer, E., & Endreny, P. (1987). Reporting hazards: Their benefits and costs. Journal of Communication, 37(3), 10-26.

Slovic, P., Fischhoff, B., & Lichtenstein, S. (1980). Facts and fears: Understanding perceived risk. In R. Schwing & W. Albers (Eds.), Societal risk assessment (pp. 67-93). New York: Plenum.

Turney, J. (1998). Frankenstein's footsteps: Science, genetics and popular culture. New Haven: Yale University Press.

Wilkins, L. (1993). Between facts and values: Print media coverage of the greenhouse effect, 1987-1990. Public Understanding of Science, 2, 71-84.

Wilkins, L. (1987). Shared vulnerability: The media, the public and the Bhopal disaster. Westport, CT: Greenwood Press.

Wilkins, L., & Patterson, P. (1991). Risky business: Communicating issues of science, risk and public policy. Westport, CT: Greenwood Press.

Wilkins, L., & Patterson, P. (1987). Risk analysis and the construction of news. Journal of Communication, 37(3), 80-92.


Suggested citation: Wilkins, L. (2001). A primer on risk: An Interdisciplinary approach to thinking about public understanding of agbiotech. AgBioForum, 4(3&4), 163-172. Available on the World Wide Web: http://www.agbioforum.org.
© 2001 AgBioForum