Fundamentals of Risk Analysis and Risk Management - Section 3


© 1997 by CRC Press, Inc.

Section III: Risk Perception, Law, Politics, and Risk Communication

CHAPTER III.1
Risk Perception and Trust

Paul Slovic

INTRODUCTION

Perceived risk can best be characterized as a battleground marked by strong and conflicting views about the nature and seriousness of the risks of modern life. The paradox for those who study risk perception is that, as people have become healthier and safer on average, they have become more — rather than less — concerned about risk, and they feel more and more vulnerable to the risks of modern life. Studies of risk perception attempt to understand this paradox and to understand why our perceptions are so often at variance with what the experts say we should be concerned about. We see, for example, that people have very great concerns about nuclear power and chemical risks (which most experts consider acceptably safe) and rather little concern about dams, alcohol, indoor radon, and motor vehicles (which experts consider to be risky).

Perceptions of risk appear to exert a strong influence on the regulatory agenda of government agencies. In 1987, a U.S. Environmental Protection Agency (EPA) task force of 75 experts ranked the seriousness of risk for 31 environmental problems. The results showed that (1) the EPA's actual priorities differed in many ways from this ranking and (2) those priorities were much closer to the public's concerns than to the experts' risk assessments. In particular, hazardous waste disposal was the highest priority item on EPA's agenda and the area of greatest concern for the public as well, yet this problem was judged only moderate in risk by the experts. It is important to understand why the public is so greatly concerned today about risks from technology and its waste products.
This author does not have the answer, but has several hypotheses about factors that might contribute to the perceptions that such risks are high and increasing. One hypothesis is that we have a greater ability than ever before to detect minute levels of toxic substances. We can detect parts per billion or trillion, or even smaller amounts, of chemicals in water and air and in our own bodies. At the same time, we have considerable difficulty understanding the health implications of this new knowledge. Second, we rely increasingly on powerful new technologies that can have serious consequences if something goes wrong. When we lack familiarity with a technology, it is natural to be suspicious of it and cautious in accepting its risks. Third, in recent years, we have experienced a number of spectacular and catastrophic mishaps, such as Three Mile Island, Chernobyl, Bhopal, the Challenger accident, and the chemical contamination at Love Canal. These events receive extensive media coverage which highlights the failure of supposedly "fail-safe" systems. Fourth, we have an immense amount of litigation over risk problems, which brings these problems to public attention and pits expert against expert, leading to loss of credibility on all sides. Fifth, the benefits from technology are often taken for granted. When we fail to perceive significant benefit from an activity, we are intolerant of any degree of risk. Sixth, we are now being told that we have the ability to control many elements of risk, for example, by wearing seatbelts, changing our diets, getting more exercise, and so on. Perhaps the increased awareness that we have control over many risks makes us more frustrated and angered by those risks that we are not able to control, such as when exposures are imposed on us involuntarily (e.g., air and water pollution).
Seventh, psychological studies indicate that when people are wealthier and have more to lose, they become more cautious in their decision making. Perhaps this holds true with regard to health as well as wealth. Finally, there may be real changes in the nature of today's risks. For example, there may be greater potential for catastrophe than there was in the past, due to the complexity, potency, and interconnectedness of technological systems (Perrow 1984).

Key Words: perceived risk, trust, risk communication, risk assessment, risk management

1. PSYCHOMETRIC STUDIES

Public opinion polls have been supplemented by more quantitative studies of risk perception that examine the judgments people make when they are asked to characterize and evaluate hazardous activities and technologies. One broad strategy for studying perceived risk is to develop a taxonomy for hazards that can be used to understand and predict responses to their risks. The most common approach to this goal has employed the psychometric paradigm (Slovic 1986, 1987, Slovic et al. 1985), which produces quantitative representations or "cognitive maps" of risk attitudes and perceptions. Within the psychometric paradigm, people make quantitative judgments about the current and desired riskiness of various hazards. These judgments are then related to judgments of other properties, such as the hazard's status on characteristics that have been hypothesized to account for risk perceptions (e.g., voluntariness, dread, catastrophic potential, controllability). These characteristics of risk tend to be highly correlated with each other across the domain of hazards. For example, hazards judged to be catastrophic also tend to be seen as uncontrollable and involuntary.
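The data-reduction step behind such "cognitive maps" can be illustrated with a toy computation. The sketch below is not the chapter's own analysis: the hazard ratings are synthetic, generated from two assumed latent factors (a "dread"-like one and an "unknown"-like one), and the extraction shown is the principal-components step that factor-analysis workflows commonly start from.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical psychometric-style data: 30 hazards rated on 6 risk
# characteristics. Two latent factors generate the ratings, mimicking
# "dread" (dread, catastrophic potential, lack of control) and
# "unknown" (newness, unobservability, delayed harm).
n_hazards = 30
dread = rng.normal(size=(n_hazards, 1))
unknown = rng.normal(size=(n_hazards, 1))
loadings = np.array([[0.9, 0.1], [0.8, 0.2], [0.85, 0.0],   # dread-type items
                     [0.1, 0.9], [0.2, 0.8], [0.0, 0.85]])  # unknown-type items
ratings = np.hstack([dread, unknown]) @ loadings.T \
          + 0.2 * rng.normal(size=(n_hazards, 6))

# Principal-components step: standardize the ratings, then take the
# eigendecomposition of the resulting correlation matrix.
z = (ratings - ratings.mean(axis=0)) / ratings.std(axis=0)
corr = (z.T @ z) / n_hazards
eigvals, eigvecs = np.linalg.eigh(corr)   # eigenvalues in ascending order
top2 = eigvals[::-1][:2]

# With two strong latent factors, the two largest eigenvalues should
# dominate the remaining four, as in the Dread/Unknown factor space.
explained = top2.sum() / eigvals.sum()
print(f"variance explained by two factors: {explained:.0%}")
```

In real psychometric studies the retained components are typically rotated and interpreted by inspecting which characteristics load on each factor; this sketch stops at showing that a small number of factors can account for most of the shared variance.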
Investigation of these relationships by means of factor analysis has shown that the broad domain of risk characteristics can be reduced to a small set of higher-order characteristics or "factors." The factor space shown in Figure 1 has been replicated often. Factor 1, labeled "Dread Risk," is defined at its high (right-hand) end by perceived lack of control, dread, catastrophic potential, and fatal consequences. Factor 2, labeled "Unknown Risk," is defined at its high end by hazards perceived as unknown, unobservable, new, and delayed in their manifestation of harm. Nuclear power stands out in this study (and many others) as uniquely unknown and dreaded, with great potential for catastrophe. Nuclear waste tends to be perceived in a similar way. Chemical hazards such as pesticides and polychlorinated biphenyls (PCBs) are not too distant from nuclear hazards in the upper-right-hand quadrant of the space.

Research has shown that laypeople's perceptions of risk are closely related to these factor spaces. In particular, the further to the right a hazard appears in the space, the higher its perceived risk, the more people want to see its current risks reduced, and the more people want to see strict regulation employed to achieve the desired reduction in risk (Slovic et al. 1985). In contrast, experts' perceptions of risk are not closely related to any of the various risk characteristics or factors derived from these characteristics. Instead, experts appear to see riskiness as synonymous with expected annual mortality. As a result, conflicts over "risk" may result from experts and laypeople having different definitions of the concept. Expert recitations of risk probabilities and statistics will do little to change people's attitudes and perceptions if these perceptions are based on nonprobabilistic and nonstatistical qualities.

Another important finding from risk perception research is that men and women have systematically different risk perceptions (see Figure 2).
Some have attributed this to men's greater knowledge of technology and risk (i.e., science literacy). But a study by Barke et al. (1995) found that the risk judgments of women scientists differed from those of male scientists in much the same way as the judgments of men and women nonscientists differed. Women scientists perceived higher risk than men scientists for nuclear power and nuclear waste. Recently, Flynn et al. (1994) examined risk perception as a function of both race and gender. Surprisingly, nonwhite men and women differed rather little in their perceptions and differed little from white women. It was white males who stood apart from the rest in seeing risks as less serious than others did (see Figure 3). Subsequent analysis showed that this "white male effect" was due to the responses of 30% of the white male subgroup, of relatively high education and income.

Why do a substantial percentage of white males see the world as much less risky than everyone else sees it? Perhaps white males see less risk in the world because they create, manage, control, and benefit from so much of it. Perhaps women and nonwhite men see the world as more dangerous because in many ways they are more vulnerable, because they benefit less from many of its technologies and institutions, and because they have less power and control. Inasmuch as these sociopolitical factors shape public perception of risks, we can see yet another reason why traditional attempts to make people see the world as white males do, by showing them statistics and risk assessments, are unlikely to succeed. The problem of risk conflict and controversy clearly goes beyond science. It is deeply rooted in the social and political fabric of our society. This analysis points to the need for a fairer and more equitable society, as well as for fairer processes for managing risk.
Figure 1 Location of 81 hazards on Factors 1 (Dread Risk) and 2 (Unknown Risk) derived from the interrelationships among 15 risk characteristics. Each factor is made up of a combination of characteristics, as indicated by the lower diagram. (From Slovic, P. (1987). Science, 236, 280. Copyright American Association for the Advancement of Science. With permission.)

2. RISK COMMUNICATION AND TRUST

2.1 The Importance of Trust

The research described previously has painted a portrait of risk perception influenced by the interplay of psychological, social, and political factors. Members of the public and experts can disagree about risk because they define risk differently, have different worldviews, or have different social status. Another reason why the public often rejects scientists' risk assessments is lack of trust.

Figure 2 Mean risk perception ratings by white males and white females. (From a survey conducted by P. Slovic and co-workers.)

Social relationships of all types, including risk management, rely heavily on trust. Indeed, much of the contentiousness that has been observed in the risk management arena has been attributed to a climate of distrust that exists between the public, industry, and risk management professionals (e.g., Slovic 1993, Slovic et al. 1991). To appreciate the importance of trust, it is instructive to compare those risks that we fear and avoid with those we accept casually. Starr (1985) has pointed to the public's lack of concern about the risks from tigers in urban zoos as evidence that acceptance of risks is strongly dependent on confidence in risk management.
Risk perception research (Slovic 1990) documents that people view medical technologies based on the use of radiation and chemicals (i.e., X-rays and prescription drugs) as high in benefit, low in risk, and clearly acceptable. However, people view industrial technologies involving radiation and chemicals (i.e., nuclear power, pesticides, industrial chemicals) as high in risk, low in benefit, and unacceptable. Although X-rays and medicines pose significant risks, our relatively high degree of trust in the physicians who manage these devices makes them acceptable. Numerous polls have shown that the government and industry officials who oversee the management of nuclear power and nonmedical chemicals are not highly trusted (Flynn et al. 1992, McCallum et al. 1990, Pijawka and Mushkatel 1992, Slovic et al. 1991).

Figure 3 Mean risk perception ratings by race and gender. (From Flynn, J., Slovic, P., and Mertz, C. K. (1994). Risk Analysis, 14(6), 1104. With permission.)

Because it is impossible to exclude the public in a highly participatory democracy, the response of industry and government to this crisis of confidence has been to turn to the young and still primitive field of risk communication in search of methods to bring experts and laypeople into alignment and to make conflicts over technological decisions easier to resolve. Although attention to communication can prevent blunders that exacerbate conflict, there is rather little evidence that risk communication has made any significant contribution to reducing the gap between technical risk assessments and public perceptions or to facilitating decisions about nuclear waste or other major sources of risk conflict. The limited effectiveness of risk communication efforts can be attributed to the lack of trust. If you trust the risk manager, communication is relatively easy.
If trust is lacking, no form or process of communication will be satisfactory (Fessenden-Raden et al. 1987). Thus, trust is more fundamental to conflict resolution than is risk communication.

2.2 How Trust Is Created and Destroyed

One of the most fundamental qualities of trust has been known for ages. Trust is fragile. It is typically created rather slowly, but it can be destroyed in an instant, by a single mishap or mistake. Thus, once trust is lost, it may take a long time to rebuild it to its former state. In some instances, lost trust may never be regained. Abraham Lincoln understood this quality. In a letter to Alexander McClure he observed: "If you once forfeit the confidence of your fellow citizens, you can never regain their respect and esteem" (italics added).

2.3 The Impact of Events on Trust

The fact that trust is easier to destroy than to create reflects certain fundamental mechanisms of human psychology called here "the asymmetry principle." When it comes to winning trust, the playing field is not level. It is tilted toward distrust for each of the following reasons:

1. Negative (trust-destroying) events are more visible or noticeable than positive (trust-building) events. Negative events often take the form of specific, well-defined incidents such as accidents, lies, discoveries of errors, or other mismanagement. Positive events, while sometimes visible, more often are fuzzy or indistinct. For example, how many positive events are represented by the safe operation of a nuclear power plant for 1 day? Is this one event, dozens of events, hundreds? There is no precise answer. When events are invisible or poorly defined, they carry little or no weight in shaping our attitudes and opinions.

2. When events do come to our attention, negative (trust-destroying) events carry much greater weight than positive events.
This important psychological tendency is illustrated by a study in which 103 college students rated the impact on trust of 45 hypothetical news events pertaining to the management of a large nuclear power plant in their community (Slovic et al. 1993). The following events were designed to be trust increasing:

• There have been no reported safety problems at the plant during the past year.
• There is careful selection and training of employees at the plant.
• Plant managers live near the plant.
• The county medical examiner reports that the health of people living near the plant is better than the average for the region.

Other events were designed to be trust decreasing:

• A potential safety problem was found to have been covered up by plant officials.
• Plant safety inspections are delayed in order to meet the electricity production quota for the month.
• A nuclear power plant in another state has a serious accident.
• The county medical examiner reports that the health of people living near the plant is worse than the average for the region.

The respondents were asked to indicate, for each event, whether their trust in the management of the plant would be increased or decreased upon learning of that event. After doing this, they rated how strongly their trust would be affected by the event on a scale ranging from 1 (very small impact on trust) to 7 (very powerful impact on trust). The percentages of Category 7 ratings, shown in Figure 4, dramatically demonstrate that negative events are seen as far more likely to have a powerful effect on trust than are positive events. The data shown in Table 1 are typical. The negative event, reporting plant neighbors' health as worse than average, was rated 6 or 7 on the impact scale by 50% of the respondents. A matched event, reporting neighbors' health to be better than average, was rated 6 or 7 by only 18.3% of the respondents. There was only one event perceived to have any substantial impact on increasing trust.
This event stated: "An advisory board of local citizens and environmentalists is established to monitor the plant and is given legal authority to shut the plant down if they believe it to be unsafe." This strong delegation of authority to the local public was rated 6 or 7 on the impact scale by 38.4% of the respondents. Although this was a far stronger showing than for any other positive event, it would have been a rather average performance in the distribution of impacts for negative events.

The importance of an event is related, at least in part, to its frequency (or rarity). An accident in a nuclear plant is more informative with regard to risk than is a day (or even a large number of days) without an accident. Thus, in systems where we are concerned about low-probability/high-consequence events, problematic events will increase our perceptions of risk to a much greater degree than favorable events will decrease them.

3. Adding fuel to the fire of asymmetry is yet another idiosyncrasy of human psychology: sources of bad (trust-destroying) news tend to be seen as more credible than sources of good news. For example, in several studies of what we call "intuitive toxicology" (Kraus et al. 1992), we have examined people's confidence in the ability of animal studies to predict human health effects from chemicals. In general, confidence in the validity of animal studies is not particularly high. However, when told that a study has found that a chemical is carcinogenic in animals, people express considerable confidence in the validity of this study for predicting health effects in humans. Regulators respond like the public. Positive (bad news) evidence from animal bioassays is presumptive evidence of risk to humans; negative evidence (e.g., the chemical was not found to be harmful) carries little weight (Efron 1984).
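The kind of asymmetry the rating study describes can be mimicked with a small simulation. This is a hedged sketch, not the published data: the two rating distributions below are invented for illustration, and only the 1-to-7 scale, the sample size of 103, and the "rated 6 or 7" summary statistic mirror the study's design.

```python
import random

random.seed(1)
N = 103  # respondents, matching the study's sample size

def simulate_ratings(weights):
    """Draw N impact ratings on the 1-7 scale with the given category weights."""
    return random.choices(range(1, 8), weights=weights, k=N)

# Assumed rating distributions (illustrative only): negative events skew
# toward the top of the impact scale, positive events toward the bottom.
negative = simulate_ratings([2, 3, 6, 10, 15, 30, 34])
positive = simulate_ratings([20, 25, 20, 15, 10, 6, 4])

def pct_high_impact(ratings):
    """Percentage of respondents giving a rating of 6 or 7."""
    return 100 * sum(r >= 6 for r in ratings) / len(ratings)

print(f"negative events rated 6-7: {pct_high_impact(negative):.1f}%")
print(f"positive events rated 6-7: {pct_high_impact(positive):.1f}%")
```

Under these assumed distributions, the share of high-impact ratings for negative events dwarfs that for positive events, reproducing the qualitative pattern (e.g., 50% vs. 18.3% for the matched health-report events) rather than any specific published figure.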
Figure 4 Differential impact of trust-increasing and trust-decreasing events. Note: Only percentages of Category 7 ratings (very powerful impact) are shown here. (From Slovic, P. (1993). Risk Analysis, 13, 675. With permission.)

[...]

REFERENCES

Flynn, J., Slovic, P., and Mertz, C. K. (1992). Trust as a determinant of opposition to a high-level radioactive waste repository: Analysis of a structural model. Risk Analysis, 12, 417–430.
Flynn, J., Slovic, P., and Mertz, C. K. (1995). Gender, race, and perception of environmental health risks. Risk Analysis, 14(6), 1101–1108.
Koren, G., and Klein, N. (1991). Bias against negative studies in newspaper reports of medical research. Journal of the American Medical Association, …
Kraus, N., Malmfors, T., and Slovic, P. (1992). Intuitive toxicology: Expert and lay judgments of chemical risks. Risk Analysis, 12, 215–232.
Lichtenberg, J., and MacLean, D. (1992). Is good news no news? The Geneva Papers on Risk and Insurance, 17, 362–365.
MacGregor, D., Slovic, P., and Morgan, M. G. (1994). Perception of risks from electromagnetic fields: A psychometric evaluation of a risk-communication approach. Risk Analysis, …
McCallum, D. B., … L. A., and Covello, V. T. (1990). Public knowledge and perceptions of chemical risks in six communities (Report No. 230-01-90-074). Washington, D.C.: U.S. Environmental Protection Agency.
Morgan, M. G., Slovic, P., Nair, I., Geisler, D., MacGregor, D., Fischhoff, B., Lincoln, D., and Florig, K. (1985). Powerline frequency electric and magnetic fields: A pilot study of risk perception. Risk Analysis, 5, 139–149.
Perrow, C. (1984). Normal Accidents: Living with High-Risk Technologies. New York: Basic Books.
Pijawka, D., and Mushkatel, A. (1992). Public opposition to the siting of the high-level nuclear waste repository: The importance of trust. Policy Studies Review, 10(4), 180–194.
Slovic, P. (1986). Informing and educating the public about risk. Risk Analysis, 4, 403–415.
Slovic, P. (1987). Perception of risk. Science, 236, 280–285.
Slovic, P. (1990). Perception of risk from radiation. …
Slovic, P., … Hohenemser, and J. X. Kasperson (Eds.), pp. 91–123. Boulder, CO: Westview.
Slovic, P., Flynn, J., Johnson, S., and Mertz, C. K. (1993). The dynamics of trust in situations of risk (Report No. 93-2). Eugene, OR: Decision Research.
Slovic, P., Flynn, J., and Layman, M. (1991). Perceived risk, trust, and the politics of nuclear waste. …