Project Risk Management: Processes, Techniques and Insights (part 9)


have only one observation, which may or may not reflect issues associated with the current potential design change and which may or may not matter. Some people, who may or may not understand the preceding perspectives, may take the view that the project manager's best course of action is to assume that approval for the design change will take 3 weeks, since this is the corporate standard time for approvals, implying that the risk of exceeding this estimate belongs to the corporate centre, not the project manager. This approach, known as a 'conditional estimate cop-out', is widespread practice in a wide variety of related forms. Such conditions are usually subsequently forgotten. They involve particularly dangerous practice when the assumed conditions for the estimate are ambiguous and allocation of responsibility for the conditions is unclear. Such practice is likely to flourish in organizations whose culture includes strong management pressures to avoid revealing bad news, appearing pessimistic, or lacking in confidence. In these situations, the conditional estimate cop-out is a useful defensive mechanism, but one that reinforces the culture and can result in a 'conspiracy of optimism'. Estimates based on this kind of corporate standard may appear rational and objective, but they are actually 'irrational', because they do not reflect estimators' rationally held beliefs.

To the authors, all these issues mean that a 'rational subjective' approach to estimating is essential. One priority issue is stamping out conditional estimate cop-outs and picking up related effects. Another priority issue is to determine whether the uncertainty matters. If it matters, it needs to receive further attention proportionate to how much it matters and the extent to which it can be managed given available estimation resources. This implies an approach to estimating that is iterative, starting out with a perspective that is transparent and simple, and goes into more detail in later passes to the extent that this is useful.

A constructively simple approach

Based on the Table 15.5 data, consider a first-pass estimate for design approval of 9 weeks using Table 15.6. The key working assumption is a uniform distribution that is deliberately conservative (biased on the pessimistic side) with respect to the expected value estimate and deliberately conservative and crude with respect to variability. This is a 'rational' approach to take because we know people are usually too optimistic when estimating variability (e.g., Kahneman et al., 1982). We also wish to use a simple process to identify what clearly does not matter, so it can be dismissed. The residual sources of variability that are not dismissed on a first pass may or may not matter, and more effort may be needed to clarify what is involved in a second-pass analysis.

Table 15.6—Estimating the duration of design change approval—first pass

  estimates              duration    comments
  optimistic estimate    3 weeks     lowest observed value, a plausible minimum
  pessimistic estimate   15 weeks    highest observed value, a plausible maximum
  expected value         9 weeks     central value, (3 + 15)/2

  Working assumptions: the data come from a uniform probability distribution, 3 and 15 corresponding very approximately to 10 and 90 percentile values.

If both the 9 week expected value and the ±6-week plausible variation are not problems in the context of planning the project as a whole, then no further estimating effort is necessary and the first-pass estimate is 'fit for the purpose'. If either is a potential problem, further analysis to refine the estimates will be required. Assume that the 9 week expected duration is a potential problem and that a 15 week outcome would be a significant problem for the project manager.
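To make the Table 15.6 arithmetic concrete, the short sketch below derives the first-pass figures directly from the observed approval durations. The observation set of 3, 4, 6, 7 and 15 weeks is reconstructed from the surrounding discussion of Table 15.5, so treat it as an assumed illustration rather than the book's exact data.

    # First-pass estimate of design change approval duration (Table 15.6 logic).
    # The observation list is an assumed reconstruction of the Table 15.5 data.
    observations = [3, 4, 6, 7, 15]  # weeks

    optimistic = min(observations)               # plausible minimum: 3 weeks
    pessimistic = max(observations)              # plausible maximum: 15 weeks
    expected = (optimistic + pessimistic) / 2    # uniform-distribution midpoint: 9 weeks
    plausible_variation = (pessimistic - optimistic) / 2  # crude +/-6 week spread,
                                                          # read as rough 10/90 percentiles

    print(f"expected {expected} weeks, plus or minus {plausible_variation} weeks")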
A second pass at estimating the time taken to obtain approval for a design change might start by questioning a possible trend associated with the 15 week observation. In broad terms this might involve looking at the reasons for variability within what normally happens, developing an understanding of reasons for possible outliers from what normally happens, and developing an understanding of what defines abnormal events. It might be observed that the reason for the previously observed 15 week outcome was a critical review of the project as a whole at the time approval was sought for the design change. However, similar lengthy delays might be associated with a number of other identified reasons for abnormal variation, such as: bad timing in relation to extended leave taken by key approvals staff, perhaps due to illness; serious defects in the project's management or approval request; and general funding reviews. It might be observed that the 7, 6, and 4 week observations are all normal variations, associated with, for example, pressure on staff from other projects, or routine shortcomings in the approval requests involving a need for further information. The 3 week standard, achieved once, might have involved no problems of any kind, a situation that occurred once in five observations.

These second-pass deliberations might lead to the specification of a stochastic model of the form outlined in Table 15.7. This particular model involves subjective estimates related to both the duration of an 'abnormal situation' and the 'probability that an abnormal situation is involved', in the latter case using the range 0.1 to 0.5 with an expected value of 0.3. The one observation of an abnormal situation in Table 15.5 suggests a probability of 0.2 (a 1 in 5 chance), but a rational response to only one observation requires a degree of conservatism if the outcome may be a decision to accept this potential variability and take the analysis no further. Given the limited data about a normal situation, which may not be representative, even the normal situation estimates of 3 to 7 weeks with an expected value of 5 weeks are best viewed as plausible subjective estimates, in a manner consistent with the first-pass approach. Even if no data were available, the Table 15.7 approach would still be a sound rational subjective approach if the numbers seemed sensible in the context of a project team brainstorm of relevant experience and changes in circumstances.

However, it is worth noting that project managers may tend to focus on reasons for delay attributable to approvals staff, while approvals staff will understandably take a different view. Everyone is naturally inclined to look for reasons for variability that do not reflect badly on themselves. Assumptions about how well (or badly) this particular project will manage its approvals request are an issue that should significantly affect the estimates, whether or not data are available. And who is preparing the estimates will inevitably colour their nature.

Table 15.7—Estimating the duration of design change approval—second pass

  situation                                            duration    comments
  normal situation
    optimistic estimate                                3 weeks     lowest observed value, plausible minimum
    pessimistic estimate                               7 weeks     highest observed value, plausible maximum
    expected value                                     5 weeks     central value, (3 + 7)/2
  abnormal situation
    optimistic estimate                                10 weeks    plausible minimum, given observed 15
    pessimistic estimate                               20 weeks    plausible maximum, given observed 15
    expected value                                     15 weeks    (10 + 20)/2, equal to observed 15 by design
  probability that an abnormal situation is involved
    optimistic estimate                                0.1         plausible minimum, given observed 0.2
    pessimistic estimate                               0.5         plausible maximum, given observed 0.2
    expected value                                     0.3         (0.1 + 0.5)/2, greater than 0.2 by design
  combined view
    optimistic estimate                                3 weeks     normal minimum
    pessimistic estimate                               20 weeks    abnormal maximum
    expected value                                     8 weeks     5 × (1 − 0.3) + 15 × 0.3

  Working assumptions: the 'normal' data come from a uniform probability distribution, 3 and 7 corresponding very approximately to 10 and 90 percentile values. The 'abnormal' data come from uniform probability distributions. Probabilities of 0.1 and 0.5 and durations of 10 and 20 weeks both correspond very approximately to 10 and 90 percentile values, defined subjectively (based on unquantified experience) in this case in relation to an observed 1 in 5 chance (probability 0.2) of an observed 15-week outcome, a sample of one.

The second-pass estimation model produces an 8 week expected value that is less than the 9 week expected value from the first pass.
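As a cross-check on the Table 15.7 arithmetic, the sketch below computes the combined expected value and also samples the two-state model directly. The simulation step is an illustrative extension of the book's worked example, not part of it.

    import random

    # Second-pass model of design change approval duration (Table 15.7).
    P_ABNORMAL_EXPECTED = 0.3   # expected probability of an abnormal situation
    NORMAL_EXPECTED = 5.0       # weeks, midpoint of the 3-7 range
    ABNORMAL_EXPECTED = 15.0    # weeks, midpoint of the 10-20 range

    combined = ((1 - P_ABNORMAL_EXPECTED) * NORMAL_EXPECTED
                + P_ABNORMAL_EXPECTED * ABNORMAL_EXPECTED)
    print(combined)  # 8.0 weeks, the 'combined view' expected value

    # Illustrative simulation of the same model: sample the abnormal probability
    # from its 0.1-0.5 range, then the duration from the relevant uniform range.
    def sample_duration():
        p = random.uniform(0.1, 0.5)
        if random.random() < p:
            return random.uniform(10, 20)  # abnormal situation
        return random.uniform(3, 7)        # normal situation

    samples = [sample_duration() for _ in range(100_000)]
    print(sum(samples) / len(samples))     # close to 8 weeks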
The ±6 week, crude 10 to 90 percentile value associated with the first pass remains plausible, but the distribution shape is considerably refined by the second-pass estimate. A third pass might now be required, to explore the abnormal 10 to 20 week possibility, or its 0.1 to 0.5 probability range, and to refine understanding of abnormal events. This could employ well-established project risk modelling and process practices, building on the minimalist basis as outlined earlier, if the importance and complexity of the issues makes it worthwhile. A very rich set of model structures can be drawn on. The basic PERT model implicit in our first two passes is the simplest model available and may not be an appropriate choice. Other estimation contexts offer similar choices.

A cube factor to evaluate and interpret estimates

If any estimate involves assumptions that may not be true, the conditional nature of the estimate, in terms of its dependence on those assumptions being true, may be very important. Treating such an estimate as if it were unconditional (i.e., not dependent on any assumptions being true) may involve a serious misrepresentation of reality. Unfortunately, there is a common tendency for assumptions underpinning estimates to be subsequently overlooked or not made explicit in the first place. This tendency is reinforced in the context of evaluating the combined effect of uncertainty about all activities in a project. Often this tendency is condoned and further reinforced by bias driven by a 'conspiracy of optimism'. Such treatment of assumptions is especially likely where people do not like uncertainty and they prefer not to see it. The presence of a conspiracy of optimism is more than enough to make this issue crucial in the formulation of estimates. If messengers get shot for telling the truth, people will be motivated to be economical with the truth.
Understanding the conditional nature of estimates is particularly important when estimates prepared by one party are used by another party, especially when contractual issues are involved. By way of a simple example, suppose the project manager concerned with estimating the approval duration used a second-pass estimate of 8 weeks and similar kinds of estimates for all activity durations in the project as a whole. How should the 'customer', 'the head office', or any other party who is a 'user' of the project manager's estimates interpret the project manager's estimate of project duration? The user would be wise to adjust the project manager's estimate to allow for residual uncertainty due to three basic sources:

• known unknowns—explicit assumptions or conditions that, if not valid, could have uncertain, significant consequences;
• unknown unknowns—implicit assumptions or conditions that, if not valid, could have uncertain, significant consequences;
• bias—systematic estimation errors that have significant consequences.

A problem is that adjusting estimates to allow for these sources of uncertainty often involves greater subjectivity than that involved in producing the estimates in question. This is an especially acute problem if 'objective estimates' are used that are irrational. User response to this problem varies. One approach is to collude and make no adjustments since there is no objective way to do so. Such a response may reinforce and encourage any 'conspiracy of optimism' or requirement for the appearance of objectivity in future estimating. Another response is to demand more explicit, detailed information about assumptions and potential limitations in estimates. However, unless this leads to more detailed scrutiny of estimates and further analysis, it does not in itself lead to changes in estimates. Indeed it may encourage the previously mentioned practice of conditional estimate cop-outs, especially if proffered assumptions become numerous and are less likely to be scrutinized and their implications explored. A third response, which is very common, is for users of estimates to make informal adjustments to estimates, although the reasons for these adjustments may not be clearly articulated. For example, forecasts from sales staff may be regarded as conservative by managers using the data to develop next year's incentive scheme, and project managers may treat cost or duration estimates as pessimistic and set deliberately tight performance targets to compensate. A well-known consequence of this is the development of a vicious circle in the production of estimates, whereby the estimator attempts to compensate for the user's anticipated adjustments, while suspicion of this practice encourages the estimate user to make increased adjustments to estimates. If several estimators are involved and estimates combined in a nested fashion, the scope for uncertainty about how realistic aggregated estimates are can be considerable.

A current controversy, centred on this issue, is the use of data-based adjustments to cost estimates as tentatively proposed by the UK Treasury (HMT, 2002). To adjust for the historically observed bias in project cost estimates, statistical estimates of bias by project type have been produced. It is argued that these estimates of bias should be used directly as a scaling factor on future cost estimates unless the process used to produce the estimate warrants lower adjustment.
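The sketch below illustrates the kind of data-based adjustment being described: a historical bias factor for a project type is applied as a simple multiplier, with a smaller uplift where the estimating process justifies it. The project types and factor values are hypothetical placeholders, not figures from the Treasury guidance.

    # Hypothetical historical bias uplifts by project type, expressed as
    # scaling factors on a submitted cost estimate (illustrative values only).
    HISTORICAL_BIAS_FACTOR = {
        "standard building": 1.15,
        "non-standard civil engineering": 1.40,
        "equipment and IT": 1.25,
    }

    def adjusted_cost(estimate, project_type, process_quality_discount=0.0):
        # process_quality_discount scales down the uplift (not the estimate)
        # where a well-founded estimating process warrants a lower adjustment.
        uplift = HISTORICAL_BIAS_FACTOR[project_type] - 1.0
        uplift *= 1.0 - process_quality_discount
        return estimate * (1.0 + uplift)

    print(adjusted_cost(10_000_000, "non-standard civil engineering"))       # 14,000,000
    print(adjusted_cost(10_000_000, "non-standard civil engineering", 0.5))  # 12,000,000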
All those concerned with following the advice that emerged (www.greenbook.treasury.gov.uk/) can use the approach outlined here. Taking a constructively simple approach involves attempting to roughly size adjustments for known unknowns, unknown unknowns and bias explicitly, in an effort to size the underlying uncertainty. The need to relate these adjustments to the base estimate implies the use of three scaling factors, Fk, Fu, and Fb, corresponding, respectively, to known unknowns, unknown unknowns and bias, that ought to be applied to an expected value estimate E. Fk, Fu, or Fb < 1 signifies a downward adjustment to an estimate E, while Fk, Fu, or Fb > 1 signifies an upward adjustment. Each scaling factor will itself be uncertain in size. Each adjustment factor is 1 ± 0 if a negligible adjustment effect is involved, but expected values different from 1 for each factor and an associated rational subjective probability distribution for each factor with a non-zero spread will often be involved. For conservative estimates of performance measures, like cost or time, expected values for Fk and Fu > 1 will usually be appropriate, while the expected value of Fb might be greater or less than 1 depending on the circumstances.

To test the validity of the project manager's estimate of project duration as a whole and to maintain simplicity, suppose the user of this estimate takes a sample of one activity estimate and selects the estimated duration of design approval for this purpose.

Consider first the adjustment factor Fk for known unknowns: any explicit assumptions that matter. If the project manager has identified a list of sources of uncertainty embodied in the normal situation and another list of sources of uncertainty embodied in the abnormal situation, and if these lists look appropriate and the quantification of associated uncertainty looks appropriate, then a negligible adjustment for known unknowns is involved and an Fk = 1 ± 0 is reasonable. However, if the estimator does not use rational, subjective probabilities, then the user of those estimates ought to do so to make a suitable adjustment. For example, if the project manager has recorded a conditional estimate cop-out for the approval duration of 3 weeks, this should suggest an expected value for Fk greater than 2 with an anticipated outcome range 1 to 10 if the user is familiar with data like those of Table 15.5 and analysis like that of Table 15.7. It would not be rational for the user to fail to make such an adjustment.

Similarly, an Fu = 1 ± 0 may be reasonable if the project manager made a provision for unknown unknowns when quantifying approval duration estimates in a Table 15.7 format that the user deems suitably conservative in the light of the quality of the identification of explicit assumptions. In contrast, an expected Fk > 2 with an anticipated outcome range 1 to 10 may suggest comparable values for Fu, depending on the user's confidence about Fk estimation and the quality of the project manager's estimate more generally.

In respect of any adjustment for systematic estimation errors or bias, setting Fb = 1 ± 0 may be reasonable if Fk = 1 ± 0 and Fu = 1 ± 0 seem sensible, conservative estimates and the organization involved has a history of no bias.
However, if estimates of design approval duration are thought to be understated relative to recent organizational history, a suitably large Fb expected value and associated spread is warranted.

Estimating scaling factors should depend to some extent on how they will be combined. The expected values of the scale factors might be applied to the conditional expected value of an estimate E to obtain an adjusted expected value Ea in a number of ways, including the following:

  Additive approach          Ea = E[(Fk − 1) + (Fu − 1) + (Fb − 1) + 1]
  Mixed approach             Ea = E Fb [(Fk − 1) + (Fu − 1) + 1]
  Multiplicative approach    Ea = E Fb Fk Fu

The additive approach implies separate adjustments are made to the estimate E and merely added together to obtain Ea. The mixed approach implies separate adjustments via Fk and Fu are applied to the base estimate E after it has been scaled for bias. The multiplicative approach is the most conservative, assuming the adjustments should operate in a cumulative fashion, and is operationally the simplest. This combination of characteristics makes it the preferred choice for the authors.

The product Fk Fu Fb constitutes a single 'cube' factor, short for Known Unknowns, Unknown Unknowns, and Bias (KUUUB), conveniently designated F³ and usefully portrayed graphically by the cube shown in Figure 15.3, provided this does not stimulate a desire for a geometric reinterpretation of F³. Given the tendency for perceived uncertainty to grow as it is decomposed, estimating three separate factors and then combining them using the multiplicative approach may be especially appropriate in the first-pass estimating process. A composite scale factor incorporating adjustments for KUUUB could be estimated in probability terms directly, but considering the three components separately helps to clarify the rather different issues involved.

Figure 15.3—A visual representation of the cube factor F³

Large F³ values will seem worryingly subjective to those who cling to an irrational objectivity perspective. However, explicit attention to F³ factors is an essential part of a rational subjectivity approach. It is seriously irrational to assume F³ = 1 ± 0 without sound grounds for doing so. At present, most organizations fail this rationality test. The key value of explicit quantification of F³ is forcing those involved to think about the implications of the factors that drive the expected size and variability of F³. Such factors may be far more important than the factors captured in a prior conventional estimation process where there is a natural tendency to forget about conditions and assumptions and focus on the numbers. Not considering an F³ factor explicitly can be seen as overlooking Heisenberg's principle: 'we have to remember that what we observe is not nature itself, but nature exposed to our method of questioning.' Attempting to explicitly size F³ makes it possible to try to avoid this omission. Different parties may emerge with different views about an appropriate F³, but the process of discussion should be beneficial. If an organization refuses to acknowledge and estimate F³ explicitly, the issues involved do not go away: they simply become unmanaged and the realization of associated downside risk will be a betting certainty.

The size of appropriate F³ factors is not just a simple function of objective data availability and the use of statistical estimation techniques; it is a function of the quality of the whole process of estimation and interpretation. In a project management context it will include issues driven by factors like the nature of the intended contracts.
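A minimal numerical sketch of how the three combination rules behave. The formulas are those given above; the Fk, Fu, and Fb values are invented for illustration only.

    # Combining cube factor components Fk (known unknowns), Fu (unknown unknowns)
    # and Fb (bias) with a base expected value E. Factor values are hypothetical.
    E = 8.0                      # weeks, second-pass expected approval duration
    Fk, Fu, Fb = 1.3, 1.2, 1.1

    additive = E * ((Fk - 1) + (Fu - 1) + (Fb - 1) + 1)
    mixed = E * Fb * ((Fk - 1) + (Fu - 1) + 1)
    multiplicative = E * Fb * Fk * Fu   # F3 = Fk * Fu * Fb applied as a single factor

    print(additive, mixed, multiplicative)   # 12.8, 13.2, 13.728 weeks

For factors at or above 1, the ordering additive ≤ mixed ≤ multiplicative holds, which illustrates why the multiplicative form is described as the most conservative of the three.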
In practice, a sample of one estimate yielding an Fk significantly different from 1 ought to lead to wider scrutiny of other estimates and other aspects of the process as a whole. In a project planning context, if one sampled activity duration estimate, such as duration of design change approval, yields an Fk significantly greater than 1, this ought to prompt scrutiny of other activity estimates and the role of the estimates in a wider context. Conversely, if no sample activity estimates are examined, this ought to lead to a large F³ value for a whole project estimate, given the track record of most organizations. Project teams and all users of their estimates need to negotiate a jointly optimal approach to producing original estimates and associated F³ factors. Any aspect of uncertainty that is left out by an estimate producer and is of interest to an estimate user should be addressed in the user's F³.

Interpreting another party's subjective or objective probability distributions requires explicit consideration of an F³ factor. The quality of the modelling as well as the associated parameter estimates need to be assessed to estimate F³. This includes issues like attention to dependence. Estimators and users of estimates who do not have an agreed approach to F³ factors are communicating in an ambiguous fashion, which is bound to generate mistrust. Trust is an important driver of the size of F³ factors.

As described here, the F³ factor concept is very simple and clearly involves a high level of subjectivity. Nevertheless, on the basis of 'what gets measured gets managed', it is necessary to highlight important sources of uncertainty and prompt consideration of underlying management implications. For the most part, high levels of precision in F³ factors and component factors are not practicable or needed. The reason for sizing F³ factors is 'insight not numbers'. However, more developed versions explicitly recognizing subjective probability distributions for F³ and its components are feasible (Chapman and Ward, 2002) and may be appropriate in estimation or modelling iterations where this is constructive.

This extended example makes use of a particular context to illustrate the rational subjectivity and cube factor aspects of a constructively simple approach to estimating. The focus is on important generic assessment issues and is less context-dependent than the first extended example, but some context-specific considerations cannot be avoided. There is considerable scope for addressing the relevance of the specific techniques and the philosophy behind the constructively simple estimating approach in other contexts, some examples being addressed elsewhere (Chapman and Ward, 2002).

A further objective

Estimation and evaluation of uncertainty are core tasks in any decision support process. The constructively simple estimating approach to these core tasks demonstrated by this example involves all seven important objectives that contribute to cost-effective uncertainty assessment discussed in the last section, plus one more.

Objective 8: Avoiding irrational objectivity

Corporate culture can drive people to display irrational objectivity.
An important objective is neutralizing this pressure, via 'rational subjectivity'. In particular, it is very easy to make assumptions, then lose sight of them, between the basic analysis and the ultimate use of that analysis: the Fk factor forces integration of the implications of such explicit assumptions; the Fu factor picks up the implicit assumptions; and the Fb factor integrates any residual bias. Ensuring this is done is an important objective.

Simplicity efficiency

In addition to a further objective, Objective 7 (simplicity with constructive complexity) is developed further in this example. In particular, it provides a useful direct illustration of the notion of 'simplicity efficiency'. If we see the probability structures that estimates are based on as models, with a wide range of feasible choices, a first-pass, constructively simple choice involves targeting a point on bNc in Figure 15.4. Choices on aNb are too simplistic to give enough insight. Later-pass choices should target a point on cNd. Choices like e are inefficient on any pass and should not be used. We start with an effective, constructively simple approach. We add 'constructive complexity' where it pays, when it pays, using earlier passes to help manage the choice process with respect to ongoing iterative analysis.

Figure 15.4—Simplicity efficiency boundary

Simplicity efficiency is the basis of risk management that is both effective and efficient. Chapman and Ward (2002) develop this simplicity efficiency concept further, in terms of concepts and processes as well as models. Simplicity efficiency might be termed simplicity–insight efficiency (SI efficiency for short), especially if the term risk–reward efficiency (RR efficiency) is adopted instead of risk efficiency. The term SI efficiency emphasizes the nature of the trade-off between simplicity and insight along an efficient frontier or boundary that is directly comparable with the RR trade-off associated with risk efficiency. This book will stick to the term simplicity efficiency. But it is important to see the conceptual link between simplicity efficiency and risk efficiency. Risk efficiency is a property of projects that we try to achieve as a basic objective common to all projects. Simplicity efficiency is a property of RMPs that we try to achieve with respect to all RMPs. Simplicity efficiency is a necessary condition for risk efficiency. Both effectiveness and efficiency in project terms require simplicity efficiency.

Ambiguity and a holistic view of uncertainty

A holistic view of uncertainty (see Objective 1 as discussed in the last section) must embrace ambiguity as well as variability. Ambiguity is associated with lack of clarity because of lack of data, lack of detail, lack of structure to consider the issues, assumptions employed, sources of bias, and ignorance about how much effort it is worth expending to clarify the situation. This ambiguity warrants attention in all parts of the decision support process, including estimation and evaluation. However, consideration of uncertainty in the form of ambiguity is not facilitated in estimation by the commonly used probability models that focus on variability, especially when variability is associated with objective probabilities.

The implications of uncertainty in simple, deterministic model parameters and associated model outputs are commonly explored by sensitivity analysis, and complex probabilistic models commonly use techniques like Monte Carlo simulation to explore uncertainty modelled directly. However, neither of these evaluation approaches explicitly addresses ambiguity issues concerning the structure of the modelling of core issues, choices about the nature of the specific process being used, and the wider characterization of the context being addressed.
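As a small illustration of the sensitivity analysis idea using numbers already introduced in this example, the sketch below varies the probability of an abnormal approval situation across its 0.1 to 0.5 range and reports the resulting expected approval duration. It is a deliberately simple, deterministic calculation and says nothing about the ambiguity issues raised above.

    # One-way sensitivity of expected approval duration to the abnormal-situation
    # probability, holding the Table 15.7 normal (5 weeks) and abnormal (15 weeks)
    # expected durations fixed.
    NORMAL_EXPECTED = 5.0     # weeks
    ABNORMAL_EXPECTED = 15.0  # weeks

    for p_abnormal in (0.1, 0.2, 0.3, 0.4, 0.5):
        expected = (1 - p_abnormal) * NORMAL_EXPECTED + p_abnormal * ABNORMAL_EXPECTED
        print(f"p = {p_abnormal:.1f}  ->  expected duration {expected:.1f} weeks")
    # The output ranges from 6.0 weeks at p = 0.1 to 10.0 weeks at p = 0.5.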
The SHAMPU process recognizes that estimating expected values and the variability of decision support parameters cannot be decoupled from understanding the context, choosing a specific process for this analysis, specifying the model structure, and evaluating and interpreting the consequences of this uncertainty. However, the presence of ambiguity increases the need for data acquisition, estimation, and model development to proceed in a closely coupled process. Failure to recognize this can lead to decision support processes that are irrational as well as ineffective and inefficient.

[...]

... by requiring transparent pricing of risk in tenders and by requiring tender submissions to include plans for managing risk. In this way comparisons between tenderers in terms of the extent of well-founded willingness to bear risk can be made on a more informed basis. In practice, tenderers experienced in risk management may be able to demonstrate well-founded willingness ...

... parties so as to maximize the likelihood of project objectives being achieved, taking into account the constraints and risks that act on the project and the strengths and weaknesses of the parties to it ...

Determining an appropriate target cost

A further problem in ensuring a risk efficient incentive contract is determining an appropriate value for the target cost ...

... chapter a corporate perspective on project risk management processes (RMPs) is adopted. We consider briefly what is involved in establishing and sustaining an organizational project risk management capability (PRMC). The issues involved can be explored more systematically if we consider the setting up and operation of a PRMC as a project in its own right and examine this project in terms of the six Ws framework ...

... and Ward, 1994) concludes the following about the risk efficiency of fixed price and CPFF contracts:

1. a fixed price contract is usually risk efficient in allocating contractor-controllable risk;
2. a CPFF contract is usually risk efficient in allocating client-controllable risk;
3. in respect of uncontrollable risk, a fixed price contract is risk efficient if the contractor is more willing to accept risk ...

... might be based on the willingness of parties to take on a risk (Ward et al., 1991). However, willingness to bear risk will only result in conscientious management of project risks to the extent that ...

... Manage Project Uncertainty) process (Chapter 9) is concerned with allocating responsibility for managing project uncertainty to appropriate project parties. As noted previously, the issues involved ...

... different perceptions of project-related risk and different priorities in project risk management. Differences in perception of project success arise most obviously in client–contractor relationships.
