A COST EFFECTIVENESS AND PROBABILISTIC SENSITIVITY ANALYSIS OF OPPORTUNISTIC SCREENING VERSUS SYSTEMATIC SCREENING FOR SIGHT THREATENING DIABETIC EYE DISEASE

Bryce S. Sutton, B.A., M.A.

A Dissertation Presented to the Faculty of the Graduate School of Saint Louis University in Partial Fulfillment of the Requirements for the Degree of Doctor of Philosophy
A COST EFFECTIVENESS AND PROBABILISTIC SENSITIVITY ANALYSIS OF OPPORTUNISTIC SCREENING VERSUS SYSTEMATIC SCREENING FOR SIGHT THREATENING DIABETIC EYE DISEASE

Bryce S. Sutton, B.A., M.A.

An Abstract Presented to the Faculty of the Graduate School of Saint Louis University in Partial Fulfillment of the Requirements for the Degree of Doctor of Philosophy
ABSTRACT

According to the American Diabetes Association, diabetic retinopathy, or diabetic eye disease, is the leading cause of blindness in the United States. To begin to address the problems associated with the adverse economic impact of diabetes, and in particular diabetes related blindness, a cost effective method of identifying patients for treatment must be determined. This study addresses the concerns of health care decision makers from a national health care perspective. To inform this decision, a probabilistic sensitivity analysis and hypothesis test of a screening problem for sight threatening diabetic eye disease is presented.

To provide information of relevance to health care decision makers, uncertainty in multiple parameters is characterized and allowed to vary simultaneously. These parameters affect cost effectiveness calculations of screening for diabetic eye disease, and the uncertainty surrounding these parameters is explicitly presented for evaluation.
A hypothesis test is performed to compare the cost effectiveness of an opportunistic (primary care) screening program for sight threatening diabetic eye disease to a systematic screening program implemented by a national health service. Point estimates from the clinical literature are used to generate distributions for the probabilistic sensitivity analysis. The results of the sensitivity analysis are then used to test the hypothesis that the cost effectiveness of systematic screening for sight threatening diabetic eye disease is significantly different from the cost effectiveness of opportunistic screening for sight threatening diabetic eye disease.

To allow for the possibility that health care decision makers face differing health care objectives and/or differing budget constraints, results are presented in the form of acceptability curves showing the probability that systematic screening is more cost effective at differing prices per effectiveness unit relative to opportunistic screening.
COMMITTEE IN CHARGE OF CANDIDACY: Professor Patrick Welch
DEDICATION

This dissertation is dedicated to the memory of Chester Perry Sutton, a man who, despite being robbed of his sight, continued to inspire everyone he touched through his courage and independence.
TABLE OF CONTENTS

List of Tables
List of Figures
Chapter I: Introduction
Chapter II: Review of the Literature
    The Felli and Hazan Approach
    The Phelps/Mushlin Model
    Claxton's Decision Making Approach
Chapter III: Methodology
    Disease Background and Review of Cost Effectiveness Research
    Presentation of the Cost Effectiveness Data
    Sensitivity Analysis in the Screening Study
    The Role of Sensitivity Analysis in Cost Effectiveness Models
    Characterizing Parameter Uncertainty about Probabilities
    Estimation of Costs in the Simulation
    Characterizing Parameter Uncertainty about QALYs
    Using Quality Adjusted Life Years (QALYs)
    The Standard Gamble Method for QALYs

TABLE OF CONTENTS (CONT.)
    Distributions Used in the Simulation
    Descriptive Statistics of Distributions from the Baseline Simulation
    Modeling Costs of Systematic and Opportunistic Screening
    Calculation of the Incremental ...
LIST OF TABLES

Table 2.1: Costs and Utilities Associated with Disease Status and Treatment Status
Table 2.2: Expected Benefits and Expected Costs of Treating and Not Treating Patients
Table 2.3: Patient Subgroups, Fallback Strategies, and Optimal Treatment Strategies (C_T = $100,000)
Table 3.1: Baseline Values for the Cost Effectiveness of Opportunistic and Systematic Screening Programs for Sight Threatening Diabetic Eye Disease
Table 3.2: Cost Effectiveness Figures for Systematic and Opportunistic Screening Programs
Table 3.3: Mean Visual Utility Values in Patients with Diabetic Retinopathy
Table 4.1: Distributions Used in the Baseline Simulation
Table 4.2: Distributions Representing ...
LIST OF FIGURES

Figure 2.1: The Expected Value of Clinical Information and the Prior Probability of Disease
Figure 3.1: The Cost Effectiveness Plane
Figure 3.2: The Standard Gamble for Some Chronic Health State Preferred to Death
Figure 4.1: The Frequency Distribution for the Opportunistic Screening Rate of Compliance
Figure 4.2: The Frequency Distribution for the Systematic Screening Rate of Compliance
Figure 4.3: The Frequency Distribution for the Sensitivity of Opportunistic Screening
Figure 4.4: The Frequency Distribution for the Sensitivity of Systematic Screening
Figure 4.5: The Frequency Distribution for QALYs from Sight Threatening Diabetic Eye Disease
Figure 4.6: The Frequency Distribution for Potential QALYs Gained
Figure 4.7: The Frequency Distribution for the Duration of Disease
Figure 4.8: Total Cost of Systematic ...

LIST OF FIGURES (CONT.)

Figure 4.9: Acceptability Curves for the ...
CHAPTER 1: INTRODUCTION

According to the American Diabetes Association, diabetic retinopathy, or diabetic eye disease, is the leading cause of blindness in the United States. Medical and health services research has determined that the progress of sight threatening diabetic eye disease can be slowed through detection and treatment. Early detection of the disease is critical because treatment for sight threatening diabetic eye disease is most effective in the early stages of the disease[1]. Clinical studies in the United States and elsewhere show that the incidence of blindness due to the main forms of diabetic eye disease may be dramatically reduced with laser therapy[2-4].

The economic importance of screening for sight threatening diabetic eye disease is reinforced by some astonishing statistics. Estimates of the total economic impact of diabetes are large. The estimated cost of diabetes alone is $14 billion annually, while diabetic blindness alone accounts for $75 million in lost income and public-welfare expense[5]. The burden is compounded when one considers the fact that a large percentage of people who have diabetes do not obtain screening exams for diabetic retinopathy regardless of their disease status[5].
To begin to address the problems associated with the adverse economic impact of diabetes, and in particular diabetes related blindness, a cost effective method of identifying patients for treatment must be determined. This study addresses the concerns of health care decision makers from a national health care perspective. From this perspective, the cost effectiveness of screening, identification of disease, and treatment must be determined with respect to a multitude of concurrent, competing health care objectives given a national health care budget constraint. To inform this decision, a probabilistic sensitivity analysis and hypothesis test of a screening problem for sight threatening diabetic eye disease is presented.
To provide information of relevance to health care decision makers, uncertainty in multiple parameters is characterized and allowed to vary simultaneously. These parameters affect cost effectiveness calculations of screening for sight threatening diabetic eye disease, and the uncertainty surrounding these parameters is explicitly presented for evaluation. A hypothesis test is performed to compare the cost effectiveness of an opportunistic (primary care) screening program for sight threatening diabetic eye disease to a systematic screening program implemented by a national health service.
For the decision problem, baseline information is gleaned from a study designed to evaluate the cost effectiveness of replacing an existing opportunistic screening program for sight threatening diabetic eye disease with systematic screening for diabetic patients in the United Kingdom[6]. Point estimates from this study are used to generate distributions for the probabilistic sensitivity analysis. The results of the sensitivity analysis are then used to test the hypothesis that the cost effectiveness of systematic screening for sight threatening diabetic eye disease is significantly different from the cost effectiveness of opportunistic screening for sight threatening diabetic eye disease. To allow for the possibility that health care decision makers face differing health care objectives and/or differing budget constraints, results are presented in the form of acceptability curves showing the probability that systematic screening is more cost effective at differing prices per effectiveness unit relative to opportunistic screening.
To provide a theoretical framework for the economic evaluation of diagnostic technologies, chapter 2 of this study presents a review of the relevant literature. The Phelps/Mushlin model presents a formal theoretical guide in which to examine competing diagnostic tests[7]. This model presents a diagnostic decision problem in which one diagnostic test is compared to a fallback strategy. The model is easily applicable to the present case, in which the fallback strategy is an existing opportunistic or primary care screening program. Next, Claxton's decision making approach to the stochastic evaluation of health care technologies is presented[8]. This approach offers a complementary and general way to statistically test the incremental cost effectiveness of competing medical interventions. In this approach one may adopt a classical/frequentist or Bayesian framework; in this study, a test statistic is calculated in a classical hypothesis test for significant differences in the cost effectiveness of competing screening strategies for sight threatening diabetic eye disease. Alongside the Phelps/Mushlin and Claxton models, Felli and Hazan's approach to the evaluation of health care technologies under conditions of uncertainty is reviewed[9]. The Felli and Hazan approach presents logical extensions of the classical hypothesis test and probabilistic sensitivity analysis examined in this study.
In chapter 3, the methodology of the economic evaluation of competing strategies for detecting sight threatening diabetic eye disease is presented. Here a clinical background of the disease is discussed, in which the primary factors affecting the economic evaluation are examined. An in-depth analysis of the baseline data used in the simulation follows, with discussion and calculation of the baseline incremental cost effectiveness ratios. Distributions used in the Monte Carlo simulation are given, with a brief discussion of the role of probabilistic sensitivity analysis in health care decision making. Finally, a discussion of the use of quality adjusted life years (QALYs) is presented.
Chapter 4 gives the results of the probabilistic sensitivity analysis, hypothesis test, and acceptability curves. Graphs and parameterization of each distribution are given in chapter 4, with a discussion of the use of each distribution. To perform a hypothesis test, a test statistic incorporating an explicit monetary valuation of the health outcome is given and the results of the test are presented. This study supports the conclusion that systematic screening for sight threatening diabetic eye disease is more cost effective than opportunistic or primary care screening for sight threatening diabetic eye disease. This result is highly significant at the standard benchmark price per effectiveness unit of $50,000 per QALY. To examine decision sensitivity and the possibility that decision makers may have alternative valuations of the health outcome, acceptability curves were constructed for the baseline simulation and alternative simulations. The acceptability curves for the baseline simulation were compared with curves from alternative simulations incorporating additional uncertainty in the duration of disease. These curves were also discounted at 0%, 3%, and 5% and are presented in the appendix.
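As an illustration of how acceptability curves of this kind can be generated from probabilistic sensitivity analysis output, the sketch below computes, for a range of prices per effectiveness unit, the proportion of simulated draws in which systematic screening has positive incremental net monetary benefit. The distributions and values are hypothetical placeholders, not the study's data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical PSA output: incremental cost and incremental QALYs of
# systematic versus opportunistic screening, one value per simulation draw.
n_draws = 10_000
delta_cost = rng.normal(loc=200.0, scale=80.0, size=n_draws)   # dollars per patient
delta_qaly = rng.normal(loc=0.01, scale=0.004, size=n_draws)   # QALYs per patient

# Acceptability curve: for each willingness to pay (price per QALY), the
# probability that systematic screening has positive incremental net benefit.
prices = np.arange(0, 100_001, 10_000)
for price in prices:
    prob = np.mean(price * delta_qaly - delta_cost > 0)
    print(f"${price:>7,} per QALY: P(systematic more cost effective) = {prob:.3f}")
```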
In chapter 5, a summary of the study results is given. Chapter 5 concludes with suggestions for further research. Overall, the study suggests that systematic screening, given the baseline data and characterization of parameter uncertainty, should be implemented as the most cost effective method to screen for sight threatening diabetic eye disease in countries with similar disease prevalence, compliance rates, and patient cohort characteristics. These results are insensitive to changes in the price per effectiveness unit for values far away from the benchmark value of $50,000 per QALY. These results also suggest that different training among primary care physicians, specialists, and other medical personnel leads to large differences in diagnostic sensitivity. These sensitivity differentials complicate the accurate evaluation of any screening program.
CHAPTER 1: REFERENCES

1. Sculpher, M.J., et al., A Relative Cost-Effectiveness Analysis of Different Methods of Screening for Diabetic Retinopathy. Diabetic Medicine, 1991. 8: p. 644-650.
2. Diabetic Retinopathy Study Research Group, Photocoagulation Treatment of Proliferative Diabetic Retinopathy: Clinical Application of the Diabetic Retinopathy Study (DRS) Findings - DRS Report Number 8. Ophthalmology, 1981. 88: p. 583-600.
3. Early Treatment Diabetic Retinopathy Study Research Group, Photocoagulation for Diabetic Macular Edema: Early Treatment Diabetic Retinopathy Study Report Number 1. Archives of Ophthalmology, 1985. 103: p. 1796-1806.
4. Early Treatment Diabetic Retinopathy Study Research Group, Early Photocoagulation for Diabetic Retinopathy: ETDRS Report Number 9. Ophthalmology, 1991. 98(Supplement): p. 766-785.
5. Lairson, D.R., et al., Cost-Effectiveness of Alternative Methods for Diabetic Retinopathy Screening. Diabetes Care, 1992. 15: p. 1369-1377.
6. James, M., et al., Cost Effectiveness Analysis of Screening for Sight Threatening Diabetic Eye Disease. British Medical Journal, 2000. 320: p. 1627-1633.
7. Phelps, C.E. and A.I. Mushlin, Focusing Technology Assessment Using Medical Decision Theory. Medical Decision Making, 1988. 8: p. 279-289.
8. Claxton, K., The Irrelevance of Inference: A Decision-Making Approach to the Stochastic Evaluation of Health Care Technologies. Journal of Health Economics, 1999. 18: p. 341-364.
9. Felli, J.C. and G.B. Hazan, Sensitivity Analysis and the Expected Value of Perfect Information.
CHAPTER 2: REVIEW OF THE LITERATURE

According to Briggs, probabilistic analysis is still not common in the health economic evaluation literature[1, 2]. In a study from 1996, only 7 out of 492 studies reviewed contained any probabilistic analysis, and of the 7, only 2 included discount rates in their probabilistic analysis[3].
Probabilistic sensitivity analysis describes a method in which all uncertainties (probabilities, costs, utilities) are considered simultaneously. Each uncertain parameter in a probabilistic sensitivity analysis is assumed to possess a probability distribution representing the possible values the input parameter may take[4]. Multiple simulations of the decision problem are run, in which each input parameter in the decision problem is randomly assigned a value from its respective distribution.

The simulation output contains the mean and standard deviation for each input parameter, and/or any desired statistic, over all iterations of the simulation. The small number of published studies suggests that probabilistic sensitivity analysis is still not widely understood.
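A minimal sketch of the mechanics just described is given below: each uncertain input is assigned a distribution, all inputs are drawn jointly on every iteration, and summary statistics are collected over the iterations. The distributions, the prevalence value, and the simple cost effectiveness model are hypothetical illustrations, not the parameters used in this study.

```python
import numpy as np

rng = np.random.default_rng(42)
n_iter = 5_000

# Hypothetical input distributions (placeholders, not the study's parameters).
compliance = rng.beta(a=80, b=20, size=n_iter)                 # screening compliance rate
sensitivity = rng.beta(a=90, b=10, size=n_iter)                # test sensitivity
cost_per_screen = rng.gamma(shape=25, scale=2.0, size=n_iter)  # dollars per screen
qaly_gain_detected = rng.normal(0.05, 0.01, size=n_iter)       # QALYs per case found

prevalence = 0.06  # assumed fixed here for simplicity

# Simple illustrative model: cost and effect per 1,000 patients offered screening.
cases_found = 1_000 * prevalence * compliance * sensitivity
total_cost = 1_000 * compliance * cost_per_screen
total_qalys = cases_found * qaly_gain_detected
icer = total_cost / total_qalys  # dollars per QALY gained, one value per iteration

print("mean ICER:", icer.mean())
print("sd ICER:  ", icer.std(ddof=1))
print("2.5%-97.5% interval:", np.percentile(icer, [2.5, 97.5]))
```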
Probabilistic sensitivity analysis is an essential method for assessing the uncertainty inherent in any decision problem. While simple sensitivity analysis, threshold analysis, and analysis of extremes all provide methods for evaluating uncertainty, probabilistic analysis is well suited to facilitating natural extensions of Bayesian decision-making analysis. Acceptability curves and a full Bayesian decision-making approach can be adopted readily once a probabilistic sensitivity analysis has been completed.
A current problem in medical decision-making models is distinguishing between value sensitivity and decision sensitivity[5]. Value sensitivity refers to sensitivity of a model's outcome measure, such as a cost-utility measure, cost-benefit measure, or cost effectiveness measure. Whenever an outcome measure fluctuates dramatically in response to changes in input parameters, the outcome measure is defined as being "sensitive" to changes in those input parameters. Decision sensitivity refers to whether the preferred course of action changes in response to changes in input parameters. It is quite possible for medical decision-making models to have a high degree of value sensitivity while at the same time having little or no decision sensitivity, or vice versa[5]. For example, during a sensitivity analysis an incremental cost effectiveness ratio (ICER) may fluctuate considerably when input parameters are changed, but the sign of the ICER does not change. This indicates the ICER is value sensitive but not decision sensitive. For an ICER to be decision sensitive, the sign of the ICER must change. This indicates that the optimal action, adopt treatment 1 or treatment 2 for example, has changed. It is argued here that decision uncertainty is the relevant uncertainty, and sensitivity analysis should effectively address this risk in the decision-making problem.
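The distinction can be made concrete with a small sketch: a single parameter is varied over a range, the ICER is recorded (value sensitivity), and the adoption decision at a given ceiling ratio is checked for changes (decision sensitivity). The incremental cost and effect functions and the $50,000 per QALY threshold below are hypothetical choices, not results from this study.

```python
import numpy as np

# Hypothetical incremental cost and effect of "treatment 1 versus treatment 2"
# as functions of a single uncertain parameter (illustrative numbers only).
def incremental_cost(param):
    return 200.0 + 2_000.0 * param      # dollars

def incremental_effect(param):
    return 0.01 + 0.04 * param          # QALYs

ceiling_ratio = 50_000.0                # assumed dollars-per-QALY decision threshold

params = np.linspace(0.0, 1.0, 11)
icers = incremental_cost(params) / incremental_effect(params)

# Value sensitivity: the ICER moves over a wide range as the parameter varies...
print(f"ICER range: {icers.min():,.0f} to {icers.max():,.0f} $/QALY")

# ...but decision sensitivity asks whether the preferred action changes.
adopt = icers < ceiling_ratio
print("Adoption decision changes over the range:", bool(adopt.min() != adopt.max()))
```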
Felli et al. have demonstrated that sensitivity analysis (SA) methods based on threshold proximity and entropy can substantially overestimate sensitivity[5]. Threshold proximity measures use an established threshold value to address decision sensitivity. When a measure approaches or crosses the established threshold, a change in the input parameter is judged likely to lead to a change in the optimal alternative. This type of analysis becomes difficult as the number of input parameters increases, and the effect of simultaneous variation in input parameters cannot be considered as in one- and two-way SA[5]. Likewise, arbitrary definitions of what is the appropriate threshold, as well as what is an appropriate distance metric, lead to arbitrary determinations of what may or may not be "sensitive"[6, 7].
Entropy based measures, such as the mutual information index, provide yet another method to assess decision sensitivity[5, 8-10]. The mutual information index represents the information gained about a distribution B conditioned on a specific value A = a. An increase in the index indicates that A contains information about B, while a small increase indicates A contains little information about B.
Critchfield and Willard normalize I_AB by using the self-information of B:

(2.2)   I_B = \sum_b p_b \log\!\left(\frac{1}{p_b}\right)

The mutual information index for the optimal action B influenced by the parameter A, normalized by the self-information of B, is given by[10]:

(2.3)   S_{AB} = \frac{I_{AB}}{I_B} \times 100\%
Critchfield and Willard indicate that a mutual information index could serve as a proxy for decision sensitivity to the parameter A, because the magnitude of S_AB indicates the degree to which the parameter A explains the variability in the action B[5, 10]. However, once again, the question arises as to what constitutes "sensitivity." As Felli and Hazan point out concerning mutual information index figures, "while it is true that the problem is 'more sensitive' to these parameters than to others, it is not clear what this means"[5]. Additionally, the construction of the index and the difficulty in calculating conditional probability distributions make the use of entropy based measures cumbersome in practice. Attention is therefore directed away from measures of information, toward the value of information to reduce uncertainty.
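A rough sketch of an entropy based measure in the spirit of the index described above is given below: the mutual information between a discretized parameter A and the optimal action B is normalized by the self-information (entropy) of B and expressed as a percentage. The simulated parameter and action are hypothetical, and the implementation is only one plausible reading of the index.

```python
import numpy as np
from collections import Counter

def mutual_information_index(a_samples, b_samples):
    """Share of the entropy of the optimal action B explained by parameter A,
    expressed as a percentage: I(A;B) / H(B) * 100."""
    n = len(b_samples)
    count_a = Counter(a_samples)
    count_b = Counter(b_samples)
    count_ab = Counter(zip(a_samples, b_samples))

    info = 0.0
    for (a, b), n_ab in count_ab.items():
        p_joint = n_ab / n
        info += p_joint * np.log2(n_ab * n / (count_a[a] * count_b[b]))

    entropy_b = -sum((c / n) * np.log2(c / n) for c in count_b.values())
    return 100.0 * info / entropy_b if entropy_b > 0 else 0.0

# Hypothetical illustration: a binned parameter and the resulting optimal action.
rng = np.random.default_rng(1)
param = rng.integers(0, 4, size=10_000)                              # parameter A, 4 levels
action = (param + rng.integers(0, 2, size=10_000) > 2).astype(int)   # action B depends partly on A

print(f"S_AB = {mutual_information_index(param.tolist(), action.tolist()):.1f}%")
```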
The Felli and Hazan Approach
Felli and Hazan conceptually present the expected value of perfect information (EVPI) as the average improvement in payoff a decision-maker could expect to receive from perfect information relative to the payoff associated with the actual decision[5]. In any decision, the primary problem facing the decision-maker is the choice of action given parameter uncertainty. Following Felli and Hazan, we can consider a problem with several possible actions and a single uncertain parameter, ξ. The decision-maker seeks to maximize his or her expected payoff, V, given the value of ξ most likely to obtain, say ξ_0. The decision-maker seeks to choose the action a_0 among all possible alternatives such that:

(2.4)   E(V_{a_0} \mid \xi = \xi_0) = \max_a E(V_a \mid \xi = \xi_0)

Whenever we base a decision upon our best estimate of the uncertain parameter ξ, we face some risk that our estimate is wrong. Thus, given our estimate of ξ, the chosen action a_0 may forgo some payoff. Felli and Hazan express this difference as[5]:

(2.5)   \text{Average Forgone Payoff} = E_\xi\!\left[\max_a E(V_a \mid \xi) - E(V_{a_0} \mid \xi)\right] = EVPI(\xi)
Here the EVPI represents the difference between the payoff from perfect knowledge about parameter uncertainty prior to the time of decision and the payoff received from the chosen action a_0. This measure of decision sensitivity provides information not only on the parameter uncertainty which has the greatest chance of causing a change in the base-optimal action, but also about the marginal improvements in payoffs that could be obtained from reductions in parameter uncertainty[5]. Further, the EVPI can be expressed in the same units as payoffs in the decision problem, whether they be effectiveness units, QALYs, or monetary units.
Felli and Hazan provide two possible Monte Carlo simulation procedures to calculate the EVPI. The procedures are broadly applicable, as independence and linearity of parameters are not necessarily required to calculate the EVPI. Again following Felli and Hazan, we consider a decision problem which has a payoff V and depends upon a set of parameters ξ and an action a. We may be interested in the EVPI for a set of parameters of interest ξ_I = {ξ_i}, i ∈ I, with the remaining set of parameters in the problem denoted ξ_I^c. The EVPI for the parameters of interest ξ_I becomes:
(2.6)   EVPI(\xi_I) = E_{\xi_I}\!\left[ E_{\xi_I^c}\!\left(V \mid \xi_I, a^*(\xi_I)\right) - E_{\xi_I^c}\!\left(V \mid \xi_I, a^*\right) \right]
              = E_{\xi_I}\!\left[ \text{improvement using } a^*(\xi_I) \text{ instead of } a^* \right]

where:
a* = base optimal action
a*(ξ_I) = optimal action as a function of ξ_I
The suggested Monte Carlo simulation procedure amounts to generating random parameter values for the set of interest, ξ_I. For every generated set ξ_I, the optimal action as a function of that set, a*(ξ_I), is determined. Then the improvement obtained from using a*(ξ_I) as opposed to a* is determined. The average of all improvement values gives the estimated EVPI(ξ_I).
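A minimal sketch of this procedure for a two-action decision problem with two uncertain parameters appears below. The payoff functions and distributions are hypothetical; the outer loop draws the parameter of interest, re-optimizes the action, and averages the improvement over the base-optimal action.

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical two-action decision problem (illustrative only).
# The payoff of each action depends on two uncertain parameters, xi1 and xi2.
def payoff(action, xi1, xi2):
    if action == 0:
        return 10.0 + 2.0 * xi1
    return 8.0 + 5.0 * xi2

def expected_payoff(action, xi1=None, xi2=None, n_inner=2_000):
    """Expectation over any parameter not supplied (inner Monte Carlo loop)."""
    x1 = rng.normal(1.0, 0.5, n_inner) if xi1 is None else np.full(n_inner, xi1)
    x2 = rng.normal(0.8, 0.6, n_inner) if xi2 is None else np.full(n_inner, xi2)
    return np.mean(payoff(action, x1, x2))

# Base-optimal action a*: maximize expected payoff over all uncertainty.
base_action = max((0, 1), key=lambda a: expected_payoff(a))

# Partial EVPI for xi1: learn xi1 perfectly, keep xi2 uncertain.
n_outer = 1_000
improvements = []
for _ in range(n_outer):
    xi1 = rng.normal(1.0, 0.5)
    payoffs = {a: expected_payoff(a, xi1=xi1) for a in (0, 1)}
    best_action = max(payoffs, key=payoffs.get)          # a*(xi1)
    improvements.append(payoffs[best_action] - payoffs[base_action])

print("Estimated EVPI(xi1):", np.mean(improvements))
```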
The idea of valuing perfect information about a decision is not new[11-14]. However, the application of the expected value of perfect information to medical decision making is new. Phelps and Mushlin provided the general framework, utilizing value of information analysis in a formal model by considering a simple treat/no treat decision tree for the evaluation of a diagnostic test.
The Phelps/Mushlin Model

The Phelps/Mushlin model is a decision analytic model designed to assess diagnostic technologies[15]. The authors consider a simple decision tree in which a diagnostic technology is employed to determine the course of treatment. The model simplifies the possible actions by considering only two alternatives: to either treat or not treat based on the diagnostic information. The value of the diagnostic technology is compared to a preferred fallback strategy, in this case to not treat a patient. The model is instructive in presenting a pragmatic medical decision problem.
When a new medical technology is introduced, it must be compared to existing technology in current medical use. An important question when faced with the task of medical technology evaluation is how to determine what research is necessary to adequately inform the decision. By introducing the concept of the expected value of imperfect diagnostic information, a cut-off is established which can eliminate certain types of costly medical research[15].

Certain types of medical research, such as controlled clinical trials, are designed to establish the efficacy of a medical technology in that technology's ideal use. These types of trials ignore the costs of less than ideal use of these technologies across heterogeneous populations with varying degrees of disease prevalence[15]. Therefore, a model which evaluates medical technology in the less than ideal state can provide a more accurate assessment of the technology's cost effectiveness in actual use. The model effectively demonstrates the importance of disease prevalence in the assessment of a diagnostic technology's cost effectiveness.
The Phelps and Mushlin model considers several prior probabilities and patient states. The clinician's prior probability of disease and wellness is given by:

m = prior probability that the patient has disease (prevalence)
(1 - m) = prior probability that the patient does not have disease (well)

To characterize the capability of the diagnostic technology, we specify the probabilities of true and false diagnoses as:

p = probability of a true positive given the patient has disease (sensitivity)
(1 - p) = probability of a false negative given the patient has disease
q = probability of a false positive given the patient is well
(1 - q) = probability of a true negative given the patient is well (specificity)
Two possible actions are allowed in the model: either to treat (t) or to not treat (n). The costs and benefits (utilities) of each patient health state are denoted as C_ij and E_ij respectively. The subscripts i and j indicate health status and treatment choice respectively. Thus, the cost associated with a diseased patient who is receiving treatment would be denoted C_mt, while the utility associated with a patient who is well and not receiving treatment would be denoted E_wn. The costs and utilities associated with all possible states are given in table 2.1[15].
Table 2.1: Costs and Utilities Associated with Disease Status and Treatment Status

Costs                                        Utilities
C_mt   cost of sick patient, treated         E_mt   utility of sick patient, treated
C_mn   cost of sick patient, not treated     E_mn   utility of sick patient, not treated
C_wt   cost of well patient, treated         E_wt   utility of well patient, treated
C_wn   cost of well patient, not treated     E_wn   utility of well patient, not treated
Phelps and Mushlin assume that treating sick people and not treating well people provides more benefit than not treating sick people and treating well people, or E_mt > E_mn and E_wn > E_wt. The fundamental assumption in decision analysis is that a decision should be based on maximizing expected benefit or utility. Therefore, a decision about whether or not to treat a patient with a prior probability (m) of disease depends upon the expected benefits of treating and not treating patients weighted by the prevalence of disease (m). The expected utilities and costs of treating and not treating patients are given in table 2.2[15].
Table 2.2: Expected Benefits and Expected Costs of Treating and Not Treating Patients

m·E_mt + (1 - m)·E_wt     Expected benefit of treatment
m·E_mn + (1 - m)·E_wn     Expected benefit of not treating
m·C_mt + (1 - m)·C_wt     Expected cost of treatment
m·C_mn + (1 - m)·C_wn     Expected cost of not treating
Expressing the expected benefit of treating patients versus not treating patients in incremental terms, we obtain[15]:

(2.7)   \left[m E_{mt} + (1-m) E_{wt}\right] - \left[m E_{mn} + (1-m) E_{wn}\right] = m \Delta E_m - (1-m) \Delta E_w

where:
\Delta E_m = E_{mt} - E_{mn}
\Delta E_w = E_{wn} - E_{wt}

Similarly, for the incremental costs of treating versus not treating we obtain[15]:

(2.8)   \left[m C_{mt} + (1-m) C_{wt}\right] - \left[m C_{mn} + (1-m) C_{wn}\right] = m \Delta C_m - (1-m) \Delta C_w

where:
\Delta C_m = C_{mt} - C_{mn}
\Delta C_w = C_{wn} - C_{wt}
Given that we seek to compare the cost effectiveness of treatment versus no treatment, equation (2.8) becomes the numerator of the incremental cost effectiveness ratio (ICER), while equation (2.7) becomes the denominator of the ICER. A decision-maker is faced with the problem of comparing the net benefit of treatment versus no treatment to the net benefit forgone of expanding the next best "marginal" activity[15]. If we let the cost effectiveness of the next best activity be denoted as 1/g = $/Quality Adjusted Life Year (QALY), then the decision maker must compare the inverse of the ICER of treatment versus no treatment to g, or[15]:

(2.9)   \frac{m \Delta E_m - (1-m) \Delta E_w}{m \Delta C_m - (1-m) \Delta C_w} > g
If equation (2.9) holds, then the expected benefit per unit cost of treating all patients, as opposed to treating none, exceeds the critical "target" or "ceiling" effectiveness cost ratio. The "target" or "ceiling" effectiveness cost ratio is the required effectiveness per unit cost necessary for a diagnostic test or treatment to be considered "effective" given a medical decision-maker's budget constraint. Therefore, it is cost effective to undertake treatment for all patients. Once we incorporate g, we can express equation (2.9) in terms of net benefit (conditional on g), or[15]:

(2.10)  m \left(\Delta E_m - g \Delta C_m\right) > (1-m) \left(\Delta E_w - g \Delta C_w\right)
Equation (2.10) illustrates the critical role of both disease prevalence (m) and the ceiling cost effectiveness ratio (g). Uncertainty about disease prevalence and disagreement on an appropriate level of g complicate the determination of the optimal course of action. As Phelps and Mushlin point out, "sensitivity analysis across values of g will prove important in many settings."[15]
Thus, there is a critical prior probability of disease that leaves a decision-maker indifferent between treating and not treating. Solving equation (2.10) for disease prevalence, we obtain[15]:

(2.11)  m_c = \frac{\Delta E_w - g \Delta C_w}{\left(\Delta E_w - g \Delta C_w\right) + \left(\Delta E_m - g \Delta C_m\right)}
If the prior probability of disease m is greater than the critical disease prevalence m_c, treatment becomes the optimal action; otherwise the fallback strategy of no treatment is optimal. A prior probability of disease close to the critical disease prevalence indicates substantial uncertainty about the optimal course of action. When the value of information is assessed, uncertainty about disease prevalence can have a dramatic effect on the expected value of information of diagnostic tests.
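The following sketch evaluates the decision rule in equation (2.10) and the critical prevalence in equation (2.11) for hypothetical values of the incremental utilities, incremental costs, and ceiling ratio; none of the numbers are taken from the Phelps/Mushlin paper or from this study.

```python
# Equations (2.10) and (2.11) with hypothetical incremental utilities, costs,
# and ceiling ratio (illustrative values only).
g = 1.0 / 50_000.0             # QALYs per dollar, i.e. 1/g = $50,000 per QALY

dE_m = 0.40      # E_mt - E_mn: utility gained by treating a sick patient
dE_w = 0.05      # E_wn - E_wt: utility gained by not treating a well patient
dC_m = 4_000.0   # C_mt - C_mn: extra cost of treating a sick patient
dC_w = -2_000.0  # C_wn - C_wt: cost difference of not treating a well patient
                 # (negative: not treating is cheaper)

def treat_all_preferred(m):
    """Equation (2.10): treat everyone rather than no one when
    m*(dE_m - g*dC_m) > (1 - m)*(dE_w - g*dC_w)."""
    return m * (dE_m - g * dC_m) > (1 - m) * (dE_w - g * dC_w)

# Equation (2.11): critical prevalence at which the decision maker is indifferent.
m_c = (dE_w - g * dC_w) / ((dE_w - g * dC_w) + (dE_m - g * dC_m))
print(f"critical prevalence m_c = {m_c:.3f}")

for m in (0.10, 0.40):
    print(f"m = {m:.2f}: treat all preferred? {treat_all_preferred(m)}")
```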
To begin to assess the value of diagnostic information, the sensitivity and specificity of the diagnostic test in question must be introduced. The Phelps/Mushlin model determines the value of a diagnostic test by determining the expected value of clinical information (EVCI)[15]. The choice of whether to use an imperfect diagnostic test depends upon the value of the information the test provides relative to its cost. The EVCI for a test depends upon the fallback strategy. If the fallback strategy is to not treat patients, then an imperfect diagnostic test should be used whenever EVCI_NT > g·C_T, or the expected gain in clinical information exceeds the re-scaled cost of the test based on g. Evaluation of the value of a diagnostic test relative to a fallback strategy is necessary because, in some extreme cases, it may make sense to treat all patients or to treat none based on the prior probability of disease m. For example, if the prior probability of disease m is quite low, very few patients in the relevant population have disease. This lessens the benefit of screening programs covering all patients. In this case it may be cost effective, conditional upon g, to wait for patients to become symptomatic before treatment rather than screen for disease. Again, this decision must be contingent upon a monetary valuation of the health outcome, g. If a societal perspective is taken, then g may be interpreted as the marginal willingness to pay for a health outcome improvement, where the budget constraint is endogenous[16, 17].
To calculate the EVCI, the sensitivity and specificity of the imperfect diagnostic test must be applied to the expected benefits and expected costs of the diagnostic test. If the fallback strategy is to not treat, the expected benefits are[15]:

(2.12)  m\left[p E_{mt} + (1-p) E_{mn}\right] + (1-m)\left[(1-q) E_{wn} + q E_{wt}\right] - \left[m E_{mn} + (1-m) E_{wn}\right] = m p \Delta E_m - (1-m) q \Delta E_w
The first term on the left-hand side of (2.12) gives the expected benefit of treating true positives and not treating false negatives. The second term on the left-hand side of (2.12) gives the expected benefit of not treating true negatives and treating false positives. The third term on the left-hand side of (2.12) subtracts off the benefit of the fallback strategy of treating no one. Therefore, the left-hand side of equation (2.12) can be interpreted as the net benefit of treatment based on the imperfect diagnostic test less the opportunity cost of not treating all patients. The right-hand side of (2.12) gives the expected utility in incremental terms. Incorporating g to re-scale costs, the EVCI is the expected incremental benefit less the expected incremental cost, or[15]:

(2.13)  EVCI_{NT} = m p \left(\Delta E_m - g \Delta C_m\right) - (1-m) q \left(\Delta E_w - g \Delta C_w\right)
Equation (2.13) indicates the functional relationship between the value of clinical information for an imperfect diagnostic test and the test's sensitivity, specificity, and the prior probability of disease, or prevalence. Given that our fallback strategy is to treat no one, the choice about whether or not to adopt this diagnostic technology depends on the correct identification of positives. The first term on the right-hand side of equation (2.13) is the net gain from identifying true positives, while the second term on the right-hand side of (2.13) is the net loss from false positives. An assumption of the Phelps/Mushlin model is that the benefit of the correct identification of negatives outweighs the cost of false negatives. Therefore, when no patients are treated under the fallback strategy, we need only consider the potential gains from
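A short sketch of equation (2.13) and the adoption rule EVCI_NT > g·C_T is given below, again with hypothetical parameter values rather than values from the study.

```python
# Equation (2.13) and the adoption rule EVCI_NT > g*C_T with hypothetical values.
g = 1.0 / 50_000.0              # QALYs per dollar (1/g = $50,000 per QALY)

dE_m, dC_m = 0.40, 4_000.0      # incremental utility/cost of treating a sick patient
dE_w, dC_w = 0.05, -2_000.0     # incremental utility/cost of not treating a well patient

def evci_no_treat(m, p, q):
    """Equation (2.13): EVCI when the fallback strategy is to treat no one.
    m: prevalence, p: sensitivity, q: false positive rate (1 - specificity)."""
    gain_true_positives = m * p * (dE_m - g * dC_m)
    loss_false_positives = (1 - m) * q * (dE_w - g * dC_w)
    return gain_true_positives - loss_false_positives

test_cost = 40.0                # C_T, hypothetical dollars per test
m, p, q = 0.06, 0.90, 0.05      # hypothetical prevalence and test characteristics

value = evci_no_treat(m, p, q)
print(f"EVCI_NT = {value:.5f} effectiveness units")
print("Use the imperfect test (EVCI_NT > g*C_T)?", value > g * test_cost)
```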
true positives and net losses from false positives to inform the adoption decision.

We can see the relationship between disease prevalence and the EVCI by taking the derivative of EVCI_NT with respect to m, or[15]:

(2.14)  \frac{dEVCI_{NT}}{dm} = p \left(\Delta E_m - g \Delta C_m\right) + q \left(\Delta E_w - g \Delta C_w\right)