Evidence-based Dermatology, part 2

9 How to critically appraise a study reporting effectiveness of an intervention
Hywel Williams

Definitions of quality, validity and bias

Quality, when referring to randomised controlled trials (RCTs), is a multidimensional concept that includes appropriateness of design, conduct, analysis, reporting and perceived clinical relevance.1–4 Validity refers to the extent to which the study results relate to the "truth". Validity may be internal (i.e. are the results of this trial true?) or external (to what extent do the results of this trial apply to my patients?). Factors affecting external validity are discussed further in Chapter 12. Internal validity is a prerequisite for external validity.

In addition to assessing the role of chance, a crucial component in appraising the internal validity of a trial is assessment of its potential for bias. Bias denotes a systematic error resulting in an incorrect estimate of the true effect. With respect to clinical trials, bias may be best understood in terms of:

• selection bias – resulting in an imbalance in treatment groups
• performance bias – treating one group of people differently from the other
• detection bias – biased assessment of outcome resulting from lack of blinding
• attrition bias – biased handling of deviations from the study protocol and those lost to follow up.

This chapter guides the reader on applying these various forms of bias when appraising the internal validity of an RCT.

How does one tell a good RCT from a bad one?

Quality criteria derived from research
Three main factors related to study reporting have been associated with biasing the estimate of treatment effect, usually by inflating the claimed benefit.3 These are shown in Box 9.1.

Generation and concealment of treatment allocation
Generation and concealment of treatment allocation are two interrelated steps in the crucial process of randomisation. The first refers to the method used to generate the randomisation sequence. The second refers to the subsequent steps taken by the trialists to conceal the allocation of participants to the intervention groups from the people recruiting the participants. Suggestions for adequate and inadequate definitions of generation of randomisation and subsequent concealment are shown in Box 9.2. Studies that do not describe how the randomisation sequence was generated should be viewed with some suspicion, given that humans frequently subvert the intended aims of randomisation.5

Concealing the allocation of interventions from those recruiting participants is a crucial step in the progress of an RCT. The randomisation list is usually kept away from enrolment sites (for example in a central clinical trials office or pharmacy). Less ideally, sealed opaque envelopes are used – a method that is still susceptible to tampering by opening the envelopes or holding them up against a bright light.5 Failure to conceal such allocation means that those recruiting patients can foresee which treatment a patient is about to receive. Such lack of concealment can result in selective enrolment of patients on the basis of prognostic factors,6 and loss of the "even playing field" that randomisation was designed to achieve. Motives for interfering with the randomisation schedule include a desire on the part of investigators to ensure that their new treatment is successful by deliberately allocating patients with a better prognosis to that treatment. Another reason may be that a doctor wants to ensure that particular patients are not allocated to a control or placebo group.
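The "adequate" generation methods of Box 9.2 (computer-generated random numbers) are easy to sketch in code. The following is a minimal illustration of a permuted-block randomisation list, not a production trial system; the block size, arm labels and seed are arbitrary choices for the example.

```python
import random

def block_randomisation(n_participants, block_size=4, arms=("A", "B"), seed=None):
    """Generate a permuted-block randomisation list.

    Each block contains an equal number of allocations to every arm,
    shuffled at random, so group sizes stay balanced as recruitment proceeds.
    """
    assert block_size % len(arms) == 0, "block size must be a multiple of the number of arms"
    rng = random.Random(seed)
    sequence = []
    while len(sequence) < n_participants:
        block = list(arms) * (block_size // len(arms))
        rng.shuffle(block)
        sequence.extend(block)
    return sequence[:n_participants]

# In practice the list would be prepared centrally (e.g. a trials office
# or pharmacy) and concealed from recruiters via numbered sealed containers.
allocation = block_randomisation(12, block_size=4, seed=42)
print(allocation)
```

Note that generating such a list is the easy part; as the text stresses, concealing it from those recruiting participants is what protects against selection bias.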
Such selective recruitment is a form of selection bias, resulting in an unfair comparison of the interventions under evaluation. Trials in which concealment of allocation was judged to have been inadequate were found to have inflated the estimates of benefit by about 30% when compared with studies reporting adequate concealment.3

Box 9.1 Factors to consider when assessing the validity of clinical trials in dermatology
The "big three" that should always be assessed
• Is the method of generating the randomisation sequence and the subsequent concealment of allocation of participants described?
• Were participants and study assessors blind to the intervention?
• Were all those who originally entered the study accounted for in the results and analysis (i.e. was an intention-to-treat analysis performed)?
Other factors worth looking for
• Did the study investigators use an adequate disease definition?
• Did they use outcome measures that mean something to you and your patient?
• Were the treatment groups similar with respect to predictors of treatment response at baseline?
• Were the main outcome measures declared a priori, or did the investigators "data dredge" amongst many outcomes for a statistically significant result?
• Did the investigators use an appropriate statistical test if the data were skewed?
• Did the investigators test the right thing (i.e. between-group differences rather than just differences from baseline)?
• Have the authors misinterpreted no evidence of an effect as evidence of no effect?
• Were the groups treated equally except for the interventions studied?
• Who sponsored the study? Could sponsorship have affected the results or the way they were reported?
• Is the trial clearly and completely reported according to CONSORT standards?

Box 9.2 Adequacy of generation and concealment of the randomisation sequence
Generation of the randomisation code
• Adequate: random numbers generated by computer program, table of random numbers, flipping a coin
• Inadequate: quasi-randomisation methods (for example date of birth, alternate records, date of attendance at clinic)
Concealing the sequence from recruiters
• Adequate: investigators and patients cannot foresee the assignment to intervention groups (i.e. numbered and coded identical sealed boxes prepared by a central pharmacy, sealed opaque envelopes)
• Inadequate: allocation schedule open for the recruiting physician to view beforehand, unsealed envelopes

Blinding (masking) the intervention
Blinding or masking is the extent to which trial participants are kept unaware of treatment allocation. Blinding can refer to at least four groups of people: those recruiting patients, the study participants themselves, those assessing the outcomes in study participants, and those analysing the results.7 The term "double blind" traditionally refers to a study in which both the participants and the investigators are "blind" to the intervention allocation, but the term is ambiguous unless qualified by a statement of who exactly was blinded. Blinding is less of an issue with objective outcomes such as death, but is very important with subjective outcomes such as the opinions of participants or assessments of disease activity, as in most dermatology trials. Blinding may be achieved by a range of techniques, such as ensuring that placebo tablets look, feel, smell and taste the same as the active tablets,8 or, in the case of ointments, by using as the placebo the same vehicle or base in which the active ingredient is formulated.9

Issues of blinding may seem superficially similar to allocation concealment in that both refer to concealing the interventions. The distinction is important: failure to conceal the randomisation sequence may result in unequal groups (i.e.
a form of selection bias), whereas failure to mask the intervention once a fair randomisation has taken place represents a form of detection or information bias. Both can result in an incorrect estimate of the effects of a treatment. Studies that are not double blind typically overestimate treatment effects by about 14% when compared with studies that are double blind.3

Accounting for all those randomised
The whole point of randomisation is to create two or more groups that are as similar to each other as possible, the only difference being the intervention under study. In this way the additional effects of the intervention can be assessed.10 A potentially serious violation of this principle is the failure to take into account all those who were randomised when conducting the final main analysis: for example, participants who deviate from the study protocol, those who do not adhere to the interventions, and those who subsequently drop out for other reasons.

People who drop out of trials differ from those who remain in them in several ways.11 People may drop out because they die, encounter adverse events, get worse (or no better), or simply because the proposed regimen is too complicated for a busy person to follow. They may even drop out because the treatment works so well. Ignoring participants who have dropped out is not acceptable: excluding participants who drop out after randomisation potentially biases the results. One way to reduce bias is to perform an intention-to-treat (ITT) analysis, in which all those initially randomised are included in the final analysis.11,12 Unless one has detailed information on why participants dropped out of a study, it cannot be assumed that an analysis of those remaining at the end is representative of the groups as originally randomised. Failure to perform an ITT analysis may inflate or deflate estimates of treatment effect.4
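The effect of excluding drop-outs can be illustrated with entirely hypothetical data. Treating drop-outs as non-responders, as below, is one common and deliberately conservative imputation for a binary outcome; it is not the only ITT strategy.

```python
# Hypothetical trial data: outcome is "cleared" (1) or "not cleared" (0);
# None marks participants who dropped out before assessment.
treatment = [1, 1, 1, 0, 1, None, None, None, 1, 0]   # 10 randomised
control   = [0, 1, 0, 0, 1, 0, 1, 0, 0, 0]            # 10 randomised

def per_protocol_rate(outcomes):
    """Response rate among completers only (drop-outs excluded)."""
    completed = [x for x in outcomes if x is not None]
    return sum(completed) / len(completed)

def itt_rate(outcomes):
    """ITT response rate: every randomised participant is counted, with
    drop-outs imputed as non-responders (a conservative convention)."""
    return sum(x or 0 for x in outcomes) / len(outcomes)

print(f"Per protocol: {per_protocol_rate(treatment):.2f} v {per_protocol_rate(control):.2f}")
print(f"ITT:          {itt_rate(treatment):.2f} v {itt_rate(control):.2f}")
```

In this toy example the per-protocol comparison looks considerably more favourable to the treatment than the ITT comparison, precisely because the treatment arm lost three participants whose outcomes are unknown.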
Performing an ITT analysis is often regarded as a major criterion by which the quality of an RCT is assessed. It is entirely appropriate to conduct an analysis of all those who remained at the end of the study (a "per protocol" analysis) alongside the ITT analysis.12 Discrepancies between the results of ITT and per protocol analyses may indicate the potential benefit of the intervention under ideal compliance conditions, and the need to explore ways of reformulating the intervention so that fewer participants drop out of the trial. Discrepancies may also indicate serious flaws in the study design.

Quality scales
Faulty reporting generally reflects faulty trial methods.3,5 A number of scales for assessing trial quality have been developed over the past 15 years. These vary in the dimensions covered and in complexity.2 Generally, the recent trend has been to use the few quality criteria given in Box 9.1, plus a few more that the appraiser considers important in relation to the condition being studied.3 It is now considered unwise to use summary quality scores in an attempt to "adjust" the potentially biased treatment estimate, because the adjustment varies with the scale used and with how the components of each scale are weighted.13 Instead, greater emphasis is placed on using the components of the scale as a checklist and considering how each may affect the results.3

Additional empirical criteria

Disease definition
Whilst it may seem simple to apply the three criteria of randomisation generation/concealment, blinding and ITT to judge the quality of RCTs, it is still uncertain how far these factors can reliably discriminate between "good" and "bad" RCTs in dermatology. Other factors that are disease specific and that rely on content knowledge and expertise are likely to be equally important in determining the quality of some dermatology trials.
For example, as someone with an interest in atopic eczema, I would not trust a study that claimed a beneficial effect for a new treatment if the study included both children and adults with diverse eczematous dermatoses,14 as people with such conditions might respond differently.15 Similarly, the definition of disease used may be an important quality criterion. If I were reading the report of an RCT of an intervention for bullous pemphigoid, I would want to know that the diagnosis in study participants was confirmed by immunofluorescence, in order to distinguish it from other bullous disorders of diverse aetiologies and differing treatment responsiveness. The influence of such disease-specific factors in dermatology is an area that requires further systematic research.

"Sensible" outcome measures
In evaluating a clinical trial, look for clinical outcome measures that are clear cut and clinically meaningful to you and your patients.16 For example, in a study of a systemic treatment for warts, complete disappearance of warts is a meaningful outcome, whereas a decrease in the volume of warts is not. The development of scales and indices for cutaneous diseases, and the testing of their validity, reproducibility and responsiveness, has been inadequate.16,17 A lack of clearly defined and useful outcome variables remains a major problem in interpreting clinical trials in dermatology. Until better scales are developed, trials with the simplest and most objective outcome variables are the best. Categorical outcomes lead to the least confusion and support the strongest conclusions. Thus, trials comparing death with survival, patients with and without recurrence of disease, or patients who are cured with those who are not, have outcome variables that are easily understood and verified.
For trials in which the outcomes are less clear cut and more subjective, a simple ordinal scale is probably the best choice. The best ordinal scales involve a minimum of human judgement, have a precision that is much smaller than the differences being sought, and are sufficiently standardised that others can use them and produce similar results.16,17

Similarity of groups at baseline
In addition to helping to balance known predictors of treatment response such as baseline disease severity (which could act as confounders when evaluating treatment efficacy between groups), it has also been suggested that randomisation will balance unknown confounders.3 This statement is superficially appealing, but is difficult to verify if these confounders are indeed unknown. Even so, randomisation, especially with small sample sizes, may result in imbalances in cofactors that can affect treatment response. In other words, randomisation is not a guarantee against imbalance, although more sophisticated methods of randomisation such as blocking and stratification can help to minimise it.7

It is quite common to see, as the first table in the results section of an RCT report, a long list of demographic characteristics of the participants in the different treatment groups, together with a statement to the effect that "the two groups did not differ statistically at baseline". This statement is problematic for two reasons.
• It is inappropriate to perform such multiple statistical tests without prior hypotheses – indeed, many of the variables recorded may be totally irrelevant to predicting treatment response.
• Gross imbalances between the treatment groups may still fail to reach the arbitrary 5% level of statistical significance simply because the groups are so small.
Before reading such tables, the most important thing is to ask oneself, "What are the most important factors that may predict treatment response?", and then to "eyeball" these in the table of baseline characteristics, if they have been recorded. If there are major imbalances, such as in baseline severity score, these can and should be allowed for during analysis, for example in a multivariate analysis adjusting for baseline severity as a covariate.7

Data dredging
Many dermatology trials report as many as 10 different outcome measures recorded at several different time points. Even by chance alone, around 1 in 20 of such outcomes will be "significant" at the 5% level. It is therefore important, in studies that use multiple outcomes, to ensure that the trialists are not data dredging – that is, performing repeated statistical tests on a range of outcome measures and then emphasising only the one that is "significant" at the "magic" 5% level. Such practice is akin to throwing a dart and then drawing a dartboard around it. Instead, trialists should declare up front what they would regard as the single "success criterion" for the trial. It is then far more credible if that main success criterion is fulfilled, as opposed to some secondary or tertiary outcome measure turning out to be "significant". Sometimes trialists will try to save face by emphasising a range of less clinically significant biological markers of success when the main clinical comparisons look disappointing.

Doing the wrong tests
It is quite common for continuous data, such as acne spot counts, to have a skewed frequency distribution. It may then be inappropriate to use parametric tests such as Student's t-test without first transforming the data. Alternatively, non-parametric tests that do not rely on the assumption of a normal distribution can be used.
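A minimal sketch of this advice, using simulated data: the spot counts are drawn from a log-normal distribution (an assumption of the example, chosen to be deliberately skewed), the `looks_skewed` helper encodes a quick rule of thumb (mean minus two standard deviations below zero, for a measurement that cannot be negative), and the `sample_skewness` helper is an added illustration not mentioned in the text.

```python
import math
import random
from statistics import mean, stdev

def looks_skewed(values):
    """For a measurement that cannot be negative (such as a spot count),
    mean - 2*SD below zero suggests a skewed distribution."""
    return mean(values) - 2 * stdev(values) < 0

def sample_skewness(values):
    """Adjusted Fisher-Pearson sample skewness: roughly 0 for normally
    distributed data, large and positive for right-skewed data."""
    m, s, n = mean(values), stdev(values), len(values)
    return sum(((v - m) / s) ** 3 for v in values) * n / ((n - 1) * (n - 2))

# Hypothetical spot counts drawn from a log-normal distribution.
rng = random.Random(0)
counts = [math.exp(rng.gauss(1.0, 0.9)) for _ in range(200)]
print("raw counts look skewed:", looks_skewed(counts))

# A log transformation often restores approximate normality, after which
# parametric tests such as the t-test become reasonable.
logs = [math.log(c) for c in counts]
print("skewness before and after log transform:",
      round(sample_skewness(counts), 2), round(sample_skewness(logs), 2))
```

The alternative, as the text notes, is a non-parametric (rank-based) test that makes no normality assumption at all.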
A quick way to check whether a continuous variable is normally distributed is to determine whether the mean minus two standard deviations is less than zero. If it is, and the variable cannot take negative values, the data are likely to be skewed.

Testing the wrong thing
Performing a statistical test on something other than the main outcome of interest is a subtle but not uncommon error in dermatology trials.18,19 When comparing a continuous outcome measure, such as decrease in acne spot count, between treatment A and treatment B, the correct way to test the null hypothesis of no difference between the treatments is to examine the between-group difference in change of spot count from baseline. Sometimes investigators instead perform a statistical test on whether the acne lesion count falls from baseline in each of the two groups independently. If the fall in spot count reaches the 5% level in one group but not in the other, the authors may conclude that "treatment A is therefore more effective than treatment B". Perhaps the P value for change in spot count from baseline is 0·04 in one group (i.e. significant) and 0·06 in the other (i.e. conventionally non-significant). This practice is clearly inappropriate, since the difference between the two treatments has not itself been tested.

Interpreting trials with negative results
Misinterpreting trials with negative results is a common error in dermatology clinical trials.20 Failure to find a statistically significant difference between treatments should not be interpreted as meaning that the treatment is ineffective. Put another way, no evidence of effect is not the same as evidence of no effect.21 In many dermatology trials the sample sizes are too small to detect clinically important differences. Providing 95% confidence intervals around the main response estimates allows readers to see what kinds of effects might have been missed.
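The correct between-group comparison, together with the confidence interval the text advocates, can be sketched on simulated change-from-baseline data. The large-sample z interval is an assumption of the sketch (a small trial would use a t distribution), and the group means, spreads and sample sizes are invented.

```python
import math
import random
from statistics import NormalDist

def between_group_difference(change_a, change_b, level=0.95):
    """Point estimate and large-sample confidence interval for the
    between-group difference in mean change from baseline."""
    na, nb = len(change_a), len(change_b)
    ma, mb = sum(change_a) / na, sum(change_b) / nb
    va = sum((v - ma) ** 2 for v in change_a) / (na - 1)
    vb = sum((v - mb) ** 2 for v in change_b) / (nb - 1)
    se = math.sqrt(va / na + vb / nb)
    zcrit = NormalDist().inv_cdf(0.5 + level / 2)
    diff = ma - mb
    return diff, (diff - zcrit * se, diff + zcrit * se)

# Hypothetical falls in spot count from baseline in two arms of 40 patients.
rng = random.Random(3)
change_a = [rng.gauss(12, 8) for _ in range(40)]   # treatment A
change_b = [rng.gauss(9, 8) for _ in range(40)]    # treatment B
diff, (lo, hi) = between_group_difference(change_a, change_b)
print(f"A - B difference in mean change: {diff:.1f} (95% CI {lo:.1f} to {hi:.1f})")
```

Testing each arm's change from baseline separately, as criticised in the text, answers a different and much weaker question; the statistic above is the one that actually compares the treatments.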
For example, in an RCT of famotidine versus diphenhydramine for acute urticaria, itch as measured on a 100 mm visual analogue scale decreased by 36 mm in the famotidine group and by 54 mm in the diphenhydramine group, a difference of 18 mm (54 − 36) in favour of diphenhydramine. Although the statistical test for this 18 mm difference between the two treatment groups was not significant at the 5% level, there was a trend towards a greater reduction in itch in the diphenhydramine group. The 95% confidence interval around the 18 mm difference between the groups ranged from −3 to 38 mm. In other words, the results were compatible with a difference of as much as 3 mm in favour of famotidine and as much as 38 mm in favour of diphenhydramine.22

The trial environment
Once participants are randomised, it is important that the intervention groups are followed up in similar ways. Previous studies have shown the non-specific benefits of being included in a clinical trial, even in placebo groups.23 Part of the benefit might be the result of better ancillary care prompted by frequent follow up and being "fussed over" by study assessors.7 It is therefore important to scrutinise whether the treatment groups were treated equally in terms of frequency and duration of follow up, and whether they were afforded identical privileges apart from the treatment under investigation.

Sponsorship issues
It is natural to assume that a clinical trial of a drug that has taken years of investment by a drug company, and that is sponsored by that same company, will strive to demonstrate that the drug is successful. Indeed, millions of dollars of profit may rely on convincing opinion leaders in dermatology of a new drug's worth. Yet the influence of sponsorship on efficacy claims has not been tested in dermatology RCTs. Drug companies and trialists have many opportunities to influence journal readers when the results of their trial are published (Box 9.3).
It should not be assumed that biases in relation to sponsorship are confined to the pharmaceutical industry. Those conducting trials for government agencies might hope to show that a new drug is less cost-effective than standard therapy. Some independent clinicians with preformed conclusions about an existing treatment may be equally susceptible to their own prejudices when testing that treatment and writing up the results. In assessing a study, readers should always consider who sponsored the study, and ask themselves whether such sponsorship could have influenced the results or the way they are presented. Absence of declared sponsorship does not necessarily mean absence of sponsorship.24

Box 9.3 Ways to enhance the impact of positive studies or reduce the impact of negative studies
• Withhold "negative" trials from publication altogether by keeping them as "data on file"
• Delay the release of such "negative" studies into the public domain
• Publish negative studies in an obscure or non-English-language journal
• Select outcome measures that show the treatment in a better light
• "Torture" the data by performing multiple statistical tests on subgroups
• Select one of many statistical techniques to show the results in the best light
• Divert attention from the main "negative" findings by emphasising biomedical markers and "mechanism of action"
• Incorrectly interpret equivalence studies, for example by suggesting that two drugs are the same when the confidence intervals surrounding their differences are large
• Use a comparator that other studies have not used, in order to avoid a head-to-head comparison with a currently established treatment
• Do not highlight adverse events in the abstract and discussion sections
• Use optimistic language and writing styles when discussing essentially negative studies – for example, repetition of positive results
• Publish positive study results in duplicate or triplicate – overtly or even covertly

Attempts to overcome limitations in the conduct, reporting and publication of clinical trials
To overcome many of the difficulties discussed in this chapter, calls for better standards of reporting of trials have led to the CONSORT statement.25 This contains a structured checklist for reporting the details of clinical trials, including the methods of randomisation and concealment, blinding, ITT analysis, and a flow diagram to illustrate the progress of trial participants. Several dermatology journals now require that submitted clinical trial reports meet CONSORT standards in order to be published.26 Whereas CONSORT may help to improve the reporting of trials, the creation of prospective clinical trial registers has been seen as one possible way of ensuring that trial results eventually reach the public domain, and of checking that investigators adhered to their original protocol.27

References
1. Jadad AR, Cook DJ, Jones A et al. Methodology and reports of systematic reviews and meta-analyses: a comparison of Cochrane reviews with articles published in paper-based journals. JAMA 1998;280:278–80.
2. Moher D, Jadad AR, Nichol G, Penman M, Tugwell P, Walsh S. Assessing the quality of randomized controlled trials: an annotated bibliography of scales and checklists. Control Clin Trials 1995;16:62–73.
3. Juni P, Altman DG, Egger M. Systematic reviews in health care: assessing the quality of controlled clinical trials. BMJ 2001;323:42–6.
4. Juni P, Altman DG, Egger M. Assessing the quality of controlled clinical trials. In: Egger M, Davey Smith G, Altman DG, eds. Systematic Reviews in Health Care: Meta-analysis in Context, 2nd ed. London: BMJ Books, 2001.
5. Schulz KF. Subverting randomization in controlled trials. JAMA 1995;274:1456–8.
6. Schulz KF. Randomised trials, human nature, and reporting guidelines. Lancet 1996;348:596–8.
7. Pocock SJ. Clinical Trials: A Practical Approach. New York: John Wiley & Sons, 1983.
8. Karlowski TR, Chalmers TC, Frenkel LD, Kapikian AZ, Lewis TL, Lynch JM. Ascorbic acid for the common cold: a prophylactic and therapeutic trial. JAMA 1975;231:1038–42.
9. Thomas KS, Armstrong S, Avery A, Li Wan Po A, O'Neill C, Williams HC. Randomised controlled trial of short bursts of a potent topical corticosteroid versus more prolonged use of a mild preparation, for children with mild or moderate atopic eczema. BMJ 2002;324:768–71.
10. Altman DG, Bland JM. Statistics notes. Treatment allocation in controlled trials: why randomise? BMJ 1999;318:1209.
11. Hollis S, Campbell F. What is meant by intention to treat analysis? Survey of published randomised controlled trials. BMJ 1999;319:670–4.
12. Williams HC. Are we going OTT about ITT? Br J Dermatol 2001;144:1101–2.
13. Juni P, Witschi A, Bloch R, Egger M. The hazards of scoring the quality of clinical trials for meta-analysis. JAMA 1999;282:1054–60.
14. English JS, Bunker CB, Ruthven K, Dowd PM, Greaves MW. A double-blind comparison of the efficacy of betamethasone dipropionate cream twice daily versus once daily in the treatment of steroid responsive dermatoses. Clin Exp Dermatol 1989;14:32–4.
15. Hoare C, Li Wan Po A, Williams H. Systematic review of treatments for atopic eczema. Health Technol Assess 2000;4:1–191.
16. Bigby M, Gadenne A-S. Understanding and evaluating clinical trials. J Am Acad Dermatol 1996;34:555–90.
17. Allen AM. Clinical trials in dermatology, part 3: measuring responses to treatment. Int J Dermatol 1980;19:1–6.
18. Williams HC. Top 10 deadly sins of clinical trial reporting. Ned Tijd Derm Venereol 1999;9:372–3.
19. Harper J. Double-blind comparison of an antiseptic oil-based bath additive (Oilatum Plus) with regular Oilatum (Oilatum Emollient) for the treatment of atopic eczema. In: Lever R, Levy J, eds. The Bacteriology of Eczema. London: The Royal Society of Medicine Press, 1995.
20. Williams HC, Seed P. Inadequate size of 'negative' clinical trials in dermatology. Br J Dermatol 1993;128:317–26.
21. Altman DG, Bland JM. Absence of evidence is not evidence of absence. BMJ 1995;311:485.
22. Watson NT, Weiss EL, Harter PM. Famotidine in the treatment of acute urticaria. Clin Exp Dermatol 2000;25:186–9.
23. Braunholtz DA, Edwards SJ, Lilford RJ. Are randomized clinical trials good for us (in the short term)? Evidence for a "trial effect". J Clin Epidemiol 2001;54:217–24.
24. Davidoff F, DeAngelis CD, Drazen JM et al. Sponsorship, authorship, and accountability. Lancet 2001;358:854–6.
25. Moher D, Schulz KF, Altman DG, Lepage L. The CONSORT statement: revised recommendations for improving the quality of reports of parallel-group randomised trials. Lancet 2001;357:1191–4.
26. Cox NH, Williams HC. Can you COPE with CONSORT? Br J Dermatol 2000;142:1–3.
27. Stern JM, Simes RJ. Publication bias: evidence of delayed publication in a cohort study of clinical research projects. BMJ 1997;315:640–5.

10 How to assess the evidence for the safety of medical interventions
Luigi Naldi

Introduction
As with any other human activity, medical interventions may carry a risk of unintended adverse events. Whenever a physician prescribes a drug, there is the potential for an adverse reaction connected with drug use. Despite limited accurate data, a widely cited meta-analysis of 39 prospective studies performed in US hospitals from 1966 to 1996 found that the incidence of severe adverse drug reactions (i.e. life-threatening reactions and reactions that prolonged hospitalisation) among inpatients was 6·7%, with 0·32% being fatal.1 Despite these impressive figures, the rate of severe adverse reactions for any given drug is usually very low.
However, the system works in such a way that even a small increase in the incidence of a clinically severe reaction may prompt the withdrawal of the implicated drug from the market.

It is commonly stated that clinical decisions should balance the benefits of the available options against their risks. A difficulty stems from the fact that data on the benefits and risks of medical interventions are usually derived from different study designs and information sources. A large part of our discussion will focus on the safety of drug use. While systems to survey the safety of medications are well established, no comparable systems exist for other medical interventions such as surgical procedures and invasive diagnostic tests. It is well accepted that no in vitro or animal model can accurately predict the adverse events associated with drug use before the drug is employed in humans. Advances in understanding the causes of adverse reactions (for example pharmacogenomics) may, in the future, enable the risk in individual patients to be predicted more reliably.2–4

Data sources for determining the safety of medical interventions

The limitations of randomised controlled trials (RCTs)
The great strength of RCTs is their ability to provide an unbiased estimate of treatment effect by controlling not only for determinants of outcome that we know about, but also for those we do not. If an RCT demonstrates an important relationship between an agent and an adverse event, we can be confident of the result. However, RCTs are usually designed to document frequent events, that is, those associated with the intended effect of a treatment. With their usual sample sizes, which rarely exceed a few thousand people, RCTs are not suited to accurate documentation of the safety of a medical intervention with respect to uncommon events.5,6
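This sample-size limitation can be quantified with the so-called "rule of three", a standard back-of-envelope result that the text does not name: to have roughly a 95% chance of observing at least one occurrence of an event of risk p, about 3/p patients must be exposed. The per-patient risk and trial sizes below are hypothetical.

```python
import math

def prob_at_least_one(n, risk):
    """Probability of observing at least one adverse event among n
    exposed patients, assuming independent per-patient risk."""
    return 1 - (1 - risk) ** n

def n_for_confidence(risk, confidence=0.95):
    """Smallest n giving the stated probability of seeing >= 1 event.
    For small risks this approaches the 'rule of three': n ~ 3 / risk."""
    return math.ceil(math.log(1 - confidence) / math.log(1 - risk))

# A 1-in-10 000 reaction: a typical pre-licensing trial of 3000 patients
# will more often than not see no case at all, while roughly 30 000
# exposed patients are needed for 95% confidence of seeing one.
risk = 1 / 10_000
print(f"P(>= 1 case in a 3000-patient trial) = {prob_at_least_one(3000, risk):.2f}")
print(f"n needed for 95% chance of one case: {n_for_confidence(risk)}")
```

And seeing a single case is far short of what is needed to estimate the risk or attribute it to the drug, which is why such events surface only in the post-marketing phase.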
Besides the issue of statistical power, additional limitations include the usually short duration of most clinical trials and the careful selection of the eligible population (restriction of patient selection according to age, comorbidity and so on). All in all, even when an intervention has been proved effective in an RCT, its safety remains to be established. Pharmaceutical companies may strive to work out the adverse effect profile of a drug before licensing but, because only a limited number of selected individuals can be exposed to the drug before it is released, only common adverse events can be accurately documented; the complete range of adverse events remains to be elucidated in the post-marketing phase. This limitation is particularly true for delayed reactions and for rare but severe acute events.

The value of suspicion: case reports and case series
In contrast to RCTs, individual case reports and case series provide no comparison with a control group and are unable to produce reliable risk estimates. In spite of these limitations, astute clinical observations remain fundamental to the description of new disease entities and the raising of new hypotheses about disease causation, including the effects of medical interventions. Case reports still represent a first-line means of detecting new adverse reactions once a drug is marketed.7 Spontaneous surveillance systems, such as the International Drug Monitoring Programme of the World Health Organization (WHO), capitalise on the collection and periodic analysis of spontaneous reports of suspected adverse drug reactions.8 All physicians are expected to take an active part in promoting the safety of medical interventions, and to contribute by reporting any suspected adverse events they observe in association with drug use.9
Such a collection of reported adverse events9 may be explored to raise signals (Box 10.1), which must then be validated by more formal study designs, that is, studies providing estimates of incidence rates and quantifying risks.10,11 Spontaneous reporting should be seen as an early-warning system for possible unknown adverse events and may be prone to all sorts of bias.12 Case reports may be more effective in revealing unusual or rare acute adverse events. In general, however, they do not reliably detect adverse drug reactions that occur widely separated in time from the original use of the drug, or that represent an increase in the risk of an adverse event that is already common in populations not exposed to the drug.

Box 10.1 Criteria for signal assessment in spontaneous surveillance systems
• Number of case reports
• Presence of a characteristic feature or pattern, and absence or rarity of converse findings
• Site, timing, dosage–response relationship, reversibility
• Rechallenge
• Biological plausibility
• Laboratory findings (for example, drug-dependent antibodies)
• Previous experience with related drugs

Epidemiological studies: the most comprehensive source of data

Quantitative estimates of the risks associated with drug use may be obtained from analytical epidemiological studies (i.e. cohort and case-control studies)13 and from a number of modifications of these traditional designs belonging to the broad area of pharmacoepidemiology (Box 10.2). These observational (non-randomised) studies produce less stringent results than RCTs, being prone to unmeasured confounders and biases. On the other hand, in the "real world" they may represent the only practical option for obtaining risk estimates once a new drug has entered the market.
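One common way of raising a signal from a spontaneous-report database, consistent with the criteria in Box 10.1, is a disproportionality measure such as the proportional reporting ratio (PRR). The following is a hedged sketch, not the WHO programme's actual algorithm, and all counts are hypothetical.

```python
# A signal-screening statistic for spontaneous reporting databases:
# the proportional reporting ratio (PRR) compares how often an event is
# reported for the suspect drug with how often it is reported for all
# other drugs in the same database.

def prr(a, b, c, d):
    """Proportional reporting ratio from a 2x2 table of report counts.

    a: reports of the suspect drug with the event of interest
    b: reports of the suspect drug with any other event
    c: reports of all other drugs with the event of interest
    d: reports of all other drugs with any other event
    """
    return (a / (a + b)) / (c / (c + d))

# Hypothetical counts: 30 of 1030 reports for the suspect drug mention the
# event, against 300 of 100 300 reports for all other drugs.
signal = prr(30, 1000, 300, 100000)
print(round(signal, 1))  # 9.7
```

A PRR well above a screening threshold (a value of 2 or more is often quoted) flags a drug–event pair for further scrutiny; as the chapter stresses, such a signal is a hypothesis to be validated, not a risk estimate.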
Box 10.2 Examples of pharmacoepidemiological methods
• Intensive hospital monitoring
• Prescription event monitoring (PEM)
• Cohort studies
• Case-control studies and case-control surveillance
• Case-crossover design
• Record linkage
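The risk estimate that a case-control study yields is the odds ratio. As an illustrative sketch with made-up exposure counts (not an example from the book), the point estimate and a Woolf (log-based) 95% confidence interval can be computed as follows.

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and 95% CI from a case-control 2x2 table, Woolf's method.

    a: exposed cases      b: unexposed cases
    c: exposed controls   d: unexposed controls
    """
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Hypothetical study of drug exposure and a severe skin reaction:
# 40 of 200 cases exposed v 20 of 200 controls exposed.
or_, lo, hi = odds_ratio_ci(40, 160, 20, 180)
print(round(or_, 2), round(lo, 2), round(hi, 2))  # 2.25 1.26 4.01
```

Because the lower confidence limit here exceeds 1, such a result would suggest a genuine association, subject to the confounding and bias caveats noted above for non-randomised designs.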

