Every week sees another expert recommending a new practice or a journal promoting landmark research. How are we to make sense of the arguments and ensure that changes genuinely benefit our patients?
Two things are certain: we do not need to become expert appraisers of research, but we do need to be capable of judging evidence to translate it into practice. This process, evidence-based practice (EBP), is an essential part of quality improvement.
EBP progresses through a series of four steps (figure 1). This article addresses the final two steps.
Appraising the evidence
Most studies involve two or more groups of patients, which differ in their exposure to an agent. The exposure may be an experimental drug, as in a randomised controlled trial (RCT), or a possible aetiological agent, as in a cohort study. To appraise a study, start by asking yourself two questions. Is this evidence valid? Are the results (clinically) important?
Validity refers to whether the study was designed and conducted rigorously enough to produce credible results. Evaluating validity means looking for biases: faults in the design or conduct of a study that would push the results in one direction. A mnemonic for general types of bias that can be applied to all types of study is RAMBO.
R = Recruitment
The main things to look for are:
- Was eligibility for the study appropriate for the population being studied?
- Was the selection process (for example, random or consecutive sampling) free from bias?
A = Allocation
- In trials, the participants should be allocated randomly (or there should be good reasons if not).
- In observational studies, such as cohort studies, the distribution of potential risk factors except for the one being studied should, as far as possible, be matched between groups.
M = Maintenance
- Subjects should remain in their allocated groups as far as possible or be analysed as if they had (intention to treat).
- Subjects should, whenever possible, remain blind to their allocation.
- Contamination of exposure between groups should be avoided.
- Follow-up should be maintained for an adequate period.
B and O = Blinded and objective
- Observers should, as far as possible, be blinded to the allocation of subjects.
- Measurements of outcome should preferably be objective.
'Important' in this context means that the size of the effect found (such as a reduction in the proportion of adverse events) was big enough to matter.
Authors may inflate their findings by reporting the results in ways that, superficially, sound most impressive. In a study of combination corticosteroid and long-acting beta-agonist (LABA) inhalers for COPD, the combination was reported to reduce the exacerbation rate by 25%. However, scrutiny of the results shows the actual reduction amounted to 0.33 exacerbations per patient per year. Most people would not consider that clinically important. So clinicians need to know some basic statistics.
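A quick arithmetic sketch shows how a relative reduction can overstate a small absolute effect. The baseline rate below is hypothetical, chosen only to be consistent with the figures quoted above; it is not taken from the trial itself.

```python
# Illustrative only: baseline_rate is a hypothetical figure chosen so that
# a 25% relative reduction yields the 0.33 absolute reduction quoted above.
baseline_rate = 1.32          # exacerbations per patient per year (control)
relative_reduction = 0.25     # the "25%" reported by the authors

treated_rate = baseline_rate * (1 - relative_reduction)
absolute_reduction = baseline_rate - treated_rate

print(f"Treated rate: {treated_rate:.2f} exacerbations/patient/year")
print(f"Absolute reduction: {absolute_reduction:.2f} exacerbations/patient/year")
```

The same "25%" headline can describe a large or a trivial absolute benefit, depending entirely on the baseline rate.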
- Results are often derived from proportions of groups experiencing an event. This proportion is called the event rate. For the group exposed to an experimental treatment or a possible aetiological agent, this gives us the EER (exposed event rate) and for the comparison (non-exposed) group, the CER (comparator event rate). Statistics can be derived from these using subtraction, division and multiplication by 100.
- Risk difference (absolute risk reduction) is the difference between CER and EER.
- Relative risk (risk ratio) is the ratio of EER to CER.
- Relative risk reduction/increase is the ratio of risk difference to CER (the reduction/increase in risk relative to the baseline risk).
We can see how this works with an example from a trial comparing intensive structured BP lowering versus usual GP care in achieving BP targets (table 1).1
The NNT (number needed to treat) is another useful statistic. This is the number of people who would need to be exposed for one person to experience the outcome (for example, avoid an undesirable event, such as a stroke, or have a desired event, such as achieving target BP). The NNT is simply the reciprocal of the risk difference. In our example, NNT = 100 divided by 8.8, or approximately 11.
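The statistics above can be sketched in a few lines. The event rates below are hypothetical, chosen only so that the risk difference matches the 8.8 percentage points used in the NNT example; the trial's actual rates are in table 1.

```python
# Hypothetical event rates (NOT the trial's actual figures), chosen so the
# risk difference matches the 8.8 percentage points quoted in the text.
eer = 0.358   # exposed event rate: proportion reaching target BP, intensive arm
cer = 0.270   # comparator event rate: proportion reaching target BP, usual care

risk_difference = eer - cer               # absolute difference between groups
relative_risk = eer / cer                 # risk ratio
relative_change = risk_difference / cer   # change relative to baseline risk
nnt = round(1 / abs(risk_difference))     # reciprocal of the risk difference

print(f"Risk difference: {risk_difference:.1%}")
print(f"Relative risk: {relative_risk:.2f}")
print(f"Relative risk increase: {relative_change:.1%}")
print(f"NNT: {nnt}")
```

Multiplying a proportion by 100 converts it to a percentage, which is why the text computes NNT as 100 divided by 8.8; dividing 1 by 0.088 gives the same answer.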
Applying the evidence
Having appraised the evidence and found it to be valid and important, you are ready to apply it. Ask yourself two questions: is this relevant to my situation? Is this acceptable?
The setting and subjects of a study will never completely match our own situation. The aim is to judge whether the differences are large enough to render the results irrelevant.
Some features to look for are:
- Age - trials often exclude older patients, those most affected by disease.
- Comorbidity - rare in trials but common in practice.
- Outcomes - were they meaningful and appropriate? For example, a study of inflammatory arthropathies that reported only pain relief would ignore two other important outcomes, disability and fatigue.
- Surrogate versus patient outcomes - be cautious; for example, some drugs have been shown to reduce serum cholesterol but not cardiovascular events.
- Duration - trials typically run for shorter periods than chronic diseases are treated.
Evidence that has been found to be valid, important and relevant faces a final test: is it acceptable within our practice? On an organisational level, this means judging whether resources can meet the requirements. In the consulting room, this means communicating risk to patients, enabling them to voice their concerns and expectations and to arrive at a shared decision. That takes us into another important area of knowledge and skills, effective consulting.
- Dr Hopayian is a GP in Leiston, Suffolk, and author of 'Making Your Practice Evidence-Based: A Self-Study Guide for Primary Care', RCGP Publications, 2010
1. Stewart S, Carrington MJ, Swemmer CH et al. Effect of intensive structured care on individual blood pressure targets in primary care: multicentre randomised controlled trial. BMJ 2012; 345: e7156