Wednesday 21 August 2013

Why quantitative studies cannot deliver evidence-based practice alone

Qualitative methodology is pants and has no role in evidence-based practice


It's not uncommon to share a room with a colleague who is repelled by the idea that qualitative research could contribute to improving patient care. There are many more (and I was one) who just don't get where qualitative research fits in, and it seems to me that the evidence-based practice (EBP) movement has, sometimes deliberately and sometimes not, fostered an ideal. And that ideal is quantitative.

For studying the efficacy (how well it worked in the study) and, indeed, the effectiveness (how well it worked in practice) of a drug, the randomised controlled trial, with its quantitative output of numerical data on success or otherwise in treating a given condition, is the ideal, and I would not argue otherwise. Within the limited context and the restricted set of patients in which the trial is conducted, a well-conducted trial will allow some estimate of the "truth" of the efficacy of the treatment - at least for the outcomes being measured.

But evidence-based practice is about much more than a risk or odds ratio, p-values or confidence intervals. These numerical - quantitative - outputs are but a tiny element of what I understand by evidence-based practice.

Rather than look at qualitative studies per se - e.g. "how dentists use or don't use evidence in their practices" (as opposed to "how many use evidence") - in this post I just want to draw attention to the way we use qualitative methods to deal with quantitative data in EBP.

Or perhaps it does...


Since the early days of EBP there has been 1) a need to consider the patient's values and aspirations, 2) a need to consider our own experience and expertise, and 3) a requirement to critically appraise the literature we read. Let's not forget, of course, that there has also been a requirement to use the best available research to inform the discussion.

So if I have a study that tells me that putting a stainless steel crown on a carious deciduous tooth, rather than filling it with a composite, will result in 12% fewer such teeth needing to be extracted, I am grateful for this quantitative information on the efficacy of the intervention. I need it to understand what the potential benefit for my patient could be, at least as far as losing the tooth is concerned.
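To put a rough number on that benefit (purely as an illustration, and assuming the 12% is an absolute difference in extraction rates rather than a relative one), the number needed to treat would be 1 / 0.12 ≈ 8.3 - so I would have to crown roughly eight or nine such teeth, instead of restoring them with composite, to prevent one extraction.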

Qualitative critical appraisal



However, in order to evaluate the risk of bias - that is, the risk that the result is not the true reduction in tooth loss because of some systematic error in the design of the study - I would critically appraise it. The thing is that there don't seem to be reliable quantitative ways of doing this. We can score whether the two groups were "randomised" or not - perhaps with a 1 for yes and a 0 for no - but very quickly we ask: how were they randomised, and what does it mean if they don't tell us? We might see a table of baseline characteristics and notice a difference in the baseline amount of caries in the average child in each group - but what does that mean for the results? Perhaps the p-value is 0.04 or perhaps it is 0.004 - how do these different levels of confidence in the estimate of the truth affect the way we think about the results?
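A toy sketch of the problem (the checklist and the two studies below are entirely invented for illustration): reduce each report to a score and the detail we actually need disappears.

    # A crude "quality score": one point per checklist item answered yes,
    # as simple appraisal scoring systems sometimes do.
    checklist = ["randomised", "allocation_concealed",
                 "groups_similar_at_baseline", "significant_result"]

    study_a = {"randomised": 1, "allocation_concealed": 1,
               "groups_similar_at_baseline": 0, "significant_result": 1}  # p = 0.04
    study_b = {"randomised": 1, "allocation_concealed": 1,
               "groups_similar_at_baseline": 0, "significant_result": 1}  # p = 0.004

    def score(study):
        return sum(study[item] for item in checklist)

    print(score(study_a), score(study_b))  # 3 3 - identical scores
    # The scores say nothing about how randomisation was actually done, how big the
    # baseline imbalance in caries was, or what separates a p of 0.04 from one of 0.004.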

These are not questions that can be answered reliably in quantitative terms. In a sense we are analysing the text - the report of a study - to try to construct some idea of what it means. Does this explanation mean this is likely to be a reliable study or not? And this, I would argue, is a qualitative process: we are constructing an idea in our heads of whether the story the report tells is likely to be the truth or not. Someone else could well construct an opinion contrary to ours. How many times have you read in systematic reviews that disagreements were resolved through consensus or by a third reviewer?

Qualitative understanding of patient values and clinical experience and expertise


What about the other two essential elements of evidence-based practice - the patient's values and our experience and expertise? Here again it is hard to see how we can avoid using qualitative methods, and easy to see where quantitative methods fall short.

In contrast to the positivist "truth" from the study, the truth for a patient - of what is, for want of a better term, in their interests and meets their values and aspirations - could be very different. Perhaps the outcome of the study is not the outcome that interests them. Or perhaps, even if it is, they ascribe a different value to a tooth lasting only 1 year rather than 5.

Likewise, the truth for the clinician about the effectiveness of the treatment may be vastly at odds with the researchers' results as they try to run a small business, manage a clinic, decide which hands-on courses to attend (and which not to), and make sense of their colleagues' opinions about the research, its value, and their own experience of using the treatment...

The questions of why people do things, and what drives them to act or not to act, are inherently qualitative, and as clinicians trying to practice in an evidence-based way we make decisions of this kind every day.

So I guess my conclusion here is that, as we teach and train colleagues and students to practice EBP, we should not forget the essential role that qualitative methods play in making sense of quantitative data and in helping us use it where it is appropriate. As we move forward we may want to think about how we develop some rigour in this process, as the various tools for critical appraisal have sought to do.

Monday 19 August 2013

Qualitative enhancement of quantitative systematic reviews

Some questions need qualitative methods to answer them - even in EBM


In my metamorphosis from EBM quantitative ideologue to a more nuanced appreciator of mixed research methods, I am learning to re-interpret the value of much of the evidence-based practice I have spent the last several years learning and trying to practice. As many have identified before me (e.g. Black 1994, Popay & Williams 1998), the quantitative research methods upon which clinical and population epidemiology are built can tell only part of the story we need to improve patient care. Here is an extract from the Popay and Williams (1998) paper:
"... depending on the question, qualitative research may be the only appropriate method to be used in finding a valid and useful answer. It is congruent with the philosophy of EBM."
That quotation is from David Sackett, one of the godfathers of EBM, cited by Popay and Williams. Qualitative research itself may be defined as "...research that helps us to understand the nature, strengths, and interactions of variables." (Black 1994)

Two different approaches to using qualitative research are described: in a mixed-methods way, with qualitative and quantitative research used alongside each other (or one after the other) in a single research programme; or on its own, to answer a question that cannot be answered by quantitative methods. In this post I will discuss the first - what some authors call using qualitative research as an enhancement.

Flops and failure: what happened and why?


Say that in a trial to change clinicians' behaviour we teach them all the five-stage approach to using evidence (ask, search, appraise, apply, evaluate). Half the clinicians are randomly assigned to be helped to do this, in addition, by a trained facilitator. The other half are taught and then just have to get on with it alone. Six months later we see whether practice has changed on a set of previously determined clinical outcomes that the clinicians hoped to make more research-informed. We see no difference in outcome. Facilitators are a flop. What a waste of time, money and research passion. Months of learning, planning, dreaming and hoping come to nothing.
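Just to make the quantitative end-point concrete, here is the bare comparison such a trial boils down to (the counts are invented to illustrate a null result, not taken from any real study):

    from scipy.stats import chi2_contingency

    # Hypothetical primary outcome at 6 months: clinicians whose practice changed vs not,
    # in the facilitated arm and the "taught only" arm.
    outcomes = [[34, 66],   # facilitated: 34 changed, 66 did not
                [31, 69]]   # taught only: 31 changed, 69 did not

    chi2, p, dof, expected = chi2_contingency(outcomes)
    print(f"p = {p:.2f}")   # a large p-value: no evidence of a difference between the arms

The numbers answer "did it work?" (apparently not) but are silent on "why not?".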

Unless, alongside the trial, the researchers had also done some qualitative research - perhaps observed how the facilitators worked with clinicians, or how the clinicians engaged (or didn't) with each other, the research and their patients; or perhaps they could have interviewed the clinicians to find out what they thought of the facilitation. There may have been a load of information that the researchers never envisaged could affect the success or otherwise of their perfectly designed randomised controlled trial. Perhaps the clinicians didn't like the facilitators - found them too abrupt or aloof or "clever". Perhaps the facilitators "didn't get" how to put the research into practice in that setting. Perhaps there were colleagues who gave off negative vibes about all this "research stuff" and put them off. If the study were repeated, this knowledge may well mean that the new trial is more successful; or perhaps the trial could be tweaked to take it into consideration as it proceeds, preventing wasted time and resources.

Enhancing systematic reviews with qualitative summaries


So today I was thinking about systematic reviews, primarily ones that come to conclusions along the lines of 'little or no difference between the interventions'. To me this always seems a disappointing conclusion to what was probably a long piece of work for the authors. The problem, it seems to me, is that because we only pay attention to the quantitative studies in a review, we can't enhance the reader's comprehension of the review using qualitative data. What if there are qualitative studies around the review topic that could indicate why a given intervention might not have worked?

But even for reviews that show a positive effect of, say, a particular behavioural intervention, might it not be more useful to us as clinicians if, alongside this, there were a summary of qualitative studies that looked into what made the intervention successful, why some patients accepted it and others didn't, or what patients and clinicians felt about using it?

Having done one systematic review, and working currently on another, I will grant that, for someone steeped in quantitative methodology, incorporating qualitative data as well looks like hard work. But if those with quantitative and qualitative expertise were to work together, might we not enrich the evidence summaries that clinicians and patients consume?

Fortunately, it seems that the Cochrane Qualitative and Implementation Methods Group has begun work on helping reviewers do just this, and the Centre for Reviews and Dissemination devote a whole chapter to it in their guidance on conducting systematic reviews (see chapter 6), where they write:
"This chapter focuses on the identification, assessment and synthesis of qualitative studies to help explain, interpret and implement the findings from effectiveness reviews."
I haven't found a review that does this yet, and don't know how I'd go about identifying one in an efficient way (perhaps a filter for combined qualitative/quantitative reviews might be created in PubMed?), but I look forward to someone pointing me to one soon, and to more reviewers looking to combine the qualitative with the quantitative.
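For what it's worth, a first, crude guess at such a search might look something like the line below - standard PubMed field tags, but an untested sketch rather than a validated filter:

    (systematic[sb] OR "meta-analysis"[pt]) AND (qualitative[tiab] OR "mixed methods"[tiab] OR interview[tiab] OR "focus groups"[tiab])

Whether that would actually surface the handful of reviews that combine the two is, of course, another question.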

References


Black N. Why we need qualitative research. Journal of Epidemiology and Community Health. 1994;48:425–6.
Popay J, Williams G. Qualitative research and evidence-based healthcare. Journal of the Royal Society of Medicine. 1998 Jan;91 Suppl 35:32–7.