Wednesday 21 August 2013

Why quantitative studies cannot deliver evidence-based practice alone

Qualitative methodology is pants and has no role in evidence-based practice


It's not uncommon to share a room with a colleague who is repelled by the idea that qualitative research could contribute to improving patient care. There are many more (and I was one) who just don't get where qualitative research fits in, and it seems to me that the evidence-based practice (EBP) movement, in some cases deliberately and in others not, has fostered an ideal. And that ideal is quantitative.

For the study of the efficacy (how well it worked in the study) and, indeed, the effectiveness (how well it worked in practice) of a drug, the randomised controlled trial, with its quantitative output of numerical data on success or otherwise in treating a given condition, is the ideal, and I would not argue otherwise. For the limited context and the restricted set of patients in which the trial is conducted, a well-conducted trial will allow some estimate of the "truth" of the efficacy of the treatment, at least for the outcomes being measured.
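As a concrete illustration of the kind of quantitative output I mean, here is a minimal sketch of how a risk ratio and its 95% confidence interval might be calculated from a two-arm trial. Every number in it is invented for the example; nothing comes from a real study.

```python
import math

# Made-up trial counts, purely for illustration (not from any real study).
# "Failure" = the treated tooth eventually needed to be extracted.
failures_crown, n_crown = 14, 200      # stainless steel crown arm (hypothetical)
failures_filling, n_filling = 38, 200  # composite filling arm (hypothetical)

risk_crown = failures_crown / n_crown
risk_filling = failures_filling / n_filling
risk_ratio = risk_crown / risk_filling

# 95% confidence interval for the risk ratio (log scale, normal approximation)
se_log_rr = math.sqrt(1 / failures_crown - 1 / n_crown
                      + 1 / failures_filling - 1 / n_filling)
lower = math.exp(math.log(risk_ratio) - 1.96 * se_log_rr)
upper = math.exp(math.log(risk_ratio) + 1.96 * se_log_rr)

print(f"Risk ratio {risk_ratio:.2f}, 95% CI {lower:.2f} to {upper:.2f}")
```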

But evidence-based practice is about much more than a risk or odds ratio and p-values or confidence intervals. These numerical - quantitative - outputs are but a tiny element of what I understand as evidence-based practice.

Rather than look at qualitative studies per se, e.g. "how dentists use or don't use evidence in their practices" (rather than "how many use evidence"), in this blog post I just want to draw attention to the way we use qualitative methods to deal with quantitative data in EBP.

Or perhaps it does...


Since the early days of EBP there has been 1) a need to consider the patient's values and aspirations, 2) a need to consider our own experience and expertise, and 3) a requirement to critically appraise the literature we read. Let's not forget, of course, that there has also been a requirement to use the best available research to inform the discussion.

So if I have a study that tells me that putting a stainless steel crown on a carious deciduous tooth, rather than filling it with a composite, will result in 12% fewer such teeth needing to be extracted, I am grateful for this quantitative information on the efficacy of the intervention. I need it to understand what the potential benefit of using the intervention in my patient could be, at least from the point of view of losing a tooth.
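To put a figure like that 12% into patient-level terms, here is a minimal sketch that assumes, purely for illustration, that the 12% is an absolute reduction in the extraction rate and that the baseline rate with a composite filling is 30%; both assumptions are mine, not a study's.

```python
# All numbers here are assumptions made for illustration, not study results:
# the 12% is read as an absolute reduction in the extraction rate, and the
# baseline extraction rate with a composite filling is set, arbitrarily, at 30%.
risk_extraction_filling = 0.30
absolute_risk_reduction = 0.12
risk_extraction_crown = risk_extraction_filling - absolute_risk_reduction

# Number needed to treat: crowns placed per extraction avoided
nnt = 1 / absolute_risk_reduction

print(f"Extraction risk: filling {risk_extraction_filling:.0%}, crown {risk_extraction_crown:.0%}")
print(f"Number needed to treat: about {nnt:.0f}")
```

Read this way, roughly eight or nine crowns would need to be placed to avoid one extraction, which is the sort of framing a conversation with a patient might actually need.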

Qualitative critical appraisal



However, in order to evaluate the risk of bias - that is, the risk that the result is not the true reduction in tooth loss because of some systematic error in the design of the study - I would critically appraise it. The thing is that there don't seem to be reliable quantitative ways of doing this. We can score whether the two groups were "randomised" or not - perhaps with a 1 for yes and a 0 for no - but very quickly we ask: how were they randomised, and what effect does it have if they don't tell us? We might see a table of baseline characteristics showing a difference in the baseline amount of caries in the average child in each group - but what does that mean for the results? Perhaps the p-value is 0.04 or perhaps it is 0.004 - how do these different levels of confidence in the estimate of the truth affect the way we think about the results?
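To illustrate the point, here is a deliberately naive sketch of what a purely quantitative appraisal might look like. The checklist items and answers are invented; the point is only that the resulting score cannot carry the questions that actually matter.

```python
# A deliberately naive, invented "quality score" for a trial report.
report = {
    "randomised": True,                   # ...but how? Not reported.
    "allocation_concealed": None,         # not reported at all
    "groups_similar_at_baseline": False,  # more caries in one arm; how much does it matter?
    "blinded_outcome_assessment": True,
}

score = sum(1 for answer in report.values() if answer is True)
print(f"Quality score: {score}/{len(report)}")
# The score is 2/4 whether the randomisation was robust or merely claimed,
# and it says nothing about the direction or size of any resulting bias.
```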

These are not questions that can be answered reliably in quantitative terms. In a sense we are analysing a text - the report of a study - to try and construct some idea of what it means. Does this explanation mean this is likely to be a reliable study or not? And this, I would argue, is a qualitative process: we are constructing an idea in our heads of whether we think the story the report tells is likely to be the truth or not. Someone else could well construct a different opinion, contrary to ours. How many times have you read in systematic reviews that disagreements were resolved through consensus or by a third reviewer?

Qualitative understanding of patient values and clinical experience and expertise


What about the other two essential elements of evidence-based practice - the patient's values and our own experience and expertise? Here again it is hard to see how we can avoid using qualitative methods, and it is here that quantitative methods fail.

Contrary to the positivist "truth" from the study, for a patient the truth of what is - for want of a better term - in their interests, and of what meets their values and aspirations, could be very different. Perhaps the outcome of the study is not the outcome that interests them. Or perhaps, even if it is, they ascribe a different value to a tooth lasting only 1 year rather than 5.

Likewise, the truth for the clinician about the effectiveness of the treatment may be vastly at odds with the researchers' results as they try to run a small business, manage a clinic, decide which hands-on courses to attend (and which not to), and make sense of their colleagues' opinions about the research, its value, and their own experience of using the treatment...

The issues of why people do things, and what drives them to act or not, are inherently qualitative, and as clinicians trying to practise in an evidence-based way we make decisions of this kind every day.

So I guess my conclusion here is that, as we teach and train colleagues and students to practise EBP, we should not forget the essential role that qualitative methods play in making sense of quantitative data and in helping us use it where it is appropriate. As we move forward we may want to think about how we develop some rigour in this process, as the various tools for critical appraisal have sought to do.
