
failure to prescribe in accordance with clinical guidelines (WHO 2002), among other problems.

In this context, the set of measures cited most often for assessing rational medicine use is the INRUD indicators (WHO 1993). The INRUD core indicators represent a minimum set of indicators that the WHO recommends for studies on medication use and prescription practices. However, they mostly measure levels of care rather than appropriateness of care and therefore cannot assess many aspects of high-quality care. Table 3.2 shows medication use statistics from the Mali case study borrowed from the INRUD list. Most of the indicators do not relate actual use to optimal use of a treatment (although some studies attempt to define whether better use is represented by an indicator’s increase or decrease, for example, Holloway et al. 2020). This issue has limited the literature. For example, a systematic review of studies on irrational medicine use in China and Vietnam, based on the WHO framework, notes that “[n]o eligible studies were found to assess whether or not unnecessary or expensive drugs were prescribed, and whether or not the prescription was in accordance with clinical guidelines” (Mao et al. 2015, 9).

The most relevant but less frequently used INRUD indicator comes from the list of complementary indicators: “prescription in accordance with treatment guidelines.” As the WHO guidelines note, this measure can be highly effective for well-defined conditions with clear treatment guidelines, but problems arise in defining health problems, in defining what counts as acceptable treatment, and in obtaining enough encounters with specific problems during the course of a drug use survey. These few lines point to the many challenges that arise when measuring appropriate care and identifying insufficient care as well as nonindicated care. At the core is the problem that quality depends not only on what is provided, but also on what should be provided.1

Table 3.2 Rational use of medicines consultation indicators: Mali case study

Indicator                             Mean
Prescribed antibiotics (%)            63
Received injection or IV (%)          40
Medications prescribed (average)      3.8
Medications bought (average)          2.5

Sources: World Bank, using data from the INRUD/WHO indicators in the Mali case study; Lopez, Sautmann, and Schaner 2022.
Note: The indicators were created from data collected for an experimental study on malaria treatment in Mali, which had 627 patient observations in the control group (see box 3.1). All patients with acute symptoms were approached for clinic entry and exit interviews. INRUD indicators cannot directly assess whether a given treatment was appropriate, although the documented levels of antibiotic and injection use and the rate of polypharmacy (multiple medications for a single condition) in this sample are very high. INRUD = International Network for the Rational Use of Drugs; IV = intravenous; WHO = World Health Organization.
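
To make the construction of such statistics concrete, the following is a minimal sketch of how WHO/INRUD-style prescribing indicators like those in table 3.2 could be computed from encounter-level exit-interview data. The data frame and column names (n_meds_prescribed, any_antibiotic, and so on) are illustrative assumptions, not the variables or values from the Mali study.

```python
import pandas as pd

# Illustrative encounter-level data from clinic exit interviews.
# Each row is one patient consultation; values are made up for the example.
encounters = pd.DataFrame({
    "n_meds_prescribed": [4, 3, 5, 2],   # medications on the prescription
    "n_meds_bought":     [3, 2, 3, 1],   # medications actually purchased
    "any_antibiotic":    [1, 1, 0, 1],   # 1 if any antibiotic was prescribed
    "any_injection_iv":  [1, 0, 1, 0],   # 1 if an injection or IV was given
})

# WHO/INRUD-style prescribing indicators (cf. table 3.2): simple means
# across encounters, expressed as percentages or averages.
indicators = {
    "Prescribed antibiotics (%)":       100 * encounters["any_antibiotic"].mean(),
    "Received injection or IV (%)":     100 * encounters["any_injection_iv"].mean(),
    "Medications prescribed (average)": encounters["n_meds_prescribed"].mean(),
    "Medications bought (average)":     encounters["n_meds_bought"].mean(),
}

for name, value in indicators.items():
    print(f"{name}: {value:.1f}")
```

As the note to table 3.2 emphasizes, these levels describe how much is prescribed, not whether any given prescription was appropriate for the patient in front of the provider.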

In recent years, a new generation of studies in the health economics literature has developed several methods for assessing quality of care that tackle this issue. The first of these methods, often declared the “gold standard” (Dupas and Miguel 2017), is the so-called audit or standardized patient study, which has been used in multiple research studies across many LMICs.2 Akin to mystery shoppers, standardized patients are trained to present with a specific illness profile and visit the provider incognito, and they are later debriefed about the consultation. Kwan, Bergkvist, et al. (2019) provide an introduction to using the method for research, accompanied by a toolkit and manual, and King et al. (2019) provide practical implementation guidance.

The standardized patient method has several benefits. Most importantly, of course, providers do not know who among their patients are audit cases,3 making it likely that the visit records are representative of the provider’s behavior in day-to-day patient interactions. Because the “true” underlying condition is known to the researcher by design, the provider’s behavior can be benchmarked against recommended clinical practice, and the provider’s conclusions can be compared with the correct diagnosis. Each component of the consultation can be recorded, from the number of questions asked to the diagnosis and the length of time spent with the patient. The method also allows the researcher to vary patient behavior or characteristics systematically to understand provider responses, for example, to identify gender or ethnic discrimination (Borkhoff et al. 2009; Planas et al. 2015) or to measure how providers treat patients with different levels of medical knowledge (Currie, Lin, and Meng 2014). For these purposes, it is particularly useful that several standardized patients can visit the same provider and record the provider’s behavior in multiple cases, and conversely, the same individual trained as a standardized patient can present with different illness profiles, different ways of behaving and dressing, and so on.
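
As an illustration of the benchmarking logic described above, the sketch below scores a single hypothetical standardized patient visit against its case script. The condition, checklist items, and field names are invented for the example and do not reproduce any particular study’s instrument.

```python
# Hypothetical standardized patient case script and one recorded (debriefed) visit.
# All items below are illustrative assumptions, not a validated checklist.
case_script = {
    "condition": "suspected malaria",        # known to the researcher by design
    "essential_questions": {"fever duration", "chills", "travel history", "prior treatment"},
    "correct_diagnosis": "malaria",
    "recommended_treatment": {"artemisinin combination therapy"},
    "unnecessary_treatments": {"antibiotic", "steroid injection"},
}

visit_record = {
    "questions_asked": {"fever duration", "chills", "cough"},
    "diagnosis_given": "malaria",
    "treatments_given": {"artemisinin combination therapy", "antibiotic"},
    "minutes_with_patient": 6,
}

def benchmark(visit, script):
    """Score one debriefed standardized patient visit against its case script."""
    asked = visit["questions_asked"] & script["essential_questions"]
    return {
        "checklist_adherence": len(asked) / len(script["essential_questions"]),
        "correct_diagnosis": visit["diagnosis_given"] == script["correct_diagnosis"],
        "correct_treatment": bool(visit["treatments_given"] & script["recommended_treatment"]),
        "unnecessary_treatment": bool(visit["treatments_given"] & script["unnecessary_treatments"]),
        "minutes_with_patient": visit["minutes_with_patient"],
    }

print(benchmark(visit_record, case_script))
```

Because the underlying condition is fixed by design, every element of this score has a clear benchmark, which is precisely what the INRUD-style level indicators lack.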

A disadvantage is that the types of conditions presented, or the scripted behavior and responses by the standardized patients, may not be representative of the actual patient population. Real patients may also have a history of illness or clinical records with which the physician is familiar. In addition, standardized patient studies share with “mystery shopper” and audit research designs in other contexts the problem that they are often not double-blinded, that is, the person assessing quality knows (or infers) the objectives of the study. This may lead the assessor to change their behavior subconsciously to elicit a specific response, causing confirmatory bias (Bertrand and Duflo 2017).

Another method of quality measurement is direct observation. Here, a trained clinician—for instance, a physician, nurse/midwife, or medical student—sits in on the visit and takes notes on various aspects of the consultation. Usually, these observation data are collected using a structured checklist that reflects established protocols for that type of service (WHO guidelines, national health policy, or other accepted medical protocols). A study in Tanzania shows that the responses in the direct observation checklist correspond closely with patient recall in a “retroactive consultation review” (RCR) (Leonard and Masatu 2006). Moreover, despite an initial Hawthorne effect—that is, the physician responding to being observed by increasing their effort—the quality of care recorded in the observed interactions is similar to that in unobserved interactions (as measured by an RCR) after approximately the first 10 consultations (Leonard and Masatu 2010). To the extent that patients do not change their behavior under observation, this approach is closest to “real life” in the sense that the conditions and persons observed are a representative sample of the relevant patient population. However, it may be difficult to construct a checklist that is detailed enough yet covers all the possible cases the physician encounters, especially in a generalist practice.
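
As a rough illustration of how such observation data can be examined for a Hawthorne effect, the sketch below compares mean checklist scores in roughly the first 10 observed consultations with those recorded later in the observation sequence. The data, column names, and cutoff are hypothetical, not those of the Tanzania studies.

```python
import pandas as pd

# Hypothetical direct observation data: one row per observed consultation,
# ordered within each provider by when it occurred during the observation period.
obs = pd.DataFrame({
    "provider_id":     [1, 1, 1, 1, 2, 2, 2, 2],
    "consult_order":   [1, 2, 11, 12, 1, 2, 11, 12],
    "checklist_score": [0.90, 0.85, 0.70, 0.72, 0.80, 0.78, 0.60, 0.62],  # share of protocol items completed
})

# Compare mean recorded quality in the first ~10 observed consultations with later ones;
# higher early scores would be consistent with an initial Hawthorne effect that fades.
obs["early"] = obs["consult_order"] <= 10
print(obs.groupby("early")["checklist_score"].mean())
```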

In some situations, the best way to measure quality is by conducting a patient interview after the consultation with the physician, as in an RCR. Leonard and Masatu (2006) report high agreement between direct physician observation and patient reports when the RCR occurs shortly after the consultation. This is particularly useful when the interview can be combined with a re-evaluation of the patient’s diagnosis. For example, in the malaria case study described in box 3.1, enumerators conducted exit interviews at the clinic as well as follow-up interviews and a malaria test at home the next day. This method uses real patients and may avoid observation bias in physician behavior at least to some degree—but there are disadvantages too. First, patients often cannot accurately report what tests were conducted. Second, Lopez, Sautmann, and Schaner (2022) find that there is selection bias in home malaria testing: only patients with more serious symptoms agree to the rapid diagnostic test (RDT), which involves a finger prick to take blood. The authors therefore construct a malaria risk index from the home test, using predicted malaria probability based on symptom reports and patient demographics to extend the analysis to all patients at the clinic.
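
The following sketch illustrates the general logic of that extension under assumed variable names: fit a model of the RDT result on symptom reports and demographics among the self-selected patients who accepted the home test, then predict a malaria risk index for every patient at the clinic. It is a simplified stand-in, not the authors’ exact specification.

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression

# Hypothetical exit-interview data; rdt_positive is missing (None/NaN) for
# patients who declined the home rapid diagnostic test.
patients = pd.DataFrame({
    "rdt_positive": [1, 0, 1, None, None, 0, 1, None, 0, 1],
    "fever":        [1, 1, 0, 1, 0, 0, 1, 1, 0, 1],
    "headache":     [1, 0, 1, 1, 0, 1, 0, 0, 1, 1],
    "age":          [24, 35, 7, 12, 40, 29, 5, 18, 50, 9],
    "female":       [1, 0, 0, 1, 1, 0, 1, 0, 1, 1],
})

features = ["fever", "headache", "age", "female"]
tested = patients.dropna(subset=["rdt_positive"])

# Fit a (regularized) logistic model of the RDT result on symptom reports and
# demographics among the self-selected patients who agreed to the finger-prick test ...
model = LogisticRegression().fit(tested[features], tested["rdt_positive"].astype(int))

# ... then predict a malaria risk index for every patient at the clinic,
# including those who declined the home test.
patients["malaria_risk"] = model.predict_proba(patients[features])[:, 1]
print(patients[["rdt_positive", "malaria_risk"]])
```

The usefulness of such an index depends on how well symptoms and demographics predict the test result, so in practice the prediction model would be validated on the tested subsample before being applied to all patients.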

To illustrate, figure 3.1 shows the share of patients who received antimalarial prescriptions by predicted malaria risk, by providers who did and did
