Thinking about the placebo effect as a “meaning response” and the implication for policy evaluation

In recent conversations about research, I’ve noticed that we often get confused when discussing the placebo effect. The mere fact of positive change in a control group administered a placebo does not imply a placebo effect – the change could be due to simple regression to the mean. And, even though one study claims that a placebo induced benefits in subjects who knew they received a placebo, it is still the case that subjects need to believe in a possible benefit in order to actually benefit from a placebo effect. These conversations spurred me to read more about placebo effects, which led me to the very interesting perspective of the anthropologist Daniel Moerman.
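To see why regression to the mean alone can produce an apparent improvement in an untreated group, here is a minimal simulation sketch – the score distributions and enrollment threshold are illustrative assumptions, not numbers from any study:

```python
import numpy as np

rng = np.random.default_rng(0)

# Each subject's symptom score = a stable true level + transient noise.
n = 100_000
true_level = rng.normal(50, 10, n)             # persistent component
baseline = true_level + rng.normal(0, 10, n)   # noisy score at enrollment
followup = true_level + rng.normal(0, 10, n)   # later score; no treatment at all

# Trials enroll people who currently feel bad, i.e. with high baseline scores.
enrolled = baseline > 65

print(f"mean baseline score among enrolled:  {baseline[enrolled].mean():.1f}")
print(f"mean follow-up score among enrolled: {followup[enrolled].mean():.1f}")
# The follow-up mean is markedly lower even though nothing was administered:
# the transient noise that pushed subjects over the threshold has washed out.
```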

Moerman begins from the obvious fact that placebos per se cannot cause the placebo effect – by definition, placebos are inert. It is the study subject’s response to the placebo that determines the result, implying that some interpretation of the placebo propels the beneficial outcome. Moerman calls this effect of the meaning conveyed by the placebo the “meaning response”.

Perhaps the simplest illustration of how the meaning response operates comes from this experiment: a placebo pain reducer (mere saline solution) was introduced as a “helpful pain reliever” to randomly selected study subjects. Other subjects were not told about a “helpful pain reliever” and instead received a hidden injection of saline solution (the same substance but without the accompanying description). Both groups received a “placebo” insofar as they received an inert substance, but only the group that received the accompanying message reported a persistent decline in pain.

Here’s another example of the meaning response: women in the UK who regularly suffer headaches were offered a branded analgesic (of a widely advertised and well-known brand), a generic analgesic, a branded placebo, or a generic placebo. The analgesic had a greater effect on self-reported pain reduction than the placebo (thankfully, for regular users like me!). But the brand-name analgesic was more effective than the generic one, and the brand-name placebo was more effective than its generic counterpart. So the brand name (presumably interpreted as a signal of quality) enhanced the effectiveness of both an active drug and an inert one.
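This study is effectively a 2x2 design – active vs. inert crossed with branded vs. generic – with the brand carrying the “meaning” component. Here is a minimal sketch of that additive structure, using made-up effect sizes rather than the study’s actual estimates:

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative effect sizes only, not estimates from the UK headache study:
DRUG_EFFECT = 2.0    # active ingredient
BRAND_EFFECT = 1.0   # the "meaning" carried by the brand name

def simulate_arm(active, branded, n=5_000):
    """Simulated pain reduction for one arm of the 2x2 design."""
    return DRUG_EFFECT * active + BRAND_EFFECT * branded + rng.normal(0, 3, n)

for active in (1, 0):
    for branded in (1, 0):
        label = f"{'branded' if branded else 'generic'} {'analgesic' if active else 'placebo'}"
        y = simulate_arm(active, branded)
        print(f"{label:18s} mean pain reduction: {y.mean():5.2f}")
# The brand increment shows up in both the active and the placebo rows,
# which is the additive pattern the study reports.
```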

The point is that we respond to the meaning that surrounds the placebo – presumably these meanings shape our anticipation of effects, which in turn shapes our subjective perception of experience (and may also affect behavior). Economists use the broad term “expectations” to describe the responses that people have to what things mean, or what they know. As Moerman writes, “expectancies are the outcome of a complex play of meanings”. Once we cast the placebo effect as a “meaning response embodied by expectations”, we can see how placebo expectations depend on the particular context. The same placebo substance can have different effects in different populations: apparently injected placebos are deemed more effective than oral placebos for pain reduction in the U.S. but not in Europe.

Another study, and one with more direct implications for the kinds of impact evaluations we do, highlights the effect of the meaning response not for the placebo recipient but for the placebo giver, in this case the doctor. Doctors have a lot of influence on their patients’ beliefs and expectations. Patients undergoing a molar extraction were told they would receive either an inert placebo, a narcotic pain reliever, or a possible pain enhancer (the substance naloxone, which blocks opioid receptors). Neither the doctor nor the patient was told which substance was administered, but the doctors were privately told that the supply of the pain reliever had run out and so it would not be administered. This was not true, however, and the pain reliever was administered to some study subjects. At this point in the study, the placebo-administered patients reported no reduction in pain.

Then, in a second phase, the doctors were informed that the pain reliever had been procured and might be administered to patients as originally intended. After this news, the patients who were administered the placebo reported significantly reduced pain. The only difference between the phases lay in the doctors’ beliefs: in the first phase they believed that no one would actually get the pain reliever (even though some did), while in the second phase they knew that patients might get it. Somehow this knowledge, and the expectation it created, was conveyed to the patients through the behavior of the administering doctor.

Should we be concerned about a “meaning effect” in the impact evaluations we conduct? I do have an inchoate worry, as we often evaluate policy innovations implemented on a pilot basis. In general we know we need to understand how the participants and the implementers understand the intervention, and these topics are usually part of our data collection plans. Probing participants’ understanding is especially important since our impact evaluations can seldom be double-blinded – unlike in the doctor example above, the intervention implementer knows exactly what is being implemented and how much attention the intervention will receive from policy-makers.

So I do worry about the “meaning” of participating in a pilot study and whether this “meaning”, as understood by the participants, itself affects the outcomes we care about and then goes away when the evaluation period is over. In principle, a perceived benefit from participating in a new initiative, perhaps even some excitement about being in the spotlight, can affect implementer effort. If so, then this is a “meaning response” to the intervention as a pilot or a novelty, rather than to the intervention as-it-is. And note that this concern is separate from concerns such as whether the intervention is at scale or implemented in a small controlled setting, whether the implementer is a government or an NGO, and other factors that can also greatly matter for external validity.

We believe (I believe!) there is real value to effectiveness studies (i.e. evaluations of policy innovations at scale, implemented under real-world conditions), but what if the spotlight itself – the spotlight of a pilot, an innovation, or an evaluation – contributes to the assessed program impact? And if it does, how might we separately identify the spotlight effect from the program effect – can there be a valid counterfactual for this? Perhaps over a longer period, but the lifespans of our prospective evaluations are usually not that long.
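To make this worry concrete, here is a toy model in which the spotlight adds a separate, transient component to measured impact – the effect sizes are assumptions for illustration, not estimates from any evaluation. A standard treatment–control comparison during the pilot recovers the sum of the program and spotlight effects; only a follow-up after the spotlight fades recovers the program effect alone, which is why a longer horizon is the natural, if often impractical, identification strategy:

```python
import numpy as np

rng = np.random.default_rng(2)

n = 10_000
treated = rng.random(n) < 0.5

PROGRAM_EFFECT = 1.0    # the intervention as-it-is (assumed)
SPOTLIGHT_EFFECT = 0.5  # transient boost from pilot attention (assumed)

# During the evaluation, treated units receive both components.
y_during = (PROGRAM_EFFECT + SPOTLIGHT_EFFECT) * treated + rng.normal(0, 1, n)
# Once the spotlight fades, only the program effect remains.
y_after = PROGRAM_EFFECT * treated + rng.normal(0, 1, n)

est_during = y_during[treated].mean() - y_during[~treated].mean()
est_after = y_after[treated].mean() - y_after[~treated].mean()
print(f"estimated impact during the evaluation: {est_during:.2f}")  # ~1.5
print(f"estimated impact after the spotlight:   {est_after:.2f}")   # ~1.0
```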
