This Computer Program Can Tell Whether You’ve Taken A Placebo

What’s the difference between a working painkiller and a placebo? Well, you can see it in people’s brains. Brain and pain researchers in the U.K. and U.S. report they’ve developed a prototype algorithm that can tell whether a patient took a working analgesic or a placebo. The algorithm works by analyzing fMRI scans of the patient, made soon after he or she takes the medicine.

In the future, algorithms like this could help researchers develop new painkillers more efficiently. When it comes to pain relief, the placebo effect can be powerful. A study volunteer in an early-stage test of an experimental painkiller might report the drug works great, even though it’s not the drug that’s relieving the pain, but his or her body’s own natural painkilling chemicals, kicked off by the placebo effect. Once doctors test the drug in more people, however, the truth becomes clear: the data show the drug doesn’t work any better than placebo pills.

However, if fMRI scans are able to tell whether experimental painkillers affect the brain differently than placebos, scientists could use the scans to eliminate ineffective drug candidates more quickly. In other words, if a medicine is going to fail anyway, you want it to fail faster. This new algorithm, coupled with the fMRI scans, could help with that.

The differences between the fMRI scans of people who took effective pills versus placebo pills aren’t obvious to the naked eye. To test whether there are any real differences at all, the research team analyzed scans from eight studies of different clinically proven analgesics. Each of the studies included volunteers who took the painkiller and volunteers who unknowingly took a placebo. The team fed all that data into a machine-learning program designed to find patterns in data, then tested the trained program on patient examples it hadn’t seen before.
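The workflow described above — learn group patterns from labeled scans, then classify held-out scans — can be illustrated with a toy sketch. Everything here is an assumption for illustration: the data are synthetic stand-ins for fMRI features, and the nearest-template "classifier" is far simpler than whatever the researchers actually used.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "brain-response" features: drug scans shifted from placebo scans.
# (Purely illustrative numbers, not real fMRI data.)
n_scans, n_features = 100, 20
placebo = rng.normal(0.0, 1.0, size=(n_scans, n_features))
drug = rng.normal(0.8, 1.0, size=(n_scans, n_features))

# Split each group: most scans for learning, the rest held out for testing.
train_p, test_p = placebo[:75], placebo[75:]
train_d, test_d = drug[:75], drug[75:]

# "Learning" here is just averaging each group's training scans into a
# template; real pattern classifiers are far more elaborate.
template_p = train_p.mean(axis=0)
template_d = train_d.mean(axis=0)

def classify(scan):
    """Label a held-out scan by whichever group template it lies closer to."""
    closer_to_drug = (np.linalg.norm(scan - template_d)
                      < np.linalg.norm(scan - template_p))
    return "drug" if closer_to_drug else "placebo"

correct = (sum(classify(s) == "placebo" for s in test_p)
           + sum(classify(s) == "drug" for s in test_d))
accuracy = correct / (len(test_p) + len(test_d))
print(f"held-out accuracy: {accuracy:.2f}")
```

The key point the sketch captures is that the program is scored only on scans it never trained on — the same safeguard that makes the researchers' reported success rates meaningful.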

The program often correctly identified who had taken the real medicine and who had taken a placebo, though it wasn’t perfect. Depending on the drug, its success rates ranged from 57 percent to 83 percent, all better than chance. And a big plus: The program had no false positives. It never identified a placebo as a working drug. So the results are a promising start toward an algorithm that could help researchers and drug companies sort out experimental drugs quickly.

The team published its work last week in the journal Science Translational Medicine.