How AI platforms can uncover variables that lead to high placebo response
One of the most frustrating scenarios in clinical research is when patients in both the treatment arm and the placebo arm experience positive outcomes, leaving no significant difference between the arms.
Russell Reeve, PhD, VP, Biostatistics and Decision Sciences
Kal Chaudhuri, MBA, Principal, AI/ML Products and Consulting
Jan 06, 2022

This not only has the potential to increase trial costs through enlarged sample sizes, but can also prevent or delay a promising treatment from reaching the patients who need it. While this may be an acceptable outcome when an experimental drug is simply not efficacious, it can also be the result of an inappropriate trial population.

Understanding the difference can determine whether an experimental drug is worth further investment.

Inconsistent criteria drive inconclusive results

A highly variable placebo effect is a common risk in studies of treatments for central nervous system (CNS) conditions, where the promise of a treatment or interactions with caregivers can be enough to produce positive results in some patients. This effect can also be seen in studies of pain medications and of treatments for lifestyle-related conditions, such as non-alcoholic steatohepatitis (NASH), anorexia, or hypertension, where the experience of being in the trial can trigger lifestyle changes that have beneficial effects.

Positive placebo-arm results can also stem from poorly defined inclusion/exclusion criteria and inconsistencies in diagnosis. For example, depression diagnoses and pain ratings are largely subjective, relying on a physician’s interpretation of the patient’s condition. And for conditions with limited diagnostic tools, like NASH, physicians may have to rely on predictive data to inform their diagnosis.

When this variability filters into trial recruiting, sponsors can end up with patients who have been misdiagnosed or whose disease is too mild, and who have very little probability of benefiting from the treatment. The challenge is to determine whether a placebo response indicates that the drug is genuinely ineffective or whether it is the result of a mistargeted patient population.

Hundreds of variables

When a variable placebo effect occurs, traditional statistical analysis typically will not identify the cause. Statisticians are trained to analyze treatment efficacy in a two-arm setting using preselected patient attributes as variables, an approach required by regulatory agencies to ensure unbiased study analyses. Identifying the predictors that drive high placebo response rates, however, requires a different approach. Modern machine learning methods can be profitably employed here to identify the underlying causal factors, as the sketch below illustrates.
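
As a concrete, deliberately simplified illustration, the sketch below uses generic, off-the-shelf scikit-learn tools, not SOMS itself, to rank baseline covariates by how strongly they predict outcomes in the placebo arm. The file name and column names are hypothetical placeholders.

    # Rank baseline covariates by how well they predict placebo response.
    # Generic scikit-learn sketch; file and column names are hypothetical.
    import pandas as pd
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.inspection import permutation_importance
    from sklearn.model_selection import train_test_split

    df = pd.read_csv("trial_data.csv")           # hypothetical trial export
    placebo = df[df["arm"] == "placebo"]         # restrict to the placebo arm

    covariates = ["age", "baseline_severity", "region",
                  "visit_frequency", "caregiver_contact_hours"]  # hypothetical
    X = pd.get_dummies(placebo[covariates])      # one-hot encode categoricals
    y = placebo["outcome_change"]                # e.g., change from baseline

    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.3, random_state=0)

    model = RandomForestRegressor(n_estimators=500, random_state=0)
    model.fit(X_train, y_train)

    # Covariates whose shuffling most degrades held-out predictions are the
    # strongest candidate drivers of placebo response.
    imp = permutation_importance(model, X_test, y_test,
                                 n_repeats=20, random_state=0)
    for name, score in sorted(zip(X.columns, imp.importances_mean),
                              key=lambda t: -t[1]):
        print(f"{name}: {score:.3f}")

Variables that rank highly here are only leads for further investigation; as discussed below, screening many candidates without multiplicity control is exactly what inflates the Type I error rate.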

Evaluating a clinical trial for parameters that are predictive of placebo response requires a quick exploratory analysis that can evaluate large numbers of variables, whittle those down to the few that provide the bulk of the benefit, and yield actionable outcomes. The number of variables to investigate can be large and may extend beyond pharmacokinetic models or standard clinical trial factors.

As a result, a manual review of the data is nearly impossible. Common subgroup analyses performed today can take several months and may evaluate only a limited number of variables, often not in combination. This approach can carry a very high Type I error rate, flagging spurious variables for investigation, and even when it succeeds it may not lead to optimal subgroups. The small simulation below illustrates the problem.
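
The following toy simulation (illustrative only; it uses pure noise, no real trial data) shows why one-at-a-time screening of many candidate variables almost guarantees false positives, and how even a simple Bonferroni correction restores control:

    # Simulate screening 200 pure-noise covariates against a noise outcome.
    # Uncorrected, nearly every simulated "trial" yields at least one false hit.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    n_patients, n_covariates, n_sims = 150, 200, 200

    naive_hits = 0
    bonferroni_hits = 0
    for _ in range(n_sims):
        X = rng.normal(size=(n_patients, n_covariates))   # noise covariates
        y = rng.normal(size=n_patients)                   # noise "response"
        pvals = np.array([stats.pearsonr(X[:, j], y)[1]
                          for j in range(n_covariates)])
        naive_hits += pvals.min() < 0.05
        bonferroni_hits += pvals.min() < 0.05 / n_covariates

    print(f"False-positive rate, uncorrected: {naive_hits / n_sims:.2f}")
    print(f"False-positive rate, Bonferroni:  {bonferroni_hits / n_sims:.2f}")

With 200 independent tests at the 0.05 level, the chance of at least one false positive per trial is 1 - 0.95^200, or more than 99.9 percent; the corrected threshold brings it back near 5 percent. Bonferroni is the bluntest such correction; the point is simply that some analysis-wide error control is essential when the variable pool is large.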

Data science technology in action

Advances in machine learning algorithms and artificial intelligence now make it possible to rapidly understand and investigate the causes of a variable placebo effect in immense detail, giving sponsors a chance to revise inclusion/exclusion criteria and vetting requirements for future trials.

When trial data are analyzed using machine learning algorithms, such as those in IQVIA’s Subpopulation Optimization and Modeling Solution (SOMS), the Type I error rate and the influence of unconscious bias are greatly reduced.

SOMS is designed to find subgroups while controlling the Type I error rate, to determine which populations respond positively or negatively to a treatment, and which are likely to show an enhanced placebo response. This helps trialists to focus the trial on the most responsive patients in order to drive strong treatment differentiation.
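
To make the idea concrete, here is a generic sketch of subgroup search with permutation-based control of the analysis-wide Type I error. It is emphatically not the SOMS algorithm, whose internals are proprietary; it only illustrates the general principle of calibrating the best discovered subgroup against a shuffled-outcome null distribution. All data here are synthetic.

    # Generic subgroup-search sketch with permutation-based error control.
    # Not the SOMS algorithm; synthetic data, illustrative logic only.
    import numpy as np

    def best_split_score(X, y):
        """Largest mean-outcome gap over single-covariate median splits."""
        best = 0.0
        for j in range(X.shape[1]):
            mask = X[:, j] > np.median(X[:, j])
            if 10 <= mask.sum() <= len(y) - 10:   # ignore tiny subgroups
                gap = abs(y[mask].mean() - y[~mask].mean())
                best = max(best, gap)
        return best

    rng = np.random.default_rng(0)
    X = rng.normal(size=(150, 50))    # hypothetical baseline covariates
    y = rng.normal(size=150)          # hypothetical placebo-arm outcomes

    observed = best_split_score(X, y)

    # Shuffling outcomes breaks any covariate-outcome link, so the best
    # score found under shuffling measures pure search-driven optimism.
    null = np.array([best_split_score(X, rng.permutation(y))
                     for _ in range(500)])
    p_value = (1 + (null >= observed).sum()) / (1 + len(null))
    print(f"Best subgroup gap: {observed:.2f}, adjusted p-value: {p_value:.3f}")

Because the permutation null is built from the maximum over the entire search, the resulting p-value accounts for every subgroup the procedure examined, which is what keeps the analysis-wide Type I error rate under control.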

Some key benefits of using SOMS include:

  • It can analyze many variables, in all relevant combinations
  • It can finish an analysis in hours instead of weeks
  • It controls the analysis-wide Type I error rate, limiting findings to variables that show promise of explaining the placebo response
  • It identifies optimal subgroups based on the data presented, not on prior assumptions or convenience

Needle in a haystack

IQVIA has used SOMS to help many sponsors solve their variable placebo response challenges. Let’s take a look at a case study.

Results from a recent phase 2 trial showed the investigational treatment was effective for patients in two regions, but not in the third, which had a significant number of patients still to be recruited. Comparison with prior studies suggested that the results were due to a high placebo response rather than a lack of efficacy of the therapeutic. The sponsor knew that if they could determine why the variation was occurring, they could fine-tune the inclusion/exclusion criteria to reduce placebo response without eliminating a region with significant patient numbers.

Using the SOMS platform, IQVIA performed a root cause analysis using all available data points, including information on the patient recruitment forms used in each region. The analysis, which was completed in just a few days, examined hundreds of criteria and ultimately identified two key variables that drove the regional variations in treatment outcomes. When these variables were controlled for, treatment variability in the poorly performing region was reduced.

The sponsor used these insights to adjust enrollment criteria for the subsequent phase 3 trial so that all three regions could remain part of the research; the treatment effect is now more consistent, and less variable, across all of them.

It was a small change, but without SOMS, the hundreds of potential variations among sub-populations in the trial could have led the sponsor to the wrong conclusions.

SOMS doesn’t guarantee that a trial can be saved, but it does give sponsors the unbiased, statistically rigorous information needed to make data-driven decisions about drug development. In a world where drug development can cost more than a billion dollars, that kind of insight, which saves both time and money and reduces the chance of false conclusions, can be game changing for sponsors and the patients they serve.
