Randomized clinical trials (RCTs) are the gold standard for establishing that health interventions are both safe and effective. However, uninformative clinical studies are a recognized problem across global health conditions and settings. We consider research uninformative when a study ends without establishing either that the intervention has a meaningful health effect or that it clearly has no effect at all. The consequences of uninformative research include diminished impact, wasted money and time, and participants who see no benefit from their commitment and investment.
An opportunity exists to improve RCTs in global health by introducing new standards for engagement and by implementing best practices, some well established and some still emerging:
Simulate sample sizes to ensure trials last long enough, but not too long
Use an analysis plan that is fixed up-front, focusing on commonly used endpoints
Communicate with participants, families, and communities before, during, and after a study
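To make the first practice concrete, here is a minimal sketch of how simulation can guide sample-size choice. The scenario is entirely hypothetical (a two-arm trial with assumed event rates of 20% in control and 12% in treatment), and the test and thresholds are illustrative, not a substitute for a formal statistical design:

```python
import math
import random

def two_prop_z_pvalue(x1, n1, x2, n2):
    """Two-sided pooled z-test for a difference in proportions."""
    p1, p2 = x1 / n1, x2 / n2
    p = (x1 + x2) / (n1 + n2)
    se = math.sqrt(p * (1 - p) * (1 / n1 + 1 / n2))
    if se == 0:
        return 1.0
    z = (p1 - p2) / se
    # two-sided p-value via the normal CDF, Phi(x) = 0.5 * (1 + erf(x / sqrt(2)))
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

def simulated_power(n_per_arm, p_control, p_treat, alpha=0.05, n_sims=1000, seed=1):
    """Estimate power: the fraction of simulated trials with p < alpha."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_sims):
        x_c = sum(rng.random() < p_control for _ in range(n_per_arm))
        x_t = sum(rng.random() < p_treat for _ in range(n_per_arm))
        if two_prop_z_pvalue(x_c, n_per_arm, x_t, n_per_arm) < alpha:
            hits += 1
    return hits / n_sims

# Scan candidate per-arm sample sizes; pick the smallest that reaches ~80% power.
for n in (100, 200, 300, 400, 500):
    print(n, round(simulated_power(n, p_control=0.20, p_treat=0.12), 3))
```

Scanning candidate sizes this way shows directly where power plateaus, which is how simulation helps a trial end "no sooner or later" than the science supports.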
Incorporating these approaches makes it more likely that a study will meet its goals and reduces the chance that additional studies will be needed to answer the same question.
This application allows users to perform adaptive clinical trial simulations to evaluate the pros and cons of a variety of candidate adaptive trial designs.
More about DAC:
Gates Foundation program officers now have the flexibility to offer grantees trial planning grants to institute DAC best practices in their studies.
Efficient clinical research:
a) Uses simulation to identify the best sample size, so the trial delivers insights neither sooner nor later than is scientifically sound.
b) Answers multiple questions during a single study, sometimes with multiple treatment arms in an adaptive design, and includes interim evaluations.
c) Has generalizable and actionable results, often by identifying target policies beforehand, measuring commonly used endpoints, and performing the study in multiple sites.
d) Engages the community, local health system, and study participants before, during, and after the study.
We have compiled a library of best practices that we believe lead to trials that finish and produce actionable information. These best practices are informed by years of trial experience, multiple exemplars across a variety of geographies and pathologies, examples of uninformative trials where best practices were absent, and solid science.
A brief sample of DAC best practices includes:
Know the disease burden and epidemiology: placing studies where the burden of disease is adequate to answer the research question makes for more meaningful results.
Use clinical trial simulations: computer simulations can optimize sample size and study operating characteristics and decrease the risk of failure.
Be open to adaptations: use of platform designs or adaptive designs will enable trials to provide an answer sooner, which can decrease participant burden and therefore be more compassionate.
Employ interim evaluations: use of stopping points for blinded data review enables preplanned adjustments during the study.
Collaborate during design with local experts and researchers in new sectors or disciplines: New collaborations can inform the study with experience outside of normal networks.
Communicate before, during, and after the study with participants and their community to decrease the risk of repeating mistakes and strengthen current and future research.
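As a sketch of how interim evaluations and adaptive stopping can reduce participant burden, the simulation below runs a hypothetical two-look trial on a continuous endpoint: an interim analysis at half enrollment stops the trial early for efficacy or futility. The stopping boundaries are illustrative (the efficacy value is loosely patterned on a Pocock-style constant); a real design would derive boundaries from alpha-spending methods, and blinded interim reviews, as described above, would follow a different procedure:

```python
import math
import random

# Hypothetical two-look design: one interim analysis at half enrollment.
# These boundary values are illustrative only, not a validated design.
EFFICACY_Z = 2.178   # stop early for benefit
FUTILITY_Z = 0.0     # stop early if the effect is trending the wrong way

def arm_mean(rng, n, mu):
    """Average outcome for one arm, outcomes ~ Normal(mu, 1)."""
    return sum(rng.gauss(mu, 1.0) for _ in range(n)) / n

def run_trial(rng, n_per_arm, mu_treat):
    """Return (stopped_early, per-arm enrollment) for one simulated trial."""
    half = n_per_arm // 2
    # interim z-statistic on the first half of the data
    diff = arm_mean(rng, half, mu_treat) - arm_mean(rng, half, 0.0)
    z = diff / math.sqrt(2.0 / half)
    if z >= EFFICACY_Z or z <= FUTILITY_Z:
        return True, half
    return False, n_per_arm

rng = random.Random(7)
results = [run_trial(rng, 200, mu_treat=0.3) for _ in range(2000)]
early = sum(stopped for stopped, _ in results) / len(results)
avg_n = sum(n for _, n in results) / len(results)
print(f"stopped early: {early:.0%}, average per-arm enrollment: {avg_n:.0f}")
```

Under these assumed parameters a sizeable share of simulated trials stop at the interim look, so the average enrollment falls well below the fixed-design maximum; that saving is the compassion argument for adaptive designs made quantitative.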
The Design-Analyze-Communicate framework has enormous potential to deliver more knowledge—faster—to save lives. Research that makes use of DAC will result in more informative, inclusive, and cost-effective clinical studies; strengthen trust with the community; and produce research results that change health for the better.
Watch our full-length video below to understand more about DAC trials.