The COVID-19 pandemic has brought with it an unprecedented explosion of scientific research. There are currently nearly 250,000 listings in the World Health Organization’s global database of COVID-19 studies. The listings include preprints (the familiar “not yet published” studies often mentioned in news articles), published literature and reports, plus registries of clinical trials.
But many of these articles and trials have been in vain, say an international group of researchers. While recognizing the “incredible pressure” researchers, regulators and policy makers have felt during COVID-19’s quick and mysterious onslaught, they’re concerned about “an overwhelmingly large number of clinical trials … of questionable methodological quality.”
The researchers blame at least part of the wasted effort on the way many COVID-19 experiments were run. They write: “…the medical research community’s response to COVID-19 has arguably been inefficient and wasteful.”
But they say there is a way to prevent the waste – a research approach that pharmaceutical companies have long used for drug testing but that has not been widely adopted in global health research, in part because of its cost and complexity.
The researchers describe their view in the current issue of Lancet Global Health, which includes additional papers that explore the research approach.
What’s the big problem with COVID-19 trials?
First and foremost, size. In the Lancet paper, Reis, Mills and their co-authors say most of the clinical trials to date have been too small to provide conclusive evidence that a possible treatment does or does not work. Some of these authors calculated last June that more than 100 unique therapeutic agents were being investigated for COVID-19, with substantial overlap and duplicated trials. On average, there were fewer than 100 participants in the individual trials where hundreds or even thousands might be needed for valid results.
“The vast majority of trials are so small that they will never definitively give you an answer,” says Mills. Worse still, “we have lots of examples of small trials that give you an answer and it happens to be wrong because of the play of chance.”
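The "play of chance" problem Mills describes can be seen in a rough simulation. The sketch below (illustrative numbers, not from the article) assumes a treatment that truly cuts the event rate from 20% to 15%, then counts how often trials of different sizes detect that effect with a crude two-proportion z-test.

```python
# Illustrative simulation: how often does a trial of a given size detect
# a real but modest treatment effect? All rates here are hypothetical.
import random

random.seed(0)

def simulate_power(n_per_arm, p_control=0.20, p_treated=0.15, trials=2000):
    """Fraction of simulated trials whose observed difference clears
    a one-sided z-test threshold (normal approximation, ~2.5% level)."""
    detected = 0
    for _ in range(trials):
        events_c = sum(random.random() < p_control for _ in range(n_per_arm))
        events_t = sum(random.random() < p_treated for _ in range(n_per_arm))
        pc, pt = events_c / n_per_arm, events_t / n_per_arm
        pooled = (events_c + events_t) / (2 * n_per_arm)
        se = (2 * pooled * (1 - pooled) / n_per_arm) ** 0.5
        if se > 0 and (pc - pt) / se > 1.96:
            detected += 1
    return detected / trials

small = simulate_power(n_per_arm=50)    # ~100 participants total
large = simulate_power(n_per_arm=1000)  # ~2,000 participants total
print(f"50 per arm detects the effect in {small:.0%} of simulated trials")
print(f"1,000 per arm detects the effect in {large:.0%} of simulated trials")
```

With 50 participants per arm, the effect is detected only a small fraction of the time; most such trials would wrongly look "negative," and a few would overstate the benefit purely by chance.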
What’s more, the virus is a moving target. Its structure is changing over time – early data may come from one variant, later data from another. And it doesn’t sicken everyone. Reis says, “We need numbers that can address those issues. Otherwise, at the end of the day, we don’t have any results, so we are wasting time and wasting money and wasting effort for everyone.”
Is there anything about SARS-CoV-2 that made it particularly prone to wasted research efforts?
So much work needed to be done so quickly. “What is this disease? What’s causing it? How can we deal with the disease? Is there a treatment?” says Reis.
“Many people who aren’t used to doing clinical research started investigating treatments. And many of the trials were never finished.”
And then there’s the inconsistency of the virus itself. “Many early trials planned for a lot of events,” says Reis – countable events like intubations, hospitalizations or deaths. But sometimes the virus causes only mild illness or no illness at all.
Since not everyone who gets infected gets sick, and most aren’t hospitalized, a trial needs a group big enough to accumulate enough “countable” events for the results to be statistically meaningful.
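The arithmetic behind that point is simple: the rarer the event, the larger the enrollment must be. A back-of-the-envelope sketch, with hypothetical event rates and targets not taken from the article:

```python
# Illustrative arithmetic: enrollment needed so the *expected* number of
# countable events (e.g., hospitalizations) reaches a target.
import math

def participants_needed(target_events, event_rate):
    """Participants required for the expected event count to hit the target."""
    return math.ceil(target_events / event_rate)

# Suppose a trial needs roughly 200 events to draw a firm conclusion.
for rate in (0.20, 0.05, 0.01):  # hypothetical hospitalization rates
    print(f"event rate {rate:.0%}: ~{participants_needed(200, rate):,} participants")
```

At a 20% event rate, about 1,000 participants suffice; at 1%, the same trial needs about 20,000 – which is why small trials of a disease that is often mild can never accumulate enough events to answer the question.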
Other than money spent with no benefit, is there any harm in wasted research?
Yes, says Mills – a loss of hope. “We only have a limited amount of the public’s hope to resolve issues. The public is looking to science to come up with a solution to this pandemic. From the very beginning, they have been promised that there are new treatments coming and that there are vaccines coming.” The large vaccine trials were able to deliver answers quickly, but the smaller trials used to test treatments have not.
In addition, trials depend on volunteers. If many studies are ultimately deemed a waste of time, volunteers for future studies will be hard to find. “Poorly done clinical trials that were never going to be able to give a definitive answer do the volunteer population a tremendous disservice,” says Mills. Volunteers may have wanted to help answer a question, or hoped they’d be receiving treatment. Either way, they would have been better off simply seeking the best medical care they could find, says Mills.
And working on a failed trial can be disheartening for researchers. “With the thousands of clinical trials started at the beginning of COVID, a lot of people got involved. They had difficulty getting any funding. They had difficulty recruiting any patients. And in the end, they have an unusable data set from their trial. I think people – funders, politicians, scientists and staffers in rich and poor countries alike – will be a little more hesitant to initiate clinical trials going forward,” says Mills.
So what’s the solution?
For one thing, you can’t just spring into action when a new disease is recognized unless you’re ready. You need trained researchers to design and implement studies, the ability to start collecting relevant data quickly, and computers and programmers capable of handling the information. While many low- and middle-income countries have taken part in international trials of drugs and vaccines, the involvement tends to be piecemeal. Historically, once the trial is done, the financial support for researchers goes away, education opportunities dry up, and hardware like computers and smartphones is repurposed or goes out of date.
Improving the infrastructure that supports clinical trials in low- and middle-income countries basically means training and financially supporting researchers and maintaining equipment, so that when a new disease like COVID comes along, local medical workers are ready to join collaborative efforts as equal partners.
In addition, the authors of all four papers in the Lancet series are pushing for broader reliance on experiments known as adaptive clinical trials. These trials are more complex than the typical randomized, controlled clinical trials where people are neatly divided into a treatment group and placebo group and watched for a pre-set period of time. Adaptive trials are designed so the rules can be altered before the trial is finished.
For example, if midway through a trial it looks like a treatment might be effective but there are not enough people in the trial to verify it, more people can be added. Or if a treatment does appear to be working well, more participants can be allocated to that arm, strengthening the statistical evidence. And new treatments can be added in, and ineffective treatments removed.
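The adaptive idea described above can be sketched as a toy simulation. Everything here – the drug names, event rates, stage sizes and drop rule – is hypothetical, chosen only to show the mechanics of dropping weak arms at interim analyses so later participants flow to the remaining candidates.

```python
# Toy multi-arm adaptive trial: enroll in stages, and at each interim
# analysis stop enrolling on treatment arms that look no better than control.
import random

random.seed(1)

# True (unknown to the "trial") event rates; only drug_b actually helps.
TRUE_RATES = {"control": 0.20, "drug_a": 0.20, "drug_b": 0.12, "drug_c": 0.19}

def enroll(arm, n):
    """Simulate n participants on an arm; return the observed event count."""
    return sum(random.random() < TRUE_RATES[arm] for _ in range(n))

active = ["control", "drug_a", "drug_b", "drug_c"]
events = {arm: 0 for arm in active}
totals = {arm: 0 for arm in active}

for stage in range(3):  # two interim analyses between the three stages
    for arm in active:
        events[arm] += enroll(arm, 300)
        totals[arm] += 300
    control_rate = events["control"] / totals["control"]
    # Crude interim rule, for illustration only: keep a treatment arm
    # only if its observed event rate is at least a bit below control's.
    active = ["control"] + [
        arm for arm in active
        if arm != "control" and events[arm] / totals[arm] < control_rate - 0.02
    ]

print("arms still enrolling at the end:", active)
```

A real adaptive design uses pre-specified statistical stopping boundaries rather than this ad-hoc cutoff, but the structure is the same: the trial's rules change while it runs, without restarting it.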
Such trials have been conducted for several decades in high-income countries but have not been common in low- and middle-income countries. That’s changed with COVID-19 – Reis and Mills have several favorites, including the World Health Organization’s Solidarity Trial, which compares four treatments for COVID-19 in what is now more than 12,000 patients in more than 30 countries. The trial has been strong enough to enable WHO to conclude that one of the drugs under investigation, remdesivir, had no meaningful effect.
Reis and Mills are working on an adaptive trial for COVID patients in Brazil and South Africa with five drugs and two interim analyses and are hoping for results in months, rather than the years it would take with individual randomized controlled trials. And already, Reis says, Brazilian researchers looking at interventions for cardiovascular disease are now using the adaptive trial structure.
Are there any drawbacks to adaptive clinical trials?
In a commentary to the Lancet series, researchers from the Centre for Health Research and Development in India applaud the four papers, but say it’s not clear yet whether low-income countries will be able to jump into a system of large-scale adaptive trials. Funding agencies may not want to pay for long-term infrastructure in poor countries, and local ethics committees and data safety boards might not have the experience to consider the highly complex statistical analyses of adaptive trials or the changing conditions – trials that may end early or late, or have new drugs added in.
Still, Mills and Reis say, now is the time for major investments in infrastructure, and it’s already past due. “We should have had all this ready to go years ago,” Mills says, “and we need to be ready for the next challenge.”
Joanne Silberner, a former health policy correspondent for NPR, is a freelance journalist living in Seattle.