We thank AJNR for sending us the letter of Drs de Winkel and Roozenbeek, which gives us an opportunity to emphasize the most important, patient-oriented motivation behind our approach.
Care trials such as Flow Diversion in Intracranial Aneurysm Treatment (FIAT) aim to use research methods for the benefit of patients. They are done not primarily to gain new knowledge but rather because they are the best, most ethical way to introduce new devices into endovascular practice. Trying a novel intervention in practice is a research context that requires specific methods to protect the medical interests of the patients. Thus, the trial was not conducted “because previous trials lacked comparison with routine clinical practice,” as the authors of the letter have suggested, but rather, as we wrote in the introduction of the article, “to introduce to endovascular practice a promising-but-unvalidated innovation for patients with difficult intracranial aneurysms.”
If we keep the patients’ interests foremost in mind, the all-inclusive policy was necessary because we are asked to care for all patients for whom flow diversion may be a good option. It was not chosen because it was thought “convenient because there is no widely supported consensus on which patients are suitable for FD and stringent selection criteria may have limited center participation.” Our policy is that promising new (but risky) innovations should be used only in the context of a trial. The main idea is to use clinical trial methodology to protect patients from unvalidated care.1,2 Moreover, protecting patients is not a concern that applies to only a small selected group of patients; it applies to all patients considered for the innovative treatment, here flow diversion.
The purpose of randomized controlled trial (RCT) methodology is to transparently reveal to patients that we are entering unknown territory and to use human intelligence to anticipate and control the potential risks of using an innovative treatment. Risks for each patient are mitigated by offering the innovation only as a 50% chance, balanced by a 50% chance of being treated with the better-known, more standard therapy. This procedure is continued until the new treatment is shown to be better than standard therapy, at which time it can be safely used by the community. Alternatively, if the innovation is shown to be harmful, it is abandoned before too many patients have been harmed, as with the Stenting and Aggressive Medical Management for Preventing Recurrent Stroke in Intracranial Stenosis (SAMMPRIS) trial.3 This process is how trials can work in the interests of patients.
We can now contrast this approach with the aim of the authors of the letter: let the innovation be used as if it were standard care, without warning patients that they are being used as research subjects. The resulting haphazard practices will, of course, vary widely, because no one really knows what to do. However, this diversity, with large numbers, will serve “as an instrumental variable to evaluate clinical interventions on observational data.” Yet this objective is exactly what we all want to prevent! This is experimenting with novel interventions, without methods, within the context of care. If it is ever possible to learn from this approach, it is only after errors have been committed on a large scale. Haphazard clinical practices and patients should not serve as “instrumental variables” for research.
The idea that observational studies will “facilitate clinical consensus on patient eligibility for FD treatment and works [sic] as a stepping stone for future RCTs” is a naïve illusion that is contradicted by decades of clinical experience. Everybody knows (but few will admit) that observational studies only serve to evade the necessity of doing RCTs. The authors’ claim that “It is too early to perform an RCT” is a well-known trap (it is always too early until it is too late). The proposal is directly responsible for the dearth of good clinical research in our field.
By claiming that “Without a clearly defined target population, it is difficult to assess the generalizability of the results of this study,” the authors show their poor understanding of generalizability, a subject that has previously been discussed at length.4 More important, we are not targeting populations with flow diversion but are treating our patients, one at a time. The central concern of the trial is to offer a way to protect the patient who could benefit from flow diversion but who could also potentially be harmed by an unproven new technology. This is why conservative management had to be included as a potential option offered to patients: observation rather than treatment is a genuine clinical alternative that can prevent iatrogenic morbidity. While FIAT included mostly large aneurysms, flow diversion is currently much more frequently used to treat patients with small aneurysms, and the central question remains as to whether these patients should even be treated at all.5
Thus, the remarks that follow are rather bizarre, unless they betray the authors’ dedication to data for data’s sake: “Patients were allowed to be treated conservatively…. This has created an imbalance between study groups and complicates the interpretation of the results. Alternatively, it would have been more informative to limit inclusion to patients that actually received aneurysm treatment.” We strain to understand what “balance” the authors want to see between groups; treatments were randomly allocated. The results are not complicated to understand, for they are transparently shown in Fig 2. We also have a hard time understanding how the authors can believe that excluding patients could render a trial “more informative.”
The multiplicity of comparators was an essential feature for FIAT to reach its goal of protecting all patients considered for flow diversion. Multiple comparators are not uncommon in pragmatic trials.6 Consequently, various types of patients treated in various ways were all included. There is no need “to investigate the heterogeneity of treatment effect” as the authors propose, for the heterogeneity is obviously there; they criticized it earlier in the letter. This is, of course, why we provided subgroup details, as promised by protocol, regardless of the tests for interaction. While the authors rehearse the prevalent statistical dogmas regarding interaction tests and subgroup analyses, the idea of a single treatment effect does not make sense here, where treatments as varied as parent vessel occlusion and conservative management were used. However, the authors are right that the trial remains small and that a lot of work remains to be done.
The authors end their letter with their recommendations. They recommend the same old observational approach wrapped up in a fashionable new guise (comparative effectiveness research). It has been attempted without success for decades, with the consequence that we all practice risky opinion-based surgical care. If no evidence is needed to adopt a new intervention into routine practice on a large scale, why would an RCT be needed 20 years later? How much damage will have been done in the meantime? Their final concern, “to minimize research waste,” says it all. However, this is just plain wrong: Of all methods, the observational approach is the least efficient. The authors see patients and surgical practices as opportunities for observational research. Clinical research should instead be designed to minimize harm to patients, and preventing needless morbidity by using RCTs is anything but a “waste.”1
References
- 1.
- 2.
- 3.
- 4.
- 5.
- 6.