gms | German Medical Science

25. Jahrestagung des Netzwerks Evidenzbasierte Medizin e. V.

Netzwerk Evidenzbasierte Medizin e. V. (EbM-Netzwerk)

13. - 15.03.2024, Berlin

Available data on DiGAs could not rule out p-hacking or selectively missing results

Meeting Abstract


  • David Neurath - UMG, Medical Statistics, Germany
  • Tim Mathes - UMG, Medical Statistics, Germany
  • Dawid Pieper - MHB, Health Services Research, Germany

Evidenzbasierte Politik und Gesundheitsversorgung – erreichbares Ziel oder Illusion? 25. Jahrestagung des Netzwerks Evidenzbasierte Medizin. Berlin, 13.-15.03.2024. Düsseldorf: German Medical Science GMS Publishing House; 2024. Doc24ebmPS6-2-06

doi: 10.3205/24ebm118, urn:nbn:de:0183-24ebm1181

Published: March 12, 2024

© 2024 Neurath et al.
This is an open-access article distributed under the terms of the Creative Commons Attribution 4.0 License. For license details see http://creativecommons.org/licenses/by/4.0/.



Text

Background/research question: The approval of DiGAs is based on an evaluation process that assesses the safety, functionality, and quality of the applications and requires evidence of positive effects on care (so-called positive care effects).

A DiGA can either be directly and permanently included in the registry, or it can enter an initial testing phase of 12 months. For initial temporary approval, only evidence of effectiveness based on retrospective comparative studies is required. In both cases, evidence from randomized controlled trials (RCTs) is required for permanent inclusion. Notably, only one trial showing positive care effects is required for permanent approval. Preregistration in a study registry and publication of results can be avoided if there are legal requirements for the protection of trade and business secrets, personal data, or intellectual property.

Our aim was to assess whether the available results for DiGAs are suspicious of selectively missing results depending on statistical significance.

Methods: In March 2023, we conducted an inventory of the listed DiGAs and their underlying evidence of effectiveness on the webpages of the BfArM. We searched PubMed and Google Scholar for protocols and publications, and we searched study registries. In addition, we contacted the companies.

We used various methods to check whether the availability of results appears to depend on statistical significance.
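The abstract does not specify which checks were applied. As an illustration only, the following Python sketch shows one common way to probe such a dependence: a caliper-style comparison of p-values just below versus just above the 0.05 threshold. All p-values in the example are made up and are not the DiGA data.

  # Illustrative sketch only; the abstract does not name the exact tests used.
  from scipy.stats import binomtest

  # hypothetical primary-outcome p-values (placeholders, not extracted data)
  p_values = [0.001, 0.012, 0.049, 0.048, 0.03, 0.051, 0.04, 0.002, 0.046, 0.2]

  below = sum(0.04 <= p < 0.05 for p in p_values)   # just significant
  above = sum(0.05 <= p < 0.06 for p in p_values)   # just non-significant

  # Without selection on significance, results just below and just above 0.05
  # should be roughly equally frequent in a narrow window around the threshold.
  result = binomtest(below, below + above, p=0.5, alternative="greater")
  print(f"just below: {below}, just above: {above}, one-sided p = {result.pvalue:.3f}")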

Results: We found p-values for primary outcomes for 29 DiGAs and effect estimates with precision for 23 DiGAs.

The funnel plot was clearly asymmetric, suggesting that positive effects have a higher chance of being published. As we used Cohen's d, for which the effect size and its variance are not independent, we additionally plotted the effect size against the sample size and observed a similar pattern.
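A minimal Python sketch of the two plots described above (funnel plot and effect size against sample size) is given here; the effect sizes, group sizes, and output file name are hypothetical placeholders, not the DiGA data.

  import numpy as np
  import matplotlib.pyplot as plt

  d = np.array([0.55, 0.30, 0.72, 0.15, 0.90])   # hypothetical Cohen's d per study
  n1 = np.array([60, 120, 45, 200, 35])           # hypothetical group sizes
  n2 = np.array([58, 115, 47, 195, 33])

  # Approximate standard error of Cohen's d; it depends on d itself,
  # which is why the effect size is also plotted against the sample size.
  se = np.sqrt((n1 + n2) / (n1 * n2) + d**2 / (2 * (n1 + n2)))

  fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(9, 4))
  ax1.scatter(d, se)
  ax1.invert_yaxis()                              # conventional funnel plot orientation
  ax1.set(xlabel="Cohen's d", ylabel="standard error", title="Funnel plot")
  ax2.scatter(d, n1 + n2)
  ax2.set(xlabel="Cohen's d", ylabel="total sample size", title="Effect size vs. sample size")
  plt.tight_layout()
  plt.savefig("funnel_plots.png", dpi=300)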

For 20 RCTs, the risk of selective reporting could be assessed. In the RoB 2 assessment, only four studies showed a low risk of selective outcome reporting.

Conclusion: The available data on DiGAs could not rule out publication bias, p-hacking, or selective reporting. This suggests that the effectiveness of DiGAs is overestimated on average.

The evaluation process for DiGAs should follow the same methodological principles as those applied to medical devices. Our study shows that prospectively published protocols, in which outcomes, outcome measures, and analysis methods are specified, would be desirable to ensure that publication bias, p-hacking, and selective reporting can be ruled out.

Competing interests:

DN: Nothing to declare.

DP: Nothing to declare.

TM: Nothing to declare.

