Navigable Slide Index
- Introduction
- Submissions to FDA’s CDER (2000-2012)
- Reasons for failures
- Impact on clinical data
- Risk-based monitoring
- FDA guidance for industry
- Key risk indicators
- Key risk indicators (examples)
- Simple thresholds
- Confidence intervals
- Comparative monitoring
- Oversight of clinical investigations (FDA)
- EMA reflection paper
- Central statistical monitoring (a few key outcomes)
- Central statistical monitoring (many variables)
- Central statistical monitoring (all variables)
- SMART CSM engine
- SMART CSM engine (all data interrogation)
- CSM compares centers
- Statistical tests
- CSM performs many tests on all variables
- Location test
- Statistical tests (p-value)
- Statistical tests generate many p-values
- Data inconsistency score
- Bubble plot
- Center profile
- Review of statistical signals
- Types of data issues
- Fraud
- Fraud (bubble plot)
- Center with “regular” diary data entry
- Fraud revealed by diary data entry times
- Tampering
- Tampering (bubble plot)
- Tampering (identical BP & respiratory rates)
- Sloppiness
- Sloppiness (bubble plot)
- Sloppiness (too many SAEs)
- Unintentional errors
- Unintentional errors (bubble plot)
- Unintentional errors (miscalibrations)
- Conclusions
Topics Covered
- Quality by design
- Risk-based monitoring
- Key risk indicators
- Central Statistical Monitoring (CSM)
- SMART CSM engine
- Statistical tests and variables
- Types of data issues: Fraud, Tampering, Sloppiness, Errors
Talk Citation
Buyse, M. (2016, September 29). Statistical methods to detect fraud and errors in clinical trials [Video file]. In The Biomedical & Life Sciences Collection, Henry Stewart Talks. Retrieved November 21, 2024, from https://doi.org/10.69645/DDCU2600
Financial Disclosures
- Dr. Marc Buyse, Stock Shareholder (Self-managed): IDDI
Statistical methods to detect fraud and errors in clinical trials
Published on September 29, 2016
40 min
Other Talks in the Series: The Risk of Bias in Randomized Clinical Trials
Transcript
0:00
Hello, my name is Marc Buyse. I'm the Chief Scientific Officer of CluePoints, a company devoted to central statistical monitoring of clinical trials, and also a professor of biostatistics at Hasselt University in Belgium. Today I'm going to talk about methods to detect fraud and errors in clinical trials. Why is this problem important?
0:23
If we look at the number of submissions to the FDA between 2000 and 2012, there were 332 such submissions, of which 151, about half, failed at the first cycle, while the other half were approved at the first cycle. That is a high failure rate. And of those applications that failed, 80 were never approved, so about half of them were never granted approval.
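As a quick sanity check, the proportions described here follow directly from the figures quoted in the talk (332 submissions, 151 first-cycle failures, 80 never approved):

```python
# Figures quoted in the talk: FDA submissions, 2000-2012.
submissions = 332
failed_first_cycle = 151
never_approved = 80

# 151/332 is roughly 45%, i.e. "about half" failed at the first cycle.
print(f"failed first cycle: {failed_first_cycle / submissions:.0%}")
# 80/151 is roughly 53%, i.e. about half of the failures were never approved.
print(f"never approved among failures: {never_approved / failed_first_cycle:.0%}")
```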
And why do so many applications fail? The next slide shows some reasons for failure.
0:58
If we look at the reasons, study conduct, and in particular missing data or data integrity problems, caused about 7% to 9% of the applications to fail, either during the first cycle or during any cycle. And again, that is probably too high a percentage to be acceptable. If we look at the clinical data submitted to support the application, about 24% to 36% of the applications fail, either during the first cycle or at any cycle, because of inconsistent results between trials, between sites, or between endpoints. What I'm going to concentrate on today is differences in the clinical data submitted by different sites in a clinical trial. And again, this is important because it is the cause of between a quarter and a third of the rejections of new applications for approval.
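The slide index above lists a "location test" among the statistical tests a CSM engine runs to compare centers. As a minimal sketch of that idea only (not CluePoints' actual SMART engine; the site names, sample sizes, and blood-pressure values below are all hypothetical), a Welch-type t statistic can compare each site's mean against the pooled values from all other sites:

```python
import random
import statistics

def location_t(site, others):
    """Welch's two-sample t statistic comparing one site's
    values against the pooled values from all other sites."""
    m1, m2 = statistics.mean(site), statistics.mean(others)
    v1, v2 = statistics.variance(site), statistics.variance(others)
    se = (v1 / len(site) + v2 / len(others)) ** 0.5
    return (m1 - m2) / se

random.seed(1)
# Hypothetical data: four sites report systolic blood pressure (mmHg);
# site D's device is miscalibrated and reads about 10 mmHg high.
sites = {name: [random.gauss(120, 10) for _ in range(30)] for name in "ABC"}
sites["D"] = [random.gauss(130, 10) for _ in range(30)]

for name, values in sites.items():
    others = [v for other, vs in sites.items() if other != name for v in vs]
    t = location_t(values, others)
    flag = "  <-- atypical" if abs(t) > 3 else ""
    print(f"site {name}: t = {t:+.2f}{flag}")
```

In a real CSM engine, tests of this kind are run on all variables, and the many resulting p-values are aggregated into a data inconsistency score per center, as the slide index describes.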