Adaptation in Likelihood Trials
Published on November 30, 2017 · 51 min
The title of this talk is Adaptation in Likelihood Trials. I am Jeffrey Blume. I'm in the Department of Biostatistics at Vanderbilt University, and I do a lot of work with clinical trials, especially likelihood trials. Today, we're going to talk about how to use likelihood methods to measure statistical evidence in clinical trials in a way that can adapt to what we are seeing in the data: to make our design more efficient, to respond to the accumulating results, and to look at the data as we go along in a principled way. Likelihood trials are an excellent way to do this. There are lots of different ways to do adaptive trials, in either frequentist or Bayesian versions. Today, we're going to focus on something that sits right in the middle: the likelihood approach is actually a very frequentist approach, but it has lots of Bayesian flavors too. So, we'll see how we put all these things together.
I think it's good to start by understanding where these ideas came from. The idea that the likelihood function tells us something about what the data say about the hypotheses of interest was first proposed by R.A. Fisher, and advanced later in his career by him and his student, G.A. Barnard. The idea was picked up by Savage and Cornfield, who of course went on to become very famous proponents of Bayesian methods. After that, we have Ian Hacking and Alan Birnbaum. I put them together because Ian Hacking was a philosopher who coined the term "the Law of Likelihood", which I'm going to introduce and talk about; it is the axiom that likelihood methodologists follow for measuring the evidence in the data. Alan Birnbaum proved the likelihood principle, which is actually a consequence of the Law of Likelihood, but it was a big deal at the time. Anthony Edwards was also a big proponent of likelihood methods and has direct lines to Fisher. The last is the more recent work that Richard Royall has done.

If you think about everyone from Fisher up through Birnbaum and Edwards, there wasn't much discussion at all about the frequency properties of doing this. The general idea was that you could look at the likelihood function, and it was a very descriptive tool for telling you how much evidence you had in the data and which hypotheses were supported. Royall came along and said, "Well, hey, we can make this a very frequentist approach," in the sense that we can look at its operational characteristics. In fact, the operational characteristics of simply looking at the likelihood function and using it as a tool are very good. So, that's what we're going to take a look at today, because those properties are nice, and they really help us when we want to do adaptive things, like look at the data midway through the trial.
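For readers unfamiliar with the axiom just mentioned, here is a standard statement of the Law of Likelihood as it appears in Hacking's and Royall's work; the notation below is added for reference and is not from the talk itself:

```latex
% Law of Likelihood (Hacking 1965; Royall 1997):
% If hypothesis H_1 implies that the probability of the observed data x is
% P(x | H_1), and hypothesis H_2 implies that it is P(x | H_2), then the
% observation x is evidence supporting H_1 over H_2 if and only if
\[
  \frac{P(x \mid H_1)}{P(x \mid H_2)} > 1 ,
\]
% and this likelihood ratio measures the strength of that evidence.
```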
There are, of course, many people who have contributed to likelihood theory, but these are the key figures behind the idea that the likelihood function measures the evidence in the data, as opposed to making purely technical advances.