Lew Laboratory: Theoretical and Empirical Pharmacology
The main research focus of this small research group (just me!) is advanced statistical analysis of conventional biomedical experiments. Recently I have devised a novel method of sequential analysis that promises either increased experimental power or a reduction in the number of experimental replicates needed for a reliable result. Sequential analyses are fairly widely used in the analysis of clinical trial results, but my approach is novel in its simplicity and offers the substantial advantage over other approaches of being completely generalizable.
Most biomedical researchers have never even heard of sequential analyses, much less applied them, but my approach is simple to implement. It may even sound similar to the (flawed) usage of conventional analyses that some researchers use already...
Two approaches to experimental analysis
The conventional approach:
1. Choose a type 1 error rate, a, that you are comfortable with. (Almost invariably people choose 0.05.)
2. Decide on an effect size that is big enough to care about, or from preliminary experiments estimate how big an effect you expect to find.
3. Estimate from preliminary experiments the degree of variability to expect in the data. Alternatively you can guess the variability on the basis of experience, historical data or simple intuition.
4. Decide on the power you want the experiment to have to detect an effect of the size that is big enough to matter to you, or that you expect to find. Usually people choose a power of 80% or more, corresponding to a type 2 error rate, b, of 20% or less.
5. Perform a power analysis to determine the number of samples, n, that you will need to satisfy the criteria set above.
6. Run the experiments and collect the specified number of samples.
7. Analyse the results and decide whether to accept or reject the null hypothesis on the basis of the P value obtained.
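The sample-size calculation in step 5 can be sketched as follows. This is a generic normal-approximation formula for a two-sided, two-sample comparison, not a calculation from the text; the function name and the numbers in the example are illustrative assumptions.

```python
# Sketch of the conventional power analysis (step 5) using the standard
# normal-approximation formula for a two-sided, two-sample comparison.
# All numbers here are illustrative, not values from the text.
from math import ceil
from statistics import NormalDist

def samples_per_group(effect_size, alpha=0.05, power=0.80):
    """Approximate n per group to detect `effect_size` (the difference in
    means divided by the guessed standard deviation) at the given alpha
    and power."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided critical value
    z_beta = NormalDist().inv_cdf(power)           # critical value for power
    return ceil(2 * ((z_alpha + z_beta) / effect_size) ** 2)

# To detect a one-standard-deviation difference at alpha = 0.05 with 80% power:
print(samples_per_group(1.0))  # -> 16 per group (the exact t-based answer is slightly larger)
```

Note how the required n grows with the inverse square of the effect size: halving the smallest effect worth detecting roughly quadruples the number of samples.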
An alternative, sequential approach
1. Choose a type 1 error rate, a, that you are comfortable with. Almost invariably people choose 0.05.
2. Decide on the minimum number of samples that you are prepared to use, nmin, and the maximum number of samples that you are prepared to invest in, nmax.
3. Run the experiments to gather nmin samples, and analyse them to obtain P.
4. If the P value exceeds a threshold then stop the experiments and accept the null hypothesis. If the P value is less than another threshold then stop the experiments and reject the null hypothesis. Otherwise, go on to the next step.
5. Run the experiment again to obtain one more sample in each treatment group, and analyse again to obtain another P value.
6. Go back two steps, to step 4. (The protocol automatically terminates at or before nmax.)
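The sequential protocol above can be sketched as follows, assuming SciPy is available for the t-test. The function name and the placeholder thresholds p_reject and p_accept are my own inventions for illustration; in the actual method the thresholds must be calibrated so that the overall type 1 error rate stays at the nominal alpha.

```python
# A minimal sketch of the sequential protocol above, for two treatment groups.
# p_reject and p_accept are placeholder values: in the real method they are
# chosen so the overall type 1 error rate is maintained at alpha.
from scipy.stats import ttest_ind

def sequential_test(draw_a, draw_b, n_min, n_max, p_reject=0.01, p_accept=0.5):
    a = [draw_a() for _ in range(n_min)]       # steps 2-3: gather n_min samples
    b = [draw_b() for _ in range(n_min)]
    while True:
        p = ttest_ind(a, b).pvalue             # analyse to obtain P
        if p > p_accept:
            return "accept null", len(a)       # step 4: stop, accept
        if p < p_reject:
            return "reject null", len(a)       # step 4: stop, reject
        if len(a) >= n_max:
            return "accept null", len(a)       # terminates at n_max
        a.append(draw_a())                     # step 5: one more sample per group
        b.append(draw_b())

# Example: two groups whose means truly differ by two standard deviations
import random
random.seed(0)
print(sequential_test(lambda: random.gauss(0, 1), lambda: random.gauss(2, 1),
                      n_min=3, n_max=20))
```

When a large effect is present, the procedure tends to stop well before n_max, which is the source of the efficiency gain discussed below.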
If you are a practising laboratory researcher, it is quite likely that the conventional approach will sound only vaguely realistic. Rarely do experimenters follow the steps in anything like the formal manner presented, even though experimenters who use animals may describe the outcome of step 5 in their ethics applications. Most often the choices of power and effect size are not declared at the design stage, and the number of samples to be obtained is decided on the basis of habit or convention. It is also quite common for experimenters to 'check out the data' with undeclared interim analyses, and to alter their experimental design in response to the interim results, either by stopping the experiments early or by increasing the number of samples in an effort to 'chase' a significant outcome. Unfortunately, such ad hoc interventions in the experimental design inflate the type 1 error rate and can substantially reduce the quality of any scientific inference drawn from the outcomes.
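The inflation is easy to demonstrate with a short simulation. The sketch below is my own illustrative example (assuming SciPy), not a result from the text: it 'checks the data' with a t-test at the nominal 0.05 level after every added pair of samples, under a true null hypothesis, and counts how often a significant result is eventually declared.

```python
# Monte Carlo illustration of the inflated type 1 error produced by undeclared
# interim analyses: test at alpha = 0.05 after every added pair of samples,
# with both groups drawn from the same distribution (the null is true).
import random
from scipy.stats import ttest_ind

random.seed(42)
n_sims, false_positives = 2000, 0
for _ in range(n_sims):
    a = [random.gauss(0, 1) for _ in range(5)]
    b = [random.gauss(0, 1) for _ in range(5)]
    for _ in range(16):                    # 'check the data' at every n up to 21
        if ttest_ind(a, b).pvalue < 0.05:
            false_positives += 1           # declared significant and stopped early
            break
        a.append(random.gauss(0, 1))
        b.append(random.gauss(0, 1))
print(f"observed type 1 error rate: {false_positives / n_sims:.3f}")
# far above the nominal 0.05, even though each individual test was at 0.05
```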
In contrast to the above, the steps in the alternative approach may sound surprisingly familiar despite the fact that the method is relatively novel. The iterative cycle of adding new data and reanalysing is quite similar to the informal process described in the paragraph above, but it does not render the outcomes invalid by inflating the type 1 error rate. The sequential analyses are a feature of the experimental design rather than ad hoc interventions, because the P thresholds used in step 4 act as pre-determined stopping rules, chosen to ensure that the type 1 error rate is maintained at the nominal level. The number of samples that will be needed is not known exactly in advance, but it will always lie between nmin and nmax.
Thus we have two contrasting approaches to experimental design: one that is conventional and valid, but tends in practice to suffer from inappropriate on-the-fly modifications that rob it of validity, and one in which on-the-fly adjustment of the number of samples is an essential feature. Which should laboratory researchers use? Should they eschew interim analyses and apply the conventional approach more rigidly, or should they switch to the novel sequential analysis approach? The answers to those questions depend on the relative power and efficiency of the two methods. Investigations using Monte Carlo simulations have so far shown that the novel approach offers improvements in analytical efficiency in the range of 10 to 40%, so it is possible that it will be widely adopted before the end of this century.
Lew M. When there should be no difference – how to fail to reject the null hypothesis. Trends in Pharmacological Sciences 2006; 27: 274–278.
Software to perform the equivalence test described in the above paper can be downloaded from http://www.pharmacology.unimelb.edu.au/statboss.html
A commissioned series of statistical papers is being prepared for the British Journal of Pharmacology. As at March 2007, the first two of the series have been provisionally accepted for publication.
For further information about this research, please contact Dr Michael Lew