Designing and analysing powerful experiments: practical tips for applied researchers
David McKenzie
Abstract
This paper offers practical advice on how to improve statistical power in randomised experiments through choices and actions researchers can take at the design, implementation and analysis stages. At the design stage, the choice of estimand, choice of treatment, and decisions that affect the residual variance and intra-cluster correlation can all affect power for a given sample size. At the implementation stage, researchers can boost power through increasing compliance with treatment, reducing attrition and improving outcome measurement. At the analysis stage, power can be increased through using different test statistics or estimands, through the choice of control variables, and through incorporating informative priors in a Bayesian analysis. A key message is that it does not make sense to talk of ‘the’ power of an experiment. A study can be well powered for one outcome or estimand but not others, and a fixed sample size can yield very different levels of power depending on researcher decisions.
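To make the abstract's key message concrete, the sketch below computes power under the standard normal-approximation formula for a two-arm trial, with a design effect of 1 + (m - 1)ρ for clustering and a residual variance of σ²(1 - R²) when baseline covariates absorb a share R² of outcome variance. This is a generic illustration of those textbook formulas, not code from the paper; the function name `power_two_arm` and all parameter values (effect size, cluster size, ICC, R²) are hypothetical.

```python
import math

from scipy.stats import norm


def power_two_arm(n_per_arm, effect, sigma, icc=0.0, cluster_size=1,
                  r_squared=0.0, alpha=0.05):
    """Normal-approximation power for a two-arm randomised trial.

    Clustering inflates the variance by the design effect
    1 + (cluster_size - 1) * icc; baseline covariates explaining a
    share r_squared of outcome variance shrink the residual variance
    to sigma**2 * (1 - r_squared).
    """
    residual_var = sigma ** 2 * (1.0 - r_squared)
    deff = 1.0 + (cluster_size - 1.0) * icc
    se = math.sqrt(2.0 * residual_var * deff / n_per_arm)
    z_crit = norm.ppf(1.0 - alpha / 2.0)
    return norm.cdf(abs(effect) / se - z_crit)


# Identical sample size (500 per arm, effect of 0.2 SD) under three designs:
print(power_two_arm(500, effect=0.2, sigma=1.0))   # individual randomisation: ~0.89
print(power_two_arm(500, effect=0.2, sigma=1.0,
                    icc=0.05, cluster_size=20))    # clustered design: ~0.62
print(power_two_arm(500, effect=0.2, sigma=1.0,
                    r_squared=0.5))                # strong baseline controls: ~0.99
```

With the sample size held fixed at 500 per arm, power ranges from roughly 0.62 to 0.99 depending on whether treatment is clustered and whether baseline covariates soak up residual variance, which is the sense in which there is no single ‘the’ power of an experiment.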