
UCLA Electronic Theses and Dissertations

A comparison of tests for online experiments

Abstract

Online experiments have grown in popularity, but the techniques used to evaluate them have not adapted to the continuous stream of results they produce. The goal of this review is to analyze the limitations of the tests currently applied to online experiments and to evaluate newer techniques better suited to continuous assessment. Tests run on simulated experiments showed that peeking at interim results can produce three times as many false positives when using t-tests. The conservative nature of multiple-comparison adjustments led to rejecting over 50% of genuinely winning ideas. Mixture Sequential Probability Ratio Tests (mSPRT) produced few Type-I errors even when results were monitored continuously, about one-sixth as many as the continuously peeked t-test. mSPRT has known downsides, including implementation complexity and computational cost, but these are likely smaller than the value created by a more reliable analysis technique.
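To make the peeking problem concrete, the sketch below (illustrative, not from the thesis; the sample sizes, peek interval, and alpha are arbitrary choices) simulates A/A experiments, where the true effect is zero, and compares a single fixed-horizon t-test against a rule that stops at the first significant interim t-test. Repeated looks inflate the false-positive rate well above the nominal 5%.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

n_sims = 1000        # simulated A/A experiments (true effect is zero)
n_max = 2000         # planned observations per arm
check_every = 100    # how often we "peek" at a running t-test
alpha = 0.05

fixed_fp = 0     # false positives for a single end-of-experiment test
peeking_fp = 0   # false positives when stopping at the first significant peek
for _ in range(n_sims):
    a = rng.normal(0.0, 1.0, n_max)
    b = rng.normal(0.0, 1.0, n_max)
    # Fixed horizon: one t-test at the planned sample size.
    _, p = stats.ttest_ind(a, b)
    fixed_fp += p < alpha
    # Peeking: test the accumulating data and stop as soon as p < alpha.
    for n in range(check_every, n_max + 1, check_every):
        _, p = stats.ttest_ind(a[:n], b[:n])
        if p < alpha:
            peeking_fp += 1
            break

print(f"fixed-horizon false-positive rate: {fixed_fp / n_sims:.3f}")   # ~ alpha
print(f"peeking false-positive rate:       {peeking_fp / n_sims:.3f}")  # well above alpha
```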
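For comparison, here is a minimal sketch of an mSPRT monitor, assuming the standard normal-mixture likelihood ratio (Robbins' mixture, as popularized for A/B testing by Johari et al.) with a N(0, tau_sq) mixing distribution and the always-valid rejection rule Lambda_n >= 1/alpha; tau_sq, sigma_sq, and the sample sizes are illustrative tuning values, not taken from the thesis. Under the same continuous monitoring, the Type-I error stays at or below alpha.

```python
import numpy as np

def msprt_lambda(n, xbar, sigma_sq, tau_sq):
    """Mixture likelihood ratio Lambda_n for H0: theta = 0, given n
    observations with running mean xbar, known variance sigma_sq, and a
    N(0, tau_sq) mixing distribution over the alternative."""
    v = sigma_sq + n * tau_sq
    return np.sqrt(sigma_sq / v) * np.exp(
        n ** 2 * tau_sq * xbar ** 2 / (2.0 * sigma_sq * v)
    )

rng = np.random.default_rng(1)
alpha = 0.05
tau_sq = 0.1      # mixing variance: a tuning choice affecting power, not validity
sigma_sq = 2.0    # variance of a per-pair difference b_i - a_i with unit-variance arms
n_sims, n_max = 1000, 2000

false_positives = 0
for _ in range(n_sims):
    # A/A data: both arms identical, so any rejection is a Type-I error.
    diffs = rng.normal(0.0, 1.0, n_max) - rng.normal(0.0, 1.0, n_max)
    n = np.arange(1, n_max + 1)
    lam = msprt_lambda(n, np.cumsum(diffs) / n, sigma_sq, tau_sq)
    # Always-valid rule: reject the first time Lambda_n >= 1/alpha.
    if np.any(lam >= 1.0 / alpha):
        false_positives += 1

print(f"mSPRT Type-I rate under continuous monitoring: {false_positives / n_sims:.3f}")
```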
