Globally, diarrhea is one of the leading causes of morbidity and mortality. Populations living in developing countries disproportionately experience the burden of diarrhea; in these areas, contaminated drinking water remains one of the primary environmental pathways for the transmission of diarrheagenic pathogens.
Centrally treated "piped" water is largely recognized as the best method to provide human populations with consistent access to safe drinking water. Piped water requires little additional input from end users, and therefore has few additional barriers to the adoption and sustained consumption of water that is free of microbial contamination. In areas where the construction of piped water systems is not feasible due to capital or maintenance costs, or dispersed rural populations, the promotion of household water treatment and safe storage (HWTS) strategies has been recommended as the best method to deliver safe water to households that lack affordable alternatives.
Several HWTS strategies have been tested in field efficacy trials, including treatment by chlorine, filtration, boiling, and solar disinfection. While various meta-analyses of these trials have suggested that HWTS strategies improve drinking water quality and reduce diarrhea, there is some concern that 1) the design and implementation of these trials may have biased results, and, relatedly, 2) these strategies are not being used by target populations outside the context of these intensive trials. Specifically, each of the HWTS strategies listed above comes with barriers to use: chlorine changes the taste and odor of the water and is not effective against spore-forming pathogens (e.g., Cryptosporidium); filters are slow and have limited effectiveness against viruses; boiling changes the temperature of water and requires significant time and fuel inputs; and solar disinfection requires significant time inputs from end users to effectively produce safe water. In efficacy trials with intensive implementation activities and frequent follow-up visits, there is concern that the evaluation itself affects participant usage of HWTS technologies, thereby limiting the generalizability of results. In order to develop and evaluate strategies that will effectively provide safe drinking water in regional contexts, it will be necessary to consider additional HWTS strategies and to use evaluation designs that yield results that are generalizable beyond the research context of a specific trial.
In this dissertation, I present three chapters that further the evidence regarding HWTS strategies, trial methodologies, and outcome measures that are available to the HWTS sector. In Chapter 1, I report on the evaluation of a safe water program that promotes a novel ultraviolet-based HWTS strategy to rural Mexican households. Ultraviolet disinfection, while more expensive than the other HWTS technologies mentioned above, and requiring a nominal source of electricity, has the potential to reduce the barriers to adoption and sustained use: it is fast, it inactivates all classes of pathogens (bacteria, viruses, parasites), and it does not change the taste or odor of the water. The study presented in this chapter represents the first health evaluation of an ultraviolet-based HWTS strategy in a low- or middle-income country. The study design used for this evaluation, a randomized stepped wedge design, is also detailed; its advantages with respect to producing generalizable results are discussed. Specifically, the design allows an HWTS intervention to be rolled out as it would have been under non-trial conditions, while maintaining the rigor of a randomized experimental design. The intention-to-treat (ITT) results from the trial show that while there were large improvements in drinking water quality (a 19 percentage point reduction [95% CI: −27%, −14%] in drinking water contamination, measured as the most probable number of Escherichia coli/100 ml [MPN EC]), there was not a significant reduction in the seven-day prevalence of self-reported diarrhea symptoms [Relative Risk (RR): 0.80 (95% CI: 0.51, 1.27)]. It is likely that a lower than expected prevalence in the control group reduced the power of the study to detect an effect of the intervention on diarrhea; however, the large impact of the intervention on drinking water contamination justifies expansion of the intervention to new regions.
It is further concluded that future studies of potential health effects are warranted, but that they should enroll larger populations or be conducted in areas with a higher burden of diarrheal illness.
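The logic of the stepped wedge design described above can be sketched in code. The schedule, cluster counts, baseline risk, and effect size below are all illustrative assumptions, not parameters or data from the trial; the crude treated-versus-control contrast at the end also ignores the secular trends and clustering that a real stepped wedge analysis must adjust for.

```python
import random

random.seed(1)

# Hypothetical stepped wedge schedule: 6 clusters observed over 4 periods.
# Each cluster crosses from control to intervention at a randomized step,
# so every cluster eventually receives the intervention.
n_clusters, n_periods = 6, 4
crossover = [1, 1, 2, 2, 3, 3]  # period at which each cluster starts treatment

def treated(cluster, period):
    """True once the cluster has crossed over to the intervention."""
    return period >= crossover[cluster]

def simulate_outcomes(n_households=200):
    # Assumed baseline diarrhea risk of 0.10 and an assumed true risk
    # ratio of 0.8 under the intervention (illustrative values only).
    events = {"treated": [0, 0], "control": [0, 0]}  # [cases, observations]
    for c in range(n_clusters):
        for p in range(n_periods):
            arm = "treated" if treated(c, p) else "control"
            risk = 0.10 * (0.8 if arm == "treated" else 1.0)
            for _ in range(n_households):
                events[arm][1] += 1
                if random.random() < risk:
                    events[arm][0] += 1
    return events

ev = simulate_outcomes()
rr = (ev["treated"][0] / ev["treated"][1]) / (ev["control"][0] / ev["control"][1])
print(f"crude ITT risk ratio: {rr:.2f}")
```

The key feature, visible in the schedule, is that the design retains randomized controlled comparisons in every period while still delivering the intervention to all clusters by the end of the trial.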
In Chapter 2, the effect of the intervention among households that "complied" with the messages in the community intervention is explored, under the hypothesis that these households would have experienced greater reductions in drinking water contamination and diarrhea compared to the total population. The complier average causal effect (CACE) has been described for parallel arm trials, and is considered an estimate of treatment efficacy in effectiveness trials (i.e., a complement to the ITT results reported in Chapter 1). However, methods to estimate CACE in parallel arm trials require untestable assumptions. In this chapter a novel method to estimate the CACE parameter in a randomized stepped wedge trial is described. The primary insight from this chapter is that this method requires fewer assumptions than are necessary to estimate CACE in a parallel arm trial. To illustrate these methods, a definition of compliance is adopted, and effects of the intervention on household water quality and diarrhea are estimated among households that meet this definition. These stepped wedge CACE estimates are compared to ITT results from the original trial and to CACE estimates from an instrumental variable (IV) estimator (a common method to estimate CACE in parallel arm trials). The stepped wedge CACE estimates indicated modestly larger reductions in drinking water contamination and diarrhea among complier households, compared to the ITT results from Chapter 1. In contrast, results from the IV estimator suggest a doubling in effect compared to the ITT estimates; however, it is shown that violations of the assumptions necessary for IV estimation likely biased these results.
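For readers unfamiliar with the IV approach referenced above, the standard parallel-arm estimator (the Wald estimator) simply rescales the ITT effect by the excess compliance induced by random assignment. The sketch below uses entirely hypothetical numbers, not results from this trial, and is valid only under the usual IV assumptions (exclusion restriction, monotonicity) whose violation is precisely what is shown to bias the estimates in this chapter.

```python
def wald_cace(itt_effect, compliance_treat, compliance_control):
    """Wald/IV estimator of CACE in a parallel arm trial: the ITT effect
    divided by the difference in compliance between the two arms."""
    return itt_effect / (compliance_treat - compliance_control)

# Hypothetical inputs: assignment reduced diarrhea risk by 2 percentage
# points (ITT), 60% of intervention households complied, and 10% of
# control households accessed the intervention anyway.
itt = -0.02
cace = wald_cace(itt, 0.60, 0.10)
print(f"CACE: {cace:.3f}")  # prints "CACE: -0.040"
```

Note how the estimator inflates the ITT effect (here from −0.02 to −0.04); when its assumptions fail, that inflation can overstate the effect among compliers, consistent with the doubling observed from the IV estimator in this chapter.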
In the third and final chapter, the significance of measuring reductions in MPN EC (as a measure of water quality) in HWTS trials is considered. While most species of EC are not themselves pathogenic, they are excreted in large concentrations in human fecal material. Escherichia coli has therefore been a target indicator organism to test for the presence of fecal material and possible pathogen contamination in drinking water for over a century. For decades, in the absence of methods to easily and cheaply test for EC in drinking water, there has been a reliance on surrogate indicators; the primary surrogate has been thermotolerant ("fecal") coliforms (FC). This chapter reports the results of a systematic review and meta-analysis that evaluates available evidence for a link between diarrhea and the presence of EC and FC in drinking water. The findings suggest that there is evidence to link diarrhea to EC (pooled RR: 1.54 [95% CI: 1.37, 1.74]) but not FC (pooled RR: 1.07 [95% CI: 0.79, 1.45]). It is concluded that this evidence supports the continued use of EC as an indicator organism to measure water quality in HWTS trials, and in other field and research applications where an association with diarrhea is important; there is no evidence, however, to support the continued use of FC in these applications. The results from this review also serve to strengthen the findings and conclusions reported in Chapter 1.
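Pooled risk ratios like those reported above are conventionally obtained by inverse-variance weighting of log risk ratios across studies. The sketch below shows the fixed-effect version of that calculation; the study-level RRs and standard errors are hypothetical placeholders, not the studies included in the review, and a full meta-analysis would also assess between-study heterogeneity before pooling.

```python
import math

# Fixed-effect inverse-variance pooling of log risk ratios.
# Each tuple is (risk ratio, standard error of the log RR); the values
# are illustrative, not data from the systematic review.
studies = [
    (1.4, 0.15),
    (1.8, 0.25),
    (1.5, 0.20),
]

weights = [1 / se**2 for _, se in studies]
log_pooled = sum(w * math.log(rr) for (rr, _), w in zip(studies, weights)) / sum(weights)
se_pooled = math.sqrt(1 / sum(weights))
lo, hi = (math.exp(log_pooled + z * se_pooled) for z in (-1.96, 1.96))
print(f"pooled RR: {math.exp(log_pooled):.2f} (95% CI: {lo:.2f}, {hi:.2f})")
```

Pooling on the log scale keeps the ratio estimates approximately normal, which is why the confidence interval is built around the weighted mean log RR and then exponentiated, mirroring how the pooled RRs and CIs in this chapter are presented.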