- Smith, Michael;
- Koren, Victor;
- Zhang, Ziya;
- Moreda, Fekadu;
- Cui, Zhengtao;
- Cosgrove, Brian;
- Mizukami, Naoki;
- Kitzmiller, David;
- Ding, Feng;
- Reed, Seann;
- Anderson, Eric;
- Schaake, John;
- Zhang, Yu;
- Andréassian, Vazken;
- Perrin, Charles;
- Coron, Laurent;
- Valéry, Audrey;
- Khakbaz, Behnaz;
- Sorooshian, Soroosh;
- Behrangi, Ali;
- Imam, Bisher;
- Hsu, Kuo-Lin;
- Todini, Ezio;
- Coccia, Gabriele;
- Mazzetti, Cinzia;
- Andres, Enrique Ortiz;
- Francés, Félix;
- Orozco, Ismael;
- Hartman, Robert;
- Henkel, Arthur;
- Fickenscher, Peter;
- Staggs, Scott
The Office of Hydrologic Development (OHD) of the U.S. National Oceanic and Atmospheric Administration's (NOAA) National Weather Service (NWS) conducted two phases of the Distributed Model Intercomparison Project (DMIP) as cost-effective studies to guide the transition to spatially distributed hydrologic modeling for operational forecasting at NWS River Forecast Centers (RFCs). Phase 2 of the project (DMIP 2) was formulated primarily as a mechanism to help guide the U.S. NWS as it expands its use of spatially distributed watershed models for operational river, flash flood, and water resources forecasting. The overall purpose of DMIP 2 was to test many distributed models forced by high-quality operational data with a view towards meeting NWS operational forecasting needs. At the same time, DMIP 2 was formulated as an experiment that could be leveraged by the broader scientific community as a platform for the testing, evaluation, and improvement of distributed models.

DMIP 2 contained experiments in two regions: first, in the DMIP 1 Oklahoma basins, and second, in two basins in the Sierra Nevada in the western USA. This paper presents the overview and results of the DMIP 2 experiments conducted for the two Sierra Nevada basins. Simulations from five independent groups from France, Italy, Spain, and the USA were analyzed. Experiments included comparison of lumped and distributed model streamflow simulations generated with uncalibrated and calibrated parameters, and simulations of snow water equivalent (SWE) at interior locations. As in other phases of DMIP, the participant simulations were evaluated against observed hourly streamflow and SWE data and compared with simulations provided by the NWS operational lumped model. A wide range of statistical measures were used to evaluate model performance on a run-period and event basis.
Differences between uncalibrated and calibrated model simulations are assessed.

Results indicate that in the two study basins, no single model performed best in all cases. In addition, no distributed model was able to consistently outperform the lumped model benchmark. However, one or more distributed models were able to outperform the lumped model benchmark in many of the analyses. Several calibrated distributed models achieved higher correlation and lower bias than the calibrated lumped benchmark in the calibration, validation, and combined periods. In an evaluation of a number of specific precipitation-runoff events, one calibrated distributed model performed at a level equal to or better than the calibrated lumped model benchmark in terms of event-averaged peak and runoff volume error. By contrast, three distributed models were able to provide improved peak timing compared to the lumped benchmark. Taken together, calibrated distributed models provided specific improvements over the lumped benchmark in 24% of the model-basin pairs for peak flow, 12% of the model-basin pairs for event runoff volume, and 41% of the model-basin pairs for peak timing. Model calibration improved the performance statistics of nearly all models (lumped and distributed). Analysis of several precipitation-runoff events indicates that distributed models may more accurately model the dynamics of the rain/snow line (and the resulting hydrologic conditions) compared to the lumped benchmark model. Analysis of SWE simulations shows that better results were achieved at higher-elevation observation sites.

Although the performance of distributed models was mixed compared to the lumped benchmark, all calibrated models performed well compared to results in the DMIP 2 Oklahoma basins in terms of run-period correlation and %Bias, and event-averaged peak and runoff error.
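The run-period and event statistics named above (percent bias, correlation, event-averaged peak error, and peak timing error) follow standard definitions. The sketch below illustrates those definitions for hourly streamflow series; the function names are illustrative and are not drawn from the DMIP 2 evaluation code.

```python
import numpy as np

def percent_bias(sim, obs):
    """Run-period volume bias of simulated vs. observed flow, in percent."""
    sim, obs = np.asarray(sim, float), np.asarray(obs, float)
    return 100.0 * (sim.sum() - obs.sum()) / obs.sum()

def correlation(sim, obs):
    """Pearson correlation coefficient between simulated and observed flow."""
    return float(np.corrcoef(np.asarray(sim, float), np.asarray(obs, float))[0, 1])

def event_peak_error(sim, obs):
    """Percent error in the peak discharge of a single runoff event."""
    sim, obs = np.asarray(sim, float), np.asarray(obs, float)
    return 100.0 * (sim.max() - obs.max()) / obs.max()

def peak_timing_error(sim, obs, dt_hours=1.0):
    """Signed difference, in hours, between simulated and observed time of peak."""
    return float(np.argmax(sim) - np.argmax(obs)) * dt_hours
```

Event-averaged statistics, as reported in the abstract, would then be means of the per-event values across a set of selected precipitation-runoff events.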
This finding is noteworthy considering that these Sierra Nevada basins present complications such as orographically enhanced precipitation, snow accumulation and melt, rain-on-snow events, and highly variable topography. Considered alongside the findings from previous DMIP experiments, these results make clear that at this point in their evolution, distributed models have the potential to provide valuable information on specific flood events that could complement lumped model simulations. © 2013.