Abstract
Widely used hydrological models diverge far more in their simulation of snow ablation than in their predictions of snow accumulation, even among otherwise "good" models. Here, we explore differences in the performance of the Variable Infiltration Capacity model (VIC), the Noah land surface model with multiparameterization options (Noah-MP), the Catchment model, and the third-generation Simplified Simple Biosphere model (SiB3) in their ability to reproduce observed snow water equivalent (SWE) during the ablation season at 10 Snowpack Telemetry (SNOTEL) stations over 1992–2012. During the ablation period, net radiation generally has stronger correlations with observed melt rates than does air temperature. Average ablation rates tend to be higher (in both model predictions and observations) at stations with a large accumulation of SWE. The differences in the dates of last snow between models and observations range from several days to approximately a month (on average 5.1 days earlier than in observations). When the surface cover is changed from observed vegetation to bare soil in all of the models, only VIC's melt rate increases. The differences in the models' responses to canopy removal are directly related to snowpack energy inputs, which are in turn affected by the models' different algorithms for surface albedo and energy allocation. We also find that melt rates become higher in VIC and lower in Noah-MP if the shrub/grass cover present at the observation sites is switched to trees.