The use of algorithmic decision-making is steadily increasing, but people may have misgivings about machines making moral decisions. In two experiments (N = 551), we examined whether people expect machines to weigh information differently from humans when making moral decisions. We found that people expected a computer judge to be more likely to convict than a human judge, and expected both judge types to be more likely to convict on the basis of individuating information than of base-rate information. Although our main hypotheses were not supported, these findings suggest that people may anticipate that machines will commit to decisions on less evidence than a human would require, offering a possible explanation for why people are averse to machines making moral decisions.