Dual-process theories (DPTs) of cognition posit that performance differences in reasoning stem from an interplay
between heuristic-based processing (System 1) and more controlled, rule-based processing (System 2). Emerging evidence
suggests that solving classic base-rate problems via Bayesian inference depends on adequately inhibiting the prepotent
representations elicited by System 1 (De Neys, 2014). We propose that DPTs may benefit probabilistic models of reasoning by
providing a framework on which to map individual difference predictions (e.g., how inhibitory capacity, prior knowledge, and
motivation influence adherence to probabilistic rules). We present a dual-process computational model that implements various
normative (i.e., Bayesian) and non-normative rules, which in turn fire probabilistically according to a functional relationship
between the relative (de)activation of each system and variability in agents’ inhibitory capacity and motivation. Simulation results
map onto behavioral data and replicate a variety of base-rate performance patterns, including base-rate neglect.
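For concreteness, the sketch below illustrates the kind of agent described here: each agent has an inhibitory-capacity and a motivation parameter, and the probability that the normative (Bayesian) rule rather than a base-rate-neglecting heuristic fires is a function of the relative (de)activation of the two systems. The function names, the logistic linking function, and the example problem values are illustrative assumptions for exposition, not the implementation reported in the paper.

```python
# Minimal sketch of a dual-process agent on a base-rate problem.
# All parameter names and the logistic link are illustrative assumptions.
import math
import random


def bayesian_posterior(base_rate, hit_rate, false_alarm_rate):
    """Normative rule: P(category | description) via Bayes' theorem."""
    numerator = base_rate * hit_rate
    return numerator / (numerator + (1 - base_rate) * false_alarm_rate)


def heuristic_response(hit_rate):
    """Non-normative rule: respond from the description alone, neglecting the base rate."""
    return hit_rate


def p_fire_bayesian(inhibition, motivation, s1_activation=1.0, s2_activation=1.0):
    """Probability that the System 2 (Bayesian) rule fires, as an assumed logistic
    function of the systems' relative (de)activation weighted by the agent's
    inhibitory capacity and motivation."""
    drive = inhibition * s2_activation + motivation - s1_activation
    return 1.0 / (1.0 + math.exp(-drive))


def simulate(n_agents=1000, base_rate=0.03, hit_rate=0.80, false_alarm=0.10, seed=1):
    """Simulate a population of agents answering one base-rate problem."""
    random.seed(seed)
    responses = []
    for _ in range(n_agents):
        inhibition = random.gauss(0.0, 1.0)   # individual difference: inhibitory capacity
        motivation = random.gauss(0.0, 1.0)   # individual difference: motivation
        if random.random() < p_fire_bayesian(inhibition, motivation):
            responses.append(bayesian_posterior(base_rate, hit_rate, false_alarm))
        else:
            responses.append(heuristic_response(hit_rate))  # base-rate neglect
    return responses


if __name__ == "__main__":
    answers = simulate()
    normative = bayesian_posterior(0.03, 0.80, 0.10)
    neglect = sum(1 for r in answers if abs(r - 0.80) < 1e-9) / len(answers)
    print(f"Normative posterior: {normative:.3f}")
    print(f"Proportion of agents showing base-rate neglect: {neglect:.2f}")
```

Under this illustrative parameterization, agents with higher inhibitory capacity or motivation are more likely to fire the Bayesian rule, so the population splits into normative responders and base-rate neglecters, the qualitative pattern the abstract describes.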