We propose a computational model to account for the regularization behaviour that characterizes language learning and that has emerged from experimental studies, specifically from concurrent multiple frequency learning tasks (Ferdinand, 2015). These experiments show that learners regularize the input frequencies they observe, suggesting that domain-general factors might underlie regularization behaviour. Standard models have failed to capture this pattern, so we explore the consequences of adding a production bias that follows the learning stage in a probabilistic model of frequency learning. We simulate and fit to experimental data a beta-binomial Bayesian sampler model, which allows an explicit quantification of both the learning bias and the production bias. Our results reveal that adding a production component to the model leads to a better fit to the data. Given these results, we hypothesize that linguistic regularization may result from domain-general constraints on learning combined with biases in production, which need not be considered innate.