Elicited imitation is a widely used method for testing a child's knowledge of a language for scientific or clinical purposes. A child hears an utterance and is asked to repeat what they have heard. While it is assumed that their fluency or speed in doing so is contingent on their linguistic competence, little is known about the cognitive mechanisms and representations involved. To explore this, we train an encoder-decoder model, consisting of recurrent neural networks, to encode and reproduce a corpus of child-directed speech, and then test its performance on the experimental task of Bannard and Matthews (2008). In that study, pre-school children were asked to repeat high- and low-frequency four-word sequences in which the first three words were identical (e.g., "sit in your chair" and "sit in your truck") and the final words and bigrams were closely matched for frequency. We find that, like those children, our model makes more errors on the initial three words when they are part of a low-frequency sequence than when they are part of a high-frequency one, even though the words being repeated are identical. We explore why this might be and pinpoint some possible similarities between the model and child language processing.
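As a minimal sketch of the kind of architecture the abstract describes, the following trains a GRU-based encoder-decoder to reproduce utterances word by word, autoencoder-style. The choice of GRUs, the hyperparameters, and the toy "child-directed" utterances are illustrative assumptions, not the authors' actual corpus or settings.

```python
# Sketch: a recurrent encoder reads an utterance into a fixed hidden
# state; a recurrent decoder is trained to reproduce the utterance.
# All names and hyperparameters here are hypothetical.
import torch
import torch.nn as nn

utterances = [
    "sit in your chair".split(),
    "sit in your truck".split(),
    "put it on the table".split(),
]
vocab = {"<sos>": 0, "<eos>": 1}
for utt in utterances:
    for w in utt:
        vocab.setdefault(w, len(vocab))

EMB, HID = 32, 64  # illustrative embedding / hidden sizes

class Seq2SeqAutoencoder(nn.Module):
    def __init__(self, vocab_size):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, EMB)
        self.encoder = nn.GRU(EMB, HID, batch_first=True)
        self.decoder = nn.GRU(EMB, HID, batch_first=True)
        self.out = nn.Linear(HID, vocab_size)

    def forward(self, src, tgt_in):
        # Encode the whole utterance into the final hidden state,
        # then decode it back one word at a time (teacher forcing).
        _, h = self.encoder(self.embed(src))
        dec_out, _ = self.decoder(self.embed(tgt_in), h)
        return self.out(dec_out)

model = Seq2SeqAutoencoder(len(vocab))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(200):
    for utt in utterances:
        ids = [vocab[w] for w in utt]
        src = torch.tensor([ids])                        # heard utterance
        tgt_in = torch.tensor([[vocab["<sos>"]] + ids])  # decoder input
        tgt_out = torch.tensor([ids + [vocab["<eos>"]]]) # repetition target
        logits = model(src, tgt_in)
        loss = loss_fn(logits.squeeze(0), tgt_out.squeeze(0))
        opt.zero_grad()
        loss.backward()
        opt.step()
```

Under this setup, repetition errors of the sort the abstract reports could be scored by decoding greedily from the encoded utterance and comparing the output to the target word by word; frequency effects would then show up as differing error rates on identical prefixes embedded in high- versus low-frequency sequences.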