Categorical speech content can often be perceived directly from continuous auditory cues in the speech stream, but human-level performance on speech recognition tasks requires compensation for contextual variables like speaker identity. Regression modeling by McMurray and Jongman (2011) has suggested that, for many fricative phonemes, a compensation scheme can substantially increase categorization accuracy beyond even the information from 24 un-compensated raw speech cues. Here, we simulate the same dataset, instead using a neurally rather than abstractly implemented model: a hybrid of a dynamic neural field model and a connectionist network. Our model achieved slightly lower accuracy than McMurray and Jongman’s but showed similar accuracy patterns across most fricatives. Our results were also comparable to those of more recent models that are less neurally instantiated but fit human accuracy somewhat more closely. An even less abstracted model is an immediate future goal, as is expanding the present model to additional sensory modalities and constancy/compensation effects.
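
As a minimal sketch of the dynamic neural field formalism we build on (a generic Amari-style field equation, not the specific dynamics or parameter settings of the present hybrid model), activation $u(x,t)$ over a feature dimension $x$ evolves as

\[ \tau \frac{\partial u(x,t)}{\partial t} = -u(x,t) + h + s(x,t) + \int w(x - x')\, f(u(x',t))\, dx', \]

where $h$ is the resting level, $s(x,t)$ is external input, $w$ is the lateral interaction kernel, and $f$ is the output nonlinearity.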