eScholarship
Open Access Publications from the University of California

Processing Time-warped Sequences Using Recurrent Neural Networks: Modelling Rate-dependent Factors in Speech Perception

Abstract

This paper presents a connectionist approach to the processing of time-warped sequences and attempts to account for some aspects of rate-dependent processing in speech perception. The proposed model uses recurrent networks, which take input one element at a time and can pick up long-distance dependencies. Three recurrent network architectures are tested and compared in four computational experiments designed to assess how well time-warped sequences can be processed. The experiments involve two sets of stimuli, some of which reflect aspects of rate-dependent processing in speech: one in which the sequences are distinguished by the sequential ordering of their constituent elements, and another in which the sequences share a similar arrangement of elements but differ in the duration of some of those elements. The results establish certain conditions on rate-dependent processes in a network of this type vis-à-vis the obligatory use of rate information within the syllable, and shed some light on the basic computer science of recurrent neural networks.
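The distinction the abstract draws can be made concrete. A minimal sketch (not the paper's actual model, and with hypothetical weights and sizes): an Elman-style recurrent network consumes one one-hot element per time step and carries context in a hidden state, so both the ordering of elements and the duration of an element (how many steps it is held) influence the final state.

```python
import numpy as np

# Illustrative sketch only, not the architecture from the paper.
rng = np.random.default_rng(0)
n_in, n_hid = 3, 8                       # 3 symbol types, 8 hidden units (arbitrary)
W_in = rng.normal(0, 0.5, (n_hid, n_in))
W_rec = rng.normal(0, 0.5, (n_hid, n_hid))

def run(seq):
    """Feed a sequence of symbol indices one at a time; return final hidden state."""
    h = np.zeros(n_hid)
    for s in seq:
        x = np.eye(n_in)[s]              # one-hot encode the current element
        h = np.tanh(W_in @ x + W_rec @ h)
    return h

# Two sequences with the same element ordering, but the middle element
# is "time-warped": held for two steps instead of one.
base   = [0, 1, 2]
warped = [0, 1, 1, 2]

# The recurrent state differs between the two, so duration information
# is implicitly available to a network of this type.
distinct = not np.allclose(run(base), run(warped))
```

This illustrates why such networks face the processing problem the experiments probe: a raw recurrent state conflates what the elements are, how they are ordered, and how long each one lasts, so recognizing a time-warped sequence as "the same" requires the network to discount duration selectively.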
