The ability to extract meaningful relationships from sequences
is crucial to many aspects of perception and cognition, such
as speech and music. This paper explores how leading
computational techniques can model human learning of abstract
musical relationships, namely tonality and octave equivalence.
Rather than hard-coding musical
rules, this model uses an unsupervised learning approach to
glean tonal relationships from a musical corpus. We develop
and test a novel, perceptually inspired, harmonics-based input
representation that bootstraps
the model’s learning of tonal structure. The results are
compared with behavioral data from listeners’ performance on
a standard music perception task: the model effectively encodes
tonal relationships from musical data, simulating expert
performance on the listening task. Lastly, the results are contrasted
with previous findings from a computational model that
uses a simpler symbolic input representation of pitch.
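The abstract does not specify the encoding in detail; as one plausible illustration of a harmonics-based input (in contrast to a one-hot symbolic pitch code), the sketch below maps each note onto weighted activations of its overtone partials on a MIDI pitch grid. The function name, number of harmonics, decay factor, and grid range are assumptions for illustration, not the authors' specification.

```python
import numpy as np

def harmonic_pitch_vector(midi_pitch, n_harmonics=6, decay=0.8,
                          low_midi=21, high_midi=108):
    """Illustrative harmonics-based encoding: activate the fundamental and its
    first few harmonic partials on a MIDI grid with geometrically decaying weights."""
    size = high_midi - low_midi + 1
    vec = np.zeros(size)
    f0 = 440.0 * 2 ** ((midi_pitch - 69) / 12)          # fundamental frequency in Hz
    for h in range(1, n_harmonics + 1):
        partial_midi = 69 + 12 * np.log2(h * f0 / 440.0)  # pitch height of the h-th partial
        idx = int(round(partial_midi)) - low_midi          # nearest bin on the pitch grid
        if 0 <= idx < size:
            vec[idx] += decay ** (h - 1)                   # higher partials weighted less
    return vec / vec.max()

# The encodings of C4 (MIDI 60) and G4 (MIDI 67), a perfect fifth apart, share
# upper partials and therefore overlap, unlike one-hot symbolic pitch encodings.
c4 = harmonic_pitch_vector(60)
g4 = harmonic_pitch_vector(67)
print(float(c4 @ g4))
```

Under this kind of encoding, tonally related pitches have overlapping input vectors, which is one way a representation could bootstrap the unsupervised learning of tonal structure described above.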