How do people understand concepts such as dog, aggressive
dog, dog house, or house dog? The meaning of a concept
depends crucially on the concepts around it. While this
hypothesis has existed for a long time, only recently has it
become possible to test it with neuroimaging and quantify
it using computational modeling. In this paper, a neural
network is trained with backpropagation to map attribute-
based semantic representations to fMRI images of subjects
reading everyday sentences. Backpropagation is then
extended to the attributes themselves, demonstrating how word meanings
change in different contexts. Across a large corpus of
sentences, the new attributes are more similar to the attributes
of other words in the sentence than they are to the original
attributes, showing that the meaning of the context is
transferred in part to each word in the sentence. Such
dynamic conceptual combination effects could be included in
natural language processing systems to encode rich contextual
embeddings that mirror human performance more accurately.
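
To make the two-phase procedure concrete, below is a minimal sketch: first train a feedforward network to map attribute vectors to fMRI patterns, then freeze the weights and backpropagate the error to the attribute inputs themselves. The framework (PyTorch), network shape, data sizes, and optimizer settings are illustrative assumptions rather than the paper's actual configuration, and the data here are synthetic stand-ins.

```python
import torch
import torch.nn as nn

N_ATTR, N_VOXELS, N_WORDS = 25, 1000, 8  # illustrative sizes, not the paper's

# Phase 1: train a feedforward net to map attribute-based word
# representations to fMRI voxel patterns (synthetic stand-ins here).
net = nn.Sequential(
    nn.Linear(N_ATTR, 128),
    nn.Sigmoid(),
    nn.Linear(128, N_VOXELS),
)
attrs = torch.rand(N_WORDS, N_ATTR)    # baseline attribute ratings per word
fmri = torch.rand(N_WORDS, N_VOXELS)   # voxel activations per word in context
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for _ in range(500):
    opt.zero_grad()
    loss = nn.functional.mse_loss(net(attrs), fmri)
    loss.backward()
    opt.step()

# Phase 2: freeze the weights and extend backpropagation to the inputs,
# so the attribute vectors themselves shift to better account for the
# fMRI pattern measured while the word appeared in a sentence context.
for p in net.parameters():
    p.requires_grad_(False)
context_attrs = attrs.clone().requires_grad_(True)
opt2 = torch.optim.Adam([context_attrs], lr=1e-2)
for _ in range(200):
    opt2.zero_grad()
    loss = nn.functional.mse_loss(net(context_attrs), fmri)
    loss.backward()
    opt2.step()

# The per-word change in attributes is the contextual meaning shift.
shift = (context_attrs - attrs).detach()
print(shift.norm(dim=1))  # magnitude of each word's shift
```

The key design point is the second phase: with the weights frozen, the only free parameters are the attribute vectors, so gradient descent moves each word's attributes toward values that better explain the brain activity observed in context.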