The Remote Associates Test (RAT, CRA) is a classical creativity test used to measure creativity as a function of associative ability. The RAT has been administered in different languages. Nonetheless, because of how embedded in the language the test is, only a few items are directly translatable, and most of the time the RAT is created anew in each language. This process of manual (and in two cases computational) creation of RAT items is guided by the researchers' understanding of the task. However, are the RAT items in different languages comparable? In this paper, different RAT stimuli datasets are analyzed qualitatively and quantitatively. Significant differences are observed between certain datasets in terms of solver performance. The potential sources of these differences are discussed, together with what this means for creativity psychometrics and for computational vs. manual creation of stimuli.