Generative models of human identity and appearance have broad applicability to behavioral science and technology, but the exquisite sensitivity of human face perception means that their utility hinges on the alignment of the model’s representation to human psychological representations and the photorealism of the generated images. Meeting these requirements is an exacting task, and existing models of human identity and appearance are often unworkably abstract, artificial, uncanny, or biased. Here, we use a variational autoencoder with an autoregressive decoder to learn a face space from a uniquely diverse dataset of portraits that control much of the variation irrelevant to human identity and appearance. Our method generates photorealistic portraits of fictive identities with a smooth, navigable latent space. We validate our model’s alignment with human sensitivities by introducing a psychophysical Turing test for images, which humans mostly fail. Lastly, we demonstrate an initial application of our model to the problem of fast search in mental space to obtain detailed “police sketches” in a small number of trials.
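The core architecture named above, a variational autoencoder whose decoder is autoregressive over pixels, can be illustrated with a minimal PixelVAE-style sketch. This is a toy under stated assumptions (single-channel images, Bernoulli pixel likelihood, arbitrary layer widths, and the class names `MaskedConv2d` and `PixelVAE` chosen here for exposition), not the paper's implementation:

```python
# Minimal, illustrative VAE with an autoregressive (PixelCNN-style) decoder.
# Layer sizes and the greyscale/Bernoulli assumptions are placeholders.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MaskedConv2d(nn.Conv2d):
    """Causal convolution: type 'A' also blocks the centre pixel."""
    def __init__(self, mask_type, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self.register_buffer("mask", torch.ones_like(self.weight))
        _, _, h, w = self.weight.shape
        self.mask[:, :, h // 2, w // 2 + (mask_type == "B"):] = 0  # right of centre
        self.mask[:, :, h // 2 + 1:] = 0                           # rows below

    def forward(self, x):
        self.weight.data *= self.mask
        return super().forward(x)

class PixelVAE(nn.Module):
    def __init__(self, z_dim=64, img_size=28):
        super().__init__()
        self.img_size = img_size
        # Encoder: image -> parameters of the approximate posterior q(z|x)
        self.enc = nn.Sequential(
            nn.Conv2d(1, 32, 4, 2, 1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, 2, 1), nn.ReLU(), nn.Flatten())
        self.to_mu = nn.Linear(64 * (img_size // 4) ** 2, z_dim)
        self.to_logvar = nn.Linear(64 * (img_size // 4) ** 2, z_dim)
        # Decoder: z is mapped to a conditioning map fed to a small PixelCNN
        # (a full implementation would typically inject z without the causal mask)
        self.z_to_map = nn.Linear(z_dim, img_size * img_size)
        self.pixelcnn = nn.Sequential(
            MaskedConv2d("A", 2, 64, 7, padding=3), nn.ReLU(),
            MaskedConv2d("B", 64, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 1, 1))  # Bernoulli logits per pixel

    def forward(self, x):
        h = self.enc(x)
        mu, logvar = self.to_mu(h), self.to_logvar(h)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterization
        cond = self.z_to_map(z).view(-1, 1, self.img_size, self.img_size)
        logits = self.pixelcnn(torch.cat([x, cond], dim=1))   # causal in x, conditioned on z
        recon = F.binary_cross_entropy_with_logits(logits, x, reduction="sum")
        kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
        return recon + kl  # negative ELBO to minimize
```

In this kind of model, training evaluates all pixel conditionals in parallel via the masked convolutions, whereas sampling a new portrait requires generating pixels sequentially, each conditioned on the latent code z and on the pixels already produced.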