In general, it remains unclear to what extent autistic children develop the ability to recognize facial expressions with age, which basic emotion expressions are consistently difficult for them to learn, and which processing mechanisms play a key role in autistic behavior patterns during early social interaction. To address these questions, a deep learning model was constructed to simulate the eye-movement records of typically developing and autistic children while they judged emotion expressions. The simulation results are: 1. for the older autistic models, longer gaze fixations on the eyes and mouth for positive emotions led to better recognition performance; 2. in contrast, the younger autistic models required more training sessions to correctly recognize most negative emotions, because excessive inference from internal information occurred while establishing reliable prototypes of facial configurations for differentiating angry, sad, and disgusted expressions.
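
To make the kind of model referred to above concrete, the sketch below shows a minimal gaze-conditioned emotion classifier. It is only an illustration under assumptions: the feature layout (fixation durations on the eye and mouth regions plus a coarse age-group indicator), the layer sizes, the emotion set, and all names such as GazeEmotionClassifier are hypothetical and are not the architecture used in the study.

```python
# Minimal sketch (not the study's model): an emotion classifier conditioned on
# gaze-fixation features. Feature layout and layer sizes are assumptions.
import torch
import torch.nn as nn

N_EMOTIONS = 6  # e.g., happy, sad, angry, disgusted, fearful, surprised (assumed)

class GazeEmotionClassifier(nn.Module):
    def __init__(self, n_gaze_features: int = 3, n_emotions: int = N_EMOTIONS):
        super().__init__()
        # Assumed inputs: fixation duration on eyes, fixation duration on mouth,
        # and an age-group indicator (younger = 0, older = 1).
        self.net = nn.Sequential(
            nn.Linear(n_gaze_features, 32),
            nn.ReLU(),
            nn.Linear(32, n_emotions),
        )

    def forward(self, gaze_features: torch.Tensor) -> torch.Tensor:
        # Returns unnormalized scores (logits) over the emotion categories.
        return self.net(gaze_features)

# Usage sketch with hypothetical fixation data for two simulated children.
model = GazeEmotionClassifier()
batch = torch.tensor([[0.8, 0.3, 1.0],   # longer eye fixation, older group
                      [0.2, 0.5, 0.0]])  # shorter eye fixation, younger group
logits = model(batch)
print(logits.shape)  # torch.Size([2, 6])
```

In such a setup, the reported findings would correspond to how recognition accuracy changes with the fixation-duration inputs and with the number of training sessions for each simulated age group.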