Head-based pointing is an effective interface for people with limited hand control, but it can involve a learning curve when the pointer's response to head motion diverges from the movement the user intends. We hypothesize that individuals exhibit distinctive head-movement patterns for the same task, so the mapping from head motion to pointer motion should be personalized. To investigate this, we analyzed video of participants using head movements to track a target moving on screen. Mapping from a selected set of facial landmarks aligned head and pointer movements better than alternatives such as tracking only the nose tip or using head-rotation angles. Even so, the simple affine mapping model retained a notable bias. Participants' head movements varied substantially in response to the same target paths, with diagonal movements producing the largest errors. These findings can inform the design of personalized head-tracking interfaces that are easier and more efficient to use.
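To make the affine mapping concrete, here is a minimal sketch of how a per-user landmark-to-pointer map could be fit with least squares. All data shapes, landmark counts, and noise levels are illustrative assumptions for the sketch, not details of the study's actual pipeline.

```python
import numpy as np

# Hypothetical setup: for each video frame we have the (x, y) coordinates of
# a few tracked facial landmarks, flattened into one feature vector, and the
# corresponding on-screen pointer (target) position.
rng = np.random.default_rng(0)
n_frames, n_landmarks = 200, 5

# (n_frames, 2 * n_landmarks): landmark coordinates per frame (synthetic)
X = rng.normal(size=(n_frames, 2 * n_landmarks))

# Synthesize pointer positions from a ground-truth affine map plus noise,
# so the fit below has a known answer to recover.
A_true = rng.normal(size=(2 * n_landmarks, 2))
b_true = np.array([640.0, 360.0])  # assumed screen-center offset, pixels
Y = X @ A_true + b_true + 0.01 * rng.normal(size=(n_frames, 2))

# Append a bias column so the offset b is estimated jointly with A.
X_aug = np.hstack([X, np.ones((n_frames, 1))])
params, *_ = np.linalg.lstsq(X_aug, Y, rcond=None)
A_hat, b_hat = params[:-1], params[-1]

# Residual error of the fitted affine map on the training frames.
pred = X_aug @ params
rmse = np.sqrt(np.mean((pred - Y) ** 2))
```

A per-user map would be fit this way from that user's own tracking data; the residual (and any systematic structure in it, e.g. larger errors on diagonal paths) is what a purely affine model cannot capture.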