Programmers at all levels use a wide range of programming tools, from traditional autocomplete to modern AI-based chatbots, all designed to help them work more efficiently. However, the benefits of these tools, particularly their effects on learning, remain poorly understood.
This dissertation presents usability analyses of programming tools. First, we investigate IDE-based autocomplete, conducting a between-subjects eye-tracking experiment (N=32) to evaluate its costs and benefits for programmers learning an unfamiliar API. We conclude that autocomplete's primary benefit lies in information access and learning support rather than in reduced typing. Second, we examine the effectiveness of LLM-based chatbots versus human-written tutorials for learning unfamiliar codebases through a between-subjects experiment (N=15). We found that programmers using LLM chatbots exhibited greater confusion and lower confidence, while those using tutorials developed a better structural understanding but struggled with low-level implementation details. We also found that familiarity with the programming language predicts productivity more strongly than other factors, such as familiarity with the framework or with LLM chatbots.