With rapid advances in artificial intelligence, the long-standing dream of intelligent assistants that enhance programmer productivity has become a concrete reality. Large language models (LLMs) have demonstrated impressive capabilities across a variety of domains, owing to the vast amounts of data used to train them. However, tasks that require structured reasoning, or that are underrepresented in their training data, remain a challenge for LLMs.

Program synthesis offers an alternative approach to learning that is particularly effective in domains with limited training data. It searches for a program in a domain-specific language that satisfies a given user intent. Program synthesis yields interpretable models that provide correctness and generalizability guarantees from only a few data points, making learning data-efficient. However, purely symbolic methods based on combinatorial search scale poorly to complex problems. To address these challenges, a hybrid paradigm called neurosymbolic synthesis has emerged: it combines neural networks with symbolic reasoning, drawing on the strengths of both to build more robust AI assistants.
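To make the search-based view of program synthesis concrete, below is a minimal sketch of bottom-up enumerative synthesis over a toy arithmetic language. The grammar, the input-output examples, and all function names are illustrative assumptions and do not correspond to any specific system discussed in this dissertation.

```python
# A minimal sketch of bottom-up enumerative program synthesis over a
# toy arithmetic DSL; the grammar and examples here are illustrative.

import itertools

# Input-output examples specifying the user's intent: f(x) = 2*x + 1
EXAMPLES = [(0, 1), (1, 3), (2, 5)]

def satisfies(program, examples):
    """Check whether a candidate program matches every example."""
    return all(program(x) == y for x, y in examples)

def synthesize(examples, max_depth=3):
    """Grow expressions level by level, returning the first one
    consistent with all of the given examples."""
    # Level 0: terminals of the DSL (the input variable and constants).
    terms = [("x", lambda x: x)] + [
        (str(c), lambda x, c=c: c) for c in (0, 1, 2)
    ]
    for _ in range(max_depth):
        # itertools.product snapshots `terms`, so appending below is safe.
        for (sa, fa), (sb, fb) in itertools.product(terms, repeat=2):
            for op, fn in (("+", lambda a, b: a + b),
                           ("*", lambda a, b: a * b)):
                text = f"({sa} {op} {sb})"
                prog = lambda x, fa=fa, fb=fb, fn=fn: fn(fa(x), fb(x))
                if satisfies(prog, examples):
                    return text
                terms.append((text, prog))
    return None

# Finds an expression equivalent to 2*x + 1, e.g. "(x + (x + 1))".
print(synthesize(EXAMPLES))
```

Deciding which candidates such a loop should explore first, for instance by consulting a probabilistic model as in Probe and HySynth, is precisely where hybrid neurosymbolic techniques enter the picture.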
This dissertation presents technical contributions spanning symbolic, neurosymbolic, and neural approaches to program synthesis. It explores symbolic constraint-based synthesis in SyPhon to model human language, hybrid techniques in Probe and HySynth that guide symbolic search with a probabilistic model, and neural LLM-driven code generation that automates spreadsheet tasks for end users. Additionally, it examines strategies for improving the user experience of programming assistants, working toward tools that are more intuitive and user-friendly.