Design automation has increasingly leveraged Artificial Intelligence (AI) to streamline workflows, yet bridging the gap between natural-language instructions and precise design outputs remains a challenge. Despite advances in generative AI, tools that interpret user requirements expressed in natural language and translate them into 3D-printable models remain limited. This thesis introduces AutoGen3D, a Large Language Model (LLM)-based tool that generates parametric CAD models and optimal 3D printing settings from multimodal inputs. AutoGen3D is implemented on an OpenAI GPT-4 backend for code generation and follows a few-shot prompting strategy, conditioning the model on provided examples, while an integrated multimodal feedback mechanism allows generated models to be refined or updated. Its capabilities were evaluated through experiments involving objects of varying complexity and constraints related to 3D printing. The results show that AutoGen3D generates accurate models for objects within its knowledge base, incorporates user feedback to improve designs, and optimizes model geometry and print settings based on inferred constraints. However, its performance is limited when it is tasked with generating complex assemblies or objects outside its knowledge base, highlighting the need for further enhancement. This research demonstrates the feasibility of LLM-powered design automation and positions AutoGen3D as a prototyping tool for manufacturing and engineering applications. With further refinement, such tools could connect natural-language inputs to complex design tasks, enabling broader adoption in both professional and consumer-oriented workflows.
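
To make the few-shot code-generation strategy mentioned above concrete, the sketch below shows one plausible shape of such a pipeline. It is illustrative only, not AutoGen3D's actual implementation: it assumes the current `openai` Python client, OpenSCAD as the parametric CAD output language, and placeholder names (`FEW_SHOT_EXAMPLES`, `generate_cad_code`) that do not come from the thesis.

```python
# Minimal sketch of few-shot prompting for natural-language-to-CAD generation.
# Assumptions: OpenSCAD as the CAD language; example pairs are hypothetical.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical worked examples: description -> parametric OpenSCAD code.
FEW_SHOT_EXAMPLES = [
    ("A cube with 20 mm sides and a 5 mm hole through the center.",
     "difference() { cube(20, center=true); "
     "cylinder(h=22, r=2.5, center=true); }"),
]

def generate_cad_code(request: str) -> str:
    """Ask the LLM for CAD code, conditioned on the worked examples."""
    messages = [{
        "role": "system",
        "content": "Translate object descriptions into parametric OpenSCAD code.",
    }]
    # Few-shot conditioning: replay each example as a user/assistant exchange.
    for description, code in FEW_SHOT_EXAMPLES:
        messages.append({"role": "user", "content": description})
        messages.append({"role": "assistant", "content": code})
    messages.append({"role": "user", "content": request})
    response = client.chat.completions.create(model="gpt-4", messages=messages)
    return response.choices[0].message.content

print(generate_cad_code("A 30 mm tall cylindrical pen holder with 2 mm walls."))
```

In this pattern, the feedback mechanism described in the abstract would correspond to appending the user's critique (text or image) to `messages` and requesting a revised model, rather than starting a fresh conversation.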