Machine Learning Empowered Agile Hardware Design and Design Automation
Abstract
As applications continue to proliferate, the effort and complexity required to develop hardware that keeps pace with their compute demands are growing at an even faster rate. The problem goes further: with the cadence of Moore's law already slipping, more of the burden falls on design methodology to achieve "equivalent scaling". The proliferation of machine learning (ML) everywhere reveals its multi-faceted role: as the killer application that pulls the transition to novel hardware and compute paradigms (i.e., system for ML), and as an important booster to design methodology that pushes toward automated and agile hardware development (i.e., ML for system). Aiming to foster a virtuous cycle between ML and hardware, my research features agile hardware development empowered by ML and studies how to infuse intelligence, improve agility, and eventually enable no-human-in-the-loop automation for a scalable and effective hardware development flow through synergistic investigation across algorithms, architecture, and electronic design automation (EDA).
Specifically, we investigate how different ML techniques can be applied to (1) fast and accurate design evaluation, (2) efficient and scalable design optimization, and (3) high-quality and productive design verification. In design evaluation, we leverage the inherent graph structure of data flow graphs and circuits and explore how domain knowledge can be infused into graph neural network (GNN)-based models, so that timeliness, accuracy, and generalization can be reconciled in high-level synthesis (HLS) and logic synthesis performance prediction. In design optimization, we exploit deep reinforcement learning for flexible, scalable, and automated design space exploration in HLS resource allocation and workload placement; the approach remains efficient in large search spaces and transfers to new designs. In design verification, we utilize the message-passing mechanism of GNN computation to imitate conventional symbolic reasoning, which scales to extremely large Boolean networks with billions of nodes and makes better use of modern computing resources. Through multiple case studies, we showcase the possibilities and potential of ML-driven methodologies for agile and intelligent hardware design and design automation. Going forward, we hope to see a virtuous cycle in which ML-based techniques run efficiently on the most powerful computers while helping to design the next generation of computers.
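To make the GNN-on-circuit-graphs idea above concrete, the following minimal sketch shows one plausible shape of such a model: a few rounds of message passing over a directed graph (e.g., a data flow graph or netlist) followed by a graph-level readout that predicts a single quality-of-result estimate. It is written in plain PyTorch for self-containedness; the layer sizes, node features, and scalar prediction target are illustrative assumptions, not the actual models developed in this work.

```python
# Hypothetical sketch: message-passing GNN for circuit/HLS quality-of-result
# prediction. All names and dimensions are illustrative assumptions.
import torch
import torch.nn as nn


class MessagePassingLayer(nn.Module):
    """One round of neighbor aggregation over a directed circuit graph."""

    def __init__(self, dim: int):
        super().__init__()
        self.msg = nn.Linear(dim, dim)      # transform source-node states into messages
        self.upd = nn.Linear(2 * dim, dim)  # combine aggregated messages with the old state

    def forward(self, h: torch.Tensor, edge_index: torch.Tensor) -> torch.Tensor:
        src, dst = edge_index                      # edges point from driver to sink
        messages = self.msg(h[src])                # one message per edge
        agg = torch.zeros_like(h)
        agg.index_add_(0, dst, messages)           # sum messages arriving at each sink
        return torch.relu(self.upd(torch.cat([h, agg], dim=-1)))


class QoRPredictor(nn.Module):
    """Stacked message passing followed by a graph-level readout."""

    def __init__(self, in_dim: int, dim: int = 64, rounds: int = 3):
        super().__init__()
        self.embed = nn.Linear(in_dim, dim)
        self.layers = nn.ModuleList(MessagePassingLayer(dim) for _ in range(rounds))
        self.readout = nn.Linear(dim, 1)           # e.g., predicted delay or resource usage

    def forward(self, x: torch.Tensor, edge_index: torch.Tensor) -> torch.Tensor:
        h = torch.relu(self.embed(x))
        for layer in self.layers:
            h = layer(h, edge_index)
        return self.readout(h.mean(dim=0))         # mean-pool nodes into one graph estimate


# Toy usage: 5 nodes with 8-dim features (op type, bitwidth, ...), 4 driver->sink edges.
x = torch.randn(5, 8)
edge_index = torch.tensor([[0, 1, 2, 3],
                           [1, 2, 3, 4]])
print(QoRPredictor(in_dim=8)(x, edge_index))
```

The same message-passing skeleton is also the natural starting point for the verification direction mentioned above, where node updates are trained to mimic symbolic reasoning steps over very large Boolean networks; infusing domain knowledge (operation types, bitwidths, timing annotations) into the node and edge features is what distinguishes circuit-aware models from generic GNNs.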