By Zhuolun He, The Chinese University of Hong Kong, Hong Kong, zleonhe@gmail.com | Yuan Pu, The Chinese University of Hong Kong, Hong Kong | Haoyuan Wu, The Chinese University of Hong Kong, Hong Kong | Yuhan Qiu, ChatEDA Tech, Hong Kong | Tairu Qiu, ChatEDA Tech, Hong Kong | Bei Yu, The Chinese University of Hong Kong, Hong Kong, byu@cse.cuhk.edu.hk
This survey explores the application of Large Language Models (LLMs) in Electronic Design Automation (EDA), covering their roles as both assistants and autonomous agents. We review current research and practical implementations where LLMs are utilized for tasks such as question answering, script generation, and automated design processes. This work highlights the benefits of LLMs, including enhanced productivity and innovation, while also addressing challenges like accuracy and integration with traditional EDA tools. Furthermore, we discuss the evolution from LLMs as supportive assistants to more sophisticated agents capable of handling complex EDA workflows. This work aims to provide a comprehensive overview and guide future advancements in the integration of LLMs within the EDA domain.
Recent developments in semiconductor technology have significantly increased integration density, with modern System-on-Chip (SoC) designs for high-performance computing now exceeding 10 billion transistors. However, the traditional CMOS transistor scaling law has reached its physical limits. This has necessitated the adoption of More-than-Moore (MtM) technology, which integrates novel design methodologies and heterogeneous computing architectures. Electronic Design Automation (EDA), design reuse, and IP-based methodologies have been instrumental in bridging the productivity gap, reducing time-to-market, and meeting increasingly stringent performance and security requirements. As electronic systems grow in complexity, new design and verification methodologies are emerging to address these challenges effectively.
In the pre-Internet of Things (IoT) era, the security of SoC designs was often an afterthought. However, the widespread adoption of IoT devices has made security a critical concern at every level of deployment. Current EDA tools, while optimizing for performance, inadvertently introduce vulnerabilities that expose circuits to threats such as side-channel attacks, reverse engineering, and hardware Trojans. This increasing focus on security necessitates the development of security-aware EDA tools capable of addressing threats such as fault injection, information leakage, and timing and power-based attacks.
Deep learning algorithms and Large Language Models (LLMs) have emerged as promising solutions to the growing challenges in SoC and chiplet design and verification. AI-driven methodologies are transforming Register Transfer Level (RTL) generation, hardware security, and design automation, yielding improved power-performance-area (PPA) metrics and verification efficiency. This monograph features three works that explore the role of AI in EDA and SoC design: Deep Learning and Generative AI for Monolithic and Chiplet SoC Design and Verification, Large Language Models for EDA, and Evaluating LLMs for Automatic RTL Generation for Combinational Circuits via High-Level Synthesis. Collectively, these works underscore the transformative potential of AI and LLMs in SoC design and EDA workflows, offering valuable insights for researchers and industry professionals and inspiring further advances in AI-assisted hardware design and verification.
Foundations and Trends® in Electronic Design Automation, Volume 14, Issue 4. Special Issue: Harnessing the Potential of Deep-learning Algorithms and Generative AI for SoC and Chiplet Design and Verification.