
Wei-Ming Chen

Postdoctoral Associate (Graduated)

Wei-Ming Chen is a Postdoctoral Associate at MIT EECS, advised by Professor Song Han. His research focuses on TinyML, embedded systems, and real-time systems, with a particular emphasis on enabling efficient deep learning on Internet of Things (IoT) devices such as microcontrollers. His recent work on the MCUNet series (MCUNet, MCUNetV2, and MCUNetV3) has enabled efficient inference and training on devices with limited memory through system-algorithm co-design. He is also a key contributor to and maintainer of TinyEngine, an open-source library for high-performance, memory-efficient deep learning on microcontrollers. His work "On-Device Training Under 256KB Memory" (MCUNetV3) was highlighted on the MIT homepage in fall 2022, and he received first place (among 150 teams) in the flash consumption track of the ACM/IEEE TinyML Design Contest at ICCAD 2022. He also developed TinyChatEngine, which enables LLM inference on edge devices such as laptops and the Raspberry Pi. His open-source projects have received more than 1,000 stars on GitHub. After graduation, he joined NVIDIA as a Senior Deep Learning Engineer working on large language model acceleration.


Competition Awards

First Place (1/150), ACM/IEEE TinyML Design Contest, Memory Occupation Track, @ ICCAD 2022 (On-Device Training)


