Retrospective: EIE: Efficient Inference Engine on Sparse and Compressed Neural Network

Song Han¹³, Xingyu Liu⁴, Huizi Mao³, Jing Pu⁵, Ardavan Pedram²⁶, Mark A. Horowitz², William J. Dally²³
¹MIT ²Stanford ³NVIDIA ⁴CMU ⁵Google ⁶Samsung


Awards

Song Han and the EIE team received recognition for one of the Top 5 cited papers in 50 years of ISCA.


Abstract

EIE proposed to accelerate pruned and compressed neural networks by exploiting weight sparsity, activation sparsity, and 4-bit weight-sharing in neural network accelerators. Since its publication at ISCA'16, it has opened a new design space for accelerating pruned and sparse neural networks and has spawned many algorithm-hardware co-designs for model compression and acceleration, both in academia and in commercial AI chips. In this retrospective, we review the background of the project, summarize its pros and cons, and discuss new opportunities where pruning, sparsity, and low precision can accelerate emerging deep learning workloads.
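
To make the mechanism concrete, below is a minimal software sketch, not the paper's actual hardware dataflow, of an EIE-style sparse matrix-vector product: the pruned weight matrix is stored column-wise with only nonzero entries kept, each entry holds a 4-bit index into a 16-entry codebook of shared weight values, and columns whose input activation is zero are skipped entirely. The function name eie_style_spmv and the toy data are illustrative assumptions; the real EIE additionally uses relative (run-length) row indexing and interleaves rows across processing elements.

import numpy as np

def eie_style_spmv(n_rows, col_ptr, row_idx, weight_idx, codebook, x):
    """Sketch of an EIE-style sparse matrix-vector product.

    The pruned matrix is stored in a CSC-like layout: the nonzeros of
    column j occupy the half-open range [col_ptr[j], col_ptr[j+1]).
    Each nonzero carries a row index and a 4-bit index into `codebook`,
    the table of 16 shared weight values produced by quantization.
    """
    y = np.zeros(n_rows)
    for j, a in enumerate(x):
        if a == 0.0:
            continue  # activation sparsity: skip columns with zero input
        for k in range(col_ptr[j], col_ptr[j + 1]):
            # decode the 4-bit index through the shared-weight codebook
            y[row_idx[k]] += codebook[weight_idx[k]] * a
    return y

# Toy example: a 4x3 pruned matrix with 4 nonzeros and a 16-entry codebook.
codebook = np.linspace(-1.0, 1.0, 16)    # 16 shared weights (4-bit indices)
col_ptr = np.array([0, 2, 3, 4])         # start offset of each column
row_idx = np.array([0, 2, 1, 3])         # row of each nonzero
weight_idx = np.array([15, 0, 8, 3])     # 4-bit codebook index per nonzero
x = np.array([0.5, 0.0, 1.0])            # sparse input activations
print(eie_style_spmv(4, col_ptr, row_idx, weight_idx, codebook, x))

In this example the middle column is never touched because its activation is zero, and no dense weight matrix is ever materialized; only nonzero weights and 4-bit indices are stored, which is the storage and compute saving EIE exploits in hardware.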


Citation

@article{han2023retrospective,
  title={Retrospective: EIE: Efficient Inference Engine on Sparse and Compressed Neural Network},
  author={Han, Song and Liu, Xingyu and Mao, Huizi and Pu, Jing and Pedram, Ardavan and Horowitz, Mark A and Dally, William J},
  journal={arXiv preprint arXiv:2306.09552},
  year={2023}
}

