Efficient AI Computing,
Transforming the Future.

Projects


TinyTL: Reduce Activations, Not Trainable Parameters for Efficient On-Device Learning

NeurIPS 2020

Tiny-Transfer-Learning (TinyTL) enables memory-efficient on-device learning by freezing the weights and updating only the bias modules, which eliminates the need to store intermediate activations, and by introducing a lite residual module to maintain the adaptation capacity.
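
A minimal PyTorch sketch of the core idea, assuming a generic convolutional backbone (the layer sizes and loss below are placeholders, not TinyTL's actual architecture, and the lite residual module is omitted): with the weights frozen and only biases trainable, weight gradients are never computed, so the large intermediate activations they would require need not be kept.

```python
import torch
import torch.nn as nn

# Hypothetical frozen backbone standing in for a pre-trained feature
# extractor; layer sizes are placeholders chosen for illustration.
backbone = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1),
    nn.ReLU(),
    nn.Conv2d(16, 32, 3, padding=1),
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(32, 10),
)

# Freeze every weight and leave only the bias terms trainable. Bias
# gradients do not depend on the layer inputs, so autograd no longer
# has to keep the large intermediate activation maps for backward.
for name, param in backbone.named_parameters():
    param.requires_grad = name.endswith("bias")

optimizer = torch.optim.SGD(
    [p for p in backbone.parameters() if p.requires_grad], lr=0.01
)

x = torch.randn(8, 3, 32, 32)   # dummy on-device batch
loss = backbone(x).sum()        # placeholder loss
optimizer.zero_grad()
loss.backward()
optimizer.step()
```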

Searching Efficient 3D Architectures with Sparse Point-Voxel Convolution

ECCV 2020

SPVNAS augments Point-Voxel Convolution with sparse convolutions for large-scale outdoor scenes. With 3D Neural Architecture Search (3D-NAS), it efficiently and effectively searches for the optimal 3D neural network architecture under a given resource constraint.
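
As a loose illustration of resource-constrained architecture search in the spirit of 3D-NAS, the sketch below runs a random search over a toy space of per-stage channel widths and depths. The search space, cost model, and accuracy proxy are all invented for illustration; they stand in for the real supernet evaluation and hardware cost model.

```python
import random

# Toy search space (hypothetical numbers, not the actual SPVNAS space).
CHANNEL_CHOICES = [16, 32, 48, 64]
DEPTH_CHOICES = [1, 2, 3]
NUM_STAGES = 4

def sample_arch():
    return [(random.choice(CHANNEL_CHOICES), random.choice(DEPTH_CHOICES))
            for _ in range(NUM_STAGES)]

def cost(arch):
    # Stand-in resource model: proxy "MACs" grow with channels and depth.
    return sum(c * c * d for c, d in arch)

def accuracy_proxy(arch):
    # Placeholder for a real evaluator (e.g., a trained supernet);
    # here, capacity is naively assumed to predict accuracy.
    return sum(c * d for c, d in arch)

def search(budget, num_samples=1000):
    """Random search: keep the best architecture that fits the budget."""
    best, best_score = None, float("-inf")
    for _ in range(num_samples):
        arch = sample_arch()
        if cost(arch) > budget:
            continue  # violates the resource constraint
        score = accuracy_proxy(arch)
        if score > best_score:
            best, best_score = arch, score
    return best

print(search(budget=20000))
```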

GCN-RL Circuit Designer: Transferable Transistor Sizing with Graph Neural Networks and Reinforcement Learning

DAC 2020 (Oral)

We develop a method based on graph neural networks and reinforcement learning for transistor sizing in analog circuits.
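
The sketch below is a schematic of the general recipe, not the paper's method: a small graph convolutional policy reads per-transistor features over the circuit's connectivity graph and proposes sizings. The topology, features, and reward are all hypothetical; the real setup would train with reinforcement learning against a circuit simulator, whereas this placeholder reward happens to be differentiable, so we ascend it directly to keep the sketch short.

```python
import torch
import torch.nn as nn

# Toy circuit graph: 4 transistors, adjacency from netlist connectivity
# (hypothetical topology, not a real circuit).
adj = torch.tensor([[0., 1., 0., 1.],
                    [1., 0., 1., 0.],
                    [0., 1., 0., 1.],
                    [1., 0., 1., 0.]])
adj_norm = adj / adj.sum(dim=1, keepdim=True)  # row-normalized propagation

feat = torch.randn(4, 8)  # per-device features (type, current sizing, ...)

class GCNPolicy(nn.Module):
    """Two-layer graph convolution emitting one sizing action per device."""
    def __init__(self, in_dim, hidden, out_dim=1):
        super().__init__()
        self.fc1 = nn.Linear(in_dim, hidden)
        self.fc2 = nn.Linear(hidden, out_dim)

    def forward(self, x, a):
        x = torch.relu(self.fc1(a @ x))         # aggregate neighbors, transform
        return torch.sigmoid(self.fc2(a @ x))   # normalized widths in (0, 1)

policy = GCNPolicy(8, 16)

def simulate(widths):
    # Placeholder reward; a real setup would invoke a circuit simulator
    # (e.g., SPICE) and score the measured figures of merit.
    return -((widths - 0.6) ** 2).sum()

optimizer = torch.optim.Adam(policy.parameters(), lr=1e-3)
widths = policy(feat, adj_norm).squeeze(-1)
loss = -simulate(widths)   # maximize reward = minimize its negative
optimizer.zero_grad()
loss.backward()
optimizer.step()
```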

MCUNet: Tiny Deep Learning on IoT Devices

NeurIPS 2020 (Spotlight)

MCUNet is a system-algorithm co-design framework for tiny deep learning on microcontrollers. It consists of TinyNAS and TinyEngine, which are co-designed to fit tight memory budgets. With system-algorithm co-design, we can significantly improve deep learning performance under the same tiny memory budget.
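
One way to picture the constraint such a framework must respect is a peak-SRAM admissibility check on candidate networks. The sketch below uses made-up layer shapes and a simplified layer-by-layer memory model, not TinyEngine's actual scheduler or TinyNAS's search space.

```python
# Toy admissibility check: a microcontroller has a hard SRAM cap, and a
# candidate network only fits if its peak activation footprint stays
# under it. All numbers below are illustrative.

SRAM_BUDGET_BYTES = 320 * 1024  # e.g., 320 kB on a Cortex-M7-class MCU

def peak_activation_bytes(layer_shapes, bytes_per_elem=1):
    """Peak of (input + output) activation size across layers, assuming
    int8 tensors and strict layer-by-layer execution."""
    peak = 0
    for in_elems, out_elems in layer_shapes:
        peak = max(peak, (in_elems + out_elems) * bytes_per_elem)
    return peak

# Candidate network as (input elements, output elements) per layer.
candidate = [(3 * 96 * 96, 16 * 48 * 48),
             (16 * 48 * 48, 32 * 24 * 24),
             (32 * 24 * 24, 64 * 12 * 12)]

usage = peak_activation_bytes(candidate)
print(f"peak activations: {usage / 1024:.1f} kB "
      f"({'fits' if usage <= SRAM_BUDGET_BYTES else 'exceeds'} budget)")
```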