Efficient AI Computing,
Transforming the Future.

Projects


Once-for-All: Train One Network and Specialize it for Efficient Deployment

ICLR 2020

OFA is an efficient AutoML technique that decouples model training from architecture search: train the network only once, then specialize it for many hardware platforms, from CPUs and GPUs to hardware accelerators. OFA achieves a new state-of-the-art 80.0% ImageNet top-1 accuracy in the mobile setting (<600M FLOPs).
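
For intuition, here is a minimal, hypothetical sketch of the train-once, specialize-many workflow: a toy supernet with one elastic-width layer is trained by sampling a random sub-network at every step, and a deployment sub-network is then chosen to fit a per-platform FLOPs budget without retraining. The layer sizes and FLOPs numbers are invented for illustration and are not OFA's actual architecture or code.

```python
import random
import torch
import torch.nn as nn
import torch.nn.functional as F

class ElasticConv(nn.Module):
    """Conv layer whose active output width can shrink at run time."""
    def __init__(self, in_ch, max_out_ch):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(max_out_ch, in_ch, 3, 3) * 0.1)
        self.bias = nn.Parameter(torch.zeros(max_out_ch))

    def forward(self, x, width):
        # Use only the first `width` filters -> a sub-network of the supernet.
        return F.relu(F.conv2d(x, self.weight[:width], self.bias[:width], padding=1))

class TinySupernet(nn.Module):
    def __init__(self, widths=(16, 32, 64)):
        super().__init__()
        self.widths = widths                  # candidate widths to sample from
        self.layer = ElasticConv(3, max(widths))
        self.head = nn.Linear(max(widths), 10)

    def forward(self, x, width):
        h = self.layer(x, width).mean(dim=(2, 3))   # global average pooling
        # Zero-pad pooled features so one head serves every width choice.
        return self.head(F.pad(h, (0, self.head.in_features - width)))

net = TinySupernet()
opt = torch.optim.SGD(net.parameters(), lr=0.01)

# Phase 1: train ONCE, sampling a random sub-network each step.
for step in range(100):
    x, y = torch.randn(8, 3, 32, 32), torch.randint(0, 10, (8,))
    loss = F.cross_entropy(net(x, random.choice(net.widths)), y)
    opt.zero_grad()
    loss.backward()
    opt.step()

# Phase 2: specialize WITHOUT retraining -- pick the widest sub-network
# that fits a per-platform budget (made-up FLOPs numbers).
flops_per_width = {16: 200, 32: 400, 64: 800}
budget = 500
best = max(w for w in net.widths if flops_per_width[w] <= budget)
print(f"deploying the width-{best} sub-network for this platform")
```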

HAT: Hardware-Aware Transformers for Efficient Natural Language Processing

ACL 2020

The HAT NAS framework incorporates hardware feedback into the neural architecture search loop, yielding the model best suited to the target hardware platform. Results across different hardware platforms and datasets show that HAT-searched models achieve better accuracy-efficiency trade-offs.
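
As an illustration (not HAT's released code), the sketch below places a per-device latency model inside the search loop: randomly sampled Transformer configurations are discarded when they exceed the target device's latency budget, so different devices yield different winning architectures. The search space, accuracy proxy, and latency coefficients are all invented for the example.

```python
import random

SEARCH_SPACE = {
    "layers":    [2, 4, 6],
    "embed_dim": [256, 384, 512],
    "heads":     [4, 8],
}

def sample_arch():
    return {k: random.choice(v) for k, v in SEARCH_SPACE.items()}

def proxy_accuracy(arch):
    # Stand-in for evaluating a sub-transformer; bigger is scored higher here.
    return 0.60 + 0.0004 * arch["embed_dim"] + 0.01 * arch["layers"]

def predicted_latency_ms(arch, device):
    # Stand-in for a learned latency predictor; coefficients are made up.
    per_layer_ms = {"raspberry_pi": 9.0, "gpu": 0.8}[device]
    return arch["layers"] * per_layer_ms * (arch["embed_dim"] / 256)

def search(device, latency_budget_ms, n_samples=200):
    best, best_acc = None, -1.0
    for _ in range(n_samples):
        arch = sample_arch()
        if predicted_latency_ms(arch, device) > latency_budget_ms:
            continue                      # hardware feedback prunes this one
        acc = proxy_accuracy(arch)
        if acc > best_acc:
            best, best_acc = arch, acc
    return best

print("Raspberry Pi model:", search("raspberry_pi", latency_budget_ms=60))
print("GPU model:", search("gpu", latency_budget_ms=10))
```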

Park: An Open Platform for Learning-Augmented Computer Systems

NeurIPS 2019

We present Park, a platform for researchers to experiment with Reinforcement Learning (RL) for computer systems.
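
Park exposes a gym-style interface, so a random-policy episode should look roughly like the loop below. The environment name "load_balance" and the exact reset/step signatures are assumptions based on common RL-platform conventions; check the Park repository for the authoritative API.

```python
import park

# Assumed gym-style usage; names and signatures may differ in Park itself.
env = park.make("load_balance")          # hypothetical systems environment
obs = env.reset()
done, total_reward = False, 0.0
while not done:
    action = env.action_space.sample()   # random policy as a placeholder
    obs, reward, done, info = env.step(action)
    total_reward += reward
print("episode return:", total_reward)
```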

Point-Voxel CNN for Efficient 3D Deep Learning

NeurIPS 2019 (Spotlight)

PVCNN represents the 3D data as points to reduce memory consumption, while performing the convolutions in voxels to reduce irregular, sparse data access and improve locality.
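
As a rough illustration of the point-voxel idea (not PVCNN's actual kernels), the toy module below keeps features attached to points, scatter-averages them onto a coarse voxel grid for a regular 3D convolution, gathers the result back to each point by nearest voxel, and fuses it with a per-point MLP branch. The resolution and nearest-voxel gather are simplifications; the real PVCNN uses trilinear devoxelization and optimized CUDA kernels.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ToyPVConv(nn.Module):
    def __init__(self, in_ch, out_ch, resolution=8):
        super().__init__()
        self.r = resolution
        self.point_mlp = nn.Conv1d(in_ch, out_ch, 1)              # point branch
        self.voxel_conv = nn.Conv3d(in_ch, out_ch, 3, padding=1)  # voxel branch

    def forward(self, feats, coords):
        # feats: (B, C, N) point features; coords: (B, N, 3) in [0, 1).
        B, C, N = feats.shape
        idx = (coords * self.r).long().clamp(0, self.r - 1)
        flat = (idx[..., 0] * self.r + idx[..., 1]) * self.r + idx[..., 2]

        # Scatter-average point features into a dense voxel grid.
        grid = feats.new_zeros(B, C, self.r ** 3)
        cnt = feats.new_zeros(B, 1, self.r ** 3)
        grid.scatter_add_(2, flat.unsqueeze(1).expand(-1, C, -1), feats)
        cnt.scatter_add_(2, flat.unsqueeze(1), torch.ones_like(feats[:, :1]))
        grid = (grid / cnt.clamp(min=1)).view(B, C, self.r, self.r, self.r)

        # Convolve on the regular grid, then gather back to the points.
        vox = self.voxel_conv(grid).view(B, -1, self.r ** 3)
        vox_feats = vox.gather(2, flat.unsqueeze(1).expand(-1, vox.size(1), -1))

        return F.relu(self.point_mlp(feats) + vox_feats)          # fuse branches

pv = ToyPVConv(in_ch=16, out_ch=32)
out = pv(torch.randn(2, 16, 1024), torch.rand(2, 1024, 3))
print(out.shape)   # torch.Size([2, 32, 1024])
```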