Efficient AI Computing,
Transforming the Future.

News

  • Jan 2024
    Tiny Machine Learning Projects appear at NeurIPS 2020/2021/2022, MICRO 2023, ICML 2023, MLSys 2024, and IEEE CAS Magazine 2023.
    This TinyML project aims to enable efficient AI computing on the edge by innovating both model compression techniques and high-performance system design.
  • Oct 2023
    PockEngine: Sparse and Efficient Fine-tuning in a Pocket appears at MICRO 2023.
    This project introduces PockEngine, a tiny, sparse, and efficient engine that enables fine-tuning on various edge devices. PockEngine supports sparse backpropagation: it prunes the backward graph and sparsely updates the model, with measured memory savings and latency reduction while maintaining model quality.
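The sparse-backpropagation idea can be illustrated with a toy example (a hedged sketch, not PockEngine's actual API): gradients are computed only for parameters marked trainable, so the backward branch for frozen parameters, and the activations it would need, can be pruned away.

```python
# Toy illustration of sparse backpropagation in the spirit of PockEngine.
# All names here are illustrative, not PockEngine's real interface.

def forward(w, b, x):
    return w * x + b  # y = w*x + b

def sparse_backward(w, b, x, t, trainable=("b",)):
    """Squared-error loss; build gradients only for trainable params."""
    y = forward(w, b, x)
    dy = 2.0 * (y - t)          # dL/dy
    grads = {}
    if "w" in trainable:
        grads["w"] = dy * x     # this branch needs the stored activation x
    if "b" in trainable:
        grads["b"] = dy         # this branch needs no stored activation
    return grads

# Bias-only update: the weight-gradient branch is never built,
# so the input activation need not be kept around for it.
grads = sparse_backward(w=2.0, b=0.5, x=3.0, t=7.0, trainable=("b",))
assert "w" not in grads
```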
  • Jul 2020
    MCUNet: Tiny Deep Learning on IoT Devices appears at NeurIPS 2020.
    MCUNet is a system-algorithm co-design framework for tiny deep learning on microcontrollers. It consists of TinyNAS and TinyEngine, which are co-designed to fit tight memory budgets. With system-algorithm co-design, we can significantly improve deep learning performance under the same tiny memory budget.
  • Aug 2024

    The TinyML and Efficient Deep Learning Computing course will be returning in Fall, with recorded sessions on YouTube!

    6.5940
  • Jun 2024

    AWQ is presented at MLSys 2024. The talk video has been released!

    AWQ
  • Mar 2024

    We show SmoothQuant can enable W8A8 quantization for Llama-1/2, Falcon, Mistral, and Mixtral models with negligible loss.

    SmoothQuant
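The W8A8 recipe rests on SmoothQuant's smoothing step: a per-channel scale s_j = max|X_j|^α / max|W_j|^(1−α) migrates quantization difficulty from activations to weights while leaving the product X·W unchanged. A minimal plain-Python sketch of that step (illustrative shapes and names, not the released kernels):

```python
def smooth(X, W, alpha=0.5):
    """SmoothQuant-style smoothing: X is n x c activations, W is c x m weights.
    Scale each input channel j by s_j so (X / s)(s * W) == X W exactly,
    while both factors become easier to quantize to int8."""
    c = len(W)
    s = [(max(abs(row[j]) for row in X) ** alpha) /
         (max(abs(v) for v in W[j]) ** (1 - alpha)) for j in range(c)]
    Xs = [[row[j] / s[j] for j in range(c)] for row in X]
    Ws = [[W[j][k] * s[j] for k in range(len(W[0]))] for j in range(c)]
    return Xs, Ws

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

# Smoothing is mathematically lossless: only the subsequent int8
# rounding introduces (negligible) error.
X = [[1.0, 40.0], [0.5, 30.0]]   # activations with an outlier channel
W = [[2.0], [0.05]]
Xs, Ws = smooth(X, W)
assert abs(matmul(Xs, Ws)[0][0] - matmul(X, W)[0][0]) < 1e-9
```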
  • Feb 2024

    AWQ has been accepted to MLSys 2024!

    AWQ
  • Feb 2024

    We released a new version of the quantized GEMM/GEMV kernels in TinyChat, reaching 38 tokens/second inference speed on NVIDIA Jetson Orin!

    AWQ
  • Jan 2024

    StreamingLLM is integrated into HPC-AI Tech's SwiftInfer to support infinite input length for LLM inference.

    StreamingLLM
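The mechanism behind infinite-length inference is StreamingLLM's KV-cache policy: keep a few initial "attention sink" tokens plus a rolling window of the most recent tokens, evicting everything in between so the cache stays bounded. A toy sketch of the eviction rule (illustrative only; the real implementation operates on per-layer KV tensors, not token positions):

```python
# StreamingLLM-style KV-cache eviction: retain `n_sink` initial
# "attention sink" tokens plus the `window` most recent tokens,
# so cache size stays bounded for arbitrarily long streams.

def evict(cache, n_sink=4, window=8):
    """cache: list of token positions currently held; return bounded cache."""
    if len(cache) <= n_sink + window:
        return cache
    return cache[:n_sink] + cache[-window:]

cache = []
for tok in range(100):           # 100 tokens arrive one by one
    cache = evict(cache + [tok])
assert len(cache) == 12 and cache[:4] == [0, 1, 2, 3]
```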
  • Dec 2023

    Congrats to Ji Lin, who completed and defended his PhD thesis: "Efficient Deep Learning Computing: From TinyML to Large Language Model". Ji joined OpenAI after graduation.

  • Dec 2023

    StreamingLLM is integrated by CMU, UW, and OctoAI, enabling endless and efficient LLM generation on iPhone!

    StreamingLLM
  • Dec 2023

    AWQ is integrated into the main branch of Hugging Face Transformers.

    AWQ
  • Dec 2023

    SmoothQuant is adopted by NVIDIA TensorRT-LLM.

    SmoothQuant
  • Nov 2023

    TorchSparse++ has been adopted by One-2-3-45++ from Prof. Hao Su's lab (UCSD) for 3D object generation!

    TorchSparse++
  • Nov 2023

    🔥 AWQ is now integrated natively in Hugging Face transformers through from_pretrained. You can either load quantized models from the Hub or your own HF quantized models.

    AWQ
  • Nov 2023

    SmoothQuant is adopted by Amazon SageMaker.

    SmoothQuant
  • Oct 2023

    TorchQuantum was used by the winning team of the ACM Quantum Computing for Drug Discovery Contest.

    QuantumNAS
  • Jul 2023

    The TinyML and Efficient Deep Learning Computing course will be returning in Fall, with live sessions on YouTube!

    6.5940
  • Jul 2023

    SpAtten and SpAtten-Chip won the 1st Place Award at 2023 DAC University Demo.

    SpAtten
  • Jul 2023

    We released TinyChat, an efficient and lightweight chatbot interface based on AWQ. TinyChat enables efficient LLM inference on both cloud and edge GPUs. Llama-2-chat models are supported! Check out our implementation here.

    AWQ
  • Jun 2023

    TorchSparse++ has been adopted by One-2-3-45 from Prof. Hao Su's lab (UCSD) for 3D mesh reconstruction!

    TorchSparse++
  • Jun 2022

    TorchSparse has been adopted by SparseNeuS for neural surface reconstruction.

    TorchSparse
  • Oct 2023
    Congrats to the QuantumNAS team on the 1st Place Award of the ACM Quantum Computing for Drug Discovery Contest @ ICCAD 2023.
    QuantumNAS
  • Nov 2022
    Congrats to the HAT team on First Place (1/150) in the ACM/IEEE TinyML Design Contest, Memory Occupation Track @ ICCAD 2022.
    HAT
  • Jul 2020
    Congrats to the SPVNAS team on First Place on the SemanticKITTI leaderboard for 3D semantic segmentation @ ECCV 2020.
    SPVNAS
  • Jun 2021
    Congrats to the SPVNAS team on First Place at the 6th AI Driving Olympics, nuScenes Semantic Segmentation track @ ICRA 2021.
    SPVNAS
  • Oct 2019
    Congrats to the OFA team on First Place in the Low-Power Computer Vision Workshop, DSP track @ ICCV 2019.
    OFA
  • Jun 2019
    Congrats to the OFA team on First Place in the IEEE Low-Power Image Recognition Challenge, classification and detection tracks, 2019.
    OFA
  • Jun 2020
    Congrats to the OFA team on First Place in the Low-Power Computer Vision Challenge, CPU Detection and FPGA tracks @ CVPR 2020.
    OFA
  • Jun 2019
    Congrats to the ProxylessNAS team on First Place in the Visual Wake Words Challenge, TF-lite track @ CVPR 2019.
    ProxylessNAS
  • Feb 2024
    Congrats to Hanrui Wang on being named a Rising Star in Solid-State Circuits at ISSCC.
  • Nov 2023
    Congrats to Zhijian Liu on the 2023 Rising Stars in Data Science.
  • Jan 2023
    Congrats to Hanrui Wang on the MARC 2023 Best Pitch Award.
  • Nov 2022
    Congrats to Hanrui Wang on the Gold Medal of the ACM Student Research Competition.
  • Aug 2023
    Congrats to Hanrui Wang on the 2023 Rising Stars in ML and Systems.
  • May 2023
    Congrats to Song Han on the 2023 Sloan Research Fellowship.
  • May 2022
    Congrats to Song Han on the 2022 Red Dot Award.
  • May 2021
    Congrats to Song Han on the 2021 Samsung Global Research Outreach (GRO) Award.
  • May 2021
    Congrats to Song Han on the 2021 NVIDIA Academic Partnership Award.
  • May 2020
    Congrats to Song Han on the 2020 NVIDIA Academic Partnership Award.
  • May 2020
    Congrats to Song Han on the 2020 IEEE "AI's 10 to Watch: The Future of AI" Award.
  • May 2020
    Congrats to Song Han on the 2020 NSF CAREER Award.
  • May 2019
    Congrats to Song Han on the 2019 MIT Technology Review list of 35 Innovators Under 35.
  • May 2020
    Congrats to Song Han on the 2020 SONY Faculty Award.
  • May 2017
    Congrats to Song Han on the 2017 SONY Faculty Award.
  • May 2018
    Congrats to Song Han on the 2018 SONY Faculty Award.
  • May 2018
    Congrats to Song Han on the 2018 Amazon Machine Learning Research Award.
  • May 2019
    Congrats to Song Han on the 2019 Amazon Machine Learning Research Award.
  • May 2019
    Congrats to Song Han on the 2019 Facebook Research Award.
  • Aug 2022
    Congrats to Ji Lin on the 2022 Qualcomm Innovation Fellowship.
  • Aug 2023
    Congrats to Zhijian Liu on the 2023 Rising Stars in ML and Systems.
  • May 2021
    Congrats to Hanrui Wang on the 2021 Qualcomm Innovation Fellowship.
  • May 2021
    Congrats to Han Cai on the 2021 Qualcomm Innovation Fellowship.
  • May 2021
    Congrats to Zhijian Liu on the 2021 Qualcomm Innovation Fellowship.
  • May 2020
    Congrats to Ji Lin on being a 2020 NVIDIA Graduate Fellowship Finalist.
  • May 2021
    Congrats to Yujun Lin on the 2021 DAC Young Fellowship.
  • May 2022
    Congrats to Hanrui Wang on 1st Place in the 2022 ACM Student Research Competition.
  • Aug 2022
    Congrats to Zhijian Liu on the 2022 MIT Ho-Ching and Han-Ching Fund Award.
  • May 2021
    Congrats to Yujun Lin on the 2021 Qualcomm Innovation Fellowship.
  • May 2020
    Congrats to Hanrui Wang on being a 2020 NVIDIA Graduate Fellowship Finalist.
  • May 2020
    Congrats to Hanrui Wang on the 2021 Analog Devices Outstanding Student Designer Award.
  • May 2020
    Congrats to Hanrui Wang on the 2020 DAC Young Fellowship.
  • Aug 2018
    Congrats to Yujun Lin on the 2018 Robert J. Shillman Fellowship.
  • Jun 2019
    Congrats to Hanrui Wang and the Park team on the Best Paper Award of the ICML 2019 Reinforcement Learning for Real Life Workshop.
    Park
  • Sep 2022
    Congrats to Hanrui Wang's team on the Best Paper Award of the IEEE International Conference on Quantum Computing and Engineering (QCE).
  • Jun 2024
    Congrats to the AWQ team on the Best Paper Award of MLSys 2024.
    AWQ
  • May 2017
    Congrats to Song Han's team on the Best Paper Award of FPGA 2017.
  • May 2016
    Congrats to Song Han's team on the Best Paper Award of ICLR 2016.
  • Jul 2023
    Congrats to the SpAtten team on the Best Demo Award of the DAC University Demo.
    SpAtten
  • May 2023
    Congrats to Wei-Chen Wang's team on the 2023 NSF Athena AI Institute Best Poster Award.
  • Dec 2020
    Congrats to Hanrui Wang's team on the Best Presentation Award of the DAC 2020 Young Fellows Program.
  • Oct 2024
    A new blog post, Block Sparse Attention, is published.
    We introduce Block Sparse Attention, a library of sparse attention kernels that supports various sparse patterns, including streaming attention with token granularity, streaming attention with block granularity, and block-sparse attention. By incorporating these patterns, Block Sparse Attention can significantly reduce the computational cost of LLMs, thereby enhancing their efficiency and scalability. We release our implementation, which is modified from FlashAttention 2.4.2.
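The streaming pattern with block granularity can be sketched as a boolean block mask (a hypothetical helper, not the library's actual kernel interface): each query block attends causally to the first sink blocks plus the most recent local blocks, and blocks outside the mask are never computed.

```python
# Sketch of a block-granularity streaming-attention mask, one of the
# sparse patterns described in the post. Illustrative only.

def streaming_block_mask(n_blocks, sink=1, local=2):
    """Return an n_blocks x n_blocks boolean mask: True = compute this
    key block for this query block, False = skip it entirely."""
    mask = [[False] * n_blocks for _ in range(n_blocks)]
    for q in range(n_blocks):
        for k in range(q + 1):                 # causal: keys up to q
            if k < sink or q - k < local:      # sink block or recent block
                mask[q][k] = True
    return mask

mask = streaming_block_mask(5, sink=1, local=2)
# each row keeps only the sink + local blocks out of its causal prefix
```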
  • Mar 2024
    A new blog post, Patch Conv: Patch Convolution to Avoid Large GPU Memory Usage of Conv2D, is published.
    In this blog, we introduce Patch Conv to reduce the memory footprint when generating high-resolution images. Patch Conv cuts memory usage by over 2.4× compared to the existing PyTorch implementation. Code: https://github.com/mit-han-lab/patch_conv
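The patch trick can be shown on a 1D analogue (a sketch under simplified assumptions; the real Patch Conv operates on 4D Conv2D tensors in PyTorch): split the input into patches, carry a halo of kernel-radius inputs across each boundary, and the stitched patch-wise outputs match the full convolution while only one patch needs to be resident at a time.

```python
def conv1d_same(x, k):
    """Reference 'same' 1D convolution with zero padding."""
    r = len(k) // 2
    pad = [0.0] * r + x + [0.0] * r
    return [sum(pad[i + j] * k[j] for j in range(len(k)))
            for i in range(len(x))]

def patch_conv1d(x, k, n_patches=2):
    """Run the same convolution patch by patch, carrying a halo of r
    inputs across patch boundaries so the stitched output is identical."""
    r = len(k) // 2
    out, n = [], len(x)
    step = (n + n_patches - 1) // n_patches
    for a in range(0, n, step):
        b = min(a + step, n)
        # slice [a - r, b + r) with zero padding at the array ends
        seg = [x[i] if 0 <= i < n else 0.0 for i in range(a - r, b + r)]
        # 'valid' convolution over the haloed segment yields outputs [a, b)
        out += [sum(seg[i + j] * k[j] for j in range(len(k)))
                for i in range(b - a)]
    return out
```

Because each patch only needs its own slice plus a small halo, peak memory scales with the patch size rather than the full input, which is the same observation Patch Conv exploits for Conv2D.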
  • Feb 2024
    A new blog post, DistriFusion: Distributed Parallel Inference for High-Resolution Diffusion Models, is published.
    In this blog, we introduce DistriFusion, a training-free algorithm that harnesses multiple GPUs to accelerate diffusion model inference without sacrificing image quality. It reduces SDXL latency by up to 6.1× on 8 A100s. Our work has been accepted to CVPR 2024 as a highlight.
  • Mar 2024
    A new blog post, TinyChat: Visual Language Models & Edge AI 2.0, is published.
    Explore the latest advancement in TinyChat and AWQ: the integration of visual language models (VLMs) on the edge! Advances in VLMs allow LLMs to comprehend visual inputs, enabling image understanding tasks like caption generation, question answering, and more. With the latest release, TinyChat supports leading VLMs such as VILA, which can be easily quantized with AWQ, giving users a seamless experience for image understanding tasks.
  • Nov 2022
    A new blog post, On-Device Training Under 256KB Memory, is published.
    In MCUNetV3, we enable on-device training under 256KB SRAM and 1MB Flash, using less than 1/1000 of the memory of PyTorch while matching the accuracy on the visual wake words application. This lets the model adapt to newly collected sensor data, so users can enjoy customized services without uploading data to the cloud, thus protecting privacy.
  • May 2020
    A new blog post, Efficiently Understanding Videos, Point Cloud and Natural Language on NVIDIA Jetson Xavier NX, is published.
    Thanks to NVIDIA's amazing deep learning ecosystem, we were able to deploy three applications on the Jetson Xavier NX soon after receiving the kit: efficient video understanding with the Temporal Shift Module (TSM, ICCV'19), efficient 3D deep learning with Point-Voxel CNN (PVCNN, NeurIPS'19), and efficient machine translation with the Hardware-Aware Transformer (HAT, ACL'20).
  • Jul 2020
    A new blog post, Reducing the carbon footprint of AI using the Once-for-All network, is published.
    “The aim is smaller, greener neural networks,” says Song Han, an assistant professor in the Department of Electrical Engineering and Computer Science. “Searching efficient neural network architectures has until now had a huge carbon footprint. But we reduced that footprint by orders of magnitude with these new methods.”
  • Sep 2023
    A new blog post, TinyChat: Large Language Model on the Edge, is published.
    Running large language models (LLMs) on the edge is of great importance. In this blog, we introduce TinyChat, an efficient and lightweight system for LLM deployment on the edge. It runs Meta's latest LLaMA-2 model at 30 tokens/second on NVIDIA Jetson Orin and can easily support different models and hardware.
  • Oct 2023
    Song Han presented "Efficient Vision Transformer" at the ICCV 2023 Workshop on Resource-Efficient Deep Learning for Computer Vision (RCV'23).
  • Oct 2023
    Song Han presented "Quantization for Foundation Models" at the ICCV 2023 Workshop on Low-Bit Quantized Neural Networks.
  • Sep 2023
    Song Han presented "TinyChat for On-device LLM" at the IAP MIT Workshop on the Future of AI and Cloud Computing Applications and Infrastructure.
  • Jun 2023
    Song Han presented "Efficient Deep Learning Computing with Sparsity" at the CVPR Workshop on Efficient Computer Vision.
  • Nov 2021
    Song Han presented "TinyML and Efficient Deep Learning for Automotive Applications" at the Hyundai Motor Group Developers Conference.
  • Nov 2021
    Song Han presented "Plenary: Putting AI on a Diet: TinyML and Efficient Deep Learning" at the TinyML Technical Forum Asia.
  • Oct 2021
    Song Han presented "TinyML Techniques for Greener, Faster and Sustainable AI" at the IBM IEEE CAS/EDS – AI Compute Symposium.
  • Oct 2021
    Song Han presented "Challenges and Directions of Low-Power Computer Vision" at an International Conference on Computer Vision (ICCV) workshop panel.
  • Aug 2021
    Song Han presented "AutoML for Tiny Machine Learning" at the AutoML Workshop at the Knowledge Discovery and Data Mining (KDD) Conference.
  • Aug 2021
    Song Han presented "Frontiers of AI Accelerators: Technologies, Circuits and Applications" at the Hong Kong University of Science and Technology, AI Chip Center for Emerging Smart Systems.
  • Aug 2021
    Song Han presented "Putting AI On A Diet: TinyML and Efficient Deep Learning" at the Semiconductor Research Corporation (SRC) AI Hardware E-Workshops.
  • Jun 2021
    Song Han presented "NAAS: Neural-Accelerator Architecture Search" at the 4th International Workshop on AI-assisted Design for Architecture at ISCA.
  • Jun 2021
    Song Han presented "Machine Learning for Analog and Digital Design" at the VLSI Symposia workshop on AI/Machine Learning for Circuit Design and Optimization.
  • Jun 2021
    Song Han presented "Putting AI on a Diet: TinyML and Efficient Deep Learning" at the Efficient Deep Learning for Computer Vision Workshop at CVPR.
  • Jun 2021
    Song Han presented "Putting AI on a Diet: TinyML and Efficient Deep Learning" at MLOps World – Machine Learning in Production.
  • Jun 2021
    Song Han presented "Putting AI on a Diet: TinyML and Efficient Deep Learning" at Shanghai Jiaotong University.
  • May 2021
    Song Han presented "Putting AI on a Diet: TinyML and Efficient Deep Learning" at Apple’s On-Device ML Workshop.
  • Apr 2021
    Song Han presented "Putting AI on a Diet: TinyML and Efficient Deep Learning" at the MLSys'21 On-Device Intelligence Workshop.
  • Apr 2021
    Song Han presented "Putting AI on a Diet: TinyML and Efficient Deep Learning" at the ISQED'21 Embedded Tutorials.
  • Jan 2021
    Song Han presented "Efficient AI: Reducing the Carbon Footprint of AI in the Internet of Things (IoT)" at the MIT ILP Japan conference.
  • Nov 2020
    Song Han presented "Putting AI on a Diet: TinyML and Efficient Deep Learning" at an MIT ILP webinar session on low-power/edge/efficient computing.
  • Apr 2020
    Song Han presented "Once-for-All: Train One Network and Specialize it for Efficient Deployment" at a TinyML Webinar.