Efficient AI Computing, Transforming the Future.
Technique enables AI on edge devices to keep learning over time
Nov 16, 2023
StreamingLLM shows how one token can keep AI models running smoothly indefinitely
Oct 5, 2023
AI model speeds up high-resolution computer vision
MIT News
Sep 13, 2023
Smaller is Better: Q8-Chat LLM is an Efficient Generative AI Experience on Intel® Xeon® Processors
Aug 7, 2023
Copyright © MIT HAN Lab.
Designed and Developed by Yujun Lin.