Media
Technique enables AI on edge devices to keep learning over time. MIT News, Nov 16, 2023. Related project: PockEngine.
StreamingLLM shows how one token can keep AI models running smoothly indefinitely. VentureBeat, Oct 5, 2023. Related project: StreamingLLM.
AI model speeds up high-resolution computer vision. MIT News, MIT Homepage, Sep 13, 2023. Related project: EfficientViT.
Smaller is Better: Q8-Chat LLM is an Efficient Generative AI Experience on Intel® Xeon® Processors. Intel News, Aug 7, 2023. Related project: SmoothQuant.