Efficient AI Computing,
Transforming the Future.

TinyML and Efficient Deep Learning Computing

6.5940, Fall 2023

https://efficientml.ai

Large generative models (e.g., large language models, diffusion models) have shown remarkable performance, but they require massive computational resources. To make them more accessible, it is crucial to improve their efficiency. This course introduces efficient AI computing techniques that enable powerful deep learning applications on resource-constrained devices. Topics include model compression, pruning, quantization, neural architecture search, distributed training, data/model parallelism, gradient compression, and on-device fine-tuning. It also introduces application-specific acceleration techniques for large language models, diffusion models, video recognition, and point clouds, and covers quantum machine learning. Students will get hands-on experience deploying large language models (e.g., LLaMA 2) on a laptop.

  • Time: Tuesday/Thursday 3:35-5:00 pm Eastern Time
  • Location: 36-156
  • Office Hour: Thursday 5:00-6:00 pm Eastern Time, 38-344 Meeting Room
  • Discussion: Piazza
  • Homework Submission: Canvas
  • Contact:
    • For external inquiries, personal matters, or emergencies, you can email us at efficientml-staff [at] mit.edu.
    • If you are interested in getting updates, please sign up here to join our mailing list.

Instructor

Song Han, Associate Professor

Teaching Assistants

Announcements

  • 2023-12-14: Final report and course evaluation due.
  • 2023-10-31: Lab 5 is out.

Schedule

Date | Lecture | Logistics

Sep 7 | Lecture 1: Introduction [Slides] [Video] [Video (Live)]
Sep 12 | Lecture 2: Basics of Deep Learning [Slides] [Video] [Video (Live)]

Chapter I: Efficient Inference

Sep 14 | Lecture 3: Pruning and Sparsity (Part I) [Slides] [Video] [Video (Live)]
Sep 19 | Lecture 4: Pruning and Sparsity (Part II) [Slides] [Video] [Video (Live)]
Sep 21 | Lecture 5: Quantization (Part I) [Slides] [Video] [Video (Live)] | Lab 0 due
Sep 26 | Lecture 6: Quantization (Part II) [Slides] [Video] [Video (Live)]
Sep 28 | Lecture 7: Neural Architecture Search (Part I) [Slides] [Video] [Video (Live)] | Lab 1 due (extended to Sep 30 at 11:59 p.m.), Lab 2 out
Oct 3 | Lecture 8: Neural Architecture Search (Part II) [Slides] [Video] [Video (Live)]
Oct 5 | Lecture 9: Knowledge Distillation [Slides] [Video] [Video (Live)]
Oct 10 | Student Holiday — No Class
Oct 12 | Lecture 10: MCUNet: TinyML on Microcontrollers [Slides] [Video] [Video (Live)] | Lab 2 due
Oct 17 | Lecture 11: TinyEngine and Parallel Processing [Slides] [Video] [Video (Live)]

Chapter II: Domain-Specific Optimization

Oct 19 | Lecture 12: Transformer and LLM (Part I) [Slides] [Video] [Video (Live)] | Lab 3 due, Lab 4 out
Oct 24 | Lecture 13: Transformer and LLM (Part II) [Slides] [Video] [Video (Live)]
Oct 26 | Lecture 14: Vision Transformer [Slides] [Video] [Video (Live)] | Project ideas out (on Canvas)
Oct 31 | Lecture 15: GAN, Video, and Point Cloud [Slides] [Video] [Video (Live)] | Lab 4 due, Lab 5 out
Nov 2 | Lecture 16: Diffusion Model [Slides] [Video] [Video (Live)]

Chapter III: Efficient Training

Nov 7 | Lecture 17: Distributed Training (Part I) [Slides] [Video] [Video (Live)]
Nov 9 | Lecture 18: Distributed Training (Part II) [Slides] [Video] [Video (Live)]
Nov 14 | Lecture 19: On-Device Training and Transfer Learning [Slides] [Video] [Video (Live)] | Lab 5 due
Nov 16 | Lecture 20: Efficient Fine-tuning and Prompt Engineering [Slides] [Video] [Video (Live)]

Chapter IV: Advanced Topics

Nov 21 | Lecture 21: Basics of Quantum Computing [Slides] [Video] [Video (Live)] | Project proposal due
Nov 23 | Thanksgiving — No Class
Nov 28 | Lecture 22: Quantum Machine Learning [Slides] [Video] [Video (Live)]
Nov 30 | Lecture 23: Noise Robust Quantum ML [Slides] [Video] [Video (Live)]
Dec 5 | Lecture 24: Final Project Presentation [Slides] [Video] [Video (Live)]
Dec 7 | Lecture 25: Final Project Presentation [Slides] [Video] [Video (Live)]
Dec 12 | Lecture 26: Final Project Presentation + Course Summary [Slides] [Video] [Video (Live)]
Dec 14 | Project report and course evaluation due

Course Videos

Lecture 1: Introduction
Lecture 2: Basics of Deep Learning
Lecture 3: Pruning and Sparsity (Part I)
Lecture 4: Pruning and Sparsity (Part II)

Logistics

Grading

The class requirements include five labs and one final project. This is a PhD-level course; by the end of the class you should have a good understanding of efficient deep learning techniques and be able to deploy large language models (LLMs) on your laptop.

The grading breakdown is as follows:

  • 5 Labs (15% × 5 = 75%)
  • Final Project (25%)
    • Proposal (5%)
    • Presentation + Final Report (20%)
  • Participation Bonus (4%)

Note that this class does not have any tests or exams.

Labs

There will be five labs over the course of the semester; the sketch after this list gives a flavor of the first two.

  • Lab 1: Pruning
  • Lab 2: Quantization
  • Lab 3: Neural architecture search
  • Lab 4: LLM compression
  • Lab 5: LLM deployment on laptop
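
For readers new to these techniques, here is a minimal PyTorch sketch of the two ideas behind Labs 1 and 2: fine-grained magnitude pruning and symmetric int8 post-training quantization. This is illustrative only, not the lab starter code, and the function names are made up for this example.

```python
import torch

def magnitude_prune(weight: torch.Tensor, sparsity: float) -> torch.Tensor:
    """Fine-grained magnitude pruning: zero out the smallest-|w| entries."""
    num_zeros = int(round(weight.numel() * sparsity))
    if num_zeros == 0:
        return weight.clone()
    # Threshold = the num_zeros-th smallest magnitude (kthvalue is 1-indexed).
    threshold = weight.abs().flatten().kthvalue(num_zeros).values
    mask = (weight.abs() > threshold).to(weight.dtype)
    return weight * mask

def quantize_int8(weight: torch.Tensor):
    """Symmetric linear quantization: w ≈ scale * q, with q stored as int8."""
    scale = weight.abs().max() / 127.0  # assumes weight is not all zeros
    q = torch.clamp(torch.round(weight / scale), min=-127, max=127).to(torch.int8)
    return q, scale

w = torch.randn(64, 64)
w_pruned = magnitude_prune(w, sparsity=0.5)   # ~50% of entries become zero
q, scale = quantize_int8(w)
w_hat = q.float() * scale                     # dequantize
print((w_pruned == 0).float().mean().item())  # ~0.5
print((w - w_hat).abs().max().item())         # small reconstruction error
```

Either transform trades a little accuracy for less memory and compute; the labs quantify that trade-off and recover the lost accuracy.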

Collaboration Policy

Labs must be done individually: each student must hand in their own answers. It is, however, acceptable to collaborate when figuring out answers and to help each other solve the problems. We assume that, as participants in a graduate course, you take responsibility for making sure you personally understand the solution arising from any such collaboration. You must also indicate on each homework with whom you collaborated.

Late Policy

You are allowed 6 total homework late days without penalty for the entire semester; you may use up to 6 of them on a single assignment. Once those days are used, you will be penalized according to the following policy:

  • Homework is worth full credit if submitted by the due time on the due date.
  • Late days are counted in whole days (i.e., each new late day starts at 11:59 pm ET).
  • Once the allowed late days are exceeded, the penalty is 50% per additional late day, counted by day.
  • Homework is worth zero credit 2 days after exceeding the late-day limit. For example, an assignment turned in one day past your remaining late days earns at most 50% credit; two days past, it earns zero.

You must turn in at least 4 of the 5 assignments, even if for zero credit, in order to pass the course.

Regrade Policy

If you feel that we have made a mistake in grading your work, please submit a regrading request to the TAs during office hours and we will consider it. Please note that regrading may cause your grade to go either up or down.

Final Project

The class project will be carried out in groups of 2 or 3 people and has three main parts:

  • Proposal: choose from a list of suggested projects, or propose your own
  • Oral presentation (~10 minutes per group)
  • Final report (4 pages, using the NeurIPS template)

Participation Bonus

We appreciate everyone being actively involved in the class! There are several ways to earn participation bonus credit, capped at 4% in total:

  • Mid-semester evaluation (1%): around the middle of the semester, we will send out a survey to help us understand how the course is going and how we can improve; completing it is worth 1%.
  • Karma points (up to 3%): any other act that improves the class which a TA or instructor notices and deems worthy.