CS 189 at UC Berkeley
Introduction to Machine Learning
Lectures: Tu/Th 12:30-2 p.m., 155 Dwinelle
Instructor: Stella Yu
stellayu (at) berkeley.edu
Office Hours: Tu/Th 2-3 p.m. 400 Cory (see calendar)
Professor Anant Sahai
sahai (at) eecs.berkeley.edu
Office Hours: Tu/Th 2-3 p.m. 400 Cory (see calendar)
Week 1 Overview
Least Squares Framework
Week 2 Overview
Features, Regularization, Hyperparameters and Cross-Validation
Week 3 Overview
MLE, MAP, OLS, Bias-Variance Tradeoffs
Week 4 Overview
Weighted LS, Total LS, Eigenmethods
Week 5 Overview
CCA, Feature Discovery
Week 6 Overview
Nonlinear LS, Gradient Descent
Week 7 Overview
Neural Nets, Stochastic Gradient Descent
Week 8 Overview
Regression for Classification: Generative vs. Discriminative
Week 9 Overview
Loss Functions, Hinge-Loss, SVM
Week 10 Overview
Kernel Methods, Nearest Neighbor Techniques
Week 11 Overview
Decision Trees, Boosting, Ensemble Methods
Week 12 Overview
Convolutional Neural Nets, Regularization Revisited
Week 13 Overview
Unsupervised Learning: Nearest Neighbors
Week 14 Overview
Sparsity and Decision Trees
- Note 24 : Convolutional Neural Networks (CNN) (Draft)
- Homework 12 (TeX) (data) (solution) (self-grade)
- Homework 13 (TeX) (solution) (self-grade)
Week 15 Overview
Clustering, Generative Adversarial Networks (GANs)
- Final
- Note 25 : Dimensionality Reduction (Draft)
- Homework 14 (TeX) (solution) (self-grade)
Notes
See the Syllabus for more information, including a list of week-by-week topics.
- Note 1: Least Squares
- Note 2: Feature Engineering, Ridge Regression
- Note 3: Hyperparameters, Cross-Validation
- Note 4: Gaussians, MLE, MAP
- Note 5: Bias-Variance Tradeoff
- Note 6: Weighted Least Squares, Multivariate Gaussians
- Note 7: MAP with Colored Noise
- Note 8: Total Least Squares
- Note 9: Principal Component Analysis (PCA)
- Note 10: Canonical Correlation Analysis (CCA)
- Note 11: Nonlinear Least Squares
- Note 12: Neural Nets: Introduction
- Note 13: Backpropagation
- Note 14: QDA/LDA, More Multivariate Gaussians
- Note 15: Discriminative Models, Logistic Regression
- Note 16: Training Logistic Regression, Multiclass Logistic Regression
- Note 17: Support Vector Machines (SVM)
- Note 18: Duality and Dual SVMs
- Note 19: Kernels
- Note 20: Nearest Neighbor Classification
- Note 21: Sparsity
- Note 22: Decision Trees and Random Forests (Draft)
- Note 23: Boosting
- Note 24: Convolutional Neural Networks (CNN) (Draft)
- Note 25: Dimensionality Reduction (Draft)
Discussions
Discussion sections may cover new material and give you additional practice solving problems. You may attend any discussion section you like; however, if there are fewer desks than students, officially enrolled students get seating priority. See the Syllabus for more information.
- Discussion 01: Review, Least Squares (solution)
- Discussion 02: Ridge Regression
- Discussion 03: Bias-Variance Tradeoff (solution)
- Discussion 04: Multivariate Gaussians (solution)
- Discussion 05: PCA, CCA, and Convexity (solution)
- Discussion 06: Gradient Descent (solution)
- Discussion 07: Backpropagation (solution)
- Discussion 09: LDA/QDA/SGD (solution)
- Discussion 10: SGD/SVM (solution)
- Discussion 11: Kernels/Nearest Neighbors (solution)
- Discussion 13: Convolutional Neural Networks (solution)
- Discussion 14: Clustering (solution)
Homeworks
All homeworks are graded, and completing them is highly recommended. Your lowest homework score will be dropped, but save this drop for emergencies. See the Syllabus for more information.
- Homework 00: Course Logistics (solution) (self-grade)
- Homework 01: Review and Least Squares (TeX) (data) (solution) (self-grade)
- Homework 02: Ridge Regression (TeX) (data) (solution) (self-grade)
- Homework 03: Probabilistic Models (TeX) (data) (solution) (self-grade)
- Homework 04: Total Least Squares (TeX) (data) (solution) (self-grade)
- Homework 05: Canonical Correlation Analysis (TeX) (data) (solution) (self-grade)
- Homework 06: Gradient Descent (TeX) (data) (solution) (self-grade)
- Homework 07: Backpropagation (TeX) (solution) (self-grade) (solution code)
- Homework 08: Midterm Redo (TeX) (solution) (self-grade)
- Homework 09: Classification and SGD (TeX) (solution) (self-grade)
- Homework 10: Support Vector Machines (TeX) (solution) (self-grade)
- Homework 11: Kernels and Neighbors (TeX) (solution) (self-grade)
- Homework 12: Sparsity and Decision Trees (TeX) (data) (solution) (self-grade)
- Homework 13: (Convolutional) Neural Networks (TeX) (solution) (self-grade)
- Homework 14: K-SVD (TeX) (solution) (self-grade)