ML models are ubiquitous -- from transportation (self-driving cars) to finance (credit card or mortgage applications) and careers (company hiring). ML, however, does not come without risks. Some important risks involve model understanding and accountability: models created by machine learning are largely black boxes that are hard to peer into and understand; they are susceptible to unforeseen faults, to adversarial manipulation, and to violations of ethical norms in privacy and fairness. This course provides an introduction to state-of-the-art ML methods designed to make AI more trustworthy. The course focuses on four concepts: explanations, fairness, privacy, and robustness. We first discuss how to explain and interpret ML model outputs and inner workings. Then, we examine how bias and unfairness can arise in ML models and learn strategies to mitigate these problems. Next, we look at differential privacy and membership inference in the context of models leaking sensitive information when they are not supposed to. Finally, we look at adversarial attacks and methods for imparting robustness against adversarial manipulation. Students will gain an understanding of a set of methods and tools for deploying transparent, ethically sound, and robust machine learning solutions. Students will complete labs and homework assignments and discuss weekly readings. Students will also complete a course term project on a topic of their choice, using methods presented in the course.
Students are expected to have the following background:
Permissive but strict. If unsure, please ask the course staff!
We’re generally open to auditing requests from all Stanford affiliates. External requests will be decided on a case-by-case basis, mostly because the course is hosted on Canvas and we’re not sure how non-Stanford affiliates can access Canvas.
You will be able to attend all the lectures, but we won't be able to grade your homework or give advice on final projects. Our human resources are limited: there are only four of us, and on top of teaching, we also have full-time jobs or are full-time students.
Even if you’re not auditing, you can still access all the slides, notes, assignments, and final report instructions. These are posted on the Syllabus page.
To audit the class, please send cs329t-spr2122-staff@lists.stanford.edu an email with the subject line "CS329T: Audit Request" and a few sentences introducing yourself and your relevant background.
We’ll add you as an observer on Canvas -- you'll need to create a Canvas account first. You can then attend all lectures scheduled there.
There's no textbook. The course relies on lecture slides and accompanying readings.