Large Language Models (LLMs) and applications powered by them have received tremendous attention, especially since ChatGPT was released in November 2022. This course provides an introduction to state-of-the-art methods and tools for making LLMs – both models and applications – more trustworthy. The course is organized into three modules:

Part I provides background on the emerging LLMOps stack. Students will get a quick introduction to building LLM apps with LlamaIndex and complete a hands-on homework evaluating a Retrieval-Augmented Generation (RAG) question-answering app built with an LLM and a vector database.

Part II covers key application areas of LLMs, in particular healthcare, education, and security. We will interleave presentations with brainstorming about project directions.

Part III covers state-of-the-art methods and tools for evaluating LLMs and LLM apps, sampling topics such as relevance, groundedness, confidence, calibration, uncertainty, explainability, privacy, fairness, toxicity, and adversarial attacks. Students will gain an understanding of a set of methods and tools for evaluating LLM applications.

Students will complete one homework assignment to gain the necessary background; the bulk of the effort will go into a quarter-long course project.
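To give a flavor of the evaluation topics above, here is a toy sketch of two of the metrics mentioned – "relevance" (does the retrieved context match the query?) and "groundedness" (is the answer supported by the context?). The token-overlap scoring below is purely illustrative; real evaluation tools typically use LLM- or embedding-based judges, and all function names here are our own, not from any specific library.

```python
# Toy sketch of two RAG evaluation metrics: relevance and groundedness.
# Scores are simple token-overlap ratios; production evaluators use
# LLM-based or embedding-based judges instead. Names are illustrative.

def _tokens(text: str) -> set[str]:
    """Lowercased word set, ignoring surrounding punctuation."""
    return {w.strip(".,!?;:").lower() for w in text.split()}

def relevance(query: str, context: str) -> float:
    """Fraction of query tokens that appear in the retrieved context."""
    q, c = _tokens(query), _tokens(context)
    return len(q & c) / len(q) if q else 0.0

def groundedness(answer: str, context: str) -> float:
    """Fraction of answer tokens supported by the retrieved context."""
    a, c = _tokens(answer), _tokens(context)
    return len(a & c) / len(a) if a else 0.0

if __name__ == "__main__":
    ctx = "The capital of France is Paris."
    print(relevance("What is the capital of France?", ctx))   # high: context matches query
    print(groundedness("Paris is the capital of France", ctx))  # 1.0: fully supported
```

A real homework would replace these heuristics with an evaluation framework, but the interface – score a (query, context, answer) triple – is the same idea.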
Students are expected to have the following background:
Permissive but strict. If unsure, please ask the course staff!
We’re generally open to auditing requests from all Stanford affiliates and external parties. Auditors can attend all the lectures, but due to limited resources we won't be able to grade homework or give advice on final projects.
Even if you’re not auditing, you can still access all the slides, notes, assignments, and final report instructions. These are posted on the Syllabus page.
To audit the class, please send the TAs an email with the subject line "CS329T: Audit Request" and a few sentences introducing yourself and your relevant background.
There's no textbook. The course relies on lecture slides and accompanying readings.