Teaching

University of Texas at Austin: I will teach in Spring 2025.

Princeton University: I gave a guest lecture on Reinforcement Learning from Human Feedback in the Introduction to Reinforcement Learning course.

Carnegie Mellon University: I was a Teaching Assistant for the following two courses, which served as the core course requirements for Ph.D. students in the Machine Learning Department at CMU:

  • 10-715 Advanced Introduction to Machine Learning in Fall 2019.
Course Description: The rapid improvement of sensory techniques and processor speed, and the availability of inexpensive massive digital storage, have led to a growing demand for systems that can automatically comprehend and mine massive and complex data from diverse sources. Machine Learning is becoming the primary mechanism by which information is extracted from Big Data, and a primary pillar that Artificial Intelligence is built upon. This course is designed for Ph.D. students whose primary field of study is machine learning, or who intend to make machine learning methodological research a main focus of their thesis. It will give students a thorough grounding in the algorithms, mathematics, theories, and insights needed to do in-depth research and applications in machine learning.
  • 10-716 Advanced Machine Learning: Theory & Methods in Spring 2019.
Course Description: Advanced Machine Learning is a graduate-level course introducing the theoretical foundations of modern machine learning, as well as advanced methods and frameworks used in modern machine learning. The course assumes that students have taken graduate-level introductory courses in machine learning (Introduction to Machine Learning, 10-701 or 10-715) and statistics (Intermediate Statistics, 36-700 or 36-705). The course treats both the art of designing good learning algorithms and the science of analyzing an algorithm's computational and statistical properties and performance guarantees. We will cover theoretical foundation topics such as computational and statistical convergence rates, minimax estimation, and concentration of measure. We will also cover advanced machine learning methods such as nonparametric density estimation, nonparametric regression, and Bayesian estimation, as well as advanced frameworks such as privacy, causality, and stochastic learning algorithms.