Title:

Frontiers and Applications at the Interface of Discrete Optimization and Interpretable Machine Learning

Abstract:

Modern machine learning models achieve remarkable predictive accuracy and can capture complex interactions, but they are often difficult to interpret and may fail to reveal useful relationships in the data. This lack of interpretability also limits their use in high-stakes applications such as healthcare, where predictions must be auditable for trust and safety. Using tree ensembles (e.g., gradient boosting or random forests) as a motivating example, we propose a novel optimization-based framework for extracting interpretable rule-based models after training. We formulate rule extraction as a large-scale discrete optimization problem that balances predictive accuracy against considerations such as model compactness, stability, and transparency. To solve the resulting problems, we develop specialized algorithms that scale beyond the capabilities of off-the-shelf optimization software. Using mental health telehealth treatment data from our industry collaborators at SilverCloud Health, we demonstrate how these methods enable practitioners to extract meaningful insights from complex datasets and predictive models.
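
To give a concrete, if simplified, picture of what a rule-extraction formulation of this kind can look like, consider the following sketch. The notation and objective below are illustrative assumptions, not necessarily the formulation presented in the talk. Suppose rules r_1, ..., r_m are enumerated from the root-to-leaf paths of a trained ensemble, where r_j(x) ∈ {0,1} indicates whether rule j fires on input x. A compact rule-based model could then be selected by solving

    \min_{\beta \in \mathbb{R}^m,\; z \in \{0,1\}^m} \;\; \sum_{i=1}^{n} \ell\Big(y_i,\; \sum_{j=1}^{m} \beta_j z_j\, r_j(x_i)\Big) \;+\; \lambda \sum_{j=1}^{m} z_j,

where \ell is a loss function, z_j selects whether rule j enters the extracted model, \beta_j is its weight, and \lambda trades predictive accuracy against compactness. Problems of this form grow quickly with the number of candidate rules, which is what motivates specialized algorithms in place of off-the-shelf solvers.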

Bio:

Brian Liu is a fifth-year Ph.D. candidate in Operations Research at MIT, advised by Professor Rahul Mazumder. His research lies at the intersection of discrete optimization, statistics, and computer science, with a focus on developing efficient and interpretable machine learning algorithms. His work is motivated by real-world applications in domains such as healthcare and medicine, and it has received multiple Best Student Paper Awards from INFORMS (Data Mining; Quality, Statistics, and Reliability) and the American Statistical Association (Statistical Computing; Nonparametric Statistics).