Antonio Linero Given NSF CAREER Award

January 3, 2022 • by the Department of Statistics and Data Sciences

Congratulations to Antonio Linero (assistant professor, Department of Statistics and Data Sciences, College of Natural Sciences, The University of Texas at Austin) on his recent National Science Foundation (NSF) award from the Division of Mathematical Sciences. Linero will serve as principal investigator for the project titled "Foundations for Bayesian Nonparametric Causal Inference." Learn about this innovative project by reading the abstract below.

Abstract: Fueled by the remarkable recent success of machine learning and artificial intelligence, there has been substantial interest in using machine learning to shed light on important policy questions. Examples of such questions include "will this treatment for a disease improve the quality of life of a particular patient?" and "will participation in this academic program increase student achievement?" Applying state-of-the-art predictive algorithms to inform policy has received tremendous attention over the past decade, with contributions made by econometricians, statisticians, and computer scientists. Despite the successes of machine learning, there are pitfalls that are not well understood by practitioners. We argue that the apparent flexibility of machine learning leads indirectly to a sort of rigidity, with the consequence that the results of an analysis may be a foregone conclusion, driven only by the choice to use a flexible model rather than by any empirical data. For example, it is a popular refrain that "correlation is not causation" and that one must be wary of common causes that can explain an apparent causal relation; we show, however, that poorly designed machine learning methods behave much as humans do and are biased, in a sense, toward interpreting correlations as causal relations. The overall objective of this proposal is to understand and correct for the hidden assumptions underlying a particular class of algorithms based on Bayesian inference, to develop robust methodology based on these insights, and to begin the development of an overarching computational framework for implementing policy-oriented Bayesian machine learning methods in practice.
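The "correlation is not causation" pitfall mentioned in the abstract can be illustrated with a minimal simulation (this sketch is not part of the project; all quantities are invented for illustration): a hidden common cause drives both a "treatment" and an outcome, so a naive regression attributes the correlation to the treatment, while adjusting for the common cause removes the spurious effect.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000
z = rng.normal(size=n)          # hidden common cause
t = z + rng.normal(size=n)      # "treatment" driven by z
y = z + rng.normal(size=n)      # outcome driven by z, NOT by t

# Naive slope of y on t: attributes the z-induced correlation to t.
naive = np.cov(t, y)[0, 1] / np.var(t)

# Adjusted: regress y on [1, t, z]; the t coefficient is near zero.
X = np.column_stack([np.ones(n), t, z])
beta = np.linalg.lstsq(X, y, rcond=None)[0]
adjusted = beta[1]

print(round(naive, 2), round(adjusted, 2))
```

Even though the true causal effect of `t` on `y` is zero here, the naive estimate is strongly positive; only adjusting for the common cause `z` recovers a near-zero effect.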

The appeal of Bayesian machine learning is that it promises to marry the predictive accuracy of modern machine learning with the principled uncertainty quantification of Bayesian inference. The indirect nature of prior specification for many problems leads, however, to a phenomenon we refer to as prior dogmatism: due to the inherent properties of independent priors on high-dimensional spaces, poorly designed Bayesian models can exhibit extreme bias toward the hypothesis that the amount of confounding is negligible. The first objective of this project is to characterize when this occurs (and, importantly, when it does not) in the relatively simple setting of an observational study with many potential confounding variables, and to develop Bayesian methods that can be proven theoretically to be robust to dogmatism. The second objective is to extend the insights obtained from the first objective to more advanced designs, such as adaptive clinical trials and observational studies with time-varying treatments and confounding variables. The final objective is to begin the development of a comprehensive computational platform for implementing Bayesian nonparametric causal inference that allows for both (i) careful control over prior specification for important causal parameters and (ii) in-depth sensitivity analysis to assess robustness to untestable causal assumptions.
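A toy analogue of the dogmatism phenomenon described above can be sketched with ridge regression, viewed as the MAP estimate under independent Gaussian priors on many confounder coefficients (this is an illustrative simplification, not the project's methodology; all parameter values are invented): when the prior shrinks the confounder coefficients heavily, the model under-adjusts and the treatment-effect estimate reverts toward the confounded value, behaving as if confounding were negligible.

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 2000, 50
Z = rng.normal(size=(n, p))            # many observed confounders
a = b = np.full(p, 0.2)                # each confounder weakly moves t and y
t = Z @ a + rng.normal(size=n)
y = Z @ b + rng.normal(size=n)         # true causal effect of t is ZERO

def map_tau(lam):
    # MAP treatment-effect estimate: flat prior on the treatment
    # coefficient, independent N(0, sigma^2/lam) priors on the p
    # confounder coefficients (ridge penalty lam on those entries only).
    X = np.column_stack([t, Z])
    D = np.diag([0.0] + [lam] * p)     # penalize only confounder coefficients
    beta = np.linalg.solve(X.T @ X + D, X.T @ y)
    return beta[0]

flat = map_tau(0.0)        # full adjustment: estimate near the true value 0
dogmatic = map_tau(1e6)    # heavy shrinkage: reverts to the confounded estimate
print(round(flat, 2), round(dogmatic, 2))
```

With no shrinkage the estimate is close to the true null effect; with an extreme independent prior the confounder coefficients are forced toward zero and the estimate is biased toward the naive, confounded slope.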


More information can be found at the NSF website: Award Abstract #2144933.

