Statistics Meet Neural Networks: Bootstrap, Cross-Validations, and Beyond

Jun S. Liu (Harvard University)



Inspired by the great successes of neural networks (NN) for various AI tasks such as image recognition and machine translation, we examine how recent ideas in NN research may be effectively employed in classical statistical problems. Many statistical estimation problems can be formulated as solving for an M-estimator, and their uncertainties can be quantified via multiple copies of weighted M-estimators, as in bootstrap methods. Similarly, tuning-parameter selection via cross-validation can be formulated as placing weights on samples and obtaining different solutions under different sets of weights and different specifications of the tuning parameters. In this talk, we discuss ways of setting up flexible neural networks (a) to take different weights as inputs and produce the desired outputs, so as to achieve either uncertainty quantification or tuning-parameter selection, and (b) to form a nonparametric prior that enables nonparametric Bayes analysis. This is based on joint work with Minsuk Shin, Shijie Wang, Zhirui Hu, and Tracy Ke.
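To make the weighted-M-estimator idea concrete, here is a minimal sketch (not the speaker's actual method) of how random weights on samples yield multiple copies of an M-estimator whose spread quantifies uncertainty. It uses weighted least squares with Exp(1) weights, i.e., a Bayesian-bootstrap-style scheme; all data and parameter choices below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated regression data (illustrative): y = 2*x + noise.
n = 200
x = rng.normal(size=n)
y = 2.0 * x + rng.normal(scale=0.5, size=n)

def weighted_ls_slope(w):
    """Weighted least-squares M-estimator of the slope (no intercept):
    argmin_b sum_i w_i * (y_i - b*x_i)^2, which has a closed form."""
    return np.sum(w * x * y) / np.sum(w * x * x)

# Each draw of random Exp(1) weights gives one weighted M-estimator;
# the spread across draws quantifies the estimator's uncertainty.
B = 1000
slopes = np.array([weighted_ls_slope(rng.exponential(size=n))
                   for _ in range(B)])

print(f"point estimate: {weighted_ls_slope(np.ones(n)):.3f}")
print(f"bootstrap SE:   {slopes.std(ddof=1):.3f}")
```

The NN idea in the abstract replaces the per-draw re-solving above with a single network trained to map a weight vector directly to the corresponding weighted solution, so that new bootstrap copies cost only a forward pass.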
