1st Astrostatistics School

by Stefano Andreon and Roberto Trotta

Program

Lecture 1)

Probability axioms. A first posterior computation, analytically and by numerical sampling. Upper limits. Initial discussion of the role of the prior. The importance of checking numerical convergence. A glimpse of sensitivity analysis.
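
As a flavour of what this lecture covers, here is a minimal sketch (my own toy example, not the course material): the posterior for a Poisson rate s given n observed counts and a flat prior on s >= 0 is a Gamma(n + 1, 1) distribution, so the sampling result can be checked against the analytic answer, and an upper limit read off as a quantile.

```python
import numpy as np

# Toy example: posterior for a Poisson rate s, given n observed counts
# and a flat prior on s >= 0. The posterior is Gamma(n + 1, 1), which we
# sample directly and compare with the analytic mean.
rng = np.random.default_rng(1)
n = 3                                    # observed counts
samples = rng.gamma(shape=n + 1, scale=1.0, size=100_000)

post_mean = samples.mean()               # analytic value: n + 1 = 4
upper95 = np.quantile(samples, 0.95)     # 95% upper limit on s

print(f"posterior mean ~ {post_mean:.2f} (analytic: {n + 1})")
print(f"95% upper limit ~ {upper95:.2f}")
```

In a real analysis the sampling would come from an MCMC chain (e.g. JAGS) rather than direct draws, which is why checking convergence matters.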

Lecture 2)

Single-parameter models. Combining information from more than one datum. The prior (and the Malmquist-like effect). Prior sensitivity. A first two-parameter model. A first joint probability contour.
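
The simplest case of combining information from several data is worth writing down (an illustrative sketch with my own numbers, not the course's): two independent Gaussian measurements of the same quantity, under a flat prior, give a Gaussian posterior with inverse-variance-weighted mean and combined precision.

```python
import numpy as np

# Combining two independent Gaussian measurements x_i +/- s_i of the
# same quantity, with a flat prior: the posterior mean is the
# precision-weighted average, and precisions add.
x = np.array([10.0, 12.0])     # measured values
s = np.array([1.0, 2.0])       # their Gaussian errors

w = 1.0 / s**2                          # precisions
mu_post = np.sum(w * x) / np.sum(w)     # posterior mean
sigma_post = 1.0 / np.sqrt(np.sum(w))   # posterior standard deviation

print(f"combined: {mu_post:.2f} +/- {sigma_post:.2f}")
```

Note that the combined error is smaller than either individual error, and the mean sits closer to the better-measured datum.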

Lecture 3)

An additional two-parameter model (measuring the intrinsic scatter using these data). Comparison of the performance of state-of-the-art methods for measuring a dispersion. Introduction to regression: a) pay attention to selection effects! b) avoid fishing expeditions; c) prediction differs from parameter estimation (test it with this sample, generated with JAGS; its CODAindex is here). Comparison of regression fitters: Bayes has lower bias, fairer errors, and less noisy errors.
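
One common way (a sketch of mine, not necessarily the method used in the lecture) to measure an intrinsic scatter is to model each datum as y_i ~ N(mu, err_i^2 + sig^2) with known measurement errors err_i, put flat priors on mu and sig, and evaluate the posterior for sig on a grid after marginalizing over mu:

```python
import numpy as np

# Intrinsic-scatter measurement on simulated data: each y_i is drawn
# with variance err_i^2 + sig^2; the posterior for sig is evaluated on
# a (mu, sig) grid and mu is marginalized out by summation.
rng = np.random.default_rng(2)
true_mu, true_sig = 0.0, 0.5
err = rng.uniform(0.1, 0.3, size=50)                  # known measurement errors
y = rng.normal(true_mu, np.sqrt(err**2 + true_sig**2))

mu_grid = np.linspace(-1, 1, 201)
sig_grid = np.linspace(0.0, 1.5, 151)
M, S = np.meshgrid(mu_grid, sig_grid, indexing="ij")

var = err[None, None, :] ** 2 + S[..., None] ** 2
loglike = -0.5 * np.sum((y - M[..., None]) ** 2 / var + np.log(var), axis=-1)
post = np.exp(loglike - loglike.max())
post_sig = post.sum(axis=0)                           # marginalize over mu
sig_hat = sig_grid[np.argmax(post_sig)]
print(f"intrinsic scatter estimate ~ {sig_hat:.2f} (true: {true_sig})")
```

In practice the course uses JAGS sampling rather than a grid, but the grid makes the marginalization explicit.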

Lecture 4)

Starting easy: non-linear regression with non-Gaussian errors of different sizes (but no error on the predictor and no intrinsic scatter). The data. Adding complexity: allowing for systematics (intrinsic scatter), using these data.
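
A toy version of the "starting easy" case (my own example, not the lecture's dataset): counts y_i are Poisson, hence non-Gaussian, with rate a * x_i^b, the predictor x is error-free, and the posterior under flat priors on a and b is mapped on a grid:

```python
import numpy as np

# Non-linear regression with Poisson (non-Gaussian) errors and no error
# on the predictor: rate(x) = a * x^b, posterior mapped on an (a, b) grid.
rng = np.random.default_rng(3)
x = np.linspace(1.0, 10.0, 30)
a_true, b_true = 5.0, 1.5
y = rng.poisson(a_true * x**b_true)

a_grid = np.linspace(2, 8, 121)
b_grid = np.linspace(1.0, 2.0, 101)
A, B = np.meshgrid(a_grid, b_grid, indexing="ij")

rate = A[..., None] * x[None, None, :] ** B[..., None]
loglike = np.sum(y * np.log(rate) - rate, axis=-1)    # Poisson log-likelihood (up to a constant)
ia, ib = np.unravel_index(np.argmax(loglike), loglike.shape)
print(f"best fit: a ~ {a_grid[ia]:.2f}, b ~ {b_grid[ib]:.2f}")
```

The key point is that the likelihood is Poisson, not Gaussian, so least squares would be the wrong tool here.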

Lecture 5)

Adding more complexity (heteroscedastic errors on x, the Magorrian relation), using these data. Regression with two (or more) predictors, using Planck data, to be done alone without any help. A glimpse of other important issues such as mixtures of regressions, non-random data collection, and model checking.
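
For the two-predictor case, a minimal warm-up (my own simulated data, not the Planck exercise) is linear regression y = b0 + b1*x1 + b2*x2 + noise; with Gaussian errors and flat priors, the posterior mean of the coefficients coincides with the ordinary least-squares solution:

```python
import numpy as np

# Regression with two predictors on simulated data; OLS is the
# flat-prior, Gaussian-error limit of the Bayesian fit.
rng = np.random.default_rng(4)
n = 200
x1 = rng.normal(size=n)
x2 = rng.normal(size=n)
y = 1.0 + 2.0 * x1 - 0.5 * x2 + rng.normal(scale=0.3, size=n)

X = np.column_stack([np.ones(n), x1, x2])       # design matrix with intercept
beta, *_ = np.linalg.lstsq(X, y, rcond=None)    # posterior mean under flat priors
print("coefficients:", np.round(beta, 2))
```

The full exercise adds errors on the predictors and intrinsic scatter, where this shortcut no longer applies.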

Lecture 6)

Introduction to Bayesian model comparison. Automatic implementation of Occam's razor. Model likelihood as predictive data probability. The three levels of inference. Model comparison slides. Exercises.
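
The automatic Occam's razor can be seen in a fully analytic toy case (my own worked example): for a coin giving k heads in n tosses, compare M0 ("fair, p = 1/2") with the more flexible M1 ("p unknown, uniform prior"). The model likelihood of the observed sequence is 0.5^n under M0 and the Beta integral under M1.

```python
from math import comb

# Bayes factor for k heads in n tosses: fixed p = 1/2 versus a uniform
# prior on p. The integral of p^k (1-p)^(n-k) dp is Beta(k+1, n-k+1)
# = 1 / ((n+1) * C(n, k)).
n, k = 10, 5
evidence_M0 = 0.5**n                       # p fixed at 1/2
evidence_M1 = 1.0 / ((n + 1) * comb(n, k))
bayes_factor = evidence_M0 / evidence_M1

print(f"B01 = {bayes_factor:.2f}")         # > 1: the simpler model is favoured
```

Even though the data (5 heads in 10) are perfectly compatible with both models, the flexible model is penalized for spreading its predictions over many datasets that did not occur.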

Lecture 7)

Comparison with Frequentist hypothesis testing and p-values. The Bayesian evidence: Meaning and interpretation. Asymptotic behaviour. Prior-free evidence bounds. Sensitivity analysis. Dependency on the choice of prior.

Lecture 8)

Computation of the evidence: Savage-Dickey Density ratio (SDDR); Laplace approximation; nested sampling and MultiNest implementation. Model complexity and Kullback-Leibler divergence. Bayesian Model Averaging and applications in cosmology.
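
The Savage-Dickey density ratio is simple enough to sketch in the Gaussian-Gaussian case (my own toy numbers): for nested models with the extra parameter theta ~ N(0, tau^2) a priori and a Gaussian measurement theta_hat +/- sigma, the Bayes factor in favour of theta = 0 is the posterior density at 0 divided by the prior density at 0.

```python
import numpy as np

# Savage-Dickey density ratio for a Gaussian prior and Gaussian
# likelihood: B01 = p(theta=0 | data) / p(theta=0).
def norm_pdf(x, mu, sd):
    return np.exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * np.sqrt(2 * np.pi))

tau = 1.0                                       # prior width
theta_hat, sigma = 0.5, 0.4                     # measurement and its error

# Conjugate Gaussian-Gaussian posterior:
post_var = 1.0 / (1.0 / tau**2 + 1.0 / sigma**2)
post_mean = post_var * theta_hat / sigma**2

B01 = norm_pdf(0.0, post_mean, np.sqrt(post_var)) / norm_pdf(0.0, 0.0, tau)
print(f"SDDR Bayes factor B01 ~ {B01:.2f}")
```

Outside conjugate cases the posterior density at the nested value is estimated from the MCMC samples, which is what makes the SDDR practical.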

Lectures 1 to 5 are organized in this book.

Attendance at the course has prerequisites, specified HERE. These in turn require JAGS. The JAGS user manual is here. To test your reading of JAGS output, use CODAchain1.txt and CODAindex.txt.
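
As a hint for the reading test, here is a sketch of a reader for the two files, assuming the classic BUGS/JAGS CODA layout (each CODAindex.txt line is "variable first_line last_line"; each CODAchain1.txt line is an "iteration value" pair) — check against the actual files, since this layout is an assumption of mine:

```python
# Reader for classic CODA output: returns a dict mapping each variable
# name to its list of sampled values.
def read_coda(index_file="CODAindex.txt", chain_file="CODAchain1.txt"):
    with open(chain_file) as f:
        # keep only the sampled value from each "iteration value" pair
        values = [float(line.split()[1]) for line in f if line.strip()]
    chains = {}
    with open(index_file) as f:
        for line in f:
            if not line.strip():
                continue
            name, first, last = line.split()
            chains[name] = values[int(first) - 1 : int(last)]
    return chains

# Usage: chains = read_coda(); then e.g. sum(chains["mu"]) / len(chains["mu"])
```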