On October 25 we held our first free webinar introducing Stan and Bayesian inference to a broad, mostly non-academic audience. We were thrilled with the level of interest in the subject. Out of 1,300 people who registered for the event, 850 attended, and most stayed for the duration of the 1.5-hour talk. Andrew presented a lot of information, and I wanted to share some references from the talk.
First, the abstract:
Stan is a free and open-source probabilistic programming language and Bayesian inference engine. In this talk, we will demonstrate the use of Stan for some small problems in sports ranking, nonlinear regression, mixture modeling, and decision analysis, to illustrate the general idea that Bayesian data analysis involves model building, model fitting, and model checking. One of our major motivations in building Stan is to efficiently fit complex models to data, and Stan has indeed been used for this purpose in social, biological, and physical sciences, engineering, and business. The purpose of the present webinar is to demonstrate using simple examples how one can directly specify and fit models in Stan and make logical decisions under uncertainty.
If you missed the talk or if you just can’t get enough of Andrew, following is the complete recording.
The PDF of the presentation, which includes Stan code, can be downloaded here.
One of my favorite examples of the superiority of models that take into account specific problem structure, rather than relying on canonical models like logistic regression, is Andrew's golf example. If you want to play with this example, you can download it from my GitHub repo. If you want to see a fully rendered version, you can view it here.
There were lots of questions about getting started with Stan. If you are an R user and are new to Bayes, you should check out the rstanarm package. Ben and Jonah created this package to introduce full Bayes to the broader R community.
If you want to jump directly into Stan, and we hope that you do, the Stan Manual is a surprisingly readable reference for coding most common models in Stan, from simple regression to Gaussian processes. The latest version of the manual can be downloaded from Stan's documentation page. On the same page you will find lots of examples, videos, case studies, and tutorials.
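To give a flavor of what "directly specifying a model" looks like, here is a minimal sketch of a simple linear regression written in the Stan language. This toy program is my own illustration (not from the webinar slides), along the lines of the introductory models covered in the manual:

```stan
// Toy example: simple linear regression y = alpha + beta * x + noise.
// Data are passed in from the interface (e.g., RStan or PyStan).
data {
  int<lower=0> N;      // number of observations
  vector[N] x;         // predictor
  vector[N] y;         // outcome
}
parameters {
  real alpha;          // intercept
  real beta;           // slope
  real<lower=0> sigma; // residual standard deviation
}
model {
  // Likelihood; with no explicit priors, Stan uses flat priors
  // over the declared parameter ranges.
  y ~ normal(alpha + beta * x, sigma);
}
```

The data, parameters, and model blocks shown here are the basic structure of every Stan program; the manual builds from this template up to hierarchical models and Gaussian processes.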
If you have a deep mathematical background and want to learn why Hamiltonian Monte Carlo with NUTS, the sampling algorithm inside of Stan, is superior to Gibbs sampling and Metropolis, check out the original NUTS paper by Matt Hoffman and Andrew Gelman, and also take a look at Michael Betancourt's HMC papers.
Finally, the first Stan conference, StanCon 2017, is coming to New York. Please join us in January if you are in the New York area.