The samplr package

Package Overview

A common explanation of many human behaviours is that people internally generate a small number of examples (i.e., samples) which they use to make judgments, estimates, etc. Behaviours, then, should depend both on how those samples are acquired and on how they are used. The SAMPLING project investigated whether the way in which people generate samples can be described by one of a family of local sampling algorithms called Markov Chain Monte Carlo (MCMC). An initial model, the Bayesian Sampler (BS; Zhu, Sanborn, and Chater 2020), uses iid samples to explain probability judgments; a later model, the Autocorrelated Bayesian Sampler (ABS; Zhu et al. 2024), uses MCMC samples to explain probability judgments as well as choices, confidence judgments, response times, estimates, and confidence intervals.
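The BS's core idea can be sketched in a few lines: a probability judgment is the posterior mean of a symmetric Beta(β, β) prior updated with N iid samples, which pulls judgments away from 0 and 1. The function below is a hypothetical illustration of that formula in Python, not code from the samplr package.

```python
def bayesian_sampler_judgment(successes, n_samples, beta):
    """Mean probability judgment under the Bayesian Sampler:
    (k + beta) / (N + 2*beta), i.e. the posterior mean of a
    symmetric Beta(beta, beta) prior after N iid samples."""
    return (successes + beta) / (n_samples + 2 * beta)

# With beta > 0, even 0 successes out of 10 samples yields a
# judgment above zero: (0 + 1) / (10 + 2) = 1/12
print(bayesian_sampler_judgment(0, 10, 1.0))
```

This regularisation is what lets the model produce the systematic incoherence (e.g., subadditivity) observed in human probability judgments.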

The samplr package includes the BS and ABS models, so that they can easily be applied to new data. It also provides functions that produce samples using a variety of MCMC algorithms, as well as diagnostic tools to compare human data to the performance of these sampling algorithms.

Sampling Algorithms

We provide six MCMC algorithms that have previously been compared to human data (Castillo et al. 2024; Spicer et al. 2022a, 2022b; Zhu et al. 2022). For an introduction to how to use these, see the How to Sample vignette, which covers most use cases. If you want to use them with multivariate mixture distributions or with custom functions, see the Multivariate Mixtures and Custom Density Functions vignettes, respectively.
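As a minimal illustration of the family these algorithms belong to (not one of the samplr functions themselves), here is a random-walk Metropolis-Hastings sampler targeting a standard normal density; all names and parameters below are hypothetical.

```python
import math
import random

def metropolis_hastings(log_density, start, n_steps, proposal_sd=1.0, seed=0):
    """Random-walk Metropolis-Hastings: propose a local step around the
    current state and accept it with probability min(1, p(new)/p(current))."""
    rng = random.Random(seed)
    current = start
    chain = []
    for _ in range(n_steps):
        proposal = current + rng.gauss(0.0, proposal_sd)
        # Accept with probability min(1, exp(log p(prop) - log p(curr)))
        if math.log(rng.random()) < log_density(proposal) - log_density(current):
            current = proposal
        chain.append(current)  # a rejected proposal repeats the current state
    return chain

# Target: standard normal, up to an additive constant in log space
chain = metropolis_hastings(lambda x: -0.5 * x * x, start=0.0, n_steps=5000)
print(sum(chain) / len(chain))  # sample mean should be near 0
```

The local, autocorrelated character of chains like this one, where each sample depends on the previous state, is exactly the property the ABS exploits to explain sequential effects in human responses.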

Diagnostic Tools

We provide several diagnostic tools to compare human data to MCMC algorithms (listed in the Reference section).
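One signature of local, MCMC-like sampling that such diagnostics look for is autocorrelation in a sequence of responses. As a hypothetical illustration of the underlying statistic (not a samplr function), lag-k autocorrelation can be computed as:

```python
def autocorrelation(xs, lag):
    """Lag-k autocorrelation of a sequence: the covariance between the
    sequence and its lagged copy, divided by the variance."""
    n = len(xs)
    mean = sum(xs) / n
    var = sum((x - mean) ** 2 for x in xs) / n
    cov = sum((xs[i] - mean) * (xs[i + lag] - mean) for i in range(n - lag)) / n
    return cov / var

# An alternating sequence is strongly negatively correlated at lag 1:
print(autocorrelation([1, -1] * 50, 1))  # -0.99
```

Independent (iid) samples produce autocorrelations near zero at every lag, so nonzero autocorrelation in human response sequences is evidence for local sampling.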

Models

References

Castillo, Lucas, Pablo León-Villagrá, Nick Chater, and Adam N Sanborn. 2024. “Explaining the Flaws in Human Random Generation as Local Sampling with Momentum.” PLOS Computational Biology 20 (1): 1–24. https://doi.org/10.1371/journal.pcbi.1011739.
Spicer, Jake, Jian-Qiao Zhu, Nick Chater, and Adam N Sanborn. 2022a. “How Do People Predict a Random Walk? Lessons for Models of Human Cognition.” Preprint. PsyArXiv. https://doi.org/10.31234/osf.io/fjtha.
Spicer, Jake, Jian-Qiao Zhu, Nick Chater, and Adam N. Sanborn. 2022b. “Perceptual and Cognitive Judgments Show Both Anchoring and Repulsion.” Psychological Science 33 (9): 1395–1407. https://doi.org/10.1177/09567976221089599.
Zhu, Jian-Qiao, Pablo León-Villagrá, Nick Chater, and Adam N Sanborn. 2022. “Understanding the Structure of Cognitive Noise.” PLoS Computational Biology 18 (8): e1010312. https://doi.org/10.1371/journal.pcbi.1010312.
Zhu, Jian-Qiao, Adam N Sanborn, and Nick Chater. 2020. “The Bayesian Sampler: Generic Bayesian Inference Causes Incoherence in Human Probability Judgments.” Psychological Review.
Zhu, Jian-Qiao, Joakim Sundh, Jake Spicer, Nick Chater, and Adam N. Sanborn. 2024. “The Autocorrelated Bayesian Sampler: A Rational Process for Probability Judgments, Estimates, Confidence Intervals, Choices, Confidence Judgments, and Response Times.” Psychological Review 131 (2): 456–93. https://doi.org/10.1037/rev0000427.