Framework for evaluating user-specified finite-stage policies and for learning realistic policies via doubly robust loss functions. Policy learning methods include doubly robust Q-learning, sequential policy tree learning, and outcome-weighted learning. See Nordland and Holst (2022) <doi:10.48550/arXiv.2212.02335> for documentation and references.
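A minimal sketch of the workflow the description refers to, using the package's documented entry points `policy_data()`, `policy_def()`, `policy_learn()`, and `policy_eval()`. The simulated single-stage data set and its column names (`Z`, `A`, `U`) are illustrative assumptions, not from the package; see the `policy_data`, `policy_eval`, and `policy_learn` vignettes for the authoritative usage.

```r
library(polle)

## Illustrative simulated single-stage data: covariate Z, binary
## action A, observed utility U (data-generating mechanism made up
## for this sketch).
set.seed(1)
n <- 500
d <- data.frame(Z = rnorm(n))
d$A <- rbinom(n, 1, 0.5)
d$U <- d$Z + d$A * (0.5 + d$Z) + rnorm(n)

## Wrap the data as a policy_data object
pd <- policy_data(d, action = "A", covariates = list("Z"), utility = "U")

## Evaluate a user-specified static policy (always act, A = 1)
policy_eval(policy_data = pd, policy = policy_def(1))

## Learn and evaluate a policy, e.g. via doubly robust Q-learning
## (type = "drql"); other types include "ptl" (sequential policy
## tree learning) and "owl" (outcome-weighted learning)
pl <- policy_learn(type = "drql")
policy_eval(policy_data = pd, policy_learn = pl)
```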
| Version: | 1.4 |
| Depends: | R (≥ 4.0), SuperLearner |
| Imports: | data.table (≥ 1.14.5), lava (≥ 1.7.0), future.apply, progressr, methods, policytree (≥ 1.2.0), survival, targeted (≥ 0.4), DynTxRegime |
| Suggests: | DTRlearn2, glmnet (≥ 4.1-6), mgcv, xgboost, knitr, ranger, rmarkdown, testthat (≥ 3.0), ggplot2 |
| Published: | 2024-04-25 |
| DOI: | 10.32614/CRAN.package.polle |
| Author: | Andreas Nordland [aut, cre], Klaus Holst [aut] |
| Maintainer: | Andreas Nordland <andreasnordland at gmail.com> |
| BugReports: | https://github.com/AndreasNordland/polle/issues |
| License: | Apache License (≥ 2) |
| NeedsCompilation: | no |
| Citation: | polle citation info |
| Materials: | NEWS |
| CRAN checks: | polle results |
| Reference manual: | polle.pdf |
| Vignettes: | policy_data, policy_eval, policy_learn |
| Package source: | polle_1.4.tar.gz |
| Windows binaries: | r-devel: polle_1.4.zip, r-release: polle_1.4.zip, r-oldrel: polle_1.4.zip |
| macOS binaries: | r-release (arm64): polle_1.4.tgz, r-oldrel (arm64): polle_1.4.tgz, r-release (x86_64): polle_1.4.tgz, r-oldrel (x86_64): polle_1.4.tgz |
| Old sources: | polle archive |
Please use the canonical form https://CRAN.R-project.org/package=polle to link to this page.