| Status | Seniority | Home | Time | Age | Marital | Records | Job | Expenses | Income | Assets | Debt | Amount | Price |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| good | 9 | rent | 60 | 30 | married | no | freelance | $73 | $129 | $0 | $0 | $800 | $846 |
| good | 17 | rent | 60 | 58 | widow | no | fixed | $48 | $131 | $0 | $0 | $1,000 | $1,658 |
| bad | 10 | owner | 36 | 46 | married | yes | freelance | $90 | $200 | $3,000 | $0 | $2,000 | $2,985 |
| good | 0 | rent | 60 | 24 | single | no | fixed | $63 | $182 | $2,500 | $0 | $900 | $1,325 |
| good | 0 | rent | 36 | 26 | single | no | fixed | $46 | $107 | $0 | $0 | $310 | $910 |
| good | 1 | owner | 60 | 36 | married | no | fixed | $75 | $214 | $3,500 | $0 | $650 | $1,645 |
| good | 29 | owner | 60 | 44 | married | no | fixed | $75 | $125 | $10,000 | $0 | $1,600 | $1,800 |
| good | 9 | parents | 12 | 27 | single | no | fixed | $35 | $80 | $0 | $0 | $200 | $1,093 |
| good | 0 | owner | 60 | 32 | married | no | freelance | $90 | $107 | $15,000 | $0 | $1,200 | $1,957 |
| bad | 0 | parents | 48 | 41 | married | no | partime | $90 | $80 | $0 | $0 | $1,200 | $1,468 |
| good | 6 | owner | 48 | 34 | married | no | freelance | $60 | $125 | $4,000 | $0 | $1,150 | $1,577 |
| good | 7 | owner | 36 | 29 | married | no | fixed | $60 | $121 | $3,000 | $0 | $650 | $915 |
| good | 8 | owner | 60 | 30 | married | no | fixed | $75 | $199 | $5,000 | $2,500 | $1,500 | $1,650 |
| good | 19 | priv | 36 | 37 | married | no | fixed | $75 | $170 | $3,500 | $260 | $600 | $940 |
| bad | 0 | other | 18 | 21 | single | yes | partime | $35 | $50 | $0 | $0 | $400 | $500 |
```r
library(recipes)

credit <- credit %>% 
    recipe(Status ~ ., data=.) %>% 
    # the next step only works with factor variables
    step_string2factor(Status) %>% 
    themis::step_upsample(Status) %>% 
    step_factor2string(Status) %>% # keep characters
    prep(strings_as_factors=FALSE) %>% 
    juice()
```
\[ \hat{p}_{mk} = \frac{1}{N_m} \sum_{x_i \in R_m} I(y_i = k) \] where \(N_m\) is the number of items in region \(m\) and the summation counts the number of observations of class \(k\) in region \(m\).
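As a toy illustration of this formula (in Python rather than R, purely to show the arithmetic; the region and its labels are made up):

```python
# Compute p_hat_mk: the share of observations in region m belonging to class k.
def region_class_proportion(labels, k):
    """labels: the y values of the observations falling in region m."""
    n_m = len(labels)                                  # N_m
    return sum(1 for y in labels if y == k) / n_m      # (1/N_m) * count of class k

# Hypothetical region with 5 observations, 3 of class "good"
region_labels = ["good", "good", "bad", "good", "bad"]
print(region_class_proportion(region_labels, "good"))  # 0.6
```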
\[ \hat{f}(x)=\sum_{m=1}^M \hat{c}_m I(x \in R_m) \] where \[ \hat{c}_m = \text{avg}(y_i|x_i \in R_m) \] is the average \(y\) value for the region.
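A minimal numeric sketch of this prediction rule (Python for illustration only; the regions and values are invented, not fit from the credit data):

```python
# A regression tree predicts the average y of whichever region x falls into.
def region_mean(ys):
    """c_hat_m: average of the training y values in region m."""
    return sum(ys) / len(ys)

def predict(x, regions):
    """regions: list of (membership_test, c_hat) pairs for disjoint regions."""
    for in_region, c_hat in regions:
        if in_region(x):
            return c_hat
    raise ValueError("x falls in no region")

# Two regions split at x = 3, each carrying the mean of its training y's
regions = [
    (lambda x: x < 3, region_mean([1.0, 2.0, 3.0])),   # c_hat = 2.0
    (lambda x: x >= 3, region_mean([8.0, 10.0])),      # c_hat = 9.0
]
print(predict(1.5, regions), predict(4.0, regions))  # 2.0 9.0
```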
{rpart} interface:

- formula interface only
- categorical variables as character or factor

```r
library(rpart)
mod_rpart <- rpart(Status ~ ., data=credit)
```
```r
library(rpart.plot)
rpart.plot(mod_rpart)
```
```r
library(vip)
vip(mod_rpart)
```
Key {rpart} hyperparameters:

- `minsplit`: Minimum number of observations in a node for a split to be attempted
- `minbucket`: Minimum number of observations in a terminal node
- `cp`: A split must improve the fit by at least `cp` (the complexity parameter)
- `maxdepth`: Maximum depth of any node in the final tree

\[ \hat{f}(x) = \frac{1}{B} \sum_{b=1}^B \hat{f}^{*b}(x) \] where \(\hat{f}^{*b}(x)\) is a decision tree fit on the \(b\text{th}\) bootstrapped sample of the data, with columns randomly selected for evaluation at each split.
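The averaging in this formula can be sketched numerically (Python for illustration only; the "trees" are stand-in functions, not real fitted models):

```python
# A bagged/random-forest regression prediction is the average of B
# bootstrapped-tree predictions.
def forest_predict(x, trees):
    """trees: list of B fitted trees, each a callable f_star_b(x)."""
    return sum(tree(x) for tree in trees) / len(trees)

# B = 3 hypothetical trees that disagree slightly
trees = [lambda x: x + 1, lambda x: x + 2, lambda x: x + 3]
print(forest_predict(10, trees))  # 12.0, the average of 11, 12, 13
```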
{randomForest} interfaces:

- formula interface
  - categorical predictors as factors; factor outcome for classification
- x/y interface
  - `x` must be a dense matrix
  - `y` must be a factor for classification

```r
credit_imputed <- recipe(Status ~ ., data=credit) %>% 
    step_knnimpute(all_predictors()) %>% 
    prep(strings_as_factors=TRUE) %>% 
    juice()
```
```r
library(randomForest)
mod_randomForest <- randomForest(Status ~ ., data=credit_imputed)
```
```r
plot(mod_randomForest)
```
```r
vip(mod_randomForest)
```
Key {randomForest} hyperparameters:

- `ntree`: Number of trees to grow
- `mtry`: Number of candidate variables to check at each split
- `replace`: Whether to sample with replacement
- `sampsize`: Size of samples to draw
- `nodesize`: Minimum size of terminal nodes
- `maxnodes`: Maximum number of terminal nodes for each tree (complexity)

{ranger} interfaces:

- formula interface
  - categorical predictors as character or factor; factor outcome for classification
- x/y interface
  - `y` argument must be a factor for classification
- data.frame interface with the name of the outcome column

```r
library(ranger)
mod_ranger <- ranger(Status ~ ., data=credit_imputed, importance='impurity')
```
```r
# no plotting method, causes error
plot(mod_ranger)
```

```
Error in xy.coords(x, y, xlabel, ylabel, log): 'x' is a list, but does not have components 'x' and 'y'
```
```r
vip(mod_ranger)
```
Key {ranger} hyperparameters:

- `num.trees`: Number of trees to grow
- `mtry`: Number of candidate variables to check at each split
- `max.depth`: Maximum depth of any tree
- `replace`: Whether to sample with replacement
- `sample.fraction`: Fraction of observations to sample for each tree
- `regularization.factor`: Amount of penalization on gain
- `regularization.usedepth`: Whether to consider depth in penalization
- `splitrule`: Type of splitting to perform

\[ \hat{y}_i^{(t)} = \sum_{k=1}^t f_k(x_i) = \hat{y}_i^{(t-1)} + f_t(x_i) \] where each \(f_k(x)\) is a tree and \(f_t(x)\) is trained on the residuals of the first \(t-1\) trees.
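The residual-fitting loop in this formula can be sketched with a toy learner (Python for illustration only; a real boosted model fits a tree to the residuals, whereas this stand-in just predicts their mean, scaled by a learning rate):

```python
# Boosting adds learners sequentially, each fit to the current residuals.
def boost(ys, rounds, eta=0.5):
    """Return predictions after `rounds` of boosting with learning rate eta."""
    pred = [0.0] * len(ys)
    for _ in range(rounds):
        residuals = [y - p for y, p in zip(ys, pred)]
        # toy "tree": a single constant equal to the mean residual
        step = eta * (sum(residuals) / len(residuals))
        pred = [p + step for p in pred]
    return pred

# Predictions approach the targets as rounds accumulate
print(boost([4.0, 4.0, 4.0], rounds=10))
```

With each round the remaining residual shrinks by a factor of (1 - eta), which is why a smaller learning rate needs more boosting rounds.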
{gbm} interfaces:

- formula interface (x/y interface in `gbm.fit()`)
- the outcome must be numeric (not a factor) for `distribution='bernoulli'`

```r
credit_gbm <- recipe(Status ~ ., data=credit) %>% 
    step_integer(Status, zero_based=TRUE) %>% 
    prep(strings_as_factors=TRUE) %>% 
    juice()

library(gbm)
mod_gbm <- gbm(Status ~ ., data=credit_gbm, distribution='bernoulli', n.trees=100)
```
```r
plot(mod_gbm, i.var=c('Seniority', 'Records', 'Income'))
```
```r
vip(mod_gbm)
```
Key {gbm} hyperparameters:

- `n.trees`: Number of boosting rounds
- `interaction.depth`: Maximum depth of a tree in terms of the number of interactions
- `n.minobsinnode`: Minimum number of observations in terminal nodes
- `shrinkage`: Learning rate
- `bag.fraction`: Percentage of data to sample at each round of boosting

{C5.0} interfaces:

- formula interface
  - categorical predictors as character or factor; the outcome must be a factor
- x/y interface
  - `y` argument must be a factor

```r
library(C50)
mod_C5.0_boost <- C5.0(factor(Status) ~ ., data=credit, trials=100)
```
```r
plot(mod_C5.0_boost, subtree=3)
```
```r
vip(mod_C5.0_boost)
```
Key {C5.0} hyperparameters:

- `minCases`: Smallest number of samples that must be put in at least two of the splits
- `trials`: Number of boosting rounds

{xgboost} interface:

- x/y interface via `xgb.DMatrix()`
  - `x` can be a dense or sparse matrix, or file(s) on disk

```r
rec_xg <- recipe(Status ~ ., data=credit) %>% 
    step_integer(Status, zero_based=TRUE) %>% 
    step_dummy(all_nominal(), one_hot=TRUE) %>% 
    prep()

x_xg <- juice(rec_xg, all_predictors(), composition='dgCMatrix')
y_xg <- juice(rec_xg, all_outcomes(), composition='matrix')

library(xgboost)
credit_xg <- xgb.DMatrix(data=x_xg, label=y_xg)
```
```r
mod_xgboost <- xgb.train(
    data=credit_xg,
    objective='binary:logistic',
    nrounds=100,
    watchlist=list(train=credit_xg),
    print_every_n=10
)
```

```
[1]   train-error:0.205469 
[11]  train-error:0.125469 
[21]  train-error:0.094062 
[31]  train-error:0.076875 
[41]  train-error:0.060000 
[51]  train-error:0.045313 
[61]  train-error:0.035937 
[71]  train-error:0.029531 
[81]  train-error:0.024219 
[91]  train-error:0.017812 
[100] train-error:0.012187 
```
```r
dygraphs::dygraph(mod_xgboost$evaluation_log)
```
```r
xgb.plot.multi.trees(mod_xgboost)
```
```r
vip(mod_xgboost)
```
Key {xgboost} hyperparameters:

- `nrounds`: Number of boosting rounds
- `eta`: Learning rate
- `gamma`: Minimum loss reduction required to make a split
- `max_depth`: Maximum depth of any one tree
- `min_child_weight`: Minimum weight needed to make a split (larger: more conservative)
- `subsample`: Percent of rows to sample for each tree
- `colsample_bytree`: Percent of columns to randomly sample for each tree
- `num_parallel_tree`: How many trees to grow in each round
- `early_stopping_rounds`: Number of rounds without improvement before stopping early

{lightgbm} interface:

- x/y interface via `lgb.Dataset()`
- `x` can be a dense or sparse matrix, or file(s) on disk

```r
rec_lgb <- recipe(Status ~ ., data=credit) %>% 
    step_integer(all_nominal(), zero_based=TRUE) %>% 
    prep()

credit_char <- credit %>% select(-Status) %>% 
    purrr::map_lgl(~is.character(.x)) %>% which()

x_lgb <- juice(rec_lgb, all_predictors(), composition='dgCMatrix')
y_lgb <- juice(rec_lgb, all_outcomes(), composition='matrix')

library(lightgbm)
credit_lgb <- lgb.Dataset(data=x_lgb, label=y_lgb, categorical_feature=credit_char)

mod_lightgbm <- lightgbm(data=credit_lgb, nrounds=100, obj='binary',
                         eval_freq=10, is_unbalance=TRUE)
```
```
[LightGBM] [Info] Number of positive: 3200, number of negative: 3200
[LightGBM] [Warning] Auto-choosing row-wise multi-threading, the overhead of testing was 0.000161 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
[LightGBM] [Info] Total Bins 1096
[LightGBM] [Info] Number of data points in the train set: 6400, number of used features: 13
[LightGBM] [Info] [binary:BoostFromScore]: pavg=0.500000 -> initscore=0.000000
[1]:    train's binary_logloss:0.65796 
[11]:   train's binary_logloss:0.483632 
[21]:   train's binary_logloss:0.409745 
[31]:   train's binary_logloss:0.363922 
[41]:   train's binary_logloss:0.331057 
[51]:   train's binary_logloss:0.304451 
[61]:   train's binary_logloss:0.282312 
[71]:   train's binary_logloss:0.265342 
[81]:   train's binary_logloss:0.248841 
[91]:   train's binary_logloss:0.232305 
[100]:  train's binary_logloss:0.219844 
```
```r
# no plotting method, causes error
plot(mod_lightgbm)
```

```
Error in plot.R6(mod_lightgbm): No plot method defined for R6 class lgb.Booster
```
```r
mod_lightgbm %>% 
    lgb.importance() %>% 
    lgb.plot.importance()
```
Key {lightgbm} hyperparameters:

- `num_iterations`: Number of boosting rounds
- `learning_rate`: Learning rate
- `max_depth`: Maximum depth of any one tree
- `bagging_fraction`: Percent of rows to sample for each tree
- `feature_fraction`: Percent of columns to randomly sample for each tree
- `boosting`: Type of boosting to perform

Other Models
Interfaces:

- formula
- matrices
- data.frame plus vector
- special data objects (xgb.DMatrix, lgb.Dataset)

Data types for categorical variables: characters vs factors vs dummy variables vs numeric encoding

By package:

- {rpart}: formula
- {C5.0}: formula or dense matrices
- {randomForest}: formula or dense matrix input with factor outcome
- {ranger}: formula or dense/sparse matrix input with factor outcome
- {gbm}: formula, or dense/sparse matrix input with numeric outcome using gbm.fit()
- {xgboost}: dense/sparse matrix input with numeric outcome inside an xgb.DMatrix object
- {lightgbm}: dense/sparse matrix input with numeric outcome inside a lgb.Dataset object

Categorical variables:

- {rpart}: factor or character
- {C5.0}: factor, character, or dummy variables
- {randomForest}: factor or dummy variables
- {ranger}: factor, character, or dummy variables
- {gbm}: factor or dummy variables
- {xgboost}: dummy variables
- {lightgbm}: dummy variables or numeric encoding

```r
library(parsnip)
```
```r
spec_rpart <- decision_tree(mode='classification') %>% 
    set_engine('rpart')
spec_rpart
```

```
Decision Tree Model Specification (classification)

Computational engine: rpart 
```

```r
spec_C50 <- decision_tree(mode='classification') %>% 
    set_engine('C5.0')
spec_C50
```

```
Decision Tree Model Specification (classification)

Computational engine: C5.0 
```

```r
spec_randomForest <- rand_forest(mode='classification') %>% 
    set_engine('randomForest')
spec_randomForest
```

```
Random Forest Model Specification (classification)

Computational engine: randomForest 
```

```r
spec_ranger <- rand_forest(mode='classification') %>% 
    set_engine('ranger')
spec_ranger
```

```
Random Forest Model Specification (classification)

Computational engine: ranger 
```

```r
spec_C50_boost <- boost_tree(mode='classification') %>% 
    set_engine('C5.0')
spec_C50_boost
```

```
Boosted Tree Model Specification (classification)

Computational engine: C5.0 
```

```r
spec_xgboost <- boost_tree(mode='classification') %>% 
    set_engine('xgboost')
spec_xgboost
```

```
Boosted Tree Model Specification (classification)

Computational engine: xgboost 
```
```r
library(rsample)

set.seed(28676)
data_split <- initial_split(credit, prop=.9, strata='Status')
train <- training(data_split)
test <- testing(data_split)
```
```r
rec_rpart <- recipe(Status ~ ., data=train) %>% 
    themis::step_upsample(Status) %>% 
    step_other(all_nominal(), -Status, other='misc')

rec_C50 <- recipe(Status ~ ., data=train) %>% 
    themis::step_upsample(Status) %>% 
    step_other(all_nominal(), -Status, other='misc')

rec_randomForest <- recipe(Status ~ ., data=train) %>% 
    themis::step_upsample(Status) %>% 
    step_string2factor(all_nominal(), -Status) %>% 
    step_knnimpute(all_predictors()) %>% 
    step_other(all_nominal(), -Status, other='misc')

rec_ranger <- recipe(Status ~ ., data=train) %>% 
    themis::step_upsample(Status) %>% 
    step_string2factor(all_nominal(), -Status) %>% 
    step_knnimpute(all_predictors()) %>% 
    step_other(all_nominal(), -Status, other='misc')

rec_C50_boost <- recipe(Status ~ ., data=train) %>% 
    themis::step_upsample(Status) %>% 
    step_other(all_nominal(), -Status, other='misc')

rec_xgboost <- recipe(Status ~ ., data=train) %>% 
    themis::step_upsample(Status) %>% 
    step_other(all_nominal(), -Status, other='misc') %>% 
    step_dummy(all_nominal(), -Status, one_hot=TRUE)
```
```r
library(workflows)

work_rpart <- workflow() %>% add_recipe(rec_rpart) %>% add_model(spec_rpart)
work_C50 <- workflow() %>% add_recipe(rec_C50) %>% add_model(spec_C50)
work_randomForest <- workflow() %>% add_recipe(rec_randomForest) %>% add_model(spec_randomForest)
work_ranger <- workflow() %>% add_recipe(rec_ranger) %>% add_model(spec_ranger)
work_C50_boost <- workflow() %>% add_recipe(rec_C50_boost) %>% add_model(spec_C50_boost)
work_xgboost <- workflow() %>% add_recipe(rec_xgboost) %>% add_model(spec_xgboost)
```
```r
# trees
fit_rpart <- work_rpart %>% fit(data=train)
fit_C50 <- work_C50 %>% fit(data=train)

# random forests
fit_randomForest <- work_randomForest %>% fit(data=train)
fit_ranger <- work_ranger %>% fit(data=train)

# boosted trees
fit_C50_boost <- work_C50_boost %>% fit(data=train)
fit_xgboost <- work_xgboost %>% fit(data=train)
```
```r
models <- list(
    'rpart'=work_rpart
    , 'C50'=work_C50
    , 'randomForest'=work_randomForest
    , 'ranger'=work_ranger
    , 'C50_boost'=work_C50_boost
    , 'xgboost'=work_xgboost
)
```
```r
quality_metric <- yardstick::metric_set(yardstick::roc_auc)

quality <- models %>% 
    purrr::map(
        ~tune::last_fit(.x, split=data_split, metrics=quality_metric)
    )
```
```r
model_assesments <- quality %>% 
    purrr::map_dfr(tune::collect_metrics) %>% 
    mutate(Model=names(models)) %>% 
    select(Model, AUC=.estimate) %>% 
    bind_cols(quality %>% bind_rows())
```
```r
library(plotly)

plot_ly(
    data=model_assesments %>% 
        arrange(AUC) %>% 
        mutate(Model=factor(Model, levels=Model)),
    x=~Model, y=~AUC) %>% 
    add_lines(marker=list(color=~AUC)) %>% 
    add_annotations(text=~Model)
```
```r
library(bench)

# run the speed test
model_times <- press(
    model=models,
    mark(fit(model[[1]], data=train), iterations=5)
)

# combine with AUC
model_checks <- model_times %>% 
    select(Time=median, Memory=mem_alloc) %>% 
    bind_cols(model_assesments %>% select(Model, AUC))
```
```r
plot_ly(
    data=model_checks %>% 
        arrange(Time) %>% 
        mutate(Model=factor(Model, levels=Model)),
    x=~Model, y=~Time) %>% 
    add_lines(marker=list(color=~Time)) %>% 
    add_annotations(text=~Model)
```
```r
model_checks %>% 
    plot_ly(x=~Time, y=~AUC) %>% 
    add_lines(marker=list(color=~AUC)) %>% 
    add_annotations(text=~Model) %>% 
    layout(showlegend=FALSE)
```
- {xgboost} is nearly the fastest
- {xgboost} is nearly the most correct
- {tidymodels}

Thank You
current session info
```
─ Session info ───────────────────────────────────────────────────────────────
 setting  value                       
 version  R version 4.0.2 (2020-06-22)
 os       Ubuntu 18.04.3 LTS          
 system   x86_64, linux-gnu           
 ui       X11                         
 language (EN)                        
 collate  en_US.UTF-8                 
 ctype    en_US.UTF-8                 
 tz       America/New_York            
 date     2020-08-12                  
─ Packages ───────────────────────────────────────────────────────────────────
 (full listing abridged; key modeling packages: C50 0.1.3.1, gbm 2.1.8,
 lightgbm 3.0.0-1, parsnip 0.1.2, randomForest 4.6-14, ranger 0.12.1,
 recipes 0.1.13, rpart 4.1-15, tune 0.1.1, vip 0.2.2, workflows 0.1.2,
 xgboost 1.1.1.1)
```