Introduction to creditmodel

2019-11-11

Introduction

The creditmodel package provides a highly efficient R tool suite for credit modeling, analysis, and visualization. It contains infrastructure functions for data exploration and preparation, missing-value treatment, outlier treatment, variable derivation, variable selection, dimensionality reduction, grid search for hyperparameters, data mining and visualization, model evaluation, strategy analysis, and more. With creditmodel you can build reliable predictive models (such as an xgboost model or a scorecard) and analyse data on a standard laptop within minutes. This introductory vignette provides a brief glance at the training_model module of the package.
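To follow along, the minimal sketch below loads the package and takes a first look at the bundled UCICreditCard dataset used throughout this vignette (it assumes creditmodel is installed; the column selection is only illustrative):

# install.packages("creditmodel")  # if the package is not yet installed
library(creditmodel)

# UCICreditCard ships with the package: 30,000 credit-card clients with an ID,
# an application date and the binary target default.payment.next.month.
dim(UCICreditCard)
str(UCICreditCard[, c("ID", "apply_date", "LIMIT_BAL",
                      "default.payment.next.month")])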

When I first wrote the creditmodel package, its primary purpose was to make the development of binary classification models (machine-learning models as well as credit scorecards) simpler and faster. I therefore designed the package to build models automatically. However, as the package grew in functionality, that all-in-one design became increasingly limiting.

Importantly, the creditmodel package now provides a set of complementary tools with different missions.

Quick Modeling

Now, let's start with a quick modeling example.

B_model = training_model(dat = UCICreditCard,
                        model_name = "UCICreditCard",
                        target = "default.payment.next.month",
                        x_list = NULL,
                        occur_time = "apply_date",
                        obs_id = "ID",
                        dat_test = NULL,
                        preproc = TRUE,
                        miss_values = c(-1, -2),
                        missing_proc = TRUE,
                        outlier_proc = TRUE,
                        trans_log = TRUE,
                        feature_filter = list(filter = c("IV", "PSI", "COR", "XGB"),
                                            cv_folds = 1,
                                            iv_cp = 0.02,
                                            psi_cp = 0.2,
                                            cor_cp = 0.95,
                                            xgb_cp = 0,
                                            hopper = TRUE),
                        vars_plot = FALSE,
                        algorithm = list("LR","XGB"),
                        breaks_list = NULL,
                        LR.params = lr_params(
                            iter = 2,
                            method = 'random_search',
                            tree_control = list(p = 0.02,
                                            cp = c(0.00001, 0.00000001),
                                            xval = 5,
                                            maxdepth = c(10, 15)),
                            bins_control = list(bins_num = 10,
                                            bins_pct = c(0.02, 0.03, 0.05),
                                            b_chi = c(0.01, 0.02, 0.03),
                                            b_odds = 0.1,
                                            b_psi = c(0.02, 0.06),
                                            b_or = c(.05, 0.1, 0.15, 0.2),
                                            mono = c(0.1, 0.2, 0.4, 0.5),
                                            odds_psi = c(0.1, 0.15, 0.2),
                                            kc = 1),
                            f_eval = 'ks',
                            lasso = TRUE,
                            step_wise = FALSE),
                        XGB.params = xgb_params(
                            iter = 3,
                            method = 'random_search',
                            params = list(
                                max_depth = c(3:6),
                                eta = c(0.01, 0.05, 0.1, 0.2),
                                gamma = c(0.01, 0.05, 0.1),
                                min_child_weight = c(1, 5, 10, 20, 30, 40, 50),
                                subsample = c(0.8, 0.7, 0.6, 0.5),
                                colsample_bytree = c(0.8, 0.7, 0.6, 0.5),
                                scale_pos_weight = c(1, 2, 3)),
                            f_eval = 'auc'),
                        parallel = FALSE,
                        cores_num = NULL,
                        save_pmml = FALSE,
                        plot_show = TRUE,
                        model_path = tempdir(),
                        seed = 46)
## -- Building ----------------------------------------------------------------------- UCICreditCard --
## -- Creating the model output file path -------------------------------------------------------------
## -- Seting model output file path:
## * model      : C:\Users\28142\AppData\Local\Temp\Rtmpk9JEMK/UCICreditCard/model
## * data       : C:\Users\28142\AppData\Local\Temp\Rtmpk9JEMK/UCICreditCard/data
## * variable   : C:\Users\28142\AppData\Local\Temp\Rtmpk9JEMK/UCICreditCard/variable
## * performance: C:\Users\28142\AppData\Local\Temp\Rtmpk9JEMK/UCICreditCard/performance
## * predict    : C:\Users\28142\AppData\Local\Temp\Rtmpk9JEMK/UCICreditCard/predict
## -- Checking datasets and target --------------------------------------------------------------------
## -- Cleansing & Prepocessing data -------------------------------------------------------------------
## -- Checking data and target format...
## -- Cleansing data
## -- Replacing null or blank or miss_values with NA
## -- Deleting low variance variables
## -- Processing NAs & special value rate is more than 0.999
## -- Formating time variables
## -- Transfering character variables which are actually numerical to numeric
## -- Removing duplicated observations
## -- Merging categories which percent is less than 0.001 or obs number is less than 20
## -- Saving data_cleansing to:
## * C:\Users\28142\AppData\Local\Temp\Rtmpk9JEMK/UCICreditCard/data/data_cleansing.csv
## -- Logarithmic transformation
## -- Following variables are log transformed:
## * LIMIT_BAL -> LIMIT_BAL_log
## * PAY_0     -> PAY_0_log
## * PAY_2     -> PAY_2_log
## * PAY_AMT1  -> PAY_AMT1_log
## * PAY_AMT2  -> PAY_AMT2_log
## * PAY_AMT3  -> PAY_AMT3_log
## * PAY_AMT4  -> PAY_AMT4_log
## * PAY_AMT5  -> PAY_AMT5_log
## * PAY_AMT6  -> PAY_AMT6_log
## -- Spliting train & test ---------------------------------------------------------------------------
## -- train_test_split:
## * Total: 30000 (100%)
## * Train: 20874 (70%)
## * Test : 9126 (30%)
## -- Processing outliers using Kmeans and LOF
## * LIMIT_BAL_log  0%  no_outlier
## * AGE    0%  no_outlier
## * PAY_0_log  0%  no_outlier
## * PAY_2_log  0%  no_outlier
## * PAY_3  0%  no_outlier
## * PAY_4  0%  no_outlier
## * PAY_5  0%  no_outlier
## * PAY_6  0%  no_outlier
## * BILL_AMT1  0%  no_outlier
## * BILL_AMT2  0%  no_outlier
## * BILL_AMT3  0%  no_outlier
## * BILL_AMT4  0%  no_outlier
## * BILL_AMT5  0%  no_outlier
## * BILL_AMT6  0%  no_outlier
## * PAY_AMT1_log   0%  no_outlier
## * PAY_AMT2_log   0%  no_outlier
## * PAY_AMT3_log   0%  no_outlier
## * PAY_AMT4_log   0%  no_outlier
## * PAY_AMT5_log   0%  no_outlier
## * PAY_AMT6_log   0%  no_outlier
## -- Saving data_outlier_proc to:
## * C:\Users\28142\AppData\Local\Temp\Rtmpk9JEMK/UCICreditCard/data/data_outlier_proc.csv
## -- Processing NAs
## * MARRIAGE   0.1581% IM
## * PAY_0_log  27.963% IM
## * PAY_2_log  32.6579%    IM
## * PAY_3  33.5154%    IM
## * PAY_4  33.3573%    IM
## * PAY_5  33.5968%    IM
## * PAY_6  35.4939%    IM
## * BILL_AMT1  0.1102% IM
## * BILL_AMT2  0.1246% IM
## * BILL_AMT3  0.1389% IM
## * BILL_AMT4  0.1533% IM
## * BILL_AMT5  0.1198% IM
## * BILL_AMT6  0.1533% IM
## -- Saving data_missing_proc to:
## * C:\Users\28142\AppData\Local\Temp\Rtmpk9JEMK/UCICreditCard/data/data_missing_proc.csv
## -- Filtering features ------------------------------------------------------------------------------
## -- Feature filtering by PSI
## -- Feature filtering by IV
## -- Selecting variables by XGboost
## -- Feature filtering by Correlation
## -- Saving feature_filter to:
## * C:\Users\28142\AppData\Local\Temp\Rtmpk9JEMK/UCICreditCard/variable/feature_filter.csv
## -- Saving feature_filter_table to:
## * C:\Users\28142\AppData\Local\Temp\Rtmpk9JEMK/UCICreditCard/variable/feature_filter_table.csv
## -- Training logistic regression model/scorecard ----------------------------------------------------
## -- Searching optimal binning & feature selection parameters ----------------------------------------
## [1]  train_ks:0.4116  test_ks:0.3876  psi:0.001
## * tree_control:{ p:0.02, cp:0.00000001, xval:5, maxdepth:10 }
## * bins_control:{ bins_num:10, bins_pct:0.02, b_chi:0.02, b_odds:0.1, b_psi:0.02, b_or:0.05, mono:0.4, odds_psi:0.2, kc:1 }
## * thresholds:{ cor_p:0.8, iv_i:0.02, psi_i:0.1, cos_i:0.5 }
## [2]  train_ks:0.4125  test_ks:0.389  psi:0.001
## * tree_control:{ p:0.02, cp:0.00000001, xval:5, maxdepth:10 }
## * bins_control:{ bins_num:10, bins_pct:0.02, b_chi:0.03, b_odds:0.1, b_psi:0.06, b_or:0.05, mono:0.2, odds_psi:0.15, kc:1 }
## * thresholds:{ cor_p:0.8, iv_i:0.02, psi_i:0.1, cos_i:0.5 }
## -- [best iter] -------------------------------------------------------------------------------------
## [2]  train_ks:0.4125 test_ks:0.389   psi:0.001
## * tree_control:{ p:0.02, cp:0.00000001, xval:5, maxdepth:10 }
## * bins_control:{ bins_num:10, bins_pct:0.02, b_chi:0.03, b_odds:0.1, b_psi:0.06, b_or:0.05, mono:0.2, odds_psi:0.15, kc:1 }
## * thresholds:{ cor_p:0.8, iv_i:0.02, psi_i:0.1, cos_i:0.5 }
## -- Constrained optimal binning of varibles ---------------------------------------------------------
## -- Getting optimal binning breaks
## * PAY_0_log: -0.5,0.346573590279972,Inf
## * PAY_2_log: -0.5,0.346573590279972,Inf
## * PAY_3: -1,1,Inf
## * PAY_4: -1,0,Inf
## * PAY_5: -1,1,Inf
## * PAY_AMT1_log: 3.06778244554087,7.60115239706291,Inf
## -- Saving breaks_list.breaks_list to:
## * C:\Users\28142\AppData\Local\Temp\Rtmpk9JEMK/UCICreditCard/variable/LR/breaks_list.breaks_list.csv
## -- Filtering variables by IV & PSI -----------------------------------------------------------------
## -- Selecting variables by PSI & IV
## -- Calculating PSI
## --PAY_0_log
## * PSI: 0  -->  Very stable
## --PAY_2_log
## * PSI: 0  -->  Very stable
## --PAY_3
## * PSI: 0  -->  Very stable
## --PAY_4
## * PSI: 0  -->  Very stable
## --PAY_5
## * PSI: 0  -->  Very stable
## --PAY_AMT1_log
## * PSI: 0  -->  Very stable
## -- Calculating IV
## --PAY_0_log
## * IV: 0.692  -->  Very Strong
## --PAY_2_log
## * IV: 0.538  -->  Very Strong
## --PAY_3
## * IV: 0.405  -->  Very Strong
## --PAY_4
## * IV: 0.352  -->  Very Strong
## --PAY_5
## * IV: 0.314  -->  Very Strong
## --PAY_AMT1_log
## * IV: 0.148  -->  Strong
## -- Saving feature.IV_PSI to:
## * C:\Users\28142\AppData\Local\Temp\Rtmpk9JEMK/UCICreditCard/variable/LR/feature.IV_PSI.csv
## -- Saving feature.PSI to:
## * C:\Users\28142\AppData\Local\Temp\Rtmpk9JEMK/UCICreditCard/variable/LR/feature.PSI.csv
## -- Saving feature.IV to:
## * C:\Users\28142\AppData\Local\Temp\Rtmpk9JEMK/UCICreditCard/variable/LR/feature.IV.csv
## -- Saving LR.IV_PSI_features to:
## * C:\Users\28142\AppData\Local\Temp\Rtmpk9JEMK/UCICreditCard/variable/LR/LR.IV_PSI_features.csv
## -- Transforming WOE --------------------------------------------------------------------------------
## -- Transforming variables to woe
## -- Saving lr_train.dat.woe to:
## * C:\Users\28142\AppData\Local\Temp\Rtmpk9JEMK/UCICreditCard/data/LR/lr_train.dat.woe.csv
## -- Filtering variables by correlation --------------------------------------------------------------
## -- Processing bins table
## * PAY_0_log IV: 0.692 PSI: 0
## * PAY_2_log IV: 0.537 PSI: 0
## * PAY_3 IV: 0.406 PSI: 0
## * PAY_4 IV: 0.352 PSI: 0
## * PAY_5 IV: 0.314 PSI: 0
## * PAY_AMT1_log IV: 0.149 PSI: 0
## -- Filtering variables by LASSO --------------------------------------------------------------------
## Saving 8 x 5 in image
## -- Saving lr_premodel_features to:
## * C:\Users\28142\AppData\Local\Temp\Rtmpk9JEMK/UCICreditCard/variable/LR/lr_premodel_features.csv
## -- Start training lr model -------------------------------------------------------------------------
## -- Saving lr_model_features to:
## * C:\Users\28142\AppData\Local\Temp\Rtmpk9JEMK/UCICreditCard/variable/LR/lr_model_features.csv
## -- Saving UCICreditCard.lr_coef to:
## * C:\Users\28142\AppData\Local\Temp\Rtmpk9JEMK/UCICreditCard/performance/LR/UCICreditCard.lr_coef.csv
## -- Generating standard socrecard -------------------------------------------------------------------
## -- Using scorecard to predict the train and test
## -- Saving lr_train_score to:
## * C:\Users\28142\AppData\Local\Temp\Rtmpk9JEMK/UCICreditCard/predict/LR/lr_train_score.csv
## -- Saving lr_test_score to:
## * C:\Users\28142\AppData\Local\Temp\Rtmpk9JEMK/UCICreditCard/predict/LR/lr_test_score.csv
## -- Saving lr_train_prob to:
## * C:\Users\28142\AppData\Local\Temp\Rtmpk9JEMK/UCICreditCard/predict/LR/lr_train_prob.csv
## -- Saving lr_test_prob to:
## * C:\Users\28142\AppData\Local\Temp\Rtmpk9JEMK/UCICreditCard/predict/LR/lr_test_prob.csv
## -- Producing plots that characterize performance of scorecard
## Saving 12 x 5 in image
## -- Saving UCICreditCard.LR.performance_table to:
## * C:\Users\28142\AppData\Local\Temp\Rtmpk9JEMK/UCICreditCard/performance/LR/UCICreditCard.LR.performance_table.csv
## -- Saving LR.params to:
## * C:\Users\28142\AppData\Local\Temp\Rtmpk9JEMK/UCICreditCard/performance/LR/LR.params.csv
## -- Training XGboost Model --------------------------------------------------------------------------
## -- Searching optimal parameters of XGboost ---------------------------------------------------------
## [1]  train_auc:0.788737  eval_auc:0.749381
## * params:{max_depth:6, eta:0.2, gamma:0.1, min_child_weight:5, subsample:0.7, colsample_bytree:0.8, scale_pos_weight:3}
## [2]  train_auc:0.780476  eval_auc:0.753076
## * params:{max_depth:4, eta:0.2, gamma:0.01, min_child_weight:10, subsample:0.7, colsample_bytree:0.7, scale_pos_weight:1}
## [3]  train_auc:0.782651  eval_auc:0.752638
## * params:{max_depth:5, eta:0.05, gamma:0.05, min_child_weight:30, subsample:0.7, colsample_bytree:0.8, scale_pos_weight:1}
## -- [best iter] -------------------------------------------------------------------------------------
## [2]  train_auc:0.780476  eval_auc:0.753076
## * params:{max_depth:4, eta:0.2, gamma:0.01, min_child_weight:10, subsample:0.7, colsample_bytree:0.7, scale_pos_weight:1}
## -- Saving XGB.x_train to:
## * C:\Users\28142\AppData\Local\Temp\Rtmpk9JEMK/UCICreditCard/data/XGB/XGB.x_train.csv
## -- Saving XGB.x_test to:
## * C:\Users\28142\AppData\Local\Temp\Rtmpk9JEMK/UCICreditCard/data/XGB/XGB.x_test.csv
## -- Saving XGB.y_train to:
## * C:\Users\28142\AppData\Local\Temp\Rtmpk9JEMK/UCICreditCard/data/XGB/XGB.y_train.csv
## -- Saving XGB.y_test to:
## * C:\Users\28142\AppData\Local\Temp\Rtmpk9JEMK/UCICreditCard/data/XGB/XGB.y_test.csv
## [1]  train-auc:0.715596  eval-auc:0.718209 
## Multiple eval metrics are present. Will use eval_auc for early stopping.
## Will train until eval_auc hasn't improved in 100 rounds.
## 
## [2]  train-auc:0.754548  eval-auc:0.754535 
## [3]  train-auc:0.757073  eval-auc:0.755268 
## [4]  train-auc:0.753097  eval-auc:0.751120 
## [5]  train-auc:0.758976  eval-auc:0.756382 
## [6]  train-auc:0.762349  eval-auc:0.759194 
## [7]  train-auc:0.763509  eval-auc:0.760953 
## [8]  train-auc:0.763390  eval-auc:0.761215 
## [9]  train-auc:0.764421  eval-auc:0.762362 
## [10] train-auc:0.764877  eval-auc:0.762378 
## [11] train-auc:0.765499  eval-auc:0.762562 
## [12] train-auc:0.765960  eval-auc:0.762283 
## [13] train-auc:0.766507  eval-auc:0.762515 
## [14] train-auc:0.766704  eval-auc:0.761505 
## [15] train-auc:0.766326  eval-auc:0.761406 
## [16] train-auc:0.766316  eval-auc:0.761204 
## [17] train-auc:0.766672  eval-auc:0.761451 
## [18] train-auc:0.767211  eval-auc:0.761915 
## [19] train-auc:0.768284  eval-auc:0.761592 
## [20] train-auc:0.769152  eval-auc:0.762339 
## [21] train-auc:0.769669  eval-auc:0.761596 
## [22] train-auc:0.769691  eval-auc:0.761598 
## [23] train-auc:0.769759  eval-auc:0.761574 
## [24] train-auc:0.769889  eval-auc:0.761968 
## [25] train-auc:0.769989  eval-auc:0.761984 
## [26] train-auc:0.770158  eval-auc:0.762326 
## [27] train-auc:0.770972  eval-auc:0.763262 
## [28] train-auc:0.771346  eval-auc:0.763250 
## [29] train-auc:0.771547  eval-auc:0.763327 
## [30] train-auc:0.771836  eval-auc:0.763535 
## [31] train-auc:0.772221  eval-auc:0.762963 
## [32] train-auc:0.772607  eval-auc:0.762921 
## [33] train-auc:0.772655  eval-auc:0.762642 
## [34] train-auc:0.773330  eval-auc:0.762931 
## [35] train-auc:0.773600  eval-auc:0.763250 
## [36] train-auc:0.773672  eval-auc:0.763170 
## [37] train-auc:0.773743  eval-auc:0.763202 
## [38] train-auc:0.774023  eval-auc:0.763320 
## [39] train-auc:0.774116  eval-auc:0.763443 
## [40] train-auc:0.774295  eval-auc:0.763224 
## [41] train-auc:0.774701  eval-auc:0.762746 
## [42] train-auc:0.774723  eval-auc:0.762709 
## [43] train-auc:0.774837  eval-auc:0.762722 
## [44] train-auc:0.774982  eval-auc:0.762679 
## [45] train-auc:0.775083  eval-auc:0.762923 
## [46] train-auc:0.775062  eval-auc:0.762952 
## [47] train-auc:0.775190  eval-auc:0.763329 
## [48] train-auc:0.775389  eval-auc:0.763336 
## [49] train-auc:0.775888  eval-auc:0.763354 
## [50] train-auc:0.776065  eval-auc:0.763456 
## [51] train-auc:0.776179  eval-auc:0.763456 
## [52] train-auc:0.776662  eval-auc:0.763180 
## [53] train-auc:0.776869  eval-auc:0.763237 
## [54] train-auc:0.777100  eval-auc:0.763010 
## [55] train-auc:0.777191  eval-auc:0.762765 
## [56] train-auc:0.777166  eval-auc:0.762889 
## [57] train-auc:0.777411  eval-auc:0.763013 
## [58] train-auc:0.777424  eval-auc:0.762981 
## [59] train-auc:0.777868  eval-auc:0.762377 
## [60] train-auc:0.777987  eval-auc:0.762323 
## [61] train-auc:0.778089  eval-auc:0.762485 
## [62] train-auc:0.778069  eval-auc:0.762532 
## [63] train-auc:0.778125  eval-auc:0.762694 
## [64] train-auc:0.778369  eval-auc:0.762639 
## [65] train-auc:0.778341  eval-auc:0.762630 
## [66] train-auc:0.778370  eval-auc:0.762705 
## [67] train-auc:0.778363  eval-auc:0.762684 
## [68] train-auc:0.778890  eval-auc:0.762692 
## [69] train-auc:0.779098  eval-auc:0.762551 
## [70] train-auc:0.779401  eval-auc:0.762285 
## [71] train-auc:0.779712  eval-auc:0.761935 
## [72] train-auc:0.779771  eval-auc:0.762039 
## [73] train-auc:0.779860  eval-auc:0.762030 
## [74] train-auc:0.780006  eval-auc:0.761844 
## [75] train-auc:0.780338  eval-auc:0.761726 
## [76] train-auc:0.780453  eval-auc:0.761616 
## [77] train-auc:0.780456  eval-auc:0.761650 
## [78] train-auc:0.780927  eval-auc:0.761504 
## [79] train-auc:0.781006  eval-auc:0.761436 
## [80] train-auc:0.781168  eval-auc:0.761433 
## [81] train-auc:0.781329  eval-auc:0.761363 
## [82] train-auc:0.781575  eval-auc:0.761895 
## [83] train-auc:0.781690  eval-auc:0.761946 
## [84] train-auc:0.781723  eval-auc:0.761771 
## [85] train-auc:0.781813  eval-auc:0.761862 
## [86] train-auc:0.781958  eval-auc:0.761589 
## [87] train-auc:0.782164  eval-auc:0.761271 
## [88] train-auc:0.782149  eval-auc:0.761196 
## [89] train-auc:0.782177  eval-auc:0.761000 
## [90] train-auc:0.782348  eval-auc:0.761203 
## [91] train-auc:0.782381  eval-auc:0.761141 
## [92] train-auc:0.782423  eval-auc:0.761101 
## [93] train-auc:0.782494  eval-auc:0.760964 
## [94] train-auc:0.782649  eval-auc:0.761211 
## [95] train-auc:0.782786  eval-auc:0.761422 
## [96] train-auc:0.782905  eval-auc:0.761469 
## [97] train-auc:0.783044  eval-auc:0.761776 
## [98] train-auc:0.783252  eval-auc:0.761832 
## [99] train-auc:0.783567  eval-auc:0.761688 
## [100]    train-auc:0.783567  eval-auc:0.761535 
## [101]    train-auc:0.783681  eval-auc:0.761609 
## [102]    train-auc:0.783820  eval-auc:0.761912 
## [103]    train-auc:0.783804  eval-auc:0.761942 
## [104]    train-auc:0.783937  eval-auc:0.762102 
## [105]    train-auc:0.784138  eval-auc:0.761679 
## [106]    train-auc:0.784155  eval-auc:0.761627 
## [107]    train-auc:0.784168  eval-auc:0.761580 
## [108]    train-auc:0.784371  eval-auc:0.761552 
## [109]    train-auc:0.784599  eval-auc:0.761502 
## [110]    train-auc:0.784798  eval-auc:0.761670 
## [111]    train-auc:0.784773  eval-auc:0.761580 
## [112]    train-auc:0.784904  eval-auc:0.761945 
## [113]    train-auc:0.784996  eval-auc:0.761591 
## [114]    train-auc:0.785381  eval-auc:0.761333 
## [115]    train-auc:0.785364  eval-auc:0.761220 
## [116]    train-auc:0.785563  eval-auc:0.761345 
## [117]    train-auc:0.785640  eval-auc:0.761336 
## [118]    train-auc:0.785665  eval-auc:0.761327 
## [119]    train-auc:0.785884  eval-auc:0.761517 
## [120]    train-auc:0.785865  eval-auc:0.761492 
## [121]    train-auc:0.785825  eval-auc:0.761499 
## [122]    train-auc:0.785900  eval-auc:0.761665 
## [123]    train-auc:0.785907  eval-auc:0.761247 
## [124]    train-auc:0.786025  eval-auc:0.761088 
## [125]    train-auc:0.786134  eval-auc:0.761333 
## [126]    train-auc:0.786174  eval-auc:0.761297 
## [127]    train-auc:0.786438  eval-auc:0.761116 
## [128]    train-auc:0.786445  eval-auc:0.760985 
## [129]    train-auc:0.786765  eval-auc:0.760563 
## [130]    train-auc:0.786893  eval-auc:0.760196 
## Stopping. Best iteration:
## [30] train-auc:0.771836  eval-auc:0.763535
## 
## -- Saving UCICreditCard.XGB_input_vars to:
## * C:\Users\28142\AppData\Local\Temp\Rtmpk9JEMK/UCICreditCard/model/XGB/UCICreditCard.XGB_input_vars.csv
## -- Saving XGB_feature_importance to:
## * C:\Users\28142\AppData\Local\Temp\Rtmpk9JEMK/UCICreditCard/variable/XGB/XGB_feature_importance.csv
## -- Saving XGB.train_prob to:
## * C:\Users\28142\AppData\Local\Temp\Rtmpk9JEMK/UCICreditCard/predict/XGB/XGB.train_prob.csv
## -- Saving XGB.test_prob to:
## * C:\Users\28142\AppData\Local\Temp\Rtmpk9JEMK/UCICreditCard/predict/XGB/XGB.test_prob.csv
## -- Producing plots that characterize the performance of XGboost
## Saving 12 x 5 in image
## -- Saving UCICreditCard.XGB.performance_table to:
## * C:\Users\28142\AppData\Local\Temp\Rtmpk9JEMK/UCICreditCard/performance/XGB/UCICreditCard.XGB.performance_table.csv

## -- Saving XGB.params to:
## * C:\Users\28142\AppData\Local\Temp\Rtmpk9JEMK/UCICreditCard/performance/XGB/XGB.params.csv

In a few minutes, the program completed data cleansing and preprocessing, variable screening, and the development and evaluation of two models: a logistic-regression scorecard and an XGBoost model. Further algorithms such as GBDT and RandomForest can be requested through the algorithm argument.
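The returned object B_model is a list bundling the fitted models and related artifacts (the exact element names can differ between package versions, so the sketch below discovers them at run time rather than assuming them), and every table and plot mentioned in the log is written under model_path:

# Inspect the result object; element names are discovered, not assumed.
names(B_model)              # which components training_model produced
str(B_model, max.level = 1) # one-level overview of the list

# All CSV files listed in the log live under model_path/UCICreditCard,
# e.g. the scorecard performance table (path reconstructed from the log):
perf_file <- file.path(tempdir(), "UCICreditCard", "performance", "LR",
                       "UCICreditCard.LR.performance_table.csv")
if (file.exists(perf_file)) head(read.csv(perf_file))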