trainControl parameters explained
Source code
```r
trainControl <- function(method = "boot",
                         number = ifelse(grepl("cv", method), 10, 25),
                         repeats = ifelse(grepl("[d_]cv$", method), 1, NA),
                         p = 0.75,
                         search = "grid",
                         initialWindow = NULL,
                         horizon = 1,
                         fixedWindow = TRUE,
                         skip = 0,
                         verboseIter = FALSE,
                         returnData = TRUE,
                         returnResamp = "final",
                         savePredictions = FALSE,
                         classProbs = FALSE,
                         summaryFunction = defaultSummary,
                         selectionFunction = "best",
                         preProcOptions = list(thresh = 0.95, ICAcomp = 3, k = 5,
                                               freqCut = 95/5, uniqueCut = 10,
                                               cutoff = 0.9),
                         sampling = NULL,
                         index = NULL,
                         indexOut = NULL,
                         indexFinal = NULL,
                         timingSamps = 0,
                         predictionBounds = rep(FALSE, 2),
                         seeds = NA,
                         adaptive = list(min = 5, alpha = 0.05,
                                         method = "gls", complete = TRUE),
                         trim = FALSE,
                         allowParallel = TRUE)
{
    if (is.null(selectionFunction))
        stop("null selectionFunction values not allowed")
    if (!(returnResamp %in% c("all", "final", "none")))
        stop("incorrect value of returnResamp")
    if (length(predictionBounds) > 0 && length(predictionBounds) != 2)
        stop("'predictionBounds' should be a logical or numeric vector of length 2")
    if (any(names(preProcOptions) == "method"))
        stop("'method' cannot be specified here")
    if (any(names(preProcOptions) == "x"))
        stop("'x' cannot be specified here")
    if (!is.na(repeats) & !(method %in% c("repeatedcv", "adaptive_cv")))
        warning("`repeats` has no meaning for this resampling method.",
                call. = FALSE)
    if (!(adaptive$method %in% c("gls", "BT")))
        stop("incorrect value of adaptive$method")
    if (adaptive$alpha < 1e-07 | adaptive$alpha > 1)
        stop("incorrect value of adaptive$alpha")
    if (grepl("adapt", method)) {
        num <- if (method == "adaptive_cv")
            number * repeats
        else number
        if (adaptive$min >= num)
            stop(paste("adaptive$min should be less than", num))
        if (adaptive$min <= 1)
            stop("adaptive$min should be greater than 1")
    }
    if (!(search %in% c("grid", "random")))
        stop("`search` should be either 'grid' or 'random'")
    if (method == "oob" & any(names(match.call()) == "summaryFunction")) {
        warning("Custom summary measures cannot be computed for out-of-bag resampling. ",
                "This value of `summaryFunction` will be ignored.", call. = FALSE)
    }
    list(method = method, number = number, repeats = repeats, search = search,
         p = p, initialWindow = initialWindow, horizon = horizon,
         fixedWindow = fixedWindow, skip = skip, verboseIter = verboseIter,
         returnData = returnData, returnResamp = returnResamp,
         savePredictions = savePredictions, classProbs = classProbs,
         summaryFunction = summaryFunction, selectionFunction = selectionFunction,
         preProcOptions = preProcOptions, sampling = sampling, index = index,
         indexOut = indexOut, indexFinal = indexFinal, timingSamps = timingSamps,
         predictionBounds = predictionBounds, seeds = seeds, adaptive = adaptive,
         trim = trim, allowParallel = allowParallel)
}
```
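As the source above shows, `trainControl()` does little more than validate its arguments and return a named list of settings, which `train()` later consumes. A minimal sketch of inspecting that list:

```r
# Sketch: trainControl() returns a plain list, so the chosen settings
# can be inspected directly before handing them to train().
library(caret)

ctrl <- trainControl(method = "cv", number = 5)

str(ctrl, max.level = 1)  # a plain list of ~27 control settings
ctrl$method               # "cv"
ctrl$number               # 5
```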
Parameter details
trainControl | Parameter description
---|---
method | Resampling method: "boot" (bootstrap: random sampling with replacement), "boot632" (the 632 bootstrap extension), "LOOCV" (leave-one-out cross-validation), "LGOCV" (leave-group-out / Monte Carlo cross-validation), "cv" (k-fold cross-validation), "repeatedcv" (repeated k-fold cross-validation), "optimism_boot" (Efron, B., & Tibshirani, R. J. (1994). *An Introduction to the Bootstrap*, pages 249-252. CRC Press.), "none" (fit the model to a single training set only), "oob" (out-of-bag estimates: random forests, multivariate adaptive regression splines, bagged trees, flexible discriminant analysis, conditional inference trees)
number | Number of folds for k-fold cross-validation, or the number of resampling iterations for bootstrap and LGOCV
repeats | Number of complete sets of folds for repeated cross-validation
p | For LGOCV: the proportion of data used for training
verboseIter | Logical: print a training log
returnData | Logical: save the data into the `trainingData` slot of the result (inspect with `str()`)
search | "grid" (grid search) or "random" (random search)
returnResamp | A character string, one of "final", "all", or "none": how many of the resampled performance measures are saved
classProbs | Logical: compute class probabilities in each resample
summaryFunction | Function that computes model performance from the resamples
selectionFunction | Function used to select the optimal tuning parameters
index | User-specified resampling indices (evaluate different algorithms/models on identical resamples)
allowParallel | Logical: allow parallel processing
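Several of these parameters interact: for example, `twoClassSummary` (caret's ROC-based summary for two-class problems) only works when `classProbs = TRUE`, and the chosen metric must be passed to `train()`. A hedged sketch of this common combination:

```r
# Sketch: ROC-based tuning for a two-class problem.
# twoClassSummary needs classProbs = TRUE so class probabilities exist.
library(caret)

ctrl <- trainControl(method = "repeatedcv",
                     number = 10, repeats = 3,
                     classProbs = TRUE,               # required for ROC
                     summaryFunction = twoClassSummary,
                     savePredictions = "final")       # keep hold-out predictions

# Pass metric = "ROC" so selectionFunction picks the model by AUC, e.g.:
# fit <- train(Class ~ ., data = training, method = "glm",
#              trControl = ctrl, metric = "ROC")
```

With this control object, the resampling summary reports ROC, sensitivity, and specificity instead of accuracy and kappa.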
Example
```r
> library(mlbench)   # use the data from this package
Warning message:
package 'mlbench' was built under R version 4.1.3
> data(Sonar)
> str(Sonar[, 1:10])
'data.frame':	208 obs. of  10 variables:
 $ V1 : num  0.02 0.0453 0.0262 0.01 0.0762 0.0286 0.0317 0.0519 0.0223 0.0164 ...
 $ V2 : num  0.0371 0.0523 0.0582 0.0171 0.0666 0.0453 0.0956 0.0548 0.0375 0.0173 ...
 $ V3 : num  0.0428 0.0843 0.1099 0.0623 0.0481 ...
 $ V4 : num  0.0207 0.0689 0.1083 0.0205 0.0394 ...
 $ V5 : num  0.0954 0.1183 0.0974 0.0205 0.059 ...
 $ V6 : num  0.0986 0.2583 0.228 0.0368 0.0649 ...
 $ V7 : num  0.154 0.216 0.243 0.11 0.121 ...
 $ V8 : num  0.16 0.348 0.377 0.128 0.247 ...
 $ V9 : num  0.3109 0.3337 0.5598 0.0598 0.3564 ...
 $ V10: num  0.211 0.287 0.619 0.126 0.446 ...
```
Data splitting:

```r
library(caret)
set.seed(998)
inTraining <- createDataPartition(Sonar$Class, p = .75, list = FALSE)
training <- Sonar[ inTraining, ]   # training set
testing  <- Sonar[-inTraining, ]   # test set
```
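The `index` parameter from the table above is useful at exactly this point: pre-computing fold indices and passing them to `trainControl` lets several models be evaluated on identical resamples, making their performance directly comparable. A sketch, assuming the `training` data frame from above:

```r
# Sketch: pre-compute fold indices with createFolds() and pass them via
# `index`, so different models share the exact same resamples.
library(caret)

set.seed(998)
folds <- createFolds(training$Class, k = 10, returnTrain = TRUE)
ctrl  <- trainControl(method = "cv", index = folds)

# Both calls now see identical training rows in every fold, e.g.:
# gbmFit <- train(Class ~ ., data = training, method = "gbm", trControl = ctrl)
# rfFit  <- train(Class ~ ., data = training, method = "rf",  trControl = ctrl)
```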
Model fitting:

```r
fitControl <- trainControl(method = "repeatedcv",   # 10-fold cross-validation
                           number = 10,
                           repeats = 10)            # repeated 10 times
set.seed(825)
gbmFit1 <- train(Class ~ ., data = training,
                 method = "gbm",                    # boosted trees
                 trControl = fitControl,
                 verbose = FALSE)
gbmFit1
```

```
Stochastic Gradient Boosting 

157 samples
 60 predictor
  2 classes: 'M', 'R' 

No pre-processing
Resampling: Cross-Validated (10 fold, repeated 10 times) 
Summary of sample sizes: 141, 142, 141, 142, 141, 142, ... 
Resampling results across tuning parameters:

  interaction.depth  n.trees  Accuracy   Kappa    
  1                   50      0.7935784  0.5797839
  1                  100      0.8171078  0.6290208
  1                  150      0.8219608  0.6383173
  2                   50      0.8041912  0.6027771
  2                  100      0.8296176  0.6544713
  2                  150      0.8283627  0.6520181
  3                   50      0.8110343  0.6170317
  3                  100      0.8301275  0.6551379
  3                  150      0.8310343  0.6577252

Tuning parameter 'shrinkage' was held constant at a value of 0.1
Tuning parameter 'n.minobsinnode' was held constant at a value of 10
Accuracy was used to select the optimal model using the largest value.
The final values used for the model were n.trees = 150, interaction.depth = 3,
shrinkage = 0.1 and n.minobsinnode = 10.
```
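The cross-validation above only estimates performance on resamples of the training set; the held-out `testing` data from the split has not been touched yet. A sketch of the final evaluation step, assuming `gbmFit1` and `testing` from above:

```r
# Sketch: evaluate the tuned model on the held-out test set.
pred <- predict(gbmFit1, newdata = testing)
confusionMatrix(pred, testing$Class)   # accuracy, kappa, sensitivity, ...

# Class probabilities are available because gbm supports them:
# probs <- predict(gbmFit1, newdata = testing, type = "prob")
```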
Original article: https://blog.csdn.net/weixin_43217641/article/details/126206900