Perform k-fold cross-validation with a consistent scoring metric across different model types. When scoring is NULL, the metric is selected automatically based on the detected task type (regression or classification).
cross_val_score(
model,
X,
y,
cv = 5,
scoring = NULL,
show_progress = TRUE,
cl = NULL,
...
)

model: A Model object
X: Feature matrix or data.frame
y: Target vector (type determines regression vs classification)
cv: Number of cross-validation folds (default: 5)
scoring: One of "rmse", "mae", "accuracy", or "f1" (default: auto-detected based on task)
show_progress: Whether to show progress bar (default: TRUE)
cl: Optional cluster for parallel processing (not yet implemented)
...: Additional arguments passed to model$fit()
A numeric vector of cross-validation scores, one per fold.
# \donttest{
library(glmnet)
X <- matrix(rnorm(100), ncol = 4)
y <- 2*X[,1] - 1.5*X[,2] + rnorm(25) # numeric -> regression
mod <- Model$new(glmnet::glmnet)
mod$fit(X, y, alpha = 0, lambda = 0.1)
cv_scores <- cross_val_score(mod, X, y, cv = 5) # auto-uses RMSE
#>   |======================================================================| 100%
mean(cv_scores) # Average RMSE
#> [1] 1.512719
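# The scoring metric can also be set explicitly; as a minimal sketch (output
# not shown), use mean absolute error instead of RMSE, with fit arguments such
# as alpha and lambda forwarded to model$fit() via ...:
cv_mae <- cross_val_score(mod, X, y, cv = 5, scoring = "mae",
                          show_progress = FALSE, alpha = 0, lambda = 0.1)
mean(cv_mae)  # Average MAE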
# Classification with accuracy scoring
data(iris)
X_class <- as.matrix(iris[, 1:4])
y_class <- iris$Species # factor -> classification
mod2 <- Model$new(e1071::svm)
cv_scores2 <- cross_val_score(mod2, X_class, y_class, cv = 5) # auto-uses accuracy
#>   |======================================================================| 100%
mean(cv_scores2) # Average accuracy
#> [1] 0.9666667
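# Likewise, an F1-based score can be requested explicitly for classification
# (a minimal sketch; output not shown):
cv_f1 <- cross_val_score(mod2, X_class, y_class, cv = 5,
                         scoring = "f1", show_progress = FALSE)
mean(cv_f1)  # Average F1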
# }