Perform k-fold cross-validation with consistent scoring metrics across different model types. The scoring metric is selected automatically based on the detected task type: numeric targets are scored as regression, factor targets as classification.

cross_val_score(
  model,
  X,
  y,
  cv = 5,
  scoring = NULL,
  show_progress = TRUE,
  cl = NULL,
  ...
)
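
The default metric follows the detected task type. A minimal sketch of the implied rule (an assumption for illustration; the package's internal dispatch may differ):

# Assumed dispatch, inferred from the Arguments and Examples below
default_scoring <- function(y) {
  if (is.factor(y)) "accuracy" else "rmse"  # factor → classification, numeric → regression
}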

Arguments

model

A Model object

X

Feature matrix or data.frame

y

Target vector (type determines regression vs classification)

cv

Number of cross-validation folds (default: 5)

scoring

Scoring metric: "rmse", "mae", "accuracy", or "f1" (default: auto-detected from the task type; an explicit-override sketch follows this list)

show_progress

Whether to show progress bar (default: TRUE)

cl

Optional cluster for parallel processing (not yet implemented)

...

Additional arguments passed to model$fit()
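
The auto-detected default can be overridden by passing scoring explicitly. A minimal sketch, reusing mod, X, and y from the Examples below:

cv_mae <- cross_val_score(mod, X, y, cv = 5, scoring = "mae")
mean(cv_mae)  # average MAE across folds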

Value

Numeric vector of length cv containing one cross-validation score per fold. Lower values are better for "rmse" and "mae"; higher values are better for "accuracy" and "f1".

Examples

if (FALSE) { # \dontrun{
library(glmnet)
set.seed(42)

# Regression with RMSE scoring
X <- matrix(rnorm(100), ncol = 4)      # 25 observations, 4 features
y <- 2*X[,1] - 1.5*X[,2] + rnorm(25)   # numeric → regression

mod <- Model$new(glmnet::glmnet)
mod$fit(X, y, alpha = 0, lambda = 0.1)
cv_scores <- cross_val_score(mod, X, y, cv = 5)  # auto-uses RMSE
mean(cv_scores)  # Average RMSE

# Classification with accuracy scoring
data(iris)
X_class <- as.matrix(iris[, 1:4])
y_class <- iris$Species  # factor → classification

mod2 <- Model$new(e1071::svm)
cv_scores2 <- cross_val_score(mod2, X_class, y_class, cv = 5)  # auto-uses accuracy
mean(cv_scores2)  # Average accuracy
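
# Manual k-fold RMSE for comparison -- an illustrative sketch only,
# reusing X, y, and the ridge settings from the regression example above.
# Fold assignment here is random and need not match the folds that
# cross_val_score() uses internally.
folds <- sample(rep(1:5, length.out = nrow(X)))
manual_rmse <- sapply(1:5, function(k) {
  fit <- glmnet::glmnet(X[folds != k, ], y[folds != k], alpha = 0, lambda = 0.1)
  pred <- predict(fit, X[folds == k, ])
  sqrt(mean((y[folds == k] - pred)^2))
})
mean(manual_rmse)  # should be on the same scale as mean(cv_scores)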
} # }