[D] Can a model be complicated enough to overfit every validation fold during k-fold cross-validation?
During k-fold cross-validation, is it possible for a model to be so sophisticated (e.g., with many hyperparameters tuned by grid search) that it gets a good score on almost every validation fold? It's as if the search procedure is flexible enough to end up fitting the validation fold every time (all k of them), essentially overfitting the whole training set (since the union of the k validation folds is exactly the training set).
If this is possible, then it seems this "best" model could still have high generalization error when evaluated on the final test set, which would make cross-validation useless for model selection. Am I missing something here?
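To make what I mean concrete, here's a toy sketch (synthetic pure-noise data, NumPy only; the single-feature "models" just stand in for a large hyperparameter grid). Selecting the candidate with the best CV score over many candidates gives an optimistic estimate, even though every candidate is really at chance level:

```python
import numpy as np

rng = np.random.default_rng(0)

# Pure-noise data: no feature has any real relationship to the label,
# so the true accuracy of any rule is 50%.
n_train, n_test, n_features = 100, 1000, 200
X_train = rng.normal(size=(n_train, n_features))
y_train = rng.integers(0, 2, size=n_train)
X_test = rng.normal(size=(n_test, n_features))
y_test = rng.integers(0, 2, size=n_test)

def cv_score(col, X, y, k=5):
    """k-fold CV accuracy of the 1-feature rule: predict 1 iff X[:, col] > 0."""
    folds = np.array_split(np.arange(len(y)), k)
    accs = []
    for val_idx in folds:
        # The rule has no trainable parameters; the search over `col`
        # plays the role of a large hyperparameter grid.
        preds = (X[val_idx, col] > 0).astype(int)
        accs.append((preds == y[val_idx]).mean())
    return float(np.mean(accs))

# "Grid search": one candidate model per feature column.
scores = [cv_score(c, X_train, y_train) for c in range(n_features)]
best = int(np.argmax(scores))

test_acc = ((X_test[:, best] > 0).astype(int) == y_test).mean()
print(f"best CV accuracy: {scores[best]:.2f}")  # noticeably above chance
print(f"test accuracy:    {test_acc:.2f}")      # near chance (0.5)
```

The best CV score here is inflated purely by selection over many candidates, and the held-out test score reveals it, which is the gap I'm asking about.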