Conversation
check_reorder.Learner = function(learner, task, prediction) {
  newdata = task$data(cols = rev(task$feature_names))
  tmp = learner$predict_newdata(newdata)
  if (!isTRUE(all.equal(prediction$response, tmp$response))) {
This reordering test depends on the response prediction type. I think it would be better to be more general here and find the first prediction type from the learner - learner$predict_type - and compare that? (Though in this case it is not a complete test, it is enough to check.) For survival learners, what this does right now is actually compare NULL with NULL!
Of course, the whole point here is that I am able to make a more specialized check_reorder.LearnerXYZ, but first let's make it more general, right?
Yes, the whole idea is that mlr3survival implements check_reorder.LearnerSurv. But yes, this is actually the specialized version for LearnerClassif and LearnerRegr.
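The generalization discussed above could look something like the sketch below: compare only the field named by the learner's predict_type instead of hard-coding $response. The mock list objects stand in for mlr3 Prediction objects (which are indexed the same way with [[); check_reorder_generic is a hypothetical helper name, not the final implementation.

```r
# Hedged sketch: compare the prediction field named by the learner's
# predict_type, rather than always comparing $response.
check_reorder_generic = function(predict_type, prediction, prediction_reordered) {
  isTRUE(all.equal(prediction[[predict_type]], prediction_reordered[[predict_type]]))
}

# mock predictions for a regression-style learner with predict_type "response"
p1 = list(response = c(1.5, 2.5, 3.5))
p2 = list(response = c(1.5, 2.5, 3.5))
stopifnot(check_reorder_generic("response", p1, p2))

# mock predictions for a survival-style learner with predict_type "crank";
# here $response is NULL, so the old hard-coded check would compare
# NULL with NULL and always pass, even though "crank" differs
p3 = list(crank = c(0.1, 0.9))
p4 = list(crank = c(0.9, 0.1))
stopifnot(isTRUE(all.equal(p3$response, p4$response)))  # old check: vacuously TRUE
stopifnot(!check_reorder_generic("crank", p3, p4))      # new check: catches the change
```

This is exactly the NULL-vs-NULL problem pointed out above: for a survival learner the response slot is empty, so the comparison never fails.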
  UseMethod("check_marshaling")
}

check_marshaling.Learner = function(learner, task) {
I don't know if this test is enough, i.e., is it possible to have a faulty implementation of marshaling that makes this test pass?
OK, you test for it, so it should be okay.
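One property a marshaling check can always assert is that a marshal/unmarshal round trip leaves predictions unchanged. The sketch below uses base-R serialize()/unserialize() as a stand-in pair and a toy model; in mlr3 the real check would operate on learner$model with the package's own marshaling functions, so everything here is illustrative only.

```r
# Hedged sketch: a faulty marshaling implementation should be caught by
# requiring identical predictions after a marshal/unmarshal round trip.
marshal_mock   = function(model) serialize(model, NULL)  # stand-in for marshaling
unmarshal_mock = function(raw) unserialize(raw)          # stand-in for unmarshaling

# toy linear model and predict function
model = list(coef = c(intercept = 1, slope = 2))
predict_mock = function(model, x) model$coef[["intercept"]] + model$coef[["slope"]] * x

roundtripped = unmarshal_mock(marshal_mock(model))
stopifnot(isTRUE(all.equal(predict_mock(model, 1:3), predict_mock(roundtripped, 1:3))))
```

Of course this only tests that the round trip is lossless for prediction, not that the marshaled form is actually portable, which is the residual doubt raised above.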
Use %nin% for readability.
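For context, %nin% is the negated-membership operator (provided by mlr3misc, among others); it is defined inline below so the snippet is self-contained.

```r
# one-line definition of %nin%, as shipped by mlr3misc
`%nin%` = function(x, table) !(x %in% table)

# reads more naturally than the negated form !("prob" %in% ...)
stopifnot("prob" %nin% c("response", "se"))
stopifnot(!("response" %nin% c("response", "se")))
```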
This code block is complicated in general, and I see response also used here - can we abstract it out? The "surv/dens" predict types are not checked either way here, because we loop over the predict types in run_autotest, which created some problems from what I recall...
I had overlooked that part. I've completely rewritten it. Does it make more sense like that?
  }
}

if (task$task_type == "classif" && task$id == "sanity") {
It would be nice to separate the really generic testing from the S3 implementations that need to be written for individual packages here, for example.
}

for (task in tasks) {
  for (predict_type in predict_types) {
I think this might warrant an extra check_* function here: do we really need to check across all tasks and predict types?
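One way to frame that question is to factor the nested loop body into a single check function, so the "which combinations do we run?" decision lives in one place. The sketch below is a hypothetical shape (run_checks and its arguments are made-up names), not the actual run_autotest implementation.

```r
# Hedged sketch: factor the loop body into one check callback so the set of
# (task, predict_type) combinations can be restricted in a single place.
run_checks = function(tasks, predict_types, check) {
  for (task in tasks) {
    for (predict_type in predict_types) {
      result = check(task, predict_type)
      if (!isTRUE(result)) {
        return(sprintf("failed for task '%s', predict_type '%s'", task, predict_type))
      }
    }
  }
  TRUE
}

# trivial usage with string placeholders for tasks and predict types
stopifnot(isTRUE(run_checks(c("sanity"), c("response"), function(t, p) TRUE)))
```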