% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/TuneControlDesign.R
\name{makeTuneControlDesign}
\alias{makeTuneControlDesign}
\alias{TuneControlDesign}
\title{Create control object for hyperparameter tuning with predefined design.}
\usage{
makeTuneControlDesign(
  same.resampling.instance = TRUE,
  impute.val = NULL,
  design = NULL,
  tune.threshold = FALSE,
  tune.threshold.args = list(),
  log.fun = "default"
)
}
\arguments{
\item{same.resampling.instance}{(\code{logical(1)})\cr
Should the same resampling instance be used for all evaluations to reduce variance?
Default is \code{TRUE}.}
\item{impute.val}{(\link{numeric})\cr
If something goes wrong during optimization (e.g. the learner crashes),
this value is fed back to the tuner, so the tuning algorithm does not abort.
Imputation is only active if \code{on.learner.error} is configured not to stop in \link{configureMlr}.
It is not stored in the optimization path; an NA and a corresponding error message are
logged instead.
Note that this value is later multiplied by -1 for maximization measures internally, so you
need to enter a larger positive value for maximization here as well.
Default is the worst obtainable value of the performance measure you optimize for when
you aggregate by mean value, or \code{Inf} otherwise.
For multi-criteria optimization pass a vector of imputation values, one for each of your
measures, in the same order as the measures.}
\item{design}{(\link{data.frame})\cr
\code{data.frame} containing the different parameter settings to be evaluated.
The columns have to be named according to the \code{ParamSet} which will be used in \code{\link{tuneParams}()};
see the example below.
Proper designs can be created with \link[ParamHelpers:generateDesign]{ParamHelpers::generateDesign} for instance.}
\item{tune.threshold}{(\code{logical(1)})\cr
Should the threshold be tuned for the measure at hand, after each hyperparameter evaluation,
via \link{tuneThreshold}?
Only works for classification if the predict type is \dQuote{prob}.
Default is \code{FALSE}.}
\item{tune.threshold.args}{(\link{list})\cr
Further arguments for threshold tuning that are passed down to \link{tuneThreshold}.
Default is none.}
\item{log.fun}{(\code{function} | \code{character(1)})\cr
Function used for logging. If set to \dQuote{default} (the default), the evaluated design points, the resulting
performances, and the runtime will be reported.
If set to \dQuote{memory}, the memory usage for each evaluation will also be displayed, with a small increase
in run time; it reports the currently used memory and the maximum memory used so far
(both taken from \link{gc}).
Otherwise a function with arguments \code{learner}, \code{resampling}, \code{measures},
\code{par.set}, \code{control}, \code{opt.path}, \code{dob}, \code{x}, \code{y}, \code{remove.nas},
\code{stage} and \code{prev.stage} is expected.
See the implementation for details.}
}
\value{
(\link{TuneControlDesign})
}
\description{
Completely pre-specify a \code{data.frame} of design points to be evaluated
during tuning. All kinds of parameter types can be handled.
}
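\examples{
# A minimal sketch, not taken from the package sources: it assumes the
# mlr built-ins iris.task and cv3 and the learner "classif.ksvm"
# (which needs the kernlab package), so the code is not run automatically.
\dontrun{
library(mlr)
library(ParamHelpers)

# The design columns must match the parameter names in the ParamSet.
ps = makeParamSet(
  makeNumericParam("C", lower = 0.1, upper = 10),
  makeNumericParam("sigma", lower = 0.1, upper = 10)
)

# Either write the design by hand ...
des = data.frame(C = c(0.5, 1, 2), sigma = c(0.5, 1, 2))
# ... or generate one, e.g. with ParamHelpers::generateDesign:
# des = generateDesign(n = 4, par.set = ps)

ctrl = makeTuneControlDesign(design = des)
res = tuneParams("classif.ksvm", task = iris.task, resampling = cv3,
  par.set = ps, control = ctrl)
print(res)

# A hedged sketch of a custom log.fun, using the argument list
# documented above; the message format is purely illustrative.
my.log.fun = function(learner, resampling, measures, par.set, control,
  opt.path, dob, x, y, remove.nas, stage, prev.stage) {
  message("Evaluated: ",
    paste(names(x), unlist(x), sep = " = ", collapse = ", "),
    " -> ", paste(signif(y, 4), collapse = ", "))
}
ctrl2 = makeTuneControlDesign(design = des, log.fun = my.log.fun)
}
}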
\seealso{
Other tune:
\code{\link{TuneControl}},
\code{\link{getNestedTuneResultsOptPathDf}()},
\code{\link{getNestedTuneResultsX}()},
\code{\link{getResamplingIndices}()},
\code{\link{getTuneResult}()},
\code{\link{makeModelMultiplexer}()},
\code{\link{makeModelMultiplexerParamSet}()},
\code{\link{makeTuneControlCMAES}()},
\code{\link{makeTuneControlGenSA}()},
\code{\link{makeTuneControlGrid}()},
\code{\link{makeTuneControlIrace}()},
\code{\link{makeTuneControlMBO}()},
\code{\link{makeTuneControlRandom}()},
\code{\link{makeTuneWrapper}()},
\code{\link{tuneParams}()},
\code{\link{tuneThreshold}()}
}
\concept{tune}