% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/TuneControlMBO.R
\name{makeTuneControlMBO}
\alias{makeTuneControlMBO}
\alias{TuneControlMBO}
\title{Create control object for hyperparameter tuning with MBO.}
\usage{
makeTuneControlMBO(
  same.resampling.instance = TRUE,
  impute.val = NULL,
  learner = NULL,
  mbo.control = NULL,
  tune.threshold = FALSE,
  tune.threshold.args = list(),
  continue = FALSE,
  log.fun = "default",
  final.dw.perc = NULL,
  budget = NULL,
  mbo.design = NULL
)
}
\arguments{
\item{same.resampling.instance}{(\code{logical(1)})\cr
Should the same resampling instance be used for all evaluations to reduce variance?
Default is \code{TRUE}.}
\item{impute.val}{(\link{numeric})\cr
If something goes wrong during optimization (e.g. the learner crashes),
this value is fed back to the tuner, so that the tuning algorithm does not abort.
Imputation is only active if \code{on.learner.error} is configured not to stop in \link{configureMlr}.
The value is not stored in the optimization path; an NA and a corresponding error message are
logged instead.
Note that for maximization measures this value is internally multiplied by -1, so you
still need to enter a large positive value here.
Default is the worst obtainable value of the performance measure you optimize for if
you aggregate by mean value, otherwise \code{Inf}.
For multi-criteria optimization pass a vector of imputation values, one for each of your measures,
in the same order as your measures.}
\item{learner}{(\link{Learner} | \code{NULL})\cr
The surrogate learner: a regression learner used to model the performance landscape.
For the default, \code{NULL}, \pkg{mlrMBO} will automatically create a suitable learner based on the rules described in \link[mlrMBO:makeMBOLearner]{mlrMBO::makeMBOLearner}.}
\item{mbo.control}{(\link[mlrMBO:makeMBOControl]{mlrMBO::MBOControl} | \code{NULL})\cr
Control object for model-based optimization tuning.
For the default, \code{NULL}, the control object will be created with all the defaults as described in \link[mlrMBO:makeMBOControl]{mlrMBO::makeMBOControl}.}
\item{tune.threshold}{(\code{logical(1)})\cr
Should the threshold be tuned for the measure at hand, after each hyperparameter evaluation,
via \link{tuneThreshold}?
Only works for classification if the predict type is \dQuote{prob}.
Default is \code{FALSE}.}
\item{tune.threshold.args}{(\link{list})\cr
Further arguments for threshold tuning that are passed down to \link{tuneThreshold}.
Default is none.}
\item{continue}{(\code{logical(1)})\cr
Resume calculation from previous run using \link[mlrMBO:mboContinue]{mlrMBO::mboContinue}?
Requires \dQuote{save.file.path} to be set.
Note that the \link[ParamHelpers:OptPath]{ParamHelpers::OptPath} in the \link[mlrMBO:OptResult]{mlrMBO::OptResult}
will only include the evaluations after the continuation.
The complete \link{OptPath} will be found in the slot \verb{$mbo.result$opt.path}.}
\item{log.fun}{(\code{function} | \code{character(1)})\cr
Function used for logging. If set to \dQuote{default} (the default), the evaluated design points, the resulting
performances, and the runtime will be reported.
If set to \dQuote{memory}, the performance measures, the time needed for evaluating,
the currently used memory and the max memory ever used before
(the latter two both taken from \link{gc}) will be displayed, at a small increase in run time.
Otherwise a function with arguments \code{learner}, \code{resampling}, \code{measures},
\code{par.set}, \code{control}, \code{opt.path}, \code{dob}, \code{x}, \code{y}, \code{remove.nas},
\code{stage} and \code{prev.stage} is expected; a sketch of such a custom logger is given in the
Details section below.
See the implementation for details.}
\item{final.dw.perc}{(\code{numeric(1)})\cr
If a Learner wrapped by a \link{makeDownsampleWrapper} is used, you can define the value of \code{dw.perc}
that is used to train the Learner with the final parameter setting found by the tuning.
Default is \code{NULL}, which will not change anything.}
\item{budget}{(\code{integer(1)})\cr
Maximum budget for tuning. This value restricts the number of function evaluations.}
\item{mbo.design}{(\link{data.frame} | \code{NULL})\cr
Initial design as data frame.
If the parameters have corresponding trafo functions,
the design must not be transformed before it is passed!
For the default, \code{NULL}, a default design is created as described in \link[mlrMBO:mbo]{mlrMBO::mbo}.}
}
\value{
(\link{TuneControlMBO})
}
\description{
Model-based / Bayesian optimization with the function
\link[mlrMBO:mbo]{mlrMBO::mbo} from the \pkg{mlrMBO} package.
Please refer to \url{https://github.com/mlr-org/mlrMBO} for further info.
}
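\details{
A custom \code{log.fun} is an ordinary R function with the signature documented above.
The following sketch is purely illustrative (the name \code{myLogFun} is made up, and only
the documented arguments are assumed); it prints each evaluated design point together with
its performance values:

\preformatted{myLogFun = function(learner, resampling, measures, par.set, control,
  opt.path, dob, x, y, remove.nas, stage, prev.stage) {
  # x is the current hyperparameter setting, y the measured performance(s)
  message(sprintf("[tune] \%s -> \%s",
    paste(names(x), unlist(x), sep = " = ", collapse = ", "),
    paste(signif(y, 4), collapse = ", ")))
}
ctrl = makeTuneControlMBO(log.fun = myLogFun)
}
}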
\references{
Bernd Bischl, Jakob Richter, Jakob Bossek, Daniel Horn, Janek Thomas and Michel Lang; mlrMBO: A Modular Framework for Model-Based Optimization of Expensive Black-Box Functions, Preprint: \url{https://arxiv.org/abs/1703.03373} (2017).
}
\seealso{
Other tune:
\code{\link{TuneControl}},
\code{\link{getNestedTuneResultsOptPathDf}()},
\code{\link{getNestedTuneResultsX}()},
\code{\link{getResamplingIndices}()},
\code{\link{getTuneResult}()},
\code{\link{makeModelMultiplexer}()},
\code{\link{makeModelMultiplexerParamSet}()},
\code{\link{makeTuneControlCMAES}()},
\code{\link{makeTuneControlDesign}()},
\code{\link{makeTuneControlGenSA}()},
\code{\link{makeTuneControlGrid}()},
\code{\link{makeTuneControlIrace}()},
\code{\link{makeTuneControlRandom}()},
\code{\link{makeTuneWrapper}()},
\code{\link{tuneParams}()},
\code{\link{tuneThreshold}()}
}
\concept{tune}
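\examples{
\dontrun{
# A minimal sketch of MBO tuning, assuming the suggested packages mlrMBO
# (the optimizer) and kernlab (for the SVM learner) are installed.
library(mlrMBO)

# Tune the SVM cost parameter on a log2 scale.
ps = makeParamSet(
  makeNumericParam("C", lower = -5, upper = 5, trafo = function(x) 2^x)
)

# Terminate after 10 sequential MBO iterations; all other settings keep
# the defaults created by makeMBOControl().
mbo.ctrl = setMBOControlTermination(makeMBOControl(), iters = 10)
ctrl = makeTuneControlMBO(mbo.control = mbo.ctrl)

# An initial design on the untransformed scale could also be supplied
# via mbo.design, e.g. generateDesign(8, ps) from ParamHelpers.
res = tuneParams("classif.ksvm", task = iris.task, resampling = cv3,
  par.set = ps, control = ctrl)
print(res)
}
}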