% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/MultilabelClassifierChainsWrapper.R
\name{makeMultilabelClassifierChainsWrapper}
\alias{makeMultilabelClassifierChainsWrapper}
\title{Use the classifier chains method (CC) to create a multilabel learner.}
\usage{
makeMultilabelClassifierChainsWrapper(learner, order = NULL)
}
\arguments{
\item{learner}{(\link{Learner} | \code{character(1)})\cr
The learner.
If you pass a string, the learner will be created via \link{makeLearner}.}

\item{order}{(\link{character})\cr
Specifies the chain order using the names of the target labels.
E.g. for \code{m} target labels, this must be a character vector of length \code{m} that contains a permutation of the target label names.
Default is \code{NULL}, which uses a random ordering of the target label names.}
}
\value{
\link{Learner}.
}
\description{
Every learner implemented in mlr that supports binary classification can be
converted into a wrapped classifier chains (CC) multilabel learner.
CC trains one binary classifier per label, following a given chain order. During
training, the feature space of each classifier is extended with the true values of
all labels that precede it in the chain. At prediction time the true labels are not
available, so they are replaced by the labels predicted by the preceding
classifiers in the chain.
The fitted binary models can be accessed via \link{getLearnerModel}.
}
\examples{
if (requireNamespace("rpart")) {
  d = getTaskData(yeast.task)
  # drop some labels and rows so the example runs faster
  d = d[seq(1, nrow(d), by = 20), c(1:2, 15:17)]
  task = makeMultilabelTask(data = d, target = c("label1", "label2"))
  lrn = makeLearner("classif.rpart")
  lrn = makeMultilabelClassifierChainsWrapper(lrn)
  lrn = setPredictType(lrn, "prob")
  # train, predict and evaluate
  mod = train(lrn, task)
  pred = predict(mod, task)
  performance(pred, measures = list(multilabel.hamloss, multilabel.subset01, multilabel.f1))
  # the next call has the same structure for any multilabel meta wrapper
  getMultilabelBinaryPerformances(pred, measures = list(mmce, auc))
  # the above also works with predictions from resample()
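  # the chain order can also be fixed explicitly via the `order` argument, and the
  # fitted binary models can be inspected with getLearnerModel(); this is a sketch
  # building on the task defined above, with illustrative variable names
  lrn2 = makeMultilabelClassifierChainsWrapper("classif.rpart",
    order = c("label2", "label1"))
  mod2 = train(lrn2, task)
  # one fitted binary model per target label
  getLearnerModel(mod2)
  # evaluating resampled predictions works the same way (assuming the predefined
  # cv2 resample description)
  r = resample(lrn, task, cv2, measures = list(multilabel.hamloss))
  getMultilabelBinaryPerformances(r$pred, measures = list(mmce))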
}
}
\references{
Montanes, E. et al. (2013).
\emph{Dependent binary relevance models for multi-label classification}.
Artificial Intelligence Center, University of Oviedo at Gijon, Spain.
}
\seealso{
Other wrapper:
\code{\link{makeBaggingWrapper}()},
\code{\link{makeClassificationViaRegressionWrapper}()},
\code{\link{makeConstantClassWrapper}()},
\code{\link{makeCostSensClassifWrapper}()},
\code{\link{makeCostSensRegrWrapper}()},
\code{\link{makeDownsampleWrapper}()},
\code{\link{makeDummyFeaturesWrapper}()},
\code{\link{makeExtractFDAFeatsWrapper}()},
\code{\link{makeFeatSelWrapper}()},
\code{\link{makeFilterWrapper}()},
\code{\link{makeImputeWrapper}()},
\code{\link{makeMulticlassWrapper}()},
\code{\link{makeMultilabelBinaryRelevanceWrapper}()},
\code{\link{makeMultilabelDBRWrapper}()},
\code{\link{makeMultilabelNestedStackingWrapper}()},
\code{\link{makeMultilabelStackingWrapper}()},
\code{\link{makeOverBaggingWrapper}()},
\code{\link{makePreprocWrapper}()},
\code{\link{makePreprocWrapperCaret}()},
\code{\link{makeRemoveConstantFeaturesWrapper}()},
\code{\link{makeSMOTEWrapper}()},
\code{\link{makeTuneWrapper}()},
\code{\link{makeUndersampleWrapper}()},
\code{\link{makeWeightedClassesWrapper}()}

Other multilabel:
\code{\link{getMultilabelBinaryPerformances}()},
\code{\link{makeMultilabelBinaryRelevanceWrapper}()},
\code{\link{makeMultilabelDBRWrapper}()},
\code{\link{makeMultilabelNestedStackingWrapper}()},
\code{\link{makeMultilabelStackingWrapper}()}
}
\concept{multilabel}
\concept{wrapper}