% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/preProcess.R
\name{preProcess}
\alias{preProcess}
\alias{preProcess.default}
\alias{predict.preProcess}
\title{Pre-Processing of Predictors}
\usage{
preProcess(x, ...)
\method{preProcess}{default}(
  x,
  method = c("center", "scale"),
  thresh = 0.95,
  pcaComp = NULL,
  na.remove = TRUE,
  k = 5,
  knnSummary = mean,
  outcome = NULL,
  fudge = 0.2,
  numUnique = 3,
  verbose = FALSE,
  freqCut = 95/5,
  uniqueCut = 10,
  cutoff = 0.9,
  rangeBounds = c(0, 1),
  ...
)
\method{predict}{preProcess}(object, newdata, ...)
}
\arguments{
\item{x}{a matrix or data frame. Non-numeric predictors are allowed but will
be ignored.}
\item{\dots}{additional arguments to pass to \code{\link[fastICA]{fastICA}},
such as \code{n.comp}}
\item{method}{a character vector specifying the type of processing. Possible
values are "BoxCox", "YeoJohnson", "expoTrans", "center", "scale", "range",
"knnImpute", "bagImpute", "medianImpute", "pca", "ica", "spatialSign", "corr", "zv",
"nzv", and "conditionalX" (see Details below)}
\item{thresh}{a cutoff for the cumulative percent of variance to be retained
by PCA}
\item{pcaComp}{the specific number of PCA components to keep. If specified,
this overrides \code{thresh}}
\item{na.remove}{a logical; should missing values be removed from the
calculations?}
\item{k}{the number of nearest neighbors from the training set to use for
imputation}
\item{knnSummary}{function to average the neighbor values per column during
imputation}
\item{outcome}{a numeric or factor vector for the training set outcomes.
This can be used to help estimate the Box-Cox transformation of the
predictor variables (see Details below)}
\item{fudge}{a tolerance value: estimated Box-Cox transformation lambda values
within +/-\code{fudge} of 0 will be coerced to 0, and those within
+/-\code{fudge} of 1 will be coerced to 1.}
\item{numUnique}{how many unique values should \code{outcome} have in order to
estimate the Box-Cox transformation?}
\item{verbose}{a logical: prints a log as the computations proceed}
\item{freqCut}{the cutoff for the ratio of the most common value to the
second most common value. See \code{\link{nearZeroVar}}.}
\item{uniqueCut}{the cutoff for the percentage of distinct values out of
the number of total samples. See \code{\link{nearZeroVar}}.}
\item{cutoff}{a numeric value for the pair-wise absolute correlation cutoff.
See \code{\link{findCorrelation}}.}
\item{rangeBounds}{a two-element numeric vector specifying the closed interval
for the "range" transformation}
\item{object}{an object of class \code{preProcess}}
\item{newdata}{a matrix or data frame of new data to be pre-processed}
}
\value{
\code{preProcess} results in a list with elements \item{call}{the
function call} \item{method}{a named list of operations and the variables
used for each } \item{dim}{the dimensions of \code{x}} \item{bc}{Box-Cox
transformation values, see \code{\link{BoxCoxTrans}}} \item{mean}{a vector
of means (if centering was requested)} \item{std}{a vector of standard
deviations (if scaling or PCA was requested)} \item{rotation}{a matrix of
eigenvectors (if PCA was requested)} \item{thresh}{the value of
\code{thresh}} \item{ranges}{a matrix of min and max values for each
predictor when \code{method} includes "range" (and \code{NULL} otherwise)}
\item{numComp}{the number of principal components required to capture the
specified amount of variance} \item{ica}{contains
values for the \code{W} and \code{K} matrices of the decomposition}
\item{median}{a vector of medians (if median imputation was requested)}
\code{predict.preProcess} will produce a data frame.
}
\description{
Pre-processing transformation (centering, scaling etc.) can be estimated
from the training data and applied to any data set with the same variables.
}
\details{
In all cases, transformations and operations are estimated using the data in
\code{x} and these operations are applied to new data using these values;
nothing is recomputed when using the \code{predict} function.
The Box-Cox (\code{method = "BoxCox"}), Yeo-Johnson (\code{method =
"YeoJohnson"}), and exponential transformations (\code{method =
"expoTrans"}) have been "repurposed" here: they are being used to transform
the predictor variables. The Box-Cox transformation was developed for
transforming the response variable while another method, the Box-Tidwell
transformation, was created to estimate transformations of predictor data.
However, the Box-Cox method is simpler, more computationally efficient and
is equally effective for estimating power transformations. The Yeo-Johnson
transformation is similar to the Box-Cox model but can accommodate
predictors with zero and/or negative values (while the predictors values for
the Box-Cox transformation must be strictly positive). The exponential
transformation of Manly (1976) can also be used for positive or negative
data.
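As a minimal sketch of these transformations (assuming the \pkg{caret} package is loaded and using simulated data; the \code{bc} element layout is based on the Value section above), Box-Cox can be estimated for a strictly positive predictor while Yeo-Johnson handles data containing zeros or negative values:

```r
library(caret)

set.seed(1)
# strictly positive, right-skewed predictor: Box-Cox applies
pos_dat <- data.frame(x = rexp(100) + 0.1)
pp_bc <- preProcess(pos_dat, method = "BoxCox")
pp_bc$bc$x$lambda          # the estimated lambda for column x

# Yeo-Johnson also accommodates zero and negative values
mixed_dat <- data.frame(x = rnorm(100))
pp_yj <- preProcess(mixed_dat, method = "YeoJohnson")
head(predict(pp_yj, mixed_dat))
```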
\code{method = "center"} subtracts the mean of the predictor's data (again
from the data in \code{x}) from the predictor values while \code{method =
"scale"} divides by the standard deviation.
The "range" transformation scales the data to be within \code{rangeBounds}. If new
samples have values larger or smaller than those in the training set, values
will be outside of this range.
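To illustrate with a small made-up example (the bounds are learned from the training data, so new out-of-range samples map outside \code{rangeBounds}):

```r
library(caret)

train_dat <- data.frame(x = c(2, 4, 6, 8, 10))
pp <- preProcess(train_dat, method = "range", rangeBounds = c(0, 1))

tr  <- predict(pp, train_dat)            # mapped onto [0, 1]
new <- predict(pp, data.frame(x = 12))   # outside the training range
new$x                                    # > 1: values are not clipped
```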
Predictors that are not numeric are ignored in the calculations (including
methods "zv" and "nzv").
\code{method = "zv"} identifies numeric predictor columns with a single
value (i.e. having zero variance) and excludes them from further
calculations. Similarly, \code{method = "nzv"} does the same by applying
\code{\link{nearZeroVar}} to exclude "near zero-variance" predictors. The options
\code{freqCut} and \code{uniqueCut} can be used to modify the filter.
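A small sketch of both filters (hypothetical column names; with the default \code{freqCut = 95/5} and \code{uniqueCut = 10}):

```r
library(caret)

dat <- data.frame(constant   = rep(1, 40),        # zero variance
                  mostly_one = c(rep(1, 39), 2),  # near-zero variance
                  ok         = rnorm(40))

# without "zv"/"nzv", the constant column would trigger an error
pp <- preProcess(dat, method = c("zv", "nzv"))
names(predict(pp, dat))    # only "ok" remains
```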
\code{method = "corr"} seeks to filter out highly correlated predictors. See
\code{\link{findCorrelation}}.
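For instance (simulated data; one column constructed to be almost perfectly correlated with another), the "corr" filter drops one member of each highly correlated pair:

```r
library(caret)

set.seed(1)
x1  <- rnorm(100)
dat <- data.frame(x1 = x1,
                  x2 = x1 + rnorm(100, sd = 0.01),  # nearly identical to x1
                  x3 = rnorm(100))

pp <- preProcess(dat, method = "corr", cutoff = 0.9)
ncol(predict(pp, dat))    # one of the correlated pair is dropped
```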
For classification, \code{method = "conditionalX"} examines the distribution
of each predictor conditional on the outcome. If there is only one unique
value within any class, the predictor is excluded from further calculations
(see \code{\link{checkConditionalX}} for an example). When \code{outcome} is
not a factor, this calculation is not executed. This operation can be time
consuming when used within resampling via \code{\link{train}}.
The operations are applied in this order: zero-variance filter, near-zero
variance filter, correlation filter, Box-Cox/Yeo-Johnson/exponential transformation, centering,
scaling, range, imputation, PCA, ICA then spatial sign. This is a departure
from versions of \pkg{caret} prior to version 4.76 (where imputation was
done first) and is not backwards compatible if bagging was used for
imputation.
If PCA is requested but centering and scaling are not, the values will still
be centered and scaled. Similarly, when ICA is requested, the data are
automatically centered and scaled.
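A sketch of the PCA options (simulated data; as noted above, centering and scaling are applied automatically):

```r
library(caret)

set.seed(1)
dat <- as.data.frame(matrix(rnorm(200), ncol = 4))

# keep enough components for 95% of the variance (the default `thresh`)
pp <- preProcess(dat, method = "pca", thresh = 0.95)
pp$numComp

# or request an exact number of components, overriding `thresh`
pp2 <- preProcess(dat, method = "pca", pcaComp = 2)
colnames(predict(pp2, dat))    # "PC1" "PC2"
```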
k-nearest neighbor imputation is carried out by finding the k closest
samples (Euclidean distance) in the training set. Imputation via bagging
fits a bagged tree model for each predictor (as a function of all the
others). This method is simple, accurate and accepts missing values, but it
has much higher computational cost. Imputation via medians takes the median
of each predictor in the training set, and uses them to fill missing values.
This method is simple, fast, and accepts missing values, but treats each
predictor independently, and may be inaccurate.
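The median approach can be sketched with a toy data set containing missing values (column names are made up for illustration):

```r
library(caret)

dat <- data.frame(x1 = c(1, 2, NA, 4, 5),
                  x2 = c(10, 20, 30, NA, 50))

# median imputation: each NA is filled with its column's training median
pp  <- preProcess(dat, method = "medianImpute")
res <- predict(pp, dat)
res$x1[3]    # median of c(1, 2, 4, 5), i.e. 3
```

Note that \code{method = "knnImpute"} additionally centers and scales the data, since the distance calculation requires the predictors to be on a common scale.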
A warning is thrown if both PCA and ICA are requested. ICA, as implemented
by the \code{\link[fastICA]{fastICA}} package, automatically does a PCA
decomposition prior to finding the ICA scores.
The function will throw an error if any numeric variable in \code{x} has
fewer than two unique values, unless either \code{method = "zv"} or
\code{method = "nzv"} is invoked.
Non-numeric data will not be pre-processed, but their values will be retained
in the data frame produced by the \code{predict} function. Note that when PCA
or ICA is used, the non-numeric columns may be in different positions in the
predicted data.
}
\examples{
data(BloodBrain)
# one variable has one unique value
\dontrun{
preProc <- preProcess(bbbDescr)
preProc <- preProcess(bbbDescr[1:100,-3])
training <- predict(preProc, bbbDescr[1:100,-3])
test <- predict(preProc, bbbDescr[101:208,-3])
}
}
\references{
\url{http://topepo.github.io/caret/pre-processing.html}
Kuhn and Johnson (2013), Applied Predictive Modeling, Springer, New York
(chapter 4)
Kuhn (2008), Building predictive models in R using the caret package
(\doi{10.18637/jss.v028.i05})
Box, G. E. P. and Cox, D. R. (1964) An analysis of transformations (with
discussion). Journal of the Royal Statistical Society B, 26, 211-252.
Box, G. E. P. and Tidwell, P. W. (1962) Transformation of the independent
variables. Technometrics 4, 531-550.
Manly, B. F. J. (1976) Exponential data transformations. The Statistician, 25,
37-42.
Yeo, I-K. and Johnson, R. (2000). A new family of power transformations to
improve normality or symmetry. Biometrika, 87, 954-959.
}
\seealso{
\code{\link{BoxCoxTrans}}, \code{\link{expoTrans}}
\code{\link[MASS]{boxcox}}, \code{\link[stats]{prcomp}},
\code{\link[fastICA]{fastICA}}, \code{\link{spatialSign}}
}
\author{
Max Kuhn, median imputation by Zachary Mayer
}
\keyword{utilities}