File: step_pca.Rd

% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/pca.R
\name{step_pca}
\alias{step_pca}
\title{PCA Signal Extraction}
\usage{
step_pca(
  recipe,
  ...,
  role = "predictor",
  trained = FALSE,
  num_comp = 5,
  threshold = NA,
  options = list(),
  res = NULL,
  columns = NULL,
  prefix = "PC",
  keep_original_cols = FALSE,
  skip = FALSE,
  id = rand_id("pca")
)
}
\arguments{
\item{recipe}{A recipe object. The step will be added to the
sequence of operations for this recipe.}

\item{...}{One or more selector functions to choose variables
for this step. See \code{\link[=selections]{selections()}} for more details.}

\item{role}{For model terms created by this step, what analysis role should
they be assigned? By default, the new columns created by this step from
the original variables will be used as \emph{predictors} in a model.}

\item{trained}{A logical to indicate if the quantities for
preprocessing have been estimated.}

\item{num_comp}{The number of components to retain as new predictors.
If \code{num_comp} is greater than the number of columns or the number of
possible components, a smaller value will be used. If \code{num_comp = 0},
no transformation is done and the selected variables are left
unchanged.}

\item{threshold}{A fraction of the total variance that should be covered by
the components. For example, \code{threshold = .75} means that \code{step_pca} should
generate enough components to capture 75 percent of the variability in the
variables. Note: using this argument will override and reset any value given
to \code{num_comp}.}

\item{options}{A list of options to the default method for
\code{\link[stats:prcomp]{stats::prcomp()}}. Argument defaults are set to \code{retx = FALSE}, \code{center = FALSE}, \code{scale. = FALSE}, and \code{tol = NULL}. \strong{Note} that the argument \code{x}
should not be passed here (or at all).}

\item{res}{The \code{\link[stats:prcomp]{stats::prcomp.default()}} object is stored here once this
preprocessing step has been trained by \code{\link[=prep]{prep()}}.}

\item{columns}{A character vector of the selected variable names; this is
\code{NULL} until the step is trained by \code{\link[=prep]{prep()}}.}

\item{prefix}{A character string for the prefix of the resulting new
variables. See the Details section below.}

\item{keep_original_cols}{A logical to keep the original variables in the
output. Defaults to \code{FALSE}.}

\item{skip}{A logical. Should the step be skipped when the
recipe is baked by \code{\link[=bake]{bake()}}? While all operations are baked
when \code{\link[=prep]{prep()}} is run, some operations may not be able to be
conducted on new data (e.g. processing the outcome variable(s)).
Care should be taken when using \code{skip = TRUE} as it may affect
the computations for subsequent operations.}

\item{id}{A character string that is unique to this step to identify it.}
}
\value{
An updated version of \code{recipe} with the new step added to the
sequence of any existing operations.
}
\description{
\code{step_pca} creates a \emph{specification} of a recipe step that will convert
numeric data into one or more principal components.
}
\details{
Principal component analysis (PCA) is a transformation of a
group of variables that produces a new set of artificial
features or components. These components are designed to capture
the maximum amount of information (i.e. variance) in the
original variables. Also, the components are uncorrelated with
one another, so they can be used to mitigate large inter-variable
correlations in a data set.

It is advisable to standardize the variables prior to running
PCA. Note that this step does \emph{not} center or scale the data by
default (see the \code{options} defaults above). The variables can be
standardized beforehand with \code{\link[=step_normalize]{step_normalize()}} (or with
\code{\link[=step_center]{step_center()}} and \code{\link[=step_scale]{step_scale()}}), or the
centering and scaling done by \code{\link[stats:prcomp]{stats::prcomp()}} can be changed
through the \code{options} argument.
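
A minimal sketch of these two routes (assuming, as documented above, that
the \code{options} list is forwarded to \code{\link[stats:prcomp]{stats::prcomp()}};
the Examples section below uses the \code{step_normalize()} route):

\preformatted{
library(recipes)

# Standardize with a dedicated step before extracting components:
recipe(~., data = USArrests) \%>\%
  step_normalize(all_numeric()) \%>\%
  step_pca(all_numeric(), num_comp = 2)

# Or ask stats::prcomp() itself to center and scale via `options`:
recipe(~., data = USArrests) \%>\%
  step_pca(all_numeric(), num_comp = 2,
           options = list(center = TRUE, scale. = TRUE))
}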

The argument \code{num_comp} controls the number of components that
will be retained (unless \code{keep_original_cols = TRUE}, the original
variables used to derive the components are removed from the data).
The new components will have names that begin with \code{prefix} plus a
sequence of numbers, padded with zeros as needed. For example, if
\code{num_comp < 10}, the names will be \code{PC1} through \code{PC9}; if
\code{num_comp = 101}, the names would be \code{PC001} through
\code{PC101}.

Alternatively, \code{threshold} can be used to determine the
number of components that are required to capture a specified
fraction of the total variance in the variables.
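
Continuing the sketch above, a brief illustration of the naming scheme
(the component names are determined by \code{prefix} and the number of
retained components):

\preformatted{
pc_rec <- recipe(~., data = USArrests) \%>\%
  step_normalize(all_numeric()) \%>\%
  step_pca(all_numeric(), num_comp = 3, prefix = "component_")

# The derived columns take the prefix plus a (zero-padded) number:
names(bake(prep(pc_rec), new_data = NULL))
#> [1] "component_1" "component_2" "component_3"
}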
}
\section{Tidying}{
When you \code{\link[=tidy.recipe]{tidy()}} this step, use either \code{type = "coef"}
for the variable loadings per component or \code{type = "variance"} for how
much variance each component accounts for.
}

\section{Case weights}{


This step performs an unsupervised operation that can utilize case weights.
As a result, case weights are only used with frequency weights. For more
information, see the documentation in \link{case_weights} and the examples on
\code{tidymodels.org}.
}

\examples{
rec <- recipe(~., data = USArrests)
pca_trans <- rec \%>\%
  step_normalize(all_numeric()) \%>\%
  step_pca(all_numeric(), num_comp = 3)
pca_estimates <- prep(pca_trans, training = USArrests)
pca_data <- bake(pca_estimates, USArrests)

rng <- extendrange(c(pca_data$PC1, pca_data$PC2))
plot(pca_data$PC1, pca_data$PC2,
  xlim = rng, ylim = rng
)

with_thresh <- rec \%>\%
  step_normalize(all_numeric()) \%>\%
  step_pca(all_numeric(), threshold = .99)
with_thresh <- prep(with_thresh, training = USArrests)
bake(with_thresh, USArrests)

tidy(pca_trans, number = 2)
tidy(pca_estimates, number = 2)
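
# As described in the Tidying section, the variance accounted for by each
# component can also be extracted from the prepped recipe:
tidy(pca_estimates, number = 2, type = "variance")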
}
\references{
Jolliffe, I. T. (2010). \emph{Principal Component
Analysis}. Springer.
}
\seealso{
Other multivariate transformation steps: 
\code{\link{step_classdist}()},
\code{\link{step_depth}()},
\code{\link{step_geodist}()},
\code{\link{step_ica}()},
\code{\link{step_isomap}()},
\code{\link{step_kpca_poly}()},
\code{\link{step_kpca_rbf}()},
\code{\link{step_kpca}()},
\code{\link{step_mutate_at}()},
\code{\link{step_nnmf_sparse}()},
\code{\link{step_nnmf}()},
\code{\link{step_pls}()},
\code{\link{step_ratio}()},
\code{\link{step_spatialsign}()}
}
\concept{multivariate transformation steps}