% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/createDataPartition.R, R/createResample.R
\name{createDataPartition}
\alias{createDataPartition}
\alias{createResample}
\alias{createFolds}
\alias{createMultiFolds}
\alias{createTimeSlices}
\alias{groupKFold}
\title{Data Splitting functions}
\usage{
createDataPartition(
  y,
  times = 1,
  p = 0.5,
  list = TRUE,
  groups = min(5, length(y))
)

createFolds(y, k = 10, list = TRUE, returnTrain = FALSE)

createMultiFolds(y, k = 10, times = 5)

createTimeSlices(y, initialWindow, horizon = 1, fixedWindow = TRUE, skip = 0)

groupKFold(group, k = length(unique(group)))

createResample(y, times = 10, list = TRUE)
}
\arguments{
\item{y}{a vector of outcomes. For \code{createTimeSlices}, these should be
in chronological order.}

\item{times}{the number of partitions to create}

\item{p}{the percentage of data that goes to training}

\item{list}{logical - should the results be in a list (\code{TRUE}) or a
matrix with the number of rows equal to \code{floor(p * length(y))} and
\code{times} columns.}

\item{groups}{for numeric \code{y}, the number of breaks in the quantiles
(see below)}

\item{k}{an integer for the number of folds.}

\item{returnTrain}{a logical. When \code{TRUE}, the values returned are the
sample positions corresponding to the data used during training. This
argument only works in conjunction with \code{list = TRUE}.}

\item{initialWindow}{The initial number of consecutive values in each
training set sample}

\item{horizon}{the number of consecutive values in each test set sample}

\item{fixedWindow}{logical, if \code{FALSE}, all training samples start at 1}

\item{skip}{integer, how many (if any) resamples to skip to thin the total
amount}

\item{group}{a vector of groups whose length matches the number of rows in
the overall data set.}
}
\value{
A list or matrix of row position integers corresponding to the
training data. For \code{createTimeSlices}, subsamples are named by the end
index of each training subsample.
}
\description{
A series of test/training partitions are created using
\code{createDataPartition} while \code{createResample} creates one or more
bootstrap samples. \code{createFolds} splits the data into \code{k} groups
while \code{createTimeSlices} creates cross-validation splits for time series
data.
\code{groupKFold} splits the data based on a grouping factor.
}
\details{
For bootstrap samples, simple random sampling is used.

For other data splitting, the random sampling is done within the levels of
\code{y} when \code{y} is a factor in an attempt to balance the class
distributions within the splits.

For numeric \code{y}, the sample is split into groups based on percentiles
and sampling is done within these subgroups. For
\code{createDataPartition}, the number of percentiles is set via the
\code{groups} argument. For \code{createFolds} and \code{createMultiFolds},
the number of groups is set dynamically based on the sample size and
\code{k}. For smaller sample sizes, these two functions may not do
stratified splitting and, at most, will split the data into quartiles.
Also, for \code{createDataPartition}, with very small class sizes (<= 3)
the classes may not show up in both the training and test data.

For multiple k-fold cross-validation, completely independent folds are
created. The names of the list objects will denote the fold membership
using the pattern "Foldi.Repj" meaning the ith section (of k) of the jth
cross-validation set (of \code{times}). Note that this function calls
\code{createFolds} with \code{list = TRUE} and \code{returnTrain = TRUE}.

Hyndman and Athanasopoulos (2013) discuss rolling forecasting origin
techniques that move the training and test sets in time.
\code{createTimeSlices} can create the indices for this type of splitting.

For group k-fold cross-validation, the data are split such that no group
is contained in both the modeling and holdout sets. One or more groups
could be left out, depending on the value of \code{k}.
}
\examples{
data(oil)
createDataPartition(oilType, 2)
x <- rgamma(50, 3, .5)
inA <- createDataPartition(x, list = FALSE)
plot(density(x[inA]))
rug(x[inA])
points(density(x[-inA]), type = "l", col = 4)
rug(x[-inA], col = 4)
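
# The groups argument controls how many quantile-based strata are used when
# stratifying a numeric outcome; p = .75 and groups = 3 below are arbitrary
# values chosen only for illustration
inB <- createDataPartition(x, p = .75, groups = 3, list = FALSE)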
createResample(oilType, 2)
createFolds(oilType, 10)
createFolds(oilType, 5, FALSE)
createFolds(rnorm(21))
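
# createMultiFolds labels the resamples with the "Foldi.Repj" pattern
# described in Details; k and times here are small arbitrary values chosen
# only to keep the printed output short
names(createMultiFolds(oilType, k = 3, times = 2))

# returnTrain = TRUE returns the training (rather than holdout) row positions
createFolds(oilType, k = 5, list = TRUE, returnTrain = TRUE)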
createTimeSlices(1:9, 5, 1, fixedWindow = FALSE)
createTimeSlices(1:9, 5, 1, fixedWindow = TRUE)
createTimeSlices(1:9, 5, 3, fixedWindow = TRUE)
createTimeSlices(1:9, 5, 3, fixedWindow = FALSE)
createTimeSlices(1:15, 5, 3)
createTimeSlices(1:15, 5, 3, skip = 2)
createTimeSlices(1:15, 5, 3, skip = 3)
set.seed(131)
groups <- sort(sample(letters[1:4], size = 20, replace = TRUE))
table(groups)
folds <- groupKFold(groups)
lapply(folds, function(x, y) table(y[x]), y = groups)
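
# With fewer folds than groups, one or more groups are left out of each
# model-building set; k = 2 here is just an illustrative value
group_folds <- groupKFold(groups, k = 2)
lapply(group_folds, function(x, y) table(y[x]), y = groups)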
}
\references{
\url{http://topepo.github.io/caret/data-splitting.html}

Hyndman and Athanasopoulos (2013), Forecasting: principles and practice.
\url{https://otexts.com/fpp2/}
}
\author{
Max Kuhn, \code{createTimeSlices} by Tony Cooper
}
\keyword{utilities}