File: tximport.Rd

% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/tximport.R
\name{tximport}
\alias{tximport}
\title{Import transcript-level abundances and counts
for transcript- and gene-level analysis packages}
\usage{
tximport(
  files,
  type = c("none", "salmon", "sailfish", "alevin", "kallisto", "rsem", "stringtie"),
  txIn = TRUE,
  txOut = FALSE,
  countsFromAbundance = c("no", "scaledTPM", "lengthScaledTPM", "dtuScaledTPM"),
  tx2gene = NULL,
  varReduce = FALSE,
  dropInfReps = FALSE,
  infRepStat = NULL,
  ignoreTxVersion = FALSE,
  ignoreAfterBar = FALSE,
  geneIdCol,
  txIdCol,
  abundanceCol,
  countsCol,
  lengthCol,
  importer = NULL,
  existenceOptional = FALSE,
  sparse = FALSE,
  sparseThreshold = 1,
  readLength = 75,
  alevinArgs = NULL
)
}
\arguments{
\item{files}{a character vector of filenames for the transcript-level abundances}

\item{type}{character, the type of software used to generate the abundances.
Options are "salmon", "sailfish", "alevin", "kallisto", "rsem", "stringtie", or "none".
This argument is used to autofill the arguments below (geneIdCol, etc.).
"none" means that the user will specify these columns.}

\item{txIn}{logical, whether the incoming files are transcript level (default TRUE)}

\item{txOut}{logical, whether the function should just output
transcript-level matrices (default FALSE)}

\item{countsFromAbundance}{character, either "no" (default), "scaledTPM",
"lengthScaledTPM", or "dtuScaledTPM".
Whether to generate estimated counts using abundance estimates:
\itemize{
  \item scaled up to library size (scaledTPM),
  \item scaled using the average transcript length over samples
        and then the library size (lengthScaledTPM), or
  \item scaled using the median transcript length among isoforms of a gene,
        and then the library size (dtuScaledTPM). 
}
dtuScaledTPM is designed for DTU analysis in combination with \code{txOut=TRUE},
and it requires specifying a \code{tx2gene} data.frame.
dtuScaledTPM works such that, within a gene, values from all samples and
all transcripts are scaled by the same fixed median transcript length.
If using scaledTPM, lengthScaledTPM, or dtuScaledTPM,
the counts are no longer correlated across samples with transcript length,
and so the length offset matrix should not be used.}

\item{tx2gene}{a two-column data.frame linking transcript id (column 1)
to gene id (column 2).
The column names are not relevant, but this column order must be used.
This argument is required for gene-level summarization, and the tximport
vignette describes how to construct this data.frame (see Details below).
An automated solution that avoids having to create \code{tx2gene},
if one has quantified with Salmon or alevin against a human or mouse transcriptome,
is to use the \code{tximeta} function from the tximeta Bioconductor package.}

\item{varReduce}{whether to reduce the per-sample inferential replicate
information into a matrix of sample variances \code{variance} (default FALSE).
alevin computes the inferential variance by default for its bootstrap
inferential replicates, so this argument is not needed (and is ignored) for alevin}

\item{dropInfReps}{whether to skip reading in inferential replicates
(default FALSE). For alevin, \code{tximport} will still read in the
inferential variance matrix if it exists}

\item{infRepStat}{a function to re-compute counts and abundances from the
inferential replicates, e.g. \code{matrixStats::rowMedians} to re-compute counts 
as the median of the inferential replicates. The order of operations is:
first counts are re-computed, then abundances are re-computed.
Following this, if \code{countsFromAbundance} is not "no",
\code{tximport} will again re-compute counts from the re-computed abundances.
\code{infRepStat} should operate on rows of a matrix. (default is NULL)}

\item{ignoreTxVersion}{logical, whether to split the tx id on the '.' character
to remove version information to facilitate matching with the tx id in \code{tx2gene}
(default FALSE)}

\item{ignoreAfterBar}{logical, whether to split the tx id on the '|' character
to facilitate matching with the tx id in \code{tx2gene} (default FALSE)}

\item{geneIdCol}{name of column with gene id. If missing, the \code{tx2gene}
argument can be used}

\item{txIdCol}{name of column with tx id}

\item{abundanceCol}{name of column with abundances (e.g. TPM or FPKM)}

\item{countsCol}{name of column with estimated counts}

\item{lengthCol}{name of column with feature length information}

\item{importer}{a function used to read in the files}

\item{existenceOptional}{logical, whether tximport should skip checking that the files exist
before attempting import (default FALSE, meaning files must exist according to \code{file.exists})}

\item{sparse}{logical, whether to try to import data sparsely (default is FALSE).
The initial implementation supports only \code{txOut=TRUE} with
\code{countsFromAbundance="no"} or \code{"scaledTPM"}, and no inferential replicates.
Only the counts matrix is returned (plus the abundance matrix if using \code{"scaledTPM"})}

\item{sparseThreshold}{the minimum threshold for including a count as a
non-zero count during sparse import (default is 1)}

\item{readLength}{numeric, the read length used to calculate counts from
StringTie's output of coverage. Default value (from StringTie) is 75.
The formula used to calculate counts is:
\code{cov * transcript length / read length}}

\item{alevinArgs}{named list, with logical elements \code{filterBarcodes},
\code{tierImport}, \code{forceSlow}, \code{dropMeanVar}.
See Details for definitions.}
}
\value{
A simple list containing matrices: abundance, counts, length.
Another list element 'countsFromAbundance' carries through
the character argument used in the tximport call.
The length matrix contains the average transcript length for each
gene which can be used as an offset for gene-level analysis.
If detected, and \code{txOut=TRUE}, inferential replicates for
each sample will be imported and stored as a list of matrices,
itself an element \code{infReps} in the returned list.
An exception is alevin, in which the \code{infReps} are a list
of bootstrap replicate matrices, where each matrix has
genes as rows and cells as columns.
If \code{varReduce=TRUE} the inferential replicates will be summarized
according to the sample variance, and stored as a matrix \code{variance}.
alevin already computes the variance of the bootstrap inferential replicates
and so this is imported without needing to specify \code{varReduce=TRUE}.
}
\description{
\code{tximport} imports transcript-level estimates from various
external software and optionally summarizes abundances, counts,
and transcript lengths
to the gene-level (default) or outputs transcript-level matrices
(see \code{txOut} argument).
}
\details{
\strong{Inferential replicates:}
\code{tximport} will also load in information about inferential replicates --
a list of matrices of the Gibbs samples from the posterior, or bootstrap replicates,
per sample -- if these data are available in the expected locations relative
to the \code{files}.
The inferential replicates, stored in \code{infReps} in the output list,
are on the scale of estimated counts, and therefore correspond to \code{counts} in the output list.
By setting \code{varReduce=TRUE}, the inferential replicate matrices
will be replaced by a single matrix with the sample variance per transcript/gene
and per sample.
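
A minimal sketch of importing inferential replicates (the quantification
directory below is hypothetical, and \code{dir} and \code{samples} are
as constructed in the Examples section):

\preformatted{
# hypothetical paths to quantifications run with Gibbs samples
files <- file.path(dir, "salmon_gibbs", samples$run, "quant.sf.gz")
# per-sample inferential replicate matrices are stored in txi$infReps
txi <- tximport(files, type="salmon", txOut=TRUE)
# or collapse the replicates to one variance matrix per sample
txi.var <- tximport(files, type="salmon", txOut=TRUE, varReduce=TRUE)
}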

\strong{summarizeToGene:}
While \code{tximport} summarizes to the gene-level by default, 
the user can also perform the import and summarization steps manually,
by specifying \code{txOut=TRUE} and then using the function \code{summarizeToGene}.
Note however that this is equivalent to \code{tximport} with
\code{txOut=FALSE} (the default).
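
For example (a sketch, assuming \code{files} and a \code{tx2gene} data.frame
as constructed in the Examples section):

\preformatted{
# import transcript-level matrices
txi.tx <- tximport(files, type="salmon", txOut=TRUE)
# then summarize to the gene level manually
txi.sum <- summarizeToGene(txi.tx, tx2gene)
# this is equivalent to:
txi <- tximport(files, type="salmon", tx2gene=tx2gene)
}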

\strong{Solutions on summarization:} regarding \code{"tximport failed at summarizing to the gene-level"}:

\enumerate{
  \item provide a \code{tx2gene} data.frame linking transcripts to genes (more below)
  \item avoid gene-level summarization by specifying \code{txOut=TRUE}
}

See \code{vignette('tximport')} for example code for generating a
\code{tx2gene} data.frame from a \code{TxDb} object.
The \code{tx2gene} data.frame should exactly match and be derived from
the same set of transcripts used for quantification (the set of transcripts
used to create the transcriptome index).

\strong{Tximeta:}
One automated solution for Salmon or alevin quantification data is to use the
\code{tximeta} function in the tximeta Bioconductor package
which builds upon and extends \code{tximport}; this solution should
work out-of-the-box for human and mouse transcriptomes downloaded
from GENCODE, Ensembl, or RefSeq. For other cases, the user
should create the \code{tx2gene} manually as shown in the tximport
vignette.

\strong{On tx2gene construction:}
Note that the \code{keys} and \code{select} functions used
to create the \code{tx2gene} object are documented
in the man page for \link[AnnotationDbi]{AnnotationDb-class} objects
in the AnnotationDbi package (TxDb inherits from AnnotationDb).
For further details on generating TxDb objects from various inputs
see \code{vignette('GenomicFeatures')} from the GenomicFeatures package.
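
A minimal sketch of this approach, assuming a TxDb package that matches the
quantified transcriptome (the package named here is only an example; it must
correspond to the transcripts used to build the index):

\preformatted{
library(TxDb.Hsapiens.UCSC.hg38.knownGene)
txdb <- TxDb.Hsapiens.UCSC.hg38.knownGene
k <- keys(txdb, keytype="TXNAME")
# column order: transcript ID first, then gene ID
tx2gene <- select(txdb, keys=k, columns="GENEID", keytype="TXNAME")
}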

\strong{alevin:}
The \code{alevinArgs} argument includes some alevin-specific arguments.
This optional argument is a list with any or all of the following named logical variables:
\code{filterBarcodes}, \code{tierImport}, \code{forceSlow}, and \code{dropMeanVar}.
The variables are described as follows (with default values in parentheses):
\code{filterBarcodes} (FALSE) import only cell barcodes listed in
\code{whitelist.txt};
\code{tierImport} (FALSE) import the tier information in addition to counts;
\code{forceSlow} (FALSE) force the use of the slower import R code
even if \code{fishpond} is installed;
\code{dropMeanVar} (FALSE) don't import inferential mean and variance
matrices even if they exist (this also skips inferential replicates).
For \code{type="alevin"} all arguments other than \code{files},
\code{dropInfReps}, and \code{alevinArgs} are ignored.
Note that \code{files} should point to a single \code{quants_mat.gz} file,
in the directory structure created by the alevin software
(e.g. do not move the file or delete the other important files).
Note that importing alevin quantifications will be much faster by first
installing the \code{fishpond} package, which contains a C++ importer
for alevin's EDS format.
For alevin, \code{tximport} is importing the gene-by-cell matrix of counts,
as \code{txi$counts}, and effective lengths are not estimated.
\code{txi$mean} and \code{txi$variance} may also be imported if
inferential replicates were used, as well as inferential replicates
if these were output by alevin.
Length correction should not be applied to datasets where there
is not an expected correlation of counts and feature length.
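
A minimal sketch of an alevin import (the path below is hypothetical;
\code{files} must point to a single \code{quants_mat.gz} within the
directory structure written by alevin):

\preformatted{
# hypothetical path to an alevin quantification directory
files <- file.path(dir, "alevin", "neurons_900", "alevin", "quants_mat.gz")
txi <- tximport(files, type="alevin")
# gene-by-cell matrix of counts
dim(txi$counts)
}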
}
\examples{

# load data for demonstrating tximport
# note that the vignette shows more examples
# including how to read in files quickly using the readr package

library(tximportData)
dir <- system.file("extdata", package="tximportData")
samples <- read.table(file.path(dir,"samples.txt"), header=TRUE)
files <- file.path(dir,"salmon", samples$run, "quant.sf.gz")
names(files) <- paste0("sample",1:6)

# tx2gene links transcript IDs to gene IDs for summarization
tx2gene <- read.csv(file.path(dir, "tx2gene.gencode.v27.csv"))

txi <- tximport(files, type="salmon", tx2gene=tx2gene)
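
# a sketch of alternative calls using the same files and tx2gene:

# generate counts from the abundances, scaled by average transcript
# length and then library size (lengthScaledTPM)
txi.ls <- tximport(files, type="salmon", tx2gene=tx2gene,
                   countsFromAbundance="lengthScaledTPM")

# or keep transcript-level matrices instead of summarizing to gene level
txi.tx <- tximport(files, type="salmon", txOut=TRUE)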

}
\references{
Charlotte Soneson, Michael I. Love, Mark D. Robinson (2015)
Differential analyses for RNA-seq: transcript-level estimates
improve gene-level inferences. F1000Research.
\url{http://doi.org/10.12688/f1000research.7563}
}