% File: tokenize-methods.Rd
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/sourcetools.R
\name{tokenize_file}
\alias{tokenize}
\alias{tokenize_file}
\alias{tokenize_string}
\title{Tokenize R Code}
\usage{
tokenize_file(path)

tokenize_string(string)

tokenize(file = "", text = NULL)
}
\arguments{
\item{file, path}{A file path.}

\item{text, string}{\R code as a character vector of length one.}
}
\value{
A \code{data.frame} with the following columns:

\tabular{ll}{
\code{value}  \tab The token's contents, as a string.     \cr
\code{row}    \tab The row where the token is located.    \cr
\code{column} \tab The column where the token is located. \cr
\code{type}   \tab The token type, as a string.           \cr
}
}
\description{
Tools for tokenizing \R code.
}
\note{
Line numbers are determined by the presence of the \code{\\n}
line feed character, under the assumption that the code being
tokenized uses either \code{\\n} to indicate newlines (as on
modern Unix systems) or \code{\\r\\n} (as on Windows).
}
\examples{
tokenize_string("x <- 1 + 2")
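
# A brief sketch of working with the result (assumes only what the
# \value section documents): the return value is a data.frame with
# value, row, column, and type columns, so ordinary data.frame
# operations apply.
tokens <- tokenize_string("x <- 1 + 2")
tokens$type

# tokenize() accepts either a file path or a character vector of code.
tokenize(text = "y <- 1")

# tokenize_file() reads and tokenizes a file; the path below is
# hypothetical, so the call is not run.
\dontrun{
tokenize_file("script.R")
}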
}