File: curl_download.Rd

% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/download.R
\name{curl_download}
\alias{curl_download}
\title{Download file to disk}
\usage{
curl_download(url, destfile, quiet = TRUE, mode = "wb", handle = new_handle())
}
\arguments{
\item{url}{A character string naming the URL of a resource to be downloaded.}

\item{destfile}{A character string with the name where the downloaded file
is saved. Tilde-expansion is performed.}

\item{quiet}{If \code{TRUE}, suppress status messages (if any), and the
progress bar.}

\item{mode}{A character string specifying the mode with which to write the file.
Useful values are \code{"w"}, \code{"wb"} (binary), \code{"a"} (append)
and \code{"ab"}.}

\item{handle}{A curl handle object, as returned by \code{\link[=new_handle]{new_handle()}}.}
}
\value{
Path of downloaded file (invisibly).
}
\description{
Libcurl implementation of \code{C_download} (the "internal" download method)
with added support for https, ftps, gzip, etc. Default behavior is identical
to \code{\link[=download.file]{download.file()}}, but the request can be fully
configured by passing a custom \code{\link[=handle]{handle()}} (see the examples below).
}
\details{
The main difference between \code{curl_download} and \code{curl_fetch_disk}
is that \code{curl_download} checks the http status code before starting the
download, and raises an error when the status is non-successful. The behavior of
\code{curl_fetch_disk}, on the other hand, is to proceed as normal and write
the error page to disk in case of a non-success response.
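
A minimal sketch of this contrast, assuming an endpoint that returns a
404 status (the httpbin.org URL is only illustrative):

\preformatted{url_404 <- "https://httpbin.org/status/404"  # illustrative endpoint

# curl_download() raises an error and leaves no file behind
tryCatch(curl_download(url_404, tempfile()),
  error = function(e) message(conditionMessage(e)))

# curl_fetch_disk() proceeds and writes the error page to disk;
# the status code must be checked manually
res <- curl_fetch_disk(url_404, tempfile())
res$status_code
}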

The \code{curl_download} function does not support resuming, and removes the
temporary file if the download did not complete successfully.
For a more advanced download interface which supports concurrent requests and
resuming large files, have a look at the \link{multi_download} function.
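
A minimal sketch of that interface, using the same illustrative census URL
as the examples below; \code{resume = TRUE} continues a partially
downloaded file instead of starting over:

\preformatted{multi_download("http://www2.census.gov/acs2011_5yr/pums/csv_pus.zip",
  destfiles = "csv_pus.zip", resume = TRUE)
}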
}
\examples{
# Download a large file
\dontrun{
url <- "http://www2.census.gov/acs2011_5yr/pums/csv_pus.zip"
tmp <- tempfile()
curl_download(url, tmp)
}
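
# A minimal sketch of configuring the request with a custom handle;
# the verbose option and header value are placeholders, not required settings
\dontrun{
h <- new_handle(verbose = TRUE)
handle_setheaders(h, "User-Agent" = "my-downloader/1.0")
curl_download("https://httpbin.org/get", tempfile(), handle = h)
}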
}
\seealso{
Advanced download interface: \link{multi_download}
}