File: multi.Rd

% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/multi.R
\name{multi}
\alias{multi}
\alias{multi_add}
\alias{multi_run}
\alias{multi_set}
\alias{multi_list}
\alias{multi_cancel}
\alias{new_pool}
\alias{multi_fdset}
\title{Async Concurrent Requests}
\usage{
multi_add(handle, done = NULL, fail = NULL, data = NULL, pool = NULL)

multi_run(timeout = Inf, poll = FALSE, pool = NULL)

multi_set(
  total_con = 50,
  host_con = 6,
  max_streams = 10,
  multiplex = TRUE,
  pool = NULL
)

multi_list(pool = NULL)

multi_cancel(handle)

new_pool(total_con = 100, host_con = 6, max_streams = 10, multiplex = TRUE)

multi_fdset(pool = NULL)
}
\arguments{
\item{handle}{a curl \link{handle} with a preconfigured \code{url} option.}

\item{done}{callback function for a completed request. Called with a single
argument: the response data, in the same structure as \link{curl_fetch_memory}.}

\item{fail}{callback function called on a failed request. The argument
contains the error message.}

\item{data}{(advanced) callback function, file path, or connection object for writing
incoming data. This callback should only be used for \emph{streaming} applications,
where small pieces of incoming data get written before the request has completed. The
signature for the callback function is \code{write(data, final = FALSE)}. If set
to \code{NULL} the entire response gets buffered internally and returned in
the \code{done} callback (which is usually what you want).}

\item{pool}{a multi handle created by \link{new_pool}. Default uses a global pool.}

\item{timeout}{max time in seconds to wait for results. Use \code{0} to poll for results without
waiting at all.}

\item{poll}{If \code{TRUE} then return immediately after any of the requests has completed.
May also be an integer, in which case it returns after that many requests have completed.}

\item{total_con}{max total concurrent connections.}

\item{host_con}{max concurrent connections per host.}

\item{max_streams}{max HTTP/2 concurrent multiplex streams per connection.}

\item{multiplex}{use HTTP/2 multiplexing if supported by host and client.}
}
\description{
AJAX-style concurrent requests, possibly using HTTP/2 multiplexing.
Results are only available via callback functions. Advanced use only!
For downloading many files in parallel use \link{multi_download} instead.
}
\details{
Requests are created in the usual way using a curl \link{handle} and added
to the scheduler with \link{multi_add}. This function returns immediately
and does not perform the request yet. The user needs to call \link{multi_run},
which performs all scheduled requests concurrently. It returns when all
requests have completed, or in case of a \code{timeout} or \code{SIGINT} (e.g.
when the user presses \code{ESC} or \code{CTRL+C} in the console). In the
latter case, simply call \link{multi_run} again to resume pending requests.
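
For example, a minimal sketch of this pattern (using the hb.cran.dev test
server from the examples below):

\preformatted{
h <- new_handle(url = "https://hb.cran.dev/get")
multi_add(h, done = function(res) message("Status: ", res$status_code),
  fail = function(msg) message("Request failed: ", msg))
multi_run()  # blocks until all scheduled requests have finished
}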

When a request succeeds, the \code{done} callback gets triggered with
the response data. The structure of this data is identical to \link{curl_fetch_memory}.
When a request fails, the \code{fail} callback is triggered with an error
message. Note that failure here means something went wrong in performing the
request, such as a connection failure; it does not check the HTTP status code.
Just like with \link{curl_fetch_memory}, the user has to implement the
application logic themselves.
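
For example, a sketch of a \code{done} callback that checks the status code
itself (the response fields follow \link{curl_fetch_memory}):

\preformatted{
check_status <- function(res){
  if(res$status_code >= 400){
    message("Server returned error status: ", res$status_code)
  } else {
    message("Received ", length(res$content), " bytes from ", res$url)
  }
}
multi_add(new_handle(url = "https://hb.cran.dev/status/418"), done = check_status)
multi_run()
}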

Raising an error within a callback function stops execution of that function
but does not affect other requests.

A single handle cannot be used for multiple simultaneous requests. However,
it is possible to add new requests to a pool while it is running, so you
can re-use a handle within the callback of a request from that same handle.
It is up to the user to make sure the same handle is not used in concurrent
requests.
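
For example, a sketch that schedules a follow-up request with the same
handle from inside its own \code{done} callback:

\preformatted{
h <- new_handle(url = "https://hb.cran.dev/get")
multi_add(h, done = function(res){
  message("First request done: ", res$status_code)
  # The handle is free again inside this callback, so it can be re-added
  handle_setopt(h, url = "https://hb.cran.dev/headers")
  multi_add(h, done = function(res) message("Second request done: ", res$status_code))
})
multi_run()
}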

The \link{multi_cancel} function can be used to cancel a pending request.
It has no effect if the request was already completed or canceled.
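
For example, a pending request can be withdrawn again before the pool is run:

\preformatted{
h <- new_handle(url = "https://hb.cran.dev/delay/10")
multi_add(h, done = print)
multi_cancel(h)         # withdraw the request before it runs
multi_run(timeout = 0)  # nothing left to perform
}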

The \link{multi_fdset} function returns the file descriptors that curl is
currently polling, along with a timeout parameter: the number of
milliseconds an application should wait (at most) before proceeding. It
is equivalent to the \code{curl_multi_fdset} and
\code{curl_multi_timeout} calls. It is handy for applications that
expect input (or write output) through both curl and other file
descriptors.
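
For example, a sketch of how a custom event loop might query the pool:

\preformatted{
multi_add(new_handle(url = "https://hb.cran.dev/delay/3"), done = print)
fds <- multi_fdset()
str(fds)  # inspect the descriptors and timeout reported by curl
multi_run()
}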
}
\examples{
results <- list()
success <- function(x){
  results <<- append(results, list(x))
}
failure <- function(str){
  cat(paste("Failed request:", str), file = stderr())
}
# This handle will take longest (3sec)
h1 <- new_handle(url = "https://hb.cran.dev/delay/3")
multi_add(h1, done = success, fail = failure)

# This handle writes data to a file
con <- file("output.txt")
h2 <- new_handle(url = "https://hb.cran.dev/post", postfields = "bla bla")
multi_add(h2, done = success, fail = failure, data = con)

# This handle raises an error
h3 <- new_handle(url = "https://urldoesnotexist.xyz")
multi_add(h3, done = success, fail = failure)

# Actually perform the requests
multi_run(timeout = 2)  # returns after 2 sec; the slow request is still pending
multi_run()             # resume until all requests have completed

# Check the file
readLines("output.txt")
unlink("output.txt")
}
\seealso{
Advanced download interface: \link{multi_download}
}