#' Read HTML or XML.
#'
#' @section Setting the "user agent" header:
#'
#' When performing web scraping tasks it is both good practice --- and often required ---
#' to set the [user agent](https://en.wikipedia.org/wiki/User_agent) request header
#' to a specific value. Sometimes this value is assigned to emulate a browser in order
#' to have content render in a certain way (e.g. `Mozilla/5.0 (Windows NT 5.1; rv:52.0)
#' Gecko/20100101 Firefox/52.0` to emulate more recent Windows browsers). Most often,
#' this value should be set to tell the web resource owner who you are and what
#' you intend to do, as in this Google scraping bot user agent identifier:
#' `Googlebot/2.1 (+http://www.google.com/bot.html)`.
#'
#' You can set the HTTP user agent for URL-based requests using [httr::set_config()] and [httr::user_agent()]:
#'
#' `httr::set_config(httr::user_agent("me@@example.com; +https://example.com/info.html"))`
#'
#' [httr::set_config()] changes the configuration globally;
#' [httr::with_config()] can be used to change the configuration temporarily.
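#'
#' For example, to set the user agent temporarily for a single request (the
#' URL and user agent string below are just placeholders):
#'
#' `httr::with_config(httr::user_agent("me@@example.com"), read_html("https://example.com"))`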
#'
#' @param x A string, a connection, or a raw vector.
#'
#' A string can be either a path, a url or literal xml. Urls will
#' be converted into connections either using `base::url` or, if
#' installed, `curl::curl`. Local paths ending in `.gz`,
#' `.bz2`, `.xz`, `.zip` will be automatically uncompressed.
#'
#' If a connection, the complete connection is read into a raw vector before
#' being parsed.
#' @param encoding Specify a default encoding for the document. Unless
#' otherwise specified, XML documents are assumed to be in UTF-8 or
#' UTF-16. If the document is not UTF-8/16, and lacks an explicit
#' encoding directive, this allows you to supply a default.
#' @param ... Additional arguments passed on to methods.
#' @param as_html Optionally parse an xml file as if it's html.
#' @param base_url When loading from a connection, raw vector or literal
#' html/xml, this allows you to specify a base url for the document. Base
#' urls are used to turn relative urls into absolute urls.
#' @param n If `file` is a connection, the number of bytes to read per
#' iteration. Defaults to 64 KiB.
#' @param verbose When reading from a slow connection, this prints some
#' output on every iteration so you know it's working.
#' @param options Set parsing options for the libxml2 parser. Zero or more of
#' \Sexpr[results=rd, stage=build]{xml2:::describe_options(xml2:::xml_parse_options())}
#' @return An XML document. HTML is normalised to valid XML --- this may not
#' be exactly the same transformation performed by the browser, but it's
#' a reasonable approximation.
#' @export
#' @examples
#' # Literal xml/html is useful for small examples
#' read_xml("<foo><bar /></foo>")
#' read_html("<html><title>Hi<title></html>")
#' read_html("<html><title>Hi")
#'
#' # From a local path
#' read_html(system.file("extdata", "r-project.html", package = "xml2"))
#'
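#' # A base_url supplied when parsing literal markup (or a raw vector /
#' # connection) lets relative links be resolved later; the URL below is
#' # just a placeholder
#' doc <- read_html("<a href='a.html'>A</a>", base_url = "https://example.com")
#' url_absolute(xml_attr(xml_find_first(doc, ".//a"), "href"), xml_url(doc))
#'
#' # Reading from a connection: the connection is read completely before parsing
#' con <- textConnection("<foo><bar /></foo>")
#' read_xml(con)
#' close(con)
#'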
#' \dontrun{
#' # From a url
#' cd <- read_xml(xml2_example("cd_catalog.xml"))
#' me <- read_html("http://had.co.nz")
#' }
read_xml <- function(x, encoding = "", ..., as_html = FALSE, options = "NOBLANKS") {
UseMethod("read_xml")
}
#' @export
#' @rdname read_xml
read_html <- function(x,
encoding = "",
...,
options = c("RECOVER", "NOERROR", "NOBLANKS", "HUGE")) {
UseMethod("read_html")
}
#' @export
read_html.default <- function(x,
encoding = "",
...,
options = c("RECOVER", "NOERROR", "NOBLANKS", "HUGE")) {
options <- parse_options(options, xml_parse_options())
suppressWarnings(read_xml(x, encoding = encoding, ..., as_html = TRUE, options = options))
}
#' @export
read_html.response <- function(x,
encoding = "",
options = c("RECOVER", "NOERROR", "NOBLANKS"),
...) {
check_installed("httr")
options <- parse_options(options, xml_parse_options())
httr::stop_for_status(x)
content <- httr::content(x, as = "raw")
xml2::read_html(content, encoding = encoding, options = options, ...)
}
#' @export
#' @rdname read_xml
read_xml.character <- function(x,
encoding = "",
...,
as_html = FALSE,
options = "NOBLANKS") {
check_string(x)
options <- parse_options(options, xml_parse_options())
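  # Strings containing angle brackets are treated as literal XML/HTML;
  # anything else is treated as a file path or URL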
if (grepl("<|>", x)) {
read_xml.raw(charToRaw(enc2utf8(x)), "UTF-8", ..., as_html = as_html, options = options)
} else {
con <- path_to_connection(x)
if (inherits(con, "connection")) {
read_xml.connection(con,
encoding = encoding, ..., as_html = as_html,
base_url = x, options = options
)
} else {
doc <- .Call(doc_parse_file, con,
encoding = encoding, as_html = as_html,
options = options
)
xml_document(doc)
}
}
}
#' @export
#' @rdname read_xml
read_xml.raw <- function(x,
encoding = "",
base_url = "",
...,
as_html = FALSE,
options = "NOBLANKS") {
options <- parse_options(options, xml_parse_options())
doc <- .Call(doc_parse_raw, x, encoding, base_url, as_html, options)
xml_document(doc)
}
#' @export
#' @rdname read_xml
read_xml.connection <- function(x,
encoding = "",
n = 64 * 1024,
verbose = FALSE,
...,
base_url = "",
as_html = FALSE,
options = "NOBLANKS") {
options <- parse_options(options, xml_parse_options())
if (!isOpen(x)) {
open(x, "rb")
on.exit(close(x))
}
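  # Read the entire connection into a raw vector, n bytes at a time,
  # before handing it to the parser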
raw <- .Call(read_connection_, x, n)
read_xml.raw(raw,
encoding = encoding, base_url = base_url, as_html =
as_html, options = options
)
}
#' @export
read_xml.response <- function(x,
encoding = "",
base_url = "",
...,
as_html = FALSE,
options = "NOBLANKS") {
check_installed("httr")
options <- parse_options(options, xml_parse_options())
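  # Fail on HTTP errors, then parse the raw body of the httr response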
httr::stop_for_status(x)
content <- httr::content(x, as = "raw")
xml2::read_xml(content,
encoding = encoding, base_url = if (nzchar(base_url)) base_url else x$url,
as_html = as_html, options = options, ...
)
}
#' @export
read_xml.textConnection <- function(x,
encoding = "",
...) {
s <- paste(readLines(x), collapse = "\n")
read_xml.character(s, ...)
}
#' Download an HTML or XML file
#'
#' Libcurl implementation of `C_download` (the "internal" download method)
#' with added support for https, ftps, gzip, etc. Default behavior is identical
#' to [download.file()], but the request can be fully configured by passing
#' a custom [curl::handle()].
#' @inherit curl::curl_download
#' @param file A character string with the name where the downloaded file is
#' saved.
#' @seealso [curl_download][curl::curl_download]
#' @export
#' @examples
#' \dontrun{
#' download_html("http://tidyverse.org/index.html")
#' }
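#'
#' \dontrun{
#' # The request can be configured through a custom curl handle; the URL and
#' # user agent string below are just placeholders
#' h <- curl::new_handle()
#' curl::handle_setopt(h, useragent = "me@@example.com; +https://example.com/info.html")
#' download_xml("http://tidyverse.org/index.html", file = tempfile(fileext = ".html"), handle = h)
#' }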
download_xml <- function(url,
file = basename(url),
quiet = TRUE,
mode = "wb",
handle = curl::new_handle()) {
check_installed("curl", "to use `download_xml()`.")
curl::curl_download(url, file, quiet = quiet, mode = mode, handle = handle)
invisible(file)
}
#' @export
#' @rdname download_xml
download_html <- download_xml