File: spec_arrow_write_table_arrow.Rd

% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/spec-arrow-write-table-arrow.R
\docType{data}
\name{spec_arrow_write_table_arrow}
\alias{spec_arrow_write_table_arrow}
\title{spec_arrow_write_table_arrow}
\value{
\code{dbWriteTableArrow()} returns \code{TRUE}, invisibly.
}
\description{
spec_arrow_write_table_arrow
}
\section{Failure modes}{

If the table exists, and both \code{append} and \code{overwrite} arguments are unset,
or \code{append = TRUE} and the data frame with the new data has different
column names,
an error is raised; the remote table remains unchanged.

An error is raised when calling this method for a closed
or invalid connection.
An error is also raised
if \code{name} cannot be processed with \code{\link[DBI:dbQuoteIdentifier]{DBI::dbQuoteIdentifier()}} or
if this results in a non-scalar.
Invalid values for the additional arguments
\code{overwrite}, \code{append}, and \code{temporary}
(non-scalars,
unsupported data types,
\code{NA},
incompatible values,
incompatible columns)
also raise an error.
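
These rules can be exercised as in the following sketch, assuming a backend
with Arrow support (for example RSQLite with the nanoarrow package installed):

\preformatted{con <- DBI::dbConnect(RSQLite::SQLite(), ":memory:")
DBI::dbWriteTableArrow(con, "tbl", data.frame(a = 1:3, b = letters[1:3]))

# Table exists, and neither overwrite nor append is given: error
try(DBI::dbWriteTableArrow(con, "tbl", data.frame(a = 4:6, b = letters[4:6])))

# append = TRUE, but the new data has different column names: error
try(DBI::dbWriteTableArrow(con, "tbl", data.frame(x = 1), append = TRUE))

DBI::dbDisconnect(con)

# Closed connection: error
try(DBI::dbWriteTableArrow(con, "tbl2", data.frame(a = 1)))
}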
}

\section{Additional arguments}{

The following arguments are not part of the \code{dbWriteTableArrow()} generic
(to improve compatibility across backends)
but are part of the DBI specification:
\itemize{
\item \code{overwrite} (default: \code{FALSE})
\item \code{append} (default: \code{FALSE})
\item \code{temporary} (default: \code{FALSE})
}

They must be provided as named arguments.
See the "Specification" and "Value" sections for details on their usage.
}

\section{Specification}{

The \code{name} argument is processed as follows,
to support databases that allow non-syntactic names for their objects
(both forms are illustrated in the sketch after the following list):
\itemize{
\item If an unquoted table name is given as a string: \code{dbWriteTableArrow()} will do the quoting,
perhaps by calling \code{dbQuoteIdentifier(conn, x = name)}
\item If the result of a call to \code{\link[DBI:dbQuoteIdentifier]{DBI::dbQuoteIdentifier()}}: no more quoting is done
}
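
A minimal sketch of both forms, assuming RSQLite with nanoarrow:

\preformatted{con <- DBI::dbConnect(RSQLite::SQLite(), ":memory:")
df <- data.frame(a = 1)

# Plain string: dbWriteTableArrow() takes care of the quoting
DBI::dbWriteTableArrow(con, "first table", df)

# Result of dbQuoteIdentifier(): used as is, no further quoting
name <- DBI::dbQuoteIdentifier(con, "second table")
DBI::dbWriteTableArrow(con, name, df)

DBI::dbExistsTable(con, "first table")
DBI::dbExistsTable(con, "second table")

DBI::dbDisconnect(con)
}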

The \code{value} argument must be a data frame
with a subset of the columns of the existing table if \code{append = TRUE}.
The order of the columns does not matter with \code{append = TRUE}.

If the \code{overwrite} argument is \code{TRUE}, an existing table of the same name
will be overwritten.
This argument doesn't change behavior if the table does not exist yet.

If the \code{append} argument is \code{TRUE}, the rows in an existing table are
preserved, and the new data are appended.
If the table doesn't exist yet, it is created.

If the \code{temporary} argument is \code{TRUE}, the table is not available in a
second connection and is gone after reconnecting.
Not all backends support this argument.
A regular, non-temporary table is visible in a second connection,
in a pre-existing connection,
and after reconnecting to the database.
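
The append and temporary behaviour can be sketched as follows
(assuming RSQLite, where temporary tables are per-connection;
the temporary table is expected to be invisible to the second connection):

\preformatted{path <- tempfile()
con <- DBI::dbConnect(RSQLite::SQLite(), path)

DBI::dbWriteTableArrow(con, "cars", cars[1:10, ])
DBI::dbWriteTableArrow(con, "cars", cars[11:20, ], append = TRUE)
nrow(DBI::dbReadTable(con, "cars"))    # 20: existing rows are preserved

DBI::dbWriteTableArrow(con, "scratch", cars, temporary = TRUE)

con2 <- DBI::dbConnect(RSQLite::SQLite(), path)
DBI::dbExistsTable(con2, "cars")       # TRUE: regular tables persist
DBI::dbExistsTable(con2, "scratch")    # FALSE: temporary tables do not

DBI::dbDisconnect(con2)
DBI::dbDisconnect(con)
}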

SQL keywords can be used freely in table names, column names, and data.
Quotes, commas, spaces, and other special characters such as newlines and tabs
can also be used in the data,
and, if the database supports non-syntactic identifiers,
also for table names
and column names.
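
For example (a sketch; support for non-syntactic identifiers varies by backend):

\preformatted{con <- DBI::dbConnect(RSQLite::SQLite(), ":memory:")

df <- data.frame(a = c("with space", "comma, quote '", "SELECT"))
names(df) <- "order by"                     # non-syntactic column name

DBI::dbWriteTableArrow(con, "select", df)   # SQL keyword as table name
DBI::dbReadTable(con, "select")

DBI::dbDisconnect(con)
}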

At a minimum, the following data types must be supported
and must be read back identically with \code{\link[DBI:dbReadTable]{DBI::dbReadTable()}}:
\itemize{
\item integer
\item numeric
(the behavior for \code{Inf} and \code{NaN} is not specified)
\item logical
\item \code{NA} as NULL
\item 64-bit values (using \code{"bigint"} as field type); the result can be
\itemize{
\item converted to a numeric, which may lose precision,
\item converted to a character vector, which gives the full decimal
representation,
\item written to another table and read again unchanged
}
\item character (in both UTF-8
and native encodings),
supporting empty strings
before and after a non-empty string
\item factor (possibly returned as character)
\item objects of type \link[blob:blob]{blob::blob}
(if supported by the database)
\item date
(if supported by the database;
returned as \code{Date}),
also for dates prior to 1970 or 1900 or after 2038
\item time
(if supported by the database;
returned as objects that inherit from \code{difftime})
\item timestamp
(if supported by the database;
returned as \code{POSIXct}
respecting the time zone but not necessarily preserving the
input time zone),
also for timestamps prior to 1970 or 1900 or after 2038
}

Mixing column types in the same table is supported.
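
A round trip over several of these types can be sketched as follows
(assuming RSQLite; the exact classes of the returned columns depend on the backend):

\preformatted{con <- DBI::dbConnect(RSQLite::SQLite(), ":memory:")

value <- data.frame(
  int = c(1L, NA, 3L),
  num = c(1.5, NA, 3),
  lgl = c(TRUE, FALSE, NA),
  chr = c("", "non-empty", "")
)
DBI::dbWriteTableArrow(con, "types", value)
str(DBI::dbReadTable(con, "types"))

DBI::dbDisconnect(con)
}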
}

\seealso{
Other Arrow specifications: 
\code{\link{spec_arrow_append_table_arrow}},
\code{\link{spec_arrow_create_table_arrow}},
\code{\link{spec_arrow_fetch_arrow}},
\code{\link{spec_arrow_fetch_arrow_chunk}},
\code{\link{spec_arrow_get_query_arrow}},
\code{\link{spec_arrow_read_table_arrow}},
\code{\link{spec_arrow_send_query_arrow}},
\code{\link{spec_result_clear_result}}
}
\concept{Arrow specifications}