<?xml version="1.0" encoding="UTF-8"?>
<!-- $Id: deployment.xml,v 1.3 2005/07/02 09:11:39 fredt Exp $ -->
<chapter>
<title>Deployment Issues</title>
<chapterinfo>
<authorgroup>
<author>
<firstname>Fred</firstname>
<surname>Toussi</surname>
<affiliation>
<orgname>HSQLDB Development Group</orgname>
</affiliation>
<email>ft@cluedup.com</email>
</author>
</authorgroup>
<edition>$Revision: 1.3 $</edition>
<pubdate>$Date: 2005/07/02 09:11:39 $</pubdate>
<keywordset>
<keyword>Hsqldb</keyword>
<keyword>Guide</keyword>
</keywordset>
<legalnotice>
<para>Copyright 2005 Fred Toussi. Permission is granted to distribute
this document without any alteration under the terms of the HSQLDB
license. Additional permission is granted to the HSQLDB Development
Group to distribute this document with or without alterations under the
terms of the HSQLDB license.</para>
</legalnotice>
</chapterinfo>
<section>
<title>Purpose</title>
<para>Many questions repeatedly asked in Forums and mailing lists are
answered in this guide. If you want to use HSQLDB with your application,
you should read this guide. This document covers system related issues.
For issues related to SQL see the <link endterm="sql_issues-title"
linkend="sql_issues-chapter" /> chapter.</para>
</section>
<section>
<title>Mode of Operation and Tables</title>
<para>HSQLDB has many modes of operation and features that allow it to be
used in very different scenarios. Levels of memory usage, speed and
accessibility by different applications are influenced by how HSQLDB is
deployed.</para>
<section>
<title>Mode of Operation</title>
<para>The decision to run HSQLDB as a separate server process or as an
in-process database should be based on the following:</para>
<para>
<itemizedlist>
<listitem>
<para>When HSQLDB is run as a server on a separate machine, it is
isolated from hardware failures and crashes on the hosts running
the application.</para>
</listitem>
<listitem>
<para>When HSQLDB is run as a server on the same machine, it is
isolated from application crashes and memory leaks.</para>
</listitem>
<listitem>
<para>Server connections are slower than in-process connections
due to the overhead of streaming the data for each JDBC
call.</para>
</listitem>
</itemizedlist>
</para>
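<para>The chosen mode is reflected in the JDBC connection URL used by
the application. The examples below are illustrative; the database name
<literal>testdb</literal> and the file path are hypothetical:</para>
<informalexample>
<programlisting>jdbc:hsqldb:file:/opt/db/testdb        -- in-process (standalone) mode
jdbc:hsqldb:hsql://localhost/testdb    -- server mode over the network</programlisting>
</informalexample>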
</section>
<section>
<title>Tables</title>
<para>TEXT tables are designed for special applications where the data
has to be in an interchangeable format, such as CSV. TEXT tables should
not be used for routine storage of data.</para>
<para>MEMORY tables and CACHED tables are generally used for data
storage. The difference between the two is as follows:</para>
<para>
<itemizedlist>
<listitem>
<para>The data for all MEMORY tables is read from the .script file
when the database is started and stored in memory. In contrast the
data for cached tables is not read into memory until the table is
accessed. Furthermore, only part of the data for each CACHED table
is held in memory, allowing tables with more data than can be held
in memory.</para>
</listitem>
<listitem>
<para>When the database is shut down in the normal way, all the
data for MEMORY tables is written out to disk. In comparison, only
the data in CACHED tables that has changed is written out at
shutdown, plus a compressed backup of all the data in all CACHED
tables.</para>
</listitem>
<listitem>
<para>The size and capacity of the data cache for all the CACHED
tables is configurable. This makes it possible to allow all the
data in CACHED tables to be cached in memory. In this case, speed
of access is good, but slightly slower than MEMORY tables.</para>
</listitem>
<listitem>
<para>For normal applications it is recommended that MEMORY tables
are used for small amounts of data, leaving CACHED tables for
large data sets. For special applications in which speed is
paramount and a large amount of free memory is available, MEMORY
tables can be used for large tables as well.</para>
</listitem>
</itemizedlist>
</para>
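<para>The table type is chosen in the CREATE TABLE statement; a plain
CREATE TABLE creates a MEMORY table. The table and column names below
are hypothetical:</para>
<informalexample>
<programlisting>CREATE MEMORY TABLE COUNTRY(CODE CHAR(2), NAME VARCHAR(100));
CREATE CACHED TABLE INVOICE(ID INTEGER, AMOUNT NUMERIC(10,2));</programlisting>
</informalexample>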
</section>
<section>
<title>Large Objects</title>
<para>JDBC Clobs are supported as columns of the type LONGVARCHAR. JDBC
Blobs are supported as columns of the type LONGVARBINARY. When large
objects (LONGVARCHAR, LONGVARBINARY, OBJECT) are stored with table
definitions that contain several normal fields, it is better to use two
tables instead: the first holds the normal fields and the second holds
the large object plus an identity field. Using this method has two
benefits. (a) The first table can usually be created
as a MEMORY table while only the second table is a CACHED table. (b) The
large objects can be retrieved individually using their identity,
instead of getting loaded into memory for finding the rows during query
processing. An example of two tables and a select query that exploits
the separation between the two follows:</para>
<informalexample>
<programlisting>CREATE MEMORY TABLE MAINTABLE(MAINID INTEGER, ......);</programlisting>
<programlisting>CREATE CACHED TABLE LOBTABLE(LOBID INTEGER, LOBDATA LONGVARBINARY);</programlisting>
<programlisting>SELECT * FROM (SELECT * FROM MAINTABLE &lt;join any other table&gt; WHERE &lt;various conditions apply&gt;) JOIN LOBTABLE ON MAINID=LOBID;</programlisting>
</informalexample>
<para>The inner SELECT finds the required rows without reference to the
LOBTABLE and when it has found all the rows, retrieves the required
large objects from the LOBTABLE.</para>
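<para>Once the rows of interest are known, an individual large object
can also be fetched on its own by its identity, for example:</para>
<informalexample>
<programlisting>SELECT LOBDATA FROM LOBTABLE WHERE LOBID = 1234;</programlisting>
</informalexample>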
</section>
<section>
<title>Deployment context</title>
<para>The files used for storing HSQLDB database data are all in the
same directory. New files are always created and deleted by the database
engine. Two simple principles must be observed:</para>
<itemizedlist>
<listitem>
<para>The Java process running HSQLDB must have full privileges on
the directory where the files are stored. This includes create and
delete privileges.</para>
</listitem>
<listitem>
<para>The file system must have enough spare room both for the
'permanent' and 'temporary' files. The default maximum size of the
.log file is 200MB. The .data file can grow to up to 8GB. The
.backup file can be up to 50% of the .data file. The temporary file
created at the time of a SHUTDOWN COMPACT can be equal in size to
the .data file.</para>
</listitem>
</itemizedlist>
</section>
</section>
<section>
<title>Memory and Disk Use</title>
<para>Memory used by the program can be thought of as two distinct pools:
memory used for table data, and memory used for building result sets and
other internal operations. In addition, when transactions are used, memory
is utilised for storing the information needed for a rollback.</para>
<para>Since version 1.7.1, memory use has been significantly reduced
compared to previous versions. The memory used for a MEMORY table is the
sum of memory used by each row. Each MEMORY table row is a Java object
that has 2 int or reference variables. It contains an array of objects for
the fields in the row. Each field is an object such as
<classname>Integer</classname>, <classname>Long</classname>,
<classname>String</classname>, etc. In addition each index on the table
adds a node object to the row. Each node object has 6 int or reference
variables. As a result, a table with just one column of type INTEGER will
have four objects per row, with a total of 10 variables of 4 bytes each -
currently taking up 80 bytes per row. Beyond this, each extra column in
the table adds at least a few bytes to the size of each row.</para>
<para>The memory used for a result set row has fewer overheads (fewer
variables and no index nodes) but still uses a lot of memory. All the rows
in the result set are built in memory, so very large result sets may not
be possible. In server mode databases, the result set memory is released
from the server once the database server has returned the result set.
In-process databases release the memory when the application program
releases the <classname>java.sql.ResultSet</classname> object. Server
modes require additional memory for returning result sets, as they convert
the full result set into an array of bytes which is then transmitted to
the client.</para>
<para>When UPDATE and DELETE queries are performed on CACHED tables, the
full set of rows that are affected, including those affected due to ON
UPDATE actions, is held in memory for the duration of the operation. This
means it may not be possible to perform deletes or updates involving very
large numbers of rows of CACHED tables. Such operations should be
performed in smaller sets.</para>
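<para>One possible way to break such an operation into smaller sets is
to partition it on a key range; the table and column names here are
hypothetical:</para>
<informalexample>
<programlisting>-- delete a large number of rows in batches rather than in one statement
DELETE FROM BIGTABLE WHERE ID &gt;= 0 AND ID &lt; 100000;
DELETE FROM BIGTABLE WHERE ID &gt;= 100000 AND ID &lt; 200000;
-- ... and so on for the remaining ranges</programlisting>
</informalexample>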
<para>When transactions support is enabled with SET AUTOCOMMIT OFF, lists
of all insert, delete or update operations are stored in memory so that
they can be undone when ROLLBACK is issued. Transactions that span
hundreds of modifications to data will take up a lot of memory until the
next COMMIT or ROLLBACK clears the list.</para>
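<para>Issuing COMMIT at intervals keeps this list, and the memory it
occupies, small. A sketch of the pattern follows; the table is
hypothetical and the SET AUTOCOMMIT FALSE form of the statement is
used:</para>
<informalexample>
<programlisting>SET AUTOCOMMIT FALSE
INSERT INTO ACCOUNT VALUES(1, 100)
INSERT INTO ACCOUNT VALUES(2, 200)
COMMIT</programlisting>
</informalexample>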
<para>Most JVM implementations allocate up to a maximum amount of memory
(usually 64 MB by default). This amount is generally not adequate when
large memory tables are used, or when the average size of rows in cached
tables is larger than a few hundred bytes. The maximum amount of allocated
memory can be set on the java ... command line that is used for running
HSQLDB. For example, with Sun JVM version 1.3.0 the parameter -Xmx256m
increases the amount to 256 MB.</para>
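<para>A typical server startup command line therefore looks like the
following; the class path and database path are illustrative:</para>
<informalexample>
<programlisting>java -Xmx256m -cp hsqldb.jar org.hsqldb.Server -database.0 file:testdb -dbname.0 testdb</programlisting>
</informalexample>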
<para>1.8.0 uses a fast cache for immutable objects such as Integer or
String that are stored in the database. In most circumstances, this
reduces the memory footprint still further as fewer copies of the most
frequently-used objects are kept in memory.</para>
<section>
<title>Cache Memory Allocation</title>
<para>With CACHED tables, the data is stored on disk and only up to a
maximum number of rows are held in memory at any time. The default is up
to 3*16384 rows. The <property>hsqldb.cache_scale</property> database
property can be set to alter this amount. As any random subset of the
rows in any of the CACHED tables can be held in the cache, the amount of
memory needed by cached rows can reach the sum of the rows containing
the largest field data. For example if a table with 100,000 rows
contains 40,000 rows with 1,000 bytes of data in each row and 60,000
rows with 100 bytes in each, the cache can grow to contain nearly 50,000
rows, including all the 40,000 larger rows.</para>
<para>An additional property,
<property>hsqldb.cache_size_scale</property> can be used in conjunction
with the <property>hsqldb.cache_scale</property> property. This puts a
limit in bytes on the total size of rows that are cached. When the
default values are used for both properties, the limit on the total size
of rows is approximately 50MB. (This is the size of binary images of the
rows and indexes. It translates to more actual memory, typically 2-4
times, used for the cache because the data is represented by Java
objects.)</para>
<para>If memory is limited, the <property>hsqldb.cache_scale</property>
or <property>hsqldb.cache_size_scale</property> database properties can
be reduced. In the example above, if the
<property>hsqldb.cache_size_scale</property> is reduced from 10 to 8,
then the total binary size limit is reduced from 50MB to 12.5MB. This
will allow the number of cached rows to reach 50,000 small rows, but
only 12,500 of the larger rows.</para>
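<para>Continuing this example, the reduction can be made with the SET
PROPERTY statement (the new values take effect only after the database
is restarted):</para>
<informalexample>
<programlisting>SET PROPERTY "hsqldb.cache_scale" 14
SET PROPERTY "hsqldb.cache_size_scale" 8</programlisting>
</informalexample>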
</section>
</section>
<section>
<title>Managing Database Connections</title>
<para>In all running modes (server or in-process) multiple connections to
the database engine are supported. In-process (standalone) mode supports
connections from the client in the same Java Virtual Machine, while server
modes support connections over the network from several different
clients.</para>
<para>Connection pooling software can be used to connect to the database
but it is not generally necessary. With other database engines, connection
pools are used for reasons that may not apply to HSQLDB.</para>
<itemizedlist>
<listitem>
<para>To allow new queries to be performed while a time-consuming
query is being performed in the background. This is not possible with
HSQLDB 1.8.0, as the engine blocks while performing one query and deals
with the next only after the first has finished. This capability is
under development and will be introduced in a future version.</para>
</listitem>
<listitem>
<para>To limit the maximum number of simultaneous connections to the
database for performance reasons. With HSQLDB this can be useful only
if your application is designed in a way that opens and closes
connections for each small task.</para>
</listitem>
<listitem>
<para>To control transactions in a multi-threaded application. This
can be useful with HSQLDB as well. For example, in a web application,
a transaction may involve some processing between the queries or user
action across web pages. A separate connection should be used for each
HTTP session so that the work can be committed when completed or
rolled back otherwise. Although this usage cannot be applied to most
other database engines, HSQLDB is perfectly capable of handling over
100 simultaneous HTTP sessions as individual JDBC connections.</para>
</listitem>
</itemizedlist>
<para>An application that is not both multi-threaded and transactional,
such as an application for recording user login and logout actions, does
not need more than one connection. The connection can stay open
indefinitely and be reopened only when it is dropped due to network
problems.</para>
<para>When using an in-process database with versions prior to 1.7.2 the
application program had to keep at least one connection to the database
open, otherwise the database would have been closed and further attempts
to create connections could fail. This is not necessary since 1.7.2, which
does not automatically close an in-process database that is opened by
establishing a connection. An explicit SHUTDOWN command, with or without
an argument, is required to close the database. In version 1.8.0 a
connection property can be used to revert to the old behaviour.</para>
<para>When using a server database (and to some extent, an in-process
database), care must be taken to avoid creating and dropping JDBC
Connections too frequently. Failure to observe this will result in
unsuccessful connection attempts when the application is under heavy
load.</para>
</section>
<section>
<title>Upgrading Databases</title>
<para>Any database not produced with the release version of HSQLDB 1.8.0
must be upgraded to this version. This includes databases created with the
RC versions of 1.8.0. The instructions under the <link
endterm="upgrade_via_script-title" linkend="upgrade_via_script-section" />
section should be followed in all cases.</para>
<para>Once a database is upgraded to 1.8.0, it can no longer be used with
Hypersonic or previous versions of HSQLDB.</para>
<para>There are some potential legacy issues in the upgrade which
should be resolved by editing the .script file:</para>
<itemizedlist>
<listitem>
<para>Version 1.8.0 does not accept duplicate names for indexes that
were allowed before 1.7.2.</para>
</listitem>
<listitem>
<para>Version 1.8.0 does not accept duplicate names for table columns
that were allowed before 1.7.0.</para>
</listitem>
<listitem>
<para>Version 1.8.0 does not create the same type of index for foreign
keys as versions before 1.7.2.</para>
</listitem>
<listitem>
<para>Version 1.8.0 does not accept table or column names that are SQL
identifiers without double quoting.</para>
</listitem>
</itemizedlist>
<section id="upgrade_via_script-section">
<title id="upgrade_via_script-title">Upgrading Using the SCRIPT
Command</title>
<para>To upgrade from 1.7.2 or 1.7.3 to 1.8.0, simply issue the SET
SCRIPTFORMAT TEXT and SHUTDOWN SCRIPT commands with the old version,
then open with the new version of the engine. The upgrade is then
complete.</para>
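<para>The two statements, issued while still running the old version of
the engine, are simply:</para>
<informalexample>
<programlisting>SET SCRIPTFORMAT TEXT;
SHUTDOWN SCRIPT;</programlisting>
</informalexample>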
<para>To upgrade from older version database files (1.7.1 and older)
that do not contain CACHED tables, simply SHUTDOWN with the older
version and open with the new version. If there is any error in the
.script file, try again after editing the .script file.</para>
<para>To upgrade from older version database files (1.7.1 and older)
that contain CACHED tables, use the SCRIPT procedure below. In all
versions of HSQLDB and Hypersonic 1.43, the <literal>SCRIPT
'filename'</literal> command (used as an SQL query) allows you to save a
full record of your database, including database object definitions and
data, to a file of your choice. You can export a script file using the
old version of the database engine and open the script as a database
with 1.8.0.</para>
<procedure>
<title>Upgrade Using SCRIPT procedure</title>
<step>
<para>Open the original database in the old version of
DatabaseManager.</para>
</step>
<step>
<para>Issue the SCRIPT command, for example <literal>SCRIPT
'newversion.script'</literal> to create a script file containing a
copy of the database.</para>
</step>
<step>
<para>Use the 1.8.0 version of DatabaseManager to create a new
database, in this example <literal>'newversion'</literal> in a
different directory.</para>
</step>
<step>
<para>SHUTDOWN this database.</para>
</step>
<step>
<para>Copy the <filename>newversion.script</filename> file from step
2 over the file of the same name for the new database created in step
3.</para>
</step>
<step>
<para>Try to open the new database using DatabaseManager.</para>
</step>
<step>
<para>If there is any inconsistency in the data, the script line
number is reported on the console and the opening process is
aborted. Edit and correct any problems in the
<filename>newversion.script</filename> before attempting to open
again. Use the guidelines in the next section (Manual Changes to the
.script File). Use a programming editor that is capable of handling
very large files and does not wrap long lines of text.</para>
</step>
</procedure>
</section>
<section>
<title>Manual Changes to the .script File</title>
<para>In 1.8.0 the full range of ALTER TABLE commands is available to
change the data structures and their names. However, if an old database
cannot be opened due to data inconsistencies, or the use of index or
column names that are not compatible with 1.8.0, manual editing of the
SCRIPT file can be performed.</para>
<para>The following changes can be applied so long as they do not affect
the integrity of existing data.</para>
<itemizedlist>
<listitem>
<para>Names of tables, columns and indexes can be changed.</para>
</listitem>
<listitem>
<para><literal>CREATE UNIQUE INDEX ...</literal> to <literal>CREATE
INDEX ...</literal> and vice versa</para>
<para>A unique index can always be converted into a normal index. A
non-unique index can only be converted into a unique index if the
table data for the column(s) is unique in each row.</para>
</listitem>
<listitem>
<para>
<literal>NOT NULL</literal>
</para>
<para>A not-null constraint can always be removed. It can only be
added if the table data for the column has no null values.</para>
</listitem>
<listitem>
<para>
<literal>PRIMARY KEY</literal>
</para>
<para>A primary key constraint can be removed or added. It cannot be
removed if there is a foreign key referencing the column(s).</para>
</listitem>
<listitem>
<para>
<literal>COLUMN TYPES</literal>
</para>
<para>Some changes to column types are possible. For example an
INTEGER column can be changed to BIGINT, or DATE, TIME and TIMESTAMP
columns can be changed to VARCHAR.</para>
</listitem>
</itemizedlist>
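<para>For example, an index definition line in the .script file could be
hand-edited as shown below (the index, table and column names are
hypothetical), provided every value in the column is in fact
unique:</para>
<informalexample>
<programlisting>-- before editing
CREATE INDEX IDX_CODE ON CUSTOMER(CODE)
-- after editing
CREATE UNIQUE INDEX IDX_CODE ON CUSTOMER(CODE)</programlisting>
</informalexample>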
<para>After completing the changes and saving the modified *.script
file, you can open the database as normal.</para>
</section>
</section>
<section>
<title>Backing Up Databases</title>
<para>The data for each database consists of up to 5 files in the same
directory. The endings are *.properties, *.script, *.data, *.backup and
*.log (a file with the *.lck ending is used for controlling access to the
database and should not be backed up). These should be backed up together.
The files can be backed up while the engine is running but care should be
taken that a CHECKPOINT or SHUTDOWN operation does not take place during
the backup. It is more efficient to perform the backup immediately after a
CHECKPOINT. The *.data file can be excluded from the backup. In this case,
when restoring, a dummy *.data file is needed which can be an empty, 0
length file. The engine will expand the *.backup file to replace this
dummy file if the backup is restored. If the *.data file is not backed up,
the *.properties file may have to be modified to ensure it contains
modified=yes instead of modified=no prior to restoration. If a backup
immediately follows a checkpoint, then the *.log file can also be
excluded, reducing the significant files to *.properties, *.script and
*.backup. Normal backup methods, such as archiving the files in a
compressed bundle, can be used.</para>
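<para>For example, the complete backup set for a database named
<literal>mydb</literal> (an illustrative name), taken immediately after
a CHECKPOINT, would consist of:</para>
<informalexample>
<programlisting>mydb.properties
mydb.script
mydb.backup
mydb.data      -- optional, can be regenerated from mydb.backup
mydb.log       -- empty or absent immediately after a checkpoint</programlisting>
</informalexample>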
</section>
</chapter>