File: backup-file.html

<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=ISO-8859-1">
<title>23.2. File system level backup</title>
<link rel="stylesheet" href="stylesheet.css" type="text/css">
<link rev="made" href="pgsql-docs@postgresql.org">
<meta name="generator" content="DocBook XSL Stylesheets V1.70.0">
<link rel="start" href="index.html" title="PostgreSQL 8.1.4 Documentation">
<link rel="up" href="backup.html" title="Chapter23.Backup and Restore">
<link rel="prev" href="backup.html" title="Chapter23.Backup and Restore">
<link rel="next" href="backup-online.html" title="23.3.On-line backup and point-in-time recovery (PITR)">
<link rel="copyright" href="ln-legalnotice.html" title="Legal Notice">
</head>
<body bgcolor="white" text="black" link="#0000FF" vlink="#840084" alink="#0000FF"><div class="sect1" lang="en">
<div class="titlepage"><div><div><h2 class="title" style="clear: both">
<a name="backup-file"></a>23.2.File system level backup</h2></div></div></div>
<p>   An alternative backup strategy is to directly copy the files that
   <span class="productname">PostgreSQL</span> uses to store the data in the database.
   <a href="creating-cluster.html" title="16.2. Creating a Database Cluster">Section 16.2, &#8220;Creating a Database Cluster&#8221;</a> explains where these files
   are located, but you have probably found them already if you are
   interested in this method. You can use whatever method you prefer
   for doing usual file system backups, for example

</p>
<pre class="programlisting">tar -cf backup.tar /usr/local/pgsql/data</pre>
<p>
  </p>
<p>   There are two restrictions, however, which make this method
   impractical, or at least inferior to the <span class="application">pg_dump</span>
   method:

   </p>
<div class="orderedlist"><ol type="1">
<li><p>      The database server <span class="emphasis"><em>must</em></span> be shut down in order to
      get a usable backup. Half-way measures such as disallowing all
      connections will <span class="emphasis"><em>not</em></span> work
      (mainly because <code class="command">tar</code> and similar tools do not take an
      atomic snapshot of the state of the file system at a point in
      time). Information about stopping the server can be found in
      <a href="postmaster-shutdown.html" title="16.5.Shutting Down the Server">Section16.5, &#8220;Shutting Down the Server&#8221;</a>.  Needless to say that you
      also need to shut down the server before restoring the data.
     </p></li>
<li><p>      If you have dug into the details of the file system layout of the
      database, you may be tempted to try to back up or restore only certain
      individual tables or databases from their respective files or
      directories. This will <span class="emphasis"><em>not</em></span> work because these
      files contain only half the truth. The other half is in the
      commit log files
      <code class="filename">pg_clog/*</code>, which contain the commit status of
      all transactions. A table file is only usable with this
      information. Of course it is also impossible to restore only a
      table and the associated <code class="filename">pg_clog</code> data
      because that would render all other tables in the database
      cluster useless.  So file system backups only work for complete
      restoration of an entire database cluster.
     </p></li>
</ol></div>
<p>
  </p>
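<p>   Putting the first restriction together with the
   <code class="command">tar</code> example above, a complete offline backup
   might look like the following sketch. The data directory path and the
   use of <span class="application">pg_ctl</span> are illustrative only;
   adjust them to match your installation and the shutdown advice in
   <a href="postmaster-shutdown.html" title="16.5. Shutting Down the Server">Section 16.5</a>.
  </p>
<pre class="programlisting"># stop the server cleanly so the files on disk are consistent
pg_ctl stop -D /usr/local/pgsql/data -m fast

# copy the entire data directory (never just individual tables or databases)
tar -cf backup.tar /usr/local/pgsql/data

# restart the server once the copy is finished
pg_ctl start -D /usr/local/pgsql/data -l logfile</pre>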
<p>   An alternative file-system backup approach is to make a
   &#8220;<span class="quote">consistent snapshot</span>&#8221; of the data directory, if the
   file system supports that functionality (and you are willing to
   trust that it is implemented correctly).  The typical procedure is
   to make a &#8220;<span class="quote">frozen snapshot</span>&#8221; of the volume containing the
   database, then copy the whole data directory (not just parts, see
   above) from the snapshot to a backup device, then release the frozen
   snapshot.  This will work even while the database server is running.
   However, a backup created in this way saves
   the database files in a state where the database server was not
   properly shut down; therefore, when you start the database server
   on the backed-up data, it will think the server had crashed
   and replay the WAL log.  This is not a problem; just be aware of
   it (and be sure to include the WAL files in your backup).
  </p>
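<p>   As an illustration only, the commands below sketch such a snapshot
   backup using Linux LVM. The volume group, logical volume, and mount
   point names are placeholders, and other snapshot-capable file systems
   or volume managers will use different commands.
  </p>
<pre class="programlisting"># create a frozen snapshot of the volume holding the data directory
lvcreate --size 1G --snapshot --name pgsnap /dev/vg00/pgdata

# mount the snapshot read-only and copy the whole data directory
mount -o ro /dev/vg00/pgsnap /mnt/pgsnap
tar -cf backup.tar -C /mnt/pgsnap .

# release the frozen snapshot
umount /mnt/pgsnap
lvremove -f /dev/vg00/pgsnap</pre>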
<p>   If your database is spread across multiple file systems, there may not 
   be any way to obtain exactly-simultaneous frozen snapshots of all 
   the volumes.  For example, if your data files and WAL log are on different
   disks, or if tablespaces are on different file systems, it might
   not be possible to use snapshot backup because the snapshots must be
   simultaneous.
   Read your file system documentation very carefully before trusting
   the consistent-snapshot technique in such situations.  The safest
   approach is to shut down the database server for long enough to
   establish all the frozen snapshots.
  </p>
<p>   Another option is to use <span class="application">rsync</span> to perform a file
   system backup.  This is done by first running <span class="application">rsync</span>
   while the database server is running, then shutting down the database
   server just long enough to do a second <span class="application">rsync</span>.  The
   second <span class="application">rsync</span> will be much quicker than the first,
   because it has relatively little data to transfer, and the end result
   will be consistent because the server was down.  This method
   allows a file system backup to be performed with minimal downtime.
  </p>
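<p>   A sketch of this rsync procedure is shown below. The source and
   destination paths are placeholders, and the
   <span class="application">pg_ctl</span> commands stand in for whatever
   shutdown and startup method your installation uses.
  </p>
<pre class="programlisting"># first pass while the server is running (the copy may be inconsistent)
rsync -a /usr/local/pgsql/data/ /backup/pgsql/data/

# brief shutdown, second pass to pick up the remaining changes, then restart
pg_ctl stop -D /usr/local/pgsql/data -m fast
rsync -a /usr/local/pgsql/data/ /backup/pgsql/data/
pg_ctl start -D /usr/local/pgsql/data -l logfile</pre>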
<p>   Note that a file system backup will not necessarily be
   smaller than an SQL dump. On the contrary, it will most likely be
   larger. (<span class="application">pg_dump</span> does not need to dump
   the contents of indexes for example, just the commands to recreate
   them.)
  </p>
</div></body>
</html>