Creating a sample database with application
-----------------------------------------------------------------
As a first test scenario, we will use the pgbench test program that is
shipped with PostgreSQL in the ./contrib directory; it produces a
reasonably heavy load of concurrent transactions, which is just what we
need.
NOTE: the PL/pgSQL procedural language MUST BE INSTALLED in this
database.
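How PL/pgSQL gets installed depends on the PostgreSQL version. With the
releases current at the time of writing, the createlang utility is the
usual way; a minimal sketch, to be run after the two node databases have
been created below and using the shell variables defined there:
createlang -h $HOST1 -U $SLONY_USER plpgsql $DBNAME1
createlang -h $HOST2 -U $SLONY_USER plpgsql $DBNAME2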
LOCAL WILL BE REMOTE
The Slony replication system is based on triggers. One of the nice
side effects of this is that you may, in theory, replicate between two
databases under the same postmaster. Things can get a little
confusing when we're talking about local vs. remote database in that
context. To avoid confusion, from here on we will strictly use the
term "node" to mean one database and its replication daemon program
slon.
To make this example work for people with one or two computer systems
at their disposal, we will define a few shell variables that will be
used throughout the following scripts.
CLUSTER=test1
DBNAME1=pgbench_node1
DBNAME2=pgbench_node2
HOST1=<host name of pgbench_node1>
HOST2=<host name of pgbench_node2>
SLONY_USER=<PostgreSQL superuser to connect as for replication>
PGBENCH_USER=<normal user to run the pgbench application>
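For example, when trying this out on a single machine with both
databases under one postmaster, the assignments might look like the
following (all values here are merely illustrative placeholders):
CLUSTER=test1
DBNAME1=pgbench_node1
DBNAME2=pgbench_node2
HOST1=localhost
HOST2=localhost
SLONY_USER=postgres
PGBENCH_USER=pgbench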
Here, we assume that the Unix user executing all the commands in
this example has the ability to establish all database connections.
This is not intended to become a "pg_hba.conf HOWTO", so replacing
whatever sophisticated authentication method is used with "trust"
until replication works would not be a bad idea.
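Purely as an illustration (the exact column layout of pg_hba.conf
differs between PostgreSQL versions, and "trust" should never stay in
place on a production system), entries allowing such unchecked local and
LAN connections might look roughly like this:
# TYPE  DATABASE  USER  CIDR-ADDRESS     METHOD
local   all       all                    trust
host    all       all   192.168.1.0/24   trust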
PREPARING THE TWO DATABASES
As of this writing, Slony-I does not attempt to copy table definitions
automatically when a node subscribes to a data set. Because of this, we
have to create one full and one schema-only pgbench database. Also note
that every table must have a primary key (or a candidate thereof)
defined for Slony-I to use. The pgbench "history" table lacks one, so we
alter it to add a primary key as part of the following preparation
steps:
createdb -O $PGBENCH_USER -h $HOST1 $DBNAME1
createdb -O $PGBENCH_USER -h $HOST2 $DBNAME2
pgbench -i -s 1 -U $PGBENCH_USER -h $HOST1 $DBNAME1
psql -U $PGBENCH_USER -h $HOST1 -d $DBNAME1 \
    -c "begin;
        alter table history add column id serial;
        update history set id = nextval('history_id_seq');
        alter table history add primary key (id);
        commit;"
pg_dump -s -U $SLONY_USER -h $HOST1 $DBNAME1 | psql -U $SLONY_USER -h $HOST2 $DBNAME2
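It does not hurt to verify that the schema-only copy really arrived on
the second node before going on; one quick way, simply listing the
tables there, is:
psql -U $SLONY_USER -h $HOST2 -d $DBNAME2 -c '\dt'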
From this moment on, the pgbench test program can be started to
produce transaction load. It is recommended to run the pgbench
application in the foreground of a separate terminal. This gives the
flexibility to stop and restart it with different parameters at any
time. The command to run it would look like this:
pgbench [-n] -s 1 -c <n_clients> -U $PGBENCH_USER -h $HOST1 -t <n_trans> $DBNAME1
* -n suppresses deleting the content of the history table and
  vacuuming the database at the start of pgbench.
* -c <n_clients> specifies the number of concurrent clients to
  simulate (should be between 1 and 10). Note that a high number
  will cause a significant load on the server, since each of these
  clients will try to run transactions as fast as possible.
* -t <n_trans> specifies the number of transactions every client
  executes before terminating. A value of 1000 is a good starting
  point.
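For example, a moderate test load using five concurrent clients and
1000 transactions per client could be started like this (the numbers
are only a suggestion):
pgbench -n -s 1 -c 5 -t 1000 -U $PGBENCH_USER -h $HOST1 $DBNAME1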
Configuring the databases for replication
------------------------------------------
Creating the configuration tables, stored procedures, triggers and
setting up the configuration is done with the slonik command. It is a
specialized scripting aid that mostly calls stored procedures in the
node databases. The script to create the initial configuration for a
simple master-slave setup of our pgbench databases looks like this:
Script slony_sample1_setup.sh
#!/bin/sh
CLUSTER=test1
DBNAME1=pgbench_node1
DBNAME2=pgbench_node2
HOST1=<host name of pgbench_node1>
HOST2=<host name of pgbench_node2>
SLONY_USER=<postgres superuser to connect as for replication>
PGBENCH_USER=<normal user to run the pgbench application>
slonik <<_EOF_
# ----
# This defines which namespace the replication system uses
# ----
cluster name = $CLUSTER;
# ----
# Admin conninfos are used by the slonik program to connect
# to the node databases. So these are the PQconnectdb arguments
# that connect from the administrator's workstation (where
# slonik is executed).
# ----
node 1 admin conninfo = 'dbname=$DBNAME1 host=$HOST1 user=$SLONY_USER';
node 2 admin conninfo = 'dbname=$DBNAME2 host=$HOST2 user=$SLONY_USER';
# ----
# Initialize the first node. The id must be 1.
# This creates the schema "_test1" containing all replication
# system specific database objects.
# ----
init cluster ( id = 1, comment = 'Node 1' );
# ----
# The Slony replication system organizes tables in sets. The
# smallest unit another node can subscribe is a set. Usually the
# tables contained in one set would be all tables that have
# relationships to each other. The following commands create
# one set containing all 4 pgbench tables. The "master" or origin
# of the set is node 1.
# ----
create set ( id = 1, origin = 1, comment = 'All pgbench tables' );
set add table ( set id = 1, origin = 1,
    id = 1, fully qualified name = 'public.accounts',
    comment = 'Table accounts' );
set add table ( set id = 1, origin = 1,
    id = 2, fully qualified name = 'public.branches',
    comment = 'Table branches' );
set add table ( set id = 1, origin = 1,
    id = 3, fully qualified name = 'public.tellers',
    comment = 'Table tellers' );
set add table ( set id = 1, origin = 1,
    id = 4, fully qualified name = 'public.history',
    comment = 'Table history' );
# ----
# Create the second node, tell the two nodes how to connect to
# each other and that they should listen for events on each
# other. Note that these conninfo arguments are used by the
# slon daemon on node 1 to connect to the database of node 2
# and vice versa. So if the replication system is supposed to
# use a separate backbone network between the database servers,
# this is the place to tell it.
# ----
store node ( id = 2, comment = 'Node 2' );
store path ( server = 1, client = 2,
    conninfo = 'dbname=$DBNAME1 host=$HOST1 user=$SLONY_USER');
store path ( server = 2, client = 1,
    conninfo = 'dbname=$DBNAME2 host=$HOST2 user=$SLONY_USER');
store listen ( origin = 1, provider = 1, receiver = 2 );
store listen ( origin = 2, provider = 2, receiver = 1 );
_EOF_
Time to replicate
-------------------
Is the pgbench application still running?
At this point we have two fully prepared databases. One of them is the
master database accessed by the pgbench application. It is now time to
start the replication daemons.
On the system $HOST1, the command to start the replication daemon is
slon $CLUSTER "dbname=$DBNAME1 user=$SLONY_USER"
Since the replication daemon for node 1 runs on the same host as the
database for node 1, there is no need for it to connect via a TCP/IP
socket.
Likewise we start the replication daemon for node 2 on $HOST2 with
slon $CLUSTER "dbname=$DBNAME2 user=$SLONY_USER"
Although the two daemons will start right away and exchange a lot of
messages, they are not replicating any data yet. What they are doing is
synchronizing their information about the cluster configuration.
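One way to convince yourself that this configuration exchange has
happened is to look into the cluster schema on node 2. Assuming the
cluster name test1 (so the schema is "_test1"), the node and path
information should show up there; treat the following as a sketch:
psql -U $SLONY_USER -h $HOST2 -d $DBNAME2 -c 'select * from "_test1".sl_node;'
psql -U $SLONY_USER -h $HOST2 -d $DBNAME2 -c 'select * from "_test1".sl_path;'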
To start replicating the 4 pgbench tables from node 1 to node 2 we
have to execute the following script:
Script slony_sample1_subscribe.sh
#!/bin/sh
CLUSTER=test1
DBNAME1=pgbench_node1
DBNAME2=pgbench_node2
HOST1=<host name of pgbench_node1>
HOST2=<host name of pgbench_node2>
SLONY_USER=<postgres superuser to connect as for replication>
PGBENCH_USER=<normal user to run the pgbench application>
slonik <<_EOF_
# ----
# This defines which namespace the replication system uses
# ----
cluster name = $CLUSTER;
# ----
# Admin conninfos are used by the slonik program to connect
# to the node databases. So these are the PQconnectdb arguments
# that connect from the administrator's workstation (where
# slonik is executed).
# ----
node 1 admin conninfo = 'dbname=$DBNAME1 host=$HOST1 user=$SLONY_USER';
node 2 admin conninfo = 'dbname=$DBNAME2 host=$HOST2 user=$SLONY_USER';
# ----
# Node 2 subscribes set 1
# ----
subscribe set ( id = 1, provider = 1, receiver = 2, forward = no);
_EOF_
Shortly after this script is executed, the replication daemon on
$HOST2 will start to copy the current content of all 4 replicated
tables. While doing so, of course, the pgbench application will
continue to modify the database. When the copy process is finished,
the replication daemon on $HOST2 will start to catch up by applying
the accumulated replication log. It will do this in small steps,
roughly 10 seconds' worth of application work at a time. Depending on
the performance of the two systems involved, the size of the two
databases, the actual transaction load, and how well the two databases
are tuned and maintained, this catchup process can be a matter of
minutes, hours, or infinity.
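To get an idea of how far node 2 is behind, the sl_status view that
Slony-I creates in the cluster schema can be queried on the origin;
treat this as a sketch, since the exact columns of the view vary between
Slony-I versions:
psql -U $SLONY_USER -h $HOST1 -d $DBNAME1 -c 'select * from "_test1".sl_status;'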
Checking the result
-----------------------------
To check the result of replication (after all, the intention was to
create an exact copy of the first node), the pgbench application must be
stopped and node 2 must be given time to process any remaining
replication backlog. After that, we create ordered data exports of the
two databases and compare them:
Script slony_sample1_compare.sh
#!/bin/sh
CLUSTER=test1
DBNAME1=pgbench_node1
DBNAME2=pgbench_node2
HOST1=<host name of pgbench_node1>
HOST2=<host name of pgbench_node2>
SLONY_USER=<postgres superuser to connect as for replication>
PGBENCH_USER=<normal user to run the pgbench application>
echo -n "**** comparing sample1 ... "
psql -U $PGBENCH_USER -h $HOST1 $DBNAME1 >dump.tmp.1.$$ <<_EOF_
select 'accounts:'::text, aid, bid, abalance, filler
    from accounts order by aid;
select 'branches:'::text, bid, bbalance, filler
    from branches order by bid;
select 'tellers:'::text, tid, bid, tbalance, filler
    from tellers order by tid;
select 'history:'::text, tid, bid, aid, delta, mtime, filler, id
    from history order by id;
_EOF_
psql -U $PGBENCH_USER -h $HOST2 $DBNAME2 >dump.tmp.2.$$ <<_EOF_
select 'accounts:'::text, aid, bid, abalance, filler
    from accounts order by aid;
select 'branches:'::text, bid, bbalance, filler
    from branches order by bid;
select 'tellers:'::text, tid, bid, tbalance, filler
    from tellers order by tid;
select 'history:'::text, tid, bid, aid, delta, mtime, filler, id
    from history order by id;
_EOF_
if diff dump.tmp.1.$$ dump.tmp.2.$$ >test_1.diff ; then
echo "success - databases are equal."
rm dump.tmp.?.$$
rm test_1.diff
else
echo "FAILED - see test_1.diff for database differences"
fi
If this script reports any differences, please report them to the
developers; we would appreciate hearing how it happened.