


		Probabilistic Broadcast for JGroups
		===================================



JavaGroups currently uses virtual synchrony (VS) in its main protocol
suite. VS is suited for tightly coupled, lockstep replication; typical
examples are clusters, replicated databases etc. Group size is limited
to about 100 members, and VS is targeted at LANs rather than WANs.

The problem with VS is that it has to enforce that all members have
received all messages in a view before proceeding to the next
view. This is done by a FLUSH protocol, which ensures (by
retransmission) that each member has seen all messages in the current
view. During the FLUSH protocol, all members are essentially
blocked: messages can still be submitted, but they will only be sent
once the FLUSH protocol has terminated (in one of the subsequent
views, not in the current one). The FLUSH protocol itself may need to
be restarted, e.g. when a participating member fails during the
FLUSH.

When one node (or a link) in a VS group is slow, it brings down the
performance of the entire group, as members proceed at the pace of the
slowest member (at least during membership changes); otherwise, the
likely result is just growing buffers and retransmissions, as messages
waiting to be delivered have to be buffered.

The bimodal multicast (or probabilistic broadcast) protocols (PBCAST)
developed at Cornell [1] try to solve this problem by providing
probabilistic reliability guarantees rather than hard ones. In a
nutshell, the probability that only a very small number of members
receive a message is high, and the probability that all members
receive it is high as well; the probability that some intermediate
subset of members receives it is very small, because the 'epidemic'
nature of PBCAST infects the group exponentially, so that (almost)
every member receives a message, or (almost) none does.
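
The sketch below illustrates the anti-entropy gossip round that drives
this epidemic dissemination. The class and method names are
hypothetical (not part of JGroups): each member periodically sends a
digest of the highest sequence numbers it has seen to a few randomly
chosen peers, and a peer that detects gaps asks for retransmission.

    import java.util.*;

    // Hypothetical sketch, not JGroups code: one anti-entropy gossip round.
    class GossipRound {
        final String localAddr;
        final Map<String,Long> seenUpTo = new HashMap<>(); // sender -> highest seqno seen
        final List<String> members = new ArrayList<>();    // current (approximate) membership
        final Random rnd = new Random();

        GossipRound(String localAddr) { this.localAddr = localAddr; }

        /** Pick 'fanout' random peers and send them our digest. */
        void gossip(int fanout) {
            List<String> peers = new ArrayList<>(members);
            peers.remove(localAddr);
            Collections.shuffle(peers, rnd);
            for (String peer : peers.subList(0, Math.min(fanout, peers.size())))
                sendDigest(peer, new HashMap<>(seenUpTo));
        }

        /** Called when a peer's digest arrives: request whatever we are missing. */
        void onDigest(String from, Map<String,Long> remote) {
            for (Map.Entry<String,Long> e : remote.entrySet()) {
                long mine = seenUpTo.getOrDefault(e.getKey(), 0L);
                if (e.getValue() > mine)
                    requestRetransmission(from, e.getKey(), mine + 1, e.getValue());
            }
        }

        // Transport stubs: in a real stack these would be messages passed down.
        void sendDigest(String peer, Map<String,Long> digest) { /* ... */ }
        void requestRetransmission(String peer, String sender, long from, long to) { /* ... */ }
    }

With a fanout of a few peers per round, the set of members that have
seen a message grows roughly exponentially, which is what yields the
bimodal delivery distribution described above.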

PBCAST protocols therefore scale very well, both in terms of group
size and over WANs with intermittent link/node failures. By
implementing a PBCAST protocol, JavaGroups can now be used in WAN
settings. However, there are no hard reliability guarantees anymore,
only probabilistic ones. Yet there are a number of applications which
don't need hard reliability and can live with probabilistic
guarantees, for example replicated naming services and
publish-subscribe applications. In these settings, eventual
convergence of the replicated state and low cost of the protocol are
more important than lockstep replication.

The JavaGroups API will not be changed at all. However, applications
with a protocol stack configured to use PBCAST have to be aware that
views are only an approximation of the membership, not a hard
guarantee.
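
For illustration, a minimal sketch of client code using a
PBCAST-configured stack: the calls are the ordinary channel API, and
only the stack string selects the pbcast protocols. The stack string
shown here is an example only; the exact protocol names and their
properties depend on the configuration shipped with the release.

    import org.jgroups.JChannel;
    import org.jgroups.Message;
    import org.jgroups.ReceiverAdapter;

    public class PbcastClient {
        public static void main(String[] args) throws Exception {
            // Illustrative stack string only; real configurations carry
            // per-protocol properties and may differ between versions.
            String props = "UDP:PING:FD:pbcast.PBCAST:pbcast.GMS";

            JChannel ch = new JChannel(props);   // same API as with a VS stack
            ch.setReceiver(new ReceiverAdapter() {
                public void receive(Message msg) {
                    System.out.println("received: " + msg.getObject());
                }
                // viewAccepted() is still called, but the view is only an
                // approximation of the membership, not a hard guarantee.
            });
            ch.connect("demo-group");
            ch.send(new Message(null, null, "hello"));
            ch.close();
        }
    }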

The PBCAST protocol is located in the ./pbcast subdirectory of
./Protocols. The major changes are:


GMS
---
Unlike VS, the JavaGroups implementation of PBCAST does not per se
guarantee that the set of messages delivered in a view V is the same
at all members. Therefore, applications cannot rely on the fact that
when they send a message in view V, it will be received by all current
non-faulty members in V.

Views are delivered at each receiver at a certain position in the
incoming message stream. However, as PBCAST only provides FIFO order
(which guarantees that messages from a sender P are seen in the order
sent by P), messages sent by senders P and Q in view V1 may be
received in different views at different receivers. It is, however,
possible to add total order by implementing a TOTAL protocol and
adding it on top of a given protocol stack; this would then
essentially provide VS.

Consider the following example: P sends messages m1 and m2 in view V1
(consisting of P, Q and R). While it sends the messages, a new member
S joins the group. Since there is no FLUSH protocol that ensures that
m1 and m2 are delivered in V1, the following could happen: m1 is
delivered to Q and R in V1. Message m2 is delivered to Q, but is lost
on its way to R (e.g. dropped by a lossy link). Now the new view V2 is
installed by Q (which is the coordinator), and m2 is retransmitted by
P to R. Clearly, VS would drop m2 because it was sent in a previous
view. PBCAST, however, faces two choices: either accept the message
and deliver it, or drop it as well. If we accept it, the FIFO
properties for P are upheld; if we drop it, the next message m3 from P
cannot be delivered until m2 has been seen by R. (Message IDs are not
reset to 0 on a view change, because, as shown above, we have no total
order over views being delivered at the same location in each member's
message stream.) Therefore, we have to accept the message.
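
A minimal sketch of this rule, with hypothetical class and method
names (not the actual JGroups implementation): messages are buffered
and delivered purely by per-sender sequence number, the view in which
a message was originally sent is ignored, and a view change only
replaces the membership list.

    import java.util.*;

    // Hypothetical sketch of per-sender FIFO delivery under PBCAST's rule.
    class FifoDeliveryTable {
        // sender -> next sequence number expected from that sender
        private final Map<String,Long> nextExpected = new HashMap<>();
        // sender -> out-of-order messages waiting for the gap to be filled
        private final Map<String,SortedMap<Long,Object>> pending = new HashMap<>();
        private List<String> members = new ArrayList<>();

        /** Called for every received (possibly retransmitted) message. */
        List<Object> add(String sender, long seqno, Object msg) {
            List<Object> deliverable = new ArrayList<>();
            long next = nextExpected.getOrDefault(sender, 1L);
            if (seqno < next)
                return deliverable;                 // duplicate, already delivered
            pending.computeIfAbsent(sender, s -> new TreeMap<>()).put(seqno, msg);

            // Deliver as long as the next expected seqno is present; the view
            // in which a message was originally sent is deliberately ignored.
            SortedMap<Long,Object> buf = pending.get(sender);
            while (!buf.isEmpty() && buf.firstKey() == next) {
                deliverable.add(buf.remove(next));
                next++;
            }
            nextExpected.put(sender, next);
            return deliverable;
        }

        /** Views only update the membership list; they never reset seqnos. */
        void handleView(List<String> newMembers) {
            members = new ArrayList<>(newMembers);
        }
    }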

This leads to the conclusion that views are not used as a demarcation
between message sets, but rather as an indication that the group
membership has changed. Therefore, protocols in the PBCAST suite will
only use views to update their internal membership list, but never
make the assumption that all members see the view change at the same
logical location in their message streams.


FLUSH
-----
Not used anymore, as we're not flushing messages when proceeding to
the next view.


NAKACK
------
Not used anymore. Functionality will be covered by PBCAST. NAKACK made
assumptions about views and messages and can therefore not be used.


VIEW_ENFORCER
-------------
Not used anymore. Messages sent in one view can be delivered in
another one, although this usually doesn't happen. But we cannot make
any assumptions about it.


STATE_TRANSFER
--------------
Not used anymore. There is a new protocol for state transfer,
especially geared towards big states (the state is transferred in
multiple pieces rather than in a single message). However,
STATE_TRANSFER could still be used (a TOTAL protocol would have to be
present).
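
A small sketch of the idea (a hypothetical helper, not the actual
protocol): the serialized state is split into fixed-size pieces that
can be sent one by one and reassembled at the receiver.

    import java.util.ArrayList;
    import java.util.Arrays;
    import java.util.List;

    // Hypothetical sketch of chunked transfer of a large serialized state.
    class ChunkedStateTransfer {
        static final int CHUNK_SIZE = 64 * 1024;   // illustrative chunk size

        /** Split the serialized state into pieces small enough to send one by one. */
        static List<byte[]> split(byte[] state) {
            List<byte[]> chunks = new ArrayList<>();
            for (int off = 0; off < state.length; off += CHUNK_SIZE)
                chunks.add(Arrays.copyOfRange(state, off,
                                              Math.min(off + CHUNK_SIZE, state.length)));
            return chunks;
        }

        /** Reassemble the pieces on the receiving side. */
        static byte[] reassemble(List<byte[]> chunks) {
            int total = chunks.stream().mapToInt(c -> c.length).sum();
            byte[] state = new byte[total];
            int off = 0;
            for (byte[] c : chunks) {
                System.arraycopy(c, 0, state, off, c.length);
                off += c.length;
            }
            return state;
        }
    }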


QUEUE
-----
May be used by the new state transfer protocol.


STABLE
------
Not used anymore. Functionality will be covered by PBCAST protocol.



Refs
----
[1] http://www.cs.cornell.edu/Info/Projects/Spinglass/index.html