Package: haproxy
Version: 1.5.8-3+deb8u2
Patch format: 3.0 (quilt)

Patch series

Patch File delta Description
0002 Use dpkg buildflags to build halog.patch

contrib/halog/Makefile | 16 5 + 11 - 0 !
1 file changed, 5 insertions(+), 11 deletions(-)

 use dpkg-buildflags to build halog

haproxy.service start after syslog.patch

contrib/systemd/ | 3 2 + 1 - 0 !
1 file changed, 2 insertions(+), 1 deletion(-)

 start after the syslog service using systemd
haproxy.service add documentation.patch

contrib/systemd/ | 2 2 + 0 - 0 !
1 file changed, 2 insertions(+)

 add documentation field to the systemd unit
haproxy.service check config before reload.patch

contrib/systemd/ | 1 1 + 0 - 0 !
1 file changed, 1 insertion(+)

 check the configuration before reloading haproxy
 While HAProxy will survive a reload with an invalid configuration, explicitly
 checking the config file for validity will make "systemctl reload" return an
 error and let the user know something went wrong.
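A sketch of what such a guard typically looks like in a unit file (the paths and the use of haproxy's `-c` check flag are assumptions based on haproxy's standard CLI, not a quote of the Debian patch; systemd runs multiple ExecReload= lines in order and aborts on the first failure):

```ini
[Service]
# Validate the configuration first; if the check fails, systemd aborts
# the reload and "systemctl reload haproxy" returns an error instead of
# silently keeping the old configuration running.
ExecReload=/usr/sbin/haproxy -c -f /etc/haproxy/haproxy.cfg
ExecReload=/bin/kill -USR2 $MAINPID
```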

haproxy.service use environment variables.patch

contrib/systemd/ | 8 5 + 3 - 0 !
1 file changed, 5 insertions(+), 3 deletions(-)

 use the variables from /etc/default/haproxy
 This will allow seamless upgrades from the sysvinit system while respecting
 any changes the users may have made. It will also make local configuration
 easier than overriding the systemd unit file.
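A sketch of the mechanism (the file path is the sysvinit-era Debian convention mentioned above; the variable names are illustrative assumptions, not necessarily those used by the patch):

```ini
[Service]
# The leading "-" tells systemd not to fail if the file is absent, so
# systems without local customizations keep working.
EnvironmentFile=-/etc/default/haproxy
ExecStart=/usr/sbin/haproxy-systemd-wrapper -f ${CONFIG} -p ${PIDFILE}
```

Users can then change ${CONFIG} or ${PIDFILE} in /etc/default/haproxy without overriding the unit file itself.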

from upstream/0001 BUG MEDIUM ssl fix bad ssl context init can cause se.patch

src/ssl_sock.c | 44 34 + 10 - 0 !
1 file changed, 34 insertions(+), 10 deletions(-)

 [patch 1/9] bug/medium: ssl: fix bad ssl context init can cause
 segfault in case of OOM.

Errors from some SSL context init functions were not handled and could
cause a segfault due to an incomplete SSL context.

This fix must be backported to 1.5.
(cherry picked from commit 5547615cdac377797ae351a2e024376dbf6d6963)

from upstream/0002 BUG MEDIUM ssl force a full GC in case of memory sho.patch

src/ssl_sock.c | 30 30 + 0 - 0 !
1 file changed, 30 insertions(+)

 [patch 2/9] bug/medium: ssl: force a full gc in case of memory
 shortage

When memory becomes scarce and OpenSSL refuses to allocate a new SSL
session, it is worth freeing the pools and trying again instead of
rejecting all incoming SSL connections. This can happen when some
memory usage limits have been assigned to the haproxy process using
-m or with ulimit -m/-v.

This is mostly an enhancement of previous fix and is worth backporting
to 1.5.
(cherry picked from commit fba03cdc5ac6e3ca318b34915596cbc0a0dacc55)
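The retry-after-cleanup pattern described above can be sketched as follows (all names here are toy stand-ins for SSL_new() and haproxy's pool flushing, not haproxy's actual API; the real change lives in src/ssl_sock.c):

```c
#include <stdlib.h>

/* Toy allocator that fails until the pools have been "flushed",
 * simulating OpenSSL refusing an allocation under memory pressure. */
static int pools_flushed = 0;

static void *try_alloc_session(void)
{
    if (!pools_flushed)
        return NULL;
    return malloc(64);
}

static void flush_pools(void)
{
    pools_flushed = 1;  /* pretend cached pool memory was released */
}

/* The shape of the fix: on failure, free the pools and retry once
 * instead of rejecting the incoming SSL connection outright. */
void *alloc_session_with_gc(void)
{
    void *sess = try_alloc_session();
    if (!sess) {
        flush_pools();
        sess = try_alloc_session();
    }
    return sess;
}
```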

from upstream/0003 BUG MEDIUM checks fix conflicts between agent checks.patch

include/types/checks.h | 3 2 + 1 - 0 !
include/types/server.h | 1 0 + 1 - 0 !
src/checks.c | 2 1 + 1 - 0 !
src/server.c | 2 1 + 1 - 0 !
src/ssl_sock.c | 2 1 + 1 - 0 !
5 files changed, 5 insertions(+), 5 deletions(-)

 [patch 3/9] bug/medium: checks: fix conflicts between agent checks
 and ssl healthchecks

Lasse Birnbaum Jensen reported an issue when agent checks are used at the same
time as standard healthchecks when SSL is enabled on the server side.

The symptom is that agent checks try to communicate in SSL while it should
manage raw data. This happens because the transport layer is shared between all
kind of checks.

To fix the issue, the transport layer is now stored in each check type,
allowing SSL healthchecks to be used when required, while agent checks
always use the raw_sock implementation.

The fix must be backported to 1.5.
(cherry picked from commit 9ce1311ebc834e20addc7a8392c0fc4e4ad687b7)

from upstream/0004 BUG MAJOR frontend initialize capture pointers earli.patch

src/frontend.c | 14 10 + 4 - 0 !
1 file changed, 10 insertions(+), 4 deletions(-)

 [patch 4/9] bug/major: frontend: initialize capture pointers earlier

Denys Fedoryshchenko reported and diagnosed a nasty bug caused by TCP
captures, introduced in late 1.5-dev by commit 18bf01e ("MEDIUM: tcp:
add a new tcp-request capture directive"). The problem is that we're
using the array of capture pointers initially designed for HTTP usage
only, and that this array was only reset when starting to process an
HTTP request. In a tcp-only frontend, the pointers are not reset, and
if the capture pool is shared, we can very well point to whatever other
memory location, resulting in random crashes when tcp-request content
captures are processed.

The fix simply consists in initializing these pointers when the pools
are prepared.

A workaround for existing versions consists in either disabling TCP
captures in tcp-only frontends, or in forcing the frontends to work in
HTTP mode.

Thanks to Denys for the amount of testing and detailed reports.

This fix must be backported to 1.5.
(cherry picked from commit 9654e57fac86c773091b892f42015ba2ba56be5a)
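The two workarounds for unfixed builds, sketched as configuration (directive syntax per the haproxy 1.5 documentation; the frontend and backend names are illustrative):

```
frontend ft_app
    bind :8080
    mode http                 # forcing HTTP mode resets capture pointers per request
    # ...or keep "mode tcp" but remove any
    # "tcp-request content capture ..." directives from the frontend
    default_backend bk_app
```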

from upstream/0005 BUG MEDIUM connection sanitize PPv2 header length be.patch

src/connection.c | 6 6 + 0 - 0 !
1 file changed, 6 insertions(+)

 [patch 5/9] bug/medium: connection: sanitize ppv2 header length
 before parsing address information

Previously, if hdr_v2->len was less than the length of the protocol
specific address information we could have read after the end of the
buffer and initialize the sockaddr structure with junk.

Signed-off-by: KOVACS Krisztian <>

[WT: this is only tagged medium since proxy protocol is only used from
 trusted sources]

This must be backported to 1.5.
(cherry picked from commit efd3aa93412648cf923bf3d2e171c0b84e9d7a69)
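The added sanity check can be sketched like this (the function name is hypothetical and the real code is in src/connection.c; the per-family block sizes come from the PROXY protocol v2 specification):

```c
#include <stddef.h>
#include <stdint.h>

/* Per-family sizes of the PROXY v2 address block, per the spec:
 * the two addresses plus the two 16-bit ports. */
#define PP2_ADDRLEN_INET   12   /* 4+4 byte IPv4 addresses, 2+2 byte ports */
#define PP2_ADDRLEN_INET6  36   /* 16+16 byte IPv6 addresses, 2+2 byte ports */

/* Refuse to parse the address block unless the length declared in the
 * header actually covers it, so we never read past the end of the
 * receive buffer and never fill a sockaddr with junk. */
int ppv2_addr_block_fits(uint16_t declared_len, size_t required_len)
{
    return (size_t)declared_len >= required_len;
}
```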

from upstream/0006 BUG MEDIUM pattern don t load more than once a patte.patch

include/proto/pattern.h | 3 2 + 1 - 0 !
src/acl.c | 2 1 + 1 - 0 !
src/pattern.c | 22 20 + 2 - 0 !
3 files changed, 23 insertions(+), 4 deletions(-)

 [patch 6/9] bug/medium: pattern: don't load more than once a pattern

A memory optimization can use the same pattern expression for many
identical pattern lists (same parse method, index method and index_smp
method).

The pattern expression is returned by "pattern_new_expr", but this
function does not indicate whether the returned expression is already
in use.

So the caller reloads the list of patterns on top of the already
loaded ones. This behavior is not a problem with tree-indexed patterns,
but it grows list-indexed patterns.

This fix adds a "reuse" flag to the return of "pattern_new_expr". If
the flag is set, the patterns are assumed to be already loaded.

This fix must be backported into 1.5.
(cherry picked from commit 315ec4217f912f6cc8fcf98624d852f9cd8399f9)

from upstream/0007 BUG MAJOR sessions unlink session from list on out o.patch

src/session.c | 1 1 + 0 - 0 !
1 file changed, 1 insertion(+)

 [patch 7/9] bug/major: sessions: unlink session from list on out of
 memory

Since embryonic sessions were introduced in 1.5-dev12 with commit
2542b53 ("MAJOR: session: introduce embryonic sessions"), a major
bug remained present. If haproxy cannot allocate memory during
session_complete() (for example, no more buffers), it will not
unlink the new session from the sessions list. This will cause
memory corruptions if the memory area from the session is reused
for anything else, and may also cause bogus output on "show sess"
on the CLI.

This fix must be backported to 1.5.
(cherry picked from commit 3b24641745b32289235d765f441ec60fa7381f99)
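The bug shape can be sketched with a toy list mirroring haproxy's LIST_* macros (all names here are illustrative, not haproxy's actual API): the session is linked into the global list early, so a later allocation failure must unlink it before freeing, otherwise the list keeps a pointer into freed, soon-reused memory.

```c
#include <stdlib.h>

/* Minimal circular doubly-linked list. */
struct node { struct node *prev, *next; };

static void list_add(struct node *head, struct node *n)
{
    n->next = head->next; n->prev = head;
    head->next->prev = n; head->next = n;
}

static void list_del(struct node *n)
{
    n->prev->next = n->next;
    n->next->prev = n->prev;
}

struct session { struct node list; };

/* alloc_ok simulates whether the later allocation (e.g. buffers)
 * succeeds; on failure the session must leave the list before free(). */
struct session *session_complete(struct node *sessions, int alloc_ok)
{
    struct session *s = malloc(sizeof(*s));
    if (!s)
        return NULL;
    list_add(sessions, &s->list);
    if (!alloc_ok) {
        list_del(&s->list);  /* the one-line fix: unlink before freeing */
        free(s);
        return NULL;
    }
    return s;
}
```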

from upstream/0008 BUG MEDIUM patterns previous fix was incomplete.patch

src/pattern.c | 2 1 + 1 - 0 !
1 file changed, 1 insertion(+), 1 deletion(-)

 [patch 8/9] bug/medium: patterns: previous fix was incomplete

Dmitry Sivachenko <> reported that commit 315ec42
("BUG/MEDIUM: pattern: don't load more than once a pattern list.")
relies on an uninitialised variable in the stack. While it used to
work fine during the tests, if the uninitialized variable is non-null,
some patterns may be aggregated if loaded multiple times, resulting in
slower processing, which was the original issue it tried to address.

The fix needs to be backported to 1.5.
(cherry picked from commit 4deaf39243c4d941998b1b0175bad05b8a287c0b)

from upstream/0009 BUG MEDIUM payload ensure that a request channel is .patch

src/payload.c | 6 6 + 0 - 0 !
1 file changed, 6 insertions(+)

 [patch 9/9] bug/medium: payload: ensure that a request channel is
 available

Denys Fedoryshchenko reported a segfault when using certain
sample fetch functions in the "tcp-request connection" rulesets
despite the warnings. This is because some tests for the existence
of the channel were missing.

The fetches which were fixed are:
  - req.ssl_hello_type
  - rep.ssl_hello_type
  - req.ssl_sni

This fix must be backported to 1.5.
(cherry picked from commit 83f2592bcd2e186beeabcba16be16faaab82bd39)

from upstream/0001 BUG MAJOR buffers make the buffer_slow_realign funct.patch

src/buffer.c | 49 29 + 20 - 0 !
1 file changed, 29 insertions(+), 20 deletions(-)

 bug/major: buffers: make the buffer_slow_realign() function respect
 output data

The function buffer_slow_realign() was initially designed for requests
only and did not consider pending outgoing data. This causes a problem
when called on responses where data remain in the buffer, which may
happen with pipelined requests when the client is slow to read data.

The user-visible effect is that if less than <maxrewrite> bytes are
present in the buffer from a previous response and these bytes cross
the <maxrewrite> boundary close to the end of the buffer, then a new
response will cause a realign and will destroy these pending data and
move the pointer to what's believed to contain pending output data.
Thus the client receives the crap that lies in the buffer instead of
the original output bytes.

This new implementation now properly realigns everything including the
outgoing data which are moved to the end of the buffer while the input
data are moved to the beginning.

This implementation still uses a buffer-to-buffer copy which is not
optimal in terms of performance and which should be replaced by a
buffer switch later.
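The realign layout can be sketched like this (haproxy's real implementation works on its struct buffer in src/buffer.c; this standalone function only shows where the bytes end up, and the bounce buffer mirrors the buffer-to-buffer copy mentioned above):

```c
#include <string.h>

/* Copy pending output bytes to the END of the storage area and input
 * bytes to the BEGINNING, so neither region clobbers the other.
 * Assumes out_len + in_len <= 256 and fits in the area. */
void realign(char *area, size_t size,
             const char *out, size_t out_len,
             const char *in, size_t in_len)
{
    char tmp[256];                                /* bounce buffer */
    memcpy(tmp, out, out_len);
    memcpy(tmp + out_len, in, in_len);
    memcpy(area, tmp + out_len, in_len);          /* input at the start */
    memcpy(area + size - out_len, tmp, out_len);  /* output at the end */
}
```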

from upstream/0001 BUG MINOR config fix typo in condition when propagat.patch

src/cfgparse.c | 2 1 + 1 - 0 !
1 file changed, 1 insertion(+), 1 deletion(-)

 [patch] bug/minor: config: fix typo in condition when propagating
 process binding

propagate_processes() has a typo in a condition :

	if (!from->cap & PR_CAP_FE)

The return is never taken because each proxy has at least one capability
so !from->cap always evaluates to zero. Most of the time the caller already
checks that <from> is a frontend. In the cases where it's not tested
(use_backend, reqsetbe), the rules have been checked for the context to
be a frontend as well, so in the end it had no nasty side effect.

This should be backported to 1.5.
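The precedence issue behind the typo can be shown in isolation (the flag values and function names here are simplified illustrations, not haproxy's actual definitions): `!` binds tighter than `&`, so the buggy test negates the whole capability mask first.

```c
/* Simplified capability flags in the style of haproxy's PR_CAP_*. */
#define PR_CAP_FE 0x01
#define PR_CAP_BE 0x02

int is_not_frontend_buggy(unsigned int cap)
{
    return !cap & PR_CAP_FE;    /* parsed as (!cap) & PR_CAP_FE:
                                 * always 0 when cap is nonzero */
}

int is_not_frontend_fixed(unsigned int cap)
{
    return !(cap & PR_CAP_FE);  /* the intended test */
}
```

Since every proxy has at least one capability, the buggy form never returns true, which is exactly why the early return was never taken.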

from upstream/0001 BUG MEDIUM config do not propagate processes between.patch

src/cfgparse.c | 3 3 + 0 - 0 !
1 file changed, 3 insertions(+)

 [patch] bug/medium: config: do not propagate processes between
 stopped processes

Immo Goltz reported a case of segfault while parsing the config where
we try to propagate processes across stopped frontends (those with a
"disabled" statement). The fix is trivial. The workaround consists in
commenting out these frontends, although not always easy.

This fix must be backported to 1.5.
(cherry picked from commit f6b70013389cf9378c6a4d55d3d570de4f95c33c)