Package: haproxy / 2.2.9-2+deb11u6
Metadata
| Package | Version | Patches format |
|---|---|---|
| haproxy | 2.2.9-2+deb11u6 | 3.0 (quilt) |
Patch series
**0002-Use-dpkg-buildflags-to-build-halog.patch**
`contrib/halog/Makefile` (16 lines: +5/-11)
Use dpkg-buildflags to build halog.
**haproxy.service-start-after-syslog.patch**
`contrib/systemd/haproxy.service.in` (2 lines: +1/-1)
Start after rsyslog.service. As HAProxy runs chrooted by default, it relies on an additional syslog socket created by rsyslog inside the chroot for logging. Because this socket cannot trigger syslog activation, HAProxy is explicitly ordered after rsyslog.service. Note that syslog.service is not used here, since the additional socket is rsyslog-specific.
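The ordering described above can be sketched as a unit-file fragment. This is an illustration of the mechanism, not the packaged unit verbatim:

```ini
# haproxy.service (excerpt): start only after rsyslog is up, so the
# syslog socket it creates inside HAProxy's chroot already exists
[Unit]
Description=HAProxy Load Balancer
After=network.target rsyslog.service
```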
**haproxy.service-add-documentation.patch**
`contrib/systemd/haproxy.service.in` (2 lines: +2/-0)
Add a Documentation field to the systemd unit.
**0001-BUG-MINOR-tcpcheck-Update-.health-threshold-of-agent.patch**
`src/tcpcheck.c` (8 lines: +4/-4)
BUG/MINOR: tcpcheck: update .health threshold of agent inside an agent-check. If an agent-check is configured for a server, then when the response is parsed, the .health threshold of the agent must be updated on the up/down/stopped/fail commands, not the threshold of the health-check. Otherwise the agent-check competes with the health-check and may mark a DOWN server as UP. This patch should fix issue #1176. It must be backported as far as 2.2.
(cherry picked from commit 24ec9434271345857b42cc5bd9c6b497ab01a7e4)
Signed-off-by: Christopher Faulet <cfaulet@haproxy.com>
(cherry picked from commit 789bbdc88d7ffe8f520532efb18148ea52ede4ca)
Signed-off-by: Christopher Faulet <cfaulet@haproxy.com>
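A minimal configuration sketch of the scenario this fix addresses; the backend name, server address and agent port below are hypothetical:

```
backend app
    # regular health-check every 2s, plus an auxiliary agent-check;
    # the agent may answer "up", "down", "stopped", "fail" or a weight,
    # and those answers must move the agent's own .health threshold,
    # not the health-check's
    server srv1 192.0.2.10:80 check inter 2s agent-check agent-port 9999 agent-inter 5s
```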
**2.2-0001-MINOR-http-add-a-new-function-http_validate_scheme-t.patch**
`include/haproxy/http.h` (1 line: +1/-0)
MINOR: http: add a new function http_validate_scheme() to validate a scheme. While http_parse_scheme() extracts a scheme from a URI by consuming exactly the valid characters and stopping on delimiters, this new function performs the same validation on a fixed-size string.
(cherry picked from commit adfc08e717db600c3ac44ca8f3178d861699b67c) [wt: context adj]
Signed-off-by: Willy Tarreau <w@1wt.eu>
(cherry picked from commit 073e9c9c10897a05117f29cb9d3ebdbc13ff03b5) [wt: context adj]
Signed-off-by: Willy Tarreau <w@1wt.eu>
(cherry picked from commit 0fb53c3c025fb158c51c515532f3f52bb2abcdea)
Signed-off-by: Willy Tarreau <w@1wt.eu>
**2.2-0002-BUG-MAJOR-h2-verify-early-that-non-http-https-scheme.patch**
`src/h2.c` (2 lines: +2/-0)
BUG/MAJOR: h2: verify early that non-http/https schemes match the valid syntax. While we do explicitly check for strict character sets in the scheme, this is only done when extracting URL components from an assembled one, and we have special handling for the "http" and "https" schemes directly in the H2-to-HTX conversion. Sadly, this lets all other ones pass through if they start exactly with "http://" or "https://", allowing the […]
**2.2-0003-BUG-MAJOR-h2-verify-that-path-starts-with-a-before-c.patch**
`src/h2.c` (19 lines: +16/-3)
BUG/MAJOR: h2: verify that :path starts with a '/' before concatenating it. Tim Düsterhus found that while the H2 path is checked for non-emptiness, invalid characters and '*', a test is missing to verify that, except for '*', it always starts with exactly one '/'. During the reconstruction of the full URI when passing to HTX, this makes it possible to affect the apparent authority by appending a port number or a suffix name. This only affects H2-to-H2 communications, as H2-to-H1 does not use the authority. As for the previous fix, the following rule installed in the frontend or backend is sufficient to renormalize the internal URI:
    http-request set-header host %[req.hdr(host)]
This needs to be backported to 2.2, since earlier versions do not rebuild a full URI using the authority and will fail on the malformed path at the HTTP layer.
(cherry picked from commit d3b22b75025246e81ff8d0c78837d4b89d7cf8f8)
Signed-off-by: Willy Tarreau <w@1wt.eu>
(cherry picked from commit 2360306269ff65420cba7c847687a774b1025ab5)
Signed-off-by: Willy Tarreau <w@1wt.eu>
(cherry picked from commit c99c5cd3588a28978cd065abc74508fe81a93a40)
Signed-off-by: Willy Tarreau <w@1wt.eu>
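The renormalization rule quoted in the patch description can be dropped into an existing frontend. The frontend name, bind address and certificate path below are placeholders:

```
frontend fe_https
    bind :443 ssl crt /etc/haproxy/cert.pem alpn h2,http/1.1
    # rewriting Host from its own value forces haproxy to rebuild and
    # renormalize the internal URI (the mitigation described above)
    http-request set-header host %[req.hdr(host)]
```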
**2.2-0004-BUG-MAJOR-h2-enforce-checks-on-the-method-syntax-bef.patch**
`src/h2.c` (8 lines: +8/-0)
BUG/MAJOR: h2: enforce checks on the method syntax before translating to HTX. The situation with message components in H2 is always troubling. They're produced by the HPACK layer, which contains a dictionary of well-known hardcoded values, yet wants to remain binary transparent and protocol-agnostic with HTTP just being one user; yet at the H2 layer we're supposed to enforce some checks on selected pseudo-headers that come from internal constants. The :method pseudo-header is no exception and is not tested when coming from the HPACK layer. This makes it possible to pass random characters into methods, which can be serialized on another H2 connection (where they would not harm), or worse, on an H1 connection, where they can be used to transform the forwarded request. This is similar to the request line injection described here:
    https://portswigger.net/research/http2
A workaround here is to reject malformed methods by placing this rule in the frontend or backend, at least before leaving haproxy in H1:
    http-request reject if { method -m reg [^A-Z0-9] }
Alternately, H2 may be globally disabled by commenting out the "alpn" directive on "bind" lines and by rejecting H2 stream creation with the following statement in the global section:
    tune.h2.max-concurrent-streams 0
This patch adds a check that each character of the method is one of those permitted in a token, as mentioned in RFC7231#4.1. This should be backported to versions 2.0 and above, maybe even 1.8. For older versions not having HTX_FL_PARSING_ERROR, a "goto fail" works as well, as it results in a protocol error at the stream level. Non-HTX versions were initially thought to be safe but must be carefully rechecked, since they transcode the request into H1 before processing it.
Thanks to Tim Düsterhus for reporting that one.
(cherry picked from commit b4be735a0a7c4a00bf3d774334763536774d7eea)
Signed-off-by: Willy Tarreau <w@1wt.eu>
(cherry picked from commit 6b827f661374704e91322a82197bbfbfbf910f70) [wt: adapted since no meth_sl in 2.3]
Signed-off-by: Willy Tarreau <w@1wt.eu>
(cherry picked from commit fbeb053d1a83faedbf3edbe04bde39bc7304cddd)
Signed-off-by: Willy Tarreau <w@1wt.eu>
**2.2-0005-BUG-MEDIUM-h2-give-authority-precedence-over-Host.patch**
`src/h2.c` (23 lines: +21/-2)
BUG/MEDIUM: h2: give :authority precedence over Host. The wording regarding Host vs :authority in RFC7540 is ambiguous, as it says that an intermediary must produce a Host header from :authority if Host is missing, but doesn't say anything regarding the possibility that […]
**0001-BUG-MEDIUM-h2-match-absolute-path-not-path-absolute-.patch**
`src/h2.c` (6 lines: +3/-3)
BUG/MEDIUM: h2: match absolute-path, not path-absolute, for :path. RFC7540 states that :path follows RFC3986's path-absolute. However, that was a bug introduced in the spec between draft 04 and draft 05, which implicitly causes paths starting with "//" to be forbidden. HTTP/1 (and now HTTP core semantics) made it explicit that the request-target in origin-form follows a purposely defined absolute-path, defined as 1*( "/" segment ) to explicitly allow "//". http2bis now fixes this by relying on absolute-path, so that "//" becomes valid and matches other versions. Full discussion here:
    https://lists.w3.org/Archives/Public/ietf-http-wg/2021JulSep/0245.html
This issue appeared in haproxy with commit 4b8852c70 ("BUG/MAJOR: h2: verify that :path starts with a '/' before concatenating it") when making the checks on :path fully comply with the spec, and was backported as far as 2.0, so this fix must be backported there as well to allow "//" in H2 again.
(cherry picked from commit 46b7dff8f08cb6c5c3004d8874d6c5bc689a4c51)
Signed-off-by: Willy Tarreau <w@1wt.eu>
(cherry picked from commit 512cee88df5c40f1d3901a82cf6643fe9f74229e)
Signed-off-by: Willy Tarreau <w@1wt.eu>
(cherry picked from commit 65b9cf31a1975eb32e6696059c2bf9f0cfca2dff)
Signed-off-by: Willy Tarreau <w@1wt.eu>
**0001-2.0-2.3-BUG-MAJOR-htx-fix-missing-header-name-length-check-i.patch**
`src/htx.c` (8 lines: +6/-2)
BUG/MAJOR: htx: fix missing header name length check in htx_add_header/trailer. Shachar Menashe of JFrog Security reported that htx_add_header() and htx_add_trailer() were missing a length check on the header name. While this does not allow any memory area to be overwritten, it lets bits of the header name length slip into the header value length, and may allow certain header names to be forged on the input. The sad thing here is that a FIXME comment was present suggesting to add the required length checks. The injected headers are visible to the HTTP internals and to the config rules, so haproxy will generally stay synchronized with the server. But there is one exception, which is the content-length header field, because it is already deduplicated on the input, but before being indexed. As such, injecting a content-length header after the deduplication stage […]
**0001-BUG-MAJOR-http-htx-prevent-unbounded-loop-in-http_ma.patch**
`src/http_ana.c` (2 lines: +1/-1)
BUG/MAJOR: http/htx: prevent unbounded loop in http_manage_server_side_cookies.
**0001-BUG-MEDIUM-mux-h2-Refuse-interim-responses-with-end-.patch**
`src/mux_h2.c` (5 lines: +5/-0)
BUG/MEDIUM: mux-h2: refuse interim responses with end-stream flag set.
**2.0-2.5-BUG-CRITICAL-http-properly-reject-empty-http-header-.patch**
`src/h1.c` (4 lines: +4/-0)
BUG/CRITICAL: http: properly reject empty http header field names. The HTTP header parser surprisingly accepts empty header field names; this is a leftover from the original code, which was agnostic to this. When muxes were introduced, for H2 first, the HPACK decompressor needed to feed header lists, and since empty header names are strictly forbidden by the protocol, the lists of headers were purposely designed to be terminated by an empty header field name (a principle similar to H1's empty-line termination). This principle was preserved and generalized to other protocols migrated to muxes (H1/FCGI/H3 etc.) without anyone ever noticing that the H1 parser was still able to deliver empty header field names to this list. In addition, it turns out that the HPACK decompressor, despite a comment in the code, may successfully decompress an empty header field name, and this mistake was propagated to the QPACK decompressor as well. The impact is that an empty header field name may be used to truncate the list of headers and thus make some headers disappear. While for H2/H3 the impact is limited, as haproxy sees a request with missing headers and headers are not used to delimit messages, in the case of HTTP/1 the impact is significant, because the presence (and sometimes contents) of certain sensitive headers is detected during parsing. Thus, some of these headers may be seen, marked as present, their value extracted, but never delivered to upper layers, and obviously not forwarded to the other side either. This can have as a consequence that certain important header fields such as Connection, Upgrade, Host, […]
**2.2-BUG-MAJOR-fcgi-Fix-uninitialized-reserved-bytes.patch**
`src/fcgi.c` (8 lines: +6/-2)
BUG/MAJOR: fcgi: fix uninitialized reserved bytes.
**BUG-MAJOR-http-reject-any-empty-content-length-heade.patch**
`reg-tests/http-messaging/h1_to_h1.vtc` (26 lines: +26/-0)
BUG/MAJOR: http: reject any empty content-length header value.
**MINOR-ist-add-new-function-ist_find_range-to-find-a-.patch**
`include/import/ist.h` (47 lines: +47/-0)
MINOR: ist: add new function ist_find_range() to find a character range.
**MINOR-ist-Add-istend-function-to-return-a-pointer-to.patch**
`include/import/ist.h` (6 lines: +6/-0)
MINOR: ist: add istend() function to return a pointer to the end of the string.
**MINOR-http-add-new-function-http_path_has_forbidden_.patch**
`include/haproxy/http.h` (19 lines: +19/-0)
MINOR: http: add new function http_path_has_forbidden_char().
**MINOR-h2-pass-accept-invalid-http-request-down-the-r.patch**
`include/haproxy/h2.h` (2 lines: +1/-1)
MINOR: h2: pass accept-invalid-http-request down the request parser.
**BUG-MINOR-h1-do-not-accept-as-part-of-the-URI-compon.patch**
`src/h1.c` (15 lines: +11/-4)
BUG/MINOR: h1: do not accept '#' as part of the URI component.
**BUG-MINOR-h2-reject-more-chars-from-the-path-pseudo-.patch**
`src/h2.c` (15 lines: +11/-4)
BUG/MINOR: h2: reject more chars from the :path pseudo header.
**REGTESTS-http-rules-verify-that-we-block-by-default-.patch**
`reg-tests/http-rules/fragment_in_uri.vtc` (39 lines: +39/-0)
REGTESTS: http-rules: verify that we block '#' by default for normalize-uri.
**DOC-clarify-the-handling-of-URL-fragments-in-request.patch**
`doc/configuration.txt` (21 lines: +17/-4)
DOC: clarify the handling of URL fragments in requests.
