Package: putty / 0.67-3+deb9u1

Metadata

Package    Version          Patches format
putty      0.67-3+deb9u1    3.0 (quilt)

Patch series

Patch File delta Description
pipe-buf.patch | (download)

unix/uxshare.c | 3 3 + 0 - 0 !
1 file changed, 3 insertions(+)

 fix build failure on gnu/hurd

putty FTBFS on GNU/Hurd due to a missing definition of PIPE_BUF in
<limits.h>. This is solved by using the _POSIX_PIPE_BUF defined value
of 512 instead. (In fact the assertion will never happen: SALT_SIZE is
defined as 64, and PIPE_BUF is always larger than that, e.g. 4096 on
Linux, and so is _POSIX_PIPE_BUF.)
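
A minimal sketch of the kind of fallback described, assuming the usual
<limits.h> macros (the actual three-line change lives in unix/uxshare.c):

  #include <limits.h>

  #ifndef PIPE_BUF
  /* GNU/Hurd's <limits.h> does not define PIPE_BUF; fall back to the
   * POSIX minimum of 512, which is still comfortably above SALT_SIZE (64). */
  #define PIPE_BUF _POSIX_PIPE_BUF
  #endif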

Bug-Debian: https://bugs.debian.org/805505
puttygen-batch-passphrase.patch | (download)

cmdgen.c | 149 99 + 50 - 0 !
doc/man-pg.but | 20 15 + 5 - 0 !
2 files changed, 114 insertions(+), 55 deletions(-)

 add command-line passphrase-file options to command-line puttygen.

Patch due to Colin Watson.

Putting the passphrase in a file avoids exposing it to 'ps', which can
print out every process's command line, while at the same time not
being as platform-specific as the approach of providing an fd number
(since cmdgen.c is in principle a potential cross-platform PuTTYgen,
not just a Unix one, which is why it's not in the 'unix' directory).

Of course it introduces its own risks if someone can read the file
from your disk after you delete it; probably the best approach to
avoiding this, if possible, is to point the option at a file on an
in-memory tmpfs type file system. Or better still, use bash-style
/dev/fd options such as

  puttygen --new-passphrase <(echo -n "my passphrase") [options]

Failing that, try a secure file-wipe utility, as the man page change
mentions.

(And a use case not to be overlooked, of course, is the one where you
actually want to generate an unprotected key - in which case, just
pass /dev/null as the filename.)

vuln-agent-fwd-overflow.patch | (download)

ssh.c | 41 41 + 0 - 0 !
1 file changed, 41 insertions(+)

 sanity-check message length fields in chan_agent input.

Fixes 'vuln-agent-fwd-overflow': a hostile agent-forwarding client
sending a length such as 0xFFFFFFFD can cause c->u.a.totallen to end
up less than c->u.a.lensofar, leading to an attacker-controlled heap
overwrite when those two values are subtracted and used as a bound for
the amount of data to memcpy into the buffer.
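
A minimal illustration of the wraparound described above, assuming the
total includes the 4-byte length field itself (variable names mirror the
prose, not the real ssh.c fields):

  #include <stdio.h>

  int main(void)
  {
      unsigned lenfield = 0xFFFFFFFDu;     /* hostile length field           */
      unsigned totallen = lenfield + 4;    /* wraps around to 1              */
      unsigned lensofar = 4;               /* bytes already buffered         */
      unsigned want = totallen - lensofar; /* underflows to 0xFFFFFFFD       */
      printf("memcpy bound would be %u bytes\n", want);
      return 0;
  }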

Of course the mitigating factor is that if there is any such thing as
a 'hostile agent-forwarding client' in your world then you're likely
to _already_ be in fairly serious trouble - they can make free use of
all the keys stored in your agent, and would surely prefer to do that
than tip their hand by crashing your SSH client.

This is just the sort of thing I should have spotted in one of my past
general tightening-up passes such as commit 896bb7c74, but apparently
didn't :-(

Bug-Debian: https://bugs.debian.org/857642
put-hashed-noise-into-random-pool.patch | (download)

sshrand.c | 2 1 + 1 - 0 !
1 file changed, 1 insertion(+), 1 deletion(-)

 in random_add_noise, put the hashed noise into the pool, not the raw
 noise

random_add_noise calls SHATransform for every 64 octets of incoming noise,
yet instead of xor'ing the hashed noise into the pool, it only xor'ed
20 octets of the raw noise in each iteration. This effectively reduced the
amount of new entropy entering the pool.
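
A schematic of the change described, with placeholder names ('hash' stands
in for the SHATransform output; the actual one-line fix is in sshrand.c):

  for (i = 0; i < 20; i++) {
      /* was: pool[poolpos + i] ^= noise[i];   (raw, unhashed input) */
      pool[poolpos + i] ^= hash[i];            /* now: the hashed bytes */
  }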

remove-pending-toplevel-callbacks.patch | (download)

callback.c | 65 50 + 15 - 0 !
putty.h | 1 1 + 0 - 0 !
2 files changed, 51 insertions(+), 15 deletions(-)

 new facility for removing pending toplevel callbacks.

This is used when you're about to destroy an object that is
(potentially) the context parameter for some still-pending toplevel
callback. It causes callbacks.c to go through its pending list and
delete any callback records referring to that context parameter, so
that when you destroy the object those callbacks aren't still waiting
to cause stale-pointer dereferences.
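
A minimal usage sketch, assuming a hypothetical object type; only
delete_callbacks_for_context comes from this patch, the rest is
illustrative:

  #include <stdlib.h>

  struct conn;   /* some object that gets used as a callback context */

  static void conn_free(struct conn *c)
  {
      /* Drop any still-queued callbacks whose context is c, so none of
       * them can fire on a stale pointer after the free below. */
      delete_callbacks_for_context(c);
      free(c);
  }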

(cherry picked from commit afa9734b7de6656fd8db0b077f00f39d8218a4c6)

random-add-noise-overrun.patch | (download)

sshrand.c | 9 6 + 3 - 0 !
1 file changed, 6 insertions(+), 3 deletions(-)

 fix one-byte buffer overrun in random_add_noise().

The variable 'poolpos' is incremented by two functions: random_byte(),
which reads from the pool, and random_add_noise(), which writes to it.
Both of them must handle the case where poolpos reaches its upper
limit POOLSIZE, wrap poolpos round to the bottom, and trigger a
random_stir().

Unfortunately, random_byte checks that poolpos < POOLSIZE _before_ the
read, so it may leave poolpos==POOLSIZE on exit. And random_add_noise
does it the other way round - it assumes it's safe to write the first
byte _before_ doing the bounds check. So if exactly the right number
of random_byte calls happen before the next add_noise, then a byte of
entropy can be XORed into the thing just beyond the end of pool[].

What _is_ beyond that point is the LSB of poolpos itself! Since
POOLSIZE is not a multiple of 256, this means that poolpos could be
made larger or smaller by this overwrite. If it's made larger, that's
the safe case - the subsequent bounds check will still fail, and then
poolpos will be reset anyway. (And this is also what will happen in
the very likely case that poolpos was cached in a register during that
part of the function.)

The dangerous case is if poolpos is made _smaller_ by that overwrite.
In that situation, the effect will be to rewind the position in the
random pool, delaying a necessary random_stir() and causing
already-output random numbers to either be reused, or to be reused
combined with unhashed input noise. In the latter case, it's just
conceivable that an attacker who was somehow controlling at least one
entropy source might be able to manipulate the RNG output on demand.
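
A schematic of the ordering fix, with placeholder names (the real loop
operates on PuTTY's random pool in sshrand.c):

  for (i = 0; i < noiselen; i++) {
      if (poolpos >= POOLSIZE) {   /* bounds check BEFORE the write...      */
          poolpos = 0;             /* ...so pool[POOLSIZE] is never touched */
          random_stir();
      }
      pool[poolpos++] ^= noise[i];
  }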

uxnet-clean-up-callbacks.patch | (download)

unix/uxnet.c | 1 1 + 0 - 0 !
1 file changed, 1 insertion(+)

 uxnet: clean up callbacks when closing a netsocket.

uxnet.c's method for passing socket errors on to the Plug involves
setting up a toplevel callback using the NetSocket itself as the
context. Therefore, it should call delete_callbacks_for_context when
it destroys a NetSocket. For example, if _two_ socket errors manage to
occur, and the first one causes the socket to be closed, you need the
second callback to not happen, or it'll dereference the freed pointer.

sk_tcp_close-fix-memory-leak.patch | (download)

unix/uxnet.c | 2 2 + 0 - 0 !
windows/winnet.c | 2 2 + 0 - 0 !
2 files changed, 4 insertions(+)

 sk_tcp_close: fix memory leak of output bufchain.

If there was still pending output data on a NetSocket's output_data
bufchain when it was closed, then we wouldn't have freed it, on either
Unix or Windows.
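
A sketch of the shape of the fix, with the socket structure abbreviated to
the one relevant field (bufchain_clear and sfree are PuTTY's existing
helpers):

  struct NetSocket {
      bufchain output_data;   /* data queued for sending, possibly unsent */
      /* ... */
  };

  static void net_socket_close(struct NetSocket *s)
  {
      bufchain_clear(&s->output_data);  /* previously leaked if output was
                                           still pending at close time */
      sfree(s);
  }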

fix-bad-rsa-key-handling.patch | (download)

sshbn.c | 1 1 + 0 - 0 !
sshrsa.c | 4 4 + 0 - 0 !
2 files changed, 5 insertions(+)

 fix handling of bad rsa key with n=p=q=0.

In this situation, rsa_verify won't notice anything wrong until it
gets to the point where decbn() tries to subtract 1 from p, and
underruns the Bignum buffer.

Just in case some other attack vector reaches that same problem point,
I've also put a protective assertion in decbn() itself just before the
memory overwrite would have happened.
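
A simplified sketch of that protective check, on a plain little-endian word
array rather than PuTTY's real Bignum layout:

  #include <assert.h>

  static void dec_words(unsigned short *w, int nwords)
  {
      int i = 0;
      while (i < nwords && w[i] == 0)
          i++;                   /* find the lowest word we can borrow from */
      assert(i < nwords);        /* value was zero: refuse to underrun      */
      w[i]--;
      while (i > 0)
          w[--i] = 0xFFFF;       /* words borrowed through become all-ones  */
  }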

sanity-check-public-lines.patch | (download)

sshpubk.c | 35 32 + 3 - 0 !
1 file changed, 32 insertions(+), 3 deletions(-)

 sanity-check the 'public-lines' field in ppk files.

If it's too large, memory allocation can fail, or worse,
under-allocate due to integer overflow.
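
An illustration of the overflow half of that risk, with made-up numbers
(the real check is in sshpubk.c; 64 is just a plausible bytes-per-line
figure):

  #include <stdio.h>

  int main(void)
  {
      unsigned lines = 0x08000000u; /* hostile 'public-lines' value       */
      unsigned bytes = lines * 64;  /* wraps to 0 in 32-bit unsigned math */
      printf("would allocate %u bytes\n", bytes);
      return 0;
  }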

uxsel-enum.patch | (download)

unix/gtkwin.c | 6 3 + 3 - 0 !
unix/unix.h | 1 1 + 0 - 0 !
unix/uxagentc.c | 4 2 + 2 - 0 !
unix/uxnet.c | 14 7 + 7 - 0 !
unix/uxplink.c | 6 3 + 3 - 0 !
unix/uxpty.c | 10 5 + 5 - 0 !
unix/uxsftp.c | 6 3 + 3 - 0 !
7 files changed, 24 insertions(+), 23 deletions(-)

 introduce an enum of the uxsel / select_result flags.

Those magic numbers 1,2,4 were getting annoying. Time to replace them
while I can still remember what they do.
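
A minimal sketch of the idea; the real names live in unix/unix.h and may
differ:

  enum {
      SELECT_R = 1,   /* fd is readable                       */
      SELECT_W = 2,   /* fd is writable                       */
      SELECT_X = 4    /* exceptional condition, e.g. OOB data */
  };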

switch-to-poll.patch | (download)

Recipe | 4 2 + 2 - 0 !
cmdline.c | 2 1 + 1 - 0 !
configure.ac | 2 1 + 1 - 0 !
putty.h | 4 2 + 2 - 0 !
unix/unix.h | 19 19 + 0 - 0 !
unix/uxcons.c | 15 8 + 7 - 0 !
unix/uxnet.c | 2 1 + 1 - 0 !
unix/uxplink.c | 70 31 + 39 - 0 !
unix/uxpoll.c | 141 141 + 0 - 0 !
unix/uxsel.c | 4 2 + 2 - 0 !
unix/uxsftp.c | 57 26 + 31 - 0 !
11 files changed, 234 insertions(+), 86 deletions(-)

 switch to using poll(2) in place of select(2).

I've always thought poll was more hassle to set up, because if you
want to reuse part of your pollfds list between calls then you have to
index every fd by its position in the list as well as the fd number
itself, which gives you twice as many indices to keep track of as if
the fd is always its own key.

But the problem is that select is fundamentally limited to the range
of fds that can fit in an fd_set, which is not the range of fds that
can _exist_, so I've had a change of heart and now have to go with
poll.

For the moment, I've surrounded it with a 'pollwrapper' structure that
lets me treat it more or less like select, containing a tree234 that
maps each fd to its location in the list, and also translating between
the simple select r/w/x classification and the richer poll flags.
That's let me do the migration with minimal disruption to the call
sites.
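
A rough sketch of the r/w/x-to-poll translation such a wrapper has to do
(illustrative only; the real helpers are in unix/uxpoll.c, and SELECT_R/W/X
are the flags from the enum patch above):

  #include <poll.h>

  static short rwx_to_events(int rwx)
  {
      short events = 0;
      if (rwx & SELECT_R) events |= POLLIN;    /* readable              */
      if (rwx & SELECT_W) events |= POLLOUT;   /* writable              */
      if (rwx & SELECT_X) events |= POLLPRI;   /* exceptional, e.g. OOB */
      return events;
  }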

In future perhaps I can start using poll more directly, and/or using
the richer flag system (though the latter might be fiddly because of
sometimes being constrained to use the glib event loop). But this will
do for now.

rsa-kex-enforce-minimum-key-length.patch | (download)

ssh.c | 12 12 + 0 - 0 !
ssh.h | 2 2 + 0 - 0 !
sshrsa.c | 4 2 + 2 - 0 !
3 files changed, 16 insertions(+), 2 deletions(-)

 rsa kex: enforce the minimum key length.

I completely forgot to check that the server had actually sent a key
of at least MINKLEN bits, as RFC 4432 clearly says that it MUST.
Without this restriction, not only can a server trick the client into
using a shared secret with inadequate entropy, but it can send a key
so short that the client attempts to generate a secret integer of
negative length, with integer-overflowing results.
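
An illustration of the negative-length arithmetic, using the secret-size
formula from RFC 4432 (the secret gets KLEN - 2*HLEN - 49 bits, with the
key and hash lengths in bits):

  int klen = 512;                  /* undersized key from a hostile server */
  int hlen = 256;                  /* SHA-256 output length in bits        */
  int kbits = klen - 2*hlen - 49;  /* = -49: a "negative-length" secret    */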

fix-esc6-combining-chars-crash.patch | (download)

unix/gtkwin.c | 5 4 + 1 - 0 !
1 file changed, 4 insertions(+), 1 deletion(-)

 fix crash on esc#6 + combining chars + gtk + odd-width terminal.

When we're displaying double-width text as a result of the VT100 ESC#6
escape sequence or its friends, and the terminal width is an odd
number of columns, we divide by 2 the number of characters we'll even
try to display, and round _down_: if there's a rightmost odd column,
it stays blank, and doesn't show the left half of a double-width char.

In the GTK redraw function, that rounding-down can set the 'len'
variable to zero. But when we're displaying a character with Unicode
combining chars on top, that fails an assertion that len == 1, because
at the top of the function we set it to 1.

The fix is just to return early if len is reduced to zero by that
rounding: if we're not displaying any characters, then we don't have
to do anything at all.
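
A sketch of the early return, with the surrounding logic reduced to the two
relevant steps ('double_width_line' is a placeholder; the real code is in
unix/gtkwin.c's drawing routine):

  if (double_width_line)
      len /= 2;       /* only half as many cells fit; rounds down   */
  if (len == 0)
      return;         /* odd-width terminal left us nothing to draw */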

limit-combining-chars-per-cell.patch | (download)

terminal.c | 52 45 + 7 - 0 !
terminal.h | 15 15 + 0 - 0 !
2 files changed, 60 insertions(+), 7 deletions(-)

 limit the number of combining chars per terminal cell.

The previous unlimited system was nicely general, but unfortunately
meant you could easily DoS a PuTTY-based terminal by sending a
printing character followed by an endless stream of identical
combining chars. (In fact, due to accidentally-quadratic linked list
management, you'd DoS it by using up all the CPU even before you got to
the point of making it allocate all the RAM.)

The new limit is chosen to be 32, more or less arbitrarily. Overlong
sequences of combining characters are signalled by turning the whole
character cell into U+FFFD REPLACEMENT CHARACTER.
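
A sketch of the policy, with illustrative names (the limit of 32 and the
U+FFFD substitution come from the description; the real fields are in
terminal.h):

  #define CC_LIMIT 32

  if (cell->ncombining >= CC_LIMIT) {
      cell->chr = 0xFFFD;             /* overlong sequence: show U+FFFD */
  } else {
      attach_combining(cell, newchr); /* accept one more combining char */
      cell->ncombining++;
  }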

minibidi-fix-read-past-end-of-line.patch | (download)

minibidi.c | 2 1 + 1 - 0 !
1 file changed, 1 insertion(+), 1 deletion(-)

 minibidi: fix read past end of line in rule w5.

The check for a sequence of ET with an EN after it could potentially
skip ETs all the way up to the end of the buffer and then look for an
EN in the following nonexistent array element. Now it only skips ETs
up to count-1, in the same style as the similar check in rule N1.
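
A sketch of the off-by-one fix (index and array names are illustrative; the
real loop is in minibidi.c):

  j = i;
  while (j < count - 1 && types[j] == ET)
      j++;                 /* was "j < count", which could run off the end */
  if (types[j] == EN)
      /* rule W5 applies: treat the ETs as EN */;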

fix-double-width-crash.patch | (download)

terminal.c | 11 11 + 0 - 0 !
1 file changed, 11 insertions(+)

 fix crash printing a width-2 char in a width-1 terminal.

If the terminal is one column wide, it's not possible to print a
double-width CJK character at all - it won't fit. Replace it with
U+FFFD to indicate that impossibility.

The previous behaviour was to notice that we're in the rightmost
column of the terminal, and invoke the LATTR_WRAPPED2 special case to
wrap to the leftmost column on the next line. But in a width-1
terminal, the rightmost column _is_ the leftmost column, so this would
leave us no better off, and we would have fallen through into the next
case while in exactly the situation we'd tried to rule out.
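
A sketch of the new special case, with field names loosely following
terminal.c (the real check sits in the character-output path):

  if (width == 2 && term->cols < 2) {
      /* A double-width character can never fit in a one-column terminal:
         substitute U+FFFD rather than attempt the LATTR_WRAPPED2 wrap. */
      c = 0xFFFD;
      width = 1;
  }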