Package: qemu / 1.1.2+dfsg-6a+deb7u12

Metadata

Package: qemu
Version: 1.1.2+dfsg-6a+deb7u12
Patches format: 3.0 (quilt)

Patch series

Patch File delta Description
02_kfreebsd.patch | (download)

configure | 8 8 + 0 - 0 !
1 file changed, 8 insertions(+)

---
qemu ifunc sparc.patch | (download)

sparc.ld | 16 14 + 2 - 0 !
1 file changed, 14 insertions(+), 2 deletions(-)

---
configure nss usbredir.patch | (download)

configure | 2 1 + 1 - 0 !
1 file changed, 1 insertion(+), 1 deletion(-)

---
do not include libutil.h.patch | (download)

net/tap-bsd.c | 6 0 + 6 - 0 !
qemu-char.c | 14 7 + 7 - 0 !
savevm.c | 13 0 + 13 - 0 !
vl.c | 6 0 + 6 - 0 !
4 files changed, 7 insertions(+), 32 deletions(-)

 [patch] do not include <libutil.h> needlessly or if it doesn't exist

<libutil.h> and <util.h> on *BSD (some have one, some another)
were #included just for openpty() declaration.  The only file
where this function is actually used is qemu-char.c.

In vl.c, savevm.c and net/tap-bsd.c, none of the functions declared
in libutil.h (login, logout, logwtmp, timdomain, openpty, forkpty,
uu_lock, realhostname, fparseln and a few others depending on
version) are used.

The code that is now in qemu-char.c was initially in vl.c; it was
moved into a separate file in commit 0e82f34d077dc2542
(Fri Oct 31 18:44:40 2008), but the #includes were left behind in
vl.c.  So for vl.c, we just remove the includes: libutil.h, util.h
and pty.h (which declares only openpty() and forkpty()).

The code in net/tap-bsd.c, which came from net/tap.c, had this

commit 5281d757efa6e40d74ce124be048b08d43887555
tcg_s390 fix ld_st with CONFIG_TCG_PASS_AREG0.patch | (download)

tcg/s390/tcg-target.c | 14 7 + 7 - 0 !
1 file changed, 7 insertions(+), 7 deletions(-)

 tcg/s390: fix ld/st with config_tcg_pass_areg0

The load/store slow path has been broken in e141ab52d:
- We need to move 4 registers for store functions and 3 registers for
  load functions and not the reverse.
- According to the s390x calling convention the arguments of a function
  should be zero extended. This means that the register shift should be
  done with TCG_TYPE_I64 to ensure the higher word is correctly zero
  extended when needed.

I am aware that CONFIG_TCG_PASS_AREG0 is being removed and thus that
this patch can be improved, but doing so means it can also be applied to
the 1.1 and 1.2 stable branches.

Cc: qemu-stable@nongnu.org
Cc: Alexander Graf <agraf@suse.de>
Cc: Richard Henderson <rth@twiddle.net>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>

linux user fix mips 32 on 64 prealloc case.patch | (download)

linux-user/main.c | 5 5 + 0 - 0 !
1 file changed, 5 insertions(+)

 linux-user: fix mips 32-on-64 prealloc case
Bug-Debian: http://bugs.debian.org/668658

MIPS only supports 31 bits of virtual address space for user space, so let's
make sure we stay within that limit with our preallocated memory block.

This fixes the MIPS user space targets when executed without command line
options.

Signed-off-by: Alexander Graf <agraf@suse.de>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>

net add netdev options to man page.patch | (download)

qemu-options.hx | 7 7 + 0 - 0 !
1 file changed, 7 insertions(+)

---
revert serial fix retry logic.patch | (download)

hw/serial.c | 4 1 + 3 - 0 !
1 file changed, 1 insertion(+), 3 deletions(-)

 [patch] revert "serial: fix retry logic"
To: Anthony Liguori <aliguori@us.ibm.com>
Bug-Debian: http://bugs.debian.org/686524
Cc: qemu-devel@nongnu.org, qemu-stable@nongnu.org,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>

This reverts commit 67c5322d7000fd105a926eec44bc1765b7d70bdd:

    I'm not sure if the retry logic has ever worked when not using FIFO mode.  I
    found this while writing a test case although code inspection confirms it is
    definitely broken.

    The TSR retry logic will never actually happen because it is guarded by an
'if (s->tsr_retry > 0)' but this is the only place that can ever make the
variable greater than zero.  That effectively makes the retry logic an 'if (0)'.

I believe this is a typo and the intention was >= 0.  Once this is fixed though
    I see double transmits with my test case.  This is because in the non FIFO
    case, serial_xmit may get invoked while LSR.THRE is still high because the
    character was processed but the retransmit timer was still active.

    We can handle this by simply checking for LSR.THRE and returning early.  It's
    possible that the FIFO paths also need some attention.

    Cc: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
    Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>

Even if the previous logic never worked, the new logic breaks stuff,
namely:

 qemu -enable-kvm -nographic -kernel /boot/vmlinuz-$(uname -r) -append console=ttyS0 -serial pty

the above command will cause the virtual machine to get stuck at startup,
using 100% CPU, until one connects to the pty and sends any char to it.

Note this is a rather typical invocation for various headless virtual
machines managed by libvirt.

So revert this change for now, until a better solution is found.

Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>

intel_hda do not call msi_reset when only device state needs resetting.patch | (download)

hw/intel-hda.c | 14 10 + 4 - 0 !
1 file changed, 10 insertions(+), 4 deletions(-)

 intel_hda: do not call msi_reset when only device state needs resetting
Bug-Debian: http://bugs.debian.org/688964

Commit 8e729e3b521d9 "intel-hda: Fix reset of MSI function"
(applied to 1.1.1 as 0ec39075710) added a call to msi_reset()
into the intel_hda_reset() function.  But this function is called
not only from the PCI bus reset method, but also from the device init
method (intel_hda_set_g_ctl()), and there we should not reset the
MSI state.  To fix this, intel_hda_reset() is split into two halves:
one common part shared with device reset, and one that also resets
MSI state, intel_hda_reset_msi(), which calls the common part and is
used by the bus method.

This is only needed for 1.1.x series, since in 1.2+, MSI reset
is called in proper places by the PCI code already.

Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
Cc: Jan Kiszka <jan.kiszka@siemens.com>
Cc: 688964@bugs.debian.org

blockdev preserve readonly and snapshot states across media changes.patch | (download)

blockdev.c | 2 2 + 0 - 0 !
1 file changed, 2 insertions(+)

 blockdev: preserve readonly and snapshot states across media changes
Bug-Debian: http://bugs.debian.org/686776

If readonly=on is given at device creation time, the ->readonly flag
needs to be set in the block driver state for this device so that
readonly-ness is preserved across media changes (qmp change command).
Similarly, to preserve the snapshot property requires ->open_flags to
be correct.

Signed-off-by: Kevin Shanahan <kmshanah@disenchant.net>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
(cherry picked from commit 80dd1aae3657a902d262f5d20a7a3c655b23705e)

Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>

ahci properly reset PxCMD on HBA reset.patch | (download)

hw/ide/ahci.c | 2 1 + 1 - 0 !
1 file changed, 1 insertion(+), 1 deletion(-)

 ahci: properly reset pxcmd on hba reset
Bug-Debian: http://bugs.debian.org/696052
Comment: Original patch: http://patchwork.ozlabs.org/patch/179724/
Comment: Final submission: http://patchwork.ozlabs.org/patch/183918/

While testing q35, I found that windows 7 (specifically, windows 7 ultimate
with sp1 x64), wouldn't install because it can't find the cdrom or disk drive.
The failure message is: 'A required cd/dvd device driver is missing. If you
have a driver floppy disk, CD, DVD, or USB flash drive, please insert it now.'
This can also be reproduced on piix by adding an ahci controller, and
observing that windows 7 does not see any devices behind it.

The problem is that when windows issues a HBA reset, qemu does not reset the
individual ports' PxCMD register. Windows 7 then reads back the PxCMD register
and presumably assumes that the ahci controller has already been initialized.
Windows then never sets up the PxIE register to enable interrupts, and thus it
never gets irqs back when it sends ata device inquiry commands.

This change brings qemu into ahci 1.3 specification compliance.

Section 10.4.3 HBA Reset:

"
When GHC.HR is set to '1', GHC.AE, GHC.IE, the IS register, and all port
register fields (except PxFB/PxFBU/PxCLB/PxCLBU) that are not HwInit in the
HBA's register memory space are reset.
"

I've also re-tested Fedora 16 and 17 to verify that they continue to work with
this change.

Signed-off-by: Jason Baron <jbaron@redhat.com>
net notify iothread after flushing queue.patch | (download)

hw/virtio-net.c | 4 0 + 4 - 0 !
net.c | 7 6 + 1 - 0 !
net/queue.c | 5 3 + 2 - 0 !
net/queue.h | 2 1 + 1 - 0 !
4 files changed, 10 insertions(+), 8 deletions(-)

 net: notify iothread after flushing queue
Bug-Debian: http://bugs.debian.org/696063

virtio-net has code to flush the queue and notify the iothread
whenever new receive buffers are added by the guest.  That is
fine, and indeed we need to do the same in all other drivers.
However, notifying the iothread should be the work of the network
subsystem.  And while we are at it we can add a little smartness:
if some of the queued packets already could not be delivered,
there is no need to notify the iothread.

Reported-by: Luigi Rizzo <rizzo@iet.unipi.it>
Cc: Stefan Hajnoczi <stefanha@linux.vnet.ibm.com>
Cc: Jan Kiszka <jan.kiszka@siemens.de>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
e1000 flush queue whenever can_receive can go from false to true.patch | (download)

hw/e1000.c | 4 4 + 0 - 0 !
1 file changed, 4 insertions(+)

 e1000: flush queue whenever can_receive can go from false to true
Bug-Debian: http://bugs.debian.org/696063

When the guest replenishes the receive ring buffer, the network device
should flush its queue of pending packets.  This is done with
qemu_flush_queued_packets.

e1000's can_receive can go from false to true when RCTL or RDT are
modified.

Reported-by: Luigi Rizzo <rizzo@iet.unipi.it>
Cc: Stefan Hajnoczi <stefanha@linux.vnet.ibm.com>
Cc: Jan Kiszka <jan.kiszka@siemens.de>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
e1000 discard packets that are too long if not SBP and not LPE.patch | (download)

hw/e1000.c | 10 10 + 0 - 0 !
1 file changed, 10 insertions(+)

 e1000: discard packets that are too long if !sbp and !lpe
Bug-Debian: http://bugs.debian.org/696051
Comment: first half of the fix for CVE-2012-6075
Comment: see also e1000-discard-oversized-packets-based-on-SBP_LPE.patch
Comment: http://patchwork.ozlabs.org/patch/203291/
Comment: Michael Contreras:
Comment: Tested with linux guest. This error can potentially be exploited. At the very
Comment: least it can cause a DoS to a guest system, and in the worst case it could
Comment: allow remote code execution on the guest system with kernel level privilege.
Comment: Risk seems low, as the network would need to be configured to allow large
Comment: packets.

The e1000_receive function for the e1000 needs to discard packets longer than
1522 bytes if the SBP and LPE flags are disabled. The linux driver assumes
this behavior and allocates memory based on this assumption.

Signed-off-by: Michael Contreras <michael@inetric.com>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
(cherry picked from commit b0d9ffcd0251161c7c92f94804dcf599dfa3edeb)

Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>

e1000 discard oversized packets based on SBP_LPE.patch | (download)

hw/e1000.c | 7 5 + 2 - 0 !
1 file changed, 5 insertions(+), 2 deletions(-)

---
eepro100 fix network hang when rx buffers run out.patch | (download)

hw/eepro100.c | 4 3 + 1 - 0 !
1 file changed, 3 insertions(+), 1 deletion(-)

 eepro100: fix network hang when rx buffers run out
Bug-Debian: http://bugs.debian.org/696061

This was reported by QA.  When installing an OS with PXE, after the initial
kernel and initrd are loaded, the procedure tries to copy files from the
install server to the local hard disk, and the network stalls because it
has run out of receive descriptors.

[Whitespace fixes and removed qemu_notify_event() because Paolo's
earlier net patches have moved it into qemu_flush_queued_packets().

Additional info:

I can reproduce the network hang with a tap device doing an iPXE HTTP
boot as follows:

  $ qemu -enable-kvm -m 1024 \
    -netdev tap,id=netdev0,script=no,downscript=no \
    -device i82559er,netdev=netdev0,romfile=80861209.rom \
    -drive if=virtio,cache=none,file=test.img
  iPXE> ifopen net0
  iPXE> config # set static network configuration
  iPXE> kernel http://mirror.bytemark.co.uk/fedora/linux/releases/17/Fedora/x86_64/os/images/pxeboot/vmlinuz

I needed a vanilla iPXE ROM to get to the iPXE prompt.  I think the boot
prompt has been disabled in the ROMs that ship with QEMU to reduce boot
time.

During the vmlinuz HTTP download there is a network hang.  hw/eepro100.c
has reached the end of the rx descriptor list.  When the iPXE driver
replenishes the rx descriptor list we don't kick the QEMU net subsystem
and event loop, thereby leaving the tap netdev without its file
descriptor in select(2).

Stefan Hajnoczi <stefanha@gmail.com>]

Signed-off-by: Bo Yang <boyang@suse.com>
Signed-off-by: Stefan Hajnoczi <stefanha@gmail.com>
(cherry picked from commit 1069985fb132cd4324fc02d371f1e61492a1823f)

Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>

fixes related to processing of qemu s numa option.patch | (download)

cpus.c | 3 2 + 1 - 0 !
hw/pc.c | 3 2 + 1 - 0 !
sysemu.h | 3 2 + 1 - 0 !
vl.c | 43 21 + 22 - 0 !
4 files changed, 27 insertions(+), 25 deletions(-)

 fixes related to processing of qemu's -numa option
Bug-Debian: http://bugs.debian.org/691343

The -numa option to qemu is used to create [fake] numa nodes
and expose them to the guest OS instance.

There are a couple of issues with the -numa option:

a) The maximum number of VCPUs that can be specified for a guest
   while using qemu's -numa option is 64.  Due to a typecasting
   issue, when the number of VCPUs is > 32 the VCPUs don't show up
   under the specified [fake] numa nodes.

b) KVM currently has support for 160 VCPUs per guest, while qemu's
   -numa option only has support for up to 64 VCPUs per guest.

This patch addresses these two issues.

Below are examples of (a) and (b)

a) >32 VCPUs are specified with the -numa option:

/usr/local/bin/qemu-system-x86_64 \
-enable-kvm \
71:01:01 \
-net tap,ifname=tap0,script=no,downscript=no \
-vnc :4

...
Upstream qemu :

qcow2 fix avail_sectors in cluster allocation code.patch | (download)

block/qcow2-cluster.c | 10 9 + 1 - 0 !
1 file changed, 9 insertions(+), 1 deletion(-)

 qcow2: fix avail_sectors in cluster allocation code
Bug-Debian: http://bugs.debian.org/695905

avail_sectors should really be the number of sectors from the start of
the allocation, not from the start of the write request.

We're lucky enough that this mistake didn't cause any real bug.
avail_sectors is only used in the initialiser of QCowL2Meta:

  .nb_available   = MIN(requested_sectors, avail_sectors),

m->nb_available in turn is only used for COW at the end of the
allocation. A COW occurs only if the request wasn't cluster aligned,
which in turn would imply that requested_sectors was less than
avail_sectors (both in the original and in the fixed version). In this
case avail_sectors is ignored and therefore the mistake doesn't cause
any misbehaviour.

Signed-off-by: Kevin Wolf <kwolf@redhat.com>
(cherry picked from commit b7ab0fea37c15ca9e249c42c46f5c48fd1a0943c)

Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>

qcow2 fix refcount table size calculation.patch | (download)

block/qcow2-refcount.c | 3 2 + 1 - 0 !
1 file changed, 2 insertions(+), 1 deletion(-)

 qcow2: fix refcount table size calculation
Bug-Debian: http://bugs.debian.org/691569

A missing factor for the refcount table entry size in the calculation
could mean that too little memory was allocated for the in-memory
representation of the table, resulting in a buffer overflow.

Signed-off-by: Kevin Wolf <kwolf@redhat.com>
tap reset vnet header size on open.patch | (download)

net/tap.c | 7 7 + 0 - 0 !
1 file changed, 7 insertions(+)

 tap: reset vnet header size on open
Bug-Debian: http://bugs.debian.org/696057

For tap, we currently assume the vnet header size is 10 (the default
value), but that might not be the case if the tap device is persistent
and has been used by qemu previously.  To fix this, set the host header
size in the tap device on open.

Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Tested-by: Alexander Graf <agraf@suse.de>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
(cherry picked from commit 58ddcd50f30cb5c020bd4f9f36b01ee160a27cac)

Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>

vmdk fix data corruption bug in WRITE and READ handling.patch | (download)

block/vmdk.c | 10 8 + 2 - 0 !
1 file changed, 8 insertions(+), 2 deletions(-)

 vmdk: fix data corruption bug in write and read handling
Bug-Debian: http://bugs.debian.org/696050

Fixed a MAJOR BUG in VMDK files on file boundaries on reads
and ALSO ON WRITES WHICH MIGHT CORRUPT THE IMAGE AND DATA!!!!!!

Triggered for example with the following VMDK file (partly listed):
RW 4193792 FLAT "XP-W1-f001.vmdk" 0
RW 2097664 FLAT "XP-W1-f002.vmdk" 0
RW 4193792 FLAT "XP-W1-f003.vmdk" 0
RW 512 FLAT "XP-W1-f004.vmdk" 0
RW 4193792 FLAT "XP-W1-f005.vmdk" 0
RW 2097664 FLAT "XP-W1-f006.vmdk" 0
RW 4193792 FLAT "XP-W1-f007.vmdk" 0
RW 512 FLAT "XP-W1-f008.vmdk" 0

Patch includes:
1.) Patch fixes wrong calculation on extent boundaries. Especially it
fixes the relativeness of the sector number to the current extent.

Verified correctness with:
1.) Converted either with VirtualBox to VDI and then with qemu-img, or
    with qemu-img only:

    VBoxManage clonehd --format vdi /VM/XP-W/new/XP-W1.vmdk ~/.VirtualBox/Harddisks/XP-W1-new-test.vdi
    ./qemu-img convert -O raw ~/.VirtualBox/Harddisks/XP-W1-new-test.vdi /root/QEMU/VM-XP-W1/XP-W1-via-VBOX.img
    md5sum /root/QEMU/VM-XP-W/XP-W1-direct.img
    md5sum /root/QEMU/VM-XP-W/XP-W1-via-VBOX.img
    => same MD5 hash

2.) Verified debug log files
3.) Run Windows XP successfully
4.) chkdsk run successfully without any errors

Signed-off-by: Gerhard Wiesinger <lists@wiesinger.com>
uhci don t queue up packets after one with the SPD flag set.patch | (download)

hw/usb/hcd-uhci.c | 5 4 + 1 - 0 !
1 file changed, 4 insertions(+), 1 deletion(-)

 uhci: don't queue up packets after one with the spd flag set
Bug-Debian: http://bugs.debian.org/683983
Bug: https://bugs.launchpad.net/bugs/1033727
usb split endpoint init and reset.patch | (download)

hw/usb.h | 1 1 + 0 - 0 !
hw/usb/core.c | 13 11 + 2 - 0 !
hw/usb/host-linux.c | 5 3 + 2 - 0 !
3 files changed, 15 insertions(+), 4 deletions(-)

 usb: split endpoint init and reset

Create a new usb_ep_reset() function to reset endpoint state, without
re-initializing the queues, so we don't unlink in-flight packets just
because usb-host has to re-parse the descriptor tables.

Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
(cherry picked from commit 19deaa089cb874912767bc6071f3b7372d3ff961)

Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>

virtio net fix guest triggerable buffer overrun CVE 2014 0150.patch | (download)

hw/virtio-net.c | 2 1 + 1 - 0 !
1 file changed, 1 insertion(+), 1 deletion(-)

 [patch] virtio-net: fix guest-triggerable buffer overrun
Bug-Debian: http://bugs.debian.org/744221

When a VM guest programs multicast addresses for
a virtio net card, it supplies a 32-bit
entries counter for the number of addresses.
These addresses are read into the tail portion of
a fixed macs array of size MAC_TABLE_ENTRIES,
at an offset equal to in_use.

To avoid overflow of this array by guest, qemu attempts
to test the size as follows:
-    if (in_use + mac_data.entries <= MAC_TABLE_ENTRIES) {

however, as mac_data.entries is uint32_t, this sum
can overflow, e.g. if in_use is 1 and mac_data.entries
is 0xffffffff then in_use + mac_data.entries will be 0.

Qemu will then read the guest-supplied buffer into this
memory, overflowing the buffer on the heap.

CVE-2014-0150

Cc: qemu-stable@nongnu.org
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Message-id: 1397218574-25058-1-git-send-email-mst@redhat.com
x86 only allow real mode to access 32bit without LMA.patch | (download)

target-i386/helper.c | 6 6 + 0 - 0 !
1 file changed, 6 insertions(+)

 x86: only allow real mode to access 32bit without lma
 When we're running in non-64bit mode with qemu-system-x86_64 we can
 still end up with virtual addresses that are above the 32bit boundary
 if a segment offset is set up.
 .
 GNU Hurd does exactly that. It sets the segment offset to 0x80000000 and
 puts its EIP value to 0x8xxxxxxx to access low memory.
 .
 This doesn't hit us when we enable paging, as there we just mask away the
 unused bits. But with real mode, we assume that vaddr == paddr which is
 wrong in this case. Real hardware wraps the virtual address around at the
 32bit boundary. So let's do the same.
 .
 This fixes booting GNU Hurd in qemu-system-x86_64 for me.
fix entry pointer for ELF kernels loaded with kernel option.patch | (download)

hw/elf_ops.h | 11 11 + 0 - 0 !
1 file changed, 11 insertions(+)

 fix entry pointer for elf kernels loaded with -kernel option
ide correct improper smart self test counter reset CVE 2014 2894.patch | (download)

hw/ide/core.c | 2 1 + 1 - 0 !
1 file changed, 1 insertion(+), 1 deletion(-)

 [patch] ide: correct improper smart self test counter reset in ide
 core (CVE-2014-2894)

The SMART self test counter was incorrectly being reset to zero,
not 1. This had the effect that on every 21st SMART EXECUTE OFFLINE:
 * We would write off the beginning of a dynamically allocated buffer
 * We forgot the SMART history
Fix this.

Signed-off-by: Benoit Canet <benoit@irqsave.net>
Message-id: 1397336390-24664-1-git-send-email-benoit.canet@irqsave.net
scsi allocate SCSITargetReq r buf dynamically CVE 2013 4344.patch | (download)

hw/scsi-bus.c | 45 34 + 11 - 0 !
hw/scsi.h | 2 2 + 0 - 0 !
2 files changed, 36 insertions(+), 11 deletions(-)

 [patch] scsi: allocate scsitargetreq r->buf dynamically [cve-2013-4344]
Bug-Debian: http://bugs.debian.org/725944

r->buf is hardcoded to 2056, which is (256 + 1) * 8, allowing 256 LUNs at
most.  If more than 256 LUNs are specified by the user, we have a buffer
overflow in scsi_target_emulate_report_luns.

To fix, we allocate the buffer dynamically.

Signed-off-by: Asias He <asias@redhat.com>
Tested-by: Michael Roth <mdroth@linux.vnet.ibm.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
(cherry picked from commit 846424350b292f16b732b573273a5c1f195cd7a3)

slirp udp fix NULL pointer deref uninit socket CVE 2014 3640.patch | (download)

slirp/udp.c | 2 1 + 1 - 0 !
1 file changed, 1 insertion(+), 1 deletion(-)

 slirp: udp: fix null pointer dereference because of uninitialized socket
Bug-Debian: http://bugs.debian.org/762532

When the guest sends a udp packet with source port and source addr 0,
an uninitialized socket is picked up when looking for matching and already
created udp sockets, and later passed to sosendto(), where a NULL pointer
dereference is hit during the so->slirp->vnetwork_mask.s_addr access.

Fix this by checking that the socket is not just a socket stub.

This is CVE-2014-3640.

Signed-off-by: Petr Matousek <pmatouse@redhat.com>
Reported-by: Xavier Mehrenberger <xavier.mehrenberger@airbus.com>
Reported-by: Stephane Duverger <stephane.duverger@eads.net>
spice make sure we don t overflow ssd buf CVE 2014 3615.patch | (download)

ui/spice-display.c | 20 15 + 5 - 0 !
1 file changed, 15 insertions(+), 5 deletions(-)

 [patch 2/2] spice: make sure we don't overflow ssd->buf

Related spice-only bug.  We have a fixed 16 MB buffer here, being
presented to the spice-server as qxl video memory in case spice is
used with a non-qxl card.  It's also used with qxl in vga mode.

When using display resolutions requiring more than 16 MB of memory we
are going to overflow that buffer.  In theory the guest can write,
indirectly via spice-server.  The spice-server clears the memory after
setting a new video mode though, triggering a segfault in the overflow
case, so qemu crashes before the guest has a chance to do something
evil.

Fix that by switching to dynamic allocation for the buffer.

CVE-2014-3615

Cc: qemu-stable@nongnu.org
Cc: secalert@redhat.com
Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
vbe rework sanity checks CVE 2014 3615.patch | (download)

hw/vga.c | 155 96 + 59 - 0 !
1 file changed, 96 insertions(+), 59 deletions(-)

 [patch 1/2] vbe: rework sanity checks

Plug a bunch of holes in the bochs dispi interface parameter checking.
Add a function doing verification on all registers.  Call that
unconditionally on every register write.  That way we should catch
everything, even changing one register affecting the valid range of
another register.

Some of the holes have been added by commit
e9c6149f6ae6873f14a12eea554925b6aa4c4dec.  Before that commit the
maximum possible framebuffer (VBE_DISPI_MAX_XRES * VBE_DISPI_MAX_YRES *
32 bpp) has been smaller than the qemu vga memory (8MB) and the checking
for VBE_DISPI_MAX_XRES + VBE_DISPI_MAX_YRES + VBE_DISPI_MAX_BPP was ok.

Some of the holes have been there forever, such as
VBE_DISPI_INDEX_X_OFFSET and VBE_DISPI_INDEX_Y_OFFSET register writes
lacking any verification.

Security impact:

(1) Guest can make the ui (gtk/vnc/...) use memory ranges outside the vga
frame buffer as source  ->  host memory leak.  Memory isn't leaked to
the guest but to the vnc client though.

(2) Qemu will segfault in case the memory range happens to include
unmapped areas  ->  Guest can DoS itself.

The guest can not modify host memory, so I don't think this can be used
by the guest to escape.

CVE-2014-3615

Cc: qemu-stable@nongnu.org
Cc: secalert@redhat.com
Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
image format validation/0001 block cloop validate block_size header field CVE 2014 0144.patch | (download)

block/cloop.c | 23 23 + 0 - 0 !
1 file changed, 23 insertions(+)

 block/cloop: validate block_size header field (cve-2014-0144)

Avoid unbounded s->uncompressed_block memory allocation by checking that
the block_size header field has a reasonable value.  Also enforce the
assumption that the value is a non-zero multiple of 512.

These constraints conform to cloop 2.639's code so we accept existing
image files.

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
image format validation/0002 block cloop prevent offsets_size integer overflow CVE 2014 0143.patch | (download)

block/cloop.c | 7 7 + 0 - 0 !
1 file changed, 7 insertions(+)

 block/cloop: prevent offsets_size integer overflow (cve-2014-0143)

The following integer overflow in offsets_size can lead to out-of-bounds
memory stores when n_blocks has a huge value:

    uint32_t n_blocks, offsets_size;
    [...]
    ret = bdrv_pread(bs->file, 128 + 4, &s->n_blocks, 4);
    [...]
    s->n_blocks = be32_to_cpu(s->n_blocks);

    /* read offsets */
    offsets_size = s->n_blocks * sizeof(uint64_t);
    s->offsets = g_malloc(offsets_size);

    [...]

    for(i=0;i<s->n_blocks;i++) {
        s->offsets[i] = be64_to_cpu(s->offsets[i]);

offsets_size can be smaller than n_blocks due to integer overflow.
Therefore s->offsets[] is too small when the for loop byteswaps offsets.

This patch refuses to open files if offsets_size would overflow.

Note that changing the type of offsets_size is not a fix since 32-bit
hosts still only have 32-bit size_t.

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
image format validation/0003 block cloop refuse images with huge offsets arrays CVE 2014 0144.patch | (download)

block/cloop.c | 9 9 + 0 - 0 !
1 file changed, 9 insertions(+)

 block/cloop: refuse images with huge offsets arrays (cve-2014-0144)

Limit offsets_size to 512 MB so that:

1. g_malloc() does not abort due to an unreasonable size argument.

2. offsets_size does not overflow the bdrv_pread() int size argument.

This limit imposes a maximum image size of 16 TB at 256 KB block size.

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
image format validation/0004 block cloop refuse images with bogus offsets CVE 2014 0144.patch | (download)

block/cloop.c | 32 27 + 5 - 0 !
1 file changed, 27 insertions(+), 5 deletions(-)

 block/cloop: refuse images with bogus offsets (cve-2014-0144)

The offsets[] array allows efficient seeking and tells us the maximum
compressed data size.  If the offsets are bogus the maximum compressed
data size will be unrealistic.

This could cause g_malloc() to abort and bogus offsets mean the image is
broken anyway.  Therefore we should refuse such images.

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
image format validation/0005 block cloop fix offsets size off by one.patch | (download)

block/cloop.c | 13 6 + 7 - 0 !
1 file changed, 6 insertions(+), 7 deletions(-)

 block/cloop: fix offsets[] size off-by-one

cloop stores the number of compressed blocks in the n_blocks header
field.  The file actually contains n_blocks + 1 offsets, where the extra
offset is the end-of-file offset.

The following line in cloop_read_block() results in an out-of-bounds
offsets[] access:

    uint32_t bytes = s->offsets[block_num + 1] - s->offsets[block_num];

This patch allocates and loads the extra offset so that
cloop_read_block() works correctly when the last block is accessed.

Notice that we must free s->offsets[] unconditionally now since there is
always an end-of-file offset.

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
image format validation/0006 bochs use unsigned variables for offsets and sizes CVE 2014 0147.patch | (download)

block/bochs.c | 16 8 + 8 - 0 !
1 file changed, 8 insertions(+), 8 deletions(-)

 bochs: use unsigned variables for offsets and sizes (cve-2014-0147)

Gets us rid of integer overflows resulting in negative sizes which
aren't correctly checked.

Signed-off-by: Kevin Wolf <kwolf@redhat.com>
image format validation/0007 bochs check catalog_size header field CVE 2014 0143.patch | (download)

block/bochs.c | 12 12 + 0 - 0 !
1 file changed, 12 insertions(+)

 bochs: check catalog_size header field (cve-2014-0143)

It should neither become negative nor allow unbounded memory
allocations. This fixes aborts in g_malloc() and an s->catalog_bitmap
buffer overflow on big endian hosts.

Signed-off-by: Kevin Wolf <kwolf@redhat.com>
image format validation/0008 bochs check extent_size header field CVE 2014 0142.patch | (download)

block/bochs.c | 8 8 + 0 - 0 !
1 file changed, 8 insertions(+)

 bochs: check extent_size header field (cve-2014-0142)

This fixes two possible division by zero crashes: In bochs_open() and in
seek_to_sector().

Signed-off-by: Kevin Wolf <kwolf@redhat.com>
image format validation/0009 bochs fix bitmap offset calculation.patch | (download)

block/bochs.c | 5 3 + 2 - 0 !
1 file changed, 3 insertions(+), 2 deletions(-)

 bochs: fix bitmap offset calculation

32 bit truncation could let us access the wrong offset in the image.

Signed-off-by: Kevin Wolf <kwolf@redhat.com>
image format validation/0010 vpc vhd add bounds check for max_table_entries and block_size CVE 2014 0144.patch | (download)

block/vpc.c | 30 27 + 3 - 0 !
1 file changed, 27 insertions(+), 3 deletions(-)

 vpc/vhd: add bounds check for max_table_entries and block_size (cve-2014-0144)

This adds checks to make sure that max_table_entries and block_size
are in sane ranges.  Memory is allocated based on max_table_entries,
and block_size is used to calculate indices into that allocated
memory, so if these values are incorrect that can lead to potential
unbounded memory allocation, or invalid memory accesses.

Also, the allocation of the pagetable is changed from g_malloc0()
to qemu_blockalign().

Signed-off-by: Jeff Cody <jcody@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
image format validation/0011 vpc validate block size CVE 2014 0142.patch | (download)

block/vpc.c | 14 14 + 0 - 0 !
1 file changed, 14 insertions(+)

 vpc: validate block size (cve-2014-0142)

This fixes some cases of division by zero crashes.

Signed-off-by: Kevin Wolf <kwolf@redhat.com>
image format validation/0012 vdi add bounds checks for blocks_in_image and disk_size header fields CVE 2014 0144.patch | (download)

block/vdi.c | 34 31 + 3 - 0 !
1 file changed, 31 insertions(+), 3 deletions(-)

 vdi: add bounds checks for blocks_in_image and disk_size header fields (cve-2014-0144)

The maximum blocks_in_image is 0xffffffff / 4, which also limits the
maximum disk_size for a VDI image to 1024TB.  Note that this is the maximum
size that QEMU will currently support with this driver, not necessarily the
maximum size allowed by the image format.

This also fixes an incorrect error message, a bug introduced by commit
5b7aa9b56d1bfc79916262f380c3fc7961becb50 (Reported by Stefan Weil)
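
The 1024TB figure follows directly from the stated limit: at most
0xffffffff / 4 block-map entries, each covering a block (1 MB is the
VDI default block size, assumed here), gives just under 2^50 bytes.

```c
#include <stdint.h>

/* Check of the arithmetic behind the 1024TB limit: maximum entry count
 * times the (assumed default) 1 MB block size. */
static uint64_t vdi_max_disk_size(void)
{
    uint64_t max_blocks = 0xffffffffu / 4;   /* maximum blocks_in_image */
    uint64_t block_size = 1 << 20;           /* 1 MB default block size */
    return max_blocks * block_size;          /* = 2^50 - 2^20 bytes */
}
```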

Signed-off-by: Jeff Cody <jcody@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
(cherry picked from commit 63fa06dc978f3669dbfd9443b33cde9e2a7f4b41)

Conflicts:
	block/vdi.c

image format validation/0013 curl check data size before memcpy to local buffer CVE 2014 0144.patch | (download)

block/curl.c | 5 5 + 0 - 0 !
1 file changed, 5 insertions(+)

 curl: check data size before memcpy to local buffer. (cve-2014-0144)

curl_read_cb is the callback function invoked by libcurl when data
arrives. The size of the data passed in here is not guaranteed to be
within the range of the request we submitted, so we may overflow the
guest I/O buffer. Check the available size before the memcpy into the
buffer to avoid an overflow.
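
A minimal sketch of that clamp: copy only as much of the incoming chunk
as still fits in the request's buffer. The struct and names below are
illustrative, not the actual types in block/curl.c.

```c
#include <stddef.h>
#include <string.h>

/* Hypothetical stand-in for the per-request state in block/curl.c. */
typedef struct {
    char  *buf;      /* guest I/O buffer */
    size_t buf_len;  /* bytes the guest asked for */
    size_t pos;      /* bytes already filled */
} IOReq;

/* Clamp the copy so an oversized chunk from libcurl can never overflow
 * the guest buffer; returns the number of bytes actually copied. */
static size_t copy_clamped(IOReq *req, const char *data, size_t size)
{
    size_t space = req->buf_len - req->pos;
    size_t n = size < space ? size : space;
    memcpy(req->buf + req->pos, data, n);
    req->pos += n;
    return n;
}
```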

Signed-off-by: Fam Zheng <famz@redhat.com>
image format validation/0014 qcow2 catch some L1 table index overflows.patch | (download)

block/qcow2-cluster.c | 23 15 + 8 - 0 !
block/qcow2.c | 13 11 + 2 - 0 !
block/qcow2.h | 5 3 + 2 - 0 !
3 files changed, 29 insertions(+), 12 deletions(-)

 qcow2: catch some l1 table index overflows

This catches the situation that is described in the bug report at
https://bugs.launchpad.net/qemu/+bug/865518 and goes like this:

    $ qemu-img create -f qcow2 /tmp/huge.qcow2 $((1024*1024))T
    Formatting '/tmp/huge.qcow2', fmt=qcow2 size=1152921504606846976 encryption=off cluster_size=65536 lazy_refcounts=off
    $ qemu-io /tmp/huge.qcow2 -c "write $((1024*1024*1024*1024*1024*1024 - 1024)) 512"
    Segmentation fault

With this patch applied the segfault will be avoided, however the case
will still fail, though gracefully:

    $ qemu-img create -f qcow2 /tmp/huge.qcow2 $((1024*1024))T
    Formatting 'huge.qcow2', fmt=qcow2 size=1152921504606846976 encryption=off cluster_size=65536 lazy_refcounts=off
    qemu-img: The image size is too large for file format 'qcow2'

Note that even long before these overflow checks kick in, you get
insanely high memory usage (up to INT_MAX * sizeof(uint64_t) = 16 GB for
the L1 table), so with somewhat smaller image sizes you'll probably see
qemu aborting for a failed g_malloc().

If you need huge image sizes, you should increase the cluster size to
the maximum of 2 MB in order to get higher limits.
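
The 16 GB figure quoted above can be checked directly: with up to
INT_MAX L1 entries of 8 bytes each, the L1 table alone approaches 2^34
bytes. The sketch uses the fixed-width INT32_MAX for the INT_MAX the
message refers to.

```c
#include <stdint.h>

/* Upper bound on the in-memory L1 table size implied by an int-typed
 * entry count: INT_MAX entries * sizeof(uint64_t) bytes each. */
static uint64_t max_l1_bytes(void)
{
    return (uint64_t)INT32_MAX * sizeof(uint64_t);  /* 2^34 - 8, ~16 GB */
}
```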

Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
(cherry picked from commit 2cf7cfa1cde6672b8a35bbed3fbc989f28c05dce)
(this is needed for: qcow2: Fix new L1 table size check (CVE-2014-0143))

Conflicts:
	block/qcow2.c

image format validation/0015 qcow2 check header_length CVE 2014 0144.patch | (download)

block/qcow2.c | 33 25 + 8 - 0 !
1 file changed, 25 insertions(+), 8 deletions(-)

 qcow2: check header_length (cve-2014-0144)

This fixes an unbounded allocation for s->unknown_header_fields.

Signed-off-by: Kevin Wolf <kwolf@redhat.com>
image format validation/0016 qcow2 check backing_file_offset CVE 2014 0144.patch | (download)

block/qcow2.c | 6 6 + 0 - 0 !
1 file changed, 6 insertions(+)

 qcow2: check backing_file_offset (cve-2014-0144)

Header, header extension and the backing file name must all be stored in
the first cluster. Setting the backing file to a much higher value
allowed header extensions to become much bigger than we want them to be
(unbounded allocation).
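
A sketch of the constraint stated above: the backing file name must lie
entirely within the first cluster, which also bounds the header
extension area. The function name and parameters are illustrative.

```c
#include <stdint.h>
#include <errno.h>

/* Reject a backing file name that starts past, or extends beyond, the
 * first cluster of the image. */
static int check_backing_file(uint64_t offset, uint32_t size,
                              uint32_t cluster_size)
{
    if (offset >= cluster_size || size > cluster_size - offset) {
        return -EINVAL;   /* name would extend past the first cluster */
    }
    return 0;
}
```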

Signed-off-by: Kevin Wolf <kwolf@redhat.com>
image format validation/0017 qcow2 check refcount table size CVE 2014 0144.patch | (download)

block/qcow2-refcount.c | 4 3 + 1 - 0 !
block/qcow2.c | 9 9 + 0 - 0 !
2 files changed, 12 insertions(+), 1 deletion(-)

 qcow2: check refcount table size (cve-2014-0144)

Limit the in-memory reference count table size to 8 MB, it's enough in
practice. This fixes an unbounded allocation as well as a buffer
overflow in qcow2_refcount_init().
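
The 8 MB cap, sketched: the header's refcount_table_clusters must not
describe a table larger than the in-memory limit. The constant matches
the text; the function name is made up for illustration.

```c
#include <stdint.h>
#include <errno.h>

#define REFTABLE_MAX_BYTES (8 * 1024 * 1024)  /* cap from the patch text */

/* Bound the refcount table size as a number of clusters, so the
 * in-memory table allocation in qcow2_refcount_init() stays bounded. */
static int check_reftable_size(uint64_t refcount_table_clusters,
                               uint32_t cluster_size)
{
    if (refcount_table_clusters > REFTABLE_MAX_BYTES / cluster_size) {
        return -EFBIG;
    }
    return 0;
}
```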

Signed-off-by: Kevin Wolf <kwolf@redhat.com>
image format validation/0018 qcow2 validate refcount table offset.patch | (download)

block/qcow2.c | 33 33 + 0 - 0 !
1 file changed, 33 insertions(+)

 qcow2: validate refcount table offset

The end of the refcount table must not exceed INT64_MAX so that integer
overflows are avoided.

Also check for misaligned refcount table. Such images are invalid and
probably the result of data corruption. Error out to avoid further
corruption.
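
Both conditions described above, combined in one illustrative check (the
name and signature are assumptions): the table must be cluster-aligned,
and its end must not exceed INT64_MAX.

```c
#include <stdint.h>
#include <errno.h>

/* Reject a misaligned refcount table offset, and an offset whose end
 * (offset + size) would overflow INT64_MAX. */
static int check_reftable_offset(uint64_t offset, uint64_t size_bytes,
                                 uint32_t cluster_size)
{
    if (offset & (cluster_size - 1)) {
        return -EINVAL;                  /* misaligned: likely corruption */
    }
    if (offset > INT64_MAX - size_bytes) {
        return -EINVAL;                  /* end would exceed INT64_MAX */
    }
    return 0;
}
```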

Signed-off-by: Kevin Wolf <kwolf@redhat.com>
image format validation/0019 qcow2 validate snapshot table offset size CVE 2014 0144.patch | (download)

block/qcow2-snapshot.c | 29 4 + 25 - 0 !
block/qcow2.c | 15 15 + 0 - 0 !
block/qcow2.h | 29 28 + 1 - 0 !
3 files changed, 47 insertions(+), 26 deletions(-)

 qcow2: validate snapshot table offset/size (cve-2014-0144)

This avoids unbounded memory allocation and fixes a potential buffer
overflow on 32-bit hosts.

Signed-off-by: Kevin Wolf <kwolf@redhat.com>
image format validation/0020 qcow2 validate active L1 table offset and size CVE 2014 0144.patch | (download)

block/qcow2.c | 16 16 + 0 - 0 !
1 file changed, 16 insertions(+)

 qcow2: validate active l1 table offset and size (cve-2014-0144)

This avoids an unbounded allocation.

Signed-off-by: Kevin Wolf <kwolf@redhat.com>
image format validation/0021 qcow2 fix backing file name length check.patch | (download)

block/qcow2.c | 9 6 + 3 - 0 !
1 file changed, 6 insertions(+), 3 deletions(-)

 qcow2: fix backing file name length check

len could become negative and would then pass the check. Nothing bad
happened in practice because bdrv_pread() happens to return an error for
negative length values, but make the size variables unsigned anyway.

This patch also changes the behaviour to error out on invalid lengths
instead of silently truncating it to 1023.

Signed-off-by: Kevin Wolf <kwolf@redhat.com>
image format validation/0022 qcow2 avoid integer overflow in get_refcount CVE 2014 0143.patch | (download)

block/qcow2-refcount.c | 2 1 + 1 - 0 !
1 file changed, 1 insertion(+), 1 deletion(-)

 qcow2: avoid integer overflow in get_refcount (cve-2014-0143)

This ensures that the checks catch all invalid cluster indexes
instead of returning the refcount of a wrong cluster.

Signed-off-by: Kevin Wolf <kwolf@redhat.com>
image format validation/0023 qcow2 don t rely on free_cluster_index in alloc_refccount_block CVE 2014 0147.patch | (download)

block/qcow2-refcount.c | 75 41 + 34 - 0 !
block/qcow2.c | 11 6 + 5 - 0 !
tests/qemu-iotests/026.out | 6 3 + 3 - 0 !
3 files changed, 50 insertions(+), 42 deletions(-)

 qcow2: don't rely on free_cluster_index in alloc_refcount_block() (cve-2014-0147)

free_cluster_index is only correct if update_refcount() was called from
an allocation function, and even there it's brittle because it's used to
protect unfinished allocations which still have a refcount of 0 - if it
moves in the wrong place, the unfinished allocation can be corrupted.

So not using it any more seems to be a good idea. Instead, use the
first requested cluster to do the calculations. Return -EAGAIN if
unfinished allocations could become invalid and let the caller restart
its search for some free clusters.

The context of creating a snapshot is one situation where
update_refcount() is called outside of a cluster allocation. For this
case, the change fixes a buffer overflow if a cluster is referenced in
an L2 table that cannot be represented by an existing refcount block.
(new_table[refcount_table_index] was out of bounds)

[Bump the qemu-iotests 026 refblock_alloc.write leak count from 10 to
11.
--Stefan]

Signed-off-by: Kevin Wolf <kwolf@redhat.com>
image format validation/0024 qcow2 check new refcount table size on growth.patch | (download)

block/qcow2-refcount.c | 4 4 + 0 - 0 !
block/qcow2.c | 4 1 + 3 - 0 !
block/qcow2.h | 9 9 + 0 - 0 !
3 files changed, 14 insertions(+), 3 deletions(-)

 qcow2: check new refcount table size on growth

If the size becomes larger than what qcow2_open() would accept, fail the
growing operation.

Signed-off-by: Kevin Wolf <kwolf@redhat.com>
image format validation/0025 qcow2 preserve free_byte_offset when qcow2_alloc_bytes_fails.patch | (download)

block/qcow2-refcount.c | 7 4 + 3 - 0 !
1 file changed, 4 insertions(+), 3 deletions(-)

 qcow2: preserve free_byte_offset when qcow2_alloc_bytes() fails

When qcow2_alloc_clusters() error handling code was introduced in commit
5d757b563d59142ca81e1073a8e8396750a0ad1a, the value of free_byte_offset
was clobbered in the error case.  This patch keeps free_byte_offset at 0
so we will try to allocate clusters again next time this function is
called.

Signed-off-by: Stefan Hajnoczi <stefanha@linux.vnet.ibm.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
(cherry picked from commit 206e6d8551839008b6858cf8f500d2e644d2b561)
(needed for the next commit, qcow2: Fix types in qcow2_alloc_clusters and alloc_clusters_noref)

image format validation/0026 qcow2 fix types in qcow2_alloc_clusters and alloc_clusters_noref.patch | (download)

block/qcow2-refcount.c | 11 6 + 5 - 0 !
block/qcow2.h | 6 3 + 3 - 0 !
2 files changed, 9 insertions(+), 8 deletions(-)

 qcow2: fix types in qcow2_alloc_clusters and alloc_clusters_noref

In order to avoid integer overflows.

Signed-off-by: Kevin Wolf <kwolf@redhat.com>
image format validation/0027 qcow2 fix new L1 table size check CVE 2014 0143.patch | (download)

block/qcow2-cluster.c | 2 1 + 1 - 0 !
1 file changed, 1 insertion(+), 1 deletion(-)

 qcow2: fix new l1 table size check (cve-2014-0143)

The size in bytes is assigned to an int later, so check that instead of
the number of entries.
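
The point, sketched with hypothetical names: since the byte size
(entries * 8) is what later lands in an int, that is the quantity to
bound; expressing the check as a bound on the entry count also avoids
overflowing the multiplication itself.

```c
#include <stdint.h>
#include <limits.h>
#include <errno.h>

/* Bound the entry count so that entries * sizeof(uint64_t) fits the
 * int it is later stored in, without the multiply itself overflowing. */
static int check_l1_size(uint64_t new_l1_entries)
{
    if (new_l1_entries > INT_MAX / sizeof(uint64_t)) {
        return -EFBIG;
    }
    return 0;
}
```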

Signed-off-by: Kevin Wolf <kwolf@redhat.com>