Subject: Re: kvm: use-after-free in complete_emulated_mmio
From: Paolo Bonzini
To: Wanpeng Li, Dmitry Vyukov
Cc: Radim Krčmář, KVM list, LKML, Steve Rutherford, syzkaller
Date: Fri, 6 Jan 2017 14:37:38 +0100
Message-ID: <87fcd513-ab71-332c-f78c-a7b77a708764@redhat.com>

On 06/01/2017 10:59, Wanpeng Li wrote:
> 2016-12-27 21:57 GMT+08:00 Dmitry Vyukov:
>> Hello,
>>
>> The following program triggers a use-after-free in complete_emulated_mmio:
>> https://gist.githubusercontent.com/dvyukov/79c7ee10f568b0d5c33788534bb6edc9/raw/2c2d4ce0fe86398ed81e65281e8c215c7c3632fb/gistfile1.txt
>>
>> BUG: KASAN: use-after-free in complete_emulated_mmio+0x8dd/0xb70
>> arch/x86/kvm/x86.c:7052 at addr ffff880069f1ed48
>> Read of size 8 by task syz-executor/31542
>> CPU: 3 PID: 31542 Comm: syz-executor Not tainted 4.9.0+ #105
>> Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS Bochs 01/01/2011
>> Call Trace:
>>  check_memory_region+0x139/0x190 mm/kasan/kasan.c:322
>>  memcpy+0x23/0x50 mm/kasan/kasan.c:357
>>  complete_emulated_mmio+0x8dd/0xb70 arch/x86/kvm/x86.c:7052
>>  kvm_arch_vcpu_ioctl_run+0x308d/0x45f0 arch/x86/kvm/x86.c:7090
>>  kvm_vcpu_ioctl+0x673/0x1120 arch/x86/kvm/../../../virt/kvm/kvm_main.c:2569
>>  vfs_ioctl fs/ioctl.c:43 [inline]
>>  do_vfs_ioctl+0x1bf/0x1780 fs/ioctl.c:683
>>  SYSC_ioctl fs/ioctl.c:698 [inline]
>>  SyS_ioctl+0x8f/0xc0 fs/ioctl.c:689
>>  entry_SYSCALL_64_fastpath+0x1f/0xc2
>> RIP: 0033:0x4421e9
>> RSP: 002b:00007f320dc67b58 EFLAGS: 00000286 ORIG_RAX: 0000000000000010
>> RAX: ffffffffffffffda RBX: 0000000000000018 RCX: 00000000004421e9
>> RDX: 0000000000000000 RSI: 000000000000ae80 RDI: 0000000000000018
>> RBP: 00000000006dbb20 R08: 0000000000000000 R09: 0000000000000000
>> R10: 0000000000000000 R11: 0000000000000286 R12: 0000000000700000
>> R13: 00007f320de671c8 R14: 00007f320de69000 R15: 0000000000000000
>> Object at ffff880069f183c0, in cache kmalloc-16384 size: 16384
>> Allocated:
>> PID = 31567
>> [] save_stack_trace+0x16/0x20 arch/x86/kernel/stacktrace.c:57
>> [] save_stack+0x43/0xd0 mm/kasan/kasan.c:502
>> [] set_track mm/kasan/kasan.c:514 [inline]
>> [] kasan_kmalloc+0xaa/0xd0 mm/kasan/kasan.c:605
>> [] kmem_cache_alloc_trace+0xec/0x640 mm/slab.c:3629
>> [] kvm_arch_alloc_vm include/linux/slab.h:490 [inline]
>> [] kvm_create_vm arch/x86/kvm/../../../virt/kvm/kvm_main.c:613 [inline]
>> [] kvm_dev_ioctl_create_vm arch/x86/kvm/../../../virt/kvm/kvm_main.c:3174 [inline]
>> [] kvm_dev_ioctl+0x1be/0x11b0 arch/x86/kvm/../../../virt/kvm/kvm_main.c:3218
>> [] vfs_ioctl fs/ioctl.c:43 [inline]
>> [] do_vfs_ioctl+0x1bf/0x1780 fs/ioctl.c:683
>> [] SYSC_ioctl fs/ioctl.c:698 [inline]
>> [] SyS_ioctl+0x8f/0xc0 fs/ioctl.c:689
>> [] entry_SYSCALL_64_fastpath+0x1f/0xc2
>> Memory state around the buggy address:
>>  ffff880069f1ec00: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
>>  ffff880069f1ec80: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
>> >ffff880069f1ed00: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
>>                                               ^
>>  ffff880069f1ed80: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
>>  ffff880069f1ee00: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
>> ==================================================================
>>
>> On commit e93b1cc8a8965da137ffea0b88e5f62fa1d2a9e6 (Dec 19).
>>
>>
>> I've also printed some values when the bug happens:
>>
>> pr_err("vcpu=%p, mmio_fragments=%p frag=%p frag=%d/%d len=%d gpa=%p write=%d\n",
>>        vcpu, vcpu->mmio_fragments, frag, vcpu->mmio_cur_fragment,
>>        vcpu->mmio_nr_fragments, frag->len, (void *)frag->gpa,
>>        vcpu->mmio_is_write);
>>
>> [   26.765898] vcpu=ffff880068590100, mmio_fragments=ffff880068590338
>> frag=ffff880068590338 frag=0/1 len=152 gpa=0000000000001008 write=1
>
>
> test-2892 [006] .... 118.284172: complete_emulated_mmio: vcpu =
> ffff9beefb288000, mmio_fragments = ffff9beefb2881b0, frag =
> ffff9beefb2881b0, frag = 0/1, len = 160, gpa = 0000000000001000, write = 1
> test-2897 [003] .... 118.284196: complete_emulated_mmio: vcpu =
> ffff9beef69a0000, mmio_fragments = ffff9beef69a01b0, frag =
> ffff9beef69a01b0, frag = 0/1, len = 160, gpa = 0000000000001000, write = 1
>
> Actually the MMIO will be split into 8-byte pieces and returned to
> QEMU (if it is not emulated by KVM) to be emulated one by one.
> However, we can observe that there is no subsequent handling of the
> remaining pieces; I guess the VM is destroyed almost immediately in
> the testcase, right?

Hi Wanpeng,

yeah, there are only two KVM_RUN calls.  This bug was discussed off
list and we'll send a patch soon.

Paolo