Subject: Re: mm: deadlock between get_online_cpus/pcpu_alloc
To: Dmitry Vyukov, Tejun Heo, Christoph Lameter, "linux-mm@kvack.org",
 LKML, Thomas Gleixner, Ingo Molnar, Peter Zijlstra
Cc: syzkaller, Mel Gorman, Michal Hocko, Andrew Morton
From: Vlastimil Babka
Date: Sun, 29 Jan 2017 18:22:48 +0100

On 29.1.2017 13:44, Dmitry Vyukov wrote:
> Hello,
>
> I've got the following deadlock report while running syzkaller fuzzer
> on f37208bc3c9c2f811460ef264909dfbc7f605a60:
>
> [ INFO: possible circular locking dependency detected ]
> 4.10.0-rc5-next-20170125 #1 Not tainted
> -------------------------------------------------------
> syz-executor3/14255 is trying to acquire lock:
>  (cpu_hotplug.dep_map){++++++}, at: []
> get_online_cpus+0x37/0x90 kernel/cpu.c:239
>
> but task is already holding lock:
>  (pcpu_alloc_mutex){+.+.+.}, at: []
> pcpu_alloc+0xbfe/0x1290 mm/percpu.c:897
>
> which lock already depends on the new lock.

I suspect the dependency comes from the recent changes to
drain_all_pages(). They were later redone (for other reasons, but it's
nice to have another validation) in the mmots patch [1], which AFAICS is
not yet in mmotm and thus not in linux-next. Could you try whether it
helps?
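To make the inversion concrete, here is a minimal userspace model of the
two paths from the chains below (pthread mutexes standing in for the
kernel locks; the function bodies are illustrative stand-ins, not kernel
code):

/*
 * Thread A models pcpu_alloc(): it holds pcpu_alloc_mutex while the
 * page allocator slow path does drain_all_pages() -> get_online_cpus().
 * Thread B models _cpu_up(): it holds the hotplug lock while
 * smpcfd_prepare_cpu() -> __alloc_percpu() takes pcpu_alloc_mutex.
 *
 * (Simplification: get_online_cpus() is really a refcounted read side,
 * not a plain mutex, but the lock-order dependency is the same.)
 */
#include <pthread.h>

static pthread_mutex_t pcpu_alloc_mutex = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t cpu_hotplug_lock = PTHREAD_MUTEX_INITIALIZER;

static void *pcpu_alloc_path(void *arg)
{
	for (;;) {
		pthread_mutex_lock(&pcpu_alloc_mutex);	/* pcpu_alloc() */
		/* drain_all_pages() -> get_online_cpus() */
		pthread_mutex_lock(&cpu_hotplug_lock);
		pthread_mutex_unlock(&cpu_hotplug_lock);
		pthread_mutex_unlock(&pcpu_alloc_mutex);
	}
	return NULL;
}

static void *cpu_up_path(void *arg)
{
	for (;;) {
		pthread_mutex_lock(&cpu_hotplug_lock);	/* cpu_hotplug_begin() */
		/* smpcfd_prepare_cpu() -> __alloc_percpu() */
		pthread_mutex_lock(&pcpu_alloc_mutex);
		pthread_mutex_unlock(&pcpu_alloc_mutex);
		pthread_mutex_unlock(&cpu_hotplug_lock);
	}
	return NULL;
}

int main(void)
{
	pthread_t a, b;

	pthread_create(&a, NULL, pcpu_alloc_path, NULL);
	pthread_create(&b, NULL, cpu_up_path, NULL);
	pthread_join(a, NULL);	/* never returns once the AB-BA deadlock hits */
	return 0;
}

Built with gcc -pthread, the tight loops typically wedge within seconds,
which is exactly the scenario lockdep warns about below.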
Vlastimil

[1] http://ozlabs.org/~akpm/mmots/broken-out/mm-page_alloc-use-static-global-work_struct-for-draining-per-cpu-pages.patch
    (a rough sketch of its idea is appended below, after the quoted report)

>
> the existing dependency chain (in reverse order) is:
>
> -> #2 (pcpu_alloc_mutex){+.+.+.}:
>  [] validate_chain kernel/locking/lockdep.c:2265 [inline]
>  [] __lock_acquire+0x2149/0x3430 kernel/locking/lockdep.c:3338
>  [] lock_acquire+0x2a1/0x630 kernel/locking/lockdep.c:3753
>  [] __mutex_lock_common kernel/locking/mutex.c:757 [inline]
>  [] __mutex_lock+0x382/0x25c0 kernel/locking/mutex.c:894
>  [] mutex_lock_nested+0x16/0x20 kernel/locking/mutex.c:909
>  [] pcpu_alloc+0xbfe/0x1290 mm/percpu.c:897
>  [] __alloc_percpu+0x24/0x30 mm/percpu.c:1076
>  [] smpcfd_prepare_cpu+0x73/0xd0 kernel/smp.c:47
>  [] cpuhp_invoke_callback+0x256/0x1480 kernel/cpu.c:136
>  [] cpuhp_up_callbacks+0x81/0x2a0 kernel/cpu.c:425
>  [] _cpu_up+0x1e3/0x2a0 kernel/cpu.c:940
>  [] do_cpu_up+0x73/0xa0 kernel/cpu.c:970
>  [] cpu_up+0x18/0x20 kernel/cpu.c:978
>  [] smp_init+0x148/0x160 kernel/smp.c:565
>  [] kernel_init_freeable+0x43e/0x695 init/main.c:1026
>  [] kernel_init+0x13/0x180 init/main.c:955
>  [] ret_from_fork+0x31/0x40 arch/x86/entry/entry_64.S:430
>
> -> #1 (cpu_hotplug.lock){+.+.+.}:
>  [] validate_chain kernel/locking/lockdep.c:2265 [inline]
>  [] __lock_acquire+0x2149/0x3430 kernel/locking/lockdep.c:3338
>  [] lock_acquire+0x2a1/0x630 kernel/locking/lockdep.c:3753
>  [] __mutex_lock_common kernel/locking/mutex.c:757 [inline]
>  [] __mutex_lock+0x382/0x25c0 kernel/locking/mutex.c:894
>  [] mutex_lock_nested+0x16/0x20 kernel/locking/mutex.c:909
>  [] cpu_hotplug_begin+0x206/0x2e0 kernel/cpu.c:297
>  [] _cpu_up+0xca/0x2a0 kernel/cpu.c:894
>  [] do_cpu_up+0x73/0xa0 kernel/cpu.c:970
>  [] cpu_up+0x18/0x20 kernel/cpu.c:978
>  [] smp_init+0x148/0x160 kernel/smp.c:565
>  [] kernel_init_freeable+0x43e/0x695 init/main.c:1026
>  [] kernel_init+0x13/0x180 init/main.c:955
>  [] ret_from_fork+0x31/0x40 arch/x86/entry/entry_64.S:430
>
> -> #0 (cpu_hotplug.dep_map){++++++}:
>  [] check_prev_add kernel/locking/lockdep.c:1828 [inline]
>  [] check_prevs_add+0xa8f/0x19f0 kernel/locking/lockdep.c:1938
>  [] validate_chain kernel/locking/lockdep.c:2265 [inline]
>  [] __lock_acquire+0x2149/0x3430 kernel/locking/lockdep.c:3338
>  [] lock_acquire+0x2a1/0x630 kernel/locking/lockdep.c:3753
>  [] get_online_cpus+0x62/0x90 kernel/cpu.c:241
>  [] drain_all_pages.part.98+0x8c/0x8f0 mm/page_alloc.c:2371
>  [] drain_all_pages mm/page_alloc.c:2364 [inline]
>  [] __alloc_pages_direct_reclaim mm/page_alloc.c:3435 [inline]
>  [] __alloc_pages_slowpath+0x966/0x23d0 mm/page_alloc.c:3773
>  [] __alloc_pages_nodemask+0x8f5/0xc60 mm/page_alloc.c:3975
>  [] __alloc_pages include/linux/gfp.h:426 [inline]
>  [] __alloc_pages_node include/linux/gfp.h:439 [inline]
>  [] alloc_pages_node include/linux/gfp.h:453 [inline]
>  [] pcpu_alloc_pages mm/percpu-vm.c:93 [inline]
>  [] pcpu_populate_chunk+0x1e1/0x900 mm/percpu-vm.c:282
>  [] pcpu_alloc+0xe15/0x1290 mm/percpu.c:999
>  [] __alloc_percpu_gfp+0x27/0x30 mm/percpu.c:1063
>  [] bpf_array_alloc_percpu kernel/bpf/arraymap.c:33 [inline]
>  [] array_map_alloc+0x543/0x700 kernel/bpf/arraymap.c:94
>  [] find_and_alloc_map kernel/bpf/syscall.c:37 [inline]
>  [] map_create kernel/bpf/syscall.c:228 [inline]
>  [] SYSC_bpf kernel/bpf/syscall.c:1040 [inline]
>  [] SyS_bpf+0x108d/0x27c0 kernel/bpf/syscall.c:997
>  [] entry_SYSCALL_64_fastpath+0x1f/0xc2
>
> other info that might help us debug this:
>
> Chain exists of:
>   cpu_hotplug.dep_map --> cpu_hotplug.lock --> pcpu_alloc_mutex
>
>  Possible unsafe locking scenario:
>
>        CPU0                    CPU1
>        ----                    ----
>   lock(pcpu_alloc_mutex);
>                                lock(cpu_hotplug.lock);
>                                lock(pcpu_alloc_mutex);
>   lock(cpu_hotplug.dep_map);
>
>  *** DEADLOCK ***
>
> 1 lock held by syz-executor3/14255:
>  #0:  (pcpu_alloc_mutex){+.+.+.}, at: []
> pcpu_alloc+0xbfe/0x1290 mm/percpu.c:897
>
> stack backtrace:
> CPU: 1 PID: 14255 Comm: syz-executor3 Not tainted 4.10.0-rc5-next-20170125 #1
> Hardware name: Google Google Compute Engine/Google Compute Engine,
> BIOS Google 01/01/2011
> Call Trace:
>  __dump_stack lib/dump_stack.c:15 [inline]
>  dump_stack+0x2ee/0x3ef lib/dump_stack.c:51
>  print_circular_bug+0x307/0x3b0 kernel/locking/lockdep.c:1202
>  check_prev_add kernel/locking/lockdep.c:1828 [inline]
>  check_prevs_add+0xa8f/0x19f0 kernel/locking/lockdep.c:1938
>  validate_chain kernel/locking/lockdep.c:2265 [inline]
>  __lock_acquire+0x2149/0x3430 kernel/locking/lockdep.c:3338
>  lock_acquire+0x2a1/0x630 kernel/locking/lockdep.c:3753
>  get_online_cpus+0x62/0x90 kernel/cpu.c:241
>  drain_all_pages.part.98+0x8c/0x8f0 mm/page_alloc.c:2371
>  drain_all_pages mm/page_alloc.c:2364 [inline]
>  __alloc_pages_direct_reclaim mm/page_alloc.c:3435 [inline]
>  __alloc_pages_slowpath+0x966/0x23d0 mm/page_alloc.c:3773
>  __alloc_pages_nodemask+0x8f5/0xc60 mm/page_alloc.c:3975
>  __alloc_pages include/linux/gfp.h:426 [inline]
>  __alloc_pages_node include/linux/gfp.h:439 [inline]
>  alloc_pages_node include/linux/gfp.h:453 [inline]
>  pcpu_alloc_pages mm/percpu-vm.c:93 [inline]
>  pcpu_populate_chunk+0x1e1/0x900 mm/percpu-vm.c:282
>  pcpu_alloc+0xe15/0x1290 mm/percpu.c:999
>  __alloc_percpu_gfp+0x27/0x30 mm/percpu.c:1063
>  bpf_array_alloc_percpu kernel/bpf/arraymap.c:33 [inline]
>  array_map_alloc+0x543/0x700 kernel/bpf/arraymap.c:94
>  find_and_alloc_map kernel/bpf/syscall.c:37 [inline]
>  map_create kernel/bpf/syscall.c:228 [inline]
>  SYSC_bpf kernel/bpf/syscall.c:1040 [inline]
>  SyS_bpf+0x108d/0x27c0 kernel/bpf/syscall.c:997
>  entry_SYSCALL_64_fastpath+0x1f/0xc2
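As promised above, here is a rough sketch of the idea behind [1], as far
as it can be reconstructed from the patch title and the chains quoted
above. It is not the literal diff; the identifiers pcpu_drain,
pcpu_drain_mutex and drain_local_pages_wq are assumed here for
illustration:

#include <linux/cpu.h>
#include <linux/mutex.h>
#include <linux/percpu-defs.h>
#include <linux/workqueue.h>

/*
 * Statically allocated drain machinery: drain_all_pages() no longer
 * allocates per-cpu work structs and no longer takes get_online_cpus(),
 * which removes the pcpu_alloc_mutex -> cpu_hotplug dependency (#0).
 */
static DEFINE_MUTEX(pcpu_drain_mutex);
static DEFINE_PER_CPU(struct work_struct, pcpu_drain);

void drain_all_pages(struct zone *zone)
{
	int cpu;

	/* Serialize drainers instead of excluding CPU hotplug. */
	mutex_lock(&pcpu_drain_mutex);

	/*
	 * Simplified: the real code would only queue work on CPUs that
	 * actually have per-cpu pages for the target zone.
	 */
	for_each_online_cpu(cpu) {
		struct work_struct *work = per_cpu_ptr(&pcpu_drain, cpu);

		INIT_WORK(work, drain_local_pages_wq);
		schedule_work_on(cpu, work);
	}
	for_each_online_cpu(cpu)
		flush_work(per_cpu_ptr(&pcpu_drain, cpu));

	mutex_unlock(&pcpu_drain_mutex);
}

With the work structs static, nothing in the drain path can recurse into
the allocator or depend on the hotplug lock, so the #0 edge of the
reported cycle should disappear.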