From: Michal Hocko <mhocko@kernel.org>
To: Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp>, peterz@infradead.org
Cc: hch@lst.de, mgorman@suse.de, viro@ZenIV.linux.org.uk, linux-mm@kvack.org, hannes@cmpxchg.org, linux-kernel@vger.kernel.org
Subject: Re: [RFC PATCH 1/2] mm, vmscan: account the number of isolated pages per zone
Date: Mon, 6 Feb 2017 11:39:19 +0100
Message-ID: <20170206103918.GD3097@dhcp22.suse.cz>
In-Reply-To: <201702051943.CFB35412.OOSJVtLFOFQHMF@I-love.SAKURA.ne.jp>

On Sun 05-02-17 19:43:07, Tetsuo Handa wrote:
> Michal Hocko wrote:
> I got the same warning with ext4. Maybe we need to check carefully.
>
> [ 511.215743] =====================================================
> [ 511.218003] WARNING: RECLAIM_FS-safe -> RECLAIM_FS-unsafe lock order detected
> [ 511.220031] 4.10.0-rc6-next-20170202+ #500 Not tainted
> [ 511.221689] -----------------------------------------------------
> [ 511.223579] a.out/49302 [HC0[0]:SC0[0]:HE1:SE1] is trying to acquire:
> [ 511.225533]  (cpu_hotplug.dep_map){++++++}, at: [<ffffffff810a1477>] get_online_cpus+0x37/0x80
> [ 511.227795]
> [ 511.227795] and this task is already holding:
> [ 511.230082]  (jbd2_handle){++++-.}, at: [<ffffffff813a8be7>] start_this_handle+0x1a7/0x590
> [ 511.232592] which would create a new lock dependency:
> [ 511.234192]  (jbd2_handle){++++-.} -> (cpu_hotplug.dep_map){++++++}
> [ 511.235966]
> [ 511.235966] but this new dependency connects a RECLAIM_FS-irq-safe lock:
> [ 511.238563]  (jbd2_handle){++++-.}
> [ 511.238564]
> [ 511.238564] ... which became RECLAIM_FS-irq-safe at:
> [ 511.242078]
> [ 511.242084]   [<ffffffff811089db>] __lock_acquire+0x34b/0x1640
> [ 511.244495]   [<ffffffff8110a119>] lock_acquire+0xc9/0x250
> [ 511.246697]   [<ffffffff813b3525>] jbd2_log_wait_commit+0x55/0x1d0
[...]
> [ 511.276216] to a RECLAIM_FS-irq-unsafe lock:
> [ 511.278128]  (cpu_hotplug.dep_map){++++++}
> [ 511.278130]
> [ 511.278130] ... which became RECLAIM_FS-irq-unsafe at:
> [ 511.281809] ...
> [ 511.281811]
> [ 511.282598]   [<ffffffff81108141>] mark_held_locks+0x71/0x90
> [ 511.284854]   [<ffffffff8110ab6f>] lockdep_trace_alloc+0x6f/0xd0
> [ 511.287218]   [<ffffffff812744c8>] kmem_cache_alloc_node_trace+0x48/0x3b0
> [ 511.289755]   [<ffffffff810cfa65>] __smpboot_create_thread.part.2+0x35/0xf0
> [ 511.292329]   [<ffffffff810d0026>] smpboot_create_threads+0x66/0x90
[...]
> [ 511.317867] other info that might help us debug this:
> [ 511.317867]
> [ 511.320920]  Possible interrupt unsafe locking scenario:
> [ 511.320920]
> [ 511.323218]        CPU0                    CPU1
> [ 511.324622]        ----                    ----
> [ 511.325973]   lock(cpu_hotplug.dep_map);
> [ 511.327246]                                local_irq_disable();
> [ 511.328870]                                lock(jbd2_handle);
> [ 511.330483]                                lock(cpu_hotplug.dep_map);
> [ 511.332259]   <Interrupt>
> [ 511.333187]     lock(jbd2_handle);

Peter, is there any way to tell lockdep that this is in fact reclaim safe? Direct reclaim only does a trylock and backs off, so we cannot deadlock here. Or am I misinterpreting the trace?
--
Michal Hocko
SUSE Labs