From: Mel Gorman <mgorman@techsingularity.net>
To: Michal Hocko <mhocko@kernel.org>
Cc: Vlastimil Babka <vbabka@suse.cz>, Dmitry Vyukov <dvyukov@google.com>,
	Tejun Heo <tj@kernel.org>, Christoph Lameter <cl@linux.com>,
	"linux-mm@kvack.org" <linux-mm@kvack.org>,
	LKML <linux-kernel@vger.kernel.org>,
	Thomas Gleixner <tglx@linutronix.de>, Ingo Molnar <mingo@kernel.org>,
	Peter Zijlstra <peterz@infradead.org>,
	syzkaller <syzkaller@googlegroups.com>,
	Andrew Morton <akpm@linux-foundation.org>
Subject: Re: mm: deadlock between get_online_cpus/pcpu_alloc
Date: Tue, 7 Feb 2017 13:58:46 +0000
Message-ID: <20170207135846.usfrn7e4znjhmogn@techsingularity.net>
In-Reply-To: <20170207123708.GO5065@dhcp22.suse.cz>

On Tue, Feb 07, 2017 at 01:37:08PM +0100, Michal Hocko wrote:
> > You cannot put sleepable lock inside the preempt disbaled section...
> 
> We can make it a spinlock right?
> 
> Scratch that! For some reason I thought that cpu notifiers are run in an
> atomic context. Now that I am checking the code again it turns out I was
> wrong. __cpu_notify uses __raw_notifier_call_chain so this is not an
> atomic context.

Indeed.

> Anyway, shouldn't be it sufficient to disable preemption
> on drain_local_pages_wq?

That would be sufficient for a hot-removed CPU moving the drain request
to another CPU and avoiding any scheduling events.

> The CPU hotplug callback will not preempt us
> and so we cannot work on the same cpus, right?

I don't see a specific guarantee that it cannot be preempted, and it
would depend on the exact cpu hotplug implementation, which is subject
to quite a lot of change. Hence, the mutex provides a guarantee that the
hot-removed CPU teardown cannot run at the same time as a workqueue
drain running on a CPU it was not originally scheduled for.

-- 
Mel Gorman
SUSE Labs