From: Peter Zijlstra <peterz@infradead.org>
To: Qian Cai <cai@lca.pw>
Cc: mingo@redhat.com, juri.lelli@redhat.com,
	vincent.guittot@linaro.org, dietmar.eggemann@arm.com,
	rostedt@goodmis.org, bsegall@google.com, mgorman@suse.de,
	paulmck@kernel.org, tglx@linutronix.de, linux-mm@kvack.org,
	linux-kernel@vger.kernel.org
Subject: Re: [PATCH v2] sched/core: fix illegal RCU from offline CPUs
Date: Mon, 20 Jan 2020 11:16:52 +0100	[thread overview]
Message-ID: <20200120101652.GM14879@hirez.programming.kicks-ass.net> (raw)
In-Reply-To: <20200113190331.12788-1-cai@lca.pw>

On Mon, Jan 13, 2020 at 02:03:31PM -0500, Qian Cai wrote:
> The CPU-offline path calls mmdrop() after idle entry and the
> subsequent call to cpuhp_report_idle_dead(). Once execution passes the
> call to rcu_report_dead(), RCU is ignoring the CPU, which results in
> lockdep complaints when mmdrop() uses RCU from either memcg or
> debugobjects, so fix it by scheduling mmdrop() on another online CPU.
> 
> According to commit a79e53d85683 ("x86/mm: Fix pgd_lock deadlock"),
> mmdrop() is not interrupt-safe, and if passed directly to
> smp_call_function_single() it would end up running from the IPI
> interrupt handler; hence mmdrop_async() is used instead.
> 
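For reference, the ordering being described is roughly the offline branch
of do_idle() in kernel/sched/idle.c (lightly annotated sketch; the exact
call chain is architecture- and version-dependent):

	if (cpu_is_offline(cpu)) {
		tick_nohz_idle_stop_tick();
		cpuhp_report_idle_dead();	/* -> rcu_report_dead(): RCU stops
						 * watching this CPU */
		arch_cpu_idle_dead();		/* arch path eventually reaches
						 * idle_task_exit() -> mmdrop(), which
						 * may use RCU via memcg/debugobjects,
						 * hence the lockdep complaint */
	}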

<deletes ~100 lines of gunk>

Surely the critical information contained in these nearly 100 lines of
splat can be more concisely represented?


> diff --git a/kernel/sched/core.c b/kernel/sched/core.c
> index 90e4b00ace89..1863a6fc4d82 100644
> --- a/kernel/sched/core.c
> +++ b/kernel/sched/core.c
> @@ -6194,7 +6194,8 @@ void idle_task_exit(void)
>  		current->active_mm = &init_mm;
>  		finish_arch_post_lock_switch();
>  	}
> -	mmdrop(mm);
> +	smp_call_function_single(cpumask_first(cpu_online_mask),
> +				 (void (*)(void *))mmdrop_async, mm, 0);
>  }
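
(For context: mmdrop_async() in kernel/fork.c roughly defers the real
__mmdrop() to a workqueue, which is why it -- unlike plain mmdrop() -- is
usable from the IPI handler that smp_call_function_single() would invoke
it in. Approximate shape, not a verbatim copy:)

	static void mmdrop_async_fn(struct work_struct *work)
	{
		struct mm_struct *mm;

		mm = container_of(work, struct mm_struct, async_put_work);
		__mmdrop(mm);
	}

	static void mmdrop_async(struct mm_struct *mm)
	{
		if (unlikely(atomic_dec_and_test(&mm->mm_count))) {
			INIT_WORK(&mm->async_put_work, mmdrop_async_fn);
			schedule_work(&mm->async_put_work);
		}
	}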

Bah.. that's horrible. Surely we can find a better place to do this in
the whole hotplug machinery.

Perhaps you can have takedown_cpu() do the mmdrop()?
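
A hypothetical sketch of that idea (illustrative names, not an actual
patch): takedown_cpu() runs on the hotplug control CPU, which is online
and in process context, so it could clean up the dead CPU's lazy mm with
a plain mmdrop() and skip the IPI/mmdrop_async() dance entirely:

	/* Sketch only: drop the dead CPU's lazy mm from the control CPU,
	 * where RCU is still watching.  Would be called from (or near)
	 * takedown_cpu() once the CPU is confirmed dead. */
	static void drop_dead_cpu_mm(unsigned int cpu)
	{
		struct task_struct *idle = idle_thread_get(cpu);
		struct mm_struct *mm = idle->active_mm;

		/* idle_task_exit() already switched the dying CPU to init_mm;
		 * release the reference it would otherwise have dropped. */
		if (mm != &init_mm)
			idle->active_mm = &init_mm;
		mmdrop(mm);
	}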


Thread overview: 16+ messages
2020-01-13 19:03 [PATCH v2] sched/core: fix illegal RCU from offline CPUs Qian Cai
2020-01-20 10:16 ` Peter Zijlstra [this message]
2020-01-20 20:35   ` Qian Cai
2020-01-21 10:35     ` Peter Zijlstra
2020-01-24  4:21       ` Qian Cai
2020-01-24  5:02         ` Matthew Wilcox
2020-03-30  2:42       ` Qian Cai
2020-04-01 21:05         ` Qian Cai
2020-04-01 21:40 Qian Cai
2020-04-02 11:24 ` Michael Ellerman
2020-04-02 14:00   ` Qian Cai
2020-04-02 15:54     ` Paul E. McKenney
2020-04-02 16:19       ` Qian Cai
2020-04-02 16:57         ` Paul E. McKenney
2020-04-17 13:26   ` Qian Cai
2020-04-21 13:56     ` Peter Zijlstra
