regressions.lists.linux.dev archive mirror
From: Peter Zijlstra <peterz@infradead.org>
To: Patrick Schaaf <bof@bof.de>
Cc: stable@vger.kernel.org, regressions@lists.linux.dev,
	anubis@igorloncarevic.com
Subject: Re: stable 5.4.135 REGRESSION / once-a-day WARNING: at kthread_is_per_cpu+0x1c/0x20
Date: Tue, 31 Aug 2021 09:00:52 +0200	[thread overview]
Message-ID: <YS3TpF8B5TA2mGFr@hirez.programming.kicks-ass.net> (raw)
In-Reply-To: <CAJ26g5S2tbDhRbWkcgRzAu0eX=FNk00P8srboOSn=jYp4saALA@mail.gmail.com>

On Tue, Aug 31, 2021 at 06:46:40AM +0200, Patrick Schaaf wrote:
> Looking into this again.
> 
> Unfortunately, I couldn't see how to bisect this, because the issue
> appears with the 5.4.118 commit implicated by the call stack and is
> obviously gone with that commit removed. I did test that: 5.4.135 with
> b56ad4febe67b8c0647c0a3e427e935a76dedb59 reverted runs smoothly for
> me, while the original 5.4.135 with that 5.4.118 commit included, now
> on a dozen machines, throws the WARNING.
> 
> I got email on the side from someone (Igor, on Cc) who sees the same
> on Dell servers with a newer 5.10 kernel, running IPVS; for him there
> is actual operational impact.
> 
> I just had a look at Linus' mainline tree and see there is this
> follow-up fix from Peter Zijlstra:
> https://github.com/torvalds/linux/commit/3a7956e25e1d7b3c148569e78895e1f3178122a9
> I'm much too incompetent to try and backport it myself, as it looks
> more involved, but I imagine such a backport would be needed to fix
> the WARNING (or Igor's IPVS breakage) we see.

3a7956e25e1d ("kthread: Fix PF_KTHREAD vs to_kthread() race") munged
into 5.4.135

Never even seen a compiler, please test.
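
The crux of the backport is the new __to_kthread() helper below: it only
treats ->set_child_tid as a struct kthread pointer while PF_KTHREAD is
still set, since a task created with kernel_thread() can exec() and stop
being a kthread. A minimal userspace sketch of that guard, using stand-in
names and types (task_stub, kthread_stub, to_kthread_checked) rather than
the real kernel structures, looks roughly like this:

/*
 * Illustrative sketch only: stand-in names and types, not the kernel's
 * real struct task_struct / struct kthread.
 */
#include <stdio.h>

#define PF_KTHREAD 0x00200000    /* "I am a kernel thread" flag bit */

struct kthread_stub {
        unsigned long flags;
};

struct task_stub {
        unsigned int flags;      /* loses PF_KTHREAD if the task exec()s */
        void *set_child_tid;     /* kthreads reuse this to point at struct kthread */
};

/*
 * Same guard as the patch's __to_kthread(): only trust ->set_child_tid
 * as a struct kthread pointer while PF_KTHREAD confirms the task still
 * is a kthread.
 */
static struct kthread_stub *to_kthread_checked(struct task_stub *p)
{
        void *kthread = p->set_child_tid;

        if (kthread && !(p->flags & PF_KTHREAD))
                kthread = NULL;
        return kthread;
}

int main(void)
{
        struct kthread_stub kt = { .flags = 0 };
        struct task_stub still_kthread = { .flags = PF_KTHREAD, .set_child_tid = &kt };
        struct task_stub after_exec    = { .flags = 0,          .set_child_tid = &kt };

        /* valid pointer for the real kthread, NULL once PF_KTHREAD is gone */
        printf("kthread:    %p\n", (void *)to_kthread_checked(&still_kthread));
        printf("after exec: %p\n", (void *)to_kthread_checked(&after_exec));
        return 0;
}

With that in place, callers such as can_migrate_task() no longer need the
separate (p->flags & PF_KTHREAD) pre-check, as the diff below shows.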

---
 kernel/kthread.c    | 43 +++++++++++++++++++++++++++++--------------
 kernel/sched/fair.c |  2 +-
 2 files changed, 30 insertions(+), 15 deletions(-)

diff --git a/kernel/kthread.c b/kernel/kthread.c
index b2bac5d929d2..22750a8af83e 100644
--- a/kernel/kthread.c
+++ b/kernel/kthread.c
@@ -76,6 +76,25 @@ static inline struct kthread *to_kthread(struct task_struct *k)
 	return (__force void *)k->set_child_tid;
 }
 
+/*
+ * Variant of to_kthread() that doesn't assume @p is a kthread.
+ *
+ * Per construction; when:
+ *
+ *   (p->flags & PF_KTHREAD) && p->set_child_tid
+ *
+ * the task is both a kthread and struct kthread is persistent. However
+ * PF_KTHREAD on its own is not, kernel_thread() can exec() (See umh.c and
+ * begin_new_exec()).
+ */
+static inline struct kthread *__to_kthread(struct task_struct *p)
+{
+	void *kthread = (__force void *)p->set_child_tid;
+	if (kthread && !(p->flags & PF_KTHREAD))
+		kthread = NULL;
+	return kthread;
+}
+
 void free_kthread_struct(struct task_struct *k)
 {
 	struct kthread *kthread;
@@ -176,10 +195,11 @@ void *kthread_data(struct task_struct *task)
  */
 void *kthread_probe_data(struct task_struct *task)
 {
-	struct kthread *kthread = to_kthread(task);
+	struct kthread *kthread = __to_kthread(task);
 	void *data = NULL;
 
-	probe_kernel_read(&data, &kthread->data, sizeof(data));
+	if (kthread)
+		probe_kernel_read(&data, &kthread->data, sizeof(data));
 	return data;
 }
 
@@ -490,9 +510,9 @@ void kthread_set_per_cpu(struct task_struct *k, int cpu)
 	set_bit(KTHREAD_IS_PER_CPU, &kthread->flags);
 }
 
-bool kthread_is_per_cpu(struct task_struct *k)
+bool kthread_is_per_cpu(struct task_struct *p)
 {
-	struct kthread *kthread = to_kthread(k);
+	struct kthread *kthread = __to_kthread(p);
 	if (!kthread)
 		return false;
 
@@ -1272,11 +1292,9 @@ EXPORT_SYMBOL(kthread_destroy_worker);
  */
 void kthread_associate_blkcg(struct cgroup_subsys_state *css)
 {
-	struct kthread *kthread;
+	struct kthread *kthread = __to_kthread(current);
+
 
-	if (!(current->flags & PF_KTHREAD))
-		return;
-	kthread = to_kthread(current);
 	if (!kthread)
 		return;
 
@@ -1298,13 +1316,10 @@ EXPORT_SYMBOL(kthread_associate_blkcg);
  */
 struct cgroup_subsys_state *kthread_blkcg(void)
 {
-	struct kthread *kthread;
+	struct kthread *kthread = __to_kthread(current);
 
-	if (current->flags & PF_KTHREAD) {
-		kthread = to_kthread(current);
-		if (kthread)
-			return kthread->blkcg_css;
-	}
+	if (kthread)
+		return kthread->blkcg_css;
 	return NULL;
 }
 EXPORT_SYMBOL(kthread_blkcg);
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 74cb20f32f72..87d9fad9d01d 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -7301,7 +7301,7 @@ int can_migrate_task(struct task_struct *p, struct lb_env *env)
 		return 0;
 
 	/* Disregard pcpu kthreads; they are where they need to be. */
-	if ((p->flags & PF_KTHREAD) && kthread_is_per_cpu(p))
+	if (kthread_is_per_cpu(p))
 		return 0;
 
 	if (!cpumask_test_cpu(env->dst_cpu, p->cpus_ptr)) {


Thread overview: 7+ messages
2021-07-28 17:02 stable 5.4.135 REGRESSION / once-a-day WARNING: at kthread_is_per_cpu+0x1c/0x20 bof
2021-07-28 17:45 ` Greg KH
2021-07-28 17:51   ` Patrick Schaaf
2021-08-31  4:46   ` Patrick Schaaf
2021-08-31  7:00     ` Peter Zijlstra [this message]
2021-08-31 14:57       ` Patrick Schaaf
2021-07-28 17:40 bof
