linux-kernel.vger.kernel.org archive mirror
* [PATCH 0/2] genirq: Move prio assignment into the newly created thread
@ 2020-11-10 11:38 Sebastian Andrzej Siewior
  2020-11-10 11:38 ` [PATCH 1/2] kthread: Move prio/affinite change " Sebastian Andrzej Siewior
  2020-11-10 11:38 ` [PATCH 2/2] genirq: Move prio assignment " Sebastian Andrzej Siewior
  0 siblings, 2 replies; 7+ messages in thread
From: Sebastian Andrzej Siewior @ 2020-11-10 11:38 UTC (permalink / raw)
  To: linux-kernel; +Cc: Thomas Gleixner, Peter Zijlstra, Ben Skeggs, Mike Galbraith

Mike reported [0] a lockdep splat triggered by the nouveau driver with
threaded interrupts enabled.
I can't easily tell whether moving or changing the locking within nouveau
would avoid breaking anything or recreating the same lock chain in another
path.
Mike confirmed that the lockdep splat can be avoided by moving
`request_irq()' out of the picture [1]. He also tested the two patches
which are scattered across the thread.

Mike is betting on commit
   710da3c8ea7df ("sched/core: Prevent race condition between cpuset and __sched_setscheduler()")

as the change that introduced the problem, assuming that the locking
within nouveau remained unchanged. He confirmed that this splat also
shows up on a v5.4 stable kernel [2], which is where the change was
introduced.

[0] https://lkml.kernel.org/r/a23a826af7c108ea5651e73b8fbae5e653f16e86.camel@gmx.de
[1] https://lkml.kernel.org/r/431e81699f2310eabfe5af0a3de400ab99d9323b.camel@gmx.de
[2] https://lkml.kernel.org/r/f7e43b369c7d168c2ff04bfd91dfdf3afd0da12c.camel@gmx.de

Sebastian




* [PATCH 1/2] kthread: Move prio/affinite change into the newly created thread
  2020-11-10 11:38 [PATCH 0/2] genirq: Move prio assignment into the newly created thread Sebastian Andrzej Siewior
@ 2020-11-10 11:38 ` Sebastian Andrzej Siewior
  2020-11-17 12:45   ` Peter Zijlstra
  2020-11-10 11:38 ` [PATCH 2/2] genirq: Move prio assignment " Sebastian Andrzej Siewior
  1 sibling, 1 reply; 7+ messages in thread
From: Sebastian Andrzej Siewior @ 2020-11-10 11:38 UTC (permalink / raw)
  To: linux-kernel
  Cc: Thomas Gleixner, Peter Zijlstra, Ben Skeggs, Mike Galbraith,
	Sebastian Andrzej Siewior

With threaded interrupts enabled, the nouveau driver reported the
following lockdep splat:
| Chain exists of:
|   &mm->mmap_lock#2 --> &device->mutex --> &cpuset_rwsem
|
|  Possible unsafe locking scenario:
|
|        CPU0                    CPU1
|        ----                    ----
|   lock(&cpuset_rwsem);
|                                lock(&device->mutex);
|                                lock(&cpuset_rwsem);
|   lock(&mm->mmap_lock#2);

The device->mutex is nvkm_device::mutex.

Breaking the lock chain at `cpuset_rwsem' is probably the easiest thing
to do.
Move the priority reset to the start of the newly created thread, so that
it is no longer performed under the caller's locks.
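
For illustration only (this is not nouveau or kernel code, and all names
below are stand-ins): a minimal userspace model of that cycle, with one
pthread mutex per kernel lock and one hypothetical function per path. Run
sequentially it never hangs; the problem is the acquisition ordering
itself, which is exactly what lockdep complains about above.

  /* Stand-ins for mmap_lock, nvkm_device::mutex and cpuset_rwsem. */
  #include <pthread.h>

  static pthread_mutex_t mmap_lock    = PTHREAD_MUTEX_INITIALIZER;
  static pthread_mutex_t device_mutex = PTHREAD_MUTEX_INITIALIZER;
  static pthread_mutex_t cpuset_lock  = PTHREAD_MUTEX_INITIALIZER;

  static void fault_path(void)        /* mmap_lock -> device->mutex */
  {
          pthread_mutex_lock(&mmap_lock);
          pthread_mutex_lock(&device_mutex);
          pthread_mutex_unlock(&device_mutex);
          pthread_mutex_unlock(&mmap_lock);
  }

  /* device->mutex -> cpuset_rwsem: before this patch, request_irq() under
   * the device mutex reached sched_setscheduler_nocheck() in the creating
   * context and, with it, cpuset_rwsem. */
  static void probe_path(void)
  {
          pthread_mutex_lock(&device_mutex);
          pthread_mutex_lock(&cpuset_lock);
          pthread_mutex_unlock(&cpuset_lock);
          pthread_mutex_unlock(&device_mutex);
  }

  static void cpuset_path(void)       /* cpuset_rwsem -> mmap_lock */
  {
          pthread_mutex_lock(&cpuset_lock);
          pthread_mutex_lock(&mmap_lock);
          pthread_mutex_unlock(&mmap_lock);
          pthread_mutex_unlock(&cpuset_lock);
  }

  int main(void)
  {
          fault_path();
          probe_path();
          cpuset_path();
          return 0;
  }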

Fixes: 710da3c8ea7df ("sched/core: Prevent race condition between cpuset and __sched_setscheduler()")
Reported-by: Mike Galbraith <efault@gmx.de>
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Link: https://lkml.kernel.org/r/a23a826af7c108ea5651e73b8fbae5e653f16e86.camel@gmx.de
---
 kernel/kthread.c | 16 ++++++++--------
 1 file changed, 8 insertions(+), 8 deletions(-)

diff --git a/kernel/kthread.c b/kernel/kthread.c
index 933a625621b8d..4a31127c6efbf 100644
--- a/kernel/kthread.c
+++ b/kernel/kthread.c
@@ -243,6 +243,7 @@ EXPORT_SYMBOL_GPL(kthread_parkme);
 
 static int kthread(void *_create)
 {
+	static const struct sched_param param = { .sched_priority = 0 };
 	/* Copy data: it's on kthread's stack */
 	struct kthread_create_info *create = _create;
 	int (*threadfn)(void *data) = create->threadfn;
@@ -273,6 +274,13 @@ static int kthread(void *_create)
 	init_completion(&self->parked);
 	current->vfork_done = &self->exited;
 
+	/*
+	 * The new thread inherited kthreadd's priority and CPU mask. Reset
+	 * back to default in case they have been changed.
+	 */
+	sched_setscheduler_nocheck(current, SCHED_NORMAL, &param);
+	set_cpus_allowed_ptr(current, housekeeping_cpumask(HK_FLAG_KTHREAD));
+
 	/* OK, tell user we're spawned, wait for stop or wakeup */
 	__set_current_state(TASK_UNINTERRUPTIBLE);
 	create->result = current;
@@ -370,7 +378,6 @@ struct task_struct *__kthread_create_on_node(int (*threadfn)(void *data),
 	}
 	task = create->result;
 	if (!IS_ERR(task)) {
-		static const struct sched_param param = { .sched_priority = 0 };
 		char name[TASK_COMM_LEN];
 
 		/*
@@ -379,13 +386,6 @@ struct task_struct *__kthread_create_on_node(int (*threadfn)(void *data),
 		 */
 		vsnprintf(name, sizeof(name), namefmt, args);
 		set_task_comm(task, name);
-		/*
-		 * root may have changed our (kthreadd's) priority or CPU mask.
-		 * The kernel thread should not inherit these properties.
-		 */
-		sched_setscheduler_nocheck(task, SCHED_NORMAL, &param);
-		set_cpus_allowed_ptr(task,
-				     housekeeping_cpumask(HK_FLAG_KTHREAD));
 	}
 	kfree(create);
 	return task;
-- 
2.29.2



* [PATCH 2/2] genirq: Move prio assignment into the newly created thread
  2020-11-10 11:38 [PATCH 0/2] genirq: Move prio assignment into the newly created thread Sebastian Andrzej Siewior
  2020-11-10 11:38 ` [PATCH 1/2] kthread: Move prio/affinite change " Sebastian Andrzej Siewior
@ 2020-11-10 11:38 ` Sebastian Andrzej Siewior
  1 sibling, 0 replies; 7+ messages in thread
From: Sebastian Andrzej Siewior @ 2020-11-10 11:38 UTC (permalink / raw)
  To: linux-kernel
  Cc: Thomas Gleixner, Peter Zijlstra, Ben Skeggs, Mike Galbraith,
	Sebastian Andrzej Siewior

From: Thomas Gleixner <tglx@linutronix.de>

With threaded interrupts enabled, the nouveau driver reported the
following lockdep splat:
| Chain exists of:
|   &mm->mmap_lock#2 --> &device->mutex --> &cpuset_rwsem
|
|  Possible unsafe locking scenario:
|
|        CPU0                    CPU1
|        ----                    ----
|   lock(&cpuset_rwsem);
|                                lock(&device->mutex);
|                                lock(&cpuset_rwsem);
|   lock(&mm->mmap_lock#2);

The device->mutex is nvkm_device::mutex.

Breaking the lock chain at `cpuset_rwsem' is probably the easiest thing
to do.
Move the priority assignment to the start of the newly created thread, so
that sched_set_fifo() is no longer called under the caller's locks.
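
As a userspace analogue only (a hedged sketch, not the kernel API): the
same shape as calling sched_set_fifo(current) at the top of irq_thread()
is a worker that raises its own priority first thing, instead of its
creator doing so after pthread_create() while possibly holding other
locks. The priority value 50 is an arbitrary example; SCHED_FIFO needs
CAP_SYS_NICE.

  #include <pthread.h>
  #include <sched.h>
  #include <stdio.h>
  #include <string.h>

  static void *worker(void *arg)
  {
          struct sched_param sp = { .sched_priority = 50 };
          int err;

          /* The thread sets its own policy/priority on entry. */
          err = pthread_setschedparam(pthread_self(), SCHED_FIFO, &sp);
          if (err)
                  fprintf(stderr, "pthread_setschedparam: %s\n", strerror(err));
          /* ... do the real work at the new priority ... */
          return NULL;
  }

  int main(void)
  {
          pthread_t t;

          /* The creator no longer touches the thread's scheduling class. */
          pthread_create(&t, NULL, worker, NULL);
          pthread_join(t, NULL);
          return 0;
  }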

Fixes: 710da3c8ea7df ("sched/core: Prevent race condition between cpuset and __sched_setscheduler()")
Reported-by: Mike Galbraith <efault@gmx.de>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
[bigeasy: Patch description]
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Link: https://lkml.kernel.org/r/a23a826af7c108ea5651e73b8fbae5e653f16e86.camel@gmx.de
---
 kernel/irq/manage.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/kernel/irq/manage.c b/kernel/irq/manage.c
index c460e0496006e..079688d122a70 100644
--- a/kernel/irq/manage.c
+++ b/kernel/irq/manage.c
@@ -1155,6 +1155,8 @@ static int irq_thread(void *data)
 	irqreturn_t (*handler_fn)(struct irq_desc *desc,
 			struct irqaction *action);
 
+	sched_set_fifo(current);
+
 	if (force_irqthreads && test_bit(IRQTF_FORCED_THREAD,
 					&action->thread_flags))
 		handler_fn = irq_forced_thread_fn;
@@ -1320,8 +1322,6 @@ setup_irq_thread(struct irqaction *new, unsigned int irq, bool secondary)
 	if (IS_ERR(t))
 		return PTR_ERR(t);
 
-	sched_set_fifo(t);
-
 	/*
 	 * We keep the reference to the task struct even if
 	 * the thread dies to avoid that the interrupt code
-- 
2.29.2



* Re: [PATCH 1/2] kthread: Move prio/affinite change into the newly created thread
  2020-11-10 11:38 ` [PATCH 1/2] kthread: Move prio/affinite change " Sebastian Andrzej Siewior
@ 2020-11-17 12:45   ` Peter Zijlstra
  2020-11-20 22:17     ` Sebastian Andrzej Siewior
  2020-11-21 10:55     ` Thomas Gleixner
  0 siblings, 2 replies; 7+ messages in thread
From: Peter Zijlstra @ 2020-11-17 12:45 UTC (permalink / raw)
  To: Sebastian Andrzej Siewior
  Cc: linux-kernel, Thomas Gleixner, Ben Skeggs, Mike Galbraith,
	Greg Kroah-Hartman

On Tue, Nov 10, 2020 at 12:38:47PM +0100, Sebastian Andrzej Siewior wrote:
> With threaded interrupts enabled, the nouveau driver reported the
> following lockdep splat:
> | Chain exists of:
> |   &mm->mmap_lock#2 --> &device->mutex --> &cpuset_rwsem
> |
> |  Possible unsafe locking scenario:
> |
> |        CPU0                    CPU1
> |        ----                    ----
> |   lock(&cpuset_rwsem);
> |                                lock(&device->mutex);
> |                                lock(&cpuset_rwsem);
> |   lock(&mm->mmap_lock#2);
> 
> The device->mutex is nvkm_device::mutex.
> 
> Breaking the lock chain at `cpuset_rwsem' is probably the easiest thing
> to do.
> Move the priority reset to the start of the newly created thread, so that
> it is no longer performed under the caller's locks.
> 
> Fixes: 710da3c8ea7df ("sched/core: Prevent race condition between cpuset and __sched_setscheduler()")
> Reported-by: Mike Galbraith <efault@gmx.de>
> Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
> Link: https://lkml.kernel.org/r/a23a826af7c108ea5651e73b8fbae5e653f16e86.camel@gmx.de

Moo... yes, this is certainly the easiest solution, because nouveau is a
horrible rat's nest. But when I spoke to Greg KH about this, he suggested
nouveau ought to be fixed instead.

Ben, I got terminally lost when trying to untangle the nouveau init code:
is there any chance it can be fixed to not hold that nvkm_device::mutex
thing while doing request_irq()?

> ---
>  kernel/kthread.c | 16 ++++++++--------
>  1 file changed, 8 insertions(+), 8 deletions(-)
> 
> diff --git a/kernel/kthread.c b/kernel/kthread.c
> index 933a625621b8d..4a31127c6efbf 100644
> --- a/kernel/kthread.c
> +++ b/kernel/kthread.c
> @@ -243,6 +243,7 @@ EXPORT_SYMBOL_GPL(kthread_parkme);
>  
>  static int kthread(void *_create)
>  {
> +	static const struct sched_param param = { .sched_priority = 0 };
>  	/* Copy data: it's on kthread's stack */
>  	struct kthread_create_info *create = _create;
>  	int (*threadfn)(void *data) = create->threadfn;
> @@ -273,6 +274,13 @@ static int kthread(void *_create)
>  	init_completion(&self->parked);
>  	current->vfork_done = &self->exited;
>  
> +	/*
> +	 * The new thread inherited kthreadd's priority and CPU mask. Reset
> +	 * back to default in case they have been changed.
> +	 */
> +	sched_setscheduler_nocheck(current, SCHED_NORMAL, &param);
> +	set_cpus_allowed_ptr(current, housekeeping_cpumask(HK_FLAG_KTHREAD));
> +
>  	/* OK, tell user we're spawned, wait for stop or wakeup */
>  	__set_current_state(TASK_UNINTERRUPTIBLE);
>  	create->result = current;
> @@ -370,7 +378,6 @@ struct task_struct *__kthread_create_on_node(int (*threadfn)(void *data),
>  	}
>  	task = create->result;
>  	if (!IS_ERR(task)) {
> -		static const struct sched_param param = { .sched_priority = 0 };
>  		char name[TASK_COMM_LEN];
>  
>  		/*
> @@ -379,13 +386,6 @@ struct task_struct *__kthread_create_on_node(int (*threadfn)(void *data),
>  		 */
>  		vsnprintf(name, sizeof(name), namefmt, args);
>  		set_task_comm(task, name);
> -		/*
> -		 * root may have changed our (kthreadd's) priority or CPU mask.
> -		 * The kernel thread should not inherit these properties.
> -		 */
> -		sched_setscheduler_nocheck(task, SCHED_NORMAL, &param);
> -		set_cpus_allowed_ptr(task,
> -				     housekeeping_cpumask(HK_FLAG_KTHREAD));
>  	}
>  	kfree(create);
>  	return task;
> -- 
> 2.29.2
> 


* Re: [PATCH 1/2] kthread: Move prio/affinite change into the newly created thread
  2020-11-17 12:45   ` Peter Zijlstra
@ 2020-11-20 22:17     ` Sebastian Andrzej Siewior
  2020-11-21 10:55     ` Thomas Gleixner
  1 sibling, 0 replies; 7+ messages in thread
From: Sebastian Andrzej Siewior @ 2020-11-20 22:17 UTC (permalink / raw)
  To: Ben Skeggs
  Cc: linux-kernel, Thomas Gleixner, Mike Galbraith, Peter Zijlstra,
	Greg Kroah-Hartman, Steven Rostedt

On 2020-11-17 13:45:03 [+0100], Peter Zijlstra wrote:
> On Tue, Nov 10, 2020 at 12:38:47PM +0100, Sebastian Andrzej Siewior wrote:
> > With threaded interrupts enabled, the nouveau driver reported the
> > following lockdep splat:
> > | Chain exists of:
> > |   &mm->mmap_lock#2 --> &device->mutex --> &cpuset_rwsem
> > |
> > |  Possible unsafe locking scenario:
> > |
> > |        CPU0                    CPU1
> > |        ----                    ----
> > |   lock(&cpuset_rwsem);
> > |                                lock(&device->mutex);
> > |                                lock(&cpuset_rwsem);
> > |   lock(&mm->mmap_lock#2);
> > 
> > The device->mutex is nvkm_device::mutex.
> > 
> > Breaking the lock chain at `cpuset_rwsem' is probably the easiest thing
> > to do.
> > Move the priority reset to the start of the newly created thread, so that
> > it is no longer performed under the caller's locks.
> > 
> > Fixes: 710da3c8ea7df ("sched/core: Prevent race condition between cpuset and __sched_setscheduler()")
> > Reported-by: Mike Galbraith <efault@gmx.de>
> > Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
> > Link: https://lkml.kernel.org/r/a23a826af7c108ea5651e73b8fbae5e653f16e86.camel@gmx.de
> 
> Moo... yes, this is certainly the easiest solution, because nouveau is a
> horrible rat's nest. But when I spoke to Greg KH about this, he suggested
> nouveau ought to be fixed instead.
>
> Ben, I got terminally lost when trying to untangle the nouveau init code:
> is there any chance it can be fixed to not hold that nvkm_device::mutex
> thing while doing request_irq()?

Ben, did you have a chance to peek at this?

> > ---
> >  kernel/kthread.c | 16 ++++++++--------
> >  1 file changed, 8 insertions(+), 8 deletions(-)
> > 
> > diff --git a/kernel/kthread.c b/kernel/kthread.c
> > index 933a625621b8d..4a31127c6efbf 100644
> > --- a/kernel/kthread.c
> > +++ b/kernel/kthread.c
> > @@ -243,6 +243,7 @@ EXPORT_SYMBOL_GPL(kthread_parkme);
> >  
> >  static int kthread(void *_create)
> >  {
> > +	static const struct sched_param param = { .sched_priority = 0 };
> >  	/* Copy data: it's on kthread's stack */
> >  	struct kthread_create_info *create = _create;
> >  	int (*threadfn)(void *data) = create->threadfn;
> > @@ -273,6 +274,13 @@ static int kthread(void *_create)
> >  	init_completion(&self->parked);
> >  	current->vfork_done = &self->exited;
> >  
> > +	/*
> > +	 * The new thread inherited kthreadd's priority and CPU mask. Reset
> > +	 * back to default in case they have been changed.
> > +	 */
> > +	sched_setscheduler_nocheck(current, SCHED_NORMAL, &param);
> > +	set_cpus_allowed_ptr(current, housekeeping_cpumask(HK_FLAG_KTHREAD));
> > +
> >  	/* OK, tell user we're spawned, wait for stop or wakeup */
> >  	__set_current_state(TASK_UNINTERRUPTIBLE);
> >  	create->result = current;
> > @@ -370,7 +378,6 @@ struct task_struct *__kthread_create_on_node(int (*threadfn)(void *data),
> >  	}
> >  	task = create->result;
> >  	if (!IS_ERR(task)) {
> > -		static const struct sched_param param = { .sched_priority = 0 };
> >  		char name[TASK_COMM_LEN];
> >  
> >  		/*
> > @@ -379,13 +386,6 @@ struct task_struct *__kthread_create_on_node(int (*threadfn)(void *data),
> >  		 */
> >  		vsnprintf(name, sizeof(name), namefmt, args);
> >  		set_task_comm(task, name);
> > -		/*
> > -		 * root may have changed our (kthreadd's) priority or CPU mask.
> > -		 * The kernel thread should not inherit these properties.
> > -		 */
> > -		sched_setscheduler_nocheck(task, SCHED_NORMAL, &param);
> > -		set_cpus_allowed_ptr(task,
> > -				     housekeeping_cpumask(HK_FLAG_KTHREAD));
> >  	}
> >  	kfree(create);
> >  	return task;

Sebastian


* Re: [PATCH 1/2] kthread: Move prio/affinite change into the newly created thread
  2020-11-17 12:45   ` Peter Zijlstra
  2020-11-20 22:17     ` Sebastian Andrzej Siewior
@ 2020-11-21 10:55     ` Thomas Gleixner
  2021-03-03 16:25       ` Sebastian Andrzej Siewior
  1 sibling, 1 reply; 7+ messages in thread
From: Thomas Gleixner @ 2020-11-21 10:55 UTC (permalink / raw)
  To: Peter Zijlstra, Sebastian Andrzej Siewior
  Cc: linux-kernel, Ben Skeggs, Mike Galbraith, Greg Kroah-Hartman

On Tue, Nov 17 2020 at 13:45, Peter Zijlstra wrote:
> On Tue, Nov 10, 2020 at 12:38:47PM +0100, Sebastian Andrzej Siewior wrote:
>
> Moo... yes, this is certainly the easiest solution, because nouveau is a
> horrible rat's nest. But when I spoke to Greg KH about this, he suggested
> nouveau ought to be fixed instead.
>
> Ben, I got terminally lost when trying to untangle the nouveau init code:
> is there any chance it can be fixed to not hold that nvkm_device::mutex
> thing while doing request_irq()?

OTOH, creating a dependency chain between cpuset_rwsem and whatever lock
is held by the caller of request_irq() or kthread_create() is not
necessarily restricted to the nouveau driver. struct device::mutex (not
the nvkm_device::mutex) is always held when a driver is probed.

The cpuset_rwsem -> mmap_lock dependency is a given, so we're one step
away from a circular dependency vs. mmap_lock.

That was my reasoning to move the stuff out into the thread context.
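
To make that concrete, here is a hypothetical example driver (not taken
from nouveau or any real driver) that requests a threaded interrupt from
probe(): the driver core holds the device lock around probe(), so before
these patches request_threaded_irq() -> setup_irq_thread() ->
sched_set_fifo() took cpuset_rwsem underneath that lock for every driver
of this shape, not just nouveau.

  #include <linux/interrupt.h>
  #include <linux/platform_device.h>

  static irqreturn_t example_thread_fn(int irq, void *dev_id)
  {
          /* Real handling would go here. */
          return IRQ_HANDLED;
  }

  static int example_probe(struct platform_device *pdev)
  {
          int irq = platform_get_irq(pdev, 0);

          if (irq < 0)
                  return irq;

          /* Called with the device lock held by the driver core. */
          return devm_request_threaded_irq(&pdev->dev, irq, NULL,
                                           example_thread_fn, IRQF_ONESHOT,
                                           "example", pdev);
  }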

Thanks,

        tglx





* Re: [PATCH 1/2] kthread: Move prio/affinite change into the newly created thread
  2020-11-21 10:55     ` Thomas Gleixner
@ 2021-03-03 16:25       ` Sebastian Andrzej Siewior
  0 siblings, 0 replies; 7+ messages in thread
From: Sebastian Andrzej Siewior @ 2021-03-03 16:25 UTC (permalink / raw)
  To: Thomas Gleixner, Ben Skeggs
  Cc: Peter Zijlstra, linux-kernel, Mike Galbraith, Greg Kroah-Hartman

On 2020-11-21 11:55:48 [+0100], Thomas Gleixner wrote:
> On Tue, Nov 17 2020 at 13:45, Peter Zijlstra wrote:
> > On Tue, Nov 10, 2020 at 12:38:47PM +0100, Sebastian Andrzej Siewior wrote:
> >
> > Moo... yes, this is certainly the easiest solution, because nouveau is a
> > horrible rat's nest. But when I spoke to Greg KH about this, he suggested
> > nouveau ought to be fixed instead.
> >
> > Ben, I got terminally lost when trying to untangle the nouveau init code:
> > is there any chance it can be fixed to not hold that nvkm_device::mutex
> > thing while doing request_irq()?
> 
> OTOH, creating a dependency chain between cpuset_rwsem and whatever lock
> is held by the caller of request_irq() or kthread_create() is not
> necessarily restricted to the nouveau driver. struct device::mutex (not
> the nvkm_device::mutex) is always held when a driver is probed.
> 
> The cpuset_rwsem -> mmap_lock dependency is a given, so we're one step
> away from a circular dependency vs. mmap_lock.
> 
> That was my reasoning to move the stuff out into the thread context.

Just a friendly ping that this is still in my queue…

Ben, could you please reply here with your view of the situation?

> Thanks,
> 
>         tglx

Sebastian


