* [PATCH 0/3] various irq handling fixes/docu updates
@ 2022-12-16 15:01 Manfred Spraul
  2022-12-16 15:04 ` [PATCH 1/3] lib/percpu_counter: percpu_counter_add_batch() overflow/underflow Manfred Spraul
  0 siblings, 1 reply; 9+ messages in thread
From: Manfred Spraul @ 2022-12-16 15:01 UTC (permalink / raw)
  To: LKML, Andrew Morton; +Cc: 1vier1, Manfred Spraul

Hi,

I found three patches covering two topics in my outbox.

* 0001-lib-percpu_counter-percpu_counter_add_batch-overflow.patch
* 0002-include-linux-percpu_counter.h-Race-in-uniprocessor-.patch

  The percpu_counter code does not take interrupts into account
  properly, which could result in corrupted counters.

* 0003-kernel-irq-manage.c-disable_irq-might-sleep.patch

  The disable_irq() documentation does not take threaded interrupt
  handlers into account.

I've not added a Cc to stable, since the races are not really likely:

- #002 is probably the most likely case: UP systems that use
  percpu_counters from interrupt context can observe corruption.

- #001 is fairly theoretical.

- #003 is a docu update and thus out of scope for stable.

@Andrew: Could you add them to -mm and -next?
Especially #003 should stay in -next for a few months, to see what
the might_sleep() catches.

--
	Manfred


* [PATCH 1/3] lib/percpu_counter: percpu_counter_add_batch() overflow/underflow
  2022-12-16 15:01 [PATCH 0/3] various irq handling fixes/docu updates Manfred Spraul
@ 2022-12-16 15:04 ` Manfred Spraul
  2022-12-16 15:04   ` [PATCH 2/3] include/linux/percpu_counter.h: Race in uniprocessor percpu_counter_add() Manfred Spraul
  2022-12-16 15:04   ` [PATCH 3/3] kernel/irq/manage.c: disable_irq() might sleep Manfred Spraul
  0 siblings, 2 replies; 9+ messages in thread
From: Manfred Spraul @ 2022-12-16 15:04 UTC (permalink / raw)
  To: LKML, Andrew Morton; +Cc: 1vier1, Manfred Spraul, Sun, Jiebin

If an interrupt happens between __this_cpu_read(*fbc->counters) and
this_cpu_add(*fbc->counters, amount), and that interrupt modifies
the per-CPU counter, then the this_cpu_add() executed after the
interrupt returns may underflow or overflow the per-CPU counter.

Signed-off-by: Manfred Spraul <manfred@colorfullife.com>
Cc: "Sun, Jiebin" <jiebin.sun@intel.com>
---
 lib/percpu_counter.c | 25 +++++++++++++++----------
 1 file changed, 15 insertions(+), 10 deletions(-)

diff --git a/lib/percpu_counter.c b/lib/percpu_counter.c
index 42f729c8e56c..dba56c5c1837 100644
--- a/lib/percpu_counter.c
+++ b/lib/percpu_counter.c
@@ -73,28 +73,33 @@ void percpu_counter_set(struct percpu_counter *fbc, s64 amount)
 EXPORT_SYMBOL(percpu_counter_set);
 
 /*
- * This function is both preempt and irq safe. The former is due to explicit
- * preemption disable. The latter is guaranteed by the fact that the slow path
- * is explicitly protected by an irq-safe spinlock whereas the fast patch uses
- * this_cpu_add which is irq-safe by definition. Hence there is no need muck
- * with irq state before calling this one
+ * local_irq_save() is needed to make the function irq safe:
+ * - The slow path would be ok as protected by an irq-safe spinlock.
+ * - this_cpu_add would be ok as it is irq-safe by definition.
+ * But:
+ * The decision slow path/fast path and the actual update must be atomic, too.
+ * Otherwise a call in process context could check the current values and
+ * decide that the fast path can be used. If now an interrupt occurs before
+ * the this_cpu_add(), and the interrupt updates this_cpu(*fbc->counters),
+ * then the this_cpu_add() that is executed after the interrupt has completed
+ * can produce values larger than "batch" or even overflows.
  */
 void percpu_counter_add_batch(struct percpu_counter *fbc, s64 amount, s32 batch)
 {
 	s64 count;
+	unsigned long flags;
 
-	preempt_disable();
+	local_irq_save(flags);
 	count = __this_cpu_read(*fbc->counters) + amount;
 	if (abs(count) >= batch) {
-		unsigned long flags;
-		raw_spin_lock_irqsave(&fbc->lock, flags);
+		raw_spin_lock(&fbc->lock);
 		fbc->count += count;
 		__this_cpu_sub(*fbc->counters, count - amount);
-		raw_spin_unlock_irqrestore(&fbc->lock, flags);
+		raw_spin_unlock(&fbc->lock);
 	} else {
 		this_cpu_add(*fbc->counters, amount);
 	}
-	preempt_enable();
+	local_irq_restore(flags);
 }
 EXPORT_SYMBOL(percpu_counter_add_batch);
 
-- 
2.38.1


* [PATCH 2/3] include/linux/percpu_counter.h: Race in uniprocessor percpu_counter_add()
  2022-12-16 15:04 ` [PATCH 1/3] lib/percpu_counter: percpu_counter_add_batch() overflow/underflow Manfred Spraul
@ 2022-12-16 15:04   ` Manfred Spraul
  2022-12-16 15:04   ` [PATCH 3/3] kernel/irq/manage.c: disable_irq() might sleep Manfred Spraul
  1 sibling, 0 replies; 9+ messages in thread
From: Manfred Spraul @ 2022-12-16 15:04 UTC (permalink / raw)
  To: LKML, Andrew Morton; +Cc: 1vier1, Manfred Spraul, Sun, Jiebin

The percpu_counter interface is supposed to be preempt and irq safe.

But:
The uniprocessor implementation of percpu_counter_add() is not irq safe:
if an interrupt happens during the +=, then the result is undefined.

Therefore: switch from preempt_disable() to local_irq_save().
This prevents interrupts from interrupting the +=, and as a side effect
prevents preemption.
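
As a minimal sketch of the race, the UP implementation before this
patch, with the (hypothetical) interrupting call annotated:

    static inline void
    percpu_counter_add(struct percpu_counter *fbc, s64 amount)
    {
            preempt_disable();
            fbc->count += amount;   /*
                                     * load, add, store: an interrupt
                                     * between the load and the store
                                     * that itself modifies fbc->count
                                     * has its update overwritten by
                                     * the store.
                                     */
            preempt_enable();
    }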

Signed-off-by: Manfred Spraul <manfred@colorfullife.com>
Cc: "Sun, Jiebin" <jiebin.sun@intel.com>
---
 include/linux/percpu_counter.h | 6 ++++--
 1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/include/linux/percpu_counter.h b/include/linux/percpu_counter.h
index a3aae8d57a42..521a733e21a9 100644
--- a/include/linux/percpu_counter.h
+++ b/include/linux/percpu_counter.h
@@ -152,9 +152,11 @@ __percpu_counter_compare(struct percpu_counter *fbc, s64 rhs, s32 batch)
 static inline void
 percpu_counter_add(struct percpu_counter *fbc, s64 amount)
 {
-	preempt_disable();
+	unsigned long flags;
+
+	local_irq_save(flags);
 	fbc->count += amount;
-	preempt_enable();
+	local_irq_restore(flags);
 }
 
 /* non-SMP percpu_counter_add_local is the same with percpu_counter_add */
-- 
2.38.1


* [PATCH 3/3] kernel/irq/manage.c: disable_irq() might sleep.
  2022-12-16 15:04 ` [PATCH 1/3] lib/percpu_counter: percpu_counter_add_batch() overflow/underflow Manfred Spraul
  2022-12-16 15:04   ` [PATCH 2/3] include/linux/percpu_counter.h: Race in uniprocessor percpu_counter_add() Manfred Spraul
@ 2022-12-16 15:04   ` Manfred Spraul
  2022-12-20  6:58     ` Sverdlin, Alexander
  2023-01-11 18:38     ` [tip: irq/core] genirq: Add might_sleep() to disable_irq() tip-bot2 for Manfred Spraul
  1 sibling, 2 replies; 9+ messages in thread
From: Manfred Spraul @ 2022-12-16 15:04 UTC (permalink / raw)
  To: LKML, Andrew Morton
  Cc: 1vier1, Manfred Spraul, Thomas Gleixner, Sverdlin, Alexander

With the introduction of threaded interrupt handlers, it is virtually
never safe to call disable_irq() from non-preemptible context.

Thus: Update the documentation, add a might_sleep() to catch any
offenders.
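
A hypothetical example of the kind of caller the might_sleep() is
meant to flag (dev, its lock and its irq number are made up):

    spin_lock(&dev->lock);      /* non-preemptible from here on       */
    disable_irq(dev->irq);      /* may block in synchronize_irq()     */
                                /* until the threaded handler for     */
                                /* dev->irq has finished - with this  */
                                /* patch it now warns on every call   */
    spin_unlock(&dev->lock);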

Fixes: 3aa551c9b4c4 ("genirq: add threaded interrupt handler support")

Signed-off-by: Manfred Spraul <manfred@colorfullife.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: "Sverdlin, Alexander" <alexander.sverdlin@siemens.com>
---
 kernel/irq/manage.c | 5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/kernel/irq/manage.c b/kernel/irq/manage.c
index 5b7cf28df290..8ce75495e04f 100644
--- a/kernel/irq/manage.c
+++ b/kernel/irq/manage.c
@@ -723,10 +723,13 @@ EXPORT_SYMBOL(disable_irq_nosync);
  *	to complete before returning. If you use this function while
  *	holding a resource the IRQ handler may need you will deadlock.
  *
- *	This function may be called - with care - from IRQ context.
+ *	Can only be called from preemptible code as it might sleep when
+ *	an interrupt thread is associated with @irq.
+ *
  */
 void disable_irq(unsigned int irq)
 {
+	might_sleep();
 	if (!__disable_irq_nosync(irq))
 		synchronize_irq(irq);
 }
-- 
2.38.1


* Re: [PATCH 3/3] kernel/irq/manage.c: disable_irq() might sleep.
  2022-12-16 15:04   ` [PATCH 3/3] kernel/irq/manage.c: disable_irq() might sleep Manfred Spraul
@ 2022-12-20  6:58     ` Sverdlin, Alexander
  2022-12-23 10:54       ` Manfred Spraul
  2023-01-11 18:38     ` [tip: irq/core] genirq: Add might_sleep() to disable_irq() tip-bot2 for Manfred Spraul
  1 sibling, 1 reply; 9+ messages in thread
From: Sverdlin, Alexander @ 2022-12-20  6:58 UTC (permalink / raw)
  To: manfred, linux-kernel, akpm; +Cc: 1vier1, tglx

Hi Manfred!

On Fri, 2022-12-16 at 16:04 +0100, Manfred Spraul wrote:
> With the introduction of threaded interrupt handlers, it is virtually
> never safe to call disable_irq() from non-preemptible context.
> 
> Thus: Update the documentation, add a might_sleep() to catch any
> offenders.
> 
> Fixes: 3aa551c9b4c4 ("genirq: add threaded interrupt handler support")
> 
> Signed-off-by: Manfred Spraul <manfred@colorfullife.com>
> Cc: Thomas Gleixner <tglx@linutronix.de>
> Cc: "Sverdlin, Alexander" <alexander.sverdlin@siemens.com>
> ---
>  kernel/irq/manage.c | 5 ++++-
>  1 file changed, 4 insertions(+), 1 deletion(-)
> 
> diff --git a/kernel/irq/manage.c b/kernel/irq/manage.c
> index 5b7cf28df290..8ce75495e04f 100644
> --- a/kernel/irq/manage.c
> +++ b/kernel/irq/manage.c
> @@ -723,10 +723,13 @@ EXPORT_SYMBOL(disable_irq_nosync);
>   *     to complete before returning. If you use this function while
>   *     holding a resource the IRQ handler may need you will deadlock.
>   *
> - *     This function may be called - with care - from IRQ context.
> + *     Can only be called from preemptible code as it might sleep when
> + *     an interrupt thread is associated with @irq.
> + *
>   */
>  void disable_irq(unsigned int irq)
>  {
> +       might_sleep();

I'm not sure about this; at the latest, the wait_event() inside
synchronize_irq() has it already.

>         if (!__disable_irq_nosync(irq))
>                 synchronize_irq(irq);
>  }

-- 
Alexander Sverdlin
Siemens AG
www.siemens.com

* Re: [PATCH 3/3] kernel/irq/manage.c: disable_irq() might sleep.
  2022-12-20  6:58     ` Sverdlin, Alexander
@ 2022-12-23 10:54       ` Manfred Spraul
  2022-12-23 11:22         ` Sverdlin, Alexander
  0 siblings, 1 reply; 9+ messages in thread
From: Manfred Spraul @ 2022-12-23 10:54 UTC (permalink / raw)
  To: Sverdlin, Alexander, linux-kernel, akpm; +Cc: 1vier1, tglx

Hi Alexander,

On 12/20/22 07:58, Sverdlin, Alexander wrote:
> Hi Manfred!
>
> On Fri, 2022-12-16 at 16:04 +0100, Manfred Spraul wrote:
>> diff --git a/kernel/irq/manage.c b/kernel/irq/manage.c
>> index 5b7cf28df290..8ce75495e04f 100644
>> --- a/kernel/irq/manage.c
>> +++ b/kernel/irq/manage.c
>> @@ -723,10 +723,13 @@ EXPORT_SYMBOL(disable_irq_nosync);
>>    *     to complete before returning. If you use this function while
>>    *     holding a resource the IRQ handler may need you will deadlock.
>>    *
>> - *     This function may be called - with care - from IRQ context.
>> + *     Can only be called from preemptible code as it might sleep when
>> + *     an interrupt thread is associated with @irq.
>> + *
>>    */
>>   void disable_irq(unsigned int irq)
>>   {
>> +       might_sleep();
> I'm not sure about this; at the latest, the wait_event() inside
> synchronize_irq() has it already.
>
>>          if (!__disable_irq_nosync(irq))
>>                  synchronize_irq(irq);
>>   }

That is the whole point: might_sleep() should always be called. We are
clarifying an API definition. Everyone who uses disable_irq() from
non-sleeping context should get a warning, 100% of the time.

Not just within synchronize_irq() if there is an active threaded irq 
handler.
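
Roughly, as a simplified sketch of the call chain:

    void disable_irq(unsigned int irq)
    {
            might_sleep();  /* new: fires on every single call */
            if (!__disable_irq_nosync(irq))
                    synchronize_irq(irq);
                    /*
                     * The implicit might_sleep() in the wait_event()
                     * inside synchronize_irq() is only reached when
                     * this branch is taken at all.
                     */
    }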

--

     Manfred


* Re: [PATCH 3/3] kernel/irq/manage.c: disable_irq() might sleep.
  2022-12-23 10:54       ` Manfred Spraul
@ 2022-12-23 11:22         ` Sverdlin, Alexander
  2022-12-24 20:01           ` Andrew Morton
  0 siblings, 1 reply; 9+ messages in thread
From: Sverdlin, Alexander @ 2022-12-23 11:22 UTC (permalink / raw)
  To: manfred, linux-kernel, akpm; +Cc: 1vier1, tglx

Hello Manfred,

On Fri, 2022-12-23 at 11:54 +0100, Manfred Spraul wrote:
> > I'm not sure about this; at the latest, the wait_event() inside
> > synchronize_irq() has it already.
> > 
> > >           if (!__disable_irq_nosync(irq))
> > >                   synchronize_irq(irq);
> > >    }
> 
> That is the whole point: might_sleep() should always be called. We are
> clarifying an API definition. Everyone who uses disable_irq() from
> non-sleeping context should get a warning, 100% of the time.
> 
> Not just within synchronize_irq() if there is an active threaded irq
> handler.

As I read it, it will warn almost always, even without a threaded
handler configured; only in some error cases will it not warn. I'm
just thinking that disable_irq() might be in a hot path, and this is
checked anyway two calls deeper. But I don't have a strong opinion on
that, and it looks like it has been taken into the mm tree already.

-- 
Alexander Sverdlin
Siemens AG
www.siemens.com

* Re: [PATCH 3/3] kernel/irq/manage.c: disable_irq() might sleep.
  2022-12-23 11:22         ` Sverdlin, Alexander
@ 2022-12-24 20:01           ` Andrew Morton
  0 siblings, 0 replies; 9+ messages in thread
From: Andrew Morton @ 2022-12-24 20:01 UTC (permalink / raw)
  To: Sverdlin, Alexander; +Cc: manfred, linux-kernel, 1vier1, tglx

On Fri, 23 Dec 2022 11:22:30 +0000 "Sverdlin, Alexander" <alexander.sverdlin@siemens.com> wrote:

> But I don't have a strong opinion on
> that, and it looks like it has been taken into the mm tree already.

It can be taken out again ;)  Hence the "mm-unstable" name.

* [tip: irq/core] genirq: Add might_sleep() to disable_irq()
  2022-12-16 15:04   ` [PATCH 3/3] kernel/irq/manage.c: disable_irq() might sleep Manfred Spraul
  2022-12-20  6:58     ` Sverdlin, Alexander
@ 2023-01-11 18:38     ` tip-bot2 for Manfred Spraul
  1 sibling, 0 replies; 9+ messages in thread
From: tip-bot2 for Manfred Spraul @ 2023-01-11 18:38 UTC (permalink / raw)
  To: linux-tip-commits; +Cc: Manfred Spraul, Thomas Gleixner, x86, linux-kernel, maz

The following commit has been merged into the irq/core branch of tip:

Commit-ID:     17549b0f184d870f2cfa4e5cfa79f4c4905ed757
Gitweb:        https://git.kernel.org/tip/17549b0f184d870f2cfa4e5cfa79f4c4905ed757
Author:        Manfred Spraul <manfred@colorfullife.com>
AuthorDate:    Fri, 16 Dec 2022 16:04:41 +01:00
Committer:     Thomas Gleixner <tglx@linutronix.de>
CommitterDate: Wed, 11 Jan 2023 19:35:13 +01:00

genirq: Add might_sleep() to disable_irq()

With the introduction of threaded interrupt handlers, it is virtually
never safe to call disable_irq() from non-preemptible context.

Thus: Update the documentation, add an explicit might_sleep() to catch any
offenders. This is more obvious and straightforward than the implicit
might_sleep() check deeper down in the disable_irq() call chain.

Fixes: 3aa551c9b4c4 ("genirq: add threaded interrupt handler support")
Signed-off-by: Manfred Spraul <manfred@colorfullife.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Link: https://lore.kernel.org/r/20221216150441.200533-3-manfred@colorfullife.com


---
 kernel/irq/manage.c | 5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/kernel/irq/manage.c b/kernel/irq/manage.c
index 5b7cf28..8ce7549 100644
--- a/kernel/irq/manage.c
+++ b/kernel/irq/manage.c
@@ -723,10 +723,13 @@ EXPORT_SYMBOL(disable_irq_nosync);
  *	to complete before returning. If you use this function while
  *	holding a resource the IRQ handler may need you will deadlock.
  *
- *	This function may be called - with care - from IRQ context.
+ *	Can only be called from preemptible code as it might sleep when
+ *	an interrupt thread is associated with @irq.
+ *
  */
 void disable_irq(unsigned int irq)
 {
+	might_sleep();
 	if (!__disable_irq_nosync(irq))
 		synchronize_irq(irq);
 }
