* [patch 00/19] softirq: Cleanups and RT awareness
@ 2020-11-13 14:02 ` Thomas Gleixner
  0 siblings, 0 replies; 136+ messages in thread
From: Thomas Gleixner @ 2020-11-13 14:02 UTC (permalink / raw)
  To: LKML
  Cc: Peter Zijlstra, Frederic Weisbecker, Paul McKenney,
	Sebastian Andrzej Siewior, Arnd Bergmann, James E.J. Bottomley,
	Helge Deller, linux-parisc, Yoshinori Sato, Rich Felker,
	linux-sh, Jeff Dike, Richard Weinberger, Anton Ivanov, linux-um,
	Russell King, Marc Zyngier, Valentin Schneider, linux-arm-kernel,
	Catalin Marinas, Will Deacon

RT always runs softirq processing in thread context, which requires that
both softirq execution and BH-disabled sections are preemptible.

This is achieved by serializing through per-CPU local locks and by
substituting a few parts of the existing softirq processing code with
helper functions.
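
For illustration, a minimal sketch of that serialization pattern (the
names bh_lock and bh_serialized_section are illustrative, not from this
series; the actual implementation is in kernel/softirq.c below):

  #include <linux/local_lock.h>
  #include <linux/percpu.h>

  static DEFINE_PER_CPU(local_lock_t, bh_lock) = INIT_LOCAL_LOCK(bh_lock);

  static void bh_serialized_section(void)
  {
  	/*
  	 * On RT this acquires a per-CPU sleeping lock, so the section
  	 * stays preemptible while other BH users on this CPU are
  	 * excluded. On non-RT builds local_lock() collapses to the
  	 * usual preemption/BH disabling, i.e. no functional change.
  	 */
  	local_lock(&bh_lock);
  	/* ... BH-serialized work ... */
  	local_unlock(&bh_lock);
  }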

The following series has two parts:

    1) Clean up irq_cpustat and consolidate the preempt count maze so
       that softirq_count() and related parts can be substituted for RT

    2) The actual core implementation including the required fixups for
       NOHZ, RCU and tasklets.

The series is also available from git:

  git://git.kernel.org/pub/scm/linux/kernel/git/tglx/devel.git softirq

The RT variant has been successfully tested in the current 5.10-rt
patches. For non-RT kernels there is no functional change.

Thanks,

	tglx
---
 b/arch/arm/include/asm/hardirq.h    |   11 
 b/arch/arm/include/asm/irq.h        |    2 
 b/arch/arm64/include/asm/hardirq.h  |    7 
 b/arch/parisc/include/asm/hardirq.h |    1 
 b/arch/sh/include/asm/hardirq.h     |   14 -
 b/arch/sh/kernel/irq.c              |    2 
 b/arch/sh/kernel/traps.c            |    2 
 b/arch/um/include/asm/hardirq.h     |   17 -
 b/include/asm-generic/hardirq.h     |    6 
 b/include/linux/bottom_half.h       |    8 
 b/include/linux/hardirq.h           |    1 
 b/include/linux/interrupt.h         |   13 -
 b/include/linux/preempt.h           |   36 +--
 b/include/linux/rcupdate.h          |    3 
 b/include/linux/sched.h             |    3 
 b/kernel/softirq.c                  |  412 ++++++++++++++++++++++++++++++------
 b/kernel/time/tick-sched.c          |    2 
 include/linux/irq_cpustat.h         |   28 --
 18 files changed, 405 insertions(+), 163 deletions(-)

* [patch 01/19] parisc: Remove bogus __IRQ_STAT macro
  2020-11-13 14:02 ` Thomas Gleixner
@ 2020-11-13 14:02   ` Thomas Gleixner
  -1 siblings, 0 replies; 136+ messages in thread
From: Thomas Gleixner @ 2020-11-13 14:02 UTC (permalink / raw)
  To: LKML
  Cc: Marc Zyngier, Rich Felker, Catalin Marinas, Paul McKenney,
	Arnd Bergmann, linux-sh, Peter Zijlstra, Richard Weinberger,
	Frederic Weisbecker, Valentin Schneider, Jeff Dike, Russell King,
	Yoshinori Sato, James E.J. Bottomley, linux-parisc, Helge Deller,
	linux-um, Will Deacon, Sebastian Andrzej Siewior,
	linux-arm-kernel, Anton Ivanov

This is an unused leftover from a historical array-based implementation.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: "James E.J. Bottomley" <James.Bottomley@HansenPartnership.com>
Cc: Helge Deller <deller@gmx.de>
Cc: linux-parisc@vger.kernel.org
---
 arch/parisc/include/asm/hardirq.h |    1 -
 1 file changed, 1 deletion(-)

--- a/arch/parisc/include/asm/hardirq.h
+++ b/arch/parisc/include/asm/hardirq.h
@@ -32,7 +32,6 @@ typedef struct {
 DECLARE_PER_CPU_SHARED_ALIGNED(irq_cpustat_t, irq_stat);
 
 #define __ARCH_IRQ_STAT
-#define __IRQ_STAT(cpu, member) (irq_stat[cpu].member)
 #define inc_irq_stat(member)	this_cpu_inc(irq_stat.member)
 #define __inc_irq_stat(member)	__this_cpu_inc(irq_stat.member)
 #define ack_bad_irq(irq) WARN(1, "unexpected IRQ trap at vector %02x\n", irq)


* [patch 02/19] sh: Get rid of nmi_count()
  2020-11-13 14:02 ` Thomas Gleixner
@ 2020-11-13 14:02   ` Thomas Gleixner
  -1 siblings, 0 replies; 136+ messages in thread
From: Thomas Gleixner @ 2020-11-13 14:02 UTC (permalink / raw)
  To: LKML
  Cc: Marc Zyngier, Rich Felker, Catalin Marinas, Paul McKenney,
	Arnd Bergmann, linux-sh, Peter Zijlstra, Richard Weinberger,
	Frederic Weisbecker, Valentin Schneider, Jeff Dike, Russell King,
	Yoshinori Sato, James E.J. Bottomley, linux-parisc, Helge Deller,
	linux-um, Will Deacon, Sebastian Andrzej Siewior,
	linux-arm-kernel, Anton Ivanov

nmi_count() is a historical leftover and SH is the only user. Replace it
with regular per cpu accessors.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Yoshinori Sato <ysato@users.sourceforge.jp>
Cc: Rich Felker <dalias@libc.org>
Cc: linux-sh@vger.kernel.org
---
 arch/sh/kernel/irq.c   |    2 +-
 arch/sh/kernel/traps.c |    2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)

--- a/arch/sh/kernel/irq.c
+++ b/arch/sh/kernel/irq.c
@@ -44,7 +44,7 @@ int arch_show_interrupts(struct seq_file
 
 	seq_printf(p, "%*s: ", prec, "NMI");
 	for_each_online_cpu(j)
-		seq_printf(p, "%10u ", nmi_count(j));
+		seq_printf(p, "%10u ", per_cpu(irq_stat.__nmi_count, j));
 	seq_printf(p, "  Non-maskable interrupts\n");
 
 	seq_printf(p, "%*s: %10u\n", prec, "ERR", atomic_read(&irq_err_count));
--- a/arch/sh/kernel/traps.c
+++ b/arch/sh/kernel/traps.c
@@ -186,7 +186,7 @@ BUILD_TRAP_HANDLER(nmi)
 	arch_ftrace_nmi_enter();
 
 	nmi_enter();
-	nmi_count(cpu)++;
+	this_cpu_inc(irq_stat.__nmi_count);
 
 	switch (notify_die(DIE_NMI, "NMI", regs, 0, vec & 0xff, SIGINT)) {
 	case NOTIFY_OK:
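
For context, a short sketch of the two accessor forms used above
(illustrative only; it assumes the per-CPU irq_stat variable this series
consolidates):

  /* NMI path: increment this CPU's counter, preemption-safe: */
  this_cpu_inc(irq_stat.__nmi_count);

  /* Statistics path: read a specific CPU's counter: */
  unsigned int count = per_cpu(irq_stat.__nmi_count, cpu);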


* [patch 03/19] irqstat: Get rid of nmi_count() and __IRQ_STAT()
  2020-11-13 14:02 ` Thomas Gleixner
@ 2020-11-13 14:02   ` Thomas Gleixner
  -1 siblings, 0 replies; 136+ messages in thread
From: Thomas Gleixner @ 2020-11-13 14:02 UTC (permalink / raw)
  To: LKML
  Cc: Marc Zyngier, Rich Felker, Catalin Marinas, Paul McKenney,
	Arnd Bergmann, linux-sh, Peter Zijlstra, Richard Weinberger,
	Frederic Weisbecker, Valentin Schneider, Jeff Dike, Russell King,
	Yoshinori Sato, James E.J. Bottomley, linux-parisc, Helge Deller,
	linux-um, Will Deacon, Sebastian Andrzej Siewior,
	linux-arm-kernel, Anton Ivanov

Nothing uses this anymore.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
---
 include/linux/irq_cpustat.h |    4 ----
 1 file changed, 4 deletions(-)

--- a/include/linux/irq_cpustat.h
+++ b/include/linux/irq_cpustat.h
@@ -19,10 +19,6 @@
 
 #ifndef __ARCH_IRQ_STAT
 DECLARE_PER_CPU_ALIGNED(irq_cpustat_t, irq_stat);	/* defined in asm/hardirq.h */
-#define __IRQ_STAT(cpu, member)	(per_cpu(irq_stat.member, cpu))
 #endif
 
-/* arch dependent irq_stat fields */
-#define nmi_count(cpu)		__IRQ_STAT((cpu), __nmi_count)	/* i386 */
-
 #endif	/* __irq_cpustat_h */


* [patch 04/19] um/irqstat: Get rid of the duplicated declarations
  2020-11-13 14:02 ` Thomas Gleixner
@ 2020-11-13 14:02   ` Thomas Gleixner
  -1 siblings, 0 replies; 136+ messages in thread
From: Thomas Gleixner @ 2020-11-13 14:02 UTC (permalink / raw)
  To: LKML
  Cc: James E.J. Bottomley, Rich Felker, Marc Zyngier, Paul McKenney,
	Arnd Bergmann, linux-sh, Peter Zijlstra, Richard Weinberger,
	Frederic Weisbecker, Valentin Schneider, Jeff Dike, Russell King,
	Yoshinori Sato, linux-um, Will Deacon, linux-parisc,
	Helge Deller, Catalin Marinas, Sebastian Andrzej Siewior,
	linux-arm-kernel, Anton Ivanov

irq_cpustat_t and ack_bad_irq() are exactly the same as the asm-generic
ones.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Jeff Dike <jdike@addtoit.com>
Cc: Richard Weinberger <richard@nod.at>
Cc: Anton Ivanov <anton.ivanov@cambridgegreys.com>
Cc: linux-um@lists.infradead.org
---
 arch/um/include/asm/hardirq.h |   17 +----------------
 1 file changed, 1 insertion(+), 16 deletions(-)

--- a/arch/um/include/asm/hardirq.h
+++ b/arch/um/include/asm/hardirq.h
@@ -2,22 +2,7 @@
 #ifndef __ASM_UM_HARDIRQ_H
 #define __ASM_UM_HARDIRQ_H
 
-#include <linux/cache.h>
-#include <linux/threads.h>
-
-typedef struct {
-	unsigned int __softirq_pending;
-} ____cacheline_aligned irq_cpustat_t;
-
-#include <linux/irq_cpustat.h>	/* Standard mappings for irq_cpustat_t above */
-#include <linux/irq.h>
-
-#ifndef ack_bad_irq
-static inline void ack_bad_irq(unsigned int irq)
-{
-	printk(KERN_CRIT "unexpected IRQ trap at vector %02x\n", irq);
-}
-#endif
+#include <asm-generic/hardirq.h>
 
 #define __ARCH_IRQ_EXIT_IRQS_DISABLED 1
 


* [patch 05/19] ARM: irqstat: Get rid of duplicated declaration
  2020-11-13 14:02 ` Thomas Gleixner
@ 2020-11-13 14:02   ` Thomas Gleixner
  -1 siblings, 0 replies; 136+ messages in thread
From: Thomas Gleixner @ 2020-11-13 14:02 UTC (permalink / raw)
  To: LKML
  Cc: Rich Felker, Catalin Marinas, Paul McKenney, Arnd Bergmann,
	Yoshinori Sato, Peter Zijlstra, Marc Zyngier,
	Frederic Weisbecker, linux-sh, Jeff Dike, Russell King,
	Valentin Schneider, James E.J. Bottomley, Richard Weinberger,
	linux-parisc, Helge Deller, linux-um, Will Deacon,
	Sebastian Andrzej Siewior, linux-arm-kernel, Anton Ivanov

irq_cpustat_t is exactly the same as the asm-generic one. Define
ack_bad_irq so the generic header does not emit the generic version of it.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Russell King <linux@armlinux.org.uk>
Cc: Marc Zyngier <maz@kernel.org>
Cc: Valentin Schneider <valentin.schneider@arm.com>
Cc: linux-arm-kernel@lists.infradead.org
---
 arch/arm/include/asm/hardirq.h |   11 +++--------
 arch/arm/include/asm/irq.h     |    2 ++
 2 files changed, 5 insertions(+), 8 deletions(-)

--- a/arch/arm/include/asm/hardirq.h
+++ b/arch/arm/include/asm/hardirq.h
@@ -2,16 +2,11 @@
 #ifndef __ASM_HARDIRQ_H
 #define __ASM_HARDIRQ_H
 
-#include <linux/cache.h>
-#include <linux/threads.h>
 #include <asm/irq.h>
 
-typedef struct {
-	unsigned int __softirq_pending;
-} ____cacheline_aligned irq_cpustat_t;
-
-#include <linux/irq_cpustat.h>	/* Standard mappings for irq_cpustat_t above */
-
 #define __ARCH_IRQ_EXIT_IRQS_DISABLED	1
+#define ack_bad_irq ack_bad_irq
+
+#include <asm-generic/hardirq.h>
 
 #endif /* __ASM_HARDIRQ_H */
--- a/arch/arm/include/asm/irq.h
+++ b/arch/arm/include/asm/irq.h
@@ -31,6 +31,8 @@ void handle_IRQ(unsigned int, struct pt_
 void init_IRQ(void);
 
 #ifdef CONFIG_SMP
+#include <linux/cpumask.h>
+
 extern void arch_trigger_cpumask_backtrace(const cpumask_t *mask,
 					   bool exclude_self);
 #define arch_trigger_cpumask_backtrace arch_trigger_cpumask_backtrace
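
For reference, a sketch of the override convention at work here (the
generic fallback body is the one patch 04 removes from the um header;
the exact placement of the extern declaration varies per architecture):

  /* arch header, before the generic include: */
  extern void ack_bad_irq(unsigned int irq);	/* arch implementation */
  #define ack_bad_irq ack_bad_irq		/* suppress the generic stub */
  #include <asm-generic/hardirq.h>

  /* asm-generic/hardirq.h only emits its fallback when unset: */
  #ifndef ack_bad_irq
  static inline void ack_bad_irq(unsigned int irq)
  {
  	printk(KERN_CRIT "unexpected IRQ trap at vector %02x\n", irq);
  }
  #endif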


* [patch 06/19] arm64: irqstat: Get rid of duplicated declaration
  2020-11-13 14:02 ` Thomas Gleixner
@ 2020-11-13 14:02   ` Thomas Gleixner
  -1 siblings, 0 replies; 136+ messages in thread
From: Thomas Gleixner @ 2020-11-13 14:02 UTC (permalink / raw)
  To: LKML
  Cc: Rich Felker, Paul McKenney, Arnd Bergmann, linux-sh,
	Peter Zijlstra, Catalin Marinas, Frederic Weisbecker,
	Valentin Schneider, Jeff Dike, Russell King, Yoshinori Sato,
	James E.J. Bottomley, Richard Weinberger, linux-parisc,
	Helge Deller, Marc Zyngier, linux-um, Will Deacon,
	Sebastian Andrzej Siewior, linux-arm-kernel, Anton Ivanov

irq_cpustat_t is exactly the same as the asm-generic one. Define
ack_bad_irq so the generic header does not emit the generic version of it.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will@kernel.org>
Cc: Marc Zyngier <maz@kernel.org>
Cc: linux-arm-kernel@lists.infradead.org
---
 arch/arm64/include/asm/hardirq.h |    7 ++-----
 1 file changed, 2 insertions(+), 5 deletions(-)

--- a/arch/arm64/include/asm/hardirq.h
+++ b/arch/arm64/include/asm/hardirq.h
@@ -13,11 +13,8 @@
 #include <asm/kvm_arm.h>
 #include <asm/sysreg.h>
 
-typedef struct {
-	unsigned int __softirq_pending;
-} ____cacheline_aligned irq_cpustat_t;
-
-#include <linux/irq_cpustat.h>	/* Standard mappings for irq_cpustat_t above */
+#define ack_bad_irq ack_bad_irq
+#include <asm-generic/hardirq.h>
 
 #define __ARCH_IRQ_EXIT_IRQS_DISABLED	1
 


* [patch 07/19] asm-generic/irqstat: Add optional __nmi_count member
  2020-11-13 14:02 ` Thomas Gleixner
@ 2020-11-13 14:02   ` Thomas Gleixner
  -1 siblings, 0 replies; 136+ messages in thread
From: Thomas Gleixner @ 2020-11-13 14:02 UTC (permalink / raw)
  To: LKML
  Cc: Marc Zyngier, Rich Felker, Catalin Marinas, Paul McKenney,
	Arnd Bergmann, linux-sh, Peter Zijlstra, Richard Weinberger,
	Frederic Weisbecker, Valentin Schneider, Jeff Dike, Russell King,
	Yoshinori Sato, James E.J. Bottomley, linux-parisc, Helge Deller,
	linux-um, Will Deacon, Sebastian Andrzej Siewior,
	linux-arm-kernel, Anton Ivanov

Add an optional __nmi_count member to irq_cpustat_t so more architectures
can use the generic version.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
---
 include/asm-generic/hardirq.h |    3 +++
 1 file changed, 3 insertions(+)

--- a/include/asm-generic/hardirq.h
+++ b/include/asm-generic/hardirq.h
@@ -7,6 +7,9 @@
 
 typedef struct {
 	unsigned int __softirq_pending;
+#ifdef ARCH_WANTS_NMI_IRQSTAT
+	unsigned int __nmi_count;
+#endif
 } ____cacheline_aligned irq_cpustat_t;
 
 #include <linux/irq_cpustat.h>	/* Standard mappings for irq_cpustat_t above */
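
An architecture opts in by defining the switch before the include, as
patch 08 below does for SH:

  #define ARCH_WANTS_NMI_IRQSTAT
  #include <asm-generic/hardirq.h>	/* irq_cpustat_t now carries __nmi_count */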


* [patch 08/19] sh: irqstat: Use the generic irq_cpustat_t
  2020-11-13 14:02 ` Thomas Gleixner
@ 2020-11-13 14:02   ` Thomas Gleixner
  -1 siblings, 0 replies; 136+ messages in thread
From: Thomas Gleixner @ 2020-11-13 14:02 UTC (permalink / raw)
  To: LKML
  Cc: Marc Zyngier, Rich Felker, Catalin Marinas, Paul McKenney,
	Arnd Bergmann, linux-sh, Peter Zijlstra, Richard Weinberger,
	Frederic Weisbecker, Valentin Schneider, Jeff Dike, Russell King,
	Yoshinori Sato, James E.J. Bottomley, linux-parisc, Helge Deller,
	linux-um, Will Deacon, Sebastian Andrzej Siewior,
	linux-arm-kernel, Anton Ivanov

SH can now use the generic irq_cpustat_t. Define ack_bad_irq so the generic
header does not emit the generic version of it.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Yoshinori Sato <ysato@users.sourceforge.jp>
Cc: Rich Felker <dalias@libc.org>
Cc: linux-sh@vger.kernel.org
---
 arch/sh/include/asm/hardirq.h |   14 ++++----------
 1 file changed, 4 insertions(+), 10 deletions(-)

--- a/arch/sh/include/asm/hardirq.h
+++ b/arch/sh/include/asm/hardirq.h
@@ -2,16 +2,10 @@
 #ifndef __ASM_SH_HARDIRQ_H
 #define __ASM_SH_HARDIRQ_H
 
-#include <linux/threads.h>
-#include <linux/irq.h>
-
-typedef struct {
-	unsigned int __softirq_pending;
-	unsigned int __nmi_count;		/* arch dependent */
-} ____cacheline_aligned irq_cpustat_t;
-
-#include <linux/irq_cpustat.h>	/* Standard mappings for irq_cpustat_t above */
-
 extern void ack_bad_irq(unsigned int irq);
+#define ack_bad_irq ack_bad_irq
+#define ARCH_WANTS_NMI_IRQSTAT
+
+#include <asm-generic/hardirq.h>
 
 #endif /* __ASM_SH_HARDIRQ_H */


* [patch 09/19] irqstat: Move declaration into asm-generic/hardirq.h
@ 2020-11-13 14:02   ` Thomas Gleixner
  0 siblings, 0 replies; 136+ messages in thread
From: Thomas Gleixner @ 2020-11-13 14:02 UTC (permalink / raw)
  To: LKML
  Cc: Marc Zyngier, Rich Felker, Catalin Marinas, Paul McKenney,
	Arnd Bergmann, linux-sh, Peter Zijlstra, Richard Weinberger,
	Frederic Weisbecker, Valentin Schneider, Jeff Dike, Russell King,
	Yoshinori Sato, James E.J. Bottomley, linux-parisc, Helge Deller,
	linux-um, Will Deacon, Sebastian Andrzej Siewior,
	linux-arm-kernel, Anton Ivanov

Move the declaration of the irq_cpustat per cpu variable to
asm-generic/hardirq.h and remove the now empty linux/irq_cpustat.h header.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
---
 include/asm-generic/hardirq.h |    3 ++-
 include/linux/irq_cpustat.h   |   24 ------------------------
 2 files changed, 2 insertions(+), 25 deletions(-)

--- a/include/asm-generic/hardirq.h
+++ b/include/asm-generic/hardirq.h
@@ -12,7 +12,8 @@ typedef struct {
 #endif
 } ____cacheline_aligned irq_cpustat_t;
 
-#include <linux/irq_cpustat.h>	/* Standard mappings for irq_cpustat_t above */
+DECLARE_PER_CPU_ALIGNED(irq_cpustat_t, irq_stat);
+
 #include <linux/irq.h>
 
 #ifndef ack_bad_irq
--- a/include/linux/irq_cpustat.h
+++ /dev/null
@@ -1,24 +0,0 @@
-/* SPDX-License-Identifier: GPL-2.0 */
-#ifndef __irq_cpustat_h
-#define __irq_cpustat_h
-
-/*
- * Contains default mappings for irq_cpustat_t, used by almost every
- * architecture.  Some arch (like s390) have per cpu hardware pages and
- * they define their own mappings for irq_stat.
- *
- * Keith Owens <kaos@ocs.com.au> July 2000.
- */
-
-
-/*
- * Simple wrappers reducing source bloat.  Define all irq_stat fields
- * here, even ones that are arch dependent.  That way we get common
- * definitions instead of differing sets for each arch.
- */
-
-#ifndef __ARCH_IRQ_STAT
-DECLARE_PER_CPU_ALIGNED(irq_cpustat_t, irq_stat);	/* defined in asm/hardirq.h */
-#endif
-
-#endif	/* __irq_cpustat_h */

^ permalink raw reply	[flat|nested] 136+ messages in thread

* [patch 10/19] preempt: Cleanup the macro maze a bit
@ 2020-11-13 14:02   ` Thomas Gleixner
  0 siblings, 0 replies; 136+ messages in thread
From: Thomas Gleixner @ 2020-11-13 14:02 UTC (permalink / raw)
  To: LKML
  Cc: Marc Zyngier, Rich Felker, Catalin Marinas, Paul McKenney,
	Arnd Bergmann, linux-sh, Peter Zijlstra, Richard Weinberger,
	Frederic Weisbecker, Valentin Schneider, Jeff Dike, Russell King,
	Yoshinori Sato, James E.J. Bottomley, linux-parisc, Helge Deller,
	linux-um, Will Deacon, Sebastian Andrzej Siewior,
	linux-arm-kernel, Anton Ivanov

Make the macro maze consistent and prepare it for adding the RT variant for
BH accounting.

 - Use nmi_count() for the NMI portion of preempt count
 - Introduce in_hardirq() to make the naming consistent and non-ambiguous
 - Use the macros to create combined checks (e.g. in_task()) so the
   softirq representation for RT just falls into place.
 - Update comments and move the deprecated macros aside

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
---
 include/linux/preempt.h |   30 ++++++++++++++++--------------
 1 file changed, 16 insertions(+), 14 deletions(-)

--- a/include/linux/preempt.h
+++ b/include/linux/preempt.h
@@ -77,31 +77,33 @@
 /* preempt_count() and related functions, depends on PREEMPT_NEED_RESCHED */
 #include <asm/preempt.h>
 
+#define nmi_count()	(preempt_count() & NMI_MASK)
 #define hardirq_count()	(preempt_count() & HARDIRQ_MASK)
 #define softirq_count()	(preempt_count() & SOFTIRQ_MASK)
-#define irq_count()	(preempt_count() & (HARDIRQ_MASK | SOFTIRQ_MASK \
-				 | NMI_MASK))
+#define irq_count()	(nmi_count() | hardirq_count() | softirq_count())
 
 /*
- * Are we doing bottom half or hardware interrupt processing?
+ * Macros to retrieve the current execution context:
  *
- * in_irq()       - We're in (hard) IRQ context
+ * in_nmi()		- We're in NMI context
+ * in_hardirq()		- We're in hard IRQ context
+ * in_serving_softirq()	- We're in softirq context
+ * in_task()		- We're in task context
+ */
+#define in_nmi()		(nmi_count())
+#define in_hardirq()		(hardirq_count())
+#define in_serving_softirq()	(softirq_count() & SOFTIRQ_OFFSET)
+#define in_task()		(!(in_nmi() | in_hardirq() | in_serving_softirq()))
+
+/*
+ * The following macros are deprecated and should not be used in new code:
+ * in_irq()       - Obsolete version of in_hardirq()
  * in_softirq()   - We have BH disabled, or are processing softirqs
  * in_interrupt() - We're in NMI,IRQ,SoftIRQ context or have BH disabled
- * in_serving_softirq() - We're in softirq context
- * in_nmi()       - We're in NMI context
- * in_task()	  - We're in task context
- *
- * Note: due to the BH disabled confusion: in_softirq(),in_interrupt() really
- *       should not be used in new code.
  */
 #define in_irq()		(hardirq_count())
 #define in_softirq()		(softirq_count())
 #define in_interrupt()		(irq_count())
-#define in_serving_softirq()	(softirq_count() & SOFTIRQ_OFFSET)
-#define in_nmi()		(preempt_count() & NMI_MASK)
-#define in_task()		(!(preempt_count() & \
-				   (NMI_MASK | HARDIRQ_MASK | SOFTIRQ_OFFSET)))
 
 /*
  * The preempt_count offset after preempt_disable();
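
As a quick sanity check of how the reworked predicates compose, here is a
minimal, self-contained user-space sketch. The bit widths mirror
include/linux/preempt.h as of this series (quoted from memory, so verify
against the tree), and 'pc' stands in for the real preempt count:

#include <assert.h>
#include <stdio.h>

#define PREEMPT_BITS	8
#define SOFTIRQ_BITS	8
#define HARDIRQ_BITS	4
#define NMI_BITS	4

#define PREEMPT_SHIFT	0
#define SOFTIRQ_SHIFT	(PREEMPT_SHIFT + PREEMPT_BITS)
#define HARDIRQ_SHIFT	(SOFTIRQ_SHIFT + SOFTIRQ_BITS)
#define NMI_SHIFT	(HARDIRQ_SHIFT + HARDIRQ_BITS)

#define __IRQ_MASK(x)	((1U << (x)) - 1)
#define SOFTIRQ_MASK	(__IRQ_MASK(SOFTIRQ_BITS) << SOFTIRQ_SHIFT)
#define HARDIRQ_MASK	(__IRQ_MASK(HARDIRQ_BITS) << HARDIRQ_SHIFT)
#define NMI_MASK	(__IRQ_MASK(NMI_BITS) << NMI_SHIFT)

#define SOFTIRQ_OFFSET		(1U << SOFTIRQ_SHIFT)
#define SOFTIRQ_DISABLE_OFFSET	(2 * SOFTIRQ_OFFSET)

static unsigned int pc;		/* stand-in for preempt_count() */
#define preempt_count()	(pc)

/* The macros exactly as the patch arranges them: */
#define nmi_count()		(preempt_count() & NMI_MASK)
#define hardirq_count()		(preempt_count() & HARDIRQ_MASK)
#define softirq_count()		(preempt_count() & SOFTIRQ_MASK)
#define irq_count()		(nmi_count() | hardirq_count() | softirq_count())
#define in_nmi()		(nmi_count())
#define in_hardirq()		(hardirq_count())
#define in_serving_softirq()	(softirq_count() & SOFTIRQ_OFFSET)
#define in_task()		(!(in_nmi() | in_hardirq() | in_serving_softirq()))

int main(void)
{
	pc = 0;				/* plain task context */
	assert(in_task() && !irq_count());

	pc = SOFTIRQ_DISABLE_OFFSET;	/* local_bh_disable(): counted ... */
	assert(in_task() && softirq_count() && !in_serving_softirq());

	pc = SOFTIRQ_OFFSET;		/* ... vs. actually serving softirqs */
	assert(in_serving_softirq() && !in_task());

	puts("context predicates behave as described");
	return 0;
}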

^ permalink raw reply	[flat|nested] 136+ messages in thread

* [patch 11/19] softirq: Move related code into one section
@ 2020-11-13 14:02   ` Thomas Gleixner
  0 siblings, 0 replies; 136+ messages in thread
From: Thomas Gleixner @ 2020-11-13 14:02 UTC (permalink / raw)
  To: LKML
  Cc: Marc Zyngier, Rich Felker, Catalin Marinas, Paul McKenney,
	Arnd Bergmann, linux-sh, Peter Zijlstra, Richard Weinberger,
	Frederic Weisbecker, Valentin Schneider, Jeff Dike, Russell King,
	Yoshinori Sato, James E.J. Bottomley, linux-parisc, Helge Deller,
	linux-um, Will Deacon, Sebastian Andrzej Siewior,
	linux-arm-kernel, Anton Ivanov

To prepare for adding an RT aware variant of softirq serialization and
processing, move the related code into one section so the necessary
#ifdeffery is reduced to a single block.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
---
 kernel/softirq.c |  107 +++++++++++++++++++++++++++----------------------------
 1 file changed, 54 insertions(+), 53 deletions(-)

--- a/kernel/softirq.c
+++ b/kernel/softirq.c
@@ -92,6 +92,13 @@ static bool ksoftirqd_running(unsigned l
 		!__kthread_should_park(tsk);
 }
 
+#ifdef CONFIG_TRACE_IRQFLAGS
+DEFINE_PER_CPU(int, hardirqs_enabled);
+DEFINE_PER_CPU(int, hardirq_context);
+EXPORT_PER_CPU_SYMBOL_GPL(hardirqs_enabled);
+EXPORT_PER_CPU_SYMBOL_GPL(hardirq_context);
+#endif
+
 /*
  * preempt_count and SOFTIRQ_OFFSET usage:
  * - preempt_count is changed by SOFTIRQ_OFFSET on entering or leaving
@@ -102,17 +109,11 @@ static bool ksoftirqd_running(unsigned l
  * softirq and whether we just have bh disabled.
  */
 
+#ifdef CONFIG_TRACE_IRQFLAGS
 /*
- * This one is for softirq.c-internal use,
- * where hardirqs are disabled legitimately:
+ * This is for softirq.c-internal use, where hardirqs are disabled
+ * legitimately:
  */
-#ifdef CONFIG_TRACE_IRQFLAGS
-
-DEFINE_PER_CPU(int, hardirqs_enabled);
-DEFINE_PER_CPU(int, hardirq_context);
-EXPORT_PER_CPU_SYMBOL_GPL(hardirqs_enabled);
-EXPORT_PER_CPU_SYMBOL_GPL(hardirq_context);
-
 void __local_bh_disable_ip(unsigned long ip, unsigned int cnt)
 {
 	unsigned long flags;
@@ -203,6 +204,50 @@ void __local_bh_enable_ip(unsigned long
 }
 EXPORT_SYMBOL(__local_bh_enable_ip);
 
+static inline void invoke_softirq(void)
+{
+	if (ksoftirqd_running(local_softirq_pending()))
+		return;
+
+	if (!force_irqthreads) {
+#ifdef CONFIG_HAVE_IRQ_EXIT_ON_IRQ_STACK
+		/*
+		 * We can safely execute softirq on the current stack if
+		 * it is the irq stack, because it should be near empty
+		 * at this stage.
+		 */
+		__do_softirq();
+#else
+		/*
+		 * Otherwise, irq_exit() is called on the task stack that can
+		 * be potentially deep already. So call softirq in its own stack
+		 * to prevent from any overrun.
+		 */
+		do_softirq_own_stack();
+#endif
+	} else {
+		wakeup_softirqd();
+	}
+}
+
+asmlinkage __visible void do_softirq(void)
+{
+	__u32 pending;
+	unsigned long flags;
+
+	if (in_interrupt())
+		return;
+
+	local_irq_save(flags);
+
+	pending = local_softirq_pending();
+
+	if (pending && !ksoftirqd_running(pending))
+		do_softirq_own_stack();
+
+	local_irq_restore(flags);
+}
+
 /*
  * We restart softirq processing for at most MAX_SOFTIRQ_RESTART times,
  * but break the loop if need_resched() is set or after 2 ms.
@@ -327,24 +372,6 @@ asmlinkage __visible void __softirq_entr
 	current_restore_flags(old_flags, PF_MEMALLOC);
 }
 
-asmlinkage __visible void do_softirq(void)
-{
-	__u32 pending;
-	unsigned long flags;
-
-	if (in_interrupt())
-		return;
-
-	local_irq_save(flags);
-
-	pending = local_softirq_pending();
-
-	if (pending && !ksoftirqd_running(pending))
-		do_softirq_own_stack();
-
-	local_irq_restore(flags);
-}
-
 /**
  * irq_enter_rcu - Enter an interrupt context with RCU watching
  */
@@ -371,32 +398,6 @@ void irq_enter(void)
 	irq_enter_rcu();
 }
 
-static inline void invoke_softirq(void)
-{
-	if (ksoftirqd_running(local_softirq_pending()))
-		return;
-
-	if (!force_irqthreads) {
-#ifdef CONFIG_HAVE_IRQ_EXIT_ON_IRQ_STACK
-		/*
-		 * We can safely execute softirq on the current stack if
-		 * it is the irq stack, because it should be near empty
-		 * at this stage.
-		 */
-		__do_softirq();
-#else
-		/*
-		 * Otherwise, irq_exit() is called on the task stack that can
-		 * be potentially deep already. So call softirq in its own stack
-		 * to prevent from any overrun.
-		 */
-		do_softirq_own_stack();
-#endif
-	} else {
-		wakeup_softirqd();
-	}
-}
-
 static inline void tick_irq_exit(void)
 {
 #ifdef CONFIG_NO_HZ_COMMON

^ permalink raw reply	[flat|nested] 136+ messages in thread

* [patch 12/19] softirq: Add RT specific softirq accounting
@ 2020-11-13 14:02   ` Thomas Gleixner
  0 siblings, 0 replies; 136+ messages in thread
From: Thomas Gleixner @ 2020-11-13 14:02 UTC (permalink / raw)
  To: LKML
  Cc: Marc Zyngier, Rich Felker, Catalin Marinas, Paul McKenney,
	Arnd Bergmann, linux-sh, Peter Zijlstra, Richard Weinberger,
	Frederic Weisbecker, Valentin Schneider, Jeff Dike, Russell King,
	Yoshinori Sato, James E.J. Bottomley, linux-parisc, Helge Deller,
	linux-um, Will Deacon, Sebastian Andrzej Siewior,
	linux-arm-kernel, Anton Ivanov

RT requires softirq processing to be preemptible and uses a per CPU local
lock to protect BH disabled sections and softirq processing. Therefore RT
cannot use the preempt counter to keep track of BH disabled/serving state.

Add an RT only counter to task_struct and adjust the relevant macros in
preempt.h.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
---
 include/linux/hardirq.h |    1 +
 include/linux/preempt.h |    6 +++++-
 include/linux/sched.h   |    3 +++
 3 files changed, 9 insertions(+), 1 deletion(-)

--- a/include/linux/hardirq.h
+++ b/include/linux/hardirq.h
@@ -6,6 +6,7 @@
 #include <linux/preempt.h>
 #include <linux/lockdep.h>
 #include <linux/ftrace_irq.h>
+#include <linux/sched.h>
 #include <linux/vtime.h>
 #include <asm/hardirq.h>
 
--- a/include/linux/preempt.h
+++ b/include/linux/preempt.h
@@ -79,7 +79,11 @@
 
 #define nmi_count()	(preempt_count() & NMI_MASK)
 #define hardirq_count()	(preempt_count() & HARDIRQ_MASK)
-#define softirq_count()	(preempt_count() & SOFTIRQ_MASK)
+#ifdef CONFIG_PREEMPT_RT
+# define softirq_count()	(current->softirq_disable_cnt & SOFTIRQ_MASK)
+#else
+# define softirq_count()	(preempt_count() & SOFTIRQ_MASK)
+#endif
 #define irq_count()	(nmi_count() | hardirq_count() | softirq_count())
 
 /*
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -1004,6 +1004,9 @@ struct task_struct {
 	int				softirq_context;
 	int				irq_config;
 #endif
+#ifdef CONFIG_PREEMPT_RT
+	int				softirq_disable_cnt;
+#endif
 
 #ifdef CONFIG_LOCKDEP
 # define MAX_LOCK_DEPTH			48UL
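
The effect of the switch is easiest to see in isolation: on RT the
accounting lives in the task, not in the preempt count, so a BH disabled
task stays preemptible. A stand-alone sketch, where 'struct task' and
'cur' are simplified stand-ins for task_struct and 'current':

#include <assert.h>

#define SOFTIRQ_SHIFT		8
#define SOFTIRQ_MASK		(0xffU << SOFTIRQ_SHIFT)
#define SOFTIRQ_DISABLE_OFFSET	(2U << SOFTIRQ_SHIFT)

struct task {
	int softirq_disable_cnt;	/* models the new task_struct field */
};

static struct task task0;
static struct task *cur = &task0;	/* stand-in for 'current' */
static unsigned int pc;			/* stand-in for preempt_count() */

#define PREEMPT_RT 1			/* flip to 0 for the !RT variant */

#if PREEMPT_RT
# define softirq_count() ((unsigned int)cur->softirq_disable_cnt & SOFTIRQ_MASK)
#else
# define softirq_count() (pc & SOFTIRQ_MASK)
#endif

int main(void)
{
	/* RT's local_bh_disable() bumps the per task counter only ... */
	cur->softirq_disable_cnt += SOFTIRQ_DISABLE_OFFSET;
	assert(softirq_count() == SOFTIRQ_DISABLE_OFFSET);

	/* ... while the preempt count stays zero, so the BH disabled
	   task remains preemptible, which is the point of the exercise. */
	assert(pc == 0);
	return 0;
}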

^ permalink raw reply	[flat|nested] 136+ messages in thread

* [patch 13/19] softirq: Move various protections into inline helpers
@ 2020-11-13 14:02   ` Thomas Gleixner
  0 siblings, 0 replies; 136+ messages in thread
From: Thomas Gleixner @ 2020-11-13 14:02 UTC (permalink / raw)
  To: LKML
  Cc: Marc Zyngier, Rich Felker, Catalin Marinas, Paul McKenney,
	Arnd Bergmann, linux-sh, Peter Zijlstra, Richard Weinberger,
	Frederic Weisbecker, Valentin Schneider, Jeff Dike, Russell King,
	Yoshinori Sato, James E.J. Bottomley, linux-parisc, Helge Deller,
	linux-um, Will Deacon, Sebastian Andrzej Siewior,
	linux-arm-kernel, Anton Ivanov

To allow reuse of the bulk of softirq processing code for RT and to avoid
#ifdeffery all over the place, split protections for various code sections
out into inline helpers so the RT variant can just replace them in one go.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
---
 kernel/softirq.c |   53 ++++++++++++++++++++++++++++++++++++++++++++---------
 1 file changed, 44 insertions(+), 9 deletions(-)

--- a/kernel/softirq.c
+++ b/kernel/softirq.c
@@ -204,6 +204,42 @@ void __local_bh_enable_ip(unsigned long
 }
 EXPORT_SYMBOL(__local_bh_enable_ip);
 
+static inline void local_bh_disable_irq_enter(void)
+{
+	local_bh_disable();
+}
+
+static inline void local_bh_enable_irq_enter(void)
+{
+	_local_bh_enable();
+}
+
+static inline void softirq_handle_begin(void)
+{
+	__local_bh_disable_ip(_RET_IP_, SOFTIRQ_OFFSET);
+}
+
+static inline void softirq_handle_end(void)
+{
+	__local_bh_enable(SOFTIRQ_OFFSET);
+	WARN_ON_ONCE(in_interrupt());
+}
+
+static inline void ksoftirqd_run_begin(void)
+{
+	local_irq_disable();
+}
+
+static inline void ksoftirqd_run_end(void)
+{
+	local_irq_enable();
+}
+
+static inline bool should_wake_ksoftirqd(void)
+{
+	return true;
+}
+
 static inline void invoke_softirq(void)
 {
 	if (ksoftirqd_running(local_softirq_pending()))
@@ -317,7 +353,7 @@ asmlinkage __visible void __softirq_entr
 	pending = local_softirq_pending();
 	account_irq_enter_time(current);
 
-	__local_bh_disable_ip(_RET_IP_, SOFTIRQ_OFFSET);
+	softirq_handle_begin();
 	in_hardirq = lockdep_softirq_start();
 
 restart:
@@ -367,8 +403,7 @@ asmlinkage __visible void __softirq_entr
 
 	lockdep_softirq_end(in_hardirq);
 	account_irq_exit_time(current);
-	__local_bh_enable(SOFTIRQ_OFFSET);
-	WARN_ON_ONCE(in_interrupt());
+	softirq_handle_end();
 	current_restore_flags(old_flags, PF_MEMALLOC);
 }
 
@@ -382,9 +417,9 @@ void irq_enter_rcu(void)
 		 * Prevent raise_softirq from needlessly waking up ksoftirqd
 		 * here, as softirq will be serviced on return from interrupt.
 		 */
-		local_bh_disable();
+		local_bh_disable_irq_enter();
 		tick_irq_enter();
-		_local_bh_enable();
+		local_bh_enable_irq_enter();
 	}
 	__irq_enter();
 }
@@ -467,7 +502,7 @@ inline void raise_softirq_irqoff(unsigne
 	 * Otherwise we wake up ksoftirqd to make sure we
 	 * schedule the softirq soon.
 	 */
-	if (!in_interrupt())
+	if (!in_interrupt() && should_wake_ksoftirqd())
 		wakeup_softirqd();
 }
 
@@ -645,18 +680,18 @@ static int ksoftirqd_should_run(unsigned
 
 static void run_ksoftirqd(unsigned int cpu)
 {
-	local_irq_disable();
+	ksoftirqd_run_begin();
 	if (local_softirq_pending()) {
 		/*
 		 * We can safely run softirq on inline stack, as we are not deep
 		 * in the task stack here.
 		 */
 		__do_softirq();
-		local_irq_enable();
+		ksoftirqd_run_end();
 		cond_resched();
 		return;
 	}
-	local_irq_enable();
+	ksoftirqd_run_end();
 }
 
 #ifdef CONFIG_HOTPLUG_CPU
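
The payoff of this split is that a configuration can swap the seam
functions without touching the processing loop itself. A minimal
stand-alone sketch of the pattern; the puts() calls are placeholders
for the real locking and interrupt operations:

#include <stdio.h>

#define PREEMPT_RT 0	/* flip to 1 to select the RT seams */

#if PREEMPT_RT
static void ksoftirqd_run_begin(void) { puts("local_lock + irq off"); }
static void ksoftirqd_run_end(void)   { puts("local_unlock + irq on"); }
#else
static void ksoftirqd_run_begin(void) { puts("irq off"); }
static void ksoftirqd_run_end(void)   { puts("irq on"); }
#endif

/* The bulk of the processing code is written once, against the seams: */
static void run_ksoftirqd(void)
{
	ksoftirqd_run_begin();
	puts("__do_softirq()");
	ksoftirqd_run_end();
}

int main(void)
{
	run_ksoftirqd();
	return 0;
}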

^ permalink raw reply	[flat|nested] 136+ messages in thread

* [patch 14/19] softirq: Make softirq control and processing RT aware
@ 2020-11-13 14:02   ` Thomas Gleixner
  0 siblings, 0 replies; 136+ messages in thread
From: Thomas Gleixner @ 2020-11-13 14:02 UTC (permalink / raw)
  To: LKML
  Cc: Marc Zyngier, Rich Felker, Catalin Marinas, Paul McKenney,
	Arnd Bergmann, linux-sh, Peter Zijlstra, Richard Weinberger,
	Frederic Weisbecker, Valentin Schneider, Jeff Dike, Russell King,
	Yoshinori Sato, James E.J. Bottomley, linux-parisc, Helge Deller,
	linux-um, Will Deacon, Sebastian Andrzej Siewior,
	linux-arm-kernel, Anton Ivanov

Provide a local lock based serialization for soft interrupts on RT which
allows both local_bh_disable()'d sections and the servicing of soft
interrupts to be preemptible.

Provide the necessary inline helpers which allow reuse of the bulk of the
softirq processing code.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
---
 include/linux/bottom_half.h |    2 
 kernel/softirq.c            |  207 ++++++++++++++++++++++++++++++++++++++++++--
 2 files changed, 201 insertions(+), 8 deletions(-)

--- a/include/linux/bottom_half.h
+++ b/include/linux/bottom_half.h
@@ -4,7 +4,7 @@
 
 #include <linux/preempt.h>
 
-#ifdef CONFIG_TRACE_IRQFLAGS
+#if defined(CONFIG_PREEMPT_RT) || defined(CONFIG_TRACE_IRQFLAGS)
 extern void __local_bh_disable_ip(unsigned long ip, unsigned int cnt);
 #else
 static __always_inline void __local_bh_disable_ip(unsigned long ip, unsigned int cnt)
--- a/kernel/softirq.c
+++ b/kernel/softirq.c
@@ -13,6 +13,7 @@
 #include <linux/kernel_stat.h>
 #include <linux/interrupt.h>
 #include <linux/init.h>
+#include <linux/local_lock.h>
 #include <linux/mm.h>
 #include <linux/notifier.h>
 #include <linux/percpu.h>
@@ -100,20 +101,208 @@ EXPORT_PER_CPU_SYMBOL_GPL(hardirq_contex
 #endif
 
 /*
- * preempt_count and SOFTIRQ_OFFSET usage:
- * - preempt_count is changed by SOFTIRQ_OFFSET on entering or leaving
- *   softirq processing.
- * - preempt_count is changed by SOFTIRQ_DISABLE_OFFSET (= 2 * SOFTIRQ_OFFSET)
+ * SOFTIRQ_OFFSET usage:
+ *
+ * On !RT kernels 'count' is the preempt counter, on RT kernels this applies
+ * to a per CPU counter and to task::softirq_disable_cnt.
+ *
+ * - count is changed by SOFTIRQ_OFFSET on entering or leaving softirq
+ *   processing.
+ *
+ * - count is changed by SOFTIRQ_DISABLE_OFFSET (= 2 * SOFTIRQ_OFFSET)
  *   on local_bh_disable or local_bh_enable.
+ *
  * This lets us distinguish between whether we are currently processing
  * softirq and whether we just have bh disabled.
  */
+#ifdef CONFIG_PREEMPT_RT
 
-#ifdef CONFIG_TRACE_IRQFLAGS
 /*
- * This is for softirq.c-internal use, where hardirqs are disabled
+ * RT accounts for BH disabled sections in task::softirq_disable_cnt and
+ * also in per CPU softirq_ctrl::cnt. This is necessary to allow tasks in a
+ * softirq disabled section to be preempted.
+ *
+ * The per task counter is used for softirq_count(), in_softirq() and
+ * in_serving_softirq() because these counts are only valid when the task
+ * holding softirq_ctrl::lock is running.
+ *
+ * The per CPU counter prevents pointless wakeups of ksoftirqd in case
+ * the task which is in a softirq disabled section is preempted or blocks.
+ */
+struct softirq_ctrl {
+	local_lock_t	lock;
+	int		cnt;
+};
+
+static DEFINE_PER_CPU(struct softirq_ctrl, softirq_ctrl) = {
+	.lock	= INIT_LOCAL_LOCK(softirq_ctrl.lock),
+};
+
+void __local_bh_disable_ip(unsigned long ip, unsigned int cnt)
+{
+	unsigned long flags;
+	int newcnt;
+
+	WARN_ON_ONCE(in_hardirq());
+
+	/* First entry of a task into a BH disabled section? */
+	if (!current->softirq_disable_cnt) {
+		if (preemptible()) {
+			local_lock(&softirq_ctrl.lock);
+			rcu_read_lock();
+		} else {
+			DEBUG_LOCKS_WARN_ON(this_cpu_read(softirq_ctrl.cnt));
+		}
+	}
+
+	preempt_disable();
+	/*
+	 * Track the per CPU softirq disabled state. On RT this is per CPU
+	 * state to allow preemption of bottom half disabled sections.
+	 */
+	newcnt = this_cpu_add_return(softirq_ctrl.cnt, cnt);
+	/*
+	 * Reflect the result in the task state to prevent recursion on the
+	 * local lock and to make softirq_count() & al work.
+	 */
+	current->softirq_disable_cnt = newcnt;
+
+	if (IS_ENABLED(CONFIG_TRACE_IRQFLAGS) && newcnt == cnt) {
+		raw_local_irq_save(flags);
+		lockdep_softirqs_off(ip);
+		raw_local_irq_restore(flags);
+	}
+	preempt_enable();
+}
+EXPORT_SYMBOL(__local_bh_disable_ip);
+
+static void __local_bh_enable(unsigned int cnt, bool unlock)
+{
+	unsigned long flags;
+	int newcnt;
+
+	DEBUG_LOCKS_WARN_ON(current->softirq_disable_cnt !=
+			    this_cpu_read(softirq_ctrl.cnt));
+
+	preempt_disable();
+	if (IS_ENABLED(CONFIG_TRACE_IRQFLAGS) && softirq_count() == cnt) {
+		raw_local_irq_save(flags);
+		lockdep_softirqs_on(_RET_IP_);
+		raw_local_irq_restore(flags);
+	}
+
+	newcnt = this_cpu_sub_return(softirq_ctrl.cnt, cnt);
+	current->softirq_disable_cnt = newcnt;
+	preempt_enable();
+
+	if (!newcnt && unlock) {
+		rcu_read_unlock();
+		local_unlock(&softirq_ctrl.lock);
+	}
+}
+
+void __local_bh_enable_ip(unsigned long ip, unsigned int cnt)
+{
+	bool preempt_on = preemptible();
+	unsigned long flags;
+	u32 pending;
+	int curcnt;
+
+	WARN_ON_ONCE(in_irq());
+	lockdep_assert_irqs_enabled();
+
+	local_irq_save(flags);
+	curcnt = this_cpu_read(softirq_ctrl.cnt);
+
+	/*
+	 * If this is not reenabling soft interrupts, no point in trying to
+	 * run pending ones.
+	 */
+	if (curcnt != cnt)
+		goto out;
+
+	pending = local_softirq_pending();
+	if (!pending || ksoftirqd_running(pending))
+		goto out;
+
+	/*
+	 * If this was called from non preemptible context, wake up the
+	 * softirq daemon.
+	 */
+	if (!preempt_on) {
+		wakeup_softirqd();
+		goto out;
+	}
+
+	/*
+	 * Adjust softirq count to SOFTIRQ_OFFSET which makes
+	 * in_serving_softirq() become true.
+	 */
+	cnt = SOFTIRQ_OFFSET;
+	__local_bh_enable(cnt, false);
+	__do_softirq();
+
+out:
+	__local_bh_enable(cnt, preempt_on);
+	local_irq_restore(flags);
+}
+EXPORT_SYMBOL(__local_bh_enable_ip);
+
+/*
+ * Invoked from irq_enter_rcu() to prevent tick_irq_enter() from
+ * pointlessly waking the softirq daemon; that's handled in __irq_exit_rcu().
+ * None of the above logic in the regular bh_disable/enable functions is
+ * required here.
+ */
+static inline void local_bh_disable_irq_enter(void)
+{
+	this_cpu_add(softirq_ctrl.cnt, SOFTIRQ_DISABLE_OFFSET);
+}
+
+static inline void local_bh_enable_irq_enter(void)
+{
+	this_cpu_sub(softirq_ctrl.cnt, SOFTIRQ_DISABLE_OFFSET);
+}
+
+/*
+ * Invoked from ksoftirqd_run() outside of the interrupt disabled section
+ * to acquire the per CPU local lock for reentrancy protection.
+ */
+static inline void ksoftirqd_run_begin(void)
+{
+	__local_bh_disable_ip(_RET_IP_, SOFTIRQ_OFFSET);
+	local_irq_disable();
+}
+
+/* Counterpart to ksoftirqd_run_begin() */
+static inline void ksoftirqd_run_end(void)
+{
+	__local_bh_enable(SOFTIRQ_OFFSET, true);
+	WARN_ON_ONCE(in_interrupt());
+	local_irq_enable();
+}
+
+static inline void softirq_handle_begin(void) { }
+static inline void softirq_handle_end(void) { }
+
+static inline void invoke_softirq(void)
+{
+	if (!this_cpu_read(softirq_ctrl.cnt))
+		wakeup_softirqd();
+}
+
+static inline bool should_wake_ksoftirqd(void)
+{
+	return !this_cpu_read(softirq_ctrl.cnt);
+}
+
+#else /* CONFIG_PREEMPT_RT */
+
+/*
+ * This one is for softirq.c-internal use, where hardirqs are disabled
  * legitimately:
  */
+#ifdef CONFIG_TRACE_IRQFLAGS
 void __local_bh_disable_ip(unsigned long ip, unsigned int cnt)
 {
 	unsigned long flags;
@@ -284,6 +473,8 @@ asmlinkage __visible void do_softirq(voi
 	local_irq_restore(flags);
 }
 
+#endif /* !CONFIG_PREEMPT_RT */
+
 /*
  * We restart softirq processing for at most MAX_SOFTIRQ_RESTART times,
  * but break the loop if need_resched() is set or after 2 ms.
@@ -388,8 +579,10 @@ asmlinkage __visible void __softirq_entr
 		pending >>= softirq_bit;
 	}
 
-	if (__this_cpu_read(ksoftirqd) == current)
+	if (!IS_ENABLED(CONFIG_PREEMPT_RT) &&
+	    __this_cpu_read(ksoftirqd) == current)
 		rcu_softirq_qs();
+
 	local_irq_disable();
 
 	pending = local_softirq_pending();
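
The nesting rule the RT helpers above implement (only the outermost BH
disabled section takes softirq_ctrl.lock; inner sections merely adjust the
counters) can be modeled in a few lines of stand-alone C. Single task, no
concurrency; 'lock_held' stands in for the per CPU local lock:

#include <assert.h>

#define SOFTIRQ_DISABLE_OFFSET	0x200

static int ctrl_cnt;	/* models softirq_ctrl.cnt (per CPU)  */
static int task_cnt;	/* models current->softirq_disable_cnt */
static int lock_held;	/* models softirq_ctrl.lock            */

static void bh_disable(void)
{
	if (!task_cnt) {		/* outermost entry? */
		assert(!lock_held);
		lock_held = 1;		/* local_lock(&softirq_ctrl.lock) */
	}
	ctrl_cnt += SOFTIRQ_DISABLE_OFFSET;
	task_cnt = ctrl_cnt;		/* mirror into the task */
}

static void bh_enable(void)
{
	ctrl_cnt -= SOFTIRQ_DISABLE_OFFSET;
	task_cnt = ctrl_cnt;
	if (!task_cnt)			/* outermost exit? */
		lock_held = 0;		/* local_unlock(&softirq_ctrl.lock) */
}

int main(void)
{
	bh_disable();			/* outer section takes the lock */
	bh_disable();			/* nested section: counters only */
	assert(lock_held && ctrl_cnt == 2 * SOFTIRQ_DISABLE_OFFSET);
	bh_enable();
	assert(lock_held);		/* still inside the outer section */
	bh_enable();
	assert(!lock_held);
	return 0;
}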

^ permalink raw reply	[flat|nested] 136+ messages in thread

* [patch 15/19] tick/sched: Prevent false positive softirq pending warnings on RT
  2020-11-13 14:02 ` Thomas Gleixner
  (?)
@ 2020-11-13 14:02   ` Thomas Gleixner
  -1 siblings, 0 replies; 136+ messages in thread
From: Thomas Gleixner @ 2020-11-13 14:02 UTC (permalink / raw)
  To: LKML
  Cc: Marc Zyngier, Rich Felker, Catalin Marinas, Paul McKenney,
	Arnd Bergmann, linux-sh, Peter Zijlstra, Richard Weinberger,
	Frederic Weisbecker, Valentin Schneider, Jeff Dike, Russell King,
	Yoshinori Sato, James E.J. Bottomley, linux-parisc, Helge Deller,
	linux-um, Will Deacon, Sebastian Andrzej Siewior,
	linux-arm-kernel, Anton Ivanov

On RT a task which has soft interrupts disabled can block on a lock and
schedule out to idle while soft interrupts are pending. This triggers the
warning in the NOHZ idle code, which complains about going idle with pending
soft interrupts. But as the task is blocked, soft interrupt processing is
temporarily blocked as well, which means that such a warning is a false
positive.

To prevent that, check the per CPU state which indicates that a scheduled
out task has soft interrupts disabled.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
---
 include/linux/bottom_half.h |    6 ++++++
 kernel/softirq.c            |   15 +++++++++++++++
 kernel/time/tick-sched.c    |    2 +-
 3 files changed, 22 insertions(+), 1 deletion(-)

--- a/include/linux/bottom_half.h
+++ b/include/linux/bottom_half.h
@@ -32,4 +32,10 @@ static inline void local_bh_enable(void)
 	__local_bh_enable_ip(_THIS_IP_, SOFTIRQ_DISABLE_OFFSET);
 }
 
+#ifdef CONFIG_PREEMPT_RT
+extern bool local_bh_blocked(void);
+#else
+static inline bool local_bh_blocked(void) { return false; }
+#endif
+
 #endif /* _LINUX_BH_H */
--- a/kernel/softirq.c
+++ b/kernel/softirq.c
@@ -138,6 +138,21 @@ static DEFINE_PER_CPU(struct softirq_ctr
 	.lock	= INIT_LOCAL_LOCK(softirq_ctrl.lock),
 };
 
+/**
+ * local_bh_blocked() - Check from idle whether BH processing is blocked
+ *
+ * Returns false if the per CPU softirq_ctrl::cnt is 0, otherwise true.
+ *
+ * This is invoked from the idle task to guard against false positive
+ * softirq pending warnings, which would happen when the task which holds
+ * softirq_ctrl::lock was the only running task on the CPU and blocks on
+ * some other lock.
+ */
+bool local_bh_blocked(void)
+{
+	return this_cpu_read(softirq_ctrl.cnt) != 0;
+}
+
 void __local_bh_disable_ip(unsigned long ip, unsigned int cnt)
 {
 	unsigned long flags;
--- a/kernel/time/tick-sched.c
+++ b/kernel/time/tick-sched.c
@@ -925,7 +925,7 @@ static bool can_stop_idle_tick(int cpu,
 	if (unlikely(local_softirq_pending())) {
 		static int ratelimit;
 
-		if (ratelimit < 10 &&
+		if (ratelimit < 10 && !local_bh_blocked() &&
 		    (local_softirq_pending() & SOFTIRQ_STOP_IDLE_MASK)) {
 			pr_warn("NOHZ tick-stop error: Non-RCU local softirq work is pending, handler #%02x!!!\n",
 				(unsigned int) local_softirq_pending());
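
The one-line change is easy to misread, so here is a stand-alone model of
the adjusted condition (softirq_pending and bh_blocked are illustrative
stand-ins for local_softirq_pending() and local_bh_blocked(), not kernel
interfaces):

  #include <stdbool.h>
  #include <stdio.h>

  static unsigned int softirq_pending;
  static bool bh_blocked;	/* true while a scheduled out task has BH disabled */

  static bool should_warn_on_idle(void)
  {
  	/* Pending softirqs are only a problem when nothing blocks their
  	 * processing; if the owner of the BH lock is merely scheduled
  	 * out, the pending bits are expected and harmless. */
  	return softirq_pending && !bh_blocked;
  }

  int main(void)
  {
  	softirq_pending = 0x02;	/* e.g. a timer softirq pending */
  	bh_blocked = true;
  	printf("warn: %d\n", should_warn_on_idle());	/* 0: false positive avoided */
  	bh_blocked = false;
  	printf("warn: %d\n", should_warn_on_idle());	/* 1: genuine problem */
  	return 0;
  }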

^ permalink raw reply	[flat|nested] 136+ messages in thread

* [patch 16/19] rcu: Prevent false positive softirq warning on RT
  2020-11-13 14:02 ` Thomas Gleixner
  (?)
@ 2020-11-13 14:02   ` Thomas Gleixner
  -1 siblings, 0 replies; 136+ messages in thread
From: Thomas Gleixner @ 2020-11-13 14:02 UTC (permalink / raw)
  To: LKML
  Cc: Marc Zyngier, Rich Felker, Catalin Marinas, Paul McKenney,
	Arnd Bergmann, linux-sh, Peter Zijlstra, Richard Weinberger,
	Frederic Weisbecker, Valentin Schneider, Jeff Dike, Russell King,
	Yoshinori Sato, James E.J. Bottomley, linux-parisc, Helge Deller,
	linux-um, Will Deacon, Sebastian Andrzej Siewior,
	linux-arm-kernel, Anton Ivanov

Soft interrupt disabled sections can legitimately be preempted or scheduled
out when blocking on a lock on RT enabled kernels, so the RCU preempt check
warning has to be disabled for RT kernels.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
---
 include/linux/rcupdate.h |    3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

--- a/include/linux/rcupdate.h
+++ b/include/linux/rcupdate.h
@@ -319,7 +319,8 @@ static inline void rcu_preempt_sleep_che
 #define rcu_sleep_check()						\
 	do {								\
 		rcu_preempt_sleep_check();				\
-		RCU_LOCKDEP_WARN(lock_is_held(&rcu_bh_lock_map),	\
+		if (!IS_ENABLED(CONFIG_PREEMPT_RT))			\
+		    RCU_LOCKDEP_WARN(lock_is_held(&rcu_bh_lock_map),	\
 				 "Illegal context switch in RCU-bh read-side critical section"); \
 		RCU_LOCKDEP_WARN(lock_is_held(&rcu_sched_lock_map),	\
 				 "Illegal context switch in RCU-sched read-side critical section"); \

^ permalink raw reply	[flat|nested] 136+ messages in thread

* [patch 17/19] softirq: Replace barrier() with cpu_relax() in tasklet_unlock_wait()
  2020-11-13 14:02 ` Thomas Gleixner
  (?)
@ 2020-11-13 14:02   ` Thomas Gleixner
  -1 siblings, 0 replies; 136+ messages in thread
From: Thomas Gleixner @ 2020-11-13 14:02 UTC (permalink / raw)
  To: LKML
  Cc: Marc Zyngier, Rich Felker, Catalin Marinas, Paul McKenney,
	Arnd Bergmann, linux-sh, Peter Zijlstra, Richard Weinberger,
	Frederic Weisbecker, Valentin Schneider, Jeff Dike, Russell King,
	Yoshinori Sato, James E.J. Bottomley, linux-parisc, Helge Deller,
	linux-um, Will Deacon, Sebastian Andrzej Siewior,
	linux-arm-kernel, Anton Ivanov

A barrier() in a tight loop which waits for something to happen on a remote
CPU is a pointless exercise. Replace it with cpu_relax(), which allows HT
siblings to make progress.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
---
 include/linux/interrupt.h |    3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

--- a/include/linux/interrupt.h
+++ b/include/linux/interrupt.h
@@ -668,7 +668,8 @@ static inline void tasklet_unlock(struct
 
 static inline void tasklet_unlock_wait(struct tasklet_struct *t)
 {
-	while (test_bit(TASKLET_STATE_RUN, &(t)->state)) { barrier(); }
+	while (test_bit(TASKLET_STATE_RUN, &(t)->state))
+		cpu_relax();
 }
 #else
 #define tasklet_trylock(t) 1
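
For reference, a user-space analogue of the busy-wait pattern (assuming x86
with GCC or Clang: __builtin_ia32_pause() stands in for the kernel's
cpu_relax(), which maps to PAUSE on x86 and YIELD on arm64; the tasklet
state bit is modeled with a C11 atomic; build with cc -pthread):

  #include <pthread.h>
  #include <stdatomic.h>
  #include <stdio.h>

  static atomic_int state_run = 1;	/* models TASKLET_STATE_RUN */

  static void *worker(void *arg)
  {
  	(void)arg;
  	atomic_store(&state_run, 0);	/* models clearing the RUN bit */
  	return NULL;
  }

  int main(void)
  {
  	pthread_t t;

  	pthread_create(&t, NULL, worker, NULL);
  	while (atomic_load(&state_run)) {
  		/* Hint to the CPU that this is a spin-wait so an HT
  		 * sibling can use the shared execution resources. */
  		__builtin_ia32_pause();
  	}
  	pthread_join(t, NULL);
  	puts("done");
  	return 0;
  }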

^ permalink raw reply	[flat|nested] 136+ messages in thread

* [patch 18/19] tasklets: Use static inlines for stub implementations
  2020-11-13 14:02 ` Thomas Gleixner
  (?)
@ 2020-11-13 14:02   ` Thomas Gleixner
  -1 siblings, 0 replies; 136+ messages in thread
From: Thomas Gleixner @ 2020-11-13 14:02 UTC (permalink / raw)
  To: LKML
  Cc: Marc Zyngier, Rich Felker, Catalin Marinas, Paul McKenney,
	Arnd Bergmann, linux-sh, Peter Zijlstra, Richard Weinberger,
	Frederic Weisbecker, Valentin Schneider, Jeff Dike, Russell King,
	Yoshinori Sato, James E.J. Bottomley, linux-parisc, Helge Deller,
	linux-um, Will Deacon, Sebastian Andrzej Siewior,
	linux-arm-kernel, Anton Ivanov

Inlines exist for a reason.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
---
 include/linux/interrupt.h |    6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

--- a/include/linux/interrupt.h
+++ b/include/linux/interrupt.h
@@ -672,9 +672,9 @@ static inline void tasklet_unlock_wait(s
 		cpu_relax();
 }
 #else
-#define tasklet_trylock(t) 1
-#define tasklet_unlock_wait(t) do { } while (0)
-#define tasklet_unlock(t) do { } while (0)
+static inline int tasklet_trylock(struct tasklet_struct *t) { return 1; }
+static inline void tasklet_unlock(struct tasklet_struct *t) { }
+static inline void tasklet_unlock_wait(struct tasklet_struct *t) { }
 #endif
 
 extern void __tasklet_schedule(struct tasklet_struct *t);
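
The practical difference is that the stubs now type check their argument
even in the build where they do nothing. A contrived stand-alone example
(the tasklet_demo_* names are made up for illustration):

  #include <stdio.h>

  struct tasklet_demo { int state; };

  /* Macro stub: accepts any argument, silently compiles with the wrong type. */
  #define tasklet_demo_unlock_macro(t) do { } while (0)

  /* Inline stub: the argument is type checked even though the body is empty. */
  static inline void tasklet_demo_unlock(struct tasklet_demo *t) { (void)t; }

  int main(void)
  {
  	struct tasklet_demo t = { 0 };

  	tasklet_demo_unlock_macro("not a tasklet");	/* compiles, bug hidden */
  	tasklet_demo_unlock(&t);	/* a wrong type here would not compile */
  	printf("state=%d\n", t.state);
  	return 0;
  }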

^ permalink raw reply	[flat|nested] 136+ messages in thread

* [patch 19/19] tasklets: Prevent kill/unlock_wait deadlock on RT
  2020-11-13 14:02 ` Thomas Gleixner
  (?)
@ 2020-11-13 14:02   ` Thomas Gleixner
  -1 siblings, 0 replies; 136+ messages in thread
From: Thomas Gleixner @ 2020-11-13 14:02 UTC (permalink / raw)
  To: LKML
  Cc: Marc Zyngier, Rich Felker, Catalin Marinas, Paul McKenney,
	Arnd Bergmann, linux-sh, Peter Zijlstra, Richard Weinberger,
	Frederic Weisbecker, Valentin Schneider, Jeff Dike, Russell King,
	Yoshinori Sato, James E.J. Bottomley, linux-parisc, Helge Deller,
	linux-um, Will Deacon, Sebastian Andrzej Siewior,
	linux-arm-kernel, Anton Ivanov

tasklet_kill() and tasklet_unlock_wait() spin and wait for the
TASKLET_STATE_SCHED and TASKLET_STATE_RUN bits, respectively, in the tasklet
state to be cleared. This works nicely on !RT because the corresponding
execution can only happen on a different CPU.

On RT softirq processing is preemptible, so a task preempting the softirq
processing thread can spin forever. Prevent this by invoking
local_bh_disable()/enable() inside the loop. If the softirq processing
thread was preempted by the current task, current will block on the local
lock, which yields the CPU to the preempted softirq processing thread. If
the tasklet is processed on a different CPU then the
local_bh_disable()/enable() pair is just a waste of processor cycles.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
---
 include/linux/interrupt.h |    8 ++------
 kernel/softirq.c          |   38 +++++++++++++++++++++++++++++++++++++-
 2 files changed, 39 insertions(+), 7 deletions(-)

--- a/include/linux/interrupt.h
+++ b/include/linux/interrupt.h
@@ -654,7 +654,7 @@ enum
 	TASKLET_STATE_RUN	/* Tasklet is running (SMP only) */
 };
 
-#ifdef CONFIG_SMP
+#if defined(CONFIG_SMP) || defined(CONFIG_PREEMPT_RT)
 static inline int tasklet_trylock(struct tasklet_struct *t)
 {
 	return !test_and_set_bit(TASKLET_STATE_RUN, &(t)->state);
@@ -666,11 +666,7 @@ static inline void tasklet_unlock(struct
 	clear_bit(TASKLET_STATE_RUN, &(t)->state);
 }
 
-static inline void tasklet_unlock_wait(struct tasklet_struct *t)
-{
-	while (test_bit(TASKLET_STATE_RUN, &(t)->state))
-		cpu_relax();
-}
+void tasklet_unlock_wait(struct tasklet_struct *t);
 #else
 static inline int tasklet_trylock(struct tasklet_struct *t) { return 1; }
 static inline void tasklet_unlock(struct tasklet_struct *t) { }
--- a/kernel/softirq.c
+++ b/kernel/softirq.c
@@ -851,6 +851,29 @@ void tasklet_init(struct tasklet_struct
 }
 EXPORT_SYMBOL(tasklet_init);
 
+#if defined(CONFIG_SMP) || defined(CONFIG_PREEMPT_RT)
+
+void tasklet_unlock_wait(struct tasklet_struct *t)
+{
+	while (test_bit(TASKLET_STATE_RUN, &(t)->state)) {
+		if (IS_ENABLED(CONFIG_PREEMPT_RT)) {
+			/*
+			 * Prevent a live lock when the current task has
+			 * preempted soft interrupt processing or keeps
+			 * ksoftirqd from running. If the tasklet runs on a
+			 * different CPU then this has no effect other than
+			 * doing the BH disable/enable dance for nothing.
+			 */
+			local_bh_disable();
+			local_bh_enable();
+		} else {
+			cpu_relax();
+		}
+	}
+}
+EXPORT_SYMBOL(tasklet_unlock_wait);
+#endif
+
 void tasklet_kill(struct tasklet_struct *t)
 {
 	if (in_interrupt())
@@ -858,7 +881,20 @@ void tasklet_kill(struct tasklet_struct
 
 	while (test_and_set_bit(TASKLET_STATE_SCHED, &t->state)) {
 		do {
-			yield();
+			if (IS_ENABLED(CONFIG_PREEMPT_RT)) {
+				/*
+				 * Prevent a live lock when the current task
+				 * has preempted soft interrupt processing or
+				 * keeps ksoftirqd from running. If the
+				 * tasklet runs on a different CPU then this
+				 * has no effect other than doing the BH
+				 * disable/enable dance for nothing.
+				 */
+				local_bh_disable();
+				local_bh_enable();
+			} else {
+				yield();
+			}
 		} while (test_bit(TASKLET_STATE_SCHED, &t->state));
 	}
 	tasklet_unlock_wait(t);
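
Schematically, the resulting wait loops have the following shape (a sketch
in plain C: bh_disable()/bh_enable()/cpu_pause() are empty stand-ins for
local_bh_disable()/local_bh_enable()/cpu_relax(); on a real RT kernel the
bh_disable() call is what blocks until the preempted softirq thread has
run, which is what breaks the live lock):

  #include <stdatomic.h>
  #include <stdbool.h>
  #include <stdio.h>

  static atomic_bool tasklet_state_run;	/* false: tasklet already finished */

  static void bh_disable(void) { /* would block on softirq_ctrl.lock on RT */ }
  static void bh_enable(void)  { /* would release softirq_ctrl.lock */ }
  static void cpu_pause(void)  { /* would be cpu_relax() */ }

  static void unlock_wait(bool preempt_rt)
  {
  	while (atomic_load(&tasklet_state_run)) {
  		if (preempt_rt) {
  			/* Blocking on the BH lock lends the CPU to the
  			 * preempted softirq thread instead of spinning. */
  			bh_disable();
  			bh_enable();
  		} else {
  			cpu_pause();
  		}
  	}
  }

  int main(void)
  {
  	unlock_wait(true);	/* state is clear, returns immediately */
  	puts("tasklet finished");
  	return 0;
  }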

^ permalink raw reply	[flat|nested] 136+ messages in thread

* Re: [patch 06/19] arm64: irqstat: Get rid of duplicated declaration
  2020-11-13 14:02   ` Thomas Gleixner
  (?)
  (?)
@ 2020-11-16 10:01     ` Will Deacon
  -1 siblings, 0 replies; 136+ messages in thread
From: Will Deacon @ 2020-11-16 10:01 UTC (permalink / raw)
  To: Thomas Gleixner
  Cc: Rich Felker, Paul McKenney, Arnd Bergmann, linux-sh,
	Peter Zijlstra, Catalin Marinas, Frederic Weisbecker,
	Valentin Schneider, Jeff Dike, LKML, Yoshinori Sato,
	James E.J. Bottomley, Richard Weinberger, linux-parisc,
	Helge Deller, Marc Zyngier, Russell King, linux-um,
	Sebastian Andrzej Siewior, linux-arm-kernel, Anton Ivanov

On Fri, Nov 13, 2020 at 03:02:13PM +0100, Thomas Gleixner wrote:
> irq_cpustat_t is exactly the same as the asm-generic one. Define
> ack_bad_irq so the generic header does not emit the generic version of it.
> 
> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
> Cc: Catalin Marinas <catalin.marinas@arm.com>
> Cc: Will Deacon <will@kernel.org>
> Cc: Marc Zyngier <maz@kernel.org>
> Cc: linux-arm-kernel@lists.infradead.org
> ---
>  arch/arm64/include/asm/hardirq.h |    7 ++-----
>  1 file changed, 2 insertions(+), 5 deletions(-)

Acked-by: Will Deacon <will@kernel.org>

Will
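
The "#define ack_bad_irq ack_bad_irq" trick referenced in the patch lets
the generic header detect an architecture override with a plain #ifndef.
A stand-alone demonstration (the "generic header" content is inlined here
for illustration and is not the real asm-generic/hardirq.h):

  #include <stdio.h>

  /* arch header: provide an override and announce it via the macro */
  static void ack_bad_irq(unsigned int irq)
  {
  	printf("arch ack_bad_irq: %u\n", irq);
  }
  #define ack_bad_irq ack_bad_irq

  /* generic header: only emit the fallback when no override exists */
  #ifndef ack_bad_irq
  static void ack_bad_irq(unsigned int irq)
  {
  	printf("generic ack_bad_irq: %u\n", irq);
  }
  #endif

  int main(void)
  {
  	ack_bad_irq(42);	/* calls the arch variant */
  	return 0;
  }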

^ permalink raw reply	[flat|nested] 136+ messages in thread

* Re: [patch 06/19] arm64: irqstat: Get rid of duplicated declaration
  2020-11-13 14:02   ` Thomas Gleixner
  (?)
@ 2020-11-16 10:51     ` Marc Zyngier
  -1 siblings, 0 replies; 136+ messages in thread
From: Marc Zyngier @ 2020-11-16 10:51 UTC (permalink / raw)
  To: Thomas Gleixner
  Cc: Rich Felker, Paul McKenney, Arnd Bergmann, linux-sh,
	Peter Zijlstra, Catalin Marinas, Frederic Weisbecker,
	Valentin Schneider, Jeff Dike, LKML, Yoshinori Sato,
	James E.J. Bottomley, Richard Weinberger, linux-parisc,
	Helge Deller, Russell King, linux-um, Will Deacon,
	Sebastian Andrzej Siewior, linux-arm-kernel, Anton Ivanov

On 2020-11-13 14:02, Thomas Gleixner wrote:
> irq_cpustat_t is exactly the same as the asm-generic one. Define
> ack_bad_irq so the generic header does not emit the generic version of 
> it.
> 
> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
> Cc: Catalin Marinas <catalin.marinas@arm.com>
> Cc: Will Deacon <will@kernel.org>
> Cc: Marc Zyngier <maz@kernel.org>
> Cc: linux-arm-kernel@lists.infradead.org
> ---
>  arch/arm64/include/asm/hardirq.h |    7 ++-----
>  1 file changed, 2 insertions(+), 5 deletions(-)
> 
> --- a/arch/arm64/include/asm/hardirq.h
> +++ b/arch/arm64/include/asm/hardirq.h
> @@ -13,11 +13,8 @@
>  #include <asm/kvm_arm.h>
>  #include <asm/sysreg.h>
> 
> -typedef struct {
> -	unsigned int __softirq_pending;
> -} ____cacheline_aligned irq_cpustat_t;
> -
> -#include <linux/irq_cpustat.h>	/* Standard mappings for irq_cpustat_t 
> above */
> +#define ack_bad_irq ack_bad_irq
> +#include <asm-generic/hardirq.h>
> 
>  #define __ARCH_IRQ_EXIT_IRQS_DISABLED	1

Acked-by: Marc Zyngier <maz@kernel.org>

         M.
-- 
Jazz is not dead. It just smells funny...

^ permalink raw reply	[flat|nested] 136+ messages in thread

* Re: [patch 10/19] preempt: Cleanup the macro maze a bit
  2020-11-13 14:02   ` Thomas Gleixner
  (?)
  (?)
@ 2020-11-16 12:17     ` Peter Zijlstra
  -1 siblings, 0 replies; 136+ messages in thread
From: Peter Zijlstra @ 2020-11-16 12:17 UTC (permalink / raw)
  To: Thomas Gleixner
  Cc: Marc Zyngier, Rich Felker, Catalin Marinas, Paul McKenney,
	Arnd Bergmann, linux-sh, Richard Weinberger, Frederic Weisbecker,
	Valentin Schneider, Jeff Dike, LKML, Yoshinori Sato,
	James E.J. Bottomley, linux-parisc, Helge Deller, Russell King,
	linux-um, Will Deacon, Sebastian Andrzej Siewior,
	linux-arm-kernel, Anton Ivanov

On Fri, Nov 13, 2020 at 03:02:17PM +0100, Thomas Gleixner wrote:

> -#define irq_count()	(preempt_count() & (HARDIRQ_MASK | SOFTIRQ_MASK \
> -				 | NMI_MASK))
> +#define irq_count()	(nmi_count() | hardirq_count() | softirq_count())


> +#define in_task()		(!(in_nmi() | in_hardirq() | in_serving_softirq()))
> -#define in_task()		(!(preempt_count() & \
> -				   (NMI_MASK | HARDIRQ_MASK | SOFTIRQ_OFFSET)))

How horrible is the code-gen? Because preempt_count() is
raw_cpu_read_4() and at least some old compilers will refuse to CSE it
(consider the this_cpu_read_stable mess).

^ permalink raw reply	[flat|nested] 136+ messages in thread
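
A rough user-space model of the concern, with hypothetical names (this is
not the kernel's percpu machinery): when the per-CPU read compiles down to
a volatile or asm-based access, the compiler has to reload it for every
helper, so OR-ing three helpers costs three loads, while a plain load
folds into one.

/* Model only: pcount stands in for the per-CPU preempt count. */
volatile unsigned int pcount_opaque;	/* must be reloaded on every access */
unsigned int pcount_plain;		/* may be CSEd across reads */

#define M_NMI	0x00f00000u		/* illustrative mask values */
#define M_HARD	0x000f0000u
#define M_SOFT	0x0000ff00u

unsigned int irq_count_opaque(void)
{
	/* three separate loads; the compiler may not merge them */
	return (pcount_opaque & M_NMI) | (pcount_opaque & M_HARD) |
	       (pcount_opaque & M_SOFT);
}

unsigned int irq_count_plain(void)
{
	/* folds to one load masked with (M_NMI | M_HARD | M_SOFT) */
	return (pcount_plain & M_NMI) | (pcount_plain & M_HARD) |
	       (pcount_plain & M_SOFT);
}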

* Re: [patch 10/19] preempt: Cleanup the macro maze a bit
  2020-11-16 12:17     ` Peter Zijlstra
  (?)
@ 2020-11-16 17:42       ` Thomas Gleixner
  -1 siblings, 0 replies; 136+ messages in thread
From: Thomas Gleixner @ 2020-11-16 17:42 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: Marc Zyngier, Rich Felker, Catalin Marinas, Paul McKenney,
	Arnd Bergmann, linux-sh, Richard Weinberger, Frederic Weisbecker,
	Valentin Schneider, Jeff Dike, LKML, Yoshinori Sato,
	James E.J. Bottomley, linux-parisc, Helge Deller, Russell King,
	linux-um, Will Deacon, Sebastian Andrzej Siewior,
	linux-arm-kernel, Anton Ivanov

On Mon, Nov 16 2020 at 13:17, Peter Zijlstra wrote:
> On Fri, Nov 13, 2020 at 03:02:17PM +0100, Thomas Gleixner wrote:
>
>> -#define irq_count()	(preempt_count() & (HARDIRQ_MASK | SOFTIRQ_MASK \
>> -				 | NMI_MASK))
>> +#define irq_count()	(nmi_count() | hardirq_count() | softirq_count())
>
>
>> +#define in_task()		(!(in_nmi() | in_hardirq() | in_serving_softirq()))
>> -#define in_task()		(!(preempt_count() & \
>> -				   (NMI_MASK | HARDIRQ_MASK | SOFTIRQ_OFFSET)))
>
> How horrible is the code-gen? Because preempt_count() is
> raw_cpu_read_4() and at least some old compilers will refuse to CSE it
> (consider the this_cpu_read_stable mess).

I looked at gcc8 and 10 output and the compilers are smart enough to
fold it for the !RT case. But yeah ...

Thanks,

        tglx

^ permalink raw reply	[flat|nested] 136+ messages in thread
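
For reference, the !RT helpers being folded look roughly like this in
linux/preempt.h (a sketch; the RT variant of softirq_count() differs):

#define nmi_count()	(preempt_count() & NMI_MASK)
#define hardirq_count()	(preempt_count() & HARDIRQ_MASK)
#define softirq_count()	(preempt_count() & SOFTIRQ_MASK)

/* When the compiler can CSE preempt_count(), the new
 *   irq_count() == nmi_count() | hardirq_count() | softirq_count()
 * collapses into a single read masked with
 * (NMI_MASK | HARDIRQ_MASK | SOFTIRQ_MASK), i.e. the same code the old
 * open-coded definition produced. */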

* Re: [patch 05/19] ARM: irqstat: Get rid of duplicated declaration
  2020-11-13 14:02   ` Thomas Gleixner
  (?)
@ 2020-11-16 18:19     ` Valentin Schneider
  -1 siblings, 0 replies; 136+ messages in thread
From: Valentin Schneider @ 2020-11-16 18:19 UTC (permalink / raw)
  To: Thomas Gleixner
  Cc: LKML, Peter Zijlstra, Frederic Weisbecker, Paul McKenney,
	Sebastian Andrzej Siewior, Arnd Bergmann, Russell King,
	Marc Zyngier, linux-arm-kernel, James E.J. Bottomley,
	Helge Deller, linux-parisc, Yoshinori Sato, Rich Felker,
	linux-sh, Jeff Dike, Richard Weinberger, Anton Ivanov, linux-um,
	Catalin Marinas, Will Deacon


On 13/11/20 14:02, Thomas Gleixner wrote:
> irq_cpustat_t is exactly the same as the asm-generic one. Define
> ack_bad_irq so the generic header does not emit the generic version of it.
>
> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
> Cc: Russell King <linux@armlinux.org.uk>
> Cc: Marc Zyngier <maz@kernel.org>
> Cc: Valentin Schneider <valentin.schneider@arm.com>
> Cc: linux-arm-kernel@lists.infradead.org

Reviewed-by: Valentin Schneider <valentin.schneider@arm.com>

^ permalink raw reply	[flat|nested] 136+ messages in thread

* Re: [patch 10/19] preempt: Cleanup the macro maze a bit
  2020-11-16 17:42       ` Thomas Gleixner
  (?)
@ 2020-11-17 10:21         ` Peter Zijlstra
  -1 siblings, 0 replies; 136+ messages in thread
From: Peter Zijlstra @ 2020-11-17 10:21 UTC (permalink / raw)
  To: Thomas Gleixner
  Cc: LKML, Frederic Weisbecker, Paul McKenney,
	Sebastian Andrzej Siewior, Arnd Bergmann, James E.J. Bottomley,
	Helge Deller, linux-parisc, Yoshinori Sato, Rich Felker,
	linux-sh, Jeff Dike, Richard Weinberger, Anton Ivanov, linux-um,
	Russell King, Marc Zyngier, Valentin Schneider, linux-arm-kernel,
	Catalin Marinas, Will Deacon

On Mon, Nov 16, 2020 at 06:42:19PM +0100, Thomas Gleixner wrote:
> On Mon, Nov 16 2020 at 13:17, Peter Zijlstra wrote:
> > On Fri, Nov 13, 2020 at 03:02:17PM +0100, Thomas Gleixner wrote:
> >
> >> -#define irq_count()	(preempt_count() & (HARDIRQ_MASK | SOFTIRQ_MASK \
> >> -				 | NMI_MASK))
> >> +#define irq_count()	(nmi_count() | hardirq_count() | softirq_count())
> >
> >
> >> +#define in_task()		(!(in_nmi() | in_hardirq() | in_serving_softirq()))
> >> -#define in_task()		(!(preempt_count() & \
> >> -				   (NMI_MASK | HARDIRQ_MASK | SOFTIRQ_OFFSET)))
> >
> > How horrible is the code-gen? Because preempt_count() is
> > raw_cpu_read_4() and at least some old compilers will refuse to CSE it
> > (consider the this_cpu_read_stable mess).
> 
> I looked at gcc8 and 10 output and the compilers are smart enough to
> fold it for the !RT case. But yeah ...

If recent GCC is smart enough I suppose it doesn't matter, thanks!

^ permalink raw reply	[flat|nested] 136+ messages in thread

* Re: [patch 12/19] softirq: Add RT specific softirq accounting
  2020-11-13 14:02   ` Thomas Gleixner
  (?)
@ 2020-11-19 12:18     ` Frederic Weisbecker
  -1 siblings, 0 replies; 136+ messages in thread
From: Frederic Weisbecker @ 2020-11-19 12:18 UTC (permalink / raw)
  To: Thomas Gleixner
  Cc: LKML, Peter Zijlstra, Paul McKenney, Sebastian Andrzej Siewior,
	Arnd Bergmann, James E.J. Bottomley, Helge Deller, linux-parisc,
	Yoshinori Sato, Rich Felker, linux-sh, Jeff Dike,
	Richard Weinberger, Anton Ivanov, linux-um, Russell King,
	Marc Zyngier, Valentin Schneider, linux-arm-kernel,
	Catalin Marinas, Will Deacon

On Fri, Nov 13, 2020 at 03:02:19PM +0100, Thomas Gleixner wrote:
> RT requires the softirq to be preemptible and uses a per CPU local lock to
> protect BH disabled sections and softirq processing. Therefore RT cannot
> use the preempt counter to keep track of BH disabled/serving.
> 
> Add an RT only counter to task struct and adjust the relevant macros in
> preempt.h.

You may want to describe the reason for this per task counter a bit.
It's not intuitive at this stage.

Thanks.

^ permalink raw reply	[flat|nested] 136+ messages in thread

* Re: [patch 11/19] softirq: Move related code into one section
  2020-11-13 14:02   ` Thomas Gleixner
@ 2020-11-19 12:20     ` Frederic Weisbecker
  -1 siblings, 0 replies; 136+ messages in thread
From: Frederic Weisbecker @ 2020-11-19 12:20 UTC (permalink / raw)
  To: Thomas Gleixner
  Cc: LKML, Peter Zijlstra, Paul McKenney, Sebastian Andrzej Siewior,
	Arnd Bergmann, James E.J. Bottomley, Helge Deller, linux-parisc,
	Yoshinori Sato, Rich Felker, linux-sh, Jeff Dike,
	Richard Weinberger, Anton Ivanov, linux-um, Russell King,
	Marc Zyngier, Valentin Schneider, linux-arm-kernel,
	Catalin Marinas, Will Deacon

On Fri, Nov 13, 2020 at 03:02:18PM +0100, Thomas Gleixner wrote:
> To prepare for adding an RT aware variant of softirq serialization and
> processing, move the related code into one section so the necessary
> #ifdeffery is reduced to one place.
> 
> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>

Up to this patch at least:

Reviewed-by: Frederic Weisbecker <frederic@kernel.org>

^ permalink raw reply	[flat|nested] 136+ messages in thread

* Re: [patch 12/19] softirq: Add RT specific softirq accounting
  2020-11-19 12:18     ` Frederic Weisbecker
  (?)
@ 2020-11-19 18:34       ` Thomas Gleixner
  -1 siblings, 0 replies; 136+ messages in thread
From: Thomas Gleixner @ 2020-11-19 18:34 UTC (permalink / raw)
  To: Frederic Weisbecker
  Cc: LKML, Peter Zijlstra, Paul McKenney, Sebastian Andrzej Siewior,
	Arnd Bergmann, James E.J. Bottomley, Helge Deller, linux-parisc,
	Yoshinori Sato, Rich Felker, linux-sh, Jeff Dike,
	Richard Weinberger, Anton Ivanov, linux-um, Russell King,
	Marc Zyngier, Valentin Schneider, linux-arm-kernel,
	Catalin Marinas, Will Deacon

On Thu, Nov 19 2020 at 13:18, Frederic Weisbecker wrote:
> On Fri, Nov 13, 2020 at 03:02:19PM +0100, Thomas Gleixner wrote:
>> RT requires the softirq to be preemptible and uses a per CPU local lock to
>> protect BH disabled sections and softirq processing. Therefore RT cannot
>> use the preempt counter to keep track of BH disabled/serving.
>> 
>> Add an RT only counter to task struct and adjust the relevant macros in
>> preempt.h.
>
> You may want to describe the reason for this per task counter a bit.
> It's not intuitive at this stage.

Something like this:

 RT requires the softirq processing and local bottom half disabled regions
 to be preemptible. Using the normal preempt count based serialization is
 therefore not possible because this implicitly disables preemption.

 RT kernels use a per CPU local lock to serialize bottom halves. As
 local_bh_disable() can nest, the lock can only be acquired on the
 outermost invocation of local_bh_disable() and released when the nest
 count becomes zero. Tasks which hold the local lock can be preempted, so
 it's required to keep track of the nest count per task.

 Add an RT only counter to task struct and adjust the relevant macros in
 preempt.h.

Thanks,

        tglx

^ permalink raw reply	[flat|nested] 136+ messages in thread
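
Condensed into code, the scheme reads roughly as below (a sketch following
the names used in this series; lockdep, the irq-safety checks and the
actual local_lock_t setup are omitted):

struct task_struct {
	/* ... */
	int softirq_disable_cnt;	/* RT: per-task BH disable nesting */
};

void local_bh_disable(void)		/* simplified RT variant */
{
	if (!current->softirq_disable_cnt)	/* outermost entry */
		local_lock(&softirq_ctrl.lock);	/* per-CPU, owner preemptible */
	current->softirq_disable_cnt++;
}

void local_bh_enable(void)
{
	if (!--current->softirq_disable_cnt)	/* nest count reached zero */
		local_unlock(&softirq_ctrl.lock);
}

Because the task, not the CPU, owns the nesting state, a task preempted
inside a BH disabled section carries its count with it and releases the
lock on the correct outermost local_bh_enable().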

* Re: [patch 12/19] softirq: Add RT specific softirq accounting
  2020-11-19 18:34       ` Thomas Gleixner
@ 2020-11-19 22:52         ` Frederic Weisbecker
  -1 siblings, 0 replies; 136+ messages in thread
From: Frederic Weisbecker @ 2020-11-19 22:52 UTC (permalink / raw)
  To: Thomas Gleixner
  Cc: LKML, Peter Zijlstra, Paul McKenney, Sebastian Andrzej Siewior,
	Arnd Bergmann, James E.J. Bottomley, Helge Deller, linux-parisc,
	Yoshinori Sato, Rich Felker, linux-sh, Jeff Dike,
	Richard Weinberger, Anton Ivanov, linux-um, Russell King,
	Marc Zyngier, Valentin Schneider, linux-arm-kernel,
	Catalin Marinas, Will Deacon

On Thu, Nov 19, 2020 at 07:34:13PM +0100, Thomas Gleixner wrote:
> On Thu, Nov 19 2020 at 13:18, Frederic Weisbecker wrote:
> > On Fri, Nov 13, 2020 at 03:02:19PM +0100, Thomas Gleixner wrote:
> >> RT requires the softirq to be preemptible and uses a per CPU local lock to
> >> protect BH disabled sections and softirq processing. Therefore RT cannot
> >> use the preempt counter to keep track of BH disabled/serving.
> >> 
> >> Add an RT only counter to task struct and adjust the relevant macros in
> >> preempt.h.
> >
> > You may want to describe the reason for this per task counter a bit.
> > It's not intuitive at this stage.
> 
> Something like this:
> 
>  RT requires the softirq processing and local bottom half disabled regions
>  to be preemptible. Using the normal preempt count based serialization is
>  therefore not possible because this implicitly disables preemption.
> 
>  RT kernels use a per CPU local lock to serialize bottom halves. As
>  local_bh_disable() can nest, the lock can only be acquired on the
>  outermost invocation of local_bh_disable() and released when the nest
>  count becomes zero. Tasks which hold the local lock can be preempted, so
>  it's required to keep track of the nest count per task.
> 
>  Add an RT only counter to task struct and adjust the relevant macros in
>  preempt.h.
> 
> Thanks,

Very good, thanks!

^ permalink raw reply	[flat|nested] 136+ messages in thread

* Re: [patch 14/19] softirq: Make softirq control and processing RT aware
  2020-11-13 14:02   ` Thomas Gleixner
  (?)
@ 2020-11-20  0:26     ` Frederic Weisbecker
  -1 siblings, 0 replies; 136+ messages in thread
From: Frederic Weisbecker @ 2020-11-20  0:26 UTC (permalink / raw)
  To: Thomas Gleixner
  Cc: LKML, Peter Zijlstra, Paul McKenney, Sebastian Andrzej Siewior,
	Arnd Bergmann, James E.J. Bottomley, Helge Deller, linux-parisc,
	Yoshinori Sato, Rich Felker, linux-sh, Jeff Dike,
	Richard Weinberger, Anton Ivanov, linux-um, Russell King,
	Marc Zyngier, Valentin Schneider, linux-arm-kernel,
	Catalin Marinas, Will Deacon

On Fri, Nov 13, 2020 at 03:02:21PM +0100, Thomas Gleixner wrote:
> +void __local_bh_disable_ip(unsigned long ip, unsigned int cnt)
> +{
> +	unsigned long flags;
> +	int newcnt;
> +
> +	WARN_ON_ONCE(in_hardirq());
> +
> +	/* First entry of a task into a BH disabled section? */
> +	if (!current->softirq_disable_cnt) {
> +		if (preemptible()) {
> +			local_lock(&softirq_ctrl.lock);
> +			rcu_read_lock();

Ah, you lock RCU because local_bh_disable() implies it and, since it
doesn't disable preemption anymore, you must do it explicitly?

Perhaps local_lock() should itself imply rcu_read_lock()?

> +		} else {
> +			DEBUG_LOCKS_WARN_ON(this_cpu_read(softirq_ctrl.cnt));
> +		}
> +	}
> +
> +	preempt_disable();

Do you really need to disable preemption here? Migration is disabled by local_lock()
and I can't figure out a scenario where the below can conflict with a preempting task.

> +	/*
> +	 * Track the per CPU softirq disabled state. On RT this is per CPU
> +	 * state to allow preemption of bottom half disabled sections.
> +	 */
> +	newcnt = this_cpu_add_return(softirq_ctrl.cnt, cnt);

__this_cpu_add_return()?

> +	/*
> +	 * Reflect the result in the task state to prevent recursion on the
> +	 * local lock and to make softirq_count() & al work.
> +	 */
> +	current->softirq_disable_cnt = newcnt;
> +
> +	if (IS_ENABLED(CONFIG_TRACE_IRQFLAGS) && newcnt == cnt) {
> +		raw_local_irq_save(flags);
> +		lockdep_softirqs_off(ip);
> +		raw_local_irq_restore(flags);
> +	}
> +	preempt_enable();
> +}
> +EXPORT_SYMBOL(__local_bh_disable_ip);
> +
> +static void __local_bh_enable(unsigned int cnt, bool unlock)
> +{
> +	unsigned long flags;
> +	int newcnt;
> +
> +	DEBUG_LOCKS_WARN_ON(current->softirq_disable_cnt !=
> +			    this_cpu_read(softirq_ctrl.cnt));

__this_cpu_read()? Although that's lockdep only, so not too important.

> +
> +	preempt_disable();

Same question about preempt_disable().

> +	if (IS_ENABLED(CONFIG_TRACE_IRQFLAGS) && softirq_count() == cnt) {
> +		raw_local_irq_save(flags);
> +		lockdep_softirqs_on(_RET_IP_);
> +		raw_local_irq_restore(flags);
> +	}
> +
> +	newcnt = this_cpu_sub_return(softirq_ctrl.cnt, cnt);

__this_cpu_sub_return()?

> +	current->softirq_disable_cnt = newcnt;
> +	preempt_enable();
> +
> +	if (!newcnt && unlock) {
> +		rcu_read_unlock();
> +		local_unlock(&softirq_ctrl.lock);
> +	}
> +}

Thanks.

^ permalink raw reply	[flat|nested] 136+ messages in thread
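
Background on the __this_cpu_*() suggestions (the general percpu
convention, not something specific to this series): the this_cpu_*() ops
protect themselves against preemption, while the cheaper __this_cpu_*()
variants rely on the caller to already prevent migration, which the
surrounding preempt_disable() does here.

	/* sketch, assuming a per-CPU counter 'pcp_cnt' */
	preempt_disable();
	newcnt = __this_cpu_add_return(pcp_cnt, val);	/* caller is already
							 * non-preemptible */
	preempt_enable();

	newcnt = this_cpu_add_return(pcp_cnt, val);	/* self-protecting;
							 * fine from preemptible
							 * context, may be heavier
							 * on some architectures */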

* Re: [patch 14/19] softirq: Make softirq control and processing RT aware
  2020-11-20  0:26     ` Frederic Weisbecker
  (?)
@ 2020-11-20 13:27       ` Thomas Gleixner
  -1 siblings, 0 replies; 136+ messages in thread
From: Thomas Gleixner @ 2020-11-20 13:27 UTC (permalink / raw)
  To: Frederic Weisbecker
  Cc: LKML, Peter Zijlstra, Paul McKenney, Sebastian Andrzej Siewior,
	Arnd Bergmann, James E.J. Bottomley, Helge Deller, linux-parisc,
	Yoshinori Sato, Rich Felker, linux-sh, Jeff Dike,
	Richard Weinberger, Anton Ivanov, linux-um, Russell King,
	Marc Zyngier, Valentin Schneider, linux-arm-kernel,
	Catalin Marinas, Will Deacon

On Fri, Nov 20 2020 at 01:26, Frederic Weisbecker wrote:
> On Fri, Nov 13, 2020 at 03:02:21PM +0100, Thomas Gleixner wrote:
>> +void __local_bh_disable_ip(unsigned long ip, unsigned int cnt)
>> +{
>> +	unsigned long flags;
>> +	int newcnt;
>> +
>> +	WARN_ON_ONCE(in_hardirq());
>> +
>> +	/* First entry of a task into a BH disabled section? */
>> +	if (!current->softirq_disable_cnt) {
>> +		if (preemptible()) {
>> +			local_lock(&softirq_ctrl.lock);
>> +			rcu_read_lock();
>
> Ah, you lock RCU because local_bh_disable() implies it and, since it
> doesn't disable preemption anymore, you must do it explicitly?
>
> Perhaps local_lock() should itself imply rcu_read_lock()?

It's really only required for local_bh_disable(). Lemme add a comment.

>> +		} else {
>> +			DEBUG_LOCKS_WARN_ON(this_cpu_read(softirq_ctrl.cnt));
>> +		}
>> +	}
>> +
>> +	preempt_disable();
>
> Do you really need to disable preemption here? Migration is disabled by local_lock()
> and I can't figure out a scenario where the below can conflict with a
> preempting task.

Indeed it's pointless.

>> +	/*
>> +	 * Track the per CPU softirq disabled state. On RT this is per CPU
>> +	 * state to allow preemption of bottom half disabled sections.
>> +	 */
>> +	newcnt = this_cpu_add_return(softirq_ctrl.cnt, cnt);
>
> __this_cpu_add_return()?

Yep.

Thanks,

        tglx

^ permalink raw reply	[flat|nested] 136+ messages in thread
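
The RCU point condensed (a sketch of the semantics discussed above): on
!RT kernels, local_bh_disable() makes the section non-preemptible, which
already acts as an RCU read-side critical section; on RT the section stays
preemptible, so nothing implies the read-side guarantee and it has to be
taken by hand.

	/* RT path, schematically */
	local_lock(&softirq_ctrl.lock);	/* serializes BH sections, preemptible */
	rcu_read_lock();		/* !RT gets this implicitly from the
					 * disabled preemption; RT must ask */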

* Re: [patch 14/19] softirq: Make softirq control and processing RT aware
  2020-11-13 14:02   ` Thomas Gleixner
  (?)
@ 2020-11-23 13:44     ` Frederic Weisbecker
  -1 siblings, 0 replies; 136+ messages in thread
From: Frederic Weisbecker @ 2020-11-23 13:44 UTC (permalink / raw)
  To: Thomas Gleixner
  Cc: LKML, Peter Zijlstra, Paul McKenney, Sebastian Andrzej Siewior,
	Arnd Bergmann, James E.J. Bottomley, Helge Deller, linux-parisc,
	Yoshinori Sato, Rich Felker, linux-sh, Jeff Dike,
	Richard Weinberger, Anton Ivanov, linux-um, Russell King,
	Marc Zyngier, Valentin Schneider, linux-arm-kernel,
	Catalin Marinas, Will Deacon

On Fri, Nov 13, 2020 at 03:02:21PM +0100, Thomas Gleixner wrote:
> +void __local_bh_enable_ip(unsigned long ip, unsigned int cnt)
> +{
> +	bool preempt_on = preemptible();
> +	unsigned long flags;
> +	u32 pending;
> +	int curcnt;
> +
> +	WARN_ON_ONCE(in_irq());
> +	lockdep_assert_irqs_enabled();
> +
> +	local_irq_save(flags);
> +	curcnt = this_cpu_read(softirq_ctrl.cnt);
> +
> +	/*
> +	 * If this is not reenabling soft interrupts, no point in trying to
> +	 * run pending ones.
> +	 */
> +	if (curcnt != cnt)
> +		goto out;
> +
> +	pending = local_softirq_pending();
> +	if (!pending || ksoftirqd_running(pending))
> +		goto out;
> +
> +	/*
> +	 * If this was called from non preemptible context, wake up the
> +	 * softirq daemon.
> +	 */
> +	if (!preempt_on) {
> +		wakeup_softirqd();
> +		goto out;
> +	}
> +
> +	/*
> +	 * Adjust softirq count to SOFTIRQ_OFFSET which makes
> +	 * in_serving_softirq() become true.
> +	 */
> +	cnt = SOFTIRQ_OFFSET;
> +	__local_bh_enable(cnt, false);

But then you enter __do_softirq() with softirq_count() == SOFTIRQ_OFFSET.
__do_softirq() calls softirq_handle_begin() which then sets it back to SOFTIRQ_DISABLE_OFFSET...

> +	__do_softirq();
> +
> +out:
> +	__local_bh_enable(cnt, preempt_on);

You escape from there with a correct preempt_count(), but the softirq still
executes under SOFTIRQ_DISABLE_OFFSET rather than SOFTIRQ_OFFSET, making
in_serving_softirq() false.

> +	local_irq_restore(flags);
> +}
> +EXPORT_SYMBOL(__local_bh_enable_ip);

Thanks.

^ permalink raw reply	[flat|nested] 136+ messages in thread
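
For reference, the offset arithmetic behind this observation, as in
linux/preempt.h:

#define SOFTIRQ_OFFSET		(1UL << SOFTIRQ_SHIFT)	/* serving softirq */
#define SOFTIRQ_DISABLE_OFFSET	(2 * SOFTIRQ_OFFSET)	/* BH merely disabled */
#define in_serving_softirq()	(softirq_count() & SOFTIRQ_OFFSET)

/* in_serving_softirq() tests the low bit of the softirq field. If
 * softirq_handle_begin() replaces the SOFTIRQ_OFFSET taken above with
 * SOFTIRQ_DISABLE_OFFSET, that bit is clear while the softirq runs,
 * which is the inconsistency described in this reply. */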

* Re: [patch 14/19] softirq: Make softirq control and processing RT aware
  2020-11-23 13:44     ` Frederic Weisbecker
@ 2020-11-23 19:27       ` Thomas Gleixner
  -1 siblings, 0 replies; 136+ messages in thread
From: Thomas Gleixner @ 2020-11-23 19:27 UTC (permalink / raw)
  To: Frederic Weisbecker
  Cc: LKML, Peter Zijlstra, Paul McKenney, Sebastian Andrzej Siewior,
	Arnd Bergmann, James E.J. Bottomley, Helge Deller, linux-parisc,
	Yoshinori Sato, Rich Felker, linux-sh, Jeff Dike,
	Richard Weinberger, Anton Ivanov, linux-um, Russell King,
	Marc Zyngier, Valentin Schneider, linux-arm-kernel,
	Catalin Marinas, Will Deacon

On Mon, Nov 23 2020 at 14:44, Frederic Weisbecker wrote:
> On Fri, Nov 13, 2020 at 03:02:21PM +0100, Thomas Gleixner wrote:
>> +	/*
>> +	 * Adjust softirq count to SOFTIRQ_OFFSET which makes
>> +	 * in_serving_softirq() become true.
>> +	 */
>> +	cnt = SOFTIRQ_OFFSET;
>> +	__local_bh_enable(cnt, false);
>
> But then you enter __do_softirq() with softirq_count() == SOFTIRQ_OFFSET.
> __do_softirq() calls softirq_handle_begin() which then sets it back to
> SOFTIRQ_DISABLE_OFFSET...

The RT variant of it added in this very same patch
> +static inline void softirq_handle_begin(void) { }
> +static inline void softirq_handle_end(void) { }
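
IOW (sketch, the !RT counterpart paraphrased from the patch for
comparison; the RT versions are empty because the BH accounting lives
in softirq_ctrl.cnt instead of the preempt count):

	#ifndef CONFIG_PREEMPT_RT
	static inline void softirq_handle_begin(void)
	{
		__local_bh_disable_ip(_RET_IP_, SOFTIRQ_OFFSET);
	}
	#else
	static inline void softirq_handle_begin(void) { }
	static inline void softirq_handle_end(void) { }
	#endif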

Thanks,

        tglx

^ permalink raw reply	[flat|nested] 136+ messages in thread

* Re: [patch 14/19] softirq: Make softirq control and processing RT aware
  2020-11-23 19:27       ` Thomas Gleixner
  (?)
@ 2020-11-23 19:56         ` Frederic Weisbecker
  -1 siblings, 0 replies; 136+ messages in thread
From: Frederic Weisbecker @ 2020-11-23 19:56 UTC (permalink / raw)
  To: Thomas Gleixner
  Cc: LKML, Peter Zijlstra, Paul McKenney, Sebastian Andrzej Siewior,
	Arnd Bergmann, James E.J. Bottomley, Helge Deller, linux-parisc,
	Yoshinori Sato, Rich Felker, linux-sh, Jeff Dike,
	Richard Weinberger, Anton Ivanov, linux-um, Russell King,
	Marc Zyngier, Valentin Schneider, linux-arm-kernel,
	Catalin Marinas, Will Deacon

On Mon, Nov 23, 2020 at 08:27:33PM +0100, Thomas Gleixner wrote:
> On Mon, Nov 23 2020 at 14:44, Frederic Weisbecker wrote:
> > On Fri, Nov 13, 2020 at 03:02:21PM +0100, Thomas Gleixner wrote:
> >> +	/*
> >> +	 * Adjust softirq count to SOFTIRQ_OFFSET which makes
> >> +	 * in_serving_softirq() become true.
> >> +	 */
> >> +	cnt = SOFTIRQ_OFFSET;
> >> +	__local_bh_enable(cnt, false);
> >
> > But then you enter __do_softirq() with softirq_count() == SOFTIRQ_OFFSET.
> > __do_softirq() calls softirq_handle_begin() which then sets it back to
> > SOFTIRQ_DISABLE_OFFSET...
> 
> The RT variant of it added in this very same patch
> > +static inline void softirq_handle_begin(void) { }
> > +static inline void softirq_handle_end(void) { }

Oh missed that indeed, sorry!

> 
> Thanks,
> 
>         tglx

^ permalink raw reply	[flat|nested] 136+ messages in thread

* [tip: irq/core] softirq: Move related code into one section
  2020-11-13 14:02   ` Thomas Gleixner
                     ` (2 preceding siblings ...)
  (?)
@ 2020-11-23 22:51   ` tip-bot2 for Thomas Gleixner
  -1 siblings, 0 replies; 136+ messages in thread
From: tip-bot2 for Thomas Gleixner @ 2020-11-23 22:51 UTC (permalink / raw)
  To: linux-tip-commits
  Cc: Thomas Gleixner, Frederic Weisbecker, x86, linux-kernel, maz

The following commit has been merged into the irq/core branch of tip:

Commit-ID:     ae9ef58996a4447dd44aa638759f913c883ba816
Gitweb:        https://git.kernel.org/tip/ae9ef58996a4447dd44aa638759f913c883ba816
Author:        Thomas Gleixner <tglx@linutronix.de>
AuthorDate:    Fri, 13 Nov 2020 15:02:18 +01:00
Committer:     Thomas Gleixner <tglx@linutronix.de>
CommitterDate: Mon, 23 Nov 2020 10:31:06 +01:00

softirq: Move related code into one section

To prepare for adding an RT aware variant of softirq serialization and
processing, move the related code into one section so the necessary
#ifdeffery is reduced to one place.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Frederic Weisbecker <frederic@kernel.org>
Link: https://lore.kernel.org/r/20201113141733.974214480@linutronix.de

---
 kernel/softirq.c | 107 +++++++++++++++++++++++-----------------------
 1 file changed, 54 insertions(+), 53 deletions(-)

diff --git a/kernel/softirq.c b/kernel/softirq.c
index 09229ad..617009c 100644
--- a/kernel/softirq.c
+++ b/kernel/softirq.c
@@ -92,6 +92,13 @@ static bool ksoftirqd_running(unsigned long pending)
 		!__kthread_should_park(tsk);
 }
 
+#ifdef CONFIG_TRACE_IRQFLAGS
+DEFINE_PER_CPU(int, hardirqs_enabled);
+DEFINE_PER_CPU(int, hardirq_context);
+EXPORT_PER_CPU_SYMBOL_GPL(hardirqs_enabled);
+EXPORT_PER_CPU_SYMBOL_GPL(hardirq_context);
+#endif
+
 /*
  * preempt_count and SOFTIRQ_OFFSET usage:
  * - preempt_count is changed by SOFTIRQ_OFFSET on entering or leaving
@@ -102,17 +109,11 @@ static bool ksoftirqd_running(unsigned long pending)
  * softirq and whether we just have bh disabled.
  */
 
+#ifdef CONFIG_TRACE_IRQFLAGS
 /*
- * This one is for softirq.c-internal use,
- * where hardirqs are disabled legitimately:
+ * This is for softirq.c-internal use, where hardirqs are disabled
+ * legitimately:
  */
-#ifdef CONFIG_TRACE_IRQFLAGS
-
-DEFINE_PER_CPU(int, hardirqs_enabled);
-DEFINE_PER_CPU(int, hardirq_context);
-EXPORT_PER_CPU_SYMBOL_GPL(hardirqs_enabled);
-EXPORT_PER_CPU_SYMBOL_GPL(hardirq_context);
-
 void __local_bh_disable_ip(unsigned long ip, unsigned int cnt)
 {
 	unsigned long flags;
@@ -203,6 +204,50 @@ void __local_bh_enable_ip(unsigned long ip, unsigned int cnt)
 }
 EXPORT_SYMBOL(__local_bh_enable_ip);
 
+static inline void invoke_softirq(void)
+{
+	if (ksoftirqd_running(local_softirq_pending()))
+		return;
+
+	if (!force_irqthreads) {
+#ifdef CONFIG_HAVE_IRQ_EXIT_ON_IRQ_STACK
+		/*
+		 * We can safely execute softirq on the current stack if
+		 * it is the irq stack, because it should be near empty
+		 * at this stage.
+		 */
+		__do_softirq();
+#else
+		/*
+		 * Otherwise, irq_exit() is called on the task stack that can
+		 * be potentially deep already. So call softirq in its own stack
+		 * to prevent from any overrun.
+		 */
+		do_softirq_own_stack();
+#endif
+	} else {
+		wakeup_softirqd();
+	}
+}
+
+asmlinkage __visible void do_softirq(void)
+{
+	__u32 pending;
+	unsigned long flags;
+
+	if (in_interrupt())
+		return;
+
+	local_irq_save(flags);
+
+	pending = local_softirq_pending();
+
+	if (pending && !ksoftirqd_running(pending))
+		do_softirq_own_stack();
+
+	local_irq_restore(flags);
+}
+
 /*
  * We restart softirq processing for at most MAX_SOFTIRQ_RESTART times,
  * but break the loop if need_resched() is set or after 2 ms.
@@ -327,24 +372,6 @@ restart:
 	current_restore_flags(old_flags, PF_MEMALLOC);
 }
 
-asmlinkage __visible void do_softirq(void)
-{
-	__u32 pending;
-	unsigned long flags;
-
-	if (in_interrupt())
-		return;
-
-	local_irq_save(flags);
-
-	pending = local_softirq_pending();
-
-	if (pending && !ksoftirqd_running(pending))
-		do_softirq_own_stack();
-
-	local_irq_restore(flags);
-}
-
 /**
  * irq_enter_rcu - Enter an interrupt context with RCU watching
  */
@@ -371,32 +398,6 @@ void irq_enter(void)
 	irq_enter_rcu();
 }
 
-static inline void invoke_softirq(void)
-{
-	if (ksoftirqd_running(local_softirq_pending()))
-		return;
-
-	if (!force_irqthreads) {
-#ifdef CONFIG_HAVE_IRQ_EXIT_ON_IRQ_STACK
-		/*
-		 * We can safely execute softirq on the current stack if
-		 * it is the irq stack, because it should be near empty
-		 * at this stage.
-		 */
-		__do_softirq();
-#else
-		/*
-		 * Otherwise, irq_exit() is called on the task stack that can
-		 * be potentially deep already. So call softirq in its own stack
-		 * to prevent from any overrun.
-		 */
-		do_softirq_own_stack();
-#endif
-	} else {
-		wakeup_softirqd();
-	}
-}
-
 static inline void tick_irq_exit(void)
 {
 #ifdef CONFIG_NO_HZ_COMMON

^ permalink raw reply related	[flat|nested] 136+ messages in thread

* [tip: irq/core] preempt: Cleanup the macro maze a bit
  2020-11-13 14:02   ` Thomas Gleixner
                     ` (2 preceding siblings ...)
  (?)
@ 2020-11-23 22:51   ` tip-bot2 for Thomas Gleixner
  -1 siblings, 0 replies; 136+ messages in thread
From: tip-bot2 for Thomas Gleixner @ 2020-11-23 22:51 UTC (permalink / raw)
  To: linux-tip-commits
  Cc: Thomas Gleixner, Frederic Weisbecker, x86, linux-kernel, maz

The following commit has been merged into the irq/core branch of tip:

Commit-ID:     15115830c88751ba83068aa37da996602ddc6a61
Gitweb:        https://git.kernel.org/tip/15115830c88751ba83068aa37da996602ddc6a61
Author:        Thomas Gleixner <tglx@linutronix.de>
AuthorDate:    Fri, 13 Nov 2020 15:02:17 +01:00
Committer:     Thomas Gleixner <tglx@linutronix.de>
CommitterDate: Mon, 23 Nov 2020 10:31:06 +01:00

preempt: Cleanup the macro maze a bit

Make the macro maze consistent and prepare it for adding the RT variant for
BH accounting.

 - Use nmi_count() for the NMI portion of preempt count
 - Introduce in_hardirq() to make the naming consistent and non-ambiguous
 - Use the macros to create combined checks (e.g. in_task()) so the
   softirq representation for RT just falls into place.
 - Update comments and move the deprecated macros aside

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Frederic Weisbecker <frederic@kernel.org>
Link: https://lore.kernel.org/r/20201113141733.864469886@linutronix.de

---
 include/linux/preempt.h | 30 ++++++++++++++++--------------
 1 file changed, 16 insertions(+), 14 deletions(-)

diff --git a/include/linux/preempt.h b/include/linux/preempt.h
index 7d9c1c0..7547857 100644
--- a/include/linux/preempt.h
+++ b/include/linux/preempt.h
@@ -77,31 +77,33 @@
 /* preempt_count() and related functions, depends on PREEMPT_NEED_RESCHED */
 #include <asm/preempt.h>
 
+#define nmi_count()	(preempt_count() & NMI_MASK)
 #define hardirq_count()	(preempt_count() & HARDIRQ_MASK)
 #define softirq_count()	(preempt_count() & SOFTIRQ_MASK)
-#define irq_count()	(preempt_count() & (HARDIRQ_MASK | SOFTIRQ_MASK \
-				 | NMI_MASK))
+#define irq_count()	(nmi_count() | hardirq_count() | softirq_count())
 
 /*
- * Are we doing bottom half or hardware interrupt processing?
+ * Macros to retrieve the current execution context:
  *
- * in_irq()       - We're in (hard) IRQ context
+ * in_nmi()		- We're in NMI context
+ * in_hardirq()		- We're in hard IRQ context
+ * in_serving_softirq()	- We're in softirq context
+ * in_task()		- We're in task context
+ */
+#define in_nmi()		(nmi_count())
+#define in_hardirq()		(hardirq_count())
+#define in_serving_softirq()	(softirq_count() & SOFTIRQ_OFFSET)
+#define in_task()		(!(in_nmi() | in_hardirq() | in_serving_softirq()))
+
+/*
+ * The following macros are deprecated and should not be used in new code:
+ * in_irq()       - Obsolete version of in_hardirq()
  * in_softirq()   - We have BH disabled, or are processing softirqs
  * in_interrupt() - We're in NMI,IRQ,SoftIRQ context or have BH disabled
- * in_serving_softirq() - We're in softirq context
- * in_nmi()       - We're in NMI context
- * in_task()	  - We're in task context
- *
- * Note: due to the BH disabled confusion: in_softirq(),in_interrupt() really
- *       should not be used in new code.
  */
 #define in_irq()		(hardirq_count())
 #define in_softirq()		(softirq_count())
 #define in_interrupt()		(irq_count())
-#define in_serving_softirq()	(softirq_count() & SOFTIRQ_OFFSET)
-#define in_nmi()		(preempt_count() & NMI_MASK)
-#define in_task()		(!(preempt_count() & \
-				   (NMI_MASK | HARDIRQ_MASK | SOFTIRQ_OFFSET)))
 
 /*
  * The preempt_count offset after preempt_disable();
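
A worked example of the resulting checks (sketch, not part of the
patch; mask values assume the standard layout: PREEMPT bits 0-7,
SOFTIRQ bits 8-15, HARDIRQ bits 16-19, NMI bits 20-23):

	pc = 0x00010200		/* one hardirq over a BH disabled section */

	nmi_count()          = pc & 0x00f00000 = 0       -> in_nmi() false
	hardirq_count()      = pc & 0x000f0000 = 0x10000 -> in_hardirq() true
	softirq_count()      = pc & 0x0000ff00 = 0x00200
	in_serving_softirq() = 0x00200 & 0x100  = 0      -> false
	in_task()            = !(0 | 0x10000 | 0)        -> false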

^ permalink raw reply related	[flat|nested] 136+ messages in thread

* [tip: irq/core] irqstat: Move declaration into asm-generic/hardirq.h
  2020-11-13 14:02   ` Thomas Gleixner
  (?)
  (?)
@ 2020-11-23 22:51   ` tip-bot2 for Thomas Gleixner
  -1 siblings, 0 replies; 136+ messages in thread
From: tip-bot2 for Thomas Gleixner @ 2020-11-23 22:51 UTC (permalink / raw)
  To: linux-tip-commits
  Cc: Thomas Gleixner, Frederic Weisbecker, x86, linux-kernel, maz

The following commit has been merged into the irq/core branch of tip:

Commit-ID:     e091bc90cd2d65f48e4688faead2911558d177d7
Gitweb:        https://git.kernel.org/tip/e091bc90cd2d65f48e4688faead2911558d177d7
Author:        Thomas Gleixner <tglx@linutronix.de>
AuthorDate:    Fri, 13 Nov 2020 15:02:16 +01:00
Committer:     Thomas Gleixner <tglx@linutronix.de>
CommitterDate: Mon, 23 Nov 2020 10:31:06 +01:00

irqstat: Move declaration into asm-generic/hardirq.h

Move the declaration of the irq_cpustat per cpu variable to
asm-generic/hardirq.h and remove the now empty linux/irq_cpustat.h header.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Frederic Weisbecker <frederic@kernel.org>
Link: https://lore.kernel.org/r/20201113141733.737377332@linutronix.de

---
 include/asm-generic/hardirq.h |  3 ++-
 include/linux/irq_cpustat.h   | 24 ------------------------
 2 files changed, 2 insertions(+), 25 deletions(-)
 delete mode 100644 include/linux/irq_cpustat.h

diff --git a/include/asm-generic/hardirq.h b/include/asm-generic/hardirq.h
index f5dd997..7317e82 100644
--- a/include/asm-generic/hardirq.h
+++ b/include/asm-generic/hardirq.h
@@ -12,7 +12,8 @@ typedef struct {
 #endif
 } ____cacheline_aligned irq_cpustat_t;
 
-#include <linux/irq_cpustat.h>	/* Standard mappings for irq_cpustat_t above */
+DECLARE_PER_CPU_ALIGNED(irq_cpustat_t, irq_stat);
+
 #include <linux/irq.h>
 
 #ifndef ack_bad_irq
diff --git a/include/linux/irq_cpustat.h b/include/linux/irq_cpustat.h
deleted file mode 100644
index 78fb2de..0000000
--- a/include/linux/irq_cpustat.h
+++ /dev/null
@@ -1,24 +0,0 @@
-/* SPDX-License-Identifier: GPL-2.0 */
-#ifndef __irq_cpustat_h
-#define __irq_cpustat_h
-
-/*
- * Contains default mappings for irq_cpustat_t, used by almost every
- * architecture.  Some arch (like s390) have per cpu hardware pages and
- * they define their own mappings for irq_stat.
- *
- * Keith Owens <kaos@ocs.com.au> July 2000.
- */
-
-
-/*
- * Simple wrappers reducing source bloat.  Define all irq_stat fields
- * here, even ones that are arch dependent.  That way we get common
- * definitions instead of differing sets for each arch.
- */
-
-#ifndef __ARCH_IRQ_STAT
-DECLARE_PER_CPU_ALIGNED(irq_cpustat_t, irq_stat);	/* defined in asm/hardirq.h */
-#endif
-
-#endif	/* __irq_cpustat_h */

^ permalink raw reply related	[flat|nested] 136+ messages in thread

* [tip: irq/core] sh: irqstat: Use the generic irq_cpustat_t
  2020-11-13 14:02   ` Thomas Gleixner
                     ` (2 preceding siblings ...)
  (?)
@ 2020-11-23 22:51   ` tip-bot2 for Thomas Gleixner
  -1 siblings, 0 replies; 136+ messages in thread
From: tip-bot2 for Thomas Gleixner @ 2020-11-23 22:51 UTC (permalink / raw)
  To: linux-tip-commits
  Cc: Thomas Gleixner, Frederic Weisbecker, x86, linux-kernel, maz

The following commit has been merged into the irq/core branch of tip:

Commit-ID:     fd15c1941f0ae0b46d48431d0020edfc843abd33
Gitweb:        https://git.kernel.org/tip/fd15c1941f0ae0b46d48431d0020edfc843abd33
Author:        Thomas Gleixner <tglx@linutronix.de>
AuthorDate:    Fri, 13 Nov 2020 15:02:15 +01:00
Committer:     Thomas Gleixner <tglx@linutronix.de>
CommitterDate: Mon, 23 Nov 2020 10:31:06 +01:00

sh: irqstat: Use the generic irq_cpustat_t

SH can now use the generic irq_cpustat_t. Define ack_bad_irq so the generic
header does not emit the generic version of it.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Frederic Weisbecker <frederic@kernel.org>
Link: https://lore.kernel.org/r/20201113141733.625146223@linutronix.de

---
 arch/sh/include/asm/hardirq.h | 14 ++++----------
 1 file changed, 4 insertions(+), 10 deletions(-)

diff --git a/arch/sh/include/asm/hardirq.h b/arch/sh/include/asm/hardirq.h
index edaea35..9fe4495 100644
--- a/arch/sh/include/asm/hardirq.h
+++ b/arch/sh/include/asm/hardirq.h
@@ -2,16 +2,10 @@
 #ifndef __ASM_SH_HARDIRQ_H
 #define __ASM_SH_HARDIRQ_H
 
-#include <linux/threads.h>
-#include <linux/irq.h>
-
-typedef struct {
-	unsigned int __softirq_pending;
-	unsigned int __nmi_count;		/* arch dependent */
-} ____cacheline_aligned irq_cpustat_t;
-
-#include <linux/irq_cpustat.h>	/* Standard mappings for irq_cpustat_t above */
-
 extern void ack_bad_irq(unsigned int irq);
+#define ack_bad_irq ack_bad_irq
+#define ARCH_WANTS_NMI_IRQSTAT
+
+#include <asm-generic/hardirq.h>
 
 #endif /* __ASM_SH_HARDIRQ_H */

^ permalink raw reply related	[flat|nested] 136+ messages in thread

* [tip: irq/core] arm64: irqstat: Get rid of duplicated declaration
  2020-11-13 14:02   ` Thomas Gleixner
                     ` (4 preceding siblings ...)
  (?)
@ 2020-11-23 22:51   ` tip-bot2 for Thomas Gleixner
  -1 siblings, 0 replies; 136+ messages in thread
From: tip-bot2 for Thomas Gleixner @ 2020-11-23 22:51 UTC (permalink / raw)
  To: linux-tip-commits
  Cc: Thomas Gleixner, Frederic Weisbecker, Will Deacon, Marc Zyngier,
	x86, linux-kernel

The following commit has been merged into the irq/core branch of tip:

Commit-ID:     2cb0837e56e1b04b773ed05df72297de4e010063
Gitweb:        https://git.kernel.org/tip/2cb0837e56e1b04b773ed05df72297de4e010063
Author:        Thomas Gleixner <tglx@linutronix.de>
AuthorDate:    Fri, 13 Nov 2020 15:02:13 +01:00
Committer:     Thomas Gleixner <tglx@linutronix.de>
CommitterDate: Mon, 23 Nov 2020 10:31:05 +01:00

arm64: irqstat: Get rid of duplicated declaration

irq_cpustat_t is exactly the same as the asm-generic one. Define
ack_bad_irq so the generic header does not emit the generic version of it.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Frederic Weisbecker <frederic@kernel.org>
Acked-by: Will Deacon <will@kernel.org>
Acked-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20201113141733.392015387@linutronix.de

---
 arch/arm64/include/asm/hardirq.h | 7 ++-----
 1 file changed, 2 insertions(+), 5 deletions(-)

diff --git a/arch/arm64/include/asm/hardirq.h b/arch/arm64/include/asm/hardirq.h
index 5ffa4ba..cbfa7b6 100644
--- a/arch/arm64/include/asm/hardirq.h
+++ b/arch/arm64/include/asm/hardirq.h
@@ -13,11 +13,8 @@
 #include <asm/kvm_arm.h>
 #include <asm/sysreg.h>
 
-typedef struct {
-	unsigned int __softirq_pending;
-} ____cacheline_aligned irq_cpustat_t;
-
-#include <linux/irq_cpustat.h>	/* Standard mappings for irq_cpustat_t above */
+#define ack_bad_irq ack_bad_irq
+#include <asm-generic/hardirq.h>
 
 #define __ARCH_IRQ_EXIT_IRQS_DISABLED	1
 

^ permalink raw reply related	[flat|nested] 136+ messages in thread

* [tip: irq/core] asm-generic/irqstat: Add optional __nmi_count member
  2020-11-13 14:02   ` Thomas Gleixner
  (?)
  (?)
@ 2020-11-23 22:51   ` tip-bot2 for Thomas Gleixner
  -1 siblings, 0 replies; 136+ messages in thread
From: tip-bot2 for Thomas Gleixner @ 2020-11-23 22:51 UTC (permalink / raw)
  To: linux-tip-commits
  Cc: Thomas Gleixner, Frederic Weisbecker, x86, linux-kernel, maz

The following commit has been merged into the irq/core branch of tip:

Commit-ID:     1adb99eabce9deefb55985c19181d375ba6ff4aa
Gitweb:        https://git.kernel.org/tip/1adb99eabce9deefb55985c19181d375ba6ff4aa
Author:        Thomas Gleixner <tglx@linutronix.de>
AuthorDate:    Fri, 13 Nov 2020 15:02:14 +01:00
Committer:     Thomas Gleixner <tglx@linutronix.de>
CommitterDate: Mon, 23 Nov 2020 10:31:06 +01:00

asm-generic/irqstat: Add optional __nmi_count member

Add an optional __nmi_count member to irq_cpustat_t so more architectures
can use the generic version.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Frederic Weisbecker <frederic@kernel.org>
Link: https://lore.kernel.org/r/20201113141733.501611990@linutronix.de

---
 include/asm-generic/hardirq.h | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/include/asm-generic/hardirq.h b/include/asm-generic/hardirq.h
index d14214d..f5dd997 100644
--- a/include/asm-generic/hardirq.h
+++ b/include/asm-generic/hardirq.h
@@ -7,6 +7,9 @@
 
 typedef struct {
 	unsigned int __softirq_pending;
+#ifdef ARCH_WANTS_NMI_IRQSTAT
+	unsigned int __nmi_count;
+#endif
 } ____cacheline_aligned irq_cpustat_t;
 
 #include <linux/irq_cpustat.h>	/* Standard mappings for irq_cpustat_t above */

^ permalink raw reply related	[flat|nested] 136+ messages in thread

* [tip: irq/core] ARM: irqstat: Get rid of duplicated declaration
  2020-11-13 14:02   ` Thomas Gleixner
                     ` (3 preceding siblings ...)
  (?)
@ 2020-11-23 22:51   ` tip-bot2 for Thomas Gleixner
  -1 siblings, 0 replies; 136+ messages in thread
From: tip-bot2 for Thomas Gleixner @ 2020-11-23 22:51 UTC (permalink / raw)
  To: linux-tip-commits
  Cc: Thomas Gleixner, Frederic Weisbecker, Valentin Schneider, x86,
	linux-kernel, maz

The following commit has been merged into the irq/core branch of tip:

Commit-ID:     7fd70c65faacd39628ba5f670be6490010c8132f
Gitweb:        https://git.kernel.org/tip/7fd70c65faacd39628ba5f670be6490010c8132f
Author:        Thomas Gleixner <tglx@linutronix.de>
AuthorDate:    Fri, 13 Nov 2020 15:02:12 +01:00
Committer:     Thomas Gleixner <tglx@linutronix.de>
CommitterDate: Mon, 23 Nov 2020 10:31:05 +01:00

ARM: irqstat: Get rid of duplicated declaration

irq_cpustat_t is exactly the same as the asm-generic one. Define
ack_bad_irq so the generic header does not emit the generic version of it.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Frederic Weisbecker <frederic@kernel.org>
Reviewed-by: Valentin Schneider <valentin.schneider@arm.com>
Link: https://lore.kernel.org/r/20201113141733.276505871@linutronix.de

---
 arch/arm/include/asm/hardirq.h | 11 +++--------
 arch/arm/include/asm/irq.h     |  2 ++
 2 files changed, 5 insertions(+), 8 deletions(-)

diff --git a/arch/arm/include/asm/hardirq.h b/arch/arm/include/asm/hardirq.h
index b95848e..706efaf 100644
--- a/arch/arm/include/asm/hardirq.h
+++ b/arch/arm/include/asm/hardirq.h
@@ -2,16 +2,11 @@
 #ifndef __ASM_HARDIRQ_H
 #define __ASM_HARDIRQ_H
 
-#include <linux/cache.h>
-#include <linux/threads.h>
 #include <asm/irq.h>
 
-typedef struct {
-	unsigned int __softirq_pending;
-} ____cacheline_aligned irq_cpustat_t;
-
-#include <linux/irq_cpustat.h>	/* Standard mappings for irq_cpustat_t above */
-
 #define __ARCH_IRQ_EXIT_IRQS_DISABLED	1
+#define ack_bad_irq ack_bad_irq
+
+#include <asm-generic/hardirq.h>
 
 #endif /* __ASM_HARDIRQ_H */
diff --git a/arch/arm/include/asm/irq.h b/arch/arm/include/asm/irq.h
index 46d4114..1cbcc46 100644
--- a/arch/arm/include/asm/irq.h
+++ b/arch/arm/include/asm/irq.h
@@ -31,6 +31,8 @@ void handle_IRQ(unsigned int, struct pt_regs *);
 void init_IRQ(void);
 
 #ifdef CONFIG_SMP
+#include <linux/cpumask.h>
+
 extern void arch_trigger_cpumask_backtrace(const cpumask_t *mask,
 					   bool exclude_self);
 #define arch_trigger_cpumask_backtrace arch_trigger_cpumask_backtrace

^ permalink raw reply related	[flat|nested] 136+ messages in thread

* [tip: irq/core] um/irqstat: Get rid of the duplicated declarations
  2020-11-13 14:02   ` Thomas Gleixner
                     ` (2 preceding siblings ...)
  (?)
@ 2020-11-23 22:51   ` tip-bot2 for Thomas Gleixner
  -1 siblings, 0 replies; 136+ messages in thread
From: tip-bot2 for Thomas Gleixner @ 2020-11-23 22:51 UTC (permalink / raw)
  To: linux-tip-commits
  Cc: Thomas Gleixner, Frederic Weisbecker, x86, linux-kernel, maz

The following commit has been merged into the irq/core branch of tip:

Commit-ID:     e83694a7b249de63beb1d8b45474b796dce3cd45
Gitweb:        https://git.kernel.org/tip/e83694a7b249de63beb1d8b45474b796dce3cd45
Author:        Thomas Gleixner <tglx@linutronix.de>
AuthorDate:    Fri, 13 Nov 2020 15:02:11 +01:00
Committer:     Thomas Gleixner <tglx@linutronix.de>
CommitterDate: Mon, 23 Nov 2020 10:31:05 +01:00

um/irqstat: Get rid of the duplicated declarations

irq_cpustat_t and ack_bad_irq() are exactly the same as the asm-generic
ones.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Frederic Weisbecker <frederic@kernel.org>
Link: https://lore.kernel.org/r/20201113141733.156361337@linutronix.de

---
 arch/um/include/asm/hardirq.h | 17 +----------------
 1 file changed, 1 insertion(+), 16 deletions(-)

diff --git a/arch/um/include/asm/hardirq.h b/arch/um/include/asm/hardirq.h
index b426796..52e2c36 100644
--- a/arch/um/include/asm/hardirq.h
+++ b/arch/um/include/asm/hardirq.h
@@ -2,22 +2,7 @@
 #ifndef __ASM_UM_HARDIRQ_H
 #define __ASM_UM_HARDIRQ_H
 
-#include <linux/cache.h>
-#include <linux/threads.h>
-
-typedef struct {
-	unsigned int __softirq_pending;
-} ____cacheline_aligned irq_cpustat_t;
-
-#include <linux/irq_cpustat.h>	/* Standard mappings for irq_cpustat_t above */
-#include <linux/irq.h>
-
-#ifndef ack_bad_irq
-static inline void ack_bad_irq(unsigned int irq)
-{
-	printk(KERN_CRIT "unexpected IRQ trap at vector %02x\n", irq);
-}
-#endif
+#include <asm-generic/hardirq.h>
 
 #define __ARCH_IRQ_EXIT_IRQS_DISABLED 1
 

^ permalink raw reply related	[flat|nested] 136+ messages in thread

* [tip: irq/core] irqstat: Get rid of nmi_count() and __IRQ_STAT()
  2020-11-13 14:02   ` Thomas Gleixner
                     ` (2 preceding siblings ...)
  (?)
@ 2020-11-23 22:51   ` tip-bot2 for Thomas Gleixner
  -1 siblings, 0 replies; 136+ messages in thread
From: tip-bot2 for Thomas Gleixner @ 2020-11-23 22:51 UTC (permalink / raw)
  To: linux-tip-commits
  Cc: Thomas Gleixner, Frederic Weisbecker, x86, linux-kernel, maz

The following commit has been merged into the irq/core branch of tip:

Commit-ID:     769dda58d1f647a45270db2f02efe2e2de856709
Gitweb:        https://git.kernel.org/tip/769dda58d1f647a45270db2f02efe2e2de856709
Author:        Thomas Gleixner <tglx@linutronix.de>
AuthorDate:    Fri, 13 Nov 2020 15:02:10 +01:00
Committer:     Thomas Gleixner <tglx@linutronix.de>
CommitterDate: Mon, 23 Nov 2020 10:31:05 +01:00

irqstat: Get rid of nmi_count() and __IRQ_STAT()

Nothing uses this anymore.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Frederic Weisbecker <frederic@kernel.org>
Link: https://lore.kernel.org/r/20201113141733.005212732@linutronix.de

---
 include/linux/irq_cpustat.h | 4 ----
 1 file changed, 4 deletions(-)

diff --git a/include/linux/irq_cpustat.h b/include/linux/irq_cpustat.h
index 6e8895c..78fb2de 100644
--- a/include/linux/irq_cpustat.h
+++ b/include/linux/irq_cpustat.h
@@ -19,10 +19,6 @@
 
 #ifndef __ARCH_IRQ_STAT
 DECLARE_PER_CPU_ALIGNED(irq_cpustat_t, irq_stat);	/* defined in asm/hardirq.h */
-#define __IRQ_STAT(cpu, member)	(per_cpu(irq_stat.member, cpu))
 #endif
 
-/* arch dependent irq_stat fields */
-#define nmi_count(cpu)		__IRQ_STAT((cpu), __nmi_count)	/* i386 */
-
 #endif	/* __irq_cpustat_h */

^ permalink raw reply related	[flat|nested] 136+ messages in thread

* [tip: irq/core] parisc: Remove bogus __IRQ_STAT macro
  2020-11-13 14:02   ` Thomas Gleixner
  (?)
  (?)
@ 2020-11-23 22:51   ` tip-bot2 for Thomas Gleixner
  -1 siblings, 0 replies; 136+ messages in thread
From: tip-bot2 for Thomas Gleixner @ 2020-11-23 22:51 UTC (permalink / raw)
  To: linux-tip-commits
  Cc: Thomas Gleixner, Frederic Weisbecker, x86, linux-kernel, maz

The following commit has been merged into the irq/core branch of tip:

Commit-ID:     9f112156f8da016df2dcbe77108e5b070aa58992
Gitweb:        https://git.kernel.org/tip/9f112156f8da016df2dcbe77108e5b070aa58992
Author:        Thomas Gleixner <tglx@linutronix.de>
AuthorDate:    Fri, 13 Nov 2020 15:02:08 +01:00
Committer:     Thomas Gleixner <tglx@linutronix.de>
CommitterDate: Mon, 23 Nov 2020 10:31:05 +01:00

parisc: Remove bogus __IRQ_STAT macro

This is a leftover from a historical array based implementation and is
unused.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Frederic Weisbecker <frederic@kernel.org>
Link: https://lore.kernel.org/r/20201113141732.680780121@linutronix.de

---
 arch/parisc/include/asm/hardirq.h | 1 -
 1 file changed, 1 deletion(-)

diff --git a/arch/parisc/include/asm/hardirq.h b/arch/parisc/include/asm/hardirq.h
index 7f70395..fad29aa 100644
--- a/arch/parisc/include/asm/hardirq.h
+++ b/arch/parisc/include/asm/hardirq.h
@@ -32,7 +32,6 @@ typedef struct {
 DECLARE_PER_CPU_SHARED_ALIGNED(irq_cpustat_t, irq_stat);
 
 #define __ARCH_IRQ_STAT
-#define __IRQ_STAT(cpu, member) (irq_stat[cpu].member)
 #define inc_irq_stat(member)	this_cpu_inc(irq_stat.member)
 #define __inc_irq_stat(member)	__this_cpu_inc(irq_stat.member)
 #define ack_bad_irq(irq) WARN(1, "unexpected IRQ trap at vector %02x\n", irq)

^ permalink raw reply related	[flat|nested] 136+ messages in thread

* [tip: irq/core] sh: Get rid of nmi_count()
  2020-11-13 14:02   ` Thomas Gleixner
                     ` (2 preceding siblings ...)
  (?)
@ 2020-11-23 22:51   ` tip-bot2 for Thomas Gleixner
  -1 siblings, 0 replies; 136+ messages in thread
From: tip-bot2 for Thomas Gleixner @ 2020-11-23 22:51 UTC (permalink / raw)
  To: linux-tip-commits
  Cc: Thomas Gleixner, Frederic Weisbecker, x86, linux-kernel, maz

The following commit has been merged into the irq/core branch of tip:

Commit-ID:     fe3f1d5d7cd3062c0cb8fe70dd77470019dedd19
Gitweb:        https://git.kernel.org/tip/fe3f1d5d7cd3062c0cb8fe70dd77470019dedd19
Author:        Thomas Gleixner <tglx@linutronix.de>
AuthorDate:    Fri, 13 Nov 2020 15:02:09 +01:00
Committer:     Thomas Gleixner <tglx@linutronix.de>
CommitterDate: Mon, 23 Nov 2020 10:31:05 +01:00

sh: Get rid of nmi_count()

nmi_count() is a historical leftover and SH is the only user. Replace it
with regular per cpu accessors.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Frederic Weisbecker <frederic@kernel.org>
Link: https://lore.kernel.org/r/20201113141732.844232404@linutronix.de

---
 arch/sh/kernel/irq.c   | 2 +-
 arch/sh/kernel/traps.c | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/arch/sh/kernel/irq.c b/arch/sh/kernel/irq.c
index 5717c7c..5addcb2 100644
--- a/arch/sh/kernel/irq.c
+++ b/arch/sh/kernel/irq.c
@@ -44,7 +44,7 @@ int arch_show_interrupts(struct seq_file *p, int prec)
 
 	seq_printf(p, "%*s: ", prec, "NMI");
 	for_each_online_cpu(j)
-		seq_printf(p, "%10u ", nmi_count(j));
+		seq_printf(p, "%10u ", per_cpu(irq_stat.__nmi_count, j));
 	seq_printf(p, "  Non-maskable interrupts\n");
 
 	seq_printf(p, "%*s: %10u\n", prec, "ERR", atomic_read(&irq_err_count));
diff --git a/arch/sh/kernel/traps.c b/arch/sh/kernel/traps.c
index 9c3d32b..f5beecd 100644
--- a/arch/sh/kernel/traps.c
+++ b/arch/sh/kernel/traps.c
@@ -186,7 +186,7 @@ BUILD_TRAP_HANDLER(nmi)
 	arch_ftrace_nmi_enter();
 
 	nmi_enter();
-	nmi_count(cpu)++;
+	this_cpu_inc(irq_stat.__nmi_count);
 
 	switch (notify_die(DIE_NMI, "NMI", regs, 0, vec & 0xff, SIGINT)) {
 	case NOTIFY_OK:

^ permalink raw reply related	[flat|nested] 136+ messages in thread

* Re: [patch 14/19] softirq: Make softirq control and processing RT aware
  2020-11-23 19:27       ` Thomas Gleixner
  (?)
@ 2020-11-23 23:58         ` Frederic Weisbecker
  -1 siblings, 0 replies; 136+ messages in thread
From: Frederic Weisbecker @ 2020-11-23 23:58 UTC (permalink / raw)
  To: Thomas Gleixner
  Cc: LKML, Peter Zijlstra, Paul McKenney, Sebastian Andrzej Siewior,
	Arnd Bergmann, James E.J. Bottomley, Helge Deller, linux-parisc,
	Yoshinori Sato, Rich Felker, linux-sh, Jeff Dike,
	Richard Weinberger, Anton Ivanov, linux-um, Russell King,
	Marc Zyngier, Valentin Schneider, linux-arm-kernel,
	Catalin Marinas, Will Deacon

On Mon, Nov 23, 2020 at 08:27:33PM +0100, Thomas Gleixner wrote:
> On Mon, Nov 23 2020 at 14:44, Frederic Weisbecker wrote:
> > On Fri, Nov 13, 2020 at 03:02:21PM +0100, Thomas Gleixner wrote:
> >> +	/*
> >> +	 * Adjust softirq count to SOFTIRQ_OFFSET which makes
> >> +	 * in_serving_softirq() become true.
> >> +	 */
> >> +	cnt = SOFTIRQ_OFFSET;
> >> +	__local_bh_enable(cnt, false);
> >
> > But then you enter __do_softirq() with softirq_count() == SOFTIRQ_OFFSET.
> > __do_softirq() calls softirq_handle_begin() which then sets it back to
> > SOFTIRQ_DISABLE_OFFSET...
> 
> The RT variant of it added in this very same patch
> > +static inline void softirq_handle_begin(void) { }
> > +static inline void softirq_handle_end(void) { }

Ah but then account_irq_enter_time() is called with SOFTIRQ_OFFSET (it's
currently called with softirq_count == 0 at this point) and that may mess
up irqtime accounting which relies on it. It could spuriously account all
the time between the last (soft-)IRQ exit until now as softirq time.
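
For context, the relevant check, condensed from irqtime_account_irq()
in kernel/sched/cputime.c (as of this cycle):

	delta = sched_clock_cpu(cpu) - irqtime->irq_start_time;
	irqtime->irq_start_time += delta;

	if (hardirq_count())
		irqtime_account_delta(irqtime, delta, CPUTIME_IRQ);
	else if (in_serving_softirq() && curr != this_cpu_ksoftirqd())
		irqtime_account_delta(irqtime, delta, CPUTIME_SOFTIRQ);

With softirq_count() already at SOFTIRQ_OFFSET on entry, that second
branch swallows the whole delta as softirq time.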

^ permalink raw reply	[flat|nested] 136+ messages in thread

* Re: [patch 14/19] softirq: Make softirq control and processing RT aware
  2020-11-23 23:58         ` Frederic Weisbecker
  (?)
@ 2020-11-24  0:06           ` Thomas Gleixner
  -1 siblings, 0 replies; 136+ messages in thread
From: Thomas Gleixner @ 2020-11-24  0:06 UTC (permalink / raw)
  To: Frederic Weisbecker
  Cc: LKML, Peter Zijlstra, Paul McKenney, Sebastian Andrzej Siewior,
	Arnd Bergmann, James E.J. Bottomley, Helge Deller, linux-parisc,
	Yoshinori Sato, Rich Felker, linux-sh, Jeff Dike,
	Richard Weinberger, Anton Ivanov, linux-um, Russell King,
	Marc Zyngier, Valentin Schneider, linux-arm-kernel,
	Catalin Marinas, Will Deacon

On Tue, Nov 24 2020 at 00:58, Frederic Weisbecker wrote:
> On Mon, Nov 23, 2020 at 08:27:33PM +0100, Thomas Gleixner wrote:
>> On Mon, Nov 23 2020 at 14:44, Frederic Weisbecker wrote:
>> > On Fri, Nov 13, 2020 at 03:02:21PM +0100, Thomas Gleixner wrote:
>> >> +	/*
>> >> +	 * Adjust softirq count to SOFTIRQ_OFFSET which makes
>> >> +	 * in_serving_softirq() become true.
>> >> +	 */
>> >> +	cnt = SOFTIRQ_OFFSET;
>> >> +	__local_bh_enable(cnt, false);
>> >
>> > But then you enter __do_softirq() with softirq_count() == SOFTIRQ_OFFSET.
>> > __do_softirq() calls softirq_handle_begin() which then sets it back to
>> > SOFTIRQ_DISABLE_OFFSET...
>> 
>> The RT variant of it added in this very same patch
>> > +static inline void softirq_handle_begin(void) { }
>> > +static inline void softirq_handle_end(void) { }
>
> Ah but then account_irq_enter_time() is called with SOFTIRQ_OFFSET (it's
> currently called with softirq_count == 0 at this point) and that may mess
> up irqtime accounting which relies on it. It could spuriously account all
> the time between the last (soft-)IRQ exit until now as softirq time.

Good point. Haven't thought about that. Let me have a look again.

Thanks,

        tglx

^ permalink raw reply	[flat|nested] 136+ messages in thread

* Re: [patch 14/19] softirq: Make softirq control and processing RT aware
  2020-11-24  0:06           ` Thomas Gleixner
  (?)
@ 2020-11-24  0:13             ` Frederic Weisbecker
  -1 siblings, 0 replies; 136+ messages in thread
From: Frederic Weisbecker @ 2020-11-24  0:13 UTC (permalink / raw)
  To: Thomas Gleixner
  Cc: LKML, Peter Zijlstra, Paul McKenney, Sebastian Andrzej Siewior,
	Arnd Bergmann, James E.J. Bottomley, Helge Deller, linux-parisc,
	Yoshinori Sato, Rich Felker, linux-sh, Jeff Dike,
	Richard Weinberger, Anton Ivanov, linux-um, Russell King,
	Marc Zyngier, Valentin Schneider, linux-arm-kernel,
	Catalin Marinas, Will Deacon

On Tue, Nov 24, 2020 at 01:06:15AM +0100, Thomas Gleixner wrote:
> On Tue, Nov 24 2020 at 00:58, Frederic Weisbecker wrote:
> > On Mon, Nov 23, 2020 at 08:27:33PM +0100, Thomas Gleixner wrote:
> >> On Mon, Nov 23 2020 at 14:44, Frederic Weisbecker wrote:
> >> > On Fri, Nov 13, 2020 at 03:02:21PM +0100, Thomas Gleixner wrote:
> >> >> +	/*
> >> >> +	 * Adjust softirq count to SOFTIRQ_OFFSET which makes
> >> >> +	 * in_serving_softirq() become true.
> >> >> +	 */
> >> >> +	cnt = SOFTIRQ_OFFSET;
> >> >> +	__local_bh_enable(cnt, false);
> >> >
> >> > But then you enter __do_softirq() with softirq_count() == SOFTIRQ_OFFSET.
> >> > __do_softirq() calls softirq_handle_begin() which then sets it back to
> >> > SOFTIRQ_DISABLE_OFFSET...
> >> 
> >> The RT variant of it added in this very same patch
> >> > +static inline void softirq_handle_begin(void) { }
> >> > +static inline void softirq_handle_end(void) { }
> >
> > Ah but then account_irq_enter_time() is called with SOFTIRQ_OFFSET (it's
> > currently called with softirq_count == 0 at this point) and that may mess
> > up irqtime accounting which relies on it. It could spuriously account all
> > the time between the last (soft-)IRQ exit until now as softirq time.
> 
> Good point. Haven't thought about that. Let me have a look again.

But I'm cooking a patchset which moves account_irq_enter_time() after
HARDIRQ_OFFSET or SOFTIRQ_OFFSET is incremented. This will allow us to move
tick_irq_enter() under this layout:

		 preempt_count_add(HARDIRQ_OFFSET)
		 lockdep_hardirq_enter()
		 tick_irq_enter()
		 account_irq_enter_time()

This way tick_irq_enter() can be correctly handled by lockdep and we can remove
the nasty hack which temporarily disables softirqs around it.
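
Roughly (a sketch only, the idle/nohz condition glossed over):

	void irq_enter_rcu(void)
	{
		preempt_count_add(HARDIRQ_OFFSET);	/* hardirq context from here */
		lockdep_hardirq_enter();		/* visible to lockdep        */

		if (/* idle / nohz_full conditions */)
			tick_irq_enter();		/* no BH disable hack needed */

		account_irq_enter_time(current);	/* sees HARDIRQ_OFFSET       */
	}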

And as a side effect it should also fix your issue.

I should have that ready soonish.

Thanks.

^ permalink raw reply	[flat|nested] 136+ messages in thread

* Re: [patch 14/19] softirq: Make softirq control and processing RT aware
  2020-11-24  0:13             ` Frederic Weisbecker
  (?)
@ 2020-11-24  0:22               ` Thomas Gleixner
  -1 siblings, 0 replies; 136+ messages in thread
From: Thomas Gleixner @ 2020-11-24  0:22 UTC (permalink / raw)
  To: Frederic Weisbecker
  Cc: LKML, Peter Zijlstra, Paul McKenney, Sebastian Andrzej Siewior,
	Arnd Bergmann, James E.J. Bottomley, Helge Deller, linux-parisc,
	Yoshinori Sato, Rich Felker, linux-sh, Jeff Dike,
	Richard Weinberger, Anton Ivanov, linux-um, Russell King,
	Marc Zyngier, Valentin Schneider, linux-arm-kernel,
	Catalin Marinas, Will Deacon

Frederic,

On Tue, Nov 24 2020 at 01:13, Frederic Weisbecker wrote:
> On Tue, Nov 24, 2020 at 01:06:15AM +0100, Thomas Gleixner wrote:
>> Good point. Haven't thought about that. Let me have a look again.
>
> But I'm cooking a patchset which moves account_irq_enter_time() after
> HARDIRQ_OFFSET or SOFTIRQ_OFFSET is incremented. This will allow us to move
> tick_irq_enter() under this layout:
>
> 		 preempt_count_add(HARDIRQ_OFFSET)
> 		 lockdep_hardirq_enter()
> 		 tick_irq_enter()
> 		 account_irq_enter_time()
>
> This way tick_irq_enter() can be correctly handled by lockdep and we can remove
> the nasty hack which temporarily disables softirqs around it.
>
> And as a side effect it should also fix your issue.
>
> I should have that ready soonish.

Sounds too good to be true :)

Looking forward to it!

Thanks for taking care of that!

       tglx

^ permalink raw reply	[flat|nested] 136+ messages in thread

* Re: [patch 02/19] sh: Get rid of nmi_count()
  2020-11-13 14:02   ` Thomas Gleixner
  (?)
@ 2021-01-01 14:27     ` John Paul Adrian Glaubitz
  -1 siblings, 0 replies; 136+ messages in thread
From: John Paul Adrian Glaubitz @ 2021-01-01 14:27 UTC (permalink / raw)
  To: Thomas Gleixner, LKML
  Cc: Peter Zijlstra, Frederic Weisbecker, Paul McKenney,
	Sebastian Andrzej Siewior, Arnd Bergmann, Yoshinori Sato,
	Rich Felker, linux-sh, James E.J. Bottomley, Helge Deller,
	linux-parisc, Jeff Dike, Richard Weinberger, Anton Ivanov,
	linux-um, Russell King, Marc Zyngier, Valentin Schneider,
	linux-arm-kernel, Catalin Marinas, Will Deacon

Hello Thomas!

On 11/13/20 3:02 PM, Thomas Gleixner wrote:
> nmi_count() is a historical leftover and SH is the only user. Replace it
> with regular per cpu accessors.
> 
> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
> Cc: Yoshinori Sato <ysato@users.sourceforge.jp>
> Cc: Rich Felker <dalias@libc.org>
> Cc: linux-sh@vger.kernel.org
> ---
>  arch/sh/kernel/irq.c   |    2 +-
>  arch/sh/kernel/traps.c |    2 +-
>  2 files changed, 2 insertions(+), 2 deletions(-)
> 
> --- a/arch/sh/kernel/irq.c
> +++ b/arch/sh/kernel/irq.c
> @@ -44,7 +44,7 @@ int arch_show_interrupts(struct seq_file
>  
>  	seq_printf(p, "%*s: ", prec, "NMI");
>  	for_each_online_cpu(j)
> -		seq_printf(p, "%10u ", nmi_count(j));
> +		seq_printf(p, "%10u ", per_cpu(irq_stat.__nmi_count, j));
>  	seq_printf(p, "  Non-maskable interrupts\n");
>  
>  	seq_printf(p, "%*s: %10u\n", prec, "ERR", atomic_read(&irq_err_count));
> --- a/arch/sh/kernel/traps.c
> +++ b/arch/sh/kernel/traps.c
> @@ -186,7 +186,7 @@ BUILD_TRAP_HANDLER(nmi)
>  	arch_ftrace_nmi_enter();
>  
>  	nmi_enter();
> -	nmi_count(cpu)++;
> +	this_cpu_inc(irq_stat.__nmi_count);
>  
>  	switch (notify_die(DIE_NMI, "NMI", regs, 0, vec & 0xff, SIGINT)) {
>  	case NOTIFY_OK:
> 
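
For reference, the generic pattern the patch moves to looks like this
as a stand-alone sketch (demo_stat and the helper names are made up
for illustration, not the kernel's actual irq_cpustat_t):

	#include <linux/percpu.h>
	#include <linux/cpumask.h>

	/* Illustrative per-CPU stat structure, one instance per CPU */
	struct demo_stat {
		unsigned int __nmi_count;
	};
	static DEFINE_PER_CPU(struct demo_stat, demo_stat);

	/* NMI path: preemption-safe increment of this CPU's counter */
	static void demo_nmi_hit(void)
	{
		this_cpu_inc(demo_stat.__nmi_count);
	}

	/* Reporting path: read each CPU's instance, as the /proc code does */
	static unsigned int demo_nmi_total(void)
	{
		unsigned int sum = 0;
		int j;

		for_each_online_cpu(j)
			sum += per_cpu(demo_stat.__nmi_count, j);
		return sum;
	}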

Just booted my SH7785LCR board with a kernel based on Linus' latest tree
and can confirm that this change does not cause any regressions.

Adrian

-- 
 .''`.  John Paul Adrian Glaubitz
: :' :  Debian Developer - glaubitz@debian.org
`. `'   Freie Universitaet Berlin - glaubitz@physik.fu-berlin.de
  `-    GPG: 62FF 8A75 84E0 2956 9546  0006 7426 3B37 F5B5 F913


^ permalink raw reply	[flat|nested] 136+ messages in thread

end of thread, other threads:[~2021-01-01 14:30 UTC | newest]

Thread overview: 136+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2020-11-13 14:02 [patch 00/19] softirq: Cleanups and RT awareness Thomas Gleixner
2020-11-13 14:02 ` Thomas Gleixner
2020-11-13 14:02 ` Thomas Gleixner
2020-11-13 14:02 ` [patch 01/19] parisc: Remove bogus __IRQ_STAT macro Thomas Gleixner
2020-11-13 14:02   ` Thomas Gleixner
2020-11-13 14:02   ` Thomas Gleixner
2020-11-23 22:51   ` [tip: irq/core] " tip-bot2 for Thomas Gleixner
2020-11-13 14:02 ` [patch 02/19] sh: Get rid of nmi_count() Thomas Gleixner
2020-11-13 14:02   ` Thomas Gleixner
2020-11-13 14:02   ` Thomas Gleixner
2020-11-13 14:02   ` Thomas Gleixner
2020-11-23 22:51   ` [tip: irq/core] " tip-bot2 for Thomas Gleixner
2021-01-01 14:27   ` [patch 02/19] " John Paul Adrian Glaubitz
2021-01-01 14:27     ` John Paul Adrian Glaubitz
2021-01-01 14:27     ` John Paul Adrian Glaubitz
2020-11-13 14:02 ` [patch 03/19] irqstat: Get rid of nmi_count() and __IRQ_STAT() Thomas Gleixner
2020-11-13 14:02   ` Thomas Gleixner
2020-11-13 14:02   ` Thomas Gleixner
2020-11-13 14:02   ` Thomas Gleixner
2020-11-23 22:51   ` [tip: irq/core] " tip-bot2 for Thomas Gleixner
2020-11-13 14:02 ` [patch 04/19] um/irqstat: Get rid of the duplicated declarations Thomas Gleixner
2020-11-13 14:02   ` Thomas Gleixner
2020-11-13 14:02   ` Thomas Gleixner
2020-11-13 14:02   ` Thomas Gleixner
2020-11-23 22:51   ` [tip: irq/core] " tip-bot2 for Thomas Gleixner
2020-11-13 14:02 ` [patch 05/19] ARM: irqstat: Get rid of duplicated declaration Thomas Gleixner
2020-11-13 14:02   ` Thomas Gleixner
2020-11-13 14:02   ` Thomas Gleixner
2020-11-13 14:02   ` Thomas Gleixner
2020-11-16 18:19   ` Valentin Schneider
2020-11-16 18:19     ` Valentin Schneider
2020-11-16 18:19     ` Valentin Schneider
2020-11-23 22:51   ` [tip: irq/core] " tip-bot2 for Thomas Gleixner
2020-11-13 14:02 ` [patch 06/19] arm64: " Thomas Gleixner
2020-11-13 14:02   ` Thomas Gleixner
2020-11-13 14:02   ` Thomas Gleixner
2020-11-13 14:02   ` Thomas Gleixner
2020-11-16 10:01   ` Will Deacon
2020-11-16 10:01     ` Will Deacon
2020-11-16 10:01     ` Will Deacon
2020-11-16 10:01     ` Will Deacon
2020-11-16 10:51   ` Marc Zyngier
2020-11-16 10:51     ` Marc Zyngier
2020-11-16 10:51     ` Marc Zyngier
2020-11-23 22:51   ` [tip: irq/core] " tip-bot2 for Thomas Gleixner
2020-11-13 14:02 ` [patch 07/19] asm-generic/irqstat: Add optional __nmi_count member Thomas Gleixner
2020-11-13 14:02   ` Thomas Gleixner
2020-11-13 14:02   ` Thomas Gleixner
2020-11-23 22:51   ` [tip: irq/core] " tip-bot2 for Thomas Gleixner
2020-11-13 14:02 ` [patch 08/19] sh: irqstat: Use the generic irq_cpustat_t Thomas Gleixner
2020-11-13 14:02   ` Thomas Gleixner
2020-11-13 14:02   ` Thomas Gleixner
2020-11-13 14:02   ` Thomas Gleixner
2020-11-23 22:51   ` [tip: irq/core] " tip-bot2 for Thomas Gleixner
2020-11-13 14:02 ` [patch 09/19] irqstat: Move declaration into asm-generic/hardirq.h Thomas Gleixner
2020-11-13 14:02   ` Thomas Gleixner
2020-11-13 14:02   ` Thomas Gleixner
2020-11-23 22:51   ` [tip: irq/core] " tip-bot2 for Thomas Gleixner
2020-11-13 14:02 ` [patch 10/19] preempt: Cleanup the macro maze a bit Thomas Gleixner
2020-11-13 14:02   ` Thomas Gleixner
2020-11-13 14:02   ` Thomas Gleixner
2020-11-16 12:17   ` Peter Zijlstra
2020-11-16 12:17     ` Peter Zijlstra
2020-11-16 12:17     ` Peter Zijlstra
2020-11-16 12:17     ` Peter Zijlstra
2020-11-16 17:42     ` Thomas Gleixner
2020-11-16 17:42       ` Thomas Gleixner
2020-11-16 17:42       ` Thomas Gleixner
2020-11-17 10:21       ` Peter Zijlstra
2020-11-17 10:21         ` Peter Zijlstra
2020-11-17 10:21         ` Peter Zijlstra
2020-11-23 22:51   ` [tip: irq/core] " tip-bot2 for Thomas Gleixner
2020-11-13 14:02 ` [patch 11/19] softirq: Move related code into one section Thomas Gleixner
2020-11-13 14:02   ` Thomas Gleixner
2020-11-13 14:02   ` Thomas Gleixner
2020-11-19 12:20   ` Frederic Weisbecker
2020-11-19 12:20     ` Frederic Weisbecker
2020-11-23 22:51   ` [tip: irq/core] " tip-bot2 for Thomas Gleixner
2020-11-13 14:02 ` [patch 12/19] softirq: Add RT specific softirq accounting Thomas Gleixner
2020-11-13 14:02   ` Thomas Gleixner
2020-11-13 14:02   ` Thomas Gleixner
2020-11-19 12:18   ` Frederic Weisbecker
2020-11-19 12:18     ` Frederic Weisbecker
2020-11-19 12:18     ` Frederic Weisbecker
2020-11-19 18:34     ` Thomas Gleixner
2020-11-19 18:34       ` Thomas Gleixner
2020-11-19 18:34       ` Thomas Gleixner
2020-11-19 22:52       ` Frederic Weisbecker
2020-11-19 22:52         ` Frederic Weisbecker
2020-11-13 14:02 ` [patch 13/19] softirq: Move various protections into inline helpers Thomas Gleixner
2020-11-13 14:02   ` Thomas Gleixner
2020-11-13 14:02   ` Thomas Gleixner
2020-11-13 14:02 ` [patch 14/19] softirq: Make softirq control and processing RT aware Thomas Gleixner
2020-11-13 14:02   ` Thomas Gleixner
2020-11-13 14:02   ` Thomas Gleixner
2020-11-20  0:26   ` Frederic Weisbecker
2020-11-20  0:26     ` Frederic Weisbecker
2020-11-20  0:26     ` Frederic Weisbecker
2020-11-20 13:27     ` Thomas Gleixner
2020-11-20 13:27       ` Thomas Gleixner
2020-11-20 13:27       ` Thomas Gleixner
2020-11-23 13:44   ` Frederic Weisbecker
2020-11-23 13:44     ` Frederic Weisbecker
2020-11-23 13:44     ` Frederic Weisbecker
2020-11-23 19:27     ` Thomas Gleixner
2020-11-23 19:27       ` Thomas Gleixner
2020-11-23 19:56       ` Frederic Weisbecker
2020-11-23 19:56         ` Frederic Weisbecker
2020-11-23 19:56         ` Frederic Weisbecker
2020-11-23 23:58       ` Frederic Weisbecker
2020-11-23 23:58         ` Frederic Weisbecker
2020-11-23 23:58         ` Frederic Weisbecker
2020-11-24  0:06         ` Thomas Gleixner
2020-11-24  0:06           ` Thomas Gleixner
2020-11-24  0:06           ` Thomas Gleixner
2020-11-24  0:13           ` Frederic Weisbecker
2020-11-24  0:13             ` Frederic Weisbecker
2020-11-24  0:13             ` Frederic Weisbecker
2020-11-24  0:22             ` Thomas Gleixner
2020-11-24  0:22               ` Thomas Gleixner
2020-11-24  0:22               ` Thomas Gleixner
2020-11-13 14:02 ` [patch 15/19] tick/sched: Prevent false positive softirq pending warnings on RT Thomas Gleixner
2020-11-13 14:02   ` Thomas Gleixner
2020-11-13 14:02   ` Thomas Gleixner
2020-11-13 14:02 ` [patch 16/19] rcu: Prevent false positive softirq warning " Thomas Gleixner
2020-11-13 14:02   ` Thomas Gleixner
2020-11-13 14:02   ` Thomas Gleixner
2020-11-13 14:02 ` [patch 17/19] softirq: Replace barrier() with cpu_relax() in tasklet_unlock_wait() Thomas Gleixner
2020-11-13 14:02   ` Thomas Gleixner
2020-11-13 14:02   ` Thomas Gleixner
2020-11-13 14:02 ` [patch 18/19] tasklets: Use static inlines for stub implementations Thomas Gleixner
2020-11-13 14:02   ` Thomas Gleixner
2020-11-13 14:02   ` Thomas Gleixner
2020-11-13 14:02 ` [patch 19/19] tasklets: Prevent kill/unlock_wait deadlock on RT Thomas Gleixner
2020-11-13 14:02   ` Thomas Gleixner
2020-11-13 14:02   ` Thomas Gleixner
