* [PATCH 4.4-rc5 v22 0/4] irq/arm: Use FIQ for NMI backtrace (when possible)
@ 2015-12-20 20:52 ` Daniel Thompson
  0 siblings, 0 replies; 26+ messages in thread
From: Daniel Thompson @ 2015-12-20 20:52 UTC (permalink / raw)
  To: Thomas Gleixner, Jason Cooper, Russell King, Marc Zyngier
  Cc: Daniel Thompson, Will Deacon, Catalin Marinas, Stephen Boyd,
	John Stultz, Steven Rostedt, linux-kernel, linux-arm-kernel,
	patches, linaro-kernel, Sumit Semwal, Dirk Behme, Daniel Drake,
	Dmitry Pervushin, Tim Sander, Petr Mladek

This patchset modifies the GIC driver to allow it, on supported
platforms, to route IPI interrupts to FIQ. It then uses this
feature to allow the NMI backtrace code on arm to be implemented
using FIQ.
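
Since the interesting logic is spread across the series, the call path
for a backtrace IPI that has been routed to FIQ is sketched below. This
is only a rough summary condensed from the diffs in patches 3 and 4
(every function name is taken from those diffs):

    handle_fiq_as_nmi(regs)           /* arch/arm/kernel/traps.c, inside nmi_enter()/nmi_exit() */
       handle_arch_irq(regs)          /* dispatch to the irqchip driver */
          gic_handle_irq(regs)        /* drivers/irqchip/irq-gic.c */
             gic_handle_fiq(regs)     /* selected because in_nmi() is true */
                handle_IPI(IPI_CPU_BACKTRACE, regs)
                   nmi_cpu_backtrace(regs)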

The patches have been runtime tested on the following systems, covering
both arm and arm64 systems and those with and without FIQ support:

* Freescale i.MX6 (arm, gicv1, supports FIQ)
* Qualcomm Snapdragon 600 (arm, gicv2, does not support FIQ)
* Hisilicon 6220 (arm64, gicv2, does not support FIQ)


v22:

* Rebase on v4.4-rc5 to adopt the new NMI backtrace code from Russell
  King.

* Polished a few comments and reorganised the patches very slightly
  (shifted a couple of arm changes to patch 4).

* Fixed bug in the way gic_handle_fiq() checks whether it is safe for
  it to read IAR.

v21:

* Change the way SGIs are raised to try to increase robustness when
  starting secondary cores. This is a theoretical fix for a regression
  reported by Mark Rutland on vexpress-tc2 but it also allows us to
  remove igroup0_shadow entirely since it is no longer needed.

* Fix a couple of variable names and add comments to describe the
  hardware behavior better (Mark Rutland).

* Improved MULTI_IRQ_HANDLER support by clearing FIQs using
  handle_arch_irq (Marc Zyngier).

* Fix gic_cpu_if_down() to ensure group 1 interrupts are disabled
  when the interface is brought down.

For changes in v20 and earlier see:
  http://thread.gmane.org/gmane.linux.kernel/1928465


Daniel Thompson (4):
  irqchip: gic: Optimize locking in gic_raise_softirq
  irqchip: gic: Make gic_raise_softirq FIQ-safe
  irqchip: gic: Introduce plumbing for IPI FIQ
  ARM: Allow IPI_CPU_BACKTRACE to exploit FIQ

 arch/arm/include/asm/smp.h      |   9 ++
 arch/arm/kernel/smp.c           |   6 +
 arch/arm/kernel/traps.c         |   9 +-
 drivers/irqchip/irq-gic.c       | 235 ++++++++++++++++++++++++++++++++++++----
 include/linux/irqchip/arm-gic.h |   6 +
 5 files changed, 245 insertions(+), 20 deletions(-)

--
2.5.0



* [PATCH 4.4-rc5 v22 1/4] irqchip: gic: Optimize locking in gic_raise_softirq
  2015-12-20 20:52 ` Daniel Thompson
@ 2015-12-20 20:52   ` Daniel Thompson
  -1 siblings, 0 replies; 26+ messages in thread
From: Daniel Thompson @ 2015-12-20 20:52 UTC (permalink / raw)
  To: Thomas Gleixner, Jason Cooper, Russell King, Marc Zyngier
  Cc: Daniel Thompson, Will Deacon, Catalin Marinas, Stephen Boyd,
	John Stultz, Steven Rostedt, linux-kernel, linux-arm-kernel,
	patches, linaro-kernel, Sumit Semwal, Dirk Behme, Daniel Drake,
	Dmitry Pervushin, Tim Sander, Petr Mladek

Currently gic_raise_softirq() is locked using irq_controller_lock.
This lock is primarily used to make register read-modify-write sequences
atomic but gic_raise_softirq() uses it instead to ensure that the
big.LITTLE migration logic can figure out when it is safe to migrate
interrupts between physical cores.

This is sub-optimal in two closely related ways:

1. No locking at all is required on systems where the b.L switcher is
   not configured.

2. Finer grain locking can be used on systems where the b.L switcher is
   present.

This patch resolves both of the above by introducing a separate finer
grain lock and providing conditionally compiled inlines to lock/unlock
it.
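
The resulting pattern is small enough to summarise here; this is simply
a condensed copy of the hunk below, shown so the fallback behaviour is
obvious:

	#ifdef CONFIG_BL_SWITCHER
	static DEFINE_RAW_SPINLOCK(cpu_map_migration_lock);

	static inline void gic_migration_lock(unsigned long *flags)
	{
		raw_spin_lock_irqsave(&cpu_map_migration_lock, *flags);
	}

	static inline void gic_migration_unlock(unsigned long flags)
	{
		raw_spin_unlock_irqrestore(&cpu_map_migration_lock, flags);
	}
	#else
	static inline void gic_migration_lock(unsigned long *flags) {}
	static inline void gic_migration_unlock(unsigned long flags) {}
	#endif

In other words, when the b.L switcher is not configured the lock/unlock
calls in gic_raise_softirq() compile away entirely.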

Signed-off-by: Daniel Thompson <daniel.thompson@linaro.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Jason Cooper <jason@lakedaemon.net>
Cc: Russell King <linux@arm.linux.org.uk>
Cc: Marc Zyngier <marc.zyngier@arm.com>
Acked-by: Nicolas Pitre <nicolas.pitre@linaro.org>
---
 drivers/irqchip/irq-gic.c | 36 +++++++++++++++++++++++++++++++++---
 1 file changed, 33 insertions(+), 3 deletions(-)

diff --git a/drivers/irqchip/irq-gic.c b/drivers/irqchip/irq-gic.c
index abf2ffaed392..0c4635ea308d 100644
--- a/drivers/irqchip/irq-gic.c
+++ b/drivers/irqchip/irq-gic.c
@@ -90,6 +90,27 @@ struct gic_chip_data {
 static DEFINE_RAW_SPINLOCK(irq_controller_lock);
 
 /*
+ * This lock is used by the big.LITTLE migration code to ensure no IPIs
+ * can be pended on the old core after the map has been updated.
+ */
+#ifdef CONFIG_BL_SWITCHER
+static DEFINE_RAW_SPINLOCK(cpu_map_migration_lock);
+
+static inline void gic_migration_lock(unsigned long *flags)
+{
+	raw_spin_lock_irqsave(&cpu_map_migration_lock, *flags);
+}
+
+static inline void gic_migration_unlock(unsigned long flags)
+{
+	raw_spin_unlock_irqrestore(&cpu_map_migration_lock, flags);
+}
+#else
+static inline void gic_migration_lock(unsigned long *flags) {}
+static inline void gic_migration_unlock(unsigned long flags) {}
+#endif
+
+/*
  * The GIC mapping of CPU interfaces does not necessarily match
  * the logical CPU numbering.  Let's use a mapping as returned
  * by the GIC itself.
@@ -763,7 +784,7 @@ static void gic_raise_softirq(const struct cpumask *mask, unsigned int irq)
 	int cpu;
 	unsigned long flags, map = 0;
 
-	raw_spin_lock_irqsave(&irq_controller_lock, flags);
+	gic_migration_lock(&flags);
 
 	/* Convert our logical CPU mask into a physical one. */
 	for_each_cpu(cpu, mask)
@@ -778,7 +799,7 @@ static void gic_raise_softirq(const struct cpumask *mask, unsigned int irq)
 	/* this always happens on GIC0 */
 	writel_relaxed(map << 16 | irq, gic_data_dist_base(&gic_data[0]) + GIC_DIST_SOFTINT);
 
-	raw_spin_unlock_irqrestore(&irq_controller_lock, flags);
+	gic_migration_unlock(flags);
 }
 #endif
 
@@ -849,8 +870,17 @@ void gic_migrate_target(unsigned int new_cpu_id)
 
 	raw_spin_lock(&irq_controller_lock);
 
-	/* Update the target interface for this logical CPU */
+	/*
+	 * Update the target interface for this logical CPU
+	 *
+	 * From the point we release the cpu_map_migration_lock any new
+	 * SGIs will be pended on the new cpu which makes the set of SGIs
+	 * pending on the old cpu static. That means we can defer the
+	 * migration until after we have released the irq_controller_lock.
+	 */
+	raw_spin_lock(&cpu_map_migration_lock);
 	gic_cpu_map[cpu] = 1 << new_cpu_id;
+	raw_spin_unlock(&cpu_map_migration_lock);
 
 	/*
 	 * Find all the peripheral interrupts targetting the current
-- 
2.5.0



* [PATCH 4.4-rc5 v22 2/4] irqchip: gic: Make gic_raise_softirq FIQ-safe
  2015-12-20 20:52 ` Daniel Thompson
@ 2015-12-20 20:52   ` Daniel Thompson
  -1 siblings, 0 replies; 26+ messages in thread
From: Daniel Thompson @ 2015-12-20 20:52 UTC (permalink / raw)
  To: Thomas Gleixner, Jason Cooper, Russell King, Marc Zyngier
  Cc: Daniel Thompson, Will Deacon, Catalin Marinas, Stephen Boyd,
	John Stultz, Steven Rostedt, linux-kernel, linux-arm-kernel,
	patches, linaro-kernel, Sumit Semwal, Dirk Behme, Daniel Drake,
	Dmitry Pervushin, Tim Sander, Petr Mladek

It is currently possible for FIQ handlers to re-enter gic_raise_softirq()
and lock up.

    	gic_raise_softirq()
	   lock(x);
-~-> FIQ
        handle_fiq()
	   gic_raise_softirq()
	      lock(x);		<-- Lockup

Because arch/arm/ uses IPIs to implement arch_irq_work_raise(), this
issue makes it difficult for FIQ handlers to safely defer work to less
restrictive calling contexts.

This patch fixes the problem by converting the cpu_map_migration_lock
into a rwlock, making it safe to re-enter the function.

Note that having made it safe to re-enter gic_raise_softirq() we no
longer need to mask interrupts during gic_raise_softirq() because the
b.L migration is always performed from task context.
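
With the rwlock in place the nesting shown above becomes harmless
because both contexts only take the lock for reading (a sketch using
the names from the diff below):

    gic_raise_softirq()
       read_lock(&cpu_map_migration_lock);      <-- OK
-~-> FIQ
       handle_fiq()
          gic_raise_softirq()
             read_lock(&cpu_map_migration_lock);   <-- also OK, readers nest

The only writer is gic_migrate_target(), which is documented below as
callable only from task context with IRQ and FIQ disabled, so the
write_lock() can never be taken while a reader on the same cpu has been
interrupted part way through.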

Signed-off-by: Daniel Thompson <daniel.thompson@linaro.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Jason Cooper <jason@lakedaemon.net>
Cc: Russell King <linux@arm.linux.org.uk>
Cc: Marc Zyngier <marc.zyngier@arm.com>
Acked-by: Nicolas Pitre <nicolas.pitre@linaro.org>
---
 drivers/irqchip/irq-gic.c | 38 +++++++++++++++++++++++++-------------
 1 file changed, 25 insertions(+), 13 deletions(-)

diff --git a/drivers/irqchip/irq-gic.c b/drivers/irqchip/irq-gic.c
index 0c4635ea308d..b6c1e96b52a1 100644
--- a/drivers/irqchip/irq-gic.c
+++ b/drivers/irqchip/irq-gic.c
@@ -92,22 +92,25 @@ static DEFINE_RAW_SPINLOCK(irq_controller_lock);
 /*
  * This lock is used by the big.LITTLE migration code to ensure no IPIs
  * can be pended on the old core after the map has been updated.
+ *
+ * This lock may be locked for reading from both IRQ and FIQ handlers
+ * and therefore must not be locked for writing when these are enabled.
  */
 #ifdef CONFIG_BL_SWITCHER
-static DEFINE_RAW_SPINLOCK(cpu_map_migration_lock);
+static DEFINE_RWLOCK(cpu_map_migration_lock);
 
-static inline void gic_migration_lock(unsigned long *flags)
+static inline void gic_migration_lock(void)
 {
-	raw_spin_lock_irqsave(&cpu_map_migration_lock, *flags);
+	read_lock(&cpu_map_migration_lock);
 }
 
-static inline void gic_migration_unlock(unsigned long flags)
+static inline void gic_migration_unlock(void)
 {
-	raw_spin_unlock_irqrestore(&cpu_map_migration_lock, flags);
+	read_unlock(&cpu_map_migration_lock);
 }
 #else
-static inline void gic_migration_lock(unsigned long *flags) {}
-static inline void gic_migration_unlock(unsigned long flags) {}
+static inline void gic_migration_lock(void) {}
+static inline void gic_migration_unlock(void) {}
 #endif
 
 /*
@@ -779,12 +782,20 @@ static void __init gic_pm_init(struct gic_chip_data *gic)
 #endif
 
 #ifdef CONFIG_SMP
+/*
+ * Raise the specified IPI on all cpus set in mask.
+ *
+ * This function is safe to call from all calling contexts, including
+ * FIQ handlers. It relies on gic_migration_lock() being multiply acquirable
+ * to avoid deadlocks when the function is re-entered at different
+ * exception levels.
+ */
 static void gic_raise_softirq(const struct cpumask *mask, unsigned int irq)
 {
 	int cpu;
-	unsigned long flags, map = 0;
+	unsigned long map = 0;
 
-	gic_migration_lock(&flags);
+	gic_migration_lock();
 
 	/* Convert our logical CPU mask into a physical one. */
 	for_each_cpu(cpu, mask)
@@ -799,7 +810,7 @@ static void gic_raise_softirq(const struct cpumask *mask, unsigned int irq)
 	/* this always happens on GIC0 */
 	writel_relaxed(map << 16 | irq, gic_data_dist_base(&gic_data[0]) + GIC_DIST_SOFTINT);
 
-	gic_migration_unlock(flags);
+	gic_migration_unlock();
 }
 #endif
 
@@ -847,7 +858,8 @@ int gic_get_cpu_id(unsigned int cpu)
  * Migrate all peripheral interrupts with a target matching the current CPU
  * to the interface corresponding to @new_cpu_id.  The CPU interface mapping
  * is also updated.  Targets to other CPU interfaces are unchanged.
- * This must be called with IRQs locally disabled.
+ * This must be called from a task context and with IRQ and FIQ locally
+ * disabled.
  */
 void gic_migrate_target(unsigned int new_cpu_id)
 {
@@ -878,9 +890,9 @@ void gic_migrate_target(unsigned int new_cpu_id)
 	 * pending on the old cpu static. That means we can defer the
 	 * migration until after we have released the irq_controller_lock.
 	 */
-	raw_spin_lock(&cpu_map_migration_lock);
+	write_lock(&cpu_map_migration_lock);
 	gic_cpu_map[cpu] = 1 << new_cpu_id;
-	raw_spin_unlock(&cpu_map_migration_lock);
+	write_unlock(&cpu_map_migration_lock);
 
 	/*
 	 * Find all the peripheral interrupts targetting the current
-- 
2.5.0



* [PATCH 4.4-rc5 v22 3/4] irqchip: gic: Introduce plumbing for IPI FIQ
  2015-12-20 20:52 ` Daniel Thompson
@ 2015-12-20 20:52   ` Daniel Thompson
  -1 siblings, 0 replies; 26+ messages in thread
From: Daniel Thompson @ 2015-12-20 20:52 UTC (permalink / raw)
  To: Thomas Gleixner, Jason Cooper, Russell King, Marc Zyngier
  Cc: Daniel Thompson, Will Deacon, Catalin Marinas, Stephen Boyd,
	John Stultz, Steven Rostedt, linux-kernel, linux-arm-kernel,
	patches, linaro-kernel, Sumit Semwal, Dirk Behme, Daniel Drake,
	Dmitry Pervushin, Tim Sander, Petr Mladek

Currently it is not possible to exploit FIQ for systems with a GIC,
even on systems that are otherwise capable of it. This patch makes it
possible for IPIs to be delivered using FIQ.

To do so it modifies the register state so that normal interrupts are
placed in group 1 and specific IPIs are placed into group 0. It also
configures the controller to raise group 0 interrupts using the FIQ
signal. Finally it provides a means for architecture code to define
which IPIs shall use FIQ and to acknowledge any IPIs that are raised.

All GIC hardware except GICv1 without TrustZone support provides a
means to group exceptions into group 0 and group 1, but this hardware
functionality is unavailable to the kernel when a secure monitor is
present because access to the grouping registers is prohibited outside
the "secure world". However, when grouping is not available (or on
early GICv1 implementations where it is available but tricky to enable)
the code to change groups does not deploy and all IPIs will be raised
via IRQ.
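
When the kernel does have access to the grouping registers, the
register-level effect of this patch boils down to the following sketch
(condensed from the diffs below; the GICD_*/GICC_* constants and the
NSATT bit are the ones added to include/linux/irqchip/arm-gic.h):

	/* distributor: enable group 1 as well as group 0 */
	writel_relaxed(GICD_ENABLE_GRP1 | GICD_ENABLE, dist_base + GIC_DIST_CTRL);

	/* per cpu: everything except SMP_IPI_FIQ_MASK is placed in group 1 */
	writel_relaxed(~SMP_IPI_FIQ_MASK, dist_base + GIC_DIST_IGROUP + 0);

	/* cpu interface: enable group 1 and deliver group 0 using FIQ */
	ctrl |= GICC_COMMON_BPR | GICC_FIQ_EN | GICC_ACK_CTL | GICC_ENABLE_GRP1;
	writel_relaxed(ctrl | GICC_ENABLE, cpu_base + GIC_CPU_CTRL);

	/* SGIs must be raised twice when the GIC filters on NSATT */
	writel_relaxed(softint, dist_base + GIC_DIST_SOFTINT);
	writel_relaxed(softint | 0x8000, dist_base + GIC_DIST_SOFTINT);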

Signed-off-by: Daniel Thompson <daniel.thompson@linaro.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Jason Cooper <jason@lakedaemon.net>
Cc: Russell King <linux@arm.linux.org.uk>
Cc: Marc Zyngier <marc.zyngier@arm.com>
Tested-by: Jon Medhurst <tixy@linaro.org>
---
 drivers/irqchip/irq-gic.c       | 183 +++++++++++++++++++++++++++++++++++++---
 include/linux/irqchip/arm-gic.h |   6 ++
 2 files changed, 175 insertions(+), 14 deletions(-)

diff --git a/drivers/irqchip/irq-gic.c b/drivers/irqchip/irq-gic.c
index b6c1e96b52a1..8077edd0d38d 100644
--- a/drivers/irqchip/irq-gic.c
+++ b/drivers/irqchip/irq-gic.c
@@ -41,6 +41,7 @@
 #include <linux/irqchip.h>
 #include <linux/irqchip/chained_irq.h>
 #include <linux/irqchip/arm-gic.h>
+#include <linux/ratelimit.h>
 
 #include <asm/cputype.h>
 #include <asm/irq.h>
@@ -63,6 +64,10 @@ static void gic_check_cpu_features(void)
 #define gic_check_cpu_features()	do { } while(0)
 #endif
 
+#ifndef SMP_IPI_FIQ_MASK
+#define SMP_IPI_FIQ_MASK 0
+#endif
+
 union gic_base {
 	void __iomem *common_base;
 	void __percpu * __iomem *percpu_base;
@@ -82,6 +87,7 @@ struct gic_chip_data {
 #endif
 	struct irq_domain *domain;
 	unsigned int gic_irqs;
+	bool sgi_with_nsatt;
 #ifdef CONFIG_GIC_NON_BANKED
 	void __iomem *(*get_base)(union gic_base *);
 #endif
@@ -350,12 +356,52 @@ static int gic_set_affinity(struct irq_data *d, const struct cpumask *mask_val,
 }
 #endif
 
+/*
+ * Fully acknowledge (ack, eoi and deactivate) any outstanding FIQ-based IPI,
+ * otherwise do nothing.
+ */
+static void __maybe_unused gic_handle_fiq(struct pt_regs *regs)
+{
+	struct gic_chip_data *gic = &gic_data[0];
+	void __iomem *cpu_base = gic_data_cpu_base(gic);
+	u32 hppstat, hppnr, irqstat, irqnr;
+
+	do {
+		hppstat = readl_relaxed(cpu_base + GIC_CPU_HIGHPRI);
+		hppnr = hppstat & GICC_IAR_INT_ID_MASK;
+		if (!(hppnr < 16 && BIT(hppnr) & SMP_IPI_FIQ_MASK))
+			break;
+
+		irqstat = readl_relaxed(cpu_base + GIC_CPU_INTACK);
+		irqnr = irqstat & GICC_IAR_INT_ID_MASK;
+
+		writel_relaxed(irqstat, cpu_base + GIC_CPU_EOI);
+		if (static_key_true(&supports_deactivate))
+			writel_relaxed(irqstat, cpu_base + GIC_CPU_DEACTIVATE);
+
+		if (WARN_RATELIMIT(irqnr > 16,
+			       "Unexpected irqnr %u (bad prioritization?)\n",
+			       irqnr))
+			continue;
+#ifdef CONFIG_SMP
+		handle_IPI(irqnr, regs);
+#endif
+	} while (1);
+}
+
 static void __exception_irq_entry gic_handle_irq(struct pt_regs *regs)
 {
 	u32 irqstat, irqnr;
 	struct gic_chip_data *gic = &gic_data[0];
 	void __iomem *cpu_base = gic_data_cpu_base(gic);
 
+#ifdef CONFIG_ARM
+	if (in_nmi()) {
+		gic_handle_fiq(regs);
+		return;
+	}
+#endif
+
 	do {
 		irqstat = readl_relaxed(cpu_base + GIC_CPU_INTACK);
 		irqnr = irqstat & GICC_IAR_INT_ID_MASK;
@@ -439,6 +485,55 @@ static struct irq_chip gic_eoimode1_chip = {
 				  IRQCHIP_MASK_ON_SUSPEND,
 };
 
+/*
+ * Shift an interrupt between Group 0 and Group 1.
+ *
+ * In addition to changing the group we also modify the priority to
+ * match what "ARM strongly recommends" for a system where no Group 1
+ * interrupt must ever preempt a Group 0 interrupt.
+ *
+ * It is safe to call this function on systems which do not support
+ * grouping (it will have no effect).
+ */
+static void gic_set_group_irq(struct gic_chip_data *gic, unsigned int hwirq,
+			      int group)
+{
+	void __iomem *base = gic_data_dist_base(gic);
+	unsigned int grp_reg = hwirq / 32 * 4;
+	u32 grp_mask = BIT(hwirq % 32);
+	u32 grp_val;
+
+	unsigned int pri_reg = (hwirq / 4) * 4;
+	u32 pri_mask = BIT(7 + ((hwirq % 4) * 8));
+	u32 pri_val;
+
+	/*
+	 * Systems which do not support grouping will not have
+	 * the EnableGrp1 bit set.
+	 */
+	if (!(GICD_ENABLE_GRP1 & readl_relaxed(base + GIC_DIST_CTRL)))
+		return;
+
+	raw_spin_lock(&irq_controller_lock);
+
+	grp_val = readl_relaxed(base + GIC_DIST_IGROUP + grp_reg);
+	pri_val = readl_relaxed(base + GIC_DIST_PRI + pri_reg);
+
+	if (group) {
+		grp_val |= grp_mask;
+		pri_val |= pri_mask;
+	} else {
+		grp_val &= ~grp_mask;
+		pri_val &= ~pri_mask;
+	}
+
+	writel_relaxed(grp_val, base + GIC_DIST_IGROUP + grp_reg);
+	writel_relaxed(pri_val, base + GIC_DIST_PRI + pri_reg);
+
+	raw_spin_unlock(&irq_controller_lock);
+}
+
+
 void __init gic_cascade_irq(unsigned int gic_nr, unsigned int irq)
 {
 	if (gic_nr >= MAX_GIC_NR)
@@ -469,19 +564,27 @@ static u8 gic_get_cpumask(struct gic_chip_data *gic)
 static void gic_cpu_if_up(struct gic_chip_data *gic)
 {
 	void __iomem *cpu_base = gic_data_cpu_base(gic);
-	u32 bypass = 0;
-	u32 mode = 0;
+	void __iomem *dist_base = gic_data_dist_base(gic);
+	u32 ctrl = 0;
 
-	if (static_key_true(&supports_deactivate))
-		mode = GIC_CPU_CTRL_EOImodeNS;
+	/*
+	 * Preserve bypass disable bits to be written back later
+	 */
+	ctrl = readl(cpu_base + GIC_CPU_CTRL);
+	ctrl &= GICC_DIS_BYPASS_MASK;
 
 	/*
-	* Preserve bypass disable bits to be written back later
-	*/
-	bypass = readl(cpu_base + GIC_CPU_CTRL);
-	bypass &= GICC_DIS_BYPASS_MASK;
+	 * If EnableGrp1 is set in the distributor then enable group 1
+	 * support for this CPU (and route group 0 interrupts to FIQ).
+	 */
+	if (GICD_ENABLE_GRP1 & readl_relaxed(dist_base + GIC_DIST_CTRL))
+		ctrl |= GICC_COMMON_BPR | GICC_FIQ_EN | GICC_ACK_CTL |
+			GICC_ENABLE_GRP1;
+
+	if (static_key_true(&supports_deactivate))
+		ctrl |= GIC_CPU_CTRL_EOImodeNS;
 
-	writel_relaxed(bypass | mode | GICC_ENABLE, cpu_base + GIC_CPU_CTRL);
+	writel_relaxed(ctrl | GICC_ENABLE, cpu_base + GIC_CPU_CTRL);
 }
 
 
@@ -505,7 +608,31 @@ static void __init gic_dist_init(struct gic_chip_data *gic)
 
 	gic_dist_config(base, gic_irqs, NULL);
 
-	writel_relaxed(GICD_ENABLE, base + GIC_DIST_CTRL);
+	/*
+	 * Set EnableGrp1/EnableGrp0 (bit 1 and 0) or EnableGrp (bit 0 only,
+	 * bit 1 ignored) depending on current security mode.
+	 */
+	writel_relaxed(GICD_ENABLE_GRP1 | GICD_ENABLE, base + GIC_DIST_CTRL);
+
+	/*
+	 * Some GICv1 devices (even those with security extensions) do not
+	 * implement EnableGrp1 meaning some parts of the above write may
+	 * be ignored. We will only enable FIQ support if the bit can be set.
+	 */
+	if (GICD_ENABLE_GRP1 & readl_relaxed(base + GIC_DIST_CTRL)) {
+		/* Place all SPIs in group 1 (signalled with IRQ). */
+		for (i = 32; i < gic_irqs; i += 32)
+			writel_relaxed(0xffffffff,
+				       base + GIC_DIST_IGROUP + i * 4 / 32);
+
+		/*
+		 * If the GIC supports the security extension then SGIs
+		 * will be filtered based on the value of NSATT. If the
+		 * GIC has this support then enable NSATT support.
+		 */
+		if (GICD_SECURITY_EXTN & readl_relaxed(base + GIC_DIST_CTR))
+			gic->sgi_with_nsatt = true;
+	}
 }
 
 static void gic_cpu_init(struct gic_chip_data *gic)
@@ -514,6 +641,7 @@ static void gic_cpu_init(struct gic_chip_data *gic)
 	void __iomem *base = gic_data_cpu_base(gic);
 	unsigned int cpu_mask, cpu = smp_processor_id();
 	int i;
+	unsigned long ipi_fiq_mask, fiq;
 
 	/*
 	 * Setting up the CPU map is only relevant for the primary GIC
@@ -539,6 +667,23 @@ static void gic_cpu_init(struct gic_chip_data *gic)
 
 	gic_cpu_config(dist_base, NULL);
 
+	/*
+	 * If the distributor is configured to support interrupt grouping
+	 * then set all SGI and PPI interrupts that are not set in
+	 * SMP_IPI_FIQ_MASK to group1 and ensure any remaining group 0
+	 * interrupts have the right priority.
+	 *
+	 * Note that IGROUP[0] is banked, meaning that although we are
+	 * writing to a distributor register we are actually performing
+	 * part of the per-cpu initialization.
+	 */
+	if (GICD_ENABLE_GRP1 & readl_relaxed(dist_base + GIC_DIST_CTRL)) {
+		ipi_fiq_mask = SMP_IPI_FIQ_MASK;
+		writel_relaxed(~ipi_fiq_mask, dist_base + GIC_DIST_IGROUP + 0);
+		for_each_set_bit(fiq, &ipi_fiq_mask, 16)
+			gic_set_group_irq(gic, fiq, 0);
+	}
+
 	writel_relaxed(GICC_INT_PRI_THRESHOLD, base + GIC_CPU_PRIMASK);
 	gic_cpu_if_up(gic);
 }
@@ -553,7 +698,8 @@ int gic_cpu_if_down(unsigned int gic_nr)
 
 	cpu_base = gic_data_cpu_base(&gic_data[gic_nr]);
 	val = readl(cpu_base + GIC_CPU_CTRL);
-	val &= ~GICC_ENABLE;
+	val &= ~(GICC_COMMON_BPR | GICC_FIQ_EN | GICC_ACK_CTL |
+		 GICC_ENABLE_GRP1 | GICC_ENABLE);
 	writel_relaxed(val, cpu_base + GIC_CPU_CTRL);
 
 	return 0;
@@ -648,7 +794,8 @@ static void gic_dist_restore(unsigned int gic_nr)
 			dist_base + GIC_DIST_ACTIVE_SET + i * 4);
 	}
 
-	writel_relaxed(GICD_ENABLE, dist_base + GIC_DIST_CTRL);
+	writel_relaxed(GICD_ENABLE_GRP1 | GICD_ENABLE,
+		       dist_base + GIC_DIST_CTRL);
 }
 
 static void gic_cpu_save(unsigned int gic_nr)
@@ -794,6 +941,8 @@ static void gic_raise_softirq(const struct cpumask *mask, unsigned int irq)
 {
 	int cpu;
 	unsigned long map = 0;
+	unsigned long softint;
+	void __iomem *dist_base;
 
 	gic_migration_lock();
 
@@ -801,14 +950,20 @@ static void gic_raise_softirq(const struct cpumask *mask, unsigned int irq)
 	for_each_cpu(cpu, mask)
 		map |= gic_cpu_map[cpu];
 
+	/* This always happens on GIC0 */
+	dist_base = gic_data_dist_base(&gic_data[0]);
+
 	/*
 	 * Ensure that stores to Normal memory are visible to the
 	 * other CPUs before they observe us issuing the IPI.
 	 */
 	dmb(ishst);
 
-	/* this always happens on GIC0 */
-	writel_relaxed(map << 16 | irq, gic_data_dist_base(&gic_data[0]) + GIC_DIST_SOFTINT);
+	softint = map << 16 | irq;
+
+	writel_relaxed(softint, dist_base + GIC_DIST_SOFTINT);
+	if (gic_data[0].sgi_with_nsatt)
+		writel_relaxed(softint | 0x8000, dist_base + GIC_DIST_SOFTINT);
 
 	gic_migration_unlock();
 }
diff --git a/include/linux/irqchip/arm-gic.h b/include/linux/irqchip/arm-gic.h
index bae69e5d693c..17b9e20d754e 100644
--- a/include/linux/irqchip/arm-gic.h
+++ b/include/linux/irqchip/arm-gic.h
@@ -23,6 +23,10 @@
 #define GIC_CPU_DEACTIVATE		0x1000
 
 #define GICC_ENABLE			0x1
+#define GICC_ENABLE_GRP1		0x2
+#define GICC_ACK_CTL			0x4
+#define GICC_FIQ_EN			0x8
+#define GICC_COMMON_BPR			0x10
 #define GICC_INT_PRI_THRESHOLD		0xf0
 
 #define GIC_CPU_CTRL_EOImodeNS		(1 << 9)
@@ -48,7 +52,9 @@
 #define GIC_DIST_SGI_PENDING_SET	0xf20
 
 #define GICD_ENABLE			0x1
+#define GICD_ENABLE_GRP1		0x2
 #define GICD_DISABLE			0x0
+#define GICD_SECURITY_EXTN		0x400
 #define GICD_INT_ACTLOW_LVLTRIG		0x0
 #define GICD_INT_EN_CLR_X32		0xffffffff
 #define GICD_INT_EN_SET_SGI		0x0000ffff
-- 
2.5.0



* [PATCH 4.4-rc5 v22 4/4] ARM: Allow IPI_CPU_BACKTRACE to exploit FIQ
  2015-12-20 20:52 ` Daniel Thompson
@ 2015-12-20 20:52   ` Daniel Thompson
  -1 siblings, 0 replies; 26+ messages in thread
From: Daniel Thompson @ 2015-12-20 20:52 UTC (permalink / raw)
  To: Thomas Gleixner, Jason Cooper, Russell King, Marc Zyngier
  Cc: Daniel Thompson, Will Deacon, Catalin Marinas, Stephen Boyd,
	John Stultz, Steven Rostedt, linux-kernel, linux-arm-kernel,
	patches, linaro-kernel, Sumit Semwal, Dirk Behme, Daniel Drake,
	Dmitry Pervushin, Tim Sander, Petr Mladek

The GIC (v1 & v2) driver allows its implementation of handle_arch_irq()
to be called from the FIQ handler but currently the ARM code is not
able to exploit this.

Extend handle_fiq_as_nmi() to call handle_arch_irq(). This will affect
all interrupt controllers, including ones that do not support FIQ. This
is OK because a spurious FIQ is normally fatal. Handling a spurious FIQ
like a normal interrupt does risk deadlock but does give us a chance
of surviving long enough to get an error message out.

We also extend the SMP code to indicate to irq drivers which IPIs they
should seek to implement using FIQ.
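
The advisory mask is kept in lock-step with the IPI numbering by a
compile-time check rather than by convention; the relevant lines from
the diff below are:

	/* arch/arm/include/asm/smp.h */
	#define SMP_IPI_FIQ_MASK 0x80	/* i.e. BIT(IPI_CPU_BACKTRACE) */

	/* arch/arm/kernel/smp.c, raise_nmi() */
	BUILD_BUG_ON(SMP_IPI_FIQ_MASK != BIT(IPI_CPU_BACKTRACE));

so if the IPI numbering ever changes the build breaks instead of
silently routing the wrong IPI to FIQ.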

Signed-off-by: Daniel Thompson <daniel.thompson@linaro.org>
---
 arch/arm/include/asm/smp.h | 9 +++++++++
 arch/arm/kernel/smp.c      | 6 ++++++
 arch/arm/kernel/traps.c    | 9 ++++++++-
 3 files changed, 23 insertions(+), 1 deletion(-)

diff --git a/arch/arm/include/asm/smp.h b/arch/arm/include/asm/smp.h
index 3d6dc8b460e4..daf869cff02e 100644
--- a/arch/arm/include/asm/smp.h
+++ b/arch/arm/include/asm/smp.h
@@ -18,6 +18,15 @@
 # error "<asm/smp.h> included in non-SMP build"
 #endif
 
+/*
+ * Identify which IPIs are safe for the irqchip to handle using FIQ.
+ *
+ * This information is advisory. The interrupt controller may not be capable
+ * of routing these IPIs to FIQ and the kernel will continue to work if they
+ * are routed to IRQ as normal.
+ */
+#define SMP_IPI_FIQ_MASK 0x80
+
 #define raw_smp_processor_id() (current_thread_info()->cpu)
 
 struct seq_file;
diff --git a/arch/arm/kernel/smp.c b/arch/arm/kernel/smp.c
index fe517f1e88d8..853089e1aa8a 100644
--- a/arch/arm/kernel/smp.c
+++ b/arch/arm/kernel/smp.c
@@ -639,6 +639,11 @@ void handle_IPI(int ipinr, struct pt_regs *regs)
 		break;
 
 	case IPI_CPU_BACKTRACE:
+		if (in_nmi()) {
+			nmi_cpu_backtrace(regs);
+			break;
+		}
+
 		irq_enter();
 		nmi_cpu_backtrace(regs);
 		irq_exit();
@@ -750,6 +755,7 @@ static void raise_nmi(cpumask_t *mask)
 	if (cpumask_test_cpu(smp_processor_id(), mask) && irqs_disabled())
 		nmi_cpu_backtrace(NULL);
 
+	BUILD_BUG_ON(SMP_IPI_FIQ_MASK != BIT(IPI_CPU_BACKTRACE));
 	smp_cross_call(mask, IPI_CPU_BACKTRACE);
 }
 
diff --git a/arch/arm/kernel/traps.c b/arch/arm/kernel/traps.c
index bc698383e822..a04426ee7684 100644
--- a/arch/arm/kernel/traps.c
+++ b/arch/arm/kernel/traps.c
@@ -479,7 +479,14 @@ asmlinkage void __exception_irq_entry handle_fiq_as_nmi(struct pt_regs *regs)
 
 	nmi_enter();
 
-	/* nop. FIQ handlers for special arch/arm features can be added here. */
+	/*
+	 * Either the interrupt controller supports FIQ, meaning it will
+	 * do the right thing with this call, or we will end up treating a
+	 * spurious FIQ (which is normally fatal) as though it were an IRQ
+	 * which, although it risks deadlock, still gives us a sporting
+	 * chance of surviving long enough to log errors.
+	 */
+	handle_arch_irq(regs);
 
 	nmi_exit();
 
-- 
2.5.0



* Re: [PATCH 4.4-rc5 v22 4/4] ARM: Allow IPI_CPU_BACKTRACE to exploit FIQ
  2015-12-20 20:52   ` Daniel Thompson
@ 2015-12-20 22:12     ` kbuild test robot
  -1 siblings, 0 replies; 26+ messages in thread
From: kbuild test robot @ 2015-12-20 22:12 UTC (permalink / raw)
  To: Daniel Thompson
  Cc: kbuild-all, Thomas Gleixner, Jason Cooper, Russell King,
	Marc Zyngier, Daniel Thompson, Will Deacon, Catalin Marinas,
	Stephen Boyd, John Stultz, Steven Rostedt, linux-kernel,
	linux-arm-kernel, patches, linaro-kernel, Sumit Semwal,
	Dirk Behme, Daniel Drake, Dmitry Pervushin, Tim Sander,
	Petr Mladek

[-- Attachment #1: Type: text/plain, Size: 1501 bytes --]

Hi Daniel,

[auto build test ERROR on v4.4-rc5]
[also build test ERROR on next-20151218]
[cannot apply to tip/irq/core]

url:    https://github.com/0day-ci/linux/commits/Daniel-Thompson/irq-arm-Use-FIQ-for-NMI-backtrace-when-possible/20151221-045854
config: arm-iop-adma (attached as .config)
reproduce:
        wget https://git.kernel.org/cgit/linux/kernel/git/wfg/lkp-tests.git/plain/sbin/make.cross -O ~/bin/make.cross
        chmod +x ~/bin/make.cross
        # save the attached .config to linux build tree
        make.cross ARCH=arm 

All errors (new ones prefixed by >>):

   arch/arm/kernel/traps.c: In function 'handle_fiq_as_nmi':
>> arch/arm/kernel/traps.c:489:2: error: implicit declaration of function 'handle_arch_irq' [-Werror=implicit-function-declaration]
     handle_arch_irq(regs);
     ^
   cc1: some warnings being treated as errors

vim +/handle_arch_irq +489 arch/arm/kernel/traps.c

   483		 * Either the interrupt controller supports FIQ, meaning it will
   484		 * do the right thing with this call, or we will end up treating a
   485		 * spurious FIQ (which is normally fatal) as though it were an IRQ
   486		 * which, although it risks deadlock, still gives us a sporting
   487		 * chance of surviving long enough to log errors.
   488		 */
 > 489		handle_arch_irq(regs);
   490	
   491		nmi_exit();
   492	

---
0-DAY kernel test infrastructure                Open Source Technology Center
https://lists.01.org/pipermail/kbuild-all                   Intel Corporation

[-- Attachment #2: .config.gz --]
[-- Type: application/octet-stream, Size: 20864 bytes --]


* Re: [PATCH 4.4-rc5 v22 4/4] ARM: Allow IPI_CPU_BACKTRACE to exploit FIQ
  2015-12-20 20:52   ` Daniel Thompson
@ 2015-12-20 22:52     ` kbuild test robot
  -1 siblings, 0 replies; 26+ messages in thread
From: kbuild test robot @ 2015-12-20 22:52 UTC (permalink / raw)
  To: Daniel Thompson
  Cc: kbuild-all, Thomas Gleixner, Jason Cooper, Russell King,
	Marc Zyngier, Daniel Thompson, Will Deacon, Catalin Marinas,
	Stephen Boyd, John Stultz, Steven Rostedt, linux-kernel,
	linux-arm-kernel, patches, linaro-kernel, Sumit Semwal,
	Dirk Behme, Daniel Drake, Dmitry Pervushin, Tim Sander,
	Petr Mladek

[-- Attachment #1: Type: text/plain, Size: 3466 bytes --]

Hi Daniel,

[auto build test ERROR on v4.4-rc5]
[also build test ERROR on next-20151218]
[cannot apply to tip/irq/core]

url:    https://github.com/0day-ci/linux/commits/Daniel-Thompson/irq-arm-Use-FIQ-for-NMI-backtrace-when-possible/20151221-045854
config: arm-socfpga_defconfig (attached as .config)
reproduce:
        wget https://git.kernel.org/cgit/linux/kernel/git/wfg/lkp-tests.git/plain/sbin/make.cross -O ~/bin/make.cross
        chmod +x ~/bin/make.cross
        # save the attached .config to linux build tree
        make.cross ARCH=arm 

All error/warnings (new ones prefixed by >>):

   In file included from include/uapi/linux/stddef.h:1:0,
                    from include/linux/stddef.h:4,
                    from include/uapi/linux/posix_types.h:4,
                    from include/uapi/linux/types.h:13,
                    from include/linux/types.h:5,
                    from include/linux/list.h:4,
                    from include/linux/module.h:9,
                    from arch/arm/kernel/smp.c:10:
   arch/arm/kernel/smp.c: In function 'raise_nmi':
>> include/linux/compiler.h:484:38: error: call to '__compiletime_assert_766' declared with attribute error: BUILD_BUG_ON failed: SMP_IPI_FIQ_MASK != BIT(IPI_CPU_BACKTRACE)
     _compiletime_assert(condition, msg, __compiletime_assert_, __LINE__)
                                         ^
   include/linux/compiler.h:467:4: note: in definition of macro '__compiletime_assert'
       prefix ## suffix();    \
       ^
   include/linux/compiler.h:484:2: note: in expansion of macro '_compiletime_assert'
     _compiletime_assert(condition, msg, __compiletime_assert_, __LINE__)
     ^
   include/linux/bug.h:50:37: note: in expansion of macro 'compiletime_assert'
    #define BUILD_BUG_ON_MSG(cond, msg) compiletime_assert(!(cond), msg)
                                        ^
   include/linux/bug.h:74:2: note: in expansion of macro 'BUILD_BUG_ON_MSG'
     BUILD_BUG_ON_MSG(condition, "BUILD_BUG_ON failed: " #condition)
     ^
>> arch/arm/kernel/smp.c:766:2: note: in expansion of macro 'BUILD_BUG_ON'
     BUILD_BUG_ON(SMP_IPI_FIQ_MASK != BIT(IPI_CPU_BACKTRACE));
     ^

vim +/__compiletime_assert_766 +484 include/linux/compiler.h

9a8ab1c3 Daniel Santos  2013-02-21  478   *
9a8ab1c3 Daniel Santos  2013-02-21  479   * In tradition of POSIX assert, this macro will break the build if the
9a8ab1c3 Daniel Santos  2013-02-21  480   * supplied condition is *false*, emitting the supplied error message if the
9a8ab1c3 Daniel Santos  2013-02-21  481   * compiler has support to do so.
9a8ab1c3 Daniel Santos  2013-02-21  482   */
9a8ab1c3 Daniel Santos  2013-02-21  483  #define compiletime_assert(condition, msg) \
9a8ab1c3 Daniel Santos  2013-02-21 @484  	_compiletime_assert(condition, msg, __compiletime_assert_, __LINE__)
9a8ab1c3 Daniel Santos  2013-02-21  485  
47933ad4 Peter Zijlstra 2013-11-06  486  #define compiletime_assert_atomic_type(t)				\
47933ad4 Peter Zijlstra 2013-11-06  487  	compiletime_assert(__native_word(t),				\

:::::: The code at line 484 was first introduced by commit
:::::: 9a8ab1c39970a4938a72d94e6fd13be88a797590 bug.h, compiler.h: introduce compiletime_assert & BUILD_BUG_ON_MSG

:::::: TO: Daniel Santos <daniel.santos@pobox.com>
:::::: CC: Linus Torvalds <torvalds@linux-foundation.org>

---
0-DAY kernel test infrastructure                Open Source Technology Center
https://lists.01.org/pipermail/kbuild-all                   Intel Corporation

[-- Attachment #2: .config.gz --]
[-- Type: application/octet-stream, Size: 15515 bytes --]

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCH 4.4-rc5 v22 4/4] ARM: Allow IPI_CPU_BACKTRACE to exploit FIQ
  2015-12-20 22:12     ` kbuild test robot
@ 2016-01-04 10:00       ` Daniel Thompson
  -1 siblings, 0 replies; 26+ messages in thread
From: Daniel Thompson @ 2016-01-04 10:00 UTC (permalink / raw)
  To: kbuild test robot
  Cc: kbuild-all, Thomas Gleixner, Jason Cooper, Russell King,
	Marc Zyngier, Will Deacon, Catalin Marinas, Stephen Boyd,
	John Stultz, Steven Rostedt, linux-kernel, linux-arm-kernel,
	patches, linaro-kernel, Sumit Semwal, Dirk Behme, Daniel Drake,
	Dmitry Pervushin, Tim Sander, Petr Mladek

On 20/12/15 22:12, kbuild test robot wrote:
> Hi Daniel,
>
> [auto build test ERROR on v4.4-rc5]
> [also build test ERROR on next-20151218]
> [cannot apply to tip/irq/core]
>
> url:    https://github.com/0day-ci/linux/commits/Daniel-Thompson/irq-arm-Use-FIQ-for-NMI-backtrace-when-possible/20151221-045854
> config: arm-iop-adma (attached as .config)
> reproduce:
>          wget https://git.kernel.org/cgit/linux/kernel/git/wfg/lkp-tests.git/plain/sbin/make.cross -O ~/bin/make.cross
>          chmod +x ~/bin/make.cross
>          # save the attached .config to linux build tree
>          make.cross ARCH=arm
>
> All errors (new ones prefixed by >>):
>
>     arch/arm/kernel/traps.c: In function 'handle_fiq_as_nmi':
>>> arch/arm/kernel/traps.c:489:2: error: implicit declaration of function 'handle_arch_irq' [-Werror=implicit-function-declaration]
>       handle_arch_irq(regs);
>       ^
>     cc1: some warnings being treated as errors

Thanks. I'll look at this.
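At first glance (untested sketch, not necessarily the fix I will end up
with) the call probably needs guarding on configurations that actually
provide handle_arch_irq, i.e. something like:

	/*
	 * Sketch only: handle_arch_irq is only declared when
	 * CONFIG_MULTI_IRQ_HANDLER is set, so do nothing useful elsewhere.
	 */
#ifdef CONFIG_MULTI_IRQ_HANDLER
	handle_arch_irq(regs);
#else
	pr_warn_ratelimited("Spurious FIQ but no IRQ handler to forward it to\n");
#endif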


> vim +/handle_arch_irq +489 arch/arm/kernel/traps.c
>
>     483		 * Either the interrupt controller supports FIQ, meaning it will
>     484		 * do the right thing with this call, or we will end up treating a
>     485		 * spurious FIQ (which is normally fatal) as though it were an IRQ
>     486		 * which, although it risks deadlock, still gives us a sporting
>     487		 * chance of surviving long enough to log errors.
>     488		 */
>   > 489		handle_arch_irq(regs);
>     490	
>     491		nmi_exit();
>     492	
>
> ---
> 0-DAY kernel test infrastructure                Open Source Technology Center
> https://lists.01.org/pipermail/kbuild-all                   Intel Corporation
>


^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCH 4.4-rc5 v22 4/4] ARM: Allow IPI_CPU_BACKTRACE to exploit FIQ
  2015-12-20 22:52     ` kbuild test robot
@ 2016-01-04 10:05       ` Daniel Thompson
  -1 siblings, 0 replies; 26+ messages in thread
From: Daniel Thompson @ 2016-01-04 10:05 UTC (permalink / raw)
  To: kbuild test robot
  Cc: kbuild-all, Thomas Gleixner, Jason Cooper, Russell King,
	Marc Zyngier, Will Deacon, Catalin Marinas, Stephen Boyd,
	John Stultz, Steven Rostedt, linux-kernel, linux-arm-kernel,
	patches, linaro-kernel, Sumit Semwal, Dirk Behme, Daniel Drake,
	Dmitry Pervushin, Tim Sander, Petr Mladek

On 20/12/15 22:52, kbuild test robot wrote:
> Hi Daniel,
>
> [auto build test ERROR on v4.4-rc5]
> [also build test ERROR on next-20151218]
> [cannot apply to tip/irq/core]
>
> url:    https://github.com/0day-ci/linux/commits/Daniel-Thompson/irq-arm-Use-FIQ-for-NMI-backtrace-when-possible/20151221-045854
> config: arm-socfpga_defconfig (attached as .config)
> reproduce:
>          wget https://git.kernel.org/cgit/linux/kernel/git/wfg/lkp-tests.git/plain/sbin/make.cross -O ~/bin/make.cross
>          chmod +x ~/bin/make.cross
>          # save the attached .config to linux build tree
>          make.cross ARCH=arm
>
> All error/warnings (new ones prefixed by >>):
>
>     In file included from include/uapi/linux/stddef.h:1:0,
>                      from include/linux/stddef.h:4,
>                      from include/uapi/linux/posix_types.h:4,
>                      from include/uapi/linux/types.h:13,
>                      from include/linux/types.h:5,
>                      from include/linux/list.h:4,
>                      from include/linux/module.h:9,
>                      from arch/arm/kernel/smp.c:10:
>     arch/arm/kernel/smp.c: In function 'raise_nmi':
>>> include/linux/compiler.h:484:38: error: call to '__compiletime_assert_766' declared with attribute error: BUILD_BUG_ON failed: SMP_IPI_FIQ_MASK != BIT(IPI_CPU_BACKTRACE)
>       _compiletime_assert(condition, msg, __compiletime_assert_, __LINE__)
>                                           ^
>     include/linux/compiler.h:467:4: note: in definition of macro '__compiletime_assert'
>         prefix ## suffix();    \
>         ^
>     include/linux/compiler.h:484:2: note: in expansion of macro '_compiletime_assert'
>       _compiletime_assert(condition, msg, __compiletime_assert_, __LINE__)
>       ^
>     include/linux/bug.h:50:37: note: in expansion of macro 'compiletime_assert'
>      #define BUILD_BUG_ON_MSG(cond, msg) compiletime_assert(!(cond), msg)
>                                          ^
>     include/linux/bug.h:74:2: note: in expansion of macro 'BUILD_BUG_ON_MSG'
>       BUILD_BUG_ON_MSG(condition, "BUILD_BUG_ON failed: " #condition)
>       ^
>>> arch/arm/kernel/smp.c:766:2: note: in expansion of macro 'BUILD_BUG_ON'
>       BUILD_BUG_ON(SMP_IPI_FIQ_MASK != BIT(IPI_CPU_BACKTRACE));

This should be a false alarm; or rather, the BUILD_BUG_ON() is doing 
exactly what it was intended to do.

I forgot to mention it in the covering letter (and will fix that next 
time) but my patchset requires a fix from Marc Zyngier that changes the 
numeric value of IPI_CPU_BACKTRACE[1].
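
For reference, the invariant the assertion guards looks roughly like this
(illustrative only; the real definitions live in arch/arm/include/asm/smp.h
and the IPI numbering in arch/arm/kernel/smp.c):

	/* The mask is defined against a fixed IPI number ... */
	#define SMP_IPI_FIQ_MASK	0x0080	/* assumes IPI_CPU_BACKTRACE == 7 */

	/* ... so renumbering IPI_CPU_BACKTRACE without Marc's fix trips this: */
	BUILD_BUG_ON(SMP_IPI_FIQ_MASK != BIT(IPI_CPU_BACKTRACE));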


Daniel.

1: http://thread.gmane.org/gmane.linux.ports.arm.kernel/464985

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCH 4.4-rc5 v22 1/4] irqchip: gic: Optimize locking in gic_raise_softirq
  2015-12-20 20:52   ` Daniel Thompson
@ 2016-01-07 14:31     ` Marc Zyngier
  -1 siblings, 0 replies; 26+ messages in thread
From: Marc Zyngier @ 2016-01-07 14:31 UTC (permalink / raw)
  To: Daniel Thompson, Thomas Gleixner, Jason Cooper, Russell King
  Cc: Will Deacon, Catalin Marinas, Stephen Boyd, John Stultz,
	Steven Rostedt, linux-kernel, linux-arm-kernel, patches,
	linaro-kernel, Sumit Semwal, Dirk Behme, Daniel Drake,
	Dmitry Pervushin, Tim Sander, Petr Mladek

On 20/12/15 20:52, Daniel Thompson wrote:
> Currently gic_raise_softirq() is locked using irq_controller_lock.
> This lock is primarily used to make register read-modify-write sequences
> atomic but gic_raise_softirq() uses it instead to ensure that the
> big.LITTLE migration logic can figure out when it is safe to migrate
> interrupts between physical cores.
> 
> This is sub-optimal in closely related ways:
> 
> 1. No locking at all is required on systems where the b.L switcher is
>    not configured.
> 
> 2. Finer grain locking can be used on systems where the b.L switcher is
>    present.
> 
> This patch resolves both of the above by introducing a separate finer
> grain lock and providing conditionally compiled inlines to lock/unlock
> it.
> 
> Signed-off-by: Daniel Thompson <daniel.thompson@linaro.org>
> Cc: Thomas Gleixner <tglx@linutronix.de>
> Cc: Jason Cooper <jason@lakedaemon.net>
> Cc: Russell King <linux@arm.linux.org.uk>
> Cc: Marc Zyngier <marc.zyngier@arm.com>
> Acked-by: Nicolas Pitre <nicolas.pitre@linaro.org>

Acked-by: Marc Zyngier <marc.zyngier@arm.com>

	M.
-- 
Jazz is not dead. It just smells funny...

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCH 4.4-rc5 v22 2/4] irqchip: gic: Make gic_raise_softirq FIQ-safe
  2015-12-20 20:52   ` Daniel Thompson
@ 2016-01-07 14:35     ` Marc Zyngier
  -1 siblings, 0 replies; 26+ messages in thread
From: Marc Zyngier @ 2016-01-07 14:35 UTC (permalink / raw)
  To: Daniel Thompson, Thomas Gleixner, Jason Cooper, Russell King
  Cc: Will Deacon, Catalin Marinas, Stephen Boyd, John Stultz,
	Steven Rostedt, linux-kernel, linux-arm-kernel, patches,
	linaro-kernel, Sumit Semwal, Dirk Behme, Daniel Drake,
	Dmitry Pervushin, Tim Sander, Petr Mladek

On 20/12/15 20:52, Daniel Thompson wrote:
> It is currently possible for FIQ handlers to re-enter gic_raise_softirq()
> and lock up.
> 
>     	gic_raise_softirq()
> 	   lock(x);
> -~-> FIQ
>         handle_fiq()
> 	   gic_raise_softirq()
> 	      lock(x);		<-- Lockup
> 
> arch/arm/ uses IPIs to implement arch_irq_work_raise(), thus this issue
> renders it difficult for FIQ handlers to safely defer work to less
> restrictive calling contexts.
> 
> This patch fixes the problem by converting the cpu_map_migration_lock
> into a rwlock making it safe to re-enter the function.
> 
> Note that having made it safe to re-enter gic_raise_softirq() we no
> longer need to mask interrupts during gic_raise_softirq() because the
> b.L migration is always performed from task context.
> 
> Signed-off-by: Daniel Thompson <daniel.thompson@linaro.org>
> Cc: Thomas Gleixner <tglx@linutronix.de>
> Cc: Jason Cooper <jason@lakedaemon.net>
> Cc: Russell King <linux@arm.linux.org.uk>
> Cc: Marc Zyngier <marc.zyngier@arm.com>
> Acked-by: Nicolas Pitre <nicolas.pitre@linaro.org>

Acked-by: Marc Zyngier <marc.zyngier@arm.com>

	M.
-- 
Jazz is not dead. It just smells funny...

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCH 4.4-rc5 v22 3/4] irqchip: gic: Introduce plumbing for IPI FIQ
  2015-12-20 20:52   ` Daniel Thompson
@ 2016-01-07 17:06     ` Marc Zyngier
  -1 siblings, 0 replies; 26+ messages in thread
From: Marc Zyngier @ 2016-01-07 17:06 UTC (permalink / raw)
  To: Daniel Thompson, Thomas Gleixner, Jason Cooper, Russell King
  Cc: Will Deacon, Catalin Marinas, Stephen Boyd, John Stultz,
	Steven Rostedt, linux-kernel, linux-arm-kernel, patches,
	linaro-kernel, Sumit Semwal, Dirk Behme, Daniel Drake,
	Dmitry Pervushin, Tim Sander, Petr Mladek

On 20/12/15 20:52, Daniel Thompson wrote:
> Currently it is not possible to exploit FIQ for systems with a GIC, even
> on systems are otherwise capable of it. This patch makes it possible
> for IPIs to be delivered using FIQ.
> 
> To do so it modifies the register state so that normal interrupts are
> placed in group 1 and specific IPIs are placed into group 0. It also
> configures the controller to raise group 0 interrupts using the FIQ
> signal. Finally it provides a means for architecture code to define
> which IPIs shall use FIQ and to acknowledge any IPIs that are raised.
> 
> All GIC hardware except GICv1-without-TrustZone support provides a means
> to group exceptions into group 0 and group 1 but the hardware
> functionality is unavailable to the kernel when a secure monitor is
> present because access to the grouping registers is prohibited outside
> "secure world". However, when grouping is not available (or on early
> GICv1 implementations where it is available but tricky to enable) the
> code to change groups does not deploy and all IPIs will be raised via
> IRQ.
> 
> Signed-off-by: Daniel Thompson <daniel.thompson@linaro.org>
> Cc: Thomas Gleixner <tglx@linutronix.de>
> Cc: Jason Cooper <jason@lakedaemon.net>
> Cc: Russell King <linux@arm.linux.org.uk>
> Cc: Marc Zyngier <marc.zyngier@arm.com>
> Tested-by: Jon Medhurst <tixy@linaro.org>
> ---
>  drivers/irqchip/irq-gic.c       | 183 +++++++++++++++++++++++++++++++++++++---
>  include/linux/irqchip/arm-gic.h |   6 ++
>  2 files changed, 175 insertions(+), 14 deletions(-)
> 
> diff --git a/drivers/irqchip/irq-gic.c b/drivers/irqchip/irq-gic.c
> index b6c1e96b52a1..8077edd0d38d 100644
> --- a/drivers/irqchip/irq-gic.c
> +++ b/drivers/irqchip/irq-gic.c
> @@ -41,6 +41,7 @@
>  #include <linux/irqchip.h>
>  #include <linux/irqchip/chained_irq.h>
>  #include <linux/irqchip/arm-gic.h>
> +#include <linux/ratelimit.h>
>  
>  #include <asm/cputype.h>
>  #include <asm/irq.h>
> @@ -63,6 +64,10 @@ static void gic_check_cpu_features(void)
>  #define gic_check_cpu_features()	do { } while(0)
>  #endif
>  
> +#ifndef SMP_IPI_FIQ_MASK
> +#define SMP_IPI_FIQ_MASK 0
> +#endif
> +
>  union gic_base {
>  	void __iomem *common_base;
>  	void __percpu * __iomem *percpu_base;
> @@ -82,6 +87,7 @@ struct gic_chip_data {
>  #endif
>  	struct irq_domain *domain;
>  	unsigned int gic_irqs;
> +	bool sgi_with_nsatt;
>  #ifdef CONFIG_GIC_NON_BANKED
>  	void __iomem *(*get_base)(union gic_base *);
>  #endif
> @@ -350,12 +356,52 @@ static int gic_set_affinity(struct irq_data *d, const struct cpumask *mask_val,
>  }
>  #endif
>  
> +/*
> + * Fully acknowledge (ack, eoi and deactivate) any outstanding FIQ-based IPI,
> + * otherwise do nothing.
> + */
> +static void __maybe_unused gic_handle_fiq(struct pt_regs *regs)
> +{
> +	struct gic_chip_data *gic = &gic_data[0];
> +	void __iomem *cpu_base = gic_data_cpu_base(gic);
> +	u32 hppstat, hppnr, irqstat, irqnr;
> +
> +	do {
> +		hppstat = readl_relaxed(cpu_base + GIC_CPU_HIGHPRI);
> +		hppnr = hppstat & GICC_IAR_INT_ID_MASK;
> +		if (!(hppnr < 16 && BIT(hppnr) & SMP_IPI_FIQ_MASK))
> +			break;
> +
> +		irqstat = readl_relaxed(cpu_base + GIC_CPU_INTACK);
> +		irqnr = irqstat & GICC_IAR_INT_ID_MASK;
> +
> +		writel_relaxed(irqstat, cpu_base + GIC_CPU_EOI);
> +		if (static_key_true(&supports_deactivate))
> +			writel_relaxed(irqstat, cpu_base + GIC_CPU_DEACTIVATE);
> +
> +		if (WARN_RATELIMIT(irqnr > 16,

Shouldn't that be irqnr > 15?

> +			       "Unexpected irqnr %u (bad prioritization?)\n",
> +			       irqnr))
> +			continue;
> +#ifdef CONFIG_SMP
> +		handle_IPI(irqnr, regs);
> +#endif
> +	} while (1);
> +}
> +
>  static void __exception_irq_entry gic_handle_irq(struct pt_regs *regs)
>  {
>  	u32 irqstat, irqnr;
>  	struct gic_chip_data *gic = &gic_data[0];
>  	void __iomem *cpu_base = gic_data_cpu_base(gic);
>  
> +#ifdef CONFIG_ARM

What is the reason to make this 32bit specific?

> +	if (in_nmi()) {
> +		gic_handle_fiq(regs);
> +		return;
> +	}
> +#endif
> +
>  	do {
>  		irqstat = readl_relaxed(cpu_base + GIC_CPU_INTACK);
>  		irqnr = irqstat & GICC_IAR_INT_ID_MASK;
> @@ -439,6 +485,55 @@ static struct irq_chip gic_eoimode1_chip = {
>  				  IRQCHIP_MASK_ON_SUSPEND,
>  };
>  
> +/*
> + * Shift an interrupt between Group 0 and Group 1.
> + *
> + * In addition to changing the group we also modify the priority to
> + * match what "ARM strongly recommends" for a system where no Group 1
> + * interrupt must ever preempt a Group 0 interrupt.
> + *
> + * It is safe to call this function on systems which do not support
> + * grouping (it will have no effect).
> + */
> +static void gic_set_group_irq(struct gic_chip_data *gic, unsigned int hwirq,
> +			      int group)
> +{
> +	void __iomem *base = gic_data_dist_base(gic);
> +	unsigned int grp_reg = hwirq / 32 * 4;
> +	u32 grp_mask = BIT(hwirq % 32);
> +	u32 grp_val;
> +

nit: spurious space.

> +	unsigned int pri_reg = (hwirq / 4) * 4;
> +	u32 pri_mask = BIT(7 + ((hwirq % 4) * 8));
> +	u32 pri_val;
> +
> +	/*
> +	 * Systems which do not support grouping will have not have
> +	 * the EnableGrp1 bit set.
> +	 */
> +	if (!(GICD_ENABLE_GRP1 & readl_relaxed(base + GIC_DIST_CTRL)))

nit: I tend to prefer expressions to be written the other way around
(readl() & v). But more importantly, you should be able to cache the
grouping state in the gic_chip_data structure (you seem to have similar
code below).
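
Something along these lines (untested; the field name is just for
illustration):

	/* in gic_dist_init(), after probing GIC_DIST_CTRL: */
	gic->grp1_enabled = !!(readl_relaxed(base + GIC_DIST_CTRL) &
			       GICD_ENABLE_GRP1);

	/* ...and then here and in gic_cpu_if_up()/gic_cpu_init(): */
	if (!gic->grp1_enabled)
		return;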

> +		return;
> +
> +	raw_spin_lock(&irq_controller_lock);
> +
> +	grp_val = readl_relaxed(base + GIC_DIST_IGROUP + grp_reg);
> +	pri_val = readl_relaxed(base + GIC_DIST_PRI + pri_reg);

The priority register is byte-accessible, so you can save yourself some
effort and just write the priority there.
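
i.e. roughly (untested, priority values assumed):

	/* GIC_DIST_PRI has one byte per interrupt, so no read-modify-write */
	writeb_relaxed(group ? 0xa0 : 0x00, base + GIC_DIST_PRI + hwirq);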

> +
> +	if (group) {
> +		grp_val |= grp_mask;
> +		pri_val |= pri_mask;
> +	} else {
> +		grp_val &= ~grp_mask;
> +		pri_val &= ~pri_mask;
> +	}
> +
> +	writel_relaxed(grp_val, base + GIC_DIST_IGROUP + grp_reg);
> +	writel_relaxed(pri_val, base + GIC_DIST_PRI + pri_reg);
> +
> +	raw_spin_unlock(&irq_controller_lock);
> +}
> +
> +
>  void __init gic_cascade_irq(unsigned int gic_nr, unsigned int irq)
>  {
>  	if (gic_nr >= MAX_GIC_NR)
> @@ -469,19 +564,27 @@ static u8 gic_get_cpumask(struct gic_chip_data *gic)
>  static void gic_cpu_if_up(struct gic_chip_data *gic)
>  {
>  	void __iomem *cpu_base = gic_data_cpu_base(gic);
> -	u32 bypass = 0;
> -	u32 mode = 0;
> +	void __iomem *dist_base = gic_data_dist_base(gic);
> +	u32 ctrl = 0;
>  
> -	if (static_key_true(&supports_deactivate))
> -		mode = GIC_CPU_CTRL_EOImodeNS;
> +	/*
> +	 * Preserve bypass disable bits to be written back later
> +	 */
> +	ctrl = readl(cpu_base + GIC_CPU_CTRL);
> +	ctrl &= GICC_DIS_BYPASS_MASK;
>  
>  	/*
> -	* Preserve bypass disable bits to be written back later
> -	*/
> -	bypass = readl(cpu_base + GIC_CPU_CTRL);
> -	bypass &= GICC_DIS_BYPASS_MASK;
> +	 * If EnableGrp1 is set in the distributor then enable group 1
> +	 * support for this CPU (and route group 0 interrupts to FIQ).
> +	 */
> +	if (GICD_ENABLE_GRP1 & readl_relaxed(dist_base + GIC_DIST_CTRL))
> +		ctrl |= GICC_COMMON_BPR | GICC_FIQ_EN | GICC_ACK_CTL |
> +			GICC_ENABLE_GRP1;
> +
> +	if (static_key_true(&supports_deactivate))
> +		ctrl |= GIC_CPU_CTRL_EOImodeNS;
>  
> -	writel_relaxed(bypass | mode | GICC_ENABLE, cpu_base + GIC_CPU_CTRL);
> +	writel_relaxed(ctrl | GICC_ENABLE, cpu_base + GIC_CPU_CTRL);
>  }
>  
>  
> @@ -505,7 +608,31 @@ static void __init gic_dist_init(struct gic_chip_data *gic)
>  
>  	gic_dist_config(base, gic_irqs, NULL);
>  
> -	writel_relaxed(GICD_ENABLE, base + GIC_DIST_CTRL);
> +	/*
> +	 * Set EnableGrp1/EnableGrp0 (bit 1 and 0) or EnableGrp (bit 0 only,
> +	 * bit 1 ignored) depending on current security mode.
> +	 */
> +	writel_relaxed(GICD_ENABLE_GRP1 | GICD_ENABLE, base + GIC_DIST_CTRL);
> +
> +	/*
> +	 * Some GICv1 devices (even those with security extensions) do not
> +	 * implement EnableGrp1 meaning some parts of the above write may
> +	 * be ignored. We will only enable FIQ support if the bit can be set.
> +	 */
> +	if (GICD_ENABLE_GRP1 & readl_relaxed(base + GIC_DIST_CTRL)) {
> +		/* Place all SPIs in group 1 (signally with IRQ). */
> +		for (i = 32; i < gic_irqs; i += 32)
> +			writel_relaxed(0xffffffff,
> +				       base + GIC_DIST_IGROUP + i * 4 / 32);
> +
> +		/*
> +		 * If the GIC supports the security extension then SGIs
> +		 * will be filtered based on the value of NSATT. If the
> +		 * GIC has this support then enable NSATT support.
> +		 */
> +		if (GICD_SECURITY_EXTN & readl_relaxed(base + GIC_DIST_CTR))
> +			gic->sgi_with_nsatt = true;
> +	}
>  }
>  
>  static void gic_cpu_init(struct gic_chip_data *gic)
> @@ -514,6 +641,7 @@ static void gic_cpu_init(struct gic_chip_data *gic)
>  	void __iomem *base = gic_data_cpu_base(gic);
>  	unsigned int cpu_mask, cpu = smp_processor_id();
>  	int i;
> +	unsigned long ipi_fiq_mask, fiq;

Think of the children (arm64), do not make ipi_fiq_mask a long... If you
pass anything but u32 to writel, you're doing something wrong.

>  
>  	/*
>  	 * Setting up the CPU map is only relevant for the primary GIC
> @@ -539,6 +667,23 @@ static void gic_cpu_init(struct gic_chip_data *gic)
>  
>  	gic_cpu_config(dist_base, NULL);
>  
> +	/*
> +	 * If the distributor is configured to support interrupt grouping
> +	 * then set all SGI and PPI interrupts that are not set in
> +	 * SMP_IPI_FIQ_MASK to group1 and ensure any remaining group 0
> +	 * interrupts have the right priority.
> +	 *
> +	 * Note that IGROUP[0] is banked, meaning that although we are
> +	 * writing to a distributor register we are actually performing
> +	 * part of the per-cpu initialization.
> +	 */
> +	if (GICD_ENABLE_GRP1 & readl_relaxed(dist_base + GIC_DIST_CTRL)) {
> +		ipi_fiq_mask = SMP_IPI_FIQ_MASK;
> +		writel_relaxed(~ipi_fiq_mask, dist_base + GIC_DIST_IGROUP + 0);

or just turn this into writel_relaxed(~SMP_IPI_FIQ_MASK...) and keep
ipi_fiq_mask as a long.
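
i.e. something like (sketch only):

	unsigned long ipi_fiq_mask = SMP_IPI_FIQ_MASK; /* long for for_each_set_bit() */
	unsigned int fiq;

	writel_relaxed(~(u32)SMP_IPI_FIQ_MASK, dist_base + GIC_DIST_IGROUP + 0);
	for_each_set_bit(fiq, &ipi_fiq_mask, 16)
		gic_set_group_irq(gic, fiq, 0);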

> +		for_each_set_bit(fiq, &ipi_fiq_mask, 16)
> +			gic_set_group_irq(gic, fiq, 0);
> +	}
> +
>  	writel_relaxed(GICC_INT_PRI_THRESHOLD, base + GIC_CPU_PRIMASK);
>  	gic_cpu_if_up(gic);
>  }
> @@ -553,7 +698,8 @@ int gic_cpu_if_down(unsigned int gic_nr)
>  
>  	cpu_base = gic_data_cpu_base(&gic_data[gic_nr]);
>  	val = readl(cpu_base + GIC_CPU_CTRL);
> -	val &= ~GICC_ENABLE;
> +	val &= ~(GICC_COMMON_BPR | GICC_FIQ_EN | GICC_ACK_CTL |
> +		 GICC_ENABLE_GRP1 | GICC_ENABLE);
>  	writel_relaxed(val, cpu_base + GIC_CPU_CTRL);
>  
>  	return 0;
> @@ -648,7 +794,8 @@ static void gic_dist_restore(unsigned int gic_nr)
>  			dist_base + GIC_DIST_ACTIVE_SET + i * 4);
>  	}
>  
> -	writel_relaxed(GICD_ENABLE, dist_base + GIC_DIST_CTRL);
> +	writel_relaxed(GICD_ENABLE_GRP1 | GICD_ENABLE,
> +		       dist_base + GIC_DIST_CTRL);
>  }
>  
>  static void gic_cpu_save(unsigned int gic_nr)
> @@ -794,6 +941,8 @@ static void gic_raise_softirq(const struct cpumask *mask, unsigned int irq)
>  {
>  	int cpu;
>  	unsigned long map = 0;
> +	unsigned long softint;
> +	void __iomem *dist_base;
>  
>  	gic_migration_lock();
>  
> @@ -801,14 +950,20 @@ static void gic_raise_softirq(const struct cpumask *mask, unsigned int irq)
>  	for_each_cpu(cpu, mask)
>  		map |= gic_cpu_map[cpu];
>  
> +	/* This always happens on GIC0 */
> +	dist_base = gic_data_dist_base(&gic_data[0]);
> +
>  	/*
>  	 * Ensure that stores to Normal memory are visible to the
>  	 * other CPUs before they observe us issuing the IPI.
>  	 */
>  	dmb(ishst);
>  
> -	/* this always happens on GIC0 */
> -	writel_relaxed(map << 16 | irq, gic_data_dist_base(&gic_data[0]) + GIC_DIST_SOFTINT);
> +	softint = map << 16 | irq;
> +
> +	writel_relaxed(softint, dist_base + GIC_DIST_SOFTINT);
> +	if (gic_data[0].sgi_with_nsatt)
> +		writel_relaxed(softint | 0x8000, dist_base + GIC_DIST_SOFTINT);

Ah! No way. Writing twice to GICD_SGIR is horribly inefficient - just
think of a virtualized environment where you actually trap to HYP to
emulate SGIs (and some actual HW sucks almost as much...). A better
solution would be to keep track of which SGIs are secure and which are
not. A simple u16 would do.
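
i.e. roughly (sgi_grp1_map being a hypothetical u16 in gic_chip_data,
maintained by gic_set_group_irq()):

	softint = map << 16 | irq;
	/* single write: set NSATT only for SGIs known to be in group 1 */
	if (gic_data[0].sgi_with_nsatt &&
	    (gic_data[0].sgi_grp1_map & BIT(irq)))
		softint |= BIT(15);
	writel_relaxed(softint, dist_base + GIC_DIST_SOFTINT);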

>  
>  	gic_migration_unlock();
>  }
> diff --git a/include/linux/irqchip/arm-gic.h b/include/linux/irqchip/arm-gic.h
> index bae69e5d693c..17b9e20d754e 100644
> --- a/include/linux/irqchip/arm-gic.h
> +++ b/include/linux/irqchip/arm-gic.h
> @@ -23,6 +23,10 @@
>  #define GIC_CPU_DEACTIVATE		0x1000
>  
>  #define GICC_ENABLE			0x1
> +#define GICC_ENABLE_GRP1		0x2
> +#define GICC_ACK_CTL			0x4
> +#define GICC_FIQ_EN			0x8
> +#define GICC_COMMON_BPR			0x10
>  #define GICC_INT_PRI_THRESHOLD		0xf0
>  
>  #define GIC_CPU_CTRL_EOImodeNS		(1 << 9)
> @@ -48,7 +52,9 @@
>  #define GIC_DIST_SGI_PENDING_SET	0xf20
>  
>  #define GICD_ENABLE			0x1
> +#define GICD_ENABLE_GRP1		0x2
>  #define GICD_DISABLE			0x0
> +#define GICD_SECURITY_EXTN		0x400
>  #define GICD_INT_ACTLOW_LVLTRIG		0x0
>  #define GICD_INT_EN_CLR_X32		0xffffffff
>  #define GICD_INT_EN_SET_SGI		0x0000ffff
> 

Thanks,

	M.
-- 
Jazz is not dead. It just smells funny...

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCH 4.4-rc5 v22 3/4] irqchip: gic: Introduce plumbing for IPI FIQ
  2016-01-07 17:06     ` Marc Zyngier
@ 2016-01-11 12:02       ` Daniel Thompson
  -1 siblings, 0 replies; 26+ messages in thread
From: Daniel Thompson @ 2016-01-11 12:02 UTC (permalink / raw)
  To: Marc Zyngier, Thomas Gleixner, Jason Cooper, Russell King
  Cc: Will Deacon, Catalin Marinas, Stephen Boyd, John Stultz,
	Steven Rostedt, linux-kernel, linux-arm-kernel, patches,
	linaro-kernel, Sumit Semwal, Dirk Behme, Daniel Drake,
	Dmitry Pervushin, Tim Sander, Petr Mladek

On 07/01/16 17:06, Marc Zyngier wrote:
> On 20/12/15 20:52, Daniel Thompson wrote:
>> Currently it is not possible to exploit FIQ on systems with a GIC, even
>> when the system is otherwise capable of it. This patch makes it possible
>> for IPIs to be delivered using FIQ.
>>
>> To do so it modifies the register state so that normal interrupts are
>> placed in group 1 and specific IPIs are placed into group 0. It also
>> configures the controller to raise group 0 interrupts using the FIQ
>> signal. Finally it provides a means for architecture code to define
>> which IPIs shall use FIQ and to acknowledge any IPIs that are raised.
>>
>> All GIC hardware except GICv1-without-TrustZone support provides a means
>> to group exceptions into group 0 and group 1 but the hardware
>> functionality is unavailable to the kernel when a secure monitor is
>> present because access to the grouping registers is prohibited outside
>> "secure world". However, when grouping is not available (or on early
>> GICv1 implementations where it is available but tricky to enable) the
>> code to change groups does not deploy and all IPIs will be raised via
>> IRQ.
>>
>> Signed-off-by: Daniel Thompson <daniel.thompson@linaro.org>
>> Cc: Thomas Gleixner <tglx@linutronix.de>
>> Cc: Jason Cooper <jason@lakedaemon.net>
>> Cc: Russell King <linux@arm.linux.org.uk>
>> Cc: Marc Zyngier <marc.zyngier@arm.com>
>> Tested-by: Jon Medhurst <tixy@linaro.org>
>> ---
>>   drivers/irqchip/irq-gic.c       | 183 +++++++++++++++++++++++++++++++++++++---
>>   include/linux/irqchip/arm-gic.h |   6 ++
>>   2 files changed, 175 insertions(+), 14 deletions(-)
>>
>> diff --git a/drivers/irqchip/irq-gic.c b/drivers/irqchip/irq-gic.c
>> index b6c1e96b52a1..8077edd0d38d 100644
>> --- a/drivers/irqchip/irq-gic.c
>> +++ b/drivers/irqchip/irq-gic.c
>> @@ -41,6 +41,7 @@
>>   #include <linux/irqchip.h>
>>   #include <linux/irqchip/chained_irq.h>
>>   #include <linux/irqchip/arm-gic.h>
>> +#include <linux/ratelimit.h>
>>
>>   #include <asm/cputype.h>
>>   #include <asm/irq.h>
>> @@ -63,6 +64,10 @@ static void gic_check_cpu_features(void)
>>   #define gic_check_cpu_features()	do { } while(0)
>>   #endif
>>
>> +#ifndef SMP_IPI_FIQ_MASK
>> +#define SMP_IPI_FIQ_MASK 0
>> +#endif
>> +
>>   union gic_base {
>>   	void __iomem *common_base;
>>   	void __percpu * __iomem *percpu_base;
>> @@ -82,6 +87,7 @@ struct gic_chip_data {
>>   #endif
>>   	struct irq_domain *domain;
>>   	unsigned int gic_irqs;
>> +	bool sgi_with_nsatt;
>>   #ifdef CONFIG_GIC_NON_BANKED
>>   	void __iomem *(*get_base)(union gic_base *);
>>   #endif
>> @@ -350,12 +356,52 @@ static int gic_set_affinity(struct irq_data *d, const struct cpumask *mask_val,
>>   }
>>   #endif
>>
>> +/*
>> + * Fully acknowledge (ack, eoi and deactivate) any outstanding FIQ-based IPI,
>> + * otherwise do nothing.
>> + */
>> +static void __maybe_unused gic_handle_fiq(struct pt_regs *regs)
>> +{
>> +	struct gic_chip_data *gic = &gic_data[0];
>> +	void __iomem *cpu_base = gic_data_cpu_base(gic);
>> +	u32 hppstat, hppnr, irqstat, irqnr;
>> +
>> +	do {
>> +		hppstat = readl_relaxed(cpu_base + GIC_CPU_HIGHPRI);
>> +		hppnr = hppstat & GICC_IAR_INT_ID_MASK;
>> +		if (!(hppnr < 16 && BIT(hppnr) & SMP_IPI_FIQ_MASK))
>> +			break;
>> +
>> +		irqstat = readl_relaxed(cpu_base + GIC_CPU_INTACK);
>> +		irqnr = irqstat & GICC_IAR_INT_ID_MASK;
>> +
>> +		writel_relaxed(irqstat, cpu_base + GIC_CPU_EOI);
>> +		if (static_key_true(&supports_deactivate))
>> +			writel_relaxed(irqstat, cpu_base + GIC_CPU_DEACTIVATE);
>> +
>> +		if (WARN_RATELIMIT(irqnr > 16,
>
> Shouldn't that be irqnr > 15?

Yep. Will fix.
>
>> +			       "Unexpected irqnr %u (bad prioritization?)\n",
>> +			       irqnr))
>> +			continue;
>> +#ifdef CONFIG_SMP
>> +		handle_IPI(irqnr, regs);
>> +#endif
>> +	} while (1);
>> +}
>> +
>>   static void __exception_irq_entry gic_handle_irq(struct pt_regs *regs)
>>   {
>>   	u32 irqstat, irqnr;
>>   	struct gic_chip_data *gic = &gic_data[0];
>>   	void __iomem *cpu_base = gic_data_cpu_base(gic);
>>
>> +#ifdef CONFIG_ARM
>
> What is the reason to make this 32bit specific?

I have tried to keep the interfaces for my NMI-like work on arm/fiq
and arm64/gicv3 as similar as possible.

However, for arm/fiq the arch code *must* tell the gic driver that we
are handling a FIQ via in_nmi() so that it can avoid acking the wrong
interrupt. Conversely, on arm64/gicv3 the arch code is *unable* to tell
the gic driver that we are handling a high priority IRQ because we don't
know that until the gic driver acks the interrupt.

In the end I can't think of any way for any arch/arm64 code to call into 
gic_handle_irq() with a FIQ to handle and with nmi_enter() already 
called. in_nmi() would always be false at this point, so having this 
code present on arm64 is harmless but also pointless.
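
To make that concrete, the arm side of this (patch 4/4) boils down to
something like the sketch below. This is a from-memory outline rather
than the exact patch text, so treat the details as illustrative:

	asmlinkage void __exception_irq_entry handle_fiq_as_nmi(struct pt_regs *regs)
	{
		struct pt_regs *old_regs = set_irq_regs(regs);

		nmi_enter();

	#ifdef CONFIG_MULTI_IRQ_HANDLER
		/*
		 * in_nmi() is true here, so gic_handle_irq() diverts to
		 * gic_handle_fiq() instead of blindly reading IAR.
		 */
		handle_arch_irq(regs);
	#endif

		nmi_exit();

		set_irq_regs(old_regs);
	}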


>> +	if (in_nmi()) {
>> +		gic_handle_fiq(regs);
>> +		return;
>> +	}
>> +#endif
>> +
>>   	do {
>>   		irqstat = readl_relaxed(cpu_base + GIC_CPU_INTACK);
>>   		irqnr = irqstat & GICC_IAR_INT_ID_MASK;
>> @@ -439,6 +485,55 @@ static struct irq_chip gic_eoimode1_chip = {
>>   				  IRQCHIP_MASK_ON_SUSPEND,
>>   };
>>
>> +/*
>> + * Shift an interrupt between Group 0 and Group 1.
>> + *
>> + * In addition to changing the group we also modify the priority to
>> + * match what "ARM strongly recommends" for a system where no Group 1
>> + * interrupt must ever preempt a Group 0 interrupt.
>> + *
>> + * It is safe to call this function on systems which do not support
>> + * grouping (it will have no effect).
>> + */
>> +static void gic_set_group_irq(struct gic_chip_data *gic, unsigned int hwirq,
>> +			      int group)
>> +{
>> +	void __iomem *base = gic_data_dist_base(gic);
>> +	unsigned int grp_reg = hwirq / 32 * 4;
>> +	u32 grp_mask = BIT(hwirq % 32);
>> +	u32 grp_val;
>> +
>
> nit: spurious space.

Will fix.

>> +	unsigned int pri_reg = (hwirq / 4) * 4;
>> +	u32 pri_mask = BIT(7 + ((hwirq % 4) * 8));
>> +	u32 pri_val;
>> +
>> +	/*
>> +	 * Systems which do not support grouping will not have
>> +	 * the EnableGrp1 bit set.
>> +	 */
>> +	if (!(GICD_ENABLE_GRP1 & readl_relaxed(base + GIC_DIST_CTRL)))
>
> nit: I tend to prefer expressions to be written the other way around
> (readl() & v). But more importantly, you should be able to cache the
> grouping state in the gic_chip_data structure (you seem to have similar
> code below).

Will do.
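
Something like this is what I have in mind (grp1_enabled is a name
invented for the sketch):

	/* new member in struct gic_chip_data */
	bool grp1_enabled;

	/* latched once in gic_dist_init(), after trying to set EnableGrp1 */
	gic->grp1_enabled = !!(readl_relaxed(base + GIC_DIST_CTRL) &
			       GICD_ENABLE_GRP1);

	/* gic_set_group_irq() (and friends) then just test the cached flag */
	if (!gic->grp1_enabled)
		return;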

>> +		return;
>> +
>> +	raw_spin_lock(&irq_controller_lock);
>> +
>> +	grp_val = readl_relaxed(base + GIC_DIST_IGROUP + grp_reg);
>> +	pri_val = readl_relaxed(base + GIC_DIST_PRI + pri_reg);
>
> The priority register is byte-accessible, so you can save yourself some
> effort and just write the priority there.

Will do (also, I have a really strong sense of déjà vu here; I may have
neglected to apply previous comments from you as broadly as I should
have and, if so, then sorry).
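
For the priority side that should reduce to a single byte write;
something like the sketch below, which assumes every other interrupt is
left at GICD_INT_DEF_PRI by gic_dist_config():

	/*
	 * GIC_DIST_PRI is byte-accessible. Keeping bit 7 clear for group 0
	 * keeps it above every group 1 priority, per the "ARM strongly
	 * recommends" wording.
	 */
	writeb_relaxed(group ? GICD_INT_DEF_PRI : GICD_INT_DEF_PRI & ~BIT(7),
		       base + GIC_DIST_PRI + hwirq);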

>> +
>> +	if (group) {
>> +		grp_val |= grp_mask;
>> +		pri_val |= pri_mask;
>> +	} else {
>> +		grp_val &= ~grp_mask;
>> +		pri_val &= ~pri_mask;
>> +	}
>> +
>> +	writel_relaxed(grp_val, base + GIC_DIST_IGROUP + grp_reg);
>> +	writel_relaxed(pri_val, base + GIC_DIST_PRI + pri_reg);
>> +
>> +	raw_spin_unlock(&irq_controller_lock);
>> +}
>> +
>> +
>>   void __init gic_cascade_irq(unsigned int gic_nr, unsigned int irq)
>>   {
>>   	if (gic_nr >= MAX_GIC_NR)
>> @@ -469,19 +564,27 @@ static u8 gic_get_cpumask(struct gic_chip_data *gic)
>>   static void gic_cpu_if_up(struct gic_chip_data *gic)
>>   {
>>   	void __iomem *cpu_base = gic_data_cpu_base(gic);
>> -	u32 bypass = 0;
>> -	u32 mode = 0;
>> +	void __iomem *dist_base = gic_data_dist_base(gic);
>> +	u32 ctrl = 0;
>>
>> -	if (static_key_true(&supports_deactivate))
>> -		mode = GIC_CPU_CTRL_EOImodeNS;
>> +	/*
>> +	 * Preserve bypass disable bits to be written back later
>> +	 */
>> +	ctrl = readl(cpu_base + GIC_CPU_CTRL);
>> +	ctrl &= GICC_DIS_BYPASS_MASK;
>>
>>   	/*
>> -	* Preserve bypass disable bits to be written back later
>> -	*/
>> -	bypass = readl(cpu_base + GIC_CPU_CTRL);
>> -	bypass &= GICC_DIS_BYPASS_MASK;
>> +	 * If EnableGrp1 is set in the distributor then enable group 1
>> +	 * support for this CPU (and route group 0 interrupts to FIQ).
>> +	 */
>> +	if (GICD_ENABLE_GRP1 & readl_relaxed(dist_base + GIC_DIST_CTRL))
>> +		ctrl |= GICC_COMMON_BPR | GICC_FIQ_EN | GICC_ACK_CTL |
>> +			GICC_ENABLE_GRP1;
>> +
>> +	if (static_key_true(&supports_deactivate))
>> +		ctrl |= GIC_CPU_CTRL_EOImodeNS;
>>
>> -	writel_relaxed(bypass | mode | GICC_ENABLE, cpu_base + GIC_CPU_CTRL);
>> +	writel_relaxed(ctrl | GICC_ENABLE, cpu_base + GIC_CPU_CTRL);
>>   }
>>
>>
>> @@ -505,7 +608,31 @@ static void __init gic_dist_init(struct gic_chip_data *gic)
>>
>>   	gic_dist_config(base, gic_irqs, NULL);
>>
>> -	writel_relaxed(GICD_ENABLE, base + GIC_DIST_CTRL);
>> +	/*
>> +	 * Set EnableGrp1/EnableGrp0 (bit 1 and 0) or EnableGrp (bit 0 only,
>> +	 * bit 1 ignored) depending on current security mode.
>> +	 */
>> +	writel_relaxed(GICD_ENABLE_GRP1 | GICD_ENABLE, base + GIC_DIST_CTRL);
>> +
>> +	/*
>> +	 * Some GICv1 devices (even those with security extensions) do not
>> +	 * implement EnableGrp1 meaning some parts of the above write may
>> +	 * be ignored. We will only enable FIQ support if the bit can be set.
>> +	 */
>> +	if (GICD_ENABLE_GRP1 & readl_relaxed(base + GIC_DIST_CTRL)) {
>> +		/* Place all SPIs in group 1 (signalled with IRQ). */
>> +		for (i = 32; i < gic_irqs; i += 32)
>> +			writel_relaxed(0xffffffff,
>> +				       base + GIC_DIST_IGROUP + i * 4 / 32);
>> +
>> +		/*
>> +		 * If the GIC supports the security extension then SGIs
>> +		 * will be filtered based on the value of NSATT. If the
>> +		 * GIC has this support then enable NSATT support.
>> +		 */
>> +		if (GICD_SECURITY_EXTN & readl_relaxed(base + GIC_DIST_CTR))
>> +			gic->sgi_with_nsatt = true;
>> +	}
>>   }
>>
>>   static void gic_cpu_init(struct gic_chip_data *gic)
>> @@ -514,6 +641,7 @@ static void gic_cpu_init(struct gic_chip_data *gic)
>>   	void __iomem *base = gic_data_cpu_base(gic);
>>   	unsigned int cpu_mask, cpu = smp_processor_id();
>>   	int i;
>> +	unsigned long ipi_fiq_mask, fiq;
>
> Think of the children (arm64), do not make ipi_fiq_mask a long... If you
> pass anything but u32 to writel, you're doing something wrong.

Will do.
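
Combined with your suggestion a little further down, the per-cpu init
would end up looking something like this (sketch only):

	unsigned long ipi_fiq_mask = SMP_IPI_FIQ_MASK;	/* for for_each_set_bit() */
	unsigned int fiq;

	if (readl_relaxed(dist_base + GIC_DIST_CTRL) & GICD_ENABLE_GRP1) {
		writel_relaxed(~(u32)SMP_IPI_FIQ_MASK,
			       dist_base + GIC_DIST_IGROUP + 0);
		for_each_set_bit(fiq, &ipi_fiq_mask, 16)
			gic_set_group_irq(gic, fiq, 0);
	}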


>>
>>   	/*
>>   	 * Setting up the CPU map is only relevant for the primary GIC
>> @@ -539,6 +667,23 @@ static void gic_cpu_init(struct gic_chip_data *gic)
>>
>>   	gic_cpu_config(dist_base, NULL);
>>
>> +	/*
>> +	 * If the distributor is configured to support interrupt grouping
>> +	 * then set all SGI and PPI interrupts that are not set in
>> +	 * SMP_IPI_FIQ_MASK to group1 and ensure any remaining group 0
>> +	 * interrupts have the right priority.
>> +	 *
>> +	 * Note that IGROUP[0] is banked, meaning that although we are
>> +	 * writing to a distributor register we are actually performing
>> +	 * part of the per-cpu initialization.
>> +	 */
>> +	if (GICD_ENABLE_GRP1 & readl_relaxed(dist_base + GIC_DIST_CTRL)) {
>> +		ipi_fiq_mask = SMP_IPI_FIQ_MASK;
>> +		writel_relaxed(~ipi_fiq_mask, dist_base + GIC_DIST_IGROUP + 0);
>
> or just turn this into writel_relaxed(~SMP_IPI_FIQ_MASK...) and keep
> ipi_fiq_mask as a long.
>
>> +		for_each_set_bit(fiq, &ipi_fiq_mask, 16)
>> +			gic_set_group_irq(gic, fiq, 0);
>> +	}
>> +
>>   	writel_relaxed(GICC_INT_PRI_THRESHOLD, base + GIC_CPU_PRIMASK);
>>   	gic_cpu_if_up(gic);
>>   }
>> @@ -553,7 +698,8 @@ int gic_cpu_if_down(unsigned int gic_nr)
>>
>>   	cpu_base = gic_data_cpu_base(&gic_data[gic_nr]);
>>   	val = readl(cpu_base + GIC_CPU_CTRL);
>> -	val &= ~GICC_ENABLE;
>> +	val &= ~(GICC_COMMON_BPR | GICC_FIQ_EN | GICC_ACK_CTL |
>> +		 GICC_ENABLE_GRP1 | GICC_ENABLE);
>>   	writel_relaxed(val, cpu_base + GIC_CPU_CTRL);
>>
>>   	return 0;
>> @@ -648,7 +794,8 @@ static void gic_dist_restore(unsigned int gic_nr)
>>   			dist_base + GIC_DIST_ACTIVE_SET + i * 4);
>>   	}
>>
>> -	writel_relaxed(GICD_ENABLE, dist_base + GIC_DIST_CTRL);
>> +	writel_relaxed(GICD_ENABLE_GRP1 | GICD_ENABLE,
>> +		       dist_base + GIC_DIST_CTRL);
>>   }
>>
>>   static void gic_cpu_save(unsigned int gic_nr)
>> @@ -794,6 +941,8 @@ static void gic_raise_softirq(const struct cpumask *mask, unsigned int irq)
>>   {
>>   	int cpu;
>>   	unsigned long map = 0;
>> +	unsigned long softint;
>> +	void __iomem *dist_base;
>>
>>   	gic_migration_lock();
>>
>> @@ -801,14 +950,20 @@ static void gic_raise_softirq(const struct cpumask *mask, unsigned int irq)
>>   	for_each_cpu(cpu, mask)
>>   		map |= gic_cpu_map[cpu];
>>
>> +	/* This always happens on GIC0 */
>> +	dist_base = gic_data_dist_base(&gic_data[0]);
>> +
>>   	/*
>>   	 * Ensure that stores to Normal memory are visible to the
>>   	 * other CPUs before they observe us issuing the IPI.
>>   	 */
>>   	dmb(ishst);
>>
>> -	/* this always happens on GIC0 */
>> -	writel_relaxed(map << 16 | irq, gic_data_dist_base(&gic_data[0]) + GIC_DIST_SOFTINT);
>> +	softint = map << 16 | irq;
>> +
>> +	writel_relaxed(softint, dist_base + GIC_DIST_SOFTINT);
>> +	if (gic_data[0].sgi_with_nsatt)
>> +		writel_relaxed(softint | 0x8000, dist_base + GIC_DIST_SOFTINT);
>
> Ah! No way. Writing twice to GICD_SGIR is horribly inefficient - just
> think of a virtualized environment where you actually trap to HYP to
> emulate SGIs (and some actual HW sucks almost as much...). A better
> solution would be to keep track of which SGIs are secure and which are
> not. A simple u16 would do.

Will do. I remember why I wrote it like this (concern over whether the
cache could ever become inconsistent) but, reading it back now, that
does look rather paranoid.
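
For the record, what I have in mind is roughly the following (names are
invented for the sketch and it assumes gic_set_group_irq() is the only
place an SGI ever changes group):

	/* new member in struct gic_chip_data */
	u16 sgi_group1;		/* bit n set => SGI n is currently group 1 */

	/* gic_set_group_irq() keeps the cache in sync for SGIs */
	if (hwirq < 16) {
		if (group)
			gic->sgi_group1 |= BIT(hwirq);
		else
			gic->sgi_group1 &= ~BIT(hwirq);
	}

	/* gic_raise_softirq() then needs only one write to GICD_SGIR */
	softint = map << 16 | irq;
	if (gic_data[0].sgi_with_nsatt &&
	    (gic_data[0].sgi_group1 & BIT(irq)))
		softint |= 0x8000;	/* NSATT: raise it as a group 1 SGI */
	writel_relaxed(softint, dist_base + GIC_DIST_SOFTINT);

The cache would also need initializing alongside the IGROUP[0] write in
gic_cpu_init() so that it starts out matching ~SMP_IPI_FIQ_MASK.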


Daniel.

^ permalink raw reply	[flat|nested] 26+ messages in thread

end of thread, other threads:[~2016-01-11 12:02 UTC | newest]

Thread overview: 26+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2015-12-20 20:52 [PATCH 4.4-rc5 v22 0/4] irq/arm: Use FIQ for NMI backtrace (when possible) Daniel Thompson
2015-12-20 20:52 ` Daniel Thompson
2015-12-20 20:52 ` [PATCH 4.4-rc5 v22 1/4] irqchip: gic: Optimize locking in gic_raise_softirq Daniel Thompson
2015-12-20 20:52   ` Daniel Thompson
2016-01-07 14:31   ` Marc Zyngier
2016-01-07 14:31     ` Marc Zyngier
2015-12-20 20:52 ` [PATCH 4.4-rc5 v22 2/4] irqchip: gic: Make gic_raise_softirq FIQ-safe Daniel Thompson
2015-12-20 20:52   ` Daniel Thompson
2016-01-07 14:35   ` Marc Zyngier
2016-01-07 14:35     ` Marc Zyngier
2015-12-20 20:52 ` [PATCH 4.4-rc5 v22 3/4] irqchip: gic: Introduce plumbing for IPI FIQ Daniel Thompson
2015-12-20 20:52   ` Daniel Thompson
2016-01-07 17:06   ` Marc Zyngier
2016-01-07 17:06     ` Marc Zyngier
2016-01-11 12:02     ` Daniel Thompson
2016-01-11 12:02       ` Daniel Thompson
2015-12-20 20:52 ` [PATCH 4.4-rc5 v22 4/4] ARM: Allow IPI_CPU_BACKTRACE to exploit FIQ Daniel Thompson
2015-12-20 20:52   ` Daniel Thompson
2015-12-20 22:12   ` kbuild test robot
2015-12-20 22:12     ` kbuild test robot
2016-01-04 10:00     ` Daniel Thompson
2016-01-04 10:00       ` Daniel Thompson
2015-12-20 22:52   ` kbuild test robot
2015-12-20 22:52     ` kbuild test robot
2016-01-04 10:05     ` Daniel Thompson
2016-01-04 10:05       ` Daniel Thompson
