* [PATCH 0/4] More improvements for multiple PLICs
From: Anup Patel @ 2020-05-16  6:38 UTC (permalink / raw)
  To: Palmer Dabbelt, Paul Walmsley, Thomas Gleixner, Jason Cooper,
	Marc Zyngier
  Cc: Atish Patra, Alistair Francis, Anup Patel, linux-riscv,
	linux-kernel, Anup Patel

This series makes further improvements to support multiple PLIC
instances.

PATCH1 and PATCH4 are fixes, whereas PATCH2 and PATCH3 help users
distinguish between multiple PLIC instances.

These patches are based on Linux-5.7-rc5 and can be found in the
plic_imp_v1 branch at: https://github.com/avpatel/linux.git

To try these patches, you will need:
1. OpenSBI multi-PLIC and multi-CLINT support which can be found in
   multi_plic_clint_v1 branch at:
   https://github.com/avpatel/opensbi.git
2. QEMU RISC-V multi-socket support which can be found in
   riscv_multi_socket_v1 branch at:
   https://github.com/avpatel/qemu.git
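For reference, the three trees listed above can be fetched with plain git (a sketch: the branch names and URLs are taken from this cover letter; no build steps are shown because none are given here):

```shell
# Fetch the kernel, OpenSBI, and QEMU trees at the branches named in
# the cover letter. Single-branch clones keep the downloads small.
set -e
git clone --single-branch -b plic_imp_v1 https://github.com/avpatel/linux.git
git clone --single-branch -b multi_plic_clint_v1 https://github.com/avpatel/opensbi.git
git clone --single-branch -b riscv_multi_socket_v1 https://github.com/avpatel/qemu.git
```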

Anup Patel (4):
  irqchip/sifive-plic: Setup cpuhp once after current handler is present
  irqchip/sifive-plic: Improve boot prints for multiple PLIC instances
  irqchip/sifive-plic: Separate irq_chip for multiple PLIC instances
  irqchip/sifive-plic: Set default irq affinity in plic_irqdomain_map()

 drivers/irqchip/irq-sifive-plic.c | 50 +++++++++++++++++++++----------
 1 file changed, 35 insertions(+), 15 deletions(-)

-- 
2.25.1


* [PATCH 1/4] irqchip/sifive-plic: Setup cpuhp once after current handler is present
From: Anup Patel @ 2020-05-16  6:38 UTC (permalink / raw)
  To: Palmer Dabbelt, Paul Walmsley, Thomas Gleixner, Jason Cooper,
	Marc Zyngier
  Cc: Atish Patra, Alistair Francis, Anup Patel, linux-riscv,
	linux-kernel, Anup Patel

For multiple PLIC instances, plic_init() is called once for each
PLIC instance. This causes two issues:
1. cpuhp_setup_state() is called multiple times.
2. plic_starting_cpu() can crash on the boot CPU if cpuhp_setup_state()
   is called before the boot CPU's PLIC handler is available.

This patch fixes both issues.

Signed-off-by: Anup Patel <anup.patel@wdc.com>
---
 drivers/irqchip/irq-sifive-plic.c | 14 ++++++++++++--
 1 file changed, 12 insertions(+), 2 deletions(-)

diff --git a/drivers/irqchip/irq-sifive-plic.c b/drivers/irqchip/irq-sifive-plic.c
index 822e074c0600..7dc23edb3267 100644
--- a/drivers/irqchip/irq-sifive-plic.c
+++ b/drivers/irqchip/irq-sifive-plic.c
@@ -76,6 +76,7 @@ struct plic_handler {
 	void __iomem		*enable_base;
 	struct plic_priv	*priv;
 };
+static bool plic_cpuhp_setup_done;
 static DEFINE_PER_CPU(struct plic_handler, plic_handlers);
 
 static inline void plic_toggle(struct plic_handler *handler,
@@ -282,6 +283,7 @@ static int __init plic_init(struct device_node *node,
 	int error = 0, nr_contexts, nr_handlers = 0, i;
 	u32 nr_irqs;
 	struct plic_priv *priv;
+	struct plic_handler *handler;
 
 	priv = kzalloc(sizeof(*priv), GFP_KERNEL);
 	if (!priv)
@@ -310,7 +312,6 @@ static int __init plic_init(struct device_node *node,
 
 	for (i = 0; i < nr_contexts; i++) {
 		struct of_phandle_args parent;
-		struct plic_handler *handler;
 		irq_hw_number_t hwirq;
 		int cpu, hartid;
 
@@ -364,9 +365,18 @@ static int __init plic_init(struct device_node *node,
 		nr_handlers++;
 	}
 
-	cpuhp_setup_state(CPUHP_AP_IRQ_SIFIVE_PLIC_STARTING,
+	/*
+	 * We can have multiple PLIC instances so setup cpuhp state only
+	 * when context handler for current/boot CPU is present.
+	 */
+	handler = this_cpu_ptr(&plic_handlers);
+	if (handler->present && !plic_cpuhp_setup_done) {
+		cpuhp_setup_state(CPUHP_AP_IRQ_SIFIVE_PLIC_STARTING,
 				  "irqchip/sifive/plic:starting",
 				  plic_starting_cpu, plic_dying_cpu);
+		plic_cpuhp_setup_done = true;
+	}
+
 	pr_info("mapped %d interrupts with %d handlers for %d contexts.\n",
 		nr_irqs, nr_handlers, nr_contexts);
 	set_handle_irq(plic_handle_irq);
-- 
2.25.1


* [PATCH 2/4] irqchip/sifive-plic: Improve boot prints for multiple PLIC instances
From: Anup Patel @ 2020-05-16  6:38 UTC (permalink / raw)
  To: Palmer Dabbelt, Paul Walmsley, Thomas Gleixner, Jason Cooper,
	Marc Zyngier
  Cc: Atish Patra, Alistair Francis, Anup Patel, linux-riscv,
	linux-kernel, Anup Patel

Improve the PLIC banner to help distinguish multiple PLIC instances
in boot-time prints.

Signed-off-by: Anup Patel <anup.patel@wdc.com>
---
 drivers/irqchip/irq-sifive-plic.c | 11 +++++++++--
 1 file changed, 9 insertions(+), 2 deletions(-)

diff --git a/drivers/irqchip/irq-sifive-plic.c b/drivers/irqchip/irq-sifive-plic.c
index 7dc23edb3267..2d3db927a551 100644
--- a/drivers/irqchip/irq-sifive-plic.c
+++ b/drivers/irqchip/irq-sifive-plic.c
@@ -284,6 +284,11 @@ static int __init plic_init(struct device_node *node,
 	u32 nr_irqs;
 	struct plic_priv *priv;
 	struct plic_handler *handler;
+	struct resource iores;
+
+	error = of_address_to_resource(node, 0, &iores);
+	if (error)
+		return error;
 
 	priv = kzalloc(sizeof(*priv), GFP_KERNEL);
 	if (!priv)
@@ -377,8 +382,10 @@ static int __init plic_init(struct device_node *node,
 		plic_cpuhp_setup_done = true;
 	}
 
-	pr_info("mapped %d interrupts with %d handlers for %d contexts.\n",
-		nr_irqs, nr_handlers, nr_contexts);
+	pr_info("interrupt-controller at 0x%llx "
+		"(interrupts=%d, contexts=%d, handlers=%d)\n",
+		(unsigned long long)iores.start, nr_irqs,
+		nr_contexts, nr_handlers);
 	set_handle_irq(plic_handle_irq);
 	return 0;
 
-- 
2.25.1


* [PATCH 3/4] irqchip/sifive-plic: Separate irq_chip for multiple PLIC instances
From: Anup Patel @ 2020-05-16  6:39 UTC (permalink / raw)
  To: Palmer Dabbelt, Paul Walmsley, Thomas Gleixner, Jason Cooper,
	Marc Zyngier
  Cc: Atish Patra, Alistair Francis, Anup Patel, linux-riscv,
	linux-kernel, Anup Patel

To distinguish interrupts from multiple PLIC instances, use a
per-PLIC irq_chip instance with a unique name.

Signed-off-by: Anup Patel <anup.patel@wdc.com>
---
 drivers/irqchip/irq-sifive-plic.c | 28 +++++++++++++++-------------
 1 file changed, 15 insertions(+), 13 deletions(-)

diff --git a/drivers/irqchip/irq-sifive-plic.c b/drivers/irqchip/irq-sifive-plic.c
index 2d3db927a551..e42fc082ad18 100644
--- a/drivers/irqchip/irq-sifive-plic.c
+++ b/drivers/irqchip/irq-sifive-plic.c
@@ -60,6 +60,7 @@
 #define	PLIC_ENABLE_THRESHOLD		0
 
 struct plic_priv {
+	struct irq_chip chip;
 	struct cpumask lmask;
 	struct irq_domain *irqdomain;
 	void __iomem *regs;
@@ -76,6 +77,7 @@ struct plic_handler {
 	void __iomem		*enable_base;
 	struct plic_priv	*priv;
 };
+static unsigned int plic_count;
 static bool plic_cpuhp_setup_done;
 static DEFINE_PER_CPU(struct plic_handler, plic_handlers);
 
@@ -164,20 +166,12 @@ static void plic_irq_eoi(struct irq_data *d)
 	writel(d->hwirq, handler->hart_base + CONTEXT_CLAIM);
 }
 
-static struct irq_chip plic_chip = {
-	.name		= "SiFive PLIC",
-	.irq_mask	= plic_irq_mask,
-	.irq_unmask	= plic_irq_unmask,
-	.irq_eoi	= plic_irq_eoi,
-#ifdef CONFIG_SMP
-	.irq_set_affinity = plic_set_affinity,
-#endif
-};
-
 static int plic_irqdomain_map(struct irq_domain *d, unsigned int irq,
 			      irq_hw_number_t hwirq)
 {
-	irq_domain_set_info(d, irq, hwirq, &plic_chip, d->host_data,
+	struct plic_priv *priv = d->host_data;
+
+	irq_domain_set_info(d, irq, hwirq, &priv->chip, d->host_data,
 			    handle_fasteoi_irq, NULL, NULL);
 	irq_set_noprobe(irq);
 	return 0;
@@ -294,6 +288,14 @@ static int __init plic_init(struct device_node *node,
 	if (!priv)
 		return -ENOMEM;
 
+	priv->chip.name = kasprintf(GFP_KERNEL, "PLIC%d", plic_count++);
+	priv->chip.irq_mask = plic_irq_mask,
+	priv->chip.irq_unmask = plic_irq_unmask,
+	priv->chip.irq_eoi = plic_irq_eoi,
+#ifdef CONFIG_SMP
+	priv->chip.irq_set_affinity = plic_set_affinity,
+#endif
+
 	priv->regs = of_iomap(node, 0);
 	if (WARN_ON(!priv->regs)) {
 		error = -EIO;
@@ -383,9 +385,9 @@ static int __init plic_init(struct device_node *node,
 	}
 
 	pr_info("interrupt-controller at 0x%llx "
-		"(interrupts=%d, contexts=%d, handlers=%d)\n",
+		"(interrupts=%d, contexts=%d, handlers=%d) (%s)\n",
 		(unsigned long long)iores.start, nr_irqs,
-		nr_contexts, nr_handlers);
+		nr_contexts, nr_handlers, priv->chip.name);
 	set_handle_irq(plic_handle_irq);
 	return 0;
 
-- 
2.25.1


* [PATCH 4/4] irqchip/sifive-plic: Set default irq affinity in plic_irqdomain_map()
From: Anup Patel @ 2020-05-16  6:39 UTC (permalink / raw)
  To: Palmer Dabbelt, Paul Walmsley, Thomas Gleixner, Jason Cooper,
	Marc Zyngier
  Cc: Atish Patra, Alistair Francis, Anup Patel, linux-riscv,
	linux-kernel, Anup Patel

For multiple PLIC instances, each PLIC can only target a subset of
CPUs, which is represented by "lmask" in "struct plic_priv".

Currently, the default irq affinity for each PLIC interrupt is all
online CPUs, which is an illegal value when we have multiple PLIC
instances. To fix this, set "lmask" as the default irq affinity for
each interrupt in plic_irqdomain_map().

Signed-off-by: Anup Patel <anup.patel@wdc.com>
---
 drivers/irqchip/irq-sifive-plic.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/drivers/irqchip/irq-sifive-plic.c b/drivers/irqchip/irq-sifive-plic.c
index e42fc082ad18..9af5e2fd2574 100644
--- a/drivers/irqchip/irq-sifive-plic.c
+++ b/drivers/irqchip/irq-sifive-plic.c
@@ -174,6 +174,7 @@ static int plic_irqdomain_map(struct irq_domain *d, unsigned int irq,
 	irq_domain_set_info(d, irq, hwirq, &priv->chip, d->host_data,
 			    handle_fasteoi_irq, NULL, NULL);
 	irq_set_noprobe(irq);
+	irq_set_affinity(irq, &priv->lmask);
 	return 0;
 }
 
-- 
2.25.1


* Re: [PATCH 1/4] irqchip/sifive-plic: Setup cpuhp once after current handler is present
From: Marc Zyngier @ 2020-05-16 12:11 UTC (permalink / raw)
  To: Anup Patel
  Cc: Palmer Dabbelt, Paul Walmsley, Thomas Gleixner, Jason Cooper,
	Atish Patra, Alistair Francis, Anup Patel, linux-riscv,
	linux-kernel

Hi Anup,

On 2020-05-16 07:38, Anup Patel wrote:
> For multiple PLIC instances, the plic_init() is called once for each
> PLIC instance. Due to this we have two issues:
> 1. cpuhp_setup_state() is called multiple times
> 2. plic_starting_cpu() can crash for boot CPU if cpuhp_setup_state()
>    is called before boot CPU PLIC handler is available.
> 
> This patch fixes both above issues.
> 
> Signed-off-by: Anup Patel <anup.patel@wdc.com>
> ---
>  drivers/irqchip/irq-sifive-plic.c | 14 ++++++++++++--
>  1 file changed, 12 insertions(+), 2 deletions(-)
> 
> diff --git a/drivers/irqchip/irq-sifive-plic.c
> b/drivers/irqchip/irq-sifive-plic.c
> index 822e074c0600..7dc23edb3267 100644
> --- a/drivers/irqchip/irq-sifive-plic.c
> +++ b/drivers/irqchip/irq-sifive-plic.c
> @@ -76,6 +76,7 @@ struct plic_handler {
>  	void __iomem		*enable_base;
>  	struct plic_priv	*priv;
>  };
> +static bool plic_cpuhp_setup_done;
>  static DEFINE_PER_CPU(struct plic_handler, plic_handlers);
> 
>  static inline void plic_toggle(struct plic_handler *handler,
> @@ -282,6 +283,7 @@ static int __init plic_init(struct device_node 
> *node,
>  	int error = 0, nr_contexts, nr_handlers = 0, i;
>  	u32 nr_irqs;
>  	struct plic_priv *priv;
> +	struct plic_handler *handler;
> 
>  	priv = kzalloc(sizeof(*priv), GFP_KERNEL);
>  	if (!priv)
> @@ -310,7 +312,6 @@ static int __init plic_init(struct device_node 
> *node,
> 
>  	for (i = 0; i < nr_contexts; i++) {
>  		struct of_phandle_args parent;
> -		struct plic_handler *handler;
>  		irq_hw_number_t hwirq;
>  		int cpu, hartid;
> 
> @@ -364,9 +365,18 @@ static int __init plic_init(struct device_node 
> *node,
>  		nr_handlers++;
>  	}
> 
> -	cpuhp_setup_state(CPUHP_AP_IRQ_SIFIVE_PLIC_STARTING,
> +	/*
> +	 * We can have multiple PLIC instances so setup cpuhp state only
> +	 * when context handler for current/boot CPU is present.
> +	 */
> +	handler = this_cpu_ptr(&plic_handlers);
> +	if (handler->present && !plic_cpuhp_setup_done) {

If there is no context handler for the boot CPU, the system is doomed,
right? It isn't able to get any interrupt, and you don't register
the hotplug notifier that could allow secondary CPUs to boot.

So what is the point? It feels like you should just give up here.

Also, the boot CPU is always CPU 0. So checking that you only register
the hotplug notifier from CPU 0 should be enough.

> +		cpuhp_setup_state(CPUHP_AP_IRQ_SIFIVE_PLIC_STARTING,
>  				  "irqchip/sifive/plic:starting",
>  				  plic_starting_cpu, plic_dying_cpu);
> +		plic_cpuhp_setup_done = true;
> +	}
> +
>  	pr_info("mapped %d interrupts with %d handlers for %d contexts.\n",
>  		nr_irqs, nr_handlers, nr_contexts);
>  	set_handle_irq(plic_handle_irq);

Thanks,

         M.
-- 
Jazz is not dead. It just smells funny...

* Re: [PATCH 2/4] irqchip/sifive-plic: Improve boot prints for multiple PLIC instances
From: Marc Zyngier @ 2020-05-16 12:20 UTC (permalink / raw)
  To: Anup Patel
  Cc: Palmer Dabbelt, Paul Walmsley, Thomas Gleixner, Jason Cooper,
	Atish Patra, Alistair Francis, Anup Patel, linux-riscv,
	linux-kernel

On 2020-05-16 07:38, Anup Patel wrote:
> We improve PLIC banner to help distinguish multiple PLIC instances
> in boot time prints.
> 
> Signed-off-by: Anup Patel <anup.patel@wdc.com>
> ---
>  drivers/irqchip/irq-sifive-plic.c | 11 +++++++++--
>  1 file changed, 9 insertions(+), 2 deletions(-)
> 
> diff --git a/drivers/irqchip/irq-sifive-plic.c
> b/drivers/irqchip/irq-sifive-plic.c
> index 7dc23edb3267..2d3db927a551 100644
> --- a/drivers/irqchip/irq-sifive-plic.c
> +++ b/drivers/irqchip/irq-sifive-plic.c
> @@ -284,6 +284,11 @@ static int __init plic_init(struct device_node 
> *node,
>  	u32 nr_irqs;
>  	struct plic_priv *priv;
>  	struct plic_handler *handler;
> +	struct resource iores;
> +
> +	error = of_address_to_resource(node, 0, &iores);
> +	if (error)
> +		return error;
> 
>  	priv = kzalloc(sizeof(*priv), GFP_KERNEL);
>  	if (!priv)
> @@ -377,8 +382,10 @@ static int __init plic_init(struct device_node 
> *node,
>  		plic_cpuhp_setup_done = true;
>  	}
> 
> -	pr_info("mapped %d interrupts with %d handlers for %d contexts.\n",
> -		nr_irqs, nr_handlers, nr_contexts);
> +	pr_info("interrupt-controller at 0x%llx "
> +		"(interrupts=%d, contexts=%d, handlers=%d)\n",
> +		(unsigned long long)iores.start, nr_irqs,
> +		nr_contexts, nr_handlers);

Instead of displaying "interrupt controller at ...", why not use the
existing printk format for OF nodes? Something along the lines of

     pr_info("%pOF : mapped %d interrupts with %d handlers for %d contexts\n",
             node, nr_irqs, nr_handlers, nr_contexts);

>  	set_handle_irq(plic_handle_irq);
>  	return 0;

Thanks,

         M.
-- 
Jazz is not dead. It just smells funny...


* Re: [PATCH 3/4] irqchip/sifive-plic: Separate irq_chip for multiple PLIC instances
From: Marc Zyngier @ 2020-05-16 12:29 UTC (permalink / raw)
  To: Anup Patel
  Cc: Palmer Dabbelt, Paul Walmsley, Thomas Gleixner, Jason Cooper,
	Atish Patra, Alistair Francis, Anup Patel, linux-riscv,
	linux-kernel

On 2020-05-16 07:39, Anup Patel wrote:
> To distinguish interrupts from multiple PLIC instances, we use a
> per-PLIC irq_chip instance with a different name.
> 
> Signed-off-by: Anup Patel <anup.patel@wdc.com>
> ---
>  drivers/irqchip/irq-sifive-plic.c | 28 +++++++++++++++-------------
>  1 file changed, 15 insertions(+), 13 deletions(-)
> 
> diff --git a/drivers/irqchip/irq-sifive-plic.c b/drivers/irqchip/irq-sifive-plic.c
> index 2d3db927a551..e42fc082ad18 100644
> --- a/drivers/irqchip/irq-sifive-plic.c
> +++ b/drivers/irqchip/irq-sifive-plic.c
> @@ -60,6 +60,7 @@
>  #define	PLIC_ENABLE_THRESHOLD		0
> 
>  struct plic_priv {
> +	struct irq_chip chip;
>  	struct cpumask lmask;
>  	struct irq_domain *irqdomain;
>  	void __iomem *regs;
> @@ -76,6 +77,7 @@ struct plic_handler {
>  	void __iomem		*enable_base;
>  	struct plic_priv	*priv;
>  };
> +static unsigned int plic_count;
>  static bool plic_cpuhp_setup_done;
>  static DEFINE_PER_CPU(struct plic_handler, plic_handlers);
> 
> @@ -164,20 +166,12 @@ static void plic_irq_eoi(struct irq_data *d)
>  	writel(d->hwirq, handler->hart_base + CONTEXT_CLAIM);
>  }
> 
> -static struct irq_chip plic_chip = {
> -	.name		= "SiFive PLIC",
> -	.irq_mask	= plic_irq_mask,
> -	.irq_unmask	= plic_irq_unmask,
> -	.irq_eoi	= plic_irq_eoi,
> -#ifdef CONFIG_SMP
> -	.irq_set_affinity = plic_set_affinity,
> -#endif
> -};
> -
>  static int plic_irqdomain_map(struct irq_domain *d, unsigned int irq,
>  			      irq_hw_number_t hwirq)
>  {
> -	irq_domain_set_info(d, irq, hwirq, &plic_chip, d->host_data,
> +	struct plic_priv *priv = d->host_data;
> +
> +	irq_domain_set_info(d, irq, hwirq, &priv->chip, d->host_data,
>  			    handle_fasteoi_irq, NULL, NULL);
>  	irq_set_noprobe(irq);
>  	return 0;
> @@ -294,6 +288,14 @@ static int __init plic_init(struct device_node *node,
>  	if (!priv)
>  		return -ENOMEM;
> 
> +	priv->chip.name = kasprintf(GFP_KERNEL, "PLIC%d", plic_count++);
> +	priv->chip.irq_mask = plic_irq_mask,
> +	priv->chip.irq_unmask = plic_irq_unmask,
> +	priv->chip.irq_eoi = plic_irq_eoi,
> +#ifdef CONFIG_SMP
> +	priv->chip.irq_set_affinity = plic_set_affinity,
> +#endif
> +
>  	priv->regs = of_iomap(node, 0);
>  	if (WARN_ON(!priv->regs)) {
>  		error = -EIO;
> @@ -383,9 +385,9 @@ static int __init plic_init(struct device_node *node,
>  	}
> 
>  	pr_info("interrupt-controller at 0x%llx "
> -		"(interrupts=%d, contexts=%d, handlers=%d)\n",
> +		"(interrupts=%d, contexts=%d, handlers=%d) (%s)\n",
>  		(unsigned long long)iores.start, nr_irqs,
> -		nr_contexts, nr_handlers);
> +		nr_contexts, nr_handlers, priv->chip.name);
>  	set_handle_irq(plic_handle_irq);
>  	return 0;

I really dislike this patch for multiple reasons:

- Allocating a new struct irq_chip just for a string seems over the top,
   especially as all the *useful* stuff stays the same.

- Even if I hate it, /proc is API. I'm sure something, somewhere is
   parsing this. Changing the string is likely to confuse it.

- If you do this for debug purposes, then CONFIG_GENERIC_IRQ_DEBUGFS
   is the right way to look up the information.

- If, for reasons that are beyond me, you actually *need* this, then
   implementing irq_print_chip in your irq_chip structure is the way
   to go.

But frankly, I'd rather you drop this altogether.

Thanks,

         M.
-- 
Jazz is not dead. It just smells funny...


* Re: [PATCH 4/4] irqchip/sifive-plic: Set default irq affinity in plic_irqdomain_map()
  2020-05-16  6:39   ` Anup Patel
@ 2020-05-16 12:30     ` Marc Zyngier
  -1 siblings, 0 replies; 40+ messages in thread
From: Marc Zyngier @ 2020-05-16 12:30 UTC (permalink / raw)
  To: Anup Patel
  Cc: Palmer Dabbelt, Paul Walmsley, Thomas Gleixner, Jason Cooper,
	Atish Patra, Alistair Francis, Anup Patel, linux-riscv,
	linux-kernel

On 2020-05-16 07:39, Anup Patel wrote:
> For multiple PLIC instances, each PLIC can only target a subset of
> CPUs which is represented by "lmask" in the "struct plic_priv".
> 
> Currently, the default irq affinity for each PLIC interrupt is all
> online CPUs, which is an illegal value for the default irq affinity when
> we have multiple PLIC instances. To fix this, we now set "lmask" as the
> default irq affinity for each interrupt in plic_irqdomain_map().
> 
> Signed-off-by: Anup Patel <anup.patel@wdc.com>
> ---
>  drivers/irqchip/irq-sifive-plic.c | 1 +
>  1 file changed, 1 insertion(+)
> 
> diff --git a/drivers/irqchip/irq-sifive-plic.c b/drivers/irqchip/irq-sifive-plic.c
> index e42fc082ad18..9af5e2fd2574 100644
> --- a/drivers/irqchip/irq-sifive-plic.c
> +++ b/drivers/irqchip/irq-sifive-plic.c
> @@ -174,6 +174,7 @@ static int plic_irqdomain_map(struct irq_domain *d, unsigned int irq,
>  	irq_domain_set_info(d, irq, hwirq, &priv->chip, d->host_data,
>  			    handle_fasteoi_irq, NULL, NULL);
>  	irq_set_noprobe(irq);
> +	irq_set_affinity(irq, &priv->lmask);
>  	return 0;
>  }

Isn't that a fix? If so, please add a Fixes: tag, as well as a CC to
stable if you think it should be backported.
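For reference, the trailer block being asked for looks like the following (the hash and subject here are placeholders, not the real offending commit):

```
Fixes: 123456789abc ("irqchip/sifive-plic: placeholder subject of the commit being fixed")
Cc: stable@vger.kernel.org
```

The hash is the abbreviated (12-character) SHA-1 of the commit that introduced the bug, followed by that commit's subject line in quotes.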

Thanks,

         M.
-- 
Jazz is not dead. It just smells funny...


* RE: [PATCH 1/4] irqchip/sifive-plic: Setup cpuhp once after current handler is present
  2020-05-16 12:11     ` Marc Zyngier
@ 2020-05-16 12:52       ` Anup Patel
  -1 siblings, 0 replies; 40+ messages in thread
From: Anup Patel @ 2020-05-16 12:52 UTC (permalink / raw)
  To: Marc Zyngier
  Cc: Palmer Dabbelt, Paul Walmsley, Thomas Gleixner, Jason Cooper,
	Atish Patra, Alistair Francis, Anup Patel, linux-riscv,
	linux-kernel



> -----Original Message-----
> From: Marc Zyngier <maz@kernel.org>
> Sent: 16 May 2020 17:42
> To: Anup Patel <Anup.Patel@wdc.com>
> Cc: Palmer Dabbelt <palmer@dabbelt.com>; Paul Walmsley
> <paul.walmsley@sifive.com>; Thomas Gleixner <tglx@linutronix.de>; Jason
> Cooper <jason@lakedaemon.net>; Atish Patra <Atish.Patra@wdc.com>; Alistair
> Francis <Alistair.Francis@wdc.com>; Anup Patel <anup@brainfault.org>; linux-
> riscv@lists.infradead.org; linux-kernel@vger.kernel.org
> Subject: Re: [PATCH 1/4] irqchip/sifive-plic: Setup cpuhp once after current
> handler is present
> 
> Hi Anup,
> 
> On 2020-05-16 07:38, Anup Patel wrote:
> > For multiple PLIC instances, the plic_init() is called once for each
> > PLIC instance. Due to this we have two issues:
> > 1. cpuhp_setup_state() is called multiple times
> > 2. plic_starting_cpu() can crash for boot CPU if cpuhp_setup_state()
> >    is called before boot CPU PLIC handler is available.
> >
> > This patch fixes both above issues.
> >
> > Signed-off-by: Anup Patel <anup.patel@wdc.com>
> > ---
> >  drivers/irqchip/irq-sifive-plic.c | 14 ++++++++++++--
> >  1 file changed, 12 insertions(+), 2 deletions(-)
> >
> > diff --git a/drivers/irqchip/irq-sifive-plic.c b/drivers/irqchip/irq-sifive-plic.c
> > index 822e074c0600..7dc23edb3267 100644
> > --- a/drivers/irqchip/irq-sifive-plic.c
> > +++ b/drivers/irqchip/irq-sifive-plic.c
> > @@ -76,6 +76,7 @@ struct plic_handler {
> >  	void __iomem		*enable_base;
> >  	struct plic_priv	*priv;
> >  };
> > +static bool plic_cpuhp_setup_done;
> >  static DEFINE_PER_CPU(struct plic_handler, plic_handlers);
> >
> >  static inline void plic_toggle(struct plic_handler *handler,
> > @@ -282,6 +283,7 @@ static int __init plic_init(struct device_node *node,
> >  	int error = 0, nr_contexts, nr_handlers = 0, i;
> >  	u32 nr_irqs;
> >  	struct plic_priv *priv;
> > +	struct plic_handler *handler;
> >
> >  	priv = kzalloc(sizeof(*priv), GFP_KERNEL);
> >  	if (!priv)
> > @@ -310,7 +312,6 @@ static int __init plic_init(struct device_node *node,
> >
> >  	for (i = 0; i < nr_contexts; i++) {
> >  		struct of_phandle_args parent;
> > -		struct plic_handler *handler;
> >  		irq_hw_number_t hwirq;
> >  		int cpu, hartid;
> >
> > @@ -364,9 +365,18 @@ static int __init plic_init(struct device_node *node,
> >  		nr_handlers++;
> >  	}
> >
> > -	cpuhp_setup_state(CPUHP_AP_IRQ_SIFIVE_PLIC_STARTING,
> > +	/*
> > +	 * We can have multiple PLIC instances so setup cpuhp state only
> > +	 * when context handler for current/boot CPU is present.
> > +	 */
> > +	handler = this_cpu_ptr(&plic_handlers);
> > +	if (handler->present && !plic_cpuhp_setup_done) {
> 
> If there is no context handler for the boot CPU, the system is doomed, right? It
> isn't able to get any interrupt, and you don't register the hotplug notifier that
> could allow secondary CPUs to boot.
> 
> So what is the point? It feels like you should just give up here.
> 
> Also, the boot CPU is always CPU 0. So checking that you only register the
> hotplug notifier from CPU 0 should be enough.

The boot CPU is not fixed in RISC-V: the logical id of the boot CPU will always
be zero, but its physical id (or HART id) can be something totally different.

On RISC-V NUMA system, we will have a separate PLIC in each NUMA node.

Let's say we have a system with 2 NUMA nodes, each NUMA node having
4 CPUs (or 4 HARTs). In this case, the DTB passed to Linux will have two PLIC
DT nodes where each PLIC device targets only 4 CPUs (or 4 HARTs). The
plic_init() function will set up handlers for only 4 CPUs (or 4 HARTs). In other
words, plic_init() for "PLIC0" will set up handlers for HART ids 0 to 3, and
plic_init() for "PLIC1" will set up handlers for HART ids 4 to 7. Now, any CPU
can be the boot CPU, so it is possible that the CPU with HART id 4 is the boot
CPU; when plic_init() is first called for "PLIC0", the handler for HART id 4 is
not yet set up, because it will only be set up later when plic_init() is called
for "PLIC1". This causes plic_starting_cpu() to crash when plic_init() is
called for "PLIC0".

I hope the above example helps in understanding the issue.

I encountered this issue randomly when booting Linux on QEMU RISC-V
with multiple NUMA nodes.

Regards,
Anup

> 
> > +		cpuhp_setup_state(CPUHP_AP_IRQ_SIFIVE_PLIC_STARTING,
> >  				  "irqchip/sifive/plic:starting",
> >  				  plic_starting_cpu, plic_dying_cpu);
> > +		plic_cpuhp_setup_done = true;
> > +	}
> > +
> >  	pr_info("mapped %d interrupts with %d handlers for %d contexts.\n",
> >  		nr_irqs, nr_handlers, nr_contexts);
> >  	set_handle_irq(plic_handle_irq);
> 
> Thanks,
> 
>          M.
> --
> Jazz is not dead. It just smells funny...


* RE: [PATCH 2/4] irqchip/sifive-plic: Improve boot prints for multiple PLIC instances
  2020-05-16 12:20     ` Marc Zyngier
@ 2020-05-16 12:53       ` Anup Patel
  -1 siblings, 0 replies; 40+ messages in thread
From: Anup Patel @ 2020-05-16 12:53 UTC (permalink / raw)
  To: Marc Zyngier
  Cc: Palmer Dabbelt, Paul Walmsley, Thomas Gleixner, Jason Cooper,
	Atish Patra, Alistair Francis, Anup Patel, linux-riscv,
	linux-kernel



> -----Original Message-----
> From: Marc Zyngier <maz@kernel.org>
> Sent: 16 May 2020 17:50
> To: Anup Patel <Anup.Patel@wdc.com>
> Cc: Palmer Dabbelt <palmer@dabbelt.com>; Paul Walmsley
> <paul.walmsley@sifive.com>; Thomas Gleixner <tglx@linutronix.de>; Jason
> Cooper <jason@lakedaemon.net>; Atish Patra <Atish.Patra@wdc.com>; Alistair
> Francis <Alistair.Francis@wdc.com>; Anup Patel <anup@brainfault.org>; linux-
> riscv@lists.infradead.org; linux-kernel@vger.kernel.org
> Subject: Re: [PATCH 2/4] irqchip/sifive-plic: Improve boot prints for multiple
> PLIC instances
> 
> On 2020-05-16 07:38, Anup Patel wrote:
> > We improve the PLIC banner to help distinguish multiple PLIC instances
> > in boot-time prints.
> >
> > Signed-off-by: Anup Patel <anup.patel@wdc.com>
> > ---
> >  drivers/irqchip/irq-sifive-plic.c | 11 +++++++++--
> >  1 file changed, 9 insertions(+), 2 deletions(-)
> >
> > diff --git a/drivers/irqchip/irq-sifive-plic.c b/drivers/irqchip/irq-sifive-plic.c
> > index 7dc23edb3267..2d3db927a551 100644
> > --- a/drivers/irqchip/irq-sifive-plic.c
> > +++ b/drivers/irqchip/irq-sifive-plic.c
> > @@ -284,6 +284,11 @@ static int __init plic_init(struct device_node *node,
> >  	u32 nr_irqs;
> >  	struct plic_priv *priv;
> >  	struct plic_handler *handler;
> > +	struct resource iores;
> > +
> > +	error = of_address_to_resource(node, 0, &iores);
> > +	if (error)
> > +		return error;
> >
> >  	priv = kzalloc(sizeof(*priv), GFP_KERNEL);
> >  	if (!priv)
> > @@ -377,8 +382,10 @@ static int __init plic_init(struct device_node *node,
> >  		plic_cpuhp_setup_done = true;
> >  	}
> >
> > -	pr_info("mapped %d interrupts with %d handlers for %d contexts.\n",
> > -		nr_irqs, nr_handlers, nr_contexts);
> > +	pr_info("interrupt-controller at 0x%llx "
> > +		"(interrupts=%d, contexts=%d, handlers=%d)\n",
> > +		(unsigned long long)iores.start, nr_irqs,
> > +		nr_contexts, nr_handlers);
> 
> Instead of displaying "interrupt controller at ...", why not use the existing printk
> format for OF nodes? Something along the lines of
> 
>      pr_info("%pOF : mapped %d interrupts with %d handlers for %d contexts\n",
>              node, nr_irqs, nr_handlers, nr_contexts);

Sure, I will go with your suggestion and use the printk format for OF nodes.

Regards,
Anup


* RE: [PATCH 4/4] irqchip/sifive-plic: Set default irq affinity in plic_irqdomain_map()
  2020-05-16 12:30     ` Marc Zyngier
@ 2020-05-16 12:53       ` Anup Patel
  -1 siblings, 0 replies; 40+ messages in thread
From: Anup Patel @ 2020-05-16 12:53 UTC (permalink / raw)
  To: Marc Zyngier
  Cc: Palmer Dabbelt, Paul Walmsley, Thomas Gleixner, Jason Cooper,
	Atish Patra, Alistair Francis, Anup Patel, linux-riscv,
	linux-kernel



> -----Original Message-----
> From: Marc Zyngier <maz@kernel.org>
> Sent: 16 May 2020 18:01
> To: Anup Patel <Anup.Patel@wdc.com>
> Cc: Palmer Dabbelt <palmer@dabbelt.com>; Paul Walmsley
> <paul.walmsley@sifive.com>; Thomas Gleixner <tglx@linutronix.de>; Jason
> Cooper <jason@lakedaemon.net>; Atish Patra <Atish.Patra@wdc.com>; Alistair
> Francis <Alistair.Francis@wdc.com>; Anup Patel <anup@brainfault.org>; linux-
> riscv@lists.infradead.org; linux-kernel@vger.kernel.org
> Subject: Re: [PATCH 4/4] irqchip/sifive-plic: Set default irq affinity in
> plic_irqdomain_map()
> 
> On 2020-05-16 07:39, Anup Patel wrote:
> > For multiple PLIC instances, each PLIC can only target a subset of
> > CPUs which is represented by "lmask" in the "struct plic_priv".
> >
> > Currently, the default irq affinity for each PLIC interrupt is all
> > online CPUs, which is an illegal value for the default irq affinity when
> > we have multiple PLIC instances. To fix this, we now set "lmask" as the
> > default irq affinity for each interrupt in plic_irqdomain_map().
> >
> > Signed-off-by: Anup Patel <anup.patel@wdc.com>
> > ---
> >  drivers/irqchip/irq-sifive-plic.c | 1 +
> >  1 file changed, 1 insertion(+)
> >
> > diff --git a/drivers/irqchip/irq-sifive-plic.c b/drivers/irqchip/irq-sifive-plic.c
> > index e42fc082ad18..9af5e2fd2574 100644
> > --- a/drivers/irqchip/irq-sifive-plic.c
> > +++ b/drivers/irqchip/irq-sifive-plic.c
> > @@ -174,6 +174,7 @@ static int plic_irqdomain_map(struct irq_domain *d, unsigned int irq,
> >  	irq_domain_set_info(d, irq, hwirq, &priv->chip, d->host_data,
> >  			    handle_fasteoi_irq, NULL, NULL);
> >  	irq_set_noprobe(irq);
> > +	irq_set_affinity(irq, &priv->lmask);
> >  	return 0;
> >  }
> 
> Isn't that a fix? If so, please add a Fixes: tag, as well as a CC to stable if you
> think it should be backported.

This is certainly a fix. I will add a Fixes: tag as you suggested.

Regards,
Anup


* RE: [PATCH 3/4] irqchip/sifive-plic: Separate irq_chip for multiple PLIC instances
  2020-05-16 12:29     ` Marc Zyngier
@ 2020-05-16 13:01       ` Anup Patel
  -1 siblings, 0 replies; 40+ messages in thread
From: Anup Patel @ 2020-05-16 13:01 UTC (permalink / raw)
  To: Marc Zyngier
  Cc: Palmer Dabbelt, Paul Walmsley, Thomas Gleixner, Jason Cooper,
	Atish Patra, Alistair Francis, Anup Patel, linux-riscv,
	linux-kernel



> -----Original Message-----
> From: Marc Zyngier <maz@kernel.org>
> Sent: 16 May 2020 17:59
> To: Anup Patel <Anup.Patel@wdc.com>
> Cc: Palmer Dabbelt <palmer@dabbelt.com>; Paul Walmsley
> <paul.walmsley@sifive.com>; Thomas Gleixner <tglx@linutronix.de>; Jason
> Cooper <jason@lakedaemon.net>; Atish Patra <Atish.Patra@wdc.com>; Alistair
> Francis <Alistair.Francis@wdc.com>; Anup Patel <anup@brainfault.org>; linux-
> riscv@lists.infradead.org; linux-kernel@vger.kernel.org
> Subject: Re: [PATCH 3/4] irqchip/sifive-plic: Separate irq_chip for multiple PLIC
> instances
> 
> On 2020-05-16 07:39, Anup Patel wrote:
> > To distinguish interrupts from multiple PLIC instances, we use a
> > per-PLIC irq_chip instance with a different name.
> >
> > Signed-off-by: Anup Patel <anup.patel@wdc.com>
> > ---
> >  drivers/irqchip/irq-sifive-plic.c | 28 +++++++++++++++-------------
> >  1 file changed, 15 insertions(+), 13 deletions(-)
> >
> > diff --git a/drivers/irqchip/irq-sifive-plic.c
> > b/drivers/irqchip/irq-sifive-plic.c
> > index 2d3db927a551..e42fc082ad18 100644
> > --- a/drivers/irqchip/irq-sifive-plic.c
> > +++ b/drivers/irqchip/irq-sifive-plic.c
> > @@ -60,6 +60,7 @@
> >  #define	PLIC_ENABLE_THRESHOLD		0
> >
> >  struct plic_priv {
> > +	struct irq_chip chip;
> >  	struct cpumask lmask;
> >  	struct irq_domain *irqdomain;
> >  	void __iomem *regs;
> > @@ -76,6 +77,7 @@ struct plic_handler {
> >  	void __iomem		*enable_base;
> >  	struct plic_priv	*priv;
> >  };
> > +static unsigned int plic_count;
> >  static bool plic_cpuhp_setup_done;
> >  static DEFINE_PER_CPU(struct plic_handler, plic_handlers);
> >
> > @@ -164,20 +166,12 @@ static void plic_irq_eoi(struct irq_data *d)
> >  	writel(d->hwirq, handler->hart_base + CONTEXT_CLAIM);
> >  }
> >
> > -static struct irq_chip plic_chip = {
> > -	.name		= "SiFive PLIC",
> > -	.irq_mask	= plic_irq_mask,
> > -	.irq_unmask	= plic_irq_unmask,
> > -	.irq_eoi	= plic_irq_eoi,
> > -#ifdef CONFIG_SMP
> > -	.irq_set_affinity = plic_set_affinity,
> > -#endif
> > -};
> > -
> >  static int plic_irqdomain_map(struct irq_domain *d, unsigned int irq,
> >  			      irq_hw_number_t hwirq)
> >  {
> > -	irq_domain_set_info(d, irq, hwirq, &plic_chip, d->host_data,
> > +	struct plic_priv *priv = d->host_data;
> > +
> > +	irq_domain_set_info(d, irq, hwirq, &priv->chip, d->host_data,
> >  			    handle_fasteoi_irq, NULL, NULL);
> >  	irq_set_noprobe(irq);
> >  	return 0;
> > @@ -294,6 +288,14 @@ static int __init plic_init(struct device_node
> > *node,
> >  	if (!priv)
> >  		return -ENOMEM;
> >
> > +	priv->chip.name = kasprintf(GFP_KERNEL, "PLIC%d", plic_count++);
> > +	priv->chip.irq_mask = plic_irq_mask,
> > +	priv->chip.irq_unmask = plic_irq_unmask,
> > +	priv->chip.irq_eoi = plic_irq_eoi,
> > +#ifdef CONFIG_SMP
> > +	priv->chip.irq_set_affinity = plic_set_affinity,
> > +#endif
> > +
> >  	priv->regs = of_iomap(node, 0);
> >  	if (WARN_ON(!priv->regs)) {
> >  		error = -EIO;
> > @@ -383,9 +385,9 @@ static int __init plic_init(struct device_node
> > *node,
> >  	}
> >
> >  	pr_info("interrupt-controller at 0x%llx "
> > -		"(interrupts=%d, contexts=%d, handlers=%d)\n",
> > +		"(interrupts=%d, contexts=%d, handlers=%d) (%s)\n",
> >  		(unsigned long long)iores.start, nr_irqs,
> > -		nr_contexts, nr_handlers);
> > +		nr_contexts, nr_handlers, priv->chip.name);
> >  	set_handle_irq(plic_handle_irq);
> >  	return 0;
> 
> I really dislike this patch for multiple reasons:
> 
> - Allocating a new struct irq_chip just for a string seems over the top,
>    especially as all the *useful* stuff stays the same.
> 
> - Even if I hate it, /proc is API. I'm sure something, somewhere is
>    parsing this. Changing the string is likely to confuse it.

AFAIK, we don't have scripts in the RISC-V world that depend on
/proc/interrupts content. Maybe such scripts will show up in the future.

For a system with multiple PLICs, we see the same "SiFive PLIC"
string for all PLIC interrupts in "cat /proc/interrupts". I am trying to
assign a different string based on the PLIC instance. This is similar to
what the GICv2 driver does (e.g. GIC-0, GIC-1, ... in /proc/interrupts).

Is there a better way to do this?

> 
> - If you do this for debug purposes, then CONFIG_GENERIC_IRQ_DEBUGFS
>    is the right way to look up the information.
> 
> - If, for reasons that are beyond me, you actually *need* this, then
>    implementing irq_print_chip in your irq_chip structure is the way
>    to go.
> 
> But frankly, I'd rather you drop this altogether.

I just want to differentiate which interrupt belongs to which PLIC
instance in /proc/interrupts. I can take a different approach if you
suggest one.

Regards,
Anup

^ permalink raw reply	[flat|nested] 40+ messages in thread


* Re: [PATCH 3/4] irqchip/sifive-plic: Separate irq_chip for multiple PLIC instances
  2020-05-16 13:01       ` Anup Patel
@ 2020-05-16 13:16         ` Marc Zyngier
  -1 siblings, 0 replies; 40+ messages in thread
From: Marc Zyngier @ 2020-05-16 13:16 UTC (permalink / raw)
  To: Anup Patel
  Cc: Palmer Dabbelt, Paul Walmsley, Thomas Gleixner, Jason Cooper,
	Atish Patra, Alistair Francis, Anup Patel, linux-riscv,
	linux-kernel

On 2020-05-16 14:01, Anup Patel wrote:
>> -----Original Message-----
>> From: Marc Zyngier <maz@kernel.org>
>> Sent: 16 May 2020 17:59
>> To: Anup Patel <Anup.Patel@wdc.com>
>> Cc: Palmer Dabbelt <palmer@dabbelt.com>; Paul Walmsley
>> <paul.walmsley@sifive.com>; Thomas Gleixner <tglx@linutronix.de>; 
>> Jason
>> Cooper <jason@lakedaemon.net>; Atish Patra <Atish.Patra@wdc.com>; 
>> Alistair
>> Francis <Alistair.Francis@wdc.com>; Anup Patel <anup@brainfault.org>; 
>> linux-
>> riscv@lists.infradead.org; linux-kernel@vger.kernel.org
>> Subject: Re: [PATCH 3/4] irqchip/sifive-plic: Separate irq_chip for 
>> multiple PLIC
>> instances
>> 
>> On 2020-05-16 07:39, Anup Patel wrote:
>> > To distinguish interrupts from multiple PLIC instances, we use a
>> > per-PLIC irq_chip instance with a different name.
>> >
>> > Signed-off-by: Anup Patel <anup.patel@wdc.com>
>> > ---
>> >  drivers/irqchip/irq-sifive-plic.c | 28 +++++++++++++++-------------
>> >  1 file changed, 15 insertions(+), 13 deletions(-)
>> >
>> > diff --git a/drivers/irqchip/irq-sifive-plic.c
>> > b/drivers/irqchip/irq-sifive-plic.c
>> > index 2d3db927a551..e42fc082ad18 100644
>> > --- a/drivers/irqchip/irq-sifive-plic.c
>> > +++ b/drivers/irqchip/irq-sifive-plic.c
>> > @@ -60,6 +60,7 @@
>> >  #define	PLIC_ENABLE_THRESHOLD		0
>> >
>> >  struct plic_priv {
>> > +	struct irq_chip chip;
>> >  	struct cpumask lmask;
>> >  	struct irq_domain *irqdomain;
>> >  	void __iomem *regs;
>> > @@ -76,6 +77,7 @@ struct plic_handler {
>> >  	void __iomem		*enable_base;
>> >  	struct plic_priv	*priv;
>> >  };
>> > +static unsigned int plic_count;
>> >  static bool plic_cpuhp_setup_done;
>> >  static DEFINE_PER_CPU(struct plic_handler, plic_handlers);
>> >
>> > @@ -164,20 +166,12 @@ static void plic_irq_eoi(struct irq_data *d)
>> >  	writel(d->hwirq, handler->hart_base + CONTEXT_CLAIM);
>> >  }
>> >
>> > -static struct irq_chip plic_chip = {
>> > -	.name		= "SiFive PLIC",
>> > -	.irq_mask	= plic_irq_mask,
>> > -	.irq_unmask	= plic_irq_unmask,
>> > -	.irq_eoi	= plic_irq_eoi,
>> > -#ifdef CONFIG_SMP
>> > -	.irq_set_affinity = plic_set_affinity,
>> > -#endif
>> > -};
>> > -
>> >  static int plic_irqdomain_map(struct irq_domain *d, unsigned int irq,
>> >  			      irq_hw_number_t hwirq)
>> >  {
>> > -	irq_domain_set_info(d, irq, hwirq, &plic_chip, d->host_data,
>> > +	struct plic_priv *priv = d->host_data;
>> > +
>> > +	irq_domain_set_info(d, irq, hwirq, &priv->chip, d->host_data,
>> >  			    handle_fasteoi_irq, NULL, NULL);
>> >  	irq_set_noprobe(irq);
>> >  	return 0;
>> > @@ -294,6 +288,14 @@ static int __init plic_init(struct device_node
>> > *node,
>> >  	if (!priv)
>> >  		return -ENOMEM;
>> >
>> > +	priv->chip.name = kasprintf(GFP_KERNEL, "PLIC%d", plic_count++);
>> > +	priv->chip.irq_mask = plic_irq_mask,
>> > +	priv->chip.irq_unmask = plic_irq_unmask,
>> > +	priv->chip.irq_eoi = plic_irq_eoi,
>> > +#ifdef CONFIG_SMP
>> > +	priv->chip.irq_set_affinity = plic_set_affinity,
>> > +#endif
>> > +
>> >  	priv->regs = of_iomap(node, 0);
>> >  	if (WARN_ON(!priv->regs)) {
>> >  		error = -EIO;
>> > @@ -383,9 +385,9 @@ static int __init plic_init(struct device_node
>> > *node,
>> >  	}
>> >
>> >  	pr_info("interrupt-controller at 0x%llx "
>> > -		"(interrupts=%d, contexts=%d, handlers=%d)\n",
>> > +		"(interrupts=%d, contexts=%d, handlers=%d) (%s)\n",
>> >  		(unsigned long long)iores.start, nr_irqs,
>> > -		nr_contexts, nr_handlers);
>> > +		nr_contexts, nr_handlers, priv->chip.name);
>> >  	set_handle_irq(plic_handle_irq);
>> >  	return 0;
>> 
>> I really dislike this patch for multiple reasons:
>> 
>> - Allocating a new struct irq_chip just for a string seems over the
>> top,
>>    especially as all the *useful* stuff stays the same.
>> 
>> - Even if I hate it, /proc is API. I'm sure something, somewhere is
>>    parsing this. Changing the string is likely to confuse it.
> 
> AFAIK, we don't have scripts in the RISC-V world that depend on
> /proc/interrupts content. Maybe such scripts will show up in the future.

How do you know that? Do you keep an exhaustive repository of all
the possible parsers of /proc/interrupts (rhetorical question)?

> For a system with multiple PLICs, we see the same "SiFive PLIC"
> string for all PLIC interrupts in "cat /proc/interrupts". I am trying
> to assign a different string based on the PLIC instance. This is
> similar to what the GICv2 driver does (e.g. GIC-0, GIC-1, ... in
> /proc/interrupts).

Which was a *very* bad idea in the first place, and I wish I could get
rid of it. I cannot, for the reason outlined above (it's ABI).
Furthermore, in this case, the GICs are different (they are cascaded).
In your case, they have the same position in the interrupt hierarchy.

> Is there a better way to do this?
> 
>> 
>> - If you do this for debug purposes, then CONFIG_GENERIC_IRQ_DEBUGFS
>>    is the right way to look up the information.
>> 
>> - If, for reasons that are beyond me, you actually *need* this, then
>>    implementing irq_print_chip in your irq_chip structure is the way
>>    to go.
>> 
>> But frankly, I'd rather you drop this altogether.
> 
> I just want to differentiate which interrupt belongs to which PLIC
> instance in /proc/interrupts. I can take a different approach if you
> suggest one.

I *have* given you a way to implement that in a better way. But again,
I'd rather you *don't* do it for the reason I have outlined above.

Thanks,

         M.
-- 
Jazz is not dead. It just smells funny...

^ permalink raw reply	[flat|nested] 40+ messages in thread


* Re: [PATCH 1/4] irqchip/sifive-plic: Setup cpuhp once after current handler is present
  2020-05-16 12:52       ` Anup Patel
@ 2020-05-16 13:30         ` Marc Zyngier
  -1 siblings, 0 replies; 40+ messages in thread
From: Marc Zyngier @ 2020-05-16 13:30 UTC (permalink / raw)
  To: Anup Patel
  Cc: Palmer Dabbelt, Paul Walmsley, Thomas Gleixner, Jason Cooper,
	Atish Patra, Alistair Francis, Anup Patel, linux-riscv,
	linux-kernel

On 2020-05-16 13:52, Anup Patel wrote:
>> -----Original Message-----
>> From: Marc Zyngier <maz@kernel.org>
>> Sent: 16 May 2020 17:42
>> To: Anup Patel <Anup.Patel@wdc.com>
>> Cc: Palmer Dabbelt <palmer@dabbelt.com>; Paul Walmsley
>> <paul.walmsley@sifive.com>; Thomas Gleixner <tglx@linutronix.de>; 
>> Jason
>> Cooper <jason@lakedaemon.net>; Atish Patra <Atish.Patra@wdc.com>; 
>> Alistair
>> Francis <Alistair.Francis@wdc.com>; Anup Patel <anup@brainfault.org>; 
>> linux-
>> riscv@lists.infradead.org; linux-kernel@vger.kernel.org
>> Subject: Re: [PATCH 1/4] irqchip/sifive-plic: Setup cpuhp once after 
>> current
>> handler is present
>> 
>> Hi Anup,
>> 
>> On 2020-05-16 07:38, Anup Patel wrote:
>> > For multiple PLIC instances, the plic_init() is called once for each
>> > PLIC instance. Due to this we have two issues:
>> > 1. cpuhp_setup_state() is called multiple times
>> > 2. plic_starting_cpu() can crash for boot CPU if cpuhp_setup_state()
>> >    is called before boot CPU PLIC handler is available.
>> >
>> > This patch fixes both above issues.
>> >
>> > Signed-off-by: Anup Patel <anup.patel@wdc.com>
>> > ---
>> >  drivers/irqchip/irq-sifive-plic.c | 14 ++++++++++++--
>> >  1 file changed, 12 insertions(+), 2 deletions(-)
>> >
>> > diff --git a/drivers/irqchip/irq-sifive-plic.c
>> > b/drivers/irqchip/irq-sifive-plic.c
>> > index 822e074c0600..7dc23edb3267 100644
>> > --- a/drivers/irqchip/irq-sifive-plic.c
>> > +++ b/drivers/irqchip/irq-sifive-plic.c
>> > @@ -76,6 +76,7 @@ struct plic_handler {
>> >  	void __iomem		*enable_base;
>> >  	struct plic_priv	*priv;
>> >  };
>> > +static bool plic_cpuhp_setup_done;
>> >  static DEFINE_PER_CPU(struct plic_handler, plic_handlers);
>> >
>> >  static inline void plic_toggle(struct plic_handler *handler, @@
>> > -282,6 +283,7 @@ static int __init plic_init(struct device_node *node,
>> >  	int error = 0, nr_contexts, nr_handlers = 0, i;
>> >  	u32 nr_irqs;
>> >  	struct plic_priv *priv;
>> > +	struct plic_handler *handler;
>> >
>> >  	priv = kzalloc(sizeof(*priv), GFP_KERNEL);
>> >  	if (!priv)
>> > @@ -310,7 +312,6 @@ static int __init plic_init(struct device_node
>> > *node,
>> >
>> >  	for (i = 0; i < nr_contexts; i++) {
>> >  		struct of_phandle_args parent;
>> > -		struct plic_handler *handler;
>> >  		irq_hw_number_t hwirq;
>> >  		int cpu, hartid;
>> >
>> > @@ -364,9 +365,18 @@ static int __init plic_init(struct device_node
>> > *node,
>> >  		nr_handlers++;
>> >  	}
>> >
>> > -	cpuhp_setup_state(CPUHP_AP_IRQ_SIFIVE_PLIC_STARTING,
>> > +	/*
>> > +	 * We can have multiple PLIC instances so setup cpuhp state only
>> > +	 * when context handler for current/boot CPU is present.
>> > +	 */
>> > +	handler = this_cpu_ptr(&plic_handlers);
>> > +	if (handler->present && !plic_cpuhp_setup_done) {
>> 
>> If there is no context handler for the boot CPU, the system is doomed, 
>> right? It
>> isn't able to get any interrupt, and you don't register the hotplug 
>> notifier that
>> could allow secondary CPUs to boot.
>> 
>> So what is the point? It feels like you should just give up here.
>> 
>> Also, the boot CPU is always CPU 0. So checking that you only register 
>> the
>> hotplug notifier from CPU 0 should be enough.
> 
> The boot CPU is not fixed in RISC-V; the logical id of the boot CPU
> will always be zero, but the physical id (or HART id) can be
> something totally different.

So on riscv, smp_processor_id() can return a non-zero value on the
boot CPU? Interesting... :-/

> 
> On RISC-V NUMA system, we will have a separate PLIC in each NUMA node.
> 
> Let's say we have a system with 2 NUMA nodes, each NUMA node having
> 4 CPUs (or 4 HARTs). In this case, the DTB passed to Linux will have
> two PLIC DT nodes where each PLIC device targets only 4 CPUs (or 4
> HARTs). The plic_init() function will set up handlers for only 4 CPUs
> (or 4 HARTs). In other words, plic_init() for "PLIC0" will set up
> handlers for HART ids 0 to 3 and plic_init() for "PLIC1" will set up
> handlers for HART ids 4 to 7. Now, any CPU can be the boot CPU, so it
> is possible that the CPU with HART id 4 is the boot CPU, and when
> plic_init() is first called for "PLIC0" the handler for HART id 4 is
> not set up because it will be set up later when plic_init() is called
> for "PLIC1". This causes plic_starting_cpu() to crash when plic_init()
> is called for "PLIC0".
> 
> I hope the above example helps in understanding the issue.

It does, thanks. This pseudo NUMA thing really is a terrible hack...

> 
> I encounter this issue randomly when booting Linux on QEMU RISC-V
> with multiple NUMA nodes.

Then why don't you defer the probing of the PLIC you can't initialize
from this CPU? If you're on CPU4-7, only initialize the PLIC that
matters to you, and not the others. It would certainly make a lot
more sense, and be more robust.

         M.
-- 
Jazz is not dead. It just smells funny...

^ permalink raw reply	[flat|nested] 40+ messages in thread


* RE: [PATCH 1/4] irqchip/sifive-plic: Setup cpuhp once after current handler is present
  2020-05-16 13:30         ` Marc Zyngier
@ 2020-05-16 16:28           ` Anup Patel
  -1 siblings, 0 replies; 40+ messages in thread
From: Anup Patel @ 2020-05-16 16:28 UTC (permalink / raw)
  To: Marc Zyngier
  Cc: Palmer Dabbelt, Paul Walmsley, Thomas Gleixner, Jason Cooper,
	Atish Patra, Alistair Francis, Anup Patel, linux-riscv,
	linux-kernel



> -----Original Message-----
> From: Marc Zyngier <maz@kernel.org>
> Sent: 16 May 2020 19:01
> To: Anup Patel <Anup.Patel@wdc.com>
> Cc: Palmer Dabbelt <palmer@dabbelt.com>; Paul Walmsley
> <paul.walmsley@sifive.com>; Thomas Gleixner <tglx@linutronix.de>; Jason
> Cooper <jason@lakedaemon.net>; Atish Patra <Atish.Patra@wdc.com>; Alistair
> Francis <Alistair.Francis@wdc.com>; Anup Patel <anup@brainfault.org>; linux-
> riscv@lists.infradead.org; linux-kernel@vger.kernel.org
> Subject: Re: [PATCH 1/4] irqchip/sifive-plic: Setup cpuhp once after current
> handler is present
> 
> On 2020-05-16 13:52, Anup Patel wrote:
> >> -----Original Message-----
> >> From: Marc Zyngier <maz@kernel.org>
> >> Sent: 16 May 2020 17:42
> >> To: Anup Patel <Anup.Patel@wdc.com>
> >> Cc: Palmer Dabbelt <palmer@dabbelt.com>; Paul Walmsley
> >> <paul.walmsley@sifive.com>; Thomas Gleixner <tglx@linutronix.de>;
> >> Jason Cooper <jason@lakedaemon.net>; Atish Patra
> >> <Atish.Patra@wdc.com>; Alistair Francis <Alistair.Francis@wdc.com>;
> >> Anup Patel <anup@brainfault.org>;
> >> linux-
> >> riscv@lists.infradead.org; linux-kernel@vger.kernel.org
> >> Subject: Re: [PATCH 1/4] irqchip/sifive-plic: Setup cpuhp once after
> >> current handler is present
> >>
> >> Hi Anup,
> >>
> >> On 2020-05-16 07:38, Anup Patel wrote:
> >> > For multiple PLIC instances, the plic_init() is called once for
> >> > each PLIC instance. Due to this we have two issues:
> >> > 1. cpuhp_setup_state() is called multiple times 2.
> >> > plic_starting_cpu() can crash for boot CPU if cpuhp_setup_state()
> >> >    is called before boot CPU PLIC handler is available.
> >> >
> >> > This patch fixes both above issues.
> >> >
> >> > Signed-off-by: Anup Patel <anup.patel@wdc.com>
> >> > ---
> >> >  drivers/irqchip/irq-sifive-plic.c | 14 ++++++++++++--
> >> >  1 file changed, 12 insertions(+), 2 deletions(-)
> >> >
> >> > diff --git a/drivers/irqchip/irq-sifive-plic.c
> >> > b/drivers/irqchip/irq-sifive-plic.c
> >> > index 822e074c0600..7dc23edb3267 100644
> >> > --- a/drivers/irqchip/irq-sifive-plic.c
> >> > +++ b/drivers/irqchip/irq-sifive-plic.c
> >> > @@ -76,6 +76,7 @@ struct plic_handler {
> >> >  	void __iomem		*enable_base;
> >> >  	struct plic_priv	*priv;
> >> >  };
> >> > +static bool plic_cpuhp_setup_done;
> >> >  static DEFINE_PER_CPU(struct plic_handler, plic_handlers);
> >> >
> >> >  static inline void plic_toggle(struct plic_handler *handler, @@
> >> > -282,6 +283,7 @@ static int __init plic_init(struct device_node *node,
> >> >  	int error = 0, nr_contexts, nr_handlers = 0, i;
> >> >  	u32 nr_irqs;
> >> >  	struct plic_priv *priv;
> >> > +	struct plic_handler *handler;
> >> >
> >> >  	priv = kzalloc(sizeof(*priv), GFP_KERNEL);
> >> >  	if (!priv)
> >> > @@ -310,7 +312,6 @@ static int __init plic_init(struct device_node
> >> > *node,
> >> >
> >> >  	for (i = 0; i < nr_contexts; i++) {
> >> >  		struct of_phandle_args parent;
> >> > -		struct plic_handler *handler;
> >> >  		irq_hw_number_t hwirq;
> >> >  		int cpu, hartid;
> >> >
> >> > @@ -364,9 +365,18 @@ static int __init plic_init(struct device_node
> >> > *node,
> >> >  		nr_handlers++;
> >> >  	}
> >> >
> >> > -	cpuhp_setup_state(CPUHP_AP_IRQ_SIFIVE_PLIC_STARTING,
> >> > +	/*
> >> > +	 * We can have multiple PLIC instances so setup cpuhp state only
> >> > +	 * when context handler for current/boot CPU is present.
> >> > +	 */
> >> > +	handler = this_cpu_ptr(&plic_handlers);
> >> > +	if (handler->present && !plic_cpuhp_setup_done) {
> >>
> >> If there is no context handler for the boot CPU, the system is
> >> doomed, right? It isn't able to get any interrupt, and you don't
> >> register the hotplug notifier that could allow secondary CPUs to
> >> boot.
> >>
> >> So what is the point? It feels like you should just give up here.
> >>
> >> Also, the boot CPU is always CPU 0. So checking that you only
> >> register the hotplug notifier from CPU 0 should be enough.
> >
> > The boot CPU is not fixed in RISC-V, the logical id of the boot CPU
> > will always be zero but physical id (or HART id) can be something
> > totally different.
> 
> So on riscv, smp_processor_id() can return a non-zero value on the the boot
> CPU? Interesting... :-/

On RISC-V, smp_processor_id() returns the logical id (which is the order
in which CPUs are brought up).

We have a special function, cpuid_to_hartid_map() in asm/smp.h, which
helps us convert a logical id to a HART id.

The HART id in the RISC-V world is analogous to the MPIDR in the ARM world.

> 
> >
> > On RISC-V NUMA system, we will have a separate PLIC in each NUMA node.
> >
> > Let's say we have a system with 2 NUMA nodes, each NUMA node having
> > 4 CPUs (or 4 HARTs).  In this case, the DTB passed to Linux will have
> > two PLIC DT nodes where each PLIC device targets only 4 CPUs (or 4
> > HARTs). The
> > plic_init() functions will setup handlers for only 4 CPUs (or 4 HARTs).
> > In other
> > words, plic_init() for "PLIC0" will setup handler for HART id 0 to 3
> > and
> > plic_init() for "PLIC1" will setup handler for HART id 4 to 7. Now,
> > any CPU can be boot CPU so it is possible that CPU with HART id 4 is
> > boot CPU and when plic_init() is first called for "PLIC0" the handler
> > for HART id 4 is not setup because it will be setup later when
> > plic_init() is called for "PLIC1".
> > This cause plic_starting_cpu() to crash when plic_init() is called for
> > "PLIC0".
> >
> > I hope above example helps understanding the issue.
> 
> It does, thanks. This pseudo NUMA thing really is a terrible hack...
> 
> >
> > I encounter this issue randomly when booting Linux on QEMU RISC-V with
> > multiple NUMA nodes.
> 
> Then why don't you defer the probing of the PLIC you can't initialize from this
> CPU? If you're on CPU4-7, only initialize the PLIC that matters to you, and not
> the the others. It would certainly make a lot more sense, and be more robust.

Okay, I will try the -EPROBE_DEFER approach for v2.

Thanks,
Anup

^ permalink raw reply	[flat|nested] 40+ messages in thread

* RE: [PATCH 3/4] irqchip/sifive-plic: Separate irq_chip for muiltiple PLIC instances
  2020-05-16 13:16         ` Marc Zyngier
@ 2020-05-16 16:38           ` Anup Patel
  -1 siblings, 0 replies; 40+ messages in thread
From: Anup Patel @ 2020-05-16 16:38 UTC (permalink / raw)
  To: Marc Zyngier
  Cc: Palmer Dabbelt, Paul Walmsley, Thomas Gleixner, Jason Cooper,
	Atish Patra, Alistair Francis, Anup Patel, linux-riscv,
	linux-kernel



> -----Original Message-----
> From: Marc Zyngier <maz@kernel.org>
> Sent: 16 May 2020 18:46
> To: Anup Patel <Anup.Patel@wdc.com>
> Cc: Palmer Dabbelt <palmer@dabbelt.com>; Paul Walmsley
> <paul.walmsley@sifive.com>; Thomas Gleixner <tglx@linutronix.de>; Jason
> Cooper <jason@lakedaemon.net>; Atish Patra <Atish.Patra@wdc.com>; Alistair
> Francis <Alistair.Francis@wdc.com>; Anup Patel <anup@brainfault.org>; linux-
> riscv@lists.infradead.org; linux-kernel@vger.kernel.org
> Subject: Re: [PATCH 3/4] irqchip/sifive-plic: Separate irq_chip for muiltiple PLIC
> instances
> 
> On 2020-05-16 14:01, Anup Patel wrote:
> >> -----Original Message-----
> >> From: Marc Zyngier <maz@kernel.org>
> >> Sent: 16 May 2020 17:59
> >> To: Anup Patel <Anup.Patel@wdc.com>
> >> Cc: Palmer Dabbelt <palmer@dabbelt.com>; Paul Walmsley
> >> <paul.walmsley@sifive.com>; Thomas Gleixner <tglx@linutronix.de>;
> >> Jason Cooper <jason@lakedaemon.net>; Atish Patra
> >> <Atish.Patra@wdc.com>; Alistair Francis <Alistair.Francis@wdc.com>;
> >> Anup Patel <anup@brainfault.org>;
> >> linux-
> >> riscv@lists.infradead.org; linux-kernel@vger.kernel.org
> >> Subject: Re: [PATCH 3/4] irqchip/sifive-plic: Separate irq_chip for
> >> muiltiple PLIC instances
> >>
> >> On 2020-05-16 07:39, Anup Patel wrote:
> >> > To distinguish interrupts from multiple PLIC instances, we use a
> >> > per-PLIC irq_chip instance with a different name.
> >> >
> >> > Signed-off-by: Anup Patel <anup.patel@wdc.com>
> >> > ---
> >> >  drivers/irqchip/irq-sifive-plic.c | 28
> >> > +++++++++++++++-------------
> >> >  1 file changed, 15 insertions(+), 13 deletions(-)
> >> >
> >> > diff --git a/drivers/irqchip/irq-sifive-plic.c
> >> > b/drivers/irqchip/irq-sifive-plic.c
> >> > index 2d3db927a551..e42fc082ad18 100644
> >> > --- a/drivers/irqchip/irq-sifive-plic.c
> >> > +++ b/drivers/irqchip/irq-sifive-plic.c
> >> > @@ -60,6 +60,7 @@
> >> >  #define	PLIC_ENABLE_THRESHOLD		0
> >> >
> >> >  struct plic_priv {
> >> > +	struct irq_chip chip;
> >> >  	struct cpumask lmask;
> >> >  	struct irq_domain *irqdomain;
> >> >  	void __iomem *regs;
> >> > @@ -76,6 +77,7 @@ struct plic_handler {
> >> >  	void __iomem		*enable_base;
> >> >  	struct plic_priv	*priv;
> >> >  };
> >> > +static unsigned int plic_count;
> >> >  static bool plic_cpuhp_setup_done;  static DEFINE_PER_CPU(struct
> >> > plic_handler, plic_handlers);
> >> >
> >> > @@ -164,20 +166,12 @@ static void plic_irq_eoi(struct irq_data *d)
> >> >  	writel(d->hwirq, handler->hart_base + CONTEXT_CLAIM);  }
> >> >
> >> > -static struct irq_chip plic_chip = {
> >> > -	.name		= "SiFive PLIC",
> >> > -	.irq_mask	= plic_irq_mask,
> >> > -	.irq_unmask	= plic_irq_unmask,
> >> > -	.irq_eoi	= plic_irq_eoi,
> >> > -#ifdef CONFIG_SMP
> >> > -	.irq_set_affinity = plic_set_affinity,
> >> > -#endif
> >> > -};
> >> > -
> >> >  static int plic_irqdomain_map(struct irq_domain *d, unsigned int irq,
> >> >  			      irq_hw_number_t hwirq)
> >> >  {
> >> > -	irq_domain_set_info(d, irq, hwirq, &plic_chip, d->host_data,
> >> > +	struct plic_priv *priv = d->host_data;
> >> > +
> >> > +	irq_domain_set_info(d, irq, hwirq, &priv->chip, d->host_data,
> >> >  			    handle_fasteoi_irq, NULL, NULL);
> >> >  	irq_set_noprobe(irq);
> >> >  	return 0;
> >> > @@ -294,6 +288,14 @@ static int __init plic_init(struct device_node
> >> > *node,
> >> >  	if (!priv)
> >> >  		return -ENOMEM;
> >> >
> >> > +	priv->chip.name = kasprintf(GFP_KERNEL, "PLIC%d", plic_count++);
> >> > +	priv->chip.irq_mask = plic_irq_mask,
> >> > +	priv->chip.irq_unmask = plic_irq_unmask,
> >> > +	priv->chip.irq_eoi = plic_irq_eoi, #ifdef CONFIG_SMP
> >> > +	priv->chip.irq_set_affinity = plic_set_affinity, #endif
> >> > +
> >> >  	priv->regs = of_iomap(node, 0);
> >> >  	if (WARN_ON(!priv->regs)) {
> >> >  		error = -EIO;
> >> > @@ -383,9 +385,9 @@ static int __init plic_init(struct device_node
> >> > *node,
> >> >  	}
> >> >
> >> >  	pr_info("interrupt-controller at 0x%llx "
> >> > -		"(interrupts=%d, contexts=%d, handlers=%d)\n",
> >> > +		"(interrupts=%d, contexts=%d, handlers=%d) (%s)\n",
> >> >  		(unsigned long long)iores.start, nr_irqs,
> >> > -		nr_contexts, nr_handlers);
> >> > +		nr_contexts, nr_handlers, priv->chip.name);
> >> >  	set_handle_irq(plic_handle_irq);
> >> >  	return 0;
> >>
> >> I really dislike this patch for multiple reasons:
> >>
> >> - Allocating a new struc irq_chip just for a string seems over the
> >> top,
> >>    specially as all the *useful* stuff stays the same.
> >>
> >> - Even if I hate it, /proc is API. I'm sure something, somewhere is
> >>    parsing this. Changing the string is likely to confuse it.
> >
> > AFAIK, we don't have scripts in RISC-V world that depend on
> > /proc/interrupts content. May be in future such scripts will show up.
> 
> How do you know that? Do you keep an exhaustive repository of all the possible
> parsers of /proc/cpuinfo (rhetorical question)?
> 
> > For system with multiple PLICs, we are seeing same "SiFive PLIC"
> > string for all PLIC interrupts in "cat /proc/interrupts". I am trying
> > to assign different string based on PLIC instance. This is similar to
> > what GICv2 driver is doing (e.g. GIC-0, GIC-1, ... in
> > /proc/interrupts).
> 
> Which was a *very* bad idea the first place, and I wish I could get rid of it. I
> cannot, for the reason outlined above (it's ABI).
> Furthermore, in this case, the GICs are different (they are cascaded).
> In your case, they have the same position in the interrupt hierarchy.
> 
> > Is there a better way to do this ?
> >
> >>
> >> - If you do this for debug purposes, then CONFIG_GENERIC_IRQ_DEBUGFS
> >>    is the right way to look up the information.
> >>
> >> - If, for reasons that are beyond me, you actually *need* this, then
> >>    implementing irq_print_chip in your irq_chip structure is the way
> >>    to go.
> >>
> >> But frankly, I'd rather you drop this altogether.
> >
> > I just want to differentiate which interrupt belongs to which PLIC
> > Instance in /proc/interrupts. I can take a different approach if you
> > suggest.
> 
> I *have* given you a way to implement that in a better way. But again, I'd
> rather you *don't* do it for the reason I have outlined above.

I explored kernel/irq/proc.c, and we can achieve what this patch does
by implementing the irq_print_chip() callback of "struct irq_chip", so we
certainly don't need a separate "struct irq_chip" for each PLIC instance.

I will implement the irq_print_chip() callback in the v2 series.

Regards,
Anup

^ permalink raw reply	[flat|nested] 40+ messages in thread

* RE: [PATCH 1/4] irqchip/sifive-plic: Setup cpuhp once after current handler is present
  2020-05-16 16:28           ` Anup Patel
@ 2020-05-17  8:02             ` Anup Patel
  -1 siblings, 0 replies; 40+ messages in thread
From: Anup Patel @ 2020-05-17  8:02 UTC (permalink / raw)
  To: Anup Patel, Marc Zyngier
  Cc: Palmer Dabbelt, Paul Walmsley, Thomas Gleixner, Jason Cooper,
	Atish Patra, Alistair Francis, Anup Patel, linux-riscv,
	linux-kernel



> -----Original Message-----
> From: linux-kernel-owner@vger.kernel.org <linux-kernel-
> owner@vger.kernel.org> On Behalf Of Anup Patel
> Sent: 16 May 2020 21:59
> To: Marc Zyngier <maz@kernel.org>
> Cc: Palmer Dabbelt <palmer@dabbelt.com>; Paul Walmsley
> <paul.walmsley@sifive.com>; Thomas Gleixner <tglx@linutronix.de>; Jason
> Cooper <jason@lakedaemon.net>; Atish Patra <Atish.Patra@wdc.com>; Alistair
> Francis <Alistair.Francis@wdc.com>; Anup Patel <anup@brainfault.org>; linux-
> riscv@lists.infradead.org; linux-kernel@vger.kernel.org
> Subject: RE: [PATCH 1/4] irqchip/sifive-plic: Setup cpuhp once after current
> handler is present
> 
> 
> 
> > -----Original Message-----
> > From: Marc Zyngier <maz@kernel.org>
> > Sent: 16 May 2020 19:01
> > To: Anup Patel <Anup.Patel@wdc.com>
> > Cc: Palmer Dabbelt <palmer@dabbelt.com>; Paul Walmsley
> > <paul.walmsley@sifive.com>; Thomas Gleixner <tglx@linutronix.de>;
> > Jason Cooper <jason@lakedaemon.net>; Atish Patra
> > <Atish.Patra@wdc.com>; Alistair Francis <Alistair.Francis@wdc.com>;
> > Anup Patel <anup@brainfault.org>; linux- riscv@lists.infradead.org;
> > linux-kernel@vger.kernel.org
> > Subject: Re: [PATCH 1/4] irqchip/sifive-plic: Setup cpuhp once after
> > current handler is present
> >
> > On 2020-05-16 13:52, Anup Patel wrote:
> > >> -----Original Message-----
> > >> From: Marc Zyngier <maz@kernel.org>
> > >> Sent: 16 May 2020 17:42
> > >> To: Anup Patel <Anup.Patel@wdc.com>
> > >> Cc: Palmer Dabbelt <palmer@dabbelt.com>; Paul Walmsley
> > >> <paul.walmsley@sifive.com>; Thomas Gleixner <tglx@linutronix.de>;
> > >> Jason Cooper <jason@lakedaemon.net>; Atish Patra
> > >> <Atish.Patra@wdc.com>; Alistair Francis <Alistair.Francis@wdc.com>;
> > >> Anup Patel <anup@brainfault.org>;
> > >> linux-
> > >> riscv@lists.infradead.org; linux-kernel@vger.kernel.org
> > >> Subject: Re: [PATCH 1/4] irqchip/sifive-plic: Setup cpuhp once
> > >> after current handler is present
> > >>
> > >> Hi Anup,
> > >>
> > >> On 2020-05-16 07:38, Anup Patel wrote:
> > >> > For multiple PLIC instances, the plic_init() is called once for
> > >> > each PLIC instance. Due to this we have two issues:
> > >> > 1. cpuhp_setup_state() is called multiple times 2.
> > >> > plic_starting_cpu() can crash for boot CPU if cpuhp_setup_state()
> > >> >    is called before boot CPU PLIC handler is available.
> > >> >
> > >> > This patch fixes both above issues.
> > >> >
> > >> > Signed-off-by: Anup Patel <anup.patel@wdc.com>
> > >> > ---
> > >> >  drivers/irqchip/irq-sifive-plic.c | 14 ++++++++++++--
> > >> >  1 file changed, 12 insertions(+), 2 deletions(-)
> > >> >
> > >> > diff --git a/drivers/irqchip/irq-sifive-plic.c
> > >> > b/drivers/irqchip/irq-sifive-plic.c
> > >> > index 822e074c0600..7dc23edb3267 100644
> > >> > --- a/drivers/irqchip/irq-sifive-plic.c
> > >> > +++ b/drivers/irqchip/irq-sifive-plic.c
> > >> > @@ -76,6 +76,7 @@ struct plic_handler {
> > >> >  	void __iomem		*enable_base;
> > >> >  	struct plic_priv	*priv;
> > >> >  };

* RE: [PATCH 1/4] irqchip/sifive-plic: Setup cpuhp once after current handler is present
@ 2020-05-17  8:02             ` Anup Patel
  0 siblings, 0 replies; 40+ messages in thread
From: Anup Patel @ 2020-05-17  8:02 UTC (permalink / raw)
  To: Anup Patel, Marc Zyngier
  Cc: Jason Cooper, Anup Patel, linux-kernel, Atish Patra,
	Palmer Dabbelt, Paul Walmsley, Alistair Francis, Thomas Gleixner,
	linux-riscv



> -----Original Message-----
> From: linux-kernel-owner@vger.kernel.org <linux-kernel-
> owner@vger.kernel.org> On Behalf Of Anup Patel
> Sent: 16 May 2020 21:59
> To: Marc Zyngier <maz@kernel.org>
> Cc: Palmer Dabbelt <palmer@dabbelt.com>; Paul Walmsley
> <paul.walmsley@sifive.com>; Thomas Gleixner <tglx@linutronix.de>; Jason
> Cooper <jason@lakedaemon.net>; Atish Patra <Atish.Patra@wdc.com>; Alistair
> Francis <Alistair.Francis@wdc.com>; Anup Patel <anup@brainfault.org>; linux-
> riscv@lists.infradead.org; linux-kernel@vger.kernel.org
> Subject: RE: [PATCH 1/4] irqchip/sifive-plic: Setup cpuhp once after current
> handler is present
> 
> 
> 
> > -----Original Message-----
> > From: Marc Zyngier <maz@kernel.org>
> > Sent: 16 May 2020 19:01
> > To: Anup Patel <Anup.Patel@wdc.com>
> > Cc: Palmer Dabbelt <palmer@dabbelt.com>; Paul Walmsley
> > <paul.walmsley@sifive.com>; Thomas Gleixner <tglx@linutronix.de>;
> > Jason Cooper <jason@lakedaemon.net>; Atish Patra
> > <Atish.Patra@wdc.com>; Alistair Francis <Alistair.Francis@wdc.com>;
> > Anup Patel <anup@brainfault.org>; linux- riscv@lists.infradead.org;
> > linux-kernel@vger.kernel.org
> > Subject: Re: [PATCH 1/4] irqchip/sifive-plic: Setup cpuhp once after
> > current handler is present
> >
> > On 2020-05-16 13:52, Anup Patel wrote:
> > >> -----Original Message-----
> > >> From: Marc Zyngier <maz@kernel.org>
> > >> Sent: 16 May 2020 17:42
> > >> To: Anup Patel <Anup.Patel@wdc.com>
> > >> Cc: Palmer Dabbelt <palmer@dabbelt.com>; Paul Walmsley
> > >> <paul.walmsley@sifive.com>; Thomas Gleixner <tglx@linutronix.de>;
> > >> Jason Cooper <jason@lakedaemon.net>; Atish Patra
> > >> <Atish.Patra@wdc.com>; Alistair Francis <Alistair.Francis@wdc.com>;
> > >> Anup Patel <anup@brainfault.org>;
> > >> linux-
> > >> riscv@lists.infradead.org; linux-kernel@vger.kernel.org
> > >> Subject: Re: [PATCH 1/4] irqchip/sifive-plic: Setup cpuhp once
> > >> after current handler is present
> > >>
> > >> Hi Anup,
> > >>
> > >> On 2020-05-16 07:38, Anup Patel wrote:
> > >> > For multiple PLIC instances, the plic_init() is called once for
> > >> > each PLIC instance. Due to this we have two issues:
> > >> > 1. cpuhp_setup_state() is called multiple times 2.
> > >> > plic_starting_cpu() can crash for boot CPU if cpuhp_setup_state()
> > >> >    is called before boot CPU PLIC handler is available.
> > >> >
> > >> > This patch fixes both above issues.
> > >> >
> > >> > Signed-off-by: Anup Patel <anup.patel@wdc.com>
> > >> > ---
> > >> >  drivers/irqchip/irq-sifive-plic.c | 14 ++++++++++++--
> > >> >  1 file changed, 12 insertions(+), 2 deletions(-)
> > >> >
> > >> > diff --git a/drivers/irqchip/irq-sifive-plic.c
> > >> > b/drivers/irqchip/irq-sifive-plic.c
> > >> > index 822e074c0600..7dc23edb3267 100644
> > >> > --- a/drivers/irqchip/irq-sifive-plic.c
> > >> > +++ b/drivers/irqchip/irq-sifive-plic.c
> > >> > @@ -76,6 +76,7 @@ struct plic_handler {
> > >> >  	void __iomem		*enable_base;
> > >> >  	struct plic_priv	*priv;
> > >> >  };
> > >> > +static bool plic_cpuhp_setup_done;
> > >> >  static DEFINE_PER_CPU(struct plic_handler, plic_handlers);
> > >> >
> > >> >  static inline void plic_toggle(struct plic_handler *handler, @@
> > >> > -282,6 +283,7 @@ static int __init plic_init(struct device_node *node,
> > >> >  	int error = 0, nr_contexts, nr_handlers = 0, i;
> > >> >  	u32 nr_irqs;
> > >> >  	struct plic_priv *priv;
> > >> > +	struct plic_handler *handler;
> > >> >
> > >> >  	priv = kzalloc(sizeof(*priv), GFP_KERNEL);
> > >> >  	if (!priv)
> > >> > @@ -310,7 +312,6 @@ static int __init plic_init(struct
> > >> > device_node *node,
> > >> >
> > >> >  	for (i = 0; i < nr_contexts; i++) {
> > >> >  		struct of_phandle_args parent;
> > >> > -		struct plic_handler *handler;
> > >> >  		irq_hw_number_t hwirq;
> > >> >  		int cpu, hartid;
> > >> >
> > >> > @@ -364,9 +365,18 @@ static int __init plic_init(struct
> > >> > device_node *node,
> > >> >  		nr_handlers++;
> > >> >  	}
> > >> >
> > >> > -	cpuhp_setup_state(CPUHP_AP_IRQ_SIFIVE_PLIC_STARTING,
> > >> > +	/*
> > >> > +	 * We can have multiple PLIC instances so setup cpuhp state
> only
> > >> > +	 * when context handler for current/boot CPU is present.
> > >> > +	 */
> > >> > +	handler = this_cpu_ptr(&plic_handlers);
> > >> > +	if (handler->present && !plic_cpuhp_setup_done) {
> > >>
> > >> If there is no context handler for the boot CPU, the system is
> > >> doomed, right? It isn't able to get any interrupt, and you don't
> > >> register the hotplug notifier that could allow secondary CPUs to
> > >> boot.
> > >>
> > >> So what is the point? It feels like you should just give up here.
> > >>
> > >> Also, the boot CPU is always CPU 0. So checking that you only
> > >> register the hotplug notifier from CPU 0 should be enough.
> > >
> > > The boot CPU is not fixed in RISC-V, the logical id of the boot CPU
> > > will always be zero but physical id (or HART id) can be something
> > > totally different.
> >
> > So on riscv, smp_processor_id() can return a non-zero value on the
> > boot CPU? Interesting... :-/
> 
> On RISC-V, smp_processor_id() returns the logical id (which is the order in
> which CPUs are brought up).
> 
> We have a special function, cpuid_to_hartid_map(), in asm/smp.h which helps
> us convert a logical id to a HART id.
> 
> The HART id in the RISC-V world is like the MPIDR in the ARM world.
> 
> >
> > >
> > > On RISC-V NUMA system, we will have a separate PLIC in each NUMA node.
> > >
> > > Let's say we have a system with 2 NUMA nodes, each NUMA node having
> > > 4 CPUs (or 4 HARTs).  In this case, the DTB passed to Linux will
> > > have two PLIC DT nodes where each PLIC device targets only 4 CPUs
> > > (or 4 HARTs). The
> > > plic_init() functions will setup handlers for only 4 CPUs (or 4 HARTs).
> > > In other
> > > words, plic_init() for "PLIC0" will setup handler for HART id 0 to 3
> > > and
> > > plic_init() for "PLIC1" will setup handler for HART id 4 to 7. Now,
> > > any CPU can be boot CPU so it is possible that CPU with HART id 4 is
> > > boot CPU and when plic_init() is first called for "PLIC0" the
> > > handler for HART id 4 is not setup because it will be setup later
> > > when
> > > plic_init() is called for "PLIC1".
> > > This causes plic_starting_cpu() to crash when plic_init() is called
> > > for "PLIC0".
> > >
> > > I hope the above example helps in understanding the issue.
> >
> > It does, thanks. This pseudo NUMA thing really is a terrible hack...
> >
> > >
> > > I encounter this issue randomly when booting Linux on QEMU RISC-V
> > > with multiple NUMA nodes.
> >
> > Then why don't you defer the probing of the PLIC you can't initialize
> > from this CPU? If you're on CPU4-7, only initialize the PLIC that
> > matters to you, and not the others. It would certainly make a lot more
> sense, and be more robust.
> 
> Okay, I will try the -EPROBE_DEFER approach for v2.

I tried -EPROBE_DEFER for PLIC instances not targeting the boot CPU. It
works fine for the boot CPU but fails for secondary CPUs because the
PLIC probe is deferred until after secondary CPU bringup.

This can be further explained by the previous example.

Let's say we have a system with 2 NUMA nodes, each NUMA node
having 4 CPUs (or 4 HARTs).  In this case, the DTB passed to Linux will
have two PLIC DT nodes where each PLIC device targets only 4 CPUs (or
4 HARTs). The plic_init() for "PLIC0" will setup handlers for HART ids 0 to 3
and plic_init() for "PLIC1" will setup handlers for HART ids 4 to 7. Now, any
CPU can be the boot CPU, so let's assume the CPU with HART id 5 is the boot
CPU. plic_init() is first called for "PLIC0", which does not target the boot
CPU (HART id 5), so we defer the "PLIC0" probe.  Next, plic_init() is called
for "PLIC1", which succeeds, and we get handlers for the CPUs with HART ids
4 to 7 (including the boot CPU). Later, Linux brings up secondary CPUs
starting with HART id 0, which fails because plic_starting_cpu() crashes for
HART id 0 as the "PLIC0" probe is deferred (i.e. not yet done).

The real reason why -EPROBE_DEFER did not work in this case is that the
Linux irq subsystem retries deferred probes only after secondary CPU bringup.

I will keep the current approach for v2 series unless you have other
suggestion.

Regards,
Anup


^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: [PATCH 3/4] irqchip/sifive-plic: Separate irq_chip for multiple PLIC instances
  2020-05-16 16:38           ` Anup Patel
@ 2020-05-18  8:14             ` Marc Zyngier
  -1 siblings, 0 replies; 40+ messages in thread
From: Marc Zyngier @ 2020-05-18  8:14 UTC (permalink / raw)
  To: Anup Patel
  Cc: Palmer Dabbelt, Paul Walmsley, Thomas Gleixner, Jason Cooper,
	Atish Patra, Alistair Francis, Anup Patel, linux-riscv,
	linux-kernel

On 2020-05-16 17:38, Anup Patel wrote:
>> -----Original Message-----
>> From: Marc Zyngier <maz@kernel.org>

[...]

>> I *have* given you a way to implement that in a better way. But again, 
>> I'd
>> rather you *don't* do it for the reason I have outlined above.
> 
> I explored kernel/irq/proc.c and we can achieve what this patch does
> by implementing irq_print_chip() callback of "struct irq_chip" so we
> certainly don't need separate "struct irq_chip" for each PLIC instance.
> 
> I will implement irq_print_chip() callback in v2 series.

You still haven't explained *why* you need to have this change.
As it stands, I'm not prepared to take it.

Thanks,

         M.
-- 
Jazz is not dead. It just smells funny...

^ permalink raw reply	[flat|nested] 40+ messages in thread


* RE: [PATCH 3/4] irqchip/sifive-plic: Separate irq_chip for multiple PLIC instances
  2020-05-18  8:14             ` Marc Zyngier
@ 2020-05-18  9:00               ` Anup Patel
  -1 siblings, 0 replies; 40+ messages in thread
From: Anup Patel @ 2020-05-18  9:00 UTC (permalink / raw)
  To: Marc Zyngier
  Cc: Palmer Dabbelt, Paul Walmsley, Thomas Gleixner, Jason Cooper,
	Atish Patra, Alistair Francis, Anup Patel, linux-riscv,
	linux-kernel



> -----Original Message-----
> From: Marc Zyngier <maz@kernel.org>
> Sent: 18 May 2020 13:45
> To: Anup Patel <Anup.Patel@wdc.com>
> Cc: Palmer Dabbelt <palmer@dabbelt.com>; Paul Walmsley
> <paul.walmsley@sifive.com>; Thomas Gleixner <tglx@linutronix.de>; Jason
> Cooper <jason@lakedaemon.net>; Atish Patra <Atish.Patra@wdc.com>; Alistair
> Francis <Alistair.Francis@wdc.com>; Anup Patel <anup@brainfault.org>; linux-
> riscv@lists.infradead.org; linux-kernel@vger.kernel.org
> Subject: Re: [PATCH 3/4] irqchip/sifive-plic: Separate irq_chip for multiple PLIC
> instances
> 
> On 2020-05-16 17:38, Anup Patel wrote:
> >> -----Original Message-----
> >> From: Marc Zyngier <maz@kernel.org>
> 
> [...]
> 
> >> I *have* given you a way to implement that in a better way. But
> >> again, I'd rather you *don't* do it for the reason I have outlined
> >> above.
> >
> > I explored kernel/irq/proc.c and we can achieve what this patch does
> > by implementing irq_print_chip() callback of "struct irq_chip" so we
> > certainly don't need separate "struct irq_chip" for each PLIC instance.
> >
> > I will implement irq_print_chip() callback in v2 series.
> 
> You still haven't explained *why* you need to have this change.
> As it stands, I'm not prepared to take it.
> 

This is only for differentiating the interrupts of multiple PLIC instances
in /proc/interrupts.

I will drop this patch since (like you mentioned) contents of
/proc/interrupts is considered an ABI and this patch breaks it.

For now, we can infer the PLIC instance for interrupt X based on the
contents of /proc/irq/X/node (i.e. the interrupt's NUMA node id).
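As a rough illustration (assuming a Linux system with /proc mounted;
interrupt numbers and node values vary by machine), the per-interrupt node
can be read like this:

```shell
#!/bin/sh
# Print the NUMA node (and hence, on such NUMA systems, the PLIC
# instance) for every interrupt that exposes one under /proc/irq/.
for d in /proc/irq/[0-9]*; do
	irq=${d#/proc/irq/}
	if [ -r "$d/node" ]; then
		printf 'irq %s: node %s\n' "$irq" "$(cat "$d/node")"
	fi
done
```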

Thanks,
Anup

^ permalink raw reply	[flat|nested] 40+ messages in thread


end of thread, other threads:[~2020-05-18  9:01 UTC | newest]

Thread overview: 40+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2020-05-16  6:38 [PATCH 0/4] More improvements for multiple PLICs Anup Patel
2020-05-16  6:38 ` Anup Patel
2020-05-16  6:38 ` [PATCH 1/4] irqchip/sifive-plic: Setup cpuhp once after current handler is present Anup Patel
2020-05-16  6:38   ` Anup Patel
2020-05-16 12:11   ` Marc Zyngier
2020-05-16 12:11     ` Marc Zyngier
2020-05-16 12:52     ` Anup Patel
2020-05-16 12:52       ` Anup Patel
2020-05-16 13:30       ` Marc Zyngier
2020-05-16 13:30         ` Marc Zyngier
2020-05-16 16:28         ` Anup Patel
2020-05-16 16:28           ` Anup Patel
2020-05-17  8:02           ` Anup Patel
2020-05-17  8:02             ` Anup Patel
2020-05-16  6:38 ` [PATCH 2/4] irqchip/sifive-plic: Improve boot prints for multiple PLIC instances Anup Patel
2020-05-16  6:38   ` Anup Patel
2020-05-16 12:20   ` Marc Zyngier
2020-05-16 12:20     ` Marc Zyngier
2020-05-16 12:53     ` Anup Patel
2020-05-16 12:53       ` Anup Patel
2020-05-16  6:39 ` [PATCH 3/4] irqchip/sifive-plic: Separate irq_chip for multiple " Anup Patel
2020-05-16  6:39   ` Anup Patel
2020-05-16 12:29   ` Marc Zyngier
2020-05-16 12:29     ` Marc Zyngier
2020-05-16 13:01     ` Anup Patel
2020-05-16 13:01       ` Anup Patel
2020-05-16 13:16       ` Marc Zyngier
2020-05-16 13:16         ` Marc Zyngier
2020-05-16 16:38         ` Anup Patel
2020-05-16 16:38           ` Anup Patel
2020-05-18  8:14           ` Marc Zyngier
2020-05-18  8:14             ` Marc Zyngier
2020-05-18  9:00             ` Anup Patel
2020-05-18  9:00               ` Anup Patel
2020-05-16  6:39 ` [PATCH 4/4] irqchip/sifive-plic: Set default irq affinity in plic_irqdomain_map() Anup Patel
2020-05-16  6:39   ` Anup Patel
2020-05-16 12:30   ` Marc Zyngier
2020-05-16 12:30     ` Marc Zyngier
2020-05-16 12:53     ` Anup Patel
2020-05-16 12:53       ` Anup Patel
