* [PATCH v5 00/19] irqdomain: fix mapping race and clean up locking
From: Johan Hovold @ 2023-02-09 13:23 UTC (permalink / raw)
  To: Marc Zyngier, Thomas Gleixner
  Cc: x86, platform-driver-x86, linux-arm-kernel, linux-mips,
	linux-kernel, Johan Hovold

Parallel probing (e.g. due to asynchronous probing) of devices that
share interrupts can currently result in two mappings being created
for the same hardware interrupt.

This series fixes this mapping race and cleans up the irqdomain locking
so that, in the end, the global irq_domain_mutex is only used for
managing the likewise global irq_domain_list, while domain operations
(e.g. IRQ allocations) use per-domain (hierarchy) locking.
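
To make the failure mode concrete, here is a minimal, self-contained
user-space sketch (plain C with pthreads, hypothetical names, not
kernel code) of the check-then-create race that two parallel probes
can hit, and of the fix of doing the lookup and the creation under one
lock:

#include <pthread.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static int mapping;	/* 0 means "no mapping yet" */

/* Racy: two threads can both see mapping == 0 and both create one. */
static int create_mapping_racy(int hwirq)
{
	if (mapping)
		return mapping;

	mapping = 100 + hwirq;	/* second creation clobbers the first */
	return mapping;
}

/* Fixed: lookup and creation form one atomic section under the lock. */
static int create_mapping_locked(int hwirq)
{
	int virq;

	pthread_mutex_lock(&lock);
	if (!mapping)
		mapping = 100 + hwirq;
	virq = mapping;
	pthread_mutex_unlock(&lock);

	return virq;
}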

Johan


Changes in v5
 - reorder patches (since we do care about stable after all)
 - tweak commit messages and add stable tags
 - use '__locked' suffix for new helper functions
 - drop a line break from an argument list (11/19)

Changes in v4
 - add a comment to __irq_domain_add() as further documentation of the
   domain lock and root pointer (19/19)
 - add a comment documenting that the lockdep assertion in
   irq_domain_set_mapping() also verifies that the domains in a
   hierarchy point to the same root (19/19)

Changes in v3
 - drop dead and bogus code (1--3/19, new)
 - fix racy mapcount accesses (5/19, new)
 - drop revmap mutex (6/19, new)
 - use irq_domain_mutex to address mapping race (9/19)
 - clean up irq_domain_push/pop_irq() (10/19, new)
 - use irq_domain_create_hierarchy() to construct hierarchies
   (11--18/19, new)
 - switch to per-domain locking (19/19, new)

Changes in v2
 - split out redundant-lookup cleanup (1/4)
 - use a per-domain mutex to address mapping race (2/4)
 - move kernel-doc to exported function (2/4)
 - fix association race (3/4, new)
 - use per-domain mutex for associations (4/4, new)


Johan Hovold (19):
  irqdomain: Fix association race
  irqdomain: Fix disassociation race
  irqdomain: Drop bogus fwspec-mapping error handling
  irqdomain: Look for existing mapping only once
  irqdomain: Refactor __irq_domain_alloc_irqs()
  irqdomain: Fix mapping-creation race
  irqdomain: Drop revmap mutex
  irqdomain: Drop dead domain-name assignment
  irqdomain: Drop leftover brackets
  irqdomain: Clean up irq_domain_push/pop_irq()
  x86/ioapic: Use irq_domain_create_hierarchy()
  x86/apic: Use irq_domain_create_hierarchy()
  irqchip/alpine-msi: Use irq_domain_add_hierarchy()
  irqchip/gic-v2m: Use irq_domain_create_hierarchy()
  irqchip/gic-v3-its: Use irq_domain_create_hierarchy()
  irqchip/gic-v3-mbi: Use irq_domain_create_hierarchy()
  irqchip/loongson-pch-msi: Use irq_domain_create_hierarchy()
  irqchip/mvebu-odmi: Use irq_domain_create_hierarchy()
  irqdomain: Switch to per-domain locking

 arch/x86/kernel/apic/io_apic.c         |   7 +-
 arch/x86/platform/uv/uv_irq.c          |   7 +-
 drivers/irqchip/irq-alpine-msi.c       |   8 +-
 drivers/irqchip/irq-gic-v2m.c          |   5 +-
 drivers/irqchip/irq-gic-v3-its.c       |  13 +-
 drivers/irqchip/irq-gic-v3-mbi.c       |   5 +-
 drivers/irqchip/irq-loongson-pch-msi.c |   9 +-
 drivers/irqchip/irq-mvebu-odmi.c       |  13 +-
 include/linux/irqdomain.h              |   6 +-
 kernel/irq/irqdomain.c                 | 341 ++++++++++++++-----------
 10 files changed, 232 insertions(+), 182 deletions(-)

-- 
2.39.1


* [PATCH v5 01/19] irqdomain: Fix association race
From: Johan Hovold @ 2023-02-09 13:23 UTC (permalink / raw)
  To: Marc Zyngier, Thomas Gleixner
  Cc: x86, platform-driver-x86, linux-arm-kernel, linux-mips,
	linux-kernel, Johan Hovold, stable, Hsin-Yi Wang, Mark-PK Tsai

The sanity check for an already mapped virq is done outside of the
irq_domain_mutex-protected section, which means that an (unlikely)
racing association may not be detected.

Fix this by factoring out the association implementation, which will
also be used in a follow-on change to fix a shared-interrupt mapping
race.

Fixes: ddaf144c61da ("irqdomain: Refactor irq_domain_associate_many()")
Cc: stable@vger.kernel.org      # 3.11
Tested-by: Hsin-Yi Wang <hsinyi@chromium.org>
Tested-by: Mark-PK Tsai <mark-pk.tsai@mediatek.com>
Signed-off-by: Johan Hovold <johan+linaro@kernel.org>
---
 kernel/irq/irqdomain.c | 19 ++++++++++++++-----
 1 file changed, 14 insertions(+), 5 deletions(-)

diff --git a/kernel/irq/irqdomain.c b/kernel/irq/irqdomain.c
index 798a9042421f..561689a3f050 100644
--- a/kernel/irq/irqdomain.c
+++ b/kernel/irq/irqdomain.c
@@ -559,8 +559,8 @@ static void irq_domain_disassociate(struct irq_domain *domain, unsigned int irq)
 	irq_domain_clear_mapping(domain, hwirq);
 }
 
-int irq_domain_associate(struct irq_domain *domain, unsigned int virq,
-			 irq_hw_number_t hwirq)
+static int irq_domain_associate_locked(struct irq_domain *domain, unsigned int virq,
+				       irq_hw_number_t hwirq)
 {
 	struct irq_data *irq_data = irq_get_irq_data(virq);
 	int ret;
@@ -573,7 +573,6 @@ int irq_domain_associate(struct irq_domain *domain, unsigned int virq,
 	if (WARN(irq_data->domain, "error: virq%i is already associated", virq))
 		return -EINVAL;
 
-	mutex_lock(&irq_domain_mutex);
 	irq_data->hwirq = hwirq;
 	irq_data->domain = domain;
 	if (domain->ops->map) {
@@ -590,7 +589,6 @@ int irq_domain_associate(struct irq_domain *domain, unsigned int virq,
 			}
 			irq_data->domain = NULL;
 			irq_data->hwirq = 0;
-			mutex_unlock(&irq_domain_mutex);
 			return ret;
 		}
 
@@ -601,12 +599,23 @@ int irq_domain_associate(struct irq_domain *domain, unsigned int virq,
 
 	domain->mapcount++;
 	irq_domain_set_mapping(domain, hwirq, irq_data);
-	mutex_unlock(&irq_domain_mutex);
 
 	irq_clear_status_flags(virq, IRQ_NOREQUEST);
 
 	return 0;
 }
+
+int irq_domain_associate(struct irq_domain *domain, unsigned int virq,
+			 irq_hw_number_t hwirq)
+{
+	int ret;
+
+	mutex_lock(&irq_domain_mutex);
+	ret = irq_domain_associate_locked(domain, virq, hwirq);
+	mutex_unlock(&irq_domain_mutex);
+
+	return ret;
+}
 EXPORT_SYMBOL_GPL(irq_domain_associate);
 
 void irq_domain_associate_many(struct irq_domain *domain, unsigned int irq_base,
-- 
2.39.1
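
The split above, where an irq_domain_associate_locked() helper expects
the caller to hold irq_domain_mutex and the exported
irq_domain_associate() takes and releases the lock around it, is the
'__locked' idiom used throughout the rest of the series (see the v5
changelog). A minimal sketch of the pattern, using hypothetical names
rather than the actual kernel code:

#include <linux/mutex.h>
#include <linux/lockdep.h>

static DEFINE_MUTEX(example_mutex);

/* Caller must hold example_mutex. */
static int example_op_locked(int arg)
{
	lockdep_assert_held(&example_mutex);

	/* ... do the actual work ... */
	return 0;
}

int example_op(int arg)
{
	int ret;

	mutex_lock(&example_mutex);
	ret = example_op_locked(arg);
	mutex_unlock(&example_mutex);

	return ret;
}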


* [PATCH v5 02/19] irqdomain: Fix disassociation race
From: Johan Hovold @ 2023-02-09 13:23 UTC (permalink / raw)
  To: Marc Zyngier, Thomas Gleixner
  Cc: x86, platform-driver-x86, linux-arm-kernel, linux-mips,
	linux-kernel, Johan Hovold, stable, Hsin-Yi Wang, Mark-PK Tsai

The global irq_domain_mutex is held when mapping interrupts from
non-hierarchical domains but currently not when disposing them.

This specifically means that updates of the domain mapcount (which is
currently only used for statistics in debugfs) are racy.

Make sure to hold the global irq_domain_mutex also when disposing
mappings from non-hierarchical domains.

Fixes: 9dc6be3d4193 ("genirq/irqdomain: Add map counter")
Cc: stable@vger.kernel.org      # 4.13
Tested-by: Hsin-Yi Wang <hsinyi@chromium.org>
Tested-by: Mark-PK Tsai <mark-pk.tsai@mediatek.com>
Signed-off-by: Johan Hovold <johan+linaro@kernel.org>
---
 kernel/irq/irqdomain.c | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/kernel/irq/irqdomain.c b/kernel/irq/irqdomain.c
index 561689a3f050..981cd636275e 100644
--- a/kernel/irq/irqdomain.c
+++ b/kernel/irq/irqdomain.c
@@ -538,6 +538,9 @@ static void irq_domain_disassociate(struct irq_domain *domain, unsigned int irq)
 		return;
 
 	hwirq = irq_data->hwirq;
+
+	mutex_lock(&irq_domain_mutex);
+
 	irq_set_status_flags(irq, IRQ_NOREQUEST);
 
 	/* remove chip and handler */
@@ -557,6 +560,8 @@ static void irq_domain_disassociate(struct irq_domain *domain, unsigned int irq)
 
 	/* Clear reverse map for this hwirq */
 	irq_domain_clear_mapping(domain, hwirq);
+
+	mutex_unlock(&irq_domain_mutex);
 }
 
 static int irq_domain_associate_locked(struct irq_domain *domain, unsigned int virq,
-- 
2.39.1
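
The mapcount race mentioned above is the classic lost-update problem:
a plain 'domain->mapcount--' is a load, a decrement and a store, so
two CPUs updating the counter concurrently can read the same value and
one of the updates is lost. Holding irq_domain_mutex across the update,
as this patch does, serialises the read-modify-write. A user-space
illustration with hypothetical names (not kernel code):

#include <pthread.h>
#include <stddef.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static int mapcount = 2;

/* Racy: both threads may load 2 and both store 1, losing one decrement. */
static void *dispose_racy(void *unused)
{
	mapcount--;
	return NULL;
}

/* Serialised: the read-modify-write is atomic w.r.t. other threads. */
static void *dispose_locked(void *unused)
{
	pthread_mutex_lock(&lock);
	mapcount--;
	pthread_mutex_unlock(&lock);

	return NULL;
}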


* [PATCH v5 03/19] irqdomain: Drop bogus fwspec-mapping error handling
From: Johan Hovold @ 2023-02-09 13:23 UTC (permalink / raw)
  To: Marc Zyngier, Thomas Gleixner
  Cc: x86, platform-driver-x86, linux-arm-kernel, linux-mips,
	linux-kernel, Johan Hovold, stable, Hsin-Yi Wang, Mark-PK Tsai

In case a newly allocated IRQ ever ends up without an associated
struct irq_data, it would not even be possible to dispose of the
mapping.

Replace the bogus disposal with a WARN_ON().

This will also be used to fix a shared-interrupt mapping race, hence the
CC-stable tag.

Fixes: 1e2a7d78499e ("irqdomain: Don't set type when mapping an IRQ")
Cc: stable@vger.kernel.org      # 4.8
Tested-by: Hsin-Yi Wang <hsinyi@chromium.org>
Tested-by: Mark-PK Tsai <mark-pk.tsai@mediatek.com>
Signed-off-by: Johan Hovold <johan+linaro@kernel.org>
---
 kernel/irq/irqdomain.c | 7 +------
 1 file changed, 1 insertion(+), 6 deletions(-)

diff --git a/kernel/irq/irqdomain.c b/kernel/irq/irqdomain.c
index 981cd636275e..b4326c364ae7 100644
--- a/kernel/irq/irqdomain.c
+++ b/kernel/irq/irqdomain.c
@@ -847,13 +847,8 @@ unsigned int irq_create_fwspec_mapping(struct irq_fwspec *fwspec)
 	}
 
 	irq_data = irq_get_irq_data(virq);
-	if (!irq_data) {
-		if (irq_domain_is_hierarchy(domain))
-			irq_domain_free_irqs(virq, 1);
-		else
-			irq_dispose_mapping(virq);
+	if (WARN_ON(!irq_data))
 		return 0;
-	}
 
 	/* Store trigger type */
 	irqd_set_trigger_type(irq_data, type);
-- 
2.39.1


* [PATCH v5 04/19] irqdomain: Look for existing mapping only once
From: Johan Hovold @ 2023-02-09 13:23 UTC (permalink / raw)
  To: Marc Zyngier, Thomas Gleixner
  Cc: x86, platform-driver-x86, linux-arm-kernel, linux-mips,
	linux-kernel, Johan Hovold, stable, Hsin-Yi Wang, Mark-PK Tsai

Avoid looking for an existing mapping twice when creating a new mapping
using irq_create_fwspec_mapping() by factoring out the actual
allocation, which is shared with irq_create_mapping_affinity().

The new helper function will also be used to fix a shared-interrupt
mapping race, hence the Fixes tag.

Fixes: b62b2cf5759b ("irqdomain: Fix handling of type settings for existing mappings")
Cc: stable@vger.kernel.org      # 4.8
Tested-by: Hsin-Yi Wang <hsinyi@chromium.org>
Tested-by: Mark-PK Tsai <mark-pk.tsai@mediatek.com>
Signed-off-by: Johan Hovold <johan+linaro@kernel.org>
---
 kernel/irq/irqdomain.c | 60 +++++++++++++++++++++++-------------------
 1 file changed, 33 insertions(+), 27 deletions(-)

diff --git a/kernel/irq/irqdomain.c b/kernel/irq/irqdomain.c
index b4326c364ae7..3d6a14efae62 100644
--- a/kernel/irq/irqdomain.c
+++ b/kernel/irq/irqdomain.c
@@ -682,6 +682,34 @@ unsigned int irq_create_direct_mapping(struct irq_domain *domain)
 EXPORT_SYMBOL_GPL(irq_create_direct_mapping);
 #endif
 
+static unsigned int __irq_create_mapping_affinity(struct irq_domain *domain,
+						  irq_hw_number_t hwirq,
+						  const struct irq_affinity_desc *affinity)
+{
+	struct device_node *of_node = irq_domain_get_of_node(domain);
+	int virq;
+
+	pr_debug("irq_create_mapping(0x%p, 0x%lx)\n", domain, hwirq);
+
+	/* Allocate a virtual interrupt number */
+	virq = irq_domain_alloc_descs(-1, 1, hwirq, of_node_to_nid(of_node),
+				      affinity);
+	if (virq <= 0) {
+		pr_debug("-> virq allocation failed\n");
+		return 0;
+	}
+
+	if (irq_domain_associate(domain, virq, hwirq)) {
+		irq_free_desc(virq);
+		return 0;
+	}
+
+	pr_debug("irq %lu on domain %s mapped to virtual irq %u\n",
+		hwirq, of_node_full_name(of_node), virq);
+
+	return virq;
+}
+
 /**
  * irq_create_mapping_affinity() - Map a hardware interrupt into linux irq space
  * @domain: domain owning this hardware interrupt or NULL for default domain
@@ -694,14 +722,11 @@ EXPORT_SYMBOL_GPL(irq_create_direct_mapping);
  * on the number returned from that call.
  */
 unsigned int irq_create_mapping_affinity(struct irq_domain *domain,
-				       irq_hw_number_t hwirq,
-				       const struct irq_affinity_desc *affinity)
+					 irq_hw_number_t hwirq,
+					 const struct irq_affinity_desc *affinity)
 {
-	struct device_node *of_node;
 	int virq;
 
-	pr_debug("irq_create_mapping(0x%p, 0x%lx)\n", domain, hwirq);
-
 	/* Look for default domain if necessary */
 	if (domain == NULL)
 		domain = irq_default_domain;
@@ -709,34 +734,15 @@ unsigned int irq_create_mapping_affinity(struct irq_domain *domain,
 		WARN(1, "%s(, %lx) called with NULL domain\n", __func__, hwirq);
 		return 0;
 	}
-	pr_debug("-> using domain @%p\n", domain);
-
-	of_node = irq_domain_get_of_node(domain);
 
 	/* Check if mapping already exists */
 	virq = irq_find_mapping(domain, hwirq);
 	if (virq) {
-		pr_debug("-> existing mapping on virq %d\n", virq);
+		pr_debug("existing mapping on virq %d\n", virq);
 		return virq;
 	}
 
-	/* Allocate a virtual interrupt number */
-	virq = irq_domain_alloc_descs(-1, 1, hwirq, of_node_to_nid(of_node),
-				      affinity);
-	if (virq <= 0) {
-		pr_debug("-> virq allocation failed\n");
-		return 0;
-	}
-
-	if (irq_domain_associate(domain, virq, hwirq)) {
-		irq_free_desc(virq);
-		return 0;
-	}
-
-	pr_debug("irq %lu on domain %s mapped to virtual irq %u\n",
-		hwirq, of_node_full_name(of_node), virq);
-
-	return virq;
+	return __irq_create_mapping_affinity(domain, hwirq, affinity);
 }
 EXPORT_SYMBOL_GPL(irq_create_mapping_affinity);
 
@@ -841,7 +847,7 @@ unsigned int irq_create_fwspec_mapping(struct irq_fwspec *fwspec)
 			return 0;
 	} else {
 		/* Create mapping */
-		virq = irq_create_mapping(domain, hwirq);
+		virq = __irq_create_mapping_affinity(domain, hwirq, NULL);
 		if (!virq)
 			return virq;
 	}
-- 
2.39.1


* [PATCH v5 05/19] irqdomain: Refactor __irq_domain_alloc_irqs()
From: Johan Hovold @ 2023-02-09 13:23 UTC (permalink / raw)
  To: Marc Zyngier, Thomas Gleixner
  Cc: x86, platform-driver-x86, linux-arm-kernel, linux-mips,
	linux-kernel, Johan Hovold, stable, Hsin-Yi Wang, Mark-PK Tsai

Refactor __irq_domain_alloc_irqs() so that it can be called internally
while holding the irq_domain_mutex.

This will be used to fix a shared-interrupt mapping race, hence the
Fixes tag.

Fixes: b62b2cf5759b ("irqdomain: Fix handling of type settings for existing mappings")
Cc: stable@vger.kernel.org      # 4.8
Tested-by: Hsin-Yi Wang <hsinyi@chromium.org>
Tested-by: Mark-PK Tsai <mark-pk.tsai@mediatek.com>
Signed-off-by: Johan Hovold <johan+linaro@kernel.org>
---
 kernel/irq/irqdomain.c | 88 +++++++++++++++++++++++-------------------
 1 file changed, 48 insertions(+), 40 deletions(-)

diff --git a/kernel/irq/irqdomain.c b/kernel/irq/irqdomain.c
index 3d6a14efae62..7b57949bc79c 100644
--- a/kernel/irq/irqdomain.c
+++ b/kernel/irq/irqdomain.c
@@ -1441,40 +1441,12 @@ int irq_domain_alloc_irqs_hierarchy(struct irq_domain *domain,
 	return domain->ops->alloc(domain, irq_base, nr_irqs, arg);
 }
 
-/**
- * __irq_domain_alloc_irqs - Allocate IRQs from domain
- * @domain:	domain to allocate from
- * @irq_base:	allocate specified IRQ number if irq_base >= 0
- * @nr_irqs:	number of IRQs to allocate
- * @node:	NUMA node id for memory allocation
- * @arg:	domain specific argument
- * @realloc:	IRQ descriptors have already been allocated if true
- * @affinity:	Optional irq affinity mask for multiqueue devices
- *
- * Allocate IRQ numbers and initialized all data structures to support
- * hierarchy IRQ domains.
- * Parameter @realloc is mainly to support legacy IRQs.
- * Returns error code or allocated IRQ number
- *
- * The whole process to setup an IRQ has been split into two steps.
- * The first step, __irq_domain_alloc_irqs(), is to allocate IRQ
- * descriptor and required hardware resources. The second step,
- * irq_domain_activate_irq(), is to program the hardware with preallocated
- * resources. In this way, it's easier to rollback when failing to
- * allocate resources.
- */
-int __irq_domain_alloc_irqs(struct irq_domain *domain, int irq_base,
-			    unsigned int nr_irqs, int node, void *arg,
-			    bool realloc, const struct irq_affinity_desc *affinity)
+static int irq_domain_alloc_irqs_locked(struct irq_domain *domain, int irq_base,
+					unsigned int nr_irqs, int node, void *arg,
+					bool realloc, const struct irq_affinity_desc *affinity)
 {
 	int i, ret, virq;
 
-	if (domain == NULL) {
-		domain = irq_default_domain;
-		if (WARN(!domain, "domain is NULL; cannot allocate IRQ\n"))
-			return -EINVAL;
-	}
-
 	if (realloc && irq_base >= 0) {
 		virq = irq_base;
 	} else {
@@ -1493,24 +1465,18 @@ int __irq_domain_alloc_irqs(struct irq_domain *domain, int irq_base,
 		goto out_free_desc;
 	}
 
-	mutex_lock(&irq_domain_mutex);
 	ret = irq_domain_alloc_irqs_hierarchy(domain, virq, nr_irqs, arg);
-	if (ret < 0) {
-		mutex_unlock(&irq_domain_mutex);
+	if (ret < 0)
 		goto out_free_irq_data;
-	}
 
 	for (i = 0; i < nr_irqs; i++) {
 		ret = irq_domain_trim_hierarchy(virq + i);
-		if (ret) {
-			mutex_unlock(&irq_domain_mutex);
+		if (ret)
 			goto out_free_irq_data;
-		}
 	}
-	
+
 	for (i = 0; i < nr_irqs; i++)
 		irq_domain_insert_irq(virq + i);
-	mutex_unlock(&irq_domain_mutex);
 
 	return virq;
 
@@ -1520,6 +1486,48 @@ int __irq_domain_alloc_irqs(struct irq_domain *domain, int irq_base,
 	irq_free_descs(virq, nr_irqs);
 	return ret;
 }
+
+/**
+ * __irq_domain_alloc_irqs - Allocate IRQs from domain
+ * @domain:	domain to allocate from
+ * @irq_base:	allocate specified IRQ number if irq_base >= 0
+ * @nr_irqs:	number of IRQs to allocate
+ * @node:	NUMA node id for memory allocation
+ * @arg:	domain specific argument
+ * @realloc:	IRQ descriptors have already been allocated if true
+ * @affinity:	Optional irq affinity mask for multiqueue devices
+ *
+ * Allocate IRQ numbers and initialized all data structures to support
+ * hierarchy IRQ domains.
+ * Parameter @realloc is mainly to support legacy IRQs.
+ * Returns error code or allocated IRQ number
+ *
+ * The whole process to setup an IRQ has been split into two steps.
+ * The first step, __irq_domain_alloc_irqs(), is to allocate IRQ
+ * descriptor and required hardware resources. The second step,
+ * irq_domain_activate_irq(), is to program the hardware with preallocated
+ * resources. In this way, it's easier to rollback when failing to
+ * allocate resources.
+ */
+int __irq_domain_alloc_irqs(struct irq_domain *domain, int irq_base,
+			    unsigned int nr_irqs, int node, void *arg,
+			    bool realloc, const struct irq_affinity_desc *affinity)
+{
+	int ret;
+
+	if (domain == NULL) {
+		domain = irq_default_domain;
+		if (WARN(!domain, "domain is NULL; cannot allocate IRQ\n"))
+			return -EINVAL;
+	}
+
+	mutex_lock(&irq_domain_mutex);
+	ret = irq_domain_alloc_irqs_locked(domain, irq_base, nr_irqs, node, arg,
+					   realloc, affinity);
+	mutex_unlock(&irq_domain_mutex);
+
+	return ret;
+}
 EXPORT_SYMBOL_GPL(__irq_domain_alloc_irqs);
 
 /* The irq_data was moved, fix the revmap to refer to the new location */
-- 
2.39.1


* [PATCH v5 06/19] irqdomain: Fix mapping-creation race
From: Johan Hovold @ 2023-02-09 13:23 UTC (permalink / raw)
  To: Marc Zyngier, Thomas Gleixner
  Cc: x86, platform-driver-x86, linux-arm-kernel, linux-mips,
	linux-kernel, Johan Hovold, stable, Dmitry Torokhov, Jon Hunter,
	Hsin-Yi Wang, Mark-PK Tsai

Parallel probing of devices that share interrupts (e.g. when a driver
uses asynchronous probing) can currently result in two mappings being
created for the same hardware interrupt due to missing serialisation.

Make sure to hold the irq_domain_mutex when creating mappings so that
looking for an existing mapping before creating a new one is done
atomically.

Fixes: 765230b5f084 ("driver-core: add asynchronous probing support for drivers")
Fixes: b62b2cf5759b ("irqdomain: Fix handling of type settings for existing mappings")
Link: https://lore.kernel.org/r/YuJXMHoT4ijUxnRb@hovoldconsulting.com
Cc: stable@vger.kernel.org      # 4.8
Cc: Dmitry Torokhov <dtor@chromium.org>
Cc: Jon Hunter <jonathanh@nvidia.com>
Tested-by: Hsin-Yi Wang <hsinyi@chromium.org>
Tested-by: Mark-PK Tsai <mark-pk.tsai@mediatek.com>
Signed-off-by: Johan Hovold <johan+linaro@kernel.org>
---
 kernel/irq/irqdomain.c | 55 ++++++++++++++++++++++++++++++------------
 1 file changed, 40 insertions(+), 15 deletions(-)

diff --git a/kernel/irq/irqdomain.c b/kernel/irq/irqdomain.c
index 7b57949bc79c..1ddb01bd49a4 100644
--- a/kernel/irq/irqdomain.c
+++ b/kernel/irq/irqdomain.c
@@ -25,6 +25,9 @@ static DEFINE_MUTEX(irq_domain_mutex);
 
 static struct irq_domain *irq_default_domain;
 
+static int irq_domain_alloc_irqs_locked(struct irq_domain *domain, int irq_base,
+					unsigned int nr_irqs, int node, void *arg,
+					bool realloc, const struct irq_affinity_desc *affinity);
 static void irq_domain_check_hierarchy(struct irq_domain *domain);
 
 struct irqchip_fwid {
@@ -682,9 +685,9 @@ unsigned int irq_create_direct_mapping(struct irq_domain *domain)
 EXPORT_SYMBOL_GPL(irq_create_direct_mapping);
 #endif
 
-static unsigned int __irq_create_mapping_affinity(struct irq_domain *domain,
-						  irq_hw_number_t hwirq,
-						  const struct irq_affinity_desc *affinity)
+static unsigned int irq_create_mapping_affinity_locked(struct irq_domain *domain,
+						       irq_hw_number_t hwirq,
+						       const struct irq_affinity_desc *affinity)
 {
 	struct device_node *of_node = irq_domain_get_of_node(domain);
 	int virq;
@@ -699,7 +702,7 @@ static unsigned int __irq_create_mapping_affinity(struct irq_domain *domain,
 		return 0;
 	}
 
-	if (irq_domain_associate(domain, virq, hwirq)) {
+	if (irq_domain_associate_locked(domain, virq, hwirq)) {
 		irq_free_desc(virq);
 		return 0;
 	}
@@ -735,14 +738,20 @@ unsigned int irq_create_mapping_affinity(struct irq_domain *domain,
 		return 0;
 	}
 
+	mutex_lock(&irq_domain_mutex);
+
 	/* Check if mapping already exists */
 	virq = irq_find_mapping(domain, hwirq);
 	if (virq) {
 		pr_debug("existing mapping on virq %d\n", virq);
-		return virq;
+		goto out;
 	}
 
-	return __irq_create_mapping_affinity(domain, hwirq, affinity);
+	virq = irq_create_mapping_affinity_locked(domain, hwirq, affinity);
+out:
+	mutex_unlock(&irq_domain_mutex);
+
+	return virq;
 }
 EXPORT_SYMBOL_GPL(irq_create_mapping_affinity);
 
@@ -809,6 +818,8 @@ unsigned int irq_create_fwspec_mapping(struct irq_fwspec *fwspec)
 	if (WARN_ON(type & ~IRQ_TYPE_SENSE_MASK))
 		type &= IRQ_TYPE_SENSE_MASK;
 
+	mutex_lock(&irq_domain_mutex);
+
 	/*
 	 * If we've already configured this interrupt,
 	 * don't do it again, or hell will break loose.
@@ -821,7 +832,7 @@ unsigned int irq_create_fwspec_mapping(struct irq_fwspec *fwspec)
 		 * interrupt number.
 		 */
 		if (type == IRQ_TYPE_NONE || type == irq_get_trigger_type(virq))
-			return virq;
+			goto out;
 
 		/*
 		 * If the trigger type has not been set yet, then set
@@ -830,36 +841,43 @@ unsigned int irq_create_fwspec_mapping(struct irq_fwspec *fwspec)
 		if (irq_get_trigger_type(virq) == IRQ_TYPE_NONE) {
 			irq_data = irq_get_irq_data(virq);
 			if (!irq_data)
-				return 0;
+				goto err;
 
 			irqd_set_trigger_type(irq_data, type);
-			return virq;
+			goto out;
 		}
 
 		pr_warn("type mismatch, failed to map hwirq-%lu for %s!\n",
 			hwirq, of_node_full_name(to_of_node(fwspec->fwnode)));
-		return 0;
+		goto err;
 	}
 
 	if (irq_domain_is_hierarchy(domain)) {
-		virq = irq_domain_alloc_irqs(domain, 1, NUMA_NO_NODE, fwspec);
+		virq = irq_domain_alloc_irqs_locked(domain, -1, 1, NUMA_NO_NODE,
+						    fwspec, false, NULL);
 		if (virq <= 0)
-			return 0;
+			goto err;
 	} else {
 		/* Create mapping */
-		virq = __irq_create_mapping_affinity(domain, hwirq, NULL);
+		virq = irq_create_mapping_affinity_locked(domain, hwirq, NULL);
 		if (!virq)
-			return virq;
+			goto err;
 	}
 
 	irq_data = irq_get_irq_data(virq);
 	if (WARN_ON(!irq_data))
-		return 0;
+		goto err;
 
 	/* Store trigger type */
 	irqd_set_trigger_type(irq_data, type);
+out:
+	mutex_unlock(&irq_domain_mutex);
 
 	return virq;
+err:
+	mutex_unlock(&irq_domain_mutex);
+
+	return 0;
 }
 EXPORT_SYMBOL_GPL(irq_create_fwspec_mapping);
 
@@ -1888,6 +1906,13 @@ void irq_domain_set_info(struct irq_domain *domain, unsigned int virq,
 	irq_set_handler_data(virq, handler_data);
 }
 
+static int irq_domain_alloc_irqs_locked(struct irq_domain *domain, int irq_base,
+					unsigned int nr_irqs, int node, void *arg,
+					bool realloc, const struct irq_affinity_desc *affinity)
+{
+	return -EINVAL;
+}
+
 static void irq_domain_check_hierarchy(struct irq_domain *domain)
 {
 }
-- 
2.39.1
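
In short, after this patch the lookup and the creation in
irq_create_mapping_affinity() happen in one critical section. Stripped
of error handling, the resulting flow is roughly the following
(paraphrased from the diff above, not a verbatim excerpt):

	mutex_lock(&irq_domain_mutex);

	/* Check if mapping already exists */
	virq = irq_find_mapping(domain, hwirq);
	if (!virq)
		virq = irq_create_mapping_affinity_locked(domain, hwirq, affinity);

	mutex_unlock(&irq_domain_mutex);

	return virq;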


* [PATCH v5 07/19] irqdomain: Drop revmap mutex
From: Johan Hovold @ 2023-02-09 13:23 UTC (permalink / raw)
  To: Marc Zyngier, Thomas Gleixner
  Cc: x86, platform-driver-x86, linux-arm-kernel, linux-mips,
	linux-kernel, Johan Hovold, Hsin-Yi Wang, Mark-PK Tsai

The revmap mutex is essentially only used to maintain the integrity of
the radix tree during updates (lookups use RCU).

As the global irq_domain_mutex is now held in all paths that update the
revmap structures, there is strictly no longer any need for the
dedicated mutex, which can be removed.

Drop the revmap mutex and add lockdep assertions to the revmap helpers
to make sure that the global lock is always held when updating the
revmap.

Tested-by: Hsin-Yi Wang <hsinyi@chromium.org>
Tested-by: Mark-PK Tsai <mark-pk.tsai@mediatek.com>
Signed-off-by: Johan Hovold <johan+linaro@kernel.org>
---
 include/linux/irqdomain.h |  2 --
 kernel/irq/irqdomain.c    | 13 ++++++-------
 2 files changed, 6 insertions(+), 9 deletions(-)

diff --git a/include/linux/irqdomain.h b/include/linux/irqdomain.h
index a372086750ca..16399de00b48 100644
--- a/include/linux/irqdomain.h
+++ b/include/linux/irqdomain.h
@@ -143,7 +143,6 @@ struct irq_domain_chip_generic;
  * Revmap data, used internally by the irq domain code:
  * @revmap_size:	Size of the linear map table @revmap[]
  * @revmap_tree:	Radix map tree for hwirqs that don't fit in the linear map
- * @revmap_mutex:	Lock for the revmap
  * @revmap:		Linear table of irq_data pointers
  */
 struct irq_domain {
@@ -171,7 +170,6 @@ struct irq_domain {
 	irq_hw_number_t			hwirq_max;
 	unsigned int			revmap_size;
 	struct radix_tree_root		revmap_tree;
-	struct mutex			revmap_mutex;
 	struct irq_data __rcu		*revmap[];
 };
 
diff --git a/kernel/irq/irqdomain.c b/kernel/irq/irqdomain.c
index 1ddb01bd49a4..28d74549414b 100644
--- a/kernel/irq/irqdomain.c
+++ b/kernel/irq/irqdomain.c
@@ -217,7 +217,6 @@ struct irq_domain *__irq_domain_add(struct fwnode_handle *fwnode, unsigned int s
 
 	/* Fill structure */
 	INIT_RADIX_TREE(&domain->revmap_tree, GFP_KERNEL);
-	mutex_init(&domain->revmap_mutex);
 	domain->ops = ops;
 	domain->host_data = host_data;
 	domain->hwirq_max = hwirq_max;
@@ -505,30 +504,30 @@ static bool irq_domain_is_nomap(struct irq_domain *domain)
 static void irq_domain_clear_mapping(struct irq_domain *domain,
 				     irq_hw_number_t hwirq)
 {
+	lockdep_assert_held(&irq_domain_mutex);
+
 	if (irq_domain_is_nomap(domain))
 		return;
 
-	mutex_lock(&domain->revmap_mutex);
 	if (hwirq < domain->revmap_size)
 		rcu_assign_pointer(domain->revmap[hwirq], NULL);
 	else
 		radix_tree_delete(&domain->revmap_tree, hwirq);
-	mutex_unlock(&domain->revmap_mutex);
 }
 
 static void irq_domain_set_mapping(struct irq_domain *domain,
 				   irq_hw_number_t hwirq,
 				   struct irq_data *irq_data)
 {
+	lockdep_assert_held(&irq_domain_mutex);
+
 	if (irq_domain_is_nomap(domain))
 		return;
 
-	mutex_lock(&domain->revmap_mutex);
 	if (hwirq < domain->revmap_size)
 		rcu_assign_pointer(domain->revmap[hwirq], irq_data);
 	else
 		radix_tree_insert(&domain->revmap_tree, hwirq, irq_data);
-	mutex_unlock(&domain->revmap_mutex);
 }
 
 static void irq_domain_disassociate(struct irq_domain *domain, unsigned int irq)
@@ -1553,11 +1552,12 @@ static void irq_domain_fix_revmap(struct irq_data *d)
 {
 	void __rcu **slot;
 
+	lockdep_assert_held(&irq_domain_mutex);
+
 	if (irq_domain_is_nomap(d->domain))
 		return;
 
 	/* Fix up the revmap. */
-	mutex_lock(&d->domain->revmap_mutex);
 	if (d->hwirq < d->domain->revmap_size) {
 		/* Not using radix tree */
 		rcu_assign_pointer(d->domain->revmap[d->hwirq], d);
@@ -1566,7 +1566,6 @@ static void irq_domain_fix_revmap(struct irq_data *d)
 		if (slot)
 			radix_tree_replace_slot(&d->domain->revmap_tree, slot, d);
 	}
-	mutex_unlock(&d->domain->revmap_mutex);
 }
 
 /**
-- 
2.39.1
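
The lockdep_assert_held() annotations added above make the locking rule
(irq_domain_mutex must be held when updating the revmap) verifiable at
runtime when lockdep is enabled and compile away otherwise, so they
also double as documentation. A minimal usage sketch with a
hypothetical helper (not taken from the patch):

static void example_revmap_update(struct irq_domain *domain)
{
	/* Documents, and with lockdep verifies, the caller's locking. */
	lockdep_assert_held(&irq_domain_mutex);

	/* ... safe to modify the revmap here ... */
}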


* [PATCH v5 08/19] irqdomain: Drop dead domain-name assignment
  2023-02-09 13:23 ` Johan Hovold
@ 2023-02-09 13:23   ` Johan Hovold
  -1 siblings, 0 replies; 58+ messages in thread
From: Johan Hovold @ 2023-02-09 13:23 UTC (permalink / raw)
  To: Marc Zyngier, Thomas Gleixner
  Cc: x86, platform-driver-x86, linux-arm-kernel, linux-mips,
	linux-kernel, Johan Hovold, Hsin-Yi Wang, Mark-PK Tsai

Since commit d59f6617eef0 ("genirq: Allow fwnode to carry name
information only"), an IRQ domain is always given a name during
allocation (e.g. used for the debugfs entry).

Drop the leftover name assignment when allocating the first IRQ.
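
For context, a minimal sketch (not part of this patch; the "foo" names
and the domain size are made up) of how a domain gets its name at
creation time these days, here via a named fwnode:

	#include <linux/irqdomain.h>

	static const struct irq_domain_ops foo_domain_ops; /* stub for the sketch */

	static struct irq_domain *foo_setup_domain(void)
	{
		struct fwnode_handle *fn;

		fn = irq_domain_alloc_named_fwnode("foo");
		if (!fn)
			return NULL;

		/* The domain (and debugfs) name is derived from @fn. */
		return irq_domain_create_linear(fn, 32, &foo_domain_ops, NULL);
	}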

Tested-by: Hsin-Yi Wang <hsinyi@chromium.org>
Tested-by: Mark-PK Tsai <mark-pk.tsai@mediatek.com>
Signed-off-by: Johan Hovold <johan+linaro@kernel.org>
---
 kernel/irq/irqdomain.c | 8 --------
 1 file changed, 8 deletions(-)

diff --git a/kernel/irq/irqdomain.c b/kernel/irq/irqdomain.c
index 28d74549414b..3d635b8bb465 100644
--- a/kernel/irq/irqdomain.c
+++ b/kernel/irq/irqdomain.c
@@ -598,10 +598,6 @@ static int irq_domain_associate_locked(struct irq_domain *domain, unsigned int v
 			irq_data->hwirq = 0;
 			return ret;
 		}
-
-		/* If not already assigned, give the domain the chip's name */
-		if (!domain->name && irq_data->chip)
-			domain->name = irq_data->chip->name;
 	}
 
 	domain->mapcount++;
@@ -1155,10 +1151,6 @@ static void irq_domain_insert_irq(int virq)
 
 		domain->mapcount++;
 		irq_domain_set_mapping(domain, data->hwirq, data);
-
-		/* If not already assigned, give the domain the chip's name */
-		if (!domain->name && data->chip)
-			domain->name = data->chip->name;
 	}
 
 	irq_clear_status_flags(virq, IRQ_NOREQUEST);
-- 
2.39.1


^ permalink raw reply related	[flat|nested] 58+ messages in thread

* [PATCH v5 09/19] irqdomain: Drop leftover brackets
  2023-02-09 13:23 ` Johan Hovold
@ 2023-02-09 13:23   ` Johan Hovold
  -1 siblings, 0 replies; 58+ messages in thread
From: Johan Hovold @ 2023-02-09 13:23 UTC (permalink / raw)
  To: Marc Zyngier, Thomas Gleixner
  Cc: x86, platform-driver-x86, linux-arm-kernel, linux-mips,
	linux-kernel, Johan Hovold, Philippe Mathieu-Daudé,
	Hsin-Yi Wang, Mark-PK Tsai

Drop some unnecessary brackets that were left in place when the
corresponding code was updated.

Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Tested-by: Hsin-Yi Wang <hsinyi@chromium.org>
Tested-by: Mark-PK Tsai <mark-pk.tsai@mediatek.com>
Signed-off-by: Johan Hovold <johan+linaro@kernel.org>
---
 kernel/irq/irqdomain.c | 6 ++----
 1 file changed, 2 insertions(+), 4 deletions(-)

diff --git a/kernel/irq/irqdomain.c b/kernel/irq/irqdomain.c
index 3d635b8bb465..a1c6e01b395e 100644
--- a/kernel/irq/irqdomain.c
+++ b/kernel/irq/irqdomain.c
@@ -221,9 +221,8 @@ struct irq_domain *__irq_domain_add(struct fwnode_handle *fwnode, unsigned int s
 	domain->host_data = host_data;
 	domain->hwirq_max = hwirq_max;
 
-	if (direct_max) {
+	if (direct_max)
 		domain->flags |= IRQ_DOMAIN_FLAG_NO_MAP;
-	}
 
 	domain->revmap_size = size;
 
@@ -631,9 +630,8 @@ void irq_domain_associate_many(struct irq_domain *domain, unsigned int irq_base,
 	pr_debug("%s(%s, irqbase=%i, hwbase=%i, count=%i)\n", __func__,
 		of_node_full_name(of_node), irq_base, (int)hwirq_base, count);
 
-	for (i = 0; i < count; i++) {
+	for (i = 0; i < count; i++)
 		irq_domain_associate(domain, irq_base + i, hwirq_base + i);
-	}
 }
 EXPORT_SYMBOL_GPL(irq_domain_associate_many);
 
-- 
2.39.1


^ permalink raw reply related	[flat|nested] 58+ messages in thread

* [PATCH v5 10/19] irqdomain: Clean up irq_domain_push/pop_irq()
  2023-02-09 13:23 ` Johan Hovold
@ 2023-02-09 13:23   ` Johan Hovold
  -1 siblings, 0 replies; 58+ messages in thread
From: Johan Hovold @ 2023-02-09 13:23 UTC (permalink / raw)
  To: Marc Zyngier, Thomas Gleixner
  Cc: x86, platform-driver-x86, linux-arm-kernel, linux-mips,
	linux-kernel, Johan Hovold, Philippe Mathieu-Daudé,
	Hsin-Yi Wang, Mark-PK Tsai

The irq_domain_push_irq() interface is used to add a new (outermost)
level to a hierarchical domain after IRQs have been allocated.

Possibly due to differing mental images of hierarchical domains, the
names used for the irq_data variables make these functions much harder
to understand than they need to be.

Rename the struct irq_data pointer to the data embedded in the
descriptor as simply 'irq_data' and refer to the data allocated by this
interface as 'parent_irq_data' so that the names reflect how
hierarchical domains are implemented.
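
As a reminder of the structure these names reflect (illustrative
sketch, not part of the patch; the function name is made up and
CONFIG_IRQ_DOMAIN_HIERARCHY is assumed): the irq_data embedded in the
irq_desc belongs to the outermost domain, and each ->parent_data link
points one level further towards the innermost (root) domain:

	#include <linux/irq.h>
	#include <linux/irqdomain.h>
	#include <linux/printk.h>

	static void foo_dump_hierarchy(unsigned int virq)
	{
		struct irq_data *irq_data = irq_get_irq_data(virq);

		/* Outermost domain first, innermost (root) domain last. */
		for (; irq_data; irq_data = irq_data->parent_data)
			pr_info("%s: hwirq %lu\n", irq_data->domain->name,
				(unsigned long)irq_data->hwirq);
	}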

Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Tested-by: Hsin-Yi Wang <hsinyi@chromium.org>
Tested-by: Mark-PK Tsai <mark-pk.tsai@mediatek.com>
Signed-off-by: Johan Hovold <johan+linaro@kernel.org>
---
 kernel/irq/irqdomain.c | 65 +++++++++++++++++++++---------------------
 1 file changed, 32 insertions(+), 33 deletions(-)

diff --git a/kernel/irq/irqdomain.c b/kernel/irq/irqdomain.c
index a1c6e01b395e..804d316329c8 100644
--- a/kernel/irq/irqdomain.c
+++ b/kernel/irq/irqdomain.c
@@ -1571,8 +1571,8 @@ static void irq_domain_fix_revmap(struct irq_data *d)
  */
 int irq_domain_push_irq(struct irq_domain *domain, int virq, void *arg)
 {
-	struct irq_data *child_irq_data;
-	struct irq_data *root_irq_data = irq_get_irq_data(virq);
+	struct irq_data *irq_data = irq_get_irq_data(virq);
+	struct irq_data *parent_irq_data;
 	struct irq_desc *desc;
 	int rv = 0;
 
@@ -1597,45 +1597,44 @@ int irq_domain_push_irq(struct irq_domain *domain, int virq, void *arg)
 	if (WARN_ON(!irq_domain_is_hierarchy(domain)))
 		return -EINVAL;
 
-	if (!root_irq_data)
+	if (!irq_data)
 		return -EINVAL;
 
-	if (domain->parent != root_irq_data->domain)
+	if (domain->parent != irq_data->domain)
 		return -EINVAL;
 
-	child_irq_data = kzalloc_node(sizeof(*child_irq_data), GFP_KERNEL,
-				      irq_data_get_node(root_irq_data));
-	if (!child_irq_data)
+	parent_irq_data = kzalloc_node(sizeof(*parent_irq_data), GFP_KERNEL,
+				       irq_data_get_node(irq_data));
+	if (!parent_irq_data)
 		return -ENOMEM;
 
 	mutex_lock(&irq_domain_mutex);
 
 	/* Copy the original irq_data. */
-	*child_irq_data = *root_irq_data;
+	*parent_irq_data = *irq_data;
 
 	/*
-	 * Overwrite the root_irq_data, which is embedded in struct
-	 * irq_desc, with values for this domain.
+	 * Overwrite the irq_data, which is embedded in struct irq_desc, with
+	 * values for this domain.
 	 */
-	root_irq_data->parent_data = child_irq_data;
-	root_irq_data->domain = domain;
-	root_irq_data->mask = 0;
-	root_irq_data->hwirq = 0;
-	root_irq_data->chip = NULL;
-	root_irq_data->chip_data = NULL;
+	irq_data->parent_data = parent_irq_data;
+	irq_data->domain = domain;
+	irq_data->mask = 0;
+	irq_data->hwirq = 0;
+	irq_data->chip = NULL;
+	irq_data->chip_data = NULL;
 
 	/* May (probably does) set hwirq, chip, etc. */
 	rv = irq_domain_alloc_irqs_hierarchy(domain, virq, 1, arg);
 	if (rv) {
 		/* Restore the original irq_data. */
-		*root_irq_data = *child_irq_data;
-		kfree(child_irq_data);
+		*irq_data = *parent_irq_data;
+		kfree(parent_irq_data);
 		goto error;
 	}
 
-	irq_domain_fix_revmap(child_irq_data);
-	irq_domain_set_mapping(domain, root_irq_data->hwirq, root_irq_data);
-
+	irq_domain_fix_revmap(parent_irq_data);
+	irq_domain_set_mapping(domain, irq_data->hwirq, irq_data);
 error:
 	mutex_unlock(&irq_domain_mutex);
 
@@ -1653,8 +1652,8 @@ EXPORT_SYMBOL_GPL(irq_domain_push_irq);
  */
 int irq_domain_pop_irq(struct irq_domain *domain, int virq)
 {
-	struct irq_data *root_irq_data = irq_get_irq_data(virq);
-	struct irq_data *child_irq_data;
+	struct irq_data *irq_data = irq_get_irq_data(virq);
+	struct irq_data *parent_irq_data;
 	struct irq_data *tmp_irq_data;
 	struct irq_desc *desc;
 
@@ -1676,37 +1675,37 @@ int irq_domain_pop_irq(struct irq_domain *domain, int virq)
 	if (domain == NULL)
 		return -EINVAL;
 
-	if (!root_irq_data)
+	if (!irq_data)
 		return -EINVAL;
 
 	tmp_irq_data = irq_domain_get_irq_data(domain, virq);
 
 	/* We can only "pop" if this domain is at the top of the list */
-	if (WARN_ON(root_irq_data != tmp_irq_data))
+	if (WARN_ON(irq_data != tmp_irq_data))
 		return -EINVAL;
 
-	if (WARN_ON(root_irq_data->domain != domain))
+	if (WARN_ON(irq_data->domain != domain))
 		return -EINVAL;
 
-	child_irq_data = root_irq_data->parent_data;
-	if (WARN_ON(!child_irq_data))
+	parent_irq_data = irq_data->parent_data;
+	if (WARN_ON(!parent_irq_data))
 		return -EINVAL;
 
 	mutex_lock(&irq_domain_mutex);
 
-	root_irq_data->parent_data = NULL;
+	irq_data->parent_data = NULL;
 
-	irq_domain_clear_mapping(domain, root_irq_data->hwirq);
+	irq_domain_clear_mapping(domain, irq_data->hwirq);
 	irq_domain_free_irqs_hierarchy(domain, virq, 1);
 
 	/* Restore the original irq_data. */
-	*root_irq_data = *child_irq_data;
+	*irq_data = *parent_irq_data;
 
-	irq_domain_fix_revmap(root_irq_data);
+	irq_domain_fix_revmap(irq_data);
 
 	mutex_unlock(&irq_domain_mutex);
 
-	kfree(child_irq_data);
+	kfree(parent_irq_data);
 
 	return 0;
 }
-- 
2.39.1


^ permalink raw reply related	[flat|nested] 58+ messages in thread

* [PATCH v5 11/19] x86/ioapic: Use irq_domain_create_hierarchy()
  2023-02-09 13:23 ` Johan Hovold
@ 2023-02-09 13:23   ` Johan Hovold
  -1 siblings, 0 replies; 58+ messages in thread
From: Johan Hovold @ 2023-02-09 13:23 UTC (permalink / raw)
  To: Marc Zyngier, Thomas Gleixner
  Cc: x86, platform-driver-x86, linux-arm-kernel, linux-mips,
	linux-kernel, Johan Hovold, Philippe Mathieu-Daudé,
	Hsin-Yi Wang, Mark-PK Tsai

Use the irq_domain_create_hierarchy() helper to create the hierarchical
domain, which both serves as documentation and avoids poking at
irqdomain internals.
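
The same conversion recurs in the remaining driver patches of this
series; as a generic sketch of the pattern (illustrative "foo" names,
not taken from any particular driver):

	/* Before: create the domain, then poke at its internals. */
	d = irq_domain_create_linear(fwnode, size, &foo_domain_ops, host_data);
	if (d)
		d->parent = parent;

	/* After: let the core set up the hierarchy in one go. */
	d = irq_domain_create_hierarchy(parent, 0, size, fwnode,
					&foo_domain_ops, host_data);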

Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Tested-by: Hsin-Yi Wang <hsinyi@chromium.org>
Tested-by: Mark-PK Tsai <mark-pk.tsai@mediatek.com>
Signed-off-by: Johan Hovold <johan+linaro@kernel.org>
---
 arch/x86/kernel/apic/io_apic.c | 7 ++-----
 1 file changed, 2 insertions(+), 5 deletions(-)

diff --git a/arch/x86/kernel/apic/io_apic.c b/arch/x86/kernel/apic/io_apic.c
index a868b76cd3d4..1f83b052bb74 100644
--- a/arch/x86/kernel/apic/io_apic.c
+++ b/arch/x86/kernel/apic/io_apic.c
@@ -2364,9 +2364,8 @@ static int mp_irqdomain_create(int ioapic)
 		return -ENODEV;
 	}
 
-	ip->irqdomain = irq_domain_create_linear(fn, hwirqs, cfg->ops,
-						 (void *)(long)ioapic);
-
+	ip->irqdomain = irq_domain_create_hierarchy(parent, 0, hwirqs, fn, cfg->ops,
+						    (void *)(long)ioapic);
 	if (!ip->irqdomain) {
 		/* Release fw handle if it was allocated above */
 		if (!cfg->dev)
@@ -2374,8 +2373,6 @@ static int mp_irqdomain_create(int ioapic)
 		return -ENOMEM;
 	}
 
-	ip->irqdomain->parent = parent;
-
 	if (cfg->type == IOAPIC_DOMAIN_LEGACY ||
 	    cfg->type == IOAPIC_DOMAIN_STRICT)
 		ioapic_dynirq_base = max(ioapic_dynirq_base,
-- 
2.39.1


^ permalink raw reply related	[flat|nested] 58+ messages in thread

* [PATCH v5 12/19] x86/apic: Use irq_domain_create_hierarchy()
  2023-02-09 13:23 ` Johan Hovold
@ 2023-02-09 13:23   ` Johan Hovold
  -1 siblings, 0 replies; 58+ messages in thread
From: Johan Hovold @ 2023-02-09 13:23 UTC (permalink / raw)
  To: Marc Zyngier, Thomas Gleixner
  Cc: x86, platform-driver-x86, linux-arm-kernel, linux-mips,
	linux-kernel, Johan Hovold, Philippe Mathieu-Daudé,
	Hsin-Yi Wang, Mark-PK Tsai

Use the irq_domain_create_hierarchy() helper to create the hierarchical
domain, which both serves as documentation and avoids poking at
irqdomain internals.

Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Tested-by: Hsin-Yi Wang <hsinyi@chromium.org>
Tested-by: Mark-PK Tsai <mark-pk.tsai@mediatek.com>
Signed-off-by: Johan Hovold <johan+linaro@kernel.org>
---
 arch/x86/platform/uv/uv_irq.c | 7 +++----
 1 file changed, 3 insertions(+), 4 deletions(-)

diff --git a/arch/x86/platform/uv/uv_irq.c b/arch/x86/platform/uv/uv_irq.c
index 1a536a187d74..ee21d6a36a80 100644
--- a/arch/x86/platform/uv/uv_irq.c
+++ b/arch/x86/platform/uv/uv_irq.c
@@ -166,10 +166,9 @@ static struct irq_domain *uv_get_irq_domain(void)
 	if (!fn)
 		goto out;
 
-	uv_domain = irq_domain_create_tree(fn, &uv_domain_ops, NULL);
-	if (uv_domain)
-		uv_domain->parent = x86_vector_domain;
-	else
+	uv_domain = irq_domain_create_hierarchy(x86_vector_domain, 0, 0, fn,
+						&uv_domain_ops, NULL);
+	if (!uv_domain)
 		irq_domain_free_fwnode(fn);
 out:
 	mutex_unlock(&uv_lock);
-- 
2.39.1


^ permalink raw reply related	[flat|nested] 58+ messages in thread

* [PATCH v5 13/19] irqchip/alpine-msi: Use irq_domain_add_hierarchy()
  2023-02-09 13:23 ` Johan Hovold
@ 2023-02-09 13:23   ` Johan Hovold
  -1 siblings, 0 replies; 58+ messages in thread
From: Johan Hovold @ 2023-02-09 13:23 UTC (permalink / raw)
  To: Marc Zyngier, Thomas Gleixner
  Cc: x86, platform-driver-x86, linux-arm-kernel, linux-mips,
	linux-kernel, Johan Hovold, Philippe Mathieu-Daudé,
	Hsin-Yi Wang, Mark-PK Tsai

Use the irq_domain_add_hierarchy() helper to create the hierarchical
domain, which both serves as documentation and avoids poking at
irqdomain internals.

Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Tested-by: Hsin-Yi Wang <hsinyi@chromium.org>
Tested-by: Mark-PK Tsai <mark-pk.tsai@mediatek.com>
Signed-off-by: Johan Hovold <johan+linaro@kernel.org>
---
 drivers/irqchip/irq-alpine-msi.c | 8 +++-----
 1 file changed, 3 insertions(+), 5 deletions(-)

diff --git a/drivers/irqchip/irq-alpine-msi.c b/drivers/irqchip/irq-alpine-msi.c
index 5ddb8e578ac6..604459372fdd 100644
--- a/drivers/irqchip/irq-alpine-msi.c
+++ b/drivers/irqchip/irq-alpine-msi.c
@@ -204,16 +204,14 @@ static int alpine_msix_init_domains(struct alpine_msix_data *priv,
 		return -ENXIO;
 	}
 
-	middle_domain = irq_domain_add_tree(NULL,
-					    &alpine_msix_middle_domain_ops,
-					    priv);
+	middle_domain = irq_domain_add_hierarchy(gic_domain, 0, 0, NULL,
+						 &alpine_msix_middle_domain_ops,
+						 priv);
 	if (!middle_domain) {
 		pr_err("Failed to create the MSIX middle domain\n");
 		return -ENOMEM;
 	}
 
-	middle_domain->parent = gic_domain;
-
 	msi_domain = pci_msi_create_irq_domain(of_node_to_fwnode(node),
 					       &alpine_msix_domain_info,
 					       middle_domain);
-- 
2.39.1


^ permalink raw reply related	[flat|nested] 58+ messages in thread

* [PATCH v5 14/19] irqchip/gic-v2m: Use irq_domain_create_hierarchy()
  2023-02-09 13:23 ` Johan Hovold
@ 2023-02-09 13:23   ` Johan Hovold
  -1 siblings, 0 replies; 58+ messages in thread
From: Johan Hovold @ 2023-02-09 13:23 UTC (permalink / raw)
  To: Marc Zyngier, Thomas Gleixner
  Cc: x86, platform-driver-x86, linux-arm-kernel, linux-mips,
	linux-kernel, Johan Hovold, Philippe Mathieu-Daudé,
	Hsin-Yi Wang, Mark-PK Tsai

Use the irq_domain_create_hierarchy() helper to create the hierarchical
domain, which both serves as documentation and avoids poking at
irqdomain internals.

Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Tested-by: Hsin-Yi Wang <hsinyi@chromium.org>
Tested-by: Mark-PK Tsai <mark-pk.tsai@mediatek.com>
Signed-off-by: Johan Hovold <johan+linaro@kernel.org>
---
 drivers/irqchip/irq-gic-v2m.c | 5 ++---
 1 file changed, 2 insertions(+), 3 deletions(-)

diff --git a/drivers/irqchip/irq-gic-v2m.c b/drivers/irqchip/irq-gic-v2m.c
index f4d7eeb13951..f1e75b35a52a 100644
--- a/drivers/irqchip/irq-gic-v2m.c
+++ b/drivers/irqchip/irq-gic-v2m.c
@@ -287,15 +287,14 @@ static __init int gicv2m_allocate_domains(struct irq_domain *parent)
 	if (!v2m)
 		return 0;
 
-	inner_domain = irq_domain_create_tree(v2m->fwnode,
-					      &gicv2m_domain_ops, v2m);
+	inner_domain = irq_domain_create_hierarchy(parent, 0, 0, v2m->fwnode,
+						   &gicv2m_domain_ops, v2m);
 	if (!inner_domain) {
 		pr_err("Failed to create GICv2m domain\n");
 		return -ENOMEM;
 	}
 
 	irq_domain_update_bus_token(inner_domain, DOMAIN_BUS_NEXUS);
-	inner_domain->parent = parent;
 	pci_domain = pci_msi_create_irq_domain(v2m->fwnode,
 					       &gicv2m_msi_domain_info,
 					       inner_domain);
-- 
2.39.1


^ permalink raw reply related	[flat|nested] 58+ messages in thread

* [PATCH v5 15/19] irqchip/gic-v3-its: Use irq_domain_create_hierarchy()
  2023-02-09 13:23 ` Johan Hovold
@ 2023-02-09 13:23   ` Johan Hovold
  -1 siblings, 0 replies; 58+ messages in thread
From: Johan Hovold @ 2023-02-09 13:23 UTC (permalink / raw)
  To: Marc Zyngier, Thomas Gleixner
  Cc: x86, platform-driver-x86, linux-arm-kernel, linux-mips,
	linux-kernel, Johan Hovold, Hsin-Yi Wang, Mark-PK Tsai

Use the irq_domain_create_hierarchy() helper to create the hierarchical
domain, which both serves as documentation and avoids poking at
irqdomain internals.

Note that the domain host_data was first set to the struct its_node
during allocation only to immediately be overwritten with the struct
msi_domain_info.

Tested-by: Hsin-Yi Wang <hsinyi@chromium.org>
Tested-by: Mark-PK Tsai <mark-pk.tsai@mediatek.com>
Signed-off-by: Johan Hovold <johan+linaro@kernel.org>
---
 drivers/irqchip/irq-gic-v3-its.c | 13 +++++++------
 1 file changed, 7 insertions(+), 6 deletions(-)

diff --git a/drivers/irqchip/irq-gic-v3-its.c b/drivers/irqchip/irq-gic-v3-its.c
index 973ede0197e3..5634d29b644d 100644
--- a/drivers/irqchip/irq-gic-v3-its.c
+++ b/drivers/irqchip/irq-gic-v3-its.c
@@ -4909,18 +4909,19 @@ static int its_init_domain(struct fwnode_handle *handle, struct its_node *its)
 	if (!info)
 		return -ENOMEM;
 
-	inner_domain = irq_domain_create_tree(handle, &its_domain_ops, its);
+	info->ops = &its_msi_domain_ops;
+	info->data = its;
+
+	inner_domain = irq_domain_create_hierarchy(its_parent,
+						   its->msi_domain_flags, 0,
+						   handle, &its_domain_ops,
+						   info);
 	if (!inner_domain) {
 		kfree(info);
 		return -ENOMEM;
 	}
 
-	inner_domain->parent = its_parent;
 	irq_domain_update_bus_token(inner_domain, DOMAIN_BUS_NEXUS);
-	inner_domain->flags |= its->msi_domain_flags;
-	info->ops = &its_msi_domain_ops;
-	info->data = its;
-	inner_domain->host_data = info;
 
 	return 0;
 }
-- 
2.39.1


^ permalink raw reply related	[flat|nested] 58+ messages in thread

* [PATCH v5 16/19] irqchip/gic-v3-mbi: Use irq_domain_create_hierarchy()
  2023-02-09 13:23 ` Johan Hovold
@ 2023-02-09 13:23   ` Johan Hovold
  -1 siblings, 0 replies; 58+ messages in thread
From: Johan Hovold @ 2023-02-09 13:23 UTC (permalink / raw)
  To: Marc Zyngier, Thomas Gleixner
  Cc: x86, platform-driver-x86, linux-arm-kernel, linux-mips,
	linux-kernel, Johan Hovold, Philippe Mathieu-Daudé,
	Hsin-Yi Wang, Mark-PK Tsai

Use the irq_domain_create_hierarchy() helper to create the hierarchical
domain, which both serves as documentation and avoids poking at
irqdomain internals.

Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Tested-by: Hsin-Yi Wang <hsinyi@chromium.org>
Tested-by: Mark-PK Tsai <mark-pk.tsai@mediatek.com>
Signed-off-by: Johan Hovold <johan+linaro@kernel.org>
---
 drivers/irqchip/irq-gic-v3-mbi.c | 5 ++---
 1 file changed, 2 insertions(+), 3 deletions(-)

diff --git a/drivers/irqchip/irq-gic-v3-mbi.c b/drivers/irqchip/irq-gic-v3-mbi.c
index e1efdec9e9ac..dbb8b1efda44 100644
--- a/drivers/irqchip/irq-gic-v3-mbi.c
+++ b/drivers/irqchip/irq-gic-v3-mbi.c
@@ -233,13 +233,12 @@ static int mbi_allocate_domains(struct irq_domain *parent)
 	struct irq_domain *nexus_domain, *pci_domain, *plat_domain;
 	int err;
 
-	nexus_domain = irq_domain_create_tree(parent->fwnode,
-					      &mbi_domain_ops, NULL);
+	nexus_domain = irq_domain_create_hierarchy(parent, 0, 0, parent->fwnode,
+						   &mbi_domain_ops, NULL);
 	if (!nexus_domain)
 		return -ENOMEM;
 
 	irq_domain_update_bus_token(nexus_domain, DOMAIN_BUS_NEXUS);
-	nexus_domain->parent = parent;
 
 	err = mbi_allocate_pci_domain(nexus_domain, &pci_domain);
 
-- 
2.39.1


^ permalink raw reply related	[flat|nested] 58+ messages in thread

* [PATCH v5 17/19] irqchip/loongson-pch-msi: Use irq_domain_create_hierarchy()
  2023-02-09 13:23 ` Johan Hovold
@ 2023-02-09 13:23   ` Johan Hovold
  -1 siblings, 0 replies; 58+ messages in thread
From: Johan Hovold @ 2023-02-09 13:23 UTC (permalink / raw)
  To: Marc Zyngier, Thomas Gleixner
  Cc: x86, platform-driver-x86, linux-arm-kernel, linux-mips,
	linux-kernel, Johan Hovold, Philippe Mathieu-Daudé,
	Hsin-Yi Wang, Mark-PK Tsai

Use the irq_domain_create_hierarchy() helper to create the hierarchical
domain, which both serves as documentation and avoids poking at
irqdomain internals.

Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Tested-by: Hsin-Yi Wang <hsinyi@chromium.org>
Tested-by: Mark-PK Tsai <mark-pk.tsai@mediatek.com>
Signed-off-by: Johan Hovold <johan+linaro@kernel.org>
---
 drivers/irqchip/irq-loongson-pch-msi.c | 9 ++++-----
 1 file changed, 4 insertions(+), 5 deletions(-)

diff --git a/drivers/irqchip/irq-loongson-pch-msi.c b/drivers/irqchip/irq-loongson-pch-msi.c
index a72ede90ffc6..6e1e1f011bb2 100644
--- a/drivers/irqchip/irq-loongson-pch-msi.c
+++ b/drivers/irqchip/irq-loongson-pch-msi.c
@@ -163,16 +163,15 @@ static int pch_msi_init_domains(struct pch_msi_data *priv,
 {
 	struct irq_domain *middle_domain, *msi_domain;
 
-	middle_domain = irq_domain_create_linear(domain_handle,
-						priv->num_irqs,
-						&pch_msi_middle_domain_ops,
-						priv);
+	middle_domain = irq_domain_create_hierarchy(parent, 0, priv->num_irqs,
+						    domain_handle,
+						    &pch_msi_middle_domain_ops,
+						    priv);
 	if (!middle_domain) {
 		pr_err("Failed to create the MSI middle domain\n");
 		return -ENOMEM;
 	}
 
-	middle_domain->parent = parent;
 	irq_domain_update_bus_token(middle_domain, DOMAIN_BUS_NEXUS);
 
 	msi_domain = pci_msi_create_irq_domain(domain_handle,
-- 
2.39.1


^ permalink raw reply related	[flat|nested] 58+ messages in thread

* [PATCH v5 18/19] irqchip/mvebu-odmi: Use irq_domain_create_hierarchy()
  2023-02-09 13:23 ` Johan Hovold
@ 2023-02-09 13:23   ` Johan Hovold
  -1 siblings, 0 replies; 58+ messages in thread
From: Johan Hovold @ 2023-02-09 13:23 UTC (permalink / raw)
  To: Marc Zyngier, Thomas Gleixner
  Cc: x86, platform-driver-x86, linux-arm-kernel, linux-mips,
	linux-kernel, Johan Hovold, Philippe Mathieu-Daudé,
	Hsin-Yi Wang, Mark-PK Tsai

Use the irq_domain_create_hierarchy() helper to create the hierarchical
domain, which both serves as documentation and avoids poking at
irqdomain internals.

Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Tested-by: Hsin-Yi Wang <hsinyi@chromium.org>
Tested-by: Mark-PK Tsai <mark-pk.tsai@mediatek.com>
Signed-off-by: Johan Hovold <johan+linaro@kernel.org>
---
 drivers/irqchip/irq-mvebu-odmi.c | 13 +++++++------
 1 file changed, 7 insertions(+), 6 deletions(-)

diff --git a/drivers/irqchip/irq-mvebu-odmi.c b/drivers/irqchip/irq-mvebu-odmi.c
index dc4145abdd6f..108091533e10 100644
--- a/drivers/irqchip/irq-mvebu-odmi.c
+++ b/drivers/irqchip/irq-mvebu-odmi.c
@@ -161,7 +161,7 @@ static struct msi_domain_info odmi_msi_domain_info = {
 static int __init mvebu_odmi_init(struct device_node *node,
 				  struct device_node *parent)
 {
-	struct irq_domain *inner_domain, *plat_domain;
+	struct irq_domain *parent_domain, *inner_domain, *plat_domain;
 	int ret, i;
 
 	if (of_property_read_u32(node, "marvell,odmi-frames", &odmis_count))
@@ -197,16 +197,17 @@ static int __init mvebu_odmi_init(struct device_node *node,
 		}
 	}
 
-	inner_domain = irq_domain_create_linear(of_node_to_fwnode(node),
-						odmis_count * NODMIS_PER_FRAME,
-						&odmi_domain_ops, NULL);
+	parent_domain = irq_find_host(parent);
+
+	inner_domain = irq_domain_create_hierarchy(parent_domain, 0,
+						   odmis_count * NODMIS_PER_FRAME,
+						   of_node_to_fwnode(node),
+						   &odmi_domain_ops, NULL);
 	if (!inner_domain) {
 		ret = -ENOMEM;
 		goto err_unmap;
 	}
 
-	inner_domain->parent = irq_find_host(parent);
-
 	plat_domain = platform_msi_create_irq_domain(of_node_to_fwnode(node),
 						     &odmi_msi_domain_info,
 						     inner_domain);
-- 
2.39.1


^ permalink raw reply related	[flat|nested] 58+ messages in thread

* [PATCH v5 19/19] irqdomain: Switch to per-domain locking
  2023-02-09 13:23 ` Johan Hovold
@ 2023-02-09 13:23   ` Johan Hovold
  -1 siblings, 0 replies; 58+ messages in thread
From: Johan Hovold @ 2023-02-09 13:23 UTC (permalink / raw)
  To: Marc Zyngier, Thomas Gleixner
  Cc: x86, platform-driver-x86, linux-arm-kernel, linux-mips,
	linux-kernel, Johan Hovold, Hsin-Yi Wang, Mark-PK Tsai

The IRQ domain structures are currently protected by the global
irq_domain_mutex. Switch to using more fine-grained per-domain locking,
which can speed up parallel probing by reducing lock contention.

On a recent arm64 laptop, for example, the total time spent waiting for
the locks during boot drops from 160 to 40 ms on average, while the
maximum aggregate wait time over ten runs drops from 550 to 90 ms.

Note that the domain lock of the root domain (innermost domain) must be
used for hierarchical domains. For non-hierarchical domains (as for root
domains), the new root pointer is set to the domain itself so that
domain->root->mutex can be used in shared code paths.

Also note that hierarchical domains should be constructed using
irq_domain_create_hierarchy() (or irq_domain_add_hierarchy()) to avoid
poking at irqdomain internals. As a safeguard, the lockdep assertion in
irq_domain_set_mapping() will catch any offenders that fail to set the
root domain pointer.
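
To illustrate the resulting locking rule (sketch only; the function and
its name are made up), shared paths now take the mutex of the root
domain so that every domain in a hierarchy serialises on the same lock:

	#include <linux/irqdomain.h>
	#include <linux/mutex.h>

	static void foo_domain_update(struct irq_domain *domain)
	{
		/* Non-hierarchical domains have domain->root == domain. */
		mutex_lock(&domain->root->mutex);

		/* ... update the mappings/revmap of this hierarchy ... */

		mutex_unlock(&domain->root->mutex);
	}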

Tested-by: Hsin-Yi Wang <hsinyi@chromium.org>
Tested-by: Mark-PK Tsai <mark-pk.tsai@mediatek.com>
Signed-off-by: Johan Hovold <johan+linaro@kernel.org>
---
 include/linux/irqdomain.h |  4 +++
 kernel/irq/irqdomain.c    | 61 +++++++++++++++++++++++++--------------
 2 files changed, 44 insertions(+), 21 deletions(-)

diff --git a/include/linux/irqdomain.h b/include/linux/irqdomain.h
index 16399de00b48..cad47737a052 100644
--- a/include/linux/irqdomain.h
+++ b/include/linux/irqdomain.h
@@ -125,6 +125,8 @@ struct irq_domain_chip_generic;
  *		core code.
  * @flags:	Per irq_domain flags
  * @mapcount:	The number of mapped interrupts
+ * @mutex:	Domain lock, hierarchical domains use root domain's lock
+ * @root:	Pointer to root domain, or containing structure if non-hierarchical
  *
  * Optional elements:
  * @fwnode:	Pointer to firmware node associated with the irq_domain. Pretty easy
@@ -152,6 +154,8 @@ struct irq_domain {
 	void				*host_data;
 	unsigned int			flags;
 	unsigned int			mapcount;
+	struct mutex			mutex;
+	struct irq_domain		*root;
 
 	/* Optional data */
 	struct fwnode_handle		*fwnode;
diff --git a/kernel/irq/irqdomain.c b/kernel/irq/irqdomain.c
index 804d316329c8..c96aa5e5a94b 100644
--- a/kernel/irq/irqdomain.c
+++ b/kernel/irq/irqdomain.c
@@ -226,6 +226,17 @@ struct irq_domain *__irq_domain_add(struct fwnode_handle *fwnode, unsigned int s
 
 	domain->revmap_size = size;
 
+	/*
+	 * Hierarchical domains use the domain lock of the root domain
+	 * (innermost domain).
+	 *
+	 * For non-hierarchical domains (as for root domains), the root
+	 * pointer is set to the domain itself so that domain->root->mutex
+	 * can be used in shared code paths.
+	 */
+	mutex_init(&domain->mutex);
+	domain->root = domain;
+
 	irq_domain_check_hierarchy(domain);
 
 	mutex_lock(&irq_domain_mutex);
@@ -503,7 +514,7 @@ static bool irq_domain_is_nomap(struct irq_domain *domain)
 static void irq_domain_clear_mapping(struct irq_domain *domain,
 				     irq_hw_number_t hwirq)
 {
-	lockdep_assert_held(&irq_domain_mutex);
+	lockdep_assert_held(&domain->root->mutex);
 
 	if (irq_domain_is_nomap(domain))
 		return;
@@ -518,7 +529,11 @@ static void irq_domain_set_mapping(struct irq_domain *domain,
 				   irq_hw_number_t hwirq,
 				   struct irq_data *irq_data)
 {
-	lockdep_assert_held(&irq_domain_mutex);
+	/*
+	 * This also makes sure that all domains point to the same root when
+	 * called from irq_domain_insert_irq() for each domain in a hierarchy.
+	 */
+	lockdep_assert_held(&domain->root->mutex);
 
 	if (irq_domain_is_nomap(domain))
 		return;
@@ -540,7 +555,7 @@ static void irq_domain_disassociate(struct irq_domain *domain, unsigned int irq)
 
 	hwirq = irq_data->hwirq;
 
-	mutex_lock(&irq_domain_mutex);
+	mutex_lock(&domain->mutex);
 
 	irq_set_status_flags(irq, IRQ_NOREQUEST);
 
@@ -562,7 +577,7 @@ static void irq_domain_disassociate(struct irq_domain *domain, unsigned int irq)
 	/* Clear reverse map for this hwirq */
 	irq_domain_clear_mapping(domain, hwirq);
 
-	mutex_unlock(&irq_domain_mutex);
+	mutex_unlock(&domain->mutex);
 }
 
 static int irq_domain_associate_locked(struct irq_domain *domain, unsigned int virq,
@@ -612,9 +627,9 @@ int irq_domain_associate(struct irq_domain *domain, unsigned int virq,
 {
 	int ret;
 
-	mutex_lock(&irq_domain_mutex);
+	mutex_lock(&domain->mutex);
 	ret = irq_domain_associate_locked(domain, virq, hwirq);
-	mutex_unlock(&irq_domain_mutex);
+	mutex_unlock(&domain->mutex);
 
 	return ret;
 }
@@ -731,7 +746,7 @@ unsigned int irq_create_mapping_affinity(struct irq_domain *domain,
 		return 0;
 	}
 
-	mutex_lock(&irq_domain_mutex);
+	mutex_lock(&domain->mutex);
 
 	/* Check if mapping already exists */
 	virq = irq_find_mapping(domain, hwirq);
@@ -742,7 +757,7 @@ unsigned int irq_create_mapping_affinity(struct irq_domain *domain,
 
 	virq = irq_create_mapping_affinity_locked(domain, hwirq, affinity);
 out:
-	mutex_unlock(&irq_domain_mutex);
+	mutex_unlock(&domain->mutex);
 
 	return virq;
 }
@@ -811,7 +826,7 @@ unsigned int irq_create_fwspec_mapping(struct irq_fwspec *fwspec)
 	if (WARN_ON(type & ~IRQ_TYPE_SENSE_MASK))
 		type &= IRQ_TYPE_SENSE_MASK;
 
-	mutex_lock(&irq_domain_mutex);
+	mutex_lock(&domain->root->mutex);
 
 	/*
 	 * If we've already configured this interrupt,
@@ -864,11 +879,11 @@ unsigned int irq_create_fwspec_mapping(struct irq_fwspec *fwspec)
 	/* Store trigger type */
 	irqd_set_trigger_type(irq_data, type);
 out:
-	mutex_unlock(&irq_domain_mutex);
+	mutex_unlock(&domain->root->mutex);
 
 	return virq;
 err:
-	mutex_unlock(&irq_domain_mutex);
+	mutex_unlock(&domain->root->mutex);
 
 	return 0;
 }
@@ -1132,6 +1147,7 @@ struct irq_domain *irq_domain_create_hierarchy(struct irq_domain *parent,
 	else
 		domain = irq_domain_create_tree(fwnode, ops, host_data);
 	if (domain) {
+		domain->root = parent->root;
 		domain->parent = parent;
 		domain->flags |= flags;
 	}
@@ -1528,10 +1544,10 @@ int __irq_domain_alloc_irqs(struct irq_domain *domain, int irq_base,
 			return -EINVAL;
 	}
 
-	mutex_lock(&irq_domain_mutex);
+	mutex_lock(&domain->root->mutex);
 	ret = irq_domain_alloc_irqs_locked(domain, irq_base, nr_irqs, node, arg,
 					   realloc, affinity);
-	mutex_unlock(&irq_domain_mutex);
+	mutex_unlock(&domain->root->mutex);
 
 	return ret;
 }
@@ -1542,7 +1558,7 @@ static void irq_domain_fix_revmap(struct irq_data *d)
 {
 	void __rcu **slot;
 
-	lockdep_assert_held(&irq_domain_mutex);
+	lockdep_assert_held(&d->domain->root->mutex);
 
 	if (irq_domain_is_nomap(d->domain))
 		return;
@@ -1608,7 +1624,7 @@ int irq_domain_push_irq(struct irq_domain *domain, int virq, void *arg)
 	if (!parent_irq_data)
 		return -ENOMEM;
 
-	mutex_lock(&irq_domain_mutex);
+	mutex_lock(&domain->root->mutex);
 
 	/* Copy the original irq_data. */
 	*parent_irq_data = *irq_data;
@@ -1636,7 +1652,7 @@ int irq_domain_push_irq(struct irq_domain *domain, int virq, void *arg)
 	irq_domain_fix_revmap(parent_irq_data);
 	irq_domain_set_mapping(domain, irq_data->hwirq, irq_data);
 error:
-	mutex_unlock(&irq_domain_mutex);
+	mutex_unlock(&domain->root->mutex);
 
 	return rv;
 }
@@ -1691,7 +1707,7 @@ int irq_domain_pop_irq(struct irq_domain *domain, int virq)
 	if (WARN_ON(!parent_irq_data))
 		return -EINVAL;
 
-	mutex_lock(&irq_domain_mutex);
+	mutex_lock(&domain->root->mutex);
 
 	irq_data->parent_data = NULL;
 
@@ -1703,7 +1719,7 @@ int irq_domain_pop_irq(struct irq_domain *domain, int virq)
 
 	irq_domain_fix_revmap(irq_data);
 
-	mutex_unlock(&irq_domain_mutex);
+	mutex_unlock(&domain->root->mutex);
 
 	kfree(parent_irq_data);
 
@@ -1719,17 +1735,20 @@ EXPORT_SYMBOL_GPL(irq_domain_pop_irq);
 void irq_domain_free_irqs(unsigned int virq, unsigned int nr_irqs)
 {
 	struct irq_data *data = irq_get_irq_data(virq);
+	struct irq_domain *domain;
 	int i;
 
 	if (WARN(!data || !data->domain || !data->domain->ops->free,
 		 "NULL pointer, cannot free irq\n"))
 		return;
 
-	mutex_lock(&irq_domain_mutex);
+	domain = data->domain;
+
+	mutex_lock(&domain->root->mutex);
 	for (i = 0; i < nr_irqs; i++)
 		irq_domain_remove_irq(virq + i);
-	irq_domain_free_irqs_hierarchy(data->domain, virq, nr_irqs);
-	mutex_unlock(&irq_domain_mutex);
+	irq_domain_free_irqs_hierarchy(domain, virq, nr_irqs);
+	mutex_unlock(&domain->root->mutex);
 
 	irq_domain_free_irq_data(virq, nr_irqs);
 	irq_free_descs(virq, nr_irqs);
-- 
2.39.1


^ permalink raw reply related	[flat|nested] 58+ messages in thread

* Re: [PATCH v5 06/19] irqdomain: Fix mapping-creation race
  2023-02-09 13:23   ` Johan Hovold
@ 2023-02-09 14:03     ` Marc Zyngier
  -1 siblings, 0 replies; 58+ messages in thread
From: Marc Zyngier @ 2023-02-09 14:03 UTC (permalink / raw)
  To: Johan Hovold
  Cc: Thomas Gleixner, x86, platform-driver-x86, linux-arm-kernel,
	linux-mips, linux-kernel, stable, Dmitry Torokhov, Jon Hunter,
	Hsin-Yi Wang, Mark-PK Tsai

On Thu, 09 Feb 2023 13:23:10 +0000,
Johan Hovold <johan+linaro@kernel.org> wrote:
> 
> Parallel probing of devices that share interrupts (e.g. when a driver
> uses asynchronous probing) can currently result in two mappings for the
> same hardware interrupt to be created due to missing serialisation.
> 
> Make sure to hold the irq_domain_mutex when creating mappings so that
> looking for an existing mapping before creating a new one is done
> atomically.
> 
> Fixes: 765230b5f084 ("driver-core: add asynchronous probing support for drivers")
> Fixes: b62b2cf5759b ("irqdomain: Fix handling of type settings for existing mappings")
> Link: https://lore.kernel.org/r/YuJXMHoT4ijUxnRb@hovoldconsulting.com
> Cc: stable@vger.kernel.org      # 4.8
> Cc: Dmitry Torokhov <dtor@chromium.org>
> Cc: Jon Hunter <jonathanh@nvidia.com>
> Tested-by: Hsin-Yi Wang <hsinyi@chromium.org>
> Tested-by: Mark-PK Tsai <mark-pk.tsai@mediatek.com>
> Signed-off-by: Johan Hovold <johan+linaro@kernel.org>
> ---
>  kernel/irq/irqdomain.c | 55 ++++++++++++++++++++++++++++++------------
>  1 file changed, 40 insertions(+), 15 deletions(-)
> 
> diff --git a/kernel/irq/irqdomain.c b/kernel/irq/irqdomain.c
> index 7b57949bc79c..1ddb01bd49a4 100644
> --- a/kernel/irq/irqdomain.c
> +++ b/kernel/irq/irqdomain.c
> @@ -25,6 +25,9 @@ static DEFINE_MUTEX(irq_domain_mutex);
>  
>  static struct irq_domain *irq_default_domain;
>  
> +static int irq_domain_alloc_irqs_locked(struct irq_domain *domain, int irq_base,
> +					unsigned int nr_irqs, int node, void *arg,
> +					bool realloc, const struct irq_affinity_desc *affinity);
>  static void irq_domain_check_hierarchy(struct irq_domain *domain);
>  
>  struct irqchip_fwid {
> @@ -682,9 +685,9 @@ unsigned int irq_create_direct_mapping(struct irq_domain *domain)
>  EXPORT_SYMBOL_GPL(irq_create_direct_mapping);
>  #endif
>  
> -static unsigned int __irq_create_mapping_affinity(struct irq_domain *domain,
> -						  irq_hw_number_t hwirq,
> -						  const struct irq_affinity_desc *affinity)
> +static unsigned int irq_create_mapping_affinity_locked(struct irq_domain *domain,
> +						       irq_hw_number_t hwirq,
> +						       const struct irq_affinity_desc *affinity)
>  {
>  	struct device_node *of_node = irq_domain_get_of_node(domain);
>  	int virq;
> @@ -699,7 +702,7 @@ static unsigned int __irq_create_mapping_affinity(struct irq_domain *domain,
>  		return 0;
>  	}
>  
> -	if (irq_domain_associate(domain, virq, hwirq)) {
> +	if (irq_domain_associate_locked(domain, virq, hwirq)) {
>  		irq_free_desc(virq);
>  		return 0;
>  	}
> @@ -735,14 +738,20 @@ unsigned int irq_create_mapping_affinity(struct irq_domain *domain,
>  		return 0;
>  	}
>  
> +	mutex_lock(&irq_domain_mutex);
> +
>  	/* Check if mapping already exists */
>  	virq = irq_find_mapping(domain, hwirq);
>  	if (virq) {
>  		pr_debug("existing mapping on virq %d\n", virq);
> -		return virq;
> +		goto out;
>  	}
>  
> -	return __irq_create_mapping_affinity(domain, hwirq, affinity);
> +	virq = irq_create_mapping_affinity_locked(domain, hwirq, affinity);
> +out:
> +	mutex_unlock(&irq_domain_mutex);
> +
> +	return virq;
>  }
>  EXPORT_SYMBOL_GPL(irq_create_mapping_affinity);
>  
> @@ -809,6 +818,8 @@ unsigned int irq_create_fwspec_mapping(struct irq_fwspec *fwspec)
>  	if (WARN_ON(type & ~IRQ_TYPE_SENSE_MASK))
>  		type &= IRQ_TYPE_SENSE_MASK;
>  
> +	mutex_lock(&irq_domain_mutex);
> +
>  	/*
>  	 * If we've already configured this interrupt,
>  	 * don't do it again, or hell will break loose.
> @@ -821,7 +832,7 @@ unsigned int irq_create_fwspec_mapping(struct irq_fwspec *fwspec)
>  		 * interrupt number.
>  		 */
>  		if (type == IRQ_TYPE_NONE || type == irq_get_trigger_type(virq))
> -			return virq;
> +			goto out;
>  
>  		/*
>  		 * If the trigger type has not been set yet, then set
> @@ -830,36 +841,43 @@ unsigned int irq_create_fwspec_mapping(struct irq_fwspec *fwspec)
>  		if (irq_get_trigger_type(virq) == IRQ_TYPE_NONE) {
>  			irq_data = irq_get_irq_data(virq);
>  			if (!irq_data)
> -				return 0;
> +				goto err;
>  
>  			irqd_set_trigger_type(irq_data, type);
> -			return virq;
> +			goto out;
>  		}
>  
>  		pr_warn("type mismatch, failed to map hwirq-%lu for %s!\n",
>  			hwirq, of_node_full_name(to_of_node(fwspec->fwnode)));
> -		return 0;
> +		goto err;
>  	}
>  
>  	if (irq_domain_is_hierarchy(domain)) {
> -		virq = irq_domain_alloc_irqs(domain, 1, NUMA_NO_NODE, fwspec);
> +		virq = irq_domain_alloc_irqs_locked(domain, -1, 1, NUMA_NO_NODE,
> +						    fwspec, false, NULL);
>  		if (virq <= 0)
> -			return 0;
> +			goto err;
>  	} else {
>  		/* Create mapping */
> -		virq = __irq_create_mapping_affinity(domain, hwirq, NULL);
> +		virq = irq_create_mapping_affinity_locked(domain, hwirq, NULL);
>  		if (!virq)
> -			return virq;
> +			goto err;
>  	}
>  
>  	irq_data = irq_get_irq_data(virq);
>  	if (WARN_ON(!irq_data))
> -		return 0;
> +		goto err;
>  
>  	/* Store trigger type */
>  	irqd_set_trigger_type(irq_data, type);
> +out:
> +	mutex_unlock(&irq_domain_mutex);
>  
>  	return virq;
> +err:
> +	mutex_unlock(&irq_domain_mutex);
> +
> +	return 0;

nit: it'd look better if we had a single exit path with the unlock,
setting virq to 0 on failure. Not a big deal, as this can be tidied up
when applied.
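
One plausible reading of that suggestion, sketched against the tail of
irq_create_fwspec_mapping() (the actual rework is in the follow-up
further down the thread):

	/* error paths above would use "goto err" */
err:
	virq = 0;
	/* fall through: single unlock and return */
out:
	mutex_unlock(&irq_domain_mutex);

	return virq;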

Thanks,

	M.

-- 
Without deviation from the norm, progress is not possible.

^ permalink raw reply	[flat|nested] 58+ messages in thread

* Re: [PATCH v5 19/19] irqdomain: Switch to per-domain locking
  2023-02-09 13:23   ` Johan Hovold
@ 2023-02-09 16:00     ` Marc Zyngier
  -1 siblings, 0 replies; 58+ messages in thread
From: Marc Zyngier @ 2023-02-09 16:00 UTC (permalink / raw)
  To: Johan Hovold
  Cc: Thomas Gleixner, x86, platform-driver-x86, linux-arm-kernel,
	linux-mips, linux-kernel, Hsin-Yi Wang, Mark-PK Tsai

On Thu, 09 Feb 2023 13:23:23 +0000,
Johan Hovold <johan+linaro@kernel.org> wrote:
> 
> The IRQ domain structures are currently protected by the global
> irq_domain_mutex. Switch to using more fine-grained per-domain locking,
> which can speed up parallel probing by reducing lock contention.
> 
> On a recent arm64 laptop, the total time spent waiting for the locks
> during boot drops from 160 to 40 ms on average, while the maximum
> aggregate wait time drops from 550 to 90 ms over ten runs for example.
> 
> Note that the domain lock of the root domain (innermost domain) must be
> used for hierarchical domains. For non-hierarchical domains (as for root
> domains), the new root pointer is set to the domain itself so that
> domain->root->mutex can be used in shared code paths.
> 
> Also note that hierarchical domains should be constructed using
> irq_domain_create_hierarchy() (or irq_domain_add_hierarchy()) to avoid
> poking at irqdomain internals. As a safeguard, the lockdep assertion in
> irq_domain_set_mapping() will catch any offenders that fail to set the
> root domain pointer.
> 
> Tested-by: Hsin-Yi Wang <hsinyi@chromium.org>
> Tested-by: Mark-PK Tsai <mark-pk.tsai@mediatek.com>
> Signed-off-by: Johan Hovold <johan+linaro@kernel.org>
> ---
>  include/linux/irqdomain.h |  4 +++
>  kernel/irq/irqdomain.c    | 61 +++++++++++++++++++++++++--------------
>  2 files changed, 44 insertions(+), 21 deletions(-)
> 
> diff --git a/include/linux/irqdomain.h b/include/linux/irqdomain.h
> index 16399de00b48..cad47737a052 100644
> --- a/include/linux/irqdomain.h
> +++ b/include/linux/irqdomain.h
> @@ -125,6 +125,8 @@ struct irq_domain_chip_generic;
>   *		core code.
>   * @flags:	Per irq_domain flags
>   * @mapcount:	The number of mapped interrupts
> + * @mutex:	Domain lock, hierarhical domains use root domain's lock

nit: hierarchical

> + * @root:	Pointer to root domain, or containing structure if non-hierarchical
>   *
>   * Optional elements:
>   * @fwnode:	Pointer to firmware node associated with the irq_domain. Pretty easy
> @@ -152,6 +154,8 @@ struct irq_domain {
>  	void				*host_data;
>  	unsigned int			flags;
>  	unsigned int			mapcount;
> +	struct mutex			mutex;
> +	struct irq_domain		*root;
>  
>  	/* Optional data */
>  	struct fwnode_handle		*fwnode;
> diff --git a/kernel/irq/irqdomain.c b/kernel/irq/irqdomain.c
> index 804d316329c8..c96aa5e5a94b 100644
> --- a/kernel/irq/irqdomain.c
> +++ b/kernel/irq/irqdomain.c
> @@ -226,6 +226,17 @@ struct irq_domain *__irq_domain_add(struct fwnode_handle *fwnode, unsigned int s
>  
>  	domain->revmap_size = size;
>  
> +	/*
> +	 * Hierarchical domains use the domain lock of the root domain
> +	 * (innermost domain).
> +	 *
> +	 * For non-hierarchical domains (as for root domains), the root
> +	 * pointer is set to the domain itself so that domain->root->mutex
> +	 * can be used in shared code paths.
> +	 */
> +	mutex_init(&domain->mutex);
> +	domain->root = domain;
> +
>  	irq_domain_check_hierarchy(domain);
>  
>  	mutex_lock(&irq_domain_mutex);
> @@ -503,7 +514,7 @@ static bool irq_domain_is_nomap(struct irq_domain *domain)
>  static void irq_domain_clear_mapping(struct irq_domain *domain,
>  				     irq_hw_number_t hwirq)
>  {
> -	lockdep_assert_held(&irq_domain_mutex);
> +	lockdep_assert_held(&domain->root->mutex);
>  
>  	if (irq_domain_is_nomap(domain))
>  		return;
> @@ -518,7 +529,11 @@ static void irq_domain_set_mapping(struct irq_domain *domain,
>  				   irq_hw_number_t hwirq,
>  				   struct irq_data *irq_data)
>  {
> -	lockdep_assert_held(&irq_domain_mutex);
> +	/*
> +	 * This also makes sure that all domains point to the same root when
> +	 * called from irq_domain_insert_irq() for each domain in a hierarchy.
> +	 */
> +	lockdep_assert_held(&domain->root->mutex);
>  
>  	if (irq_domain_is_nomap(domain))
>  		return;
> @@ -540,7 +555,7 @@ static void irq_domain_disassociate(struct irq_domain *domain, unsigned int irq)
>  
>  	hwirq = irq_data->hwirq;
>  
> -	mutex_lock(&irq_domain_mutex);
> +	mutex_lock(&domain->mutex);

So you made the point about being able to uniformly use root->mutex,
which I think is a good invariant. Yet you hardly make use of it. Why?
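
For reference, the invariant in question, as a sketch rather than code
from the series: with domain->root == domain for non-hierarchical
domains, the root lock can be taken unconditionally, e.g.:

	mutex_lock(&domain->root->mutex);
	ret = irq_domain_associate_locked(domain, virq, hwirq);
	mutex_unlock(&domain->root->mutex);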

>  
>  	irq_set_status_flags(irq, IRQ_NOREQUEST);
>  
> @@ -562,7 +577,7 @@ static void irq_domain_disassociate(struct irq_domain *domain, unsigned int irq)
>  	/* Clear reverse map for this hwirq */
>  	irq_domain_clear_mapping(domain, hwirq);
>  
> -	mutex_unlock(&irq_domain_mutex);
> +	mutex_unlock(&domain->mutex);
>  }
>  
>  static int irq_domain_associate_locked(struct irq_domain *domain, unsigned int virq,
> @@ -612,9 +627,9 @@ int irq_domain_associate(struct irq_domain *domain, unsigned int virq,
>  {
>  	int ret;
>  
> -	mutex_lock(&irq_domain_mutex);
> +	mutex_lock(&domain->mutex);
>  	ret = irq_domain_associate_locked(domain, virq, hwirq);
> -	mutex_unlock(&irq_domain_mutex);
> +	mutex_unlock(&domain->mutex);
>  
>  	return ret;
>  }
> @@ -731,7 +746,7 @@ unsigned int irq_create_mapping_affinity(struct irq_domain *domain,
>  		return 0;
>  	}
>  
> -	mutex_lock(&irq_domain_mutex);
> +	mutex_lock(&domain->mutex);
>  
>  	/* Check if mapping already exists */
>  	virq = irq_find_mapping(domain, hwirq);
> @@ -742,7 +757,7 @@ unsigned int irq_create_mapping_affinity(struct irq_domain *domain,
>  
>  	virq = irq_create_mapping_affinity_locked(domain, hwirq, affinity);
>  out:
> -	mutex_unlock(&irq_domain_mutex);
> +	mutex_unlock(&domain->mutex);
>  
>  	return virq;
>  }
> @@ -811,7 +826,7 @@ unsigned int irq_create_fwspec_mapping(struct irq_fwspec *fwspec)
>  	if (WARN_ON(type & ~IRQ_TYPE_SENSE_MASK))
>  		type &= IRQ_TYPE_SENSE_MASK;
>  
> -	mutex_lock(&irq_domain_mutex);
> +	mutex_lock(&domain->root->mutex);
>  
>  	/*
>  	 * If we've already configured this interrupt,
> @@ -864,11 +879,11 @@ unsigned int irq_create_fwspec_mapping(struct irq_fwspec *fwspec)
>  	/* Store trigger type */
>  	irqd_set_trigger_type(irq_data, type);
>  out:
> -	mutex_unlock(&irq_domain_mutex);
> +	mutex_unlock(&domain->root->mutex);
>  
>  	return virq;
>  err:
> -	mutex_unlock(&irq_domain_mutex);
> +	mutex_unlock(&domain->root->mutex);
>  
>  	return 0;
>  }
> @@ -1132,6 +1147,7 @@ struct irq_domain *irq_domain_create_hierarchy(struct irq_domain *parent,
>  	else
>  		domain = irq_domain_create_tree(fwnode, ops, host_data);
>  	if (domain) {
> +		domain->root = parent->root;
>  		domain->parent = parent;
>  		domain->flags |= flags;

So we still have a bug here, as we have published a domain that we
keep updating. A parallel probe could find it in that interval and do
something completely wrong.

Splitting the work would help, as per the following patch.

Thanks,

	M.

diff --git a/kernel/irq/irqdomain.c b/kernel/irq/irqdomain.c
index c96aa5e5a94b..6c675ef672e9 100644
--- a/kernel/irq/irqdomain.c
+++ b/kernel/irq/irqdomain.c
@@ -126,23 +126,12 @@ void irq_domain_free_fwnode(struct fwnode_handle *fwnode)
 }
 EXPORT_SYMBOL_GPL(irq_domain_free_fwnode);
 
-/**
- * __irq_domain_add() - Allocate a new irq_domain data structure
- * @fwnode: firmware node for the interrupt controller
- * @size: Size of linear map; 0 for radix mapping only
- * @hwirq_max: Maximum number of interrupts supported by controller
- * @direct_max: Maximum value of direct maps; Use ~0 for no limit; 0 for no
- *              direct mapping
- * @ops: domain callbacks
- * @host_data: Controller private data pointer
- *
- * Allocates and initializes an irq_domain structure.
- * Returns pointer to IRQ domain, or NULL on failure.
- */
-struct irq_domain *__irq_domain_add(struct fwnode_handle *fwnode, unsigned int size,
-				    irq_hw_number_t hwirq_max, int direct_max,
-				    const struct irq_domain_ops *ops,
-				    void *host_data)
+static struct irq_domain *__irq_domain_create(struct fwnode_handle *fwnode,
+					      unsigned int size,
+					      irq_hw_number_t hwirq_max,
+					      int direct_max,
+					      const struct irq_domain_ops *ops,
+					      void *host_data)
 {
 	struct irqchip_fwid *fwid;
 	struct irq_domain *domain;
@@ -239,12 +228,44 @@ struct irq_domain *__irq_domain_add(struct fwnode_handle *fwnode, unsigned int s
 
 	irq_domain_check_hierarchy(domain);
 
+	return domain;
+}
+
+static void __irq_domain_publish(struct irq_domain *domain)
+{
 	mutex_lock(&irq_domain_mutex);
 	debugfs_add_domain_dir(domain);
 	list_add(&domain->link, &irq_domain_list);
 	mutex_unlock(&irq_domain_mutex);
 
 	pr_debug("Added domain %s\n", domain->name);
+}
+
+/**
+ * __irq_domain_add() - Allocate a new irq_domain data structure
+ * @fwnode: firmware node for the interrupt controller
+ * @size: Size of linear map; 0 for radix mapping only
+ * @hwirq_max: Maximum number of interrupts supported by controller
+ * @direct_max: Maximum value of direct maps; Use ~0 for no limit; 0 for no
+ *              direct mapping
+ * @ops: domain callbacks
+ * @host_data: Controller private data pointer
+ *
+ * Allocates and initializes an irq_domain structure.
+ * Returns pointer to IRQ domain, or NULL on failure.
+ */
+struct irq_domain *__irq_domain_add(struct fwnode_handle *fwnode, unsigned int size,
+				    irq_hw_number_t hwirq_max, int direct_max,
+				    const struct irq_domain_ops *ops,
+				    void *host_data)
+{
+	struct irq_domain *domain;
+
+	domain = __irq_domain_create(fwnode, size, hwirq_max, direct_max,
+				     ops, host_data);
+	if (domain)
+		__irq_domain_publish(domain);
+
 	return domain;
 }
 EXPORT_SYMBOL_GPL(__irq_domain_add);
@@ -1143,13 +1164,16 @@ struct irq_domain *irq_domain_create_hierarchy(struct irq_domain *parent,
 	struct irq_domain *domain;
 
 	if (size)
-		domain = irq_domain_create_linear(fwnode, size, ops, host_data);
+		domain = __irq_domain_create(fwnode, size, size, 0, ops, host_data);
 	else
-		domain = irq_domain_create_tree(fwnode, ops, host_data);
+		domain = __irq_domain_create(fwnode, 0, ~0, 0, ops, host_data);
+
 	if (domain) {
 		domain->root = parent->root;
 		domain->parent = parent;
 		domain->flags |= flags;
+
+		__irq_domain_publish(domain);
 	}
 
 	return domain;

-- 
Without deviation from the norm, progress is not possible.

^ permalink raw reply related	[flat|nested] 58+ messages in thread

* Re: [PATCH v5 06/19] irqdomain: Fix mapping-creation race
  2023-02-09 14:03     ` Marc Zyngier
@ 2023-02-10  9:10       ` Johan Hovold
  -1 siblings, 0 replies; 58+ messages in thread
From: Johan Hovold @ 2023-02-10  9:10 UTC (permalink / raw)
  To: Marc Zyngier
  Cc: Johan Hovold, Thomas Gleixner, x86, platform-driver-x86,
	linux-arm-kernel, linux-mips, linux-kernel, stable,
	Dmitry Torokhov, Jon Hunter, Hsin-Yi Wang, Mark-PK Tsai

On Thu, Feb 09, 2023 at 02:03:19PM +0000, Marc Zyngier wrote:
> On Thu, 09 Feb 2023 13:23:10 +0000,
> Johan Hovold <johan+linaro@kernel.org> wrote:
> > 
> > Parallel probing of devices that share interrupts (e.g. when a driver
> > uses asynchronous probing) can currently result in two mappings for the
> > same hardware interrupt to be created due to missing serialisation.
> > 
> > Make sure to hold the irq_domain_mutex when creating mappings so that
> > looking for an existing mapping before creating a new one is done
> > atomically.
> > 
> > Fixes: 765230b5f084 ("driver-core: add asynchronous probing support for drivers")
> > Fixes: b62b2cf5759b ("irqdomain: Fix handling of type settings for existing mappings")
> > Link: https://lore.kernel.org/r/YuJXMHoT4ijUxnRb@hovoldconsulting.com
> > Cc: stable@vger.kernel.org      # 4.8
> > Cc: Dmitry Torokhov <dtor@chromium.org>
> > Cc: Jon Hunter <jonathanh@nvidia.com>
> > Tested-by: Hsin-Yi Wang <hsinyi@chromium.org>
> > Tested-by: Mark-PK Tsai <mark-pk.tsai@mediatek.com>
> > Signed-off-by: Johan Hovold <johan+linaro@kernel.org>
> > ---
> >  kernel/irq/irqdomain.c | 55 ++++++++++++++++++++++++++++++------------
> >  1 file changed, 40 insertions(+), 15 deletions(-)
> > 
> > diff --git a/kernel/irq/irqdomain.c b/kernel/irq/irqdomain.c
> > index 7b57949bc79c..1ddb01bd49a4 100644
> > --- a/kernel/irq/irqdomain.c
> > +++ b/kernel/irq/irqdomain.c
> > @@ -25,6 +25,9 @@ static DEFINE_MUTEX(irq_domain_mutex);
> >  
> >  static struct irq_domain *irq_default_domain;
> >  
> > +static int irq_domain_alloc_irqs_locked(struct irq_domain *domain, int irq_base,
> > +					unsigned int nr_irqs, int node, void *arg,
> > +					bool realloc, const struct irq_affinity_desc *affinity);
> >  static void irq_domain_check_hierarchy(struct irq_domain *domain);
> >  
> >  struct irqchip_fwid {
> > @@ -682,9 +685,9 @@ unsigned int irq_create_direct_mapping(struct irq_domain *domain)
> >  EXPORT_SYMBOL_GPL(irq_create_direct_mapping);
> >  #endif
> >  
> > -static unsigned int __irq_create_mapping_affinity(struct irq_domain *domain,
> > -						  irq_hw_number_t hwirq,
> > -						  const struct irq_affinity_desc *affinity)
> > +static unsigned int irq_create_mapping_affinity_locked(struct irq_domain *domain,
> > +						       irq_hw_number_t hwirq,
> > +						       const struct irq_affinity_desc *affinity)
> >  {
> >  	struct device_node *of_node = irq_domain_get_of_node(domain);
> >  	int virq;
> > @@ -699,7 +702,7 @@ static unsigned int __irq_create_mapping_affinity(struct irq_domain *domain,
> >  		return 0;
> >  	}
> >  
> > -	if (irq_domain_associate(domain, virq, hwirq)) {
> > +	if (irq_domain_associate_locked(domain, virq, hwirq)) {
> >  		irq_free_desc(virq);
> >  		return 0;
> >  	}
> > @@ -735,14 +738,20 @@ unsigned int irq_create_mapping_affinity(struct irq_domain *domain,
> >  		return 0;
> >  	}
> >  
> > +	mutex_lock(&irq_domain_mutex);
> > +
> >  	/* Check if mapping already exists */
> >  	virq = irq_find_mapping(domain, hwirq);
> >  	if (virq) {
> >  		pr_debug("existing mapping on virq %d\n", virq);
> > -		return virq;
> > +		goto out;
> >  	}
> >  
> > -	return __irq_create_mapping_affinity(domain, hwirq, affinity);
> > +	virq = irq_create_mapping_affinity_locked(domain, hwirq, affinity);
> > +out:
> > +	mutex_unlock(&irq_domain_mutex);
> > +
> > +	return virq;
> >  }
> >  EXPORT_SYMBOL_GPL(irq_create_mapping_affinity);
> >  
> > @@ -809,6 +818,8 @@ unsigned int irq_create_fwspec_mapping(struct irq_fwspec *fwspec)
> >  	if (WARN_ON(type & ~IRQ_TYPE_SENSE_MASK))
> >  		type &= IRQ_TYPE_SENSE_MASK;
> >  
> > +	mutex_lock(&irq_domain_mutex);
> > +
> >  	/*
> >  	 * If we've already configured this interrupt,
> >  	 * don't do it again, or hell will break loose.
> > @@ -821,7 +832,7 @@ unsigned int irq_create_fwspec_mapping(struct irq_fwspec *fwspec)
> >  		 * interrupt number.
> >  		 */
> >  		if (type == IRQ_TYPE_NONE || type == irq_get_trigger_type(virq))
> > -			return virq;
> > +			goto out;
> >  
> >  		/*
> >  		 * If the trigger type has not been set yet, then set
> > @@ -830,36 +841,43 @@ unsigned int irq_create_fwspec_mapping(struct irq_fwspec *fwspec)
> >  		if (irq_get_trigger_type(virq) == IRQ_TYPE_NONE) {
> >  			irq_data = irq_get_irq_data(virq);
> >  			if (!irq_data)
> > -				return 0;
> > +				goto err;
> >  
> >  			irqd_set_trigger_type(irq_data, type);
> > -			return virq;
> > +			goto out;
> >  		}
> >  
> >  		pr_warn("type mismatch, failed to map hwirq-%lu for %s!\n",
> >  			hwirq, of_node_full_name(to_of_node(fwspec->fwnode)));
> > -		return 0;
> > +		goto err;
> >  	}
> >  
> >  	if (irq_domain_is_hierarchy(domain)) {
> > -		virq = irq_domain_alloc_irqs(domain, 1, NUMA_NO_NODE, fwspec);
> > +		virq = irq_domain_alloc_irqs_locked(domain, -1, 1, NUMA_NO_NODE,
> > +						    fwspec, false, NULL);
> >  		if (virq <= 0)
> > -			return 0;
> > +			goto err;
> >  	} else {
> >  		/* Create mapping */
> > -		virq = __irq_create_mapping_affinity(domain, hwirq, NULL);
> > +		virq = irq_create_mapping_affinity_locked(domain, hwirq, NULL);
> >  		if (!virq)
> > -			return virq;
> > +			goto err;
> >  	}
> >  
> >  	irq_data = irq_get_irq_data(virq);
> >  	if (WARN_ON(!irq_data))
> > -		return 0;
> > +		goto err;
> >  
> >  	/* Store trigger type */
> >  	irqd_set_trigger_type(irq_data, type);
> > +out:
> > +	mutex_unlock(&irq_domain_mutex);
> >  
> >  	return virq;
> > +err:
> > +	mutex_unlock(&irq_domain_mutex);
> > +
> > +	return 0;
> 
> nit: it'd look better if we had a single exit path with the unlock,
> setting virq to 0 on failure. Not a big deal, as this can be tidied up
> when applied.

Using a single exit path would result in a slightly bigger diff (5
lines) and would not separate the success and error paths as clearly,
but yeah, it's possibly still preferred (see result below).

Johan


diff --git a/kernel/irq/irqdomain.c b/kernel/irq/irqdomain.c
index 7b57949bc79c..bfda4adc05c0 100644
--- a/kernel/irq/irqdomain.c
+++ b/kernel/irq/irqdomain.c
@@ -25,6 +25,9 @@ static DEFINE_MUTEX(irq_domain_mutex);
 
 static struct irq_domain *irq_default_domain;
 
+static int irq_domain_alloc_irqs_locked(struct irq_domain *domain, int irq_base,
+					unsigned int nr_irqs, int node, void *arg,
+					bool realloc, const struct irq_affinity_desc *affinity);
 static void irq_domain_check_hierarchy(struct irq_domain *domain);
 
 struct irqchip_fwid {
@@ -682,9 +685,9 @@ unsigned int irq_create_direct_mapping(struct irq_domain *domain)
 EXPORT_SYMBOL_GPL(irq_create_direct_mapping);
 #endif
 
-static unsigned int __irq_create_mapping_affinity(struct irq_domain *domain,
-						  irq_hw_number_t hwirq,
-						  const struct irq_affinity_desc *affinity)
+static unsigned int irq_create_mapping_affinity_locked(struct irq_domain *domain,
+						       irq_hw_number_t hwirq,
+						       const struct irq_affinity_desc *affinity)
 {
 	struct device_node *of_node = irq_domain_get_of_node(domain);
 	int virq;
@@ -699,7 +702,7 @@ static unsigned int __irq_create_mapping_affinity(struct irq_domain *domain,
 		return 0;
 	}
 
-	if (irq_domain_associate(domain, virq, hwirq)) {
+	if (irq_domain_associate_locked(domain, virq, hwirq)) {
 		irq_free_desc(virq);
 		return 0;
 	}
@@ -735,14 +738,20 @@ unsigned int irq_create_mapping_affinity(struct irq_domain *domain,
 		return 0;
 	}
 
+	mutex_lock(&irq_domain_mutex);
+
 	/* Check if mapping already exists */
 	virq = irq_find_mapping(domain, hwirq);
 	if (virq) {
 		pr_debug("existing mapping on virq %d\n", virq);
-		return virq;
+		goto out;
 	}
 
-	return __irq_create_mapping_affinity(domain, hwirq, affinity);
+	virq = irq_create_mapping_affinity_locked(domain, hwirq, affinity);
+out:
+	mutex_unlock(&irq_domain_mutex);
+
+	return virq;
 }
 EXPORT_SYMBOL_GPL(irq_create_mapping_affinity);
 
@@ -809,6 +818,8 @@ unsigned int irq_create_fwspec_mapping(struct irq_fwspec *fwspec)
 	if (WARN_ON(type & ~IRQ_TYPE_SENSE_MASK))
 		type &= IRQ_TYPE_SENSE_MASK;
 
+	mutex_lock(&irq_domain_mutex);
+
 	/*
 	 * If we've already configured this interrupt,
 	 * don't do it again, or hell will break loose.
@@ -821,7 +832,7 @@ unsigned int irq_create_fwspec_mapping(struct irq_fwspec *fwspec)
 		 * interrupt number.
 		 */
 		if (type == IRQ_TYPE_NONE || type == irq_get_trigger_type(virq))
-			return virq;
+			goto out;
 
 		/*
 		 * If the trigger type has not been set yet, then set
@@ -829,35 +840,45 @@ unsigned int irq_create_fwspec_mapping(struct irq_fwspec *fwspec)
 		 */
 		if (irq_get_trigger_type(virq) == IRQ_TYPE_NONE) {
 			irq_data = irq_get_irq_data(virq);
-			if (!irq_data)
-				return 0;
+			if (!irq_data) {
+				virq = 0;
+				goto out;
+			}
 
 			irqd_set_trigger_type(irq_data, type);
-			return virq;
+			goto out;
 		}
 
 		pr_warn("type mismatch, failed to map hwirq-%lu for %s!\n",
 			hwirq, of_node_full_name(to_of_node(fwspec->fwnode)));
-		return 0;
+		virq = 0;
+		goto out;
 	}
 
 	if (irq_domain_is_hierarchy(domain)) {
-		virq = irq_domain_alloc_irqs(domain, 1, NUMA_NO_NODE, fwspec);
-		if (virq <= 0)
-			return 0;
+		virq = irq_domain_alloc_irqs_locked(domain, -1, 1, NUMA_NO_NODE,
+						    fwspec, false, NULL);
+		if (virq <= 0) {
+			virq = 0;
+			goto out;
+		}
 	} else {
 		/* Create mapping */
-		virq = __irq_create_mapping_affinity(domain, hwirq, NULL);
+		virq = irq_create_mapping_affinity_locked(domain, hwirq, NULL);
 		if (!virq)
-			return virq;
+			goto out;
 	}
 
 	irq_data = irq_get_irq_data(virq);
-	if (WARN_ON(!irq_data))
-		return 0;
+	if (WARN_ON(!irq_data)) {
+		virq = 0;
+		goto out;
+	}
 
 	/* Store trigger type */
 	irqd_set_trigger_type(irq_data, type);
+out:
+	mutex_unlock(&irq_domain_mutex);
 
 	return virq;
 }
@@ -1888,6 +1909,13 @@ void irq_domain_set_info(struct irq_domain *domain, unsigned int virq,
 	irq_set_handler_data(virq, handler_data);
 }
 
+static int irq_domain_alloc_irqs_locked(struct irq_domain *domain, int irq_base,
+					unsigned int nr_irqs, int node, void *arg,
+					bool realloc, const struct irq_affinity_desc *affinity)
+{
+	return -EINVAL;
+}
+
 static void irq_domain_check_hierarchy(struct irq_domain *domain)
 {
 }

^ permalink raw reply related	[flat|nested] 58+ messages in thread

* Re: [PATCH v5 06/19] irqdomain: Fix mapping-creation race
@ 2023-02-10  9:10       ` Johan Hovold
  0 siblings, 0 replies; 58+ messages in thread
From: Johan Hovold @ 2023-02-10  9:10 UTC (permalink / raw)
  To: Marc Zyngier
  Cc: Johan Hovold, Thomas Gleixner, x86, platform-driver-x86,
	linux-arm-kernel, linux-mips, linux-kernel, stable,
	Dmitry Torokhov, Jon Hunter, Hsin-Yi Wang, Mark-PK Tsai

On Thu, Feb 09, 2023 at 02:03:19PM +0000, Marc Zyngier wrote:
> On Thu, 09 Feb 2023 13:23:10 +0000,
> Johan Hovold <johan+linaro@kernel.org> wrote:
> > 
> > Parallel probing of devices that share interrupts (e.g. when a driver
> > uses asynchronous probing) can currently result in two mappings for the
> > same hardware interrupt to be created due to missing serialisation.
> > 
> > Make sure to hold the irq_domain_mutex when creating mappings so that
> > looking for an existing mapping before creating a new one is done
> > atomically.
> > 
> > Fixes: 765230b5f084 ("driver-core: add asynchronous probing support for drivers")
> > Fixes: b62b2cf5759b ("irqdomain: Fix handling of type settings for existing mappings")
> > Link: https://lore.kernel.org/r/YuJXMHoT4ijUxnRb@hovoldconsulting.com
> > Cc: stable@vger.kernel.org      # 4.8
> > Cc: Dmitry Torokhov <dtor@chromium.org>
> > Cc: Jon Hunter <jonathanh@nvidia.com>
> > Tested-by: Hsin-Yi Wang <hsinyi@chromium.org>
> > Tested-by: Mark-PK Tsai <mark-pk.tsai@mediatek.com>
> > Signed-off-by: Johan Hovold <johan+linaro@kernel.org>
> > ---
> >  kernel/irq/irqdomain.c | 55 ++++++++++++++++++++++++++++++------------
> >  1 file changed, 40 insertions(+), 15 deletions(-)
> > 
> > diff --git a/kernel/irq/irqdomain.c b/kernel/irq/irqdomain.c
> > index 7b57949bc79c..1ddb01bd49a4 100644
> > --- a/kernel/irq/irqdomain.c
> > +++ b/kernel/irq/irqdomain.c
> > @@ -25,6 +25,9 @@ static DEFINE_MUTEX(irq_domain_mutex);
> >  
> >  static struct irq_domain *irq_default_domain;
> >  
> > +static int irq_domain_alloc_irqs_locked(struct irq_domain *domain, int irq_base,
> > +					unsigned int nr_irqs, int node, void *arg,
> > +					bool realloc, const struct irq_affinity_desc *affinity);
> >  static void irq_domain_check_hierarchy(struct irq_domain *domain);
> >  
> >  struct irqchip_fwid {
> > @@ -682,9 +685,9 @@ unsigned int irq_create_direct_mapping(struct irq_domain *domain)
> >  EXPORT_SYMBOL_GPL(irq_create_direct_mapping);
> >  #endif
> >  
> > -static unsigned int __irq_create_mapping_affinity(struct irq_domain *domain,
> > -						  irq_hw_number_t hwirq,
> > -						  const struct irq_affinity_desc *affinity)
> > +static unsigned int irq_create_mapping_affinity_locked(struct irq_domain *domain,
> > +						       irq_hw_number_t hwirq,
> > +						       const struct irq_affinity_desc *affinity)
> >  {
> >  	struct device_node *of_node = irq_domain_get_of_node(domain);
> >  	int virq;
> > @@ -699,7 +702,7 @@ static unsigned int __irq_create_mapping_affinity(struct irq_domain *domain,
> >  		return 0;
> >  	}
> >  
> > -	if (irq_domain_associate(domain, virq, hwirq)) {
> > +	if (irq_domain_associate_locked(domain, virq, hwirq)) {
> >  		irq_free_desc(virq);
> >  		return 0;
> >  	}
> > @@ -735,14 +738,20 @@ unsigned int irq_create_mapping_affinity(struct irq_domain *domain,
> >  		return 0;
> >  	}
> >  
> > +	mutex_lock(&irq_domain_mutex);
> > +
> >  	/* Check if mapping already exists */
> >  	virq = irq_find_mapping(domain, hwirq);
> >  	if (virq) {
> >  		pr_debug("existing mapping on virq %d\n", virq);
> > -		return virq;
> > +		goto out;
> >  	}
> >  
> > -	return __irq_create_mapping_affinity(domain, hwirq, affinity);
> > +	virq = irq_create_mapping_affinity_locked(domain, hwirq, affinity);
> > +out:
> > +	mutex_unlock(&irq_domain_mutex);
> > +
> > +	return virq;
> >  }
> >  EXPORT_SYMBOL_GPL(irq_create_mapping_affinity);
> >  
> > @@ -809,6 +818,8 @@ unsigned int irq_create_fwspec_mapping(struct irq_fwspec *fwspec)
> >  	if (WARN_ON(type & ~IRQ_TYPE_SENSE_MASK))
> >  		type &= IRQ_TYPE_SENSE_MASK;
> >  
> > +	mutex_lock(&irq_domain_mutex);
> > +
> >  	/*
> >  	 * If we've already configured this interrupt,
> >  	 * don't do it again, or hell will break loose.
> > @@ -821,7 +832,7 @@ unsigned int irq_create_fwspec_mapping(struct irq_fwspec *fwspec)
> >  		 * interrupt number.
> >  		 */
> >  		if (type == IRQ_TYPE_NONE || type == irq_get_trigger_type(virq))
> > -			return virq;
> > +			goto out;
> >  
> >  		/*
> >  		 * If the trigger type has not been set yet, then set
> > @@ -830,36 +841,43 @@ unsigned int irq_create_fwspec_mapping(struct irq_fwspec *fwspec)
> >  		if (irq_get_trigger_type(virq) == IRQ_TYPE_NONE) {
> >  			irq_data = irq_get_irq_data(virq);
> >  			if (!irq_data)
> > -				return 0;
> > +				goto err;
> >  
> >  			irqd_set_trigger_type(irq_data, type);
> > -			return virq;
> > +			goto out;
> >  		}
> >  
> >  		pr_warn("type mismatch, failed to map hwirq-%lu for %s!\n",
> >  			hwirq, of_node_full_name(to_of_node(fwspec->fwnode)));
> > -		return 0;
> > +		goto err;
> >  	}
> >  
> >  	if (irq_domain_is_hierarchy(domain)) {
> > -		virq = irq_domain_alloc_irqs(domain, 1, NUMA_NO_NODE, fwspec);
> > +		virq = irq_domain_alloc_irqs_locked(domain, -1, 1, NUMA_NO_NODE,
> > +						    fwspec, false, NULL);
> >  		if (virq <= 0)
> > -			return 0;
> > +			goto err;
> >  	} else {
> >  		/* Create mapping */
> > -		virq = __irq_create_mapping_affinity(domain, hwirq, NULL);
> > +		virq = irq_create_mapping_affinity_locked(domain, hwirq, NULL);
> >  		if (!virq)
> > -			return virq;
> > +			goto err;
> >  	}
> >  
> >  	irq_data = irq_get_irq_data(virq);
> >  	if (WARN_ON(!irq_data))
> > -		return 0;
> > +		goto err;
> >  
> >  	/* Store trigger type */
> >  	irqd_set_trigger_type(irq_data, type);
> > +out:
> > +	mutex_unlock(&irq_domain_mutex);
> >  
> >  	return virq;
> > +err:
> > +	mutex_unlock(&irq_domain_mutex);
> > +
> > +	return 0;
> 
> nit: it'd look better if we had a single exit path with the unlock,
> setting virq to 0 on failure. Not a big deal, as this can be tidied up
> when applied.

Using a single exit path would result in a slightly bigger diff (5
lines) and would not separate the success and error paths as clearly,
but yeah, it's possibly still preferred (see result below).

Johan


diff --git a/kernel/irq/irqdomain.c b/kernel/irq/irqdomain.c
index 7b57949bc79c..bfda4adc05c0 100644
--- a/kernel/irq/irqdomain.c
+++ b/kernel/irq/irqdomain.c
@@ -25,6 +25,9 @@ static DEFINE_MUTEX(irq_domain_mutex);
 
 static struct irq_domain *irq_default_domain;
 
+static int irq_domain_alloc_irqs_locked(struct irq_domain *domain, int irq_base,
+					unsigned int nr_irqs, int node, void *arg,
+					bool realloc, const struct irq_affinity_desc *affinity);
 static void irq_domain_check_hierarchy(struct irq_domain *domain);
 
 struct irqchip_fwid {
@@ -682,9 +685,9 @@ unsigned int irq_create_direct_mapping(struct irq_domain *domain)
 EXPORT_SYMBOL_GPL(irq_create_direct_mapping);
 #endif
 
-static unsigned int __irq_create_mapping_affinity(struct irq_domain *domain,
-						  irq_hw_number_t hwirq,
-						  const struct irq_affinity_desc *affinity)
+static unsigned int irq_create_mapping_affinity_locked(struct irq_domain *domain,
+						       irq_hw_number_t hwirq,
+						       const struct irq_affinity_desc *affinity)
 {
 	struct device_node *of_node = irq_domain_get_of_node(domain);
 	int virq;
@@ -699,7 +702,7 @@ static unsigned int __irq_create_mapping_affinity(struct irq_domain *domain,
 		return 0;
 	}
 
-	if (irq_domain_associate(domain, virq, hwirq)) {
+	if (irq_domain_associate_locked(domain, virq, hwirq)) {
 		irq_free_desc(virq);
 		return 0;
 	}
@@ -735,14 +738,20 @@ unsigned int irq_create_mapping_affinity(struct irq_domain *domain,
 		return 0;
 	}
 
+	mutex_lock(&irq_domain_mutex);
+
 	/* Check if mapping already exists */
 	virq = irq_find_mapping(domain, hwirq);
 	if (virq) {
 		pr_debug("existing mapping on virq %d\n", virq);
-		return virq;
+		goto out;
 	}
 
-	return __irq_create_mapping_affinity(domain, hwirq, affinity);
+	virq = irq_create_mapping_affinity_locked(domain, hwirq, affinity);
+out:
+	mutex_unlock(&irq_domain_mutex);
+
+	return virq;
 }
 EXPORT_SYMBOL_GPL(irq_create_mapping_affinity);
 
@@ -809,6 +818,8 @@ unsigned int irq_create_fwspec_mapping(struct irq_fwspec *fwspec)
 	if (WARN_ON(type & ~IRQ_TYPE_SENSE_MASK))
 		type &= IRQ_TYPE_SENSE_MASK;
 
+	mutex_lock(&irq_domain_mutex);
+
 	/*
 	 * If we've already configured this interrupt,
 	 * don't do it again, or hell will break loose.
@@ -821,7 +832,7 @@ unsigned int irq_create_fwspec_mapping(struct irq_fwspec *fwspec)
 		 * interrupt number.
 		 */
 		if (type == IRQ_TYPE_NONE || type == irq_get_trigger_type(virq))
-			return virq;
+			goto out;
 
 		/*
 		 * If the trigger type has not been set yet, then set
@@ -829,35 +840,45 @@ unsigned int irq_create_fwspec_mapping(struct irq_fwspec *fwspec)
 		 */
 		if (irq_get_trigger_type(virq) == IRQ_TYPE_NONE) {
 			irq_data = irq_get_irq_data(virq);
-			if (!irq_data)
-				return 0;
+			if (!irq_data) {
+				virq = 0;
+				goto out;
+			}
 
 			irqd_set_trigger_type(irq_data, type);
-			return virq;
+			goto out;
 		}
 
 		pr_warn("type mismatch, failed to map hwirq-%lu for %s!\n",
 			hwirq, of_node_full_name(to_of_node(fwspec->fwnode)));
-		return 0;
+		virq = 0;
+		goto out;
 	}
 
 	if (irq_domain_is_hierarchy(domain)) {
-		virq = irq_domain_alloc_irqs(domain, 1, NUMA_NO_NODE, fwspec);
-		if (virq <= 0)
-			return 0;
+		virq = irq_domain_alloc_irqs_locked(domain, -1, 1, NUMA_NO_NODE,
+						    fwspec, false, NULL);
+		if (virq <= 0) {
+			virq = 0;
+			goto out;
+		}
 	} else {
 		/* Create mapping */
-		virq = __irq_create_mapping_affinity(domain, hwirq, NULL);
+		virq = irq_create_mapping_affinity_locked(domain, hwirq, NULL);
 		if (!virq)
-			return virq;
+			goto out;
 	}
 
 	irq_data = irq_get_irq_data(virq);
-	if (WARN_ON(!irq_data))
-		return 0;
+	if (WARN_ON(!irq_data)) {
+		virq = 0;
+		goto out;
+	}
 
 	/* Store trigger type */
 	irqd_set_trigger_type(irq_data, type);
+out:
+	mutex_unlock(&irq_domain_mutex);
 
 	return virq;
 }
@@ -1888,6 +1909,13 @@ void irq_domain_set_info(struct irq_domain *domain, unsigned int virq,
 	irq_set_handler_data(virq, handler_data);
 }
 
+static int irq_domain_alloc_irqs_locked(struct irq_domain *domain, int irq_base,
+					unsigned int nr_irqs, int node, void *arg,
+					bool realloc, const struct irq_affinity_desc *affinity)
+{
+	return -EINVAL;
+}
+
 static void irq_domain_check_hierarchy(struct irq_domain *domain)
 {
 }

_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel

^ permalink raw reply related	[flat|nested] 58+ messages in thread

* Re: [PATCH v5 19/19] irqdomain: Switch to per-domain locking
  2023-02-09 16:00     ` Marc Zyngier
@ 2023-02-10  9:56       ` Johan Hovold
  -1 siblings, 0 replies; 58+ messages in thread
From: Johan Hovold @ 2023-02-10  9:56 UTC (permalink / raw)
  To: Marc Zyngier
  Cc: Johan Hovold, Thomas Gleixner, x86, platform-driver-x86,
	linux-arm-kernel, linux-mips, linux-kernel, Hsin-Yi Wang,
	Mark-PK Tsai

On Thu, Feb 09, 2023 at 04:00:55PM +0000, Marc Zyngier wrote:
> On Thu, 09 Feb 2023 13:23:23 +0000,
> Johan Hovold <johan+linaro@kernel.org> wrote:
> > 
> > The IRQ domain structures are currently protected by the global
> > irq_domain_mutex. Switch to using more fine-grained per-domain locking,
> > which can speed up parallel probing by reducing lock contention.
> > 
> > On a recent arm64 laptop, the total time spent waiting for the locks
> > during boot drops from 160 to 40 ms on average, while the maximum
> > aggregate wait time drops from 550 to 90 ms over ten runs for example.
> > 
> > Note that the domain lock of the root domain (innermost domain) must be
> > used for hierarchical domains. For non-hierarchical domains (as for root
> > domains), the new root pointer is set to the domain itself so that
> > domain->root->mutex can be used in shared code paths.
> > 
> > Also note that hierarchical domains should be constructed using
> > irq_domain_create_hierarchy() (or irq_domain_add_hierarchy()) to avoid
> > poking at irqdomain internals. As a safeguard, the lockdep assertion in
> > irq_domain_set_mapping() will catch any offenders that fail to set the
> > root domain pointer.
> > 
> > Tested-by: Hsin-Yi Wang <hsinyi@chromium.org>
> > Tested-by: Mark-PK Tsai <mark-pk.tsai@mediatek.com>
> > Signed-off-by: Johan Hovold <johan+linaro@kernel.org>
> > ---
> >  include/linux/irqdomain.h |  4 +++
> >  kernel/irq/irqdomain.c    | 61 +++++++++++++++++++++++++--------------
> >  2 files changed, 44 insertions(+), 21 deletions(-)
> > 
> > diff --git a/include/linux/irqdomain.h b/include/linux/irqdomain.h
> > index 16399de00b48..cad47737a052 100644
> > --- a/include/linux/irqdomain.h
> > +++ b/include/linux/irqdomain.h
> > @@ -125,6 +125,8 @@ struct irq_domain_chip_generic;
> >   *		core code.
> >   * @flags:	Per irq_domain flags
> >   * @mapcount:	The number of mapped interrupts
> > + * @mutex:	Domain lock, hierarhical domains use root domain's lock
> 
> nit: hierarchical
> 
> > + * @root:	Pointer to root domain, or containing structure if non-hierarchical

> > @@ -226,6 +226,17 @@ struct irq_domain *__irq_domain_add(struct fwnode_handle *fwnode, unsigned int s
> >  
> >  	domain->revmap_size = size;
> >  
> > +	/*
> > +	 * Hierarchical domains use the domain lock of the root domain
> > +	 * (innermost domain).
> > +	 *
> > +	 * For non-hierarchical domains (as for root domains), the root
> > +	 * pointer is set to the domain itself so that domain->root->mutex
> > +	 * can be used in shared code paths.
> > +	 */
> > +	mutex_init(&domain->mutex);
> > +	domain->root = domain;
> > +
> >  	irq_domain_check_hierarchy(domain);
> >  
> >  	mutex_lock(&irq_domain_mutex);

> > @@ -518,7 +529,11 @@ static void irq_domain_set_mapping(struct irq_domain *domain,
> >  				   irq_hw_number_t hwirq,
> >  				   struct irq_data *irq_data)
> >  {
> > -	lockdep_assert_held(&irq_domain_mutex);
> > +	/*
> > +	 * This also makes sure that all domains point to the same root when
> > +	 * called from irq_domain_insert_irq() for each domain in a hierarchy.
> > +	 */
> > +	lockdep_assert_held(&domain->root->mutex);
> >  
> >  	if (irq_domain_is_nomap(domain))
> >  		return;
> > @@ -540,7 +555,7 @@ static void irq_domain_disassociate(struct irq_domain *domain, unsigned int irq)
> >  
> >  	hwirq = irq_data->hwirq;
> >  
> > -	mutex_lock(&irq_domain_mutex);
> > +	mutex_lock(&domain->mutex);
> 
> So you made that point about being able to uniformly use root->mutex,
> which I think is a good invariant. Yet you hardly make use of it. Why?

I went back and forth over that a bit, but decided to only use
domain->root->mutex in paths that can be called for hierarchical
domains (i.e. the "shared code paths" mentioned above).

Using it in paths that are clearly only called for non-hierarchical
domains where domain->root == domain felt a bit lazy.

The counter argument is of course that using domain->root->lock allows
people to think less about the code they are changing, but that's not
necessarily always a good thing.

Also note that the lockdep asserts in the revmap helpers would catch
anyone using domain->mutex where they should not (i.e. using
domain->mutex for an hierarchical domain).
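
To spell that out with a minimal sketch (not part of the patch, just
restating the rule that the hunks above follow):

	/*
	 * Shared path that may be called for a hierarchical domain
	 * (e.g. irq_create_fwspec_mapping()): take the root domain's lock
	 * so that lookup and creation are serialised.
	 */
	mutex_lock(&domain->root->mutex);
	virq = irq_find_mapping(domain, hwirq);
	if (!virq)
		virq = irq_create_mapping_affinity_locked(domain, hwirq, NULL);
	mutex_unlock(&domain->root->mutex);

	/*
	 * Path only reachable for non-hierarchical domains (e.g.
	 * irq_domain_associate()): domain->root == domain here, so
	 * domain->mutex is the same lock.
	 */
	mutex_lock(&domain->mutex);
	ret = irq_domain_associate_locked(domain, virq, hwirq);
	mutex_unlock(&domain->mutex);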

> >  	irq_set_status_flags(irq, IRQ_NOREQUEST);
> >  
> > @@ -562,7 +577,7 @@ static void irq_domain_disassociate(struct irq_domain *domain, unsigned int irq)
> >  	/* Clear reverse map for this hwirq */
> >  	irq_domain_clear_mapping(domain, hwirq);
> >  
> > -	mutex_unlock(&irq_domain_mutex);
> > +	mutex_unlock(&domain->mutex);
> >  }
> >  
> >  static int irq_domain_associate_locked(struct irq_domain *domain, unsigned int virq,
> > @@ -612,9 +627,9 @@ int irq_domain_associate(struct irq_domain *domain, unsigned int virq,
> >  {
> >  	int ret;
> >  
> > -	mutex_lock(&irq_domain_mutex);
> > +	mutex_lock(&domain->mutex);
> >  	ret = irq_domain_associate_locked(domain, virq, hwirq);
> > -	mutex_unlock(&irq_domain_mutex);
> > +	mutex_unlock(&domain->mutex);
> >  
> >  	return ret;
> >  }
> > @@ -731,7 +746,7 @@ unsigned int irq_create_mapping_affinity(struct irq_domain *domain,
> >  		return 0;
> >  	}
> >  
> > -	mutex_lock(&irq_domain_mutex);
> > +	mutex_lock(&domain->mutex);
> >  
> >  	/* Check if mapping already exists */
> >  	virq = irq_find_mapping(domain, hwirq);
> > @@ -742,7 +757,7 @@ unsigned int irq_create_mapping_affinity(struct irq_domain *domain,
> >  
> >  	virq = irq_create_mapping_affinity_locked(domain, hwirq, affinity);
> >  out:
> > -	mutex_unlock(&irq_domain_mutex);
> > +	mutex_unlock(&domain->mutex);
> >  
> >  	return virq;
> >  }
> > @@ -811,7 +826,7 @@ unsigned int irq_create_fwspec_mapping(struct irq_fwspec *fwspec)
> >  	if (WARN_ON(type & ~IRQ_TYPE_SENSE_MASK))
> >  		type &= IRQ_TYPE_SENSE_MASK;
> >  
> > -	mutex_lock(&irq_domain_mutex);
> > +	mutex_lock(&domain->root->mutex);
> >  
> >  	/*
> >  	 * If we've already configured this interrupt,
> > @@ -864,11 +879,11 @@ unsigned int irq_create_fwspec_mapping(struct irq_fwspec *fwspec)
> >  	/* Store trigger type */
> >  	irqd_set_trigger_type(irq_data, type);
> >  out:
> > -	mutex_unlock(&irq_domain_mutex);
> > +	mutex_unlock(&domain->root->mutex);
> >  
> >  	return virq;
> >  err:
> > -	mutex_unlock(&irq_domain_mutex);
> > +	mutex_unlock(&domain->root->mutex);
> >  
> >  	return 0;
> >  }
> > @@ -1132,6 +1147,7 @@ struct irq_domain *irq_domain_create_hierarchy(struct irq_domain *parent,
> >  	else
> >  		domain = irq_domain_create_tree(fwnode, ops, host_data);
> >  	if (domain) {
> > +		domain->root = parent->root;
> >  		domain->parent = parent;
> >  		domain->flags |= flags;
> 
> So we still have a bug here, as we have published a domain that we
> keep updating. A parallel probing could find it in the interval and do
> something completely wrong.

Indeed we do, even if device links should make this harder to hit these
days.
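
Roughly, the window is (illustration only, using the current helpers):

	/* probe A, inside irq_domain_create_hierarchy() */
	domain = irq_domain_create_linear(fwnode, size, ops, host_data);
	/* __irq_domain_add() has now published the domain */

	/* probe B, racing */
	d = irq_find_matching_fwspec(fwspec, DOMAIN_BUS_ANY);	/* finds it */
	virq = irq_create_fwspec_mapping(fwspec);	/* d->parent still NULL */

	/* probe A, too late */
	domain->root = parent->root;
	domain->parent = parent;
	domain->flags |= flags;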

> Splitting the work would help, as per the following patch.

Looks good to me. Do you want to submit that as a patch that I'll rebase
on or should I submit it as part of a v6?

Johan

^ permalink raw reply	[flat|nested] 58+ messages in thread

* Re: [PATCH v5 19/19] irqdomain: Switch to per-domain locking
  2023-02-10  9:56       ` Johan Hovold
@ 2023-02-10 11:38         ` Marc Zyngier
  -1 siblings, 0 replies; 58+ messages in thread
From: Marc Zyngier @ 2023-02-10 11:38 UTC (permalink / raw)
  To: Johan Hovold
  Cc: Johan Hovold, Thomas Gleixner, x86, platform-driver-x86,
	linux-arm-kernel, linux-mips, linux-kernel, Hsin-Yi Wang,
	Mark-PK Tsai

On Fri, 10 Feb 2023 09:56:03 +0000,
Johan Hovold <johan@kernel.org> wrote:
> 
> On Thu, Feb 09, 2023 at 04:00:55PM +0000, Marc Zyngier wrote:
> > On Thu, 09 Feb 2023 13:23:23 +0000,
> > Johan Hovold <johan+linaro@kernel.org> wrote:
> > > 
> > > The IRQ domain structures are currently protected by the global
> > > irq_domain_mutex. Switch to using more fine-grained per-domain locking,
> > > which can speed up parallel probing by reducing lock contention.
> > > 
> > > On a recent arm64 laptop, the total time spent waiting for the locks
> > > during boot drops from 160 to 40 ms on average, while the maximum
> > > aggregate wait time drops from 550 to 90 ms over ten runs for example.
> > > 
> > > Note that the domain lock of the root domain (innermost domain) must be
> > > used for hierarchical domains. For non-hierarchical domains (as for root
> > > domains), the new root pointer is set to the domain itself so that
> > > domain->root->mutex can be used in shared code paths.
> > > 
> > > Also note that hierarchical domains should be constructed using
> > > irq_domain_create_hierarchy() (or irq_domain_add_hierarchy()) to avoid
> > > poking at irqdomain internals. As a safeguard, the lockdep assertion in
> > > irq_domain_set_mapping() will catch any offenders that fail to set the
> > > root domain pointer.
> > > 
> > > Tested-by: Hsin-Yi Wang <hsinyi@chromium.org>
> > > Tested-by: Mark-PK Tsai <mark-pk.tsai@mediatek.com>
> > > Signed-off-by: Johan Hovold <johan+linaro@kernel.org>
> > > ---
> > >  include/linux/irqdomain.h |  4 +++
> > >  kernel/irq/irqdomain.c    | 61 +++++++++++++++++++++++++--------------
> > >  2 files changed, 44 insertions(+), 21 deletions(-)
> > > 
> > > diff --git a/include/linux/irqdomain.h b/include/linux/irqdomain.h
> > > index 16399de00b48..cad47737a052 100644
> > > --- a/include/linux/irqdomain.h
> > > +++ b/include/linux/irqdomain.h
> > > @@ -125,6 +125,8 @@ struct irq_domain_chip_generic;
> > >   *		core code.
> > >   * @flags:	Per irq_domain flags
> > >   * @mapcount:	The number of mapped interrupts
> > > + * @mutex:	Domain lock, hierarhical domains use root domain's lock
> > 
> > nit: hierarchical
> > 
> > > + * @root:	Pointer to root domain, or containing structure if non-hierarchical
> 
> > > @@ -226,6 +226,17 @@ struct irq_domain *__irq_domain_add(struct fwnode_handle *fwnode, unsigned int s
> > >  
> > >  	domain->revmap_size = size;
> > >  
> > > +	/*
> > > +	 * Hierarchical domains use the domain lock of the root domain
> > > +	 * (innermost domain).
> > > +	 *
> > > +	 * For non-hierarchical domains (as for root domains), the root
> > > +	 * pointer is set to the domain itself so that domain->root->mutex
> > > +	 * can be used in shared code paths.
> > > +	 */
> > > +	mutex_init(&domain->mutex);
> > > +	domain->root = domain;
> > > +
> > >  	irq_domain_check_hierarchy(domain);
> > >  
> > >  	mutex_lock(&irq_domain_mutex);
> 
> > > @@ -518,7 +529,11 @@ static void irq_domain_set_mapping(struct irq_domain *domain,
> > >  				   irq_hw_number_t hwirq,
> > >  				   struct irq_data *irq_data)
> > >  {
> > > -	lockdep_assert_held(&irq_domain_mutex);
> > > +	/*
> > > +	 * This also makes sure that all domains point to the same root when
> > > +	 * called from irq_domain_insert_irq() for each domain in a hierarchy.
> > > +	 */
> > > +	lockdep_assert_held(&domain->root->mutex);
> > >  
> > >  	if (irq_domain_is_nomap(domain))
> > >  		return;
> > > @@ -540,7 +555,7 @@ static void irq_domain_disassociate(struct irq_domain *domain, unsigned int irq)
> > >  
> > >  	hwirq = irq_data->hwirq;
> > >  
> > > -	mutex_lock(&irq_domain_mutex);
> > > +	mutex_lock(&domain->mutex);
> > 
> > So you made that point about being able to uniformly use root->mutex,
> > which I think is a good invariant. Yet you hardly make use of it. Why?
> 
> I went back and forth over that a bit, but decided to only use
> domain->root->mutex in paths that can be called for hierarchical
> domains (i.e. the "shared code paths" mentioned above).
> 
> Using it in paths that are clearly only called for non-hierarchical
> domains where domain->root == domain felt a bit lazy.

My concern here is that as this code gets further refactored, it may
become much harder to reason about what is the correct level of
locking.

> The counter argument is of course that using domain->root->lock allows
> people to think less about the code they are changing, but that's not
> necessarily always a good thing.

Eventually, non-hierarchical domains should simply die and be replaced
with a single level hierarchy. Having a unified locking in place will
definitely make the required work clearer.

> Also note that the lockdep asserts in the revmap helpers would catch
> anyone using domain->mutex where they should not (i.e. using
> domain->mutex for an hierarchical domain).

Lockdep is great, but lockdep is a runtime thing. It doesn't help
reasoning about what gets locked when changing this code.

> > > @@ -1132,6 +1147,7 @@ struct irq_domain *irq_domain_create_hierarchy(struct irq_domain *parent,
> > >  	else
> > >  		domain = irq_domain_create_tree(fwnode, ops, host_data);
> > >  	if (domain) {
> > > +		domain->root = parent->root;
> > >  		domain->parent = parent;
> > >  		domain->flags |= flags;
> > 
> > So we still have a bug here, as we have published a domain that we
> > keep updating. A parallel probing could find it in the interval and do
> > something completely wrong.
> 
> Indeed we do, even if device links should make this harder to hit these
> days.
> 
> > Splitting the work would help, as per the following patch.
> 
> Looks good to me. Do you want to submit that as a patch that I'll rebase
> on or should I submit it as part of a v6?

Just take it directly.

Thanks,

	M.

-- 
Without deviation from the norm, progress is not possible.

^ permalink raw reply	[flat|nested] 58+ messages in thread

* Re: [PATCH v5 19/19] irqdomain: Switch to per-domain locking
  2023-02-10 11:38         ` Marc Zyngier
@ 2023-02-10 12:57           ` Johan Hovold
  -1 siblings, 0 replies; 58+ messages in thread
From: Johan Hovold @ 2023-02-10 12:57 UTC (permalink / raw)
  To: Marc Zyngier
  Cc: Johan Hovold, Thomas Gleixner, x86, platform-driver-x86,
	linux-arm-kernel, linux-mips, linux-kernel, Hsin-Yi Wang,
	Mark-PK Tsai

On Fri, Feb 10, 2023 at 11:38:58AM +0000, Marc Zyngier wrote:
> On Fri, 10 Feb 2023 09:56:03 +0000,
> Johan Hovold <johan@kernel.org> wrote:
> > 
> > On Thu, Feb 09, 2023 at 04:00:55PM +0000, Marc Zyngier wrote:
> > > On Thu, 09 Feb 2023 13:23:23 +0000,
> > > Johan Hovold <johan+linaro@kernel.org> wrote:

> > I went back and forth over that a bit, but decided to only use
> > domain->root->mutex in paths that can be called for hierarchical
> > domains (i.e. the "shared code paths" mentioned above).
> > 
> > Using it in paths that are clearly only called for non-hierarchical
> > domains where domain->root == domain felt a bit lazy.
> 
> My concern here is that as this code gets further refactored, it may
> become much harder to reason about what is the correct level of
> locking.

Yeah, that's conceivable.

> > The counter argument is of course that using domain->root->lock allows
> > people to think less about the code they are changing, but that's not
> > necessarily always a good thing.
> 
> Eventually, non-hierarchical domains should simply die and be replaced
> with a single level hierarchy. Having a unified locking in place will
> definitely make the required work clearer.
> 
> > Also note that the lockdep asserts in the revmap helpers would catch
> > anyone using domain->mutex where they should not (i.e. using
> > domain->mutex for an hierarchical domain).
> 
> Lockdep is great, but lockdep is a runtime thing. It doesn't help
> reasoning about what gets locked when changing this code.

Contributors are expected to test their changes with lockdep enabled,
right?

But sure, using domain->root->mutex throughout may prevent
people from getting this wrong.

I'll update this for v6.
 
> > > > @@ -1132,6 +1147,7 @@ struct irq_domain *irq_domain_create_hierarchy(struct irq_domain *parent,
> > > >  	else
> > > >  		domain = irq_domain_create_tree(fwnode, ops, host_data);
> > > >  	if (domain) {
> > > > +		domain->root = parent->root;
> > > >  		domain->parent = parent;
> > > >  		domain->flags |= flags;
> > > 
> > > So we still have a bug here, as we have published a domain that we
> > > keep updating. A parallel probing could find it in the interval and do
> > > something completely wrong.
> > 
> > Indeed we do, even if device links should make this harder to hit these
> > days.
> > 
> > > Splitting the work would help, as per the following patch.
> > 
> > Looks good to me. Do you want to submit that as a patch that I'll rebase
> > on or should I submit it as part of a v6?
> 
> Just take it directly.

Ok, thanks.

I guess this turns the "Use irq_domain_create_hierarchy()" patches into
fixes that should be backported as well.

But note that your proposed diff may not be sufficient to prevent
lookups from racing with domain registration generally. Many drivers
still update the bus token after the domain has been added (and
apparently some still set flags after creating hierarchies too, I just
noticed, e.g. amd_iommu_create_irq_domain).

It seems we'd need to expose a separate allocation and registration
interface, or at least pass in the bus token to a new combined
interface.
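
Something along these lines (names entirely made up, just to illustrate
the shape of such an interface):

	struct irq_domain *d;

	/* allocate and initialise, not yet visible to lookups */
	d = irq_domain_alloc(fwnode, size, hwirq_max, ops, host_data);
	if (!d)
		return NULL;

	d->bus_token = bus_token;	/* no racing lookups possible yet */
	d->flags |= flags;

	/* only now add it to irq_domain_list */
	irq_domain_register(d);

	return d;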

Johan

^ permalink raw reply	[flat|nested] 58+ messages in thread

* Re: [PATCH v5 19/19] irqdomain: Switch to per-domain locking
  2023-02-10 12:57           ` Johan Hovold
@ 2023-02-10 15:06             ` Marc Zyngier
  -1 siblings, 0 replies; 58+ messages in thread
From: Marc Zyngier @ 2023-02-10 15:06 UTC (permalink / raw)
  To: Johan Hovold
  Cc: Johan Hovold, Thomas Gleixner, x86, platform-driver-x86,
	linux-arm-kernel, linux-mips, linux-kernel, Hsin-Yi Wang,
	Mark-PK Tsai

On Fri, 10 Feb 2023 12:57:40 +0000,
Johan Hovold <johan@kernel.org> wrote:
> 
> On Fri, Feb 10, 2023 at 11:38:58AM +0000, Marc Zyngier wrote:
> > On Fri, 10 Feb 2023 09:56:03 +0000,
> > Johan Hovold <johan@kernel.org> wrote:
> > > 
> > > On Thu, Feb 09, 2023 at 04:00:55PM +0000, Marc Zyngier wrote:
> > > > On Thu, 09 Feb 2023 13:23:23 +0000,
> > > > Johan Hovold <johan+linaro@kernel.org> wrote:
> 
> > > I went back and forth over that a bit, but decided to only use
> > > domain->root->mutex in paths that can be called for hierarchical
> > > domains (i.e. the "shared code paths" mentioned above).
> > > 
> > > Using it in paths that are clearly only called for non-hierarchical
> > > domains where domain->root == domain felt a bit lazy.
> > 
> > My concern here is that as this code gets further refactored, it may
> > become much harder to reason about what is the correct level of
> > locking.
> 
> Yeah, that's conceivable.
> 
> > > The counter argument is of course that using domain->root->lock allows
> > > people to think less about the code they are changing, but that's not
> > > necessarily always a good thing.
> > 
> > Eventually, non-hierarchical domains should simply die and be replaced
> > with a single level hierarchy. Having a unified locking in place will
> > definitely make the required work clearer.
> > 
> > > Also note that the lockdep asserts in the revmap helpers would catch
> > > anyone using domain->mutex where they should not (i.e. using
> > > domain->mutex for an hierarchical domain).
> > 
> > Lockdep is great, but lockdep is a runtime thing. It doesn't help
> > reasoning about what gets locked when changing this code.
> 
> Contributors are expected to test their changes with lockdep enabled,
> right?
> 
> But sure, using domain->root->mutex throughout may prevent
> people from getting this wrong.
> 
> I'll update this for v6.
>  
> > > > > @@ -1132,6 +1147,7 @@ struct irq_domain *irq_domain_create_hierarchy(struct irq_domain *parent,
> > > > >  	else
> > > > >  		domain = irq_domain_create_tree(fwnode, ops, host_data);
> > > > >  	if (domain) {
> > > > > +		domain->root = parent->root;
> > > > >  		domain->parent = parent;
> > > > >  		domain->flags |= flags;
> > > > 
> > > > So we still have a bug here, as we have published a domain that we
> > > > keep updating. A parallel probing could find it in the interval and do
> > > > something completely wrong.
> > > 
> > > Indeed we do, even if device links should make this harder to hit these
> > > days.
> > > 
> > > > Splitting the work would help, as per the following patch.
> > > 
> > > Looks good to me. Do you want to submit that as a patch that I'll rebase
> > > on or should I submit it as part of a v6?
> > 
> > Just take it directly.
> 
> Ok, thanks.
> 
> I guess this turns the "Use irq_domain_create_hierarchy()" patches into
> fixes that should be backported as well.

Maybe. Backports are not my immediate concern.

> But note that your proposed diff may not be sufficient to prevent
> lookups from racing with domain registration generally. Many drivers
> still update the bus token after the domain has been added (and
> apparently some still set flags after creating hierarchies too, I just
> noticed, e.g. amd_iommu_create_irq_domain).

The bus token should only rarely be a problem, as it is often set on
an intermediate level which isn't directly looked up by anything else.
And if it did happen, it would probably result in the domain not
being found.

Flags, on the other hand, are more problematic. But I consider this a
driver bug which should be fixed independently.
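
That is, instead of something like (illustrative only):

	d = irq_domain_create_hierarchy(parent, 0, 0, fwnode, ops, host_data);
	if (d)
		d->flags |= IRQ_DOMAIN_FLAG_MSI;	/* domain already published */

drivers should pass the flags in so the domain is never visible in a
half-initialised state:

	d = irq_domain_create_hierarchy(parent, IRQ_DOMAIN_FLAG_MSI, 0,
					fwnode, ops, host_data);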

> It seems we'd need to expose a separate allocation and registration
> interface, or at least pass in the bus token to a new combined
> interface.

Potentially, yes. But this could come later down the line. I'm more
concerned with getting this series into -next, as the merge window is
fast approaching.

Thanks,

	M.

-- 
Without deviation from the norm, progress is not possible.

^ permalink raw reply	[flat|nested] 58+ messages in thread

* Re: [PATCH v5 19/19] irqdomain: Switch to per-domain locking
  2023-02-10 15:06             ` Marc Zyngier
@ 2023-02-11 11:35               ` Johan Hovold
  -1 siblings, 0 replies; 58+ messages in thread
From: Johan Hovold @ 2023-02-11 11:35 UTC (permalink / raw)
  To: Marc Zyngier
  Cc: Johan Hovold, Thomas Gleixner, x86, platform-driver-x86,
	linux-arm-kernel, linux-mips, linux-kernel, Hsin-Yi Wang,
	Mark-PK Tsai

On Fri, Feb 10, 2023 at 03:06:37PM +0000, Marc Zyngier wrote:
> On Fri, 10 Feb 2023 12:57:40 +0000,
> Johan Hovold <johan@kernel.org> wrote:
> > 
> > On Fri, Feb 10, 2023 at 11:38:58AM +0000, Marc Zyngier wrote:
> > > On Fri, 10 Feb 2023 09:56:03 +0000,
> > > Johan Hovold <johan@kernel.org> wrote:
 
> > > > > > @@ -1132,6 +1147,7 @@ struct irq_domain *irq_domain_create_hierarchy(struct irq_domain *parent,
> > > > > >  	else
> > > > > >  		domain = irq_domain_create_tree(fwnode, ops, host_data);
> > > > > >  	if (domain) {
> > > > > > +		domain->root = parent->root;
> > > > > >  		domain->parent = parent;
> > > > > >  		domain->flags |= flags;
> > > > > 
> > > > > So we still have a bug here, as we have published a domain that we
> > > > > keep updating. A parallel probing could find it in the interval and do
> > > > > something completely wrong.
> > > > 
> > > > Indeed we do, even if device links should make this harder to hit these
> > > > days.
> > > > 
> > > > > Splitting the work would help, as per the following patch.
> > > > 
> > > > Looks good to me. Do you want to submit that as a patch that I'll rebase
> > > > on or should I submit it as part of a v6?
> > > 
> > > Just take it directly.
> > 
> > Ok, thanks.

I've added a commit message and turned it into a patch to include in v6
now:

commit 3af395aa894c7df94ef2337e572e5e1710b4bbda (HEAD -> work)
Author: Marc Zyngier <maz@kernel.org>
Date:   Thu Feb 9 16:00:55 2023 +0000

    irqdomain: Fix domain registration race
    
    Hierarchical domains created using irq_domain_create_hierarchy() are
    currently added to the domain list before having been fully initialised.
    
    This specifically means that a racing allocation request might fail to
    allocate irq data for the inner domains of a hierarchy in case the
    parent domain pointer has not yet been set up.
    
    Note that this is not really any issue for irqchip drivers that are
    registered early via IRQCHIP_DECLARE() or IRQCHIP_ACPI_DECLARE(), but
    could potentially cause trouble with drivers that are registered later
    (e.g. when using IRQCHIP_PLATFORM_DRIVER_BEGIN(), gpiochip drivers,
    etc.).
    
    Fixes: afb7da83b9f4 ("irqdomain: Introduce helper function irq_domain_add_hierarchy()")
    Cc: stable@vger.kernel.org      # 3.19
    ...
    [ johan: add a commit message ]
    Signed-off-by: Johan Hovold <johan+linaro@kernel.org>

Could you just give your SoB for the diff here so I can credit you as
author?
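
For reference, the shape of the fix is to split allocation from
publication so that the hierarchy pointers and flags are set before the
domain is added to the domain list and becomes visible to lookups. A
rough sketch follows; __irq_domain_create() and __irq_domain_publish()
below just stand in for whatever the helpers end up being called:

struct irq_domain *irq_domain_create_hierarchy(struct irq_domain *parent,
					       unsigned int flags,
					       unsigned int size,
					       struct fwnode_handle *fwnode,
					       const struct irq_domain_ops *ops,
					       void *host_data)
{
	struct irq_domain *domain;

	/* Allocate and initialise the domain without publishing it. */
	if (size)
		domain = __irq_domain_create(fwnode, size, size, 0, ops, host_data);
	else
		domain = __irq_domain_create(fwnode, 0, ~0, 0, ops, host_data);

	if (domain) {
		domain->root = parent->root;
		domain->parent = parent;
		domain->flags |= flags;

		/* Only now add it to irq_domain_list. */
		__irq_domain_publish(domain);
	}

	return domain;
}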

> > I guess this turns the "Use irq_domain_create_hierarchy()" patches into
> > fixes that should be backported as well.
> 
> Maybe. Backports are not my immediate concern.

Turns out all of those drivers are registered early via
IRQCHIP_DECLARE() or IRQCHIP_ACPI_DECLARE() so there shouldn't really be
any risk of hitting this race for those.
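
(For context, IRQCHIP_DECLARE()/IRQCHIP_ACPI_DECLARE() hook the init
callback into the early irqchip initialisation that runs during boot,
long before device probing starts, so nothing can race with the domain
setup done there. Roughly, with made-up driver names:)

#include <linux/irqchip.h>
#include <linux/of.h>

static int __init foo_intc_of_init(struct device_node *node,
				   struct device_node *parent)
{
	/* Create and fully configure the irq domain here, at early boot. */
	return 0;
}
IRQCHIP_DECLARE(foo_intc, "vendor,foo-intc", foo_intc_of_init);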
 
> > But note that your proposed diff may not be sufficient to prevent
> > lookups from racing with domain registration generally. Many drivers
> > still update the bus token after the domain has been added (and
> > apparently some also still set flags after creating hierarchies, I just
> > noticed, e.g. amd_iommu_create_irq_domain).
> 
> The bus token should only rarely be a problem, as it is often set on
> an intermediate level which isn't directly looked-up by anything else.
> And if it did happen, it would probably result in the domain not
> being found.
> 
> Flags, on the other hand, are more problematic. But I consider this a
> driver bug which should be fixed independently.

I agree.
 
> > It seems we'd need to expose a separate allocation and registration
> > interface, or at least pass in the bus token to a new combined
> > interface.
> 
> Potentially, yes. But this could come later down the line. I'm more
> concerned with getting this series into -next, as the merge window is
> fast approaching.

I'll post a v6 first thing Monday if you can give me that SoB before
then.

Johan

^ permalink raw reply	[flat|nested] 58+ messages in thread

* Re: [PATCH v5 19/19] irqdomain: Switch to per-domain locking
  2023-02-11 11:35               ` Johan Hovold
@ 2023-02-11 12:52                 ` Marc Zyngier
  -1 siblings, 0 replies; 58+ messages in thread
From: Marc Zyngier @ 2023-02-11 12:52 UTC (permalink / raw)
  To: Johan Hovold
  Cc: Johan Hovold, Thomas Gleixner, x86, platform-driver-x86,
	linux-arm-kernel, linux-mips, linux-kernel, Hsin-Yi Wang,
	Mark-PK Tsai

On Sat, 11 Feb 2023 11:35:32 +0000,
Johan Hovold <johan@kernel.org> wrote:
> 
> On Fri, Feb 10, 2023 at 03:06:37PM +0000, Marc Zyngier wrote:
> > On Fri, 10 Feb 2023 12:57:40 +0000,
> > Johan Hovold <johan@kernel.org> wrote:
> > > 
> > > On Fri, Feb 10, 2023 at 11:38:58AM +0000, Marc Zyngier wrote:
> > > > On Fri, 10 Feb 2023 09:56:03 +0000,
> > > > Johan Hovold <johan@kernel.org> wrote:
>  
> > > > > > > @@ -1132,6 +1147,7 @@ struct irq_domain *irq_domain_create_hierarchy(struct irq_domain *parent,
> > > > > > >  	else
> > > > > > >  		domain = irq_domain_create_tree(fwnode, ops, host_data);
> > > > > > >  	if (domain) {
> > > > > > > +		domain->root = parent->root;
> > > > > > >  		domain->parent = parent;
> > > > > > >  		domain->flags |= flags;
> > > > > > 
> > > > > > So we still have a bug here, as we have published a domain that we
> > > > > > keep updating. A parallel probing could find it in the interval and do
> > > > > > something completely wrong.
> > > > > 
> > > > > Indeed we do, even if device links should make this harder to hit these
> > > > > days.
> > > > > 
> > > > > > Splitting the work would help, as per the following patch.
> > > > > 
> > > > > Looks good to me. Do you want to submit that as a patch that I'll rebase
> > > > > on or should I submit it as part of a v6?
> > > > 
> > > > Just take it directly.
> > > 
> > > Ok, thanks.
> 
> I've added a commit message and turned it into a patch to include in v6
> now:
> 
> commit 3af395aa894c7df94ef2337e572e5e1710b4bbda (HEAD -> work)
> Author: Marc Zyngier <maz@kernel.org>
> Date:   Thu Feb 9 16:00:55 2023 +0000
> 
>     irqdomain: Fix domain registration race
>     
>     Hierarchical domains created using irq_domain_create_hierarchy() are
>     currently added to the domain list before having been fully initialised.
>     
>     This specifically means that a racing allocation request might fail to
>     allocate irq data for the inner domains of a hierarchy in case the
>     parent domain pointer has not yet been set up.
>     
>     Note that this is not really any issue for irqchip drivers that are
>     registered early via IRQCHIP_DECLARE() or IRQCHIP_ACPI_DECLARE(), but
>     could potentially cause trouble with drivers that are registered later
>     (e.g. when using IRQCHIP_PLATFORM_DRIVER_BEGIN(), gpiochip drivers,
>     etc.).
>     
>     Fixes: afb7da83b9f4 ("irqdomain: Introduce helper function irq_domain_add_hierarchy()")
>     Cc: stable@vger.kernel.org      # 3.19
>     ...
>     [ johan: add a commit message ]
>     Signed-off-by: Johan Hovold <johan+linaro@kernel.org>
> 
> Could you just give your SoB for the diff here so I can credit you as
> author?

Thanks for that. Feel free to add:

Signed-off-by: Marc Zyngier <maz@kernel.org>

> 
> > > I guess this turns the "Use irq_domain_create_hierarchy()" patches into
> > > fixes that should be backported as well.
> > 
> > Maybe. Backports are not my immediate concern.
> 
> Turns out all of those drivers are registered early via
> IRQCHIP_DECLARE() or IRQCHIP_ACPI_DECLARE() so there shouldn't really be
> any risk of hitting this race for those.
>  
> > > But note that your proposed diff may not be sufficient to prevent
> > > lookups from racing with domain registration generally. Many drivers
> > > still update the bus token after the domain has been added (and
> > > apparently some also still set flags after creating hierarchies, I just
> > > noticed, e.g. amd_iommu_create_irq_domain).
> > 
> > The bus token should only rarely be a problem, as it is often set on
> > an intermediate level which isn't directly looked-up by anything else.
> > And if it did happen, it would probably result in the domain not
> > being found.
> > 
> > Flags, on the other hand, are more problematic. But I consider this a
> > driver bug which should be fixed independently.
> 
> I agree.
>  
> > > It seems we'd need to expose a separate allocation and registration
> > > interface, or at least pass in the bus token to a new combined
> > > interface.
> > 
> > Potentially, yes. But this could come later down the line. I'm more
> > concerned with getting this series into -next, as the merge window is
> > fast approaching.
> 
> I'll post a v6 first thing Monday if you can give me that SoB before
> then.

You should be all set. Please post the series at your earliest
convenience, and I'll let it simmer in -next for a bit.

Thanks,

	M.

-- 
Without deviation from the norm, progress is not possible.

^ permalink raw reply	[flat|nested] 58+ messages in thread

end of thread, other threads:[~2023-02-11 12:53 UTC | newest]

Thread overview: 58+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2023-02-09 13:23 [PATCH v5 00/19] irqdomain: fix mapping race and clean up locking Johan Hovold
2023-02-09 13:23 ` Johan Hovold
2023-02-09 13:23 ` [PATCH v5 01/19] irqdomain: Fix association race Johan Hovold
2023-02-09 13:23   ` Johan Hovold
2023-02-09 13:23 ` [PATCH v5 02/19] irqdomain: Fix disassociation race Johan Hovold
2023-02-09 13:23   ` Johan Hovold
2023-02-09 13:23 ` [PATCH v5 03/19] irqdomain: Drop bogus fwspec-mapping error handling Johan Hovold
2023-02-09 13:23   ` Johan Hovold
2023-02-09 13:23 ` [PATCH v5 04/19] irqdomain: Look for existing mapping only once Johan Hovold
2023-02-09 13:23   ` Johan Hovold
2023-02-09 13:23 ` [PATCH v5 05/19] irqdomain: Refactor __irq_domain_alloc_irqs() Johan Hovold
2023-02-09 13:23   ` Johan Hovold
2023-02-09 13:23 ` [PATCH v5 06/19] irqdomain: Fix mapping-creation race Johan Hovold
2023-02-09 13:23   ` Johan Hovold
2023-02-09 14:03   ` Marc Zyngier
2023-02-09 14:03     ` Marc Zyngier
2023-02-10  9:10     ` Johan Hovold
2023-02-10  9:10       ` Johan Hovold
2023-02-09 13:23 ` [PATCH v5 07/19] irqdomain: Drop revmap mutex Johan Hovold
2023-02-09 13:23   ` Johan Hovold
2023-02-09 13:23 ` [PATCH v5 08/19] irqdomain: Drop dead domain-name assignment Johan Hovold
2023-02-09 13:23   ` Johan Hovold
2023-02-09 13:23 ` [PATCH v5 09/19] irqdomain: Drop leftover brackets Johan Hovold
2023-02-09 13:23   ` Johan Hovold
2023-02-09 13:23 ` [PATCH v5 10/19] irqdomain: Clean up irq_domain_push/pop_irq() Johan Hovold
2023-02-09 13:23   ` Johan Hovold
2023-02-09 13:23 ` [PATCH v5 11/19] x86/ioapic: Use irq_domain_create_hierarchy() Johan Hovold
2023-02-09 13:23   ` Johan Hovold
2023-02-09 13:23 ` [PATCH v5 12/19] x86/apic: " Johan Hovold
2023-02-09 13:23   ` Johan Hovold
2023-02-09 13:23 ` [PATCH v5 13/19] irqchip/alpine-msi: Use irq_domain_add_hierarchy() Johan Hovold
2023-02-09 13:23   ` Johan Hovold
2023-02-09 13:23 ` [PATCH v5 14/19] irqchip/gic-v2m: Use irq_domain_create_hierarchy() Johan Hovold
2023-02-09 13:23   ` Johan Hovold
2023-02-09 13:23 ` [PATCH v5 15/19] irqchip/gic-v3-its: " Johan Hovold
2023-02-09 13:23   ` Johan Hovold
2023-02-09 13:23 ` [PATCH v5 16/19] irqchip/gic-v3-mbi: " Johan Hovold
2023-02-09 13:23   ` Johan Hovold
2023-02-09 13:23 ` [PATCH v5 17/19] irqchip/loongson-pch-msi: " Johan Hovold
2023-02-09 13:23   ` Johan Hovold
2023-02-09 13:23 ` [PATCH v5 18/19] irqchip/mvebu-odmi: " Johan Hovold
2023-02-09 13:23   ` Johan Hovold
2023-02-09 13:23 ` [PATCH v5 19/19] irqdomain: Switch to per-domain locking Johan Hovold
2023-02-09 13:23   ` Johan Hovold
2023-02-09 16:00   ` Marc Zyngier
2023-02-09 16:00     ` Marc Zyngier
2023-02-10  9:56     ` Johan Hovold
2023-02-10  9:56       ` Johan Hovold
2023-02-10 11:38       ` Marc Zyngier
2023-02-10 11:38         ` Marc Zyngier
2023-02-10 12:57         ` Johan Hovold
2023-02-10 12:57           ` Johan Hovold
2023-02-10 15:06           ` Marc Zyngier
2023-02-10 15:06             ` Marc Zyngier
2023-02-11 11:35             ` Johan Hovold
2023-02-11 11:35               ` Johan Hovold
2023-02-11 12:52               ` Marc Zyngier
2023-02-11 12:52                 ` Marc Zyngier
