* [PATCH v7 0/5] genirq/gpio: Add driver for ThunderX and OCTEON-TX SoCs
@ 2017-08-09 22:51 David Daney
  2017-08-09 22:51 ` [PATCH v7 1/5] genirq: Export more irq_chip_*_parent() functions David Daney
                   ` (4 more replies)
  0 siblings, 5 replies; 13+ messages in thread
From: David Daney @ 2017-08-09 22:51 UTC (permalink / raw)
  To: Linus Walleij, Alexandre Courbot, Mark Rutland, Marc Zyngier,
	Thomas Gleixner, linux-gpio
  Cc: linux-kernel, David Daney

The ThunderX/OCTEON-TX GPIO hardware looks like a PCIe device, with
the interrupt signal from each GPIO line being routed to a dedicated
MSI-X vector.  This interrupt routing requires that we add some custom
processing to the beginning of the MSI-X irqdomain hierarchy.

Changes from v6:

 - Trivial rebase to v4.13.0-rc4:  Omit device tree bindings patch,
   which was already merged, and fix MAINTAINERS patch collision.

Changes from v5:

 - Added .irq_{request,release}_resources() functions to irq_chip.

 - Quit calling irq_set_irq_type() in the irq_map() function.

 - Added more error condition checking to irq_domain_{push,pop}_irq().

Changes from v4:

 - Rebased to post-v4.10 to support .set_config() function.

 - Added .get_direction() support.

 - Added PIN_CONFIG_INPUT_DEBOUNCE support.

 - Removed some improper use of ENOSYS.

Changes from v3:

 - Added some "depends on" clauses to the driver Kconfig to avoid
   build errors on some architectures when doing COMPILE_TEST builds.

Changes from v2:

 - in 4/6: Added Rob Herring's Acked-by

 - Added three patches to genirq/irqdomain to support interrupt code
   in the driver.

 - Rewrote the irq code in the driver to use an irqdomain hierarchy.

 - Other naming and style changes as recommended by Linus Walleij.

Changes from v1:

 - in 1/3: Addressed Rob Herring's comments.

 - in 2/3: Trivial cleanups found in internal review, and added some
   comments.

David Daney (5):
  genirq: Export more irq_chip_*_parent() functions.
  genirq: Add handle_fasteoi_{level,edge}_irq flow handlers.
  irqdomain: Add irq_domain_{push,pop}_irq() functions.
  gpio: Add gpio driver support for ThunderX and OCTEON-TX
  MAINTAINERS: Add entry for THUNDERX GPIO Driver.

 MAINTAINERS                  |   5 +
 drivers/gpio/Kconfig         |   8 +
 drivers/gpio/Makefile        |   1 +
 drivers/gpio/gpio-thunderx.c | 639 +++++++++++++++++++++++++++++++++++++++++++
 include/linux/irq.h          |   2 +
 include/linux/irqdomain.h    |   3 +
 kernel/irq/chip.c            | 105 +++++++
 kernel/irq/irqdomain.c       | 178 ++++++++++++
 8 files changed, 941 insertions(+)
 create mode 100644 drivers/gpio/gpio-thunderx.c

-- 
1.8.3.1

^ permalink raw reply	[flat|nested] 13+ messages in thread

* [PATCH v7 1/5] genirq: Export more irq_chip_*_parent() functions.
  2017-08-09 22:51 [PATCH v7 0/5] genirq/gpio: Add driver for ThunderX and OCTEON-TX SoCs David Daney
@ 2017-08-09 22:51 ` David Daney
  2017-08-09 22:51 ` [PATCH v7 2/5] genirq: Add handle_fasteoi_{level,edge}_irq flow handlers David Daney
                   ` (3 subsequent siblings)
  4 siblings, 0 replies; 13+ messages in thread
From: David Daney @ 2017-08-09 22:51 UTC (permalink / raw)
  To: Linus Walleij, Alexandre Courbot, Mark Rutland, Marc Zyngier,
	Thomas Gleixner, linux-gpio
  Cc: linux-kernel, David Daney

Many functions in the irq_chip_*_parent() family, such as
irq_chip_mask_parent() and irq_chip_unmask_parent(), are exported,
but not all of them are.

Add EXPORT_SYMBOL_GPL to irq_chip_enable_parent(),
irq_chip_disable_parent() and irq_chip_set_affinity_parent(), so that
they are likewise usable from modules.

Signed-off-by: David Daney <david.daney@cavium.com>
---
 kernel/irq/chip.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/kernel/irq/chip.c b/kernel/irq/chip.c
index a3cc37c..6514f07 100644
--- a/kernel/irq/chip.c
+++ b/kernel/irq/chip.c
@@ -1105,6 +1105,7 @@ void irq_chip_enable_parent(struct irq_data *data)
 	else
 		data->chip->irq_unmask(data);
 }
+EXPORT_SYMBOL_GPL(irq_chip_enable_parent);
 
 /**
  * irq_chip_disable_parent - Disable the parent interrupt (defaults to mask if
@@ -1119,6 +1120,7 @@ void irq_chip_disable_parent(struct irq_data *data)
 	else
 		data->chip->irq_mask(data);
 }
+EXPORT_SYMBOL_GPL(irq_chip_disable_parent);
 
 /**
  * irq_chip_ack_parent - Acknowledge the parent interrupt
@@ -1181,6 +1183,7 @@ int irq_chip_set_affinity_parent(struct irq_data *data,
 
 	return -ENOSYS;
 }
+EXPORT_SYMBOL_GPL(irq_chip_set_affinity_parent);
 
 /**
  * irq_chip_set_type_parent - Set IRQ type on the parent interrupt
-- 
1.8.3.1


* [PATCH v7 2/5] genirq: Add handle_fasteoi_{level,edge}_irq flow handlers.
  2017-08-09 22:51 [PATCH v7 0/5] genirq/gpio: Add driver for ThunderX and OCTEON-TX SoCs David Daney
  2017-08-09 22:51 ` [PATCH v7 1/5] genirq: Export more irq_chip_*_parent() functions David Daney
@ 2017-08-09 22:51 ` David Daney
  2017-08-14 10:25   ` Thomas Gleixner
  2017-08-09 22:51 ` [PATCH v7 3/5] irqdomain: Add irq_domain_{push,pop}_irq() functions David Daney
                   ` (2 subsequent siblings)
  4 siblings, 1 reply; 13+ messages in thread
From: David Daney @ 2017-08-09 22:51 UTC (permalink / raw)
  To: Linus Walleij, Alexandre Courbot, Mark Rutland, Marc Zyngier,
	Thomas Gleixner, linux-gpio
  Cc: linux-kernel, David Daney

A follow-on patch for gpio-thunderx uses an irqdomain hierarchy that
requires slightly different flow handlers.  Add them to chip.c, which
contains most of the other flow handlers.

Signed-off-by: David Daney <david.daney@cavium.com>
---
 include/linux/irq.h |   2 ++
 kernel/irq/chip.c   | 102 ++++++++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 104 insertions(+)

diff --git a/include/linux/irq.h b/include/linux/irq.h
index d2d54379..86a2273 100644
--- a/include/linux/irq.h
+++ b/include/linux/irq.h
@@ -568,6 +568,8 @@ static inline int irq_set_parent(int irq, int parent_irq)
 extern int irq_chip_pm_get(struct irq_data *data);
 extern int irq_chip_pm_put(struct irq_data *data);
 #ifdef	CONFIG_IRQ_DOMAIN_HIERARCHY
+extern void handle_fasteoi_edge_irq(struct irq_desc *desc);
+extern void handle_fasteoi_level_irq(struct irq_desc *desc);
 extern void irq_chip_enable_parent(struct irq_data *data);
 extern void irq_chip_disable_parent(struct irq_data *data);
 extern void irq_chip_ack_parent(struct irq_data *data);
diff --git a/kernel/irq/chip.c b/kernel/irq/chip.c
index 6514f07..7511591 100644
--- a/kernel/irq/chip.c
+++ b/kernel/irq/chip.c
@@ -1093,6 +1093,108 @@ void irq_cpu_offline(void)
 
 #ifdef	CONFIG_IRQ_DOMAIN_HIERARCHY
 /**
+ *	handle_fasteoi_edge_irq - irq handler for edge hierarchy
+ *	stacked on transparent controllers
+ *
+ *	@desc:	the interrupt description structure for this irq
+ *
+ *	Like handle_fasteoi_irq(), but for use with hierarchy where
+ *	the irq_chip also needs to have its ->irq_ack() function
+ *	called.
+ */
+void handle_fasteoi_edge_irq(struct irq_desc *desc)
+{
+	struct irq_chip *chip = desc->irq_data.chip;
+
+	raw_spin_lock(&desc->lock);
+
+	if (!irq_may_run(desc))
+		goto out;
+
+	desc->istate &= ~(IRQS_REPLAY | IRQS_WAITING);
+
+	/*
+	 * If its disabled or no action available
+	 * then mask it and get out of here:
+	 */
+	if (unlikely(!desc->action || irqd_irq_disabled(&desc->irq_data))) {
+		desc->istate |= IRQS_PENDING;
+		mask_irq(desc);
+		goto out;
+	}
+
+	kstat_incr_irqs_this_cpu(desc);
+	if (desc->istate & IRQS_ONESHOT)
+		mask_irq(desc);
+
+	/* Start handling the irq */
+	desc->irq_data.chip->irq_ack(&desc->irq_data);
+
+	preflow_handler(desc);
+	handle_irq_event(desc);
+
+	cond_unmask_eoi_irq(desc, chip);
+
+	raw_spin_unlock(&desc->lock);
+	return;
+out:
+	if (!(chip->flags & IRQCHIP_EOI_IF_HANDLED))
+		chip->irq_eoi(&desc->irq_data);
+	raw_spin_unlock(&desc->lock);
+}
+EXPORT_SYMBOL_GPL(handle_fasteoi_edge_irq);
+
+/**
+ *	handle_fasteoi_level_irq - irq handler for level hierarchy
+ *	stacked on transparent controllers
+ *
+ *	@desc:	the interrupt description structure for this irq
+ *
+ *	Like handle_fasteoi_irq(), but for use with hierarchy where
+ *	the irq_chip also needs to have its ->irq_mask_ack() function
+ *	called.
+ */
+void handle_fasteoi_level_irq(struct irq_desc *desc)
+{
+	struct irq_chip *chip = desc->irq_data.chip;
+
+	raw_spin_lock(&desc->lock);
+	mask_ack_irq(desc);
+
+	if (!irq_may_run(desc))
+		goto out;
+
+	desc->istate &= ~(IRQS_REPLAY | IRQS_WAITING);
+
+	/*
+	 * If its disabled or no action available
+	 * then mask it and get out of here:
+	 */
+	if (unlikely(!desc->action || irqd_irq_disabled(&desc->irq_data))) {
+		desc->istate |= IRQS_PENDING;
+		mask_irq(desc);
+		goto out;
+	}
+
+	kstat_incr_irqs_this_cpu(desc);
+	if (desc->istate & IRQS_ONESHOT)
+		mask_irq(desc);
+
+	preflow_handler(desc);
+	handle_irq_event(desc);
+
+	cond_unmask_eoi_irq(desc, chip);
+
+	raw_spin_unlock(&desc->lock);
+	return;
+out:
+	if (!(chip->flags & IRQCHIP_EOI_IF_HANDLED))
+		chip->irq_eoi(&desc->irq_data);
+	raw_spin_unlock(&desc->lock);
+}
+EXPORT_SYMBOL_GPL(handle_fasteoi_level_irq);
+
+/**
  * irq_chip_enable_parent - Enable the parent interrupt (defaults to unmask if
  * NULL)
  * @data:	Pointer to interrupt specific data
-- 
1.8.3.1


* [PATCH v7 3/5] irqdomain: Add irq_domain_{push,pop}_irq() functions.
  2017-08-09 22:51 [PATCH v7 0/5] genirq/gpio: Add driver for ThunderX and OCTEON-TX SoCs David Daney
  2017-08-09 22:51 ` [PATCH v7 1/5] genirq: Export more irq_chip_*_parent() functions David Daney
  2017-08-09 22:51 ` [PATCH v7 2/5] genirq: Add handle_fasteoi_{level,edge}_irq flow handlers David Daney
@ 2017-08-09 22:51 ` David Daney
  2017-08-15 13:50   ` Marc Zyngier
  2017-08-09 22:51 ` [PATCH v7 4/5] gpio: Add gpio driver support for ThunderX and OCTEON-TX David Daney
  2017-08-09 22:51 ` [PATCH v7 5/5] MAINTAINERS: Add entry for THUNDERX GPIO Driver David Daney
  4 siblings, 1 reply; 13+ messages in thread
From: David Daney @ 2017-08-09 22:51 UTC (permalink / raw)
  To: Linus Walleij, Alexandre Courbot, Mark Rutland, Marc Zyngier,
	Thomas Gleixner, linux-gpio
  Cc: linux-kernel, David Daney

For an already existing irqdomain hierarchy, as might be obtained via
a call to pci_enable_msix_range(), a PCI driver wishing to add an
additional irqdomain needs to be able to insert that irqdomain into
the already-initialized hierarchy.  Calling
irq_domain_create_hierarchy() allows the new irqdomain to be created,
but no existing code initializes the associated irq_data.

Add a couple of helper functions (irq_domain_push_irq() and
irq_domain_pop_irq()) to initialize the irq_data for the new
irqdomain added to an existing hierarchy.

Signed-off-by: David Daney <david.daney@cavium.com>
---
 include/linux/irqdomain.h |   3 +
 kernel/irq/irqdomain.c    | 178 ++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 181 insertions(+)

diff --git a/include/linux/irqdomain.h b/include/linux/irqdomain.h
index cac77a5..2318f29 100644
--- a/include/linux/irqdomain.h
+++ b/include/linux/irqdomain.h
@@ -460,6 +460,9 @@ extern void irq_domain_free_irqs_common(struct irq_domain *domain,
 extern void irq_domain_free_irqs_top(struct irq_domain *domain,
 				     unsigned int virq, unsigned int nr_irqs);
 
+extern int irq_domain_push_irq(struct irq_domain *domain, int virq, void *arg);
+extern int irq_domain_pop_irq(struct irq_domain *domain, int virq);
+
 extern int irq_domain_alloc_irqs_parent(struct irq_domain *domain,
 					unsigned int irq_base,
 					unsigned int nr_irqs, void *arg);
diff --git a/kernel/irq/irqdomain.c b/kernel/irq/irqdomain.c
index f1f2514..629f770 100644
--- a/kernel/irq/irqdomain.c
+++ b/kernel/irq/irqdomain.c
@@ -1448,6 +1448,184 @@ int __irq_domain_alloc_irqs(struct irq_domain *domain, int irq_base,
 	return ret;
 }
 
+/* The irq_data was moved, fix the revmap to refer to the new location */
+static void irq_domain_fix_revmap(struct irq_data *d)
+{
+	void **slot;
+
+	if (d->hwirq < d->domain->revmap_size)
+		return; /* Not using radix tree. */
+
+	/* Fix up the revmap. */
+	mutex_lock(&revmap_trees_mutex);
+	slot = radix_tree_lookup_slot(&d->domain->revmap_tree, d->hwirq);
+	if (slot)
+		radix_tree_replace_slot(&d->domain->revmap_tree, slot, d);
+	mutex_unlock(&revmap_trees_mutex);
+}
+
+/**
+ * irq_domain_push_irq() - Push a domain in to the top of a hierarchy.
+ * @domain:	Domain to push.
+ * @virq:	Irq to push the domain in to.
+ * @arg:	Passed to the irq_domain_ops alloc() function.
+ *
+ * For an already existing irqdomain hierarchy, as might be obtained
+ * via a call to pci_enable_msix(), add an additional domain to the
+ * head of the processing chain.  Must be called before request_irq()
+ * has been called.
+ */
+int irq_domain_push_irq(struct irq_domain *domain, int virq, void *arg)
+{
+	struct irq_data *child_irq_data;
+	struct irq_data *root_irq_data = irq_get_irq_data(virq);
+	struct irq_desc *desc;
+	int rv = 0;
+
+	/*
+	 * Check that no action has been set, which indicates the virq
+	 * is in a state where this function doesn't have to deal with
+	 * races between interrupt handling and maintaining the
+	 * hierarchy.  This will catch gross misuse.  Attempting to
+	 * make the check race free would require holding locks across
+	 * calls to struct irq_domain_ops->alloc(), which could lead
+	 * to deadlock, so we just do a simple check before starting.
+	 */
+	desc = irq_to_desc(virq);
+	if (!desc)
+		return -EINVAL;
+	if (WARN_ON(desc->action))
+		return -EBUSY;
+
+	if (domain == NULL)
+		return -EINVAL;
+
+	if (WARN_ON(!domain->ops->alloc))
+		return -EINVAL;
+
+	if (!root_irq_data)
+		return -EINVAL;
+
+	child_irq_data = kzalloc_node(sizeof(*child_irq_data), GFP_KERNEL,
+				      irq_data_get_node(root_irq_data));
+	if (!child_irq_data)
+		return -ENOMEM;
+
+	mutex_lock(&irq_domain_mutex);
+
+	/* Copy the original irq_data. */
+	*child_irq_data = *root_irq_data;
+
+	irq_domain_fix_revmap(child_irq_data);
+
+	/*
+	 * Overwrite the root_irq_data, which is embedded in struct
+	 * irq_desc, with values for this domain.
+	 */
+	root_irq_data->parent_data = child_irq_data;
+	root_irq_data->domain = domain;
+	root_irq_data->mask = 0;
+	root_irq_data->hwirq = 0;
+	root_irq_data->chip = NULL;
+	root_irq_data->chip_data = NULL;
+	rv = domain->ops->alloc(domain, virq, 1, arg);
+	if (rv) {
+		/* Restore the original irq_data. */
+		*root_irq_data = *child_irq_data;
+		irq_domain_fix_revmap(root_irq_data);
+		goto error;
+	}
+
+	if (root_irq_data->hwirq < domain->revmap_size) {
+		domain->linear_revmap[root_irq_data->hwirq] = virq;
+	} else {
+		mutex_lock(&revmap_trees_mutex);
+		radix_tree_insert(&domain->revmap_tree,
+				  root_irq_data->hwirq, root_irq_data);
+		mutex_unlock(&revmap_trees_mutex);
+	}
+error:
+	mutex_unlock(&irq_domain_mutex);
+
+	return rv;
+}
+EXPORT_SYMBOL_GPL(irq_domain_push_irq);
+
+/**
+ * irq_domain_pop_irq() - Remove a domain from the top of a hierarchy.
+ * @domain:	Domain to remove.
+ * @virq:	Irq to remove the domain from.
+ *
+ * Undo the effects of a call to irq_domain_push_irq().  Must be
+ * called either before request_irq() or after free_irq().
+ */
+int irq_domain_pop_irq(struct irq_domain *domain, int virq)
+{
+	struct irq_data *root_irq_data = irq_get_irq_data(virq);
+	struct irq_data *child_irq_data;
+	struct irq_data *tmp_irq_data;
+	struct irq_desc *desc;
+
+	/*
+	 * Check that no action is set, which indicates the virq is in
+	 * a state where this function doesn't have to deal with races
+	 * between interrupt handling and maintaining the hierarchy.
+	 * This will catch gross misuse.  Attempting to make the check
+	 * race free would require holding locks across calls to
+	 * struct irq_domain_ops->free(), which could lead to
+	 * deadlock, so we just do a simple check before starting.
+	 */
+	desc = irq_to_desc(virq);
+	if (!desc)
+		return -EINVAL;
+	if (WARN_ON(desc->action))
+		return -EBUSY;
+
+	if (domain == NULL)
+		return -EINVAL;
+
+	if (!root_irq_data)
+		return -EINVAL;
+
+	tmp_irq_data = irq_domain_get_irq_data(domain, virq);
+
+	/* We can only "pop" if this domain is at the top of the list */
+	if (WARN_ON(root_irq_data != tmp_irq_data))
+		return -EINVAL;
+
+	if (WARN_ON(root_irq_data->domain != domain))
+		return -EINVAL;
+
+	child_irq_data = root_irq_data->parent_data;
+	if (WARN_ON(!child_irq_data))
+		return -EINVAL;
+
+	mutex_lock(&irq_domain_mutex);
+
+	root_irq_data->parent_data = NULL;
+
+	if (root_irq_data->hwirq >= domain->revmap_size) {
+		mutex_lock(&revmap_trees_mutex);
+		radix_tree_delete(&domain->revmap_tree, root_irq_data->hwirq);
+		mutex_unlock(&revmap_trees_mutex);
+	}
+
+	if (domain->ops->free)
+		domain->ops->free(domain, virq, 1);
+
+	/* Restore the original irq_data. */
+	*root_irq_data = *child_irq_data;
+
+	irq_domain_fix_revmap(root_irq_data);
+
+	mutex_unlock(&irq_domain_mutex);
+
+	kfree(child_irq_data);
+
+	return 0;
+}
+EXPORT_SYMBOL_GPL(irq_domain_pop_irq);
+
 /**
  * irq_domain_free_irqs - Free IRQ number and associated data structures
  * @virq:	base IRQ number
-- 
1.8.3.1


* [PATCH v7 4/5] gpio: Add gpio driver support for ThunderX and OCTEON-TX
  2017-08-09 22:51 [PATCH v7 0/5] genirq/gpio: Add driver for ThunderX and OCTEON-TX SoCs David Daney
                   ` (2 preceding siblings ...)
  2017-08-09 22:51 ` [PATCH v7 3/5] irqdomain: Add irq_domain_{push,pop}_irq() functions David Daney
@ 2017-08-09 22:51 ` David Daney
  2017-08-09 22:51 ` [PATCH v7 5/5] MAINTAINERS: Add entry for THUNDERX GPIO Driver David Daney
  4 siblings, 0 replies; 13+ messages in thread
From: David Daney @ 2017-08-09 22:51 UTC (permalink / raw)
  To: Linus Walleij, Alexandre Courbot, Mark Rutland, Marc Zyngier,
	Thomas Gleixner, linux-gpio
  Cc: linux-kernel, David Daney

Cavium ThunderX and OCTEON-TX are arm64-based SoCs.  Add a driver for
the on-chip GPIO pins.

Signed-off-by: David Daney <david.daney@cavium.com>
---
 drivers/gpio/Kconfig         |   8 +
 drivers/gpio/Makefile        |   1 +
 drivers/gpio/gpio-thunderx.c | 639 +++++++++++++++++++++++++++++++++++++++++++
 3 files changed, 648 insertions(+)
 create mode 100644 drivers/gpio/gpio-thunderx.c

diff --git a/drivers/gpio/Kconfig b/drivers/gpio/Kconfig
index 461d6fc..4e7f2bd 100644
--- a/drivers/gpio/Kconfig
+++ b/drivers/gpio/Kconfig
@@ -450,6 +450,14 @@ config GPIO_TS4800
 	help
 	  This driver support TS-4800 FPGA GPIO controllers.
 
+config GPIO_THUNDERX
+	tristate "Cavium ThunderX/OCTEON-TX GPIO"
+	depends on ARCH_THUNDER || (64BIT && COMPILE_TEST)
+	depends on PCI_MSI && IRQ_DOMAIN_HIERARCHY
+	help
+	  Say yes here to support the on-chip GPIO lines on the ThunderX
+	  and OCTEON-TX families of SoCs.
+
 config GPIO_TZ1090
 	bool "Toumaz Xenif TZ1090 GPIO support"
 	depends on SOC_TZ1090
diff --git a/drivers/gpio/Makefile b/drivers/gpio/Makefile
index a9fda6c..9ae168a 100644
--- a/drivers/gpio/Makefile
+++ b/drivers/gpio/Makefile
@@ -113,6 +113,7 @@ obj-$(CONFIG_GPIO_SYSCON)	+= gpio-syscon.o
 obj-$(CONFIG_GPIO_TB10X)	+= gpio-tb10x.o
 obj-$(CONFIG_GPIO_TC3589X)	+= gpio-tc3589x.o
 obj-$(CONFIG_GPIO_TEGRA)	+= gpio-tegra.o
+obj-$(CONFIG_GPIO_THUNDERX)	+= gpio-thunderx.o
 obj-$(CONFIG_GPIO_TIMBERDALE)	+= gpio-timberdale.o
 obj-$(CONFIG_GPIO_PALMAS)	+= gpio-palmas.o
 obj-$(CONFIG_GPIO_TPIC2810)	+= gpio-tpic2810.o
diff --git a/drivers/gpio/gpio-thunderx.c b/drivers/gpio/gpio-thunderx.c
new file mode 100644
index 0000000..c93b377
--- /dev/null
+++ b/drivers/gpio/gpio-thunderx.c
@@ -0,0 +1,639 @@
+/*
+ * This file is subject to the terms and conditions of the GNU General Public
+ * License.  See the file "COPYING" in the main directory of this archive
+ * for more details.
+ *
+ * Copyright (C) 2016, 2017 Cavium Inc.
+ */
+
+#include <linux/bitops.h>
+#include <linux/gpio/driver.h>
+#include <linux/interrupt.h>
+#include <linux/io.h>
+#include <linux/irq.h>
+#include <linux/kernel.h>
+#include <linux/module.h>
+#include <linux/pci.h>
+#include <linux/spinlock.h>
+
+
+#define GPIO_RX_DAT	0x0
+#define GPIO_TX_SET	0x8
+#define GPIO_TX_CLR	0x10
+#define GPIO_CONST	0x90
+#define  GPIO_CONST_GPIOS_MASK 0xff
+#define GPIO_BIT_CFG	0x400
+#define  GPIO_BIT_CFG_TX_OE		BIT(0)
+#define  GPIO_BIT_CFG_PIN_XOR		BIT(1)
+#define  GPIO_BIT_CFG_INT_EN		BIT(2)
+#define  GPIO_BIT_CFG_INT_TYPE		BIT(3)
+#define  GPIO_BIT_CFG_FIL_MASK		GENMASK(11, 4)
+#define  GPIO_BIT_CFG_FIL_CNT_SHIFT	4
+#define  GPIO_BIT_CFG_FIL_SEL_SHIFT	8
+#define  GPIO_BIT_CFG_TX_OD		BIT(12)
+#define  GPIO_BIT_CFG_PIN_SEL_MASK	GENMASK(25, 16)
+#define GPIO_INTR	0x800
+#define  GPIO_INTR_INTR			BIT(0)
+#define  GPIO_INTR_INTR_W1S		BIT(1)
+#define  GPIO_INTR_ENA_W1C		BIT(2)
+#define  GPIO_INTR_ENA_W1S		BIT(3)
+#define GPIO_2ND_BANK	0x1400
+
+#define GLITCH_FILTER_400NS ((4u << GPIO_BIT_CFG_FIL_SEL_SHIFT) | \
+			     (9u << GPIO_BIT_CFG_FIL_CNT_SHIFT))
+
+struct thunderx_gpio;
+
+struct thunderx_line {
+	struct thunderx_gpio	*txgpio;
+	unsigned int		line;
+	unsigned int		fil_bits;
+};
+
+struct thunderx_gpio {
+	struct gpio_chip	chip;
+	u8 __iomem		*register_base;
+	struct irq_domain	*irqd;
+	struct msix_entry	*msix_entries;	/* per line MSI-X */
+	struct thunderx_line	*line_entries;	/* per line irq info */
+	raw_spinlock_t		lock;
+	unsigned long		invert_mask[2];
+	unsigned long		od_mask[2];
+	int			base_msi;
+};
+
+static unsigned int bit_cfg_reg(unsigned int line)
+{
+	return 8 * line + GPIO_BIT_CFG;
+}
+
+static unsigned int intr_reg(unsigned int line)
+{
+	return 8 * line + GPIO_INTR;
+}
+
+static bool thunderx_gpio_is_gpio_nowarn(struct thunderx_gpio *txgpio,
+					 unsigned int line)
+{
+	u64 bit_cfg = readq(txgpio->register_base + bit_cfg_reg(line));
+
+	return (bit_cfg & GPIO_BIT_CFG_PIN_SEL_MASK) == 0;
+}
+
+/*
+ * Check (and WARN) that the pin is available for GPIO.  We will not
+ * allow modification of the state of non-GPIO pins from this driver.
+ */
+static bool thunderx_gpio_is_gpio(struct thunderx_gpio *txgpio,
+				  unsigned int line)
+{
+	bool rv = thunderx_gpio_is_gpio_nowarn(txgpio, line);
+
+	WARN_RATELIMIT(!rv, "Pin %d not available for GPIO\n", line);
+
+	return rv;
+}
+
+static int thunderx_gpio_request(struct gpio_chip *chip, unsigned int line)
+{
+	struct thunderx_gpio *txgpio = gpiochip_get_data(chip);
+
+	return thunderx_gpio_is_gpio(txgpio, line) ? 0 : -EIO;
+}
+
+static int thunderx_gpio_dir_in(struct gpio_chip *chip, unsigned int line)
+{
+	struct thunderx_gpio *txgpio = gpiochip_get_data(chip);
+
+	if (!thunderx_gpio_is_gpio(txgpio, line))
+		return -EIO;
+
+	raw_spin_lock(&txgpio->lock);
+	clear_bit(line, txgpio->invert_mask);
+	clear_bit(line, txgpio->od_mask);
+	writeq(txgpio->line_entries[line].fil_bits,
+	       txgpio->register_base + bit_cfg_reg(line));
+	raw_spin_unlock(&txgpio->lock);
+	return 0;
+}
+
+static void thunderx_gpio_set(struct gpio_chip *chip, unsigned int line,
+			      int value)
+{
+	struct thunderx_gpio *txgpio = gpiochip_get_data(chip);
+	int bank = line / 64;
+	int bank_bit = line % 64;
+
+	void __iomem *reg = txgpio->register_base +
+		(bank * GPIO_2ND_BANK) + (value ? GPIO_TX_SET : GPIO_TX_CLR);
+
+	writeq(BIT_ULL(bank_bit), reg);
+}
+
+static int thunderx_gpio_dir_out(struct gpio_chip *chip, unsigned int line,
+				 int value)
+{
+	struct thunderx_gpio *txgpio = gpiochip_get_data(chip);
+	u64 bit_cfg = txgpio->line_entries[line].fil_bits | GPIO_BIT_CFG_TX_OE;
+
+	if (!thunderx_gpio_is_gpio(txgpio, line))
+		return -EIO;
+
+	raw_spin_lock(&txgpio->lock);
+
+	thunderx_gpio_set(chip, line, value);
+
+	if (test_bit(line, txgpio->invert_mask))
+		bit_cfg |= GPIO_BIT_CFG_PIN_XOR;
+
+	if (test_bit(line, txgpio->od_mask))
+		bit_cfg |= GPIO_BIT_CFG_TX_OD;
+
+	writeq(bit_cfg, txgpio->register_base + bit_cfg_reg(line));
+
+	raw_spin_unlock(&txgpio->lock);
+	return 0;
+}
+
+static int thunderx_gpio_get_direction(struct gpio_chip *chip, unsigned int line)
+{
+	struct thunderx_gpio *txgpio = gpiochip_get_data(chip);
+	u64 bit_cfg;
+
+	if (!thunderx_gpio_is_gpio_nowarn(txgpio, line))
+		/*
+		 * Say it is input for now to avoid WARNing on
+		 * gpiochip_add_data().  We will WARN if someone
+		 * requests it or tries to use it.
+		 */
+		return 1;
+
+	bit_cfg = readq(txgpio->register_base + bit_cfg_reg(line));
+
+	return !(bit_cfg & GPIO_BIT_CFG_TX_OE);
+}
+
+static int thunderx_gpio_set_config(struct gpio_chip *chip,
+				    unsigned int line,
+				    unsigned long cfg)
+{
+	bool orig_invert, orig_od, orig_dat, new_invert, new_od;
+	u32 arg, sel;
+	u64 bit_cfg;
+	int bank = line / 64;
+	int bank_bit = line % 64;
+	int ret = -ENOTSUPP;
+	struct thunderx_gpio *txgpio = gpiochip_get_data(chip);
+	void __iomem *reg = txgpio->register_base + (bank * GPIO_2ND_BANK) + GPIO_TX_SET;
+
+	if (!thunderx_gpio_is_gpio(txgpio, line))
+		return -EIO;
+
+	raw_spin_lock(&txgpio->lock);
+	orig_invert = test_bit(line, txgpio->invert_mask);
+	new_invert  = orig_invert;
+	orig_od = test_bit(line, txgpio->od_mask);
+	new_od = orig_od;
+	orig_dat = ((readq(reg) >> bank_bit) & 1) ^ orig_invert;
+	bit_cfg = readq(txgpio->register_base + bit_cfg_reg(line));
+	switch (pinconf_to_config_param(cfg)) {
+	case PIN_CONFIG_DRIVE_OPEN_DRAIN:
+		/*
+		 * Weird, setting open-drain mode causes signal
+		 * inversion.  Note this so we can compensate in the
+		 * dir_out function.
+		 */
+		set_bit(line, txgpio->invert_mask);
+		new_invert  = true;
+		set_bit(line, txgpio->od_mask);
+		new_od = true;
+		ret = 0;
+		break;
+	case PIN_CONFIG_DRIVE_PUSH_PULL:
+		clear_bit(line, txgpio->invert_mask);
+		new_invert  = false;
+		clear_bit(line, txgpio->od_mask);
+		new_od  = false;
+		ret = 0;
+		break;
+	case PIN_CONFIG_INPUT_DEBOUNCE:
+		arg = pinconf_to_config_argument(cfg);
+		if (arg > 1228) { /* 15 * 2^15 * 2.5nS maximum */
+			ret = -EINVAL;
+			break;
+		}
+		arg *= 400; /* scale to 2.5nS clocks. */
+		sel = 0;
+		while (arg > 15) {
+			sel++;
+			arg++; /* always round up */
+			arg >>= 1;
+		}
+		txgpio->line_entries[line].fil_bits =
+			(sel << GPIO_BIT_CFG_FIL_SEL_SHIFT) |
+			(arg << GPIO_BIT_CFG_FIL_CNT_SHIFT);
+		bit_cfg &= ~GPIO_BIT_CFG_FIL_MASK;
+		bit_cfg |= txgpio->line_entries[line].fil_bits;
+		writeq(bit_cfg, txgpio->register_base + bit_cfg_reg(line));
+		ret = 0;
+		break;
+	default:
+		break;
+	}
+	raw_spin_unlock(&txgpio->lock);
+
+	/*
+	 * If currently output and OPEN_DRAIN changed, install the new
+	 * settings
+	 */
+	if ((new_invert != orig_invert || new_od != orig_od) &&
+	    (bit_cfg & GPIO_BIT_CFG_TX_OE))
+		ret = thunderx_gpio_dir_out(chip, line, orig_dat ^ new_invert);
+
+	return ret;
+}
+
+static int thunderx_gpio_get(struct gpio_chip *chip, unsigned int line)
+{
+	struct thunderx_gpio *txgpio = gpiochip_get_data(chip);
+	int bank = line / 64;
+	int bank_bit = line % 64;
+	u64 read_bits = readq(txgpio->register_base + (bank * GPIO_2ND_BANK) + GPIO_RX_DAT);
+	u64 masked_bits = read_bits & BIT_ULL(bank_bit);
+
+	if (test_bit(line, txgpio->invert_mask))
+		return masked_bits == 0;
+	else
+		return masked_bits != 0;
+}
+
+static void thunderx_gpio_set_multiple(struct gpio_chip *chip,
+				       unsigned long *mask,
+				       unsigned long *bits)
+{
+	int bank;
+	u64 set_bits, clear_bits;
+	struct thunderx_gpio *txgpio = gpiochip_get_data(chip);
+
+	for (bank = 0; bank <= chip->ngpio / 64; bank++) {
+		set_bits = bits[bank] & mask[bank];
+		clear_bits = ~bits[bank] & mask[bank];
+		writeq(set_bits, txgpio->register_base + (bank * GPIO_2ND_BANK) + GPIO_TX_SET);
+		writeq(clear_bits, txgpio->register_base + (bank * GPIO_2ND_BANK) + GPIO_TX_CLR);
+	}
+}
+
+static void thunderx_gpio_irq_ack(struct irq_data *data)
+{
+	struct thunderx_line *txline = irq_data_get_irq_chip_data(data);
+
+	writeq(GPIO_INTR_INTR,
+	       txline->txgpio->register_base + intr_reg(txline->line));
+}
+
+static void thunderx_gpio_irq_mask(struct irq_data *data)
+{
+	struct thunderx_line *txline = irq_data_get_irq_chip_data(data);
+
+	writeq(GPIO_INTR_ENA_W1C,
+	       txline->txgpio->register_base + intr_reg(txline->line));
+}
+
+static void thunderx_gpio_irq_mask_ack(struct irq_data *data)
+{
+	struct thunderx_line *txline = irq_data_get_irq_chip_data(data);
+
+	writeq(GPIO_INTR_ENA_W1C | GPIO_INTR_INTR,
+	       txline->txgpio->register_base + intr_reg(txline->line));
+}
+
+static void thunderx_gpio_irq_unmask(struct irq_data *data)
+{
+	struct thunderx_line *txline = irq_data_get_irq_chip_data(data);
+
+	writeq(GPIO_INTR_ENA_W1S,
+	       txline->txgpio->register_base + intr_reg(txline->line));
+}
+
+static int thunderx_gpio_irq_set_type(struct irq_data *data,
+				      unsigned int flow_type)
+{
+	struct thunderx_line *txline = irq_data_get_irq_chip_data(data);
+	struct thunderx_gpio *txgpio = txline->txgpio;
+	u64 bit_cfg;
+
+	irqd_set_trigger_type(data, flow_type);
+
+	bit_cfg = txline->fil_bits | GPIO_BIT_CFG_INT_EN;
+
+	if (flow_type & IRQ_TYPE_EDGE_BOTH) {
+		irq_set_handler_locked(data, handle_fasteoi_edge_irq);
+		bit_cfg |= GPIO_BIT_CFG_INT_TYPE;
+	} else {
+		irq_set_handler_locked(data, handle_fasteoi_level_irq);
+	}
+
+	raw_spin_lock(&txgpio->lock);
+	if (flow_type & (IRQ_TYPE_EDGE_FALLING | IRQ_TYPE_LEVEL_LOW)) {
+		bit_cfg |= GPIO_BIT_CFG_PIN_XOR;
+		set_bit(txline->line, txgpio->invert_mask);
+	} else {
+		clear_bit(txline->line, txgpio->invert_mask);
+	}
+	clear_bit(txline->line, txgpio->od_mask);
+	writeq(bit_cfg, txgpio->register_base + bit_cfg_reg(txline->line));
+	raw_spin_unlock(&txgpio->lock);
+
+	return IRQ_SET_MASK_OK;
+}
+
+static void thunderx_gpio_irq_enable(struct irq_data *data)
+{
+	irq_chip_enable_parent(data);
+	thunderx_gpio_irq_unmask(data);
+}
+
+static void thunderx_gpio_irq_disable(struct irq_data *data)
+{
+	thunderx_gpio_irq_mask(data);
+	irq_chip_disable_parent(data);
+}
+
+static int thunderx_gpio_irq_request_resources(struct irq_data *data)
+{
+	struct thunderx_line *txline = irq_data_get_irq_chip_data(data);
+	struct thunderx_gpio *txgpio = txline->txgpio;
+	struct irq_data *parent_data = data->parent_data;
+	int r;
+
+	r = gpiochip_lock_as_irq(&txgpio->chip, txline->line);
+	if (r)
+		return r;
+
+	if (parent_data && parent_data->chip->irq_request_resources) {
+		r = parent_data->chip->irq_request_resources(parent_data);
+		if (r)
+			goto error;
+	}
+
+	return 0;
+error:
+	gpiochip_unlock_as_irq(&txgpio->chip, txline->line);
+	return r;
+}
+
+static void thunderx_gpio_irq_release_resources(struct irq_data *data)
+{
+	struct thunderx_line *txline = irq_data_get_irq_chip_data(data);
+	struct thunderx_gpio *txgpio = txline->txgpio;
+	struct irq_data *parent_data = data->parent_data;
+
+	if (parent_data && parent_data->chip->irq_release_resources)
+		parent_data->chip->irq_release_resources(parent_data);
+
+	gpiochip_unlock_as_irq(&txgpio->chip, txline->line);
+}
+
+/*
+ * Interrupts are chained from underlying MSI-X vectors.  We have
+ * these irq_chip functions to be able to handle level triggering
+ * semantics and other acknowledgment tasks associated with the GPIO
+ * mechanism.
+ */
+static struct irq_chip thunderx_gpio_irq_chip = {
+	.name			= "GPIO",
+	.irq_enable		= thunderx_gpio_irq_enable,
+	.irq_disable		= thunderx_gpio_irq_disable,
+	.irq_ack		= thunderx_gpio_irq_ack,
+	.irq_mask		= thunderx_gpio_irq_mask,
+	.irq_mask_ack		= thunderx_gpio_irq_mask_ack,
+	.irq_unmask		= thunderx_gpio_irq_unmask,
+	.irq_eoi		= irq_chip_eoi_parent,
+	.irq_set_affinity	= irq_chip_set_affinity_parent,
+	.irq_request_resources	= thunderx_gpio_irq_request_resources,
+	.irq_release_resources	= thunderx_gpio_irq_release_resources,
+	.irq_set_type		= thunderx_gpio_irq_set_type,
+
+	.flags			= IRQCHIP_SET_TYPE_MASKED
+};
+
+static int thunderx_gpio_irq_map(struct irq_domain *d, unsigned int irq,
+				 irq_hw_number_t hwirq)
+{
+	struct thunderx_gpio *txgpio = d->host_data;
+
+	if (hwirq >= txgpio->chip.ngpio)
+		return -EINVAL;
+	if (!thunderx_gpio_is_gpio_nowarn(txgpio, hwirq))
+		return -EPERM;
+	return 0;
+}
+
+static int thunderx_gpio_irq_translate(struct irq_domain *d,
+				       struct irq_fwspec *fwspec,
+				       irq_hw_number_t *hwirq,
+				       unsigned int *type)
+{
+	struct thunderx_gpio *txgpio = d->host_data;
+
+	if (WARN_ON(fwspec->param_count < 2))
+		return -EINVAL;
+	if (fwspec->param[0] >= txgpio->chip.ngpio)
+		return -EINVAL;
+	*hwirq = fwspec->param[0];
+	*type = fwspec->param[1] & IRQ_TYPE_SENSE_MASK;
+	return 0;
+}
+
+static int thunderx_gpio_irq_alloc(struct irq_domain *d, unsigned int virq,
+				   unsigned int nr_irqs, void *arg)
+{
+	struct thunderx_line *txline = arg;
+
+	return irq_domain_set_hwirq_and_chip(d, virq, txline->line,
+					     &thunderx_gpio_irq_chip, txline);
+}
+
+static const struct irq_domain_ops thunderx_gpio_irqd_ops = {
+	.map		= thunderx_gpio_irq_map,
+	.alloc		= thunderx_gpio_irq_alloc,
+	.translate	= thunderx_gpio_irq_translate
+};
+
+static int thunderx_gpio_to_irq(struct gpio_chip *chip, unsigned int offset)
+{
+	struct thunderx_gpio *txgpio = gpiochip_get_data(chip);
+
+	return irq_find_mapping(txgpio->irqd, offset);
+}
+
+static int thunderx_gpio_probe(struct pci_dev *pdev,
+			       const struct pci_device_id *id)
+{
+	void __iomem * const *tbl;
+	struct device *dev = &pdev->dev;
+	struct thunderx_gpio *txgpio;
+	struct gpio_chip *chip;
+	int ngpio, i;
+	int err = 0;
+
+	txgpio = devm_kzalloc(dev, sizeof(*txgpio), GFP_KERNEL);
+	if (!txgpio)
+		return -ENOMEM;
+
+	raw_spin_lock_init(&txgpio->lock);
+	chip = &txgpio->chip;
+
+	pci_set_drvdata(pdev, txgpio);
+
+	err = pcim_enable_device(pdev);
+	if (err) {
+		dev_err(dev, "Failed to enable PCI device: err %d\n", err);
+		goto out;
+	}
+
+	err = pcim_iomap_regions(pdev, 1 << 0, KBUILD_MODNAME);
+	if (err) {
+		dev_err(dev, "Failed to iomap PCI device: err %d\n", err);
+		goto out;
+	}
+
+	tbl = pcim_iomap_table(pdev);
+	txgpio->register_base = tbl[0];
+	if (!txgpio->register_base) {
+		dev_err(dev, "Cannot map PCI resource\n");
+		err = -ENOMEM;
+		goto out;
+	}
+
+	if (pdev->subsystem_device == 0xa10a) {
+		/* CN88XX has no GPIO_CONST register */
+		ngpio = 50;
+		txgpio->base_msi = 48;
+	} else {
+		u64 c = readq(txgpio->register_base + GPIO_CONST);
+
+		ngpio = c & GPIO_CONST_GPIOS_MASK;
+		txgpio->base_msi = (c >> 8) & 0xff;
+	}
+
+	txgpio->msix_entries = devm_kzalloc(dev,
+					  sizeof(struct msix_entry) * ngpio,
+					  GFP_KERNEL);
+	if (!txgpio->msix_entries) {
+		err = -ENOMEM;
+		goto out;
+	}
+
+	txgpio->line_entries = devm_kzalloc(dev,
+					    sizeof(struct thunderx_line) * ngpio,
+					    GFP_KERNEL);
+	if (!txgpio->line_entries) {
+		err = -ENOMEM;
+		goto out;
+	}
+
+	for (i = 0; i < ngpio; i++) {
+		u64 bit_cfg = readq(txgpio->register_base + bit_cfg_reg(i));
+
+		txgpio->msix_entries[i].entry = txgpio->base_msi + (2 * i);
+		txgpio->line_entries[i].line = i;
+		txgpio->line_entries[i].txgpio = txgpio;
+		/*
+		 * If something has already programmed the pin, use
+		 * the existing glitch filter settings, otherwise go
+		 * to 400nS.
+		 */
+		txgpio->line_entries[i].fil_bits = bit_cfg ?
+			(bit_cfg & GPIO_BIT_CFG_FIL_MASK) : GLITCH_FILTER_400NS;
+
+		if ((bit_cfg & GPIO_BIT_CFG_TX_OE) && (bit_cfg & GPIO_BIT_CFG_TX_OD))
+			set_bit(i, txgpio->od_mask);
+		if (bit_cfg & GPIO_BIT_CFG_PIN_XOR)
+			set_bit(i, txgpio->invert_mask);
+	}
+
+	/* Enable all MSI-X for interrupts on all possible lines. */
+	err = pci_enable_msix_range(pdev, txgpio->msix_entries, ngpio, ngpio);
+	if (err < 0)
+		goto out;
+
+	/*
+	 * Push the GPIO-specific irqdomain on to the hierarchy created
+	 * as a side effect of pci_enable_msix_range().
+	 */
+	txgpio->irqd = irq_domain_create_hierarchy(irq_get_irq_data(txgpio->msix_entries[0].vector)->domain,
+						   0, 0, of_node_to_fwnode(dev->of_node),
+						   &thunderx_gpio_irqd_ops, txgpio);
+	if (!txgpio->irqd) {
+		err = -ENOMEM;
+		goto out;
+	}
+
+	/* Push on irq_data and the domain for each line. */
+	for (i = 0; i < ngpio; i++) {
+		err = irq_domain_push_irq(txgpio->irqd,
+					  txgpio->msix_entries[i].vector,
+					  &txgpio->line_entries[i]);
+		if (err < 0)
+			dev_err(dev, "irq_domain_push_irq: %d\n", err);
+	}
+
+	chip->label = KBUILD_MODNAME;
+	chip->parent = dev;
+	chip->owner = THIS_MODULE;
+	chip->request = thunderx_gpio_request;
+	chip->base = -1; /* System allocated */
+	chip->can_sleep = false;
+	chip->ngpio = ngpio;
+	chip->get_direction = thunderx_gpio_get_direction;
+	chip->direction_input = thunderx_gpio_dir_in;
+	chip->get = thunderx_gpio_get;
+	chip->direction_output = thunderx_gpio_dir_out;
+	chip->set = thunderx_gpio_set;
+	chip->set_multiple = thunderx_gpio_set_multiple;
+	chip->set_config = thunderx_gpio_set_config;
+	chip->to_irq = thunderx_gpio_to_irq;
+	err = devm_gpiochip_add_data(dev, chip, txgpio);
+	if (err)
+		goto out;
+
+	dev_info(dev, "ThunderX GPIO: %d lines with base %d.\n",
+		 ngpio, chip->base);
+	return 0;
+out:
+	pci_set_drvdata(pdev, NULL);
+	return err;
+}
+
+static void thunderx_gpio_remove(struct pci_dev *pdev)
+{
+	int i;
+	struct thunderx_gpio *txgpio = pci_get_drvdata(pdev);
+
+	for (i = 0; i < txgpio->chip.ngpio; i++)
+		irq_domain_pop_irq(txgpio->irqd,
+				   txgpio->msix_entries[i].vector);
+
+	irq_domain_remove(txgpio->irqd);
+
+	pci_set_drvdata(pdev, NULL);
+}
+
+static const struct pci_device_id thunderx_gpio_id_table[] = {
+	{ PCI_DEVICE(PCI_VENDOR_ID_CAVIUM, 0xA00A) },
+	{ 0, }	/* end of table */
+};
+
+MODULE_DEVICE_TABLE(pci, thunderx_gpio_id_table);
+
+static struct pci_driver thunderx_gpio_driver = {
+	.name = KBUILD_MODNAME,
+	.id_table = thunderx_gpio_id_table,
+	.probe = thunderx_gpio_probe,
+	.remove = thunderx_gpio_remove,
+};
+
+module_pci_driver(thunderx_gpio_driver);
+
+MODULE_DESCRIPTION("Cavium Inc. ThunderX/OCTEON-TX GPIO Driver");
+MODULE_LICENSE("GPL");
-- 
1.8.3.1

^ permalink raw reply related	[flat|nested] 13+ messages in thread

* [PATCH v7 5/5] MAINTAINERS: Add entry for THUNDERX GPIO Driver.
  2017-08-09 22:51 [PATCH v7 0/5] genirq/gpio: Add driver for ThunderX and OCTEON-TX SoCs David Daney
                   ` (3 preceding siblings ...)
  2017-08-09 22:51 ` [PATCH v7 4/5] gpio: Add gpio driver support for ThunderX and OCTEON-TX David Daney
@ 2017-08-09 22:51 ` David Daney
  4 siblings, 0 replies; 13+ messages in thread
From: David Daney @ 2017-08-09 22:51 UTC (permalink / raw)
  To: Linus Walleij, Alexandre Courbot, Mark Rutland, Marc Zyngier,
	Thomas Gleixner, linux-gpio
  Cc: linux-kernel, David Daney

Signed-off-by: David Daney <david.daney@cavium.com>
---
 MAINTAINERS | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/MAINTAINERS b/MAINTAINERS
index 3c41902..5c39d80 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -12993,6 +12993,11 @@ M:	Yehezkel Bernat <yehezkel.bernat@intel.com>
 S:	Maintained
 F:	drivers/thunderbolt/
 
+THUNDERX GPIO DRIVER
+M:	David Daney <david.daney@cavium.com>
+S:	Maintained
+F:	drivers/gpio/gpio-thunderx.c
+
 TI AM437X VPFE DRIVER
 M:	"Lad, Prabhakar" <prabhakar.csengg@gmail.com>
 L:	linux-media@vger.kernel.org
-- 
1.8.3.1


* Re: [PATCH v7 2/5] genirq: Add handle_fasteoi_{level,edge}_irq flow handlers.
  2017-08-09 22:51 ` [PATCH v7 2/5] genirq: Add handle_fasteoi_{level,edge}_irq flow handlers David Daney
@ 2017-08-14 10:25   ` Thomas Gleixner
  2017-08-16 15:59     ` David Daney
  0 siblings, 1 reply; 13+ messages in thread
From: Thomas Gleixner @ 2017-08-14 10:25 UTC (permalink / raw)
  To: David Daney
  Cc: Linus Walleij, Alexandre Courbot, Mark Rutland, Marc Zyngier,
	linux-gpio, linux-kernel

On Wed, 9 Aug 2017, David Daney wrote:
>  #ifdef	CONFIG_IRQ_DOMAIN_HIERARCHY

Can we please make them conditional in order not to bloat all kernels with
it? Like we do for handle_edge_eoi_irq() ?

>  /**
> + *	handle_fasteoi_edge_irq - irq handler for edge hierarchy
> + *	stacked on transparent controllers
> + *
> + *	@desc:	the interrupt description structure for this irq
> + *
> + *	Like handle_fasteoi_irq(), but for use with hierarchy where
> + *	the irq_chip also needs to have its ->irq_ack() function
> + *	called.
> + */
> +void handle_fasteoi_edge_irq(struct irq_desc *desc)
> +{
> +	struct irq_chip *chip = desc->irq_data.chip;
> +
> +	raw_spin_lock(&desc->lock);
> +
> +	if (!irq_may_run(desc))
> +		goto out;
> +
> +	desc->istate &= ~(IRQS_REPLAY | IRQS_WAITING);
> +
> +	/*
> +	 * If its disabled or no action available
> +	 * then mask it and get out of here:
> +	 */
> +	if (unlikely(!desc->action || irqd_irq_disabled(&desc->irq_data))) {
> +		desc->istate |= IRQS_PENDING;
> +		mask_irq(desc);
> +		goto out;
> +	}
> +
> +	kstat_incr_irqs_this_cpu(desc);
> +	if (desc->istate & IRQS_ONESHOT)
> +		mask_irq(desc);
> +
> +	/* Start handling the irq */
> +	desc->irq_data.chip->irq_ack(&desc->irq_data);
> +
> +	preflow_handler(desc);
> +	handle_irq_event(desc);

Hmm. That's quite different to the way we handle edge interrupts
normally. See handle_edge_irq() and handle_edge_eoi_irq().

Thanks,

	tglx


* Re: [PATCH v7 3/5] irqdomain: Add irq_domain_{push,pop}_irq() functions.
  2017-08-09 22:51 ` [PATCH v7 3/5] irqdomain: Add irq_domain_{push,pop}_irq() functions David Daney
@ 2017-08-15 13:50   ` Marc Zyngier
  2017-08-15 18:00     ` David Daney
  0 siblings, 1 reply; 13+ messages in thread
From: Marc Zyngier @ 2017-08-15 13:50 UTC (permalink / raw)
  To: David Daney, Linus Walleij, Alexandre Courbot, Mark Rutland,
	Thomas Gleixner, linux-gpio
  Cc: linux-kernel

Hi David,

On 09/08/17 23:51, David Daney wrote:
> For an already existing irqdomain hierarchy, as might be obtained via
> a call to pci_enable_msix_range(), a PCI driver wishing to add an
> additional irqdomain to the hierarchy needs to be able to insert the
> irqdomain to that already initialized hierarchy.  Calling
> irq_domain_create_hierarchy() allows the new irqdomain to be created,
> but no existing code allows for initializing the associated irq_data.
> 
> Add a couple of helper functions (irq_domain_push_irq() and
> irq_domain_pop_irq()) to initialize the irq_data for the new
> irqdomain added to an existing hierarchy.
> 
> Signed-off-by: David Daney <david.daney@cavium.com>
> ---
>  include/linux/irqdomain.h |   3 +
>  kernel/irq/irqdomain.c    | 178 ++++++++++++++++++++++++++++++++++++++++++++++
>  2 files changed, 181 insertions(+)
> 
> diff --git a/include/linux/irqdomain.h b/include/linux/irqdomain.h
> index cac77a5..2318f29 100644
> --- a/include/linux/irqdomain.h
> +++ b/include/linux/irqdomain.h
> @@ -460,6 +460,9 @@ extern void irq_domain_free_irqs_common(struct irq_domain *domain,
>  extern void irq_domain_free_irqs_top(struct irq_domain *domain,
>  				     unsigned int virq, unsigned int nr_irqs);
>  
> +extern int irq_domain_push_irq(struct irq_domain *domain, int virq, void *arg);
> +extern int irq_domain_pop_irq(struct irq_domain *domain, int virq);
> +
>  extern int irq_domain_alloc_irqs_parent(struct irq_domain *domain,
>  					unsigned int irq_base,
>  					unsigned int nr_irqs, void *arg);
> diff --git a/kernel/irq/irqdomain.c b/kernel/irq/irqdomain.c
> index f1f2514..629f770 100644
> --- a/kernel/irq/irqdomain.c
> +++ b/kernel/irq/irqdomain.c
> @@ -1448,6 +1448,184 @@ int __irq_domain_alloc_irqs(struct irq_domain *domain, int irq_base,
>  	return ret;
>  }
>  
> +/* The irq_data was moved, fix the revmap to refer to the new location */
> +static void irq_domain_fix_revmap(struct irq_data *d)
> +{
> +	void **slot;
> +
> +	if (d->hwirq < d->domain->revmap_size)
> +		return; /* Not using radix tree. */
> +
> +	/* Fix up the revmap. */
> +	mutex_lock(&revmap_trees_mutex);
> +	slot = radix_tree_lookup_slot(&d->domain->revmap_tree, d->hwirq);
> +	if (slot)
> +		radix_tree_replace_slot(&d->domain->revmap_tree, slot, d);

radix_tree_replace_slot already deals with non-existing entries, so the
initial radix_tree_lookup_slot call is superfluous.

> +	mutex_unlock(&revmap_trees_mutex);
> +}
> +
> +/**
> + * irq_domain_push_irq() - Push a domain in to the top of a hierarchy.
> + * @domain:	Domain to push.
> + * @virq:	Irq to push the domain in to.
> + * @arg:	Passed to the irq_domain_ops alloc() function.
> + *
> + * For an already existing irqdomain hierarchy, as might be obtained
> + * via a call to pci_enable_msix(), add an additional domain to the
> + * head of the processing chain.  Must be called before request_irq()
> + * has been called.
> + */
> +int irq_domain_push_irq(struct irq_domain *domain, int virq, void *arg)
> +{
> +	struct irq_data *child_irq_data;
> +	struct irq_data *root_irq_data = irq_get_irq_data(virq);
> +	struct irq_desc *desc;
> +	int rv = 0;
> +
> +	/*
> +	 * Check that no action has been set, which indicates the virq
> +	 * is in a state where this function doesn't have to deal with
> +	 * races between interrupt handling and maintaining the
> +	 * hierarchy.  This will catch gross misuse.  Attempting to
> +	 * make the check race free would require holding locks across
> +	 * calls to struct irq_domain_ops->alloc(), which could lead
> +	 * to deadlock, so we just do a simple check before starting.
> +	 */
> +	desc = irq_to_desc(virq);
> +	if (!desc)
> +		return -EINVAL;
> +	if (WARN_ON(desc->action))
> +		return -EBUSY;
> +
> +	if (domain == NULL)
> +		return -EINVAL;
> +
> +	if (WARN_ON(!domain->ops->alloc))
> +		return -EINVAL;

I'd rather you use irq_domain_is_hierarchy() instead. Same effect, but
less likely to break in the long run.

> +
> +	if (!root_irq_data)
> +		return -EINVAL;
> +
> +	child_irq_data = kzalloc_node(sizeof(*child_irq_data), GFP_KERNEL,
> +				      irq_data_get_node(root_irq_data));
> +	if (!child_irq_data)
> +		return -ENOMEM;
> +
> +	mutex_lock(&irq_domain_mutex);
> +
> +	/* Copy the original irq_data. */
> +	*child_irq_data = *root_irq_data;
> +
> +	irq_domain_fix_revmap(child_irq_data);

What is the benefit of updating the revmap early? We don't do that in
the pop case. Can't we do it in one go once the allocation has succeeded?

> +
> +	/*
> +	 * Overwrite the root_irq_data, which is embedded in struct
> +	 * irq_desc, with values for this domain.
> +	 */
> +	root_irq_data->parent_data = child_irq_data;
> +	root_irq_data->domain = domain;
> +	root_irq_data->mask = 0;
> +	root_irq_data->hwirq = 0;
> +	root_irq_data->chip = NULL;
> +	root_irq_data->chip_data = NULL;
> +	rv = domain->ops->alloc(domain, virq, 1, arg);

That'd be irq_domain_alloc_irqs_hierarchy().

Overall, I'm a bit concerned that alloc() is allowed to be recursive
itself. Hopefully nobody will do that, but you never know. A possible
way of trapping this would be to only set parent_data *after* the
allocation has been done.

Another concern is that I never see domain->parent being checked. It
should match child_irq_data->domain, so that you can never push a domain
on an interrupt that is not part of the parent domain.

> +	if (rv) {
> +		/* Restore the original irq_data. */
> +		*root_irq_data = *child_irq_data;
> +		irq_domain_fix_revmap(root_irq_data);
> +		goto error;
> +	}
> +
> +	if (root_irq_data->hwirq < domain->revmap_size) {
> +		domain->linear_revmap[root_irq_data->hwirq] = virq;
> +	} else {
> +		mutex_lock(&revmap_trees_mutex);
> +		radix_tree_insert(&domain->revmap_tree,
> +				  root_irq_data->hwirq, root_irq_data);
> +		mutex_unlock(&revmap_trees_mutex);
> +	}

We already have this exact code twice (in irq_domain_insert_irq and
irq_domain_associate). How about making it a helper?

> +error:
> +	mutex_unlock(&irq_domain_mutex);
> +
> +	return rv;
> +}
> +EXPORT_SYMBOL_GPL(irq_domain_push_irq);
> +
> +/**
> + * irq_domain_pop_irq() - Remove a domain from the top of a hierarchy.
> + * @domain:	Domain to remove.
> + * @virq:	Irq to remove the domain from.
> + *
> + * Undo the effects of a call to irq_domain_push_irq().  Must be
> + * called either before request_irq() or after free_irq().
> + */
> +int irq_domain_pop_irq(struct irq_domain *domain, int virq)
> +{
> +	struct irq_data *root_irq_data = irq_get_irq_data(virq);
> +	struct irq_data *child_irq_data;
> +	struct irq_data *tmp_irq_data;
> +	struct irq_desc *desc;
> +
> +	/*
> +	 * Check that no action is set, which indicates the virq is in
> +	 * a state where this function doesn't have to deal with races
> +	 * between interrupt handling and maintaining the hierarchy.
> +	 * This will catch gross misuse.  Attempting to make the check
> +	 * race free would require holding locks across calls to
> +	 * struct irq_domain_ops->free(), which could lead to
> +	 * deadlock, so we just do a simple check before starting.
> +	 */
> +	desc = irq_to_desc(virq);
> +	if (!desc)
> +		return -EINVAL;
> +	if (WARN_ON(desc->action))
> +		return -EBUSY;
> +
> +	if (domain == NULL)
> +		return -EINVAL;
> +
> +	if (!root_irq_data)
> +		return -EINVAL;
> +
> +	tmp_irq_data = irq_domain_get_irq_data(domain, virq);
> +
> +	/* We can only "pop" if this domain is at the top of the list */
> +	if (WARN_ON(root_irq_data != tmp_irq_data))
> +		return -EINVAL;
> +
> +	if (WARN_ON(root_irq_data->domain != domain))
> +		return -EINVAL;
> +
> +	child_irq_data = root_irq_data->parent_data;
> +	if (WARN_ON(!child_irq_data))
> +		return -EINVAL;
> +
> +	mutex_lock(&irq_domain_mutex);
> +
> +	root_irq_data->parent_data = NULL;
> +
> +	if (root_irq_data->hwirq >= domain->revmap_size) {
> +		mutex_lock(&revmap_trees_mutex);
> +		radix_tree_delete(&domain->revmap_tree, root_irq_data->hwirq);
> +		mutex_unlock(&revmap_trees_mutex);
> +	}

What about clearing it from the revmap if it fits there? Also, this code
already exists in irq_domain_disassociate and irq_domain_remove_irq, and
making that a helper is overdue.

> +
> +	if (domain->ops->free)
> +		domain->ops->free(domain, virq, 1);

Use irq_domain_free_irqs_hierarchy(), making it conditional in that helper.

> +
> +	/* Restore the original irq_data. */
> +	*root_irq_data = *child_irq_data;
> +
> +	irq_domain_fix_revmap(root_irq_data);
> +
> +	mutex_unlock(&irq_domain_mutex);
> +
> +	kfree(child_irq_data);
> +
> +	return 0;
> +}
> +EXPORT_SYMBOL_GPL(irq_domain_pop_irq);
> +
>  /**
>   * irq_domain_free_irqs - Free IRQ number and associated data structures
>   * @virq:	base IRQ number
> 

Thanks,

	M.
-- 
Jazz is not dead. It just smells funny...


* Re: [PATCH v7 3/5] irqdomain: Add irq_domain_{push,pop}_irq() functions.
  2017-08-15 13:50   ` Marc Zyngier
@ 2017-08-15 18:00     ` David Daney
  2017-08-15 19:02         ` Marc Zyngier
  0 siblings, 1 reply; 13+ messages in thread
From: David Daney @ 2017-08-15 18:00 UTC (permalink / raw)
  To: Marc Zyngier, David Daney, Linus Walleij, Alexandre Courbot,
	Mark Rutland, Thomas Gleixner, linux-gpio
  Cc: linux-kernel

On 08/15/2017 06:50 AM, Marc Zyngier wrote:
> Hi David,
> 
> On 09/08/17 23:51, David Daney wrote:
[...]
>> diff --git a/kernel/irq/irqdomain.c b/kernel/irq/irqdomain.c
>> index f1f2514..629f770 100644
>> --- a/kernel/irq/irqdomain.c
>> +++ b/kernel/irq/irqdomain.c
>> @@ -1448,6 +1448,184 @@ int __irq_domain_alloc_irqs(struct irq_domain *domain, int irq_base,
>>   	return ret;
>>   }
>>   
>> +/* The irq_data was moved, fix the revmap to refer to the new location */
>> +static void irq_domain_fix_revmap(struct irq_data *d)
>> +{
>> +	void **slot;
>> +
>> +	if (d->hwirq < d->domain->revmap_size)
>> +		return; /* Not using radix tree. */
>> +
>> +	/* Fix up the revmap. */
>> +	mutex_lock(&revmap_trees_mutex);
>> +	slot = radix_tree_lookup_slot(&d->domain->revmap_tree, d->hwirq);
>> +	if (slot)
>> +		radix_tree_replace_slot(&d->domain->revmap_tree, slot, d);
> 
> radix_tree_replace_slot already deals with non-existing entries, so the
> initial radix_tree_lookup_slot call is superfluous.

This comment I don't understand.  To replace an element in the tree, you 
must know the slot.  I see no alternative to calling 
radix_tree_lookup_slot().  If I am mistaken, it would be helpful to know 
in a little more detail how you think it should be done.


> 
>> +	mutex_unlock(&revmap_trees_mutex);
>> +}
>> +
>> +/**
>> + * irq_domain_push_irq() - Push a domain in to the top of a hierarchy.
>> + * @domain:	Domain to push.
>> + * @virq:	Irq to push the domain in to.
>> + * @arg:	Passed to the irq_domain_ops alloc() function.
>> + *
>> + * For an already existing irqdomain hierarchy, as might be obtained
>> + * via a call to pci_enable_msix(), add an additional domain to the
>> + * head of the processing chain.  Must be called before request_irq()
>> + * has been called.
>> + */
>> +int irq_domain_push_irq(struct irq_domain *domain, int virq, void *arg)
>> +{
>> +	struct irq_data *child_irq_data;
>> +	struct irq_data *root_irq_data = irq_get_irq_data(virq);
>> +	struct irq_desc *desc;
>> +	int rv = 0;
>> +
>> +	/*
>> +	 * Check that no action has been set, which indicates the virq
>> +	 * is in a state where this function doesn't have to deal with
>> +	 * races between interrupt handling and maintaining the
>> +	 * hierarchy.  This will catch gross misuse.  Attempting to
>> +	 * make the check race free would require holding locks across
>> +	 * calls to struct irq_domain_ops->alloc(), which could lead
>> +	 * to deadlock, so we just do a simple check before starting.
>> +	 */
>> +	desc = irq_to_desc(virq);
>> +	if (!desc)
>> +		return -EINVAL;
>> +	if (WARN_ON(desc->action))
>> +		return -EBUSY;
>> +
>> +	if (domain == NULL)
>> +		return -EINVAL;
>> +
>> +	if (WARN_ON(!domain->ops->alloc))
>> +		return -EINVAL;
> 
> I'd rather you use irq_domain_is_hierarchy() instead. Same effect, but
> less likely to break in the long run.

Good idea, I will do it.

> 
>> +
>> +	if (!root_irq_data)
>> +		return -EINVAL;
>> +
>> +	child_irq_data = kzalloc_node(sizeof(*child_irq_data), GFP_KERNEL,
>> +				      irq_data_get_node(root_irq_data));
>> +	if (!child_irq_data)
>> +		return -ENOMEM;
>> +
>> +	mutex_lock(&irq_domain_mutex);
>> +
>> +	/* Copy the original irq_data. */
>> +	*child_irq_data = *root_irq_data;
>> +
>> +	irq_domain_fix_revmap(child_irq_data);
> 
> What is the benefit of updating the revmap early?

The idea was to have it be valid for the ops->alloc() call, but ...

> We don't do that in
> the pop case. Can't we do it in one go once the allocation has succeeded?

... the code in irqdomain.c that calls ops->alloc() doesn't have the 
revmap established at the call site, so I agree that this can go after 
the allocation so we don't have to undo it on failure.

> 
>> +
>> +	/*
>> +	 * Overwrite the root_irq_data, which is embedded in struct
>> +	 * irq_desc, with values for this domain.
>> +	 */
>> +	root_irq_data->parent_data = child_irq_data;
>> +	root_irq_data->domain = domain;
>> +	root_irq_data->mask = 0;
>> +	root_irq_data->hwirq = 0;
>> +	root_irq_data->chip = NULL;
>> +	root_irq_data->chip_data = NULL;
>> +	rv = domain->ops->alloc(domain, virq, 1, arg);
> 
> That'd be irq_domain_alloc_irqs_hierarchy().
> 

Yes.

> Overall, I'm a bit concerned that alloc() is allowed to be recursive
> itself. Hopefully nobody will do that, but you never know. A possible
> way of trapping this would be to only set parent_data *after* the
> allocation has been done.

I will try it.

> 
> Another concern is that I never see domain->parent being checked. It
> should match child_irq_data->domain, so that you can never push a domain
> on an interrupt that is not part of the parent domain.

I will add a check for this.


> 
>> +	if (rv) {
>> +		/* Restore the original irq_data. */
>> +		*root_irq_data = *child_irq_data;
>> +		irq_domain_fix_revmap(root_irq_data);
>> +		goto error;
>> +	}
>> +
>> +	if (root_irq_data->hwirq < domain->revmap_size) {
>> +		domain->linear_revmap[root_irq_data->hwirq] = virq;
>> +	} else {
>> +		mutex_lock(&revmap_trees_mutex);
>> +		radix_tree_insert(&domain->revmap_tree,
>> +				  root_irq_data->hwirq, root_irq_data);
>> +		mutex_unlock(&revmap_trees_mutex);
>> +	}
> 
> We already have this exact code twice (in irq_domain_insert_irq and
> irq_domain_associate). How about making it a helper?

OK.

> 
>> +error:
>> +	mutex_unlock(&irq_domain_mutex);
>> +
>> +	return rv;
>> +}
>> +EXPORT_SYMBOL_GPL(irq_domain_push_irq);
>> +
>> +/**
>> + * irq_domain_pop_irq() - Remove a domain from the top of a hierarchy.
>> + * @domain:	Domain to remove.
>> + * @virq:	Irq to remove the domain from.
>> + *
>> + * Undo the effects of a call to irq_domain_push_irq().  Must be
>> + * called either before request_irq() or after free_irq().
>> + */
>> +int irq_domain_pop_irq(struct irq_domain *domain, int virq)
>> +{
>> +	struct irq_data *root_irq_data = irq_get_irq_data(virq);
>> +	struct irq_data *child_irq_data;
>> +	struct irq_data *tmp_irq_data;
>> +	struct irq_desc *desc;
>> +
>> +	/*
>> +	 * Check that no action is set, which indicates the virq is in
>> +	 * a state where this function doesn't have to deal with races
>> +	 * between interrupt handling and maintaining the hierarchy.
>> +	 * This will catch gross misuse.  Attempting to make the check
>> +	 * race free would require holding locks across calls to
>> +	 * struct irq_domain_ops->free(), which could lead to
>> +	 * deadlock, so we just do a simple check before starting.
>> +	 */
>> +	desc = irq_to_desc(virq);
>> +	if (!desc)
>> +		return -EINVAL;
>> +	if (WARN_ON(desc->action))
>> +		return -EBUSY;
>> +
>> +	if (domain == NULL)
>> +		return -EINVAL;
>> +
>> +	if (!root_irq_data)
>> +		return -EINVAL;
>> +
>> +	tmp_irq_data = irq_domain_get_irq_data(domain, virq);
>> +
>> +	/* We can only "pop" if this domain is at the top of the list */
>> +	if (WARN_ON(root_irq_data != tmp_irq_data))
>> +		return -EINVAL;
>> +
>> +	if (WARN_ON(root_irq_data->domain != domain))
>> +		return -EINVAL;
>> +
>> +	child_irq_data = root_irq_data->parent_data;
>> +	if (WARN_ON(!child_irq_data))
>> +		return -EINVAL;
>> +
>> +	mutex_lock(&irq_domain_mutex);
>> +
>> +	root_irq_data->parent_data = NULL;
>> +
>> +	if (root_irq_data->hwirq >= domain->revmap_size) {
>> +		mutex_lock(&revmap_trees_mutex);
>> +		radix_tree_delete(&domain->revmap_tree, root_irq_data->hwirq);
>> +		mutex_unlock(&revmap_trees_mutex);
>> +	}
> 
> What about clearing it from the revmap if it fits there? Also, this code
> already exists in irq_domain_disassociate and irq_domain_remove_irq, and
> making that a helper is overdue.

I will investigate doing something like that.


> 
>> +
>> +	if (domain->ops->free)
>> +		domain->ops->free(domain, virq, 1);
> 
> Use irq_domain_free_irqs_hierarchy(), making it conditional in that helper.

OK


> 
>> +
>> +	/* Restore the original irq_data. */
>> +	*root_irq_data = *child_irq_data;
>> +
>> +	irq_domain_fix_revmap(root_irq_data);
>> +
>> +	mutex_unlock(&irq_domain_mutex);
>> +
>> +	kfree(child_irq_data);
>> +
>> +	return 0;
>> +}
>> +EXPORT_SYMBOL_GPL(irq_domain_pop_irq);
>> +
>>   /**
>>    * irq_domain_free_irqs - Free IRQ number and associated data structures
>>    * @virq:	base IRQ number
>>
> 
> Thanks,
> 
> 	M.
> 


* Re: [PATCH v7 3/5] irqdomain: Add irq_domain_{push,pop}_irq() functions.
  2017-08-15 18:00     ` David Daney
@ 2017-08-15 19:02         ` Marc Zyngier
  0 siblings, 0 replies; 13+ messages in thread
From: Marc Zyngier @ 2017-08-15 19:02 UTC (permalink / raw)
  To: David Daney
  Cc: David Daney, Linus Walleij, Alexandre Courbot, Mark Rutland,
	Thomas Gleixner, linux-gpio, linux-kernel

On Tue, Aug 15 2017 at 11:00:19 am BST, David Daney <ddaney@caviumnetworks.com> wrote:
> On 08/15/2017 06:50 AM, Marc Zyngier wrote:
>> Hi David,
>>
>> On 09/08/17 23:51, David Daney wrote:
> [...]
>>> diff --git a/kernel/irq/irqdomain.c b/kernel/irq/irqdomain.c
>>> index f1f2514..629f770 100644
>>> --- a/kernel/irq/irqdomain.c
>>> +++ b/kernel/irq/irqdomain.c
>>> @@ -1448,6 +1448,184 @@ int __irq_domain_alloc_irqs(struct
> irq_domain *domain, int irq_base,
>>>   	return ret;
>>>   }
>>>   +/* The irq_data was moved, fix the revmap to refer to the new
>>> location */
>>> +static void irq_domain_fix_revmap(struct irq_data *d)
>>> +{
>>> +	void **slot;
>>> +
>>> +	if (d->hwirq < d->domain->revmap_size)
>>> +		return; /* Not using radix tree. */
>>> +
>>> +	/* Fix up the revmap. */
>>> +	mutex_lock(&revmap_trees_mutex);
>>> +	slot = radix_tree_lookup_slot(&d->domain->revmap_tree, d->hwirq);
>>> +	if (slot)
>>> +		radix_tree_replace_slot(&d->domain->revmap_tree, slot, d);
>>
>> radix_tree_replace_slot already deals with non-existing entries, so the
>> initial radix_tree_lookup_slot call is superfluous.
>
> This comment I don't understand.  To replace an element in the tree,
> you must know the slot.  I see no alternative to calling
> radix_tree_lookup_slot().  If I am mistaken, it would be helpful to
> know in a little more detail how you think it should be done.

Nah, you're right. I'm just grossly mistaken. Ignore this.

Thanks,

	M.
-- 
Jazz is not dead. It just smells funny.


* Re: [PATCH v7 2/5] genirq: Add handle_fasteoi_{level,edge}_irq flow handlers.
  2017-08-14 10:25   ` Thomas Gleixner
@ 2017-08-16 15:59     ` David Daney
  2017-08-16 16:26       ` Thomas Gleixner
  0 siblings, 1 reply; 13+ messages in thread
From: David Daney @ 2017-08-16 15:59 UTC (permalink / raw)
  To: Thomas Gleixner, David Daney
  Cc: Linus Walleij, Alexandre Courbot, Mark Rutland, Marc Zyngier,
	linux-gpio, linux-kernel

On 08/14/2017 03:25 AM, Thomas Gleixner wrote:
> On Wed, 9 Aug 2017, David Daney wrote:
>>   #ifdef	CONFIG_IRQ_DOMAIN_HIERARCHY
> 
> Can we please make them conditional in order not to bloat all kernels with
> it? Like we do for handle_edge_eoi_irq() ?


Yes.  Because these are initially used by exactly one driver, I could 
just make it conditional on that driver being enabled.

Alternately, I could invent a new Kconfig symbol to gate these and 
select that when the driver is enabled.

Do you have a preference?

> 
>>   /**
>> + *	handle_fasteoi_edge_irq - irq handler for edge hierarchy
>> + *	stacked on transparent controllers
>> + *
>> + *	@desc:	the interrupt description structure for this irq
>> + *
>> + *	Like handle_fasteoi_irq(), but for use with hierarchy where
>> + *	the irq_chip also needs to have its ->irq_ack() function
>> + *	called.
>> + */
>> +void handle_fasteoi_edge_irq(struct irq_desc *desc)
>> +{
>> +	struct irq_chip *chip = desc->irq_data.chip;
>> +
>> +	raw_spin_lock(&desc->lock);
>> +
>> +	if (!irq_may_run(desc))
>> +		goto out;
>> +
>> +	desc->istate &= ~(IRQS_REPLAY | IRQS_WAITING);
>> +
>> +	/*
>> +	 * If it's disabled or no action is available,
>> +	 * then mask it and get out of here:
>> +	 */
>> +	if (unlikely(!desc->action || irqd_irq_disabled(&desc->irq_data))) {
>> +		desc->istate |= IRQS_PENDING;
>> +		mask_irq(desc);
>> +		goto out;
>> +	}
>> +
>> +	kstat_incr_irqs_this_cpu(desc);
>> +	if (desc->istate & IRQS_ONESHOT)
>> +		mask_irq(desc);
>> +
>> +	/* Start handling the irq */
>> +	desc->irq_data.chip->irq_ack(&desc->irq_data);
>> +
>> +	preflow_handler(desc);
>> +	handle_irq_event(desc);
> 
> Hmm. That's quite different to the way we handle edge interrupts
> normally. See handle_edge_irq() and handle_edge_eoi_irq().

Yes, these are not standard edge interrupts.  If they were, I wouldn't 
need new handlers.

For this particular irqdomain hierarchy, I need exactly 
handle_fasteoi_irq() semantics with the addition of a call to the 
chip->irq_ack() function *before* the handler is called.  I chose to 
"clone" and enhance handle_fasteoi_irq(), rather than adding hooks with 
runtime checks to the existing handle_fasteoi_irq().  There is code 
bloat this way, but a smaller risk of breaking other things.

Any additional "stuff" that is not needed to cover this use case would 
just be adding dead code.  In the future, if there is a need to enable 
more users of these functions, I would not object to doing more.

Thanks,
David Daney


> 
> Thanks,
> 
> 	tglx
> 


* Re: [PATCH v7 2/5] genirq: Add handle_fasteoi_{level,edge}_irq flow handlers.
  2017-08-16 15:59     ` David Daney
@ 2017-08-16 16:26       ` Thomas Gleixner
  0 siblings, 0 replies; 13+ messages in thread
From: Thomas Gleixner @ 2017-08-16 16:26 UTC (permalink / raw)
  To: David Daney
  Cc: David Daney, Linus Walleij, Alexandre Courbot, Mark Rutland,
	Marc Zyngier, linux-gpio, linux-kernel

On Wed, 16 Aug 2017, David Daney wrote:

> On 08/14/2017 03:25 AM, Thomas Gleixner wrote:
> > On Wed, 9 Aug 2017, David Daney wrote:
> > >   #ifdef	CONFIG_IRQ_DOMAIN_HIERARCHY
> > 
> > Can we please make them conditional in order not to bloat all kernels with
> > it? Like we do for handle_edge_eoi_irq() ?
> 
> 
> Yes.  Because these are initially used by exactly one driver, I could just
> make it conditional on that driver being enabled.
> 
> Alternately, I could invent a new Kconfig symbol to gate these and select that
> when the driver is enabled.
> 
> Do you have a preference?

The latter.
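For illustration, the gating could look something like this (the symbol name here is a guess at what a dedicated Kconfig symbol might be called; the point is only the bool-plus-select pattern):

```
# kernel/irq/Kconfig (hypothetical symbol name)
config IRQ_FASTEOI_HIERARCHY_HANDLERS
	bool

# drivers/gpio/Kconfig
config GPIO_THUNDERX
	tristate "Cavium ThunderX/OCTEON-TX GPIO"
	select IRQ_FASTEOI_HIERARCHY_HANDLERS
```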

> > Hmm. That's quite different to the way we handle edge interrupts
> > normally. See handle_edge_irq() and handle_edge_eoi_irq().
> 
> Yes, these are not standard edge interrupts.  If they were, I wouldn't need
> new handlers.
> 
> For this particular irqdomain hierarchy, I need exactly handle_fasteoi_irq()
> semantics with the addition of a call to the chip->irq_ack() function *before*
> the handler is called.  I chose to "clone" and enhance handle_fasteoi_irq(),
> rather than adding hooks with runtime checks to the existing
> handle_fasteoi_irq().  There is code bloat this way, but a smaller risk of
> breaking other things.
> 
> Any additional "stuff" that is not needed to cover this use case would just be
> adding dead code.  In the future, if there is a need to enable more users of
> these functions, I would not object to doing more.

Ok, so we might rename that thing to something other than edge(), as all
edge handlers have this processing loop, which is required so as not to
lose edges.

Thanks

	tglx



Thread overview: 13+ messages
2017-08-09 22:51 [PATCH v7 0/5] genirq/gpio: Add driver for ThunderX and OCTEON-TX SoCs David Daney
2017-08-09 22:51 ` [PATCH v7 1/5] genirq: Export more irq_chip_*_parent() functions David Daney
2017-08-09 22:51 ` [PATCH v7 2/5] genirq: Add handle_fasteoi_{level,edge}_irq flow handlers David Daney
2017-08-14 10:25   ` Thomas Gleixner
2017-08-16 15:59     ` David Daney
2017-08-16 16:26       ` Thomas Gleixner
2017-08-09 22:51 ` [PATCH v7 3/5] irqdomain: Add irq_domain_{push,pop}_irq() functions David Daney
2017-08-15 13:50   ` Marc Zyngier
2017-08-15 18:00     ` David Daney
2017-08-15 19:02       ` Marc Zyngier
2017-08-15 19:02         ` Marc Zyngier
2017-08-09 22:51 ` [PATCH v7 4/5] gpio: Add gpio driver support for ThunderX and OCTEON-TX David Daney
2017-08-09 22:51 ` [PATCH v7 5/5] MAINTAINERS: Add entry for THUNDERX GPIO Driver David Daney
