linux-kernel.vger.kernel.org archive mirror
 help / color / mirror / Atom feed
* support for partial irq affinity assignment
@ 2016-11-07 18:47 Christoph Hellwig
  2016-11-07 18:47 ` [PATCH 1/7] genirq/affinity: Introduce struct irq_affinity Christoph Hellwig
                   ` (6 more replies)
  0 siblings, 7 replies; 33+ messages in thread
From: Christoph Hellwig @ 2016-11-07 18:47 UTC (permalink / raw)
  To: tglx; +Cc: axboe, linux-block, linux-pci, linux-kernel

This series adds support for automatic interrupt assignment to devices
that have a few vectors that are set aside for admin or config purposes
and thus should not fall into the general per-cpu assginment pool.

The first patch adds that support to the core IRQ and PCI/msi code,
and the second is a small tweak to a block layer helper to make use
of it.  I'd love to have both go into the same tree so that consumers
of this (e.g. the virtio, scsi and rdma trees) only need to pull in
one of these trees as dependency.

Changes since V1:
 - split up into more patches (thanks to Thomas for the initial work)
 - simplify internal parameter passing a bit (also thanks to Thomas)

^ permalink raw reply	[flat|nested] 33+ messages in thread

* [PATCH 1/7] genirq/affinity: Introduce struct irq_affinity
  2016-11-07 18:47 support for partial irq affinity assignment Christoph Hellwig
@ 2016-11-07 18:47 ` Christoph Hellwig
  2016-11-08  8:11   ` Hannes Reinecke
                     ` (2 more replies)
  2016-11-07 18:47 ` [PATCH 2/7] genirq/affinity: Handle pre/post vectors in irq_calc_affinity_vectors() Christoph Hellwig
                   ` (5 subsequent siblings)
  6 siblings, 3 replies; 33+ messages in thread
From: Christoph Hellwig @ 2016-11-07 18:47 UTC (permalink / raw)
  To: tglx; +Cc: axboe, linux-block, linux-pci, linux-kernel, Christogh Hellwig

From: Christogh Hellwig <hch@lst.de>

Some drivers (various network and RDMA adapter for example) have a MSI-X
vector layout where most of the vectors are used for I/O queues and should
have CPU affinity assigned to them, but some (usually 1 but sometimes more)
at the beginning or end are used for low-performance admin or configuration
work and should not have any explicit affinity assigned to them.

This adds a new irq_affinity structure, which will be passed through a
variant of pci_irq_alloc_vectors that allows to specify these
requirements (and is extensible to any future quirks in that area) so that
the core IRQ affinity algorithm can take this quirks into account.

Signed-off-by: Christogh Hellwig <hch@lst.de>
---
 include/linux/interrupt.h | 12 ++++++++++++
 1 file changed, 12 insertions(+)

diff --git a/include/linux/interrupt.h b/include/linux/interrupt.h
index 72f0721..7284bcd 100644
--- a/include/linux/interrupt.h
+++ b/include/linux/interrupt.h
@@ -232,6 +232,18 @@ struct irq_affinity_notify {
 	void (*release)(struct kref *ref);
 };
 
+/**
+ * struct irq_affinity - Description for auto irq affinity assignements
+ * @pre_vectors:	Reserved vectors at the beginning of the MSIX
+ *			vector space
+ * @post_vectors:	Reserved vectors at the end of the MSIX
+ *			vector space
+ */
+struct irq_affinity {
+	int	pre_vectors;
+	int	post_vectors;
+};
+
 #if defined(CONFIG_SMP)
 
 extern cpumask_var_t irq_default_affinity;
-- 
2.1.4

^ permalink raw reply related	[flat|nested] 33+ messages in thread

* [PATCH 2/7] genirq/affinity: Handle pre/post vectors in irq_calc_affinity_vectors()
  2016-11-07 18:47 support for partial irq affinity assignment Christoph Hellwig
  2016-11-07 18:47 ` [PATCH 1/7] genirq/affinity: Introduce struct irq_affinity Christoph Hellwig
@ 2016-11-07 18:47 ` Christoph Hellwig
  2016-11-08  8:14   ` Hannes Reinecke
  2016-11-07 18:47 ` [PATCH 3/7] genirq/affinity: Handle pre/post vectors in irq_create_affinity_masks() Christoph Hellwig
                   ` (4 subsequent siblings)
  6 siblings, 1 reply; 33+ messages in thread
From: Christoph Hellwig @ 2016-11-07 18:47 UTC (permalink / raw)
  To: tglx; +Cc: axboe, linux-block, linux-pci, linux-kernel, Christogh Hellwig

From: Christogh Hellwig <hch@lst.de>

Only calculate the affinity for the main I/O vectors, and skip the
pre or post vectors specified by struct irq_affinity.

Also remove the irq_affinity cpumask argument that has never been used.
If we ever need it in the future we can pass it through struct
irq_affinity.

Signed-off-by: Christogh Hellwig <hch@lst.de>
---
 drivers/pci/msi.c         |  6 ++----
 include/linux/interrupt.h |  4 ++--
 kernel/irq/affinity.c     | 24 ++++++++++--------------
 3 files changed, 14 insertions(+), 20 deletions(-)

diff --git a/drivers/pci/msi.c b/drivers/pci/msi.c
index ad70507..c58d3c2 100644
--- a/drivers/pci/msi.c
+++ b/drivers/pci/msi.c
@@ -1091,8 +1091,7 @@ static int __pci_enable_msi_range(struct pci_dev *dev, int minvec, int maxvec,
 
 	for (;;) {
 		if (affinity) {
-			nvec = irq_calc_affinity_vectors(dev->irq_affinity,
-					nvec);
+			nvec = irq_calc_affinity_vectors(nvec, NULL);
 			if (nvec < minvec)
 				return -ENOSPC;
 		}
@@ -1140,8 +1139,7 @@ static int __pci_enable_msix_range(struct pci_dev *dev,
 
 	for (;;) {
 		if (affinity) {
-			nvec = irq_calc_affinity_vectors(dev->irq_affinity,
-					nvec);
+			nvec = irq_calc_affinity_vectors(nvec, NULL);
 			if (nvec < minvec)
 				return -ENOSPC;
 		}
diff --git a/include/linux/interrupt.h b/include/linux/interrupt.h
index 7284bcd..092adfb 100644
--- a/include/linux/interrupt.h
+++ b/include/linux/interrupt.h
@@ -291,7 +291,7 @@ extern int
 irq_set_affinity_notifier(unsigned int irq, struct irq_affinity_notify *notify);
 
 struct cpumask *irq_create_affinity_masks(const struct cpumask *affinity, int nvec);
-int irq_calc_affinity_vectors(const struct cpumask *affinity, int maxvec);
+int irq_calc_affinity_vectors(int maxvec, const struct irq_affinity *affd);
 
 #else /* CONFIG_SMP */
 
@@ -331,7 +331,7 @@ irq_create_affinity_masks(const struct cpumask *affinity, int nvec)
 }
 
 static inline int
-irq_calc_affinity_vectors(const struct cpumask *affinity, int maxvec)
+irq_calc_affinity_vectors(int maxvec, const struct irq_affinity *affd)
 {
 	return maxvec;
 }
diff --git a/kernel/irq/affinity.c b/kernel/irq/affinity.c
index 17f51d63..8d92597 100644
--- a/kernel/irq/affinity.c
+++ b/kernel/irq/affinity.c
@@ -131,24 +131,20 @@ struct cpumask *irq_create_affinity_masks(const struct cpumask *affinity,
 }
 
 /**
- * irq_calc_affinity_vectors - Calculate to optimal number of vectors for a given affinity mask
- * @affinity:		The affinity mask to spread. If NULL cpu_online_mask
- *			is used
- * @maxvec:		The maximum number of vectors available
+ * irq_calc_affinity_vectors - Calculate the optimal number of vectors
+ * @maxvec:	The maximum number of vectors available
+ * @affd:	Description of the affinity requirements
  */
-int irq_calc_affinity_vectors(const struct cpumask *affinity, int maxvec)
+int irq_calc_affinity_vectors(int maxvec, const struct irq_affinity *affd)
 {
-	int cpus, ret;
+	int resv = affd->pre_vectors + affd->post_vectors;
+	int vecs = maxvec - resv;
+	int cpus;
 
 	/* Stabilize the cpumasks */
 	get_online_cpus();
-	/* If the supplied affinity mask is NULL, use cpu online mask */
-	if (!affinity)
-		affinity = cpu_online_mask;
-
-	cpus = cpumask_weight(affinity);
-	ret = (cpus < maxvec) ? cpus : maxvec;
-
+	cpus = cpumask_weight(cpu_online_mask);
 	put_online_cpus();
-	return ret;
+
+	return min(cpus, vecs) + resv;
 }
-- 
2.1.4

^ permalink raw reply related	[flat|nested] 33+ messages in thread

* [PATCH 3/7] genirq/affinity: Handle pre/post vectors in irq_create_affinity_masks()
  2016-11-07 18:47 support for partial irq affinity assignment Christoph Hellwig
  2016-11-07 18:47 ` [PATCH 1/7] genirq/affinity: Introduce struct irq_affinity Christoph Hellwig
  2016-11-07 18:47 ` [PATCH 2/7] genirq/affinity: Handle pre/post vectors in irq_calc_affinity_vectors() Christoph Hellwig
@ 2016-11-07 18:47 ` Christoph Hellwig
  2016-11-08  8:15   ` Hannes Reinecke
  2016-11-08 21:20   ` Bjorn Helgaas
  2016-11-07 18:47 ` [PATCH 4/7] pci/msi: Propagate irq affinity description through the MSI code Christoph Hellwig
                   ` (3 subsequent siblings)
  6 siblings, 2 replies; 33+ messages in thread
From: Christoph Hellwig @ 2016-11-07 18:47 UTC (permalink / raw)
  To: tglx; +Cc: axboe, linux-block, linux-pci, linux-kernel, Christogh Hellwig

From: Christogh Hellwig <hch@lst.de>

Only calculate the affinity for the main I/O vectors, and skip the
pre or post vectors specified by struct irq_affinity.

Also remove the irq_affinity cpumask argument that has never been used.
If we ever need it in the future we can pass it through struct
irq_affinity.

Signed-off-by: Christogh Hellwig <hch@lst.de>
---
 drivers/pci/msi.c         |  4 ++--
 include/linux/interrupt.h |  4 ++--
 kernel/irq/affinity.c     | 46 +++++++++++++++++++++++++---------------------
 3 files changed, 29 insertions(+), 25 deletions(-)

diff --git a/drivers/pci/msi.c b/drivers/pci/msi.c
index c58d3c2..1761b8a 100644
--- a/drivers/pci/msi.c
+++ b/drivers/pci/msi.c
@@ -558,7 +558,7 @@ msi_setup_entry(struct pci_dev *dev, int nvec, bool affinity)
 	u16 control;
 
 	if (affinity) {
-		masks = irq_create_affinity_masks(dev->irq_affinity, nvec);
+		masks = irq_create_affinity_masks(nvec, NULL);
 		if (!masks)
 			pr_err("Unable to allocate affinity masks, ignoring\n");
 	}
@@ -697,7 +697,7 @@ static int msix_setup_entries(struct pci_dev *dev, void __iomem *base,
 	int ret, i;
 
 	if (affinity) {
-		masks = irq_create_affinity_masks(dev->irq_affinity, nvec);
+		masks = irq_create_affinity_masks(nvec, NULL);
 		if (!masks)
 			pr_err("Unable to allocate affinity masks, ignoring\n");
 	}
diff --git a/include/linux/interrupt.h b/include/linux/interrupt.h
index 092adfb..bca8f1c 100644
--- a/include/linux/interrupt.h
+++ b/include/linux/interrupt.h
@@ -290,7 +290,7 @@ extern int irq_set_affinity_hint(unsigned int irq, const struct cpumask *m);
 extern int
 irq_set_affinity_notifier(unsigned int irq, struct irq_affinity_notify *notify);
 
-struct cpumask *irq_create_affinity_masks(const struct cpumask *affinity, int nvec);
+struct cpumask *irq_create_affinity_masks(int nvec, const struct irq_affinity *affd);
 int irq_calc_affinity_vectors(int maxvec, const struct irq_affinity *affd);
 
 #else /* CONFIG_SMP */
@@ -325,7 +325,7 @@ irq_set_affinity_notifier(unsigned int irq, struct irq_affinity_notify *notify)
 }
 
 static inline struct cpumask *
-irq_create_affinity_masks(const struct cpumask *affinity, int nvec)
+irq_create_affinity_masks(int nvec, const struct irq_affinity *affd)
 {
 	return NULL;
 }
diff --git a/kernel/irq/affinity.c b/kernel/irq/affinity.c
index 8d92597..17360bd 100644
--- a/kernel/irq/affinity.c
+++ b/kernel/irq/affinity.c
@@ -51,16 +51,16 @@ static int get_nodes_in_cpumask(const struct cpumask *mask, nodemask_t *nodemsk)
 
 /**
  * irq_create_affinity_masks - Create affinity masks for multiqueue spreading
- * @affinity:		The affinity mask to spread. If NULL cpu_online_mask
- *			is used
- * @nvecs:		The number of vectors
+ * @nvecs:	The total number of vectors
+ * @affd:	Description of the affinity requirements
  *
  * Returns the masks pointer or NULL if allocation failed.
  */
-struct cpumask *irq_create_affinity_masks(const struct cpumask *affinity,
-					  int nvec)
+struct cpumask *
+irq_create_affinity_masks(int nvecs, const struct irq_affinity *affd)
 {
-	int n, nodes, vecs_per_node, cpus_per_vec, extra_vecs, curvec = 0;
+	int n, nodes, vecs_per_node, cpus_per_vec, extra_vecs, curvec;
+	int affv = nvecs - affd->pre_vectors - affd->post_vectors;
 	nodemask_t nodemsk = NODE_MASK_NONE;
 	struct cpumask *masks;
 	cpumask_var_t nmsk;
@@ -68,46 +68,46 @@ struct cpumask *irq_create_affinity_masks(const struct cpumask *affinity,
 	if (!zalloc_cpumask_var(&nmsk, GFP_KERNEL))
 		return NULL;
 
-	masks = kzalloc(nvec * sizeof(*masks), GFP_KERNEL);
+	masks = kcalloc(nvecs, sizeof(*masks), GFP_KERNEL);
 	if (!masks)
 		goto out;
 
+	/* Fill out vectors at the beginning that don't need affinity */
+	for (curvec = 0; curvec < affd->pre_vectors; curvec++)
+		cpumask_copy(masks + curvec, cpu_possible_mask);
+
 	/* Stabilize the cpumasks */
 	get_online_cpus();
-	/* If the supplied affinity mask is NULL, use cpu online mask */
-	if (!affinity)
-		affinity = cpu_online_mask;
-
-	nodes = get_nodes_in_cpumask(affinity, &nodemsk);
+	nodes = get_nodes_in_cpumask(cpu_online_mask, &nodemsk);
 
 	/*
 	 * If the number of nodes in the mask is less than or equal the
 	 * number of vectors we just spread the vectors across the nodes.
 	 */
-	if (nvec <= nodes) {
+	if (affv <= nodes) {
 		for_each_node_mask(n, nodemsk) {
 			cpumask_copy(masks + curvec, cpumask_of_node(n));
-			if (++curvec == nvec)
+			if (++curvec == affv)
 				break;
 		}
-		goto outonl;
+		goto done;
 	}
 
 	/* Spread the vectors per node */
-	vecs_per_node = nvec / nodes;
+	vecs_per_node = affv / nodes;
 	/* Account for rounding errors */
-	extra_vecs = nvec - (nodes * vecs_per_node);
+	extra_vecs = affv - (nodes * vecs_per_node);
 
 	for_each_node_mask(n, nodemsk) {
 		int ncpus, v, vecs_to_assign = vecs_per_node;
 
 		/* Get the cpus on this node which are in the mask */
-		cpumask_and(nmsk, affinity, cpumask_of_node(n));
+		cpumask_and(nmsk, cpu_online_mask, cpumask_of_node(n));
 
 		/* Calculate the number of cpus per vector */
 		ncpus = cpumask_weight(nmsk);
 
-		for (v = 0; curvec < nvec && v < vecs_to_assign; curvec++, v++) {
+		for (v = 0; curvec < affv && v < vecs_to_assign; curvec++, v++) {
 			cpus_per_vec = ncpus / vecs_to_assign;
 
 			/* Account for extra vectors to compensate rounding errors */
@@ -119,12 +119,16 @@ struct cpumask *irq_create_affinity_masks(const struct cpumask *affinity,
 			irq_spread_init_one(masks + curvec, nmsk, cpus_per_vec);
 		}
 
-		if (curvec >= nvec)
+		if (curvec >= affv)
 			break;
 	}
 
-outonl:
+done:
 	put_online_cpus();
+
+	/* Fill out vectors at the end that don't need affinity */
+	for (; curvec < nvecs; curvec++)
+		cpumask_copy(masks + curvec, cpu_possible_mask);
 out:
 	free_cpumask_var(nmsk);
 	return masks;
-- 
2.1.4

^ permalink raw reply related	[flat|nested] 33+ messages in thread

* [PATCH 4/7] pci/msi: Propagate irq affinity description through the MSI code
  2016-11-07 18:47 support for partial irq affinity assignment Christoph Hellwig
                   ` (2 preceding siblings ...)
  2016-11-07 18:47 ` [PATCH 3/7] genirq/affinity: Handle pre/post vectors in irq_create_affinity_masks() Christoph Hellwig
@ 2016-11-07 18:47 ` Christoph Hellwig
  2016-11-08  8:16   ` Hannes Reinecke
                     ` (2 more replies)
  2016-11-07 18:47 ` [PATCH 5/7] pci/msi: Provide pci_alloc_irq_vectors_affinity() Christoph Hellwig
                   ` (2 subsequent siblings)
  6 siblings, 3 replies; 33+ messages in thread
From: Christoph Hellwig @ 2016-11-07 18:47 UTC (permalink / raw)
  To: tglx; +Cc: axboe, linux-block, linux-pci, linux-kernel, Christogh Hellwig

From: Christogh Hellwig <hch@lst.de>

No API change yet, just pass it down all the way from
pci_alloc_irq_vectors to the core MSI code.

Signed-off-by: Christogh Hellwig <hch@lst.de>
---
 drivers/pci/msi.c | 62 +++++++++++++++++++++++++++++--------------------------
 1 file changed, 33 insertions(+), 29 deletions(-)

diff --git a/drivers/pci/msi.c b/drivers/pci/msi.c
index 1761b8a..512f388 100644
--- a/drivers/pci/msi.c
+++ b/drivers/pci/msi.c
@@ -551,14 +551,14 @@ static int populate_msi_sysfs(struct pci_dev *pdev)
 }
 
 static struct msi_desc *
-msi_setup_entry(struct pci_dev *dev, int nvec, bool affinity)
+msi_setup_entry(struct pci_dev *dev, int nvec, const struct irq_affinity *affd)
 {
 	struct cpumask *masks = NULL;
 	struct msi_desc *entry;
 	u16 control;
 
-	if (affinity) {
-		masks = irq_create_affinity_masks(nvec, NULL);
+	if (affd) {
+		masks = irq_create_affinity_masks(nvec, affd);
 		if (!masks)
 			pr_err("Unable to allocate affinity masks, ignoring\n");
 	}
@@ -618,7 +618,8 @@ static int msi_verify_entries(struct pci_dev *dev)
  * an error, and a positive return value indicates the number of interrupts
  * which could have been allocated.
  */
-static int msi_capability_init(struct pci_dev *dev, int nvec, bool affinity)
+static int msi_capability_init(struct pci_dev *dev, int nvec,
+			       const struct irq_affinity *affd)
 {
 	struct msi_desc *entry;
 	int ret;
@@ -626,7 +627,7 @@ static int msi_capability_init(struct pci_dev *dev, int nvec, bool affinity)
 
 	pci_msi_set_enable(dev, 0);	/* Disable MSI during set up */
 
-	entry = msi_setup_entry(dev, nvec, affinity);
+	entry = msi_setup_entry(dev, nvec, affd);
 	if (!entry)
 		return -ENOMEM;
 
@@ -690,14 +691,14 @@ static void __iomem *msix_map_region(struct pci_dev *dev, unsigned nr_entries)
 
 static int msix_setup_entries(struct pci_dev *dev, void __iomem *base,
 			      struct msix_entry *entries, int nvec,
-			      bool affinity)
+			      const struct irq_affinity *affd)
 {
 	struct cpumask *curmsk, *masks = NULL;
 	struct msi_desc *entry;
 	int ret, i;
 
-	if (affinity) {
-		masks = irq_create_affinity_masks(nvec, NULL);
+	if (affd) {
+		masks = irq_create_affinity_masks(nvec, affd);
 		if (!masks)
 			pr_err("Unable to allocate affinity masks, ignoring\n");
 	}
@@ -753,14 +754,14 @@ static void msix_program_entries(struct pci_dev *dev,
  * @dev: pointer to the pci_dev data structure of MSI-X device function
  * @entries: pointer to an array of struct msix_entry entries
  * @nvec: number of @entries
- * @affinity: flag to indicate cpu irq affinity mask should be set
+ * @affd: Optional pointer to enable automatic affinity assignement
  *
  * Setup the MSI-X capability structure of device function with a
  * single MSI-X irq. A return of zero indicates the successful setup of
  * requested MSI-X entries with allocated irqs or non-zero for otherwise.
  **/
 static int msix_capability_init(struct pci_dev *dev, struct msix_entry *entries,
-				int nvec, bool affinity)
+				int nvec, const struct irq_affinity *affd)
 {
 	int ret;
 	u16 control;
@@ -775,7 +776,7 @@ static int msix_capability_init(struct pci_dev *dev, struct msix_entry *entries,
 	if (!base)
 		return -ENOMEM;
 
-	ret = msix_setup_entries(dev, base, entries, nvec, affinity);
+	ret = msix_setup_entries(dev, base, entries, nvec, affd);
 	if (ret)
 		return ret;
 
@@ -956,7 +957,7 @@ int pci_msix_vec_count(struct pci_dev *dev)
 EXPORT_SYMBOL(pci_msix_vec_count);
 
 static int __pci_enable_msix(struct pci_dev *dev, struct msix_entry *entries,
-			     int nvec, bool affinity)
+			     int nvec, const struct irq_affinity *affd)
 {
 	int nr_entries;
 	int i, j;
@@ -988,7 +989,7 @@ static int __pci_enable_msix(struct pci_dev *dev, struct msix_entry *entries,
 		dev_info(&dev->dev, "can't enable MSI-X (MSI IRQ already assigned)\n");
 		return -EINVAL;
 	}
-	return msix_capability_init(dev, entries, nvec, affinity);
+	return msix_capability_init(dev, entries, nvec, affd);
 }
 
 /**
@@ -1008,7 +1009,7 @@ static int __pci_enable_msix(struct pci_dev *dev, struct msix_entry *entries,
  **/
 int pci_enable_msix(struct pci_dev *dev, struct msix_entry *entries, int nvec)
 {
-	return __pci_enable_msix(dev, entries, nvec, false);
+	return __pci_enable_msix(dev, entries, nvec, NULL);
 }
 EXPORT_SYMBOL(pci_enable_msix);
 
@@ -1059,9 +1060,8 @@ int pci_msi_enabled(void)
 EXPORT_SYMBOL(pci_msi_enabled);
 
 static int __pci_enable_msi_range(struct pci_dev *dev, int minvec, int maxvec,
-		unsigned int flags)
+				  const struct irq_affinity *affd)
 {
-	bool affinity = flags & PCI_IRQ_AFFINITY;
 	int nvec;
 	int rc;
 
@@ -1090,13 +1090,13 @@ static int __pci_enable_msi_range(struct pci_dev *dev, int minvec, int maxvec,
 		nvec = maxvec;
 
 	for (;;) {
-		if (affinity) {
-			nvec = irq_calc_affinity_vectors(nvec, NULL);
+		if (affd) {
+			nvec = irq_calc_affinity_vectors(nvec, affd);
 			if (nvec < minvec)
 				return -ENOSPC;
 		}
 
-		rc = msi_capability_init(dev, nvec, affinity);
+		rc = msi_capability_init(dev, nvec, affd);
 		if (rc == 0)
 			return nvec;
 
@@ -1123,28 +1123,27 @@ static int __pci_enable_msi_range(struct pci_dev *dev, int minvec, int maxvec,
  **/
 int pci_enable_msi_range(struct pci_dev *dev, int minvec, int maxvec)
 {
-	return __pci_enable_msi_range(dev, minvec, maxvec, 0);
+	return __pci_enable_msi_range(dev, minvec, maxvec, NULL);
 }
 EXPORT_SYMBOL(pci_enable_msi_range);
 
 static int __pci_enable_msix_range(struct pci_dev *dev,
-		struct msix_entry *entries, int minvec, int maxvec,
-		unsigned int flags)
+				   struct msix_entry *entries, int minvec,
+				   int maxvec, const struct irq_affinity *affd)
 {
-	bool affinity = flags & PCI_IRQ_AFFINITY;
 	int rc, nvec = maxvec;
 
 	if (maxvec < minvec)
 		return -ERANGE;
 
 	for (;;) {
-		if (affinity) {
-			nvec = irq_calc_affinity_vectors(nvec, NULL);
+		if (affd) {
+			nvec = irq_calc_affinity_vectors(nvec, affd);
 			if (nvec < minvec)
 				return -ENOSPC;
 		}
 
-		rc = __pci_enable_msix(dev, entries, nvec, affinity);
+		rc = __pci_enable_msix(dev, entries, nvec, affd);
 		if (rc == 0)
 			return nvec;
 
@@ -1175,7 +1174,7 @@ static int __pci_enable_msix_range(struct pci_dev *dev,
 int pci_enable_msix_range(struct pci_dev *dev, struct msix_entry *entries,
 		int minvec, int maxvec)
 {
-	return __pci_enable_msix_range(dev, entries, minvec, maxvec, 0);
+	return __pci_enable_msix_range(dev, entries, minvec, maxvec, NULL);
 }
 EXPORT_SYMBOL(pci_enable_msix_range);
 
@@ -1199,17 +1198,22 @@ EXPORT_SYMBOL(pci_enable_msix_range);
 int pci_alloc_irq_vectors(struct pci_dev *dev, unsigned int min_vecs,
 		unsigned int max_vecs, unsigned int flags)
 {
+	static const struct irq_affinity msi_default_affd;
+	const struct irq_affinity *affd = NULL;
 	int vecs = -ENOSPC;
 
+	if (flags & PCI_IRQ_AFFINITY)
+		affd = &msi_default_affd;
+
 	if (flags & PCI_IRQ_MSIX) {
 		vecs = __pci_enable_msix_range(dev, NULL, min_vecs, max_vecs,
-				flags);
+				affd);
 		if (vecs > 0)
 			return vecs;
 	}
 
 	if (flags & PCI_IRQ_MSI) {
-		vecs = __pci_enable_msi_range(dev, min_vecs, max_vecs, flags);
+		vecs = __pci_enable_msi_range(dev, min_vecs, max_vecs, affd);
 		if (vecs > 0)
 			return vecs;
 	}
-- 
2.1.4

^ permalink raw reply related	[flat|nested] 33+ messages in thread

* [PATCH 5/7] pci/msi: Provide pci_alloc_irq_vectors_affinity()
  2016-11-07 18:47 support for partial irq affinity assignment Christoph Hellwig
                   ` (3 preceding siblings ...)
  2016-11-07 18:47 ` [PATCH 4/7] pci/msi: Propagate irq affinity description through the MSI code Christoph Hellwig
@ 2016-11-07 18:47 ` Christoph Hellwig
  2016-11-08  8:17   ` Hannes Reinecke
  2016-11-08 21:17   ` Bjorn Helgaas
  2016-11-07 18:47 ` [PATCH 6/7] pci: Remove the irq_affinity mask from struct pci_dev Christoph Hellwig
  2016-11-07 18:47 ` [PATCH 7/7] blk-mq: add a first_vec argument to blk_mq_pci_map_queues Christoph Hellwig
  6 siblings, 2 replies; 33+ messages in thread
From: Christoph Hellwig @ 2016-11-07 18:47 UTC (permalink / raw)
  To: tglx; +Cc: axboe, linux-block, linux-pci, linux-kernel, Christogh Hellwig

From: Christogh Hellwig <hch@lst.de>

This is a variant of pci_alloc_irq_vectors() that allows passing a
struct irq_affinity to provide fine-grainded IRQ affinity control.
For now this means being able to exclude vectors at the beginning or
end of the MSI vector space, but it could also be used for any other
quirks needed in the future (e.g. more vectors than CPUs, or exluding
CPUs from the spreading).

Signed-off-by: Christogh Hellwig <hch@lst.de>
---
 drivers/pci/msi.c   | 20 +++++++++++++-------
 include/linux/pci.h | 24 +++++++++++++++++++-----
 2 files changed, 32 insertions(+), 12 deletions(-)

diff --git a/drivers/pci/msi.c b/drivers/pci/msi.c
index 512f388..dd27f73 100644
--- a/drivers/pci/msi.c
+++ b/drivers/pci/msi.c
@@ -1179,11 +1179,12 @@ int pci_enable_msix_range(struct pci_dev *dev, struct msix_entry *entries,
 EXPORT_SYMBOL(pci_enable_msix_range);
 
 /**
- * pci_alloc_irq_vectors - allocate multiple IRQs for a device
+ * pci_alloc_irq_vectors_affinity - allocate multiple IRQs for a device
  * @dev:		PCI device to operate on
  * @min_vecs:		minimum number of vectors required (must be >= 1)
  * @max_vecs:		maximum (desired) number of vectors
  * @flags:		flags or quirks for the allocation
+ * @affd:		optional description of the affinity requirements
  *
  * Allocate up to @max_vecs interrupt vectors for @dev, using MSI-X or MSI
  * vectors if available, and fall back to a single legacy vector
@@ -1195,15 +1196,20 @@ EXPORT_SYMBOL(pci_enable_msix_range);
  * To get the Linux IRQ number used for a vector that can be passed to
  * request_irq() use the pci_irq_vector() helper.
  */
-int pci_alloc_irq_vectors(struct pci_dev *dev, unsigned int min_vecs,
-		unsigned int max_vecs, unsigned int flags)
+int pci_alloc_irq_vectors_affinity(struct pci_dev *dev, unsigned int min_vecs,
+				   unsigned int max_vecs, unsigned int flags,
+				   const struct irq_affinity *affd)
 {
 	static const struct irq_affinity msi_default_affd;
-	const struct irq_affinity *affd = NULL;
 	int vecs = -ENOSPC;
 
-	if (flags & PCI_IRQ_AFFINITY)
-		affd = &msi_default_affd;
+	if (flags & PCI_IRQ_AFFINITY) {
+		if (!affd)
+			affd = &msi_default_affd;
+	} else {
+		if (WARN_ON(affd))
+			affd = NULL;
+	}
 
 	if (flags & PCI_IRQ_MSIX) {
 		vecs = __pci_enable_msix_range(dev, NULL, min_vecs, max_vecs,
@@ -1226,7 +1232,7 @@ int pci_alloc_irq_vectors(struct pci_dev *dev, unsigned int min_vecs,
 
 	return vecs;
 }
-EXPORT_SYMBOL(pci_alloc_irq_vectors);
+EXPORT_SYMBOL(pci_alloc_irq_vectors_affinity);
 
 /**
  * pci_free_irq_vectors - free previously allocated IRQs for a device
diff --git a/include/linux/pci.h b/include/linux/pci.h
index 0e49f70..7090f5f 100644
--- a/include/linux/pci.h
+++ b/include/linux/pci.h
@@ -244,6 +244,7 @@ struct pci_cap_saved_state {
 	struct pci_cap_saved_data cap;
 };
 
+struct irq_affinity;
 struct pcie_link_state;
 struct pci_vpd;
 struct pci_sriov;
@@ -1310,8 +1311,10 @@ static inline int pci_enable_msix_exact(struct pci_dev *dev,
 		return rc;
 	return 0;
 }
-int pci_alloc_irq_vectors(struct pci_dev *dev, unsigned int min_vecs,
-		unsigned int max_vecs, unsigned int flags);
+int pci_alloc_irq_vectors_affinity(struct pci_dev *dev, unsigned int min_vecs,
+				   unsigned int max_vecs, unsigned int flags,
+				   const struct irq_affinity *affd);
+
 void pci_free_irq_vectors(struct pci_dev *dev);
 int pci_irq_vector(struct pci_dev *dev, unsigned int nr);
 const struct cpumask *pci_irq_get_affinity(struct pci_dev *pdev, int vec);
@@ -1339,14 +1342,17 @@ static inline int pci_enable_msix_range(struct pci_dev *dev,
 static inline int pci_enable_msix_exact(struct pci_dev *dev,
 		      struct msix_entry *entries, int nvec)
 { return -ENOSYS; }
-static inline int pci_alloc_irq_vectors(struct pci_dev *dev,
-		unsigned int min_vecs, unsigned int max_vecs,
-		unsigned int flags)
+
+static inline int
+pci_alloc_irq_vectors_affinity(struct pci_dev *dev, unsigned int min_vecs,
+			       unsigned int max_vecs, unsigned int flags,
+			       const struct irq_affinity *aff_desc)
 {
 	if (min_vecs > 1)
 		return -EINVAL;
 	return 1;
 }
+
 static inline void pci_free_irq_vectors(struct pci_dev *dev)
 {
 }
@@ -1364,6 +1370,14 @@ static inline const struct cpumask *pci_irq_get_affinity(struct pci_dev *pdev,
 }
 #endif
 
+static inline int
+pci_alloc_irq_vectors(struct pci_dev *dev, unsigned int min_vecs,
+		      unsigned int max_vecs, unsigned int flags)
+{
+	return pci_alloc_irq_vectors_affinity(dev, min_vecs, max_vecs, flags,
+					      NULL);
+}
+
 #ifdef CONFIG_PCIEPORTBUS
 extern bool pcie_ports_disabled;
 extern bool pcie_ports_auto;
-- 
2.1.4

^ permalink raw reply related	[flat|nested] 33+ messages in thread

* [PATCH 6/7] pci: Remove the irq_affinity mask from struct pci_dev
  2016-11-07 18:47 support for partial irq affinity assignment Christoph Hellwig
                   ` (4 preceding siblings ...)
  2016-11-07 18:47 ` [PATCH 5/7] pci/msi: Provide pci_alloc_irq_vectors_affinity() Christoph Hellwig
@ 2016-11-07 18:47 ` Christoph Hellwig
  2016-11-08  8:17   ` Hannes Reinecke
                     ` (2 more replies)
  2016-11-07 18:47 ` [PATCH 7/7] blk-mq: add a first_vec argument to blk_mq_pci_map_queues Christoph Hellwig
  6 siblings, 3 replies; 33+ messages in thread
From: Christoph Hellwig @ 2016-11-07 18:47 UTC (permalink / raw)
  To: tglx; +Cc: axboe, linux-block, linux-pci, linux-kernel

This has never been used, and now is totally unreferenced.  Nuke it.

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 include/linux/pci.h | 1 -
 1 file changed, 1 deletion(-)

diff --git a/include/linux/pci.h b/include/linux/pci.h
index 7090f5f..f2ba6ac 100644
--- a/include/linux/pci.h
+++ b/include/linux/pci.h
@@ -333,7 +333,6 @@ struct pci_dev {
 	 * directly, use the values stored here. They might be different!
 	 */
 	unsigned int	irq;
-	struct cpumask	*irq_affinity;
 	struct resource resource[DEVICE_COUNT_RESOURCE]; /* I/O and memory regions + expansion ROMs */
 
 	bool match_driver;		/* Skip attaching driver */
-- 
2.1.4

^ permalink raw reply related	[flat|nested] 33+ messages in thread

* [PATCH 7/7] blk-mq: add a first_vec argument to blk_mq_pci_map_queues
  2016-11-07 18:47 support for partial irq affinity assignment Christoph Hellwig
                   ` (5 preceding siblings ...)
  2016-11-07 18:47 ` [PATCH 6/7] pci: Remove the irq_affinity mask from struct pci_dev Christoph Hellwig
@ 2016-11-07 18:47 ` Christoph Hellwig
  2016-11-08  8:27   ` Johannes Thumshirn
  6 siblings, 1 reply; 33+ messages in thread
From: Christoph Hellwig @ 2016-11-07 18:47 UTC (permalink / raw)
  To: tglx; +Cc: axboe, linux-block, linux-pci, linux-kernel

This allows skipping the first N IRQ vectors in case they are used for
control or admin interrupts.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Hannes Reinecke <hare@suse.com>
---
 block/blk-mq-pci.c         | 6 ++++--
 drivers/nvme/host/pci.c    | 2 +-
 include/linux/blk-mq-pci.h | 3 ++-
 3 files changed, 7 insertions(+), 4 deletions(-)

diff --git a/block/blk-mq-pci.c b/block/blk-mq-pci.c
index 966c216..03ff7c4 100644
--- a/block/blk-mq-pci.c
+++ b/block/blk-mq-pci.c
@@ -21,6 +21,7 @@
  * blk_mq_pci_map_queues - provide a default queue mapping for PCI device
  * @set:	tagset to provide the mapping for
  * @pdev:	PCI device associated with @set.
+ * @first_vec:	first interrupt vectors to use for queues (usually 0)
  *
  * This function assumes the PCI device @pdev has at least as many available
  * interrupt vetors as @set has queues.  It will then queuery the vector
@@ -28,12 +29,13 @@
  * that maps a queue to the CPUs that have irq affinity for the corresponding
  * vector.
  */
-int blk_mq_pci_map_queues(struct blk_mq_tag_set *set, struct pci_dev *pdev)
+int blk_mq_pci_map_queues(struct blk_mq_tag_set *set, struct pci_dev *pdev,
+		int first_vec)
 {
 	const struct cpumask *mask;
 	unsigned int queue, cpu;
 
-	for (queue = 0; queue < set->nr_hw_queues; queue++) {
+	for (queue = first_vec; queue < set->nr_hw_queues; queue++) {
 		mask = pci_irq_get_affinity(pdev, queue);
 		if (!mask)
 			return -EINVAL;
diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c
index 0248d0e..6e6b917 100644
--- a/drivers/nvme/host/pci.c
+++ b/drivers/nvme/host/pci.c
@@ -273,7 +273,7 @@ static int nvme_pci_map_queues(struct blk_mq_tag_set *set)
 {
 	struct nvme_dev *dev = set->driver_data;
 
-	return blk_mq_pci_map_queues(set, to_pci_dev(dev->dev));
+	return blk_mq_pci_map_queues(set, to_pci_dev(dev->dev), 0);
 }
 
 /**
diff --git a/include/linux/blk-mq-pci.h b/include/linux/blk-mq-pci.h
index 6ab5952..fde26d2 100644
--- a/include/linux/blk-mq-pci.h
+++ b/include/linux/blk-mq-pci.h
@@ -4,6 +4,7 @@
 struct blk_mq_tag_set;
 struct pci_dev;
 
-int blk_mq_pci_map_queues(struct blk_mq_tag_set *set, struct pci_dev *pdev);
+int blk_mq_pci_map_queues(struct blk_mq_tag_set *set, struct pci_dev *pdev,
+		int first_vec);
 
 #endif /* _LINUX_BLK_MQ_PCI_H */
-- 
2.1.4

^ permalink raw reply related	[flat|nested] 33+ messages in thread

* Re: [PATCH 1/7] genirq/affinity: Introduce struct irq_affinity
  2016-11-07 18:47 ` [PATCH 1/7] genirq/affinity: Introduce struct irq_affinity Christoph Hellwig
@ 2016-11-08  8:11   ` Hannes Reinecke
  2016-11-08  8:19   ` Johannes Thumshirn
  2016-11-08 21:25   ` Bjorn Helgaas
  2 siblings, 0 replies; 33+ messages in thread
From: Hannes Reinecke @ 2016-11-08  8:11 UTC (permalink / raw)
  To: Christoph Hellwig, tglx; +Cc: axboe, linux-block, linux-pci, linux-kernel

On 11/07/2016 07:47 PM, Christoph Hellwig wrote:
> From: Christogh Hellwig <hch@lst.de>
>
> Some drivers (various network and RDMA adapter for example) have a MSI-X
> vector layout where most of the vectors are used for I/O queues and should
> have CPU affinity assigned to them, but some (usually 1 but sometimes more)
> at the beginning or end are used for low-performance admin or configuration
> work and should not have any explicit affinity assigned to them.
>
> This adds a new irq_affinity structure, which will be passed through a
> variant of pci_irq_alloc_vectors that allows to specify these
> requirements (and is extensible to any future quirks in that area) so that
> the core IRQ affinity algorithm can take this quirks into account.
>
> Signed-off-by: Christogh Hellwig <hch@lst.de>
> ---
>  include/linux/interrupt.h | 12 ++++++++++++
>  1 file changed, 12 insertions(+)
>
> diff --git a/include/linux/interrupt.h b/include/linux/interrupt.h
> index 72f0721..7284bcd 100644
> --- a/include/linux/interrupt.h
> +++ b/include/linux/interrupt.h
> @@ -232,6 +232,18 @@ struct irq_affinity_notify {
>  	void (*release)(struct kref *ref);
>  };
>
> +/**
> + * struct irq_affinity - Description for auto irq affinity assignements
> + * @pre_vectors:	Reserved vectors at the beginning of the MSIX
> + *			vector space
> + * @post_vectors:	Reserved vectors at the end of the MSIX
> + *			vector space
> + */
> +struct irq_affinity {
> +	int	pre_vectors;
> +	int	post_vectors;
> +};
> +
>  #if defined(CONFIG_SMP)
>
>  extern cpumask_var_t irq_default_affinity;
>
Reviewed-by: Hannes Reinecke <hare@suse.com>

Cheers,

Hannes
-- 
Dr. Hannes Reinecke		   Teamlead Storage & Networking
hare@suse.de			               +49 911 74053 688
SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg
GF: F. Imendörffer, J. Smithard, J. Guild, D. Upmanyu, G. Norton
HRB 21284 (AG Nürnberg)

^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: [PATCH 2/7] genirq/affinity: Handle pre/post vectors in irq_calc_affinity_vectors()
  2016-11-07 18:47 ` [PATCH 2/7] genirq/affinity: Handle pre/post vectors in irq_calc_affinity_vectors() Christoph Hellwig
@ 2016-11-08  8:14   ` Hannes Reinecke
  2016-11-08 14:40     ` Thomas Gleixner
  0 siblings, 1 reply; 33+ messages in thread
From: Hannes Reinecke @ 2016-11-08  8:14 UTC (permalink / raw)
  To: Christoph Hellwig, tglx; +Cc: axboe, linux-block, linux-pci, linux-kernel

On 11/07/2016 07:47 PM, Christoph Hellwig wrote:
> From: Christogh Hellwig <hch@lst.de>
>
> Only calculate the affinity for the main I/O vectors, and skip the
> pre or post vectors specified by struct irq_affinity.
>
> Also remove the irq_affinity cpumask argument that has never been used.
> If we ever need it in the future we can pass it through struct
> irq_affinity.
>
> Signed-off-by: Christogh Hellwig <hch@lst.de>
> ---
>  drivers/pci/msi.c         |  6 ++----
>  include/linux/interrupt.h |  4 ++--
>  kernel/irq/affinity.c     | 24 ++++++++++--------------
>  3 files changed, 14 insertions(+), 20 deletions(-)
>
> diff --git a/drivers/pci/msi.c b/drivers/pci/msi.c
> index ad70507..c58d3c2 100644
> --- a/drivers/pci/msi.c
> +++ b/drivers/pci/msi.c
> @@ -1091,8 +1091,7 @@ static int __pci_enable_msi_range(struct pci_dev *dev, int minvec, int maxvec,
>
>  	for (;;) {
>  		if (affinity) {
> -			nvec = irq_calc_affinity_vectors(dev->irq_affinity,
> -					nvec);
> +			nvec = irq_calc_affinity_vectors(nvec, NULL);
>  			if (nvec < minvec)
>  				return -ENOSPC;
>  		}
> @@ -1140,8 +1139,7 @@ static int __pci_enable_msix_range(struct pci_dev *dev,
>
>  	for (;;) {
>  		if (affinity) {
> -			nvec = irq_calc_affinity_vectors(dev->irq_affinity,
> -					nvec);
> +			nvec = irq_calc_affinity_vectors(nvec, NULL);
>  			if (nvec < minvec)
>  				return -ENOSPC;
>  		}
> diff --git a/include/linux/interrupt.h b/include/linux/interrupt.h
> index 7284bcd..092adfb 100644
> --- a/include/linux/interrupt.h
> +++ b/include/linux/interrupt.h
> @@ -291,7 +291,7 @@ extern int
>  irq_set_affinity_notifier(unsigned int irq, struct irq_affinity_notify *notify);
>
>  struct cpumask *irq_create_affinity_masks(const struct cpumask *affinity, int nvec);
> -int irq_calc_affinity_vectors(const struct cpumask *affinity, int maxvec);
> +int irq_calc_affinity_vectors(int maxvec, const struct irq_affinity *affd);
>
>  #else /* CONFIG_SMP */
>
> @@ -331,7 +331,7 @@ irq_create_affinity_masks(const struct cpumask *affinity, int nvec)
>  }
>
>  static inline int
> -irq_calc_affinity_vectors(const struct cpumask *affinity, int maxvec)
> +irq_calc_affinity_vectors(int maxvec, const struct irq_affinity *affd)
>  {
>  	return maxvec;
>  }
> diff --git a/kernel/irq/affinity.c b/kernel/irq/affinity.c
> index 17f51d63..8d92597 100644
> --- a/kernel/irq/affinity.c
> +++ b/kernel/irq/affinity.c
> @@ -131,24 +131,20 @@ struct cpumask *irq_create_affinity_masks(const struct cpumask *affinity,
>  }
>
>  /**
> - * irq_calc_affinity_vectors - Calculate to optimal number of vectors for a given affinity mask
> - * @affinity:		The affinity mask to spread. If NULL cpu_online_mask
> - *			is used
> - * @maxvec:		The maximum number of vectors available
> + * irq_calc_affinity_vectors - Calculate the optimal number of vectors
> + * @maxvec:	The maximum number of vectors available
> + * @affd:	Description of the affinity requirements
>   */
> -int irq_calc_affinity_vectors(const struct cpumask *affinity, int maxvec)
> +int irq_calc_affinity_vectors(int maxvec, const struct irq_affinity *affd)
>  {
> -	int cpus, ret;
> +	int resv = affd->pre_vectors + affd->post_vectors;
> +	int vecs = maxvec - resv;
> +	int cpus;
>
Shouldn't you check for NULL affd here?

Cheers,

Hannes
-- 
Dr. Hannes Reinecke		   Teamlead Storage & Networking
hare@suse.de			               +49 911 74053 688
SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg
GF: F. Imendörffer, J. Smithard, J. Guild, D. Upmanyu, G. Norton
HRB 21284 (AG Nürnberg)

^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: [PATCH 3/7] genirq/affinity: Handle pre/post vectors in irq_create_affinity_masks()
  2016-11-07 18:47 ` [PATCH 3/7] genirq/affinity: Handle pre/post vectors in irq_create_affinity_masks() Christoph Hellwig
@ 2016-11-08  8:15   ` Hannes Reinecke
  2016-11-08 14:55     ` Christoph Hellwig
  2016-11-08 21:20   ` Bjorn Helgaas
  1 sibling, 1 reply; 33+ messages in thread
From: Hannes Reinecke @ 2016-11-08  8:15 UTC (permalink / raw)
  To: Christoph Hellwig, tglx; +Cc: axboe, linux-block, linux-pci, linux-kernel

On 11/07/2016 07:47 PM, Christoph Hellwig wrote:
> From: Christogh Hellwig <hch@lst.de>
>
> Only calculate the affinity for the main I/O vectors, and skip the
> pre or post vectors specified by struct irq_affinity.
>
> Also remove the irq_affinity cpumask argument that has never been used.
> If we ever need it in the future we can pass it through struct
> irq_affinity.
>
> Signed-off-by: Christogh Hellwig <hch@lst.de>
> ---
>  drivers/pci/msi.c         |  4 ++--
>  include/linux/interrupt.h |  4 ++--
>  kernel/irq/affinity.c     | 46 +++++++++++++++++++++++++---------------------
>  3 files changed, 29 insertions(+), 25 deletions(-)
>
> diff --git a/drivers/pci/msi.c b/drivers/pci/msi.c
> index c58d3c2..1761b8a 100644
> --- a/drivers/pci/msi.c
> +++ b/drivers/pci/msi.c
> @@ -558,7 +558,7 @@ msi_setup_entry(struct pci_dev *dev, int nvec, bool affinity)
>  	u16 control;
>
>  	if (affinity) {
> -		masks = irq_create_affinity_masks(dev->irq_affinity, nvec);
> +		masks = irq_create_affinity_masks(nvec, NULL);
>  		if (!masks)
>  			pr_err("Unable to allocate affinity masks, ignoring\n");
>  	}
> @@ -697,7 +697,7 @@ static int msix_setup_entries(struct pci_dev *dev, void __iomem *base,
>  	int ret, i;
>
>  	if (affinity) {
> -		masks = irq_create_affinity_masks(dev->irq_affinity, nvec);
> +		masks = irq_create_affinity_masks(nvec, NULL);
>  		if (!masks)
>  			pr_err("Unable to allocate affinity masks, ignoring\n");
>  	}
> diff --git a/include/linux/interrupt.h b/include/linux/interrupt.h
> index 092adfb..bca8f1c 100644
> --- a/include/linux/interrupt.h
> +++ b/include/linux/interrupt.h
> @@ -290,7 +290,7 @@ extern int irq_set_affinity_hint(unsigned int irq, const struct cpumask *m);
>  extern int
>  irq_set_affinity_notifier(unsigned int irq, struct irq_affinity_notify *notify);
>
> -struct cpumask *irq_create_affinity_masks(const struct cpumask *affinity, int nvec);
> +struct cpumask *irq_create_affinity_masks(int nvec, const struct irq_affinity *affd);
>  int irq_calc_affinity_vectors(int maxvec, const struct irq_affinity *affd);
>
>  #else /* CONFIG_SMP */
> @@ -325,7 +325,7 @@ irq_set_affinity_notifier(unsigned int irq, struct irq_affinity_notify *notify)
>  }
>
>  static inline struct cpumask *
> -irq_create_affinity_masks(const struct cpumask *affinity, int nvec)
> +irq_create_affinity_masks(int nvec, const struct irq_affinity *affd)
>  {
>  	return NULL;
>  }
> diff --git a/kernel/irq/affinity.c b/kernel/irq/affinity.c
> index 8d92597..17360bd 100644
> --- a/kernel/irq/affinity.c
> +++ b/kernel/irq/affinity.c
> @@ -51,16 +51,16 @@ static int get_nodes_in_cpumask(const struct cpumask *mask, nodemask_t *nodemsk)
>
>  /**
>   * irq_create_affinity_masks - Create affinity masks for multiqueue spreading
> - * @affinity:		The affinity mask to spread. If NULL cpu_online_mask
> - *			is used
> - * @nvecs:		The number of vectors
> + * @nvecs:	The total number of vectors
> + * @affd:	Description of the affinity requirements
>   *
>   * Returns the masks pointer or NULL if allocation failed.
>   */
> -struct cpumask *irq_create_affinity_masks(const struct cpumask *affinity,
> -					  int nvec)
> +struct cpumask *
> +irq_create_affinity_masks(int nvecs, const struct irq_affinity *affd)
>  {
> -	int n, nodes, vecs_per_node, cpus_per_vec, extra_vecs, curvec = 0;
> +	int n, nodes, vecs_per_node, cpus_per_vec, extra_vecs, curvec;
> +	int affv = nvecs - affd->pre_vectors - affd->post_vectors;
>  	nodemask_t nodemsk = NODE_MASK_NONE;
>  	struct cpumask *masks;
>  	cpumask_var_t nmsk;
Check for NULL affd?

Cheers,

Hannes
-- 
Dr. Hannes Reinecke		   Teamlead Storage & Networking
hare@suse.de			               +49 911 74053 688
SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg
GF: F. Imendörffer, J. Smithard, J. Guild, D. Upmanyu, G. Norton
HRB 21284 (AG Nürnberg)

^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: [PATCH 4/7] pci/msi: Propagate irq affinity description through the MSI code
  2016-11-07 18:47 ` [PATCH 4/7] pci/msi: Propagate irq affinity description through the MSI code Christoph Hellwig
@ 2016-11-08  8:16   ` Hannes Reinecke
  2016-11-08  8:24   ` Johannes Thumshirn
  2016-11-08 21:18   ` Bjorn Helgaas
  2 siblings, 0 replies; 33+ messages in thread
From: Hannes Reinecke @ 2016-11-08  8:16 UTC (permalink / raw)
  To: Christoph Hellwig, tglx; +Cc: axboe, linux-block, linux-pci, linux-kernel

On 11/07/2016 07:47 PM, Christoph Hellwig wrote:
> From: Christogh Hellwig <hch@lst.de>
>
> No API change yet, just pass it down all the way from
> pci_alloc_irq_vectors to the core MSI code.
>
> Signed-off-by: Christogh Hellwig <hch@lst.de>
> ---
>  drivers/pci/msi.c | 62 +++++++++++++++++++++++++++++--------------------------
>  1 file changed, 33 insertions(+), 29 deletions(-)
>
Reviewed-by: Hannes Reinecke <hare@suse.com>

Cheers,

Hannes
-- 
Dr. Hannes Reinecke		   Teamlead Storage & Networking
hare@suse.de			               +49 911 74053 688
SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg
GF: F. Imendörffer, J. Smithard, J. Guild, D. Upmanyu, G. Norton
HRB 21284 (AG Nürnberg)

^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: [PATCH 5/7] pci/msi: Provide pci_alloc_irq_vectors_affinity()
  2016-11-07 18:47 ` [PATCH 5/7] pci/msi: Provide pci_alloc_irq_vectors_affinity() Christoph Hellwig
@ 2016-11-08  8:17   ` Hannes Reinecke
  2016-11-08  8:27     ` Johannes Thumshirn
  2016-11-08 21:17   ` Bjorn Helgaas
  1 sibling, 1 reply; 33+ messages in thread
From: Hannes Reinecke @ 2016-11-08  8:17 UTC (permalink / raw)
  To: Christoph Hellwig, tglx; +Cc: axboe, linux-block, linux-pci, linux-kernel

On 11/07/2016 07:47 PM, Christoph Hellwig wrote:
> From: Christogh Hellwig <hch@lst.de>
>
> This is a variant of pci_alloc_irq_vectors() that allows passing a
> struct irq_affinity to provide fine-grainded IRQ affinity control.
> For now this means being able to exclude vectors at the beginning or
> end of the MSI vector space, but it could also be used for any other
> quirks needed in the future (e.g. more vectors than CPUs, or exluding
> CPUs from the spreading).
>
> Signed-off-by: Christogh Hellwig <hch@lst.de>
> ---
>  drivers/pci/msi.c   | 20 +++++++++++++-------
>  include/linux/pci.h | 24 +++++++++++++++++++-----
>  2 files changed, 32 insertions(+), 12 deletions(-)
>
Reviewed-by: Hannes Reinecke <hare@suse.com>

Cheers,

Hannes
-- 
Dr. Hannes Reinecke		   Teamlead Storage & Networking
hare@suse.de			               +49 911 74053 688
SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg
GF: F. Imendörffer, J. Smithard, J. Guild, D. Upmanyu, G. Norton
HRB 21284 (AG Nürnberg)

^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: [PATCH 6/7] pci: Remove the irq_affinity mask from struct pci_dev
  2016-11-07 18:47 ` [PATCH 6/7] pci: Remove the irq_affinity mask from struct pci_dev Christoph Hellwig
@ 2016-11-08  8:17   ` Hannes Reinecke
  2016-11-08  8:27   ` Johannes Thumshirn
  2016-11-08 20:59   ` Bjorn Helgaas
  2 siblings, 0 replies; 33+ messages in thread
From: Hannes Reinecke @ 2016-11-08  8:17 UTC (permalink / raw)
  To: Christoph Hellwig, tglx; +Cc: axboe, linux-block, linux-pci, linux-kernel

On 11/07/2016 07:47 PM, Christoph Hellwig wrote:
> This has never been used, and now is totally unreferenced.  Nuke it.
>
> Signed-off-by: Christoph Hellwig <hch@lst.de>
> ---
>  include/linux/pci.h | 1 -
>  1 file changed, 1 deletion(-)
>
> diff --git a/include/linux/pci.h b/include/linux/pci.h
> index 7090f5f..f2ba6ac 100644
> --- a/include/linux/pci.h
> +++ b/include/linux/pci.h
> @@ -333,7 +333,6 @@ struct pci_dev {
>  	 * directly, use the values stored here. They might be different!
>  	 */
>  	unsigned int	irq;
> -	struct cpumask	*irq_affinity;
>  	struct resource resource[DEVICE_COUNT_RESOURCE]; /* I/O and memory regions + expansion ROMs */
>
>  	bool match_driver;		/* Skip attaching driver */
>
Reviewed-by: Hannes Reinecke <hare@suse.com>

Cheers,

Hannes
-- 
Dr. Hannes Reinecke		   Teamlead Storage & Networking
hare@suse.de			               +49 911 74053 688
SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg
GF: F. Imendörffer, J. Smithard, J. Guild, D. Upmanyu, G. Norton
HRB 21284 (AG Nürnberg)

^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: [PATCH 1/7] genirq/affinity: Introduce struct irq_affinity
  2016-11-07 18:47 ` [PATCH 1/7] genirq/affinity: Introduce struct irq_affinity Christoph Hellwig
  2016-11-08  8:11   ` Hannes Reinecke
@ 2016-11-08  8:19   ` Johannes Thumshirn
  2016-11-08 21:25   ` Bjorn Helgaas
  2 siblings, 0 replies; 33+ messages in thread
From: Johannes Thumshirn @ 2016-11-08  8:19 UTC (permalink / raw)
  To: Christoph Hellwig; +Cc: tglx, axboe, linux-block, linux-pci, linux-kernel

On Mon, Nov 07, 2016 at 10:47:36AM -0800, Christoph Hellwig wrote:
> From: Christogh Hellwig <hch@lst.de>
> 
> Some drivers (various network and RDMA adapter for example) have a MSI-X
> vector layout where most of the vectors are used for I/O queues and should
> have CPU affinity assigned to them, but some (usually 1 but sometimes more)
> at the beginning or end are used for low-performance admin or configuration
> work and should not have any explicit affinity assigned to them.
> 
> This adds a new irq_affinity structure, which will be passed through a
> variant of pci_irq_alloc_vectors that allows to specify these
> requirements (and is extensible to any future quirks in that area) so that
> the core IRQ affinity algorithm can take this quirks into account.
> 
> Signed-off-by: Christogh Hellwig <hch@lst.de>
> ---

Looks good,
Reviewed-by: Johannes Thumshirn <jthumshirn@suse.de>

-- 
Johannes Thumshirn                                          Storage
jthumshirn@suse.de                                +49 911 74053 689
SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg
GF: Felix Imendörffer, Jane Smithard, Graham Norton
HRB 21284 (AG Nürnberg)
Key fingerprint = EC38 9CAB C2C4 F25D 8600 D0D0 0393 969D 2D76 0850

^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: [PATCH 4/7] pci/msi: Propagate irq affinity description through the MSI code
  2016-11-07 18:47 ` [PATCH 4/7] pci/msi: Propagate irq affinity description through the MSI code Christoph Hellwig
  2016-11-08  8:16   ` Hannes Reinecke
@ 2016-11-08  8:24   ` Johannes Thumshirn
  2016-11-08 21:18   ` Bjorn Helgaas
  2 siblings, 0 replies; 33+ messages in thread
From: Johannes Thumshirn @ 2016-11-08  8:24 UTC (permalink / raw)
  To: Christoph Hellwig; +Cc: tglx, axboe, linux-block, linux-pci, linux-kernel

On Mon, Nov 07, 2016 at 10:47:39AM -0800, Christoph Hellwig wrote:
> From: Christogh Hellwig <hch@lst.de>
> 
> No API change yet, just pass it down all the way from
> pci_alloc_irq_vectors to the core MSI code.
> 
> Signed-off-by: Christogh Hellwig <hch@lst.de>
> ---

Looks good,
Reviewed-by: Johannes Thumshirn <jthumshirn@suse.de>

-- 
Johannes Thumshirn                                          Storage
jthumshirn@suse.de                                +49 911 74053 689
SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg
GF: Felix Imendörffer, Jane Smithard, Graham Norton
HRB 21284 (AG Nürnberg)
Key fingerprint = EC38 9CAB C2C4 F25D 8600 D0D0 0393 969D 2D76 0850

^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: [PATCH 5/7] pci/msi: Provide pci_alloc_irq_vectors_affinity()
  2016-11-08  8:17   ` Hannes Reinecke
@ 2016-11-08  8:27     ` Johannes Thumshirn
  0 siblings, 0 replies; 33+ messages in thread
From: Johannes Thumshirn @ 2016-11-08  8:27 UTC (permalink / raw)
  To: Hannes Reinecke
  Cc: Christoph Hellwig, tglx, axboe, linux-block, linux-pci, linux-kernel

On Tue, Nov 08, 2016 at 09:17:33AM +0100, Hannes Reinecke wrote:
> On 11/07/2016 07:47 PM, Christoph Hellwig wrote:
> > From: Christogh Hellwig <hch@lst.de>
> > 
> > This is a variant of pci_alloc_irq_vectors() that allows passing a
> > struct irq_affinity to provide fine-grainded IRQ affinity control.
> > For now this means being able to exclude vectors at the beginning or
> > end of the MSI vector space, but it could also be used for any other
> > quirks needed in the future (e.g. more vectors than CPUs, or exluding
> > CPUs from the spreading).
> > 
> > Signed-off-by: Christogh Hellwig <hch@lst.de>
> > ---

Looks good,
Reviewed-by: Johannes Thumshirn <jthumshirn@suse.de>

-- 
Johannes Thumshirn                                          Storage
jthumshirn@suse.de                                +49 911 74053 689
SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg
GF: Felix Imendörffer, Jane Smithard, Graham Norton
HRB 21284 (AG Nürnberg)
Key fingerprint = EC38 9CAB C2C4 F25D 8600 D0D0 0393 969D 2D76 0850

^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: [PATCH 6/7] pci: Remove the irq_affinity mask from struct pci_dev
  2016-11-07 18:47 ` [PATCH 6/7] pci: Remove the irq_affinity mask from struct pci_dev Christoph Hellwig
  2016-11-08  8:17   ` Hannes Reinecke
@ 2016-11-08  8:27   ` Johannes Thumshirn
  2016-11-08 20:59   ` Bjorn Helgaas
  2 siblings, 0 replies; 33+ messages in thread
From: Johannes Thumshirn @ 2016-11-08  8:27 UTC (permalink / raw)
  To: Christoph Hellwig; +Cc: tglx, axboe, linux-block, linux-pci, linux-kernel

On Mon, Nov 07, 2016 at 10:47:41AM -0800, Christoph Hellwig wrote:
> This has never been used, and now is totally unreferenced.  Nuke it.
> 
> Signed-off-by: Christoph Hellwig <hch@lst.de>
> ---

Looks good,
Reviewed-by: Johannes Thumshirn <jthumshirn@suse.de>

-- 
Johannes Thumshirn                                          Storage
jthumshirn@suse.de                                +49 911 74053 689
SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg
GF: Felix Imendörffer, Jane Smithard, Graham Norton
HRB 21284 (AG Nürnberg)
Key fingerprint = EC38 9CAB C2C4 F25D 8600 D0D0 0393 969D 2D76 0850

^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: [PATCH 7/7] blk-mq: add a first_vec argument to blk_mq_pci_map_queues
  2016-11-07 18:47 ` [PATCH 7/7] blk-mq: add a first_vec argument to blk_mq_pci_map_queues Christoph Hellwig
@ 2016-11-08  8:27   ` Johannes Thumshirn
  0 siblings, 0 replies; 33+ messages in thread
From: Johannes Thumshirn @ 2016-11-08  8:27 UTC (permalink / raw)
  To: Christoph Hellwig; +Cc: tglx, axboe, linux-block, linux-pci, linux-kernel

On Mon, Nov 07, 2016 at 10:47:42AM -0800, Christoph Hellwig wrote:
> This allows skipping the first N IRQ vectors in case they are used for
> control or admin interrupts.
> 
> Signed-off-by: Christoph Hellwig <hch@lst.de>
> Reviewed-by: Hannes Reinecke <hare@suse.com>

Looks good,
Reviewed-by: Johannes Thumshirn <jthumshirn@suse.de>

-- 
Johannes Thumshirn                                          Storage
jthumshirn@suse.de                                +49 911 74053 689
SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg
GF: Felix Imendörffer, Jane Smithard, Graham Norton
HRB 21284 (AG Nürnberg)
Key fingerprint = EC38 9CAB C2C4 F25D 8600 D0D0 0393 969D 2D76 0850

^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: [PATCH 2/7] genirq/affinity: Handle pre/post vectors in irq_calc_affinity_vectors()
  2016-11-08  8:14   ` Hannes Reinecke
@ 2016-11-08 14:40     ` Thomas Gleixner
  0 siblings, 0 replies; 33+ messages in thread
From: Thomas Gleixner @ 2016-11-08 14:40 UTC (permalink / raw)
  To: Hannes Reinecke
  Cc: Christoph Hellwig, axboe, linux-block, linux-pci, linux-kernel

On Tue, 8 Nov 2016, Hannes Reinecke wrote:
> Shouldn't you check for NULL affd here?

No. The introduction of the default affinity struct should happen in that
patch and it should be handed down instead of NULL. Ditto for the next
patch.

Thanks,

	tglx

^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: [PATCH 3/7] genirq/affinity: Handle pre/post vectors in irq_create_affinity_masks()
  2016-11-08  8:15   ` Hannes Reinecke
@ 2016-11-08 14:55     ` Christoph Hellwig
  2016-11-08 14:59       ` Hannes Reinecke
  0 siblings, 1 reply; 33+ messages in thread
From: Christoph Hellwig @ 2016-11-08 14:55 UTC (permalink / raw)
  To: Hannes Reinecke
  Cc: Christoph Hellwig, tglx, axboe, linux-block, linux-pci, linux-kernel

[please trim the f***king context in your replies, thanks..]

On Tue, Nov 08, 2016 at 09:15:27AM +0100, Hannes Reinecke wrote:
>> +irq_create_affinity_masks(int nvecs, const struct irq_affinity *affd)
>>  {
>> -	int n, nodes, vecs_per_node, cpus_per_vec, extra_vecs, curvec = 0;
>> +	int n, nodes, vecs_per_node, cpus_per_vec, extra_vecs, curvec;
>> +	int affv = nvecs - affd->pre_vectors - affd->post_vectors;
>>  	nodemask_t nodemsk = NODE_MASK_NONE;
>>  	struct cpumask *masks;
>>  	cpumask_var_t nmsk;
> Check for NULL affd?

We expect all callers to pass a valid one.

^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: [PATCH 3/7] genirq/affinity: Handle pre/post vectors in irq_create_affinity_masks()
  2016-11-08 14:55     ` Christoph Hellwig
@ 2016-11-08 14:59       ` Hannes Reinecke
  2016-11-08 15:00         ` Christoph Hellwig
  0 siblings, 1 reply; 33+ messages in thread
From: Hannes Reinecke @ 2016-11-08 14:59 UTC (permalink / raw)
  To: Christoph Hellwig; +Cc: tglx, axboe, linux-block, linux-pci, linux-kernel

On 11/08/2016 03:55 PM, Christoph Hellwig wrote:
> [please trim the f***king context in your replies, thanks..]
>
> On Tue, Nov 08, 2016 at 09:15:27AM +0100, Hannes Reinecke wrote:
>>> +irq_create_affinity_masks(int nvecs, const struct irq_affinity *affd)
>>>  {
>>> -	int n, nodes, vecs_per_node, cpus_per_vec, extra_vecs, curvec = 0;
>>> +	int n, nodes, vecs_per_node, cpus_per_vec, extra_vecs, curvec;
>>> +	int affv = nvecs - affd->pre_vectors - affd->post_vectors;
>>>  	nodemask_t nodemsk = NODE_MASK_NONE;
>>>  	struct cpumask *masks;
>>>  	cpumask_var_t nmsk;
>> Check for NULL affd?
>
> We expect all callers to pass a valid one.

Which you don't in this patch:

@@ -697,7 +697,7 @@ static int msix_setup_entries(struct pci_dev *dev, 
void __iomem *base,
      int ret, i;

      if (affinity) {
-        masks = irq_create_affinity_masks(dev->irq_affinity, nvec);
+        masks = irq_create_affinity_masks(nvec, NULL);
          if (!masks)
              pr_err("Unable to allocate affinity masks, ignoring\n");
      }

Cheers,

Hannes
-- 
Dr. Hannes Reinecke		   Teamlead Storage & Networking
hare@suse.de			               +49 911 74053 688
SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg
GF: F. Imendörffer, J. Smithard, J. Guild, D. Upmanyu, G. Norton
HRB 21284 (AG Nürnberg)

^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: [PATCH 3/7] genirq/affinity: Handle pre/post vectors in irq_create_affinity_masks()
  2016-11-08 14:59       ` Hannes Reinecke
@ 2016-11-08 15:00         ` Christoph Hellwig
  2016-11-08 16:27           ` Thomas Gleixner
  0 siblings, 1 reply; 33+ messages in thread
From: Christoph Hellwig @ 2016-11-08 15:00 UTC (permalink / raw)
  To: Hannes Reinecke
  Cc: Christoph Hellwig, tglx, axboe, linux-block, linux-pci, linux-kernel

On Tue, Nov 08, 2016 at 03:59:16PM +0100, Hannes Reinecke wrote:
>
> Which you don't in this patch:

True.  We will always in the end, but the split isn't right, we'll
need to pass the non-NULL argument starting in this patch.

^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: [PATCH 3/7] genirq/affinity: Handle pre/post vectors in irq_create_affinity_masks()
  2016-11-08 15:00         ` Christoph Hellwig
@ 2016-11-08 16:27           ` Thomas Gleixner
  2016-11-08 16:33             ` Christoph Hellwig
  0 siblings, 1 reply; 33+ messages in thread
From: Thomas Gleixner @ 2016-11-08 16:27 UTC (permalink / raw)
  To: Christoph Hellwig
  Cc: Hannes Reinecke, axboe, linux-block, linux-pci, linux-kernel

On Tue, 8 Nov 2016, Christoph Hellwig wrote:

> On Tue, Nov 08, 2016 at 03:59:16PM +0100, Hannes Reinecke wrote:
> >
> > Which you don't in this patch:
> 
> True.  We will always in the end, but the split isn't right, we'll
> need to pass the non-NULL argument starting in this patch.

No, in the previous one ....
 

^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: [PATCH 3/7] genirq/affinity: Handle pre/post vectors in irq_create_affinity_masks()
  2016-11-08 16:27           ` Thomas Gleixner
@ 2016-11-08 16:33             ` Christoph Hellwig
  0 siblings, 0 replies; 33+ messages in thread
From: Christoph Hellwig @ 2016-11-08 16:33 UTC (permalink / raw)
  To: Thomas Gleixner
  Cc: Christoph Hellwig, Hannes Reinecke, axboe, linux-block,
	linux-pci, linux-kernel

On Tue, Nov 08, 2016 at 05:27:52PM +0100, Thomas Gleixner wrote:
> On Tue, 8 Nov 2016, Christoph Hellwig wrote:
> 
> > On Tue, Nov 08, 2016 at 03:59:16PM +0100, Hannes Reinecke wrote:
> > >
> > > Which you don't in this patch:
> > 
> > True.  We will always in the end, but the split isn't right, we'll
> > need to pass the non-NULL argument starting in this patch.
> 
> No, in the previous one ....

That too, but it's for different functions.  I have a rebased version
that does this, I'll run it through quick testing and will repost
it later today.

^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: [PATCH 6/7] pci: Remove the irq_affinity mask from struct pci_dev
  2016-11-07 18:47 ` [PATCH 6/7] pci: Remove the irq_affinity mask from struct pci_dev Christoph Hellwig
  2016-11-08  8:17   ` Hannes Reinecke
  2016-11-08  8:27   ` Johannes Thumshirn
@ 2016-11-08 20:59   ` Bjorn Helgaas
  2 siblings, 0 replies; 33+ messages in thread
From: Bjorn Helgaas @ 2016-11-08 20:59 UTC (permalink / raw)
  To: Christoph Hellwig; +Cc: tglx, axboe, linux-block, linux-pci, linux-kernel

s/pci/PCI/ (in subject)

On Mon, Nov 07, 2016 at 10:47:41AM -0800, Christoph Hellwig wrote:
> This has never been used, and now is totally unreferenced.  Nuke it.
> 
> Signed-off-by: Christoph Hellwig <hch@lst.de>

Acked-by: Bjorn Helgaas <bhelgaas@google.com>

> ---
>  include/linux/pci.h | 1 -
>  1 file changed, 1 deletion(-)
> 
> diff --git a/include/linux/pci.h b/include/linux/pci.h
> index 7090f5f..f2ba6ac 100644
> --- a/include/linux/pci.h
> +++ b/include/linux/pci.h
> @@ -333,7 +333,6 @@ struct pci_dev {
>  	 * directly, use the values stored here. They might be different!
>  	 */
>  	unsigned int	irq;
> -	struct cpumask	*irq_affinity;
>  	struct resource resource[DEVICE_COUNT_RESOURCE]; /* I/O and memory regions + expansion ROMs */
>  
>  	bool match_driver;		/* Skip attaching driver */
> -- 
> 2.1.4
> 
> --
> To unsubscribe from this list: send the line "unsubscribe linux-pci" in
> the body of a message to majordomo@vger.kernel.org
> More majordomo info at  http://vger.kernel.org/majordomo-info.html

^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: [PATCH 5/7] pci/msi: Provide pci_alloc_irq_vectors_affinity()
  2016-11-07 18:47 ` [PATCH 5/7] pci/msi: Provide pci_alloc_irq_vectors_affinity() Christoph Hellwig
  2016-11-08  8:17   ` Hannes Reinecke
@ 2016-11-08 21:17   ` Bjorn Helgaas
  2016-11-08 21:20     ` Christoph Hellwig
  1 sibling, 1 reply; 33+ messages in thread
From: Bjorn Helgaas @ 2016-11-08 21:17 UTC (permalink / raw)
  To: Christoph Hellwig; +Cc: tglx, axboe, linux-block, linux-pci, linux-kernel

On Mon, Nov 07, 2016 at 10:47:40AM -0800, Christoph Hellwig wrote:
> From: Christogh Hellwig <hch@lst.de>

s/Christogh/Christoph/ (also below)

> This is a variant of pci_alloc_irq_vectors() that allows passing a
> struct irq_affinity to provide fine-grainded IRQ affinity control.

s/grainded/grained/

> For now this means being able to exclude vectors at the beginning or
> end of the MSI vector space, but it could also be used for any other
> quirks needed in the future (e.g. more vectors than CPUs, or exluding

s/exluding/excluding/

> CPUs from the spreading).
> 
> Signed-off-by: Christogh Hellwig <hch@lst.de>

Acked-by: Bjorn Helgaas <bhelgaas@google.com>

> +int pci_alloc_irq_vectors_affinity(struct pci_dev *dev, unsigned int min_vecs,
> +				   unsigned int max_vecs, unsigned int flags,
> +				   const struct irq_affinity *affd)

> +int pci_alloc_irq_vectors_affinity(struct pci_dev *dev, unsigned int min_vecs,
> +				   unsigned int max_vecs, unsigned int flags,
> +				   const struct irq_affinity *affd);

> +static inline int
> +pci_alloc_irq_vectors_affinity(struct pci_dev *dev, unsigned int min_vecs,
> +			       unsigned int max_vecs, unsigned int flags,
> +			       const struct irq_affinity *aff_desc)

Maybe use the same formal parameter name as in the definition and
declaration above?

^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: [PATCH 4/7] pci/msi: Propagate irq affinity description through the MSI code
  2016-11-07 18:47 ` [PATCH 4/7] pci/msi: Propagate irq affinity description through the MSI code Christoph Hellwig
  2016-11-08  8:16   ` Hannes Reinecke
  2016-11-08  8:24   ` Johannes Thumshirn
@ 2016-11-08 21:18   ` Bjorn Helgaas
  2 siblings, 0 replies; 33+ messages in thread
From: Bjorn Helgaas @ 2016-11-08 21:18 UTC (permalink / raw)
  To: Christoph Hellwig; +Cc: tglx, axboe, linux-block, linux-pci, linux-kernel

s|pci/msi|PCI/MSI| (subject)
s/irq/IRQ/ (subject)

On Mon, Nov 07, 2016 at 10:47:39AM -0800, Christoph Hellwig wrote:
> From: Christogh Hellwig <hch@lst.de>
> 
> No API change yet, just pass it down all the way from
> pci_alloc_irq_vectors to the core MSI code.

pci_alloc_irq_vectors()

> Signed-off-by: Christogh Hellwig <hch@lst.de>

Acked-by: Bjorn Helgaas <bhelgaas@google.com>

> ---
>  drivers/pci/msi.c | 62 +++++++++++++++++++++++++++++--------------------------
>  1 file changed, 33 insertions(+), 29 deletions(-)
> 
> diff --git a/drivers/pci/msi.c b/drivers/pci/msi.c
> index 1761b8a..512f388 100644
> --- a/drivers/pci/msi.c
> +++ b/drivers/pci/msi.c
> @@ -551,14 +551,14 @@ static int populate_msi_sysfs(struct pci_dev *pdev)
>  }
>  
>  static struct msi_desc *
> -msi_setup_entry(struct pci_dev *dev, int nvec, bool affinity)
> +msi_setup_entry(struct pci_dev *dev, int nvec, const struct irq_affinity *affd)
>  {
>  	struct cpumask *masks = NULL;
>  	struct msi_desc *entry;
>  	u16 control;
>  
> -	if (affinity) {
> -		masks = irq_create_affinity_masks(nvec, NULL);
> +	if (affd) {
> +		masks = irq_create_affinity_masks(nvec, affd);
>  		if (!masks)
>  			pr_err("Unable to allocate affinity masks, ignoring\n");
>  	}
> @@ -618,7 +618,8 @@ static int msi_verify_entries(struct pci_dev *dev)
>   * an error, and a positive return value indicates the number of interrupts
>   * which could have been allocated.
>   */
> -static int msi_capability_init(struct pci_dev *dev, int nvec, bool affinity)
> +static int msi_capability_init(struct pci_dev *dev, int nvec,
> +			       const struct irq_affinity *affd)
>  {
>  	struct msi_desc *entry;
>  	int ret;
> @@ -626,7 +627,7 @@ static int msi_capability_init(struct pci_dev *dev, int nvec, bool affinity)
>  
>  	pci_msi_set_enable(dev, 0);	/* Disable MSI during set up */
>  
> -	entry = msi_setup_entry(dev, nvec, affinity);
> +	entry = msi_setup_entry(dev, nvec, affd);
>  	if (!entry)
>  		return -ENOMEM;
>  
> @@ -690,14 +691,14 @@ static void __iomem *msix_map_region(struct pci_dev *dev, unsigned nr_entries)
>  
>  static int msix_setup_entries(struct pci_dev *dev, void __iomem *base,
>  			      struct msix_entry *entries, int nvec,
> -			      bool affinity)
> +			      const struct irq_affinity *affd)
>  {
>  	struct cpumask *curmsk, *masks = NULL;
>  	struct msi_desc *entry;
>  	int ret, i;
>  
> -	if (affinity) {
> -		masks = irq_create_affinity_masks(nvec, NULL);
> +	if (affd) {
> +		masks = irq_create_affinity_masks(nvec, affd);
>  		if (!masks)
>  			pr_err("Unable to allocate affinity masks, ignoring\n");
>  	}
> @@ -753,14 +754,14 @@ static void msix_program_entries(struct pci_dev *dev,
>   * @dev: pointer to the pci_dev data structure of MSI-X device function
>   * @entries: pointer to an array of struct msix_entry entries
>   * @nvec: number of @entries
> - * @affinity: flag to indicate cpu irq affinity mask should be set
> + * @affd: Optional pointer to enable automatic affinity assignement
>   *
>   * Setup the MSI-X capability structure of device function with a
>   * single MSI-X irq. A return of zero indicates the successful setup of
>   * requested MSI-X entries with allocated irqs or non-zero for otherwise.
>   **/
>  static int msix_capability_init(struct pci_dev *dev, struct msix_entry *entries,
> -				int nvec, bool affinity)
> +				int nvec, const struct irq_affinity *affd)
>  {
>  	int ret;
>  	u16 control;
> @@ -775,7 +776,7 @@ static int msix_capability_init(struct pci_dev *dev, struct msix_entry *entries,
>  	if (!base)
>  		return -ENOMEM;
>  
> -	ret = msix_setup_entries(dev, base, entries, nvec, affinity);
> +	ret = msix_setup_entries(dev, base, entries, nvec, affd);
>  	if (ret)
>  		return ret;
>  
> @@ -956,7 +957,7 @@ int pci_msix_vec_count(struct pci_dev *dev)
>  EXPORT_SYMBOL(pci_msix_vec_count);
>  
>  static int __pci_enable_msix(struct pci_dev *dev, struct msix_entry *entries,
> -			     int nvec, bool affinity)
> +			     int nvec, const struct irq_affinity *affd)
>  {
>  	int nr_entries;
>  	int i, j;
> @@ -988,7 +989,7 @@ static int __pci_enable_msix(struct pci_dev *dev, struct msix_entry *entries,
>  		dev_info(&dev->dev, "can't enable MSI-X (MSI IRQ already assigned)\n");
>  		return -EINVAL;
>  	}
> -	return msix_capability_init(dev, entries, nvec, affinity);
> +	return msix_capability_init(dev, entries, nvec, affd);
>  }
>  
>  /**
> @@ -1008,7 +1009,7 @@ static int __pci_enable_msix(struct pci_dev *dev, struct msix_entry *entries,
>   **/
>  int pci_enable_msix(struct pci_dev *dev, struct msix_entry *entries, int nvec)
>  {
> -	return __pci_enable_msix(dev, entries, nvec, false);
> +	return __pci_enable_msix(dev, entries, nvec, NULL);
>  }
>  EXPORT_SYMBOL(pci_enable_msix);
>  
> @@ -1059,9 +1060,8 @@ int pci_msi_enabled(void)
>  EXPORT_SYMBOL(pci_msi_enabled);
>  
>  static int __pci_enable_msi_range(struct pci_dev *dev, int minvec, int maxvec,
> -		unsigned int flags)
> +				  const struct irq_affinity *affd)
>  {
> -	bool affinity = flags & PCI_IRQ_AFFINITY;
>  	int nvec;
>  	int rc;
>  
> @@ -1090,13 +1090,13 @@ static int __pci_enable_msi_range(struct pci_dev *dev, int minvec, int maxvec,
>  		nvec = maxvec;
>  
>  	for (;;) {
> -		if (affinity) {
> -			nvec = irq_calc_affinity_vectors(nvec, NULL);
> +		if (affd) {
> +			nvec = irq_calc_affinity_vectors(nvec, affd);
>  			if (nvec < minvec)
>  				return -ENOSPC;
>  		}
>  
> -		rc = msi_capability_init(dev, nvec, affinity);
> +		rc = msi_capability_init(dev, nvec, affd);
>  		if (rc == 0)
>  			return nvec;
>  
> @@ -1123,28 +1123,27 @@ static int __pci_enable_msi_range(struct pci_dev *dev, int minvec, int maxvec,
>   **/
>  int pci_enable_msi_range(struct pci_dev *dev, int minvec, int maxvec)
>  {
> -	return __pci_enable_msi_range(dev, minvec, maxvec, 0);
> +	return __pci_enable_msi_range(dev, minvec, maxvec, NULL);
>  }
>  EXPORT_SYMBOL(pci_enable_msi_range);
>  
>  static int __pci_enable_msix_range(struct pci_dev *dev,
> -		struct msix_entry *entries, int minvec, int maxvec,
> -		unsigned int flags)
> +				   struct msix_entry *entries, int minvec,
> +				   int maxvec, const struct irq_affinity *affd)
>  {
> -	bool affinity = flags & PCI_IRQ_AFFINITY;
>  	int rc, nvec = maxvec;
>  
>  	if (maxvec < minvec)
>  		return -ERANGE;
>  
>  	for (;;) {
> -		if (affinity) {
> -			nvec = irq_calc_affinity_vectors(nvec, NULL);
> +		if (affd) {
> +			nvec = irq_calc_affinity_vectors(nvec, affd);
>  			if (nvec < minvec)
>  				return -ENOSPC;
>  		}
>  
> -		rc = __pci_enable_msix(dev, entries, nvec, affinity);
> +		rc = __pci_enable_msix(dev, entries, nvec, affd);
>  		if (rc == 0)
>  			return nvec;
>  
> @@ -1175,7 +1174,7 @@ static int __pci_enable_msix_range(struct pci_dev *dev,
>  int pci_enable_msix_range(struct pci_dev *dev, struct msix_entry *entries,
>  		int minvec, int maxvec)
>  {
> -	return __pci_enable_msix_range(dev, entries, minvec, maxvec, 0);
> +	return __pci_enable_msix_range(dev, entries, minvec, maxvec, NULL);
>  }
>  EXPORT_SYMBOL(pci_enable_msix_range);
>  
> @@ -1199,17 +1198,22 @@ EXPORT_SYMBOL(pci_enable_msix_range);
>  int pci_alloc_irq_vectors(struct pci_dev *dev, unsigned int min_vecs,
>  		unsigned int max_vecs, unsigned int flags)
>  {
> +	static const struct irq_affinity msi_default_affd;
> +	const struct irq_affinity *affd = NULL;
>  	int vecs = -ENOSPC;
>  
> +	if (flags & PCI_IRQ_AFFINITY)
> +		affd = &msi_default_affd;
> +
>  	if (flags & PCI_IRQ_MSIX) {
>  		vecs = __pci_enable_msix_range(dev, NULL, min_vecs, max_vecs,
> -				flags);
> +				affd);
>  		if (vecs > 0)
>  			return vecs;
>  	}
>  
>  	if (flags & PCI_IRQ_MSI) {
> -		vecs = __pci_enable_msi_range(dev, min_vecs, max_vecs, flags);
> +		vecs = __pci_enable_msi_range(dev, min_vecs, max_vecs, affd);
>  		if (vecs > 0)
>  			return vecs;
>  	}
> -- 
> 2.1.4
> 
> --
> To unsubscribe from this list: send the line "unsubscribe linux-pci" in
> the body of a message to majordomo@vger.kernel.org
> More majordomo info at  http://vger.kernel.org/majordomo-info.html

^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: [PATCH 5/7] pci/msi: Provide pci_alloc_irq_vectors_affinity()
  2016-11-08 21:17   ` Bjorn Helgaas
@ 2016-11-08 21:20     ` Christoph Hellwig
  0 siblings, 0 replies; 33+ messages in thread
From: Christoph Hellwig @ 2016-11-08 21:20 UTC (permalink / raw)
  To: Bjorn Helgaas
  Cc: Christoph Hellwig, tglx, axboe, linux-block, linux-pci, linux-kernel

On Tue, Nov 08, 2016 at 03:17:32PM -0600, Bjorn Helgaas wrote:
> On Mon, Nov 07, 2016 at 10:47:40AM -0800, Christoph Hellwig wrote:
> > From: Christogh Hellwig <hch@lst.de>
> 
> s/Christogh/Christoph/ (also below)

Haha, so much for taking Thomas' split of my patches and not proof-reading
them..

^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: [PATCH 3/7] genirq/affinity: Handle pre/post vectors in irq_create_affinity_masks()
  2016-11-07 18:47 ` [PATCH 3/7] genirq/affinity: Handle pre/post vectors in irq_create_affinity_masks() Christoph Hellwig
  2016-11-08  8:15   ` Hannes Reinecke
@ 2016-11-08 21:20   ` Bjorn Helgaas
  1 sibling, 0 replies; 33+ messages in thread
From: Bjorn Helgaas @ 2016-11-08 21:20 UTC (permalink / raw)
  To: Christoph Hellwig; +Cc: tglx, axboe, linux-block, linux-pci, linux-kernel

On Mon, Nov 07, 2016 at 10:47:38AM -0800, Christoph Hellwig wrote:
> From: Christogh Hellwig <hch@lst.de>
> 
> Only calculate the affinity for the main I/O vectors, and skip the
> pre or post vectors specified by struct irq_affinity.
> 
> Also remove the irq_affinity cpumask argument that has never been used.
> If we ever need it in the future we can pass it through struct
> irq_affinity.
> 
> Signed-off-by: Christogh Hellwig <hch@lst.de>

s/Christogh/Christoph/ (also above, and maybe other patches too?)

Acked-by: Bjorn Helgaas <bhelgaas@google.com>

> ---
>  drivers/pci/msi.c         |  4 ++--
>  include/linux/interrupt.h |  4 ++--
>  kernel/irq/affinity.c     | 46 +++++++++++++++++++++++++---------------------
>  3 files changed, 29 insertions(+), 25 deletions(-)
> 
> diff --git a/drivers/pci/msi.c b/drivers/pci/msi.c
> index c58d3c2..1761b8a 100644
> --- a/drivers/pci/msi.c
> +++ b/drivers/pci/msi.c
> @@ -558,7 +558,7 @@ msi_setup_entry(struct pci_dev *dev, int nvec, bool affinity)
>  	u16 control;
>  
>  	if (affinity) {
> -		masks = irq_create_affinity_masks(dev->irq_affinity, nvec);
> +		masks = irq_create_affinity_masks(nvec, NULL);
>  		if (!masks)
>  			pr_err("Unable to allocate affinity masks, ignoring\n");
>  	}
> @@ -697,7 +697,7 @@ static int msix_setup_entries(struct pci_dev *dev, void __iomem *base,
>  	int ret, i;
>  
>  	if (affinity) {
> -		masks = irq_create_affinity_masks(dev->irq_affinity, nvec);
> +		masks = irq_create_affinity_masks(nvec, NULL);
>  		if (!masks)
>  			pr_err("Unable to allocate affinity masks, ignoring\n");

Not caused by this patch, but can we use dev_err() here and above?

^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: [PATCH 1/7] genirq/affinity: Introduce struct irq_affinity
  2016-11-07 18:47 ` [PATCH 1/7] genirq/affinity: Introduce struct irq_affinity Christoph Hellwig
  2016-11-08  8:11   ` Hannes Reinecke
  2016-11-08  8:19   ` Johannes Thumshirn
@ 2016-11-08 21:25   ` Bjorn Helgaas
  2016-11-08 22:30     ` Christoph Hellwig
  2 siblings, 1 reply; 33+ messages in thread
From: Bjorn Helgaas @ 2016-11-08 21:25 UTC (permalink / raw)
  To: Christoph Hellwig; +Cc: tglx, axboe, linux-block, linux-pci, linux-kernel

On Mon, Nov 07, 2016 at 10:47:36AM -0800, Christoph Hellwig wrote:
> From: Christogh Hellwig <hch@lst.de>
> 
> Some drivers (various network and RDMA adapter for example) have a MSI-X
> vector layout where most of the vectors are used for I/O queues and should
> have CPU affinity assigned to them, but some (usually 1 but sometimes more)
> at the beginning or end are used for low-performance admin or configuration
> work and should not have any explicit affinity assigned to them.
> 
> This adds a new irq_affinity structure, which will be passed through a
> variant of pci_irq_alloc_vectors that allows to specify these
> requirements (and is extensible to any future quirks in that area) so that
> the core IRQ affinity algorithm can take this quirks into account.
> 
> Signed-off-by: Christogh Hellwig <hch@lst.de>

s/Christogh/Christoph/ (also above)

What tree would you prefer?  I vote for the IRQ tree since that seems
to be where the interesting parts are, and I think I acked all the PCI
bits.

> + * struct irq_affinity - Description for auto irq affinity assignements
> + * @pre_vectors:	Reserved vectors at the beginning of the MSIX
> + *			vector space
> + * @post_vectors:	Reserved vectors at the end of the MSIX
> + *			vector space

Maybe include something more informative than just "reserved", e.g.,
"Don't apply affinity to @pre_vectors at beginning of MSI-X vector
space" or "Vectors at beginning of MSI-X vector space that are exempt
from affinity"?

> + */
> +struct irq_affinity {
> +	int	pre_vectors;
> +	int	post_vectors;
> +};
> +
>  #if defined(CONFIG_SMP)
>  
>  extern cpumask_var_t irq_default_affinity;
> -- 
> 2.1.4
> 
> --
> To unsubscribe from this list: send the line "unsubscribe linux-pci" in
> the body of a message to majordomo@vger.kernel.org
> More majordomo info at  http://vger.kernel.org/majordomo-info.html

^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: [PATCH 1/7] genirq/affinity: Introduce struct irq_affinity
  2016-11-08 21:25   ` Bjorn Helgaas
@ 2016-11-08 22:30     ` Christoph Hellwig
  0 siblings, 0 replies; 33+ messages in thread
From: Christoph Hellwig @ 2016-11-08 22:30 UTC (permalink / raw)
  To: Bjorn Helgaas
  Cc: Christoph Hellwig, tglx, axboe, linux-block, linux-pci, linux-kernel

On Tue, Nov 08, 2016 at 03:25:27PM -0600, Bjorn Helgaas wrote:
> What tree would you prefer?  I vote for the IRQ tree since that seems
> to be where the interesting parts are, and I think I acked all the PCI
> bits.

Yes, that would be my preference to.

> 
> > + * struct irq_affinity - Description for auto irq affinity assignements
> > + * @pre_vectors:	Reserved vectors at the beginning of the MSIX
> > + *			vector space
> > + * @post_vectors:	Reserved vectors at the end of the MSIX
> > + *			vector space
> 
> Maybe include something more informative than just "reserved", e.g.,
> "Don't apply affinity to @pre_vectors at beginning of MSI-X vector
> space" or "Vectors at beginning of MSI-X vector space that are exempt
> from affinity"?

Sure.

^ permalink raw reply	[flat|nested] 33+ messages in thread

* [PATCH 7/7] blk-mq: add a first_vec argument to blk_mq_pci_map_queues
  2016-11-09  1:15 support for partial irq affinity assignment V3 Christoph Hellwig
@ 2016-11-09  1:15 ` Christoph Hellwig
  0 siblings, 0 replies; 33+ messages in thread
From: Christoph Hellwig @ 2016-11-09  1:15 UTC (permalink / raw)
  To: tglx; +Cc: axboe, linux-block, linux-pci, linux-kernel

This allows skipping the first N IRQ vectors in case they are used for
control or admin interrupts.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Hannes Reinecke <hare@suse.com>
Reviewed-by: Johannes Thumshirn <jthumshirn@suse.de>
---
 block/blk-mq-pci.c         | 6 ++++--
 drivers/nvme/host/pci.c    | 2 +-
 include/linux/blk-mq-pci.h | 3 ++-
 3 files changed, 7 insertions(+), 4 deletions(-)

diff --git a/block/blk-mq-pci.c b/block/blk-mq-pci.c
index 966c216..03ff7c4 100644
--- a/block/blk-mq-pci.c
+++ b/block/blk-mq-pci.c
@@ -21,6 +21,7 @@
  * blk_mq_pci_map_queues - provide a default queue mapping for PCI device
  * @set:	tagset to provide the mapping for
  * @pdev:	PCI device associated with @set.
+ * @first_vec:	first interrupt vectors to use for queues (usually 0)
  *
  * This function assumes the PCI device @pdev has at least as many available
  * interrupt vetors as @set has queues.  It will then queuery the vector
@@ -28,12 +29,13 @@
  * that maps a queue to the CPUs that have irq affinity for the corresponding
  * vector.
  */
-int blk_mq_pci_map_queues(struct blk_mq_tag_set *set, struct pci_dev *pdev)
+int blk_mq_pci_map_queues(struct blk_mq_tag_set *set, struct pci_dev *pdev,
+		int first_vec)
 {
 	const struct cpumask *mask;
 	unsigned int queue, cpu;
 
-	for (queue = 0; queue < set->nr_hw_queues; queue++) {
+	for (queue = first_vec; queue < set->nr_hw_queues; queue++) {
 		mask = pci_irq_get_affinity(pdev, queue);
 		if (!mask)
 			return -EINVAL;
diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c
index 0248d0e..6e6b917 100644
--- a/drivers/nvme/host/pci.c
+++ b/drivers/nvme/host/pci.c
@@ -273,7 +273,7 @@ static int nvme_pci_map_queues(struct blk_mq_tag_set *set)
 {
 	struct nvme_dev *dev = set->driver_data;
 
-	return blk_mq_pci_map_queues(set, to_pci_dev(dev->dev));
+	return blk_mq_pci_map_queues(set, to_pci_dev(dev->dev), 0);
 }
 
 /**
diff --git a/include/linux/blk-mq-pci.h b/include/linux/blk-mq-pci.h
index 6ab5952..fde26d2 100644
--- a/include/linux/blk-mq-pci.h
+++ b/include/linux/blk-mq-pci.h
@@ -4,6 +4,7 @@
 struct blk_mq_tag_set;
 struct pci_dev;
 
-int blk_mq_pci_map_queues(struct blk_mq_tag_set *set, struct pci_dev *pdev);
+int blk_mq_pci_map_queues(struct blk_mq_tag_set *set, struct pci_dev *pdev,
+		int first_vec);
 
 #endif /* _LINUX_BLK_MQ_PCI_H */
-- 
2.1.4

^ permalink raw reply related	[flat|nested] 33+ messages in thread

end of thread, other threads:[~2016-11-09  1:16 UTC | newest]

Thread overview: 33+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2016-11-07 18:47 support for partial irq affinity assignment Christoph Hellwig
2016-11-07 18:47 ` [PATCH 1/7] genirq/affinity: Introduce struct irq_affinity Christoph Hellwig
2016-11-08  8:11   ` Hannes Reinecke
2016-11-08  8:19   ` Johannes Thumshirn
2016-11-08 21:25   ` Bjorn Helgaas
2016-11-08 22:30     ` Christoph Hellwig
2016-11-07 18:47 ` [PATCH 2/7] genirq/affinity: Handle pre/post vectors in irq_calc_affinity_vectors() Christoph Hellwig
2016-11-08  8:14   ` Hannes Reinecke
2016-11-08 14:40     ` Thomas Gleixner
2016-11-07 18:47 ` [PATCH 3/7] genirq/affinity: Handle pre/post vectors in irq_create_affinity_masks() Christoph Hellwig
2016-11-08  8:15   ` Hannes Reinecke
2016-11-08 14:55     ` Christoph Hellwig
2016-11-08 14:59       ` Hannes Reinecke
2016-11-08 15:00         ` Christoph Hellwig
2016-11-08 16:27           ` Thomas Gleixner
2016-11-08 16:33             ` Christoph Hellwig
2016-11-08 21:20   ` Bjorn Helgaas
2016-11-07 18:47 ` [PATCH 4/7] pci/msi: Propagate irq affinity description through the MSI code Christoph Hellwig
2016-11-08  8:16   ` Hannes Reinecke
2016-11-08  8:24   ` Johannes Thumshirn
2016-11-08 21:18   ` Bjorn Helgaas
2016-11-07 18:47 ` [PATCH 5/7] pci/msi: Provide pci_alloc_irq_vectors_affinity() Christoph Hellwig
2016-11-08  8:17   ` Hannes Reinecke
2016-11-08  8:27     ` Johannes Thumshirn
2016-11-08 21:17   ` Bjorn Helgaas
2016-11-08 21:20     ` Christoph Hellwig
2016-11-07 18:47 ` [PATCH 6/7] pci: Remove the irq_affinity mask from struct pci_dev Christoph Hellwig
2016-11-08  8:17   ` Hannes Reinecke
2016-11-08  8:27   ` Johannes Thumshirn
2016-11-08 20:59   ` Bjorn Helgaas
2016-11-07 18:47 ` [PATCH 7/7] blk-mq: add a first_vec argument to blk_mq_pci_map_queues Christoph Hellwig
2016-11-08  8:27   ` Johannes Thumshirn
2016-11-09  1:15 support for partial irq affinity assignment V3 Christoph Hellwig
2016-11-09  1:15 ` [PATCH 7/7] blk-mq: add a first_vec argument to blk_mq_pci_map_queues Christoph Hellwig

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).