* spread MSI(-X) vectors to all possible CPUs V2
From: Christoph Hellwig @ 2017-05-19  8:57 UTC
  To: Thomas Gleixner, Jens Axboe
  Cc: Keith Busch, linux-nvme, linux-block, linux-kernel

Hi all,

this series changes our automatic MSI-X vector assignment so that it
takes all present CPUs into account instead of only the online ones.
This allows us to better deal with CPU hotplug events, which can happen
frequently due to power management, for example.
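
To illustrate the consumer side (a minimal sketch; pdev and nr_queues
are placeholders, and pci_alloc_irq_vectors_affinity() is the existing
interface whose affinity masks this series changes):

	/* one pre-vector for an admin queue, the rest spread over CPUs */
	struct irq_affinity affd = { .pre_vectors = 1 };
	int nvecs;

	nvecs = pci_alloc_irq_vectors_affinity(pdev, 1, nr_queues + 1,
			PCI_IRQ_ALL_TYPES | PCI_IRQ_AFFINITY, &affd);
	if (nvecs < 0)
		return nvecs;

With this series those masks cover all present CPUs instead of only the
currently online ones.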

Changes since V1:
 - rebase to current Linus' tree
 - add irq_lock_sparse calls
 - move memory allocations outside of (raw) spinlocks
 - remove the irq_force_complete_move call
 - factor some common code into helpers
 - indentation fixups

* [PATCH 1/7] genirq: allow assigning affinity to present but not online CPUs
From: Christoph Hellwig @ 2017-05-19  8:57 UTC
  To: Thomas Gleixner, Jens Axboe
  Cc: Keith Busch, linux-nvme, linux-block, linux-kernel

This will allow us to spread MSI/MSI-X affinity over all present CPUs and
thus better deal with systems where CPUs are taken online and offline all
the time.
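
For reference (a simplified sketch, not part of the patch), the three
masks nest as online <= present <= possible:

	/*
	 * cpu_online_mask:   CPUs currently scheduling work
	 * cpu_present_mask:  CPUs physically plugged in; may be offline
	 * cpu_possible_mask: CPUs that could ever be plugged in
	 */
	int cpu;

	for_each_present_cpu(cpu)
		pr_debug("cpu %d: online=%d\n", cpu, cpu_online(cpu));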

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 kernel/irq/manage.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/kernel/irq/manage.c b/kernel/irq/manage.c
index 070be980c37a..5c25d4a5dc46 100644
--- a/kernel/irq/manage.c
+++ b/kernel/irq/manage.c
@@ -361,17 +361,17 @@ static int setup_affinity(struct irq_desc *desc, struct cpumask *mask)
 	if (irqd_affinity_is_managed(&desc->irq_data) ||
 	    irqd_has_set(&desc->irq_data, IRQD_AFFINITY_SET)) {
 		if (cpumask_intersects(desc->irq_common_data.affinity,
-				       cpu_online_mask))
+				       cpu_present_mask))
 			set = desc->irq_common_data.affinity;
 		else
 			irqd_clear(&desc->irq_data, IRQD_AFFINITY_SET);
 	}
 
-	cpumask_and(mask, cpu_online_mask, set);
+	cpumask_and(mask, cpu_present_mask, set);
 	if (node != NUMA_NO_NODE) {
 		const struct cpumask *nodemask = cpumask_of_node(node);
 
-		/* make sure at least one of the cpus in nodemask is online */
+		/* make sure at least one of the cpus in nodemask is present */
 		if (cpumask_intersects(mask, nodemask))
 			cpumask_and(mask, mask, nodemask);
 	}
-- 
2.11.0

* [PATCH 2/7] genirq/affinity: assign vectors to all present CPUs
From: Christoph Hellwig @ 2017-05-19  8:57 UTC
  To: Thomas Gleixner, Jens Axboe
  Cc: Keith Busch, linux-nvme, linux-block, linux-kernel

Currently we only assign spread vectors to online CPUs, which ties the
IRQ mapping to the CPUs that happen to be online at allocation time and
doesn't deal nicely with the fact that CPUs could come and go rapidly
due to e.g. power management.

Instead assign vectors to all present CPUs to avoid this churn.
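
A worked toy example of the per-node spreading arithmetic (variable
names as in irq_create_affinity_masks(), ignoring the rounding fixups
in the real code):

	/*
	 * Toy example: 2 nodes with 8 present CPUs each and affv = 4
	 * spread vectors:
	 *
	 *   vecs_per_node = affv / nodes          = 4 / 2 = 2
	 *   ncpus         = cpumask_weight(nmsk)  = 8
	 *   cpus_per_vec  = ncpus / vecs_per_node = 8 / 2 = 4
	 *
	 * so each vector ends up covering 4 CPUs on one node.
	 */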

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 kernel/irq/affinity.c | 43 ++++++++++++++++++++++++++++---------------
 1 file changed, 28 insertions(+), 15 deletions(-)

diff --git a/kernel/irq/affinity.c b/kernel/irq/affinity.c
index e2d356dd7581..414b0be64bfc 100644
--- a/kernel/irq/affinity.c
+++ b/kernel/irq/affinity.c
@@ -4,6 +4,8 @@
 #include <linux/slab.h>
 #include <linux/cpu.h>
 
+static cpumask_var_t node_to_present_cpumask[MAX_NUMNODES] __read_mostly;
+
 static void irq_spread_init_one(struct cpumask *irqmsk, struct cpumask *nmsk,
 				int cpus_per_vec)
 {
@@ -40,8 +42,8 @@ static int get_nodes_in_cpumask(const struct cpumask *mask, nodemask_t *nodemsk)
 	int n, nodes = 0;
 
 	/* Calculate the number of nodes in the supplied affinity mask */
-	for_each_online_node(n) {
-		if (cpumask_intersects(mask, cpumask_of_node(n))) {
+	for_each_node(n) {
+		if (cpumask_intersects(mask, node_to_present_cpumask[n])) {
 			node_set(n, *nodemsk);
 			nodes++;
 		}
@@ -77,9 +79,7 @@ irq_create_affinity_masks(int nvecs, const struct irq_affinity *affd)
 	for (curvec = 0; curvec < affd->pre_vectors; curvec++)
 		cpumask_copy(masks + curvec, irq_default_affinity);
 
-	/* Stabilize the cpumasks */
-	get_online_cpus();
-	nodes = get_nodes_in_cpumask(cpu_online_mask, &nodemsk);
+	nodes = get_nodes_in_cpumask(cpu_present_mask, &nodemsk);
 
 	/*
 	 * If the number of nodes in the mask is greater than or equal the
@@ -87,7 +87,8 @@ irq_create_affinity_masks(int nvecs, const struct irq_affinity *affd)
 	 */
 	if (affv <= nodes) {
 		for_each_node_mask(n, nodemsk) {
-			cpumask_copy(masks + curvec, cpumask_of_node(n));
+			cpumask_copy(masks + curvec,
+				     node_to_present_cpumask[n]);
 			if (++curvec == last_affv)
 				break;
 		}
@@ -101,7 +102,7 @@ irq_create_affinity_masks(int nvecs, const struct irq_affinity *affd)
 		vecs_per_node = (affv - (curvec - affd->pre_vectors)) / nodes;
 
 		/* Get the cpus on this node which are in the mask */
-		cpumask_and(nmsk, cpu_online_mask, cpumask_of_node(n));
+		cpumask_and(nmsk, cpu_present_mask, node_to_present_cpumask[n]);
 
 		/* Calculate the number of cpus per vector */
 		ncpus = cpumask_weight(nmsk);
@@ -128,8 +129,6 @@ irq_create_affinity_masks(int nvecs, const struct irq_affinity *affd)
 	}
 
 done:
-	put_online_cpus();
-
 	/* Fill out vectors at the end that don't need affinity */
 	for (; curvec < nvecs; curvec++)
 		cpumask_copy(masks + curvec, irq_default_affinity);
@@ -147,12 +146,26 @@ int irq_calc_affinity_vectors(int maxvec, const struct irq_affinity *affd)
 {
 	int resv = affd->pre_vectors + affd->post_vectors;
 	int vecs = maxvec - resv;
-	int cpus;
 
-	/* Stabilize the cpumasks */
-	get_online_cpus();
-	cpus = cpumask_weight(cpu_online_mask);
-	put_online_cpus();
+	return min_t(int, cpumask_weight(cpu_present_mask), vecs) + resv;
+}
+
+static int __init irq_build_cpumap(void)
+{
+	int node, cpu;
+
+	for (node = 0; node < nr_node_ids; node++) {
+		if (!zalloc_cpumask_var(&node_to_present_cpumask[node],
+				GFP_KERNEL))
+			panic("can't allocate early memory\n");
+	}
 
-	return min(cpus, vecs) + resv;
+	for_each_present_cpu(cpu) {
+		node = cpu_to_node(cpu);
+		cpumask_set_cpu(cpu, node_to_present_cpumask[node]);
+	}
+
+	return 0;
 }
+
+subsys_initcall(irq_build_cpumap);
-- 
2.11.0

* [PATCH 3/7] genirq/affinity: factor out a irq_affinity_set helper
From: Christoph Hellwig @ 2017-05-19  8:57 UTC
  To: Thomas Gleixner, Jens Axboe
  Cc: Keith Busch, linux-nvme, linux-block, linux-kernel

Factor the code that programs the affinity for a vector on a CPU
hot plug / hot unplug event out of the x86 code into a generic helper.

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 arch/x86/kernel/irq.c     | 23 ++---------------------
 include/linux/interrupt.h |  1 +
 kernel/irq/affinity.c     | 26 ++++++++++++++++++++++++++
 3 files changed, 29 insertions(+), 21 deletions(-)

diff --git a/arch/x86/kernel/irq.c b/arch/x86/kernel/irq.c
index f34fe7444836..a54eac5d81b3 100644
--- a/arch/x86/kernel/irq.c
+++ b/arch/x86/kernel/irq.c
@@ -437,7 +437,6 @@ void fixup_irqs(void)
 	struct irq_desc *desc;
 	struct irq_data *data;
 	struct irq_chip *chip;
-	int ret;
 
 	for_each_irq_desc(irq, desc) {
 		int break_affinity = 0;
@@ -482,26 +481,8 @@ void fixup_irqs(void)
 			continue;
 		}
 
-		if (!irqd_can_move_in_process_context(data) && chip->irq_mask)
-			chip->irq_mask(data);
-
-		if (chip->irq_set_affinity) {
-			ret = chip->irq_set_affinity(data, affinity, true);
-			if (ret == -ENOSPC)
-				pr_crit("IRQ %d set affinity failed because there are no available vectors.  The device assigned to this IRQ is unstable.\n", irq);
-		} else {
-			if (!(warned++))
-				set_affinity = 0;
-		}
-
-		/*
-		 * We unmask if the irq was not marked masked by the
-		 * core code. That respects the lazy irq disable
-		 * behaviour.
-		 */
-		if (!irqd_can_move_in_process_context(data) &&
-		    !irqd_irq_masked(data) && chip->irq_unmask)
-			chip->irq_unmask(data);
+		if (!irq_affinity_set(irq, desc, affinity) && !warned++)
+			set_affinity = 0;
 
 		raw_spin_unlock(&desc->lock);
 
diff --git a/include/linux/interrupt.h b/include/linux/interrupt.h
index a6fba4804672..afd3aa33e9b0 100644
--- a/include/linux/interrupt.h
+++ b/include/linux/interrupt.h
@@ -292,6 +292,7 @@ irq_set_affinity_notifier(unsigned int irq, struct irq_affinity_notify *notify);
 
 struct cpumask *irq_create_affinity_masks(int nvec, const struct irq_affinity *affd);
 int irq_calc_affinity_vectors(int maxvec, const struct irq_affinity *affd);
+bool irq_affinity_set(int irq, struct irq_desc *desc, const cpumask_t *mask);
 
 #else /* CONFIG_SMP */
 
diff --git a/kernel/irq/affinity.c b/kernel/irq/affinity.c
index 414b0be64bfc..d58431f59f7c 100644
--- a/kernel/irq/affinity.c
+++ b/kernel/irq/affinity.c
@@ -150,6 +150,32 @@ int irq_calc_affinity_vectors(int maxvec, const struct irq_affinity *affd)
 	return min_t(int, cpumask_weight(cpu_present_mask), vecs) + resv;
 }
 
+bool irq_affinity_set(int irq, struct irq_desc *desc, const cpumask_t *mask)
+{
+	struct irq_data *data = irq_desc_get_irq_data(desc);
+	struct irq_chip *chip = irq_data_get_irq_chip(data);
+	bool ret = false;
+
+	if (!irqd_can_move_in_process_context(data) && chip->irq_mask)
+		chip->irq_mask(data);
+
+	if (chip->irq_set_affinity) {
+		if (chip->irq_set_affinity(data, mask, true) == -ENOSPC)
+			pr_crit("IRQ %d set affinity failed because there are no available vectors.  The device assigned to this IRQ is unstable.\n", irq);
+		ret = true;
+	}
+
+	/*
+	 * We unmask if the irq was not marked masked by the core code.
+	 * That respects the lazy irq disable behaviour.
+	 */
+	if (!irqd_can_move_in_process_context(data) &&
+	    !irqd_irq_masked(data) && chip->irq_unmask)
+		chip->irq_unmask(data);
+
+	return ret;
+}
+
 static int __init irq_build_cpumap(void)
 {
 	int node, cpu;
-- 
2.11.0

* [PATCH 4/7] genirq/affinity: update CPU affinity for CPU hotplug events
From: Christoph Hellwig @ 2017-05-19  8:57 UTC
  To: Thomas Gleixner, Jens Axboe
  Cc: Keith Busch, linux-nvme, linux-block, linux-kernel

Remove a CPU from the affinity mask when it goes offline, and add it
back when it returns.  In case the vector was assigned only to the CPU
going offline, it will be shut down and restarted when the CPU
reappears.
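
The callbacks are wired up through a static cpuhp_ap_states[] entry
below; for illustration only, the dynamically registered equivalent
would be:

	int ret;

	ret = cpuhp_setup_state(CPUHP_AP_IRQ_AFFINITY_ONLINE,
				"irq/affinity:online",
				irq_affinity_online_cpu,
				irq_affinity_offline_cpu);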

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 arch/x86/kernel/irq.c      |   3 +-
 include/linux/cpuhotplug.h |   1 +
 include/linux/irq.h        |   9 ++++
 kernel/cpu.c               |   6 +++
 kernel/irq/affinity.c      | 129 ++++++++++++++++++++++++++++++++++++++++++++-
 5 files changed, 146 insertions(+), 2 deletions(-)

diff --git a/arch/x86/kernel/irq.c b/arch/x86/kernel/irq.c
index a54eac5d81b3..72c35ed534f1 100644
--- a/arch/x86/kernel/irq.c
+++ b/arch/x86/kernel/irq.c
@@ -453,7 +453,8 @@ void fixup_irqs(void)
 
 		data = irq_desc_get_irq_data(desc);
 		affinity = irq_data_get_affinity_mask(data);
-		if (!irq_has_action(irq) || irqd_is_per_cpu(data) ||
+		if (irqd_affinity_is_managed(data) ||
+		    !irq_has_action(irq) || irqd_is_per_cpu(data) ||
 		    cpumask_subset(affinity, cpu_online_mask)) {
 			raw_spin_unlock(&desc->lock);
 			continue;
diff --git a/include/linux/cpuhotplug.h b/include/linux/cpuhotplug.h
index 0f2a80377520..c15f22c54535 100644
--- a/include/linux/cpuhotplug.h
+++ b/include/linux/cpuhotplug.h
@@ -124,6 +124,7 @@ enum cpuhp_state {
 	CPUHP_AP_ONLINE_IDLE,
 	CPUHP_AP_SMPBOOT_THREADS,
 	CPUHP_AP_X86_VDSO_VMA_ONLINE,
+	CPUHP_AP_IRQ_AFFINITY_ONLINE,
 	CPUHP_AP_PERF_ONLINE,
 	CPUHP_AP_PERF_X86_ONLINE,
 	CPUHP_AP_PERF_X86_UNCORE_ONLINE,
diff --git a/include/linux/irq.h b/include/linux/irq.h
index f887351aa80e..ae15b8582685 100644
--- a/include/linux/irq.h
+++ b/include/linux/irq.h
@@ -216,6 +216,7 @@ enum {
 	IRQD_WAKEUP_ARMED		= (1 << 19),
 	IRQD_FORWARDED_TO_VCPU		= (1 << 20),
 	IRQD_AFFINITY_MANAGED		= (1 << 21),
+	IRQD_AFFINITY_SUSPENDED		= (1 << 22),
 };
 
 #define __irqd_to_state(d) ACCESS_PRIVATE((d)->common, state_use_accessors)
@@ -329,6 +330,11 @@ static inline void irqd_clr_activated(struct irq_data *d)
 	__irqd_to_state(d) &= ~IRQD_ACTIVATED;
 }
 
+static inline bool irqd_affinity_is_suspended(struct irq_data *d)
+{
+	return __irqd_to_state(d) & IRQD_AFFINITY_SUSPENDED;
+}
+
 #undef __irqd_to_state
 
 static inline irq_hw_number_t irqd_to_hwirq(struct irq_data *d)
@@ -1025,4 +1031,7 @@ int __ipi_send_mask(struct irq_desc *desc, const struct cpumask *dest);
 int ipi_send_single(unsigned int virq, unsigned int cpu);
 int ipi_send_mask(unsigned int virq, const struct cpumask *dest);
 
+int irq_affinity_online_cpu(unsigned int cpu);
+int irq_affinity_offline_cpu(unsigned int cpu);
+
 #endif /* _LINUX_IRQ_H */
diff --git a/kernel/cpu.c b/kernel/cpu.c
index 9ae6fbe5b5cf..ef0c5b63ca0d 100644
--- a/kernel/cpu.c
+++ b/kernel/cpu.c
@@ -27,6 +27,7 @@
 #include <linux/smpboot.h>
 #include <linux/relay.h>
 #include <linux/slab.h>
+#include <linux/irq.h>
 
 #include <trace/events/power.h>
 #define CREATE_TRACE_POINTS
@@ -1252,6 +1253,11 @@ static struct cpuhp_step cpuhp_ap_states[] = {
 		.startup.single		= smpboot_unpark_threads,
 		.teardown.single	= NULL,
 	},
+	[CPUHP_AP_IRQ_AFFINITY_ONLINE] = {
+		.name			= "irq/affinity:online",
+		.startup.single		= irq_affinity_online_cpu,
+		.teardown.single	= irq_affinity_offline_cpu,
+	},
 	[CPUHP_AP_PERF_ONLINE] = {
 		.name			= "perf:online",
 		.startup.single		= perf_event_init_cpu,
diff --git a/kernel/irq/affinity.c b/kernel/irq/affinity.c
index d58431f59f7c..809a7d241eff 100644
--- a/kernel/irq/affinity.c
+++ b/kernel/irq/affinity.c
@@ -1,8 +1,13 @@
-
+/*
+ * Copyright (C) 2016 Thomas Gleixner.
+ * Copyright (C) 2016-2017 Christoph Hellwig.
+ */
 #include <linux/interrupt.h>
 #include <linux/kernel.h>
 #include <linux/slab.h>
 #include <linux/cpu.h>
+#include <linux/irq.h>
+#include "internals.h"
 
 static cpumask_var_t node_to_present_cpumask[MAX_NUMNODES] __read_mostly;
 
@@ -176,6 +181,128 @@ bool irq_affinity_set(int irq, struct irq_desc *desc, const cpumask_t *mask)
 	return ret;
 }
 
+static void irq_affinity_online_irq(unsigned int irq, struct irq_desc *desc,
+				    unsigned int cpu)
+{
+	const struct cpumask *affinity;
+	struct irq_data *data;
+	struct irq_chip *chip;
+	unsigned long flags;
+	cpumask_var_t mask;
+
+	if (!desc)
+		return;
+	if (!zalloc_cpumask_var(&mask, GFP_KERNEL))
+		return;
+
+	raw_spin_lock_irqsave(&desc->lock, flags);
+
+	data = irq_desc_get_irq_data(desc);
+	affinity = irq_data_get_affinity_mask(data);
+	if (!irqd_affinity_is_managed(data) ||
+	    !irq_has_action(irq) ||
+	    !cpumask_test_cpu(cpu, affinity))
+		goto out_free_cpumask;
+
+	/*
+	 * The interrupt descriptor might have been cleaned up
+	 * already, but it is not yet removed from the radix tree
+	 */
+	chip = irq_data_get_irq_chip(data);
+	if (!chip)
+		goto out_free_cpumask;
+
+	if (WARN_ON_ONCE(!chip->irq_set_affinity))
+		goto out_free_cpumask;
+
+	cpumask_and(mask, affinity, cpu_online_mask);
+	cpumask_set_cpu(cpu, mask);
+	if (irqd_has_set(data, IRQD_AFFINITY_SUSPENDED)) {
+		irq_startup(desc, false);
+		irqd_clear(data, IRQD_AFFINITY_SUSPENDED);
+	} else {
+		irq_affinity_set(irq, desc, mask);
+	}
+
+out_free_cpumask:
+	free_cpumask_var(mask);
+	raw_spin_unlock_irqrestore(&desc->lock, flags);
+}
+
+int irq_affinity_online_cpu(unsigned int cpu)
+{
+	struct irq_desc *desc;
+	unsigned int irq;
+
+	irq_lock_sparse();
+	for_each_irq_desc(irq, desc)
+		irq_affinity_online_irq(irq, desc, cpu);
+	irq_unlock_sparse();
+	return 0;
+}
+
+static void irq_affinity_offline_irq(unsigned int irq, struct irq_desc *desc,
+				     unsigned int cpu)
+{
+	const struct cpumask *affinity;
+	struct irq_data *data;
+	struct irq_chip *chip;
+	unsigned long flags;
+	cpumask_var_t mask;
+
+	if (!desc)
+		return;
+	if (!zalloc_cpumask_var(&mask, GFP_KERNEL))
+		return;
+
+	raw_spin_lock_irqsave(&desc->lock, flags);
+
+	data = irq_desc_get_irq_data(desc);
+	affinity = irq_data_get_affinity_mask(data);
+	if (!irqd_affinity_is_managed(data) ||
+	    !irq_has_action(irq) ||
+	    irqd_has_set(data, IRQD_AFFINITY_SUSPENDED) ||
+	    !cpumask_test_cpu(cpu, affinity))
+		goto out_free_cpumask;
+
+	/*
+	 * The interrupt descriptor might have been cleaned up
+	 * already, but it is not yet removed from the radix tree
+	 */
+	chip = irq_data_get_irq_chip(data);
+	if (!chip)
+		goto out_free_cpumask;
+
+	if (WARN_ON_ONCE(!chip->irq_set_affinity))
+		goto out_free_cpumask;
+
+
+	cpumask_copy(mask, affinity);
+	cpumask_clear_cpu(cpu, mask);
+	if (cpumask_empty(mask)) {
+		irqd_set(data, IRQD_AFFINITY_SUSPENDED);
+		irq_shutdown(desc);
+	} else {
+		irq_affinity_set(irq, desc, mask);
+	}
+
+out_free_cpumask:
+	free_cpumask_var(mask);
+	raw_spin_unlock_irqrestore(&desc->lock, flags);
+}
+
+int irq_affinity_offline_cpu(unsigned int cpu)
+{
+	struct irq_desc *desc;
+	unsigned int irq;
+
+	irq_lock_sparse();
+	for_each_irq_desc(irq, desc)
+		irq_affinity_offline_irq(irq, desc, cpu);
+	irq_unlock_sparse();
+	return 0;
+}
+
 static int __init irq_build_cpumap(void)
 {
 	int node, cpu;
-- 
2.11.0

* [PATCH 5/7] blk-mq: include all present CPUs in the default queue mapping
From: Christoph Hellwig @ 2017-05-19  8:57 UTC
  To: Thomas Gleixner, Jens Axboe
  Cc: Keith Busch, linux-nvme, linux-block, linux-kernel

This way we get a nice distribution independent of the current CPU
online/offline state.
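
For context, the map built here is consumed roughly like this (per the
4.12-era block/blk-mq.h):

	static inline struct blk_mq_hw_ctx *blk_mq_map_queue(
			struct request_queue *q, int cpu)
	{
		return q->queue_hw_ctx[q->mq_map[cpu]];
	}

so every possible CPU needs a valid map[] entry, which is why CPUs not
covered by the mapping fall back to hardware queue 0.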

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 block/blk-mq-cpumap.c | 5 ++---
 1 file changed, 2 insertions(+), 3 deletions(-)

diff --git a/block/blk-mq-cpumap.c b/block/blk-mq-cpumap.c
index 8e61e8640e17..5eaecd40f701 100644
--- a/block/blk-mq-cpumap.c
+++ b/block/blk-mq-cpumap.c
@@ -35,7 +35,6 @@ int blk_mq_map_queues(struct blk_mq_tag_set *set)
 {
 	unsigned int *map = set->mq_map;
 	unsigned int nr_queues = set->nr_hw_queues;
-	const struct cpumask *online_mask = cpu_online_mask;
 	unsigned int i, nr_cpus, nr_uniq_cpus, queue, first_sibling;
 	cpumask_var_t cpus;
 
@@ -44,7 +43,7 @@ int blk_mq_map_queues(struct blk_mq_tag_set *set)
 
 	cpumask_clear(cpus);
 	nr_cpus = nr_uniq_cpus = 0;
-	for_each_cpu(i, online_mask) {
+	for_each_present_cpu(i) {
 		nr_cpus++;
 		first_sibling = get_first_sibling(i);
 		if (!cpumask_test_cpu(first_sibling, cpus))
@@ -54,7 +53,7 @@ int blk_mq_map_queues(struct blk_mq_tag_set *set)
 
 	queue = 0;
 	for_each_possible_cpu(i) {
-		if (!cpumask_test_cpu(i, online_mask)) {
+		if (!cpumask_test_cpu(i, cpu_present_mask)) {
 			map[i] = 0;
 			continue;
 		}
-- 
2.11.0

* [PATCH 6/7] blk-mq: create hctx for each present CPU
From: Christoph Hellwig @ 2017-05-19  8:57 UTC
  To: Thomas Gleixner, Jens Axboe
  Cc: Keith Busch, linux-nvme, linux-block, linux-kernel

Currently we only create hctxs for online CPUs, which can lead to a lot
of churn due to frequent soft offline / online operations.  Instead
allocate one for each present CPU to avoid this and dramatically simplify
the code.

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 block/blk-mq.c             | 120 +++++----------------------------------------
 block/blk-mq.h             |   5 --
 include/linux/cpuhotplug.h |   1 -
 3 files changed, 11 insertions(+), 115 deletions(-)

diff --git a/block/blk-mq.c b/block/blk-mq.c
index a69ad122ed66..01bbc30a2807 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -37,9 +37,6 @@
 #include "blk-wbt.h"
 #include "blk-mq-sched.h"
 
-static DEFINE_MUTEX(all_q_mutex);
-static LIST_HEAD(all_q_list);
-
 static void blk_mq_poll_stats_start(struct request_queue *q);
 static void blk_mq_poll_stats_fn(struct blk_stat_callback *cb);
 static void __blk_mq_stop_hw_queues(struct request_queue *q, bool sync);
@@ -1985,8 +1982,8 @@ static void blk_mq_init_cpu_queues(struct request_queue *q,
 		INIT_LIST_HEAD(&__ctx->rq_list);
 		__ctx->queue = q;
 
-		/* If the cpu isn't online, the cpu is mapped to first hctx */
-		if (!cpu_online(i))
+		/* If the cpu isn't present, the cpu is mapped to first hctx */
+		if (!cpu_present(i))
 			continue;
 
 		hctx = blk_mq_map_queue(q, i);
@@ -2029,8 +2026,7 @@ static void blk_mq_free_map_and_requests(struct blk_mq_tag_set *set,
 	}
 }
 
-static void blk_mq_map_swqueue(struct request_queue *q,
-			       const struct cpumask *online_mask)
+static void blk_mq_map_swqueue(struct request_queue *q)
 {
 	unsigned int i, hctx_idx;
 	struct blk_mq_hw_ctx *hctx;
@@ -2048,13 +2044,11 @@ static void blk_mq_map_swqueue(struct request_queue *q,
 	}
 
 	/*
-	 * Map software to hardware queues
+	 * Map software to hardware queues.
+	 *
+	 * If the cpu isn't present, the cpu is mapped to first hctx.
 	 */
-	for_each_possible_cpu(i) {
-		/* If the cpu isn't online, the cpu is mapped to first hctx */
-		if (!cpumask_test_cpu(i, online_mask))
-			continue;
-
+	for_each_present_cpu(i) {
 		hctx_idx = q->mq_map[i];
 		/* unmapped hw queue can be remapped after CPU topo changed */
 		if (!set->tags[hctx_idx] &&
@@ -2340,16 +2334,8 @@ struct request_queue *blk_mq_init_allocated_queue(struct blk_mq_tag_set *set,
 		blk_queue_softirq_done(q, set->ops->complete);
 
 	blk_mq_init_cpu_queues(q, set->nr_hw_queues);
-
-	get_online_cpus();
-	mutex_lock(&all_q_mutex);
-
-	list_add_tail(&q->all_q_node, &all_q_list);
 	blk_mq_add_queue_tag_set(set, q);
-	blk_mq_map_swqueue(q, cpu_online_mask);
-
-	mutex_unlock(&all_q_mutex);
-	put_online_cpus();
+	blk_mq_map_swqueue(q);
 
 	if (!(set->flags & BLK_MQ_F_NO_SCHED)) {
 		int ret;
@@ -2375,18 +2361,12 @@ void blk_mq_free_queue(struct request_queue *q)
 {
 	struct blk_mq_tag_set	*set = q->tag_set;
 
-	mutex_lock(&all_q_mutex);
-	list_del_init(&q->all_q_node);
-	mutex_unlock(&all_q_mutex);
-
 	blk_mq_del_queue_tag_set(q);
-
 	blk_mq_exit_hw_queues(q, set, set->nr_hw_queues);
 }
 
 /* Basically redo blk_mq_init_queue with queue frozen */
-static void blk_mq_queue_reinit(struct request_queue *q,
-				const struct cpumask *online_mask)
+static void blk_mq_queue_reinit(struct request_queue *q)
 {
 	WARN_ON_ONCE(!atomic_read(&q->mq_freeze_depth));
 
@@ -2399,76 +2379,12 @@ static void blk_mq_queue_reinit(struct request_queue *q,
 	 * involves free and re-allocate memory, worthy doing?)
 	 */
 
-	blk_mq_map_swqueue(q, online_mask);
+	blk_mq_map_swqueue(q);
 
 	blk_mq_sysfs_register(q);
 	blk_mq_debugfs_register_hctxs(q);
 }
 
-/*
- * New online cpumask which is going to be set in this hotplug event.
- * Declare this cpumasks as global as cpu-hotplug operation is invoked
- * one-by-one and dynamically allocating this could result in a failure.
- */
-static struct cpumask cpuhp_online_new;
-
-static void blk_mq_queue_reinit_work(void)
-{
-	struct request_queue *q;
-
-	mutex_lock(&all_q_mutex);
-	/*
-	 * We need to freeze and reinit all existing queues.  Freezing
-	 * involves synchronous wait for an RCU grace period and doing it
-	 * one by one may take a long time.  Start freezing all queues in
-	 * one swoop and then wait for the completions so that freezing can
-	 * take place in parallel.
-	 */
-	list_for_each_entry(q, &all_q_list, all_q_node)
-		blk_freeze_queue_start(q);
-	list_for_each_entry(q, &all_q_list, all_q_node)
-		blk_mq_freeze_queue_wait(q);
-
-	list_for_each_entry(q, &all_q_list, all_q_node)
-		blk_mq_queue_reinit(q, &cpuhp_online_new);
-
-	list_for_each_entry(q, &all_q_list, all_q_node)
-		blk_mq_unfreeze_queue(q);
-
-	mutex_unlock(&all_q_mutex);
-}
-
-static int blk_mq_queue_reinit_dead(unsigned int cpu)
-{
-	cpumask_copy(&cpuhp_online_new, cpu_online_mask);
-	blk_mq_queue_reinit_work();
-	return 0;
-}
-
-/*
- * Before hotadded cpu starts handling requests, new mappings must be
- * established.  Otherwise, these requests in hw queue might never be
- * dispatched.
- *
- * For example, there is a single hw queue (hctx) and two CPU queues (ctx0
- * for CPU0, and ctx1 for CPU1).
- *
- * Now CPU1 is just onlined and a request is inserted into ctx1->rq_list
- * and set bit0 in pending bitmap as ctx1->index_hw is still zero.
- *
- * And then while running hw queue, blk_mq_flush_busy_ctxs() finds bit0 is set
- * in pending bitmap and tries to retrieve requests in hctx->ctxs[0]->rq_list.
- * But htx->ctxs[0] is a pointer to ctx0, so the request in ctx1->rq_list is
- * ignored.
- */
-static int blk_mq_queue_reinit_prepare(unsigned int cpu)
-{
-	cpumask_copy(&cpuhp_online_new, cpu_online_mask);
-	cpumask_set_cpu(cpu, &cpuhp_online_new);
-	blk_mq_queue_reinit_work();
-	return 0;
-}
-
 static int __blk_mq_alloc_rq_maps(struct blk_mq_tag_set *set)
 {
 	int i;
@@ -2678,7 +2594,7 @@ void blk_mq_update_nr_hw_queues(struct blk_mq_tag_set *set, int nr_hw_queues)
 	blk_mq_update_queue_map(set);
 	list_for_each_entry(q, &set->tag_list, tag_set_list) {
 		blk_mq_realloc_hw_ctxs(set, q);
-		blk_mq_queue_reinit(q, cpu_online_mask);
+		blk_mq_queue_reinit(q);
 	}
 
 	list_for_each_entry(q, &set->tag_list, tag_set_list)
@@ -2887,24 +2803,10 @@ bool blk_mq_poll(struct request_queue *q, blk_qc_t cookie)
 }
 EXPORT_SYMBOL_GPL(blk_mq_poll);
 
-void blk_mq_disable_hotplug(void)
-{
-	mutex_lock(&all_q_mutex);
-}
-
-void blk_mq_enable_hotplug(void)
-{
-	mutex_unlock(&all_q_mutex);
-}
-
 static int __init blk_mq_init(void)
 {
 	cpuhp_setup_state_multi(CPUHP_BLK_MQ_DEAD, "block/mq:dead", NULL,
 				blk_mq_hctx_notify_dead);
-
-	cpuhp_setup_state_nocalls(CPUHP_BLK_MQ_PREPARE, "block/mq:prepare",
-				  blk_mq_queue_reinit_prepare,
-				  blk_mq_queue_reinit_dead);
 	return 0;
 }
 subsys_initcall(blk_mq_init);
diff --git a/block/blk-mq.h b/block/blk-mq.h
index cc67b48e3551..558df56544d2 100644
--- a/block/blk-mq.h
+++ b/block/blk-mq.h
@@ -56,11 +56,6 @@ void __blk_mq_insert_request(struct blk_mq_hw_ctx *hctx, struct request *rq,
 				bool at_head);
 void blk_mq_insert_requests(struct blk_mq_hw_ctx *hctx, struct blk_mq_ctx *ctx,
 				struct list_head *list);
-/*
- * CPU hotplug helpers
- */
-void blk_mq_enable_hotplug(void);
-void blk_mq_disable_hotplug(void);
 
 /*
  * CPU -> queue mappings
diff --git a/include/linux/cpuhotplug.h b/include/linux/cpuhotplug.h
index c15f22c54535..7f815d915977 100644
--- a/include/linux/cpuhotplug.h
+++ b/include/linux/cpuhotplug.h
@@ -58,7 +58,6 @@ enum cpuhp_state {
 	CPUHP_XEN_EVTCHN_PREPARE,
 	CPUHP_ARM_SHMOBILE_SCU_PREPARE,
 	CPUHP_SH_SH3X_PREPARE,
-	CPUHP_BLK_MQ_PREPARE,
 	CPUHP_NET_FLOW_PREPARE,
 	CPUHP_TOPOLOGY_PREPARE,
 	CPUHP_NET_IUCV_PREPARE,
-- 
2.11.0

* [PATCH 7/7] nvme: allocate queues for all possible CPUs
From: Christoph Hellwig @ 2017-05-19  8:57 UTC
  To: Thomas Gleixner, Jens Axboe
  Cc: Keith Busch, linux-nvme, linux-block, linux-kernel

Unlike most drivers, which simply pass the maximum possible vectors to
pci_alloc_irq_vectors, NVMe needs to configure the device before allocating
the vectors, so it needs a manual update for the new scheme of using
all present CPUs.

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 drivers/nvme/host/pci.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c
index fed803232edc..6580a21d1425 100644
--- a/drivers/nvme/host/pci.c
+++ b/drivers/nvme/host/pci.c
@@ -1520,7 +1520,7 @@ static int nvme_setup_io_queues(struct nvme_dev *dev)
 	struct pci_dev *pdev = to_pci_dev(dev->dev);
 	int result, nr_io_queues, size;
 
-	nr_io_queues = num_online_cpus();
+	nr_io_queues = num_present_cpus();
 	result = nvme_set_queue_count(&dev->ctrl, &nr_io_queues);
 	if (result < 0)
 		return result;
-- 
2.11.0

* Re: [PATCH 2/7] genirq/affinity: assign vectors to all present CPUs
From: Thomas Gleixner @ 2017-05-21 18:31 UTC
  To: Christoph Hellwig
  Cc: Jens Axboe, Keith Busch, linux-nvme, linux-block, linux-kernel

On Fri, 19 May 2017, Christoph Hellwig wrote:
> -	/* Stabilize the cpumasks */
> -	get_online_cpus();

How is that protected against physical CPU hotplug? Physical CPU hotplug
manipulates the present mask.

> -	nodes = get_nodes_in_cpumask(cpu_online_mask, &nodemsk);
> +	nodes = get_nodes_in_cpumask(cpu_present_mask, &nodemsk);
> +static int __init irq_build_cpumap(void)
> +{
> +	int node, cpu;
> +
> +	for (node = 0; node < nr_node_ids; node++) {
> +		if (!zalloc_cpumask_var(&node_to_present_cpumask[node],
> +				GFP_KERNEL))
> +			panic("can't allocate early memory\n");
> +	}
>  
> -	return min(cpus, vecs) + resv;
> +	for_each_present_cpu(cpu) {
> +		node = cpu_to_node(cpu);
> +		cpumask_set_cpu(cpu, node_to_present_cpumask[node]);
> +	}

This mask needs updating on physical hotplug as well.

Thanks,

	tglx

* Re: [PATCH 3/7] genirq/affinity: factor out a irq_affinity_set helper
From: Thomas Gleixner @ 2017-05-21 19:03 UTC
  To: Christoph Hellwig
  Cc: Jens Axboe, Keith Busch, linux-nvme, linux-block, linux-kernel

On Fri, 19 May 2017, Christoph Hellwig wrote:

> Factor out code from the x86 cpu hot plug code to program the affinity
> for a vector for a hot plug / hot unplug event.
> +bool irq_affinity_set(int irq, struct irq_desc *desc, const cpumask_t *mask)
> +{
> +	struct irq_data *data = irq_desc_get_irq_data(desc);
> +	struct irq_chip *chip = irq_data_get_irq_chip(data);
> +	bool ret = false;
> +
> +	if (!irqd_can_move_in_process_context(data) && chip->irq_mask)

Using that inline directly will make this function useless on architectures
which do not have GENERIC_PENDING_IRQ set.

kernel/irq/manage.c has that wrapped. We need to move those wrappers to the
internal header file, so that it gets compiled out on other platforms.
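
For reference, the wrapper in question looks roughly like this
(4.12-era kernel/irq/manage.c, trimmed):

#ifdef CONFIG_GENERIC_PENDING_IRQ
static inline bool irq_can_move_pcntxt(struct irq_data *data)
{
	return irqd_can_move_in_process_context(data);
}
#else
static inline bool irq_can_move_pcntxt(struct irq_data *data)
{
	return true;
}
#endif

With the fallback returning true, the mask/unmask branches compile away
on architectures without GENERIC_PENDING_IRQ.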

Thanks,

	tglx

* Re: [PATCH 2/7] genirq/affinity: assign vectors to all present CPUs
From: Christoph Hellwig @ 2017-05-23  9:35 UTC
  To: Thomas Gleixner
  Cc: Christoph Hellwig, Jens Axboe, Keith Busch, linux-nvme,
	linux-block, linux-kernel

On Sun, May 21, 2017 at 08:31:47PM +0200, Thomas Gleixner wrote:
> On Fri, 19 May 2017, Christoph Hellwig wrote:
> > -	/* Stabilize the cpumasks */
> > -	get_online_cpus();
> 
> How is that protected against physical CPU hotplug? Physical CPU hotplug
> manipulates the present mask.

It does indeed seem to.  Documentation/core-api/cpu_hotplug.rst claims
there are no locking rules for manipulation of cpu_present_mask; maybe
it needs an update to mention get/put_online_cpus()?

Or maybe I should just switch to the cpu_possible_mask here like a lot
of code seems to do to avoid the hotplug issues, but that might be a
bit of a waste.

* Re: [PATCH 3/7] genirq/affinity: factor out a irq_affinity_set helper
From: Christoph Hellwig @ 2017-05-23  9:37 UTC
  To: Thomas Gleixner
  Cc: Christoph Hellwig, Jens Axboe, Keith Busch, linux-nvme,
	linux-block, linux-kernel

On Sun, May 21, 2017 at 09:03:06PM +0200, Thomas Gleixner wrote:
> > +{
> > +	struct irq_data *data = irq_desc_get_irq_data(desc);
> > +	struct irq_chip *chip = irq_data_get_irq_chip(data);
> > +	bool ret = false;
> > +
> > +	if (!irqd_can_move_in_process_context(data) && chip->irq_mask)
> 
> Using that inline directly will make this function useless on architectures
> which have GENERIC_PENDING_IRQS not set.
> 
> kernel/irq/manage.c has that wrapped. We need to move those wrappers to the
> internal header file, so that it gets compiled out on other platforms.

Ok, will do.

* spread MSI(-X) vectors to all possible CPUs V2
From: Christoph Hellwig @ 2017-06-03 14:03 UTC
  To: Thomas Gleixner, Jens Axboe
  Cc: Keith Busch, linux-nvme, linux-block, linux-kernel

Hi all,

this series changes our automatic MSI-X vector assignment so that it
takes all present CPUs into account instead of only the online ones.
This allows us to better deal with CPU hotplug events, which can happen
frequently due to power management, for example.

Changes since V1:
 - rebase to current Linus' tree
 - add irq_lock_sparse calls
 - move memory allocations outside of (raw) spinlocks
 - make the possible cpus per node mask safe vs physical CPU hotplug
 - remove the irq_force_complete_move call
 - factor some common code into helpers
 - indentation fixups

* Re: spread MSI(-X) vectors to all possible CPUs V2
From: Christoph Hellwig @ 2017-06-16  6:48 UTC
  To: Thomas Gleixner, Jens Axboe
  Cc: Keith Busch, linux-block, linux-kernel, linux-nvme

Thomas,

can you take a look at the generic patches as they are the required
base for the block work?

* Re: spread MSI(-X) vectors to all possible CPUs V2
From: Thomas Gleixner @ 2017-06-16  7:28 UTC
  To: Christoph Hellwig
  Cc: Jens Axboe, Keith Busch, linux-block, linux-kernel, linux-nvme

On Fri, 16 Jun 2017, Christoph Hellwig wrote:
> can you take a look at the generic patches as they are the required
> base for the block work?

It's next on my ever-growing todo list...
