* [PATCH V2 0/5] genirq/affinity: irq vector spread among online CPUs as far as possible
@ 2018-03-05  3:13 Ming Lei
  2018-03-05  3:13 ` [PATCH V2 1/5] genirq/affinity: rename *node_to_possible_cpumask as *node_to_cpumask Ming Lei
                   ` (4 more replies)
  0 siblings, 5 replies; 9+ messages in thread
From: Ming Lei @ 2018-03-05  3:13 UTC (permalink / raw)
  To: Jens Axboe, Christoph Hellwig, Thomas Gleixner, linux-kernel
  Cc: linux-block, Laurence Oberman, Ming Lei

Hi,

This patchset tries to spread irq vectors among online CPUs as far as
possible, so that we don't end up with too few irq vectors that have
online CPUs mapped.

For example, on an 8-core system where 4 CPU cores (4~7) are offline/not
present, for a device with 4 queues:

1) before this patchset
	irq 39, cpu list 0-2
	irq 40, cpu list 3-4,6
	irq 41, cpu list 5
	irq 42, cpu list 7

2) after this patchset
	irq 39, cpu list 0,4
	irq 40, cpu list 1,6
	irq 41, cpu list 2,5
	irq 42, cpu list 3,7

Without this patchset, only two vectors (39 and 40) can be active, but
all 4 irq vectors can be active after applying this patchset.

One disadvantage is that CPUs from different NUMA nodes can be mapped to
the same irq vector. Given that one CPU is generally enough to handle one
irq vector, this shouldn't be a big deal. Otherwise, more vectors would
have to be allocated, since performance can be hurt by the current
assignment.
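
In outline, the two-stage spread introduced by the last patch boils down
to calling the spread helper twice; here is a simplified sketch of the
resulting control flow (the wrap-around handling of the starting vector
is dropped for brevity, see patch 5 for the real code):

	get_online_cpus();
	build_node_to_cpumask(node_to_cpumask);

	/* stage 1: spread vectors across the CPUs that are currently online */
	vecs_online = irq_build_affinity_masks(affd, affd->pre_vectors, affv,
					       node_to_cpumask, cpu_online_mask,
					       nmsk, masks);

	/* stage 2: spread the remaining vectors across possible-but-offline
	 * CPUs, so CPUs plugged in later still find vectors mapped to them */
	cpumask_andnot(cpu_mask, cpu_possible_mask, cpu_online_mask);
	vecs_offline = irq_build_affinity_masks(affd,
					affd->pre_vectors + vecs_online, affv,
					node_to_cpumask, cpu_mask, nmsk, masks);
	put_online_cpus();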

V2:
	- address comments from Christoph
	- mark irq_build_affinity_masks as static
	- move constification of get_nodes_in_cpumask's parameter into one
	  prep patch
	- add Reviewed-by tag

Thanks
Ming

Ming Lei (5):
  genirq/affinity: rename *node_to_possible_cpumask as *node_to_cpumask
  genirq/affinity: mark 'node_to_cpumask' as const for
    get_nodes_in_cpumask()
  genirq/affinity: move actual irq vector spread into one helper
  genirq/affinity: support to do irq vectors spread starting from any
    vector
  genirq/affinity: irq vector spread among online CPUs as far as
    possible

 kernel/irq/affinity.c | 145 ++++++++++++++++++++++++++++++++------------------
 1 file changed, 94 insertions(+), 51 deletions(-)

-- 
2.9.5


* [PATCH V2 1/5] genirq/affinity: rename *node_to_possible_cpumask as *node_to_cpumask
  2018-03-05  3:13 [PATCH V2 0/5] genirq/affinity: irq vector spread among online CPUs as far as possible Ming Lei
@ 2018-03-05  3:13 ` Ming Lei
  2018-03-05  3:13 ` [PATCH V2 2/5] genirq/affinity: mark 'node_to_cpumask' as const for get_nodes_in_cpumask() Ming Lei
                   ` (3 subsequent siblings)
  4 siblings, 0 replies; 9+ messages in thread
From: Ming Lei @ 2018-03-05  3:13 UTC (permalink / raw)
  To: Jens Axboe, Christoph Hellwig, Thomas Gleixner, linux-kernel
  Cc: linux-block, Laurence Oberman, Ming Lei

The following patches will introduce a two-stage irq spread to improve
the irq spread across all possible CPUs.

No functional change.

Cc: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Ming Lei <ming.lei@redhat.com>
---
 kernel/irq/affinity.c | 26 +++++++++++++-------------
 1 file changed, 13 insertions(+), 13 deletions(-)

diff --git a/kernel/irq/affinity.c b/kernel/irq/affinity.c
index a37a3b4b6342..4b1c4763212d 100644
--- a/kernel/irq/affinity.c
+++ b/kernel/irq/affinity.c
@@ -39,7 +39,7 @@ static void irq_spread_init_one(struct cpumask *irqmsk, struct cpumask *nmsk,
 	}
 }
 
-static cpumask_var_t *alloc_node_to_possible_cpumask(void)
+static cpumask_var_t *alloc_node_to_cpumask(void)
 {
 	cpumask_var_t *masks;
 	int node;
@@ -62,7 +62,7 @@ static cpumask_var_t *alloc_node_to_possible_cpumask(void)
 	return NULL;
 }
 
-static void free_node_to_possible_cpumask(cpumask_var_t *masks)
+static void free_node_to_cpumask(cpumask_var_t *masks)
 {
 	int node;
 
@@ -71,7 +71,7 @@ static void free_node_to_possible_cpumask(cpumask_var_t *masks)
 	kfree(masks);
 }
 
-static void build_node_to_possible_cpumask(cpumask_var_t *masks)
+static void build_node_to_cpumask(cpumask_var_t *masks)
 {
 	int cpu;
 
@@ -79,14 +79,14 @@ static void build_node_to_possible_cpumask(cpumask_var_t *masks)
 		cpumask_set_cpu(cpu, masks[cpu_to_node(cpu)]);
 }
 
-static int get_nodes_in_cpumask(cpumask_var_t *node_to_possible_cpumask,
+static int get_nodes_in_cpumask(cpumask_var_t *node_to_cpumask,
 				const struct cpumask *mask, nodemask_t *nodemsk)
 {
 	int n, nodes = 0;
 
 	/* Calculate the number of nodes in the supplied affinity mask */
 	for_each_node(n) {
-		if (cpumask_intersects(mask, node_to_possible_cpumask[n])) {
+		if (cpumask_intersects(mask, node_to_cpumask[n])) {
 			node_set(n, *nodemsk);
 			nodes++;
 		}
@@ -109,7 +109,7 @@ irq_create_affinity_masks(int nvecs, const struct irq_affinity *affd)
 	int last_affv = affv + affd->pre_vectors;
 	nodemask_t nodemsk = NODE_MASK_NONE;
 	struct cpumask *masks;
-	cpumask_var_t nmsk, *node_to_possible_cpumask;
+	cpumask_var_t nmsk, *node_to_cpumask;
 
 	/*
 	 * If there aren't any vectors left after applying the pre/post
@@ -125,8 +125,8 @@ irq_create_affinity_masks(int nvecs, const struct irq_affinity *affd)
 	if (!masks)
 		goto out;
 
-	node_to_possible_cpumask = alloc_node_to_possible_cpumask();
-	if (!node_to_possible_cpumask)
+	node_to_cpumask = alloc_node_to_cpumask();
+	if (!node_to_cpumask)
 		goto out;
 
 	/* Fill out vectors at the beginning that don't need affinity */
@@ -135,8 +135,8 @@ irq_create_affinity_masks(int nvecs, const struct irq_affinity *affd)
 
 	/* Stabilize the cpumasks */
 	get_online_cpus();
-	build_node_to_possible_cpumask(node_to_possible_cpumask);
-	nodes = get_nodes_in_cpumask(node_to_possible_cpumask, cpu_possible_mask,
+	build_node_to_cpumask(node_to_cpumask);
+	nodes = get_nodes_in_cpumask(node_to_cpumask, cpu_possible_mask,
 				     &nodemsk);
 
 	/*
@@ -146,7 +146,7 @@ irq_create_affinity_masks(int nvecs, const struct irq_affinity *affd)
 	if (affv <= nodes) {
 		for_each_node_mask(n, nodemsk) {
 			cpumask_copy(masks + curvec,
-				     node_to_possible_cpumask[n]);
+				     node_to_cpumask[n]);
 			if (++curvec == last_affv)
 				break;
 		}
@@ -160,7 +160,7 @@ irq_create_affinity_masks(int nvecs, const struct irq_affinity *affd)
 		vecs_per_node = (affv - (curvec - affd->pre_vectors)) / nodes;
 
 		/* Get the cpus on this node which are in the mask */
-		cpumask_and(nmsk, cpu_possible_mask, node_to_possible_cpumask[n]);
+		cpumask_and(nmsk, cpu_possible_mask, node_to_cpumask[n]);
 
 		/* Calculate the number of cpus per vector */
 		ncpus = cpumask_weight(nmsk);
@@ -192,7 +192,7 @@ irq_create_affinity_masks(int nvecs, const struct irq_affinity *affd)
 	/* Fill out vectors at the end that don't need affinity */
 	for (; curvec < nvecs; curvec++)
 		cpumask_copy(masks + curvec, irq_default_affinity);
-	free_node_to_possible_cpumask(node_to_possible_cpumask);
+	free_node_to_cpumask(node_to_cpumask);
 out:
 	free_cpumask_var(nmsk);
 	return masks;
-- 
2.9.5


* [PATCH V2 2/5] genirq/affinity: mark 'node_to_cpumask' as const for get_nodes_in_cpumask()
  2018-03-05  3:13 [PATCH V2 0/5] genirq/affinity: irq vector spread among online CPUs as far as possible Ming Lei
  2018-03-05  3:13 ` [PATCH V2 1/5] genirq/affinity: rename *node_to_possible_cpumask as *node_to_cpumask Ming Lei
@ 2018-03-05  3:13 ` Ming Lei
  2018-03-05  3:13 ` [PATCH V2 3/5] genirq/affinity: move actual irq vector spread into one helper Ming Lei
                   ` (2 subsequent siblings)
  4 siblings, 0 replies; 9+ messages in thread
From: Ming Lei @ 2018-03-05  3:13 UTC (permalink / raw)
  To: Jens Axboe, Christoph Hellwig, Thomas Gleixner, linux-kernel
  Cc: linux-block, Laurence Oberman, Ming Lei, Christoph Hellwig

Inside irq_create_affinity_masks(), once 'node_to_cpumask' is created,
it is accessed read-only, so mark it as const for
get_nodes_in_cpumask().

Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Christoph Hellwig <hch@lst.de>
Signed-off-by: Ming Lei <ming.lei@redhat.com>
---
 kernel/irq/affinity.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/kernel/irq/affinity.c b/kernel/irq/affinity.c
index 4b1c4763212d..9f49d6ef0dc8 100644
--- a/kernel/irq/affinity.c
+++ b/kernel/irq/affinity.c
@@ -79,7 +79,7 @@ static void build_node_to_cpumask(cpumask_var_t *masks)
 		cpumask_set_cpu(cpu, masks[cpu_to_node(cpu)]);
 }
 
-static int get_nodes_in_cpumask(cpumask_var_t *node_to_cpumask,
+static int get_nodes_in_cpumask(const cpumask_var_t *node_to_cpumask,
 				const struct cpumask *mask, nodemask_t *nodemsk)
 {
 	int n, nodes = 0;
-- 
2.9.5


* [PATCH V2 3/5] genirq/affinity: move actual irq vector spread into one helper
  2018-03-05  3:13 [PATCH V2 0/5] genirq/affinity: irq vector spread among online CPUs as far as possible Ming Lei
  2018-03-05  3:13 ` [PATCH V2 1/5] genirq/affinity: rename *node_to_possible_cpumask as *node_to_cpumask Ming Lei
  2018-03-05  3:13 ` [PATCH V2 2/5] genirq/affinity: mark 'node_to_cpumask' as const for get_nodes_in_cpumask() Ming Lei
@ 2018-03-05  3:13 ` Ming Lei
  2018-03-05 16:28   ` kbuild test robot
  2018-03-05  3:13 ` [PATCH V2 4/5] genirq/affinity: support to do irq vectors spread starting from any vector Ming Lei
  2018-03-05  3:13 ` [PATCH V2 5/5] genirq/affinity: irq vector spread among online CPUs as far as possible Ming Lei
  4 siblings, 1 reply; 9+ messages in thread
From: Ming Lei @ 2018-03-05  3:13 UTC (permalink / raw)
  To: Jens Axboe, Christoph Hellwig, Thomas Gleixner, linux-kernel
  Cc: linux-block, Laurence Oberman, Ming Lei

No functional change, just preparation for converting to the two-stage
irq vector spread.

Cc: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Ming Lei <ming.lei@redhat.com>
---
 kernel/irq/affinity.c | 97 +++++++++++++++++++++++++++++----------------------
 1 file changed, 55 insertions(+), 42 deletions(-)

diff --git a/kernel/irq/affinity.c b/kernel/irq/affinity.c
index 9f49d6ef0dc8..256adf92ec62 100644
--- a/kernel/irq/affinity.c
+++ b/kernel/irq/affinity.c
@@ -94,50 +94,19 @@ static int get_nodes_in_cpumask(const cpumask_var_t *node_to_cpumask,
 	return nodes;
 }
 
-/**
- * irq_create_affinity_masks - Create affinity masks for multiqueue spreading
- * @nvecs:	The total number of vectors
- * @affd:	Description of the affinity requirements
- *
- * Returns the masks pointer or NULL if allocation failed.
- */
-struct cpumask *
-irq_create_affinity_masks(int nvecs, const struct irq_affinity *affd)
+static int irq_build_affinity_masks(int nvecs, const struct irq_affinity *affd,
+				    const cpumask_var_t *node_to_cpumask,
+				    const struct cpumask *cpu_mask,
+				    struct cpumask *nmsk,
+				    struct cpumask *masks)
 {
-	int n, nodes, cpus_per_vec, extra_vecs, curvec;
 	int affv = nvecs - affd->pre_vectors - affd->post_vectors;
 	int last_affv = affv + affd->pre_vectors;
+	int curvec = affd->pre_vectors;
 	nodemask_t nodemsk = NODE_MASK_NONE;
-	struct cpumask *masks;
-	cpumask_var_t nmsk, *node_to_cpumask;
-
-	/*
-	 * If there aren't any vectors left after applying the pre/post
-	 * vectors don't bother with assigning affinity.
-	 */
-	if (!affv)
-		return NULL;
-
-	if (!zalloc_cpumask_var(&nmsk, GFP_KERNEL))
-		return NULL;
-
-	masks = kcalloc(nvecs, sizeof(*masks), GFP_KERNEL);
-	if (!masks)
-		goto out;
+	int n, nodes, cpus_per_vec, extra_vecs;
 
-	node_to_cpumask = alloc_node_to_cpumask();
-	if (!node_to_cpumask)
-		goto out;
-
-	/* Fill out vectors at the beginning that don't need affinity */
-	for (curvec = 0; curvec < affd->pre_vectors; curvec++)
-		cpumask_copy(masks + curvec, irq_default_affinity);
-
-	/* Stabilize the cpumasks */
-	get_online_cpus();
-	build_node_to_cpumask(node_to_cpumask);
-	nodes = get_nodes_in_cpumask(node_to_cpumask, cpu_possible_mask,
-				     &nodemsk);
+	nodes = get_nodes_in_cpumask(node_to_cpumask, cpu_mask, &nodemsk);
 
 	/*
 	 * If the number of nodes in the mask is greater than or equal the
@@ -150,7 +119,7 @@ irq_create_affinity_masks(int nvecs, const struct irq_affinity *affd)
 			if (++curvec == last_affv)
 				break;
 		}
-		goto done;
+		goto out;
 	}
 
 	for_each_node_mask(n, nodemsk) {
@@ -160,7 +129,7 @@ irq_create_affinity_masks(int nvecs, const struct irq_affinity *affd)
 		vecs_per_node = (affv - (curvec - affd->pre_vectors)) / nodes;
 
 		/* Get the cpus on this node which are in the mask */
-		cpumask_and(nmsk, cpu_possible_mask, node_to_cpumask[n]);
+		cpumask_and(nmsk, cpu_mask, node_to_cpumask[n]);
 
 		/* Calculate the number of cpus per vector */
 		ncpus = cpumask_weight(nmsk);
@@ -186,7 +155,51 @@ irq_create_affinity_masks(int nvecs, const struct irq_affinity *affd)
 		--nodes;
 	}
 
-done:
+out:
+	return curvec - affd->pre_vectors;
+}
+
+/**
+ * irq_create_affinity_masks - Create affinity masks for multiqueue spreading
+ * @nvecs:	The total number of vectors
+ * @affd:	Description of the affinity requirements
+ *
+ * Returns the masks pointer or NULL if allocation failed.
+ */
+struct cpumask *
+irq_create_affinity_masks(int nvecs, const struct irq_affinity *affd)
+{
+	int curvec;
+	struct cpumask *masks;
+	cpumask_var_t nmsk, *node_to_cpumask;
+
+	/*
+	 * If there aren't any vectors left after applying the pre/post
+	 * vectors don't bother with assigning affinity.
+	 */
+	if (nvecs == affd->pre_vectors + affd->post_vectors)
+		return NULL;
+
+	if (!zalloc_cpumask_var(&nmsk, GFP_KERNEL))
+		return NULL;
+
+	masks = kcalloc(nvecs, sizeof(*masks), GFP_KERNEL);
+	if (!masks)
+		goto out;
+
+	node_to_cpumask = alloc_node_to_cpumask();
+	if (!node_to_cpumask)
+		goto out;
+
+	/* Fill out vectors at the beginning that don't need affinity */
+	for (curvec = 0; curvec < affd->pre_vectors; curvec++)
+		cpumask_copy(masks + curvec, irq_default_affinity);
+
+	/* Stabilize the cpumasks */
+	get_online_cpus();
+	build_node_to_cpumask(node_to_cpumask);
+	curvec += irq_build_affinity_masks(nvecs, affd, node_to_cpumask,
+					   cpu_possible_mask, nmsk, masks);
 	put_online_cpus();
 
 	/* Fill out vectors at the end that don't need affinity */
-- 
2.9.5


* [PATCH V2 4/5] genirq/affinity: support to do irq vectors spread starting from any vector
  2018-03-05  3:13 [PATCH V2 0/5] genirq/affinity: irq vector spread among online CPUs as far as possible Ming Lei
                   ` (2 preceding siblings ...)
  2018-03-05  3:13 ` [PATCH V2 3/5] genirq/affinity: move actual irq vector spread into one helper Ming Lei
@ 2018-03-05  3:13 ` Ming Lei
  2018-03-05  3:13 ` [PATCH V2 5/5] genirq/affinity: irq vector spread among online CPUs as far as possible Ming Lei
  4 siblings, 0 replies; 9+ messages in thread
From: Ming Lei @ 2018-03-05  3:13 UTC (permalink / raw)
  To: Jens Axboe, Christoph Hellwig, Thomas Gleixner, linux-kernel
  Cc: linux-block, Laurence Oberman, Ming Lei

Two parameters (start_vec, affv) are introduced to
irq_build_affinity_masks(), so that this helper can build the affinity of
each irq vector starting from the vector 'start_vec' and handle at most
'affv' vectors.

This is required for doing the two-stage irq vector spread among all
possible CPUs.
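
As a toy walk-through of the wrap-around this enables (the numbers below
are made up, not taken from the patch): with pre_vectors = 1 and affv = 4,
a spread starting from vector 3 wraps back to vector 1 once it reaches
last_affv and stops after 'affv' vectors have been handled:

	#include <stdio.h>

	int main(void)
	{
		int pre_vectors = 1, affv = 4;
		int last_affv = affv + pre_vectors;	/* 5 */
		int curvec = 3, done = 0;		/* start_vec = 3 */

		while (1) {
			printf("fill mask for vector %d\n", curvec);
			if (++done == affv)
				break;
			if (++curvec == last_affv)
				curvec = pre_vectors;	/* wrap around */
		}
		/* prints vectors 3, 4, 1, 2 */
		return 0;
	}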

Cc: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Ming Lei <ming.lei@redhat.com>
---
 kernel/irq/affinity.c | 23 +++++++++++++++--------
 1 file changed, 15 insertions(+), 8 deletions(-)

diff --git a/kernel/irq/affinity.c b/kernel/irq/affinity.c
index 256adf92ec62..a8c5d07890a6 100644
--- a/kernel/irq/affinity.c
+++ b/kernel/irq/affinity.c
@@ -94,17 +94,17 @@ static int get_nodes_in_cpumask(const cpumask_var_t *node_to_cpumask,
 	return nodes;
 }
 
-static int irq_build_affinity_masks(int nvecs, const struct irq_affinity *affd,
+static int irq_build_affinity_masks(const struct irq_affinity *affd,
+				    const int start_vec, const int affv,
 				    const cpumask_var_t *node_to_cpumask,
 				    const struct cpumask *cpu_mask,
 				    struct cpumask *nmsk,
 				    struct cpumask *masks)
 {
-	int affv = nvecs - affd->pre_vectors - affd->post_vectors;
 	int last_affv = affv + affd->pre_vectors;
-	int curvec = affd->pre_vectors;
+	int curvec = start_vec;
 	nodemask_t nodemsk = NODE_MASK_NONE;
-	int n, nodes, cpus_per_vec, extra_vecs;
+	int n, nodes, cpus_per_vec, extra_vecs, done = 0;
 
 	nodes = get_nodes_in_cpumask(node_to_cpumask, cpu_mask, &nodemsk);
 
@@ -116,8 +116,10 @@ static int irq_build_affinity_masks(int nvecs, const struct irq_affinity *affd,
 		for_each_node_mask(n, nodemsk) {
 			cpumask_copy(masks + curvec,
 				     node_to_cpumask[n]);
-			if (++curvec == last_affv)
+			if (++done == affv)
 				break;
+			if (++curvec == last_affv)
+				curvec = affd->pre_vectors;
 		}
 		goto out;
 	}
@@ -150,13 +152,16 @@ static int irq_build_affinity_masks(int nvecs, const struct irq_affinity *affd,
 			irq_spread_init_one(masks + curvec, nmsk, cpus_per_vec);
 		}
 
-		if (curvec >= last_affv)
+		done += v;
+		if (done >= affv)
 			break;
+		if (curvec >= last_affv)
+			curvec = affd->pre_vectors;
 		--nodes;
 	}
 
 out:
-	return curvec - affd->pre_vectors;
+	return done;
 }
 
 /**
@@ -169,6 +174,7 @@ static int irq_build_affinity_masks(int nvecs, const struct irq_affinity *affd,
 struct cpumask *
 irq_create_affinity_masks(int nvecs, const struct irq_affinity *affd)
 {
+	int affv = nvecs - affd->pre_vectors - affd->post_vectors;
 	int curvec;
 	struct cpumask *masks;
 	cpumask_var_t nmsk, *node_to_cpumask;
@@ -198,7 +204,8 @@ irq_create_affinity_masks(int nvecs, const struct irq_affinity *affd)
 	/* Stabilize the cpumasks */
 	get_online_cpus();
 	build_node_to_cpumask(node_to_cpumask);
-	curvec += irq_build_affinity_masks(nvecs, affd, node_to_cpumask,
+	curvec += irq_build_affinity_masks(affd, curvec, affv,
+					   node_to_cpumask,
 					   cpu_possible_mask, nmsk, masks);
 	put_online_cpus();
 
-- 
2.9.5


* [PATCH V2 5/5] genirq/affinity: irq vector spread among online CPUs as far as possible
  2018-03-05  3:13 [PATCH V2 0/5] genirq/affinity: irq vector spread among online CPUs as far as possible Ming Lei
                   ` (3 preceding siblings ...)
  2018-03-05  3:13 ` [PATCH V2 4/5] genirq/affinity: support to do irq vectors spread starting from any vector Ming Lei
@ 2018-03-05  3:13 ` Ming Lei
  4 siblings, 0 replies; 9+ messages in thread
From: Ming Lei @ 2018-03-05  3:13 UTC (permalink / raw)
  To: Jens Axboe, Christoph Hellwig, Thomas Gleixner, linux-kernel
  Cc: linux-block, Laurence Oberman, Ming Lei

Commit 84676c1f21 ("genirq/affinity: assign vectors to all possible CPUs")
may cause an irq vector to be assigned to CPUs that are all offline, and
this kind of assignment can leave far fewer irq vectors mapped to online
CPUs, so performance may get hurt.

For example, on an 8-core system with CPUs 0~3 online and 4~7 offline/not
present, see 'lscpu':

	[ming@box]$lscpu
	Architecture:          x86_64
	CPU op-mode(s):        32-bit, 64-bit
	Byte Order:            Little Endian
	CPU(s):                4
	On-line CPU(s) list:   0-3
	Thread(s) per core:    1
	Core(s) per socket:    2
	Socket(s):             2
	NUMA node(s):          2
	...
	NUMA node0 CPU(s):     0-3
	NUMA node1 CPU(s):
	...

For example, one device has 4 queues:

1) before 84676c1f21 ("genirq/affinity: assign vectors to all possible CPUs")
	irq 39, cpu list 0
	irq 40, cpu list 1
	irq 41, cpu list 2
	irq 42, cpu list 3

2) after 84676c1f21 ("genirq/affinity: assign vectors to all possible CPUs")
	irq 39, cpu list 0-2
	irq 40, cpu list 3-4,6
	irq 41, cpu list 5
	irq 42, cpu list 7

3) after applying this patch against V4.15+:
	irq 39, cpu list 0,4
	irq 40, cpu list 1,6
	irq 41, cpu list 2,5
	irq 42, cpu list 3,7

This patch tries to spread irq vectors among online CPUs as far as
possible by doing the spread in two stages.

The assignment in 3) above isn't the optimal result from a NUMA point of
view, but it leaves more irq vectors with online CPUs mapped; given that
in reality one CPU should be enough to handle one irq vector, it is
better to do it this way.

Cc: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Reported-by: Laurence Oberman <loberman@redhat.com>
Signed-off-by: Ming Lei <ming.lei@redhat.com>
---
 kernel/irq/affinity.c | 35 +++++++++++++++++++++++++++++------
 1 file changed, 29 insertions(+), 6 deletions(-)

diff --git a/kernel/irq/affinity.c b/kernel/irq/affinity.c
index a8c5d07890a6..aa2635416fc5 100644
--- a/kernel/irq/affinity.c
+++ b/kernel/irq/affinity.c
@@ -106,6 +106,9 @@ static int irq_build_affinity_masks(const struct irq_affinity *affd,
 	nodemask_t nodemsk = NODE_MASK_NONE;
 	int n, nodes, cpus_per_vec, extra_vecs, done = 0;
 
+	if (!cpumask_weight(cpu_mask))
+		return 0;
+
 	nodes = get_nodes_in_cpumask(node_to_cpumask, cpu_mask, &nodemsk);
 
 	/*
@@ -175,9 +178,9 @@ struct cpumask *
 irq_create_affinity_masks(int nvecs, const struct irq_affinity *affd)
 {
 	int affv = nvecs - affd->pre_vectors - affd->post_vectors;
-	int curvec;
+	int curvec, vecs_offline, vecs_online;
 	struct cpumask *masks;
-	cpumask_var_t nmsk, *node_to_cpumask;
+	cpumask_var_t nmsk, cpu_mask, *node_to_cpumask;
 
 	/*
 	 * If there aren't any vectors left after applying the pre/post
@@ -193,9 +196,12 @@ irq_create_affinity_masks(int nvecs, const struct irq_affinity *affd)
 	if (!masks)
 		goto out;
 
+	if (!alloc_cpumask_var(&cpu_mask, GFP_KERNEL))
+		goto out;
+
 	node_to_cpumask = alloc_node_to_cpumask();
 	if (!node_to_cpumask)
-		goto out;
+		goto out_free_cpu_mask;
 
 	/* Fill out vectors at the beginning that don't need affinity */
 	for (curvec = 0; curvec < affd->pre_vectors; curvec++)
@@ -204,15 +210,32 @@ irq_create_affinity_masks(int nvecs, const struct irq_affinity *affd)
 	/* Stabilize the cpumasks */
 	get_online_cpus();
 	build_node_to_cpumask(node_to_cpumask);
-	curvec += irq_build_affinity_masks(affd, curvec, affv,
-					   node_to_cpumask,
-					   cpu_possible_mask, nmsk, masks);
+	/* spread on online CPUs starting from the vector of affd->pre_vectors */
+	vecs_online = irq_build_affinity_masks(affd, curvec, affv,
+					       node_to_cpumask,
+					       cpu_online_mask, nmsk, masks);
+
+	/* spread on offline CPUs starting from the next vector to be handled */
+	if (vecs_online >= affv)
+		curvec = affd->pre_vectors;
+	else
+		curvec = affd->pre_vectors + vecs_online;
+	cpumask_andnot(cpu_mask, cpu_possible_mask, cpu_online_mask);
+	vecs_offline = irq_build_affinity_masks(affd, curvec, affv,
+						node_to_cpumask,
+					        cpu_mask, nmsk, masks);
 	put_online_cpus();
 
 	/* Fill out vectors at the end that don't need affinity */
+	if (vecs_online + vecs_offline >= affv)
+		curvec = affv + affd->pre_vectors;
+	else
+		curvec = affd->pre_vectors + vecs_online + vecs_offline;
 	for (; curvec < nvecs; curvec++)
 		cpumask_copy(masks + curvec, irq_default_affinity);
 	free_node_to_cpumask(node_to_cpumask);
+out_free_cpu_mask:
+	free_cpumask_var(cpu_mask);
 out:
 	free_cpumask_var(nmsk);
 	return masks;
-- 
2.9.5


* Re: [PATCH V2 3/5] genirq/affinity: move actual irq vector spread into one helper
  2018-03-05  3:13 ` [PATCH V2 3/5] genirq/affinity: move actual irq vector spread into one helper Ming Lei
@ 2018-03-05 16:28   ` kbuild test robot
  2018-03-08  7:48     ` Christoph Hellwig
  2018-03-08 10:05     ` Ming Lei
  0 siblings, 2 replies; 9+ messages in thread
From: kbuild test robot @ 2018-03-05 16:28 UTC (permalink / raw)
  To: Ming Lei
  Cc: kbuild-all, Jens Axboe, Christoph Hellwig, Thomas Gleixner,
	linux-kernel, linux-block, Laurence Oberman, Ming Lei

[-- Attachment #1: Type: text/plain, Size: 3219 bytes --]

Hi Ming,

Thank you for the patch! Perhaps something to improve:

[auto build test WARNING on tip/irq/core]
[also build test WARNING on v4.16-rc4 next-20180305]
[if your patch is applied to the wrong git tree, please drop us a note to help improve the system]

url:    https://github.com/0day-ci/linux/commits/Ming-Lei/genirq-affinity-irq-vector-spread-among-online-CPUs-as-far-as-possible/20180305-184912
config: i386-randconfig-a1-201809 (attached as .config)
compiler: gcc-4.9 (Debian 4.9.4-2) 4.9.4
reproduce:
        # save the attached .config to linux build tree
        make ARCH=i386 

All warnings (new ones prefixed by >>):

   kernel/irq/affinity.c: In function 'irq_create_affinity_masks':
>> kernel/irq/affinity.c:201:50: warning: passing argument 3 of 'irq_build_affinity_masks' from incompatible pointer type
     curvec += irq_build_affinity_masks(nvecs, affd, node_to_cpumask,
                                                     ^
   kernel/irq/affinity.c:97:12: note: expected 'const struct cpumask (*)[1]' but argument is of type 'struct cpumask (*)[1]'
    static int irq_build_affinity_masks(int nvecs, const struct irq_affinity *affd,
               ^

vim +/irq_build_affinity_masks +201 kernel/irq/affinity.c

   161	
   162	/**
   163	 * irq_create_affinity_masks - Create affinity masks for multiqueue spreading
   164	 * @nvecs:	The total number of vectors
   165	 * @affd:	Description of the affinity requirements
   166	 *
   167	 * Returns the masks pointer or NULL if allocation failed.
   168	 */
   169	struct cpumask *
   170	irq_create_affinity_masks(int nvecs, const struct irq_affinity *affd)
   171	{
   172		int curvec;
   173		struct cpumask *masks;
   174		cpumask_var_t nmsk, *node_to_cpumask;
   175	
   176		/*
   177		 * If there aren't any vectors left after applying the pre/post
   178		 * vectors don't bother with assigning affinity.
   179		 */
   180		if (nvecs == affd->pre_vectors + affd->post_vectors)
   181			return NULL;
   182	
   183		if (!zalloc_cpumask_var(&nmsk, GFP_KERNEL))
   184			return NULL;
   185	
   186		masks = kcalloc(nvecs, sizeof(*masks), GFP_KERNEL);
   187		if (!masks)
   188			goto out;
   189	
   190		node_to_cpumask = alloc_node_to_cpumask();
   191		if (!node_to_cpumask)
   192			goto out;
   193	
   194		/* Fill out vectors at the beginning that don't need affinity */
   195		for (curvec = 0; curvec < affd->pre_vectors; curvec++)
   196			cpumask_copy(masks + curvec, irq_default_affinity);
   197	
   198		/* Stabilize the cpumasks */
   199		get_online_cpus();
   200		build_node_to_cpumask(node_to_cpumask);
 > 201		curvec += irq_build_affinity_masks(nvecs, affd, node_to_cpumask,
   202						   cpu_possible_mask, nmsk, masks);
   203		put_online_cpus();
   204	
   205		/* Fill out vectors at the end that don't need affinity */
   206		for (; curvec < nvecs; curvec++)
   207			cpumask_copy(masks + curvec, irq_default_affinity);
   208		free_node_to_cpumask(node_to_cpumask);
   209	out:
   210		free_cpumask_var(nmsk);
   211		return masks;
   212	}
   213	

---
0-DAY kernel test infrastructure                Open Source Technology Center
https://lists.01.org/pipermail/kbuild-all                   Intel Corporation

[-- Attachment #2: .config.gz --]
[-- Type: application/gzip, Size: 29296 bytes --]


* Re: [PATCH V2 3/5] genirq/affinity: move actual irq vector spread into one helper
  2018-03-05 16:28   ` kbuild test robot
@ 2018-03-08  7:48     ` Christoph Hellwig
  2018-03-08 10:05     ` Ming Lei
  1 sibling, 0 replies; 9+ messages in thread
From: Christoph Hellwig @ 2018-03-08  7:48 UTC (permalink / raw)
  To: kbuild test robot
  Cc: Ming Lei, kbuild-all, Jens Axboe, Christoph Hellwig,
	Thomas Gleixner, linux-kernel, linux-block, Laurence Oberman

Can you fix this up and resend?

On Tue, Mar 06, 2018 at 12:28:32AM +0800, kbuild test robot wrote:
> Hi Ming,
> 
> Thank you for the patch! Perhaps something to improve:
> 
> [auto build test WARNING on tip/irq/core]
> [also build test WARNING on v4.16-rc4 next-20180305]
> [if your patch is applied to the wrong git tree, please drop us a note to help improve the system]
> 
> url:    https://github.com/0day-ci/linux/commits/Ming-Lei/genirq-affinity-irq-vector-spread-among-online-CPUs-as-far-as-possible/20180305-184912
> config: i386-randconfig-a1-201809 (attached as .config)
> compiler: gcc-4.9 (Debian 4.9.4-2) 4.9.4
> reproduce:
>         # save the attached .config to linux build tree
>         make ARCH=i386 
> 
> All warnings (new ones prefixed by >>):
> 
>    kernel/irq/affinity.c: In function 'irq_create_affinity_masks':
> >> kernel/irq/affinity.c:201:50: warning: passing argument 3 of 'irq_build_affinity_masks' from incompatible pointer type
>      curvec += irq_build_affinity_masks(nvecs, affd, node_to_cpumask,
>                                                      ^
>    kernel/irq/affinity.c:97:12: note: expected 'const struct cpumask (*)[1]' but argument is of type 'struct cpumask (*)[1]'
>     static int irq_build_affinity_masks(int nvecs, const struct irq_affinity *affd,
>                ^
> 
> vim +/irq_build_affinity_masks +201 kernel/irq/affinity.c
> 
>    161	
>    162	/**
>    163	 * irq_create_affinity_masks - Create affinity masks for multiqueue spreading
>    164	 * @nvecs:	The total number of vectors
>    165	 * @affd:	Description of the affinity requirements
>    166	 *
>    167	 * Returns the masks pointer or NULL if allocation failed.
>    168	 */
>    169	struct cpumask *
>    170	irq_create_affinity_masks(int nvecs, const struct irq_affinity *affd)
>    171	{
>    172		int curvec;
>    173		struct cpumask *masks;
>    174		cpumask_var_t nmsk, *node_to_cpumask;
>    175	
>    176		/*
>    177		 * If there aren't any vectors left after applying the pre/post
>    178		 * vectors don't bother with assigning affinity.
>    179		 */
>    180		if (nvecs == affd->pre_vectors + affd->post_vectors)
>    181			return NULL;
>    182	
>    183		if (!zalloc_cpumask_var(&nmsk, GFP_KERNEL))
>    184			return NULL;
>    185	
>    186		masks = kcalloc(nvecs, sizeof(*masks), GFP_KERNEL);
>    187		if (!masks)
>    188			goto out;
>    189	
>    190		node_to_cpumask = alloc_node_to_cpumask();
>    191		if (!node_to_cpumask)
>    192			goto out;
>    193	
>    194		/* Fill out vectors at the beginning that don't need affinity */
>    195		for (curvec = 0; curvec < affd->pre_vectors; curvec++)
>    196			cpumask_copy(masks + curvec, irq_default_affinity);
>    197	
>    198		/* Stabilize the cpumasks */
>    199		get_online_cpus();
>    200		build_node_to_cpumask(node_to_cpumask);
>  > 201		curvec += irq_build_affinity_masks(nvecs, affd, node_to_cpumask,
>    202						   cpu_possible_mask, nmsk, masks);
>    203		put_online_cpus();
>    204	
>    205		/* Fill out vectors at the end that don't need affinity */
>    206		for (; curvec < nvecs; curvec++)
>    207			cpumask_copy(masks + curvec, irq_default_affinity);
>    208		free_node_to_cpumask(node_to_cpumask);
>    209	out:
>    210		free_cpumask_var(nmsk);
>    211		return masks;
>    212	}
>    213	
> 
> ---
> 0-DAY kernel test infrastructure                Open Source Technology Center
> https://lists.01.org/pipermail/kbuild-all                   Intel Corporation


---end quoted text---


* Re: [PATCH V2 3/5] genirq/affinity: move actual irq vector spread into one helper
  2018-03-05 16:28   ` kbuild test robot
  2018-03-08  7:48     ` Christoph Hellwig
@ 2018-03-08 10:05     ` Ming Lei
  1 sibling, 0 replies; 9+ messages in thread
From: Ming Lei @ 2018-03-08 10:05 UTC (permalink / raw)
  To: kbuild test robot
  Cc: kbuild-all, Jens Axboe, Christoph Hellwig, Thomas Gleixner,
	linux-kernel, linux-block, Laurence Oberman

On Tue, Mar 06, 2018 at 12:28:32AM +0800, kbuild test robot wrote:
> Hi Ming,
> 
> Thank you for the patch! Perhaps something to improve:
> 
> [auto build test WARNING on tip/irq/core]
> [also build test WARNING on v4.16-rc4 next-20180305]
> [if your patch is applied to the wrong git tree, please drop us a note to help improve the system]
> 
> url:    https://github.com/0day-ci/linux/commits/Ming-Lei/genirq-affinity-irq-vector-spread-among-online-CPUs-as-far-as-possible/20180305-184912
> config: i386-randconfig-a1-201809 (attached as .config)
> compiler: gcc-4.9 (Debian 4.9.4-2) 4.9.4
> reproduce:
>         # save the attached .config to linux build tree
>         make ARCH=i386 
> 
> All warnings (new ones prefixed by >>):
> 
>    kernel/irq/affinity.c: In function 'irq_create_affinity_masks':
> >> kernel/irq/affinity.c:201:50: warning: passing argument 3 of 'irq_build_affinity_masks' from incompatible pointer type
>      curvec += irq_build_affinity_masks(nvecs, affd, node_to_cpumask,
>                                                      ^
>    kernel/irq/affinity.c:97:12: note: expected 'const struct cpumask (*)[1]' but argument is of type 'struct cpumask (*)[1]'
>     static int irq_build_affinity_masks(int nvecs, const struct irq_affinity *affd,

Looks like this warning can only be triggered on ARCH=i386 with gcc-4.X.

I can't reproduce it when building for other ARCHs, and can't reproduce
it with gcc-6 either.
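
Presumably (my own reading, not verified) that is because cpumask_var_t
is an array typedef when CONFIG_CPUMASK_OFFSTACK is disabled, which is
what the '(*)[1]' in the warning shows, and gcc-4.x doesn't accept the
implicit conversion to the const-qualified array pointer while gcc-6
apparently does. A minimal userspace sketch of the same pattern:

	/* not kernel code, just mimics the !CONFIG_CPUMASK_OFFSTACK typedef */
	struct cpumask { unsigned long bits[4]; };
	typedef struct cpumask cpumask_var_t[1];

	static int get_nodes(const cpumask_var_t *node_to_cpumask)
	{
		return node_to_cpumask != 0;
	}

	int main(void)
	{
		static cpumask_var_t masks[2];

		/*
		 * struct cpumask (*)[1] passed where
		 * const struct cpumask (*)[1] is expected: gcc-4.9 warns
		 * "passing argument 1 ... from incompatible pointer type"
		 * here, gcc-6 accepts it.
		 */
		return get_nodes(masks);
	}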


Thanks,
Ming

