linux-block.vger.kernel.org archive mirror
* [PATCH] block: build default queue map via irq_create_affinity_masks
@ 2021-06-30  3:51 Ming Lei
  2021-06-30  7:38 ` Ming Lei
                   ` (2 more replies)
  0 siblings, 3 replies; 6+ messages in thread
From: Ming Lei @ 2021-06-30  3:51 UTC (permalink / raw)
  To: Jens Axboe; +Cc: linux-block, Ming Lei

The default queue mapping builder of blk_mq_map_queues doesn't take NUMA
topology into account, so the resulting mapping can be poor: CPUs
belonging to different NUMA nodes may be assigned to the same queue. It
has been observed that IOPS drops by ~30% when two jobs run on the same
hctx of null_blk from two CPUs on different NUMA nodes, compared with
two CPUs on the same node.

Address the issue by building the default queue mapping with
irq_create_affinity_masks(), so that the NUMA-aware spreading already
used for managed irqs is re-used here.

Many drivers may benefit from the change, such as nvme pci poll,
nvme tcp, ...

Signed-off-by: Ming Lei <ming.lei@redhat.com>
---
 block/blk-mq-cpumap.c | 60 +++++++++----------------------------------
 1 file changed, 12 insertions(+), 48 deletions(-)

diff --git a/block/blk-mq-cpumap.c b/block/blk-mq-cpumap.c
index 3db84d3197f1..946e373296a3 100644
--- a/block/blk-mq-cpumap.c
+++ b/block/blk-mq-cpumap.c
@@ -10,67 +10,31 @@
 #include <linux/mm.h>
 #include <linux/smp.h>
 #include <linux/cpu.h>
+#include <linux/interrupt.h>
 
 #include <linux/blk-mq.h>
 #include "blk.h"
 #include "blk-mq.h"
 
-static int queue_index(struct blk_mq_queue_map *qmap,
-		       unsigned int nr_queues, const int q)
-{
-	return qmap->queue_offset + (q % nr_queues);
-}
-
-static int get_first_sibling(unsigned int cpu)
-{
-	unsigned int ret;
-
-	ret = cpumask_first(topology_sibling_cpumask(cpu));
-	if (ret < nr_cpu_ids)
-		return ret;
-
-	return cpu;
-}
-
 int blk_mq_map_queues(struct blk_mq_queue_map *qmap)
 {
+	struct irq_affinity_desc *masks = NULL;
+	struct irq_affinity affd = {0};
 	unsigned int *map = qmap->mq_map;
 	unsigned int nr_queues = qmap->nr_queues;
-	unsigned int cpu, first_sibling, q = 0;
+	unsigned int q;
 
-	for_each_possible_cpu(cpu)
-		map[cpu] = -1;
+	masks = irq_create_affinity_masks(nr_queues, &affd);
+	if (!masks)
+		return -ENOMEM;
 
-	/*
-	 * Spread queues among present CPUs first for minimizing
-	 * count of dead queues which are mapped by all un-present CPUs
-	 */
-	for_each_present_cpu(cpu) {
-		if (q >= nr_queues)
-			break;
-		map[cpu] = queue_index(qmap, nr_queues, q++);
-	}
+	for (q = 0; q < nr_queues; q++) {
+		unsigned int cpu;
 
-	for_each_possible_cpu(cpu) {
-		if (map[cpu] != -1)
-			continue;
-		/*
-		 * First do sequential mapping between CPUs and queues.
-		 * In case we still have CPUs to map, and we have some number of
-		 * threads per cores then map sibling threads to the same queue
-		 * for performance optimizations.
-		 */
-		if (q < nr_queues) {
-			map[cpu] = queue_index(qmap, nr_queues, q++);
-		} else {
-			first_sibling = get_first_sibling(cpu);
-			if (first_sibling == cpu)
-				map[cpu] = queue_index(qmap, nr_queues, q++);
-			else
-				map[cpu] = map[first_sibling];
-		}
+		for_each_cpu(cpu, &masks[q].mask)
+			map[cpu] = q;
 	}
-
+	kfree(masks);
 	return 0;
 }
 EXPORT_SYMBOL_GPL(blk_mq_map_queues);
-- 
2.31.1
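
For reference, the kernel-internal API the patch builds on is declared in
<linux/interrupt.h>; at the time of this series the relevant pieces look
roughly like this (reproduced for context, not part of the patch):

	struct irq_affinity_desc {
		struct cpumask	mask;
		unsigned int	is_managed : 1;
	};

	struct irq_affinity_desc *
	irq_create_affinity_masks(unsigned int nvec, struct irq_affinity *affd);

With a zero-initialized struct irq_affinity (no pre/post vectors, a single
vector set covering everything), the returned array spreads all possible
CPUs across the nvec masks, grouping by NUMA node first and keeping
sibling threads together within a node.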



* Re: [PATCH] block: build default queue map via irq_create_affinity_masks
  2021-06-30  3:51 [PATCH] block: build default queue map via irq_create_affinity_masks Ming Lei
@ 2021-06-30  7:38 ` Ming Lei
  2021-07-01  9:54 ` Christoph Hellwig
  2021-07-01 10:06 ` John Garry
  2 siblings, 0 replies; 6+ messages in thread
From: Ming Lei @ 2021-06-30  7:38 UTC (permalink / raw)
  To: Jens Axboe; +Cc: linux-block

On Wed, Jun 30, 2021 at 11:51:53AM +0800, Ming Lei wrote:
> The default queue mapping builder of blk_mq_map_queues doesn't take NUMA
> topology into account, so the resulting mapping can be poor: CPUs
> belonging to different NUMA nodes may be assigned to the same queue. It
> has been observed that IOPS drops by ~30% when two jobs run on the same
> hctx of null_blk from two CPUs on different NUMA nodes, compared with
> two CPUs on the same node.
> 
> Address the issue by building the default queue mapping with
> irq_create_affinity_masks(), so that the NUMA-aware spreading already
> used for managed irqs is re-used here.
> 
> Many drivers may benefit from the change, such as nvme pci poll,
> nvme tcp, ...
> 
> [...]
> 
> +	for (q = 0; q < nr_queues; q++) {
> +		unsigned int cpu;
>  
> -	for_each_possible_cpu(cpu) {
> -		if (map[cpu] != -1)
> -			continue;
> -		/*
> -		 * First do sequential mapping between CPUs and queues.
> -		 * In case we still have CPUs to map, and we have some number of
> -		 * threads per cores then map sibling threads to the same queue
> -		 * for performance optimizations.
> -		 */
> -		if (q < nr_queues) {
> -			map[cpu] = queue_index(qmap, nr_queues, q++);
> -		} else {
> -			first_sibling = get_first_sibling(cpu);
> -			if (first_sibling == cpu)
> -				map[cpu] = queue_index(qmap, nr_queues, q++);
> -			else
> -				map[cpu] = map[first_sibling];
> -		}
> +		for_each_cpu(cpu, &masks[q].mask)
> +			map[cpu] = q;

Oops, the above line should have been:

			map[cpu] = qmap->queue_offset + q;
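
With the fix folded in, the loop would read (a reconstruction from the
hunks above, not the committed version):

	for (q = 0; q < nr_queues; q++) {
		unsigned int cpu;

		/* honour the map's queue offset, e.g. for poll queue sets */
		for_each_cpu(cpu, &masks[q].mask)
			map[cpu] = qmap->queue_offset + q;
	}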

Thanks,
Ming



* Re: [PATCH] block: build default queue map via irq_create_affinity_masks
  2021-06-30  3:51 [PATCH] block: build default queue map via irq_create_affinity_masks Ming Lei
  2021-06-30  7:38 ` Ming Lei
@ 2021-07-01  9:54 ` Christoph Hellwig
  2021-07-08 13:56   ` Thomas Gleixner
  2021-07-01 10:06 ` John Garry
  2 siblings, 1 reply; 6+ messages in thread
From: Christoph Hellwig @ 2021-07-01  9:54 UTC (permalink / raw)
  To: Ming Lei; +Cc: Jens Axboe, linux-block, Thomas Gleixner

On Wed, Jun 30, 2021 at 11:51:53AM +0800, Ming Lei wrote:
> The default queue mapping builder of blk_mq_map_queues doesn't take NUMA
> topology into account, so the resulting mapping can be poor: CPUs
> belonging to different NUMA nodes may be assigned to the same queue. It
> has been observed that IOPS drops by ~30% when two jobs run on the same
> hctx of null_blk from two CPUs on different NUMA nodes, compared with
> two CPUs on the same node.
> 
> Address the issue by building the default queue mapping with
> irq_create_affinity_masks(), so that the NUMA-aware spreading already
> used for managed irqs is re-used here.

This looks sensible, but I'm adding Thomas to see if he is fine with
using an irq function like this. Maybe it needs to move out of the
irq code and grow a new name if we use it like this.



* Re: [PATCH] block: build default queue map via irq_create_affinity_masks
  2021-06-30  3:51 [PATCH] block: build default queue map via irq_create_affinity_masks Ming Lei
  2021-06-30  7:38 ` Ming Lei
  2021-07-01  9:54 ` Christoph Hellwig
@ 2021-07-01 10:06 ` John Garry
  2021-07-02  1:10   ` Ming Lei
  2 siblings, 1 reply; 6+ messages in thread
From: John Garry @ 2021-07-01 10:06 UTC (permalink / raw)
  To: Ming Lei, Jens Axboe; +Cc: linux-block

On 30/06/2021 04:51, Ming Lei wrote:
> The default queue mapping builder of blk_mq_map_queues doesn't take NUMA
> topology into account, so the resulting mapping can be poor: CPUs
> belonging to different NUMA nodes may be assigned to the same queue. It
> has been observed that IOPS drops by ~30% when two jobs run on the same
> hctx of null_blk from two CPUs on different NUMA nodes, compared with
> two CPUs on the same node.
> 
> Address the issue by building the default queue mapping with
> irq_create_affinity_masks(), so that the NUMA-aware spreading already
> used for managed irqs is re-used here.
> 
> Many drivers may benefit from the change, such as nvme pci poll,
> nvme tcp, ...
> 
> Signed-off-by: Ming Lei <ming.lei@redhat.com>
> ---
>   block/blk-mq-cpumap.c | 60 +++++++++----------------------------------
>   1 file changed, 12 insertions(+), 48 deletions(-)
> 
> diff --git a/block/blk-mq-cpumap.c b/block/blk-mq-cpumap.c
> index 3db84d3197f1..946e373296a3 100644
> --- a/block/blk-mq-cpumap.c
> +++ b/block/blk-mq-cpumap.c
> @@ -10,67 +10,31 @@
>   #include <linux/mm.h>
>   #include <linux/smp.h>
>   #include <linux/cpu.h>
> +#include <linux/interrupt.h>

Similar to what Christoph mentioned, it seems strange to be including
interrupt.h here.

>   
>   #include <linux/blk-mq.h>
>   #include "blk.h"
>   #include "blk-mq.h"
>   
> -static int queue_index(struct blk_mq_queue_map *qmap,
> -		       unsigned int nr_queues, const int q)
> -{
> -	return qmap->queue_offset + (q % nr_queues);
> -}
> -
> -static int get_first_sibling(unsigned int cpu)
> -{
> -	unsigned int ret;
> -
> -	ret = cpumask_first(topology_sibling_cpumask(cpu));
> -	if (ret < nr_cpu_ids)
> -		return ret;
> -
> -	return cpu;
> -}
> -
>   int blk_mq_map_queues(struct blk_mq_queue_map *qmap)
>   {
> +	struct irq_affinity_desc *masks = NULL;
> +	struct irq_affinity affd = {0};

Should this be simply {}? I forget...

>   	unsigned int *map = qmap->mq_map;
>   	unsigned int nr_queues = qmap->nr_queues;
> -	unsigned int cpu, first_sibling, q = 0;
> +	unsigned int q;
>   
> -	for_each_possible_cpu(cpu)
> -		map[cpu] = -1;
> +	masks = irq_create_affinity_masks(nr_queues, &affd);
> +	if (!masks)
> +		return -ENOMEM;

Should we fall back on something else here? It seems that this function
can fail for reasons other than being out of memory.
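
Purely as a hypothetical sketch (not something this patch proposes), the
old sequential spread could be kept as a last resort instead of erroring
out:

	if (!masks) {
		unsigned int cpu;

		/* hypothetical fallback: plain round-robin over queues */
		for_each_possible_cpu(cpu)
			map[cpu] = qmap->queue_offset + (cpu % nr_queues);
		return 0;
	}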

Thanks,
John


* Re: [PATCH] block: build default queue map via irq_create_affinity_masks
  2021-07-01 10:06 ` John Garry
@ 2021-07-02  1:10   ` Ming Lei
  0 siblings, 0 replies; 6+ messages in thread
From: Ming Lei @ 2021-07-02  1:10 UTC (permalink / raw)
  To: John Garry; +Cc: Thomas Gleixner, Jens Axboe, linux-block, Christoph Hellwig

On Thu, Jul 01, 2021 at 11:06:57AM +0100, John Garry wrote:
> On 30/06/2021 04:51, Ming Lei wrote:
> > The default queue mapping builder of blk_mq_map_queues doesn't take NUMA
> > topology into account, so the resulting mapping can be poor: CPUs
> > belonging to different NUMA nodes may be assigned to the same queue. It
> > has been observed that IOPS drops by ~30% when two jobs run on the same
> > hctx of null_blk from two CPUs on different NUMA nodes, compared with
> > two CPUs on the same node.
> > 
> > Address the issue by building the default queue mapping with
> > irq_create_affinity_masks(), so that the NUMA-aware spreading already
> > used for managed irqs is re-used here.
> > 
> > Many drivers may benefit from the change, such as nvme pci poll,
> > nvme tcp, ...
> > Signed-off-by: Ming Lei <ming.lei@redhat.com>
> > ---
> >   block/blk-mq-cpumap.c | 60 +++++++++----------------------------------
> >   1 file changed, 12 insertions(+), 48 deletions(-)
> > 
> > diff --git a/block/blk-mq-cpumap.c b/block/blk-mq-cpumap.c
> > index 3db84d3197f1..946e373296a3 100644
> > --- a/block/blk-mq-cpumap.c
> > +++ b/block/blk-mq-cpumap.c
> > @@ -10,67 +10,31 @@
> >   #include <linux/mm.h>
> >   #include <linux/smp.h>
> >   #include <linux/cpu.h>
> > +#include <linux/interrupt.h>
> 
> Similar to what Christoph mentioned, it seems strange to be including
> interrupt.h here.

The ideal way would be to abstract the affinity-building code and move
it into lib/, but that requires refactoring kernel/irq/affinity.c a bit.

Also, each queue here means one blk-mq hw queue, so it is not too
strange to associate it with an interrupt and re-use the interrupt
affinity building code.

Let's see how Thomas thinks about this usage.
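
If that refactoring happened, a hypothetical lib/ interface might look
like this (name and shape invented here purely for illustration):

	/*
	 * Spread all possible CPUs evenly into @numgrps groups:
	 * NUMA nodes first, sibling threads kept together in a node.
	 */
	struct cpumask *group_cpus_evenly(unsigned int numgrps);

blk_mq_map_queues() could then call it directly, without pulling in the
irq-specific types at all.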

> 
> >   #include <linux/blk-mq.h>
> >   #include "blk.h"
> >   #include "blk-mq.h"
> > -static int queue_index(struct blk_mq_queue_map *qmap,
> > -		       unsigned int nr_queues, const int q)
> > -{
> > -	return qmap->queue_offset + (q % nr_queues);
> > -}
> > -
> > -static int get_first_sibling(unsigned int cpu)
> > -{
> > -	unsigned int ret;
> > -
> > -	ret = cpumask_first(topology_sibling_cpumask(cpu));
> > -	if (ret < nr_cpu_ids)
> > -		return ret;
> > -
> > -	return cpu;
> > -}
> > -
> >   int blk_mq_map_queues(struct blk_mq_queue_map *qmap)
> >   {
> > +	struct irq_affinity_desc *masks = NULL;
> > +	struct irq_affinity affd = {0};
> 
> Should this be simply {}? I forget...

I think both should be fine; both forms can be found in existing kernel code.

> 
> >   	unsigned int *map = qmap->mq_map;
> >   	unsigned int nr_queues = qmap->nr_queues;
> > -	unsigned int cpu, first_sibling, q = 0;
> > +	unsigned int q;
> > -	for_each_possible_cpu(cpu)
> > -		map[cpu] = -1;
> > +	masks = irq_create_affinity_masks(nr_queues, &affd);
> > +	if (!masks)
> > +		return -ENOMEM;
> 
> Should we fall back on something else here? It seems that this function
> can fail for reasons other than being out of memory.

The default case is nr_sets == 1, so the only failure mode is running
out of memory; irq_create_affinity_masks() basically creates a cpumask
for each vector/queue and spreads the possible CPUs among those
vectors/queues.
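
As a purely illustrative example: on a machine with two NUMA nodes of
four CPUs each (CPUs 0-3 on node 0, CPUs 4-7 on node 1) and
nr_queues == 4, the spread comes out roughly as:

	masks[0].mask: CPUs 0-1   (node 0)
	masks[1].mask: CPUs 2-3   (node 0)
	masks[2].mask: CPUs 4-5   (node 1)
	masks[3].mask: CPUs 6-7   (node 1)

so no queue ends up shared between CPUs of different NUMA nodes.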


Thanks,
Ming



* Re: [PATCH] block: build default queue map via irq_create_affinity_masks
  2021-07-01  9:54 ` Christoph Hellwig
@ 2021-07-08 13:56   ` Thomas Gleixner
  0 siblings, 0 replies; 6+ messages in thread
From: Thomas Gleixner @ 2021-07-08 13:56 UTC (permalink / raw)
  To: Christoph Hellwig, Ming Lei; +Cc: Jens Axboe, linux-block

On Thu, Jul 01 2021 at 10:54, Christoph Hellwig wrote:
>> Address the issue by building the default queue mapping with
>> irq_create_affinity_masks(), so that the NUMA-aware spreading already
>> used for managed irqs is re-used here.
>
> This looks sensible, but I'm adding Thomas to see if he is fine with
> using an irq function like this. Maybe it needs to move out of the
> irq code and grow a new name if we use it like this.

Yes, making it less irq-centric and moving it into lib/ makes sense.

The usage sites can then have their own specific wrappers around it.

Thanks,

        tglx

