From: Christoph Hellwig <hch@infradead.org>
To: Ming Lei <ming.lei@redhat.com>
Cc: Jens Axboe <axboe@kernel.dk>,
	linux-block@vger.kernel.org, Thomas Gleixner <tglx@linutronix.de>
Subject: Re: [PATCH] block: build default queue map via irq_create_affinity_masks
Date: Thu, 1 Jul 2021 10:54:37 +0100
Message-ID: <YN2Q3XFSS7T23bih@infradead.org>
In-Reply-To: <20210630035153.2099975-1-ming.lei@redhat.com>

On Wed, Jun 30, 2021 at 11:51:53AM +0800, Ming Lei wrote:
> The default queue mapping builder of blk_mq_map_queues doesn't take NUMA
> topology into account, so the resulting mapping can be quite bad: CPUs
> belonging to different NUMA nodes get assigned to the same queue. We have
> observed that IOPS drops by ~30% when two jobs run against the same hctx
> of null_blk from two CPUs on different NUMA nodes, compared with running
> them from CPUs on the same node.
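
To make the failure mode concrete (a toy userspace illustration of the
current round-robin, numbers invented, not from the patch): with 8 CPUs
split 0-3 on node 0 and 4-7 on node 1, and 6 queues, the q % nr_queues
assignment wraps across the node boundary:

	/* toy demo of the old queue_index() round-robin; plain C */
	#include <stdio.h>

	int main(void)
	{
		unsigned int nr_cpus = 8, nr_queues = 6, cpu;

		for (cpu = 0; cpu < nr_cpus; cpu++)
			printf("cpu %u (node %u) -> queue %u\n",
			       cpu, cpu / 4, cpu % nr_queues);
		return 0;
	}

so CPU 0 (node 0) and CPU 6 (node 1) both land on queue 0, exactly the
cross-node sharing described above.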
> 
> Address the issue by building the default queue mapping via
> irq_create_affinity_masks(), so that blk_mq_map_queues() produces the
> same NUMA-aware spreading that managed irqs already get.

This looks sensible, but I'm adding Thomas to see if he is fine with
using an irq function like this.  Maybe it needs to move out of the
irq code and grow a new name if we use it this way.
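
For the sake of discussion, a purely hypothetical sketch of such a
renamed helper (name and placement invented here, untested; needs
<linux/interrupt.h> and <linux/slab.h>):

	/*
	 * Spread @numgrps groups across the possible CPUs, NUMA- and
	 * sibling-aware, without mentioning irqs in the interface.
	 * For now this simply defers to the existing irq affinity
	 * spreading code.
	 */
	struct cpumask *create_cpu_spread_masks(unsigned int numgrps)
	{
		struct irq_affinity affd = { };
		struct irq_affinity_desc *descs;
		struct cpumask *masks;
		unsigned int i;

		descs = irq_create_affinity_masks(numgrps, &affd);
		if (!descs)
			return NULL;

		masks = kcalloc(numgrps, sizeof(*masks), GFP_KERNEL);
		if (masks)
			for (i = 0; i < numgrps; i++)
				cpumask_copy(&masks[i], &descs[i].mask);
		kfree(descs);
		return masks;
	}

That would keep the block layer from poking at irq internals while we
figure out where the spreading logic really belongs.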

> 
> Lots of drivers may benefit from the change, such as nvme-pci poll
> queues, nvme-tcp, ...
> 
> Signed-off-by: Ming Lei <ming.lei@redhat.com>
> ---
>  block/blk-mq-cpumap.c | 60 +++++++++----------------------------------
>  1 file changed, 12 insertions(+), 48 deletions(-)
> 
> diff --git a/block/blk-mq-cpumap.c b/block/blk-mq-cpumap.c
> index 3db84d3197f1..946e373296a3 100644
> --- a/block/blk-mq-cpumap.c
> +++ b/block/blk-mq-cpumap.c
> @@ -10,67 +10,31 @@
>  #include <linux/mm.h>
>  #include <linux/smp.h>
>  #include <linux/cpu.h>
> +#include <linux/interrupt.h>
>  
>  #include <linux/blk-mq.h>
>  #include "blk.h"
>  #include "blk-mq.h"
>  
> -static int queue_index(struct blk_mq_queue_map *qmap,
> -		       unsigned int nr_queues, const int q)
> -{
> -	return qmap->queue_offset + (q % nr_queues);
> -}
> -
> -static int get_first_sibling(unsigned int cpu)
> -{
> -	unsigned int ret;
> -
> -	ret = cpumask_first(topology_sibling_cpumask(cpu));
> -	if (ret < nr_cpu_ids)
> -		return ret;
> -
> -	return cpu;
> -}
> -
>  int blk_mq_map_queues(struct blk_mq_queue_map *qmap)
>  {
> +	struct irq_affinity_desc *masks = NULL;
> +	struct irq_affinity affd = {0};
>  	unsigned int *map = qmap->mq_map;
>  	unsigned int nr_queues = qmap->nr_queues;
> -	unsigned int cpu, first_sibling, q = 0;
> +	unsigned int q;
>  
> -	for_each_possible_cpu(cpu)
> -		map[cpu] = -1;
> +	masks = irq_create_affinity_masks(nr_queues, &affd);
> +	if (!masks)
> +		return -ENOMEM;
>  
> -	/*
> -	 * Spread queues among present CPUs first for minimizing
> -	 * count of dead queues which are mapped by all un-present CPUs
> -	 */
> -	for_each_present_cpu(cpu) {
> -		if (q >= nr_queues)
> -			break;
> -		map[cpu] = queue_index(qmap, nr_queues, q++);
> -	}
> +	for (q = 0; q < nr_queues; q++) {
> +		unsigned int cpu;
>  
> -	for_each_possible_cpu(cpu) {
> -		if (map[cpu] != -1)
> -			continue;
> -		/*
> -		 * First do sequential mapping between CPUs and queues.
> -		 * In case we still have CPUs to map, and we have some number of
> -		 * threads per cores then map sibling threads to the same queue
> -		 * for performance optimizations.
> -		 */
> -		if (q < nr_queues) {
> -			map[cpu] = queue_index(qmap, nr_queues, q++);
> -		} else {
> -			first_sibling = get_first_sibling(cpu);
> -			if (first_sibling == cpu)
> -				map[cpu] = queue_index(qmap, nr_queues, q++);
> -			else
> -				map[cpu] = map[first_sibling];
> -		}
> +		for_each_cpu(cpu, &masks[q].mask)
> +			map[cpu] = q;
>  	}
> -
> +	kfree(masks);
>  	return 0;
>  }
>  EXPORT_SYMBOL_GPL(blk_mq_map_queues);
> -- 
> 2.31.1
> 
---end quoted text---
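
As a usage note for the drivers mentioned above, here is a sketch of a
hypothetical driver's ->map_queues (foo_* names invented; roughly what
nvme-pci does today): the irq-driven map keeps following the managed
irq affinity, while the poll map has no irqs and falls back to
blk_mq_map_queues(), which now picks up the NUMA-aware spreading for
free:

	/* hypothetical driver; needs <linux/blk-mq.h>, <linux/blk-mq-pci.h> */
	static int foo_map_queues(struct blk_mq_tag_set *set)
	{
		struct foo_dev *foo = set->driver_data;

		/* irq-driven queues: follow the managed irq affinity */
		blk_mq_pci_map_queues(&set->map[HCTX_TYPE_DEFAULT],
				      foo->pdev, 0);
		/* poll queues: no irqs, use the (now NUMA-aware) default map */
		return blk_mq_map_queues(&set->map[HCTX_TYPE_POLL]);
	}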
