From: John Garry <john.g.garry@oracle.com>
To: Ming Lei <ming.lei@redhat.com>,
	Thomas Gleixner <tglx@linutronix.de>,
	Jens Axboe <axboe@kernel.dk>
Cc: linux-kernel@vger.kernel.org, linux-block@vger.kernel.org,
	Christoph Hellwig <hch@lst.de>
Subject: Re: [PATCH V4 6/6] blk-mq: Build default queue map via group_cpus_evenly()
Date: Wed, 11 Jan 2023 09:58:45 +0000
Message-ID: <e37b3007-a9eb-ff07-2efd-467a5eb8114c@oracle.com>
In-Reply-To: <20221227022905.352674-7-ming.lei@redhat.com>

On 27/12/2022 02:29, Ming Lei wrote:
> The default queue mapping builder of blk_mq_map_queues doesn't take NUMA
> topology into account, so the resulting mapping is pretty bad: CPUs
> belonging to different NUMA nodes are assigned to the same queue. It is
> observed that IOPS drops by ~30% when running two jobs on the same hctx
> of null_blk from two CPUs belonging to two different NUMA nodes, compared
> with running them from the same NUMA node.
> 
> Address the issue by reusing group_cpus_evenly() to build the queue
> mapping, since group_cpus_evenly() groups CPUs according to CPU/NUMA
> locality.
> 
> Performance also becomes more stable with this patchset, since the queue
> mapping is now correct from a NUMA-locality viewpoint. For example, on a
> two-node arm64 machine with 160 CPUs, node 0 (CPUs 0~79) and node 1
> (CPUs 80~159):
> 
> 1) modprobe null_blk nr_devices=1 submit_queues=2
> 
> 2) run 'fio(t/io_uring -p 0 -n 4 -r 20 /dev/nullb0)', and observe that
> IOPS becomes much more stable across repeated runs:
> 
> - unpatched: IOPS is 2.5M ~ 4.5M
> - patched: IOPS is 4.3M ~ 5M
> 
> Lots of drivers may benefit from the change, such as nvme pci poll,
> nvme tcp, ...
> 
> Reviewed-by: Christoph Hellwig <hch@lst.de>
> Signed-off-by: Ming Lei <ming.lei@redhat.com>

FWIW, just one comment below:

Reviewed-by: John Garry <john.g.garry@oracle.com>

> ---
>   block/blk-mq-cpumap.c | 63 +++++++++----------------------------------
>   1 file changed, 13 insertions(+), 50 deletions(-)
> 
> diff --git a/block/blk-mq-cpumap.c b/block/blk-mq-cpumap.c
> index 9c2fce1a7b50..0c612c19feb8 100644
> --- a/block/blk-mq-cpumap.c
> +++ b/block/blk-mq-cpumap.c
> @@ -10,66 +10,29 @@
>   #include <linux/mm.h>
>   #include <linux/smp.h>
>   #include <linux/cpu.h>
> +#include <linux/group_cpus.h>
>   
>   #include <linux/blk-mq.h>
>   #include "blk.h"
>   #include "blk-mq.h"
>   
> -static int queue_index(struct blk_mq_queue_map *qmap,
> -		       unsigned int nr_queues, const int q)
> -{
> -	return qmap->queue_offset + (q % nr_queues);
> -}
> -
> -static int get_first_sibling(unsigned int cpu)
> -{
> -	unsigned int ret;
> -
> -	ret = cpumask_first(topology_sibling_cpumask(cpu));
> -	if (ret < nr_cpu_ids)
> -		return ret;
> -
> -	return cpu;
> -}
> -
>   void blk_mq_map_queues(struct blk_mq_queue_map *qmap)
>   {
> -	unsigned int *map = qmap->mq_map;
> -	unsigned int nr_queues = qmap->nr_queues;
> -	unsigned int cpu, first_sibling, q = 0;
> -
> -	for_each_possible_cpu(cpu)
> -		map[cpu] = -1;
> -
> -	/*
> -	 * Spread queues among present CPUs first for minimizing
> -	 * count of dead queues which are mapped by all un-present CPUs
> -	 */
> -	for_each_present_cpu(cpu) {
> -		if (q >= nr_queues)
> -			break;
> -		map[cpu] = queue_index(qmap, nr_queues, q++);
> +	const struct cpumask *masks;
> +	unsigned int queue, cpu;
> +
> +	masks = group_cpus_evenly(qmap->nr_queues);
> +	if (!masks) {
> +		for_each_possible_cpu(cpu)
> +			qmap->mq_map[cpu] = qmap->queue_offset;

I'm not sure if we should try something better than just assigning all
CPUs to a single queue (which is what we seem to be doing here), but I
suppose we don't expect the masks allocation to fail, and there are
bigger issues to deal with if it does ...
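
Just to illustrate, a rough (untested) sketch of a fallback that keeps
the old round-robin spread rather than piling every CPU onto one queue
(not saying it's needed, for the reason above):

	if (!masks) {
		unsigned int q = 0;

		/* spread possible CPUs round-robin across the queues */
		for_each_possible_cpu(cpu)
			qmap->mq_map[cpu] = qmap->queue_offset +
					    (q++ % qmap->nr_queues);
		return;
	}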

> +		return;
>   	}
>   
> -	for_each_possible_cpu(cpu) {
> -		if (map[cpu] != -1)
> -			continue;
> -		/*
> -		 * First do sequential mapping between CPUs and queues.
> -		 * In case we still have CPUs to map, and we have some number of
> -		 * threads per cores then map sibling threads to the same queue
> -		 * for performance optimizations.
> -		 */
> -		if (q < nr_queues) {
> -			map[cpu] = queue_index(qmap, nr_queues, q++);
> -		} else {
> -			first_sibling = get_first_sibling(cpu);
> -			if (first_sibling == cpu)
> -				map[cpu] = queue_index(qmap, nr_queues, q++);
> -			else
> -				map[cpu] = map[first_sibling];
> -		}
> +	for (queue = 0; queue < qmap->nr_queues; queue++) {
> +		for_each_cpu(cpu, &masks[queue])
> +			qmap->mq_map[cpu] = qmap->queue_offset + queue;
>   	}
> +	kfree(masks);
>   }
>   EXPORT_SYMBOL_GPL(blk_mq_map_queues);
>   
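
As the commit message says, lots of drivers should pick this up for
free, since their ->map_queues path ends up in blk_mq_map_queues().
Roughly this pattern (driver name invented for illustration;
blk_mq_tag_set and HCTX_TYPE_DEFAULT are the real type/index):

	static void foo_map_queues(struct blk_mq_tag_set *set)
	{
		/* default queue type; poll/read maps would be built the same way */
		blk_mq_map_queues(&set->map[HCTX_TYPE_DEFAULT]);
	}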


Thread overview: 25+ messages
2022-12-27  2:28 [PATCH V4 0/6] genirq/affinity: Abstract APIs from managed irq affinity spread Ming Lei
2022-12-27  2:29 ` [PATCH V4 1/6] genirq/affinity: Remove the 'firstvec' parameter from irq_build_affinity_masks Ming Lei
2023-01-11  9:23   ` John Garry
2023-01-17 17:54   ` [tip: irq/core] " tip-bot2 for Ming Lei
2022-12-27  2:29 ` [PATCH V4 2/6] genirq/affinity: Pass affinity managed mask array to irq_build_affinity_masks Ming Lei
2023-01-11  9:46   ` John Garry
2023-01-17 17:54   ` [tip: irq/core] " tip-bot2 for Ming Lei
2022-12-27  2:29 ` [PATCH V4 3/6] genirq/affinity: Don't pass irq_affinity_desc " Ming Lei
2023-01-11 10:16   ` John Garry
2023-01-17 17:54   ` [tip: irq/core] " tip-bot2 for Ming Lei
2022-12-27  2:29 ` [PATCH V4 4/6] genirq/affinity: Rename irq_build_affinity_masks as group_cpus_evenly Ming Lei
2023-01-17 17:54   ` [tip: irq/core] " tip-bot2 for Ming Lei
2022-12-27  2:29 ` [PATCH V4 5/6] genirq/affinity: Move group_cpus_evenly() into lib/ Ming Lei
2023-01-17 17:54   ` [tip: irq/core] " tip-bot2 for Ming Lei
2023-01-18 11:37   ` [tip: irq/core] genirq/affinity: Only build SMP-only helper functions on SMP kernels tip-bot2 for Ingo Molnar
2022-12-27  2:29 ` [PATCH V4 6/6] blk-mq: Build default queue map via group_cpus_evenly() Ming Lei
2023-01-11  9:58   ` John Garry [this message]
2023-01-17 17:54   ` [tip: irq/core] " tip-bot2 for Ming Lei
2023-01-11  2:18 ` [PATCH V4 0/6] genirq/affinity: Abstract APIs from managed irq affinity spread Ming Lei
2023-01-11 18:58   ` Thomas Gleixner
2023-01-12  1:45     ` Ming Lei
2023-01-11 19:04   ` Jens Axboe
2023-01-16 19:13     ` Thomas Gleixner
2023-01-17 13:12       ` Jens Axboe
2023-01-11 10:06 ` John Garry
