From: Ming Lei <tom.leiming@gmail.com>
To: Brian King <brking@linux.vnet.ibm.com>
Cc: Jens Axboe <axboe@kernel.dk>,
	linux-block <linux-block@vger.kernel.org>,
	"open list:DEVICE-MAPPER (LVM)" <dm-devel@redhat.com>,
	Alasdair Kergon <agk@redhat.com>,
	Mike Snitzer <snitzer@redhat.com>
Subject: Re: [dm-devel] [PATCH 1/1] block: Convert hd_struct in_flight from atomic to percpu
Date: Sat, 1 Jul 2017 07:23:38 +0800	[thread overview]
Message-ID: <CACVXFVPpJBB5ieHg5nwyC4NF3Qd+W-TKJMFB-Qm4cVYK2B6M1w@mail.gmail.com> (raw)
In-Reply-To: <ca8ccbe7-beb6-bc0a-046c-b999004f0157@linux.vnet.ibm.com>

Hi Brian,

On Sat, Jul 1, 2017 at 2:33 AM, Brian King <brking@linux.vnet.ibm.com> wrote:
> On 06/30/2017 09:08 AM, Jens Axboe wrote:
>>>>> Compared with the totally percpu approach, this way might help 1:M or
>>>>> N:M mapping, but won't help 1:1 map(NVMe), when hctx is mapped to
>>>>> each CPU(especially there are huge hw queues on a big system), :-(
>>>>
>>>> Not disagreeing with that, without having some mechanism to only
>>>> loop queues that have pending requests. That would be similar to the
>>>> ctx_map for sw to hw queues. But I don't think that would be worthwhile
>>>> doing, I like your pnode approach better. However, I'm still not fully
>>>> convinced that one per node is enough to get the scalability we need.
>>>>
>>>> Would be great if Brian could re-test with your updated patch, so we
>>>> know how it works for him at least.
>>>
>>> I'll try running with both approaches today and see how they compare.
>>
>> Focus on Ming's; a variant of that is the most likely path forward,
>> imho. It'd be great to do a quick run on mine as well, just to establish
>> how it compares to mainline, though.
>
> On my initial runs, the one from you, Jens, appears to perform a bit better, although
> both are a huge improvement over what I was seeing before.
>
> I ran 4k random reads using fio against null_blk in two configurations on my 20 core
> system with 4 NUMA nodes and 4-way SMT, so 80 logical CPUs. I ran both 80 threads
> to a single null_blk device and 80 threads to 80 null_blk devices, so one thread

Could you clarify what the '80 null_blk devices' means? Do you create 80 separate
null_blk devices, or do you create one null_blk device and set its hardware queue
count to 80 via the "submit_queues" module parameter?

I guess we should focus on the multi-queue case, since that is the normal configuration for NVMe.
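
For reference, the per-node counting idea discussed above can be sketched as a
small userspace model (C11 atomics; NR_NODES and all names here are made up for
illustration, this is not the actual patch):

/* Toy model of per-node in-flight accounting: each node gets its own
 * cache-line-aligned counter, submission/completion only touch the
 * local node, and a stats reader sums all nodes. */
#include <stdatomic.h>
#include <stdio.h>

#define NR_NODES 4		/* pretend 4 NUMA nodes, as on the test box */

struct node_counter {
	_Alignas(64) atomic_long in_flight;	/* own cache line per node */
};

static struct node_counter counters[NR_NODES];

static void io_start(int node)
{
	atomic_fetch_add_explicit(&counters[node].in_flight, 1,
				  memory_order_relaxed);
}

static void io_done(int node)
{
	atomic_fetch_sub_explicit(&counters[node].in_flight, 1,
				  memory_order_relaxed);
}

static long in_flight_total(void)
{
	long sum = 0;

	for (int n = 0; n < NR_NODES; n++)
		sum += atomic_load_explicit(&counters[n].in_flight,
					    memory_order_relaxed);
	return sum;
}

int main(void)
{
	io_start(0);
	io_start(3);
	io_done(3);
	printf("in flight: %ld\n", in_flight_total());	/* prints 1 */
	return 0;
}

Compared with a fully percpu counter, the read side only has to walk a handful
of node entries instead of every possible CPU, at the cost of some cross-CPU
cacheline contention within each node.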

> per null_blk. This is what I saw on this machine:
>
> Using the Per node atomic change from Ming Lei
> 1 null_blk, 80 threads
> iops=9376.5K
>
> 80 null_blk, 1 thread
> iops=9523.5K
>
>
> Using the alternate patch from Jens using the tags
> 1 null_blk, 80 threads
> iops=9725.8K
>
> 80 null_blk, 1 thread
> iops=9569.4K

If '1 thread' means a single fio job, the number looks too high: it would mean
one random IO completes in about 0.1us (1s / 9,523,500 IOPS ≈ 105ns) on a single
CPU, and I am not sure that is possible, :-)
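
The 'alternate patch using the tags' direction can be modelled in the same
rough way (again only an illustrative userspace sketch, not the actual patch):
derive the in-flight count from the tag bitmap on demand instead of maintaining
any counter per IO.

/* Toy model of deriving "in flight" from the tag bitmap itself:
 * allocating a tag sets a bit, completion clears it, and the in-flight
 * count is just the number of set bits -- nothing is incremented per IO.
 * Single-threaded toy only; a real tag map is a concurrent bitmap. */
#include <stdio.h>

#define NR_TAGS		256	/* assumed queue depth, illustration only */
#define BITS_PER_LONG	(8 * sizeof(unsigned long))

static unsigned long tag_map[NR_TAGS / (8 * sizeof(unsigned long))];

static int tag_alloc(void)
{
	for (int t = 0; t < NR_TAGS; t++) {
		unsigned long mask = 1UL << (t % BITS_PER_LONG);

		if (!(tag_map[t / BITS_PER_LONG] & mask)) {
			tag_map[t / BITS_PER_LONG] |= mask;
			return t;
		}
	}
	return -1;		/* no free tag, queue is full */
}

static void tag_free(int t)
{
	tag_map[t / BITS_PER_LONG] &= ~(1UL << (t % BITS_PER_LONG));
}

static int in_flight(void)
{
	int count = 0;

	for (unsigned int i = 0; i < NR_TAGS / BITS_PER_LONG; i++)
		count += __builtin_popcountl(tag_map[i]);
	return count;
}

int main(void)
{
	int a = tag_alloc();	/* IO issued, gets tag 0 */
	int b = tag_alloc();	/* IO issued, gets tag 1 */

	tag_free(b);		/* second IO completes */
	printf("in flight: %d (tag %d still busy)\n", in_flight(), a);
	return 0;
}

With that approach the per-IO fast path touches no shared accounting state at
all; the cost moves to whoever reads the statistics, which has to walk the tag
map.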


Thanks,
Ming Lei

Thread overview: 35+ messages
2017-06-28 21:12 [PATCH 1/1] block: Convert hd_struct in_flight from atomic to percpu Brian King
2017-06-28 21:49 ` Jens Axboe
2017-06-28 22:04   ` Brian King
2017-06-29  8:40   ` Ming Lei
2017-06-29 15:58     ` Jens Axboe
2017-06-29 16:00       ` Jens Axboe
2017-06-29 18:42         ` Jens Axboe
2017-06-30  1:20           ` Ming Lei
2017-06-30  2:17             ` Jens Axboe
2017-06-30 13:05               ` [dm-devel] " Brian King
2017-06-30 14:08                 ` [dm-devel] " Jens Axboe
2017-06-30 18:33                   ` Brian King
2017-06-30 23:23                     ` Ming Lei [this message]
2017-06-30 23:26                       ` Jens Axboe
2017-07-01  2:18                         ` Brian King
2017-07-04  1:20                           ` Ming Lei
2017-07-04 20:58                             ` Brian King
2017-07-01  4:17                   ` Jens Axboe
2017-07-01  4:59                     ` Jens Axboe
2017-07-01 16:43                       ` Jens Axboe
2017-07-04 20:55                         ` Brian King
2017-07-04 21:57                           ` Jens Axboe
2017-06-29 16:25       ` Ming Lei
2017-06-29 17:31         ` Brian King
2017-06-30  1:08           ` Ming Lei
2017-06-28 21:54 ` Jens Axboe
2017-06-28 21:59   ` Jens Axboe
2017-06-28 22:07     ` [dm-devel] " Brian King
2017-06-28 22:19       ` Jens Axboe
2017-06-29 12:59         ` Brian King
