From: Sagi Grimberg <sagi@grimberg.me>
To: Ming Lei <ming.lei@redhat.com>, linux-nvme@lists.infradead.org
Cc: Keith Busch <kbusch@kernel.org>, Jens Axboe <axboe@fb.com>,
Long Li <longli@microsoft.com>, Christoph Hellwig <hch@lst.de>
Subject: Re: [PATCH 2/2] nvme-pci: poll IO after batch submission for multi-mapping queue
Date: Mon, 11 Nov 2019 17:44:10 -0800
Message-ID: <82fb330e-a507-999a-69f3-947f13bbaae1@grimberg.me>
In-Reply-To: <20191108035508.26395-3-ming.lei@redhat.com>
> f9dde187fa92 ("nvme-pci: remove cq check after submission") removes the
> cq check after submission. This change actually causes a performance
> regression on some NVMe drives where a single nvmeq handles requests
> originating from more than one blk-mq sw queue (call it a multi-mapping
> queue).
>
> Polling for completions after submission actually handles IO more
> efficiently, especially for a multi-mapping queue:
>
> 1) the poll itself is very cheap, and a lockless check on the cq is
> enough, see nvme_cqe_pending(). In particular, the check can be done
> after the batch submission is complete.
>
> 2) when an IO completion is observed via the poll in the submission
> path, the request may be completed without any interrupt involved, or
> the interrupt handler's load can be reduced.
>
> 3) when a single sw queue is submitting IOs to this hw queue, and an
> IO's completion is observed via this poll, the IO can be completed
> locally, which is cheaper than remote completion.
>
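For reference, the lockless check mentioned in (1) above is essentially
the following (a sketch modeled on nvme_cqe_pending() and nvme_poll() in
drivers/nvme/host/pci.c of that era; the reaping snippet underneath is
illustrative only and is not the actual patch):

	/*
	 * Cheap, lockless peek at the completion queue: a CQE whose phase
	 * bit matches the queue's current phase has been posted by the
	 * device but not yet consumed by the host.
	 */
	static inline bool nvme_cqe_pending(struct nvme_queue *nvmeq)
	{
		return (le16_to_cpu(nvmeq->cqes[nvmeq->cq_head].status) & 1) ==
				nvmeq->cq_phase;
	}

	/*
	 * Illustrative use after queueing a batch of commands: only when
	 * the cheap check says a completion is already there do we take the
	 * poll lock and reap CQEs inline, instead of waiting for the
	 * interrupt handler to do it.
	 */
	u16 start, end;

	if (nvme_cqe_pending(nvmeq)) {
		spin_lock(&nvmeq->cq_poll_lock);
		nvme_process_cq(nvmeq, &start, &end, -1);
		spin_unlock(&nvmeq->cq_poll_lock);
		nvme_complete_cqes(nvmeq, start, end);
	}
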
> The following are test results from an Azure L80sv2 guest with an NVMe
> drive (Microsoft Corporation Device b111). This guest has 80 CPUs and
> 10 NUMA nodes, and each NVMe drive supports 8 hw queues.
I think that the CPU lockup is a different problem, and we should
separate this patch from that problem.
>
> 1) test script:
> fio --bs=4k --ioengine=libaio --iodepth=64 --filename=/dev/nvme0n1 \
> --iodepth_batch_submit=16 --iodepth_batch_complete_min=16 \
> --direct=1 --runtime=30 --numjobs=1 --rw=randread \
> --name=test --group_reporting --gtod_reduce=1
>
> 2) test result:
>       | v5.3 | v5.3 with this patchset
> ------+------+------------------------
>  IOPS | 130K | 424K
>
> Given that IO is handled more efficiently this way, the CPU lockup on
> Hyper-V from the original report[1] can no longer be observed after
> this patch is applied. That issue is usually triggered when running IO
> from all CPUs concurrently.
>
This is just adding back code that we already removed, but in a more
convoluted way...

The correct place to optimize polling is aio/io_uring, not locally in
the driver, IMO. Adding blk_poll to aio_getevents, like io_uring does,
would be a lot better I think.
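(For illustration only: the io_uring model referenced here drives
completion reaping from the submission/wait syscall via blk_poll rather
than from device interrupts. A minimal liburing sketch, assuming an
O_DIRECT file on a device configured with poll queues:)

	#include <liburing.h>

	/*
	 * IORING_SETUP_IOPOLL makes the kernel reap completions by polling
	 * the block layer (blk_poll) from the io_uring enter/wait paths
	 * instead of relying on the device's interrupt handler; the
	 * suggestion above is to give aio_getevents a similar hook rather
	 * than polling inside the nvme driver itself.
	 */
	static int setup_polled_ring(struct io_uring *ring, unsigned int depth)
	{
		return io_uring_queue_init(depth, ring, IORING_SETUP_IOPOLL);
	}
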
> I also ran the test on Optane (32 hw queues) in a big machine (96
> cores, 2 NUMA nodes); a small improvement is observed when running the
> above fio over two NVMe drives with batch 1.
Given that you add a shared lock and atomic ops in the data path, you
are bound to hurt some latency-oriented workloads in some way.