Linux-NVME Archive on lore.kernel.org
From: Long Li <longli@microsoft.com>
To: Sagi Grimberg <sagi@grimberg.me>, Ming Lei <ming.lei@redhat.com>
Cc: Keith Busch <kbusch@kernel.org>, Jens Axboe <axboe@fb.com>,
	Christoph Hellwig <hch@lst.de>,
	"linux-nvme@lists.infradead.org" <linux-nvme@lists.infradead.org>
Subject: RE: [PATCH 2/2] nvme-pci: poll IO after batch submission for multi-mapping queue
Date: Tue, 12 Nov 2019 21:17:57 +0000
Message-ID: <CY4PR21MB0741CB372ABC707223ECC6FECE770@CY4PR21MB0741.namprd21.prod.outlook.com>
In-Reply-To: <4664ca6f-2ebb-c69c-5b7f-226a86394adf@grimberg.me>

>Subject: Re: [PATCH 2/2] nvme-pci: poll IO after batch submission for multi-
>mapping queue
>
>
>>>> f9dde187fa92 ("nvme-pci: remove cq check after submission") removes the
>>>> cq check after submission. This change actually causes a performance
>>>> regression on some NVMe drives in which a single nvmeq handles requests
>>>> originating from more than one blk-mq sw queue (call it a multi-mapping
>>>> queue).
>>>>
>>>> Actually, polling IO after submission can handle IO more efficiently,
>>>> especially for a multi-mapping queue:
>>>>
>>>> 1) the poll itself is very cheap, and a lockless check on the cq is
>>>> enough, see nvme_cqe_pending(). In particular, the check can be done
>>>> after batch submission is done.
>>>>
>>>> 2) when IO completion is observed via the poll in submission, the
>>>> request may be completed without an interrupt involved, or the load
>>>> on the interrupt handler can be decreased.
>>>>
>>>> 3) when a single sw queue is submitting IOs to this hw queue, and IO
>>>> completion is observed via this poll, the IO can be completed
>>>> locally, which is cheaper than remote completion.
>>>>
>>>> The following is a test result from an Azure L80sv2 guest with an NVMe
>>>> drive (Microsoft Corporation Device b111). This guest has 80 CPUs and 10
>>>> numa nodes, and each NVMe drive supports 8 hw queues.
>>>
>>> I think that the CPU lockup is a different problem, and we should
>>> separate this patch from that problem.
>>
>> Why?
>>
>> Most CPU lockups are performance issues in essence. In theory, an
>> improvement in the IO path could alleviate the soft lockup.
>
>I don't think it's a performance issue; being exposed to a stall in hard irq
>context is a fundamental issue. I don't see how this patch solves it.

With the patch, it's possible to process CQ on the CPU issuing I/O, effectively distributing the work to process CQ across multiple CPUs.

The original condition that triggers the lockup is multiple CPUs issuing I/O while one CPU (where the HW queue interrupts) processes the CQ for all of them. With this patch, and when the I/O load is heavy, the issuing CPUs always have something to poll after I/O is issued, so it's very difficult to overload the CPU that takes the interrupts.

I have run tests for several days and couldn't repro the original lockup with the patch.
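
For reference, the shape of the check being discussed is roughly the following. This is a minimal sketch, not the actual diff: nvme_cqe_pending() is the existing lockless helper in drivers/nvme/host/pci.c, while the cq_poll lock name, the function name, and the call into the completion path are only my reading of Ming's patches and may differ from the real code:

	/*
	 * Sketch of a one-shot CQ check after batch submission.  The lock
	 * name and the completion helper called under it follow my reading
	 * of the patches under discussion, not mainline.
	 */
	static void nvme_check_cq_after_submit(struct nvme_queue *nvmeq)
	{
		unsigned long flags;

		/* Cheap lockless peek at the phase bit of the next CQE. */
		if (!nvme_cqe_pending(nvmeq))
			return;

		/*
		 * Only one CPU reaps this CQ at a time; everyone else simply
		 * moves on, so nothing blocks on the submission fast path.
		 */
		if (!spin_trylock_irqsave(&nvmeq->cq_poll_lock, flags))
			return;
		nvme_process_cq(nvmeq);	/* stand-in for the driver's CQ reaping path */
		spin_unlock_irqrestore(&nvmeq->cq_poll_lock, flags);
	}

The trylock is what spreads the work out: whichever issuing CPU wins the lock completes locally, and the rest just continue submitting.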

>
>>>> 1) test script:
>>>> fio --bs=4k --ioengine=libaio --iodepth=64 --filename=/dev/nvme0n1 \
>>>> 	--iodepth_batch_submit=16 --iodepth_batch_complete_min=16 \
>>>> 	--direct=1 --runtime=30 --numjobs=1 --rw=randread \
>>>> 	--name=test --group_reporting --gtod_reduce=1
>>>>
>>>> 2) test result:
>>>>      | v5.3 | v5.3 with this patchset
>>>> -----+------+-------------------------
>>>> IOPS | 130K | 424K
>>>>
>>>> Given that IO is handled more efficiently in this way, the original
>>>> report of CPU lockup[1] on Hyper-V can no longer be observed after
>>>> this patch is applied. This issue is usually triggered when running
>>>> IO from all CPUs concurrently.
>>>>
>>>
>>> This is just adding code that we already removed but in a more
>>> convoluted way...
>>
>> That commit removing the code actually causes a regression for Azure
>> NVMe.
>
>This issue was observed long before we removed the polling from the
>submission path and before the cq_lock split.
>
>>> The correct place to optimize the polling is aio/io_uring, not
>>> locally in the driver, IMO. Adding blk_poll to aio_getevents like
>>> io_uring does would be a lot better I think.
>>
>> This poll is actually a one-shot poll; I shouldn't have called it a poll,
>> it should have been called 'check cq'.
>>
>> I believe supporting aio poll has been tried before; it seems it was not
>> successful.
>
>Is there a fundamental reason why it can work for io_uring and cannot work
>for aio?
>
>>>> I also ran the test on Optane (32 hw queues) in a big machine (96 cores,
>>>> 2 numa nodes); a small improvement is observed when running the above fio
>>>> over two NVMe drives with batch 1.
>>>
>>> Given that you add a shared lock and atomic ops in the data path, you
>>> are bound to hurt some latency-oriented workloads in some way.
>>
>> The spin_trylock_irqsave() is only called in case nvme_cqe_pending()
>> is true. My test on Optane doesn't show that latency is hurt.
>
>It is also conditioned on the multi-mapping bit.
>
>Can we know for a fact that this doesn't hurt whatsoever? If so, we should
>always do it, not do it conditionally. I would test this with io_uring test
>applications that are doing heavy polling. I think Jens had some benchmarks
>he used to see how fast io_uring can go on a single cpu core...
>
>> However, I just found that Azure's NVMe is a bit special, in that the
>> 'Interrupt Coalescing' Feature register shows zero, but an IO interrupt
>> is often only triggered once many commands have been completed by the drive.
>>
>> For example, in an fio test (4k, randread aio, single job), when IOPS is
>> 110K, interrupts per second are just 13~14K. When running heavy IO, the
>> interrupts per second can only reach 40~50K at most. And for a normal
>> nvme drive, if 'Interrupt Coalescing' isn't used, most of the time one
>> interrupt completes just one request in the rand IO test.
>>
>> That is to say, Azure's implementation must apply aggressive interrupt
>> coalescing even though the register doesn't claim it.
>
>Did you check how many completions are reaped per interrupt?
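
(For what it's worth, the numbers quoted above already give a ballpark: ~110K IOPS against ~13-14K interrupts/sec works out to roughly 8 completions reaped per interrupt on average, assuming both rates are steady over the run.)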
>
>> That seems to be the root cause of the soft lockup for Azure, since lots
>> of requests may be handled in one interrupt event, especially when the
>> interrupt event is handled late by the CPU. That can explain why this
>> patch improves Azure NVMe so much in the single-job fio.
>>
>> But for other drives with N:1 mapping, the soft lockup risk still exists.
>
>As I said, we can discuss this as an optimization, but we should not consider
>this a solution to the irq-stall issue reported on Azure, as we agree that it
>doesn't solve the fundamental problem.
_______________________________________________
Linux-nvme mailing list
Linux-nvme@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-nvme


Thread overview: 28+ messages
2019-11-08  3:55 [PATCH 0/2] nvme-pci: improve IO performance via poll after batch submission Ming Lei
2019-11-08  3:55 ` [PATCH 1/2] nvme-pci: move sq/cq_poll lock initialization into nvme_init_queue Ming Lei
2019-11-08  4:12   ` Keith Busch
2019-11-08  7:09     ` Ming Lei
2019-11-08  3:55 ` [PATCH 2/2] nvme-pci: poll IO after batch submission for multi-mapping queue Ming Lei
2019-11-11 20:44   ` Christoph Hellwig
2019-11-12  0:33     ` Long Li
2019-11-12  1:35       ` Sagi Grimberg
2019-11-12  2:39       ` Ming Lei
2019-11-12 16:25         ` Hannes Reinecke
2019-11-12 16:49           ` Keith Busch
2019-11-12 17:29             ` Hannes Reinecke
2019-11-13  3:05               ` Ming Lei
2019-11-13  3:17                 ` Keith Busch
2019-11-13  3:57                   ` Ming Lei
2019-11-12 21:20         ` Long Li
2019-11-12 21:36           ` Keith Busch
2019-11-13  0:50             ` Long Li
2019-11-13  2:24           ` Ming Lei
2019-11-12  2:07     ` Ming Lei
2019-11-12  1:44   ` Sagi Grimberg
2019-11-12  9:56     ` Ming Lei
2019-11-12 17:35       ` Sagi Grimberg
2019-11-12 21:17         ` Long Li [this message]
2019-11-12 23:44         ` Jens Axboe
2019-11-13  2:47         ` Ming Lei
2019-11-12 18:11   ` Nadolski, Edmund
2019-11-13 13:46     ` Ming Lei

Reply instructions:

You may reply publicly to this message via plain-text email
using any one of the following methods:

* Save the following mbox file, import it into your mail client,
  and reply-to-all from there: mbox

  Avoid top-posting and favor interleaved quoting:
  https://en.wikipedia.org/wiki/Posting_style#Interleaved_style

* Reply using the --to, --cc, and --in-reply-to
  switches of git-send-email(1):

  git send-email \
    --in-reply-to=CY4PR21MB0741CB372ABC707223ECC6FECE770@CY4PR21MB0741.namprd21.prod.outlook.com \
    --to=longli@microsoft.com \
    --cc=axboe@fb.com \
    --cc=hch@lst.de \
    --cc=kbusch@kernel.org \
    --cc=linux-nvme@lists.infradead.org \
    --cc=ming.lei@redhat.com \
    --cc=sagi@grimberg.me \
    /path/to/YOUR_REPLY

  https://kernel.org/pub/software/scm/git/docs/git-send-email.html

* If your mail client supports setting the In-Reply-To header
  via mailto: links, try the mailto: link


Archives are clonable:
	git clone --mirror https://lore.kernel.org/linux-nvme/0 linux-nvme/git/0.git

	# If you have public-inbox 1.1+ installed, you may
	# initialize and index your mirror using the following commands:
	public-inbox-init -V2 linux-nvme linux-nvme/ https://lore.kernel.org/linux-nvme \
		linux-nvme@lists.infradead.org
	public-inbox-index linux-nvme


Newsgroup available over NNTP:
	nntp://nntp.lore.kernel.org/org.infradead.lists.linux-nvme


AGPL code for this site: git clone https://public-inbox.org/public-inbox.git