From: Ming Lei <ming.lei@redhat.com>
To: Sagi Grimberg <sagi@grimberg.me>
Cc: Keith Busch <kbusch@kernel.org>, Jens Axboe <axboe@fb.com>,
	Long Li <longli@microsoft.com>, Christoph Hellwig <hch@lst.de>,
	linux-nvme@lists.infradead.org
Subject: Re: [PATCH 2/2] nvme-pci: poll IO after batch submission for multi-mapping queue
Date: Tue, 12 Nov 2019 17:56:49 +0800
Message-ID: <20191112095649.GE15079@ming.t460p> (raw)
In-Reply-To: <82fb330e-a507-999a-69f3-947f13bbaae1@grimberg.me>

On Mon, Nov 11, 2019 at 05:44:10PM -0800, Sagi Grimberg wrote:
>
> > f9dde187fa92 ("nvme-pci: remove cq check after submission") removes
> > the cq check after submission. This change actually causes a performance
> > regression on some NVMe drives in which a single nvmeq handles requests
> > originating from more than one blk-mq sw queue (call it a multi-mapping
> > queue).
> >
> > Polling IO after submission can handle IO more efficiently, especially
> > for a multi-mapping queue:
> >
> > 1) the poll itself is very cheap, and a lockless check on the cq is
> > enough, see nvme_cqe_pending(). In particular, the check can be done
> > after batch submission is done.
> >
> > 2) when IO completion is observed via the poll in submission, the
> > request may be completed without the interrupt being involved, or the
> > interrupt handler's load can be decreased.
> >
> > 3) when a single sw queue is submitting IOs to this hw queue, if IO
> > completion is observed via this poll, the IO can be completed locally,
> > which is cheaper than remote completion.
> >
> > Below is the test result on an Azure L80sv2 guest with an NVMe drive
> > (Microsoft Corporation Device b111). This guest has 80 CPUs and 10
> > NUMA nodes, and each NVMe drive supports 8 hw queues.
>
> I think that the cpu lockup is a different problem, and we should
> separate this patch from that problem..

Why? Most CPU lockups are performance issues in essence.

In theory, an improvement in the IO path can alleviate the soft lockup.

>
> > 1) test script:
> > fio --bs=4k --ioengine=libaio --iodepth=64 --filename=/dev/nvme0n1 \
> >	--iodepth_batch_submit=16 --iodepth_batch_complete_min=16 \
> >	--direct=1 --runtime=30 --numjobs=1 --rw=randread \
> >	--name=test --group_reporting --gtod_reduce=1
> >
> > 2) test result:
> >      | v5.3 | v5.3 with this patchset
> > -------------------------------------------
> > IOPS | 130K | 424K
> >
> > Given that IO is handled more efficiently this way, the original report
> > of CPU lockup[1] on Hyper-V can't be observed any more after this patch
> > is applied. The issue is usually triggered when running IO from all
> > CPUs concurrently.
>
> This is just adding code that we already removed but in a more
> convoluted way...

The commit that removed the code actually causes a regression on Azure
NVMe.

>
> The correct place that should optimize the polling is aio/io_uring and
> not the driver locally IMO. Adding blk_poll to aio_getevents like
> io_uring would be a lot better I think..

This poll is actually a one-shot poll, so I shouldn't have called it a
poll; 'cq check' would have been a better name.

I believe adding poll support to aio has been tried before, and it
doesn't seem to have been successful.

>
> > I also ran the test on Optane (32 hw queues) in a big machine (96 cores,
> > 2 NUMA nodes); a small improvement is observed when running the above
> > fio over two NVMe drives with batch 1.
>
> Given that you add shared lock and atomic ops in the data path you are
> bound to hurt some latency oriented workloads in some way..

The spin_trylock_irqsave() is only called when nvme_cqe_pending() is
true.
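Just to make the idea concrete, the check after batch submission amounts
to roughly the following. This is only a sketch: the helper name
nvme_check_cq_after_submit() is made up, and the other helpers and the
cq_poll lock follow the v5.3-era nvme-pci code rather than the exact
patch.

/*
 * Sketch: opportunistically reap the CQ right after a batch of
 * submissions, e.g. at the end of .commit_rqs() or for the last
 * request of a batch in .queue_rq().
 */
static void nvme_check_cq_after_submit(struct nvme_queue *nvmeq)
{
	unsigned long flags;
	u16 start, end;

	/* lockless peek at the CQ; almost free when nothing has completed */
	if (!nvme_cqe_pending(nvmeq))
		return;

	/*
	 * Never spin: if the lock is contended (e.g. the irq handler or
	 * another submitter is already reaping), let that context do the
	 * work instead.
	 */
	if (!spin_trylock_irqsave(&nvmeq->cq_poll_lock, flags))
		return;

	nvme_process_cq(nvmeq, &start, &end, -1);
	spin_unlock_irqrestore(&nvmeq->cq_poll_lock, flags);

	nvme_complete_cqes(nvmeq, start, end);
}

The trylock means the submission path never waits for the completion
path: if somebody else already owns the cq_poll lock, the submitter just
carries on.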
My test on Optane doesn't show that latency is hurt.

However, I just found that Azure's NVMe is a bit special: its 'Interrupt
Coalescing' feature register reads zero, yet an IO interrupt is often
raised only after many commands have been completed by the drive. For
example, in a fio test (4k randread, libaio, single job), when IOPS is
110K the interrupt rate is just 13~14K per second, i.e. roughly 8
completions per interrupt on average. Under heavy IO, the interrupt rate
tops out at about 40~50K per second. On a normal NVMe drive with
interrupt coalescing disabled, one interrupt usually completes just one
request in a random IO test.

So Azure's implementation must apply aggressive interrupt coalescing even
though the register doesn't advertise it. That looks like the root cause
of the soft lockup on Azure, since lots of requests may be handled in a
single interrupt event, especially when handling of the interrupt is
delayed by the CPU. It also explains why this patch improves Azure NVMe
so much in the single-job fio test. But for other drives with N:1 (sw
queue to hw queue) mapping, the soft lockup risk still exists.

Thanks,
Ming
Thread overview: 28+ messages

2019-11-08  3:55 [PATCH 0/2] nvme-pci: improve IO performance via poll after batch submission Ming Lei
2019-11-08  3:55 ` [PATCH 1/2] nvme-pci: move sq/cq_poll lock initialization into nvme_init_queue Ming Lei
2019-11-08  4:12 ` Keith Busch
2019-11-08  7:09 ` Ming Lei
2019-11-08  3:55 ` [PATCH 2/2] nvme-pci: poll IO after batch submission for multi-mapping queue Ming Lei
2019-11-11 20:44 ` Christoph Hellwig
2019-11-12  0:33 ` Long Li
2019-11-12  1:35 ` Sagi Grimberg
2019-11-12  2:39 ` Ming Lei
2019-11-12 16:25 ` Hannes Reinecke
2019-11-12 16:49 ` Keith Busch
2019-11-12 17:29 ` Hannes Reinecke
2019-11-13  3:05 ` Ming Lei
2019-11-13  3:17 ` Keith Busch
2019-11-13  3:57 ` Ming Lei
2019-11-12 21:20 ` Long Li
2019-11-12 21:36 ` Keith Busch
2019-11-13  0:50 ` Long Li
2019-11-13  2:24 ` Ming Lei
2019-11-12  2:07 ` Ming Lei
2019-11-12  1:44 ` Sagi Grimberg
2019-11-12  9:56 ` Ming Lei [this message]
2019-11-12 17:35 ` Sagi Grimberg
2019-11-12 21:17 ` Long Li
2019-11-12 23:44 ` Jens Axboe
2019-11-13  2:47 ` Ming Lei
2019-11-12 18:11 ` Nadolski, Edmund
2019-11-13 13:46 ` Ming Lei