From: sagi@grimberg.me (Sagi Grimberg)
Subject: [PATCH rfc 0/6] convert nvme pci to use irq-poll service
Date: Thu, 6 Oct 2016 00:55:07 +0300	[thread overview]
Message-ID: <a66d99fb-15d3-2620-5eb6-e6bb4a0a80bc@grimberg.me> (raw)
In-Reply-To: <20161005214730.GA1053@localhost.localdomain>


>> I ran some tests with this and it seemed to work pretty well with
>> my low-end nvme devices. One phenomenon I encountered was that for
>> a single-core, long queue-depth randread workload I saw around an
>> 8-10% iops decrease. However, when running multi-core IO I didn't
>> see any noticeable performance degradation. Canonical non-polling
>> randread latency doesn't seem to be affected either, and polling
>> mode IO is unaffected, as expected.
>>
>> So in addition for review and feedback, this is a call for testing
>> and benchmarking as this touches the critical data path.
>
> Hi Sagi,
>
> Just reconfirming your findings with another data point: I ran this on
> controllers with 3D XPoint media, and single-depth 4k random read latency
> increased almost 7%. I'll try to see if there's anything else we can do
> to bring that in.

Actually, I didn't notice a latency increase at QD=1, but I'm using
low-end devices so I might have missed it.
Did you use libaio or psync (for polling mode)?

I'm a bit surprised that scheduling a softirq (on the same core) is
so expensive (the networking folks are using it all over the place...).
Perhaps we need to look into NAPI and see if we're doing something
wrong there...

I wonder what would happen if we kept the cq processing in queue_rq but
budgeted it to something reasonably balanced? Maybe a poll budget of 4
completions?

Does this have any effect with your Xpoint?
--
diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c
index 150941a1a730..28b33f518a3d 100644
--- a/drivers/nvme/host/pci.c
+++ b/drivers/nvme/host/pci.c
@@ -608,6 +608,7 @@ static int nvme_queue_rq(struct blk_mq_hw_ctx *hctx,
                 goto out;
         }
         __nvme_submit_cmd(nvmeq, &cmnd);
+       __nvme_process_cq(nvmeq, 4);
         spin_unlock_irq(&nvmeq->q_lock);
         return BLK_MQ_RQ_QUEUE_OK;
  out:
--

Thread overview: 27+ messages
2016-10-05  9:42 [PATCH rfc 0/6] convert nvme pci to use irq-poll service Sagi Grimberg
2016-10-05  9:42 ` [PATCH rfc 1/6] nvme-pci: Split __nvme_process_cq to poll and handle Sagi Grimberg
2016-10-05 13:21   ` Johannes Thumshirn
2016-10-05 16:52     ` Sagi Grimberg
2016-10-05 19:49       ` Jon Derrick
2016-10-10  7:25       ` Johannes Thumshirn
2016-10-05  9:42 ` [PATCH rfc 2/6] nvme-pci: Add budget to __nvme_process_cq Sagi Grimberg
2016-10-05 13:26   ` Johannes Thumshirn
2016-10-05  9:42 ` [PATCH rfc 3/6] nvme-pci: Use irq-poll for completion processing Sagi Grimberg
2016-10-05 13:40   ` Johannes Thumshirn
2016-10-05 16:57     ` Sagi Grimberg
2016-10-10  7:47       ` Johannes Thumshirn
2016-10-05  9:42 ` [PATCH rfc 4/6] nvme: don't consume cq in queue_rq Sagi Grimberg
2016-10-05 13:42   ` Johannes Thumshirn
2016-10-05  9:42 ` [PATCH rfc 5/6] nvme-pci: open-code polling logic in nvme_poll Sagi Grimberg
2016-10-05 13:52   ` Johannes Thumshirn
2016-10-05 17:02     ` Sagi Grimberg
2016-10-10  7:49       ` Johannes Thumshirn
2016-10-05  9:42 ` [PATCH rfc 6/6] nvme-pci: Get rid of threaded interrupts Sagi Grimberg
2016-10-05 13:47   ` Johannes Thumshirn
2016-10-05 21:47 ` [PATCH rfc 0/6] convert nvme pci to use irq-poll service Keith Busch
2016-10-05 21:55   ` Sagi Grimberg [this message]
2016-10-05 22:49     ` Keith Busch
2016-10-05 22:48       ` Sagi Grimberg
2016-10-27 10:09 ` Johannes Thumshirn
2016-10-27 10:31   ` Sagi Grimberg
2016-10-27 10:58     ` Johannes Thumshirn
