From: Hannes Reinecke <hare@suse.de>
To: Johannes Thumshirn <jthumshirn@suse.de>,
	Sagi Grimberg <sagi@grimberg.me>
Cc: Jens Axboe <axboe@kernel.dk>,
	Christoph Hellwig <hch@infradead.org>,
	Linux-scsi@vger.kernel.org, linux-nvme@lists.infradead.org,
	linux-block@vger.kernel.org, Keith Busch <keith.busch@intel.com>,
	"lsf-pc@lists.linux-foundation.org"
	<lsf-pc@lists.linux-foundation.org>
Subject: Re: [LSF/MM TOPIC][LSF/MM ATTEND] NAPI polling for block drivers
Date: Wed, 18 Jan 2017 16:39:19 +0100	[thread overview]
Message-ID: <aaf5c35f-7e81-fa44-d467-53b574debdd4@suse.de> (raw)
In-Reply-To: <20170118151643.GJ3514@linux-x5ow.site>

On 01/18/2017 04:16 PM, Johannes Thumshirn wrote:
> On Wed, Jan 18, 2017 at 05:14:36PM +0200, Sagi Grimberg wrote:
>>
>>> Hannes just spotted this:
>>> static int nvme_queue_rq(struct blk_mq_hw_ctx *hctx,
>>>                         const struct blk_mq_queue_data *bd)
>>> {
>>> [...]
>>>        __nvme_submit_cmd(nvmeq, &cmnd);
>>>        nvme_process_cq(nvmeq);
>>>        spin_unlock_irq(&nvmeq->q_lock);
>>>        return BLK_MQ_RQ_QUEUE_OK;
>>> out_cleanup_iod:
>>>        nvme_free_iod(dev, req);
>>> out_free_cmd:
>>>        nvme_cleanup_cmd(req);
>>>        return ret;
>>> }
>>>
>>> So we're draining the CQ on submit. This of course makes polling for
>>> completions in the IRQ handler rather pointless, as we already did it
>>> in the submission path.
>>
>> I think you missed:
>> http://git.infradead.org/nvme.git/commit/49c91e3e09dc3c9dd1718df85112a8cce3ab7007
> 
> I indeed did, thanks.
> 
But it doesn't help.

We still have to wait for the first interrupt, and if we're really fast
that's the only completion we have to process.

Try this:


diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c
index b4b32e6..e2dd9e2 100644
--- a/drivers/nvme/host/pci.c
+++ b/drivers/nvme/host/pci.c
@@ -623,6 +623,8 @@ static int nvme_queue_rq(struct blk_mq_hw_ctx *hctx,
        }
        __nvme_submit_cmd(nvmeq, &cmnd);
        spin_unlock(&nvmeq->sq_lock);
+       disable_irq_nosync(nvmeq_irq(nvmeq));
+       irq_poll_sched(&nvmeq->iop);
        return BLK_MQ_RQ_QUEUE_OK;
 out_cleanup_iod:
        nvme_free_iod(dev, req);

That should avoid the first interrupt, and with a bit of luck reduce the
number of interrupts _drastically_.
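
For reference, the softirq-side poll callback this hunk relies on would
look roughly like the sketch below. This is only a rough reconstruction:
the real handler lives in the irq_poll conversion commit Sagi referenced
above; the handler name, the nvmeq->iop member, nvmeq_irq(), and a
nvme_process_cq() that returns the number of reaped completions are all
assumed from that series, and budget handling is simplified.

/*
 * Sketch: reap completions in softirq context, then re-arm the
 * hardware interrupt once the CQ is drained.
 */
static int nvme_irqpoll_handler(struct irq_poll *iop, int budget)
{
	struct nvme_queue *nvmeq = container_of(iop, struct nvme_queue, iop);
	int completed;

	spin_lock_irq(&nvmeq->q_lock);
	completed = nvme_process_cq(nvmeq);	/* assumed to return a count */
	spin_unlock_irq(&nvmeq->q_lock);

	if (completed < budget) {
		/* CQ drained: stop polling, re-enable the queue's IRQ */
		irq_poll_complete(iop);
		enable_irq(nvmeq_irq(nvmeq));
	}
	return completed;
}

Each queue would register the callback once at init time, e.g. with
irq_poll_init(&nvmeq->iop, NVME_POLL_BUDGET, nvme_irqpoll_handler),
where NVME_POLL_BUDGET is an illustrative per-poll completion budget.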

Cheers,

Hannes
-- 
Dr. Hannes Reinecke		   Teamlead Storage & Networking
hare@suse.de			               +49 911 74053 688
SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg
GF: F. Imendörffer, J. Smithard, J. Guild, D. Upmanyu, G. Norton
HRB 21284 (AG Nürnberg)

Thread overview: 120+ messages
2017-01-11 13:43 [LSF/MM TOPIC][LSF/MM ATTEND] NAPI polling for block drivers Johannes Thumshirn
2017-01-11 13:46 ` Hannes Reinecke
2017-01-11 15:07 ` Jens Axboe
2017-01-11 15:13   ` Jens Axboe
2017-01-12  8:23     ` Sagi Grimberg
2017-01-12 10:02       ` Johannes Thumshirn
2017-01-12 11:44         ` Sagi Grimberg
2017-01-12 12:53           ` Johannes Thumshirn
2017-01-12 14:41             ` [Lsf-pc] " Sagi Grimberg
2017-01-12 18:59               ` Johannes Thumshirn
2017-01-17 15:38       ` Sagi Grimberg
2017-01-17 15:45         ` Sagi Grimberg
2017-01-20 12:22           ` Johannes Thumshirn
2017-01-17 16:15         ` Sagi Grimberg
2017-01-17 16:27           ` Johannes Thumshirn
2017-01-17 16:38             ` Sagi Grimberg
2017-01-18 13:51               ` Johannes Thumshirn
2017-01-18 14:27                 ` Sagi Grimberg
2017-01-18 14:36                   ` Andrey Kuzmin
2017-01-18 14:40                     ` Sagi Grimberg
2017-01-18 15:35                       ` Andrey Kuzmin
2017-01-18 14:58                   ` Johannes Thumshirn
2017-01-18 15:14                     ` Sagi Grimberg
2017-01-18 15:16                       ` Johannes Thumshirn
2017-01-18 15:39                         ` Hannes Reinecke [this message]
2017-01-19  8:12                           ` Sagi Grimberg
2017-01-19  8:23                             ` Sagi Grimberg
2017-01-19  9:18                               ` Johannes Thumshirn
2017-01-19  9:13                             ` Johannes Thumshirn
2017-01-17 16:44         ` Andrey Kuzmin
2017-01-17 16:50           ` Sagi Grimberg
2017-01-18 14:02             ` Hannes Reinecke
2017-01-20  0:13               ` Jens Axboe
2017-01-13 15:56     ` Johannes Thumshirn
2017-01-11 15:16   ` Hannes Reinecke
2017-01-12  4:36   ` Stephen Bates
2017-01-12  4:44     ` Jens Axboe
2017-01-12  4:56       ` Stephen Bates
2017-01-19 10:57   ` Ming Lei
2017-01-19 11:03     ` Hannes Reinecke
2017-01-11 16:08 ` Bart Van Assche
2017-01-11 16:12   ` hch
2017-01-11 16:15     ` Jens Axboe
2017-01-11 16:22     ` Hannes Reinecke
2017-01-11 16:26       ` Bart Van Assche
2017-01-11 16:45         ` Hannes Reinecke
2017-01-12  8:52         ` sagi grimberg
2017-01-11 16:14   ` Johannes Thumshirn
2017-01-12  8:41   ` Sagi Grimberg
2017-01-12 19:13     ` Bart Van Assche
