From: Jens Axboe <axboe@kernel.dk>
To: Christoph Hellwig <hch@lst.de>,
	Keith Busch <keith.busch@intel.com>,
	Sagi Grimberg <sagi@grimberg.me>
Cc: Max Gurtovoy <maxg@mellanox.com>,
	linux-nvme@lists.infradead.org, linux-block@vger.kernel.org
Subject: Re: block and nvme polling improvements V3
Date: Tue, 4 Dec 2018 11:40:16 -0700	[thread overview]
Message-ID: <11a7106d-17e2-ceb4-bc16-22b447d0bc0f@kernel.dk> (raw)
In-Reply-To: <20181202164628.1116-1-hch@lst.de>

On 12/2/18 9:46 AM, Christoph Hellwig wrote:
> Hi all,
> 
> this series optimizes a few bits in the block layer and nvme code
> related to polling.
> 
> It starts by moving the recently introduced queue types entirely into
> the block layer instead of requiring an indirect call for them.
> 
> It then switches nvme and the block layer to only allow polling
> with separate poll queues, which allows us to realize the following
> benefits:
> 
>  - poll queues can safely avoid disabling irqs on any locks
>    (we already do that in NVMe, but it isn't 100% kosher as-is)
>  - regular interrupt driven queues can drop the CQ lock entirely,
>    as we won't race for completing CQs
> 
> Then we drop the NVMe RDMA polling code, as it doesn't follow the new model,
> and remove the nvme multipath polling code including the block hooks
> for it, which didn't make much sense to start with given that we
> started bypassing the multipath code for single controller subsystems
> early on.  Last but not least we enable polling in the block layer
> by default if the underlying driver has poll queues, as that already
> requires explicit user action.
> 
> Note that it would be really nice to have polling back for RDMA with
> dedicated poll queues, but that might take a while.  Also based on
> Jens' polling aio patches we could now implement a model in nvmet
> where we have a thread polling both the backend nvme device and
> the RDMA CQs, which might give us some pretty nice performance
> (I know Sagi looked into something similar a while ago).

Applied, thanks.

-- 
Jens Axboe


Thread overview: 82+ messages
2018-12-02 16:46 block and nvme polling improvements V3 Christoph Hellwig
2018-12-02 16:46 ` [PATCH 01/13] block: move queues types to the block layer Christoph Hellwig
2018-12-04  0:49   ` Sagi Grimberg
2018-12-04 15:00     ` Christoph Hellwig
2018-12-04 17:08       ` Sagi Grimberg
2018-12-02 16:46 ` [PATCH 02/13] nvme-pci: use atomic bitops to mark a queue enabled Christoph Hellwig
2018-12-04  0:54   ` Sagi Grimberg
2018-12-04 15:04     ` Christoph Hellwig
2018-12-04 17:11       ` Sagi Grimberg
2018-12-02 16:46 ` [PATCH 03/13] nvme-pci: cleanup SQ allocation a bit Christoph Hellwig
2018-12-04  0:55   ` Sagi Grimberg
2018-12-02 16:46 ` [PATCH 04/13] nvme-pci: only allow polling with separate poll queues Christoph Hellwig
2018-12-03 18:23   ` Keith Busch
2018-12-04  0:56   ` Sagi Grimberg
2018-12-02 16:46 ` [PATCH 05/13] nvme-pci: consolidate code for polling non-dedicated queues Christoph Hellwig
2018-12-04  0:58   ` Sagi Grimberg
2018-12-04 15:04     ` Christoph Hellwig
2018-12-04 17:13       ` Sagi Grimberg
2018-12-04 18:19         ` Jens Axboe
2018-12-02 16:46 ` [PATCH 06/13] nvme-pci: refactor nvme_disable_io_queues Christoph Hellwig
2018-12-04  1:00   ` Sagi Grimberg
2018-12-04 15:05     ` Christoph Hellwig
2018-12-04 18:19       ` Jens Axboe
2018-12-02 16:46 ` [PATCH 07/13] nvme-pci: don't poll from irq context when deleting queues Christoph Hellwig
2018-12-03 18:15   ` Keith Busch
2018-12-04  1:05   ` Sagi Grimberg
2018-12-02 16:46 ` [PATCH 08/13] nvme-pci: remove the CQ lock for interrupt driven queues Christoph Hellwig
2018-12-04  1:08   ` Sagi Grimberg
2018-12-02 16:46 ` [PATCH 09/13] nvme-rdma: remove I/O polling support Christoph Hellwig
2018-12-02 16:46 ` [PATCH 10/13] nvme-mpath: remove I/O polling support Christoph Hellwig
2018-12-03 18:22   ` Keith Busch
2018-12-04  1:11   ` Sagi Grimberg
2018-12-04 15:07     ` Christoph Hellwig
2018-12-04 17:18       ` Sagi Grimberg
2018-12-02 16:46 ` [PATCH 11/13] block: remove ->poll_fn Christoph Hellwig
2018-12-04  1:11   ` Sagi Grimberg
2018-12-02 16:46 ` [PATCH 12/13] block: only allow polling if a poll queue_map exists Christoph Hellwig
2018-12-04  1:14   ` Sagi Grimberg
2018-12-02 16:46 ` [PATCH 13/13] block: enable polling by default if a poll map is initalized Christoph Hellwig
2018-12-04  1:14   ` Sagi Grimberg
2018-12-04 18:40 ` Jens Axboe [this message]
