From: Jens Axboe <axboe@kernel.dk>
To: io-uring@vger.kernel.org
Cc: linux-block@vger.kernel.org, hch@infradead.org
Subject: [PATCHSET v4 0/6] Enable bio recycling for polled IO
Date: Wed, 11 Aug 2021 13:35:27 -0600
Message-ID: <20210811193533.766613-1-axboe@kernel.dk>

Hi,

This is v4 of this patchset, and Yet Another method of achieving the
same goal. This one moves back in the direction of my old
cpu-alloc-cache branch, where the caches are just per-cpu. The trouble
with those is that we have to make the cache specific to polled IO, so
that we can drop the IRQ safety; otherwise it's not a real win and
we're better off just using the slab allocator smarts. That is combined
with Christoph's idea of making the cache per bio_set, and it retains
the kiocb flagging that lets the IO issuer tell the layers below
whether the cache can safely be used or not.
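
In rough terms, the cache looks something like the below. This is a
simplified sketch, not the exact code in the patches - the struct
layout, the bs->cache field, and the helper names are shorthand for
the approach:

  struct bio_alloc_cache {
  	struct bio	*free_list;	/* recycled bios, linked via bi_next */
  	unsigned int	nr;		/* entries currently cached */
  };

  static struct bio *bio_cache_get(struct kiocb *kiocb, struct bio_set *bs)
  {
  	struct bio_alloc_cache *cache;
  	struct bio *bio;

  	/* issuer must have flagged the kiocb as safe for caching */
  	if (!(kiocb->ki_flags & IOCB_ALLOC_CACHE))
  		return NULL;

  	cache = per_cpu_ptr(bs->cache, get_cpu());
  	bio = cache->free_list;
  	if (bio) {
  		cache->free_list = bio->bi_next;
  		cache->nr--;
  	}
  	put_cpu();
  	return bio;	/* NULL means fall back to normal bio_set alloc */
  }

  static void bio_cache_put(struct bio_set *bs, struct bio *bio)
  {
  	struct bio_alloc_cache *cache = per_cpu_ptr(bs->cache, get_cpu());

  	/* recycle rather than free; a real version would cap cache->nr */
  	bio->bi_next = cache->free_list;
  	cache->free_list = bio;
  	cache->nr++;
  	put_cpu();
  }

Since the cache is only hit for polled IO, completions never run from
IRQ context and the free list can be touched without disabling IRQs -
that's where the win over the slab allocator comes from.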

Another change from the last posting is that we can now grossly
simplify the io_uring side, as we don't need any locking for the cache
and async retries are no longer interesting there. This is combined
with a block layer change that clears BIO_PERCPU_CACHE whenever we
clear the HIPRI flag.
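
In rough terms, that check amounts to something like this in the bio
submission path (sketch only; the placement and spelling in the actual
patch may differ):

  	if (!test_bit(QUEUE_FLAG_POLL, &q->queue_flags)) {
  		bio->bi_opf &= ~REQ_HIPRI;
  		bio_clear_flag(bio, BIO_PERCPU_CACHE);
  	}

That way a bio which falls back to IRQ driven completion is never
returned through the IRQ-unsafe cache.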

The tl;dr here is that we get about a 10% bump in polled performance
with this patchset, as we can recycle bio structures essentially for
free. Beyond that, see the explanations in each patch. I've also got
an iomap patch, but I'm keeping this to a single user until there's
agreement on the direction.

This series is against for-5.15/io_uring, and can also be found in my
io_uring-bio-cache.4 branch.

 block/bio.c                | 170 +++++++++++++++++++++++++++++++++----
 block/blk-core.c           |   5 +-
 fs/block_dev.c             |   6 +-
 fs/io_uring.c              |   2 +-
 include/linux/bio.h        |  23 +++--
 include/linux/blk_types.h  |   1 +
 include/linux/cpuhotplug.h |   1 +
 include/linux/fs.h         |   2 +
 8 files changed, 185 insertions(+), 25 deletions(-)

-- 
Jens Axboe



Thread overview: 17+ messages
2021-08-11 19:35 Jens Axboe [this message]
2021-08-11 19:35 ` [PATCH 1/6] bio: optimize initialization of a bio Jens Axboe
2021-08-12  6:51   ` Christoph Hellwig
2021-08-12 15:29     ` Jens Axboe
2021-08-11 19:35 ` [PATCH 2/6] fs: add kiocb alloc cache flag Jens Axboe
2021-08-12  6:54   ` Christoph Hellwig
2021-08-12 14:52     ` Jens Axboe
2021-08-11 19:35 ` [PATCH 3/6] bio: add allocation cache abstraction Jens Axboe
2021-08-12  7:01   ` Christoph Hellwig
2021-08-12 15:08     ` Jens Axboe
2021-08-12 15:18       ` Christoph Hellwig
2021-08-12 15:26         ` Jens Axboe
2021-08-11 19:35 ` [PATCH 4/6] block: clear BIO_PERCPU_CACHE flag if polling isn't supported Jens Axboe
2021-08-11 19:35 ` [PATCH 5/6] io_uring: enable use of bio alloc cache Jens Axboe
2021-08-11 19:35 ` [PATCH 6/6] block: enable use of bio allocation cache Jens Axboe
2021-08-12  7:04   ` Christoph Hellwig
2021-08-12 14:52     ` Jens Axboe

Reply instructions:

You may reply publicly to this message via plain-text email
using any one of the following methods:

* Save the mbox file for this message, import it into your mail
  client, and reply-to-all from there.

  Avoid top-posting and favor interleaved quoting:
  https://en.wikipedia.org/wiki/Posting_style#Interleaved_style

* Reply using the --to, --cc, and --in-reply-to
  switches of git-send-email(1):

  git send-email \
    --in-reply-to=20210811193533.766613-1-axboe@kernel.dk \
    --to=axboe@kernel.dk \
    --cc=hch@infradead.org \
    --cc=io-uring@vger.kernel.org \
    --cc=linux-block@vger.kernel.org \
    /path/to/YOUR_REPLY

  https://kernel.org/pub/software/scm/git/docs/git-send-email.html
