From: Pavel Begunkov <asml.silence@gmail.com>
To: Ming Lei <ming.lei@redhat.com>, Matthew Wilcox <willy@infradead.org>
Cc: linux-fsdevel@vger.kernel.org,
	Alexander Viro <viro@zeniv.linux.org.uk>,
	Jens Axboe <axboe@kernel.dk>,
	linux-block@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: Re: [PATCH v2 1/2] iov_iter: optimise iov_iter_npages for bvec
Date: Fri, 20 Nov 2020 17:22:15 +0000
Message-ID: <59329ec4-e894-e3ff-6f6e-7d89c34bebaf@gmail.com>
In-Reply-To: <20201120022426.GC333150@T590>

On 20/11/2020 02:24, Ming Lei wrote:
> On Fri, Nov 20, 2020 at 02:06:10AM +0000, Matthew Wilcox wrote:
>> On Fri, Nov 20, 2020 at 01:56:22AM +0000, Pavel Begunkov wrote:
>>> On 20/11/2020 01:49, Matthew Wilcox wrote:
>>>> On Fri, Nov 20, 2020 at 01:39:05AM +0000, Pavel Begunkov wrote:
>>>>> On 20/11/2020 01:20, Matthew Wilcox wrote:
>>>>>> On Thu, Nov 19, 2020 at 11:24:38PM +0000, Pavel Begunkov wrote:
>>>>>>> The block layer spends quite a while in iov_iter_npages(), but for the
>>>>>>> bvec case the number of pages is already known and stored in
>>>>>>> iter->nr_segs, so it can be returned immediately as an optimisation
>>>>>>
>>>>>> Er ... no, it doesn't.  nr_segs is the number of bvecs.  Each bvec can
>>>>>> store up to 4GB of contiguous physical memory.
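To spell out the arithmetic: a single bvec describes one physically
contiguous range, so the number of pages it spans depends on its offset
and length. A minimal sketch (bvec_num_pages is a made-up name here,
not a kernel symbol):

/*
 * Pages covered by the byte range [bv_offset, bv_offset + bv_len) of
 * one bvec.  With bv_len up to 4GB this can be a huge number, which is
 * why iter->nr_segs alone is not the answer iov_iter_npages() needs.
 */
static inline size_t bvec_num_pages(const struct bio_vec *bv)
{
	/* do the sum in size_t: bv_len alone can be up to 4GB */
	return DIV_ROUND_UP((size_t)bv->bv_offset + bv->bv_len, PAGE_SIZE);
}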
>>>>>
>>>>> Ah, right, I missed the min() with PAGE_SIZE in bvec_iter_len(), so
>>>>> that was a stupid statement on my part. Thanks!
>>>>>
>>>>> Are there many users of that? All these iterators are a huge burden;
>>>>> just counting one 4KB page in a bvec takes 2% of CPU time for me.
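The clamp in question, roughly as it reads in include/linux/bvec.h of
this era (excerpted; each iteration step is capped so that it never
crosses a page boundary):

#define bvec_iter_len(bvec, iter)				\
	min_t(unsigned, mp_bvec_iter_len((bvec), (iter)),	\
	      PAGE_SIZE - bvec_iter_offset((bvec), (iter)))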
>>>>
>>>> __bio_try_merge_page() will create multipage BIOs, and it is called
>>>> from a number of places, including bio_try_merge_hw_seg(),
>>>> bio_add_page(), and __bio_iov_iter_get_pages().
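The merge test behind those call sites boils down to physical
contiguity; a simplified sketch (bvec_can_append is a made-up name; the
real check lives in page_is_mergeable(), which also handles same-page
reuse and queue limits):

/* can 'page' at offset 'off' extend the last bvec of a bio in place? */
static bool bvec_can_append(const struct bio_vec *bv,
			    struct page *page, unsigned int off)
{
	phys_addr_t vec_end = page_to_phys(bv->bv_page) +
			      bv->bv_offset + bv->bv_len;

	return vec_end == page_to_phys(page) + off;
}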
>>>
>>> I get that there are a lot of places; what's more interesting is how
>>> often it's actually triggered and whether it's performance critical
>>> for anybody. Not that I'm going to change it, just out of curiosity,
>>> but bvec.h can be nicely optimised without it.
>>
>> Typically when you're allocating pages for the page cache, they'll get
>> allocated in order and then you'll read or write them in order, so yes,
>> it ends up triggering quite a lot.  There was once a bug in the page
>> allocator that caused pages to be allocated in reverse order, and it
>> was a noticeable performance hit (this was 15-20 years ago).
> 
> Hugepage use cases can benefit a lot from this too.

This didn't yield any considerable boost for me, though: 1.5% -> 1.3%
for single-page reads. I'll send it anyway, because there are cases
that can benefit, e.g. the hugepage ones Ming mentioned.
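For reference, a correct bvec fast path has to count pages per segment
instead of trusting nr_segs; roughly along these lines (a sketch, not
necessarily the exact code that will get sent):

static int bvec_npages(const struct iov_iter *i, int maxpages)
{
	size_t skip = i->iov_offset, size = i->count;
	const struct bio_vec *p;
	int npages = 0;

	/* walk only the bvecs that still have bytes left in this iter */
	for (p = i->bvec; size; skip = 0, p++) {
		unsigned int offs = p->bv_offset + skip;
		size_t len = min(p->bv_len - skip, size);

		size -= len;
		npages += DIV_ROUND_UP(offs + len, PAGE_SIZE);
		if (npages > maxpages)
			return maxpages;
	}
	return npages;
}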

Ming, would you like to send the patch yourself? After all, you did
post it first.

-- 
Pavel Begunkov


Thread overview: 21+ messages
2020-11-19 23:24 [PATCH v2 0/2] optimise iov_iter Pavel Begunkov
2020-11-19 23:24 ` [PATCH v2 1/2] iov_iter: optimise iov_iter_npages for bvec Pavel Begunkov
2020-11-20  1:20   ` Matthew Wilcox
2020-11-20  1:39     ` Pavel Begunkov
2020-11-20  1:49       ` Matthew Wilcox
2020-11-20  1:56         ` Pavel Begunkov
2020-11-20  2:06           ` Matthew Wilcox
2020-11-20  2:08             ` Pavel Begunkov
2020-11-20  2:24             ` Ming Lei
2020-11-20 17:22               ` Pavel Begunkov [this message]
2020-11-20 17:23                 ` Pavel Begunkov
2020-11-20  2:22       ` Ming Lei
2020-11-20  2:25         ` Pavel Begunkov
2020-11-20  2:54           ` Matthew Wilcox
2020-11-20  8:14             ` Christoph Hellwig
2020-11-20 12:39               ` Matthew Wilcox
2020-11-20 13:00                 ` Pavel Begunkov
2020-11-20 13:13                   ` Matthew Wilcox
2020-11-20  9:57             ` Pavel Begunkov
2020-11-20 13:29   ` David Laight
2020-11-19 23:24 ` [PATCH v2 2/2] iov_iter: optimise iter type checking Pavel Begunkov
