linux-kernel.vger.kernel.org archive mirror
* [PATCH v2 0/2] optimise iov_iter
@ 2020-11-19 23:24 Pavel Begunkov
  2020-11-19 23:24 ` [PATCH v2 1/2] iov_iter: optimise iov_iter_npages for bvec Pavel Begunkov
  2020-11-19 23:24 ` [PATCH v2 2/2] iov_iter: optimise iter type checking Pavel Begunkov
  0 siblings, 2 replies; 21+ messages in thread
From: Pavel Begunkov @ 2020-11-19 23:24 UTC (permalink / raw)
  To: linux-fsdevel, Alexander Viro; +Cc: Jens Axboe, linux-block, linux-kernel

The first patch optimises iov_iter_npages() for the bvec case, and the
second helps the compiler to kill unreachable code.

v2: same code, stylistic message changes
    + Reviewed-by

Pavel Begunkov (2):
  iov_iter: optimise iov_iter_npages for bvec
  iov_iter: optimise iter type checking

 include/linux/uio.h | 10 +++++-----
 lib/iov_iter.c      | 10 +++++-----
 2 files changed, 10 insertions(+), 10 deletions(-)

-- 
2.24.0



* [PATCH v2 1/2] iov_iter: optimise iov_iter_npages for bvec
  2020-11-19 23:24 [PATCH v2 0/2] optimise iov_iter Pavel Begunkov
@ 2020-11-19 23:24 ` Pavel Begunkov
  2020-11-20  1:20   ` Matthew Wilcox
  2020-11-20 13:29   ` David Laight
  2020-11-19 23:24 ` [PATCH v2 2/2] iov_iter: optimise iter type checking Pavel Begunkov
  1 sibling, 2 replies; 21+ messages in thread
From: Pavel Begunkov @ 2020-11-19 23:24 UTC (permalink / raw)
  To: linux-fsdevel, Alexander Viro; +Cc: Jens Axboe, linux-block, linux-kernel

The block layer spends quite a while in iov_iter_npages(), but for the
bvec case the number of pages is already known and stored in
iter->nr_segs, so it can be returned immediately as an optimisation.

Perf for an io_uring benchmark with registered buffers (i.e. bvec) shows
~1.5-2.0% of total cycle count spent in iov_iter_npages(); that's dropped
by this patch to ~0.2%.

Reviewed-by: Jens Axboe <axboe@kernel.dk>
Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
---
 lib/iov_iter.c | 10 +++++-----
 1 file changed, 5 insertions(+), 5 deletions(-)

diff --git a/lib/iov_iter.c b/lib/iov_iter.c
index 1635111c5bd2..0fa7ac330acf 100644
--- a/lib/iov_iter.c
+++ b/lib/iov_iter.c
@@ -1594,6 +1594,8 @@ int iov_iter_npages(const struct iov_iter *i, int maxpages)
 		return 0;
 	if (unlikely(iov_iter_is_discard(i)))
 		return 0;
+	if (unlikely(iov_iter_is_bvec(i)))
+		return min_t(int, i->nr_segs, maxpages);
 
 	if (unlikely(iov_iter_is_pipe(i))) {
 		struct pipe_inode_info *pipe = i->pipe;
@@ -1614,11 +1616,9 @@ int iov_iter_npages(const struct iov_iter *i, int maxpages)
 			- p / PAGE_SIZE;
 		if (npages >= maxpages)
 			return maxpages;
-	0;}),({
-		npages++;
-		if (npages >= maxpages)
-			return maxpages;
-	}),({
+	0;}),
+		0 /* bvecs are handled above */
+	,({
 		unsigned long p = (unsigned long)v.iov_base;
 		npages += DIV_ROUND_UP(p + v.iov_len, PAGE_SIZE)
 			- p / PAGE_SIZE;
-- 
2.24.0



* [PATCH v2 2/2] iov_iter: optimise iter type checking
  2020-11-19 23:24 [PATCH v2 0/2] optimise iov_iter Pavel Begunkov
  2020-11-19 23:24 ` [PATCH v2 1/2] iov_iter: optimise iov_iter_npages for bvec Pavel Begunkov
@ 2020-11-19 23:24 ` Pavel Begunkov
  1 sibling, 0 replies; 21+ messages in thread
From: Pavel Begunkov @ 2020-11-19 23:24 UTC (permalink / raw)
  To: linux-fsdevel, Alexander Viro; +Cc: Jens Axboe, linux-block, linux-kernel

The problem is that the iov_iter_is_*() helpers check the iter type for
equality, while all the iterate_* helpers do bitwise ANDs. This
confuses the compiler: even when a case has already been handled
separately with iov_iter_is_*(), the corresponding ifs in an iterate*()
right after it are not eliminated.

E.g. iov_iter_npages() handles bvecs and discards first, but
iterate_all_kinds() still checks for those iter types and generates
actually unreachable code for them, including iterator init, for()
loops, etc.
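
A minimal standalone sketch of the effect (hypothetical enum and
function, not kernel code):

	enum t { A = 1, B = 2, C = 4 };

	int f(enum t type)
	{
		if (type == B)	/* equality test, as iov_iter_is_*() did */
			return 1;
		/*
		 * The compiler can't prove (type & B) == 0 here: nothing
		 * guarantees that only one bit of 'type' is ever set, so
		 * the branch below survives.
		 */
		if (type & B)
			return 2;
		return 3;
	}

With both tests written as bitwise ANDs, falling through the first if
proves (type & B) == 0, and the second if becomes provably dead.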

size lib/iov_iter.o
before:
   text    data     bss     dec     hex filename
  24409     805       0   25214    627e lib/iov_iter.o

after:
   text    data     bss     dec     hex filename
  23785     793       0   24578    6002 lib/iov_iter.o

Most of the removed code is ifs that are evaluated but can never be true.

Reviewed-by: Jens Axboe <axboe@kernel.dk>
Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
---
 include/linux/uio.h | 10 +++++-----
 1 file changed, 5 insertions(+), 5 deletions(-)

diff --git a/include/linux/uio.h b/include/linux/uio.h
index 72d88566694e..c5970b2d3307 100644
--- a/include/linux/uio.h
+++ b/include/linux/uio.h
@@ -57,27 +57,27 @@ static inline enum iter_type iov_iter_type(const struct iov_iter *i)
 
 static inline bool iter_is_iovec(const struct iov_iter *i)
 {
-	return iov_iter_type(i) == ITER_IOVEC;
+	return iov_iter_type(i) & ITER_IOVEC;
 }
 
 static inline bool iov_iter_is_kvec(const struct iov_iter *i)
 {
-	return iov_iter_type(i) == ITER_KVEC;
+	return iov_iter_type(i) & ITER_KVEC;
 }
 
 static inline bool iov_iter_is_bvec(const struct iov_iter *i)
 {
-	return iov_iter_type(i) == ITER_BVEC;
+	return iov_iter_type(i) & ITER_BVEC;
 }
 
 static inline bool iov_iter_is_pipe(const struct iov_iter *i)
 {
-	return iov_iter_type(i) == ITER_PIPE;
+	return iov_iter_type(i) & ITER_PIPE;
 }
 
 static inline bool iov_iter_is_discard(const struct iov_iter *i)
 {
-	return iov_iter_type(i) == ITER_DISCARD;
+	return iov_iter_type(i) & ITER_DISCARD;
 }
 
 static inline unsigned char iov_iter_rw(const struct iov_iter *i)
-- 
2.24.0



* Re: [PATCH v2 1/2] iov_iter: optimise iov_iter_npages for bvec
  2020-11-19 23:24 ` [PATCH v2 1/2] iov_iter: optimise iov_iter_npages for bvec Pavel Begunkov
@ 2020-11-20  1:20   ` Matthew Wilcox
  2020-11-20  1:39     ` Pavel Begunkov
  2020-11-20 13:29   ` David Laight
  1 sibling, 1 reply; 21+ messages in thread
From: Matthew Wilcox @ 2020-11-20  1:20 UTC (permalink / raw)
  To: Pavel Begunkov
  Cc: linux-fsdevel, Alexander Viro, Jens Axboe, linux-block, linux-kernel

On Thu, Nov 19, 2020 at 11:24:38PM +0000, Pavel Begunkov wrote:
> The block layer spends quite a while in iov_iter_npages(), but for the
> bvec case the number of pages is already known and stored in
> iter->nr_segs, so it can be returned immediately as an optimisation.

Er ... no, it doesn't.  nr_segs is the number of bvecs.  Each bvec can
store up to 4GB of contiguous physical memory.
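
For reference, the bvec layout (as in include/linux/bvec.h around that
time); bv_len being a 32-bit byte count is where the 4GB per-bvec bound
comes from:

	struct bio_vec {
		struct page	*bv_page;
		unsigned int	bv_len;		/* bytes, up to ~4GB */
		unsigned int	bv_offset;
	};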



* Re: [PATCH v2 1/2] iov_iter: optimise iov_iter_npages for bvec
  2020-11-20  1:20   ` Matthew Wilcox
@ 2020-11-20  1:39     ` Pavel Begunkov
  2020-11-20  1:49       ` Matthew Wilcox
  2020-11-20  2:22       ` Ming Lei
  0 siblings, 2 replies; 21+ messages in thread
From: Pavel Begunkov @ 2020-11-20  1:39 UTC (permalink / raw)
  To: Matthew Wilcox
  Cc: linux-fsdevel, Alexander Viro, Jens Axboe, linux-block, linux-kernel

On 20/11/2020 01:20, Matthew Wilcox wrote:
> On Thu, Nov 19, 2020 at 11:24:38PM +0000, Pavel Begunkov wrote:
>> The block layer spends quite a while in iov_iter_npages(), but for the
>> bvec case the number of pages is already known and stored in
>> iter->nr_segs, so it can be returned immediately as an optimisation.
> 
> Er ... no, it doesn't.  nr_segs is the number of bvecs.  Each bvec can
> store up to 4GB of contiguous physical memory.

Ah, really, missed min() with PAGE_SIZE in bvec_iter_len(), then it's a
stupid statement. Thanks!
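
(That cap, roughly as defined in include/linux/bvec.h at the time; the
per-segment length is clamped to the current page boundary, so even a
single huge bvec is walked page by page:)

	#define bvec_iter_len(bvec, iter)				\
		min_t(unsigned, mp_bvec_iter_len((bvec), (iter)),	\
		      PAGE_SIZE - bvec_iter_offset((bvec), (iter)))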

Are there many users of that? All these iterators are a huge burden,
just to count one 4KB page in bvec it takes 2% of CPU time for me.

-- 
Pavel Begunkov


* Re: [PATCH v2 1/2] iov_iter: optimise iov_iter_npages for bvec
  2020-11-20  1:39     ` Pavel Begunkov
@ 2020-11-20  1:49       ` Matthew Wilcox
  2020-11-20  1:56         ` Pavel Begunkov
  2020-11-20  2:22       ` Ming Lei
  1 sibling, 1 reply; 21+ messages in thread
From: Matthew Wilcox @ 2020-11-20  1:49 UTC (permalink / raw)
  To: Pavel Begunkov
  Cc: linux-fsdevel, Alexander Viro, Jens Axboe, linux-block, linux-kernel

On Fri, Nov 20, 2020 at 01:39:05AM +0000, Pavel Begunkov wrote:
> On 20/11/2020 01:20, Matthew Wilcox wrote:
> > On Thu, Nov 19, 2020 at 11:24:38PM +0000, Pavel Begunkov wrote:
> >> The block layer spends quite a while in iov_iter_npages(), but for the
> >> bvec case the number of pages is already known and stored in
> >> iter->nr_segs, so it can be returned immediately as an optimisation.
> > 
> > Er ... no, it doesn't.  nr_segs is the number of bvecs.  Each bvec can
> > store up to 4GB of contiguous physical memory.
> 
> Ah, really, missed min() with PAGE_SIZE in bvec_iter_len(), then it's a
> stupid statement. Thanks!
> 
> Are there many users of that? All these iterators are a huge burden,
> just to count one 4KB page in bvec it takes 2% of CPU time for me.

__bio_try_merge_page() will create multipage BIOs, and that's
called from a number of places including
bio_try_merge_hw_seg(), bio_add_page(), and __bio_iov_iter_get_pages()

so ... yeah, it's used a lot.


* Re: [PATCH v2 1/2] iov_iter: optimise iov_iter_npages for bvec
  2020-11-20  1:49       ` Matthew Wilcox
@ 2020-11-20  1:56         ` Pavel Begunkov
  2020-11-20  2:06           ` Matthew Wilcox
  0 siblings, 1 reply; 21+ messages in thread
From: Pavel Begunkov @ 2020-11-20  1:56 UTC (permalink / raw)
  To: Matthew Wilcox
  Cc: linux-fsdevel, Alexander Viro, Jens Axboe, linux-block, linux-kernel

On 20/11/2020 01:49, Matthew Wilcox wrote:
> On Fri, Nov 20, 2020 at 01:39:05AM +0000, Pavel Begunkov wrote:
>> On 20/11/2020 01:20, Matthew Wilcox wrote:
>>> On Thu, Nov 19, 2020 at 11:24:38PM +0000, Pavel Begunkov wrote:
>>>> The block layer spends quite a while in iov_iter_npages(), but for the
>>>> bvec case the number of pages is already known and stored in
>>>> iter->nr_segs, so it can be returned immediately as an optimisation.
>>>
>>> Er ... no, it doesn't.  nr_segs is the number of bvecs.  Each bvec can
>>> store up to 4GB of contiguous physical memory.
>>
>> Ah, really, missed min() with PAGE_SIZE in bvec_iter_len(), then it's a
>> stupid statement. Thanks!
>>
>> Are there many users of that? All these iterators are a huge burden,
>> just to count one 4KB page in bvec it takes 2% of CPU time for me.
> 
> __bio_try_merge_page() will create multipage BIOs, and that's
> called from a number of places including
> bio_try_merge_hw_seg(), bio_add_page(), and __bio_iov_iter_get_pages()

I get it that there are a lot of places; more interesting is how often
it's actually triggered and if that's performance critical for anybody.
Not like I'm going to change it, just out of curiosity, but bvec.h
can be nicely optimised without it.

> 
> so ... yeah, it's used a lot.
> 

-- 
Pavel Begunkov


* Re: [PATCH v2 1/2] iov_iter: optimise iov_iter_npages for bvec
  2020-11-20  1:56         ` Pavel Begunkov
@ 2020-11-20  2:06           ` Matthew Wilcox
  2020-11-20  2:08             ` Pavel Begunkov
  2020-11-20  2:24             ` Ming Lei
  0 siblings, 2 replies; 21+ messages in thread
From: Matthew Wilcox @ 2020-11-20  2:06 UTC (permalink / raw)
  To: Pavel Begunkov
  Cc: linux-fsdevel, Alexander Viro, Jens Axboe, linux-block, linux-kernel

On Fri, Nov 20, 2020 at 01:56:22AM +0000, Pavel Begunkov wrote:
> On 20/11/2020 01:49, Matthew Wilcox wrote:
> > On Fri, Nov 20, 2020 at 01:39:05AM +0000, Pavel Begunkov wrote:
> >> On 20/11/2020 01:20, Matthew Wilcox wrote:
> >>> On Thu, Nov 19, 2020 at 11:24:38PM +0000, Pavel Begunkov wrote:
> >>>> The block layer spends quite a while in iov_iter_npages(), but for the
> >>>> bvec case the number of pages is already known and stored in
> >>>> iter->nr_segs, so it can be returned immediately as an optimisation.
> >>>
> >>> Er ... no, it doesn't.  nr_segs is the number of bvecs.  Each bvec can
> >>> store up to 4GB of contiguous physical memory.
> >>
> >> Ah, really, missed min() with PAGE_SIZE in bvec_iter_len(), then it's a
> >> stupid statement. Thanks!
> >>
> >> Are there many users of that? All these iterators are a huge burden,
> >> just to count one 4KB page in bvec it takes 2% of CPU time for me.
> > 
> > __bio_try_merge_page() will create multipage BIOs, and that's
> > called from a number of places including
> > bio_try_merge_hw_seg(), bio_add_page(), and __bio_iov_iter_get_pages()
> 
> I get it that there are a lot of places, more interesting how often
> it's actually triggered and if that's performance critical for anybody.
> Not like I'm going to change it, just out of curiosity, but bvec.h
> can be nicely optimised without it.

Typically when you're allocating pages for the page cache, they'll get
allocated in order and then you'll read or write them in order, so yes,
it ends up triggering quite a lot.  There was once a bug in the page
allocator which caused them to get allocated in reverse order and it
was a noticeable performance hit (this was 15-20 years ago).


* Re: [PATCH v2 1/2] iov_iter: optimise iov_iter_npages for bvec
  2020-11-20  2:06           ` Matthew Wilcox
@ 2020-11-20  2:08             ` Pavel Begunkov
  2020-11-20  2:24             ` Ming Lei
  1 sibling, 0 replies; 21+ messages in thread
From: Pavel Begunkov @ 2020-11-20  2:08 UTC (permalink / raw)
  To: Matthew Wilcox
  Cc: linux-fsdevel, Alexander Viro, Jens Axboe, linux-block, linux-kernel

On 20/11/2020 02:06, Matthew Wilcox wrote:
> On Fri, Nov 20, 2020 at 01:56:22AM +0000, Pavel Begunkov wrote:
>> On 20/11/2020 01:49, Matthew Wilcox wrote:
>>> On Fri, Nov 20, 2020 at 01:39:05AM +0000, Pavel Begunkov wrote:
>>>> On 20/11/2020 01:20, Matthew Wilcox wrote:
>>>>> On Thu, Nov 19, 2020 at 11:24:38PM +0000, Pavel Begunkov wrote:
>>>>>> The block layer spends quite a while in iov_iter_npages(), but for the
>>>>>> bvec case the number of pages is already known and stored in
>>>>>> iter->nr_segs, so it can be returned immediately as an optimisation.
>>>>>
>>>>> Er ... no, it doesn't.  nr_segs is the number of bvecs.  Each bvec can
>>>>> store up to 4GB of contiguous physical memory.
>>>>
>>>> Ah, really, missed min() with PAGE_SIZE in bvec_iter_len(), then it's a
>>>> stupid statement. Thanks!
>>>>
>>>> Are there many users of that? All these iterators are a huge burden,
>>>> just to count one 4KB page in bvec it takes 2% of CPU time for me.
>>>
>>> __bio_try_merge_page() will create multipage BIOs, and that's
>>> called from a number of places including
>>> bio_try_merge_hw_seg(), bio_add_page(), and __bio_iov_iter_get_pages()
>>
>> I get it that there are a lot of places; more interesting is how often
>> it's actually triggered and if that's performance critical for anybody.
>> Not like I'm going to change it, just out of curiosity, but bvec.h
>> can be nicely optimised without it.
> 
> Typically when you're allocating pages for the page cache, they'll get
> allocated in order and then you'll read or write them in order, so yes,
> it ends up triggering quite a lot.  There was once a bug in the page
> allocator which caused them to get allocated in reverse order and it
> was a noticable performance hit (this was 15-20 years ago).

I see, thanks for a bit of insight

-- 
Pavel Begunkov


* Re: [PATCH v2 1/2] iov_iter: optimise iov_iter_npages for bvec
  2020-11-20  1:39     ` Pavel Begunkov
  2020-11-20  1:49       ` Matthew Wilcox
@ 2020-11-20  2:22       ` Ming Lei
  2020-11-20  2:25         ` Pavel Begunkov
  1 sibling, 1 reply; 21+ messages in thread
From: Ming Lei @ 2020-11-20  2:22 UTC (permalink / raw)
  To: Pavel Begunkov
  Cc: Matthew Wilcox, linux-fsdevel, Alexander Viro, Jens Axboe,
	linux-block, linux-kernel

On Fri, Nov 20, 2020 at 01:39:05AM +0000, Pavel Begunkov wrote:
> On 20/11/2020 01:20, Matthew Wilcox wrote:
> > On Thu, Nov 19, 2020 at 11:24:38PM +0000, Pavel Begunkov wrote:
> >> The block layer spends quite a while in iov_iter_npages(), but for the
> >> bvec case the number of pages is already known and stored in
> >> iter->nr_segs, so it can be returned immediately as an optimisation.
> > 
> > Er ... no, it doesn't.  nr_segs is the number of bvecs.  Each bvec can
> > store up to 4GB of contiguous physical memory.
> 
> Ah, really, missed min() with PAGE_SIZE in bvec_iter_len(), then it's a
> stupid statement. Thanks!
> 

iov_iter_npages(bvec) can still be improved a bit in the following way:

diff --git a/lib/iov_iter.c b/lib/iov_iter.c
index 1635111c5bd2..d85ed7acce05 100644
--- a/lib/iov_iter.c
+++ b/lib/iov_iter.c
@@ -1608,17 +1608,23 @@ int iov_iter_npages(const struct iov_iter *i, int maxpages)
 		npages = pipe_space_for_user(iter_head, pipe->tail, pipe);
 		if (npages >= maxpages)
 			return maxpages;
+	} else if (iov_iter_is_bvec(i)) {
+		unsigned idx, offset = i->iov_offset;
+
+		for (idx = 0; idx < i->nr_segs; idx++) {
+			npages += DIV_ROUND_UP(i->bvec[idx].bv_len - offset,
+					PAGE_SIZE);
+			offset = 0;
+		}
+		if (npages >= maxpages)
+			return maxpages;
 	} else iterate_all_kinds(i, size, v, ({
 		unsigned long p = (unsigned long)v.iov_base;
 		npages += DIV_ROUND_UP(p + v.iov_len, PAGE_SIZE)
 			- p / PAGE_SIZE;
 		if (npages >= maxpages)
 			return maxpages;
-	0;}),({
-		npages++;
-		if (npages >= maxpages)
-			return maxpages;
-	}),({
+	0;}),0,({
 		unsigned long p = (unsigned long)v.iov_base;
 		npages += DIV_ROUND_UP(p + v.iov_len, PAGE_SIZE)
 			- p / PAGE_SIZE;

-- 
Ming



* Re: [PATCH v2 1/2] iov_iter: optimise iov_iter_npages for bvec
  2020-11-20  2:06           ` Matthew Wilcox
  2020-11-20  2:08             ` Pavel Begunkov
@ 2020-11-20  2:24             ` Ming Lei
  2020-11-20 17:22               ` Pavel Begunkov
  1 sibling, 1 reply; 21+ messages in thread
From: Ming Lei @ 2020-11-20  2:24 UTC (permalink / raw)
  To: Matthew Wilcox
  Cc: Pavel Begunkov, linux-fsdevel, Alexander Viro, Jens Axboe,
	linux-block, linux-kernel

On Fri, Nov 20, 2020 at 02:06:10AM +0000, Matthew Wilcox wrote:
> On Fri, Nov 20, 2020 at 01:56:22AM +0000, Pavel Begunkov wrote:
> > On 20/11/2020 01:49, Matthew Wilcox wrote:
> > > On Fri, Nov 20, 2020 at 01:39:05AM +0000, Pavel Begunkov wrote:
> > >> On 20/11/2020 01:20, Matthew Wilcox wrote:
> > >>> On Thu, Nov 19, 2020 at 11:24:38PM +0000, Pavel Begunkov wrote:
> > >>>> The block layer spends quite a while in iov_iter_npages(), but for the
> > >>>> bvec case the number of pages is already known and stored in
> > >> iter->nr_segs, so it can be returned immediately as an optimisation.
> > >>>
> > >>> Er ... no, it doesn't.  nr_segs is the number of bvecs.  Each bvec can
> > >>> store up to 4GB of contiguous physical memory.
> > >>
> > >> Ah, really, missed min() with PAGE_SIZE in bvec_iter_len(), then it's a
> > >> stupid statement. Thanks!
> > >>
> > >> Are there many users of that? All these iterators are a huge burden,
> > >> just to count one 4KB page in bvec it takes 2% of CPU time for me.
> > > 
> > > __bio_try_merge_page() will create multipage BIOs, and that's
> > > called from a number of places including
> > > bio_try_merge_hw_seg(), bio_add_page(), and __bio_iov_iter_get_pages()
> > 
> > I get it that there are a lot of places; more interesting is how often
> > it's actually triggered and if that's performance critical for anybody.
> > Not like I'm going to change it, just out of curiosity, but bvec.h
> > can be nicely optimised without it.
> 
> Typically when you're allocating pages for the page cache, they'll get
> allocated in order and then you'll read or write them in order, so yes,
> it ends up triggering quite a lot.  There was once a bug in the page
> allocator which caused them to get allocated in reverse order and it
> was a noticeable performance hit (this was 15-20 years ago).

hugepage use cases can benefit a lot from this too.


Thanks,
Ming



* Re: [PATCH v2 1/2] iov_iter: optimise iov_iter_npages for bvec
  2020-11-20  2:22       ` Ming Lei
@ 2020-11-20  2:25         ` Pavel Begunkov
  2020-11-20  2:54           ` Matthew Wilcox
  0 siblings, 1 reply; 21+ messages in thread
From: Pavel Begunkov @ 2020-11-20  2:25 UTC (permalink / raw)
  To: Ming Lei
  Cc: Matthew Wilcox, linux-fsdevel, Alexander Viro, Jens Axboe,
	linux-block, linux-kernel

On 20/11/2020 02:22, Ming Lei wrote:
> On Fri, Nov 20, 2020 at 01:39:05AM +0000, Pavel Begunkov wrote:
>> On 20/11/2020 01:20, Matthew Wilcox wrote:
>>> On Thu, Nov 19, 2020 at 11:24:38PM +0000, Pavel Begunkov wrote:
>>>> The block layer spends quite a while in iov_iter_npages(), but for the
>>>> bvec case the number of pages is already known and stored in
>>>> iter->nr_segs, so it can be returned immediately as an optimisation.
>>>
>>> Er ... no, it doesn't.  nr_segs is the number of bvecs.  Each bvec can
>>> store up to 4GB of contiguous physical memory.
>>
>> Ah, really, missed min() with PAGE_SIZE in bvec_iter_len(), then it's a
>> stupid statement. Thanks!
>>
> 
> iov_iter_npages(bvec) can still be improved a bit in the following way:

Yep, was doing exactly that, +a couple of other places that are in my way.

> 
> diff --git a/lib/iov_iter.c b/lib/iov_iter.c
> index 1635111c5bd2..d85ed7acce05 100644
> --- a/lib/iov_iter.c
> +++ b/lib/iov_iter.c
> @@ -1608,17 +1608,23 @@ int iov_iter_npages(const struct iov_iter *i, int maxpages)
>  		npages = pipe_space_for_user(iter_head, pipe->tail, pipe);
>  		if (npages >= maxpages)
>  			return maxpages;
> +	} else if (iov_iter_is_bvec(i)) {
> +		unsigned idx, offset = i->iov_offset;
> +
> +		for (idx = 0; idx < i->nr_segs; idx++) {
> +			npages += DIV_ROUND_UP(i->bvec[idx].bv_len - offset,
> +					PAGE_SIZE);
> +			offset = 0;
> +		}
> +		if (npages >= maxpages)
> +			return maxpages;
>  	} else iterate_all_kinds(i, size, v, ({
>  		unsigned long p = (unsigned long)v.iov_base;
>  		npages += DIV_ROUND_UP(p + v.iov_len, PAGE_SIZE)
>  			- p / PAGE_SIZE;
>  		if (npages >= maxpages)
>  			return maxpages;
> -	0;}),({
> -		npages++;
> -		if (npages >= maxpages)
> -			return maxpages;
> -	}),({
> +	0;}),0,({
>  		unsigned long p = (unsigned long)v.iov_base;
>  		npages += DIV_ROUND_UP(p + v.iov_len, PAGE_SIZE)
>  			- p / PAGE_SIZE;
> 

-- 
Pavel Begunkov


* Re: [PATCH v2 1/2] iov_iter: optimise iov_iter_npages for bvec
  2020-11-20  2:25         ` Pavel Begunkov
@ 2020-11-20  2:54           ` Matthew Wilcox
  2020-11-20  8:14             ` Christoph Hellwig
  2020-11-20  9:57             ` Pavel Begunkov
  0 siblings, 2 replies; 21+ messages in thread
From: Matthew Wilcox @ 2020-11-20  2:54 UTC (permalink / raw)
  To: Pavel Begunkov
  Cc: Ming Lei, linux-fsdevel, Alexander Viro, Jens Axboe, linux-block,
	linux-kernel

On Fri, Nov 20, 2020 at 02:25:08AM +0000, Pavel Begunkov wrote:
> On 20/11/2020 02:22, Ming Lei wrote:
> > iov_iter_npages(bvec) can still be improved a bit in the following way:
> 
> Yep, was doing exactly that, +a couple of other places that are in my way.

Are you optimising the right thing here?  Assuming you're looking at
the one in do_blockdev_direct_IO(), wouldn't we be better off figuring
out how to copy the bvecs directly from the iov_iter into the bio
rather than calling dio_bio_add_page() for each page?
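
A rough sketch of that idea (hypothetical helper, not an existing API;
a complete version would also need bio completion to skip the per-page
reference drops):

	static void dio_bio_set_bvecs(struct bio *bio, struct iov_iter *iter)
	{
		/* Reference the iterator's bvec array directly instead
		 * of re-adding its pages one at a time. */
		bio->bi_io_vec = (struct bio_vec *)iter->bvec;
		bio->bi_vcnt = iter->nr_segs;
		bio->bi_iter.bi_bvec_done = iter->iov_offset;
		bio->bi_iter.bi_size = iov_iter_count(iter);
	}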


* Re: [PATCH v2 1/2] iov_iter: optimise iov_iter_npages for bvec
  2020-11-20  2:54           ` Matthew Wilcox
@ 2020-11-20  8:14             ` Christoph Hellwig
  2020-11-20 12:39               ` Matthew Wilcox
  2020-11-20  9:57             ` Pavel Begunkov
  1 sibling, 1 reply; 21+ messages in thread
From: Christoph Hellwig @ 2020-11-20  8:14 UTC (permalink / raw)
  To: Matthew Wilcox
  Cc: Pavel Begunkov, Ming Lei, linux-fsdevel, Alexander Viro,
	Jens Axboe, linux-block, linux-kernel

On Fri, Nov 20, 2020 at 02:54:57AM +0000, Matthew Wilcox wrote:
> On Fri, Nov 20, 2020 at 02:25:08AM +0000, Pavel Begunkov wrote:
> > On 20/11/2020 02:22, Ming Lei wrote:
> > > iov_iter_npages(bvec) can still be improved a bit in the following way:
> > 
> > Yep, was doing exactly that, +a couple of other places that are in my way.
> 
> Are you optimising the right thing here?  Assuming you're looking at
> the one in do_blockdev_direct_IO(), wouldn't we be better off figuring
> out how to copy the bvecs directly from the iov_iter into the bio
> rather than calling dio_bio_add_page() for each page?

Which is most effectively done by no longer using *blockdev_direct_IO
and switching to iomap instead :)


* Re: [PATCH v2 1/2] iov_iter: optimise iov_iter_npages for bvec
  2020-11-20  2:54           ` Matthew Wilcox
  2020-11-20  8:14             ` Christoph Hellwig
@ 2020-11-20  9:57             ` Pavel Begunkov
  1 sibling, 0 replies; 21+ messages in thread
From: Pavel Begunkov @ 2020-11-20  9:57 UTC (permalink / raw)
  To: Matthew Wilcox
  Cc: Ming Lei, linux-fsdevel, Alexander Viro, Jens Axboe, linux-block,
	linux-kernel

On 20/11/2020 02:54, Matthew Wilcox wrote:
> On Fri, Nov 20, 2020 at 02:25:08AM +0000, Pavel Begunkov wrote:
>> On 20/11/2020 02:22, Ming Lei wrote:
>>> iov_iter_npages(bvec) can still be improved a bit in the following way:
>>
>> Yep, was doing exactly that, +a couple of other places that are in my way.
> 
> Are you optimising the right thing here?  Assuming you're looking at
> the one in do_blockdev_direct_IO(), wouldn't we be better off figuring
> out how to copy the bvecs directly from the iov_iter into the bio
> rather than calling dio_bio_add_page() for each page?

Ha, you got me, *add_page() was that "couple of others". It shows up much
more, but iov_iter_npages() just looked simple enough to do first.

-- 
Pavel Begunkov


* Re: [PATCH v2 1/2] iov_iter: optimise iov_iter_npages for bvec
  2020-11-20  8:14             ` Christoph Hellwig
@ 2020-11-20 12:39               ` Matthew Wilcox
  2020-11-20 13:00                 ` Pavel Begunkov
  0 siblings, 1 reply; 21+ messages in thread
From: Matthew Wilcox @ 2020-11-20 12:39 UTC (permalink / raw)
  To: Christoph Hellwig
  Cc: Pavel Begunkov, Ming Lei, linux-fsdevel, Alexander Viro,
	Jens Axboe, linux-block, linux-kernel

On Fri, Nov 20, 2020 at 08:14:29AM +0000, Christoph Hellwig wrote:
> On Fri, Nov 20, 2020 at 02:54:57AM +0000, Matthew Wilcox wrote:
> > On Fri, Nov 20, 2020 at 02:25:08AM +0000, Pavel Begunkov wrote:
> > > On 20/11/2020 02:22, Ming Lei wrote:
> > > > iov_iter_npages(bvec) can still be improved a bit in the following way:
> > > 
> > > Yep, was doing exactly that, +a couple of other places that are in my way.
> > 
> > Are you optimising the right thing here?  Assuming you're looking at
> > the one in do_blockdev_direct_IO(), wouldn't we be better off figuring
> > out how to copy the bvecs directly from the iov_iter into the bio
> > rather than calling dio_bio_add_page() for each page?
> 
> Which is most effectively done by no longer using *blockdev_direct_IO
> and switching to iomap instead :)

But iomap still calls iov_iter_npages().  So maybe we need something like
this ...

diff --git a/fs/iomap/direct-io.c b/fs/iomap/direct-io.c
index 933f234d5bec..1c5a802a45d9 100644
--- a/fs/iomap/direct-io.c
+++ b/fs/iomap/direct-io.c
@@ -250,7 +250,7 @@ iomap_dio_bio_actor(struct inode *inode, loff_t pos, loff_t length,
 	orig_count = iov_iter_count(dio->submit.iter);
 	iov_iter_truncate(dio->submit.iter, length);
 
-	nr_pages = iov_iter_npages(dio->submit.iter, BIO_MAX_PAGES);
+	nr_pages = bio_iov_iter_npages(dio->submit.iter);
 	if (nr_pages <= 0) {
 		ret = nr_pages;
 		goto out;
@@ -308,7 +308,7 @@ iomap_dio_bio_actor(struct inode *inode, loff_t pos, loff_t length,
 		dio->size += n;
 		copied += n;
 
-		nr_pages = iov_iter_npages(dio->submit.iter, BIO_MAX_PAGES);
+		nr_pages = bio_iov_iter_npages(dio->submit.iter);
 		iomap_dio_submit_bio(dio, iomap, bio, pos);
 		pos += n;
 	} while (nr_pages);
diff --git a/include/linux/bio.h b/include/linux/bio.h
index c6d765382926..86cc74f84b30 100644
--- a/include/linux/bio.h
+++ b/include/linux/bio.h
@@ -10,6 +10,7 @@
 #include <linux/ioprio.h>
 /* struct bio, bio_vec and BIO_* flags are defined in blk_types.h */
 #include <linux/blk_types.h>
+#include <linux/uio.h>
 
 #define BIO_DEBUG
 
@@ -447,6 +448,16 @@ bool __bio_try_merge_page(struct bio *bio, struct page *page,
 void __bio_add_page(struct bio *bio, struct page *page,
 		unsigned int len, unsigned int off);
 int bio_iov_iter_get_pages(struct bio *bio, struct iov_iter *iter);
+
+static inline int bio_iov_iter_npages(const struct iov_iter *i)
+{
+	if (!iov_iter_count(i))
+		return 0;
+	if (iov_iter_is_bvec(i))
+		return 1;
+	return iov_iter_npages(i, BIO_MAX_PAGES);
+}
+
 void bio_release_pages(struct bio *bio, bool mark_dirty);
 extern void bio_set_pages_dirty(struct bio *bio);
 extern void bio_check_pages_dirty(struct bio *bio);


* Re: [PATCH v2 1/2] iov_iter: optimise iov_iter_npages for bvec
  2020-11-20 12:39               ` Matthew Wilcox
@ 2020-11-20 13:00                 ` Pavel Begunkov
  2020-11-20 13:13                   ` Matthew Wilcox
  0 siblings, 1 reply; 21+ messages in thread
From: Pavel Begunkov @ 2020-11-20 13:00 UTC (permalink / raw)
  To: Matthew Wilcox, Christoph Hellwig
  Cc: Ming Lei, linux-fsdevel, Alexander Viro, Jens Axboe, linux-block,
	linux-kernel

On 20/11/2020 12:39, Matthew Wilcox wrote:
> On Fri, Nov 20, 2020 at 08:14:29AM +0000, Christoph Hellwig wrote:
>> On Fri, Nov 20, 2020 at 02:54:57AM +0000, Matthew Wilcox wrote:
>>> On Fri, Nov 20, 2020 at 02:25:08AM +0000, Pavel Begunkov wrote:
>>>> On 20/11/2020 02:22, Ming Lei wrote:
>>>>> iov_iter_npages(bvec) can still be improved a bit in the following way:
>>>>
>>>> Yep, was doing exactly that, +a couple of other places that are in my way.
>>>
>>> Are you optimising the right thing here?  Assuming you're looking at
>>> the one in do_blockdev_direct_IO(), wouldn't we be better off figuring
>>> out how to copy the bvecs directly from the iov_iter into the bio
>>> rather than calling dio_bio_add_page() for each page?
>>
>> Which is most effectively done by no longer using *blockdev_direct_IO
>> and switching to iomap instead :)
> 
> But iomap still calls iov_iter_npages().  So maybe we need something like
> this ...

Yep, these optimisations aren't mutually exclusive.
Why `return 1`? It seems to be used later in bio_alloc(nr_pages).

> 
> diff --git a/fs/iomap/direct-io.c b/fs/iomap/direct-io.c
> index 933f234d5bec..1c5a802a45d9 100644
> --- a/fs/iomap/direct-io.c
> +++ b/fs/iomap/direct-io.c
> @@ -250,7 +250,7 @@ iomap_dio_bio_actor(struct inode *inode, loff_t pos, loff_t length,
>  	orig_count = iov_iter_count(dio->submit.iter);
>  	iov_iter_truncate(dio->submit.iter, length);
>  
> -	nr_pages = iov_iter_npages(dio->submit.iter, BIO_MAX_PAGES);
> +	nr_pages = bio_iov_iter_npages(dio->submit.iter);
>  	if (nr_pages <= 0) {
>  		ret = nr_pages;
>  		goto out;
> @@ -308,7 +308,7 @@ iomap_dio_bio_actor(struct inode *inode, loff_t pos, loff_t length,
>  		dio->size += n;
>  		copied += n;
>  
> -		nr_pages = iov_iter_npages(dio->submit.iter, BIO_MAX_PAGES);
> +		nr_pages = bio_iov_iter_npages(dio->submit.iter);
>  		iomap_dio_submit_bio(dio, iomap, bio, pos);
>  		pos += n;
>  	} while (nr_pages);
> diff --git a/include/linux/bio.h b/include/linux/bio.h
> index c6d765382926..86cc74f84b30 100644
> --- a/include/linux/bio.h
> +++ b/include/linux/bio.h
> @@ -10,6 +10,7 @@
>  #include <linux/ioprio.h>
>  /* struct bio, bio_vec and BIO_* flags are defined in blk_types.h */
>  #include <linux/blk_types.h>
> +#include <linux/uio.h>
>  
>  #define BIO_DEBUG
>  
> @@ -447,6 +448,16 @@ bool __bio_try_merge_page(struct bio *bio, struct page *page,
>  void __bio_add_page(struct bio *bio, struct page *page,
>  		unsigned int len, unsigned int off);
>  int bio_iov_iter_get_pages(struct bio *bio, struct iov_iter *iter);
> +
> +static inline int bio_iov_iter_npages(const struct iov_iter *i)
> +{
> +	if (!iov_iter_count(i))
> +		return 0;
> +	if (iov_iter_is_bvec(i))
> +		return 1;
> +	return iov_iter_npages(i, BIO_MAX_PAGES);
> +}
> +
>  void bio_release_pages(struct bio *bio, bool mark_dirty);
>  extern void bio_set_pages_dirty(struct bio *bio);
>  extern void bio_check_pages_dirty(struct bio *bio);
> 

-- 
Pavel Begunkov


* Re: [PATCH v2 1/2] iov_iter: optimise iov_iter_npages for bvec
  2020-11-20 13:00                 ` Pavel Begunkov
@ 2020-11-20 13:13                   ` Matthew Wilcox
  0 siblings, 0 replies; 21+ messages in thread
From: Matthew Wilcox @ 2020-11-20 13:13 UTC (permalink / raw)
  To: Pavel Begunkov
  Cc: Christoph Hellwig, Ming Lei, linux-fsdevel, Alexander Viro,
	Jens Axboe, linux-block, linux-kernel

On Fri, Nov 20, 2020 at 01:00:37PM +0000, Pavel Begunkov wrote:
> On 20/11/2020 12:39, Matthew Wilcox wrote:
> > On Fri, Nov 20, 2020 at 08:14:29AM +0000, Christoph Hellwig wrote:
> >> On Fri, Nov 20, 2020 at 02:54:57AM +0000, Matthew Wilcox wrote:
> >>> On Fri, Nov 20, 2020 at 02:25:08AM +0000, Pavel Begunkov wrote:
> >>>> On 20/11/2020 02:22, Ming Lei wrote:
> >>>>> iov_iter_npages(bvec) can still be improved a bit in the following way:
> >>>>
> >>>> Yep, was doing exactly that, +a couple of other places that are in my way.
> >>>
> >>> Are you optimising the right thing here?  Assuming you're looking at
> >>> the one in do_blockdev_direct_IO(), wouldn't we be better off figuring
> >>> out how to copy the bvecs directly from the iov_iter into the bio
> >>> rather than calling dio_bio_add_page() for each page?
> >>
> >> Which is most effectively done by no longer using *blockdev_direct_IO
> >> and switching to iomap instead :)
> > 
> > But iomap still calls iov_iter_npages().  So maybe we need something like
> > this ...
> 
> Yep, these optimisations aren't mutually exclusive.
> Why `return 1`? It seems to be used later in bio_alloc(nr_pages).

because 0 means "no pages".  It does no harm to allocate one biovec
that we then don't use.

> > -	nr_pages = iov_iter_npages(dio->submit.iter, BIO_MAX_PAGES);
> > +	nr_pages = bio_iov_iter_npages(dio->submit.iter);
> >  	if (nr_pages <= 0) {
            ^^^^^^^^^^^^^

> > -		nr_pages = iov_iter_npages(dio->submit.iter, BIO_MAX_PAGES);
> > +		nr_pages = bio_iov_iter_npages(dio->submit.iter);
> >  		iomap_dio_submit_bio(dio, iomap, bio, pos);
> >  		pos += n;
> >  	} while (nr_pages);
                 ^^^^^^^^



* RE: [PATCH v2 1/2] iov_iter: optimise iov_iter_npages for bvec
  2020-11-19 23:24 ` [PATCH v2 1/2] iov_iter: optimise iov_iter_npages for bvec Pavel Begunkov
  2020-11-20  1:20   ` Matthew Wilcox
@ 2020-11-20 13:29   ` David Laight
  1 sibling, 0 replies; 21+ messages in thread
From: David Laight @ 2020-11-20 13:29 UTC (permalink / raw)
  To: 'Pavel Begunkov', linux-fsdevel, Alexander Viro
  Cc: Jens Axboe, linux-block, linux-kernel

From: Pavel Begunkov
> Sent: 19 November 2020 23:25
>
> The block layer spends quite a while in iov_iter_npages(), but for the
> bvec case the number of pages is already known and stored in
> iter->nr_segs, so it can be returned immediately as an optimisation.
> 
> Perf for an io_uring benchmark with registered buffers (i.e. bvec) shows
> ~1.5-2.0% of total cycle count spent in iov_iter_npages(); that's dropped
> by this patch to ~0.2%.
> 
> Reviewed-by: Jens Axboe <axboe@kernel.dk>
> Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
> ---
>  lib/iov_iter.c | 10 +++++-----
>  1 file changed, 5 insertions(+), 5 deletions(-)
> 
> diff --git a/lib/iov_iter.c b/lib/iov_iter.c
> index 1635111c5bd2..0fa7ac330acf 100644
> --- a/lib/iov_iter.c
> +++ b/lib/iov_iter.c
> @@ -1594,6 +1594,8 @@ int iov_iter_npages(const struct iov_iter *i, int maxpages)
>  		return 0;
>  	if (unlikely(iov_iter_is_discard(i)))
>  		return 0;
> +	if (unlikely(iov_iter_is_bvec(i)))
> +		return min_t(int, i->nr_segs, maxpages);
> 
>  	if (unlikely(iov_iter_is_pipe(i))) {

Is it worth putting an extra condition around these three 'unlikely' cases?
i.e.:
	if (unlikely(iov_iter_type(i) & (ITER_DISCARD | ITER_BVEC | ITER_PIPE))) {
		if (iov_iter_is_discard(i))
			return 0;
		if (iov_iter_is_bvec(i))
			return min_t(int, i->nr_segs, maxpages);
		/* Must be ITER_PIPE */

	David

-
Registered Address Lakeside, Bramley Road, Mount Farm, Milton Keynes, MK1 1PT, UK
Registration No: 1397386 (Wales)



* Re: [PATCH v2 1/2] iov_iter: optimise iov_iter_npages for bvec
  2020-11-20  2:24             ` Ming Lei
@ 2020-11-20 17:22               ` Pavel Begunkov
  2020-11-20 17:23                 ` Pavel Begunkov
  0 siblings, 1 reply; 21+ messages in thread
From: Pavel Begunkov @ 2020-11-20 17:22 UTC (permalink / raw)
  To: Ming Lei, Matthew Wilcox
  Cc: linux-fsdevel, Alexander Viro, Jens Axboe, linux-block, linux-kernel

On 20/11/2020 02:24, Ming Lei wrote:
> On Fri, Nov 20, 2020 at 02:06:10AM +0000, Matthew Wilcox wrote:
>> On Fri, Nov 20, 2020 at 01:56:22AM +0000, Pavel Begunkov wrote:
>>> On 20/11/2020 01:49, Matthew Wilcox wrote:
>>>> On Fri, Nov 20, 2020 at 01:39:05AM +0000, Pavel Begunkov wrote:
>>>>> On 20/11/2020 01:20, Matthew Wilcox wrote:
>>>>>> On Thu, Nov 19, 2020 at 11:24:38PM +0000, Pavel Begunkov wrote:
>>>>>>> The block layer spends quite a while in iov_iter_npages(), but for the
>>>>>>> bvec case the number of pages is already known and stored in
>>>>>>> iter->nr_segs, so it can be returned immediately as an optimisation.
>>>>>>
>>>>>> Er ... no, it doesn't.  nr_segs is the number of bvecs.  Each bvec can
>>>>>> store up to 4GB of contiguous physical memory.
>>>>>
>>>>> Ah, really, missed min() with PAGE_SIZE in bvec_iter_len(), then it's a
>>>>> stupid statement. Thanks!
>>>>>
>>>>> Are there many users of that? All these iterators are a huge burden,
>>>>> just to count one 4KB page in bvec it takes 2% of CPU time for me.
>>>>
>>>> __bio_try_merge_page() will create multipage BIOs, and that's
>>>> called from a number of places including
>>>> bio_try_merge_hw_seg(), bio_add_page(), and __bio_iov_iter_get_pages()
>>>
>>>> I get it that there are a lot of places; more interesting is how often
>>> it's actually triggered and if that's performance critical for anybody.
>>> Not like I'm going to change it, just out of curiosity, but bvec.h
>>> can be nicely optimised without it.
>>
>> Typically when you're allocating pages for the page cache, they'll get
>> allocated in order and then you'll read or write them in order, so yes,
>> it ends up triggering quite a lot.  There was once a bug in the page
>> allocator which caused them to get allocated in reverse order and it
>> was a noticeable performance hit (this was 15-20 years ago).
> 
> hugepage use cases can benefit a lot from this too.

This didn't yield any considerable boost for me though: 1.5% -> 1.3%
for 1-page reads. I'll send it anyway because there are cases that
can benefit, e.g. as Ming mentioned.

Ming, would you want to send the patch yourself? After all, you did post
it first.

-- 
Pavel Begunkov


* Re: [PATCH v2 1/2] iov_iter: optimise iov_iter_npages for bvec
  2020-11-20 17:22               ` Pavel Begunkov
@ 2020-11-20 17:23                 ` Pavel Begunkov
  0 siblings, 0 replies; 21+ messages in thread
From: Pavel Begunkov @ 2020-11-20 17:23 UTC (permalink / raw)
  To: Ming Lei, Matthew Wilcox
  Cc: linux-fsdevel, Alexander Viro, Jens Axboe, linux-block, linux-kernel

On 20/11/2020 17:22, Pavel Begunkov wrote:
> On 20/11/2020 02:24, Ming Lei wrote:
>> On Fri, Nov 20, 2020 at 02:06:10AM +0000, Matthew Wilcox wrote:
>>> On Fri, Nov 20, 2020 at 01:56:22AM +0000, Pavel Begunkov wrote:
>>>> On 20/11/2020 01:49, Matthew Wilcox wrote:
>>>>> On Fri, Nov 20, 2020 at 01:39:05AM +0000, Pavel Begunkov wrote:
>>>>>> On 20/11/2020 01:20, Matthew Wilcox wrote:
>>>>>>> On Thu, Nov 19, 2020 at 11:24:38PM +0000, Pavel Begunkov wrote:
>>>>>>>> The block layer spends quite a while in iov_iter_npages(), but for the
>>>>>>>> bvec case the number of pages is already known and stored in
>>>>>>>> iter->nr_segs, so it can be returned immediately as an optimisation.
>>>>>>>
>>>>>>> Er ... no, it doesn't.  nr_segs is the number of bvecs.  Each bvec can
>>>>>>> store up to 4GB of contiguous physical memory.
>>>>>>
>>>>>> Ah, really, missed min() with PAGE_SIZE in bvec_iter_len(), then it's a
>>>>>> stupid statement. Thanks!
>>>>>>
>>>>>> Are there many users of that? All these iterators are a huge burden,
>>>>>> just to count one 4KB page in bvec it takes 2% of CPU time for me.
>>>>>
>>>>> __bio_try_merge_page() will create multipage BIOs, and that's
>>>>> called from a number of places including
>>>>> bio_try_merge_hw_seg(), bio_add_page(), and __bio_iov_iter_get_pages()
>>>>
>>>> I get it that there are a lot of places; more interesting is how often
>>>> it's actually triggered and if that's performance critical for anybody.
>>>> Not like I'm going to change it, just out of curiosity, but bvec.h
>>>> can be nicely optimised without it.
>>>
>>> Typically when you're allocating pages for the page cache, they'll get
>>> allocated in order and then you'll read or write them in order, so yes,
>>> it ends up triggering quite a lot.  There was once a bug in the page
>>> allocator which caused them to get allocated in reverse order and it
>>> was a noticeable performance hit (this was 15-20 years ago).
>>
>> hugepage use cases can benefit a lot from this too.
> 
> This didn't yield any considerable boost for me though: 1.5% -> 1.3%
> for 1-page reads. I'll send it anyway because there are cases that
> can benefit, e.g. as Ming mentioned.

And yeah, it just shifts my optimisation attention to its callers,
e.g. blkdev_direct_IO.

> Ming, would you want to send the patch yourself? After all, you did post
> it first.
> 

-- 
Pavel Begunkov

