From: Mikulas Patocka <mpatocka@redhat.com>
To: Jan Kara <jack@suse.cz>
Cc: Dave Chinner <david@fromorbit.com>,
	Zhongwei Cai <sunrise_l@sjtu.edu.cn>,
	Theodore Ts'o <tytso@mit.edu>,
	Matthew Wilcox <willy@infradead.org>,
	David Laight <David.Laight@aculab.com>,
	Mingkai Dong <mingkaidong@gmail.com>,
	Andrew Morton <akpm@linux-foundation.org>,
	Steven Whitehouse <swhiteho@redhat.com>,
	Eric Sandeen <esandeen@redhat.com>,
	Dave Chinner <dchinner@redhat.com>,
	Wang Jianchao <jianchao.wan9@gmail.com>,
	Rajesh Tadakamadla <rajesh.tadakamadla@hpe.com>,
	linux-kernel <linux-kernel@vger.kernel.org>,
	linux-fsdevel <linux-fsdevel@vger.kernel.org>,
	linux-nvdimm <linux-nvdimm@lists.01.org>
Subject: Re: Expense of read_iter
Date: Wed, 20 Jan 2021 10:12:01 -0500 (EST)	[thread overview]
Message-ID: <alpine.LRH.2.02.2101200951070.24430@file01.intranet.prod.int.rdu2.redhat.com> (raw)
In-Reply-To: <20210120141834.GA24063@quack2.suse.cz>



On Wed, 20 Jan 2021, Jan Kara wrote:

> Yeah, I agree. I'm against an ext4-private solution for this read problem.
> And I'm also against duplicating ->read_iter functionality in the ->read
> handler. The maintenance burden of this code duplication is IMHO just too
> big. We rather need to improve the generic code so that the fast path is
> faster. And every filesystem will benefit, because this is not an
> ext4-specific problem.
> 
> 								Honza

Do you have any ideas on how to optimize the generic code that calls 
->read_iter?

vfs_read calls ->read if it is present; if not, it calls new_sync_read. 
new_sync_read's stack frame is 128 bytes - it holds the iovec, kiocb and 
iov_iter structures - and it then calls ->read_iter.
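
For reference, this is roughly what that path looks like - a simplified
sketch of vfs_read()/new_sync_read(), not the verbatim code in
fs/read_write.c (permission checks, call_read_iter() and accounting
omitted):

static ssize_t new_sync_read(struct file *filp, char __user *buf,
			     size_t len, loff_t *ppos)
{
	/* these three on-stack structures make up most of the frame */
	struct iovec iov = { .iov_base = buf, .iov_len = len };
	struct kiocb kiocb;
	struct iov_iter iter;
	ssize_t ret;

	init_sync_kiocb(&kiocb, filp);
	kiocb.ki_pos = ppos ? *ppos : 0;
	iov_iter_init(&iter, READ, &iov, 1, len);

	ret = filp->f_op->read_iter(&kiocb, &iter);
	if (ppos)
		*ppos = kiocb.ki_pos;
	return ret;
}

ssize_t vfs_read(struct file *file, char __user *buf, size_t count,
		 loff_t *pos)
{
	if (file->f_op->read)
		return file->f_op->read(file, buf, count, pos);
	if (file->f_op->read_iter)
		return new_sync_read(file, buf, count, pos);
	return -EINVAL;
}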

I found that the cost of calling new_sync_read is 3.3%; Zhongwei measured 
3.9%. (The benchmark repeatedly reads the same 4k page.)
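
(For clarity, the benchmark is essentially a loop of this kind - a minimal
sketch, not necessarily the exact program either of us used:)

/* re-read the same 4k page over and over; illustrative sketch only */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(int argc, char **argv)
{
	char buf[4096];
	long i;
	int fd;

	if (argc < 2) {
		fprintf(stderr, "usage: %s <file>\n", argv[0]);
		return 1;
	}
	fd = open(argv[1], O_RDONLY);
	if (fd < 0) {
		perror("open");
		return 1;
	}
	for (i = 0; i < 10000000; i++) {
		if (pread(fd, buf, sizeof(buf), 0) != (ssize_t)sizeof(buf)) {
			perror("pread");
			return 1;
		}
	}
	close(fd);
	return 0;
}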

I don't see any way to optimize new_sync_read or to reduce its frame size. 
Do you?

Mikulas

Thread overview: 57+ messages

2021-01-07 13:15 [RFC v2] nvfs: a filesystem for persistent memory Mikulas Patocka
2021-01-07 15:11 ` Expense of read_iter Matthew Wilcox
2021-01-07 16:43   ` Mingkai Dong
2021-01-12 13:45     ` Zhongwei Cai
2021-01-12 14:06       ` David Laight
2021-01-13 16:44       ` Mikulas Patocka
2021-01-15  9:40         ` Zhongwei Cai
2021-01-20  4:47           ` Dave Chinner
2021-01-20 14:18             ` Jan Kara
2021-01-20 15:12               ` Mikulas Patocka [this message]
2021-01-20 15:44                 ` David Laight
2021-01-21 15:47                 ` Matthew Wilcox
2021-01-21 16:06                   ` Mikulas Patocka
2021-01-21 16:30               ` Zhongwei Cai
2021-01-07 18:59   ` Mikulas Patocka
2021-01-10  6:13     ` Matthew Wilcox
2021-01-10 21:19       ` Mikulas Patocka
2021-01-11  0:18         ` Matthew Wilcox
2021-01-11 21:10           ` Mikulas Patocka
2021-01-11 10:11       ` David Laight
2021-01-10 16:20 ` [RFC v2] nvfs: a filesystem for persistent memory Al Viro
2021-01-10 16:51   ` Al Viro
2021-01-10 21:14   ` Mikulas Patocka
2021-01-10 23:40     ` Al Viro
2021-01-11 11:41       ` Mikulas Patocka
2021-01-11 10:29   ` David Laight
2021-01-11 11:44     ` Mikulas Patocka
2021-01-11 11:57       ` David Laight
2021-01-11 14:43         ` Al Viro
2021-01-11 14:54           ` David Laight
