From: Jan Kara <jack@suse.cz>
To: Andrew Morton <akpm@linux-foundation.org>
Cc: Jan Kara <jack@suse.cz>, LKML <linux-kernel@vger.kernel.org>,
	linux-fsdevel@vger.kernel.org, linux-mm@kvack.org
Subject: Re: [PATCH 2/6] fs: Take mapping lock in generic read paths
Date: Fri, 8 Feb 2013 15:59:45 +0100
Message-ID: <20130208145945.GA10030@quack.suse.cz>
In-Reply-To: <20130204124715.GF7523@quack.suse.cz>

On Mon 04-02-13 13:47:15, Jan Kara wrote:
> On Thu 31-01-13 15:59:40, Andrew Morton wrote:
> > On Thu, 31 Jan 2013 22:49:50 +0100
> > Jan Kara <jack@suse.cz> wrote:
> > 
> > > Add mapping lock to struct address_space and grab it in all paths
> > > creating pages in page cache to read data into them. That means buffered
> > > read, readahead, and page fault code.
> > 
> > Boy, this does look expensive in both speed and space.
>   I'm not sure I'll be able to do much with the space cost but hopefully
> the CPU cost could be reduced.
> 
> > As you pointed out in [0/n], it's 2-3%.  As always with pagecache
> > stuff, the cost of filling the page generally swamps any inefficiencies
> > in preparing that page.
>   Yes, I measured it with a ramdisk-backed fs exactly to remove the cost
> of filling the page from the picture. But there are systems where IO is
> CPU-bound (e.g. when you have PCIe-attached devices), and although the
> additional cost of the block layer will further hide the cost of the page
> cache itself, I assume the added 2-3% incurred by the page cache itself
> will be measurable on such systems. That's why I'd like to reduce the CPU
> cost of range locking.
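
(To make the discussion below more concrete, here is a toy userspace sketch
of the range-lock semantics this is built on: pthreads, a plain list instead
of a tree, made-up names. It only illustrates the idea; it is not the code
from patch 1/6.)

#include <pthread.h>
#include <stdbool.h>

struct range_lock {
        unsigned long start, end;       /* inclusive range of page indices */
        struct range_lock *next;
};

struct range_lock_tree {                /* the real thing uses a tree; a list will do here */
        pthread_mutex_t lock;
        pthread_cond_t wait;
        struct range_lock *head;        /* granted + waiting requests, in arrival order */
};

#define RANGE_LOCK_TREE_INIT \
        { PTHREAD_MUTEX_INITIALIZER, PTHREAD_COND_INITIALIZER, NULL }

bool ranges_overlap(const struct range_lock *a, const struct range_lock *b)
{
        return a->start <= b->end && b->start <= a->end;
}

/* Conflict only with requests queued before us, so everyone makes progress. */
bool range_conflicts(struct range_lock_tree *tree, struct range_lock *rl)
{
        struct range_lock *cur;

        for (cur = tree->head; cur && cur != rl; cur = cur->next)
                if (ranges_overlap(cur, rl))
                        return true;
        return false;
}

void range_lock(struct range_lock_tree *tree, struct range_lock *rl)
{
        struct range_lock **pp;

        pthread_mutex_lock(&tree->lock);
        rl->next = NULL;
        for (pp = &tree->head; *pp; pp = &(*pp)->next)
                ;                       /* find the tail */
        *pp = rl;                       /* enqueue in arrival order */
        while (range_conflicts(tree, rl))
                pthread_cond_wait(&tree->wait, &tree->lock);
        pthread_mutex_unlock(&tree->lock);
}

void range_unlock(struct range_lock_tree *tree, struct range_lock *rl)
{
        struct range_lock **pp;

        pthread_mutex_lock(&tree->lock);
        for (pp = &tree->head; *pp; pp = &(*pp)->next) {
                if (*pp == rl) {
                        *pp = rl->next;
                        break;
                }
        }
        pthread_cond_broadcast(&tree->wait);
        pthread_mutex_unlock(&tree->lock);
}

A read path would take such a lock over the page indices it is about to
instantiate, read the pages in, and drop it again, so it serializes against
e.g. truncate on the same range while leaving disjoint ranges fully parallel.
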
  So I played a bit more with the code and was able to reduce the space
cost to a single pointer in struct address_space, with an unmeasurable
impact in the write path. I still see a ~1% regression in the read path and
I'm not sure why, since the fast path now only adds a test for one value;
maybe there's a thinko somewhere. Anyway, I'm optimistic that at least in
the current form the code can be massaged so that the CPU cost is in the
noise.
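
In other words, the fast path boils down to something of this shape (a
userspace sketch reusing the toy types above, not the actual code; the
install/teardown races and the matching "did we actually lock?" bookkeeping
in the unlock path are exactly the fiddly parts glossed over here):

#include <stdatomic.h>

/* Reuses the toy range_lock_tree/range_lock types from the sketch above. */
struct mapping_sketch {
        /* ... the usual address_space fields ... */
        struct range_lock_tree *_Atomic range_locks;    /* NULL until first needed */
};

bool lock_read_range(struct mapping_sketch *mapping,
                     unsigned long start, unsigned long end,
                     struct range_lock *rl)
{
        struct range_lock_tree *tree = atomic_load(&mapping->range_locks);

        /* Fast path: one load and one test when nobody is range-locking. */
        if (!tree)
                return false;           /* nothing to lock against */

        rl->start = start;
        rl->end = end;
        range_lock(tree, rl);
        return true;                    /* caller must range_unlock() later */
}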

I write "in the current form" because as Dave Chinner pointed out we need
to lock the whole range used by write() at once to ever have a chance to
drop i_mutex and that will require some non-trivial changes. So I'll be
looking into that now...
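
In the toy model above, that end state would look roughly like the sketch
below: a single lock taken over every page a write touches, before any of
the work is done (again only a sketch of the shape, not the planned kernel
change):

#define TOY_PAGE_SHIFT  12

/* Toy model of a write path that locks the whole byte range up front. */
void toy_write(struct range_lock_tree *tree,
               unsigned long long pos, unsigned long count)
{
        struct range_lock rl = {
                .start = pos >> TOY_PAGE_SHIFT,
                .end   = (pos + count - 1) >> TOY_PAGE_SHIFT,   /* count > 0 assumed */
        };

        /* One lock covering every page the write touches, taken once. */
        range_lock(tree, &rl);
        /* ... instantiate pages, copy data from the user buffer, dirty them ... */
        range_unlock(tree, &rl);
}

With such a lock held, writers to disjoint ranges would no longer have to
serialize on i_mutex for the copy itself, which is what would eventually
make dropping i_mutex possible.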

								Honza
-- 
Jan Kara <jack@suse.cz>
SUSE Labs, CR
