From: Jan Kara <jack@suse.cz>
To: LKML <linux-kernel@vger.kernel.org>
Cc: linux-fsdevel@vger.kernel.org, linux-mm@kvack.org,
	Jan Kara <jack@suse.cz>
Subject: [PATCH 0/6 RFC] Mapping range lock
Date: Thu, 31 Jan 2013 22:49:48 +0100	[thread overview]
Message-ID: <1359668994-13433-1-git-send-email-jack@suse.cz> (raw)

  Hello,

  As I promised in my LSF/MM summit proposal, here are initial patches
implementing the mapping range lock. Ext3 is converted to fully use the
range locks; converting other filesystems shouldn't be difficult, but I want
to spend time on that only after we are sure this is what we want. The
following part is copied from the LSF/MM proposal; below it are some
performance numbers.

There are several different motivations for implementing mapping range
locking:

a) Punch hole is currently racy wrt mmap (a page can be faulted in in the
   punched range after the page cache has been invalidated), leading to nasty
   results such as fs corruption (we can end up writing to an already freed
   block), user exposure of uninitialized data, etc. To fix this we need some
   new mechanism for serializing hole punching and page faults.

b) There is an uncomfortable number of mechanisms serializing the various
   paths manipulating the pagecache and the data underlying it. We have
   i_mutex, page lock, checks for pages beyond EOF in the page fault code,
   and i_dio_count for direct IO. Different pairs of operations are
   serialized by different mechanisms and not all the cases are covered.
   Case (a) above is likely the worst, but DIO vs buffered IO isn't ideal
   either (we provide only limited consistency). Range locking should
   somewhat simplify the serialization of pagecache operations: i_dio_count
   can be removed completely, and i_mutex to a certain extent (we still need
   something for things like timestamp updates, and possibly for i_size
   changes, although I think those can be dealt with).

c) i_mutex doesn't allow any parallelism of operations using it, and some
   filesystems work around this for specific cases (e.g. DIO reads). Range
   locking allows concurrent operations (e.g. writes, DIO) on different
   parts of the file. Of course, range locking by itself isn't enough to
   make the parallelism possible; filesystems still have to somehow deal
   with the concurrency when manipulating inode allocation data. But the
   range locking at least provides a common mechanism for the serialization
   the VFS itself needs, and it's up to each filesystem to serialize more
   if it needs to.

How it works:
------------

The general idea is that a range lock for the range x-y prevents creation of
pages in that range.
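
For illustration only, here is a minimal user-space model of that idea (this
is not the code from these patches, which live in lib/ in PATCH 1/6): a list
of held ranges protected by a mutex, where a locker blocks until no
overlapping range is held. All names, and the linked list itself, are made up
for the sketch; the real implementation is of course kernel code and uses a
more scalable structure.

#include <pthread.h>
#include <stddef.h>

struct range_lock {
	unsigned long start, end;	/* inclusive page-index range */
	struct range_lock *next;
};

struct range_lock_tree {
	pthread_mutex_t lock;
	pthread_cond_t wait;
	struct range_lock *held;	/* ranges currently held */
};

#define RANGE_LOCK_TREE_INIT \
	{ PTHREAD_MUTEX_INITIALIZER, PTHREAD_COND_INITIALIZER, NULL }

static int ranges_overlap(const struct range_lock *r,
			  unsigned long start, unsigned long end)
{
	return r->start <= end && start <= r->end;
}

/* Block until no held range overlaps [start, end], then record ours. */
void range_lock(struct range_lock_tree *tree, struct range_lock *rl,
		unsigned long start, unsigned long end)
{
	struct range_lock *r;

	rl->start = start;
	rl->end = end;
	pthread_mutex_lock(&tree->lock);
	for (;;) {
		for (r = tree->held; r; r = r->next)
			if (ranges_overlap(r, start, end))
				break;
		if (!r)
			break;			/* no conflict - take the range */
		pthread_cond_wait(&tree->wait, &tree->lock);
	}
	rl->next = tree->held;
	tree->held = rl;
	pthread_mutex_unlock(&tree->lock);
}

/* Drop the range and wake anyone who may have been blocked on it. */
void range_unlock(struct range_lock_tree *tree, struct range_lock *rl)
{
	struct range_lock **p;

	pthread_mutex_lock(&tree->lock);
	for (p = &tree->held; *p != rl; p = &(*p)->next)
		;
	*p = rl->next;
	pthread_cond_broadcast(&tree->wait);
	pthread_mutex_unlock(&tree->lock);
}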

In practice this means:
----------------------

All read paths adding a page to the page cache, as well as
grab_cache_page_write_begin(), first take the range lock for the index, then
insert the locked page, and finally unlock the range. See below for why
buffered IO uses range locks on a per-page basis.
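
In terms of the toy model above (range_lock()/range_unlock() from that
sketch), the read-side pattern boils down to the following; the
insert_locked_page callback is a made-up stand-in for the real "insert locked
page into the page cache" step:

/* Uses struct range_lock, range_lock() and range_unlock() from the
 * user-space sketch above. */
void read_path_add_page(struct range_lock_tree *mapping_ranges,
			unsigned long index,
			void (*insert_locked_page)(unsigned long index))
{
	struct range_lock rl;

	range_lock(mapping_ranges, &rl, index, index);	/* just this one page */
	insert_locked_page(index);
	range_unlock(mapping_ranges, &rl);
}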

DIO takes the range lock covering the pages in a bio at the moment it submits
that bio. The pagecache for the range is then truncated and the bio
submitted; the range lock is released once the bio completes.
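
The per-bio pattern in the same toy vocabulary: the struct carrying the range
lock models the dynamically allocated per-bio structure mentioned in the cost
section below, and the invalidate_range/submit_bio callbacks are made-up
stand-ins:

/* Uses range_lock()/range_unlock() from the sketch above.  The range
 * stays locked across bio submission and is only dropped on completion. */
struct dio_request {
	unsigned long first_index, last_index;	/* page range the bio covers */
	struct range_lock rl;			/* held until completion */
};

void dio_submit(struct range_lock_tree *mapping_ranges, struct dio_request *req,
		void (*invalidate_range)(unsigned long, unsigned long),
		void (*submit_bio)(struct dio_request *))
{
	range_lock(mapping_ranges, &req->rl, req->first_index, req->last_index);
	invalidate_range(req->first_index, req->last_index); /* truncate pagecache */
	submit_bio(req);
	/* range lock is released from dio_complete() below */
}

void dio_complete(struct range_lock_tree *mapping_ranges, struct dio_request *req)
{
	range_unlock(mapping_ranges, &req->rl);
}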

Punch hole for the range x-y takes the range lock for that range before
truncating the page cache, and the lock is released after the filesystem
blocks for the range are freed.

Truncate to size x is equivalent to a punch hole for the range x - ~0UL.
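
Sketching punch hole and truncate in the same terms shows why the race in (a)
goes away: no page can be instantiated in the range between the page-cache
truncation and the freeing of the blocks. The two callbacks are made-up
stand-ins for the page-cache truncation and the filesystem's block freeing:

/* Uses range_lock()/range_unlock() from the sketch above. */
void punch_hole(struct range_lock_tree *mapping_ranges,
		unsigned long start, unsigned long end,
		void (*truncate_pagecache_range)(unsigned long, unsigned long),
		void (*free_fs_blocks)(unsigned long, unsigned long))
{
	struct range_lock rl;

	range_lock(mapping_ranges, &rl, start, end);
	truncate_pagecache_range(start, end);
	free_fs_blocks(start, end);	/* no fault can repopulate the range here */
	range_unlock(mapping_ranges, &rl);
}

/* Truncate to x is just punch hole for x .. ~0UL. */
void truncate_from(struct range_lock_tree *mapping_ranges, unsigned long x,
		   void (*truncate_pagecache_range)(unsigned long, unsigned long),
		   void (*free_fs_blocks)(unsigned long, unsigned long))
{
	punch_hole(mapping_ranges, x, ~0UL, truncate_pagecache_range,
		   free_fs_blocks);
}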

The reason we take the range lock for buffered IO on a per-page basis, and
for DIO for each bio separately, is lock ordering with mmap_sem. Page faults
need to instantiate the page under mmap_sem, which establishes mmap_sem >
range lock. Buffered IO takes mmap_sem when prefaulting pages, so we cannot
hold the range lock at that moment. Similarly, get_user_pages() in the DIO
code takes mmap_sem, so we have to be sure not to hold the range lock when
calling it.
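
To illustrate the ordering, here is what one per-page step of a buffered
write would look like in the toy vocabulary: prefaulting the user buffer
(which may take mmap_sem) happens while no range lock is held, and the range
lock then covers only the single page being created. Both callbacks are
made-up stand-ins:

/* Uses range_lock()/range_unlock() from the sketch above.  The point is
 * the ordering: the step that may take mmap_sem runs before the range
 * lock is taken, so "mmap_sem > range lock" is never violated. */
int buffered_write_one_page(struct range_lock_tree *mapping_ranges,
			    unsigned long index,
			    int (*fault_in_user_buffer)(void), /* may take mmap_sem */
			    int (*copy_one_page)(unsigned long))
{
	struct range_lock rl;
	int ret;

	ret = fault_in_user_buffer();	/* no range lock held here */
	if (ret)
		return ret;

	range_lock(mapping_ranges, &rl, index, index);
	ret = copy_one_page(index);	/* page created under the range lock */
	range_unlock(mapping_ranges, &rl);
	return ret;
}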

How much does it cost:
---------------------

There's a memory cost - an extra pointer and spinlock in struct
address_space, 64 bytes on the stack for buffered IO, truncate, and punch
hole, and a dynamically allocated 72-byte structure for each BIO submitted by
direct IO.

And there's a CPU cost. I measured it on an 8 CPU machine with 4 GB of memory
with ext2 (yes, I added support for ext2 as well and used it for the
measurements, since especially the write results are much less noisy) over a
1G ramdisk. The workloads were generated by FIO and were 1) read an 800 MB
file, 2) overwrite an 800 MB file, 3) mmap-read an 800 MB file. Each test was
run 30 times.

The results are here (times to complete in ms):
	Vanilla				Range Locks
	Avg		Stddev		Avg		Stddev
READ	1133.566667	11.954590	1137.06666	7.827019
WRITE	1069.300000	7.996458	1101.200000	8.607748
MMAP	1416.733333	28.459250	1421.900000	30.636960

So the READ and MMAP time changes are in the noise (although for reads there
seems to be about a 1% cost if I compare more tests). For WRITE the cost
barely stands out of the noise at ~3% (here I verified with perf what's going
on, and indeed the range_lock() and range_unlock() calls cost close to 3% of
CPU time in total).

So the cost is noticeable. Is it a problem? Maybe, I'm not sure... We could
likely optimize the case of locking a single-page range, but I wanted to
start simple and get some feedback first.

								Honza

Thread overview: 46+ messages
2013-01-31 21:49 [PATCH 0/6 RFC] Mapping range lock Jan Kara [this message]
2013-01-31 21:49 ` [PATCH 1/6] lib: Implement range locks Jan Kara
2013-01-31 23:57   ` Andrew Morton
2013-02-04 16:41     ` Jan Kara
2013-02-11  5:42   ` Michel Lespinasse
2013-02-11 10:27     ` Jan Kara
2013-02-11 11:03       ` Michel Lespinasse
2013-02-11 12:58         ` Jan Kara
2013-01-31 21:49 ` [PATCH 2/6] fs: Take mapping lock in generic read paths Jan Kara
2013-01-31 23:59   ` Andrew Morton
2013-02-04 12:47     ` Jan Kara
2013-02-08 14:59       ` Jan Kara
2013-01-31 21:49 ` [PATCH 3/6] fs: Provide function to take mapping lock in buffered write path Jan Kara
2013-01-31 21:49 ` [PATCH 4/6] fs: Don't call dio_cleanup() before submitting all bios Jan Kara
2013-01-31 21:49 ` [PATCH 5/6] fs: Take mapping lock during direct IO Jan Kara
2013-01-31 21:49 ` [PATCH 6/6] ext3: Convert ext3 to use mapping lock Jan Kara
2013-02-01  0:07 ` [PATCH 0/6 RFC] Mapping range lock Andrew Morton
2013-02-04  9:29   ` Zheng Liu
2013-02-04 12:38   ` Jan Kara
2013-02-05 23:25     ` Dave Chinner
2013-02-06 19:25       ` Jan Kara
2013-02-07  2:43         ` Dave Chinner
2013-02-07 11:06           ` Jan Kara