From: Jan Kara <jack@suse.cz>
To: Dan Williams <dan.j.williams@intel.com>
Cc: Jan Kara <jack@suse.cz>,
	Ross Zwisler <ross.zwisler@linux.intel.com>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	Dave Chinner <david@fromorbit.com>,
	linux-fsdevel <linux-fsdevel@vger.kernel.org>,
	Linux MM <linux-mm@kvack.org>,
	"linux-nvdimm@lists.01.org" <linux-nvdimm@lists.01.org>,
	Mel Gorman <mgorman@suse.de>,
	Matthew Wilcox <willy@linux.intel.com>
Subject: Re: Another proposal for DAX fault locking
Date: Thu, 11 Feb 2016 11:43:13 +0100	[thread overview]
Message-ID: <20160211104313.GF21760@quack.suse.cz> (raw)
In-Reply-To: <CAPcyv4jNXogNgtVVUaJC_YLPvHcb93dXYdfsfH6cSgHS2=GoDA@mail.gmail.com>

On Wed 10-02-16 12:08:12, Dan Williams wrote:
> On Wed, Feb 10, 2016 at 2:32 AM, Jan Kara <jack@suse.cz> wrote:
> > On Tue 09-02-16 10:18:53, Dan Williams wrote:
> >> On Tue, Feb 9, 2016 at 9:24 AM, Jan Kara <jack@suse.cz> wrote:
> >> > Hello,
> >> >
> >> > I was thinking about current issues with DAX fault locking [1] (data
> >> > corruption due to racing faults allocating blocks) and also races which
> >> > currently don't allow us to clear dirty tags in the radix tree due to races
> >> > between faults and cache flushing [2]. Both of these exist because we don't
> >> > have an equivalent of page lock available for DAX. While we have a
> >> > reasonable solution available for problem [1], so far I'm not aware of a
> >> > decent solution for [2]. After briefly discussing the issue with Mel he had
> >> > a bright idea that we could use hashed locks to deal with [2] (and I think
> >> > we can solve [1] with them as well). So my proposal looks as follows:
> >> >
> >> > DAX will have an array of mutexes (the array can be made per device but
> >> > initially a global one should be OK). We will use mutexes in the array as a
> >> > replacement for page lock - we will use hashfn(mapping, index) to get the
> >> > particular mutex protecting our offset in the mapping. On fault / page
> >> > mkwrite, we'll grab the mutex similarly to page lock and release it once we
> >> > are done updating page tables. This deals with races in [1]. When flushing
> >> > caches we grab the mutex before clearing the writeable bit in page tables
> >> > and clearing the dirty tag in the radix tree, and drop it after we have flushed
> >> > caches for the pfn. This deals with races in [2].
> >> >
> >> > Thoughts?
> >> >
> >>
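[Editorial note: the hashed-lock array proposed above could be sketched roughly as follows. This is a hypothetical user-space model using pthreads, not kernel code; every name here (DAX_LOCK_BITS, dax_hash, dax_entry_lock, ...) is invented for illustration, and the real implementation would presumably use hash_64() and kernel mutexes.]

```c
#include <pthread.h>
#include <stdint.h>
#include <stdio.h>
#include <assert.h>

/* Size of the global lock array; per-device arrays could come later. */
#define DAX_LOCK_BITS 10
#define DAX_NR_LOCKS  (1U << DAX_LOCK_BITS)

static pthread_mutex_t dax_locks[DAX_NR_LOCKS];
static pthread_once_t dax_locks_once = PTHREAD_ONCE_INIT;

static void dax_locks_init(void)
{
	for (unsigned int i = 0; i < DAX_NR_LOCKS; i++)
		pthread_mutex_init(&dax_locks[i], NULL);
}

/* hashfn(mapping, index): map a (mapping, offset) pair to one mutex.
 * The multiplier is the 64-bit golden-ratio constant, as used by the
 * kernel's hash_64(); taking the top DAX_LOCK_BITS bits gives an index
 * in [0, DAX_NR_LOCKS). */
static unsigned int dax_hash(const void *mapping, unsigned long index)
{
	unsigned long long v = (unsigned long long)(uintptr_t)mapping;

	v ^= (unsigned long long)index;
	v *= 0x9e3779b97f4a7c15ULL;
	return (unsigned int)(v >> (64 - DAX_LOCK_BITS));
}

/* Page-lock replacement: taken on fault / page_mkwrite, and by the
 * writeback path before it touches page tables and the radix tree. */
static pthread_mutex_t *dax_entry_lock(const void *mapping,
				       unsigned long index)
{
	pthread_mutex_t *m;

	pthread_once(&dax_locks_once, dax_locks_init);
	m = &dax_locks[dax_hash(mapping, index)];
	pthread_mutex_lock(m);
	return m;
}

static void dax_entry_unlock(pthread_mutex_t *m)
{
	pthread_mutex_unlock(m);
}
```

Because the hash is over (mapping, index) rather than pfn, a hole-fill fault can take the lock before any block (and hence any pfn) exists, which is the point Jan makes below.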
> >> I like the fact that this makes the locking explicit and
> >> straightforward rather than something more tricky.  Can we make the
> >> hashfn pfn based?  I'm thinking we could later reuse this as part of
> >> the solution for eliminating the need to allocate struct page, and we
> >> don't have the 'mapping' available in all paths...
> >
> > So Mel originally suggested to use pfn for hashing as well. My concern with
> > using pfn is that e.g. if you want to fill a hole, you don't have a pfn to
> > lock. What you really need to protect is a logical offset in the file to
> > serialize allocation of underlying blocks, its mapping into page tables,
> > and flushing the blocks out of caches. So using inode/mapping and offset
> > for the hashing is easier (it isn't obvious to me we can fix hole filling
> > races with pfn-based locking).
> >
> > I'm not sure for which other purposes you'd like to use this lock and
> > whether propagating file+offset to those call sites would make sense or
> > not. struct page has the advantage that block mapping information is only
> > attached to it, so when filling a hole, we can just allocate some page,
> > attach it to the radix tree, use page lock for synchronization, and allocate
> > blocks only after that. With pfns we cannot do this...
> 
> Right, I am thinking of the direct-I/O path's use of the page lock and
> the occasions where it relies on page->mapping lookups.

Well, but the main problem with direct IO is that it takes page *reference*
via get_user_pages(). So that's something different from page lock. Maybe
the new lock could be abused to provide necessary exclusion for direct IO
use as well but that would need deep thinking... So far it seems
problematic to me.

> Given we already have support for dynamically allocating struct page I
> don't think we need to have a "pfn to lock" lookup in the initial
> implementation of this locking scheme.

Agreed.
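[Editorial note: to make the race-[2] argument concrete, here is a minimal model of how the fault and writeback paths would serialize on the same hashed lock. Again a hypothetical user-space sketch with invented names; the three booleans stand in for real page-table and radix-tree state.]

```c
#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>
#include <assert.h>

/* One (mapping, index) slot; in the proposal the lock would come from
 * the hashed array via hashfn(mapping, index). */
static pthread_mutex_t slot_lock = PTHREAD_MUTEX_INITIALIZER;

struct dax_slot {
	bool pte_writeable;	/* writeable bit in the page tables */
	bool radix_dirty;	/* dirty tag in the radix tree */
	bool cache_flushed;	/* caches flushed for the pfn */
};

/* Fault / page_mkwrite path: under the lock, map the entry writeable
 * and tag it dirty so writeback knows to flush it later. */
static void fault_mkwrite(struct dax_slot *s)
{
	pthread_mutex_lock(&slot_lock);
	s->pte_writeable = true;
	s->radix_dirty = true;
	s->cache_flushed = false;
	pthread_mutex_unlock(&slot_lock);
}

/* Writeback path: write-protect the pte, clear the dirty tag, and flush
 * caches, all under the same lock.  Holding the lock across all three
 * steps is what closes race [2]: a concurrent mkwrite cannot re-dirty
 * the entry between the tag clear and the flush. */
static void flush_caches(struct dax_slot *s)
{
	pthread_mutex_lock(&slot_lock);
	if (s->radix_dirty) {
		s->pte_writeable = false;
		s->radix_dirty = false;
		s->cache_flushed = true;	/* stands in for the flush */
	}
	pthread_mutex_unlock(&slot_lock);
}
```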

								Honza
-- 
Jan Kara <jack@suse.com>
SUSE Labs, CR


Thread overview: 46+ messages

2016-02-09 17:24 Another proposal for DAX fault locking Jan Kara
2016-02-09 18:18 ` Dan Williams
2016-02-10 10:32   ` Jan Kara
2016-02-10 20:08     ` Dan Williams
2016-02-11 10:43       ` Jan Kara [this message]
2016-02-10 22:09     ` Dave Chinner
2016-02-10 22:39       ` Cedric Blancher
2016-02-10 23:34         ` Ross Zwisler
2016-02-11 10:55         ` Jan Kara
2016-02-11 21:05           ` Cedric Blancher
2016-02-10 23:32       ` Ross Zwisler
2016-02-11 11:15         ` Jan Kara
2016-02-09 18:46 ` Cedric Blancher
2016-02-10  8:19   ` Mel Gorman
2016-02-10 10:18     ` Jan Kara
2016-02-10 12:29 ` Dmitry Monakhov
2016-02-10 12:35   ` Jan Kara
2016-02-10 17:38 ` Boaz Harrosh
2016-02-11 10:38   ` Jan Kara
2016-02-14  8:51     ` Boaz Harrosh
2016-02-10 23:44 ` Ross Zwisler
2016-02-10 23:51   ` Cedric Blancher
2016-02-11  0:13     ` Ross Zwisler
