All of lore.kernel.org
 help / color / mirror / Atom feed
From: Ross Zwisler <ross.zwisler@linux.intel.com>
To: Jan Kara <jack@suse.cz>
Cc: Theodore Ts'o <tytso@mit.edu>,
	Matthew Wilcox <mawilcox@microsoft.com>,
	Dave Chinner <david@fromorbit.com>,
	linux-nvdimm@lists.01.org, linux-kernel@vger.kernel.org,
	Christoph Hellwig <hch@lst.de>,
	linux-xfs@vger.kernel.org, linux-mm@kvack.org,
	Andreas Dilger <adilger.kernel@dilger.ca>,
	Alexander Viro <viro@zeniv.linux.org.uk>,
	Jan Kara <jack@suse.com>,
	linux-fsdevel@vger.kernel.org, linux-ext4@vger.kernel.org,
	Andrew Morton <akpm@linux-foundation.org>
Subject: Re: [PATCH v4 10/12] dax: add struct iomap based DAX PMD support
Date: Tue, 4 Oct 2016 09:39:48 -0600	[thread overview]
Message-ID: <20161004153948.GA21248@linux.intel.com> (raw)
In-Reply-To: <20161004055557.GB17515@quack2.suse.cz>

On Tue, Oct 04, 2016 at 07:55:57AM +0200, Jan Kara wrote:
> On Mon 03-10-16 15:05:57, Ross Zwisler wrote:
> > > > @@ -623,22 +672,30 @@ static void *dax_insert_mapping_entry(struct address_space *mapping,
> > > >  		error = radix_tree_preload(vmf->gfp_mask & ~__GFP_HIGHMEM);
> > > >  		if (error)
> > > >  			return ERR_PTR(error);
> > > > +	} else if ((unsigned long)entry & RADIX_DAX_HZP && !hzp) {
> > > > +		/* replacing huge zero page with PMD block mapping */
> > > > +		unmap_mapping_range(mapping,
> > > > +			(vmf->pgoff << PAGE_SHIFT) & PMD_MASK, PMD_SIZE, 0);
> > > >  	}
> > > >  
> > > >  	spin_lock_irq(&mapping->tree_lock);
> > > > -	new_entry = (void *)((unsigned long)RADIX_DAX_ENTRY(sector, false) |
> > > > -		       RADIX_DAX_ENTRY_LOCK);
> > > > +	if (hzp)
> > > > +		new_entry = RADIX_DAX_HZP_ENTRY();
> > > > +	else
> > > > +		new_entry = RADIX_DAX_ENTRY(sector, new_type);
> > > > +
> > > >  	if (hole_fill) {
> > > >  		__delete_from_page_cache(entry, NULL);
> > > >  		/* Drop pagecache reference */
> > > >  		put_page(entry);
> > > > -		error = radix_tree_insert(page_tree, index, new_entry);
> > > > +		error = __radix_tree_insert(page_tree, index,
> > > > +				RADIX_DAX_ORDER(new_type), new_entry);
> > > >  		if (error) {
> > > >  			new_entry = ERR_PTR(error);
> > > >  			goto unlock;
> > > >  		}
> > > >  		mapping->nrexceptional++;
> > > > -	} else {
> > > > +	} else if ((unsigned long)entry & (RADIX_DAX_HZP|RADIX_DAX_EMPTY)) {
> > > >  		void **slot;
> > > >  		void *ret;
> > > 
> > > Hum, I somewhat dislike how PTE and PMD paths differ here. But it's OK for
> > > now I guess. Long term we might be better off to do away with zero pages
> > > for PTEs as well and use exceptional entry and a single zero page like you
> > > do for PMD. Because the special cases these zero pages cause are a
> > > headache.
> > 
> > I've been thinking about this as well, and I do think we'd be better off with
> > a single zero page for PTEs, as we have with PMDs.  It'd reduce the special
> > casing in the DAX code, and it'd also ensure that we don't waste a bunch of
> > time and memory creating read-only zero pages to service reads from holes.
> > 
> > I'll look into adding this for v5.
> 
> Well, this would clash with the dirty bit cleaning series I have. So I'd
> prefer to put this on a todo list and address it once existing series are
> integrated...

Sure, that works.

> > > > +	if (error)
> > > > +		goto fallback;
> > > > +	if (iomap.offset + iomap.length < pos + PMD_SIZE)
> > > > +		goto fallback;
> > > > +
> > > > +	vmf.pgoff = pgoff;
> > > > +	vmf.flags = flags;
> > > > +	vmf.gfp_mask = mapping_gfp_mask(mapping) | __GFP_FS | __GFP_IO;
> > > 
> > > I don't think you want __GFP_FS here - we have already gone through the
> > > filesystem's pmd_fault() handler which called dax_iomap_pmd_fault() and
> > > thus we hold various fs locks, freeze protection, ...
> > 
> > I copied this from __get_fault_gfp_mask() in mm/memory.c.  That function is
> > used by do_page_mkwrite() and __do_fault(), and we eventually get this
> > vmf->gfp_mask in the PTE fault code.  With the code as it is we get the same
> > vmf->gfp_mask in both dax_iomap_fault() and dax_iomap_pmd_fault().  It seems
> > like they should remain consistent - is it wrong to have __GFP_FS in
> > dax_iomap_fault()?
> 
> The gfp_mask that propagates from __do_fault() or do_page_mkwrite() is fine
> because at that point it is correct. But once we grab filesystem locks
> which are not reclaim safe, we should update vmf->gfp_mask we pass further
> down into DAX code to not contain __GFP_FS (that's a bug we apparently have
> there). And inside DAX code, we definitely are not generally safe to add
> __GFP_FS to mapping_gfp_mask(). Maybe we'd be better off propagating struct
> vm_fault into this function, using passed gfp_mask there and make sure
> callers update gfp_mask as appropriate.

Yep, that makes sense to me.  In reviewing your set it also occurred to me that
we might want to stick a struct vm_area_struct *vma pointer in the vmf, since
you always need a vma when you are using a vmf, but we pass them as a pair
everywhere.
_______________________________________________
Linux-nvdimm mailing list
Linux-nvdimm@lists.01.org
https://lists.01.org/mailman/listinfo/linux-nvdimm

WARNING: multiple messages have this Message-ID (diff)
From: Ross Zwisler <ross.zwisler@linux.intel.com>
To: Jan Kara <jack@suse.cz>
Cc: Ross Zwisler <ross.zwisler@linux.intel.com>,
	linux-kernel@vger.kernel.org, "Theodore Ts'o" <tytso@mit.edu>,
	Alexander Viro <viro@zeniv.linux.org.uk>,
	Andreas Dilger <adilger.kernel@dilger.ca>,
	Andrew Morton <akpm@linux-foundation.org>,
	Christoph Hellwig <hch@lst.de>,
	Dan Williams <dan.j.williams@intel.com>,
	Dave Chinner <david@fromorbit.com>, Jan Kara <jack@suse.com>,
	Matthew Wilcox <mawilcox@microsoft.com>,
	linux-ext4@vger.kernel.org, linux-fsdevel@vger.kernel.org,
	linux-mm@kvack.org, linux-nvdimm@ml01.01.org,
	linux-xfs@vger.kernel.org
Subject: Re: [PATCH v4 10/12] dax: add struct iomap based DAX PMD support
Date: Tue, 4 Oct 2016 09:39:48 -0600	[thread overview]
Message-ID: <20161004153948.GA21248@linux.intel.com> (raw)
In-Reply-To: <20161004055557.GB17515@quack2.suse.cz>

On Tue, Oct 04, 2016 at 07:55:57AM +0200, Jan Kara wrote:
> On Mon 03-10-16 15:05:57, Ross Zwisler wrote:
> > > > @@ -623,22 +672,30 @@ static void *dax_insert_mapping_entry(struct address_space *mapping,
> > > >  		error = radix_tree_preload(vmf->gfp_mask & ~__GFP_HIGHMEM);
> > > >  		if (error)
> > > >  			return ERR_PTR(error);
> > > > +	} else if ((unsigned long)entry & RADIX_DAX_HZP && !hzp) {
> > > > +		/* replacing huge zero page with PMD block mapping */
> > > > +		unmap_mapping_range(mapping,
> > > > +			(vmf->pgoff << PAGE_SHIFT) & PMD_MASK, PMD_SIZE, 0);
> > > >  	}
> > > >  
> > > >  	spin_lock_irq(&mapping->tree_lock);
> > > > -	new_entry = (void *)((unsigned long)RADIX_DAX_ENTRY(sector, false) |
> > > > -		       RADIX_DAX_ENTRY_LOCK);
> > > > +	if (hzp)
> > > > +		new_entry = RADIX_DAX_HZP_ENTRY();
> > > > +	else
> > > > +		new_entry = RADIX_DAX_ENTRY(sector, new_type);
> > > > +
> > > >  	if (hole_fill) {
> > > >  		__delete_from_page_cache(entry, NULL);
> > > >  		/* Drop pagecache reference */
> > > >  		put_page(entry);
> > > > -		error = radix_tree_insert(page_tree, index, new_entry);
> > > > +		error = __radix_tree_insert(page_tree, index,
> > > > +				RADIX_DAX_ORDER(new_type), new_entry);
> > > >  		if (error) {
> > > >  			new_entry = ERR_PTR(error);
> > > >  			goto unlock;
> > > >  		}
> > > >  		mapping->nrexceptional++;
> > > > -	} else {
> > > > +	} else if ((unsigned long)entry & (RADIX_DAX_HZP|RADIX_DAX_EMPTY)) {
> > > >  		void **slot;
> > > >  		void *ret;
> > > 
> > > Hum, I somewhat dislike how PTE and PMD paths differ here. But it's OK for
> > > now I guess. Long term we might be better off to do away with zero pages
> > > for PTEs as well and use exceptional entry and a single zero page like you
> > > do for PMD. Because the special cases these zero pages cause are a
> > > headache.
> > 
> > I've been thinking about this as well, and I do think we'd be better off with
> > a single zero page for PTEs, as we have with PMDs.  It'd reduce the special
> > casing in the DAX code, and it'd also ensure that we don't waste a bunch of
> > time and memory creating read-only zero pages to service reads from holes.
> > 
> > I'll look into adding this for v5.
> 
> Well, this would clash with the dirty bit cleaning series I have. So I'd
> prefer to put this on a todo list and address it once existing series are
> integrated...

Sure, that works.

> > > > +	if (error)
> > > > +		goto fallback;
> > > > +	if (iomap.offset + iomap.length < pos + PMD_SIZE)
> > > > +		goto fallback;
> > > > +
> > > > +	vmf.pgoff = pgoff;
> > > > +	vmf.flags = flags;
> > > > +	vmf.gfp_mask = mapping_gfp_mask(mapping) | __GFP_FS | __GFP_IO;
> > > 
> > > I don't think you want __GFP_FS here - we have already gone through the
> > > filesystem's pmd_fault() handler which called dax_iomap_pmd_fault() and
> > > thus we hold various fs locks, freeze protection, ...
> > 
> > I copied this from __get_fault_gfp_mask() in mm/memory.c.  That function is
> > used by do_page_mkwrite() and __do_fault(), and we eventually get this
> > vmf->gfp_mask in the PTE fault code.  With the code as it is we get the same
> > vmf->gfp_mask in both dax_iomap_fault() and dax_iomap_pmd_fault().  It seems
> > like they should remain consistent - is it wrong to have __GFP_FS in
> > dax_iomap_fault()?
> 
> The gfp_mask that propagates from __do_fault() or do_page_mkwrite() is fine
> because at that point it is correct. But once we grab filesystem locks
> which are not reclaim safe, we should update vmf->gfp_mask we pass further
> down into DAX code to not contain __GFP_FS (that's a bug we apparently have
> there). And inside DAX code, we definitely are not generally safe to add
> __GFP_FS to mapping_gfp_mask(). Maybe we'd be better off propagating struct
> vm_fault into this function, using passed gfp_mask there and make sure
> callers update gfp_mask as appropriate.

Yep, that makes sense to me.  In reviewing your set it also occurred to me that
we might want to stick a struct vm_area_struct *vma pointer in the vmf, since
you always need a vma when you are using a vmf, but we pass them as a pair
everywhere.

WARNING: multiple messages have this Message-ID (diff)
From: Ross Zwisler <ross.zwisler@linux.intel.com>
To: Jan Kara <jack@suse.cz>
Cc: Ross Zwisler <ross.zwisler@linux.intel.com>,
	linux-kernel@vger.kernel.org, Theodore Ts'o <tytso@mit.edu>,
	Alexander Viro <viro@zeniv.linux.org.uk>,
	Andreas Dilger <adilger.kernel@dilger.ca>,
	Andrew Morton <akpm@linux-foundation.org>,
	Christoph Hellwig <hch@lst.de>,
	Dan Williams <dan.j.williams@intel.com>,
	Dave Chinner <david@fromorbit.com>, Jan Kara <jack@suse.com>,
	Matthew Wilcox <mawilcox@microsoft.com>,
	linux-ext4@vger.kernel.org, linux-fsdevel@vger.kernel.org,
	linux-mm@kvack.org, linux-nvdimm@lists.01.org,
	linux-xfs@vger.kernel.org
Subject: Re: [PATCH v4 10/12] dax: add struct iomap based DAX PMD support
Date: Tue, 4 Oct 2016 09:39:48 -0600	[thread overview]
Message-ID: <20161004153948.GA21248@linux.intel.com> (raw)
In-Reply-To: <20161004055557.GB17515@quack2.suse.cz>

On Tue, Oct 04, 2016 at 07:55:57AM +0200, Jan Kara wrote:
> On Mon 03-10-16 15:05:57, Ross Zwisler wrote:
> > > > @@ -623,22 +672,30 @@ static void *dax_insert_mapping_entry(struct address_space *mapping,
> > > >  		error = radix_tree_preload(vmf->gfp_mask & ~__GFP_HIGHMEM);
> > > >  		if (error)
> > > >  			return ERR_PTR(error);
> > > > +	} else if ((unsigned long)entry & RADIX_DAX_HZP && !hzp) {
> > > > +		/* replacing huge zero page with PMD block mapping */
> > > > +		unmap_mapping_range(mapping,
> > > > +			(vmf->pgoff << PAGE_SHIFT) & PMD_MASK, PMD_SIZE, 0);
> > > >  	}
> > > >  
> > > >  	spin_lock_irq(&mapping->tree_lock);
> > > > -	new_entry = (void *)((unsigned long)RADIX_DAX_ENTRY(sector, false) |
> > > > -		       RADIX_DAX_ENTRY_LOCK);
> > > > +	if (hzp)
> > > > +		new_entry = RADIX_DAX_HZP_ENTRY();
> > > > +	else
> > > > +		new_entry = RADIX_DAX_ENTRY(sector, new_type);
> > > > +
> > > >  	if (hole_fill) {
> > > >  		__delete_from_page_cache(entry, NULL);
> > > >  		/* Drop pagecache reference */
> > > >  		put_page(entry);
> > > > -		error = radix_tree_insert(page_tree, index, new_entry);
> > > > +		error = __radix_tree_insert(page_tree, index,
> > > > +				RADIX_DAX_ORDER(new_type), new_entry);
> > > >  		if (error) {
> > > >  			new_entry = ERR_PTR(error);
> > > >  			goto unlock;
> > > >  		}
> > > >  		mapping->nrexceptional++;
> > > > -	} else {
> > > > +	} else if ((unsigned long)entry & (RADIX_DAX_HZP|RADIX_DAX_EMPTY)) {
> > > >  		void **slot;
> > > >  		void *ret;
> > > 
> > > Hum, I somewhat dislike how PTE and PMD paths differ here. But it's OK for
> > > now I guess. Long term we might be better off to do away with zero pages
> > > for PTEs as well and use exceptional entry and a single zero page like you
> > > do for PMD. Because the special cases these zero pages cause are a
> > > headache.
> > 
> > I've been thinking about this as well, and I do think we'd be better off with
> > a single zero page for PTEs, as we have with PMDs.  It'd reduce the special
> > casing in the DAX code, and it'd also ensure that we don't waste a bunch of
> > time and memory creating read-only zero pages to service reads from holes.
> > 
> > I'll look into adding this for v5.
> 
> Well, this would clash with the dirty bit cleaning series I have. So I'd
> prefer to put this on a todo list and address it once existing series are
> integrated...

Sure, that works.

> > > > +	if (error)
> > > > +		goto fallback;
> > > > +	if (iomap.offset + iomap.length < pos + PMD_SIZE)
> > > > +		goto fallback;
> > > > +
> > > > +	vmf.pgoff = pgoff;
> > > > +	vmf.flags = flags;
> > > > +	vmf.gfp_mask = mapping_gfp_mask(mapping) | __GFP_FS | __GFP_IO;
> > > 
> > > I don't think you want __GFP_FS here - we have already gone through the
> > > filesystem's pmd_fault() handler which called dax_iomap_pmd_fault() and
> > > thus we hold various fs locks, freeze protection, ...
> > 
> > I copied this from __get_fault_gfp_mask() in mm/memory.c.  That function is
> > used by do_page_mkwrite() and __do_fault(), and we eventually get this
> > vmf->gfp_mask in the PTE fault code.  With the code as it is we get the same
> > vmf->gfp_mask in both dax_iomap_fault() and dax_iomap_pmd_fault().  It seems
> > like they should remain consistent - is it wrong to have __GFP_FS in
> > dax_iomap_fault()?
> 
> The gfp_mask that propagates from __do_fault() or do_page_mkwrite() is fine
> because at that point it is correct. But once we grab filesystem locks
> which are not reclaim safe, we should update vmf->gfp_mask we pass further
> down into DAX code to not contain __GFP_FS (that's a bug we apparently have
> there). And inside DAX code, we definitely are not generally safe to add
> __GFP_FS to mapping_gfp_mask(). Maybe we'd be better off propagating struct
> vm_fault into this function, using passed gfp_mask there and make sure
> callers update gfp_mask as appropriate.

Yep, that makes sense to me.  In reviewing your set it also occurred to me that
we might want to stick a struct vm_area_struct *vma pointer in the vmf, since
you always need a vma when you are using a vmf, but we pass them as a pair
everywhere.

--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org.  For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: <a href=mailto:"dont@kvack.org"> email@kvack.org </a>

WARNING: multiple messages have this Message-ID (diff)
From: Ross Zwisler <ross.zwisler@linux.intel.com>
To: Jan Kara <jack@suse.cz>
Cc: Ross Zwisler <ross.zwisler@linux.intel.com>,
	linux-kernel@vger.kernel.org, Theodore Ts'o <tytso@mit.edu>,
	Alexander Viro <viro@zeniv.linux.org.uk>,
	Andreas Dilger <adilger.kernel@dilger.ca>,
	Andrew Morton <akpm@linux-foundation.org>,
	Christoph Hellwig <hch@lst.de>,
	Dan Williams <dan.j.williams@intel.com>,
	Dave Chinner <david@fromorbit.com>, Jan Kara <jack@suse.com>,
	Matthew Wilcox <mawilcox@microsoft.com>,
	linux-ext4@vger.kernel.org, linux-fsdevel@vger.kernel.org,
	linux-mm@kvack.org, linux-nvdimm@lists.01.org,
	linux-xfs@vger.kernel.org
Subject: Re: [PATCH v4 10/12] dax: add struct iomap based DAX PMD support
Date: Tue, 4 Oct 2016 09:39:48 -0600	[thread overview]
Message-ID: <20161004153948.GA21248@linux.intel.com> (raw)
In-Reply-To: <20161004055557.GB17515@quack2.suse.cz>

On Tue, Oct 04, 2016 at 07:55:57AM +0200, Jan Kara wrote:
> On Mon 03-10-16 15:05:57, Ross Zwisler wrote:
> > > > @@ -623,22 +672,30 @@ static void *dax_insert_mapping_entry(struct address_space *mapping,
> > > >  		error = radix_tree_preload(vmf->gfp_mask & ~__GFP_HIGHMEM);
> > > >  		if (error)
> > > >  			return ERR_PTR(error);
> > > > +	} else if ((unsigned long)entry & RADIX_DAX_HZP && !hzp) {
> > > > +		/* replacing huge zero page with PMD block mapping */
> > > > +		unmap_mapping_range(mapping,
> > > > +			(vmf->pgoff << PAGE_SHIFT) & PMD_MASK, PMD_SIZE, 0);
> > > >  	}
> > > >  
> > > >  	spin_lock_irq(&mapping->tree_lock);
> > > > -	new_entry = (void *)((unsigned long)RADIX_DAX_ENTRY(sector, false) |
> > > > -		       RADIX_DAX_ENTRY_LOCK);
> > > > +	if (hzp)
> > > > +		new_entry = RADIX_DAX_HZP_ENTRY();
> > > > +	else
> > > > +		new_entry = RADIX_DAX_ENTRY(sector, new_type);
> > > > +
> > > >  	if (hole_fill) {
> > > >  		__delete_from_page_cache(entry, NULL);
> > > >  		/* Drop pagecache reference */
> > > >  		put_page(entry);
> > > > -		error = radix_tree_insert(page_tree, index, new_entry);
> > > > +		error = __radix_tree_insert(page_tree, index,
> > > > +				RADIX_DAX_ORDER(new_type), new_entry);
> > > >  		if (error) {
> > > >  			new_entry = ERR_PTR(error);
> > > >  			goto unlock;
> > > >  		}
> > > >  		mapping->nrexceptional++;
> > > > -	} else {
> > > > +	} else if ((unsigned long)entry & (RADIX_DAX_HZP|RADIX_DAX_EMPTY)) {
> > > >  		void **slot;
> > > >  		void *ret;
> > > 
> > > Hum, I somewhat dislike how PTE and PMD paths differ here. But it's OK for
> > > now I guess. Long term we might be better off to do away with zero pages
> > > for PTEs as well and use exceptional entry and a single zero page like you
> > > do for PMD. Because the special cases these zero pages cause are a
> > > headache.
> > 
> > I've been thinking about this as well, and I do think we'd be better off with
> > a single zero page for PTEs, as we have with PMDs.  It'd reduce the special
> > casing in the DAX code, and it'd also ensure that we don't waste a bunch of
> > time and memory creating read-only zero pages to service reads from holes.
> > 
> > I'll look into adding this for v5.
> 
> Well, this would clash with the dirty bit cleaning series I have. So I'd
> prefer to put this on a todo list and address it once existing series are
> integrated...

Sure, that works.

> > > > +	if (error)
> > > > +		goto fallback;
> > > > +	if (iomap.offset + iomap.length < pos + PMD_SIZE)
> > > > +		goto fallback;
> > > > +
> > > > +	vmf.pgoff = pgoff;
> > > > +	vmf.flags = flags;
> > > > +	vmf.gfp_mask = mapping_gfp_mask(mapping) | __GFP_FS | __GFP_IO;
> > > 
> > > I don't think you want __GFP_FS here - we have already gone through the
> > > filesystem's pmd_fault() handler which called dax_iomap_pmd_fault() and
> > > thus we hold various fs locks, freeze protection, ...
> > 
> > I copied this from __get_fault_gfp_mask() in mm/memory.c.  That function is
> > used by do_page_mkwrite() and __do_fault(), and we eventually get this
> > vmf->gfp_mask in the PTE fault code.  With the code as it is we get the same
> > vmf->gfp_mask in both dax_iomap_fault() and dax_iomap_pmd_fault().  It seems
> > like they should remain consistent - is it wrong to have __GFP_FS in
> > dax_iomap_fault()?
> 
> The gfp_mask that propagates from __do_fault() or do_page_mkwrite() is fine
> because at that point it is correct. But once we grab filesystem locks
> which are not reclaim safe, we should update vmf->gfp_mask we pass further
> down into DAX code to not contain __GFP_FS (that's a bug we apparently have
> there). And inside DAX code, we definitely are not generally safe to add
> __GFP_FS to mapping_gfp_mask(). Maybe we'd be better off propagating struct
> vm_fault into this function, using passed gfp_mask there and make sure
> callers update gfp_mask as appropriate.

Yep, that makes sense to me.  In reviewing your set it also occurred to me that
we might want to stick a struct vm_area_struct *vma pointer in the vmf, since
you always need a vma when you are using a vmf, but we pass them as a pair
everywhere.

  reply	other threads:[~2016-10-04 15:39 UTC|newest]

Thread overview: 189+ messages / expand[flat|nested]  mbox.gz  Atom feed  top
2016-09-29 22:49 [PATCH v4 00/12] re-enable DAX PMD support Ross Zwisler
2016-09-29 22:49 ` Ross Zwisler
2016-09-29 22:49 ` Ross Zwisler
2016-09-29 22:49 ` Ross Zwisler
2016-09-29 22:49 ` [PATCH v4 01/12] ext4: allow DAX writeback for hole punch Ross Zwisler
2016-09-29 22:49   ` Ross Zwisler
2016-09-29 22:49   ` Ross Zwisler
2016-09-29 22:49   ` Ross Zwisler
2016-09-29 22:49   ` Ross Zwisler
2016-09-29 22:49 ` [PATCH v4 02/12] ext4: tell DAX the size of allocation holes Ross Zwisler
2016-09-29 22:49   ` Ross Zwisler
2016-09-29 22:49   ` Ross Zwisler
2016-09-29 22:49   ` Ross Zwisler
2016-09-29 22:49 ` [PATCH v4 03/12] dax: remove buffer_size_valid() Ross Zwisler
2016-09-29 22:49   ` Ross Zwisler
2016-09-29 22:49   ` Ross Zwisler
2016-09-29 22:49   ` Ross Zwisler
2016-09-30  8:49   ` Christoph Hellwig
2016-09-30  8:49     ` Christoph Hellwig
2016-09-30  8:49     ` Christoph Hellwig
2016-09-30  8:49     ` Christoph Hellwig
2016-09-29 22:49 ` [PATCH v4 04/12] ext2: remove support for DAX PMD faults Ross Zwisler
2016-09-29 22:49   ` Ross Zwisler
2016-09-29 22:49   ` Ross Zwisler
2016-09-29 22:49   ` Ross Zwisler
2016-09-29 22:49   ` Ross Zwisler
2016-09-30  8:49   ` Christoph Hellwig
2016-09-30  8:49     ` Christoph Hellwig
2016-09-30  8:49     ` Christoph Hellwig
2016-09-30  8:49     ` Christoph Hellwig
2016-10-03  9:35   ` Jan Kara
2016-10-03  9:35     ` Jan Kara
2016-10-03  9:35     ` Jan Kara
2016-10-03  9:35     ` Jan Kara
2016-09-29 22:49 ` [PATCH v4 05/12] dax: make 'wait_table' global variable static Ross Zwisler
2016-09-29 22:49   ` Ross Zwisler
2016-09-29 22:49   ` Ross Zwisler
2016-09-29 22:49   ` Ross Zwisler
2016-09-29 22:49   ` Ross Zwisler
2016-09-30  8:50   ` Christoph Hellwig
2016-09-30  8:50     ` Christoph Hellwig
2016-09-30  8:50     ` Christoph Hellwig
2016-09-30  8:50     ` Christoph Hellwig
2016-10-03  9:36   ` Jan Kara
2016-10-03  9:36     ` Jan Kara
2016-10-03  9:36     ` Jan Kara
2016-10-03  9:36     ` Jan Kara
2016-10-03  9:36     ` Jan Kara
2016-09-29 22:49 ` [PATCH v4 06/12] dax: consistent variable naming for DAX entries Ross Zwisler
2016-09-29 22:49   ` Ross Zwisler
2016-09-29 22:49   ` Ross Zwisler
2016-09-29 22:49   ` Ross Zwisler
2016-09-29 22:49   ` Ross Zwisler
2016-09-30  8:50   ` Christoph Hellwig
2016-09-30  8:50     ` Christoph Hellwig
2016-09-30  8:50     ` Christoph Hellwig
2016-09-30  8:50     ` Christoph Hellwig
2016-09-30  8:50     ` Christoph Hellwig
2016-10-03  9:37   ` Jan Kara
2016-10-03  9:37     ` Jan Kara
2016-10-03  9:37     ` Jan Kara
2016-10-03  9:37     ` Jan Kara
2016-10-03  9:37     ` Jan Kara
2016-09-29 22:49 ` [PATCH v4 07/12] dax: coordinate locking for offsets in PMD range Ross Zwisler
2016-09-29 22:49   ` Ross Zwisler
2016-09-29 22:49   ` Ross Zwisler
2016-09-29 22:49   ` Ross Zwisler
2016-09-29 22:49   ` Ross Zwisler
2016-09-30  9:44   ` Christoph Hellwig
2016-09-30  9:44     ` Christoph Hellwig
2016-10-03  9:55   ` Jan Kara
2016-10-03  9:55     ` Jan Kara
2016-10-03  9:55     ` Jan Kara
2016-10-03  9:55     ` Jan Kara
2016-10-03 18:40     ` Ross Zwisler
2016-10-03 18:40       ` Ross Zwisler
2016-10-03 18:40       ` Ross Zwisler
2016-10-03 18:40       ` Ross Zwisler
2016-10-03 18:40       ` Ross Zwisler
2016-09-29 22:49 ` [PATCH v4 08/12] dax: remove dax_pmd_fault() Ross Zwisler
2016-09-29 22:49   ` Ross Zwisler
2016-09-29 22:49   ` Ross Zwisler
2016-09-29 22:49   ` Ross Zwisler
2016-09-29 22:49   ` Ross Zwisler
2016-09-30  8:50   ` Christoph Hellwig
2016-09-30  8:50     ` Christoph Hellwig
2016-09-30  8:50     ` Christoph Hellwig
2016-09-30  8:50     ` Christoph Hellwig
2016-09-30  8:50     ` Christoph Hellwig
2016-10-03  9:56   ` Jan Kara
2016-10-03  9:56     ` Jan Kara
2016-10-03  9:56     ` Jan Kara
2016-10-03  9:56     ` Jan Kara
2016-09-29 22:49 ` [PATCH v4 09/12] dax: correct dax iomap code namespace Ross Zwisler
2016-09-29 22:49   ` Ross Zwisler
2016-09-29 22:49   ` Ross Zwisler
2016-09-29 22:49   ` Ross Zwisler
2016-09-30  8:51   ` Christoph Hellwig
2016-09-30  8:51     ` Christoph Hellwig
2016-09-30  8:51     ` Christoph Hellwig
2016-09-30  8:51     ` Christoph Hellwig
2016-10-03  9:57   ` Jan Kara
2016-10-03  9:57     ` Jan Kara
2016-10-03  9:57     ` Jan Kara
2016-10-03  9:57     ` Jan Kara
2016-09-29 22:49 ` [PATCH v4 10/12] dax: add struct iomap based DAX PMD support Ross Zwisler
2016-09-29 22:49   ` Ross Zwisler
2016-09-29 22:49   ` Ross Zwisler
2016-09-29 22:49   ` Ross Zwisler
2016-09-29 22:49   ` Ross Zwisler
     [not found]   ` <1475189370-31634-11-git-send-email-ross.zwisler-VuQAYsv1563Yd54FQh9/CA@public.gmane.org>
2016-09-30  9:56     ` Christoph Hellwig
2016-09-30  9:56       ` Christoph Hellwig
2016-09-30  9:56       ` Christoph Hellwig
     [not found]       ` <20160930095627.GB5299-wEGCiKHe2LqWVfeAwA7xHQ@public.gmane.org>
2016-10-03 21:16         ` Ross Zwisler
2016-10-03 21:16           ` Ross Zwisler
2016-10-03 21:16           ` Ross Zwisler
2016-10-03 10:59   ` Jan Kara
2016-10-03 10:59     ` Jan Kara
2016-10-03 10:59     ` Jan Kara
2016-10-03 10:59     ` Jan Kara
2016-10-03 16:37     ` Christoph Hellwig
2016-10-03 16:37       ` Christoph Hellwig
2016-10-03 16:37       ` Christoph Hellwig
2016-10-03 21:05     ` Ross Zwisler
2016-10-03 21:05       ` Ross Zwisler
2016-10-03 21:05       ` Ross Zwisler
2016-10-04  5:55       ` Jan Kara
2016-10-04  5:55         ` Jan Kara
2016-10-04  5:55         ` Jan Kara
2016-10-04  5:55         ` Jan Kara
2016-10-04 15:39         ` Ross Zwisler [this message]
2016-10-04 15:39           ` Ross Zwisler
2016-10-04 15:39           ` Ross Zwisler
2016-10-04 15:39           ` Ross Zwisler
2016-10-05  5:50           ` Jan Kara
2016-10-05  5:50             ` Jan Kara
2016-10-05  5:50             ` Jan Kara
2016-10-05  5:50             ` Jan Kara
2016-10-05  5:50             ` Jan Kara
2016-10-06 21:34       ` Ross Zwisler
2016-10-06 21:34         ` Ross Zwisler
2016-10-06 21:34         ` Ross Zwisler
2016-10-06 21:34         ` Ross Zwisler
2016-10-06 21:34         ` Ross Zwisler
2016-10-07  2:58         ` Ross Zwisler
2016-10-07  2:58           ` Ross Zwisler
2016-10-07  2:58           ` Ross Zwisler
2016-10-07  2:58           ` Ross Zwisler
2016-10-07  7:24           ` Jan Kara
2016-10-07  7:24             ` Jan Kara
2016-10-07  7:24             ` Jan Kara
2016-10-07  7:24             ` Jan Kara
2016-10-07  7:24             ` Jan Kara
2016-09-29 22:49 ` [PATCH v4 11/12] xfs: use struct iomap based DAX PMD fault path Ross Zwisler
2016-09-29 22:49   ` Ross Zwisler
2016-09-29 22:49   ` Ross Zwisler
2016-09-29 22:49   ` Ross Zwisler
2016-09-29 22:49 ` [PATCH v4 12/12] dax: remove "depends on BROKEN" from FS_DAX_PMD Ross Zwisler
2016-09-29 22:49   ` Ross Zwisler
2016-09-29 22:49   ` Ross Zwisler
2016-09-29 22:49   ` Ross Zwisler
2016-09-29 22:49   ` Ross Zwisler
2016-09-29 23:43 ` [PATCH v4 00/12] re-enable DAX PMD support Dave Chinner
2016-09-29 23:43   ` Dave Chinner
2016-09-29 23:43   ` Dave Chinner
2016-09-29 23:43   ` Dave Chinner
2016-09-29 23:43   ` Dave Chinner
2016-09-30  3:03   ` Ross Zwisler
2016-09-30  3:03     ` Ross Zwisler
2016-09-30  3:03     ` Ross Zwisler
2016-09-30  3:03     ` Ross Zwisler
2016-09-30  4:00     ` Darrick J. Wong
2016-09-30  4:00       ` Darrick J. Wong
2016-10-03 18:54       ` Ross Zwisler
2016-10-03 18:54         ` Ross Zwisler
2016-09-30  6:48     ` Dave Chinner
2016-09-30  6:48       ` Dave Chinner
2016-09-30  6:48       ` Dave Chinner
2016-09-30  6:48       ` Dave Chinner
2016-10-03 21:11       ` Ross Zwisler
2016-10-03 21:11         ` Ross Zwisler
2016-10-03 21:11         ` Ross Zwisler
2016-10-03 21:11         ` Ross Zwisler
2016-10-03 23:05   ` Ross Zwisler
2016-10-03 23:05     ` Ross Zwisler
2016-10-03 23:05     ` Ross Zwisler
2016-10-03 23:05     ` Ross Zwisler
2016-09-30 11:46 ` Christoph Hellwig
2016-09-30 11:46   ` Christoph Hellwig

Reply instructions:

You may reply publicly to this message via plain-text email
using any one of the following methods:

* Save the following mbox file, import it into your mail client,
  and reply-to-all from there: mbox

  Avoid top-posting and favor interleaved quoting:
  https://en.wikipedia.org/wiki/Posting_style#Interleaved_style

* Reply using the --to, --cc, and --in-reply-to
  switches of git-send-email(1):

  git send-email \
    --in-reply-to=20161004153948.GA21248@linux.intel.com \
    --to=ross.zwisler@linux.intel.com \
    --cc=adilger.kernel@dilger.ca \
    --cc=akpm@linux-foundation.org \
    --cc=david@fromorbit.com \
    --cc=hch@lst.de \
    --cc=jack@suse.com \
    --cc=jack@suse.cz \
    --cc=linux-ext4@vger.kernel.org \
    --cc=linux-fsdevel@vger.kernel.org \
    --cc=linux-kernel@vger.kernel.org \
    --cc=linux-mm@kvack.org \
    --cc=linux-nvdimm@lists.01.org \
    --cc=linux-xfs@vger.kernel.org \
    --cc=mawilcox@microsoft.com \
    --cc=tytso@mit.edu \
    --cc=viro@zeniv.linux.org.uk \
    /path/to/YOUR_REPLY

  https://kernel.org/pub/software/scm/git/docs/git-send-email.html

* If your mail client supports setting the In-Reply-To header
  via mailto: links, try the mailto: link
Be sure your reply has a Subject: header at the top and a blank line before the message body.
This is an external index of several public inboxes,
see mirroring instructions on how to clone and mirror
all data and code used by this external index.