linux-kernel.vger.kernel.org archive mirror
* [PATCH v2 0/4] Write protect DAX PMDs in *sync path
@ 2016-12-22 21:18 Ross Zwisler
  2016-12-22 21:18 ` [PATCH v2 1/4] dax: kill uml support Ross Zwisler
                   ` (4 more replies)
  0 siblings, 5 replies; 10+ messages in thread
From: Ross Zwisler @ 2016-12-22 21:18 UTC (permalink / raw)
  To: linux-kernel
  Cc: Ross Zwisler, Alexander Viro, Andrew Morton, Arnd Bergmann,
	Christoph Hellwig, Dan Williams, Dave Chinner, Dave Hansen,
	Jan Kara, Matthew Wilcox, linux-arch, linux-fsdevel, linux-mm,
	linux-nvdimm

Currently dax_mapping_entry_mkclean() fails to clean and write protect the
pmd_t of a DAX PMD entry during an *sync operation.  This can result in
data loss, as detailed in patch 4.

You can find a working tree here:

https://git.kernel.org/cgit/linux/kernel/git/zwisler/linux.git/log/?h=dax_pmd_clean_v2

This series applies cleanly to mmotm-2016-12-19-16-31.

Changes since v1:
 - Included Dan's patch to kill DAX support for UML.
 - Instead of wrapping the DAX PMD code in dax_mapping_entry_mkclean() in
   an #ifdef, we now create a stub for pmdp_huge_clear_flush() for the case
   when CONFIG_TRANSPARENT_HUGEPAGE isn't defined. (Dan & Jan)

Dan Williams (1):
  dax: kill uml support

Ross Zwisler (3):
  dax: add stub for pmdp_huge_clear_flush()
  mm: add follow_pte_pmd()
  dax: wrprotect pmd_t in dax_mapping_entry_mkclean

 fs/Kconfig                    |  2 +-
 fs/dax.c                      | 49 ++++++++++++++++++++++++++++++-------------
 include/asm-generic/pgtable.h | 10 +++++++++
 include/linux/mm.h            |  4 ++--
 mm/memory.c                   | 41 ++++++++++++++++++++++++++++--------
 5 files changed, 79 insertions(+), 27 deletions(-)

-- 
2.7.4

^ permalink raw reply	[flat|nested] 10+ messages in thread

* [PATCH v2 1/4] dax: kill uml support
  2016-12-22 21:18 [PATCH v2 0/4] Write protect DAX PMDs in *sync path Ross Zwisler
@ 2016-12-22 21:18 ` Ross Zwisler
  2016-12-23 13:45   ` Jan Kara
  2016-12-22 21:18 ` [PATCH v2 2/4] dax: add stub for pmdp_huge_clear_flush() Ross Zwisler
                   ` (3 subsequent siblings)
  4 siblings, 1 reply; 10+ messages in thread
From: Ross Zwisler @ 2016-12-22 21:18 UTC (permalink / raw)
  To: linux-kernel
  Cc: Dan Williams, Alexander Viro, Andrew Morton, Arnd Bergmann,
	Christoph Hellwig, Dave Chinner, Dave Hansen, Jan Kara,
	Matthew Wilcox, linux-arch, linux-fsdevel, linux-mm,
	linux-nvdimm, Ross Zwisler

From: Dan Williams <dan.j.williams@intel.com>

The lack of common transparent-huge-page helpers for UML is becoming
increasingly painful for fs/dax.c now that it is growing more pmd
functionality. Add UML to the list of unsupported architectures.

Cc: Jan Kara <jack@suse.cz>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Dave Chinner <david@fromorbit.com>
Cc: Dave Hansen <dave.hansen@intel.com>
Cc: Matthew Wilcox <mawilcox@microsoft.com>
Cc: Alexander Viro <viro@zeniv.linux.org.uk>
Cc: Ross Zwisler <ross.zwisler@linux.intel.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
[rez: squashed #ifdef removal into another patch in the series]
Signed-off-by: Ross Zwisler <ross.zwisler@linux.intel.com>
---
 fs/Kconfig | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/fs/Kconfig b/fs/Kconfig
index c2a377c..661931f 100644
--- a/fs/Kconfig
+++ b/fs/Kconfig
@@ -37,7 +37,7 @@ source "fs/f2fs/Kconfig"
 config FS_DAX
 	bool "Direct Access (DAX) support"
 	depends on MMU
-	depends on !(ARM || MIPS || SPARC)
+	depends on !(ARM || MIPS || SPARC || UML)
 	help
 	  Direct Access (DAX) can be used on memory-backed block devices.
 	  If the block device supports DAX and the filesystem supports DAX,
-- 
2.7.4

* [PATCH v2 2/4] dax: add stub for pmdp_huge_clear_flush()
  2016-12-22 21:18 [PATCH v2 0/4] Write protect DAX PMDs in *sync path Ross Zwisler
  2016-12-22 21:18 ` [PATCH v2 1/4] dax: kill uml support Ross Zwisler
@ 2016-12-22 21:18 ` Ross Zwisler
  2016-12-23 13:44   ` Jan Kara
  2016-12-22 21:18 ` [PATCH v2 3/4] mm: add follow_pte_pmd() Ross Zwisler
                   ` (2 subsequent siblings)
  4 siblings, 1 reply; 10+ messages in thread
From: Ross Zwisler @ 2016-12-22 21:18 UTC (permalink / raw)
  To: linux-kernel
  Cc: Ross Zwisler, Alexander Viro, Andrew Morton, Arnd Bergmann,
	Christoph Hellwig, Dan Williams, Dave Chinner, Dave Hansen,
	Jan Kara, Matthew Wilcox, linux-arch, linux-fsdevel, linux-mm,
	linux-nvdimm

Add a pmdp_huge_clear_flush() stub for configs that don't define
CONFIG_TRANSPARENT_HUGEPAGE.

We use a WARN_ON_ONCE() instead of a BUILD_BUG() because in the DAX code at
least we do want this to compile successfully even for configs without
CONFIG_TRANSPARENT_HUGEPAGE.  Whether this code gets called is a runtime
decision, based on whether we find DAX PMD entries in our tree.  We
shouldn't ever find such PMD entries for !CONFIG_TRANSPARENT_HUGEPAGE
configs, so this function should never be called.

Signed-off-by: Ross Zwisler <ross.zwisler@linux.intel.com>
---
 include/asm-generic/pgtable.h | 10 ++++++++++
 1 file changed, 10 insertions(+)

diff --git a/include/asm-generic/pgtable.h b/include/asm-generic/pgtable.h
index 18af2bc..65e9536 100644
--- a/include/asm-generic/pgtable.h
+++ b/include/asm-generic/pgtable.h
@@ -178,9 +178,19 @@ extern pte_t ptep_clear_flush(struct vm_area_struct *vma,
 #endif
 
 #ifndef __HAVE_ARCH_PMDP_HUGE_CLEAR_FLUSH
+#ifdef CONFIG_TRANSPARENT_HUGEPAGE
 extern pmd_t pmdp_huge_clear_flush(struct vm_area_struct *vma,
 			      unsigned long address,
 			      pmd_t *pmdp);
+#else
+static inline pmd_t pmdp_huge_clear_flush(struct vm_area_struct *vma,
+			      unsigned long address,
+			      pmd_t *pmdp)
+{
+	WARN_ON_ONCE(1);
+	return *pmdp;
+}
+#endif /* CONFIG_TRANSPARENT_HUGEPAGE */
 #endif
 
 #ifndef __HAVE_ARCH_PTEP_SET_WRPROTECT
-- 
2.7.4

* [PATCH v2 3/4] mm: add follow_pte_pmd()
  2016-12-22 21:18 [PATCH v2 0/4] Write protect DAX PMDs in *sync path Ross Zwisler
  2016-12-22 21:18 ` [PATCH v2 1/4] dax: kill uml support Ross Zwisler
  2016-12-22 21:18 ` [PATCH v2 2/4] dax: add stub for pmdp_huge_clear_flush() Ross Zwisler
@ 2016-12-22 21:18 ` Ross Zwisler
  2016-12-22 21:18 ` [PATCH v2 4/4] dax: wrprotect pmd_t in dax_mapping_entry_mkclean Ross Zwisler
  2017-01-04  0:13 ` [PATCH v2 0/4] Write protect DAX PMDs in *sync path Ross Zwisler
  4 siblings, 0 replies; 10+ messages in thread
From: Ross Zwisler @ 2016-12-22 21:18 UTC (permalink / raw)
  To: linux-kernel
  Cc: Ross Zwisler, Alexander Viro, Andrew Morton, Arnd Bergmann,
	Christoph Hellwig, Dan Williams, Dave Chinner, Dave Hansen,
	Jan Kara, Matthew Wilcox, linux-arch, linux-fsdevel, linux-mm,
	linux-nvdimm

Similar to follow_pte(), follow_pte_pmd() allows either a PTE leaf or a
huge page PMD leaf to be found and returned.

Signed-off-by: Ross Zwisler <ross.zwisler@linux.intel.com>
Suggested-by: Dave Hansen <dave.hansen@intel.com>
---
 include/linux/mm.h |  2 ++
 mm/memory.c        | 37 ++++++++++++++++++++++++++++++-------
 2 files changed, 32 insertions(+), 7 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index 4424784..ff0e1c1 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -1212,6 +1212,8 @@ void unmap_mapping_range(struct address_space *mapping,
 		loff_t const holebegin, loff_t const holelen, int even_cows);
 int follow_pte(struct mm_struct *mm, unsigned long address, pte_t **ptepp,
 	       spinlock_t **ptlp);
+int follow_pte_pmd(struct mm_struct *mm, unsigned long address,
+			     pte_t **ptepp, pmd_t **pmdpp, spinlock_t **ptlp);
 int follow_pfn(struct vm_area_struct *vma, unsigned long address,
 	unsigned long *pfn);
 int follow_phys(struct vm_area_struct *vma, unsigned long address,
diff --git a/mm/memory.c b/mm/memory.c
index 455c3e6..29edd91 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -3779,8 +3779,8 @@ int __pmd_alloc(struct mm_struct *mm, pud_t *pud, unsigned long address)
 }
 #endif /* __PAGETABLE_PMD_FOLDED */
 
-static int __follow_pte(struct mm_struct *mm, unsigned long address,
-		pte_t **ptepp, spinlock_t **ptlp)
+static int __follow_pte_pmd(struct mm_struct *mm, unsigned long address,
+		pte_t **ptepp, pmd_t **pmdpp, spinlock_t **ptlp)
 {
 	pgd_t *pgd;
 	pud_t *pud;
@@ -3797,11 +3797,20 @@ static int __follow_pte(struct mm_struct *mm, unsigned long address,
 
 	pmd = pmd_offset(pud, address);
 	VM_BUG_ON(pmd_trans_huge(*pmd));
-	if (pmd_none(*pmd) || unlikely(pmd_bad(*pmd)))
-		goto out;
 
-	/* We cannot handle huge page PFN maps. Luckily they don't exist. */
-	if (pmd_huge(*pmd))
+	if (pmd_huge(*pmd)) {
+		if (!pmdpp)
+			goto out;
+
+		*ptlp = pmd_lock(mm, pmd);
+		if (pmd_huge(*pmd)) {
+			*pmdpp = pmd;
+			return 0;
+		}
+		spin_unlock(*ptlp);
+	}
+
+	if (pmd_none(*pmd) || unlikely(pmd_bad(*pmd)))
 		goto out;
 
 	ptep = pte_offset_map_lock(mm, pmd, address, ptlp);
@@ -3824,9 +3833,23 @@ int follow_pte(struct mm_struct *mm, unsigned long address, pte_t **ptepp,
 
 	/* (void) is needed to make gcc happy */
 	(void) __cond_lock(*ptlp,
-			   !(res = __follow_pte(mm, address, ptepp, ptlp)));
+			   !(res = __follow_pte_pmd(mm, address, ptepp, NULL,
+					   ptlp)));
+	return res;
+}
+
+int follow_pte_pmd(struct mm_struct *mm, unsigned long address,
+			     pte_t **ptepp, pmd_t **pmdpp, spinlock_t **ptlp)
+{
+	int res;
+
+	/* (void) is needed to make gcc happy */
+	(void) __cond_lock(*ptlp,
+			   !(res = __follow_pte_pmd(mm, address, ptepp, pmdpp,
+					   ptlp)));
 	return res;
 }
+EXPORT_SYMBOL(follow_pte_pmd);
 
 /**
  * follow_pfn - look up PFN at a user virtual address
-- 
2.7.4

* [PATCH v2 4/4] dax: wrprotect pmd_t in dax_mapping_entry_mkclean
  2016-12-22 21:18 [PATCH v2 0/4] Write protect DAX PMDs in *sync path Ross Zwisler
                   ` (2 preceding siblings ...)
  2016-12-22 21:18 ` [PATCH v2 3/4] mm: add follow_pte_pmd() Ross Zwisler
@ 2016-12-22 21:18 ` Ross Zwisler
  2017-01-04  0:13 ` [PATCH v2 0/4] Write protect DAX PMDs in *sync path Ross Zwisler
  4 siblings, 0 replies; 10+ messages in thread
From: Ross Zwisler @ 2016-12-22 21:18 UTC (permalink / raw)
  To: linux-kernel
  Cc: Ross Zwisler, Alexander Viro, Andrew Morton, Arnd Bergmann,
	Christoph Hellwig, Dan Williams, Dave Chinner, Dave Hansen,
	Jan Kara, Matthew Wilcox, linux-arch, linux-fsdevel, linux-mm,
	linux-nvdimm

Currently dax_mapping_entry_mkclean() fails to clean and write protect the
pmd_t of a DAX PMD entry during an *sync operation.  This can result in
data loss in the following sequence:

1) mmap write to DAX PMD, dirtying PMD radix tree entry and making the
   pmd_t dirty and writeable
2) fsync, flushing out PMD data and cleaning the radix tree entry. We
   currently fail to mark the pmd_t as clean and write protected.
3) more mmap writes to the PMD.  These don't cause any page faults since
   the pmd_t is dirty and writeable.  The radix tree entry remains clean.
4) fsync, which fails to flush the dirty PMD data because the radix tree
   entry was clean.
5) crash - dirty data that should have been fsync'd as part of 4) could
   still have been in the processor cache, and is lost.

Fix this by marking the pmd_t clean and write protected in
dax_mapping_entry_mkclean(), which is called as part of the fsync operation
in step 2).  This will cause the writes in step 3) above to generate page
faults where we'll re-dirty the PMD radix tree entry, resulting in flushes
in the fsync that happens in step 4).

Signed-off-by: Ross Zwisler <ross.zwisler@linux.intel.com>
Cc: Jan Kara <jack@suse.cz>
Fixes: 4b4bb46d00b3 ("dax: clear dirty entry tags on cache flush")
Reviewed-by: Jan Kara <jack@suse.cz>
---
 fs/dax.c           | 49 ++++++++++++++++++++++++++++++++++---------------
 include/linux/mm.h |  2 --
 mm/memory.c        |  4 ++--
 3 files changed, 36 insertions(+), 19 deletions(-)

diff --git a/fs/dax.c b/fs/dax.c
index 5c74f60..62b3ed4 100644
--- a/fs/dax.c
+++ b/fs/dax.c
@@ -691,8 +691,8 @@ static void dax_mapping_entry_mkclean(struct address_space *mapping,
 				      pgoff_t index, unsigned long pfn)
 {
 	struct vm_area_struct *vma;
-	pte_t *ptep;
-	pte_t pte;
+	pte_t pte, *ptep = NULL;
+	pmd_t *pmdp = NULL;
 	spinlock_t *ptl;
 	bool changed;
 
@@ -707,21 +707,40 @@ static void dax_mapping_entry_mkclean(struct address_space *mapping,
 
 		address = pgoff_address(index, vma);
 		changed = false;
-		if (follow_pte(vma->vm_mm, address, &ptep, &ptl))
+		if (follow_pte_pmd(vma->vm_mm, address, &ptep, &pmdp, &ptl))
 			continue;
-		if (pfn != pte_pfn(*ptep))
-			goto unlock;
-		if (!pte_dirty(*ptep) && !pte_write(*ptep))
-			goto unlock;
 
-		flush_cache_page(vma, address, pfn);
-		pte = ptep_clear_flush(vma, address, ptep);
-		pte = pte_wrprotect(pte);
-		pte = pte_mkclean(pte);
-		set_pte_at(vma->vm_mm, address, ptep, pte);
-		changed = true;
-unlock:
-		pte_unmap_unlock(ptep, ptl);
+		if (pmdp) {
+			pmd_t pmd;
+
+			if (pfn != pmd_pfn(*pmdp))
+				goto unlock_pmd;
+			if (!pmd_dirty(*pmdp) && !pmd_write(*pmdp))
+				goto unlock_pmd;
+
+			flush_cache_page(vma, address, pfn);
+			pmd = pmdp_huge_clear_flush(vma, address, pmdp);
+			pmd = pmd_wrprotect(pmd);
+			pmd = pmd_mkclean(pmd);
+			set_pmd_at(vma->vm_mm, address, pmdp, pmd);
+			changed = true;
+unlock_pmd:
+			spin_unlock(ptl);
+		} else {
+			if (pfn != pte_pfn(*ptep))
+				goto unlock_pte;
+			if (!pte_dirty(*ptep) && !pte_write(*ptep))
+				goto unlock_pte;
+
+			flush_cache_page(vma, address, pfn);
+			pte = ptep_clear_flush(vma, address, ptep);
+			pte = pte_wrprotect(pte);
+			pte = pte_mkclean(pte);
+			set_pte_at(vma->vm_mm, address, ptep, pte);
+			changed = true;
+unlock_pte:
+			pte_unmap_unlock(ptep, ptl);
+		}
 
 		if (changed)
 			mmu_notifier_invalidate_page(vma->vm_mm, address);
diff --git a/include/linux/mm.h b/include/linux/mm.h
index ff0e1c1..f4de7fa 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -1210,8 +1210,6 @@ int copy_page_range(struct mm_struct *dst, struct mm_struct *src,
 			struct vm_area_struct *vma);
 void unmap_mapping_range(struct address_space *mapping,
 		loff_t const holebegin, loff_t const holelen, int even_cows);
-int follow_pte(struct mm_struct *mm, unsigned long address, pte_t **ptepp,
-	       spinlock_t **ptlp);
 int follow_pte_pmd(struct mm_struct *mm, unsigned long address,
 			     pte_t **ptepp, pmd_t **pmdpp, spinlock_t **ptlp);
 int follow_pfn(struct vm_area_struct *vma, unsigned long address,
diff --git a/mm/memory.c b/mm/memory.c
index 29edd91..ddcf979 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -3826,8 +3826,8 @@ static int __follow_pte_pmd(struct mm_struct *mm, unsigned long address,
 	return -EINVAL;
 }
 
-int follow_pte(struct mm_struct *mm, unsigned long address, pte_t **ptepp,
-	       spinlock_t **ptlp)
+static inline int follow_pte(struct mm_struct *mm, unsigned long address,
+			     pte_t **ptepp, spinlock_t **ptlp)
 {
 	int res;
 
-- 
2.7.4

* Re: [PATCH v2 2/4] dax: add stub for pmdp_huge_clear_flush()
  2016-12-22 21:18 ` [PATCH v2 2/4] dax: add stub for pmdp_huge_clear_flush() Ross Zwisler
@ 2016-12-23 13:44   ` Jan Kara
  0 siblings, 0 replies; 10+ messages in thread
From: Jan Kara @ 2016-12-23 13:44 UTC (permalink / raw)
  To: Ross Zwisler
  Cc: linux-kernel, Alexander Viro, Andrew Morton, Arnd Bergmann,
	Christoph Hellwig, Dan Williams, Dave Chinner, Dave Hansen,
	Jan Kara, Matthew Wilcox, linux-arch, linux-fsdevel, linux-mm,
	linux-nvdimm

On Thu 22-12-16 14:18:54, Ross Zwisler wrote:
> Add a pmdp_huge_clear_flush() stub for configs that don't define
> CONFIG_TRANSPARENT_HUGEPAGE.
> 
> We use a WARN_ON_ONCE() instead of a BUILD_BUG() because in the DAX code at
> least we do want this to compile successfully even for configs without
> CONFIG_TRANSPARENT_HUGEPAGE.  Whether this code gets called is a runtime
> decision, based on whether we find DAX PMD entries in our tree.  We
> shouldn't ever find such PMD entries for !CONFIG_TRANSPARENT_HUGEPAGE
> configs, so this function should never be called.
> 
> Signed-off-by: Ross Zwisler <ross.zwisler@linux.intel.com>

Looks good. You can add:

Reviewed-by: Jan Kara <jack@suse.cz>

								Honza

> ---
>  include/asm-generic/pgtable.h | 10 ++++++++++
>  1 file changed, 10 insertions(+)
> 
> diff --git a/include/asm-generic/pgtable.h b/include/asm-generic/pgtable.h
> index 18af2bc..65e9536 100644
> --- a/include/asm-generic/pgtable.h
> +++ b/include/asm-generic/pgtable.h
> @@ -178,9 +178,19 @@ extern pte_t ptep_clear_flush(struct vm_area_struct *vma,
>  #endif
>  
>  #ifndef __HAVE_ARCH_PMDP_HUGE_CLEAR_FLUSH
> +#ifdef CONFIG_TRANSPARENT_HUGEPAGE
>  extern pmd_t pmdp_huge_clear_flush(struct vm_area_struct *vma,
>  			      unsigned long address,
>  			      pmd_t *pmdp);
> +#else
> +static inline pmd_t pmdp_huge_clear_flush(struct vm_area_struct *vma,
> +			      unsigned long address,
> +			      pmd_t *pmdp)
> +{
> +	WARN_ON_ONCE(1);
> +	return *pmdp;
> +}
> +#endif /* CONFIG_TRANSPARENT_HUGEPAGE */
>  #endif
>  
>  #ifndef __HAVE_ARCH_PTEP_SET_WRPROTECT
> -- 
> 2.7.4
> 
-- 
Jan Kara <jack@suse.com>
SUSE Labs, CR

* Re: [PATCH v2 1/4] dax: kill uml support
  2016-12-22 21:18 ` [PATCH v2 1/4] dax: kill uml support Ross Zwisler
@ 2016-12-23 13:45   ` Jan Kara
  0 siblings, 0 replies; 10+ messages in thread
From: Jan Kara @ 2016-12-23 13:45 UTC (permalink / raw)
  To: Ross Zwisler
  Cc: linux-kernel, Dan Williams, Alexander Viro, Andrew Morton,
	Arnd Bergmann, Christoph Hellwig, Dave Chinner, Dave Hansen,
	Jan Kara, Matthew Wilcox, linux-arch, linux-fsdevel, linux-mm,
	linux-nvdimm

On Thu 22-12-16 14:18:53, Ross Zwisler wrote:
> From: Dan Williams <dan.j.williams@intel.com>
> 
> The lack of common transparent-huge-page helpers for UML is becoming
> increasingly painful for fs/dax.c now that it is growing more pmd
> functionality. Add UML to the list of unsupported architectures.
> 
> Cc: Jan Kara <jack@suse.cz>
> Cc: Christoph Hellwig <hch@lst.de>
> Cc: Dave Chinner <david@fromorbit.com>
> Cc: Dave Hansen <dave.hansen@intel.com>
> Cc: Matthew Wilcox <mawilcox@microsoft.com>
> Cc: Alexander Viro <viro@zeniv.linux.org.uk>
> Cc: Ross Zwisler <ross.zwisler@linux.intel.com>
> Signed-off-by: Dan Williams <dan.j.williams@intel.com>
> [rez: squashed #ifdef removal into another patch in the series ]
> Signed-off-by: Ross Zwisler <ross.zwisler@linux.intel.com>

Fine by me. You can add:

Acked-by: Jan Kara <jack@suse.cz>

								Honza

> ---
>  fs/Kconfig | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/fs/Kconfig b/fs/Kconfig
> index c2a377c..661931f 100644
> --- a/fs/Kconfig
> +++ b/fs/Kconfig
> @@ -37,7 +37,7 @@ source "fs/f2fs/Kconfig"
>  config FS_DAX
>  	bool "Direct Access (DAX) support"
>  	depends on MMU
> -	depends on !(ARM || MIPS || SPARC)
> +	depends on !(ARM || MIPS || SPARC || UML)
>  	help
>  	  Direct Access (DAX) can be used on memory-backed block devices.
>  	  If the block device supports DAX and the filesystem supports DAX,
> -- 
> 2.7.4
> 
-- 
Jan Kara <jack@suse.com>
SUSE Labs, CR

* Re: [PATCH v2 0/4] Write protect DAX PMDs in *sync path
  2016-12-22 21:18 [PATCH v2 0/4] Write protect DAX PMDs in *sync path Ross Zwisler
                   ` (3 preceding siblings ...)
  2016-12-22 21:18 ` [PATCH v2 4/4] dax: wrprotect pmd_t in dax_mapping_entry_mkclean Ross Zwisler
@ 2017-01-04  0:13 ` Ross Zwisler
  2017-01-06  1:27   ` Andrew Morton
  4 siblings, 1 reply; 10+ messages in thread
From: Ross Zwisler @ 2017-01-04  0:13 UTC (permalink / raw)
  To: Andrew Morton
  Cc: linux-kernel, Alexander Viro, Andrew Morton, Arnd Bergmann,
	Christoph Hellwig, Dan Williams, Dave Chinner, Dave Hansen,
	Jan Kara, Matthew Wilcox, linux-arch, linux-fsdevel, linux-mm,
	linux-nvdimm

On Thu, Dec 22, 2016 at 02:18:52PM -0700, Ross Zwisler wrote:
> Currently dax_mapping_entry_mkclean() fails to clean and write protect the
> pmd_t of a DAX PMD entry during an *sync operation.  This can result in
> data loss, as detailed in patch 4.
> 
> You can find a working tree here:
> 
> https://git.kernel.org/cgit/linux/kernel/git/zwisler/linux.git/log/?h=dax_pmd_clean_v2
> 
> This series applies cleanly to mmotm-2016-12-19-16-31.
> 
> Changes since v1:
>  - Included Dan's patch to kill DAX support for UML.
>  - Instead of wrapping the DAX PMD code in dax_mapping_entry_mkclean() in
>    an #ifdef, we now create a stub for pmdp_huge_clear_flush() for the case
>    when CONFIG_TRANSPARENT_HUGEPAGE isn't defined. (Dan & Jan)
> 
> Dan Williams (1):
>   dax: kill uml support
> 
> Ross Zwisler (3):
>   dax: add stub for pmdp_huge_clear_flush()
>   mm: add follow_pte_pmd()
>   dax: wrprotect pmd_t in dax_mapping_entry_mkclean
> 
>  fs/Kconfig                    |  2 +-
>  fs/dax.c                      | 49 ++++++++++++++++++++++++++++++-------------
>  include/asm-generic/pgtable.h | 10 +++++++++
>  include/linux/mm.h            |  4 ++--
>  mm/memory.c                   | 41 ++++++++++++++++++++++++++++--------
>  5 files changed, 79 insertions(+), 27 deletions(-)

Well, 0-day found another architecture that doesn't define pmd_pfn() et al.,
so we'll need some more fixes. (Thank you, 0-day, for the coverage!)

I have to apologize, I didn't understand that Dan intended his "dax: kill uml
support" patch to land in v4.11.  I thought he intended it as a cleanup to my
series, which really needs to land in v4.10.  That's why I folded them
together into this v2, along with the wrapper suggested by Jan.

Andrew, does it work for you to just keep v1 of this series, and eventually
send that to Linus for v4.10?

https://lkml.org/lkml/2016/12/20/649

You've already pulled that one into -mm, and it does correctly solve the data
loss issue.

That would let us deal with getting rid of the #ifdef, blacklisting
architectures and introducing the pmdp_huge_clear_flush() stub in a follow-on
series for v4.11.

* Re: [PATCH v2 0/4] Write protect DAX PMDs in *sync path
  2017-01-04  0:13 ` [PATCH v2 0/4] Write protect DAX PMDs in *sync path Ross Zwisler
@ 2017-01-06  1:27   ` Andrew Morton
  2017-01-06 18:18     ` Ross Zwisler
  0 siblings, 1 reply; 10+ messages in thread
From: Andrew Morton @ 2017-01-06  1:27 UTC (permalink / raw)
  To: Ross Zwisler
  Cc: linux-kernel, Alexander Viro, Arnd Bergmann, Christoph Hellwig,
	Dan Williams, Dave Chinner, Dave Hansen, Jan Kara,
	Matthew Wilcox, linux-arch, linux-fsdevel, linux-mm,
	linux-nvdimm

On Tue, 3 Jan 2017 17:13:49 -0700 Ross Zwisler <ross.zwisler@linux.intel.com> wrote:

> On Thu, Dec 22, 2016 at 02:18:52PM -0700, Ross Zwisler wrote:
> > Currently dax_mapping_entry_mkclean() fails to clean and write protect the
> > pmd_t of a DAX PMD entry during an *sync operation.  This can result in
> > data loss, as detailed in patch 4.
> > 
> > You can find a working tree here:
> > 
> > https://git.kernel.org/cgit/linux/kernel/git/zwisler/linux.git/log/?h=dax_pmd_clean_v2
> > 
> > This series applies cleanly to mmotm-2016-12-19-16-31.
> > 
> > Changes since v1:
> >  - Included Dan's patch to kill DAX support for UML.
> >  - Instead of wrapping the DAX PMD code in dax_mapping_entry_mkclean() in
> >    an #ifdef, we now create a stub for pmdp_huge_clear_flush() for the case
> >    when CONFIG_TRANSPARENT_HUGEPAGE isn't defined. (Dan & Jan)
> > 
> > Dan Williams (1):
> >   dax: kill uml support
> > 
> > Ross Zwisler (3):
> >   dax: add stub for pmdp_huge_clear_flush()
> >   mm: add follow_pte_pmd()
> >   dax: wrprotect pmd_t in dax_mapping_entry_mkclean
> > 
> >  fs/Kconfig                    |  2 +-
> >  fs/dax.c                      | 49 ++++++++++++++++++++++++++++++-------------
> >  include/asm-generic/pgtable.h | 10 +++++++++
> >  include/linux/mm.h            |  4 ++--
> >  mm/memory.c                   | 41 ++++++++++++++++++++++++++++--------
> >  5 files changed, 79 insertions(+), 27 deletions(-)
> 
> Well, 0-day found another architecture that doesn't define pmd_pfn() et al.,
> so we'll need some more fixes. (Thank you, 0-day, for the coverage!)
> 
> I have to apologize, I didn't understand that Dan intended his "dax: kill uml
> support" patch to land in v4.11.  I thought he intended it as a cleanup to my
> series, which really needs to land in v4.10.  That's why I folded them
> together into this v2, along with the wrapper suggested by Jan.
> 
> Andrew, does it work for you to just keep v1 of this series, and eventually
> send that to Linus for v4.10?
> 
> https://lkml.org/lkml/2016/12/20/649
> 
> You've already pulled that one into -mm, and it does correctly solve the data
> loss issue.
> 
> That would let us deal with getting rid of the #ifdef, blacklisting
> > architectures and introducing the pmdp_huge_clear_flush() stub in a follow-on
> series for v4.11.

I have mm-add-follow_pte_pmd.patch and
dax-wrprotect-pmd_t-in-dax_mapping_entry_mkclean.patch queued for 4.10.
Please (re)send any additional patches, indicating for each one
whether you believe it should also go into 4.10?

* Re: [PATCH v2 0/4] Write protect DAX PMDs in *sync path
  2017-01-06  1:27   ` Andrew Morton
@ 2017-01-06 18:18     ` Ross Zwisler
  0 siblings, 0 replies; 10+ messages in thread
From: Ross Zwisler @ 2017-01-06 18:18 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Ross Zwisler, linux-kernel, Alexander Viro, Arnd Bergmann,
	Christoph Hellwig, Dan Williams, Dave Chinner, Dave Hansen,
	Jan Kara, Matthew Wilcox, linux-arch, linux-fsdevel, linux-mm,
	linux-nvdimm

On Thu, Jan 05, 2017 at 05:27:34PM -0800, Andrew Morton wrote:
> On Tue, 3 Jan 2017 17:13:49 -0700 Ross Zwisler <ross.zwisler@linux.intel.com> wrote:
> 
> > On Thu, Dec 22, 2016 at 02:18:52PM -0700, Ross Zwisler wrote:
> > > Currently dax_mapping_entry_mkclean() fails to clean and write protect the
> > > pmd_t of a DAX PMD entry during an *sync operation.  This can result in
> > > data loss, as detailed in patch 4.
> > > 
> > > You can find a working tree here:
> > > 
> > > https://git.kernel.org/cgit/linux/kernel/git/zwisler/linux.git/log/?h=dax_pmd_clean_v2
> > > 
> > > This series applies cleanly to mmotm-2016-12-19-16-31.
> > > 
> > > Changes since v1:
> > >  - Included Dan's patch to kill DAX support for UML.
> > >  - Instead of wrapping the DAX PMD code in dax_mapping_entry_mkclean() in
> > >    an #ifdef, we now create a stub for pmdp_huge_clear_flush() for the case
> > >    when CONFIG_TRANSPARENT_HUGEPAGE isn't defined. (Dan & Jan)
> > > 
> > > Dan Williams (1):
> > >   dax: kill uml support
> > > 
> > > Ross Zwisler (3):
> > >   dax: add stub for pmdp_huge_clear_flush()
> > >   mm: add follow_pte_pmd()
> > >   dax: wrprotect pmd_t in dax_mapping_entry_mkclean
> > > 
> > >  fs/Kconfig                    |  2 +-
> > >  fs/dax.c                      | 49 ++++++++++++++++++++++++++++++-------------
> > >  include/asm-generic/pgtable.h | 10 +++++++++
> > >  include/linux/mm.h            |  4 ++--
> > >  mm/memory.c                   | 41 ++++++++++++++++++++++++++++--------
> > >  5 files changed, 79 insertions(+), 27 deletions(-)
> > 
> > Well, 0-day found another architecture that doesn't define pmd_pfn() et al.,
> > so we'll need some more fixes. (Thank you, 0-day, for the coverage!)
> > 
> > I have to apologize, I didn't understand that Dan intended his "dax: kill uml
> > support" patch to land in v4.11.  I thought he intended it as a cleanup to my
> > series, which really needs to land in v4.10.  That's why I folded them
> > together into this v2, along with the wrapper suggested by Jan.
> > 
> > Andrew, does it work for you to just keep v1 of this series, and eventually
> > send that to Linus for v4.10?
> > 
> > https://lkml.org/lkml/2016/12/20/649
> > 
> > You've already pulled that one into -mm, and it does correctly solve the data
> > loss issue.
> > 
> > That would let us deal with getting rid of the #ifdef, blacklisting
> > architectures and introducing the pmdp_huge_clear_flush() strub in a follow-on
> > series for v4.11.
> 
> I have mm-add-follow_pte_pmd.patch and
> dax-wrprotect-pmd_t-in-dax_mapping_entry_mkclean.patch queued for 4.10.
> Please (re)send any additional patches, indicating for each one
> whether you believe it should also go into 4.10?

The two patches that you already have queued are correct, and no additional
patches are necessary for v4.10 for this issue.
