[PATCH 0/2] Write protect DAX PMDs in *sync path
From: Ross Zwisler @ 2016-12-20 22:23 UTC
To: linux-kernel
Cc: Jan Kara, Andrew Morton, Matthew Wilcox, linux-nvdimm,
Dave Chinner, Christoph Hellwig, linux-mm, Dave Hansen,
Alexander Viro, linux-fsdevel

Currently dax_mapping_entry_mkclean() fails to clean and write protect the
pmd_t of a DAX PMD entry during an *sync operation. This can result in
data loss, as detailed in patch 2.

This series is based on Dan's "libnvdimm-pending" branch, which is the
current home for Jan's "dax: Page invalidation fixes" series. You can find
a working tree here:
https://git.kernel.org/cgit/linux/kernel/git/zwisler/linux.git/log/?h=dax_pmd_clean

Ross Zwisler (2):
mm: add follow_pte_pmd()
dax: wrprotect pmd_t in dax_mapping_entry_mkclean

fs/dax.c | 51 ++++++++++++++++++++++++++++++++++++---------------
include/linux/mm.h | 4 ++--
mm/memory.c | 41 ++++++++++++++++++++++++++++++++---------
3 files changed, 70 insertions(+), 26 deletions(-)
--
2.7.4

[PATCH 1/2] mm: add follow_pte_pmd()
From: Ross Zwisler @ 2016-12-20 22:23 UTC
To: linux-kernel
Cc: Jan Kara, Andrew Morton, Matthew Wilcox, linux-nvdimm,
Dave Chinner, Christoph Hellwig, linux-mm, Dave Hansen,
Alexander Viro, linux-fsdevel

Similar to follow_pte(), follow_pte_pmd() allows either a PTE leaf or a
huge page PMD leaf to be found and returned.
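
For illustration, a caller that has to handle both leaf sizes would look
roughly like the sketch below (kernel context; this is not part of the
patch, it mirrors the dax_mapping_entry_mkclean() usage in patch 2, and
addr_is_dirty() is a made-up example helper):

	/* Sketch: hypothetical follow_pte_pmd() caller, not part of this patch. */
	static bool addr_is_dirty(struct mm_struct *mm, unsigned long address)
	{
		pte_t *ptep = NULL;
		pmd_t *pmdp = NULL;
		spinlock_t *ptl;
		bool dirty;

		if (follow_pte_pmd(mm, address, &ptep, &pmdp, &ptl))
			return false;	/* no PTE or PMD leaf at this address */

		if (pmdp) {
			/* huge page PMD leaf; ptl is the PMD lock, still held */
			dirty = pmd_dirty(*pmdp);
			spin_unlock(ptl);
		} else {
			/* PTE leaf; ptep is mapped and ptl is the PTE lock */
			dirty = pte_dirty(*ptep);
			pte_unmap_unlock(ptep, ptl);
		}
		return dirty;
	}
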
Signed-off-by: Ross Zwisler <ross.zwisler@linux.intel.com>
Suggested-by: Dave Hansen <dave.hansen@intel.com>
---
include/linux/mm.h | 2 ++
mm/memory.c | 37 ++++++++++++++++++++++++++++++-------
2 files changed, 32 insertions(+), 7 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index 4424784..ff0e1c1 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -1212,6 +1212,8 @@ void unmap_mapping_range(struct address_space *mapping,
loff_t const holebegin, loff_t const holelen, int even_cows);
int follow_pte(struct mm_struct *mm, unsigned long address, pte_t **ptepp,
spinlock_t **ptlp);
+int follow_pte_pmd(struct mm_struct *mm, unsigned long address,
+ pte_t **ptepp, pmd_t **pmdpp, spinlock_t **ptlp);
int follow_pfn(struct vm_area_struct *vma, unsigned long address,
unsigned long *pfn);
int follow_phys(struct vm_area_struct *vma, unsigned long address,
diff --git a/mm/memory.c b/mm/memory.c
index 455c3e6..29edd91 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -3779,8 +3779,8 @@ int __pmd_alloc(struct mm_struct *mm, pud_t *pud, unsigned long address)
}
#endif /* __PAGETABLE_PMD_FOLDED */
-static int __follow_pte(struct mm_struct *mm, unsigned long address,
- pte_t **ptepp, spinlock_t **ptlp)
+static int __follow_pte_pmd(struct mm_struct *mm, unsigned long address,
+ pte_t **ptepp, pmd_t **pmdpp, spinlock_t **ptlp)
{
pgd_t *pgd;
pud_t *pud;
@@ -3797,11 +3797,20 @@ static int __follow_pte(struct mm_struct *mm, unsigned long address,
pmd = pmd_offset(pud, address);
VM_BUG_ON(pmd_trans_huge(*pmd));
- if (pmd_none(*pmd) || unlikely(pmd_bad(*pmd)))
- goto out;
- /* We cannot handle huge page PFN maps. Luckily they don't exist. */
- if (pmd_huge(*pmd))
+ if (pmd_huge(*pmd)) {
+ if (!pmdpp)
+ goto out;
+
+ *ptlp = pmd_lock(mm, pmd);
+ if (pmd_huge(*pmd)) {
+ *pmdpp = pmd;
+ return 0;
+ }
+ spin_unlock(*ptlp);
+ }
+
+ if (pmd_none(*pmd) || unlikely(pmd_bad(*pmd)))
goto out;
ptep = pte_offset_map_lock(mm, pmd, address, ptlp);
@@ -3824,9 +3833,23 @@ int follow_pte(struct mm_struct *mm, unsigned long address, pte_t **ptepp,
/* (void) is needed to make gcc happy */
(void) __cond_lock(*ptlp,
- !(res = __follow_pte(mm, address, ptepp, ptlp)));
+ !(res = __follow_pte_pmd(mm, address, ptepp, NULL,
+ ptlp)));
+ return res;
+}
+
+int follow_pte_pmd(struct mm_struct *mm, unsigned long address,
+ pte_t **ptepp, pmd_t **pmdpp, spinlock_t **ptlp)
+{
+ int res;
+
+ /* (void) is needed to make gcc happy */
+ (void) __cond_lock(*ptlp,
+ !(res = __follow_pte_pmd(mm, address, ptepp, pmdpp,
+ ptlp)));
return res;
}
+EXPORT_SYMBOL(follow_pte_pmd);
/**
* follow_pfn - look up PFN at a user virtual address
--
2.7.4

[PATCH 2/2] dax: wrprotect pmd_t in dax_mapping_entry_mkclean
From: Ross Zwisler @ 2016-12-20 22:23 UTC
To: linux-kernel
Cc: Jan Kara, Andrew Morton, Matthew Wilcox, linux-nvdimm,
Dave Chinner, Christoph Hellwig, linux-mm, Dave Hansen,
Alexander Viro, linux-fsdevel

Currently dax_mapping_entry_mkclean() fails to clean and write protect the
pmd_t of a DAX PMD entry during an *sync operation. This can result in
data loss in the following sequence:

1) mmap write to DAX PMD, dirtying PMD radix tree entry and making the
pmd_t dirty and writeable
2) fsync, flushing out PMD data and cleaning the radix tree entry. We
currently fail to mark the pmd_t as clean and write protected.
3) more mmap writes to the PMD. These don't cause any page faults since
the pmd_t is dirty and writeable. The radix tree entry remains clean.
4) fsync, which fails to flush the dirty PMD data because the radix tree
entry was clean.
5) crash - dirty data that should have been fsync'd as part of 4) could
still have been in the processor cache, and is lost.
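
The sequence above can be sketched from userspace roughly as follows (an
illustrative reproducer, not part of the patch; it assumes /mnt/dax/file
sits on a filesystem mounted with -o dax and is large enough and suitably
aligned to be mapped with a 2 MiB PMD; error handling omitted):

	/* Illustrative reproducer; the path and PMD size are assumptions. */
	#include <fcntl.h>
	#include <sys/mman.h>
	#include <unistd.h>

	#define PMD_SIZE (2UL << 20)	/* 2 MiB, as on x86_64 */

	int main(void)
	{
		int fd = open("/mnt/dax/file", O_RDWR);
		char *p = mmap(NULL, PMD_SIZE, PROT_READ | PROT_WRITE,
			       MAP_SHARED, fd, 0);

		p[0] = 1;	/* 1) PMD fault: pmd_t becomes dirty + writeable */
		fsync(fd);	/* 2) flushes data, cleans the radix tree entry */
		p[0] = 2;	/* 3) no new fault; radix tree entry stays clean */
		fsync(fd);	/* 4) sees a clean entry, so flushes nothing */
		/* 5) a crash here can lose the second write */
		return 0;
	}
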
Fix this by marking the pmd_t clean and write protected in
dax_mapping_entry_mkclean(), which is called as part of the fsync
operation 2). This will cause the writes in step 3) above to generate page
faults where we'll re-dirty the PMD radix tree entry, resulting in flushes
in the fsync that happens in step 4).

Signed-off-by: Ross Zwisler <ross.zwisler@linux.intel.com>
Cc: Jan Kara <jack@suse.cz>
Fixes: 4b4bb46d00b3 ("dax: clear dirty entry tags on cache flush")
---
fs/dax.c | 51 ++++++++++++++++++++++++++++++++++++---------------
include/linux/mm.h | 2 --
mm/memory.c | 4 ++--
3 files changed, 38 insertions(+), 19 deletions(-)

diff --git a/fs/dax.c b/fs/dax.c
index 5c74f60..ddcddfe 100644
--- a/fs/dax.c
+++ b/fs/dax.c
@@ -691,8 +691,8 @@ static void dax_mapping_entry_mkclean(struct address_space *mapping,
pgoff_t index, unsigned long pfn)
{
struct vm_area_struct *vma;
- pte_t *ptep;
- pte_t pte;
+ pte_t pte, *ptep = NULL;
+ pmd_t *pmdp = NULL;
spinlock_t *ptl;
bool changed;
@@ -707,21 +707,42 @@ static void dax_mapping_entry_mkclean(struct address_space *mapping,
address = pgoff_address(index, vma);
changed = false;
- if (follow_pte(vma->vm_mm, address, &ptep, &ptl))
+ if (follow_pte_pmd(vma->vm_mm, address, &ptep, &pmdp, &ptl))
continue;
- if (pfn != pte_pfn(*ptep))
- goto unlock;
- if (!pte_dirty(*ptep) && !pte_write(*ptep))
- goto unlock;
- flush_cache_page(vma, address, pfn);
- pte = ptep_clear_flush(vma, address, ptep);
- pte = pte_wrprotect(pte);
- pte = pte_mkclean(pte);
- set_pte_at(vma->vm_mm, address, ptep, pte);
- changed = true;
-unlock:
- pte_unmap_unlock(ptep, ptl);
+ if (pmdp) {
+#ifdef CONFIG_FS_DAX_PMD
+ pmd_t pmd;
+
+ if (pfn != pmd_pfn(*pmdp))
+ goto unlock_pmd;
+ if (!pmd_dirty(*pmdp) && !pmd_write(*pmdp))
+ goto unlock_pmd;
+
+ flush_cache_page(vma, address, pfn);
+ pmd = pmdp_huge_clear_flush(vma, address, pmdp);
+ pmd = pmd_wrprotect(pmd);
+ pmd = pmd_mkclean(pmd);
+ set_pmd_at(vma->vm_mm, address, pmdp, pmd);
+ changed = true;
+unlock_pmd:
+ spin_unlock(ptl);
+#endif
+ } else {
+ if (pfn != pte_pfn(*ptep))
+ goto unlock_pte;
+ if (!pte_dirty(*ptep) && !pte_write(*ptep))
+ goto unlock_pte;
+
+ flush_cache_page(vma, address, pfn);
+ pte = ptep_clear_flush(vma, address, ptep);
+ pte = pte_wrprotect(pte);
+ pte = pte_mkclean(pte);
+ set_pte_at(vma->vm_mm, address, ptep, pte);
+ changed = true;
+unlock_pte:
+ pte_unmap_unlock(ptep, ptl);
+ }
if (changed)
mmu_notifier_invalidate_page(vma->vm_mm, address);
diff --git a/include/linux/mm.h b/include/linux/mm.h
index ff0e1c1..f4de7fa 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -1210,8 +1210,6 @@ int copy_page_range(struct mm_struct *dst, struct mm_struct *src,
struct vm_area_struct *vma);
void unmap_mapping_range(struct address_space *mapping,
loff_t const holebegin, loff_t const holelen, int even_cows);
-int follow_pte(struct mm_struct *mm, unsigned long address, pte_t **ptepp,
- spinlock_t **ptlp);
int follow_pte_pmd(struct mm_struct *mm, unsigned long address,
pte_t **ptepp, pmd_t **pmdpp, spinlock_t **ptlp);
int follow_pfn(struct vm_area_struct *vma, unsigned long address,
diff --git a/mm/memory.c b/mm/memory.c
index 29edd91..ddcf979 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -3826,8 +3826,8 @@ static int __follow_pte_pmd(struct mm_struct *mm, unsigned long address,
return -EINVAL;
}
-int follow_pte(struct mm_struct *mm, unsigned long address, pte_t **ptepp,
- spinlock_t **ptlp)
+static inline int follow_pte(struct mm_struct *mm, unsigned long address,
+ pte_t **ptepp, spinlock_t **ptlp)
{
int res;
--
2.7.4

Re: [PATCH 2/2] dax: wrprotect pmd_t in dax_mapping_entry_mkclean
From: Dan Williams @ 2016-12-20 23:06 UTC
To: Ross Zwisler
Cc: Jan Kara, Matthew Wilcox, linux-nvdimm, Dave Chinner,
linux-kernel, Linux MM, Dave Hansen, Alexander Viro,
linux-fsdevel, Andrew Morton, Christoph Hellwig

On Tue, Dec 20, 2016 at 2:23 PM, Ross Zwisler
<ross.zwisler@linux.intel.com> wrote:
> Currently dax_mapping_entry_mkclean() fails to clean and write protect the
> pmd_t of a DAX PMD entry during an *sync operation. This can result in
> data loss in the following sequence:
>
> 1) mmap write to DAX PMD, dirtying PMD radix tree entry and making the
> pmd_t dirty and writeable
> 2) fsync, flushing out PMD data and cleaning the radix tree entry. We
> currently fail to mark the pmd_t as clean and write protected.
> 3) more mmap writes to the PMD. These don't cause any page faults since
> the pmd_t is dirty and writeable. The radix tree entry remains clean.
> 4) fsync, which fails to flush the dirty PMD data because the radix tree
> entry was clean.
> 5) crash - dirty data that should have been fsync'd as part of 4) could
> still have been in the processor cache, and is lost.
>
> Fix this by marking the pmd_t clean and write protected in
> dax_mapping_entry_mkclean(), which is called as part of the fsync
> operation 2). This will cause the writes in step 3) above to generate page
> faults where we'll re-dirty the PMD radix tree entry, resulting in flushes
> in the fsync that happens in step 4).
>
> Signed-off-by: Ross Zwisler <ross.zwisler@linux.intel.com>
> Cc: Jan Kara <jack@suse.cz>
> Fixes: 4b4bb46d00b3 ("dax: clear dirty entry tags on cache flush")
> ---
> fs/dax.c | 51 ++++++++++++++++++++++++++++++++++++---------------
> include/linux/mm.h | 2 --
> mm/memory.c | 4 ++--
> 3 files changed, 38 insertions(+), 19 deletions(-)
>
> diff --git a/fs/dax.c b/fs/dax.c
> index 5c74f60..ddcddfe 100644
> --- a/fs/dax.c
> +++ b/fs/dax.c
> @@ -691,8 +691,8 @@ static void dax_mapping_entry_mkclean(struct address_space *mapping,
> pgoff_t index, unsigned long pfn)
> {
> struct vm_area_struct *vma;
> - pte_t *ptep;
> - pte_t pte;
> + pte_t pte, *ptep = NULL;
> + pmd_t *pmdp = NULL;
> spinlock_t *ptl;
> bool changed;
>
> @@ -707,21 +707,42 @@ static void dax_mapping_entry_mkclean(struct address_space *mapping,
>
> address = pgoff_address(index, vma);
> changed = false;
> - if (follow_pte(vma->vm_mm, address, &ptep, &ptl))
> + if (follow_pte_pmd(vma->vm_mm, address, &ptep, &pmdp, &ptl))
> continue;
> - if (pfn != pte_pfn(*ptep))
> - goto unlock;
> - if (!pte_dirty(*ptep) && !pte_write(*ptep))
> - goto unlock;
>
> - flush_cache_page(vma, address, pfn);
> - pte = ptep_clear_flush(vma, address, ptep);
> - pte = pte_wrprotect(pte);
> - pte = pte_mkclean(pte);
> - set_pte_at(vma->vm_mm, address, ptep, pte);
> - changed = true;
> -unlock:
> - pte_unmap_unlock(ptep, ptl);
> + if (pmdp) {
> +#ifdef CONFIG_FS_DAX_PMD
> + pmd_t pmd;
> +
> + if (pfn != pmd_pfn(*pmdp))
> + goto unlock_pmd;
> + if (!pmd_dirty(*pmdp) && !pmd_write(*pmdp))
> + goto unlock_pmd;
> +
> + flush_cache_page(vma, address, pfn);
> + pmd = pmdp_huge_clear_flush(vma, address, pmdp);
> + pmd = pmd_wrprotect(pmd);
> + pmd = pmd_mkclean(pmd);
> + set_pmd_at(vma->vm_mm, address, pmdp, pmd);
> + changed = true;
> +unlock_pmd:
> + spin_unlock(ptl);
> +#endif

Can we please kill this ifdef?

I know we've had problems with ARCH=um builds in the past with
undefined pmd helpers, but to me that simply means we now need to
extend the FS_DAX blacklist to include UML:

diff --git a/fs/Kconfig b/fs/Kconfig
index c2a377cdda2b..661931fb0ce0 100644
--- a/fs/Kconfig
+++ b/fs/Kconfig
@@ -37,7 +37,7 @@ source "fs/f2fs/Kconfig"
config FS_DAX
bool "Direct Access (DAX) support"
depends on MMU
- depends on !(ARM || MIPS || SPARC)
+ depends on !(ARM || MIPS || SPARC || UML)
help
Direct Access (DAX) can be used on memory-backed block devices.
If the block device supports DAX and the filesystem supports DAX,

Re: [PATCH 2/2] dax: wrprotect pmd_t in dax_mapping_entry_mkclean
From: Jan Kara @ 2016-12-21 8:48 UTC
To: Ross Zwisler
Cc: Jan Kara, Matthew Wilcox, linux-nvdimm, Dave Chinner,
linux-kernel, linux-mm, Dave Hansen, Alexander Viro,
linux-fsdevel, Andrew Morton, Christoph Hellwig

On Tue 20-12-16 15:23:06, Ross Zwisler wrote:
> Currently dax_mapping_entry_mkclean() fails to clean and write protect the
> pmd_t of a DAX PMD entry during an *sync operation. This can result in
> data loss in the following sequence:
>
> 1) mmap write to DAX PMD, dirtying PMD radix tree entry and making the
> pmd_t dirty and writeable
> 2) fsync, flushing out PMD data and cleaning the radix tree entry. We
> currently fail to mark the pmd_t as clean and write protected.
> 3) more mmap writes to the PMD. These don't cause any page faults since
> the pmd_t is dirty and writeable. The radix tree entry remains clean.
> 4) fsync, which fails to flush the dirty PMD data because the radix tree
> entry was clean.
> 5) crash - dirty data that should have been fsync'd as part of 4) could
> still have been in the processor cache, and is lost.
>
> Fix this by marking the pmd_t clean and write protected in
> dax_mapping_entry_mkclean(), which is called as part of the fsync
> operation 2). This will cause the writes in step 3) above to generate page
> faults where we'll re-dirty the PMD radix tree entry, resulting in flushes
> in the fsync that happens in step 4).
>
> Signed-off-by: Ross Zwisler <ross.zwisler@linux.intel.com>
> Cc: Jan Kara <jack@suse.cz>
> Fixes: 4b4bb46d00b3 ("dax: clear dirty entry tags on cache flush")

Yeah, good catch. The patch looks good. You can add:

Reviewed-by: Jan Kara <jack@suse.cz>

								Honza

> ---
> fs/dax.c | 51 ++++++++++++++++++++++++++++++++++++---------------
> include/linux/mm.h | 2 --
> mm/memory.c | 4 ++--
> 3 files changed, 38 insertions(+), 19 deletions(-)
>
> diff --git a/fs/dax.c b/fs/dax.c
> index 5c74f60..ddcddfe 100644
> --- a/fs/dax.c
> +++ b/fs/dax.c
> @@ -691,8 +691,8 @@ static void dax_mapping_entry_mkclean(struct address_space *mapping,
> pgoff_t index, unsigned long pfn)
> {
> struct vm_area_struct *vma;
> - pte_t *ptep;
> - pte_t pte;
> + pte_t pte, *ptep = NULL;
> + pmd_t *pmdp = NULL;
> spinlock_t *ptl;
> bool changed;
>
> @@ -707,21 +707,42 @@ static void dax_mapping_entry_mkclean(struct address_space *mapping,
>
> address = pgoff_address(index, vma);
> changed = false;
> - if (follow_pte(vma->vm_mm, address, &ptep, &ptl))
> + if (follow_pte_pmd(vma->vm_mm, address, &ptep, &pmdp, &ptl))
> continue;
> - if (pfn != pte_pfn(*ptep))
> - goto unlock;
> - if (!pte_dirty(*ptep) && !pte_write(*ptep))
> - goto unlock;
>
> - flush_cache_page(vma, address, pfn);
> - pte = ptep_clear_flush(vma, address, ptep);
> - pte = pte_wrprotect(pte);
> - pte = pte_mkclean(pte);
> - set_pte_at(vma->vm_mm, address, ptep, pte);
> - changed = true;
> -unlock:
> - pte_unmap_unlock(ptep, ptl);
> + if (pmdp) {
> +#ifdef CONFIG_FS_DAX_PMD
> + pmd_t pmd;
> +
> + if (pfn != pmd_pfn(*pmdp))
> + goto unlock_pmd;
> + if (!pmd_dirty(*pmdp) && !pmd_write(*pmdp))
> + goto unlock_pmd;
> +
> + flush_cache_page(vma, address, pfn);
> + pmd = pmdp_huge_clear_flush(vma, address, pmdp);
> + pmd = pmd_wrprotect(pmd);
> + pmd = pmd_mkclean(pmd);
> + set_pmd_at(vma->vm_mm, address, pmdp, pmd);
> + changed = true;
> +unlock_pmd:
> + spin_unlock(ptl);
> +#endif
> + } else {
> + if (pfn != pte_pfn(*ptep))
> + goto unlock_pte;
> + if (!pte_dirty(*ptep) && !pte_write(*ptep))
> + goto unlock_pte;
> +
> + flush_cache_page(vma, address, pfn);
> + pte = ptep_clear_flush(vma, address, ptep);
> + pte = pte_wrprotect(pte);
> + pte = pte_mkclean(pte);
> + set_pte_at(vma->vm_mm, address, ptep, pte);
> + changed = true;
> +unlock_pte:
> + pte_unmap_unlock(ptep, ptl);
> + }
>
> if (changed)
> mmu_notifier_invalidate_page(vma->vm_mm, address);
> diff --git a/include/linux/mm.h b/include/linux/mm.h
> index ff0e1c1..f4de7fa 100644
> --- a/include/linux/mm.h
> +++ b/include/linux/mm.h
> @@ -1210,8 +1210,6 @@ int copy_page_range(struct mm_struct *dst, struct mm_struct *src,
> struct vm_area_struct *vma);
> void unmap_mapping_range(struct address_space *mapping,
> loff_t const holebegin, loff_t const holelen, int even_cows);
> -int follow_pte(struct mm_struct *mm, unsigned long address, pte_t **ptepp,
> - spinlock_t **ptlp);
> int follow_pte_pmd(struct mm_struct *mm, unsigned long address,
> pte_t **ptepp, pmd_t **pmdpp, spinlock_t **ptlp);
> int follow_pfn(struct vm_area_struct *vma, unsigned long address,
> diff --git a/mm/memory.c b/mm/memory.c
> index 29edd91..ddcf979 100644
> --- a/mm/memory.c
> +++ b/mm/memory.c
> @@ -3826,8 +3826,8 @@ static int __follow_pte_pmd(struct mm_struct *mm, unsigned long address,
> return -EINVAL;
> }
>
> -int follow_pte(struct mm_struct *mm, unsigned long address, pte_t **ptepp,
> - spinlock_t **ptlp)
> +static inline int follow_pte(struct mm_struct *mm, unsigned long address,
> + pte_t **ptepp, spinlock_t **ptlp)
> {
> int res;
>
> --
> 2.7.4
>
--
Jan Kara <jack@suse.com>
SUSE Labs, CR