From: Jan Kara <jack@suse.cz>
To: linux-mm@kvack.org
Cc: Jan Kara <jack@suse.cz>,
	linux-nvdimm@lists.01.org, linux-fsdevel@vger.kernel.org,
	"Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
Subject: [PATCH 16/20] mm: Provide helper for finishing mkwrite faults
Date: Tue, 27 Sep 2016 18:08:20 +0200	[thread overview]
Message-ID: <1474992504-20133-17-git-send-email-jack@suse.cz> (raw)
In-Reply-To: <1474992504-20133-1-git-send-email-jack@suse.cz>

Provide a helper function for finishing write faults due to the PTE being
read-only. The helper will be used by DAX to avoid complicating generic MM
code with DAX locking specifics.
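
For illustration only (not part of this patch), a caller such as a DAX
pfn_mkwrite handler might use the helper roughly as sketched below; the
dax_entry_lock()/dax_entry_unlock() names are hypothetical placeholders for
whatever protection the filesystem holds against concurrent faults and
writeback:

	static int example_pfn_mkwrite(struct vm_area_struct *vma,
				       struct vm_fault *vmf)
	{
		void *entry;
		int ret;

		/* Hypothetical lock protecting the mapping entry */
		entry = dax_entry_lock(vmf);
		/* Revalidate the PTE under its lock and make it writable */
		ret = finish_mkwrite_fault(vmf);
		dax_entry_unlock(entry);

		/* VM_FAULT_WRITE on success, 0 if the PTE changed under us */
		return ret;
	}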

Signed-off-by: Jan Kara <jack@suse.cz>
---
 include/linux/mm.h |  1 +
 mm/memory.c        | 65 +++++++++++++++++++++++++++++++-----------------------
 2 files changed, 39 insertions(+), 27 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index 1055f2ece80d..e5a014be8932 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -617,6 +617,7 @@ static inline pte_t maybe_mkwrite(pte_t pte, struct vm_area_struct *vma)
 int alloc_set_pte(struct vm_fault *vmf, struct mem_cgroup *memcg,
 		struct page *page);
 int finish_fault(struct vm_fault *vmf);
+int finish_mkwrite_fault(struct vm_fault *vmf);
 #endif
 
 /*
diff --git a/mm/memory.c b/mm/memory.c
index f49e736d6a36..8c8cb7f2133e 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -2266,6 +2266,36 @@ oom:
 	return VM_FAULT_OOM;
 }
 
+/**
 + * finish_mkwrite_fault - finish page fault making PTE writeable once the
 + *			   page is prepared
+ *
+ * @vmf: structure describing the fault
+ *
 + * This function handles all that is needed to finish a write page fault due
 + * to the PTE being read-only once the mapped page is prepared. It handles
 + * locking of the PTE and modifying it. The function returns VM_FAULT_WRITE
 + * on success, 0 when the PTE changed before we acquired the PTE lock.
 + *
 + * The function expects the page to be locked or other protection against
 + * concurrent faults / writeback to be held (such as DAX radix tree locks).
+ */
+int finish_mkwrite_fault(struct vm_fault *vmf)
+{
+	vmf->pte = pte_offset_map_lock(vmf->vma->vm_mm, vmf->pmd, vmf->address,
+				       &vmf->ptl);
+	/*
+	 * We might have raced with another page fault while we released the
+	 * pte_offset_map_lock.
+	 */
+	if (!pte_same(*vmf->pte, vmf->orig_pte)) {
+		pte_unmap_unlock(vmf->pte, vmf->ptl);
+		return 0;
+	}
+	wp_page_reuse(vmf);
+	return VM_FAULT_WRITE;
+}
+
 /*
  * Handle write page faults for VM_MIXEDMAP or VM_PFNMAP for a VM_SHARED
  * mapping
@@ -2282,16 +2312,7 @@ static int wp_pfn_shared(struct vm_fault *vmf)
 		ret = vma->vm_ops->pfn_mkwrite(vma, vmf);
 		if (ret & VM_FAULT_ERROR)
 			return ret;
-		vmf->pte = pte_offset_map_lock(vma->vm_mm, vmf->pmd,
-				vmf->address, &vmf->ptl);
-		/*
-		 * We might have raced with another page fault while we
-		 * released the pte_offset_map_lock.
-		 */
-		if (!pte_same(*vmf->pte, vmf->orig_pte)) {
-			pte_unmap_unlock(vmf->pte, vmf->ptl);
-			return 0;
-		}
+		return finish_mkwrite_fault(vmf);
 	}
 	wp_page_reuse(vmf);
 	return VM_FAULT_WRITE;
@@ -2301,7 +2322,6 @@ static int wp_page_shared(struct vm_fault *vmf)
 	__releases(vmf->ptl)
 {
 	struct vm_area_struct *vma = vmf->vma;
-	int page_mkwrite = 0;
 
 	get_page(vmf->page);
 
@@ -2315,26 +2335,17 @@ static int wp_page_shared(struct vm_fault *vmf)
 			put_page(vmf->page);
 			return tmp;
 		}
-		/*
-		 * Since we dropped the lock we need to revalidate
-		 * the PTE as someone else may have changed it.  If
-		 * they did, we just return, as we can count on the
-		 * MMU to tell us if they didn't also make it writable.
-		 */
-		vmf->pte = pte_offset_map_lock(vma->vm_mm, vmf->pmd,
-						vmf->address, &vmf->ptl);
-		if (!pte_same(*vmf->pte, vmf->orig_pte)) {
+		tmp = finish_mkwrite_fault(vmf);
+		if (unlikely(!tmp || (tmp &
+				      (VM_FAULT_ERROR | VM_FAULT_NOPAGE)))) {
 			unlock_page(vmf->page);
-			pte_unmap_unlock(vmf->pte, vmf->ptl);
 			put_page(vmf->page);
-			return 0;
+			return tmp;
 		}
-		page_mkwrite = 1;
-	}
-
-	wp_page_reuse(vmf);
-	if (!page_mkwrite)
+	} else {
+		wp_page_reuse(vmf);
 		lock_page(vmf->page);
+	}
 	fault_dirty_shared_page(vma, vmf->page);
 	put_page(vmf->page);
 
-- 
2.6.6

Thread overview: 126+ messages
2016-09-27 16:08 [PATCH 0/20 v3] dax: Clear dirty bits after flushing caches Jan Kara
2016-09-27 16:08 ` [PATCH 01/20] mm: Change type of vmf->virtual_address Jan Kara
     [not found]   ` <1474992504-20133-2-git-send-email-jack-AlSwsSmVLrQ@public.gmane.org>
2016-09-30  9:07     ` Christoph Hellwig
2016-09-30  9:07       ` Christoph Hellwig
2016-10-14 18:02   ` Ross Zwisler
2016-10-14 18:02     ` Ross Zwisler
2016-09-27 16:08 ` [PATCH 02/20] mm: Join struct fault_env and vm_fault Jan Kara
     [not found]   ` <1474992504-20133-3-git-send-email-jack-AlSwsSmVLrQ@public.gmane.org>
2016-09-30  9:10     ` Christoph Hellwig
2016-09-30  9:10       ` Christoph Hellwig
2016-10-03  7:43       ` Jan Kara
2016-10-03  7:43         ` Jan Kara
2016-09-27 16:08 ` [PATCH 03/20] mm: Use pgoff in struct vm_fault instead of passing it separately Jan Kara
2016-09-27 16:08   ` Jan Kara
2016-09-27 16:08   ` Jan Kara
2016-10-14 18:42   ` Ross Zwisler
2016-10-14 18:42     ` Ross Zwisler
2016-10-17  9:01     ` Jan Kara
2016-09-27 16:08 ` [PATCH 04/20] mm: Use passed vm_fault structure in __do_fault() Jan Kara
2016-09-27 16:08   ` Jan Kara
2016-10-14 19:05   ` Ross Zwisler
2016-10-14 19:05     ` Ross Zwisler
2016-09-27 16:08 ` [PATCH 05/20] mm: Trim __do_fault() arguments Jan Kara
2016-09-27 16:08   ` Jan Kara
2016-10-14 20:31   ` Ross Zwisler
2016-10-17  9:04     ` Jan Kara
2016-10-17  9:04       ` Jan Kara
2016-09-27 16:08 ` [PATCH 06/20] mm: Use pass vm_fault structure for in wp_pfn_shared() Jan Kara
2016-09-27 16:08   ` Jan Kara
2016-09-27 16:08   ` Jan Kara
2016-10-14 21:04   ` Ross Zwisler
2016-10-14 21:04     ` Ross Zwisler
2016-09-27 16:08 ` [PATCH 07/20] mm: Add orig_pte field into vm_fault Jan Kara
2016-10-17 16:45   ` Ross Zwisler
2016-10-17 16:45     ` Ross Zwisler
2016-10-18 10:13     ` Jan Kara
2016-10-18 10:13       ` Jan Kara
2016-09-27 16:08 ` [PATCH 08/20] mm: Allow full handling of COW faults in ->fault handlers Jan Kara
2016-09-27 16:08   ` Jan Kara
2016-09-27 16:08   ` Jan Kara
2016-10-17 16:50   ` Ross Zwisler
2016-09-27 16:08 ` [PATCH 09/20] mm: Factor out functionality to finish page faults Jan Kara
2016-10-17 17:38   ` Ross Zwisler
2016-10-17 17:38     ` Ross Zwisler
2016-10-17 17:40   ` Ross Zwisler
2016-10-17 17:40     ` Ross Zwisler
2016-10-18  9:44     ` Jan Kara
2016-09-27 16:08 ` [PATCH 10/20] mm: Move handling of COW faults into DAX code Jan Kara
2016-10-17 19:29   ` Ross Zwisler
2016-10-17 19:29     ` Ross Zwisler
2016-10-18 10:32     ` Jan Kara
2016-10-18 10:32       ` Jan Kara
2016-09-27 16:08 ` [PATCH 11/20] mm: Remove unnecessary vma->vm_ops check Jan Kara
2016-10-17 19:40   ` Ross Zwisler
2016-10-17 19:40     ` Ross Zwisler
2016-10-18 10:37     ` Jan Kara
2016-10-18 10:37       ` Jan Kara
2016-09-27 16:08 ` [PATCH 12/20] mm: Factor out common parts of write fault handling Jan Kara
2016-09-27 16:08   ` Jan Kara
2016-10-17 22:08   ` Ross Zwisler
2016-10-17 22:08     ` Ross Zwisler
2016-10-18 10:50     ` Jan Kara
2016-10-18 17:32       ` Ross Zwisler
2016-10-18 17:32         ` Ross Zwisler
2016-09-27 16:08 ` [PATCH 13/20] mm: Pass vm_fault structure into do_page_mkwrite() Jan Kara
2016-10-17 22:29   ` Ross Zwisler
2016-10-17 22:29     ` Ross Zwisler
2016-09-27 16:08 ` [PATCH 14/20] mm: Use vmf->page during WP faults Jan Kara
2016-10-18 17:56   ` Ross Zwisler
2016-10-18 17:56     ` Ross Zwisler
2016-09-27 16:08 ` [PATCH 15/20] mm: Move part of wp_page_reuse() into the single call site Jan Kara
2016-09-27 16:08   ` Jan Kara
2016-09-27 16:08   ` Jan Kara
2016-10-18 17:59   ` Ross Zwisler
2016-09-27 16:08 ` Jan Kara [this message]
2016-09-27 16:08   ` [PATCH 16/20] mm: Provide helper for finishing mkwrite faults Jan Kara
2016-09-27 16:08   ` Jan Kara
2016-10-18 18:35   ` Ross Zwisler
2016-10-18 18:35     ` Ross Zwisler
2016-10-19  7:16     ` Jan Kara
2016-10-19  7:16       ` Jan Kara
2016-10-19 17:21       ` Ross Zwisler
2016-10-19 17:21         ` Ross Zwisler
2016-10-20  8:48         ` Jan Kara
2016-10-20  8:48           ` Jan Kara
2016-09-27 16:08 ` [PATCH 17/20] mm: Export follow_pte() Jan Kara
2016-10-18 18:37   ` Ross Zwisler
2016-10-18 18:37     ` Ross Zwisler
2016-09-27 16:08 ` [PATCH 18/20] dax: Make cache flushing protected by entry lock Jan Kara
2016-10-18 19:20   ` Ross Zwisler
2016-10-19  7:19     ` Jan Kara
2016-10-19  7:19       ` Jan Kara
2016-10-19 18:25     ` Ross Zwisler
2016-10-19 18:25       ` Ross Zwisler
2016-09-27 16:08 ` [PATCH 19/20] dax: Protect PTE modification on WP fault by radix tree " Jan Kara
2016-09-27 16:08   ` Jan Kara
2016-09-27 16:08   ` Jan Kara
2016-10-18 19:53   ` Ross Zwisler
2016-10-18 19:53     ` Ross Zwisler
2016-10-19  7:25     ` Jan Kara
2016-10-19  7:25       ` Jan Kara
2016-10-19 17:25       ` Ross Zwisler
2016-10-19 17:25         ` Ross Zwisler
2016-09-27 16:08 ` [PATCH 20/20] dax: Clear dirty entry tags on cache flush Jan Kara
2016-09-27 16:08   ` Jan Kara
2016-10-18 22:12   ` Ross Zwisler
2016-10-18 22:12     ` Ross Zwisler
2016-10-19  7:30     ` Jan Kara
2016-10-19  7:30       ` Jan Kara
2016-10-19 16:38       ` Ross Zwisler
2016-10-19 16:38         ` Ross Zwisler
2016-09-30  9:14 ` [PATCH 0/20 v3] dax: Clear dirty bits after flushing caches Christoph Hellwig
2016-10-03  7:59   ` Jan Kara
2016-10-03  8:03     ` Christoph Hellwig
2016-10-03  8:15       ` Jan Kara
2016-10-03  8:15         ` Jan Kara
2016-10-03  9:32         ` Christoph Hellwig
2016-10-03  9:32           ` Christoph Hellwig
2016-10-03 11:13           ` Jan Kara
     [not found]             ` <20161003111358.GQ6457-4I4JzKEfoa/jFM9bn6wA6Q@public.gmane.org>
2016-10-13 20:34               ` Ross Zwisler
2016-10-13 20:34                 ` Ross Zwisler
2016-10-17  8:47                 ` Jan Kara
2016-10-17  8:47                   ` Jan Kara
     [not found]                   ` <20161017084732.GD3359-4I4JzKEfoa/jFM9bn6wA6Q@public.gmane.org>
2016-10-17 18:59                     ` Ross Zwisler
2016-10-17 18:59                       ` Ross Zwisler
2016-10-18  9:49                       ` Jan Kara
2016-10-18  9:49                         ` Jan Kara
