From: Ross Zwisler <ross.zwisler@linux.intel.com>
To: linux-kernel@vger.kernel.org
Cc: Ross Zwisler <ross.zwisler@linux.intel.com>,
	"H. Peter Anvin" <hpa@zytor.com>,
	"J. Bruce Fields" <bfields@fieldses.org>,
	Theodore Ts'o <tytso@mit.edu>,
	Alexander Viro <viro@zeniv.linux.org.uk>,
	Andreas Dilger <adilger.kernel@dilger.ca>,
	Dan Williams <dan.j.williams@intel.com>,
	Dave Chinner <david@fromorbit.com>,
	Ingo Molnar <mingo@redhat.com>, Jan Kara <jack@suse.com>,
	Jeff Layton <jlayton@poochiereds.net>,
	Matthew Wilcox <willy@linux.intel.com>,
	Thomas Gleixner <tglx@linutronix.de>,
	linux-ext4@vger.kernel.org, linux-fsdevel@vger.kernel.org,
	linux-mm@kvack.org, linux-nvdimm@lists.01.org, x86@kernel.org,
	xfs@oss.sgi.com, Andrew Morton <akpm@linux-foundation.org>,
	Matthew Wilcox <matthew.r.wilcox@intel.com>,
	Dave Hansen <dave.hansen@linux.intel.com>
Subject: [PATCH v2 05/11] mm: add follow_pte_pmd()
Date: Fri, 13 Nov 2015 17:06:44 -0700	[thread overview]
Message-ID: <1447459610-14259-6-git-send-email-ross.zwisler@linux.intel.com> (raw)
In-Reply-To: <1447459610-14259-1-git-send-email-ross.zwisler@linux.intel.com>

Similar to follow_pte(), follow_pte_pmd() walks the page tables for a user
address, but it can find and return either a PTE leaf or a huge page PMD
leaf, together with a pointer to the page table lock that protects the
returned entry.
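
As a usage illustration only (this hypothetical caller is not part of the
patch, and it assumes the nonzero return on lookup failure carried over
from follow_pte()), a caller asks for both entry types and releases
whichever lock it is handed back:

	static void inspect_mapping(struct vm_area_struct *vma,
				    unsigned long address)
	{
		struct mm_struct *mm = vma->vm_mm;
		pte_t *ptep = NULL;
		pmd_t *pmdp = NULL;
		spinlock_t *ptl;

		if (follow_pte_pmd(mm, address, &ptep, &pmdp, &ptl))
			return;	/* no PTE or PMD leaf mapped at this address */

		if (pmdp) {
			/* huge page PMD leaf: *pmdp is valid, ptl is the PMD lock */
			spin_unlock(ptl);
		} else {
			/* PTE leaf: *ptep is valid and mapped, ptl is the PTE lock */
			pte_unmap_unlock(ptep, ptl);
		}
	}

Exactly one of the two output pointers is set on success, so checking
pmdp first is sufficient to tell which lock is held.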

Signed-off-by: Ross Zwisler <ross.zwisler@linux.intel.com>
Suggested-by: Dave Hansen <dave.hansen@linux.intel.com>
---
 include/linux/mm.h |  2 ++
 mm/memory.c        | 38 ++++++++++++++++++++++++++++++--------
 2 files changed, 32 insertions(+), 8 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index 80001de..393441c 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -1166,6 +1166,8 @@ int copy_page_range(struct mm_struct *dst, struct mm_struct *src,
 			struct vm_area_struct *vma);
 void unmap_mapping_range(struct address_space *mapping,
 		loff_t const holebegin, loff_t const holelen, int even_cows);
+int follow_pte_pmd(struct mm_struct *mm, unsigned long address,
+			     pte_t **ptepp, pmd_t **pmdpp, spinlock_t **ptlp);
 int follow_pfn(struct vm_area_struct *vma, unsigned long address,
 	unsigned long *pfn);
 int follow_phys(struct vm_area_struct *vma, unsigned long address,
diff --git a/mm/memory.c b/mm/memory.c
index deb679c..7f4090e 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -3512,8 +3512,8 @@ int __pmd_alloc(struct mm_struct *mm, pud_t *pud, unsigned long address)
 }
 #endif /* __PAGETABLE_PMD_FOLDED */
 
-static int __follow_pte(struct mm_struct *mm, unsigned long address,
-		pte_t **ptepp, spinlock_t **ptlp)
+static int __follow_pte_pmd(struct mm_struct *mm, unsigned long address,
+		pte_t **ptepp, pmd_t **pmdpp, spinlock_t **ptlp)
 {
 	pgd_t *pgd;
 	pud_t *pud;
@@ -3529,12 +3529,20 @@ static int __follow_pte(struct mm_struct *mm, unsigned long address,
 		goto out;
 
 	pmd = pmd_offset(pud, address);
-	VM_BUG_ON(pmd_trans_huge(*pmd));
-	if (pmd_none(*pmd) || unlikely(pmd_bad(*pmd)))
-		goto out;
 
-	/* We cannot handle huge page PFN maps. Luckily they don't exist. */
-	if (pmd_huge(*pmd))
+	if (pmd_huge(*pmd)) {
+		if (!pmdpp)
+			goto out;
+
+		*ptlp = pmd_lock(mm, pmd);
+		if (pmd_huge(*pmd)) {
+			*pmdpp = pmd;
+			return 0;
+		}
+		spin_unlock(*ptlp);
+	}
+
+	if (pmd_none(*pmd) || unlikely(pmd_bad(*pmd)))
 		goto out;
 
 	ptep = pte_offset_map_lock(mm, pmd, address, ptlp);
@@ -3557,9 +3565,23 @@ static inline int follow_pte(struct mm_struct *mm, unsigned long address,
 
 	/* (void) is needed to make gcc happy */
 	(void) __cond_lock(*ptlp,
-			   !(res = __follow_pte(mm, address, ptepp, ptlp)));
+			   !(res = __follow_pte_pmd(mm, address, ptepp, NULL,
+					   ptlp)));
+	return res;
+}
+
+int follow_pte_pmd(struct mm_struct *mm, unsigned long address,
+			     pte_t **ptepp, pmd_t **pmdpp, spinlock_t **ptlp)
+{
+	int res;
+
+	/* (void) is needed to make gcc happy */
+	(void) __cond_lock(*ptlp,
+			   !(res = __follow_pte_pmd(mm, address, ptepp, pmdpp,
+					   ptlp)));
 	return res;
 }
+EXPORT_SYMBOL(follow_pte_pmd);
 
 /**
  * follow_pfn - look up PFN at a user virtual address
-- 
2.1.0
