* [PATCH 0/2] fix for direct-I/O to DAX mappings
From: Dan Williams @ 2017-02-25 17:08 UTC (permalink / raw)
To: akpm
Cc: x86, Xiong Zhou, Dave Hansen, linux-kernel, stable, linux-mm,
Ingo Molnar, H. Peter Anvin, Thomas Gleixner, torvalds,
Ross Zwisler
Hi Andrew,
While Ross was reviewing a new mmap+DAX direct-I/O test case for
xfstests, from Xiong, he noticed occasions where it failed to trigger a
page dirty event. Dave then spotted the problem fixed by patch 1: the
pte_devmap() check precludes pte_allows_gup(), i.e. it bypasses
permission checks and dirty tracking.
Patch 2 is a cleanup that clarifies that pte_unmap() only needs to be done
once per page-worth of ptes. It unifies the exit paths, similarly to the
generic gup_pte_range() in the __HAVE_ARCH_PTE_SPECIAL case.
I'm sending this through the -mm tree for a double-check from memory
management folks. It has a build success notification from the kbuild
robot.
---
Dan Williams (2):
x86, mm: fix gup_pte_range() vs DAX mappings
x86, mm: unify exit paths in gup_pte_range()
arch/x86/mm/gup.c | 37 +++++++++++++++++++++----------------
1 file changed, 21 insertions(+), 16 deletions(-)
--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org. For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: <dont@kvack.org>
* [PATCH 1/2] x86, mm: fix gup_pte_range() vs DAX mappings
From: Dan Williams @ 2017-02-25 17:08 UTC (permalink / raw)
To: akpm
Cc: x86, Xiong Zhou, Dave Hansen, linux-kernel, stable, linux-mm,
Ingo Molnar, H. Peter Anvin, Thomas Gleixner, torvalds,
Ross Zwisler
gup_pte_range() fails to check pte_allows_gup() before translating a DAX
pte entry, pte_devmap(), to a page. This allows writes to read-only
mappings, and bypasses the DAX cacheline dirty tracking due to missed
'mkwrite' faults. The gup_huge_pmd() path and the gup_huge_pud() path
correctly check pte_allows_gup() before checking for _devmap() entries.
Fixes: 3565fce3a659 ("mm, x86: get_user_pages() for dax mappings")
Cc: <x86@kernel.org>
Cc: <stable@vger.kernel.org>
Cc: Xiong Zhou <xzhou@redhat.com>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Reported-by: Dave Hansen <dave.hansen@linux.intel.com>
Reported-by: Ross Zwisler <ross.zwisler@linux.intel.com>
Signed-off-by: Ross Zwisler <ross.zwisler@linux.intel.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
---
arch/x86/mm/gup.c | 8 ++++++--
1 file changed, 6 insertions(+), 2 deletions(-)
diff --git a/arch/x86/mm/gup.c b/arch/x86/mm/gup.c
index 0d4fb3ebbbac..1680768d392c 100644
--- a/arch/x86/mm/gup.c
+++ b/arch/x86/mm/gup.c
@@ -120,6 +120,11 @@ static noinline int gup_pte_range(pmd_t pmd, unsigned long addr,
return 0;
}
+ if (!pte_allows_gup(pte_val(pte), write)) {
+ pte_unmap(ptep);
+ return 0;
+ }
+
if (pte_devmap(pte)) {
pgmap = get_dev_pagemap(pte_pfn(pte), pgmap);
if (unlikely(!pgmap)) {
@@ -127,8 +132,7 @@ static noinline int gup_pte_range(pmd_t pmd, unsigned long addr,
pte_unmap(ptep);
return 0;
}
- } else if (!pte_allows_gup(pte_val(pte), write) ||
- pte_special(pte)) {
+ } else if (pte_special(pte)) {
pte_unmap(ptep);
return 0;
}
* [PATCH 2/2] x86, mm: unify exit paths in gup_pte_range()
From: Dan Williams @ 2017-02-25 17:08 UTC (permalink / raw)
To: akpm
Cc: x86, Dave Hansen, linux-kernel, linux-mm, Ingo Molnar,
H. Peter Anvin, Thomas Gleixner, torvalds, Ross Zwisler
All exit paths from gup_pte_range() require pte_unmap() of the original
pte page before returning. Refactor the code to have a single exit point
to do the unmap.
This mirrors the flow of the generic gup_pte_range() in mm/gup.c.
Cc: <x86@kernel.org>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: Ross Zwisler <ross.zwisler@linux.intel.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
---
arch/x86/mm/gup.c | 39 ++++++++++++++++++++-------------------
1 file changed, 20 insertions(+), 19 deletions(-)
diff --git a/arch/x86/mm/gup.c b/arch/x86/mm/gup.c
index 1680768d392c..e703f09c1d78 100644
--- a/arch/x86/mm/gup.c
+++ b/arch/x86/mm/gup.c
@@ -106,36 +106,35 @@ static noinline int gup_pte_range(pmd_t pmd, unsigned long addr,
unsigned long end, int write, struct page **pages, int *nr)
{
struct dev_pagemap *pgmap = NULL;
- int nr_start = *nr;
- pte_t *ptep;
+ int nr_start = *nr, ret = 0;
+ pte_t *ptep, *ptem;
- ptep = pte_offset_map(&pmd, addr);
+ /*
+ * Keep the original mapped PTE value (ptem) around since we
+ * might increment ptep off the end of the page when finishing
+ * our loop iteration.
+ */
+ ptem = ptep = pte_offset_map(&pmd, addr);
do {
pte_t pte = gup_get_pte(ptep);
struct page *page;
/* Similar to the PMD case, NUMA hinting must take slow path */
- if (pte_protnone(pte)) {
- pte_unmap(ptep);
- return 0;
- }
+ if (pte_protnone(pte))
+ break;
- if (!pte_allows_gup(pte_val(pte), write)) {
- pte_unmap(ptep);
- return 0;
- }
+ if (!pte_allows_gup(pte_val(pte), write))
+ break;
if (pte_devmap(pte)) {
pgmap = get_dev_pagemap(pte_pfn(pte), pgmap);
if (unlikely(!pgmap)) {
undo_dev_pagemap(nr, nr_start, pages);
- pte_unmap(ptep);
- return 0;
+ break;
}
- } else if (pte_special(pte)) {
- pte_unmap(ptep);
- return 0;
- }
+ } else if (pte_special(pte))
+ break;
+
VM_BUG_ON(!pfn_valid(pte_pfn(pte)));
page = pte_page(pte);
get_page(page);
@@ -145,9 +144,11 @@ static noinline int gup_pte_range(pmd_t pmd, unsigned long addr,
(*nr)++;
} while (ptep++, addr += PAGE_SIZE, addr != end);
- pte_unmap(ptep - 1);
+ if (addr == end)
+ ret = 1;
+ pte_unmap(ptem);
- return 1;
+ return ret;
}
static inline void get_head_page_multiple(struct page *page, int nr)
* Re: [PATCH 0/2] fix for direct-I/O to DAX mappings
From: Linus Torvalds @ 2017-02-28 17:10 UTC (permalink / raw)
To: Dan Williams
Cc: Andrew Morton, the arch/x86 maintainers, Xiong Zhou, Dave Hansen,
Linux Kernel Mailing List, stable, linux-mm, Ingo Molnar,
H. Peter Anvin, Thomas Gleixner, Ross Zwisler
On Sat, Feb 25, 2017 at 9:08 AM, Dan Williams <dan.j.williams@intel.com> wrote:
>
> I'm sending this through the -mm tree for a double-check from memory
> management folks. It has a build success notification from the kbuild
> robot.
I'm just checking that this isn't lost - I didn't get it in the latest
patch-bomb from Andrew.
I'm assuming it's still percolating through your system, Andrew, but
if not, holler.
Linus
* Re: [PATCH 0/2] fix for direct-I/O to DAX mappings
From: Andrew Morton @ 2017-02-28 22:09 UTC (permalink / raw)
To: Linus Torvalds
Cc: Dan Williams, the arch/x86 maintainers, Xiong Zhou, Dave Hansen,
Linux Kernel Mailing List, stable, linux-mm, Ingo Molnar,
H. Peter Anvin, Thomas Gleixner, Ross Zwisler
On Tue, 28 Feb 2017 09:10:39 -0800 Linus Torvalds <torvalds@linux-foundation.org> wrote:
> On Sat, Feb 25, 2017 at 9:08 AM, Dan Williams <dan.j.williams@intel.com> wrote:
> >
> > I'm sending this through the -mm tree for a double-check from memory
> > management folks. It has a build success notification from the kbuild
> > robot.
>
> I'm just checking that this isn't lost - I didn't get it in the latest
> patch-bomb from Andrew.
>
> I'm assuming it's still percolating through your system, Andrew, but
> if not, holler.
>
Yup, I've got them.
* Re: [PATCH 0/2] fix for direct-I/O to DAX mappings
From: Xiong Zhou @ 2017-03-02 13:45 UTC (permalink / raw)
To: Dan Williams
Cc: akpm, x86, Xiong Zhou, Dave Hansen, linux-kernel, stable,
linux-mm, Ingo Molnar, H. Peter Anvin, Thomas Gleixner, torvalds,
Ross Zwisler
On Sat, Feb 25, 2017 at 09:08:28AM -0800, Dan Williams wrote:
> Hi Andrew,
>
> While Ross was doing a review of a new mmap+DAX direct-I/O test case for
> xfstests, from Xiong, he noticed occasions where it failed to trigger a
> page dirty event. Dave then spotted the problem fixed by patch1. The
> pte_devmap() check is precluding pte_allows_gup(), i.e. bypassing
> permission checks and dirty tracking.
This mmap-dax-dio case still fails with this patchset applied, but the
failure now makes sense: it's the test case that needs to be fixed.
BTW, this patchset also fixes another issue I hit now and then: in
xfs/301, with or without DAX, only on nvdimms, xfsrestore never
returns, though it remains killable.
Thanks,
>
> Patch2 is a cleanup and clarifies that pte_unmap() only needs to be done
> once per page-worth of ptes. It unifies the exit paths similar to the
> generic gup_pte_range() in the __HAVE_ARCH_PTE_SPECIAL case.
>
> I'm sending this through the -mm tree for a double-check from memory
> management folks. It has a build success notification from the kbuild
> robot.
>
> ---
>
> Dan Williams (2):
> x86, mm: fix gup_pte_range() vs DAX mappings
> x86, mm: unify exit paths in gup_pte_range()
>
>
> arch/x86/mm/gup.c | 37 +++++++++++++++++++++----------------
> 1 file changed, 21 insertions(+), 16 deletions(-)