* [PATCH] mm, proc: Make the task_mmu walk_page_range() limit in clear_refs_write() obvious
@ 2016-08-31 15:03 James Morse
2016-09-01 0:13 ` Naoya Horiguchi
0 siblings, 1 reply; 2+ messages in thread
From: James Morse @ 2016-08-31 15:03 UTC (permalink / raw)
To: linux-mm; +Cc: Andrew Morton, James Morse, Naoya Horiguchi
Trying to walk all of virtual memory requires architecture specific
knowledge. On x86_64, addresses must be sign extended from bit 48,
whereas on arm64 the top VA_BITS of address space have their own set
of page tables.
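For illustration, the x86_64 rule can be sketched in user space: an
address is only "canonical" if bits 63..47 all match, i.e. it is
sign-extended from bit 47 (for 48-bit virtual addressing). The
is_canonical() helper below is hypothetical, not kernel code:

```c
#include <stdint.h>

/* Hypothetical sketch, assuming 48-bit x86_64 virtual addresses:
 * a canonical address has bits 63..47 all equal, i.e. it equals
 * itself sign-extended from bit 47. Walking 0 to ~0UL therefore
 * crosses a large non-canonical hole in the middle of the range. */
static int is_canonical(uint64_t addr)
{
	/* Shift out the top 16 bits, then arithmetic-shift back to
	 * sign-extend from bit 47; canonical addresses are unchanged. */
	int64_t sext = (int64_t)(addr << 16) >> 16;

	return (uint64_t)sext == addr;
}
```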
clear_refs_write() calls walk_page_range() on the range 0 to ~0UL; it
provides a test_walk() callback that only expects to be walking over
VMAs. Currently walk_pmd_range() will skip memory regions that don't
have a VMA, reporting them as a hole.
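The effect of that behaviour can be sketched in user space: only the
parts of the requested range covered by a VMA are actually walked, and
everything else is a hole. The struct and walked_bytes() helper below
are illustrative only, not the kernel's mm_walk API:

```c
#include <stddef.h>
#include <stdint.h>

/* Hypothetical sketch: given a set of non-overlapping VMA ranges,
 * count how many bytes of [start, end) fall inside a VMA (and so
 * would actually be walked); the remainder is reported as holes. */
struct vma {
	uint64_t start, end;	/* [start, end) */
};

static uint64_t walked_bytes(const struct vma *vmas, size_t n,
			     uint64_t start, uint64_t end)
{
	uint64_t total = 0;
	size_t i;

	for (i = 0; i < n; i++) {
		/* Clamp the VMA to the requested range. */
		uint64_t lo = vmas[i].start > start ? vmas[i].start : start;
		uint64_t hi = vmas[i].end < end ? vmas[i].end : end;

		if (lo < hi)
			total += hi - lo;	/* covered by a VMA: walked */
	}
	return total;
}
```

Capping the range at highest_vm_end cannot change this count; it only
avoids iterating over the guaranteed hole above the last VMA.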
As this call only expects to walk user address space, make it walk
0 to 'highest_vm_end'.
Signed-off-by: James Morse <james.morse@arm.com>
Cc: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
---
This is in preparation for a RFC series that allows walk_page_range() to
walk kernel page tables too.
fs/proc/task_mmu.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/fs/proc/task_mmu.c b/fs/proc/task_mmu.c
index 187d84ef9de9..1026b7862896 100644
--- a/fs/proc/task_mmu.c
+++ b/fs/proc/task_mmu.c
@@ -1068,7 +1068,7 @@ static ssize_t clear_refs_write(struct file *file, const char __user *buf,
}
mmu_notifier_invalidate_range_start(mm, 0, -1);
}
- walk_page_range(0, ~0UL, &clear_refs_walk);
+ walk_page_range(0, mm->highest_vm_end, &clear_refs_walk);
if (type == CLEAR_REFS_SOFT_DIRTY)
mmu_notifier_invalidate_range_end(mm, 0, -1);
flush_tlb_mm(mm);
--
2.8.0.rc3
--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org. For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: email@kvack.org
* Re: [PATCH] mm, proc: Make the task_mmu walk_page_range() limit in clear_refs_write() obvious
2016-08-31 15:03 [PATCH] mm, proc: Make the task_mmu walk_page_range() limit in clear_refs_write() obvious James Morse
@ 2016-09-01 0:13 ` Naoya Horiguchi
0 siblings, 0 replies; 2+ messages in thread
From: Naoya Horiguchi @ 2016-09-01 0:13 UTC (permalink / raw)
To: James Morse; +Cc: linux-mm, Andrew Morton
On Wed, Aug 31, 2016 at 04:03:12PM +0100, James Morse wrote:
> Trying to walk all of virtual memory requires architecture specific
> knowledge. On x86_64, addresses must be sign extended from bit 48,
> whereas on arm64 the top VA_BITS of address space have their own set
> of page tables.
>
> clear_refs_write() calls walk_page_range() on the range 0 to ~0UL; it
> provides a test_walk() callback that only expects to be walking over
> VMAs. Currently walk_pmd_range() will skip memory regions that don't
> have a VMA, reporting them as a hole.
>
> As this call only expects to walk user address space, make it walk
> 0 to 'highest_vm_end'.
>
> Signed-off-by: James Morse <james.morse@arm.com>
> Cc: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
Makes sense to me.
Acked-by: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>