linux-kernel.vger.kernel.org archive mirror
* [PATCH 0/2] change_protection(): Count the number of pages affected
@ 2012-11-14  8:50 Ingo Molnar
  2012-11-14  8:50 ` [PATCH 1/2] sched, numa, mm: Count WS scanning against present PTEs, not virtual memory ranges Ingo Molnar
                   ` (2 more replies)
  0 siblings, 3 replies; 10+ messages in thread
From: Ingo Molnar @ 2012-11-14  8:50 UTC (permalink / raw)
  To: linux-kernel, linux-mm
  Cc: Paul Turner, Lee Schermerhorn, Christoph Lameter, Rik van Riel,
	Mel Gorman, Andrew Morton, Andrea Arcangeli, Linus Torvalds,
	Peter Zijlstra, Thomas Gleixner, Hugh Dickins

What do you guys think about this mprotect() optimization?

Thanks,

	Ingo

--
Ingo Molnar (1):
  mm: Optimize the TLB flush of sys_mprotect() and change_protection()
    users

Peter Zijlstra (1):
  sched, numa, mm: Count WS scanning against present PTEs, not virtual
    memory ranges

 include/linux/hugetlb.h |  8 ++++++--
 include/linux/mm.h      |  6 +++---
 kernel/sched/fair.c     | 37 +++++++++++++++++++++----------------
 mm/hugetlb.c            | 10 ++++++++--
 mm/mprotect.c           | 46 ++++++++++++++++++++++++++++++++++------------
 5 files changed, 72 insertions(+), 35 deletions(-)

-- 
1.7.11.7



* [PATCH 1/2] sched, numa, mm: Count WS scanning against present PTEs, not virtual memory ranges
  2012-11-14  8:50 [PATCH 0/2] change_protection(): Count the number of pages affected Ingo Molnar
@ 2012-11-14  8:50 ` Ingo Molnar
  2012-11-14 18:37   ` Rik van Riel
  2012-11-14  8:50 ` [PATCH 2/2] mm: Optimize the TLB flush of sys_mprotect() and change_protection() users Ingo Molnar
  2012-11-14 18:01 ` [PATCH 0/2] change_protection(): Count the number of pages affected Linus Torvalds
  2 siblings, 1 reply; 10+ messages in thread
From: Ingo Molnar @ 2012-11-14  8:50 UTC (permalink / raw)
  To: linux-kernel, linux-mm
  Cc: Paul Turner, Lee Schermerhorn, Christoph Lameter, Rik van Riel,
	Mel Gorman, Andrew Morton, Andrea Arcangeli, Linus Torvalds,
	Peter Zijlstra, Thomas Gleixner, Hugh Dickins

From: Peter Zijlstra <a.p.zijlstra@chello.nl>

By accounting against the present PTEs, scanning speed reflects the
actual present (mapped) memory.

For this we modify mm/mprotect.c::change_protection() to return the
number of ptes modified. (No change in functionality.)

Suggested-by: Ingo Molnar <mingo@kernel.org>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: Rik van Riel <riel@redhat.com>
Cc: Mel Gorman <mgorman@suse.de>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
---
 include/linux/hugetlb.h |  8 ++++++--
 include/linux/mm.h      |  6 +++---
 kernel/sched/fair.c     | 37 +++++++++++++++++++++----------------
 mm/hugetlb.c            | 10 ++++++++--
 mm/mprotect.c           | 41 ++++++++++++++++++++++++++++++-----------
 5 files changed, 68 insertions(+), 34 deletions(-)

diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
index 2251648..06e691b 100644
--- a/include/linux/hugetlb.h
+++ b/include/linux/hugetlb.h
@@ -87,7 +87,7 @@ struct page *follow_huge_pud(struct mm_struct *mm, unsigned long address,
 				pud_t *pud, int write);
 int pmd_huge(pmd_t pmd);
 int pud_huge(pud_t pmd);
-void hugetlb_change_protection(struct vm_area_struct *vma,
+unsigned long hugetlb_change_protection(struct vm_area_struct *vma,
 		unsigned long address, unsigned long end, pgprot_t newprot);
 
 #else /* !CONFIG_HUGETLB_PAGE */
@@ -132,7 +132,11 @@ static inline void copy_huge_page(struct page *dst, struct page *src)
 {
 }
 
-#define hugetlb_change_protection(vma, address, end, newprot)
+static inline unsigned long hugetlb_change_protection(struct vm_area_struct *vma,
+		unsigned long address, unsigned long end, pgprot_t newprot)
+{
+	return 0;
+}
 
 static inline void __unmap_hugepage_range_final(struct mmu_gather *tlb,
 			struct vm_area_struct *vma, unsigned long start,
diff --git a/include/linux/mm.h b/include/linux/mm.h
index 141a28f..e6df281 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -1099,7 +1099,7 @@ extern unsigned long move_page_tables(struct vm_area_struct *vma,
 extern unsigned long do_mremap(unsigned long addr,
 			       unsigned long old_len, unsigned long new_len,
 			       unsigned long flags, unsigned long new_addr);
-extern void change_protection(struct vm_area_struct *vma, unsigned long start,
+extern unsigned long change_protection(struct vm_area_struct *vma, unsigned long start,
 			      unsigned long end, pgprot_t newprot,
 			      int dirty_accountable);
 extern int mprotect_fixup(struct vm_area_struct *vma,
@@ -1581,10 +1581,10 @@ static inline pgprot_t vma_prot_none(struct vm_area_struct *vma)
 	return pgprot_modify(vma->vm_page_prot, vm_get_page_prot(vmflags));
 }
 
-static inline void
+static inline unsigned long
 change_prot_none(struct vm_area_struct *vma, unsigned long start, unsigned long end)
 {
-	change_protection(vma, start, end, vma_prot_none(vma), 0);
+	return change_protection(vma, start, end, vma_prot_none(vma), 0);
 }
 
 struct vm_area_struct *find_extend_vma(struct mm_struct *, unsigned long addr);
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 4ed0ab1..d4d708e 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -915,8 +915,8 @@ void task_numa_work(struct callback_head *work)
 	struct task_struct *p = current;
 	struct mm_struct *mm = p->mm;
 	struct vm_area_struct *vma;
-	unsigned long offset, end;
-	long length;
+	unsigned long start, end;
+	long pages;
 
 	WARN_ON_ONCE(p != container_of(work, struct task_struct, numa_work));
 
@@ -945,30 +945,35 @@ void task_numa_work(struct callback_head *work)
 
 	current->numa_scan_period += jiffies_to_msecs(2);
 
-	offset = mm->numa_scan_offset;
-	length = sysctl_sched_numa_scan_size;
-	length <<= 20;
+	start = mm->numa_scan_offset;
+	pages = sysctl_sched_numa_scan_size;
+	pages <<= 20 - PAGE_SHIFT; /* MB in pages */
+	if (!pages)
+		return;
 
 	down_write(&mm->mmap_sem);
-	vma = find_vma(mm, offset);
+	vma = find_vma(mm, start);
 	if (!vma) {
 		ACCESS_ONCE(mm->numa_scan_seq)++;
-		offset = 0;
+		start = 0;
 		vma = mm->mmap;
 	}
-	for (; vma && length > 0; vma = vma->vm_next) {
+	for (; vma; vma = vma->vm_next) {
 		if (!vma_migratable(vma))
 			continue;
 
-		offset = max(offset, vma->vm_start);
-		end = min(ALIGN(offset + length, HPAGE_SIZE), vma->vm_end);
-		length -= end - offset;
-
-		change_prot_none(vma, offset, end);
-
-		offset = end;
+		do {
+			start = max(start, vma->vm_start);
+			end = ALIGN(start + (pages << PAGE_SHIFT), HPAGE_SIZE);
+			end = min(end, vma->vm_end);
+			pages -= change_prot_none(vma, start, end);
+			start = end;
+			if (pages <= 0)
+				goto out;
+		} while (end != vma->vm_end);
 	}
-	mm->numa_scan_offset = offset;
+out:
+	mm->numa_scan_offset = start;
 	up_write(&mm->mmap_sem);
 }
 
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 59a0059..712895e 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -3014,7 +3014,7 @@ same_page:
 	return i ? i : -EFAULT;
 }
 
-void hugetlb_change_protection(struct vm_area_struct *vma,
+unsigned long hugetlb_change_protection(struct vm_area_struct *vma,
 		unsigned long address, unsigned long end, pgprot_t newprot)
 {
 	struct mm_struct *mm = vma->vm_mm;
@@ -3022,6 +3022,7 @@ void hugetlb_change_protection(struct vm_area_struct *vma,
 	pte_t *ptep;
 	pte_t pte;
 	struct hstate *h = hstate_vma(vma);
+	unsigned long pages = 0;
 
 	BUG_ON(address >= end);
 	flush_cache_range(vma, address, end);
@@ -3032,12 +3033,15 @@ void hugetlb_change_protection(struct vm_area_struct *vma,
 		ptep = huge_pte_offset(mm, address);
 		if (!ptep)
 			continue;
-		if (huge_pmd_unshare(mm, &address, ptep))
+		if (huge_pmd_unshare(mm, &address, ptep)) {
+			pages++;
 			continue;
+		}
 		if (!huge_pte_none(huge_ptep_get(ptep))) {
 			pte = huge_ptep_get_and_clear(mm, address, ptep);
 			pte = pte_mkhuge(pte_modify(pte, newprot));
 			set_huge_pte_at(mm, address, ptep, pte);
+			pages++;
 		}
 	}
 	spin_unlock(&mm->page_table_lock);
@@ -3049,6 +3053,8 @@ void hugetlb_change_protection(struct vm_area_struct *vma,
 	 */
 	flush_tlb_range(vma, start, end);
 	mutex_unlock(&vma->vm_file->f_mapping->i_mmap_mutex);
+
+	return pages << h->order;
 }
 
 int hugetlb_reserve_pages(struct inode *inode,
diff --git a/mm/mprotect.c b/mm/mprotect.c
index 392b124..ce0377b 100644
--- a/mm/mprotect.c
+++ b/mm/mprotect.c
@@ -28,12 +28,13 @@
 #include <asm/cacheflush.h>
 #include <asm/tlbflush.h>
 
-static void change_pte_range(struct mm_struct *mm, pmd_t *pmd,
+static unsigned long change_pte_range(struct mm_struct *mm, pmd_t *pmd,
 		unsigned long addr, unsigned long end, pgprot_t newprot,
 		int dirty_accountable)
 {
 	pte_t *pte, oldpte;
 	spinlock_t *ptl;
+	unsigned long pages = 0;
 
 	pte = pte_offset_map_lock(mm, pmd, addr, &ptl);
 	arch_enter_lazy_mmu_mode();
@@ -53,6 +54,7 @@ static void change_pte_range(struct mm_struct *mm, pmd_t *pmd,
 				ptent = pte_mkwrite(ptent);
 
 			ptep_modify_prot_commit(mm, addr, pte, ptent);
+			pages++;
 		} else if (IS_ENABLED(CONFIG_MIGRATION) && !pte_file(oldpte)) {
 			swp_entry_t entry = pte_to_swp_entry(oldpte);
 
@@ -65,18 +67,22 @@ static void change_pte_range(struct mm_struct *mm, pmd_t *pmd,
 				set_pte_at(mm, addr, pte,
 					swp_entry_to_pte(entry));
 			}
+			pages++;
 		}
 	} while (pte++, addr += PAGE_SIZE, addr != end);
 	arch_leave_lazy_mmu_mode();
 	pte_unmap_unlock(pte - 1, ptl);
+
+	return pages;
 }
 
-static inline void change_pmd_range(struct vm_area_struct *vma, pud_t *pud,
+static inline unsigned long change_pmd_range(struct vm_area_struct *vma, pud_t *pud,
 		unsigned long addr, unsigned long end, pgprot_t newprot,
 		int dirty_accountable)
 {
 	pmd_t *pmd;
 	unsigned long next;
+	unsigned long pages = 0;
 
 	pmd = pmd_offset(pud, addr);
 	do {
@@ -84,35 +90,42 @@ static inline void change_pmd_range(struct vm_area_struct *vma, pud_t *pud,
 		if (pmd_trans_huge(*pmd)) {
 			if (next - addr != HPAGE_PMD_SIZE)
 				split_huge_page_pmd(vma->vm_mm, pmd);
-			else if (change_huge_pmd(vma, pmd, addr, newprot))
+			else if (change_huge_pmd(vma, pmd, addr, newprot)) {
+				pages += HPAGE_PMD_NR;
 				continue;
+			}
 			/* fall through */
 		}
 		if (pmd_none_or_clear_bad(pmd))
 			continue;
-		change_pte_range(vma->vm_mm, pmd, addr, next, newprot,
+		pages += change_pte_range(vma->vm_mm, pmd, addr, next, newprot,
 				 dirty_accountable);
 	} while (pmd++, addr = next, addr != end);
+
+	return pages;
 }
 
-static inline void change_pud_range(struct vm_area_struct *vma, pgd_t *pgd,
+static inline unsigned long change_pud_range(struct vm_area_struct *vma, pgd_t *pgd,
 		unsigned long addr, unsigned long end, pgprot_t newprot,
 		int dirty_accountable)
 {
 	pud_t *pud;
 	unsigned long next;
+	unsigned long pages = 0;
 
 	pud = pud_offset(pgd, addr);
 	do {
 		next = pud_addr_end(addr, end);
 		if (pud_none_or_clear_bad(pud))
 			continue;
-		change_pmd_range(vma, pud, addr, next, newprot,
+		pages += change_pmd_range(vma, pud, addr, next, newprot,
 				 dirty_accountable);
 	} while (pud++, addr = next, addr != end);
+
+	return pages;
 }
 
-static void change_protection_range(struct vm_area_struct *vma,
+static unsigned long change_protection_range(struct vm_area_struct *vma,
 		unsigned long addr, unsigned long end, pgprot_t newprot,
 		int dirty_accountable)
 {
@@ -120,6 +133,7 @@ static void change_protection_range(struct vm_area_struct *vma,
 	pgd_t *pgd;
 	unsigned long next;
 	unsigned long start = addr;
+	unsigned long pages = 0;
 
 	BUG_ON(addr >= end);
 	pgd = pgd_offset(mm, addr);
@@ -128,24 +142,29 @@ static void change_protection_range(struct vm_area_struct *vma,
 		next = pgd_addr_end(addr, end);
 		if (pgd_none_or_clear_bad(pgd))
 			continue;
-		change_pud_range(vma, pgd, addr, next, newprot,
+		pages += change_pud_range(vma, pgd, addr, next, newprot,
 				 dirty_accountable);
 	} while (pgd++, addr = next, addr != end);
 	flush_tlb_range(vma, start, end);
+
+	return pages;
 }
 
-void change_protection(struct vm_area_struct *vma, unsigned long start,
+unsigned long change_protection(struct vm_area_struct *vma, unsigned long start,
 		       unsigned long end, pgprot_t newprot,
 		       int dirty_accountable)
 {
 	struct mm_struct *mm = vma->vm_mm;
+	unsigned long pages;
 
 	mmu_notifier_invalidate_range_start(mm, start, end);
 	if (is_vm_hugetlb_page(vma))
-		hugetlb_change_protection(vma, start, end, newprot);
+		pages = hugetlb_change_protection(vma, start, end, newprot);
 	else
-		change_protection_range(vma, start, end, newprot, dirty_accountable);
+		pages = change_protection_range(vma, start, end, newprot, dirty_accountable);
 	mmu_notifier_invalidate_range_end(mm, start, end);
+
+	return pages;
 }
 
 int
-- 
1.7.11.7



* [PATCH 2/2] mm: Optimize the TLB flush of sys_mprotect() and change_protection() users
  2012-11-14  8:50 [PATCH 0/2] change_protection(): Count the number of pages affected Ingo Molnar
  2012-11-14  8:50 ` [PATCH 1/2] sched, numa, mm: Count WS scanning against present PTEs, not virtual memory ranges Ingo Molnar
@ 2012-11-14  8:50 ` Ingo Molnar
  2012-11-14 18:39   ` Rik van Riel
  2012-11-14 18:01 ` [PATCH 0/2] change_protection(): Count the number of pages affected Linus Torvalds
  2 siblings, 1 reply; 10+ messages in thread
From: Ingo Molnar @ 2012-11-14  8:50 UTC (permalink / raw)
  To: linux-kernel, linux-mm
  Cc: Paul Turner, Lee Schermerhorn, Christoph Lameter, Rik van Riel,
	Mel Gorman, Andrew Morton, Andrea Arcangeli, Linus Torvalds,
	Peter Zijlstra, Thomas Gleixner, Hugh Dickins

Reuse the NUMA code's 'modified page protections' count that
change_protection() computes and skip the TLB flush if there are
no changes to a range that sys_mprotect() modifies.

Given that mprotect() already optimizes the same-flags case
I expected this optimization to dominantly trigger on
CONFIG_NUMA_BALANCING=y kernels - but even with that feature
disabled it triggers rather often.

There are two reasons for that:

1)

sys_mprotect() already optimizes the same-flags case:

        if (newflags == oldflags) {
                *pprev = vma;
                return 0;
        }

This test works in many cases, but it is too sharp in some others:
it differentiates between protection values that the underlying PTE
format makes no distinction about, such as
PROT_EXEC == PROT_READ on x86.

2)

Even where the PTE format does change with the vma flags, and thus
necessitates a modification of the pagetables, there might be no
pagetables yet to modify: they might not have been instantiated yet.

During a regular desktop bootup this optimization hits a couple
of hundred times. During a Java test I measured thousands of
hits.

So this optimization improves sys_mprotect() in general, not just
CONFIG_NUMA_BALANCING=y kernels.

[ We could further increase the efficiency of this optimization if
  change_pte_range() and change_huge_pmd() were a bit smarter about
  recognizing exact-same-value protection masks - when the hardware
  can do that safely. This would probably further speed up mprotect(). ]
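
[ Illustration only, not part of this patch - the kind of check meant
  above, assuming the architecture allows comparing raw PTE values
  for this purpose:

	static inline bool pte_prot_unchanged(pte_t oldpte, pgprot_t newprot)
	{
		return pte_val(pte_modify(oldpte, newprot)) == pte_val(oldpte);
	}
]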

Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: Rik van Riel <riel@redhat.com>
Cc: Mel Gorman <mgorman@suse.de>
Cc: Hugh Dickins <hughd@google.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
---
 mm/mprotect.c | 5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/mm/mprotect.c b/mm/mprotect.c
index ce0377b..6ff2d5e 100644
--- a/mm/mprotect.c
+++ b/mm/mprotect.c
@@ -145,7 +145,10 @@ static unsigned long change_protection_range(struct vm_area_struct *vma,
 		pages += change_pud_range(vma, pgd, addr, next, newprot,
 				 dirty_accountable);
 	} while (pgd++, addr = next, addr != end);
-	flush_tlb_range(vma, start, end);
+
+	/* Only flush the TLB if we actually modified any entries: */
+	if (pages)
+		flush_tlb_range(vma, start, end);
 
 	return pages;
 }
-- 
1.7.11.7



* Re: [PATCH 0/2] change_protection(): Count the number of pages affected
  2012-11-14  8:50 [PATCH 0/2] change_protection(): Count the number of pages affected Ingo Molnar
  2012-11-14  8:50 ` [PATCH 1/2] sched, numa, mm: Count WS scanning against present PTEs, not virtual memory ranges Ingo Molnar
  2012-11-14  8:50 ` [PATCH 2/2] mm: Optimize the TLB flush of sys_mprotect() and change_protection() users Ingo Molnar
@ 2012-11-14 18:01 ` Linus Torvalds
  2012-11-14 18:43   ` Rik van Riel
  2012-11-16 18:40   ` Ingo Molnar
  2 siblings, 2 replies; 10+ messages in thread
From: Linus Torvalds @ 2012-11-14 18:01 UTC (permalink / raw)
  To: Ingo Molnar
  Cc: Linux Kernel Mailing List, linux-mm, Paul Turner,
	Lee Schermerhorn, Christoph Lameter, Rik van Riel, Mel Gorman,
	Andrew Morton, Andrea Arcangeli, Peter Zijlstra, Thomas Gleixner,
	Hugh Dickins

On Wed, Nov 14, 2012 at 12:50 AM, Ingo Molnar <mingo@kernel.org> wrote:
> What do you guys think about this mprotect() optimization?

Hmm..

If this is mainly about just avoiding the TLB flushing, I do wonder if
it might not be more interesting to try to be much more aggressive.

As noted elsewhere, we should just notice when vm_page_prot doesn't
change at all - even if 'flags' change, it is possible that the actual
low-level page protection bits do not (due to the X=R issue).

But even *more* aggressively, how about looking at

 - not flushing the TLB at all if the bits become  more permissive
(taking the TLB micro-fault and letting the CPU just update it on its
own)

 - even *more* aggressive: if the bits become strictly more
restrictive, how about not flushing the TLB at all, *and* not even
changing the page tables, and just teaching the page fault code to do
it lazily at fault time?

Now, the "change protections lazily" might actually be a huge
performance problem with the page fault overhead dwarfing any TLB
flush costs, but we don't really know, do we? It might be worth trying
out.
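
Just to make the first variant concrete - a rough sketch, where
prot_only_widens() is made up (it would have to be a per-arch test
that the old->new transition only adds permissions) and oldprot would
have to be passed down into change_protection_range():

	/*
	 * If every modified entry only gained permissions, any stale
	 * TLB entry is merely too strict - let the CPU refault and
	 * refill it instead of doing a range flush.
	 */
	if (pages && !prot_only_widens(oldprot, newprot))
		flush_tlb_range(vma, start, end);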

               Linus


* Re: [PATCH 1/2] sched, numa, mm: Count WS scanning against present PTEs, not virtual memory ranges
  2012-11-14  8:50 ` [PATCH 1/2] sched, numa, mm: Count WS scanning against present PTEs, not virtual memory ranges Ingo Molnar
@ 2012-11-14 18:37   ` Rik van Riel
  0 siblings, 0 replies; 10+ messages in thread
From: Rik van Riel @ 2012-11-14 18:37 UTC (permalink / raw)
  To: Ingo Molnar
  Cc: linux-kernel, linux-mm, Paul Turner, Lee Schermerhorn,
	Christoph Lameter, Mel Gorman, Andrew Morton, Andrea Arcangeli,
	Linus Torvalds, Peter Zijlstra, Thomas Gleixner, Hugh Dickins

On 11/14/2012 03:50 AM, Ingo Molnar wrote:
> From: Peter Zijlstra <a.p.zijlstra@chello.nl>
>
> By accounting against the present PTEs, scanning speed reflects the
> actual present (mapped) memory.
>
> For this we modify mm/mprotect.c::change_protection() to return the
> number of ptes modified. (No change in functionality.)

We need to figure out what we actually want here.

Do we want to mark 256MB as non-present, or do we want to leave
behind 256MB of non-present (NUMA) memory? :)

-- 
All rights reversed


* Re: [PATCH 2/2] mm: Optimize the TLB flush of sys_mprotect() and change_protection() users
  2012-11-14  8:50 ` [PATCH 2/2] mm: Optimize the TLB flush of sys_mprotect() and change_protection() users Ingo Molnar
@ 2012-11-14 18:39   ` Rik van Riel
  0 siblings, 0 replies; 10+ messages in thread
From: Rik van Riel @ 2012-11-14 18:39 UTC (permalink / raw)
  To: Ingo Molnar
  Cc: linux-kernel, linux-mm, Paul Turner, Lee Schermerhorn,
	Christoph Lameter, Mel Gorman, Andrew Morton, Andrea Arcangeli,
	Linus Torvalds, Peter Zijlstra, Thomas Gleixner, Hugh Dickins

On 11/14/2012 03:50 AM, Ingo Molnar wrote:
> Reuse the NUMA code's 'modified page protections' count that
> change_protection() computes and skip the TLB flush if there are
> no changes to a range that sys_mprotect() modifies.
>
> Given that mprotect() already optimizes the same-flags case
> I expected this optimization to dominantly trigger on
> CONFIG_NUMA_BALANCING=y kernels - but even with that feature
> disabled it triggers rather often.
>
> There are two reasons for that:
>
> 1)
>
> sys_mprotect() already optimizes the same-flags case:
>
>          if (newflags == oldflags) {
>                  *pprev = vma;
>                  return 0;
>          }
>
> This test works in many cases, but it is too sharp in some others:
> it differentiates between protection values that the underlying PTE
> format makes no distinction about, such as
> PROT_EXEC == PROT_READ on x86.
>
> 2)
>
> Even where the PTE format does change with the vma flags, and thus
> necessitates a modification of the pagetables, there might be no
> pagetables yet to modify: they might not have been instantiated yet.
>
> During a regular desktop bootup this optimization hits a couple
> of hundred times. During a Java test I measured thousands of
> hits.
>
> So this optimization improves sys_mprotect() in general, not just
> CONFIG_NUMA_BALANCING=y kernels.
>
> [ We could further increase the efficiency of this optimization if
>    change_pte_range() and change_huge_pmd() were a bit smarter about
>    recognizing exact-same-value protection masks - when the hardware
>    can do that safely. This would probably further speed up mprotect(). ]
>
> Cc: Linus Torvalds <torvalds@linux-foundation.org>
> Cc: Andrew Morton <akpm@linux-foundation.org>
> Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
> Cc: Andrea Arcangeli <aarcange@redhat.com>
> Cc: Rik van Riel <riel@redhat.com>
> Cc: Mel Gorman <mgorman@suse.de>
> Cc: Hugh Dickins <hughd@google.com>
> Cc: Thomas Gleixner <tglx@linutronix.de>
> Signed-off-by: Ingo Molnar <mingo@kernel.org>
> ---
>   mm/mprotect.c | 5 ++++-
>   1 file changed, 4 insertions(+), 1 deletion(-)
>
> diff --git a/mm/mprotect.c b/mm/mprotect.c
> index ce0377b..6ff2d5e 100644
> --- a/mm/mprotect.c
> +++ b/mm/mprotect.c
> @@ -145,7 +145,10 @@ static unsigned long change_protection_range(struct vm_area_struct *vma,
>   		pages += change_pud_range(vma, pgd, addr, next, newprot,
>   				 dirty_accountable);
>   	} while (pgd++, addr = next, addr != end);
> -	flush_tlb_range(vma, start, end);
> +
> +	/* Only flush the TLB if we actually modified any entries: */
> +	if (pages)
> +		flush_tlb_range(vma, start, end);
>
>   	return pages;
>   }

Ahh, this explains why the previous patch does what it does.

Would be nice to have that explained in the changelog for that patch,
too :)

-- 
All rights reversed


* Re: [PATCH 0/2] change_protection(): Count the number of pages affected
  2012-11-14 18:01 ` [PATCH 0/2] change_protection(): Count the number of pages affected Linus Torvalds
@ 2012-11-14 18:43   ` Rik van Riel
  2012-11-14 20:52     ` Linus Torvalds
  2012-11-16 18:40   ` Ingo Molnar
  1 sibling, 1 reply; 10+ messages in thread
From: Rik van Riel @ 2012-11-14 18:43 UTC (permalink / raw)
  To: Linus Torvalds
  Cc: Ingo Molnar, Linux Kernel Mailing List, linux-mm, Paul Turner,
	Lee Schermerhorn, Christoph Lameter, Mel Gorman, Andrew Morton,
	Andrea Arcangeli, Peter Zijlstra, Thomas Gleixner, Hugh Dickins

On 11/14/2012 01:01 PM, Linus Torvalds wrote:

> But even *more* aggressively, how about looking at
>
>   - not flushing the TLB at all if the bits become  more permissive
> (taking the TLB micro-fault and letting the CPU just update it on its
> own)

This seems like a good idea.

Additionally, we may be able to get away with not modifying
the PTEs if the bits become more permissive. We can just let
handle_pte_fault update the bits to match the VMA permissions.

That way we may be able to save a fair amount of scanning and
pte manipulation for eg. JVMs that manipulate the same range
of memory repeatedly in the garbage collector.

I do not know whether that would be worthwhile, but it sounds
like something that may be worth a try...
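
Very rough sketch of what I mean (locking and the write/dirty cases
hand-waved away) - on a fault against a present PTE that only carries
stale, stricter protections, refresh it from the VMA instead of
treating the access as a real fault:

	static void pte_refresh_prot(struct vm_area_struct *vma,
				     unsigned long address, pte_t *ptep)
	{
		/* Re-derive the protections from the (now wider) VMA. */
		pte_t entry = pte_modify(*ptep, vma->vm_page_prot);

		set_pte_at(vma->vm_mm, address, ptep, entry);
		update_mmu_cache(vma, address, ptep);
	}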

>   - even *more* aggressive: if the bits become strictly more
> restrictive, how about not flushing the TLB at all, *and* not even
> changing the page tables, and just teaching the page fault code to do
> it lazily at fault time?

How can we do that in a safe way?

Unless we change the page tables, and flush the TLBs before
returning to userspace, the mprotect may not take effect for
an arbitrarily large period of time.

If we do not change the page tables, we should also not incur
any page faults, so the fault code would never run to "do it
lazily".

Am I misreading what you propose?

> Now, the "change protections lazily" might actually be a huge
> performance problem with the page fault overhead dwarfing any TLB
> flush costs, but we don't really know, do we? It might be worth trying
> out.

-- 
All rights reversed


* Re: [PATCH 0/2] change_protection(): Count the number of pages affected
  2012-11-14 18:43   ` Rik van Riel
@ 2012-11-14 20:52     ` Linus Torvalds
  2012-11-14 22:04       ` Rik van Riel
  0 siblings, 1 reply; 10+ messages in thread
From: Linus Torvalds @ 2012-11-14 20:52 UTC (permalink / raw)
  To: Rik van Riel
  Cc: Ingo Molnar, Linux Kernel Mailing List, linux-mm, Paul Turner,
	Lee Schermerhorn, Christoph Lameter, Mel Gorman, Andrew Morton,
	Andrea Arcangeli, Peter Zijlstra, Thomas Gleixner, Hugh Dickins

On Wed, Nov 14, 2012 at 10:43 AM, Rik van Riel <riel@redhat.com> wrote:
>
>>   - even *more* aggressive: if the bits become strictly more
>> restrictive

sorry, this was meant to be "permissive", not restrictive.

>> how about not flushing the TLB at all, *and* not even
>> changing the page tables, and just teaching the page fault code to do
>> it lazily at fault time?
>
>
> How can we do that in a safe way?
>
> Unless we change the page tables, and flush the TLBs before
> returning to userspace, the mprotect may not take effect for
> an arbitrarily large period of time.

My mistake - the point is that if we're changing to a strictly more
permissive mode, the old state of the page tables and TLBs is
perfectly "valid"; it is just unnecessarily strict. So we'll take a
fault on some accesses, but that's fine - we can fix things up at
fault time.

The question then becomes what the access patterns are. The fault
overhead may well dawrf any TLB flush costs, but it depends on whether
people tend to do large mprotect() and then just actually change a few
pages, or whether mprotect() users often then touch all of the area..

                 Linus


* Re: [PATCH 0/2] change_protection(): Count the number of pages affected
  2012-11-14 20:52     ` Linus Torvalds
@ 2012-11-14 22:04       ` Rik van Riel
  0 siblings, 0 replies; 10+ messages in thread
From: Rik van Riel @ 2012-11-14 22:04 UTC (permalink / raw)
  To: Linus Torvalds
  Cc: Ingo Molnar, Linux Kernel Mailing List, linux-mm, Paul Turner,
	Lee Schermerhorn, Christoph Lameter, Mel Gorman, Andrew Morton,
	Andrea Arcangeli, Peter Zijlstra, Thomas Gleixner, Hugh Dickins

On 11/14/2012 03:52 PM, Linus Torvalds wrote:
> On Wed, Nov 14, 2012 at 10:43 AM, Rik van Riel <riel@redhat.com> wrote:
>>
>>>    - even *more* aggressive: if the bits become strictly more
>>> restrictive
>
> sorry, this was meant to be "permissive", not restrictive.

> My mistake - the point is that if we're changing to a strictly more
> permissive mode, the old state of the page tables and TLB's are
> perfectly "valid", they are just unnecessarily strict. So we'll take a
> fault on some accesses, but that's fine - we can fix things up at
> fault time.

The patches I sent in a few weeks ago do that for do_wp_page,
but I can see how we want the same for mprotect...

> The question then becomes what the access patterns are. The fault
> overhead may well dwarf any TLB flush costs, but it depends on whether
> people tend to do large mprotect() and then just actually change a few
> pages, or whether mprotect() users often then touch all of the area..

If we keep a counter of faults-after-mprotect, we may be able
to figure out automatically what behaviour would be best.

Of course, that gets us into premature optimization, so it is
probably best to do the simple thing for now.

-- 
All rights reversed


* Re: [PATCH 0/2] change_protection(): Count the number of pages affected
  2012-11-14 18:01 ` [PATCH 0/2] change_protection(): Count the number of pages affected Linus Torvalds
  2012-11-14 18:43   ` Rik van Riel
@ 2012-11-16 18:40   ` Ingo Molnar
  1 sibling, 0 replies; 10+ messages in thread
From: Ingo Molnar @ 2012-11-16 18:40 UTC (permalink / raw)
  To: Linus Torvalds
  Cc: Linux Kernel Mailing List, linux-mm, Paul Turner,
	Lee Schermerhorn, Christoph Lameter, Rik van Riel, Mel Gorman,
	Andrew Morton, Andrea Arcangeli, Peter Zijlstra, Thomas Gleixner,
	Hugh Dickins


* Linus Torvalds <torvalds@linux-foundation.org> wrote:

> On Wed, Nov 14, 2012 at 12:50 AM, Ingo Molnar <mingo@kernel.org> wrote:
> > What do you guys think about this mprotect() optimization?
> 
> Hmm..
> 
> If this is mainly about just avoiding the TLB flushing, I do 
> wonder if it might not be more interesting to try to be much 
> more aggressive.
> 
> As noted elsewhere, we should just notice when vm_page_prot 
> doesn't change at all - even if 'flags' change, it is possible 
> that the actual low-level page protection bits do not (due to 
> the X=R issue).
> 
> But even *more* aggressively, how about looking at
> 
>  - not flushing the TLB at all if the bits become  more permissive
> (taking the TLB micro-fault and letting the CPU just update it on its
> own)
> 
>  - even *more* aggressive: if the bits become strictly more 
> restrictive, how about not flushing the TLB at all, *and* not 
> even changing the page tables, and just teaching the page 
> fault code to do it lazily at fault time?
> 
> Now, the "change protections lazily" might actually be a huge 
> performance problem with the page fault overhead dwarfing any 
> TLB flush costs, but we don't really know, do we? It might be 
> worth trying out.

It might be a good idea when ptes get weaker protections - and 
maybe some CPU models see the pte modification in memory and are 
able to hash that to the TLB entry already and flush it? Even if 
they don't guarantee it architecturally they might have it as an 
optimization that works most of the time.

But I'd prefer to keep any such patch separate from these 
patches and maybe even keep them per arch and per CPU model?

I have instrumented and made sure that *these* patches do help 
visibly - but determining whether it also helps to not flush the 
TLB when protections are made more permissive is a lot harder to 
do ... there could be per arch differences, even per CPU model 
differences, depending on TLB size, CPU features, etc.

For unthreaded process environments mprotect() is pretty neat 
already.

For small/midsize mprotect()s in threaded environments there's 
two big costs:

  - the down_write(mm->sem)/up_write(mm->sem) serializes between 
    threads.

    Technically this could be improved, as the most expensive 
    parts of mprotect() are really safe via down_read() - the 
    only exception appears to be:

        vma->vm_flags = newflags;
        vma->vm_page_prot = pgprot_modify(vma->vm_page_prot,
                                          vm_get_page_prot(newflags));

    and that could be serialized using a spinlock, say the 
    pagetable lock. But it's a lot of footwork factoring out 
    vma->vm_page_prot users and we'd consider each such place 
    whether slowing them down is less of a problem than the 
    benefit of speeding up mprotect().

    So I wouldn't personally go there, dragons and all that.

  - the TLB flush, if done on some highly threaded workload like
    a JVM with threads live on many other CPUs is a global TLB 
    flush, with IPIs sent everywhere and the result has to be 
    waited for.

    This could be improved even if we don't do your
    very aggressive optimization, unless I'm missing something: 
    we could still flush locally and send the IPIs, but we don't
    have to *wait* for them when we weaken protections, right? 
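
For concreteness, the kind of change meant in that last point - a
sketch only, where flush_tlb_func/flush_info stand in for whatever
the architecture's remote-flush path actually uses, and 'weakened'
would have to be computed by change_protection():

	/*
	 * Keep the local flush synchronous; only wait for the remote
	 * CPUs when protections got stronger.  If they only got
	 * weaker, a stale remote TLB entry is merely too strict.
	 */
	local_flush_tlb();
	smp_call_function_many(mm_cpumask(mm), flush_tlb_func,
			       &flush_info, !weakened);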

Thanks,

	Ingo

