[PATCH v5] mm: improve mprotect(R|W) efficiency on pages referenced once
From: Peter Collingbourne @ 2021-06-01 18:59 UTC
  To: Andrew Morton
  Cc: Peter Collingbourne, Kostya Kortchinsky, Evgenii Stepanov,
	Andrea Arcangeli, Peter Xu, linux-mm

In the Scudo memory allocator [1] we would like to be able to
detect use-after-free vulnerabilities involving large allocations
by issuing mprotect(PROT_NONE) on the memory region used for the
allocation when it is deallocated. Later on, after the memory
region has been "quarantined" for a sufficient period of time we
would like to be able to use it for another allocation by issuing
mprotect(PROT_READ|PROT_WRITE).
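
As a rough illustration, the deallocation and reuse paths for a large
allocation would look something like this (a simplified sketch with
hypothetical helper names, not Scudo's actual implementation):

  /* Hypothetical sketch only -- not Scudo's actual code. */
  #include <stddef.h>
  #include <sys/mman.h>

  /* On deallocation: any later access to the block now faults. */
  void quarantine_large_block(void *addr, size_t size) {
    mprotect(addr, size, PROT_NONE);
    /* ... keep the block on a quarantine list for a while ... */
  }

  /* Once the quarantine period has passed, hand the block out again. */
  void *reuse_large_block(void *addr, size_t size) {
    mprotect(addr, size, PROT_READ | PROT_WRITE);
    return addr;
  }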

Before this patch, after removing the write protection, any writes
to the memory region would result in page faults and entering
the copy-on-write code path, even in the usual case where the
pages are only referenced by a single PTE, harming performance
unnecessarily. Make it so that any pages in anonymous mappings that
are only referenced by a single PTE are immediately made writable
during the mprotect so that we can avoid the page faults.

This program shows the critical syscall sequence that we intend to
use in the allocator:

  #include <string.h>
  #include <sys/mman.h>

  enum { kSize = 131072 };

  int main(int argc, char **argv) {
    char *addr = (char *)mmap(0, kSize, PROT_READ | PROT_WRITE,
                              MAP_ANONYMOUS | MAP_PRIVATE, -1, 0);
    for (int i = 0; i != 100000; ++i) {
      memset(addr, i, kSize);
      mprotect((void *)addr, kSize, PROT_NONE);
      mprotect((void *)addr, kSize, PROT_READ | PROT_WRITE);
    }
  }

The effect of this patch on the above program was measured on a
DragonBoard 845c by taking the median real (wall-clock) execution
time of 10 runs.

Before: 2.94s
After:  0.66s

The effect was also measured using one of the microbenchmarks that
we normally use to benchmark the allocator [2], after modifying it
to make the appropriate mprotect calls [3]. With an allocation size
of 131072 bytes to trigger the allocator's "large allocation" code
path the per-iteration time was measured as follows:

Before: 27450ns
After:   6010ns

This patch means that we do more work during the mprotect call itself
in exchange for less work when the pages are accessed. In the worst
case, the pages are not accessed at all. The effect of this patch in
such cases was measured using the following program:

  #include <string.h>
  #include <sys/mman.h>

  enum { kSize = 131072 };

  int main(int argc, char **argv) {
    char *addr = (char *)mmap(0, kSize, PROT_READ | PROT_WRITE,
                              MAP_ANONYMOUS | MAP_PRIVATE, -1, 0);
    memset(addr, 1, kSize);
    for (int i = 0; i != 100000; ++i) {
  #ifdef PAGE_FAULT
      memset(addr + (i * 4096) % kSize, i, 4096);
  #endif
      mprotect((void *)addr, kSize, PROT_NONE);
      mprotect((void *)addr, kSize, PROT_READ | PROT_WRITE);
    }
  }

With PAGE_FAULT undefined (0 pages touched after removing write
protection) the median real (wall-clock) execution time of 100 runs
was measured as follows:

Before: 0.330260s
After:  0.338836s

With PAGE_FAULT defined (1 page touched) the measurements were
as follows:

Before: 0.438048s
After:  0.355661s

So it seems that even with a single page fault the new approach
is faster.
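
Put per iteration, these measurements work out to roughly:

  no page touched:  (0.338836s - 0.330260s) / 100000 ~= 0.09us lost
  one page touched: (0.438048s - 0.355661s) / 100000 ~= 0.82us saved

so a single avoided write fault per iteration more than pays for the
extra work done during the mprotect itself.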

The results are similar in character when I adjusted the programs to
use a larger mapping size. With kSize = 1048576 I got these numbers
with PAGE_FAULT undefined:

Before: 1.428988s
After:  1.512016s

i.e. the new approach is around 5.5% slower in this case.

And these with PAGE_FAULT defined:

Before: 1.518559s
After:  1.524417s

i.e. about the same.

What I think we may conclude from these results is that for smaller
mappings the advantage of the previous approach, although measurable,
is wiped out by a single page fault. I think we may expect that there
should be at least one access resulting in a page fault (under the
previous approach) after making the pages writable, since the program
presumably made the pages writable for a reason.

For larger mappings we may guesstimate that the new approach wins if
the density of future page faults is > 0.4%. But for the mappings that
are large enough for density to matter (not just the absolute number
of page faults) it doesn't seem like the increase in mprotect latency
would be very large relative to the total mprotect execution time.
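
One way to arrive at a figure in that ballpark from the numbers above
(a back-of-the-envelope estimate only):

  extra mprotect cost, kSize = 1048576 (256 pages), no pages touched:
    (1.512016s - 1.428988s) / 100000 ~= 0.83us per iteration
  net saving per avoided write fault (kSize = 131072 runs above):
    ~= 0.82us

so the new approach breaks even at roughly one avoided fault per 256
pages, i.e. a future page fault density of about 0.4%.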

Signed-off-by: Peter Collingbourne <pcc@google.com>
Link: https://linux-review.googlesource.com/id/I98d75ef90e20330c578871c87494d64b1df3f1b8
Link: [1] https://source.android.com/devices/tech/debug/scudo
Link: [2] https://cs.android.com/android/platform/superproject/+/master:bionic/benchmarks/stdlib_benchmark.cpp;l=53;drc=e8693e78711e8f45ccd2b610e4dbe0b94d551cc9
Link: [3] https://github.com/pcc/llvm-project/commit/scudo-mprotect-secondary2
---
v5:
- add comments
- prohibit optimization for NUMA pages

v4:
- check pte_uffd_wp() to ensure that we still see UFFD faults
- check page_count() instead of page_mapcount() to handle non-map references (e.g. FOLL_LONGTERM)
- move the check into a separate function

v3:
- check for dirty pages
- refresh the performance numbers

v2:
- improve the commit message

 mm/mprotect.c | 52 +++++++++++++++++++++++++++++++++++++++++++++------
 1 file changed, 46 insertions(+), 6 deletions(-)

diff --git a/mm/mprotect.c b/mm/mprotect.c
index 94188df1ee55..c4627b0198ff 100644
--- a/mm/mprotect.c
+++ b/mm/mprotect.c
@@ -35,6 +35,51 @@
 
 #include "internal.h"
 
+/* Determine whether we can avoid taking write faults for known dirty pages. */
+static bool may_avoid_write_fault(pte_t pte, struct vm_area_struct *vma,
+				  unsigned long cp_flags)
+{
+	/*
+	 * The dirty accountable bit indicates that we can always make the page
+	 * writable regardless of the number of references.
+	 */
+	if (!(cp_flags & MM_CP_DIRTY_ACCT)) {
+		/* Otherwise, we must have exclusive access to the page. */
+		if (!(vma_is_anonymous(vma) && (vma->vm_flags & VM_WRITE)))
+			return false;
+
+		if (page_count(pte_page(pte)) != 1)
+			return false;
+	}
+
+	/*
+	 * Don't do this optimization for clean pages as we need to be notified
+	 * of the transition from clean to dirty.
+	 */
+	if (!pte_dirty(pte))
+		return false;
+
+	/* Same for softdirty. */
+	if (!pte_soft_dirty(pte) && (vma->vm_flags & VM_SOFTDIRTY))
+		return false;
+
+	/*
+	 * For userfaultfd the user program needs to monitor write faults so we
+	 * can't do this optimization.
+	 */
+	if (pte_uffd_wp(pte))
+		return false;
+
+	/*
+	 * It is unclear whether this optimization can be done safely for NUMA
+	 * pages.
+	 */
+	if (cp_flags & MM_CP_PROT_NUMA)
+		return false;
+
+	return true;
+}
+
 static unsigned long change_pte_range(struct vm_area_struct *vma, pmd_t *pmd,
 		unsigned long addr, unsigned long end, pgprot_t newprot,
 		unsigned long cp_flags)
@@ -43,7 +88,6 @@ static unsigned long change_pte_range(struct vm_area_struct *vma, pmd_t *pmd,
 	spinlock_t *ptl;
 	unsigned long pages = 0;
 	int target_node = NUMA_NO_NODE;
-	bool dirty_accountable = cp_flags & MM_CP_DIRTY_ACCT;
 	bool prot_numa = cp_flags & MM_CP_PROT_NUMA;
 	bool uffd_wp = cp_flags & MM_CP_UFFD_WP;
 	bool uffd_wp_resolve = cp_flags & MM_CP_UFFD_WP_RESOLVE;
@@ -131,12 +175,8 @@ static unsigned long change_pte_range(struct vm_area_struct *vma, pmd_t *pmd,
 				ptent = pte_clear_uffd_wp(ptent);
 			}
 
-			/* Avoid taking write faults for known dirty pages */
-			if (dirty_accountable && pte_dirty(ptent) &&
-					(pte_soft_dirty(ptent) ||
-					 !(vma->vm_flags & VM_SOFTDIRTY))) {
+			if (may_avoid_write_fault(ptent, vma, cp_flags))
 				ptent = pte_mkwrite(ptent);
-			}
 			ptep_modify_prot_commit(vma, addr, pte, oldpte, ptent);
 			pages++;
 		} else if (is_swap_pte(oldpte)) {
-- 
2.32.0.rc1.229.g3e70b5a671-goog


