* [RFC v8 PATCH 0/5] mm: zap pages with read mmap_sem in munmap for large mapping
@ 2018-08-15 18:49 Yang Shi
  2018-08-15 18:49 ` [RFC v8 PATCH 1/5] mm: refactor do_munmap() to extract the common part Yang Shi
                   ` (4 more replies)
  0 siblings, 5 replies; 24+ messages in thread
From: Yang Shi @ 2018-08-15 18:49 UTC (permalink / raw)
  To: mhocko, willy, ldufour, kirill, vbabka, akpm, peterz, mingo,
	acme, alexander.shishkin, jolsa, namhyung
  Cc: yang.shi, linux-mm, linux-kernel


Background:
Recently, when running some vm scalability tests on machines with large memory,
we ran into a couple of mmap_sem scalability issues when unmapping a large
memory space; please refer to https://lkml.org/lkml/2017/12/14/733 and
https://lkml.org/lkml/2018/2/20/576.


History:
akpm then suggested unmapping a large mapping section by section, dropping
mmap_sem between sections, to mitigate the issue (see
https://lkml.org/lkml/2018/3/6/784).

The v1 patch series was submitted to the mailing list per Andrew's suggestion
(see https://lkml.org/lkml/2018/3/20/786). I then received a lot of great
feedback and suggestions.

This topic was then discussed at the LSFMM summit 2018, where Michal Hocko
suggested (as he also did in the v1 review) trying a "two phases" approach:
zap pages with read mmap_sem held, then do the cleanup with write mmap_sem
held (for discussion details, see https://lwn.net/Articles/753269/).


Approach:
Zapping pages is the most time-consuming part. According to the suggestion
from Michal Hocko [1], it can be done with read mmap_sem held, like what
MADV_DONTNEED does; write mmap_sem is then re-acquired to clean up the vmas.

But we can't call MADV_DONTNEED directly, since it has two major drawbacks:
  * The unexpected state seen by a page fault if it wins the race in the
    middle of munmap: it may return a zero page instead of the original
    content or a SIGSEGV (see the illustration below).
  * It can't handle VM_LOCKED | VM_HUGETLB | VM_PFNMAP and uprobe mappings,
    which was a showstopper for akpm.
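
A minimal illustration (not from the original posting) of the first
drawback, showing MADV_DONTNEED semantics on a private anonymous mapping;
error handling is omitted:

        #include <assert.h>
        #include <stdlib.h>
        #include <sys/mman.h>

        int main(void)
        {
                size_t len = 1UL << 20;
                char *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                               MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

                p[0] = 42;
                madvise(p, len, MADV_DONTNEED);
                /* An access now faults in a zero-filled page -- neither
                 * the old data nor a SIGSEGV; this is the "unexpected
                 * state" a racing page fault would observe. */
                assert(p[0] == 0);
                return 0;
        }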

But some parts may need write mmap_sem, for example vma splitting. So
the design is as follows:
        acquire write mmap_sem
        lookup vmas (find and split vmas)
        deal with special mappings
        detach vmas
        downgrade_write

        zap pages
        free page tables
        release mmap_sem

VM events taken with read mmap_sem (i.e. page fault, gup, etc.) may come
in during page zapping, but since the vmas have already been detached,
they will not be able to find a valid vma and will just return SIGSEGV
or -EFAULT as expected.
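
A sketch, simplified from a typical arch page fault handler (not part of
this series), of why a racing fault fails safely once the vmas have been
detached:

        down_read(&mm->mmap_sem);
        vma = find_vma(mm, address);
        if (!vma || address < vma->vm_start)
                goto bad_area;          /* detached vmas are no longer
                                         * findable: deliver SIGSEGV */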

If a vma has VM_HUGETLB | VM_PFNMAP or uprobes, it is considered a
special mapping. Such mappings are handled in this patch by falling back
to regular do_munmap() with exclusive mmap_sem held, since they may
update vm flags.
But with the "detach vmas first" approach, the vmas have already been
detached when vm flags are updated, so it sounds safe to update vm flags
with read mmap_sem for this specific case. VM_HUGETLB and VM_PFNMAP are
therefore converted to the optimized path in following separate patches
for the sake of bisectability. However, uprobes mappings keep using
regular do_munmap(), since unmapping uprobe areas may need to update mm
flags; that might not be safe with just read mmap_sem held, even though
the affected vmas have been detached.

With the "detach vmas first" approach we don't have to re-acquire
mmap_sem again to clean up vmas to avoid race window which might get the
address space changed since downgrade_write() doesn't release the lock
to lead regression, which simply downgrades to read lock.
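
A minimal sketch of the resulting locking pattern (simplified from the
actual patch, using the rwsem API of this kernel version):

        down_write(&mm->mmap_sem);
        /* lookup, split and detach vmas; handle special mappings */
        downgrade_write(&mm->mmap_sem); /* write -> read, never dropped */
        /* zap pages and free page tables under the read lock */
        up_read(&mm->mmap_sem);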

And since the lock acquire/release cost is kept to a minimum, almost the
same as before, the optimization can be extended to mappings of any size
without incurring a significant penalty on small mappings.

For the time being, this is done only in the munmap syscall path. Other
vm_munmap() or do_munmap() call sites (i.e. mmap, mremap, etc.) remain
intact due to some implementation difficulties: they acquire write
mmap_sem from the very beginning and hold it until the end, with
do_munmap() possibly called in the middle, whereas the optimized
do_munmap wants to be called without mmap_sem held so that the
optimization can be applied. So, if we want to do a similar optimization
for the mmap/mremap paths, I'm afraid we would have to redesign them.
mremap might be called on very large areas depending on the use case;
optimizing it will be considered in the future.

[1] https://lwn.net/Articles/753269/


Changelog
v7 -> v8:
* Added Acked-by from Vlastimil for patch 1/5. Thanks.
* Fixed the wrong "evolution" direction. Converted VM_HUGETLB and VM_PFNMAP
  mappings to use the optimized path in separate patches, for the sake of
  safety and bisectability, per Michal's suggestion.
* Extracted a has_uprobes() helper from uprobe_munmap() to check if the mm or
  vmas have uprobes, which can save some cycles in some cases compared to
  calling vma_has_uprobes() directly. Per Vlastimil's suggestion.
* Keep unmapping uprobes areas using regular do_munmap(), since that might
  update mm flags, which may not be safe with read mmap_sem even though the
  vmas have been detached.
* Addressed some comments from Willy.

v6 -> v7:
* Rename some helper functions per Michal and Vlastimil's comments.
* Refactor munmap_lookup_vma() to return the pointer of start vma per Michal's
  suggestion.
* Rephrase some commit log for patch 2/4 per Michal's comments.
* Deal with special mappings (VM_HUGETLB | VM_PFNMAP | uprobes) with regular
  do_munmap() in a separate patch per Michal's suggestion.
* Bring the patch which makes vma_has_uprobes() non-static back since it is
  needed to check if a vma has uprobes or not.

v5 -> v6:
* Addressed the comments from Kirill and Laurent
* Added Laurent's reviewed-by to patch 1/2. Thanks.

v4 -> v5:
* Detach vmas before zapping pages so that we don't have to use VM_DEAD to
  mark a vma being unmapped, since the vmas have been detached from the rbtree
  when the pages are zapped. Per Kirill
* Eliminate VM_DEAD stuff
* With this change we don't have to re-acquire write mmap_sem to do cleanup.
  So, we could eliminate a potential race window
* Eliminate PUD_SIZE check, and extend this optimization to all size

v3 -> v4:
* Extend check_stable_address_space to check VM_DEAD as Michal suggested
* Deal with vm_flags update of VM_LOCKED | VM_HUGETLB | VM_PFNMAP and uprobe
  mappings with exclusive lock held. The actual unmapping is still done with read
  mmap_sem to solve akpm's concern
* Clean up vmas by calling do_munmap to prevent a race condition, instead of
  carrying the vmas over, as Kirill suggested
* Extracted more common code
* Solved some code cleanup comments from akpm
* Dropped uprobe and arch specific code, now all the changes are mm only
* Still keep the PUD_SIZE threshold; if everyone thinks it is better to extend
  this to all sizes or a smaller size, it will be removed
* Make this optimization 64 bit only explicitly per akpm's suggestion

v2 -> v3:
* Refactor the do_munmap code to extract the common part per Peter's suggestion
* Introduced VM_DEAD flag per Michal's suggestion. Just handled VM_DEAD in
  x86's page fault handler for the time being. Other architectures will be covered
  once the patch series is reviewed
* Now lookup vma (find and split) and set VM_DEAD flag with write mmap_sem, then
  zap mapping with read mmap_sem, then clean up pgtables and vmas with write
  mmap_sem per Peter's suggestion

v1 -> v2:
* Re-implemented the code per the discussion on LSFMM summit


Regression and performance data:
Ran the below regression tests with the threshold manually set to 4K in the
code:
  * Full LTP
  * Trinity (munmap/all vm syscalls)
  * Stress-ng: mmap/mmapfork/mmapfixed/mmapaddr/mmapmany/vm
  * mm-tests: kernbench, phpbench, sysbench-mariadb, will-it-scale
  * vm-scalability

With the patches, the exclusive mmap_sem hold time when munmapping an 80GB
address space on a machine with 32 cores of E5-2680 @ 2.70GHz dropped from
seconds to the microsecond (us) level.

munmap_test-15002 [008]   594.380138: funcgraph_entry: |  vm_munmap_zap_rlock() {
munmap_test-15002 [008]   594.380146: funcgraph_entry:      !2485684 us |    unmap_region();
munmap_test-15002 [008]   596.865836: funcgraph_exit:       !2485692 us |  }

Here the execution time of unmap_region() is used to evaluate the time spent
holding read mmap_sem; the remaining time is spent holding the exclusive
lock.
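
A minimal sketch (not from the original posting) of the kind of test
program behind the trace above; the exact setup and use of the
function_graph tracer are assumptions:

        #include <stdlib.h>
        #include <string.h>
        #include <sys/mman.h>

        int main(void)
        {
                size_t len = 80UL << 30;        /* 80GB */
                char *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                               MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

                memset(p, 1, len);      /* fault in every single page */
                munmap(p, len);         /* traced via function_graph above */
                return 0;
        }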


Yang Shi (5):
      mm: refactor do_munmap() to extract the common part
      uprobes: introduce has_uprobes helper
      mm: mmap: zap pages with read mmap_sem in munmap
      mm: unmap VM_HUGETLB mappings with optimized path
      mm: unmap VM_PFNMAP mappings with optimized path

 include/linux/uprobes.h |   7 ++++
 kernel/events/uprobes.c |  23 ++++++++----
 mm/mmap.c               | 199 ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++------------------
 3 files changed, 188 insertions(+), 41 deletions(-)


* [RFC v8 PATCH 1/5] mm: refactor do_munmap() to extract the common part
  2018-08-15 18:49 [RFC v8 PATCH 0/5] mm: zap pages with read mmap_sem in munmap for large mapping Yang Shi
@ 2018-08-15 18:49 ` Yang Shi
  2018-08-15 18:49 ` [RFC v8 PATCH 2/5] uprobes: introduce has_uprobes helper Yang Shi
                   ` (3 subsequent siblings)
  4 siblings, 0 replies; 24+ messages in thread
From: Yang Shi @ 2018-08-15 18:49 UTC (permalink / raw)
  To: mhocko, willy, ldufour, kirill, vbabka, akpm, peterz, mingo,
	acme, alexander.shishkin, jolsa, namhyung
  Cc: yang.shi, linux-mm, linux-kernel

Introduces three new helper functions:
  * addr_ok()
  * munmap_lookup_vma()
  * munlock_vmas()

They will be used by do_munmap() and by the new do_munmap variant,
introduced in a later patch, which zaps large mappings early.

There is no functional change, just code refactoring.

Reviewed-by: Laurent Dufour <ldufour@linux.vnet.ibm.com>
Acked-by: Vlastimil Babka <vbabka@suse.cz>
Signed-off-by: Yang Shi <yang.shi@linux.alibaba.com>
---
 mm/mmap.c | 106 +++++++++++++++++++++++++++++++++++++++++++-------------------
 1 file changed, 74 insertions(+), 32 deletions(-)

diff --git a/mm/mmap.c b/mm/mmap.c
index 17bbf4d..f05f49b 100644
--- a/mm/mmap.c
+++ b/mm/mmap.c
@@ -2681,35 +2681,42 @@ int split_vma(struct mm_struct *mm, struct vm_area_struct *vma,
 	return __split_vma(mm, vma, addr, new_below);
 }
 
-/* Munmap is split into 2 main parts -- this part which finds
- * what needs doing, and the areas themselves, which do the
- * work.  This now handles partial unmappings.
- * Jeremy Fitzhardinge <jeremy@goop.org>
- */
-int do_munmap(struct mm_struct *mm, unsigned long start, size_t len,
-	      struct list_head *uf)
+static inline bool addr_ok(unsigned long start, size_t len)
 {
-	unsigned long end;
-	struct vm_area_struct *vma, *prev, *last;
-
 	if ((offset_in_page(start)) || start > TASK_SIZE || len > TASK_SIZE-start)
-		return -EINVAL;
+		return false;
 
-	len = PAGE_ALIGN(len);
-	if (len == 0)
-		return -EINVAL;
+	if (PAGE_ALIGN(len) == 0)
+		return false;
+
+	return true;
+}
+
+/*
+ * munmap_lookup_vma: find the first overlapping vma and split overlapping vmas.
+ * @mm: mm_struct
+ * @start: start address
+ * @end: end address
+ *
+ * Return: %NULL if no VMA overlaps this range.  An ERR_PTR if an
+ * overlapping VMA could not be split.  Otherwise a pointer to the first
+ * VMA which overlaps the range.
+ */
+static struct vm_area_struct *munmap_lookup_vma(struct mm_struct *mm,
+			unsigned long start, unsigned long end)
+{
+	struct vm_area_struct *vma, *prev, *last;
 
 	/* Find the first overlapping VMA */
 	vma = find_vma(mm, start);
 	if (!vma)
-		return 0;
-	prev = vma->vm_prev;
-	/* we have  start < vma->vm_end  */
+		return NULL;
 
+	/* we have start < vma->vm_end  */
 	/* if it doesn't overlap, we have nothing.. */
-	end = start + len;
 	if (vma->vm_start >= end)
-		return 0;
+		return NULL;
+	prev = vma->vm_prev;
 
 	/*
 	 * If we need to split any vma, do it now to save pain later.
@@ -2727,11 +2734,11 @@ int do_munmap(struct mm_struct *mm, unsigned long start, size_t len,
 		 * its limit temporarily, to help free resources as expected.
 		 */
 		if (end < vma->vm_end && mm->map_count >= sysctl_max_map_count)
-			return -ENOMEM;
+			return ERR_PTR(-ENOMEM);
 
 		error = __split_vma(mm, vma, start, 0);
 		if (error)
-			return error;
+			return ERR_PTR(error);
 		prev = vma;
 	}
 
@@ -2740,10 +2747,53 @@ int do_munmap(struct mm_struct *mm, unsigned long start, size_t len,
 	if (last && end > last->vm_start) {
 		int error = __split_vma(mm, last, end, 1);
 		if (error)
-			return error;
+			return ERR_PTR(error);
 	}
 	vma = prev ? prev->vm_next : mm->mmap;
 
+	return vma;
+}
+
+static inline void munlock_vmas(struct vm_area_struct *vma,
+				unsigned long end)
+{
+	struct mm_struct *mm = vma->vm_mm;
+
+	while (vma && vma->vm_start < end) {
+		if (vma->vm_flags & VM_LOCKED) {
+			mm->locked_vm -= vma_pages(vma);
+			munlock_vma_pages_all(vma);
+		}
+		vma = vma->vm_next;
+	}
+}
+
+/* Munmap is split into 2 main parts -- this part which finds
+ * what needs doing, and the areas themselves, which do the
+ * work.  This now handles partial unmappings.
+ * Jeremy Fitzhardinge <jeremy@goop.org>
+ */
+int do_munmap(struct mm_struct *mm, unsigned long start, size_t len,
+	      struct list_head *uf)
+{
+	unsigned long end;
+	struct vm_area_struct *vma, *prev;
+
+	if (!addr_ok(start, len))
+		return -EINVAL;
+
+	len = PAGE_ALIGN(len);
+
+	end = start + len;
+
+	vma = munmap_lookup_vma(mm, start, end);
+	if (!vma)
+		return 0;
+	if (IS_ERR(vma))
+		return PTR_ERR(vma);
+
+	prev = vma->vm_prev;
+
 	if (unlikely(uf)) {
 		/*
 		 * If userfaultfd_unmap_prep returns an error the vmas
@@ -2762,16 +2812,8 @@ int do_munmap(struct mm_struct *mm, unsigned long start, size_t len,
 	/*
 	 * unlock any mlock()ed ranges before detaching vmas
 	 */
-	if (mm->locked_vm) {
-		struct vm_area_struct *tmp = vma;
-		while (tmp && tmp->vm_start < end) {
-			if (tmp->vm_flags & VM_LOCKED) {
-				mm->locked_vm -= vma_pages(tmp);
-				munlock_vma_pages_all(tmp);
-			}
-			tmp = tmp->vm_next;
-		}
-	}
+	if (mm->locked_vm)
+		munlock_vmas(vma, end);
 
 	/*
 	 * Remove the vma's, and unmap the actual pages
-- 
1.8.3.1



* [RFC v8 PATCH 2/5] uprobes: introduce has_uprobes helper
  2018-08-15 18:49 [RFC v8 PATCH 0/5] mm: zap pages with read mmap_sem in munmap for large mapping Yang Shi
  2018-08-15 18:49 ` [RFC v8 PATCH 1/5] mm: refactor do_munmap() to extract the common part Yang Shi
@ 2018-08-15 18:49 ` Yang Shi
  2018-08-22 10:55   ` Vlastimil Babka
  2018-08-15 18:49 ` [RFC v8 PATCH 3/5] mm: mmap: zap pages with read mmap_sem in munmap Yang Shi
                   ` (2 subsequent siblings)
  4 siblings, 1 reply; 24+ messages in thread
From: Yang Shi @ 2018-08-15 18:49 UTC (permalink / raw)
  To: mhocko, willy, ldufour, kirill, vbabka, akpm, peterz, mingo,
	acme, alexander.shishkin, jolsa, namhyung
  Cc: yang.shi, linux-mm, linux-kernel

We need to check whether the mm or a vma has uprobes, in the following
patch, to decide whether a vma can be unmapped with read mmap_sem held.
The checks and pre-conditions used by uprobe_munmap() look just suitable
for this purpose.

Extract those checks into a helper function, has_uprobes().
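
As used by the later munmap patch (simplified from patch 3/5 in this
series):

	for (vma = start_vma; vma && vma->vm_start < end; vma = vma->vm_next) {
		if (vma->vm_file &&
		    has_uprobes(vma, vma->vm_start, vma->vm_end))
			goto regular_path;	/* needs write mmap_sem */
	}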

Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Arnaldo Carvalho de Melo <acme@kernel.org>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Vlastimil Babka <vbabka@suse.cz>
Signed-off-by: Yang Shi <yang.shi@linux.alibaba.com>
---
 include/linux/uprobes.h |  7 +++++++
 kernel/events/uprobes.c | 23 ++++++++++++++++-------
 2 files changed, 23 insertions(+), 7 deletions(-)

diff --git a/include/linux/uprobes.h b/include/linux/uprobes.h
index 0a294e9..418764e 100644
--- a/include/linux/uprobes.h
+++ b/include/linux/uprobes.h
@@ -149,6 +149,8 @@ struct uprobes_state {
 extern bool arch_uprobe_ignore(struct arch_uprobe *aup, struct pt_regs *regs);
 extern void arch_uprobe_copy_ixol(struct page *page, unsigned long vaddr,
 					 void *src, unsigned long len);
+extern bool has_uprobes(struct vm_area_struct *vma, unsigned long start,
+			unsigned long end);
 #else /* !CONFIG_UPROBES */
 struct uprobes_state {
 };
@@ -203,5 +205,10 @@ static inline void uprobe_copy_process(struct task_struct *t, unsigned long flag
 static inline void uprobe_clear_state(struct mm_struct *mm)
 {
 }
+static inline bool has_uprobes(struct vm_area_struct *vma, unsigned long start,
+			       unsigned long end)
+{
+	return false;
+}
 #endif /* !CONFIG_UPROBES */
 #endif	/* _LINUX_UPROBES_H */
diff --git a/kernel/events/uprobes.c b/kernel/events/uprobes.c
index aed1ba5..568481c 100644
--- a/kernel/events/uprobes.c
+++ b/kernel/events/uprobes.c
@@ -1114,22 +1114,31 @@ int uprobe_mmap(struct vm_area_struct *vma)
 	return !!n;
 }
 
-/*
- * Called in context of a munmap of a vma.
- */
-void uprobe_munmap(struct vm_area_struct *vma, unsigned long start, unsigned long end)
+bool
+has_uprobes(struct vm_area_struct *vma, unsigned long start, unsigned long end)
 {
 	if (no_uprobe_events() || !valid_vma(vma, false))
-		return;
+		return false;
 
 	if (!atomic_read(&vma->vm_mm->mm_users)) /* called by mmput() ? */
-		return;
+		return false;
 
 	if (!test_bit(MMF_HAS_UPROBES, &vma->vm_mm->flags) ||
 	     test_bit(MMF_RECALC_UPROBES, &vma->vm_mm->flags))
-		return;
+		return false;
 
 	if (vma_has_uprobes(vma, start, end))
+		return true;
+
+	return false;
+}
+
+/*
+ * Called in context of a munmap of a vma.
+ */
+void uprobe_munmap(struct vm_area_struct *vma, unsigned long start, unsigned long end)
+{
+	if (has_uprobes(vma, start, end))
 		set_bit(MMF_RECALC_UPROBES, &vma->vm_mm->flags);
 }
 
-- 
1.8.3.1



* [RFC v8 PATCH 3/5] mm: mmap: zap pages with read mmap_sem in munmap
  2018-08-15 18:49 [RFC v8 PATCH 0/5] mm: zap pages with read mmap_sem in munmap for large mapping Yang Shi
  2018-08-15 18:49 ` [RFC v8 PATCH 1/5] mm: refactor do_munmap() to extract the common part Yang Shi
  2018-08-15 18:49 ` [RFC v8 PATCH 2/5] uprobes: introduce has_uprobes helper Yang Shi
@ 2018-08-15 18:49 ` Yang Shi
  2018-08-15 19:16   ` Matthew Wilcox
                     ` (2 more replies)
  2018-08-15 18:49 ` [RFC v8 PATCH 4/5] mm: unmap VM_HUGETLB mappings with optimized path Yang Shi
  2018-08-15 18:49 ` [RFC v8 PATCH 5/5] mm: unmap VM_PFNMAP " Yang Shi
  4 siblings, 3 replies; 24+ messages in thread
From: Yang Shi @ 2018-08-15 18:49 UTC (permalink / raw)
  To: mhocko, willy, ldufour, kirill, vbabka, akpm, peterz, mingo,
	acme, alexander.shishkin, jolsa, namhyung
  Cc: yang.shi, linux-mm, linux-kernel

When running some mmap/munmap scalability tests with large memory (e.g.
> 300GB), the below hung task issue may happen occasionally.

INFO: task ps:14018 blocked for more than 120 seconds.
       Tainted: G            E 4.9.79-009.ali3000.alios7.x86_64 #1
 "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this
message.
 ps              D    0 14018      1 0x00000004
  ffff885582f84000 ffff885e8682f000 ffff880972943000 ffff885ebf499bc0
  ffff8828ee120000 ffffc900349bfca8 ffffffff817154d0 0000000000000040
  00ffffff812f872a ffff885ebf499bc0 024000d000948300 ffff880972943000
 Call Trace:
  [<ffffffff817154d0>] ? __schedule+0x250/0x730
  [<ffffffff817159e6>] schedule+0x36/0x80
  [<ffffffff81718560>] rwsem_down_read_failed+0xf0/0x150
  [<ffffffff81390a28>] call_rwsem_down_read_failed+0x18/0x30
  [<ffffffff81717db0>] down_read+0x20/0x40
  [<ffffffff812b9439>] proc_pid_cmdline_read+0xd9/0x4e0
  [<ffffffff81253c95>] ? do_filp_open+0xa5/0x100
  [<ffffffff81241d87>] __vfs_read+0x37/0x150
  [<ffffffff812f824b>] ? security_file_permission+0x9b/0xc0
  [<ffffffff81242266>] vfs_read+0x96/0x130
  [<ffffffff812437b5>] SyS_read+0x55/0xc0
  [<ffffffff8171a6da>] entry_SYSCALL_64_fastpath+0x1a/0xc5

This is because munmap holds mmap_sem exclusively from the very beginning
all the way to the end, without releasing it in the middle. Unmapping a
large mapping may take a long time (~18 seconds to unmap a 320GB mapping
with every single page mapped, on an idle machine).

Zapping pages is the most time-consuming part. According to the
suggestion from Michal Hocko [1], it can be done with read mmap_sem held,
like what MADV_DONTNEED does; write mmap_sem is then re-acquired to clean
up the vmas.

But some parts may need write mmap_sem, for example vma splitting. So
the design is as follows:
        acquire write mmap_sem
        lookup vmas (find and split vmas)
        deal with special mappings
        detach vmas
        downgrade_write

        zap pages
        free page tables
        release mmap_sem

VM events taken with read mmap_sem (i.e. page fault, gup, etc.) may come
in during page zapping, but since the vmas have already been detached,
they will not be able to find a valid vma and will just return SIGSEGV
or -EFAULT as expected.

If a vma has VM_HUGETLB | VM_PFNMAP or uprobes, it is considered a
special mapping. Such mappings are handled in this patch by falling back
to regular do_munmap() with exclusive mmap_sem held, since they may
update vm flags.
But with the "detach vmas first" approach, the vmas have already been
detached when vm flags are updated, so it sounds safe to update vm flags
with read mmap_sem for this specific case. VM_HUGETLB and VM_PFNMAP are
therefore converted to the optimized path in following separate patches
for the sake of bisectability. However, uprobes mappings keep using
regular do_munmap(), since unmapping uprobe areas may need to update mm
flags; that might not be safe with just read mmap_sem held, even though
the affected vmas have been detached.

With the "detach vmas first" approach we don't have to re-acquire
mmap_sem again to clean up vmas to avoid race window which might get the
address space changed since downgrade_write() doesn't release the lock
to lead regression, which simply downgrades to read lock.

And since the lock acquire/release cost is kept to a minimum, almost the
same as before, the optimization can be extended to mappings of any size
without incurring a significant penalty on small mappings.

For the time being, this is done only in the munmap syscall path. Other
vm_munmap() or do_munmap() call sites (i.e. mmap, mremap, etc.) remain
intact due to some implementation difficulties: they acquire write
mmap_sem from the very beginning and hold it until the end, with
do_munmap() possibly called in the middle, whereas the optimized
do_munmap wants to be called without mmap_sem held so that the
optimization can be applied. So, if we want to do a similar optimization
for the mmap/mremap paths, I'm afraid we would have to redesign them.
mremap might be called on very large areas depending on the use case;
optimizing it will be considered in the future.

With the patches, the exclusive mmap_sem hold time when munmapping an
80GB address space on a machine with 32 cores of E5-2680 @ 2.70GHz
dropped from seconds to the microsecond (us) level.

munmap_test-15002 [008]   594.380138: funcgraph_entry: |  vm_munmap_zap_rlock() {
munmap_test-15002 [008]   594.380146: funcgraph_entry:      !2485684 us |    unmap_region();
munmap_test-15002 [008]   596.865836: funcgraph_exit:       !2485692 us |  }

Here the execution time of unmap_region() is used to evaluate the time
spent holding read mmap_sem; the remaining time is spent holding the
exclusive lock.

[1] https://lwn.net/Articles/753269/

Suggested-by: Michal Hocko <mhocko@kernel.org>
Suggested-by: Kirill A. Shutemov <kirill@shutemov.name>
Cc: Matthew Wilcox <willy@infradead.org>
Cc: Laurent Dufour <ldufour@linux.vnet.ibm.com>
Cc: Vlastimil Babka <vbabka@suse.cz>
Cc: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Yang Shi <yang.shi@linux.alibaba.com>
---
 mm/mmap.c | 97 +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++--
 1 file changed, 95 insertions(+), 2 deletions(-)

diff --git a/mm/mmap.c b/mm/mmap.c
index f05f49b..e92f680 100644
--- a/mm/mmap.c
+++ b/mm/mmap.c
@@ -2768,6 +2768,89 @@ static inline void munlock_vmas(struct vm_area_struct *vma,
 	}
 }
 
+/*
+ * Zap pages with read mmap_sem held
+ *
+ * uf is the list for userfaultfd
+ */
+static int do_munmap_zap_rlock(struct mm_struct *mm, unsigned long start,
+			       size_t len, struct list_head *uf)
+{
+	unsigned long end;
+	struct vm_area_struct *start_vma, *prev, *vma;
+	int ret = 0;
+
+	if (!addr_ok(start, len))
+		return -EINVAL;
+
+	len = PAGE_ALIGN(len);
+
+	end = start + len;
+
+	/*
+	 * Need write mmap_sem to split and detach vmas;
+	 * splitting up-front saves painful cleanup if it fails
+	 */
+	if (down_write_killable(&mm->mmap_sem))
+		return -EINTR;
+
+	start_vma = munmap_lookup_vma(mm, start, end);
+	if (!start_vma)
+		goto out;
+	if (IS_ERR(start_vma)) {
+		ret = PTR_ERR(start_vma);
+		goto out;
+	}
+
+	prev = start_vma->vm_prev;
+
+	if (unlikely(uf)) {
+		ret = userfaultfd_unmap_prep(start_vma, start, end, uf);
+		if (ret)
+			goto out;
+	}
+
+	/*
+	 * Unmapping vmas, which have:
+	 *   VM_HUGETLB or
+	 *   VM_PFNMAP or
+	 *   uprobes
+	 * need to be done with write mmap_sem held since they may update
+	 * vm_flags. Deal with such mappings with regular do_munmap() call.
+	 */
+	for (vma = start_vma; vma && vma->vm_start < end; vma = vma->vm_next) {
+		if ((vma->vm_file &&
+		    has_uprobes(vma, vma->vm_start, vma->vm_end)) ||
+		    (vma->vm_flags & (VM_HUGETLB | VM_PFNMAP)))
+			goto regular_path;
+	}
+
+	/* Handle mlocked vmas */
+	if (mm->locked_vm)
+		munlock_vmas(start_vma, end);
+
+	/* Detach vmas from rbtree */
+	detach_vmas_to_be_unmapped(mm, start_vma, prev, end);
+
+	downgrade_write(&mm->mmap_sem);
+
+	/* Zap mappings with read mmap_sem */
+	unmap_region(mm, start_vma, prev, start, end);
+
+	arch_unmap(mm, start_vma, start, end);
+	remove_vma_list(mm, start_vma);
+	up_read(&mm->mmap_sem);
+
+	return 0;
+
+regular_path:
+	ret = do_munmap(mm, start, len, uf);
+
+out:
+	up_write(&mm->mmap_sem);
+	return ret;
+}
+
 /* Munmap is split into 2 main parts -- this part which finds
  * what needs doing, and the areas themselves, which do the
  * work.  This now handles partial unmappings.
@@ -2829,6 +2912,17 @@ int do_munmap(struct mm_struct *mm, unsigned long start, size_t len,
 	return 0;
 }
 
+static int vm_munmap_zap_rlock(unsigned long start, size_t len)
+{
+	int ret;
+	struct mm_struct *mm = current->mm;
+	LIST_HEAD(uf);
+
+	ret = do_munmap_zap_rlock(mm, start, len, &uf);
+	userfaultfd_unmap_complete(mm, &uf);
+	return ret;
+}
+
 int vm_munmap(unsigned long start, size_t len)
 {
 	int ret;
@@ -2848,10 +2942,9 @@ int vm_munmap(unsigned long start, size_t len)
 SYSCALL_DEFINE2(munmap, unsigned long, addr, size_t, len)
 {
 	profile_munmap(addr);
-	return vm_munmap(addr, len);
+	return vm_munmap_zap_rlock(addr, len);
 }
 
-
 /*
  * Emulation of deprecated remap_file_pages() syscall.
  */
-- 
1.8.3.1



* [RFC v8 PATCH 4/5] mm: unmap VM_HUGETLB mappings with optimized path
  2018-08-15 18:49 [RFC v8 PATCH 0/5] mm: zap pages with read mmap_sem in munmap for large mapping Yang Shi
                   ` (2 preceding siblings ...)
  2018-08-15 18:49 ` [RFC v8 PATCH 3/5] mm: mmap: zap pages with read mmap_sem in munmap Yang Shi
@ 2018-08-15 18:49 ` Yang Shi
  2018-08-15 18:49 ` [RFC v8 PATCH 5/5] mm: unmap VM_PFNMAP " Yang Shi
  4 siblings, 0 replies; 24+ messages in thread
From: Yang Shi @ 2018-08-15 18:49 UTC (permalink / raw)
  To: mhocko, willy, ldufour, kirill, vbabka, akpm, peterz, mingo,
	acme, alexander.shishkin, jolsa, namhyung
  Cc: yang.shi, linux-mm, linux-kernel

When unmapping VM_HUGETLB mappings, vm flags need to be updated. Since
the vmas have been detached, it sounds safe to update vm flags with read
mmap_sem held.

Cc: Michal Hocko <mhocko@kernel.org>
Cc: Vlastimil Babka <vbabka@suse.cz>
Signed-off-by: Yang Shi <yang.shi@linux.alibaba.com>
---
 mm/mmap.c | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/mm/mmap.c b/mm/mmap.c
index e92f680..3b9f734 100644
--- a/mm/mmap.c
+++ b/mm/mmap.c
@@ -2812,7 +2812,6 @@ static int do_munmap_zap_rlock(struct mm_struct *mm, unsigned long start,
 
 	/*
 	 * Unmapping vmas, which have:
-	 *   VM_HUGETLB or
 	 *   VM_PFNMAP or
 	 *   uprobes
 	 * need to be done with write mmap_sem held since they may update
@@ -2821,7 +2820,7 @@ static int do_munmap_zap_rlock(struct mm_struct *mm, unsigned long start,
 	for (vma = start_vma; vma && vma->vm_start < end; vma = vma->vm_next) {
 		if ((vma->vm_file &&
 		    has_uprobes(vma, vma->vm_start, vma->vm_end)) ||
-		    (vma->vm_flags & (VM_HUGETLB | VM_PFNMAP)))
+		    (vma->vm_flags & VM_PFNMAP))
 			goto regular_path;
 	}
 
-- 
1.8.3.1



* [RFC v8 PATCH 5/5] mm: unmap VM_PFNMAP mappings with optimized path
  2018-08-15 18:49 [RFC v8 PATCH 0/5] mm: zap pages with read mmap_sem in munmap for large mapping Yang Shi
                   ` (3 preceding siblings ...)
  2018-08-15 18:49 ` [RFC v8 PATCH 4/5] mm: unmap VM_HUGETLB mappings with optimized path Yang Shi
@ 2018-08-15 18:49 ` Yang Shi
  4 siblings, 0 replies; 24+ messages in thread
From: Yang Shi @ 2018-08-15 18:49 UTC (permalink / raw)
  To: mhocko, willy, ldufour, kirill, vbabka, akpm, peterz, mingo,
	acme, alexander.shishkin, jolsa, namhyung
  Cc: yang.shi, linux-mm, linux-kernel

When unmapping VM_PFNMAP mappings, vm flags need to be updated. Since
the vmas have been detached, it sounds safe to update vm flags with read
mmap_sem held.

Cc: Michal Hocko <mhocko@kernel.org>
Cc: Vlastimil Babka <vbabka@suse.cz>
Signed-off-by: Yang Shi <yang.shi@linux.alibaba.com>
---
 mm/mmap.c | 13 +++++--------
 1 file changed, 5 insertions(+), 8 deletions(-)

diff --git a/mm/mmap.c b/mm/mmap.c
index 3b9f734..0a9960d 100644
--- a/mm/mmap.c
+++ b/mm/mmap.c
@@ -2811,16 +2811,13 @@ static int do_munmap_zap_rlock(struct mm_struct *mm, unsigned long start,
 	}
 
 	/*
-	 * Unmapping vmas, which have:
-	 *   VM_PFNMAP or
-	 *   uprobes
-	 * need to be done with write mmap_sem held since they may update
-	 * vm_flags. Deal with such mappings with regular do_munmap() call.
+	 * Unmapping vmas which have uprobes needs to be done with write
+	 * mmap_sem held since they may update vm_flags. Deal with such
+	 * mappings with regular do_munmap() call.
 	 */
 	for (vma = start_vma; vma && vma->vm_start < end; vma = vma->vm_next) {
-		if ((vma->vm_file &&
-		    has_uprobes(vma, vma->vm_start, vma->vm_end)) ||
-		    (vma->vm_flags & VM_PFNMAP))
+		if (vma->vm_file &&
+		    has_uprobes(vma, vma->vm_start, vma->vm_end))
 			goto regular_path;
 	}
 
-- 
1.8.3.1



* Re: [RFC v8 PATCH 3/5] mm: mmap: zap pages with read mmap_sem in munmap
  2018-08-15 18:49 ` [RFC v8 PATCH 3/5] mm: mmap: zap pages with read mmap_sem in munmap Yang Shi
@ 2018-08-15 19:16   ` Matthew Wilcox
  2018-08-15 21:09     ` Matthew Wilcox
  2018-08-22 11:11   ` Vlastimil Babka
  2018-08-22 11:19   ` Vlastimil Babka
  2 siblings, 1 reply; 24+ messages in thread
From: Matthew Wilcox @ 2018-08-15 19:16 UTC (permalink / raw)
  To: Yang Shi
  Cc: mhocko, ldufour, kirill, vbabka, akpm, peterz, mingo, acme,
	alexander.shishkin, jolsa, namhyung, linux-mm, linux-kernel

On Thu, Aug 16, 2018 at 02:49:48AM +0800, Yang Shi wrote:
> +static int do_munmap_zap_rlock(struct mm_struct *mm, unsigned long start,
> +			       size_t len, struct list_head *uf)
> +{
> +	unsigned long end;
> +	struct vm_area_struct *start_vma, *prev, *vma;
> +	int ret = 0;
> +
> +	if (!addr_ok(start, len))
> +		return -EINVAL;
> +
> +	len = PAGE_ALIGN(len);
> +
> +	end = start + len;
> +
> +	/*
> +	 * Need write mmap_sem to split vmas and detach vmas
> +	 * splitting vma up-front to save PITA to clean if it is failed
> +	 */
> +	if (down_write_killable(&mm->mmap_sem))
> +		return -EINTR;
> +
> +	start_vma = munmap_lookup_vma(mm, start, end);
> +	if (!start_vma)
> +		goto out;
> +	if (IS_ERR(start_vma)) {
> +		ret = PTR_ERR(start_vma);
> +		goto out;
> +	}
> +
> +	prev = start_vma->vm_prev;
> +
> +	if (unlikely(uf)) {
> +		ret = userfaultfd_unmap_prep(start_vma, start, end, uf);
> +		if (ret)
> +			goto out;
> +	}
> +
> +	/*
> +	 * Unmapping vmas, which have:
> +	 *   VM_HUGETLB or
> +	 *   VM_PFNMAP or
> +	 *   uprobes
> +	 * need to be done with write mmap_sem held since they may update
> +	 * vm_flags. Deal with such mappings with regular do_munmap() call.
> +	 */
> +	for (vma = start_vma; vma && vma->vm_start < end; vma = vma->vm_next) {
> +		if ((vma->vm_file &&
> +		    has_uprobes(vma, vma->vm_start, vma->vm_end)) ||
> +		    (vma->vm_flags & (VM_HUGETLB | VM_PFNMAP)))
> +			goto regular_path;

but ... that's going to redo all the work you already did!  Why not just this:

(not even compiled, and I can see a good opportunity for combining the
VM_LOCKED loop with the has_uprobes loop)

diff --git a/mm/mmap.c b/mm/mmap.c
index de699523c0b7..8d121db36efc 100644
--- a/mm/mmap.c
+++ b/mm/mmap.c
@@ -2803,6 +2803,8 @@ int do_munmap(struct mm_struct *mm, unsigned long start, size_t len,
 {
 	unsigned long end;
 	struct vm_area_struct *vma, *prev, *last;
+	int res = 0;
+	bool downgrade = false;
 
 	if ((offset_in_page(start)) || start > TASK_SIZE || len > TASK_SIZE-start)
 		return -EINVAL;
@@ -2811,17 +2813,20 @@ int do_munmap(struct mm_struct *mm, unsigned long start, size_t len,
 	if (len == 0)
 		return -EINVAL;
 
+	if (down_write_killable(&mm->mmap_sem))
+		return -EINTR;
+
 	/* Find the first overlapping VMA */
 	vma = find_vma(mm, start);
 	if (!vma)
-		return 0;
+		goto unlock;
 	prev = vma->vm_prev;
-	/* we have  start < vma->vm_end  */
+	/* we have start < vma->vm_end  */
 
 	/* if it doesn't overlap, we have nothing.. */
 	end = start + len;
 	if (vma->vm_start >= end)
-		return 0;
+		goto unlock;
 
 	/*
 	 * If we need to split any vma, do it now to save pain later.
@@ -2831,28 +2836,27 @@ int do_munmap(struct mm_struct *mm, unsigned long start, size_t len,
 	 * places tmp vma above, and higher split_vma places tmp vma below.
 	 */
 	if (start > vma->vm_start) {
-		int error;
-
 		/*
 		 * Make sure that map_count on return from munmap() will
 		 * not exceed its limit; but let map_count go just above
 		 * its limit temporarily, to help free resources as expected.
 		 */
+		res = -ENOMEM;
 		if (end < vma->vm_end && mm->map_count >= sysctl_max_map_count)
-			return -ENOMEM;
+			goto unlock;
 
-		error = __split_vma(mm, vma, start, 0);
-		if (error)
-			return error;
+		res = __split_vma(mm, vma, start, 0);
+		if (res)
+			goto unlock;
 		prev = vma;
 	}
 
 	/* Does it split the last one? */
 	last = find_vma(mm, end);
 	if (last && end > last->vm_start) {
-		int error = __split_vma(mm, last, end, 1);
-		if (error)
-			return error;
+		res = __split_vma(mm, last, end, 1);
+		if (res)
+			goto unlock;
 	}
 	vma = prev ? prev->vm_next : mm->mmap;
 
@@ -2866,9 +2870,19 @@ int do_munmap(struct mm_struct *mm, unsigned long start, size_t len,
 		 * split, despite we could. This is unlikely enough
 		 * failure that it's not worth optimizing it for.
 		 */
-		int error = userfaultfd_unmap_prep(vma, start, end, uf);
-		if (error)
-			return error;
+		res = userfaultfd_unmap_prep(vma, start, end, uf);
+		if (res)
+			goto unlock;
+	}
+
+	downgrade = true;
+
+	for (last = vma; last && last->vm_start < end; last = last->vm_next) {
+		if (last->vm_file &&
+				has_uprobes(last, last->vm_start, last->vm_end)) {
+			downgrade = false;
+			break;
+		}
 	}
 
 	/*
@@ -2885,6 +2899,9 @@ int do_munmap(struct mm_struct *mm, unsigned long start, size_t len,
 		}
 	}
 
+	if (downgrade)
+		downgrade_write(&mm->mmap_sem);
+
 	/*
 	 * Remove the vma's, and unmap the actual pages
 	 */
@@ -2896,7 +2913,14 @@ int do_munmap(struct mm_struct *mm, unsigned long start, size_t len,
 	/* Fix up all other VM information */
 	remove_vma_list(mm, vma);
 
-	return 0;
+	res = 0;
+unlock:
+	if (downgrade) {
+		up_read(&mm->mmap_sem);
+	} else {
+		up_write(&mm->mmap_sem);
+	}
+	return res;
 }
 
 int vm_munmap(unsigned long start, size_t len)
@@ -2905,11 +2929,7 @@ int vm_munmap(unsigned long start, size_t len)
 	struct mm_struct *mm = current->mm;
 	LIST_HEAD(uf);
 
-	if (down_write_killable(&mm->mmap_sem))
-		return -EINTR;
-
 	ret = do_munmap(mm, start, len, &uf);
-	up_write(&mm->mmap_sem);
 	userfaultfd_unmap_complete(mm, &uf);
 	return ret;
 }


* Re: [RFC v8 PATCH 3/5] mm: mmap: zap pages with read mmap_sem in munmap
  2018-08-15 19:16   ` Matthew Wilcox
@ 2018-08-15 21:09     ` Matthew Wilcox
  2018-08-15 21:54       ` Yang Shi
  0 siblings, 1 reply; 24+ messages in thread
From: Matthew Wilcox @ 2018-08-15 21:09 UTC (permalink / raw)
  To: Yang Shi
  Cc: mhocko, ldufour, kirill, vbabka, akpm, peterz, mingo, acme,
	alexander.shishkin, jolsa, namhyung, linux-mm, linux-kernel

On Wed, Aug 15, 2018 at 12:16:06PM -0700, Matthew Wilcox wrote:
> (not even compiled, and I can see a good opportunity for combining the
> VM_LOCKED loop with the has_uprobes loop)

I was rushing to get that sent earlier.  Here it is tidied up to
actually compile.

Note the diffstat:

 mmap.c |   71 ++++++++++++++++++++++++++++++++++++++---------------------------
 1 file changed, 42 insertions(+), 29 deletions(-)

I think that's a pretty small extra price to pay for having this improved
scalability.

diff --git a/mm/mmap.c b/mm/mmap.c
index de699523c0b7..b77bb3908f8c 100644
--- a/mm/mmap.c
+++ b/mm/mmap.c
@@ -2802,7 +2802,9 @@ int do_munmap(struct mm_struct *mm, unsigned long start, size_t len,
 	      struct list_head *uf)
 {
 	unsigned long end;
-	struct vm_area_struct *vma, *prev, *last;
+	struct vm_area_struct *vma, *prev, *last, *tmp;
+	int res = 0;
+	bool downgrade = false;
 
 	if ((offset_in_page(start)) || start > TASK_SIZE || len > TASK_SIZE-start)
 		return -EINVAL;
@@ -2811,17 +2813,20 @@ int do_munmap(struct mm_struct *mm, unsigned long start, size_t len,
 	if (len == 0)
 		return -EINVAL;
 
+	if (down_write_killable(&mm->mmap_sem))
+		return -EINTR;
+
 	/* Find the first overlapping VMA */
 	vma = find_vma(mm, start);
 	if (!vma)
-		return 0;
+		goto unlock;
 	prev = vma->vm_prev;
-	/* we have  start < vma->vm_end  */
+	/* we have start < vma->vm_end  */
 
 	/* if it doesn't overlap, we have nothing.. */
 	end = start + len;
 	if (vma->vm_start >= end)
-		return 0;
+		goto unlock;
 
 	/*
 	 * If we need to split any vma, do it now to save pain later.
@@ -2831,28 +2836,27 @@ int do_munmap(struct mm_struct *mm, unsigned long start, size_t len,
 	 * places tmp vma above, and higher split_vma places tmp vma below.
 	 */
 	if (start > vma->vm_start) {
-		int error;
-
 		/*
 		 * Make sure that map_count on return from munmap() will
 		 * not exceed its limit; but let map_count go just above
 		 * its limit temporarily, to help free resources as expected.
 		 */
+		res = -ENOMEM;
 		if (end < vma->vm_end && mm->map_count >= sysctl_max_map_count)
-			return -ENOMEM;
+			goto unlock;
 
-		error = __split_vma(mm, vma, start, 0);
-		if (error)
-			return error;
+		res = __split_vma(mm, vma, start, 0);
+		if (res)
+			goto unlock;
 		prev = vma;
 	}
 
 	/* Does it split the last one? */
 	last = find_vma(mm, end);
 	if (last && end > last->vm_start) {
-		int error = __split_vma(mm, last, end, 1);
-		if (error)
-			return error;
+		res = __split_vma(mm, last, end, 1);
+		if (res)
+			goto unlock;
 	}
 	vma = prev ? prev->vm_next : mm->mmap;
 
@@ -2866,25 +2870,31 @@ int do_munmap(struct mm_struct *mm, unsigned long start, size_t len,
 		 * split, despite we could. This is unlikely enough
 		 * failure that it's not worth optimizing it for.
 		 */
-		int error = userfaultfd_unmap_prep(vma, start, end, uf);
-		if (error)
-			return error;
+		res = userfaultfd_unmap_prep(vma, start, end, uf);
+		if (res)
+			goto unlock;
 	}
 
 	/*
 	 * unlock any mlock()ed ranges before detaching vmas
+	 * and check to see if there's any reason we might have to hold
+	 * the mmap_sem write-locked while unmapping regions.
 	 */
-	if (mm->locked_vm) {
-		struct vm_area_struct *tmp = vma;
-		while (tmp && tmp->vm_start < end) {
-			if (tmp->vm_flags & VM_LOCKED) {
-				mm->locked_vm -= vma_pages(tmp);
-				munlock_vma_pages_all(tmp);
-			}
-			tmp = tmp->vm_next;
+	downgrade = true;
+
+	for (tmp = vma; tmp && tmp->vm_start < end; tmp = tmp->vm_next) {
+		if (tmp->vm_flags & VM_LOCKED) {
+			mm->locked_vm -= vma_pages(tmp);
+			munlock_vma_pages_all(tmp);
 		}
+		if (tmp->vm_file &&
+				has_uprobes(tmp, tmp->vm_start, tmp->vm_end))
+			downgrade = false;
 	}
 
+	if (downgrade)
+		downgrade_write(&mm->mmap_sem);
+
 	/*
 	 * Remove the vma's, and unmap the actual pages
 	 */
@@ -2896,7 +2906,14 @@ int do_munmap(struct mm_struct *mm, unsigned long start, size_t len,
 	/* Fix up all other VM information */
 	remove_vma_list(mm, vma);
 
-	return 0;
+	res = 0;
+unlock:
+	if (downgrade) {
+		up_read(&mm->mmap_sem);
+	} else {
+		up_write(&mm->mmap_sem);
+	}
+	return res;
 }
 
 int vm_munmap(unsigned long start, size_t len)
@@ -2905,11 +2922,7 @@ int vm_munmap(unsigned long start, size_t len)
 	struct mm_struct *mm = current->mm;
 	LIST_HEAD(uf);
 
-	if (down_write_killable(&mm->mmap_sem))
-		return -EINTR;
-
 	ret = do_munmap(mm, start, len, &uf);
-	up_write(&mm->mmap_sem);
 	userfaultfd_unmap_complete(mm, &uf);
 	return ret;
 }


* Re: [RFC v8 PATCH 3/5] mm: mmap: zap pages with read mmap_sem in munmap
  2018-08-15 21:09     ` Matthew Wilcox
@ 2018-08-15 21:54       ` Yang Shi
  2018-08-16  2:46         ` Matthew Wilcox
  0 siblings, 1 reply; 24+ messages in thread
From: Yang Shi @ 2018-08-15 21:54 UTC (permalink / raw)
  To: Matthew Wilcox
  Cc: mhocko, ldufour, kirill, vbabka, akpm, peterz, mingo, acme,
	alexander.shishkin, jolsa, namhyung, linux-mm, linux-kernel



On 8/15/18 2:09 PM, Matthew Wilcox wrote:
> On Wed, Aug 15, 2018 at 12:16:06PM -0700, Matthew Wilcox wrote:
>> (not even compiled, and I can see a good opportunity for combining the
>> VM_LOCKED loop with the has_uprobes loop)
> I was rushing to get that sent earlier.  Here it is tidied up to
> actually compile.

Thanks for the example. Yes, I believe the code can still be compacted
to save some lines. However, the cover letter and the commit log of this
patch have elaborated on the discussion in the earlier reviews about why
we do it this way.

Or do you just mean that I don't have to call do_munmap() for the special
mappings, using the "downgrade" flag to save some cycles, since
do_munmap() would redo work that has already been done?

Thanks,
Yang

>
> Note the diffstat:
>
>   mmap.c |   71 ++++++++++++++++++++++++++++++++++++++---------------------------
>   1 file changed, 42 insertions(+), 29 deletions(-)
>
> I think that's a pretty small extra price to pay for having this improved
> scalability.
>
> diff --git a/mm/mmap.c b/mm/mmap.c
> index de699523c0b7..b77bb3908f8c 100644
> --- a/mm/mmap.c
> +++ b/mm/mmap.c
> @@ -2802,7 +2802,9 @@ int do_munmap(struct mm_struct *mm, unsigned long start, size_t len,
>   	      struct list_head *uf)
>   {
>   	unsigned long end;
> -	struct vm_area_struct *vma, *prev, *last;
> +	struct vm_area_struct *vma, *prev, *last, *tmp;
> +	int res = 0;
> +	bool downgrade = false;
>   
>   	if ((offset_in_page(start)) || start > TASK_SIZE || len > TASK_SIZE-start)
>   		return -EINVAL;
> @@ -2811,17 +2813,20 @@ int do_munmap(struct mm_struct *mm, unsigned long start, size_t len,
>   	if (len == 0)
>   		return -EINVAL;
>   
> +	if (down_write_killable(&mm->mmap_sem))
> +		return -EINTR;
> +
>   	/* Find the first overlapping VMA */
>   	vma = find_vma(mm, start);
>   	if (!vma)
> -		return 0;
> +		goto unlock;
>   	prev = vma->vm_prev;
> -	/* we have  start < vma->vm_end  */
> +	/* we have start < vma->vm_end  */
>   
>   	/* if it doesn't overlap, we have nothing.. */
>   	end = start + len;
>   	if (vma->vm_start >= end)
> -		return 0;
> +		goto unlock;
>   
>   	/*
>   	 * If we need to split any vma, do it now to save pain later.
> @@ -2831,28 +2836,27 @@ int do_munmap(struct mm_struct *mm, unsigned long start, size_t len,
>   	 * places tmp vma above, and higher split_vma places tmp vma below.
>   	 */
>   	if (start > vma->vm_start) {
> -		int error;
> -
>   		/*
>   		 * Make sure that map_count on return from munmap() will
>   		 * not exceed its limit; but let map_count go just above
>   		 * its limit temporarily, to help free resources as expected.
>   		 */
> +		res = -ENOMEM;
>   		if (end < vma->vm_end && mm->map_count >= sysctl_max_map_count)
> -			return -ENOMEM;
> +			goto unlock;
>   
> -		error = __split_vma(mm, vma, start, 0);
> -		if (error)
> -			return error;
> +		res = __split_vma(mm, vma, start, 0);
> +		if (res)
> +			goto unlock;
>   		prev = vma;
>   	}
>   
>   	/* Does it split the last one? */
>   	last = find_vma(mm, end);
>   	if (last && end > last->vm_start) {
> -		int error = __split_vma(mm, last, end, 1);
> -		if (error)
> -			return error;
> +		res = __split_vma(mm, last, end, 1);
> +		if (res)
> +			goto unlock;
>   	}
>   	vma = prev ? prev->vm_next : mm->mmap;
>   
> @@ -2866,25 +2870,31 @@ int do_munmap(struct mm_struct *mm, unsigned long start, size_t len,
>   		 * split, despite we could. This is unlikely enough
>   		 * failure that it's not worth optimizing it for.
>   		 */
> -		int error = userfaultfd_unmap_prep(vma, start, end, uf);
> -		if (error)
> -			return error;
> +		res = userfaultfd_unmap_prep(vma, start, end, uf);
> +		if (res)
> +			goto unlock;
>   	}
>   
>   	/*
>   	 * unlock any mlock()ed ranges before detaching vmas
> +	 * and check to see if there's any reason we might have to hold
> +	 * the mmap_sem write-locked while unmapping regions.
>   	 */
> -	if (mm->locked_vm) {
> -		struct vm_area_struct *tmp = vma;
> -		while (tmp && tmp->vm_start < end) {
> -			if (tmp->vm_flags & VM_LOCKED) {
> -				mm->locked_vm -= vma_pages(tmp);
> -				munlock_vma_pages_all(tmp);
> -			}
> -			tmp = tmp->vm_next;
> +	downgrade = true;
> +
> +	for (tmp = vma; tmp && tmp->vm_start < end; tmp = tmp->vm_next) {
> +		if (tmp->vm_flags & VM_LOCKED) {
> +			mm->locked_vm -= vma_pages(tmp);
> +			munlock_vma_pages_all(tmp);
>   		}
> +		if (tmp->vm_file &&
> +				has_uprobes(tmp, tmp->vm_start, tmp->vm_end))
> +			downgrade = false;
>   	}
>   
> +	if (downgrade)
> +		downgrade_write(&mm->mmap_sem);
> +
>   	/*
>   	 * Remove the vma's, and unmap the actual pages
>   	 */
> @@ -2896,7 +2906,14 @@ int do_munmap(struct mm_struct *mm, unsigned long start, size_t len,
>   	/* Fix up all other VM information */
>   	remove_vma_list(mm, vma);
>   
> -	return 0;
> +	res = 0;
> +unlock:
> +	if (downgrade) {
> +		up_read(&mm->mmap_sem);
> +	} else {
> +		up_write(&mm->mmap_sem);
> +	}
> +	return res;
>   }
>   
>   int vm_munmap(unsigned long start, size_t len)
> @@ -2905,11 +2922,7 @@ int vm_munmap(unsigned long start, size_t len)
>   	struct mm_struct *mm = current->mm;
>   	LIST_HEAD(uf);
>   
> -	if (down_write_killable(&mm->mmap_sem))
> -		return -EINTR;
> -
>   	ret = do_munmap(mm, start, len, &uf);
> -	up_write(&mm->mmap_sem);
>   	userfaultfd_unmap_complete(mm, &uf);
>   	return ret;
>   }



* Re: [RFC v8 PATCH 3/5] mm: mmap: zap pages with read mmap_sem in munmap
  2018-08-15 21:54       ` Yang Shi
@ 2018-08-16  2:46         ` Matthew Wilcox
  2018-08-16  6:11           ` Yang Shi
  0 siblings, 1 reply; 24+ messages in thread
From: Matthew Wilcox @ 2018-08-16  2:46 UTC (permalink / raw)
  To: Yang Shi
  Cc: mhocko, ldufour, kirill, vbabka, akpm, peterz, mingo, acme,
	alexander.shishkin, jolsa, namhyung, linux-mm, linux-kernel

On Wed, Aug 15, 2018 at 02:54:13PM -0700, Yang Shi wrote:
> 
> 
> On 8/15/18 2:09 PM, Matthew Wilcox wrote:
> > On Wed, Aug 15, 2018 at 12:16:06PM -0700, Matthew Wilcox wrote:
> > > (not even compiled, and I can see a good opportunity for combining the
> > > VM_LOCKED loop with the has_uprobes loop)
> > I was rushing to get that sent earlier.  Here it is tidied up to
> > actually compile.
> 
> Thanks for the example. Yes, I believe the code can still be compacted to
> save some lines. However, the cover letter and the commit log of this patch
> have elaborated on the discussion in the earlier reviews about why we do it
> this way.

You mean the other callers which need to hold mmap_sem write-locked for
longer?  I hadn't really considered those; how about this?

 mmap.c |   47 +++++++++++++++++++++++++++++------------------
 1 file changed, 29 insertions(+), 18 deletions(-)

diff --git a/mm/mmap.c b/mm/mmap.c
index de699523c0b7..06dc31d1da8c 100644
--- a/mm/mmap.c
+++ b/mm/mmap.c
@@ -2798,11 +2798,11 @@ int split_vma(struct mm_struct *mm, struct vm_area_struct *vma,
  * work.  This now handles partial unmappings.
  * Jeremy Fitzhardinge <jeremy@goop.org>
  */
-int do_munmap(struct mm_struct *mm, unsigned long start, size_t len,
-	      struct list_head *uf)
+static int __do_munmap(struct mm_struct *mm, unsigned long start, size_t len,
+	      struct list_head *uf, bool downgrade)
 {
 	unsigned long end;
-	struct vm_area_struct *vma, *prev, *last;
+	struct vm_area_struct *vma, *prev, *last, *tmp;
 
 	if ((offset_in_page(start)) || start > TASK_SIZE || len > TASK_SIZE-start)
 		return -EINVAL;
@@ -2816,7 +2816,7 @@ int do_munmap(struct mm_struct *mm, unsigned long start, size_t len,
 	if (!vma)
 		return 0;
 	prev = vma->vm_prev;
-	/* we have  start < vma->vm_end  */
+	/* we have start < vma->vm_end  */
 
 	/* if it doesn't overlap, we have nothing.. */
 	end = start + len;
@@ -2873,18 +2873,22 @@ int do_munmap(struct mm_struct *mm, unsigned long start, size_t len,
 
 	/*
 	 * unlock any mlock()ed ranges before detaching vmas
+	 * and check to see if there's any reason we might have to hold
+	 * the mmap_sem write-locked while unmapping regions.
 	 */
-	if (mm->locked_vm) {
-		struct vm_area_struct *tmp = vma;
-		while (tmp && tmp->vm_start < end) {
-			if (tmp->vm_flags & VM_LOCKED) {
-				mm->locked_vm -= vma_pages(tmp);
-				munlock_vma_pages_all(tmp);
-			}
-			tmp = tmp->vm_next;
+	for (tmp = vma; tmp && tmp->vm_start < end; tmp = tmp->vm_next) {
+		if (tmp->vm_flags & VM_LOCKED) {
+			mm->locked_vm -= vma_pages(tmp);
+			munlock_vma_pages_all(tmp);
 		}
+		if (tmp->vm_file &&
+				has_uprobes(tmp, tmp->vm_start, tmp->vm_end))
+			downgrade = false;
 	}
 
+	if (downgrade)
+		downgrade_write(&mm->mmap_sem);
+
 	/*
 	 * Remove the vma's, and unmap the actual pages
 	 */
@@ -2896,7 +2900,13 @@ int do_munmap(struct mm_struct *mm, unsigned long start, size_t len,
 	/* Fix up all other VM information */
 	remove_vma_list(mm, vma);
 
-	return 0;
+	return downgrade ? 1 : 0;
+}
+
+int do_munmap(struct mm_struct *mm, unsigned long start, size_t len,
+		struct list_head *uf)
+{
+	return __do_munmap(mm, start, len, uf, false);
 }
 
 int vm_munmap(unsigned long start, size_t len)
@@ -2905,11 +2915,12 @@ int vm_munmap(unsigned long start, size_t len)
 	struct mm_struct *mm = current->mm;
 	LIST_HEAD(uf);
 
-	if (down_write_killable(&mm->mmap_sem))
-		return -EINTR;
-
-	ret = do_munmap(mm, start, len, &uf);
-	up_write(&mm->mmap_sem);
+	down_write(&mm->mmap_sem);
+	ret = __do_munmap(mm, start, len, &uf, true);
+	if (ret == 1) {
+		up_read(&mm->mmap_sem);
+		ret = 0;
+	} else
+		up_write(&mm->mmap_sem);
 	userfaultfd_unmap_complete(mm, &uf);
 	return ret;
 }


* Re: [RFC v8 PATCH 3/5] mm: mmap: zap pages with read mmap_sem in munmap
  2018-08-16  2:46         ` Matthew Wilcox
@ 2018-08-16  6:11           ` Yang Shi
  0 siblings, 0 replies; 24+ messages in thread
From: Yang Shi @ 2018-08-16  6:11 UTC (permalink / raw)
  To: Matthew Wilcox
  Cc: mhocko, ldufour, kirill, vbabka, akpm, peterz, mingo, acme,
	alexander.shishkin, jolsa, namhyung, linux-mm, linux-kernel



On 8/15/18 7:46 PM, Matthew Wilcox wrote:
> On Wed, Aug 15, 2018 at 02:54:13PM -0700, Yang Shi wrote:
>>
>> On 8/15/18 2:09 PM, Matthew Wilcox wrote:
>>> On Wed, Aug 15, 2018 at 12:16:06PM -0700, Matthew Wilcox wrote:
>>>> (not even compiled, and I can see a good opportunity for combining the
>>>> VM_LOCKED loop with the has_uprobes loop)
>>> I was rushing to get that sent earlier.  Here it is tidied up to
>>> actually compile.
>> Thanks for the example. Yes, I believe the code can still be compacted to
>> save some lines. However, the cover letter and the commit log of this patch
>> have elaborated on the discussion in the earlier reviews about why we do it
>> this way.
> You mean the other callers which need to hold mmap_sem write-locked for
> longer?  I hadn't really considered those; how about this?

Thanks. Yes, this is the other potential implementation. My rationale
for a separate function for the optimized path is that I would prefer to
optimize this step by step, starting with some relatively simple
approach, then adding enhancements on top of it.

And I would prefer to keep the current implementation of do_munmap,
since it is called elsewhere and might be called by the optimized path
as a fallback, until we are confident enough that the optimization
doesn't introduce regressions.

It comes down to a separate function vs. an extra parameter. We do save
some lines with an extra parameter instead of a separate function.

Thanks,
Yang

>
>   mmap.c |   47 +++++++++++++++++++++++++++++------------------
>   1 file changed, 29 insertions(+), 18 deletions(-)
>
> diff --git a/mm/mmap.c b/mm/mmap.c
> index de699523c0b7..06dc31d1da8c 100644
> --- a/mm/mmap.c
> +++ b/mm/mmap.c
> @@ -2798,11 +2798,11 @@ int split_vma(struct mm_struct *mm, struct vm_area_struct *vma,
>    * work.  This now handles partial unmappings.
>    * Jeremy Fitzhardinge <jeremy@goop.org>
>    */
> -int do_munmap(struct mm_struct *mm, unsigned long start, size_t len,
> -	      struct list_head *uf)
> +static int __do_munmap(struct mm_struct *mm, unsigned long start, size_t len,
> +	      struct list_head *uf, bool downgrade)
>   {
>   	unsigned long end;
> -	struct vm_area_struct *vma, *prev, *last;
> +	struct vm_area_struct *vma, *prev, *last, *tmp;
>   
>   	if ((offset_in_page(start)) || start > TASK_SIZE || len > TASK_SIZE-start)
>   		return -EINVAL;
> @@ -2816,7 +2816,7 @@ int do_munmap(struct mm_struct *mm, unsigned long start, size_t len,
>   	if (!vma)
>   		return 0;
>   	prev = vma->vm_prev;
> -	/* we have  start < vma->vm_end  */
> +	/* we have start < vma->vm_end  */
>   
>   	/* if it doesn't overlap, we have nothing.. */
>   	end = start + len;
> @@ -2873,18 +2873,22 @@ int do_munmap(struct mm_struct *mm, unsigned long start, size_t len,
>   
>   	/*
>   	 * unlock any mlock()ed ranges before detaching vmas
> +	 * and check to see if there's any reason we might have to hold
> +	 * the mmap_sem write-locked while unmapping regions.
>   	 */
> -	if (mm->locked_vm) {
> -		struct vm_area_struct *tmp = vma;
> -		while (tmp && tmp->vm_start < end) {
> -			if (tmp->vm_flags & VM_LOCKED) {
> -				mm->locked_vm -= vma_pages(tmp);
> -				munlock_vma_pages_all(tmp);
> -			}
> -			tmp = tmp->vm_next;
> +	for (tmp = vma; tmp && tmp->vm_start < end; tmp = tmp->vm_next) {
> +		if (tmp->vm_flags & VM_LOCKED) {
> +			mm->locked_vm -= vma_pages(tmp);
> +			munlock_vma_pages_all(tmp);
>   		}
> +		if (tmp->vm_file &&
> +				has_uprobes(tmp, tmp->vm_start, tmp->vm_end))
> +			downgrade = false;
>   	}
>   
> +	if (downgrade)
> +		downgrade_write(&mm->mmap_sem);
> +
>   	/*
>   	 * Remove the vma's, and unmap the actual pages
>   	 */
> @@ -2896,7 +2900,13 @@ int do_munmap(struct mm_struct *mm, unsigned long start, size_t len,
>   	/* Fix up all other VM information */
>   	remove_vma_list(mm, vma);
>   
> -	return 0;
> +	return downgrade ? 1 : 0;
> +}
> +
> +int do_munmap(struct mm_struct *mm, unsigned long start, size_t len,
> +		struct list_head *uf)
> +{
> +	return __do_munmap(mm, start, len, uf, false);
>   }
>   
>   int vm_munmap(unsigned long start, size_t len)
> @@ -2905,11 +2915,12 @@ int vm_munmap(unsigned long start, size_t len)
>   	struct mm_struct *mm = current->mm;
>   	LIST_HEAD(uf);
>   
> -	if (down_write_killable(&mm->mmap_sem))
> -		return -EINTR;
> -
> -	ret = do_munmap(mm, start, len, &uf);
> -	up_write(&mm->mmap_sem);
> +	down_write(&mm->mmap_sem);
> +	ret = __do_munmap(mm, start, len, &uf, true);
> +	if (ret == 1) {
> +		up_read(&mm->mmap_sem);
> +		ret = 0;
> +	} else
> +		up_write(&mm->mmap_sem);
>   	userfaultfd_unmap_complete(mm, &uf);
>   	return ret;
>   }



* Re: [RFC v8 PATCH 2/5] uprobes: introduce has_uprobes helper
  2018-08-15 18:49 ` [RFC v8 PATCH 2/5] uprobes: introduce has_uprobes helper Yang Shi
@ 2018-08-22 10:55   ` Vlastimil Babka
  2018-08-22 15:07     ` Srikar Dronamraju
  0 siblings, 1 reply; 24+ messages in thread
From: Vlastimil Babka @ 2018-08-22 10:55 UTC (permalink / raw)
  To: Yang Shi, mhocko, willy, ldufour, kirill, akpm, peterz, mingo,
	acme, alexander.shishkin, jolsa, namhyung
  Cc: linux-mm, linux-kernel

On 08/15/2018 08:49 PM, Yang Shi wrote:
> We need to check if the mm or vma has uprobes in the following patch,
> to decide whether a vma can be unmapped while holding read mmap_sem.
> The checks and pre-conditions used by uprobe_munmap() look suitable for
> this purpose.
> 
> Extract those checks into a helper function, has_uprobes().
> 
> Cc: Peter Zijlstra <peterz@infradead.org>
> Cc: Ingo Molnar <mingo@redhat.com>
> Cc: Arnaldo Carvalho de Melo <acme@kernel.org>
> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
> Cc: Jiri Olsa <jolsa@redhat.com>
> Cc: Namhyung Kim <namhyung@kernel.org>
> Cc: Vlastimil Babka <vbabka@suse.cz>
> Signed-off-by: Yang Shi <yang.shi@linux.alibaba.com>
> ---
>  include/linux/uprobes.h |  7 +++++++
>  kernel/events/uprobes.c | 23 ++++++++++++++++-------
>  2 files changed, 23 insertions(+), 7 deletions(-)
> 
> diff --git a/include/linux/uprobes.h b/include/linux/uprobes.h
> index 0a294e9..418764e 100644
> --- a/include/linux/uprobes.h
> +++ b/include/linux/uprobes.h
> @@ -149,6 +149,8 @@ struct uprobes_state {
>  extern bool arch_uprobe_ignore(struct arch_uprobe *aup, struct pt_regs *regs);
>  extern void arch_uprobe_copy_ixol(struct page *page, unsigned long vaddr,
>  					 void *src, unsigned long len);
> +extern bool has_uprobes(struct vm_area_struct *vma, unsigned long start,
> +			unsigned long end);
>  #else /* !CONFIG_UPROBES */
>  struct uprobes_state {
>  };
> @@ -203,5 +205,10 @@ static inline void uprobe_copy_process(struct task_struct *t, unsigned long flag
>  static inline void uprobe_clear_state(struct mm_struct *mm)
>  {
>  }
> +static inline bool has_uprobes(struct vm_area_struct *vma, unsigned long start,
> +			       unsigned long end)
> +{
> +	return false;
> +}
>  #endif /* !CONFIG_UPROBES */
>  #endif	/* _LINUX_UPROBES_H */
> diff --git a/kernel/events/uprobes.c b/kernel/events/uprobes.c
> index aed1ba5..568481c 100644
> --- a/kernel/events/uprobes.c
> +++ b/kernel/events/uprobes.c
> @@ -1114,22 +1114,31 @@ int uprobe_mmap(struct vm_area_struct *vma)
>  	return !!n;
>  }
>  
> -/*
> - * Called in context of a munmap of a vma.
> - */
> -void uprobe_munmap(struct vm_area_struct *vma, unsigned long start, unsigned long end)
> +bool
> +has_uprobes(struct vm_area_struct *vma, unsigned long start, unsigned long end)

The name is not really great...

>  {
>  	if (no_uprobe_events() || !valid_vma(vma, false))
> -		return;
> +		return false;
>  
>  	if (!atomic_read(&vma->vm_mm->mm_users)) /* called by mmput() ? */
> -		return;
> +		return false;
>  
>  	if (!test_bit(MMF_HAS_UPROBES, &vma->vm_mm->flags) ||
>  	     test_bit(MMF_RECALC_UPROBES, &vma->vm_mm->flags))

This means that vma might have uprobes, but since RECALC is already set,
we don't need to set it again. That's different from "has uprobes".

Perhaps something like vma_needs_recalc_uprobes() ?

But I also worry there might be a race where we initially return false
because of MMF_RECALC_UPROBES, then the flag is cleared while vmas
still have uprobes, then we downgrade mmap_sem and skip uprobe_munmap().
It should be checked whether e.g. mmap_sem and vma visibility changes
protect this case from happening.
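
Roughly the interleaving I have in mind (hypothetical; I'm assuming
the flag gets cleared by mmf_recalc_uprobes() from the page fault
path):

  munmap path                          page fault path
  -----------                          ---------------
  has_uprobes() sees
  MMF_RECALC_UPROBES set
    -> returns false
                                       find_active_uprobe()
                                         -> mmf_recalc_uprobes()
                                            clears MMF_RECALC_UPROBES
                                            (some vmas still have uprobes)
  downgrade_write(&mm->mmap_sem)
  uprobe_munmap() is skipped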

> -		return;
> +		return false;
>  
>  	if (vma_has_uprobes(vma, start, end))
> +		return true;
> +
> +	return false;

Simpler:
	return vma_has_uprobes(vma, start, end);
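
i.e. together with the checks from the hunk above, the whole helper
would collapse to (sketch):

bool
has_uprobes(struct vm_area_struct *vma, unsigned long start, unsigned long end)
{
	if (no_uprobe_events() || !valid_vma(vma, false))
		return false;

	if (!atomic_read(&vma->vm_mm->mm_users)) /* called by mmput() ? */
		return false;

	if (!test_bit(MMF_HAS_UPROBES, &vma->vm_mm->flags) ||
	     test_bit(MMF_RECALC_UPROBES, &vma->vm_mm->flags))
		return false;

	return vma_has_uprobes(vma, start, end);
}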

> +}
> +
> +/*
> + * Called in context of a munmap of a vma.
> + */
> +void uprobe_munmap(struct vm_area_struct *vma, unsigned long start, unsigned long end)
> +{
> +	if (has_uprobes(vma, start, end))
>  		set_bit(MMF_RECALC_UPROBES, &vma->vm_mm->flags);
>  }
>  
> 



* Re: [RFC v8 PATCH 3/5] mm: mmap: zap pages with read mmap_sem in munmap
  2018-08-15 18:49 ` [RFC v8 PATCH 3/5] mm: mmap: zap pages with read mmap_sem in munmap Yang Shi
  2018-08-15 19:16   ` Matthew Wilcox
@ 2018-08-22 11:11   ` Vlastimil Babka
  2018-08-22 19:20     ` Yang Shi
  2018-08-22 11:19   ` Vlastimil Babka
  2 siblings, 1 reply; 24+ messages in thread
From: Vlastimil Babka @ 2018-08-22 11:11 UTC (permalink / raw)
  To: Yang Shi, mhocko, willy, ldufour, kirill, akpm, peterz, mingo,
	acme, alexander.shishkin, jolsa, namhyung
  Cc: linux-mm, linux-kernel

On 08/15/2018 08:49 PM, Yang Shi wrote:

> +	start_vma = munmap_lookup_vma(mm, start, end);
> +	if (!start_vma)
> +		goto out;
> +	if (IS_ERR(start_vma)) {
> +		ret = PTR_ERR(start_vma);
> +		goto out;
> +	}
> +
> +	prev = start_vma->vm_prev;
> +
> +	if (unlikely(uf)) {
> +		ret = userfaultfd_unmap_prep(start_vma, start, end, uf);
> +		if (ret)
> +			goto out;
> +	}
> +

Are you sure it's ok to redo this in case of a goto to the regular
path? The preparations have some side-effects... I would rather move
this after the regular path check?
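
Something like this, reusing the names from the patch (just a sketch;
"<needs regular path>" stands for whatever condition forces the
fallback):

	start_vma = munmap_lookup_vma(mm, start, end);
	if (!start_vma)
		goto out;
	if (IS_ERR(start_vma)) {
		ret = PTR_ERR(start_vma);
		goto out;
	}

	prev = start_vma->vm_prev;

	/* decide about the fallback before doing anything with
	 * side-effects */
	if (<needs regular path>)
		goto regular_path;

	if (unlikely(uf)) {
		ret = userfaultfd_unmap_prep(start_vma, start, end, uf);
		if (ret)
			goto out;
	}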


* Re: [RFC v8 PATCH 3/5] mm: mmap: zap pages with read mmap_sem in munmap
  2018-08-15 18:49 ` [RFC v8 PATCH 3/5] mm: mmap: zap pages with read mmap_sem in munmap Yang Shi
  2018-08-15 19:16   ` Matthew Wilcox
  2018-08-22 11:11   ` Vlastimil Babka
@ 2018-08-22 11:19   ` Vlastimil Babka
  2018-08-22 20:45     ` Yang Shi
  2 siblings, 1 reply; 24+ messages in thread
From: Vlastimil Babka @ 2018-08-22 11:19 UTC (permalink / raw)
  To: Yang Shi, mhocko, willy, ldufour, kirill, akpm, peterz, mingo,
	acme, alexander.shishkin, jolsa, namhyung
  Cc: linux-mm, linux-kernel

On 08/15/2018 08:49 PM, Yang Shi wrote:
> +	downgrade_write(&mm->mmap_sem);
> +
> +	/* Zap mappings with read mmap_sem */
> +	unmap_region(mm, start_vma, prev, start, end);
> +
> +	arch_unmap(mm, start_vma, start, end);

Hmm, did you check that all architectures' arch_unmap() is safe with
read mmap_sem instead of write mmap_sem? E.g. x86 does
mpx_notify_unmap() there where I would be far from sure at first glance...

> +	remove_vma_list(mm, start_vma);
> +	up_read(&mm->mmap_sem);



* Re: [RFC v8 PATCH 2/5] uprobes: introduce has_uprobes helper
  2018-08-22 10:55   ` Vlastimil Babka
@ 2018-08-22 15:07     ` Srikar Dronamraju
  2018-08-22 20:51       ` Yang Shi
  2018-08-23 15:15       ` Oleg Nesterov
  0 siblings, 2 replies; 24+ messages in thread
From: Srikar Dronamraju @ 2018-08-22 15:07 UTC (permalink / raw)
  To: Vlastimil Babka
  Cc: Yang Shi, mhocko, willy, ldufour, kirill, akpm, peterz, mingo,
	acme, alexander.shishkin, jolsa, namhyung, linux-mm,
	Oleg Nesterov, liu.song.a23, ravi.bangoria, linux-kernel

* Vlastimil Babka <vbabka@suse.cz> [2018-08-22 12:55:59]:

> On 08/15/2018 08:49 PM, Yang Shi wrote:
> > We need to check if the mm or vma has uprobes in the following patch,
> > to decide whether a vma can be unmapped while holding read mmap_sem.
> > The checks and pre-conditions used by uprobe_munmap() look suitable for
> > this purpose.
> > 
> > Extract those checks into a helper function, has_uprobes().
> > 
> > Cc: Peter Zijlstra <peterz@infradead.org>
> > Cc: Ingo Molnar <mingo@redhat.com>
> > Cc: Arnaldo Carvalho de Melo <acme@kernel.org>
> > Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
> > Cc: Jiri Olsa <jolsa@redhat.com>
> > Cc: Namhyung Kim <namhyung@kernel.org>
> > Cc: Vlastimil Babka <vbabka@suse.cz>
> > Signed-off-by: Yang Shi <yang.shi@linux.alibaba.com>
> > ---
> >  include/linux/uprobes.h |  7 +++++++
> >  kernel/events/uprobes.c | 23 ++++++++++++++++-------
> >  2 files changed, 23 insertions(+), 7 deletions(-)
> > 
> > diff --git a/include/linux/uprobes.h b/include/linux/uprobes.h
> > index 0a294e9..418764e 100644
> > --- a/include/linux/uprobes.h
> > +++ b/include/linux/uprobes.h
> > @@ -149,6 +149,8 @@ struct uprobes_state {
> >  extern bool arch_uprobe_ignore(struct arch_uprobe *aup, struct pt_regs *regs);
> >  extern void arch_uprobe_copy_ixol(struct page *page, unsigned long vaddr,
> >  					 void *src, unsigned long len);
> > +extern bool has_uprobes(struct vm_area_struct *vma, unsigned long start,
> > +			unsigned long end);
> >  #else /* !CONFIG_UPROBES */
> >  struct uprobes_state {
> >  };
> > @@ -203,5 +205,10 @@ static inline void uprobe_copy_process(struct task_struct *t, unsigned long flag
> >  static inline void uprobe_clear_state(struct mm_struct *mm)
> >  {
> >  }
> > +static inline bool has_uprobes(struct vm_area_struct *vma, unsigned long start,
> > +			       unsigned long end)
> > +{
> > +	return false;
> > +}
> >  #endif /* !CONFIG_UPROBES */
> >  #endif	/* _LINUX_UPROBES_H */
> > diff --git a/kernel/events/uprobes.c b/kernel/events/uprobes.c
> > index aed1ba5..568481c 100644
> > --- a/kernel/events/uprobes.c
> > +++ b/kernel/events/uprobes.c
> > @@ -1114,22 +1114,31 @@ int uprobe_mmap(struct vm_area_struct *vma)
> >  	return !!n;
> >  }
> >  
> > -/*
> > - * Called in context of a munmap of a vma.
> > - */
> > -void uprobe_munmap(struct vm_area_struct *vma, unsigned long start, unsigned long end)
> > +bool
> > +has_uprobes(struct vm_area_struct *vma, unsigned long start, unsigned long end)
> 
> The name is not really great...

I too feel the name is not apt. 
Can you make this vma_has_uprobes and convert the current
vma_has_uprobes to __vma_has_uprobes?

> 
> >  {
> >  	if (no_uprobe_events() || !valid_vma(vma, false))
> > -		return;
> > +		return false;
> >  
> >  	if (!atomic_read(&vma->vm_mm->mm_users)) /* called by mmput() ? */
> > -		return;
> > +		return false;
> >  
> >  	if (!test_bit(MMF_HAS_UPROBES, &vma->vm_mm->flags) ||
> >  	     test_bit(MMF_RECALC_UPROBES, &vma->vm_mm->flags))
> 
> This means that vma might have uprobes, but since RECALC is already set,
> we don't need to set it again. That's different from "has uprobes".
> 
> Perhaps something like vma_needs_recalc_uprobes() ?
> 
> But I also worry there might be a race where we initially return false
> because of MMF_RECALC_UPROBES, then the flag is cleared while vmas
> still have uprobes, then we downgrade mmap_sem and skip uprobe_munmap().
> It should be checked whether e.g. mmap_sem and vma visibility changes
> protect this case from happening.

That is a very good observation.

One thing we can probably do is pass an extra parameter to
has_uprobes() that tells it whether to skip this check, such that when
we call from uprobe_munmap() we continue as is, but when calling from
do_munmap_zap_rlock() we skip the check.


> 
> > -		return;
> > +		return false;
> >  
> >  	if (vma_has_uprobes(vma, start, end))
> > +		return true;
> > +
> > +	return false;
> 
> Simpler:
> 	return vma_has_uprobes(vma, start, end);
> 
> > +}
> > +
> > +/*
> > + * Called in context of a munmap of a vma.
> > + */
> > +void uprobe_munmap(struct vm_area_struct *vma, unsigned long start, unsigned long end)
> > +{
> > +	if (has_uprobes(vma, start, end))
> >  		set_bit(MMF_RECALC_UPROBES, &vma->vm_mm->flags);
> >  }

-- 
Thanks and Regards
Srikar Dronamraju



* Re: [RFC v8 PATCH 3/5] mm: mmap: zap pages with read mmap_sem in munmap
  2018-08-22 11:11   ` Vlastimil Babka
@ 2018-08-22 19:20     ` Yang Shi
  0 siblings, 0 replies; 24+ messages in thread
From: Yang Shi @ 2018-08-22 19:20 UTC (permalink / raw)
  To: Vlastimil Babka, mhocko, willy, ldufour, kirill, akpm, peterz,
	mingo, acme, alexander.shishkin, jolsa, namhyung
  Cc: linux-mm, linux-kernel



On 8/22/18 4:11 AM, Vlastimil Babka wrote:
> On 08/15/2018 08:49 PM, Yang Shi wrote:
>
>> +	start_vma = munmap_lookup_vma(mm, start, end);
>> +	if (!start_vma)
>> +		goto out;
>> +	if (IS_ERR(start_vma)) {
>> +		ret = PTR_ERR(start_vma);
>> +		goto out;
>> +	}
>> +
>> +	prev = start_vma->vm_prev;
>> +
>> +	if (unlikely(uf)) {
>> +		ret = userfaultfd_unmap_prep(start_vma, start, end, uf);
>> +		if (ret)
>> +			goto out;
>> +	}
>> +
> Are you sure it's ok to redo this in case of a goto to the regular
> path? The preparations have some side-effects... I would rather move
> this after the regular path check?

This preparation sets vma->vm_userfaultfd_ctx.ctx for each vma. But,
before doing this, it calls has_unmap_ctx() to check whether the ctx
has already been set. If it has, it just skips the vma. So redoing it
sounds ok, right?
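
The relevant part of userfaultfd_unmap_prep() is roughly this
(paraphrased from memory; see fs/userfaultfd.c for the real code):

	for ( ; vma && vma->vm_start < end; vma = vma->vm_next) {
		struct userfaultfd_ctx *ctx = vma->vm_userfaultfd_ctx.ctx;

		/* skip vmas without an unmap ctx, and vmas whose ctx
		 * was already queued by a previous pass */
		if (!ctx || !(ctx->features & UFFD_FEATURE_EVENT_UNMAP) ||
		    has_unmap_ctx(ctx, unmaps, start, end))
			continue;

		/* first pass only: allocate and queue the unmap ctx */
		...
	}

so redoing it on the fallback path should be a no-op for the vmas that
were already prepared.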




* Re: [RFC v8 PATCH 3/5] mm: mmap: zap pages with read mmap_sem in munmap
  2018-08-22 11:19   ` Vlastimil Babka
@ 2018-08-22 20:45     ` Yang Shi
  2018-08-22 21:10       ` Kirill A. Shutemov
  0 siblings, 1 reply; 24+ messages in thread
From: Yang Shi @ 2018-08-22 20:45 UTC (permalink / raw)
  To: Vlastimil Babka, mhocko, willy, ldufour, kirill, akpm, peterz,
	mingo, acme, alexander.shishkin, jolsa, namhyung
  Cc: linux-mm, linux-kernel



On 8/22/18 4:19 AM, Vlastimil Babka wrote:
> On 08/15/2018 08:49 PM, Yang Shi wrote:
>> +	downgrade_write(&mm->mmap_sem);
>> +
>> +	/* Zap mappings with read mmap_sem */
>> +	unmap_region(mm, start_vma, prev, start, end);
>> +
>> +	arch_unmap(mm, start_vma, start, end);
> Hmm, did you check that all architectures' arch_unmap() is safe with
> read mmap_sem instead of write mmap_sem? E.g. x86 does
> mpx_notify_unmap() there where I would be far from sure at first glance...

Yes, I'm also not quite sure whether it is 100% safe. I was trying to
move this before downgrade_write(); however, I'm not sure that is ok
either, so I kept the calling sequence.

Among the architectures, only x86 and ppc really do something here. PPC
just uses it for vdso unmap, which should only happen during process
exit, so it sounds safe.

For x86, mpx_notify_unmap() looks like it finally zaps the VM_MPX vmas
in the bounds table range with zap_page_range() and doesn't update vm
flags, so it sounds ok to me: since the vmas have been detached, nobody
can find them. But I'm not familiar with the details of mpx; maybe
Kirill could help to confirm this?
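
For reference, the x86 arch_unmap() in question is roughly this
(quoted from memory from arch/x86/include/asm/mmu_context.h, so treat
it as a sketch):

static inline void arch_unmap(struct mm_struct *mm,
			      struct vm_area_struct *vma,
			      unsigned long start, unsigned long end)
{
	/* MPX is the only interesting work here */
	if (unlikely(cpu_feature_enabled(X86_FEATURE_MPX)))
		mpx_notify_unmap(mm, vma, start, end);
}

so the question reduces to whether mpx_notify_unmap() is safe with read
mmap_sem.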

Thanks,
Yang

>
>> +	remove_vma_list(mm, start_vma);
>> +	up_read(&mm->mmap_sem);



* Re: [RFC v8 PATCH 2/5] uprobes: introduce has_uprobes helper
  2018-08-22 15:07     ` Srikar Dronamraju
@ 2018-08-22 20:51       ` Yang Shi
  2018-08-23 15:15       ` Oleg Nesterov
  1 sibling, 0 replies; 24+ messages in thread
From: Yang Shi @ 2018-08-22 20:51 UTC (permalink / raw)
  To: Srikar Dronamraju, Vlastimil Babka
  Cc: mhocko, willy, ldufour, kirill, akpm, peterz, mingo, acme,
	alexander.shishkin, jolsa, namhyung, linux-mm, Oleg Nesterov,
	liu.song.a23, ravi.bangoria, linux-kernel



On 8/22/18 8:07 AM, Srikar Dronamraju wrote:
> * Vlastimil Babka <vbabka@suse.cz> [2018-08-22 12:55:59]:
>
>> On 08/15/2018 08:49 PM, Yang Shi wrote:
>>> We need to check if the mm or vma has uprobes in the following patch,
>>> to decide whether a vma can be unmapped while holding read mmap_sem.
>>> The checks and pre-conditions used by uprobe_munmap() look suitable for
>>> this purpose.
>>>
>>> Extract those checks into a helper function, has_uprobes().
>>>
>>> Cc: Peter Zijlstra <peterz@infradead.org>
>>> Cc: Ingo Molnar <mingo@redhat.com>
>>> Cc: Arnaldo Carvalho de Melo <acme@kernel.org>
>>> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
>>> Cc: Jiri Olsa <jolsa@redhat.com>
>>> Cc: Namhyung Kim <namhyung@kernel.org>
>>> Cc: Vlastimil Babka <vbabka@suse.cz>
>>> Signed-off-by: Yang Shi <yang.shi@linux.alibaba.com>
>>> ---
>>>   include/linux/uprobes.h |  7 +++++++
>>>   kernel/events/uprobes.c | 23 ++++++++++++++++-------
>>>   2 files changed, 23 insertions(+), 7 deletions(-)
>>>
>>> diff --git a/include/linux/uprobes.h b/include/linux/uprobes.h
>>> index 0a294e9..418764e 100644
>>> --- a/include/linux/uprobes.h
>>> +++ b/include/linux/uprobes.h
>>> @@ -149,6 +149,8 @@ struct uprobes_state {
>>>   extern bool arch_uprobe_ignore(struct arch_uprobe *aup, struct pt_regs *regs);
>>>   extern void arch_uprobe_copy_ixol(struct page *page, unsigned long vaddr,
>>>   					 void *src, unsigned long len);
>>> +extern bool has_uprobes(struct vm_area_struct *vma, unsigned long start,
>>> +			unsigned long end);
>>>   #else /* !CONFIG_UPROBES */
>>>   struct uprobes_state {
>>>   };
>>> @@ -203,5 +205,10 @@ static inline void uprobe_copy_process(struct task_struct *t, unsigned long flag
>>>   static inline void uprobe_clear_state(struct mm_struct *mm)
>>>   {
>>>   }
>>> +static inline bool has_uprobes(struct vm_area_struct *vma, unsigned long start,
>>> +			       unsigned long end)
>>> +{
>>> +	return false;
>>> +}
>>>   #endif /* !CONFIG_UPROBES */
>>>   #endif	/* _LINUX_UPROBES_H */
>>> diff --git a/kernel/events/uprobes.c b/kernel/events/uprobes.c
>>> index aed1ba5..568481c 100644
>>> --- a/kernel/events/uprobes.c
>>> +++ b/kernel/events/uprobes.c
>>> @@ -1114,22 +1114,31 @@ int uprobe_mmap(struct vm_area_struct *vma)
>>>   	return !!n;
>>>   }
>>>   
>>> -/*
>>> - * Called in context of a munmap of a vma.
>>> - */
>>> -void uprobe_munmap(struct vm_area_struct *vma, unsigned long start, unsigned long end)
>>> +bool
>>> +has_uprobes(struct vm_area_struct *vma, unsigned long start, unsigned long end)
>> The name is not really great...
> I too feel the name is not apt.
> Can you make this vma_has_uprobes and convert the current
> vma_has_uprobes to __vma_has_uprobes?

It sounds good to me.

>
>>>   {
>>>   	if (no_uprobe_events() || !valid_vma(vma, false))
>>> -		return;
>>> +		return false;
>>>   
>>>   	if (!atomic_read(&vma->vm_mm->mm_users)) /* called by mmput() ? */
>>> -		return;
>>> +		return false;
>>>   
>>>   	if (!test_bit(MMF_HAS_UPROBES, &vma->vm_mm->flags) ||
>>>   	     test_bit(MMF_RECALC_UPROBES, &vma->vm_mm->flags))
>> This means that vma might have uprobes, but since RECALC is already set,
>> we don't need to set it again. That's different from "has uprobes".
>>
>> Perhaps something like vma_needs_recalc_uprobes() ?
>>
>> But I also worry there might be a race where we initially return false
>> because of MMF_RECALC_UPROBES, then the flag is cleared while vmas
>> still have uprobes, then we downgrade mmap_sem and skip uprobe_munmap().
>> It should be checked whether e.g. mmap_sem and vma visibility changes
>> protect this case from happening.
> That is a very good observation.
>
> One thing we can probably do is pass an extra parameter to
> has_uprobes() that tells it whether to skip this check, such that when
> we call from uprobe_munmap() we continue as is, but when calling from
> do_munmap_zap_rlock() we skip the check.

Yes, that sounds good for solving the race issue. When we need to
decide whether we should jump to the regular path for a uprobes
mapping, we just don't check whether MMF_RECALC_UPROBES is set. It
looks harmless to set this flag twice.
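
Concretely, something like this (an untested sketch of the suggestion;
the parameter name is made up):

bool has_uprobes(struct vm_area_struct *vma, unsigned long start,
		 unsigned long end, bool check_recalc)
{
	if (no_uprobe_events() || !valid_vma(vma, false))
		return false;

	if (!atomic_read(&vma->vm_mm->mm_users)) /* called by mmput() ? */
		return false;

	if (!test_bit(MMF_HAS_UPROBES, &vma->vm_mm->flags))
		return false;

	/* the munmap fast path passes check_recalc == false, so a racy
	 * clear of MMF_RECALC_UPROBES cannot make us skip the vma */
	if (check_recalc &&
	    test_bit(MMF_RECALC_UPROBES, &vma->vm_mm->flags))
		return false;

	return vma_has_uprobes(vma, start, end);
}

uprobe_munmap() would then pass check_recalc == true and
do_munmap_zap_rlock() would pass false.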

Thanks,
Yang

>
>
>>> -		return;
>>> +		return false;
>>>   
>>>   	if (vma_has_uprobes(vma, start, end))
>>> +		return true;
>>> +
>>> +	return false;
>> Simpler:
>> 	return vma_has_uprobes(vma, start, end);
>>
>>> +}
>>> +
>>> +/*
>>> + * Called in context of a munmap of a vma.
>>> + */
>>> +void uprobe_munmap(struct vm_area_struct *vma, unsigned long start, unsigned long end)
>>> +{
>>> +	if (has_uprobes(vma, start, end))
>>>   		set_bit(MMF_RECALC_UPROBES, &vma->vm_mm->flags);
>>>   }



* Re: [RFC v8 PATCH 3/5] mm: mmap: zap pages with read mmap_sem in munmap
  2018-08-22 20:45     ` Yang Shi
@ 2018-08-22 21:10       ` Kirill A. Shutemov
  2018-08-22 21:42         ` Dave Hansen
  0 siblings, 1 reply; 24+ messages in thread
From: Kirill A. Shutemov @ 2018-08-22 21:10 UTC (permalink / raw)
  To: Yang Shi, Dave Hansen
  Cc: Vlastimil Babka, mhocko, willy, ldufour, akpm, peterz, mingo,
	acme, alexander.shishkin, jolsa, namhyung, linux-mm,
	linux-kernel

On Wed, Aug 22, 2018 at 01:45:44PM -0700, Yang Shi wrote:
> 
> 
> On 8/22/18 4:19 AM, Vlastimil Babka wrote:
> > On 08/15/2018 08:49 PM, Yang Shi wrote:
> > > +	downgrade_write(&mm->mmap_sem);
> > > +
> > > +	/* Zap mappings with read mmap_sem */
> > > +	unmap_region(mm, start_vma, prev, start, end);
> > > +
> > > +	arch_unmap(mm, start_vma, start, end);
> > Hmm, did you check that all architectures' arch_unmap() is safe with
> > read mmap_sem instead of write mmap_sem? E.g. x86 does
> > mpx_notify_unmap() there where I would be far from sure at first glance...
> 
> Yes, I'm also not quite sure whether it is 100% safe. I was trying to
> move this before downgrade_write(); however, I'm not sure that is ok
> either, so I kept the calling sequence.
> 
> Among the architectures, only x86 and ppc really do something here. PPC
> just uses it for vdso unmap, which should only happen during process
> exit, so it sounds safe.
> 
> For x86, mpx_notify_unmap() looks like it finally zaps the VM_MPX vmas
> in the bounds table range with zap_page_range() and doesn't update vm
> flags, so it sounds ok to me: since the vmas have been detached, nobody
> can find them. But I'm not familiar with the details of mpx; maybe
> Kirill could help to confirm this?

I don't see anything obviously dependent on down_write() in
mpx_notify_unmap(), but Dave should know better.

-- 
 Kirill A. Shutemov


* Re: [RFC v8 PATCH 3/5] mm: mmap: zap pages with read mmap_sem in munmap
  2018-08-22 21:10       ` Kirill A. Shutemov
@ 2018-08-22 21:42         ` Dave Hansen
  2018-08-22 21:56           ` Yang Shi
  0 siblings, 1 reply; 24+ messages in thread
From: Dave Hansen @ 2018-08-22 21:42 UTC (permalink / raw)
  To: Kirill A. Shutemov, Yang Shi
  Cc: Vlastimil Babka, mhocko, willy, ldufour, akpm, peterz, mingo,
	acme, alexander.shishkin, jolsa, namhyung, linux-mm,
	linux-kernel

On 08/22/2018 02:10 PM, Kirill A. Shutemov wrote:
>> For x86, mpx_notify_unmap() looks like it finally zaps the VM_MPX vmas
>> in the bounds table range with zap_page_range() and doesn't update vm
>> flags, so it sounds ok to me: since the vmas have been detached, nobody
>> can find them. But I'm not familiar with the details of mpx; maybe
>> Kirill could help to confirm this?
> I don't see anything obviously dependent on down_write() in
> mpx_notify_unmap(), but Dave should know better.

We need mmap_sem for write in mpx_notify_unmap().

Its job is to clean up bounds tables, but bounds tables are dynamically
allocated and destroyed by the kernel.  When we destroy a table, we also
destroy the VMA for the bounds table *itself*, separate from the VMA
being unmapped.

But, this code is very likely to go away soon.  If it's causing a
problem for you, let me know and I'll see if I can get to removing it
faster.


* Re: [RFC v8 PATCH 3/5] mm: mmap: zap pages with read mmap_sem in munmap
  2018-08-22 21:42         ` Dave Hansen
@ 2018-08-22 21:56           ` Yang Shi
  2018-08-22 22:03             ` Dave Hansen
  0 siblings, 1 reply; 24+ messages in thread
From: Yang Shi @ 2018-08-22 21:56 UTC (permalink / raw)
  To: Dave Hansen, Kirill A. Shutemov
  Cc: Vlastimil Babka, mhocko, willy, ldufour, akpm, peterz, mingo,
	acme, alexander.shishkin, jolsa, namhyung, linux-mm,
	linux-kernel



On 8/22/18 2:42 PM, Dave Hansen wrote:
> On 08/22/2018 02:10 PM, Kirill A. Shutemov wrote:
>>> For x86, mpx_notify_unmap() looks like it finally zaps the VM_MPX vmas
>>> in the bounds table range with zap_page_range() and doesn't update vm
>>> flags, so it sounds ok to me: since the vmas have been detached, nobody
>>> can find them. But I'm not familiar with the details of mpx; maybe
>>> Kirill could help to confirm this?
>> I don't see anything obviously dependent on down_write() in
>> mpx_notify_unmap(), but Dave should know better.
> We need mmap_sem for write in mpx_notify_unmap().
>
> Its job is to clean up bounds tables, but bounds tables are dynamically
> allocated and destroyed by the kernel.  When we destroy a table, we also
> destroy the VMA for the bounds table *itself*, separate from the VMA
> being unmapped.

Thanks for confirming this. I didn't realize there is a VMA for the
bounds table itself.

>
> But, this code is very likely to go away soon.  If it's causing a
> problem for you, let me know and I'll see if I can get to removing it
> faster.

Does it depend on unmap_region()? Or IOW, does it have to be called
after unmap_region()? Now the calling sequence is:

detach vmas
unmap_region()
mpx_notify_unmap()

I'm wondering if it is safe to move it up before unmap_region() like:

detach vmas
mpx_notify_unmap()
unmap_region()

With this change we can also do our optimization of running
unmap_region() with read mmap_sem. Otherwise it does cause a problem.

Thanks,
Yang




* Re: [RFC v8 PATCH 3/5] mm: mmap: zap pages with read mmap_sem in munmap
  2018-08-22 21:56           ` Yang Shi
@ 2018-08-22 22:03             ` Dave Hansen
  0 siblings, 0 replies; 24+ messages in thread
From: Dave Hansen @ 2018-08-22 22:03 UTC (permalink / raw)
  To: owner-linux-mm, Kirill A. Shutemov
  Cc: Vlastimil Babka, mhocko, willy, ldufour, akpm, peterz, mingo,
	acme, alexander.shishkin, jolsa, namhyung, linux-mm,
	linux-kernel

On 08/22/2018 02:56 PM, owner-linux-mm@kvack.org wrote:
> 
> 
> On 8/22/18 2:42 PM, Dave Hansen wrote:
>> On 08/22/2018 02:10 PM, Kirill A. Shutemov wrote:
>>>> For x86, mpx_notify_unmap() looks like it finally zaps the VM_MPX
>>>> vmas in the bounds table range with zap_page_range() and doesn't
>>>> update vm flags, so it sounds ok to me: since the vmas have been
>>>> detached, nobody can find them. But I'm not familiar with the
>>>> details of mpx; maybe Kirill could help to confirm this?
>>> I don't see anything obviously dependent on down_write() in
>>> mpx_notify_unmap(), but Dave should know better.
>> We need mmap_sem for write in mpx_notify_unmap().
>>
>> Its job is to clean up bounds tables, but bounds tables are dynamically
>> allocated and destroyed by the kernel.  When we destroy a table, we also
>> destroy the VMA for the bounds table *itself*, separate from the VMA
>> being unmapped.
...
> Does it depend on unmap_region()? Or IOW, does it have to be called
> after unmap_region()? Now the calling sequence is:
> 
> detach vmas
> unmap_region()
> mpx_notify_unmap()
> 
> I'm wondering if it is safe to move it up before unmap_region() like:
> 
> detach vmas
> mpx_notify_unmap()
> unmap_region()
> 
> With this change we can also do our optimization of running
> unmap_region() with read mmap_sem. Otherwise it does cause a problem.

I think changing the ordering is fine.

The MPX bounds table unmapping is entirely driven by the VMAs being
unmapped, so the page table unmapping in unmap_region() should not
affect it.
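
With that ordering, the tail of the optimized path would become
roughly (a sketch assembled from the hunks quoted earlier in this
thread):

	detach_vmas_to_be_unmapped(mm, start_vma, prev, end);

	/* still under write mmap_sem, so safe for mpx_notify_unmap() */
	arch_unmap(mm, start_vma, start, end);

	downgrade_write(&mm->mmap_sem);

	/* Zap mappings with read mmap_sem */
	unmap_region(mm, start_vma, prev, start, end);

	remove_vma_list(mm, start_vma);
	up_read(&mm->mmap_sem);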


* Re: [RFC v8 PATCH 2/5] uprobes: introduce has_uprobes helper
  2018-08-22 15:07     ` Srikar Dronamraju
  2018-08-22 20:51       ` Yang Shi
@ 2018-08-23 15:15       ` Oleg Nesterov
  2018-08-23 16:07         ` Yang Shi
  1 sibling, 1 reply; 24+ messages in thread
From: Oleg Nesterov @ 2018-08-23 15:15 UTC (permalink / raw)
  To: Srikar Dronamraju
  Cc: Vlastimil Babka, Yang Shi, mhocko, willy, ldufour, kirill, akpm,
	peterz, mingo, acme, alexander.shishkin, jolsa, namhyung,
	linux-mm, liu.song.a23, ravi.bangoria, linux-kernel

On 08/22, Srikar Dronamraju wrote:
>
> * Vlastimil Babka <vbabka@suse.cz> [2018-08-22 12:55:59]:
>
> > On 08/15/2018 08:49 PM, Yang Shi wrote:
> > > We need to check if the mm or vma has uprobes in the following patch,
> > > to decide whether a vma can be unmapped while holding read mmap_sem.

Confused... why can't we call uprobe_munmap() under read_lock(mmap_sem) ?

OK, it can race with find_active_uprobe() but I do not see anything really
wrong, and a false-positive MMF_RECALC_UPROBES is fine.

Again, I think we should simply kill uprobe_munmap(), but this needs another
discussion.

Oleg.



* Re: [RFC v8 PATCH 2/5] uprobes: introduce has_uprobes helper
  2018-08-23 15:15       ` Oleg Nesterov
@ 2018-08-23 16:07         ` Yang Shi
  0 siblings, 0 replies; 24+ messages in thread
From: Yang Shi @ 2018-08-23 16:07 UTC (permalink / raw)
  To: Oleg Nesterov, Srikar Dronamraju
  Cc: Vlastimil Babka, mhocko, willy, ldufour, kirill, akpm, peterz,
	mingo, acme, alexander.shishkin, jolsa, namhyung, linux-mm,
	liu.song.a23, ravi.bangoria, linux-kernel



On 8/23/18 8:15 AM, Oleg Nesterov wrote:
> On 08/22, Srikar Dronamraju wrote:
>> * Vlastimil Babka <vbabka@suse.cz> [2018-08-22 12:55:59]:
>>
>>> On 08/15/2018 08:49 PM, Yang Shi wrote:
>>>> We need to check if the mm or vma has uprobes in the following patch,
>>>> to decide whether a vma can be unmapped while holding read mmap_sem.
> Confused... why can't we call uprobe_munmap() under read_lock(mmap_sem) ?

I'm not sure whether it is safe, because it is not recommended (and not
safe) to update a vma's vm flags with read mmap_sem. uprobe_munmap() may
update mm flags (MMF_RECALC_UPROBES). So it sounds safer not to call it
under read mmap_sem.

>
> OK, it can race with find_active_uprobe() but I do not see anything really
> wrong, and a false-positive MMF_RECALC_UPROBES is fine.

Thanks for confirming this. If it is ok to have such a race, we don't
need the has_uprobes() helper anymore, since uprobe_munmap() can just be
called under read mmap_sem without any special handling.

Yang

>
> Again, I think we should simply kill uprobe_munmap(), but this needs another
> discussion.
>
> Oleg.



end of thread

Thread overview: 24+ messages
2018-08-15 18:49 [RFC v8 PATCH 0/5] mm: zap pages with read mmap_sem in munmap for large mapping Yang Shi
2018-08-15 18:49 ` [RFC v8 PATCH 1/5] mm: refactor do_munmap() to extract the common part Yang Shi
2018-08-15 18:49 ` [RFC v8 PATCH 2/5] uprobes: introduce has_uprobes helper Yang Shi
2018-08-22 10:55   ` Vlastimil Babka
2018-08-22 15:07     ` Srikar Dronamraju
2018-08-22 20:51       ` Yang Shi
2018-08-23 15:15       ` Oleg Nesterov
2018-08-23 16:07         ` Yang Shi
2018-08-15 18:49 ` [RFC v8 PATCH 3/5] mm: mmap: zap pages with read mmap_sem in munmap Yang Shi
2018-08-15 19:16   ` Matthew Wilcox
2018-08-15 21:09     ` Matthew Wilcox
2018-08-15 21:54       ` Yang Shi
2018-08-16  2:46         ` Matthew Wilcox
2018-08-16  6:11           ` Yang Shi
2018-08-22 11:11   ` Vlastimil Babka
2018-08-22 19:20     ` Yang Shi
2018-08-22 11:19   ` Vlastimil Babka
2018-08-22 20:45     ` Yang Shi
2018-08-22 21:10       ` Kirill A. Shutemov
2018-08-22 21:42         ` Dave Hansen
2018-08-22 21:56           ` Yang Shi
2018-08-22 22:03             ` Dave Hansen
2018-08-15 18:49 ` [RFC v8 PATCH 4/5] mm: unmap VM_HUGETLB mappings with optimized path Yang Shi
2018-08-15 18:49 ` [RFC v8 PATCH 5/5] mm: unmap VM_PFNMAP " Yang Shi
