linux-kernel.vger.kernel.org archive mirror
* [RFC v9 PATCH 0/4] mm: zap pages with read mmap_sem in munmap for large mapping
@ 2018-09-11 20:58 Yang Shi
  2018-09-11 20:58 ` [RFC v9 PATCH 1/4] mm: refactor do_munmap() to extract the common part Yang Shi
                   ` (3 more replies)
  0 siblings, 4 replies; 10+ messages in thread
From: Yang Shi @ 2018-09-11 20:58 UTC (permalink / raw)
  To: mhocko, willy, ldufour, vbabka, akpm, dave.hansen, oleg, srikar
  Cc: yang.shi, linux-mm, linux-kernel


Background:
Recently, when we ran some VM scalability tests on machines with large
memory, we ran into a couple of mmap_sem scalability issues when unmapping
a large address space; please refer to https://lkml.org/lkml/2017/12/14/733
and https://lkml.org/lkml/2018/2/20/576.


History:
akpm then suggested mitigating this by unmapping large mappings section by
section, dropping and re-acquiring mmap_sem in between (see
https://lkml.org/lkml/2018/3/6/784).

The v1 patch series was submitted to the mailing list per Andrew's
suggestion (see https://lkml.org/lkml/2018/3/20/786), and I received a lot
of great feedback and suggestions.

This topic was then discussed at the LSF/MM summit 2018, where Michal Hocko
suggested (as he also had in the v1 review) trying a "two phases" approach:
zap pages with read mmap_sem held, then do the cleanup with write mmap_sem
(for the discussion details, see https://lwn.net/Articles/753269/).


Approach:
Zapping pages is the most time-consuming part. According to Michal Hocko's
suggestion (see the LWN link above), zapping pages can be done while holding
read mmap_sem, like MADV_DONTNEED does; write mmap_sem is then re-acquired
to clean up the vmas.

But we can't simply call MADV_DONTNEED, since it has two major drawbacks:
  * A page fault that wins the race against munmap could observe an
    unexpected state: it may be satisfied from the zero page instead of
    returning the old content or SIGSEGV (see the short demo below).
  * It can't handle VM_LOCKED | VM_HUGETLB | VM_PFNMAP and uprobe mappings,
    which akpm considered a showstopper.
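
A small userspace demo of that first drawback's semantics (hypothetical
test code, not part of this series): for a private anonymous mapping, a
read after MADV_DONTNEED is silently satisfied from the zero page, while
a read after munmap() must deliver SIGSEGV.

	#include <stdio.h>
	#include <string.h>
	#include <sys/mman.h>

	int main(void)
	{
		size_t len = 2UL << 20;	/* 2MB private anonymous mapping */
		char *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
			       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

		if (p == MAP_FAILED)
			return 1;
		memset(p, 0xaa, len);

		madvise(p, len, MADV_DONTNEED);
		printf("after MADV_DONTNEED: %#x\n", p[0]);	/* prints 0 */

		munmap(p, len);
		/* dereferencing p here would (and must) SIGSEGV */
		return 0;
	}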

But some parts still need write mmap_sem, for example vma splitting. So
the design is as follows:
        acquire write mmap_sem
        lookup vmas (find and split vmas)
        deal with special mappings
        detach vmas
        downgrade_write

        zap pages
        free page tables
        release mmap_sem
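
In kernel-C terms this maps onto roughly the following sequence (a
condensed sketch of what patch 2/4 below implements; error handling, the
userfaultfd preparation and the special-mapping fallback are omitted):

	if (down_write_killable(&mm->mmap_sem))
		return -EINTR;

	start_vma = munmap_lookup_vma(mm, start, end);	/* find + split vmas */
	prev = start_vma->vm_prev;
	if (mm->locked_vm)
		munlock_vmas(start_vma, end);		/* handle mlocked ranges */
	detach_vmas_to_be_unmapped(mm, start_vma, prev, end);
	arch_unmap(mm, start_vma, start, end);		/* e.g. x86 mpx */

	downgrade_write(&mm->mmap_sem);			/* write -> read, no gap */

	unmap_region(mm, start_vma, prev, start, end);	/* zap + free page tables */
	remove_vma_list(mm, start_vma);
	up_read(&mm->mmap_sem);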

VM events that take read mmap_sem (page faults, gup, etc.) may come in
during page zapping, but since the vmas have already been detached, they
will not find a valid vma and will just return SIGSEGV or -EFAULT, as
expected.
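
This works because the fault path only trusts what it finds in the vma
tree. Simplified from the x86 page fault handler (stack growth handling
omitted), a racing fault on the zapped range does:

	vma = find_vma(mm, address);
	if (!vma || address < vma->vm_start)
		goto bad_area;	/* detached range: no vma found -> SIGSEGV */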

If a vma has VM_HUGETLB | VM_PFNMAP it is considered a special mapping.
Such mappings are handled in patch 2/4 by falling back to regular
do_munmap() with exclusive mmap_sem held, since unmapping them may update
vm flags.

But with the "detach vmas first" approach, the vmas have already been
detached by the time vm flags are updated, so it sounds safe to update vm
flags with read mmap_sem for this specific case. VM_HUGETLB and VM_PFNMAP
mappings are therefore switched to the optimized path in separate follow-up
patches for bisectability's sake.

Unmapping uprobe areas may need to update mm flags (MMF_RECALC_UPROBES).
However, according to the uprobes developers it is fine to have a
false-positive MMF_RECALC_UPROBES, so uprobe unmaps do not have to take
the regular path.

With the "detach vmas first" approach we don't have to re-acquire
mmap_sem again to clean up vmas to avoid race window which might get the
address space changed since downgrade_write() doesn't release the lock
to lead regression, which simply downgrades to read lock.

And since the lock acquire/release cost is kept to a minimum, almost the
same as before, the optimization can be extended to mappings of any size
without incurring a significant penalty on small mappings.

For the time being, this is done only in the munmap syscall path. Other
vm_munmap() or do_munmap() call sites (e.g. mmap, mremap) remain intact due
to implementation difficulties: they acquire write mmap_sem at the very
beginning and hold it until the end, with do_munmap() called somewhere in
the middle, whereas the optimized do_munmap wants to be entered without
mmap_sem held so it can manage the lock itself. Doing a similar
optimization for the mmap/mremap paths would, I'm afraid, require
redesigning them. Since mremap may be called on very large areas depending
on the use case, optimizing it will be considered in the future.


Changelog
v8 -> v9:
* The uprobe developers (Oleg Nesterov and Srikar Dronamraju) helped confirm
  that it is fine to have a false-positive MMF_RECALC_UPROBES, so unmapping
  uprobe areas doesn't have to be handled by the regular path and the uprobe
  related patch can be dropped. Thanks Oleg and Srikar.
* Dave Hansen helped confirm that mpx unmap has to be called under write
  mmap_sem, but does not have to happen after unmap_region(), so arch_unmap()
  is moved before downgrade_write(). The other user of arch_unmap() is
  powerpc, which just sets mm->context.vdso_base, so it is fine with this
  change too. Thanks Dave.
* The above two resolve Vlastimil's concerns about v8.

v7 -> v8:
* Added Acked-by from Vlastimil for patch 1/5. Thanks.
* Fixed the wrong "evolution" direction: converted VM_HUGETLB and VM_PFNMAP
  mappings to use the optimized path in separate patches, for safety and
  bisectability, per Michal's suggestion.
* Extracted a has_uprobes() helper from uprobes_munmap() to check whether the
  mm or the vmas have uprobes, which can save some cycles in some cases
  compared to calling vma_has_uprobes() directly. Per Vlastimil's suggestion.
* Keep unmapping uprobe areas with regular do_munmap(), since it might update
  mm flags and that might not be safe with read mmap_sem even though the vmas
  have been detached.
* Addressed some comments from Willy.

v6 -> v7:
* Rename some helper functions per Michal and Vlastimil's comments.
* Refactor munmap_lookup_vma() to return a pointer to the start vma per
  Michal's suggestion.
* Rephrase some commit log for patch 2/4 per Michal's comments.
* Deal with special mappings (VM_HUGETLB | VM_PFNMAP | uprobes) with regular
  do_munmap() in a separate patch per Michal's suggestion.
* Bring the patch which makes vma_has_uprobes() non-static back since it is
  needed to check if a vma has uprobes or not.

v5 -> v6:
* Addressed the comments from Kirill and Laurent
* Added Laurent's reviewed-by to patch 1/2. Thanks.

v4 -> v5:
* Detach vmas before zapping pages so that we don't have to use VM_DEAD to
  mark a vma being unmapped, since vmas have already been detached from the
  rbtree when pages are zapped. Per Kirill
* Eliminate the VM_DEAD stuff
* With this change we don't have to re-acquire write mmap_sem to do cleanup,
  so a potential race window is eliminated
* Eliminate the PUD_SIZE check and extend this optimization to all sizes

v3 -> v4:
* Extend check_stable_address_space to check VM_DEAD as Michal suggested
* Deal with vm_flags update of VM_LOCKED | VM_HUGETLB | VM_PFNMAP and uprobe
  mappings with exclusive lock held. The actual unmapping is still done with read
  mmap_sem to solve akpm's concern
* Clean up vmas by calling do_munmap() rather than carrying the vmas over,
  to prevent a race condition, as Kirill suggested
* Extracted more common code
* Solved some code cleanup comments from akpm
* Dropped uprobe and arch specific code, now all the changes are mm only
* Still keep the PUD_SIZE threshold; if everyone thinks it is better to
  extend it to all sizes or a smaller size, it will be removed
* Make this optimization 64 bit only explicitly per akpm's suggestion

v2 -> v3:
* Refactor do_munmap code to extract the common part per Peter's suggestion
* Introduced VM_DEAD flag per Michal's suggestion. Just handled VM_DEAD in
  x86's page fault handler for the time being. Other architectures will be covered
  once the patch series is reviewed
* Now lookup vma (find and split) and set VM_DEAD flag with write mmap_sem, then
  zap mapping with read mmap_sem, then clean up pgtables and vmas with write
  mmap_sem per Peter's suggestion

v1 -> v2:
* Re-implemented the code per the discussion on LSFMM summit


Regression and performance data:
Did the below regression tests with the threshold manually set to 4K in the code:
  * Full LTP
  * Trinity (munmap/all vm syscalls)
  * Stress-ng: mmap/mmapfork/mmapfixed/mmapaddr/mmapmany/vm
  * mm-tests: kernbench, phpbench, sysbench-mariadb, will-it-scale
  * vm-scalability

With the patches, the exclusive mmap_sem hold time when munmapping an 80GB
address space on a machine with 32 cores of E5-2680 @ 2.70GHz dropped from
seconds to the microsecond level.

munmap_test-15002 [008]   594.380138: funcgraph_entry: |  vm_munmap_zap_rlock() {
munmap_test-15002 [008]   594.380146: funcgraph_entry:      !2485684 us |    unmap_region();
munmap_test-15002 [008]   596.865836: funcgraph_exit:       !2485692 us |  }

Here the execution time of unmap_region() is used to estimate the time spent
holding read mmap_sem; the remaining time is spent holding the exclusive
lock.
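
A minimal reproducer for this kind of measurement could look like the
sketch below (hypothetical test code, not the actual munmap_test used
above; adjust the mapping size to the machine):

	#include <stdio.h>
	#include <string.h>
	#include <sys/mman.h>
	#include <time.h>

	int main(void)
	{
		size_t len = 80UL << 30;	/* 80GB anonymous mapping */
		struct timespec t0, t1;
		char *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
			       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

		if (p == MAP_FAILED)
			return 1;
		memset(p, 1, len);	/* fault in every single page */

		clock_gettime(CLOCK_MONOTONIC, &t0);
		munmap(p, len);
		clock_gettime(CLOCK_MONOTONIC, &t1);

		printf("munmap took %.6f s\n", (t1.tv_sec - t0.tv_sec) +
		       (t1.tv_nsec - t0.tv_nsec) / 1e9);
		return 0;
	}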

Yang Shi (4):
      mm: refactor do_munmap() to extract the common part
      mm: mmap: zap pages with read mmap_sem in munmap
      mm: unmap VM_HUGETLB mappings with optimized path
      mm: unmap VM_PFNMAP mappings with optimized path

 mm/mmap.c | 190 +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++---------------
 1 file changed, 156 insertions(+), 34 deletions(-)


* [RFC v9 PATCH 1/4] mm: refactor do_munmap() to extract the common part
  2018-09-11 20:58 [RFC v9 PATCH 0/4] mm: zap pages with read mmap_sem in munmap for large mapping Yang Shi
@ 2018-09-11 20:58 ` Yang Shi
  2018-09-11 20:58 ` [RFC v9 PATCH 2/4] mm: mmap: zap pages with read mmap_sem in munmap Yang Shi
                   ` (2 subsequent siblings)
  3 siblings, 0 replies; 10+ messages in thread
From: Yang Shi @ 2018-09-11 20:58 UTC (permalink / raw)
  To: mhocko, willy, ldufour, vbabka, akpm, dave.hansen, oleg, srikar
  Cc: yang.shi, linux-mm, linux-kernel

Introduces three new helper functions:
  * addr_ok()
  * munmap_lookup_vma()
  * munlock_vmas()

They will be used by do_munmap() and, in a later patch, by the new
do_munmap variant that zaps large mappings early.

There is no functional change, just code refactoring.

Reviewed-by: Laurent Dufour <ldufour@linux.vnet.ibm.com>
Acked-by: Vlastimil Babka <vbabka@suse.cz>
Signed-off-by: Yang Shi <yang.shi@linux.alibaba.com>
---
 mm/mmap.c | 106 +++++++++++++++++++++++++++++++++++++++++++-------------------
 1 file changed, 74 insertions(+), 32 deletions(-)

diff --git a/mm/mmap.c b/mm/mmap.c
index 5f2b2b1..b7092b4 100644
--- a/mm/mmap.c
+++ b/mm/mmap.c
@@ -2682,35 +2682,42 @@ int split_vma(struct mm_struct *mm, struct vm_area_struct *vma,
 	return __split_vma(mm, vma, addr, new_below);
 }
 
-/* Munmap is split into 2 main parts -- this part which finds
- * what needs doing, and the areas themselves, which do the
- * work.  This now handles partial unmappings.
- * Jeremy Fitzhardinge <jeremy@goop.org>
- */
-int do_munmap(struct mm_struct *mm, unsigned long start, size_t len,
-	      struct list_head *uf)
+static inline bool addr_ok(unsigned long start, size_t len)
 {
-	unsigned long end;
-	struct vm_area_struct *vma, *prev, *last;
-
 	if ((offset_in_page(start)) || start > TASK_SIZE || len > TASK_SIZE-start)
-		return -EINVAL;
+		return false;
 
-	len = PAGE_ALIGN(len);
-	if (len == 0)
-		return -EINVAL;
+	if (PAGE_ALIGN(len) == 0)
+		return false;
+
+	return true;
+}
+
+/*
+ * munmap_lookup_vma: find the first overlapping vma and split overlapping vmas.
+ * @mm: mm_struct
+ * @start: start address
+ * @end: end address
+ *
+ * Return: %NULL if no VMA overlaps this range.  An ERR_PTR if an
+ * overlapping VMA could not be split.  Otherwise a pointer to the first
+ * VMA which overlaps the range.
+ */
+static struct vm_area_struct *munmap_lookup_vma(struct mm_struct *mm,
+			unsigned long start, unsigned long end)
+{
+	struct vm_area_struct *vma, *prev, *last;
 
 	/* Find the first overlapping VMA */
 	vma = find_vma(mm, start);
 	if (!vma)
-		return 0;
-	prev = vma->vm_prev;
-	/* we have  start < vma->vm_end  */
+		return NULL;
 
+	/* we have start < vma->vm_end  */
 	/* if it doesn't overlap, we have nothing.. */
-	end = start + len;
 	if (vma->vm_start >= end)
-		return 0;
+		return NULL;
+	prev = vma->vm_prev;
 
 	/*
 	 * If we need to split any vma, do it now to save pain later.
@@ -2728,11 +2735,11 @@ int do_munmap(struct mm_struct *mm, unsigned long start, size_t len,
 		 * its limit temporarily, to help free resources as expected.
 		 */
 		if (end < vma->vm_end && mm->map_count >= sysctl_max_map_count)
-			return -ENOMEM;
+			return ERR_PTR(-ENOMEM);
 
 		error = __split_vma(mm, vma, start, 0);
 		if (error)
-			return error;
+			return ERR_PTR(error);
 		prev = vma;
 	}
 
@@ -2741,10 +2748,53 @@ int do_munmap(struct mm_struct *mm, unsigned long start, size_t len,
 	if (last && end > last->vm_start) {
 		int error = __split_vma(mm, last, end, 1);
 		if (error)
-			return error;
+			return ERR_PTR(error);
 	}
 	vma = prev ? prev->vm_next : mm->mmap;
 
+	return vma;
+}
+
+static inline void munlock_vmas(struct vm_area_struct *vma,
+				unsigned long end)
+{
+	struct mm_struct *mm = vma->vm_mm;
+
+	while (vma && vma->vm_start < end) {
+		if (vma->vm_flags & VM_LOCKED) {
+			mm->locked_vm -= vma_pages(vma);
+			munlock_vma_pages_all(vma);
+		}
+		vma = vma->vm_next;
+	}
+}
+
+/* Munmap is split into 2 main parts -- this part which finds
+ * what needs doing, and the areas themselves, which do the
+ * work.  This now handles partial unmappings.
+ * Jeremy Fitzhardinge <jeremy@goop.org>
+ */
+int do_munmap(struct mm_struct *mm, unsigned long start, size_t len,
+	      struct list_head *uf)
+{
+	unsigned long end;
+	struct vm_area_struct *vma, *prev;
+
+	if (!addr_ok(start, len))
+		return -EINVAL;
+
+	len = PAGE_ALIGN(len);
+
+	end = start + len;
+
+	vma = munmap_lookup_vma(mm, start, end);
+	if (!vma)
+		return 0;
+	if (IS_ERR(vma))
+		return PTR_ERR(vma);
+
+	prev = vma->vm_prev;
+
 	if (unlikely(uf)) {
 		/*
 		 * If userfaultfd_unmap_prep returns an error the vmas
@@ -2763,16 +2813,8 @@ int do_munmap(struct mm_struct *mm, unsigned long start, size_t len,
 	/*
 	 * unlock any mlock()ed ranges before detaching vmas
 	 */
-	if (mm->locked_vm) {
-		struct vm_area_struct *tmp = vma;
-		while (tmp && tmp->vm_start < end) {
-			if (tmp->vm_flags & VM_LOCKED) {
-				mm->locked_vm -= vma_pages(tmp);
-				munlock_vma_pages_all(tmp);
-			}
-			tmp = tmp->vm_next;
-		}
-	}
+	if (mm->locked_vm)
+		munlock_vmas(vma, end);
 
 	/*
 	 * Remove the vma's, and unmap the actual pages
-- 
1.8.3.1



* [RFC v9 PATCH 2/4] mm: mmap: zap pages with read mmap_sem in munmap
  2018-09-11 20:58 [RFC v9 PATCH 0/4] mm: zap pages with read mmap_sem in munmap for large mapping Yang Shi
  2018-09-11 20:58 ` [RFC v9 PATCH 1/4] mm: refactor do_munmap() to extract the common part Yang Shi
@ 2018-09-11 20:58 ` Yang Shi
  2018-09-11 21:16   ` Matthew Wilcox
  2018-09-11 20:58 ` [RFC v9 PATCH 3/4] mm: unmap VM_HUGETLB mappings with optimized path Yang Shi
  2018-09-11 20:58 ` [RFC v9 PATCH 4/4] mm: unmap VM_PFNMAP " Yang Shi
  3 siblings, 1 reply; 10+ messages in thread
From: Yang Shi @ 2018-09-11 20:58 UTC (permalink / raw)
  To: mhocko, willy, ldufour, vbabka, akpm, dave.hansen, oleg, srikar
  Cc: yang.shi, linux-mm, linux-kernel

When running some mmap/munmap scalability tests with large memory (i.e.
> 300GB), the hung task issue below may happen occasionally.

INFO: task ps:14018 blocked for more than 120 seconds.
       Tainted: G            E 4.9.79-009.ali3000.alios7.x86_64 #1
 "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this
message.
 ps              D    0 14018      1 0x00000004
  ffff885582f84000 ffff885e8682f000 ffff880972943000 ffff885ebf499bc0
  ffff8828ee120000 ffffc900349bfca8 ffffffff817154d0 0000000000000040
  00ffffff812f872a ffff885ebf499bc0 024000d000948300 ffff880972943000
 Call Trace:
  [<ffffffff817154d0>] ? __schedule+0x250/0x730
  [<ffffffff817159e6>] schedule+0x36/0x80
  [<ffffffff81718560>] rwsem_down_read_failed+0xf0/0x150
  [<ffffffff81390a28>] call_rwsem_down_read_failed+0x18/0x30
  [<ffffffff81717db0>] down_read+0x20/0x40
  [<ffffffff812b9439>] proc_pid_cmdline_read+0xd9/0x4e0
  [<ffffffff81253c95>] ? do_filp_open+0xa5/0x100
  [<ffffffff81241d87>] __vfs_read+0x37/0x150
  [<ffffffff812f824b>] ? security_file_permission+0x9b/0xc0
  [<ffffffff81242266>] vfs_read+0x96/0x130
  [<ffffffff812437b5>] SyS_read+0x55/0xc0
  [<ffffffff8171a6da>] entry_SYSCALL_64_fastpath+0x1a/0xc5

This is because munmap holds mmap_sem exclusively from the very beginning
all the way down to the end, and doesn't release it in the middle. When
unmapping a large mapping it may take a long time (~18 seconds to unmap a
320GB mapping with every single page mapped, on an otherwise idle machine).

Zapping pages is the most time-consuming part. According to the suggestion
from Michal Hocko [1], zapping pages can be done while holding read
mmap_sem, like MADV_DONTNEED does; write mmap_sem is then re-acquired to
clean up the vmas.

But some parts still need write mmap_sem, for example vma splitting. So
the design is as follows:
        acquire write mmap_sem
        lookup vmas (find and split vmas)
        deal with special mappings
        detach vmas
        downgrade_write

        zap pages
        free page tables
        release mmap_sem

VM events that take read mmap_sem (page faults, gup, etc.) may come in
during page zapping, but since the vmas have already been detached, they
will not find a valid vma and will just return SIGSEGV or -EFAULT, as
expected.

If a vma has VM_HUGETLB | VM_PFNMAP it is considered a special mapping.
Such mappings are handled in this patch by falling back to regular
do_munmap() with exclusive mmap_sem held, since unmapping them may update
vm flags.

But with the "detach vmas first" approach, the vmas have already been
detached by the time vm flags are updated, so it sounds safe to update vm
flags with read mmap_sem for this specific case. VM_HUGETLB and VM_PFNMAP
mappings are therefore switched to the optimized path in the following
separate patches for bisectability's sake.

Unmapping uprobe areas may need to update mm flags (MMF_RECALC_UPROBES).
However, according to the uprobes developers it is fine to have a
false-positive MMF_RECALC_UPROBES, so uprobe unmaps do not have to take
the regular path.

With the "detach vmas first" approach we don't have to re-acquire
mmap_sem again to clean up vmas to avoid race window which might get the
address space changed since downgrade_write() doesn't release the lock
to lead regression, which simply downgrades to read lock.

And since the lock acquire/release cost is kept to a minimum, almost the
same as before, the optimization can be extended to mappings of any size
without incurring a significant penalty on small mappings.

For the time being, this is done only in the munmap syscall path. Other
vm_munmap() or do_munmap() call sites (e.g. mmap, mremap) remain intact due
to implementation difficulties: they acquire write mmap_sem at the very
beginning and hold it until the end, with do_munmap() called somewhere in
the middle, whereas the optimized do_munmap wants to be entered without
mmap_sem held so it can manage the lock itself. Doing a similar
optimization for the mmap/mremap paths would, I'm afraid, require
redesigning them. Since mremap may be called on very large areas depending
on the use case, optimizing it will be considered in the future.

With the patches, the exclusive mmap_sem hold time when munmapping an 80GB
address space on a machine with 32 cores of E5-2680 @ 2.70GHz dropped from
seconds to the microsecond level.

munmap_test-15002 [008]   594.380138: funcgraph_entry: |  vm_munmap_zap_rlock() {
munmap_test-15002 [008]   594.380146: funcgraph_entry:      !2485684 us |    unmap_region();
munmap_test-15002 [008]   596.865836: funcgraph_exit:       !2485692 us |  }

Here the execution time of unmap_region() is used to estimate the time spent
holding read mmap_sem; the remaining time is spent holding the exclusive
lock.

[1] https://lwn.net/Articles/753269/

Suggested-by: Michal Hocko <mhocko@kernel.org>
Suggested-by: Kirill A. Shutemov <kirill@shutemov.name>
Cc: Matthew Wilcox <willy@infradead.org>
Cc: Laurent Dufour <ldufour@linux.vnet.ibm.com>
Cc: Vlastimil Babka <vbabka@suse.cz>
Cc: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Yang Shi <yang.shi@linux.alibaba.com>
---
 mm/mmap.c | 97 +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++--
 1 file changed, 95 insertions(+), 2 deletions(-)

diff --git a/mm/mmap.c b/mm/mmap.c
index b7092b4..937d2f2 100644
--- a/mm/mmap.c
+++ b/mm/mmap.c
@@ -2769,6 +2769,89 @@ static inline void munlock_vmas(struct vm_area_struct *vma,
 	}
 }
 
+/*
+ * Zap pages with read mmap_sem held
+ *
+ * uf is the list for userfaultfd
+ */
+static int do_munmap_zap_rlock(struct mm_struct *mm, unsigned long start,
+			       size_t len, struct list_head *uf)
+{
+	unsigned long end;
+	struct vm_area_struct *start_vma, *prev, *vma;
+	int ret = 0;
+
+	if (!addr_ok(start, len))
+		return -EINVAL;
+
+	len = PAGE_ALIGN(len);
+
+	end = start + len;
+
+	/*
+	 * Need write mmap_sem to split and detach vmas;
+	 * splitting vmas up-front saves the PITA of cleaning up if it fails.
+	 */
+	if (down_write_killable(&mm->mmap_sem))
+		return -EINTR;
+
+	start_vma = munmap_lookup_vma(mm, start, end);
+	if (!start_vma)
+		goto out;
+	if (IS_ERR(start_vma)) {
+		ret = PTR_ERR(start_vma);
+		goto out;
+	}
+
+	prev = start_vma->vm_prev;
+
+	if (unlikely(uf)) {
+		ret = userfaultfd_unmap_prep(start_vma, start, end, uf);
+		if (ret)
+			goto out;
+	}
+
+	/*
+	 * Unmapping vmas, which have VM_HUGETLB or VM_PFNMAP
+	 * need get done with write mmap_sem held since they may update
+	 * vm_flags. Deal with such mappings with regular do_munmap() call.
+	 */
+	for (vma = start_vma; vma && vma->vm_start < end; vma = vma->vm_next) {
+		if (vma->vm_flags & (VM_HUGETLB | VM_PFNMAP))
+			goto regular_path;
+	}
+
+	/* Handle mlocked vmas */
+	if (mm->locked_vm)
+		munlock_vmas(start_vma, end);
+
+	/* Detach vmas from rbtree */
+	detach_vmas_to_be_unmapped(mm, start_vma, prev, end);
+
+	/*
+	 * mpx unmap needs to be handled with write mmap_sem. It is safe to
+	 * deal with it before unmap_region().
+	 */
+	arch_unmap(mm, start_vma, start, end);
+
+	downgrade_write(&mm->mmap_sem);
+
+	/* Zap mappings with read mmap_sem */
+	unmap_region(mm, start_vma, prev, start, end);
+
+	remove_vma_list(mm, start_vma);
+	up_read(&mm->mmap_sem);
+
+	return 0;
+
+regular_path:
+	ret = do_munmap(mm, start, len, uf);
+
+out:
+	up_write(&mm->mmap_sem);
+	return ret;
+}
+
 /* Munmap is split into 2 main parts -- this part which finds
  * what needs doing, and the areas themselves, which do the
  * work.  This now handles partial unmappings.
@@ -2830,6 +2913,17 @@ int do_munmap(struct mm_struct *mm, unsigned long start, size_t len,
 	return 0;
 }
 
+static int vm_munmap_zap_rlock(unsigned long start, size_t len)
+{
+	int ret;
+	struct mm_struct *mm = current->mm;
+	LIST_HEAD(uf);
+
+	ret = do_munmap_zap_rlock(mm, start, len, &uf);
+	userfaultfd_unmap_complete(mm, &uf);
+	return ret;
+}
+
 int vm_munmap(unsigned long start, size_t len)
 {
 	int ret;
@@ -2849,10 +2943,9 @@ int vm_munmap(unsigned long start, size_t len)
 SYSCALL_DEFINE2(munmap, unsigned long, addr, size_t, len)
 {
 	profile_munmap(addr);
-	return vm_munmap(addr, len);
+	return vm_munmap_zap_rlock(addr, len);
 }
 
-
 /*
  * Emulation of deprecated remap_file_pages() syscall.
  */
-- 
1.8.3.1



* [RFC v9 PATCH 3/4] mm: unmap VM_HUGETLB mappings with optimized path
  2018-09-11 20:58 [RFC v9 PATCH 0/4] mm: zap pages with read mmap_sem in munmap for large mapping Yang Shi
  2018-09-11 20:58 ` [RFC v9 PATCH 1/4] mm: refactor do_munmap() to extract the common part Yang Shi
  2018-09-11 20:58 ` [RFC v9 PATCH 2/4] mm: mmap: zap pages with read mmap_sem in munmap Yang Shi
@ 2018-09-11 20:58 ` Yang Shi
  2018-09-11 20:58 ` [RFC v9 PATCH 4/4] mm: unmap VM_PFNMAP " Yang Shi
  3 siblings, 0 replies; 10+ messages in thread
From: Yang Shi @ 2018-09-11 20:58 UTC (permalink / raw)
  To: mhocko, willy, ldufour, vbabka, akpm, dave.hansen, oleg, srikar
  Cc: yang.shi, linux-mm, linux-kernel

When unmapping VM_HUGETLB mappings, vm flags need to be updated. Since the
vmas have already been detached at that point, it sounds safe to update vm
flags with read mmap_sem held.

Cc: Michal Hocko <mhocko@kernel.org>
Cc: Vlastimil Babka <vbabka@suse.cz>
Signed-off-by: Yang Shi <yang.shi@linux.alibaba.com>
---
 mm/mmap.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/mm/mmap.c b/mm/mmap.c
index 937d2f2..086f8b5 100644
--- a/mm/mmap.c
+++ b/mm/mmap.c
@@ -2812,12 +2812,12 @@ static int do_munmap_zap_rlock(struct mm_struct *mm, unsigned long start,
 	}
 
 	/*
-	 * Unmapping vmas, which have VM_HUGETLB or VM_PFNMAP
+	 * Unmapping vmas, which have VM_PFNMAP
 	 * need get done with write mmap_sem held since they may update
 	 * vm_flags. Deal with such mappings with regular do_munmap() call.
 	 */
 	for (vma = start_vma; vma && vma->vm_start < end; vma = vma->vm_next) {
-		if (vma->vm_flags & (VM_HUGETLB | VM_PFNMAP))
+		if (vma->vm_flags & VM_PFNMAP)
 			goto regular_path;
 	}
 
-- 
1.8.3.1



* [RFC v9 PATCH 4/4] mm: unmap VM_PFNMAP mappings with optimized path
  2018-09-11 20:58 [RFC v9 PATCH 0/4] mm: zap pages with read mmap_sem in munmap for large mapping Yang Shi
                   ` (2 preceding siblings ...)
  2018-09-11 20:58 ` [RFC v9 PATCH 3/4] mm: unmap VM_HUGETLB mappings with optimized path Yang Shi
@ 2018-09-11 20:58 ` Yang Shi
  3 siblings, 0 replies; 10+ messages in thread
From: Yang Shi @ 2018-09-11 20:58 UTC (permalink / raw)
  To: mhocko, willy, ldufour, vbabka, akpm, dave.hansen, oleg, srikar
  Cc: yang.shi, linux-mm, linux-kernel

When unmapping VM_PFNMAP mappings, vm flags need to be updated. Since the
vmas have already been detached at that point, it sounds safe to update vm
flags with read mmap_sem held.

Cc: Michal Hocko <mhocko@kernel.org>
Cc: Vlastimil Babka <vbabka@suse.cz>
Signed-off-by: Yang Shi <yang.shi@linux.alibaba.com>
---
 mm/mmap.c | 15 +--------------
 1 file changed, 1 insertion(+), 14 deletions(-)

diff --git a/mm/mmap.c b/mm/mmap.c
index 086f8b5..0b6b231 100644
--- a/mm/mmap.c
+++ b/mm/mmap.c
@@ -2778,7 +2778,7 @@ static int do_munmap_zap_rlock(struct mm_struct *mm, unsigned long start,
 			       size_t len, struct list_head *uf)
 {
 	unsigned long end;
-	struct vm_area_struct *start_vma, *prev, *vma;
+	struct vm_area_struct *start_vma, *prev;
 	int ret = 0;
 
 	if (!addr_ok(start, len))
@@ -2811,16 +2811,6 @@ static int do_munmap_zap_rlock(struct mm_struct *mm, unsigned long start,
 			goto out;
 	}
 
-	/*
-	 * Unmapping vmas, which have VM_PFNMAP
-	 * need get done with write mmap_sem held since they may update
-	 * vm_flags. Deal with such mappings with regular do_munmap() call.
-	 */
-	for (vma = start_vma; vma && vma->vm_start < end; vma = vma->vm_next) {
-		if (vma->vm_flags & VM_PFNMAP)
-			goto regular_path;
-	}
-
 	/* Handle mlocked vmas */
 	if (mm->locked_vm)
 		munlock_vmas(start_vma, end);
@@ -2844,9 +2834,6 @@ static int do_munmap_zap_rlock(struct mm_struct *mm, unsigned long start,
 
 	return 0;
 
-regular_path:
-	ret = do_munmap(mm, start, len, uf);
-
 out:
 	up_write(&mm->mmap_sem);
 	return ret;
-- 
1.8.3.1



* Re: [RFC v9 PATCH 2/4] mm: mmap: zap pages with read mmap_sem in munmap
  2018-09-11 20:58 ` [RFC v9 PATCH 2/4] mm: mmap: zap pages with read mmap_sem in munmap Yang Shi
@ 2018-09-11 21:16   ` Matthew Wilcox
  2018-09-11 23:35     ` Yang Shi
  0 siblings, 1 reply; 10+ messages in thread
From: Matthew Wilcox @ 2018-09-11 21:16 UTC (permalink / raw)
  To: Yang Shi
  Cc: mhocko, ldufour, vbabka, akpm, dave.hansen, oleg, srikar,
	linux-mm, linux-kernel

On Wed, Sep 12, 2018 at 04:58:11AM +0800, Yang Shi wrote:
>  mm/mmap.c | 97 +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++--

I really think you're going about this the wrong way by duplicating
vm_munmap().


* Re: [RFC v9 PATCH 2/4] mm: mmap: zap pages with read mmap_sem in munmap
  2018-09-11 21:16   ` Matthew Wilcox
@ 2018-09-11 23:35     ` Yang Shi
  2018-09-12  2:29       ` Matthew Wilcox
  0 siblings, 1 reply; 10+ messages in thread
From: Yang Shi @ 2018-09-11 23:35 UTC (permalink / raw)
  To: Matthew Wilcox
  Cc: mhocko, ldufour, vbabka, akpm, dave.hansen, oleg, srikar,
	linux-mm, linux-kernel



On 9/11/18 2:16 PM, Matthew Wilcox wrote:
> On Wed, Sep 12, 2018 at 04:58:11AM +0800, Yang Shi wrote:
>>   mm/mmap.c | 97 +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++--
> I really think you're going about this the wrong way by duplicating
> vm_munmap().

If we don't duplicate vm_munmap() or do_munmap(), we need to pass them an
extra parameter to tell when it is fine to downgrade the write lock, or
whether the lock has been acquired outside (i.e. in mmap()/mremap()),
right? But vm_munmap() and do_munmap() are called not only from
mmap-related code but also from other places, like arch-specific code,
which don't need to downgrade the write lock or are not safe doing so.

Actually, I did it this way in the v1 patches, but it got pushed back by
tglx, who suggested duplicating the code so that the change could be done
in mm only, without touching other files, i.e. arch-specific stuff. I
didn't have a strong argument to convince him.

And Michal prefers to have VM_HUGETLB and VM_PFNMAP handled separately, for
safety and bisectability, which requires calling the regular do_munmap().

In addition, I just found that the mpx code may call do_munmap()
recursively while I was looking into it.

We might be able to handle all of these with the extra parameter, but it
sounds like it would make the code hard to understand and error prone.

Thanks,
Yang




* Re: [RFC v9 PATCH 2/4] mm: mmap: zap pages with read mmap_sem in munmap
  2018-09-11 23:35     ` Yang Shi
@ 2018-09-12  2:29       ` Matthew Wilcox
  2018-09-12  9:11         ` Michal Hocko
  0 siblings, 1 reply; 10+ messages in thread
From: Matthew Wilcox @ 2018-09-12  2:29 UTC (permalink / raw)
  To: Yang Shi
  Cc: mhocko, ldufour, vbabka, akpm, dave.hansen, oleg, srikar,
	linux-mm, linux-kernel

On Tue, Sep 11, 2018 at 04:35:03PM -0700, Yang Shi wrote:
> On 9/11/18 2:16 PM, Matthew Wilcox wrote:
> > On Wed, Sep 12, 2018 at 04:58:11AM +0800, Yang Shi wrote:
> > >   mm/mmap.c | 97 +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++--
> > I really think you're going about this the wrong way by duplicating
> > vm_munmap().
> 
> If we don't duplicate vm_munmap() or do_munmap(), we need to pass them an
> extra parameter to tell when it is fine to downgrade the write lock, or
> whether the lock has been acquired outside (i.e. in mmap()/mremap()),
> right? But vm_munmap() and do_munmap() are called not only from
> mmap-related code but also from other places, like arch-specific code,
> which don't need to downgrade the write lock or are not safe doing so.
> 
> Actually, I did it this way in the v1 patches, but it got pushed back by
> tglx, who suggested duplicating the code so that the change could be done
> in mm only, without touching other files, i.e. arch-specific stuff. I
> didn't have a strong argument to convince him.

With my patch, there is nothing to change in arch-specific code.
Here it is again ...

diff --git a/mm/mmap.c b/mm/mmap.c
index de699523c0b7..06dc31d1da8c 100644
--- a/mm/mmap.c
+++ b/mm/mmap.c
@@ -2798,11 +2798,11 @@ int split_vma(struct mm_struct *mm, struct vm_area_struct *vma,
  * work.  This now handles partial unmappings.
  * Jeremy Fitzhardinge <jeremy@goop.org>
  */
-int do_munmap(struct mm_struct *mm, unsigned long start, size_t len,
-	      struct list_head *uf)
+static int __do_munmap(struct mm_struct *mm, unsigned long start, size_t len,
+	      struct list_head *uf, bool downgrade)
 {
 	unsigned long end;
-	struct vm_area_struct *vma, *prev, *last;
+	struct vm_area_struct *vma, *prev, *last, *tmp;
 
 	if ((offset_in_page(start)) || start > TASK_SIZE || len > TASK_SIZE-start)
 		return -EINVAL;
@@ -2816,7 +2816,7 @@ int do_munmap(struct mm_struct *mm, unsigned long start, size_t len,
 	if (!vma)
 		return 0;
 	prev = vma->vm_prev;
-	/* we have  start < vma->vm_end  */
+	/* we have start < vma->vm_end  */
 
 	/* if it doesn't overlap, we have nothing.. */
 	end = start + len;
@@ -2873,18 +2873,22 @@ int do_munmap(struct mm_struct *mm, unsigned long start, size_t len,
 
 	/*
 	 * unlock any mlock()ed ranges before detaching vmas
+	 * and check to see if there's any reason we might have to hold
+	 * the mmap_sem write-locked while unmapping regions.
 	 */
-	if (mm->locked_vm) {
-		struct vm_area_struct *tmp = vma;
-		while (tmp && tmp->vm_start < end) {
-			if (tmp->vm_flags & VM_LOCKED) {
-				mm->locked_vm -= vma_pages(tmp);
-				munlock_vma_pages_all(tmp);
-			}
-			tmp = tmp->vm_next;
+	for (tmp = vma; tmp && tmp->vm_start < end; tmp = tmp->vm_next) {
+		if (tmp->vm_flags & VM_LOCKED) {
+			mm->locked_vm -= vma_pages(tmp);
+			munlock_vma_pages_all(tmp);
 		}
+		if (tmp->vm_file &&
+				has_uprobes(tmp, tmp->vm_start, tmp->vm_end))
+			downgrade = false;
 	}
 
+	if (downgrade)
+		downgrade_write(&mm->mmap_sem);
+
 	/*
 	 * Remove the vma's, and unmap the actual pages
 	 */
@@ -2896,7 +2900,13 @@ int do_munmap(struct mm_struct *mm, unsigned long start, size_t len,
 	/* Fix up all other VM information */
 	remove_vma_list(mm, vma);
 
-	return 0;
+	return downgrade ? 1 : 0;
+}
+
+int do_munmap(struct mm_struct *mm, unsigned long start, size_t len,
+		struct list_head *uf)
+{
+	return __do_munmap(mm, start, len, uf, false);
 }
 
 int vm_munmap(unsigned long start, size_t len)
@@ -2905,11 +2915,12 @@ int vm_munmap(unsigned long start, size_t len)
 	struct mm_struct *mm = current->mm;
 	LIST_HEAD(uf);
 
-	if (down_write_killable(&mm->mmap_sem))
-		return -EINTR;
-
-	ret = do_munmap(mm, start, len, &uf);
-	up_write(&mm->mmap_sem);
+	down_write(&mm->mmap_sem);
+	ret = __do_munmap(mm, start, len, &uf, true);
+	if (ret == 1) {
+		up_read(&mm->mmap_sem);
+		ret = 0;
+	} else
+		up_write(&mm->mmap_sem);
 	userfaultfd_unmap_complete(mm, &uf);
 	return ret;
 }

Anybody calling do_munmap() will not get the lock dropped.

> And Michal prefers to have VM_HUGETLB and VM_PFNMAP handled separately,
> for safety and bisectability, which requires calling the regular
> do_munmap().

That can be introduced and then taken out ... indeed, you can split this into
many patches, starting with this:

+		if (tmp->vm_file)
+			downgrade = false;

to only allow this optimisation for anonymous mappings at first.

> In addition, I just found that the mpx code may call do_munmap()
> recursively while I was looking into it.
> 
> We might be able to handle all of these with the extra parameter, but it
> sounds like it would make the code hard to understand and error prone.

Only if you make the extra parameter mandatory.


* Re: [RFC v9 PATCH 2/4] mm: mmap: zap pages with read mmap_sem in munmap
  2018-09-12  2:29       ` Matthew Wilcox
@ 2018-09-12  9:11         ` Michal Hocko
  2018-09-12 17:15           ` Yang Shi
  0 siblings, 1 reply; 10+ messages in thread
From: Michal Hocko @ 2018-09-12  9:11 UTC (permalink / raw)
  To: Matthew Wilcox
  Cc: Yang Shi, ldufour, vbabka, akpm, dave.hansen, oleg, srikar,
	linux-mm, linux-kernel

On Tue 11-09-18 19:29:21, Matthew Wilcox wrote:
> On Tue, Sep 11, 2018 at 04:35:03PM -0700, Yang Shi wrote:
[...]

I didn't get to read the patch yet.

> > And Michal prefers to have VM_HUGETLB and VM_PFNMAP handled separately,
> > for safety and bisectability, which requires calling the regular
> > do_munmap().
> 
> That can be introduced and then taken out ... indeed, you can split this into
> many patches, starting with this:
> 
> +		if (tmp->vm_file)
> +			downgrade = false;
> 
> to only allow this optimisation for anonymous mappings at first.

or add a helper function to check for special cases and make the
downgrade behavior conditional on it.
-- 
Michal Hocko
SUSE Labs


* Re: [RFC v9 PATCH 2/4] mm: mmap: zap pages with read mmap_sem in munmap
  2018-09-12  9:11         ` Michal Hocko
@ 2018-09-12 17:15           ` Yang Shi
  0 siblings, 0 replies; 10+ messages in thread
From: Yang Shi @ 2018-09-12 17:15 UTC (permalink / raw)
  To: Michal Hocko, Matthew Wilcox
  Cc: ldufour, vbabka, akpm, dave.hansen, oleg, srikar, linux-mm, linux-kernel



On 9/12/18 2:11 AM, Michal Hocko wrote:
> On Tue 11-09-18 19:29:21, Matthew Wilcox wrote:
>> On Tue, Sep 11, 2018 at 04:35:03PM -0700, Yang Shi wrote:
> [...]
>
> I didn't get to read the patch yet.

If you guys think this is the better way, I can convert my patches to go
this way. The conversion is simple to do.

Thanks,
Yang

>
>>> And Michal prefers to have VM_HUGETLB and VM_PFNMAP handled separately,
>>> for safety and bisectability, which requires calling the regular
>>> do_munmap().
>> That can be introduced and then taken out ... indeed, you can split this into
>> many patches, starting with this:
>>
>> +		if (tmp->vm_file)
>> +			downgrade = false;
>>
>> to only allow this optimisation for anonymous mappings at first.
> or add a helper function to check for special cases and make the
> downgrade behavior conditional on it.


