* [RFC v7 PATCH 0/4] mm: zap pages with read mmap_sem in munmap for large mapping
@ 2018-08-09 23:35 Yang Shi
  2018-08-09 23:36 ` [RFC v7 PATCH 1/4] mm: refactor do_munmap() to extract the common part Yang Shi
                   ` (3 more replies)
  0 siblings, 4 replies; 15+ messages in thread
From: Yang Shi @ 2018-08-09 23:35 UTC (permalink / raw)
  To: mhocko, willy, ldufour, kirill, vbabka, akpm, peterz, mingo,
	acme, alexander.shishkin, jolsa, namhyung
  Cc: yang.shi, linux-mm, linux-kernel


Background:
Recently, when we ran some vm scalability tests on machines with large memory,
we ran into a couple of mmap_sem scalability issues when unmapping a large memory
space. Please refer to https://lkml.org/lkml/2017/12/14/733 and
https://lkml.org/lkml/2018/2/20/576.


History:
akpm then suggested unmapping large mappings section by section, dropping
mmap_sem between sections, to mitigate the issue (see https://lkml.org/lkml/2018/3/6/784).

The v1 patch series was submitted to the mailing list per Andrew's suggestion
(see https://lkml.org/lkml/2018/3/20/786). I then received a lot of great feedback
and suggestions.

This topic was then discussed at the LSF/MM summit 2018. At the summit, Michal
Hocko suggested (as he also did in the v1 patch review) trying a "two phases"
approach: zap pages with read mmap_sem held, then do the cleanup with write
mmap_sem held (for discussion details, see https://lwn.net/Articles/753269/).


Approach:
Zapping pages is the most time-consuming part. According to Michal Hocko's
suggestion (see the LWN article above), zapping pages can be done while holding
read mmap_sem, as MADV_DONTNEED does. Then re-acquire write mmap_sem to clean
up the vmas.

But, we can't call MADV_DONTNEED directly, since it has two major drawbacks:
  * A page fault that wins the race in the middle of munmap sees an unexpected
    state: it may return a zero page instead of the original content or SIGSEGV.
  * It can't handle VM_LOCKED | VM_HUGETLB | VM_PFNMAP and uprobe mappings, which
    akpm considers a showstopper.

But, some parts may still need write mmap_sem, for example vma splitting. So,
the design is as follows:
        acquire write mmap_sem
        lookup vmas (find and split vmas)
        deal with special mappings
        detach vmas
        downgrade_write

        zap pages
        free page tables
        release mmap_sem
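
For reference, the steps above map onto the rwsem API roughly as in the
condensed sketch below. This is only an illustrative summary of what patch 2/4
implements (the mlocked-vma, userfaultfd and special-mapping handling are
omitted); the helpers are the ones introduced or used by this series:

	static int munmap_zap_rlock_sketch(struct mm_struct *mm,
					   unsigned long start, size_t len)
	{
		unsigned long end = start + PAGE_ALIGN(len);
		struct vm_area_struct *vma, *prev;

		/* write lock: find/split the vmas and detach them from the mm */
		if (down_write_killable(&mm->mmap_sem))
			return -EINTR;

		vma = munmap_lookup_vma(mm, start, end);	/* find + split */
		if (!vma || IS_ERR(vma)) {
			up_write(&mm->mmap_sem);
			return PTR_ERR_OR_ZERO(vma);
		}
		prev = vma->vm_prev;

		detach_vmas_to_be_unmapped(mm, vma, prev, end);

		/* readers may run from here on; they can no longer find the vmas */
		downgrade_write(&mm->mmap_sem);

		unmap_region(mm, vma, prev, start, end);	/* zap pages, free page tables */
		remove_vma_list(mm, vma);			/* free the vma structures */

		up_read(&mm->mmap_sem);
		return 0;
	}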

VM events that take read mmap_sem (i.e. page faults, gup, etc.) may come in
during page zapping, but since the vmas have already been detached, they will
not be able to find a valid vma and will just return SIGSEGV or -EFAULT as
expected.

If a vma has VM_HUGETLB | VM_PFNMAP set or has uprobes, it is considered a
special mapping. For the sake of safety and bisectability, such mappings are
handled by falling back to regular do_munmap() with exclusive mmap_sem held,
in a separate patch, since it may not be safe to update vm_flags with read
mmap_sem, even though it sounds safe for this specific case because the vmas
have been detached.

With the "detach vmas first" approach we don't have to re-acquire
mmap_sem again to clean up vmas to avoid race window which might get the
address space changed since downgrade_write() doesn't release the lock
to lead regression, which simply downgrades to read lock.

And, since the lock acquire/release cost is kept to a minimum and is almost
the same as before, the optimization can be extended to mappings of any size
without incurring a significant penalty for small mappings.

For the time being, this is done only in the munmap syscall path. Other
vm_munmap() or do_munmap() call sites (i.e. mmap, mremap, etc.) remain intact
due to some implementation difficulties: they acquire write mmap_sem at the
very beginning and hold it until the end, and do_munmap() may be called in the
middle. The optimized do_munmap, however, needs to be called without mmap_sem
held so that the optimization can be applied. So, doing a similar optimization
for the mmap/mremap paths would, I'm afraid, require redesigning them. mremap
might be called on a very large area depending on the use case; optimizing it
will be considered in the future.


Changelog
v6 -> v7:
* Rename some helper functions per Michal and Vlastimil's comments.
* Refactor munmap_lookup_vma() to return the pointer to the start vma per Michal's
  suggestion.
* Rephrase some commit log for patch 2/4 per Michal's comments.
* Deal with special mappings (VM_HUGETLB | VM_PFNMAP | uprobes) with regular
  do_munmap() in a separate patch per Michal's suggestion.
* Bring the patch which makes vma_has_uprobes() non-static back since it is
  needed to check if a vma has uprobes or not.

v5 -> v6:
* Addressed the comments from Kirill and Laurent
* Added Laurent's reviewed-by to patch 1/2. Thanks.

v4 -> v5:
* Detach vmas before zapping pages so that we don't have to use VM_DEAD to mark
  a vma being unmapped, since the vmas have been detached from the rbtree when
  zapping pages. Per Kirill
* Eliminate VM_DEAD stuff
* With this change we don't have to re-acquire write mmap_sem to do cleanup.
  So, we could eliminate a potential race window
* Eliminate PUD_SIZE check, and extend this optimization to all size

v3 -> v4:
* Extend check_stable_address_space to check VM_DEAD as Michal suggested
* Deal with vm_flags update of VM_LOCKED | VM_HUGETLB | VM_PFNMAP and uprobe
  mappings with exclusive lock held. The actual unmapping is still done with read
  mmap_sem to solve akpm's concern
* Clean up vmas by calling do_munmap, rather than carrying the vmas over, to
  prevent a race condition, as Kirill suggested
* Extracted more common code
* Addressed some code cleanup comments from akpm
* Dropped uprobe and arch specific code, now all the changes are mm only
* Still keep the PUD_SIZE threshold; if everyone thinks it is better to extend this
  to all sizes or to a smaller size, I will remove it
* Make this optimization 64 bit only explicitly per akpm's suggestion

v2 -> v3:
* Refactor do_munmap code to extract the common part per Peter's suggestion
* Introduced VM_DEAD flag per Michal's suggestion. Just handled VM_DEAD in
  x86's page fault handler for the time being. Other architectures will be covered
  once the patch series is reviewed
* Now lookup vma (find and split) and set VM_DEAD flag with write mmap_sem, then
  zap mapping with read mmap_sem, then clean up pgtables and vmas with write
  mmap_sem per Peter's suggestion

v1 -> v2:
* Re-implemented the code per the discussion on LSFMM summit


Regression and performance data:
Ran the regression tests below with the threshold manually set to 4K in the code:
  * Full LTP
  * Trinity (munmap/all vm syscalls)
  * Stress-ng: mmap/mmapfork/mmapfixed/mmapaddr/mmapmany/vm
  * mm-tests: kernbench, phpbench, sysbench-mariadb, will-it-scale
  * vm-scalability

With the patches, the exclusive mmap_sem hold time when munmapping an 80GB
address space on a machine with 32 cores of E5-2680 @ 2.70GHz dropped from
seconds to the microsecond level.

munmap_test-15002 [008]   594.380138: funcgraph_entry: |  vm_munmap_zap_rlock() {
munmap_test-15002 [008]   594.380146: funcgraph_entry:      !2485684 us |    unmap_region();
munmap_test-15002 [008]   596.865836: funcgraph_exit:       !2485692 us |  }

Here the execution time of unmap_region() is used to estimate the time spent
holding read mmap_sem; the remaining time is spent holding the exclusive lock.
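
The test program itself is not included in the series; the sketch below is a
minimal userspace approximation of this kind of test (the 80GB anonymous
private mapping and the timing around munmap() are assumptions about the
setup, not the exact munmap_test program traced above):

	#include <stdio.h>
	#include <string.h>
	#include <sys/mman.h>
	#include <time.h>

	int main(void)
	{
		size_t len = 80UL << 30;	/* 80GB, assumed test size */
		struct timespec t0, t1;

		char *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
			       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
		if (p == MAP_FAILED)
			return 1;
		memset(p, 1, len);		/* fault in every page */

		clock_gettime(CLOCK_MONOTONIC, &t0);
		munmap(p, len);			/* the call traced above */
		clock_gettime(CLOCK_MONOTONIC, &t1);

		printf("munmap took %.3f s\n", (t1.tv_sec - t0.tv_sec) +
		       (t1.tv_nsec - t0.tv_nsec) / 1e9);
		return 0;
	}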


Yang Shi (4):
      mm: refactor do_munmap() to extract the common part
      mm: mmap: zap pages with read mmap_sem in munmap
      uprobes: make vma_has_uprobes non-static
      mm: unmap special vmas with regular do_munmap()

 include/linux/uprobes.h |   7 +++
 kernel/events/uprobes.c |   2 +-
 mm/mmap.c               | 205 +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++-----------
 3 files changed, 182 insertions(+), 32 deletions(-)


* [RFC v7 PATCH 1/4] mm: refactor do_munmap() to extract the common part
  2018-08-09 23:35 [RFC v7 PATCH 0/4] mm: zap pages with read mmap_sem in munmap for large mapping Yang Shi
@ 2018-08-09 23:36 ` Yang Shi
  2018-08-10 10:20   ` Vlastimil Babka
  2018-08-10 17:41   ` Matthew Wilcox
  2018-08-09 23:36 ` [RFC v7 PATCH 2/4] mm: mmap: zap pages with read mmap_sem in munmap Yang Shi
                   ` (2 subsequent siblings)
  3 siblings, 2 replies; 15+ messages in thread
From: Yang Shi @ 2018-08-09 23:36 UTC (permalink / raw)
  To: mhocko, willy, ldufour, kirill, vbabka, akpm, peterz, mingo,
	acme, alexander.shishkin, jolsa, namhyung
  Cc: yang.shi, linux-mm, linux-kernel

Introduce three new helper functions:
  * addr_ok()
  * munmap_lookup_vma()
  * munlock_vmas()

They will be used by do_munmap() and by the new munmap path that zaps large
mappings early, introduced in a later patch.

There is no functional change; this is just a code refactor.

Reviewed-by: Laurent Dufour <ldufour@linux.vnet.ibm.com>
Signed-off-by: Yang Shi <yang.shi@linux.alibaba.com>
---
 mm/mmap.c | 100 ++++++++++++++++++++++++++++++++++++++++++++------------------
 1 file changed, 71 insertions(+), 29 deletions(-)

diff --git a/mm/mmap.c b/mm/mmap.c
index 17bbf4d..2a6898b 100644
--- a/mm/mmap.c
+++ b/mm/mmap.c
@@ -2681,35 +2681,40 @@ int split_vma(struct mm_struct *mm, struct vm_area_struct *vma,
 	return __split_vma(mm, vma, addr, new_below);
 }
 
-/* Munmap is split into 2 main parts -- this part which finds
- * what needs doing, and the areas themselves, which do the
- * work.  This now handles partial unmappings.
- * Jeremy Fitzhardinge <jeremy@goop.org>
- */
-int do_munmap(struct mm_struct *mm, unsigned long start, size_t len,
-	      struct list_head *uf)
+static inline bool addr_ok(unsigned long start, size_t len)
 {
-	unsigned long end;
-	struct vm_area_struct *vma, *prev, *last;
-
 	if ((offset_in_page(start)) || start > TASK_SIZE || len > TASK_SIZE-start)
-		return -EINVAL;
+		return false;
 
-	len = PAGE_ALIGN(len);
-	if (len == 0)
-		return -EINVAL;
+	if (PAGE_ALIGN(len) == 0)
+		return false;
+
+	return true;
+}
+
+/*
+ * munmap_lookup_vma: find the first overlap vma and split overlap vmas.
+ * @mm: mm_struct
+ * @start: start address
+ * @end: end address
+ *
+ * returns the pointer to vma, NULL or err ptr when split_vma returns error.
+ */
+static struct vm_area_struct *munmap_lookup_vma(struct mm_struct *mm,
+			unsigned long start, unsigned long end)
+{
+	struct vm_area_struct *vma, *prev, *last;
 
 	/* Find the first overlapping VMA */
 	vma = find_vma(mm, start);
 	if (!vma)
-		return 0;
-	prev = vma->vm_prev;
-	/* we have  start < vma->vm_end  */
+		return NULL;
 
+	/* we have  start < vma->vm_end  */
 	/* if it doesn't overlap, we have nothing.. */
-	end = start + len;
 	if (vma->vm_start >= end)
-		return 0;
+		return NULL;
+	prev = vma->vm_prev;
 
 	/*
 	 * If we need to split any vma, do it now to save pain later.
@@ -2727,11 +2732,11 @@ int do_munmap(struct mm_struct *mm, unsigned long start, size_t len,
 		 * its limit temporarily, to help free resources as expected.
 		 */
 		if (end < vma->vm_end && mm->map_count >= sysctl_max_map_count)
-			return -ENOMEM;
+			return ERR_PTR(-ENOMEM);
 
 		error = __split_vma(mm, vma, start, 0);
 		if (error)
-			return error;
+			return ERR_PTR(error);
 		prev = vma;
 	}
 
@@ -2740,10 +2745,53 @@ int do_munmap(struct mm_struct *mm, unsigned long start, size_t len,
 	if (last && end > last->vm_start) {
 		int error = __split_vma(mm, last, end, 1);
 		if (error)
-			return error;
+			return ERR_PTR(error);
 	}
 	vma = prev ? prev->vm_next : mm->mmap;
 
+	return vma;
+}
+
+static inline void munlock_vmas(struct vm_area_struct *vma,
+				unsigned long end)
+{
+	struct mm_struct *mm = vma->vm_mm;
+
+	while (vma && vma->vm_start < end) {
+		if (vma->vm_flags & VM_LOCKED) {
+			mm->locked_vm -= vma_pages(vma);
+			munlock_vma_pages_all(vma);
+		}
+		vma = vma->vm_next;
+	}
+}
+
+/* Munmap is split into 2 main parts -- this part which finds
+ * what needs doing, and the areas themselves, which do the
+ * work.  This now handles partial unmappings.
+ * Jeremy Fitzhardinge <jeremy@goop.org>
+ */
+int do_munmap(struct mm_struct *mm, unsigned long start, size_t len,
+	      struct list_head *uf)
+{
+	unsigned long end;
+	struct vm_area_struct *vma, *prev;
+
+	if (!addr_ok(start, len))
+		return -EINVAL;
+
+	len = PAGE_ALIGN(len);
+
+	end = start + len;
+
+	vma = munmap_lookup_vma(mm, start, end);
+	if (!vma)
+		return 0;
+	if (IS_ERR(vma))
+		return PTR_ERR(vma);
+
+	prev = vma->vm_prev;
+
 	if (unlikely(uf)) {
 		/*
 		 * If userfaultfd_unmap_prep returns an error the vmas
@@ -2764,13 +2812,7 @@ int do_munmap(struct mm_struct *mm, unsigned long start, size_t len,
 	 */
 	if (mm->locked_vm) {
 		struct vm_area_struct *tmp = vma;
-		while (tmp && tmp->vm_start < end) {
-			if (tmp->vm_flags & VM_LOCKED) {
-				mm->locked_vm -= vma_pages(tmp);
-				munlock_vma_pages_all(tmp);
-			}
-			tmp = tmp->vm_next;
-		}
+		munlock_vmas(tmp, end);
 	}
 
 	/*
-- 
1.8.3.1



* [RFC v7 PATCH 2/4] mm: mmap: zap pages with read mmap_sem in munmap
  2018-08-09 23:35 [RFC v7 PATCH 0/4] mm: zap pages with read mmap_sem in munmap for large mapping Yang Shi
  2018-08-09 23:36 ` [RFC v7 PATCH 1/4] mm: refactor do_munmap() to extract the common part Yang Shi
@ 2018-08-09 23:36 ` Yang Shi
  2018-08-10 17:57   ` Matthew Wilcox
  2018-08-09 23:36 ` [RFC v7 PATCH 3/4] uprobes: make vma_has_uprobes non-static Yang Shi
  2018-08-09 23:36 ` [RFC v7 PATCH 4/4] mm: unmap special vmas with regular do_munmap() Yang Shi
  3 siblings, 1 reply; 15+ messages in thread
From: Yang Shi @ 2018-08-09 23:36 UTC (permalink / raw)
  To: mhocko, willy, ldufour, kirill, vbabka, akpm, peterz, mingo,
	acme, alexander.shishkin, jolsa, namhyung
  Cc: yang.shi, linux-mm, linux-kernel

When running some mmap/munmap scalability tests with large memory (i.e.
> 300GB), the below hung task issue may happen occasionally.

INFO: task ps:14018 blocked for more than 120 seconds.
       Tainted: G            E 4.9.79-009.ali3000.alios7.x86_64 #1
 "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this
message.
 ps              D    0 14018      1 0x00000004
  ffff885582f84000 ffff885e8682f000 ffff880972943000 ffff885ebf499bc0
  ffff8828ee120000 ffffc900349bfca8 ffffffff817154d0 0000000000000040
  00ffffff812f872a ffff885ebf499bc0 024000d000948300 ffff880972943000
 Call Trace:
  [<ffffffff817154d0>] ? __schedule+0x250/0x730
  [<ffffffff817159e6>] schedule+0x36/0x80
  [<ffffffff81718560>] rwsem_down_read_failed+0xf0/0x150
  [<ffffffff81390a28>] call_rwsem_down_read_failed+0x18/0x30
  [<ffffffff81717db0>] down_read+0x20/0x40
  [<ffffffff812b9439>] proc_pid_cmdline_read+0xd9/0x4e0
  [<ffffffff81253c95>] ? do_filp_open+0xa5/0x100
  [<ffffffff81241d87>] __vfs_read+0x37/0x150
  [<ffffffff812f824b>] ? security_file_permission+0x9b/0xc0
  [<ffffffff81242266>] vfs_read+0x96/0x130
  [<ffffffff812437b5>] SyS_read+0x55/0xc0
  [<ffffffff8171a6da>] entry_SYSCALL_64_fastpath+0x1a/0xc5

This is because munmap holds mmap_sem exclusively from the very beginning all
the way to the end, and doesn't release it in the middle. When unmapping a
large mapping, it may take a long time (~18 seconds to unmap a 320GB mapping
with every single page mapped, on an idle machine).
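
The scenario is straightforward to reproduce from userspace; the sketch below
is only an illustration (the mapping size and the /proc reader are
assumptions, not the exact test that produced the trace above): a child doing
a ps-like read of /proc/<pid>/cmdline needs read mmap_sem and blocks behind
the parent's long-running munmap().

	#include <fcntl.h>
	#include <stdio.h>
	#include <string.h>
	#include <sys/mman.h>
	#include <sys/wait.h>
	#include <unistd.h>

	int main(void)
	{
		size_t len = 300UL << 30;	/* large mapping, assumed size */
		char *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
			       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
		if (p == MAP_FAILED)
			return 1;
		memset(p, 1, len);		/* fault in every single page */

		if (fork() == 0) {
			/* ps-like reader: takes mmap_sem for read */
			char path[64], buf[256];
			int fd;

			usleep(100 * 1000);	/* let the parent enter munmap() */
			snprintf(path, sizeof(path), "/proc/%d/cmdline", getppid());
			fd = open(path, O_RDONLY);
			read(fd, buf, sizeof(buf));	/* blocks until munmap() finishes */
			close(fd);
			_exit(0);
		}

		munmap(p, len);		/* holds mmap_sem for write the whole time */
		wait(NULL);
		return 0;
	}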

Zapping pages is the most time-consuming part. According to the suggestion
from Michal Hocko [1], zapping pages can be done while holding read mmap_sem,
as MADV_DONTNEED does. Then re-acquire write mmap_sem to clean up the vmas.

But, some parts may still need write mmap_sem, for example vma splitting. So,
the design is as follows:
        acquire write mmap_sem
        lookup vmas (find and split vmas)
        deal with special mappings
        detach vmas
        downgrade_write

        zap pages
        free page tables
        release mmap_sem

VM events that take read mmap_sem (i.e. page faults, gup, etc.) may come in
during page zapping, but since the vmas have already been detached, they will
not be able to find a valid vma and will just return SIGSEGV or -EFAULT as
expected.

If a vma has VM_HUGETLB | VM_PFNMAP set or has uprobes, it is considered a
special mapping. For the sake of safety and bisectability, such mappings are
handled by falling back to regular do_munmap() with exclusive mmap_sem held,
in a separate patch, since it may not be safe to update vm_flags with read
mmap_sem, even though it sounds safe for this specific case because the vmas
have been detached.

With the "detach vmas first" approach we don't have to re-acquire
mmap_sem again to clean up vmas to avoid race window which might get the
address space changed since downgrade_write() doesn't release the lock
to lead regression, which simply downgrades to read lock.

And, since the lock acquire/release cost is kept to a minimum and is almost
the same as before, the optimization can be extended to mappings of any size
without incurring a significant penalty for small mappings.

For the time being, this is done only in the munmap syscall path. Other
vm_munmap() or do_munmap() call sites (i.e. mmap, mremap, etc.) remain intact
due to some implementation difficulties: they acquire write mmap_sem at the
very beginning and hold it until the end, and do_munmap() may be called in the
middle. The optimized do_munmap, however, needs to be called without mmap_sem
held so that the optimization can be applied. So, doing a similar optimization
for the mmap/mremap paths would, I'm afraid, require redesigning them. mremap
might be called on a very large area depending on the use case; optimizing it
will be considered in the future.

With the patches, the exclusive mmap_sem hold time when munmapping an 80GB
address space on a machine with 32 cores of E5-2680 @ 2.70GHz dropped from
seconds to the microsecond level.

munmap_test-15002 [008]   594.380138: funcgraph_entry: |  vm_munmap_zap_rlock() {
munmap_test-15002 [008]   594.380146: funcgraph_entry:      !2485684 us |    unmap_region();
munmap_test-15002 [008]   596.865836: funcgraph_exit:       !2485692 us |  }

Here the execution time of unmap_region() is used to estimate the time spent
holding read mmap_sem; the remaining time is spent holding the exclusive lock.

[1] https://lwn.net/Articles/753269/

Suggested-by: Michal Hocko <mhocko@kernel.org>
Suggested-by: Kirill A. Shutemov <kirill@shutemov.name>
Cc: Matthew Wilcox <willy@infradead.org>
Cc: Laurent Dufour <ldufour@linux.vnet.ibm.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Yang Shi <yang.shi@linux.alibaba.com>
---
 mm/mmap.c | 81 +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++--
 1 file changed, 79 insertions(+), 2 deletions(-)

diff --git a/mm/mmap.c b/mm/mmap.c
index 2a6898b..2234d5a 100644
--- a/mm/mmap.c
+++ b/mm/mmap.c
@@ -2766,6 +2766,73 @@ static inline void munlock_vmas(struct vm_area_struct *vma,
 	}
 }
 
+/*
+ * Zap pages with read mmap_sem held
+ *
+ * uf is the list for userfaultfd
+ */
+static int do_munmap_zap_rlock(struct mm_struct *mm, unsigned long start,
+			       size_t len, struct list_head *uf)
+{
+	unsigned long end;
+	struct vm_area_struct *start_vma, *prev, *vma;
+	int ret = 0;
+
+	if (!addr_ok(start, len))
+		return -EINVAL;
+
+	len = PAGE_ALIGN(len);
+
+	end = start + len;
+
+	/*
+	 * Need write mmap_sem to split and detach vmas;
+	 * split vmas up-front to avoid a painful cleanup if a later step fails
+	 */
+	if (down_write_killable(&mm->mmap_sem))
+		return -EINTR;
+
+	start_vma = munmap_lookup_vma(mm, start, end);
+	if (!start_vma)
+		goto out;
+	if (IS_ERR(start_vma)) {
+		ret = PTR_ERR(start_vma);
+		goto out;
+	}
+
+	prev = start_vma->vm_prev;
+
+	if (unlikely(uf)) {
+		ret = userfaultfd_unmap_prep(start_vma, start, end, uf);
+		if (ret)
+			goto out;
+	}
+
+	/* Handle mlocked vmas */
+	if (mm->locked_vm) {
+		vma = start_vma;
+		munlock_vmas(vma, end);
+	}
+
+	/* Detach vmas from rbtree */
+	detach_vmas_to_be_unmapped(mm, start_vma, prev, end);
+
+	downgrade_write(&mm->mmap_sem);
+
+	/* Zap mappings with read mmap_sem */
+	unmap_region(mm, start_vma, prev, start, end);
+
+	arch_unmap(mm, start_vma, start, end);
+	remove_vma_list(mm, start_vma);
+	up_read(&mm->mmap_sem);
+
+	return 0;
+
+out:
+	up_write(&mm->mmap_sem);
+	return ret;
+}
+
 /* Munmap is split into 2 main parts -- this part which finds
  * what needs doing, and the areas themselves, which do the
  * work.  This now handles partial unmappings.
@@ -2829,6 +2896,17 @@ int do_munmap(struct mm_struct *mm, unsigned long start, size_t len,
 	return 0;
 }
 
+static int vm_munmap_zap_rlock(unsigned long start, size_t len)
+{
+	int ret;
+	struct mm_struct *mm = current->mm;
+	LIST_HEAD(uf);
+
+	ret = do_munmap_zap_rlock(mm, start, len, &uf);
+	userfaultfd_unmap_complete(mm, &uf);
+	return ret;
+}
+
 int vm_munmap(unsigned long start, size_t len)
 {
 	int ret;
@@ -2848,10 +2926,9 @@ int vm_munmap(unsigned long start, size_t len)
 SYSCALL_DEFINE2(munmap, unsigned long, addr, size_t, len)
 {
 	profile_munmap(addr);
-	return vm_munmap(addr, len);
+	return vm_munmap_zap_rlock(addr, len);
 }
 
-
 /*
  * Emulation of deprecated remap_file_pages() syscall.
  */
-- 
1.8.3.1



* [RFC v7 PATCH 3/4] uprobes: make vma_has_uprobes non-static
  2018-08-09 23:35 [RFC v7 PATCH 0/4] mm: zap pages with read mmap_sem in munmap for large mapping Yang Shi
  2018-08-09 23:36 ` [RFC v7 PATCH 1/4] mm: refactor do_munmap() to extract the common part Yang Shi
  2018-08-09 23:36 ` [RFC v7 PATCH 2/4] mm: mmap: zap pages with read mmap_sem in munmap Yang Shi
@ 2018-08-09 23:36 ` Yang Shi
  2018-08-09 23:36 ` [RFC v7 PATCH 4/4] mm: unmap special vmas with regular do_munmap() Yang Shi
  3 siblings, 0 replies; 15+ messages in thread
From: Yang Shi @ 2018-08-09 23:36 UTC (permalink / raw)
  To: mhocko, willy, ldufour, kirill, vbabka, akpm, peterz, mingo,
	acme, alexander.shishkin, jolsa, namhyung
  Cc: yang.shi, linux-mm, linux-kernel

vma_has_uprobes() will be used in the following patch to check whether a vma
can be unmapped while holding read mmap_sem, but it is currently static. Make
it non-static so it can be used outside of uprobes.

Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Arnaldo Carvalho de Melo <acme@kernel.org>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Signed-off-by: Yang Shi <yang.shi@linux.alibaba.com>
---
 include/linux/uprobes.h | 7 +++++++
 kernel/events/uprobes.c | 2 +-
 2 files changed, 8 insertions(+), 1 deletion(-)

diff --git a/include/linux/uprobes.h b/include/linux/uprobes.h
index 0a294e9..caeb26b 100644
--- a/include/linux/uprobes.h
+++ b/include/linux/uprobes.h
@@ -149,6 +149,8 @@ struct uprobes_state {
 extern bool arch_uprobe_ignore(struct arch_uprobe *aup, struct pt_regs *regs);
 extern void arch_uprobe_copy_ixol(struct page *page, unsigned long vaddr,
 					 void *src, unsigned long len);
+extern bool vma_has_uprobes(struct vm_area_struct *vma, unsigned long start,
+			    unsigned long end);
 #else /* !CONFIG_UPROBES */
 struct uprobes_state {
 };
@@ -203,5 +205,10 @@ static inline void uprobe_copy_process(struct task_struct *t, unsigned long flag
 static inline void uprobe_clear_state(struct mm_struct *mm)
 {
 }
+static inline bool vma_has_uprobes(struct vm_area_struct *vma, unsigned long start,
+				   unsigned long end)
+{
+	return false;
+}
 #endif /* !CONFIG_UPROBES */
 #endif	/* _LINUX_UPROBES_H */
diff --git a/kernel/events/uprobes.c b/kernel/events/uprobes.c
index ccc579a..4880c46 100644
--- a/kernel/events/uprobes.c
+++ b/kernel/events/uprobes.c
@@ -1095,7 +1095,7 @@ int uprobe_mmap(struct vm_area_struct *vma)
 	return 0;
 }
 
-static bool
+bool
 vma_has_uprobes(struct vm_area_struct *vma, unsigned long start, unsigned long end)
 {
 	loff_t min, max;
-- 
1.8.3.1



* [RFC v7 PATCH 4/4] mm: unmap special vmas with regular do_munmap()
  2018-08-09 23:35 [RFC v7 PATCH 0/4] mm: zap pages with read mmap_sem in munmap for large mapping Yang Shi
                   ` (2 preceding siblings ...)
  2018-08-09 23:36 ` [RFC v7 PATCH 3/4] uprobes: make vma_has_uprobes non-static Yang Shi
@ 2018-08-09 23:36 ` Yang Shi
  2018-08-10  9:51   ` Vlastimil Babka
  2018-08-10 10:46   ` Vlastimil Babka
  3 siblings, 2 replies; 15+ messages in thread
From: Yang Shi @ 2018-08-09 23:36 UTC (permalink / raw)
  To: mhocko, willy, ldufour, kirill, vbabka, akpm, peterz, mingo,
	acme, alexander.shishkin, jolsa, namhyung
  Cc: yang.shi, linux-mm, linux-kernel

Unmapping vmas which have the VM_HUGETLB | VM_PFNMAP flags set or have uprobes
needs to be done with write mmap_sem held, since they may update vm_flags.

So, it might not be safe enough to deal with these kinds of special mappings
with read mmap_sem. Deal with such mappings with a regular do_munmap() call.

Michal suggested making this a separate patch for the sake of safety and
bisectability.

Cc: Michal Hocko <mhocko@kernel.org>
Signed-off-by: Yang Shi <yang.shi@linux.alibaba.com>
---
 mm/mmap.c | 24 ++++++++++++++++++++++++
 1 file changed, 24 insertions(+)

diff --git a/mm/mmap.c b/mm/mmap.c
index 2234d5a..06cb83c 100644
--- a/mm/mmap.c
+++ b/mm/mmap.c
@@ -2766,6 +2766,16 @@ static inline void munlock_vmas(struct vm_area_struct *vma,
 	}
 }
 
+static inline bool can_zap_with_rlock(struct vm_area_struct *vma)
+{
+	if ((vma->vm_file &&
+	     vma_has_uprobes(vma, vma->vm_start, vma->vm_end)) ||
+	     (vma->vm_flags | (VM_HUGETLB | VM_PFNMAP)))
+		return false;
+
+	return true;
+}
+
 /*
  * Zap pages with read mmap_sem held
  *
@@ -2808,6 +2818,17 @@ static int do_munmap_zap_rlock(struct mm_struct *mm, unsigned long start,
 			goto out;
 	}
 
+	/*
+	 * Unmapping vmas which have the VM_HUGETLB | VM_PFNMAP flags set or
+	 * have uprobes set needs to be done with write mmap_sem held, since
+	 * they may update vm_flags. Deal with such mappings with a regular
+	 * do_munmap() call.
+	 */
+	for (vma = start_vma; vma && vma->vm_start < end; vma = vma->vm_next) {
+		if (!can_zap_with_rlock(vma))
+			goto regular_path;
+	}
+
 	/* Handle mlocked vmas */
 	if (mm->locked_vm) {
 		vma = start_vma;
@@ -2828,6 +2849,9 @@ static int do_munmap_zap_rlock(struct mm_struct *mm, unsigned long start,
 
 	return 0;
 
+regular_path:
+	ret = do_munmap(mm, start, len, uf);
+
 out:
 	up_write(&mm->mmap_sem);
 	return ret;
-- 
1.8.3.1



* Re: [RFC v7 PATCH 4/4] mm: unmap special vmas with regular do_munmap()
  2018-08-09 23:36 ` [RFC v7 PATCH 4/4] mm: unmap special vmas with regular do_munmap() Yang Shi
@ 2018-08-10  9:51   ` Vlastimil Babka
  2018-08-10 11:59     ` Michal Hocko
  2018-08-10 16:51     ` Yang Shi
  2018-08-10 10:46   ` Vlastimil Babka
  1 sibling, 2 replies; 15+ messages in thread
From: Vlastimil Babka @ 2018-08-10  9:51 UTC (permalink / raw)
  To: Yang Shi, mhocko, willy, ldufour, kirill, akpm, peterz, mingo,
	acme, alexander.shishkin, jolsa, namhyung
  Cc: linux-mm, linux-kernel

On 08/10/2018 01:36 AM, Yang Shi wrote:
> Unmapping vmas, which have VM_HUGETLB | VM_PFNMAP flag set or
> have uprobes set, need get done with write mmap_sem held since
> they may update vm_flags.
> 
> So, it might be not safe enough to deal with these kind of special
> mappings with read mmap_sem. Deal with such mappings with regular
> do_munmap() call.
> 
> Michal suggested to make this as a separate patch for safer and more
> bisectable sake.

Hm I believe Michal meant the opposite "evolution" though. Patch 2/4
should be done in a way that special mappings keep using the regular
path, and this patch would convert them to the new path. Possibly even
each special case separately.

> Cc: Michal Hocko <mhocko@kernel.org>
> Signed-off-by: Yang Shi <yang.shi@linux.alibaba.com>
> ---
>  mm/mmap.c | 24 ++++++++++++++++++++++++
>  1 file changed, 24 insertions(+)
> 
> diff --git a/mm/mmap.c b/mm/mmap.c
> index 2234d5a..06cb83c 100644
> --- a/mm/mmap.c
> +++ b/mm/mmap.c
> @@ -2766,6 +2766,16 @@ static inline void munlock_vmas(struct vm_area_struct *vma,
>  	}
>  }
>  
> +static inline bool can_zap_with_rlock(struct vm_area_struct *vma)
> +{
> +	if ((vma->vm_file &&
> +	     vma_has_uprobes(vma, vma->vm_start, vma->vm_end)) ||
> +	     (vma->vm_flags | (VM_HUGETLB | VM_PFNMAP)))
> +		return false;
> +
> +	return true;
> +}
> +
>  /*
>   * Zap pages with read mmap_sem held
>   *
> @@ -2808,6 +2818,17 @@ static int do_munmap_zap_rlock(struct mm_struct *mm, unsigned long start,
>  			goto out;
>  	}
>  
> +	/*
> +	 * Unmapping vmas, which have VM_HUGETLB | VM_PFNMAP flag set or
> +	 * have uprobes set, need get done with write mmap_sem held since
> +	 * they may update vm_flags. Deal with such mappings with regular
> +	 * do_munmap() call.
> +	 */
> +	for (vma = start_vma; vma && vma->vm_start < end; vma = vma->vm_next) {
> +		if (!can_zap_with_rlock(vma))
> +			goto regular_path;
> +	}
> +
>  	/* Handle mlocked vmas */
>  	if (mm->locked_vm) {
>  		vma = start_vma;
> @@ -2828,6 +2849,9 @@ static int do_munmap_zap_rlock(struct mm_struct *mm, unsigned long start,
>  
>  	return 0;
>  
> +regular_path:
> +	ret = do_munmap(mm, start, len, uf);
> +
>  out:
>  	up_write(&mm->mmap_sem);
>  	return ret;
> 



* Re: [RFC v7 PATCH 1/4] mm: refactor do_munmap() to extract the common part
  2018-08-09 23:36 ` [RFC v7 PATCH 1/4] mm: refactor do_munmap() to extract the common part Yang Shi
@ 2018-08-10 10:20   ` Vlastimil Babka
  2018-08-10 17:41   ` Matthew Wilcox
  1 sibling, 0 replies; 15+ messages in thread
From: Vlastimil Babka @ 2018-08-10 10:20 UTC (permalink / raw)
  To: Yang Shi, mhocko, willy, ldufour, kirill, akpm, peterz, mingo,
	acme, alexander.shishkin, jolsa, namhyung
  Cc: linux-mm, linux-kernel

On 08/10/2018 01:36 AM, Yang Shi wrote:
> Introduces three new helper functions:
>   * addr_ok()
>   * munmap_lookup_vma()
>   * munlock_vmas()
> 
> They will be used by do_munmap() and the new do_munmap with zapping
> large mapping early in the later patch.
> 
> There is no functional change, just code refactor.
> 
> Reviewed-by: Laurent Dufour <ldufour@linux.vnet.ibm.com>
> Signed-off-by: Yang Shi <yang.shi@linux.alibaba.com>

Acked-by: Vlastimil Babka <vbabka@suse.cz>

Small nit below.

> @@ -2764,13 +2812,7 @@ int do_munmap(struct mm_struct *mm, unsigned long start, size_t len,
>  	 */
>  	if (mm->locked_vm) {
>  		struct vm_area_struct *tmp = vma;
> -		while (tmp && tmp->vm_start < end) {
> -			if (tmp->vm_flags & VM_LOCKED) {
> -				mm->locked_vm -= vma_pages(tmp);
> -				munlock_vma_pages_all(tmp);
> -			}
> -			tmp = tmp->vm_next;
> -		}
> +		munlock_vmas(tmp, end);

No need for 'tmp' here.
	
>  	}
>  
>  	/*
> 



* Re: [RFC v7 PATCH 4/4] mm: unmap special vmas with regular do_munmap()
  2018-08-09 23:36 ` [RFC v7 PATCH 4/4] mm: unmap special vmas with regular do_munmap() Yang Shi
  2018-08-10  9:51   ` Vlastimil Babka
@ 2018-08-10 10:46   ` Vlastimil Babka
  2018-08-10 17:00     ` Yang Shi
  1 sibling, 1 reply; 15+ messages in thread
From: Vlastimil Babka @ 2018-08-10 10:46 UTC (permalink / raw)
  To: Yang Shi, mhocko, willy, ldufour, kirill, akpm, peterz, mingo,
	acme, alexander.shishkin, jolsa, namhyung
  Cc: linux-mm, linux-kernel

On 08/10/2018 01:36 AM, Yang Shi wrote:
> Unmapping vmas, which have VM_HUGETLB | VM_PFNMAP flag set or
> have uprobes set, need get done with write mmap_sem held since
> they may update vm_flags.
> 
> So, it might be not safe enough to deal with these kind of special
> mappings with read mmap_sem. Deal with such mappings with regular
> do_munmap() call.
> 
> Michal suggested to make this as a separate patch for safer and more
> bisectable sake.
> 
> Cc: Michal Hocko <mhocko@kernel.org>
> Signed-off-by: Yang Shi <yang.shi@linux.alibaba.com>
> ---
>  mm/mmap.c | 24 ++++++++++++++++++++++++
>  1 file changed, 24 insertions(+)
> 
> diff --git a/mm/mmap.c b/mm/mmap.c
> index 2234d5a..06cb83c 100644
> --- a/mm/mmap.c
> +++ b/mm/mmap.c
> @@ -2766,6 +2766,16 @@ static inline void munlock_vmas(struct vm_area_struct *vma,
>  	}
>  }
>  
> +static inline bool can_zap_with_rlock(struct vm_area_struct *vma)
> +{
> +	if ((vma->vm_file &&
> +	     vma_has_uprobes(vma, vma->vm_start, vma->vm_end)) |

vma_has_uprobes() seems to be a rather expensive check, with e.g. an
unconditional spinlock. uprobe_munmap() seems to have some cheaper
precondition checks, e.g. for the case when there are no uprobes in the
system (should be common?).

BTW, uprobe_munmap() touches mm->flags, not vma->flags, so it should be
evaluated more carefully for being called under mmap sem for reading, as
having vmas already detached is no guarantee.

> +	     (vma->vm_flags | (VM_HUGETLB | VM_PFNMAP)))

			    ^ I think replace '|' with '&' here?

> +		return false;
> +
> +	return true;
> +}
> +
>  /*
>   * Zap pages with read mmap_sem held
>   *
> @@ -2808,6 +2818,17 @@ static int do_munmap_zap_rlock(struct mm_struct *mm, unsigned long start,
>  			goto out;
>  	}
>  
> +	/*
> +	 * Unmapping vmas, which have VM_HUGETLB | VM_PFNMAP flag set or
> +	 * have uprobes set, need get done with write mmap_sem held since
> +	 * they may update vm_flags. Deal with such mappings with regular
> +	 * do_munmap() call.
> +	 */
> +	for (vma = start_vma; vma && vma->vm_start < end; vma = vma->vm_next) {
> +		if (!can_zap_with_rlock(vma))
> +			goto regular_path;
> +	}
> +
>  	/* Handle mlocked vmas */
>  	if (mm->locked_vm) {
>  		vma = start_vma;
> @@ -2828,6 +2849,9 @@ static int do_munmap_zap_rlock(struct mm_struct *mm, unsigned long start,
>  
>  	return 0;
>  
> +regular_path:

I think it's missing a down_write_* here.

> +	ret = do_munmap(mm, start, len, uf);
> +
>  out:
>  	up_write(&mm->mmap_sem);
>  	return ret;
> 



* Re: [RFC v7 PATCH 4/4] mm: unmap special vmas with regular do_munmap()
  2018-08-10  9:51   ` Vlastimil Babka
@ 2018-08-10 11:59     ` Michal Hocko
  2018-08-10 16:51     ` Yang Shi
  1 sibling, 0 replies; 15+ messages in thread
From: Michal Hocko @ 2018-08-10 11:59 UTC (permalink / raw)
  To: Vlastimil Babka
  Cc: Yang Shi, willy, ldufour, kirill, akpm, peterz, mingo, acme,
	alexander.shishkin, jolsa, namhyung, linux-mm, linux-kernel

On Fri 10-08-18 11:51:54, Vlastimil Babka wrote:
> On 08/10/2018 01:36 AM, Yang Shi wrote:
> > Unmapping vmas, which have VM_HUGETLB | VM_PFNMAP flag set or
> > have uprobes set, need get done with write mmap_sem held since
> > they may update vm_flags.
> > 
> > So, it might be not safe enough to deal with these kind of special
> > mappings with read mmap_sem. Deal with such mappings with regular
> > do_munmap() call.
> > 
> > Michal suggested to make this as a separate patch for safer and more
> > bisectable sake.
> 
> Hm I believe Michal meant the opposite "evolution" though. Patch 2/4
> should be done in a way that special mappings keep using the regular
> path, and this patch would convert them to the new path. Possibly even
> each special case separately.

yes, that is what I meant. Each of the special case should have its own
patch and changelog explaining why it is safe.
-- 
Michal Hocko
SUSE Labs


* Re: [RFC v7 PATCH 4/4] mm: unmap special vmas with regular do_munmap()
  2018-08-10  9:51   ` Vlastimil Babka
  2018-08-10 11:59     ` Michal Hocko
@ 2018-08-10 16:51     ` Yang Shi
  1 sibling, 0 replies; 15+ messages in thread
From: Yang Shi @ 2018-08-10 16:51 UTC (permalink / raw)
  To: Vlastimil Babka, mhocko, willy, ldufour, kirill, akpm, peterz,
	mingo, acme, alexander.shishkin, jolsa, namhyung
  Cc: linux-mm, linux-kernel



On 8/10/18 2:51 AM, Vlastimil Babka wrote:
> On 08/10/2018 01:36 AM, Yang Shi wrote:
>> Unmapping vmas, which have VM_HUGETLB | VM_PFNMAP flag set or
>> have uprobes set, need get done with write mmap_sem held since
>> they may update vm_flags.
>>
>> So, it might be not safe enough to deal with these kind of special
>> mappings with read mmap_sem. Deal with such mappings with regular
>> do_munmap() call.
>>
>> Michal suggested to make this as a separate patch for safer and more
>> bisectable sake.
> Hm I believe Michal meant the opposite "evolution" though. Patch 2/4
> should be done in a way that special mappings keep using the regular
> path, and this patch would convert them to the new path. Possibly even
> each special case separately.

Aha, thanks. I understood it the opposite way.

Yang

>
>> Cc: Michal Hocko <mhocko@kernel.org>
>> Signed-off-by: Yang Shi <yang.shi@linux.alibaba.com>
>> ---
>>   mm/mmap.c | 24 ++++++++++++++++++++++++
>>   1 file changed, 24 insertions(+)
>>
>> diff --git a/mm/mmap.c b/mm/mmap.c
>> index 2234d5a..06cb83c 100644
>> --- a/mm/mmap.c
>> +++ b/mm/mmap.c
>> @@ -2766,6 +2766,16 @@ static inline void munlock_vmas(struct vm_area_struct *vma,
>>   	}
>>   }
>>   
>> +static inline bool can_zap_with_rlock(struct vm_area_struct *vma)
>> +{
>> +	if ((vma->vm_file &&
>> +	     vma_has_uprobes(vma, vma->vm_start, vma->vm_end)) ||
>> +	     (vma->vm_flags | (VM_HUGETLB | VM_PFNMAP)))
>> +		return false;
>> +
>> +	return true;
>> +}
>> +
>>   /*
>>    * Zap pages with read mmap_sem held
>>    *
>> @@ -2808,6 +2818,17 @@ static int do_munmap_zap_rlock(struct mm_struct *mm, unsigned long start,
>>   			goto out;
>>   	}
>>   
>> +	/*
>> +	 * Unmapping vmas, which have VM_HUGETLB | VM_PFNMAP flag set or
>> +	 * have uprobes set, need get done with write mmap_sem held since
>> +	 * they may update vm_flags. Deal with such mappings with regular
>> +	 * do_munmap() call.
>> +	 */
>> +	for (vma = start_vma; vma && vma->vm_start < end; vma = vma->vm_next) {
>> +		if (!can_zap_with_rlock(vma))
>> +			goto regular_path;
>> +	}
>> +
>>   	/* Handle mlocked vmas */
>>   	if (mm->locked_vm) {
>>   		vma = start_vma;
>> @@ -2828,6 +2849,9 @@ static int do_munmap_zap_rlock(struct mm_struct *mm, unsigned long start,
>>   
>>   	return 0;
>>   
>> +regular_path:
>> +	ret = do_munmap(mm, start, len, uf);
>> +
>>   out:
>>   	up_write(&mm->mmap_sem);
>>   	return ret;
>>



* Re: [RFC v7 PATCH 4/4] mm: unmap special vmas with regular do_munmap()
  2018-08-10 10:46   ` Vlastimil Babka
@ 2018-08-10 17:00     ` Yang Shi
  0 siblings, 0 replies; 15+ messages in thread
From: Yang Shi @ 2018-08-10 17:00 UTC (permalink / raw)
  To: Vlastimil Babka, mhocko, willy, ldufour, kirill, akpm, peterz,
	mingo, acme, alexander.shishkin, jolsa, namhyung
  Cc: linux-mm, linux-kernel



On 8/10/18 3:46 AM, Vlastimil Babka wrote:
> On 08/10/2018 01:36 AM, Yang Shi wrote:
>> Unmapping vmas, which have VM_HUGETLB | VM_PFNMAP flag set or
>> have uprobes set, need get done with write mmap_sem held since
>> they may update vm_flags.
>>
>> So, it might be not safe enough to deal with these kind of special
>> mappings with read mmap_sem. Deal with such mappings with regular
>> do_munmap() call.
>>
>> Michal suggested to make this as a separate patch for safer and more
>> bisectable sake.
>>
>> Cc: Michal Hocko <mhocko@kernel.org>
>> Signed-off-by: Yang Shi <yang.shi@linux.alibaba.com>
>> ---
>>   mm/mmap.c | 24 ++++++++++++++++++++++++
>>   1 file changed, 24 insertions(+)
>>
>> diff --git a/mm/mmap.c b/mm/mmap.c
>> index 2234d5a..06cb83c 100644
>> --- a/mm/mmap.c
>> +++ b/mm/mmap.c
>> @@ -2766,6 +2766,16 @@ static inline void munlock_vmas(struct vm_area_struct *vma,
>>   	}
>>   }
>>   
>> +static inline bool can_zap_with_rlock(struct vm_area_struct *vma)
>> +{
>> +	if ((vma->vm_file &&
>> +	     vma_has_uprobes(vma, vma->vm_start, vma->vm_end)) |
> vma_has_uprobes() seems to be rather expensive check with e.g.
> unconditional spinlock. uprobe_munmap() seems to have some precondition
> cheaper checks for e.g. cases when there's no uprobes in the system
> (should be common?).

I think they are common, i.e. checking vm prot since uprobes are 
typically installed for VM_EXEC vmas. We could use those checks to save 
some cycles.

>
> BTW, uprobe_munmap() touches mm->flags, not vma->flags, so it should be
> evaluated more carefully for being called under mmap sem for reading, as
> having vmas already detached is no guarantee.

We might just leave uprobe vmas to the regular do_munmap? I suppose they
should not be very common. And uprobes can only be installed for VM_EXEC
vmas; although there may be large text segments, VM_EXEC vmas are typically
unmapped when the process exits, so the latency might be fine.

>
>> +	     (vma->vm_flags | (VM_HUGETLB | VM_PFNMAP)))
> 			    ^ I think replace '|' with '&' here?

Yes, thanks for catching this.

>
>> +		return false;
>> +
>> +	return true;
>> +}
>> +
>>   /*
>>    * Zap pages with read mmap_sem held
>>    *
>> @@ -2808,6 +2818,17 @@ static int do_munmap_zap_rlock(struct mm_struct *mm, unsigned long start,
>>   			goto out;
>>   	}
>>   
>> +	/*
>> +	 * Unmapping vmas, which have VM_HUGETLB | VM_PFNMAP flag set or
>> +	 * have uprobes set, need get done with write mmap_sem held since
>> +	 * they may update vm_flags. Deal with such mappings with regular
>> +	 * do_munmap() call.
>> +	 */
>> +	for (vma = start_vma; vma && vma->vm_start < end; vma = vma->vm_next) {
>> +		if (!can_zap_with_rlock(vma))
>> +			goto regular_path;
>> +	}
>> +
>>   	/* Handle mlocked vmas */
>>   	if (mm->locked_vm) {
>>   		vma = start_vma;
>> @@ -2828,6 +2849,9 @@ static int do_munmap_zap_rlock(struct mm_struct *mm, unsigned long start,
>>   
>>   	return 0;
>>   
>> +regular_path:
> I think it's missing a down_write_* here.

No, the jump is taken before downgrade_write, so mmap_sem is still held for
write there.

Thanks,
Yang

>
>> +	ret = do_munmap(mm, start, len, uf);
>> +
>>   out:
>>   	up_write(&mm->mmap_sem);
>>   	return ret;
>>



* Re: [RFC v7 PATCH 1/4] mm: refactor do_munmap() to extract the common part
  2018-08-09 23:36 ` [RFC v7 PATCH 1/4] mm: refactor do_munmap() to extract the common part Yang Shi
  2018-08-10 10:20   ` Vlastimil Babka
@ 2018-08-10 17:41   ` Matthew Wilcox
  2018-08-10 18:23     ` Yang Shi
  1 sibling, 1 reply; 15+ messages in thread
From: Matthew Wilcox @ 2018-08-10 17:41 UTC (permalink / raw)
  To: Yang Shi
  Cc: mhocko, ldufour, kirill, vbabka, akpm, peterz, mingo, acme,
	alexander.shishkin, jolsa, namhyung, linux-mm, linux-kernel

On Fri, Aug 10, 2018 at 07:36:00AM +0800, Yang Shi wrote:
> +static inline bool addr_ok(unsigned long start, size_t len)

Maybe munmap_range_ok()?  Otherwise some of the conditions here don't make
sense for such a generic sounding function.

>  {
> -	unsigned long end;
> -	struct vm_area_struct *vma, *prev, *last;
> -
>  	if ((offset_in_page(start)) || start > TASK_SIZE || len > TASK_SIZE-start)
> -		return -EINVAL;
> +		return false;
>  
> -	len = PAGE_ALIGN(len);
> -	if (len == 0)
> -		return -EINVAL;
> +	if (PAGE_ALIGN(len) == 0)
> +		return false;
> +
> +	return true;
> +}
> +
> +/*
> + * munmap_lookup_vma: find the first overlap vma and split overlap vmas.
> + * @mm: mm_struct
> + * @start: start address
> + * @end: end address
> + *
> + * returns the pointer to vma, NULL or err ptr when spilt_vma returns error.

kernel-doc prefers:

 * Return: %NULL if no VMA overlaps this range.  An ERR_PTR if an
 * overlapping VMA could not be split.  Otherwise a pointer to the first
 * VMA which overlaps the range.

> + */
> +static struct vm_area_struct *munmap_lookup_vma(struct mm_struct *mm,
> +			unsigned long start, unsigned long end)
> +{
> +	struct vm_area_struct *vma, *prev, *last;
>  
>  	/* Find the first overlapping VMA */
>  	vma = find_vma(mm, start);
>  	if (!vma)
> -		return 0;
> -	prev = vma->vm_prev;
> -	/* we have  start < vma->vm_end  */
> +		return NULL;
>  
> +	/* we have  start < vma->vm_end  */

Can you remove the duplicate spaces here?



* Re: [RFC v7 PATCH 2/4] mm: mmap: zap pages with read mmap_sem in munmap
  2018-08-09 23:36 ` [RFC v7 PATCH 2/4] mm: mmap: zap pages with read mmap_sem in munmap Yang Shi
@ 2018-08-10 17:57   ` Matthew Wilcox
  2018-08-10 18:26     ` Yang Shi
  0 siblings, 1 reply; 15+ messages in thread
From: Matthew Wilcox @ 2018-08-10 17:57 UTC (permalink / raw)
  To: Yang Shi
  Cc: mhocko, ldufour, kirill, vbabka, akpm, peterz, mingo, acme,
	alexander.shishkin, jolsa, namhyung, linux-mm, linux-kernel

On Fri, Aug 10, 2018 at 07:36:01AM +0800, Yang Shi wrote:
> +/*
> + * Zap pages with read mmap_sem held
> + *
> + * uf is the list for userfaultfd
> + */
> +static int do_munmap_zap_rlock(struct mm_struct *mm, unsigned long start,
> +			       size_t len, struct list_head *uf)

I don't like the name here.  We aren't zapping rlocks, we're zapping
pages.  Not sure what to call it though ...



* Re: [RFC v7 PATCH 1/4] mm: refactor do_munmap() to extract the common part
  2018-08-10 17:41   ` Matthew Wilcox
@ 2018-08-10 18:23     ` Yang Shi
  0 siblings, 0 replies; 15+ messages in thread
From: Yang Shi @ 2018-08-10 18:23 UTC (permalink / raw)
  To: Matthew Wilcox
  Cc: mhocko, ldufour, kirill, vbabka, akpm, peterz, mingo, acme,
	alexander.shishkin, jolsa, namhyung, linux-mm, linux-kernel



On 8/10/18 10:41 AM, Matthew Wilcox wrote:
> On Fri, Aug 10, 2018 at 07:36:00AM +0800, Yang Shi wrote:
>> +static inline bool addr_ok(unsigned long start, size_t len)
> Maybe munmap_range_ok()?  Otherwise some of the conditions here don't make
> sense for such a generic sounding function.

I don't know. I think the argument was about whether the munmap_ prefix should
be used.

>
>>   {
>> -	unsigned long end;
>> -	struct vm_area_struct *vma, *prev, *last;
>> -
>>   	if ((offset_in_page(start)) || start > TASK_SIZE || len > TASK_SIZE-start)
>> -		return -EINVAL;
>> +		return false;
>>   
>> -	len = PAGE_ALIGN(len);
>> -	if (len == 0)
>> -		return -EINVAL;
>> +	if (PAGE_ALIGN(len) == 0)
>> +		return false;
>> +
>> +	return true;
>> +}
>> +
>> +/*
>> + * munmap_lookup_vma: find the first overlap vma and split overlap vmas.
>> + * @mm: mm_struct
>> + * @start: start address
>> + * @end: end address
>> + *
>> + * returns the pointer to vma, NULL or err ptr when spilt_vma returns error.
> kernel-doc prefers:
>
>   * Return: %NULL if no VMA overlaps this range.  An ERR_PTR if an
>   * overlapping VMA could not be split.  Otherwise a pointer to the first
>   * VMA which overlaps the range.

Ok, will fix it.

>
>> + */
>> +static struct vm_area_struct *munmap_lookup_vma(struct mm_struct *mm,
>> +			unsigned long start, unsigned long end)
>> +{
>> +	struct vm_area_struct *vma, *prev, *last;
>>   
>>   	/* Find the first overlapping VMA */
>>   	vma = find_vma(mm, start);
>>   	if (!vma)
>> -		return 0;
>> -	prev = vma->vm_prev;
>> -	/* we have  start < vma->vm_end  */
>> +		return NULL;
>>   
>> +	/* we have  start < vma->vm_end  */
> Can you remove the duplicate spaces here?

Sure

Thanks,
Yang




* Re: [RFC v7 PATCH 2/4] mm: mmap: zap pages with read mmap_sem in munmap
  2018-08-10 17:57   ` Matthew Wilcox
@ 2018-08-10 18:26     ` Yang Shi
  0 siblings, 0 replies; 15+ messages in thread
From: Yang Shi @ 2018-08-10 18:26 UTC (permalink / raw)
  To: Matthew Wilcox
  Cc: mhocko, ldufour, kirill, vbabka, akpm, peterz, mingo, acme,
	alexander.shishkin, jolsa, namhyung, linux-mm, linux-kernel



On 8/10/18 10:57 AM, Matthew Wilcox wrote:
> On Fri, Aug 10, 2018 at 07:36:01AM +0800, Yang Shi wrote:
>> +/*
>> + * Zap pages with read mmap_sem held
>> + *
>> + * uf is the list for userfaultfd
>> + */
>> +static int do_munmap_zap_rlock(struct mm_struct *mm, unsigned long start,
>> +			       size_t len, struct list_head *uf)
> I don't like the name here.  We aren't zapping rlocks, we're zapping
> pages.  Not sure what to call it though ...

It may look ambiguous; it means "zap with rlock", but I don't think
anyone would expect that we are zapping locks.

Thanks,
Yang



