* [v2 PATCH 1/2 -mm] mm: mremap: dwongrade mmap_sem to read when shrinking
@ 2018-09-26 18:10 Yang Shi
  2018-09-26 18:10 ` [v2 PATCH 2/2 -mm] mm: brk: " Yang Shi
                   ` (2 more replies)
  0 siblings, 3 replies; 10+ messages in thread
From: Yang Shi @ 2018-09-26 18:10 UTC (permalink / raw)
  To: mhocko, kirill, willy, ldufour, vbabka, akpm
  Cc: yang.shi, linux-mm, linux-kernel

Besides munmap, mremap might be used to shrink a memory mapping too.
So, it may hold the write mmap_sem for a long time when shrinking a
large mapping, as commit ("mm: mmap: zap pages with read mmap_sem in
munmap") describes.

mremap() will not manipulate vmas anymore after the __do_munmap() call
in the mapping-shrink use case, so it is safe to downgrade to read
mmap_sem.

So, the same optimization, which downgrades mmap_sem to read for
zapping pages, is also feasible and reasonable for this case.

The period of holding the exclusive mmap_sem for shrinking a large
mapping would be reduced significantly with this optimization.

MREMAP_FIXED and MREMAP_MAYMOVE are more complicated cases for this
optimization since they need to manipulate vmas after do_munmap();
downgrading mmap_sem there may create a race window.

Simple mapping shrink is the low-hanging fruit, and together with
munmap it may cover most unmap cases.
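
For illustration, the core write-to-read downgrade pattern looks
roughly like the sketch below. This is not the actual kernel code
path; vma_detach_range() and zap_detached_range() are hypothetical
stand-ins for the detach and zap phases of __do_munmap():

#include <linux/mm.h>
#include <linux/rwsem.h>

/*
 * Sketch only: unlink the vmas under the exclusive lock, then
 * downgrade and do the long-running page zapping under the shared
 * lock.  Once the vmas are detached, page faults cannot repopulate
 * the range, so the read lock is sufficient for zapping.
 */
static int shrink_mapping_sketch(struct mm_struct *mm,
				 unsigned long start, size_t len)
{
	down_write(&mm->mmap_sem);

	/* Phase 1: vma manipulation requires the write mmap_sem. */
	vma_detach_range(mm, start, len);	/* hypothetical helper */

	/* Atomically convert the writer into a reader. */
	downgrade_write(&mm->mmap_sem);

	/* Phase 2: zap the page tables under the read mmap_sem. */
	zap_detached_range(mm, start, len);	/* hypothetical helper */

	up_read(&mm->mmap_sem);
	return 0;
}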

Cc: Michal Hocko <mhocko@kernel.org>
Cc: Kirill A. Shutemov <kirill@shutemov.name>
Cc: Matthew Wilcox <willy@infradead.org>
Cc: Laurent Dufour <ldufour@linux.vnet.ibm.com>
Cc: Vlastimil Babka <vbabka@suse.cz>
Cc: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Yang Shi <yang.shi@linux.alibaba.com>
---
v2: Rephrase the commit log per Michal

 include/linux/mm.h |  2 ++
 mm/mmap.c          |  4 ++--
 mm/mremap.c        | 17 +++++++++++++----
 3 files changed, 17 insertions(+), 6 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index a61ebe8..3028028 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -2286,6 +2286,8 @@ extern unsigned long do_mmap(struct file *file, unsigned long addr,
 	unsigned long len, unsigned long prot, unsigned long flags,
 	vm_flags_t vm_flags, unsigned long pgoff, unsigned long *populate,
 	struct list_head *uf);
+extern int __do_munmap(struct mm_struct *, unsigned long, size_t,
+		       struct list_head *uf, bool downgrade);
 extern int do_munmap(struct mm_struct *, unsigned long, size_t,
 		     struct list_head *uf);
 
diff --git a/mm/mmap.c b/mm/mmap.c
index 847a17d..017bcfa 100644
--- a/mm/mmap.c
+++ b/mm/mmap.c
@@ -2687,8 +2687,8 @@ int split_vma(struct mm_struct *mm, struct vm_area_struct *vma,
  * work.  This now handles partial unmappings.
  * Jeremy Fitzhardinge <jeremy@goop.org>
  */
-static int __do_munmap(struct mm_struct *mm, unsigned long start, size_t len,
-		       struct list_head *uf, bool downgrade)
+int __do_munmap(struct mm_struct *mm, unsigned long start, size_t len,
+		struct list_head *uf, bool downgrade)
 {
 	unsigned long end;
 	struct vm_area_struct *vma, *prev, *last;
diff --git a/mm/mremap.c b/mm/mremap.c
index 5c2e185..8f1ec2b 100644
--- a/mm/mremap.c
+++ b/mm/mremap.c
@@ -525,6 +525,7 @@ static int vma_expandable(struct vm_area_struct *vma, unsigned long delta)
 	unsigned long ret = -EINVAL;
 	unsigned long charged = 0;
 	bool locked = false;
+	bool downgrade = false;
 	struct vm_userfaultfd_ctx uf = NULL_VM_UFFD_CTX;
 	LIST_HEAD(uf_unmap_early);
 	LIST_HEAD(uf_unmap);
@@ -561,12 +562,17 @@ static int vma_expandable(struct vm_area_struct *vma, unsigned long delta)
 	/*
 	 * Always allow a shrinking remap: that just unmaps
 	 * the unnecessary pages..
-	 * do_munmap does all the needed commit accounting
+	 * __do_munmap does all the needed commit accounting, and
+	 * downgrades mmap_sem to read.
 	 */
 	if (old_len >= new_len) {
-		ret = do_munmap(mm, addr+new_len, old_len - new_len, &uf_unmap);
-		if (ret && old_len != new_len)
+		ret = __do_munmap(mm, addr+new_len, old_len - new_len,
+				  &uf_unmap, true);
+		if (ret < 0 && old_len != new_len)
 			goto out;
+		/* Returning 1 indicates mmap_sem is downgraded to read. */
+		else if (ret == 1)
+			downgrade = true;
 		ret = addr;
 		goto out;
 	}
@@ -631,7 +637,10 @@ static int vma_expandable(struct vm_area_struct *vma, unsigned long delta)
 		vm_unacct_memory(charged);
 		locked = 0;
 	}
-	up_write(&current->mm->mmap_sem);
+	if (downgrade)
+		up_read(&current->mm->mmap_sem);
+	else
+		up_write(&current->mm->mmap_sem);
 	if (locked && new_len > old_len)
 		mm_populate(new_addr + old_len, new_len - old_len);
 	userfaultfd_unmap_complete(mm, &uf_unmap_early);
-- 
1.8.3.1
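
From userspace, the case this patch optimizes can be exercised by
shrinking a large mapping in place. A minimal test program might look
like this (illustrative only):

#define _GNU_SOURCE
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>

int main(void)
{
	size_t old_len = 1UL << 30;	/* 1 GB */
	size_t new_len = 1UL << 20;	/* 1 MB */
	void *p;

	p = mmap(NULL, old_len, PROT_READ | PROT_WRITE,
		 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	if (p == MAP_FAILED) {
		perror("mmap");
		return 1;
	}

	/* Populate the mapping so there are pages to zap. */
	memset(p, 1, old_len);

	/*
	 * Shrinking remap: the vmas are adjusted under the write
	 * mmap_sem, then (with this patch) the pages are zapped under
	 * the read mmap_sem.
	 */
	if (mremap(p, old_len, new_len, 0) == MAP_FAILED) {
		perror("mremap");
		return 1;
	}

	munmap(p, new_len);
	return 0;
}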



* [v2 PATCH 2/2 -mm] mm: brk: dwongrade mmap_sem to read when shrinking
  2018-09-26 18:10 [v2 PATCH 1/2 -mm] mm: mremap: dwongrade mmap_sem to read when shrinking Yang Shi
@ 2018-09-26 18:10 ` Yang Shi
  2018-09-27 12:14   ` Vlastimil Babka
  2018-09-27 12:50   ` Kirill A. Shutemov
  2018-09-27 11:50 ` [v2 PATCH 1/2 -mm] mm: mremap: " Vlastimil Babka
  2018-09-27 12:46 ` Kirill A. Shutemov
  2 siblings, 2 replies; 10+ messages in thread
From: Yang Shi @ 2018-09-26 18:10 UTC (permalink / raw)
  To: mhocko, kirill, willy, ldufour, vbabka, akpm
  Cc: yang.shi, linux-mm, linux-kernel

brk might be used to shinrk memory mapping too other than munmap().
So, it may hold the write mmap_sem for a long time when shrinking a
large mapping, as commit ("mm: mmap: zap pages with read mmap_sem in
munmap") describes.

brk() will not manipulate vmas anymore after the __do_munmap() call in
the mapping-shrink use case. But, it may set mm->brk after
__do_munmap(), which requires holding the write mmap_sem.

However, a simple trick can work around this: set mm->brk before
__do_munmap(), then restore the original value if __do_munmap() fails.
With this trick, it is safe to downgrade to read mmap_sem.

So, the same optimization, which downgrades mmap_sem to read for
zapping pages, is also feasible and reasonable for this case.

The period of holding the exclusive mmap_sem for shrinking a large
mapping would be reduced significantly with this optimization.
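
Condensed, the trick is the following ordering (a sketch of the flow
in the patch below; "downgraded" mirrors the patch's "downgrade" flag):

	origbrk = mm->brk;	/* saved while holding write mmap_sem */
	mm->brk = brk;		/* updated before a possible downgrade */
	retval = __do_munmap(mm, newbrk, oldbrk - newbrk, &uf, true);
	if (retval < 0)
		mm->brk = origbrk;	/* munmap failed: roll back */
	else if (retval == 1)
		downgraded = true;	/* mmap_sem now held for read */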

Cc: Michal Hocko <mhocko@kernel.org>
Cc: Kirill A. Shutemov <kirill@shutemov.name>
Cc: Matthew Wilcox <willy@infradead.org>
Cc: Laurent Dufour <ldufour@linux.vnet.ibm.com>
Cc: Vlastimil Babka <vbabka@suse.cz>
Cc: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Yang Shi <yang.shi@linux.alibaba.com>
---
v2: Rephrase the commit per Michal

 mm/mmap.c | 40 ++++++++++++++++++++++++++++++----------
 1 file changed, 30 insertions(+), 10 deletions(-)

diff --git a/mm/mmap.c b/mm/mmap.c
index 017bcfa..0d2fae1 100644
--- a/mm/mmap.c
+++ b/mm/mmap.c
@@ -193,9 +193,11 @@ static int do_brk_flags(unsigned long addr, unsigned long request, unsigned long
 	unsigned long retval;
 	unsigned long newbrk, oldbrk;
 	struct mm_struct *mm = current->mm;
+	unsigned long origbrk = mm->brk;
 	struct vm_area_struct *next;
 	unsigned long min_brk;
 	bool populate;
+	bool downgrade = false;
 	LIST_HEAD(uf);
 
 	if (down_write_killable(&mm->mmap_sem))
@@ -229,14 +231,29 @@ static int do_brk_flags(unsigned long addr, unsigned long request, unsigned long
 
 	newbrk = PAGE_ALIGN(brk);
 	oldbrk = PAGE_ALIGN(mm->brk);
-	if (oldbrk == newbrk)
-		goto set_brk;
+	if (oldbrk == newbrk) {
+		mm->brk = brk;
+		goto success;
+	}
 
-	/* Always allow shrinking brk. */
+	/*
+	 * Always allow shrinking brk.
+	 * __do_munmap() may downgrade mmap_sem to read.
+	 */
 	if (brk <= mm->brk) {
-		if (!do_munmap(mm, newbrk, oldbrk-newbrk, &uf))
-			goto set_brk;
-		goto out;
+		/*
+		 * mm->brk needs to be protected by write mmap_sem, so update
+		 * it before downgrading mmap_sem.
+		 * When __do_munmap() fails, it will be restored from origbrk.
+		 */
+		mm->brk = brk;
+		retval = __do_munmap(mm, newbrk, oldbrk-newbrk, &uf, true);
+		if (retval < 0) {
+			mm->brk = origbrk;
+			goto out;
+		} else if (retval == 1)
+			downgrade = true;
+		goto success;
 	}
 
 	/* Check against existing mmap mappings. */
@@ -247,18 +264,21 @@ static int do_brk_flags(unsigned long addr, unsigned long request, unsigned long
 	/* Ok, looks good - let it rip. */
 	if (do_brk_flags(oldbrk, newbrk-oldbrk, 0, &uf) < 0)
 		goto out;
-
-set_brk:
 	mm->brk = brk;
+
+success:
 	populate = newbrk > oldbrk && (mm->def_flags & VM_LOCKED) != 0;
-	up_write(&mm->mmap_sem);
+	if (downgrade)
+		up_read(&mm->mmap_sem);
+	else
+		up_write(&mm->mmap_sem);
 	userfaultfd_unmap_complete(mm, &uf);
 	if (populate)
 		mm_populate(oldbrk, newbrk - oldbrk);
 	return brk;
 
 out:
-	retval = mm->brk;
+	retval = origbrk;
 	up_write(&mm->mmap_sem);
 	return retval;
 }
-- 
1.8.3.1
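
The shrinking-brk path can likewise be exercised from userspace with a
negative sbrk() increment (illustrative only; real programs rarely
drive brk directly since malloc typically owns it):

#define _DEFAULT_SOURCE
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
	intptr_t grow = 512L << 20;	/* 512 MB */
	void *base;

	base = sbrk(grow);	/* extend the heap: the brk() grow path */
	if (base == (void *)-1) {
		perror("sbrk grow");
		return 1;
	}

	/* Populate the new heap pages so the shrink has pages to zap. */
	memset(base, 1, grow);

	if (sbrk(-grow) == (void *)-1) {	/* shrink: the path optimized here */
		perror("sbrk shrink");
		return 1;
	}
	return 0;
}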



* Re: [v2 PATCH 1/2 -mm] mm: mremap: dwongrade mmap_sem to read when shrinking
  2018-09-26 18:10 [v2 PATCH 1/2 -mm] mm: mremap: dwongrade mmap_sem to read when shrinking Yang Shi
  2018-09-26 18:10 ` [v2 PATCH 2/2 -mm] mm: brk: " Yang Shi
@ 2018-09-27 11:50 ` Vlastimil Babka
  2018-09-27 16:04   ` Yang Shi
  2018-09-27 12:46 ` Kirill A. Shutemov
  2 siblings, 1 reply; 10+ messages in thread
From: Vlastimil Babka @ 2018-09-27 11:50 UTC (permalink / raw)
  To: Yang Shi, mhocko, kirill, willy, ldufour, akpm; +Cc: linux-mm, linux-kernel

On 9/26/18 8:10 PM, Yang Shi wrote:
> Subject: [v2 PATCH 1/2 -mm] mm: mremap: dwongrade mmap_sem to read when shrinking

"downgrade" in the subject

> Besides munmap, mremap might be used to shrink a memory mapping too.
> So, it may hold the write mmap_sem for a long time when shrinking a
> large mapping, as commit ("mm: mmap: zap pages with read mmap_sem in
> munmap") describes.
> 
> mremap() will not manipulate vmas anymore after the __do_munmap() call
> in the mapping-shrink use case, so it is safe to downgrade to read
> mmap_sem.
> 
> So, the same optimization, which downgrades mmap_sem to read for
> zapping pages, is also feasible and reasonable for this case.
> 
> The period of holding the exclusive mmap_sem for shrinking a large
> mapping would be reduced significantly with this optimization.
> 
> MREMAP_FIXED and MREMAP_MAYMOVE are more complicated cases for this
> optimization since they need to manipulate vmas after do_munmap();
> downgrading mmap_sem there may create a race window.
> 
> Simple mapping shrink is the low-hanging fruit, and together with
> munmap it may cover most unmap cases.
> 
> Cc: Michal Hocko <mhocko@kernel.org>
> Cc: Kirill A. Shutemov <kirill@shutemov.name>
> Cc: Matthew Wilcox <willy@infradead.org>
> Cc: Laurent Dufour <ldufour@linux.vnet.ibm.com>
> Cc: Vlastimil Babka <vbabka@suse.cz>
> Cc: Andrew Morton <akpm@linux-foundation.org>
> Signed-off-by: Yang Shi <yang.shi@linux.alibaba.com>

Looks fine,

Acked-by: Vlastimil Babka <vbabka@suse.cz>

Nit:

> --- a/mm/mmap.c
> +++ b/mm/mmap.c
> @@ -2687,8 +2687,8 @@ int split_vma(struct mm_struct *mm, struct vm_area_struct *vma,
>   * work.  This now handles partial unmappings.
>   * Jeremy Fitzhardinge <jeremy@goop.org>
>   */
> -static int __do_munmap(struct mm_struct *mm, unsigned long start, size_t len,
> -		       struct list_head *uf, bool downgrade)
> +int __do_munmap(struct mm_struct *mm, unsigned long start, size_t len,
> +		struct list_head *uf, bool downgrade)
>  {
>  	unsigned long end;
>  	struct vm_area_struct *vma, *prev, *last;
> diff --git a/mm/mremap.c b/mm/mremap.c
> index 5c2e185..8f1ec2b 100644
> --- a/mm/mremap.c
> +++ b/mm/mremap.c
> @@ -525,6 +525,7 @@ static int vma_expandable(struct vm_area_struct *vma, unsigned long delta)
>  	unsigned long ret = -EINVAL;
>  	unsigned long charged = 0;
>  	bool locked = false;
> +	bool downgrade = false;

Maybe "downgraded" is more accurate here, or even "downgraded_mmap_sem".



* Re: [v2 PATCH 2/2 -mm] mm: brk: dwongrade mmap_sem to read when shrinking
  2018-09-26 18:10 ` [v2 PATCH 2/2 -mm] mm: brk: " Yang Shi
@ 2018-09-27 12:14   ` Vlastimil Babka
  2018-09-27 16:05     ` Yang Shi
  2018-09-27 12:50   ` Kirill A. Shutemov
  1 sibling, 1 reply; 10+ messages in thread
From: Vlastimil Babka @ 2018-09-27 12:14 UTC (permalink / raw)
  To: Yang Shi, mhocko, kirill, willy, ldufour, akpm; +Cc: linux-mm, linux-kernel

On 9/26/18 8:10 PM, Yang Shi wrote:

Again, "downgrade" in the subject

> brk might be used to shinrk memory mapping too other than munmap().

                       ^ shrink

> So, it may hold the write mmap_sem for a long time when shrinking a
> large mapping, as commit ("mm: mmap: zap pages with read mmap_sem in
> munmap") describes.
> 
> brk() will not manipulate vmas anymore after the __do_munmap() call in
> the mapping-shrink use case. But, it may set mm->brk after
> __do_munmap(), which requires holding the write mmap_sem.
> 
> However, a simple trick can work around this: set mm->brk before
> __do_munmap(), then restore the original value if __do_munmap() fails.
> With this trick, it is safe to downgrade to read mmap_sem.
> 
> So, the same optimization, which downgrades mmap_sem to read for
> zapping pages, is also feasible and reasonable for this case.
> 
> The period of holding the exclusive mmap_sem for shrinking a large
> mapping would be reduced significantly with this optimization.
> 
> Cc: Michal Hocko <mhocko@kernel.org>
> Cc: Kirill A. Shutemov <kirill@shutemov.name>
> Cc: Matthew Wilcox <willy@infradead.org>
> Cc: Laurent Dufour <ldufour@linux.vnet.ibm.com>
> Cc: Vlastimil Babka <vbabka@suse.cz>
> Cc: Andrew Morton <akpm@linux-foundation.org>
> Signed-off-by: Yang Shi <yang.shi@linux.alibaba.com>

Acked-by: Vlastimil Babka <vbabka@suse.cz>

Same nit for the "bool downgrade" name as for patch 1/2.

Thanks,
Vlastimil


* Re: [v2 PATCH 1/2 -mm] mm: mremap: dwongrade mmap_sem to read when shrinking
  2018-09-26 18:10 [v2 PATCH 1/2 -mm] mm: mremap: dwongrade mmap_sem to read when shrinking Yang Shi
  2018-09-26 18:10 ` [v2 PATCH 2/2 -mm] mm: brk: " Yang Shi
  2018-09-27 11:50 ` [v2 PATCH 1/2 -mm] mm: mremap: " Vlastimil Babka
@ 2018-09-27 12:46 ` Kirill A. Shutemov
  2 siblings, 0 replies; 10+ messages in thread
From: Kirill A. Shutemov @ 2018-09-27 12:46 UTC (permalink / raw)
  To: Yang Shi; +Cc: mhocko, willy, ldufour, vbabka, akpm, linux-mm, linux-kernel

On Thu, Sep 27, 2018 at 02:10:33AM +0800, Yang Shi wrote:
> Besides munmap, mremap might be used to shrink a memory mapping too.
> So, it may hold the write mmap_sem for a long time when shrinking a
> large mapping, as commit ("mm: mmap: zap pages with read mmap_sem in
> munmap") describes.
> 
> mremap() will not manipulate vmas anymore after the __do_munmap() call
> in the mapping-shrink use case, so it is safe to downgrade to read
> mmap_sem.
> 
> So, the same optimization, which downgrades mmap_sem to read for
> zapping pages, is also feasible and reasonable for this case.
> 
> The period of holding the exclusive mmap_sem for shrinking a large
> mapping would be reduced significantly with this optimization.
> 
> MREMAP_FIXED and MREMAP_MAYMOVE are more complicated cases for this
> optimization since they need to manipulate vmas after do_munmap();
> downgrading mmap_sem there may create a race window.
> 
> Simple mapping shrink is the low-hanging fruit, and together with
> munmap it may cover most unmap cases.
> 
> Cc: Michal Hocko <mhocko@kernel.org>
> Cc: Kirill A. Shutemov <kirill@shutemov.name>
> Cc: Matthew Wilcox <willy@infradead.org>
> Cc: Laurent Dufour <ldufour@linux.vnet.ibm.com>
> Cc: Vlastimil Babka <vbabka@suse.cz>
> Cc: Andrew Morton <akpm@linux-foundation.org>
> Signed-off-by: Yang Shi <yang.shi@linux.alibaba.com>
> ---
> v2: Rephrase the commit log per Michal
> 
>  include/linux/mm.h |  2 ++
>  mm/mmap.c          |  4 ++--
>  mm/mremap.c        | 17 +++++++++++++----
>  3 files changed, 17 insertions(+), 6 deletions(-)
> 
> diff --git a/include/linux/mm.h b/include/linux/mm.h
> index a61ebe8..3028028 100644
> --- a/include/linux/mm.h
> +++ b/include/linux/mm.h
> @@ -2286,6 +2286,8 @@ extern unsigned long do_mmap(struct file *file, unsigned long addr,
>  	unsigned long len, unsigned long prot, unsigned long flags,
>  	vm_flags_t vm_flags, unsigned long pgoff, unsigned long *populate,
>  	struct list_head *uf);
> +extern int __do_munmap(struct mm_struct *, unsigned long, size_t,
> +		       struct list_head *uf, bool downgrade);
>  extern int do_munmap(struct mm_struct *, unsigned long, size_t,
>  		     struct list_head *uf);
>  
> diff --git a/mm/mmap.c b/mm/mmap.c
> index 847a17d..017bcfa 100644
> --- a/mm/mmap.c
> +++ b/mm/mmap.c
> @@ -2687,8 +2687,8 @@ int split_vma(struct mm_struct *mm, struct vm_area_struct *vma,
>   * work.  This now handles partial unmappings.
>   * Jeremy Fitzhardinge <jeremy@goop.org>
>   */
> -static int __do_munmap(struct mm_struct *mm, unsigned long start, size_t len,
> -		       struct list_head *uf, bool downgrade)
> +int __do_munmap(struct mm_struct *mm, unsigned long start, size_t len,
> +		struct list_head *uf, bool downgrade)
>  {
>  	unsigned long end;
>  	struct vm_area_struct *vma, *prev, *last;
> diff --git a/mm/mremap.c b/mm/mremap.c
> index 5c2e185..8f1ec2b 100644
> --- a/mm/mremap.c
> +++ b/mm/mremap.c
> @@ -525,6 +525,7 @@ static int vma_expandable(struct vm_area_struct *vma, unsigned long delta)
>  	unsigned long ret = -EINVAL;
>  	unsigned long charged = 0;
>  	bool locked = false;
> +	bool downgrade = false;

s/downgrade/downgraded/ ?

Otherwise looks good to me:

Acked-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>

-- 
 Kirill A. Shutemov


* Re: [v2 PATCH 2/2 -mm] mm: brk: dwongrade mmap_sem to read when shrinking
  2018-09-26 18:10 ` [v2 PATCH 2/2 -mm] mm: brk: " Yang Shi
  2018-09-27 12:14   ` Vlastimil Babka
@ 2018-09-27 12:50   ` Kirill A. Shutemov
  2018-09-27 13:21     ` Vlastimil Babka
  2018-09-27 16:06     ` Yang Shi
  1 sibling, 2 replies; 10+ messages in thread
From: Kirill A. Shutemov @ 2018-09-27 12:50 UTC (permalink / raw)
  To: Yang Shi; +Cc: mhocko, willy, ldufour, vbabka, akpm, linux-mm, linux-kernel

On Thu, Sep 27, 2018 at 02:10:34AM +0800, Yang Shi wrote:
> brk might be used to shinrk memory mapping too other than munmap().

s/shinrk/shrink/

> So, it may hold the write mmap_sem for a long time when shrinking a
> large mapping, as commit ("mm: mmap: zap pages with read mmap_sem in
> munmap") describes.
> 
> brk() will not manipulate vmas anymore after the __do_munmap() call in
> the mapping-shrink use case. But, it may set mm->brk after
> __do_munmap(), which requires holding the write mmap_sem.
> 
> However, a simple trick can work around this: set mm->brk before
> __do_munmap(), then restore the original value if __do_munmap() fails.
> With this trick, it is safe to downgrade to read mmap_sem.
> 
> So, the same optimization, which downgrades mmap_sem to read for
> zapping pages, is also feasible and reasonable for this case.
> 
> The period of holding the exclusive mmap_sem for shrinking a large
> mapping would be reduced significantly with this optimization.
> 
> Cc: Michal Hocko <mhocko@kernel.org>
> Cc: Kirill A. Shutemov <kirill@shutemov.name>
> Cc: Matthew Wilcox <willy@infradead.org>
> Cc: Laurent Dufour <ldufour@linux.vnet.ibm.com>
> Cc: Vlastimil Babka <vbabka@suse.cz>
> Cc: Andrew Morton <akpm@linux-foundation.org>
> Signed-off-by: Yang Shi <yang.shi@linux.alibaba.com>
> ---
> v2: Rephrase the commit per Michal
> 
>  mm/mmap.c | 40 ++++++++++++++++++++++++++++++----------
>  1 file changed, 30 insertions(+), 10 deletions(-)
> 
> diff --git a/mm/mmap.c b/mm/mmap.c
> index 017bcfa..0d2fae1 100644
> --- a/mm/mmap.c
> +++ b/mm/mmap.c
> @@ -193,9 +193,11 @@ static int do_brk_flags(unsigned long addr, unsigned long request, unsigned long
>  	unsigned long retval;
>  	unsigned long newbrk, oldbrk;
>  	struct mm_struct *mm = current->mm;
> +	unsigned long origbrk = mm->brk;

Is it safe to read mm->brk outside the lock?

>  	struct vm_area_struct *next;
>  	unsigned long min_brk;
>  	bool populate;
> +	bool downgrade = false;

Again,

s/downgrade/downgraded/ ?

>  	LIST_HEAD(uf);
>  
>  	if (down_write_killable(&mm->mmap_sem))
> @@ -229,14 +231,29 @@ static int do_brk_flags(unsigned long addr, unsigned long request, unsigned long
>  
>  	newbrk = PAGE_ALIGN(brk);
>  	oldbrk = PAGE_ALIGN(mm->brk);
> -	if (oldbrk == newbrk)
> -		goto set_brk;
> +	if (oldbrk == newbrk) {
> +		mm->brk = brk;
> +		goto success;
> +	}
>  
> -	/* Always allow shrinking brk. */
> +	/*
> +	 * Always allow shrinking brk.
> +	 * __do_munmap() may downgrade mmap_sem to read.
> +	 */
>  	if (brk <= mm->brk) {
> -		if (!do_munmap(mm, newbrk, oldbrk-newbrk, &uf))
> -			goto set_brk;
> -		goto out;
> +		/*
> +		 * mm->brk needs to be protected by write mmap_sem, so update
> +		 * it before downgrading mmap_sem.
> +		 * When __do_munmap() fails, it will be restored from origbrk.
> +		 */
> +		mm->brk = brk;
> +		retval = __do_munmap(mm, newbrk, oldbrk-newbrk, &uf, true);
> +		if (retval < 0) {
> +			mm->brk = origbrk;
> +			goto out;
> +		} else if (retval == 1)
> +			downgrade = true;
> +		goto success;
>  	}
>  
>  	/* Check against existing mmap mappings. */
> @@ -247,18 +264,21 @@ static int do_brk_flags(unsigned long addr, unsigned long request, unsigned long
>  	/* Ok, looks good - let it rip. */
>  	if (do_brk_flags(oldbrk, newbrk-oldbrk, 0, &uf) < 0)
>  		goto out;
> -
> -set_brk:
>  	mm->brk = brk;
> +
> +success:
>  	populate = newbrk > oldbrk && (mm->def_flags & VM_LOCKED) != 0;
> -	up_write(&mm->mmap_sem);
> +	if (downgrade)
> +		up_read(&mm->mmap_sem);
> +	else
> +		up_write(&mm->mmap_sem);
>  	userfaultfd_unmap_complete(mm, &uf);
>  	if (populate)
>  		mm_populate(oldbrk, newbrk - oldbrk);
>  	return brk;
>  
>  out:
> -	retval = mm->brk;
> +	retval = origbrk;
>  	up_write(&mm->mmap_sem);
>  	return retval;
>  }
> -- 
> 1.8.3.1
> 

-- 
 Kirill A. Shutemov


* Re: [v2 PATCH 2/2 -mm] mm: brk: dwongrade mmap_sem to read when shrinking
  2018-09-27 12:50   ` Kirill A. Shutemov
@ 2018-09-27 13:21     ` Vlastimil Babka
  2018-09-27 16:06     ` Yang Shi
  1 sibling, 0 replies; 10+ messages in thread
From: Vlastimil Babka @ 2018-09-27 13:21 UTC (permalink / raw)
  To: Kirill A. Shutemov, Yang Shi
  Cc: mhocko, willy, ldufour, akpm, linux-mm, linux-kernel

On 9/27/18 2:50 PM, Kirill A. Shutemov wrote:
>> diff --git a/mm/mmap.c b/mm/mmap.c
>> index 017bcfa..0d2fae1 100644
>> --- a/mm/mmap.c
>> +++ b/mm/mmap.c
>> @@ -193,9 +193,11 @@ static int do_brk_flags(unsigned long addr, unsigned long request, unsigned long
>>  	unsigned long retval;
>>  	unsigned long newbrk, oldbrk;
>>  	struct mm_struct *mm = current->mm;
>> +	unsigned long origbrk = mm->brk;
> 
> Is it safe to read mm->brk outside the lock?

Good catch! I guess not, parallel brk()'s could then race.
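
A minimal sketch of that fix (read mm->brk only once the write lock is
held):

	if (down_write_killable(&mm->mmap_sem))
		return -EINTR;
	origbrk = mm->brk;	/* now serialized against other brk() callers */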


* Re: [v2 PATCH 1/2 -mm] mm: mremap: dwongrade mmap_sem to read when shrinking
  2018-09-27 11:50 ` [v2 PATCH 1/2 -mm] mm: mremap: " Vlastimil Babka
@ 2018-09-27 16:04   ` Yang Shi
  0 siblings, 0 replies; 10+ messages in thread
From: Yang Shi @ 2018-09-27 16:04 UTC (permalink / raw)
  To: Vlastimil Babka, mhocko, kirill, willy, ldufour, akpm
  Cc: linux-mm, linux-kernel



On 9/27/18 4:50 AM, Vlastimil Babka wrote:
> On 9/26/18 8:10 PM, Yang Shi wrote:
>> Subject: [v2 PATCH 1/2 -mm] mm: mremap: dwongrade mmap_sem to read when shrinking
>
> "downgrade" in the subject

Will fix in the next version.

Thanks,
Yang

>
>> Besides munmap, mremap might be used to shrink a memory mapping too.
>> So, it may hold the write mmap_sem for a long time when shrinking a
>> large mapping, as commit ("mm: mmap: zap pages with read mmap_sem in
>> munmap") describes.
>>
>> mremap() will not manipulate vmas anymore after the __do_munmap() call
>> in the mapping-shrink use case, so it is safe to downgrade to read
>> mmap_sem.
>>
>> So, the same optimization, which downgrades mmap_sem to read for
>> zapping pages, is also feasible and reasonable for this case.
>>
>> The period of holding the exclusive mmap_sem for shrinking a large
>> mapping would be reduced significantly with this optimization.
>>
>> MREMAP_FIXED and MREMAP_MAYMOVE are more complicated cases for this
>> optimization since they need to manipulate vmas after do_munmap();
>> downgrading mmap_sem there may create a race window.
>>
>> Simple mapping shrink is the low-hanging fruit, and together with
>> munmap it may cover most unmap cases.
>>
>> Cc: Michal Hocko <mhocko@kernel.org>
>> Cc: Kirill A. Shutemov <kirill@shutemov.name>
>> Cc: Matthew Wilcox <willy@infradead.org>
>> Cc: Laurent Dufour <ldufour@linux.vnet.ibm.com>
>> Cc: Vlastimil Babka <vbabka@suse.cz>
>> Cc: Andrew Morton <akpm@linux-foundation.org>
>> Signed-off-by: Yang Shi <yang.shi@linux.alibaba.com>
> Looks fine,
>
> Acked-by: Vlastimil Babka <vbabka@suse.cz>
>
> Nit:
>
>> --- a/mm/mmap.c
>> +++ b/mm/mmap.c
>> @@ -2687,8 +2687,8 @@ int split_vma(struct mm_struct *mm, struct vm_area_struct *vma,
>>    * work.  This now handles partial unmappings.
>>    * Jeremy Fitzhardinge <jeremy@goop.org>
>>    */
>> -static int __do_munmap(struct mm_struct *mm, unsigned long start, size_t len,
>> -		       struct list_head *uf, bool downgrade)
>> +int __do_munmap(struct mm_struct *mm, unsigned long start, size_t len,
>> +		struct list_head *uf, bool downgrade)
>>   {
>>   	unsigned long end;
>>   	struct vm_area_struct *vma, *prev, *last;
>> diff --git a/mm/mremap.c b/mm/mremap.c
>> index 5c2e185..8f1ec2b 100644
>> --- a/mm/mremap.c
>> +++ b/mm/mremap.c
>> @@ -525,6 +525,7 @@ static int vma_expandable(struct vm_area_struct *vma, unsigned long delta)
>>   	unsigned long ret = -EINVAL;
>>   	unsigned long charged = 0;
>>   	bool locked = false;
>> +	bool downgrade = false;
> Maybe "downgraded" is more accurate here, or even "downgraded_mmap_sem".



* Re: [v2 PATCH 2/2 -mm] mm: brk: dwongrade mmap_sem to read when shrinking
  2018-09-27 12:14   ` Vlastimil Babka
@ 2018-09-27 16:05     ` Yang Shi
  0 siblings, 0 replies; 10+ messages in thread
From: Yang Shi @ 2018-09-27 16:05 UTC (permalink / raw)
  To: Vlastimil Babka, mhocko, kirill, willy, ldufour, akpm
  Cc: linux-mm, linux-kernel



On 9/27/18 5:14 AM, Vlastimil Babka wrote:
> On 9/26/18 8:10 PM, Yang Shi wrote:
>
> Again, "downgrade" in the subject
>
>> brk might be used to shinrk memory mapping too other than munmap().
>                         ^ shrink
>
>> So, it may hold the write mmap_sem for a long time when shrinking a
>> large mapping, as commit ("mm: mmap: zap pages with read mmap_sem in
>> munmap") describes.
>>
>> brk() will not manipulate vmas anymore after the __do_munmap() call in
>> the mapping-shrink use case. But, it may set mm->brk after
>> __do_munmap(), which requires holding the write mmap_sem.
>>
>> However, a simple trick can work around this: set mm->brk before
>> __do_munmap(), then restore the original value if __do_munmap() fails.
>> With this trick, it is safe to downgrade to read mmap_sem.
>>
>> So, the same optimization, which downgrades mmap_sem to read for
>> zapping pages, is also feasible and reasonable for this case.
>>
>> The period of holding the exclusive mmap_sem for shrinking a large
>> mapping would be reduced significantly with this optimization.
>>
>> Cc: Michal Hocko <mhocko@kernel.org>
>> Cc: Kirill A. Shutemov <kirill@shutemov.name>
>> Cc: Matthew Wilcox <willy@infradead.org>
>> Cc: Laurent Dufour <ldufour@linux.vnet.ibm.com>
>> Cc: Vlastimil Babka <vbabka@suse.cz>
>> Cc: Andrew Morton <akpm@linux-foundation.org>
>> Signed-off-by: Yang Shi <yang.shi@linux.alibaba.com>
> Acked-by: Vlastimil Babka <vbabka@suse.cz>
>
> Same nit for the "bool downgrade" name as for patch 1/2.

Will solve in the next version.

Thanks,
Yang

>
> Thanks,
> Vlastimil



* Re: [v2 PATCH 2/2 -mm] mm: brk: dwongrade mmap_sem to read when shrinking
  2018-09-27 12:50   ` Kirill A. Shutemov
  2018-09-27 13:21     ` Vlastimil Babka
@ 2018-09-27 16:06     ` Yang Shi
  1 sibling, 0 replies; 10+ messages in thread
From: Yang Shi @ 2018-09-27 16:06 UTC (permalink / raw)
  To: Kirill A. Shutemov
  Cc: mhocko, willy, ldufour, vbabka, akpm, linux-mm, linux-kernel



On 9/27/18 5:50 AM, Kirill A. Shutemov wrote:
> On Thu, Sep 27, 2018 at 02:10:34AM +0800, Yang Shi wrote:
>> brk might be used to shinrk memory mapping too other than munmap().
> s/shinrk/shrink/
>
>> So, it may hold the write mmap_sem for a long time when shrinking a
>> large mapping, as commit ("mm: mmap: zap pages with read mmap_sem in
>> munmap") describes.
>>
>> brk() will not manipulate vmas anymore after the __do_munmap() call in
>> the mapping-shrink use case. But, it may set mm->brk after
>> __do_munmap(), which requires holding the write mmap_sem.
>>
>> However, a simple trick can work around this: set mm->brk before
>> __do_munmap(), then restore the original value if __do_munmap() fails.
>> With this trick, it is safe to downgrade to read mmap_sem.
>>
>> So, the same optimization, which downgrades mmap_sem to read for
>> zapping pages, is also feasible and reasonable for this case.
>>
>> The period of holding the exclusive mmap_sem for shrinking a large
>> mapping would be reduced significantly with this optimization.
>>
>> Cc: Michal Hocko <mhocko@kernel.org>
>> Cc: Kirill A. Shutemov <kirill@shutemov.name>
>> Cc: Matthew Wilcox <willy@infradead.org>
>> Cc: Laurent Dufour <ldufour@linux.vnet.ibm.com>
>> Cc: Vlastimil Babka <vbabka@suse.cz>
>> Cc: Andrew Morton <akpm@linux-foundation.org>
>> Signed-off-by: Yang Shi <yang.shi@linux.alibaba.com>
>> ---
>> v2: Rephrase the commit per Michal
>>
>>   mm/mmap.c | 40 ++++++++++++++++++++++++++++++----------
>>   1 file changed, 30 insertions(+), 10 deletions(-)
>>
>> diff --git a/mm/mmap.c b/mm/mmap.c
>> index 017bcfa..0d2fae1 100644
>> --- a/mm/mmap.c
>> +++ b/mm/mmap.c
>> @@ -193,9 +193,11 @@ static int do_brk_flags(unsigned long addr, unsigned long request, unsigned long
>>   	unsigned long retval;
>>   	unsigned long newbrk, oldbrk;
>>   	struct mm_struct *mm = current->mm;
>> +	unsigned long origbrk = mm->brk;
> Is it safe to read mm->brk outside the lock?

Aha, thanks for catching this. It can be moved inside down_write().

Will solve in the next version.

Thanks,
Yang

>
>>   	struct vm_area_struct *next;
>>   	unsigned long min_brk;
>>   	bool populate;
>> +	bool downgrade = false;
> Again,
>
> s/downgrade/downgraded/ ?
>
>>   	LIST_HEAD(uf);
>>   
>>   	if (down_write_killable(&mm->mmap_sem))
>> @@ -229,14 +231,29 @@ static int do_brk_flags(unsigned long addr, unsigned long request, unsigned long
>>   
>>   	newbrk = PAGE_ALIGN(brk);
>>   	oldbrk = PAGE_ALIGN(mm->brk);
>> -	if (oldbrk == newbrk)
>> -		goto set_brk;
>> +	if (oldbrk == newbrk) {
>> +		mm->brk = brk;
>> +		goto success;
>> +	}
>>   
>> -	/* Always allow shrinking brk. */
>> +	/*
>> +	 * Always allow shrinking brk.
>> +	 * __do_munmap() may downgrade mmap_sem to read.
>> +	 */
>>   	if (brk <= mm->brk) {
>> -		if (!do_munmap(mm, newbrk, oldbrk-newbrk, &uf))
>> -			goto set_brk;
>> -		goto out;
>> +		/*
>> +		 * mm->brk needs to be protected by write mmap_sem, so update
>> +		 * it before downgrading mmap_sem.
>> +		 * When __do_munmap() fails, it will be restored from origbrk.
>> +		 */
>> +		mm->brk = brk;
>> +		retval = __do_munmap(mm, newbrk, oldbrk-newbrk, &uf, true);
>> +		if (retval < 0) {
>> +			mm->brk = origbrk;
>> +			goto out;
>> +		} else if (retval == 1)
>> +			downgrade = true;
>> +		goto success;
>>   	}
>>   
>>   	/* Check against existing mmap mappings. */
>> @@ -247,18 +264,21 @@ static int do_brk_flags(unsigned long addr, unsigned long request, unsigned long
>>   	/* Ok, looks good - let it rip. */
>>   	if (do_brk_flags(oldbrk, newbrk-oldbrk, 0, &uf) < 0)
>>   		goto out;
>> -
>> -set_brk:
>>   	mm->brk = brk;
>> +
>> +success:
>>   	populate = newbrk > oldbrk && (mm->def_flags & VM_LOCKED) != 0;
>> -	up_write(&mm->mmap_sem);
>> +	if (downgrade)
>> +		up_read(&mm->mmap_sem);
>> +	else
>> +		up_write(&mm->mmap_sem);
>>   	userfaultfd_unmap_complete(mm, &uf);
>>   	if (populate)
>>   		mm_populate(oldbrk, newbrk - oldbrk);
>>   	return brk;
>>   
>>   out:
>> -	retval = mm->brk;
>> +	retval = origbrk;
>>   	up_write(&mm->mmap_sem);
>>   	return retval;
>>   }
>> -- 
>> 1.8.3.1
>>


