linux-kernel.vger.kernel.org archive mirror
* [PATCH 0/2] mm: fix the racy mm->locked_vm change in acct_stack_growth()
@ 2015-09-29 18:27 Oleg Nesterov
  2015-09-29 18:27 ` [PATCH 1/2] mm: fix the racy mm->locked_vm change in Oleg Nesterov
  2015-09-29 18:28 ` [PATCH 2/2] mm: add the "struct mm_struct *mm" local into Oleg Nesterov
  0 siblings, 2 replies; 7+ messages in thread
From: Oleg Nesterov @ 2015-09-29 18:27 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Andrey Konovalov, Davidlohr Bueso, Hugh Dickins,
	Kirill A. Shutemov, Sasha Levin, Vlastimil Babka, linux-kernel

Kirill, Hugh, could you please review and ack/nack? Looks simple,
but I'm afraid I could miss something and I have no idea how to
test this.

Oleg.


^ permalink raw reply	[flat|nested] 7+ messages in thread

* [PATCH 1/2] mm: fix the racy mm->locked_vm change in
  2015-09-29 18:27 [PATCH 0/2] mm: fix the racy mm->locked_vm change in acct_stack_growth() Oleg Nesterov
@ 2015-09-29 18:27 ` Oleg Nesterov
  2015-10-01  3:01   ` Hugh Dickins
  2015-09-29 18:28 ` [PATCH 2/2] mm: add the "struct mm_struct *mm" local into Oleg Nesterov
  1 sibling, 1 reply; 7+ messages in thread
From: Oleg Nesterov @ 2015-09-29 18:27 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Andrey Konovalov, Davidlohr Bueso, Hugh Dickins,
	Kirill A. Shutemov, Sasha Levin, Vlastimil Babka, linux-kernel

"mm->locked_vm += grow" and vm_stat_account() in acct_stack_growth()
are not safe; multiple threads using the same ->mm can do this at the
same time trying to expand different vma's under down_read(mmap_sem).
This means that one of the "locked_vm += grow" changes can be lost
and we can miss munlock_vma_pages_all() later.

Move this code into the caller(s) under mm->page_table_lock. All other
updates to ->locked_vm hold mmap_sem for writing.

Signed-off-by: Oleg Nesterov <oleg@redhat.com>
---
 mm/mmap.c | 12 ++++++++----
 1 file changed, 8 insertions(+), 4 deletions(-)

diff --git a/mm/mmap.c b/mm/mmap.c
index 8393580..4efdc37 100644
--- a/mm/mmap.c
+++ b/mm/mmap.c
@@ -2138,10 +2138,6 @@ static int acct_stack_growth(struct vm_area_struct *vma, unsigned long size, uns
 	if (security_vm_enough_memory_mm(mm, grow))
 		return -ENOMEM;
 
-	/* Ok, everything looks good - let it rip */
-	if (vma->vm_flags & VM_LOCKED)
-		mm->locked_vm += grow;
-	vm_stat_account(mm, vma->vm_flags, vma->vm_file, grow);
 	return 0;
 }
 
@@ -2202,6 +2198,10 @@ int expand_upwards(struct vm_area_struct *vma, unsigned long address)
 				 * against concurrent vma expansions.
 				 */
 				spin_lock(&vma->vm_mm->page_table_lock);
+				if (vma->vm_flags & VM_LOCKED)
+					vma->vm_mm->locked_vm += grow;
+				vm_stat_account(vma->vm_mm, vma->vm_flags,
+						vma->vm_file, grow);
 				anon_vma_interval_tree_pre_update_vma(vma);
 				vma->vm_end = address;
 				anon_vma_interval_tree_post_update_vma(vma);
@@ -2273,6 +2273,10 @@ int expand_downwards(struct vm_area_struct *vma,
 				 * against concurrent vma expansions.
 				 */
 				spin_lock(&vma->vm_mm->page_table_lock);
+				if (vma->vm_flags & VM_LOCKED)
+					vma->vm_mm->locked_vm += grow;
+				vm_stat_account(vma->vm_mm, vma->vm_flags,
+						vma->vm_file, grow);
 				anon_vma_interval_tree_pre_update_vma(vma);
 				vma->vm_start = address;
 				vma->vm_pgoff -= grow;
-- 
2.4.3



* [PATCH 2/2] mm: add the "struct mm_struct *mm" local into
  2015-09-29 18:27 [PATCH 0/2] mm: fix the racy mm->locked_vm change in acct_stack_growth() Oleg Nesterov
  2015-09-29 18:27 ` [PATCH 1/2] mm: fix the racy mm->locked_vm change in Oleg Nesterov
@ 2015-09-29 18:28 ` Oleg Nesterov
  2015-10-01  3:02   ` Hugh Dickins
  1 sibling, 1 reply; 7+ messages in thread
From: Oleg Nesterov @ 2015-09-29 18:28 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Andrey Konovalov, Davidlohr Bueso, Hugh Dickins,
	Kirill A. Shutemov, Sasha Levin, Vlastimil Babka, linux-kernel

Cosmetic, but expand_upwards() and expand_downwards() overuse
vma->vm_mm; a local variable makes sense, imho.

Signed-off-by: Oleg Nesterov <oleg@redhat.com>
---
 mm/mmap.c | 24 +++++++++++++-----------
 1 file changed, 13 insertions(+), 11 deletions(-)

diff --git a/mm/mmap.c b/mm/mmap.c
index 4efdc37..7edf9ed 100644
--- a/mm/mmap.c
+++ b/mm/mmap.c
@@ -2148,6 +2148,7 @@ static int acct_stack_growth(struct vm_area_struct *vma, unsigned long size, uns
  */
 int expand_upwards(struct vm_area_struct *vma, unsigned long address)
 {
+	struct mm_struct *mm = vma->vm_mm;
 	int error;
 
 	if (!(vma->vm_flags & VM_GROWSUP))
@@ -2197,10 +2198,10 @@ int expand_upwards(struct vm_area_struct *vma, unsigned long address)
 				 * So, we reuse mm->page_table_lock to guard
 				 * against concurrent vma expansions.
 				 */
-				spin_lock(&vma->vm_mm->page_table_lock);
+				spin_lock(&mm->page_table_lock);
 				if (vma->vm_flags & VM_LOCKED)
-					vma->vm_mm->locked_vm += grow;
-				vm_stat_account(vma->vm_mm, vma->vm_flags,
+					mm->locked_vm += grow;
+				vm_stat_account(mm, vma->vm_flags,
 						vma->vm_file, grow);
 				anon_vma_interval_tree_pre_update_vma(vma);
 				vma->vm_end = address;
@@ -2208,8 +2209,8 @@ int expand_upwards(struct vm_area_struct *vma, unsigned long address)
 				if (vma->vm_next)
 					vma_gap_update(vma->vm_next);
 				else
-					vma->vm_mm->highest_vm_end = address;
-				spin_unlock(&vma->vm_mm->page_table_lock);
+					mm->highest_vm_end = address;
+				spin_unlock(&mm->page_table_lock);
 
 				perf_event_mmap(vma);
 			}
@@ -2217,7 +2218,7 @@ int expand_upwards(struct vm_area_struct *vma, unsigned long address)
 	}
 	vma_unlock_anon_vma(vma);
 	khugepaged_enter_vma_merge(vma, vma->vm_flags);
-	validate_mm(vma->vm_mm);
+	validate_mm(mm);
 	return error;
 }
 #endif /* CONFIG_STACK_GROWSUP || CONFIG_IA64 */
@@ -2228,6 +2229,7 @@ int expand_upwards(struct vm_area_struct *vma, unsigned long address)
 int expand_downwards(struct vm_area_struct *vma,
 				   unsigned long address)
 {
+	struct mm_struct *mm = vma->vm_mm;
 	int error;
 
 	/*
@@ -2272,17 +2274,17 @@ int expand_downwards(struct vm_area_struct *vma,
 				 * So, we reuse mm->page_table_lock to guard
 				 * against concurrent vma expansions.
 				 */
-				spin_lock(&vma->vm_mm->page_table_lock);
+				spin_lock(&mm->page_table_lock);
 				if (vma->vm_flags & VM_LOCKED)
-					vma->vm_mm->locked_vm += grow;
-				vm_stat_account(vma->vm_mm, vma->vm_flags,
+					mm->locked_vm += grow;
+				vm_stat_account(mm, vma->vm_flags,
 						vma->vm_file, grow);
 				anon_vma_interval_tree_pre_update_vma(vma);
 				vma->vm_start = address;
 				vma->vm_pgoff -= grow;
 				anon_vma_interval_tree_post_update_vma(vma);
 				vma_gap_update(vma);
-				spin_unlock(&vma->vm_mm->page_table_lock);
+				spin_unlock(&mm->page_table_lock);
 
 				perf_event_mmap(vma);
 			}
@@ -2290,7 +2292,7 @@ int expand_downwards(struct vm_area_struct *vma,
 	}
 	vma_unlock_anon_vma(vma);
 	khugepaged_enter_vma_merge(vma, vma->vm_flags);
-	validate_mm(vma->vm_mm);
+	validate_mm(mm);
 	return error;
 }
 
-- 
2.4.3



* Re: [PATCH 1/2] mm: fix the racy mm->locked_vm change in
  2015-09-29 18:27 ` [PATCH 1/2] mm: fix the racy mm->locked_vm change in Oleg Nesterov
@ 2015-10-01  3:01   ` Hugh Dickins
  2015-10-01 14:49     ` Oleg Nesterov
  0 siblings, 1 reply; 7+ messages in thread
From: Hugh Dickins @ 2015-10-01  3:01 UTC (permalink / raw)
  To: Oleg Nesterov
  Cc: Andrew Morton, Andrey Konovalov, Davidlohr Bueso, Hugh Dickins,
	Kirill A. Shutemov, Sasha Levin, Vlastimil Babka,
	Andrea Arcangeli, Michel Lespinasse, linux-kernel, linux-mm

On Tue, 29 Sep 2015, Oleg Nesterov wrote:

> "mm->locked_vm += grow" and vm_stat_account() in acct_stack_growth()
> are not safe; multiple threads using the same ->mm can do this at the
> same time trying to expans different vma's under down_read(mmap_sem).
                      expand
> This means that one of the "locked_vm += grow" changes can be lost
> and we can miss munlock_vma_pages_all() later.

From the Cc list, I guess you are thinking this might be the fix to
the "Bad page state (mlocked)" issues Andrey and Sasha have reported.

I've not been able to explain those from the direction in which
I was thinking (despite giving it more hours of thought meanwhile),
so I am glad you're looking at it from a very different direction,
and hope you're right with this.

> 
> Move this code into the caller(s) under mm->page_table_lock. All other
> updates to ->locked_vm hold mmap_sem for writing.

So it looks like Andrea and I broke this back in v2.6.7: page_table_lock
was used here before then, and we thought the anon_vma lock was better.

Confession: from that time until today, I thought MAP_GROWSDOWN was
one of those flags (say, like MAP_DENYWRITE) which the kernel accepts
from userspace but ignores; I thought ia64 was the only architecture
on which an mm might contain more than one VM_GROWS* vma (excepting
the case where the original gets split; but surely stack would have
its anon_vma allocated by then, and shared across the split).  It's
only this patch of yours that leads me to calc_vm_flag_bits(), and
to how Michel brought page_table_lock back here to guard vma_gap.

> 
> Signed-off-by: Oleg Nesterov <oleg@redhat.com>

Acked-by: Hugh Dickins <hughd@google.com>

with some hesitation.  I don't like very much that the preliminary
mm->locked_vm + grow check is still done without complete locking,
so racing threads could get more locked_vm than they're permitted;
but I'm not sure that we care enough to put page_table_lock back
over all of that (and security_vm_enough_memory wants to have final
say on whether to go ahead); even if it was that way years ago.

(And if we did care, shouldn't __vm_enough_memory() be using
percpu_counter_compare instead of percpu_counter_read_positive?
but that's a digression.)

It would be even nicer if we could kill these expand_stack()
anomalies once and for all, with down_write of mmap_sem here too.
But can't be done without revisiting every architecture's mm/fault.c,
which I have no stomach for at this time, and probably you neither.

Let's accept that your patch is a significant improvement,
and hope that it fixes the "Bad page state (mlocked)".

> ---
>  mm/mmap.c | 12 ++++++++----
>  1 file changed, 8 insertions(+), 4 deletions(-)
> 
> diff --git a/mm/mmap.c b/mm/mmap.c
> index 8393580..4efdc37 100644
> --- a/mm/mmap.c
> +++ b/mm/mmap.c
> @@ -2138,10 +2138,6 @@ static int acct_stack_growth(struct vm_area_struct *vma, unsigned long size, uns
>  	if (security_vm_enough_memory_mm(mm, grow))
>  		return -ENOMEM;
>  
> -	/* Ok, everything looks good - let it rip */
> -	if (vma->vm_flags & VM_LOCKED)
> -		mm->locked_vm += grow;
> -	vm_stat_account(mm, vma->vm_flags, vma->vm_file, grow);
>  	return 0;
>  }
>  
> @@ -2202,6 +2198,10 @@ int expand_upwards(struct vm_area_struct *vma, unsigned long address)
>  				 * against concurrent vma expansions.
>  				 */
>  				spin_lock(&vma->vm_mm->page_table_lock);
> +				if (vma->vm_flags & VM_LOCKED)
> +					vma->vm_mm->locked_vm += grow;
> +				vm_stat_account(vma->vm_mm, vma->vm_flags,
> +						vma->vm_file, grow);
>  				anon_vma_interval_tree_pre_update_vma(vma);
>  				vma->vm_end = address;
>  				anon_vma_interval_tree_post_update_vma(vma);
> @@ -2273,6 +2273,10 @@ int expand_downwards(struct vm_area_struct *vma,
>  				 * against concurrent vma expansions.
>  				 */
>  				spin_lock(&vma->vm_mm->page_table_lock);
> +				if (vma->vm_flags & VM_LOCKED)
> +					vma->vm_mm->locked_vm += grow;
> +				vm_stat_account(vma->vm_mm, vma->vm_flags,
> +						vma->vm_file, grow);
>  				anon_vma_interval_tree_pre_update_vma(vma);
>  				vma->vm_start = address;
>  				vma->vm_pgoff -= grow;
> -- 
> 2.4.3
> 
> 


* Re: [PATCH 2/2] mm: add the "struct mm_struct *mm" local into
  2015-09-29 18:28 ` [PATCH 2/2] mm: add the "struct mm_struct *mm" local into Oleg Nesterov
@ 2015-10-01  3:02   ` Hugh Dickins
  0 siblings, 0 replies; 7+ messages in thread
From: Hugh Dickins @ 2015-10-01  3:02 UTC (permalink / raw)
  To: Oleg Nesterov
  Cc: Andrew Morton, Andrey Konovalov, Davidlohr Bueso, Hugh Dickins,
	Kirill A. Shutemov, Sasha Levin, Vlastimil Babka, linux-kernel,
	linux-mm

On Tue, 29 Sep 2015, Oleg Nesterov wrote:

> Cosmetic, but expand_upwards() and expand_downwards() overuse
> vma->vm_mm, a local variable makes sense imho.
> 
> Signed-off-by: Oleg Nesterov <oleg@redhat.com>

Acked-by: Hugh Dickins <hughd@google.com>

> ---
>  mm/mmap.c | 24 +++++++++++++-----------
>  1 file changed, 13 insertions(+), 11 deletions(-)
> 
> diff --git a/mm/mmap.c b/mm/mmap.c
> index 4efdc37..7edf9ed 100644
> --- a/mm/mmap.c
> +++ b/mm/mmap.c
> @@ -2148,6 +2148,7 @@ static int acct_stack_growth(struct vm_area_struct *vma, unsigned long size, uns
>   */
>  int expand_upwards(struct vm_area_struct *vma, unsigned long address)
>  {
> +	struct mm_struct *mm = vma->vm_mm;
>  	int error;
>  
>  	if (!(vma->vm_flags & VM_GROWSUP))
> @@ -2197,10 +2198,10 @@ int expand_upwards(struct vm_area_struct *vma, unsigned long address)
>  				 * So, we reuse mm->page_table_lock to guard
>  				 * against concurrent vma expansions.
>  				 */
> -				spin_lock(&vma->vm_mm->page_table_lock);
> +				spin_lock(&mm->page_table_lock);
>  				if (vma->vm_flags & VM_LOCKED)
> -					vma->vm_mm->locked_vm += grow;
> -				vm_stat_account(vma->vm_mm, vma->vm_flags,
> +					mm->locked_vm += grow;
> +				vm_stat_account(mm, vma->vm_flags,
>  						vma->vm_file, grow);
>  				anon_vma_interval_tree_pre_update_vma(vma);
>  				vma->vm_end = address;
> @@ -2208,8 +2209,8 @@ int expand_upwards(struct vm_area_struct *vma, unsigned long address)
>  				if (vma->vm_next)
>  					vma_gap_update(vma->vm_next);
>  				else
> -					vma->vm_mm->highest_vm_end = address;
> -				spin_unlock(&vma->vm_mm->page_table_lock);
> +					mm->highest_vm_end = address;
> +				spin_unlock(&mm->page_table_lock);
>  
>  				perf_event_mmap(vma);
>  			}
> @@ -2217,7 +2218,7 @@ int expand_upwards(struct vm_area_struct *vma, unsigned long address)
>  	}
>  	vma_unlock_anon_vma(vma);
>  	khugepaged_enter_vma_merge(vma, vma->vm_flags);
> -	validate_mm(vma->vm_mm);
> +	validate_mm(mm);
>  	return error;
>  }
>  #endif /* CONFIG_STACK_GROWSUP || CONFIG_IA64 */
> @@ -2228,6 +2229,7 @@ int expand_upwards(struct vm_area_struct *vma, unsigned long address)
>  int expand_downwards(struct vm_area_struct *vma,
>  				   unsigned long address)
>  {
> +	struct mm_struct *mm = vma->vm_mm;
>  	int error;
>  
>  	/*
> @@ -2272,17 +2274,17 @@ int expand_downwards(struct vm_area_struct *vma,
>  				 * So, we reuse mm->page_table_lock to guard
>  				 * against concurrent vma expansions.
>  				 */
> -				spin_lock(&vma->vm_mm->page_table_lock);
> +				spin_lock(&mm->page_table_lock);
>  				if (vma->vm_flags & VM_LOCKED)
> -					vma->vm_mm->locked_vm += grow;
> -				vm_stat_account(vma->vm_mm, vma->vm_flags,
> +					mm->locked_vm += grow;
> +				vm_stat_account(mm, vma->vm_flags,
>  						vma->vm_file, grow);
>  				anon_vma_interval_tree_pre_update_vma(vma);
>  				vma->vm_start = address;
>  				vma->vm_pgoff -= grow;
>  				anon_vma_interval_tree_post_update_vma(vma);
>  				vma_gap_update(vma);
> -				spin_unlock(&vma->vm_mm->page_table_lock);
> +				spin_unlock(&mm->page_table_lock);
>  
>  				perf_event_mmap(vma);
>  			}
> @@ -2290,7 +2292,7 @@ int expand_downwards(struct vm_area_struct *vma,
>  	}
>  	vma_unlock_anon_vma(vma);
>  	khugepaged_enter_vma_merge(vma, vma->vm_flags);
> -	validate_mm(vma->vm_mm);
> +	validate_mm(mm);
>  	return error;
>  }
>  
> -- 
> 2.4.3
> 
> 


* Re: [PATCH 1/2] mm: fix the racy mm->locked_vm change in
  2015-10-01  3:01   ` Hugh Dickins
@ 2015-10-01 14:49     ` Oleg Nesterov
  2015-10-01 18:34       ` Hugh Dickins
  0 siblings, 1 reply; 7+ messages in thread
From: Oleg Nesterov @ 2015-10-01 14:49 UTC (permalink / raw)
  To: Hugh Dickins
  Cc: Andrew Morton, Andrey Konovalov, Davidlohr Bueso,
	Kirill A. Shutemov, Sasha Levin, Vlastimil Babka,
	Andrea Arcangeli, Michel Lespinasse, linux-kernel, linux-mm

On 09/30, Hugh Dickins wrote:
>
> On Tue, 29 Sep 2015, Oleg Nesterov wrote:
>
> > "mm->locked_vm += grow" and vm_stat_account() in acct_stack_growth()
> > are not safe; multiple threads using the same ->mm can do this at the
> > same time trying to expans different vma's under down_read(mmap_sem).
>                       expand
> > This means that one of the "locked_vm += grow" changes can be lost
> > and we can miss munlock_vma_pages_all() later.
>
> From the Cc list, I guess you are thinking this might be the fix to
> the "Bad page state (mlocked)" issues Andrey and Sasha have reported.

Yes, I found this when I tried to explain this problem, but I doubt
this change can fix it... Firstly I think it is very unlikely that
trinity hits this race. And even if mm->locked_vm is wrongly equal
to zero in exit_mmap(), it seems that page_remove_rmap() should do
clear_page_mlock(). But I do not understand this code enough. So if
this patch can actually help I would really like to know why ;)

And of course this can not explain other traces which look like
mm->mmap corruption.

> Acked-by: Hugh Dickins <hughd@google.com>

Thanks!

> with some hesitation.  I don't like very much that the preliminary
> mm->locked_vm + grow check is still done without complete locking,
> so racing threads could get more locked_vm than they're permitted;
> but I'm not sure that we care enough to put page_table_lock back
> over all of that (and security_vm_enough_memory wants to have final
> say on whether to go ahead); even if it was that way years ago.

Yes. Plus all these RLIMIT_MEMLOCK/etc and security_* checks assume
that we are going to expand current->mm, but this is not necessarily
true. Debugger or sys_process_vm_* can expand a foreign vma.

Oleg.



* Re: [PATCH 1/2] mm: fix the racy mm->locked_vm change in
  2015-10-01 14:49     ` Oleg Nesterov
@ 2015-10-01 18:34       ` Hugh Dickins
  0 siblings, 0 replies; 7+ messages in thread
From: Hugh Dickins @ 2015-10-01 18:34 UTC (permalink / raw)
  To: Oleg Nesterov
  Cc: Hugh Dickins, Andrew Morton, Andrey Konovalov, Davidlohr Bueso,
	Kirill A. Shutemov, Sasha Levin, Vlastimil Babka,
	Andrea Arcangeli, Michel Lespinasse, linux-kernel, linux-mm

On Thu, 1 Oct 2015, Oleg Nesterov wrote:
> On 09/30, Hugh Dickins wrote:
> >
> > On Tue, 29 Sep 2015, Oleg Nesterov wrote:
> >
> > > "mm->locked_vm += grow" and vm_stat_account() in acct_stack_growth()
> > > are not safe; multiple threads using the same ->mm can do this at the
> > > same time trying to expans different vma's under down_read(mmap_sem).
> >                       expand
> > > This means that one of the "locked_vm += grow" changes can be lost
> > > and we can miss munlock_vma_pages_all() later.
> >
> > From the Cc list, I guess you are thinking this might be the fix to
> > the "Bad page state (mlocked)" issues Andrey and Sasha have reported.
> 
> Yes, I found this when I tried to explain this problem, but I doubt
> this change can fix it... Firstly I think it is very unlikely that
> trinity hits this race. And even if mm->locked_vm is wrongly equal
> to zero in exit_mmap(), it seems that page_remove_rmap() should do
> clear_page_mlock().

Oh yes, good point, a subsequent clear_page_mlock(), in unmapping
this address space, or later unmapping from another, ought to clear
it before the page ever gets freed.

> But I do not understand this code enough. So if
> this patch can actually help I would really like to know why ;)

I doubt any of us understand it very well, mlock+munlock have
over the years become so much more grotesque than the uninitiated
would expect.

> 
> And of course this can not explain other traces which look like
> mm->mmap corruption.
> 
> > Acked-by: Hugh Dickins <hughd@google.com>
> 
> Thanks!
> 
> > with some hesitation.  I don't like very much that the preliminary
> > mm->locked_vm + grow check is still done without complete locking,
> > so racing threads could get more locked_vm than they're permitted;
> > but I'm not sure that we care enough to put page_table_lock back
> > over all of that (and security_vm_enough_memory wants to have final
> > say on whether to go ahead); even if it was that way years ago.
> 
> Yes. Plus all these RLIMIT_MEMLOCK/etc and security_* checks assume
> that we are going to expand current->mm, but this is not necessarily
> true. Debugger or sys_process_vm_* can expand a foreign vma.

Right, I'd forgotten all about that aspect: yes, none of us ever took
expand_stack()'s "current" assumptions seriously enough to rework its
interface with all the architectures, so that's another argument for
sticking for now with the patch you already have here - thanks.

Hugh

