linux-fsdevel.vger.kernel.org archive mirror
* [PATCH] mm: proc: smaps_rollup: Fix pss_locked calculation
@ 2019-01-21  1:10 Sandeep Patil
       [not found] ` <20190123225746.5B3DF218A4@mail.kernel.org>
  2019-01-29  0:15 ` Andrew Morton
  0 siblings, 2 replies; 6+ messages in thread
From: Sandeep Patil @ 2019-01-21  1:10 UTC (permalink / raw)
  To: vbabka, adobriyan, akpm, avagin
  Cc: linux-fsdevel, linux-mm, linux-kernel, stable, kernel-team, dancol

The 'pss_locked' field of smaps_rollup was being calculated incorrectly:
it accumulated the current pss every time a locked VMA was found.

Fix that by recording the current pss value before each VMA is walked,
so that only the delta is added when the VMA turns out to be VM_LOCKED.

Fixes: 493b0e9d945f ("mm: add /proc/pid/smaps_rollup")
Cc: stable@vger.kernel.org # 4.14.y 4.19.y
Signed-off-by: Sandeep Patil <sspatil@android.com>
---
 fs/proc/task_mmu.c | 6 ++++--
 1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/fs/proc/task_mmu.c b/fs/proc/task_mmu.c
index f0ec9edab2f3..51a00a2b4733 100644
--- a/fs/proc/task_mmu.c
+++ b/fs/proc/task_mmu.c
@@ -709,6 +709,7 @@ static void smap_gather_stats(struct vm_area_struct *vma,
 #endif
 		.mm = vma->vm_mm,
 	};
+	unsigned long pss;
 
 	smaps_walk.private = mss;
 
@@ -737,11 +738,12 @@ static void smap_gather_stats(struct vm_area_struct *vma,
 		}
 	}
 #endif
-
+	/* record current pss so we can calculate the delta after page walk */
+	pss = mss->pss;
 	/* mmap_sem is held in m_start */
 	walk_page_vma(vma, &smaps_walk);
 	if (vma->vm_flags & VM_LOCKED)
-		mss->pss_locked += mss->pss;
+		mss->pss_locked += mss->pss - pss;
 }
 
 #define SEQ_PUT_DEC(str, val) \
-- 
2.20.1.321.g9e740568ce-goog


^ permalink raw reply related	[flat|nested] 6+ messages in thread

* Re: [PATCH] mm: proc: smaps_rollup: Fix pss_locked calculation
       [not found] ` <20190123225746.5B3DF218A4@mail.kernel.org>
@ 2019-01-24 21:39   ` Sandeep Patil
  2019-01-25  6:21     ` Greg KH
  0 siblings, 1 reply; 6+ messages in thread
From: Sandeep Patil @ 2019-01-24 21:39 UTC (permalink / raw)
  To: Sasha Levin; +Cc: vbabka, adobriyan, akpm, linux-fsdevel, linux-mm, stable

On Wed, Jan 23, 2019 at 10:57:45PM +0000, Sasha Levin wrote:
> Hi,
> 
> [This is an automated email]
> 
> This commit has been processed because it contains a "Fixes:" tag,
> fixing commit: 493b0e9d945f mm: add /proc/pid/smaps_rollup.
> 
> The bot has tested the following trees: v4.20.3, v4.19.16, v4.14.94.
> 
> v4.20.3: Build OK!
> v4.19.16: Build OK!
> v4.14.94: Failed to apply! Possible dependencies:
>     8526d84f8171 ("fs/proc/task_mmu.c: do not show VmExe bigger than total executable virtual memory")
>     8e68d689afe3 ("mm: /proc/pid/smaps: factor out mem stats gathering")
>     af5b0f6a09e4 ("mm: consolidate page table accounting")
>     b4e98d9ac775 ("mm: account pud page tables")
>     c4812909f5d5 ("mm: introduce wrappers to access mm->nr_ptes")
>     d1be35cb6f96 ("proc: add seq_put_decimal_ull_width to speed up /proc/pid/smaps")
> 
> 
> How should we proceed with this patch?

I will send 4.14 / 4.9 backports to -stable if / when the patch gets
accepted.

- ssp


* Re: [PATCH] mm: proc: smaps_rollup: Fix pss_locked calculation
  2019-01-24 21:39   ` Sandeep Patil
@ 2019-01-25  6:21     ` Greg KH
  0 siblings, 0 replies; 6+ messages in thread
From: Greg KH @ 2019-01-25  6:21 UTC (permalink / raw)
  To: Sandeep Patil
  Cc: Sasha Levin, vbabka, adobriyan, akpm, linux-fsdevel, linux-mm, stable

On Thu, Jan 24, 2019 at 01:39:40PM -0800, Sandeep Patil wrote:
> On Wed, Jan 23, 2019 at 10:57:45PM +0000, Sasha Levin wrote:
> > Hi,
> > 
> > [This is an automated email]
> > 
> > This commit has been processed because it contains a "Fixes:" tag,
> > fixing commit: 493b0e9d945f mm: add /proc/pid/smaps_rollup.
> > 
> > The bot has tested the following trees: v4.20.3, v4.19.16, v4.14.94.
> > 
> > v4.20.3: Build OK!
> > v4.19.16: Build OK!
> > v4.14.94: Failed to apply! Possible dependencies:
> >     8526d84f8171 ("fs/proc/task_mmu.c: do not show VmExe bigger than total executable virtual memory")
> >     8e68d689afe3 ("mm: /proc/pid/smaps: factor out mem stats gathering")
> >     af5b0f6a09e4 ("mm: consolidate page table accounting")
> >     b4e98d9ac775 ("mm: account pud page tables")
> >     c4812909f5d5 ("mm: introduce wrappers to access mm->nr_ptes")
> >     d1be35cb6f96 ("proc: add seq_put_decimal_ull_width to speed up /proc/pid/smaps")
> > 
> > 
> > How should we proceed with this patch?
> 
> I will send 4.14 / 4.9 backports to -stable if / when the patch gets
> accepted.

That's fine. You will get the automated "FAILED:" emails when I try to
apply it to the tree at that time, and you can send an updated version
then if you want.

thanks,

greg k-h


* Re: [PATCH] mm: proc: smaps_rollup: Fix pss_locked calculation
  2019-01-21  1:10 [PATCH] mm: proc: smaps_rollup: Fix pss_locked calculation Sandeep Patil
       [not found] ` <20190123225746.5B3DF218A4@mail.kernel.org>
@ 2019-01-29  0:15 ` Andrew Morton
  2019-01-29 15:52   ` Vlastimil Babka
  1 sibling, 1 reply; 6+ messages in thread
From: Andrew Morton @ 2019-01-29  0:15 UTC (permalink / raw)
  To: Sandeep Patil
  Cc: vbabka, adobriyan, avagin, linux-fsdevel, linux-mm, linux-kernel,
	stable, kernel-team, dancol

On Sun, 20 Jan 2019 17:10:49 -0800 Sandeep Patil <sspatil@android.com> wrote:

> The 'pss_locked' field of smaps_rollup was being calculated incorrectly:
> it accumulated the current pss every time a locked VMA was found.
> 
> Fix that by recording the current pss value before each VMA is walked,
> so that only the delta is added when the VMA turns out to be VM_LOCKED.
> 
> ...
>
> --- a/fs/proc/task_mmu.c
> +++ b/fs/proc/task_mmu.c
> @@ -709,6 +709,7 @@ static void smap_gather_stats(struct vm_area_struct *vma,
>  #endif
>  		.mm = vma->vm_mm,
>  	};
> +	unsigned long pss;
>  
>  	smaps_walk.private = mss;
>  
> @@ -737,11 +738,12 @@ static void smap_gather_stats(struct vm_area_struct *vma,
>  		}
>  	}
>  #endif
> -
> +	/* record current pss so we can calculate the delta after page walk */
> +	pss = mss->pss;
>  	/* mmap_sem is held in m_start */
>  	walk_page_vma(vma, &smaps_walk);
>  	if (vma->vm_flags & VM_LOCKED)
> -		mss->pss_locked += mss->pss;
> +		mss->pss_locked += mss->pss - pss;
>  }

This seems to be a rather obscure way of accumulating
mem_size_stats.pss_locked.  Wouldn't it make more sense to do this in
smaps_account(), wherever we increment mem_size_stats.pss?

It would be a tiny bit less efficient but I think that the code cleanup
justifies such a cost?


* Re: [PATCH] mm: proc: smaps_rollup: Fix pss_locked calculation
  2019-01-29  0:15 ` Andrew Morton
@ 2019-01-29 15:52   ` Vlastimil Babka
  2019-02-03  6:21     ` Sandeep Patil
  0 siblings, 1 reply; 6+ messages in thread
From: Vlastimil Babka @ 2019-01-29 15:52 UTC (permalink / raw)
  To: Andrew Morton, Sandeep Patil
  Cc: adobriyan, avagin, linux-fsdevel, linux-mm, linux-kernel, stable,
	kernel-team, dancol

On 1/29/19 1:15 AM, Andrew Morton wrote:
> On Sun, 20 Jan 2019 17:10:49 -0800 Sandeep Patil <sspatil@android.com> wrote:
> 
>> The 'pss_locked' field of smaps_rollup was being calculated incorrectly:
>> it accumulated the current pss every time a locked VMA was found.
>> 
>> Fix that by recording the current pss value before each VMA is walked,
>> so that only the delta is added when the VMA turns out to be VM_LOCKED.
>> 
>> ...
>>
>> --- a/fs/proc/task_mmu.c
>> +++ b/fs/proc/task_mmu.c
>> @@ -709,6 +709,7 @@ static void smap_gather_stats(struct vm_area_struct *vma,
>>  #endif
>>  		.mm = vma->vm_mm,
>>  	};
>> +	unsigned long pss;
>>  
>>  	smaps_walk.private = mss;
>>  
>> @@ -737,11 +738,12 @@ static void smap_gather_stats(struct vm_area_struct *vma,
>>  		}
>>  	}
>>  #endif
>> -
>> +	/* record current pss so we can calculate the delta after page walk */
>> +	pss = mss->pss;
>>  	/* mmap_sem is held in m_start */
>>  	walk_page_vma(vma, &smaps_walk);
>>  	if (vma->vm_flags & VM_LOCKED)
>> -		mss->pss_locked += mss->pss;
>> +		mss->pss_locked += mss->pss - pss;
>>  }
> 
> This seems to be a rather obscure way of accumulating
> mem_size_stats.pss_locked.  Wouldn't it make more sense to do this in
> smaps_account(), wherever we increment mem_size_stats.pss?
> 
> It would be a tiny bit less efficient but I think that the code cleanup
> justifies such a cost?

Yeah. Sandeep, could you add a 'bool locked' param to smaps_account() and
check it there? We probably don't need the whole vma param yet.

Thanks.


* Re: [PATCH] mm: proc: smaps_rollup: Fix pss_locked calculation
  2019-01-29 15:52   ` Vlastimil Babka
@ 2019-02-03  6:21     ` Sandeep Patil
  0 siblings, 0 replies; 6+ messages in thread
From: Sandeep Patil @ 2019-02-03  6:21 UTC (permalink / raw)
  To: Vlastimil Babka
  Cc: Andrew Morton, adobriyan, avagin, linux-fsdevel, linux-mm,
	linux-kernel, stable, kernel-team, dancol

On Tue, Jan 29, 2019 at 04:52:21PM +0100, Vlastimil Babka wrote:
> On 1/29/19 1:15 AM, Andrew Morton wrote:
> > On Sun, 20 Jan 2019 17:10:49 -0800 Sandeep Patil <sspatil@android.com> wrote:
> > 
> >> The 'pss_locked' field of smaps_rollup was being calculated incorrectly:
> >> it accumulated the current pss every time a locked VMA was found.
> >> 
> >> Fix that by recording the current pss value before each VMA is walked,
> >> so that only the delta is added when the VMA turns out to be VM_LOCKED.
> >> 
> >> ...
> >>
> >> --- a/fs/proc/task_mmu.c
> >> +++ b/fs/proc/task_mmu.c
> >> @@ -709,6 +709,7 @@ static void smap_gather_stats(struct vm_area_struct *vma,
> >>  #endif
> >>  		.mm = vma->vm_mm,
> >>  	};
> >> +	unsigned long pss;
> >>  
> >>  	smaps_walk.private = mss;
> >>  
> >> @@ -737,11 +738,12 @@ static void smap_gather_stats(struct vm_area_struct *vma,
> >>  		}
> >>  	}
> >>  #endif
> >> -
> >> +	/* record current pss so we can calculate the delta after page walk */
> >> +	pss = mss->pss;
> >>  	/* mmap_sem is held in m_start */
> >>  	walk_page_vma(vma, &smaps_walk);
> >>  	if (vma->vm_flags & VM_LOCKED)
> >> -		mss->pss_locked += mss->pss;
> >> +		mss->pss_locked += mss->pss - pss;
> >>  }
> > 
> > This seems to be a rather obscure way of accumulating
> > mem_size_stats.pss_locked.  Wouldn't it make more sense to do this in
> > smaps_account(), wherever we increment mem_size_stats.pss?
> > 
> > It would be a tiny bit less efficient but I think that the code cleanup
> > justifies such a cost?
> 
> Yeah, Sandeep could you add 'bool locked' param to smaps_account() and check it
> there? We probably don't need the whole vma param yet.

Agree, I will send -v2 shortly.

- ssp


end of thread, other threads:[~2019-02-03  6:21 UTC | newest]

Thread overview: 6+ messages (download: mbox.gz / follow: Atom feed)
2019-01-21  1:10 [PATCH] mm: proc: smaps_rollup: Fix pss_locked calculation Sandeep Patil
     [not found] ` <20190123225746.5B3DF218A4@mail.kernel.org>
2019-01-24 21:39   ` Sandeep Patil
2019-01-25  6:21     ` Greg KH
2019-01-29  0:15 ` Andrew Morton
2019-01-29 15:52   ` Vlastimil Babka
2019-02-03  6:21     ` Sandeep Patil
