linux-kernel.vger.kernel.org archive mirror
* [PATCH] mm, vmscan: set shrinker to the left page count
@ 2016-06-27 11:02 Chen Feng
  2016-06-27 16:57 ` Vladimir Davydov
  0 siblings, 1 reply; 5+ messages in thread
From: Chen Feng @ 2016-06-27 11:02 UTC (permalink / raw)
  To: puck.chen, akpm, hannes, vdavydov, mhocko, vbabka, mgorman, riel,
	linux-mm, linux-kernel, labbott
  Cc: suzhuangluan, oliver.fu, puck.chen, dan.zhao, saberlily.xia, xuyiping

On my platform, a large amount of memory can be cached in the
ion page pool. When shrinking memory, the nr_to_scan passed to
ion is always too small:
to_scan: 395  ion_pool_cached: 27305

Currently, the shrinker's nr_deferred is set to total_scan, but
that is not the real amount left in the shrinker. Change it to
freeable - freed.
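
For illustration, with the numbers above and assuming the whole
to_scan batch was actually freed (assumed values, only to show the
magnitude of the change):

        unsigned long freeable = 27305; /* cached objects reported by ->count_objects() */
        unsigned long freed    = 395;   /* objects reclaimed in this pass (assumed) */
        long total_scan        = 0;     /* assumed leftover of the scan target */

        /* before: nr_deferred += total_scan       -> carries over ~0    */
        /* after:  nr_deferred += freeable - freed -> carries over 26910 */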

Signed-off-by: Chen Feng <puck.chen@hisilicon.com>
---
 mm/vmscan.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/mm/vmscan.c b/mm/vmscan.c
index c4a2f45..1ce3fc4 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -357,8 +357,8 @@ static unsigned long do_shrink_slab(struct shrink_control *shrinkctl,
 	 * manner that handles concurrent updates. If we exhausted the
 	 * scan, there is no need to do an update.
 	 */
-	if (total_scan > 0)
-		new_nr = atomic_long_add_return(total_scan,
+	if (freeable - freed > 0)
+		new_nr = atomic_long_add_return(freeable - freed,
 						&shrinker->nr_deferred[nid]);
 	else
 		new_nr = atomic_long_read(&shrinker->nr_deferred[nid]);
-- 
1.9.1


* Re: [PATCH] mm, vmscan: set shrinker to the left page count
  2016-06-27 11:02 [PATCH] mm, vmscan: set shrinker to the left page count Chen Feng
@ 2016-06-27 16:57 ` Vladimir Davydov
  2016-06-28 10:37   ` Chen Feng
  0 siblings, 1 reply; 5+ messages in thread
From: Vladimir Davydov @ 2016-06-27 16:57 UTC (permalink / raw)
  To: Chen Feng
  Cc: akpm, hannes, mhocko, vbabka, mgorman, riel, linux-mm,
	linux-kernel, labbott, suzhuangluan, oliver.fu, puck.chen,
	dan.zhao, saberlily.xia, xuyiping

On Mon, Jun 27, 2016 at 07:02:15PM +0800, Chen Feng wrote:
> On my platform, a large amount of memory can be cached in the
> ion page pool. When shrinking memory, the nr_to_scan passed to
> ion is always too small:
> to_scan: 395  ion_pool_cached: 27305

That's OK. We want to shrink slabs gradually, not all at once.

> 
> Currently, the shrinker's nr_deferred is set to total_scan, but
> that is not the real amount left in the shrinker.

And it shouldn't be. The idea behind nr_deferred is the following. A shrinker
may return SHRINK_STOP if the current allocation context doesn't allow
reclaiming its objects (e.g. reclaiming inodes under GFP_NOFS is
deadlock prone). In this case we can't call the shrinker right now, but
if we just forget about the batch we were supposed to reclaim in the
current iteration, we can wind up with so many of these objects that
they start to exert unfairly high pressure on user memory. So we add
the amount that we wanted to scan but couldn't to nr_deferred, so that
we can catch up when we get to shrink_slab() with a proper context.
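
To make this concrete, here is a much simplified sketch of the flow in
do_shrink_slab() (not the exact mm/vmscan.c code, just the idea):

        nr = atomic_long_xchg(&shrinker->nr_deferred[nid], 0); /* pick up deferred work */

        delta = (4 * nr_scanned) / shrinker->seeks;     /* new work, scaled by LRU pressure */
        delta = delta * freeable / (nr_eligible + 1);
        total_scan = nr + delta;

        while (total_scan >= batch_size) {
                ret = shrinker->scan_objects(shrinker, shrinkctl);
                if (ret == SHRINK_STOP)         /* wrong context, e.g. GFP_NOFS */
                        break;
                freed += ret;
                total_scan -= batch_size;
        }

        if (total_scan > 0)                     /* defer the work we didn't get to do */
                new_nr = atomic_long_add_return(total_scan,
                                                &shrinker->nr_deferred[nid]);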

> Change it to freeable - freed.
> 
> Signed-off-by: Chen Feng <puck.chen@hisilicon.com>
> ---
>  mm/vmscan.c | 4 ++--
>  1 file changed, 2 insertions(+), 2 deletions(-)
> 
> diff --git a/mm/vmscan.c b/mm/vmscan.c
> index c4a2f45..1ce3fc4 100644
> --- a/mm/vmscan.c
> +++ b/mm/vmscan.c
> @@ -357,8 +357,8 @@ static unsigned long do_shrink_slab(struct shrink_control *shrinkctl,
>  	 * manner that handles concurrent updates. If we exhausted the
>  	 * scan, there is no need to do an update.
>  	 */
> -	if (total_scan > 0)
> -		new_nr = atomic_long_add_return(total_scan,
> +	if (freeable - freed > 0)
> +		new_nr = atomic_long_add_return(freeable - freed,
>  						&shrinker->nr_deferred[nid]);
>  	else
>  		new_nr = atomic_long_read(&shrinker->nr_deferred[nid]);


* Re: [PATCH] mm, vmscan: set shrinker to the left page count
  2016-06-27 16:57 ` Vladimir Davydov
@ 2016-06-28 10:37   ` Chen Feng
  2016-06-28 16:48     ` Vladimir Davydov
  2016-06-29  7:25     ` Minchan Kim
  0 siblings, 2 replies; 5+ messages in thread
From: Chen Feng @ 2016-06-28 10:37 UTC (permalink / raw)
  To: Vladimir Davydov
  Cc: akpm, hannes, mhocko, vbabka, mgorman, riel, linux-mm,
	linux-kernel, labbott, suzhuangluan, oliver.fu, puck.chen,
	dan.zhao, saberlily.xia, xuyiping

Thanks for your reply.

On 2016/6/28 0:57, Vladimir Davydov wrote:
> On Mon, Jun 27, 2016 at 07:02:15PM +0800, Chen Feng wrote:
>> On my platform, a large amount of memory can be cached in the
>> ion page pool. When shrinking memory, the nr_to_scan passed to
>> ion is always too small:
>> to_scan: 395  ion_pool_cached: 27305
> 
> That's OK. We want to shrink slabs gradually, not all at once.
> 

OK, but my point is that there is a lot of memory waiting to be
freed, while to_scan is too small.

So the lowmemorykiller may kill the wrong process.
>>
>> Currently, the shrinker's nr_deferred is set to total_scan, but
>> that is not the real amount left in the shrinker.
> 
> And it shouldn't be. The idea behind nr_deferred is the following. A shrinker
> may return SHRINK_STOP if the current allocation context doesn't allow
> reclaiming its objects (e.g. reclaiming inodes under GFP_NOFS is
> deadlock prone). In this case we can't call the shrinker right now, but
> if we just forget about the batch we were supposed to reclaim in the
> current iteration, we can wind up with so many of these objects that
> they start to exert unfairly high pressure on user memory. So we add
> the amount that we wanted to scan but couldn't to nr_deferred, so that
> we can catch up when we get to shrink_slab() with a proper context.
> 
I am confused by your comments. If the shrinker returns SHRINK_STOP this
time, it can also return SHRINK_STOP next time.
Are there any other side effects of this change?

Any feedback is appreciated.
Thanks.
>> Change it to freeable - freed.
>>
>> Signed-off-by: Chen Feng <puck.chen@hisilicon.com>
>> ---
>>  mm/vmscan.c | 4 ++--
>>  1 file changed, 2 insertions(+), 2 deletions(-)
>>
>> diff --git a/mm/vmscan.c b/mm/vmscan.c
>> index c4a2f45..1ce3fc4 100644
>> --- a/mm/vmscan.c
>> +++ b/mm/vmscan.c
>> @@ -357,8 +357,8 @@ static unsigned long do_shrink_slab(struct shrink_control *shrinkctl,
>>  	 * manner that handles concurrent updates. If we exhausted the
>>  	 * scan, there is no need to do an update.
>>  	 */
>> -	if (total_scan > 0)
>> -		new_nr = atomic_long_add_return(total_scan,
>> +	if (freeable - freed > 0)
>> +		new_nr = atomic_long_add_return(freeable - freed,
>>  						&shrinker->nr_deferred[nid]);
>>  	else
>>  		new_nr = atomic_long_read(&shrinker->nr_deferred[nid]);
> 


* Re: [PATCH] mm, vmscan: set shrinker to the left page count
  2016-06-28 10:37   ` Chen Feng
@ 2016-06-28 16:48     ` Vladimir Davydov
  2016-06-29  7:25     ` Minchan Kim
  1 sibling, 0 replies; 5+ messages in thread
From: Vladimir Davydov @ 2016-06-28 16:48 UTC (permalink / raw)
  To: Chen Feng
  Cc: akpm, hannes, mhocko, vbabka, mgorman, riel, linux-mm,
	linux-kernel, labbott, suzhuangluan, oliver.fu, puck.chen,
	dan.zhao, saberlily.xia, xuyiping

On Tue, Jun 28, 2016 at 06:37:24PM +0800, Chen Feng wrote:
> Thanks for your reply.
> 
> On 2016/6/28 0:57, Vladimir Davydov wrote:
> > On Mon, Jun 27, 2016 at 07:02:15PM +0800, Chen Feng wrote:
> >> On my platform, a large amount of memory can be cached in the
> >> ion page pool. When shrinking memory, the nr_to_scan passed to
> >> ion is always too small:
> >> to_scan: 395  ion_pool_cached: 27305
> > 
> > That's OK. We want to shrink slabs gradually, not all at once.
> > 
> 
> OK, but my point is that there is a lot of memory waiting to be
> freed, while to_scan is too small.

A small value of 'total_scan' in comparison to 'freeable' (in shrink_slab)
means that memory pressure is not really high, so there is no need to
scan all the cached objects yet.
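
Roughly, the scan target added per pass scales with how much of the page
LRU was scanned, not with 'freeable' alone. A sketch with assumed numbers
(not taken from your report):

        unsigned long freeable    = 27305;      /* pool objects the shrinker could free */
        unsigned long nr_scanned  = 32;         /* LRU pages scanned this pass (assumed) */
        unsigned long nr_eligible = 2048;       /* LRU pages eligible on the node (assumed) */
        int seeks = 2;                          /* DEFAULT_SEEKS */

        unsigned long delta = (4 * nr_scanned) / seeks; /* 64 */
        delta = delta * freeable / (nr_eligible + 1);   /* ~853 objects this pass */

Under real memory pressure nr_scanned approaches nr_eligible and the
target grows towards a multiple of 'freeable'.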

> 
> So the lowmemorykiller may kill the wrong process.
> >>
> >> Currently, the shrinker's nr_deferred is set to total_scan, but
> >> that is not the real amount left in the shrinker.
> > 
> > And it shouldn't be. The idea behind nr_deferred is the following. A shrinker
> > may return SHRINK_STOP if the current allocation context doesn't allow
> > reclaiming its objects (e.g. reclaiming inodes under GFP_NOFS is
> > deadlock prone). In this case we can't call the shrinker right now, but
> > if we just forget about the batch we were supposed to reclaim in the
> > current iteration, we can wind up with so many of these objects that
> > they start to exert unfairly high pressure on user memory. So we add
> > the amount that we wanted to scan but couldn't to nr_deferred, so that
> > we can catch up when we get to shrink_slab() with a proper context.
> > 
> I am confused by your comments. If the shrinker returns SHRINK_STOP this
> time, it can also return SHRINK_STOP next time.

There's always kswapd running in the background, which calls reclaim with
GFP_KERNEL. So even if a process issues a lot of successive GFP_NOFS
allocations, which make fs shrinkers abort the scan, their objects will
still be scanned and reclaimed by kswapd.
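
The usual pattern in an fs shrinker looks roughly like this (a sketch of
the convention, similar to what super_cache_scan() does):

        static unsigned long fs_objects_scan(struct shrinker *shrink,
                                             struct shrink_control *sc)
        {
                unsigned long freed = 0;

                if (!(sc->gfp_mask & __GFP_FS))
                        return SHRINK_STOP;     /* can't reclaim now, defer the batch */

                /* ... free up to sc->nr_to_scan objects, counting them in freed ... */
                return freed;
        }

kswapd reclaims with GFP_KERNEL, so __GFP_FS is set there and the deferred
batches get processed.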


* Re: [PATCH] mm, vmscan: set shrinker to the left page count
  2016-06-28 10:37   ` Chen Feng
  2016-06-28 16:48     ` Vladimir Davydov
@ 2016-06-29  7:25     ` Minchan Kim
  1 sibling, 0 replies; 5+ messages in thread
From: Minchan Kim @ 2016-06-29  7:25 UTC (permalink / raw)
  To: Chen Feng
  Cc: Vladimir Davydov, akpm, hannes, mhocko, vbabka, mgorman, riel,
	linux-mm, linux-kernel, labbott, suzhuangluan, oliver.fu,
	puck.chen, dan.zhao, saberlily.xia, xuyiping

On Tue, Jun 28, 2016 at 06:37:24PM +0800, Chen Feng wrote:
> Thanks for your reply.
> 
> On 2016/6/28 0:57, Vladimir Davydov wrote:
> > On Mon, Jun 27, 2016 at 07:02:15PM +0800, Chen Feng wrote:
> >> On my platform, a large amount of memory can be cached in the
> >> ion page pool. When shrinking memory, the nr_to_scan passed to
> >> ion is always too small:
> >> to_scan: 395  ion_pool_cached: 27305
> > 
> > That's OK. We want to shrink slabs gradually, not all at once.
> > 
> 
> OK, but my point is that there is a lot of memory waiting to be
> freed, while to_scan is too small.
> 
> So the lowmemorykiller may kill the wrong process.

So the problem is that LMK is too aggressive. If that is really the
problem, you could fix LMK to consider reclaimable slab as well as
file pages.
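
For example, something along these lines in the lowmemorykiller heuristic
(a sketch against the staging driver of that time; treat the exact
counters as an assumption):

        /* lowmem_scan(): estimate of memory that is cheap to get back */
        int other_free = global_page_state(NR_FREE_PAGES) - totalreserve_pages;
        int other_file = global_page_state(NR_FILE_PAGES) -
                                global_page_state(NR_SHMEM) -
                                total_swapcache_pages();

        /* proposed: also count easily reclaimable slab (and pool) pages */
        other_file += global_page_state(NR_SLAB_RECLAIMABLE);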

Thanks.


end of thread, other threads:[~2016-06-29  7:25 UTC | newest]

Thread overview: 5+ messages
2016-06-27 11:02 [PATCH] mm, vmscan: set shrinker to the left page count Chen Feng
2016-06-27 16:57 ` Vladimir Davydov
2016-06-28 10:37   ` Chen Feng
2016-06-28 16:48     ` Vladimir Davydov
2016-06-29  7:25     ` Minchan Kim
