linux-mm.kvack.org archive mirror
* [PATCH v2] mm/vmscan: get number of pages on the LRU list in memcgroup base on lru_zone_size
@ 2019-09-05  7:10 Honglei Wang
  2019-10-06  0:10 ` Andrew Morton
  2019-10-07 14:28 ` Michal Hocko
  0 siblings, 2 replies; 8+ messages in thread
From: Honglei Wang @ 2019-09-05  7:10 UTC (permalink / raw)
  To: linux-mm, vdavydov.dev, hannes, mhocko

lruvec_lru_size() currently uses lruvec_page_state_local() to get the
lru_size. That is based on lruvec_stat_local.count[] of
mem_cgroup_per_node. This counter is updated in batches: it is not
charged until the number of incoming pages reaches MEMCG_CHARGE_BATCH,
which is currently defined as 32.
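
For illustration, the batched accounting behaves roughly like the
simplified standalone model below. This is only a sketch of the idea,
not the kernel code; BATCH, pcpu_delta and lru_stat are made-up names
standing in for MEMCG_CHARGE_BATCH, the per-cpu pending delta and
lruvec_stat_local.count[]:

	#include <stdio.h>
	#include <stdlib.h>

	#define BATCH 32		/* stands in for MEMCG_CHARGE_BATCH */

	static long lru_stat;		/* counter that reclaim would read */
	static long pcpu_delta;		/* pending delta, not yet flushed */

	/* Batched update: the shared counter only moves once the pending
	 * delta exceeds BATCH, mirroring the "not more than 32" behaviour. */
	static void mod_lru_stat_batched(long nr_pages)
	{
		pcpu_delta += nr_pages;
		if (labs(pcpu_delta) > BATCH) {
			lru_stat += pcpu_delta;
			pcpu_delta = 0;
		}
	}

	int main(void)
	{
		mod_lru_stat_batched(32);	/* 32 pages: not > BATCH */
		printf("visible lru size: %ld (pending: %ld)\n",
		       lru_stat, pcpu_delta);	/* prints 0 and 32 */
		return 0;
	}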

The testcase in LTP madvise09[1] fails because small blocks of memory
are not charged. It creates a new memcgroup and sets up 32 MADV_FREE
pages, then forks a child which introduces memory pressure in the
memcgroup. The MADV_FREE pages are expected to be released under that
pressure, but 32 is not more than MEMCG_CHARGE_BATCH, so these pages
are not reflected in lruvec_stat_local.count[] until more pages come in
to satisfy the batch charging. As a result the MADV_FREE pages cannot
be freed under memory pressure, which conflicts with the definition of
MADV_FREE.
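
A condensed sketch of the failing scenario (illustrative only, not the
actual LTP code; the memcgroup setup is only described in a comment and
the sizes are assumptions):

	#include <stdio.h>
	#include <string.h>
	#include <unistd.h>
	#include <sys/mman.h>
	#include <sys/wait.h>

	#define NR_PAGES 32

	int main(void)
	{
		size_t len = NR_PAGES * (size_t)sysconf(_SC_PAGESIZE);
		char *buf;

		/* The real test first creates a memcgroup, moves itself into
		 * it and sets a small hard limit (e.g. via
		 * memory.limit_in_bytes); that part is omitted here. */

		buf = mmap(NULL, len, PROT_READ | PROT_WRITE,
			   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
		if (buf == MAP_FAILED)
			return 1;
		memset(buf, 1, len);		/* fault the pages in */
		madvise(buf, len, MADV_FREE);	/* mark them lazily freeable */

		if (fork() == 0) {
			/* Child: allocate enough to push the group over its
			 * limit, creating memory pressure in the memcgroup. */
			char *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
				       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
			if (p != MAP_FAILED)
				memset(p, 1, len);
			_exit(0);
		}
		wait(NULL);

		/* The MADV_FREE pages are expected to have been discarded
		 * under pressure, i.e. reading them back should now give
		 * zero-filled pages instead of the old contents. */
		printf("first byte after pressure: %d\n", buf[0]);
		return 0;
	}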

Getting lru_size based on lru_zone_size of mem_cgroup_per_node, which
is not updated in batches, makes the value a bit more accurate in this
kind of scenario.

[1] https://github.com/linux-test-project/ltp/blob/master/testcases/kernel/syscalls/madvise/madvise09.c

Signed-off-by: Honglei Wang <honglei.wang@oracle.com>
---
 mm/vmscan.c | 9 +++++----
 1 file changed, 5 insertions(+), 4 deletions(-)

diff --git a/mm/vmscan.c b/mm/vmscan.c
index c77d1e3761a7..c28672460868 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -354,12 +354,13 @@ unsigned long zone_reclaimable_pages(struct zone *zone)
  */
 unsigned long lruvec_lru_size(struct lruvec *lruvec, enum lru_list lru, int zone_idx)
 {
-	unsigned long lru_size;
+	unsigned long lru_size = 0;
 	int zid;
 
-	if (!mem_cgroup_disabled())
-		lru_size = lruvec_page_state_local(lruvec, NR_LRU_BASE + lru);
-	else
+	if (!mem_cgroup_disabled()) {
+		for (zid = 0; zid < MAX_NR_ZONES; zid++)
+			lru_size += mem_cgroup_get_zone_lru_size(lruvec, lru, zid);
+	} else
 		lru_size = node_page_state(lruvec_pgdat(lruvec), NR_LRU_BASE + lru);
 
 	for (zid = zone_idx + 1; zid < MAX_NR_ZONES; zid++) {
-- 
2.17.0
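
For reference, mem_cgroup_get_zone_lru_size() used above reads the
per-zone counters that are maintained directly when pages are added to
or removed from an LRU, so it is not subject to the batching described
in the changelog. Its definition at the time looked roughly like this
(paraphrased from include/linux/memcontrol.h; details may differ):

	static inline
	unsigned long mem_cgroup_get_zone_lru_size(struct lruvec *lruvec,
						   enum lru_list lru, int zone_idx)
	{
		struct mem_cgroup_per_node *mz;

		/* the memcg lruvec is embedded in mem_cgroup_per_node */
		mz = container_of(lruvec, struct mem_cgroup_per_node, lruvec);
		return mz->lru_zone_size[zone_idx][lru];
	}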




* Re: [PATCH v2] mm/vmscan: get number of pages on the LRU list in memcgroup base on lru_zone_size
  2019-09-05  7:10 [PATCH v2] mm/vmscan: get number of pages on the LRU list in memcgroup base on lru_zone_size Honglei Wang
@ 2019-10-06  0:10 ` Andrew Morton
  2019-10-07 14:28 ` Michal Hocko
  1 sibling, 0 replies; 8+ messages in thread
From: Andrew Morton @ 2019-10-06  0:10 UTC (permalink / raw)
  To: Honglei Wang; +Cc: linux-mm, vdavydov.dev, hannes, mhocko

On Thu,  5 Sep 2019 15:10:34 +0800 Honglei Wang <honglei.wang@oracle.com> wrote:

> lruvec_lru_size() is involving lruvec_page_state_local() to get the
> lru_size in the current code. It's base on lruvec_stat_local.count[]
> of mem_cgroup_per_node. This counter is updated in batch. It won't
> do charge if the number of coming pages doesn't meet the needs of
> MEMCG_CHARGE_BATCH who's defined as 32 now.
> 
> The testcase in LTP madvise09[1] fails due to small block memory is
> not charged. It creates a new memcgroup and sets up 32 MADV_FREE
> pages. Then it forks child who will introduce memory pressure in the
> memcgroup. The MADV_FREE pages are expected to be released under the
> pressure, but 32 is not more than MEMCG_CHARGE_BATCH and these pages
> won't be charged in lruvec_stat_local.count[] until some more pages
> come in to satisfy the needs of batch charging. So these MADV_FREE
> pages can't be freed in memory pressure which is a bit conflicted
> with the definition of MADV_FREE.
> 
> Getting lru_size base on lru_zone_size of mem_cgroup_per_node which
> is not updated in batch can make it a bit more accurate in similar
> scenario.

I redid the changelog somewhat:

: lruvec_lru_size() is invoking lruvec_page_state_local() to get the
: lru_size.  It's based on lruvec_stat_local.count[] of mem_cgroup_per_node.
: This counter is updated in a batched way.  It won't be charged if the
: number of incoming pages doesn't meet the needs of MEMCG_CHARGE_BATCH
: which is defined as 32.
: 
: The testcase in LTP madvise09[1] fails because small blocks of memory are
: not charged.  It creates a new memcgroup and sets up 32 MADV_FREE pages. 
: Then it forks a child who will introduce memory pressure in the memcgroup.
: The MADV_FREE pages are expected to be released under the pressure, but
: 32 is not more than MEMCG_CHARGE_BATCH and these pages won't be charged in
: lruvec_stat_local.count[] until some more pages come in to satisfy the
: needs of batch charging.  So these MADV_FREE pages can't be freed in
: memory pressure which is a bit conflicted with the definition of
: MADV_FREE.
: 
: Getting the lru_size based on lru_zone_size of mem_cgroup_per_node, which is
: not updated via batching, makes it more accurate in this scenario.
: 
: This is effectively a partial reversion of 1a61ab8038e72 ("mm: memcontrol:
: replace zone summing with lruvec_page_state()").
: 
: [1] https://github.com/linux-test-project/ltp/blob/master/testcases/kernel/syscalls/madvise/madvise09.c


> --- a/mm/vmscan.c
> +++ b/mm/vmscan.c
> @@ -354,12 +354,13 @@ unsigned long zone_reclaimable_pages(struct zone *zone)
>   */
>  unsigned long lruvec_lru_size(struct lruvec *lruvec, enum lru_list lru, int zone_idx)
>  {
> -	unsigned long lru_size;
> +	unsigned long lru_size = 0;
>  	int zid;
>  
> -	if (!mem_cgroup_disabled())
> -		lru_size = lruvec_page_state_local(lruvec, NR_LRU_BASE + lru);
> -	else
> +	if (!mem_cgroup_disabled()) {
> +		for (zid = 0; zid < MAX_NR_ZONES; zid++)
> +			lru_size += mem_cgroup_get_zone_lru_size(lruvec, lru, zid);
> +	} else
>  		lru_size = node_page_state(lruvec_pgdat(lruvec), NR_LRU_BASE + lru);
>  
>  	for (zid = zone_idx + 1; zid < MAX_NR_ZONES; zid++) {

Do we think this problem is serious enough to warrant backporting into
earlier kernels?




* Re: [PATCH v2] mm/vmscan: get number of pages on the LRU list in memcgroup base on lru_zone_size
  2019-09-05  7:10 [PATCH v2] mm/vmscan: get number of pages on the LRU list in memcgroup base on lru_zone_size Honglei Wang
  2019-10-06  0:10 ` Andrew Morton
@ 2019-10-07 14:28 ` Michal Hocko
  2019-10-08  9:34   ` Honglei Wang
  1 sibling, 1 reply; 8+ messages in thread
From: Michal Hocko @ 2019-10-07 14:28 UTC (permalink / raw)
  To: Honglei Wang; +Cc: linux-mm, vdavydov.dev, hannes

On Thu 05-09-19 15:10:34, Honglei Wang wrote:
> lruvec_lru_size() is involving lruvec_page_state_local() to get the
> lru_size in the current code. It's base on lruvec_stat_local.count[]
> of mem_cgroup_per_node. This counter is updated in batch. It won't
> do charge if the number of coming pages doesn't meet the needs of
> MEMCG_CHARGE_BATCH who's defined as 32 now.
> 
> The testcase in LTP madvise09[1] fails due to small block memory is
> not charged. It creates a new memcgroup and sets up 32 MADV_FREE
> pages. Then it forks child who will introduce memory pressure in the
> memcgroup. The MADV_FREE pages are expected to be released under the
> pressure, but 32 is not more than MEMCG_CHARGE_BATCH and these pages
> won't be charged in lruvec_stat_local.count[] until some more pages
> come in to satisfy the needs of batch charging. So these MADV_FREE
> pages can't be freed in memory pressure which is a bit conflicted
> with the definition of MADV_FREE.

The test case is simply wrong. The caching and the batch size are an
internal implementation detail. Moreover MADV_FREE is a _hint_, so all
you can say is that those pages will get freed at some point in time, but
you cannot make any assumptions about when that moment happens.

> Getting lru_size base on lru_zone_size of mem_cgroup_per_node which
> is not updated in batch can make it a bit more accurate in similar
> scenario.

What does that mean? It would be more helpful to describe the code path
which will use this more precise value and what is the effect of that.

As I've said in the previous version, I do not object to the patch
because a more precise lruvec_lru_size sounds like a nice thing as long
as we are not paying a high price for that. Just look at the global case
for mem_cgroup_disabled(). It uses node_page_state and that one is using
per-cpu accounting with regular global value refreshing IIRC.

> [1] https://github.com/linux-test-project/ltp/blob/master/testcases/kernel/syscalls/madvise/madvise09.c
> 
> Signed-off-by: Honglei Wang <honglei.wang@oracle.com>
> ---
>  mm/vmscan.c | 9 +++++----
>  1 file changed, 5 insertions(+), 4 deletions(-)
> 
> diff --git a/mm/vmscan.c b/mm/vmscan.c
> index c77d1e3761a7..c28672460868 100644
> --- a/mm/vmscan.c
> +++ b/mm/vmscan.c
> @@ -354,12 +354,13 @@ unsigned long zone_reclaimable_pages(struct zone *zone)
>   */
>  unsigned long lruvec_lru_size(struct lruvec *lruvec, enum lru_list lru, int zone_idx)
>  {
> -	unsigned long lru_size;
> +	unsigned long lru_size = 0;
>  	int zid;
>  
> -	if (!mem_cgroup_disabled())
> -		lru_size = lruvec_page_state_local(lruvec, NR_LRU_BASE + lru);
> -	else
> +	if (!mem_cgroup_disabled()) {
> +		for (zid = 0; zid < MAX_NR_ZONES; zid++)
> +			lru_size += mem_cgroup_get_zone_lru_size(lruvec, lru, zid);
> +	} else
>  		lru_size = node_page_state(lruvec_pgdat(lruvec), NR_LRU_BASE + lru);
>  
>  	for (zid = zone_idx + 1; zid < MAX_NR_ZONES; zid++) {
> -- 
> 2.17.0

-- 
Michal Hocko
SUSE Labs



* Re: [PATCH v2] mm/vmscan: get number of pages on the LRU list in memcgroup base on lru_zone_size
  2019-10-07 14:28 ` Michal Hocko
@ 2019-10-08  9:34   ` Honglei Wang
  2019-10-09 14:16     ` Michal Hocko
  0 siblings, 1 reply; 8+ messages in thread
From: Honglei Wang @ 2019-10-08  9:34 UTC (permalink / raw)
  To: Michal Hocko; +Cc: linux-mm, vdavydov.dev, hannes



On 10/7/19 10:28 PM, Michal Hocko wrote:
> On Thu 05-09-19 15:10:34, Honglei Wang wrote:
>> lruvec_lru_size() is involving lruvec_page_state_local() to get the
>> lru_size in the current code. It's base on lruvec_stat_local.count[]
>> of mem_cgroup_per_node. This counter is updated in batch. It won't
>> do charge if the number of coming pages doesn't meet the needs of
>> MEMCG_CHARGE_BATCH who's defined as 32 now.
>>
>> The testcase in LTP madvise09[1] fails due to small block memory is
>> not charged. It creates a new memcgroup and sets up 32 MADV_FREE
>> pages. Then it forks child who will introduce memory pressure in the
>> memcgroup. The MADV_FREE pages are expected to be released under the
>> pressure, but 32 is not more than MEMCG_CHARGE_BATCH and these pages
>> won't be charged in lruvec_stat_local.count[] until some more pages
>> come in to satisfy the needs of batch charging. So these MADV_FREE
>> pages can't be freed in memory pressure which is a bit conflicted
>> with the definition of MADV_FREE.
> 
> The test case is simly wrong. The caching and the batch size is an
> internal implementation detail. Moreover MADV_FREE is a _hint_ so all
> you can say is that those pages will get freed at some point in time but
> you cannot make any assumptions about when that moment happens.
> 

This is a corner case: it creates extreme memory pressure which gives the 
group no chance to satisfy the batch operation. There is only a small 
chance of hitting such a problem in a real workload -- 128K of memory is 
really small given current amounts of memory usage. I know exactly what 
you mean. The batch size is an internal implementation detail; this *test 
case* just happens to hit it as a black box.

>> Getting lru_size base on lru_zone_size of mem_cgroup_per_node which
>> is not updated in batch can make it a bit more accurate in similar
>> scenario.
> 
> What does that mean? It would be more helpful to describe the code path
> which will use this more precise value and what is the effect of that.
> 

How about we describe it like this:

Getting the lru_size based on lru_zone_size of mem_cgroup_per_node, which 
is not updated via batching, helps any related code path get a more 
precise lru size in the mem_cgroup case. This way the memory reclaim code 
won't ignore small blocks of memory (say, fewer than MEMCG_CHARGE_BATCH 
pages) on the lru list.

For this specific MADV_FREE case, the more precise lru size helps release 
the fewer-than-32 pages as expected.
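
For context, the value feeds the reclaim heuristics in mm/vmscan.c. A 
simplified sketch of that kind of consumer (modelled loosely on 
inactive_list_is_low()/get_scan_count(), not verbatim kernel code):

	/* If the batched counter still reads zero, the LRU looks empty to
	 * these checks and is never scanned, so the MADV_FREE pages sit
	 * there despite the pressure. */
	static bool inactive_list_looks_low(struct lruvec *lruvec, bool file,
					    struct scan_control *sc)
	{
		unsigned long inactive, active;

		inactive = lruvec_lru_size(lruvec, file ? LRU_INACTIVE_FILE :
					   LRU_INACTIVE_ANON, sc->reclaim_idx);
		active = lruvec_lru_size(lruvec, file ? LRU_ACTIVE_FILE :
					 LRU_ACTIVE_ANON, sc->reclaim_idx);

		/* ratio logic omitted; the point is the lru_size dependence */
		return inactive * 2 < active;
	}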

Thanks,
Honglei

> As I've said in the previous version, I do not object to the patch
> because a more precise lruvec_lru_size sounds like a nice thing as long
> as we are not paying a high price for that. Just look at the global case
> for mem_cgroup_disabled(). It uses node_page_state and that one is using
> per-cpu accounting with regular global value refreshing IIRC.
> 
>> [1] https://github.com/linux-test-project/ltp/blob/master/testcases/kernel/syscalls/madvise/madvise09.c
>>
>> Signed-off-by: Honglei Wang <honglei.wang@oracle.com>
>> ---
>>   mm/vmscan.c | 9 +++++----
>>   1 file changed, 5 insertions(+), 4 deletions(-)
>>
>> diff --git a/mm/vmscan.c b/mm/vmscan.c
>> index c77d1e3761a7..c28672460868 100644
>> --- a/mm/vmscan.c
>> +++ b/mm/vmscan.c
>> @@ -354,12 +354,13 @@ unsigned long zone_reclaimable_pages(struct zone *zone)
>>    */
>>   unsigned long lruvec_lru_size(struct lruvec *lruvec, enum lru_list lru, int zone_idx)
>>   {
>> -	unsigned long lru_size;
>> +	unsigned long lru_size = 0;
>>   	int zid;
>>   
>> -	if (!mem_cgroup_disabled())
>> -		lru_size = lruvec_page_state_local(lruvec, NR_LRU_BASE + lru);
>> -	else
>> +	if (!mem_cgroup_disabled()) {
>> +		for (zid = 0; zid < MAX_NR_ZONES; zid++)
>> +			lru_size += mem_cgroup_get_zone_lru_size(lruvec, lru, zid);
>> +	} else
>>   		lru_size = node_page_state(lruvec_pgdat(lruvec), NR_LRU_BASE + lru);
>>   
>>   	for (zid = zone_idx + 1; zid < MAX_NR_ZONES; zid++) {
>> -- 
>> 2.17.0
> 



* Re: [PATCH v2] mm/vmscan: get number of pages on the LRU list in memcgroup base on lru_zone_size
  2019-10-08  9:34   ` Honglei Wang
@ 2019-10-09 14:16     ` Michal Hocko
  2019-10-10  8:40       ` Honglei Wang
  0 siblings, 1 reply; 8+ messages in thread
From: Michal Hocko @ 2019-10-09 14:16 UTC (permalink / raw)
  To: Honglei Wang; +Cc: linux-mm, vdavydov.dev, hannes

On Tue 08-10-19 17:34:03, Honglei Wang wrote:
> How about we describe it like this:
> 
> Get the lru_size base on lru_zone_size of mem_cgroup_per_node which is not
> updated via batching can help any related code path get more precise lru
> size in mem_cgroup case. This makes memory reclaim code won't ignore small
> blocks of memory(say, less than MEMCG_CHARGE_BATCH pages) in the lru list.

I am sorry but this doesn't really explain the problem nor justify the
patch.
Let's have a look at where we are at first. lruvec_lru_size provides an
estimate of the number of pages on the given lru that qualifies for the
given zone index. Note the estimate part because that is an optimization
for the updaters path which tend to be really hot. Here we are
consistent between the global and memcg cases.

Now we can have a look at differences between the two cases. The global
LRU case relies on periodic syncing from a kworker context. This has no
guarantee on the timing and as such we cannot really rely on it to be
precise. The memcg path batches updates to MEMCG_CHARGE_BATCH (32) pages
and propagates the value up the hierarchy. There is no periodic sync-up, so
the unsynced state might stay forever if there are no new accounting events
happening.

Now, does it really matter? 32 pages should be really negligible for
normal workloads (read: those where MEMCG_CHARGE_BATCH << limits).
So we can discuss whether other usecases are really sensible. Do we really
want to support memcgs with a hard limit set to 10 pages? I would say I am
not really convinced, because I have a hard time seeing a real application
other than some artificial testing. On the other hand there is a really
non-trivial effort needed to make such usecases work - just consider all
the potential caching/batching that we do for performance reasons.

That being said, making lruvec_lru_size more precise doesn't sound like
a bad idea in general. But it comes with an additional cost which
shouldn't really matter much with the current code because it shouldn't
be used from hot paths. But is this really the case? Have you done all
the audit? Is this going to stay that way? These are important questions
to answer in the changelog to justify the change properly.

I hope this makes more sense now.
-- 
Michal Hocko
SUSE Labs



* Re: [PATCH v2] mm/vmscan: get number of pages on the LRU list in memcgroup base on lru_zone_size
  2019-10-09 14:16     ` Michal Hocko
@ 2019-10-10  8:40       ` Honglei Wang
  2019-10-10 14:33         ` Michal Hocko
  0 siblings, 1 reply; 8+ messages in thread
From: Honglei Wang @ 2019-10-10  8:40 UTC (permalink / raw)
  To: Michal Hocko; +Cc: linux-mm, vdavydov.dev, hannes



On 10/9/19 10:16 PM, Michal Hocko wrote:
> On Tue 08-10-19 17:34:03, Honglei Wang wrote:
>> How about we describe it like this:
>>
>> Get the lru_size base on lru_zone_size of mem_cgroup_per_node which is not
>> updated via batching can help any related code path get more precise lru
>> size in mem_cgroup case. This makes memory reclaim code won't ignore small
>> blocks of memory(say, less than MEMCG_CHARGE_BATCH pages) in the lru list.
> 
> I am sorry but this doesn't really explain the problem nor justify the
> patch.
> Let's have a look at where we are at first. lruvec_lru_size provides an
> estimate of the number of pages on the given lru that qualifies for the
> given zone index. Note the estimate part because that is an optimization
> for the updaters path which tend to be really hot. Here we are
> consistent between the global and memcg cases.
> 
> Now we can have a look at differences between the two cases. The global
> LRU case relies on periodic syncing from a kworker context. This has no
> guarantee on the timing and as such we cannot really rely on it to be
> precise. Memcg path batches updates to MEMCG_CHARGE_BATCH (32) pages
> and propages the value up the hierarchy. There is no periodic sync up so
> the unsynced case might stay for ever if there are no new accounting events
> happening.
> 
> Now, does it really matter? 32 pages should be really negligible to
> normal workloads (read to those where MEMCG_CHARGE_BATCH << limits).
> So we can talk whether other usecases are really sensible. Do we really
> want to support memcgs with hard limit set to 10 pages? I would say I am
> not really convinced because I have hard time to see real application
> other than some artificial testing. On the other hand there is really
> non trivial effort to make such usecases to work - just consider all
> potential caching/batching that we do for performance reasons.
> 

Thanks for the detailed explanation, Michal. Yes, I didn't care about 
this kind of testing until the QA guys came to me and said the LTP 
testcase doesn't work as expected while the same test passed on older 
kernels. I recognize there are some users whose job is doing functional 
verification on Linux. It might confuse them that the same test case 
fails on the latest kernel, and they don't know kernel internals such as 
the details of batch accounting. They just want to use a few pages of 
memory to verify the memory feature, and there is no 32-page limitation 
mentioned in any documentation...

I explained the batch accounting and MEMCG_CHARGE_BATCH stuff to my QA 
mate and clarified that it's not a kernel bug. But on the other hand, the 
question is, is it necessary for us to cater to these users?

> That being said, making lruvec_lru_size more precise doesn't sound like
> a bad idea in general. But it comes with an additional cost which
> shouldn't really matter much with the current code because it shouldn't
> be used from hot paths. But is this really the case? Have you done all
> the audit? Is this going to stay that way? These are important questions
> to answer in the changelog to justify the change properly.
> 
> I hope this makes more sense now.
> 

Yes, I'll think more about these questions.

Thanks,
Honglei



* Re: [PATCH v2] mm/vmscan: get number of pages on the LRU list in memcgroup base on lru_zone_size
  2019-10-10  8:40       ` Honglei Wang
@ 2019-10-10 14:33         ` Michal Hocko
  2019-10-11  1:40           ` Honglei Wang
  0 siblings, 1 reply; 8+ messages in thread
From: Michal Hocko @ 2019-10-10 14:33 UTC (permalink / raw)
  To: Honglei Wang, Andrew Morton; +Cc: linux-mm, vdavydov.dev, hannes

On Thu 10-10-19 16:40:52, Honglei Wang wrote:
> 
> 
> On 10/9/19 10:16 PM, Michal Hocko wrote:
> > On Tue 08-10-19 17:34:03, Honglei Wang wrote:
> > > How about we describe it like this:
> > > 
> > > Get the lru_size base on lru_zone_size of mem_cgroup_per_node which is not
> > > updated via batching can help any related code path get more precise lru
> > > size in mem_cgroup case. This makes memory reclaim code won't ignore small
> > > blocks of memory(say, less than MEMCG_CHARGE_BATCH pages) in the lru list.
> > 
> > I am sorry but this doesn't really explain the problem nor justify the
> > patch.
> > Let's have a look at where we are at first. lruvec_lru_size provides an
> > estimate of the number of pages on the given lru that qualifies for the
> > given zone index. Note the estimate part because that is an optimization
> > for the updaters path which tend to be really hot. Here we are
> > consistent between the global and memcg cases.
> > 
> > Now we can have a look at differences between the two cases. The global
> > LRU case relies on periodic syncing from a kworker context. This has no
> > guarantee on the timing and as such we cannot really rely on it to be
> > precise. Memcg path batches updates to MEMCG_CHARGE_BATCH (32) pages
> > and propages the value up the hierarchy. There is no periodic sync up so
> > the unsynced case might stay for ever if there are no new accounting events
> > happening.
> > 
> > Now, does it really matter? 32 pages should be really negligible to
> > normal workloads (read to those where MEMCG_CHARGE_BATCH << limits).
> > So we can talk whether other usecases are really sensible. Do we really
> > want to support memcgs with hard limit set to 10 pages? I would say I am
> > not really convinced because I have hard time to see real application
> > other than some artificial testing. On the other hand there is really
> > non trivial effort to make such usecases to work - just consider all
> > potential caching/batching that we do for performance reasons.
> > 
> 
> Thanks for the detailed explanation, Michal. Yes, I didn't care about such
> kind of testing until QA guys got me and said the ltp testcase don't work as
> expect and same test passed in older kernel. I recognize there are some
> users whose job is doing functional verification on Linux. It might make
> them confused that same test case fail on latest kernel. And they don't know
> kernel internal such as the details of batch accounting. They just want to
> use several pages memory to verify the usage of memory feature and there is
> no 32 pages limitation mentioned in any documentations...
> 
> I explain stuff of batch accounting and MEMCG_CHARGE_BATCH to QA mate and
> clarify it's not a kernel bug. But on the other hand, the question is, is it
> necessary for us to cater to these users?

I would just fix the LTP test. Coincidentally I have discussed the very
same thing with our QA guys just recently and the conclusion was to
update the LTP test to use a larger madvised memory area. I haven't checked
whether this is upstream in LTP already. But please talk to the LTP people.
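
The change on the test side is essentially to size the madvised area
well past the batch size, along these lines (illustrative only; the
actual LTP fix may look different):

	#include <unistd.h>

	/* Use many times the 32-page batch so pending per-cpu deltas cannot
	 * hide the whole area from the memcg LRU counters. The exact number
	 * here is an assumption for the example. */
	#define NR_MADV_PAGES	512

	static inline size_t madv_area_size(void)
	{
		return NR_MADV_PAGES * (size_t)sysconf(_SC_PAGESIZE);
	}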

If there is no other reason to change the kernel code but the failing
LTP test that doesn't really represent any real life scenario then I
would simply keep the code as is. Andrew, could you drop it from the
mmotm tree please?
-- 
Michal Hocko
SUSE Labs



* Re: [PATCH v2] mm/vmscan: get number of pages on the LRU list in memcgroup base on lru_zone_size
  2019-10-10 14:33         ` Michal Hocko
@ 2019-10-11  1:40           ` Honglei Wang
  0 siblings, 0 replies; 8+ messages in thread
From: Honglei Wang @ 2019-10-11  1:40 UTC (permalink / raw)
  To: Michal Hocko, Andrew Morton; +Cc: linux-mm, vdavydov.dev, hannes



On 10/10/19 10:33 PM, Michal Hocko wrote:

> I would just fix the LTP test. Coincidently I have discussed the very
> same thing with our QA guys just recently and the conclusion was to
> update the LTP and use a larger madvised memory area. I haven't checked
> whether this is upstream in LTP already. But please talk to LTP people.
> 
My QA colleague synced with me weeks ago; they have a workaround for this 
case in LTP which is signed off by both SUSE and Oracle QA. So it's not 
a problem for LTP now. If we don't care about this from the kernel side, 
it's fine to drop it.

Thanks,
Honglei

> If there is no other reason to change the kernel code but the failing
> LTP test that doesn't really represent any real life scenario then I
> would simply keep the code as is. Andrew, could you drop it from the
> mmotm tree please?
> 


