From: Michal Hocko <mhocko@kernel.org>
To: Honglei Wang <honglei.wang@oracle.com>,
	Andrew Morton <akpm@linux-foundation.org>
Cc: linux-mm@kvack.org, vdavydov.dev@gmail.com, hannes@cmpxchg.org
Subject: Re: [PATCH v2] mm/vmscan: get number of pages on the LRU list in memcgroup base on lru_zone_size
Date: Thu, 10 Oct 2019 16:33:13 +0200	[thread overview]
Message-ID: <20191010143313.GJ18412@dhcp22.suse.cz> (raw)
In-Reply-To: <4dccae1b-2b34-7ff9-94c3-8814baab636e@oracle.com>

On Thu 10-10-19 16:40:52, Honglei Wang wrote:
> 
> 
> On 10/9/19 10:16 PM, Michal Hocko wrote:
> > On Tue 08-10-19 17:34:03, Honglei Wang wrote:
> > > How about we describe it like this:
> > > 
> > > Get the lru_size base on lru_zone_size of mem_cgroup_per_node which is not
> > > updated via batching can help any related code path get more precise lru
> > > size in mem_cgroup case. This makes memory reclaim code won't ignore small
> > > blocks of memory(say, less than MEMCG_CHARGE_BATCH pages) in the lru list.
> > 
> > I am sorry, but this doesn't really explain the problem or justify the
> > patch.
> > Let's have a look at where we are first. lruvec_lru_size provides an
> > estimate of the number of pages on the given lru that qualify for the
> > given zone index. Note the "estimate" part, because that is an optimization
> > for the updater paths, which tend to be really hot. Here we are
> > consistent between the global and memcg cases.
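(To put that in code - a toy, self-contained C illustration with made-up
numbers; lru_zone_size here is just an illustrative array, not the kernel's
actual per-node data structure:)

#include <stdio.h>

#define MAX_NR_ZONES 4	/* e.g. DMA, DMA32, NORMAL, MOVABLE */

/* per-zone page counts for one LRU list (e.g. inactive file) */
static unsigned long lru_zone_size[MAX_NR_ZONES] = { 100, 2000, 30000, 0 };

/* estimate of pages on this LRU that qualify for zones <= zone_idx */
static unsigned long lru_size_upto(int zone_idx)
{
	unsigned long size = 0;

	for (int zid = 0; zid <= zone_idx; zid++)
		size += lru_zone_size[zid];
	return size;
}

int main(void)
{
	/* a request limited to ZONE_NORMAL (index 2) */
	printf("eligible pages: %lu\n", lru_size_upto(2));
	return 0;
}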
> > 
> > Now we can have a look at the differences between the two cases. The global
> > LRU case relies on periodic syncing from a kworker context. This has no
> > guarantee on the timing and as such we cannot really rely on it to be
> > precise. The memcg path batches updates to MEMCG_CHARGE_BATCH (32) pages
> > and propagates the value up the hierarchy. There is no periodic sync-up, so
> > the unsynced state might stay around forever if there are no new accounting
> > events happening.
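To make the batching concrete, here is a toy userspace model - not the actual
memcg code, just the idea behind MEMCG_CHARGE_BATCH: per-cpu deltas are only
folded into the shared counter once they exceed the batch, so a reader can
miss up to batch-1 pages per cpu until the next flush.

#include <stdio.h>
#include <stdlib.h>

#define BATCH 32			/* cf. MEMCG_CHARGE_BATCH */

static long shared;			/* what the reclaim side would read */
static long percpu_delta;		/* pretend per-cpu cached delta */

static void mod_stat(long val)
{
	percpu_delta += val;
	if (labs(percpu_delta) > BATCH) {	/* flush only past the batch */
		shared += percpu_delta;
		percpu_delta = 0;
	}
}

int main(void)
{
	/* charge 10 pages: all of it stays in the per-cpu delta */
	for (int i = 0; i < 10; i++)
		mod_stat(1);
	printf("shared counter sees %ld of 10 charged pages\n", shared);
	return 0;
}

Which is why a memcg that only ever touches a handful of pages can appear to
have an empty lru.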
> > 
> > Now, does it really matter? 32 pages should be really negligible to
> > normal workloads (read: those where MEMCG_CHARGE_BATCH << limits).
> > So we can discuss whether the other use cases are really sensible. Do we
> > really want to support memcgs with a hard limit set to 10 pages? I would
> > say I am not really convinced, because I have a hard time seeing a real
> > application other than some artificial testing. On the other hand, it would
> > take a really non-trivial effort to make such use cases work - just consider
> > all the potential caching/batching that we do for performance reasons.
> > 
> 
> Thanks for the detailed explanation, Michal. Yes, I didn't care about this
> kind of testing until the QA guys came to me and said the LTP testcase
> doesn't work as expected even though the same test passed on older kernels.
> I recognize there are some users whose job is doing functional verification
> on Linux. It might confuse them that the same test case fails on the latest
> kernel, and they don't know kernel internals such as the details of batch
> accounting. They just want to use a few pages of memory to verify the memory
> feature, and there is no 32-page limitation mentioned in any documentation...
> 
> I explained the batch accounting and MEMCG_CHARGE_BATCH to my QA mate and
> clarified that it's not a kernel bug. But on the other hand, the question is:
> do we need to cater to these users?

I would just fix the LTP test. Coincidentally, I discussed the very
same thing with our QA guys just recently, and the conclusion was to
update the LTP test to use a larger madvised memory area. I haven't checked
whether this is already upstream in LTP, but please talk to the LTP people.
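For reference, the test-side fix boils down to something like the sketch
below. This is not the actual LTP patch, and MADV_FREE plus the sizes are
just assumptions for illustration: work on an area well above
MEMCG_CHARGE_BATCH (32) pages so the batched accounting noise no longer
matters.

#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
	long pagesize = sysconf(_SC_PAGESIZE);
	size_t len = 1024 * pagesize;	/* 1024 pages >> the 32 page batch */

	char *buf = mmap(NULL, len, PROT_READ | PROT_WRITE,
			 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	if (buf == MAP_FAILED)
		return 1;

	memset(buf, 1, len);		/* fault the pages in */
	madvise(buf, len, MADV_FREE);	/* mark them reclaimable */

	/* ... the test would then check memcg counters / reclaim here ... */
	munmap(buf, len);
	return 0;
}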

If there is no reason to change the kernel code other than the failing
LTP test, which doesn't really represent any real-life scenario, then I
would simply keep the code as is. Andrew, could you drop the patch from the
mmotm tree, please?
-- 
Michal Hocko
SUSE Labs



Thread overview: 8+ messages
2019-09-05  7:10 [PATCH v2] mm/vmscan: get number of pages on the LRU list in memcgroup base on lru_zone_size Honglei Wang
2019-10-06  0:10 ` Andrew Morton
2019-10-07 14:28 ` Michal Hocko
2019-10-08  9:34   ` Honglei Wang
2019-10-09 14:16     ` Michal Hocko
2019-10-10  8:40       ` Honglei Wang
2019-10-10 14:33         ` Michal Hocko [this message]
2019-10-11  1:40           ` Honglei Wang
