From: Michal Hocko <mhocko@kernel.org>
To: Leonid Moiseichuk <lmoiseichuk@magicleap.com>
Cc: svc_lmoiseichuk@magicleap.com, hannes@cmpxchg.org,
vdavydov.dev@gmail.com, tj@kernel.org, lizefan@huawei.com,
cgroups@vger.kernel.org, akpm@linux-foundation.org,
rientjes@google.com, minchan@kernel.org, vinmenon@codeaurora.org,
andriy.shevchenko@linux.intel.com, anton.vorontsov@linaro.org,
penberg@kernel.org, linux-mm@kvack.org
Subject: Re: [PATCH 0/2] memcg, vmpressure: expose vmpressure controls
Date: Tue, 14 Apr 2020 20:49:17 +0200
Message-ID: <20200414184917.GT4629@dhcp22.suse.cz>
In-Reply-To: <CAELvCDTGnpA4WBAMZjGSLTrg2-Dbb3kTmLjMTw_JLYXBdvpcxw@mail.gmail.com>

On Tue 14-04-20 12:42:44, Leonid Moiseichuk wrote:
> Thanks, Michal, for the quick response; see my answers below.
> I will update the commit message with numbers for 8 GB swapless
> devices.
>
> On Tue, Apr 14, 2020 at 7:37 AM Michal Hocko <mhocko@kernel.org> wrote:
>
> > On Mon 13-04-20 17:57:48, svc_lmoiseichuk@magicleap.com wrote:
> > > From: Leonid Moiseichuk <lmoiseichuk@magicleap.com>
> > >
> > > A small tweak to expose the vmpressure parameters to userspace without
> > > any change to the built-in logic.
> > >
> > > vmpressure is actively used (e.g. on Android) to track mm stress.
> > > The vmpressure parameters were selected empirically quite a long time
> > > ago and are not always suitable for modern memory configurations.
> >
> > This needs much more detail. Why is it not suitable? What are the usual
> > numbers you need to set for it to work properly? Why wouldn't those be
> > generally applicable?
> >
> As far as I can see, the numbers vmpressure uses are closer to the RSS of
> userspace processes than to overall memory utilization.
> The default calibration of memory.pressure_level_medium at 60% makes an
> 8 GB device hit the medium threshold when RSS utilization reaches ~5 GB,
> which is a bit too early; I observe it happening immediately after boot.
> A reasonable level would be in the 70-80% range, depending on the SW
> preloaded on your device.
I am not sure I follow. The levels are based on reclaim ineffectivity, not
on overall memory utilization. So it only takes a reclaim effectivity of
40% to trigger the medium level. While you are right that the threshold
for the event is pretty arbitrary, I would like to hear why that doesn't
work in your environment. It shouldn't really depend on the amount of
memory, as this is a percentage, right?
> From another point of view, a memory.pressure_level_critical set to 95%
> may never trigger, because at that level the OOM killer has already
> started killing processes, and in some cases that is even worse than the
> now-removed Android low memory killer. For such cases it makes sense to
> shift the threshold down to 85-90%, so the device handles low-memory
> situations reliably instead of relying only on oom_score_adj hints.
>
> The next important parameter worth tweaking is memory.pressure_window,
> which it makes sense to double in order to reduce the number of userspace
> activations and save some power by lowering sensitivity.
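[A simplified model of how the window gates notifications; the real
accounting in mm/vmpressure.c accumulates scanned pages until the window
is crossed, and only then evaluates a level. The batch sizes below are
illustrative, not measured:]

```python
# Simplified model of memory.pressure_window: vmpressure accumulates
# scanned/reclaimed counts and only evaluates (and possibly notifies
# userspace) once the scanned total crosses the window, in pages.
# Doubling the window roughly halves the wakeup rate.

def events_fired(scan_batches, window):
    scanned = reclaimed = events = 0
    for s, r in scan_batches:
        scanned += s
        reclaimed += r
        if scanned >= window:
            events += 1              # point where userspace is notified
            scanned = reclaimed = 0
    return events

batches = [(128, 60)] * 16           # 16 reclaim passes, 128 pages each
print(events_fired(batches, window=512))   # 4 wakeups
print(events_fired(batches, window=1024))  # 2 wakeups
```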
Could you be more specific, please?
> For 12 and 16 GB devices the situation will be similar but worse: based
> on the current settings, they will hit the medium level while ~5 or
> 6.5 GB of memory is still free.
>
>
> >
> > Anyway, I have to confess I am not a big fan of this. vmpressure turned
> > out to be a very weak interface for measuring memory pressure. Not only
> > is it not NUMA-aware, which makes it unusable on many systems, it also,
> > in practice, reports data way too late.
> >
> > Btw. why don't you use /proc/pressure/memory, or its memcg counterpart,
> > to measure the memory pressure in the first place?
> >
>
> According to our checks, PSI produces numbers only when swap is enabled,
> e.g. on a swapless device at 75% RAM utilization:
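[The readings themselves are elided in the archive. For reference,
/proc/pressure/memory lines follow the format documented in
Documentation/accounting/psi.rst; a small parsing sketch with an
illustrative sample line:]

```python
# Minimal parser for one line of /proc/pressure/memory (format per
# Documentation/accounting/psi.rst). The sample values are illustrative.

def parse_psi(line: str) -> dict:
    kind, rest = line.split(None, 1)          # "some" or "full"
    fields = dict(kv.split("=") for kv in rest.split())
    return {
        "kind": kind,
        "avg10": float(fields["avg10"]),      # % of time stalled, 10s avg
        "avg60": float(fields["avg60"]),
        "avg300": float(fields["avg300"]),
        "total": int(fields["total"]),        # cumulative stall time, us
    }

sample = "some avg10=0.00 avg60=0.11 avg300=0.06 total=512345"
print(parse_psi(sample)["avg60"])  # 0.11
```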
I believe you should discuss that with people familiar with the PSI
internals (Johannes is already in the CC list).
--
Michal Hocko
SUSE Labs