linux-mm.kvack.org archive mirror
From: Leonid Moiseichuk <lmoiseichuk@magicleap.com>
To: Michal Hocko <mhocko@kernel.org>
Cc: svc lmoiseichuk <svc_lmoiseichuk@magicleap.com>,
	Johannes Weiner <hannes@cmpxchg.org>,
	vdavydov.dev@gmail.com, tj@kernel.org, lizefan@huawei.com,
	cgroups@vger.kernel.org, akpm@linux-foundation.org,
	rientjes@google.com, minchan@kernel.org, vinmenon@codeaurora.org,
	andriy.shevchenko@linux.intel.com, anton.vorontsov@linaro.org,
	penberg@kernel.org, linux-mm@kvack.org
Subject: Re: [PATCH 0/2] memcg, vmpressure: expose vmpressure controls
Date: Wed, 15 Apr 2020 08:17:42 -0400	[thread overview]
Message-ID: <CAELvCDRpVi4zjpHCw1oeY=GXf8XO2TXGUFAwztvydS27Q8L9Sw@mail.gmail.com> (raw)
In-Reply-To: <20200415075136.GY4629@dhcp22.suse.cz>


As Chris Down stated, cgroups v1 is frozen, so there will be no API changes in
the mainline kernel.
If opinions change in the future I can continue polishing this change.
For now I will focus on PSI bugs on swapless/zram-swapped devices :)

The rest of my replies are inline below.

On Wed, Apr 15, 2020 at 3:51 AM Michal Hocko <mhocko@kernel.org> wrote:

> On Tue 14-04-20 16:53:55, Leonid Moiseichuk wrote:
> > It would be nice if you could specify the exact numbers you would like to see.
>
> You are proposing an interface which allows tuning thresholds from
> userspace. Which suggests that you want to tune them. I am asking what
> kind of tuning you are using and why we cannot use those values as
> defaults in the kernel.
>

Yes, that kind of tuning is the obvious option. But parameters selected at one
moment in time may no longer be a good fit later.
Also, these patches can be applied by vendors to e.g. Android 8 or 9 kernels,
which have no PSI and are already tweaked in their own ways.
Some products stick to old kernel versions; I made the documentation a
separate change so it covers a wider set of older kernels.
The patches are transparent, tested and working fine.


> > On Tue, Apr 14, 2020 at 2:49 PM Michal Hocko <mhocko@kernel.org> wrote:
> >
> > > ....
> >
> > > > As far as I see, the numbers which vmpressure uses are closer to the
> > > > RSS of userspace processes for memory utilization.
> > > > The default calibration of memory.pressure_level_medium at 60% makes an
> > > > 8GB device hit the medium threshold when RSS utilization reaches ~5 GB,
> > > > and that is a bit too early; I observed it happening immediately after
> > > > boot. A reasonable level should be in the 70-80% range, depending on
> > > > the SW preloaded on your device.
> > >
> > > I am not sure I follow. Levels are based on the reclaim ineffectivity,
> > > not the overall memory utilization. So it takes only 40% reclaim
> > > effectivity to trigger the medium level. While you are right that the
> > > threshold for the event is pretty arbitrary, I would like to hear why
> > > that doesn't work in your environment. It shouldn't really depend on
> > > the amount of memory as this is a percentage, right?
> > >
> > It does not only depend on the amount of memory or reclaims but also on
> > what software is running.
> >
> > As I see from vmscan.c, vmpressure is activated from the various
> > shrink_node() calls or, basically, from do_try_to_free_pages().
> > To hit this state you need to somehow run short of memory for various
> > reasons, so the amount of memory plays a role here.
> > In particular, my case is heavily impacted by GPU (using CMA) consumption,
> > which can easily take gigabytes.
> > Apps can take a gigabyte as well.
> > So reclaim will be called quite often when memory is short (4K calls are
> > possible).
> >
> > A level change is only handled once the amount of scanned pages exceeds
> > the window size; 512 pages is too little, as that is only 2 MB.
> > So small slices are a source of false triggers.
> >
> > Next, pressure is computed as
> >         unsigned long scale = scanned + reclaimed;
> >         pressure = scale - (reclaimed * scale / scanned);
> >         pressure = pressure * 100 / scale;
>
> Just to make this more obvious this is essentially
>         100 * (1 - reclaimed/scanned)
>
> > So for 512 scanned pages (let's use the minimum) reclaimed has to be at
> > most 204 pages for the 60% threshold and 25 pages for 95% (critical).
> >
> > In case pressure happens (usually at 85% of memory used, hitting the
> > critical level)
>
> I still find this very confusing because the amount of used memory is
> not really important. It really only depends on the reclaim activity and
> that is either the memcg or the global reclaim. And you are getting
> critical levels only if the reclaim is failing to reclaim way too many
> pages.
>

OK, I agree from that point of view.
But on larger systems reclaim happens less often, so we could use larger
window sizes to get a better approximation of memory utilization; see the
sketch below.
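To make the window arithmetic concrete, here is a small userspace sketch of
mine that reuses the quoted formula; the 512-page window reproduces the 204/25
numbers above, while the larger window sizes are purely hypothetical examples,
not existing kernel settings:

#include <stdio.h>

/*
 * Same arithmetic as the quoted vmpressure snippet, i.e. essentially
 * 100 * (1 - reclaimed / scanned), with the kernel's integer rounding.
 */
static unsigned long calc_pressure(unsigned long scanned, unsigned long reclaimed)
{
        unsigned long scale = scanned + reclaimed;
        unsigned long pressure;

        if (reclaimed >= scanned)
                return 0;
        pressure = scale - (reclaimed * scale / scanned);
        return pressure * 100 / scale;
}

/* Largest "reclaimed" count that still reaches the given threshold. */
static unsigned long max_reclaimed_for(unsigned long window, unsigned long threshold)
{
        unsigned long reclaimed = 0;

        while (reclaimed < window && calc_pressure(window, reclaimed + 1) >= threshold)
                reclaimed++;
        return reclaimed;
}

int main(void)
{
        /* 512 pages is the value discussed above (2 MB with 4 KB pages);
         * the larger windows are hypothetical examples only. */
        unsigned long windows[] = { 512, 2048, 8192 };

        for (unsigned int i = 0; i < sizeof(windows) / sizeof(windows[0]); i++)
                printf("window %5lu: medium (60%%) needs <= %lu reclaimed, critical (95%%) <= %lu\n",
                       windows[i],
                       max_reclaimed_for(windows[i], 60),
                       max_reclaimed_for(windows[i], 95));
        return 0;
}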


>
> > I rarely see something closer to real numbers, like:
> > vmpressure_work_fn: scanned 545, reclaimed 144   <-- 73%
> > vmpressure_work_fn: scanned 16283, reclaimed 2495  <-- same session but 83%
> > Most of the time it is looping between kswapd and lmkd reclaiming
> > failures, consuming quite a high amount of cpu.
> >
> > On vmscan calls everything looks as expected:
> > [  312.410938] vmpressure: tree 0 scanned 4, reclaimed 2
> > [  312.410939] vmpressure: tree 0 scanned 120, reclaimed 62
> > [  312.410939] vmpressure: tree 1 scanned 2, reclaimed 1
> > [  312.410940] vmpressure: tree 1 scanned 120, reclaimed 62
> > [  312.410941] vmpressure: tree 0 scanned 0, reclaimed 0
>
> This looks to me more like a problem of the vmpressure implementation than
> something you want to work around by tuning.
>
Basically that is how it works: scanned pages are collected, and a worker is
then activated to update the current level. A strongly simplified sketch of
that collect-and-report pattern follows.
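This is only my own userspace illustration; the names, the fixed 512-page
window and the printf stand in for the real mm/vmpressure.c machinery, which
does the reporting from a scheduled work item:

#include <stdio.h>

#define PRESSURE_WINDOW 512     /* pages to accumulate before recomputing */

struct pressure_state {
        unsigned long scanned;          /* pages scanned since last report */
        unsigned long reclaimed;        /* pages reclaimed since last report */
};

/* 100 * (1 - reclaimed / scanned), as in the quoted formula. */
static unsigned long calc_pressure(unsigned long scanned, unsigned long reclaimed)
{
        unsigned long scale = scanned + reclaimed;

        if (reclaimed >= scanned)
                return 0;
        return (scale - reclaimed * scale / scanned) * 100 / scale;
}

/*
 * Called from the (simulated) reclaim path with per-invocation counters.
 * Nothing is reported until a whole window worth of pages has been scanned;
 * then one level update is emitted and the counters are reset.
 */
static void pressure_account(struct pressure_state *st,
                             unsigned long scanned, unsigned long reclaimed)
{
        st->scanned += scanned;
        st->reclaimed += reclaimed;

        if (st->scanned < PRESSURE_WINDOW)
                return;         /* window not filled yet, no update */

        printf("pressure %lu%% (scanned %lu, reclaimed %lu)\n",
               calc_pressure(st->scanned, st->reclaimed),
               st->scanned, st->reclaimed);
        st->scanned = 0;
        st->reclaimed = 0;
}

int main(void)
{
        struct pressure_state st = { 0, 0 };

        /* Small per-call numbers like the trace above only accumulate; the
         * second call crosses the window and triggers a single report. */
        pressure_account(&st, 120, 62);
        pressure_account(&st, 400, 100);
        return 0;
}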


>
> --
> Michal Hocko
> SUSE Labs
>


-- 
With Best Wishes,
Leonid


Thread overview: 16+ messages
2020-04-13 21:57 [PATCH 0/2] memcg, vmpressure: expose vmpressure controls svc_lmoiseichuk
2020-04-13 21:57 ` [PATCH 1/2] memcg: expose vmpressure knobs svc_lmoiseichuk
2020-04-14 22:55   ` Chris Down
2020-04-14 23:00     ` Leonid Moiseichuk
2020-04-13 21:57 ` [PATCH 2/2] memcg, vmpressure: expose vmpressure controls svc_lmoiseichuk
2020-04-14 11:37 ` [PATCH 0/2] " Michal Hocko
2020-04-14 16:42   ` Leonid Moiseichuk
2020-04-14 18:49     ` Michal Hocko
2020-04-14 20:53       ` Leonid Moiseichuk
2020-04-15  7:51         ` Michal Hocko
2020-04-15 12:17           ` Leonid Moiseichuk [this message]
2020-04-15 12:28             ` Michal Hocko
2020-04-15 12:33               ` Leonid Moiseichuk
2020-04-14 19:23     ` Johannes Weiner
2020-04-14 22:12       ` Leonid Moiseichuk
2020-04-15  7:55         ` Michal Hocko
