As Chris Down stated, cgroups v1 is frozen, so there will be no API changes
in the mainline kernel.
If opinions change in the future, I can continue polishing this change.
I will focus on PSI bugs for swapless/zram-swapped devices :)

The rest is below.

On Wed, Apr 15, 2020 at 3:51 AM Michal Hocko <mhocko@kernel.org> wrote:
On Tue 14-04-20 16:53:55, Leonid Moiseichuk wrote:
> It would be nice if you can specify exact numbers you like to see.

You are proposing an interface that allows tuning thresholds from
userspace, which suggests that you want to tune them. I am asking what
kind of tuning you are using and why we cannot use those values as
defaults in the kernel.

Yes, that kind of hack is obvious, but parameters selected at one moment in
time may not stay good later.
Also, vendors can apply these patches to e.g. Android 8 or 9 devices, which
have no PSI and are tweaked in their own way; some products stick to old
kernel versions. I wrote docs in a separate change to cover a wider set of
older kernels.
The patches are transparent, tested, and working fine.


> On Tue, Apr 14, 2020 at 2:49 PM Michal Hocko <mhocko@kernel.org> wrote:
>
> > ....
>
> > > As far as I can see, the numbers vmpressure uses are closer to the RSS
> > > of userspace processes as a measure of memory utilization.
> > > The default calibration of memory.pressure_level_medium at 60% makes an
> > > 8GB device hit the memory threshold when RSS utilization reaches ~5 GB,
> > > which is a bit too early; I observed it immediately after boot. A
> > > reasonable level would be in the 70-80% range, depending on the SW
> > > preloaded on your device.
> >
> > I am not sure I follow. Levels are based on reclaim ineffectiveness, not
> > on overall memory utilization. So it takes only 40% reclaim
> > effectiveness to trigger the medium level. While you are right that the
> > threshold for the event is pretty arbitrary, I would like to hear why
> > that doesn't work in your environment. It shouldn't really depend on the
> > amount of memory, as this is a percentage, right?
> >
> It depends not only on the amount of memory or reclaims, but also on what
> software is running.
>
> As I see from vmscan.c, vmpressure is activated from various shrink_node()
> calls or, basically, from do_try_to_free_pages().
> To hit this state you need to be short of memory for some reason, so the
> amount of memory does play a role here.
> My case in particular is heavily impacted by GPU (CMA) consumption, which
> can easily take gigabytes; apps can take a gigabyte as well.
> So reclaim will be called quite often when memory runs short (4K calls are
> possible).
>
> A level change is handled only once the number of scanned pages exceeds
> the window size; 512 pages is too little, since that is only 2 MB.
> Such small slices are a source of false triggers.
>
> Next, pressure is computed as
>         unsigned long scale = scanned + reclaimed;
>         pressure = scale - (reclaimed * scale / scanned);
>         pressure = pressure * 100 / scale;

Just to make this more obvious, this is essentially
        100 * (1 - reclaimed/scanned)
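
Right. As a standalone check of that integer math (plain userspace C, not
the kernel code; the 512-page window is vmpressure_win = SWAP_CLUSTER_MAX
* 16 in mm/vmpressure.c):

#include <stdio.h>

/* Same computation as the vmpressure snippet quoted above. */
static unsigned long pressure(unsigned long scanned, unsigned long reclaimed)
{
        unsigned long scale = scanned + reclaimed;
        unsigned long p = scale - (reclaimed * scale / scanned);

        return p * 100 / scale;
}

int main(void)
{
        printf("%lu\n", pressure(512, 204));    /* prints 60 -> medium   */
        printf("%lu\n", pressure(512, 25));     /* prints 95 -> critical */
        return 0;
}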

> Or, for 512 pages (let's use the minimum), it means reclaimed must be at
> most 204 pages for the 60% threshold and at most 25 pages for 95%
> (critical).
>
> When pressure happens (usually at 85% of memory used, hitting the
> critical level)

I still find this very confusing because the amount of used memory is
not really important. It really only depends on the reclaim activity and
that is either the memcg or the global reclaim. And you are getting
critical levels only if the reclaim is failing to reclaim way too many
pages.

OK, agreed from that point of view.
But on larger systems reclaim happens less often, and we could use larger
window sizes to get a better approximation of memory utilization.
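
For illustration only, a memory-scaled window could look like the sketch
below; the divisor and bounds are invented for this example, not taken
from any posted patch:

/* Illustration only: one way to scale the vmpressure window with RAM.
 * Assumes 4K pages; the constants are made up for this example. */
static unsigned long scaled_window(unsigned long totalram_pages)
{
        unsigned long win = totalram_pages / 1024;      /* ~0.1% of RAM */

        if (win < 512)
                win = 512;              /* current vmpressure_win floor */
        if (win > 16384)
                win = 16384;            /* cap at 64 MB worth of pages  */
        return win;
}

On an 8GB device (~2M pages) this would give a 2048-page (8 MB) window
instead of 512.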
 

> I rarely see that; the numbers closer to reality look like
> vmpressure_work_fn: scanned 545, reclaimed 144   <-- 73%
> vmpressure_work_fn: scanned 16283, reclaimed 2495  <-- same session but 84%
> Most of the time it is looping between kswapd and lmkd reclaim failures,
> consuming quite a lot of CPU.
>
> At the level of individual vmscan calls everything looks as expected:
> [  312.410938] vmpressure: tree 0 scanned 4, reclaimed 2
> [  312.410939] vmpressure: tree 0 scanned 120, reclaimed 62
> [  312.410939] vmpressure: tree 1 scanned 2, reclaimed 1
> [  312.410940] vmpressure: tree 1 scanned 120, reclaimed 62
> [  312.410941] vmpressure: tree 0 scanned 0, reclaimed 0

This looks more like a problem of the vmpressure implementation than
something you want to work around by tuning, to me.

Basically, that is how it works: collect scanned pages and, once enough
accumulate, schedule a worker to update the current level.
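Roughly, simplified from mm/vmpressure.c (locking omitted; field names
differ between kernel versions):

/* Simplified paraphrase of vmpressure(), not verbatim kernel code. */
vmpr->scanned += scanned;
vmpr->reclaimed += reclaimed;
if (vmpr->scanned >= vmpressure_win)    /* 512 pages by default */
        schedule_work(&vmpr->work);     /* the work fn computes the level */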
 

--
Michal Hocko
SUSE Labs


--
With Best Wishes,
Leonid