Thanks, Michal, for the quick response; see my answers below.
I will update the commit message with the numbers for 8 GB swapless devices.

On Tue, Apr 14, 2020 at 7:37 AM Michal Hocko <mhocko@kernel.org> wrote:
On Mon 13-04-20 17:57:48, svc_lmoiseichuk@magicleap.com wrote:
> From: Leonid Moiseichuk <lmoiseichuk@magicleap.com>
>
> Small tweak to populate vmpressure parameters to userspace without
> any built-in logic change.
>
> The vmpressure is used actively (e.g. on Android) to track mm stress.
> vmpressure parameters selected empiricaly quite long time ago and not
> always suitable for modern memory configurations.

This needs much more details. Why it is not suitable? What are usual
numbers you need to set up to work properly? Why those wouldn't be
generally applicable?
As far as I can see, the numbers vmpressure operates on roughly correspond to the RSS of userspace processes, i.e. memory utilization.
The default calibration of memory.pressure_level_medium at 60% makes an 8 GB device hit the medium threshold once RSS utilization
reaches ~5 GB (60% of 8 GB), which is a bit too early; I observe it happening immediately after boot. A reasonable level would be
in the 70-80% range, depending on the software preloaded on the device.

From another point of view, memory.pressure_level_critical set to 95% may never fire, because at that level the OOM killer has already started killing processes,
which in some cases is even worse than the now-removed Android low memory killer. For such cases it makes sense to shift the threshold down to 85-90% so the device
handles low-memory situations reliably instead of relying only on oom_score_adj hints.

The next important parameter to tweak is memory.pressure_window, which makes sense to double in order to reduce the number of userspace
activations and save some power by lowering sensitivity; a sketch of all three settings together follows below.
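
To make the intent concrete, here is a minimal sketch of applying the values discussed above at init time. The knob names are the ones referred to above; the cgroup v1 memcg mount point under /sys/fs/cgroup/memory and the doubled window value (assuming the current default of 512 pages) are my assumptions, not part of the series:

#include <stdio.h>

static int write_knob(const char *path, const char *value)
{
        FILE *f = fopen(path, "w");

        if (!f)
                return -1;
        fprintf(f, "%s\n", value);
        return fclose(f);
}

int main(void)
{
        const char *base = "/sys/fs/cgroup/memory";
        char path[256];

        /* medium: 60% -> 75%, so an 8 GB device does not hit the
         * threshold right after boot */
        snprintf(path, sizeof(path), "%s/memory.pressure_level_medium", base);
        write_knob(path, "75");

        /* critical: 95% -> 90%, fires before the OOM killer takes over */
        snprintf(path, sizeof(path), "%s/memory.pressure_level_critical", base);
        write_knob(path, "90");

        /* window: doubled from the assumed default of 512 pages */
        snprintf(path, sizeof(path), "%s/memory.pressure_window", base);
        write_knob(path, "1024");

        return 0;
}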

For 12 and 16 GB devices the situation will be similar but worse: with the current settings they hit the medium level while ~5 GB or ~6.5 GB of memory is still free.

Anyway, I have to confess I am not a big fan of this. vmpressure turned
out to be a very weak interface to measure the memory pressure. Not only
it is not numa aware which makes it unusable on many systems it also
gives data way too late from the practice.

Btw. why don't you use /proc/pressure/memory resp. its memcg counterpart
to measure the memory pressure in the first place?

According to our checks, PSI produces memory numbers only when swap is enabled. For example, on a swapless device at 75% RAM utilization:
==> /proc/pressure/io <==
some avg10=0.00 avg60=1.18 avg300=1.51 total=9642648
full avg10=0.00 avg60=1.11 avg300=1.47 total=9271174

==> /proc/pressure/memory <==
some avg10=0.00 avg60=0.00 avg300=0.00 total=0
full avg10=0.00 avg60=0.00 avg300=0.00 total=0

It is probably possible to make PSI report memory pressure by generating heavy IO with swap enabled, but that is not a typical case for mobile devices.

In the swap-enabled case, memory pressure follows IO pressure at some fraction, i.e. memory is io/2 ... io/10 depending on the pattern.
A light sysbench run with swap enabled:
==> /proc/pressure/io <==
some avg10=0.00 avg60=0.00 avg300=0.11 total=155383820
full avg10=0.00 avg60=0.00 avg300=0.05 total=100516966
==> /proc/pressure/memory <==
some avg10=0.00 avg60=0.00 avg300=0.06 total=465916397
full avg10=0.00 avg60=0.00 avg300=0.00 total=368664282
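
For completeness, a userspace consumer of PSI would look roughly like the sketch below, using the standard PSI trigger interface (the 150 ms / 1 s threshold is only illustrative). On a swapless device such a trigger never fires, since the memory numbers above stay at zero:

#include <fcntl.h>
#include <poll.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
        /* wake up when "some" tasks are stalled on memory for more than
         * 150 ms within any 1 s window */
        const char trig[] = "some 150000 1000000";
        struct pollfd pfd;
        int fd = open("/proc/pressure/memory", O_RDWR | O_NONBLOCK);

        if (fd < 0 || write(fd, trig, strlen(trig) + 1) < 0) {
                perror("psi trigger");
                return 1;
        }

        pfd.fd = fd;
        pfd.events = POLLPRI;

        for (;;) {
                if (poll(&pfd, 1, -1) < 0)
                        break;
                if (pfd.revents & POLLERR)
                        break;          /* trigger went away */
                if (pfd.revents & POLLPRI)
                        printf("memory pressure event\n");
        }

        close(fd);
        return 0;
}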

Since not all devices have zram or swap enabled, it makes sense to keep the vmpressure tuning option available: vmpressure is
widely used in Android and the related issues are well understood.
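
For reference, the existing cgroup v1 notification path for vmpressure is an eventfd registered against memory.pressure_level through cgroup.event_control, roughly as in the sketch below (the mount point is an assumption; "medium" is the level discussed above):

#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <sys/eventfd.h>
#include <unistd.h>

int main(void)
{
        char line[64];
        uint64_t cnt;
        int efd = eventfd(0, 0);
        int cfd = open("/sys/fs/cgroup/memory/memory.pressure_level", O_RDONLY);
        int ecfd = open("/sys/fs/cgroup/memory/cgroup.event_control", O_WRONLY);

        if (efd < 0 || cfd < 0 || ecfd < 0) {
                perror("open");
                return 1;
        }

        /* "<event_fd> <fd of memory.pressure_level> <level>", as documented
         * in Documentation/admin-guide/cgroup-v1/memory.rst */
        snprintf(line, sizeof(line), "%d %d medium", efd, cfd);
        if (write(ecfd, line, strlen(line)) < 0) {
                perror("cgroup.event_control");
                return 1;
        }

        while (read(efd, &cnt, sizeof(cnt)) == sizeof(cnt))
                printf("vmpressure medium event\n");

        return 0;
}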


> Leonid Moiseichuk (2):
>   memcg: expose vmpressure knobs
>   memcg, vmpressure: expose vmpressure controls
>
>  .../admin-guide/cgroup-v1/memory.rst          |  12 +-
>  include/linux/vmpressure.h                    |  35 ++++++
>  mm/memcontrol.c                               | 113 ++++++++++++++++++
>  mm/vmpressure.c                               | 101 +++++++---------
>  4 files changed, 200 insertions(+), 61 deletions(-)
>
> --
> 2.17.1
>

--
Michal Hocko
SUSE Labs


--
With Best Wishes,
Leonid