From: Zhaoyang Huang <huangzhaoyang@gmail.com>
To: Johannes Weiner <hannes@cmpxchg.org>
Cc: Zhaoyang Huang <zhaoyang.huang@unisoc.com>,
	"open list:MEMORY MANAGEMENT" <linux-mm@kvack.org>,
	LKML <linux-kernel@vger.kernel.org>,
	xuewen.yan@unisoc.com, Ke Wang <ke.wang@unisoc.com>
Subject: Re: [RFC PATCH] psi : calc psi memstall time more precisely
Date: Tue, 14 Sep 2021 09:24:43 +0800	[thread overview]
Message-ID: <CAGWkznGFWE3A=QRvQQ89JhX9AG2DsH=vR58-UwZpYA1wbW74gQ@mail.gmail.com> (raw)
In-Reply-To: <YTouqsXeAGV6c5oV@cmpxchg.org>

On Thu, Sep 9, 2021 at 11:54 PM Johannes Weiner <hannes@cmpxchg.org> wrote:
>
> On Thu, Sep 09, 2021 at 08:00:24PM +0800, Huangzhaoyang wrote:
> > From: Zhaoyang Huang <zhaoyang.huang@unisoc.com>
> >
> > psi's memstall time is currently counted simply as exit - entry, which
> > ignores the task's off-CPU time. Fix it by calculating the fraction of
> > off-CPU time from the task's and the runqueue's utilization and load.
> >
> > Signed-off-by: Zhaoyang Huang <zhaoyang.huang@unisoc.com>
>
> Can you please explain what practical problem you are trying to solve?
>
> If a reclaimer gets preempted and has to wait for CPU, should that
> stall be attributed to a lack of memory? Some of it should, since page
> reclaim consumed CPU budget that would've otherwise been available for
> doing real work. The application of course may still have experienced
> a CPU wait outside of reclaim, but potentially a shorter one. Memory
> pressure can definitely increase CPU pressure (as it can IO pressure).
The preempted time mentioned here can be separated into two categories.
The first is a CFS task being preempted because it has used up its share
of the scheduling latency period (sched_latency); the second is
preemption by RT/DL tasks and IRQs. IMO, the former is reasonable to
count as stall time, while the latter is NOT.
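
As a rough illustration of that split (the structure and the helper
below are made up for this mail, not taken from the patch): only the
first category would be charged.

#include <linux/types.h>

/*
 * Hypothetical breakdown of where a reclaimer's wall-clock time went.
 * Category 1 (waiting behind other CFS tasks) is charged to memstall,
 * category 2 (time stolen by RT/DL tasks and IRQs) is not.
 */
struct reclaim_delay {
	u64 running_ns;    /* on-CPU time spent in reclaim          */
	u64 cfs_wait_ns;   /* runqueue wait against other CFS tasks */
	u64 rt_dl_irq_ns;  /* preemption by RT/DL tasks and IRQs    */
};

static u64 memstall_ns(const struct reclaim_delay *d)
{
	return d->running_ns + d->cfs_wait_ns;
}
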
>
> Proportional and transitive accounting - how much of total CPU load is
> page reclaim, and thus how much of each runq wait is due to memory
> pressure - would give more precise answers. But generally discounting
> off-CPU time in a stall is not any more correct than including it all.
>
> This is doable, but I think there needs to be better justification for
> providing this level of precision, since it comes with code complexity
> that has performance and maintenance overhead.
The rq's load-tracking utilization provides an easy way to compute that
proportion. A new version of the patch has been sent out which mainly
handles the second scenario described above; statistics on the resulting
precision are included with it.
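
A minimal sketch of that discount, assuming the combined RT/DL/IRQ
utilization and the CPU capacity can be read from the rq's load-tracking
signals (the helper name and parameters below are placeholders, not the
code in the new patch):

#include <linux/types.h>
#include <linux/math64.h>

/*
 * Scale the raw exit - entry delta by the share of the CPU that was
 * left to CFS, i.e. discount the fraction consumed by RT/DL/IRQ.
 * Overflow of raw_ns * (cpu_cap - rt_dl_irq_util) is ignored for the
 * sake of the example.
 */
static u64 scale_memstall(u64 raw_ns, unsigned long rt_dl_irq_util,
			  unsigned long cpu_cap)
{
	if (!cpu_cap || rt_dl_irq_util >= cpu_cap)
		return 0;

	return div64_u64(raw_ns * (cpu_cap - rt_dl_irq_util), cpu_cap);
}

E.g. a 10ms raw memstall window on a CPU where RT/DL/IRQ account for a
quarter of the capacity would be charged as 7.5ms.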

Thread overview: 4 messages
2021-09-09 12:00 [RFC PATCH] psi : calc psi memstall time more precisely Huangzhaoyang
2021-09-09 13:07 ` Vlastimil Babka
2021-09-09 15:56 ` Johannes Weiner
2021-09-14  1:24   ` Zhaoyang Huang [this message]
