From: "hasegawa-hitomi@fujitsu.com" <hasegawa-hitomi@fujitsu.com>
To: "'fweisbec@gmail.com'" <fweisbec@gmail.com>,
"'tglx@linutronix.de'" <tglx@linutronix.de>,
"'mingo@kernel.org'" <mingo@kernel.org>,
"'peterz@infradead.org'" <peterz@infradead.org>,
"'juri.lelli@redhat.com'" <juri.lelli@redhat.com>,
"'vincent.guittot@linaro.org'" <vincent.guittot@linaro.org>
Cc: "'dietmar.eggemann@arm.com'" <dietmar.eggemann@arm.com>,
"'rostedt@goodmis.org'" <rostedt@goodmis.org>,
"'bsegall@google.com'" <bsegall@google.com>,
"'mgorman@suse.de'" <mgorman@suse.de>,
"'bristot@redhat.com'" <bristot@redhat.com>,
"'linux-kernel@vger.kernel.org'" <linux-kernel@vger.kernel.org>
Subject: Utime and stime are less when getrusage (RUSAGE_THREAD) is executed on a tickless CPU.
Date: Wed, 12 May 2021 03:28:02 +0000
Message-ID: <OSBPR01MB21837C8931D90AE55AF4A955EB529@OSBPR01MB2183.jpnprd01.prod.outlook.com>
Hello.
I found that when I call getrusage(RUSAGE_THREAD) on a tickless (nohz_full) CPU, the utime and stime I get back are smaller than the CPU time actually consumed, unlike when I call getrusage(RUSAGE_SELF) on a single-threaded process.
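For reference, this is roughly how I observe it. Below is a minimal userspace sketch (not a polished test case); it assumes CPU 1 is configured as nohz_full, and the CPU number and spin time are just examples:

#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>
#include <time.h>
#include <sys/resource.h>

int main(void)
{
	cpu_set_t set;
	struct rusage ru;
	struct timespec cpu_ts, start, now;
	double rusage_sec, clock_sec;

	/* Pin ourselves to the (assumed) nohz_full CPU. */
	CPU_ZERO(&set);
	CPU_SET(1, &set);
	sched_setaffinity(0, sizeof(set), &set);

	/* Burn CPU for about two seconds. */
	clock_gettime(CLOCK_MONOTONIC, &start);
	do {
		clock_gettime(CLOCK_MONOTONIC, &now);
	} while (now.tv_sec - start.tv_sec < 2);

	getrusage(RUSAGE_THREAD, &ru);
	clock_gettime(CLOCK_THREAD_CPUTIME_ID, &cpu_ts);

	rusage_sec = ru.ru_utime.tv_sec + ru.ru_utime.tv_usec / 1e6 +
		     ru.ru_stime.tv_sec + ru.ru_stime.tv_usec / 1e6;
	clock_sec = cpu_ts.tv_sec + cpu_ts.tv_nsec / 1e9;

	/* On the nohz_full CPU the rusage sum comes out noticeably smaller. */
	printf("rusage utime+stime: %f s, CLOCK_THREAD_CPUTIME_ID: %f s\n",
	       rusage_sec, clock_sec);
	return 0;
}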
This problem seems to be caused by se.sum_exec_runtime not being updated just before the values are read from 'current'.
In the current implementation, task_cputime_adjusted() calls task_cputime() to obtain the utime and stime of 'current', then calls cputime_adjust() to scale them so that their sum equals cputime.sum_exec_runtime. On a tickless CPU, sum_exec_runtime is not updated periodically, so the scaled values fall short of the time actually consumed.
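To illustrate the scaling, here is a rough userspace model of what cputime_adjust() does with those values (an illustration only, not the kernel code; the real function also keeps results monotonic via prev_cputime). If sum_exec_runtime ('rtime' below) is stale, the reported utime and stime shrink with it:

#include <stdio.h>
#include <stdint.h>

/* utime/stime are the sampled values from task_cputime(); rtime stands for
 * se.sum_exec_runtime. The samples are rescaled so that they sum to rtime. */
static void adjust(uint64_t utime, uint64_t stime, uint64_t rtime,
		   uint64_t *ut, uint64_t *st)
{
	uint64_t total = utime + stime;

	if (total == 0) {
		*ut = 0;		/* no samples: attribute everything to stime */
		*st = rtime;
		return;
	}
	/* Scale stime by rtime/total and give the remainder to utime. */
	*st = (uint64_t)((double)stime * rtime / total);
	*ut = rtime - *st;
}

int main(void)
{
	uint64_t ut, st;

	/* Samples say 3s user + 1s sys, but rtime is a stale 2s: the values
	 * reported back sum to only 2s even though 4s were really consumed. */
	adjust(3000000000ULL, 1000000000ULL, 2000000000ULL, &ut, &st);
	printf("adjusted utime=%llu ns, stime=%llu ns\n",
	       (unsigned long long)ut, (unsigned long long)st);
	return 0;
}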
Therefore, I think se.sum_exec_runtime should be updated just before the values are read from 'current' (as is done for the getrusage() cases other than RUSAGE_THREAD). I'm thinking of the following change.
@@ void getrusage(struct task_struct *p, int who, struct rusage *r)
 	if (who == RUSAGE_THREAD) {
+		task_sched_runtime(current);
 		task_cputime_adjusted(current, &utime, &stime);
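My understanding is that task_sched_runtime() takes the runqueue lock and, for a task that is currently running, calls update_curr() through its scheduling class, so se.sum_exec_runtime is brought up to date before task_cputime_adjusted() scales utime and stime against it.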
Is there any possible problem with this?
Thanks.
Hitomi Hasegawa
Thread overview: 16+ messages
2021-05-12 3:28 hasegawa-hitomi [this message]
2021-05-18 7:59 ` Utime and stime are less when getrusage (RUSAGE_THREAD) is executed on a tickless CPU hasegawa-hitomi
2021-05-18 8:23 ` Peter Zijlstra
2021-05-19 6:30 ` hasegawa-hitomi
2021-05-19 9:24 ` Peter Zijlstra
2021-05-19 9:28 ` Peter Zijlstra
2021-05-21 6:40 ` hasegawa-hitomi
2021-05-21 8:41 ` Mel Gorman
2021-06-16 2:27 ` hasegawa-hitomi
2021-06-16 12:31 ` Frederic Weisbecker
2021-06-16 12:54 ` Frederic Weisbecker
2021-06-22 6:49 ` hasegawa-hitomi
2021-06-28 2:36 ` hasegawa-hitomi
2021-06-28 14:13 ` Frederic Weisbecker
2021-07-21 8:24 ` hasegawa-hitomi
2021-08-31 4:18 ` hasegawa-hitomi