From mboxrd@z Thu Jan 1 00:00:00 1970
Return-Path:
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand
	id S964823AbcIEQwY (ORCPT );
	Mon, 5 Sep 2016 12:52:24 -0400
Received: from mail.linuxfoundation.org ([140.211.169.12]:52734 "EHLO
	mail.linuxfoundation.org" rhost-flags-OK-OK-OK-OK) by vger.kernel.org
	with ESMTP id S935019AbcIEQwU (ORCPT );
	Mon, 5 Sep 2016 12:52:20 -0400
From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman, stable@vger.kernel.org, Mike Galbraith,
	"Peter Zijlstra (Intel)", Frederic Weisbecker, Fredrik Markstrom,
	Linus Torvalds, Paolo Bonzini, Radim, Rik van Riel,
	Stephane Eranian, Thomas Gleixner, Vince Weaver, Wanpeng Li,
	Ingo Molnar
Subject: [PATCH 4.7 074/143] sched/cputime: Fix NO_HZ_FULL getrusage() monotonicity regression
Date: Mon, 5 Sep 2016 18:44:10 +0200
Message-Id: <20160905164433.741813740@linuxfoundation.org>
X-Mailer: git-send-email 2.9.3
In-Reply-To: <20160905164430.593075551@linuxfoundation.org>
References: <20160905164430.593075551@linuxfoundation.org>
User-Agent: quilt/0.64
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Sender: linux-kernel-owner@vger.kernel.org
List-ID:
X-Mailing-List: linux-kernel@vger.kernel.org

4.7-stable review patch.  If anyone has any objections, please let me know.

------------------

From: Peter Zijlstra

commit 173be9a14f7b2e901cf77c18b1aafd4d672e9d9e upstream.

Mike reports:

 Roughly 10% of the time, ltp testcase getrusage04 fails:

   getrusage04    0  TINFO  :  Expected timers granularity is 4000 us
   getrusage04    0  TINFO  :  Using 1 as multiply factor for max [us]time increment (1000+4000us)!
   getrusage04    0  TINFO  :  utime:           0us; stime:         179us
   getrusage04    0  TINFO  :  utime:        3751us; stime:           0us
   getrusage04    1  TFAIL  :  getrusage04.c:133: stime increased > 5000us:

And tracked it down to the case where the task simply doesn't get
_any_ [us]time ticks.

Update the code to assume all rtime is utime when we lack information,
thus ensuring a task that elides the tick gets time accounted.

Reported-by: Mike Galbraith
Tested-by: Mike Galbraith
Signed-off-by: Peter Zijlstra (Intel)
Cc: Frederic Weisbecker
Cc: Fredrik Markstrom
Cc: Linus Torvalds
Cc: Paolo Bonzini
Cc: Peter Zijlstra
Cc: Radim
Cc: Rik van Riel
Cc: Stephane Eranian
Cc: Thomas Gleixner
Cc: Vince Weaver
Cc: Wanpeng Li
Fixes: 9d7fb0427648 ("sched/cputime: Guarantee stime + utime == rtime")
Signed-off-by: Ingo Molnar
Signed-off-by: Greg Kroah-Hartman

---
 kernel/sched/cputime.c |   15 ++++++++++-----
 1 file changed, 10 insertions(+), 5 deletions(-)

--- a/kernel/sched/cputime.c
+++ b/kernel/sched/cputime.c
@@ -603,19 +603,25 @@ static void cputime_adjust(struct task_c
 	stime = curr->stime;
 	utime = curr->utime;
 
-	if (utime == 0) {
-		stime = rtime;
+	/*
+	 * If either stime or both stime and utime are 0, assume all runtime is
+	 * userspace. Once a task gets some ticks, the monotonicy code at
+	 * 'update' will ensure things converge to the observed ratio.
+	 */
+	if (stime == 0) {
+		utime = rtime;
 		goto update;
 	}
 
-	if (stime == 0) {
-		utime = rtime;
+	if (utime == 0) {
+		stime = rtime;
 		goto update;
 	}
 
 	stime = scale_stime((__force u64)stime, (__force u64)rtime,
 			    (__force u64)(stime + utime));
 
+update:
 	/*
 	 * Make sure stime doesn't go backwards; this preserves monotonicity
 	 * for utime because rtime is monotonic.
@@ -638,7 +644,6 @@ static void cputime_adjust(struct task_c
 		stime = rtime - utime;
 	}
 
-update:
 	prev->stime = stime;
 	prev->utime = utime;
 out:
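
[Editor's note, not part of the upstream patch: below is a minimal userspace sketch of the
monotonicity property the Subject refers to. It spins on the CPU, samples
getrusage(RUSAGE_SELF) and checks that ru_utime/ru_stime never go backwards between samples,
loosely modeled on what the ltp getrusage04 failure quoted above observes. The loop counts
and the helper tv_to_us() are illustrative assumptions, not taken from the ltp source.]

#include <stdio.h>
#include <sys/resource.h>
#include <sys/time.h>

/* Convert a struct timeval to microseconds for easy comparison. */
static long long tv_to_us(const struct timeval *tv)
{
	return (long long)tv->tv_sec * 1000000LL + tv->tv_usec;
}

int main(void)
{
	struct rusage ru;
	long long prev_utime = 0, prev_stime = 0;
	volatile unsigned long spin = 0;

	for (int i = 0; i < 100000; i++) {
		/* Burn a little CPU so runtime keeps advancing between samples. */
		for (int j = 0; j < 10000; j++)
			spin++;

		if (getrusage(RUSAGE_SELF, &ru)) {
			perror("getrusage");
			return 1;
		}

		long long utime = tv_to_us(&ru.ru_utime);
		long long stime = tv_to_us(&ru.ru_stime);

		/* Neither value may go backwards from one sample to the next. */
		if (utime < prev_utime || stime < prev_stime) {
			fprintf(stderr,
				"non-monotonic: utime %lld -> %lld, stime %lld -> %lld\n",
				prev_utime, utime, prev_stime, stime);
			return 1;
		}
		prev_utime = utime;
		prev_stime = stime;
	}
	printf("utime/stime stayed monotonic across samples\n");
	return 0;
}

[The commit message above describes the case this exercises: a task that gets no [us]time
ticks at all, for which cputime_adjust() must still account the observed runtime sanely.]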