From mboxrd@z Thu Jan 1 00:00:00 1970
Return-Path:
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand
	id S1751484AbdA1MM7 (ORCPT );
	Sat, 28 Jan 2017 07:12:59 -0500
Received: from mx1.redhat.com ([209.132.183.28]:37918 "EHLO mx1.redhat.com"
	rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP
	id S1751130AbdA1MMt (ORCPT );
	Sat, 28 Jan 2017 07:12:49 -0500
Date: Sat, 28 Jan 2017 12:57:40 +0100
From: Stanislaw Gruszka
To: Frederic Weisbecker, Peter Zijlstra
Cc: LKML, Tony Luck, Wanpeng Li, Michael Ellerman, Heiko Carstens,
	Benjamin Herrenschmidt, Thomas Gleixner, Paul Mackerras,
	Ingo Molnar, Fenghua Yu, Rik van Riel, Martin Schwidefsky
Subject: Re: [PATCH 08/37] cputime: Convert task/group cputime to nsecs
Message-ID: <20170128115740.GA688@redhat.com>
References: <1485109213-8561-1-git-send-email-fweisbec@gmail.com>
	<1485109213-8561-9-git-send-email-fweisbec@gmail.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <1485109213-8561-9-git-send-email-fweisbec@gmail.com>
User-Agent: Mutt/1.5.23 (2014-03-12)
X-Greylist: Sender IP whitelisted, not delayed by milter-greylist-4.5.16
	(mx1.redhat.com [10.5.110.29]); Sat, 28 Jan 2017 12:02:26 +0000 (UTC)
Sender: linux-kernel-owner@vger.kernel.org
List-ID:
X-Mailing-List: linux-kernel@vger.kernel.org

Hi Frederic, and sorry for the late comment.

On Sun, Jan 22, 2017 at 07:19:44PM +0100, Frederic Weisbecker wrote:
> Now that most cputime readers use the transition API which return the
> task cputime in old style cputime_t, we can safely store the cputime in
> nsecs. This will eventually make cputime statistics less opaque and more
> granular. Back and forth convertions between cputime_t and nsecs in order
> to deal with cputime_t random granularity won't be needed anymore.
> -	cputime_t utime;
> -	cputime_t stime;
> +	u64 utime;
> +	u64 stime;
>  	unsigned long long sum_exec_runtime;
>  };

> @@ -134,7 +134,7 @@ void account_user_time(struct task_struct *p, cputime_t cputime)
>  	int index;
>  
>  	/* Add user time to process. */
> -	p->utime += cputime;
> +	p->utime += cputime_to_nsecs(cputime);
>  	account_group_user_time(p, cputime);

> +void task_cputime_adjusted(struct task_struct *p, u64 *ut, u64 *st)
>  {
>  	*ut = p->utime;
>  	*st = p->stime;
>  }

On 32-bit architectures a 64-bit store/load is not atomic, so if left
unprotected, 64-bit variables can be mangled. I do not see any protection
(lock) between the utime/stime store and load in this patch, and it seems
the {u,s}time store and load can be performed at the same time.

Though the problem is very, very improbable, it can still happen, at least
theoretically, when the lower and upper 32 bits change at the same time,
i.e. when the process {u,s}time comes near a multiple of 2^32 nsec
(approx. 4 sec) and the 64-bit {u,s}time is stored and loaded at the same
time on different CPUs. As said, this is a very improbable situation, but
it could eventually happen on long-lived processes.

BTW, we already have a similar problem with sum_exec_runtime. I posted
some patches to solve it, but none of them was good:

- https://lkml.org/lkml/2016/9/1/172
  This one slows down the scheduler hot paths and Peter hates it.

- https://lkml.org/lkml/2016/9/6/305
  This one was fine for Peter, but I dislike it for taking
  task_rq_lock(), so I did not push it forward.

I am considering fixing the possible mangling of sum_exec_runtime by
using prev_sum_exec_runtime:

u64 read_sum_exec_runtime(struct task_struct *t)
{
	u64 ns, prev_ns;

	do {
		prev_ns = READ_ONCE(t->se.prev_sum_exec_runtime);
		ns = READ_ONCE(t->se.sum_exec_runtime);
	} while (ns < prev_ns || ns > (prev_ns + U32_MAX));

	return ns;
}

This should work based on the fact that prev_sum_exec_runtime and
sum_exec_runtime are not modified and stored at the same time, so only
one of those variables can be mangled.
Though I need to think about the correctness of that a bit more.

Stanislaw