From mboxrd@z Thu Jan  1 00:00:00 1970
Return-Path:
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand
	id S1753125AbbLKNgV (ORCPT );
	Fri, 11 Dec 2015 08:36:21 -0500
Received: from bombadil.infradead.org ([198.137.202.9]:53897 "EHLO
	bombadil.infradead.org" rhost-flags-OK-OK-OK-OK) by vger.kernel.org
	with ESMTP id S1751961AbbLKNgS (ORCPT );
	Fri, 11 Dec 2015 08:36:18 -0500
Date: Fri, 11 Dec 2015 14:36:12 +0100
From: Peter Zijlstra
To: Andrey Ryabinin
Cc: mingo@redhat.com, linux-kernel@vger.kernel.org, yuyang.du@intel.com,
	Morten Rasmussen, Paul Turner, Ben Segall
Subject: Re: [PATCH] sched/fair: fix mul overflow on 32-bit systems
Message-ID: <20151211133612.GG6373@twins.programming.kicks-ass.net>
References: <1449838518-26543-1-git-send-email-aryabinin@virtuozzo.com>
	<20151211132551.GO6356@twins.programming.kicks-ass.net>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20151211132551.GO6356@twins.programming.kicks-ass.net>
User-Agent: Mutt/1.5.21 (2012-12-30)
Sender: linux-kernel-owner@vger.kernel.org
List-ID:
X-Mailing-List: linux-kernel@vger.kernel.org

On Fri, Dec 11, 2015 at 02:25:51PM +0100, Peter Zijlstra wrote:
> On Fri, Dec 11, 2015 at 03:55:18PM +0300, Andrey Ryabinin wrote:
> > Make 'r' a 64-bit type to avoid overflow in 'r * LOAD_AVG_MAX'
> > on 32-bit systems:
> >
> > 	UBSAN: Undefined behaviour in kernel/sched/fair.c:2785:18
> > 	signed integer overflow:
> > 	87950 * 47742 cannot be represented in type 'int'
> >
> > Fixes: 9d89c257dfb9 ("sched/fair: Rewrite runnable load and utilization average tracking")
> > Signed-off-by: Andrey Ryabinin
> > ---
> >  kernel/sched/fair.c | 4 ++--
> >  1 file changed, 2 insertions(+), 2 deletions(-)
> >
> > diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> > index e3266eb..733f0b8 100644
> > --- a/kernel/sched/fair.c
> > +++ b/kernel/sched/fair.c
> > @@ -2780,14 +2780,14 @@ static inline int update_cfs_rq_load_avg(u64 now, struct cfs_rq *cfs_rq)
> >  	int decayed, removed = 0;
> >
> >  	if (atomic_long_read(&cfs_rq->removed_load_avg)) {
> > -		long r = atomic_long_xchg(&cfs_rq->removed_load_avg, 0);
> > +		s64 r = atomic_long_xchg(&cfs_rq->removed_load_avg, 0);
> >  		sa->load_avg = max_t(long, sa->load_avg - r, 0);
> >  		sa->load_sum = max_t(s64, sa->load_sum - r * LOAD_AVG_MAX, 0);
>
> This makes sense, because sched_avg::load_sum is u64.
>
> >  		removed = 1;
> >  	}
> >
> >  	if (atomic_long_read(&cfs_rq->removed_util_avg)) {
> > -		long r = atomic_long_xchg(&cfs_rq->removed_util_avg, 0);
> > +		s64 r = atomic_long_xchg(&cfs_rq->removed_util_avg, 0);
> >  		sa->util_avg = max_t(long, sa->util_avg - r, 0);
> >  		sa->util_sum = max_t(s32, sa->util_sum - r * LOAD_AVG_MAX, 0);
> >  	}
>
> However sched_avg::util_sum is u32, so this is still wrecked.

I seem to have wrecked that in:

  006cdf025a33 ("sched/fair: Optimize per entity utilization tracking")

Maybe just make util_sum u64 too?
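
The overflow itself is easy to see outside the kernel: on a 32-bit
system 'long' is 32 bits wide, so 'r * LOAD_AVG_MAX' is performed as a
32-bit multiply before anything is assigned. Below is a minimal
userspace sketch of that failure mode (an illustration, not kernel
code: int32_t stands in for the 32-bit 'long', int64_t for the patched
's64 r'; the constant and the value 87950 are taken from the UBSAN
report above).

/*
 * Minimal userspace sketch (assumption: int32_t models the 32-bit
 * kernel 'long', int64_t models the patched 's64 r').
 */
#include <stdint.h>
#include <stdio.h>

#define LOAD_AVG_MAX 47742	/* same constant as kernel/sched/fair.c */

int main(void)
{
	int32_t r32 = 87950;	/* unpatched: 'long r' on a 32-bit kernel */
	int64_t r64 = 87950;	/* patched:   's64 r' */

	/*
	 * 87950 * 47742 exceeds INT32_MAX (2147483647), so in the kernel
	 * the 32-bit multiply is signed overflow, i.e. undefined
	 * behaviour -- exactly what UBSAN reported.  Here we multiply in
	 * 64 bits and truncate, only to show the wrapped value without
	 * invoking UB ourselves.
	 */
	int32_t wrapped = (int32_t)((int64_t)r32 * LOAD_AVG_MAX);

	/* With 's64 r' the product is computed in 64 bits and fits. */
	int64_t ok = r64 * LOAD_AVG_MAX;

	printf("32-bit product (wrapped): %d\n", wrapped);
	printf("64-bit product          : %lld\n", (long long)ok);
	return 0;
}

The same sketch also points at the remaining problem noted above: even
with the 64-bit multiply, forcing the result back through
max_t(s32, ...) into a u32 util_sum truncates it again, which is why
widening util_sum is suggested as well.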