From mboxrd@z Thu Jan  1 00:00:00 1970
Date: Mon, 16 Nov 2020 15:29:46 +0000
From: Mel Gorman <mgorman@techsingularity.net>
To: Peter Zijlstra
Cc: Will Deacon, Davidlohr Bueso, linux-arm-kernel@lists.infradead.org,
	linux-kernel@vger.kernel.org
Subject: Re: Loadavg accounting error on arm64
Message-ID: <20201116152946.GR3371@techsingularity.net>
References: <20201116091054.GL3371@techsingularity.net>
 <20201116114938.GN3371@techsingularity.net>
 <20201116125355.GB3121392@hirez.programming.kicks-ass.net>
 <20201116125803.GB3121429@hirez.programming.kicks-ass.net>
In-Reply-To: <20201116125803.GB3121429@hirez.programming.kicks-ass.net>
User-Agent: Mutt/1.10.1 (2018-07-13)

On Mon, Nov 16, 2020 at 01:58:03PM +0100, Peter Zijlstra wrote:
> On Mon, Nov 16, 2020 at 01:53:55PM +0100, Peter Zijlstra wrote:
> > On Mon, Nov 16, 2020 at 11:49:38AM +0000, Mel Gorman wrote:
> > > On Mon, Nov 16, 2020 at 09:10:54AM +0000, Mel Gorman wrote:
> > > > I'll be looking again today to see if I can find a mistake in the
> > > > ordering of how sched_contributes_to_load is handled but again, the
> > > > lack of knowledge of the arm64 memory model means I'm a bit stuck
> > > > and a second set of eyes would be nice :(
> > > >
> > >
> > > This morning, it's not particularly clear what orders the visibility
> > > of sched_contributes_to_load the same way as other task fields in the
> > > schedule vs try_to_wake_up paths. I thought the rq lock would have
> > > ordered them but something is clearly off or loadavg would not be
> > > getting screwed. It could be done with an rmb and wmb (testing and it
> > > hasn't blown up so far) but that's far too heavy.
> > > smp_load_acquire/smp_store_release might be sufficient, although it
> > > is less clear whether arm64 gives the necessary guarantees.
> > >
> > > (This is still at the chucking-out-ideas stage as I haven't context
> > > switched back in all the memory barrier rules.)
> >
> > IIRC it should be so ordered by ->on_cpu.
> >
> > We have:
> >
> > 	schedule()
> > 		prev->sched_contributes_to_load = X;
> > 		smp_store_release(prev->on_cpu, 0);
> >
> >
> > on the one hand, and:
>
> Ah, my bad, ttwu() itself will of course wait for !p->on_cpu before we
> even get here.
>

Sort of, it will either have called smp_load_acquire(&p->on_cpu) or
smp_cond_load_acquire(&p->on_cpu, !VAL) before hitting one of the paths
leading to ttwu_do_activate. Either way, it's covered.

> > 	sched_ttwu_pending()
> > 	  if (WARN_ON_ONCE(p->on_cpu))
> > 	    smp_cond_load_acquire(&p->on_cpu)
> >
> > 	  ttwu_do_activate()
> > 	    if (p->sched_contributes_to_load)
> > 	      ...
> >
> > on the other (for the remote case, which is the only 'interesting' one).
>

But this side is interesting because I'm having trouble convincing myself
it's 100% correct for sched_contributes_to_load. The write of
prev->sched_contributes_to_load in the schedule() path has a big gap
before it hits the smp_store_release(prev->on_cpu).

On the ttwu path, we have

	if (smp_load_acquire(&p->on_cpu) &&
	    ttwu_queue_wakelist(p, task_cpu(p), wake_flags | WF_ON_CPU))
		goto unlock;

ttwu_queue_wakelist queues the task on the wakelist and sends an IPI; on
the receiver side, ttwu_do_activate is called and reads
sched_contributes_to_load. sched_ttwu_pending() is not necessarily using
the same rq lock, so there is no protection there. The smp_load_acquire()
has just been hit, but that still leaves a gap between when
sched_contributes_to_load is written and a parallel read of
sched_contributes_to_load.

So while we might be able to avoid an smp_rmb() before the read of
sched_contributes_to_load and rely on p->on_cpu ordering there, we may
still need an smp_wmb() after the rq->nr_uninterruptible increment
instead of waiting until the smp_store_release() is hit while a task is
scheduling. That would be a real memory barrier on arm64 and a plain
compiler barrier on x86-64.

> Also see the "Notes on Program-Order guarantees on SMP systems."
> comment.

I did, it was the on_cpu ordering for the blocking case that had me
looking at the smp_store_release and smp_cond_load_acquire in arm64 in
the first place, thinking that something in there must be breaking the
on_cpu ordering. I'm re-reading it every so often while trying to figure
out where the gap is or whether I'm imagining things.

Not fully tested but it did not instantly break either

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index d2003a7d5ab5..877eaeba45ac 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -4459,14 +4459,26 @@ static void __sched notrace __schedule(bool preempt)
 		if (signal_pending_state(prev_state, prev)) {
 			prev->state = TASK_RUNNING;
 		} else {
-			prev->sched_contributes_to_load =
+			int acct_load =
 				(prev_state & TASK_UNINTERRUPTIBLE) &&
 				!(prev_state & TASK_NOLOAD) &&
 				!(prev->flags & PF_FROZEN);
 
-			if (prev->sched_contributes_to_load)
+			prev->sched_contributes_to_load = acct_load;
+			if (acct_load) {
 				rq->nr_uninterruptible++;
+				/*
+				 * Pairs with p->on_cpu ordering, either a
+				 * smp_load_acquire or smp_cond_load_acquire
+				 * in the ttwu path before ttwu_do_activate
+				 * reads p->sched_contributes_to_load. It's
+				 * only after the nr_uninterruptible update
+				 * happens that the ordering is critical.
+				 */
+				smp_wmb();
+			}
+
 			/*
 			 * __schedule()		ttwu()
 			 *   prev_state = prev->state;    if (p->on_rq && ...)
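
For illustration only, a minimal userspace C11 analogue of the pairing
being relied on (on_cpu, contributes_to_load and nr_uninterruptible are
hypothetical stand-ins for p->on_cpu, p->sched_contributes_to_load and
rq->nr_uninterruptible; this is a sketch, not kernel code):

/*
 * Sketch: the "scheduler" thread makes its plain writes, then does a
 * store-release of on_cpu; the "waker" thread spins with an acquire
 * load and only then reads the plain fields. The release/acquire pair
 * is what makes both writes visible to the reader.
 */
#include <stdatomic.h>
#include <stdio.h>
#include <pthread.h>

static _Atomic int on_cpu = 1;		/* stand-in for p->on_cpu */
static int contributes_to_load;		/* plain write, like the task flag */
static int nr_uninterruptible;		/* plain write, like the rq counter */

static void *scheduler_side(void *arg)
{
	(void)arg;
	/* Account the sleeping task... */
	contributes_to_load = 1;
	nr_uninterruptible++;
	/* ...then release the task; the writes above are ordered before this */
	atomic_store_explicit(&on_cpu, 0, memory_order_release);
	return NULL;
}

static void *wakeup_side(void *arg)
{
	(void)arg;
	/* Wait for the scheduler side to release the task */
	while (atomic_load_explicit(&on_cpu, memory_order_acquire))
		;
	/* The acquire pairs with the store-release, so both writes are visible */
	if (contributes_to_load)
		nr_uninterruptible--;
	printf("nr_uninterruptible = %d\n", nr_uninterruptible);
	return NULL;
}

int main(void)
{
	pthread_t s, w;

	pthread_create(&w, NULL, wakeup_side, NULL);
	pthread_create(&s, NULL, scheduler_side, NULL);
	pthread_join(s, NULL);
	pthread_join(w, NULL);
	return 0;
}

Built with something like gcc -pthread, this always prints 0: the waker
cannot observe on_cpu == 0 without also observing the accounting writes.
Whether the kernel paths above preserve the same property for the real
fields is exactly the question being chased here.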
-- 
Mel Gorman
SUSE Labs