From mboxrd@z Thu Jan  1 00:00:00 1970
Date: Mon, 16 Nov 2020 13:37:21 +0000
From: Mel Gorman <mgorman@techsingularity.net>
To: Will Deacon
Cc: Peter Zijlstra, Davidlohr Bueso, linux-arm-kernel@lists.infradead.org,
	linux-kernel@vger.kernel.org
Subject: Re: Loadavg accounting error on arm64
Message-ID: <20201116133721.GQ3371@techsingularity.net>
References: <20201116091054.GL3371@techsingularity.net>
 <20201116131102.GA29992@willie-the-truck>
MIME-Version: 1.0
Content-Type: text/plain; charset=iso-8859-15
Content-Disposition: inline
In-Reply-To: <20201116131102.GA29992@willie-the-truck>
User-Agent: Mutt/1.10.1 (2018-07-13)
X-Mailing-List: linux-kernel@vger.kernel.org

On Mon, Nov 16, 2020 at 01:11:03PM +0000, Will Deacon wrote:
> On Mon, Nov 16, 2020 at 09:10:54AM +0000, Mel Gorman wrote:
> > I got cc'd an internal bug report filed against 5.8 and 5.9 kernels
> > that loadavg was "exploding" on arm64 on machines acting as build
> > servers. It happened on at least two different arm64 variants. That setup
> > is complex to replicate but fortunately can be reproduced by running
> > hackbench-process-pipes while heavily overcommitting a machine with 96
> > logical CPUs and then checking if loadavg drops afterwards. With an
> > MMTests clone, I reproduced it as follows
> >
> > ./run-mmtests.sh --config configs/config-workload-hackbench-process-pipes --no-monitor testrun; \
> > for i in `seq 1 60`; do cat /proc/loadavg; sleep 60; done
> >
> > Load should drop to 10 after about 10 minutes and it does on x86-64 but
> > remained at around 200+ on arm64.
>
> Do you think you could use this to bisect the problem? Also, are you able
> to reproduce the issue on any other arm64 machines, or just this one?
>

I didn't bisect it as I was assuming it was related to c6e7bd7afaeb
("sched/core: Optimize ttwu() spinning on p->on_cpu") which is something
I would still like to preserve and was responsible for a loadavg glitch
fixed by dbfb089d360b ("sched: Fix loadavg accounting race") and
d136122f5845 ("sched: Fix race against ptrace_freeze_trace()")

While *I* can only reproduce it on one machine, I have a bug report
saying it affects others. It's not a single-machine issue or a single
ARM variant.

> > The reproduction case simply hammers the case where a task can be
> > descheduling while also being woken by another task at the same time. It
> > takes a long time to run but it makes the problem very obvious.
> > The expectation is that after hackbench has been running and saturating
> > the machine for a long time, loadavg drops back down once it completes.
> >
> > Commit dbfb089d360b ("sched: Fix loadavg accounting race") fixed a loadavg
> > accounting race in the generic case. Later it was documented why the
> > ordering of when p->sched_contributes_to_load is read/updated relative
> > to p->on_cpu matters. This is critical when a task is descheduling at
> > the same time it is being activated on another CPU. While the
> > loads/stores happen under the RQ lock, the RQ lock on its own does not
> > give any guarantees on the task state.
> >
> > Over the weekend I convinced myself that it must be because the
> > implementations of smp_load_acquire and smp_store_release do not appear
> > to implement acquire/release semantics, because I didn't find anything
> > on arm64 that was playing with p->state behind the scheduler's back (I
> > could have missed it if it was in an assembly portion as I can't
> > reliably read arm assembler). Similarly, it's not clear why the arm64
> > implementation does not call smp_acquire__after_ctrl_dep in the
> > smp_load_acquire implementation. Even when it was introduced, the arm64
> > implementation differed significantly from the arm implementation in
> > terms of what barriers it used, for non-obvious reasons.
>
> Why would you expect smp_acquire__after_ctrl_dep() to be called as part of
> the smp_load_acquire() implementation?
>

I wouldn't, I should have said smp_cond_load_acquire.

> FWIW, arm64 has special instructions for acquire and release (and they
> actually provide more order than is strictly needed by Linux), so we
> just map acquire/release to those instructions directly. Since these
> instructions are not available on most 32-bit cores, the arm implementation
> just uses the fence-based implementation.
>

Ok, makes sense. I think this was a red herring anyway as it's now
looking more like a sched_contributes_to_load ordering issue.
> Anyway, setting all that aside, I do agree with you that the bitfield usage
> in task_struct looks pretty suspicious. For example, in __schedule() we
> have:
>
>	rq_lock(rq, &rf);
>	smp_mb__after_spinlock();
>	...
>	prev_state = prev->state;
>
>	if (!preempt && prev_state) {
>		if (signal_pending_state(prev_state, prev)) {
>			prev->state = TASK_RUNNING;
>		} else {
>			prev->sched_contributes_to_load =
>				(prev_state & TASK_UNINTERRUPTIBLE) &&
>				!(prev_state & TASK_NOLOAD) &&
>				!(prev->flags & PF_FROZEN);
>	...
>			deactivate_task(rq, prev, DEQUEUE_SLEEP | DEQUEUE_NOCLOCK);
>
> where deactivate_task() updates p->on_rq directly:
>
>	p->on_rq = (flags & DEQUEUE_SLEEP) ? 0 : TASK_ON_RQ_MIGRATING;
>

It used to be at least a WRITE_ONCE until 58877d347b58 ("sched: Better
document ttwu()") changed it. I'm not sure why that is and didn't think
about it too deeply as it didn't appear to be directly related to the
problem and didn't have ordering consequences.

> so this is _not_ ordered wrt sched_contributes_to_load. But then over in
> __ttwu_queue_wakelist() we have:
>
>	p->sched_remote_wakeup = !!(wake_flags & WF_MIGRATED);
>
> which can be invoked on the try_to_wake_up() path if p->on_rq is first read
> as zero and then p->on_cpu is read as 1. Perhaps these non-atomic bitfield
> updates can race and cause the flags to be corrupted?
>

I think this is at least one possibility. I think at least that one
should only be explicitly set on WF_MIGRATED and explicitly cleared in
sched_ttwu_pending. While I haven't audited it fully, it might be enough
to avoid a double write outside of the rq lock on the bitfield, but I
still need to think more about the ordering of sched_contributes_to_load
and whether it's ordered by p->on_cpu or not.
-- 
Mel Gorman
SUSE Labs