Date: Fri, 10 Sep 2021 16:06:40 +0100
From: Will Deacon
To: Peter Zijlstra
Cc: tglx@linutronix.de, boqun.feng@gmail.com, linux-kernel@vger.kernel.org,
	Ingo Molnar, Juri Lelli, Steven Rostedt, Davidlohr Bueso,
	Waiman Long, Sebastian Andrzej Siewior, Mike Galbraith, Daniel
	Bristot de Oliveira
Subject: Re: [PATCH 1/4] sched/wakeup: Strengthen current_save_and_set_rtlock_wait_state()
Message-ID: <20210910150639.GA1755@willie-the-truck>
References: <20210909105915.757320973@infradead.org>
 <20210909110203.767330253@infradead.org>
 <20210909134524.GB9722@willie-the-truck>
 <20210910125658.GA1454@willie-the-truck>
In-Reply-To:
User-Agent: Mutt/1.10.1 (2018-07-13)
X-Mailing-List: linux-kernel@vger.kernel.org

On Fri, Sep 10, 2021 at 04:01:29PM +0200, Peter Zijlstra wrote:
> On Fri, Sep 10, 2021 at 03:17:04PM +0200, Peter Zijlstra wrote:
> > On Fri, Sep 10, 2021 at 01:57:26PM +0100, Will Deacon wrote:
> > > On Thu, Sep 09, 2021 at 04:27:46PM +0200, Peter Zijlstra wrote:
> > > > Moo yes, so the earlier changelog I wrote was something like:
> > > >
> > > >	current_save_and_set_rtlock_wait_state();
> > > >	for (;;) {
> > > >		if (try_lock())
> > > >			break;
> > > >
> > > >		raw_spin_unlock_irq(&lock->wait_lock);
> > > >		if (!cond)
> > > >			schedule();
> > > >		raw_spin_lock_irq(&lock->wait_lock);
> > > >
> > > >		set_current_state(TASK_RTLOCK_WAIT);
> > > >	}
> > > >	current_restore_rtlock_saved_state();
> > > >
> > > > which is more what the code looks like before these patches, and in
> > > > that case the @cond load can be lifted before __state.
> > >
> > > Ah, so that makes more sense, thanks. I can't see how the try_lock()
> > > could be reordered though, as it's going to have to do an atomic rmw.
> >
> > OK, lemme go update the Changelog and make it __flags for bigeasy :-)
>
> The patch now reads:
>
> ---
> Subject: sched/wakeup: Strengthen current_save_and_set_rtlock_wait_state()
> From: Peter Zijlstra
> Date: Thu, 09 Sep 2021 12:59:16 +0200
>
> While looking at current_save_and_set_rtlock_wait_state() I'm thinking
> it really ought to use smp_store_mb(), because using it for a more
> traditional wait loop like:
>
>	current_save_and_set_rtlock_wait_state();
>	for (;;) {
>		if (cond)
>			break;
>
>		schedule();
>		set_current_state(TASK_RTLOCK_WAIT);
>	}
>	current_restore_rtlock_saved_state();
>
> is actually broken, since the cond load could be re-ordered against
> the state store, which could lead to a missed wakeup -> BAD (tm).
>
> While there, make them consistent with the IRQ usage in
> set_special_state().

Cheers, that's much better:

Acked-by: Will Deacon

Will