Date: Fri, 10 Sep 2021 13:57:26 +0100
From: Will Deacon
To: Peter Zijlstra
Cc: tglx@linutronix.de, boqun.feng@gmail.com, linux-kernel@vger.kernel.org,
 Ingo Molnar, Juri Lelli, Steven Rostedt, Davidlohr Bueso, Waiman Long,
 Sebastian Andrzej Siewior, Mike Galbraith, Daniel Bristot de Oliveira
Subject: Re: [PATCH 1/4] sched/wakeup: Strengthen current_save_and_set_rtlock_wait_state()
Message-ID: <20210910125658.GA1454@willie-the-truck>
References: <20210909105915.757320973@infradead.org>
 <20210909110203.767330253@infradead.org>
 <20210909134524.GB9722@willie-the-truck>
In-Reply-To:

On Thu, Sep 09, 2021 at 04:27:46PM +0200, Peter Zijlstra wrote:
> On Thu, Sep 09, 2021 at 02:45:24PM +0100, Will Deacon wrote:
> > On Thu, Sep 09, 2021 at 12:59:16PM +0200, Peter Zijlstra wrote:
> > > While looking at current_save_and_set_rtlock_wait_state() I'm thinking
> > > it really ought to use smp_store_mb(), because something like:
> > >
> > >         current_save_and_set_rtlock_wait_state();
> > >         for (;;) {
> > >                 if (try_lock())
> > >                         break;
> > >
> > >                 raw_spin_unlock_irq(&lock->wait_lock);
> > >                 schedule();
> > >                 raw_spin_lock_irq(&lock->wait_lock);
> > >
> > >                 set_current_state(TASK_RTLOCK_WAIT);
> > >         }
> > >         current_restore_rtlock_saved_state();
> > >
> > > which is the advertised usage in the comment, is actually broken,
> > > since trylock() will only need a load-acquire in general and that
> > > could be re-ordered against the state store, which could lead to a
> > > missed wakeup -> BAD (tm).
> >
> > Why doesn't the UNLOCK of pi_lock in current_save_and_set_rtlock_wait_state()
> > order the state change before the successful try_lock? I'm just struggling
> > to envisage how this actually goes wrong.
>
> Moo yes, so the earlier changelog I wrote was something like:
>
>         current_save_and_set_rtlock_wait_state();
>         for (;;) {
>                 if (try_lock())
>                         break;
>
>                 raw_spin_unlock_irq(&lock->wait_lock);
>                 if (!cond)
>                         schedule();
>                 raw_spin_lock_irq(&lock->wait_lock);
>
>                 set_current_state(TASK_RTLOCK_WAIT);
>         }
>         current_restore_rtlock_saved_state();
>
> which is more what the code looks like before these patches, and in that
> case the @cond load can be lifted before __state.

Ah, so that makes more sense, thanks. I can't see how the try_lock()
could be reordered though, as it's going to have to do an atomic rmw.

Will
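
To make the above concrete, the failure Peter describes is a store-buffering
pattern between the waiter's __state store and its @cond load. The sketch
below is illustrative only: the waker side, and the names cond and task, are
paraphrased from the discussion rather than taken from the actual rtmutex
code.

        /* waiter (pre-patch shape, sketch) */
        WRITE_ONCE(current->__state, TASK_RTLOCK_WAIT); /* plain store */
        if (!cond)                                      /* plain load */
                schedule();

        /* waker (sketch) */
        cond = 1;
        wake_up_state(task, TASK_RTLOCK_WAIT);          /* no-op unless __state matches */

If the CPU hoists the waiter's @cond load above the __state store, the waiter
can read cond == 0 (from before the waker's store) while the waker's
wake_up_state() still sees the waiter's old __state and does nothing, so the
waiter calls schedule() with the condition already true, i.e. the missed
wakeup. The waker side is ordered inside try_to_wake_up(); on the waiter side,
turning the plain store into smp_store_mb() (a store followed by a full
barrier) forbids the store->load reordering.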
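
For completeness, here is a sketch of what using smp_store_mb() in the helper
itself could look like. It follows the shape implied by the discussion (the
pi_lock critical section comes from Will's question above) and is not
necessarily the kernel's exact definition.

        #define current_save_and_set_rtlock_wait_state()                        \
        do {                                                                    \
                lockdep_assert_irqs_disabled(); /* callers hold wait_lock, IRQs off */ \
                raw_spin_lock(&current->pi_lock);                               \
                current->saved_state = current->__state;                        \
                /* store __state, then smp_mb(): later loads (the trylock,     \
                 * the @cond check) cannot be hoisted above the store          \
                 */                                                             \
                smp_store_mb(current->__state, TASK_RTLOCK_WAIT);               \
                raw_spin_unlock(&current->pi_lock);                             \
        } while (0)

smp_store_mb() gives a store followed by a full memory barrier, which is the
ordering Peter says the helper ought to provide.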