From: Thomas Gleixner
To: Peter Zijlstra, boqun.feng@gmail.com
Cc: linux-kernel@vger.kernel.org, peterz@infradead.org, Ingo Molnar,
    Juri Lelli, Steven Rostedt, Davidlohr Bueso, Will Deacon,
    Waiman Long, Sebastian Andrzej Siewior, Mike Galbraith,
    Daniel Bristot de Oliveira
Subject: Re: [PATCH 1/4] sched/wakeup: Strengthen current_save_and_set_rtlock_wait_state()
In-Reply-To: <20210909110203.767330253@infradead.org>
References: <20210909105915.757320973@infradead.org> <20210909110203.767330253@infradead.org>
Date: Tue, 14 Sep 2021 00:08:30 +0200
Message-ID: <871r5sf7s1.ffs@tglx>

On Thu, Sep 09 2021 at 12:59, Peter Zijlstra wrote:
> While looking at current_save_and_set_rtlock_wait_state() I'm thinking
> it really ought to use smp_store_mb(), because something like:
>
>	current_save_and_set_rtlock_wait_state();
>	for (;;) {
>		if (try_lock())
>			break;
>
>		raw_spin_unlock_irq(&lock->wait_lock);
>		schedule();
>		raw_spin_lock_irq(&lock->wait_lock);
>
>		set_current_state(TASK_RTLOCK_WAIT);
>	}
>	current_restore_rtlock_saved_state();
>
> which is the advertised usage in the comment, is actually broken,
> since trylock() will only need a load-acquire in general and that
> could be re-ordered against the state store, which could lead to a
> missed wakeup -> BAD (tm).

I don't think so, because both the state store and the wakeup are
serialized via tsk->pi_lock.

> While there, make them consistent with the IRQ usage in
> set_special_state().
>
> Fixes: 5f220be21418 ("sched/wakeup: Prepare for RT sleeping spin/rwlocks")
> Signed-off-by: Peter Zijlstra (Intel)
> ---
>  include/linux/sched.h | 19 +++++++++++--------
>  1 file changed, 11 insertions(+), 8 deletions(-)
>
> --- a/include/linux/sched.h
> +++ b/include/linux/sched.h
> @@ -245,7 +245,8 @@ struct task_group;
>   *	if (try_lock())
>   *		break;
>   *	raw_spin_unlock_irq(&lock->wait_lock);
> - *	schedule_rtlock();
> + *	if (!cond)
> + *		schedule_rtlock();

cond is not really relevant here.

>   *	raw_spin_lock_irq(&lock->wait_lock);
>   *	set_current_state(TASK_RTLOCK_WAIT);
>   *   }
> @@ -253,22 +254,24 @@ struct task_group;
>   */
>  #define current_save_and_set_rtlock_wait_state()			\
>  	do {								\
> -		lockdep_assert_irqs_disabled();				\
> -		raw_spin_lock(&current->pi_lock);			\
> +		unsigned long flags; /* may shadow */			\
> +									\
> +		raw_spin_lock_irqsave(&current->pi_lock, flags);	\

Why? This is solely for the rtlock use case, which invokes this with
interrupts disabled. So why do we need that irqsave() overhead here?

>  		current->saved_state = current->__state;		\
>  		debug_rtlock_wait_set_state();				\
> -		WRITE_ONCE(current->__state, TASK_RTLOCK_WAIT);		\
> +		smp_store_mb(current->__state, TASK_RTLOCK_WAIT);	\

The try_lock() does not matter at all here, really. All that matters is
that the unlocker cannot observe the wrong state, and that's fully
serialized via tsk::pi_lock.

Thanks,

        tglx
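
For readers following the ordering argument above, here is a rough,
self-contained C11 userspace sketch of the two store variants under
discussion: a plain store versus a store followed by a full barrier,
which is roughly the shape of smp_store_mb(). All names (wait_state,
lock_owner, rt_trylock, TASK_RTLOCK_WAIT_VAL) are invented for
illustration and are not the kernel implementation; the sketch only
models the reordering PeterZ describes, not the tsk->pi_lock
serialization that tglx points out makes it harmless for rtlocks.

	/*
	 * Illustrative userspace analogue -- NOT the kernel code.
	 * wait_state stands in for current->__state, lock_owner for the
	 * rtmutex owner field, rt_trylock() for the load-acquire trylock.
	 */
	#include <stdatomic.h>
	#include <stdbool.h>
	#include <stdio.h>

	#define TASK_RTLOCK_WAIT_VAL	2	/* arbitrary value for the sketch */

	static _Atomic int wait_state;
	static _Atomic int lock_owner;

	/* A trylock only needs ACQUIRE ordering on success. */
	static bool rt_trylock(int me)
	{
		int expected = 0;

		return atomic_compare_exchange_strong_explicit(&lock_owner,
				&expected, me,
				memory_order_acquire, memory_order_relaxed);
	}

	/*
	 * Variant 1: plain store. The later acquire load inside rt_trylock()
	 * is not ordered against this store, so the load may be satisfied
	 * before the store becomes visible -- the reordering in question.
	 */
	static void set_wait_state_plain(void)
	{
		atomic_store_explicit(&wait_state, TASK_RTLOCK_WAIT_VAL,
				      memory_order_relaxed);
	}

	/*
	 * Variant 2: store followed by a full fence. The fence keeps the
	 * state store ahead of any later load, including the trylock's
	 * load-acquire.
	 */
	static void set_wait_state_mb(void)
	{
		atomic_store_explicit(&wait_state, TASK_RTLOCK_WAIT_VAL,
				      memory_order_relaxed);
		atomic_thread_fence(memory_order_seq_cst);
	}

	int main(void)
	{
		set_wait_state_mb();	/* or set_wait_state_plain() */

		if (rt_trylock(1))
			printf("lock taken, state=%d\n",
			       atomic_load_explicit(&wait_state,
						    memory_order_relaxed));
		return 0;
	}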