Date: Thu, 23 Sep 2021 12:02:35 +0200
From: Frederic Weisbecker
To: Sebastian Andrzej Siewior
Cc: Thomas Gleixner, Valentin Schneider, linux-kernel@vger.kernel.org,
	linux-arm-kernel@lists.infradead.org, rcu@vger.kernel.org,
	linux-rt-users@vger.kernel.org, Catalin Marinas, Will Deacon,
	Ingo Molnar, Peter Zijlstra, Steven Rostedt,
	Daniel Bristot de Oliveira, "Paul E. McKenney", Josh Triplett,
	Mathieu Desnoyers, Davidlohr Bueso, Lai Jiangshan, Joel Fernandes,
	Anshuman Khandual, Vincenzo Frascino, Steven Price, Ard Biesheuvel,
	Boqun Feng, Mike Galbraith
Subject: Re: rcu/tree: Protect rcu_rdp_is_offloaded() invocations on RT
Message-ID: <20210923100235.GA113809@lothringen>
References: <20210811201354.1976839-1-valentin.schneider@arm.com>
	<20210811201354.1976839-4-valentin.schneider@arm.com>
	<874kae6n3g.ffs@tglx>
	<87pmt163al.ffs@tglx>
	<20210921234518.GB100318@lothringen>
	<20210922063208.ltf7sdou4tr5yrnc@linutronix.de>
	<20210922111012.GA106513@lothringen>
	<20210922112731.dvauvxlhx5suc7qd@linutronix.de>
	<20210922113820.GC106513@lothringen>
	<20210922130232.vm7rgkdszfhejf34@linutronix.de>
In-Reply-To: <20210922130232.vm7rgkdszfhejf34@linutronix.de>

On Wed, Sep 22, 2021 at 03:02:32PM +0200, Sebastian Andrzej Siewior wrote:
> On 2021-09-22 13:38:20 [+0200], Frederic Weisbecker wrote:
> > > So you rely on some implicit behaviour which breaks with RT such as:
> > >
> > >                     CPU 0
> > > -----------------------------------------------
> > > RANDOM TASK-A                 RANDOM TASK-B
> > > ------                        -----------
> > > int *X = &per_cpu(CPUX, 0)    int *X = &per_cpu(CPUX, 0)
> > >                               int A, B;
> > > spin_lock(&D);                spin_lock(&C);
> > > WRITE_ONCE(*X, 0);            A = READ_ONCE(*X);
> > > WRITE_ONCE(*X, 1);            B = READ_ONCE(*X);
> > >
> > > while spinlock C and D are just random locks not related to CPUX but it
> > > just happens that they are held at that time. So for !RT you guarantee
> > > that A == B while it is not the case on RT.
> >
> > Not sure which spinlocks you are referring to here. Also most RCU spinlocks
> > are raw.
>
> I was bringing an example where you also could rely on implicit locking
> provided by spin_lock() which breaks on RT.

Good point!