Date: Wed, 7 Apr 2021 13:36:45 +0200
From: Peter Zijlstra
To: Arnd Bergmann
Cc: Stafford Horne, Guo Ren, linux-riscv, Linux Kernel Mailing List,
	linux-csky@vger.kernel.org, linux-arch, Will Deacon, Ingo Molnar,
	Waiman Long, Anup Patel
Subject: Re: [PATCH v4 3/4] locking/qspinlock: Add ARCH_USE_QUEUED_SPINLOCKS_XCHG32

On Wed, Apr 07, 2021 at 10:42:50AM +0200, Arnd Bergmann wrote:
> Since there are really only a handful of instances in the kernel
> that use the cmpxchg() or xchg() on u8/u16 variables, it would seem
> best to just disallow
> those completely.

Not going to happen. xchg16 is optimal for qspinlock, and if we replace
that with a cmpxchg loop on x86 we're regressing.

> Interestingly, the s390 version using __sync_val_compare_and_swap()
> seems to produce nice output on all architectures that have atomic
> instructions, with any supported compiler, to the point where I think
> we could just use that to replace most of the inline-asm versions except
> for arm64:
>
> #define cmpxchg(ptr, o, n)						\
> ({									\
> 	__typeof__(*(ptr)) __o = (o);					\
> 	__typeof__(*(ptr)) __n = (n);					\
> 	(__typeof__(*(ptr))) __sync_val_compare_and_swap((ptr), __o, __n);\
> })

It generates the LL/SC loop, but doesn't do sensible optimizations when
it is itself used in a loop. That is, it generates a loop of a loop,
just like what you'd expect, which is sub-optimal for LL/SC.

> Not sure how gcc's acquire/release behavior of __sync_val_compare_and_swap()
> relates to what the kernel wants here.
>
> The gcc documentation also recommends using the standard
> __atomic_compare_exchange_n() builtin instead, which would allow
> constructing release/acquire/relaxed versions as well, but I could not
> get it to produce equally good output. (Possibly I was using it wrong.)

I'm scared to death of the C11 crap; the compiler will 'optimize' them
when it feels like it, using the C11 memory-model rules, which are not
compatible with the kernel rules. But the same thing applies: it won't
do the right thing for composites.

_______________________________________________
linux-riscv mailing list
linux-riscv@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-riscv