From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Thu, 19 Oct 2023 10:44:59 +0200
From: Peter Zijlstra
To: Linus Torvalds
Cc: Uros Bizjak, Nadav Amit, the arch/x86 maintainers, Linux Kernel Mailing List, Andy Lutomirski, Brian Gerst, Denys Vlasenko, "H. Peter Anvin", Thomas Gleixner, Josh Poimboeuf, Nick Desaulniers
Subject: Re: [PATCH v2 -tip] x86/percpu: Use C for arch_raw_cpu_ptr()
Message-ID: <20231019084459.GP33217@noisy.programming.kicks-ass.net>
X-Mailing-List: linux-kernel@vger.kernel.org

On Wed, Oct 18, 2023 at 03:40:05PM -0700, Linus Torvalds wrote:
> Side note: the code that caused that problem is this:
>
>   __always_inline void __cyc2ns_read(struct cyc2ns_data *data)
>   {
>           int seq, idx;
>
>           do {
>                   seq = this_cpu_read(cyc2ns.seq.seqcount.sequence);
>                   ...
>           } while (unlikely(seq != this_cpu_read(cyc2ns.seq.seqcount.sequence)));
>   }
>
> where the issue is that the this_cpu_read() of that sequence number
> needs to be ordered.

I have very vague memories of other code also relying on this_cpu_read()
implying READ_ONCE(). And that code really is only buggy if you do not
have that.

Since the data is CPU-local, smp_rmb() would be confusing, as would
smp_load_acquire() -- there is no cross-CPU data ordering. The other
option is of course adding an explicit barrier(), but that's entirely
superfluous when all the loads are READ_ONCE().

If you want to make this_cpu_read() not imply READ_ONCE(), then we
should go audit all users :/ Can be done ofc.