Date: Thu, 19 Oct 2023 07:25:51 -0700
From: "Paul E. McKenney"
To: Hou Tao
Cc: bpf@vger.kernel.org, David Vernet, Andrii Nakryiko,
	Alexei Starovoitov, Daniel Borkmann, Martin KaFai Lau, Song Liu,
	Yonghong Song, John Fastabend, KP Singh, Stanislav Fomichev,
	Hao Luo, Jiri Olsa, linux-kernel@vger.kernel.org
Subject: Re: [PATCH bpf] Fold smp_mb__before_atomic() into atomic_set_release()
Reply-To: paulmck@kernel.org
References: <722b64d7-281b-b4ab-4d4d-403abc41a36b@huaweicloud.com>
	<7fe984d2-c30c-40ad-83cd-d9fb51b6ce0d@huaweicloud.com>
In-Reply-To: <7fe984d2-c30c-40ad-83cd-d9fb51b6ce0d@huaweicloud.com>
X-Mailing-List: linux-kernel@vger.kernel.org

On Thu, Oct 19, 2023 at 02:20:35PM +0800, Hou Tao wrote:
> Hi Paul,
>
> On 10/19/2023 12:54 PM, Paul E. McKenney wrote:
> > On Thu, Oct 19, 2023 at 09:07:07AM +0800, Hou Tao wrote:
> >> Hi Paul,
> >>
> >> On 10/19/2023 6:28 AM, Paul E. McKenney wrote:
> >>> bpf: Fold smp_mb__before_atomic() into atomic_set_release()
> >>>
> >>> The bpf_user_ringbuf_drain() BPF_CALL function uses an atomic_set()
> >>> immediately preceded by smp_mb__before_atomic() so as to order storing
> >>> of ring-buffer consumer and producer positions prior to the atomic_set()
> >>> call's clearing of the ->busy flag, as follows:
> >>>
> >>> 	smp_mb__before_atomic();
> >>> 	atomic_set(&rb->busy, 0);
> >>>
> >>> Although this works given current architectures and implementations, and
> >>> given that this only needs to order prior writes against a later write,
> >>> it does so by accident: smp_mb__before_atomic() is only guaranteed to
> >>> work with read-modify-write atomic operations, and not at all with
> >>> things like atomic_set() and atomic_read().
> >>>
> >>> Note especially that smp_mb__before_atomic() will not, repeat *not*,
> >>> order the prior write to "a" before the subsequent non-read-modify-write
> >>> atomic read from "b", even on strongly ordered systems such as x86:
> >>>
> >>> 	WRITE_ONCE(a, 1);
> >>> 	smp_mb__before_atomic();
> >>> 	r1 = atomic_read(&b);
> >>
> >> The reason is that smp_mb__before_atomic() is defined as a no-op and
> >> atomic_read() on x86-64 is just READ_ONCE(), right?
> >
> > The real reason is that smp_mb__before_atomic() is not defined to do
> > anything unless followed by an atomic read-modify-write operation,
> > and atomic_read(), atomic64_read(), atomic_set(), and so on are not
> > read-modify-write operations.
>
> I see. Thanks for the explanation. It seems I did not read
> Documentation/atomic_t.txt carefully enough; it says:
>
>     The barriers:
>
>     smp_mb__{before,after}_atomic()
>
>     only apply to the RMW atomic ops and can be used to augment/upgrade the
>     ordering inherent to the op.

That is the place!

> > As you point out, one implementation consequence of this is that
> > smp_mb__before_atomic() is nothingness on x86.
> >
> >> And it seems that I also used smp_mb__before_atomic() in a wrong way in
> >> patch [1]. The memory ordering in the posted patch is
> >>
> >> process X                                    process Y
> >>     atomic64_dec_and_test(&map->usercnt)
> >>     READ_ONCE(timer->timer)
> >>                                              timer->timer = t
> >
> > The above two lines are supposed to be accessing the same field, correct?
> > If so, process Y's store really should be WRITE_ONCE().
>
> Yes. These two processes are accessing the same field (namely
> timer->timer). Is WRITE_ONCE() still necessary when the write of
> timer->timer in process Y is protected by a spin-lock?

If there is any possibility of a concurrent reader, that is, a reader
not holding that same lock, then yes, you should use WRITE_ONCE().
Compilers can do pretty vicious things to unmarked reads and writes.
But don't take my word for it, here are a few writeups:

o	"Who's afraid of a big bad optimizing compiler?" (series)
	https://lwn.net/Articles/793253
	https://lwn.net/Articles/799218

o	"An introduction to lockless algorithms" (Paolo Bonzini series)
	https://lwn.net/Articles/844224
	https://lwn.net/Articles/846700
	https://lwn.net/Articles/847481
	https://lwn.net/Articles/847973
	https://lwn.net/Articles/849237
	https://lwn.net/Articles/850202

o	"Is Parallel Programming Hard, And, If So, What Can You Do
	About It?", Section 4.3.4 ("Accessing Shared Variables")
	https://mirrors.edge.kernel.org/pub/linux/kernel/people/paulmck/perfbook/perfbook.html

> >>                                              // it won't work
> >>                                              smp_mb__before_atomic()
> >>                                              atomic64_read(&map->usercnt)
> >>
> >> For this problem, it seems I need to replace smp_mb__before_atomic()
> >> with smp_mb() to fix the memory ordering, right?
> >
> > Yes, because smp_mb() will order the prior store against that later load.
>
> Thanks. Will fix the patch.

Very good!
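In case a concrete check helps, here is a litmus-test sketch of the
smp_mb()-based ordering for your patch.  This is only a sketch, not the
posted patch itself: the names "timer" and "usercnt" are taken from the
diagram above, both are modeled as plain ints, and process X's
atomic64_dec_and_test() is modeled as a marked store followed by
smp_mb(), relying on value-returning RMW operations being fully
ordered.  Feeding this to herd7 with tools/memory-model should report
"Never" for the exists clause, that is, the two processes cannot both
miss each other's store:

C timer-usercnt-sb

(*
 * P0 is process Y: publish the timer, then check the usage count.
 * P1 is process X: drop the usage count to zero, then look for the
 * timer.  The bad outcome is P0 still seeing the old usage count (1)
 * while P1 misses the timer store (0), so that nobody cancels the
 * timer.  With smp_mb() on both sides this is forbidden.
 * Result: Never
 *)

{
	usercnt=1;
}

P0(int *timer, int *usercnt)
{
	int r0;

	WRITE_ONCE(*timer, 1);
	smp_mb();
	r0 = READ_ONCE(*usercnt);
}

P1(int *timer, int *usercnt)
{
	int r1;

	WRITE_ONCE(*usercnt, 0);
	smp_mb();
	r1 = READ_ONCE(*timer);
}

exists (0:r0=1 /\ 1:r1=0)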
							Thanx, Paul

> Regards,
> Hou
>
> > 						Thanx, Paul
> >
> >> Regards,
> >> Hou
> >>
> >> [1]: https://lore.kernel.org/bpf/20231017125717.241101-2-houtao@huaweicloud.com/
> >>
> >>> Therefore, replace the smp_mb__before_atomic() and atomic_set() with
> >>> atomic_set_release() as follows:
> >>>
> >>> 	atomic_set_release(&rb->busy, 0);
> >>>
> >>> This is no slower (and sometimes is faster) than the original, and also
> >>> provides a formal guarantee of ordering that the original lacks.
> >>>
> >>> Signed-off-by: Paul E. McKenney
> >>> Acked-by: David Vernet
> >>> Cc: Andrii Nakryiko
> >>> Cc: Alexei Starovoitov
> >>> Cc: Daniel Borkmann
> >>> Cc: Martin KaFai Lau
> >>> Cc: Song Liu
> >>> Cc: Yonghong Song
> >>> Cc: John Fastabend
> >>> Cc: KP Singh
> >>> Cc: Stanislav Fomichev
> >>> Cc: Hao Luo
> >>> Cc: Jiri Olsa
> >>> Cc:
> >>>
> >>> diff --git a/kernel/bpf/ringbuf.c b/kernel/bpf/ringbuf.c
> >>> index f045fde632e5..0ee653a936ea 100644
> >>> --- a/kernel/bpf/ringbuf.c
> >>> +++ b/kernel/bpf/ringbuf.c
> >>> @@ -770,8 +770,7 @@ BPF_CALL_4(bpf_user_ringbuf_drain, struct bpf_map *, map,
> >>>  	/* Prevent the clearing of the busy-bit from being reordered before the
> >>>  	 * storing of any rb consumer or producer positions.
> >>>  	 */
> >>> -	smp_mb__before_atomic();
> >>> -	atomic_set(&rb->busy, 0);
> >>> +	atomic_set_release(&rb->busy, 0);
> >>>
> >>>  	if (flags & BPF_RB_FORCE_WAKEUP)
> >>>  		irq_work_queue(&rb->work);
> >>>
> >>> .
>
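As a footnote, here is a similar litmus-style sketch of the ordering
that the patch above relies on.  Again only a sketch: rb->busy and one
producer position are modeled as plain ints, atomic_set_release() as
smp_store_release(), and the next consumer's acquisition of the busy
flag as an acquire load.  The exists clause should again be "Never": a
consumer that observes the cleared busy flag must also observe the
earlier position update.

C ringbuf-busy-release

(*
 * P0 models bpf_user_ringbuf_drain() finishing up: store a producer or
 * consumer position, then clear ->busy with release semantics.
 * P1 models a later consumer: observe ->busy cleared, then read the
 * position.  Seeing busy==0 but missing the position update (pos==0)
 * is forbidden by the release/acquire pairing.
 * Result: Never
 *)

{
	busy=1;
}

P0(int *pos, int *busy)
{
	WRITE_ONCE(*pos, 1);
	smp_store_release(busy, 0);
}

P1(int *pos, int *busy)
{
	int r0;
	int r1;

	r0 = smp_load_acquire(busy);
	r1 = READ_ONCE(*pos);
}

exists (1:r0=0 /\ 1:r1=0)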