Subject: Re: [PATCH bpf] Fold smp_mb__before_atomic() into atomic_set_release()
From: Hou Tao
To: paulmck@kernel.org
Cc: bpf@vger.kernel.org, David Vernet, Andrii Nakryiko, Alexei Starovoitov,
    Daniel Borkmann, Martin KaFai Lau, Song Liu, Yonghong Song,
    John Fastabend, KP Singh, Stanislav Fomichev, Hao Luo, Jiri Olsa,
    linux-kernel@vger.kernel.org
Date: Thu, 19 Oct 2023 14:20:35 +0800
Message-ID: <7fe984d2-c30c-40ad-83cd-d9fb51b6ce0d@huaweicloud.com>

Hi Paul,

On 10/19/2023 12:54 PM, Paul E. McKenney wrote:
> On Thu, Oct 19, 2023 at 09:07:07AM +0800, Hou Tao wrote:
>> Hi Paul,
>>
>> On 10/19/2023 6:28 AM, Paul E. McKenney wrote:
>>> bpf: Fold smp_mb__before_atomic() into atomic_set_release()
>>>
>>> The bpf_user_ringbuf_drain() BPF_CALL function uses an atomic_set()
>>> immediately preceded by smp_mb__before_atomic() so as to order storing
>>> of ring-buffer consumer and producer positions prior to the atomic_set()
>>> call's clearing of the ->busy flag, as follows:
>>>
>>>         smp_mb__before_atomic();
>>>         atomic_set(&rb->busy, 0);
>>>
>>> Although this works given current architectures and implementations, and
>>> given that this only needs to order prior writes against a later write,
>>> it does so only by accident: smp_mb__before_atomic() is guaranteed to
>>> work only with read-modify-write atomic operations, and not at all with
>>> things like atomic_set() and atomic_read().
>>>
>>> Note especially that smp_mb__before_atomic() will not, repeat *not*,
>>> order the prior write to "a" before the subsequent non-read-modify-write
>>> atomic read from "b", even on strongly ordered systems such as x86:
>>>
>>>         WRITE_ONCE(a, 1);
>>>         smp_mb__before_atomic();
>>>         r1 = atomic_read(&b);
>>
>> The reason is that smp_mb__before_atomic() is defined as a no-op on
>> x86-64 and atomic_read() there is just READ_ONCE(), right?
>
> The real reason is that smp_mb__before_atomic() is not defined to do
> anything unless followed by an atomic read-modify-write operation,
> and atomic_read(), atomic64_read(), atomic_set(), and so on are not
> read-modify-write operations.

I see, thanks for the explanation. It seems I did not read
Documentation/atomic_t.txt carefully; it says:

    The barriers:

      smp_mb__{before,after}_atomic()

    only apply to the RMW atomic ops and can be used to augment/upgrade the
    ordering inherent to the op.
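So, if I understand it correctly, the contrast looks roughly like the toy
sketch below (illustrative only, not actual kernel code; v is a
hypothetical atomic_t):

        /* OK: atomic_inc() is an RMW op, so the barrier applies and
         * upgrades the op to a fully ordered one.
         */
        smp_mb__before_atomic();
        atomic_inc(&v);

        /* Broken: atomic_set() is not an RMW op, so the barrier
         * guarantees no ordering here.
         */
        smp_mb__before_atomic();
        atomic_set(&v, 0);

        /* OK: release ordering is attached to the store itself. */
        atomic_set_release(&v, 0);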
> As you point out, one implementation consequence of this is that
> smp_mb__before_atomic() is nothingness on x86.
>
>> And it seems that I also used smp_mb__before_atomic() in a wrong way in
>> patch [1]. The memory ordering in the posted patch is:
>>
>> process X                                    process Y
>>
>>     atomic64_dec_and_test(&map->usercnt)
>>     READ_ONCE(timer->timer)
>>                                              timer->timer = t
>
> The above two lines are supposed to be accessing the same field, correct?
> If so, process Y's store really should be WRITE_ONCE().

Yes. These two processes are accessing the same field (namely
timer->timer). Is WRITE_ONCE() still necessary when the write of
timer->timer in process Y is protected by a spin-lock?

>>                                              // it won't work
>>                                              smp_mb__before_atomic()
>>                                              atomic64_read(&map->usercnt)
>>
>> For the problem, it seems I need to replace smp_mb__before_atomic() with
>> smp_mb() to fix the memory ordering, right?
>
> Yes, because smp_mb() will order the prior store against that later load.

Thanks. Will fix the patch.

Regards,
Hou

>
> 							Thanx, Paul
>
>> Regards,
>> Hou
>>
>> [1]:
>> https://lore.kernel.org/bpf/20231017125717.241101-2-houtao@huaweicloud.com/
>>
>>> Therefore, replace the smp_mb__before_atomic() and atomic_set() with
>>> atomic_set_release() as follows:
>>>
>>>         atomic_set_release(&rb->busy, 0);
>>>
>>> This is no slower (and sometimes is faster) than the original, and also
>>> provides a formal guarantee of ordering that the original lacks.
>>>
>>> Signed-off-by: Paul E. McKenney
>>> Acked-by: David Vernet
>>> Cc: Andrii Nakryiko
>>> Cc: Alexei Starovoitov
>>> Cc: Daniel Borkmann
>>> Cc: Martin KaFai Lau
>>> Cc: Song Liu
>>> Cc: Yonghong Song
>>> Cc: John Fastabend
>>> Cc: KP Singh
>>> Cc: Stanislav Fomichev
>>> Cc: Hao Luo
>>> Cc: Jiri Olsa
>>> Cc: bpf@vger.kernel.org
>>>
>>> diff --git a/kernel/bpf/ringbuf.c b/kernel/bpf/ringbuf.c
>>> index f045fde632e5..0ee653a936ea 100644
>>> --- a/kernel/bpf/ringbuf.c
>>> +++ b/kernel/bpf/ringbuf.c
>>> @@ -770,8 +770,7 @@ BPF_CALL_4(bpf_user_ringbuf_drain, struct bpf_map *, map,
>>>  	/* Prevent the clearing of the busy-bit from being reordered before the
>>>  	 * storing of any rb consumer or producer positions.
>>>  	 */
>>> -	smp_mb__before_atomic();
>>> -	atomic_set(&rb->busy, 0);
>>> +	atomic_set_release(&rb->busy, 0);
>>>
>>>  	if (flags & BPF_RB_FORCE_WAKEUP)
>>>  		irq_work_queue(&rb->work);
>>>
>>> .
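As for fixing [1], the plan looks roughly like the sketch below (the field
names follow the diagram above; the enclosing functions, locking, and error
handling are all elided, so this is a sketch rather than the actual patch):

        /* process Y: publish the timer, then check whether the map is
         * still in use.  A full barrier is needed to order the store
         * against the later read; smp_mb__before_atomic() would not
         * work because atomic64_read() is not an RMW op.
         */
        WRITE_ONCE(timer->timer, t);
        smp_mb();
        if (!atomic64_read(&map->usercnt)) {
                /* map is being released: undo the store and bail out */
        }

        /* process X: drop the map reference and reap any published
         * timer.  atomic64_dec_and_test() is a fully ordered RMW op,
         * so the later read cannot be reordered before it.
         */
        if (atomic64_dec_and_test(&map->usercnt)) {
                t = READ_ONCE(timer->timer);
                /* cancel and free the timer if t is non-NULL */
        }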