Subject: Re: [PATCH v3 bpf-next] bpf: Explicitly zero-extend R0 after 32-bit cmpxchg
From: KP Singh
Date: Thu, 18 Feb 2021 00:12:18 +0100
To: Ilya Leoshkevich
Cc: Brendan Jackman, bpf, Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko, Florent Revest
List-ID: bpf@vger.kernel.org

On Wed, Feb 17, 2021 at 7:30 PM Ilya Leoshkevich wrote:
>
> On Wed, 2021-02-17 at 09:28 +0000, Brendan Jackman wrote:
> > As pointed out by Ilya and explained in the new comment, there's a
> > discrepancy between x86 and BPF CMPXCHG semantics: BPF always loads
> > the value from memory into r0, while x86 only does so when r0 and the
> > value in memory are different. The same issue affects s390.
> >
> > At first this might sound like pure semantics, but it makes a real
> > difference when the comparison is 32-bit, since the load will
> > zero-extend r0/rax.
> >
> > The fix is to explicitly zero-extend rax after doing such a
> > CMPXCHG. Since this problem affects multiple archs, this is done in
> > the verifier by patching in a BPF_ZEXT_REG instruction after every
> > 32-bit cmpxchg. Any archs that don't need such manual zero-extension
> > can do a look-ahead with insn_is_zext to skip the unnecessary mov.
> >
> > Reported-by: Ilya Leoshkevich
> > Fixes: 5ffa25502b5a ("bpf: Add instructions for atomic_[cmp]xchg")
> > Signed-off-by: Brendan Jackman
> > ---
> >
> > Differences v2->v3[1]:
> > - Moved patching into fixup_bpf_calls (patch incoming to rename this
> >   function)
> > - Added extra commentary on bpf_jit_needs_zext
> > - Added check to avoid adding a pointless zext(r0) if there's
> >   already one there.
> >
> > Difference v1->v2[1]: Now solved centrally in the verifier instead of
> > specifically for the x86 JIT. Thanks to Ilya and Daniel for the
> > suggestions!
> >
> > [1] v2: https://lore.kernel.org/bpf/08669818-c99d-0d30-e1db-53160c063611@iogearbox.net/T/#t
> >     v1: https://lore.kernel.org/bpf/d7ebaefb-bfd6-a441-3ff2-2fdfe699b1d2@iogearbox.net/T/#t
> >
> >  kernel/bpf/core.c                             |  4 +++
> >  kernel/bpf/verifier.c                         | 26 +++++++++++++++++++
> >  .../selftests/bpf/verifier/atomic_cmpxchg.c   | 25 ++++++++++++++++++
> >  .../selftests/bpf/verifier/atomic_or.c        | 26 +++++++++++++++++++
> >  4 files changed, 81 insertions(+)
>
> [...]
>
> > diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
> > index 16ba43352a5f..a0d19be13558 100644
> > --- a/kernel/bpf/verifier.c
> > +++ b/kernel/bpf/verifier.c
> > @@ -11662,6 +11662,32 @@ static int fixup_bpf_calls(struct bpf_verifier_env *env)
> >                         continue;
> >                 }
> >
> > +               /* BPF_CMPXCHG always loads a value into R0, therefore always
> > +                * zero-extends. However some archs' equivalent instruction only
> > +                * does this load when the comparison is successful. So here we
> > +                * add a BPF_ZEXT_REG after every 32-bit CMPXCHG, so that such
> > +                * archs' JITs don't need to deal with the issue. Archs that
> > +                * don't face this issue may use insn_is_zext to detect and skip
> > +                * the added instruction.
> > +                */
> > +               if (insn->code == (BPF_STX | BPF_W | BPF_ATOMIC) && insn->imm == BPF_CMPXCHG) {
> > +                       struct bpf_insn zext_patch[2] = { [1] = BPF_ZEXT_REG(BPF_REG_0) };
> > +
> > +                       if (!memcmp(&insn[1], &zext_patch[1], sizeof(struct bpf_insn)))
> > +                               /* Probably done by opt_subreg_zext_lo32_rnd_hi32. */
> > +                               continue;
> > +
>
> Isn't opt_subreg_zext_lo32_rnd_hi32() called after fixup_bpf_calls()?

Indeed, this check should not be needed.

> [...]
>