* [PATCH v3 bpf-next] bpf: Explicitly zero-extend R0 after 32-bit cmpxchg
From: Brendan Jackman @ 2021-02-17  9:28 UTC (permalink / raw)
  To: bpf
  Cc: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko, KP Singh,
	Florent Revest, Ilya Leoshkevich, Brendan Jackman

As pointed out by Ilya and explained in the new comment, there's a
discrepancy between x86 and BPF CMPXCHG semantics: BPF always loads
the value from memory into r0, while x86 only does so when r0 and the
value in memory are different. The same issue affects s390.

At first this might sound like a harmless difference in semantics, but
it makes a real difference when the comparison is 32-bit: the load
zero-extends r0/rax, so skipping it leaves stale data in the upper 32
bits of r0.
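
To make this concrete, here is a small user-space model of the two
behaviours described above (illustration only, not part of the patch;
the function names are made up and the x86 side simply follows the
description above):

  #include <stdint.h>
  #include <stdio.h>

  /* BPF semantics: r0 always receives the old memory value, so a BPF_W
   * cmpxchg always zero-extends r0.
   */
  static uint64_t bpf_cmpxchg32(uint32_t *ptr, uint64_t r0, uint32_t src)
  {
          uint32_t old = *ptr;

          if ((uint32_t)r0 == old)
                  *ptr = src;
          return old;         /* upper 32 bits of the result are zero */
  }

  /* x86 semantics as described above: rax is only written when the
   * comparison fails, so on success its upper 32 bits are left as-is.
   */
  static uint64_t x86_cmpxchg32(uint32_t *ptr, uint64_t rax, uint32_t src)
  {
          uint32_t old = *ptr;

          if ((uint32_t)rax == old) {
                  *ptr = src;
                  return rax; /* not reloaded: stale upper bits survive */
          }
          return old;         /* 32-bit load zero-extends */
  }

  int main(void)
  {
          uint32_t mem = 5;
          /* dirty upper bits, lower 32 bits equal to mem */
          uint64_t r0 = 0xbadc0de500000005ull;

          printf("bpf: %#llx\n", (unsigned long long)bpf_cmpxchg32(&mem, r0, 1));
          mem = 5;
          printf("x86: %#llx\n", (unsigned long long)x86_cmpxchg32(&mem, r0, 1));
          return 0;
  }

With a proper zero-extension after the 32-bit cmpxchg both variants
return 0x5 here; without it the x86 variant returns the full dirty
value.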

The fix is to explicitly zero-extend r0 after doing such a
CMPXCHG. Since this problem affects multiple archs, this is done in
the verifier by patching in a BPF_ZEXT_REG instruction after every
32-bit cmpxchg. Any arch that doesn't need this manual zero-extension
can do a look-ahead with insn_is_zext to skip the unnecessary mov.
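
As a rough sketch of that JIT-side look-ahead (not taken from any
in-tree JIT; the per-insn emit loop and the index variable i are
assumptions):

  /* In a hypothetical JIT's per-insn loop, after emitting the arch's
   * 32-bit cmpxchg whose native form already zero-extends:
   */
  if (insn->code == (BPF_STX | BPF_W | BPF_ATOMIC) &&
      insn->imm == BPF_CMPXCHG &&
      insn_is_zext(&insn[1])) {
          /* The next insn is the BPF_ZEXT_REG(R0) added by the
           * verifier; it is redundant on this arch, so consume it
           * without emitting anything. Reading insn[1] is safe here
           * because a verified program never ends with a cmpxchg.
           */
          i++;
  }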

Reported-by: Ilya Leoshkevich <iii@linux.ibm.com>
Fixes: 5ffa25502b5a ("bpf: Add instructions for atomic_[cmp]xchg")
Signed-off-by: Brendan Jackman <jackmanb@google.com>
---

Differences v2->v3[1]:
 - Moved patching into fixup_bpf_calls (patch incoming to rename this function)
 - Added extra commentary on bpf_jit_needs_zext
 - Added check to avoid adding a pointless zext(r0) if there's already one there.

Difference v1->v2[1]: Now solved centrally in the verifier instead of
  specifically for the x86 JIT. Thanks to Ilya and Daniel for the suggestions!

[1] v2: https://lore.kernel.org/bpf/08669818-c99d-0d30-e1db-53160c063611@iogearbox.net/T/#t
    v1: https://lore.kernel.org/bpf/d7ebaefb-bfd6-a441-3ff2-2fdfe699b1d2@iogearbox.net/T/#t

 kernel/bpf/core.c                             |  4 +++
 kernel/bpf/verifier.c                         | 26 +++++++++++++++++++
 .../selftests/bpf/verifier/atomic_cmpxchg.c   | 25 ++++++++++++++++++
 .../selftests/bpf/verifier/atomic_or.c        | 26 +++++++++++++++++++
 4 files changed, 81 insertions(+)

diff --git a/kernel/bpf/core.c b/kernel/bpf/core.c
index 0ae015ad1e05..dcf18612841b 100644
--- a/kernel/bpf/core.c
+++ b/kernel/bpf/core.c
@@ -2342,6 +2342,10 @@ bool __weak bpf_helper_changes_pkt_data(void *func)
 /* Return TRUE if the JIT backend wants verifier to enable sub-register usage
  * analysis code and wants explicit zero extension inserted by verifier.
  * Otherwise, return FALSE.
+ *
+ * The verifier inserts an explicit zero extension after 32-bit BPF_CMPXCHGs
+ * even if you don't override this. JITs that don't want these extra insns can
+ * detect them using insn_is_zext.
  */
 bool __weak bpf_jit_needs_zext(void)
 {
diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index 16ba43352a5f..a0d19be13558 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -11662,6 +11662,32 @@ static int fixup_bpf_calls(struct bpf_verifier_env *env)
 			continue;
 		}

+		/* BPF_CMPXCHG always loads a value into R0, therefore always
+		 * zero-extends. However some archs' equivalent instruction only
+		 * does this load when the comparison fails. So here we
+		 * add a BPF_ZEXT_REG after every 32-bit CMPXCHG, so that such
+		 * archs' JITs don't need to deal with the issue. Archs that
+		 * don't face this issue may use insn_is_zext to detect and skip
+		 * the added instruction.
+		 */
+		if (insn->code == (BPF_STX | BPF_W | BPF_ATOMIC) && insn->imm == BPF_CMPXCHG) {
+			struct bpf_insn zext_patch[2] = { [1] = BPF_ZEXT_REG(BPF_REG_0) };
+
+			if (!memcmp(&insn[1], &zext_patch[1], sizeof(struct bpf_insn)))
+				/* Probably done by opt_subreg_zext_lo32_rnd_hi32. */
+				continue;
+
+			zext_patch[0] = *insn;
+			new_prog = bpf_patch_insn_data(env, i + delta, zext_patch, 2);
+			if (!new_prog)
+				return -ENOMEM;
+
+			delta    += 1;
+			env->prog = prog = new_prog;
+			insn      = new_prog->insnsi + i + delta;
+			continue;
+		}
+
 		if (insn->code != (BPF_JMP | BPF_CALL))
 			continue;
 		if (insn->src_reg == BPF_PSEUDO_CALL)
diff --git a/tools/testing/selftests/bpf/verifier/atomic_cmpxchg.c b/tools/testing/selftests/bpf/verifier/atomic_cmpxchg.c
index 2efd8bcf57a1..6e52dfc64415 100644
--- a/tools/testing/selftests/bpf/verifier/atomic_cmpxchg.c
+++ b/tools/testing/selftests/bpf/verifier/atomic_cmpxchg.c
@@ -94,3 +94,28 @@
 	.result = REJECT,
 	.errstr = "invalid read from stack",
 },
+{
+	"BPF_W cmpxchg should zero top 32 bits",
+	.insns = {
+		/* r0 = U64_MAX; */
+		BPF_MOV64_IMM(BPF_REG_0, 0),
+		BPF_ALU64_IMM(BPF_SUB, BPF_REG_0, 1),
+		/* u64 val = r0; */
+		BPF_STX_MEM(BPF_DW, BPF_REG_10, BPF_REG_0, -8),
+		/* r0 = (u32)atomic_cmpxchg((u32 *)&val, r0, 1); */
+		BPF_MOV32_IMM(BPF_REG_1, 1),
+		BPF_ATOMIC_OP(BPF_W, BPF_CMPXCHG, BPF_REG_10, BPF_REG_1, -8),
+		/* r1 = 0x00000000FFFFFFFFull; */
+		BPF_MOV64_IMM(BPF_REG_1, 1),
+		BPF_ALU64_IMM(BPF_LSH, BPF_REG_1, 32),
+		BPF_ALU64_IMM(BPF_SUB, BPF_REG_1, 1),
+		/* if (r0 != r1) exit(1); */
+		BPF_JMP_REG(BPF_JEQ, BPF_REG_0, BPF_REG_1, 2),
+		BPF_MOV32_IMM(BPF_REG_0, 1),
+		BPF_EXIT_INSN(),
+		/* exit(0); */
+		BPF_MOV32_IMM(BPF_REG_0, 0),
+		BPF_EXIT_INSN(),
+	},
+	.result = ACCEPT,
+},
diff --git a/tools/testing/selftests/bpf/verifier/atomic_or.c b/tools/testing/selftests/bpf/verifier/atomic_or.c
index 70f982e1f9f0..0a08b99e6ddd 100644
--- a/tools/testing/selftests/bpf/verifier/atomic_or.c
+++ b/tools/testing/selftests/bpf/verifier/atomic_or.c
@@ -75,3 +75,29 @@
 	},
 	.result = ACCEPT,
 },
+{
+	"BPF_W atomic_fetch_or should zero top 32 bits",
+	.insns = {
+		/* r1 = U64_MAX; */
+		BPF_MOV64_IMM(BPF_REG_1, 0),
+		BPF_ALU64_IMM(BPF_SUB, BPF_REG_1, 1),
+		/* u64 val = r1; */
+		BPF_STX_MEM(BPF_DW, BPF_REG_10, BPF_REG_1, -8),
+		/* r1 = (u32)atomic_fetch_or((u32 *)&val, 2); */
+		BPF_MOV32_IMM(BPF_REG_1, 2),
+		BPF_ATOMIC_OP(BPF_W, BPF_OR | BPF_FETCH, BPF_REG_10, BPF_REG_1, -8),
+		/* r2 = 0x00000000FFFFFFFF; */
+		BPF_MOV64_IMM(BPF_REG_2, 1),
+		BPF_ALU64_IMM(BPF_LSH, BPF_REG_2, 32),
+		BPF_ALU64_IMM(BPF_SUB, BPF_REG_2, 1),
+		/* if (r2 != r1) exit(r1); */
+		BPF_JMP_REG(BPF_JEQ, BPF_REG_2, BPF_REG_1, 2),
+		/* exit with the bad value so it shows up in the retval */
+		BPF_MOV64_REG(BPF_REG_0, BPF_REG_1),
+		BPF_EXIT_INSN(),
+		/* exit(0); */
+		BPF_MOV32_IMM(BPF_REG_0, 0),
+		BPF_EXIT_INSN(),
+	},
+	.result = ACCEPT,
+},

base-commit: 45159b27637b0fef6d5ddb86fc7c46b13c77960f
--
2.30.0.478.g8a0d178c01-goog



* Re: [PATCH v3 bpf-next] bpf: Explicitly zero-extend R0 after 32-bit cmpxchg
From: Ilya Leoshkevich @ 2021-02-17 18:30 UTC (permalink / raw)
  To: Brendan Jackman, bpf
  Cc: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko, KP Singh,
	Florent Revest

On Wed, 2021-02-17 at 09:28 +0000, Brendan Jackman wrote:

[...]

> diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
> index 16ba43352a5f..a0d19be13558 100644
> --- a/kernel/bpf/verifier.c
> +++ b/kernel/bpf/verifier.c
> @@ -11662,6 +11662,32 @@ static int fixup_bpf_calls(struct bpf_verifier_env *env)
>                         continue;
>                 }
> 
> +               /* BPF_CMPXCHG always loads a value into R0, therefore always
> +                * zero-extends. However some archs' equivalent instruction only
> +                * does this load when the comparison fails. So here we
> +                * add a BPF_ZEXT_REG after every 32-bit CMPXCHG, so that such
> +                * archs' JITs don't need to deal with the issue. Archs that
> +                * don't face this issue may use insn_is_zext to detect and skip
> +                * the added instruction.
> +                */
> +               if (insn->code == (BPF_STX | BPF_W | BPF_ATOMIC) && insn->imm == BPF_CMPXCHG) {
> +                       struct bpf_insn zext_patch[2] = { [1] = BPF_ZEXT_REG(BPF_REG_0) };
> +
> +                       if (!memcmp(&insn[1], &zext_patch[1], sizeof(struct bpf_insn)))
> +                               /* Probably done by opt_subreg_zext_lo32_rnd_hi32. */
> +                               continue;
> +

Isn't opt_subreg_zext_lo32_rnd_hi32() called after fixup_bpf_calls()?

[...]



* Re: [PATCH v3 bpf-next] bpf: Explicitly zero-extend R0 after 32-bit cmpxchg
From: KP Singh @ 2021-02-17 23:12 UTC (permalink / raw)
  To: Ilya Leoshkevich
  Cc: Brendan Jackman, bpf, Alexei Starovoitov, Daniel Borkmann,
	Andrii Nakryiko, Florent Revest

On Wed, Feb 17, 2021 at 7:30 PM Ilya Leoshkevich <iii@linux.ibm.com> wrote:
>
> On Wed, 2021-02-17 at 09:28 +0000, Brendan Jackman wrote:
>
> [...]
>
> > diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
> > index 16ba43352a5f..a0d19be13558 100644
> > --- a/kernel/bpf/verifier.c
> > +++ b/kernel/bpf/verifier.c
> > @@ -11662,6 +11662,32 @@ static int fixup_bpf_calls(struct bpf_verifier_env *env)
> >                         continue;
> >                 }
> >
> > +               /* BPF_CMPXCHG always loads a value into R0, therefore always
> > +                * zero-extends. However some archs' equivalent instruction only
> > +                * does this load when the comparison fails. So here we
> > +                * add a BPF_ZEXT_REG after every 32-bit CMPXCHG, so that such
> > +                * archs' JITs don't need to deal with the issue. Archs that
> > +                * don't face this issue may use insn_is_zext to detect and skip
> > +                * the added instruction.
> > +                */
> > +               if (insn->code == (BPF_STX | BPF_W | BPF_ATOMIC) && insn->imm == BPF_CMPXCHG) {
> > +                       struct bpf_insn zext_patch[2] = { [1] = BPF_ZEXT_REG(BPF_REG_0) };
> > +
> > +                       if (!memcmp(&insn[1], &zext_patch[1], sizeof(struct bpf_insn)))
> > +                               /* Probably done by opt_subreg_zext_lo32_rnd_hi32. */
> > +                               continue;
> > +
>
> Isn't opt_subreg_zext_lo32_rnd_hi32() called after fixup_bpf_calls()?

Indeed, this check should not be needed.

>
> [...]
>


* Re: [PATCH v3 bpf-next] bpf: Explicitly zero-extend R0 after 32-bit cmpxchg
From: Brendan Jackman @ 2021-02-22 15:06 UTC (permalink / raw)
  To: KP Singh
  Cc: Ilya Leoshkevich, bpf, Alexei Starovoitov, Daniel Borkmann,
	Andrii Nakryiko, Florent Revest

On Thu, 18 Feb 2021 at 00:12, KP Singh <kpsingh@kernel.org> wrote:
>
> On Wed, Feb 17, 2021 at 7:30 PM Ilya Leoshkevich <iii@linux.ibm.com> wrote:
> >
> > On Wed, 2021-02-17 at 09:28 +0000, Brendan Jackman wrote:
> >
> > [...]
> >
> > > diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
> > > index 16ba43352a5f..a0d19be13558 100644
> > > --- a/kernel/bpf/verifier.c
> > > +++ b/kernel/bpf/verifier.c
> > > @@ -11662,6 +11662,32 @@ static int fixup_bpf_calls(struct bpf_verifier_env *env)
> > >                         continue;
> > >                 }
> > >
> > > +               /* BPF_CMPXCHG always loads a value into R0, therefore always
> > > +                * zero-extends. However some archs' equivalent instruction only
> > > +                * does this load when the comparison fails. So here we
> > > +                * add a BPF_ZEXT_REG after every 32-bit CMPXCHG, so that such
> > > +                * archs' JITs don't need to deal with the issue. Archs that
> > > +                * don't face this issue may use insn_is_zext to detect and skip
> > > +                * the added instruction.
> > > +                */
> > > +               if (insn->code == (BPF_STX | BPF_W | BPF_ATOMIC) && insn->imm == BPF_CMPXCHG) {
> > > +                       struct bpf_insn zext_patch[2] = { [1] = BPF_ZEXT_REG(BPF_REG_0) };
> > > +
> > > +                       if (!memcmp(&insn[1], &zext_patch[1], sizeof(struct bpf_insn)))
> > > +                               /* Probably done by opt_subreg_zext_lo32_rnd_hi32. */
> > > +                               continue;
> > > +
> >
> > Isn't opt_subreg_zext_lo32_rnd_hi32() called after fixup_bpf_calls()?
>
> Indeed, this check should not be needed.

Ah yep, right. Do you folks think I should keep the optimisation (i.e.
move this memcmp into opt_subreg_zext_lo32_rnd_hi32)? It feels like a
bit of a toss-up to me.


* Re: [PATCH v3 bpf-next] bpf: Explicitly zero-extend R0 after 32-bit cmpxchg
From: Ilya Leoshkevich @ 2021-02-22 15:51 UTC (permalink / raw)
  To: Brendan Jackman, KP Singh
  Cc: bpf, Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko,
	Florent Revest

On Mon, 2021-02-22 at 16:06 +0100, Brendan Jackman wrote:
> On Thu, 18 Feb 2021 at 00:12, KP Singh <kpsingh@kernel.org> wrote:
> > 
> > On Wed, Feb 17, 2021 at 7:30 PM Ilya Leoshkevich <iii@linux.ibm.com> wrote:
> > > 
> > > On Wed, 2021-02-17 at 09:28 +0000, Brendan Jackman wrote:
> > > 
> > > [...]
> > > 
> > > > diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
> > > > index 16ba43352a5f..a0d19be13558 100644
> > > > --- a/kernel/bpf/verifier.c
> > > > +++ b/kernel/bpf/verifier.c
> > > > @@ -11662,6 +11662,32 @@ static int fixup_bpf_calls(struct bpf_verifier_env *env)
> > > >                         continue;
> > > >                 }
> > > >
> > > > +               /* BPF_CMPXCHG always loads a value into R0, therefore always
> > > > +                * zero-extends. However some archs' equivalent instruction only
> > > > +                * does this load when the comparison fails. So here we
> > > > +                * add a BPF_ZEXT_REG after every 32-bit CMPXCHG, so that such
> > > > +                * archs' JITs don't need to deal with the issue. Archs that
> > > > +                * don't face this issue may use insn_is_zext to detect and skip
> > > > +                * the added instruction.
> > > > +                */
> > > > +               if (insn->code == (BPF_STX | BPF_W | BPF_ATOMIC) && insn->imm == BPF_CMPXCHG) {
> > > > +                       struct bpf_insn zext_patch[2] = { [1] = BPF_ZEXT_REG(BPF_REG_0) };
> > > > +
> > > > +                       if (!memcmp(&insn[1], &zext_patch[1], sizeof(struct bpf_insn)))
> > > > +                               /* Probably done by opt_subreg_zext_lo32_rnd_hi32. */
> > > > +                               continue;
> > > > +
> > > 
> > > Isn't opt_subreg_zext_lo32_rnd_hi32() called after fixup_bpf_calls()?
> > 
> > Indeed, this check should not be needed.
> 
> Ah yep, right. Do you folks think I should keep the optimisation (i.e.
> move this memcmp into opt_subreg_zext_lo32_rnd_hi32)? It feels like a
> bit of a toss-up to me.

It would be good to have this on s390. In "BPF_W cmpxchg should zero
top 32 bits", for example, I get:

   7: (c3) r0 = atomic_cmpxchg((u32 *)(r10 -8), r0, r1)
   8: (bc) w0 = w0
   9: (bc) w0 = w0

With the following adjustment (only briefly tested: survives 
test_verifier on s390):

--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -11677,8 +11677,9 @@ static int fixup_bpf_calls(struct bpf_verifier_env *env)
                if (insn->code == (BPF_STX | BPF_W | BPF_ATOMIC) && insn->imm == BPF_CMPXCHG) {
                        struct bpf_insn zext_patch[2] = { [1] = BPF_ZEXT_REG(BPF_REG_0) };
 
-                       if (!memcmp(&insn[1], &zext_patch[1], sizeof(struct bpf_insn)))
-                               /* Probably done by opt_subreg_zext_lo32_rnd_hi32. */
+                       aux = &env->insn_aux_data[i + delta];
+                       if (aux->zext_dst && bpf_jit_needs_zext())
+                               /* Will be done by opt_subreg_zext_lo32_rnd_hi32(). */
                                continue;
 
                        zext_patch[0] = *insn;

it becomes:

   7: (c3) r0 = atomic_cmpxchg((u32 *)(r10 -8), r0, r1)
   8: (bc) w0 = w0

Moving the check to opt_subreg_zext_lo32_rnd_hi32() is also an option;
I don't know which of the two is a better choice.


