* [PATCH 0/5] Atomics support for eBPF on powerpc
@ 2022-05-12 7:45 Hari Bathini
2022-05-12 7:45 ` [PATCH 1/5] bpf ppc64: add support for BPF_ATOMIC bitwise operations Hari Bathini
From: Hari Bathini @ 2022-05-12 7:45 UTC (permalink / raw)
To: bpf, linuxppc-dev
Cc: Michael Ellerman, Naveen N. Rao, Christophe Leroy, netdev,
Benjamin Herrenschmidt, Paul Mackerras, Alexei Starovoitov,
Daniel Borkmann, Andrii Nakryiko, Martin KaFai Lau, Song Liu,
Yonghong Song, John Fastabend, KP Singh, Jordan Niethe
This patchset adds atomic operations to the eBPF instruction set on
powerpc. The instructions that are added here can be summarised with
this list of kernel operations for ppc64:
* atomic[64]_[fetch_]add
* atomic[64]_[fetch_]and
* atomic[64]_[fetch_]or
* atomic[64]_[fetch_]xor
* atomic[64]_xchg
* atomic[64]_cmpxchg
and this list of kernel operations for ppc32:
* atomic_[fetch_]add
* atomic_[fetch_]and
* atomic_[fetch_]or
* atomic_[fetch_]xor
* atomic_xchg
* atomic_cmpxchg
The following are left out of scope for this effort:
* 64 bit operations on ppc32.
* Explicit memory barriers, 16 and 8 bit operations on both ppc32
& ppc64.
The first patch adds support for bitwise atomic operations on ppc64.
The next patch adds fetch variant support for these instructions. The
third patch adds support for xchg and cmpxchg atomic operations on
ppc64. Patch #4 adds support for 32-bit atomic bitwise operations on
ppc32. Patch #5 adds support for xchg and cmpxchg atomic operations
on ppc32.
Hari Bathini (5):
bpf ppc64: add support for BPF_ATOMIC bitwise operations
bpf ppc64: add support for atomic fetch operations
bpf ppc64: Add instructions for atomic_[cmp]xchg
bpf ppc32: add support for BPF_ATOMIC bitwise operations
bpf ppc32: Add instructions for atomic_[cmp]xchg
arch/powerpc/net/bpf_jit_comp32.c | 62 +++++++++++++++++-----
arch/powerpc/net/bpf_jit_comp64.c | 87 +++++++++++++++++++++----------
2 files changed, 108 insertions(+), 41 deletions(-)
--
2.35.1
* [PATCH 1/5] bpf ppc64: add support for BPF_ATOMIC bitwise operations
2022-05-12 7:45 [PATCH 0/5] Atomics support for eBPF on powerpc Hari Bathini
@ 2022-05-12 7:45 ` Hari Bathini
2022-05-12 7:45 ` [PATCH 2/5] bpf ppc64: add support for atomic fetch operations Hari Bathini
From: Hari Bathini @ 2022-05-12 7:45 UTC (permalink / raw)
To: bpf, linuxppc-dev
Cc: Michael Ellerman, Naveen N. Rao, Christophe Leroy, netdev,
Benjamin Herrenschmidt, Paul Mackerras, Alexei Starovoitov,
Daniel Borkmann, Andrii Nakryiko, Martin KaFai Lau, Song Liu,
Yonghong Song, John Fastabend, KP Singh, Jordan Niethe
Adding instructions for ppc64 for
atomic[64]_and
atomic[64]_or
atomic[64]_xor
Signed-off-by: Hari Bathini <hbathini@linux.ibm.com>
---
arch/powerpc/net/bpf_jit_comp64.c | 57 ++++++++++++++++---------------
1 file changed, 29 insertions(+), 28 deletions(-)
diff --git a/arch/powerpc/net/bpf_jit_comp64.c b/arch/powerpc/net/bpf_jit_comp64.c
index 183a73f8fcc4..b34ed88167f0 100644
--- a/arch/powerpc/net/bpf_jit_comp64.c
+++ b/arch/powerpc/net/bpf_jit_comp64.c
@@ -782,41 +782,42 @@ int bpf_jit_build_body(struct bpf_prog *fp, u32 *image, struct codegen_context *
* BPF_STX ATOMIC (atomic ops)
*/
case BPF_STX | BPF_ATOMIC | BPF_W:
- if (imm != BPF_ADD) {
- pr_err_ratelimited(
- "eBPF filter atomic op code %02x (@%d) unsupported\n",
- code, i);
- return -ENOTSUPP;
- }
-
- /* *(u32 *)(dst + off) += src */
-
- /* Get EA into TMP_REG_1 */
- EMIT(PPC_RAW_ADDI(tmp1_reg, dst_reg, off));
+ case BPF_STX | BPF_ATOMIC | BPF_DW:
+ /* Get offset into TMP_REG_1 */
+ EMIT(PPC_RAW_LI(tmp1_reg, off));
tmp_idx = ctx->idx * 4;
/* load value from memory into TMP_REG_2 */
- EMIT(PPC_RAW_LWARX(tmp2_reg, 0, tmp1_reg, 0));
- /* add value from src_reg into this */
- EMIT(PPC_RAW_ADD(tmp2_reg, tmp2_reg, src_reg));
- /* store result back */
- EMIT(PPC_RAW_STWCX(tmp2_reg, 0, tmp1_reg));
- /* we're done if this succeeded */
- PPC_BCC_SHORT(COND_NE, tmp_idx);
- break;
- case BPF_STX | BPF_ATOMIC | BPF_DW:
- if (imm != BPF_ADD) {
+ if (size == BPF_DW)
+ EMIT(PPC_RAW_LDARX(tmp2_reg, tmp1_reg, dst_reg, 0));
+ else
+ EMIT(PPC_RAW_LWARX(tmp2_reg, tmp1_reg, dst_reg, 0));
+
+ switch (imm) {
+ case BPF_ADD:
+ EMIT(PPC_RAW_ADD(tmp2_reg, tmp2_reg, src_reg));
+ break;
+ case BPF_AND:
+ EMIT(PPC_RAW_AND(tmp2_reg, tmp2_reg, src_reg));
+ break;
+ case BPF_OR:
+ EMIT(PPC_RAW_OR(tmp2_reg, tmp2_reg, src_reg));
+ break;
+ case BPF_XOR:
+ EMIT(PPC_RAW_XOR(tmp2_reg, tmp2_reg, src_reg));
+ break;
+ default:
pr_err_ratelimited(
"eBPF filter atomic op code %02x (@%d) unsupported\n",
code, i);
- return -ENOTSUPP;
+ return -EOPNOTSUPP;
}
- /* *(u64 *)(dst + off) += src */
- EMIT(PPC_RAW_ADDI(tmp1_reg, dst_reg, off));
- tmp_idx = ctx->idx * 4;
- EMIT(PPC_RAW_LDARX(tmp2_reg, 0, tmp1_reg, 0));
- EMIT(PPC_RAW_ADD(tmp2_reg, tmp2_reg, src_reg));
- EMIT(PPC_RAW_STDCX(tmp2_reg, 0, tmp1_reg));
+ /* store result back */
+ if (size == BPF_DW)
+ EMIT(PPC_RAW_STDCX(tmp2_reg, tmp1_reg, dst_reg));
+ else
+ EMIT(PPC_RAW_STWCX(tmp2_reg, tmp1_reg, dst_reg));
+ /* we're done if this succeeded */
PPC_BCC_SHORT(COND_NE, tmp_idx);
break;
--
2.35.1
* [PATCH 2/5] bpf ppc64: add support for atomic fetch operations
2022-05-12 7:45 [PATCH 0/5] Atomics support for eBPF on powerpc Hari Bathini
2022-05-12 7:45 ` [PATCH 1/5] bpf ppc64: add support for BPF_ATOMIC bitwise operations Hari Bathini
@ 2022-05-12 7:45 ` Hari Bathini
2022-05-12 7:45 ` [PATCH 3/5] bpf ppc64: Add instructions for atomic_[cmp]xchg Hari Bathini
From: Hari Bathini @ 2022-05-12 7:45 UTC (permalink / raw)
To: bpf, linuxppc-dev
Cc: Michael Ellerman, Naveen N. Rao, Christophe Leroy, netdev,
Benjamin Herrenschmidt, Paul Mackerras, Alexei Starovoitov,
Daniel Borkmann, Andrii Nakryiko, Martin KaFai Lau, Song Liu,
Yonghong Song, John Fastabend, KP Singh, Jordan Niethe
Adding instructions for ppc64 for
atomic[64]_fetch_add
atomic[64]_fetch_and
atomic[64]_fetch_or
atomic[64]_fetch_xor
Signed-off-by: Hari Bathini <hbathini@linux.ibm.com>
---
arch/powerpc/net/bpf_jit_comp64.c | 14 +++++++++++++-
1 file changed, 13 insertions(+), 1 deletion(-)
diff --git a/arch/powerpc/net/bpf_jit_comp64.c b/arch/powerpc/net/bpf_jit_comp64.c
index b34ed88167f0..504fa459f9f3 100644
--- a/arch/powerpc/net/bpf_jit_comp64.c
+++ b/arch/powerpc/net/bpf_jit_comp64.c
@@ -792,17 +792,25 @@ int bpf_jit_build_body(struct bpf_prog *fp, u32 *image, struct codegen_context *
else
EMIT(PPC_RAW_LWARX(tmp2_reg, tmp1_reg, dst_reg, 0));
+ /* Save old value in _R0 */
+ if (imm & BPF_FETCH)
+ EMIT(PPC_RAW_MR(_R0, tmp2_reg));
+
switch (imm) {
case BPF_ADD:
+ case BPF_ADD | BPF_FETCH:
EMIT(PPC_RAW_ADD(tmp2_reg, tmp2_reg, src_reg));
break;
case BPF_AND:
+ case BPF_AND | BPF_FETCH:
EMIT(PPC_RAW_AND(tmp2_reg, tmp2_reg, src_reg));
break;
case BPF_OR:
+ case BPF_OR | BPF_FETCH:
EMIT(PPC_RAW_OR(tmp2_reg, tmp2_reg, src_reg));
break;
case BPF_XOR:
+ case BPF_XOR | BPF_FETCH:
EMIT(PPC_RAW_XOR(tmp2_reg, tmp2_reg, src_reg));
break;
default:
@@ -812,13 +820,17 @@ int bpf_jit_build_body(struct bpf_prog *fp, u32 *image, struct codegen_context *
return -EOPNOTSUPP;
}
- /* store result back */
+ /* store new value */
if (size == BPF_DW)
EMIT(PPC_RAW_STDCX(tmp2_reg, tmp1_reg, dst_reg));
else
EMIT(PPC_RAW_STWCX(tmp2_reg, tmp1_reg, dst_reg));
/* we're done if this succeeded */
PPC_BCC_SHORT(COND_NE, tmp_idx);
+
+ /* For the BPF_FETCH variant, get old value into src_reg */
+ if (imm & BPF_FETCH)
+ EMIT(PPC_RAW_MR(src_reg, _R0));
break;
/*
--
2.35.1
* [PATCH 3/5] bpf ppc64: Add instructions for atomic_[cmp]xchg
2022-05-12 7:45 [PATCH 0/5] Atomics support for eBPF on powerpc Hari Bathini
2022-05-12 7:45 ` [PATCH 1/5] bpf ppc64: add support for BPF_ATOMIC bitwise operations Hari Bathini
2022-05-12 7:45 ` [PATCH 2/5] bpf ppc64: add support for atomic fetch operations Hari Bathini
@ 2022-05-12 7:45 ` Hari Bathini
2022-05-16 3:03 ` Russell Currey
2022-05-12 7:45 ` [PATCH 4/5] bpf ppc32: add support for BPF_ATOMIC bitwise operations Hari Bathini
From: Hari Bathini @ 2022-05-12 7:45 UTC (permalink / raw)
To: bpf, linuxppc-dev
Cc: Michael Ellerman, Naveen N. Rao, Christophe Leroy, netdev,
Benjamin Herrenschmidt, Paul Mackerras, Alexei Starovoitov,
Daniel Borkmann, Andrii Nakryiko, Martin KaFai Lau, Song Liu,
Yonghong Song, John Fastabend, KP Singh, Jordan Niethe
This adds two atomic opcodes BPF_XCHG and BPF_CMPXCHG on ppc64, both
of which include the BPF_FETCH flag. The kernel's atomic_cmpxchg
operation fundamentally has 3 operands, but we only have two register
fields. Therefore the operand we compare against (the kernel's API
calls it 'old') is hard-coded to be BPF_REG_R0. Also, kernel's
atomic_cmpxchg returns the previous value at dst_reg + off. JIT the
same for BPF too with return value put in BPF_REG_0.
BPF_REG_R0 = atomic_cmpxchg(dst_reg + off, BPF_REG_R0, src_reg);
Signed-off-by: Hari Bathini <hbathini@linux.ibm.com>
---
arch/powerpc/net/bpf_jit_comp64.c | 28 ++++++++++++++++++++++++----
1 file changed, 24 insertions(+), 4 deletions(-)
diff --git a/arch/powerpc/net/bpf_jit_comp64.c b/arch/powerpc/net/bpf_jit_comp64.c
index 504fa459f9f3..df9e20b22ccb 100644
--- a/arch/powerpc/net/bpf_jit_comp64.c
+++ b/arch/powerpc/net/bpf_jit_comp64.c
@@ -783,6 +783,9 @@ int bpf_jit_build_body(struct bpf_prog *fp, u32 *image, struct codegen_context *
*/
case BPF_STX | BPF_ATOMIC | BPF_W:
case BPF_STX | BPF_ATOMIC | BPF_DW:
+ u32 save_reg = tmp2_reg;
+ u32 ret_reg = src_reg;
+
/* Get offset into TMP_REG_1 */
EMIT(PPC_RAW_LI(tmp1_reg, off));
tmp_idx = ctx->idx * 4;
@@ -813,6 +816,24 @@ int bpf_jit_build_body(struct bpf_prog *fp, u32 *image, struct codegen_context *
case BPF_XOR | BPF_FETCH:
EMIT(PPC_RAW_XOR(tmp2_reg, tmp2_reg, src_reg));
break;
+ case BPF_CMPXCHG:
+ /*
+ * Return old value in BPF_REG_0 for BPF_CMPXCHG &
+ * in src_reg for other cases.
+ */
+ ret_reg = bpf_to_ppc(BPF_REG_0);
+
+ /* Compare with old value in BPF_R0 */
+ if (size == BPF_DW)
+ EMIT(PPC_RAW_CMPD(bpf_to_ppc(BPF_REG_0), tmp2_reg));
+ else
+ EMIT(PPC_RAW_CMPW(bpf_to_ppc(BPF_REG_0), tmp2_reg));
+ /* Don't set if different from old value */
+ PPC_BCC_SHORT(COND_NE, (ctx->idx + 3) * 4);
+ fallthrough;
+ case BPF_XCHG:
+ save_reg = src_reg;
+ break;
default:
pr_err_ratelimited(
"eBPF filter atomic op code %02x (@%d) unsupported\n",
@@ -822,15 +843,14 @@ int bpf_jit_build_body(struct bpf_prog *fp, u32 *image, struct codegen_context *
/* store new value */
if (size == BPF_DW)
- EMIT(PPC_RAW_STDCX(tmp2_reg, tmp1_reg, dst_reg));
+ EMIT(PPC_RAW_STDCX(save_reg, tmp1_reg, dst_reg));
else
- EMIT(PPC_RAW_STWCX(tmp2_reg, tmp1_reg, dst_reg));
+ EMIT(PPC_RAW_STWCX(save_reg, tmp1_reg, dst_reg));
/* we're done if this succeeded */
PPC_BCC_SHORT(COND_NE, tmp_idx);
- /* For the BPF_FETCH variant, get old value into src_reg */
if (imm & BPF_FETCH)
- EMIT(PPC_RAW_MR(src_reg, _R0));
+ EMIT(PPC_RAW_MR(ret_reg, _R0));
break;
/*
--
2.35.1
* [PATCH 4/5] bpf ppc32: add support for BPF_ATOMIC bitwise operations
2022-05-12 7:45 [PATCH 0/5] Atomics support for eBPF on powerpc Hari Bathini
2022-05-12 7:45 ` [PATCH 3/5] bpf ppc64: Add instructions for atomic_[cmp]xchg Hari Bathini
@ 2022-05-12 7:45 ` Hari Bathini
2022-05-13 6:58 ` Christophe Leroy
2022-05-12 7:45 ` [PATCH 5/5] bpf ppc32: Add instructions for atomic_[cmp]xchg Hari Bathini
2022-05-12 18:40 ` [PATCH 0/5] Atomics support for eBPF on powerpc Daniel Borkmann
From: Hari Bathini @ 2022-05-12 7:45 UTC (permalink / raw)
To: bpf, linuxppc-dev
Cc: Michael Ellerman, Naveen N. Rao, Christophe Leroy, netdev,
Benjamin Herrenschmidt, Paul Mackerras, Alexei Starovoitov,
Daniel Borkmann, Andrii Nakryiko, Martin KaFai Lau, Song Liu,
Yonghong Song, John Fastabend, KP Singh, Jordan Niethe
Adding instructions for ppc32 for
atomic_and
atomic_or
atomic_xor
atomic_fetch_add
atomic_fetch_and
atomic_fetch_or
atomic_fetch_xor
Signed-off-by: Hari Bathini <hbathini@linux.ibm.com>
---
arch/powerpc/net/bpf_jit_comp32.c | 45 +++++++++++++++++++++----------
1 file changed, 31 insertions(+), 14 deletions(-)
diff --git a/arch/powerpc/net/bpf_jit_comp32.c b/arch/powerpc/net/bpf_jit_comp32.c
index e46ed1e8c6ca..5604ae1b60ab 100644
--- a/arch/powerpc/net/bpf_jit_comp32.c
+++ b/arch/powerpc/net/bpf_jit_comp32.c
@@ -798,25 +798,42 @@ int bpf_jit_build_body(struct bpf_prog *fp, u32 *image, struct codegen_context *
* BPF_STX ATOMIC (atomic ops)
*/
case BPF_STX | BPF_ATOMIC | BPF_W:
- if (imm != BPF_ADD) {
- pr_err_ratelimited("eBPF filter atomic op code %02x (@%d) unsupported\n",
- code, i);
- return -ENOTSUPP;
- }
-
- /* *(u32 *)(dst + off) += src */
-
bpf_set_seen_register(ctx, tmp_reg);
/* Get offset into TMP_REG */
EMIT(PPC_RAW_LI(tmp_reg, off));
+ tmp_idx = ctx->idx * 4;
/* load value from memory into r0 */
EMIT(PPC_RAW_LWARX(_R0, tmp_reg, dst_reg, 0));
- /* add value from src_reg into this */
- EMIT(PPC_RAW_ADD(_R0, _R0, src_reg));
- /* store result back */
- EMIT(PPC_RAW_STWCX(_R0, tmp_reg, dst_reg));
- /* we're done if this succeeded */
- PPC_BCC_SHORT(COND_NE, (ctx->idx - 3) * 4);
+ switch (imm) {
+ case BPF_ADD:
+ case BPF_ADD | BPF_FETCH:
+ EMIT(PPC_RAW_ADD(_R0, _R0, src_reg));
+ goto atomic_ops;
+ case BPF_AND:
+ case BPF_AND | BPF_FETCH:
+ EMIT(PPC_RAW_AND(_R0, _R0, src_reg));
+ goto atomic_ops;
+ case BPF_OR:
+ case BPF_OR | BPF_FETCH:
+ EMIT(PPC_RAW_OR(_R0, _R0, src_reg));
+ goto atomic_ops;
+ case BPF_XOR:
+ case BPF_XOR | BPF_FETCH:
+ EMIT(PPC_RAW_XOR(_R0, _R0, src_reg));
+atomic_ops:
+ /* For the BPF_FETCH variant, get old data into src_reg */
+ if (imm & BPF_FETCH)
+ EMIT(PPC_RAW_LWARX(src_reg, tmp_reg, dst_reg, 0));
+ /* store result back */
+ EMIT(PPC_RAW_STWCX(_R0, tmp_reg, dst_reg));
+ /* we're done if this succeeded */
+ PPC_BCC_SHORT(COND_NE, tmp_idx);
+ break;
+ default:
+ pr_err_ratelimited("eBPF filter atomic op code %02x (@%d) unsupported\n",
+ code, i);
+ return -EOPNOTSUPP;
+ }
break;
case BPF_STX | BPF_ATOMIC | BPF_DW: /* *(u64 *)(dst + off) += src */
--
2.35.1
* [PATCH 5/5] bpf ppc32: Add instructions for atomic_[cmp]xchg
2022-05-12 7:45 [PATCH 0/5] Atomics support for eBPF on powerpc Hari Bathini
2022-05-12 7:45 ` [PATCH 4/5] bpf ppc32: add support for BPF_ATOMIC bitwise operations Hari Bathini
@ 2022-05-12 7:45 ` Hari Bathini
2022-05-13 7:50 ` Christophe Leroy
2022-05-12 18:40 ` [PATCH 0/5] Atomics support for eBPF on powerpc Daniel Borkmann
From: Hari Bathini @ 2022-05-12 7:45 UTC (permalink / raw)
To: bpf, linuxppc-dev
Cc: Michael Ellerman, Naveen N. Rao, Christophe Leroy, netdev,
Benjamin Herrenschmidt, Paul Mackerras, Alexei Starovoitov,
Daniel Borkmann, Andrii Nakryiko, Martin KaFai Lau, Song Liu,
Yonghong Song, John Fastabend, KP Singh, Jordan Niethe
This adds two atomic opcodes BPF_XCHG and BPF_CMPXCHG on ppc32, both
of which include the BPF_FETCH flag. The kernel's atomic_cmpxchg
operation fundamentally has 3 operands, but we only have two register
fields. Therefore the operand we compare against (the kernel's API
calls it 'old') is hard-coded to be BPF_REG_R0. Also, kernel's
atomic_cmpxchg returns the previous value at dst_reg + off. JIT the
same for BPF too with return value put in BPF_REG_0.
BPF_REG_R0 = atomic_cmpxchg(dst_reg + off, BPF_REG_R0, src_reg);
Signed-off-by: Hari Bathini <hbathini@linux.ibm.com>
---
arch/powerpc/net/bpf_jit_comp32.c | 17 +++++++++++++++++
1 file changed, 17 insertions(+)
diff --git a/arch/powerpc/net/bpf_jit_comp32.c b/arch/powerpc/net/bpf_jit_comp32.c
index 5604ae1b60ab..4690fd6e9e52 100644
--- a/arch/powerpc/net/bpf_jit_comp32.c
+++ b/arch/powerpc/net/bpf_jit_comp32.c
@@ -829,6 +829,23 @@ int bpf_jit_build_body(struct bpf_prog *fp, u32 *image, struct codegen_context *
/* we're done if this succeeded */
PPC_BCC_SHORT(COND_NE, tmp_idx);
break;
+ case BPF_CMPXCHG:
+ /* Compare with old value in BPF_REG_0 */
+ EMIT(PPC_RAW_CMPW(bpf_to_ppc(BPF_REG_0), _R0));
+ /* Don't set if different from old value */
+ PPC_BCC_SHORT(COND_NE, (ctx->idx + 3) * 4);
+ fallthrough;
+ case BPF_XCHG:
+ /* store new value */
+ EMIT(PPC_RAW_STWCX(src_reg, tmp_reg, dst_reg));
+ PPC_BCC_SHORT(COND_NE, tmp_idx);
+ /*
+ * Return old value in src_reg for BPF_XCHG &
+ * BPF_REG_0 for BPF_CMPXCHG.
+ */
+ EMIT(PPC_RAW_MR(imm == BPF_XCHG ? src_reg : bpf_to_ppc(BPF_REG_0),
+ _R0));
+ break;
default:
pr_err_ratelimited("eBPF filter atomic op code %02x (@%d) unsupported\n",
code, i);
--
2.35.1
* Re: [PATCH 0/5] Atomics support for eBPF on powerpc
2022-05-12 7:45 [PATCH 0/5] Atomics support for eBPF on powerpc Hari Bathini
2022-05-12 7:45 ` [PATCH 5/5] bpf ppc32: Add instructions for atomic_[cmp]xchg Hari Bathini
@ 2022-05-12 18:40 ` Daniel Borkmann
2022-05-13 6:37 ` Michael Ellerman
From: Daniel Borkmann @ 2022-05-12 18:40 UTC (permalink / raw)
To: Hari Bathini, bpf, linuxppc-dev
Cc: Michael Ellerman, Naveen N. Rao, Christophe Leroy, netdev,
Benjamin Herrenschmidt, Paul Mackerras, Alexei Starovoitov,
Andrii Nakryiko, Martin KaFai Lau, Song Liu, Yonghong Song,
John Fastabend, KP Singh, Jordan Niethe
On 5/12/22 9:45 AM, Hari Bathini wrote:
> This patchset adds atomic operations to the eBPF instruction set on
> powerpc. The instructions that are added here can be summarised with
> this list of kernel operations for ppc64:
>
> * atomic[64]_[fetch_]add
> * atomic[64]_[fetch_]and
> * atomic[64]_[fetch_]or
> * atomic[64]_[fetch_]xor
> * atomic[64]_xchg
> * atomic[64]_cmpxchg
>
> and this list of kernel operations for ppc32:
>
> * atomic_[fetch_]add
> * atomic_[fetch_]and
> * atomic_[fetch_]or
> * atomic_[fetch_]xor
> * atomic_xchg
> * atomic_cmpxchg
>
> The following are left out of scope for this effort:
>
> * 64 bit operations on ppc32.
> * Explicit memory barriers, 16 and 8 bit operations on both ppc32
> & ppc64.
>
> The first patch adds support for bitwise atomic operations on ppc64.
> The next patch adds fetch variant support for these instructions. The
> third patch adds support for xchg and cmpxchg atomic operations on
> ppc64. Patch #4 adds support for 32-bit atomic bitwise operations on
> ppc32. Patch #5 adds support for xchg and cmpxchg atomic operations
> on ppc32.
Thanks for adding these, Hari! I presume they'll get routed via Michael,
right? One thing that may be worth adding to the commit log as well is
the test result from test_bpf.ko given it has an extensive suite around
atomics useful for testing corner cases in JITs.
> Hari Bathini (5):
> bpf ppc64: add support for BPF_ATOMIC bitwise operations
> bpf ppc64: add support for atomic fetch operations
> bpf ppc64: Add instructions for atomic_[cmp]xchg
> bpf ppc32: add support for BPF_ATOMIC bitwise operations
> bpf ppc32: Add instructions for atomic_[cmp]xchg
>
> arch/powerpc/net/bpf_jit_comp32.c | 62 +++++++++++++++++-----
> arch/powerpc/net/bpf_jit_comp64.c | 87 +++++++++++++++++++++----------
> 2 files changed, 108 insertions(+), 41 deletions(-)
>
* Re: [PATCH 0/5] Atomics support for eBPF on powerpc
2022-05-12 18:40 ` [PATCH 0/5] Atomics support for eBPF on powerpc Daniel Borkmann
@ 2022-05-13 6:37 ` Michael Ellerman
From: Michael Ellerman @ 2022-05-13 6:37 UTC (permalink / raw)
To: Daniel Borkmann, Hari Bathini, bpf, linuxppc-dev
Cc: Naveen N. Rao, Christophe Leroy, netdev, Benjamin Herrenschmidt,
Paul Mackerras, Alexei Starovoitov, Andrii Nakryiko,
Martin KaFai Lau, Song Liu, Yonghong Song, John Fastabend,
KP Singh, Jordan Niethe
Daniel Borkmann <daniel@iogearbox.net> writes:
> On 5/12/22 9:45 AM, Hari Bathini wrote:
>> This patchset adds atomic operations to the eBPF instruction set on
>> powerpc. The instructions that are added here can be summarised with
>> this list of kernel operations for ppc64:
>>
>> * atomic[64]_[fetch_]add
>> * atomic[64]_[fetch_]and
>> * atomic[64]_[fetch_]or
>> * atomic[64]_[fetch_]xor
>> * atomic[64]_xchg
>> * atomic[64]_cmpxchg
>>
>> and this list of kernel operations for ppc32:
>>
>> * atomic_[fetch_]add
>> * atomic_[fetch_]and
>> * atomic_[fetch_]or
>> * atomic_[fetch_]xor
>> * atomic_xchg
>> * atomic_cmpxchg
>>
>> The following are left out of scope for this effort:
>>
>> * 64 bit operations on ppc32.
>> * Explicit memory barriers, 16 and 8 bit operations on both ppc32
>> & ppc64.
>>
>> The first patch adds support for bitwise atomic operations on ppc64.
>> The next patch adds fetch variant support for these instructions. The
>> third patch adds support for xchg and cmpxchg atomic operations on
>> ppc64. Patch #4 adds support for 32-bit atomic bitwise operations on
>> ppc32. Patch #5 adds support for xchg and cmpxchg atomic operations
>> on ppc32.
>
> Thanks for adding these, Hari! I presume they'll get routed via Michael,
> right?
Yeah I'm happy to take them if they are OK by you.
I do wonder if the BPF jit code should eventually move out of arch/, but
that's a discussion for another day.
> One thing that may be worth adding to the commit log as well is
> the test result from test_bpf.ko given it has an extensive suite around
> atomics useful for testing corner cases in JITs.
Yes please, test results make me feel much better about merging things :)
cheers
* Re: [PATCH 4/5] bpf ppc32: add support for BPF_ATOMIC bitwise operations
2022-05-12 7:45 ` [PATCH 4/5] bpf ppc32: add support for BPF_ATOMIC bitwise operations Hari Bathini
@ 2022-05-13 6:58 ` Christophe Leroy
From: Christophe Leroy @ 2022-05-13 6:58 UTC (permalink / raw)
To: Hari Bathini, bpf, linuxppc-dev
Cc: Michael Ellerman, Naveen N. Rao, netdev, Benjamin Herrenschmidt,
Paul Mackerras, Alexei Starovoitov, Daniel Borkmann,
Andrii Nakryiko, Martin KaFai Lau, Song Liu, Yonghong Song,
John Fastabend, KP Singh, Jordan Niethe
Le 12/05/2022 à 09:45, Hari Bathini a écrit :
> Adding instructions for ppc32 for
>
> atomic_and
> atomic_or
> atomic_xor
> atomic_fetch_add
> atomic_fetch_and
> atomic_fetch_or
> atomic_fetch_xor
>
> Signed-off-by: Hari Bathini <hbathini@linux.ibm.com>
> ---
> arch/powerpc/net/bpf_jit_comp32.c | 45 +++++++++++++++++++++----------
> 1 file changed, 31 insertions(+), 14 deletions(-)
>
> diff --git a/arch/powerpc/net/bpf_jit_comp32.c b/arch/powerpc/net/bpf_jit_comp32.c
> index e46ed1e8c6ca..5604ae1b60ab 100644
> --- a/arch/powerpc/net/bpf_jit_comp32.c
> +++ b/arch/powerpc/net/bpf_jit_comp32.c
> @@ -798,25 +798,42 @@ int bpf_jit_build_body(struct bpf_prog *fp, u32 *image, struct codegen_context *
> * BPF_STX ATOMIC (atomic ops)
> */
> case BPF_STX | BPF_ATOMIC | BPF_W:
> - if (imm != BPF_ADD) {
> - pr_err_ratelimited("eBPF filter atomic op code %02x (@%d) unsupported\n",
> - code, i);
> - return -ENOTSUPP;
> - }
> -
> - /* *(u32 *)(dst + off) += src */
> -
> bpf_set_seen_register(ctx, tmp_reg);
> /* Get offset into TMP_REG */
> EMIT(PPC_RAW_LI(tmp_reg, off));
> + tmp_idx = ctx->idx * 4;
> /* load value from memory into r0 */
> EMIT(PPC_RAW_LWARX(_R0, tmp_reg, dst_reg, 0));
> - /* add value from src_reg into this */
> - EMIT(PPC_RAW_ADD(_R0, _R0, src_reg));
> - /* store result back */
> - EMIT(PPC_RAW_STWCX(_R0, tmp_reg, dst_reg));
> - /* we're done if this succeeded */
> - PPC_BCC_SHORT(COND_NE, (ctx->idx - 3) * 4);
> + switch (imm) {
> + case BPF_ADD:
> + case BPF_ADD | BPF_FETCH:
> + EMIT(PPC_RAW_ADD(_R0, _R0, src_reg));
> + goto atomic_ops;
> + case BPF_AND:
> + case BPF_AND | BPF_FETCH:
> + EMIT(PPC_RAW_AND(_R0, _R0, src_reg));
> + goto atomic_ops;
> + case BPF_OR:
> + case BPF_OR | BPF_FETCH:
> + EMIT(PPC_RAW_OR(_R0, _R0, src_reg));
> + goto atomic_ops;
> + case BPF_XOR:
> + case BPF_XOR | BPF_FETCH:
> + EMIT(PPC_RAW_XOR(_R0, _R0, src_reg));
> +atomic_ops:
This looks like an odd construct.
The default case doesn't fall through, so the below part could go after
the switch and all cases could just break instead of goto atomic_ops.
> + /* For the BPF_FETCH variant, get old data into src_reg */
> + if (imm & BPF_FETCH)
> + EMIT(PPC_RAW_LWARX(src_reg, tmp_reg, dst_reg, 0));
I think this is wrong. By doing a new LWARX you kill the reservation
done by the previous one. If the data has changed between the first
LWARX and now, it will go undetected.
It should be a LWZX I believe.
But there is another problem: you clobber src_reg, then what happens if
STWCX fails and it loops back to tmp_idx ?
> + /* store result back */
> + EMIT(PPC_RAW_STWCX(_R0, tmp_reg, dst_reg));
> + /* we're done if this succeeded */
> + PPC_BCC_SHORT(COND_NE, tmp_idx);
> + break;
> + default:
> + pr_err_ratelimited("eBPF filter atomic op code %02x (@%d) unsupported\n",
> + code, i);
> + return -EOPNOTSUPP;
> + }
> break;
>
> case BPF_STX | BPF_ATOMIC | BPF_DW: /* *(u64 *)(dst + off) += src */
* Re: [PATCH 5/5] bpf ppc32: Add instructions for atomic_[cmp]xchg
2022-05-12 7:45 ` [PATCH 5/5] bpf ppc32: Add instructions for atomic_[cmp]xchg Hari Bathini
@ 2022-05-13 7:50 ` Christophe Leroy
From: Christophe Leroy @ 2022-05-13 7:50 UTC (permalink / raw)
To: Hari Bathini, bpf, linuxppc-dev
Cc: Michael Ellerman, Naveen N. Rao, netdev, Benjamin Herrenschmidt,
Paul Mackerras, Alexei Starovoitov, Daniel Borkmann,
Andrii Nakryiko, Martin KaFai Lau, Song Liu, Yonghong Song,
John Fastabend, KP Singh, Jordan Niethe
Le 12/05/2022 à 09:45, Hari Bathini a écrit :
> This adds two atomic opcodes BPF_XCHG and BPF_CMPXCHG on ppc32, both
> of which include the BPF_FETCH flag. The kernel's atomic_cmpxchg
> operation fundamentally has 3 operands, but we only have two register
> fields. Therefore the operand we compare against (the kernel's API
> calls it 'old') is hard-coded to be BPF_REG_R0. Also, kernel's
> atomic_cmpxchg returns the previous value at dst_reg + off. JIT the
> same for BPF too with return value put in BPF_REG_0.
>
> BPF_REG_R0 = atomic_cmpxchg(dst_reg + off, BPF_REG_R0, src_reg);
Ah, now we mix the xchgs with the bitwise operations. OK, I understand
better that goto atomic_ops in the previous patch then. But it now
becomes harder to read and follow.
I think it would be cleaner to separate the bitwise operations from
this completely, even if it duplicates half a dozen lines.
>
> Signed-off-by: Hari Bathini <hbathini@linux.ibm.com>
> ---
> arch/powerpc/net/bpf_jit_comp32.c | 17 +++++++++++++++++
> 1 file changed, 17 insertions(+)
>
> diff --git a/arch/powerpc/net/bpf_jit_comp32.c b/arch/powerpc/net/bpf_jit_comp32.c
> index 5604ae1b60ab..4690fd6e9e52 100644
> --- a/arch/powerpc/net/bpf_jit_comp32.c
> +++ b/arch/powerpc/net/bpf_jit_comp32.c
> @@ -829,6 +829,23 @@ int bpf_jit_build_body(struct bpf_prog *fp, u32 *image, struct codegen_context *
> /* we're done if this succeeded */
> PPC_BCC_SHORT(COND_NE, tmp_idx);
> break;
> + case BPF_CMPXCHG:
> + /* Compare with old value in BPF_REG_0 */
> + EMIT(PPC_RAW_CMPW(bpf_to_ppc(BPF_REG_0), _R0));
> + /* Don't set if different from old value */
> + PPC_BCC_SHORT(COND_NE, (ctx->idx + 3) * 4);
> + fallthrough;
> + case BPF_XCHG:
> + /* store new value */
> + EMIT(PPC_RAW_STWCX(src_reg, tmp_reg, dst_reg));
> + PPC_BCC_SHORT(COND_NE, tmp_idx);
> + /*
> + * Return old value in src_reg for BPF_XCHG &
> + * BPF_REG_0 for BPF_CMPXCHG.
> + */
> + EMIT(PPC_RAW_MR(imm == BPF_XCHG ? src_reg : bpf_to_ppc(BPF_REG_0),
> + _R0));
If the line spreads over two lines, the compact form is probably not
worth it. It would be more readable as:
if (imm == BPF_XCHG)
	EMIT(PPC_RAW_MR(src_reg, _R0));
else
	EMIT(PPC_RAW_MR(bpf_to_ppc(BPF_REG_0), _R0));
At the end, it's probably even more readable if you separate both cases
completely:
case BPF_CMPXCHG:
	/* Compare with old value in BPF_REG_0 */
	EMIT(PPC_RAW_CMPW(bpf_to_ppc(BPF_REG_0), _R0));
	/* Don't set if different from old value */
	PPC_BCC_SHORT(COND_NE, (ctx->idx + 3) * 4);
	/* store new value */
	EMIT(PPC_RAW_STWCX(src_reg, tmp_reg, dst_reg));
	PPC_BCC_SHORT(COND_NE, tmp_idx);
	/* Return old value in BPF_REG_0 */
	EMIT(PPC_RAW_MR(bpf_to_ppc(BPF_REG_0), _R0));
	break;
case BPF_XCHG:
	/* store new value */
	EMIT(PPC_RAW_STWCX(src_reg, tmp_reg, dst_reg));
	PPC_BCC_SHORT(COND_NE, tmp_idx);
	/* Return old value in src_reg */
	EMIT(PPC_RAW_MR(src_reg, _R0));
	break;
> + break;
> default:
> pr_err_ratelimited("eBPF filter atomic op code %02x (@%d) unsupported\n",
> code, i);
* Re: [PATCH 3/5] bpf ppc64: Add instructions for atomic_[cmp]xchg
2022-05-12 7:45 ` [PATCH 3/5] bpf ppc64: Add instructions for atomic_[cmp]xchg Hari Bathini
@ 2022-05-16 3:03 ` Russell Currey
From: Russell Currey @ 2022-05-16 3:03 UTC (permalink / raw)
To: Hari Bathini, bpf, linuxppc-dev
Cc: Song Liu, Daniel Borkmann, John Fastabend, Alexei Starovoitov,
Andrii Nakryiko, netdev, Paul Mackerras, Naveen N. Rao,
Yonghong Song, KP Singh, Jordan Niethe, Martin KaFai Lau
On Thu, 2022-05-12 at 13:15 +0530, Hari Bathini wrote:
> This adds two atomic opcodes BPF_XCHG and BPF_CMPXCHG on ppc64, both
> of which include the BPF_FETCH flag. The kernel's atomic_cmpxchg
> operation fundamentally has 3 operands, but we only have two register
> fields. Therefore the operand we compare against (the kernel's API
> calls it 'old') is hard-coded to be BPF_REG_R0. Also, kernel's
> atomic_cmpxchg returns the previous value at dst_reg + off. JIT the
> same for BPF too with return value put in BPF_REG_0.
>
> BPF_REG_R0 = atomic_cmpxchg(dst_reg + off, BPF_REG_R0, src_reg);
>
> Signed-off-by: Hari Bathini <hbathini@linux.ibm.com>
> ---
> arch/powerpc/net/bpf_jit_comp64.c | 28 ++++++++++++++++++++++++----
> 1 file changed, 24 insertions(+), 4 deletions(-)
>
> diff --git a/arch/powerpc/net/bpf_jit_comp64.c b/arch/powerpc/net/bpf_jit_comp64.c
> index 504fa459f9f3..df9e20b22ccb 100644
> --- a/arch/powerpc/net/bpf_jit_comp64.c
> +++ b/arch/powerpc/net/bpf_jit_comp64.c
> @@ -783,6 +783,9 @@ int bpf_jit_build_body(struct bpf_prog *fp, u32 *image, struct codegen_context *
> */
> case BPF_STX | BPF_ATOMIC | BPF_W:
> case BPF_STX | BPF_ATOMIC | BPF_DW:
> + u32 save_reg = tmp2_reg;
> + u32 ret_reg = src_reg;
Hi Hari,
Some compilers[0][1] don't like these late declarations after case
labels:
arch/powerpc/net/bpf_jit_comp64.c: In function ‘bpf_jit_build_body’:
arch/powerpc/net/bpf_jit_comp64.c:781:4: error: a label can only be
part of a statement and a declaration is not a statement
u32 save_reg = tmp2_reg;
^~~
arch/powerpc/net/bpf_jit_comp64.c:782:4: error: expected expression
before ‘u32’
u32 ret_reg = src_reg;
^~~
arch/powerpc/net/bpf_jit_comp64.c:819:5: error: ‘ret_reg’ undeclared
(first use in this function); did you mean ‘dst_reg’?
ret_reg = bpf_to_ppc(BPF_REG_0);
Adding a semicolon fixes the first issue, i.e.
case BPF_STX | BPF_ATOMIC | BPF_DW: ;
but then it just complains about mixed declarations and code instead.
So you should declare save_reg and ret_reg at the beginning of the for
loop like the rest of the variables.
- Russell
[0]: gcc 5.5.0
https://github.com/ruscur/linux-ci/runs/6418546193?check_suite_focus=true#step:4:122
[1]: clang 12.0
https://github.com/ruscur/linux-ci/runs/6418545338?check_suite_focus=true#step:4:117
> +
> /* Get offset into TMP_REG_1 */
> EMIT(PPC_RAW_LI(tmp1_reg, off));
> tmp_idx = ctx->idx * 4;
> @@ -813,6 +816,24 @@ int bpf_jit_build_body(struct bpf_prog *fp, u32 *image, struct codegen_context *
> case BPF_XOR | BPF_FETCH:
> EMIT(PPC_RAW_XOR(tmp2_reg, tmp2_reg, src_reg));
> break;
> + case BPF_CMPXCHG:
> + /*
> + * Return old value in BPF_REG_0 for BPF_CMPXCHG &
> + * in src_reg for other cases.
> + */
> + ret_reg = bpf_to_ppc(BPF_REG_0);
> +
> + /* Compare with old value in BPF_R0 */
> + if (size == BPF_DW)
> + EMIT(PPC_RAW_CMPD(bpf_to_ppc(BPF_REG_0), tmp2_reg));
> + else
> + EMIT(PPC_RAW_CMPW(bpf_to_ppc(BPF_REG_0), tmp2_reg));
> + /* Don't set if different from old value */
> + PPC_BCC_SHORT(COND_NE, (ctx->idx + 3) * 4);
> + fallthrough;
> + case BPF_XCHG:
> + save_reg = src_reg;
> + break;
> default:
> pr_err_ratelimited(
> "eBPF filter atomic op code %02x (@%d) unsupported\n",
> code, i);
> @@ -822,15 +843,14 @@ int bpf_jit_build_body(struct bpf_prog *fp, u32 *image, struct codegen_context *
>
> /* store new value */
> if (size == BPF_DW)
> - EMIT(PPC_RAW_STDCX(tmp2_reg, tmp1_reg, dst_reg));
> + EMIT(PPC_RAW_STDCX(save_reg, tmp1_reg, dst_reg));
> else
> - EMIT(PPC_RAW_STWCX(tmp2_reg, tmp1_reg, dst_reg));
> + EMIT(PPC_RAW_STWCX(save_reg, tmp1_reg, dst_reg));
> /* we're done if this succeeded */
> PPC_BCC_SHORT(COND_NE, tmp_idx);
>
> - /* For the BPF_FETCH variant, get old value into src_reg */
> if (imm & BPF_FETCH)
> - EMIT(PPC_RAW_MR(src_reg, _R0));
> + EMIT(PPC_RAW_MR(ret_reg, _R0));
> break;
>
> /*