* [PATCH bpf-next v4 0/3] Enlarge offset check value in bpf_skb_load_bytes
@ 2022-04-16 10:57 Liu Jian
2022-04-16 10:57 ` [PATCH bpf-next v4 1/3] net: Enlarge offset check value from 0xffff to INT_MAX " Liu Jian
` (3 more replies)
0 siblings, 4 replies; 5+ messages in thread
From: Liu Jian @ 2022-04-16 10:57 UTC (permalink / raw)
To: ast, daniel, andrii, kafai, songliubraving, yhs, john.fastabend,
kpsingh, davem, kuba, sdf, netdev, bpf, pabeni
Cc: liujian56
The total data length of an skb's frags plus frag_list may exceed 0xffff,
and skb_header_pointer() cannot handle a negative offset.
So use INT_MAX instead to check the validity of the offset.
A test case is added for the change.
Liu Jian (3):
net: Enlarge offset check value from 0xffff to INT_MAX in
bpf_skb_load_bytes
net: change skb_ensure_writable()'s write_len param to unsigned int
type
selftests: bpf: add test for skb_load_bytes
include/linux/skbuff.h | 2 +-
net/core/filter.c | 4 +-
net/core/skbuff.c | 2 +-
.../selftests/bpf/prog_tests/skb_load_bytes.c | 45 +++++++++++++++++++
.../selftests/bpf/progs/skb_load_bytes.c | 19 ++++++++
5 files changed, 68 insertions(+), 4 deletions(-)
create mode 100644 tools/testing/selftests/bpf/prog_tests/skb_load_bytes.c
create mode 100644 tools/testing/selftests/bpf/progs/skb_load_bytes.c
--
2.17.1
* [PATCH bpf-next v4 1/3] net: Enlarge offset check value from 0xffff to INT_MAX in bpf_skb_load_bytes
2022-04-16 10:57 [PATCH bpf-next v4 0/3] Enlarge offset check value in bpf_skb_load_bytes Liu Jian
@ 2022-04-16 10:57 ` Liu Jian
2022-04-16 10:58 ` [PATCH bpf-next v4 2/3] net: change skb_ensure_writable()'s write_len param to unsigned int type Liu Jian
` (2 subsequent siblings)
3 siblings, 0 replies; 5+ messages in thread
From: Liu Jian @ 2022-04-16 10:57 UTC (permalink / raw)
To: ast, daniel, andrii, kafai, songliubraving, yhs, john.fastabend,
kpsingh, davem, kuba, sdf, netdev, bpf, pabeni
Cc: liujian56
The total data length of an skb's frags plus frag_list may exceed 0xffff,
and skb_header_pointer() cannot handle a negative offset.
So use INT_MAX instead to check the validity of the offset.
Apply the same change to the related function bpf_skb_store_bytes().
Fixes: 05c74e5e53f6 ("bpf: add bpf_skb_load_bytes helper")
Signed-off-by: Liu Jian <liujian56@huawei.com>
Acked-by: Song Liu <songliubraving@fb.com>
---
v3->v4: delete "|| len > INT_MAX"
net/core/filter.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/net/core/filter.c b/net/core/filter.c
index 64470a727ef7..966796b345e7 100644
--- a/net/core/filter.c
+++ b/net/core/filter.c
@@ -1687,7 +1687,7 @@ BPF_CALL_5(bpf_skb_store_bytes, struct sk_buff *, skb, u32, offset,
if (unlikely(flags & ~(BPF_F_RECOMPUTE_CSUM | BPF_F_INVALIDATE_HASH)))
return -EINVAL;
- if (unlikely(offset > 0xffff))
+ if (unlikely(offset > INT_MAX))
return -EFAULT;
if (unlikely(bpf_try_make_writable(skb, offset + len)))
return -EFAULT;
@@ -1722,7 +1722,7 @@ BPF_CALL_4(bpf_skb_load_bytes, const struct sk_buff *, skb, u32, offset,
{
void *ptr;
- if (unlikely(offset > 0xffff))
+ if (unlikely(offset > INT_MAX))
goto err_clear;
ptr = skb_header_pointer(skb, offset, len, to);
--
2.17.1
* [PATCH bpf-next v4 2/3] net: change skb_ensure_writable()'s write_len param to unsigned int type
2022-04-16 10:57 [PATCH bpf-next v4 0/3] Enlarge offset check value in bpf_skb_load_bytes Liu Jian
2022-04-16 10:57 ` [PATCH bpf-next v4 1/3] net: Enlarge offset check value from 0xffff to INT_MAX " Liu Jian
@ 2022-04-16 10:58 ` Liu Jian
2022-04-16 10:58 ` [PATCH bpf-next v4 3/3] selftests: bpf: add test for skb_load_bytes Liu Jian
2022-04-20 21:50 ` [PATCH bpf-next v4 0/3] Enlarge offset check value in bpf_skb_load_bytes patchwork-bot+netdevbpf
3 siblings, 0 replies; 5+ messages in thread
From: Liu Jian @ 2022-04-16 10:58 UTC (permalink / raw)
To: ast, daniel, andrii, kafai, songliubraving, yhs, john.fastabend,
kpsingh, davem, kuba, sdf, netdev, bpf, pabeni
Cc: liujian56
The length parameters of both pskb_may_pull() and skb_clone_writable()
are already of type unsigned int.
Therefore, change skb_ensure_writable()'s write_len parameter to
unsigned int as well.
Signed-off-by: Liu Jian <liujian56@huawei.com>
Acked-by: Song Liu <songliubraving@fb.com>
---
include/linux/skbuff.h | 2 +-
net/core/skbuff.c | 2 +-
2 files changed, 2 insertions(+), 2 deletions(-)
diff --git a/include/linux/skbuff.h b/include/linux/skbuff.h
index 3a30cae8b0a5..fe8990ce52a8 100644
--- a/include/linux/skbuff.h
+++ b/include/linux/skbuff.h
@@ -3886,7 +3886,7 @@ struct sk_buff *skb_segment(struct sk_buff *skb, netdev_features_t features);
struct sk_buff *skb_segment_list(struct sk_buff *skb, netdev_features_t features,
unsigned int offset);
struct sk_buff *skb_vlan_untag(struct sk_buff *skb);
-int skb_ensure_writable(struct sk_buff *skb, int write_len);
+int skb_ensure_writable(struct sk_buff *skb, unsigned int write_len);
int __skb_vlan_pop(struct sk_buff *skb, u16 *vlan_tci);
int skb_vlan_pop(struct sk_buff *skb);
int skb_vlan_push(struct sk_buff *skb, __be16 vlan_proto, u16 vlan_tci);
diff --git a/net/core/skbuff.c b/net/core/skbuff.c
index 30b523fa4ad2..a84e00e44ad2 100644
--- a/net/core/skbuff.c
+++ b/net/core/skbuff.c
@@ -5601,7 +5601,7 @@ struct sk_buff *skb_vlan_untag(struct sk_buff *skb)
}
EXPORT_SYMBOL(skb_vlan_untag);
-int skb_ensure_writable(struct sk_buff *skb, int write_len)
+int skb_ensure_writable(struct sk_buff *skb, unsigned int write_len)
{
if (!pskb_may_pull(skb, write_len))
return -ENOMEM;
--
2.17.1
* [PATCH bpf-next v4 3/3] selftests: bpf: add test for skb_load_bytes
2022-04-16 10:57 [PATCH bpf-next v4 0/3] Enlarge offset check value in bpf_skb_load_bytes Liu Jian
2022-04-16 10:57 ` [PATCH bpf-next v4 1/3] net: Enlarge offset check value from 0xffff to INT_MAX " Liu Jian
2022-04-16 10:58 ` [PATCH bpf-next v4 2/3] net: change skb_ensure_writable()'s write_len param to unsigned int type Liu Jian
@ 2022-04-16 10:58 ` Liu Jian
2022-04-20 21:50 ` [PATCH bpf-next v4 0/3] Enlarge offset check value in bpf_skb_load_bytes patchwork-bot+netdevbpf
3 siblings, 0 replies; 5+ messages in thread
From: Liu Jian @ 2022-04-16 10:58 UTC (permalink / raw)
To: ast, daniel, andrii, kafai, songliubraving, yhs, john.fastabend,
kpsingh, davem, kuba, sdf, netdev, bpf, pabeni
Cc: liujian56
Use bpf_prog_test_run_opts() to test the skb_load_bytes() function,
covering both an offset greater than INT_MAX and a normal offset.
Signed-off-by: Liu Jian <liujian56@huawei.com>
Acked-by: Song Liu <songliubraving@fb.com>
---
.../selftests/bpf/prog_tests/skb_load_bytes.c | 45 +++++++++++++++++++
.../selftests/bpf/progs/skb_load_bytes.c | 19 ++++++++
2 files changed, 64 insertions(+)
create mode 100644 tools/testing/selftests/bpf/prog_tests/skb_load_bytes.c
create mode 100644 tools/testing/selftests/bpf/progs/skb_load_bytes.c
diff --git a/tools/testing/selftests/bpf/prog_tests/skb_load_bytes.c b/tools/testing/selftests/bpf/prog_tests/skb_load_bytes.c
new file mode 100644
index 000000000000..d7f83c0a40a5
--- /dev/null
+++ b/tools/testing/selftests/bpf/prog_tests/skb_load_bytes.c
@@ -0,0 +1,45 @@
+// SPDX-License-Identifier: GPL-2.0
+#include <test_progs.h>
+#include <network_helpers.h>
+#include "skb_load_bytes.skel.h"
+
+void test_skb_load_bytes(void)
+{
+ struct skb_load_bytes *skel;
+ int err, prog_fd, test_result;
+ struct __sk_buff skb = { 0 };
+
+ LIBBPF_OPTS(bpf_test_run_opts, tattr,
+ .data_in = &pkt_v4,
+ .data_size_in = sizeof(pkt_v4),
+ .ctx_in = &skb,
+ .ctx_size_in = sizeof(skb),
+ );
+
+ skel = skb_load_bytes__open_and_load();
+ if (!ASSERT_OK_PTR(skel, "skel_open_and_load"))
+ return;
+
+ prog_fd = bpf_program__fd(skel->progs.skb_process);
+ if (!ASSERT_GE(prog_fd, 0, "prog_fd"))
+ goto out;
+
+ skel->bss->load_offset = (uint32_t)(-1);
+ err = bpf_prog_test_run_opts(prog_fd, &tattr);
+ if (!ASSERT_OK(err, "bpf_prog_test_run_opts"))
+ goto out;
+ test_result = skel->bss->test_result;
+ if (!ASSERT_EQ(test_result, -EFAULT, "offset -1"))
+ goto out;
+
+ skel->bss->load_offset = (uint32_t)10;
+ err = bpf_prog_test_run_opts(prog_fd, &tattr);
+ if (!ASSERT_OK(err, "bpf_prog_test_run_opts"))
+ goto out;
+ test_result = skel->bss->test_result;
+ if (!ASSERT_EQ(test_result, 0, "offset 10"))
+ goto out;
+
+out:
+ skb_load_bytes__destroy(skel);
+}
diff --git a/tools/testing/selftests/bpf/progs/skb_load_bytes.c b/tools/testing/selftests/bpf/progs/skb_load_bytes.c
new file mode 100644
index 000000000000..e4252fd973be
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/skb_load_bytes.c
@@ -0,0 +1,19 @@
+// SPDX-License-Identifier: GPL-2.0
+
+#include <linux/bpf.h>
+#include <bpf/bpf_helpers.h>
+
+char _license[] SEC("license") = "GPL";
+
+__u32 load_offset = 0;
+int test_result = 0;
+
+SEC("tc")
+int skb_process(struct __sk_buff *skb)
+{
+ char buf[16];
+
+ test_result = bpf_skb_load_bytes(skb, load_offset, buf, 10);
+
+ return 0;
+}
--
2.17.1
* Re: [PATCH bpf-next v4 0/3] Enlarge offset check value in bpf_skb_load_bytes
2022-04-16 10:57 [PATCH bpf-next v4 0/3] Enlarge offset check value in bpf_skb_load_bytes Liu Jian
` (2 preceding siblings ...)
2022-04-16 10:58 ` [PATCH bpf-next v4 3/3] selftests: bpf: add test for skb_load_bytes Liu Jian
@ 2022-04-20 21:50 ` patchwork-bot+netdevbpf
3 siblings, 0 replies; 5+ messages in thread
From: patchwork-bot+netdevbpf @ 2022-04-20 21:50 UTC (permalink / raw)
To: Liu Jian
Cc: ast, daniel, andrii, kafai, songliubraving, yhs, john.fastabend,
kpsingh, davem, kuba, sdf, netdev, bpf, pabeni
Hello:
This series was applied to bpf/bpf-next.git (master)
by Daniel Borkmann <daniel@iogearbox.net>:
On Sat, 16 Apr 2022 18:57:58 +0800 you wrote:
> The data length of skb frags + frag_list may be greater than 0xffff,
> and skb_header_pointer can not handle negative offset.
> So here INT_MAX is used to check the validity of offset.
>
> And add the test case for the change.
>
> Liu Jian (3):
> net: Enlarge offset check value from 0xffff to INT_MAX in
> bpf_skb_load_bytes
> net: change skb_ensure_writable()'s write_len param to unsigned int
> type
> selftests: bpf: add test for skb_load_bytes
>
> [...]
Here is the summary with links:
- [bpf-next,v4,1/3] net: Enlarge offset check value from 0xffff to INT_MAX in bpf_skb_load_bytes
https://git.kernel.org/bpf/bpf-next/c/45969b4152c1
- [bpf-next,v4,2/3] net: change skb_ensure_writable()'s write_len param to unsigned int type
https://git.kernel.org/bpf/bpf-next/c/92ece28072f1
- [bpf-next,v4,3/3] selftests: bpf: add test for skb_load_bytes
https://git.kernel.org/bpf/bpf-next/c/127e7dca427b
You are awesome, thank you!
--
Deet-doot-dot, I am a bot.
https://korg.docs.kernel.org/patchwork/pwbot.html