* [PATCH bpf v2 0/9] fixes for bpf map iterator
@ 2022-08-10 8:05 Hou Tao
2022-08-10 8:05 ` [PATCH bpf v2 1/9] bpf: Acquire map uref in .init_seq_private for array " Hou Tao
` (9 more replies)
0 siblings, 10 replies; 15+ messages in thread
From: Hou Tao @ 2022-08-10 8:05 UTC (permalink / raw)
To: bpf, Yonghong Song
Cc: Andrii Nakryiko, Alexei Starovoitov, Daniel Borkmann,
Martin KaFai Lau, Song Liu, KP Singh, David S . Miller,
Jakub Kicinski, Stanislav Fomichev, Hao Luo, Jiri Olsa,
John Fastabend, Lorenz Bauer, houtao1
From: Hou Tao <houtao1@huawei.com>
Hi,
The patchset constitutes three fixes for the bpf map iterator:
(1) patch 1~4: fix use-after-free when reading the map iterator fd
It is possible when both the corresponding link fd and map fd are
closed before reading the iterator fd. I had squashed these four patches
into one, but that was not friendly for stable backport, so in the end I
broke the fix into four separate patches. Patch 7 is its testing patch.
(2) patch 5: fix the validity check for value access in the sk local storage map
Patch 8 adds two tests for it.
(3) patch 6: reject sleepable programs for non-resched-able map iterators
Patch 9 adds a test for it.
Please check the individual patches for more details. And comments are
always welcome.
Regards,
Tao
Changes since v1:
* patch 1~6: update commit messages (from Yonghong & Martin)
* patch 7: add more detailed comments (from Yonghong)
* patch 8: use NULL directly instead of (void *)0
v1: https://lore.kernel.org/bpf/20220806074019.2756957-1-houtao@huaweicloud.com
Hou Tao (9):
bpf: Acquire map uref in .init_seq_private for array map iterator
bpf: Acquire map uref in .init_seq_private for hash map iterator
bpf: Acquire map uref in .init_seq_private for sock local storage map
iterator
bpf: Acquire map uref in .init_seq_private for sock{map,hash} iterator
bpf: Check the validity of max_rdwr_access for sock local storage map
iterator
bpf: Only allow sleepable program for resched-able iterator
selftests/bpf: Add tests for reading a dangling map iter fd
selftests/bpf: Add write tests for sk local storage map iterator
selftests/bpf: Ensure sleepable program is rejected by hash map iter
kernel/bpf/arraymap.c | 6 +
kernel/bpf/bpf_iter.c | 11 +-
kernel/bpf/hashtab.c | 2 +
net/core/bpf_sk_storage.c | 12 +-
net/core/sock_map.c | 20 ++-
.../selftests/bpf/prog_tests/bpf_iter.c | 116 +++++++++++++++++-
.../bpf/progs/bpf_iter_bpf_hash_map.c | 9 ++
.../bpf/progs/bpf_iter_bpf_sk_storage_map.c | 22 +++-
8 files changed, 191 insertions(+), 7 deletions(-)
--
2.29.2
^ permalink raw reply [flat|nested] 15+ messages in thread
* [PATCH bpf v2 1/9] bpf: Acquire map uref in .init_seq_private for array map iterator
2022-08-10 8:05 [PATCH bpf v2 0/9] fixes for bpf map iterator Hou Tao
@ 2022-08-10 8:05 ` Hou Tao
2022-08-10 8:05 ` [PATCH bpf v2 2/9] bpf: Acquire map uref in .init_seq_private for hash " Hou Tao
` (8 subsequent siblings)
9 siblings, 0 replies; 15+ messages in thread
From: Hou Tao @ 2022-08-10 8:05 UTC (permalink / raw)
To: bpf, Yonghong Song
Cc: Andrii Nakryiko, Alexei Starovoitov, Daniel Borkmann,
Martin KaFai Lau, Song Liu, KP Singh, David S . Miller,
Jakub Kicinski, Stanislav Fomichev, Hao Luo, Jiri Olsa,
John Fastabend, Lorenz Bauer, houtao1
From: Hou Tao <houtao1@huawei.com>
bpf_iter_attach_map() acquires a map uref, and the uref may be released
before or in the middle of iterating map elements. For example, the uref
could be released in bpf_iter_detach_map() as part of
bpf_link_release(), or could be released in bpf_map_put_with_uref() as
part of bpf_map_release().
An alternative fix is to acquire an extra bpf_link reference just as
a pinned map iterator does, but that introduces an unnecessary dependency
on bpf_link instead of bpf_map.
So choose another fix: acquire an extra map uref in .init_seq_private
for the array map iterator.
Fixes: d3cc2ab546ad ("bpf: Implement bpf iterator for array maps")
Signed-off-by: Hou Tao <houtao1@huawei.com>
Acked-by: Yonghong Song <yhs@fb.com>
---
kernel/bpf/arraymap.c | 6 ++++++
1 file changed, 6 insertions(+)
diff --git a/kernel/bpf/arraymap.c b/kernel/bpf/arraymap.c
index d3e734bf8056..624527401d4d 100644
--- a/kernel/bpf/arraymap.c
+++ b/kernel/bpf/arraymap.c
@@ -649,6 +649,11 @@ static int bpf_iter_init_array_map(void *priv_data,
seq_info->percpu_value_buf = value_buf;
}
+ /* bpf_iter_attach_map() acquires a map uref, and the uref may be
+ * released before or in the middle of iterating map elements, so
+ * acquire an extra map uref for iterator.
+ */
+ bpf_map_inc_with_uref(map);
seq_info->map = map;
return 0;
}
@@ -657,6 +662,7 @@ static void bpf_iter_fini_array_map(void *priv_data)
{
struct bpf_iter_seq_array_map_info *seq_info = priv_data;
+ bpf_map_put_with_uref(seq_info->map);
kfree(seq_info->percpu_value_buf);
}
--
2.29.2
^ permalink raw reply related [flat|nested] 15+ messages in thread
* [PATCH bpf v2 2/9] bpf: Acquire map uref in .init_seq_private for hash map iterator
2022-08-10 8:05 [PATCH bpf v2 0/9] fixes for bpf map iterator Hou Tao
2022-08-10 8:05 ` [PATCH bpf v2 1/9] bpf: Acquire map uref in .init_seq_private for array " Hou Tao
@ 2022-08-10 8:05 ` Hou Tao
2022-08-10 8:05 ` [PATCH bpf v2 3/9] bpf: Acquire map uref in .init_seq_private for sock local storage " Hou Tao
` (7 subsequent siblings)
9 siblings, 0 replies; 15+ messages in thread
From: Hou Tao @ 2022-08-10 8:05 UTC (permalink / raw)
To: bpf, Yonghong Song
Cc: Andrii Nakryiko, Alexei Starovoitov, Daniel Borkmann,
Martin KaFai Lau, Song Liu, KP Singh, David S . Miller,
Jakub Kicinski, Stanislav Fomichev, Hao Luo, Jiri Olsa,
John Fastabend, Lorenz Bauer, houtao1
From: Hou Tao <houtao1@huawei.com>
bpf_iter_attach_map() acquires a map uref, and the uref may be released
before or in the middle of iterating map elements. For example, the uref
could be released in bpf_iter_detach_map() as part of
bpf_link_release(), or could be released in bpf_map_put_with_uref() as
part of bpf_map_release().
So acquire an extra map uref in bpf_iter_init_hash_map() and
release it in bpf_iter_fini_hash_map().
Fixes: d6c4503cc296 ("bpf: Implement bpf iterator for hash maps")
Signed-off-by: Hou Tao <houtao1@huawei.com>
Acked-by: Yonghong Song <yhs@fb.com>
---
kernel/bpf/hashtab.c | 2 ++
1 file changed, 2 insertions(+)
diff --git a/kernel/bpf/hashtab.c b/kernel/bpf/hashtab.c
index da7578426a46..da8c0177f773 100644
--- a/kernel/bpf/hashtab.c
+++ b/kernel/bpf/hashtab.c
@@ -2064,6 +2064,7 @@ static int bpf_iter_init_hash_map(void *priv_data,
seq_info->percpu_value_buf = value_buf;
}
+ bpf_map_inc_with_uref(map);
seq_info->map = map;
seq_info->htab = container_of(map, struct bpf_htab, map);
return 0;
@@ -2073,6 +2074,7 @@ static void bpf_iter_fini_hash_map(void *priv_data)
{
struct bpf_iter_seq_hash_map_info *seq_info = priv_data;
+ bpf_map_put_with_uref(seq_info->map);
kfree(seq_info->percpu_value_buf);
}
--
2.29.2
^ permalink raw reply related [flat|nested] 15+ messages in thread
* [PATCH bpf v2 3/9] bpf: Acquire map uref in .init_seq_private for sock local storage map iterator
2022-08-10 8:05 [PATCH bpf v2 0/9] fixes for bpf map iterator Hou Tao
2022-08-10 8:05 ` [PATCH bpf v2 1/9] bpf: Acquire map uref in .init_seq_private for array " Hou Tao
2022-08-10 8:05 ` [PATCH bpf v2 2/9] bpf: Acquire map uref in .init_seq_private for hash " Hou Tao
@ 2022-08-10 8:05 ` Hou Tao
2022-08-10 16:47 ` Martin KaFai Lau
2022-08-10 8:05 ` [PATCH bpf v2 4/9] bpf: Acquire map uref in .init_seq_private for sock{map,hash} iterator Hou Tao
` (6 subsequent siblings)
9 siblings, 1 reply; 15+ messages in thread
From: Hou Tao @ 2022-08-10 8:05 UTC (permalink / raw)
To: bpf, Yonghong Song
Cc: Andrii Nakryiko, Alexei Starovoitov, Daniel Borkmann,
Martin KaFai Lau, Song Liu, KP Singh, David S . Miller,
Jakub Kicinski, Stanislav Fomichev, Hao Luo, Jiri Olsa,
John Fastabend, Lorenz Bauer, houtao1
From: Hou Tao <houtao1@huawei.com>
bpf_iter_attach_map() acquires a map uref, and the uref may be released
before or in the middle of iterating map elements. For example, the uref
could be released in bpf_iter_detach_map() as part of
bpf_link_release(), or could be released in bpf_map_put_with_uref() as
part of bpf_map_release().
So acquire an extra map uref in bpf_iter_init_sk_storage_map() and
release it in bpf_iter_fini_sk_storage_map().
Fixes: 5ce6e77c7edf ("bpf: Implement bpf iterator for sock local storage map")
Signed-off-by: Hou Tao <houtao1@huawei.com>
Acked-by: Yonghong Song <yhs@fb.com>
---
net/core/bpf_sk_storage.c | 10 +++++++++-
1 file changed, 9 insertions(+), 1 deletion(-)
diff --git a/net/core/bpf_sk_storage.c b/net/core/bpf_sk_storage.c
index a25ec93729b9..83b89ba824d7 100644
--- a/net/core/bpf_sk_storage.c
+++ b/net/core/bpf_sk_storage.c
@@ -875,10 +875,18 @@ static int bpf_iter_init_sk_storage_map(void *priv_data,
{
struct bpf_iter_seq_sk_storage_map_info *seq_info = priv_data;
+ bpf_map_inc_with_uref(aux->map);
seq_info->map = aux->map;
return 0;
}
+static void bpf_iter_fini_sk_storage_map(void *priv_data)
+{
+ struct bpf_iter_seq_sk_storage_map_info *seq_info = priv_data;
+
+ bpf_map_put_with_uref(seq_info->map);
+}
+
static int bpf_iter_attach_map(struct bpf_prog *prog,
union bpf_iter_link_info *linfo,
struct bpf_iter_aux_info *aux)
@@ -924,7 +932,7 @@ static const struct seq_operations bpf_sk_storage_map_seq_ops = {
static const struct bpf_iter_seq_info iter_seq_info = {
.seq_ops = &bpf_sk_storage_map_seq_ops,
.init_seq_private = bpf_iter_init_sk_storage_map,
- .fini_seq_private = NULL,
+ .fini_seq_private = bpf_iter_fini_sk_storage_map,
.seq_priv_size = sizeof(struct bpf_iter_seq_sk_storage_map_info),
};
--
2.29.2
^ permalink raw reply related [flat|nested] 15+ messages in thread
* [PATCH bpf v2 4/9] bpf: Acquire map uref in .init_seq_private for sock{map,hash} iterator
2022-08-10 8:05 [PATCH bpf v2 0/9] fixes for bpf map iterator Hou Tao
` (2 preceding siblings ...)
2022-08-10 8:05 ` [PATCH bpf v2 3/9] bpf: Acquire map uref in .init_seq_private for sock local storage " Hou Tao
@ 2022-08-10 8:05 ` Hou Tao
2022-08-10 8:05 ` [PATCH bpf v2 5/9] bpf: Check the validity of max_rdwr_access for sock local storage map iterator Hou Tao
` (5 subsequent siblings)
9 siblings, 0 replies; 15+ messages in thread
From: Hou Tao @ 2022-08-10 8:05 UTC (permalink / raw)
To: bpf, Yonghong Song
Cc: Andrii Nakryiko, Alexei Starovoitov, Daniel Borkmann,
Martin KaFai Lau, Song Liu, KP Singh, David S . Miller,
Jakub Kicinski, Stanislav Fomichev, Hao Luo, Jiri Olsa,
John Fastabend, Lorenz Bauer, houtao1
From: Hou Tao <houtao1@huawei.com>
sock_map_iter_attach_target() acquires a map uref, and the uref may be
released before or in the middle of iterating map elements. For example,
the uref could be released in sock_map_iter_detach_target() as part of
bpf_link_release(), or could be released in bpf_map_put_with_uref() as
part of bpf_map_release().
Fix it by acquiring an extra map uref in .init_seq_private and
releasing it in .fini_seq_private.
Fixes: 0365351524d7 ("net: Allow iterating sockmap and sockhash")
Signed-off-by: Hou Tao <houtao1@huawei.com>
Acked-by: Yonghong Song <yhs@fb.com>
---
net/core/sock_map.c | 20 +++++++++++++++++++-
1 file changed, 19 insertions(+), 1 deletion(-)
diff --git a/net/core/sock_map.c b/net/core/sock_map.c
index 028813dfecb0..9a9fb9487d63 100644
--- a/net/core/sock_map.c
+++ b/net/core/sock_map.c
@@ -783,13 +783,22 @@ static int sock_map_init_seq_private(void *priv_data,
{
struct sock_map_seq_info *info = priv_data;
+ bpf_map_inc_with_uref(aux->map);
info->map = aux->map;
return 0;
}
+static void sock_map_fini_seq_private(void *priv_data)
+{
+ struct sock_map_seq_info *info = priv_data;
+
+ bpf_map_put_with_uref(info->map);
+}
+
static const struct bpf_iter_seq_info sock_map_iter_seq_info = {
.seq_ops = &sock_map_seq_ops,
.init_seq_private = sock_map_init_seq_private,
+ .fini_seq_private = sock_map_fini_seq_private,
.seq_priv_size = sizeof(struct sock_map_seq_info),
};
@@ -1369,18 +1378,27 @@ static const struct seq_operations sock_hash_seq_ops = {
};
static int sock_hash_init_seq_private(void *priv_data,
- struct bpf_iter_aux_info *aux)
+ struct bpf_iter_aux_info *aux)
{
struct sock_hash_seq_info *info = priv_data;
+ bpf_map_inc_with_uref(aux->map);
info->map = aux->map;
info->htab = container_of(aux->map, struct bpf_shtab, map);
return 0;
}
+static void sock_hash_fini_seq_private(void *priv_data)
+{
+ struct sock_hash_seq_info *info = priv_data;
+
+ bpf_map_put_with_uref(info->map);
+}
+
static const struct bpf_iter_seq_info sock_hash_iter_seq_info = {
.seq_ops = &sock_hash_seq_ops,
.init_seq_private = sock_hash_init_seq_private,
+ .fini_seq_private = sock_hash_fini_seq_private,
.seq_priv_size = sizeof(struct sock_hash_seq_info),
};
--
2.29.2
^ permalink raw reply related [flat|nested] 15+ messages in thread
* [PATCH bpf v2 5/9] bpf: Check the validity of max_rdwr_access for sock local storage map iterator
2022-08-10 8:05 [PATCH bpf v2 0/9] fixes for bpf map iterator Hou Tao
` (3 preceding siblings ...)
2022-08-10 8:05 ` [PATCH bpf v2 4/9] bpf: Acquire map uref in .init_seq_private for sock{map,hash} iterator Hou Tao
@ 2022-08-10 8:05 ` Hou Tao
2022-08-10 16:59 ` Martin KaFai Lau
2022-08-10 8:05 ` [PATCH bpf v2 6/9] bpf: Only allow sleepable program for resched-able iterator Hou Tao
` (4 subsequent siblings)
9 siblings, 1 reply; 15+ messages in thread
From: Hou Tao @ 2022-08-10 8:05 UTC (permalink / raw)
To: bpf, Yonghong Song
Cc: Andrii Nakryiko, Alexei Starovoitov, Daniel Borkmann,
Martin KaFai Lau, Song Liu, KP Singh, David S . Miller,
Jakub Kicinski, Stanislav Fomichev, Hao Luo, Jiri Olsa,
John Fastabend, Lorenz Bauer, houtao1
From: Hou Tao <houtao1@huawei.com>
The value of the sock local storage map is writable in the map iterator,
so check max_rdwr_access instead of max_rdonly_access.
Fixes: 5ce6e77c7edf ("bpf: Implement bpf iterator for sock local storage map")
Signed-off-by: Hou Tao <houtao1@huawei.com>
Acked-by: Yonghong Song <yhs@fb.com>
---
net/core/bpf_sk_storage.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/net/core/bpf_sk_storage.c b/net/core/bpf_sk_storage.c
index 83b89ba824d7..1b7f385643b4 100644
--- a/net/core/bpf_sk_storage.c
+++ b/net/core/bpf_sk_storage.c
@@ -904,7 +904,7 @@ static int bpf_iter_attach_map(struct bpf_prog *prog,
if (map->map_type != BPF_MAP_TYPE_SK_STORAGE)
goto put_map;
- if (prog->aux->max_rdonly_access > map->value_size) {
+ if (prog->aux->max_rdwr_access > map->value_size) {
err = -EACCES;
goto put_map;
}
--
2.29.2
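The one-line change above can be sketched as a standalone check
(illustrative model only — check_value_access() is a hypothetical name,
not a kernel function): because the iterator program may write the value,
the largest read-write offset the verifier recorded must fit inside
value_size; comparing only the read-only bound would let an out-of-bounds
write through.

```c
#include <assert.h>
#include <errno.h>

/* Model of the attach-time bound check with the fixed semantics:
 * reject the program if its widest read-write access exceeds the
 * map's value size. */
static int check_value_access(unsigned int max_rdwr_access,
			      unsigned int value_size)
{
	if (max_rdwr_access > value_size)
		return -EACCES;
	return 0;
}
```

A program touching at most 4 bytes of a 4-byte value passes, while an
8-byte read-write access against the same map is rejected with -EACCES,
which is exactly what the oob_write test in patch 8 relies on.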
^ permalink raw reply related [flat|nested] 15+ messages in thread
* [PATCH bpf v2 6/9] bpf: Only allow sleepable program for resched-able iterator
2022-08-10 8:05 [PATCH bpf v2 0/9] fixes for bpf map iterator Hou Tao
` (4 preceding siblings ...)
2022-08-10 8:05 ` [PATCH bpf v2 5/9] bpf: Check the validity of max_rdwr_access for sock local storage map iterator Hou Tao
@ 2022-08-10 8:05 ` Hou Tao
2022-08-10 8:05 ` [PATCH bpf v2 7/9] selftests/bpf: Add tests for reading a dangling map iter fd Hou Tao
` (3 subsequent siblings)
9 siblings, 0 replies; 15+ messages in thread
From: Hou Tao @ 2022-08-10 8:05 UTC (permalink / raw)
To: bpf, Yonghong Song
Cc: Andrii Nakryiko, Alexei Starovoitov, Daniel Borkmann,
Martin KaFai Lau, Song Liu, KP Singh, David S . Miller,
Jakub Kicinski, Stanislav Fomichev, Hao Luo, Jiri Olsa,
John Fastabend, Lorenz Bauer, houtao1
From: Hou Tao <houtao1@huawei.com>
When a sleepable program is attached to a hash map iterator, might_fault()
will report "BUG: sleeping function called from invalid context..." if
CONFIG_DEBUG_ATOMIC_SLEEP is enabled. The reason is that rcu_read_lock()
is held in bpf_hash_map_seq_next() and won't be released until all elements
are traversed or bpf_hash_map_seq_stop() is called.
Fix it by reusing BPF_ITER_RESCHED: only a non-sleepable program is
allowed for an iterator target without BPF_ITER_RESCHED. We can revise
bpf_iter_link_attach() later if there are other conditions which may
cause rcu_read_lock() or spin_lock() issues.
Signed-off-by: Hou Tao <houtao1@huawei.com>
Acked-by: Yonghong Song <yhs@fb.com>
---
kernel/bpf/bpf_iter.c | 11 ++++++++++-
1 file changed, 10 insertions(+), 1 deletion(-)
diff --git a/kernel/bpf/bpf_iter.c b/kernel/bpf/bpf_iter.c
index 4b112aa8bba3..97bb57493ed5 100644
--- a/kernel/bpf/bpf_iter.c
+++ b/kernel/bpf/bpf_iter.c
@@ -68,13 +68,18 @@ static void bpf_iter_done_stop(struct seq_file *seq)
iter_priv->done_stop = true;
}
+static inline bool bpf_iter_target_support_resched(const struct bpf_iter_target_info *tinfo)
+{
+ return tinfo->reg_info->feature & BPF_ITER_RESCHED;
+}
+
static bool bpf_iter_support_resched(struct seq_file *seq)
{
struct bpf_iter_priv_data *iter_priv;
iter_priv = container_of(seq->private, struct bpf_iter_priv_data,
target_private);
- return iter_priv->tinfo->reg_info->feature & BPF_ITER_RESCHED;
+ return bpf_iter_target_support_resched(iter_priv->tinfo);
}
/* maximum visited objects before bailing out */
@@ -542,6 +547,10 @@ int bpf_iter_link_attach(const union bpf_attr *attr, bpfptr_t uattr,
if (!tinfo)
return -ENOENT;
+ /* Only allow sleepable program for resched-able iterator */
+ if (prog->aux->sleepable && !bpf_iter_target_support_resched(tinfo))
+ return -EINVAL;
+
link = kzalloc(sizeof(*link), GFP_USER | __GFP_NOWARN);
if (!link)
return -ENOMEM;
--
2.29.2
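The gate added to bpf_iter_link_attach() above boils down to one feature
test, modeled here in plain C (the flag value and function name are
illustrative, not the kernel definitions): a sleepable program may fault
and sleep, so it is only accepted when the target advertises that it can
drop rcu_read_lock() and reschedule between elements.

```c
#include <assert.h>
#include <errno.h>
#include <stdbool.h>

#define ITER_RESCHED (1U << 0) /* stand-in for BPF_ITER_RESCHED */

/* Model of the attach-time check: reject a sleepable program when the
 * iterator target does not support rescheduling. */
static int iter_attach_check(unsigned int target_features, bool sleepable)
{
	if (sleepable && !(target_features & ITER_RESCHED))
		return -EINVAL;
	return 0;
}
```

So a sleepable program attached to the hash map iterator (which holds
rcu_read_lock() across elements and has no resched support) fails with
-EINVAL, while non-sleepable programs and resched-able targets are
unaffected — matching the selftest added in patch 9.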
^ permalink raw reply related [flat|nested] 15+ messages in thread
* [PATCH bpf v2 7/9] selftests/bpf: Add tests for reading a dangling map iter fd
2022-08-10 8:05 [PATCH bpf v2 0/9] fixes for bpf map iterator Hou Tao
` (5 preceding siblings ...)
2022-08-10 8:05 ` [PATCH bpf v2 6/9] bpf: Only allow sleepable program for resched-able iterator Hou Tao
@ 2022-08-10 8:05 ` Hou Tao
2022-08-10 15:13 ` Yonghong Song
2022-08-10 8:05 ` [PATCH bpf v2 8/9] selftests/bpf: Add write tests for sk local storage map iterator Hou Tao
` (2 subsequent siblings)
9 siblings, 1 reply; 15+ messages in thread
From: Hou Tao @ 2022-08-10 8:05 UTC (permalink / raw)
To: bpf, Yonghong Song
Cc: Andrii Nakryiko, Alexei Starovoitov, Daniel Borkmann,
Martin KaFai Lau, Song Liu, KP Singh, David S . Miller,
Jakub Kicinski, Stanislav Fomichev, Hao Luo, Jiri Olsa,
John Fastabend, Lorenz Bauer, houtao1
From: Hou Tao <houtao1@huawei.com>
After closing both the related link fd and map fd, read the map
iterator fd to ensure that doing so is OK.
Signed-off-by: Hou Tao <houtao1@huawei.com>
---
.../selftests/bpf/prog_tests/bpf_iter.c | 92 +++++++++++++++++++
1 file changed, 92 insertions(+)
diff --git a/tools/testing/selftests/bpf/prog_tests/bpf_iter.c b/tools/testing/selftests/bpf/prog_tests/bpf_iter.c
index a33874b081b6..b690c9e9d346 100644
--- a/tools/testing/selftests/bpf/prog_tests/bpf_iter.c
+++ b/tools/testing/selftests/bpf/prog_tests/bpf_iter.c
@@ -28,6 +28,7 @@
#include "bpf_iter_test_kern6.skel.h"
#include "bpf_iter_bpf_link.skel.h"
#include "bpf_iter_ksym.skel.h"
+#include "bpf_iter_sockmap.skel.h"
static int duration;
@@ -67,6 +68,50 @@ static void do_dummy_read(struct bpf_program *prog)
bpf_link__destroy(link);
}
+static void do_read_map_iter_fd(struct bpf_object_skeleton **skel, struct bpf_program *prog,
+ struct bpf_map *map)
+{
+ DECLARE_LIBBPF_OPTS(bpf_iter_attach_opts, opts);
+ union bpf_iter_link_info linfo;
+ struct bpf_link *link;
+ char buf[16] = {};
+ int iter_fd, len;
+
+ memset(&linfo, 0, sizeof(linfo));
+ linfo.map.map_fd = bpf_map__fd(map);
+ opts.link_info = &linfo;
+ opts.link_info_len = sizeof(linfo);
+ link = bpf_program__attach_iter(prog, &opts);
+ if (!ASSERT_OK_PTR(link, "attach_map_iter"))
+ return;
+
+ iter_fd = bpf_iter_create(bpf_link__fd(link));
+ if (!ASSERT_GE(iter_fd, 0, "create_map_iter")) {
+ bpf_link__destroy(link);
+ return;
+ }
+
+ /* Close link and map fd prematurely */
+ bpf_link__destroy(link);
+ bpf_object__destroy_skeleton(*skel);
+ *skel = NULL;
+
+ /* Try to let map free work to run first if map is freed */
+ usleep(100);
+ /* Memory used by both sock map and sock local storage map are
+ * freed after two synchronize_rcu() calls, so wait for it
+ */
+ kern_sync_rcu();
+ kern_sync_rcu();
+
+ /* Read after both map fd and link fd are closed */
+ while ((len = read(iter_fd, buf, sizeof(buf))) > 0)
+ ;
+ ASSERT_GE(len, 0, "read_iterator");
+
+ close(iter_fd);
+}
+
static int read_fd_into_buffer(int fd, char *buf, int size)
{
int bufleft = size;
@@ -827,6 +872,20 @@ static void test_bpf_array_map(void)
bpf_iter_bpf_array_map__destroy(skel);
}
+static void test_bpf_array_map_iter_fd(void)
+{
+ struct bpf_iter_bpf_array_map *skel;
+
+ skel = bpf_iter_bpf_array_map__open_and_load();
+ if (!ASSERT_OK_PTR(skel, "bpf_iter_bpf_array_map__open_and_load"))
+ return;
+
+ do_read_map_iter_fd(&skel->skeleton, skel->progs.dump_bpf_array_map,
+ skel->maps.arraymap1);
+
+ bpf_iter_bpf_array_map__destroy(skel);
+}
+
static void test_bpf_percpu_array_map(void)
{
DECLARE_LIBBPF_OPTS(bpf_iter_attach_opts, opts);
@@ -1009,6 +1068,20 @@ static void test_bpf_sk_storage_get(void)
bpf_iter_bpf_sk_storage_helpers__destroy(skel);
}
+static void test_bpf_sk_stoarge_map_iter_fd(void)
+{
+ struct bpf_iter_bpf_sk_storage_map *skel;
+
+ skel = bpf_iter_bpf_sk_storage_map__open_and_load();
+ if (!ASSERT_OK_PTR(skel, "bpf_iter_bpf_sk_storage_map__open_and_load"))
+ return;
+
+ do_read_map_iter_fd(&skel->skeleton, skel->progs.dump_bpf_sk_storage_map,
+ skel->maps.sk_stg_map);
+
+ bpf_iter_bpf_sk_storage_map__destroy(skel);
+}
+
static void test_bpf_sk_storage_map(void)
{
DECLARE_LIBBPF_OPTS(bpf_iter_attach_opts, opts);
@@ -1217,6 +1290,19 @@ static void test_task_vma(void)
bpf_iter_task_vma__destroy(skel);
}
+void test_bpf_sockmap_map_iter_fd(void)
+{
+ struct bpf_iter_sockmap *skel;
+
+ skel = bpf_iter_sockmap__open_and_load();
+ if (!ASSERT_OK_PTR(skel, "bpf_iter_sockmap__open_and_load"))
+ return;
+
+ do_read_map_iter_fd(&skel->skeleton, skel->progs.copy, skel->maps.sockmap);
+
+ bpf_iter_sockmap__destroy(skel);
+}
+
void test_bpf_iter(void)
{
if (test__start_subtest("btf_id_or_null"))
@@ -1267,10 +1353,14 @@ void test_bpf_iter(void)
test_bpf_percpu_hash_map();
if (test__start_subtest("bpf_array_map"))
test_bpf_array_map();
+ if (test__start_subtest("bpf_array_map_iter_fd"))
+ test_bpf_array_map_iter_fd();
if (test__start_subtest("bpf_percpu_array_map"))
test_bpf_percpu_array_map();
if (test__start_subtest("bpf_sk_storage_map"))
test_bpf_sk_storage_map();
+ if (test__start_subtest("bpf_sk_storage_map_iter_fd"))
+ test_bpf_sk_stoarge_map_iter_fd();
if (test__start_subtest("bpf_sk_storage_delete"))
test_bpf_sk_storage_delete();
if (test__start_subtest("bpf_sk_storage_get"))
@@ -1283,4 +1373,6 @@ void test_bpf_iter(void)
test_link_iter();
if (test__start_subtest("ksym"))
test_ksym_iter();
+ if (test__start_subtest("bpf_sockmap_map_iter_fd"))
+ test_bpf_sockmap_map_iter_fd();
}
--
2.29.2
^ permalink raw reply related [flat|nested] 15+ messages in thread
* [PATCH bpf v2 8/9] selftests/bpf: Add write tests for sk local storage map iterator
2022-08-10 8:05 [PATCH bpf v2 0/9] fixes for bpf map iterator Hou Tao
` (6 preceding siblings ...)
2022-08-10 8:05 ` [PATCH bpf v2 7/9] selftests/bpf: Add tests for reading a dangling map iter fd Hou Tao
@ 2022-08-10 8:05 ` Hou Tao
2022-08-10 17:05 ` Martin KaFai Lau
2022-08-10 8:05 ` [PATCH bpf v2 9/9] selftests/bpf: Ensure sleepable program is rejected by hash map iter Hou Tao
2022-08-10 17:20 ` [PATCH bpf v2 0/9] fixes for bpf map iterator patchwork-bot+netdevbpf
9 siblings, 1 reply; 15+ messages in thread
From: Hou Tao @ 2022-08-10 8:05 UTC (permalink / raw)
To: bpf, Yonghong Song
Cc: Andrii Nakryiko, Alexei Starovoitov, Daniel Borkmann,
Martin KaFai Lau, Song Liu, KP Singh, David S . Miller,
Jakub Kicinski, Stanislav Fomichev, Hao Luo, Jiri Olsa,
John Fastabend, Lorenz Bauer, houtao1
From: Hou Tao <houtao1@huawei.com>
Add a test to validate the overwrite of the sock local storage map value
in the map iterator, and another one to ensure that out-of-bounds value
writing is rejected.
Signed-off-by: Hou Tao <houtao1@huawei.com>
Acked-by: Yonghong Song <yhs@fb.com>
---
.../selftests/bpf/prog_tests/bpf_iter.c | 20 +++++++++++++++--
.../bpf/progs/bpf_iter_bpf_sk_storage_map.c | 22 +++++++++++++++++--
2 files changed, 38 insertions(+), 4 deletions(-)
diff --git a/tools/testing/selftests/bpf/prog_tests/bpf_iter.c b/tools/testing/selftests/bpf/prog_tests/bpf_iter.c
index b690c9e9d346..1571a6586b3b 100644
--- a/tools/testing/selftests/bpf/prog_tests/bpf_iter.c
+++ b/tools/testing/selftests/bpf/prog_tests/bpf_iter.c
@@ -1076,7 +1076,7 @@ static void test_bpf_sk_stoarge_map_iter_fd(void)
if (!ASSERT_OK_PTR(skel, "bpf_iter_bpf_sk_storage_map__open_and_load"))
return;
- do_read_map_iter_fd(&skel->skeleton, skel->progs.dump_bpf_sk_storage_map,
+ do_read_map_iter_fd(&skel->skeleton, skel->progs.rw_bpf_sk_storage_map,
skel->maps.sk_stg_map);
bpf_iter_bpf_sk_storage_map__destroy(skel);
@@ -1117,7 +1117,15 @@ static void test_bpf_sk_storage_map(void)
linfo.map.map_fd = map_fd;
opts.link_info = &linfo;
opts.link_info_len = sizeof(linfo);
- link = bpf_program__attach_iter(skel->progs.dump_bpf_sk_storage_map, &opts);
+ link = bpf_program__attach_iter(skel->progs.oob_write_bpf_sk_storage_map, &opts);
+ err = libbpf_get_error(link);
+ if (!ASSERT_EQ(err, -EACCES, "attach_oob_write_iter")) {
+ if (!err)
+ bpf_link__destroy(link);
+ goto out;
+ }
+
+ link = bpf_program__attach_iter(skel->progs.rw_bpf_sk_storage_map, &opts);
if (!ASSERT_OK_PTR(link, "attach_iter"))
goto out;
@@ -1125,6 +1133,7 @@ static void test_bpf_sk_storage_map(void)
if (!ASSERT_GE(iter_fd, 0, "create_iter"))
goto free_link;
+ skel->bss->to_add_val = time(NULL);
/* do some tests */
while ((len = read(iter_fd, buf, sizeof(buf))) > 0)
;
@@ -1138,6 +1147,13 @@ static void test_bpf_sk_storage_map(void)
if (!ASSERT_EQ(skel->bss->val_sum, expected_val, "val_sum"))
goto close_iter;
+ for (i = 0; i < num_sockets; i++) {
+ err = bpf_map_lookup_elem(map_fd, &sock_fd[i], &val);
+ if (!ASSERT_OK(err, "map_lookup") ||
+ !ASSERT_EQ(val, i + 1 + skel->bss->to_add_val, "check_map_value"))
+ break;
+ }
+
close_iter:
close(iter_fd);
free_link:
diff --git a/tools/testing/selftests/bpf/progs/bpf_iter_bpf_sk_storage_map.c b/tools/testing/selftests/bpf/progs/bpf_iter_bpf_sk_storage_map.c
index 6b70ccaba301..c7b8e006b171 100644
--- a/tools/testing/selftests/bpf/progs/bpf_iter_bpf_sk_storage_map.c
+++ b/tools/testing/selftests/bpf/progs/bpf_iter_bpf_sk_storage_map.c
@@ -16,19 +16,37 @@ struct {
__u32 val_sum = 0;
__u32 ipv6_sk_count = 0;
+__u32 to_add_val = 0;
SEC("iter/bpf_sk_storage_map")
-int dump_bpf_sk_storage_map(struct bpf_iter__bpf_sk_storage_map *ctx)
+int rw_bpf_sk_storage_map(struct bpf_iter__bpf_sk_storage_map *ctx)
{
struct sock *sk = ctx->sk;
__u32 *val = ctx->value;
- if (sk == (void *)0 || val == (void *)0)
+ if (sk == NULL || val == NULL)
return 0;
if (sk->sk_family == AF_INET6)
ipv6_sk_count++;
val_sum += *val;
+
+ *val += to_add_val;
+
+ return 0;
+}
+
+SEC("iter/bpf_sk_storage_map")
+int oob_write_bpf_sk_storage_map(struct bpf_iter__bpf_sk_storage_map *ctx)
+{
+ struct sock *sk = ctx->sk;
+ __u32 *val = ctx->value;
+
+ if (sk == NULL || val == NULL)
+ return 0;
+
+ *(val + 1) = 0xdeadbeef;
+
return 0;
}
--
2.29.2
^ permalink raw reply related [flat|nested] 15+ messages in thread
* [PATCH bpf v2 9/9] selftests/bpf: Ensure sleepable program is rejected by hash map iter
2022-08-10 8:05 [PATCH bpf v2 0/9] fixes for bpf map iterator Hou Tao
` (7 preceding siblings ...)
2022-08-10 8:05 ` [PATCH bpf v2 8/9] selftests/bpf: Add write tests for sk local storage map iterator Hou Tao
@ 2022-08-10 8:05 ` Hou Tao
2022-08-10 17:20 ` [PATCH bpf v2 0/9] fixes for bpf map iterator patchwork-bot+netdevbpf
9 siblings, 0 replies; 15+ messages in thread
From: Hou Tao @ 2022-08-10 8:05 UTC (permalink / raw)
To: bpf, Yonghong Song
Cc: Andrii Nakryiko, Alexei Starovoitov, Daniel Borkmann,
Martin KaFai Lau, Song Liu, KP Singh, David S . Miller,
Jakub Kicinski, Stanislav Fomichev, Hao Luo, Jiri Olsa,
John Fastabend, Lorenz Bauer, houtao1
From: Hou Tao <houtao1@huawei.com>
Add a test to ensure that a sleepable program is rejected by the hash map
iterator.
Signed-off-by: Hou Tao <houtao1@huawei.com>
Acked-by: Yonghong Song <yhs@fb.com>
---
tools/testing/selftests/bpf/prog_tests/bpf_iter.c | 6 ++++++
.../testing/selftests/bpf/progs/bpf_iter_bpf_hash_map.c | 9 +++++++++
2 files changed, 15 insertions(+)
diff --git a/tools/testing/selftests/bpf/prog_tests/bpf_iter.c b/tools/testing/selftests/bpf/prog_tests/bpf_iter.c
index 1571a6586b3b..e89685bd587c 100644
--- a/tools/testing/selftests/bpf/prog_tests/bpf_iter.c
+++ b/tools/testing/selftests/bpf/prog_tests/bpf_iter.c
@@ -679,6 +679,12 @@ static void test_bpf_hash_map(void)
goto out;
}
+ /* Sleepable program is prohibited for hash map iterator */
+ linfo.map.map_fd = map_fd;
+ link = bpf_program__attach_iter(skel->progs.sleepable_dummy_dump, &opts);
+ if (!ASSERT_ERR_PTR(link, "attach_sleepable_prog_to_iter"))
+ goto out;
+
linfo.map.map_fd = map_fd;
link = bpf_program__attach_iter(skel->progs.dump_bpf_hash_map, &opts);
if (!ASSERT_OK_PTR(link, "attach_iter"))
diff --git a/tools/testing/selftests/bpf/progs/bpf_iter_bpf_hash_map.c b/tools/testing/selftests/bpf/progs/bpf_iter_bpf_hash_map.c
index 0aa3cd34cbe3..d7a69217fb68 100644
--- a/tools/testing/selftests/bpf/progs/bpf_iter_bpf_hash_map.c
+++ b/tools/testing/selftests/bpf/progs/bpf_iter_bpf_hash_map.c
@@ -112,3 +112,12 @@ int dump_bpf_hash_map(struct bpf_iter__bpf_map_elem *ctx)
return 0;
}
+
+SEC("iter.s/bpf_map_elem")
+int sleepable_dummy_dump(struct bpf_iter__bpf_map_elem *ctx)
+{
+ if (ctx->meta->seq_num == 0)
+ BPF_SEQ_PRINTF(ctx->meta->seq, "map dump starts\n");
+
+ return 0;
+}
--
2.29.2
^ permalink raw reply related [flat|nested] 15+ messages in thread
* Re: [PATCH bpf v2 7/9] selftests/bpf: Add tests for reading a dangling map iter fd
2022-08-10 8:05 ` [PATCH bpf v2 7/9] selftests/bpf: Add tests for reading a dangling map iter fd Hou Tao
@ 2022-08-10 15:13 ` Yonghong Song
0 siblings, 0 replies; 15+ messages in thread
From: Yonghong Song @ 2022-08-10 15:13 UTC (permalink / raw)
To: Hou Tao, bpf
Cc: Andrii Nakryiko, Alexei Starovoitov, Daniel Borkmann,
Martin KaFai Lau, Song Liu, KP Singh, David S . Miller,
Jakub Kicinski, Stanislav Fomichev, Hao Luo, Jiri Olsa,
John Fastabend, Lorenz Bauer, houtao1
On 8/10/22 1:05 AM, Hou Tao wrote:
> From: Hou Tao <houtao1@huawei.com>
>
> After closing both the related link fd and map fd, read the map
> iterator fd to ensure that doing so is OK.
>
> Signed-off-by: Hou Tao <houtao1@huawei.com>
Acked-by: Yonghong Song <yhs@fb.com>
^ permalink raw reply [flat|nested] 15+ messages in thread
* Re: [PATCH bpf v2 3/9] bpf: Acquire map uref in .init_seq_private for sock local storage map iterator
2022-08-10 8:05 ` [PATCH bpf v2 3/9] bpf: Acquire map uref in .init_seq_private for sock local storage " Hou Tao
@ 2022-08-10 16:47 ` Martin KaFai Lau
0 siblings, 0 replies; 15+ messages in thread
From: Martin KaFai Lau @ 2022-08-10 16:47 UTC (permalink / raw)
To: Hou Tao
Cc: bpf, Yonghong Song, Andrii Nakryiko, Alexei Starovoitov,
Daniel Borkmann, Song Liu, KP Singh, David S . Miller,
Jakub Kicinski, Stanislav Fomichev, Hao Luo, Jiri Olsa,
John Fastabend, Lorenz Bauer, houtao1
On Wed, Aug 10, 2022 at 04:05:32PM +0800, Hou Tao wrote:
> From: Hou Tao <houtao1@huawei.com>
>
> bpf_iter_attach_map() acquires a map uref, and the uref may be released
> before or in the middle of iterating map elements. For example, the uref
> could be released in bpf_iter_detach_map() as part of
> bpf_link_release(), or could be released in bpf_map_put_with_uref() as
> part of bpf_map_release().
>
> So acquire an extra map uref in bpf_iter_init_sk_storage_map() and
> release it in bpf_iter_fini_sk_storage_map().
Acked-by: Martin KaFai Lau <kafai@fb.com>
^ permalink raw reply [flat|nested] 15+ messages in thread
* Re: [PATCH bpf v2 5/9] bpf: Check the validity of max_rdwr_access for sock local storage map iterator
2022-08-10 8:05 ` [PATCH bpf v2 5/9] bpf: Check the validity of max_rdwr_access for sock local storage map iterator Hou Tao
@ 2022-08-10 16:59 ` Martin KaFai Lau
0 siblings, 0 replies; 15+ messages in thread
From: Martin KaFai Lau @ 2022-08-10 16:59 UTC (permalink / raw)
To: Hou Tao
Cc: bpf, Yonghong Song, Andrii Nakryiko, Alexei Starovoitov,
Daniel Borkmann, Song Liu, KP Singh, David S . Miller,
Jakub Kicinski, Stanislav Fomichev, Hao Luo, Jiri Olsa,
John Fastabend, Lorenz Bauer, houtao1
On Wed, Aug 10, 2022 at 04:05:34PM +0800, Hou Tao wrote:
> From: Hou Tao <houtao1@huawei.com>
>
> The value of a sock local storage map is writable in the map iterator,
> so check max_rdwr_access instead of max_rdonly_access.
Acked-by: Martin KaFai Lau <kafai@fb.com>
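The corrected check in patch 5 can be illustrated with a minimal sketch. The names and the exact comparison here are hypothetical stand-ins (the real check validates the attached program's recorded access extents in the iterator attach path); the point is only which bound applies when the value is writable:

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical stand-in: the verifier records the largest offset a
 * program reads (max_rdonly_access) and the largest offset it reads
 * or writes (max_rdwr_access) into the map value. */
struct prog_access {
    unsigned int max_rdonly_access;
    unsigned int max_rdwr_access;
};

/* The sk local storage iterator hands the program a writable value
 * pointer, so the read-write extent is what must fit in the value. */
static bool iter_access_ok(const struct prog_access *acc,
                           unsigned int value_size)
{
    return acc->max_rdwr_access <= value_size;
}

int main(void)
{
    /* Program reads up to offset 16 but writes up to offset 24
     * of a 16-byte value. */
    struct prog_access acc = {
        .max_rdonly_access = 16,
        .max_rdwr_access = 24,
    };

    /* The old check against max_rdonly_access would have accepted
     * this program; checking max_rdwr_access rejects the
     * out-of-bounds write. */
    assert(acc.max_rdonly_access <= 16);
    assert(!iter_access_ok(&acc, 16));
    return 0;
}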
* Re: [PATCH bpf v2 8/9] selftests/bpf: Add write tests for sk local storage map iterator
2022-08-10 8:05 ` [PATCH bpf v2 8/9] selftests/bpf: Add write tests for sk local storage map iterator Hou Tao
@ 2022-08-10 17:05 ` Martin KaFai Lau
0 siblings, 0 replies; 15+ messages in thread
From: Martin KaFai Lau @ 2022-08-10 17:05 UTC (permalink / raw)
To: Hou Tao
Cc: bpf, Yonghong Song, Andrii Nakryiko, Alexei Starovoitov,
Daniel Borkmann, Song Liu, KP Singh, David S . Miller,
Jakub Kicinski, Stanislav Fomichev, Hao Luo, Jiri Olsa,
John Fastabend, Lorenz Bauer, houtao1
On Wed, Aug 10, 2022 at 04:05:37PM +0800, Hou Tao wrote:
> From: Hou Tao <houtao1@huawei.com>
>
> Add a test to validate overwriting a sock local storage map value in
> the map iterator, and another to ensure out-of-bounds value writes
> are rejected.
Acked-by: Martin KaFai Lau <kafai@fb.com>
* Re: [PATCH bpf v2 0/9] fixes for bpf map iterator
2022-08-10 8:05 [PATCH bpf v2 0/9] fixes for bpf map iterator Hou Tao
` (8 preceding siblings ...)
2022-08-10 8:05 ` [PATCH bpf v2 9/9] selftests/bpf: Ensure sleepable program is rejected by hash map iter Hou Tao
@ 2022-08-10 17:20 ` patchwork-bot+netdevbpf
9 siblings, 0 replies; 15+ messages in thread
From: patchwork-bot+netdevbpf @ 2022-08-10 17:20 UTC (permalink / raw)
To: Hou Tao
Cc: bpf, yhs, andrii, ast, daniel, kafai, songliubraving, kpsingh,
davem, kuba, sdf, haoluo, jolsa, john.fastabend, oss, houtao1
Hello:
This series was applied to bpf/bpf.git (master)
by Alexei Starovoitov <ast@kernel.org>:
On Wed, 10 Aug 2022 16:05:29 +0800 you wrote:
> From: Hou Tao <houtao1@huawei.com>
>
> Hi,
>
> The patchset constitutes three fixes for the bpf map iterator:
>
> (1) patch 1~4: fix a use-after-free when reading the map iterator fd
> It is possible when both the corresponding link fd and map fd are
> closed before reading the iterator fd. I had squashed these four patches
> into one, but that was not friendly for stable backport, so I broke the
> fixes into four separate patches in the end. Patch 7 is its testing patch.
>
> [...]
Here is the summary with links:
- [bpf,v2,1/9] bpf: Acquire map uref in .init_seq_private for array map iterator
https://git.kernel.org/bpf/bpf/c/f76fa6b33805
- [bpf,v2,2/9] bpf: Acquire map uref in .init_seq_private for hash map iterator
https://git.kernel.org/bpf/bpf/c/ef1e93d2eeb5
- [bpf,v2,3/9] bpf: Acquire map uref in .init_seq_private for sock local storage map iterator
https://git.kernel.org/bpf/bpf/c/3c5f6e698b5c
- [bpf,v2,4/9] bpf: Acquire map uref in .init_seq_private for sock{map,hash} iterator
https://git.kernel.org/bpf/bpf/c/f0d2b2716d71
- [bpf,v2,5/9] bpf: Check the validity of max_rdwr_access for sock local storage map iterator
https://git.kernel.org/bpf/bpf/c/52bd05eb7c88
- [bpf,v2,6/9] bpf: Only allow sleepable program for resched-able iterator
https://git.kernel.org/bpf/bpf/c/d247049f4fd0
- [bpf,v2,7/9] selftests/bpf: Add tests for reading a dangling map iter fd
https://git.kernel.org/bpf/bpf/c/5836d81e4b03
- [bpf,v2,8/9] selftests/bpf: Add write tests for sk local storage map iterator
https://git.kernel.org/bpf/bpf/c/939a1a946d75
- [bpf,v2,9/9] selftests/bpf: Ensure sleepable program is rejected by hash map iter
https://git.kernel.org/bpf/bpf/c/c5c0981fd81d
You are awesome, thank you!
--
Deet-doot-dot, I am a bot.
https://korg.docs.kernel.org/patchwork/pwbot.html
end of thread, other threads:[~2022-08-10 17:20 UTC | newest]
Thread overview: 15+ messages
-- links below jump to the message on this page --
2022-08-10 8:05 [PATCH bpf v2 0/9] fixes for bpf map iterator Hou Tao
2022-08-10 8:05 ` [PATCH bpf v2 1/9] bpf: Acquire map uref in .init_seq_private for array " Hou Tao
2022-08-10 8:05 ` [PATCH bpf v2 2/9] bpf: Acquire map uref in .init_seq_private for hash " Hou Tao
2022-08-10 8:05 ` [PATCH bpf v2 3/9] bpf: Acquire map uref in .init_seq_private for sock local storage " Hou Tao
2022-08-10 16:47 ` Martin KaFai Lau
2022-08-10 8:05 ` [PATCH bpf v2 4/9] bpf: Acquire map uref in .init_seq_private for sock{map,hash} iterator Hou Tao
2022-08-10 8:05 ` [PATCH bpf v2 5/9] bpf: Check the validity of max_rdwr_access for sock local storage map iterator Hou Tao
2022-08-10 16:59 ` Martin KaFai Lau
2022-08-10 8:05 ` [PATCH bpf v2 6/9] bpf: Only allow sleepable program for resched-able iterator Hou Tao
2022-08-10 8:05 ` [PATCH bpf v2 7/9] selftests/bpf: Add tests for reading a dangling map iter fd Hou Tao
2022-08-10 15:13 ` Yonghong Song
2022-08-10 8:05 ` [PATCH bpf v2 8/9] selftests/bpf: Add write tests for sk local storage map iterator Hou Tao
2022-08-10 17:05 ` Martin KaFai Lau
2022-08-10 8:05 ` [PATCH bpf v2 9/9] selftests/bpf: Ensure sleepable program is rejected by hash map iter Hou Tao
2022-08-10 17:20 ` [PATCH bpf v2 0/9] fixes for bpf map iterator patchwork-bot+netdevbpf