* [PATCH bpf-next v4 0/2] bpf: allow map helpers access to map values directly
From: Paul Chaignon via iovisor-dev @ 2018-04-22 21:50 UTC (permalink / raw)
To: Alexei Starovoitov, Daniel Borkmann, netdev@vger.kernel.org
Cc: iovisor-dev@lists.iovisor.org
Currently, helpers that expect ARG_PTR_TO_MAP_KEY and ARG_PTR_TO_MAP_VALUE
can only access stack and packet memory. This patchset allows these
helpers to directly access map values by passing registers of type
PTR_TO_MAP_VALUE.
The first patch changes the verifier; the second adds new test cases.
Previous versions of this patchset were sent on the iovisor-dev mailing
list only.
Changelog:
Changes in v4:
- Rebase.
Changes in v3:
- Bug fixes.
- Negative test cases.
Changes in v2:
- Additional test cases for adjusted maps.
Paul Chaignon (2):
bpf: allow map helpers access to map values directly
tools/bpf: add verifier tests for accesses to map
kernel/bpf/verifier.c | 9 +-
tools/testing/selftests/bpf/test_verifier.c | 266 ++++++++++++++++++++++++++++
2 files changed, 274 insertions(+), 1 deletion(-)
--
2.14.1
* [PATCH bpf-next v4 1/2] bpf: allow map helpers access to map values directly
From: Paul Chaignon @ 2018-04-22 21:52 UTC (permalink / raw)
To: Alexei Starovoitov, Daniel Borkmann, netdev; +Cc: iovisor-dev, paul.chaignon
Helpers that expect ARG_PTR_TO_MAP_KEY and ARG_PTR_TO_MAP_VALUE can only
access stack and packet memory. Allow these helpers to directly access
map values by passing registers of type PTR_TO_MAP_VALUE.
This change removes the need for an extra copy to the stack when using a
map value to perform a second map lookup, as in the following:
struct bpf_map_def SEC("maps") infobyreq = {
	.type = BPF_MAP_TYPE_HASH,
	.key_size = sizeof(struct request *),
	.value_size = sizeof(struct info_t),
	.max_entries = 1024,
};
struct bpf_map_def SEC("maps") counts = {
	.type = BPF_MAP_TYPE_HASH,
	.key_size = sizeof(struct info_t),
	.value_size = sizeof(u64),
	.max_entries = 1024,
};
SEC("kprobe/blk_account_io_start")
int bpf_blk_account_io_start(struct pt_regs *ctx)
{
	struct info_t *info = bpf_map_lookup_elem(&infobyreq, &ctx->di);
	u64 *count;

	if (!info)
		return 0;
	count = bpf_map_lookup_elem(&counts, info);
	if (count)
		(*count)++;
	return 0;
}
Signed-off-by: Paul Chaignon <paul.chaignon@orange.com>
---
kernel/bpf/verifier.c | 9 ++++++++-
1 file changed, 8 insertions(+), 1 deletion(-)
diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index 5dd1dcb902bf..70e00beade03 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -1914,7 +1914,7 @@ static int check_func_arg(struct bpf_verifier_env *env, u32 regno,
if (arg_type == ARG_PTR_TO_MAP_KEY ||
arg_type == ARG_PTR_TO_MAP_VALUE) {
expected_type = PTR_TO_STACK;
- if (!type_is_pkt_pointer(type) &&
+ if (!type_is_pkt_pointer(type) && type != PTR_TO_MAP_VALUE &&
type != expected_type)
goto err_type;
} else if (arg_type == ARG_CONST_SIZE ||
@@ -1970,6 +1970,9 @@ static int check_func_arg(struct bpf_verifier_env *env, u32 regno,
err = check_packet_access(env, regno, reg->off,
meta->map_ptr->key_size,
false);
+ else if (type == PTR_TO_MAP_VALUE)
+ err = check_map_access(env, regno, reg->off,
+ meta->map_ptr->key_size, false);
else
err = check_stack_boundary(env, regno,
meta->map_ptr->key_size,
@@ -1987,6 +1990,10 @@ static int check_func_arg(struct bpf_verifier_env *env, u32 regno,
err = check_packet_access(env, regno, reg->off,
meta->map_ptr->value_size,
false);
+ else if (type == PTR_TO_MAP_VALUE)
+ err = check_map_access(env, regno, reg->off,
+ meta->map_ptr->value_size,
+ false);
else
err = check_stack_boundary(env, regno,
meta->map_ptr->value_size,
--
2.14.1
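To make the effect of the new check_map_access() calls concrete, here is a toy user-space model of the bounds check they enforce on a PTR_TO_MAP_VALUE argument: the accessed window [off, off + size) must lie entirely inside the map's value area. This is an editorial sketch for illustration only; the kernel version also accounts for the register's tracked variable offset and min/max bounds.

```c
#include <assert.h>
#include <stdbool.h>

/* Simplified model of the map-value bounds check: an access of `size`
 * bytes at offset `off` is valid only if it stays within the value area
 * of `value_size` bytes.  Sketch only, not the kernel implementation. */
static bool map_access_ok(int value_size, int off, int size)
{
	return off >= 0 && size > 0 && off + size <= value_size;
}
```

The selftests added in patch 2/2 exercise exactly these boundaries (in-bounds offsets accepted, negative or past-the-end offsets rejected).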
* [PATCH bpf-next v4 2/2] tools/bpf: add verifier tests for accesses to map
From: Paul Chaignon via iovisor-dev @ 2018-04-22 21:52 UTC (permalink / raw)
To: Alexei Starovoitov, Daniel Borkmann, netdev@vger.kernel.org
Cc: iovisor-dev@lists.iovisor.org
This patch adds new test cases for accesses to map values from map
helpers.
Signed-off-by: Paul Chaignon <paul.chaignon@orange.com>
---
tools/testing/selftests/bpf/test_verifier.c | 266 ++++++++++++++++++++++++++++
1 file changed, 266 insertions(+)
diff --git a/tools/testing/selftests/bpf/test_verifier.c b/tools/testing/selftests/bpf/test_verifier.c
index 3e7718b1a9ae..165e9ddfa446 100644
--- a/tools/testing/selftests/bpf/test_verifier.c
+++ b/tools/testing/selftests/bpf/test_verifier.c
@@ -64,6 +64,7 @@ struct bpf_test {
struct bpf_insn insns[MAX_INSNS];
int fixup_map1[MAX_FIXUPS];
int fixup_map2[MAX_FIXUPS];
+ int fixup_map3[MAX_FIXUPS];
int fixup_prog[MAX_FIXUPS];
int fixup_map_in_map[MAX_FIXUPS];
const char *errstr;
@@ -88,6 +89,11 @@ struct test_val {
int foo[MAX_ENTRIES];
};
+struct other_val {
+ long long foo;
+ long long bar;
+};
+
static struct bpf_test tests[] = {
{
"add+sub+mul",
@@ -5593,6 +5599,257 @@ static struct bpf_test tests[] = {
.errstr = "R1 min value is negative",
.prog_type = BPF_PROG_TYPE_TRACEPOINT,
},
+ {
+ "map lookup helper access to map",
+ .insns = {
+ BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
+ BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
+ BPF_ST_MEM(BPF_DW, BPF_REG_2, 0, 0),
+ BPF_LD_MAP_FD(BPF_REG_1, 0),
+ BPF_EMIT_CALL(BPF_FUNC_map_lookup_elem),
+ BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 4),
+ BPF_MOV64_REG(BPF_REG_2, BPF_REG_0),
+ BPF_LD_MAP_FD(BPF_REG_1, 0),
+ BPF_EMIT_CALL(BPF_FUNC_map_lookup_elem),
+ BPF_EXIT_INSN(),
+ },
+ .fixup_map3 = { 3, 8 },
+ .result = ACCEPT,
+ .prog_type = BPF_PROG_TYPE_TRACEPOINT,
+ },
+ {
+ "map update helper access to map",
+ .insns = {
+ BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
+ BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
+ BPF_ST_MEM(BPF_DW, BPF_REG_2, 0, 0),
+ BPF_LD_MAP_FD(BPF_REG_1, 0),
+ BPF_EMIT_CALL(BPF_FUNC_map_lookup_elem),
+ BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 6),
+ BPF_MOV64_IMM(BPF_REG_4, 0),
+ BPF_MOV64_REG(BPF_REG_3, BPF_REG_0),
+ BPF_MOV64_REG(BPF_REG_2, BPF_REG_0),
+ BPF_LD_MAP_FD(BPF_REG_1, 0),
+ BPF_EMIT_CALL(BPF_FUNC_map_update_elem),
+ BPF_EXIT_INSN(),
+ },
+ .fixup_map3 = { 3, 10 },
+ .result = ACCEPT,
+ .prog_type = BPF_PROG_TYPE_TRACEPOINT,
+ },
+ {
+ "map update helper access to map: wrong size",
+ .insns = {
+ BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
+ BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
+ BPF_ST_MEM(BPF_DW, BPF_REG_2, 0, 0),
+ BPF_LD_MAP_FD(BPF_REG_1, 0),
+ BPF_EMIT_CALL(BPF_FUNC_map_lookup_elem),
+ BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 6),
+ BPF_MOV64_IMM(BPF_REG_4, 0),
+ BPF_MOV64_REG(BPF_REG_3, BPF_REG_0),
+ BPF_MOV64_REG(BPF_REG_2, BPF_REG_0),
+ BPF_LD_MAP_FD(BPF_REG_1, 0),
+ BPF_EMIT_CALL(BPF_FUNC_map_update_elem),
+ BPF_EXIT_INSN(),
+ },
+ .fixup_map1 = { 3 },
+ .fixup_map3 = { 10 },
+ .result = REJECT,
+ .errstr = "invalid access to map value, value_size=8 off=0 size=16",
+ .prog_type = BPF_PROG_TYPE_TRACEPOINT,
+ },
+ {
+ "map helper access to adjusted map (via const imm)",
+ .insns = {
+ BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
+ BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
+ BPF_ST_MEM(BPF_DW, BPF_REG_2, 0, 0),
+ BPF_LD_MAP_FD(BPF_REG_1, 0),
+ BPF_EMIT_CALL(BPF_FUNC_map_lookup_elem),
+ BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 5),
+ BPF_MOV64_REG(BPF_REG_2, BPF_REG_0),
+ BPF_ALU64_IMM(BPF_ADD, BPF_REG_2,
+ offsetof(struct other_val, bar)),
+ BPF_LD_MAP_FD(BPF_REG_1, 0),
+ BPF_EMIT_CALL(BPF_FUNC_map_lookup_elem),
+ BPF_EXIT_INSN(),
+ },
+ .fixup_map3 = { 3, 9 },
+ .result = ACCEPT,
+ .prog_type = BPF_PROG_TYPE_TRACEPOINT,
+ },
+ {
+ "map helper access to adjusted map (via const imm): out-of-bound 1",
+ .insns = {
+ BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
+ BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
+ BPF_ST_MEM(BPF_DW, BPF_REG_2, 0, 0),
+ BPF_LD_MAP_FD(BPF_REG_1, 0),
+ BPF_EMIT_CALL(BPF_FUNC_map_lookup_elem),
+ BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 5),
+ BPF_MOV64_REG(BPF_REG_2, BPF_REG_0),
+ BPF_ALU64_IMM(BPF_ADD, BPF_REG_2,
+ sizeof(struct other_val) - 4),
+ BPF_LD_MAP_FD(BPF_REG_1, 0),
+ BPF_EMIT_CALL(BPF_FUNC_map_lookup_elem),
+ BPF_EXIT_INSN(),
+ },
+ .fixup_map3 = { 3, 9 },
+ .result = REJECT,
+ .errstr = "invalid access to map value, value_size=16 off=12 size=8",
+ .prog_type = BPF_PROG_TYPE_TRACEPOINT,
+ },
+ {
+ "map helper access to adjusted map (via const imm): out-of-bound 2",
+ .insns = {
+ BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
+ BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
+ BPF_ST_MEM(BPF_DW, BPF_REG_2, 0, 0),
+ BPF_LD_MAP_FD(BPF_REG_1, 0),
+ BPF_EMIT_CALL(BPF_FUNC_map_lookup_elem),
+ BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 5),
+ BPF_MOV64_REG(BPF_REG_2, BPF_REG_0),
+ BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -4),
+ BPF_LD_MAP_FD(BPF_REG_1, 0),
+ BPF_EMIT_CALL(BPF_FUNC_map_lookup_elem),
+ BPF_EXIT_INSN(),
+ },
+ .fixup_map3 = { 3, 9 },
+ .result = REJECT,
+ .errstr = "invalid access to map value, value_size=16 off=-4 size=8",
+ .prog_type = BPF_PROG_TYPE_TRACEPOINT,
+ },
+ {
+ "map helper access to adjusted map (via const reg)",
+ .insns = {
+ BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
+ BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
+ BPF_ST_MEM(BPF_DW, BPF_REG_2, 0, 0),
+ BPF_LD_MAP_FD(BPF_REG_1, 0),
+ BPF_EMIT_CALL(BPF_FUNC_map_lookup_elem),
+ BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 6),
+ BPF_MOV64_REG(BPF_REG_2, BPF_REG_0),
+ BPF_MOV64_IMM(BPF_REG_3,
+ offsetof(struct other_val, bar)),
+ BPF_ALU64_REG(BPF_ADD, BPF_REG_2, BPF_REG_3),
+ BPF_LD_MAP_FD(BPF_REG_1, 0),
+ BPF_EMIT_CALL(BPF_FUNC_map_lookup_elem),
+ BPF_EXIT_INSN(),
+ },
+ .fixup_map3 = { 3, 10 },
+ .result = ACCEPT,
+ .prog_type = BPF_PROG_TYPE_TRACEPOINT,
+ },
+ {
+ "map helper access to adjusted map (via const reg): out-of-bound 1",
+ .insns = {
+ BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
+ BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
+ BPF_ST_MEM(BPF_DW, BPF_REG_2, 0, 0),
+ BPF_LD_MAP_FD(BPF_REG_1, 0),
+ BPF_EMIT_CALL(BPF_FUNC_map_lookup_elem),
+ BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 6),
+ BPF_MOV64_REG(BPF_REG_2, BPF_REG_0),
+ BPF_MOV64_IMM(BPF_REG_3,
+ sizeof(struct other_val) - 4),
+ BPF_ALU64_REG(BPF_ADD, BPF_REG_2, BPF_REG_3),
+ BPF_LD_MAP_FD(BPF_REG_1, 0),
+ BPF_EMIT_CALL(BPF_FUNC_map_lookup_elem),
+ BPF_EXIT_INSN(),
+ },
+ .fixup_map3 = { 3, 10 },
+ .result = REJECT,
+ .errstr = "invalid access to map value, value_size=16 off=12 size=8",
+ .prog_type = BPF_PROG_TYPE_TRACEPOINT,
+ },
+ {
+ "map helper access to adjusted map (via const reg): out-of-bound 2",
+ .insns = {
+ BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
+ BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
+ BPF_ST_MEM(BPF_DW, BPF_REG_2, 0, 0),
+ BPF_LD_MAP_FD(BPF_REG_1, 0),
+ BPF_EMIT_CALL(BPF_FUNC_map_lookup_elem),
+ BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 6),
+ BPF_MOV64_REG(BPF_REG_2, BPF_REG_0),
+ BPF_MOV64_IMM(BPF_REG_3, -4),
+ BPF_ALU64_REG(BPF_ADD, BPF_REG_2, BPF_REG_3),
+ BPF_LD_MAP_FD(BPF_REG_1, 0),
+ BPF_EMIT_CALL(BPF_FUNC_map_lookup_elem),
+ BPF_EXIT_INSN(),
+ },
+ .fixup_map3 = { 3, 10 },
+ .result = REJECT,
+ .errstr = "invalid access to map value, value_size=16 off=-4 size=8",
+ .prog_type = BPF_PROG_TYPE_TRACEPOINT,
+ },
+ {
+ "map helper access to adjusted map (via variable)",
+ .insns = {
+ BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
+ BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
+ BPF_ST_MEM(BPF_DW, BPF_REG_2, 0, 0),
+ BPF_LD_MAP_FD(BPF_REG_1, 0),
+ BPF_EMIT_CALL(BPF_FUNC_map_lookup_elem),
+ BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 7),
+ BPF_MOV64_REG(BPF_REG_2, BPF_REG_0),
+ BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_0, 0),
+ BPF_JMP_IMM(BPF_JGT, BPF_REG_3,
+ offsetof(struct other_val, bar), 4),
+ BPF_ALU64_REG(BPF_ADD, BPF_REG_2, BPF_REG_3),
+ BPF_LD_MAP_FD(BPF_REG_1, 0),
+ BPF_EMIT_CALL(BPF_FUNC_map_lookup_elem),
+ BPF_EXIT_INSN(),
+ },
+ .fixup_map3 = { 3, 11 },
+ .result = ACCEPT,
+ .prog_type = BPF_PROG_TYPE_TRACEPOINT,
+ },
+ {
+ "map helper access to adjusted map (via variable): no max check",
+ .insns = {
+ BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
+ BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
+ BPF_ST_MEM(BPF_DW, BPF_REG_2, 0, 0),
+ BPF_LD_MAP_FD(BPF_REG_1, 0),
+ BPF_EMIT_CALL(BPF_FUNC_map_lookup_elem),
+ BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 6),
+ BPF_MOV64_REG(BPF_REG_2, BPF_REG_0),
+ BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_0, 0),
+ BPF_ALU64_REG(BPF_ADD, BPF_REG_2, BPF_REG_3),
+ BPF_LD_MAP_FD(BPF_REG_1, 0),
+ BPF_EMIT_CALL(BPF_FUNC_map_lookup_elem),
+ BPF_EXIT_INSN(),
+ },
+ .fixup_map3 = { 3, 10 },
+ .result = REJECT,
+ .errstr = "R2 unbounded memory access, make sure to bounds check any array access into a map",
+ .prog_type = BPF_PROG_TYPE_TRACEPOINT,
+ },
+ {
+ "map helper access to adjusted map (via variable): wrong max check",
+ .insns = {
+ BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
+ BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
+ BPF_ST_MEM(BPF_DW, BPF_REG_2, 0, 0),
+ BPF_LD_MAP_FD(BPF_REG_1, 0),
+ BPF_EMIT_CALL(BPF_FUNC_map_lookup_elem),
+ BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 7),
+ BPF_MOV64_REG(BPF_REG_2, BPF_REG_0),
+ BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_0, 0),
+ BPF_JMP_IMM(BPF_JGT, BPF_REG_3,
+ offsetof(struct other_val, bar) + 1, 4),
+ BPF_ALU64_REG(BPF_ADD, BPF_REG_2, BPF_REG_3),
+ BPF_LD_MAP_FD(BPF_REG_1, 0),
+ BPF_EMIT_CALL(BPF_FUNC_map_lookup_elem),
+ BPF_EXIT_INSN(),
+ },
+ .fixup_map3 = { 3, 11 },
+ .result = REJECT,
+ .errstr = "invalid access to map value, value_size=16 off=9 size=8",
+ .prog_type = BPF_PROG_TYPE_TRACEPOINT,
+ },
{
"map element value is preserved across register spilling",
.insns = {
@@ -11533,6 +11790,7 @@ static void do_test_fixup(struct bpf_test *test, struct bpf_insn *prog,
{
int *fixup_map1 = test->fixup_map1;
int *fixup_map2 = test->fixup_map2;
+ int *fixup_map3 = test->fixup_map3;
int *fixup_prog = test->fixup_prog;
int *fixup_map_in_map = test->fixup_map_in_map;
@@ -11556,6 +11814,14 @@ static void do_test_fixup(struct bpf_test *test, struct bpf_insn *prog,
} while (*fixup_map2);
}
+ if (*fixup_map3) {
+ map_fds[1] = create_map(sizeof(struct other_val), 1);
+ do {
+ prog[*fixup_map3].imm = map_fds[1];
+ fixup_map3++;
+ } while (*fixup_map3);
+ }
+
if (*fixup_prog) {
map_fds[2] = create_prog_array();
do {
--
2.14.1
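As a side note on the .fixup_map3 mechanism above: each zero-terminated list names the instruction indices whose imm field do_test_fixup() patches with the freshly created map fd (the BPF_LD_MAP_FD slots). A minimal model of that patching loop, with hypothetical values:

```c
#include <assert.h>

/* Minimal model of the do_test_fixup() patching loop: write the map fd
 * into the imm slot of each instruction index named in a zero-terminated
 * fixup list.  Toy sketch of the selftest harness logic. */
static void apply_fixups(int *imm, const int *fixups, int map_fd)
{
	for (; *fixups; fixups++)
		imm[*fixups] = map_fd;
}
```

Because the lists are zero-terminated, instruction 0 can never be a fixup target; the tests above place their first BPF_LD_MAP_FD at index 3, so this never bites in practice.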
* Re: [PATCH bpf-next v4 1/2] bpf: allow map helpers access to map values directly
From: Daniel Borkmann @ 2018-04-23 21:18 UTC (permalink / raw)
To: Paul Chaignon, Alexei Starovoitov, netdev; +Cc: iovisor-dev, paul.chaignon
On 04/22/2018 11:52 PM, Paul Chaignon wrote:
> Helpers that expect ARG_PTR_TO_MAP_KEY and ARG_PTR_TO_MAP_VALUE can only
> access stack and packet memory. Allow these helpers to directly access
> map values by passing registers of type PTR_TO_MAP_VALUE.
>
> This change removes the need for an extra copy to the stack when using a
> map value to perform a second map lookup, as in the following:
>
> struct bpf_map_def SEC("maps") infobyreq = {
> .type = BPF_MAP_TYPE_HASHMAP,
> .key_size = sizeof(struct request *),
> .value_size = sizeof(struct info_t),
> .max_entries = 1024,
> };
> struct bpf_map_def SEC("maps") counts = {
> .type = BPF_MAP_TYPE_HASHMAP,
> .key_size = sizeof(struct info_t),
> .value_size = sizeof(u64),
> .max_entries = 1024,
> };
> SEC("kprobe/blk_account_io_start")
> int bpf_blk_account_io_start(struct pt_regs *ctx)
> {
> struct info_t *info = bpf_map_lookup_elem(&infobyreq, &ctx->di);
> u64 *count = bpf_map_lookup_elem(&counts, info);
> (*count)++;
> }
>
> Signed-off-by: Paul Chaignon <paul.chaignon@orange.com>
> ---
> kernel/bpf/verifier.c | 9 ++++++++-
> 1 file changed, 8 insertions(+), 1 deletion(-)
>
> diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
> index 5dd1dcb902bf..70e00beade03 100644
> --- a/kernel/bpf/verifier.c
> +++ b/kernel/bpf/verifier.c
> @@ -1914,7 +1914,7 @@ static int check_func_arg(struct bpf_verifier_env *env, u32 regno,
> if (arg_type == ARG_PTR_TO_MAP_KEY ||
> arg_type == ARG_PTR_TO_MAP_VALUE) {
> expected_type = PTR_TO_STACK;
> - if (!type_is_pkt_pointer(type) &&
> + if (!type_is_pkt_pointer(type) && type != PTR_TO_MAP_VALUE &&
> type != expected_type)
> goto err_type;
> } else if (arg_type == ARG_CONST_SIZE ||
> @@ -1970,6 +1970,9 @@ static int check_func_arg(struct bpf_verifier_env *env, u32 regno,
> err = check_packet_access(env, regno, reg->off,
> meta->map_ptr->key_size,
> false);
> + else if (type == PTR_TO_MAP_VALUE)
> + err = check_map_access(env, regno, reg->off,
> + meta->map_ptr->key_size, false);
> else
> err = check_stack_boundary(env, regno,
> meta->map_ptr->key_size,
We should reuse check_helper_mem_access() here, which already covers all three
cases above and would simplify the code a bit.
> @@ -1987,6 +1990,10 @@ static int check_func_arg(struct bpf_verifier_env *env, u32 regno,
> err = check_packet_access(env, regno, reg->off,
> meta->map_ptr->value_size,
> false);
> + else if (type == PTR_TO_MAP_VALUE)
> + err = check_map_access(env, regno, reg->off,
> + meta->map_ptr->value_size,
> + false);
> else
> err = check_stack_boundary(env, regno,
> meta->map_ptr->value_size,
>
Ditto.
Thanks,
Daniel
* Re: [PATCH bpf-next v4 1/2] bpf: allow map helpers access to map values directly
From: Paul Chaignon via iovisor-dev @ 2018-04-24 13:50 UTC (permalink / raw)
To: Daniel Borkmann, Alexei Starovoitov, netdev@vger.kernel.org
Cc: iovisor-dev@lists.iovisor.org
On 04/23/2018 11:18 PM +0200, Daniel Borkmann wrote:
> On 04/22/2018 11:52 PM, Paul Chaignon wrote:
> > Helpers that expect ARG_PTR_TO_MAP_KEY and ARG_PTR_TO_MAP_VALUE can only
> > access stack and packet memory. Allow these helpers to directly access
> > map values by passing registers of type PTR_TO_MAP_VALUE.
> >
> > This change removes the need for an extra copy to the stack when using a
> > map value to perform a second map lookup, as in the following:
> >
> > struct bpf_map_def SEC("maps") infobyreq = {
> > .type = BPF_MAP_TYPE_HASHMAP,
> > .key_size = sizeof(struct request *),
> > .value_size = sizeof(struct info_t),
> > .max_entries = 1024,
> > };
> > struct bpf_map_def SEC("maps") counts = {
> > .type = BPF_MAP_TYPE_HASHMAP,
> > .key_size = sizeof(struct info_t),
> > .value_size = sizeof(u64),
> > .max_entries = 1024,
> > };
> > SEC("kprobe/blk_account_io_start")
> > int bpf_blk_account_io_start(struct pt_regs *ctx)
> > {
> > struct info_t *info = bpf_map_lookup_elem(&infobyreq, &ctx->di);
> > u64 *count = bpf_map_lookup_elem(&counts, info);
> > (*count)++;
> > }
> >
> > Signed-off-by: Paul Chaignon <paul.chaignon@orange.com>
> > ---
> > kernel/bpf/verifier.c | 9 ++++++++-
> > 1 file changed, 8 insertions(+), 1 deletion(-)
> >
> > diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
> > index 5dd1dcb902bf..70e00beade03 100644
> > --- a/kernel/bpf/verifier.c
> > +++ b/kernel/bpf/verifier.c
> > @@ -1914,7 +1914,7 @@ static int check_func_arg(struct bpf_verifier_env *env, u32 regno,
> > if (arg_type == ARG_PTR_TO_MAP_KEY ||
> > arg_type == ARG_PTR_TO_MAP_VALUE) {
> > expected_type = PTR_TO_STACK;
> > - if (!type_is_pkt_pointer(type) &&
> > + if (!type_is_pkt_pointer(type) && type != PTR_TO_MAP_VALUE &&
> > type != expected_type)
> > goto err_type;
> > } else if (arg_type == ARG_CONST_SIZE ||
> > @@ -1970,6 +1970,9 @@ static int check_func_arg(struct bpf_verifier_env *env, u32 regno,
> > err = check_packet_access(env, regno, reg->off,
> > meta->map_ptr->key_size,
> > false);
> > + else if (type == PTR_TO_MAP_VALUE)
> > + err = check_map_access(env, regno, reg->off,
> > + meta->map_ptr->key_size, false);
> > else
> > err = check_stack_boundary(env, regno,
> > meta->map_ptr->key_size,
>
> We should reuse check_helper_mem_access() here which covers all three cases
> from above already and simplifies the code a bit.
Thanks for the review.
I've sent a refactored patchset that uses check_helper_mem_access().
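For reference, the shape of that consolidation can be modeled in a few lines of host-side C: a single dispatcher replaces the open-coded packet/map-value/stack branch at both the key-size and value-size call sites. The names mirror the kernel's register types, but this is a hypothetical editorial sketch, not the kernel code.

```c
#include <assert.h>
#include <string.h>

/* Toy model of the three-way dispatch that check_helper_mem_access()
 * centralizes: pick the bounds-check routine from the register type.
 * Hypothetical sketch only; the real helper also passes env, regno,
 * access size, and the call-arg meta through to the chosen checker. */
enum reg_type { PTR_TO_STACK, PTR_TO_PACKET, PTR_TO_MAP_VALUE };

static const char *helper_mem_check(enum reg_type type)
{
	switch (type) {
	case PTR_TO_PACKET:
		return "check_packet_access";
	case PTR_TO_MAP_VALUE:
		return "check_map_access";
	case PTR_TO_STACK:
	default:
		return "check_stack_boundary";
	}
}
```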
>
> > @@ -1987,6 +1990,10 @@ static int check_func_arg(struct bpf_verifier_env *env, u32 regno,
> > err = check_packet_access(env, regno, reg->off,
> > meta->map_ptr->value_size,
> > false);
> > + else if (type == PTR_TO_MAP_VALUE)
> > + err = check_map_access(env, regno, reg->off,
> > + meta->map_ptr->value_size,
> > + false);
> > else
> > err = check_stack_boundary(env, regno,
> > meta->map_ptr->value_size,
> >
>
> Ditto.
>
> Thanks,
> Daniel