* [PATCH v2 bpf-next 0/5] bpf: Support access to bpf map fields
@ 2020-06-19 21:11 Andrey Ignatov
  2020-06-19 21:11 ` [PATCH v2 bpf-next 1/5] bpf: Switch btf_parse_vmlinux to btf_find_by_name_kind Andrey Ignatov
                   ` (6 more replies)
  0 siblings, 7 replies; 15+ messages in thread
From: Andrey Ignatov @ 2020-06-19 21:11 UTC (permalink / raw)
  To: bpf; +Cc: Andrey Ignatov, ast, daniel, kafai, andriin, kernel-team

v1->v2:
- move btf id cache to a new bpf_map_ops.map_btf_id field (Martin, Andrii);
- don't check btf names for collisions (Martin);
- drop btf_find_by_name_kind_next() patch since it was needed only for
  collision check;
- don't fall back to `struct bpf_map` if a map type doesn't specify both
  map_btf_name and map_btf_id;

This patch set adds support for accessing bpf map fields from bpf programs
using btf_struct_access().

That way a program can cast a pointer to a map to either `struct bpf_map *`
or a map type specific struct pointer such as `struct bpf_array *` or
`struct bpf_htab *`, and access the necessary fields, e.g. map->max_entries.

The fields, in turn, should be defined in a user-provided struct with the
preserve_access_index attribute or included from vmlinux.h.
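
For reference, a minimal sketch of the usage pattern (patch 3 has the full
example; the map name and program section here are illustrative, and the
usual <linux/bpf.h> / <bpf/bpf_helpers.h> includes are assumed):

	struct bpf_map {
		__u32 max_entries;
	} __attribute__((preserve_access_index));

	struct {
		__uint(type, BPF_MAP_TYPE_ARRAY);
		__uint(max_entries, 4);
		__type(key, __u32);
		__type(value, __u32);
	} m_array SEC(".maps");

	SEC("cgroup_skb/egress")
	int cg_skb(void *ctx)
	{
		/* read a map field through the user-provided struct */
		return ((struct bpf_map *)&m_array)->max_entries > 0;
	}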

Please see patch 3 for more details on the feature and use-cases.

Other patches:

Patch 1 is refactoring to simplify btf_parse_vmlinux().
Patch 2 is a rename to avoid having two different `struct bpf_htab`.

Patch 4 enables access to map fields for all map types.
Patch 5 adds selftests.


Andrey Ignatov (5):
  bpf: Switch btf_parse_vmlinux to btf_find_by_name_kind
  bpf: Rename bpf_htab to bpf_shtab in sock_map
  bpf: Support access to bpf map fields
  bpf: Set map_btf_{name,id} for all map types
  selftests/bpf: Test access to bpf map pointer

 include/linux/bpf.h                           |   9 +
 include/linux/bpf_verifier.h                  |   1 +
 kernel/bpf/arraymap.c                         |  18 +
 kernel/bpf/bpf_struct_ops.c                   |   3 +
 kernel/bpf/btf.c                              |  63 +-
 kernel/bpf/cpumap.c                           |   3 +
 kernel/bpf/devmap.c                           |   6 +
 kernel/bpf/hashtab.c                          |  15 +
 kernel/bpf/local_storage.c                    |   3 +
 kernel/bpf/lpm_trie.c                         |   3 +
 kernel/bpf/queue_stack_maps.c                 |   6 +
 kernel/bpf/reuseport_array.c                  |   3 +
 kernel/bpf/ringbuf.c                          |   3 +
 kernel/bpf/stackmap.c                         |   3 +
 kernel/bpf/verifier.c                         |  82 ++-
 net/core/bpf_sk_storage.c                     |   3 +
 net/core/sock_map.c                           |  88 +--
 net/xdp/xskmap.c                              |   3 +
 .../selftests/bpf/prog_tests/map_ptr.c        |  32 +
 .../selftests/bpf/progs/map_ptr_kern.c        | 686 ++++++++++++++++++
 .../testing/selftests/bpf/verifier/map_ptr.c  |  62 ++
 .../selftests/bpf/verifier/map_ptr_mixing.c   |   2 +-
 22 files changed, 1030 insertions(+), 67 deletions(-)
 create mode 100644 tools/testing/selftests/bpf/prog_tests/map_ptr.c
 create mode 100644 tools/testing/selftests/bpf/progs/map_ptr_kern.c
 create mode 100644 tools/testing/selftests/bpf/verifier/map_ptr.c

-- 
2.24.1



* [PATCH v2 bpf-next 1/5] bpf: Switch btf_parse_vmlinux to btf_find_by_name_kind
  2020-06-19 21:11 [PATCH v2 bpf-next 0/5] bpf: Support access to bpf map fields Andrey Ignatov
@ 2020-06-19 21:11 ` Andrey Ignatov
  2020-06-20  6:04   ` John Fastabend
  2020-06-19 21:11 ` [PATCH v2 bpf-next 2/5] bpf: Rename bpf_htab to bpf_shtab in sock_map Andrey Ignatov
                   ` (5 subsequent siblings)
  6 siblings, 1 reply; 15+ messages in thread
From: Andrey Ignatov @ 2020-06-19 21:11 UTC (permalink / raw)
  To: bpf; +Cc: Andrey Ignatov, ast, daniel, kafai, andriin, kernel-team

btf_parse_vmlinux() implements a manual search for struct bpf_ctx_convert
since btf_find_by_name_kind() was not available at the time it was written.

Later, btf_find_by_name_kind() was introduced in commit 27ae7997a661 ("bpf:
Introduce BPF_PROG_TYPE_STRUCT_OPS"). It provides similar search
functionality and can be leveraged in btf_parse_vmlinux(). Do it.

Signed-off-by: Andrey Ignatov <rdna@fb.com>
Acked-by: Andrii Nakryiko <andriin@fb.com>
---
 kernel/bpf/btf.c | 23 ++++++-----------------
 1 file changed, 6 insertions(+), 17 deletions(-)

diff --git a/kernel/bpf/btf.c b/kernel/bpf/btf.c
index 58c9af1d4808..3eb804618a53 100644
--- a/kernel/bpf/btf.c
+++ b/kernel/bpf/btf.c
@@ -3591,7 +3591,7 @@ struct btf *btf_parse_vmlinux(void)
 	struct btf_verifier_env *env = NULL;
 	struct bpf_verifier_log *log;
 	struct btf *btf = NULL;
-	int err, i;
+	int err, btf_id;
 
 	env = kzalloc(sizeof(*env), GFP_KERNEL | __GFP_NOWARN);
 	if (!env)
@@ -3625,24 +3625,13 @@ struct btf *btf_parse_vmlinux(void)
 		goto errout;
 
 	/* find struct bpf_ctx_convert for type checking later */
-	for (i = 1; i <= btf->nr_types; i++) {
-		const struct btf_type *t;
-		const char *tname;
-
-		t = btf_type_by_id(btf, i);
-		if (!__btf_type_is_struct(t))
-			continue;
-		tname = __btf_name_by_offset(btf, t->name_off);
-		if (!strcmp(tname, "bpf_ctx_convert")) {
-			/* btf_parse_vmlinux() runs under bpf_verifier_lock */
-			bpf_ctx_convert.t = t;
-			break;
-		}
-	}
-	if (i > btf->nr_types) {
-		err = -ENOENT;
+	btf_id = btf_find_by_name_kind(btf, "bpf_ctx_convert", BTF_KIND_STRUCT);
+	if (btf_id < 0) {
+		err = btf_id;
 		goto errout;
 	}
+	/* btf_parse_vmlinux() runs under bpf_verifier_lock */
+	bpf_ctx_convert.t = btf_type_by_id(btf, btf_id);
 
 	bpf_struct_ops_init(btf, log);
 
-- 
2.24.1



* [PATCH v2 bpf-next 2/5] bpf: Rename bpf_htab to bpf_shtab in sock_map
  2020-06-19 21:11 [PATCH v2 bpf-next 0/5] bpf: Support access to bpf map fields Andrey Ignatov
  2020-06-19 21:11 ` [PATCH v2 bpf-next 1/5] bpf: Switch btf_parse_vmlinux to btf_find_by_name_kind Andrey Ignatov
@ 2020-06-19 21:11 ` Andrey Ignatov
  2020-06-20  6:08   ` John Fastabend
  2020-06-19 21:11 ` [PATCH v2 bpf-next 3/5] bpf: Support access to bpf map fields Andrey Ignatov
                   ` (4 subsequent siblings)
  6 siblings, 1 reply; 15+ messages in thread
From: Andrey Ignatov @ 2020-06-19 21:11 UTC (permalink / raw)
  To: bpf; +Cc: Andrey Ignatov, ast, daniel, kafai, andriin, kernel-team

There are two different `struct bpf_htab` definitions in the bpf code, in
the following files:
- kernel/bpf/hashtab.c
- net/core/sock_map.c

This makes it impossible to find the proper btf_id by name = "bpf_htab" and
kind = BTF_KIND_STRUCT, which is needed to support access to the map pointer
so that a bpf program can access `struct bpf_htab` fields.

To make it possible, one of the structs should be renamed. sock_map.c looks
like the better candidate for the rename since its hash table is a
specialized version of the one in hashtab.c.

Rename it to bpf_shtab ("sh" stands for Sock Hash).

Signed-off-by: Andrey Ignatov <rdna@fb.com>
---
 net/core/sock_map.c | 82 ++++++++++++++++++++++-----------------------
 1 file changed, 41 insertions(+), 41 deletions(-)

diff --git a/net/core/sock_map.c b/net/core/sock_map.c
index 4059f94e9bb5..2b884f2d562a 100644
--- a/net/core/sock_map.c
+++ b/net/core/sock_map.c
@@ -655,7 +655,7 @@ const struct bpf_map_ops sock_map_ops = {
 	.map_check_btf		= map_check_no_btf,
 };
 
-struct bpf_htab_elem {
+struct bpf_shtab_elem {
 	struct rcu_head rcu;
 	u32 hash;
 	struct sock *sk;
@@ -663,14 +663,14 @@ struct bpf_htab_elem {
 	u8 key[];
 };
 
-struct bpf_htab_bucket {
+struct bpf_shtab_bucket {
 	struct hlist_head head;
 	raw_spinlock_t lock;
 };
 
-struct bpf_htab {
+struct bpf_shtab {
 	struct bpf_map map;
-	struct bpf_htab_bucket *buckets;
+	struct bpf_shtab_bucket *buckets;
 	u32 buckets_num;
 	u32 elem_size;
 	struct sk_psock_progs progs;
@@ -682,17 +682,17 @@ static inline u32 sock_hash_bucket_hash(const void *key, u32 len)
 	return jhash(key, len, 0);
 }
 
-static struct bpf_htab_bucket *sock_hash_select_bucket(struct bpf_htab *htab,
-						       u32 hash)
+static struct bpf_shtab_bucket *sock_hash_select_bucket(struct bpf_shtab *htab,
+							u32 hash)
 {
 	return &htab->buckets[hash & (htab->buckets_num - 1)];
 }
 
-static struct bpf_htab_elem *
+static struct bpf_shtab_elem *
 sock_hash_lookup_elem_raw(struct hlist_head *head, u32 hash, void *key,
 			  u32 key_size)
 {
-	struct bpf_htab_elem *elem;
+	struct bpf_shtab_elem *elem;
 
 	hlist_for_each_entry_rcu(elem, head, node) {
 		if (elem->hash == hash &&
@@ -705,10 +705,10 @@ sock_hash_lookup_elem_raw(struct hlist_head *head, u32 hash, void *key,
 
 static struct sock *__sock_hash_lookup_elem(struct bpf_map *map, void *key)
 {
-	struct bpf_htab *htab = container_of(map, struct bpf_htab, map);
+	struct bpf_shtab *htab = container_of(map, struct bpf_shtab, map);
 	u32 key_size = map->key_size, hash;
-	struct bpf_htab_bucket *bucket;
-	struct bpf_htab_elem *elem;
+	struct bpf_shtab_bucket *bucket;
+	struct bpf_shtab_elem *elem;
 
 	WARN_ON_ONCE(!rcu_read_lock_held());
 
@@ -719,8 +719,8 @@ static struct sock *__sock_hash_lookup_elem(struct bpf_map *map, void *key)
 	return elem ? elem->sk : NULL;
 }
 
-static void sock_hash_free_elem(struct bpf_htab *htab,
-				struct bpf_htab_elem *elem)
+static void sock_hash_free_elem(struct bpf_shtab *htab,
+				struct bpf_shtab_elem *elem)
 {
 	atomic_dec(&htab->count);
 	kfree_rcu(elem, rcu);
@@ -729,9 +729,9 @@ static void sock_hash_free_elem(struct bpf_htab *htab,
 static void sock_hash_delete_from_link(struct bpf_map *map, struct sock *sk,
 				       void *link_raw)
 {
-	struct bpf_htab *htab = container_of(map, struct bpf_htab, map);
-	struct bpf_htab_elem *elem_probe, *elem = link_raw;
-	struct bpf_htab_bucket *bucket;
+	struct bpf_shtab *htab = container_of(map, struct bpf_shtab, map);
+	struct bpf_shtab_elem *elem_probe, *elem = link_raw;
+	struct bpf_shtab_bucket *bucket;
 
 	WARN_ON_ONCE(!rcu_read_lock_held());
 	bucket = sock_hash_select_bucket(htab, elem->hash);
@@ -753,10 +753,10 @@ static void sock_hash_delete_from_link(struct bpf_map *map, struct sock *sk,
 
 static int sock_hash_delete_elem(struct bpf_map *map, void *key)
 {
-	struct bpf_htab *htab = container_of(map, struct bpf_htab, map);
+	struct bpf_shtab *htab = container_of(map, struct bpf_shtab, map);
 	u32 hash, key_size = map->key_size;
-	struct bpf_htab_bucket *bucket;
-	struct bpf_htab_elem *elem;
+	struct bpf_shtab_bucket *bucket;
+	struct bpf_shtab_elem *elem;
 	int ret = -ENOENT;
 
 	hash = sock_hash_bucket_hash(key, key_size);
@@ -774,12 +774,12 @@ static int sock_hash_delete_elem(struct bpf_map *map, void *key)
 	return ret;
 }
 
-static struct bpf_htab_elem *sock_hash_alloc_elem(struct bpf_htab *htab,
-						  void *key, u32 key_size,
-						  u32 hash, struct sock *sk,
-						  struct bpf_htab_elem *old)
+static struct bpf_shtab_elem *sock_hash_alloc_elem(struct bpf_shtab *htab,
+						   void *key, u32 key_size,
+						   u32 hash, struct sock *sk,
+						   struct bpf_shtab_elem *old)
 {
-	struct bpf_htab_elem *new;
+	struct bpf_shtab_elem *new;
 
 	if (atomic_inc_return(&htab->count) > htab->map.max_entries) {
 		if (!old) {
@@ -803,10 +803,10 @@ static struct bpf_htab_elem *sock_hash_alloc_elem(struct bpf_htab *htab,
 static int sock_hash_update_common(struct bpf_map *map, void *key,
 				   struct sock *sk, u64 flags)
 {
-	struct bpf_htab *htab = container_of(map, struct bpf_htab, map);
+	struct bpf_shtab *htab = container_of(map, struct bpf_shtab, map);
 	u32 key_size = map->key_size, hash;
-	struct bpf_htab_elem *elem, *elem_new;
-	struct bpf_htab_bucket *bucket;
+	struct bpf_shtab_elem *elem, *elem_new;
+	struct bpf_shtab_bucket *bucket;
 	struct sk_psock_link *link;
 	struct sk_psock *psock;
 	int ret;
@@ -916,8 +916,8 @@ static int sock_hash_update_elem(struct bpf_map *map, void *key,
 static int sock_hash_get_next_key(struct bpf_map *map, void *key,
 				  void *key_next)
 {
-	struct bpf_htab *htab = container_of(map, struct bpf_htab, map);
-	struct bpf_htab_elem *elem, *elem_next;
+	struct bpf_shtab *htab = container_of(map, struct bpf_shtab, map);
+	struct bpf_shtab_elem *elem, *elem_next;
 	u32 hash, key_size = map->key_size;
 	struct hlist_head *head;
 	int i = 0;
@@ -931,7 +931,7 @@ static int sock_hash_get_next_key(struct bpf_map *map, void *key,
 		goto find_first_elem;
 
 	elem_next = hlist_entry_safe(rcu_dereference_raw(hlist_next_rcu(&elem->node)),
-				     struct bpf_htab_elem, node);
+				     struct bpf_shtab_elem, node);
 	if (elem_next) {
 		memcpy(key_next, elem_next->key, key_size);
 		return 0;
@@ -943,7 +943,7 @@ static int sock_hash_get_next_key(struct bpf_map *map, void *key,
 	for (; i < htab->buckets_num; i++) {
 		head = &sock_hash_select_bucket(htab, i)->head;
 		elem_next = hlist_entry_safe(rcu_dereference_raw(hlist_first_rcu(head)),
-					     struct bpf_htab_elem, node);
+					     struct bpf_shtab_elem, node);
 		if (elem_next) {
 			memcpy(key_next, elem_next->key, key_size);
 			return 0;
@@ -955,7 +955,7 @@ static int sock_hash_get_next_key(struct bpf_map *map, void *key,
 
 static struct bpf_map *sock_hash_alloc(union bpf_attr *attr)
 {
-	struct bpf_htab *htab;
+	struct bpf_shtab *htab;
 	int i, err;
 	u64 cost;
 
@@ -977,15 +977,15 @@ static struct bpf_map *sock_hash_alloc(union bpf_attr *attr)
 	bpf_map_init_from_attr(&htab->map, attr);
 
 	htab->buckets_num = roundup_pow_of_two(htab->map.max_entries);
-	htab->elem_size = sizeof(struct bpf_htab_elem) +
+	htab->elem_size = sizeof(struct bpf_shtab_elem) +
 			  round_up(htab->map.key_size, 8);
 	if (htab->buckets_num == 0 ||
-	    htab->buckets_num > U32_MAX / sizeof(struct bpf_htab_bucket)) {
+	    htab->buckets_num > U32_MAX / sizeof(struct bpf_shtab_bucket)) {
 		err = -EINVAL;
 		goto free_htab;
 	}
 
-	cost = (u64) htab->buckets_num * sizeof(struct bpf_htab_bucket) +
+	cost = (u64) htab->buckets_num * sizeof(struct bpf_shtab_bucket) +
 	       (u64) htab->elem_size * htab->map.max_entries;
 	if (cost >= U32_MAX - PAGE_SIZE) {
 		err = -EINVAL;
@@ -996,7 +996,7 @@ static struct bpf_map *sock_hash_alloc(union bpf_attr *attr)
 		goto free_htab;
 
 	htab->buckets = bpf_map_area_alloc(htab->buckets_num *
-					   sizeof(struct bpf_htab_bucket),
+					   sizeof(struct bpf_shtab_bucket),
 					   htab->map.numa_node);
 	if (!htab->buckets) {
 		bpf_map_charge_finish(&htab->map.memory);
@@ -1017,10 +1017,10 @@ static struct bpf_map *sock_hash_alloc(union bpf_attr *attr)
 
 static void sock_hash_free(struct bpf_map *map)
 {
-	struct bpf_htab *htab = container_of(map, struct bpf_htab, map);
-	struct bpf_htab_bucket *bucket;
+	struct bpf_shtab *htab = container_of(map, struct bpf_shtab, map);
+	struct bpf_shtab_bucket *bucket;
 	struct hlist_head unlink_list;
-	struct bpf_htab_elem *elem;
+	struct bpf_shtab_elem *elem;
 	struct hlist_node *node;
 	int i;
 
@@ -1096,7 +1096,7 @@ static void *sock_hash_lookup(struct bpf_map *map, void *key)
 
 static void sock_hash_release_progs(struct bpf_map *map)
 {
-	psock_progs_drop(&container_of(map, struct bpf_htab, map)->progs);
+	psock_progs_drop(&container_of(map, struct bpf_shtab, map)->progs);
 }
 
 BPF_CALL_4(bpf_sock_hash_update, struct bpf_sock_ops_kern *, sops,
@@ -1194,7 +1194,7 @@ static struct sk_psock_progs *sock_map_progs(struct bpf_map *map)
 	case BPF_MAP_TYPE_SOCKMAP:
 		return &container_of(map, struct bpf_stab, map)->progs;
 	case BPF_MAP_TYPE_SOCKHASH:
-		return &container_of(map, struct bpf_htab, map)->progs;
+		return &container_of(map, struct bpf_shtab, map)->progs;
 	default:
 		break;
 	}
-- 
2.24.1



* [PATCH v2 bpf-next 3/5] bpf: Support access to bpf map fields
  2020-06-19 21:11 [PATCH v2 bpf-next 0/5] bpf: Support access to bpf map fields Andrey Ignatov
  2020-06-19 21:11 ` [PATCH v2 bpf-next 1/5] bpf: Switch btf_parse_vmlinux to btf_find_by_name_kind Andrey Ignatov
  2020-06-19 21:11 ` [PATCH v2 bpf-next 2/5] bpf: Rename bpf_htab to bpf_shtab in sock_map Andrey Ignatov
@ 2020-06-19 21:11 ` Andrey Ignatov
  2020-06-21  3:27   ` John Fastabend
  2020-06-19 21:11 ` [PATCH v2 bpf-next 4/5] bpf: Set map_btf_{name,id} for all map types Andrey Ignatov
                   ` (3 subsequent siblings)
  6 siblings, 1 reply; 15+ messages in thread
From: Andrey Ignatov @ 2020-06-19 21:11 UTC (permalink / raw)
  To: bpf; +Cc: Andrey Ignatov, ast, daniel, kafai, andriin, kernel-team

There are multiple use-cases where it's convenient to have access to bpf
map fields, both `struct bpf_map` and map type specific structs such as
`struct bpf_array`, `struct bpf_htab`, etc.

For example, while working with sock arrays it can be necessary to
calculate the key based on map->max_entries (some_hash % max_entries).
Currently this is solved by communicating max_entries via an "out-of-band"
channel, e.g. via an additional map with a known key that holds info about
the target map. That works, but it is inconvenient and error-prone when
working with many maps.

In other cases the necessary data is dynamic (i.e. unknown at load time)
and it's impossible to get it at all. For example, while working with a
hash table it can be convenient to know how much of its capacity is already
used (bpf_htab.count.counter for the BPF_F_NO_PREALLOC case).

At the same time the kernel knows this info and can provide it to the bpf
program.
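
As an illustrative sketch of what this gives (the structs below are a
minimal user-provided mirror of the kernel ones, following the selftest in
patch 5; m_hash is assumed to be a BPF_MAP_TYPE_HASH map created with
BPF_F_NO_PREALLOC, and the usual libbpf includes are assumed):

	struct bpf_map {
		__u32 max_entries;
	} __attribute__((preserve_access_index));

	typedef struct {
		int counter;
	} atomic_t;

	struct bpf_htab {
		struct bpf_map map;
		atomic_t count;		/* number of elements currently in the hash */
	} __attribute__((preserve_access_index));

	SEC("cgroup_skb/egress")
	int cg_skb_count(void *ctx)
	{
		struct bpf_htab *h = (struct bpf_htab *)&m_hash;

		/* e.g. pass packets only while there is room left in the hash */
		return h->count.counter < h->map.max_entries;
	}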

Fill this gap by adding support for accessing bpf map fields from a bpf
program, for both `struct bpf_map` and map type specific fields.

Support is implemented via btf_struct_access() so that a user can define
their own `struct bpf_map` or map type specific struct in their program
with only the necessary fields and the preserve_access_index attribute,
cast a map to this struct and use a field.

For example:

	struct bpf_map {
		__u32 max_entries;
	} __attribute__((preserve_access_index));

	struct bpf_array {
		struct bpf_map map;
		__u32 elem_size;
	} __attribute__((preserve_access_index));

	struct {
		__uint(type, BPF_MAP_TYPE_ARRAY);
		__uint(max_entries, 4);
		__type(key, __u32);
		__type(value, __u32);
	} m_array SEC(".maps");

	SEC("cgroup_skb/egress")
	int cg_skb(void *ctx)
	{
		struct bpf_array *array = (struct bpf_array *)&m_array;
		struct bpf_map *map = (struct bpf_map *)&m_array;

		/* .. use map->max_entries or array->map.max_entries .. */
	}

Similarly to other btf_struct_access() use-cases (e.g. struct tcp_sock
in net/ipv4/bpf_tcp_ca.c), the patch allows access to any field of the
corresponding struct. Only reading from map fields is supported.
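
For instance, a store like the one below (reusing m_array and the
user-provided structs from the example above) is expected to be rejected at
load time with "only read from bpf_array is supported":

	SEC("cgroup_skb/egress")
	int cg_skb_write(void *ctx)
	{
		struct bpf_map *map = (struct bpf_map *)&m_array;

		map->max_entries = 1;	/* write to a map field: rejected */
		return 1;
	}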

For btf_struct_access() to work there should be a way to know the btf id of
the struct that corresponds to a map type. To get the btf id there should be
a way to get the stringified name of the map-specific struct, such as
"bpf_array", "bpf_htab", etc. for a map type. Two new fields are added to
`struct bpf_map_ops` to handle it:
* .map_btf_name keeps the btf name of the struct returned by map_alloc();
* .map_btf_id is used to cache the btf id of that struct.

To make btf id calculation cheaper, the ids are calculated once while
preparing btf_vmlinux and cached the same way as it's done for the btf_id
field of `struct bpf_func_proto`.

While calculating btf ids, struct names are NOT checked for collisions.
Collisions will be checked as a part of the work to prepare btf ids used in
the verifier at compile time, which should land soon. The only known
collision, for `struct bpf_htab` (kernel/bpf/hashtab.c vs
net/core/sock_map.c), was fixed by an earlier patch in this series.

Both new fields, .map_btf_name and .map_btf_id, must be set for a map type
for the feature to work. If neither is set for a map type, the verifier will
return ENOTSUPP on an attempt to access a map_ptr of the corresponding type.
If just one of them is set, it's a verifier misconfiguration.

Only `struct bpf_array` for BPF_MAP_TYPE_ARRAY and `struct bpf_htab` for
BPF_MAP_TYPE_HASH are supported by this patch. Other map types will be
supported separately.

The feature is available only with CONFIG_DEBUG_INFO_BTF=y and is gated by
perfmon_capable() so that unprivileged programs won't have access to bpf map
fields.

Signed-off-by: Andrey Ignatov <rdna@fb.com>
---
 include/linux/bpf.h                           |  9 ++
 include/linux/bpf_verifier.h                  |  1 +
 kernel/bpf/arraymap.c                         |  3 +
 kernel/bpf/btf.c                              | 40 +++++++++
 kernel/bpf/hashtab.c                          |  3 +
 kernel/bpf/verifier.c                         | 82 +++++++++++++++++--
 .../selftests/bpf/verifier/map_ptr_mixing.c   |  2 +-
 7 files changed, 131 insertions(+), 9 deletions(-)

diff --git a/include/linux/bpf.h b/include/linux/bpf.h
index 07052d44bca1..1e1501ee53ce 100644
--- a/include/linux/bpf.h
+++ b/include/linux/bpf.h
@@ -92,6 +92,10 @@ struct bpf_map_ops {
 	int (*map_mmap)(struct bpf_map *map, struct vm_area_struct *vma);
 	__poll_t (*map_poll)(struct bpf_map *map, struct file *filp,
 			     struct poll_table_struct *pts);
+
+	/* BTF name and id of struct allocated by map_alloc */
+	const char * const map_btf_name;
+	int *map_btf_id;
 };
 
 struct bpf_map_memory {
@@ -1109,6 +1113,11 @@ static inline bool bpf_allow_ptr_leaks(void)
 	return perfmon_capable();
 }
 
+static inline bool bpf_allow_ptr_to_map_access(void)
+{
+	return perfmon_capable();
+}
+
 static inline bool bpf_bypass_spec_v1(void)
 {
 	return perfmon_capable();
diff --git a/include/linux/bpf_verifier.h b/include/linux/bpf_verifier.h
index ca08db4ffb5f..53c7bd568c5d 100644
--- a/include/linux/bpf_verifier.h
+++ b/include/linux/bpf_verifier.h
@@ -379,6 +379,7 @@ struct bpf_verifier_env {
 	u32 used_map_cnt;		/* number of used maps */
 	u32 id_gen;			/* used to generate unique reg IDs */
 	bool allow_ptr_leaks;
+	bool allow_ptr_to_map_access;
 	bool bpf_capable;
 	bool bypass_spec_v1;
 	bool bypass_spec_v4;
diff --git a/kernel/bpf/arraymap.c b/kernel/bpf/arraymap.c
index 11584618e861..e7caa48812fb 100644
--- a/kernel/bpf/arraymap.c
+++ b/kernel/bpf/arraymap.c
@@ -494,6 +494,7 @@ static int array_map_mmap(struct bpf_map *map, struct vm_area_struct *vma)
 				   vma->vm_pgoff + pgoff);
 }
 
+static int array_map_btf_id;
 const struct bpf_map_ops array_map_ops = {
 	.map_alloc_check = array_map_alloc_check,
 	.map_alloc = array_map_alloc,
@@ -510,6 +511,8 @@ const struct bpf_map_ops array_map_ops = {
 	.map_check_btf = array_map_check_btf,
 	.map_lookup_batch = generic_map_lookup_batch,
 	.map_update_batch = generic_map_update_batch,
+	.map_btf_name = "bpf_array",
+	.map_btf_id = &array_map_btf_id,
 };
 
 const struct bpf_map_ops percpu_array_map_ops = {
diff --git a/kernel/bpf/btf.c b/kernel/bpf/btf.c
index 3eb804618a53..e377d1981730 100644
--- a/kernel/bpf/btf.c
+++ b/kernel/bpf/btf.c
@@ -3571,6 +3571,41 @@ btf_get_prog_ctx_type(struct bpf_verifier_log *log, struct btf *btf,
 	return ctx_type;
 }
 
+static const struct bpf_map_ops * const btf_vmlinux_map_ops[] = {
+#define BPF_PROG_TYPE(_id, _name, prog_ctx_type, kern_ctx_type)
+#define BPF_LINK_TYPE(_id, _name)
+#define BPF_MAP_TYPE(_id, _ops) \
+	[_id] = &_ops,
+#include <linux/bpf_types.h>
+#undef BPF_PROG_TYPE
+#undef BPF_LINK_TYPE
+#undef BPF_MAP_TYPE
+};
+
+static int btf_vmlinux_map_ids_init(const struct btf *btf,
+				    struct bpf_verifier_log *log)
+{
+	const struct bpf_map_ops *ops;
+	int i, btf_id;
+
+	for (i = 0; i < ARRAY_SIZE(btf_vmlinux_map_ops); ++i) {
+		ops = btf_vmlinux_map_ops[i];
+		if (!ops || (!ops->map_btf_name && !ops->map_btf_id))
+			continue;
+		if (!ops->map_btf_name || !ops->map_btf_id) {
+			bpf_log(log, "map type %d is misconfigured\n", i);
+			return -EINVAL;
+		}
+		btf_id = btf_find_by_name_kind(btf, ops->map_btf_name,
+					       BTF_KIND_STRUCT);
+		if (btf_id < 0)
+			return btf_id;
+		*ops->map_btf_id = btf_id;
+	}
+
+	return 0;
+}
+
 static int btf_translate_to_vmlinux(struct bpf_verifier_log *log,
 				     struct btf *btf,
 				     const struct btf_type *t,
@@ -3633,6 +3668,11 @@ struct btf *btf_parse_vmlinux(void)
 	/* btf_parse_vmlinux() runs under bpf_verifier_lock */
 	bpf_ctx_convert.t = btf_type_by_id(btf, btf_id);
 
+	/* find bpf map structs for map_ptr access checking */
+	err = btf_vmlinux_map_ids_init(btf, log);
+	if (err < 0)
+		goto errout;
+
 	bpf_struct_ops_init(btf, log);
 
 	btf_verifier_env_free(env);
diff --git a/kernel/bpf/hashtab.c b/kernel/bpf/hashtab.c
index b4b288a3c3c9..2c5999e02060 100644
--- a/kernel/bpf/hashtab.c
+++ b/kernel/bpf/hashtab.c
@@ -1614,6 +1614,7 @@ htab_lru_map_lookup_and_delete_batch(struct bpf_map *map,
 						  true, false);
 }
 
+static int htab_map_btf_id;
 const struct bpf_map_ops htab_map_ops = {
 	.map_alloc_check = htab_map_alloc_check,
 	.map_alloc = htab_map_alloc,
@@ -1625,6 +1626,8 @@ const struct bpf_map_ops htab_map_ops = {
 	.map_gen_lookup = htab_map_gen_lookup,
 	.map_seq_show_elem = htab_map_seq_show_elem,
 	BATCH_OPS(htab),
+	.map_btf_name = "bpf_htab",
+	.map_btf_id = &htab_map_btf_id,
 };
 
 const struct bpf_map_ops htab_lru_map_ops = {
diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index 34cde841ab68..20e0637679a7 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -1351,6 +1351,19 @@ static void mark_reg_not_init(struct bpf_verifier_env *env,
 	__mark_reg_not_init(env, regs + regno);
 }
 
+static void mark_btf_ld_reg(struct bpf_verifier_env *env,
+			    struct bpf_reg_state *regs, u32 regno,
+			    enum bpf_reg_type reg_type, u32 btf_id)
+{
+	if (reg_type == SCALAR_VALUE) {
+		mark_reg_unknown(env, regs, regno);
+		return;
+	}
+	mark_reg_known_zero(env, regs, regno);
+	regs[regno].type = PTR_TO_BTF_ID;
+	regs[regno].btf_id = btf_id;
+}
+
 #define DEF_NOT_SUBREG	(0)
 static void init_reg_state(struct bpf_verifier_env *env,
 			   struct bpf_func_state *state)
@@ -3182,19 +3195,68 @@ static int check_ptr_to_btf_access(struct bpf_verifier_env *env,
 	if (ret < 0)
 		return ret;
 
-	if (atype == BPF_READ && value_regno >= 0) {
-		if (ret == SCALAR_VALUE) {
-			mark_reg_unknown(env, regs, value_regno);
-			return 0;
-		}
-		mark_reg_known_zero(env, regs, value_regno);
-		regs[value_regno].type = PTR_TO_BTF_ID;
-		regs[value_regno].btf_id = btf_id;
+	if (atype == BPF_READ && value_regno >= 0)
+		mark_btf_ld_reg(env, regs, value_regno, ret, btf_id);
+
+	return 0;
+}
+
+static int check_ptr_to_map_access(struct bpf_verifier_env *env,
+				   struct bpf_reg_state *regs,
+				   int regno, int off, int size,
+				   enum bpf_access_type atype,
+				   int value_regno)
+{
+	struct bpf_reg_state *reg = regs + regno;
+	struct bpf_map *map = reg->map_ptr;
+	const struct btf_type *t;
+	const char *tname;
+	u32 btf_id;
+	int ret;
+
+	if (!btf_vmlinux) {
+		verbose(env, "map_ptr access not supported without CONFIG_DEBUG_INFO_BTF\n");
+		return -ENOTSUPP;
+	}
+
+	if (!map->ops->map_btf_id || !*map->ops->map_btf_id) {
+		verbose(env, "map_ptr access not supported for map type %d\n",
+			map->map_type);
+		return -ENOTSUPP;
+	}
+
+	t = btf_type_by_id(btf_vmlinux, *map->ops->map_btf_id);
+	tname = btf_name_by_offset(btf_vmlinux, t->name_off);
+
+	if (!env->allow_ptr_to_map_access) {
+		verbose(env,
+			"%s access is allowed only to CAP_PERFMON and CAP_SYS_ADMIN\n",
+			tname);
+		return -EPERM;
 	}
 
+	if (off < 0) {
+		verbose(env, "R%d is %s invalid negative access: off=%d\n",
+			regno, tname, off);
+		return -EACCES;
+	}
+
+	if (atype != BPF_READ) {
+		verbose(env, "only read from %s is supported\n", tname);
+		return -EACCES;
+	}
+
+	ret = btf_struct_access(&env->log, t, off, size, atype, &btf_id);
+	if (ret < 0)
+		return ret;
+
+	if (value_regno >= 0)
+		mark_btf_ld_reg(env, regs, value_regno, ret, btf_id);
+
 	return 0;
 }
 
+
 /* check whether memory at (regno + off) is accessible for t = (read | write)
  * if t==write, value_regno is a register which value is stored into memory
  * if t==read, value_regno is a register which will receive the value from memory
@@ -3363,6 +3425,9 @@ static int check_mem_access(struct bpf_verifier_env *env, int insn_idx, u32 regn
 	} else if (reg->type == PTR_TO_BTF_ID) {
 		err = check_ptr_to_btf_access(env, regs, regno, off, size, t,
 					      value_regno);
+	} else if (reg->type == CONST_PTR_TO_MAP) {
+		err = check_ptr_to_map_access(env, regs, regno, off, size, t,
+					      value_regno);
 	} else {
 		verbose(env, "R%d invalid mem access '%s'\n", regno,
 			reg_type_str[reg->type]);
@@ -10946,6 +11011,7 @@ int bpf_check(struct bpf_prog **prog, union bpf_attr *attr,
 		env->strict_alignment = false;
 
 	env->allow_ptr_leaks = bpf_allow_ptr_leaks();
+	env->allow_ptr_to_map_access = bpf_allow_ptr_to_map_access();
 	env->bypass_spec_v1 = bpf_bypass_spec_v1();
 	env->bypass_spec_v4 = bpf_bypass_spec_v4();
 	env->bpf_capable = bpf_capable();
diff --git a/tools/testing/selftests/bpf/verifier/map_ptr_mixing.c b/tools/testing/selftests/bpf/verifier/map_ptr_mixing.c
index cd26ee6b7b1d..1f2b8c4cb26d 100644
--- a/tools/testing/selftests/bpf/verifier/map_ptr_mixing.c
+++ b/tools/testing/selftests/bpf/verifier/map_ptr_mixing.c
@@ -56,7 +56,7 @@
 	.fixup_map_in_map = { 16 },
 	.fixup_map_array_48b = { 13 },
 	.result = REJECT,
-	.errstr = "R0 invalid mem access 'map_ptr'",
+	.errstr = "only read from bpf_array is supported",
 },
 {
 	"cond: two branches returning different map pointers for lookup (tail, tail)",
-- 
2.24.1



* [PATCH v2 bpf-next 4/5] bpf: Set map_btf_{name,id} for all map types
  2020-06-19 21:11 [PATCH v2 bpf-next 0/5] bpf: Support access to bpf map fields Andrey Ignatov
                   ` (2 preceding siblings ...)
  2020-06-19 21:11 ` [PATCH v2 bpf-next 3/5] bpf: Support access to bpf map fields Andrey Ignatov
@ 2020-06-19 21:11 ` Andrey Ignatov
  2020-06-21  3:28   ` John Fastabend
  2020-06-19 21:11 ` [PATCH v2 bpf-next 5/5] selftests/bpf: Test access to bpf map pointer Andrey Ignatov
                   ` (2 subsequent siblings)
  6 siblings, 1 reply; 15+ messages in thread
From: Andrey Ignatov @ 2020-06-19 21:11 UTC (permalink / raw)
  To: bpf; +Cc: Andrey Ignatov, ast, daniel, kafai, andriin, kernel-team

Set map_btf_name and map_btf_id for all map types so that map fields can
be accessed by bpf programs.

Signed-off-by: Andrey Ignatov <rdna@fb.com>
---
 kernel/bpf/arraymap.c         | 15 +++++++++++++++
 kernel/bpf/bpf_struct_ops.c   |  3 +++
 kernel/bpf/cpumap.c           |  3 +++
 kernel/bpf/devmap.c           |  6 ++++++
 kernel/bpf/hashtab.c          | 12 ++++++++++++
 kernel/bpf/local_storage.c    |  3 +++
 kernel/bpf/lpm_trie.c         |  3 +++
 kernel/bpf/queue_stack_maps.c |  6 ++++++
 kernel/bpf/reuseport_array.c  |  3 +++
 kernel/bpf/ringbuf.c          |  3 +++
 kernel/bpf/stackmap.c         |  3 +++
 net/core/bpf_sk_storage.c     |  3 +++
 net/core/sock_map.c           |  6 ++++++
 net/xdp/xskmap.c              |  3 +++
 14 files changed, 72 insertions(+)

diff --git a/kernel/bpf/arraymap.c b/kernel/bpf/arraymap.c
index e7caa48812fb..ec5cd11032aa 100644
--- a/kernel/bpf/arraymap.c
+++ b/kernel/bpf/arraymap.c
@@ -515,6 +515,7 @@ const struct bpf_map_ops array_map_ops = {
 	.map_btf_id = &array_map_btf_id,
 };
 
+static int percpu_array_map_btf_id;
 const struct bpf_map_ops percpu_array_map_ops = {
 	.map_alloc_check = array_map_alloc_check,
 	.map_alloc = array_map_alloc,
@@ -525,6 +526,8 @@ const struct bpf_map_ops percpu_array_map_ops = {
 	.map_delete_elem = array_map_delete_elem,
 	.map_seq_show_elem = percpu_array_map_seq_show_elem,
 	.map_check_btf = array_map_check_btf,
+	.map_btf_name = "bpf_array",
+	.map_btf_id = &percpu_array_map_btf_id,
 };
 
 static int fd_array_map_alloc_check(union bpf_attr *attr)
@@ -871,6 +874,7 @@ static void prog_array_map_free(struct bpf_map *map)
 	fd_array_map_free(map);
 }
 
+static int prog_array_map_btf_id;
 const struct bpf_map_ops prog_array_map_ops = {
 	.map_alloc_check = fd_array_map_alloc_check,
 	.map_alloc = prog_array_map_alloc,
@@ -886,6 +890,8 @@ const struct bpf_map_ops prog_array_map_ops = {
 	.map_fd_sys_lookup_elem = prog_fd_array_sys_lookup_elem,
 	.map_release_uref = prog_array_map_clear,
 	.map_seq_show_elem = prog_array_map_seq_show_elem,
+	.map_btf_name = "bpf_array",
+	.map_btf_id = &prog_array_map_btf_id,
 };
 
 static struct bpf_event_entry *bpf_event_entry_gen(struct file *perf_file,
@@ -964,6 +970,7 @@ static void perf_event_fd_array_release(struct bpf_map *map,
 	rcu_read_unlock();
 }
 
+static int perf_event_array_map_btf_id;
 const struct bpf_map_ops perf_event_array_map_ops = {
 	.map_alloc_check = fd_array_map_alloc_check,
 	.map_alloc = array_map_alloc,
@@ -975,6 +982,8 @@ const struct bpf_map_ops perf_event_array_map_ops = {
 	.map_fd_put_ptr = perf_event_fd_array_put_ptr,
 	.map_release = perf_event_fd_array_release,
 	.map_check_btf = map_check_no_btf,
+	.map_btf_name = "bpf_array",
+	.map_btf_id = &perf_event_array_map_btf_id,
 };
 
 #ifdef CONFIG_CGROUPS
@@ -997,6 +1006,7 @@ static void cgroup_fd_array_free(struct bpf_map *map)
 	fd_array_map_free(map);
 }
 
+static int cgroup_array_map_btf_id;
 const struct bpf_map_ops cgroup_array_map_ops = {
 	.map_alloc_check = fd_array_map_alloc_check,
 	.map_alloc = array_map_alloc,
@@ -1007,6 +1017,8 @@ const struct bpf_map_ops cgroup_array_map_ops = {
 	.map_fd_get_ptr = cgroup_fd_array_get_ptr,
 	.map_fd_put_ptr = cgroup_fd_array_put_ptr,
 	.map_check_btf = map_check_no_btf,
+	.map_btf_name = "bpf_array",
+	.map_btf_id = &cgroup_array_map_btf_id,
 };
 #endif
 
@@ -1080,6 +1092,7 @@ static u32 array_of_map_gen_lookup(struct bpf_map *map,
 	return insn - insn_buf;
 }
 
+static int array_of_maps_map_btf_id;
 const struct bpf_map_ops array_of_maps_map_ops = {
 	.map_alloc_check = fd_array_map_alloc_check,
 	.map_alloc = array_of_map_alloc,
@@ -1092,4 +1105,6 @@ const struct bpf_map_ops array_of_maps_map_ops = {
 	.map_fd_sys_lookup_elem = bpf_map_fd_sys_lookup_elem,
 	.map_gen_lookup = array_of_map_gen_lookup,
 	.map_check_btf = map_check_no_btf,
+	.map_btf_name = "bpf_array",
+	.map_btf_id = &array_of_maps_map_btf_id,
 };
diff --git a/kernel/bpf/bpf_struct_ops.c b/kernel/bpf/bpf_struct_ops.c
index c6b0decaa46a..969c5d47f81f 100644
--- a/kernel/bpf/bpf_struct_ops.c
+++ b/kernel/bpf/bpf_struct_ops.c
@@ -611,6 +611,7 @@ static struct bpf_map *bpf_struct_ops_map_alloc(union bpf_attr *attr)
 	return map;
 }
 
+static int bpf_struct_ops_map_btf_id;
 const struct bpf_map_ops bpf_struct_ops_map_ops = {
 	.map_alloc_check = bpf_struct_ops_map_alloc_check,
 	.map_alloc = bpf_struct_ops_map_alloc,
@@ -620,6 +621,8 @@ const struct bpf_map_ops bpf_struct_ops_map_ops = {
 	.map_delete_elem = bpf_struct_ops_map_delete_elem,
 	.map_update_elem = bpf_struct_ops_map_update_elem,
 	.map_seq_show_elem = bpf_struct_ops_map_seq_show_elem,
+	.map_btf_name = "bpf_struct_ops_map",
+	.map_btf_id = &bpf_struct_ops_map_btf_id,
 };
 
 /* "const void *" because some subsystem is
diff --git a/kernel/bpf/cpumap.c b/kernel/bpf/cpumap.c
index 27595fc6da56..bd8658055c16 100644
--- a/kernel/bpf/cpumap.c
+++ b/kernel/bpf/cpumap.c
@@ -543,6 +543,7 @@ static int cpu_map_get_next_key(struct bpf_map *map, void *key, void *next_key)
 	return 0;
 }
 
+static int cpu_map_btf_id;
 const struct bpf_map_ops cpu_map_ops = {
 	.map_alloc		= cpu_map_alloc,
 	.map_free		= cpu_map_free,
@@ -551,6 +552,8 @@ const struct bpf_map_ops cpu_map_ops = {
 	.map_lookup_elem	= cpu_map_lookup_elem,
 	.map_get_next_key	= cpu_map_get_next_key,
 	.map_check_btf		= map_check_no_btf,
+	.map_btf_name		= "bpf_cpu_map",
+	.map_btf_id		= &cpu_map_btf_id,
 };
 
 static int bq_flush_to_queue(struct xdp_bulk_queue *bq)
diff --git a/kernel/bpf/devmap.c b/kernel/bpf/devmap.c
index 0cbb72cdaf63..58acc46861ef 100644
--- a/kernel/bpf/devmap.c
+++ b/kernel/bpf/devmap.c
@@ -747,6 +747,7 @@ static int dev_map_hash_update_elem(struct bpf_map *map, void *key, void *value,
 					 map, key, value, map_flags);
 }
 
+static int dev_map_btf_id;
 const struct bpf_map_ops dev_map_ops = {
 	.map_alloc = dev_map_alloc,
 	.map_free = dev_map_free,
@@ -755,8 +756,11 @@ const struct bpf_map_ops dev_map_ops = {
 	.map_update_elem = dev_map_update_elem,
 	.map_delete_elem = dev_map_delete_elem,
 	.map_check_btf = map_check_no_btf,
+	.map_btf_name = "bpf_dtab",
+	.map_btf_id = &dev_map_btf_id,
 };
 
+static int dev_map_hash_map_btf_id;
 const struct bpf_map_ops dev_map_hash_ops = {
 	.map_alloc = dev_map_alloc,
 	.map_free = dev_map_free,
@@ -765,6 +769,8 @@ const struct bpf_map_ops dev_map_hash_ops = {
 	.map_update_elem = dev_map_hash_update_elem,
 	.map_delete_elem = dev_map_hash_delete_elem,
 	.map_check_btf = map_check_no_btf,
+	.map_btf_name = "bpf_dtab",
+	.map_btf_id = &dev_map_hash_map_btf_id,
 };
 
 static void dev_map_hash_remove_netdev(struct bpf_dtab *dtab,
diff --git a/kernel/bpf/hashtab.c b/kernel/bpf/hashtab.c
index 2c5999e02060..acd06081d81d 100644
--- a/kernel/bpf/hashtab.c
+++ b/kernel/bpf/hashtab.c
@@ -1630,6 +1630,7 @@ const struct bpf_map_ops htab_map_ops = {
 	.map_btf_id = &htab_map_btf_id,
 };
 
+static int htab_lru_map_btf_id;
 const struct bpf_map_ops htab_lru_map_ops = {
 	.map_alloc_check = htab_map_alloc_check,
 	.map_alloc = htab_map_alloc,
@@ -1642,6 +1643,8 @@ const struct bpf_map_ops htab_lru_map_ops = {
 	.map_gen_lookup = htab_lru_map_gen_lookup,
 	.map_seq_show_elem = htab_map_seq_show_elem,
 	BATCH_OPS(htab_lru),
+	.map_btf_name = "bpf_htab",
+	.map_btf_id = &htab_lru_map_btf_id,
 };
 
 /* Called from eBPF program */
@@ -1746,6 +1749,7 @@ static void htab_percpu_map_seq_show_elem(struct bpf_map *map, void *key,
 	rcu_read_unlock();
 }
 
+static int htab_percpu_map_btf_id;
 const struct bpf_map_ops htab_percpu_map_ops = {
 	.map_alloc_check = htab_map_alloc_check,
 	.map_alloc = htab_map_alloc,
@@ -1756,8 +1760,11 @@ const struct bpf_map_ops htab_percpu_map_ops = {
 	.map_delete_elem = htab_map_delete_elem,
 	.map_seq_show_elem = htab_percpu_map_seq_show_elem,
 	BATCH_OPS(htab_percpu),
+	.map_btf_name = "bpf_htab",
+	.map_btf_id = &htab_percpu_map_btf_id,
 };
 
+static int htab_lru_percpu_map_btf_id;
 const struct bpf_map_ops htab_lru_percpu_map_ops = {
 	.map_alloc_check = htab_map_alloc_check,
 	.map_alloc = htab_map_alloc,
@@ -1768,6 +1775,8 @@ const struct bpf_map_ops htab_lru_percpu_map_ops = {
 	.map_delete_elem = htab_lru_map_delete_elem,
 	.map_seq_show_elem = htab_percpu_map_seq_show_elem,
 	BATCH_OPS(htab_lru_percpu),
+	.map_btf_name = "bpf_htab",
+	.map_btf_id = &htab_lru_percpu_map_btf_id,
 };
 
 static int fd_htab_map_alloc_check(union bpf_attr *attr)
@@ -1890,6 +1899,7 @@ static void htab_of_map_free(struct bpf_map *map)
 	fd_htab_map_free(map);
 }
 
+static int htab_of_maps_map_btf_id;
 const struct bpf_map_ops htab_of_maps_map_ops = {
 	.map_alloc_check = fd_htab_map_alloc_check,
 	.map_alloc = htab_of_map_alloc,
@@ -1902,4 +1912,6 @@ const struct bpf_map_ops htab_of_maps_map_ops = {
 	.map_fd_sys_lookup_elem = bpf_map_fd_sys_lookup_elem,
 	.map_gen_lookup = htab_of_map_gen_lookup,
 	.map_check_btf = map_check_no_btf,
+	.map_btf_name = "bpf_htab",
+	.map_btf_id = &htab_of_maps_map_btf_id,
 };
diff --git a/kernel/bpf/local_storage.c b/kernel/bpf/local_storage.c
index 33d01866bcc2..51bd5a8cb01b 100644
--- a/kernel/bpf/local_storage.c
+++ b/kernel/bpf/local_storage.c
@@ -409,6 +409,7 @@ static void cgroup_storage_seq_show_elem(struct bpf_map *map, void *_key,
 	rcu_read_unlock();
 }
 
+static int cgroup_storage_map_btf_id;
 const struct bpf_map_ops cgroup_storage_map_ops = {
 	.map_alloc = cgroup_storage_map_alloc,
 	.map_free = cgroup_storage_map_free,
@@ -418,6 +419,8 @@ const struct bpf_map_ops cgroup_storage_map_ops = {
 	.map_delete_elem = cgroup_storage_delete_elem,
 	.map_check_btf = cgroup_storage_check_btf,
 	.map_seq_show_elem = cgroup_storage_seq_show_elem,
+	.map_btf_name = "bpf_cgroup_storage_map",
+	.map_btf_id = &cgroup_storage_map_btf_id,
 };
 
 int bpf_cgroup_storage_assign(struct bpf_prog_aux *aux, struct bpf_map *_map)
diff --git a/kernel/bpf/lpm_trie.c b/kernel/bpf/lpm_trie.c
index c8cc4e4cf98d..1abd4e3f906d 100644
--- a/kernel/bpf/lpm_trie.c
+++ b/kernel/bpf/lpm_trie.c
@@ -735,6 +735,7 @@ static int trie_check_btf(const struct bpf_map *map,
 	       -EINVAL : 0;
 }
 
+static int trie_map_btf_id;
 const struct bpf_map_ops trie_map_ops = {
 	.map_alloc = trie_alloc,
 	.map_free = trie_free,
@@ -743,4 +744,6 @@ const struct bpf_map_ops trie_map_ops = {
 	.map_update_elem = trie_update_elem,
 	.map_delete_elem = trie_delete_elem,
 	.map_check_btf = trie_check_btf,
+	.map_btf_name = "lpm_trie",
+	.map_btf_id = &trie_map_btf_id,
 };
diff --git a/kernel/bpf/queue_stack_maps.c b/kernel/bpf/queue_stack_maps.c
index 05c8e043b9d2..80c66a6d7c54 100644
--- a/kernel/bpf/queue_stack_maps.c
+++ b/kernel/bpf/queue_stack_maps.c
@@ -262,6 +262,7 @@ static int queue_stack_map_get_next_key(struct bpf_map *map, void *key,
 	return -EINVAL;
 }
 
+static int queue_map_btf_id;
 const struct bpf_map_ops queue_map_ops = {
 	.map_alloc_check = queue_stack_map_alloc_check,
 	.map_alloc = queue_stack_map_alloc,
@@ -273,8 +274,11 @@ const struct bpf_map_ops queue_map_ops = {
 	.map_pop_elem = queue_map_pop_elem,
 	.map_peek_elem = queue_map_peek_elem,
 	.map_get_next_key = queue_stack_map_get_next_key,
+	.map_btf_name = "bpf_queue_stack",
+	.map_btf_id = &queue_map_btf_id,
 };
 
+static int stack_map_btf_id;
 const struct bpf_map_ops stack_map_ops = {
 	.map_alloc_check = queue_stack_map_alloc_check,
 	.map_alloc = queue_stack_map_alloc,
@@ -286,4 +290,6 @@ const struct bpf_map_ops stack_map_ops = {
 	.map_pop_elem = stack_map_pop_elem,
 	.map_peek_elem = stack_map_peek_elem,
 	.map_get_next_key = queue_stack_map_get_next_key,
+	.map_btf_name = "bpf_queue_stack",
+	.map_btf_id = &stack_map_btf_id,
 };
diff --git a/kernel/bpf/reuseport_array.c b/kernel/bpf/reuseport_array.c
index 21cde24386db..a09922f656e4 100644
--- a/kernel/bpf/reuseport_array.c
+++ b/kernel/bpf/reuseport_array.c
@@ -345,6 +345,7 @@ static int reuseport_array_get_next_key(struct bpf_map *map, void *key,
 	return 0;
 }
 
+static int reuseport_array_map_btf_id;
 const struct bpf_map_ops reuseport_array_ops = {
 	.map_alloc_check = reuseport_array_alloc_check,
 	.map_alloc = reuseport_array_alloc,
@@ -352,4 +353,6 @@ const struct bpf_map_ops reuseport_array_ops = {
 	.map_lookup_elem = reuseport_array_lookup_elem,
 	.map_get_next_key = reuseport_array_get_next_key,
 	.map_delete_elem = reuseport_array_delete_elem,
+	.map_btf_name = "reuseport_array",
+	.map_btf_id = &reuseport_array_map_btf_id,
 };
diff --git a/kernel/bpf/ringbuf.c b/kernel/bpf/ringbuf.c
index 180414bb0d3e..dbf37aff4827 100644
--- a/kernel/bpf/ringbuf.c
+++ b/kernel/bpf/ringbuf.c
@@ -294,6 +294,7 @@ static __poll_t ringbuf_map_poll(struct bpf_map *map, struct file *filp,
 	return 0;
 }
 
+static int ringbuf_map_btf_id;
 const struct bpf_map_ops ringbuf_map_ops = {
 	.map_alloc = ringbuf_map_alloc,
 	.map_free = ringbuf_map_free,
@@ -303,6 +304,8 @@ const struct bpf_map_ops ringbuf_map_ops = {
 	.map_update_elem = ringbuf_map_update_elem,
 	.map_delete_elem = ringbuf_map_delete_elem,
 	.map_get_next_key = ringbuf_map_get_next_key,
+	.map_btf_name = "bpf_ringbuf_map",
+	.map_btf_id = &ringbuf_map_btf_id,
 };
 
 /* Given pointer to ring buffer record metadata and struct bpf_ringbuf itself,
diff --git a/kernel/bpf/stackmap.c b/kernel/bpf/stackmap.c
index 599488f25e40..27dc9b1b08a5 100644
--- a/kernel/bpf/stackmap.c
+++ b/kernel/bpf/stackmap.c
@@ -613,6 +613,7 @@ static void stack_map_free(struct bpf_map *map)
 	put_callchain_buffers();
 }
 
+static int stack_trace_map_btf_id;
 const struct bpf_map_ops stack_trace_map_ops = {
 	.map_alloc = stack_map_alloc,
 	.map_free = stack_map_free,
@@ -621,6 +622,8 @@ const struct bpf_map_ops stack_trace_map_ops = {
 	.map_update_elem = stack_map_update_elem,
 	.map_delete_elem = stack_map_delete_elem,
 	.map_check_btf = map_check_no_btf,
+	.map_btf_name = "bpf_stack_map",
+	.map_btf_id = &stack_trace_map_btf_id,
 };
 
 static int __init stack_map_init(void)
diff --git a/net/core/bpf_sk_storage.c b/net/core/bpf_sk_storage.c
index 1dae4b543243..6f921c4ddc2c 100644
--- a/net/core/bpf_sk_storage.c
+++ b/net/core/bpf_sk_storage.c
@@ -919,6 +919,7 @@ BPF_CALL_2(bpf_sk_storage_delete, struct bpf_map *, map, struct sock *, sk)
 	return -ENOENT;
 }
 
+static int sk_storage_map_btf_id;
 const struct bpf_map_ops sk_storage_map_ops = {
 	.map_alloc_check = bpf_sk_storage_map_alloc_check,
 	.map_alloc = bpf_sk_storage_map_alloc,
@@ -928,6 +929,8 @@ const struct bpf_map_ops sk_storage_map_ops = {
 	.map_update_elem = bpf_fd_sk_storage_update_elem,
 	.map_delete_elem = bpf_fd_sk_storage_delete_elem,
 	.map_check_btf = bpf_sk_storage_map_check_btf,
+	.map_btf_name = "bpf_sk_storage_map",
+	.map_btf_id = &sk_storage_map_btf_id,
 };
 
 const struct bpf_func_proto bpf_sk_storage_get_proto = {
diff --git a/net/core/sock_map.c b/net/core/sock_map.c
index 2b884f2d562a..4c1123c749bb 100644
--- a/net/core/sock_map.c
+++ b/net/core/sock_map.c
@@ -643,6 +643,7 @@ const struct bpf_func_proto bpf_msg_redirect_map_proto = {
 	.arg4_type      = ARG_ANYTHING,
 };
 
+static int sock_map_btf_id;
 const struct bpf_map_ops sock_map_ops = {
 	.map_alloc		= sock_map_alloc,
 	.map_free		= sock_map_free,
@@ -653,6 +654,8 @@ const struct bpf_map_ops sock_map_ops = {
 	.map_lookup_elem	= sock_map_lookup,
 	.map_release_uref	= sock_map_release_progs,
 	.map_check_btf		= map_check_no_btf,
+	.map_btf_name		= "bpf_stab",
+	.map_btf_id		= &sock_map_btf_id,
 };
 
 struct bpf_shtab_elem {
@@ -1176,6 +1179,7 @@ const struct bpf_func_proto bpf_msg_redirect_hash_proto = {
 	.arg4_type      = ARG_ANYTHING,
 };
 
+static int sock_hash_map_btf_id;
 const struct bpf_map_ops sock_hash_ops = {
 	.map_alloc		= sock_hash_alloc,
 	.map_free		= sock_hash_free,
@@ -1186,6 +1190,8 @@ const struct bpf_map_ops sock_hash_ops = {
 	.map_lookup_elem_sys_only = sock_hash_lookup_sys,
 	.map_release_uref	= sock_hash_release_progs,
 	.map_check_btf		= map_check_no_btf,
+	.map_btf_name		= "bpf_shtab",
+	.map_btf_id		= &sock_hash_map_btf_id,
 };
 
 static struct sk_psock_progs *sock_map_progs(struct bpf_map *map)
diff --git a/net/xdp/xskmap.c b/net/xdp/xskmap.c
index 1dc7208c71ba..8367adbbe9df 100644
--- a/net/xdp/xskmap.c
+++ b/net/xdp/xskmap.c
@@ -254,6 +254,7 @@ void xsk_map_try_sock_delete(struct xsk_map *map, struct xdp_sock *xs,
 	spin_unlock_bh(&map->lock);
 }
 
+static int xsk_map_btf_id;
 const struct bpf_map_ops xsk_map_ops = {
 	.map_alloc = xsk_map_alloc,
 	.map_free = xsk_map_free,
@@ -264,4 +265,6 @@ const struct bpf_map_ops xsk_map_ops = {
 	.map_update_elem = xsk_map_update_elem,
 	.map_delete_elem = xsk_map_delete_elem,
 	.map_check_btf = map_check_no_btf,
+	.map_btf_name = "xsk_map",
+	.map_btf_id = &xsk_map_btf_id,
 };
-- 
2.24.1



* [PATCH v2 bpf-next 5/5] selftests/bpf: Test access to bpf map pointer
  2020-06-19 21:11 [PATCH v2 bpf-next 0/5] bpf: Support access to bpf map fields Andrey Ignatov
                   ` (3 preceding siblings ...)
  2020-06-19 21:11 ` [PATCH v2 bpf-next 4/5] bpf: Set map_btf_{name,id} for all map types Andrey Ignatov
@ 2020-06-19 21:11 ` Andrey Ignatov
  2020-06-21  3:35   ` John Fastabend
  2020-06-22 17:42 ` [PATCH v2 bpf-next 0/5] bpf: Support access to bpf map fields Martin KaFai Lau
  2020-06-22 21:22 ` Daniel Borkmann
  6 siblings, 1 reply; 15+ messages in thread
From: Andrey Ignatov @ 2020-06-19 21:11 UTC (permalink / raw)
  To: bpf; +Cc: Andrey Ignatov, ast, daniel, kafai, andriin, kernel-team

Add selftests to test access to map pointers from a bpf program for all
map types except struct_ops (that one would need additional work).

The verifier test focuses mostly on scenarios that must be rejected.

The prog_tests test focuses on accessing multiple fields, both scalar and a
nested struct, from a bpf program and verifies that those fields have the
expected values.

Signed-off-by: Andrey Ignatov <rdna@fb.com>
---
 .../selftests/bpf/prog_tests/map_ptr.c        |  32 +
 .../selftests/bpf/progs/map_ptr_kern.c        | 686 ++++++++++++++++++
 .../testing/selftests/bpf/verifier/map_ptr.c  |  62 ++
 3 files changed, 780 insertions(+)
 create mode 100644 tools/testing/selftests/bpf/prog_tests/map_ptr.c
 create mode 100644 tools/testing/selftests/bpf/progs/map_ptr_kern.c
 create mode 100644 tools/testing/selftests/bpf/verifier/map_ptr.c

diff --git a/tools/testing/selftests/bpf/prog_tests/map_ptr.c b/tools/testing/selftests/bpf/prog_tests/map_ptr.c
new file mode 100644
index 000000000000..c230a573c373
--- /dev/null
+++ b/tools/testing/selftests/bpf/prog_tests/map_ptr.c
@@ -0,0 +1,32 @@
+// SPDX-License-Identifier: GPL-2.0
+// Copyright (c) 2020 Facebook
+
+#include <test_progs.h>
+#include <network_helpers.h>
+
+#include "map_ptr_kern.skel.h"
+
+void test_map_ptr(void)
+{
+	struct map_ptr_kern *skel;
+	__u32 duration = 0, retval;
+	char buf[128];
+	int err;
+
+	skel = map_ptr_kern__open_and_load();
+	if (CHECK(!skel, "skel_open_load", "open_load failed\n"))
+		return;
+
+	err = bpf_prog_test_run(bpf_program__fd(skel->progs.cg_skb), 1, &pkt_v4,
+				sizeof(pkt_v4), buf, NULL, &retval, NULL);
+
+	if (CHECK(err, "test_run", "err=%d errno=%d\n", err, errno))
+		goto cleanup;
+
+	if (CHECK(!retval, "retval", "retval=%d map_type=%u line=%u\n", retval,
+		  skel->bss->g_map_type, skel->bss->g_line))
+		goto cleanup;
+
+cleanup:
+	map_ptr_kern__destroy(skel);
+}
diff --git a/tools/testing/selftests/bpf/progs/map_ptr_kern.c b/tools/testing/selftests/bpf/progs/map_ptr_kern.c
new file mode 100644
index 000000000000..473665cac67e
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/map_ptr_kern.c
@@ -0,0 +1,686 @@
+// SPDX-License-Identifier: GPL-2.0
+// Copyright (c) 2020 Facebook
+
+#include <linux/bpf.h>
+#include <bpf/bpf_helpers.h>
+
+#define LOOP_BOUND 0xf
+#define MAX_ENTRIES 8
+#define HALF_ENTRIES (MAX_ENTRIES >> 1)
+
+_Static_assert(MAX_ENTRIES < LOOP_BOUND, "MAX_ENTRIES must be < LOOP_BOUND");
+
+enum bpf_map_type g_map_type = BPF_MAP_TYPE_UNSPEC;
+__u32 g_line = 0;
+
+#define VERIFY_TYPE(type, func) ({	\
+	g_map_type = type;		\
+	if (!func())			\
+		return 0;		\
+})
+
+
+#define VERIFY(expr) ({		\
+	g_line = __LINE__;	\
+	if (!(expr))		\
+		return 0;	\
+})
+
+struct bpf_map_memory {
+	__u32 pages;
+} __attribute__((preserve_access_index));
+
+struct bpf_map {
+	enum bpf_map_type map_type;
+	__u32 key_size;
+	__u32 value_size;
+	__u32 max_entries;
+	__u32 id;
+	struct bpf_map_memory memory;
+} __attribute__((preserve_access_index));
+
+static inline int check_bpf_map_fields(struct bpf_map *map, __u32 key_size,
+				       __u32 value_size, __u32 max_entries)
+{
+	VERIFY(map->map_type == g_map_type);
+	VERIFY(map->key_size == key_size);
+	VERIFY(map->value_size == value_size);
+	VERIFY(map->max_entries == max_entries);
+	VERIFY(map->id > 0);
+	VERIFY(map->memory.pages > 0);
+
+	return 1;
+}
+
+static inline int check_bpf_map_ptr(struct bpf_map *indirect,
+				    struct bpf_map *direct)
+{
+	VERIFY(indirect->map_type == direct->map_type);
+	VERIFY(indirect->key_size == direct->key_size);
+	VERIFY(indirect->value_size == direct->value_size);
+	VERIFY(indirect->max_entries == direct->max_entries);
+	VERIFY(indirect->id == direct->id);
+	VERIFY(indirect->memory.pages == direct->memory.pages);
+
+	return 1;
+}
+
+static inline int check(struct bpf_map *indirect, struct bpf_map *direct,
+			__u32 key_size, __u32 value_size, __u32 max_entries)
+{
+	VERIFY(check_bpf_map_ptr(indirect, direct));
+	VERIFY(check_bpf_map_fields(indirect, key_size, value_size,
+				    max_entries));
+	return 1;
+}
+
+static inline int check_default(struct bpf_map *indirect,
+				struct bpf_map *direct)
+{
+	VERIFY(check(indirect, direct, sizeof(__u32), sizeof(__u32),
+		     MAX_ENTRIES));
+	return 1;
+}
+
+typedef struct {
+	int counter;
+} atomic_t;
+
+struct bpf_htab {
+	struct bpf_map map;
+	atomic_t count;
+	__u32 n_buckets;
+	__u32 elem_size;
+} __attribute__((preserve_access_index));
+
+struct {
+	__uint(type, BPF_MAP_TYPE_HASH);
+	__uint(map_flags, BPF_F_NO_PREALLOC); /* to test bpf_htab.count */
+	__uint(max_entries, MAX_ENTRIES);
+	__type(key, __u32);
+	__type(value, __u32);
+} m_hash SEC(".maps");
+
+static inline int check_hash(void)
+{
+	struct bpf_htab *hash = (struct bpf_htab *)&m_hash;
+	struct bpf_map *map = (struct bpf_map *)&m_hash;
+	int i;
+
+	VERIFY(check_default(&hash->map, map));
+
+	VERIFY(hash->n_buckets == MAX_ENTRIES);
+	VERIFY(hash->elem_size == 64);
+
+	VERIFY(hash->count.counter == 0);
+	for (i = 0; i < HALF_ENTRIES; ++i) {
+		const __u32 key = i;
+		const __u32 val = 1;
+
+		if (bpf_map_update_elem(hash, &key, &val, 0))
+			return 0;
+	}
+	VERIFY(hash->count.counter == HALF_ENTRIES);
+
+	return 1;
+}
+
+struct bpf_array {
+	struct bpf_map map;
+	__u32 elem_size;
+} __attribute__((preserve_access_index));
+
+struct {
+	__uint(type, BPF_MAP_TYPE_ARRAY);
+	__uint(max_entries, MAX_ENTRIES);
+	__type(key, __u32);
+	__type(value, __u32);
+} m_array SEC(".maps");
+
+static inline int check_array(void)
+{
+	struct bpf_array *array = (struct bpf_array *)&m_array;
+	struct bpf_map *map = (struct bpf_map *)&m_array;
+	int i, n_lookups = 0, n_keys = 0;
+
+	VERIFY(check_default(&array->map, map));
+
+	VERIFY(array->elem_size == 8);
+
+	for (i = 0; i < array->map.max_entries && i < LOOP_BOUND; ++i) {
+		const __u32 key = i;
+		__u32 *val = bpf_map_lookup_elem(array, &key);
+
+		++n_lookups;
+		if (val)
+			++n_keys;
+	}
+
+	VERIFY(n_lookups == MAX_ENTRIES);
+	VERIFY(n_keys == MAX_ENTRIES);
+
+	return 1;
+}
+
+struct {
+	__uint(type, BPF_MAP_TYPE_PROG_ARRAY);
+	__uint(max_entries, MAX_ENTRIES);
+	__type(key, __u32);
+	__type(value, __u32);
+} m_prog_array SEC(".maps");
+
+static inline int check_prog_array(void)
+{
+	struct bpf_array *prog_array = (struct bpf_array *)&m_prog_array;
+	struct bpf_map *map = (struct bpf_map *)&m_prog_array;
+
+	VERIFY(check_default(&prog_array->map, map));
+
+	return 1;
+}
+
+struct {
+	__uint(type, BPF_MAP_TYPE_PERF_EVENT_ARRAY);
+	__uint(max_entries, MAX_ENTRIES);
+	__type(key, __u32);
+	__type(value, __u32);
+} m_perf_event_array SEC(".maps");
+
+static inline int check_perf_event_array(void)
+{
+	struct bpf_array *perf_event_array = (struct bpf_array *)&m_perf_event_array;
+	struct bpf_map *map = (struct bpf_map *)&m_perf_event_array;
+
+	VERIFY(check_default(&perf_event_array->map, map));
+
+	return 1;
+}
+
+struct {
+	__uint(type, BPF_MAP_TYPE_PERCPU_HASH);
+	__uint(max_entries, MAX_ENTRIES);
+	__type(key, __u32);
+	__type(value, __u32);
+} m_percpu_hash SEC(".maps");
+
+static inline int check_percpu_hash(void)
+{
+	struct bpf_htab *percpu_hash = (struct bpf_htab *)&m_percpu_hash;
+	struct bpf_map *map = (struct bpf_map *)&m_percpu_hash;
+
+	VERIFY(check_default(&percpu_hash->map, map));
+
+	return 1;
+}
+
+struct {
+	__uint(type, BPF_MAP_TYPE_PERCPU_ARRAY);
+	__uint(max_entries, MAX_ENTRIES);
+	__type(key, __u32);
+	__type(value, __u32);
+} m_percpu_array SEC(".maps");
+
+static inline int check_percpu_array(void)
+{
+	struct bpf_array *percpu_array = (struct bpf_array *)&m_percpu_array;
+	struct bpf_map *map = (struct bpf_map *)&m_percpu_array;
+
+	VERIFY(check_default(&percpu_array->map, map));
+
+	return 1;
+}
+
+struct bpf_stack_map {
+	struct bpf_map map;
+} __attribute__((preserve_access_index));
+
+struct {
+	__uint(type, BPF_MAP_TYPE_STACK_TRACE);
+	__uint(max_entries, MAX_ENTRIES);
+	__type(key, __u32);
+	__type(value, __u64);
+} m_stack_trace SEC(".maps");
+
+static inline int check_stack_trace(void)
+{
+	struct bpf_stack_map *stack_trace =
+		(struct bpf_stack_map *)&m_stack_trace;
+	struct bpf_map *map = (struct bpf_map *)&m_stack_trace;
+
+	VERIFY(check(&stack_trace->map, map, sizeof(__u32), sizeof(__u64),
+		     MAX_ENTRIES));
+
+	return 1;
+}
+
+struct {
+	__uint(type, BPF_MAP_TYPE_CGROUP_ARRAY);
+	__uint(max_entries, MAX_ENTRIES);
+	__type(key, __u32);
+	__type(value, __u32);
+} m_cgroup_array SEC(".maps");
+
+static inline int check_cgroup_array(void)
+{
+	struct bpf_array *cgroup_array = (struct bpf_array *)&m_cgroup_array;
+	struct bpf_map *map = (struct bpf_map *)&m_cgroup_array;
+
+	VERIFY(check_default(&cgroup_array->map, map));
+
+	return 1;
+}
+
+struct {
+	__uint(type, BPF_MAP_TYPE_LRU_HASH);
+	__uint(max_entries, MAX_ENTRIES);
+	__type(key, __u32);
+	__type(value, __u32);
+} m_lru_hash SEC(".maps");
+
+static inline int check_lru_hash(void)
+{
+	struct bpf_htab *lru_hash = (struct bpf_htab *)&m_lru_hash;
+	struct bpf_map *map = (struct bpf_map *)&m_lru_hash;
+
+	VERIFY(check_default(&lru_hash->map, map));
+
+	return 1;
+}
+
+struct {
+	__uint(type, BPF_MAP_TYPE_LRU_PERCPU_HASH);
+	__uint(max_entries, MAX_ENTRIES);
+	__type(key, __u32);
+	__type(value, __u32);
+} m_lru_percpu_hash SEC(".maps");
+
+static inline int check_lru_percpu_hash(void)
+{
+	struct bpf_htab *lru_percpu_hash = (struct bpf_htab *)&m_lru_percpu_hash;
+	struct bpf_map *map = (struct bpf_map *)&m_lru_percpu_hash;
+
+	VERIFY(check_default(&lru_percpu_hash->map, map));
+
+	return 1;
+}
+
+struct lpm_trie {
+	struct bpf_map map;
+} __attribute__((preserve_access_index));
+
+struct lpm_key {
+	struct bpf_lpm_trie_key trie_key;
+	__u32 data;
+};
+
+struct {
+	__uint(type, BPF_MAP_TYPE_LPM_TRIE);
+	__uint(map_flags, BPF_F_NO_PREALLOC);
+	__uint(max_entries, MAX_ENTRIES);
+	__type(key, struct lpm_key);
+	__type(value, __u32);
+} m_lpm_trie SEC(".maps");
+
+static inline int check_lpm_trie(void)
+{
+	struct lpm_trie *lpm_trie = (struct lpm_trie *)&m_lpm_trie;
+	struct bpf_map *map = (struct bpf_map *)&m_lpm_trie;
+
+	VERIFY(check(&lpm_trie->map, map, sizeof(struct lpm_key), sizeof(__u32),
+		     MAX_ENTRIES));
+
+	return 1;
+}
+
+struct inner_map {
+	__uint(type, BPF_MAP_TYPE_ARRAY);
+	__uint(max_entries, 1);
+	__type(key, __u32);
+	__type(value, __u32);
+} inner_map SEC(".maps");
+
+struct {
+	__uint(type, BPF_MAP_TYPE_ARRAY_OF_MAPS);
+	__uint(max_entries, MAX_ENTRIES);
+	__type(key, __u32);
+	__type(value, __u32);
+	__array(values, struct {
+		__uint(type, BPF_MAP_TYPE_ARRAY);
+		__uint(max_entries, 1);
+		__type(key, __u32);
+		__type(value, __u32);
+	});
+} m_array_of_maps SEC(".maps") = {
+	.values = { (void *)&inner_map, 0, 0, 0, 0, 0, 0, 0, 0 },
+};
+
+static inline int check_array_of_maps(void)
+{
+	struct bpf_array *array_of_maps = (struct bpf_array *)&m_array_of_maps;
+	struct bpf_map *map = (struct bpf_map *)&m_array_of_maps;
+
+	VERIFY(check_default(&array_of_maps->map, map));
+
+	return 1;
+}
+
+struct {
+	__uint(type, BPF_MAP_TYPE_HASH_OF_MAPS);
+	__uint(max_entries, MAX_ENTRIES);
+	__type(key, __u32);
+	__type(value, __u32);
+	__array(values, struct inner_map);
+} m_hash_of_maps SEC(".maps") = {
+	.values = {
+		[2] = &inner_map,
+	},
+};
+
+static inline int check_hash_of_maps(void)
+{
+	struct bpf_htab *hash_of_maps = (struct bpf_htab *)&m_hash_of_maps;
+	struct bpf_map *map = (struct bpf_map *)&m_hash_of_maps;
+
+	VERIFY(check_default(&hash_of_maps->map, map));
+
+	return 1;
+}
+
+struct bpf_dtab {
+	struct bpf_map map;
+} __attribute__((preserve_access_index));
+
+struct {
+	__uint(type, BPF_MAP_TYPE_DEVMAP);
+	__uint(max_entries, MAX_ENTRIES);
+	__type(key, __u32);
+	__type(value, __u32);
+} m_devmap SEC(".maps");
+
+static inline int check_devmap(void)
+{
+	struct bpf_dtab *devmap = (struct bpf_dtab *)&m_devmap;
+	struct bpf_map *map = (struct bpf_map *)&m_devmap;
+
+	VERIFY(check_default(&devmap->map, map));
+
+	return 1;
+}
+
+struct bpf_stab {
+	struct bpf_map map;
+} __attribute__((preserve_access_index));
+
+struct {
+	__uint(type, BPF_MAP_TYPE_SOCKMAP);
+	__uint(max_entries, MAX_ENTRIES);
+	__type(key, __u32);
+	__type(value, __u32);
+} m_sockmap SEC(".maps");
+
+static inline int check_sockmap(void)
+{
+	struct bpf_stab *sockmap = (struct bpf_stab *)&m_sockmap;
+	struct bpf_map *map = (struct bpf_map *)&m_sockmap;
+
+	VERIFY(check_default(&sockmap->map, map));
+
+	return 1;
+}
+
+struct bpf_cpu_map {
+	struct bpf_map map;
+} __attribute__((preserve_access_index));
+
+struct {
+	__uint(type, BPF_MAP_TYPE_CPUMAP);
+	__uint(max_entries, MAX_ENTRIES);
+	__type(key, __u32);
+	__type(value, __u32);
+} m_cpumap SEC(".maps");
+
+static inline int check_cpumap(void)
+{
+	struct bpf_cpu_map *cpumap = (struct bpf_cpu_map *)&m_cpumap;
+	struct bpf_map *map = (struct bpf_map *)&m_cpumap;
+
+	VERIFY(check_default(&cpumap->map, map));
+
+	return 1;
+}
+
+struct xsk_map {
+	struct bpf_map map;
+} __attribute__((preserve_access_index));
+
+struct {
+	__uint(type, BPF_MAP_TYPE_XSKMAP);
+	__uint(max_entries, MAX_ENTRIES);
+	__type(key, __u32);
+	__type(value, __u32);
+} m_xskmap SEC(".maps");
+
+static inline int check_xskmap(void)
+{
+	struct xsk_map *xskmap = (struct xsk_map *)&m_xskmap;
+	struct bpf_map *map = (struct bpf_map *)&m_xskmap;
+
+	VERIFY(check_default(&xskmap->map, map));
+
+	return 1;
+}
+
+struct bpf_shtab {
+	struct bpf_map map;
+} __attribute__((preserve_access_index));
+
+struct {
+	__uint(type, BPF_MAP_TYPE_SOCKHASH);
+	__uint(max_entries, MAX_ENTRIES);
+	__type(key, __u32);
+	__type(value, __u32);
+} m_sockhash SEC(".maps");
+
+static inline int check_sockhash(void)
+{
+	struct bpf_shtab *sockhash = (struct bpf_shtab *)&m_sockhash;
+	struct bpf_map *map = (struct bpf_map *)&m_sockhash;
+
+	VERIFY(check_default(&sockhash->map, map));
+
+	return 1;
+}
+
+struct bpf_cgroup_storage_map {
+	struct bpf_map map;
+} __attribute__((preserve_access_index));
+
+struct {
+	__uint(type, BPF_MAP_TYPE_CGROUP_STORAGE);
+	__type(key, struct bpf_cgroup_storage_key);
+	__type(value, __u32);
+} m_cgroup_storage SEC(".maps");
+
+static inline int check_cgroup_storage(void)
+{
+	struct bpf_cgroup_storage_map *cgroup_storage =
+		(struct bpf_cgroup_storage_map *)&m_cgroup_storage;
+	struct bpf_map *map = (struct bpf_map *)&m_cgroup_storage;
+
+	VERIFY(check(&cgroup_storage->map, map,
+		     sizeof(struct bpf_cgroup_storage_key), sizeof(__u32), 0));
+
+	return 1;
+}
+
+struct reuseport_array {
+	struct bpf_map map;
+} __attribute__((preserve_access_index));
+
+struct {
+	__uint(type, BPF_MAP_TYPE_REUSEPORT_SOCKARRAY);
+	__uint(max_entries, MAX_ENTRIES);
+	__type(key, __u32);
+	__type(value, __u32);
+} m_reuseport_sockarray SEC(".maps");
+
+static inline int check_reuseport_sockarray(void)
+{
+	struct reuseport_array *reuseport_sockarray =
+		(struct reuseport_array *)&m_reuseport_sockarray;
+	struct bpf_map *map = (struct bpf_map *)&m_reuseport_sockarray;
+
+	VERIFY(check_default(&reuseport_sockarray->map, map));
+
+	return 1;
+}
+
+struct {
+	__uint(type, BPF_MAP_TYPE_PERCPU_CGROUP_STORAGE);
+	__type(key, struct bpf_cgroup_storage_key);
+	__type(value, __u32);
+} m_percpu_cgroup_storage SEC(".maps");
+
+static inline int check_percpu_cgroup_storage(void)
+{
+	struct bpf_cgroup_storage_map *percpu_cgroup_storage =
+		(struct bpf_cgroup_storage_map *)&m_percpu_cgroup_storage;
+	struct bpf_map *map = (struct bpf_map *)&m_percpu_cgroup_storage;
+
+	VERIFY(check(&percpu_cgroup_storage->map, map,
+		     sizeof(struct bpf_cgroup_storage_key), sizeof(__u32), 0));
+
+	return 1;
+}
+
+struct bpf_queue_stack {
+	struct bpf_map map;
+} __attribute__((preserve_access_index));
+
+struct {
+	__uint(type, BPF_MAP_TYPE_QUEUE);
+	__uint(max_entries, MAX_ENTRIES);
+	__type(value, __u32);
+} m_queue SEC(".maps");
+
+static inline int check_queue(void)
+{
+	struct bpf_queue_stack *queue = (struct bpf_queue_stack *)&m_queue;
+	struct bpf_map *map = (struct bpf_map *)&m_queue;
+
+	VERIFY(check(&queue->map, map, 0, sizeof(__u32), MAX_ENTRIES));
+
+	return 1;
+}
+
+struct {
+	__uint(type, BPF_MAP_TYPE_STACK);
+	__uint(max_entries, MAX_ENTRIES);
+	__type(value, __u32);
+} m_stack SEC(".maps");
+
+static inline int check_stack(void)
+{
+	struct bpf_queue_stack *stack = (struct bpf_queue_stack *)&m_stack;
+	struct bpf_map *map = (struct bpf_map *)&m_stack;
+
+	VERIFY(check(&stack->map, map, 0, sizeof(__u32), MAX_ENTRIES));
+
+	return 1;
+}
+
+struct bpf_sk_storage_map {
+	struct bpf_map map;
+} __attribute__((preserve_access_index));
+
+struct {
+	__uint(type, BPF_MAP_TYPE_SK_STORAGE);
+	__uint(map_flags, BPF_F_NO_PREALLOC);
+	__type(key, __u32);
+	__type(value, __u32);
+} m_sk_storage SEC(".maps");
+
+static inline int check_sk_storage(void)
+{
+	struct bpf_sk_storage_map *sk_storage =
+		(struct bpf_sk_storage_map *)&m_sk_storage;
+	struct bpf_map *map = (struct bpf_map *)&m_sk_storage;
+
+	VERIFY(check(&sk_storage->map, map, sizeof(__u32), sizeof(__u32), 0));
+
+	return 1;
+}
+
+struct {
+	__uint(type, BPF_MAP_TYPE_DEVMAP_HASH);
+	__uint(max_entries, MAX_ENTRIES);
+	__type(key, __u32);
+	__type(value, __u32);
+} m_devmap_hash SEC(".maps");
+
+static inline int check_devmap_hash(void)
+{
+	struct bpf_dtab *devmap_hash = (struct bpf_dtab *)&m_devmap_hash;
+	struct bpf_map *map = (struct bpf_map *)&m_devmap_hash;
+
+	VERIFY(check_default(&devmap_hash->map, map));
+
+	return 1;
+}
+
+struct bpf_ringbuf_map {
+	struct bpf_map map;
+} __attribute__((preserve_access_index));
+
+struct {
+	__uint(type, BPF_MAP_TYPE_RINGBUF);
+	__uint(max_entries, 1 << 12);
+} m_ringbuf SEC(".maps");
+
+static inline int check_ringbuf(void)
+{
+	struct bpf_ringbuf_map *ringbuf = (struct bpf_ringbuf_map *)&m_ringbuf;
+	struct bpf_map *map = (struct bpf_map *)&m_ringbuf;
+
+	VERIFY(check(&ringbuf->map, map, 0, 0, 1 << 12));
+
+	return 1;
+}
+
+SEC("cgroup_skb/egress")
+int cg_skb(void *ctx)
+{
+	VERIFY_TYPE(BPF_MAP_TYPE_HASH, check_hash);
+	VERIFY_TYPE(BPF_MAP_TYPE_ARRAY, check_array);
+	VERIFY_TYPE(BPF_MAP_TYPE_PROG_ARRAY, check_prog_array);
+	VERIFY_TYPE(BPF_MAP_TYPE_PERF_EVENT_ARRAY, check_perf_event_array);
+	VERIFY_TYPE(BPF_MAP_TYPE_PERCPU_HASH, check_percpu_hash);
+	VERIFY_TYPE(BPF_MAP_TYPE_PERCPU_ARRAY, check_percpu_array);
+	VERIFY_TYPE(BPF_MAP_TYPE_STACK_TRACE, check_stack_trace);
+	VERIFY_TYPE(BPF_MAP_TYPE_CGROUP_ARRAY, check_cgroup_array);
+	VERIFY_TYPE(BPF_MAP_TYPE_LRU_HASH, check_lru_hash);
+	VERIFY_TYPE(BPF_MAP_TYPE_LRU_PERCPU_HASH, check_lru_percpu_hash);
+	VERIFY_TYPE(BPF_MAP_TYPE_LPM_TRIE, check_lpm_trie);
+	VERIFY_TYPE(BPF_MAP_TYPE_ARRAY_OF_MAPS, check_array_of_maps);
+	VERIFY_TYPE(BPF_MAP_TYPE_HASH_OF_MAPS, check_hash_of_maps);
+	VERIFY_TYPE(BPF_MAP_TYPE_DEVMAP, check_devmap);
+	VERIFY_TYPE(BPF_MAP_TYPE_SOCKMAP, check_sockmap);
+	VERIFY_TYPE(BPF_MAP_TYPE_CPUMAP, check_cpumap);
+	VERIFY_TYPE(BPF_MAP_TYPE_XSKMAP, check_xskmap);
+	VERIFY_TYPE(BPF_MAP_TYPE_SOCKHASH, check_sockhash);
+	VERIFY_TYPE(BPF_MAP_TYPE_CGROUP_STORAGE, check_cgroup_storage);
+	VERIFY_TYPE(BPF_MAP_TYPE_REUSEPORT_SOCKARRAY,
+		    check_reuseport_sockarray);
+	VERIFY_TYPE(BPF_MAP_TYPE_PERCPU_CGROUP_STORAGE,
+		    check_percpu_cgroup_storage);
+	VERIFY_TYPE(BPF_MAP_TYPE_QUEUE, check_queue);
+	VERIFY_TYPE(BPF_MAP_TYPE_STACK, check_stack);
+	VERIFY_TYPE(BPF_MAP_TYPE_SK_STORAGE, check_sk_storage);
+	VERIFY_TYPE(BPF_MAP_TYPE_DEVMAP_HASH, check_devmap_hash);
+	VERIFY_TYPE(BPF_MAP_TYPE_RINGBUF, check_ringbuf);
+
+	return 1;
+}
+
+__u32 _version SEC("version") = 1;
+char _license[] SEC("license") = "GPL";
diff --git a/tools/testing/selftests/bpf/verifier/map_ptr.c b/tools/testing/selftests/bpf/verifier/map_ptr.c
new file mode 100644
index 000000000000..b52209db8250
--- /dev/null
+++ b/tools/testing/selftests/bpf/verifier/map_ptr.c
@@ -0,0 +1,62 @@
+{
+	"bpf_map_ptr: read with negative offset rejected",
+	.insns = {
+	BPF_MOV64_REG(BPF_REG_1, BPF_REG_10),
+	BPF_LD_MAP_FD(BPF_REG_1, 0),
+	BPF_LDX_MEM(BPF_DW, BPF_REG_6, BPF_REG_1, -8),
+	BPF_MOV64_IMM(BPF_REG_0, 1),
+	BPF_EXIT_INSN(),
+	},
+	.fixup_map_array_48b = { 1 },
+	.result_unpriv = REJECT,
+	.errstr_unpriv = "bpf_array access is allowed only to CAP_PERFMON and CAP_SYS_ADMIN",
+	.result = REJECT,
+	.errstr = "R1 is bpf_array invalid negative access: off=-8",
+},
+{
+	"bpf_map_ptr: write rejected",
+	.insns = {
+	BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0),
+	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
+	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
+	BPF_LD_MAP_FD(BPF_REG_1, 0),
+	BPF_STX_MEM(BPF_DW, BPF_REG_1, BPF_REG_2, 0),
+	BPF_MOV64_IMM(BPF_REG_0, 1),
+	BPF_EXIT_INSN(),
+	},
+	.fixup_map_array_48b = { 3 },
+	.result_unpriv = REJECT,
+	.errstr_unpriv = "bpf_array access is allowed only to CAP_PERFMON and CAP_SYS_ADMIN",
+	.result = REJECT,
+	.errstr = "only read from bpf_array is supported",
+},
+{
+	"bpf_map_ptr: read non-existent field rejected",
+	.insns = {
+	BPF_MOV64_IMM(BPF_REG_6, 0),
+	BPF_LD_MAP_FD(BPF_REG_1, 0),
+	BPF_LDX_MEM(BPF_W, BPF_REG_6, BPF_REG_1, 1),
+	BPF_MOV64_IMM(BPF_REG_0, 1),
+	BPF_EXIT_INSN(),
+	},
+	.fixup_map_array_48b = { 1 },
+	.result_unpriv = REJECT,
+	.errstr_unpriv = "bpf_array access is allowed only to CAP_PERFMON and CAP_SYS_ADMIN",
+	.result = REJECT,
+	.errstr = "cannot access ptr member ops with moff 0 in struct bpf_map with off 1 size 4",
+},
+{
+	"bpf_map_ptr: read ops field accepted",
+	.insns = {
+	BPF_MOV64_IMM(BPF_REG_6, 0),
+	BPF_LD_MAP_FD(BPF_REG_1, 0),
+	BPF_LDX_MEM(BPF_DW, BPF_REG_6, BPF_REG_1, 0),
+	BPF_MOV64_IMM(BPF_REG_0, 1),
+	BPF_EXIT_INSN(),
+	},
+	.fixup_map_array_48b = { 1 },
+	.result_unpriv = REJECT,
+	.errstr_unpriv = "bpf_array access is allowed only to CAP_PERFMON and CAP_SYS_ADMIN",
+	.result = ACCEPT,
+	.retval = 1,
+},
-- 
2.24.1


^ permalink raw reply related	[flat|nested] 15+ messages in thread
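
The selftest above relies on VERIFY(), VERIFY_TYPE(), check() and
check_default() helpers defined earlier in map_ptr_kern.c, outside this
excerpt. Below is a minimal sketch of what such helpers could look like;
the mirrored `struct bpf_map` and the MAX_ENTRIES name follow the usage
above, while the MAX_ENTRIES value, the g_map_type global and the exact
checks are illustrative assumptions rather than the actual file contents:

	#include <linux/bpf.h>

	#define MAX_ENTRIES 8	/* value is a guess; must match the map definitions */

	/* mirror of the kernel struct, relocated at load time via CO-RE */
	struct bpf_map {
		enum bpf_map_type map_type;
		__u32 key_size;
		__u32 value_size;
		__u32 max_entries;
	} __attribute__((preserve_access_index));

	/* map type currently under test, set by VERIFY_TYPE() (assumed name) */
	__u32 g_map_type = BPF_MAP_TYPE_UNSPEC;

	/* bail out of the enclosing check function on the first failure */
	#define VERIFY(expr) ({			\
		if (!(expr))			\
			return 0;		\
	})

	/* select the map type under test and run its check function */
	#define VERIFY_TYPE(type, func) ({	\
		g_map_type = type;		\
		if (!func())			\
			return 0;		\
	})

	/* compare fields seen through the map-type specific struct (indirect)
	 * and through struct bpf_map (direct) against the expected values
	 */
	static inline int check(struct bpf_map *indirect, struct bpf_map *direct,
				__u32 key_size, __u32 value_size,
				__u32 max_entries)
	{
		VERIFY(indirect->map_type == g_map_type);
		VERIFY(direct->map_type == g_map_type);
		VERIFY(indirect->key_size == key_size);
		VERIFY(indirect->value_size == value_size);
		VERIFY(indirect->max_entries == max_entries);

		return 1;
	}

	static inline int check_default(struct bpf_map *indirect,
					struct bpf_map *direct)
	{
		VERIFY(check(indirect, direct, sizeof(__u32), sizeof(__u32),
			     MAX_ENTRIES));
		return 1;
	}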

* RE: [PATCH v2 bpf-next 1/5] bpf: Switch btf_parse_vmlinux to btf_find_by_name_kind
  2020-06-19 21:11 ` [PATCH v2 bpf-next 1/5] bpf: Switch btf_parse_vmlinux to btf_find_by_name_kind Andrey Ignatov
@ 2020-06-20  6:04   ` John Fastabend
  0 siblings, 0 replies; 15+ messages in thread
From: John Fastabend @ 2020-06-20  6:04 UTC (permalink / raw)
  To: Andrey Ignatov, bpf
  Cc: Andrey Ignatov, ast, daniel, kafai, andriin, kernel-team

Andrey Ignatov wrote:
> btf_parse_vmlinux() implements a manual search for struct bpf_ctx_convert
> since btf_find_by_name_kind() was not available at the time it was
> implemented.
> 
> Later btf_find_by_name_kind() was introduced in 27ae7997a661 ("bpf:
> Introduce BPF_PROG_TYPE_STRUCT_OPS"). It provides similar search
> functionality and can be leveraged in btf_parse_vmlinux(). Do it.
> 
> Signed-off-by: Andrey Ignatov <rdna@fb.com>
> Acked-by: Andrii Nakryiko <andriin@fb.com>
> ---

Acked-by: John Fastabend <john.fastabend@gmail.com>

^ permalink raw reply	[flat|nested] 15+ messages in thread
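
For reference, the lookup that replaces the open-coded loop boils down to
a single call. A sketch is below; only btf_find_by_name_kind() and the
struct name come from the description above, while the err variable and
the errout label are illustrative:

	s32 id;

	/* look up "struct bpf_ctx_convert" by name and kind instead of
	 * iterating over every type in the vmlinux BTF
	 */
	id = btf_find_by_name_kind(btf, "bpf_ctx_convert", BTF_KIND_STRUCT);
	if (id < 0) {
		err = id;
		goto errout;
	}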

* RE: [PATCH v2 bpf-next 2/5] bpf: Rename bpf_htab to bpf_shtab in sock_map
  2020-06-19 21:11 ` [PATCH v2 bpf-next 2/5] bpf: Rename bpf_htab to bpf_shtab in sock_map Andrey Ignatov
@ 2020-06-20  6:08   ` John Fastabend
  0 siblings, 0 replies; 15+ messages in thread
From: John Fastabend @ 2020-06-20  6:08 UTC (permalink / raw)
  To: Andrey Ignatov, bpf
  Cc: Andrey Ignatov, ast, daniel, kafai, andriin, kernel-team

Andrey Ignatov wrote:
> There are two different `struct bpf_htab` in bpf code in the following
> files:
> - kernel/bpf/hashtab.c
> - net/core/sock_map.c
> 
> It makes it impossible to find the proper btf_id by name = "bpf_htab"
> and kind = BTF_KIND_STRUCT, which is needed to support access to map
> ptr so that a bpf program can access `struct bpf_htab` fields.
> 
> To make this possible one of the structs should be renamed; sock_map.c
> looks like the better candidate for the rename since it's a specialized
> version of hashtab.
> 
> Rename it to bpf_shtab ("sh" stands for Sock Hash).
> 
> Signed-off-by: Andrey Ignatov <rdna@fb.com>
> ---
>  net/core/sock_map.c | 82 ++++++++++++++++++++++-----------------------
>  1 file changed, 41 insertions(+), 41 deletions(-)

Acked-by: John Fastabend <john.fastabend@gmail.com>

^ permalink raw reply	[flat|nested] 15+ messages in thread

* RE: [PATCH v2 bpf-next 3/5] bpf: Support access to bpf map fields
  2020-06-19 21:11 ` [PATCH v2 bpf-next 3/5] bpf: Support access to bpf map fields Andrey Ignatov
@ 2020-06-21  3:27   ` John Fastabend
  2020-06-22 18:39     ` Andrey Ignatov
  0 siblings, 1 reply; 15+ messages in thread
From: John Fastabend @ 2020-06-21  3:27 UTC (permalink / raw)
  To: Andrey Ignatov, bpf
  Cc: Andrey Ignatov, ast, daniel, kafai, andriin, kernel-team

Andrey Ignatov wrote:
> There are multiple use-cases where it's convenient to have access to bpf
> map fields, both `struct bpf_map` and map type specific structs such as
> `struct bpf_array`, `struct bpf_htab`, etc.
> 
> For example, while working with sock arrays it can be necessary to
> calculate the key based on map->max_entries (some_hash % max_entries).
> Currently this is solved by communicating max_entries via an
> "out-of-band" channel, e.g. via an additional map with a known key that
> holds info about the target map. That works, but it is inconvenient and
> error-prone when working with many maps.
> 
> In other cases the necessary data is dynamic (i.e. unknown at load
> time) and it's impossible to get it this way at all. For example, while
> working with a hash table it can be convenient to know how much of the
> capacity is already used (bpf_htab.count.counter for the
> BPF_F_NO_PREALLOC case).
> 
> At the same time the kernel knows this info and can provide it to the
> bpf program.
> 
> Fill this gap by adding support for accessing bpf map fields from a bpf
> program, for both `struct bpf_map` and map type specific fields.
> 
> Support is implemented via btf_struct_access() so that a user can
> define their own `struct bpf_map` or map type specific struct in their
> program with only the necessary fields and the preserve_access_index
> attribute, cast a map to this struct, and use a field.
> 
> For example:
> 
> 	struct bpf_map {
> 		__u32 max_entries;
> 	} __attribute__((preserve_access_index));
> 
> 	struct bpf_array {
> 		struct bpf_map map;
> 		__u32 elem_size;
> 	} __attribute__((preserve_access_index));
> 
> 	struct {
> 		__uint(type, BPF_MAP_TYPE_ARRAY);
> 		__uint(max_entries, 4);
> 		__type(key, __u32);
> 		__type(value, __u32);
> 	} m_array SEC(".maps");
> 
> 	SEC("cgroup_skb/egress")
> 	int cg_skb(void *ctx)
> 	{
> 		struct bpf_array *array = (struct bpf_array *)&m_array;
> 		struct bpf_map *map = (struct bpf_map *)&m_array;
> 
> 		/* .. use map->max_entries or array->map.max_entries .. */
> 	}
> 
> Similarly to other btf_struct_access() use-cases (e.g. struct tcp_sock
> in net/ipv4/bpf_tcp_ca.c), the patch allows access to any fields of the
> corresponding struct. Only reading from map fields is supported.
> 
> For btf_struct_access() to work there should be a way to know the btf
> id of the struct that corresponds to a map type. To get the btf id
> there should be a way to get a stringified name of the map-specific
> struct, such as "bpf_array", "bpf_htab", etc. for a map type. Two new
> fields are added to `struct bpf_map_ops` to handle it:
> * .map_btf_name keeps the btf name of the struct returned by map_alloc();
> * .map_btf_id is used to cache the btf id of that struct.
> 
> To make btf id calculation cheaper, the ids are calculated once while
> preparing btf_vmlinux and cached the same way as it's done for the
> btf_id field of `struct bpf_func_proto`.
> 
> While calculating btf ids, struct names are NOT checked for collision.
> Collisions will be checked as part of the work to prepare btf ids used
> in the verifier at compile time, which should land soon. The only known
> collision for `struct bpf_htab` (kernel/bpf/hashtab.c vs
> net/core/sock_map.c) was fixed earlier.
> 
> Both new fields .map_btf_name and .map_btf_id must be set for a map
> type for the feature to work. If neither is set for a map type, the
> verifier will return ENOTSUPP on an attempt to access a map_ptr of that
> type. If just one of them is set, it's a verifier misconfiguration.
> 
> Only `struct bpf_array` for BPF_MAP_TYPE_ARRAY and `struct bpf_htab` for
> BPF_MAP_TYPE_HASH are supported by this patch. Other map types will be
> supported separately.
> 
> The feature is available only for CONFIG_DEBUG_INFO_BTF=y and gated by
> perfmon_capable() so that unpriv programs won't have access to bpf map
> fields.
> 
> Signed-off-by: Andrey Ignatov <rdna@fb.com>
> ---
>  include/linux/bpf.h                           |  9 ++
>  include/linux/bpf_verifier.h                  |  1 +
>  kernel/bpf/arraymap.c                         |  3 +
>  kernel/bpf/btf.c                              | 40 +++++++++
>  kernel/bpf/hashtab.c                          |  3 +
>  kernel/bpf/verifier.c                         | 82 +++++++++++++++++--
>  .../selftests/bpf/verifier/map_ptr_mixing.c   |  2 +-
>  7 files changed, 131 insertions(+), 9 deletions(-)
> 
> diff --git a/include/linux/bpf.h b/include/linux/bpf.h
> index 07052d44bca1..1e1501ee53ce 100644

LGTM, but is there any reason not to allow this with bpf_capable()? It
looks useful for building load balancers, which might not be related to
CAP_PERFMON.

Otherwise,

Acked-by: John Fastabend <john.fastabend@gmail.com>

^ permalink raw reply	[flat|nested] 15+ messages in thread
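
To illustrate the load-balancer angle raised above: once map fields are
readable, the backend key can be derived from the map's real size rather
than from an out-of-band config map. A minimal sketch, assuming a SOCKMAP
named backends and a hash value computed elsewhere (both names are
illustrative); the program still has to be loaded with CAP_PERFMON or
CAP_SYS_ADMIN, as discussed later in this thread:

	struct bpf_map {
		__u32 max_entries;
	} __attribute__((preserve_access_index));

	struct {
		__uint(type, BPF_MAP_TYPE_SOCKMAP);
		__uint(max_entries, 64);
		__type(key, __u32);
		__type(value, __u64);
	} backends SEC(".maps");

	static __u32 pick_backend(__u32 some_hash)
	{
		struct bpf_map *map = (struct bpf_map *)&backends;

		/* the key tracks the real map size, so resizing the map at
		 * load time doesn't require patching the program
		 */
		return some_hash % map->max_entries;
	}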

* RE: [PATCH v2 bpf-next 4/5] bpf: Set map_btf_{name,id} for all map types
  2020-06-19 21:11 ` [PATCH v2 bpf-next 4/5] bpf: Set map_btf_{name,id} for all map types Andrey Ignatov
@ 2020-06-21  3:28   ` John Fastabend
  0 siblings, 0 replies; 15+ messages in thread
From: John Fastabend @ 2020-06-21  3:28 UTC (permalink / raw)
  To: Andrey Ignatov, bpf
  Cc: Andrey Ignatov, ast, daniel, kafai, andriin, kernel-team

Andrey Ignatov wrote:
> Set map_btf_name and map_btf_id for all map types so that map fields can
> be accessed by bpf programs.
> 
> Signed-off-by: Andrey Ignatov <rdna@fb.com>
> ---

Acked-by: John Fastabend <john.fastabend@gmail.com>

^ permalink raw reply	[flat|nested] 15+ messages in thread
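
The per-map-type hunks themselves are not part of this excerpt. Going by
the cover letter's description of the new bpf_map_ops fields, the wiring
for each map type presumably looks something like the sketch below; only
.map_btf_name/.map_btf_id and the "bpf_array" struct name come from the
series, while the static variable and the pointer-based caching are
assumptions:

	/* kernel/bpf/arraymap.c (sketch) */
	static int array_map_btf_id;

	const struct bpf_map_ops array_map_ops = {
		/* ... existing callbacks such as .map_alloc, .map_lookup_elem ... */

		/* name of the struct returned by .map_alloc(); resolved
		 * against btf_vmlinux while parsing vmlinux BTF
		 */
		.map_btf_name = "bpf_array",
		/* cache for the resolved BTF id */
		.map_btf_id = &array_map_btf_id,
	};

Caching through a pointer to a separate static would let the id be written
once at BTF-parse time while keeping the ops structure itself const, but
whether the series does exactly that is not visible from this excerpt.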

* RE: [PATCH v2 bpf-next 5/5] selftests/bpf: Test access to bpf map pointer
  2020-06-19 21:11 ` [PATCH v2 bpf-next 5/5] selftests/bpf: Test access to bpf map pointer Andrey Ignatov
@ 2020-06-21  3:35   ` John Fastabend
  0 siblings, 0 replies; 15+ messages in thread
From: John Fastabend @ 2020-06-21  3:35 UTC (permalink / raw)
  To: Andrey Ignatov, bpf
  Cc: Andrey Ignatov, ast, daniel, kafai, andriin, kernel-team

Andrey Ignatov wrote:
> Add selftests to test access to map pointers from a bpf program for all
> map types except struct_ops (that one would need additional work).
> 
> The verifier test focuses mostly on scenarios that must be rejected.
> 
> The prog_tests test focuses on accessing multiple fields, both scalar
> and a nested struct, from a bpf program and verifies that those fields
> have the expected values.
> 
> Signed-off-by: Andrey Ignatov <rdna@fb.com>
> ---

Acked-by: John Fastabend <john.fastabend@gmail.com>

^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: [PATCH v2 bpf-next 0/5] bpf: Support access to bpf map fields
  2020-06-19 21:11 [PATCH v2 bpf-next 0/5] bpf: Support access to bpf map fields Andrey Ignatov
                   ` (4 preceding siblings ...)
  2020-06-19 21:11 ` [PATCH v2 bpf-next 5/5] selftests/bpf: Test access to bpf map pointer Andrey Ignatov
@ 2020-06-22 17:42 ` Martin KaFai Lau
  2020-06-22 21:22 ` Daniel Borkmann
  6 siblings, 0 replies; 15+ messages in thread
From: Martin KaFai Lau @ 2020-06-22 17:42 UTC (permalink / raw)
  To: Andrey Ignatov; +Cc: bpf, ast, daniel, andriin, kernel-team

On Fri, Jun 19, 2020 at 02:11:40PM -0700, Andrey Ignatov wrote:
> v1->v2:
> - move btf id cache to a new bpf_map_ops.map_btf_id field (Martin, Andrii);
> - don't check btf names for collisions (Martin);
> - drop btf_find_by_name_kind_next() patch since it was needed only for
>   collision check;
> - don't fall back to `struct bpf_map` if a map type doesn't specify both
>   map_btf_name and map_btf_id;
> 
> This patch set adds support to access bpf map fields from bpf programs
> using btf_struct_access().
Acked-by: Martin KaFai Lau <kafai@fb.com>

^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: [PATCH v2 bpf-next 3/5] bpf: Support access to bpf map fields
  2020-06-21  3:27   ` John Fastabend
@ 2020-06-22 18:39     ` Andrey Ignatov
  2020-06-22 22:11       ` Alexei Starovoitov
  0 siblings, 1 reply; 15+ messages in thread
From: Andrey Ignatov @ 2020-06-22 18:39 UTC (permalink / raw)
  To: John Fastabend, ast; +Cc: bpf, daniel, kafai, andriin, kernel-team

John Fastabend <john.fastabend@gmail.com> [Sat, 2020-06-20 20:27 -0700]:
> Andrey Ignatov wrote:
...
> > 
> > The feature is available only for CONFIG_DEBUG_INFO_BTF=y and gated by
> > perfmon_capable() so that unpriv programs won't have access to bpf map
> > fields.
> > 
> > Signed-off-by: Andrey Ignatov <rdna@fb.com>
> > ---
> >  include/linux/bpf.h                           |  9 ++
> >  include/linux/bpf_verifier.h                  |  1 +
> >  kernel/bpf/arraymap.c                         |  3 +
> >  kernel/bpf/btf.c                              | 40 +++++++++
> >  kernel/bpf/hashtab.c                          |  3 +
> >  kernel/bpf/verifier.c                         | 82 +++++++++++++++++--
> >  .../selftests/bpf/verifier/map_ptr_mixing.c   |  2 +-
> >  7 files changed, 131 insertions(+), 9 deletions(-)
> > 
> > diff --git a/include/linux/bpf.h b/include/linux/bpf.h
> > index 07052d44bca1..1e1501ee53ce 100644
> 
> LGTM, but is there any reason not to allow this with bpf_capable()? It
> looks useful for building load balancers, which might not be related to
> CAP_PERFMON.

Thanks for the review, John. I agree that this can be useful for many
use-cases, including networking programs.

Accessing a kernel struct looks like a "tracing" kind of functionality
to me (that's why CAP_PERFMON), but I'm not quite sure, and using
bpf_capable() looks fine to me as well.

Alexei, since you introduced CAP_BPF, could you clarify which cap is the
right one to use here and why?


> Otherwise,
> 
> Acked-by: John Fastabend <john.fastabend@gmail.com>


-- 
Andrey Ignatov

^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: [PATCH v2 bpf-next 0/5] bpf: Support access to bpf map fields
  2020-06-19 21:11 [PATCH v2 bpf-next 0/5] bpf: Support access to bpf map fields Andrey Ignatov
                   ` (5 preceding siblings ...)
  2020-06-22 17:42 ` [PATCH v2 bpf-next 0/5] bpf: Support access to bpf map fields Martin KaFai Lau
@ 2020-06-22 21:22 ` Daniel Borkmann
  6 siblings, 0 replies; 15+ messages in thread
From: Daniel Borkmann @ 2020-06-22 21:22 UTC (permalink / raw)
  To: Andrey Ignatov, bpf; +Cc: ast, kafai, andriin, kernel-team

On 6/19/20 11:11 PM, Andrey Ignatov wrote:
> v1->v2:
> - move btf id cache to a new bpf_map_ops.map_btf_id field (Martin, Andrii);
> - don't check btf names for collisions (Martin);
> - drop btf_find_by_name_kind_next() patch since it was needed only for
>    collision check;
> - don't fall back to `struct bpf_map` if a map type doesn't specify both
>    map_btf_name and map_btf_id;
> 
> This patch set adds support to access bpf map fields from bpf programs
> using btf_struct_access().
> 
> That way a program can cast a pointer to map to either `struct bpf_map *`
> or map type specific struct pointer such as `struct bpf_array *` or
> `struct bpf_htab *`, and access necessary fields, e.g. map->max_entries.
> 
> The fields, in turn, should be defined by a user provided struct with
> preserve_access_index attribute or included from vmlinux.h.
> 
> Please see patch 3 for more details on the feature and use-cases.
> 
> Other patches:
> 
> Patch 1 is refactoring to simplify btf_parse_vmlinux().
> Patch 2 is a rename to avoid having two different `struct bpf_htab`.
> 
> Patch 4 enables access to map fields for all map types.
> Patch 5 adds selftests.
> 

Applied, thanks!

^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: [PATCH v2 bpf-next 3/5] bpf: Support access to bpf map fields
  2020-06-22 18:39     ` Andrey Ignatov
@ 2020-06-22 22:11       ` Alexei Starovoitov
  0 siblings, 0 replies; 15+ messages in thread
From: Alexei Starovoitov @ 2020-06-22 22:11 UTC (permalink / raw)
  To: Andrey Ignatov
  Cc: John Fastabend, ast, bpf, daniel, kafai, andriin, kernel-team

On Mon, Jun 22, 2020 at 11:39:11AM -0700, Andrey Ignatov wrote:
> John Fastabend <john.fastabend@gmail.com> [Sat, 2020-06-20 20:27 -0700]:
> > Andrey Ignatov wrote:
> ...
> > > 
> > > The feature is available only for CONFIG_DEBUG_INFO_BTF=y and gated by
> > > perfmon_capable() so that unpriv programs won't have access to bpf map
> > > fields.
> > > 
> > > Signed-off-by: Andrey Ignatov <rdna@fb.com>
> > > ---
> > >  include/linux/bpf.h                           |  9 ++
> > >  include/linux/bpf_verifier.h                  |  1 +
> > >  kernel/bpf/arraymap.c                         |  3 +
> > >  kernel/bpf/btf.c                              | 40 +++++++++
> > >  kernel/bpf/hashtab.c                          |  3 +
> > >  kernel/bpf/verifier.c                         | 82 +++++++++++++++++--
> > >  .../selftests/bpf/verifier/map_ptr_mixing.c   |  2 +-
> > >  7 files changed, 131 insertions(+), 9 deletions(-)
> > > 
> > > diff --git a/include/linux/bpf.h b/include/linux/bpf.h
> > > index 07052d44bca1..1e1501ee53ce 100644
> > 
> > LGTM, but is there any reason not to allow this with bpf_capable()? It
> > looks useful for building load balancers, which might not be related to
> > CAP_PERFMON.
> 
> Thanks for the review, John. I agree that this can be useful for many
> use-cases, including networking programs.
> 
> Accessing a kernel struct looks like a "tracing" kind of functionality
> to me (that's why CAP_PERFMON), but I'm not quite sure, and using
> bpf_capable() looks fine to me as well.
> 
> Alexei, since you introduced CAP_BPF, could you clarify which cap is the
> right one to use here and why?

It's tracing and pointers are accessed, so cap_perfmon.

^ permalink raw reply	[flat|nested] 15+ messages in thread
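
For completeness, the checks behind the error strings exercised in the
verifier selftests above can be pictured roughly as follows. This is a
sketch only: perfmon_capable() and the messages match the series and the
selftests, while the allow_ptr_to_map_access flag name, the locals (tname,
regno, off, atype) and the exact return codes are assumptions about what
the one-line bpf_verifier.h change and the verifier.c hunk actually
contain:

	/* in bpf_check(): snapshot the capability for this program load */
	env->allow_ptr_to_map_access = perfmon_capable();

	/* when a map pointer register is dereferenced (illustrative context) */
	if (!env->allow_ptr_to_map_access) {
		verbose(env,
			"%s access is allowed only to CAP_PERFMON and CAP_SYS_ADMIN\n",
			tname);
		return -EPERM;
	}
	if (off < 0) {
		verbose(env, "R%d is %s invalid negative access: off=%d\n",
			regno, tname, off);
		return -EACCES;
	}
	if (atype != BPF_READ) {
		verbose(env, "only read from %s is supported\n", tname);
		return -EACCES;
	}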

end of thread, other threads:[~2020-06-22 22:11 UTC | newest]

Thread overview: 15+ messages
2020-06-19 21:11 [PATCH v2 bpf-next 0/5] bpf: Support access to bpf map fields Andrey Ignatov
2020-06-19 21:11 ` [PATCH v2 bpf-next 1/5] bpf: Switch btf_parse_vmlinux to btf_find_by_name_kind Andrey Ignatov
2020-06-20  6:04   ` John Fastabend
2020-06-19 21:11 ` [PATCH v2 bpf-next 2/5] bpf: Rename bpf_htab to bpf_shtab in sock_map Andrey Ignatov
2020-06-20  6:08   ` John Fastabend
2020-06-19 21:11 ` [PATCH v2 bpf-next 3/5] bpf: Support access to bpf map fields Andrey Ignatov
2020-06-21  3:27   ` John Fastabend
2020-06-22 18:39     ` Andrey Ignatov
2020-06-22 22:11       ` Alexei Starovoitov
2020-06-19 21:11 ` [PATCH v2 bpf-next 4/5] bpf: Set map_btf_{name,id} for all map types Andrey Ignatov
2020-06-21  3:28   ` John Fastabend
2020-06-19 21:11 ` [PATCH v2 bpf-next 5/5] selftests/bpf: Test access to bpf map pointer Andrey Ignatov
2020-06-21  3:35   ` John Fastabend
2020-06-22 17:42 ` [PATCH v2 bpf-next 0/5] bpf: Support access to bpf map fields Martin KaFai Lau
2020-06-22 21:22 ` Daniel Borkmann
