* [PATCH bpf-next v2 0/6] bpf: Implement cgroup local storage available to non-cgroup-attached bpf progs
@ 2022-10-20 22:12 Yonghong Song
  2022-10-20 22:13 ` [PATCH bpf-next v2 1/6] bpf: Make struct cgroup btf id global Yonghong Song
                   ` (5 more replies)
  0 siblings, 6 replies; 23+ messages in thread
From: Yonghong Song @ 2022-10-20 22:12 UTC (permalink / raw)
  To: bpf
  Cc: Alexei Starovoitov, Andrii Nakryiko, Daniel Borkmann,
	kernel-team, KP Singh, Martin KaFai Lau, Tejun Heo

There already exists a local storage implementation for cgroup-attached
bpf programs. See map type BPF_MAP_TYPE_CGROUP_STORAGE and helper
bpf_get_local_storage(). But there are use cases where non-cgroup-attached
bpf progs want to access cgroup local storage data. For example,
a tc egress prog has access to both sk and cgroup. It is possible to emulate
cgroup local storage with sk local storage by storing the data in each socket,
but this is wasteful since many sockets may belong to the same cgroup.
Alternatively, a separate map can be created with the cgroup id as the key,
but this introduces additional overhead to manipulate the new map.
A cgroup local storage, similar to the existing sk/inode/task storage,
addresses this use case.

This patch series implements a new cgroup local storage available to
non-cgroup-attached bpf programs. Patch 1 is a preparation patch.
Patch 2 implements the new cgroup local storage kernel support.
Patches 3 and 4 implement libbpf and bpftool support. Patch 5 adds
two tests to validate the kernel/libbpf implementations. Patch 6 adds
documentation for the new BPF_MAP_TYPE_CGRP_STORAGE map type and a
comparison of the old and new cgroup local storage maps.

Changelogs:
  v1 -> v2:
    . change map name from BPF_MAP_TYPE_CGROUP_LOCAL_STORAGE to
      BPF_MAP_TYPE_CGRP_STORAGE.
    . removed support of sleepable programs.
    . changed the place of freeing cgrp local storage from put_css_set_locked()
      to css_free_rwork_fn().
    . added map documentation.

Yonghong Song (6):
  bpf: Make struct cgroup btf id global
  bpf: Implement cgroup storage available to non-cgroup-attached bpf
    progs
  libbpf: Support new cgroup local storage
  bpftool: Support new cgroup local storage
  selftests/bpf: Add selftests for cgroup local storage
  docs/bpf: Add documentation for map type BPF_MAP_TYPE_CGRP_STORAGE

 Documentation/bpf/map_cgrp_storage.rst        | 104 +++++++
 include/linux/bpf.h                           |   3 +
 include/linux/bpf_types.h                     |   1 +
 include/linux/btf_ids.h                       |   1 +
 include/linux/cgroup-defs.h                   |   4 +
 include/uapi/linux/bpf.h                      |  48 ++-
 kernel/bpf/Makefile                           |   2 +-
 kernel/bpf/bpf_cgrp_storage.c                 | 276 ++++++++++++++++++
 kernel/bpf/cgroup_iter.c                      |   2 +-
 kernel/bpf/helpers.c                          |   6 +
 kernel/bpf/syscall.c                          |   3 +-
 kernel/bpf/verifier.c                         |  13 +-
 kernel/cgroup/cgroup.c                        |   4 +
 kernel/trace/bpf_trace.c                      |   4 +
 scripts/bpf_doc.py                            |   2 +
 .../bpf/bpftool/Documentation/bpftool-map.rst |   2 +-
 tools/bpf/bpftool/map.c                       |   2 +-
 tools/include/uapi/linux/bpf.h                |  48 ++-
 tools/lib/bpf/libbpf.c                        |   1 +
 tools/lib/bpf/libbpf_probes.c                 |   1 +
 .../bpf/prog_tests/cgroup_local_storage.c     |  92 ++++++
 .../bpf/progs/cgroup_local_storage.c          |  88 ++++++
 .../selftests/bpf/progs/cgroup_ls_recursion.c |  70 +++++
 23 files changed, 769 insertions(+), 8 deletions(-)
 create mode 100644 Documentation/bpf/map_cgrp_storage.rst
 create mode 100644 kernel/bpf/bpf_cgrp_storage.c
 create mode 100644 tools/testing/selftests/bpf/prog_tests/cgroup_local_storage.c
 create mode 100644 tools/testing/selftests/bpf/progs/cgroup_local_storage.c
 create mode 100644 tools/testing/selftests/bpf/progs/cgroup_ls_recursion.c

-- 
2.30.2



* [PATCH bpf-next v2 1/6] bpf: Make struct cgroup btf id global
  2022-10-20 22:12 [PATCH bpf-next v2 0/6] bpf: Implement cgroup local storage available to non-cgroup-attached bpf progs Yonghong Song
@ 2022-10-20 22:13 ` Yonghong Song
  2022-10-20 22:13 ` [PATCH bpf-next v2 2/6] bpf: Implement cgroup storage available to non-cgroup-attached bpf progs Yonghong Song
                   ` (4 subsequent siblings)
  5 siblings, 0 replies; 23+ messages in thread
From: Yonghong Song @ 2022-10-20 22:13 UTC (permalink / raw)
  To: bpf
  Cc: Alexei Starovoitov, Andrii Nakryiko, Daniel Borkmann,
	kernel-team, KP Singh, Martin KaFai Lau, Tejun Heo

Make the struct cgroup btf id global so a later patch can reuse
the same btf id.

Signed-off-by: Yonghong Song <yhs@fb.com>
---
 include/linux/btf_ids.h  | 1 +
 kernel/bpf/cgroup_iter.c | 2 +-
 2 files changed, 2 insertions(+), 1 deletion(-)

diff --git a/include/linux/btf_ids.h b/include/linux/btf_ids.h
index 2aea877d644f..c9744efd202f 100644
--- a/include/linux/btf_ids.h
+++ b/include/linux/btf_ids.h
@@ -265,5 +265,6 @@ MAX_BTF_TRACING_TYPE,
 };
 
 extern u32 btf_tracing_ids[];
+extern u32 bpf_cgroup_btf_id[];
 
 #endif
diff --git a/kernel/bpf/cgroup_iter.c b/kernel/bpf/cgroup_iter.c
index 0d200a993489..c6ffc706d583 100644
--- a/kernel/bpf/cgroup_iter.c
+++ b/kernel/bpf/cgroup_iter.c
@@ -157,7 +157,7 @@ static const struct seq_operations cgroup_iter_seq_ops = {
 	.show   = cgroup_iter_seq_show,
 };
 
-BTF_ID_LIST_SINGLE(bpf_cgroup_btf_id, struct, cgroup)
+BTF_ID_LIST_GLOBAL_SINGLE(bpf_cgroup_btf_id, struct, cgroup)
 
 static int cgroup_iter_seq_init(void *priv, struct bpf_iter_aux_info *aux)
 {
-- 
2.30.2



* [PATCH bpf-next v2 2/6] bpf: Implement cgroup storage available to non-cgroup-attached bpf progs
  2022-10-20 22:12 [PATCH bpf-next v2 0/6] bpf: Implement cgroup local storage available to non-cgroup-attached bpf progs Yonghong Song
  2022-10-20 22:13 ` [PATCH bpf-next v2 1/6] bpf: Make struct cgroup btf id global Yonghong Song
@ 2022-10-20 22:13 ` Yonghong Song
  2022-10-21  5:22   ` David Vernet
                     ` (2 more replies)
  2022-10-20 22:13 ` [PATCH bpf-next v2 3/6] libbpf: Support new cgroup local storage Yonghong Song
                   ` (3 subsequent siblings)
  5 siblings, 3 replies; 23+ messages in thread
From: Yonghong Song @ 2022-10-20 22:13 UTC (permalink / raw)
  To: bpf
  Cc: Alexei Starovoitov, Andrii Nakryiko, Daniel Borkmann,
	kernel-team, KP Singh, Martin KaFai Lau, Tejun Heo

Similar to the existing sk/inode/task storage, implement cgroup local storage.

There already exists a local storage implementation for cgroup-attached
bpf programs. See map type BPF_MAP_TYPE_CGROUP_STORAGE and helper
bpf_get_local_storage(). But there are use cases where non-cgroup-attached
bpf progs want to access cgroup local storage data. For example,
a tc egress prog has access to both sk and cgroup. It is possible to emulate
cgroup local storage with sk local storage by storing the data in each socket,
but this is wasteful since many sockets may belong to the same cgroup.
Alternatively, a separate map can be created with the cgroup id as the key,
but this introduces additional overhead to manipulate the new map.
A cgroup local storage, similar to the existing sk/inode/task storage,
addresses this use case.

The life-cycle of the storage is tied to the life-cycle of the
cgroup struct, i.e. the storage is destroyed along with the owning cgroup
via a callback to bpf_cgrp_storage_free() when the cgroup itself
is deleted.

The userspace map operations (lookup, update and delete) use a cgroup fd
as the key.
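
For example (a libbpf-style pseudocode sketch, not compiled here; the cgroup
path and value layout are illustrative, not from this series):

```c
/* Hypothetical userspace usage: the map key is a cgroup *fd*, not a
 * cgroup id, so open the cgroup directory first.
 */
int cg_fd = open("/sys/fs/cgroup/mygroup", O_RDONLY);	/* illustrative path */
struct my_value v = { 0 };				/* illustrative layout */

bpf_map_update_elem(map_fd, &cg_fd, &v, BPF_ANY);	/* pre-allocate/init */
bpf_map_lookup_elem(map_fd, &cg_fd, &v);
bpf_map_delete_elem(map_fd, &cg_fd);
close(cg_fd);
```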

Typically, the following code is used to get the current cgroup:
    struct task_struct *task = bpf_get_current_task_btf();
    ... task->cgroups->dfl_cgrp ...
and in structure task_struct definition:
    struct task_struct {
        ....
        struct css_set __rcu            *cgroups;
        ....
    }
For a sleepable program, accessing task->cgroups is not protected by
rcu_read_lock(), so the current implementation only supports non-sleepable
programs. Supporting sleepable programs is the next step, together with
adding rcu_read_lock() protection for rcu-tagged structures.

Since the map name BPF_MAP_TYPE_CGROUP_STORAGE is already used by the old
cgroup local storage support, the new map name BPF_MAP_TYPE_CGRP_STORAGE is
used for the cgroup storage available to non-cgroup-attached bpf programs.
The old cgroup storage supports the bpf_get_local_storage() helper to get the
cgroup data. The new cgroup storage helper bpf_cgrp_storage_get() provides
similar functionality. While the old cgroup storage pre-allocates storage
memory, the new mechanism can also pre-allocate with a user space
bpf_map_update_elem() call to avoid potential run-time memory allocation
failure. Therefore, the new cgroup storage provides all the functionality of
the old one. So in uapi bpf.h, the old BPF_MAP_TYPE_CGROUP_STORAGE is aliased
to BPF_MAP_TYPE_CGROUP_STORAGE_DEPRECATED to indicate that the old cgroup
storage can be deprecated since the new one provides the same functionality.

Signed-off-by: Yonghong Song <yhs@fb.com>
---
 include/linux/bpf.h            |   3 +
 include/linux/bpf_types.h      |   1 +
 include/linux/cgroup-defs.h    |   4 +
 include/uapi/linux/bpf.h       |  48 +++++-
 kernel/bpf/Makefile            |   2 +-
 kernel/bpf/bpf_cgrp_storage.c  | 276 +++++++++++++++++++++++++++++++++
 kernel/bpf/helpers.c           |   6 +
 kernel/bpf/syscall.c           |   3 +-
 kernel/bpf/verifier.c          |  13 +-
 kernel/cgroup/cgroup.c         |   4 +
 kernel/trace/bpf_trace.c       |   4 +
 scripts/bpf_doc.py             |   2 +
 tools/include/uapi/linux/bpf.h |  48 +++++-
 13 files changed, 409 insertions(+), 5 deletions(-)
 create mode 100644 kernel/bpf/bpf_cgrp_storage.c

diff --git a/include/linux/bpf.h b/include/linux/bpf.h
index 9e7d46d16032..674da3129248 100644
--- a/include/linux/bpf.h
+++ b/include/linux/bpf.h
@@ -2045,6 +2045,7 @@ struct bpf_link *bpf_link_by_id(u32 id);
 
 const struct bpf_func_proto *bpf_base_func_proto(enum bpf_func_id func_id);
 void bpf_task_storage_free(struct task_struct *task);
+void bpf_cgrp_storage_free(struct cgroup *cgroup);
 bool bpf_prog_has_kfunc_call(const struct bpf_prog *prog);
 const struct btf_func_model *
 bpf_jit_find_kfunc_model(const struct bpf_prog *prog,
@@ -2537,6 +2538,8 @@ extern const struct bpf_func_proto bpf_copy_from_user_task_proto;
 extern const struct bpf_func_proto bpf_set_retval_proto;
 extern const struct bpf_func_proto bpf_get_retval_proto;
 extern const struct bpf_func_proto bpf_user_ringbuf_drain_proto;
+extern const struct bpf_func_proto bpf_cgrp_storage_get_proto;
+extern const struct bpf_func_proto bpf_cgrp_storage_delete_proto;
 
 const struct bpf_func_proto *tracing_prog_func_proto(
   enum bpf_func_id func_id, const struct bpf_prog *prog);
diff --git a/include/linux/bpf_types.h b/include/linux/bpf_types.h
index 2c6a4f2562a7..f9d5aa62fed0 100644
--- a/include/linux/bpf_types.h
+++ b/include/linux/bpf_types.h
@@ -90,6 +90,7 @@ BPF_MAP_TYPE(BPF_MAP_TYPE_CGROUP_ARRAY, cgroup_array_map_ops)
 #ifdef CONFIG_CGROUP_BPF
 BPF_MAP_TYPE(BPF_MAP_TYPE_CGROUP_STORAGE, cgroup_storage_map_ops)
 BPF_MAP_TYPE(BPF_MAP_TYPE_PERCPU_CGROUP_STORAGE, cgroup_storage_map_ops)
+BPF_MAP_TYPE(BPF_MAP_TYPE_CGRP_STORAGE, cgrp_storage_map_ops)
 #endif
 BPF_MAP_TYPE(BPF_MAP_TYPE_HASH, htab_map_ops)
 BPF_MAP_TYPE(BPF_MAP_TYPE_PERCPU_HASH, htab_percpu_map_ops)
diff --git a/include/linux/cgroup-defs.h b/include/linux/cgroup-defs.h
index 8f481d1b159a..4a72bc3a0a4e 100644
--- a/include/linux/cgroup-defs.h
+++ b/include/linux/cgroup-defs.h
@@ -504,6 +504,10 @@ struct cgroup {
 	/* Used to store internal freezer state */
 	struct cgroup_freezer_state freezer;
 
+#ifdef CONFIG_CGROUP_BPF
+	struct bpf_local_storage __rcu  *bpf_cgrp_storage;
+#endif
+
 	/* All ancestors including self */
 	struct cgroup *ancestors[];
 };
diff --git a/include/uapi/linux/bpf.h b/include/uapi/linux/bpf.h
index 17f61338f8f8..2d7f79bf3500 100644
--- a/include/uapi/linux/bpf.h
+++ b/include/uapi/linux/bpf.h
@@ -922,7 +922,14 @@ enum bpf_map_type {
 	BPF_MAP_TYPE_CPUMAP,
 	BPF_MAP_TYPE_XSKMAP,
 	BPF_MAP_TYPE_SOCKHASH,
-	BPF_MAP_TYPE_CGROUP_STORAGE,
+	BPF_MAP_TYPE_CGROUP_STORAGE_DEPRECATED,
+	/* BPF_MAP_TYPE_CGROUP_STORAGE is available to bpf programs attaching
+	 * to a cgroup. The newer BPF_MAP_TYPE_CGRP_STORAGE is available to
+	 * both cgroup-attached and other progs and supports all functionality
+	 * provided by BPF_MAP_TYPE_CGROUP_STORAGE. So mark
+	 * BPF_MAP_TYPE_CGROUP_STORAGE deprecated.
+	 */
+	BPF_MAP_TYPE_CGROUP_STORAGE = BPF_MAP_TYPE_CGROUP_STORAGE_DEPRECATED,
 	BPF_MAP_TYPE_REUSEPORT_SOCKARRAY,
 	BPF_MAP_TYPE_PERCPU_CGROUP_STORAGE,
 	BPF_MAP_TYPE_QUEUE,
@@ -935,6 +942,7 @@ enum bpf_map_type {
 	BPF_MAP_TYPE_TASK_STORAGE,
 	BPF_MAP_TYPE_BLOOM_FILTER,
 	BPF_MAP_TYPE_USER_RINGBUF,
+	BPF_MAP_TYPE_CGRP_STORAGE,
 };
 
 /* Note that tracing related programs such as
@@ -5435,6 +5443,42 @@ union bpf_attr {
  *		**-E2BIG** if user-space has tried to publish a sample which is
  *		larger than the size of the ring buffer, or which cannot fit
  *		within a struct bpf_dynptr.
+ *
+ * void *bpf_cgrp_storage_get(struct bpf_map *map, struct cgroup *cgroup, void *value, u64 flags)
+ *	Description
+ *		Get a bpf_local_storage from the *cgroup*.
+ *
+ *		Logically, it could be thought of as getting the value from
+ *		a *map* with *cgroup* as the **key**.  From this
+ *		perspective,  the usage is not much different from
+ *		**bpf_map_lookup_elem**\ (*map*, **&**\ *cgroup*) except this
+ *		helper enforces the key must be a cgroup struct and the map must also
+ *		be a **BPF_MAP_TYPE_CGRP_STORAGE**.
+ *
+ *		Underneath, the value is stored locally at *cgroup* instead of
+ *		the *map*.  The *map* is used as the bpf-local-storage
+ *		"type". The bpf-local-storage "type" (i.e. the *map*) is
+ *		searched against all bpf_local_storage residing at *cgroup*.
+ *
+ *		An optional *flags* (**BPF_LOCAL_STORAGE_GET_F_CREATE**) can be
+ *		used such that a new bpf_local_storage will be
+ *		created if one does not exist.  *value* can be used
+ *		together with **BPF_LOCAL_STORAGE_GET_F_CREATE** to specify
+ *		the initial value of a bpf_local_storage.  If *value* is
+ *		**NULL**, the new bpf_local_storage will be zero initialized.
+ *	Return
+ *		A bpf_local_storage pointer is returned on success.
+ *
+ *		**NULL** if not found or there was an error in adding
+ *		a new bpf_local_storage.
+ *
+ * long bpf_cgrp_storage_delete(struct bpf_map *map, struct cgroup *cgroup)
+ *	Description
+ *		Delete a bpf_local_storage from a *cgroup*.
+ *	Return
+ *		0 on success.
+ *
+ *		**-ENOENT** if the bpf_local_storage cannot be found.
  */
 #define ___BPF_FUNC_MAPPER(FN, ctx...)			\
 	FN(unspec, 0, ##ctx)				\
@@ -5647,6 +5691,8 @@ union bpf_attr {
 	FN(tcp_raw_check_syncookie_ipv6, 207, ##ctx)	\
 	FN(ktime_get_tai_ns, 208, ##ctx)		\
 	FN(user_ringbuf_drain, 209, ##ctx)		\
+	FN(cgrp_storage_get, 210, ##ctx)		\
+	FN(cgrp_storage_delete, 211, ##ctx)		\
 	/* */
 
 /* backwards-compatibility macros for users of __BPF_FUNC_MAPPER that don't
diff --git a/kernel/bpf/Makefile b/kernel/bpf/Makefile
index 341c94f208f4..3a12e6b400a2 100644
--- a/kernel/bpf/Makefile
+++ b/kernel/bpf/Makefile
@@ -25,7 +25,7 @@ ifeq ($(CONFIG_PERF_EVENTS),y)
 obj-$(CONFIG_BPF_SYSCALL) += stackmap.o
 endif
 ifeq ($(CONFIG_CGROUPS),y)
-obj-$(CONFIG_BPF_SYSCALL) += cgroup_iter.o
+obj-$(CONFIG_BPF_SYSCALL) += cgroup_iter.o bpf_cgrp_storage.o
 endif
 obj-$(CONFIG_CGROUP_BPF) += cgroup.o
 ifeq ($(CONFIG_INET),y)
diff --git a/kernel/bpf/bpf_cgrp_storage.c b/kernel/bpf/bpf_cgrp_storage.c
new file mode 100644
index 000000000000..bcc5f0fc20be
--- /dev/null
+++ b/kernel/bpf/bpf_cgrp_storage.c
@@ -0,0 +1,276 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright (c) 2022 Meta Platforms, Inc. and affiliates.
+ */
+
+#include <linux/types.h>
+#include <linux/bpf.h>
+#include <linux/bpf_local_storage.h>
+#include <uapi/linux/btf.h>
+#include <linux/btf_ids.h>
+
+DEFINE_BPF_STORAGE_CACHE(cgroup_cache);
+
+static DEFINE_PER_CPU(int, bpf_cgrp_storage_busy);
+
+static void bpf_cgrp_storage_lock(void)
+{
+	migrate_disable();
+	this_cpu_inc(bpf_cgrp_storage_busy);
+}
+
+static void bpf_cgrp_storage_unlock(void)
+{
+	this_cpu_dec(bpf_cgrp_storage_busy);
+	migrate_enable();
+}
+
+static bool bpf_cgrp_storage_trylock(void)
+{
+	migrate_disable();
+	if (unlikely(this_cpu_inc_return(bpf_cgrp_storage_busy) != 1)) {
+		this_cpu_dec(bpf_cgrp_storage_busy);
+		migrate_enable();
+		return false;
+	}
+	return true;
+}
+
+static struct bpf_local_storage __rcu **cgroup_storage_ptr(void *owner)
+{
+	struct cgroup *cg = owner;
+
+	return &cg->bpf_cgrp_storage;
+}
+
+void bpf_cgrp_storage_free(struct cgroup *cgroup)
+{
+	struct bpf_local_storage *local_storage;
+	struct bpf_local_storage_elem *selem;
+	bool free_cgroup_storage = false;
+	struct hlist_node *n;
+	unsigned long flags;
+
+	rcu_read_lock();
+	local_storage = rcu_dereference(cgroup->bpf_cgrp_storage);
+	if (!local_storage) {
+		rcu_read_unlock();
+		return;
+	}
+
+	/* Neither the bpf_prog nor the bpf-map's syscall
+	 * could be modifying the local_storage->list now.
+	 * Thus, no elem can be added-to or deleted-from the
+	 * local_storage->list by the bpf_prog or by the bpf-map's syscall.
+	 *
+	 * It is racing with bpf_local_storage_map_free() alone
+	 * when unlinking elem from the local_storage->list and
+	 * the map's bucket->list.
+	 */
+	bpf_cgrp_storage_lock();
+	raw_spin_lock_irqsave(&local_storage->lock, flags);
+	hlist_for_each_entry_safe(selem, n, &local_storage->list, snode) {
+		bpf_selem_unlink_map(selem);
+		free_cgroup_storage =
+			bpf_selem_unlink_storage_nolock(local_storage, selem, false, false);
+	}
+	raw_spin_unlock_irqrestore(&local_storage->lock, flags);
+	bpf_cgrp_storage_unlock();
+	rcu_read_unlock();
+
+	/* free_cgroup_storage should always be true as long as
+	 * local_storage->list was non-empty.
+	 */
+	if (free_cgroup_storage)
+		kfree_rcu(local_storage, rcu);
+}
+
+static struct bpf_local_storage_data *
+cgroup_storage_lookup(struct cgroup *cgroup, struct bpf_map *map, bool cacheit_lockit)
+{
+	struct bpf_local_storage *cgroup_storage;
+	struct bpf_local_storage_map *smap;
+
+	cgroup_storage = rcu_dereference_check(cgroup->bpf_cgrp_storage,
+					       bpf_rcu_lock_held());
+	if (!cgroup_storage)
+		return NULL;
+
+	smap = (struct bpf_local_storage_map *)map;
+	return bpf_local_storage_lookup(cgroup_storage, smap, cacheit_lockit);
+}
+
+static void *bpf_cgrp_storage_lookup_elem(struct bpf_map *map, void *key)
+{
+	struct bpf_local_storage_data *sdata;
+	struct cgroup *cgroup;
+	int fd;
+
+	fd = *(int *)key;
+	cgroup = cgroup_get_from_fd(fd);
+	if (IS_ERR(cgroup))
+		return ERR_CAST(cgroup);
+
+	bpf_cgrp_storage_lock();
+	sdata = cgroup_storage_lookup(cgroup, map, true);
+	bpf_cgrp_storage_unlock();
+	cgroup_put(cgroup);
+	return sdata ? sdata->data : NULL;
+}
+
+static int bpf_cgrp_storage_update_elem(struct bpf_map *map, void *key,
+					  void *value, u64 map_flags)
+{
+	struct bpf_local_storage_data *sdata;
+	struct cgroup *cgroup;
+	int fd;
+
+	fd = *(int *)key;
+	cgroup = cgroup_get_from_fd(fd);
+	if (IS_ERR(cgroup))
+		return PTR_ERR(cgroup);
+
+	bpf_cgrp_storage_lock();
+	sdata = bpf_local_storage_update(cgroup, (struct bpf_local_storage_map *)map,
+					 value, map_flags, GFP_ATOMIC);
+	bpf_cgrp_storage_unlock();
+	cgroup_put(cgroup);
+	return PTR_ERR_OR_ZERO(sdata);
+}
+
+static int cgroup_storage_delete(struct cgroup *cgroup, struct bpf_map *map)
+{
+	struct bpf_local_storage_data *sdata;
+
+	sdata = cgroup_storage_lookup(cgroup, map, false);
+	if (!sdata)
+		return -ENOENT;
+
+	bpf_selem_unlink(SELEM(sdata), true);
+	return 0;
+}
+
+static int bpf_cgrp_storage_delete_elem(struct bpf_map *map, void *key)
+{
+	struct cgroup *cgroup;
+	int err, fd;
+
+	fd = *(int *)key;
+	cgroup = cgroup_get_from_fd(fd);
+	if (IS_ERR(cgroup))
+		return PTR_ERR(cgroup);
+
+	bpf_cgrp_storage_lock();
+	err = cgroup_storage_delete(cgroup, map);
+	bpf_cgrp_storage_unlock();
+	cgroup_put(cgroup);
+	return err;
+}
+
+static int notsupp_get_next_key(struct bpf_map *map, void *key, void *next_key)
+{
+	return -ENOTSUPP;
+}
+
+static struct bpf_map *cgroup_storage_map_alloc(union bpf_attr *attr)
+{
+	struct bpf_local_storage_map *smap;
+
+	smap = bpf_local_storage_map_alloc(attr);
+	if (IS_ERR(smap))
+		return ERR_CAST(smap);
+
+	smap->cache_idx = bpf_local_storage_cache_idx_get(&cgroup_cache);
+	return &smap->map;
+}
+
+static void cgroup_storage_map_free(struct bpf_map *map)
+{
+	struct bpf_local_storage_map *smap;
+
+	smap = (struct bpf_local_storage_map *)map;
+	bpf_local_storage_cache_idx_free(&cgroup_cache, smap->cache_idx);
+	bpf_local_storage_map_free(smap, NULL);
+}
+
+/* *gfp_flags* is a hidden argument provided by the verifier */
+BPF_CALL_5(bpf_cgrp_storage_get, struct bpf_map *, map, struct cgroup *, cgroup,
+	   void *, value, u64, flags, gfp_t, gfp_flags)
+{
+	struct bpf_local_storage_data *sdata;
+
+	WARN_ON_ONCE(!bpf_rcu_lock_held());
+	if (flags & ~(BPF_LOCAL_STORAGE_GET_F_CREATE))
+		return (unsigned long)NULL;
+
+	if (!cgroup)
+		return (unsigned long)NULL;
+
+	if (!bpf_cgrp_storage_trylock())
+		return (unsigned long)NULL;
+
+	sdata = cgroup_storage_lookup(cgroup, map, true);
+	if (sdata)
+		goto unlock;
+
+	/* only allocate new storage, when the cgroup is refcounted */
+	if (!percpu_ref_is_dying(&cgroup->self.refcnt) &&
+	    (flags & BPF_LOCAL_STORAGE_GET_F_CREATE))
+		sdata = bpf_local_storage_update(cgroup, (struct bpf_local_storage_map *)map,
+						 value, BPF_NOEXIST, gfp_flags);
+
+unlock:
+	bpf_cgrp_storage_unlock();
+	return IS_ERR_OR_NULL(sdata) ? (unsigned long)NULL : (unsigned long)sdata->data;
+}
+
+BPF_CALL_2(bpf_cgrp_storage_delete, struct bpf_map *, map, struct cgroup *, cgroup)
+{
+	int ret;
+
+	WARN_ON_ONCE(!bpf_rcu_lock_held());
+	if (!cgroup)
+		return -EINVAL;
+
+	if (!bpf_cgrp_storage_trylock())
+		return -EBUSY;
+
+	ret = cgroup_storage_delete(cgroup, map);
+	bpf_cgrp_storage_unlock();
+	return ret;
+}
+
+BTF_ID_LIST_SINGLE(cgroup_storage_map_btf_ids, struct, bpf_local_storage_map)
+const struct bpf_map_ops cgrp_storage_map_ops = {
+	.map_meta_equal = bpf_map_meta_equal,
+	.map_alloc_check = bpf_local_storage_map_alloc_check,
+	.map_alloc = cgroup_storage_map_alloc,
+	.map_free = cgroup_storage_map_free,
+	.map_get_next_key = notsupp_get_next_key,
+	.map_lookup_elem = bpf_cgrp_storage_lookup_elem,
+	.map_update_elem = bpf_cgrp_storage_update_elem,
+	.map_delete_elem = bpf_cgrp_storage_delete_elem,
+	.map_check_btf = bpf_local_storage_map_check_btf,
+	.map_btf_id = &cgroup_storage_map_btf_ids[0],
+	.map_owner_storage_ptr = cgroup_storage_ptr,
+};
+
+const struct bpf_func_proto bpf_cgrp_storage_get_proto = {
+	.func		= bpf_cgrp_storage_get,
+	.gpl_only	= false,
+	.ret_type	= RET_PTR_TO_MAP_VALUE_OR_NULL,
+	.arg1_type	= ARG_CONST_MAP_PTR,
+	.arg2_type	= ARG_PTR_TO_BTF_ID,
+	.arg2_btf_id	= &bpf_cgroup_btf_id[0],
+	.arg3_type	= ARG_PTR_TO_MAP_VALUE_OR_NULL,
+	.arg4_type	= ARG_ANYTHING,
+};
+
+const struct bpf_func_proto bpf_cgrp_storage_delete_proto = {
+	.func		= bpf_cgrp_storage_delete,
+	.gpl_only	= false,
+	.ret_type	= RET_INTEGER,
+	.arg1_type	= ARG_CONST_MAP_PTR,
+	.arg2_type	= ARG_PTR_TO_BTF_ID,
+	.arg2_btf_id	= &bpf_cgroup_btf_id[0],
+};
diff --git a/kernel/bpf/helpers.c b/kernel/bpf/helpers.c
index a6b04faed282..124fd199ce5c 100644
--- a/kernel/bpf/helpers.c
+++ b/kernel/bpf/helpers.c
@@ -1663,6 +1663,12 @@ bpf_base_func_proto(enum bpf_func_id func_id)
 		return &bpf_dynptr_write_proto;
 	case BPF_FUNC_dynptr_data:
 		return &bpf_dynptr_data_proto;
+#ifdef CONFIG_CGROUPS
+	case BPF_FUNC_cgrp_storage_get:
+		return &bpf_cgrp_storage_get_proto;
+	case BPF_FUNC_cgrp_storage_delete:
+		return &bpf_cgrp_storage_delete_proto;
+#endif
 	default:
 		break;
 	}
diff --git a/kernel/bpf/syscall.c b/kernel/bpf/syscall.c
index 7b373a5e861f..b95c276f92e3 100644
--- a/kernel/bpf/syscall.c
+++ b/kernel/bpf/syscall.c
@@ -1016,7 +1016,8 @@ static int map_check_btf(struct bpf_map *map, const struct btf *btf,
 		    map->map_type != BPF_MAP_TYPE_CGROUP_STORAGE &&
 		    map->map_type != BPF_MAP_TYPE_SK_STORAGE &&
 		    map->map_type != BPF_MAP_TYPE_INODE_STORAGE &&
-		    map->map_type != BPF_MAP_TYPE_TASK_STORAGE)
+		    map->map_type != BPF_MAP_TYPE_TASK_STORAGE &&
+		    map->map_type != BPF_MAP_TYPE_CGRP_STORAGE)
 			return -ENOTSUPP;
 		if (map->spin_lock_off + sizeof(struct bpf_spin_lock) >
 		    map->value_size) {
diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index 6f6d2d511c06..82bb18d7e881 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -6360,6 +6360,11 @@ static int check_map_func_compatibility(struct bpf_verifier_env *env,
 		    func_id != BPF_FUNC_task_storage_delete)
 			goto error;
 		break;
+	case BPF_MAP_TYPE_CGRP_STORAGE:
+		if (func_id != BPF_FUNC_cgrp_storage_get &&
+		    func_id != BPF_FUNC_cgrp_storage_delete)
+			goto error;
+		break;
 	case BPF_MAP_TYPE_BLOOM_FILTER:
 		if (func_id != BPF_FUNC_map_peek_elem &&
 		    func_id != BPF_FUNC_map_push_elem)
@@ -6472,6 +6477,11 @@ static int check_map_func_compatibility(struct bpf_verifier_env *env,
 		if (map->map_type != BPF_MAP_TYPE_TASK_STORAGE)
 			goto error;
 		break;
+	case BPF_FUNC_cgrp_storage_get:
+	case BPF_FUNC_cgrp_storage_delete:
+		if (map->map_type != BPF_MAP_TYPE_CGRP_STORAGE)
+			goto error;
+		break;
 	default:
 		break;
 	}
@@ -14149,7 +14159,8 @@ static int do_misc_fixups(struct bpf_verifier_env *env)
 
 		if (insn->imm == BPF_FUNC_task_storage_get ||
 		    insn->imm == BPF_FUNC_sk_storage_get ||
-		    insn->imm == BPF_FUNC_inode_storage_get) {
+		    insn->imm == BPF_FUNC_inode_storage_get ||
+		    insn->imm == BPF_FUNC_cgrp_storage_get) {
 			if (env->prog->aux->sleepable)
 				insn_buf[0] = BPF_MOV64_IMM(BPF_REG_5, (__force __s32)GFP_KERNEL);
 			else
diff --git a/kernel/cgroup/cgroup.c b/kernel/cgroup/cgroup.c
index 764bdd5fd8d1..7e80e15fae4e 100644
--- a/kernel/cgroup/cgroup.c
+++ b/kernel/cgroup/cgroup.c
@@ -5227,6 +5227,10 @@ static void css_free_rwork_fn(struct work_struct *work)
 	struct cgroup_subsys *ss = css->ss;
 	struct cgroup *cgrp = css->cgroup;
 
+#ifdef CONFIG_CGROUP_BPF
+	bpf_cgrp_storage_free(cgrp);
+#endif
+
 	percpu_ref_exit(&css->refcnt);
 
 	if (ss) {
diff --git a/kernel/trace/bpf_trace.c b/kernel/trace/bpf_trace.c
index 49fb9ec8366d..0ddf0834b1b8 100644
--- a/kernel/trace/bpf_trace.c
+++ b/kernel/trace/bpf_trace.c
@@ -1454,6 +1454,10 @@ bpf_tracing_func_proto(enum bpf_func_id func_id, const struct bpf_prog *prog)
 		return &bpf_get_current_cgroup_id_proto;
 	case BPF_FUNC_get_current_ancestor_cgroup_id:
 		return &bpf_get_current_ancestor_cgroup_id_proto;
+	case BPF_FUNC_cgrp_storage_get:
+		return &bpf_cgrp_storage_get_proto;
+	case BPF_FUNC_cgrp_storage_delete:
+		return &bpf_cgrp_storage_delete_proto;
 #endif
 	case BPF_FUNC_send_signal:
 		return &bpf_send_signal_proto;
diff --git a/scripts/bpf_doc.py b/scripts/bpf_doc.py
index c0e6690be82a..fdb0aff8cb5a 100755
--- a/scripts/bpf_doc.py
+++ b/scripts/bpf_doc.py
@@ -685,6 +685,7 @@ class PrinterHelpers(Printer):
             'struct udp6_sock',
             'struct unix_sock',
             'struct task_struct',
+            'struct cgroup',
 
             'struct __sk_buff',
             'struct sk_msg_md',
@@ -742,6 +743,7 @@ class PrinterHelpers(Printer):
             'struct udp6_sock',
             'struct unix_sock',
             'struct task_struct',
+            'struct cgroup',
             'struct path',
             'struct btf_ptr',
             'struct inode',
diff --git a/tools/include/uapi/linux/bpf.h b/tools/include/uapi/linux/bpf.h
index 17f61338f8f8..2d7f79bf3500 100644
--- a/tools/include/uapi/linux/bpf.h
+++ b/tools/include/uapi/linux/bpf.h
@@ -922,7 +922,14 @@ enum bpf_map_type {
 	BPF_MAP_TYPE_CPUMAP,
 	BPF_MAP_TYPE_XSKMAP,
 	BPF_MAP_TYPE_SOCKHASH,
-	BPF_MAP_TYPE_CGROUP_STORAGE,
+	BPF_MAP_TYPE_CGROUP_STORAGE_DEPRECATED,
+	/* BPF_MAP_TYPE_CGROUP_STORAGE is available to bpf programs attaching
+	 * to a cgroup. The newer BPF_MAP_TYPE_CGRP_STORAGE is available to
+	 * both cgroup-attached and other progs and supports all functionality
+	 * provided by BPF_MAP_TYPE_CGROUP_STORAGE. So mark
+	 * BPF_MAP_TYPE_CGROUP_STORAGE deprecated.
+	 */
+	BPF_MAP_TYPE_CGROUP_STORAGE = BPF_MAP_TYPE_CGROUP_STORAGE_DEPRECATED,
 	BPF_MAP_TYPE_REUSEPORT_SOCKARRAY,
 	BPF_MAP_TYPE_PERCPU_CGROUP_STORAGE,
 	BPF_MAP_TYPE_QUEUE,
@@ -935,6 +942,7 @@ enum bpf_map_type {
 	BPF_MAP_TYPE_TASK_STORAGE,
 	BPF_MAP_TYPE_BLOOM_FILTER,
 	BPF_MAP_TYPE_USER_RINGBUF,
+	BPF_MAP_TYPE_CGRP_STORAGE,
 };
 
 /* Note that tracing related programs such as
@@ -5435,6 +5443,42 @@ union bpf_attr {
  *		**-E2BIG** if user-space has tried to publish a sample which is
  *		larger than the size of the ring buffer, or which cannot fit
  *		within a struct bpf_dynptr.
+ *
+ * void *bpf_cgrp_storage_get(struct bpf_map *map, struct cgroup *cgroup, void *value, u64 flags)
+ *	Description
+ *		Get a bpf_local_storage from the *cgroup*.
+ *
+ *		Logically, it could be thought of as getting the value from
+ *		a *map* with *cgroup* as the **key**.  From this
+ *		perspective,  the usage is not much different from
+ *		**bpf_map_lookup_elem**\ (*map*, **&**\ *cgroup*) except this
+ *		helper enforces the key must be a cgroup struct and the map must also
+ *		be a **BPF_MAP_TYPE_CGRP_STORAGE**.
+ *
+ *		Underneath, the value is stored locally at *cgroup* instead of
+ *		the *map*.  The *map* is used as the bpf-local-storage
+ *		"type". The bpf-local-storage "type" (i.e. the *map*) is
+ *		searched against all bpf_local_storage residing at *cgroup*.
+ *
+ *		An optional *flags* (**BPF_LOCAL_STORAGE_GET_F_CREATE**) can be
+ *		used such that a new bpf_local_storage will be
+ *		created if one does not exist.  *value* can be used
+ *		together with **BPF_LOCAL_STORAGE_GET_F_CREATE** to specify
+ *		the initial value of a bpf_local_storage.  If *value* is
+ *		**NULL**, the new bpf_local_storage will be zero initialized.
+ *	Return
+ *		A bpf_local_storage pointer is returned on success.
+ *
+ *		**NULL** if not found or there was an error in adding
+ *		a new bpf_local_storage.
+ *
+ * long bpf_cgrp_storage_delete(struct bpf_map *map, struct cgroup *cgroup)
+ *	Description
+ *		Delete a bpf_local_storage from a *cgroup*.
+ *	Return
+ *		0 on success.
+ *
+ *		**-ENOENT** if the bpf_local_storage cannot be found.
  */
 #define ___BPF_FUNC_MAPPER(FN, ctx...)			\
 	FN(unspec, 0, ##ctx)				\
@@ -5647,6 +5691,8 @@ union bpf_attr {
 	FN(tcp_raw_check_syncookie_ipv6, 207, ##ctx)	\
 	FN(ktime_get_tai_ns, 208, ##ctx)		\
 	FN(user_ringbuf_drain, 209, ##ctx)		\
+	FN(cgrp_storage_get, 210, ##ctx)		\
+	FN(cgrp_storage_delete, 211, ##ctx)		\
 	/* */
 
 /* backwards-compatibility macros for users of __BPF_FUNC_MAPPER that don't
-- 
2.30.2



* [PATCH bpf-next v2 3/6] libbpf: Support new cgroup local storage
  2022-10-20 22:12 [PATCH bpf-next v2 0/6] bpf: Implement cgroup local storage available to non-cgroup-attached bpf progs Yonghong Song
  2022-10-20 22:13 ` [PATCH bpf-next v2 1/6] bpf: Make struct cgroup btf id global Yonghong Song
  2022-10-20 22:13 ` [PATCH bpf-next v2 2/6] bpf: Implement cgroup storage available to non-cgroup-attached bpf progs Yonghong Song
@ 2022-10-20 22:13 ` Yonghong Song
  2022-10-21 23:10   ` Andrii Nakryiko
  2022-10-20 22:13 ` [PATCH bpf-next v2 4/6] bpftool: " Yonghong Song
                   ` (2 subsequent siblings)
  5 siblings, 1 reply; 23+ messages in thread
From: Yonghong Song @ 2022-10-20 22:13 UTC (permalink / raw)
  To: bpf
  Cc: Alexei Starovoitov, Andrii Nakryiko, Daniel Borkmann,
	kernel-team, KP Singh, Martin KaFai Lau, Tejun Heo

Add support for new cgroup local storage.

Signed-off-by: Yonghong Song <yhs@fb.com>
---
 tools/lib/bpf/libbpf.c        | 1 +
 tools/lib/bpf/libbpf_probes.c | 1 +
 2 files changed, 2 insertions(+)

diff --git a/tools/lib/bpf/libbpf.c b/tools/lib/bpf/libbpf.c
index 027fd9565c16..5d7819edf074 100644
--- a/tools/lib/bpf/libbpf.c
+++ b/tools/lib/bpf/libbpf.c
@@ -164,6 +164,7 @@ static const char * const map_type_name[] = {
 	[BPF_MAP_TYPE_TASK_STORAGE]		= "task_storage",
 	[BPF_MAP_TYPE_BLOOM_FILTER]		= "bloom_filter",
 	[BPF_MAP_TYPE_USER_RINGBUF]             = "user_ringbuf",
+	[BPF_MAP_TYPE_CGRP_STORAGE]		= "cgrp_storage",
 };
 
 static const char * const prog_type_name[] = {
diff --git a/tools/lib/bpf/libbpf_probes.c b/tools/lib/bpf/libbpf_probes.c
index f3a8e8e74eb8..bdb83d467f9a 100644
--- a/tools/lib/bpf/libbpf_probes.c
+++ b/tools/lib/bpf/libbpf_probes.c
@@ -221,6 +221,7 @@ static int probe_map_create(enum bpf_map_type map_type)
 	case BPF_MAP_TYPE_SK_STORAGE:
 	case BPF_MAP_TYPE_INODE_STORAGE:
 	case BPF_MAP_TYPE_TASK_STORAGE:
+	case BPF_MAP_TYPE_CGRP_STORAGE:
 		btf_key_type_id = 1;
 		btf_value_type_id = 3;
 		value_size = 8;
-- 
2.30.2


^ permalink raw reply related	[flat|nested] 23+ messages in thread

* [PATCH bpf-next v2 4/6] bpftool: Support new cgroup local storage
  2022-10-20 22:12 [PATCH bpf-next v2 0/6] bpf: Implement cgroup local storage available to non-cgroup-attached bpf progs Yonghong Song
                   ` (2 preceding siblings ...)
  2022-10-20 22:13 ` [PATCH bpf-next v2 3/6] libbpf: Support new cgroup local storage Yonghong Song
@ 2022-10-20 22:13 ` Yonghong Song
  2022-10-20 22:13 ` [PATCH bpf-next v2 5/6] selftests/bpf: Add selftests for " Yonghong Song
  2022-10-20 22:13 ` [PATCH bpf-next v2 6/6] docs/bpf: Add documentation for map type BPF_MAP_TYPE_CGRP_STORAGE Yonghong Song
  5 siblings, 0 replies; 23+ messages in thread
From: Yonghong Song @ 2022-10-20 22:13 UTC (permalink / raw)
  To: bpf
  Cc: Alexei Starovoitov, Andrii Nakryiko, Daniel Borkmann,
	kernel-team, KP Singh, Martin KaFai Lau, Tejun Heo,
	Quentin Monnet

Add support for the new cgroup local storage.

Acked-by: Quentin Monnet <quentin@isovalent.com>
Signed-off-by: Yonghong Song <yhs@fb.com>
---
 tools/bpf/bpftool/Documentation/bpftool-map.rst | 2 +-
 tools/bpf/bpftool/map.c                         | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/tools/bpf/bpftool/Documentation/bpftool-map.rst b/tools/bpf/bpftool/Documentation/bpftool-map.rst
index 7f3b67a8b48f..11250c4734fe 100644
--- a/tools/bpf/bpftool/Documentation/bpftool-map.rst
+++ b/tools/bpf/bpftool/Documentation/bpftool-map.rst
@@ -55,7 +55,7 @@ MAP COMMANDS
 |		| **devmap** | **devmap_hash** | **sockmap** | **cpumap** | **xskmap** | **sockhash**
 |		| **cgroup_storage** | **reuseport_sockarray** | **percpu_cgroup_storage**
 |		| **queue** | **stack** | **sk_storage** | **struct_ops** | **ringbuf** | **inode_storage**
-|		| **task_storage** | **bloom_filter** | **user_ringbuf** }
+|		| **task_storage** | **bloom_filter** | **user_ringbuf** | **cgrp_storage** }
 
 DESCRIPTION
 ===========
diff --git a/tools/bpf/bpftool/map.c b/tools/bpf/bpftool/map.c
index 9a6ca9f31133..3a2a89912637 100644
--- a/tools/bpf/bpftool/map.c
+++ b/tools/bpf/bpftool/map.c
@@ -1459,7 +1459,7 @@ static int do_help(int argc, char **argv)
 		"                 devmap | devmap_hash | sockmap | cpumap | xskmap | sockhash |\n"
 		"                 cgroup_storage | reuseport_sockarray | percpu_cgroup_storage |\n"
 		"                 queue | stack | sk_storage | struct_ops | ringbuf | inode_storage |\n"
-		"                 task_storage | bloom_filter | user_ringbuf }\n"
+		"                 task_storage | bloom_filter | user_ringbuf | cgrp_storage }\n"
 		"       " HELP_SPEC_OPTIONS " |\n"
 		"                    {-f|--bpffs} | {-n|--nomount} }\n"
 		"",
-- 
2.30.2


^ permalink raw reply related	[flat|nested] 23+ messages in thread

* [PATCH bpf-next v2 5/6] selftests/bpf: Add selftests for cgroup local storage
  2022-10-20 22:12 [PATCH bpf-next v2 0/6] bpf: Implement cgroup local storage available to non-cgroup-attached bpf progs Yonghong Song
                   ` (3 preceding siblings ...)
  2022-10-20 22:13 ` [PATCH bpf-next v2 4/6] bpftool: " Yonghong Song
@ 2022-10-20 22:13 ` Yonghong Song
  2022-10-20 22:13 ` [PATCH bpf-next v2 6/6] docs/bpf: Add documentation for map type BPF_MAP_TYPE_CGRP_STORAGE Yonghong Song
  5 siblings, 0 replies; 23+ messages in thread
From: Yonghong Song @ 2022-10-20 22:13 UTC (permalink / raw)
  To: bpf
  Cc: Alexei Starovoitov, Andrii Nakryiko, Daniel Borkmann,
	kernel-team, KP Singh, Martin KaFai Lau, Tejun Heo

Add two tests for cgroup local storage: one to test bpf program helpers
and user space map APIs, and the other to verify that recursive fentry
triggering does not deadlock.

Signed-off-by: Yonghong Song <yhs@fb.com>
---
 .../bpf/prog_tests/cgroup_local_storage.c     | 92 +++++++++++++++++++
 .../bpf/progs/cgroup_local_storage.c          | 88 ++++++++++++++++++
 .../selftests/bpf/progs/cgroup_ls_recursion.c | 70 ++++++++++++++
 3 files changed, 250 insertions(+)
 create mode 100644 tools/testing/selftests/bpf/prog_tests/cgroup_local_storage.c
 create mode 100644 tools/testing/selftests/bpf/progs/cgroup_local_storage.c
 create mode 100644 tools/testing/selftests/bpf/progs/cgroup_ls_recursion.c

diff --git a/tools/testing/selftests/bpf/prog_tests/cgroup_local_storage.c b/tools/testing/selftests/bpf/prog_tests/cgroup_local_storage.c
new file mode 100644
index 000000000000..69b1abda0d42
--- /dev/null
+++ b/tools/testing/selftests/bpf/prog_tests/cgroup_local_storage.c
@@ -0,0 +1,92 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright (c) 2022 Meta Platforms, Inc. and affiliates.*/
+
+#define _GNU_SOURCE
+#include <unistd.h>
+#include <sys/syscall.h>
+#include <sys/types.h>
+#include <test_progs.h>
+#include "cgroup_local_storage.skel.h"
+#include "cgroup_ls_recursion.skel.h"
+
+static void test_sys_enter_exit(int cgroup_fd)
+{
+	struct cgroup_local_storage *skel;
+	long val1 = 1, val2 = 0;
+	int err;
+
+	skel = cgroup_local_storage__open_and_load();
+	if (!ASSERT_OK_PTR(skel, "skel_open_and_load"))
+		return;
+
+	/* populate a value in cgrp_storage_2 */
+	err = bpf_map_update_elem(bpf_map__fd(skel->maps.cgrp_storage_2), &cgroup_fd, &val1, BPF_ANY);
+	if (!ASSERT_OK(err, "map_update_elem"))
+		goto out;
+
+	/* check value */
+	err = bpf_map_lookup_elem(bpf_map__fd(skel->maps.cgrp_storage_2), &cgroup_fd, &val2);
+	if (!ASSERT_OK(err, "map_lookup_elem"))
+		goto out;
+	if (!ASSERT_EQ(val2, 1, "map_lookup_elem, invalid val"))
+		goto out;
+
+	/* delete value */
+	err = bpf_map_delete_elem(bpf_map__fd(skel->maps.cgrp_storage_2), &cgroup_fd);
+	if (!ASSERT_OK(err, "map_delete_elem"))
+		goto out;
+
+	skel->bss->target_pid = syscall(SYS_gettid);
+
+	err = cgroup_local_storage__attach(skel);
+	if (!ASSERT_OK(err, "skel_attach"))
+		goto out;
+
+	syscall(SYS_gettid);
+	syscall(SYS_gettid);
+
+	skel->bss->target_pid = 0;
+
+	/* 3x syscalls: 1x attach and 2x gettid */
+	ASSERT_EQ(skel->bss->enter_cnt, 3, "enter_cnt");
+	ASSERT_EQ(skel->bss->exit_cnt, 3, "exit_cnt");
+	ASSERT_EQ(skel->bss->mismatch_cnt, 0, "mismatch_cnt");
+out:
+	cgroup_local_storage__destroy(skel);
+}
+
+static void test_recursion(int cgroup_fd)
+{
+	struct cgroup_ls_recursion *skel;
+	int err;
+
+	skel = cgroup_ls_recursion__open_and_load();
+	if (!ASSERT_OK_PTR(skel, "skel_open_and_load"))
+		return;
+
+	err = cgroup_ls_recursion__attach(skel);
+	if (!ASSERT_OK(err, "skel_attach"))
+		goto out;
+
+	/* trigger sys_enter, make sure it does not cause deadlock */
+	syscall(SYS_gettid);
+
+out:
+	cgroup_ls_recursion__destroy(skel);
+}
+
+void test_cgroup_local_storage(void)
+{
+	int cgroup_fd;
+
+	cgroup_fd = test__join_cgroup("/cgroup_local_storage");
+	if (!ASSERT_GE(cgroup_fd, 0, "join_cgroup /cgroup_local_storage"))
+		return;
+
+	if (test__start_subtest("sys_enter_exit"))
+		test_sys_enter_exit(cgroup_fd);
+	if (test__start_subtest("recursion"))
+		test_recursion(cgroup_fd);
+
+	close(cgroup_fd);
+}
diff --git a/tools/testing/selftests/bpf/progs/cgroup_local_storage.c b/tools/testing/selftests/bpf/progs/cgroup_local_storage.c
new file mode 100644
index 000000000000..dee63d4c1512
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/cgroup_local_storage.c
@@ -0,0 +1,88 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright (c) 2022 Meta Platforms, Inc. and affiliates. */
+
+#include "vmlinux.h"
+#include <bpf/bpf_helpers.h>
+#include <bpf/bpf_tracing.h>
+
+char _license[] SEC("license") = "GPL";
+
+struct {
+	__uint(type, BPF_MAP_TYPE_CGRP_STORAGE);
+	__uint(map_flags, BPF_F_NO_PREALLOC);
+	__type(key, int);
+	__type(value, long);
+} cgrp_storage_1 SEC(".maps");
+
+struct {
+	__uint(type, BPF_MAP_TYPE_CGRP_STORAGE);
+	__uint(map_flags, BPF_F_NO_PREALLOC);
+	__type(key, int);
+	__type(value, long);
+} cgrp_storage_2 SEC(".maps");
+
+#define MAGIC_VALUE 0xabcd1234
+
+pid_t target_pid = 0;
+int mismatch_cnt = 0;
+int enter_cnt = 0;
+int exit_cnt = 0;
+
+SEC("tp_btf/sys_enter")
+int BPF_PROG(on_enter, struct pt_regs *regs, long id)
+{
+	struct task_struct *task;
+	long *ptr;
+	int err;
+
+	task = bpf_get_current_task_btf();
+	if (task->pid != target_pid)
+		return 0;
+
+	/* populate value 0 */
+	ptr = bpf_cgrp_storage_get(&cgrp_storage_1, task->cgroups->dfl_cgrp, 0,
+				   BPF_LOCAL_STORAGE_GET_F_CREATE);
+	if (!ptr)
+		return 0;
+
+	/* delete value 0 */
+	err = bpf_cgrp_storage_delete(&cgrp_storage_1, task->cgroups->dfl_cgrp);
+	if (err)
+		return 0;
+
+	/* value is not available */
+	ptr = bpf_cgrp_storage_get(&cgrp_storage_1, task->cgroups->dfl_cgrp, 0, 0);
+	if (ptr)
+		return 0;
+
+	/* re-populate the value */
+	ptr = bpf_cgrp_storage_get(&cgrp_storage_1, task->cgroups->dfl_cgrp, 0,
+				   BPF_LOCAL_STORAGE_GET_F_CREATE);
+	if (!ptr)
+		return 0;
+	__sync_fetch_and_add(&enter_cnt, 1);
+	*ptr = MAGIC_VALUE + enter_cnt;
+
+	return 0;
+}
+
+SEC("tp_btf/sys_exit")
+int BPF_PROG(on_exit, struct pt_regs *regs, long id)
+{
+	struct task_struct *task;
+	long *ptr;
+
+	task = bpf_get_current_task_btf();
+	if (task->pid != target_pid)
+		return 0;
+
+	ptr = bpf_cgrp_storage_get(&cgrp_storage_1, task->cgroups->dfl_cgrp, 0,
+				   BPF_LOCAL_STORAGE_GET_F_CREATE);
+	if (!ptr)
+		return 0;
+
+	__sync_fetch_and_add(&exit_cnt, 1);
+	if (*ptr != MAGIC_VALUE + exit_cnt)
+		__sync_fetch_and_add(&mismatch_cnt, 1);
+	return 0;
+}
diff --git a/tools/testing/selftests/bpf/progs/cgroup_ls_recursion.c b/tools/testing/selftests/bpf/progs/cgroup_ls_recursion.c
new file mode 100644
index 000000000000..a043d8fefdac
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/cgroup_ls_recursion.c
@@ -0,0 +1,70 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright (c) 2022 Meta Platforms, Inc. and affiliates. */
+
+#include "vmlinux.h"
+#include <bpf/bpf_helpers.h>
+#include <bpf/bpf_tracing.h>
+
+char _license[] SEC("license") = "GPL";
+
+struct {
+	__uint(type, BPF_MAP_TYPE_CGRP_STORAGE);
+	__uint(map_flags, BPF_F_NO_PREALLOC);
+	__type(key, int);
+	__type(value, long);
+} map_a SEC(".maps");
+
+struct {
+	__uint(type, BPF_MAP_TYPE_CGRP_STORAGE);
+	__uint(map_flags, BPF_F_NO_PREALLOC);
+	__type(key, int);
+	__type(value, long);
+} map_b SEC(".maps");
+
+SEC("fentry/bpf_local_storage_lookup")
+int BPF_PROG(on_lookup)
+{
+	struct task_struct *task = bpf_get_current_task_btf();
+
+	bpf_cgrp_storage_delete(&map_a, task->cgroups->dfl_cgrp);
+	bpf_cgrp_storage_delete(&map_b, task->cgroups->dfl_cgrp);
+	return 0;
+}
+
+SEC("fentry/bpf_local_storage_update")
+int BPF_PROG(on_update)
+{
+	struct task_struct *task = bpf_get_current_task_btf();
+	long *ptr;
+
+	ptr = bpf_cgrp_storage_get(&map_a, task->cgroups->dfl_cgrp, 0,
+				   BPF_LOCAL_STORAGE_GET_F_CREATE);
+	if (ptr)
+		*ptr += 1;
+
+	ptr = bpf_cgrp_storage_get(&map_b, task->cgroups->dfl_cgrp, 0,
+				   BPF_LOCAL_STORAGE_GET_F_CREATE);
+	if (ptr)
+		*ptr += 1;
+
+	return 0;
+}
+
+SEC("tp_btf/sys_enter")
+int BPF_PROG(on_enter, struct pt_regs *regs, long id)
+{
+	struct task_struct *task;
+	long *ptr;
+
+	task = bpf_get_current_task_btf();
+	ptr = bpf_cgrp_storage_get(&map_a, task->cgroups->dfl_cgrp, 0,
+				   BPF_LOCAL_STORAGE_GET_F_CREATE);
+	if (ptr)
+		*ptr = 200;
+
+	ptr = bpf_cgrp_storage_get(&map_b, task->cgroups->dfl_cgrp, 0,
+				   BPF_LOCAL_STORAGE_GET_F_CREATE);
+	if (ptr)
+		*ptr = 100;
+	return 0;
+}
-- 
2.30.2


^ permalink raw reply related	[flat|nested] 23+ messages in thread

* [PATCH bpf-next v2 6/6] docs/bpf: Add documentation for map type BPF_MAP_TYPE_CGRP_STORAGE
  2022-10-20 22:12 [PATCH bpf-next v2 0/6] bpf: Implement cgroup local storage available to non-cgroup-attached bpf progs Yonghong Song
                   ` (4 preceding siblings ...)
  2022-10-20 22:13 ` [PATCH bpf-next v2 5/6] selftests/bpf: Add selftests for " Yonghong Song
@ 2022-10-20 22:13 ` Yonghong Song
  2022-10-21  7:12   ` David Vernet
  5 siblings, 1 reply; 23+ messages in thread
From: Yonghong Song @ 2022-10-20 22:13 UTC (permalink / raw)
  To: bpf
  Cc: Alexei Starovoitov, Andrii Nakryiko, Daniel Borkmann,
	kernel-team, KP Singh, Martin KaFai Lau, Tejun Heo

Add some descriptions and examples for BPF_MAP_TYPE_CGRP_STORAGE.
Also illustrate the major differences between BPF_MAP_TYPE_CGRP_STORAGE
and BPF_MAP_TYPE_CGROUP_STORAGE, and recommend using
BPF_MAP_TYPE_CGRP_STORAGE instead of BPF_MAP_TYPE_CGROUP_STORAGE.

Signed-off-by: Yonghong Song <yhs@fb.com>
---
 Documentation/bpf/map_cgrp_storage.rst | 104 +++++++++++++++++++++++++
 1 file changed, 104 insertions(+)
 create mode 100644 Documentation/bpf/map_cgrp_storage.rst

diff --git a/Documentation/bpf/map_cgrp_storage.rst b/Documentation/bpf/map_cgrp_storage.rst
new file mode 100644
index 000000000000..15691aab7fc7
--- /dev/null
+++ b/Documentation/bpf/map_cgrp_storage.rst
@@ -0,0 +1,104 @@
+.. SPDX-License-Identifier: GPL-2.0-only
+.. Copyright (C) 2022 Meta Platforms, Inc. and affiliates.
+
+===========================
+BPF_MAP_TYPE_CGRP_STORAGE
+===========================
+
+The ``BPF_MAP_TYPE_CGRP_STORAGE`` map type represents a fixed-size local
+storage for cgroups. It is only available on kernels built with
+``CONFIG_CGROUP_BPF``, the same Kconfig option that gates the programs
+that can use it. The data for a particular cgroup can be retrieved by
+looking up the map with that cgroup as the key.
+
+This document describes the usage and semantics of the
+``BPF_MAP_TYPE_CGRP_STORAGE`` map type.
+
+Usage
+=====
+
+The map key size must be ``sizeof(int)``, and from user space the key is
+a cgroup fd. To access the storage in a program, use ``bpf_cgrp_storage_get``::
+
+    void *bpf_cgrp_storage_get(struct bpf_map *map, struct cgroup *cgroup, void *value, u64 flags)
+
+``flags`` can be 0 or ``BPF_LOCAL_STORAGE_GET_F_CREATE``; the latter indicates
+that a new local storage will be created if one does not exist.
+
+The local storage can be removed with ``bpf_cgrp_storage_delete``::
+
+    long bpf_cgrp_storage_delete(struct bpf_map *map, struct cgroup *cgroup)
+
+The map is available to all program types.
+
+Examples
+========
+
+A BPF program example with BPF_MAP_TYPE_CGRP_STORAGE::
+
+    #include <vmlinux.h>
+    #include <bpf/bpf_helpers.h>
+    #include <bpf/bpf_tracing.h>
+
+    struct {
+            __uint(type, BPF_MAP_TYPE_CGRP_STORAGE);
+            __uint(map_flags, BPF_F_NO_PREALLOC);
+            __type(key, int);
+            __type(value, long);
+    } cgrp_storage SEC(".maps");
+
+    SEC("tp_btf/sys_enter")
+    int BPF_PROG(on_enter, struct pt_regs *regs, long id)
+    {
+            struct task_struct *task = bpf_get_current_task_btf();
+            long *ptr;
+
+            ptr = bpf_cgrp_storage_get(&cgrp_storage, task->cgroups->dfl_cgrp, 0,
+                                       BPF_LOCAL_STORAGE_GET_F_CREATE);
+            if (ptr)
+                __sync_fetch_and_add(ptr, 1);
+
+            return 0;
+    }
+
+User space code accessing the map declared above::
+
+    #include <bpf/bpf.h>
+    #include <bpf/libbpf.h>
+
+    long map_lookup(struct bpf_map *map, int cgrp_fd)
+    {
+            long value;
+
+            if (!bpf_map_lookup_elem(bpf_map__fd(map), &cgrp_fd, &value))
+                    return value;
+            return 0;
+    }
+
+Difference Between BPF_MAP_TYPE_CGRP_STORAGE and BPF_MAP_TYPE_CGROUP_STORAGE
+============================================================================
+
+The main difference between ``BPF_MAP_TYPE_CGRP_STORAGE`` and ``BPF_MAP_TYPE_CGROUP_STORAGE``
+is that ``BPF_MAP_TYPE_CGRP_STORAGE`` can be used by all program types while
+``BPF_MAP_TYPE_CGROUP_STORAGE`` is available only to cgroup program types.
+
+There are some other differences as well.
+
+(1). ``BPF_MAP_TYPE_CGRP_STORAGE`` supports local storage for more than one
+     cgroup, while ``BPF_MAP_TYPE_CGROUP_STORAGE`` only supports one: the cgroup
+     to which the bpf program is attached.
+
+(2). ``BPF_MAP_TYPE_CGROUP_STORAGE`` allocates local storage at attach time, so
+     ``bpf_get_local_storage()`` always returns non-NULL local storage.
+     ``BPF_MAP_TYPE_CGRP_STORAGE`` allocates local storage at run time, so
+     ``bpf_cgrp_storage_get()`` may return NULL if allocation fails.
+     To avoid this, user space can call ``bpf_map_update_elem()`` to
+     pre-allocate the local storage.
+
+(3). ``BPF_MAP_TYPE_CGRP_STORAGE`` supports de-allocating local storage from a
+     bpf program, while ``BPF_MAP_TYPE_CGROUP_STORAGE`` only de-allocates
+     storage at program detach time.
+
+Overall, ``BPF_MAP_TYPE_CGRP_STORAGE`` provides all the functionality of
+``BPF_MAP_TYPE_CGROUP_STORAGE`` and more. It is recommended to use
+``BPF_MAP_TYPE_CGRP_STORAGE`` instead of ``BPF_MAP_TYPE_CGROUP_STORAGE``.
-- 
2.30.2


^ permalink raw reply related	[flat|nested] 23+ messages in thread

* Re: [PATCH bpf-next v2 2/6] bpf: Implement cgroup storage available to non-cgroup-attached bpf progs
  2022-10-20 22:13 ` [PATCH bpf-next v2 2/6] bpf: Implement cgroup storage available to non-cgroup-attached bpf progs Yonghong Song
@ 2022-10-21  5:22   ` David Vernet
  2022-10-21  5:26     ` David Vernet
  2022-10-21 17:33     ` Yonghong Song
       [not found]   ` <202210210932.nHqTyTmx-lkp@intel.com>
  2022-10-21 19:29   ` Yosry Ahmed
  2 siblings, 2 replies; 23+ messages in thread
From: David Vernet @ 2022-10-21  5:22 UTC (permalink / raw)
  To: Yonghong Song
  Cc: bpf, Alexei Starovoitov, Andrii Nakryiko, Daniel Borkmann,
	kernel-team, KP Singh, Martin KaFai Lau, Tejun Heo

On Thu, Oct 20, 2022 at 03:13:06PM -0700, Yonghong Song wrote:
> Similar to sk/inode/task storage, implement similar cgroup local storage.
> 
> There already exists a local storage implementation for cgroup-attached
> bpf programs.  See map type BPF_MAP_TYPE_CGROUP_STORAGE and helper
> bpf_get_local_storage(). But there are use cases such that non-cgroup
> attached bpf progs wants to access cgroup local storage data. For example,
> tc egress prog has access to sk and cgroup. It is possible to use
> sk local storage to emulate cgroup local storage by storing data in socket.
> But this is a waste as it could be lots of sockets belonging to a particular
> cgroup. Alternatively, a separate map can be created with cgroup id as the key.
> But this will introduce additional overhead to manipulate the new map.
> A cgroup local storage, similar to existing sk/inode/task storage,
> should help for this use case.
> 
> The life-cycle of storage is managed with the life-cycle of the
> cgroup struct.  i.e. the storage is destroyed along with the owning cgroup
> with a callback to the bpf_cgrp_storage_free when cgroup itself
> is deleted.
> 
> The userspace map operations can be done by using a cgroup fd as a key
> passed to the lookup, update and delete operations.
> 
> Typically, the following code is used to get the current cgroup:
>     struct task_struct *task = bpf_get_current_task_btf();
>     ... task->cgroups->dfl_cgrp ...
> and in structure task_struct definition:
>     struct task_struct {
>         ....
>         struct css_set __rcu            *cgroups;
>         ....
>     }
> With sleepable program, accessing task->cgroups is not protected by rcu_read_lock.
> So the current implementation only supports non-sleepable program and supporting
> sleepable program will be the next step together with adding rcu_read_lock
> protection for rcu tagged structures.
> 
> Since map name BPF_MAP_TYPE_CGROUP_STORAGE has been used for old cgroup local
> storage support, the new map name BPF_MAP_TYPE_CGRP_STORAGE is used
> for cgroup storage available to non-cgroup-attached bpf programs. The old
> cgroup storage supports bpf_get_local_storage() helper to get the cgroup data.
> The new cgroup storage helper bpf_cgrp_storage_get() can provide similar
> functionality. While old cgroup storage pre-allocates storage memory, the new
> mechanism can also pre-allocate with a user space bpf_map_update_elem() call
> to avoid potential run-time memory allocaiton failure.

s/allocaiton/allocation

> Therefore, the new cgroup storage can provide all functionality w.r.t.
> the old one. So in uapi bpf.h, the old BPF_MAP_TYPE_CGROUP_STORAGE is alias to
> BPF_MAP_TYPE_CGROUP_STORAGE_DEPRECATED to indicate the old cgroup storage can
> be deprecated since the new one can provide the same functionality.
> 
> Signed-off-by: Yonghong Song <yhs@fb.com>
> ---
>  include/linux/bpf.h            |   3 +
>  include/linux/bpf_types.h      |   1 +
>  include/linux/cgroup-defs.h    |   4 +
>  include/uapi/linux/bpf.h       |  48 +++++-
>  kernel/bpf/Makefile            |   2 +-
>  kernel/bpf/bpf_cgrp_storage.c  | 276 +++++++++++++++++++++++++++++++++
>  kernel/bpf/helpers.c           |   6 +
>  kernel/bpf/syscall.c           |   3 +-
>  kernel/bpf/verifier.c          |  13 +-
>  kernel/cgroup/cgroup.c         |   4 +
>  kernel/trace/bpf_trace.c       |   4 +
>  scripts/bpf_doc.py             |   2 +
>  tools/include/uapi/linux/bpf.h |  48 +++++-
>  13 files changed, 409 insertions(+), 5 deletions(-)
>  create mode 100644 kernel/bpf/bpf_cgrp_storage.c
> 
> diff --git a/include/linux/bpf.h b/include/linux/bpf.h
> index 9e7d46d16032..674da3129248 100644
> --- a/include/linux/bpf.h
> +++ b/include/linux/bpf.h
> @@ -2045,6 +2045,7 @@ struct bpf_link *bpf_link_by_id(u32 id);
>  
>  const struct bpf_func_proto *bpf_base_func_proto(enum bpf_func_id func_id);
>  void bpf_task_storage_free(struct task_struct *task);
> +void bpf_cgrp_storage_free(struct cgroup *cgroup);
>  bool bpf_prog_has_kfunc_call(const struct bpf_prog *prog);
>  const struct btf_func_model *
>  bpf_jit_find_kfunc_model(const struct bpf_prog *prog,
> @@ -2537,6 +2538,8 @@ extern const struct bpf_func_proto bpf_copy_from_user_task_proto;
>  extern const struct bpf_func_proto bpf_set_retval_proto;
>  extern const struct bpf_func_proto bpf_get_retval_proto;
>  extern const struct bpf_func_proto bpf_user_ringbuf_drain_proto;
> +extern const struct bpf_func_proto bpf_cgrp_storage_get_proto;
> +extern const struct bpf_func_proto bpf_cgrp_storage_delete_proto;
>  
>  const struct bpf_func_proto *tracing_prog_func_proto(
>    enum bpf_func_id func_id, const struct bpf_prog *prog);
> diff --git a/include/linux/bpf_types.h b/include/linux/bpf_types.h
> index 2c6a4f2562a7..f9d5aa62fed0 100644
> --- a/include/linux/bpf_types.h
> +++ b/include/linux/bpf_types.h
> @@ -90,6 +90,7 @@ BPF_MAP_TYPE(BPF_MAP_TYPE_CGROUP_ARRAY, cgroup_array_map_ops)
>  #ifdef CONFIG_CGROUP_BPF
>  BPF_MAP_TYPE(BPF_MAP_TYPE_CGROUP_STORAGE, cgroup_storage_map_ops)
>  BPF_MAP_TYPE(BPF_MAP_TYPE_PERCPU_CGROUP_STORAGE, cgroup_storage_map_ops)
> +BPF_MAP_TYPE(BPF_MAP_TYPE_CGRP_STORAGE, cgrp_storage_map_ops)
>  #endif
>  BPF_MAP_TYPE(BPF_MAP_TYPE_HASH, htab_map_ops)
>  BPF_MAP_TYPE(BPF_MAP_TYPE_PERCPU_HASH, htab_percpu_map_ops)
> diff --git a/include/linux/cgroup-defs.h b/include/linux/cgroup-defs.h
> index 8f481d1b159a..4a72bc3a0a4e 100644
> --- a/include/linux/cgroup-defs.h
> +++ b/include/linux/cgroup-defs.h
> @@ -504,6 +504,10 @@ struct cgroup {
>  	/* Used to store internal freezer state */
>  	struct cgroup_freezer_state freezer;
>  
> +#ifdef CONFIG_CGROUP_BPF
> +	struct bpf_local_storage __rcu  *bpf_cgrp_storage;
> +#endif
> +
>  	/* All ancestors including self */
>  	struct cgroup *ancestors[];
>  };
> diff --git a/include/uapi/linux/bpf.h b/include/uapi/linux/bpf.h
> index 17f61338f8f8..2d7f79bf3500 100644
> --- a/include/uapi/linux/bpf.h
> +++ b/include/uapi/linux/bpf.h
> @@ -922,7 +922,14 @@ enum bpf_map_type {
>  	BPF_MAP_TYPE_CPUMAP,
>  	BPF_MAP_TYPE_XSKMAP,
>  	BPF_MAP_TYPE_SOCKHASH,
> -	BPF_MAP_TYPE_CGROUP_STORAGE,
> +	BPF_MAP_TYPE_CGROUP_STORAGE_DEPRECATED,
> +	/* BPF_MAP_TYPE_CGROUP_STORAGE is available to bpf programs attaching
> +	 * to a cgroup. The newer BPF_MAP_TYPE_CGRP_STORAGE is available to
> +	 * both cgroup-attached and other progs and supports all functionality
> +	 * provided by BPF_MAP_TYPE_CGROUP_STORAGE. So mark
> +	 * BPF_MAP_TYPE_CGROUP_STORAGE deprecated.
> +	 */
> +	BPF_MAP_TYPE_CGROUP_STORAGE = BPF_MAP_TYPE_CGROUP_STORAGE_DEPRECATED,
>  	BPF_MAP_TYPE_REUSEPORT_SOCKARRAY,
>  	BPF_MAP_TYPE_PERCPU_CGROUP_STORAGE,
>  	BPF_MAP_TYPE_QUEUE,
> @@ -935,6 +942,7 @@ enum bpf_map_type {
>  	BPF_MAP_TYPE_TASK_STORAGE,
>  	BPF_MAP_TYPE_BLOOM_FILTER,
>  	BPF_MAP_TYPE_USER_RINGBUF,
> +	BPF_MAP_TYPE_CGRP_STORAGE,
>  };
>  
>  /* Note that tracing related programs such as
> @@ -5435,6 +5443,42 @@ union bpf_attr {
>   *		**-E2BIG** if user-space has tried to publish a sample which is
>   *		larger than the size of the ring buffer, or which cannot fit
>   *		within a struct bpf_dynptr.
> + *
> + * void *bpf_cgrp_storage_get(struct bpf_map *map, struct cgroup *cgroup, void *value, u64 flags)
> + *	Description
> + *		Get a bpf_local_storage from the *cgroup*.
> + *
> + *		Logically, it could be thought of as getting the value from
> + *		a *map* with *cgroup* as the **key**.  From this
> + *		perspective,  the usage is not much different from
> + *		**bpf_map_lookup_elem**\ (*map*, **&**\ *cgroup*) except this
> + *		helper enforces the key must be a cgroup struct and the map must also
> + *		be a **BPF_MAP_TYPE_CGRP_STORAGE**.
> + *
> + *		Underneath, the value is stored locally at *cgroup* instead of
> + *		the *map*.  The *map* is used as the bpf-local-storage
> + *		"type". The bpf-local-storage "type" (i.e. the *map*) is
> + *		searched against all bpf_local_storage residing at *cgroup*.

IMO this paragraph is a bit hard to parse. Please correct me if I'm
wrong, but I think what it's trying to convey is that when an instance
of cgroup bpf-local-storage is accessed by a program in e.g.
bpf_cgrp_storage_get(), all of the cgroup bpf_local_storage entries are
iterated over in the struct cgroup object until this program's local
storage instance is found. Is that right? If so, perhaps something like
this would be more clear:

In reality, the local-storage value is embedded directly inside of the
*cgroup* object itself, rather than being located in the
**BPF_MAP_TYPE_CGRP_STORAGE** map. When the local-storage value is
queried for some *map* on a *cgroup* object, the kernel will perform an
O(n) iteration over all of the live local-storage values for that
*cgroup* object until the local-storage value for the *map* is found.

What do you think?

> + *		An optional *flags* (**BPF_LOCAL_STORAGE_GET_F_CREATE**) can be
> + *		used such that a new bpf_local_storage will be
> + *		created if one does not exist.  *value* can be used
> + *		together with **BPF_LOCAL_STORAGE_GET_F_CREATE** to specify
> + *		the initial value of a bpf_local_storage.  If *value* is
> + *		**NULL**, the new bpf_local_storage will be zero initialized.
> + *	Return
> + *		A bpf_local_storage pointer is returned on success.
> + *
> + *		**NULL** if not found or there was an error in adding
> + *		a new bpf_local_storage.
> + *
> + * long bpf_cgrp_storage_delete(struct bpf_map *map, struct cgroup *cgroup)
> + *	Description
> + *		Delete a bpf_local_storage from a *cgroup*.
> + *	Return
> + *		0 on success.
> + *
> + *		**-ENOENT** if the bpf_local_storage cannot be found.
>   */
>  #define ___BPF_FUNC_MAPPER(FN, ctx...)			\
>  	FN(unspec, 0, ##ctx)				\
> @@ -5647,6 +5691,8 @@ union bpf_attr {
>  	FN(tcp_raw_check_syncookie_ipv6, 207, ##ctx)	\
>  	FN(ktime_get_tai_ns, 208, ##ctx)		\
>  	FN(user_ringbuf_drain, 209, ##ctx)		\
> +	FN(cgrp_storage_get, 210, ##ctx)		\
> +	FN(cgrp_storage_delete, 211, ##ctx)		\
>  	/* */
>  
>  /* backwards-compatibility macros for users of __BPF_FUNC_MAPPER that don't
> diff --git a/kernel/bpf/Makefile b/kernel/bpf/Makefile
> index 341c94f208f4..3a12e6b400a2 100644
> --- a/kernel/bpf/Makefile
> +++ b/kernel/bpf/Makefile
> @@ -25,7 +25,7 @@ ifeq ($(CONFIG_PERF_EVENTS),y)
>  obj-$(CONFIG_BPF_SYSCALL) += stackmap.o
>  endif
>  ifeq ($(CONFIG_CGROUPS),y)

I assume that you double checked that it's valid to compile the helper
with CONFIG_CGROUPS && !CONFIG_CGROUP_BPF, but I must admit that even if
that's the case, I'm not following why we would want the map to be
compiled with a different kconfig option than the helper that provides
access to it. If there's a precedent for doing this then I suppose it's
fine, but it does seem wrong and/or at least wasteful to compile these
helpers in if CONFIG_CGROUPS is defined but CONFIG_CGROUP_BPF is not.

> -obj-$(CONFIG_BPF_SYSCALL) += cgroup_iter.o
> +obj-$(CONFIG_BPF_SYSCALL) += cgroup_iter.o bpf_cgrp_storage.o
>  endif
>  obj-$(CONFIG_CGROUP_BPF) += cgroup.o
>  ifeq ($(CONFIG_INET),y)
> diff --git a/kernel/bpf/bpf_cgrp_storage.c b/kernel/bpf/bpf_cgrp_storage.c
> new file mode 100644
> index 000000000000..bcc5f0fc20be
> --- /dev/null
> +++ b/kernel/bpf/bpf_cgrp_storage.c
> @@ -0,0 +1,276 @@
> +// SPDX-License-Identifier: GPL-2.0
> +/*
> + * Copyright (c) 2022 Meta Platforms, Inc. and affiliates.
> + */
> +
> +#include <linux/types.h>
> +#include <linux/bpf.h>
> +#include <linux/bpf_local_storage.h>
> +#include <uapi/linux/btf.h>
> +#include <linux/btf_ids.h>
> +
> +DEFINE_BPF_STORAGE_CACHE(cgroup_cache);
> +
> +static DEFINE_PER_CPU(int, bpf_cgrp_storage_busy);
> +
> +static void bpf_cgrp_storage_lock(void)
> +{
> +	migrate_disable();
> +	this_cpu_inc(bpf_cgrp_storage_busy);
> +}
> +
> +static void bpf_cgrp_storage_unlock(void)
> +{
> +	this_cpu_dec(bpf_cgrp_storage_busy);
> +	migrate_enable();
> +}
> +
> +static bool bpf_cgrp_storage_trylock(void)
> +{
> +	migrate_disable();
> +	if (unlikely(this_cpu_inc_return(bpf_cgrp_storage_busy) != 1)) {
> +		this_cpu_dec(bpf_cgrp_storage_busy);
> +		migrate_enable();
> +		return false;
> +	}
> +	return true;
> +}
> +
> +static struct bpf_local_storage __rcu **cgroup_storage_ptr(void *owner)
> +{
> +	struct cgroup *cg = owner;
> +
> +	return &cg->bpf_cgrp_storage;
> +}
> +
> +void bpf_cgrp_storage_free(struct cgroup *cgroup)
> +{
> +	struct bpf_local_storage *local_storage;
> +	struct bpf_local_storage_elem *selem;
> +	bool free_cgroup_storage = false;
> +	struct hlist_node *n;
> +	unsigned long flags;
> +
> +	rcu_read_lock();
> +	local_storage = rcu_dereference(cgroup->bpf_cgrp_storage);
> +	if (!local_storage) {
> +		rcu_read_unlock();
> +		return;
> +	}
> +
> +	/* Neither the bpf_prog nor the bpf-map's syscall

Very minor nit, but I think using a hyphen in bpf-map like this is
incorrect as it's not a compound adjective. Applies elsewhere as well. I
don't believe "added-to" or "deleted-from" require hyphens either.

> +	 * could be modifying the local_storage->list now.
> +	 * Thus, no elem can be added-to or deleted-from the
> +	 * local_storage->list by the bpf_prog or by the bpf-map's syscall.
> +	 *
> +	 * It is racing with bpf_local_storage_map_free() alone
> +	 * when unlinking elem from the local_storage->list and
> +	 * the map's bucket->list.
> +	 */
> +	bpf_cgrp_storage_lock();
> +	raw_spin_lock_irqsave(&local_storage->lock, flags);
> +	hlist_for_each_entry_safe(selem, n, &local_storage->list, snode) {
> +		bpf_selem_unlink_map(selem);
> +		free_cgroup_storage =
> +			bpf_selem_unlink_storage_nolock(local_storage, selem, false, false);

This still needs a comment explaining why it's OK for a later iteration
to overwrite the free_cgroup_storage value returned by a previous call
to bpf_selem_unlink_storage_nolock(). Even if that is safe, it's an
odd-looking pattern, and IMO doing this reads as more intentional and
future-proof:

if (bpf_selem_unlink_storage_nolock(local_storage, selem, false, false))
	free_cgroup_storage = true;

> +	}
> +	raw_spin_unlock_irqrestore(&local_storage->lock, flags);
> +	bpf_cgrp_storage_unlock();
> +	rcu_read_unlock();
> +
> +	/* free_cgroup_storage should always be true as long as
> +	 * local_storage->list was non-empty.
> +	 */
> +	if (free_cgroup_storage)
> +		kfree_rcu(local_storage, rcu);
> +}
> +
> +static struct bpf_local_storage_data *
> +cgroup_storage_lookup(struct cgroup *cgroup, struct bpf_map *map, bool cacheit_lockit)
> +{
> +	struct bpf_local_storage *cgroup_storage;
> +	struct bpf_local_storage_map *smap;
> +
> +	cgroup_storage = rcu_dereference_check(cgroup->bpf_cgrp_storage,
> +					       bpf_rcu_lock_held());
> +	if (!cgroup_storage)
> +		return NULL;
> +
> +	smap = (struct bpf_local_storage_map *)map;
> +	return bpf_local_storage_lookup(cgroup_storage, smap, cacheit_lockit);
> +}
> +
> +static void *bpf_cgrp_storage_lookup_elem(struct bpf_map *map, void *key)
> +{
> +	struct bpf_local_storage_data *sdata;
> +	struct cgroup *cgroup;
> +	int fd;
> +
> +	fd = *(int *)key;
> +	cgroup = cgroup_get_from_fd(fd);
> +	if (IS_ERR(cgroup))
> +		return ERR_CAST(cgroup);
> +
> +	bpf_cgrp_storage_lock();
> +	sdata = cgroup_storage_lookup(cgroup, map, true);
> +	bpf_cgrp_storage_unlock();
> +	cgroup_put(cgroup);
> +	return sdata ? sdata->data : NULL;
> +}

Stanislav pointed out on the v1 revision that there's a lot of very
similar logic in task storage, and I believe you mentioned that you were
going to think about generalizing some of it. Have you had a chance to
consider that?

> +static int bpf_cgrp_storage_update_elem(struct bpf_map *map, void *key,
> +					  void *value, u64 map_flags)
> +{
> +	struct bpf_local_storage_data *sdata;
> +	struct cgroup *cgroup;
> +	int fd;
> +
> +	fd = *(int *)key;
> +	cgroup = cgroup_get_from_fd(fd);
> +	if (IS_ERR(cgroup))
> +		return PTR_ERR(cgroup);
> +
> +	bpf_cgrp_storage_lock();
> +	sdata = bpf_local_storage_update(cgroup, (struct bpf_local_storage_map *)map,
> +					 value, map_flags, GFP_ATOMIC);
> +	bpf_cgrp_storage_unlock();
> +	cgroup_put(cgroup);
> +	return PTR_ERR_OR_ZERO(sdata);
> +}
> +
> +static int cgroup_storage_delete(struct cgroup *cgroup, struct bpf_map *map)
> +{
> +	struct bpf_local_storage_data *sdata;
> +
> +	sdata = cgroup_storage_lookup(cgroup, map, false);
> +	if (!sdata)
> +		return -ENOENT;
> +
> +	bpf_selem_unlink(SELEM(sdata), true);
> +	return 0;
> +}
> +
> +static int bpf_cgrp_storage_delete_elem(struct bpf_map *map, void *key)
> +{
> +	struct cgroup *cgroup;
> +	int err, fd;
> +
> +	fd = *(int *)key;
> +	cgroup = cgroup_get_from_fd(fd);
> +	if (IS_ERR(cgroup))
> +		return PTR_ERR(cgroup);
> +
> +	bpf_cgrp_storage_lock();
> +	err = cgroup_storage_delete(cgroup, map);
> +	bpf_cgrp_storage_unlock();
> +	cgroup_put(cgroup);
> +	return err;
> +}
> +
> +static int notsupp_get_next_key(struct bpf_map *map, void *key, void *next_key)
> +{
> +	return -ENOTSUPP;
> +}
> +
> +static struct bpf_map *cgroup_storage_map_alloc(union bpf_attr *attr)
> +{
> +	struct bpf_local_storage_map *smap;
> +
> +	smap = bpf_local_storage_map_alloc(attr);
> +	if (IS_ERR(smap))
> +		return ERR_CAST(smap);
> +
> +	smap->cache_idx = bpf_local_storage_cache_idx_get(&cgroup_cache);
> +	return &smap->map;
> +}
> +
> +static void cgroup_storage_map_free(struct bpf_map *map)
> +{
> +	struct bpf_local_storage_map *smap;
> +
> +	smap = (struct bpf_local_storage_map *)map;
> +	bpf_local_storage_cache_idx_free(&cgroup_cache, smap->cache_idx);
> +	bpf_local_storage_map_free(smap, NULL);
> +}
> +
> +/* *gfp_flags* is a hidden argument provided by the verifier */
> +BPF_CALL_5(bpf_cgrp_storage_get, struct bpf_map *, map, struct cgroup *, cgroup,
> +	   void *, value, u64, flags, gfp_t, gfp_flags)
> +{
> +	struct bpf_local_storage_data *sdata;
> +
> +	WARN_ON_ONCE(!bpf_rcu_lock_held());
> +	if (flags & ~(BPF_LOCAL_STORAGE_GET_F_CREATE))
> +		return (unsigned long)NULL;
> +
> +	if (!cgroup)
> +		return (unsigned long)NULL;
> +
> +	if (!bpf_cgrp_storage_trylock())
> +		return (unsigned long)NULL;
> +
> +	sdata = cgroup_storage_lookup(cgroup, map, true);
> +	if (sdata)
> +		goto unlock;
> +
> +	/* only allocate new storage, when the cgroup is refcounted */
> +	if (!percpu_ref_is_dying(&cgroup->self.refcnt) &&
> +	    (flags & BPF_LOCAL_STORAGE_GET_F_CREATE))
> +		sdata = bpf_local_storage_update(cgroup, (struct bpf_local_storage_map *)map,
> +						 value, BPF_NOEXIST, gfp_flags);
> +
> +unlock:
> +	bpf_cgrp_storage_unlock();
> +	return IS_ERR_OR_NULL(sdata) ? (unsigned long)NULL : (unsigned long)sdata->data;
> +}
> +
> +BPF_CALL_2(bpf_cgrp_storage_delete, struct bpf_map *, map, struct cgroup *, cgroup)
> +{
> +	int ret;
> +
> +	WARN_ON_ONCE(!bpf_rcu_lock_held());
> +	if (!cgroup)
> +		return -EINVAL;
> +
> +	if (!bpf_cgrp_storage_trylock())
> +		return -EBUSY;
> +
> +	ret = cgroup_storage_delete(cgroup, map);
> +	bpf_cgrp_storage_unlock();
> +	return ret;
> +}
> +
> +BTF_ID_LIST_SINGLE(cgroup_storage_map_btf_ids, struct, bpf_local_storage_map)
> +const struct bpf_map_ops cgrp_storage_map_ops = {
> +	.map_meta_equal = bpf_map_meta_equal,
> +	.map_alloc_check = bpf_local_storage_map_alloc_check,
> +	.map_alloc = cgroup_storage_map_alloc,
> +	.map_free = cgroup_storage_map_free,
> +	.map_get_next_key = notsupp_get_next_key,
> +	.map_lookup_elem = bpf_cgrp_storage_lookup_elem,
> +	.map_update_elem = bpf_cgrp_storage_update_elem,
> +	.map_delete_elem = bpf_cgrp_storage_delete_elem,
> +	.map_check_btf = bpf_local_storage_map_check_btf,
> +	.map_btf_id = &cgroup_storage_map_btf_ids[0],
> +	.map_owner_storage_ptr = cgroup_storage_ptr,
> +};
> +
> +const struct bpf_func_proto bpf_cgrp_storage_get_proto = {
> +	.func		= bpf_cgrp_storage_get,
> +	.gpl_only	= false,
> +	.ret_type	= RET_PTR_TO_MAP_VALUE_OR_NULL,
> +	.arg1_type	= ARG_CONST_MAP_PTR,
> +	.arg2_type	= ARG_PTR_TO_BTF_ID,
> +	.arg2_btf_id	= &bpf_cgroup_btf_id[0],
> +	.arg3_type	= ARG_PTR_TO_MAP_VALUE_OR_NULL,
> +	.arg4_type	= ARG_ANYTHING,
> +};
> +
> +const struct bpf_func_proto bpf_cgrp_storage_delete_proto = {
> +	.func		= bpf_cgrp_storage_delete,
> +	.gpl_only	= false,
> +	.ret_type	= RET_INTEGER,
> +	.arg1_type	= ARG_CONST_MAP_PTR,
> +	.arg2_type	= ARG_PTR_TO_BTF_ID,
> +	.arg2_btf_id	= &bpf_cgroup_btf_id[0],
> +};
> diff --git a/kernel/bpf/helpers.c b/kernel/bpf/helpers.c
> index a6b04faed282..124fd199ce5c 100644
> --- a/kernel/bpf/helpers.c
> +++ b/kernel/bpf/helpers.c
> @@ -1663,6 +1663,12 @@ bpf_base_func_proto(enum bpf_func_id func_id)
>  		return &bpf_dynptr_write_proto;
>  	case BPF_FUNC_dynptr_data:
>  		return &bpf_dynptr_data_proto;
> +#ifdef CONFIG_CGROUPS
> +	case BPF_FUNC_cgrp_storage_get:
> +		return &bpf_cgrp_storage_get_proto;
> +	case BPF_FUNC_cgrp_storage_delete:
> +		return &bpf_cgrp_storage_delete_proto;
> +#endif
>  	default:
>  		break;
>  	}
> diff --git a/kernel/bpf/syscall.c b/kernel/bpf/syscall.c
> index 7b373a5e861f..b95c276f92e3 100644
> --- a/kernel/bpf/syscall.c
> +++ b/kernel/bpf/syscall.c
> @@ -1016,7 +1016,8 @@ static int map_check_btf(struct bpf_map *map, const struct btf *btf,
>  		    map->map_type != BPF_MAP_TYPE_CGROUP_STORAGE &&
>  		    map->map_type != BPF_MAP_TYPE_SK_STORAGE &&
>  		    map->map_type != BPF_MAP_TYPE_INODE_STORAGE &&
> -		    map->map_type != BPF_MAP_TYPE_TASK_STORAGE)
> +		    map->map_type != BPF_MAP_TYPE_TASK_STORAGE &&
> +		    map->map_type != BPF_MAP_TYPE_CGRP_STORAGE)
>  			return -ENOTSUPP;
>  		if (map->spin_lock_off + sizeof(struct bpf_spin_lock) >
>  		    map->value_size) {
> diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
> index 6f6d2d511c06..82bb18d7e881 100644
> --- a/kernel/bpf/verifier.c
> +++ b/kernel/bpf/verifier.c
> @@ -6360,6 +6360,11 @@ static int check_map_func_compatibility(struct bpf_verifier_env *env,
>  		    func_id != BPF_FUNC_task_storage_delete)
>  			goto error;
>  		break;
> +	case BPF_MAP_TYPE_CGRP_STORAGE:
> +		if (func_id != BPF_FUNC_cgrp_storage_get &&
> +		    func_id != BPF_FUNC_cgrp_storage_delete)
> +			goto error;
> +		break;
>  	case BPF_MAP_TYPE_BLOOM_FILTER:
>  		if (func_id != BPF_FUNC_map_peek_elem &&
>  		    func_id != BPF_FUNC_map_push_elem)
> @@ -6472,6 +6477,11 @@ static int check_map_func_compatibility(struct bpf_verifier_env *env,
>  		if (map->map_type != BPF_MAP_TYPE_TASK_STORAGE)
>  			goto error;
>  		break;
> +	case BPF_FUNC_cgrp_storage_get:
> +	case BPF_FUNC_cgrp_storage_delete:
> +		if (map->map_type != BPF_MAP_TYPE_CGRP_STORAGE)
> +			goto error;
> +		break;
>  	default:
>  		break;
>  	}
> @@ -14149,7 +14159,8 @@ static int do_misc_fixups(struct bpf_verifier_env *env)
>  
>  		if (insn->imm == BPF_FUNC_task_storage_get ||
>  		    insn->imm == BPF_FUNC_sk_storage_get ||
> -		    insn->imm == BPF_FUNC_inode_storage_get) {
> +		    insn->imm == BPF_FUNC_inode_storage_get ||
> +		    insn->imm == BPF_FUNC_cgrp_storage_get) {
>  			if (env->prog->aux->sleepable)
>  				insn_buf[0] = BPF_MOV64_IMM(BPF_REG_5, (__force __s32)GFP_KERNEL);
>  			else
> diff --git a/kernel/cgroup/cgroup.c b/kernel/cgroup/cgroup.c
> index 764bdd5fd8d1..7e80e15fae4e 100644
> --- a/kernel/cgroup/cgroup.c
> +++ b/kernel/cgroup/cgroup.c
> @@ -5227,6 +5227,10 @@ static void css_free_rwork_fn(struct work_struct *work)
>  	struct cgroup_subsys *ss = css->ss;
>  	struct cgroup *cgrp = css->cgroup;
>  
> +#ifdef CONFIG_CGROUP_BPF
> +	bpf_cgrp_storage_free(cgrp);
> +#endif
> +
>  	percpu_ref_exit(&css->refcnt);
>  
>  	if (ss) {
> diff --git a/kernel/trace/bpf_trace.c b/kernel/trace/bpf_trace.c
> index 49fb9ec8366d..0ddf0834b1b8 100644
> --- a/kernel/trace/bpf_trace.c
> +++ b/kernel/trace/bpf_trace.c
> @@ -1454,6 +1454,10 @@ bpf_tracing_func_proto(enum bpf_func_id func_id, const struct bpf_prog *prog)
>  		return &bpf_get_current_cgroup_id_proto;
>  	case BPF_FUNC_get_current_ancestor_cgroup_id:
>  		return &bpf_get_current_ancestor_cgroup_id_proto;
> +	case BPF_FUNC_cgrp_storage_get:
> +		return &bpf_cgrp_storage_get_proto;
> +	case BPF_FUNC_cgrp_storage_delete:
> +		return &bpf_cgrp_storage_delete_proto;

Why the #ifdef CONFIG_CGROUPS check in bpf_base_func_proto(), but not
here?

>  #endif
>  	case BPF_FUNC_send_signal:
>  		return &bpf_send_signal_proto;
> diff --git a/scripts/bpf_doc.py b/scripts/bpf_doc.py
> index c0e6690be82a..fdb0aff8cb5a 100755
> --- a/scripts/bpf_doc.py
> +++ b/scripts/bpf_doc.py
> @@ -685,6 +685,7 @@ class PrinterHelpers(Printer):
>              'struct udp6_sock',
>              'struct unix_sock',
>              'struct task_struct',
> +            'struct cgroup',
>  
>              'struct __sk_buff',
>              'struct sk_msg_md',
> @@ -742,6 +743,7 @@ class PrinterHelpers(Printer):
>              'struct udp6_sock',
>              'struct unix_sock',
>              'struct task_struct',
> +            'struct cgroup',
>              'struct path',
>              'struct btf_ptr',
>              'struct inode',
> diff --git a/tools/include/uapi/linux/bpf.h b/tools/include/uapi/linux/bpf.h
> index 17f61338f8f8..2d7f79bf3500 100644
> --- a/tools/include/uapi/linux/bpf.h
> +++ b/tools/include/uapi/linux/bpf.h
> @@ -922,7 +922,14 @@ enum bpf_map_type {
>  	BPF_MAP_TYPE_CPUMAP,
>  	BPF_MAP_TYPE_XSKMAP,
>  	BPF_MAP_TYPE_SOCKHASH,
> -	BPF_MAP_TYPE_CGROUP_STORAGE,
> +	BPF_MAP_TYPE_CGROUP_STORAGE_DEPRECATED,
> +	/* BPF_MAP_TYPE_CGROUP_STORAGE is available to bpf programs attaching
> +	 * to a cgroup. The newer BPF_MAP_TYPE_CGRP_STORAGE is available to
> +	 * both cgroup-attached and other progs and supports all functionality
> +	 * provided by BPF_MAP_TYPE_CGROUP_STORAGE. So mark
> +	 * BPF_MAP_TYPE_CGROUP_STORAGE deprecated.
> +	 */
> +	BPF_MAP_TYPE_CGROUP_STORAGE = BPF_MAP_TYPE_CGROUP_STORAGE_DEPRECATED,
>  	BPF_MAP_TYPE_REUSEPORT_SOCKARRAY,
>  	BPF_MAP_TYPE_PERCPU_CGROUP_STORAGE,
>  	BPF_MAP_TYPE_QUEUE,
> @@ -935,6 +942,7 @@ enum bpf_map_type {
>  	BPF_MAP_TYPE_TASK_STORAGE,
>  	BPF_MAP_TYPE_BLOOM_FILTER,
>  	BPF_MAP_TYPE_USER_RINGBUF,
> +	BPF_MAP_TYPE_CGRP_STORAGE,
>  };
>  
>  /* Note that tracing related programs such as
> @@ -5435,6 +5443,42 @@ union bpf_attr {
>   *		**-E2BIG** if user-space has tried to publish a sample which is
>   *		larger than the size of the ring buffer, or which cannot fit
>   *		within a struct bpf_dynptr.
> + *
> + * void *bpf_cgrp_storage_get(struct bpf_map *map, struct cgroup *cgroup, void *value, u64 flags)
> + *	Description
> + *		Get a bpf_local_storage from the *cgroup*.
> + *
> + *		Logically, it could be thought of as getting the value from
> + *		a *map* with *cgroup* as the **key**.  From this
> + *		perspective,  the usage is not much different from
> + *		**bpf_map_lookup_elem**\ (*map*, **&**\ *cgroup*) except this
> + *		helper enforces the key must be a cgroup struct and the map must also
> + *		be a **BPF_MAP_TYPE_CGRP_STORAGE**.
> + *
> + *		Underneath, the value is stored locally at *cgroup* instead of
> + *		the *map*.  The *map* is used as the bpf-local-storage
> + *		"type". The bpf-local-storage "type" (i.e. the *map*) is
> + *		searched against all bpf_local_storage residing at *cgroup*.
> + *
> + *		An optional *flags* (**BPF_LOCAL_STORAGE_GET_F_CREATE**) can be
> + *		used such that a new bpf_local_storage will be
> + *		created if one does not exist.  *value* can be used
> + *		together with **BPF_LOCAL_STORAGE_GET_F_CREATE** to specify
> + *		the initial value of a bpf_local_storage.  If *value* is
> + *		**NULL**, the new bpf_local_storage will be zero initialized.
> + *	Return
> + *		A bpf_local_storage pointer is returned on success.
> + *
> + *		**NULL** if not found or there was an error in adding
> + *		a new bpf_local_storage.
> + *
> + * long bpf_cgrp_storage_delete(struct bpf_map *map, struct cgroup *cgroup)
> + *	Description
> + *		Delete a bpf_local_storage from a *cgroup*.
> + *	Return
> + *		0 on success.
> + *
> + *		**-ENOENT** if the bpf_local_storage cannot be found.
>   */
>  #define ___BPF_FUNC_MAPPER(FN, ctx...)			\
>  	FN(unspec, 0, ##ctx)				\
> @@ -5647,6 +5691,8 @@ union bpf_attr {
>  	FN(tcp_raw_check_syncookie_ipv6, 207, ##ctx)	\
>  	FN(ktime_get_tai_ns, 208, ##ctx)		\
>  	FN(user_ringbuf_drain, 209, ##ctx)		\
> +	FN(cgrp_storage_get, 210, ##ctx)		\
> +	FN(cgrp_storage_delete, 211, ##ctx)		\
>  	/* */
>  
>  /* backwards-compatibility macros for users of __BPF_FUNC_MAPPER that don't
> -- 
> 2.30.2
> 

^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [PATCH bpf-next v2 2/6] bpf: Implement cgroup storage available to non-cgroup-attached bpf progs
  2022-10-21  5:22   ` David Vernet
@ 2022-10-21  5:26     ` David Vernet
  2022-10-21 17:33     ` Yonghong Song
  1 sibling, 0 replies; 23+ messages in thread
From: David Vernet @ 2022-10-21  5:26 UTC (permalink / raw)
  To: Yonghong Song
  Cc: bpf, Alexei Starovoitov, Andrii Nakryiko, Daniel Borkmann,
	kernel-team, KP Singh, Martin KaFai Lau, Tejun Heo

On Fri, Oct 21, 2022 at 12:22:49AM -0500, David Vernet wrote:
> > diff --git a/kernel/trace/bpf_trace.c b/kernel/trace/bpf_trace.c
> > index 49fb9ec8366d..0ddf0834b1b8 100644
> > --- a/kernel/trace/bpf_trace.c
> > +++ b/kernel/trace/bpf_trace.c
> > @@ -1454,6 +1454,10 @@ bpf_tracing_func_proto(enum bpf_func_id func_id, const struct bpf_prog *prog)
> >  		return &bpf_get_current_cgroup_id_proto;
> >  	case BPF_FUNC_get_current_ancestor_cgroup_id:
> >  		return &bpf_get_current_ancestor_cgroup_id_proto;
> > +	case BPF_FUNC_cgrp_storage_get:
> > +		return &bpf_cgrp_storage_get_proto;
> > +	case BPF_FUNC_cgrp_storage_delete:
> > +		return &bpf_cgrp_storage_delete_proto;
> 
> Why the #ifdef CONFIG_CGROUPS check in bpf_base_func_proto(), but not
> here?

Sorry, just realized it's already in an #ifdef CONFIG_CGROUPS block.


* Re: [PATCH bpf-next v2 6/6] docs/bpf: Add documentation for map type BPF_MAP_TYPE_CGRP_STROAGE
  2022-10-20 22:13 ` [PATCH bpf-next v2 6/6] docs/bpf: Add documentation for map type BPF_MAP_TYPE_CGRP_STROAGE Yonghong Song
@ 2022-10-21  7:12   ` David Vernet
  2022-10-21 17:46     ` Yonghong Song
  0 siblings, 1 reply; 23+ messages in thread
From: David Vernet @ 2022-10-21  7:12 UTC (permalink / raw)
  To: Yonghong Song
  Cc: bpf, Alexei Starovoitov, Andrii Nakryiko, Daniel Borkmann,
	kernel-team, KP Singh, Martin KaFai Lau, Tejun Heo

On Thu, Oct 20, 2022 at 03:13:27PM -0700, Yonghong Song wrote:
> Add some descriptions and examples for BPF_MAP_TYPE_CGRP_STROAGE.
> Also illustate the major difference between BPF_MAP_TYPE_CGRP_STROAGE
> and BPF_MAP_TYPE_CGROUP_STORAGE and recommend to use
> BPF_MAP_TYPE_CGRP_STROAGE instead of BPF_MAP_TYPE_CGROUP_STORAGE
> in the end.
> 
> Signed-off-by: Yonghong Song <yhs@fb.com>
> ---
>  Documentation/bpf/map_cgrp_storage.rst | 104 +++++++++++++++++++++++++
>  1 file changed, 104 insertions(+)
>  create mode 100644 Documentation/bpf/map_cgrp_storage.rst
> 
> diff --git a/Documentation/bpf/map_cgrp_storage.rst b/Documentation/bpf/map_cgrp_storage.rst
> new file mode 100644
> index 000000000000..15691aab7fc7
> --- /dev/null
> +++ b/Documentation/bpf/map_cgrp_storage.rst
> @@ -0,0 +1,104 @@
> +.. SPDX-License-Identifier: GPL-2.0-only
> +.. Copyright (C) 2022 Meta Platforms, Inc. and affiliates.
> +
> +===========================
> +BPF_MAP_TYPE_CGRP_STORAGE
> +===========================
> +
> +The ``BPF_MAP_TYPE_CGRP_STORAGE`` map type represents a local fix-sized
> +storage for cgroups. It is only available with ``CONFIG_CGROUP_BPF``.
> +The programs are made available by the same Kconfig. The
> +data for a particular cgroup can be retrieved by looking up the map
> +with that cgroup.
> +
> +This document describes the usage and semantics of the
> +``BPF_MAP_TYPE_CGRP_STORAGE`` map type.
> +
> +Usage
> +=====
> +
> +The map key must be ``sizeof(int)`` representing a cgroup fd.
> +To access the storage in a program, use ``bpf_cgrp_storage_get``::
> +
> +    void *bpf_cgrp_storage_get(struct bpf_map *map, struct cgroup *cgroup, void *value, u64 flags)
> +
> +``flags`` could be 0 or ``BPF_LOCAL_STORAGE_GET_F_CREATE`` which indicates that
> +a new local storage will be created if one does not exist.
> +
> +The local storage can be removed with ``bpf_cgrp_storage_delete``::
> +
> +    long bpf_cgrp_storage_delete(struct bpf_map *map, struct cgruop *cgroup)

s/cgruop/cgroup

> +
> +The map is available to all program types.
> +
> +Examples
> +========
> +
> +An bpf-program example with BPF_MAP_TYPE_CGRP_STORAGE::

s/An bpf-program/A bpf program

> +
> +    #include <vmlinux.h>
> +    #include <bpf/bpf_helpers.h>
> +    #include <bpf/bpf_tracing.h>
> +
> +    struct {
> +            __uint(type, BPF_MAP_TYPE_CGRP_STORAGE);
> +            __uint(map_flags, BPF_F_NO_PREALLOC);
> +            __type(key, int);
> +            __type(value, long);
> +    } cgrp_storage SEC(".maps");
> +
> +    SEC("tp_btf/sys_enter")
> +    int BPF_PROG(on_enter, struct pt_regs *regs, long id)
> +    {
> +            struct task_struct *task = bpf_get_current_task_btf();
> +            long *ptr;
> +
> +            ptr = bpf_cgrp_storage_get(&cgrp_storage, task->cgroups->dfl_cgrp, 0,
> +                                       BPF_LOCAL_STORAGE_GET_F_CREATE);
> +            if (ptr)
> +                __sync_fetch_and_add(ptr, 1);
> +
> +            return 0;
> +    }
> +
> +Userspace accessing map declared above::
> +
> +    #include <linux/bpf.h>
> +    #include <linux/libbpf.h>
> +
> +    __u32 map_lookup(struct bpf_map *map, int cgrp_fd)
> +    {
> +            __u32 *value;
> +            value = bpf_map_lookup_elem(bpf_map__fd(map), &cgrp_fd);
> +            if (value)
> +                return *value;
> +            return 0;
> +    }
> +
> +Difference Between BPF_MAP_TYPE_CGRP_STORAGE and BPF_MAP_TYPE_CGROUP_STORAGE
> +============================================================================
> +
> +The main difference between ``BPF_MAP_TYPE_CGRP_STORAGE`` and ``BPF_MAP_TYPE_CGROUP_STORAGE``
> +is that ``BPF_MAP_TYPE_CGRP_STORAGE`` can be used by all program types while
> +``BPF_MAP_TYPE_CGROUP_STORAGE`` is available only to cgroup program types.

Suggestion: I'd consider going into just a bit more detail about what's
meant by "cgroup program types" here. Maybe just 1-2 sentences
describing the types of programs where the deprecated cgroup storage map
type is available, and why. A program that's using the
BPF_MAP_TYPE_CGRP_STORAGE map is conceptually also a "cgroup program
type" in that it's querying, analyzing, etc cgroups, so being explicit
may help get ahead of any confusion.

Also, given that BPF_MAP_TYPE_CGROUP_STORAGE is deprecated, and is
extremely close in name to BPF_MAP_TYPE_CGRP_STORAGE, perhaps we should
start this section out by explaining that BPF_MAP_TYPE_CGROUP_STORAGE is
a deprecated map type that's now aliased to
BPF_MAP_TYPE_CGROUP_STORAGE_DEPRECATED, and then reference it as
BPF_MAP_TYPE_CGROUP_STORAGE_DEPRECATED for the rest of the page. I think
it's important to highlight that the map type is deprecated, and give
some historical context here so that users understand why they should
use BPF_MAP_TYPE_CGRP_STORAGE, and why we have two maps with such
confusingly similar names. What do you think?

> +There are some other differences as well.
> +
> +(1). ``BPF_MAP_TYPE_CGRP_STORAGE`` supports local storage for more than one
> +     cgroups while ``BPF_MAP_TYPE_CGROUP_STORAGE`` only support one, the one attached

s/cgroups/cgroup

> +     by the bpf program.
> +
> +(2). ``BPF_MAP_TYPE_CGROUP_STORAGE`` allocates local storage at attach time so
> +     ``bpf_get_local_storage()`` always returns non-null local storage.

Suggestion: s/non-null/non-NULL. In general, I'd suggest changing null
to NULL throughout the page, but I don't feel strongly about it.

> +     ``BPF_MAP_TYPE_CGRP_STORAGE`` allocates local storage at runtime so
> +     it is possible that ``bpf_cgrp_storage_get()`` may return null local storage.
> +     To avoid such null local storage issue, user space can do
> +     ``bpf_map_update_elem()`` to pre-allocate local storage.

Should we specify that user space can preallocate by doing
bpf_map_update_elem() _before_ the program is attached? Also, another
small nit, but I think pre-allocate and de-allocate should just be
preallocate and deallocate respectively.

> +(3). ``BPF_MAP_TYPE_CGRP_STORAGE`` supports de-allocating local storage by bpf program

s/by bpf program/by a bpf program

> +     while ``BPF_MAP_TYPE_CGROUP_STORAGE`` only de-allocates storage during
> +     prog detach time.
> +
> +So overall, ``BPF_MAP_TYPE_CGRP_STORAGE`` supports all ``BPF_MAP_TYPE_CGROUP_STORAGE``
> +functionality and beyond. It is recommended to use ``BPF_MAP_TYPE_CGRP_STORAGE``
> +instead of ``BPF_MAP_TYPE_CGROUP_STORAGE``.

As mentioned above, I think we need to go beyond stating that using
BPF_MAP_TYPE_CGRP_STORAGE is recommended, and also explicitly and loudly
call out that BPF_MAP_TYPE_CGROUP_STORAGE_DEPRECATED is deprecated and
will not be supported indefinitely.

Overall though, this looks great. Thank you for writing it up.

Thanks,
David


* Re: [PATCH bpf-next v2 2/6] bpf: Implement cgroup storage available to non-cgroup-attached bpf progs
       [not found]   ` <202210210932.nHqTyTmx-lkp@intel.com>
@ 2022-10-21 16:51     ` Yonghong Song
  0 siblings, 0 replies; 23+ messages in thread
From: Yonghong Song @ 2022-10-21 16:51 UTC (permalink / raw)
  To: kernel test robot, Yonghong Song, bpf
  Cc: kbuild-all, Alexei Starovoitov, Andrii Nakryiko, Daniel Borkmann,
	kernel-team, KP Singh, Martin KaFai Lau, Tejun Heo



On 10/20/22 6:29 PM, kernel test robot wrote:
> Hi Yonghong,
> 
> I love your patch! Yet something to improve:
> 
> [auto build test ERROR on bpf-next/master]
> 
> url:    https://github.com/intel-lab-lkp/linux/commits/Yonghong-Song/bpf-Implement-cgroup-local-storage-available-to-non-cgroup-attached-bpf-progs/20221021-061520
> base:   https://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-next.git master
> patch link:    https://lore.kernel.org/r/20221020221306.3554250-1-yhs%40fb.com
> patch subject: [PATCH bpf-next v2 2/6] bpf: Implement cgroup storage available to non-cgroup-attached bpf progs
> config: mips-randconfig-r021-20221019
> compiler: mipsel-linux-gcc (GCC) 12.1.0
> reproduce (this is a W=1 build):
>          wget https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross   -O ~/bin/make.cross
>          chmod +x ~/bin/make.cross
>          # https://github.com/intel-lab-lkp/linux/commit/444d94c5601ec8650f49598c284571e1bc81a43d
>          git remote add linux-review https://github.com/intel-lab-lkp/linux
>          git fetch --no-tags linux-review Yonghong-Song/bpf-Implement-cgroup-local-storage-available-to-non-cgroup-attached-bpf-progs/20221021-061520
>          git checkout 444d94c5601ec8650f49598c284571e1bc81a43d
>          # save the config file
>          mkdir build_dir && cp config build_dir/.config
>          COMPILER_INSTALL_PATH=$HOME/0day COMPILER=gcc-12.1.0 make.cross W=1 O=build_dir ARCH=mips SHELL=/bin/bash kernel/bpf/
> 
> If you fix the issue, kindly add following tag where applicable
> | Reported-by: kernel test robot <lkp@intel.com>
> 
> All errors (new ones prefixed by >>):
> 
>     kernel/bpf/bpf_cgrp_storage.c: In function 'cgroup_storage_ptr':
>>> kernel/bpf/bpf_cgrp_storage.c:43:19: error: 'struct cgroup' has no member named 'bpf_cgrp_storage'
>        43 |         return &cg->bpf_cgrp_storage;

I used CONFIG_CGROUP_BPF to guard this field in cgroup-defs.h.
But it is possible to have CONFIG_CGROUPS=y and CONFIG_BPF_SYSCALL=y
while CONFIG_CGROUP_BPF=n.
I will change cgroup-defs.h to guard the bpf_cgrp_storage field
with CONFIG_BPF_SYSCALL instead, which should fix the issue.

>           |                   ^~
>     In file included from include/linux/workqueue.h:16,
>                      from include/linux/bpf.h:10,
>                      from kernel/bpf/bpf_cgrp_storage.c:7:
>     kernel/bpf/bpf_cgrp_storage.c: In function 'bpf_cgrp_storage_free':
>     kernel/bpf/bpf_cgrp_storage.c:55:47: error: 'struct cgroup' has no member named 'bpf_cgrp_storage'
>        55 |         local_storage = rcu_dereference(cgroup->bpf_cgrp_storage);
>           |                                               ^~
>     include/linux/rcupdate.h:439:17: note: in definition of macro '__rcu_dereference_check'
>       439 |         typeof(*p) *local = (typeof(*p) *__force)READ_ONCE(p); \
>           |                 ^
>     include/linux/rcupdate.h:659:28: note: in expansion of macro 'rcu_dereference_check'
>       659 | #define rcu_dereference(p) rcu_dereference_check(p, 0)
>           |                            ^~~~~~~~~~~~~~~~~~~~~
>     kernel/bpf/bpf_cgrp_storage.c:55:25: note: in expansion of macro 'rcu_dereference'
>        55 |         local_storage = rcu_dereference(cgroup->bpf_cgrp_storage);
>           |                         ^~~~~~~~~~~~~~~
>     kernel/bpf/bpf_cgrp_storage.c:55:47: error: 'struct cgroup' has no member named 'bpf_cgrp_storage'
>        55 |         local_storage = rcu_dereference(cgroup->bpf_cgrp_storage);
>           |                                               ^~
>     include/linux/rcupdate.h:439:38: note: in definition of macro '__rcu_dereference_check'
>       439 |         typeof(*p) *local = (typeof(*p) *__force)READ_ONCE(p); \
>           |                                      ^
>     include/linux/rcupdate.h:659:28: note: in expansion of macro 'rcu_dereference_check'
>       659 | #define rcu_dereference(p) rcu_dereference_check(p, 0)
>           |                            ^~~~~~~~~~~~~~~~~~~~~
>     kernel/bpf/bpf_cgrp_storage.c:55:25: note: in expansion of macro 'rcu_dereference'
>        55 |         local_storage = rcu_dereference(cgroup->bpf_cgrp_storage);
>           |                         ^~~~~~~~~~~~~~~
>     In file included from <command-line>:
>     kernel/bpf/bpf_cgrp_storage.c:55:47: error: 'struct cgroup' has no member named 'bpf_cgrp_storage'
>        55 |         local_storage = rcu_dereference(cgroup->bpf_cgrp_storage);
>           |                                               ^~
>     include/linux/compiler_types.h:337:23: note: in definition of macro '__compiletime_assert'
>       337 |                 if (!(condition))                                       \
>           |                       ^~~~~~~~~
>     include/linux/compiler_types.h:357:9: note: in expansion of macro '_compiletime_assert'
>       357 |         _compiletime_assert(condition, msg, __compiletime_assert_, __COUNTER__)
>           |         ^~~~~~~~~~~~~~~~~~~
>     include/asm-generic/rwonce.h:36:9: note: in expansion of macro 'compiletime_assert'
>        36 |         compiletime_assert(__native_word(t) || sizeof(t) == sizeof(long long),  \
>           |         ^~~~~~~~~~~~~~~~~~
>     include/asm-generic/rwonce.h:36:28: note: in expansion of macro '__native_word'
>        36 |         compiletime_assert(__native_word(t) || sizeof(t) == sizeof(long long),  \
>           |                            ^~~~~~~~~~~~~
>     include/asm-generic/rwonce.h:49:9: note: in expansion of macro 'compiletime_assert_rwonce_type'
>        49 |         compiletime_assert_rwonce_type(x);                              \
>           |         ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
>     include/linux/rcupdate.h:439:50: note: in expansion of macro 'READ_ONCE'
>       439 |         typeof(*p) *local = (typeof(*p) *__force)READ_ONCE(p); \
>           |                                                  ^~~~~~~~~
>     include/linux/rcupdate.h:587:9: note: in expansion of macro '__rcu_dereference_check'
>       587 |         __rcu_dereference_check((p), __UNIQUE_ID(rcu), \
>           |         ^~~~~~~~~~~~~~~~~~~~~~~
>     include/linux/rcupdate.h:659:28: note: in expansion of macro 'rcu_dereference_check'
>       659 | #define rcu_dereference(p) rcu_dereference_check(p, 0)
>           |                            ^~~~~~~~~~~~~~~~~~~~~
>     kernel/bpf/bpf_cgrp_storage.c:55:25: note: in expansion of macro 'rcu_dereference'
>        55 |         local_storage = rcu_dereference(cgroup->bpf_cgrp_storage);
>           |                         ^~~~~~~~~~~~~~~
>     kernel/bpf/bpf_cgrp_storage.c:55:47: error: 'struct cgroup' has no member named 'bpf_cgrp_storage'
>        55 |         local_storage = rcu_dereference(cgroup->bpf_cgrp_storage);
>           |                                               ^~
>     include/linux/compiler_types.h:337:23: note: in definition of macro '__compiletime_assert'
>       337 |                 if (!(condition))                                       \
>           |                       ^~~~~~~~~
>     include/linux/compiler_types.h:357:9: note: in expansion of macro '_compiletime_assert'
>       357 |         _compiletime_assert(condition, msg, __compiletime_assert_, __COUNTER__)
>           |         ^~~~~~~~~~~~~~~~~~~
>     include/asm-generic/rwonce.h:36:9: note: in expansion of macro 'compiletime_assert'
>        36 |         compiletime_assert(__native_word(t) || sizeof(t) == sizeof(long long),  \
>           |         ^~~~~~~~~~~~~~~~~~
>     include/asm-generic/rwonce.h:36:28: note: in expansion of macro '__native_word'
>        36 |         compiletime_assert(__native_word(t) || sizeof(t) == sizeof(long long),  \
>           |                            ^~~~~~~~~~~~~
>     include/asm-generic/rwonce.h:49:9: note: in expansion of macro 'compiletime_assert_rwonce_type'
>        49 |         compiletime_assert_rwonce_type(x);                              \
>           |         ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
>     include/linux/rcupdate.h:439:50: note: in expansion of macro 'READ_ONCE'
>       439 |         typeof(*p) *local = (typeof(*p) *__force)READ_ONCE(p); \
>           |                                                  ^~~~~~~~~
>     include/linux/rcupdate.h:587:9: note: in expansion of macro '__rcu_dereference_check'
>       587 |         __rcu_dereference_check((p), __UNIQUE_ID(rcu), \
>           |         ^~~~~~~~~~~~~~~~~~~~~~~
>     include/linux/rcupdate.h:659:28: note: in expansion of macro 'rcu_dereference_check'
>       659 | #define rcu_dereference(p) rcu_dereference_check(p, 0)
>           |                            ^~~~~~~~~~~~~~~~~~~~~
>     kernel/bpf/bpf_cgrp_storage.c:55:25: note: in expansion of macro 'rcu_dereference'
>        55 |         local_storage = rcu_dereference(cgroup->bpf_cgrp_storage);
>           |                         ^~~~~~~~~~~~~~~
>     kernel/bpf/bpf_cgrp_storage.c:55:47: error: 'struct cgroup' has no member named 'bpf_cgrp_storage'
>        55 |         local_storage = rcu_dereference(cgroup->bpf_cgrp_storage);
>           |                                               ^~
>     include/linux/compiler_types.h:337:23: note: in definition of macro '__compiletime_assert'
>       337 |                 if (!(condition))                                       \
>           |                       ^~~~~~~~~
>     include/linux/compiler_types.h:357:9: note: in expansion of macro '_compiletime_assert'
>       357 |         _compiletime_assert(condition, msg, __compiletime_assert_, __COUNTER__)
>           |         ^~~~~~~~~~~~~~~~~~~
> 
> 
> vim +43 kernel/bpf/bpf_cgrp_storage.c
> 
>      38	
>      39	static struct bpf_local_storage __rcu **cgroup_storage_ptr(void *owner)
>      40	{
>      41		struct cgroup *cg = owner;
>      42	
>    > 43		return &cg->bpf_cgrp_storage;
>      44	}
>      45	
> 

^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [PATCH bpf-next v2 2/6] bpf: Implement cgroup storage available to non-cgroup-attached bpf progs
  2022-10-21  5:22   ` David Vernet
  2022-10-21  5:26     ` David Vernet
@ 2022-10-21 17:33     ` Yonghong Song
  2022-10-21 19:57       ` David Vernet
  1 sibling, 1 reply; 23+ messages in thread
From: Yonghong Song @ 2022-10-21 17:33 UTC (permalink / raw)
  To: David Vernet, Yonghong Song
  Cc: bpf, Alexei Starovoitov, Andrii Nakryiko, Daniel Borkmann,
	kernel-team, KP Singh, Martin KaFai Lau, Tejun Heo



On 10/20/22 10:22 PM, David Vernet wrote:
> On Thu, Oct 20, 2022 at 03:13:06PM -0700, Yonghong Song wrote:
>> Similar to sk/inode/task storage, implement similar cgroup local storage.
>>
>> There already exists a local storage implementation for cgroup-attached
>> bpf programs.  See map type BPF_MAP_TYPE_CGROUP_STORAGE and helper
>> bpf_get_local_storage(). But there are use cases such that non-cgroup
>> attached bpf progs wants to access cgroup local storage data. For example,
>> tc egress prog has access to sk and cgroup. It is possible to use
>> sk local storage to emulate cgroup local storage by storing data in socket.
>> But this is a waste as it could be lots of sockets belonging to a particular
>> cgroup. Alternatively, a separate map can be created with cgroup id as the key.
>> But this will introduce additional overhead to manipulate the new map.
>> A cgroup local storage, similar to existing sk/inode/task storage,
>> should help for this use case.
>>
>> The life-cycle of storage is managed with the life-cycle of the
>> cgroup struct.  i.e. the storage is destroyed along with the owning cgroup
>> with a callback to the bpf_cgrp_storage_free when cgroup itself
>> is deleted.
>>
>> The userspace map operations can be done by using a cgroup fd as a key
>> passed to the lookup, update and delete operations.
>>
>> Typically, the following code is used to get the current cgroup:
>>      struct task_struct *task = bpf_get_current_task_btf();
>>      ... task->cgroups->dfl_cgrp ...
>> and in structure task_struct definition:
>>      struct task_struct {
>>          ....
>>          struct css_set __rcu            *cgroups;
>>          ....
>>      }
>> With sleepable program, accessing task->cgroups is not protected by rcu_read_lock.
>> So the current implementation only supports non-sleepable program and supporting
>> sleepable program will be the next step together with adding rcu_read_lock
>> protection for rcu tagged structures.
>>
>> Since map name BPF_MAP_TYPE_CGROUP_STORAGE has been used for old cgroup local
>> storage support, the new map name BPF_MAP_TYPE_CGRP_STORAGE is used
>> for cgroup storage available to non-cgroup-attached bpf programs. The old
>> cgroup storage supports bpf_get_local_storage() helper to get the cgroup data.
>> The new cgroup storage helper bpf_cgrp_storage_get() can provide similar
>> functionality. While old cgroup storage pre-allocates storage memory, the new
>> mechanism can also pre-allocate with a user space bpf_map_update_elem() call
>> to avoid potential run-time memory allocaiton failure.
> 
> s/allocaiton/allocation

ack.

> 
>> Therefore, the new cgroup storage can provide all functionality w.r.t.
>> the old one. So in uapi bpf.h, the old BPF_MAP_TYPE_CGROUP_STORAGE is alias to
>> BPF_MAP_TYPE_CGROUP_STORAGE_DEPRECATED to indicate the old cgroup storage can
>> be deprecated since the new one can provide the same functionality.
>>
>> Signed-off-by: Yonghong Song <yhs@fb.com>
>> ---
>>   include/linux/bpf.h            |   3 +
>>   include/linux/bpf_types.h      |   1 +
>>   include/linux/cgroup-defs.h    |   4 +
>>   include/uapi/linux/bpf.h       |  48 +++++-
>>   kernel/bpf/Makefile            |   2 +-
>>   kernel/bpf/bpf_cgrp_storage.c  | 276 +++++++++++++++++++++++++++++++++
>>   kernel/bpf/helpers.c           |   6 +
>>   kernel/bpf/syscall.c           |   3 +-
>>   kernel/bpf/verifier.c          |  13 +-
>>   kernel/cgroup/cgroup.c         |   4 +
>>   kernel/trace/bpf_trace.c       |   4 +
>>   scripts/bpf_doc.py             |   2 +
>>   tools/include/uapi/linux/bpf.h |  48 +++++-
>>   13 files changed, 409 insertions(+), 5 deletions(-)
>>   create mode 100644 kernel/bpf/bpf_cgrp_storage.c
>>
>> diff --git a/include/linux/bpf.h b/include/linux/bpf.h
>> index 9e7d46d16032..674da3129248 100644
>> --- a/include/linux/bpf.h
>> +++ b/include/linux/bpf.h
>> @@ -2045,6 +2045,7 @@ struct bpf_link *bpf_link_by_id(u32 id);
>>   
>>   const struct bpf_func_proto *bpf_base_func_proto(enum bpf_func_id func_id);
>>   void bpf_task_storage_free(struct task_struct *task);
>> +void bpf_cgrp_storage_free(struct cgroup *cgroup);
>>   bool bpf_prog_has_kfunc_call(const struct bpf_prog *prog);
>>   const struct btf_func_model *
>>   bpf_jit_find_kfunc_model(const struct bpf_prog *prog,
>> @@ -2537,6 +2538,8 @@ extern const struct bpf_func_proto bpf_copy_from_user_task_proto;
>>   extern const struct bpf_func_proto bpf_set_retval_proto;
>>   extern const struct bpf_func_proto bpf_get_retval_proto;
>>   extern const struct bpf_func_proto bpf_user_ringbuf_drain_proto;
>> +extern const struct bpf_func_proto bpf_cgrp_storage_get_proto;
>> +extern const struct bpf_func_proto bpf_cgrp_storage_delete_proto;
>>   
>>   const struct bpf_func_proto *tracing_prog_func_proto(
>>     enum bpf_func_id func_id, const struct bpf_prog *prog);
>> diff --git a/include/linux/bpf_types.h b/include/linux/bpf_types.h
>> index 2c6a4f2562a7..f9d5aa62fed0 100644
>> --- a/include/linux/bpf_types.h
>> +++ b/include/linux/bpf_types.h
>> @@ -90,6 +90,7 @@ BPF_MAP_TYPE(BPF_MAP_TYPE_CGROUP_ARRAY, cgroup_array_map_ops)
>>   #ifdef CONFIG_CGROUP_BPF
>>   BPF_MAP_TYPE(BPF_MAP_TYPE_CGROUP_STORAGE, cgroup_storage_map_ops)
>>   BPF_MAP_TYPE(BPF_MAP_TYPE_PERCPU_CGROUP_STORAGE, cgroup_storage_map_ops)
>> +BPF_MAP_TYPE(BPF_MAP_TYPE_CGRP_STORAGE, cgrp_storage_map_ops)
>>   #endif
>>   BPF_MAP_TYPE(BPF_MAP_TYPE_HASH, htab_map_ops)
>>   BPF_MAP_TYPE(BPF_MAP_TYPE_PERCPU_HASH, htab_percpu_map_ops)
>> diff --git a/include/linux/cgroup-defs.h b/include/linux/cgroup-defs.h
>> index 8f481d1b159a..4a72bc3a0a4e 100644
>> --- a/include/linux/cgroup-defs.h
>> +++ b/include/linux/cgroup-defs.h
>> @@ -504,6 +504,10 @@ struct cgroup {
>>   	/* Used to store internal freezer state */
>>   	struct cgroup_freezer_state freezer;
>>   
>> +#ifdef CONFIG_CGROUP_BPF
>> +	struct bpf_local_storage __rcu  *bpf_cgrp_storage;
>> +#endif
>> +
>>   	/* All ancestors including self */
>>   	struct cgroup *ancestors[];
>>   };
>> diff --git a/include/uapi/linux/bpf.h b/include/uapi/linux/bpf.h
>> index 17f61338f8f8..2d7f79bf3500 100644
>> --- a/include/uapi/linux/bpf.h
>> +++ b/include/uapi/linux/bpf.h
>> @@ -922,7 +922,14 @@ enum bpf_map_type {
>>   	BPF_MAP_TYPE_CPUMAP,
>>   	BPF_MAP_TYPE_XSKMAP,
>>   	BPF_MAP_TYPE_SOCKHASH,
>> -	BPF_MAP_TYPE_CGROUP_STORAGE,
>> +	BPF_MAP_TYPE_CGROUP_STORAGE_DEPRECATED,
>> +	/* BPF_MAP_TYPE_CGROUP_STORAGE is available to bpf programs attaching
>> +	 * to a cgroup. The newer BPF_MAP_TYPE_CGRP_STORAGE is available to
>> +	 * both cgroup-attached and other progs and supports all functionality
>> +	 * provided by BPF_MAP_TYPE_CGROUP_STORAGE. So mark
>> +	 * BPF_MAP_TYPE_CGROUP_STORAGE deprecated.
>> +	 */
>> +	BPF_MAP_TYPE_CGROUP_STORAGE = BPF_MAP_TYPE_CGROUP_STORAGE_DEPRECATED,
>>   	BPF_MAP_TYPE_REUSEPORT_SOCKARRAY,
>>   	BPF_MAP_TYPE_PERCPU_CGROUP_STORAGE,
>>   	BPF_MAP_TYPE_QUEUE,
>> @@ -935,6 +942,7 @@ enum bpf_map_type {
>>   	BPF_MAP_TYPE_TASK_STORAGE,
>>   	BPF_MAP_TYPE_BLOOM_FILTER,
>>   	BPF_MAP_TYPE_USER_RINGBUF,
>> +	BPF_MAP_TYPE_CGRP_STORAGE,
>>   };
>>   
>>   /* Note that tracing related programs such as
>> @@ -5435,6 +5443,42 @@ union bpf_attr {
>>    *		**-E2BIG** if user-space has tried to publish a sample which is
>>    *		larger than the size of the ring buffer, or which cannot fit
>>    *		within a struct bpf_dynptr.
>> + *
>> + * void *bpf_cgrp_storage_get(struct bpf_map *map, struct cgroup *cgroup, void *value, u64 flags)
>> + *	Description
>> + *		Get a bpf_local_storage from the *cgroup*.
>> + *
>> + *		Logically, it could be thought of as getting the value from
>> + *		a *map* with *cgroup* as the **key**.  From this
>> + *		perspective,  the usage is not much different from
>> + *		**bpf_map_lookup_elem**\ (*map*, **&**\ *cgroup*) except this
>> + *		helper enforces the key must be a cgroup struct and the map must also
>> + *		be a **BPF_MAP_TYPE_CGRP_STORAGE**.
>> + *
>> + *		Underneath, the value is stored locally at *cgroup* instead of
>> + *		the *map*.  The *map* is used as the bpf-local-storage
>> + *		"type". The bpf-local-storage "type" (i.e. the *map*) is
>> + *		searched against all bpf_local_storage residing at *cgroup*.
> 
> IMO this paragraph is a bit hard to parse. Please correct me if I'm
> wrong, but I think what it's trying to convey is that when an instance
> of cgroup bpf-local-storage is accessed by a program in e.g.
> bpf_cgrp_storage_get(), all of the cgroup bpf_local_storage entries are
> iterated over in the struct cgroup object until this program's local
> storage instance is found. Is that right? If so, perhaps something like
> this would be more clear:

Yes, your interpretation above is correct.

> 
> In reality, the local-storage value is embedded directly inside of the
> *cgroup* object itself, rather than being located in the
> **BPF_MAP_TYPE_CGRP_STORAGE** map. When the local-storage value is
> queried for some *map* on a *cgroup* object, the kernel will perform an
> O(n) iteration over all of the live local-storage values for that
> *cgroup* object until the local-storage value for the *map* is found.

Sounds good. I will change the explanation as suggested above.
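For illustration only, the O(n) search described above can be modeled with a small userspace sketch (the struct and function names here are invented for the example; the real kernel types live in bpf_local_storage.h):

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical, simplified model of bpf_local_storage: each owner
 * (e.g. a struct cgroup) carries a linked list of (map, data) pairs,
 * one per BPF_MAP_TYPE_CGRP_STORAGE map that has stored a value for
 * this owner.
 */
struct selem {
	const void *map;    /* which storage map this element belongs to */
	void *data;         /* the per-owner value for that map */
	struct selem *next;
};

struct local_storage {
	struct selem *list;
};

/* O(n) scan over all live storage elements of one owner until the
 * element belonging to 'map' is found, mirroring the slow path of the
 * lookup done by bpf_cgrp_storage_get() (minus the cache_idx fast path).
 */
static void *local_storage_lookup(struct local_storage *ls, const void *map)
{
	struct selem *e;

	for (e = ls->list; e; e = e->next)
		if (e->map == map)
			return e->data;
	return NULL;
}
```

In the kernel, a small per-map cache slot (smap->cache_idx) lets frequently used maps skip this scan, so the list walk above is only the fallback.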

> 
> What do you think?
> 
>> + *		An optional *flags* (**BPF_LOCAL_STORAGE_GET_F_CREATE**) can be
>> + *		used such that a new bpf_local_storage will be
>> + *		created if one does not exist.  *value* can be used
>> + *		together with **BPF_LOCAL_STORAGE_GET_F_CREATE** to specify
>> + *		the initial value of a bpf_local_storage.  If *value* is
>> + *		**NULL**, the new bpf_local_storage will be zero initialized.
>> + *	Return
>> + *		A bpf_local_storage pointer is returned on success.
>> + *
>> + *		**NULL** if not found or there was an error in adding
>> + *		a new bpf_local_storage.
>> + *
>> + * long bpf_cgrp_storage_delete(struct bpf_map *map, struct cgroup *cgroup)
>> + *	Description
>> + *		Delete a bpf_local_storage from a *cgroup*.
>> + *	Return
>> + *		0 on success.
>> + *
>> + *		**-ENOENT** if the bpf_local_storage cannot be found.
>>    */
>>   #define ___BPF_FUNC_MAPPER(FN, ctx...)			\
>>   	FN(unspec, 0, ##ctx)				\
>> @@ -5647,6 +5691,8 @@ union bpf_attr {
>>   	FN(tcp_raw_check_syncookie_ipv6, 207, ##ctx)	\
>>   	FN(ktime_get_tai_ns, 208, ##ctx)		\
>>   	FN(user_ringbuf_drain, 209, ##ctx)		\
>> +	FN(cgrp_storage_get, 210, ##ctx)		\
>> +	FN(cgrp_storage_delete, 211, ##ctx)		\
>>   	/* */
>>   
>>   /* backwards-compatibility macros for users of __BPF_FUNC_MAPPER that don't
>> diff --git a/kernel/bpf/Makefile b/kernel/bpf/Makefile
>> index 341c94f208f4..3a12e6b400a2 100644
>> --- a/kernel/bpf/Makefile
>> +++ b/kernel/bpf/Makefile
>> @@ -25,7 +25,7 @@ ifeq ($(CONFIG_PERF_EVENTS),y)
>>   obj-$(CONFIG_BPF_SYSCALL) += stackmap.o
>>   endif
>>   ifeq ($(CONFIG_CGROUPS),y)
> 
> I assume that you double checked that it's valid to compile the helper
> with CONFIG_CGROUPS && !CONFIG_CGROUP_BPF, but I must admit that even if
> that's the case, I'm not following why we would want the map to be
> compiled with a different kconfig option than the helper that provides
> access to it. If theres's a precedent for doing this then I suppose it's
> fine, but it does seem wrong and/or at least wasteful to compile these
> helpers in if CONFIG_CGROUPS is defined but CONFIG_CGROUP_BPF is not.

The following is my understanding.
CONFIG_CGROUP_BPF guards kernel/bpf/cgroup.c, which mostly contains
the implementation for cgroup-attached program types, helpers, etc.

A lot of other cgroup-related functionality, like cgroup_iter and some
cgroup-related helpers (not tied to cgroup-attached program types),
is guarded by CONFIG_CGROUPS and CONFIG_BPF_SYSCALL.

Note that it is entirely possible for CONFIG_CGROUP_BPF to be 'n' while
CONFIG_CGROUPS and CONFIG_BPF_SYSCALL are 'y'.

So for the cgroup local storage implemented in this patch set,
guarding with CONFIG_CGROUPS and CONFIG_BPF_SYSCALL seems appropriate.

> 
>> -obj-$(CONFIG_BPF_SYSCALL) += cgroup_iter.o
>> +obj-$(CONFIG_BPF_SYSCALL) += cgroup_iter.o bpf_cgrp_storage.o
>>   endif
>>   obj-$(CONFIG_CGROUP_BPF) += cgroup.o
>>   ifeq ($(CONFIG_INET),y)
>> diff --git a/kernel/bpf/bpf_cgrp_storage.c b/kernel/bpf/bpf_cgrp_storage.c
>> new file mode 100644
>> index 000000000000..bcc5f0fc20be
>> --- /dev/null
>> +++ b/kernel/bpf/bpf_cgrp_storage.c
>> @@ -0,0 +1,276 @@
>> +// SPDX-License-Identifier: GPL-2.0
>> +/*
>> + * Copyright (c) 2022 Meta Platforms, Inc. and affiliates.
>> + */
>> +
>> +#include <linux/types.h>
>> +#include <linux/bpf.h>
>> +#include <linux/bpf_local_storage.h>
>> +#include <uapi/linux/btf.h>
>> +#include <linux/btf_ids.h>
>> +
>> +DEFINE_BPF_STORAGE_CACHE(cgroup_cache);
>> +
>> +static DEFINE_PER_CPU(int, bpf_cgrp_storage_busy);
>> +
>> +static void bpf_cgrp_storage_lock(void)
>> +{
>> +	migrate_disable();
>> +	this_cpu_inc(bpf_cgrp_storage_busy);
>> +}
>> +
>> +static void bpf_cgrp_storage_unlock(void)
>> +{
>> +	this_cpu_dec(bpf_cgrp_storage_busy);
>> +	migrate_enable();
>> +}
>> +
>> +static bool bpf_cgrp_storage_trylock(void)
>> +{
>> +	migrate_disable();
>> +	if (unlikely(this_cpu_inc_return(bpf_cgrp_storage_busy) != 1)) {
>> +		this_cpu_dec(bpf_cgrp_storage_busy);
>> +		migrate_enable();
>> +		return false;
>> +	}
>> +	return true;
>> +}
>> +
>> +static struct bpf_local_storage __rcu **cgroup_storage_ptr(void *owner)
>> +{
>> +	struct cgroup *cg = owner;
>> +
>> +	return &cg->bpf_cgrp_storage;
>> +}
>> +
>> +void bpf_cgrp_storage_free(struct cgroup *cgroup)
>> +{
>> +	struct bpf_local_storage *local_storage;
>> +	struct bpf_local_storage_elem *selem;
>> +	bool free_cgroup_storage = false;
>> +	struct hlist_node *n;
>> +	unsigned long flags;
>> +
>> +	rcu_read_lock();
>> +	local_storage = rcu_dereference(cgroup->bpf_cgrp_storage);
>> +	if (!local_storage) {
>> +		rcu_read_unlock();
>> +		return;
>> +	}
>> +
>> +	/* Neither the bpf_prog nor the bpf-map's syscall
> 
> Very minor nit, but I think using a hyphen in bpf-map like this is
> incorrect as it's not a compound adjective. Applies elsewhere as well. I
> don't believe "added-to" or "deleted-from" require hyphens either.

ack.

> 
>> +	 * could be modifying the local_storage->list now.
>> +	 * Thus, no elem can be added-to or deleted-from the
>> +	 * local_storage->list by the bpf_prog or by the bpf-map's syscall.
>> +	 *
>> +	 * It is racing with bpf_local_storage_map_free() alone
>> +	 * when unlinking elem from the local_storage->list and
>> +	 * the map's bucket->list.
>> +	 */
>> +	bpf_cgrp_storage_lock();
>> +	raw_spin_lock_irqsave(&local_storage->lock, flags);
>> +	hlist_for_each_entry_safe(selem, n, &local_storage->list, snode) {
>> +		bpf_selem_unlink_map(selem);
>> +		free_cgroup_storage =
>> +			bpf_selem_unlink_storage_nolock(local_storage, selem, false, false);
> 
> This still requires a comment explaining why it's OK to overwrite
> free_cgroup_storage with a previous value from calling
> bpf_selem_unlink_storage_nolock(). Even if that is safe, this looks like
> a pretty weird programming pattern, and IMO doing this feels more
> intentional and future-proof:
> 
> if (bpf_selem_unlink_storage_nolock(local_storage, selem, false, false))
> 	free_cgroup_storage = true;

We have a comment a few lines below.
   /* free_cgroup_storage should always be true as long as
    * local_storage->list was non-empty.
    */
   if (free_cgroup_storage)
	kfree_rcu(local_storage, rcu);

I will add more explanation in the above code like

	bpf_selem_unlink_map(selem);
	/* If the local_storage list has only one element,
	 * bpf_selem_unlink_storage_nolock() will return true;
	 * otherwise it returns false. This loop intends to remove
	 * all local storage, so the last iteration of the loop will
	 * set free_cgroup_storage to true.
	 */
	free_cgroup_storage =
		bpf_selem_unlink_storage_nolock(local_storage, selem, false, false);
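As a hypothetical userspace sketch of why overwriting the flag is safe: an unlink helper that returns "list became empty" reports false for every element but the last, so only the final iteration's return value matters (all names below are invented for the model, not the kernel's actual code):

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

struct selem {
	struct selem *next;
};

struct local_storage {
	struct selem *list;
};

/* Modeled after bpf_selem_unlink_storage_nolock(): unlink one element
 * and return true only when this removal left the list empty, i.e. the
 * storage itself can now be freed.
 */
static bool unlink_storage_nolock(struct local_storage *ls, struct selem *selem)
{
	struct selem **pp;

	for (pp = &ls->list; *pp; pp = &(*pp)->next) {
		if (*pp == selem) {
			*pp = selem->next;
			break;
		}
	}
	return ls->list == NULL;
}

/* The free path drains the whole list; free_storage is overwritten on
 * every iteration but ends up true because the last unlink empties the
 * list, matching the pattern in bpf_cgrp_storage_free().
 */
static bool drain_all(struct local_storage *ls)
{
	bool free_storage = false;

	while (ls->list)
		free_storage = unlink_storage_nolock(ls, ls->list);
	return free_storage;
}
```

Note that drain_all() returns false only when the list was empty to begin with, which is exactly the "as long as local_storage->list was non-empty" condition in the existing comment.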

> 
>> +	}
>> +	raw_spin_unlock_irqrestore(&local_storage->lock, flags);
>> +	bpf_cgrp_storage_unlock();
>> +	rcu_read_unlock();
>> +
>> +	/* free_cgroup_storage should always be true as long as
>> +	 * local_storage->list was non-empty.
>> +	 */
>> +	if (free_cgroup_storage)
>> +		kfree_rcu(local_storage, rcu);
>> +}
>> +
>> +static struct bpf_local_storage_data *
>> +cgroup_storage_lookup(struct cgroup *cgroup, struct bpf_map *map, bool cacheit_lockit)
>> +{
>> +	struct bpf_local_storage *cgroup_storage;
>> +	struct bpf_local_storage_map *smap;
>> +
>> +	cgroup_storage = rcu_dereference_check(cgroup->bpf_cgrp_storage,
>> +					       bpf_rcu_lock_held());
>> +	if (!cgroup_storage)
>> +		return NULL;
>> +
>> +	smap = (struct bpf_local_storage_map *)map;
>> +	return bpf_local_storage_lookup(cgroup_storage, smap, cacheit_lockit);
>> +}
>> +
>> +static void *bpf_cgrp_storage_lookup_elem(struct bpf_map *map, void *key)
>> +{
>> +	struct bpf_local_storage_data *sdata;
>> +	struct cgroup *cgroup;
>> +	int fd;
>> +
>> +	fd = *(int *)key;
>> +	cgroup = cgroup_get_from_fd(fd);
>> +	if (IS_ERR(cgroup))
>> +		return ERR_CAST(cgroup);
>> +
>> +	bpf_cgrp_storage_lock();
>> +	sdata = cgroup_storage_lookup(cgroup, map, true);
>> +	bpf_cgrp_storage_unlock();
>> +	cgroup_put(cgroup);
>> +	return sdata ? sdata->data : NULL;
>> +}
> 
> Stanislav pointed out in the v1 revision that there's a lot of very
> similar logic in task storage, and I think you'd mentioned that you were
> going to think about generalizing some of that. Have you had a chance to
> consider?

It is hard to have a common function for
lookup_elem/update_elem/delete_elem(). They are quite different, as
each heavily involves task/cgroup-specific functions.

But map_alloc and map_free could share common helpers.
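One piece map_alloc/map_free already share is the storage cache index management (bpf_local_storage_cache_idx_get()/_free()). A rough userspace model of that refcounted slot picker (simplified and with invented names; the real version lives in bpf_local_storage.c and takes a spinlock):

```c
#include <assert.h>
#include <stdint.h>

#define CACHE_SIZE 16  /* mirrors BPF_LOCAL_STORAGE_CACHE_SIZE */

struct storage_cache {
	uint64_t idx_usage_counts[CACHE_SIZE];
};

/* Pick the least-used cache slot for a newly created storage map and
 * bump its usage count, roughly what bpf_local_storage_cache_idx_get()
 * does at map_alloc time.
 */
static int cache_idx_get(struct storage_cache *cache)
{
	uint64_t min = cache->idx_usage_counts[0];
	int i, idx = 0;

	for (i = 1; i < CACHE_SIZE; i++) {
		if (cache->idx_usage_counts[i] < min) {
			min = cache->idx_usage_counts[i];
			idx = i;
		}
	}
	cache->idx_usage_counts[idx]++;
	return idx;
}

/* Release a slot at map_free time, as bpf_local_storage_cache_idx_free()
 * does, so a later map can reuse it.
 */
static void cache_idx_free(struct storage_cache *cache, int idx)
{
	cache->idx_usage_counts[idx]--;
}
```

Since task storage and the new cgroup storage both open-code this get/free pairing around bpf_local_storage_map_alloc()/_free(), that pairing is a natural candidate for a shared helper.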

> 
>> +static int bpf_cgrp_storage_update_elem(struct bpf_map *map, void *key,
>> +					  void *value, u64 map_flags)
>> +{
>> +	struct bpf_local_storage_data *sdata;
>> +	struct cgroup *cgroup;
>> +	int fd;
>> +
>> +	fd = *(int *)key;
>> +	cgroup = cgroup_get_from_fd(fd);
>> +	if (IS_ERR(cgroup))
>> +		return PTR_ERR(cgroup);
>> +
>> +	bpf_cgrp_storage_lock();
>> +	sdata = bpf_local_storage_update(cgroup, (struct bpf_local_storage_map *)map,
>> +					 value, map_flags, GFP_ATOMIC);
>> +	bpf_cgrp_storage_unlock();
>> +	cgroup_put(cgroup);
>> +	return PTR_ERR_OR_ZERO(sdata);
>> +}
>> +
>> +static int cgroup_storage_delete(struct cgroup *cgroup, struct bpf_map *map)
>> +{
>> +	struct bpf_local_storage_data *sdata;
>> +
>> +	sdata = cgroup_storage_lookup(cgroup, map, false);
>> +	if (!sdata)
>> +		return -ENOENT;
>> +
>> +	bpf_selem_unlink(SELEM(sdata), true);
>> +	return 0;
>> +}
>> +
>> +static int bpf_cgrp_storage_delete_elem(struct bpf_map *map, void *key)
>> +{
>> +	struct cgroup *cgroup;
>> +	int err, fd;
>> +
>> +	fd = *(int *)key;
>> +	cgroup = cgroup_get_from_fd(fd);
>> +	if (IS_ERR(cgroup))
>> +		return PTR_ERR(cgroup);
>> +
>> +	bpf_cgrp_storage_lock();
>> +	err = cgroup_storage_delete(cgroup, map);
>> +	bpf_cgrp_storage_unlock();
>> +	cgroup_put(cgroup);
>> +	return err;
>> +}
>> +
>> +static int notsupp_get_next_key(struct bpf_map *map, void *key, void *next_key)
>> +{
>> +	return -ENOTSUPP;
>> +}
>> +
>> +static struct bpf_map *cgroup_storage_map_alloc(union bpf_attr *attr)
>> +{
>> +	struct bpf_local_storage_map *smap;
>> +
>> +	smap = bpf_local_storage_map_alloc(attr);
>> +	if (IS_ERR(smap))
>> +		return ERR_CAST(smap);
>> +
>> +	smap->cache_idx = bpf_local_storage_cache_idx_get(&cgroup_cache);
>> +	return &smap->map;
>> +}
>> +
>> +static void cgroup_storage_map_free(struct bpf_map *map)
>> +{
>> +	struct bpf_local_storage_map *smap;
>> +
>> +	smap = (struct bpf_local_storage_map *)map;
>> +	bpf_local_storage_cache_idx_free(&cgroup_cache, smap->cache_idx);
>> +	bpf_local_storage_map_free(smap, NULL);
>> +}
>> +
[...]


* Re: [PATCH bpf-next v2 6/6] docs/bpf: Add documentation for map type BPF_MAP_TYPE_CGRP_STROAGE
  2022-10-21  7:12   ` David Vernet
@ 2022-10-21 17:46     ` Yonghong Song
  0 siblings, 0 replies; 23+ messages in thread
From: Yonghong Song @ 2022-10-21 17:46 UTC (permalink / raw)
  To: David Vernet, Yonghong Song
  Cc: bpf, Alexei Starovoitov, Andrii Nakryiko, Daniel Borkmann,
	kernel-team, KP Singh, Martin KaFai Lau, Tejun Heo



On 10/21/22 12:12 AM, David Vernet wrote:
> On Thu, Oct 20, 2022 at 03:13:27PM -0700, Yonghong Song wrote:
>> Add some descriptions and examples for BPF_MAP_TYPE_CGRP_STROAGE.
>> Also illustate the major difference between BPF_MAP_TYPE_CGRP_STROAGE
>> and BPF_MAP_TYPE_CGROUP_STORAGE and recommend to use
>> BPF_MAP_TYPE_CGRP_STROAGE instead of BPF_MAP_TYPE_CGROUP_STORAGE
>> in the end.
>>
>> Signed-off-by: Yonghong Song <yhs@fb.com>
>> ---
>>   Documentation/bpf/map_cgrp_storage.rst | 104 +++++++++++++++++++++++++
>>   1 file changed, 104 insertions(+)
>>   create mode 100644 Documentation/bpf/map_cgrp_storage.rst
>>
>> diff --git a/Documentation/bpf/map_cgrp_storage.rst b/Documentation/bpf/map_cgrp_storage.rst
>> new file mode 100644
>> index 000000000000..15691aab7fc7
>> --- /dev/null
>> +++ b/Documentation/bpf/map_cgrp_storage.rst
>> @@ -0,0 +1,104 @@
>> +.. SPDX-License-Identifier: GPL-2.0-only
>> +.. Copyright (C) 2022 Meta Platforms, Inc. and affiliates.
>> +
>> +===========================
>> +BPF_MAP_TYPE_CGRP_STORAGE
>> +===========================
>> +
>> +The ``BPF_MAP_TYPE_CGRP_STORAGE`` map type represents a local fix-sized
>> +storage for cgroups. It is only available with ``CONFIG_CGROUP_BPF``.
>> +The programs are made available by the same Kconfig. The
>> +data for a particular cgroup can be retrieved by looking up the map
>> +with that cgroup.
>> +
>> +This document describes the usage and semantics of the
>> +``BPF_MAP_TYPE_CGRP_STORAGE`` map type.
>> +
>> +Usage
>> +=====
>> +
>> +The map key must be ``sizeof(int)`` representing a cgroup fd.
>> +To access the storage in a program, use ``bpf_cgrp_storage_get``::
>> +
>> +    void *bpf_cgrp_storage_get(struct bpf_map *map, struct cgroup *cgroup, void *value, u64 flags)
>> +
>> +``flags`` could be 0 or ``BPF_LOCAL_STORAGE_GET_F_CREATE`` which indicates that
>> +a new local storage will be created if one does not exist.
>> +
>> +The local storage can be removed with ``bpf_cgrp_storage_delete``::
>> +
>> +    long bpf_cgrp_storage_delete(struct bpf_map *map, struct cgruop *cgroup)
> 
> s/cgruop/cgroup

ack.

> 
>> +
>> +The map is available to all program types.
>> +
>> +Examples
>> +========
>> +
>> +An bpf-program example with BPF_MAP_TYPE_CGRP_STORAGE::
> 
> s/An bpf-program/A bpf program

ack.

> 
>> +
>> +    #include <vmlinux.h>
>> +    #include <bpf/bpf_helpers.h>
>> +    #include <bpf/bpf_tracing.h>
>> +
>> +    struct {
>> +            __uint(type, BPF_MAP_TYPE_CGRP_STORAGE);
>> +            __uint(map_flags, BPF_F_NO_PREALLOC);
>> +            __type(key, int);
>> +            __type(value, long);
>> +    } cgrp_storage SEC(".maps");
>> +
>> +    SEC("tp_btf/sys_enter")
>> +    int BPF_PROG(on_enter, struct pt_regs *regs, long id)
>> +    {
>> +            struct task_struct *task = bpf_get_current_task_btf();
>> +            long *ptr;
>> +
>> +            ptr = bpf_cgrp_storage_get(&cgrp_storage, task->cgroups->dfl_cgrp, 0,
>> +                                       BPF_LOCAL_STORAGE_GET_F_CREATE);
>> +            if (ptr)
>> +                __sync_fetch_and_add(ptr, 1);
>> +
>> +            return 0;
>> +    }
>> +
>> +Userspace accessing map declared above::
>> +
>> +    #include <linux/bpf.h>
>> +    #include <linux/libbpf.h>
>> +
>> +    __u32 map_lookup(struct bpf_map *map, int cgrp_fd)
>> +    {
>> +            __u32 *value;
>> +            value = bpf_map_lookup_elem(bpf_map__fd(map), &cgrp_fd);
>> +            if (value)
>> +                return *value;
>> +            return 0;
>> +    }
>> +
>> +Difference Between BPF_MAP_TYPE_CGRP_STORAGE and BPF_MAP_TYPE_CGROUP_STORAGE
>> +============================================================================
>> +
>> +The main difference between ``BPF_MAP_TYPE_CGRP_STORAGE`` and ``BPF_MAP_TYPE_CGROUP_STORAGE``
>> +is that ``BPF_MAP_TYPE_CGRP_STORAGE`` can be used by all program types while
>> +``BPF_MAP_TYPE_CGROUP_STORAGE`` is available only to cgroup program types.
> 
> Suggestion: I'd consider going into just a bit more detail about what's
> meant by "cgroup program types" here. Maybe just 1-2 sentences
> describing the types of programs where the deprecated cgroup storage map
> type is available, and why. A program that's using the
> BPF_MAP_TYPE_CGRP_STORAGE map is conceptually also a "cgroup program
> type" in that it's querying, analyzing, etc cgroups, so being explicit
> may help get ahead of any confusion.

Right, 'cgroup program types' is not precise either. It should be 'only
available to programs attached to cgroups'. I will add a few examples
like BPF_CGROUP_INET_INGRESS, BPF_CGROUP_SOCK_OPS, etc.

> 
> Also, given that BPF_MAP_TYPE_CGROUP_STORAGE is deprecated, and is
> extremely close in name to BPF_MAP_TYPE_CGRP_STORAGE, perhaps we should
> start this section out by explaining that BPF_MAP_TYPE_CGROUP_STORAGE is
> a deprecated map type that's now aliased to
> BPF_MAP_TYPE_CGROUP_STORAGE_DEPRECATED, and then reference it as
> BPF_MAP_TYPE_CGROUP_STORAGE_DEPRECATED for the rest of the page. I think
> it's important to highlight that the map type is deprecated, and give
> some historical context here so that users understand why they should
> use BPF_MAP_TYPE_CGRP_STORAGE, and why we have two such confusingly-
> similarly named maps. What do you think?

Makes sense. Putting the deprecation note at the beginning of this
section will make readers aware of the difference right away.

> 
>> +There are some other differences as well.
>> +
>> +(1). ``BPF_MAP_TYPE_CGRP_STORAGE`` supports local storage for more than one
>> +     cgroups while ``BPF_MAP_TYPE_CGROUP_STORAGE`` only support one, the one attached
> 
> s/cgroups/cgroup

ack.

> 
>> +     by the bpf program.
>> +
>> +(2). ``BPF_MAP_TYPE_CGROUP_STORAGE`` allocates local storage at attach time so
>> +     ``bpf_get_local_storage()`` always returns non-null local storage.
> 
> Suggestion: s/non-null/non-NULL. In general, I'd suggest changing null
> to NULL throughout the page, but I don't feel strongly about it.

ack.

> 
>> +     ``BPF_MAP_TYPE_CGRP_STORAGE`` allocates local storage at runtime so
>> +     it is possible that ``bpf_cgrp_storage_get()`` may return null local storage.
>> +     To avoid such null local storage issue, user space can do
>> +     ``bpf_map_update_elem()`` to pre-allocate local storage.
> 
> Should we specify that user space can preallocate by doing
> bpf_map_update_elem() _before_ the program is attached? Also, another
> small nit, but I think pre-allocate and de-allocate should just be
> preallocate and deallocate respectively.

Yes, bpf_map_update_elem() should be done before attachment. I will add 
this.

> 
>> +(3). ``BPF_MAP_TYPE_CGRP_STORAGE`` supports de-allocating local storage by bpf program
> 
> s/by bpf program/by a bpf program

ack.

> 
>> +     while ``BPF_MAP_TYPE_CGROUP_STORAGE`` only de-allocates storage during
>> +     prog detach time.
>> +
>> +So overall, ``BPF_MAP_TYPE_CGRP_STORAGE`` supports all ``BPF_MAP_TYPE_CGROUP_STORAGE``
>> +functionality and beyond. It is recommended to use ``BPF_MAP_TYPE_CGRP_STORAGE``
>> +instead of ``BPF_MAP_TYPE_CGROUP_STORAGE``.
> 
> As mentioned above, I think we need to go beyond stating that using
> BPF_MAP_TYPE_CGRP_STORAGE is recommended, and also explicitly and loudly
> call out that BPF_MAP_TYPE_CGROUP_STORAGE_DEPRECATED is deprecated and
> will not be supported indefinitely.

agree.

> 
> Overall though, this looks great. Thank you for writing it up.
> 
> Thanks,
> David


* Re: [PATCH bpf-next v2 2/6] bpf: Implement cgroup storage available to non-cgroup-attached bpf progs
  2022-10-20 22:13 ` [PATCH bpf-next v2 2/6] bpf: Implement cgroup storage available to non-cgroup-attached bpf progs Yonghong Song
  2022-10-21  5:22   ` David Vernet
       [not found]   ` <202210210932.nHqTyTmx-lkp@intel.com>
@ 2022-10-21 19:29   ` Yosry Ahmed
  2022-10-21 21:05     ` Yonghong Song
  2 siblings, 1 reply; 23+ messages in thread
From: Yosry Ahmed @ 2022-10-21 19:29 UTC (permalink / raw)
  To: Yonghong Song
  Cc: bpf, Alexei Starovoitov, Andrii Nakryiko, Daniel Borkmann,
	kernel-team, KP Singh, Martin KaFai Lau, Tejun Heo

On Thu, Oct 20, 2022 at 3:13 PM Yonghong Song <yhs@fb.com> wrote:
>
> Similar to sk/inode/task storage, implement similar cgroup local storage.
>
> There already exists a local storage implementation for cgroup-attached
> bpf programs.  See map type BPF_MAP_TYPE_CGROUP_STORAGE and helper
> bpf_get_local_storage(). But there are use cases such that non-cgroup
> attached bpf progs wants to access cgroup local storage data. For example,
> tc egress prog has access to sk and cgroup. It is possible to use
> sk local storage to emulate cgroup local storage by storing data in socket.
> But this is a waste as it could be lots of sockets belonging to a particular
> cgroup. Alternatively, a separate map can be created with cgroup id as the key.
> But this will introduce additional overhead to manipulate the new map.
> A cgroup local storage, similar to existing sk/inode/task storage,
> should help for this use case.
>
> The life-cycle of the storage is tied to the life-cycle of the
> cgroup struct, i.e., the storage is destroyed along with the owning cgroup,
> via a callback to bpf_cgrp_storage_free() when the cgroup itself
> is deleted.
>
> The userspace map operations can be done by using a cgroup fd as a key
> passed to the lookup, update and delete operations.
>
> Typically, the following code is used to get the current cgroup:
>     struct task_struct *task = bpf_get_current_task_btf();
>     ... task->cgroups->dfl_cgrp ...
> and in structure task_struct definition:
>     struct task_struct {
>         ....
>         struct css_set __rcu            *cgroups;
>         ....
>     }
> With a sleepable program, accessing task->cgroups is not protected by rcu_read_lock.
> So the current implementation only supports non-sleepable programs; supporting
> sleepable programs will be the next step, together with adding rcu_read_lock
> protection for rcu-tagged structures.
>
> Since the map name BPF_MAP_TYPE_CGROUP_STORAGE has already been used for the old
> cgroup local storage support, the new map name BPF_MAP_TYPE_CGRP_STORAGE is used
> for the cgroup storage available to non-cgroup-attached bpf programs. The old
> cgroup storage supports the bpf_get_local_storage() helper to get the cgroup data.
> The new cgroup storage helper bpf_cgrp_storage_get() provides similar
> functionality. While the old cgroup storage pre-allocates storage memory, the new
> mechanism can also pre-allocate with a user space bpf_map_update_elem() call
> to avoid potential run-time memory allocation failure.
> Therefore, the new cgroup storage provides all the functionality of
> the old one. So in uapi bpf.h, the old BPF_MAP_TYPE_CGROUP_STORAGE is aliased to
> BPF_MAP_TYPE_CGROUP_STORAGE_DEPRECATED to indicate that the old cgroup storage can
> be deprecated, since the new one provides the same functionality.
>
> Signed-off-by: Yonghong Song <yhs@fb.com>
> ---
>  include/linux/bpf.h            |   3 +
>  include/linux/bpf_types.h      |   1 +
>  include/linux/cgroup-defs.h    |   4 +
>  include/uapi/linux/bpf.h       |  48 +++++-
>  kernel/bpf/Makefile            |   2 +-
>  kernel/bpf/bpf_cgrp_storage.c  | 276 +++++++++++++++++++++++++++++++++
>  kernel/bpf/helpers.c           |   6 +
>  kernel/bpf/syscall.c           |   3 +-
>  kernel/bpf/verifier.c          |  13 +-
>  kernel/cgroup/cgroup.c         |   4 +
>  kernel/trace/bpf_trace.c       |   4 +
>  scripts/bpf_doc.py             |   2 +
>  tools/include/uapi/linux/bpf.h |  48 +++++-
>  13 files changed, 409 insertions(+), 5 deletions(-)
>  create mode 100644 kernel/bpf/bpf_cgrp_storage.c
>
> diff --git a/include/linux/bpf.h b/include/linux/bpf.h
> index 9e7d46d16032..674da3129248 100644
> --- a/include/linux/bpf.h
> +++ b/include/linux/bpf.h
> @@ -2045,6 +2045,7 @@ struct bpf_link *bpf_link_by_id(u32 id);
>
>  const struct bpf_func_proto *bpf_base_func_proto(enum bpf_func_id func_id);
>  void bpf_task_storage_free(struct task_struct *task);
> +void bpf_cgrp_storage_free(struct cgroup *cgroup);
>  bool bpf_prog_has_kfunc_call(const struct bpf_prog *prog);
>  const struct btf_func_model *
>  bpf_jit_find_kfunc_model(const struct bpf_prog *prog,
> @@ -2537,6 +2538,8 @@ extern const struct bpf_func_proto bpf_copy_from_user_task_proto;
>  extern const struct bpf_func_proto bpf_set_retval_proto;
>  extern const struct bpf_func_proto bpf_get_retval_proto;
>  extern const struct bpf_func_proto bpf_user_ringbuf_drain_proto;
> +extern const struct bpf_func_proto bpf_cgrp_storage_get_proto;
> +extern const struct bpf_func_proto bpf_cgrp_storage_delete_proto;
>
>  const struct bpf_func_proto *tracing_prog_func_proto(
>    enum bpf_func_id func_id, const struct bpf_prog *prog);
> diff --git a/include/linux/bpf_types.h b/include/linux/bpf_types.h
> index 2c6a4f2562a7..f9d5aa62fed0 100644
> --- a/include/linux/bpf_types.h
> +++ b/include/linux/bpf_types.h
> @@ -90,6 +90,7 @@ BPF_MAP_TYPE(BPF_MAP_TYPE_CGROUP_ARRAY, cgroup_array_map_ops)
>  #ifdef CONFIG_CGROUP_BPF
>  BPF_MAP_TYPE(BPF_MAP_TYPE_CGROUP_STORAGE, cgroup_storage_map_ops)
>  BPF_MAP_TYPE(BPF_MAP_TYPE_PERCPU_CGROUP_STORAGE, cgroup_storage_map_ops)
> +BPF_MAP_TYPE(BPF_MAP_TYPE_CGRP_STORAGE, cgrp_storage_map_ops)
>  #endif
>  BPF_MAP_TYPE(BPF_MAP_TYPE_HASH, htab_map_ops)
>  BPF_MAP_TYPE(BPF_MAP_TYPE_PERCPU_HASH, htab_percpu_map_ops)
> diff --git a/include/linux/cgroup-defs.h b/include/linux/cgroup-defs.h
> index 8f481d1b159a..4a72bc3a0a4e 100644
> --- a/include/linux/cgroup-defs.h
> +++ b/include/linux/cgroup-defs.h
> @@ -504,6 +504,10 @@ struct cgroup {
>         /* Used to store internal freezer state */
>         struct cgroup_freezer_state freezer;
>
> +#ifdef CONFIG_CGROUP_BPF
> +       struct bpf_local_storage __rcu  *bpf_cgrp_storage;
> +#endif

Why is this protected by CONFIG_CGROUP_BPF as opposed to
CONFIG_CGROUPS && CONFIG_BPF_SYSCALL?

It seems to me (and you also point this out in a different reply) that
CONFIG_CGROUP_BPF is mostly used for bpf programs that attach to
cgroups, right?

(same for the freeing site)

> +
>         /* All ancestors including self */
>         struct cgroup *ancestors[];
>  };
> diff --git a/include/uapi/linux/bpf.h b/include/uapi/linux/bpf.h
> index 17f61338f8f8..2d7f79bf3500 100644
> --- a/include/uapi/linux/bpf.h
> +++ b/include/uapi/linux/bpf.h
> @@ -922,7 +922,14 @@ enum bpf_map_type {
>         BPF_MAP_TYPE_CPUMAP,
>         BPF_MAP_TYPE_XSKMAP,
>         BPF_MAP_TYPE_SOCKHASH,
> -       BPF_MAP_TYPE_CGROUP_STORAGE,
> +       BPF_MAP_TYPE_CGROUP_STORAGE_DEPRECATED,
> +       /* BPF_MAP_TYPE_CGROUP_STORAGE is available to bpf programs attaching
> +        * to a cgroup. The newer BPF_MAP_TYPE_CGRP_STORAGE is available to
> +        * both cgroup-attached and other progs and supports all functionality
> +        * provided by BPF_MAP_TYPE_CGROUP_STORAGE. So mark
> +        * BPF_MAP_TYPE_CGROUP_STORAGE deprecated.
> +        */
> +       BPF_MAP_TYPE_CGROUP_STORAGE = BPF_MAP_TYPE_CGROUP_STORAGE_DEPRECATED,
>         BPF_MAP_TYPE_REUSEPORT_SOCKARRAY,
>         BPF_MAP_TYPE_PERCPU_CGROUP_STORAGE,
>         BPF_MAP_TYPE_QUEUE,
> @@ -935,6 +942,7 @@ enum bpf_map_type {
>         BPF_MAP_TYPE_TASK_STORAGE,
>         BPF_MAP_TYPE_BLOOM_FILTER,
>         BPF_MAP_TYPE_USER_RINGBUF,
> +       BPF_MAP_TYPE_CGRP_STORAGE,
>  };
>
>  /* Note that tracing related programs such as
> @@ -5435,6 +5443,42 @@ union bpf_attr {
>   *             **-E2BIG** if user-space has tried to publish a sample which is
>   *             larger than the size of the ring buffer, or which cannot fit
>   *             within a struct bpf_dynptr.
> + *
> + * void *bpf_cgrp_storage_get(struct bpf_map *map, struct cgroup *cgroup, void *value, u64 flags)
> + *     Description
> + *             Get a bpf_local_storage from the *cgroup*.
> + *
> + *             Logically, it could be thought of as getting the value from
> + *             a *map* with *cgroup* as the **key**.  From this
> + *             perspective,  the usage is not much different from
> + *             **bpf_map_lookup_elem**\ (*map*, **&**\ *cgroup*) except this
> + *             helper enforces the key must be a cgroup struct and the map must also
> + *             be a **BPF_MAP_TYPE_CGRP_STORAGE**.
> + *
> + *             Underneath, the value is stored locally at *cgroup* instead of
> + *             the *map*.  The *map* is used as the bpf-local-storage
> + *             "type". The bpf-local-storage "type" (i.e. the *map*) is
> + *             searched against all bpf_local_storage residing at *cgroup*.
> + *
> + *             An optional *flags* (**BPF_LOCAL_STORAGE_GET_F_CREATE**) can be
> + *             used such that a new bpf_local_storage will be
> + *             created if one does not exist.  *value* can be used
> + *             together with **BPF_LOCAL_STORAGE_GET_F_CREATE** to specify
> + *             the initial value of a bpf_local_storage.  If *value* is
> + *             **NULL**, the new bpf_local_storage will be zero initialized.
> + *     Return
> + *             A bpf_local_storage pointer is returned on success.
> + *
> + *             **NULL** if not found or there was an error in adding
> + *             a new bpf_local_storage.
> + *
> + * long bpf_cgrp_storage_delete(struct bpf_map *map, struct cgroup *cgroup)
> + *     Description
> + *             Delete a bpf_local_storage from a *cgroup*.
> + *     Return
> + *             0 on success.
> + *
> + *             **-ENOENT** if the bpf_local_storage cannot be found.
>   */
>  #define ___BPF_FUNC_MAPPER(FN, ctx...)                 \
>         FN(unspec, 0, ##ctx)                            \
> @@ -5647,6 +5691,8 @@ union bpf_attr {
>         FN(tcp_raw_check_syncookie_ipv6, 207, ##ctx)    \
>         FN(ktime_get_tai_ns, 208, ##ctx)                \
>         FN(user_ringbuf_drain, 209, ##ctx)              \
> +       FN(cgrp_storage_get, 210, ##ctx)                \
> +       FN(cgrp_storage_delete, 211, ##ctx)             \
>         /* */
>
>  /* backwards-compatibility macros for users of __BPF_FUNC_MAPPER that don't
> diff --git a/kernel/bpf/Makefile b/kernel/bpf/Makefile
> index 341c94f208f4..3a12e6b400a2 100644
> --- a/kernel/bpf/Makefile
> +++ b/kernel/bpf/Makefile
> @@ -25,7 +25,7 @@ ifeq ($(CONFIG_PERF_EVENTS),y)
>  obj-$(CONFIG_BPF_SYSCALL) += stackmap.o
>  endif
>  ifeq ($(CONFIG_CGROUPS),y)
> -obj-$(CONFIG_BPF_SYSCALL) += cgroup_iter.o
> +obj-$(CONFIG_BPF_SYSCALL) += cgroup_iter.o bpf_cgrp_storage.o
>  endif
>  obj-$(CONFIG_CGROUP_BPF) += cgroup.o
>  ifeq ($(CONFIG_INET),y)
> diff --git a/kernel/bpf/bpf_cgrp_storage.c b/kernel/bpf/bpf_cgrp_storage.c
> new file mode 100644
> index 000000000000..bcc5f0fc20be
> --- /dev/null
> +++ b/kernel/bpf/bpf_cgrp_storage.c
> @@ -0,0 +1,276 @@
> +// SPDX-License-Identifier: GPL-2.0
> +/*
> + * Copyright (c) 2022 Meta Platforms, Inc. and affiliates.
> + */
> +
> +#include <linux/types.h>
> +#include <linux/bpf.h>
> +#include <linux/bpf_local_storage.h>
> +#include <uapi/linux/btf.h>
> +#include <linux/btf_ids.h>
> +
> +DEFINE_BPF_STORAGE_CACHE(cgroup_cache);
> +
> +static DEFINE_PER_CPU(int, bpf_cgrp_storage_busy);
> +
> +static void bpf_cgrp_storage_lock(void)
> +{
> +       migrate_disable();
> +       this_cpu_inc(bpf_cgrp_storage_busy);
> +}
> +
> +static void bpf_cgrp_storage_unlock(void)
> +{
> +       this_cpu_dec(bpf_cgrp_storage_busy);
> +       migrate_enable();
> +}
> +
> +static bool bpf_cgrp_storage_trylock(void)
> +{
> +       migrate_disable();
> +       if (unlikely(this_cpu_inc_return(bpf_cgrp_storage_busy) != 1)) {
> +               this_cpu_dec(bpf_cgrp_storage_busy);
> +               migrate_enable();
> +               return false;
> +       }
> +       return true;
> +}
> +
> +static struct bpf_local_storage __rcu **cgroup_storage_ptr(void *owner)
> +{
> +       struct cgroup *cg = owner;
> +
> +       return &cg->bpf_cgrp_storage;
> +}
> +
> +void bpf_cgrp_storage_free(struct cgroup *cgroup)
> +{
> +       struct bpf_local_storage *local_storage;
> +       struct bpf_local_storage_elem *selem;
> +       bool free_cgroup_storage = false;
> +       struct hlist_node *n;
> +       unsigned long flags;
> +
> +       rcu_read_lock();
> +       local_storage = rcu_dereference(cgroup->bpf_cgrp_storage);
> +       if (!local_storage) {
> +               rcu_read_unlock();
> +               return;
> +       }
> +
> +       /* Neither the bpf_prog nor the bpf-map's syscall
> +        * could be modifying the local_storage->list now.
> +        * Thus, no elem can be added-to or deleted-from the
> +        * local_storage->list by the bpf_prog or by the bpf-map's syscall.
> +        *
> +        * It is racing with bpf_local_storage_map_free() alone
> +        * when unlinking elem from the local_storage->list and
> +        * the map's bucket->list.
> +        */
> +       bpf_cgrp_storage_lock();
> +       raw_spin_lock_irqsave(&local_storage->lock, flags);
> +       hlist_for_each_entry_safe(selem, n, &local_storage->list, snode) {
> +               bpf_selem_unlink_map(selem);
> +               free_cgroup_storage =
> +                       bpf_selem_unlink_storage_nolock(local_storage, selem, false, false);
> +       }
> +       raw_spin_unlock_irqrestore(&local_storage->lock, flags);
> +       bpf_cgrp_storage_unlock();
> +       rcu_read_unlock();
> +
> +       /* free_cgroup_storage should always be true as long as
> +        * local_storage->list was non-empty.
> +        */
> +       if (free_cgroup_storage)
> +               kfree_rcu(local_storage, rcu);
> +}
> +
> +static struct bpf_local_storage_data *
> +cgroup_storage_lookup(struct cgroup *cgroup, struct bpf_map *map, bool cacheit_lockit)
> +{
> +       struct bpf_local_storage *cgroup_storage;
> +       struct bpf_local_storage_map *smap;
> +
> +       cgroup_storage = rcu_dereference_check(cgroup->bpf_cgrp_storage,
> +                                              bpf_rcu_lock_held());
> +       if (!cgroup_storage)
> +               return NULL;
> +
> +       smap = (struct bpf_local_storage_map *)map;
> +       return bpf_local_storage_lookup(cgroup_storage, smap, cacheit_lockit);
> +}
> +
> +static void *bpf_cgrp_storage_lookup_elem(struct bpf_map *map, void *key)
> +{
> +       struct bpf_local_storage_data *sdata;
> +       struct cgroup *cgroup;
> +       int fd;
> +
> +       fd = *(int *)key;
> +       cgroup = cgroup_get_from_fd(fd);
> +       if (IS_ERR(cgroup))
> +               return ERR_CAST(cgroup);
> +
> +       bpf_cgrp_storage_lock();
> +       sdata = cgroup_storage_lookup(cgroup, map, true);
> +       bpf_cgrp_storage_unlock();
> +       cgroup_put(cgroup);
> +       return sdata ? sdata->data : NULL;
> +}
> +
> +static int bpf_cgrp_storage_update_elem(struct bpf_map *map, void *key,
> +                                         void *value, u64 map_flags)
> +{
> +       struct bpf_local_storage_data *sdata;
> +       struct cgroup *cgroup;
> +       int fd;
> +
> +       fd = *(int *)key;
> +       cgroup = cgroup_get_from_fd(fd);
> +       if (IS_ERR(cgroup))
> +               return PTR_ERR(cgroup);
> +
> +       bpf_cgrp_storage_lock();
> +       sdata = bpf_local_storage_update(cgroup, (struct bpf_local_storage_map *)map,
> +                                        value, map_flags, GFP_ATOMIC);
> +       bpf_cgrp_storage_unlock();
> +       cgroup_put(cgroup);
> +       return PTR_ERR_OR_ZERO(sdata);
> +}
> +
> +static int cgroup_storage_delete(struct cgroup *cgroup, struct bpf_map *map)
> +{
> +       struct bpf_local_storage_data *sdata;
> +
> +       sdata = cgroup_storage_lookup(cgroup, map, false);
> +       if (!sdata)
> +               return -ENOENT;
> +
> +       bpf_selem_unlink(SELEM(sdata), true);
> +       return 0;
> +}
> +
> +static int bpf_cgrp_storage_delete_elem(struct bpf_map *map, void *key)
> +{
> +       struct cgroup *cgroup;
> +       int err, fd;
> +
> +       fd = *(int *)key;
> +       cgroup = cgroup_get_from_fd(fd);
> +       if (IS_ERR(cgroup))
> +               return PTR_ERR(cgroup);
> +
> +       bpf_cgrp_storage_lock();
> +       err = cgroup_storage_delete(cgroup, map);
> +       bpf_cgrp_storage_unlock();
> +       cgroup_put(cgroup);
> +       return err;
> +}
> +
> +static int notsupp_get_next_key(struct bpf_map *map, void *key, void *next_key)
> +{
> +       return -ENOTSUPP;
> +}
> +
> +static struct bpf_map *cgroup_storage_map_alloc(union bpf_attr *attr)
> +{
> +       struct bpf_local_storage_map *smap;
> +
> +       smap = bpf_local_storage_map_alloc(attr);
> +       if (IS_ERR(smap))
> +               return ERR_CAST(smap);
> +
> +       smap->cache_idx = bpf_local_storage_cache_idx_get(&cgroup_cache);
> +       return &smap->map;
> +}
> +
> +static void cgroup_storage_map_free(struct bpf_map *map)
> +{
> +       struct bpf_local_storage_map *smap;
> +
> +       smap = (struct bpf_local_storage_map *)map;
> +       bpf_local_storage_cache_idx_free(&cgroup_cache, smap->cache_idx);
> +       bpf_local_storage_map_free(smap, NULL);
> +}
> +
> +/* *gfp_flags* is a hidden argument provided by the verifier */
> +BPF_CALL_5(bpf_cgrp_storage_get, struct bpf_map *, map, struct cgroup *, cgroup,
> +          void *, value, u64, flags, gfp_t, gfp_flags)
> +{
> +       struct bpf_local_storage_data *sdata;
> +
> +       WARN_ON_ONCE(!bpf_rcu_lock_held());
> +       if (flags & ~(BPF_LOCAL_STORAGE_GET_F_CREATE))
> +               return (unsigned long)NULL;
> +
> +       if (!cgroup)
> +               return (unsigned long)NULL;
> +
> +       if (!bpf_cgrp_storage_trylock())
> +               return (unsigned long)NULL;
> +
> +       sdata = cgroup_storage_lookup(cgroup, map, true);
> +       if (sdata)
> +               goto unlock;
> +
> +       /* only allocate new storage, when the cgroup is refcounted */
> +       if (!percpu_ref_is_dying(&cgroup->self.refcnt) &&
> +           (flags & BPF_LOCAL_STORAGE_GET_F_CREATE))
> +               sdata = bpf_local_storage_update(cgroup, (struct bpf_local_storage_map *)map,
> +                                                value, BPF_NOEXIST, gfp_flags);
> +
> +unlock:
> +       bpf_cgrp_storage_unlock();
> +       return IS_ERR_OR_NULL(sdata) ? (unsigned long)NULL : (unsigned long)sdata->data;
> +}
> +
> +BPF_CALL_2(bpf_cgrp_storage_delete, struct bpf_map *, map, struct cgroup *, cgroup)
> +{
> +       int ret;
> +
> +       WARN_ON_ONCE(!bpf_rcu_lock_held());
> +       if (!cgroup)
> +               return -EINVAL;
> +
> +       if (!bpf_cgrp_storage_trylock())
> +               return -EBUSY;
> +
> +       ret = cgroup_storage_delete(cgroup, map);
> +       bpf_cgrp_storage_unlock();
> +       return ret;
> +}
> +
> +BTF_ID_LIST_SINGLE(cgroup_storage_map_btf_ids, struct, bpf_local_storage_map)
> +const struct bpf_map_ops cgrp_storage_map_ops = {
> +       .map_meta_equal = bpf_map_meta_equal,
> +       .map_alloc_check = bpf_local_storage_map_alloc_check,
> +       .map_alloc = cgroup_storage_map_alloc,
> +       .map_free = cgroup_storage_map_free,
> +       .map_get_next_key = notsupp_get_next_key,
> +       .map_lookup_elem = bpf_cgrp_storage_lookup_elem,
> +       .map_update_elem = bpf_cgrp_storage_update_elem,
> +       .map_delete_elem = bpf_cgrp_storage_delete_elem,
> +       .map_check_btf = bpf_local_storage_map_check_btf,
> +       .map_btf_id = &cgroup_storage_map_btf_ids[0],
> +       .map_owner_storage_ptr = cgroup_storage_ptr,
> +};
> +
> +const struct bpf_func_proto bpf_cgrp_storage_get_proto = {
> +       .func           = bpf_cgrp_storage_get,
> +       .gpl_only       = false,
> +       .ret_type       = RET_PTR_TO_MAP_VALUE_OR_NULL,
> +       .arg1_type      = ARG_CONST_MAP_PTR,
> +       .arg2_type      = ARG_PTR_TO_BTF_ID,
> +       .arg2_btf_id    = &bpf_cgroup_btf_id[0],
> +       .arg3_type      = ARG_PTR_TO_MAP_VALUE_OR_NULL,
> +       .arg4_type      = ARG_ANYTHING,
> +};
> +
> +const struct bpf_func_proto bpf_cgrp_storage_delete_proto = {
> +       .func           = bpf_cgrp_storage_delete,
> +       .gpl_only       = false,
> +       .ret_type       = RET_INTEGER,
> +       .arg1_type      = ARG_CONST_MAP_PTR,
> +       .arg2_type      = ARG_PTR_TO_BTF_ID,
> +       .arg2_btf_id    = &bpf_cgroup_btf_id[0],
> +};
> diff --git a/kernel/bpf/helpers.c b/kernel/bpf/helpers.c
> index a6b04faed282..124fd199ce5c 100644
> --- a/kernel/bpf/helpers.c
> +++ b/kernel/bpf/helpers.c
> @@ -1663,6 +1663,12 @@ bpf_base_func_proto(enum bpf_func_id func_id)
>                 return &bpf_dynptr_write_proto;
>         case BPF_FUNC_dynptr_data:
>                 return &bpf_dynptr_data_proto;
> +#ifdef CONFIG_CGROUPS
> +       case BPF_FUNC_cgrp_storage_get:
> +               return &bpf_cgrp_storage_get_proto;
> +       case BPF_FUNC_cgrp_storage_delete:
> +               return &bpf_cgrp_storage_delete_proto;
> +#endif
>         default:
>                 break;
>         }
> diff --git a/kernel/bpf/syscall.c b/kernel/bpf/syscall.c
> index 7b373a5e861f..b95c276f92e3 100644
> --- a/kernel/bpf/syscall.c
> +++ b/kernel/bpf/syscall.c
> @@ -1016,7 +1016,8 @@ static int map_check_btf(struct bpf_map *map, const struct btf *btf,
>                     map->map_type != BPF_MAP_TYPE_CGROUP_STORAGE &&
>                     map->map_type != BPF_MAP_TYPE_SK_STORAGE &&
>                     map->map_type != BPF_MAP_TYPE_INODE_STORAGE &&
> -                   map->map_type != BPF_MAP_TYPE_TASK_STORAGE)
> +                   map->map_type != BPF_MAP_TYPE_TASK_STORAGE &&
> +                   map->map_type != BPF_MAP_TYPE_CGRP_STORAGE)
>                         return -ENOTSUPP;
>                 if (map->spin_lock_off + sizeof(struct bpf_spin_lock) >
>                     map->value_size) {
> diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
> index 6f6d2d511c06..82bb18d7e881 100644
> --- a/kernel/bpf/verifier.c
> +++ b/kernel/bpf/verifier.c
> @@ -6360,6 +6360,11 @@ static int check_map_func_compatibility(struct bpf_verifier_env *env,
>                     func_id != BPF_FUNC_task_storage_delete)
>                         goto error;
>                 break;
> +       case BPF_MAP_TYPE_CGRP_STORAGE:
> +               if (func_id != BPF_FUNC_cgrp_storage_get &&
> +                   func_id != BPF_FUNC_cgrp_storage_delete)
> +                       goto error;
> +               break;
>         case BPF_MAP_TYPE_BLOOM_FILTER:
>                 if (func_id != BPF_FUNC_map_peek_elem &&
>                     func_id != BPF_FUNC_map_push_elem)
> @@ -6472,6 +6477,11 @@ static int check_map_func_compatibility(struct bpf_verifier_env *env,
>                 if (map->map_type != BPF_MAP_TYPE_TASK_STORAGE)
>                         goto error;
>                 break;
> +       case BPF_FUNC_cgrp_storage_get:
> +       case BPF_FUNC_cgrp_storage_delete:
> +               if (map->map_type != BPF_MAP_TYPE_CGRP_STORAGE)
> +                       goto error;
> +               break;
>         default:
>                 break;
>         }
> @@ -14149,7 +14159,8 @@ static int do_misc_fixups(struct bpf_verifier_env *env)
>
>                 if (insn->imm == BPF_FUNC_task_storage_get ||
>                     insn->imm == BPF_FUNC_sk_storage_get ||
> -                   insn->imm == BPF_FUNC_inode_storage_get) {
> +                   insn->imm == BPF_FUNC_inode_storage_get ||
> +                   insn->imm == BPF_FUNC_cgrp_storage_get) {
>                         if (env->prog->aux->sleepable)
>                                 insn_buf[0] = BPF_MOV64_IMM(BPF_REG_5, (__force __s32)GFP_KERNEL);
>                         else
> diff --git a/kernel/cgroup/cgroup.c b/kernel/cgroup/cgroup.c
> index 764bdd5fd8d1..7e80e15fae4e 100644
> --- a/kernel/cgroup/cgroup.c
> +++ b/kernel/cgroup/cgroup.c
> @@ -5227,6 +5227,10 @@ static void css_free_rwork_fn(struct work_struct *work)
>         struct cgroup_subsys *ss = css->ss;
>         struct cgroup *cgrp = css->cgroup;
>
> +#ifdef CONFIG_CGROUP_BPF
> +       bpf_cgrp_storage_free(cgrp);
> +#endif
> +
>         percpu_ref_exit(&css->refcnt);
>
>         if (ss) {
> diff --git a/kernel/trace/bpf_trace.c b/kernel/trace/bpf_trace.c
> index 49fb9ec8366d..0ddf0834b1b8 100644
> --- a/kernel/trace/bpf_trace.c
> +++ b/kernel/trace/bpf_trace.c
> @@ -1454,6 +1454,10 @@ bpf_tracing_func_proto(enum bpf_func_id func_id, const struct bpf_prog *prog)
>                 return &bpf_get_current_cgroup_id_proto;
>         case BPF_FUNC_get_current_ancestor_cgroup_id:
>                 return &bpf_get_current_ancestor_cgroup_id_proto;
> +       case BPF_FUNC_cgrp_storage_get:
> +               return &bpf_cgrp_storage_get_proto;
> +       case BPF_FUNC_cgrp_storage_delete:
> +               return &bpf_cgrp_storage_delete_proto;
>  #endif
>         case BPF_FUNC_send_signal:
>                 return &bpf_send_signal_proto;
> diff --git a/scripts/bpf_doc.py b/scripts/bpf_doc.py
> index c0e6690be82a..fdb0aff8cb5a 100755
> --- a/scripts/bpf_doc.py
> +++ b/scripts/bpf_doc.py
> @@ -685,6 +685,7 @@ class PrinterHelpers(Printer):
>              'struct udp6_sock',
>              'struct unix_sock',
>              'struct task_struct',
> +            'struct cgroup',
>
>              'struct __sk_buff',
>              'struct sk_msg_md',
> @@ -742,6 +743,7 @@ class PrinterHelpers(Printer):
>              'struct udp6_sock',
>              'struct unix_sock',
>              'struct task_struct',
> +            'struct cgroup',
>              'struct path',
>              'struct btf_ptr',
>              'struct inode',
> diff --git a/tools/include/uapi/linux/bpf.h b/tools/include/uapi/linux/bpf.h
> index 17f61338f8f8..2d7f79bf3500 100644
> --- a/tools/include/uapi/linux/bpf.h
> +++ b/tools/include/uapi/linux/bpf.h
> @@ -922,7 +922,14 @@ enum bpf_map_type {
>         BPF_MAP_TYPE_CPUMAP,
>         BPF_MAP_TYPE_XSKMAP,
>         BPF_MAP_TYPE_SOCKHASH,
> -       BPF_MAP_TYPE_CGROUP_STORAGE,
> +       BPF_MAP_TYPE_CGROUP_STORAGE_DEPRECATED,
> +       /* BPF_MAP_TYPE_CGROUP_STORAGE is available to bpf programs attaching
> +        * to a cgroup. The newer BPF_MAP_TYPE_CGRP_STORAGE is available to
> +        * both cgroup-attached and other progs and supports all functionality
> +        * provided by BPF_MAP_TYPE_CGROUP_STORAGE. So mark
> +        * BPF_MAP_TYPE_CGROUP_STORAGE deprecated.
> +        */
> +       BPF_MAP_TYPE_CGROUP_STORAGE = BPF_MAP_TYPE_CGROUP_STORAGE_DEPRECATED,
>         BPF_MAP_TYPE_REUSEPORT_SOCKARRAY,
>         BPF_MAP_TYPE_PERCPU_CGROUP_STORAGE,
>         BPF_MAP_TYPE_QUEUE,
> @@ -935,6 +942,7 @@ enum bpf_map_type {
>         BPF_MAP_TYPE_TASK_STORAGE,
>         BPF_MAP_TYPE_BLOOM_FILTER,
>         BPF_MAP_TYPE_USER_RINGBUF,
> +       BPF_MAP_TYPE_CGRP_STORAGE,
>  };
>
>  /* Note that tracing related programs such as
> @@ -5435,6 +5443,42 @@ union bpf_attr {
>   *             **-E2BIG** if user-space has tried to publish a sample which is
>   *             larger than the size of the ring buffer, or which cannot fit
>   *             within a struct bpf_dynptr.
> + *
> + * void *bpf_cgrp_storage_get(struct bpf_map *map, struct cgroup *cgroup, void *value, u64 flags)
> + *     Description
> + *             Get a bpf_local_storage from the *cgroup*.
> + *
> + *             Logically, it could be thought of as getting the value from
> + *             a *map* with *cgroup* as the **key**.  From this
> + *             perspective,  the usage is not much different from
> + *             **bpf_map_lookup_elem**\ (*map*, **&**\ *cgroup*) except this
> + *             helper enforces the key must be a cgroup struct and the map must also
> + *             be a **BPF_MAP_TYPE_CGRP_STORAGE**.
> + *
> + *             Underneath, the value is stored locally at *cgroup* instead of
> + *             the *map*.  The *map* is used as the bpf-local-storage
> + *             "type". The bpf-local-storage "type" (i.e. the *map*) is
> + *             searched against all bpf_local_storage residing at *cgroup*.
> + *
> + *             An optional *flags* (**BPF_LOCAL_STORAGE_GET_F_CREATE**) can be
> + *             used such that a new bpf_local_storage will be
> + *             created if one does not exist.  *value* can be used
> + *             together with **BPF_LOCAL_STORAGE_GET_F_CREATE** to specify
> + *             the initial value of a bpf_local_storage.  If *value* is
> + *             **NULL**, the new bpf_local_storage will be zero initialized.
> + *     Return
> + *             A bpf_local_storage pointer is returned on success.
> + *
> + *             **NULL** if not found or there was an error in adding
> + *             a new bpf_local_storage.
> + *
> + * long bpf_cgrp_storage_delete(struct bpf_map *map, struct cgroup *cgroup)
> + *     Description
> + *             Delete a bpf_local_storage from a *cgroup*.
> + *     Return
> + *             0 on success.
> + *
> + *             **-ENOENT** if the bpf_local_storage cannot be found.
>   */
>  #define ___BPF_FUNC_MAPPER(FN, ctx...)                 \
>         FN(unspec, 0, ##ctx)                            \
> @@ -5647,6 +5691,8 @@ union bpf_attr {
>         FN(tcp_raw_check_syncookie_ipv6, 207, ##ctx)    \
>         FN(ktime_get_tai_ns, 208, ##ctx)                \
>         FN(user_ringbuf_drain, 209, ##ctx)              \
> +       FN(cgrp_storage_get, 210, ##ctx)                \
> +       FN(cgrp_storage_delete, 211, ##ctx)             \
>         /* */
>
>  /* backwards-compatibility macros for users of __BPF_FUNC_MAPPER that don't
> --
> 2.30.2
>


* Re: [PATCH bpf-next v2 2/6] bpf: Implement cgroup storage available to non-cgroup-attached bpf progs
  2022-10-21 17:33     ` Yonghong Song
@ 2022-10-21 19:57       ` David Vernet
  2022-10-21 22:57         ` Yonghong Song
  0 siblings, 1 reply; 23+ messages in thread
From: David Vernet @ 2022-10-21 19:57 UTC (permalink / raw)
  To: Yonghong Song
  Cc: Yonghong Song, bpf, Alexei Starovoitov, Andrii Nakryiko,
	Daniel Borkmann, kernel-team, KP Singh, Martin KaFai Lau,
	Tejun Heo

On Fri, Oct 21, 2022 at 10:33:41AM -0700, Yonghong Song wrote:

[...]

> > >   /* Note that tracing related programs such as
> > > @@ -5435,6 +5443,42 @@ union bpf_attr {
> > >    *		**-E2BIG** if user-space has tried to publish a sample which is
> > >    *		larger than the size of the ring buffer, or which cannot fit
> > >    *		within a struct bpf_dynptr.
> > > + *
> > > + * void *bpf_cgrp_storage_get(struct bpf_map *map, struct cgroup *cgroup, void *value, u64 flags)
> > > + *	Description
> > > + *		Get a bpf_local_storage from the *cgroup*.
> > > + *
> > > + *		Logically, it could be thought of as getting the value from
> > > + *		a *map* with *cgroup* as the **key**.  From this
> > > + *		perspective,  the usage is not much different from
> > > + *		**bpf_map_lookup_elem**\ (*map*, **&**\ *cgroup*) except this
> > > + *		helper enforces the key must be a cgroup struct and the map must also
> > > + *		be a **BPF_MAP_TYPE_CGRP_STORAGE**.
> > > + *
> > > + *		Underneath, the value is stored locally at *cgroup* instead of
> > > + *		the *map*.  The *map* is used as the bpf-local-storage
> > > + *		"type". The bpf-local-storage "type" (i.e. the *map*) is
> > > + *		searched against all bpf_local_storage residing at *cgroup*.
> > 
> > IMO this paragraph is a bit hard to parse. Please correct me if I'm
> > wrong, but I think what it's trying to convey is that when an instance
> > of cgroup bpf-local-storage is accessed by a program in e.g.
> > bpf_cgrp_storage_get(), all of the cgroup bpf_local_storage entries are
> > iterated over in the struct cgroup object until this program's local
> > storage instance is found. Is that right? If so, perhaps something like
> > this would be more clear:
> 
> yes. your above interpretation is correct.
> 
> > 
> > In reality, the local-storage value is embedded directly inside of the
> > *cgroup* object itself, rather than being located in the
> > **BPF_MAP_TYPE_CGRP_STORAGE** map. When the local-storage value is
> > queried for some *map* on a *cgroup* object, the kernel will perform an
> > O(n) iteration over all of the live local-storage values for that
> > *cgroup* object until the local-storage value for the *map* is found.
> 
> Sounds okay. I can change the explanation like the above.

Thanks!

> > > diff --git a/kernel/bpf/Makefile b/kernel/bpf/Makefile
> > > index 341c94f208f4..3a12e6b400a2 100644
> > > --- a/kernel/bpf/Makefile
> > > +++ b/kernel/bpf/Makefile
> > > @@ -25,7 +25,7 @@ ifeq ($(CONFIG_PERF_EVENTS),y)
> > >   obj-$(CONFIG_BPF_SYSCALL) += stackmap.o
> > >   endif
> > >   ifeq ($(CONFIG_CGROUPS),y)
> > 
> > I assume that you double checked that it's valid to compile the helper
> > with CONFIG_CGROUPS && !CONFIG_CGROUP_BPF, but I must admit that even if
> > that's the case, I'm not following why we would want the map to be
> > compiled with a different kconfig option than the helper that provides
> > access to it. If theres's a precedent for doing this then I suppose it's
> > fine, but it does seem wrong and/or at least wasteful to compile these
> > helpers in if CONFIG_CGROUPS is defined but CONFIG_CGROUP_BPF is not.
> 
> The following is my understanding.
> CONFIG_CGROUP_BPF guards kernel/bpf/cgroup.c which contains implementation
> mostly for cgroup-attached program types, helpers, etc.

Then why are we using it to guard
BPF_MAP_TYPE(BPF_MAP_TYPE_CGRP_STORAGE, cgrp_storage_map_ops)?

> A lot of other cgroup-related implementations like cgroup_iter, some
> cgroup-related helpers (not related to cgroup-attached program types), etc.
> are guarded with CONFIG_CGROUPS and CONFIG_BPF_SYSCALL.
> 
> Note that it is totally possible CONFIG_CGROUP_BPF is 'n' while
> CONFIG_CGROUPS and CONFIG_BPF_SYSCALL are 'y'.
> 
> So for cgroup local storage implemented in this patch set,
> using CONFIG_CGROUPS and CONFIG_BPF_SYSCALL seems okay.

I agree that it's fine to use CONFIG_CGROUPS here. What I'm not
understanding is why we're using CONFIG_CGROUP_BPF to guard defining
BPF_MAP_TYPE(BPF_MAP_TYPE_CGRP_STORAGE, cgrp_storage_map_ops), and then
in the Makefile we're using CONFIG_CGROUPS to add bpf_cgrp_storage.o.

In other words, I think there's a mismatch between:

--- a/include/linux/bpf_types.h
+++ b/include/linux/bpf_types.h
@@ -90,6 +90,7 @@ BPF_MAP_TYPE(BPF_MAP_TYPE_CGROUP_ARRAY, cgroup_array_map_ops)
 #ifdef CONFIG_CGROUP_BPF

^^ why this instead of CONFIG_CGROUPS for BPF_MAP_TYPE_CGRP_STORAGE?

 BPF_MAP_TYPE(BPF_MAP_TYPE_CGROUP_STORAGE, cgroup_storage_map_ops)
 BPF_MAP_TYPE(BPF_MAP_TYPE_PERCPU_CGROUP_STORAGE, cgroup_storage_map_ops)
+BPF_MAP_TYPE(BPF_MAP_TYPE_CGRP_STORAGE, cgrp_storage_map_ops)
 #endif
 BPF_MAP_TYPE(BPF_MAP_TYPE_HASH, htab_map_ops)
 BPF_MAP_TYPE(BPF_MAP_TYPE_PERCPU_HASH, htab_percpu_map_ops)

and

diff --git a/kernel/bpf/Makefile b/kernel/bpf/Makefile
index 341c94f208f4..3a12e6b400a2 100644
--- a/kernel/bpf/Makefile
+++ b/kernel/bpf/Makefile
@@ -25,7 +25,7 @@ ifeq ($(CONFIG_PERF_EVENTS),y)
 obj-$(CONFIG_BPF_SYSCALL) += stackmap.o
 endif
 ifeq ($(CONFIG_CGROUPS),y)
-obj-$(CONFIG_BPF_SYSCALL) += cgroup_iter.o
+obj-$(CONFIG_BPF_SYSCALL) += cgroup_iter.o bpf_cgrp_storage.o
 endif
 obj-$(CONFIG_CGROUP_BPF) += cgroup.o
 ifeq ($(CONFIG_INET),y)

> > > -obj-$(CONFIG_BPF_SYSCALL) += cgroup_iter.o
> > > +obj-$(CONFIG_BPF_SYSCALL) += cgroup_iter.o bpf_cgrp_storage.o
> > >   endif
> > >   obj-$(CONFIG_CGROUP_BPF) += cgroup.o
> > >   ifeq ($(CONFIG_INET),y)

[...]

> > > +	 * could be modifying the local_storage->list now.
> > > +	 * Thus, no elem can be added-to or deleted-from the
> > > +	 * local_storage->list by the bpf_prog or by the bpf-map's syscall.
> > > +	 *
> > > +	 * It is racing with bpf_local_storage_map_free() alone
> > > +	 * when unlinking elem from the local_storage->list and
> > > +	 * the map's bucket->list.
> > > +	 */
> > > +	bpf_cgrp_storage_lock();
> > > +	raw_spin_lock_irqsave(&local_storage->lock, flags);
> > > +	hlist_for_each_entry_safe(selem, n, &local_storage->list, snode) {
> > > +		bpf_selem_unlink_map(selem);
> > > +		free_cgroup_storage =
> > > +			bpf_selem_unlink_storage_nolock(local_storage, selem, false, false);
> > 
> > This still requires a comment explaining why it's OK to overwrite
> > free_cgroup_storage with a previous value from calling
> > bpf_selem_unlink_storage_nolock(). Even if that is safe, this looks like
> > a pretty weird programming pattern, and IMO doing this feels more
> > intentional and future-proof:
> > 
> > if (bpf_selem_unlink_storage_nolock(local_storage, selem, false, false))
> > 	free_cgroup_storage = true;
> 
> We have a comment a few lines below.
>   /* free_cgroup_storage should always be true as long as
>    * local_storage->list was non-empty.
>    */
>   if (free_cgroup_storage)
> 	kfree_rcu(local_storage, rcu);

IMO that comment doesn't provide much useful information -- it states an
assumption, but doesn't give a reason for it.

> I will add more explanation in the above code like
> 
> 	bpf_selem_unlink_map(selem);
> 	/* If the local_storage list has only one element, then
> 	 * bpf_selem_unlink_storage_nolock() will return true.
> 	 * Otherwise, it will return false. The current loop
> 	 * intends to remove all local storage entries, so the last
> 	 * iteration of the loop will set free_cgroup_storage to true.
> 	 */
> 	free_cgroup_storage =
> 		bpf_selem_unlink_storage_nolock(local_storage, selem, false, false);

Thanks, this is the type of comment I was looking for.

Also, I realize this was copy-pasted from a number of other possible
locations in the codebase which are doing the same thing, but I still
think this pattern is an odd and brittle way to do this. We're relying
on an abstracted implementation detail of
bpf_selem_unlink_storage_nolock() for correctness, which IMO is a signal
that bpf_selem_unlink_storage_nolock() should probably be the one
invoking kfree_rcu() on behalf of callers in the first place.  It looks
like all of the callers end up calling kfree_rcu() on the struct
bpf_local_storage * if bpf_selem_unlink_storage_nolock() returns true,
so can we just move the responsibility of freeing the local storage
object down into bpf_selem_unlink_storage_nolock() where it's unlinked?

IMO this can be done in a separate patch set, if we decide it's worth
doing at all.

> > 
> > > +	}
> > > +	raw_spin_unlock_irqrestore(&local_storage->lock, flags);
> > > +	bpf_cgrp_storage_unlock();
> > > +	rcu_read_unlock();
> > > +
> > > +	/* free_cgroup_storage should always be true as long as
> > > +	 * local_storage->list was non-empty.
> > > +	 */
> > > +	if (free_cgroup_storage)
> > > +		kfree_rcu(local_storage, rcu);
> > > +}
> > > +
> > > +static struct bpf_local_storage_data *
> > > +cgroup_storage_lookup(struct cgroup *cgroup, struct bpf_map *map, bool cacheit_lockit)
> > > +{
> > > +	struct bpf_local_storage *cgroup_storage;
> > > +	struct bpf_local_storage_map *smap;
> > > +
> > > +	cgroup_storage = rcu_dereference_check(cgroup->bpf_cgrp_storage,
> > > +					       bpf_rcu_lock_held());
> > > +	if (!cgroup_storage)
> > > +		return NULL;
> > > +
> > > +	smap = (struct bpf_local_storage_map *)map;
> > > +	return bpf_local_storage_lookup(cgroup_storage, smap, cacheit_lockit);
> > > +}
> > > +
> > > +static void *bpf_cgrp_storage_lookup_elem(struct bpf_map *map, void *key)
> > > +{
> > > +	struct bpf_local_storage_data *sdata;
> > > +	struct cgroup *cgroup;
> > > +	int fd;
> > > +
> > > +	fd = *(int *)key;
> > > +	cgroup = cgroup_get_from_fd(fd);
> > > +	if (IS_ERR(cgroup))
> > > +		return ERR_CAST(cgroup);
> > > +
> > > +	bpf_cgrp_storage_lock();
> > > +	sdata = cgroup_storage_lookup(cgroup, map, true);
> > > +	bpf_cgrp_storage_unlock();
> > > +	cgroup_put(cgroup);
> > > +	return sdata ? sdata->data : NULL;
> > > +}
> > 
> > Stanislav pointed out in the v1 revision that there's a lot of very
> > similar logic in task storage, and I think you'd mentioned that you were
> > going to think about generalizing some of that. Have you had a chance to
> > consider?
> 
> It is hard to have a common function for
> lookup_elem/update_elem/delete_elem(). They are quite different as
> each heavily involves task/cgroup-specific functions.

Yes agreed, each implementation is acquiring their own references, and
finding the backing element in whatever way it was implemented, etc.

> but map_alloc and map_free could have common helpers.

Agreed, and possibly many of the static functions that are invoked on
those paths, such as bpf_cgrp_storage_free(), bpf_cgrp_storage_lock(),
etc., as well. In general this feels like something we could pretty
easily simplify using something like a structure with callbacks to
implement the pieces of logic that are specific to each local storage
type, such as getting the struct bpf_local_storage __rcu * pointer
from some context (e.g. cgroup_storage_ptr()). It doesn't
necessarily need to block this change, but IMO we should clean this up
soon because a lot of this is nearly a 100% copy-paste of other local
storage implementations.

Thanks,
David

^ permalink raw reply related	[flat|nested] 23+ messages in thread

* Re: [PATCH bpf-next v2 2/6] bpf: Implement cgroup storage available to non-cgroup-attached bpf progs
  2022-10-21 19:29   ` Yosry Ahmed
@ 2022-10-21 21:05     ` Yonghong Song
  0 siblings, 0 replies; 23+ messages in thread
From: Yonghong Song @ 2022-10-21 21:05 UTC (permalink / raw)
  To: Yosry Ahmed, Yonghong Song
  Cc: bpf, Alexei Starovoitov, Andrii Nakryiko, Daniel Borkmann,
	kernel-team, KP Singh, Martin KaFai Lau, Tejun Heo



On 10/21/22 12:29 PM, Yosry Ahmed wrote:
> On Thu, Oct 20, 2022 at 3:13 PM Yonghong Song <yhs@fb.com> wrote:
>>
>> Similar to sk/inode/task storage, implement similar cgroup local storage.
>>
>> There already exists a local storage implementation for cgroup-attached
>> bpf programs.  See map type BPF_MAP_TYPE_CGROUP_STORAGE and helper
>> bpf_get_local_storage(). But there are use cases where non-cgroup-attached
>> bpf progs want to access cgroup local storage data. For example,
>> a tc egress prog has access to sk and cgroup. It is possible to use
>> sk local storage to emulate cgroup local storage by storing data in the socket.
>> But this is wasteful as there could be lots of sockets belonging to a particular
>> cgroup. Alternatively, a separate map can be created with cgroup id as the key.
>> But this will introduce additional overhead to manipulate the new map.
>> A cgroup local storage, similar to existing sk/inode/task storage,
>> should help for this use case.
>>
>> The life-cycle of storage is managed with the life-cycle of the
>> cgroup struct.  i.e. the storage is destroyed along with the owning cgroup
>> with a callback to the bpf_cgrp_storage_free when cgroup itself
>> is deleted.
>>
>> The userspace map operations can be done by using a cgroup fd as a key
>> passed to the lookup, update and delete operations.
>>
>> Typically, the following code is used to get the current cgroup:
>>      struct task_struct *task = bpf_get_current_task_btf();
>>      ... task->cgroups->dfl_cgrp ...
>> and in structure task_struct definition:
>>      struct task_struct {
>>          ....
>>          struct css_set __rcu            *cgroups;
>>          ....
>>      }
>> With a sleepable program, accessing task->cgroups is not protected by rcu_read_lock.
>> So the current implementation only supports non-sleepable programs; supporting
>> sleepable programs will be the next step, together with adding rcu_read_lock
>> protection for rcu-tagged structures.
>>
>> Since map name BPF_MAP_TYPE_CGROUP_STORAGE has been used for old cgroup local
>> storage support, the new map name BPF_MAP_TYPE_CGRP_STORAGE is used
>> for cgroup storage available to non-cgroup-attached bpf programs. The old
>> cgroup storage supports bpf_get_local_storage() helper to get the cgroup data.
>> The new cgroup storage helper bpf_cgrp_storage_get() can provide similar
>> functionality. While old cgroup storage pre-allocates storage memory, the new
>> mechanism can also pre-allocate with a user space bpf_map_update_elem() call
>> to avoid potential run-time memory allocation failure.
>> Therefore, the new cgroup storage can provide all functionality w.r.t.
>> the old one. So in uapi bpf.h, the old BPF_MAP_TYPE_CGROUP_STORAGE is aliased to
>> BPF_MAP_TYPE_CGROUP_STORAGE_DEPRECATED to indicate the old cgroup storage can
>> be deprecated since the new one can provide the same functionality.
>>
>> Signed-off-by: Yonghong Song <yhs@fb.com>
>> ---
>>   include/linux/bpf.h            |   3 +
>>   include/linux/bpf_types.h      |   1 +
>>   include/linux/cgroup-defs.h    |   4 +
>>   include/uapi/linux/bpf.h       |  48 +++++-
>>   kernel/bpf/Makefile            |   2 +-
>>   kernel/bpf/bpf_cgrp_storage.c  | 276 +++++++++++++++++++++++++++++++++
>>   kernel/bpf/helpers.c           |   6 +
>>   kernel/bpf/syscall.c           |   3 +-
>>   kernel/bpf/verifier.c          |  13 +-
>>   kernel/cgroup/cgroup.c         |   4 +
>>   kernel/trace/bpf_trace.c       |   4 +
>>   scripts/bpf_doc.py             |   2 +
>>   tools/include/uapi/linux/bpf.h |  48 +++++-
>>   13 files changed, 409 insertions(+), 5 deletions(-)
>>   create mode 100644 kernel/bpf/bpf_cgrp_storage.c
>>
>> diff --git a/include/linux/bpf.h b/include/linux/bpf.h
>> index 9e7d46d16032..674da3129248 100644
>> --- a/include/linux/bpf.h
>> +++ b/include/linux/bpf.h
>> @@ -2045,6 +2045,7 @@ struct bpf_link *bpf_link_by_id(u32 id);
>>
>>   const struct bpf_func_proto *bpf_base_func_proto(enum bpf_func_id func_id);
>>   void bpf_task_storage_free(struct task_struct *task);
>> +void bpf_cgrp_storage_free(struct cgroup *cgroup);
>>   bool bpf_prog_has_kfunc_call(const struct bpf_prog *prog);
>>   const struct btf_func_model *
>>   bpf_jit_find_kfunc_model(const struct bpf_prog *prog,
>> @@ -2537,6 +2538,8 @@ extern const struct bpf_func_proto bpf_copy_from_user_task_proto;
>>   extern const struct bpf_func_proto bpf_set_retval_proto;
>>   extern const struct bpf_func_proto bpf_get_retval_proto;
>>   extern const struct bpf_func_proto bpf_user_ringbuf_drain_proto;
>> +extern const struct bpf_func_proto bpf_cgrp_storage_get_proto;
>> +extern const struct bpf_func_proto bpf_cgrp_storage_delete_proto;
>>
>>   const struct bpf_func_proto *tracing_prog_func_proto(
>>     enum bpf_func_id func_id, const struct bpf_prog *prog);
>> diff --git a/include/linux/bpf_types.h b/include/linux/bpf_types.h
>> index 2c6a4f2562a7..f9d5aa62fed0 100644
>> --- a/include/linux/bpf_types.h
>> +++ b/include/linux/bpf_types.h
>> @@ -90,6 +90,7 @@ BPF_MAP_TYPE(BPF_MAP_TYPE_CGROUP_ARRAY, cgroup_array_map_ops)
>>   #ifdef CONFIG_CGROUP_BPF
>>   BPF_MAP_TYPE(BPF_MAP_TYPE_CGROUP_STORAGE, cgroup_storage_map_ops)
>>   BPF_MAP_TYPE(BPF_MAP_TYPE_PERCPU_CGROUP_STORAGE, cgroup_storage_map_ops)
>> +BPF_MAP_TYPE(BPF_MAP_TYPE_CGRP_STORAGE, cgrp_storage_map_ops)
>>   #endif
>>   BPF_MAP_TYPE(BPF_MAP_TYPE_HASH, htab_map_ops)
>>   BPF_MAP_TYPE(BPF_MAP_TYPE_PERCPU_HASH, htab_percpu_map_ops)
>> diff --git a/include/linux/cgroup-defs.h b/include/linux/cgroup-defs.h
>> index 8f481d1b159a..4a72bc3a0a4e 100644
>> --- a/include/linux/cgroup-defs.h
>> +++ b/include/linux/cgroup-defs.h
>> @@ -504,6 +504,10 @@ struct cgroup {
>>          /* Used to store internal freezer state */
>>          struct cgroup_freezer_state freezer;
>>
>> +#ifdef CONFIG_CGROUP_BPF
>> +       struct bpf_local_storage __rcu  *bpf_cgrp_storage;
>> +#endif
> 
> Why is this protected by CONFIG_CGROUP_BPF as opposed to
> CONFIG_CGROUPS && CONFIG_BPF_SYSCALL?
> 
> It seems to me (and you also point this out in a different reply) that
> CONFIG_CGROUP_BPF is mostly used for bpf programs that attach to
> cgroups, right?

Right. It should be CONFIG_BPF_SYSCALL. My v1 actually used
CONFIG_BPF_SYSCALL; I changed it to CONFIG_CGROUP_BPF in v2 thinking
it should work, but the kernel test bot complained. Will change back
to CONFIG_BPF_SYSCALL in v3.

> 
> (same for the freeing site)
> 
>> +
>>          /* All ancestors including self */
>>          struct cgroup *ancestors[];
>>   };
>> diff --git a/include/uapi/linux/bpf.h b/include/uapi/linux/bpf.h
>> index 17f61338f8f8..2d7f79bf3500 100644
>> --- a/include/uapi/linux/bpf.h
>> +++ b/include/uapi/linux/bpf.h
>> @@ -922,7 +922,14 @@ enum bpf_map_type {
>>          BPF_MAP_TYPE_CPUMAP,
>>          BPF_MAP_TYPE_XSKMAP,
>>          BPF_MAP_TYPE_SOCKHASH,
>> -       BPF_MAP_TYPE_CGROUP_STORAGE,
>> +       BPF_MAP_TYPE_CGROUP_STORAGE_DEPRECATED,
>> +       /* BPF_MAP_TYPE_CGROUP_STORAGE is available to bpf programs attaching
>> +        * to a cgroup. The newer BPF_MAP_TYPE_CGRP_STORAGE is available to
>> +        * both cgroup-attached and other progs and supports all functionality
>> +        * provided by BPF_MAP_TYPE_CGROUP_STORAGE. So mark
>> +        * BPF_MAP_TYPE_CGROUP_STORAGE deprecated.
>> +        */
>> +       BPF_MAP_TYPE_CGROUP_STORAGE = BPF_MAP_TYPE_CGROUP_STORAGE_DEPRECATED,
>>          BPF_MAP_TYPE_REUSEPORT_SOCKARRAY,
>>          BPF_MAP_TYPE_PERCPU_CGROUP_STORAGE,
>>          BPF_MAP_TYPE_QUEUE,
>> @@ -935,6 +942,7 @@ enum bpf_map_type {
>>          BPF_MAP_TYPE_TASK_STORAGE,
>>          BPF_MAP_TYPE_BLOOM_FILTER,
>>          BPF_MAP_TYPE_USER_RINGBUF,
>> +       BPF_MAP_TYPE_CGRP_STORAGE,
>>   };
>>
[...]


* Re: [PATCH bpf-next v2 2/6] bpf: Implement cgroup storage available to non-cgroup-attached bpf progs
  2022-10-21 19:57       ` David Vernet
@ 2022-10-21 22:57         ` Yonghong Song
  2022-10-22  3:02           ` David Vernet
  0 siblings, 1 reply; 23+ messages in thread
From: Yonghong Song @ 2022-10-21 22:57 UTC (permalink / raw)
  To: David Vernet
  Cc: Yonghong Song, bpf, Alexei Starovoitov, Andrii Nakryiko,
	Daniel Borkmann, kernel-team, KP Singh, Martin KaFai Lau,
	Tejun Heo



On 10/21/22 12:57 PM, David Vernet wrote:
> On Fri, Oct 21, 2022 at 10:33:41AM -0700, Yonghong Song wrote:
> 
> [...]
> 
>>>>    /* Note that tracing related programs such as
>>>> @@ -5435,6 +5443,42 @@ union bpf_attr {
>>>>     *		**-E2BIG** if user-space has tried to publish a sample which is
>>>>     *		larger than the size of the ring buffer, or which cannot fit
>>>>     *		within a struct bpf_dynptr.
>>>> + *
>>>> + * void *bpf_cgrp_storage_get(struct bpf_map *map, struct cgroup *cgroup, void *value, u64 flags)
>>>> + *	Description
>>>> + *		Get a bpf_local_storage from the *cgroup*.
>>>> + *
>>>> + *		Logically, it could be thought of as getting the value from
>>>> + *		a *map* with *cgroup* as the **key**.  From this
>>>> + *		perspective,  the usage is not much different from
>>>> + *		**bpf_map_lookup_elem**\ (*map*, **&**\ *cgroup*) except this
>>>> + *		helper enforces the key must be a cgroup struct and the map must also
>>>> + *		be a **BPF_MAP_TYPE_CGRP_STORAGE**.
>>>> + *
>>>> + *		Underneath, the value is stored locally at *cgroup* instead of
>>>> + *		the *map*.  The *map* is used as the bpf-local-storage
>>>> + *		"type". The bpf-local-storage "type" (i.e. the *map*) is
>>>> + *		searched against all bpf_local_storage residing at *cgroup*.
>>>
>>> IMO this paragraph is a bit hard to parse. Please correct me if I'm
>>> wrong, but I think what it's trying to convey is that when an instance
>>> of cgroup bpf-local-storage is accessed by a program in e.g.
>>> bpf_cgrp_storage_get(), all of the cgroup bpf_local_storage entries are
>>> iterated over in the struct cgroup object until this program's local
>>> storage instance is found. Is that right? If so, perhaps something like
>>> this would be more clear:
>>
>> yes. your above interpretation is correct.
>>
>>>
>>> In reality, the local-storage value is embedded directly inside of the
>>> *cgroup* object itself, rather than being located in the
>>> **BPF_MAP_TYPE_CGRP_STORAGE** map. When the local-storage value is
>>> queried for some *map* on a *cgroup* object, the kernel will perform an
>>> O(n) iteration over all of the live local-storage values for that
>>> *cgroup* object until the local-storage value for the *map* is found.
>>
>> Sounds okay. I can change the explanation like the above.
> 
> Thanks!
> 
>>>> diff --git a/kernel/bpf/Makefile b/kernel/bpf/Makefile
>>>> index 341c94f208f4..3a12e6b400a2 100644
>>>> --- a/kernel/bpf/Makefile
>>>> +++ b/kernel/bpf/Makefile
>>>> @@ -25,7 +25,7 @@ ifeq ($(CONFIG_PERF_EVENTS),y)
>>>>    obj-$(CONFIG_BPF_SYSCALL) += stackmap.o
>>>>    endif
>>>>    ifeq ($(CONFIG_CGROUPS),y)
>>>
>>> I assume that you double checked that it's valid to compile the helper
>>> with CONFIG_CGROUPS && !CONFIG_CGROUP_BPF, but I must admit that even if
>>> that's the case, I'm not following why we would want the map to be
>>> compiled with a different kconfig option than the helper that provides
>>> access to it. If there's a precedent for doing this then I suppose it's
>>> fine, but it does seem wrong and/or at least wasteful to compile these
>>> helpers in if CONFIG_CGROUPS is defined but CONFIG_CGROUP_BPF is not.
>>
>> The following is my understanding.
>> CONFIG_CGROUP_BPF guards kernel/bpf/cgroup.c which contains implementation
>> mostly for cgroup-attached program types, helpers, etc.
> 
> Then why are we using it to guard
> BPF_MAP_TYPE(BPF_MAP_TYPE_CGRP_STORAGE, cgrp_storage_map_ops)?
> 
>> A lot of other cgroup-related implementations like cgroup_iter, some
>> cgroup-related helpers (not related to cgroup-attached program types), etc.
>> are guarded with CONFIG_CGROUPS and CONFIG_BPF_SYSCALL.
>>
>> Note that it is totally possible CONFIG_CGROUP_BPF is 'n' while
>> CONFIG_CGROUPS and CONFIG_BPF_SYSCALL are 'y'.
>>
>> So for cgroup local storage implemented in this patch set,
>> using CONFIG_CGROUPS and CONFIG_BPF_SYSCALL seems okay.
> 
> I agree that it's fine to use CONFIG_CGROUPS here. What I'm not
> understanding is why we're using CONFIG_CGROUP_BPF to guard defining
> BPF_MAP_TYPE(BPF_MAP_TYPE_CGRP_STORAGE, cgrp_storage_map_ops), and then
> in the Makefile we're using CONFIG_CGROUPS to add bpf_cgrp_storage.o.
> 
> In other words, I think there's a mismatch between:
> 
> --- a/include/linux/bpf_types.h
> +++ b/include/linux/bpf_types.h
> @@ -90,6 +90,7 @@ BPF_MAP_TYPE(BPF_MAP_TYPE_CGROUP_ARRAY, cgroup_array_map_ops)
>   #ifdef CONFIG_CGROUP_BPF
> 
> ^^ why this instead of CONFIG_CGROUPS for BPF_MAP_TYPE_CGRP_STORAGE?
> 
>   BPF_MAP_TYPE(BPF_MAP_TYPE_CGROUP_STORAGE, cgroup_storage_map_ops)
>   BPF_MAP_TYPE(BPF_MAP_TYPE_PERCPU_CGROUP_STORAGE, cgroup_storage_map_ops)
> +BPF_MAP_TYPE(BPF_MAP_TYPE_CGRP_STORAGE, cgrp_storage_map_ops)
>   #endif
>   BPF_MAP_TYPE(BPF_MAP_TYPE_HASH, htab_map_ops)
>   BPF_MAP_TYPE(BPF_MAP_TYPE_PERCPU_HASH, htab_percpu_map_ops)
> 
> and
> 
> diff --git a/kernel/bpf/Makefile b/kernel/bpf/Makefile
> index 341c94f208f4..3a12e6b400a2 100644
> --- a/kernel/bpf/Makefile
> +++ b/kernel/bpf/Makefile
> @@ -25,7 +25,7 @@ ifeq ($(CONFIG_PERF_EVENTS),y)
>   obj-$(CONFIG_BPF_SYSCALL) += stackmap.o
>   endif
>   ifeq ($(CONFIG_CGROUPS),y)
> -obj-$(CONFIG_BPF_SYSCALL) += cgroup_iter.o
> +obj-$(CONFIG_BPF_SYSCALL) += cgroup_iter.o bpf_cgrp_storage.o
>   endif
>   obj-$(CONFIG_CGROUP_BPF) += cgroup.o
>   ifeq ($(CONFIG_INET),y)

This makes sense. I will guard
   BPF_MAP_TYPE(BPF_MAP_TYPE_CGRP_STORAGE, cgrp_storage_map_ops)
with CONFIG_CGROUPS.
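So the guards would presumably end up something like the following (a sketch of the intended v3 arrangement, not the actually posted patch):

```c
/* include/linux/bpf_types.h -- hypothetical v3 arrangement */
#ifdef CONFIG_CGROUP_BPF
BPF_MAP_TYPE(BPF_MAP_TYPE_CGROUP_STORAGE, cgroup_storage_map_ops)
BPF_MAP_TYPE(BPF_MAP_TYPE_PERCPU_CGROUP_STORAGE, cgroup_storage_map_ops)
#endif
#ifdef CONFIG_CGROUPS
BPF_MAP_TYPE(BPF_MAP_TYPE_CGRP_STORAGE, cgrp_storage_map_ops)
#endif
```

That way the map type registration matches the Makefile, which compiles bpf_cgrp_storage.o under CONFIG_CGROUPS && CONFIG_BPF_SYSCALL.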

> 
>>>> -obj-$(CONFIG_BPF_SYSCALL) += cgroup_iter.o
>>>> +obj-$(CONFIG_BPF_SYSCALL) += cgroup_iter.o bpf_cgrp_storage.o
>>>>    endif
>>>>    obj-$(CONFIG_CGROUP_BPF) += cgroup.o
>>>>    ifeq ($(CONFIG_INET),y)
> 
> [...]
> 
>>>> +	 * could be modifying the local_storage->list now.
>>>> +	 * Thus, no elem can be added-to or deleted-from the
>>>> +	 * local_storage->list by the bpf_prog or by the bpf-map's syscall.
>>>> +	 *
>>>> +	 * It is racing with bpf_local_storage_map_free() alone
>>>> +	 * when unlinking elem from the local_storage->list and
>>>> +	 * the map's bucket->list.
>>>> +	 */
>>>> +	bpf_cgrp_storage_lock();
>>>> +	raw_spin_lock_irqsave(&local_storage->lock, flags);
>>>> +	hlist_for_each_entry_safe(selem, n, &local_storage->list, snode) {
>>>> +		bpf_selem_unlink_map(selem);
>>>> +		free_cgroup_storage =
>>>> +			bpf_selem_unlink_storage_nolock(local_storage, selem, false, false);
>>>
>>> This still requires a comment explaining why it's OK to overwrite
>>> free_cgroup_storage with a previous value from calling
>>> bpf_selem_unlink_storage_nolock(). Even if that is safe, this looks like
>>> a pretty weird programming pattern, and IMO doing this feels more
>>> intentional and future-proof:
>>>
>>> if (bpf_selem_unlink_storage_nolock(local_storage, selem, false, false))
>>> 	free_cgroup_storage = true;
>>
>> We have a comment a few lines below.
>>    /* free_cgroup_storage should always be true as long as
>>     * local_storage->list was non-empty.
>>     */
>>    if (free_cgroup_storage)
>> 	kfree_rcu(local_storage, rcu);
> 
> IMO that comment doesn't provide much useful information -- it states an
> assumption, but doesn't give a reason for it.
> 
>> I will add more explanation in the above code like
>>
>> 	bpf_selem_unlink_map(selem);
>> 	/* If the local_storage list has only one element, then
>> 	 * bpf_selem_unlink_storage_nolock() will return true.
>> 	 * Otherwise, it will return false. The current loop
>> 	 * intends to remove all local storage entries, so the last
>> 	 * iteration of the loop will set free_cgroup_storage to true.
>> 	 */
>> 	free_cgroup_storage =
>> 		bpf_selem_unlink_storage_nolock(local_storage, selem, false, false);
> 
> Thanks, this is the type of comment I was looking for.
> 
> Also, I realize this was copy-pasted from a number of other possible
> locations in the codebase which are doing the same thing, but I still
> think this pattern is an odd and brittle way to do this. We're relying
> on an abstracted implementation detail of
> bpf_selem_unlink_storage_nolock() for correctness, which IMO is a signal
> that bpf_selem_unlink_storage_nolock() should probably be the one
> invoking kfree_rcu() on behalf of callers in the first place.  It looks
> like all of the callers end up calling kfree_rcu() on the struct
> bpf_local_storage * if bpf_selem_unlink_storage_nolock() returns true,
> so can we just move the responsibility of freeing the local storage
> object down into bpf_selem_unlink_storage_nolock() where it's unlinked?

We probably cannot do this: bpf_selem_unlink_storage_nolock() is
called inside the rcu_read_lock() region, while kfree_rcu() is done
outside of it.


> 
> IMO this can be done in a separate patch set, if we decide it's worth
> doing at all.
> 
>>>
>>>> +	}
>>>> +	raw_spin_unlock_irqrestore(&local_storage->lock, flags);
>>>> +	bpf_cgrp_storage_unlock();
>>>> +	rcu_read_unlock();
>>>> +
>>>> +	/* free_cgroup_storage should always be true as long as
>>>> +	 * local_storage->list was non-empty.
>>>> +	 */
>>>> +	if (free_cgroup_storage)
>>>> +		kfree_rcu(local_storage, rcu);
>>>> +}
>>>> +
>>>> +static struct bpf_local_storage_data *
>>>> +cgroup_storage_lookup(struct cgroup *cgroup, struct bpf_map *map, bool cacheit_lockit)
>>>> +{
>>>> +	struct bpf_local_storage *cgroup_storage;
>>>> +	struct bpf_local_storage_map *smap;
>>>> +
>>>> +	cgroup_storage = rcu_dereference_check(cgroup->bpf_cgrp_storage,
>>>> +					       bpf_rcu_lock_held());
>>>> +	if (!cgroup_storage)
>>>> +		return NULL;
>>>> +
>>>> +	smap = (struct bpf_local_storage_map *)map;
>>>> +	return bpf_local_storage_lookup(cgroup_storage, smap, cacheit_lockit);
>>>> +}
>>>> +
>>>> +static void *bpf_cgrp_storage_lookup_elem(struct bpf_map *map, void *key)
>>>> +{
>>>> +	struct bpf_local_storage_data *sdata;
>>>> +	struct cgroup *cgroup;
>>>> +	int fd;
>>>> +
>>>> +	fd = *(int *)key;
>>>> +	cgroup = cgroup_get_from_fd(fd);
>>>> +	if (IS_ERR(cgroup))
>>>> +		return ERR_CAST(cgroup);
>>>> +
>>>> +	bpf_cgrp_storage_lock();
>>>> +	sdata = cgroup_storage_lookup(cgroup, map, true);
>>>> +	bpf_cgrp_storage_unlock();
>>>> +	cgroup_put(cgroup);
>>>> +	return sdata ? sdata->data : NULL;
>>>> +}
>>>
>>> Stanislav pointed out in the v1 revision that there's a lot of very
>>> similar logic in task storage, and I think you'd mentioned that you were
>>> going to think about generalizing some of that. Have you had a chance to
>>> consider?
>>
>> It is hard to have a common function for
>> lookup_elem/update_elem/delete_elem(). They are quite different as
>> each heavily involves task/cgroup-specific functions.
> 
> Yes agreed, each implementation is acquiring their own references, and
> finding the backing element in whatever way it was implemented, etc.
> 
>> but map_alloc and map_free could have common helpers.
> 
> Agreed, and possibly many of the static functions that are invoked on
> those paths, such as bpf_cgrp_storage_free(), bpf_cgrp_storage_lock(),
> etc., as well. In general this feels like something we could pretty
> easily simplify using something like a structure with callbacks to
> implement the pieces of logic that are specific to each local storage
> type, such as getting the struct bpf_local_storage __rcu * pointer
> from some context (e.g. cgroup_storage_ptr()). It doesn't
> necessarily need to block this change, but IMO we should clean this up
> soon because a lot of this is nearly a 100% copy-paste of other local
> storage implementations.

Further refactoring is possible. Martin is working to simplify the
locking mechanism. We can wait for that to be done before refactoring.

> 
> Thanks,
> David


* Re: [PATCH bpf-next v2 3/6] libbpf: Support new cgroup local storage
  2022-10-20 22:13 ` [PATCH bpf-next v2 3/6] libbpf: Support new cgroup local storage Yonghong Song
@ 2022-10-21 23:10   ` Andrii Nakryiko
  2022-10-22  0:32     ` Yonghong Song
  0 siblings, 1 reply; 23+ messages in thread
From: Andrii Nakryiko @ 2022-10-21 23:10 UTC (permalink / raw)
  To: Yonghong Song
  Cc: bpf, Alexei Starovoitov, Andrii Nakryiko, Daniel Borkmann,
	kernel-team, KP Singh, Martin KaFai Lau, Tejun Heo

On Thu, Oct 20, 2022 at 3:13 PM Yonghong Song <yhs@fb.com> wrote:
>
> Add support for new cgroup local storage.
>
> Signed-off-by: Yonghong Song <yhs@fb.com>
> ---


LGTM, but I do think that BPF_MAP_TYPE_CG_STORAGE and "cg_storage" are
easier to read and talk about. But that's minor.

Acked-by: Andrii Nakryiko <andrii@kernel.org>

>  tools/lib/bpf/libbpf.c        | 1 +
>  tools/lib/bpf/libbpf_probes.c | 1 +
>  2 files changed, 2 insertions(+)
>
> diff --git a/tools/lib/bpf/libbpf.c b/tools/lib/bpf/libbpf.c
> index 027fd9565c16..5d7819edf074 100644
> --- a/tools/lib/bpf/libbpf.c
> +++ b/tools/lib/bpf/libbpf.c
> @@ -164,6 +164,7 @@ static const char * const map_type_name[] = {
>         [BPF_MAP_TYPE_TASK_STORAGE]             = "task_storage",
>         [BPF_MAP_TYPE_BLOOM_FILTER]             = "bloom_filter",
>         [BPF_MAP_TYPE_USER_RINGBUF]             = "user_ringbuf",
> +       [BPF_MAP_TYPE_CGRP_STORAGE]             = "cgrp_storage",
>  };
>
>  static const char * const prog_type_name[] = {
> diff --git a/tools/lib/bpf/libbpf_probes.c b/tools/lib/bpf/libbpf_probes.c
> index f3a8e8e74eb8..bdb83d467f9a 100644
> --- a/tools/lib/bpf/libbpf_probes.c
> +++ b/tools/lib/bpf/libbpf_probes.c
> @@ -221,6 +221,7 @@ static int probe_map_create(enum bpf_map_type map_type)
>         case BPF_MAP_TYPE_SK_STORAGE:
>         case BPF_MAP_TYPE_INODE_STORAGE:
>         case BPF_MAP_TYPE_TASK_STORAGE:
> +       case BPF_MAP_TYPE_CGRP_STORAGE:
>                 btf_key_type_id = 1;
>                 btf_value_type_id = 3;
>                 value_size = 8;
> --
> 2.30.2
>


* Re: [PATCH bpf-next v2 3/6] libbpf: Support new cgroup local storage
  2022-10-21 23:10   ` Andrii Nakryiko
@ 2022-10-22  0:32     ` Yonghong Song
  2022-10-22  1:05       ` Tejun Heo
  0 siblings, 1 reply; 23+ messages in thread
From: Yonghong Song @ 2022-10-22  0:32 UTC (permalink / raw)
  To: Andrii Nakryiko, Yonghong Song
  Cc: bpf, Alexei Starovoitov, Andrii Nakryiko, Daniel Borkmann,
	kernel-team, KP Singh, Martin KaFai Lau, Tejun Heo



On 10/21/22 4:10 PM, Andrii Nakryiko wrote:
> On Thu, Oct 20, 2022 at 3:13 PM Yonghong Song <yhs@fb.com> wrote:
>>
>> Add support for new cgroup local storage.
>>
>> Signed-off-by: Yonghong Song <yhs@fb.com>
>> ---
> 
> 
> LGTM, but I do think that BPF_MAP_TYPE_CG_STORAGE and "cg_storage" is
> easier to read and talk about. But that's minor.

I searched kernel/cgroup/*, kernel/bpf/cgroup.c and
include/linux/cgroup*.h. The abbreviation 'cgrp' for 'cgroup' is much
more common than 'cg', except where 'cg' appears as part of a word like
'memcg' or 'rdmacg'. So I would just use 'cgrp' for now.

> 
> Acked-by: Andrii Nakryiko <andrii@kernel.org>
> 
>>   tools/lib/bpf/libbpf.c        | 1 +
>>   tools/lib/bpf/libbpf_probes.c | 1 +
>>   2 files changed, 2 insertions(+)
>>
>> diff --git a/tools/lib/bpf/libbpf.c b/tools/lib/bpf/libbpf.c
>> index 027fd9565c16..5d7819edf074 100644
>> --- a/tools/lib/bpf/libbpf.c
>> +++ b/tools/lib/bpf/libbpf.c
>> @@ -164,6 +164,7 @@ static const char * const map_type_name[] = {
>>          [BPF_MAP_TYPE_TASK_STORAGE]             = "task_storage",
>>          [BPF_MAP_TYPE_BLOOM_FILTER]             = "bloom_filter",
>>          [BPF_MAP_TYPE_USER_RINGBUF]             = "user_ringbuf",
>> +       [BPF_MAP_TYPE_CGRP_STORAGE]             = "cgrp_storage",
>>   };
>>
>>   static const char * const prog_type_name[] = {
>> diff --git a/tools/lib/bpf/libbpf_probes.c b/tools/lib/bpf/libbpf_probes.c
>> index f3a8e8e74eb8..bdb83d467f9a 100644
>> --- a/tools/lib/bpf/libbpf_probes.c
>> +++ b/tools/lib/bpf/libbpf_probes.c
>> @@ -221,6 +221,7 @@ static int probe_map_create(enum bpf_map_type map_type)
>>          case BPF_MAP_TYPE_SK_STORAGE:
>>          case BPF_MAP_TYPE_INODE_STORAGE:
>>          case BPF_MAP_TYPE_TASK_STORAGE:
>> +       case BPF_MAP_TYPE_CGRP_STORAGE:
>>                  btf_key_type_id = 1;
>>                  btf_value_type_id = 3;
>>                  value_size = 8;
>> --
>> 2.30.2
>>
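
For reference, a BPF program would declare a map of this new type along the same lines as the other local-storage maps. The sketch below is modeled on the task/sk storage selftests; the map name and value type are illustrative, and it is a kernel-side fragment (needs vmlinux.h and libbpf's bpf_helpers.h), not a standalone build:

```c
/* Hypothetical BTF-defined map declaration for the new map type. */
struct {
	__uint(type, BPF_MAP_TYPE_CGRP_STORAGE);
	__uint(map_flags, BPF_F_NO_PREALLOC);
	__type(key, int);
	__type(value, long);
} cgrp_map SEC(".maps");
```

A program would then call the series' new bpf_cgrp_storage_get() helper on a struct cgroup pointer, analogous to bpf_task_storage_get() for task storage.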

* Re: [PATCH bpf-next v2 3/6] libbpf: Support new cgroup local storage
  2022-10-22  0:32     ` Yonghong Song
@ 2022-10-22  1:05       ` Tejun Heo
  0 siblings, 0 replies; 23+ messages in thread
From: Tejun Heo @ 2022-10-22  1:05 UTC (permalink / raw)
  To: Yonghong Song
  Cc: Andrii Nakryiko, Yonghong Song, bpf, Alexei Starovoitov,
	Andrii Nakryiko, Daniel Borkmann, kernel-team, KP Singh,
	Martin KaFai Lau

Hello,

On Fri, Oct 21, 2022 at 05:32:58PM -0700, Yonghong Song wrote:
> > LGTM, but I do think that BPF_MAP_TYPE_CG_STORAGE and "cg_storage" is
> > easier to read and talk about. But that's minor.
> 
> I searched kernel/cgroup/*, kernel/bpf/cgroup.c and
> include/linux/cgroup*.h. The abbreviation 'cgrp' for 'cgroup' is much
> more common than 'cg', which mostly appears as part of compound names
> like 'memcg' or 'rdmacg'. So I would just use 'cgrp' for now.

Yeah, cgrp is more consistent for prefixes and variable names. cg is usually
used as a part of an abbreviated word - memcg, blkcg, cgid and so on.

Thanks.

-- 
tejun

* Re: [PATCH bpf-next v2 2/6] bpf: Implement cgroup storage available to non-cgroup-attached bpf progs
  2022-10-21 22:57         ` Yonghong Song
@ 2022-10-22  3:02           ` David Vernet
  2022-10-23 16:45             ` Yonghong Song
  0 siblings, 1 reply; 23+ messages in thread
From: David Vernet @ 2022-10-22  3:02 UTC (permalink / raw)
  To: Yonghong Song
  Cc: Yonghong Song, bpf, Alexei Starovoitov, Andrii Nakryiko,
	Daniel Borkmann, kernel-team, KP Singh, Martin KaFai Lau,
	Tejun Heo

On Fri, Oct 21, 2022 at 03:57:15PM -0700, Yonghong Song wrote:

[...]

> > > > > +	 * could be modifying the local_storage->list now.
> > > > > +	 * Thus, no elem can be added-to or deleted-from the
> > > > > +	 * local_storage->list by the bpf_prog or by the bpf-map's syscall.
> > > > > +	 *
> > > > > +	 * It is racing with bpf_local_storage_map_free() alone
> > > > > +	 * when unlinking elem from the local_storage->list and
> > > > > +	 * the map's bucket->list.
> > > > > +	 */
> > > > > +	bpf_cgrp_storage_lock();
> > > > > +	raw_spin_lock_irqsave(&local_storage->lock, flags);
> > > > > +	hlist_for_each_entry_safe(selem, n, &local_storage->list, snode) {
> > > > > +		bpf_selem_unlink_map(selem);
> > > > > +		free_cgroup_storage =
> > > > > +			bpf_selem_unlink_storage_nolock(local_storage, selem, false, false);
> > > > 
> > > > This still requires a comment explaining why it's OK to overwrite
> > > > free_cgroup_storage with a previous value from calling
> > > > bpf_selem_unlink_storage_nolock(). Even if that is safe, this looks like
> > > > a pretty weird programming pattern, and IMO doing this feels more
> > > > intentional and future-proof:
> > > > 
> > > > if (bpf_selem_unlink_storage_nolock(local_storage, selem, false, false))
> > > > 	free_cgroup_storage = true;
> > > 
> > > We have a comment a few lines below.
> > >    /* free_cgroup_storage should always be true as long as
> > >     * local_storage->list was non-empty.
> > >     */
> > >    if (free_cgroup_storage)
> > > 	kfree_rcu(local_storage, rcu);
> > 
> > IMO that comment doesn't provide much useful information -- it states an
> > assumption, but doesn't give a reason for it.
> > 
> > > I will add more explanation in the above code like
> > > 
> > > 	bpf_selem_unlink_map(selem);
> > > 	/* If the local_storage list has only one element,
> > > 	 * bpf_selem_unlink_storage_nolock() will return true.
> > > 	 * Otherwise, it will return false. The current loop iteration
> > > 	 * intends to remove all local storage, so the last iteration
> > > 	 * of the loop will set free_cgroup_storage to true.
> > > 	 */
> > > 	free_cgroup_storage =
> > > 		bpf_selem_unlink_storage_nolock(local_storage, selem, false, false);
> > 
> > Thanks, this is the type of comment I was looking for.
> > 
> > Also, I realize this was copy-pasted from a number of other possible
> > locations in the codebase which are doing the same thing, but I still
> > think this pattern is an odd and brittle way to do this. We're relying
> > on an abstracted implementation detail of
> > bpf_selem_unlink_storage_nolock() for correctness, which IMO is a signal
> > that bpf_selem_unlink_storage_nolock() should probably be the one
> > invoking kfree_rcu() on behalf of callers in the first place.  It looks
> > like all of the callers end up calling kfree_rcu() on the struct
> > bpf_local_storage * if bpf_selem_unlink_storage_nolock() returns true,
> > so can we just move the responsibility of freeing the local storage
> > object down into bpf_selem_unlink_storage_nolock() where it's unlinked?
> 
> We probably cannot do this. bpf_selem_unlink_storage_nolock()
> is inside the rcu_read_lock() region. We do kfree_rcu() outside
> the rcu_read_lock() region.

kfree_rcu() is non-blocking and is safe to invoke from within an RCU
read region. If you invoke it within an RCU read region, the object will
not be kfree'd until (at least) you exit the current read region, so I
believe that the net effect here should be the same whether it's done in
bpf_selem_unlink_storage_nolock(), or in the caller after the RCU read
region is exited.

> > IMO this can be done in a separate patch set, if we decide it's worth
> > doing at all.
> > 
> > > > 
> > > > > +	}
> > > > > +	raw_spin_unlock_irqrestore(&local_storage->lock, flags);
> > > > > +	bpf_cgrp_storage_unlock();
> > > > > +	rcu_read_unlock();
> > > > > +
> > > > > +	/* free_cgroup_storage should always be true as long as
> > > > > +	 * local_storage->list was non-empty.
> > > > > +	 */
> > > > > +	if (free_cgroup_storage)
> > > > > +		kfree_rcu(local_storage, rcu);
> > > > > +}
> > > > > +
> > > > > +static struct bpf_local_storage_data *
> > > > > +cgroup_storage_lookup(struct cgroup *cgroup, struct bpf_map *map, bool cacheit_lockit)
> > > > > +{
> > > > > +	struct bpf_local_storage *cgroup_storage;
> > > > > +	struct bpf_local_storage_map *smap;
> > > > > +
> > > > > +	cgroup_storage = rcu_dereference_check(cgroup->bpf_cgrp_storage,
> > > > > +					       bpf_rcu_lock_held());
> > > > > +	if (!cgroup_storage)
> > > > > +		return NULL;
> > > > > +
> > > > > +	smap = (struct bpf_local_storage_map *)map;
> > > > > +	return bpf_local_storage_lookup(cgroup_storage, smap, cacheit_lockit);
> > > > > +}
> > > > > +
> > > > > +static void *bpf_cgrp_storage_lookup_elem(struct bpf_map *map, void *key)
> > > > > +{
> > > > > +	struct bpf_local_storage_data *sdata;
> > > > > +	struct cgroup *cgroup;
> > > > > +	int fd;
> > > > > +
> > > > > +	fd = *(int *)key;
> > > > > +	cgroup = cgroup_get_from_fd(fd);
> > > > > +	if (IS_ERR(cgroup))
> > > > > +		return ERR_CAST(cgroup);
> > > > > +
> > > > > +	bpf_cgrp_storage_lock();
> > > > > +	sdata = cgroup_storage_lookup(cgroup, map, true);
> > > > > +	bpf_cgrp_storage_unlock();
> > > > > +	cgroup_put(cgroup);
> > > > > +	return sdata ? sdata->data : NULL;
> > > > > +}
> > > > 
> > > > Stanislav pointed out in the v1 revision that there's a lot of very
> > > > similar logic in task storage, and I think you'd mentioned that you were
> > > > going to think about generalizing some of that. Have you had a chance to
> > > > consider?
> > > 
> > > It is hard to have a common function for
> > > lookup_elem/update_elem/delete_elem(). They are quite different as each
> > > heavily involves
> > > task/cgroup-specific functions.
> > 
> > Yes agreed, each implementation is acquiring their own references, and
> > finding the backing element in whatever way it was implemented, etc.
> > 
> > > but map_alloc and map_free could have common helpers.
> > 
> > Agreed, and many of the static functions that are invoked on those paths
> > such as bpf_cgrp_storage_free(), bpf_cgrp_storage_lock(), etc possibly
> > as well. In general this feels like something we could pretty easily
> > simplify using something like a structure with callbacks to implement
> > the pieces of logic that are specific to each local storage type, such
> > as getting the struct bpf_local_storage __rcu
> > * pointer from some context (e.g.  cgroup_storage_ptr()). It doesn't
> > necessarily need to block this change, but IMO we should clean this up
> > soon because a lot of this is nearly a 100% copy-paste of other local
> > storage implementations.
> 
> Further refactoring is possible. Martin is working to simplify the
> locking mechanism. We can wait for that done before doing refactoring.

Sounds great, thanks!

- David

* Re: [PATCH bpf-next v2 2/6] bpf: Implement cgroup storage available to non-cgroup-attached bpf progs
  2022-10-22  3:02           ` David Vernet
@ 2022-10-23 16:45             ` Yonghong Song
  2022-10-23 21:14               ` David Vernet
  0 siblings, 1 reply; 23+ messages in thread
From: Yonghong Song @ 2022-10-23 16:45 UTC (permalink / raw)
  To: David Vernet
  Cc: Yonghong Song, bpf, Alexei Starovoitov, Andrii Nakryiko,
	Daniel Borkmann, kernel-team, KP Singh, Martin KaFai Lau,
	Tejun Heo



On 10/21/22 8:02 PM, David Vernet wrote:
> On Fri, Oct 21, 2022 at 03:57:15PM -0700, Yonghong Song wrote:
> 
> [...]
> 
>>>>>> +	 * could be modifying the local_storage->list now.
>>>>>> +	 * Thus, no elem can be added-to or deleted-from the
>>>>>> +	 * local_storage->list by the bpf_prog or by the bpf-map's syscall.
>>>>>> +	 *
>>>>>> +	 * It is racing with bpf_local_storage_map_free() alone
>>>>>> +	 * when unlinking elem from the local_storage->list and
>>>>>> +	 * the map's bucket->list.
>>>>>> +	 */
>>>>>> +	bpf_cgrp_storage_lock();
>>>>>> +	raw_spin_lock_irqsave(&local_storage->lock, flags);
>>>>>> +	hlist_for_each_entry_safe(selem, n, &local_storage->list, snode) {
>>>>>> +		bpf_selem_unlink_map(selem);
>>>>>> +		free_cgroup_storage =
>>>>>> +			bpf_selem_unlink_storage_nolock(local_storage, selem, false, false);
>>>>>
>>>>> This still requires a comment explaining why it's OK to overwrite
>>>>> free_cgroup_storage with a previous value from calling
>>>>> bpf_selem_unlink_storage_nolock(). Even if that is safe, this looks like
>>>>> a pretty weird programming pattern, and IMO doing this feels more
>>>>> intentional and future-proof:
>>>>>
>>>>> if (bpf_selem_unlink_storage_nolock(local_storage, selem, false, false))
>>>>> 	free_cgroup_storage = true;
>>>>
>>>> We have a comment a few lines below.
>>>>     /* free_cgroup_storage should always be true as long as
>>>>      * local_storage->list was non-empty.
>>>>      */
>>>>     if (free_cgroup_storage)
>>>> 	kfree_rcu(local_storage, rcu);
>>>
>>> IMO that comment doesn't provide much useful information -- it states an
>>> assumption, but doesn't give a reason for it.
>>>
>>>> I will add more explanation in the above code like
>>>>
>>>> 	bpf_selem_unlink_map(selem);
>>>> 	/* If the local_storage list has only one element,
>>>> 	 * bpf_selem_unlink_storage_nolock() will return true.
>>>> 	 * Otherwise, it will return false. The current loop iteration
>>>> 	 * intends to remove all local storage, so the last iteration
>>>> 	 * of the loop will set free_cgroup_storage to true.
>>>> 	 */
>>>> 	free_cgroup_storage =
>>>> 		bpf_selem_unlink_storage_nolock(local_storage, selem, false, false);
>>>
>>> Thanks, this is the type of comment I was looking for.
>>>
>>> Also, I realize this was copy-pasted from a number of other possible
>>> locations in the codebase which are doing the same thing, but I still
>>> think this pattern is an odd and brittle way to do this. We're relying
>>> on an abstracted implementation detail of
>>> bpf_selem_unlink_storage_nolock() for correctness, which IMO is a signal
>>> that bpf_selem_unlink_storage_nolock() should probably be the one
>>> invoking kfree_rcu() on behalf of callers in the first place.  It looks
>>> like all of the callers end up calling kfree_rcu() on the struct
>>> bpf_local_storage * if bpf_selem_unlink_storage_nolock() returns true,
>>> so can we just move the responsibility of freeing the local storage
>>> object down into bpf_selem_unlink_storage_nolock() where it's unlinked?
>>
>> We probably cannot do this. bpf_selem_unlink_storage_nolock()
>> is inside the rcu_read_lock() region. We do kfree_rcu() outside
>> the rcu_read_lock() region.
> 
> kfree_rcu() is non-blocking and is safe to invoke from within an RCU
> read region. If you invoke it within an RCU read region, the object will
> not be kfree'd until (at least) you exit the current read region, so I
> believe that the net effect here should be the same whether it's done in
> bpf_selem_unlink_storage_nolock(), or in the caller after the RCU read
> region is exited.

Okay. We probably still want to do kfree_rcu() outside
bpf_selem_unlink_storage_nolock(), since that function's job is to
unlink the storage for one particular selem.

We could move
	if (free_cgroup_storage)
		kfree_rcu(local_storage, rcu);
to immediately after the hlist_for_each_entry_safe() loop.
But I think putting that 'if' statement after rcu_read_unlock() is
slightly better, as it does not grow the code inside the lock region.

> 
>>> IMO this can be done in a separate patch set, if we decide it's worth
>>> doing at all.
>>>
>>>>>
>>>>>> +	}
>>>>>> +	raw_spin_unlock_irqrestore(&local_storage->lock, flags);
>>>>>> +	bpf_cgrp_storage_unlock();
>>>>>> +	rcu_read_unlock();
>>>>>> +
>>>>>> +	/* free_cgroup_storage should always be true as long as
>>>>>> +	 * local_storage->list was non-empty.
>>>>>> +	 */
>>>>>> +	if (free_cgroup_storage)
>>>>>> +		kfree_rcu(local_storage, rcu);
>>>>>> +}
>>>>>> +
>>>>>> +static struct bpf_local_storage_data *
>>>>>> +cgroup_storage_lookup(struct cgroup *cgroup, struct bpf_map *map, bool cacheit_lockit)
>>>>>> +{
>>>>>> +	struct bpf_local_storage *cgroup_storage;
>>>>>> +	struct bpf_local_storage_map *smap;
>>>>>> +
>>>>>> +	cgroup_storage = rcu_dereference_check(cgroup->bpf_cgrp_storage,
>>>>>> +					       bpf_rcu_lock_held());
>>>>>> +	if (!cgroup_storage)
>>>>>> +		return NULL;
>>>>>> +
>>>>>> +	smap = (struct bpf_local_storage_map *)map;
>>>>>> +	return bpf_local_storage_lookup(cgroup_storage, smap, cacheit_lockit);
>>>>>> +}
>>>>>> +
>>>>>> +static void *bpf_cgrp_storage_lookup_elem(struct bpf_map *map, void *key)
>>>>>> +{
>>>>>> +	struct bpf_local_storage_data *sdata;
>>>>>> +	struct cgroup *cgroup;
>>>>>> +	int fd;
>>>>>> +
>>>>>> +	fd = *(int *)key;
>>>>>> +	cgroup = cgroup_get_from_fd(fd);
>>>>>> +	if (IS_ERR(cgroup))
>>>>>> +		return ERR_CAST(cgroup);
>>>>>> +
>>>>>> +	bpf_cgrp_storage_lock();
>>>>>> +	sdata = cgroup_storage_lookup(cgroup, map, true);
>>>>>> +	bpf_cgrp_storage_unlock();
>>>>>> +	cgroup_put(cgroup);
>>>>>> +	return sdata ? sdata->data : NULL;
>>>>>> +}
>>>>>
>>>>> Stanislav pointed out in the v1 revision that there's a lot of very
>>>>> similar logic in task storage, and I think you'd mentioned that you were
>>>>> going to think about generalizing some of that. Have you had a chance to
>>>>> consider?
>>>>
>>>> It is hard to have a common function for
>>>> lookup_elem/update_elem/delete_elem(). They are quite different as each
>>>> heavily involves
>>>> task/cgroup-specific functions.
>>>
>>> Yes agreed, each implementation is acquiring their own references, and
>>> finding the backing element in whatever way it was implemented, etc.
>>>
>>>> but map_alloc and map_free could have common helpers.
>>>
>>> Agreed, and many of the static functions that are invoked on those paths
>>> such as bpf_cgrp_storage_free(), bpf_cgrp_storage_lock(), etc possibly
>>> as well. In general this feels like something we could pretty easily
>>> simplify using something like a structure with callbacks to implement
>>> the pieces of logic that are specific to each local storage type, such
>>> as getting the struct bpf_local_storage __rcu
>>> * pointer from some context (e.g.  cgroup_storage_ptr()). It doesn't
>>> necessarily need to block this change, but IMO we should clean this up
>>> soon because a lot of this is nearly a 100% copy-paste of other local
>>> storage implementations.
>>
>> Further refactoring is possible. Martin is working to simplify the
>> locking mechanism. We can wait for that done before doing refactoring.
> 
> Sounds great, thanks!
> 
> - David

* Re: [PATCH bpf-next v2 2/6] bpf: Implement cgroup storage available to non-cgroup-attached bpf progs
  2022-10-23 16:45             ` Yonghong Song
@ 2022-10-23 21:14               ` David Vernet
  0 siblings, 0 replies; 23+ messages in thread
From: David Vernet @ 2022-10-23 21:14 UTC (permalink / raw)
  To: Yonghong Song
  Cc: Yonghong Song, bpf, Alexei Starovoitov, Andrii Nakryiko,
	Daniel Borkmann, kernel-team, KP Singh, Martin KaFai Lau,
	Tejun Heo

On Sun, Oct 23, 2022 at 09:45:35AM -0700, Yonghong Song wrote:
> > > > > > > +	 * could be modifying the local_storage->list now.
> > > > > > > +	 * Thus, no elem can be added-to or deleted-from the
> > > > > > > +	 * local_storage->list by the bpf_prog or by the bpf-map's syscall.
> > > > > > > +	 *
> > > > > > > +	 * It is racing with bpf_local_storage_map_free() alone
> > > > > > > +	 * when unlinking elem from the local_storage->list and
> > > > > > > +	 * the map's bucket->list.
> > > > > > > +	 */
> > > > > > > +	bpf_cgrp_storage_lock();
> > > > > > > +	raw_spin_lock_irqsave(&local_storage->lock, flags);
> > > > > > > +	hlist_for_each_entry_safe(selem, n, &local_storage->list, snode) {
> > > > > > > +		bpf_selem_unlink_map(selem);
> > > > > > > +		free_cgroup_storage =
> > > > > > > +			bpf_selem_unlink_storage_nolock(local_storage, selem, false, false);
> > > > > > 
> > > > > > This still requires a comment explaining why it's OK to overwrite
> > > > > > free_cgroup_storage with a previous value from calling
> > > > > > bpf_selem_unlink_storage_nolock(). Even if that is safe, this looks like
> > > > > > a pretty weird programming pattern, and IMO doing this feels more
> > > > > > intentional and future-proof:
> > > > > > 
> > > > > > if (bpf_selem_unlink_storage_nolock(local_storage, selem, false, false))
> > > > > > 	free_cgroup_storage = true;
> > > > > 
> > > > > We have a comment a few lines below.
> > > > >     /* free_cgroup_storage should always be true as long as
> > > > >      * local_storage->list was non-empty.
> > > > >      */
> > > > >     if (free_cgroup_storage)
> > > > > 	kfree_rcu(local_storage, rcu);
> > > > 
> > > > IMO that comment doesn't provide much useful information -- it states an
> > > > assumption, but doesn't give a reason for it.
> > > > 
> > > > > I will add more explanation in the above code like
> > > > > 
> > > > > 	bpf_selem_unlink_map(selem);
> > > > > 	/* If the local_storage list has only one element,
> > > > > 	 * bpf_selem_unlink_storage_nolock() will return true.
> > > > > 	 * Otherwise, it will return false. The current loop iteration
> > > > > 	 * intends to remove all local storage, so the last iteration
> > > > > 	 * of the loop will set free_cgroup_storage to true.
> > > > > 	 */
> > > > > 	free_cgroup_storage =
> > > > > 		bpf_selem_unlink_storage_nolock(local_storage, selem, false, false);
> > > > 
> > > > Thanks, this is the type of comment I was looking for.
> > > > 
> > > > Also, I realize this was copy-pasted from a number of other possible
> > > > locations in the codebase which are doing the same thing, but I still
> > > > think this pattern is an odd and brittle way to do this. We're relying
> > > > on an abstracted implementation detail of
> > > > bpf_selem_unlink_storage_nolock() for correctness, which IMO is a signal
> > > > that bpf_selem_unlink_storage_nolock() should probably be the one
> > > > invoking kfree_rcu() on behalf of callers in the first place.  It looks
> > > > like all of the callers end up calling kfree_rcu() on the struct
> > > > bpf_local_storage * if bpf_selem_unlink_storage_nolock() returns true,
> > > > so can we just move the responsibility of freeing the local storage
> > > > object down into bpf_selem_unlink_storage_nolock() where it's unlinked?
> > > 
> > > We probably cannot do this. bpf_selem_unlink_storage_nolock()
> > > is inside the rcu_read_lock() region. We do kfree_rcu() outside
> > > the rcu_read_lock() region.
> > 
> > kfree_rcu() is non-blocking and is safe to invoke from within an RCU
> > read region. If you invoke it within an RCU read region, the object will
> > not be kfree'd until (at least) you exit the current read region, so I
> > believe that the net effect here should be the same whether it's done in
> > bpf_selem_unlink_storage_nolock(), or in the caller after the RCU read
> > region is exited.
> 
> Okay. We probably still want to do kfree_rcu() outside
> bpf_selem_unlink_storage_nolock(), since that function's job is to
> unlink the storage for one particular selem.

Meaning, it's for unlinking a specific element rather than the whole
list, so it's not the right place to free the larger struct
bpf_local_storage * container? If that's your point (and please clarify
if it's not and I'm misunderstanding) then I agree that's true, but
unfortunately whether the API likes it or not, it's tied itself to the
lifetime of the larger struct bpf_local_storage * by returning a bool
that says whether the caller needs to free that local storage pointer.
AFAICT, with the current API / implementation, if the caller drops this
value on the floor, the struct bpf_local_storage * is leaked, which
means that it's a leaky API.

That being said, I think I agree with you that just moving kfree_rcu()
into bpf_selem_unlink_storage_nolock() may not be appropriate, but
overall it feels like this pattern / API has room for improvement.
The fact that the (now) only three callers of this function have
copy-pasted code doing the exact same thing to free the local
storage object is, in my opinion, a testament to that.

Anyways, none of that needs to block this patch set. I acked this in
your latest version, but I think this should be cleaned up by someone in
the near future; certainly before we add another local storage variant.

> We could move
> 	if (free_cgroup_storage)
> 		kfree_rcu(local_storage, rcu);
> to immediately after the hlist_for_each_entry_safe() loop.
> But I think putting that 'if' statement after rcu_read_unlock() is
> slightly better, as it does not grow the code inside the lock region.

Yeah, if it's not abstracted by the bpf_local_storage APIs, it might as
well just be freed outside of the critical section.

Thanks,
David

end of thread, other threads:[~2022-10-23 21:15 UTC | newest]

Thread overview: 23+ messages
2022-10-20 22:12 [PATCH bpf-next v2 0/6] bpf: Implement cgroup local storage available to non-cgroup-attached bpf progs Yonghong Song
2022-10-20 22:13 ` [PATCH bpf-next v2 1/6] bpf: Make struct cgroup btf id global Yonghong Song
2022-10-20 22:13 ` [PATCH bpf-next v2 2/6] bpf: Implement cgroup storage available to non-cgroup-attached bpf progs Yonghong Song
2022-10-21  5:22   ` David Vernet
2022-10-21  5:26     ` David Vernet
2022-10-21 17:33     ` Yonghong Song
2022-10-21 19:57       ` David Vernet
2022-10-21 22:57         ` Yonghong Song
2022-10-22  3:02           ` David Vernet
2022-10-23 16:45             ` Yonghong Song
2022-10-23 21:14               ` David Vernet
     [not found]   ` <202210210932.nHqTyTmx-lkp@intel.com>
2022-10-21 16:51     ` Yonghong Song
2022-10-21 19:29   ` Yosry Ahmed
2022-10-21 21:05     ` Yonghong Song
2022-10-20 22:13 ` [PATCH bpf-next v2 3/6] libbpf: Support new cgroup local storage Yonghong Song
2022-10-21 23:10   ` Andrii Nakryiko
2022-10-22  0:32     ` Yonghong Song
2022-10-22  1:05       ` Tejun Heo
2022-10-20 22:13 ` [PATCH bpf-next v2 4/6] bpftool: " Yonghong Song
2022-10-20 22:13 ` [PATCH bpf-next v2 5/6] selftests/bpf: Add selftests for " Yonghong Song
2022-10-20 22:13 ` [PATCH bpf-next v2 6/6] docs/bpf: Add documentation for map type BPF_MAP_TYPE_CGRP_STROAGE Yonghong Song
2022-10-21  7:12   ` David Vernet
2022-10-21 17:46     ` Yonghong Song
