bpf.vger.kernel.org archive mirror
* [PATCH bpf-next v7 0/8] bpf: rstat: cgroup hierarchical stats
@ 2022-08-05 21:48 Hao Luo
  2022-08-05 21:48 ` [PATCH bpf-next v7 1/8] btf: Add a new kfunc flag which allows to mark a function to be sleepable Hao Luo
                   ` (8 more replies)
  0 siblings, 9 replies; 23+ messages in thread
From: Hao Luo @ 2022-08-05 21:48 UTC (permalink / raw)
  To: linux-kernel, bpf, cgroups, netdev
  Cc: Alexei Starovoitov, Andrii Nakryiko, Daniel Borkmann,
	Martin KaFai Lau, Song Liu, Yonghong Song, Tejun Heo, Zefan Li,
	KP Singh, Johannes Weiner, Michal Hocko, Benjamin Tissoires,
	John Fastabend, Michal Koutny, Roman Gushchin, David Rientjes,
	Stanislav Fomichev, Shakeel Butt, Yosry Ahmed, Hao Luo

This patch series allows bpf programs to collect hierarchical cgroup
stats efficiently by integrating with the rstat framework. The rstat
framework provides an efficient way to collect cgroup stats per-cpu and
propagate them through the cgroup hierarchy.

The stats are exposed to userspace in textual form by reading files in
bpffs, similar to cgroupfs stats, using a cgroup_iter program.
cgroup_iter is a type of bpf_iter. It walks over cgroups in four modes:
- walking a cgroup's descendants in pre-order.
- walking a cgroup's descendants in post-order.
- walking a cgroup's ancestors.
- processing only a single object.

If no order is specified, the default order is pre-order.

When attaching a cgroup_iter, one needs to set a cgroup on the
iter_link created by the attachment. This cgroup can be passed either
as a file descriptor or as a cgroup id, and serves as the starting
point of the walk.

One can also terminate the walk early by returning 1 from the iter
program.

Note that because walking the cgroup hierarchy requires holding
cgroup_mutex, the iter program is called with cgroup_mutex held.
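
As a concrete illustration, attaching a cgroup_iter from userspace
looks roughly like the sketch below (modeled on the selftest in patch
5; error handling is trimmed and the skeleton/program names are
placeholders):

	DECLARE_LIBBPF_OPTS(bpf_iter_attach_opts, opts);
	union bpf_iter_link_info linfo;
	struct bpf_link *link;
	char buf[128];
	int iter_fd;

	memset(&linfo, 0, sizeof(linfo));
	linfo.cgroup.cgroup_fd = cgroup_fd;  /* or set linfo.cgroup.cgroup_id */
	linfo.cgroup.order = BPF_ITER_DESCENDANTS_PRE;
	opts.link_info = &linfo;
	opts.link_info_len = sizeof(linfo);

	/* skel->progs.dumper stands in for an iter/cgroup program */
	link = bpf_program__attach_iter(skel->progs.dumper, &opts);
	iter_fd = bpf_iter_create(bpf_link__fd(link));

	/* reading iter_fd returns the text emitted by the iter program */
	while (read(iter_fd, buf, sizeof(buf)) > 0)
		;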

** Background on rstat for stats collection **
(I am using a subscriber analogy that is not commonly used)

The rstat framework maintains a tree of cgroups that have updates, and
tracks which cpus have those updates. A subscriber to the rstat
framework maintains its own stats. The framework tells the subscriber
when and what to flush, for the most efficient stats propagation. The
workflow is as follows:

- When a subscriber updates a cgroup on a cpu, it informs the rstat
  framework by calling cgroup_rstat_updated(cgrp, cpu).

- When a subscriber wants to read some stats for a cgroup, it asks
  the rstat framework to initiate a stats flush (propagation) by calling
  cgroup_rstat_flush(cgrp).

- When the rstat framework initiates a flush, it makes callbacks to
  subscribers to aggregate stats on cpus that have updates, and
  propagate updates to their parent.

Currently, the main subscribers to the rstat framework are cgroup
subsystems (e.g. memory, block). This patch series allows bpf programs
to become subscribers as well.
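
To make the subscriber workflow concrete, the update side of a bpf
subscriber could look like the sketch below (the kfunc is added in
patch 6; the map, the hooked function and the counted event are
placeholders):

	/* percpu counters keyed by cgroup id; map definition omitted */
	extern void cgroup_rstat_updated(struct cgroup *cgrp, int cpu) __ksym;

	SEC("fentry/some_traced_function")	/* placeholder hook */
	int BPF_PROG(count_event, struct cgroup *cgrp)
	{
		u64 cg_id = cgrp->kn->id;
		u64 *cnt = bpf_map_lookup_elem(&percpu_counts, &cg_id);

		if (cnt)
			(*cnt)++;
		/* tell rstat this cgroup has pending updates on this cpu */
		cgroup_rstat_updated(cgrp, bpf_get_smp_processor_id());
		return 0;
	}

The flush side attaches to the bpf_rstat_flush() hook added in patch 6;
see the sketch in that patch.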

This patch series includes a resend of a patch from the mailing list by
Benjamin Tissoires to support sleepable kfuncs [1], modified to use the
new kfunc flags infrastructure.

Patches in this series are organized as follows:
* Patch 1 is the updated sleepable kfuncs patch.
* Patch 2 enables the use of cgroup_get_from_file() in cgroup1.
  This is useful because it enables cgroup_iter to work with cgroup1, and
  allows the entire stat collection workflow to be cgroup1-compatible.
* Patches 3-5 introduce the cgroup_iter program and a selftest.
* Patches 6-8 allow bpf programs to integrate with rstat by adding the
  necessary hook points and kfunc. A comprehensive selftest that
  demonstrates the entire workflow for using bpf and rstat to
  efficiently collect and output cgroup stats is added.

---
Changelog:
v6 -> v7:
- Updated commit/comments in cgroup_iter for read() behavior (Yonghong)
- Extracted BPF_ITER_SELF and other options out of cgroup_iter, so
  that they can be used in other iters. Also renamed them. (Andrii)
- Supports both cgroup_fd and cgroup_id when specifying target cgroup.
  (Andrii)
- Avoided using macro for formatting expected output in cgroup_iter
  selftest. (Andrii)
- Applied 'static' on all vars and functions in cgroup_iter selftest.
  (Andrii)
- Fixed broken buf reading in cgroup_iter selftest. (Andrii)
- Switched to use bpf_link__destroy() unconditionally. (Andrii)
- Removed 'volatile' for non-const global vars in selftests. (Andrii)
- Started using bpf_core_enum_value() to get memory_cgrp_id. (Andrii)

v5 -> v6:
- Rebased on bpf-next
- Tidy up cgroup_hierarchical_stats test (Andrii)
  * 'static' and 'inline'
  * avoid using libbpf_get_error()
  * string literals of cgroup paths.
- Rename patch 8/8 to 'selftests/bpf' (Yonghong)
- Fix cgroup_iter comments (e.g. PAGE_SIZE and uapi) (Yonghong)
- Make sure further read() returns OK after previous read() finished
  properly (Yonghong)
- Release cgroup_mutex before the last call of show() (Kumar)

v4 -> v5:
- Rebased on top of new kfunc flags infrastructure, updated patch 1 and
  patch 6 accordingly.
- Added docs for sleepable kfuncs.

v3 -> v4:
- cgroup_iter:
  * reorder fields in bpf_link_info to avoid breaking uapi (Yonghong)
  * comment the behavior when cgroup_fd=0 (Yonghong)
  * comment on the limit of number of cgroups supported by cgroup_iter.
    (Yonghong)
- cgroup_hierarchical_stats selftest:
  * Do not return -1 if stats are not found (causes overflow in userspace).
  * Check if child process failed to join cgroup.
  * Make buf and path arrays in get_cgroup_vmscan_delay() static.
  * Increase the test map sizes to accommodate cgroups that are not
    created by the test.

v2 -> v3:
- cgroup_iter:
  * Added conditional compilation of cgroup_iter.c in kernel/bpf/Makefile
    (kernel test) and dropped the !CONFIG_CGROUPS patch.
  * Added validation of traversal_order when attaching (Yonghong).
  * Fixed previous wording "two modes" to "three modes" (Yonghong).
  * Fixed the btf_dump selftest broken by this patch (Yonghong).
  * Fixed ctx_arg_info[0] to use "PTR_TO_BTF_ID_OR_NULL" instead of
    "PTR_TO_BTF_ID", because the "cgroup" pointer passed to iter prog can
     be null.
- Use __diag_push to eliminate __weak noinline warning in
  bpf_rstat_flush().
- cgroup_hierarchical_stats selftest:
  * Added write_cgroup_file_parent() helper.
  * Added error handling for failed map updates.
  * Added null check for cgroup in vmscan_flush.
  * Fixed the signature of vmscan_[start/end].
  * Correctly return error code when attaching trace programs fail.
  * Make sure all links are destroyed correctly and not leaking in
    cgroup_hierarchical_stats selftest.
  * Use memory.reclaim instead of memory.high as a more reliable way to
    invoke reclaim.
  * Eliminated sleeps, the test now runs faster.

v1 -> v2:
- Redesign of cgroup_iter from v1, based on Alexei's idea [2]:
  * supports walking cgroup subtree.
  * supports walking ancestors of a cgroup. (Andrii)
  * supports terminating the walk early.
  * uses fd instead of cgroup_id as parameter for iter_link. Using fd is
    a convention in bpf.
  * gets cgroup's ref at attach time and deref at detach.
  * brought back cgroup1 support for cgroup_iter.
- Squashed the patches adding the rstat flush hook points and kfuncs
  (Tejun).
- Added a comment explaining why bpf_rstat_flush() needs to be weak
  (Tejun).
- Updated the final selftest with the new cgroup_iter design.
- Changed CHECKs in the selftest with ASSERTs (Yonghong, Andrii).
- Removed empty line at the end of the selftest (Yonghong).
- Renamed test files to cgroup_hierarchical_stats.c.
- Reordered CGROUP_PATH params order to match struct declaration
  in the selftest (Michal).
- Removed memory_subsys_enabled() and made sure memcg controller
  enablement checks make sense and are documented (Michal).

RFC v2 -> v1:
- Instead of introducing a new program type for rstat flushing, add an
  empty hook point, bpf_rstat_flush(), and use fentry bpf programs to
  attach to it and flush bpf stats.
- Instead of using helpers, use kfuncs for rstat functions.
- These changes simplify the patchset greatly, with minimal changes to
  uapi.

RFC v1 -> RFC v2:
- Instead of attaching rstat flush programs to subsystems, they now
  attach to rstat (global flushers, not per-subsystem), based on
  discussions with Tejun. The first patch is entirely rewritten.
- Pass cgroup pointers to rstat flushers instead of cgroup ids. This
  gives much more flexibility and is less likely to need a uapi update
  later.
- rstat helpers are now only defined if CONFIG_CGROUPS.
- Most of the code is now only defined if CONFIG_CGROUPS and
  CONFIG_BPF_SYSCALL.
- Move rstat helper protos from bpf_base_func_proto() to
  tracing_prog_func_proto().
- rstat helpers argument (cgroup pointer) is now ARG_PTR_TO_BTF_ID, not
  ARG_ANYTHING.
- Rewrote the selftest to use the cgroup helpers.
- Dropped bpf_map_lookup_percpu_elem (already added by Feng).
- Dropped patch to support cgroup v1 for cgroup_iter.
- Dropped patch to define some cgroup_put() when !CONFIG_CGROUPS. The
  code that calls it is no longer compiled when !CONFIG_CGROUPS.

cgroup_iter was originally introduced in a different patch series[3].
Hao and I agreed that it fits better as part of this series.
RFC v1 of this patch series had the following changes from [3]:
- Getting the cgroup's reference at attach time, instead of at
  iteration time. (Yonghong)
- Remove .init_seq_private and .fini_seq_private callbacks for
  cgroup_iter. They are not needed now. (Yonghong)

[1] https://lore.kernel.org/bpf/20220421140740.459558-5-benjamin.tissoires@redhat.com/
[2] https://lore.kernel.org/bpf/20220520221919.jnqgv52k4ajlgzcl@MBP-98dd607d3435.dhcp.thefacebook.com/
[3] https://lore.kernel.org/lkml/20220225234339.2386398-9-haoluo@google.com/

Benjamin Tissoires (1):
  btf: Add a new kfunc flag which allows to mark a function to be
    sleepable

Hao Luo (3):
  bpf, iter: Fix the condition on p when calling stop.
  bpf: Introduce cgroup iter
  selftests/bpf: Test cgroup_iter.

Yosry Ahmed (4):
  cgroup: enable cgroup_get_from_file() on cgroup1
  cgroup: bpf: enable bpf programs to integrate with rstat
  selftests/bpf: extend cgroup helpers
  selftests/bpf: add a selftest for cgroup hierarchical stats collection

 Documentation/bpf/kfuncs.rst                  |   6 +
 include/linux/bpf.h                           |   8 +
 include/linux/btf.h                           |   1 +
 include/uapi/linux/bpf.h                      |  38 ++
 kernel/bpf/Makefile                           |   3 +
 kernel/bpf/bpf_iter.c                         |   5 +
 kernel/bpf/btf.c                              |   9 +
 kernel/bpf/cgroup_iter.c                      | 286 ++++++++++++++
 kernel/cgroup/cgroup.c                        |   5 -
 kernel/cgroup/rstat.c                         |  48 +++
 tools/include/uapi/linux/bpf.h                |  38 ++
 tools/testing/selftests/bpf/cgroup_helpers.c  | 202 ++++++++--
 tools/testing/selftests/bpf/cgroup_helpers.h  |  19 +-
 .../selftests/bpf/prog_tests/btf_dump.c       |   4 +-
 .../prog_tests/cgroup_hierarchical_stats.c    | 358 ++++++++++++++++++
 .../selftests/bpf/prog_tests/cgroup_iter.c    | 237 ++++++++++++
 tools/testing/selftests/bpf/progs/bpf_iter.h  |   7 +
 .../bpf/progs/cgroup_hierarchical_stats.c     | 226 +++++++++++
 .../testing/selftests/bpf/progs/cgroup_iter.c |  39 ++
 19 files changed, 1485 insertions(+), 54 deletions(-)
 create mode 100644 kernel/bpf/cgroup_iter.c
 create mode 100644 tools/testing/selftests/bpf/prog_tests/cgroup_hierarchical_stats.c
 create mode 100644 tools/testing/selftests/bpf/prog_tests/cgroup_iter.c
 create mode 100644 tools/testing/selftests/bpf/progs/cgroup_hierarchical_stats.c
 create mode 100644 tools/testing/selftests/bpf/progs/cgroup_iter.c

-- 
2.37.1.559.g78731f0fdb-goog



* [PATCH bpf-next v7 1/8] btf: Add a new kfunc flag which allows to mark a function to be sleepable
  2022-08-05 21:48 [PATCH bpf-next v7 0/8] bpf: rstat: cgroup hierarchical stats Hao Luo
@ 2022-08-05 21:48 ` Hao Luo
  2022-08-05 21:48 ` [PATCH bpf-next v7 2/8] cgroup: enable cgroup_get_from_file() on cgroup1 Hao Luo
                   ` (7 subsequent siblings)
  8 siblings, 0 replies; 23+ messages in thread
From: Hao Luo @ 2022-08-05 21:48 UTC (permalink / raw)
  To: linux-kernel, bpf, cgroups, netdev
  Cc: Alexei Starovoitov, Andrii Nakryiko, Daniel Borkmann,
	Martin KaFai Lau, Song Liu, Yonghong Song, Tejun Heo, Zefan Li,
	KP Singh, Johannes Weiner, Michal Hocko, Benjamin Tissoires,
	John Fastabend, Michal Koutny, Roman Gushchin, David Rientjes,
	Stanislav Fomichev, Shakeel Butt, Yosry Ahmed, Hao Luo

From: Benjamin Tissoires <benjamin.tissoires@redhat.com>

This allows declaring a kfunc as sleepable and prevents its use in a
non-sleepable program.

Signed-off-by: Benjamin Tissoires <benjamin.tissoires@redhat.com>
Co-developed-by: Yosry Ahmed <yosryahmed@google.com>
Signed-off-by: Yosry Ahmed <yosryahmed@google.com>
Signed-off-by: Hao Luo <haoluo@google.com>
---
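
For context, the flag takes effect at kfunc registration time. Patch 6
registers cgroup_rstat_flush() exactly this way; here is a minimal
sketch with placeholder names:

	BTF_SET8_START(my_kfunc_ids)
	BTF_ID_FLAGS(func, my_sleepable_kfunc, KF_SLEEPABLE)
	BTF_SET8_END(my_kfunc_ids)

	static const struct btf_kfunc_id_set my_kfunc_set = {
		.owner	= THIS_MODULE,
		.set	= &my_kfunc_ids,
	};

	/* register for the program types allowed to call these kfuncs */
	register_btf_kfunc_id_set(BPF_PROG_TYPE_TRACING, &my_kfunc_set);
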
 Documentation/bpf/kfuncs.rst | 6 ++++++
 include/linux/btf.h          | 1 +
 kernel/bpf/btf.c             | 9 +++++++++
 3 files changed, 16 insertions(+)

diff --git a/Documentation/bpf/kfuncs.rst b/Documentation/bpf/kfuncs.rst
index c0b7dae6dbf5..c8b21de1c772 100644
--- a/Documentation/bpf/kfuncs.rst
+++ b/Documentation/bpf/kfuncs.rst
@@ -146,6 +146,12 @@ that operate (change some property, perform some operation) on an object that
 was obtained using an acquire kfunc. Such kfuncs need an unchanged pointer to
 ensure the integrity of the operation being performed on the expected object.
 
+2.4.6 KF_SLEEPABLE flag
+-----------------------
+
+The KF_SLEEPABLE flag is used for kfuncs that may sleep. Such kfuncs can only
+be called by sleepable BPF programs (BPF_F_SLEEPABLE).
+
 2.5 Registering the kfuncs
 --------------------------
 
diff --git a/include/linux/btf.h b/include/linux/btf.h
index cdb376d53238..976cbdd2981f 100644
--- a/include/linux/btf.h
+++ b/include/linux/btf.h
@@ -49,6 +49,7 @@
  * for this case.
  */
 #define KF_TRUSTED_ARGS (1 << 4) /* kfunc only takes trusted pointer arguments */
+#define KF_SLEEPABLE   (1 << 5) /* kfunc may sleep */
 
 struct btf;
 struct btf_member;
diff --git a/kernel/bpf/btf.c b/kernel/bpf/btf.c
index 7e64447659f3..d3e4c86b8fcd 100644
--- a/kernel/bpf/btf.c
+++ b/kernel/bpf/btf.c
@@ -6175,6 +6175,7 @@ static int btf_check_func_arg_match(struct bpf_verifier_env *env,
 {
 	enum bpf_prog_type prog_type = resolve_prog_type(env->prog);
 	bool rel = false, kptr_get = false, trusted_arg = false;
+	bool sleepable = false;
 	struct bpf_verifier_log *log = &env->log;
 	u32 i, nargs, ref_id, ref_obj_id = 0;
 	bool is_kfunc = btf_is_kernel(btf);
@@ -6212,6 +6213,7 @@ static int btf_check_func_arg_match(struct bpf_verifier_env *env,
 		rel = kfunc_flags & KF_RELEASE;
 		kptr_get = kfunc_flags & KF_KPTR_GET;
 		trusted_arg = kfunc_flags & KF_TRUSTED_ARGS;
+		sleepable = kfunc_flags & KF_SLEEPABLE;
 	}
 
 	/* check that BTF function arguments match actual types that the
@@ -6419,6 +6421,13 @@ static int btf_check_func_arg_match(struct bpf_verifier_env *env,
 			func_name);
 		return -EINVAL;
 	}
+
+	if (sleepable && !env->prog->aux->sleepable) {
+		bpf_log(log, "kernel function %s is sleepable but the program is not\n",
+			func_name);
+		return -EINVAL;
+	}
+
 	/* returns argument register number > 0 in case of reference release kfunc */
 	return rel ? ref_regno : 0;
 }
-- 
2.37.1.559.g78731f0fdb-goog



* [PATCH bpf-next v7 2/8] cgroup: enable cgroup_get_from_file() on cgroup1
  2022-08-05 21:48 [PATCH bpf-next v7 0/8] bpf: rstat: cgroup hierarchical stats Hao Luo
  2022-08-05 21:48 ` [PATCH bpf-next v7 1/8] btf: Add a new kfunc flag which allows to mark a function to be sleepable Hao Luo
@ 2022-08-05 21:48 ` Hao Luo
  2022-08-05 21:48 ` [PATCH bpf-next v7 3/8] bpf, iter: Fix the condition on p when calling stop Hao Luo
                   ` (6 subsequent siblings)
  8 siblings, 0 replies; 23+ messages in thread
From: Hao Luo @ 2022-08-05 21:48 UTC (permalink / raw)
  To: linux-kernel, bpf, cgroups, netdev
  Cc: Alexei Starovoitov, Andrii Nakryiko, Daniel Borkmann,
	Martin KaFai Lau, Song Liu, Yonghong Song, Tejun Heo, Zefan Li,
	KP Singh, Johannes Weiner, Michal Hocko, Benjamin Tissoires,
	John Fastabend, Michal Koutny, Roman Gushchin, David Rientjes,
	Stanislav Fomichev, Shakeel Butt, Yosry Ahmed, Hao Luo

From: Yosry Ahmed <yosryahmed@google.com>

cgroup_get_from_file() currently fails with -EBADF if called on cgroup
v1. However, the current implementation works on cgroup v1 as well, so
the restriction is unnecessary.

This enables cgroup_get_from_fd() to work on cgroup v1, which was the
only thing stopping bpf cgroup_iter from supporting cgroup v1.

Signed-off-by: Yosry Ahmed <yosryahmed@google.com>
Acked-by: Tejun Heo <tj@kernel.org>
Signed-off-by: Hao Luo <haoluo@google.com>
---
 kernel/cgroup/cgroup.c | 5 -----
 1 file changed, 5 deletions(-)

diff --git a/kernel/cgroup/cgroup.c b/kernel/cgroup/cgroup.c
index 13c8e91d7862..49803849a289 100644
--- a/kernel/cgroup/cgroup.c
+++ b/kernel/cgroup/cgroup.c
@@ -6099,11 +6099,6 @@ static struct cgroup *cgroup_get_from_file(struct file *f)
 		return ERR_CAST(css);
 
 	cgrp = css->cgroup;
-	if (!cgroup_on_dfl(cgrp)) {
-		cgroup_put(cgrp);
-		return ERR_PTR(-EBADF);
-	}
-
 	return cgrp;
 }
 
-- 
2.37.1.559.g78731f0fdb-goog



* [PATCH bpf-next v7 3/8] bpf, iter: Fix the condition on p when calling stop.
  2022-08-05 21:48 [PATCH bpf-next v7 0/8] bpf: rstat: cgroup hierarchical stats Hao Luo
  2022-08-05 21:48 ` [PATCH bpf-next v7 1/8] btf: Add a new kfunc flag which allows to mark a function to be sleepable Hao Luo
  2022-08-05 21:48 ` [PATCH bpf-next v7 2/8] cgroup: enable cgroup_get_from_file() on cgroup1 Hao Luo
@ 2022-08-05 21:48 ` Hao Luo
  2022-08-05 21:48 ` [PATCH bpf-next v7 4/8] bpf: Introduce cgroup iter Hao Luo
                   ` (5 subsequent siblings)
  8 siblings, 0 replies; 23+ messages in thread
From: Hao Luo @ 2022-08-05 21:48 UTC (permalink / raw)
  To: linux-kernel, bpf, cgroups, netdev
  Cc: Alexei Starovoitov, Andrii Nakryiko, Daniel Borkmann,
	Martin KaFai Lau, Song Liu, Yonghong Song, Tejun Heo, Zefan Li,
	KP Singh, Johannes Weiner, Michal Hocko, Benjamin Tissoires,
	John Fastabend, Michal Koutny, Roman Gushchin, David Rientjes,
	Stanislav Fomichev, Shakeel Butt, Yosry Ahmed, Hao Luo

In bpf_seq_read(), seq->op->next() could return an error pointer, in
which case control jumps to the label stop. However, the existing code
at stop does not handle the case where p (returned from next()) is an
error pointer. Add handling for this case by converting p into an
error code and jumping to done.

Because none of the current implementations return an error pointer
from next(), this patch introduces no behavior change.

Acked-by: Yonghong Song <yhs@fb.com>
Signed-off-by: Hao Luo <haoluo@google.com>
---
 kernel/bpf/bpf_iter.c | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/kernel/bpf/bpf_iter.c b/kernel/bpf/bpf_iter.c
index 7e8fd49406f6..4688ba39ef25 100644
--- a/kernel/bpf/bpf_iter.c
+++ b/kernel/bpf/bpf_iter.c
@@ -198,6 +198,11 @@ static ssize_t bpf_seq_read(struct file *file, char __user *buf, size_t size,
 	}
 stop:
 	offs = seq->count;
+	if (IS_ERR(p)) {
+		seq->op->stop(seq, NULL);
+		err = PTR_ERR(p);
+		goto done;
+	}
 	/* bpf program called if !p */
 	seq->op->stop(seq, p);
 	if (!p) {
-- 
2.37.1.559.g78731f0fdb-goog



* [PATCH bpf-next v7 4/8] bpf: Introduce cgroup iter
  2022-08-05 21:48 [PATCH bpf-next v7 0/8] bpf: rstat: cgroup hierarchical stats Hao Luo
                   ` (2 preceding siblings ...)
  2022-08-05 21:48 ` [PATCH bpf-next v7 3/8] bpf, iter: Fix the condition on p when calling stop Hao Luo
@ 2022-08-05 21:48 ` Hao Luo
  2022-08-09  0:18   ` Andrii Nakryiko
  2022-08-05 21:48 ` [PATCH bpf-next v7 5/8] selftests/bpf: Test cgroup_iter Hao Luo
                   ` (4 subsequent siblings)
  8 siblings, 1 reply; 23+ messages in thread
From: Hao Luo @ 2022-08-05 21:48 UTC (permalink / raw)
  To: linux-kernel, bpf, cgroups, netdev
  Cc: Alexei Starovoitov, Andrii Nakryiko, Daniel Borkmann,
	Martin KaFai Lau, Song Liu, Yonghong Song, Tejun Heo, Zefan Li,
	KP Singh, Johannes Weiner, Michal Hocko, Benjamin Tissoires,
	John Fastabend, Michal Koutny, Roman Gushchin, David Rientjes,
	Stanislav Fomichev, Shakeel Butt, Yosry Ahmed, Hao Luo

Cgroup_iter is a type of bpf_iter. It walks over cgroups in four modes:

 - walking a cgroup's descendants in pre-order.
 - walking a cgroup's descendants in post-order.
 - walking a cgroup's ancestors.
 - processing only the given cgroup.

When attaching a cgroup_iter, one can set a cgroup on the iter_link
created by the attachment. This cgroup is passed as a file descriptor
or a cgroup id and serves as the starting point of the walk. If no
cgroup is specified, the walk starts from the cgroup v2 root.

For walking descendants, one can specify the order: either pre-order or
post-order. For walking ancestors, the walk starts at the specified
cgroup and ends at the root.

One can also terminate the walk early by returning 1 from the iter
program.

Note that because walking the cgroup hierarchy requires holding
cgroup_mutex, the iter program is called with cgroup_mutex held.

Currently only one read session is supported, which means that,
depending on the volume of data the bpf program intends to send to
user space, the number of cgroups that can be walked is limited. For
example, with the current buffer size of 8 * PAGE_SIZE and a PAGE_SIZE
of 4kb, a program that sends 64B of data per cgroup can walk at most
512 cgroups. This is a limitation of cgroup_iter. If the output data
is larger than the kernel buffer size, then after all data in the
kernel buffer is consumed by user space, the subsequent read() syscall
will signal EOPNOTSUPP. To work around this, the user may have to
update their program to reduce the volume of data sent to output, for
example by skipping uninteresting cgroups. In the future, we may
extend bpf_iter flags to allow customizing the buffer size.

Acked-by: Yonghong Song <yhs@fb.com>
Acked-by: Tejun Heo <tj@kernel.org>
Signed-off-by: Hao Luo <haoluo@google.com>
---
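
To illustrate the single-session limitation described above, a
userspace reader of the iter fd might look like this sketch (error
handling trimmed; consume() is a placeholder):

	char buf[4096];
	ssize_t n;

	while ((n = read(iter_fd, buf, sizeof(buf))) > 0)
		consume(buf, n);

	if (n < 0 && errno == EOPNOTSUPP) {
		/* the program's output exceeded the kernel buffer, so
		 * this iteration's result is incomplete.
		 */
	}
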
 include/linux/bpf.h                           |   8 +
 include/uapi/linux/bpf.h                      |  38 +++
 kernel/bpf/Makefile                           |   3 +
 kernel/bpf/cgroup_iter.c                      | 286 ++++++++++++++++++
 tools/include/uapi/linux/bpf.h                |  38 +++
 .../selftests/bpf/prog_tests/btf_dump.c       |   4 +-
 6 files changed, 375 insertions(+), 2 deletions(-)
 create mode 100644 kernel/bpf/cgroup_iter.c

diff --git a/include/linux/bpf.h b/include/linux/bpf.h
index 20c26aed7896..09b5c2167424 100644
--- a/include/linux/bpf.h
+++ b/include/linux/bpf.h
@@ -48,6 +48,7 @@ struct mem_cgroup;
 struct module;
 struct bpf_func_state;
 struct ftrace_ops;
+struct cgroup;
 
 extern struct idr btf_idr;
 extern spinlock_t btf_idr_lock;
@@ -1730,7 +1731,14 @@ int bpf_obj_get_user(const char __user *pathname, int flags);
 	int __init bpf_iter_ ## target(args) { return 0; }
 
 struct bpf_iter_aux_info {
+	/* for map_elem iter */
 	struct bpf_map *map;
+
+	/* for cgroup iter */
+	struct {
+		struct cgroup *start; /* starting cgroup */
+		int order;
+	} cgroup;
 };
 
 typedef int (*bpf_iter_attach_target_t)(struct bpf_prog *prog,
diff --git a/include/uapi/linux/bpf.h b/include/uapi/linux/bpf.h
index 59a217ca2dfd..4d758b2e70d6 100644
--- a/include/uapi/linux/bpf.h
+++ b/include/uapi/linux/bpf.h
@@ -87,10 +87,37 @@ struct bpf_cgroup_storage_key {
 	__u32	attach_type;		/* program attach type (enum bpf_attach_type) */
 };
 
+enum bpf_iter_order {
+	BPF_ITER_ORDER_DEFAULT = 0,	/* default order. */
+	BPF_ITER_SELF,			/* process only a single object. */
+	BPF_ITER_DESCENDANTS_PRE,	/* walk descendants in pre-order. */
+	BPF_ITER_DESCENDANTS_POST,	/* walk descendants in post-order. */
+	BPF_ITER_ANCESTORS_UP,		/* walk ancestors upward. */
+};
+
 union bpf_iter_link_info {
 	struct {
 		__u32	map_fd;
 	} map;
+	struct {
+		/* Valid values include:
+		 *  - BPF_ITER_ORDER_DEFAULT
+		 *  - BPF_ITER_SELF
+		 *  - BPF_ITER_DESCENDANTS_PRE
+		 *  - BPF_ITER_DESCENDANTS_POST
+		 *  - BPF_ITER_ANCESTORS_UP
+		 * for cgroup_iter, DEFAULT is equivalent to DESCENDANTS_PRE.
+		 */
+		__u32	order;
+
+		/* At most one of cgroup_fd and cgroup_id can be non-zero. If
+		 * both are zero, the walk starts from the default cgroup v2
+		 * root. For walking v1 hierarchy, one should always explicitly
+		 * specify cgroup_fd.
+		 */
+		__u32	cgroup_fd;
+		__u64	cgroup_id;
+	} cgroup;
 };
 
 /* BPF syscall commands, see bpf(2) man-page for more details. */
@@ -6134,11 +6161,22 @@ struct bpf_link_info {
 		struct {
 			__aligned_u64 target_name; /* in/out: target_name buffer ptr */
 			__u32 target_name_len;	   /* in/out: target_name buffer len */
+
+			/* If the iter specific field is 32 bits, it can be put
+			 * in the first or second union. Otherwise it should be
+			 * put in the second union.
+			 */
 			union {
 				struct {
 					__u32 map_id;
 				} map;
 			};
+			union {
+				struct {
+					__u64 cgroup_id;
+					__u32 order;
+				} cgroup;
+			};
 		} iter;
 		struct  {
 			__u32 netns_ino;
diff --git a/kernel/bpf/Makefile b/kernel/bpf/Makefile
index 057ba8e01e70..00e05b69a4df 100644
--- a/kernel/bpf/Makefile
+++ b/kernel/bpf/Makefile
@@ -24,6 +24,9 @@ endif
 ifeq ($(CONFIG_PERF_EVENTS),y)
 obj-$(CONFIG_BPF_SYSCALL) += stackmap.o
 endif
+ifeq ($(CONFIG_CGROUPS),y)
+obj-$(CONFIG_BPF_SYSCALL) += cgroup_iter.o
+endif
 obj-$(CONFIG_CGROUP_BPF) += cgroup.o
 ifeq ($(CONFIG_INET),y)
 obj-$(CONFIG_BPF_SYSCALL) += reuseport_array.o
diff --git a/kernel/bpf/cgroup_iter.c b/kernel/bpf/cgroup_iter.c
new file mode 100644
index 000000000000..c469252d0536
--- /dev/null
+++ b/kernel/bpf/cgroup_iter.c
@@ -0,0 +1,286 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/* Copyright (c) 2022 Google */
+#include <linux/bpf.h>
+#include <linux/btf_ids.h>
+#include <linux/cgroup.h>
+#include <linux/kernel.h>
+#include <linux/seq_file.h>
+
+#include "../cgroup/cgroup-internal.h"  /* cgroup_mutex and cgroup_is_dead */
+
+/* cgroup_iter provides four modes of traversal to the cgroup hierarchy.
+ *
+ *  1. Walk the descendants of a cgroup in pre-order.
+ *  2. Walk the descendants of a cgroup in post-order.
+ *  3. Walk the ancestors of a cgroup.
+ *  4. Show the given cgroup only.
+ *
+ * For walking descendants, cgroup_iter can walk in either pre-order or
+ * post-order. For walking ancestors, the iter walks up from a cgroup to
+ * the root.
+ *
+ * The iter program can terminate the walk early by returning 1. Walk
+ * continues if prog returns 0.
+ *
+ * The prog can check (seq->num == 0) to determine whether this is
+ * the first element. The prog may also be passed a NULL cgroup,
+ * which means the walk has completed and the prog has a chance to
+ * do post-processing, such as outputting an epilogue.
+ *
+ * Note: the iter_prog is called with cgroup_mutex held.
+ *
+ * Currently only one session is supported, which means, depending on the
+ * volume of data bpf program intends to send to user space, the number
+ * of cgroups that can be walked is limited. For example, given the current
+ * buffer size is 8 * PAGE_SIZE, if the program sends 64B data for each
+ * cgroup, assuming PAGE_SIZE is 4kb, the total number of cgroups that can
+ * be walked is 512. This is a limitation of cgroup_iter. If the output data
+ * is larger than the kernel buffer size, after all data in the kernel buffer
+ * is consumed by user space, the subsequent read() syscall will signal
+ * EOPNOTSUPP. To work around this, the user may have to update their
+ * program to reduce the volume of data sent to output. For example, skip
+ * some uninteresting cgroups.
+ */
+
+struct bpf_iter__cgroup {
+	__bpf_md_ptr(struct bpf_iter_meta *, meta);
+	__bpf_md_ptr(struct cgroup *, cgroup);
+};
+
+struct cgroup_iter_priv {
+	struct cgroup_subsys_state *start_css;
+	bool visited_all;
+	bool terminate;
+	int order;
+};
+
+static void *cgroup_iter_seq_start(struct seq_file *seq, loff_t *pos)
+{
+	struct cgroup_iter_priv *p = seq->private;
+
+	mutex_lock(&cgroup_mutex);
+
+	/* cgroup_iter doesn't support read across multiple sessions. */
+	if (*pos > 0) {
+		if (p->visited_all)
+			return NULL;
+
+		/* Haven't visited all, but because cgroup_mutex has dropped,
+		 * return -EOPNOTSUPP to indicate incomplete iteration.
+		 */
+		return ERR_PTR(-EOPNOTSUPP);
+	}
+
+	++*pos;
+	p->terminate = false;
+	p->visited_all = false;
+	if (p->order == BPF_ITER_DESCENDANTS_PRE)
+		return css_next_descendant_pre(NULL, p->start_css);
+	else if (p->order == BPF_ITER_DESCENDANTS_POST)
+		return css_next_descendant_post(NULL, p->start_css);
+	else if (p->order == BPF_ITER_ANCESTORS_UP)
+		return p->start_css;
+	else /* BPF_ITER_SELF */
+		return p->start_css;
+}
+
+static int __cgroup_iter_seq_show(struct seq_file *seq,
+				  struct cgroup_subsys_state *css, int in_stop);
+
+static void cgroup_iter_seq_stop(struct seq_file *seq, void *v)
+{
+	struct cgroup_iter_priv *p = seq->private;
+
+	mutex_unlock(&cgroup_mutex);
+
+	/* pass NULL to the prog for post-processing */
+	if (!v) {
+		__cgroup_iter_seq_show(seq, NULL, true);
+		p->visited_all = true;
+	}
+}
+
+static void *cgroup_iter_seq_next(struct seq_file *seq, void *v, loff_t *pos)
+{
+	struct cgroup_subsys_state *curr = (struct cgroup_subsys_state *)v;
+	struct cgroup_iter_priv *p = seq->private;
+
+	++*pos;
+	if (p->terminate)
+		return NULL;
+
+	if (p->order == BPF_ITER_DESCENDANTS_PRE)
+		return css_next_descendant_pre(curr, p->start_css);
+	else if (p->order == BPF_ITER_DESCENDANTS_POST)
+		return css_next_descendant_post(curr, p->start_css);
+	else if (p->order == BPF_ITER_ANCESTORS_UP)
+		return curr->parent;
+	else  /* BPF_ITER_SELF */
+		return NULL;
+}
+
+static int __cgroup_iter_seq_show(struct seq_file *seq,
+				  struct cgroup_subsys_state *css, int in_stop)
+{
+	struct cgroup_iter_priv *p = seq->private;
+	struct bpf_iter__cgroup ctx;
+	struct bpf_iter_meta meta;
+	struct bpf_prog *prog;
+	int ret = 0;
+
+	/* cgroup is dead, skip this element */
+	if (css && cgroup_is_dead(css->cgroup))
+		return 0;
+
+	ctx.meta = &meta;
+	ctx.cgroup = css ? css->cgroup : NULL;
+	meta.seq = seq;
+	prog = bpf_iter_get_info(&meta, in_stop);
+	if (prog)
+		ret = bpf_iter_run_prog(prog, &ctx);
+
+	/* if prog returns > 0, terminate after this element. */
+	if (ret != 0)
+		p->terminate = true;
+
+	return 0;
+}
+
+static int cgroup_iter_seq_show(struct seq_file *seq, void *v)
+{
+	return __cgroup_iter_seq_show(seq, (struct cgroup_subsys_state *)v,
+				      false);
+}
+
+static const struct seq_operations cgroup_iter_seq_ops = {
+	.start  = cgroup_iter_seq_start,
+	.next   = cgroup_iter_seq_next,
+	.stop   = cgroup_iter_seq_stop,
+	.show   = cgroup_iter_seq_show,
+};
+
+BTF_ID_LIST_SINGLE(bpf_cgroup_btf_id, struct, cgroup)
+
+static int cgroup_iter_seq_init(void *priv, struct bpf_iter_aux_info *aux)
+{
+	struct cgroup_iter_priv *p = (struct cgroup_iter_priv *)priv;
+	struct cgroup *cgrp = aux->cgroup.start;
+
+	p->start_css = &cgrp->self;
+	p->terminate = false;
+	p->visited_all = false;
+	p->order = aux->cgroup.order;
+	return 0;
+}
+
+static const struct bpf_iter_seq_info cgroup_iter_seq_info = {
+	.seq_ops		= &cgroup_iter_seq_ops,
+	.init_seq_private	= cgroup_iter_seq_init,
+	.seq_priv_size		= sizeof(struct cgroup_iter_priv),
+};
+
+static int bpf_iter_attach_cgroup(struct bpf_prog *prog,
+				  union bpf_iter_link_info *linfo,
+				  struct bpf_iter_aux_info *aux)
+{
+	int fd = linfo->cgroup.cgroup_fd;
+	u64 id = linfo->cgroup.cgroup_id;
+	int order = linfo->cgroup.order;
+	struct cgroup *cgrp;
+
+	if (order == BPF_ITER_ORDER_DEFAULT)
+		order = BPF_ITER_DESCENDANTS_PRE;
+
+	if (order != BPF_ITER_DESCENDANTS_PRE &&
+	    order != BPF_ITER_DESCENDANTS_POST &&
+	    order != BPF_ITER_ANCESTORS_UP &&
+	    order != BPF_ITER_SELF)
+		return -EINVAL;
+
+	if (fd && id)
+		return -EINVAL;
+
+	if (fd)
+		cgrp = cgroup_get_from_fd(fd);
+	else if (id)
+		cgrp = cgroup_get_from_id(id);
+	else /* walk the entire hierarchy by default. */
+		cgrp = cgroup_get_from_path("/");
+
+	if (IS_ERR(cgrp))
+		return PTR_ERR(cgrp);
+
+	aux->cgroup.start = cgrp;
+	aux->cgroup.order = order;
+	return 0;
+}
+
+static void bpf_iter_detach_cgroup(struct bpf_iter_aux_info *aux)
+{
+	cgroup_put(aux->cgroup.start);
+}
+
+static void bpf_iter_cgroup_show_fdinfo(const struct bpf_iter_aux_info *aux,
+					struct seq_file *seq)
+{
+	char *buf;
+
+	buf = kzalloc(PATH_MAX, GFP_KERNEL);
+	if (!buf) {
+		seq_puts(seq, "cgroup_path:\t<unknown>\n");
+		goto show_order;
+	}
+
+	/* If cgroup_path_ns() fails, buf will be an empty string, cgroup_path
+	 * will print nothing.
+	 *
+	 * Path is in the calling process's cgroup namespace.
+	 */
+	cgroup_path_ns(aux->cgroup.start, buf, PATH_MAX,
+		       current->nsproxy->cgroup_ns);
+	seq_printf(seq, "cgroup_path:\t%s\n", buf);
+	kfree(buf);
+
+show_order:
+	if (aux->cgroup.order == BPF_ITER_DESCENDANTS_PRE)
+		seq_puts(seq, "order: pre\n");
+	else if (aux->cgroup.order == BPF_ITER_DESCENDANTS_POST)
+		seq_puts(seq, "order: post\n");
+	else if (aux->cgroup.order == BPF_ITER_ANCESTORS_UP)
+		seq_puts(seq, "order: up\n");
+	else /* BPF_ITER_SELF */
+		seq_puts(seq, "order: self\n");
+}
+
+static int bpf_iter_cgroup_fill_link_info(const struct bpf_iter_aux_info *aux,
+					  struct bpf_link_info *info)
+{
+	info->iter.cgroup.order = aux->cgroup.order;
+	info->iter.cgroup.cgroup_id = cgroup_id(aux->cgroup.start);
+	return 0;
+}
+
+DEFINE_BPF_ITER_FUNC(cgroup, struct bpf_iter_meta *meta,
+		     struct cgroup *cgroup)
+
+static struct bpf_iter_reg bpf_cgroup_reg_info = {
+	.target			= "cgroup",
+	.attach_target		= bpf_iter_attach_cgroup,
+	.detach_target		= bpf_iter_detach_cgroup,
+	.show_fdinfo		= bpf_iter_cgroup_show_fdinfo,
+	.fill_link_info		= bpf_iter_cgroup_fill_link_info,
+	.ctx_arg_info_size	= 1,
+	.ctx_arg_info		= {
+		{ offsetof(struct bpf_iter__cgroup, cgroup),
+		  PTR_TO_BTF_ID_OR_NULL },
+	},
+	.seq_info		= &cgroup_iter_seq_info,
+};
+
+static int __init bpf_cgroup_iter_init(void)
+{
+	bpf_cgroup_reg_info.ctx_arg_info[0].btf_id = bpf_cgroup_btf_id[0];
+	return bpf_iter_reg_target(&bpf_cgroup_reg_info);
+}
+
+late_initcall(bpf_cgroup_iter_init);
diff --git a/tools/include/uapi/linux/bpf.h b/tools/include/uapi/linux/bpf.h
index 59a217ca2dfd..4d758b2e70d6 100644
--- a/tools/include/uapi/linux/bpf.h
+++ b/tools/include/uapi/linux/bpf.h
@@ -87,10 +87,37 @@ struct bpf_cgroup_storage_key {
 	__u32	attach_type;		/* program attach type (enum bpf_attach_type) */
 };
 
+enum bpf_iter_order {
+	BPF_ITER_ORDER_DEFAULT = 0,	/* default order. */
+	BPF_ITER_SELF,			/* process only a single object. */
+	BPF_ITER_DESCENDANTS_PRE,	/* walk descendants in pre-order. */
+	BPF_ITER_DESCENDANTS_POST,	/* walk descendants in post-order. */
+	BPF_ITER_ANCESTORS_UP,		/* walk ancestors upward. */
+};
+
 union bpf_iter_link_info {
 	struct {
 		__u32	map_fd;
 	} map;
+	struct {
+		/* Valid values include:
+		 *  - BPF_ITER_ORDER_DEFAULT
+		 *  - BPF_ITER_SELF
+		 *  - BPF_ITER_DESCENDANTS_PRE
+		 *  - BPF_ITER_DESCENDANTS_POST
+		 *  - BPF_ITER_ANCESTORS_UP
+		 * for cgroup_iter, DEFAULT is equivalent to DESCENDANTS_PRE.
+		 */
+		__u32	order;
+
+		/* At most one of cgroup_fd and cgroup_id can be non-zero. If
+		 * both are zero, the walk starts from the default cgroup v2
+		 * root. For walking v1 hierarchy, one should always explicitly
+		 * specify cgroup_fd.
+		 */
+		__u32	cgroup_fd;
+		__u64	cgroup_id;
+	} cgroup;
 };
 
 /* BPF syscall commands, see bpf(2) man-page for more details. */
@@ -6134,11 +6161,22 @@ struct bpf_link_info {
 		struct {
 			__aligned_u64 target_name; /* in/out: target_name buffer ptr */
 			__u32 target_name_len;	   /* in/out: target_name buffer len */
+
+			/* If the iter specific field is 32 bits, it can be put
+			 * in the first or second union. Otherwise it should be
+			 * put in the second union.
+			 */
 			union {
 				struct {
 					__u32 map_id;
 				} map;
 			};
+			union {
+				struct {
+					__u64 cgroup_id;
+					__u32 order;
+				} cgroup;
+			};
 		} iter;
 		struct  {
 			__u32 netns_ino;
diff --git a/tools/testing/selftests/bpf/prog_tests/btf_dump.c b/tools/testing/selftests/bpf/prog_tests/btf_dump.c
index 5fce7008d1ff..84c1cfaa2b02 100644
--- a/tools/testing/selftests/bpf/prog_tests/btf_dump.c
+++ b/tools/testing/selftests/bpf/prog_tests/btf_dump.c
@@ -764,8 +764,8 @@ static void test_btf_dump_struct_data(struct btf *btf, struct btf_dump *d,
 
 	/* union with nested struct */
 	TEST_BTF_DUMP_DATA(btf, d, "union", str, union bpf_iter_link_info, BTF_F_COMPACT,
-			   "(union bpf_iter_link_info){.map = (struct){.map_fd = (__u32)1,},}",
-			   { .map = { .map_fd = 1 }});
+			   "(union bpf_iter_link_info){.map = (struct){.map_fd = (__u32)1,},.cgroup = (struct){.order = (__u32)1,.cgroup_fd = (__u32)1,},}",
+			   { .cgroup = { .order = 1, .cgroup_fd = 1, }});
 
 	/* struct skb with nested structs/unions; because type output is so
 	 * complex, we don't do a string comparison, just verify we return
-- 
2.37.1.559.g78731f0fdb-goog



* [PATCH bpf-next v7 5/8] selftests/bpf: Test cgroup_iter.
  2022-08-05 21:48 [PATCH bpf-next v7 0/8] bpf: rstat: cgroup hierarchical stats Hao Luo
                   ` (3 preceding siblings ...)
  2022-08-05 21:48 ` [PATCH bpf-next v7 4/8] bpf: Introduce cgroup iter Hao Luo
@ 2022-08-05 21:48 ` Hao Luo
  2022-08-09  0:20   ` Andrii Nakryiko
  2022-08-05 21:48 ` [PATCH bpf-next v7 6/8] cgroup: bpf: enable bpf programs to integrate with rstat Hao Luo
                   ` (3 subsequent siblings)
  8 siblings, 1 reply; 23+ messages in thread
From: Hao Luo @ 2022-08-05 21:48 UTC (permalink / raw)
  To: linux-kernel, bpf, cgroups, netdev
  Cc: Alexei Starovoitov, Andrii Nakryiko, Daniel Borkmann,
	Martin KaFai Lau, Song Liu, Yonghong Song, Tejun Heo, Zefan Li,
	KP Singh, Johannes Weiner, Michal Hocko, Benjamin Tissoires,
	John Fastabend, Michal Koutny, Roman Gushchin, David Rientjes,
	Stanislav Fomichev, Shakeel Butt, Yosry Ahmed, Hao Luo

Add a selftest for cgroup_iter. The selftest creates a mini cgroup tree
of the following structure:

    ROOT (working cgroup)
     |
   PARENT
  /      \
CHILD1  CHILD2

and tests the following scenarios:

 - invalid cgroup fd.
 - pre-order walk over descendants from PARENT.
 - post-order walk over descendants from PARENT.
 - walk of ancestors from PARENT.
 - walk from PARENT in the default order, which is pre-order.
 - processing only a single object (i.e. PARENT).
 - early termination.

Acked-by: Yonghong Song <yhs@fb.com>
Signed-off-by: Hao Luo <haoluo@google.com>
---
 .../selftests/bpf/prog_tests/cgroup_iter.c    | 237 ++++++++++++++++++
 tools/testing/selftests/bpf/progs/bpf_iter.h  |   7 +
 .../testing/selftests/bpf/progs/cgroup_iter.c |  39 +++
 3 files changed, 283 insertions(+)
 create mode 100644 tools/testing/selftests/bpf/prog_tests/cgroup_iter.c
 create mode 100644 tools/testing/selftests/bpf/progs/cgroup_iter.c

diff --git a/tools/testing/selftests/bpf/prog_tests/cgroup_iter.c b/tools/testing/selftests/bpf/prog_tests/cgroup_iter.c
new file mode 100644
index 000000000000..f9ab31a63c69
--- /dev/null
+++ b/tools/testing/selftests/bpf/prog_tests/cgroup_iter.c
@@ -0,0 +1,237 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright (c) 2022 Google */
+
+#include <test_progs.h>
+#include <bpf/libbpf.h>
+#include <bpf/btf.h>
+#include "cgroup_iter.skel.h"
+#include "cgroup_helpers.h"
+
+#define ROOT           0
+#define PARENT         1
+#define CHILD1         2
+#define CHILD2         3
+#define NUM_CGROUPS    4
+
+#define PROLOGUE       "prologue\n"
+#define EPILOGUE       "epilogue\n"
+
+static const char *cg_path[] = {
+	"/", "/parent", "/parent/child1", "/parent/child2"
+};
+
+static int cg_fd[] = {-1, -1, -1, -1};
+static unsigned long long cg_id[] = {0, 0, 0, 0};
+static char expected_output[64];
+
+static int setup_cgroups(void)
+{
+	int fd, i = 0;
+
+	for (i = 0; i < NUM_CGROUPS; i++) {
+		fd = create_and_get_cgroup(cg_path[i]);
+		if (fd < 0)
+			return fd;
+
+		cg_fd[i] = fd;
+		cg_id[i] = get_cgroup_id(cg_path[i]);
+	}
+	return 0;
+}
+
+static void cleanup_cgroups(void)
+{
+	int i;
+
+	for (i = 0; i < NUM_CGROUPS; i++)
+		close(cg_fd[i]);
+}
+
+static void read_from_cgroup_iter(struct bpf_program *prog, int cgroup_fd,
+				  int order, const char *testname)
+{
+	DECLARE_LIBBPF_OPTS(bpf_iter_attach_opts, opts);
+	union bpf_iter_link_info linfo;
+	struct bpf_link *link;
+	int len, iter_fd;
+	static char buf[128];
+	size_t left;
+	char *p;
+
+	memset(&linfo, 0, sizeof(linfo));
+	linfo.cgroup.cgroup_fd = cgroup_fd;
+	linfo.cgroup.order = order;
+	opts.link_info = &linfo;
+	opts.link_info_len = sizeof(linfo);
+
+	link = bpf_program__attach_iter(prog, &opts);
+	if (!ASSERT_OK_PTR(link, "attach_iter"))
+		return;
+
+	iter_fd = bpf_iter_create(bpf_link__fd(link));
+	if (iter_fd < 0)
+		goto free_link;
+
+	memset(buf, 0, sizeof(buf));
+	left = ARRAY_SIZE(buf);
+	p = buf;
+	while ((len = read(iter_fd, p, left)) > 0) {
+		p += len;
+		left -= len;
+	}
+
+	ASSERT_STREQ(buf, expected_output, testname);
+
+	/* read() after iter finishes should be ok. */
+	if (len == 0)
+		ASSERT_OK(read(iter_fd, buf, sizeof(buf)), "second_read");
+
+	close(iter_fd);
+free_link:
+	bpf_link__destroy(link);
+}
+
+/* Invalid cgroup. */
+static void test_invalid_cgroup(struct cgroup_iter *skel)
+{
+	DECLARE_LIBBPF_OPTS(bpf_iter_attach_opts, opts);
+	union bpf_iter_link_info linfo;
+	struct bpf_link *link;
+
+	memset(&linfo, 0, sizeof(linfo));
+	linfo.cgroup.cgroup_fd = (__u32)-1;
+	opts.link_info = &linfo;
+	opts.link_info_len = sizeof(linfo);
+
+	link = bpf_program__attach_iter(skel->progs.cgroup_id_printer, &opts);
+	ASSERT_ERR_PTR(link, "attach_iter");
+	bpf_link__destroy(link);
+}
+
+/* Specifying both cgroup_fd and cgroup_id is invalid. */
+static void test_invalid_cgroup_spec(struct cgroup_iter *skel)
+{
+	DECLARE_LIBBPF_OPTS(bpf_iter_attach_opts, opts);
+	union bpf_iter_link_info linfo;
+	struct bpf_link *link;
+
+	memset(&linfo, 0, sizeof(linfo));
+	linfo.cgroup.cgroup_fd = (__u32)cg_fd[PARENT];
+	linfo.cgroup.cgroup_id = (__u64)cg_id[PARENT];
+	opts.link_info = &linfo;
+	opts.link_info_len = sizeof(linfo);
+
+	link = bpf_program__attach_iter(skel->progs.cgroup_id_printer, &opts);
+	ASSERT_ERR_PTR(link, "attach_iter");
+	bpf_link__destroy(link);
+}
+
+/* Preorder walk prints parent and child in order. */
+static void test_walk_preorder(struct cgroup_iter *skel)
+{
+	snprintf(expected_output, sizeof(expected_output),
+		 PROLOGUE "%8llu\n%8llu\n%8llu\n" EPILOGUE,
+		 cg_id[PARENT], cg_id[CHILD1], cg_id[CHILD2]);
+
+	read_from_cgroup_iter(skel->progs.cgroup_id_printer, cg_fd[PARENT],
+			      BPF_ITER_DESCENDANTS_PRE, "preorder");
+}
+
+/* Postorder walk prints child and parent in order. */
+static void test_walk_postorder(struct cgroup_iter *skel)
+{
+	snprintf(expected_output, sizeof(expected_output),
+		 PROLOGUE "%8llu\n%8llu\n%8llu\n" EPILOGUE,
+		 cg_id[CHILD1], cg_id[CHILD2], cg_id[PARENT]);
+
+	read_from_cgroup_iter(skel->progs.cgroup_id_printer, cg_fd[PARENT],
+			      BPF_ITER_DESCENDANTS_POST, "postorder");
+}
+
+/* Walking parents prints parent and then root. */
+static void test_walk_ancestors_up(struct cgroup_iter *skel)
+{
+	/* terminate the walk when ROOT is met. */
+	skel->bss->terminal_cgroup = cg_id[ROOT];
+
+	snprintf(expected_output, sizeof(expected_output),
+		 PROLOGUE "%8llu\n%8llu\n" EPILOGUE,
+		 cg_id[PARENT], cg_id[ROOT]);
+
+	read_from_cgroup_iter(skel->progs.cgroup_id_printer, cg_fd[PARENT],
+			      BPF_ITER_ANCESTORS_UP, "ancestors_up");
+
+	skel->bss->terminal_cgroup = 0;
+}
+
+/* Default order is pre-order. */
+static void test_walk_default_order(struct cgroup_iter *skel)
+{
+	snprintf(expected_output, sizeof(expected_output),
+		 PROLOGUE "%8llu\n%8llu\n%8llu\n" EPILOGUE,
+		 cg_id[PARENT], cg_id[CHILD1], cg_id[CHILD2]);
+
+	read_from_cgroup_iter(skel->progs.cgroup_id_printer, cg_fd[PARENT],
+			      BPF_ITER_ORDER_DEFAULT, "default_order");
+}
+
+/* Early termination prints parent only. */
+static void test_early_termination(struct cgroup_iter *skel)
+{
+	/* terminate the walk after the first element is processed. */
+	skel->bss->terminate_early = 1;
+
+	snprintf(expected_output, sizeof(expected_output),
+		 PROLOGUE "%8llu\n" EPILOGUE, cg_id[PARENT]);
+
+	read_from_cgroup_iter(skel->progs.cgroup_id_printer, cg_fd[PARENT],
+			      BPF_ITER_DESCENDANTS_PRE, "early_termination");
+
+	skel->bss->terminate_early = 0;
+}
+
+/* Walking self prints self only. */
+static void test_walk_self(struct cgroup_iter *skel)
+{
+	snprintf(expected_output, sizeof(expected_output),
+		 PROLOGUE "%8llu\n" EPILOGUE, cg_id[PARENT]);
+
+	read_from_cgroup_iter(skel->progs.cgroup_id_printer, cg_fd[PARENT],
+			      BPF_ITER_SELF, "self");
+}
+
+void test_cgroup_iter(void)
+{
+	struct cgroup_iter *skel = NULL;
+
+	if (setup_cgroup_environment())
+		return;
+
+	if (setup_cgroups())
+		goto out;
+
+	skel = cgroup_iter__open_and_load();
+	if (!ASSERT_OK_PTR(skel, "cgroup_iter__open_and_load"))
+		goto out;
+
+	if (test__start_subtest("cgroup_iter__invalid_cgroup"))
+		test_invalid_cgroup(skel);
+	if (test__start_subtest("cgroup_iter__invalid_cgroup_spec"))
+		test_invalid_cgroup_spec(skel);
+	if (test__start_subtest("cgroup_iter__preorder"))
+		test_walk_preorder(skel);
+	if (test__start_subtest("cgroup_iter__postorder"))
+		test_walk_postorder(skel);
+	if (test__start_subtest("cgroup_iter__ancestors_up_walk"))
+		test_walk_ancestors_up(skel);
+	if (test__start_subtest("cgroup_iter__default_order"))
+		test_walk_default_order(skel);
+	if (test__start_subtest("cgroup_iter__early_termination"))
+		test_early_termination(skel);
+	if (test__start_subtest("cgroup_iter__self"))
+		test_walk_self(skel);
+out:
+	cgroup_iter__destroy(skel);
+	cleanup_cgroups();
+	cleanup_cgroup_environment();
+}
diff --git a/tools/testing/selftests/bpf/progs/bpf_iter.h b/tools/testing/selftests/bpf/progs/bpf_iter.h
index e9846606690d..c41ee80533ca 100644
--- a/tools/testing/selftests/bpf/progs/bpf_iter.h
+++ b/tools/testing/selftests/bpf/progs/bpf_iter.h
@@ -17,6 +17,7 @@
 #define bpf_iter__bpf_sk_storage_map bpf_iter__bpf_sk_storage_map___not_used
 #define bpf_iter__sockmap bpf_iter__sockmap___not_used
 #define bpf_iter__bpf_link bpf_iter__bpf_link___not_used
+#define bpf_iter__cgroup bpf_iter__cgroup___not_used
 #define btf_ptr btf_ptr___not_used
 #define BTF_F_COMPACT BTF_F_COMPACT___not_used
 #define BTF_F_NONAME BTF_F_NONAME___not_used
@@ -40,6 +41,7 @@
 #undef bpf_iter__bpf_sk_storage_map
 #undef bpf_iter__sockmap
 #undef bpf_iter__bpf_link
+#undef bpf_iter__cgroup
 #undef btf_ptr
 #undef BTF_F_COMPACT
 #undef BTF_F_NONAME
@@ -141,6 +143,11 @@ struct bpf_iter__bpf_link {
 	struct bpf_link *link;
 };
 
+struct bpf_iter__cgroup {
+	struct bpf_iter_meta *meta;
+	struct cgroup *cgroup;
+} __attribute__((preserve_access_index));
+
 struct btf_ptr {
 	void *ptr;
 	__u32 type_id;
diff --git a/tools/testing/selftests/bpf/progs/cgroup_iter.c b/tools/testing/selftests/bpf/progs/cgroup_iter.c
new file mode 100644
index 000000000000..de03997322a7
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/cgroup_iter.c
@@ -0,0 +1,39 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright (c) 2022 Google */
+
+#include "bpf_iter.h"
+#include <bpf/bpf_helpers.h>
+#include <bpf/bpf_tracing.h>
+
+char _license[] SEC("license") = "GPL";
+int terminate_early = 0;
+u64 terminal_cgroup = 0;
+
+static inline u64 cgroup_id(struct cgroup *cgrp)
+{
+	return cgrp->kn->id;
+}
+
+SEC("iter/cgroup")
+int cgroup_id_printer(struct bpf_iter__cgroup *ctx)
+{
+	struct seq_file *seq = ctx->meta->seq;
+	struct cgroup *cgrp = ctx->cgroup;
+
+	/* epilogue */
+	if (cgrp == NULL) {
+		BPF_SEQ_PRINTF(seq, "epilogue\n");
+		return 0;
+	}
+
+	/* prologue */
+	if (ctx->meta->seq_num == 0)
+		BPF_SEQ_PRINTF(seq, "prologue\n");
+
+	BPF_SEQ_PRINTF(seq, "%8llu\n", cgroup_id(cgrp));
+
+	if (terminal_cgroup == cgroup_id(cgrp))
+		return 1;
+
+	return terminate_early ? 1 : 0;
+}
-- 
2.37.1.559.g78731f0fdb-goog



* [PATCH bpf-next v7 6/8] cgroup: bpf: enable bpf programs to integrate with rstat
  2022-08-05 21:48 [PATCH bpf-next v7 0/8] bpf: rstat: cgroup hierarchical stats Hao Luo
                   ` (4 preceding siblings ...)
  2022-08-05 21:48 ` [PATCH bpf-next v7 5/8] selftests/bpf: Test cgroup_iter Hao Luo
@ 2022-08-05 21:48 ` Hao Luo
  2022-08-05 21:48 ` [PATCH bpf-next v7 7/8] selftests/bpf: extend cgroup helpers Hao Luo
                   ` (2 subsequent siblings)
  8 siblings, 0 replies; 23+ messages in thread
From: Hao Luo @ 2022-08-05 21:48 UTC (permalink / raw)
  To: linux-kernel, bpf, cgroups, netdev
  Cc: Alexei Starovoitov, Andrii Nakryiko, Daniel Borkmann,
	Martin KaFai Lau, Song Liu, Yonghong Song, Tejun Heo, Zefan Li,
	KP Singh, Johannes Weiner, Michal Hocko, Benjamin Tissoires,
	John Fastabend, Michal Koutny, Roman Gushchin, David Rientjes,
	Stanislav Fomichev, Shakeel Butt, Yosry Ahmed, Hao Luo

From: Yosry Ahmed <yosryahmed@google.com>

Enable bpf programs to make use of rstat to collect cgroup hierarchical
stats efficiently:
- Add cgroup_rstat_updated() kfunc, for bpf progs that collect stats.
- Add cgroup_rstat_flush() sleepable kfunc, for bpf progs that read stats.
- Add an empty bpf_rstat_flush() hook that is called during rstat
  flushing, for bpf progs that flush stats to attach to. Attaching a bpf
  prog to this hook effectively registers it as a flush callback.

Signed-off-by: Yosry Ahmed <yosryahmed@google.com>
Acked-by: Tejun Heo <tj@kernel.org>
Signed-off-by: Hao Luo <haoluo@google.com>
---
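
A flush callback is just an fentry program attached to the
bpf_rstat_flush() hook. A minimal sketch follows (the aggregation
logic is elided; the selftest in patch 8 contains a complete example):

	SEC("fentry/bpf_rstat_flush")
	int BPF_PROG(my_flush, struct cgroup *cgrp, struct cgroup *parent,
		     int cpu)
	{
		/* fold this cpu's pending delta for cgrp into its total,
		 * then add the delta to parent (parent is NULL for the
		 * root). Map definitions and arithmetic are omitted.
		 */
		return 0;
	}

	/* readers then trigger the callbacks via the sleepable kfunc,
	 * callable only from sleepable programs (KF_SLEEPABLE):
	 */
	extern void cgroup_rstat_flush(struct cgroup *cgrp) __ksym;
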
 kernel/cgroup/rstat.c | 48 +++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 48 insertions(+)

diff --git a/kernel/cgroup/rstat.c b/kernel/cgroup/rstat.c
index 24b5c2ab5598..3289f6e0d306 100644
--- a/kernel/cgroup/rstat.c
+++ b/kernel/cgroup/rstat.c
@@ -3,6 +3,10 @@
 
 #include <linux/sched/cputime.h>
 
+#include <linux/bpf.h>
+#include <linux/btf.h>
+#include <linux/btf_ids.h>
+
 static DEFINE_SPINLOCK(cgroup_rstat_lock);
 static DEFINE_PER_CPU(raw_spinlock_t, cgroup_rstat_cpu_lock);
 
@@ -141,6 +145,31 @@ static struct cgroup *cgroup_rstat_cpu_pop_updated(struct cgroup *pos,
 	return pos;
 }
 
+/*
+ * A hook for bpf stat collectors to attach to and flush their stats.
+ * Together with providing bpf kfuncs for cgroup_rstat_updated() and
+ * cgroup_rstat_flush(), this enables a complete workflow where bpf progs that
+ * collect cgroup stats can integrate with rstat for efficient flushing.
+ *
+ * A static noinline declaration here could cause the compiler to optimize away
+ * the function. A global noinline declaration will keep the definition, but may
+ * optimize away the callsite. Therefore, __weak is needed to ensure that the
+ * call is still emitted, by telling the compiler that we don't know what the
+ * function might eventually be.
+ *
+ * __diag_* below are needed to dismiss the missing prototype warning.
+ */
+__diag_push();
+__diag_ignore_all("-Wmissing-prototypes",
+		  "kfuncs which will be used in BPF programs");
+
+__weak noinline void bpf_rstat_flush(struct cgroup *cgrp,
+				     struct cgroup *parent, int cpu)
+{
+}
+
+__diag_pop();
+
 /* see cgroup_rstat_flush() */
 static void cgroup_rstat_flush_locked(struct cgroup *cgrp, bool may_sleep)
 	__releases(&cgroup_rstat_lock) __acquires(&cgroup_rstat_lock)
@@ -168,6 +197,7 @@ static void cgroup_rstat_flush_locked(struct cgroup *cgrp, bool may_sleep)
 			struct cgroup_subsys_state *css;
 
 			cgroup_base_stat_flush(pos, cpu);
+			bpf_rstat_flush(pos, cgroup_parent(pos), cpu);
 
 			rcu_read_lock();
 			list_for_each_entry_rcu(css, &pos->rstat_css_list,
@@ -469,3 +499,21 @@ void cgroup_base_stat_cputime_show(struct seq_file *seq)
 		   "system_usec %llu\n",
 		   usage, utime, stime);
 }
+
+/* Add bpf kfuncs for cgroup_rstat_updated() and cgroup_rstat_flush() */
+BTF_SET8_START(bpf_rstat_kfunc_ids)
+BTF_ID_FLAGS(func, cgroup_rstat_updated)
+BTF_ID_FLAGS(func, cgroup_rstat_flush, KF_SLEEPABLE)
+BTF_SET8_END(bpf_rstat_kfunc_ids)
+
+static const struct btf_kfunc_id_set bpf_rstat_kfunc_set = {
+	.owner          = THIS_MODULE,
+	.set            = &bpf_rstat_kfunc_ids,
+};
+
+static int __init bpf_rstat_kfunc_init(void)
+{
+	return register_btf_kfunc_id_set(BPF_PROG_TYPE_TRACING,
+					 &bpf_rstat_kfunc_set);
+}
+late_initcall(bpf_rstat_kfunc_init);
-- 
2.37.1.559.g78731f0fdb-goog



* [PATCH bpf-next v7 7/8] selftests/bpf: extend cgroup helpers
  2022-08-05 21:48 [PATCH bpf-next v7 0/8] bpf: rstat: cgroup hierarchical stats Hao Luo
                   ` (5 preceding siblings ...)
  2022-08-05 21:48 ` [PATCH bpf-next v7 6/8] cgroup: bpf: enable bpf programs to integrate with rstat Hao Luo
@ 2022-08-05 21:48 ` Hao Luo
  2022-08-05 21:48 ` [PATCH bpf-next v7 8/8] selftests/bpf: add a selftest for cgroup hierarchical stats collection Hao Luo
  2022-08-09 16:20 ` [PATCH bpf-next v7 0/8] bpf: rstat: cgroup hierarchical stats patchwork-bot+netdevbpf
  8 siblings, 0 replies; 23+ messages in thread
From: Hao Luo @ 2022-08-05 21:48 UTC (permalink / raw)
  To: linux-kernel, bpf, cgroups, netdev
  Cc: Alexei Starovoitov, Andrii Nakryiko, Daniel Borkmann,
	Martin KaFai Lau, Song Liu, Yonghong Song, Tejun Heo, Zefan Li,
	KP Singh, Johannes Weiner, Michal Hocko, Benjamin Tissoires,
	John Fastabend, Michal Koutny, Roman Gushchin, David Rientjes,
	Stanislav Fomichev, Shakeel Butt, Yosry Ahmed, Hao Luo

From: Yosry Ahmed <yosryahmed@google.com>

This patch extends the bpf selftest cgroup_helpers in various ways:
- Add enable_controllers() that allows tests to enable all or a
  subset of controllers for a specific cgroup.
- Add join_cgroup_parent(). The cgroup workdir is based on the pid;
  therefore a spawned child cannot join the test's cgroup hierarchy
  through join_cgroup(). join_cgroup_parent() is used in child
  processes to join a cgroup under the parent's workdir.
- Add write_cgroup_file() and write_cgroup_file_parent() (similar to
  join_cgroup_parent() above).
- Add get_root_cgroup() for tests that need to do checks on root cgroup.
- Distinguish relative and absolute cgroup paths in function arguments.
  Now relative paths are called relative_path, and absolute paths are
  called cgroup_path.

Signed-off-by: Yosry Ahmed <yosryahmed@google.com>
Signed-off-by: Hao Luo <haoluo@google.com>
---
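
Typical usage of the new helpers in a test body might look like the
sketch below (paths are relative to the per-test workdir; the
controller, file and value are placeholders):

	/* enable only the memory controller for the workdir root,
	 * or pass NULL to enable all available controllers.
	 */
	if (enable_controllers("/", "memory"))
		goto cleanup;

	/* write to a control file of a cgroup under the workdir */
	if (write_cgroup_file("/parent", "memory.reclaim", "10000000"))
		goto cleanup;
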
 tools/testing/selftests/bpf/cgroup_helpers.c | 202 +++++++++++++++----
 tools/testing/selftests/bpf/cgroup_helpers.h |  19 +-
 2 files changed, 174 insertions(+), 47 deletions(-)

diff --git a/tools/testing/selftests/bpf/cgroup_helpers.c b/tools/testing/selftests/bpf/cgroup_helpers.c
index 9d59c3990ca8..e914cc45b766 100644
--- a/tools/testing/selftests/bpf/cgroup_helpers.c
+++ b/tools/testing/selftests/bpf/cgroup_helpers.c
@@ -33,49 +33,52 @@
 #define CGROUP_MOUNT_DFLT		"/sys/fs/cgroup"
 #define NETCLS_MOUNT_PATH		CGROUP_MOUNT_DFLT "/net_cls"
 #define CGROUP_WORK_DIR			"/cgroup-test-work-dir"
-#define format_cgroup_path(buf, path) \
+
+#define format_cgroup_path_pid(buf, path, pid) \
 	snprintf(buf, sizeof(buf), "%s%s%d%s", CGROUP_MOUNT_PATH, \
-	CGROUP_WORK_DIR, getpid(), path)
+	CGROUP_WORK_DIR, pid, path)
+
+#define format_cgroup_path(buf, path) \
+	format_cgroup_path_pid(buf, path, getpid())
+
+#define format_parent_cgroup_path(buf, path) \
+	format_cgroup_path_pid(buf, path, getppid())
 
 #define format_classid_path(buf)				\
 	snprintf(buf, sizeof(buf), "%s%s", NETCLS_MOUNT_PATH,	\
 		 CGROUP_WORK_DIR)
 
-/**
- * enable_all_controllers() - Enable all available cgroup v2 controllers
- *
- * Enable all available cgroup v2 controllers in order to increase
- * the code coverage.
- *
- * If successful, 0 is returned.
- */
-static int enable_all_controllers(char *cgroup_path)
+static int __enable_controllers(const char *cgroup_path, const char *controllers)
 {
 	char path[PATH_MAX + 1];
-	char buf[PATH_MAX];
+	char enable[PATH_MAX + 1];
 	char *c, *c2;
 	int fd, cfd;
 	ssize_t len;
 
-	snprintf(path, sizeof(path), "%s/cgroup.controllers", cgroup_path);
-	fd = open(path, O_RDONLY);
-	if (fd < 0) {
-		log_err("Opening cgroup.controllers: %s", path);
-		return 1;
-	}
-
-	len = read(fd, buf, sizeof(buf) - 1);
-	if (len < 0) {
+	/* If no controllers are passed, enable all available controllers */
+	if (!controllers) {
+		snprintf(path, sizeof(path), "%s/cgroup.controllers",
+			 cgroup_path);
+		fd = open(path, O_RDONLY);
+		if (fd < 0) {
+			log_err("Opening cgroup.controllers: %s", path);
+			return 1;
+		}
+		len = read(fd, enable, sizeof(enable) - 1);
+		if (len < 0) {
+			close(fd);
+			log_err("Reading cgroup.controllers: %s", path);
+			return 1;
+		} else if (len == 0) { /* No controllers to enable */
+			close(fd);
+			return 0;
+		}
+		enable[len] = 0;
 		close(fd);
-		log_err("Reading cgroup.controllers: %s", path);
-		return 1;
+	} else {
+		strncpy(enable, controllers, sizeof(enable));
 	}
-	buf[len] = 0;
-	close(fd);
-
-	/* No controllers available? We're probably on cgroup v1. */
-	if (len == 0)
-		return 0;
 
 	snprintf(path, sizeof(path), "%s/cgroup.subtree_control", cgroup_path);
 	cfd = open(path, O_RDWR);
@@ -84,7 +87,7 @@ static int enable_all_controllers(char *cgroup_path)
 		return 1;
 	}
 
-	for (c = strtok_r(buf, " ", &c2); c; c = strtok_r(NULL, " ", &c2)) {
+	for (c = strtok_r(enable, " ", &c2); c; c = strtok_r(NULL, " ", &c2)) {
 		if (dprintf(cfd, "+%s\n", c) <= 0) {
 			log_err("Enabling controller %s: %s", c, path);
 			close(cfd);
@@ -95,6 +98,87 @@ static int enable_all_controllers(char *cgroup_path)
 	return 0;
 }
 
+/**
+ * enable_controllers() - Enable cgroup v2 controllers
+ * @relative_path: The cgroup path, relative to the workdir
+ * @controllers: List of controllers to enable in cgroup.controllers format
+ *
+ *
+ * Enable the given cgroup v2 controllers. If @controllers is NULL, enable
+ * all available controllers.
+ *
+ * If successful, 0 is returned.
+ */
+int enable_controllers(const char *relative_path, const char *controllers)
+{
+	char cgroup_path[PATH_MAX + 1];
+
+	format_cgroup_path(cgroup_path, relative_path);
+	return __enable_controllers(cgroup_path, controllers);
+}
+
+static int __write_cgroup_file(const char *cgroup_path, const char *file,
+			       const char *buf)
+{
+	char file_path[PATH_MAX + 1];
+	int fd;
+
+	snprintf(file_path, sizeof(file_path), "%s/%s", cgroup_path, file);
+	fd = open(file_path, O_RDWR);
+	if (fd < 0) {
+		log_err("Opening %s", file_path);
+		return 1;
+	}
+
+	if (dprintf(fd, "%s", buf) <= 0) {
+		log_err("Writing to %s", file_path);
+		close(fd);
+		return 1;
+	}
+	close(fd);
+	return 0;
+}
+
+/**
+ * write_cgroup_file() - Write to a cgroup file
+ * @relative_path: The cgroup path, relative to the workdir
+ * @file: The name of the file in cgroupfs to write to
+ * @buf: Buffer to write to the file
+ *
+ * Write to a file in the given cgroup's directory.
+ *
+ * If successful, 0 is returned.
+ */
+int write_cgroup_file(const char *relative_path, const char *file,
+		      const char *buf)
+{
+	char cgroup_path[PATH_MAX - 24];
+
+	format_cgroup_path(cgroup_path, relative_path);
+	return __write_cgroup_file(cgroup_path, file, buf);
+}
+
+/**
+ * write_cgroup_file_parent() - Write to a cgroup file in the parent process
+ *                              workdir
+ * @relative_path: The cgroup path, relative to the parent process workdir
+ * @file: The name of the file in cgroupfs to write to
+ * @buf: Buffer to write to the file
+ *
+ * Write to a file in the given cgroup's directory under the parent process
+ * workdir.
+ *
+ * If successful, 0 is returned.
+ */
+int write_cgroup_file_parent(const char *relative_path, const char *file,
+			     const char *buf)
+{
+	char cgroup_path[PATH_MAX - 24];
+
+	format_parent_cgroup_path(cgroup_path, relative_path);
+	return __write_cgroup_file(cgroup_path, file, buf);
+}
+
 /**
  * setup_cgroup_environment() - Setup the cgroup environment
  *
@@ -133,7 +217,9 @@ int setup_cgroup_environment(void)
 		return 1;
 	}
 
-	if (enable_all_controllers(cgroup_workdir))
+	/* Enable all available controllers to increase test coverage */
+	if (__enable_controllers(CGROUP_MOUNT_PATH, NULL) ||
+	    __enable_controllers(cgroup_workdir, NULL))
 		return 1;
 
 	return 0;
@@ -173,7 +259,7 @@ static int join_cgroup_from_top(const char *cgroup_path)
 
 /**
  * join_cgroup() - Join a cgroup
- * @path: The cgroup path, relative to the workdir, to join
+ * @relative_path: The cgroup path, relative to the workdir, to join
  *
  * This function expects a cgroup to already be created, relative to the cgroup
  * work dir, and it joins it. For example, passing "/my-cgroup" as the path
@@ -182,11 +268,27 @@ static int join_cgroup_from_top(const char *cgroup_path)
  *
  * On success, it returns 0, otherwise on failure it returns 1.
  */
-int join_cgroup(const char *path)
+int join_cgroup(const char *relative_path)
+{
+	char cgroup_path[PATH_MAX + 1];
+
+	format_cgroup_path(cgroup_path, relative_path);
+	return join_cgroup_from_top(cgroup_path);
+}
+
+/**
+ * join_parent_cgroup() - Join a cgroup in the parent process workdir
+ * @relative_path: The cgroup path, relative to parent process workdir, to join
+ *
+ * See join_cgroup().
+ *
+ * On success, it returns 0, otherwise on failure it returns 1.
+ */
+int join_parent_cgroup(const char *relative_path)
 {
 	char cgroup_path[PATH_MAX + 1];
 
-	format_cgroup_path(cgroup_path, path);
+	format_parent_cgroup_path(cgroup_path, relative_path);
 	return join_cgroup_from_top(cgroup_path);
 }
 
@@ -212,9 +314,27 @@ void cleanup_cgroup_environment(void)
 	nftw(cgroup_workdir, nftwfunc, WALK_FD_LIMIT, FTW_DEPTH | FTW_MOUNT);
 }
 
+/**
+ * get_root_cgroup() - Get the FD of the root cgroup
+ *
+ * On success, it returns the file descriptor. On failure, it returns -1.
+ * If there is a failure, it prints the error to stderr.
+ */
+int get_root_cgroup(void)
+{
+	int fd;
+
+	fd = open(CGROUP_MOUNT_PATH, O_RDONLY);
+	if (fd < 0) {
+		log_err("Opening root cgroup");
+		return -1;
+	}
+	return fd;
+}
+
 /**
  * create_and_get_cgroup() - Create a cgroup, relative to workdir, and get the FD
- * @path: The cgroup path, relative to the workdir, to join
+ * @relative_path: The cgroup path, relative to the workdir, to join
  *
  * This function creates a cgroup under the top level workdir and returns the
  * file descriptor. It is idempotent.
@@ -222,14 +342,14 @@ void cleanup_cgroup_environment(void)
  * On success, it returns the file descriptor. On failure it returns -1.
  * If there is a failure, it prints the error to stderr.
  */
-int create_and_get_cgroup(const char *path)
+int create_and_get_cgroup(const char *relative_path)
 {
 	char cgroup_path[PATH_MAX + 1];
 	int fd;
 
-	format_cgroup_path(cgroup_path, path);
+	format_cgroup_path(cgroup_path, relative_path);
 	if (mkdir(cgroup_path, 0777) && errno != EEXIST) {
-		log_err("mkdiring cgroup %s .. %s", path, cgroup_path);
+		log_err("mkdiring cgroup %s .. %s", relative_path, cgroup_path);
 		return -1;
 	}
 
@@ -244,13 +364,13 @@ int create_and_get_cgroup(const char *path)
 
 /**
  * get_cgroup_id() - Get cgroup id for a particular cgroup path
- * @path: The cgroup path, relative to the workdir, to join
+ * @relative_path: The cgroup path, relative to the workdir, to join
  *
  * On success, it returns the cgroup id. On failure it returns 0,
  * which is an invalid cgroup id.
  * If there is a failure, it prints the error to stderr.
  */
-unsigned long long get_cgroup_id(const char *path)
+unsigned long long get_cgroup_id(const char *relative_path)
 {
 	int dirfd, err, flags, mount_id, fhsize;
 	union {
@@ -261,7 +381,7 @@ unsigned long long get_cgroup_id(const char *path)
 	struct file_handle *fhp, *fhp2;
 	unsigned long long ret = 0;
 
-	format_cgroup_path(cgroup_workdir, path);
+	format_cgroup_path(cgroup_workdir, relative_path);
 
 	dirfd = AT_FDCWD;
 	flags = 0;
diff --git a/tools/testing/selftests/bpf/cgroup_helpers.h b/tools/testing/selftests/bpf/cgroup_helpers.h
index fcc9cb91b211..3358734356ab 100644
--- a/tools/testing/selftests/bpf/cgroup_helpers.h
+++ b/tools/testing/selftests/bpf/cgroup_helpers.h
@@ -10,11 +10,18 @@
 	__FILE__, __LINE__, clean_errno(), ##__VA_ARGS__)
 
 /* cgroupv2 related */
-int cgroup_setup_and_join(const char *path);
-int create_and_get_cgroup(const char *path);
-unsigned long long get_cgroup_id(const char *path);
-
-int join_cgroup(const char *path);
+int enable_controllers(const char *relative_path, const char *controllers);
+int write_cgroup_file(const char *relative_path, const char *file,
+		      const char *buf);
+int write_cgroup_file_parent(const char *relative_path, const char *file,
+			     const char *buf);
+int cgroup_setup_and_join(const char *relative_path);
+int get_root_cgroup(void);
+int create_and_get_cgroup(const char *relative_path);
+unsigned long long get_cgroup_id(const char *relative_path);
+
+int join_cgroup(const char *relative_path);
+int join_parent_cgroup(const char *relative_path);
 
 int setup_cgroup_environment(void);
 void cleanup_cgroup_environment(void);
@@ -26,4 +33,4 @@ int join_classid(void);
 int setup_classid_environment(void);
 void cleanup_classid_environment(void);
 
-#endif /* __CGROUP_HELPERS_H */
\ No newline at end of file
+#endif /* __CGROUP_HELPERS_H */
-- 
2.37.1.559.g78731f0fdb-goog


^ permalink raw reply related	[flat|nested] 23+ messages in thread

* [PATCH bpf-next v7 8/8] selftests/bpf: add a selftest for cgroup hierarchical stats collection
  2022-08-05 21:48 [PATCH bpf-next v7 0/8] bpf: rstat: cgroup hierarchical stats Hao Luo
                   ` (6 preceding siblings ...)
  2022-08-05 21:48 ` [PATCH bpf-next v7 7/8] selftests/bpf: extend cgroup helpers Hao Luo
@ 2022-08-05 21:48 ` Hao Luo
  2022-08-09 16:20 ` [PATCH bpf-next v7 0/8] bpf: rstat: cgroup hierarchical stats patchwork-bot+netdevbpf
  8 siblings, 0 replies; 23+ messages in thread
From: Hao Luo @ 2022-08-05 21:48 UTC (permalink / raw)
  To: linux-kernel, bpf, cgroups, netdev
  Cc: Alexei Starovoitov, Andrii Nakryiko, Daniel Borkmann,
	Martin KaFai Lau, Song Liu, Yonghong Song, Tejun Heo, Zefan Li,
	KP Singh, Johannes Weiner, Michal Hocko, Benjamin Tissoires,
	John Fastabend, Michal Koutny, Roman Gushchin, David Rientjes,
	Stanislav Fomichev, Shakeel Butt, Yosry Ahmed, Hao Luo

From: Yosry Ahmed <yosryahmed@google.com>

Add a selftest that tests the whole workflow for collecting,
aggregating (flushing), and displaying cgroup hierarchical stats.

TL;DR:
- Userspace program creates a cgroup hierarchy and induces memcg reclaim
  in parts of it.
- Whenever reclaim happens, vmscan_start and vmscan_end update
  per-cgroup percpu readings, and tell rstat which (cgroup, cpu) pairs
  have updates.
- When userspace tries to read the stats, vmscan_dump calls rstat to flush
  the stats, and outputs the stats in text format to userspace (similar
  to cgroupfs stats).
- rstat calls vmscan_flush once for every (cgroup, cpu) pair that has
  updates, vmscan_flush aggregates cpu readings and propagates updates
  to parents.
- Userspace program makes sure the stats are aggregated and read
  correctly.

Detailed explanation:
- The test loads tracing bpf programs, vmscan_start and vmscan_end, to
  measure the latency of cgroup reclaim. Per-cgroup readings are stored in
  percpu maps for efficiency. When a cgroup reading is updated on a cpu,
  cgroup_rstat_updated(cgroup, cpu) is called to add the cgroup to the
  rstat updated tree on that cpu.

- A cgroup_iter program, vmscan_dump, is loaded and pinned to a bpffs file
  for each cgroup. Reading this file invokes the program, which calls
  cgroup_rstat_flush(cgroup) to ask rstat to propagate the updates for all
  cpus and cgroups that have updates in this cgroup's subtree. Afterwards,
  the stats are exposed to the user. vmscan_dump returns 1 to terminate
  iteration early, so that we only expose stats for one cgroup per read.

- An ftrace program, vmscan_flush, is also loaded and attached to
  bpf_rstat_flush. When rstat flushing is ongoing, vmscan_flush is invoked
  once for each (cgroup, cpu) pair that has updates. cgroups are popped
  from the rstat tree in a bottom-up fashion, so calls will always be
  made for cgroups that have updates before their parents. The program
  aggregates percpu readings to a total per-cgroup reading, and also
  propagates them to the parent cgroup. After rstat flushing is over, all
  cgroups will have correct updated hierarchical readings (including all
  cpus and all their descendants).

- Finally, the test creates a cgroup hierarchy and induces memcg reclaim
  in parts of it, and makes sure that the stats collection, aggregation,
  and reading workflow works as expected.
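
For reference, each pinned file produces a single line per read, in the
following format (values illustrative):

	cg_id: 4237, total_vmscan_delay: 144254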

Signed-off-by: Yosry Ahmed <yosryahmed@google.com>
Signed-off-by: Hao Luo <haoluo@google.com>
---
 .../prog_tests/cgroup_hierarchical_stats.c    | 358 ++++++++++++++++++
 .../bpf/progs/cgroup_hierarchical_stats.c     | 226 +++++++++++
 2 files changed, 584 insertions(+)
 create mode 100644 tools/testing/selftests/bpf/prog_tests/cgroup_hierarchical_stats.c
 create mode 100644 tools/testing/selftests/bpf/progs/cgroup_hierarchical_stats.c

diff --git a/tools/testing/selftests/bpf/prog_tests/cgroup_hierarchical_stats.c b/tools/testing/selftests/bpf/prog_tests/cgroup_hierarchical_stats.c
new file mode 100644
index 000000000000..9d48b7187992
--- /dev/null
+++ b/tools/testing/selftests/bpf/prog_tests/cgroup_hierarchical_stats.c
@@ -0,0 +1,358 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * Functions to manage eBPF programs attached to cgroup subsystems
+ *
+ * Copyright 2022 Google LLC.
+ */
+#include <asm-generic/errno.h>
+#include <errno.h>
+#include <sys/types.h>
+#include <sys/mount.h>
+#include <sys/stat.h>
+#include <unistd.h>
+
+#include <test_progs.h>
+#include <bpf/libbpf.h>
+#include <bpf/bpf.h>
+
+#include "cgroup_helpers.h"
+#include "cgroup_hierarchical_stats.skel.h"
+
+#define PAGE_SIZE 4096
+#define MB(x) (x << 20)
+
+#define BPFFS_ROOT "/sys/fs/bpf/"
+#define BPFFS_VMSCAN BPFFS_ROOT"vmscan/"
+
+#define CG_ROOT_NAME "root"
+#define CG_ROOT_ID 1
+
+#define CGROUP_PATH(p, n) {.path = p"/"n, .name = n}
+
+static struct {
+	const char *path, *name;
+	unsigned long long id;
+	int fd;
+} cgroups[] = {
+	CGROUP_PATH("/", "test"),
+	CGROUP_PATH("/test", "child1"),
+	CGROUP_PATH("/test", "child2"),
+	CGROUP_PATH("/test/child1", "child1_1"),
+	CGROUP_PATH("/test/child1", "child1_2"),
+	CGROUP_PATH("/test/child2", "child2_1"),
+	CGROUP_PATH("/test/child2", "child2_2"),
+};
+
+#define N_CGROUPS ARRAY_SIZE(cgroups)
+#define N_NON_LEAF_CGROUPS 3
+
+static int root_cgroup_fd;
+static bool mounted_bpffs;
+
+/* reads the file at 'path' into 'buf', returns 0 on success. */
+static int read_from_file(const char *path, char *buf, size_t size)
+{
+	int fd, len;
+
+	fd = open(path, O_RDONLY);
+	if (fd < 0)
+		return fd;
+
+	len = read(fd, buf, size - 1);
+	close(fd);
+	if (len < 0)
+		return len;
+
+	buf[len] = 0;
+	return 0;
+}
+
+/* mounts bpffs and creates a directory for the stat files, returns 0 on success. */
+static int setup_bpffs(void)
+{
+	int err;
+
+	/* Mount bpffs */
+	err = mount("bpf", BPFFS_ROOT, "bpf", 0, NULL);
+	mounted_bpffs = !err;
+	if (ASSERT_FALSE(err && errno != EBUSY, "mount"))
+		return err;
+
+	/* Create a directory to contain stat files in bpffs */
+	err = mkdir(BPFFS_VMSCAN, 0755);
+	if (!ASSERT_OK(err, "mkdir"))
+		return err;
+
+	return 0;
+}
+
+static void cleanup_bpffs(void)
+{
+	/* Remove created directory in bpffs */
+	ASSERT_OK(rmdir(BPFFS_VMSCAN), "rmdir "BPFFS_VMSCAN);
+
+	/* Unmount bpffs, if it wasn't already mounted when we started */
+	if (mounted_bpffs)
+		return;
+
+	ASSERT_OK(umount(BPFFS_ROOT), "unmount bpffs");
+}
+
+/* sets up cgroups, returns 0 on success. */
+static int setup_cgroups(void)
+{
+	int i, fd, err;
+
+	err = setup_cgroup_environment();
+	if (!ASSERT_OK(err, "setup_cgroup_environment"))
+		return err;
+
+	root_cgroup_fd = get_root_cgroup();
+	if (!ASSERT_GE(root_cgroup_fd, 0, "get_root_cgroup"))
+		return root_cgroup_fd;
+
+	for (i = 0; i < N_CGROUPS; i++) {
+		fd = create_and_get_cgroup(cgroups[i].path);
+		if (!ASSERT_GE(fd, 0, "create_and_get_cgroup"))
+			return fd;
+
+		cgroups[i].fd = fd;
+		cgroups[i].id = get_cgroup_id(cgroups[i].path);
+
+		/*
+		 * Enable memcg controller for the entire hierarchy.
+		 * Note that stats are collected for all cgroups in a hierarchy
+		 * with memcg enabled anyway, but are only exposed for cgroups
+		 * that have memcg enabled.
+		 */
+		if (i < N_NON_LEAF_CGROUPS) {
+			err = enable_controllers(cgroups[i].path, "memory");
+			if (!ASSERT_OK(err, "enable_controllers"))
+				return err;
+		}
+	}
+	return 0;
+}
+
+static void cleanup_cgroups(void)
+{
+	close(root_cgroup_fd);
+	for (int i = 0; i < N_CGROUPS; i++)
+		close(cgroups[i].fd);
+	cleanup_cgroup_environment();
+}
+
+/* Sets up the cgroup hierarchy, returns 0 on success. */
+static int setup_hierarchy(void)
+{
+	return setup_bpffs() || setup_cgroups();
+}
+
+static void destroy_hierarchy(void)
+{
+	cleanup_cgroups();
+	cleanup_bpffs();
+}
+
+static int reclaimer(const char *cgroup_path, size_t size)
+{
+	static char size_buf[128];
+	char *buf, *ptr;
+	int err;
+
+	/* Join cgroup in the parent process workdir */
+	if (join_parent_cgroup(cgroup_path))
+		return EACCES;
+
+	/* Allocate memory */
+	buf = malloc(size);
+	if (!buf)
+		return ENOMEM;
+
+	/* Write to memory to make sure it's actually allocated */
+	for (ptr = buf; ptr < buf + size; ptr += PAGE_SIZE)
+		*ptr = 1;
+
+	/* Try to reclaim memory */
+	snprintf(size_buf, 128, "%lu", size);
+	err = write_cgroup_file_parent(cgroup_path, "memory.reclaim", size_buf);
+
+	free(buf);
+	/* memory.reclaim returns EAGAIN if the amount is not fully reclaimed */
+	if (err && errno != EAGAIN)
+		return errno;
+
+	return 0;
+}
+
+static int induce_vmscan(void)
+{
+	int i, status;
+
+	/*
+	 * In every leaf cgroup, run a child process that allocates some memory
+	 * and attempts to reclaim some of it.
+	 */
+	for (i = N_NON_LEAF_CGROUPS; i < N_CGROUPS; i++) {
+		pid_t pid;
+
+		/* Create reclaimer child */
+		pid = fork();
+		if (pid == 0) {
+			status = reclaimer(cgroups[i].path, MB(5));
+			exit(status);
+		}
+
+		/* Cleanup reclaimer child */
+		waitpid(pid, &status, 0);
+		ASSERT_TRUE(WIFEXITED(status), "reclaimer exited");
+		ASSERT_EQ(WEXITSTATUS(status), 0, "reclaim exit code");
+	}
+	return 0;
+}
+
+static unsigned long long
+get_cgroup_vmscan_delay(unsigned long long cgroup_id, const char *file_name)
+{
+	unsigned long long vmscan = 0, id = 0;
+	static char buf[128], path[128];
+
+	/* For every cgroup, read the file generated by cgroup_iter */
+	snprintf(path, 128, "%s%s", BPFFS_VMSCAN, file_name);
+	if (!ASSERT_OK(read_from_file(path, buf, 128), "read cgroup_iter"))
+		return 0;
+
+	/* Check the output file formatting */
+	ASSERT_EQ(sscanf(buf, "cg_id: %llu, total_vmscan_delay: %llu\n",
+			 &id, &vmscan), 2, "output format");
+
+	/* Check that the cgroup_id is displayed correctly */
+	ASSERT_EQ(id, cgroup_id, "cgroup_id");
+	/* Check that the vmscan reading is non-zero */
+	ASSERT_GT(vmscan, 0, "vmscan_reading");
+	return vmscan;
+}
+
+static void check_vmscan_stats(void)
+{
+	unsigned long long vmscan_readings[N_CGROUPS], vmscan_root;
+	int i;
+
+	for (i = 0; i < N_CGROUPS; i++) {
+		vmscan_readings[i] = get_cgroup_vmscan_delay(cgroups[i].id,
+							     cgroups[i].name);
+	}
+
+	/* Read stats for root too */
+	vmscan_root = get_cgroup_vmscan_delay(CG_ROOT_ID, CG_ROOT_NAME);
+
+	/* Check that child1 == child1_1 + child1_2 */
+	ASSERT_EQ(vmscan_readings[1], vmscan_readings[3] + vmscan_readings[4],
+		  "child1_vmscan");
+	/* Check that child2 == child2_1 + child2_2 */
+	ASSERT_EQ(vmscan_readings[2], vmscan_readings[5] + vmscan_readings[6],
+		  "child2_vmscan");
+	/* Check that test == child1 + child2 */
+	ASSERT_EQ(vmscan_readings[0], vmscan_readings[1] + vmscan_readings[2],
+		  "test_vmscan");
+	/* Check that root >= test */
+	ASSERT_GE(vmscan_root, vmscan_readings[0], "root_vmscan");
+}
+
+/* Creates iter link and pins in bpffs, returns 0 on success, -errno on failure.
+ */
+static int setup_cgroup_iter(struct cgroup_hierarchical_stats *obj,
+			     int cgroup_fd, const char *file_name)
+{
+	DECLARE_LIBBPF_OPTS(bpf_iter_attach_opts, opts);
+	union bpf_iter_link_info linfo = {};
+	struct bpf_link *link;
+	static char path[128];
+	int err;
+
+	/*
+	 * Create an iter link, parameterized by cgroup_fd.
+	 * We only want to process a single cgroup, so set the order to
+	 * "self" (BPF_ITER_SELF below); dump_vmscan also returns 1 to stop
+	 * iteration after the first cgroup.
+	 */
+	linfo.cgroup.cgroup_fd = cgroup_fd;
+	linfo.cgroup.order = BPF_ITER_SELF;
+	opts.link_info = &linfo;
+	opts.link_info_len = sizeof(linfo);
+	link = bpf_program__attach_iter(obj->progs.dump_vmscan, &opts);
+	if (!ASSERT_OK_PTR(link, "attach iter"))
+		return -EFAULT;
+
+	/* Pin the link to a bpffs file */
+	snprintf(path, 128, "%s%s", BPFFS_VMSCAN, file_name);
+	err = bpf_link__pin(link, path);
+	ASSERT_OK(err, "pin cgroup_iter");
+
+	/* Remove the link, leaving only the ref held by the pinned file */
+	bpf_link__destroy(link);
+	return err;
+}
+
+/* Sets up programs for collecting stats, returns 0 on success. */
+static int setup_progs(struct cgroup_hierarchical_stats **skel)
+{
+	int i, err;
+
+	*skel = cgroup_hierarchical_stats__open_and_load();
+	if (!ASSERT_OK_PTR(*skel, "open_and_load"))
+		return 1;
+
+	/* Pin a cgroup_iter program that dumps each cgroup's stats to bpffs */
+	for (i = 0; i < N_CGROUPS; i++) {
+		err = setup_cgroup_iter(*skel, cgroups[i].fd, cgroups[i].name);
+		if (!ASSERT_OK(err, "setup_cgroup_iter"))
+			return err;
+	}
+
+	/* Also dump stats for root */
+	err = setup_cgroup_iter(*skel, root_cgroup_fd, CG_ROOT_NAME);
+	if (!ASSERT_OK(err, "setup_cgroup_iter"))
+		return err;
+
+	err = cgroup_hierarchical_stats__attach(*skel);
+	if (!ASSERT_OK(err, "attach"))
+		return err;
+
+	return 0;
+}
+
+static void destroy_progs(struct cgroup_hierarchical_stats *skel)
+{
+	static char path[128];
+	int i;
+
+	for (i = 0; i < N_CGROUPS; i++) {
+		/* Delete files in bpffs that cgroup_iters are pinned in */
+		snprintf(path, 128, "%s%s", BPFFS_VMSCAN,
+			 cgroups[i].name);
+		ASSERT_OK(remove(path), "remove cgroup_iter pin");
+	}
+
+	/* Delete root file in bpffs */
+	snprintf(path, 128, "%s%s", BPFFS_VMSCAN, CG_ROOT_NAME);
+	ASSERT_OK(remove(path), "remove cgroup_iter root pin");
+	cgroup_hierarchical_stats__destroy(skel);
+}
+
+void test_cgroup_hierarchical_stats(void)
+{
+	struct cgroup_hierarchical_stats *skel = NULL;
+
+	if (setup_hierarchy())
+		goto hierarchy_cleanup;
+	if (setup_progs(&skel))
+		goto cleanup;
+	if (induce_vmscan())
+		goto cleanup;
+	check_vmscan_stats();
+cleanup:
+	destroy_progs(skel);
+hierarchy_cleanup:
+	destroy_hierarchy();
+}
diff --git a/tools/testing/selftests/bpf/progs/cgroup_hierarchical_stats.c b/tools/testing/selftests/bpf/progs/cgroup_hierarchical_stats.c
new file mode 100644
index 000000000000..8ab4253a1592
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/cgroup_hierarchical_stats.c
@@ -0,0 +1,226 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * Functions to manage eBPF programs attached to cgroup subsystems
+ *
+ * Copyright 2022 Google LLC.
+ */
+#include "vmlinux.h"
+#include <bpf/bpf_helpers.h>
+#include <bpf/bpf_tracing.h>
+#include <bpf/bpf_core_read.h>
+
+char _license[] SEC("license") = "GPL";
+
+/*
+ * Start times are stored per-task, not per-cgroup, as multiple tasks in one
+ * cgroup can perform reclaim concurrently.
+ */
+struct {
+	__uint(type, BPF_MAP_TYPE_TASK_STORAGE);
+	__uint(map_flags, BPF_F_NO_PREALLOC);
+	__type(key, int);
+	__type(value, __u64);
+} vmscan_start_time SEC(".maps");
+
+struct vmscan_percpu {
+	/* Previous percpu state, to figure out if we have new updates */
+	__u64 prev;
+	/* Current percpu state */
+	__u64 state;
+};
+
+struct vmscan {
+	/* State propagated through children, pending aggregation */
+	__u64 pending;
+	/* Total state, including all cpus and all children */
+	__u64 state;
+};
+
+struct {
+	__uint(type, BPF_MAP_TYPE_PERCPU_HASH);
+	__uint(max_entries, 100);
+	__type(key, __u64);
+	__type(value, struct vmscan_percpu);
+} pcpu_cgroup_vmscan_elapsed SEC(".maps");
+
+struct {
+	__uint(type, BPF_MAP_TYPE_HASH);
+	__uint(max_entries, 100);
+	__type(key, __u64);
+	__type(value, struct vmscan);
+} cgroup_vmscan_elapsed SEC(".maps");
+
+extern void cgroup_rstat_updated(struct cgroup *cgrp, int cpu) __ksym;
+extern void cgroup_rstat_flush(struct cgroup *cgrp) __ksym;
+
+static struct cgroup *task_memcg(struct task_struct *task)
+{
+	int cgrp_id;
+
+#if __has_builtin(__builtin_preserve_enum_value)
+	cgrp_id = bpf_core_enum_value(enum cgroup_subsys_id, memory_cgrp_id);
+#else
+	cgrp_id = memory_cgrp_id;
+#endif
+	return task->cgroups->subsys[cgrp_id]->cgroup;
+}
+
+static uint64_t cgroup_id(struct cgroup *cgrp)
+{
+	return cgrp->kn->id;
+}
+
+static int create_vmscan_percpu_elem(__u64 cg_id, __u64 state)
+{
+	struct vmscan_percpu pcpu_init = {.state = state, .prev = 0};
+
+	return bpf_map_update_elem(&pcpu_cgroup_vmscan_elapsed, &cg_id,
+				   &pcpu_init, BPF_NOEXIST);
+}
+
+static int create_vmscan_elem(__u64 cg_id, __u64 state, __u64 pending)
+{
+	struct vmscan init = {.state = state, .pending = pending};
+
+	return bpf_map_update_elem(&cgroup_vmscan_elapsed, &cg_id,
+				   &init, BPF_NOEXIST);
+}
+
+SEC("tp_btf/mm_vmscan_memcg_reclaim_begin")
+int BPF_PROG(vmscan_start, int order, gfp_t gfp_flags)
+{
+	struct task_struct *task = bpf_get_current_task_btf();
+	__u64 *start_time_ptr;
+
+	start_time_ptr = bpf_task_storage_get(&vmscan_start_time, task, 0,
+					      BPF_LOCAL_STORAGE_GET_F_CREATE);
+	if (start_time_ptr)
+		*start_time_ptr = bpf_ktime_get_ns();
+	return 0;
+}
+
+SEC("tp_btf/mm_vmscan_memcg_reclaim_end")
+int BPF_PROG(vmscan_end, unsigned long nr_reclaimed)
+{
+	struct vmscan_percpu *pcpu_stat;
+	struct task_struct *current = bpf_get_current_task_btf();
+	struct cgroup *cgrp;
+	__u64 *start_time_ptr;
+	__u64 current_elapsed, cg_id;
+	__u64 end_time = bpf_ktime_get_ns();
+
+	/*
+	 * cgrp is the first parent cgroup of current that has memcg enabled in
+	 * its subtree_control, or NULL if memcg is disabled in the entire tree.
+	 * In a cgroup hierarchy like this:
+	 *                               a
+	 *                              / \
+	 *                             b   c
+	 *  If "a" has memcg enabled, while "b" doesn't, then processes in "b"
+	 *  will accumulate their stats directly to "a". This makes sure that no
+	 *  stats are lost from processes in leaf cgroups that don't have memcg
+	 *  enabled, but only exposes stats for cgroups that have memcg enabled.
+	 */
+	cgrp = task_memcg(current);
+	if (!cgrp)
+		return 0;
+
+	cg_id = cgroup_id(cgrp);
+	start_time_ptr = bpf_task_storage_get(&vmscan_start_time, current, 0,
+					      BPF_LOCAL_STORAGE_GET_F_CREATE);
+	if (!start_time_ptr)
+		return 0;
+
+	current_elapsed = end_time - *start_time_ptr;
+	pcpu_stat = bpf_map_lookup_elem(&pcpu_cgroup_vmscan_elapsed,
+					&cg_id);
+	if (pcpu_stat)
+		pcpu_stat->state += current_elapsed;
+	else if (create_vmscan_percpu_elem(cg_id, current_elapsed))
+		return 0;
+
+	cgroup_rstat_updated(cgrp, bpf_get_smp_processor_id());
+	return 0;
+}
+
+SEC("fentry/bpf_rstat_flush")
+int BPF_PROG(vmscan_flush, struct cgroup *cgrp, struct cgroup *parent, int cpu)
+{
+	struct vmscan_percpu *pcpu_stat;
+	struct vmscan *total_stat, *parent_stat;
+	__u64 cg_id = cgroup_id(cgrp);
+	__u64 parent_cg_id = parent ? cgroup_id(parent) : 0;
+	__u64 *pcpu_vmscan;
+	__u64 state;
+	__u64 delta = 0;
+
+	/* Add CPU changes on this level since the last flush */
+	pcpu_stat = bpf_map_lookup_percpu_elem(&pcpu_cgroup_vmscan_elapsed,
+					       &cg_id, cpu);
+	if (pcpu_stat) {
+		state = pcpu_stat->state;
+		delta += state - pcpu_stat->prev;
+		pcpu_stat->prev = state;
+	}
+
+	total_stat = bpf_map_lookup_elem(&cgroup_vmscan_elapsed, &cg_id);
+	if (!total_stat) {
+		if (create_vmscan_elem(cg_id, delta, 0))
+			return 0;
+
+		goto update_parent;
+	}
+
+	/* Collect pending stats from subtree */
+	if (total_stat->pending) {
+		delta += total_stat->pending;
+		total_stat->pending = 0;
+	}
+
+	/* Propagate changes to this cgroup's total */
+	total_stat->state += delta;
+
+update_parent:
+	/* Skip if there are no changes to propagate, or no parent */
+	if (!delta || !parent_cg_id)
+		return 0;
+
+	/* Propagate changes to cgroup's parent */
+	parent_stat = bpf_map_lookup_elem(&cgroup_vmscan_elapsed,
+					  &parent_cg_id);
+	if (parent_stat)
+		parent_stat->pending += delta;
+	else
+		create_vmscan_elem(parent_cg_id, 0, delta);
+	return 0;
+}
+
+SEC("iter.s/cgroup")
+int BPF_PROG(dump_vmscan, struct bpf_iter_meta *meta, struct cgroup *cgrp)
+{
+	struct seq_file *seq = meta->seq;
+	struct vmscan *total_stat;
+	__u64 cg_id = cgrp ? cgroup_id(cgrp) : 0;
+
+	/* Do nothing for the terminal call */
+	if (!cg_id)
+		return 1;
+
+	/* Flush the stats to make sure we get the most updated numbers */
+	cgroup_rstat_flush(cgrp);
+
+	total_stat = bpf_map_lookup_elem(&cgroup_vmscan_elapsed, &cg_id);
+	if (!total_stat) {
+		BPF_SEQ_PRINTF(seq, "cg_id: %llu, total_vmscan_delay: 0\n",
+			       cg_id);
+	} else {
+		BPF_SEQ_PRINTF(seq, "cg_id: %llu, total_vmscan_delay: %llu\n",
+			       cg_id, total_stat->state);
+	}
+
+	/*
+	 * We only dump stats for one cgroup here, so return 1 to stop
+	 * iteration after the first cgroup.
+	 */
+	return 1;
+}
-- 
2.37.1.559.g78731f0fdb-goog


^ permalink raw reply related	[flat|nested] 23+ messages in thread

* Re: [PATCH bpf-next v7 4/8] bpf: Introduce cgroup iter
  2022-08-05 21:48 ` [PATCH bpf-next v7 4/8] bpf: Introduce cgroup iter Hao Luo
@ 2022-08-09  0:18   ` Andrii Nakryiko
  2022-08-09  0:56     ` Hao Luo
  0 siblings, 1 reply; 23+ messages in thread
From: Andrii Nakryiko @ 2022-08-09  0:18 UTC (permalink / raw)
  To: Hao Luo
  Cc: linux-kernel, bpf, cgroups, netdev, Alexei Starovoitov,
	Andrii Nakryiko, Daniel Borkmann, Martin KaFai Lau, Song Liu,
	Yonghong Song, Tejun Heo, Zefan Li, KP Singh, Johannes Weiner,
	Michal Hocko, Benjamin Tissoires, John Fastabend, Michal Koutny,
	Roman Gushchin, David Rientjes, Stanislav Fomichev, Shakeel Butt,
	Yosry Ahmed

On Fri, Aug 5, 2022 at 2:49 PM Hao Luo <haoluo@google.com> wrote:
>
> Cgroup_iter is a type of bpf_iter. It walks over cgroups in four modes:
>
>  - walking a cgroup's descendants in pre-order.
>  - walking a cgroup's descendants in post-order.
>  - walking a cgroup's ancestors.
>  - process only the given cgroup.
>
> When attaching cgroup_iter, one can set a cgroup to the iter_link
> created from attaching. This cgroup is passed as a file descriptor
> or cgroup id and serves as the starting point of the walk. If no
> cgroup is specified, the starting point will be the root cgroup v2.
>
> For walking descendants, one can specify the order: either pre-order or
> post-order. For walking ancestors, the walk starts at the specified
> cgroup and ends at the root.
>
> One can also terminate the walk early by returning 1 from the iter
> program.
>
> Note that because walking cgroup hierarchy holds cgroup_mutex, the iter
> program is called with cgroup_mutex held.
>
> Currently only one session is supported, which means, depending on the
> volume of data bpf program intends to send to user space, the number
> of cgroups that can be walked is limited. For example, given the current
> buffer size is 8 * PAGE_SIZE, if the program sends 64B data for each
> cgroup, assuming PAGE_SIZE is 4kb, the total number of cgroups that can
> be walked is 512. This is a limitation of cgroup_iter. If the output
> data is larger than the kernel buffer size, after all data in the
> kernel buffer is consumed by user space, the subsequent read() syscall
> will signal EOPNOTSUPP. In order to work around, the user may have to
> update their program to reduce the volume of data sent to output. For
> example, skip some uninteresting cgroups. In future, we may extend
> bpf_iter flags to allow customizing buffer size.
>
> Acked-by: Yonghong Song <yhs@fb.com>
> Acked-by: Tejun Heo <tj@kernel.org>
> Signed-off-by: Hao Luo <haoluo@google.com>
> ---
>  include/linux/bpf.h                           |   8 +
>  include/uapi/linux/bpf.h                      |  38 +++
>  kernel/bpf/Makefile                           |   3 +
>  kernel/bpf/cgroup_iter.c                      | 286 ++++++++++++++++++
>  tools/include/uapi/linux/bpf.h                |  38 +++
>  .../selftests/bpf/prog_tests/btf_dump.c       |   4 +-
>  6 files changed, 375 insertions(+), 2 deletions(-)
>  create mode 100644 kernel/bpf/cgroup_iter.c
>
> diff --git a/include/linux/bpf.h b/include/linux/bpf.h
> index 20c26aed7896..09b5c2167424 100644
> --- a/include/linux/bpf.h
> +++ b/include/linux/bpf.h
> @@ -48,6 +48,7 @@ struct mem_cgroup;
>  struct module;
>  struct bpf_func_state;
>  struct ftrace_ops;
> +struct cgroup;
>
>  extern struct idr btf_idr;
>  extern spinlock_t btf_idr_lock;
> @@ -1730,7 +1731,14 @@ int bpf_obj_get_user(const char __user *pathname, int flags);
>         int __init bpf_iter_ ## target(args) { return 0; }
>
>  struct bpf_iter_aux_info {
> +       /* for map_elem iter */
>         struct bpf_map *map;
> +
> +       /* for cgroup iter */
> +       struct {
> +               struct cgroup *start; /* starting cgroup */
> +               int order;
> +       } cgroup;
>  };
>
>  typedef int (*bpf_iter_attach_target_t)(struct bpf_prog *prog,
> diff --git a/include/uapi/linux/bpf.h b/include/uapi/linux/bpf.h
> index 59a217ca2dfd..4d758b2e70d6 100644
> --- a/include/uapi/linux/bpf.h
> +++ b/include/uapi/linux/bpf.h
> @@ -87,10 +87,37 @@ struct bpf_cgroup_storage_key {
>         __u32   attach_type;            /* program attach type (enum bpf_attach_type) */
>  };
>
> +enum bpf_iter_order {
> +       BPF_ITER_ORDER_DEFAULT = 0,     /* default order. */

why is this default order necessary? It just adds confusion (I had to
look up the source code to know what the default order is). I might
have missed some discussion, so if there is a very good reason, then
please document it in the commit message. But I'd rather not have some
magical default order. We can set 0 to mean invalid and error out, or
just make SELF the very first value (and if the user forgot to specify
a fancier mode, they will hopefully discover this quickly in their
testing).
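
E.g., with no magic default I'd expect loaders to always spell the
order out, roughly (cf. the selftest later in this series):

	DECLARE_LIBBPF_OPTS(bpf_iter_attach_opts, opts);
	union bpf_iter_link_info linfo = {};

	linfo.cgroup.cgroup_fd = cgroup_fd;
	linfo.cgroup.order = BPF_ITER_DESCENDANTS_PRE; /* no magic 0 */
	opts.link_info = &linfo;
	opts.link_info_len = sizeof(linfo);
	link = bpf_program__attach_iter(prog, &opts);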

> +       BPF_ITER_SELF,                  /* process only a single object. */
> +       BPF_ITER_DESCENDANTS_PRE,       /* walk descendants in pre-order. */
> +       BPF_ITER_DESCENDANTS_POST,      /* walk descendants in post-order. */
> +       BPF_ITER_ANCESTORS_UP,          /* walk ancestors upward. */
> +};
> +

This is a somewhat pedantic nit, so feel free to ignore completely,
but don't DESCENDANTS_{PRE,POST} and ANCESTORS_UP also include "self"?
As it is right now, BPF_ITER_SELF name might be read as implying that
DESCENDANTS and ANCESTORS order don't include self. So I don't know,
maybe BPF_ITER_SELF_ONLY would be a bit clearer?


>  union bpf_iter_link_info {
>         struct {
>                 __u32   map_fd;
>         } map;
> +       struct {
> +               /* Valid values include:
> +                *  - BPF_ITER_ORDER_DEFAULT
> +                *  - BPF_ITER_SELF
> +                *  - BPF_ITER_DESCENDANTS_PRE
> +                *  - BPF_ITER_DESCENDANTS_POST
> +                *  - BPF_ITER_ANCESTORS_UP
> +                * for cgroup_iter, DEFAULT is equivalent to DESCENDANTS_PRE.
> +                */
> +               __u32   order;
> +
> +               /* At most one of cgroup_fd and cgroup_id can be non-zero. If
> +                * both are zero, the walk starts from the default cgroup v2
> +                * root. For walking v1 hierarchy, one should always explicitly
> +                * specify cgroup_fd.
> +                */
> +               __u32   cgroup_fd;
> +               __u64   cgroup_id;
> +       } cgroup;
>  };
>
>  /* BPF syscall commands, see bpf(2) man-page for more details. */
> @@ -6134,11 +6161,22 @@ struct bpf_link_info {
>                 struct {
>                         __aligned_u64 target_name; /* in/out: target_name buffer ptr */
>                         __u32 target_name_len;     /* in/out: target_name buffer len */
> +
> +                       /* If the iter specific field is 32 bits, it can be put
> +                        * in the first or second union. Otherwise it should be
> +                        * put in the second union.
> +                        */
>                         union {
>                                 struct {
>                                         __u32 map_id;
>                                 } map;
>                         };
> +                       union {
> +                               struct {
> +                                       __u64 cgroup_id;
> +                                       __u32 order;
> +                               } cgroup;
> +                       };
>                 } iter;

But other than above, I like how UAPI looks like, thanks!

[...]

> + *
> + * For walking descendants, cgroup_iter can walk in either pre-order or
> + * post-order. For walking ancestors, the iter walks up from a cgroup to
> + * the root.
> + *
> + * The iter program can terminate the walk early by returning 1. Walk
> + * continues if prog returns 0.
> + *
> + * The prog can check (seq->num == 0) to determine whether this is
> + * the first element. The prog may also be passed a NULL cgroup,
> + * which means the walk has completed and the prog has a chance to
> + * do post-processing, such as outputing an epilogue.

typo: outputting

> + *
> + * Note: the iter_prog is called with cgroup_mutex held.
> + *
> + * Currently only one session is supported, which means, depending on the
> + * volume of data bpf program intends to send to user space, the number
> + * of cgroups that can be walked is limited. For example, given the current
> + * buffer size is 8 * PAGE_SIZE, if the program sends 64B data for each
> + * cgroup, assuming PAGE_SIZE is 4kb, the total number of cgroups that can
> + * be walked is 512. This is a limitation of cgroup_iter. If the output data
> + * is larger than the kernel buffer size, after all data in the kernel buffer
> + * is consumed by user space, the subsequent read() syscall will signal
> + * EOPNOTSUPP. In order to work around, the user may have to update their
> + * program to reduce the volume of data sent to output. For example, skip
> + * some uninteresting cgroups.
> + */
> +

[...]

> +
> +static void bpf_iter_cgroup_show_fdinfo(const struct bpf_iter_aux_info *aux,
> +                                       struct seq_file *seq)
> +{
> +       char *buf;
> +
> +       buf = kzalloc(PATH_MAX, GFP_KERNEL);
> +       if (!buf) {
> +               seq_puts(seq, "cgroup_path:\t<unknown>\n");
> +               goto show_order;
> +       }
> +
> +       /* If cgroup_path_ns() fails, buf will be an empty string, cgroup_path
> +        * will print nothing.
> +        *
> +        * Path is in the calling process's cgroup namespace.
> +        */
> +       cgroup_path_ns(aux->cgroup.start, buf, PATH_MAX,
> +                      current->nsproxy->cgroup_ns);
> +       seq_printf(seq, "cgroup_path:\t%s\n", buf);
> +       kfree(buf);
> +
> +show_order:
> +       if (aux->cgroup.order == BPF_ITER_DESCENDANTS_PRE)
> +               seq_puts(seq, "order: pre\n");
> +       else if (aux->cgroup.order == BPF_ITER_DESCENDANTS_POST)
> +               seq_puts(seq, "order: post\n");
> +       else if (aux->cgroup.order == BPF_ITER_ANCESTORS_UP)
> +               seq_puts(seq, "order: up\n");
> +       else /* BPF_ITER_SELF */
> +               seq_puts(seq, "order: self\n");

should we output "descendants_pre", "descendants_post", "ancestors_up"
and "self" to match the enum names more uniformly? We had a similar
discussion when Daniel Mueller was doing some cleanup in bpftool, and
the consensus was that a uniform and consistent mapping between a
kernel enum and its string representation is more valuable than
shortness of the string.

> +}
> +

[...]

^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [PATCH bpf-next v7 5/8] selftests/bpf: Test cgroup_iter.
  2022-08-05 21:48 ` [PATCH bpf-next v7 5/8] selftests/bpf: Test cgroup_iter Hao Luo
@ 2022-08-09  0:20   ` Andrii Nakryiko
  2022-08-09  1:18     ` Hao Luo
  0 siblings, 1 reply; 23+ messages in thread
From: Andrii Nakryiko @ 2022-08-09  0:20 UTC (permalink / raw)
  To: Hao Luo
  Cc: linux-kernel, bpf, cgroups, netdev, Alexei Starovoitov,
	Andrii Nakryiko, Daniel Borkmann, Martin KaFai Lau, Song Liu,
	Yonghong Song, Tejun Heo, Zefan Li, KP Singh, Johannes Weiner,
	Michal Hocko, Benjamin Tissoires, John Fastabend, Michal Koutny,
	Roman Gushchin, David Rientjes, Stanislav Fomichev, Shakeel Butt,
	Yosry Ahmed

On Fri, Aug 5, 2022 at 2:49 PM Hao Luo <haoluo@google.com> wrote:
>
> Add a selftest for cgroup_iter. The selftest creates a mini cgroup tree
> of the following structure:
>
>     ROOT (working cgroup)
>      |
>    PARENT
>   /      \
> CHILD1  CHILD2
>
> and tests the following scenarios:
>
>  - invalid cgroup fd.
>  - pre-order walk over descendants from PARENT.
>  - post-order walk over descendants from PARENT.
>  - walk of ancestors from PARENT.
>  - walk from PARENT in the default order, which is pre-order.
>  - process only a single object (i.e. PARENT).
>  - early termination.
>
> Acked-by: Yonghong Song <yhs@fb.com>
> Signed-off-by: Hao Luo <haoluo@google.com>
> ---

LGTM.

Acked-by: Andrii Nakryiko <andrii@kernel.org>

>  .../selftests/bpf/prog_tests/cgroup_iter.c    | 237 ++++++++++++++++++
>  tools/testing/selftests/bpf/progs/bpf_iter.h  |   7 +
>  .../testing/selftests/bpf/progs/cgroup_iter.c |  39 +++
>  3 files changed, 283 insertions(+)
>  create mode 100644 tools/testing/selftests/bpf/prog_tests/cgroup_iter.c
>  create mode 100644 tools/testing/selftests/bpf/progs/cgroup_iter.c
>

[...]

^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [PATCH bpf-next v7 4/8] bpf: Introduce cgroup iter
  2022-08-09  0:18   ` Andrii Nakryiko
@ 2022-08-09  0:56     ` Hao Luo
  2022-08-09 16:23       ` Alexei Starovoitov
  0 siblings, 1 reply; 23+ messages in thread
From: Hao Luo @ 2022-08-09  0:56 UTC (permalink / raw)
  To: Andrii Nakryiko
  Cc: linux-kernel, bpf, cgroups, netdev, Alexei Starovoitov,
	Andrii Nakryiko, Daniel Borkmann, Martin KaFai Lau, Song Liu,
	Yonghong Song, Tejun Heo, Zefan Li, KP Singh, Johannes Weiner,
	Michal Hocko, Benjamin Tissoires, John Fastabend, Michal Koutny,
	Roman Gushchin, David Rientjes, Stanislav Fomichev, Shakeel Butt,
	Yosry Ahmed

On Mon, Aug 8, 2022 at 5:19 PM Andrii Nakryiko
<andrii.nakryiko@gmail.com> wrote:
>
> On Fri, Aug 5, 2022 at 2:49 PM Hao Luo <haoluo@google.com> wrote:
> >
> > Cgroup_iter is a type of bpf_iter. It walks over cgroups in four modes:
> >
> >  - walking a cgroup's descendants in pre-order.
> >  - walking a cgroup's descendants in post-order.
> >  - walking a cgroup's ancestors.
> >  - process only the given cgroup.
> >
> > When attaching cgroup_iter, one can set a cgroup to the iter_link
> > created from attaching. This cgroup is passed as a file descriptor
> > or cgroup id and serves as the starting point of the walk. If no
> > cgroup is specified, the starting point will be the root cgroup v2.
> >
> > For walking descendants, one can specify the order: either pre-order or
> > post-order. For walking ancestors, the walk starts at the specified
> > cgroup and ends at the root.
> >
> > One can also terminate the walk early by returning 1 from the iter
> > program.
> >
> > Note that because walking cgroup hierarchy holds cgroup_mutex, the iter
> > program is called with cgroup_mutex held.
> >
> > Currently only one session is supported, which means, depending on the
> > volume of data bpf program intends to send to user space, the number
> > of cgroups that can be walked is limited. For example, given the current
> > buffer size is 8 * PAGE_SIZE, if the program sends 64B data for each
> > cgroup, assuming PAGE_SIZE is 4kb, the total number of cgroups that can
> > be walked is 512. This is a limitation of cgroup_iter. If the output
> > data is larger than the kernel buffer size, after all data in the
> > kernel buffer is consumed by user space, the subsequent read() syscall
> > will signal EOPNOTSUPP. In order to work around, the user may have to
> > update their program to reduce the volume of data sent to output. For
> > example, skip some uninteresting cgroups. In future, we may extend
> > bpf_iter flags to allow customizing buffer size.
> >
> > Acked-by: Yonghong Song <yhs@fb.com>
> > Acked-by: Tejun Heo <tj@kernel.org>
> > Signed-off-by: Hao Luo <haoluo@google.com>
> > ---
> >  include/linux/bpf.h                           |   8 +
> >  include/uapi/linux/bpf.h                      |  38 +++
> >  kernel/bpf/Makefile                           |   3 +
> >  kernel/bpf/cgroup_iter.c                      | 286 ++++++++++++++++++
> >  tools/include/uapi/linux/bpf.h                |  38 +++
> >  .../selftests/bpf/prog_tests/btf_dump.c       |   4 +-
> >  6 files changed, 375 insertions(+), 2 deletions(-)
> >  create mode 100644 kernel/bpf/cgroup_iter.c
> >
> > diff --git a/include/linux/bpf.h b/include/linux/bpf.h
> > index 20c26aed7896..09b5c2167424 100644
> > --- a/include/linux/bpf.h
> > +++ b/include/linux/bpf.h
> > @@ -48,6 +48,7 @@ struct mem_cgroup;
> >  struct module;
> >  struct bpf_func_state;
> >  struct ftrace_ops;
> > +struct cgroup;
> >
> >  extern struct idr btf_idr;
> >  extern spinlock_t btf_idr_lock;
> > @@ -1730,7 +1731,14 @@ int bpf_obj_get_user(const char __user *pathname, int flags);
> >         int __init bpf_iter_ ## target(args) { return 0; }
> >
> >  struct bpf_iter_aux_info {
> > +       /* for map_elem iter */
> >         struct bpf_map *map;
> > +
> > +       /* for cgroup iter */
> > +       struct {
> > +               struct cgroup *start; /* starting cgroup */
> > +               int order;
> > +       } cgroup;
> >  };
> >
> >  typedef int (*bpf_iter_attach_target_t)(struct bpf_prog *prog,
> > diff --git a/include/uapi/linux/bpf.h b/include/uapi/linux/bpf.h
> > index 59a217ca2dfd..4d758b2e70d6 100644
> > --- a/include/uapi/linux/bpf.h
> > +++ b/include/uapi/linux/bpf.h
> > @@ -87,10 +87,37 @@ struct bpf_cgroup_storage_key {
> >         __u32   attach_type;            /* program attach type (enum bpf_attach_type) */
> >  };
> >
> > +enum bpf_iter_order {
> > +       BPF_ITER_ORDER_DEFAULT = 0,     /* default order. */
>
> why is this default order necessary? It just adds confusion (I had to
> look up the source code to know what the default order is). I might
> have missed some discussion, so if there is a very good reason, then
> please document it in the commit message. But I'd rather not have some
> magical default order. We can set 0 to mean invalid and error out, or
> just make SELF the very first value (and if the user forgot to specify
> a fancier mode, they will hopefully discover this quickly in their
> testing).
>

PRE/POST/UP are tree-specific orders. SELF applies to all iters and
yields only a single object. How would task_iter express a non-self
order? By non-self, I mean something like "I don't care about the
order, just scan _all_ the objects". And this "don't care" order, IMO,
may be the common case; I don't think everyone cares about the walking
order for tasks. DEFAULT is intentionally made the first value, so
that users who don't care about the order don't have to specify this
field.

If that sounds valid, maybe using "UNSPEC" instead of "DEFAULT" is better?
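
I.e., roughly (sketch):

	enum bpf_iter_order {
		BPF_ITER_ORDER_UNSPEC = 0,	/* "don't care", per-iter default */
		BPF_ITER_SELF,			/* process only a single object */
		BPF_ITER_DESCENDANTS_PRE,	/* walk descendants in pre-order */
		BPF_ITER_DESCENDANTS_POST,	/* walk descendants in post-order */
		BPF_ITER_ANCESTORS_UP,		/* walk ancestors upward */
	};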

> > +       BPF_ITER_SELF,                  /* process only a single object. */
> > +       BPF_ITER_DESCENDANTS_PRE,       /* walk descendants in pre-order. */
> > +       BPF_ITER_DESCENDANTS_POST,      /* walk descendants in post-order. */
> > +       BPF_ITER_ANCESTORS_UP,          /* walk ancestors upward. */
> > +};
> > +
>
> This is a somewhat pedantic nit, so feel free to ignore completely,
> but don't DESCENDANTS_{PRE,POST} and ANCESTORS_UP also include "self"?
> As it is right now, BPF_ITER_SELF name might be read as implying that
> DESCENDANTS and ANCESTORS order don't include self. So I don't know,
> maybe BPF_ITER_SELF_ONLY would be a bit clearer?
>

No problem with that. I can update it in the next version.

>
> >  union bpf_iter_link_info {
> >         struct {
> >                 __u32   map_fd;
> >         } map;
> > +       struct {
> > +               /* Valid values include:
> > +                *  - BPF_ITER_ORDER_DEFAULT
> > +                *  - BPF_ITER_SELF
> > +                *  - BPF_ITER_DESCENDANTS_PRE
> > +                *  - BPF_ITER_DESCENDANTS_POST
> > +                *  - BPF_ITER_ANCESTORS_UP
> > +                * for cgroup_iter, DEFAULT is equivalent to DESCENDANTS_PRE.
> > +                */
> > +               __u32   order;
> > +
> > +               /* At most one of cgroup_fd and cgroup_id can be non-zero. If
> > +                * both are zero, the walk starts from the default cgroup v2
> > +                * root. For walking v1 hierarchy, one should always explicitly
> > +                * specify cgroup_fd.
> > +                */
> > +               __u32   cgroup_fd;
> > +               __u64   cgroup_id;
> > +       } cgroup;
> >  };
> >
> >  /* BPF syscall commands, see bpf(2) man-page for more details. */
> > @@ -6134,11 +6161,22 @@ struct bpf_link_info {
> >                 struct {
> >                         __aligned_u64 target_name; /* in/out: target_name buffer ptr */
> >                         __u32 target_name_len;     /* in/out: target_name buffer len */
> > +
> > +                       /* If the iter specific field is 32 bits, it can be put
> > +                        * in the first or second union. Otherwise it should be
> > +                        * put in the second union.
> > +                        */
> >                         union {
> >                                 struct {
> >                                         __u32 map_id;
> >                                 } map;
> >                         };
> > +                       union {
> > +                               struct {
> > +                                       __u64 cgroup_id;
> > +                                       __u32 order;
> > +                               } cgroup;
> > +                       };
> >                 } iter;
>
> But other than above, I like how UAPI looks like, thanks!
>
> [...]
>
> > + *
> > + * For walking descendants, cgroup_iter can walk in either pre-order or
> > + * post-order. For walking ancestors, the iter walks up from a cgroup to
> > + * the root.
> > + *
> > + * The iter program can terminate the walk early by returning 1. Walk
> > + * continues if prog returns 0.
> > + *
> > + * The prog can check (seq->num == 0) to determine whether this is
> > + * the first element. The prog may also be passed a NULL cgroup,
> > + * which means the walk has completed and the prog has a chance to
> > + * do post-processing, such as outputing an epilogue.
>
> typo: outputting
>

Thanks for catching. Will fix.

> > + *
> > + * Note: the iter_prog is called with cgroup_mutex held.
> > + *
> > + * Currently only one session is supported, which means, depending on the
> > + * volume of data bpf program intends to send to user space, the number
> > + * of cgroups that can be walked is limited. For example, given the current
> > + * buffer size is 8 * PAGE_SIZE, if the program sends 64B data for each
> > + * cgroup, assuming PAGE_SIZE is 4kb, the total number of cgroups that can
> > + * be walked is 512. This is a limitation of cgroup_iter. If the output data
> > + * is larger than the kernel buffer size, after all data in the kernel buffer
> > + * is consumed by user space, the subsequent read() syscall will signal
> > + * EOPNOTSUPP. In order to work around, the user may have to update their
> > + * program to reduce the volume of data sent to output. For example, skip
> > + * some uninteresting cgroups.
> > + */
> > +
>
> [...]
>
> > +
> > +static void bpf_iter_cgroup_show_fdinfo(const struct bpf_iter_aux_info *aux,
> > +                                       struct seq_file *seq)
> > +{
> > +       char *buf;
> > +
> > +       buf = kzalloc(PATH_MAX, GFP_KERNEL);
> > +       if (!buf) {
> > +               seq_puts(seq, "cgroup_path:\t<unknown>\n");
> > +               goto show_order;
> > +       }
> > +
> > +       /* If cgroup_path_ns() fails, buf will be an empty string, cgroup_path
> > +        * will print nothing.
> > +        *
> > +        * Path is in the calling process's cgroup namespace.
> > +        */
> > +       cgroup_path_ns(aux->cgroup.start, buf, PATH_MAX,
> > +                      current->nsproxy->cgroup_ns);
> > +       seq_printf(seq, "cgroup_path:\t%s\n", buf);
> > +       kfree(buf);
> > +
> > +show_order:
> > +       if (aux->cgroup.order == BPF_ITER_DESCENDANTS_PRE)
> > +               seq_puts(seq, "order: pre\n");
> > +       else if (aux->cgroup.order == BPF_ITER_DESCENDANTS_POST)
> > +               seq_puts(seq, "order: post\n");
> > +       else if (aux->cgroup.order == BPF_ITER_ANCESTORS_UP)
> > +               seq_puts(seq, "order: up\n");
> > +       else /* BPF_ITER_SELF */
> > +               seq_puts(seq, "order: self\n");
>
> should we output "descendants_pre", "descendants_post", "ancestors_up"
> and "self" to match the enum names more uniformly? We had a similar
> discussion when Daniel Mueller was doing some cleanup in bpftool, and
> the consensus was that a uniform and consistent mapping between a
> kernel enum and its string representation is more valuable than
> shortness of the string.
>

I feel this is a very small nit, but I can update it in the next
version. On second thought, I think spelling out "descendants",
"ancestors" and "self" is probably slightly better, because that way
people know it's a tree when reading the iter link info.
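
I.e., the fdinfo would then read something like (sketch; path
illustrative, keeping the pre/post suffix):

	cgroup_path:	/test/child1
	order: descendants_pre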

> > +}
> > +
>
> [...]

^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [PATCH bpf-next v7 5/8] selftests/bpf: Test cgroup_iter.
  2022-08-09  0:20   ` Andrii Nakryiko
@ 2022-08-09  1:18     ` Hao Luo
  0 siblings, 0 replies; 23+ messages in thread
From: Hao Luo @ 2022-08-09  1:18 UTC (permalink / raw)
  To: Andrii Nakryiko
  Cc: linux-kernel, bpf, cgroups, netdev, Alexei Starovoitov,
	Andrii Nakryiko, Daniel Borkmann, Martin KaFai Lau, Song Liu,
	Yonghong Song, Tejun Heo, Zefan Li, KP Singh, Johannes Weiner,
	Michal Hocko, Benjamin Tissoires, John Fastabend, Michal Koutny,
	Roman Gushchin, David Rientjes, Stanislav Fomichev, Shakeel Butt,
	Yosry Ahmed

On Mon, Aug 8, 2022 at 5:20 PM Andrii Nakryiko
<andrii.nakryiko@gmail.com> wrote:
>
> On Fri, Aug 5, 2022 at 2:49 PM Hao Luo <haoluo@google.com> wrote:
> >
> > Add a selftest for cgroup_iter. The selftest creates a mini cgroup tree
> > of the following structure:
> >
> >     ROOT (working cgroup)
> >      |
> >    PARENT
> >   /      \
> > CHILD1  CHILD2
> >
> > and tests the following scenarios:
> >
> >  - invalid cgroup fd.
> >  - pre-order walk over descendants from PARENT.
> >  - post-order walk over descendants from PARENT.
> >  - walk of ancestors from PARENT.
> >  - walk from PARENT in the default order, which is pre-order.
> >  - process only a single object (i.e. PARENT).
> >  - early termination.
> >
> > Acked-by: Yonghong Song <yhs@fb.com>
> > Signed-off-by: Hao Luo <haoluo@google.com>
> > ---
>
> LGTM.
>
> Acked-by: Andrii Nakryiko <andrii@kernel.org>
>

Thanks!

> >  .../selftests/bpf/prog_tests/cgroup_iter.c    | 237 ++++++++++++++++++
> >  tools/testing/selftests/bpf/progs/bpf_iter.h  |   7 +
> >  .../testing/selftests/bpf/progs/cgroup_iter.c |  39 +++
> >  3 files changed, 283 insertions(+)
> >  create mode 100644 tools/testing/selftests/bpf/prog_tests/cgroup_iter.c
> >  create mode 100644 tools/testing/selftests/bpf/progs/cgroup_iter.c
> >
>
> [...]

^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [PATCH bpf-next v7 0/8] bpf: rstat: cgroup hierarchical stats
  2022-08-05 21:48 [PATCH bpf-next v7 0/8] bpf: rstat: cgroup hierarchical stats Hao Luo
                   ` (7 preceding siblings ...)
  2022-08-05 21:48 ` [PATCH bpf-next v7 8/8] selftests/bpf: add a selftest for cgroup hierarchical stats collection Hao Luo
@ 2022-08-09 16:20 ` patchwork-bot+netdevbpf
  8 siblings, 0 replies; 23+ messages in thread
From: patchwork-bot+netdevbpf @ 2022-08-09 16:20 UTC (permalink / raw)
  To: Hao Luo
  Cc: linux-kernel, bpf, cgroups, netdev, ast, andrii, daniel,
	martin.lau, song, yhs, tj, lizefan.x, kpsingh, hannes, mhocko,
	benjamin.tissoires, john.fastabend, mkoutny, roman.gushchin,
	rientjes, sdf, shakeelb, yosryahmed

Hello:

This series was applied to bpf/bpf-next.git (master)
by Alexei Starovoitov <ast@kernel.org>:

On Fri,  5 Aug 2022 14:48:13 -0700 you wrote:
> This patch series allows for using bpf to collect hierarchical cgroup
> stats efficiently by integrating with the rstat framework. The rstat
> framework provides an efficient way to collect cgroup stats percpu and
> propagate them through the cgroup hierarchy.
> 
> The stats are exposed to userspace in textual form by reading files in
> bpffs, similar to cgroupfs stats by using a cgroup_iter program.
> cgroup_iter is a type of bpf_iter. It walks over cgroups in four modes:
> - walking a cgroup's descendants in pre-order.
> - walking a cgroup's descendants in post-order.
> - walking a cgroup's ancestors.
> - process only a single object.
> 
> [...]

Here is the summary with links:
  - [bpf-next,v7,1/8] btf: Add a new kfunc flag which allows to mark a function to be sleepable
    https://git.kernel.org/bpf/bpf-next/c/fa96b24204af
  - [bpf-next,v7,2/8] cgroup: enable cgroup_get_from_file() on cgroup1
    https://git.kernel.org/bpf/bpf-next/c/f3a2aebdd6fb
  - [bpf-next,v7,3/8] bpf, iter: Fix the condition on p when calling stop.
    https://git.kernel.org/bpf/bpf-next/c/be3bb83dab2d
  - [bpf-next,v7,4/8] bpf: Introduce cgroup iter
    (no matching commit)
  - [bpf-next,v7,5/8] selftests/bpf: Test cgroup_iter.
    (no matching commit)
  - [bpf-next,v7,6/8] cgroup: bpf: enable bpf programs to integrate with rstat
    (no matching commit)
  - [bpf-next,v7,7/8] selftests/bpf: extend cgroup helpers
    (no matching commit)
  - [bpf-next,v7,8/8] selftests/bpf: add a selftest for cgroup hierarchical stats collection
    (no matching commit)

You are awesome, thank you!
-- 
Deet-doot-dot, I am a bot.
https://korg.docs.kernel.org/patchwork/pwbot.html



^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [PATCH bpf-next v7 4/8] bpf: Introduce cgroup iter
  2022-08-09  0:56     ` Hao Luo
@ 2022-08-09 16:23       ` Alexei Starovoitov
  2022-08-09 18:38         ` Hao Luo
  0 siblings, 1 reply; 23+ messages in thread
From: Alexei Starovoitov @ 2022-08-09 16:23 UTC (permalink / raw)
  To: Hao Luo
  Cc: Andrii Nakryiko, linux-kernel, bpf, cgroups, netdev,
	Alexei Starovoitov, Andrii Nakryiko, Daniel Borkmann,
	Martin KaFai Lau, Song Liu, Yonghong Song, Tejun Heo, Zefan Li,
	KP Singh, Johannes Weiner, Michal Hocko, Benjamin Tissoires,
	John Fastabend, Michal Koutny, Roman Gushchin, David Rientjes,
	Stanislav Fomichev, Shakeel Butt, Yosry Ahmed

On Mon, Aug 08, 2022 at 05:56:57PM -0700, Hao Luo wrote:
> On Mon, Aug 8, 2022 at 5:19 PM Andrii Nakryiko
> <andrii.nakryiko@gmail.com> wrote:
> >
> > On Fri, Aug 5, 2022 at 2:49 PM Hao Luo <haoluo@google.com> wrote:
> > >
> > > Cgroup_iter is a type of bpf_iter. It walks over cgroups in four modes:
> > >
> > >  - walking a cgroup's descendants in pre-order.
> > >  - walking a cgroup's descendants in post-order.
> > >  - walking a cgroup's ancestors.
> > >  - process only the given cgroup.
> > >
> > > When attaching cgroup_iter, one can set a cgroup to the iter_link
> > > created from attaching. This cgroup is passed as a file descriptor
> > > or cgroup id and serves as the starting point of the walk. If no
> > > cgroup is specified, the starting point will be the root cgroup v2.
> > >
> > > For walking descendants, one can specify the order: either pre-order or
> > > post-order. For walking ancestors, the walk starts at the specified
> > > cgroup and ends at the root.
> > >
> > > One can also terminate the walk early by returning 1 from the iter
> > > program.
> > >
> > > Note that because walking cgroup hierarchy holds cgroup_mutex, the iter
> > > program is called with cgroup_mutex held.
> > >
> > > Currently only one session is supported, which means, depending on the
> > > volume of data bpf program intends to send to user space, the number
> > > of cgroups that can be walked is limited. For example, given the current
> > > buffer size is 8 * PAGE_SIZE, if the program sends 64B data for each
> > > cgroup, assuming PAGE_SIZE is 4kb, the total number of cgroups that can
> > > be walked is 512. This is a limitation of cgroup_iter. If the output
> > > data is larger than the kernel buffer size, after all data in the
> > > kernel buffer is consumed by user space, the subsequent read() syscall
> > > will signal EOPNOTSUPP. In order to work around, the user may have to
> > > update their program to reduce the volume of data sent to output. For
> > > example, skip some uninteresting cgroups. In future, we may extend
> > > bpf_iter flags to allow customizing buffer size.
> > >
> > > Acked-by: Yonghong Song <yhs@fb.com>
> > > Acked-by: Tejun Heo <tj@kernel.org>
> > > Signed-off-by: Hao Luo <haoluo@google.com>
> > > ---
> > >  include/linux/bpf.h                           |   8 +
> > >  include/uapi/linux/bpf.h                      |  38 +++
> > >  kernel/bpf/Makefile                           |   3 +
> > >  kernel/bpf/cgroup_iter.c                      | 286 ++++++++++++++++++
> > >  tools/include/uapi/linux/bpf.h                |  38 +++
> > >  .../selftests/bpf/prog_tests/btf_dump.c       |   4 +-
> > >  6 files changed, 375 insertions(+), 2 deletions(-)
> > >  create mode 100644 kernel/bpf/cgroup_iter.c
> > >
> > > diff --git a/include/linux/bpf.h b/include/linux/bpf.h
> > > index 20c26aed7896..09b5c2167424 100644
> > > --- a/include/linux/bpf.h
> > > +++ b/include/linux/bpf.h
> > > @@ -48,6 +48,7 @@ struct mem_cgroup;
> > >  struct module;
> > >  struct bpf_func_state;
> > >  struct ftrace_ops;
> > > +struct cgroup;
> > >
> > >  extern struct idr btf_idr;
> > >  extern spinlock_t btf_idr_lock;
> > > @@ -1730,7 +1731,14 @@ int bpf_obj_get_user(const char __user *pathname, int flags);
> > >         int __init bpf_iter_ ## target(args) { return 0; }
> > >
> > >  struct bpf_iter_aux_info {
> > > +       /* for map_elem iter */
> > >         struct bpf_map *map;
> > > +
> > > +       /* for cgroup iter */
> > > +       struct {
> > > +               struct cgroup *start; /* starting cgroup */
> > > +               int order;
> > > +       } cgroup;
> > >  };
> > >
> > >  typedef int (*bpf_iter_attach_target_t)(struct bpf_prog *prog,
> > > diff --git a/include/uapi/linux/bpf.h b/include/uapi/linux/bpf.h
> > > index 59a217ca2dfd..4d758b2e70d6 100644
> > > --- a/include/uapi/linux/bpf.h
> > > +++ b/include/uapi/linux/bpf.h
> > > @@ -87,10 +87,37 @@ struct bpf_cgroup_storage_key {
> > >         __u32   attach_type;            /* program attach type (enum bpf_attach_type) */
> > >  };
> > >
> > > +enum bpf_iter_order {
> > > +       BPF_ITER_ORDER_DEFAULT = 0,     /* default order. */
> >
> > why is this default order necessary? It just adds confusion (I had to
> > look up source code to know what is default order). I might have
> > missed some discussion, so if there is some very good reason, then
> > please document this in commit message. But I'd rather not do some
> > magical default order instead. We can set 0 to mean invalid and error
> > out, or just do SELF as the very first value (and if user forgot to
> > specify more fancy mode, they hopefully will quickly discover this in
> > their testing).
> >
> 
> PRE/POST/UP are tree-specific orders. SELF applies on all iters and
> yields only a single object. How does task_iter express a non-self
> order? By non-self, I mean something like "I don't care about the
> order, just scan _all_ the objects". And this "don't care" order, IMO,
> may be the common case. I don't think everyone cares about walking
> order for tasks. The DEFAULT is intentionally put at the first value,
> so that if users don't care about order, they don't have to specify
> this field.
> 
> If that sounds valid, maybe using "UNSPEC" instead of "DEFAULT" is better?

I agree with Andrii.
This:
+       if (order == BPF_ITER_ORDER_DEFAULT)
+               order = BPF_ITER_DESCENDANTS_PRE;

looks like an arbitrary choice.
imo
BPF_ITER_DESCENDANTS_PRE = 0,
would have been more obvious. No need to dig into the definition of "default".

UNSPEC = 0
is fine too if we want the user to always be conscious about the order,
with the kernel erroring out if that field is not initialized.
That would be my preference, since it will match the rest of uapi/bpf.h.

I applied the first 3 patches to ease the respin.
Thanks!

^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [PATCH bpf-next v7 4/8] bpf: Introduce cgroup iter
  2022-08-09 16:23       ` Alexei Starovoitov
@ 2022-08-09 18:38         ` Hao Luo
  2022-08-11  3:10           ` Hao Luo
  0 siblings, 1 reply; 23+ messages in thread
From: Hao Luo @ 2022-08-09 18:38 UTC (permalink / raw)
  To: Alexei Starovoitov
  Cc: Andrii Nakryiko, linux-kernel, bpf, cgroups, netdev,
	Alexei Starovoitov, Andrii Nakryiko, Daniel Borkmann,
	Martin KaFai Lau, Song Liu, Yonghong Song, Tejun Heo, Zefan Li,
	KP Singh, Johannes Weiner, Michal Hocko, Benjamin Tissoires,
	John Fastabend, Michal Koutny, Roman Gushchin, David Rientjes,
	Stanislav Fomichev, Shakeel Butt, Yosry Ahmed

On Tue, Aug 9, 2022 at 9:23 AM Alexei Starovoitov
<alexei.starovoitov@gmail.com> wrote:
>
> On Mon, Aug 08, 2022 at 05:56:57PM -0700, Hao Luo wrote:
> > On Mon, Aug 8, 2022 at 5:19 PM Andrii Nakryiko
> > <andrii.nakryiko@gmail.com> wrote:
> > >
> > > On Fri, Aug 5, 2022 at 2:49 PM Hao Luo <haoluo@google.com> wrote:
> > > >
> > > > Cgroup_iter is a type of bpf_iter. It walks over cgroups in four modes:
> > > >
> > > >  - walking a cgroup's descendants in pre-order.
> > > >  - walking a cgroup's descendants in post-order.
> > > >  - walking a cgroup's ancestors.
> > > >  - process only the given cgroup.
> > > >
[...]
> > > > diff --git a/include/uapi/linux/bpf.h b/include/uapi/linux/bpf.h
> > > > index 59a217ca2dfd..4d758b2e70d6 100644
> > > > --- a/include/uapi/linux/bpf.h
> > > > +++ b/include/uapi/linux/bpf.h
> > > > @@ -87,10 +87,37 @@ struct bpf_cgroup_storage_key {
> > > >         __u32   attach_type;            /* program attach type (enum bpf_attach_type) */
> > > >  };
> > > >
> > > > +enum bpf_iter_order {
> > > > +       BPF_ITER_ORDER_DEFAULT = 0,     /* default order. */
> > >
> > > why is this default order necessary? It just adds confusion (I had to
> > > look up source code to know what is default order). I might have
> > > missed some discussion, so if there is some very good reason, then
> > > please document this in commit message. But I'd rather not do some
> > > magical default order instead. We can set 0 to mean invalid and error
> > > out, or just do SELF as the very first value (and if user forgot to
> > > specify more fancy mode, they hopefully will quickly discover this in
> > > their testing).
> > >
> >
> > PRE/POST/UP are tree-specific orders. SELF applies on all iters and
> > yields only a single object. How does task_iter express a non-self
> > order? By non-self, I mean something like "I don't care about the
> > order, just scan _all_ the objects". And this "don't care" order, IMO,
> > may be the common case. I don't think everyone cares about walking
> > order for tasks. The DEFAULT is intentionally put at the first value,
> > so that if users don't care about order, they don't have to specify
> > this field.
> >
> > If that sounds valid, maybe using "UNSPEC" instead of "DEFAULT" is better?
>
> I agree with Andrii.
> This:
> +       if (order == BPF_ITER_ORDER_DEFAULT)
> +               order = BPF_ITER_DESCENDANTS_PRE;
>
> looks like an arbitrary choice.
> imo
> BPF_ITER_DESCENDANTS_PRE = 0,
> would have been more obvious. No need to dig into definition of "default".
>
> UNSPEC = 0
> is fine too if we want user to always be conscious about the order
> and the kernel will error if that field is not initialized.
> That would be my preference, since it will match the rest of uapi/bpf.h
>

Sounds good. In the next version, I will use

enum bpf_iter_order {
        BPF_ITER_ORDER_UNSPEC = 0,
        BPF_ITER_SELF_ONLY,             /* process only a single object. */
        BPF_ITER_DESCENDANTS_PRE,       /* walk descendants in pre-order. */
        BPF_ITER_DESCENDANTS_POST,      /* walk descendants in post-order. */
        BPF_ITER_ANCESTORS_UP,          /* walk ancestors upward. */
};

and explicitly list the values accepted by cgroup_iter, erroring out
if UNSPEC is detected.
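
Roughly like this, for illustration (sketch only; the exact helper
name and call site may differ in the respin):

static int cgroup_iter_order_check(enum bpf_iter_order order)
{
	switch (order) {
	case BPF_ITER_SELF_ONLY:
	case BPF_ITER_DESCENDANTS_PRE:
	case BPF_ITER_DESCENDANTS_POST:
	case BPF_ITER_ANCESTORS_UP:
		return 0;
	default:	/* including BPF_ITER_ORDER_UNSPEC */
		return -EINVAL;
	}
}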

Also, following Andrii's comments, I will change BPF_ITER_SELF to
BPF_ITER_SELF_ONLY, which does seem a little more explicit in
comparison.

> I applied the first 3 patches to ease respin.

Thanks! This helps!

> Thanks!

^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [PATCH bpf-next v7 4/8] bpf: Introduce cgroup iter
  2022-08-09 18:38         ` Hao Luo
@ 2022-08-11  3:10           ` Hao Luo
  2022-08-11 14:09             ` Yosry Ahmed
  0 siblings, 1 reply; 23+ messages in thread
From: Hao Luo @ 2022-08-11  3:10 UTC (permalink / raw)
  To: Alexei Starovoitov
  Cc: Andrii Nakryiko, linux-kernel, bpf, cgroups, netdev,
	Alexei Starovoitov, Andrii Nakryiko, Daniel Borkmann,
	Martin KaFai Lau, Song Liu, Yonghong Song, Tejun Heo, Zefan Li,
	KP Singh, Johannes Weiner, Michal Hocko, Benjamin Tissoires,
	John Fastabend, Michal Koutny, Roman Gushchin, David Rientjes,
	Stanislav Fomichev, Shakeel Butt, Yosry Ahmed

On Tue, Aug 9, 2022 at 11:38 AM Hao Luo <haoluo@google.com> wrote:
>
> On Tue, Aug 9, 2022 at 9:23 AM Alexei Starovoitov
> <alexei.starovoitov@gmail.com> wrote:
> >
> > On Mon, Aug 08, 2022 at 05:56:57PM -0700, Hao Luo wrote:
> > > On Mon, Aug 8, 2022 at 5:19 PM Andrii Nakryiko
> > > <andrii.nakryiko@gmail.com> wrote:
> > > >
> > > > On Fri, Aug 5, 2022 at 2:49 PM Hao Luo <haoluo@google.com> wrote:
> > > > >
> > > > > Cgroup_iter is a type of bpf_iter. It walks over cgroups in four modes:
> > > > >
> > > > >  - walking a cgroup's descendants in pre-order.
> > > > >  - walking a cgroup's descendants in post-order.
> > > > >  - walking a cgroup's ancestors.
> > > > >  - process only the given cgroup.
> > > > >
> [...]
> > > > > diff --git a/include/uapi/linux/bpf.h b/include/uapi/linux/bpf.h
> > > > > index 59a217ca2dfd..4d758b2e70d6 100644
> > > > > --- a/include/uapi/linux/bpf.h
> > > > > +++ b/include/uapi/linux/bpf.h
> > > > > @@ -87,10 +87,37 @@ struct bpf_cgroup_storage_key {
> > > > >         __u32   attach_type;            /* program attach type (enum bpf_attach_type) */
> > > > >  };
> > > > >
> > > > > +enum bpf_iter_order {
> > > > > +       BPF_ITER_ORDER_DEFAULT = 0,     /* default order. */
> > > >
> > > > why is this default order necessary? It just adds confusion (I had to
> > > > look up source code to know what is default order). I might have
> > > > missed some discussion, so if there is some very good reason, then
> > > > please document this in commit message. But I'd rather not do some
> > > > magical default order instead. We can set 0 to mean invalid and error
> > > > out, or just do SELF as the very first value (and if user forgot to
> > > > specify more fancy mode, they hopefully will quickly discover this in
> > > > their testing).
> > > >
> > >
> > > PRE/POST/UP are tree-specific orders. SELF applies on all iters and
> > > yields only a single object. How does task_iter express a non-self
> > > order? By non-self, I mean something like "I don't care about the
> > > order, just scan _all_ the objects". And this "don't care" order, IMO,
> > > may be the common case. I don't think everyone cares about walking
> > > order for tasks. The DEFAULT is intentionally put at the first value,
> > > so that if users don't care about order, they don't have to specify
> > > this field.
> > >
> > > If that sounds valid, maybe using "UNSPEC" instead of "DEFAULT" is better?
> >
> > I agree with Andrii.
> > This:
> > +       if (order == BPF_ITER_ORDER_DEFAULT)
> > +               order = BPF_ITER_DESCENDANTS_PRE;
> >
> > looks like an arbitrary choice.
> > imo
> > BPF_ITER_DESCENDANTS_PRE = 0,
> > would have been more obvious. No need to dig into definition of "default".
> >
> > UNSPEC = 0
> > is fine too if we want user to always be conscious about the order
> > and the kernel will error if that field is not initialized.
> > That would be my preference, since it will match the rest of uapi/bpf.h
> >
>
> Sounds good. In the next version, will use
>
> enum bpf_iter_order {
>         BPF_ITER_ORDER_UNSPEC = 0,
>         BPF_ITER_SELF_ONLY,             /* process only a single object. */
>         BPF_ITER_DESCENDANTS_PRE,       /* walk descendants in pre-order. */
>         BPF_ITER_DESCENDANTS_POST,      /* walk descendants in post-order. */
>         BPF_ITER_ANCESTORS_UP,          /* walk ancestors upward. */
> };
>

Sigh, I find that having UNSPEC=0 and erroring out when seeing UNSPEC
doesn't work. Basically, if we have a non-iter prog and a cgroup_iter
prog written in the same source file, I can't use
bpf_object__attach_skeleton to attach them, because the default
prog_attach_fn for iter initializes `order` to 0 (that is, UNSPEC),
which the kernel will then reject. In order to make
bpf_object__attach_skeleton work on cgroup_iter, I think I need to use
the following

enum bpf_iter_order {
        BPF_ITER_DESCENDANTS_PRE,       /* walk descendants in pre-order. */
        BPF_ITER_DESCENDANTS_POST,      /* walk descendants in post-order. */
        BPF_ITER_ANCESTORS_UP,          /* walk ancestors upward. */
        BPF_ITER_SELF_ONLY,             /* process only a single object. */
};

That way, when calling bpf_object__attach_skeleton() on a cgroup_iter,
a link can be generated, and the generated link defaults to a
pre-order walk over the whole hierarchy. Is there a better solution?
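
For reference, explicit attachment with an order would look roughly
like this with libbpf (sketch; the prog name is a placeholder, and the
bpf_iter_link_info layout follows this series):

	DECLARE_LIBBPF_OPTS(bpf_iter_attach_opts, opts);
	union bpf_iter_link_info linfo;
	struct bpf_link *link;

	memset(&linfo, 0, sizeof(linfo));
	linfo.cgroup.cgroup_fd = (__u32)cgroup_fd; /* or set linfo.cgroup.cgroup_id */
	linfo.cgroup.order = BPF_ITER_DESCENDANTS_PRE;
	opts.link_info = &linfo;
	opts.link_info_len = sizeof(linfo);

	link = bpf_program__attach_iter(skel->progs.cgroup_iter_prog, &opts);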

> and explicitly list the values acceptable by cgroup_iter, error out if
> UNSPEC is detected.
>
> Also, following Andrii's comments, will change BPF_ITER_SELF to
> BPF_ITER_SELF_ONLY, which does seem a little bit explicit in
> comparison.
>
> > I applied the first 3 patches to ease respin.
>
> Thanks! This helps!
>
> > Thanks!

^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [PATCH bpf-next v7 4/8] bpf: Introduce cgroup iter
  2022-08-11  3:10           ` Hao Luo
@ 2022-08-11 14:09             ` Yosry Ahmed
  2022-08-16  4:12               ` Andrii Nakryiko
  0 siblings, 1 reply; 23+ messages in thread
From: Yosry Ahmed @ 2022-08-11 14:09 UTC (permalink / raw)
  To: Hao Luo
  Cc: Alexei Starovoitov, Andrii Nakryiko, Linux Kernel Mailing List,
	bpf, Cgroups, Networking, Alexei Starovoitov, Andrii Nakryiko,
	Daniel Borkmann, Martin KaFai Lau, Song Liu, Yonghong Song,
	Tejun Heo, Zefan Li, KP Singh, Johannes Weiner, Michal Hocko,
	Benjamin Tissoires, John Fastabend, Michal Koutny,
	Roman Gushchin, David Rientjes, Stanislav Fomichev, Shakeel Butt

On Wed, Aug 10, 2022 at 8:10 PM Hao Luo <haoluo@google.com> wrote:
>
> On Tue, Aug 9, 2022 at 11:38 AM Hao Luo <haoluo@google.com> wrote:
> >
> > On Tue, Aug 9, 2022 at 9:23 AM Alexei Starovoitov
> > <alexei.starovoitov@gmail.com> wrote:
> > >
> > > On Mon, Aug 08, 2022 at 05:56:57PM -0700, Hao Luo wrote:
> > > > On Mon, Aug 8, 2022 at 5:19 PM Andrii Nakryiko
> > > > <andrii.nakryiko@gmail.com> wrote:
> > > > >
> > > > > On Fri, Aug 5, 2022 at 2:49 PM Hao Luo <haoluo@google.com> wrote:
> > > > > >
> > > > > > Cgroup_iter is a type of bpf_iter. It walks over cgroups in four modes:
> > > > > >
> > > > > >  - walking a cgroup's descendants in pre-order.
> > > > > >  - walking a cgroup's descendants in post-order.
> > > > > >  - walking a cgroup's ancestors.
> > > > > >  - process only the given cgroup.
> > > > > >
> > [...]
> > > > > > diff --git a/include/uapi/linux/bpf.h b/include/uapi/linux/bpf.h
> > > > > > index 59a217ca2dfd..4d758b2e70d6 100644
> > > > > > --- a/include/uapi/linux/bpf.h
> > > > > > +++ b/include/uapi/linux/bpf.h
> > > > > > @@ -87,10 +87,37 @@ struct bpf_cgroup_storage_key {
> > > > > >         __u32   attach_type;            /* program attach type (enum bpf_attach_type) */
> > > > > >  };
> > > > > >
> > > > > > +enum bpf_iter_order {
> > > > > > +       BPF_ITER_ORDER_DEFAULT = 0,     /* default order. */
> > > > >
> > > > > why is this default order necessary? It just adds confusion (I had to
> > > > > look up source code to know what is default order). I might have
> > > > > missed some discussion, so if there is some very good reason, then
> > > > > please document this in commit message. But I'd rather not do some
> > > > > magical default order instead. We can set 0 to mean invalid and error
> > > > > out, or just do SELF as the very first value (and if user forgot to
> > > > > specify more fancy mode, they hopefully will quickly discover this in
> > > > > their testing).
> > > > >
> > > >
> > > > PRE/POST/UP are tree-specific orders. SELF applies on all iters and
> > > > yields only a single object. How does task_iter express a non-self
> > > > order? By non-self, I mean something like "I don't care about the
> > > > order, just scan _all_ the objects". And this "don't care" order, IMO,
> > > > may be the common case. I don't think everyone cares about walking
> > > > order for tasks. The DEFAULT is intentionally put at the first value,
> > > > so that if users don't care about order, they don't have to specify
> > > > this field.
> > > >
> > > > If that sounds valid, maybe using "UNSPEC" instead of "DEFAULT" is better?
> > >
> > > I agree with Andrii.
> > > This:
> > > +       if (order == BPF_ITER_ORDER_DEFAULT)
> > > +               order = BPF_ITER_DESCENDANTS_PRE;
> > >
> > > looks like an arbitrary choice.
> > > imo
> > > BPF_ITER_DESCENDANTS_PRE = 0,
> > > would have been more obvious. No need to dig into definition of "default".
> > >
> > > UNSPEC = 0
> > > is fine too if we want user to always be conscious about the order
> > > and the kernel will error if that field is not initialized.
> > > That would be my preference, since it will match the rest of uapi/bpf.h
> > >
> >
> > Sounds good. In the next version, will use
> >
> > enum bpf_iter_order {
> >         BPF_ITER_ORDER_UNSPEC = 0,
> >         BPF_ITER_SELF_ONLY,             /* process only a single object. */
> >         BPF_ITER_DESCENDANTS_PRE,       /* walk descendants in pre-order. */
> >         BPF_ITER_DESCENDANTS_POST,      /* walk descendants in post-order. */
> >         BPF_ITER_ANCESTORS_UP,          /* walk ancestors upward. */
> > };
> >
>
> Sigh, I find that having UNSPEC=0 and erroring out when seeing UNSPEC
> doesn't work. Basically, if we have a non-iter prog and a cgroup_iter
> prog written in the same source file, I can't use
> bpf_object__attach_skeleton to attach them. Because the default
> prog_attach_fn for iter initializes `order` to 0 (that is, UNSPEC),
> which is going to be rejected by the kernel. In order to make
> bpf_object__attach_skeleton work on cgroup_iter, I think I need to use
> the following
>
> enum bpf_iter_order {
>         BPF_ITER_DESCENDANTS_PRE,       /* walk descendants in pre-order. */
>         BPF_ITER_DESCENDANTS_POST,      /* walk descendants in post-order. */
>         BPF_ITER_ANCESTORS_UP,          /* walk ancestors upward. */
>         BPF_ITER_SELF_ONLY,             /* process only a single object. */
> };
>
> So that when calling bpf_object__attach_skeleton() on cgroup_iter, a
> link can be generated and the generated link defaults to pre-order
> walk on the whole hierarchy. Is there a better solution?
>

I think this can be handled by userspace. We can attach the
cgroup_iter separately first (and maybe we will need to set prog->link
as well) so that bpf_object__attach_skeleton() doesn't try to attach
it. I am following this pattern in the selftest in the final patch,
although I think I might be missing setting prog->link, so I am
wondering why there are no issues in that selftest, which has the same
scenario you are describing.

I think such a pattern will need to be used anyway if the users need
to set any non-default arguments for the cgroup_iter prog (like the
selftest), right? The only case we are discussing here is where the
user wants to attach the cgroup_iter with all default options (in
which case the default order will fail).
I agree that it might be inconvenient if the default/uninitialized
options don't work for cgroup_iter, but Alexei pointed out that this
matches other bpf uapis.

My concern is that in the future we may try to reuse enum
bpf_iter_order to set the ordering for other iterators, where the
default/uninitialized value (BPF_ITER_DESCENDANTS_PRE) doesn't make
sense (e.g. the iterated objects don't form a tree). In that case, the
same problem that we are avoiding for cgroup_iter here will show up
for that iterator, and we won't be able to easily change it at that
point because it's uapi.


> > and explicitly list the values acceptable by cgroup_iter, error out if
> > UNSPEC is detected.
> >
> > Also, following Andrii's comments, will change BPF_ITER_SELF to
> > BPF_ITER_SELF_ONLY, which does seem a little bit explicit in
> > comparison.
> >
> > > I applied the first 3 patches to ease respin.
> >
> > Thanks! This helps!
> >
> > > Thanks!

^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [PATCH bpf-next v7 4/8] bpf: Introduce cgroup iter
  2022-08-11 14:09             ` Yosry Ahmed
@ 2022-08-16  4:12               ` Andrii Nakryiko
  2022-08-16  4:19                 ` Andrii Nakryiko
  2022-08-16  6:52                 ` Hao Luo
  0 siblings, 2 replies; 23+ messages in thread
From: Andrii Nakryiko @ 2022-08-16  4:12 UTC (permalink / raw)
  To: Yosry Ahmed
  Cc: Hao Luo, Alexei Starovoitov, Linux Kernel Mailing List, bpf,
	Cgroups, Networking, Alexei Starovoitov, Andrii Nakryiko,
	Daniel Borkmann, Martin KaFai Lau, Song Liu, Yonghong Song,
	Tejun Heo, Zefan Li, KP Singh, Johannes Weiner, Michal Hocko,
	Benjamin Tissoires, John Fastabend, Michal Koutny,
	Roman Gushchin, David Rientjes, Stanislav Fomichev, Shakeel Butt

On Thu, Aug 11, 2022 at 7:10 AM Yosry Ahmed <yosryahmed@google.com> wrote:
>
> On Wed, Aug 10, 2022 at 8:10 PM Hao Luo <haoluo@google.com> wrote:
> >
> > On Tue, Aug 9, 2022 at 11:38 AM Hao Luo <haoluo@google.com> wrote:
> > >
> > > On Tue, Aug 9, 2022 at 9:23 AM Alexei Starovoitov
> > > <alexei.starovoitov@gmail.com> wrote:
> > > >
> > > > On Mon, Aug 08, 2022 at 05:56:57PM -0700, Hao Luo wrote:
> > > > > On Mon, Aug 8, 2022 at 5:19 PM Andrii Nakryiko
> > > > > <andrii.nakryiko@gmail.com> wrote:
> > > > > >
> > > > > > On Fri, Aug 5, 2022 at 2:49 PM Hao Luo <haoluo@google.com> wrote:
> > > > > > >
> > > > > > > Cgroup_iter is a type of bpf_iter. It walks over cgroups in four modes:
> > > > > > >
> > > > > > >  - walking a cgroup's descendants in pre-order.
> > > > > > >  - walking a cgroup's descendants in post-order.
> > > > > > >  - walking a cgroup's ancestors.
> > > > > > >  - process only the given cgroup.
> > > > > > >
> > > [...]
> > > > > > > diff --git a/include/uapi/linux/bpf.h b/include/uapi/linux/bpf.h
> > > > > > > index 59a217ca2dfd..4d758b2e70d6 100644
> > > > > > > --- a/include/uapi/linux/bpf.h
> > > > > > > +++ b/include/uapi/linux/bpf.h
> > > > > > > @@ -87,10 +87,37 @@ struct bpf_cgroup_storage_key {
> > > > > > >         __u32   attach_type;            /* program attach type (enum bpf_attach_type) */
> > > > > > >  };
> > > > > > >
> > > > > > > +enum bpf_iter_order {
> > > > > > > +       BPF_ITER_ORDER_DEFAULT = 0,     /* default order. */
> > > > > >
> > > > > > why is this default order necessary? It just adds confusion (I had to
> > > > > > look up source code to know what is default order). I might have
> > > > > > missed some discussion, so if there is some very good reason, then
> > > > > > please document this in commit message. But I'd rather not do some
> > > > > > magical default order instead. We can set 0 to mean invalid and error
> > > > > > out, or just do SELF as the very first value (and if user forgot to
> > > > > > specify more fancy mode, they hopefully will quickly discover this in
> > > > > > their testing).
> > > > > >
> > > > >
> > > > > PRE/POST/UP are tree-specific orders. SELF applies on all iters and
> > > > > yields only a single object. How does task_iter express a non-self
> > > > > order? By non-self, I mean something like "I don't care about the
> > > > > order, just scan _all_ the objects". And this "don't care" order, IMO,
> > > > > may be the common case. I don't think everyone cares about walking
> > > > > order for tasks. The DEFAULT is intentionally put at the first value,
> > > > > so that if users don't care about order, they don't have to specify
> > > > > this field.
> > > > >
> > > > > If that sounds valid, maybe using "UNSPEC" instead of "DEFAULT" is better?
> > > >
> > > > I agree with Andrii.
> > > > This:
> > > > +       if (order == BPF_ITER_ORDER_DEFAULT)
> > > > +               order = BPF_ITER_DESCENDANTS_PRE;
> > > >
> > > > looks like an arbitrary choice.
> > > > imo
> > > > BPF_ITER_DESCENDANTS_PRE = 0,
> > > > would have been more obvious. No need to dig into definition of "default".
> > > >
> > > > UNSPEC = 0
> > > > is fine too if we want user to always be conscious about the order
> > > > and the kernel will error if that field is not initialized.
> > > > That would be my preference, since it will match the rest of uapi/bpf.h
> > > >
> > >
> > > Sounds good. In the next version, will use
> > >
> > > enum bpf_iter_order {
> > >         BPF_ITER_ORDER_UNSPEC = 0,
> > >         BPF_ITER_SELF_ONLY,             /* process only a single object. */
> > >         BPF_ITER_DESCENDANTS_PRE,       /* walk descendants in pre-order. */
> > >         BPF_ITER_DESCENDANTS_POST,      /* walk descendants in post-order. */
> > >         BPF_ITER_ANCESTORS_UP,          /* walk ancestors upward. */
> > > };
> > >
> >
> > Sigh, I find that having UNSPEC=0 and erroring out when seeing UNSPEC
> > doesn't work. Basically, if we have a non-iter prog and a cgroup_iter
> > prog written in the same source file, I can't use
> > bpf_object__attach_skeleton to attach them. Because the default
> > prog_attach_fn for iter initializes `order` to 0 (that is, UNSPEC),
> > which is going to be rejected by the kernel. In order to make
> > bpf_object__attach_skeleton work on cgroup_iter, I think I need to use
> > the following
> >
> > enum bpf_iter_order {

so first of all, this can't be called "bpf_iter_order", as it doesn't
apply to BPF iterators in general. I think this should be called
bpf_iter_cgroup_order (or maybe bpf_cgroup_iter_order), and if/when we
add the ability to iterate tasks within cgroups we'll just reuse enum
bpf_iter_cgroup_order as an extra parameter for the task iterator.

And with that future case in mind I do think that we should have 0
being "UNSPEC" case.

> >         BPF_ITER_DESCENDANTS_PRE,       /* walk descendants in pre-order. */
> >         BPF_ITER_DESCENDANTS_POST,      /* walk descendants in post-order. */
> >         BPF_ITER_ANCESTORS_UP,          /* walk ancestors upward. */
> >         BPF_ITER_SELF_ONLY,             /* process only a single object. */
> > };
> >
> > So that when calling bpf_object__attach_skeleton() on cgroup_iter, a
> > link can be generated and the generated link defaults to pre-order
> > walk on the whole hierarchy. Is there a better solution?
> >

I was actually surprised that we specify these additional parameters
at attach (LINK_CREATE) time, and not at bpf_iter_create() call time.
It seems more appropriate to allow specifying such runtime parameters
very late, when we create a specific instance of seq_file. But I guess
this was done because one of the initial motivations for iterators was
to be pinned in BPFFS and read as a file, so it was more convenient to
store such parameters upfront at link creation time to keep
BPF_OBJ_PIN simpler. I guess it makes sense, worst case you'll need to
create multiple bpf_link files, one for each cgroup hierarchy you'd
like to query with the same single BPF program.
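
To make the two stages concrete (sketch, names illustrative):

	struct bpf_link *link;
	char buf[4096];
	int iter_fd, len;

	/* runtime parameters are fixed at link creation (LINK_CREATE) ... */
	link = bpf_program__attach_iter(prog, &opts);

	/* ... while each seq_file instance is only created at read time */
	iter_fd = bpf_iter_create(bpf_link__fd(link));
	while ((len = read(iter_fd, buf, sizeof(buf))) > 0)
		; /* consume the formatted output */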

But I digress.

As for not being able to auto-attach the cgroup iterator: I think
that's sort of expected and in line with not being able to auto-attach
cgroup programs, as you need a cgroup FD at runtime. So even if you had
some reasonable default order, you would still need to specify the
target cgroup (either through an FD or an ID).

So... either don't do skeleton auto-attach, or let's teach libbpf code
to not auto-attach some iter types?

Alternatively, we could teach libbpf to parse some sort of cgroup
iterator spec, like:

SEC("iter/cgroup:/path/to/cgroup:descendants_pre")

But this approach won't work for a bunch of other parameterized
iterators (e.g., task iter, or map elem iter), so I'm hesitant to add
this to libbpf as generic functionality.

>
> I think this can be handled by userspace? We can attach the
> cgroup_iter separately first (and maybe we will need to set prog->link
> as well) so that bpf_object__attach_skeleton() doesn't try to attach
> it? I am following this pattern in the selftest in the final patch,
> although I think I might be missing setting prog->link, so I am
> wondering why there are no issues in that selftest which has the same
> scenario that you are talking about.
>
> I think such a pattern will need to be used anyway if the users need
> to set any non-default arguments for the cgroup_iter prog (like the
> selftest), right? The only case we are discussing here is the case
> where the user wants to attach the cgroup_iter with all default
> options (in which case the default order will fail).
> I agree that it might be inconvenient if the default/uninitialized
> options don't work for cgroup_iter, but Alexei pointed out that this
> matches other bpf uapis.
>
> My concern is that in the future we try to reuse enum bpf_iter_order
> to set ordering for other iterators, and then the
> default/uninitialized value (BPF_ITER_DESCENDANTS_PRE) doesn't make
> sense for that iterator (e.g. not a tree). In this case, the same
> problem that we are avoiding for cgroup_iter here will show up for
> that iterator, and we can't easily change it at this point because
> it's uapi.

Yep, valid concern, I agree.

>
>
> > > and explicitly list the values acceptable by cgroup_iter, error out if
> > > UNSPEC is detected.
> > >
> > > Also, following Andrii's comments, will change BPF_ITER_SELF to
> > > BPF_ITER_SELF_ONLY, which does seem a little bit explicit in
> > > comparison.
> > >
> > > > I applied the first 3 patches to ease respin.
> > >
> > > Thanks! This helps!
> > >
> > > > Thanks!

^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [PATCH bpf-next v7 4/8] bpf: Introduce cgroup iter
  2022-08-16  4:12               ` Andrii Nakryiko
@ 2022-08-16  4:19                 ` Andrii Nakryiko
  2022-08-16  6:52                 ` Hao Luo
  1 sibling, 0 replies; 23+ messages in thread
From: Andrii Nakryiko @ 2022-08-16  4:19 UTC (permalink / raw)
  To: Yosry Ahmed
  Cc: Hao Luo, Alexei Starovoitov, Linux Kernel Mailing List, bpf,
	Cgroups, Networking, Alexei Starovoitov, Andrii Nakryiko,
	Daniel Borkmann, Martin KaFai Lau, Song Liu, Yonghong Song,
	Tejun Heo, Zefan Li, KP Singh, Johannes Weiner, Michal Hocko,
	Benjamin Tissoires, John Fastabend, Michal Koutny,
	Roman Gushchin, David Rientjes, Stanislav Fomichev, Shakeel Butt

On Mon, Aug 15, 2022 at 9:12 PM Andrii Nakryiko
<andrii.nakryiko@gmail.com> wrote:
>
> On Thu, Aug 11, 2022 at 7:10 AM Yosry Ahmed <yosryahmed@google.com> wrote:
> >
> > On Wed, Aug 10, 2022 at 8:10 PM Hao Luo <haoluo@google.com> wrote:
> > >
> > > On Tue, Aug 9, 2022 at 11:38 AM Hao Luo <haoluo@google.com> wrote:
> > > >
> > > > On Tue, Aug 9, 2022 at 9:23 AM Alexei Starovoitov
> > > > <alexei.starovoitov@gmail.com> wrote:
> > > > >
> > > > > On Mon, Aug 08, 2022 at 05:56:57PM -0700, Hao Luo wrote:
> > > > > > On Mon, Aug 8, 2022 at 5:19 PM Andrii Nakryiko
> > > > > > <andrii.nakryiko@gmail.com> wrote:
> > > > > > >
> > > > > > > On Fri, Aug 5, 2022 at 2:49 PM Hao Luo <haoluo@google.com> wrote:
> > > > > > > >
> > > > > > > > Cgroup_iter is a type of bpf_iter. It walks over cgroups in four modes:
> > > > > > > >
> > > > > > > >  - walking a cgroup's descendants in pre-order.
> > > > > > > >  - walking a cgroup's descendants in post-order.
> > > > > > > >  - walking a cgroup's ancestors.
> > > > > > > >  - process only the given cgroup.
> > > > > > > >
> > > > [...]
> > > > > > > > diff --git a/include/uapi/linux/bpf.h b/include/uapi/linux/bpf.h
> > > > > > > > index 59a217ca2dfd..4d758b2e70d6 100644
> > > > > > > > --- a/include/uapi/linux/bpf.h
> > > > > > > > +++ b/include/uapi/linux/bpf.h
> > > > > > > > @@ -87,10 +87,37 @@ struct bpf_cgroup_storage_key {
> > > > > > > >         __u32   attach_type;            /* program attach type (enum bpf_attach_type) */
> > > > > > > >  };
> > > > > > > >
> > > > > > > > +enum bpf_iter_order {
> > > > > > > > +       BPF_ITER_ORDER_DEFAULT = 0,     /* default order. */
> > > > > > >
> > > > > > > why is this default order necessary? It just adds confusion (I had to
> > > > > > > look up source code to know what is default order). I might have
> > > > > > > missed some discussion, so if there is some very good reason, then
> > > > > > > please document this in commit message. But I'd rather not do some
> > > > > > > magical default order instead. We can set 0 to mean invalid and error
> > > > > > > out, or just do SELF as the very first value (and if user forgot to
> > > > > > > specify more fancy mode, they hopefully will quickly discover this in
> > > > > > > their testing).
> > > > > > >
> > > > > >
> > > > > > PRE/POST/UP are tree-specific orders. SELF applies on all iters and
> > > > > > yields only a single object. How does task_iter express a non-self
> > > > > > order? By non-self, I mean something like "I don't care about the
> > > > > > order, just scan _all_ the objects". And this "don't care" order, IMO,
> > > > > > may be the common case. I don't think everyone cares about walking
> > > > > > order for tasks. The DEFAULT is intentionally put at the first value,
> > > > > > so that if users don't care about order, they don't have to specify
> > > > > > this field.
> > > > > >
> > > > > > If that sounds valid, maybe using "UNSPEC" instead of "DEFAULT" is better?
> > > > >
> > > > > I agree with Andrii.
> > > > > This:
> > > > > +       if (order == BPF_ITER_ORDER_DEFAULT)
> > > > > +               order = BPF_ITER_DESCENDANTS_PRE;
> > > > >
> > > > > looks like an arbitrary choice.
> > > > > imo
> > > > > BPF_ITER_DESCENDANTS_PRE = 0,
> > > > > would have been more obvious. No need to dig into definition of "default".
> > > > >
> > > > > UNSPEC = 0
> > > > > is fine too if we want user to always be conscious about the order
> > > > > and the kernel will error if that field is not initialized.
> > > > > That would be my preference, since it will match the rest of uapi/bpf.h
> > > > >
> > > >
> > > > Sounds good. In the next version, will use
> > > >
> > > > enum bpf_iter_order {
> > > >         BPF_ITER_ORDER_UNSPEC = 0,
> > > >         BPF_ITER_SELF_ONLY,             /* process only a single object. */
> > > >         BPF_ITER_DESCENDANTS_PRE,       /* walk descendants in pre-order. */
> > > >         BPF_ITER_DESCENDANTS_POST,      /* walk descendants in post-order. */
> > > >         BPF_ITER_ANCESTORS_UP,          /* walk ancestors upward. */
> > > > };
> > > >
> > >
> > > Sigh, I find that having UNSPEC=0 and erroring out when seeing UNSPEC
> > > doesn't work. Basically, if we have a non-iter prog and a cgroup_iter
> > > prog written in the same source file, I can't use
> > > bpf_object__attach_skeleton to attach them. Because the default
> > > prog_attach_fn for iter initializes `order` to 0 (that is, UNSPEC),
> > > which is going to be rejected by the kernel. In order to make
> > > bpf_object__attach_skeleton work on cgroup_iter, I think I need to use
> > > the following
> > >
> > > enum bpf_iter_order {
>
> so first of all, this can't be called "bpf_iter_order" as it doesn't
> apply to BPF iterators in general. I think this should be called
> bpf_iter_cgroup_order (or maybe bpf_cgroup_iter_order) and if/when we
> add ability to iterate tasks within cgroups then we'll just reuse enum
> bpf_iter_cgroup_order as an extra parameter for task iterator.
>
> And with that future case in mind I do think that we should have 0
> being "UNSPEC" case.
>
> > >         BPF_ITER_DESCENDANTS_PRE,       /* walk descendants in pre-order. */
> > >         BPF_ITER_DESCENDANTS_POST,      /* walk descendants in post-order. */
> > >         BPF_ITER_ANCESTORS_UP,          /* walk ancestors upward. */
> > >         BPF_ITER_SELF_ONLY,             /* process only a single object. */
> > > };
> > >
> > > So that when calling bpf_object__attach_skeleton() on cgroup_iter, a
> > > link can be generated and the generated link defaults to pre-order
> > > walk on the whole hierarchy. Is there a better solution?
> > >
>
> I was actually surprised that we specify these additional parameters
> at attach (LINK_CREATE) time, and not at bpf_iter_create() call time.
> It seems more appropriate to allow to specify such runtime parameters
> very late, when we create a specific instance of seq_file. But I guess
> this was done because one of the initial motivations for iterators was
> to be pinned in BPFFS and read as a file, so it was more convenient to
> store such parameters upfront at link creation time to keep
> BPF_OBJ_PIN simpler. I guess it makes sense, worst case you'll need to
> create multiple bpf_link files, one for each cgroup hierarchy you'd
> like to query with the same single BPF program.
>
> But I digress.
>
> As for not being able to auto-attach cgroup iterator. I think that's
> sort of expected and is in line with not being able to auto-attach
> cgroup programs, as you need cgroup FD at runtime. So even if you had
> some reasonable default order, you still would need to specify target
> cgroup (either through FD or ID).
>
> So... either don't do skeleton auto-attach, or let's teach libbpf code
> to not auto-attach some iter types?
>
> Alternatively, we could teach libbpf to parse some sort of cgroup
> iterator spec, like:
>
> SEC("iter/cgroup:/path/to/cgroup:descendants_pre")
>
> But this approach won't work for a bunch of other parameterized
> iterators (e.g., task iter, or map elem iter), so I'm hesitant about
> adding this to libbpf as a generic functionality.

As yet another alternative (given you seem to default to the root
cgroup when cgroup_fd or cgroup_id is not specified), we can teach
libbpf to just specify BPF_ITER_DESCENDANTS_PRE (or whichever mode
seems like the best default) only for the auto-attach case, if that
makes the most sense.

>
> >
> > I think this can be handled by userspace? We can attach the
> > cgroup_iter separately first (and maybe we will need to set prog->link
> > as well) so that bpf_object__attach_skeleton() doesn't try to attach
> > it? I am following this pattern in the selftest in the final patch,
> > although I think I might be missing setting prog->link, so I am
> > wondering why there are no issues in that selftest which has the same
> > scenario that you are talking about.
> >
> > I think such a pattern will need to be used anyway if the users need
> > to set any non-default arguments for the cgroup_iter prog (like the
> > selftest), right? The only case we are discussing here is the case
> > where the user wants to attach the cgroup_iter with all default
> > options (in which case the default order will fail).
> > I agree that it might be inconvenient if the default/uninitialized
> > options don't work for cgroup_iter, but Alexei pointed out that this
> > matches other bpf uapis.
> >
> > My concern is that in the future we try to reuse enum bpf_iter_order
> > to set ordering for other iterators, and then the
> > default/uninitialized value (BPF_ITER_DESCENDANTS_PRE) doesn't make
> > sense for that iterator (e.g. not a tree). In this case, the same
> > problem that we are avoiding for cgroup_iter here will show up for
> > that iterator, and we can't easily change it at this point because
> > it's uapi.
>
> Yep, valid concern, I agree.
>
> >
> >
> > > > and explicitly list the values acceptable by cgroup_iter, error out if
> > > > UNSPEC is detected.
> > > >
> > > > Also, following Andrii's comments, will change BPF_ITER_SELF to
> > > > BPF_ITER_SELF_ONLY, which does seem a little bit explicit in
> > > > comparison.
> > > >
> > > > > I applied the first 3 patches to ease respin.
> > > >
> > > > Thanks! This helps!
> > > >
> > > > > Thanks!

^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [PATCH bpf-next v7 4/8] bpf: Introduce cgroup iter
  2022-08-16  4:12               ` Andrii Nakryiko
  2022-08-16  4:19                 ` Andrii Nakryiko
@ 2022-08-16  6:52                 ` Hao Luo
  2022-08-16 17:17                   ` Andrii Nakryiko
  1 sibling, 1 reply; 23+ messages in thread
From: Hao Luo @ 2022-08-16  6:52 UTC (permalink / raw)
  To: Andrii Nakryiko
  Cc: Yosry Ahmed, Alexei Starovoitov, Linux Kernel Mailing List, bpf,
	Cgroups, Networking, Alexei Starovoitov, Andrii Nakryiko,
	Daniel Borkmann, Martin KaFai Lau, Song Liu, Yonghong Song,
	Tejun Heo, Zefan Li, KP Singh, Johannes Weiner, Michal Hocko,
	Benjamin Tissoires, John Fastabend, Michal Koutny,
	Roman Gushchin, David Rientjes, Stanislav Fomichev, Shakeel Butt

On Mon, Aug 15, 2022 at 9:13 PM Andrii Nakryiko
<andrii.nakryiko@gmail.com> wrote:
>
> On Thu, Aug 11, 2022 at 7:10 AM Yosry Ahmed <yosryahmed@google.com> wrote:
> >
> > On Wed, Aug 10, 2022 at 8:10 PM Hao Luo <haoluo@google.com> wrote:
> > >
> > > On Tue, Aug 9, 2022 at 11:38 AM Hao Luo <haoluo@google.com> wrote:
> > > >
> > > > On Tue, Aug 9, 2022 at 9:23 AM Alexei Starovoitov
> > > > <alexei.starovoitov@gmail.com> wrote:
> > > > >
> > > > > On Mon, Aug 08, 2022 at 05:56:57PM -0700, Hao Luo wrote:
> > > > > > On Mon, Aug 8, 2022 at 5:19 PM Andrii Nakryiko
> > > > > > <andrii.nakryiko@gmail.com> wrote:
> > > > > > >
> > > > > > > On Fri, Aug 5, 2022 at 2:49 PM Hao Luo <haoluo@google.com> wrote:
> > > > > > > >
> > > > > > > > Cgroup_iter is a type of bpf_iter. It walks over cgroups in four modes:
> > > > > > > >
> > > > > > > >  - walking a cgroup's descendants in pre-order.
> > > > > > > >  - walking a cgroup's descendants in post-order.
> > > > > > > >  - walking a cgroup's ancestors.
> > > > > > > >  - process only the given cgroup.
> > > > > > > >
> > > > [...]
> > > > > > > > diff --git a/include/uapi/linux/bpf.h b/include/uapi/linux/bpf.h
> > > > > > > > index 59a217ca2dfd..4d758b2e70d6 100644
> > > > > > > > --- a/include/uapi/linux/bpf.h
> > > > > > > > +++ b/include/uapi/linux/bpf.h
> > > > > > > > @@ -87,10 +87,37 @@ struct bpf_cgroup_storage_key {
> > > > > > > >         __u32   attach_type;            /* program attach type (enum bpf_attach_type) */
> > > > > > > >  };
> > > > > > > >
> > > > > > > > +enum bpf_iter_order {
> > > > > > > > +       BPF_ITER_ORDER_DEFAULT = 0,     /* default order. */
> > > > > > >
> > > > > > > why is this default order necessary? It just adds confusion (I had to
> > > > > > > look up source code to know what is default order). I might have
> > > > > > > missed some discussion, so if there is some very good reason, then
> > > > > > > please document this in commit message. But I'd rather not do some
> > > > > > > magical default order instead. We can set 0 to mean invalid and error
> > > > > > > out, or just do SELF as the very first value (and if user forgot to
> > > > > > > specify more fancy mode, they hopefully will quickly discover this in
> > > > > > > their testing).
> > > > > > >
> > > > > >
> > > > > > PRE/POST/UP are tree-specific orders. SELF applies on all iters and
> > > > > > yields only a single object. How does task_iter express a non-self
> > > > > > order? By non-self, I mean something like "I don't care about the
> > > > > > order, just scan _all_ the objects". And this "don't care" order, IMO,
> > > > > > may be the common case. I don't think everyone cares about walking
> > > > > > order for tasks. The DEFAULT is intentionally put at the first value,
> > > > > > so that if users don't care about order, they don't have to specify
> > > > > > this field.
> > > > > >
> > > > > > If that sounds valid, maybe using "UNSPEC" instead of "DEFAULT" is better?
> > > > >
> > > > > I agree with Andrii.
> > > > > This:
> > > > > +       if (order == BPF_ITER_ORDER_DEFAULT)
> > > > > +               order = BPF_ITER_DESCENDANTS_PRE;
> > > > >
> > > > > looks like an arbitrary choice.
> > > > > imo
> > > > > BPF_ITER_DESCENDANTS_PRE = 0,
> > > > > would have been more obvious. No need to dig into definition of "default".
> > > > >
> > > > > UNSPEC = 0
> > > > > is fine too if we want user to always be conscious about the order
> > > > > and the kernel will error if that field is not initialized.
> > > > > That would be my preference, since it will match the rest of uapi/bpf.h
> > > > >
> > > >
> > > > Sounds good. In the next version, will use
> > > >
> > > > enum bpf_iter_order {
> > > >         BPF_ITER_ORDER_UNSPEC = 0,
> > > >         BPF_ITER_SELF_ONLY,             /* process only a single object. */
> > > >         BPF_ITER_DESCENDANTS_PRE,       /* walk descendants in pre-order. */
> > > >         BPF_ITER_DESCENDANTS_POST,      /* walk descendants in post-order. */
> > > >         BPF_ITER_ANCESTORS_UP,          /* walk ancestors upward. */
> > > > };
> > > >
> > >
> > > Sigh, I find that having UNSPEC=0 and erroring out when seeing UNSPEC
> > > doesn't work. Basically, if we have a non-iter prog and a cgroup_iter
> > > prog written in the same source file, I can't use
> > > bpf_object__attach_skeleton to attach them. Because the default
> > > prog_attach_fn for iter initializes `order` to 0 (that is, UNSPEC),
> > > which is going to be rejected by the kernel. In order to make
> > > bpf_object__attach_skeleton work on cgroup_iter, I think I need to use
> > > the following
> > >
> > > enum bpf_iter_order {
>
> so first of all, this can't be called "bpf_iter_order" as it doesn't
> apply to BPF iterators in general. I think this should be called
> bpf_iter_cgroup_order (or maybe bpf_cgroup_iter_order) and if/when we
> add ability to iterate tasks within cgroups then we'll just reuse enum
> bpf_iter_cgroup_order as an extra parameter for task iterator.
>
> And with that future case in mind I do think that we should have 0
> being "UNSPEC" case.
>

Ok.

> > >         BPF_ITER_DESCENDANTS_PRE,       /* walk descendants in pre-order. */
> > >         BPF_ITER_DESCENDANTS_POST,      /* walk descendants in post-order. */
> > >         BPF_ITER_ANCESTORS_UP,          /* walk ancestors upward. */
> > >         BPF_ITER_SELF_ONLY,             /* process only a single object. */
> > > };
> > >
> > > So that when calling bpf_object__attach_skeleton() on cgroup_iter, a
> > > link can be generated and the generated link defaults to pre-order
> > > walk on the whole hierarchy. Is there a better solution?
> > >
>
> I was actually surprised that we specify these additional parameters
> at attach (LINK_CREATE) time, and not at bpf_iter_create() call time.
> It seems more appropriate to allow to specify such runtime parameters
> very late, when we create a specific instance of seq_file. But I guess
> this was done because one of the initial motivations for iterators was
> to be pinned in BPFFS and read as a file, so it was more convenient to
> store such parameters upfront at link creation time to keep
> BPF_OBJ_PIN simpler. I guess it makes sense, worst case you'll need to
> create multiple bpf_link files, one for each cgroup hierarchy you'd
> like to query with the same single BPF program.
>

Right. That was the design from the beginning.

> But I digress.
>
> As for not being able to auto-attach cgroup iterator. I think that's
> sort of expected and is in line with not being able to auto-attach
> cgroup programs, as you need cgroup FD at runtime. So even if you had
> some reasonable default order, you still would need to specify target
> cgroup (either through FD or ID).
>
> So... either don't do skeleton auto-attach,

This is not okay IMHO. It would be very inconvenient to use.

> or let's teach libbpf code
> to not auto-attach some iter types?
>

I'm thinking of two options:

1. Maybe we could add libbpf APIs for disabling auto-attach, just like
prog autoload, e.g.:

bpf_program__set_auto_attach()
bpf_program__get_auto_attach(...)

2. In auto-attach, if the program's link is already set, attaching is
skipped. So we could just attach manually, specifying the order, and
set the link in the skeleton. This way, no change to libbpf is needed.
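
Concretely, option 2 would look roughly like this (sketch; the
skeleton and prog names are placeholders):

	/* attach manually with an explicit order; since the link is now
	 * set, bpf_object__attach_skeleton() will skip this prog */
	skel->links.cgroup_iter_prog =
		bpf_program__attach_iter(skel->progs.cgroup_iter_prog, &opts);
	if (!skel->links.cgroup_iter_prog)
		goto cleanup;

	/* attaches the remaining (non-iter) programs */
	err = my_obj__attach(skel);

Does this sound good to you?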

> Alternatively, we could teach libbpf to parse some sort of cgroup
> iterator spec, like:
>
> SEC("iter/cgroup:/path/to/cgroup:descendants_pre")
>
> But this approach won't work for a bunch of other parameterized
> iterators (e.g., task iter, or map elem iter), so I'm hesitant about
> adding this to libbpf as a generic functionality.
>

Agree. Let's explore other options first.

> >
> > I think this can be handled by userspace? We can attach the
> > cgroup_iter separately first (and maybe we will need to set prog->link
> > as well) so that bpf_object__attach_skeleton() doesn't try to attach
> > it? I am following this pattern in the selftest in the final patch,
> > although I think I might be missing setting prog->link, so I am
> > wondering why there are no issues in that selftest which has the same
> > scenario that you are talking about.
> >
> > I think such a pattern will need to be used anyway if the users need
> > to set any non-default arguments for the cgroup_iter prog (like the
> > selftest), right? The only case we are discussing here is the case
> > where the user wants to attach the cgroup_iter with all default
> > options (in which case the default order will fail).
> > I agree that it might be inconvenient if the default/uninitialized
> > options don't work for cgroup_iter, but Alexei pointed out that this
> > matches other bpf uapis.
> >
> > My concern is that in the future we try to reuse enum bpf_iter_order
> > to set ordering for other iterators, and then the
> > default/uninitialized value (BPF_ITER_DESCENDANTS_PRE) doesn't make
> > sense for that iterator (e.g. not a tree). In this case, the same
> > problem that we are avoiding for cgroup_iter here will show up for
> > that iterator, and we can't easily change it at this point because
> > it's uapi.
>
> Yep, valid concern, I agree.
>

Andrii, other than auto-attach, do you have any concerns about the
rest of this patchset?

> >
> >
> > > > and explicitly list the values acceptable by cgroup_iter, error out if
> > > > UNSPEC is detected.
> > > >
> > > > Also, following Andrii's comments, will change BPF_ITER_SELF to
> > > > BPF_ITER_SELF_ONLY, which does seem a little bit explicit in
> > > > comparison.
> > > >
> > > > > I applied the first 3 patches to ease respin.
> > > >
> > > > Thanks! This helps!
> > > >
> > > > > Thanks!

^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [PATCH bpf-next v7 4/8] bpf: Introduce cgroup iter
  2022-08-16  6:52                 ` Hao Luo
@ 2022-08-16 17:17                   ` Andrii Nakryiko
  2022-08-16 17:22                     ` Hao Luo
  0 siblings, 1 reply; 23+ messages in thread
From: Andrii Nakryiko @ 2022-08-16 17:17 UTC (permalink / raw)
  To: Hao Luo
  Cc: Yosry Ahmed, Alexei Starovoitov, Linux Kernel Mailing List, bpf,
	Cgroups, Networking, Alexei Starovoitov, Andrii Nakryiko,
	Daniel Borkmann, Martin KaFai Lau, Song Liu, Yonghong Song,
	Tejun Heo, Zefan Li, KP Singh, Johannes Weiner, Michal Hocko,
	Benjamin Tissoires, John Fastabend, Michal Koutny,
	Roman Gushchin, David Rientjes, Stanislav Fomichev, Shakeel Butt

On Mon, Aug 15, 2022 at 11:52 PM Hao Luo <haoluo@google.com> wrote:
>
> On Mon, Aug 15, 2022 at 9:13 PM Andrii Nakryiko
> <andrii.nakryiko@gmail.com> wrote:
> >
> > On Thu, Aug 11, 2022 at 7:10 AM Yosry Ahmed <yosryahmed@google.com> wrote:
> > >
> > > On Wed, Aug 10, 2022 at 8:10 PM Hao Luo <haoluo@google.com> wrote:
> > > >
> > > > On Tue, Aug 9, 2022 at 11:38 AM Hao Luo <haoluo@google.com> wrote:
> > > > >
> > > > > On Tue, Aug 9, 2022 at 9:23 AM Alexei Starovoitov
> > > > > <alexei.starovoitov@gmail.com> wrote:
> > > > > >
> > > > > > On Mon, Aug 08, 2022 at 05:56:57PM -0700, Hao Luo wrote:
> > > > > > > On Mon, Aug 8, 2022 at 5:19 PM Andrii Nakryiko
> > > > > > > <andrii.nakryiko@gmail.com> wrote:
> > > > > > > >
> > > > > > > > On Fri, Aug 5, 2022 at 2:49 PM Hao Luo <haoluo@google.com> wrote:
> > > > > > > > >
> > > > > > > > > Cgroup_iter is a type of bpf_iter. It walks over cgroups in four modes:
> > > > > > > > >
> > > > > > > > >  - walking a cgroup's descendants in pre-order.
> > > > > > > > >  - walking a cgroup's descendants in post-order.
> > > > > > > > >  - walking a cgroup's ancestors.
> > > > > > > > >  - process only the given cgroup.
> > > > > > > > >
> > > > > [...]
> > > > > > > > > diff --git a/include/uapi/linux/bpf.h b/include/uapi/linux/bpf.h
> > > > > > > > > index 59a217ca2dfd..4d758b2e70d6 100644
> > > > > > > > > --- a/include/uapi/linux/bpf.h
> > > > > > > > > +++ b/include/uapi/linux/bpf.h
> > > > > > > > > @@ -87,10 +87,37 @@ struct bpf_cgroup_storage_key {
> > > > > > > > >         __u32   attach_type;            /* program attach type (enum bpf_attach_type) */
> > > > > > > > >  };
> > > > > > > > >
> > > > > > > > > +enum bpf_iter_order {
> > > > > > > > > +       BPF_ITER_ORDER_DEFAULT = 0,     /* default order. */
> > > > > > > >
> > > > > > > > why is this default order necessary? It just adds confusion (I had to
> > > > > > > > look up the source code to know what the default order is). I might have
> > > > > > > > missed some discussion, so if there is some very good reason, please
> > > > > > > > document it in the commit message. But I'd rather we not do some
> > > > > > > > magical default order. We can set 0 to mean invalid and error
> > > > > > > > out, or just make SELF the very first value (and if users forget to
> > > > > > > > specify a fancier mode, they will hopefully discover this quickly in
> > > > > > > > their testing).
> > > > > > > >
> > > > > > >
> > > > > > > PRE/POST/UP are tree-specific orders. SELF applies to all iters and
> > > > > > > yields only a single object. How does task_iter express a non-self
> > > > > > > order? By non-self, I mean something like "I don't care about the
> > > > > > > order, just scan _all_ the objects". And this "don't care" order, IMO,
> > > > > > > may be the common case. I don't think everyone cares about the walking
> > > > > > > order for tasks. DEFAULT is intentionally put as the first value,
> > > > > > > so that users who don't care about the order don't have to specify
> > > > > > > this field.
> > > > > > >
> > > > > > > If that sounds valid, maybe using "UNSPEC" instead of "DEFAULT" is better?
> > > > > >
> > > > > > I agree with Andrii.
> > > > > > This:
> > > > > > +       if (order == BPF_ITER_ORDER_DEFAULT)
> > > > > > +               order = BPF_ITER_DESCENDANTS_PRE;
> > > > > >
> > > > > > looks like an arbitrary choice.
> > > > > > imo
> > > > > > BPF_ITER_DESCENDANTS_PRE = 0,
> > > > > > would have been more obvious. No need to dig into the definition of "default".
> > > > > >
> > > > > > UNSPEC = 0
> > > > > > is fine too if we want the user to always be conscious of the order,
> > > > > > with the kernel erroring out if that field is not initialized.
> > > > > > That would be my preference, since it matches the rest of uapi/bpf.h.
> > > > > >
> > > > >
> > > > > Sounds good. In the next version, will use
> > > > >
> > > > > enum bpf_iter_order {
> > > > >         BPF_ITER_ORDER_UNSPEC = 0,
> > > > >         BPF_ITER_SELF_ONLY,             /* process only a single object. */
> > > > >         BPF_ITER_DESCENDANTS_PRE,       /* walk descendants in pre-order. */
> > > > >         BPF_ITER_DESCENDANTS_POST,      /* walk descendants in post-order. */
> > > > >         BPF_ITER_ANCESTORS_UP,          /* walk ancestors upward. */
> > > > > };
> > > > >
> > > >
> > > > Sigh, I find that having UNSPEC=0 and erroring out on UNSPEC
> > > > doesn't work. Basically, if we have a non-iter prog and a cgroup_iter
> > > > prog written in the same source file, I can't use
> > > > bpf_object__attach_skeleton to attach them, because the default
> > > > prog_attach_fn for iters initializes `order` to 0 (that is, UNSPEC),
> > > > which the kernel will reject. To make
> > > > bpf_object__attach_skeleton work on cgroup_iter, I think I need to use
> > > > the following:
> > > >
> > > > enum bpf_iter_order {
> >
> > So first of all, this can't be called "bpf_iter_order", as it doesn't
> > apply to BPF iterators in general. I think this should be called
> > bpf_iter_cgroup_order (or maybe bpf_cgroup_iter_order), and if/when we
> > add the ability to iterate tasks within cgroups, we'll just reuse enum
> > bpf_iter_cgroup_order as an extra parameter for the task iterator.
> >
> > And with that future case in mind I do think that we should have 0
> > being "UNSPEC" case.
> >
>
> Ok.
>
> > > >         BPF_ITER_DESCENDANTS_PRE,       /* walk descendants in pre-order. */
> > > >         BPF_ITER_DESCENDANTS_POST,      /* walk descendants in post-order. */
> > > >         BPF_ITER_ANCESTORS_UP,          /* walk ancestors upward. */
> > > >         BPF_ITER_SELF_ONLY,             /* process only a single object. */
> > > > };
> > > >
> > > > So that when calling bpf_object__attach_skeleton() on cgroup_iter, a
> > > > link can be generated, and the generated link defaults to a pre-order
> > > > walk of the whole hierarchy. Is there a better solution?
> > > >
> >
> > I was actually surprised that we specify these additional parameters
> > at attach (LINK_CREATE) time, and not at bpf_iter_create() call time.
> > It seems more appropriate to allow specifying such runtime parameters
> > very late, when we create a specific instance of the seq_file. But I
> > guess this was done because one of the initial motivations for
> > iterators was to be pinned in BPFFS and read as a file, so it was more
> > convenient to store such parameters upfront at link creation time to
> > keep BPF_OBJ_PIN simpler. I guess it makes sense; worst case, you'll
> > need to create multiple bpf_link files, one for each cgroup hierarchy
> > you'd like to query with the same single BPF program.
> >
>
> Right. That was the design from the beginning.
>
> > But I digress.
> >
> > As for not being able to auto-attach the cgroup iterator: I think
> > that's sort of expected and in line with not being able to auto-attach
> > cgroup programs, as you need a cgroup FD at runtime. So even if you had
> > some reasonable default order, you would still need to specify the
> > target cgroup (either through an FD or an ID).
> >
> > So... either don't do skeleton auto-attach,
>
> This is not okay IMHO. It would be very inconvenient to use.
>
> > or let's teach libbpf code
> > to not auto-attach some iter types?
> >
>
> I'm thinking of two options:
>
> 1. Maybe we could add libbpf APIs for disabling auto-attach, just like
> prog autoload, e.g.:
>
> bpf_program__set_auto_attach()
> bpf_program__get_auto_attach(...)

Indeed, to give more flexibility we can also add
bpf_program__set_autoattach() and bpf_program__autoattach() (note: no
underscore in "autoattach" and no "get" prefix, to be consistent with
the autocreate and autoload getters and setters). It's a pretty simple
change, so please send a separate patch for this (soon-ish would be
great, to make it into the final 1.0).
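
A minimal sketch of what those accessors could look like, mirroring the
existing autoload pair (exact signatures to be settled in the actual
patch):

	/* Sketch only: modeled on bpf_program__autoload() /
	 * bpf_program__set_autoload(); not the final interface.
	 */
	LIBBPF_API bool bpf_program__autoattach(const struct bpf_program *prog);
	LIBBPF_API void bpf_program__set_autoattach(struct bpf_program *prog,
						    bool autoattach);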
>
> 2. In auto-attach, if the program's link is already set, the attach
> will be skipped. So we could just attach manually, specifying the
> order, and set the link in the skeleton. This way, no change in libbpf
> is needed. Does this sound good to you?
>

Yes, this is another way and is fully supported. It might be a bit less
convenient than set_autoattach in some cases, though, so set_autoattach
still makes sense, IMO.
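
For example, something along these lines (a sketch only: the skel
skeleton, its dump_cgroup program, and the cgroup_fd setup are made-up
names, and the link_info layout is the one proposed in this series):

	/* assumes <bpf/libbpf.h>; skel is a generated skeleton instance */
	union bpf_iter_link_info linfo = {};
	LIBBPF_OPTS(bpf_iter_attach_opts, opts);
	struct bpf_link *link;

	/* parameterize the iterator explicitly at attach time */
	linfo.cgroup.cgroup_fd = cgroup_fd;	/* target cgroup */
	linfo.cgroup.order = BPF_ITER_DESCENDANTS_PRE;
	opts.link_info = &linfo;
	opts.link_info_len = sizeof(linfo);

	link = bpf_program__attach_iter(skel->progs.dump_cgroup, &opts);
	if (!link)
		return -1;	/* errno is set by libbpf on failure */
	/* with the link already set, attach_skeleton() skips this prog */
	skel->links.dump_cgroup = link;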

> > Alternatively, we could teach libbpf to parse some sort of cgroup
> > iterator spec, like:
> >
> > SEC("iter/cgroup:/path/to/cgroup:descendants_pre")
> >
> > But this approach won't work for a bunch of other parameterized
> > iterators (e.g., task iter, or map elem iter), so I'm hesitant about
> > adding this to libbpf as generic functionality.
> >
>
> Agree. Let's explore other options first.
>
> > >
> > > I think this can be handled by userspace? We can attach the
> > > cgroup_iter separately first (and maybe we will need to set prog->link
> > > as well) so that bpf_object__attach_skeleton() doesn't try to attach
> > > it. I am following this pattern in the selftest in the final patch,
> > > although I think I might be missing setting prog->link, so I am
> > > wondering why there are no issues in that selftest, which has the same
> > > scenario you are talking about.
> > >
> > > I think such a pattern will need to be used anyway if users need
> > > to set any non-default arguments for the cgroup_iter prog (like the
> > > selftest does), right? The only case we are discussing here is
> > > where the user wants to attach the cgroup_iter with all-default
> > > options (in which case the default order will fail).
> > > I agree that it might be inconvenient if the default/uninitialized
> > > options don't work for cgroup_iter, but Alexei pointed out that this
> > > matches other bpf uapis.
> > >
> > > My concern is that, in the future, we may try to reuse enum
> > > bpf_iter_order to set the ordering for other iterators, and the
> > > default/uninitialized value (BPF_ITER_DESCENDANTS_PRE) may not make
> > > sense for that iterator (e.g. one that is not a tree). In that case,
> > > the same problem we are avoiding for cgroup_iter here will show up
> > > for that iterator, and by then we can't easily change it because
> > > it's uapi.
> >
> > Yep, valid concern, I agree.
> >
>
> Andrii, other than the auto-attach issue, do you have any concerns
> about the rest of this patchset?

Well, I was mostly looking at the UAPIs and didn't check the iteration
logic itself. But plenty of others did, and I trust they did a good job
at that. So no, no other concerns.

>
> > >
> > >
> > > > > and explicitly list the values acceptable by cgroup_iter, error out if
> > > > > UNSPEC is detected.
> > > > >
> > > > > Also, following Andrii's comments, will change BPF_ITER_SELF to
> > > > > BPF_ITER_SELF_ONLY, which does seem a little more explicit in
> > > > > comparison.
> > > > >
> > > > > > I applied the first 3 patches to ease respin.
> > > > >
> > > > > Thanks! This helps!
> > > > >
> > > > > > Thanks!

^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [PATCH bpf-next v7 4/8] bpf: Introduce cgroup iter
  2022-08-16 17:17                   ` Andrii Nakryiko
@ 2022-08-16 17:22                     ` Hao Luo
  0 siblings, 0 replies; 23+ messages in thread
From: Hao Luo @ 2022-08-16 17:22 UTC (permalink / raw)
  To: Andrii Nakryiko
  Cc: Yosry Ahmed, Alexei Starovoitov, Linux Kernel Mailing List, bpf,
	Cgroups, Networking, Alexei Starovoitov, Andrii Nakryiko,
	Daniel Borkmann, Martin KaFai Lau, Song Liu, Yonghong Song,
	Tejun Heo, Zefan Li, KP Singh, Johannes Weiner, Michal Hocko,
	Benjamin Tissoires, John Fastabend, Michal Koutny,
	Roman Gushchin, David Rientjes, Stanislav Fomichev, Shakeel Butt

On Tue, Aug 16, 2022 at 10:17 AM Andrii Nakryiko
<andrii.nakryiko@gmail.com> wrote:
>
> On Mon, Aug 15, 2022 at 11:52 PM Hao Luo <haoluo@google.com> wrote:
> >
> > On Mon, Aug 15, 2022 at 9:13 PM Andrii Nakryiko
> > <andrii.nakryiko@gmail.com> wrote:
> > >
> > > On Thu, Aug 11, 2022 at 7:10 AM Yosry Ahmed <yosryahmed@google.com> wrote:
> > > >
> > > > On Wed, Aug 10, 2022 at 8:10 PM Hao Luo <haoluo@google.com> wrote:
> > > > >
> > > > > On Tue, Aug 9, 2022 at 11:38 AM Hao Luo <haoluo@google.com> wrote:
> > > > > >
> > > > > > On Tue, Aug 9, 2022 at 9:23 AM Alexei Starovoitov
> > > > > > <alexei.starovoitov@gmail.com> wrote:
> > > > > > >
> > > > > > > On Mon, Aug 08, 2022 at 05:56:57PM -0700, Hao Luo wrote:
> > > > > > > > On Mon, Aug 8, 2022 at 5:19 PM Andrii Nakryiko
> > > > > > > > <andrii.nakryiko@gmail.com> wrote:
> > > > > > > > >
> > > > > > > > > On Fri, Aug 5, 2022 at 2:49 PM Hao Luo <haoluo@google.com> wrote:
> > > > > > > > > >
> > > > > > > > > > Cgroup_iter is a type of bpf_iter. It walks over cgroups in four modes:
> > > > > > > > > >
> > > > > > > > > >  - walking a cgroup's descendants in pre-order.
> > > > > > > > > >  - walking a cgroup's descendants in post-order.
> > > > > > > > > >  - walking a cgroup's ancestors.
> > > > > > > > > >  - process only the given cgroup.
> > > > > > > > > >
> > > > > > [...]
> > > > > > > > > > diff --git a/include/uapi/linux/bpf.h b/include/uapi/linux/bpf.h
> > > > > > > > > > index 59a217ca2dfd..4d758b2e70d6 100644
> > > > > > > > > > --- a/include/uapi/linux/bpf.h
> > > > > > > > > > +++ b/include/uapi/linux/bpf.h
> > > > > > > > > > @@ -87,10 +87,37 @@ struct bpf_cgroup_storage_key {
> > > > > > > > > >         __u32   attach_type;            /* program attach type (enum bpf_attach_type) */
> > > > > > > > > >  };
> > > > > > > > > >
> > > > > > > > > > +enum bpf_iter_order {
> > > > > > > > > > +       BPF_ITER_ORDER_DEFAULT = 0,     /* default order. */
> > > > > > > > >
> > > > > > > > > why is this default order necessary? It just adds confusion (I had to
> > > > > > > > > look up the source code to know what the default order is). I might have
> > > > > > > > > missed some discussion, so if there is some very good reason, please
> > > > > > > > > document it in the commit message. But I'd rather we not do some
> > > > > > > > > magical default order. We can set 0 to mean invalid and error
> > > > > > > > > out, or just make SELF the very first value (and if users forget to
> > > > > > > > > specify a fancier mode, they will hopefully discover this quickly in
> > > > > > > > > their testing).
> > > > > > > > >
> > > > > > > >
> > > > > > > > PRE/POST/UP are tree-specific orders. SELF applies to all iters and
> > > > > > > > yields only a single object. How does task_iter express a non-self
> > > > > > > > order? By non-self, I mean something like "I don't care about the
> > > > > > > > order, just scan _all_ the objects". And this "don't care" order, IMO,
> > > > > > > > may be the common case. I don't think everyone cares about the walking
> > > > > > > > order for tasks. DEFAULT is intentionally put as the first value,
> > > > > > > > so that users who don't care about the order don't have to specify
> > > > > > > > this field.
> > > > > > > >
> > > > > > > > If that sounds valid, maybe using "UNSPEC" instead of "DEFAULT" is better?
> > > > > > >
> > > > > > > I agree with Andrii.
> > > > > > > This:
> > > > > > > +       if (order == BPF_ITER_ORDER_DEFAULT)
> > > > > > > +               order = BPF_ITER_DESCENDANTS_PRE;
> > > > > > >
> > > > > > > looks like an arbitrary choice.
> > > > > > > imo
> > > > > > > BPF_ITER_DESCENDANTS_PRE = 0,
> > > > > > > would have been more obvious. No need to dig into the definition of "default".
> > > > > > >
> > > > > > > UNSPEC = 0
> > > > > > > is fine too if we want the user to always be conscious of the order,
> > > > > > > with the kernel erroring out if that field is not initialized.
> > > > > > > That would be my preference, since it matches the rest of uapi/bpf.h.
> > > > > > >
> > > > > >
> > > > > > Sounds good. In the next version, will use
> > > > > >
> > > > > > enum bpf_iter_order {
> > > > > >         BPF_ITER_ORDER_UNSPEC = 0,
> > > > > >         BPF_ITER_SELF_ONLY,             /* process only a single object. */
> > > > > >         BPF_ITER_DESCENDANTS_PRE,       /* walk descendants in pre-order. */
> > > > > >         BPF_ITER_DESCENDANTS_POST,      /* walk descendants in post-order. */
> > > > > >         BPF_ITER_ANCESTORS_UP,          /* walk ancestors upward. */
> > > > > > };
> > > > > >
> > > > >
> > > > > Sigh, I find that having UNSPEC=0 and erroring out on UNSPEC
> > > > > doesn't work. Basically, if we have a non-iter prog and a cgroup_iter
> > > > > prog written in the same source file, I can't use
> > > > > bpf_object__attach_skeleton to attach them, because the default
> > > > > prog_attach_fn for iters initializes `order` to 0 (that is, UNSPEC),
> > > > > which the kernel will reject. To make
> > > > > bpf_object__attach_skeleton work on cgroup_iter, I think I need to use
> > > > > the following:
> > > > >
> > > > > enum bpf_iter_order {
> > >
> > > So first of all, this can't be called "bpf_iter_order", as it doesn't
> > > apply to BPF iterators in general. I think this should be called
> > > bpf_iter_cgroup_order (or maybe bpf_cgroup_iter_order), and if/when we
> > > add the ability to iterate tasks within cgroups, we'll just reuse enum
> > > bpf_iter_cgroup_order as an extra parameter for the task iterator.
> > >
> > > And with that future case in mind I do think that we should have 0
> > > being "UNSPEC" case.
> > >
> >
> > Ok.
> >
> > > > >         BPF_ITER_DESCENDANTS_PRE,       /* walk descendants in pre-order. */
> > > > >         BPF_ITER_DESCENDANTS_POST,      /* walk descendants in post-order. */
> > > > >         BPF_ITER_ANCESTORS_UP,          /* walk ancestors upward. */
> > > > >         BPF_ITER_SELF_ONLY,             /* process only a single object. */
> > > > > };
> > > > >
> > > > > So that when calling bpf_object__attach_skeleton() on cgroup_iter, a
> > > > > link can be generated, and the generated link defaults to a pre-order
> > > > > walk of the whole hierarchy. Is there a better solution?
> > > > >
> > >
> > > I was actually surprised that we specify these additional parameters
> > > at attach (LINK_CREATE) time, and not at bpf_iter_create() call time.
> > > It seems more appropriate to allow specifying such runtime parameters
> > > very late, when we create a specific instance of the seq_file. But I
> > > guess this was done because one of the initial motivations for
> > > iterators was to be pinned in BPFFS and read as a file, so it was more
> > > convenient to store such parameters upfront at link creation time to
> > > keep BPF_OBJ_PIN simpler. I guess it makes sense; worst case, you'll
> > > need to create multiple bpf_link files, one for each cgroup hierarchy
> > > you'd like to query with the same single BPF program.
> > >
> >
> > Right. That was the design from the beginning.
> >
> > > But I digress.
> > >
> > > As for not being able to auto-attach the cgroup iterator: I think
> > > that's sort of expected and in line with not being able to auto-attach
> > > cgroup programs, as you need a cgroup FD at runtime. So even if you had
> > > some reasonable default order, you would still need to specify the
> > > target cgroup (either through an FD or an ID).
> > >
> > > So... either don't do skeleton auto-attach,
> >
> > This is not okay IMHO. It would be very inconvenient to use.
> >
> > > or let's teach libbpf code
> > > to not auto-attach some iter types?
> > >
> >
> > I'm thinking of two options:
> >
> > 1. Maybe we could add libbpf APIs for disabling auto-attach, just like
> > prog autoload, e.g.:
> >
> > bpf_program__set_auto_attach()
> > bpf_program__get_auto_attach(...)
>
> Indeed, to give more flexibility we can also add
> bpf_program__set_autoattach() and bpf_program__autoattach() (note: no
> underscore in "autoattach" and no "get" prefix, to be consistent with
> the autocreate and autoload getters and setters). It's a pretty simple
> change, so please send a separate patch for this (soon-ish would be
> great, to make it into the final 1.0).

Acknowledged.

> >
> > 2. In auto-attach, if the program's link is already set, the attach
> > will be skipped. So we could just attach manually, specifying the
> > order, and set the link in the skeleton. This way, no change in libbpf
> > is needed. Does this sound good to you?
> >
>
> Yes, this is another way and is fully supported. It might be a bit less
> convenient than set_autoattach in some cases, though, so set_autoattach
> still makes sense, IMO.
>

Acknowledged.

> > > Alternatively, we could teach libbpf to parse some sort of cgroup
> > > iterator spec, like:
> > >
> > > SEC("iter/cgroup:/path/to/cgroup:descendants_pre")
> > >
> > > But this approach won't work for a bunch of other parameterized
> > > iterators (e.g., task iter, or map elem iter), so I'm hesitant about
> > > adding this to libbpf as generic functionality.
> > >
> >
> > Agree. Let's explore other options first.
> >
> > > >
> > > > I think this can be handled by userspace? We can attach the
> > > > cgroup_iter separately first (and maybe we will need to set prog->link
> > > > as well) so that bpf_object__attach_skeleton() doesn't try to attach
> > > > it. I am following this pattern in the selftest in the final patch,
> > > > although I think I might be missing setting prog->link, so I am
> > > > wondering why there are no issues in that selftest, which has the same
> > > > scenario you are talking about.
> > > >
> > > > I think such a pattern will need to be used anyway if users need
> > > > to set any non-default arguments for the cgroup_iter prog (like the
> > > > selftest does), right? The only case we are discussing here is
> > > > where the user wants to attach the cgroup_iter with all-default
> > > > options (in which case the default order will fail).
> > > > I agree that it might be inconvenient if the default/uninitialized
> > > > options don't work for cgroup_iter, but Alexei pointed out that this
> > > > matches other bpf uapis.
> > > >
> > > > My concern is that, in the future, we may try to reuse enum
> > > > bpf_iter_order to set the ordering for other iterators, and the
> > > > default/uninitialized value (BPF_ITER_DESCENDANTS_PRE) may not make
> > > > sense for that iterator (e.g. one that is not a tree). In that case,
> > > > the same problem we are avoiding for cgroup_iter here will show up
> > > > for that iterator, and by then we can't easily change it because
> > > > it's uapi.
> > >
> > > Yep, valid concern, I agree.
> > >
> >
> > Andrii, other than the auto-attach issue, do you have any concerns
> > about the rest of this patchset?
>
> Well, I was mostly looking at the UAPIs and didn't check the iteration
> logic itself. But plenty of others did, and I trust they did a good job
> at that. So no, no other concerns.
>

Thanks Andrii, I will try to send the set_autoattach and autoattach patch ASAP.
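
For the cgroup_iter case, usage would then look roughly like this (a
sketch only; the my_skel skeleton and its dump_cgroup program are
hypothetical names):

	LIBBPF_OPTS(bpf_iter_attach_opts, opts);
	struct my_skel *skel;

	/* opts.link_info/link_info_len filled with the order and the
	 * target cgroup_fd, as in the earlier snippet
	 */
	skel = my_skel__open_and_load();
	if (!skel)
		return -1;
	/* opt the cgroup_iter prog out of skeleton auto-attach... */
	bpf_program__set_autoattach(skel->progs.dump_cgroup, false);
	/* ...so this attaches only the remaining progs... */
	if (my_skel__attach(skel))
		return -1;
	/* ...then attach the iter manually with explicit parameters */
	skel->links.dump_cgroup =
		bpf_program__attach_iter(skel->progs.dump_cgroup, &opts);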

> >
> > > >
> > > >
> > > > > > and explicitly list the values acceptable by cgroup_iter, error out if
> > > > > > UNSPEC is detected.
> > > > > >
> > > > > > Also, following Andrii's comments, will change BPF_ITER_SELF to
> > > > > > BPF_ITER_SELF_ONLY, which does seem a little more explicit in
> > > > > > comparison.
> > > > > >
> > > > > > > I applied the first 3 patches to ease respin.
> > > > > >
> > > > > > Thanks! This helps!
> > > > > >
> > > > > > > Thanks!

^ permalink raw reply	[flat|nested] 23+ messages in thread

end of thread, other threads:[~2022-08-16 17:23 UTC | newest]

Thread overview: 23+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2022-08-05 21:48 [PATCH bpf-next v7 0/8] bpf: rstat: cgroup hierarchical stats Hao Luo
2022-08-05 21:48 ` [PATCH bpf-next v7 1/8] btf: Add a new kfunc flag which allows to mark a function to be sleepable Hao Luo
2022-08-05 21:48 ` [PATCH bpf-next v7 2/8] cgroup: enable cgroup_get_from_file() on cgroup1 Hao Luo
2022-08-05 21:48 ` [PATCH bpf-next v7 3/8] bpf, iter: Fix the condition on p when calling stop Hao Luo
2022-08-05 21:48 ` [PATCH bpf-next v7 4/8] bpf: Introduce cgroup iter Hao Luo
2022-08-09  0:18   ` Andrii Nakryiko
2022-08-09  0:56     ` Hao Luo
2022-08-09 16:23       ` Alexei Starovoitov
2022-08-09 18:38         ` Hao Luo
2022-08-11  3:10           ` Hao Luo
2022-08-11 14:09             ` Yosry Ahmed
2022-08-16  4:12               ` Andrii Nakryiko
2022-08-16  4:19                 ` Andrii Nakryiko
2022-08-16  6:52                 ` Hao Luo
2022-08-16 17:17                   ` Andrii Nakryiko
2022-08-16 17:22                     ` Hao Luo
2022-08-05 21:48 ` [PATCH bpf-next v7 5/8] selftests/bpf: Test cgroup_iter Hao Luo
2022-08-09  0:20   ` Andrii Nakryiko
2022-08-09  1:18     ` Hao Luo
2022-08-05 21:48 ` [PATCH bpf-next v7 6/8] cgroup: bpf: enable bpf programs to integrate with rstat Hao Luo
2022-08-05 21:48 ` [PATCH bpf-next v7 7/8] selftests/bpf: extend cgroup helpers Hao Luo
2022-08-05 21:48 ` [PATCH bpf-next v7 8/8] selftests/bpf: add a selftest for cgroup hierarchical stats collection Hao Luo
2022-08-09 16:20 ` [PATCH bpf-next v7 0/8] bpf: rstat: cgroup hierarchical stats patchwork-bot+netdevbpf

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).