bpf.vger.kernel.org archive mirror
 help / color / mirror / Atom feed
* [RFC] [PATCH bpf-next 0/1] Implement getting cgroup path bpf helper
@ 2021-05-12  9:58 Xufeng Zhang
  2021-05-12  9:58 ` [RFC] [PATCH bpf-next 1/1] bpf: Add a BPF helper for getting the cgroup path of current task Xufeng Zhang
  0 siblings, 1 reply; 8+ messages in thread
From: Xufeng Zhang @ 2021-05-12  9:58 UTC (permalink / raw)
  To: kpsingh, ast, daniel, bpf, linux-kernel, linux-security-module
  Cc: revest, jackmanb, yhs, songliubraving, kafai, john.fastabend,
	joe, quentin, Xufeng Zhang

In order to protect the running application containers by utilizing
bpf LSM, we need to upload the security rules into bpf maps in container
granularity, however, there is no effective bpf helper to identify the
container, especially for cgroup v1. Generally, the only thing which the
user side can get is container ID, and the cgroup path for this running
container is fixed if we know the container ID, therefore, bpf programs
also need to get the cgroup path for the running task in order to apply
security rules stored in bpf maps.

This patch add a bpf helper - bpf_get_current_cpuset_cgroup_path(), which
return the cpuset cgroup path for the current task, since cgroup_path_ns()
can sleep, this helper is only allowed for sleepable LSM hooks.

Concern:
  Since cgroup_path_ns() takes 'cgroup_mutex' and 'css_set_lock' lock,
  I'm not sure if there is any dead lock code path in LSM hooks.

Xufeng Zhang (1):
bpf: Add a BPF helper for getting the cgroup path of current task
---
 include/uapi/linux/bpf.h       | 13 +++++++++++++
 kernel/bpf/bpf_lsm.c           | 28 ++++++++++++++++++++++++++++
 tools/include/uapi/linux/bpf.h | 13 +++++++++++++
 3 files changed, 54 insertions(+)

^ permalink raw reply	[flat|nested] 8+ messages in thread

* [RFC] [PATCH bpf-next 1/1] bpf: Add a BPF helper for getting the cgroup path of current task
  2021-05-12  9:58 [RFC] [PATCH bpf-next 0/1] Implement getting cgroup path bpf helper Xufeng Zhang
@ 2021-05-12  9:58 ` Xufeng Zhang
  2021-05-12 22:55   ` Alexei Starovoitov
  0 siblings, 1 reply; 8+ messages in thread
From: Xufeng Zhang @ 2021-05-12  9:58 UTC (permalink / raw)
  To: kpsingh, ast, daniel, bpf, linux-kernel, linux-security-module
  Cc: revest, jackmanb, yhs, songliubraving, kafai, john.fastabend,
	joe, quentin, Xufeng Zhang

To implement security rules for application containers by utilizing
bpf LSM, the container to which the current running task belongs need
to be known in bpf context. Think about this scenario: kubernetes
schedules a pod into one host, before the application container can run,
the security rules for this application need to be loaded into bpf
maps firstly, so that LSM bpf programs can make decisions based on
this rule maps.

However, there is no effective bpf helper to achieve this goal,
especially for cgroup v1. In the above case, the only available information
from user side is container-id, and the cgroup path for this container
is certain based on container-id, so in order to make a bridge between
user side and bpf programs, bpf programs also need to know the current
cgroup path of running task.

This change add a new bpf helper: bpf_get_current_cpuset_cgroup_path(),
since cgroup_path_ns() can sleep, this helper is only allowed for
sleepable LSM hooks.

Signed-off-by: Xufeng Zhang <yunbo.xufeng@linux.alibaba.com>
---
 include/uapi/linux/bpf.h       | 13 +++++++++++++
 kernel/bpf/bpf_lsm.c           | 28 ++++++++++++++++++++++++++++
 tools/include/uapi/linux/bpf.h | 13 +++++++++++++
 3 files changed, 54 insertions(+)

diff --git a/include/uapi/linux/bpf.h b/include/uapi/linux/bpf.h
index ec6d85a81744..e8295101b865 100644
--- a/include/uapi/linux/bpf.h
+++ b/include/uapi/linux/bpf.h
@@ -4735,6 +4735,18 @@ union bpf_attr {
  *		be zero-terminated except when **str_size** is 0.
  *
  *		Or **-EBUSY** if the per-CPU memory copy buffer is busy.
+ *
+ * int bpf_get_current_cpuset_cgroup_path(char *buf, u32 buf_len)
+ *	Description
+ *		Get the cpuset cgroup path of current task from kernel memory,
+ *		this path can be used to identify in which container is the
+ *		current task running.
+ *		*buf* memory is pre-allocated, and *buf_len* indicates the size
+ *		of this memory.
+ *
+ *	Return
+ *		The cpuset cgroup path is copied into *buf* on success,
+ *		or a negative integer error in case of failure.
  */
 #define __BPF_FUNC_MAPPER(FN)		\
 	FN(unspec),			\
@@ -4903,6 +4915,7 @@ union bpf_attr {
 	FN(check_mtu),			\
 	FN(for_each_map_elem),		\
 	FN(snprintf),			\
+	FN(get_current_cpuset_cgroup_path),     \
 	/* */
 
 /* integer value in 'imm' field of BPF_CALL instruction selects which helper
diff --git a/kernel/bpf/bpf_lsm.c b/kernel/bpf/bpf_lsm.c
index 5efb2b24012c..5e62e3875df1 100644
--- a/kernel/bpf/bpf_lsm.c
+++ b/kernel/bpf/bpf_lsm.c
@@ -99,6 +99,30 @@ static const struct bpf_func_proto bpf_ima_inode_hash_proto = {
 	.allowed	= bpf_ima_inode_hash_allowed,
 };
 
+#ifdef CONFIG_CGROUPS
+BPF_CALL_2(bpf_get_current_cpuset_cgroup_path, char *, buf, u32, buf_len)
+{
+	struct cgroup_subsys_state *css;
+	int retval;
+
+	css = task_get_css(current, cpuset_cgrp_id);
+	retval = cgroup_path_ns(css->cgroup, buf, buf_len, &init_cgroup_ns);
+	css_put(css);
+	if (retval >= buf_len)
+		retval = -ENAMETOOLONG;
+	return retval;
+}
+
+static const struct bpf_func_proto bpf_get_current_cpuset_cgroup_path_proto = {
+	.func           = bpf_get_current_cpuset_cgroup_path,
+	.gpl_only       = false,
+	.ret_type       = RET_INTEGER,
+	.arg1_type      = ARG_PTR_TO_UNINIT_MEM,
+	.arg2_type      = ARG_CONST_SIZE,
+	.allowed        = bpf_ima_inode_hash_allowed,
+};
+#endif
+
 static const struct bpf_func_proto *
 bpf_lsm_func_proto(enum bpf_func_id func_id, const struct bpf_prog *prog)
 {
@@ -119,6 +143,10 @@ bpf_lsm_func_proto(enum bpf_func_id func_id, const struct bpf_prog *prog)
 		return &bpf_bprm_opts_set_proto;
 	case BPF_FUNC_ima_inode_hash:
 		return prog->aux->sleepable ? &bpf_ima_inode_hash_proto : NULL;
+#ifdef CONFIG_CGROUPS
+	case BPF_FUNC_get_current_cpuset_cgroup_path:
+		return prog->aux->sleepable ? &bpf_get_current_cpuset_cgroup_path_proto : NULL;
+#endif
 	default:
 		return tracing_prog_func_proto(func_id, prog);
 	}
diff --git a/tools/include/uapi/linux/bpf.h b/tools/include/uapi/linux/bpf.h
index ec6d85a81744..fe31252d92e3 100644
--- a/tools/include/uapi/linux/bpf.h
+++ b/tools/include/uapi/linux/bpf.h
@@ -4735,6 +4735,18 @@ union bpf_attr {
  *		be zero-terminated except when **str_size** is 0.
  *
  *		Or **-EBUSY** if the per-CPU memory copy buffer is busy.
+ *
+ * int bpf_get_current_cpuset_cgroup_path(char *buf, u32 buf_len)
+ *	Description
+ *		Get the cpuset cgroup path of current task from kernel memory,
+ *		this path can be used to identify in which container is the
+ *		current task running.
+ *		*buf* memory is pre-allocated, and *buf_len* indicates the size
+ *		of this memory.
+ *
+ *	Return
+ *		The cpuset cgroup path is copied into *buf* on success,
+ *		or a negative integer error in case of failure.
  */
 #define __BPF_FUNC_MAPPER(FN)		\
 	FN(unspec),			\
@@ -4903,6 +4915,7 @@ union bpf_attr {
 	FN(check_mtu),			\
 	FN(for_each_map_elem),		\
 	FN(snprintf),			\
+	FN(get_current_cpuset_cgroup_path),	\
 	/* */
 
 /* integer value in 'imm' field of BPF_CALL instruction selects which helper
-- 
2.20.1 (Apple Git-117)


^ permalink raw reply related	[flat|nested] 8+ messages in thread

* Re: [RFC] [PATCH bpf-next 1/1] bpf: Add a BPF helper for getting the cgroup path of current task
  2021-05-12  9:58 ` [RFC] [PATCH bpf-next 1/1] bpf: Add a BPF helper for getting the cgroup path of current task Xufeng Zhang
@ 2021-05-12 22:55   ` Alexei Starovoitov
  2021-05-13  8:57     ` xufeng zhang
  2021-05-14  4:06     ` xufeng zhang
  0 siblings, 2 replies; 8+ messages in thread
From: Alexei Starovoitov @ 2021-05-12 22:55 UTC (permalink / raw)
  To: Xufeng Zhang
  Cc: kpsingh, ast, daniel, bpf, linux-kernel, linux-security-module,
	revest, jackmanb, yhs, songliubraving, kafai, john.fastabend,
	joe, quentin

On Wed, May 12, 2021 at 05:58:23PM +0800, Xufeng Zhang wrote:
> To implement security rules for application containers by utilizing
> bpf LSM, the container to which the current running task belongs need
> to be known in bpf context. Think about this scenario: kubernetes
> schedules a pod into one host, before the application container can run,
> the security rules for this application need to be loaded into bpf
> maps firstly, so that LSM bpf programs can make decisions based on
> this rule maps.
> 
> However, there is no effective bpf helper to achieve this goal,
> especially for cgroup v1. In the above case, the only available information
> from user side is container-id, and the cgroup path for this container
> is certain based on container-id, so in order to make a bridge between
> user side and bpf programs, bpf programs also need to know the current
> cgroup path of running task.
...
> +#ifdef CONFIG_CGROUPS
> +BPF_CALL_2(bpf_get_current_cpuset_cgroup_path, char *, buf, u32, buf_len)
> +{
> +	struct cgroup_subsys_state *css;
> +	int retval;
> +
> +	css = task_get_css(current, cpuset_cgrp_id);
> +	retval = cgroup_path_ns(css->cgroup, buf, buf_len, &init_cgroup_ns);
> +	css_put(css);
> +	if (retval >= buf_len)
> +		retval = -ENAMETOOLONG;

Manipulating string path to check the hierarchy will be difficult to do
inside bpf prog. It seems to me this helper will be useful only for
simplest cgroup setups where there is no additional cgroup nesting
within containers.
Have you looked at *ancestor_cgroup_id and *cgroup_id helpers?
They're a bit more flexible when dealing with hierarchy and
can be used to achieve the same correlation between kernel and user cgroup ids.

^ permalink raw reply	[flat|nested] 8+ messages in thread

* Re: [RFC] [PATCH bpf-next 1/1] bpf: Add a BPF helper for getting the cgroup path of current task
  2021-05-12 22:55   ` Alexei Starovoitov
@ 2021-05-13  8:57     ` xufeng zhang
  2021-05-14  4:20       ` Alexei Starovoitov
  2021-05-14  4:06     ` xufeng zhang
  1 sibling, 1 reply; 8+ messages in thread
From: xufeng zhang @ 2021-05-13  8:57 UTC (permalink / raw)
  To: Alexei Starovoitov
  Cc: kpsingh, ast, daniel, bpf, linux-kernel, linux-security-module,
	revest, jackmanb, yhs, songliubraving, kafai, john.fastabend,
	joe, quentin

在 2021/5/13 上午6:55, Alexei Starovoitov 写道:

> On Wed, May 12, 2021 at 05:58:23PM +0800, Xufeng Zhang wrote:
>> To implement security rules for application containers by utilizing
>> bpf LSM, the container to which the current running task belongs need
>> to be known in bpf context. Think about this scenario: kubernetes
>> schedules a pod into one host, before the application container can run,
>> the security rules for this application need to be loaded into bpf
>> maps firstly, so that LSM bpf programs can make decisions based on
>> this rule maps.
>>
>> However, there is no effective bpf helper to achieve this goal,
>> especially for cgroup v1. In the above case, the only available information
>> from user side is container-id, and the cgroup path for this container
>> is certain based on container-id, so in order to make a bridge between
>> user side and bpf programs, bpf programs also need to know the current
>> cgroup path of running task.
> ...
>> +#ifdef CONFIG_CGROUPS
>> +BPF_CALL_2(bpf_get_current_cpuset_cgroup_path, char *, buf, u32, buf_len)
>> +{
>> +	struct cgroup_subsys_state *css;
>> +	int retval;
>> +
>> +	css = task_get_css(current, cpuset_cgrp_id);
>> +	retval = cgroup_path_ns(css->cgroup, buf, buf_len, &init_cgroup_ns);
>> +	css_put(css);
>> +	if (retval >= buf_len)
>> +		retval = -ENAMETOOLONG;
> Manipulating string path to check the hierarchy will be difficult to do
> inside bpf prog. It seems to me this helper will be useful only for
> simplest cgroup setups where there is no additional cgroup nesting
> within containers.
> Have you looked at *ancestor_cgroup_id and *cgroup_id helpers?
> They're a bit more flexible when dealing with hierarchy and
> can be used to achieve the same correlation between kernel and user cgroup ids.


Thanks for your timely reply, Alexei!

Yes, this helper is not so common, it does not works for nesting cgroup 
within containers.

About your suggestion, the *cgroup_id helpers only works for cgroup v2, 
however, we're still using cgroup v1 in product,and even for cgroup v2, 
I'm not sure if there is any way for user space to get this cgroup id 
timely(after container created, but before container start to run)。

So if there is any effective way works for cgroup v1?


Many thanks!

Xufeng



^ permalink raw reply	[flat|nested] 8+ messages in thread

* Re: [RFC] [PATCH bpf-next 1/1] bpf: Add a BPF helper for getting the cgroup path of current task
  2021-05-12 22:55   ` Alexei Starovoitov
  2021-05-13  8:57     ` xufeng zhang
@ 2021-05-14  4:06     ` xufeng zhang
  2021-05-14 11:20       ` KP Singh
  1 sibling, 1 reply; 8+ messages in thread
From: xufeng zhang @ 2021-05-14  4:06 UTC (permalink / raw)
  To: kpsingh
  Cc: ast, daniel, bpf, linux-kernel, linux-security-module, revest,
	jackmanb, yhs, songliubraving, kafai, john.fastabend, joe,
	quentin, Alexei Starovoitov


在 2021/5/13 上午6:55, Alexei Starovoitov 写道:
> On Wed, May 12, 2021 at 05:58:23PM +0800, Xufeng Zhang wrote:
>> To implement security rules for application containers by utilizing
>> bpf LSM, the container to which the current running task belongs need
>> to be known in bpf context. Think about this scenario: kubernetes
>> schedules a pod into one host, before the application container can run,
>> the security rules for this application need to be loaded into bpf
>> maps firstly, so that LSM bpf programs can make decisions based on
>> this rule maps.
>>
>> However, there is no effective bpf helper to achieve this goal,
>> especially for cgroup v1. In the above case, the only available information
>> from user side is container-id, and the cgroup path for this container
>> is certain based on container-id, so in order to make a bridge between
>> user side and bpf programs, bpf programs also need to know the current
>> cgroup path of running task.
> ...
>> +#ifdef CONFIG_CGROUPS
>> +BPF_CALL_2(bpf_get_current_cpuset_cgroup_path, char *, buf, u32, buf_len)
>> +{
>> +	struct cgroup_subsys_state *css;
>> +	int retval;
>> +
>> +	css = task_get_css(current, cpuset_cgrp_id);
>> +	retval = cgroup_path_ns(css->cgroup, buf, buf_len, &init_cgroup_ns);
>> +	css_put(css);
>> +	if (retval >= buf_len)
>> +		retval = -ENAMETOOLONG;
> Manipulating string path to check the hierarchy will be difficult to do
> inside bpf prog. It seems to me this helper will be useful only for
> simplest cgroup setups where there is no additional cgroup nesting
> within containers.
> Have you looked at *ancestor_cgroup_id and *cgroup_id helpers?
> They're a bit more flexible when dealing with hierarchy and
> can be used to achieve the same correlation between kernel and user cgroup ids.


KP,

do you have any suggestion?

what I am thinking is the internal kernel object(cgroup id or ns.inum) 
is not so user friendly, we can get the container-context from them for 
tracing scenario, but not for LSM blocking cases, I'm not sure how 
Google internally resolve similar issue.


Thanks!

Xufeng


^ permalink raw reply	[flat|nested] 8+ messages in thread

* Re: [RFC] [PATCH bpf-next 1/1] bpf: Add a BPF helper for getting the cgroup path of current task
  2021-05-13  8:57     ` xufeng zhang
@ 2021-05-14  4:20       ` Alexei Starovoitov
  2021-05-14 11:21         ` KP Singh
  0 siblings, 1 reply; 8+ messages in thread
From: Alexei Starovoitov @ 2021-05-14  4:20 UTC (permalink / raw)
  To: xufeng zhang
  Cc: KP Singh, Alexei Starovoitov, Daniel Borkmann, bpf, LKML,
	LSM List, Florent Revest, Brendan Jackman, Yonghong Song,
	Song Liu, Martin KaFai Lau, John Fastabend, Joe Stringer,
	Quentin Monnet

On Thu, May 13, 2021 at 1:57 AM xufeng zhang
<yunbo.xufeng@linux.alibaba.com> wrote:
>
> 在 2021/5/13 上午6:55, Alexei Starovoitov 写道:
>
> > On Wed, May 12, 2021 at 05:58:23PM +0800, Xufeng Zhang wrote:
> >> To implement security rules for application containers by utilizing
> >> bpf LSM, the container to which the current running task belongs need
> >> to be known in bpf context. Think about this scenario: kubernetes
> >> schedules a pod into one host, before the application container can run,
> >> the security rules for this application need to be loaded into bpf
> >> maps firstly, so that LSM bpf programs can make decisions based on
> >> this rule maps.
> >>
> >> However, there is no effective bpf helper to achieve this goal,
> >> especially for cgroup v1. In the above case, the only available information
> >> from user side is container-id, and the cgroup path for this container
> >> is certain based on container-id, so in order to make a bridge between
> >> user side and bpf programs, bpf programs also need to know the current
> >> cgroup path of running task.
> > ...
> >> +#ifdef CONFIG_CGROUPS
> >> +BPF_CALL_2(bpf_get_current_cpuset_cgroup_path, char *, buf, u32, buf_len)
> >> +{
> >> +    struct cgroup_subsys_state *css;
> >> +    int retval;
> >> +
> >> +    css = task_get_css(current, cpuset_cgrp_id);
> >> +    retval = cgroup_path_ns(css->cgroup, buf, buf_len, &init_cgroup_ns);
> >> +    css_put(css);
> >> +    if (retval >= buf_len)
> >> +            retval = -ENAMETOOLONG;
> > Manipulating string path to check the hierarchy will be difficult to do
> > inside bpf prog. It seems to me this helper will be useful only for
> > simplest cgroup setups where there is no additional cgroup nesting
> > within containers.
> > Have you looked at *ancestor_cgroup_id and *cgroup_id helpers?
> > They're a bit more flexible when dealing with hierarchy and
> > can be used to achieve the same correlation between kernel and user cgroup ids.
>
>
> Thanks for your timely reply, Alexei!
>
> Yes, this helper is not so common, it does not works for nesting cgroup
> within containers.
>
> About your suggestion, the *cgroup_id helpers only works for cgroup v2,
> however, we're still using cgroup v1 in product,and even for cgroup v2,
> I'm not sure if there is any way for user space to get this cgroup id
> timely(after container created, but before container start to run)。
>
> So if there is any effective way works for cgroup v1?

https://github.com/systemd/systemd/blob/main/NEWS#L379

^ permalink raw reply	[flat|nested] 8+ messages in thread

* Re: [RFC] [PATCH bpf-next 1/1] bpf: Add a BPF helper for getting the cgroup path of current task
  2021-05-14  4:06     ` xufeng zhang
@ 2021-05-14 11:20       ` KP Singh
  0 siblings, 0 replies; 8+ messages in thread
From: KP Singh @ 2021-05-14 11:20 UTC (permalink / raw)
  To: xufeng zhang
  Cc: Alexei Starovoitov, Daniel Borkmann, bpf, open list,
	Linux Security Module list, Florent Revest, Brendan Jackman,
	Yonghong Song, Song Liu, Martin KaFai Lau, John Fastabend, joe,
	quentin, Alexei Starovoitov

On Fri, May 14, 2021 at 6:06 AM xufeng zhang
<yunbo.xufeng@linux.alibaba.com> wrote:
>
>
> 在 2021/5/13 上午6:55, Alexei Starovoitov 写道:
> > On Wed, May 12, 2021 at 05:58:23PM +0800, Xufeng Zhang wrote:
> >> To implement security rules for application containers by utilizing
> >> bpf LSM, the container to which the current running task belongs need
> >> to be known in bpf context. Think about this scenario: kubernetes
> >> schedules a pod into one host, before the application container can run,
> >> the security rules for this application need to be loaded into bpf
> >> maps firstly, so that LSM bpf programs can make decisions based on
> >> this rule maps.
> >>
> >> However, there is no effective bpf helper to achieve this goal,
> >> especially for cgroup v1. In the above case, the only available information
> >> from user side is container-id, and the cgroup path for this container
> >> is certain based on container-id, so in order to make a bridge between
> >> user side and bpf programs, bpf programs also need to know the current
> >> cgroup path of running task.
> > ...
> >> +#ifdef CONFIG_CGROUPS
> >> +BPF_CALL_2(bpf_get_current_cpuset_cgroup_path, char *, buf, u32, buf_len)
> >> +{
> >> +    struct cgroup_subsys_state *css;
> >> +    int retval;
> >> +
> >> +    css = task_get_css(current, cpuset_cgrp_id);
> >> +    retval = cgroup_path_ns(css->cgroup, buf, buf_len, &init_cgroup_ns);
> >> +    css_put(css);
> >> +    if (retval >= buf_len)
> >> +            retval = -ENAMETOOLONG;
> > Manipulating string path to check the hierarchy will be difficult to do
> > inside bpf prog. It seems to me this helper will be useful only for
> > simplest cgroup setups where there is no additional cgroup nesting
> > within containers.
> > Have you looked at *ancestor_cgroup_id and *cgroup_id helpers?
> > They're a bit more flexible when dealing with hierarchy and
> > can be used to achieve the same correlation between kernel and user cgroup ids.
>
>
> KP,
>
> do you have any suggestion?

I haven't really tried this yet, but have you considered using task local
storage to identify the container?

- Add a task local storage with container ID somewhere in the container
  manager
- Propagate this ID to all the tasks within a container using task security
  blob management hooks (like task_alloc and task_free) etc.

>
> what I am thinking is the internal kernel object(cgroup id or ns.inum)
> is not so user friendly, we can get the container-context from them for
> tracing scenario, but not for LSM blocking cases, I'm not sure how
> Google internally resolve similar issue.
>
>
> Thanks!
>
> Xufeng
>

^ permalink raw reply	[flat|nested] 8+ messages in thread

* Re: [RFC] [PATCH bpf-next 1/1] bpf: Add a BPF helper for getting the cgroup path of current task
  2021-05-14  4:20       ` Alexei Starovoitov
@ 2021-05-14 11:21         ` KP Singh
  0 siblings, 0 replies; 8+ messages in thread
From: KP Singh @ 2021-05-14 11:21 UTC (permalink / raw)
  To: Alexei Starovoitov
  Cc: xufeng zhang, Alexei Starovoitov, Daniel Borkmann, bpf, LKML,
	LSM List, Florent Revest, Brendan Jackman, Yonghong Song,
	Song Liu, Martin KaFai Lau, John Fastabend, Joe Stringer,
	Quentin Monnet

> > About your suggestion, the *cgroup_id helpers only works for cgroup v2,
> > however, we're still using cgroup v1 in product,and even for cgroup v2,
> > I'm not sure if there is any way for user space to get this cgroup id
> > timely(after container created, but before container start to run)。
> >
> > So if there is any effective way works for cgroup v1?
>
> https://github.com/systemd/systemd/blob/main/NEWS#L379

I agree that we should not focus on cgroup v1 if we do add a helper.

^ permalink raw reply	[flat|nested] 8+ messages in thread

end of thread, other threads:[~2021-05-14 11:21 UTC | newest]

Thread overview: 8+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2021-05-12  9:58 [RFC] [PATCH bpf-next 0/1] Implement getting cgroup path bpf helper Xufeng Zhang
2021-05-12  9:58 ` [RFC] [PATCH bpf-next 1/1] bpf: Add a BPF helper for getting the cgroup path of current task Xufeng Zhang
2021-05-12 22:55   ` Alexei Starovoitov
2021-05-13  8:57     ` xufeng zhang
2021-05-14  4:20       ` Alexei Starovoitov
2021-05-14 11:21         ` KP Singh
2021-05-14  4:06     ` xufeng zhang
2021-05-14 11:20       ` KP Singh

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).