* [PATCH v4 bpf-next 0/4] bpf: introduce bpf_get_task_stack()
@ 2020-06-29  5:55 Song Liu
  2020-06-29  5:55 ` [PATCH v4 bpf-next 4/4] selftests/bpf: add bpf_iter test with bpf_get_task_stack() Song Liu
                   ` (2 more replies)
  0 siblings, 3 replies; 8+ messages in thread
From: Song Liu @ 2020-06-29  5:55 UTC (permalink / raw)
  To: bpf, netdev, linux-kernel
  Cc: peterz, ast, daniel, kernel-team, john.fastabend, kpsingh, Song Liu

This set introduces a new helper, bpf_get_task_stack(). The primary use case
is to dump all /proc/*/stack to a seq_file via bpf_iter__task.

A few different approaches have been explored and compared:

  1. A simple wrapper around stack_trace_save_tsk(), as v1 [1].

     This approach introduces new syntax that is inconsistent with the
     existing helper bpf_get_stack(), so it is not ideal.

  2. Extend get_perf_callchain() to support "task" as argument.

     This approach reuses most of bpf_get_stack(). However, extending
     get_perf_callchain() requires non-trivial changes to
     architecture-specific code, which is error prone.

  3. The current approach (since v2), which leverages most of the existing
     bpf_get_stack() and uses stack_trace_save_tsk() to handle the
     architecture-specific logic (a rough sketch follows below).

[1] https://lore.kernel.org/netdev/20200623070802.2310018-1-songliubraving@fb.com/
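
For illustration, here is a minimal sketch of the core idea behind approach 3
(not the patch code): stack_trace_save_tsk() performs the architecture-specific
unwinding, and the result is widened to u64 in place on 32-bit systems (the
unsigned long vs. u64 mismatch noted in the v1 => v2 changes below). The real
helper additionally reuses the perf callchain entry buffers; the function name
sketch_save_task_stack() is made up for this example.

/* Sketch only: save a task's kernel stack into a u64 buffer via
 * stack_trace_save_tsk(), keeping architecture-specific unwinding
 * out of this code.
 */
#include <linux/sched.h>
#include <linux/stacktrace.h>
#include <linux/types.h>

static u32 sketch_save_task_stack(struct task_struct *task, u64 *buf,
                                  u32 max_depth)
{
        /* stack_trace_save_tsk() stores unsigned longs; reuse the u64
         * buffer as scratch space since it is at least as large.
         */
        unsigned long *ips = (unsigned long *)buf;
        u32 nr, i;

        nr = stack_trace_save_tsk(task, ips, max_depth, 0);

        /* On 32-bit systems unsigned long is half the size of u64, so
         * widen the entries in place, copying from the end to avoid
         * overwriting values that have not been converted yet.
         */
        if (sizeof(unsigned long) != sizeof(u64))
                for (i = nr; i-- > 0; )
                        buf[i] = (u64)ips[i];

        return nr;
}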

Changes v3 => v4:
1. Simplify the selftests with bpf_iter.h. (Yonghong)
2. Add example output to commit log of 4/4. (Yonghong)

Changes v2 => v3:
1. Rebase on top of bpf-next. (Yonghong)
2. Sanitize get_callchain_entry(). (Peter)
3. Use has_callchain_buf for bpf_get_task_stack. (Andrii)
4. Other small cleanups. (Yonghong, Andrii)

Changes v1 => v2:
1. Reuse most of bpf_get_stack() logic. (Andrii)
2. Fix unsigned long vs. u64 mismatch for 32-bit systems. (Yonghong)
3. Add %pB support in bpf_trace_printk(). (Daniel)
4. Fix the buffer size to be in bytes.

Song Liu (4):
  perf: expose get/put_callchain_entry()
  bpf: introduce helper bpf_get_task_stack()
  bpf: allow %pB in bpf_seq_printf() and bpf_trace_printk()
  selftests/bpf: add bpf_iter test with bpf_get_task_stack()

 include/linux/bpf.h                           |  1 +
 include/linux/perf_event.h                    |  2 +
 include/uapi/linux/bpf.h                      | 36 ++++++++-
 kernel/bpf/stackmap.c                         | 75 ++++++++++++++++++-
 kernel/bpf/verifier.c                         |  4 +-
 kernel/events/callchain.c                     | 13 ++--
 kernel/trace/bpf_trace.c                      | 12 ++-
 scripts/bpf_helpers_doc.py                    |  2 +
 tools/include/uapi/linux/bpf.h                | 36 ++++++++-
 .../selftests/bpf/prog_tests/bpf_iter.c       | 17 +++++
 .../selftests/bpf/progs/bpf_iter_task_stack.c | 37 +++++++++
 11 files changed, 220 insertions(+), 15 deletions(-)
 create mode 100644 tools/testing/selftests/bpf/progs/bpf_iter_task_stack.c

--
2.24.1


* [PATCH v4 bpf-next 4/4] selftests/bpf: add bpf_iter test with bpf_get_task_stack()
  2020-06-29  5:55 [PATCH v4 bpf-next 0/4] bpf: introduce bpf_get_task_stack() Song Liu
@ 2020-06-29  5:55 ` Song Liu
  2020-06-29 15:06   ` Yonghong Song
  2020-06-29 19:25 ` [PATCH v4 bpf-next 0/4] bpf: introduce bpf_get_task_stack() Andrii Nakryiko
       [not found] ` <20200629055530.3244342-3-songliubraving@fb.com>
  2 siblings, 1 reply; 8+ messages in thread
From: Song Liu @ 2020-06-29  5:55 UTC (permalink / raw)
  To: bpf, netdev, linux-kernel
  Cc: peterz, ast, daniel, kernel-team, john.fastabend, kpsingh, Song Liu

The new test is similar to other bpf_iter tests. It dumps all
/proc/<pid>/stack to a seq_file. Here is some example output:

pid:     2873 num_entries:        3
[<0>] worker_thread+0xc6/0x380
[<0>] kthread+0x135/0x150
[<0>] ret_from_fork+0x22/0x30

pid:     2874 num_entries:        9
[<0>] __bpf_get_stack+0x15e/0x250
[<0>] bpf_prog_22a400774977bb30_dump_task_stack+0x4a/0xb3c
[<0>] bpf_iter_run_prog+0x81/0x170
[<0>] __task_seq_show+0x58/0x80
[<0>] bpf_seq_read+0x1c3/0x3b0
[<0>] vfs_read+0x9e/0x170
[<0>] ksys_read+0xa7/0xe0
[<0>] do_syscall_64+0x4c/0xa0
[<0>] entry_SYSCALL_64_after_hwframe+0x44/0xa9

Note: To print the output, it is necessary to modify the selftest.

Signed-off-by: Song Liu <songliubraving@fb.com>
---
 .../selftests/bpf/prog_tests/bpf_iter.c       | 17 +++++++++
 .../selftests/bpf/progs/bpf_iter_task_stack.c | 37 +++++++++++++++++++
 2 files changed, 54 insertions(+)
 create mode 100644 tools/testing/selftests/bpf/progs/bpf_iter_task_stack.c

diff --git a/tools/testing/selftests/bpf/prog_tests/bpf_iter.c b/tools/testing/selftests/bpf/prog_tests/bpf_iter.c
index 1e2e0fced6e81..fed42755416db 100644
--- a/tools/testing/selftests/bpf/prog_tests/bpf_iter.c
+++ b/tools/testing/selftests/bpf/prog_tests/bpf_iter.c
@@ -5,6 +5,7 @@
 #include "bpf_iter_netlink.skel.h"
 #include "bpf_iter_bpf_map.skel.h"
 #include "bpf_iter_task.skel.h"
+#include "bpf_iter_task_stack.skel.h"
 #include "bpf_iter_task_file.skel.h"
 #include "bpf_iter_tcp4.skel.h"
 #include "bpf_iter_tcp6.skel.h"
@@ -110,6 +111,20 @@ static void test_task(void)
 	bpf_iter_task__destroy(skel);
 }
 
+static void test_task_stack(void)
+{
+	struct bpf_iter_task_stack *skel;
+
+	skel = bpf_iter_task_stack__open_and_load();
+	if (CHECK(!skel, "bpf_iter_task_stack__open_and_load",
+		  "skeleton open_and_load failed\n"))
+		return;
+
+	do_dummy_read(skel->progs.dump_task_stack);
+
+	bpf_iter_task_stack__destroy(skel);
+}
+
 static void test_task_file(void)
 {
 	struct bpf_iter_task_file *skel;
@@ -452,6 +467,8 @@ void test_bpf_iter(void)
 		test_bpf_map();
 	if (test__start_subtest("task"))
 		test_task();
+	if (test__start_subtest("task_stack"))
+		test_task_stack();
 	if (test__start_subtest("task_file"))
 		test_task_file();
 	if (test__start_subtest("tcp4"))
diff --git a/tools/testing/selftests/bpf/progs/bpf_iter_task_stack.c b/tools/testing/selftests/bpf/progs/bpf_iter_task_stack.c
new file mode 100644
index 0000000000000..e40d32a2ed93d
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/bpf_iter_task_stack.c
@@ -0,0 +1,37 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright (c) 2020 Facebook */
+#include "bpf_iter.h"
+#include <bpf/bpf_helpers.h>
+#include <bpf/bpf_tracing.h>
+
+char _license[] SEC("license") = "GPL";
+
+#define MAX_STACK_TRACE_DEPTH   64
+unsigned long entries[MAX_STACK_TRACE_DEPTH];
+#define SIZE_OF_ULONG (sizeof(unsigned long))
+
+SEC("iter/task")
+int dump_task_stack(struct bpf_iter__task *ctx)
+{
+	struct seq_file *seq = ctx->meta->seq;
+	struct task_struct *task = ctx->task;
+	long i, retlen;
+
+	if (task == (void *)0)
+		return 0;
+
+	retlen = bpf_get_task_stack(task, entries,
+				    MAX_STACK_TRACE_DEPTH * SIZE_OF_ULONG, 0);
+	if (retlen < 0)
+		return 0;
+
+	BPF_SEQ_PRINTF(seq, "pid: %8u num_entries: %8u\n", task->pid,
+		       retlen / SIZE_OF_ULONG);
+	for (i = 0; i < MAX_STACK_TRACE_DEPTH; i++) {
+		if (retlen > i * SIZE_OF_ULONG)
+			BPF_SEQ_PRINTF(seq, "[<0>] %pB\n", (void *)entries[i]);
+	}
+	BPF_SEQ_PRINTF(seq, "\n");
+
+	return 0;
+}
-- 
2.24.1



* Re: [PATCH v4 bpf-next 4/4] selftests/bpf: add bpf_iter test with bpf_get_task_stack()
  2020-06-29  5:55 ` [PATCH v4 bpf-next 4/4] selftests/bpf: add bpf_iter test with bpf_get_task_stack() Song Liu
@ 2020-06-29 15:06   ` Yonghong Song
  2020-06-29 16:56     ` Song Liu
  0 siblings, 1 reply; 8+ messages in thread
From: Yonghong Song @ 2020-06-29 15:06 UTC (permalink / raw)
  To: Song Liu, bpf, netdev, linux-kernel
  Cc: peterz, ast, daniel, kernel-team, john.fastabend, kpsingh



On 6/28/20 10:55 PM, Song Liu wrote:
> The new test is similar to other bpf_iter tests. It dumps all
> /proc/<pid>/stack to a seq_file. Here is some example output:
> 
> pid:     2873 num_entries:        3
> [<0>] worker_thread+0xc6/0x380
> [<0>] kthread+0x135/0x150
> [<0>] ret_from_fork+0x22/0x30
> 
> pid:     2874 num_entries:        9
> [<0>] __bpf_get_stack+0x15e/0x250
> [<0>] bpf_prog_22a400774977bb30_dump_task_stack+0x4a/0xb3c
> [<0>] bpf_iter_run_prog+0x81/0x170
> [<0>] __task_seq_show+0x58/0x80
> [<0>] bpf_seq_read+0x1c3/0x3b0
> [<0>] vfs_read+0x9e/0x170
> [<0>] ksys_read+0xa7/0xe0
> [<0>] do_syscall_64+0x4c/0xa0
> [<0>] entry_SYSCALL_64_after_hwframe+0x44/0xa9
> 
> Note: To print the output, it is necessary to modify the selftest.

I do not know what this sentence means. It seems confusing
and probably not needed.

> 
> Signed-off-by: Song Liu <songliubraving@fb.com>

Acked-by: Yonghong Song <yhs@fb.com>

> ---
>   .../selftests/bpf/prog_tests/bpf_iter.c       | 17 +++++++++
>   .../selftests/bpf/progs/bpf_iter_task_stack.c | 37 +++++++++++++++++++
>   2 files changed, 54 insertions(+)
>   create mode 100644 tools/testing/selftests/bpf/progs/bpf_iter_task_stack.c
> 
> diff --git a/tools/testing/selftests/bpf/prog_tests/bpf_iter.c b/tools/testing/selftests/bpf/prog_tests/bpf_iter.c
> index 1e2e0fced6e81..fed42755416db 100644
> --- a/tools/testing/selftests/bpf/prog_tests/bpf_iter.c
> +++ b/tools/testing/selftests/bpf/prog_tests/bpf_iter.c
> @@ -5,6 +5,7 @@
>   #include "bpf_iter_netlink.skel.h"
>   #include "bpf_iter_bpf_map.skel.h"
>   #include "bpf_iter_task.skel.h"
> +#include "bpf_iter_task_stack.skel.h"
>   #include "bpf_iter_task_file.skel.h"
>   #include "bpf_iter_tcp4.skel.h"
>   #include "bpf_iter_tcp6.skel.h"
> @@ -110,6 +111,20 @@ static void test_task(void)
>   	bpf_iter_task__destroy(skel);
>   }
>   
> +static void test_task_stack(void)
> +{
> +	struct bpf_iter_task_stack *skel;
> +
> +	skel = bpf_iter_task_stack__open_and_load();
> +	if (CHECK(!skel, "bpf_iter_task_stack__open_and_load",
> +		  "skeleton open_and_load failed\n"))
> +		return;
> +
> +	do_dummy_read(skel->progs.dump_task_stack);
> +
> +	bpf_iter_task_stack__destroy(skel);
> +}
> +
>   static void test_task_file(void)
>   {
>   	struct bpf_iter_task_file *skel;
> @@ -452,6 +467,8 @@ void test_bpf_iter(void)
>   		test_bpf_map();
>   	if (test__start_subtest("task"))
>   		test_task();
> +	if (test__start_subtest("task_stack"))
> +		test_task_stack();
>   	if (test__start_subtest("task_file"))
>   		test_task_file();
>   	if (test__start_subtest("tcp4"))
> diff --git a/tools/testing/selftests/bpf/progs/bpf_iter_task_stack.c b/tools/testing/selftests/bpf/progs/bpf_iter_task_stack.c
> new file mode 100644
> index 0000000000000..e40d32a2ed93d
> --- /dev/null
> +++ b/tools/testing/selftests/bpf/progs/bpf_iter_task_stack.c
> @@ -0,0 +1,37 @@
> +// SPDX-License-Identifier: GPL-2.0
> +/* Copyright (c) 2020 Facebook */
> +#include "bpf_iter.h"
> +#include <bpf/bpf_helpers.h>
> +#include <bpf/bpf_tracing.h>
> +
> +char _license[] SEC("license") = "GPL";
> +
> +#define MAX_STACK_TRACE_DEPTH   64
> +unsigned long entries[MAX_STACK_TRACE_DEPTH];
> +#define SIZE_OF_ULONG (sizeof(unsigned long))
> +
> +SEC("iter/task")
> +int dump_task_stack(struct bpf_iter__task *ctx)
> +{
> +	struct seq_file *seq = ctx->meta->seq;
> +	struct task_struct *task = ctx->task;
> +	long i, retlen;
> +
> +	if (task == (void *)0)
> +		return 0;
> +
> +	retlen = bpf_get_task_stack(task, entries,
> +				    MAX_STACK_TRACE_DEPTH * SIZE_OF_ULONG, 0);
> +	if (retlen < 0)
> +		return 0;
> +
> +	BPF_SEQ_PRINTF(seq, "pid: %8u num_entries: %8u\n", task->pid,
> +		       retlen / SIZE_OF_ULONG);
> +	for (i = 0; i < MAX_STACK_TRACE_DEPTH; i++) {
> +		if (retlen > i * SIZE_OF_ULONG)
> +			BPF_SEQ_PRINTF(seq, "[<0>] %pB\n", (void *)entries[i]);
> +	}
> +	BPF_SEQ_PRINTF(seq, "\n");
> +
> +	return 0;
> +}
> 


* Re: [PATCH v4 bpf-next 4/4] selftests/bpf: add bpf_iter test with bpf_get_task_stack()
  2020-06-29 15:06   ` Yonghong Song
@ 2020-06-29 16:56     ` Song Liu
  2020-06-29 18:22       ` Yonghong Song
  0 siblings, 1 reply; 8+ messages in thread
From: Song Liu @ 2020-06-29 16:56 UTC (permalink / raw)
  To: Yonghong Song
  Cc: bpf, Networking, open list, Peter Zijlstra, Alexei Starovoitov,
	Daniel Borkmann, Kernel Team, john.fastabend, kpsingh



> On Jun 29, 2020, at 8:06 AM, Yonghong Song <yhs@fb.com> wrote:
> 
> 
> 
> On 6/28/20 10:55 PM, Song Liu wrote:
>> The new test is similar to other bpf_iter tests. It dumps all
>> /proc/<pid>/stack to a seq_file. Here is some example output:
>> pid:     2873 num_entries:        3
>> [<0>] worker_thread+0xc6/0x380
>> [<0>] kthread+0x135/0x150
>> [<0>] ret_from_fork+0x22/0x30
>> pid:     2874 num_entries:        9
>> [<0>] __bpf_get_stack+0x15e/0x250
>> [<0>] bpf_prog_22a400774977bb30_dump_task_stack+0x4a/0xb3c
>> [<0>] bpf_iter_run_prog+0x81/0x170
>> [<0>] __task_seq_show+0x58/0x80
>> [<0>] bpf_seq_read+0x1c3/0x3b0
>> [<0>] vfs_read+0x9e/0x170
>> [<0>] ksys_read+0xa7/0xe0
>> [<0>] do_syscall_64+0x4c/0xa0
>> [<0>] entry_SYSCALL_64_after_hwframe+0x44/0xa9
>> Note: To print the output, it is necessary to modify the selftest.
> 
> I do not know what this sentence means. It seems confusing
> and probably not needed.

It means the current do_dummy_read() doesn't check or print the contents
of the seq_file:

        /* not check contents, but ensure read() ends without error */
        while ((len = read(iter_fd, buf, sizeof(buf))) > 0)
                ;
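
A local tweak along these lines (illustrative only, not part of this set; it
reuses the iter_fd, buf, and len already in do_dummy_read()) would print the
contents instead:

        /* debugging hack: echo the seq_file contents to stdout instead
         * of discarding them
         */
        while ((len = read(iter_fd, buf, sizeof(buf))) > 0)
                if (write(1, buf, len) != len)
                        break;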

> 
>> Signed-off-by: Song Liu <songliubraving@fb.com>
> 
> Acked-by: Yonghong Song <yhs@fb.com>

Thanks!

[...]



* Re: [PATCH v4 bpf-next 4/4] selftests/bpf: add bpf_iter test with bpf_get_task_stack()
  2020-06-29 16:56     ` Song Liu
@ 2020-06-29 18:22       ` Yonghong Song
  0 siblings, 0 replies; 8+ messages in thread
From: Yonghong Song @ 2020-06-29 18:22 UTC (permalink / raw)
  To: Song Liu
  Cc: bpf, Networking, open list, Peter Zijlstra, Alexei Starovoitov,
	Daniel Borkmann, Kernel Team, john.fastabend, kpsingh



On 6/29/20 9:56 AM, Song Liu wrote:
> 
> 
>> On Jun 29, 2020, at 8:06 AM, Yonghong Song <yhs@fb.com> wrote:
>>
>>
>>
>> On 6/28/20 10:55 PM, Song Liu wrote:
>>> The new test is similar to other bpf_iter tests. It dumps all
>>> /proc/<pid>/stack to a seq_file. Here is some example output:
>>> pid:     2873 num_entries:        3
>>> [<0>] worker_thread+0xc6/0x380
>>> [<0>] kthread+0x135/0x150
>>> [<0>] ret_from_fork+0x22/0x30
>>> pid:     2874 num_entries:        9
>>> [<0>] __bpf_get_stack+0x15e/0x250
>>> [<0>] bpf_prog_22a400774977bb30_dump_task_stack+0x4a/0xb3c
>>> [<0>] bpf_iter_run_prog+0x81/0x170
>>> [<0>] __task_seq_show+0x58/0x80
>>> [<0>] bpf_seq_read+0x1c3/0x3b0
>>> [<0>] vfs_read+0x9e/0x170
>>> [<0>] ksys_read+0xa7/0xe0
>>> [<0>] do_syscall_64+0x4c/0xa0
>>> [<0>] entry_SYSCALL_64_after_hwframe+0x44/0xa9
>>> Note: To print the output, it is necessary to modify the selftest.
>>
>> I do not know what this sentence means. It seems confusing
>> and probably not needed.
> 
> It means current do_dummy_read() doesn't check/print the contents of the
> seq_file:
> 
>          /* not check contents, but ensure read() ends without error */
>          while ((len = read(iter_fd, buf, sizeof(buf))) > 0)
>                  ;

I see. Thanks. It would be great if the commit message were more
explicit about what needs to be modified.

>>
>>> Signed-off-by: Song Liu <songliubraving@fb.com>
>>
>> Acked-by: Yonghong Song <yhs@fb.com>
> 
> Thanks!
> 
> [...]
> 


* Re: [PATCH v4 bpf-next 0/4] bpf: introduce bpf_get_task_stack()
  2020-06-29  5:55 [PATCH v4 bpf-next 0/4] bpf: introduce bpf_get_task_stack() Song Liu
  2020-06-29  5:55 ` [PATCH v4 bpf-next 4/4] selftests/bpf: add bpf_iter test with bpf_get_task_stack() Song Liu
@ 2020-06-29 19:25 ` Andrii Nakryiko
       [not found] ` <20200629055530.3244342-3-songliubraving@fb.com>
  2 siblings, 0 replies; 8+ messages in thread
From: Andrii Nakryiko @ 2020-06-29 19:25 UTC (permalink / raw)
  To: Song Liu
  Cc: bpf, Networking, open list, Peter Zijlstra, Alexei Starovoitov,
	Daniel Borkmann, Kernel Team, john fastabend, KP Singh

On Mon, Jun 29, 2020 at 11:54 AM Song Liu <songliubraving@fb.com> wrote:
>
> This set introduces a new helper bpf_get_task_stack(). The primary use case
> is to dump all /proc/*/stack to seq_file via bpf_iter__task.
>
> A few different approaches have been explored and compared:
>
>   1. A simple wrapper around stack_trace_save_tsk(), as v1 [1].
>
>      This approach introduces new syntax, which is different to existing
>      helper bpf_get_stack(). Therefore, this is not ideal.
>
>   2. Extend get_perf_callchain() to support "task" as argument.
>
>      This approach reuses most of bpf_get_stack(). However, extending
>      get_perf_callchain() requires non-trivial changes to architecture
>      specific code. Which is error prone.
>
>   3. Current (v2) approach, leverages most of existing bpf_get_stack(), and
>      uses stack_trace_save_tsk() to handle architecture specific logic.
>
> [1] https://lore.kernel.org/netdev/20200623070802.2310018-1-songliubraving@fb.com/
>
> Changes v3 => v4:
> 1. Simplify the selftests with bpf_iter.h. (Yonghong)
> 2. Add example output to commit log of 4/4. (Yonghong)
>
> Changes v2 => v3:
> 1. Rebase on top of bpf-next. (Yonghong)
> 2. Sanitize get_callchain_entry(). (Peter)
> 3. Use has_callchain_buf for bpf_get_task_stack. (Andrii)
> 4. Other small clean up. (Yonghong, Andrii).
>
> Changes v1 => v2:
> 1. Reuse most of bpf_get_stack() logic. (Andrii)
> 2. Fix unsigned long vs. u64 mismatch for 32-bit systems. (Yonghong)
> 3. Add %pB support in bpf_trace_printk(). (Daniel)
> 4. Fix buffer size to bytes.
>
> Song Liu (4):
>   perf: expose get/put_callchain_entry()
>   bpf: introduce helper bpf_get_task_stack()
>   bpf: allow %pB in bpf_seq_printf() and bpf_trace_printk()
>   selftests/bpf: add bpf_iter test with bpf_get_task_stack()
>
>  include/linux/bpf.h                           |  1 +
>  include/linux/perf_event.h                    |  2 +
>  include/uapi/linux/bpf.h                      | 36 ++++++++-
>  kernel/bpf/stackmap.c                         | 75 ++++++++++++++++++-
>  kernel/bpf/verifier.c                         |  4 +-
>  kernel/events/callchain.c                     | 13 ++--
>  kernel/trace/bpf_trace.c                      | 12 ++-
>  scripts/bpf_helpers_doc.py                    |  2 +
>  tools/include/uapi/linux/bpf.h                | 36 ++++++++-
>  .../selftests/bpf/prog_tests/bpf_iter.c       | 17 +++++
>  .../selftests/bpf/progs/bpf_iter_task_stack.c | 37 +++++++++
>  11 files changed, 220 insertions(+), 15 deletions(-)
>  create mode 100644 tools/testing/selftests/bpf/progs/bpf_iter_task_stack.c
>
> --
> 2.24.1

Thanks for working on this! This will enable a whole new set of tools
and applications.

Acked-by: Andrii Nakryiko <andriin@fb.com>


* Re: [PATCH v4 bpf-next 2/4] bpf: introduce helper bpf_get_task_stack()
       [not found] ` <20200629055530.3244342-3-songliubraving@fb.com>
@ 2020-06-30  4:18   ` Alexei Starovoitov
  2020-06-30  6:12     ` Song Liu
  0 siblings, 1 reply; 8+ messages in thread
From: Alexei Starovoitov @ 2020-06-30  4:18 UTC (permalink / raw)
  To: Song Liu
  Cc: bpf, Network Development, LKML, Peter Zijlstra,
	Alexei Starovoitov, Daniel Borkmann, Kernel Team, John Fastabend,
	KP Singh, Andrii Nakryiko

On Sun, Jun 28, 2020 at 10:58 PM Song Liu <songliubraving@fb.com> wrote:
>
> Introduce helper bpf_get_task_stack(), which dumps stack trace of given
> task. This is different from bpf_get_stack(), which gets the stack trace
> of the current task. One potential use case of bpf_get_task_stack() is to call
> it from bpf_iter__task and dump all /proc/<pid>/stack to a seq_file.
>
> bpf_get_task_stack() uses stack_trace_save_tsk() instead of
> get_perf_callchain() for kernel stack. The benefit of this choice is that
> stack_trace_save_tsk() doesn't require changes in arch/. The downside of
> using stack_trace_save_tsk() is that stack_trace_save_tsk() dumps the
> stack trace to unsigned long array. For 32-bit systems, we need to
> translate it to u64 array.
>
> Acked-by: Andrii Nakryiko <andriin@fb.com>
> Signed-off-by: Song Liu <songliubraving@fb.com>

It doesn't apply:
Applying: bpf: Introduce helper bpf_get_task_stack()
Using index info to reconstruct a base tree...
error: patch failed: kernel/bpf/stackmap.c:471
error: kernel/bpf/stackmap.c: patch does not apply
error: Did you hand edit your patch?
It does not apply to blobs recorded in its index.
Patch failed at 0002 bpf: Introduce helper bpf_get_task_stack()


* Re: [PATCH v4 bpf-next 2/4] bpf: introduce helper bpf_get_task_stack()
  2020-06-30  4:18   ` [PATCH v4 bpf-next 2/4] bpf: introduce helper bpf_get_task_stack() Alexei Starovoitov
@ 2020-06-30  6:12     ` Song Liu
  0 siblings, 0 replies; 8+ messages in thread
From: Song Liu @ 2020-06-30  6:12 UTC (permalink / raw)
  To: Alexei Starovoitov
  Cc: bpf, Network Development, LKML, Peter Zijlstra,
	Alexei Starovoitov, Daniel Borkmann, Kernel Team, John Fastabend,
	KP Singh, Andrii Nakryiko



> On Jun 29, 2020, at 9:18 PM, Alexei Starovoitov <alexei.starovoitov@gmail.com> wrote:
> 
> On Sun, Jun 28, 2020 at 10:58 PM Song Liu <songliubraving@fb.com> wrote:
>> 
>> Introduce helper bpf_get_task_stack(), which dumps stack trace of given
>> task. This is different from bpf_get_stack(), which gets the stack trace
>> of the current task. One potential use case of bpf_get_task_stack() is to call
>> it from bpf_iter__task and dump all /proc/<pid>/stack to a seq_file.
>> 
>> bpf_get_task_stack() uses stack_trace_save_tsk() instead of
>> get_perf_callchain() for kernel stack. The benefit of this choice is that
>> stack_trace_save_tsk() doesn't require changes in arch/. The downside of
>> using stack_trace_save_tsk() is that stack_trace_save_tsk() dumps the
>> stack trace to unsigned long array. For 32-bit systems, we need to
>> translate it to u64 array.
>> 
>> Acked-by: Andrii Nakryiko <andriin@fb.com>
>> Signed-off-by: Song Liu <songliubraving@fb.com>
> 
> It doesn't apply:
> Applying: bpf: Introduce helper bpf_get_task_stack()
> Using index info to reconstruct a base tree...
> error: patch failed: kernel/bpf/stackmap.c:471
> error: kernel/bpf/stackmap.c: patch does not apply
> error: Did you hand edit your patch?
> It does not apply to blobs recorded in its index.
> Patch failed at 0002 bpf: Introduce helper bpf_get_task_stack()

Hmm.. seems "git format-patch -b" (--ignore-space-change) breaks it:

# without -b, works fine

$ git format-patch HEAD~1
0001-bpf-introduce-helper-bpf_get_task_stack.patch
$ git reset --hard HEAD~1
HEAD is now at c385fe4fbd7bc perf: expose get/put_callchain_entry()
$ git am ./0001-bpf-introduce-helper-bpf_get_task_stack.patch
Applying: bpf: introduce helper bpf_get_task_stack()


# with -b, doesn't apply :(

$ git format-patch -b HEAD~1
0001-bpf-introduce-helper-bpf_get_task_stack.patch
$ git reset --hard HEAD~1
HEAD is now at c385fe4fbd7bc perf: expose get/put_callchain_entry()
$ git am ./0001-bpf-introduce-helper-bpf_get_task_stack.patch
Applying: bpf: introduce helper bpf_get_task_stack()
error: patch failed: kernel/bpf/stackmap.c:471
error: kernel/bpf/stackmap.c: patch does not apply
Patch failed at 0001 bpf: introduce helper bpf_get_task_stack()
hint: Use 'git am --show-current-patch' to see the failed patch
When you have resolved this problem, run "git am --continue".
If you prefer to skip this patch, run "git am --skip" instead.
To restore the original branch and stop patching, run "git am --abort".

Let me see how to fix it...


