* [PATCH bpf 0/6] bpf: enhancements for multi-function programs
@ 2018-05-17  6:35 Sandipan Das
  2018-05-17  6:35 ` [PATCH bpf 1/6] bpf: support 64-bit offsets for bpf function calls Sandipan Das
                   ` (5 more replies)
  0 siblings, 6 replies; 9+ messages in thread
From: Sandipan Das @ 2018-05-17  6:35 UTC (permalink / raw)
  To: ast, daniel; +Cc: netdev, linuxppc-dev, naveen.n.rao

This patch series introduces the following:

[1] Support for bpf-to-bpf function calls in the powerpc64 JIT compiler.

[2] A way to resolve function calls, made necessary by the way JITed
    images are allocated on powerpc64.

[3] Fix to get JITed instruction dumps for multi-function programs from
    the bpf system call.

Sandipan Das (6):
  bpf: support 64-bit offsets for bpf function calls
  bpf: powerpc64: add JIT support for multi-function programs
  bpf: get kernel symbol addresses via syscall
  tools: bpf: sync bpf uapi header
  tools: bpftool: resolve calls without using imm field
  bpf: fix JITed dump for multi-function programs via syscall

 arch/powerpc/net/bpf_jit_comp64.c | 79 ++++++++++++++++++++++++++++++++++-----
 include/uapi/linux/bpf.h          |  2 +
 kernel/bpf/syscall.c              | 56 ++++++++++++++++++++++++---
 kernel/bpf/verifier.c             | 22 +++++++----
 tools/bpf/bpftool/prog.c          | 31 +++++++++++++++
 tools/bpf/bpftool/xlated_dumper.c | 24 ++++++++----
 tools/bpf/bpftool/xlated_dumper.h |  2 +
 tools/include/uapi/linux/bpf.h    |  2 +
 8 files changed, 189 insertions(+), 29 deletions(-)

-- 
2.14.3

^ permalink raw reply	[flat|nested] 9+ messages in thread

* [PATCH bpf 1/6] bpf: support 64-bit offsets for bpf function calls
  2018-05-17  6:35 [PATCH bpf 0/6] bpf: enhancements for multi-function programs Sandipan Das
@ 2018-05-17  6:35 ` Sandipan Das
  2018-05-17  6:35 ` [PATCH bpf 2/6] bpf: powerpc64: add JIT support for multi-function programs Sandipan Das
                   ` (4 subsequent siblings)
  5 siblings, 0 replies; 9+ messages in thread
From: Sandipan Das @ 2018-05-17  6:35 UTC (permalink / raw)
  To: ast, daniel; +Cc: netdev, linuxppc-dev, naveen.n.rao

The imm field of a bpf instruction is a signed 32-bit integer.
For JITed bpf-to-bpf function calls, it stores the offset of the
start address of the callee's JITed image from __bpf_call_base.

For some architectures, such as powerpc64, this offset may be
as large as 64 bits and cannot be accommodated in the imm field
without truncation.

We resolve this by:

[1] Using the auxiliary data of each function to keep a list of
    the start addresses of the JITed images of all functions
    determined by the verifier.

[2] Retaining the subprog id inside the off field of the call
    instructions and using it to index into this list to look
    up the callee's address.

To make sure that the existing JIT compilers continue to work
without requiring changes, we keep the imm field as it is.

Signed-off-by: Sandipan Das <sandipan@linux.vnet.ibm.com>
---
 kernel/bpf/verifier.c | 15 ++++++++++++++-
 1 file changed, 14 insertions(+), 1 deletion(-)

diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index d5e1a6c4165d..aa76879f4fd1 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -5373,11 +5373,24 @@ static int jit_subprogs(struct bpf_verifier_env *env)
 			    insn->src_reg != BPF_PSEUDO_CALL)
 				continue;
 			subprog = insn->off;
-			insn->off = 0;
 			insn->imm = (u64 (*)(u64, u64, u64, u64, u64))
 				func[subprog]->bpf_func -
 				__bpf_call_base;
 		}
+
+		/* we use the aux data to keep a list of the start addresses
+		 * of the JITed images for each function in the program
+		 *
+		 * for some architectures, such as powerpc64, the imm field
+		 * might not be large enough to hold the offset of the start
+		 * address of the callee's JITed image from __bpf_call_base
+		 *
+		 * in such cases, we can lookup the start address of a callee
+		 * by using its subprog id, available from the off field of
+		 * the call instruction, as an index for this list
+		 */
+		func[i]->aux->func = func;
+		func[i]->aux->func_cnt = env->subprog_cnt + 1;
 	}
 	for (i = 0; i < env->subprog_cnt; i++) {
 		old_bpf_func = func[i]->bpf_func;
-- 
2.14.3


* [PATCH bpf 2/6] bpf: powerpc64: add JIT support for multi-function programs
  2018-05-17  6:35 [PATCH bpf 0/6] bpf: enhancements for multi-function programs Sandipan Das
  2018-05-17  6:35 ` [PATCH bpf 1/6] bpf: support 64-bit offsets for bpf function calls Sandipan Das
@ 2018-05-17  6:35 ` Sandipan Das
  2018-05-17  6:35 ` [PATCH bpf 3/6] bpf: get kernel symbol addresses via syscall Sandipan Das
                   ` (3 subsequent siblings)
  5 siblings, 0 replies; 9+ messages in thread
From: Sandipan Das @ 2018-05-17  6:35 UTC (permalink / raw)
  To: ast, daniel; +Cc: netdev, linuxppc-dev, naveen.n.rao

This adds support for bpf-to-bpf function calls in the powerpc64
JIT compiler. The JIT compiler converts the bpf call instructions
to native branch instructions. After a round of the usual passes,
the start addresses of the JITed images for the callee functions
are known. Finally, an extra pass is needed to fix up the branch
target addresses.

Because of the address range in which JITed images are allocated
on powerpc64, the offsets of the start addresses of these images
from __bpf_call_base are as large as 64 bits. So, for a function
call, we cannot use the imm field of the instruction to determine
the callee's address. Instead, we use the alternative method of
getting it from the list of function addresses in the auxiliary
data of the caller by using the off field as an index.

Signed-off-by: Sandipan Das <sandipan@linux.vnet.ibm.com>
---
 arch/powerpc/net/bpf_jit_comp64.c | 79 ++++++++++++++++++++++++++++++++++-----
 1 file changed, 69 insertions(+), 10 deletions(-)

diff --git a/arch/powerpc/net/bpf_jit_comp64.c b/arch/powerpc/net/bpf_jit_comp64.c
index 1bdb1aff0619..25939892d8f7 100644
--- a/arch/powerpc/net/bpf_jit_comp64.c
+++ b/arch/powerpc/net/bpf_jit_comp64.c
@@ -256,7 +256,7 @@ static void bpf_jit_emit_tail_call(u32 *image, struct codegen_context *ctx, u32
 /* Assemble the body code between the prologue & epilogue */
 static int bpf_jit_build_body(struct bpf_prog *fp, u32 *image,
 			      struct codegen_context *ctx,
-			      u32 *addrs)
+			      u32 *addrs, bool extra_pass)
 {
 	const struct bpf_insn *insn = fp->insnsi;
 	int flen = fp->len;
@@ -712,11 +712,23 @@ static int bpf_jit_build_body(struct bpf_prog *fp, u32 *image,
 			break;
 
 		/*
-		 * Call kernel helper
+		 * Call kernel helper or bpf function
 		 */
 		case BPF_JMP | BPF_CALL:
 			ctx->seen |= SEEN_FUNC;
-			func = (u8 *) __bpf_call_base + imm;
+
+			/* bpf function call */
+			if (insn[i].src_reg == BPF_PSEUDO_CALL && extra_pass)
+				if (fp->aux->func && off < fp->aux->func_cnt)
+					/* use the subprog id from the off
+					 * field to lookup the callee address
+					 */
+					func = (u8 *) fp->aux->func[off]->bpf_func;
+				else
+					return -EINVAL;
+			/* kernel helper call */
+			else
+				func = (u8 *) __bpf_call_base + imm;
 
 			bpf_jit_emit_func_call(image, ctx, (u64)func);
 
@@ -864,6 +876,14 @@ static int bpf_jit_build_body(struct bpf_prog *fp, u32 *image,
 	return 0;
 }
 
+struct powerpc64_jit_data {
+	struct bpf_binary_header *header;
+	u32 *addrs;
+	u8 *image;
+	u32 proglen;
+	struct codegen_context ctx;
+};
+
 struct bpf_prog *bpf_int_jit_compile(struct bpf_prog *fp)
 {
 	u32 proglen;
@@ -871,6 +891,7 @@ struct bpf_prog *bpf_int_jit_compile(struct bpf_prog *fp)
 	u8 *image = NULL;
 	u32 *code_base;
 	u32 *addrs;
+	struct powerpc64_jit_data *jit_data;
 	struct codegen_context cgctx;
 	int pass;
 	int flen;
@@ -878,6 +899,7 @@ struct bpf_prog *bpf_int_jit_compile(struct bpf_prog *fp)
 	struct bpf_prog *org_fp = fp;
 	struct bpf_prog *tmp_fp;
 	bool bpf_blinded = false;
+	bool extra_pass = false;
 
 	if (!fp->jit_requested)
 		return org_fp;
@@ -891,7 +913,28 @@ struct bpf_prog *bpf_int_jit_compile(struct bpf_prog *fp)
 		fp = tmp_fp;
 	}
 
+	jit_data = fp->aux->jit_data;
+	if (!jit_data) {
+		jit_data = kzalloc(sizeof(*jit_data), GFP_KERNEL);
+		if (!jit_data) {
+			fp = org_fp;
+			goto out;
+		}
+		fp->aux->jit_data = jit_data;
+	}
+
 	flen = fp->len;
+	addrs = jit_data->addrs;
+	if (addrs) {
+		cgctx = jit_data->ctx;
+		image = jit_data->image;
+		bpf_hdr = jit_data->header;
+		proglen = jit_data->proglen;
+		alloclen = proglen + FUNCTION_DESCR_SIZE;
+		extra_pass = true;
+		goto skip_init_ctx;
+	}
+
 	addrs = kzalloc((flen+1) * sizeof(*addrs), GFP_KERNEL);
 	if (addrs == NULL) {
 		fp = org_fp;
@@ -904,10 +947,10 @@ struct bpf_prog *bpf_int_jit_compile(struct bpf_prog *fp)
 	cgctx.stack_size = round_up(fp->aux->stack_depth, 16);
 
 	/* Scouting faux-generate pass 0 */
-	if (bpf_jit_build_body(fp, 0, &cgctx, addrs)) {
+	if (bpf_jit_build_body(fp, 0, &cgctx, addrs, false)) {
 		/* We hit something illegal or unsupported. */
 		fp = org_fp;
-		goto out;
+		goto out_addrs;
 	}
 
 	/*
@@ -925,9 +968,10 @@ struct bpf_prog *bpf_int_jit_compile(struct bpf_prog *fp)
 			bpf_jit_fill_ill_insns);
 	if (!bpf_hdr) {
 		fp = org_fp;
-		goto out;
+		goto out_addrs;
 	}
 
+skip_init_ctx:
 	code_base = (u32 *)(image + FUNCTION_DESCR_SIZE);
 
 	/* Code generation passes 1-2 */
@@ -935,7 +979,7 @@ struct bpf_prog *bpf_int_jit_compile(struct bpf_prog *fp)
 		/* Now build the prologue, body code & epilogue for real. */
 		cgctx.idx = 0;
 		bpf_jit_build_prologue(code_base, &cgctx);
-		bpf_jit_build_body(fp, code_base, &cgctx, addrs);
+		bpf_jit_build_body(fp, code_base, &cgctx, addrs, extra_pass);
 		bpf_jit_build_epilogue(code_base, &cgctx);
 
 		if (bpf_jit_enable > 1)
@@ -956,15 +1000,30 @@ struct bpf_prog *bpf_int_jit_compile(struct bpf_prog *fp)
 	((u64 *)image)[1] = local_paca->kernel_toc;
 #endif
 
+	bpf_flush_icache(bpf_hdr, (u8 *)bpf_hdr + (bpf_hdr->pages * PAGE_SIZE));
+
+	if (!fp->is_func || extra_pass) {
+		bpf_jit_binary_lock_ro(bpf_hdr);
+	} else {
+		jit_data->addrs = addrs;
+		jit_data->ctx = cgctx;
+		jit_data->proglen = proglen;
+		jit_data->image = image;
+		jit_data->header = bpf_hdr;
+	}
+
 	fp->bpf_func = (void *)image;
 	fp->jited = 1;
 	fp->jited_len = alloclen;
 
-	bpf_flush_icache(bpf_hdr, (u8 *)bpf_hdr + (bpf_hdr->pages * PAGE_SIZE));
+	if (!fp->is_func || extra_pass) {
+out_addrs:
+		kfree(addrs);
+		kfree(jit_data);
+		fp->aux->jit_data = NULL;
+	}
 
 out:
-	kfree(addrs);
-
 	if (bpf_blinded)
 		bpf_jit_prog_release_other(fp, fp == org_fp ? tmp_fp : org_fp);
 
-- 
2.14.3


* [PATCH bpf 3/6] bpf: get kernel symbol addresses via syscall
  2018-05-17  6:35 [PATCH bpf 0/6] bpf: enhancements for multi-function programs Sandipan Das
  2018-05-17  6:35 ` [PATCH bpf 1/6] bpf: support 64-bit offsets for bpf function calls Sandipan Das
  2018-05-17  6:35 ` [PATCH bpf 2/6] bpf: powerpc64: add JIT support for multi-function programs Sandipan Das
@ 2018-05-17  6:35 ` Sandipan Das
  2018-05-17  6:35 ` [PATCH bpf 4/6] tools: bpf: sync bpf uapi header Sandipan Das
                   ` (2 subsequent siblings)
  5 siblings, 0 replies; 9+ messages in thread
From: Sandipan Das @ 2018-05-17  6:35 UTC (permalink / raw)
  To: ast, daniel; +Cc: netdev, linuxppc-dev, naveen.n.rao

This adds two new fields to struct bpf_prog_info. For
multi-function programs, these fields can be used to pass
a list of the kernel symbol addresses of all functions in
a given program to userspace via the bpf system call with
the BPF_OBJ_GET_INFO_BY_FD command.

When bpf_jit_kallsyms is enabled, we can get the address
of the corresponding kernel symbol for a callee function
and resolve the symbol's name. The address is determined
by adding the value of the call instruction's imm field
to __bpf_call_base. This offset gets assigned to the imm
field by the verifier.

For some architectures, such as powerpc64, the imm field
is not large enough to hold this offset.

We resolve this by:

[1] Assigning the subprog id to the imm field of a call
    instruction in the verifier instead of the offset of
    the callee's symbol's address from __bpf_call_base.

[2] Determining the address of a callee's corresponding
    symbol by using the imm field as an index for the
    list of kernel symbol addresses now available from
    the program info.

Suggested-by: Daniel Borkmann <daniel@iogearbox.net>
Signed-off-by: Sandipan Das <sandipan@linux.vnet.ibm.com>
---
 include/uapi/linux/bpf.h |  2 ++
 kernel/bpf/syscall.c     | 20 ++++++++++++++++++++
 kernel/bpf/verifier.c    |  7 +------
 3 files changed, 23 insertions(+), 6 deletions(-)

diff --git a/include/uapi/linux/bpf.h b/include/uapi/linux/bpf.h
index 93d5a4eeec2a..061482d3be11 100644
--- a/include/uapi/linux/bpf.h
+++ b/include/uapi/linux/bpf.h
@@ -2108,6 +2108,8 @@ struct bpf_prog_info {
 	__u32 xlated_prog_len;
 	__aligned_u64 jited_prog_insns;
 	__aligned_u64 xlated_prog_insns;
+	__aligned_u64 jited_ksyms;
+	__u32 nr_jited_ksyms;
 	__u64 load_time;	/* ns since boottime */
 	__u32 created_by_uid;
 	__u32 nr_map_ids;
diff --git a/kernel/bpf/syscall.c b/kernel/bpf/syscall.c
index c286e75ec087..03c8437a2990 100644
--- a/kernel/bpf/syscall.c
+++ b/kernel/bpf/syscall.c
@@ -1933,6 +1933,7 @@ static int bpf_prog_get_info_by_fd(struct bpf_prog *prog,
 	if (!capable(CAP_SYS_ADMIN)) {
 		info.jited_prog_len = 0;
 		info.xlated_prog_len = 0;
+		info.nr_jited_ksyms = 0;
 		goto done;
 	}
 
@@ -1981,6 +1982,25 @@ static int bpf_prog_get_info_by_fd(struct bpf_prog *prog,
 		}
 	}
 
+	ulen = info.nr_jited_ksyms;
+	info.nr_jited_ksyms = prog->aux->func_cnt;
+	if (info.nr_jited_ksyms && ulen) {
+		u64 __user *user_jited_ksyms = u64_to_user_ptr(info.jited_ksyms);
+		ulong ksym_addr;
+		u32 i;
+
+		/* copy the address of the kernel symbol corresponding to
+		 * each function
+		 */
+		ulen = min_t(u32, info.nr_jited_ksyms, ulen);
+		for (i = 0; i < ulen; i++) {
+			ksym_addr = (ulong) prog->aux->func[i]->bpf_func;
+			ksym_addr &= PAGE_MASK;
+			if (put_user((u64) ksym_addr, &user_jited_ksyms[i]))
+				return -EFAULT;
+		}
+	}
+
 done:
 	if (copy_to_user(uinfo, &info, info_len) ||
 	    put_user(info_len, &uattr->info.info_len))
diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index aa76879f4fd1..fc864eb3e29d 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -5416,17 +5416,12 @@ static int jit_subprogs(struct bpf_verifier_env *env)
 	 * later look the same as if they were interpreted only.
 	 */
 	for (i = 0, insn = prog->insnsi; i < prog->len; i++, insn++) {
-		unsigned long addr;
-
 		if (insn->code != (BPF_JMP | BPF_CALL) ||
 		    insn->src_reg != BPF_PSEUDO_CALL)
 			continue;
 		insn->off = env->insn_aux_data[i].call_imm;
 		subprog = find_subprog(env, i + insn->off + 1);
-		addr  = (unsigned long)func[subprog]->bpf_func;
-		addr &= PAGE_MASK;
-		insn->imm = (u64 (*)(u64, u64, u64, u64, u64))
-			    addr - __bpf_call_base;
+		insn->imm = subprog;
 	}
 
 	prog->jited = 1;
-- 
2.14.3


* [PATCH bpf 4/6] tools: bpf: sync bpf uapi header
  2018-05-17  6:35 [PATCH bpf 0/6] bpf: enhancements for multi-function programs Sandipan Das
                   ` (2 preceding siblings ...)
  2018-05-17  6:35 ` [PATCH bpf 3/6] bpf: get kernel symbol addresses via syscall Sandipan Das
@ 2018-05-17  6:35 ` Sandipan Das
  2018-05-17  6:35 ` [PATCH bpf 5/6] tools: bpftool: resolve calls without using imm field Sandipan Das
  2018-05-17  6:35 ` [PATCH bpf 6/6] bpf: fix JITed dump for multi-function programs via syscall Sandipan Das
  5 siblings, 0 replies; 9+ messages in thread
From: Sandipan Das @ 2018-05-17  6:35 UTC (permalink / raw)
  To: ast, daniel; +Cc: netdev, linuxppc-dev, naveen.n.rao

Syncing the bpf.h uapi header with tools so that struct
bpf_prog_info has the two new fields for passing on the
addresses of the kernel symbols corresponding to each
function in a JITed program.

Signed-off-by: Sandipan Das <sandipan@linux.vnet.ibm.com>
---
 tools/include/uapi/linux/bpf.h | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/tools/include/uapi/linux/bpf.h b/tools/include/uapi/linux/bpf.h
index 83a95ae388dd..c14a74eea910 100644
--- a/tools/include/uapi/linux/bpf.h
+++ b/tools/include/uapi/linux/bpf.h
@@ -2107,6 +2107,8 @@ struct bpf_prog_info {
 	__u32 xlated_prog_len;
 	__aligned_u64 jited_prog_insns;
 	__aligned_u64 xlated_prog_insns;
+	__aligned_u64 jited_ksyms;
+	__u32 nr_jited_ksyms;
 	__u64 load_time;	/* ns since boottime */
 	__u32 created_by_uid;
 	__u32 nr_map_ids;
-- 
2.14.3


* [PATCH bpf 5/6] tools: bpftool: resolve calls without using imm field
  2018-05-17  6:35 [PATCH bpf 0/6] bpf: enhancements for multi-function programs Sandipan Das
                   ` (3 preceding siblings ...)
  2018-05-17  6:35 ` [PATCH bpf 4/6] tools: bpf: sync bpf uapi header Sandipan Das
@ 2018-05-17  6:35 ` Sandipan Das
  2018-05-17 18:51   ` Jakub Kicinski
  2018-05-17  6:35 ` [PATCH bpf 6/6] bpf: fix JITed dump for multi-function programs via syscall Sandipan Das
  5 siblings, 1 reply; 9+ messages in thread
From: Sandipan Das @ 2018-05-17  6:35 UTC (permalink / raw)
  To: ast, daniel; +Cc: netdev, linuxppc-dev, naveen.n.rao

Currently, we resolve the callee's address for a JITed function
call by using the imm field of the call instruction as an offset
from __bpf_call_base. If bpf_jit_kallsyms is enabled, we further
use this address to get the callee's kernel symbol's name.

For some architectures, such as powerpc64, the imm field is not
large enough to hold this offset. So, instead of assigning this
offset to the imm field, the verifier now assigns the subprog
id. Also, a list of kernel symbol addresses for all the JITed
functions is provided in the program info. We now use the imm
field as an index into this list to look up a callee's symbol's
address and resolve its name.

Suggested-by: Daniel Borkmann <daniel@iogearbox.net>
Signed-off-by: Sandipan Das <sandipan@linux.vnet.ibm.com>
---
 tools/bpf/bpftool/prog.c          | 31 +++++++++++++++++++++++++++++++
 tools/bpf/bpftool/xlated_dumper.c | 24 +++++++++++++++++-------
 tools/bpf/bpftool/xlated_dumper.h |  2 ++
 3 files changed, 50 insertions(+), 7 deletions(-)

diff --git a/tools/bpf/bpftool/prog.c b/tools/bpf/bpftool/prog.c
index 9bdfdf2d3fbe..ac2f62a97e84 100644
--- a/tools/bpf/bpftool/prog.c
+++ b/tools/bpf/bpftool/prog.c
@@ -430,6 +430,10 @@ static int do_dump(int argc, char **argv)
 	unsigned char *buf;
 	__u32 *member_len;
 	__u64 *member_ptr;
+	unsigned int nr_addrs;
+	unsigned long *addrs = NULL;
+	__u32 *ksyms_len;
+	__u64 *ksyms_ptr;
 	ssize_t n;
 	int err;
 	int fd;
@@ -437,6 +441,8 @@ static int do_dump(int argc, char **argv)
 	if (is_prefix(*argv, "jited")) {
 		member_len = &info.jited_prog_len;
 		member_ptr = &info.jited_prog_insns;
+		ksyms_len = &info.nr_jited_ksyms;
+		ksyms_ptr = &info.jited_ksyms;
 	} else if (is_prefix(*argv, "xlated")) {
 		member_len = &info.xlated_prog_len;
 		member_ptr = &info.xlated_prog_insns;
@@ -496,10 +502,23 @@ static int do_dump(int argc, char **argv)
 		return -1;
 	}
 
+	nr_addrs = *ksyms_len;
+	if (nr_addrs) {
+		addrs = malloc(nr_addrs * sizeof(__u64));
+		if (!addrs) {
+			p_err("mem alloc failed");
+			free(buf);
+			close(fd);
+			return -1;
+		}
+	}
+
 	memset(&info, 0, sizeof(info));
 
 	*member_ptr = ptr_to_u64(buf);
 	*member_len = buf_size;
+	*ksyms_ptr = ptr_to_u64(addrs);
+	*ksyms_len = nr_addrs;
 
 	err = bpf_obj_get_info_by_fd(fd, &info, &len);
 	close(fd);
@@ -513,6 +532,11 @@ static int do_dump(int argc, char **argv)
 		goto err_free;
 	}
 
+	if (*ksyms_len > nr_addrs) {
+		p_err("too many addresses returned");
+		goto err_free;
+	}
+
 	if ((member_len == &info.jited_prog_len &&
 	     info.jited_prog_insns == 0) ||
 	    (member_len == &info.xlated_prog_len &&
@@ -558,6 +582,9 @@ static int do_dump(int argc, char **argv)
 			dump_xlated_cfg(buf, *member_len);
 	} else {
 		kernel_syms_load(&dd);
+		dd.jited_ksyms = ksyms_ptr;
+		dd.nr_jited_ksyms = *ksyms_len;
+
 		if (json_output)
 			dump_xlated_json(&dd, buf, *member_len, opcodes);
 		else
@@ -566,10 +593,14 @@ static int do_dump(int argc, char **argv)
 	}
 
 	free(buf);
+	if (addrs)
+		free(addrs);
 	return 0;
 
 err_free:
 	free(buf);
+	if (addrs)
+		free(addrs);
 	return -1;
 }
 
diff --git a/tools/bpf/bpftool/xlated_dumper.c b/tools/bpf/bpftool/xlated_dumper.c
index 7a3173b76c16..dc8e4eca0387 100644
--- a/tools/bpf/bpftool/xlated_dumper.c
+++ b/tools/bpf/bpftool/xlated_dumper.c
@@ -178,8 +178,12 @@ static const char *print_call_pcrel(struct dump_data *dd,
 		snprintf(dd->scratch_buff, sizeof(dd->scratch_buff),
 			 "%+d#%s", insn->off, sym->name);
 	else
-		snprintf(dd->scratch_buff, sizeof(dd->scratch_buff),
-			 "%+d#0x%lx", insn->off, address);
+		if (address)
+			snprintf(dd->scratch_buff, sizeof(dd->scratch_buff),
+				 "%+d#0x%lx", insn->off, address);
+		else
+			snprintf(dd->scratch_buff, sizeof(dd->scratch_buff),
+				 "%+d", insn->off);
 	return dd->scratch_buff;
 }
 
@@ -200,14 +204,20 @@ static const char *print_call(void *private_data,
 			      const struct bpf_insn *insn)
 {
 	struct dump_data *dd = private_data;
-	unsigned long address = dd->address_call_base + insn->imm;
-	struct kernel_sym *sym;
+	unsigned long address = 0;
+	struct kernel_sym *sym = NULL;
 
-	sym = kernel_syms_search(dd, address);
-	if (insn->src_reg == BPF_PSEUDO_CALL)
+	if (insn->src_reg == BPF_PSEUDO_CALL) {
+		if (dd->nr_jited_ksyms) {
+			address = dd->jited_ksyms[insn->imm];
+			sym = kernel_syms_search(dd, address);
+		}
 		return print_call_pcrel(dd, sym, address, insn);
-	else
+	} else {
+		address = dd->address_call_base + insn->imm;
+		sym = kernel_syms_search(dd, address);
 		return print_call_helper(dd, sym, address);
+	}
 }
 
 static const char *print_imm(void *private_data,
diff --git a/tools/bpf/bpftool/xlated_dumper.h b/tools/bpf/bpftool/xlated_dumper.h
index b34affa7ef2d..eafbb49c8d0b 100644
--- a/tools/bpf/bpftool/xlated_dumper.h
+++ b/tools/bpf/bpftool/xlated_dumper.h
@@ -49,6 +49,8 @@ struct dump_data {
 	unsigned long address_call_base;
 	struct kernel_sym *sym_mapping;
 	__u32 sym_count;
+	__u64 *jited_ksyms;
+	__u32 nr_jited_ksyms;
 	char scratch_buff[SYM_MAX_NAME + 8];
 };
 
-- 
2.14.3


* [PATCH bpf 6/6] bpf: fix JITed dump for multi-function programs via syscall
  2018-05-17  6:35 [PATCH bpf 0/6] bpf: enhancements for multi-function programs Sandipan Das
                   ` (4 preceding siblings ...)
  2018-05-17  6:35 ` [PATCH bpf 5/6] tools: bpftool: resolve calls without using imm field Sandipan Das
@ 2018-05-17  6:35 ` Sandipan Das
  5 siblings, 0 replies; 9+ messages in thread
From: Sandipan Das @ 2018-05-17  6:35 UTC (permalink / raw)
  To: ast, daniel; +Cc: netdev, linuxppc-dev, naveen.n.rao

Currently, for multi-function programs, we cannot get the JITed
instructions using the bpf system call's BPF_OBJ_GET_INFO_BY_FD
command. Because of this, userspace tools such as bpftool cannot
determine whether a multi-function program has been JITed.

With the JIT enabled and the test program running, this can be
verified as follows:

  # cat /proc/sys/net/core/bpf_jit_enable
  1

Before applying this patch:

  # bpftool prog list
  1: kprobe  name foo  tag b811aab41a39ad3d  gpl
          loaded_at 2018-05-16T11:43:38+0530  uid 0
          xlated 216B  not jited  memlock 65536B
  ...

  # bpftool prog dump jited id 1
  no instructions returned

After applying this patch:

  # bpftool prog list
  1: kprobe  name foo  tag b811aab41a39ad3d  gpl
          loaded_at 2018-05-16T12:13:01+0530  uid 0
          xlated 216B  jited 308B  memlock 65536B
  ...

  # bpftool prog dump jited id 1
     0:   nop
     4:   nop
     8:   mflr    r0
     c:   std     r0,16(r1)
    10:   stdu    r1,-112(r1)
    14:   std     r31,104(r1)
    18:   addi    r31,r1,48
    1c:   li      r3,10
  ...

Signed-off-by: Sandipan Das <sandipan@linux.vnet.ibm.com>
---
 kernel/bpf/syscall.c | 38 ++++++++++++++++++++++++++++++++------
 1 file changed, 32 insertions(+), 6 deletions(-)

diff --git a/kernel/bpf/syscall.c b/kernel/bpf/syscall.c
index 03c8437a2990..b2f70718aca7 100644
--- a/kernel/bpf/syscall.c
+++ b/kernel/bpf/syscall.c
@@ -1896,7 +1896,7 @@ static int bpf_prog_get_info_by_fd(struct bpf_prog *prog,
 	struct bpf_prog_info info = {};
 	u32 info_len = attr->info.info_len;
 	char __user *uinsns;
-	u32 ulen;
+	u32 ulen, i;
 	int err;
 
 	err = check_uarg_tail_zero(uinfo, sizeof(info), info_len);
@@ -1922,7 +1922,6 @@ static int bpf_prog_get_info_by_fd(struct bpf_prog *prog,
 	ulen = min_t(u32, info.nr_map_ids, ulen);
 	if (ulen) {
 		u32 __user *user_map_ids = u64_to_user_ptr(info.map_ids);
-		u32 i;
 
 		for (i = 0; i < ulen; i++)
 			if (put_user(prog->aux->used_maps[i]->id,
@@ -1970,13 +1969,41 @@ static int bpf_prog_get_info_by_fd(struct bpf_prog *prog,
 	 * for offload.
 	 */
 	ulen = info.jited_prog_len;
-	info.jited_prog_len = prog->jited_len;
+	if (prog->aux->func_cnt) {
+		info.jited_prog_len = 0;
+		for (i = 0; i < prog->aux->func_cnt; i++)
+			info.jited_prog_len += prog->aux->func[i]->jited_len;
+	} else {
+		info.jited_prog_len = prog->jited_len;
+	}
+
 	if (info.jited_prog_len && ulen) {
 		if (bpf_dump_raw_ok()) {
 			uinsns = u64_to_user_ptr(info.jited_prog_insns);
 			ulen = min_t(u32, info.jited_prog_len, ulen);
-			if (copy_to_user(uinsns, prog->bpf_func, ulen))
-				return -EFAULT;
+
+			/* for multi-function programs, copy the JITed
+			 * instructions for all the functions
+			 */
+			if (prog->aux->func_cnt) {
+				u32 len, free;
+				u8 *img;
+
+				free = ulen;
+				for (i = 0; i < prog->aux->func_cnt; i++) {
+					len = prog->aux->func[i]->jited_len;
+					img = (u8 *) prog->aux->func[i]->bpf_func;
+					if (len > free)
+						break;
+					if (copy_to_user(uinsns, img, len))
+						return -EFAULT;
+					uinsns += len;
+					free -= len;
+				}
+			} else {
+				if (copy_to_user(uinsns, prog->bpf_func, ulen))
+					return -EFAULT;
+			}
 		} else {
 			info.jited_prog_insns = 0;
 		}
@@ -1987,7 +2014,6 @@ static int bpf_prog_get_info_by_fd(struct bpf_prog *prog,
 	if (info.nr_jited_ksyms && ulen) {
 		u64 __user *user_jited_ksyms = u64_to_user_ptr(info.jited_ksyms);
 		ulong ksym_addr;
-		u32 i;
 
 		/* copy the address of the kernel symbol corresponding to
 		 * each function
-- 
2.14.3


* Re: [PATCH bpf 5/6] tools: bpftool: resolve calls without using imm field
  2018-05-17  6:35 ` [PATCH bpf 5/6] tools: bpftool: resolve calls without using imm field Sandipan Das
@ 2018-05-17 18:51   ` Jakub Kicinski
  2018-05-18  4:28     ` Sandipan Das
  0 siblings, 1 reply; 9+ messages in thread
From: Jakub Kicinski @ 2018-05-17 18:51 UTC (permalink / raw)
  To: Sandipan Das; +Cc: ast, daniel, netdev, linuxppc-dev, naveen.n.rao

On Thu, 17 May 2018 12:05:47 +0530, Sandipan Das wrote:
> Currently, we resolve the callee's address for a JITed function
> call by using the imm field of the call instruction as an offset
> from __bpf_call_base. If bpf_jit_kallsyms is enabled, we further
> use this address to get the callee's kernel symbol's name.
> 
> For some architectures, such as powerpc64, the imm field is not
> large enough to hold this offset. So, instead of assigning this
> offset to the imm field, the verifier now assigns the subprog
> id. Also, a list of kernel symbol addresses for all the JITed
> functions is provided in the program info. We now use the imm
> field as an index for this list to lookup a callee's symbol's
> address and resolve its name.
> 
> Suggested-by: Daniel Borkmann <daniel@iogearbox.net>
> Signed-off-by: Sandipan Das <sandipan@linux.vnet.ibm.com>

A few nit-picks below, thank you for the patch!

>  tools/bpf/bpftool/prog.c          | 31 +++++++++++++++++++++++++++++++
>  tools/bpf/bpftool/xlated_dumper.c | 24 +++++++++++++++++-------
>  tools/bpf/bpftool/xlated_dumper.h |  2 ++
>  3 files changed, 50 insertions(+), 7 deletions(-)
> 
> diff --git a/tools/bpf/bpftool/prog.c b/tools/bpf/bpftool/prog.c
> index 9bdfdf2d3fbe..ac2f62a97e84 100644
> --- a/tools/bpf/bpftool/prog.c
> +++ b/tools/bpf/bpftool/prog.c
> @@ -430,6 +430,10 @@ static int do_dump(int argc, char **argv)
>  	unsigned char *buf;
>  	__u32 *member_len;
>  	__u64 *member_ptr;
> +	unsigned int nr_addrs;
> +	unsigned long *addrs = NULL;
> +	__u32 *ksyms_len;
> +	__u64 *ksyms_ptr;

nit: please try to keep the variables ordered longest to shortest like
we do in networking code (please do it in all functions).

>  	ssize_t n;
>  	int err;
>  	int fd;
> @@ -437,6 +441,8 @@ static int do_dump(int argc, char **argv)
>  	if (is_prefix(*argv, "jited")) {
>  		member_len = &info.jited_prog_len;
>  		member_ptr = &info.jited_prog_insns;
> +		ksyms_len = &info.nr_jited_ksyms;
> +		ksyms_ptr = &info.jited_ksyms;
>  	} else if (is_prefix(*argv, "xlated")) {
>  		member_len = &info.xlated_prog_len;
>  		member_ptr = &info.xlated_prog_insns;
> @@ -496,10 +502,23 @@ static int do_dump(int argc, char **argv)
>  		return -1;
>  	}
>  
> +	nr_addrs = *ksyms_len;

Here and ...

> +	if (nr_addrs) {
> +		addrs = malloc(nr_addrs * sizeof(__u64));
> +		if (!addrs) {
> +			p_err("mem alloc failed");
> +			free(buf);
> +			close(fd);
> +			return -1;

You can just jump to err_free here.

> +		}
> +	}
> +
>  	memset(&info, 0, sizeof(info));
>  
>  	*member_ptr = ptr_to_u64(buf);
>  	*member_len = buf_size;
> +	*ksyms_ptr = ptr_to_u64(addrs);
> +	*ksyms_len = nr_addrs;

... here - this function is getting long, so maybe I'm not seeing
something, but are ksyms_ptr and ksyms_len guaranteed to be initialized?

>  	err = bpf_obj_get_info_by_fd(fd, &info, &len);
>  	close(fd);
> @@ -513,6 +532,11 @@ static int do_dump(int argc, char **argv)
>  		goto err_free;
>  	}
>  
> +	if (*ksyms_len > nr_addrs) {
> +		p_err("too many addresses returned");
> +		goto err_free;
> +	}
> +
>  	if ((member_len == &info.jited_prog_len &&
>  	     info.jited_prog_insns == 0) ||
>  	    (member_len == &info.xlated_prog_len &&
> @@ -558,6 +582,9 @@ static int do_dump(int argc, char **argv)
>  			dump_xlated_cfg(buf, *member_len);
>  	} else {
>  		kernel_syms_load(&dd);
> +		dd.jited_ksyms = ksyms_ptr;
> +		dd.nr_jited_ksyms = *ksyms_len;
> +
>  		if (json_output)
>  			dump_xlated_json(&dd, buf, *member_len, opcodes);
>  		else
> @@ -566,10 +593,14 @@ static int do_dump(int argc, char **argv)
>  	}
>  
>  	free(buf);
> +	if (addrs)
> +		free(addrs);

Free can deal with NULL pointers, no need for an if.

>  	return 0;
>  
>  err_free:
>  	free(buf);
> +	if (addrs)
> +		free(addrs);
>  	return -1;
>  }
>  
> diff --git a/tools/bpf/bpftool/xlated_dumper.c b/tools/bpf/bpftool/xlated_dumper.c
> index 7a3173b76c16..dc8e4eca0387 100644
> --- a/tools/bpf/bpftool/xlated_dumper.c
> +++ b/tools/bpf/bpftool/xlated_dumper.c
> @@ -178,8 +178,12 @@ static const char *print_call_pcrel(struct dump_data *dd,
>  		snprintf(dd->scratch_buff, sizeof(dd->scratch_buff),
>  			 "%+d#%s", insn->off, sym->name);
>  	else

else if (address)

saves us the indentation.

> -		snprintf(dd->scratch_buff, sizeof(dd->scratch_buff),
> -			 "%+d#0x%lx", insn->off, address);
> +		if (address)
> +			snprintf(dd->scratch_buff, sizeof(dd->scratch_buff),
> +				 "%+d#0x%lx", insn->off, address);
> +		else
> +			snprintf(dd->scratch_buff, sizeof(dd->scratch_buff),
> +				 "%+d", insn->off);
>  	return dd->scratch_buff;
>  }
>  
> @@ -200,14 +204,20 @@ static const char *print_call(void *private_data,
>  			      const struct bpf_insn *insn)
>  {
>  	struct dump_data *dd = private_data;
> -	unsigned long address = dd->address_call_base + insn->imm;
> -	struct kernel_sym *sym;
> +	unsigned long address = 0;
> +	struct kernel_sym *sym = NULL;
>  

Hm.  Quite a bit of churn.  Why not just add these three lines here:

if (insn->src_reg == BPF_PSEUDO_CALL && 
    insn->imm < dd->nr_jited_ksyms)
	address = dd->jited_ksyms[insn->imm];

> -	sym = kernel_syms_search(dd, address);
> -	if (insn->src_reg == BPF_PSEUDO_CALL)
> +	if (insn->src_reg == BPF_PSEUDO_CALL) {
> +		if (dd->nr_jited_ksyms) {
> +			address = dd->jited_ksyms[insn->imm];

Perhaps it's paranoid, but please do bounds-check insn->imm
against dd->nr_jited_ksyms.
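Folding that bound check in, the lookup might look like this sketch (the struct and helper names are simplified stand-ins for the ones in xlated_dumper.h):

```c
#include <assert.h>

/* Simplified stand-in for struct dump_data from xlated_dumper.h. */
struct dd_sketch {
	unsigned long *jited_ksyms;
	unsigned int nr_jited_ksyms;
};

/* Bounds-checked variant of the lookup: imm is only trusted as an
 * index when it falls inside the reported ksym table; otherwise 0
 * ("no address") is returned instead of reading past the array. */
static unsigned long ksym_addr(const struct dd_sketch *dd, int imm)
{
	if (imm >= 0 && (unsigned int)imm < dd->nr_jited_ksyms)
		return dd->jited_ksyms[imm];
	return 0;
}
```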

> +			sym = kernel_syms_search(dd, address);
> +		}
>  		return print_call_pcrel(dd, sym, address, insn);
> -	else
> +	} else {
> +		address = dd->address_call_base + insn->imm;
> +		sym = kernel_syms_search(dd, address);
>  		return print_call_helper(dd, sym, address);
> +	}
>  }
>  
>  static const char *print_imm(void *private_data,


* Re: [PATCH bpf 5/6] tools: bpftool: resolve calls without using imm field
  2018-05-17 18:51   ` Jakub Kicinski
@ 2018-05-18  4:28     ` Sandipan Das
  0 siblings, 0 replies; 9+ messages in thread
From: Sandipan Das @ 2018-05-18  4:28 UTC (permalink / raw)
  To: Jakub Kicinski; +Cc: netdev, naveen.n.rao, linuxppc-dev, ast, daniel

Hi Jakub,

On 05/18/2018 12:21 AM, Jakub Kicinski wrote:
> On Thu, 17 May 2018 12:05:47 +0530, Sandipan Das wrote:
>> Currently, we resolve the callee's address for a JITed function
>> call by using the imm field of the call instruction as an offset
>> from __bpf_call_base. If bpf_jit_kallsyms is enabled, we further
>> use this address to get the callee's kernel symbol's name.
>>
>> For some architectures, such as powerpc64, the imm field is not
>> large enough to hold this offset. So, instead of assigning this
>> offset to the imm field, the verifier now assigns the subprog
>> id. Also, a list of kernel symbol addresses for all the JITed
>> functions is provided in the program info. We now use the imm
>> field as an index for this list to lookup a callee's symbol's
>> address and resolve its name.
>>
>> Suggested-by: Daniel Borkmann <daniel@iogearbox.net>
>> Signed-off-by: Sandipan Das <sandipan@linux.vnet.ibm.com>
> 
> A few nit-picks below, thank you for the patch!
> 
>>  tools/bpf/bpftool/prog.c          | 31 +++++++++++++++++++++++++++++++
>>  tools/bpf/bpftool/xlated_dumper.c | 24 +++++++++++++++++-------
>>  tools/bpf/bpftool/xlated_dumper.h |  2 ++
>>  3 files changed, 50 insertions(+), 7 deletions(-)
>>
>> diff --git a/tools/bpf/bpftool/prog.c b/tools/bpf/bpftool/prog.c
>> index 9bdfdf2d3fbe..ac2f62a97e84 100644
>> --- a/tools/bpf/bpftool/prog.c
>> +++ b/tools/bpf/bpftool/prog.c
>> @@ -430,6 +430,10 @@ static int do_dump(int argc, char **argv)
>>  	unsigned char *buf;
>>  	__u32 *member_len;
>>  	__u64 *member_ptr;
>> +	unsigned int nr_addrs;
>> +	unsigned long *addrs = NULL;
>> +	__u32 *ksyms_len;
>> +	__u64 *ksyms_ptr;
> 
> nit: please try to keep the variables ordered longest to shortest like
> we do in networking code (please do it in all functions).
> 
>>  	ssize_t n;
>>  	int err;
>>  	int fd;
>> @@ -437,6 +441,8 @@ static int do_dump(int argc, char **argv)
>>  	if (is_prefix(*argv, "jited")) {
>>  		member_len = &info.jited_prog_len;
>>  		member_ptr = &info.jited_prog_insns;
>> +		ksyms_len = &info.nr_jited_ksyms;
>> +		ksyms_ptr = &info.jited_ksyms;
>>  	} else if (is_prefix(*argv, "xlated")) {
>>  		member_len = &info.xlated_prog_len;
>>  		member_ptr = &info.xlated_prog_insns;
>> @@ -496,10 +502,23 @@ static int do_dump(int argc, char **argv)
>>  		return -1;
>>  	}
>>  
>> +	nr_addrs = *ksyms_len;
> 
> Here and ...
> 
>> +	if (nr_addrs) {
>> +		addrs = malloc(nr_addrs * sizeof(__u64));
>> +		if (!addrs) {
>> +			p_err("mem alloc failed");
>> +			free(buf);
>> +			close(fd);
>> +			return -1;
> 
> You can just jump to err_free here.
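The pattern being asked for — one error label that frees whatever has been allocated so far — might be sketched like this (function and parameter names are illustrative, not the patch's):

```c
#include <assert.h>
#include <stdlib.h>

/* Minimal sketch of the single-exit cleanup pattern: every failure
 * path jumps to one label, and since free(NULL) is a no-op it is
 * safe to free everything regardless of how far allocation got. */
static int alloc_two(size_t n, unsigned char **bufp, unsigned long **addrsp)
{
	unsigned long *addrs = NULL;
	unsigned char *buf;

	buf = malloc(n);
	if (!buf)
		goto err_free;	/* buf == NULL here, free(NULL) is fine */
	addrs = malloc(n * sizeof(*addrs));
	if (!addrs)
		goto err_free;

	*bufp = buf;
	*addrsp = addrs;
	return 0;

err_free:
	free(buf);
	free(addrs);
	return -1;
}
```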
> 
>> +		}
>> +	}
>> +
>>  	memset(&info, 0, sizeof(info));
>>  
>>  	*member_ptr = ptr_to_u64(buf);
>>  	*member_len = buf_size;
>> +	*ksyms_ptr = ptr_to_u64(addrs);
>> +	*ksyms_len = nr_addrs;
> 
> ... here - this function is getting long, so maybe I'm not seeing
> something, but are ksyms_ptr and ksyms_len guaranteed to be initialized?
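One way to make the answer "yes" on every branch is to give the pointers a safe default before the is_prefix() checks — a hedged sketch of the idea (names loosely follow the patch; the zero-valued fallback is illustrative):

```c
#include <assert.h>

/* Sketch of the concern: if only the "jited" branch assigns the
 * ksyms pointers, the "xlated" branch would later read through
 * uninitialized pointers.  Pointing them at a zeroed local up front
 * makes every later access well defined. */
static unsigned int read_ksyms_len(int jited, unsigned int nr_jited_ksyms)
{
	unsigned int zero_len = 0;
	unsigned int *ksyms_len = &zero_len;	/* safe default */

	if (jited)
		ksyms_len = &nr_jited_ksyms;	/* the real field in the patch */

	return *ksyms_len;			/* never uninitialized */
}
```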
> 
>>  	err = bpf_obj_get_info_by_fd(fd, &info, &len);
>>  	close(fd);
>> @@ -513,6 +532,11 @@ static int do_dump(int argc, char **argv)
>>  		goto err_free;
>>  	}
>>  
>> +	if (*ksyms_len > nr_addrs) {
>> +		p_err("too many addresses returned");
>> +		goto err_free;
>> +	}
>> +
>>  	if ((member_len == &info.jited_prog_len &&
>>  	     info.jited_prog_insns == 0) ||
>>  	    (member_len == &info.xlated_prog_len &&
>> @@ -558,6 +582,9 @@ static int do_dump(int argc, char **argv)
>>  			dump_xlated_cfg(buf, *member_len);
>>  	} else {
>>  		kernel_syms_load(&dd);
>> +		dd.jited_ksyms = ksyms_ptr;
>> +		dd.nr_jited_ksyms = *ksyms_len;
>> +
>>  		if (json_output)
>>  			dump_xlated_json(&dd, buf, *member_len, opcodes);
>>  		else
>> @@ -566,10 +593,14 @@ static int do_dump(int argc, char **argv)
>>  	}
>>  
>>  	free(buf);
>> +	if (addrs)
>> +		free(addrs);
> 
> free() can deal with NULL pointers; there is no need for the if.
> 
>>  	return 0;
>>  
>>  err_free:
>>  	free(buf);
>> +	if (addrs)
>> +		free(addrs);
>>  	return -1;
>>  }
>>  
>> diff --git a/tools/bpf/bpftool/xlated_dumper.c b/tools/bpf/bpftool/xlated_dumper.c
>> index 7a3173b76c16..dc8e4eca0387 100644
>> --- a/tools/bpf/bpftool/xlated_dumper.c
>> +++ b/tools/bpf/bpftool/xlated_dumper.c
>> @@ -178,8 +178,12 @@ static const char *print_call_pcrel(struct dump_data *dd,
>>  		snprintf(dd->scratch_buff, sizeof(dd->scratch_buff),
>>  			 "%+d#%s", insn->off, sym->name);
>>  	else
> 
> else if (address)
> 
> saves us the indentation.
> 
>> -		snprintf(dd->scratch_buff, sizeof(dd->scratch_buff),
>> -			 "%+d#0x%lx", insn->off, address);
>> +		if (address)
>> +			snprintf(dd->scratch_buff, sizeof(dd->scratch_buff),
>> +				 "%+d#0x%lx", insn->off, address);
>> +		else
>> +			snprintf(dd->scratch_buff, sizeof(dd->scratch_buff),
>> +				 "%+d", insn->off);
>>  	return dd->scratch_buff;
>>  }
>>  
>> @@ -200,14 +204,20 @@ static const char *print_call(void *private_data,
>>  			      const struct bpf_insn *insn)
>>  {
>>  	struct dump_data *dd = private_data;
>> -	unsigned long address = dd->address_call_base + insn->imm;
>> -	struct kernel_sym *sym;
>> +	unsigned long address = 0;
>> +	struct kernel_sym *sym = NULL;
>>  
> 
> Hm.  Quite a bit of churn.  Why not just add these three lines here:
> 
> if (insn->src_reg == BPF_PSEUDO_CALL && 
>     insn->imm < dd->nr_jited_ksyms)
> 	address = dd->jited_ksyms[insn->imm];
> 
>> -	sym = kernel_syms_search(dd, address);
>> -	if (insn->src_reg == BPF_PSEUDO_CALL)
>> +	if (insn->src_reg == BPF_PSEUDO_CALL) {
>> +		if (dd->nr_jited_ksyms) {
>> +			address = dd->jited_ksyms[insn->imm];
> 
> Perhaps it's paranoid, but please do bounds-check insn->imm
> against dd->nr_jited_ksyms.
> 
>> +			sym = kernel_syms_search(dd, address);
>> +		}
>>  		return print_call_pcrel(dd, sym, address, insn);
>> -	else
>> +	} else {
>> +		address = dd->address_call_base + insn->imm;
>> +		sym = kernel_syms_search(dd, address);
>>  		return print_call_helper(dd, sym, address);
>> +	}
>>  }
>>  
>>  static const char *print_imm(void *private_data,
> 
> 

Thank you for the suggestions. Will post out v2 with these changes.

- Sandipan



Thread overview: 9+ messages
2018-05-17  6:35 [PATCH bpf 0/6] bpf: enhancements for multi-function programs Sandipan Das
2018-05-17  6:35 ` [PATCH bpf 1/6] bpf: support 64-bit offsets for bpf function calls Sandipan Das
2018-05-17  6:35 ` [PATCH bpf 2/6] bpf: powerpc64: add JIT support for multi-function programs Sandipan Das
2018-05-17  6:35 ` [PATCH bpf 3/6] bpf: get kernel symbol addresses via syscall Sandipan Das
2018-05-17  6:35 ` [PATCH bpf 4/6] tools: bpf: sync bpf uapi header Sandipan Das
2018-05-17  6:35 ` [PATCH bpf 5/6] tools: bpftool: resolve calls without using imm field Sandipan Das
2018-05-17 18:51   ` Jakub Kicinski
2018-05-18  4:28     ` Sandipan Das
2018-05-17  6:35 ` [PATCH bpf 6/6] bpf: fix JITed dump for multi-function programs via syscall Sandipan Das
