* pull-request: bpf-next 2017-12-18
@ 2017-12-18 0:33 Daniel Borkmann
2017-12-18 15:51 ` David Miller
0 siblings, 1 reply; 8+ messages in thread
From: Daniel Borkmann @ 2017-12-18 0:33 UTC (permalink / raw)
To: davem; +Cc: daniel, ast, netdev
Hi David,
The following pull-request contains BPF updates for your *net-next* tree.
The main changes are:
1) Allow arbitrary function calls from one BPF function to another BPF function.
Until now, BPF C programs had to mark all functions as __always_inline,
unnecessarily causing LLVM to inflate the code size. Handle this more
naturally with support for BPF to BPF calls so that the __always_inline
restriction can be lifted. As a result, it allows for better optimized
code and finally makes it possible to introduce core BPF libraries in
the future that can be reused across different projects. x86 and arm64
JIT support was added as well, from Alexei.
2) Add infrastructure for tagging functions as error injectable and allow
BPF to return arbitrary error values when attached via kprobes to such
functions. Injecting errors generically this way eases testing and
debugging without having to recompile or restart the kernel. Functions
opt in to this facility via the BPF_ALLOW_ERROR_INJECTION() tag, from Josef.
3) For BPF offload via the nfp JIT, add support for the bpf_xdp_adjust_head()
helper call for XDP programs. The first part of this work adds handling of
BPF capabilities included in the firmware, and the later patches add support
to the nfp verifier part and JIT as well as some small optimizations,
from Jakub.
4) bpftool now also gets support for basic cgroup BPF operations such as
attaching, detaching and listing current BPF programs. As a requirement
for the attach part, bpftool can now also load object files through
'bpftool prog load'. This reuses libbpf, which we have in the kernel tree
as well. A bpftool-cgroup man page is added along with it, from Roman.
5) Earlier, commit e87c6bc3852b ("bpf: permit multiple bpf attachments for
a single perf event") added support for attaching multiple BPF programs
to a single perf event. Since these are configured through perf's ioctl()
interface, this work extends the interface with a PERF_EVENT_IOC_QUERY_BPF
command that returns an array of one or more BPF prog ids currently
attached, from Yonghong.
6) Various minor fixes and cleanups to bpftool's Makefile as well as new
'uninstall' and 'doc-uninstall' targets for removing the bpftool binary
or previously installed documentation, from Quentin.
7) Add CONFIG_CGROUP_BPF=y to the BPF kernel selftest config file which is
required for the test_dev_cgroup test case to run, from Naresh.
8) Fix reporting of XDP prog_flags for nfp driver, from Jakub.
9) Fix libbpf's Makefile exit code when libelf is not found on the
system, also from Jakub.
Please consider pulling these changes from:
git://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-next.git
Thanks a lot!
----------------------------------------------------------------
The following changes since commit 62cd277039a3413604f486f0ca87faec810d7bb7:
Merge git://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-next (2017-12-08 10:48:25 -0500)
are available in the git repository at:
git://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-next.git
for you to fetch changes up to 46df3d209db080395a98fc0875bd05e45e8f44e0:
trace: reenable preemption if we modify the ip (2017-12-17 20:47:32 +0100)
----------------------------------------------------------------
Alexei Starovoitov (14):
Merge branch 'bpf-tracing-multiprog-tp-query'
Merge branch 'bpf-override-return'
bpf: introduce function calls (function boundaries)
bpf: introduce function calls (verification)
selftests/bpf: add verifier tests for bpf_call
bpf: teach verifier to recognize zero initialized stack
selftests/bpf: add tests for stack_zero tracking
libbpf: add support for bpf_call
selftests/bpf: add bpf_call test
selftests/bpf: add xdp noinline test
bpf: add support for bpf_call to interpreter
bpf: fix net.core.bpf_jit_enable race
bpf: x64: add JIT support for multi-function programs
bpf: arm64: add JIT support for multi-function programs
Daniel Borkmann (5):
Merge branch 'bpf-bpftool-makefile-cleanups'
Merge branch 'bpf-bpftool-cgroup-ops'
Merge branch 'bpf-nfp-jit-adjust-head-support'
selftests/bpf: additional bpf_call tests
Merge branch 'bpf-to-bpf-function-calls'
Jakub Kicinski (8):
nfp: add nfp_cpp_area_size() accessor
nfp: bpf: prepare for parsing BPF FW capabilities
nfp: bpf: prepare for call support
nfp: bpf: add basic support for adjust head call
nfp: bpf: optimize the adjust_head calls in trivial cases
nfp: bpf: correct printk formats for size_t
libbpf: fix Makefile exit code if libelf not found
nfp: set flags in the correct member of netdev_bpf
Josef Bacik (6):
add infrastructure for tagging functions as error injectable
btrfs: make open_ctree error injectable
bpf: add a bpf_override_function helper
samples/bpf: add a test for bpf_override_return
btrfs: allow us to inject errors at io_ctl_init
trace: reenable preemption if we modify the ip
Naresh Kamboju (1):
selftests: bpf: Adding config fragment CONFIG_CGROUP_BPF=y
Quentin Monnet (2):
tools: bpftool: harmonise Makefile and Documentation/Makefile
tools: bpftool: create "uninstall", "doc-uninstall" make targets
Roman Gushchin (4):
libbpf: add ability to guess program type based on section name
libbpf: prefer global symbols as bpf program name source
bpftool: implement prog load command
bpftool: implement cgroup bpf operations
Yonghong Song (3):
bpf/tracing: allow user space to query prog array on the same tp
bpf/tracing: add a bpf test for new ioctl query interface
bpf/tracing: fix kernel/events/core.c compilation error
arch/Kconfig | 3 +
arch/arm/net/bpf_jit_32.c | 2 +-
arch/arm64/net/bpf_jit_comp.c | 70 +-
arch/mips/net/ebpf_jit.c | 2 +-
arch/powerpc/net/bpf_jit_comp64.c | 2 +-
arch/s390/net/bpf_jit_comp.c | 2 +-
arch/sparc/net/bpf_jit_comp_64.c | 2 +-
arch/x86/Kconfig | 1 +
arch/x86/include/asm/kprobes.h | 4 +
arch/x86/include/asm/ptrace.h | 5 +
arch/x86/kernel/kprobes/ftrace.c | 14 +
arch/x86/net/bpf_jit_comp.c | 49 +-
drivers/net/ethernet/netronome/nfp/bpf/fw.h | 54 +
drivers/net/ethernet/netronome/nfp/bpf/jit.c | 107 ++
drivers/net/ethernet/netronome/nfp/bpf/main.c | 115 ++
drivers/net/ethernet/netronome/nfp/bpf/main.h | 30 +
drivers/net/ethernet/netronome/nfp/bpf/offload.c | 2 +
drivers/net/ethernet/netronome/nfp/bpf/verifier.c | 70 +
drivers/net/ethernet/netronome/nfp/nfp_asm.h | 2 +
.../net/ethernet/netronome/nfp/nfp_net_common.c | 2 +-
.../net/ethernet/netronome/nfp/nfpcore/nfp_cpp.h | 1 +
.../ethernet/netronome/nfp/nfpcore/nfp_cppcore.c | 11 +
fs/btrfs/disk-io.c | 2 +
fs/btrfs/free-space-cache.c | 2 +
include/asm-generic/vmlinux.lds.h | 10 +
include/linux/bpf.h | 18 +
include/linux/bpf_verifier.h | 45 +-
include/linux/filter.h | 16 +-
include/linux/kprobes.h | 1 +
include/linux/module.h | 5 +
include/linux/trace_events.h | 7 +
include/uapi/linux/bpf.h | 13 +-
include/uapi/linux/perf_event.h | 22 +
kernel/bpf/core.c | 128 +-
kernel/bpf/disasm.c | 8 +-
kernel/bpf/syscall.c | 3 +-
kernel/bpf/verifier.c | 1122 +++++++++++---
kernel/events/core.c | 10 +
kernel/kprobes.c | 163 ++
kernel/module.c | 6 +-
kernel/trace/Kconfig | 11 +
kernel/trace/bpf_trace.c | 58 +
kernel/trace/trace_kprobe.c | 64 +-
kernel/trace/trace_probe.h | 12 +
samples/bpf/Makefile | 4 +
samples/bpf/test_override_return.sh | 15 +
samples/bpf/tracex7_kern.c | 16 +
samples/bpf/tracex7_user.c | 28 +
tools/bpf/bpftool/Documentation/Makefile | 30 +-
tools/bpf/bpftool/Documentation/bpftool-cgroup.rst | 118 ++
tools/bpf/bpftool/Documentation/bpftool-map.rst | 2 +-
tools/bpf/bpftool/Documentation/bpftool-prog.rst | 12 +-
tools/bpf/bpftool/Documentation/bpftool.rst | 8 +-
tools/bpf/bpftool/Makefile | 61 +-
tools/bpf/bpftool/cgroup.c | 307 ++++
tools/bpf/bpftool/common.c | 71 +-
tools/bpf/bpftool/main.c | 3 +-
tools/bpf/bpftool/main.h | 2 +
tools/bpf/bpftool/prog.c | 29 +-
tools/include/uapi/linux/bpf.h | 13 +-
tools/include/uapi/linux/perf_event.h | 22 +
tools/lib/bpf/Makefile | 4 +-
tools/lib/bpf/bpf.h | 2 +-
tools/lib/bpf/libbpf.c | 199 ++-
tools/scripts/Makefile.include | 1 +
tools/testing/selftests/bpf/Makefile | 12 +-
tools/testing/selftests/bpf/bpf_helpers.h | 3 +-
tools/testing/selftests/bpf/config | 1 +
tools/testing/selftests/bpf/test_l4lb_noinline.c | 473 ++++++
tools/testing/selftests/bpf/test_progs.c | 228 ++-
tools/testing/selftests/bpf/test_tracepoint.c | 26 +
tools/testing/selftests/bpf/test_verifier.c | 1624 +++++++++++++++++++-
tools/testing/selftests/bpf/test_xdp_noinline.c | 833 ++++++++++
73 files changed, 6071 insertions(+), 352 deletions(-)
create mode 100644 drivers/net/ethernet/netronome/nfp/bpf/fw.h
create mode 100755 samples/bpf/test_override_return.sh
create mode 100644 samples/bpf/tracex7_kern.c
create mode 100644 samples/bpf/tracex7_user.c
create mode 100644 tools/bpf/bpftool/Documentation/bpftool-cgroup.rst
create mode 100644 tools/bpf/bpftool/cgroup.c
create mode 100644 tools/testing/selftests/bpf/test_l4lb_noinline.c
create mode 100644 tools/testing/selftests/bpf/test_tracepoint.c
create mode 100644 tools/testing/selftests/bpf/test_xdp_noinline.c
* Re: pull-request: bpf-next 2017-12-18
2017-12-18 0:33 pull-request: bpf-next 2017-12-18 Daniel Borkmann
@ 2017-12-18 15:51 ` David Miller
2017-12-19 6:28 ` Alexei Starovoitov
0 siblings, 1 reply; 8+ messages in thread
From: David Miller @ 2017-12-18 15:51 UTC (permalink / raw)
To: daniel; +Cc: ast, netdev
From: Daniel Borkmann <daniel@iogearbox.net>
Date: Mon, 18 Dec 2017 01:33:07 +0100
> The following pull-request contains BPF updates for your *net-next* tree.
>
> The main changes are:
>
> 1) Allow arbitrary function calls from one BPF function to another BPF function.
> Until now, BPF C programs had to mark all functions as __always_inline,
> unnecessarily causing LLVM to inflate the code size. Handle this more
> naturally with support for BPF to BPF calls so that the __always_inline
> restriction can be lifted. As a result, it allows for better optimized
> code and finally makes it possible to introduce core BPF libraries in
> the future that can be reused across different projects. x86 and arm64
> JIT support was added as well, from Alexei.
Exciting... but now there's a lot of JIT work to do.
...
> Please consider pulling these changes from:
>
> git://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-next.git
Pulled, thanks!
* Re: pull-request: bpf-next 2017-12-18
2017-12-18 15:51 ` David Miller
@ 2017-12-19 6:28 ` Alexei Starovoitov
2017-12-20 21:16 ` David Miller
0 siblings, 1 reply; 8+ messages in thread
From: Alexei Starovoitov @ 2017-12-19 6:28 UTC (permalink / raw)
To: David Miller; +Cc: daniel, ast, netdev
On Mon, Dec 18, 2017 at 10:51:53AM -0500, David Miller wrote:
> From: Daniel Borkmann <daniel@iogearbox.net>
> Date: Mon, 18 Dec 2017 01:33:07 +0100
>
> > The following pull-request contains BPF updates for your *net-next* tree.
> >
> > The main changes are:
> >
> > 1) Allow arbitrary function calls from one BPF function to another BPF function.
> > Until now, BPF C programs had to mark all functions as __always_inline,
> > unnecessarily causing LLVM to inflate the code size. Handle this more
> > naturally with support for BPF to BPF calls so that the __always_inline
> > restriction can be lifted. As a result, it allows for better optimized
> > code and finally makes it possible to introduce core BPF libraries in
> > the future that can be reused across different projects. x86 and arm64
> > JIT support was added as well, from Alexei.
>
> Exciting... but now there's a lot of JIT work to do.
I've looked at sparc64. It should be simpler than arm64.
My first reaction was that it would need a dumb version of
emit_loadimm64() (similar to arm64's emit_addr_mov_i64), but it
doesn't, since that helper is not used in emit_call.
I can take a stab at it, but cannot test it. The most time-consuming
part is setting up the latest llvm on the system to compile the
*_noinline.c tests.
Note to self: I really need to make test_verifier run these tests.
* Re: pull-request: bpf-next 2017-12-18
2017-12-19 6:28 ` Alexei Starovoitov
@ 2017-12-20 21:16 ` David Miller
2017-12-21 16:28 ` David Miller
0 siblings, 1 reply; 8+ messages in thread
From: David Miller @ 2017-12-20 21:16 UTC (permalink / raw)
To: alexei.starovoitov; +Cc: daniel, ast, netdev
From: Alexei Starovoitov <alexei.starovoitov@gmail.com>
Date: Mon, 18 Dec 2017 22:28:30 -0800
> On Mon, Dec 18, 2017 at 10:51:53AM -0500, David Miller wrote:
>> From: Daniel Borkmann <daniel@iogearbox.net>
>> Date: Mon, 18 Dec 2017 01:33:07 +0100
>>
>> > The following pull-request contains BPF updates for your *net-next* tree.
>> >
>> > The main changes are:
>> >
>> > 1) Allow arbitrary function calls from one BPF function to another BPF function.
>> > Until now, BPF C programs had to mark all functions as __always_inline,
>> > unnecessarily causing LLVM to inflate the code size. Handle this more
>> > naturally with support for BPF to BPF calls so that the __always_inline
>> > restriction can be lifted. As a result, it allows for better optimized
>> > code and finally makes it possible to introduce core BPF libraries in
>> > the future that can be reused across different projects. x86 and arm64
>> > JIT support was added as well, from Alexei.
>>
>> Exciting... but now there's a lot of JIT work to do.
>
> I've looked at sparc64. It should be simpler than arm64.
> My first reaction was that it would need a dumb version of
> emit_loadimm64() (similar to arm64's emit_addr_mov_i64), but it
> doesn't, since that helper is not used in emit_call.
> I can take a stab at it, but cannot test it. The most time-consuming
> part is setting up the latest llvm on the system to compile the
> *_noinline.c tests.
> Note to self: I really need to make test_verifier run these tests.
I think I understand how this new stuff works, I'll take a stab at
doing the sparc64 JIT bits.
* Re: pull-request: bpf-next 2017-12-18
2017-12-20 21:16 ` David Miller
@ 2017-12-21 16:28 ` David Miller
2017-12-21 23:48 ` Daniel Borkmann
0 siblings, 1 reply; 8+ messages in thread
From: David Miller @ 2017-12-21 16:28 UTC (permalink / raw)
To: alexei.starovoitov; +Cc: daniel, ast, netdev
From: David Miller <davem@davemloft.net>
Date: Wed, 20 Dec 2017 16:16:44 -0500 (EST)
> I think I understand how this new stuff works, I'll take a stab at
> doing the sparc64 JIT bits.
This patch should do it, please queue up for bpf-next.
But this is really overkill on sparc64.
No matter where you relocate the call destination to, the size of the
program and the code output will be identical except for the call
instruction PC relative offset field.
So at some point as a follow-up I should change this code to simply
scan the insns for the function calls and fixup the offsets, rather
than do a full set of code generation passes.
Thanks.
====================
bpf: sparc64: Add JIT support for multi-function programs.
Modelled strongly upon the arm64 implementation.
Signed-off-by: David S. Miller <davem@davemloft.net>
diff --git a/arch/sparc/net/bpf_jit_comp_64.c b/arch/sparc/net/bpf_jit_comp_64.c
index a2f1b5e..4ee417f 100644
--- a/arch/sparc/net/bpf_jit_comp_64.c
+++ b/arch/sparc/net/bpf_jit_comp_64.c
@@ -1507,11 +1507,19 @@ static void jit_fill_hole(void *area, unsigned int size)
*ptr++ = 0x91d02005; /* ta 5 */
}
+struct sparc64_jit_data {
+ struct bpf_binary_header *header;
+ u8 *image;
+ struct jit_ctx ctx;
+};
+
struct bpf_prog *bpf_int_jit_compile(struct bpf_prog *prog)
{
struct bpf_prog *tmp, *orig_prog = prog;
+ struct sparc64_jit_data *jit_data;
struct bpf_binary_header *header;
bool tmp_blinded = false;
+ bool extra_pass = false;
struct jit_ctx ctx;
u32 image_size;
u8 *image_ptr;
@@ -1531,13 +1539,30 @@ struct bpf_prog *bpf_int_jit_compile(struct bpf_prog *prog)
prog = tmp;
}
+ jit_data = prog->aux->jit_data;
+ if (!jit_data) {
+ jit_data = kzalloc(sizeof(*jit_data), GFP_KERNEL);
+ if (!jit_data) {
+ prog = orig_prog;
+ goto out;
+ }
+ }
+ if (jit_data->ctx.offset) {
+ ctx = jit_data->ctx;
+ image_ptr = jit_data->image;
+ header = jit_data->header;
+ extra_pass = true;
+ image_size = sizeof(u32) * ctx.idx;
+ goto skip_init_ctx;
+ }
+
memset(&ctx, 0, sizeof(ctx));
ctx.prog = prog;
ctx.offset = kcalloc(prog->len, sizeof(unsigned int), GFP_KERNEL);
if (ctx.offset == NULL) {
prog = orig_prog;
- goto out;
+ goto out_off;
}
/* Fake pass to detect features used, and get an accurate assessment
@@ -1560,7 +1585,7 @@ struct bpf_prog *bpf_int_jit_compile(struct bpf_prog *prog)
}
ctx.image = (u32 *)image_ptr;
-
+skip_init_ctx:
for (pass = 1; pass < 3; pass++) {
ctx.idx = 0;
@@ -1591,14 +1616,24 @@ struct bpf_prog *bpf_int_jit_compile(struct bpf_prog *prog)
bpf_flush_icache(header, (u8 *)header + (header->pages * PAGE_SIZE));
- bpf_jit_binary_lock_ro(header);
+ if (!prog->is_func || extra_pass) {
+ bpf_jit_binary_lock_ro(header);
+ } else {
+ jit_data->ctx = ctx;
+ jit_data->image = image_ptr;
+ jit_data->header = header;
+ }
prog->bpf_func = (void *)ctx.image;
prog->jited = 1;
prog->jited_len = image_size;
+ if (!prog->is_func || extra_pass) {
out_off:
- kfree(ctx.offset);
+ kfree(ctx.offset);
+ kfree(jit_data);
+ prog->aux->jit_data = NULL;
+ }
out:
if (tmp_blinded)
bpf_jit_prog_release_other(prog, prog == orig_prog ?
* Re: pull-request: bpf-next 2017-12-18
2017-12-21 16:28 ` David Miller
@ 2017-12-21 23:48 ` Daniel Borkmann
2017-12-22 14:42 ` David Miller
0 siblings, 1 reply; 8+ messages in thread
From: Daniel Borkmann @ 2017-12-21 23:48 UTC (permalink / raw)
To: David Miller, alexei.starovoitov; +Cc: ast, netdev
On 12/21/2017 05:28 PM, David Miller wrote:
> From: David Miller <davem@davemloft.net>
> Date: Wed, 20 Dec 2017 16:16:44 -0500 (EST)
>
>> I think I understand how this new stuff works, I'll take a stab at
>> doing the sparc64 JIT bits.
>
> This patch should do it, please queue up for bpf-next.
>
> But this is really overkill on sparc64.
>
> No matter where you relocate the call destination to, the size of the
> program and the code output will be identical except for the call
> instruction PC relative offset field.
>
> So at some point as a follow-up I should change this code to simply
> scan the insns for the function calls and fixup the offsets, rather
> than do a full set of code generation passes.
>
> Thanks.
>
> ====================
> bpf: sparc64: Add JIT support for multi-function programs.
>
> Modelled strongly upon the arm64 implementation.
>
> Signed-off-by: David S. Miller <davem@davemloft.net>
>
> diff --git a/arch/sparc/net/bpf_jit_comp_64.c b/arch/sparc/net/bpf_jit_comp_64.c
> index a2f1b5e..4ee417f 100644
> --- a/arch/sparc/net/bpf_jit_comp_64.c
> +++ b/arch/sparc/net/bpf_jit_comp_64.c
> @@ -1507,11 +1507,19 @@ static void jit_fill_hole(void *area, unsigned int size)
> *ptr++ = 0x91d02005; /* ta 5 */
> }
>
> +struct sparc64_jit_data {
> + struct bpf_binary_header *header;
> + u8 *image;
> + struct jit_ctx ctx;
> +};
> +
> struct bpf_prog *bpf_int_jit_compile(struct bpf_prog *prog)
> {
> struct bpf_prog *tmp, *orig_prog = prog;
> + struct sparc64_jit_data *jit_data;
> struct bpf_binary_header *header;
> bool tmp_blinded = false;
> + bool extra_pass = false;
> struct jit_ctx ctx;
> u32 image_size;
> u8 *image_ptr;
> @@ -1531,13 +1539,30 @@ struct bpf_prog *bpf_int_jit_compile(struct bpf_prog *prog)
> prog = tmp;
> }
>
> + jit_data = prog->aux->jit_data;
> + if (!jit_data) {
> + jit_data = kzalloc(sizeof(*jit_data), GFP_KERNEL);
> + if (!jit_data) {
> + prog = orig_prog;
> + goto out;
> + }
Looks good, one thing: if I read this correctly, isn't a ...
prog->aux->jit_data = jit_data;
... missing here? Otherwise the context from the initial pass is neither
saved for the extra pass nor freed.
> + }
> + if (jit_data->ctx.offset) {
> + ctx = jit_data->ctx;
> + image_ptr = jit_data->image;
> + header = jit_data->header;
> + extra_pass = true;
> + image_size = sizeof(u32) * ctx.idx;
> + goto skip_init_ctx;
> + }
> +
> memset(&ctx, 0, sizeof(ctx));
> ctx.prog = prog;
>
> ctx.offset = kcalloc(prog->len, sizeof(unsigned int), GFP_KERNEL);
> if (ctx.offset == NULL) {
> prog = orig_prog;
> - goto out;
> + goto out_off;
> }
>
> /* Fake pass to detect features used, and get an accurate assessment
> @@ -1560,7 +1585,7 @@ struct bpf_prog *bpf_int_jit_compile(struct bpf_prog *prog)
> }
>
> ctx.image = (u32 *)image_ptr;
> -
> +skip_init_ctx:
> for (pass = 1; pass < 3; pass++) {
> ctx.idx = 0;
>
> @@ -1591,14 +1616,24 @@ struct bpf_prog *bpf_int_jit_compile(struct bpf_prog *prog)
>
> bpf_flush_icache(header, (u8 *)header + (header->pages * PAGE_SIZE));
>
> - bpf_jit_binary_lock_ro(header);
> + if (!prog->is_func || extra_pass) {
> + bpf_jit_binary_lock_ro(header);
> + } else {
> + jit_data->ctx = ctx;
> + jit_data->image = image_ptr;
> + jit_data->header = header;
> + }
>
> prog->bpf_func = (void *)ctx.image;
> prog->jited = 1;
> prog->jited_len = image_size;
>
> + if (!prog->is_func || extra_pass) {
> out_off:
> - kfree(ctx.offset);
> + kfree(ctx.offset);
> + kfree(jit_data);
> + prog->aux->jit_data = NULL;
> + }
> out:
> if (tmp_blinded)
> bpf_jit_prog_release_other(prog, prog == orig_prog ?
>
* Re: pull-request: bpf-next 2017-12-18
2017-12-21 23:48 ` Daniel Borkmann
@ 2017-12-22 14:42 ` David Miller
2017-12-23 0:12 ` Daniel Borkmann
0 siblings, 1 reply; 8+ messages in thread
From: David Miller @ 2017-12-22 14:42 UTC (permalink / raw)
To: daniel; +Cc: alexei.starovoitov, ast, netdev
From: Daniel Borkmann <daniel@iogearbox.net>
Date: Fri, 22 Dec 2017 00:48:22 +0100
> Looks good, one thing: if I read this correctly, isn't a ...
>
> prog->aux->jit_data = jit_data;
>
> ... missing here? Otherwise the context from the initial pass is neither
> saved for the extra pass nor freed.
Good catch, here is an updated patch:
====================
bpf: sparc64: Add JIT support for multi-function programs.
Modelled strongly upon the arm64 implementation.
Signed-off-by: David S. Miller <davem@davemloft.net>
diff --git a/arch/sparc/net/bpf_jit_comp_64.c b/arch/sparc/net/bpf_jit_comp_64.c
index a2f1b5e..991a3ab 100644
--- a/arch/sparc/net/bpf_jit_comp_64.c
+++ b/arch/sparc/net/bpf_jit_comp_64.c
@@ -1507,11 +1507,19 @@ static void jit_fill_hole(void *area, unsigned int size)
*ptr++ = 0x91d02005; /* ta 5 */
}
+struct sparc64_jit_data {
+ struct bpf_binary_header *header;
+ u8 *image;
+ struct jit_ctx ctx;
+};
+
struct bpf_prog *bpf_int_jit_compile(struct bpf_prog *prog)
{
struct bpf_prog *tmp, *orig_prog = prog;
+ struct sparc64_jit_data *jit_data;
struct bpf_binary_header *header;
bool tmp_blinded = false;
+ bool extra_pass = false;
struct jit_ctx ctx;
u32 image_size;
u8 *image_ptr;
@@ -1531,13 +1539,31 @@ struct bpf_prog *bpf_int_jit_compile(struct bpf_prog *prog)
prog = tmp;
}
+ jit_data = prog->aux->jit_data;
+ if (!jit_data) {
+ jit_data = kzalloc(sizeof(*jit_data), GFP_KERNEL);
+ if (!jit_data) {
+ prog = orig_prog;
+ goto out;
+ }
+ prog->aux->jit_data = jit_data;
+ }
+ if (jit_data->ctx.offset) {
+ ctx = jit_data->ctx;
+ image_ptr = jit_data->image;
+ header = jit_data->header;
+ extra_pass = true;
+ image_size = sizeof(u32) * ctx.idx;
+ goto skip_init_ctx;
+ }
+
memset(&ctx, 0, sizeof(ctx));
ctx.prog = prog;
ctx.offset = kcalloc(prog->len, sizeof(unsigned int), GFP_KERNEL);
if (ctx.offset == NULL) {
prog = orig_prog;
- goto out;
+ goto out_off;
}
/* Fake pass to detect features used, and get an accurate assessment
@@ -1560,7 +1586,7 @@ struct bpf_prog *bpf_int_jit_compile(struct bpf_prog *prog)
}
ctx.image = (u32 *)image_ptr;
-
+skip_init_ctx:
for (pass = 1; pass < 3; pass++) {
ctx.idx = 0;
@@ -1591,14 +1617,24 @@ struct bpf_prog *bpf_int_jit_compile(struct bpf_prog *prog)
bpf_flush_icache(header, (u8 *)header + (header->pages * PAGE_SIZE));
- bpf_jit_binary_lock_ro(header);
+ if (!prog->is_func || extra_pass) {
+ bpf_jit_binary_lock_ro(header);
+ } else {
+ jit_data->ctx = ctx;
+ jit_data->image = image_ptr;
+ jit_data->header = header;
+ }
prog->bpf_func = (void *)ctx.image;
prog->jited = 1;
prog->jited_len = image_size;
+ if (!prog->is_func || extra_pass) {
out_off:
- kfree(ctx.offset);
+ kfree(ctx.offset);
+ kfree(jit_data);
+ prog->aux->jit_data = NULL;
+ }
out:
if (tmp_blinded)
bpf_jit_prog_release_other(prog, prog == orig_prog ?
* Re: pull-request: bpf-next 2017-12-18
2017-12-22 14:42 ` David Miller
@ 2017-12-23 0:12 ` Daniel Borkmann
0 siblings, 0 replies; 8+ messages in thread
From: Daniel Borkmann @ 2017-12-23 0:12 UTC (permalink / raw)
To: David Miller; +Cc: alexei.starovoitov, ast, netdev
On 12/22/2017 03:42 PM, David Miller wrote:
> From: Daniel Borkmann <daniel@iogearbox.net>
> Date: Fri, 22 Dec 2017 00:48:22 +0100
>
>> Looks good, one thing: if I read this correctly, isn't a ...
>>
>> prog->aux->jit_data = jit_data;
>>
>> ... missing here? Otherwise the context from the initial pass is neither
>> saved for the extra pass nor freed.
>
> Good catch, here is an updated patch:
>
> ====================
> bpf: sparc64: Add JIT support for multi-function programs.
>
> Modelled strongly upon the arm64 implementation.
>
> Signed-off-by: David S. Miller <davem@davemloft.net>
Applied to bpf-next, thanks David!