From: kernel test robot <lkp@intel.com>
To: kbuild@lists.01.org
Subject: Re: [PATCH bpf-next] bpf: Rename fixup_bpf_calls and add some comments
Date: Wed, 17 Feb 2021 19:28:53 +0800 [thread overview]
Message-ID: <202102171945.ZeL2oe26-lkp@intel.com> (raw)
CC: kbuild-all@lists.01.org
In-Reply-To: <20210217104509.2423183-1-jackmanb@google.com>
References: <20210217104509.2423183-1-jackmanb@google.com>
TO: Brendan Jackman <jackmanb@google.com>
TO: bpf@vger.kernel.org
CC: Alexei Starovoitov <ast@kernel.org>
CC: Daniel Borkmann <daniel@iogearbox.net>
CC: Andrii Nakryiko <andrii.nakryiko@gmail.com>
CC: KP Singh <kpsingh@chromium.org>
CC: Florent Revest <revest@chromium.org>
CC: Brendan Jackman <jackmanb@google.com>
Hi Brendan,
I love your patch! Perhaps something to improve:
[auto build test WARNING on 45159b27637b0fef6d5ddb86fc7c46b13c77960f]
url: https://github.com/0day-ci/linux/commits/Brendan-Jackman/bpf-Rename-fixup_bpf_calls-and-add-some-comments/20210217-185208
base: 45159b27637b0fef6d5ddb86fc7c46b13c77960f
:::::: branch date: 37 minutes ago
:::::: commit date: 37 minutes ago
config: x86_64-randconfig-m001-20210215 (attached as .config)
compiler: gcc-9 (Debian 9.3.0-15) 9.3.0
If you fix the issue, kindly add the following tag as appropriate:
Reported-by: kernel test robot <lkp@intel.com>
Reported-by: Dan Carpenter <dan.carpenter@oracle.com>
smatch warnings:
kernel/bpf/verifier.c:11832 do_misc_fixups() warn: ignoring unreachable code.
vim +11832 kernel/bpf/verifier.c
1ea47e01ad6ea0 Alexei Starovoitov 2017-12-14 11533
f64cacf58f39b9 Brendan Jackman 2021-02-17 11534 /* Do various post-verification rewrites in a single program pass.
f64cacf58f39b9 Brendan Jackman 2021-02-17 11535 * These rewrites simplify JIT and interpreter implementations.
e245c5c6a5656e Alexei Starovoitov 2017-03-15 11536 */
f64cacf58f39b9 Brendan Jackman 2021-02-17 11537 static int do_misc_fixups(struct bpf_verifier_env *env)
e245c5c6a5656e Alexei Starovoitov 2017-03-15 11538 {
79741b3bdec01a Alexei Starovoitov 2017-03-15 11539 struct bpf_prog *prog = env->prog;
d2e4c1e6c29472 Daniel Borkmann 2019-11-22 11540 bool expect_blinding = bpf_jit_blinding_enabled(prog);
79741b3bdec01a Alexei Starovoitov 2017-03-15 11541 struct bpf_insn *insn = prog->insnsi;
e245c5c6a5656e Alexei Starovoitov 2017-03-15 11542 const struct bpf_func_proto *fn;
79741b3bdec01a Alexei Starovoitov 2017-03-15 11543 const int insn_cnt = prog->len;
09772d92cd5ad9 Daniel Borkmann 2018-06-02 11544 const struct bpf_map_ops *ops;
c93552c443ebc6 Daniel Borkmann 2018-05-24 11545 struct bpf_insn_aux_data *aux;
81ed18ab3098b6 Alexei Starovoitov 2017-03-15 11546 struct bpf_insn insn_buf[16];
81ed18ab3098b6 Alexei Starovoitov 2017-03-15 11547 struct bpf_prog *new_prog;
81ed18ab3098b6 Alexei Starovoitov 2017-03-15 11548 struct bpf_map *map_ptr;
d2e4c1e6c29472 Daniel Borkmann 2019-11-22 11549 int i, ret, cnt, delta = 0;
e245c5c6a5656e Alexei Starovoitov 2017-03-15 11550
79741b3bdec01a Alexei Starovoitov 2017-03-15 11551 for (i = 0; i < insn_cnt; i++, insn++) {
f64cacf58f39b9 Brendan Jackman 2021-02-17 11552 /* Make divide-by-zero exceptions impossible. */
f6b1b3bf0d5f68 Daniel Borkmann 2018-01-26 11553 if (insn->code == (BPF_ALU64 | BPF_MOD | BPF_X) ||
f6b1b3bf0d5f68 Daniel Borkmann 2018-01-26 11554 insn->code == (BPF_ALU64 | BPF_DIV | BPF_X) ||
f6b1b3bf0d5f68 Daniel Borkmann 2018-01-26 11555 insn->code == (BPF_ALU | BPF_MOD | BPF_X) ||
68fda450a7df51 Alexei Starovoitov 2018-01-12 11556 insn->code == (BPF_ALU | BPF_DIV | BPF_X)) {
f6b1b3bf0d5f68 Daniel Borkmann 2018-01-26 11557 bool is64 = BPF_CLASS(insn->code) == BPF_ALU64;
f6b1b3bf0d5f68 Daniel Borkmann 2018-01-26 11558 struct bpf_insn mask_and_div[] = {
f6b1b3bf0d5f68 Daniel Borkmann 2018-01-26 11559 BPF_MOV32_REG(insn->src_reg, insn->src_reg),
f6b1b3bf0d5f68 Daniel Borkmann 2018-01-26 11560 /* Rx div 0 -> 0 */
f6b1b3bf0d5f68 Daniel Borkmann 2018-01-26 11561 BPF_JMP_IMM(BPF_JNE, insn->src_reg, 0, 2),
f6b1b3bf0d5f68 Daniel Borkmann 2018-01-26 11562 BPF_ALU32_REG(BPF_XOR, insn->dst_reg, insn->dst_reg),
f6b1b3bf0d5f68 Daniel Borkmann 2018-01-26 11563 BPF_JMP_IMM(BPF_JA, 0, 0, 1),
f6b1b3bf0d5f68 Daniel Borkmann 2018-01-26 11564 *insn,
f6b1b3bf0d5f68 Daniel Borkmann 2018-01-26 11565 };
f6b1b3bf0d5f68 Daniel Borkmann 2018-01-26 11566 struct bpf_insn mask_and_mod[] = {
f6b1b3bf0d5f68 Daniel Borkmann 2018-01-26 11567 BPF_MOV32_REG(insn->src_reg, insn->src_reg),
f6b1b3bf0d5f68 Daniel Borkmann 2018-01-26 11568 /* Rx mod 0 -> Rx */
f6b1b3bf0d5f68 Daniel Borkmann 2018-01-26 11569 BPF_JMP_IMM(BPF_JEQ, insn->src_reg, 0, 1),
f6b1b3bf0d5f68 Daniel Borkmann 2018-01-26 11570 *insn,
f6b1b3bf0d5f68 Daniel Borkmann 2018-01-26 11571 };
f6b1b3bf0d5f68 Daniel Borkmann 2018-01-26 11572 struct bpf_insn *patchlet;
f6b1b3bf0d5f68 Daniel Borkmann 2018-01-26 11573
f6b1b3bf0d5f68 Daniel Borkmann 2018-01-26 11574 if (insn->code == (BPF_ALU64 | BPF_DIV | BPF_X) ||
f6b1b3bf0d5f68 Daniel Borkmann 2018-01-26 11575 insn->code == (BPF_ALU | BPF_DIV | BPF_X)) {
f6b1b3bf0d5f68 Daniel Borkmann 2018-01-26 11576 patchlet = mask_and_div + (is64 ? 1 : 0);
f6b1b3bf0d5f68 Daniel Borkmann 2018-01-26 11577 cnt = ARRAY_SIZE(mask_and_div) - (is64 ? 1 : 0);
f6b1b3bf0d5f68 Daniel Borkmann 2018-01-26 11578 } else {
f6b1b3bf0d5f68 Daniel Borkmann 2018-01-26 11579 patchlet = mask_and_mod + (is64 ? 1 : 0);
f6b1b3bf0d5f68 Daniel Borkmann 2018-01-26 11580 cnt = ARRAY_SIZE(mask_and_mod) - (is64 ? 1 : 0);
f6b1b3bf0d5f68 Daniel Borkmann 2018-01-26 11581 }
f6b1b3bf0d5f68 Daniel Borkmann 2018-01-26 11582
f6b1b3bf0d5f68 Daniel Borkmann 2018-01-26 11583 new_prog = bpf_patch_insn_data(env, i + delta, patchlet, cnt);
68fda450a7df51 Alexei Starovoitov 2018-01-12 11584 if (!new_prog)
68fda450a7df51 Alexei Starovoitov 2018-01-12 11585 return -ENOMEM;
68fda450a7df51 Alexei Starovoitov 2018-01-12 11586
68fda450a7df51 Alexei Starovoitov 2018-01-12 11587 delta += cnt - 1;
68fda450a7df51 Alexei Starovoitov 2018-01-12 11588 env->prog = prog = new_prog;
68fda450a7df51 Alexei Starovoitov 2018-01-12 11589 insn = new_prog->insnsi + i + delta;
68fda450a7df51 Alexei Starovoitov 2018-01-12 11590 continue;
68fda450a7df51 Alexei Starovoitov 2018-01-12 11591 }
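The two patchlets above give eBPF its defined division semantics without ever raising an exception (the leading BPF_MOV32_REG zero-extends the divisor for the 32-bit variants). A minimal userspace model of the resulting behavior — the helper names here are illustrative, not kernel API:

```c
#include <assert.h>
#include <stdint.h>

/* Userspace model of the semantics the patchlets enforce:
 * division by zero yields 0, modulo by zero leaves the
 * dividend unchanged.  Names are ours, not kernel API. */
static uint64_t bpf_div_model(uint64_t rx, uint64_t ry)
{
	return ry ? rx / ry : 0;	/* Rx div 0 -> 0  */
}

static uint64_t bpf_mod_model(uint64_t rx, uint64_t ry)
{
	return ry ? rx % ry : rx;	/* Rx mod 0 -> Rx */
}
```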
68fda450a7df51 Alexei Starovoitov 2018-01-12 11592
f64cacf58f39b9 Brendan Jackman 2021-02-17 11593 /* Implement LD_ABS and LD_IND with a rewrite, if supported by the program type. */
e0cea7ce988cf4 Daniel Borkmann 2018-05-04 11594 if (BPF_CLASS(insn->code) == BPF_LD &&
e0cea7ce988cf4 Daniel Borkmann 2018-05-04 11595 (BPF_MODE(insn->code) == BPF_ABS ||
e0cea7ce988cf4 Daniel Borkmann 2018-05-04 11596 BPF_MODE(insn->code) == BPF_IND)) {
e0cea7ce988cf4 Daniel Borkmann 2018-05-04 11597 cnt = env->ops->gen_ld_abs(insn, insn_buf);
e0cea7ce988cf4 Daniel Borkmann 2018-05-04 11598 if (cnt == 0 || cnt >= ARRAY_SIZE(insn_buf)) {
e0cea7ce988cf4 Daniel Borkmann 2018-05-04 11599 verbose(env, "bpf verifier is misconfigured\n");
e0cea7ce988cf4 Daniel Borkmann 2018-05-04 11600 return -EINVAL;
e0cea7ce988cf4 Daniel Borkmann 2018-05-04 11601 }
e0cea7ce988cf4 Daniel Borkmann 2018-05-04 11602
e0cea7ce988cf4 Daniel Borkmann 2018-05-04 11603 new_prog = bpf_patch_insn_data(env, i + delta, insn_buf, cnt);
e0cea7ce988cf4 Daniel Borkmann 2018-05-04 11604 if (!new_prog)
e0cea7ce988cf4 Daniel Borkmann 2018-05-04 11605 return -ENOMEM;
e0cea7ce988cf4 Daniel Borkmann 2018-05-04 11606
e0cea7ce988cf4 Daniel Borkmann 2018-05-04 11607 delta += cnt - 1;
e0cea7ce988cf4 Daniel Borkmann 2018-05-04 11608 env->prog = prog = new_prog;
e0cea7ce988cf4 Daniel Borkmann 2018-05-04 11609 insn = new_prog->insnsi + i + delta;
e0cea7ce988cf4 Daniel Borkmann 2018-05-04 11610 continue;
e0cea7ce988cf4 Daniel Borkmann 2018-05-04 11611 }
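Every rewrite in this function follows the same bookkeeping pattern: one instruction at original index i is replaced by cnt instructions, so all later indices shift by cnt - 1, which the loop accumulates in delta before resuming at i + delta. A toy model of that splice over ints rather than struct bpf_insn, assuming the buffer has room:

```c
#include <assert.h>
#include <string.h>

/* Toy model of bpf_patch_insn_data()'s effect on indices: the
 * "insn" (one int here) at index 'at' is replaced by cnt insns,
 * shifting the tail by cnt - 1.  Assumes prog[] has capacity. */
#define MAXP 32

static int patch_one(int *prog, int len, int at,
		     const int *patch, int cnt)
{
	/* make room for the cnt - 1 extra slots */
	memmove(&prog[at + cnt], &prog[at + 1],
		(size_t)(len - at - 1) * sizeof(int));
	memcpy(&prog[at], patch, (size_t)cnt * sizeof(int));
	return len + cnt - 1;	/* new program length */
}
```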
e0cea7ce988cf4 Daniel Borkmann 2018-05-04 11612
f64cacf58f39b9 Brendan Jackman 2021-02-17 11613 /* Rewrite pointer arithmetic to mitigate speculation attacks. */
979d63d50c0c0f Daniel Borkmann 2019-01-03 11614 if (insn->code == (BPF_ALU64 | BPF_ADD | BPF_X) ||
979d63d50c0c0f Daniel Borkmann 2019-01-03 11615 insn->code == (BPF_ALU64 | BPF_SUB | BPF_X)) {
979d63d50c0c0f Daniel Borkmann 2019-01-03 11616 const u8 code_add = BPF_ALU64 | BPF_ADD | BPF_X;
979d63d50c0c0f Daniel Borkmann 2019-01-03 11617 const u8 code_sub = BPF_ALU64 | BPF_SUB | BPF_X;
979d63d50c0c0f Daniel Borkmann 2019-01-03 11618 struct bpf_insn insn_buf[16];
979d63d50c0c0f Daniel Borkmann 2019-01-03 11619 struct bpf_insn *patch = &insn_buf[0];
979d63d50c0c0f Daniel Borkmann 2019-01-03 11620 bool issrc, isneg;
979d63d50c0c0f Daniel Borkmann 2019-01-03 11621 u32 off_reg;
979d63d50c0c0f Daniel Borkmann 2019-01-03 11622
979d63d50c0c0f Daniel Borkmann 2019-01-03 11623 aux = &env->insn_aux_data[i + delta];
3612af783cf52c Daniel Borkmann 2019-03-01 11624 if (!aux->alu_state ||
3612af783cf52c Daniel Borkmann 2019-03-01 11625 aux->alu_state == BPF_ALU_NON_POINTER)
979d63d50c0c0f Daniel Borkmann 2019-01-03 11626 continue;
979d63d50c0c0f Daniel Borkmann 2019-01-03 11627
979d63d50c0c0f Daniel Borkmann 2019-01-03 11628 isneg = aux->alu_state & BPF_ALU_NEG_VALUE;
979d63d50c0c0f Daniel Borkmann 2019-01-03 11629 issrc = (aux->alu_state & BPF_ALU_SANITIZE) ==
979d63d50c0c0f Daniel Borkmann 2019-01-03 11630 BPF_ALU_SANITIZE_SRC;
979d63d50c0c0f Daniel Borkmann 2019-01-03 11631
979d63d50c0c0f Daniel Borkmann 2019-01-03 11632 off_reg = issrc ? insn->src_reg : insn->dst_reg;
979d63d50c0c0f Daniel Borkmann 2019-01-03 11633 if (isneg)
979d63d50c0c0f Daniel Borkmann 2019-01-03 11634 *patch++ = BPF_ALU64_IMM(BPF_MUL, off_reg, -1);
979d63d50c0c0f Daniel Borkmann 2019-01-03 11635 *patch++ = BPF_MOV32_IMM(BPF_REG_AX, aux->alu_limit - 1);
979d63d50c0c0f Daniel Borkmann 2019-01-03 11636 *patch++ = BPF_ALU64_REG(BPF_SUB, BPF_REG_AX, off_reg);
979d63d50c0c0f Daniel Borkmann 2019-01-03 11637 *patch++ = BPF_ALU64_REG(BPF_OR, BPF_REG_AX, off_reg);
979d63d50c0c0f Daniel Borkmann 2019-01-03 11638 *patch++ = BPF_ALU64_IMM(BPF_NEG, BPF_REG_AX, 0);
979d63d50c0c0f Daniel Borkmann 2019-01-03 11639 *patch++ = BPF_ALU64_IMM(BPF_ARSH, BPF_REG_AX, 63);
979d63d50c0c0f Daniel Borkmann 2019-01-03 11640 if (issrc) {
979d63d50c0c0f Daniel Borkmann 2019-01-03 11641 *patch++ = BPF_ALU64_REG(BPF_AND, BPF_REG_AX,
979d63d50c0c0f Daniel Borkmann 2019-01-03 11642 off_reg);
979d63d50c0c0f Daniel Borkmann 2019-01-03 11643 insn->src_reg = BPF_REG_AX;
979d63d50c0c0f Daniel Borkmann 2019-01-03 11644 } else {
979d63d50c0c0f Daniel Borkmann 2019-01-03 11645 *patch++ = BPF_ALU64_REG(BPF_AND, off_reg,
979d63d50c0c0f Daniel Borkmann 2019-01-03 11646 BPF_REG_AX);
979d63d50c0c0f Daniel Borkmann 2019-01-03 11647 }
979d63d50c0c0f Daniel Borkmann 2019-01-03 11648 if (isneg)
979d63d50c0c0f Daniel Borkmann 2019-01-03 11649 insn->code = insn->code == code_add ?
979d63d50c0c0f Daniel Borkmann 2019-01-03 11650 code_sub : code_add;
979d63d50c0c0f Daniel Borkmann 2019-01-03 11651 *patch++ = *insn;
979d63d50c0c0f Daniel Borkmann 2019-01-03 11652 if (issrc && isneg)
979d63d50c0c0f Daniel Borkmann 2019-01-03 11653 *patch++ = BPF_ALU64_IMM(BPF_MUL, off_reg, -1);
979d63d50c0c0f Daniel Borkmann 2019-01-03 11654 cnt = patch - insn_buf;
979d63d50c0c0f Daniel Borkmann 2019-01-03 11655
979d63d50c0c0f Daniel Borkmann 2019-01-03 11656 new_prog = bpf_patch_insn_data(env, i + delta, insn_buf, cnt);
979d63d50c0c0f Daniel Borkmann 2019-01-03 11657 if (!new_prog)
979d63d50c0c0f Daniel Borkmann 2019-01-03 11658 return -ENOMEM;
979d63d50c0c0f Daniel Borkmann 2019-01-03 11659
979d63d50c0c0f Daniel Borkmann 2019-01-03 11660 delta += cnt - 1;
979d63d50c0c0f Daniel Borkmann 2019-01-03 11661 env->prog = prog = new_prog;
979d63d50c0c0f Daniel Borkmann 2019-01-03 11662 insn = new_prog->insnsi + i + delta;
979d63d50c0c0f Daniel Borkmann 2019-01-03 11663 continue;
979d63d50c0c0f Daniel Borkmann 2019-01-03 11664 }
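The patch sequence above builds a branch-free mask: the offset register survives only when it lies in [0, alu_limit - 1] and is forced to 0 otherwise, so even a mispredicted speculative path cannot form an out-of-range pointer. A userspace model of the six ALU instructions, assuming arithmetic right shift on signed types (which the JITs guarantee for BPF_ARSH):

```c
#include <assert.h>
#include <stdint.h>

/* Userspace model of the sanitation sequence: returns off when
 * 0 <= off <= alu_limit - 1, else 0, with no branches.  Not
 * kernel code; the name is ours. */
static uint64_t sanitize_off(uint64_t off, uint32_t alu_limit)
{
	int64_t ax = (int64_t)(alu_limit - 1);	/* BPF_MOV32_IMM */

	ax -= (int64_t)off;	/* sign bit set if off > limit - 1   */
	ax |= (int64_t)off;	/* sign bit set if off is "negative" */
	ax = -ax;		/* flip: in-range is now negative    */
	ax >>= 63;		/* arith shift -> all-ones or zero   */
	return off & (uint64_t)ax;	/* off if in range, else 0   */
}
```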
979d63d50c0c0f Daniel Borkmann 2019-01-03 11665
79741b3bdec01a Alexei Starovoitov 2017-03-15 11666 if (insn->code != (BPF_JMP | BPF_CALL))
79741b3bdec01a Alexei Starovoitov 2017-03-15 11667 continue;
cc8b0b92a1699b Alexei Starovoitov 2017-12-14 11668 if (insn->src_reg == BPF_PSEUDO_CALL)
cc8b0b92a1699b Alexei Starovoitov 2017-12-14 11669 continue;
e245c5c6a5656e Alexei Starovoitov 2017-03-15 11670
e245c5c6a5656e Alexei Starovoitov 2017-03-15 11671 if (insn->imm == BPF_FUNC_get_route_realm)
e245c5c6a5656e Alexei Starovoitov 2017-03-15 11672 prog->dst_needed = 1;
e245c5c6a5656e Alexei Starovoitov 2017-03-15 11673 if (insn->imm == BPF_FUNC_get_prandom_u32)
e245c5c6a5656e Alexei Starovoitov 2017-03-15 11674 bpf_user_rnd_init_once();
9802d86585db91 Josef Bacik 2017-12-11 11675 if (insn->imm == BPF_FUNC_override_return)
9802d86585db91 Josef Bacik 2017-12-11 11676 prog->kprobe_override = 1;
e245c5c6a5656e Alexei Starovoitov 2017-03-15 11677 if (insn->imm == BPF_FUNC_tail_call) {
7b9f6da175f938 David S. Miller 2017-04-20 11678 /* If we tail call into other programs, we
7b9f6da175f938 David S. Miller 2017-04-20 11679 * cannot make any assumptions since they can
7b9f6da175f938 David S. Miller 2017-04-20 11680 * be replaced dynamically during runtime in
7b9f6da175f938 David S. Miller 2017-04-20 11681 * the program array.
7b9f6da175f938 David S. Miller 2017-04-20 11682 */
7b9f6da175f938 David S. Miller 2017-04-20 11683 prog->cb_access = 1;
e411901c0b775a Maciej Fijalkowski 2020-09-16 11684 if (!allow_tail_call_in_subprogs(env))
e411901c0b775a Maciej Fijalkowski 2020-09-16 11685 prog->aux->stack_depth = MAX_BPF_STACK;
e411901c0b775a Maciej Fijalkowski 2020-09-16 11686 prog->aux->max_pkt_offset = MAX_PACKET_OFF;
7b9f6da175f938 David S. Miller 2017-04-20 11687
79741b3bdec01a Alexei Starovoitov 2017-03-15 11688 /* mark bpf_tail_call as different opcode to avoid
79741b3bdec01a Alexei Starovoitov 2017-03-15 11689 * conditional branch in the interpreter for every normal
79741b3bdec01a Alexei Starovoitov 2017-03-15 11690 * call and to prevent accidental JITing by JIT compiler
79741b3bdec01a Alexei Starovoitov 2017-03-15 11691 * that doesn't support bpf_tail_call yet
e245c5c6a5656e Alexei Starovoitov 2017-03-15 11692 */
e245c5c6a5656e Alexei Starovoitov 2017-03-15 11693 insn->imm = 0;
71189fa9b092ef Alexei Starovoitov 2017-05-30 11694 insn->code = BPF_JMP | BPF_TAIL_CALL;
b2157399cc9898 Alexei Starovoitov 2018-01-07 11695
c93552c443ebc6 Daniel Borkmann 2018-05-24 11696 aux = &env->insn_aux_data[i + delta];
2c78ee898d8f10 Alexei Starovoitov 2020-05-13 11697 if (env->bpf_capable && !expect_blinding &&
cc52d9140aa920 Daniel Borkmann 2019-12-19 11698 prog->jit_requested &&
d2e4c1e6c29472 Daniel Borkmann 2019-11-22 11699 !bpf_map_key_poisoned(aux) &&
d2e4c1e6c29472 Daniel Borkmann 2019-11-22 11700 !bpf_map_ptr_poisoned(aux) &&
d2e4c1e6c29472 Daniel Borkmann 2019-11-22 11701 !bpf_map_ptr_unpriv(aux)) {
d2e4c1e6c29472 Daniel Borkmann 2019-11-22 11702 struct bpf_jit_poke_descriptor desc = {
d2e4c1e6c29472 Daniel Borkmann 2019-11-22 11703 .reason = BPF_POKE_REASON_TAIL_CALL,
d2e4c1e6c29472 Daniel Borkmann 2019-11-22 11704 .tail_call.map = BPF_MAP_PTR(aux->map_ptr_state),
d2e4c1e6c29472 Daniel Borkmann 2019-11-22 11705 .tail_call.key = bpf_map_key_immediate(aux),
a748c6975dea32 Maciej Fijalkowski 2020-09-16 11706 .insn_idx = i + delta,
d2e4c1e6c29472 Daniel Borkmann 2019-11-22 11707 };
d2e4c1e6c29472 Daniel Borkmann 2019-11-22 11708
d2e4c1e6c29472 Daniel Borkmann 2019-11-22 11709 ret = bpf_jit_add_poke_descriptor(prog, &desc);
d2e4c1e6c29472 Daniel Borkmann 2019-11-22 11710 if (ret < 0) {
d2e4c1e6c29472 Daniel Borkmann 2019-11-22 11711 verbose(env, "adding tail call poke descriptor failed\n");
d2e4c1e6c29472 Daniel Borkmann 2019-11-22 11712 return ret;
d2e4c1e6c29472 Daniel Borkmann 2019-11-22 11713 }
d2e4c1e6c29472 Daniel Borkmann 2019-11-22 11714
d2e4c1e6c29472 Daniel Borkmann 2019-11-22 11715 insn->imm = ret + 1;
d2e4c1e6c29472 Daniel Borkmann 2019-11-22 11716 continue;
d2e4c1e6c29472 Daniel Borkmann 2019-11-22 11717 }
d2e4c1e6c29472 Daniel Borkmann 2019-11-22 11718
c93552c443ebc6 Daniel Borkmann 2018-05-24 11719 if (!bpf_map_ptr_unpriv(aux))
c93552c443ebc6 Daniel Borkmann 2018-05-24 11720 continue;
c93552c443ebc6 Daniel Borkmann 2018-05-24 11721
b2157399cc9898 Alexei Starovoitov 2018-01-07 11722 /* instead of changing every JIT dealing with tail_call
b2157399cc9898 Alexei Starovoitov 2018-01-07 11723 * emit two extra insns:
b2157399cc9898 Alexei Starovoitov 2018-01-07 11724 * if (index >= max_entries) goto out;
b2157399cc9898 Alexei Starovoitov 2018-01-07 11725 * index &= array->index_mask;
b2157399cc9898 Alexei Starovoitov 2018-01-07 11726 * to avoid out-of-bounds cpu speculation
b2157399cc9898 Alexei Starovoitov 2018-01-07 11727 */
c93552c443ebc6 Daniel Borkmann 2018-05-24 11728 if (bpf_map_ptr_poisoned(aux)) {
40950343932879 Colin Ian King 2018-01-10 11729 verbose(env, "tail_call abusing map_ptr\n");
b2157399cc9898 Alexei Starovoitov 2018-01-07 11730 return -EINVAL;
b2157399cc9898 Alexei Starovoitov 2018-01-07 11731 }
c93552c443ebc6 Daniel Borkmann 2018-05-24 11732
d2e4c1e6c29472 Daniel Borkmann 2019-11-22 11733 map_ptr = BPF_MAP_PTR(aux->map_ptr_state);
b2157399cc9898 Alexei Starovoitov 2018-01-07 11734 insn_buf[0] = BPF_JMP_IMM(BPF_JGE, BPF_REG_3,
b2157399cc9898 Alexei Starovoitov 2018-01-07 11735 map_ptr->max_entries, 2);
b2157399cc9898 Alexei Starovoitov 2018-01-07 11736 insn_buf[1] = BPF_ALU32_IMM(BPF_AND, BPF_REG_3,
b2157399cc9898 Alexei Starovoitov 2018-01-07 11737 container_of(map_ptr,
b2157399cc9898 Alexei Starovoitov 2018-01-07 11738 struct bpf_array,
b2157399cc9898 Alexei Starovoitov 2018-01-07 11739 map)->index_mask);
b2157399cc9898 Alexei Starovoitov 2018-01-07 11740 insn_buf[2] = *insn;
b2157399cc9898 Alexei Starovoitov 2018-01-07 11741 cnt = 3;
b2157399cc9898 Alexei Starovoitov 2018-01-07 11742 new_prog = bpf_patch_insn_data(env, i + delta, insn_buf, cnt);
b2157399cc9898 Alexei Starovoitov 2018-01-07 11743 if (!new_prog)
b2157399cc9898 Alexei Starovoitov 2018-01-07 11744 return -ENOMEM;
b2157399cc9898 Alexei Starovoitov 2018-01-07 11745
b2157399cc9898 Alexei Starovoitov 2018-01-07 11746 delta += cnt - 1;
b2157399cc9898 Alexei Starovoitov 2018-01-07 11747 env->prog = prog = new_prog;
b2157399cc9898 Alexei Starovoitov 2018-01-07 11748 insn = new_prog->insnsi + i + delta;
e245c5c6a5656e Alexei Starovoitov 2017-03-15 11749 continue;
e245c5c6a5656e Alexei Starovoitov 2017-03-15 11750 }
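The two extra instructions described in the comment above can be modeled directly: index_mask is the array size rounded up to a power of two, minus one, so even an index consumed under speculation stays inside the array allocation. A userspace sketch, not the kernel helper:

```c
#include <assert.h>
#include <stdint.h>

/* Model of the patched-in bounds check plus mask.  Returns the
 * (masked) index to use, or -1 when the tail call is skipped. */
static int tail_call_index(uint32_t index, uint32_t max_entries,
			   uint32_t index_mask)
{
	if (index >= max_entries)	/* if (index >= max_entries) goto out; */
		return -1;		/* out: tail call does nothing         */
	return (int)(index & index_mask);	/* index &= array->index_mask; */
}
```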
e245c5c6a5656e Alexei Starovoitov 2017-03-15 11751
89c63074c2bc25 Daniel Borkmann 2017-08-19 11752 /* BPF_EMIT_CALL() assumptions in some of the map_gen_lookup
09772d92cd5ad9 Daniel Borkmann 2018-06-02 11753 * and other inlining handlers are currently limited to 64 bit
09772d92cd5ad9 Daniel Borkmann 2018-06-02 11754 * only.
89c63074c2bc25 Daniel Borkmann 2017-08-19 11755 */
60b58afc96c9df Alexei Starovoitov 2017-12-14 11756 if (prog->jit_requested && BITS_PER_LONG == 64 &&
09772d92cd5ad9 Daniel Borkmann 2018-06-02 11757 (insn->imm == BPF_FUNC_map_lookup_elem ||
09772d92cd5ad9 Daniel Borkmann 2018-06-02 11758 insn->imm == BPF_FUNC_map_update_elem ||
84430d4232c36c Daniel Borkmann 2018-10-21 11759 insn->imm == BPF_FUNC_map_delete_elem ||
84430d4232c36c Daniel Borkmann 2018-10-21 11760 insn->imm == BPF_FUNC_map_push_elem ||
84430d4232c36c Daniel Borkmann 2018-10-21 11761 insn->imm == BPF_FUNC_map_pop_elem ||
84430d4232c36c Daniel Borkmann 2018-10-21 11762 insn->imm == BPF_FUNC_map_peek_elem)) {
c93552c443ebc6 Daniel Borkmann 2018-05-24 11763 aux = &env->insn_aux_data[i + delta];
c93552c443ebc6 Daniel Borkmann 2018-05-24 11764 if (bpf_map_ptr_poisoned(aux))
c93552c443ebc6 Daniel Borkmann 2018-05-24 11765 goto patch_call_imm;
c93552c443ebc6 Daniel Borkmann 2018-05-24 11766
d2e4c1e6c29472 Daniel Borkmann 2019-11-22 11767 map_ptr = BPF_MAP_PTR(aux->map_ptr_state);
09772d92cd5ad9 Daniel Borkmann 2018-06-02 11768 ops = map_ptr->ops;
09772d92cd5ad9 Daniel Borkmann 2018-06-02 11769 if (insn->imm == BPF_FUNC_map_lookup_elem &&
09772d92cd5ad9 Daniel Borkmann 2018-06-02 11770 ops->map_gen_lookup) {
09772d92cd5ad9 Daniel Borkmann 2018-06-02 11771 cnt = ops->map_gen_lookup(map_ptr, insn_buf);
4a8f87e60f6db4 Daniel Borkmann 2020-10-11 11772 if (cnt == -EOPNOTSUPP)
4a8f87e60f6db4 Daniel Borkmann 2020-10-11 11773 goto patch_map_ops_generic;
4a8f87e60f6db4 Daniel Borkmann 2020-10-11 11774 if (cnt <= 0 || cnt >= ARRAY_SIZE(insn_buf)) {
61bd5218eef349 Jakub Kicinski 2017-10-09 11775 verbose(env, "bpf verifier is misconfigured\n");
81ed18ab3098b6 Alexei Starovoitov 2017-03-15 11776 return -EINVAL;
81ed18ab3098b6 Alexei Starovoitov 2017-03-15 11777 }
81ed18ab3098b6 Alexei Starovoitov 2017-03-15 11778
09772d92cd5ad9 Daniel Borkmann 2018-06-02 11779 new_prog = bpf_patch_insn_data(env, i + delta,
09772d92cd5ad9 Daniel Borkmann 2018-06-02 11780 insn_buf, cnt);
81ed18ab3098b6 Alexei Starovoitov 2017-03-15 11781 if (!new_prog)
81ed18ab3098b6 Alexei Starovoitov 2017-03-15 11782 return -ENOMEM;
81ed18ab3098b6 Alexei Starovoitov 2017-03-15 11783
81ed18ab3098b6 Alexei Starovoitov 2017-03-15 11784 delta += cnt - 1;
81ed18ab3098b6 Alexei Starovoitov 2017-03-15 11785 env->prog = prog = new_prog;
81ed18ab3098b6 Alexei Starovoitov 2017-03-15 11786 insn = new_prog->insnsi + i + delta;
81ed18ab3098b6 Alexei Starovoitov 2017-03-15 11787 continue;
81ed18ab3098b6 Alexei Starovoitov 2017-03-15 11788 }
81ed18ab3098b6 Alexei Starovoitov 2017-03-15 11789
09772d92cd5ad9 Daniel Borkmann 2018-06-02 11790 BUILD_BUG_ON(!__same_type(ops->map_lookup_elem,
09772d92cd5ad9 Daniel Borkmann 2018-06-02 11791 (void *(*)(struct bpf_map *map, void *key))NULL));
09772d92cd5ad9 Daniel Borkmann 2018-06-02 11792 BUILD_BUG_ON(!__same_type(ops->map_delete_elem,
09772d92cd5ad9 Daniel Borkmann 2018-06-02 11793 (int (*)(struct bpf_map *map, void *key))NULL));
09772d92cd5ad9 Daniel Borkmann 2018-06-02 11794 BUILD_BUG_ON(!__same_type(ops->map_update_elem,
09772d92cd5ad9 Daniel Borkmann 2018-06-02 11795 (int (*)(struct bpf_map *map, void *key, void *value,
09772d92cd5ad9 Daniel Borkmann 2018-06-02 11796 u64 flags))NULL));
84430d4232c36c Daniel Borkmann 2018-10-21 11797 BUILD_BUG_ON(!__same_type(ops->map_push_elem,
84430d4232c36c Daniel Borkmann 2018-10-21 11798 (int (*)(struct bpf_map *map, void *value,
84430d4232c36c Daniel Borkmann 2018-10-21 11799 u64 flags))NULL));
84430d4232c36c Daniel Borkmann 2018-10-21 11800 BUILD_BUG_ON(!__same_type(ops->map_pop_elem,
84430d4232c36c Daniel Borkmann 2018-10-21 11801 (int (*)(struct bpf_map *map, void *value))NULL));
84430d4232c36c Daniel Borkmann 2018-10-21 11802 BUILD_BUG_ON(!__same_type(ops->map_peek_elem,
84430d4232c36c Daniel Borkmann 2018-10-21 11803 (int (*)(struct bpf_map *map, void *value))NULL));
4a8f87e60f6db4 Daniel Borkmann 2020-10-11 11804 patch_map_ops_generic:
09772d92cd5ad9 Daniel Borkmann 2018-06-02 11805 switch (insn->imm) {
09772d92cd5ad9 Daniel Borkmann 2018-06-02 11806 case BPF_FUNC_map_lookup_elem:
09772d92cd5ad9 Daniel Borkmann 2018-06-02 11807 insn->imm = BPF_CAST_CALL(ops->map_lookup_elem) -
09772d92cd5ad9 Daniel Borkmann 2018-06-02 11808 __bpf_call_base;
09772d92cd5ad9 Daniel Borkmann 2018-06-02 11809 continue;
09772d92cd5ad9 Daniel Borkmann 2018-06-02 11810 case BPF_FUNC_map_update_elem:
09772d92cd5ad9 Daniel Borkmann 2018-06-02 11811 insn->imm = BPF_CAST_CALL(ops->map_update_elem) -
09772d92cd5ad9 Daniel Borkmann 2018-06-02 11812 __bpf_call_base;
09772d92cd5ad9 Daniel Borkmann 2018-06-02 11813 continue;
09772d92cd5ad9 Daniel Borkmann 2018-06-02 11814 case BPF_FUNC_map_delete_elem:
09772d92cd5ad9 Daniel Borkmann 2018-06-02 11815 insn->imm = BPF_CAST_CALL(ops->map_delete_elem) -
09772d92cd5ad9 Daniel Borkmann 2018-06-02 11816 __bpf_call_base;
09772d92cd5ad9 Daniel Borkmann 2018-06-02 11817 continue;
84430d4232c36c Daniel Borkmann 2018-10-21 11818 case BPF_FUNC_map_push_elem:
84430d4232c36c Daniel Borkmann 2018-10-21 11819 insn->imm = BPF_CAST_CALL(ops->map_push_elem) -
84430d4232c36c Daniel Borkmann 2018-10-21 11820 __bpf_call_base;
84430d4232c36c Daniel Borkmann 2018-10-21 11821 continue;
84430d4232c36c Daniel Borkmann 2018-10-21 11822 case BPF_FUNC_map_pop_elem:
84430d4232c36c Daniel Borkmann 2018-10-21 11823 insn->imm = BPF_CAST_CALL(ops->map_pop_elem) -
84430d4232c36c Daniel Borkmann 2018-10-21 11824 __bpf_call_base;
84430d4232c36c Daniel Borkmann 2018-10-21 11825 continue;
84430d4232c36c Daniel Borkmann 2018-10-21 11826 case BPF_FUNC_map_peek_elem:
84430d4232c36c Daniel Borkmann 2018-10-21 11827 insn->imm = BPF_CAST_CALL(ops->map_peek_elem) -
84430d4232c36c Daniel Borkmann 2018-10-21 11828 __bpf_call_base;
84430d4232c36c Daniel Borkmann 2018-10-21 11829 continue;
09772d92cd5ad9 Daniel Borkmann 2018-06-02 11830 }
09772d92cd5ad9 Daniel Borkmann 2018-06-02 11831
09772d92cd5ad9 Daniel Borkmann 2018-06-02 @11832 goto patch_call_imm;
09772d92cd5ad9 Daniel Borkmann 2018-06-02 11833 }
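The smatch warning at line 11832 points at the `goto patch_call_imm` that follows this switch. Every listed case ends in `continue`, but helper IDs not matched by any case fall out of the switch, so the goto is in fact reachable; the report appears to be a false positive. A minimal reduction of the control flow (our own, not kernel code):

```c
#include <assert.h>

/* Minimal model of the flagged pattern: all listed cases jump
 * back to the loop, yet unmatched values fall out of the switch,
 * so the statement after it is still reachable. */
static int count_fallbacks(const int *v, int n)
{
	int fallbacks = 0, i;

	for (i = 0; i < n; i++) {
		switch (v[i]) {
		case 1:
		case 2:
			continue;	/* handled inline, like the map helpers */
		}
		fallbacks++;		/* reachable: v[i] not in {1, 2} */
	}
	return fallbacks;
}
```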
09772d92cd5ad9 Daniel Borkmann 2018-06-02 11834
f64cacf58f39b9 Brendan Jackman 2021-02-17 11835 /* Implement bpf_jiffies64 inline. */
5576b991e9c1a1 Martin KaFai Lau 2020-01-22 11836 if (prog->jit_requested && BITS_PER_LONG == 64 &&
5576b991e9c1a1 Martin KaFai Lau 2020-01-22 11837 insn->imm == BPF_FUNC_jiffies64) {
5576b991e9c1a1 Martin KaFai Lau 2020-01-22 11838 struct bpf_insn ld_jiffies_addr[2] = {
5576b991e9c1a1 Martin KaFai Lau 2020-01-22 11839 BPF_LD_IMM64(BPF_REG_0,
5576b991e9c1a1 Martin KaFai Lau 2020-01-22 11840 (unsigned long)&jiffies),
5576b991e9c1a1 Martin KaFai Lau 2020-01-22 11841 };
5576b991e9c1a1 Martin KaFai Lau 2020-01-22 11842
5576b991e9c1a1 Martin KaFai Lau 2020-01-22 11843 insn_buf[0] = ld_jiffies_addr[0];
5576b991e9c1a1 Martin KaFai Lau 2020-01-22 11844 insn_buf[1] = ld_jiffies_addr[1];
5576b991e9c1a1 Martin KaFai Lau 2020-01-22 11845 insn_buf[2] = BPF_LDX_MEM(BPF_DW, BPF_REG_0,
5576b991e9c1a1 Martin KaFai Lau 2020-01-22 11846 BPF_REG_0, 0);
5576b991e9c1a1 Martin KaFai Lau 2020-01-22 11847 cnt = 3;
5576b991e9c1a1 Martin KaFai Lau 2020-01-22 11848
5576b991e9c1a1 Martin KaFai Lau 2020-01-22 11849 new_prog = bpf_patch_insn_data(env, i + delta, insn_buf,
5576b991e9c1a1 Martin KaFai Lau 2020-01-22 11850 cnt);
5576b991e9c1a1 Martin KaFai Lau 2020-01-22 11851 if (!new_prog)
5576b991e9c1a1 Martin KaFai Lau 2020-01-22 11852 return -ENOMEM;
5576b991e9c1a1 Martin KaFai Lau 2020-01-22 11853
5576b991e9c1a1 Martin KaFai Lau 2020-01-22 11854 delta += cnt - 1;
5576b991e9c1a1 Martin KaFai Lau 2020-01-22 11855 env->prog = prog = new_prog;
5576b991e9c1a1 Martin KaFai Lau 2020-01-22 11856 insn = new_prog->insnsi + i + delta;
5576b991e9c1a1 Martin KaFai Lau 2020-01-22 11857 continue;
5576b991e9c1a1 Martin KaFai Lau 2020-01-22 11858 }
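The jiffies64 inlining above replaces a helper call with a 64-bit immediate load of the address of jiffies followed by a direct dereference. A userspace model, using our own stand-in global rather than the kernel symbol:

```c
#include <assert.h>
#include <stdint.h>

/* Stand-in for the kernel's jiffies counter; arbitrary value. */
static volatile uint64_t fake_jiffies = 4294937296ULL;

/* Model of the three patched instructions. */
static uint64_t inlined_jiffies64(void)
{
	volatile uint64_t *p = &fake_jiffies;	/* BPF_LD_IMM64(R0, &jiffies)       */
	return *p;				/* BPF_LDX_MEM(BPF_DW, R0, R0, 0)   */
}
```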
5576b991e9c1a1 Martin KaFai Lau 2020-01-22 11859
81ed18ab3098b6 Alexei Starovoitov 2017-03-15 11860 patch_call_imm:
5e43f899b03a34 Andrey Ignatov 2018-03-30 11861 fn = env->ops->get_func_proto(insn->imm, env->prog);
e245c5c6a5656e Alexei Starovoitov 2017-03-15 11862 /* All functions that have a prototype and that the verifier
e245c5c6a5656e Alexei Starovoitov 2017-03-15 11863 * allowed programs to call must be real in-kernel functions.
e245c5c6a5656e Alexei Starovoitov 2017-03-15 11864 */
79741b3bdec01a Alexei Starovoitov 2017-03-15 11865 if (!fn->func) {
61bd5218eef349 Jakub Kicinski 2017-10-09 11866 verbose(env,
61bd5218eef349 Jakub Kicinski 2017-10-09 11867 "kernel subsystem misconfigured func %s#%d\n",
79741b3bdec01a Alexei Starovoitov 2017-03-15 11868 func_id_name(insn->imm), insn->imm);
79741b3bdec01a Alexei Starovoitov 2017-03-15 11869 return -EFAULT;
e245c5c6a5656e Alexei Starovoitov 2017-03-15 11870 }
79741b3bdec01a Alexei Starovoitov 2017-03-15 11871 insn->imm = fn->func - __bpf_call_base;
e245c5c6a5656e Alexei Starovoitov 2017-03-15 11872 }
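For every call not handled by an inlining rewrite, patch_call_imm stores the helper in the call's 32-bit imm field as an offset from __bpf_call_base, which the interpreter later adds back to recover the function. A userspace model using intptr_t arithmetic (conditionally supported by ISO C, but valid on the platforms the interpreter targets):

```c
#include <assert.h>
#include <stdint.h>

/* Two stand-in "helpers"; names are ours. */
static int helper_base(void) { return 1; }	/* plays __bpf_call_base */
static int helper_foo(void)  { return 2; }

typedef int (*helper_fn)(void);

/* imm = fn->func - __bpf_call_base */
static int32_t encode_imm(helper_fn fn, helper_fn base)
{
	return (int32_t)((intptr_t)fn - (intptr_t)base);
}

/* interpreter side: fn = __bpf_call_base + imm */
static helper_fn decode_imm(int32_t imm, helper_fn base)
{
	return (helper_fn)((intptr_t)base + imm);
}
```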
e245c5c6a5656e Alexei Starovoitov 2017-03-15 11873
d2e4c1e6c29472 Daniel Borkmann 2019-11-22 11874 /* Since poke tab is now finalized, publish aux to tracker. */
d2e4c1e6c29472 Daniel Borkmann 2019-11-22 11875 for (i = 0; i < prog->aux->size_poke_tab; i++) {
d2e4c1e6c29472 Daniel Borkmann 2019-11-22 11876 map_ptr = prog->aux->poke_tab[i].tail_call.map;
d2e4c1e6c29472 Daniel Borkmann 2019-11-22 11877 if (!map_ptr->ops->map_poke_track ||
d2e4c1e6c29472 Daniel Borkmann 2019-11-22 11878 !map_ptr->ops->map_poke_untrack ||
d2e4c1e6c29472 Daniel Borkmann 2019-11-22 11879 !map_ptr->ops->map_poke_run) {
d2e4c1e6c29472 Daniel Borkmann 2019-11-22 11880 verbose(env, "bpf verifier is misconfigured\n");
d2e4c1e6c29472 Daniel Borkmann 2019-11-22 11881 return -EINVAL;
d2e4c1e6c29472 Daniel Borkmann 2019-11-22 11882 }
d2e4c1e6c29472 Daniel Borkmann 2019-11-22 11883
d2e4c1e6c29472 Daniel Borkmann 2019-11-22 11884 ret = map_ptr->ops->map_poke_track(map_ptr, prog->aux);
d2e4c1e6c29472 Daniel Borkmann 2019-11-22 11885 if (ret < 0) {
d2e4c1e6c29472 Daniel Borkmann 2019-11-22 11886 verbose(env, "tracking tail call prog failed\n");
d2e4c1e6c29472 Daniel Borkmann 2019-11-22 11887 return ret;
d2e4c1e6c29472 Daniel Borkmann 2019-11-22 11888 }
d2e4c1e6c29472 Daniel Borkmann 2019-11-22 11889 }
d2e4c1e6c29472 Daniel Borkmann 2019-11-22 11890
79741b3bdec01a Alexei Starovoitov 2017-03-15 11891 return 0;
79741b3bdec01a Alexei Starovoitov 2017-03-15 11892 }
e245c5c6a5656e Alexei Starovoitov 2017-03-15 11893
---
0-DAY CI Kernel Test Service, Intel Corporation
https://lists.01.org/hyperkitty/list/kbuild-all@lists.01.org