* [bug report] bpf: Inline calls to bpf_loop when callback is known
@ 2022-06-23  8:21 Dan Carpenter
From: Dan Carpenter @ 2022-06-23  8:21 UTC
  To: eddyz87; +Cc: bpf

Hello Eduard Zingerman,

The patch 1ade23711971: "bpf: Inline calls to bpf_loop when callback
is known" from Jun 21, 2022, leads to the following Smatch static
checker warning:

	kernel/bpf/verifier.c:14420 inline_bpf_loop()
	error: dereferencing freed memory 'env->prog'

kernel/bpf/verifier.c
    14350 static struct bpf_prog *inline_bpf_loop(struct bpf_verifier_env *env,
    14351                                         int position,
    14352                                         s32 stack_base,
    14353                                         u32 callback_subprogno,
    14354                                         u32 *cnt)
    14355 {
    14356         s32 r6_offset = stack_base + 0 * BPF_REG_SIZE;
    14357         s32 r7_offset = stack_base + 1 * BPF_REG_SIZE;
    14358         s32 r8_offset = stack_base + 2 * BPF_REG_SIZE;
    14359         int reg_loop_max = BPF_REG_6;
    14360         int reg_loop_cnt = BPF_REG_7;
    14361         int reg_loop_ctx = BPF_REG_8;
    14362 
    14363         struct bpf_prog *new_prog;
    14364         u32 callback_start;
    14365         u32 call_insn_offset;
    14366         s32 callback_offset;
    14367 
    14368         /* This represents an inlined version of bpf_iter.c:bpf_loop,
    14369          * be careful to modify this code in sync.
    14370          */
    14371         struct bpf_insn insn_buf[] = {
    14372                 /* Return error and jump to the end of the patch if
    14373                  * expected number of iterations is too big.
    14374                  */
    14375                 BPF_JMP_IMM(BPF_JLE, BPF_REG_1, BPF_MAX_LOOPS, 2),
    14376                 BPF_MOV32_IMM(BPF_REG_0, -E2BIG),
    14377                 BPF_JMP_IMM(BPF_JA, 0, 0, 16),
    14378                 /* spill R6, R7, R8 to use these as loop vars */
    14379                 BPF_STX_MEM(BPF_DW, BPF_REG_10, BPF_REG_6, r6_offset),
    14380                 BPF_STX_MEM(BPF_DW, BPF_REG_10, BPF_REG_7, r7_offset),
    14381                 BPF_STX_MEM(BPF_DW, BPF_REG_10, BPF_REG_8, r8_offset),
    14382                 /* initialize loop vars */
    14383                 BPF_MOV64_REG(reg_loop_max, BPF_REG_1),
    14384                 BPF_MOV32_IMM(reg_loop_cnt, 0),
    14385                 BPF_MOV64_REG(reg_loop_ctx, BPF_REG_3),
    14386                 /* loop header,
    14387                  * if reg_loop_cnt >= reg_loop_max skip the loop body
    14388                  */
    14389                 BPF_JMP_REG(BPF_JGE, reg_loop_cnt, reg_loop_max, 5),
    14390                 /* callback call,
    14391                  * correct callback offset would be set after patching
    14392                  */
    14393                 BPF_MOV64_REG(BPF_REG_1, reg_loop_cnt),
    14394                 BPF_MOV64_REG(BPF_REG_2, reg_loop_ctx),
    14395                 BPF_CALL_REL(0),
    14396                 /* increment loop counter */
    14397                 BPF_ALU64_IMM(BPF_ADD, reg_loop_cnt, 1),
    14398                 /* jump to loop header if callback returned 0 */
    14399                 BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, -6),
    14400                 /* return value of bpf_loop,
    14401                  * set R0 to the number of iterations
    14402                  */
    14403                 BPF_MOV64_REG(BPF_REG_0, reg_loop_cnt),
    14404                 /* restore original values of R6, R7, R8 */
    14405                 BPF_LDX_MEM(BPF_DW, BPF_REG_6, BPF_REG_10, r6_offset),
    14406                 BPF_LDX_MEM(BPF_DW, BPF_REG_7, BPF_REG_10, r7_offset),
    14407                 BPF_LDX_MEM(BPF_DW, BPF_REG_8, BPF_REG_10, r8_offset),
    14408         };
    14409 
    14410         *cnt = ARRAY_SIZE(insn_buf);
    14411         new_prog = bpf_patch_insn_data(env, position, insn_buf, *cnt);

The bpf_patch_insn_data() function sometimes frees the old "env->prog"
and returns "new_prog".

    14412         if (!new_prog)
    14413                 return new_prog;
    14414 
    14415         /* callback start is known only after patching */
    14416         callback_start = env->subprog_info[callback_subprogno].start;
    14417         /* Note: insn_buf[12] is an offset of BPF_CALL_REL instruction */
    14418         call_insn_offset = position + 12;
    14419         callback_offset = callback_start - call_insn_offset - 1;
--> 14420         env->prog->insnsi[call_insn_offset].imm = callback_offset;

Presumably there is an "env->prog = new_prog;" assignment somewhere, but I
couldn't spot it in bpf_patch_insn_data().  Either way, it would be more
readable to say:

	new_prog->insnsi[call_insn_offset].imm = callback_offset;


    14421 
    14422         return new_prog;
    14423 }
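
In other words, the change I have in mind is something like this
(untested) diff:

--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -14418,3 +14418,3 @@ static struct bpf_prog *inline_bpf_loop(struct bpf_verifier_env *env,
 	call_insn_offset = position + 12;
 	callback_offset = callback_start - call_insn_offset - 1;
-	env->prog->insnsi[call_insn_offset].imm = callback_offset;
+	new_prog->insnsi[call_insn_offset].imm = callback_offset;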

regards,
dan carpenter
