From: Yonghong Song <yonghong.song@linux.dev>
To: Xu Kuohai <xukuohai@huaweicloud.com>,
	Eduard Zingerman <eddyz87@gmail.com>,
	bpf@vger.kernel.org, netdev@vger.kernel.org,
	linux-security-module@vger.kernel.org,
	linux-kselftest@vger.kernel.org
Cc: Alexei Starovoitov <ast@kernel.org>,
	Andrii Nakryiko <andrii@kernel.org>,
	Daniel Borkmann <daniel@iogearbox.net>,
	Martin KaFai Lau <martin.lau@linux.dev>,
	Song Liu <song@kernel.org>,
	John Fastabend <john.fastabend@gmail.com>,
	KP Singh <kpsingh@kernel.org>,
	Stanislav Fomichev <sdf@google.com>, Hao Luo <haoluo@google.com>,
	Jiri Olsa <jolsa@kernel.org>,
	Matt Bobrowski <mattbobrowski@google.com>,
	Brendan Jackman <jackmanb@chromium.org>,
	Paul Moore <paul@paul-moore.com>,
	James Morris <jmorris@namei.org>,
	"Serge E . Hallyn" <serge@hallyn.com>,
	Khadija Kamran <kamrankhadijadj@gmail.com>,
	Casey Schaufler <casey@schaufler-ca.com>,
	Ondrej Mosnacek <omosnace@redhat.com>,
	Kees Cook <keescook@chromium.org>,
	John Johansen <john.johansen@canonical.com>,
	Lukas Bulwahn <lukas.bulwahn@gmail.com>,
	Roberto Sassu <roberto.sassu@huawei.com>,
	Shung-Hsi Yu <shung-hsi.yu@suse.com>
Subject: Re: [PATCH bpf-next v3 07/11] bpf: Fix a false rejection caused by AND operation
Date: Thu, 25 Apr 2024 09:28:24 -0700	[thread overview]
Message-ID: <8a8d7802-2429-478e-9835-0b56fde99393@linux.dev> (raw)
In-Reply-To: <2ef84dfa-44a9-4d4c-b3b2-9d0b2a2e0d8e@huaweicloud.com>


On 4/24/24 7:42 PM, Xu Kuohai wrote:
> On 4/25/2024 6:06 AM, Yonghong Song wrote:
>>
>> On 4/23/24 7:25 PM, Xu Kuohai wrote:
>>> On 4/24/2024 5:55 AM, Yonghong Song wrote:
>>>>
>>>> On 4/20/24 1:33 AM, Xu Kuohai wrote:
>>>>> On 4/20/2024 7:00 AM, Eduard Zingerman wrote:
>>>>>> On Thu, 2024-04-11 at 20:27 +0800, Xu Kuohai wrote:
>>>>>>> From: Xu Kuohai <xukuohai@huawei.com>
>>>>>>>
>>>>>>> With lsm return value check, the no-alu32 version test_libbpf_get_fd_by_id_opts
>>>>>>> is rejected by the verifier, and the log says:
>>>>>>>
>>>>>>>    0: R1=ctx() R10=fp0
>>>>>>>    ; int BPF_PROG(check_access, struct bpf_map *map, fmode_t fmode) @ test_libbpf_get_fd_by_id_opts.c:27
>>>>>>>    0: (b7) r0 = 0                        ; R0_w=0
>>>>>>>    1: (79) r2 = *(u64 *)(r1 +0)
>>>>>>>    func 'bpf_lsm_bpf_map' arg0 has btf_id 916 type STRUCT 'bpf_map'
>>>>>>>    2: R1=ctx() R2_w=trusted_ptr_bpf_map()
>>>>>>>    ; if (map != (struct bpf_map *)&data_input) @ test_libbpf_get_fd_by_id_opts.c:29
>>>>>>>    2: (18) r3 = 0xffff9742c0951a00       ; R3_w=map_ptr(map=data_input,ks=4,vs=4)
>>>>>>>    4: (5d) if r2 != r3 goto pc+4         ; R2_w=trusted_ptr_bpf_map() R3_w=map_ptr(map=data_input,ks=4,vs=4)
>>>>>>>    ; int BPF_PROG(check_access, struct bpf_map *map, fmode_t fmode) @ test_libbpf_get_fd_by_id_opts.c:27
>>>>>>>    5: (79) r0 = *(u64 *)(r1 +8)          ; R0_w=scalar() R1=ctx()
>>>>>>>    ; if (fmode & FMODE_WRITE) @ test_libbpf_get_fd_by_id_opts.c:32
>>>>>>>    6: (67) r0 <<= 62                     ; R0_w=scalar(smax=0x4000000000000000,umax=0xc000000000000000,smin32=0,smax32=umax32=0,var_off=(0x0; 0xc000000000000000))
>>>>>>>    7: (c7) r0 s>>= 63                    ; R0_w=scalar(smin=smin32=-1,smax=smax32=0)
>>>>>>>    ;  @ test_libbpf_get_fd_by_id_opts.c:0
>>>>>>>    8: (57) r0 &= -13                     ; R0_w=scalar(smax=0x7ffffffffffffff3,umax=0xfffffffffffffff3,smax32=0x7ffffff3,umax32=0xfffffff3,var_off=(0x0; 0xfffffffffffffff3))
>>>>>>>    ; int BPF_PROG(check_access, struct bpf_map *map, fmode_t fmode) @ test_libbpf_get_fd_by_id_opts.c:27
>>>>>>>    9: (95) exit
>>>>>>>
>>>>>>> And here is the C code of the prog.
>>>>>>>
>>>>>>> SEC("lsm/bpf_map")
>>>>>>> int BPF_PROG(check_access, struct bpf_map *map, fmode_t fmode)
>>>>>>> {
>>>>>>>     if (map != (struct bpf_map *)&data_input)
>>>>>>>         return 0;
>>>>>>>
>>>>>>>     if (fmode & FMODE_WRITE)
>>>>>>>         return -EACCES;
>>>>>>>
>>>>>>>     return 0;
>>>>>>> }
>>>>>>>
>>>>>>> It is clear that the prog can only return either 0 or -EACCES, and
>>>>>>> both values are legal.
>>>>>>>
>>>>>>> So why is it rejected by the verifier?
>>>>>>>
>>>>>>> The verifier log shows that the second if statement and the return
>>>>>>> value setting in the prog are optimized to the bitwise operations
>>>>>>> "r0 s>>= 63" and "r0 &= -13". The verifier correctly deduces that the
>>>>>>> value of r0 is in the range [-1, 0] after verifying the instruction
>>>>>>> "r0 s>>= 63". But when the verifier proceeds to verify the instruction
>>>>>>> "r0 &= -13", it fails to deduce the correct value range of r0.
>>>>>>>
>>>>>>> 7: (c7) r0 s>>= 63                    ; R0_w=scalar(smin=smin32=-1,smax=smax32=0)
>>>>>>> 8: (57) r0 &= -13                     ; R0_w=scalar(smax=0x7ffffffffffffff3,umax=0xfffffffffffffff3,smax32=0x7ffffff3,umax32=0xfffffff3,var_off=(0x0; 0xfffffffffffffff3))
>>>>>>>
>>>>>>> So why does the verifier fail to deduce the result of 'r0 &= -13'?
>>>>>>>
>>>>>>> The verifier uses tnums to track values, and the two ranges "[-1, 0]"
>>>>>>> and "[0, -1ULL]" are encoded as the same tnum. When verifying the
>>>>>>> instruction "r0 &= -13", the verifier erroneously deduces the result
>>>>>>> from "[0, -1ULL] AND -13", which is out of the expected return range
>>>>>>> [-4095, 0].
>>>>>>>
>>>>>>> To fix it, this patch simply adds a special SCALAR32 case for the
>>>>>>> verifier. That is, when the source operand of the AND instruction is
>>>>>>> a constant and the destination operand changes from negative to
>>>>>>> non-negative and falls in range [-256, 256], deduce the result range
>>>>>>> by enumerating all possible AND results.
>>>>>>>
>>>>>>> Signed-off-by: Xu Kuohai <xukuohai@huawei.com>
>>>>>>> ---
>>>>>>
>>>>>> Hello,
>>>>>>
>>>>>> Sorry for the delay, I had to think about this issue a bit.
>>>>>> I found the clang transformation that generates the pattern this patch
>>>>>> tries to handle.
>>>>>> It is located in DAGCombiner::SimplifySelectCC() method (see [1]).
>>>>>> The transformation happens as a part of DAG to DAG rewrites
>>>>>> (LLVM uses several internal representations:
>>>>>>   - generic optimizer uses LLVM IR, most of the work is done
>>>>>>     using this representation;
>>>>>>   - before instruction selection IR is converted to Selection DAG,
>>>>>>     some optimizations are applied at this stage,
>>>>>>     all such optimizations are a set of pattern replacements;
>>>>>>   - Selection DAG is converted to machine code, some optimizations
>>>>>>     are applied at the machine code level).
>>>>>>
>>>>>> Full pattern is described as follows:
>>>>>>
>>>>>>    // fold (select_cc seteq (and x, y), 0, 0, A) -> (and (sra (shl x)) A)
>>>>>>    // where y is has a single bit set.
>>>>>>    // A plaintext description would be, we can turn the SELECT_CC into an AND
>>>>>>    // when the condition can be materialized as an all-ones register.  Any
>>>>>>    // single bit-test can be materialized as an all-ones register with
>>>>>>    // shift-left and shift-right-arith.
>>>>>>
>>>>>> For this particular test case the DAG is converted as follows:
>>>>>>
>>>>>>                      .---------------- lhs         The meaning of this select_cc is:
>>>>>>                      |        .------- rhs         `lhs == rhs ? true value : false value`
>>>>>>                      |        | .----- true value
>>>>>>                      |        | |  .-- false value
>>>>>>                      v        v v  v
>>>>>>    (select_cc seteq (and X 2) 0 0 -13)
>>>>>>                            ^
>>>>>> ->                        '---------------.
>>>>>>    (and (sra (sll X 62) 63)                |
>>>>>>         -13)                               |
>>>>>>                                            |
>>>>>> Before the pattern is applied, it checks that the second 'and' operand
>>>>>> has only one bit set (which is true for '2').
>>>>>>
>>>>>> The pattern itself generates a logical-shift-left / arithmetic-shift-right
>>>>>> pair, which ensures that the result is either all ones (-1) or all zeros
>>>>>> (0). Hence, applying 'and' to the shifts' result and the false value
>>>>>> generates a correct result.
>>>>>>
>>>>>
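
A small aside from me, not something Eduard or the patch posted: the
equivalence this transformation relies on is easy to sanity-check with a
user-space sketch. The helper names below are made up for illustration
only, using the single-bit constant 0x2 (FMODE_WRITE) from the test case:

  #include <assert.h>
  #include <stdint.h>

  /* Original C logic: single-bit test selecting between 0 and -13 (-EACCES). */
  static int64_t as_select_cc(uint64_t fmode)
  {
          return (fmode & 2) ? -13 : 0;
  }

  /* Lowered form: shl/sra turn the tested bit into all-ones or all-zeros,
   * then AND with -13 yields either -13 or 0.
   */
  static int64_t as_shift_and(uint64_t fmode)
  {
          return ((int64_t)(fmode << 62) >> 63) & -13;
  }

  int main(void)
  {
          uint64_t fmode;

          for (fmode = 0; fmode < 16; fmode++)
                  assert(as_select_cc(fmode) == as_shift_and(fmode));
          return 0;
  }
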
>>>>> Thanks for your detailed and invaluable explanation!
>>>>
>>>> Thanks Eduard for the detailed explanation. It looks like we could
>>>> resolve this issue without adding too much complexity to the verifier.
>>>> Also, the code pattern above seems generic enough to be worth the
>>>> verifier change.
>>>>
>>>> Kuohai, please add the detailed explanation (as described by Eduard)
>>>> to the commit message.
>>>>
>>>
>>> Sure, already added. The commit message and the change now look like this:
>>>
>>> ---
>>>
>>>     bpf: Fix a false rejection caused by AND operation
>>>
>>>     With lsm return value check, the no-alu32 version test_libbpf_get_fd_by_id_opts
>>>     is rejected by the verifier, and the log says:
>>>
>>>     0: R1=ctx() R10=fp0
>>>     ; int BPF_PROG(check_access, struct bpf_map *map, fmode_t fmode) @ test_libbpf_get_fd_by_id_opts.c:27
>>>     0: (b7) r0 = 0                        ; R0_w=0
>>>     1: (79) r2 = *(u64 *)(r1 +0)
>>>     func 'bpf_lsm_bpf_map' arg0 has btf_id 916 type STRUCT 'bpf_map'
>>>     2: R1=ctx() R2_w=trusted_ptr_bpf_map()
>>>     ; if (map != (struct bpf_map *)&data_input) @ test_libbpf_get_fd_by_id_opts.c:29
>>>     2: (18) r3 = 0xffff9742c0951a00       ; R3_w=map_ptr(map=data_input,ks=4,vs=4)
>>>     4: (5d) if r2 != r3 goto pc+4         ; R2_w=trusted_ptr_bpf_map() R3_w=map_ptr(map=data_input,ks=4,vs=4)
>>>     ; int BPF_PROG(check_access, struct bpf_map *map, fmode_t fmode) @ test_libbpf_get_fd_by_id_opts.c:27
>>>     5: (79) r0 = *(u64 *)(r1 +8)          ; R0_w=scalar() R1=ctx()
>>>     ; if (fmode & FMODE_WRITE) @ test_libbpf_get_fd_by_id_opts.c:32
>>>     6: (67) r0 <<= 62                     ; R0_w=scalar(smax=0x4000000000000000,umax=0xc000000000000000,smin32=0,smax32=umax32=0,var_off=(0x0; 0xc000000000000000))
>>>     7: (c7) r0 s>>= 63                    ; R0_w=scalar(smin=smin32=-1,smax=smax32=0)
>>>     ;  @ test_libbpf_get_fd_by_id_opts.c:0
>>>     8: (57) r0 &= -13                     ; R0_w=scalar(smax=0x7ffffffffffffff3,umax=0xfffffffffffffff3,smax32=0x7ffffff3,umax32=0xfffffff3,var_off=(0x0; 0xfffffffffffffff3))
>>>     ; int BPF_PROG(check_access, struct bpf_map *map, fmode_t fmode) @ test_libbpf_get_fd_by_id_opts.c:27
>>>     9: (95) exit
>>>
>>>     And here is the C code of the prog.
>>>
>>>     SEC("lsm/bpf_map")
>>>     int BPF_PROG(check_access, struct bpf_map *map, fmode_t fmode)
>>>     {
>>>         if (map != (struct bpf_map *)&data_input)
>>>                 return 0;
>>>
>>>         if (fmode & FMODE_WRITE)
>>>                 return -EACCES;
>>>
>>>         return 0;
>>>     }
>>>
>>>     It is clear that the prog can only return either 0 or -EACCES, and
>>>     both values are legal.
>>>
>>>     So why is it rejected by the verifier?
>>>
>>>     The verifier log shows that the second if statement and the return
>>>     value setting in the prog are optimized to the bitwise operations
>>>     "r0 s>>= 63" and "r0 &= -13". The verifier correctly deduces that the
>>>     value of r0 is in the range [-1, 0] after verifying the instruction
>>>     "r0 s>>= 63". But when the verifier proceeds to verify the instruction
>>>     "r0 &= -13", it fails to deduce the correct value range of r0.
>>>
>>>     7: (c7) r0 s>>= 63                    ; R0_w=scalar(smin=smin32=-1,smax=smax32=0)
>>>     8: (57) r0 &= -13                     ; R0_w=scalar(smax=0x7ffffffffffffff3,umax=0xfffffffffffffff3,smax32=0x7ffffff3,umax32=0xfffffff3,var_off=(0x0; 0xfffffffffffffff3))
>>>
>>>     So why does the verifier fail to deduce the result of 'r0 &= -13'?
>>>
>>>     The verifier uses tnums to track values, and the two ranges "[-1, 0]"
>>>     and "[0, -1ULL]" are encoded as the same tnum. When verifying the
>>>     instruction "r0 &= -13", the verifier erroneously deduces the result
>>>     from "[0, -1ULL] AND -13", which is out of the expected return range
>>>     [-4095, 0].
>>>
>>>     As explained by Eduard in [0], the clang transformation that generates
>>>     this pattern is located in DAGCombiner::SimplifySelectCC() method (see [1]).
>>>
>>>     The transformation happens as a part of DAG to DAG rewrites
>>>     (LLVM uses several internal representations:
>>>      - generic optimizer uses LLVM IR, most of the work is done
>>>        using this representation;
>>>      - before instruction selection IR is converted to Selection DAG,
>>>        some optimizations are applied at this stage,
>>>        all such optimizations are a set of pattern replacements;
>>>      - Selection DAG is converted to machine code, some optimizations
>>>        are applied at the machine code level).
>>>
>>>     Full pattern is described as follows:
>>>
>>>       // fold (select_cc seteq (and x, y), 0, 0, A) -> (and (sra (shl x)) A)
>>>       // where y is has a single bit set.
>>>       // A plaintext description would be, we can turn the SELECT_CC into an AND
>>>       // when the condition can be materialized as an all-ones register.  Any
>>>       // single bit-test can be materialized as an all-ones register with
>>>       // shift-left and shift-right-arith.
>>>
>>>     For this particular test case the DAG is converted as follows:
>>>
>>>                         .---------------- lhs         The meaning of this select_cc is:
>>>                         |        .------- rhs         `lhs == rhs ? true value : false value`
>>>                         |        | .----- true value
>>>                         |        | |  .-- false value
>>>                         v        v v  v
>>>       (select_cc seteq (and X 2) 0 0 -13)
>>>                               ^
>>>     ->                        '---------------.
>>>       (and (sra (sll X 62) 63)                |
>>>            -13)                               |
>>>                                               |
>>>     Before the pattern is applied, it checks that the second 'and' operand
>>>     has only one bit set (which is true for '2').
>>>
>>>     The pattern itself generates a logical-shift-left / arithmetic-shift-right
>>>     pair, which ensures that the result is either all ones (-1) or all zeros
>>>     (0). Hence, applying 'and' to the shifts' result and the false value
>>>     generates a correct result.
>>>
>>>     As suggested by Eduard, this patch makes a special case for the source
>>>     or destination register of the '&=' operation being in the range [-1, 0].
>>>
>>>     Meaning that one of the '&=' operands is either:
>>>     - all ones, in which case the counterpart is the result of the operation;
>>>     - all zeros, in which case zero is the result of the operation.
>>>
>>>     MIN and MAX values can then be derived from the above two observations.
>>>
>>>     [0] https://lore.kernel.org/bpf/e62e2971301ca7f2e9eb74fc500c520285cad8f5.camel@gmail.com/
>>>     [1] https://github.com/llvm/llvm-project/blob/4523a267829c807f3fc8fab8e5e9613985a51565/llvm/lib/CodeGen/SelectionDAG/DAGCombiner.cpp
>>>
>>>     Suggested-by: Eduard Zingerman <eddyz87@gmail.com>
>>>     Signed-off-by: Xu Kuohai <xukuohai@huawei.com>
>>>
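Another aside before the diff, to make the tnum point in the message above
concrete: here is a stand-alone sketch of how a fully-unknown value AND'ed
with -13 produces the var_off=(0x0; 0xfffffffffffffff3) seen in the log.
The struct and the AND rule below are my reading of kernel/bpf/tnum.c,
written as plain user-space code for illustration, not part of this patch:

  #include <stdint.h>
  #include <stdio.h>

  /* tnum (tracked number): 'value' holds known bits, 'mask' marks unknown bits. */
  struct tnum { uint64_t value; uint64_t mask; };

  /* AND rule: a result bit may be one only if it may be one in both operands;
   * it is known-one only if it is known-one in both operands.
   */
  static struct tnum tnum_and(struct tnum a, struct tnum b)
  {
          uint64_t alpha = a.value | a.mask;
          uint64_t beta  = b.value | b.mask;
          uint64_t v     = a.value & b.value;

          return (struct tnum){ v, alpha & beta & ~v };
  }

  int main(void)
  {
          /* Both [-1, 0] and [0, -1ULL] collapse to "all 64 bits unknown". */
          struct tnum r0  = { 0, ~0ULL };
          /* -13 is a fully known constant. */
          struct tnum c   = { (uint64_t)-13, 0 };
          struct tnum res = tnum_and(r0, c);

          /* Prints value=0 mask=0xfffffffffffffff3, matching the verifier log;
           * the tnum alone cannot rule out large positive values.
           */
          printf("value=%#llx mask=%#llx\n",
                 (unsigned long long)res.value, (unsigned long long)res.mask);
          return 0;
  }
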
>>> diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
>>> index 640747b53745..30c551d39329 100644
>>> --- a/kernel/bpf/verifier.c
>>> +++ b/kernel/bpf/verifier.c
>>> @@ -13374,6 +13374,24 @@ static void scalar32_min_max_and(struct bpf_reg_state *dst_reg,
>>>         dst_reg->u32_min_value = var32_off.value;
>>>         dst_reg->u32_max_value = min(dst_reg->u32_max_value, umax_val);
>>>
>>> +       /* Special case: src_reg is known and dst_reg is in range [-1, 0] */
>>> +       if (src_known &&
>>> +               dst_reg->s32_min_value == -1 && dst_reg->s32_max_value == 0 &&
>>> +               dst_reg->smin_value == -1 && dst_reg->smax_value == 0) {
>>
>> do we need to check dst_reg->smin_value/smax_value here? They should not
>> impact the final dst_reg->s32_{min,max}_value computation, right?
>
> right, the check was simply copied from the old code, which only handled
> the case where the 64-bit range is the same as the 32-bit range.

What if we do not have the 64-bit smin_value/smax_value check? Could you give
more explanation here? In my opinion, deducing the lower 32-bit range should
not depend on the upper 32-bit values.
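
For example (a quick user-space check, not verifier code): when the low 32
bits follow the [-1, 0] pattern, the 32-bit view of the AND result is the
same no matter what the upper 32 bits are:

  #include <assert.h>
  #include <stdint.h>

  int main(void)
  {
          const uint64_t c = 0xfffffffffffffff3ULL;  /* -13, the known src operand */

          /* low 32 bits all ones, two different upper halves */
          uint64_t a = 0x00000000ffffffffULL;
          uint64_t b = 0xdeadbeefffffffffULL;

          assert((uint32_t)(a & c) == (uint32_t)(b & c));  /* both are (u32)-13 */

          /* low 32 bits all zeros, two different upper halves */
          uint64_t e = 0x0000000100000000ULL;
          uint64_t f = 0xcafef00d00000000ULL;

          assert((uint32_t)(e & c) == (uint32_t)(f & c));  /* both are 0 */
          return 0;
  }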

>
>> Similarly, for the later 64-bit min/max AND, the 32-bit values do not
>> really matter.
>>
>
> hmm, the 32-bit check is completely unnecessary.
>
>
>>> +               dst_reg->s32_min_value = min_t(s32, src_reg->s32_min_value, 0);
>>> +               dst_reg->s32_max_value = max_t(s32, src_reg->s32_min_value, 0);
>>> +               return;
>>> +       }
>>> +
>>> +       /* Special case: dst_reg is known and src_reg is in range [-1, 0] */
>>> +       if (dst_known &&
>>> +               src_reg->s32_min_value == -1 && src_reg->s32_max_value == 0 &&
>>> +               src_reg->smin_value == -1 && src_reg->smax_value == 0) {
>>> +               dst_reg->s32_min_value = min_t(s32, dst_reg->s32_min_value, 0);
>>> +               dst_reg->s32_max_value = max_t(s32, dst_reg->s32_min_value, 0);
>>> +               return;
>>> +       }
>>> +
>>>         /* Safe to set s32 bounds by casting u32 result into s32 when u32
>>>          * doesn't cross sign boundary. Otherwise set s32 bounds to unbounded.
>>>          */
>>> @@ -13404,6 +13422,24 @@ static void scalar_min_max_and(struct bpf_reg_state *dst_reg,
>>>         dst_reg->umin_value = dst_reg->var_off.value;
>>>         dst_reg->umax_value = min(dst_reg->umax_value, umax_val);
>>>
>>> +       /* Special case: src_reg is known and dst_reg is in range [-1, 0] */
>>> +       if (src_known &&
>>> +               dst_reg->smin_value == -1 && dst_reg->smax_value == 0 &&
>>> +               dst_reg->s32_min_value == -1 && dst_reg->s32_max_value == 0) {
>>> +               dst_reg->smin_value = min_t(s64, src_reg->smin_value, 0);
>>> +               dst_reg->smax_value = max_t(s64, src_reg->smin_value, 0);
>>> +               return;
>>> +       }
>>> +
>>> +       /* Special case: dst_reg is known and src_reg is in range [-1, 0] */
>>> +       if (dst_known &&
>>> +               src_reg->smin_value == -1 && src_reg->smax_value == 0 &&
>>> +               src_reg->s32_min_value == -1 && src_reg->s32_max_value == 0) {
>>> +               dst_reg->smin_value = min_t(s64, dst_reg->smin_value, 0);
>>> +               dst_reg->smax_value = max_t(s64, dst_reg->smin_value, 0);
>>> +               return;
>>> +       }
>>> +
>>>         /* Safe to set s64 bounds by casting u64 result into s64 when u64
>>>          * doesn't cross sign boundary. Otherwise set s64 bounds to unbounded.
>>>          */
>>>
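If I am reading the new special cases right, they give exactly the bounds we
want for this test: at "8: (57) r0 &= -13" the dst range is [-1, 0] and the
src is the known constant -13, so the first branch in scalar_min_max_and()
yields smin = min(-13, 0) = -13 and smax = max(-13, 0) = 0, i.e. r0 ends up
in [-13, 0], within the allowed LSM return range [-4095, 0].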
>>>>>
>>>>>> In my opinion the approach taken by this patch is sub-optimal:
>>>>>> - 512 iterations is too much;
>>>>>> - this does not cover all code that could be generated by the above
>>>>>>    mentioned LLVM transformation
>>>>>>    (e.g. second 'and' operand could be 1 << 16).
>>>>>>
>>>>>> Instead, I suggest to make a special case for source or dst register
>>>>>> of '&=' operation being in range [-1,0].
>>>>>> Meaning that one of the '&=' operands is either:
>>>>>> - all ones, in which case the counterpart is the result of the operation;
>>>>>> - all zeros, in which case zero is the result of the operation;
>>>>>> - derive MIN and MAX values based on above two observations.
>>>>>>
>>>>>
>>>>> Totally agree, I'll cook a new patch as you suggested.
>>>>>
>>>>>> [1] https://github.com/llvm/llvm-project/blob/4523a267829c807f3fc8fab8e5e9613985a51565/llvm/lib/CodeGen/SelectionDAG/DAGCombiner.cpp#L5391
>>>>>>
>>>>>> Best regards,
>>>>>> Eduard
>>>>>
>>>>>
>>>>
>>>
>

Thread overview: 31+ messages
2024-04-11 12:27 [PATCH bpf-next v3 00/11] Add check for bpf lsm return value Xu Kuohai
2024-04-11 12:27 ` [PATCH bpf-next v3 01/11] bpf, lsm: Annotate lsm hook return value range Xu Kuohai
2024-04-11 12:27 ` [PATCH bpf-next v3 02/11] bpf, lsm: Add helper to read " Xu Kuohai
2024-04-11 12:27 ` [PATCH bpf-next v3 03/11] bpf, lsm: Check bpf lsm hook return values in verifier Xu Kuohai
2024-04-13 11:44   ` Eduard Zingerman
2024-04-11 12:27 ` [PATCH bpf-next v3 04/11] bpf, lsm: Add bpf lsm disabled hook list Xu Kuohai
2024-04-11 12:27 ` [PATCH bpf-next v3 05/11] bpf: Avoid progs for different hooks calling each other with tail call Xu Kuohai
2024-04-11 12:27 ` [PATCH bpf-next v3 06/11] bpf: Fix compare error in function retval_range_within Xu Kuohai
2024-04-12  8:53   ` Shung-Hsi Yu
2024-04-25 23:41   ` Andrii Nakryiko
2024-04-26  8:08     ` Xu Kuohai
2024-04-11 12:27 ` [PATCH bpf-next v3 07/11] bpf: Fix a false rejection caused by AND operation Xu Kuohai
2024-04-19 23:00   ` Eduard Zingerman
2024-04-20  8:33     ` Xu Kuohai
2024-04-23 21:55       ` Yonghong Song
2024-04-24  2:25         ` Xu Kuohai
2024-04-24 22:06           ` Yonghong Song
2024-04-25  2:42             ` Xu Kuohai
2024-04-25 16:28               ` Yonghong Song [this message]
2024-04-26  7:43                 ` Xu Kuohai
2024-04-26 20:36           ` Andrii Nakryiko
2024-04-28 15:15             ` Xu Kuohai
2024-04-29 20:58               ` Andrii Nakryiko
2024-04-29 22:18                 ` Eduard Zingerman
2024-04-30  3:56                   ` Xu Kuohai
2024-04-30  3:54                 ` Xu Kuohai
2024-04-29 21:56               ` Eduard Zingerman
2024-04-11 12:27 ` [PATCH bpf-next v3 08/11] selftests/bpf: Avoid load failure for token_lsm.c Xu Kuohai
2024-04-11 12:27 ` [PATCH bpf-next v3 09/11] selftests/bpf: Add return value checks for failed tests Xu Kuohai
2024-04-11 12:27 ` [PATCH bpf-next v3 10/11] selftests/bpf: Add test for lsm tail call Xu Kuohai
2024-04-11 12:27 ` [PATCH bpf-next v3 11/11] selftests/bpf: Add verifier tests for bpf lsm Xu Kuohai
