* [PATCH net-next V2 0/2] net: use kmem_cache_free_bulk in kfree_skb_list
@ 2023-01-13 13:51 Jesper Dangaard Brouer
  2023-01-13 13:51 ` [PATCH net-next V2 1/2] net: fix call location in kfree_skb_list_reason Jesper Dangaard Brouer
                   ` (2 more replies)
  0 siblings, 3 replies; 15+ messages in thread
From: Jesper Dangaard Brouer @ 2023-01-13 13:51 UTC (permalink / raw)
  To: netdev
  Cc: Jesper Dangaard Brouer, Jakub Kicinski, David S. Miller,
	edumazet, pabeni

The kfree_skb_list function walks the SKB list (via skb->next) and
frees the SKBs individually via the SLUB/SLAB allocator (kmem_cache).
It is more efficient to bulk free them via the kmem_cache_free_bulk API.
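
For illustration, the slab API difference this series exploits (a minimal
kernel-style sketch; the cache/objs/nr names are illustrative, not taken
from the patches):

  #include <linux/slab.h>

  /* One slab fastpath round-trip per object. */
  static void free_one_by_one(struct kmem_cache *cache, void **objs, size_t nr)
  {
  	size_t i;

  	for (i = 0; i < nr; i++)
  		kmem_cache_free(cache, objs[i]);
  }

  /* One call frees the whole array; SLUB can group objects belonging
   * to the same slab and amortize the per-call overhead across them.
   */
  static void free_array_in_bulk(struct kmem_cache *cache, void **objs, size_t nr)
  {
  	kmem_cache_free_bulk(cache, nr, objs);
  }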

Netstack NAPI fastpath already uses kmem_cache bulk alloc and free
APIs for SKBs.

The kfree_skb_list call got an interesting optimization in commit
520ac30f4551 ("net_sched: drop packets after root qdisc lock is
released"), which can create a list of SKBs "to_free", e.g. when qdisc
enqueue fails or deliberately chooses to drop. It isn't a normal data
fastpath, but the situation will likely occur when the system/qdisc is
under heavy load, thus it makes sense to use a faster API for freeing
the SKBs.

E.g. the qdisc fq_codel (often the distro default) will drop batches of
packets from the fattest elephant flow, capped at 64 packets by default
(adjustable via the tc parameter drop_batch).
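
For example, the batch size could be tuned like this (hypothetical device
name; assumes an iproute2 build with fq_codel drop_batch support):

  tc qdisc replace dev eth0 root fq_codel drop_batch 32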

Performance measurements are documented in [1]:
 [1] https://github.com/xdp-project/xdp-project/blob/master/areas/mem/kfree_skb_list01.org

---

Jesper Dangaard Brouer (2):
      net: fix call location in kfree_skb_list_reason
      net: kfree_skb_list use kmem_cache_free_bulk


 net/core/skbuff.c | 68 +++++++++++++++++++++++++++++++++++++++--------
 1 file changed, 57 insertions(+), 11 deletions(-)

--



* [PATCH net-next V2 1/2] net: fix call location in kfree_skb_list_reason
  2023-01-13 13:51 [PATCH net-next V2 0/2] net: use kmem_cache_free_bulk in kfree_skb_list Jesper Dangaard Brouer
@ 2023-01-13 13:51 ` Jesper Dangaard Brouer
  2023-01-13 13:52 ` [PATCH net-next V2 2/2] net: kfree_skb_list use kmem_cache_free_bulk Jesper Dangaard Brouer
  2023-01-17  9:40 ` [PATCH net-next V2 0/2] net: use kmem_cache_free_bulk in kfree_skb_list patchwork-bot+netdevbpf
  2 siblings, 0 replies; 15+ messages in thread
From: Jesper Dangaard Brouer @ 2023-01-13 13:51 UTC (permalink / raw)
  To: netdev
  Cc: Jesper Dangaard Brouer, Jakub Kicinski, David S. Miller,
	edumazet, pabeni

The SKB drop reason code uses __builtin_return_address(0) to provide the
call "location" to the trace_kfree_skb() tracepoint (skb:kfree_skb).

To keep this stable across compilers, kfree_skb_reason() is annotated
with __fix_address (noinline __noclone), as introduced in commit
c205cc7534a9 ("net: skb: prevent the split of kfree_skb_reason() by gcc").

The function kfree_skb_list_reason() invokes kfree_skb_reason(), which
causes the __builtin_return_address(0) "location" to report the
unexpected address of kfree_skb_list_reason itself.

Example output from 'perf script':
 kpktgend_0  1337 [000]    81.002597: skb:kfree_skb: skbaddr=0xffff888144824700 protocol=2048 location=kfree_skb_list_reason+0x1e reason: QDISC_DROP

This patch creates an __always_inline __kfree_skb_reason() helper that
is called from both kfree_skb_reason() and kfree_skb_list_reason().
Suggestions for solutions that share the code better are welcome.

As preparation for the next patch, the __kfree_skb() invocation is
moved out of this helper function.

Reviewed-by: Saeed Mahameed <saeed@kernel.org>
Signed-off-by: Jesper Dangaard Brouer <brouer@redhat.com>
---
 net/core/skbuff.c |   34 +++++++++++++++++++++-------------
 1 file changed, 21 insertions(+), 13 deletions(-)

diff --git a/net/core/skbuff.c b/net/core/skbuff.c
index 4a0eb5593275..007a5fbe284b 100644
--- a/net/core/skbuff.c
+++ b/net/core/skbuff.c
@@ -932,6 +932,21 @@ void __kfree_skb(struct sk_buff *skb)
 }
 EXPORT_SYMBOL(__kfree_skb);
 
+static __always_inline
+bool __kfree_skb_reason(struct sk_buff *skb, enum skb_drop_reason reason)
+{
+	if (unlikely(!skb_unref(skb)))
+		return false;
+
+	DEBUG_NET_WARN_ON_ONCE(reason <= 0 || reason >= SKB_DROP_REASON_MAX);
+
+	if (reason == SKB_CONSUMED)
+		trace_consume_skb(skb);
+	else
+		trace_kfree_skb(skb, __builtin_return_address(0), reason);
+	return true;
+}
+
 /**
  *	kfree_skb_reason - free an sk_buff with special reason
  *	@skb: buffer to free
@@ -944,26 +959,19 @@ EXPORT_SYMBOL(__kfree_skb);
 void __fix_address
 kfree_skb_reason(struct sk_buff *skb, enum skb_drop_reason reason)
 {
-	if (unlikely(!skb_unref(skb)))
-		return;
-
-	DEBUG_NET_WARN_ON_ONCE(reason <= 0 || reason >= SKB_DROP_REASON_MAX);
-
-	if (reason == SKB_CONSUMED)
-		trace_consume_skb(skb);
-	else
-		trace_kfree_skb(skb, __builtin_return_address(0), reason);
-	__kfree_skb(skb);
+	if (__kfree_skb_reason(skb, reason))
+		__kfree_skb(skb);
 }
 EXPORT_SYMBOL(kfree_skb_reason);
 
-void kfree_skb_list_reason(struct sk_buff *segs,
-			   enum skb_drop_reason reason)
+void __fix_address
+kfree_skb_list_reason(struct sk_buff *segs, enum skb_drop_reason reason)
 {
 	while (segs) {
 		struct sk_buff *next = segs->next;
 
-		kfree_skb_reason(segs, reason);
+		if (__kfree_skb_reason(segs, reason))
+			__kfree_skb(segs);
 		segs = next;
 	}
 }




* [PATCH net-next V2 2/2] net: kfree_skb_list use kmem_cache_free_bulk
  2023-01-13 13:51 [PATCH net-next V2 0/2] net: use kmem_cache_free_bulk in kfree_skb_list Jesper Dangaard Brouer
  2023-01-13 13:51 ` [PATCH net-next V2 1/2] net: fix call location in kfree_skb_list_reason Jesper Dangaard Brouer
@ 2023-01-13 13:52 ` Jesper Dangaard Brouer
  2023-01-18 16:05   ` Eric Dumazet
  2023-01-18 21:37   ` Jesper Dangaard Brouer
  2023-01-17  9:40 ` [PATCH net-next V2 0/2] net: use kmem_cache_free_bulk in kfree_skb_list patchwork-bot+netdevbpf
  2 siblings, 2 replies; 15+ messages in thread
From: Jesper Dangaard Brouer @ 2023-01-13 13:52 UTC (permalink / raw)
  To: netdev
  Cc: Jesper Dangaard Brouer, Jakub Kicinski, David S. Miller,
	edumazet, pabeni

The kfree_skb_list function walks the SKB list (via skb->next) and
frees the SKBs individually via the SLUB/SLAB allocator (kmem_cache).
It is more efficient to bulk free them via the kmem_cache_free_bulk API.

This patch creates a stack-local array of SKBs to bulk free while
walking the list. The bulk array size is limited to 16 SKBs to trade off
stack usage against efficiency. The SLUB kmem_cache "skbuff_head_cache"
uses an objsize of 256 bytes, usually in an order-1 page of 8192 bytes,
i.e. 32 objects per slab (this can vary across archs and due to SLUB
sharing). Thus, for SLUB the optimal bulk free case is 32 objects
belonging to the same slab, but at runtime this is unlikely to occur.
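
The slab math behind those numbers, for reference:

  8192 bytes (order-1 page) / 256 bytes (objsize) = 32 objects per slab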

The expected gain from using the kmem_cache bulk alloc and free APIs
has been assessed via a microbenchmark kernel module [1].

The module 'slab_bulk_test01' results at bulk size 16 elements:
 kmem-in-loop Per elem: 109 cycles(tsc) 30.532 ns (step:16)
 kmem-bulk    Per elem: 64 cycles(tsc) 17.905 ns (step:16)

That is roughly a 1.7x per-element speedup for bulk freeing.

A more detailed description of the benchmarks is available in [2].

[1] https://github.com/netoptimizer/prototype-kernel/tree/master/kernel/mm
[2] https://github.com/xdp-project/xdp-project/blob/master/areas/mem/kfree_skb_list01.org

V2: rename function to kfree_skb_add_bulk.

Reviewed-by: Saeed Mahameed <saeed@kernel.org>
Signed-off-by: Jesper Dangaard Brouer <brouer@redhat.com>
---
 net/core/skbuff.c |   40 +++++++++++++++++++++++++++++++++++++++-
 1 file changed, 39 insertions(+), 1 deletion(-)

diff --git a/net/core/skbuff.c b/net/core/skbuff.c
index 007a5fbe284b..79c9e795a964 100644
--- a/net/core/skbuff.c
+++ b/net/core/skbuff.c
@@ -964,16 +964,54 @@ kfree_skb_reason(struct sk_buff *skb, enum skb_drop_reason reason)
 }
 EXPORT_SYMBOL(kfree_skb_reason);
 
+#define KFREE_SKB_BULK_SIZE	16
+
+struct skb_free_array {
+	unsigned int skb_count;
+	void *skb_array[KFREE_SKB_BULK_SIZE];
+};
+
+static void kfree_skb_add_bulk(struct sk_buff *skb,
+			       struct skb_free_array *sa,
+			       enum skb_drop_reason reason)
+{
+	/* if SKB is a clone, don't handle this case */
+	if (unlikely(skb->fclone != SKB_FCLONE_UNAVAILABLE)) {
+		__kfree_skb(skb);
+		return;
+	}
+
+	skb_release_all(skb, reason);
+	sa->skb_array[sa->skb_count++] = skb;
+
+	if (unlikely(sa->skb_count == KFREE_SKB_BULK_SIZE)) {
+		kmem_cache_free_bulk(skbuff_head_cache, KFREE_SKB_BULK_SIZE,
+				     sa->skb_array);
+		sa->skb_count = 0;
+	}
+}
+
 void __fix_address
 kfree_skb_list_reason(struct sk_buff *segs, enum skb_drop_reason reason)
 {
+	struct skb_free_array sa;
+
+	sa.skb_count = 0;
+
 	while (segs) {
 		struct sk_buff *next = segs->next;
 
+		skb_mark_not_on_list(segs);
+
 		if (__kfree_skb_reason(segs, reason))
-			__kfree_skb(segs);
+			kfree_skb_add_bulk(segs, &sa, reason);
+
 		segs = next;
 	}
+
+	if (sa.skb_count)
+		kmem_cache_free_bulk(skbuff_head_cache, sa.skb_count,
+				     sa.skb_array);
 }
 EXPORT_SYMBOL(kfree_skb_list_reason);
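
For context, the qdisc drop path that motivates this series hands over
its "to_free" list roughly like this (a paraphrased sketch of
__dev_xmit_skb() in net/core/dev.c from this era, not part of the patch;
details may differ):

	/* qdisc enqueue may have built a list of SKBs to drop */
	if (unlikely(to_free))
		kfree_skb_list_reason(to_free, SKB_DROP_REASON_QDISC_DROP);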
 




* Re: [PATCH net-next V2 0/2] net: use kmem_cache_free_bulk in kfree_skb_list
  2023-01-13 13:51 [PATCH net-next V2 0/2] net: use kmem_cache_free_bulk in kfree_skb_list Jesper Dangaard Brouer
  2023-01-13 13:51 ` [PATCH net-next V2 1/2] net: fix call location in kfree_skb_list_reason Jesper Dangaard Brouer
  2023-01-13 13:52 ` [PATCH net-next V2 2/2] net: kfree_skb_list use kmem_cache_free_bulk Jesper Dangaard Brouer
@ 2023-01-17  9:40 ` patchwork-bot+netdevbpf
  2 siblings, 0 replies; 15+ messages in thread
From: patchwork-bot+netdevbpf @ 2023-01-17  9:40 UTC (permalink / raw)
  To: Jesper Dangaard Brouer; +Cc: netdev, kuba, davem, edumazet, pabeni

Hello:

This series was applied to netdev/net-next.git (master)
by Paolo Abeni <pabeni@redhat.com>:

On Fri, 13 Jan 2023 14:51:54 +0100 you wrote:
> The kfree_skb_list function walks the SKB list (via skb->next) and
> frees the SKBs individually via the SLUB/SLAB allocator (kmem_cache).
> It is more efficient to bulk free them via the kmem_cache_free_bulk API.
> 
> Netstack NAPI fastpath already uses kmem_cache bulk alloc and free
> APIs for SKBs.
> 
> [...]

Here is the summary with links:
  - [net-next,V2,1/2] net: fix call location in kfree_skb_list_reason
    https://git.kernel.org/netdev/net-next/c/a4650da2a2d6
  - [net-next,V2,2/2] net: kfree_skb_list use kmem_cache_free_bulk
    https://git.kernel.org/netdev/net-next/c/eedade12f4cb

You are awesome, thank you!
-- 
Deet-doot-dot, I am a bot.
https://korg.docs.kernel.org/patchwork/pwbot.html




* Re: [PATCH net-next V2 2/2] net: kfree_skb_list use kmem_cache_free_bulk
  2023-01-13 13:52 ` [PATCH net-next V2 2/2] net: kfree_skb_list use kmem_cache_free_bulk Jesper Dangaard Brouer
@ 2023-01-18 16:05   ` Eric Dumazet
  2023-01-18 16:42     ` Jesper Dangaard Brouer
  2023-01-18 21:37   ` Jesper Dangaard Brouer
  1 sibling, 1 reply; 15+ messages in thread
From: Eric Dumazet @ 2023-01-18 16:05 UTC (permalink / raw)
  To: Jesper Dangaard Brouer; +Cc: netdev, Jakub Kicinski, David S. Miller, pabeni

On Fri, Jan 13, 2023 at 2:52 PM Jesper Dangaard Brouer
<brouer@redhat.com> wrote:
>
> The kfree_skb_list function walks the SKB list (via skb->next) and
> frees the SKBs individually via the SLUB/SLAB allocator (kmem_cache).
> It is more efficient to bulk free them via the kmem_cache_free_bulk API.
>
> [...]

According to syzbot, this patch causes kernel panics in the IP fragmentation logic.

Can you double-check that there is no obvious bug?


* Re: [PATCH net-next V2 2/2] net: kfree_skb_list use kmem_cache_free_bulk
  2023-01-18 16:05   ` Eric Dumazet
@ 2023-01-18 16:42     ` Jesper Dangaard Brouer
  2023-01-18 16:42       ` Eric Dumazet
  0 siblings, 1 reply; 15+ messages in thread
From: Jesper Dangaard Brouer @ 2023-01-18 16:42 UTC (permalink / raw)
  To: Eric Dumazet; +Cc: brouer, netdev, Jakub Kicinski, David S. Miller, pabeni


On 18/01/2023 17.05, Eric Dumazet wrote:
> On Fri, Jan 13, 2023 at 2:52 PM Jesper Dangaard Brouer
> <brouer@redhat.com> wrote:
>>
>> The kfree_skb_list function walks the SKB list (via skb->next) and
>> frees the SKBs individually via the SLUB/SLAB allocator (kmem_cache).
>> It is more efficient to bulk free them via the kmem_cache_free_bulk API.
>>
>> [...]
> 
> According to syzbot, this patch causes kernel panics in the IP fragmentation logic.
> 
> Can you double-check that there is no obvious bug?

Do you have a link to the syzbot issue?

--Jesper



* Re: [PATCH net-next V2 2/2] net: kfree_skb_list use kmem_cache_free_bulk
  2023-01-18 16:42     ` Jesper Dangaard Brouer
@ 2023-01-18 16:42       ` Eric Dumazet
  0 siblings, 0 replies; 15+ messages in thread
From: Eric Dumazet @ 2023-01-18 16:42 UTC (permalink / raw)
  To: Jesper Dangaard Brouer
  Cc: brouer, netdev, Jakub Kicinski, David S. Miller, pabeni

On Wed, Jan 18, 2023 at 5:42 PM Jesper Dangaard Brouer
<jbrouer@redhat.com> wrote:
>
>
> On 18/01/2023 17.05, Eric Dumazet wrote:
> > On Fri, Jan 13, 2023 at 2:52 PM Jesper Dangaard Brouer
> > <brouer@redhat.com> wrote:
> >>
> >> The kfree_skb_list function walks the SKB list (via skb->next) and
> >> frees the SKBs individually via the SLUB/SLAB allocator (kmem_cache).
> >> It is more efficient to bulk free them via the kmem_cache_free_bulk API.
> >>
> >> [...]
> >
> > According to syzbot, this patch causes kernel panics in the IP fragmentation logic.
> >
> > Can you double-check that there is no obvious bug?
>
> Do you have a link to the syzbot issue?

Not yet, I will release it, with a repro.


* Re: [PATCH net-next V2 2/2] net: kfree_skb_list use kmem_cache_free_bulk
  2023-01-13 13:52 ` [PATCH net-next V2 2/2] net: kfree_skb_list use kmem_cache_free_bulk Jesper Dangaard Brouer
  2023-01-18 16:05   ` Eric Dumazet
@ 2023-01-18 21:37   ` Jesper Dangaard Brouer
  2023-01-19  2:26     ` Jakub Kicinski
  2023-01-19  2:39     ` Yunsheng Lin
  1 sibling, 2 replies; 15+ messages in thread
From: Jesper Dangaard Brouer @ 2023-01-18 21:37 UTC (permalink / raw)
  To: netdev; +Cc: brouer, Jakub Kicinski, David S. Miller, edumazet, pabeni

(related to syzbot issue[1])

On 13/01/2023 14.52, Jesper Dangaard Brouer wrote:
> The kfree_skb_list function walks the SKB list (via skb->next) and
> frees the SKBs individually via the SLUB/SLAB allocator (kmem_cache).
> It is more efficient to bulk free them via the kmem_cache_free_bulk API.
>
> [...]
>
> diff --git a/net/core/skbuff.c b/net/core/skbuff.c
> index 007a5fbe284b..79c9e795a964 100644
> --- a/net/core/skbuff.c
> +++ b/net/core/skbuff.c
> @@ -964,16 +964,54 @@ kfree_skb_reason(struct sk_buff *skb, enum skb_drop_reason reason)
>   }
>   EXPORT_SYMBOL(kfree_skb_reason);
>   
> +#define KFREE_SKB_BULK_SIZE	16
> +
> +struct skb_free_array {
> +	unsigned int skb_count;
> +	void *skb_array[KFREE_SKB_BULK_SIZE];
> +};
> +
> +static void kfree_skb_add_bulk(struct sk_buff *skb,
> +			       struct skb_free_array *sa,
> +			       enum skb_drop_reason reason)
> +{
> +	/* if SKB is a clone, don't handle this case */
> +	if (unlikely(skb->fclone != SKB_FCLONE_UNAVAILABLE)) {
> +		__kfree_skb(skb);
> +		return;
> +	}
> +
> +	skb_release_all(skb, reason);
> +	sa->skb_array[sa->skb_count++] = skb;
> +
> +	if (unlikely(sa->skb_count == KFREE_SKB_BULK_SIZE)) {
> +		kmem_cache_free_bulk(skbuff_head_cache, KFREE_SKB_BULK_SIZE,
> +				     sa->skb_array);
> +		sa->skb_count = 0;
> +	}
> +}
> +
>   void __fix_address
>   kfree_skb_list_reason(struct sk_buff *segs, enum skb_drop_reason reason)
>   {
> +	struct skb_free_array sa;
> +
> +	sa.skb_count = 0;
> +
>   	while (segs) {
>   		struct sk_buff *next = segs->next;
>   
> +		skb_mark_not_on_list(segs);

The syzbot[1] bug goes away if I remove this skb_mark_not_on_list().

I don't understand why I cannot clear skb->next here?

[1] https://lore.kernel.org/all/000000000000d58eae05f28ca51f@google.com/

>   		if (__kfree_skb_reason(segs, reason))
> -			__kfree_skb(segs);
> +			kfree_skb_add_bulk(segs, &sa, reason);
> +
>   		segs = next;
>   	}
> +
> +	if (sa.skb_count)
> +		kmem_cache_free_bulk(skbuff_head_cache, sa.skb_count,
> +				     sa.skb_array);
>   }
>   EXPORT_SYMBOL(kfree_skb_list_reason);
>   
> 
> 



* Re: [PATCH net-next V2 2/2] net: kfree_skb_list use kmem_cache_free_bulk
  2023-01-18 21:37   ` Jesper Dangaard Brouer
@ 2023-01-19  2:26     ` Jakub Kicinski
  2023-01-19 10:17       ` Jesper Dangaard Brouer
  2023-01-19  2:39     ` Yunsheng Lin
  1 sibling, 1 reply; 15+ messages in thread
From: Jakub Kicinski @ 2023-01-19  2:26 UTC (permalink / raw)
  To: Jesper Dangaard Brouer; +Cc: netdev, brouer, David S. Miller, edumazet, pabeni

On Wed, 18 Jan 2023 22:37:47 +0100 Jesper Dangaard Brouer wrote:
> > +		skb_mark_not_on_list(segs);  
> 
> The syzbot[1] bug goes away if I remove this skb_mark_not_on_list().  
> 
> I don't understand why I cannot clear skb->next here?

Some of the skbs on the list are not private?
IOW we should only unlink them if skb_unref().


* Re: [PATCH net-next V2 2/2] net: kfree_skb_list use kmem_cache_free_bulk
  2023-01-18 21:37   ` Jesper Dangaard Brouer
  2023-01-19  2:26     ` Jakub Kicinski
@ 2023-01-19  2:39     ` Yunsheng Lin
  1 sibling, 0 replies; 15+ messages in thread
From: Yunsheng Lin @ 2023-01-19  2:39 UTC (permalink / raw)
  To: Jesper Dangaard Brouer, netdev
  Cc: brouer, Jakub Kicinski, David S. Miller, edumazet, pabeni

On 2023/1/19 5:37, Jesper Dangaard Brouer wrote:
> (related to syzbot issue[1])
> 
> On 13/01/2023 14.52, Jesper Dangaard Brouer wrote:
>> The kfree_skb_list function walks the SKB list (via skb->next) and
>> frees the SKBs individually via the SLUB/SLAB allocator (kmem_cache).
>> It is more efficient to bulk free them via the kmem_cache_free_bulk API.
>>
>> [...]
>>
>>       while (segs) {
>>           struct sk_buff *next = segs->next;
>> +        skb_mark_not_on_list(segs);
> 
> The syzbot[1] bug goes away if I remove this skb_mark_not_on_list().
> 
> I don't understand why I cannot clear skb->next here?

Clearing skb->next seems unrelated; it may just increase the probability
of reproducing the problem.

Because it seems kfree_skb_list_reason() is also used to release the skbs
on shinfo->frag_list, which should go through the skb_unref() check, and
this patch seems to skip that check for the skbs on shinfo->frag_list.
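
For reference, the skb_unref() check in question looks like this
(paraphrased from include/linux/skbuff.h of this era):

  static inline bool skb_unref(struct sk_buff *skb)
  {
  	if (unlikely(!skb))
  		return false;
  	if (likely(refcount_read(&skb->users) == 1))
  		smp_rmb();
  	else if (likely(!refcount_dec_and_test(&skb->users)))
  		return false;

  	return true;
  }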

> 
> [1] https://lore.kernel.org/all/000000000000d58eae05f28ca51f@google.com/
> 
>> [...]


* Re: [PATCH net-next V2 2/2] net: kfree_skb_list use kmem_cache_free_bulk
  2023-01-19  2:26     ` Jakub Kicinski
@ 2023-01-19 10:17       ` Jesper Dangaard Brouer
  2023-01-19 10:28         ` Eric Dumazet
  0 siblings, 1 reply; 15+ messages in thread
From: Jesper Dangaard Brouer @ 2023-01-19 10:17 UTC (permalink / raw)
  To: Jakub Kicinski, Jesper Dangaard Brouer
  Cc: brouer, netdev, David S. Miller, edumazet, pabeni



On 19/01/2023 03.26, Jakub Kicinski wrote:
> On Wed, 18 Jan 2023 22:37:47 +0100 Jesper Dangaard Brouer wrote:
>>> +		skb_mark_not_on_list(segs);
>>
>> The syzbot[1] bug goes away if I remove this skb_mark_not_on_list().
>>
>> I don't understand why I cannot clear skb->next here?
> 
> Some of the skbs on the list are not private?
> IOW we should only unlink them if skb_unref().

Yes, you are right.

The skb_mark_not_on_list() should only be called if __kfree_skb_reason()
returns true, meaning the SKB is ready to be freed (as it calls/checks
skb_unref()).

I will send a proper fix patch shortly... after syzbot runs a test on it.
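
A minimal sketch of that approach (hypothetical until the actual fix
patch lands; based on the loop from patch 2):

	while (segs) {
		struct sk_buff *next = segs->next;

		if (__kfree_skb_reason(segs, reason)) {
			/* we own the SKB now, safe to unlink */
			skb_mark_not_on_list(segs);
			kfree_skb_add_bulk(segs, &sa, reason);
		}
		segs = next;
	}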

--Jesper



* Re: [PATCH net-next V2 2/2] net: kfree_skb_list use kmem_cache_free_bulk
  2023-01-19 10:17       ` Jesper Dangaard Brouer
@ 2023-01-19 10:28         ` Eric Dumazet
  2023-01-19 11:22           ` Jesper Dangaard Brouer
  0 siblings, 1 reply; 15+ messages in thread
From: Eric Dumazet @ 2023-01-19 10:28 UTC (permalink / raw)
  To: Jesper Dangaard Brouer
  Cc: Jakub Kicinski, brouer, netdev, David S. Miller, pabeni

On Thu, Jan 19, 2023 at 11:18 AM Jesper Dangaard Brouer
<jbrouer@redhat.com> wrote:
>
>
>
> On 19/01/2023 03.26, Jakub Kicinski wrote:
> > On Wed, 18 Jan 2023 22:37:47 +0100 Jesper Dangaard Brouer wrote:
> >>> +           skb_mark_not_on_list(segs);
> >>
> >> The syzbot[1] bug goes away if I remove this skb_mark_not_on_list().
> >>
> >> I don't understand why I cannot clear skb->next here?
> >
> > Some of the skbs on the list are not private?
> > IOW we should only unlink them if skb_unref().
>
> Yes, you are right.
>
> The skb_mark_not_on_list() should only be called if __kfree_skb_reason()
> returns true, meaning the SKB is ready to be freed (as it calls/checks
> skb_unref()).


This was already the case before your changes.

skb->next/prev cannot be shared by multiple users.

One skb can be put on a single list by definition.

Whoever calls kfree_skb_list(list) owns all the skb->next|prev pointers
found in the list.

So you can mangle skb->next as you like, even if the unref() is telling
that someone else has a reference on the skb.

>
> I will send a proper fix patch shortly... after syzbot do a test on it.
>
> --Jesper
>


* Re: [PATCH net-next V2 2/2] net: kfree_skb_list use kmem_cache_free_bulk
  2023-01-19 10:28         ` Eric Dumazet
@ 2023-01-19 11:22           ` Jesper Dangaard Brouer
  2023-01-19 11:46             ` Eric Dumazet
  0 siblings, 1 reply; 15+ messages in thread
From: Jesper Dangaard Brouer @ 2023-01-19 11:22 UTC (permalink / raw)
  To: Eric Dumazet, Jesper Dangaard Brouer
  Cc: brouer, Jakub Kicinski, netdev, David S. Miller, pabeni



On 19/01/2023 11.28, Eric Dumazet wrote:
> On Thu, Jan 19, 2023 at 11:18 AM Jesper Dangaard Brouer
> <jbrouer@redhat.com> wrote:
>>
>>
>>
>> On 19/01/2023 03.26, Jakub Kicinski wrote:
>>> On Wed, 18 Jan 2023 22:37:47 +0100 Jesper Dangaard Brouer wrote:
>>>>> +           skb_mark_not_on_list(segs);
>>>>
>>>> The syzbot[1] bug goes away if I remove this skb_mark_not_on_list().
>>>>
>>>> I don't understand why I cannot clear skb->next here?
>>>
>>> Some of the skbs on the list are not private?
>>> IOW we should only unlink them if skb_unref().
>>
>> Yes, you are right.
>>
>> The skb_mark_not_on_list() should only be called if __kfree_skb_reason()
>> returns true, meaning the SKB is ready to be freed (as it calls/checks
>> skb_unref()).
> 
> 
> This was already the case before your changes.
> 
> skb->next/prev cannot be shared by multiple users.
> 
> One skb can be put on a single list by definition.
> 
> Whoever calls kfree_skb_list(list) owns all the skb->next|prev pointers
> found in the list.
> 
> So you can mangle skb->next as you like, even if the unref() is telling
> that someone else has a reference on the skb.

Then why does the bug go away if I remove the skb_mark_not_on_list() call
then?

>>
>> I will send a proper fix patch shortly... after syzbot do a test on it.
>>

I've sent a patch for syzbot that only calls skb_mark_not_on_list() when 
skb_unref() and __kfree_skb_reason() "permit" this.
I tested it locally with the reproducer and it also fixes/"removes" the bug.

--Jesper



* Re: [PATCH net-next V2 2/2] net: kfree_skb_list use kmem_cache_free_bulk
  2023-01-19 11:22           ` Jesper Dangaard Brouer
@ 2023-01-19 11:46             ` Eric Dumazet
  2023-01-19 13:18               ` Jesper Dangaard Brouer
  0 siblings, 1 reply; 15+ messages in thread
From: Eric Dumazet @ 2023-01-19 11:46 UTC (permalink / raw)
  To: Jesper Dangaard Brouer
  Cc: brouer, Jakub Kicinski, netdev, David S. Miller, pabeni

On Thu, Jan 19, 2023 at 12:22 PM Jesper Dangaard Brouer
<jbrouer@redhat.com> wrote:
>
>
>
> On 19/01/2023 11.28, Eric Dumazet wrote:
> > On Thu, Jan 19, 2023 at 11:18 AM Jesper Dangaard Brouer
> > <jbrouer@redhat.com> wrote:
> > [...]
>
> Then why does the bug go away if I remove the skb_mark_not_on_list() call
> then?
>

Some side effects.

This _particular_ repro uses a specific pattern that might be defeated
by a small change.
(just working around another bug)

Instead of setting skb->next to NULL, try to set it to

skb->next = (struct sk_buff *)0x800;

This might show a different pattern.

> >>
> >> I will send a proper fix patch shortly... after syzbot runs a test on it.
> >>
>
> I've sent a patch for syzbot that only calls skb_mark_not_on_list() when
> skb_unref() and __kfree_skb_reason() "permit" this.
> I tested it locally with the reproducer and it also fixes/"removes" the bug.

This does not mean we will accept a patch with no clear explanation
other than "this removes a syzbot bug, so this must be good"

Make sure to give precise details on _why_ this is needed or not.

Again, the user of kfree_skb_list(list) _owns_ skb->next for sure.
If you think this assertion is not true, we are in big trouble.


* Re: [PATCH net-next V2 2/2] net: kfree_skb_list use kmem_cache_free_bulk
  2023-01-19 11:46             ` Eric Dumazet
@ 2023-01-19 13:18               ` Jesper Dangaard Brouer
  0 siblings, 0 replies; 15+ messages in thread
From: Jesper Dangaard Brouer @ 2023-01-19 13:18 UTC (permalink / raw)
  To: Eric Dumazet, Jesper Dangaard Brouer
  Cc: brouer, Jakub Kicinski, netdev, David S. Miller, pabeni


On 19/01/2023 12.46, Eric Dumazet wrote:
> On Thu, Jan 19, 2023 at 12:22 PM Jesper Dangaard Brouer
> <jbrouer@redhat.com> wrote:
>> [...]
>>
>> Then why does the bug go way if I remove the skb_mark_not_on_list() call
>> then?
>>
> 
> Some side effects.
> 
> This _particular_ repro uses a specific pattern that might be defeated
> by a small change.
> (just working around another bug)
> 
> Instead of setting skb->next to NULL, try to set it to
> 
> skb->next = (struct sk_buff *)0x800;
> 
> This might show a different pattern.

Nice trick, I'll use this next time.

I modified the code and added a kfree_skb tracepoint with a known reason
(PROTO_MEM) to capture the callstack; see the end of this email, which
shows that the multicast code path is involved:

  trace_kfree_skb(segs, __builtin_return_address(0),
		  SKB_DROP_REASON_PROTO_MEM);

>>>>
>>>> I will send a proper fix patch shortly... after syzbot runs a test on it.
>>>>
>>
>> I've sent a patch for syzbot that only calls skb_mark_not_on_list() when
>> skb_unref() and __kfree_skb_reason() "permit" this.
>> I tested it locally with the reproducer and it also fixes/"removes" the bug.
> 
> This does not mean we will accept a patch with no clear explanation
> other than "this removes a syzbot bug, so this must be good"
> 
> Make sure to give precise details on _why_ this is needed or not.
> 
> Again, the user of kfree_skb_list(list) _owns_ skb->next for sure.
> If you think this assertion is not true, we are in big trouble.
> 

I think I have found an explanation for why/when the refcnt can be
elevated on an SKB list.  The skb_shinfo(skb)->frag_list handling can
increase the refcnt.

See code:
  static void skb_clone_fraglist(struct sk_buff *skb)
  {
	struct sk_buff *list;

	skb_walk_frags(skb, list)
		skb_get(list);
  }

This walks the SKBs on the shinfo->frag_list and increases the refcnt
(skb->users).

Notice that kfree_skb_list is also called when freeing the SKBs on the
"frag_list" in skb_release_data().

IMHO this explains why we can only remove the SKB from the list when
"permitted" by skb_unref(), i.e. if __kfree_skb_reason() returns true.

--Jesper


Call-stack of case with elevated refcnt when walking SKB-list:
--------------------------------------------------------------

repro  3048 [003]   101.689670: skb:kfree_skb: skbaddr=0xffff888104086600 protocol=0 location=skb_release_data.cold+0x25 reason: PROTO_MEM
        ffffffff81bb6448 kfree_skb_list_reason.cold+0x3b (/boot/vmlinux-6.2.0-rc3-net-next-kfunc-xdp-hints+)
        ffffffff81bb6448 kfree_skb_list_reason.cold+0x3b (/boot/vmlinux-6.2.0-rc3-net-next-kfunc-xdp-hints+)
        ffffffff81bb6484 skb_release_data.cold+0x25 (/boot/vmlinux-6.2.0-rc3-net-next-kfunc-xdp-hints+)
        ffffffff819137e1 kfree_skb_reason+0x41 (/boot/vmlinux-6.2.0-rc3-net-next-kfunc-xdp-hints+)
        ffffffff81a1e246 igmp_rcv+0xf6 (/boot/vmlinux-6.2.0-rc3-net-next-kfunc-xdp-hints+)
        ffffffff819c5e85 ip_protocol_deliver_rcu+0x165 (/boot/vmlinux-6.2.0-rc3-net-next-kfunc-xdp-hints+)
        ffffffff819c5f12 ip_local_deliver_finish+0x72 (/boot/vmlinux-6.2.0-rc3-net-next-kfunc-xdp-hints+)
        ffffffff81930b89 __netif_receive_skb_one_core+0x69 (/boot/vmlinux-6.2.0-rc3-net-next-kfunc-xdp-hints+)
        ffffffff81930e31 process_backlog+0x91 (/boot/vmlinux-6.2.0-rc3-net-next-kfunc-xdp-hints+)
        ffffffff8193197b __napi_poll+0x2b (/boot/vmlinux-6.2.0-rc3-net-next-kfunc-xdp-hints+)
        ffffffff81931e86 net_rx_action+0x276 (/boot/vmlinux-6.2.0-rc3-net-next-kfunc-xdp-hints+)
        ffffffff81bcf083 __do_softirq+0xd3 (/boot/vmlinux-6.2.0-rc3-net-next-kfunc-xdp-hints+)
        ffffffff81087243 do_softirq+0x63 (/boot/vmlinux-6.2.0-rc3-net-next-kfunc-xdp-hints+)
        ffffffff810872e4 __local_bh_enable_ip+0x64 (/boot/vmlinux-6.2.0-rc3-net-next-kfunc-xdp-hints+)
        ffffffff8192ba8f netif_rx+0xdf (/boot/vmlinux-6.2.0-rc3-net-next-kfunc-xdp-hints+)
        ffffffff8192bb27 dev_loopback_xmit+0x77 (/boot/vmlinux-6.2.0-rc3-net-next-kfunc-xdp-hints+)
        ffffffff819c9195 ip_mc_finish_output+0x65 (/boot/vmlinux-6.2.0-rc3-net-next-kfunc-xdp-hints+)
        ffffffff819cc597 ip_mc_output+0x137 (/boot/vmlinux-6.2.0-rc3-net-next-kfunc-xdp-hints+)
        ffffffff819cd2b8 ip_push_pending_frames+0xa8 (/boot/vmlinux-6.2.0-rc3-net-next-kfunc-xdp-hints+)
        ffffffff81a04f67 raw_sendmsg+0x607 (/boot/vmlinux-6.2.0-rc3-net-next-kfunc-xdp-hints+)
        ffffffff8190153b sock_sendmsg+0x8b (/boot/vmlinux-6.2.0-rc3-net-next-kfunc-xdp-hints+)
        ffffffff8190323b __sys_sendto+0xeb (/boot/vmlinux-6.2.0-rc3-net-next-kfunc-xdp-hints+)
        ffffffff819032b0 __x64_sys_sendto+0x20 (/boot/vmlinux-6.2.0-rc3-net-next-kfunc-xdp-hints+)
        ffffffff81bb9ffa do_syscall_64+0x3a (/boot/vmlinux-6.2.0-rc3-net-next-kfunc-xdp-hints+)
        ffffffff81c000aa entry_SYSCALL_64+0xaa (/boot/vmlinux-6.2.0-rc3-net-next-kfunc-xdp-hints+)
                  10af3d syscall+0x1d (/usr/lib64/libc.so.6)
                    46d3 do_sandbox_none+0x81 (/home/jbrouer/syzbot-kfree_skb_list/repro)
                    48ce main+0xa5 (/home/jbrouer/syzbot-kfree_skb_list/repro)
                   29550 __libc_start_call_main+0x80 (/usr/lib64/libc.so.6)




Resolving some symbols:

ip_mc_finish_output+0x65
------------------------
[net-next]$ ./scripts/faddr2line net/ipv4/ip_output.o ip_mc_finish_output+0x65
ip_mc_finish_output+0x65/0x190:
ip_mc_finish_output at net/ipv4/ip_output.c:356

ip_mc_output+0x137
------------------
[net-next]$ ./scripts/faddr2line vmlinux ip_mc_output+0x137
ip_mc_output+0x137/0x2a0:
skb_network_header at include/linux/skbuff.h:2829
(inlined by) ip_hdr at include/linux/ip.h:21
(inlined by) ip_mc_output at net/ipv4/ip_output.c:401

igmp_rcv+0xf6
-------------
[net-next]$ ./scripts/faddr2line vmlinux igmp_rcv+0xf6
igmp_rcv+0xf6/0x2e0:
igmp_rcv at net/ipv4/igmp.c:1130



Thread overview: 15 messages
2023-01-13 13:51 [PATCH net-next V2 0/2] net: use kmem_cache_free_bulk in kfree_skb_list Jesper Dangaard Brouer
2023-01-13 13:51 ` [PATCH net-next V2 1/2] net: fix call location in kfree_skb_list_reason Jesper Dangaard Brouer
2023-01-13 13:52 ` [PATCH net-next V2 2/2] net: kfree_skb_list use kmem_cache_free_bulk Jesper Dangaard Brouer
2023-01-18 16:05   ` Eric Dumazet
2023-01-18 16:42     ` Jesper Dangaard Brouer
2023-01-18 16:42       ` Eric Dumazet
2023-01-18 21:37   ` Jesper Dangaard Brouer
2023-01-19  2:26     ` Jakub Kicinski
2023-01-19 10:17       ` Jesper Dangaard Brouer
2023-01-19 10:28         ` Eric Dumazet
2023-01-19 11:22           ` Jesper Dangaard Brouer
2023-01-19 11:46             ` Eric Dumazet
2023-01-19 13:18               ` Jesper Dangaard Brouer
2023-01-19  2:39     ` Yunsheng Lin
2023-01-17  9:40 ` [PATCH net-next V2 0/2] net: use kmem_cache_free_bulk in kfree_skb_list patchwork-bot+netdevbpf
