[v5,net-next,09/11] skbuff: allow to optionally use NAPI cache from __alloc_skb()

Message ID 20210211185220.9753-10-alobakin@pm.me
State New, archived
Series
  • skbuff: introduce skbuff_heads bulking and reusing

Commit Message

Alexander Lobakin Feb. 11, 2021, 6:54 p.m. UTC
Reuse the old and forgotten SKB_ALLOC_NAPI to add an option to get
an skbuff_head from the NAPI cache instead of in-place allocation
inside __alloc_skb().
This implies that the function is called from softirq or BH-off
context, not for allocating a clone or from a distant node.

Signed-off-by: Alexander Lobakin <alobakin@pm.me>
---
 net/core/skbuff.c | 13 +++++++++----
 1 file changed, 9 insertions(+), 4 deletions(-)
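
For context, napi_skb_cache_get() is introduced earlier in this series.
A simplified sketch of the mechanism this patch taps into, a per-CPU
array of skbuff_heads refilled in bulk from the slab (field and
constant names are assumed from the rest of the series, not shown in
this patch):

	/* Simplified sketch; names assumed from earlier patches in
	 * the series, details elided.
	 */
	#define NAPI_SKB_CACHE_SIZE	64
	#define NAPI_SKB_CACHE_HALF	(NAPI_SKB_CACHE_SIZE / 2)

	struct napi_alloc_cache {
		unsigned int skb_count;
		void *skb_cache[NAPI_SKB_CACHE_SIZE];
	};

	static DEFINE_PER_CPU(struct napi_alloc_cache, napi_alloc_cache);

	static struct sk_buff *napi_skb_cache_get(void)
	{
		struct napi_alloc_cache *nc = this_cpu_ptr(&napi_alloc_cache);

		/* refill half of the cache with one slab call when empty */
		if (unlikely(!nc->skb_count))
			nc->skb_count = kmem_cache_alloc_bulk(skbuff_head_cache,
							      GFP_ATOMIC,
							      NAPI_SKB_CACHE_HALF,
							      nc->skb_cache);
		if (unlikely(!nc->skb_count))
			return NULL;

		return nc->skb_cache[--nc->skb_count];
	}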

Comments

Alexander Duyck Feb. 12, 2021, 3:18 a.m. UTC | #1
On Thu, Feb 11, 2021 at 11:00 AM Alexander Lobakin <alobakin@pm.me> wrote:
>
> Reuse the old and forgotten SKB_ALLOC_NAPI to add an option to get
> an skbuff_head from the NAPI cache instead of in-place allocation
> inside __alloc_skb().
> This implies that the function is called from softirq or BH-off
> context, not for allocating a clone or from a distant node.
>
> Signed-off-by: Alexander Lobakin <alobakin@pm.me>
> ---
>  net/core/skbuff.c | 13 +++++++++----
>  1 file changed, 9 insertions(+), 4 deletions(-)
>
> diff --git a/net/core/skbuff.c b/net/core/skbuff.c
> index 9e1a8ded4acc..a0b457ae87c2 100644
> --- a/net/core/skbuff.c
> +++ b/net/core/skbuff.c
> @@ -397,15 +397,20 @@ struct sk_buff *__alloc_skb(unsigned int size, gfp_t gfp_mask,
>         struct sk_buff *skb;
>         u8 *data;
>         bool pfmemalloc;
> +       bool clone;
>
> -       cache = (flags & SKB_ALLOC_FCLONE)
> -               ? skbuff_fclone_cache : skbuff_head_cache;
> +       clone = !!(flags & SKB_ALLOC_FCLONE);

The boolean conversion here is probably unnecessary. I would make
clone an int like flags and work with that. I suspect the compiler is
doing it already, but it is better to be explicit.

> +       cache = clone ? skbuff_fclone_cache : skbuff_head_cache;
>
>         if (sk_memalloc_socks() && (flags & SKB_ALLOC_RX))
>                 gfp_mask |= __GFP_MEMALLOC;
>
>         /* Get the HEAD */
> -       skb = kmem_cache_alloc_node(cache, gfp_mask & ~__GFP_DMA, node);
> +       if ((flags & SKB_ALLOC_NAPI) && !clone &&

Rather than having to do two checks, you could test for
SKB_ALLOC_NAPI and SKB_ALLOC_FCLONE in a single combined check,
something like:
    if ((flags & (SKB_ALLOC_FCLONE | SKB_ALLOC_NAPI)) == SKB_ALLOC_NAPI)

That way you can avoid the extra conditional jumps and can start
computing the flags value sooner.
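
To make the equivalence concrete, a small standalone check (plain C;
the flag values mirror net/core/skbuff.c but are redefined here so the
snippet builds on its own):

	#include <assert.h>
	#include <stdio.h>

	#define SKB_ALLOC_FCLONE	0x01
	#define SKB_ALLOC_RX		0x02
	#define SKB_ALLOC_NAPI		0x04

	int main(void)
	{
		for (int flags = 0; flags < 8; flags++) {
			int two_checks = (flags & SKB_ALLOC_NAPI) &&
					 !(flags & SKB_ALLOC_FCLONE);
			/* extra parentheses needed: == binds tighter than & */
			int one_check = (flags & (SKB_ALLOC_FCLONE |
						  SKB_ALLOC_NAPI)) ==
					SKB_ALLOC_NAPI;

			assert(two_checks == one_check);
		}

		printf("combined mask test matches the two separate tests\n");
		return 0;
	}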

> +           likely(node == NUMA_NO_NODE || node == numa_mem_id()))
> +               skb = napi_skb_cache_get();
> +       else
> +               skb = kmem_cache_alloc_node(cache, gfp_mask & ~GFP_DMA, node);
>         if (unlikely(!skb))
>                 return NULL;
>         prefetchw(skb);
> @@ -436,7 +441,7 @@ struct sk_buff *__alloc_skb(unsigned int size, gfp_t gfp_mask,
>         __build_skb_around(skb, data, 0);
>         skb->pfmemalloc = pfmemalloc;
>
> -       if (flags & SKB_ALLOC_FCLONE) {
> +       if (clone) {
>                 struct sk_buff_fclones *fclones;
>
>                 fclones = container_of(skb, struct sk_buff_fclones, skb1);
> --
> 2.30.1
>
>
Alexander Lobakin Feb. 13, 2021, 11:53 a.m. UTC | #2
From: Alexander Duyck <alexander.duyck@gmail.com>
Date: Thu, 11 Feb 2021 19:18:45 -0800

> On Thu, Feb 11, 2021 at 11:00 AM Alexander Lobakin <alobakin@pm.me> wrote:
> >
> > Reuse the old and forgotten SKB_ALLOC_NAPI to add an option to get
> > an skbuff_head from the NAPI cache instead of in-place allocation
> > inside __alloc_skb().
> > This implies that the function is called from softirq or BH-off
> > context, not for allocating a clone or from a distant node.
> >
> > Signed-off-by: Alexander Lobakin <alobakin@pm.me>
> > ---
> >  net/core/skbuff.c | 13 +++++++++----
> >  1 file changed, 9 insertions(+), 4 deletions(-)
> >
> > diff --git a/net/core/skbuff.c b/net/core/skbuff.c
> > index 9e1a8ded4acc..a0b457ae87c2 100644
> > --- a/net/core/skbuff.c
> > +++ b/net/core/skbuff.c
> > @@ -397,15 +397,20 @@ struct sk_buff *__alloc_skb(unsigned int size, gfp_t gfp_mask,
> >         struct sk_buff *skb;
> >         u8 *data;
> >         bool pfmemalloc;
> > +       bool clone;
> >
> > -       cache = (flags & SKB_ALLOC_FCLONE)
> > -               ? skbuff_fclone_cache : skbuff_head_cache;
> > +       clone = !!(flags & SKB_ALLOC_FCLONE);
> 
> The boolean conversion here is probably unnecessary. I would make
> clone an int like flags and work with that. I suspect the compiler is
> doing it already, but it is better to be explicit.
> 
> > +       cache = clone ? skbuff_fclone_cache : skbuff_head_cache;
> >
> >         if (sk_memalloc_socks() && (flags & SKB_ALLOC_RX))
> >                 gfp_mask |= __GFP_MEMALLOC;
> >
> >         /* Get the HEAD */
> > -       skb = kmem_cache_alloc_node(cache, gfp_mask & ~__GFP_DMA, node);
> > +       if ((flags & SKB_ALLOC_NAPI) && !clone &&
> 
> Rather than having to do two checks, you could test for
> SKB_ALLOC_NAPI and SKB_ALLOC_FCLONE in a single combined check,
> something like:
>     if ((flags & (SKB_ALLOC_FCLONE | SKB_ALLOC_NAPI)) == SKB_ALLOC_NAPI)
> 
> That way you can avoid the extra conditional jumps and can start
> computing the flags value sooner.

I thought about a combined check for the two flags yesterday, so yeah,
that should probably be better than the current version.
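
Applied on top of this patch, the allocation hunk would then read
roughly as follows (a sketch of the suggested change, not a posted
revision):

	/* Get the HEAD */
	if ((flags & (SKB_ALLOC_FCLONE | SKB_ALLOC_NAPI)) == SKB_ALLOC_NAPI &&
	    likely(node == NUMA_NO_NODE || node == numa_mem_id()))
		skb = napi_skb_cache_get();
	else
		skb = kmem_cache_alloc_node(cache, gfp_mask & ~GFP_DMA, node);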

> > +           likely(node == NUMA_NO_NODE || node == numa_mem_id()))
> > +               skb = napi_skb_cache_get();
> > +       else
> > +               skb = kmem_cache_alloc_node(cache, gfp_mask & ~GFP_DMA, node);
> >         if (unlikely(!skb))
> >                 return NULL;
> >         prefetchw(skb);
> > @@ -436,7 +441,7 @@ struct sk_buff *__alloc_skb(unsigned int size, gfp_t gfp_mask,
> >         __build_skb_around(skb, data, 0);
> >         skb->pfmemalloc = pfmemalloc;
> >
> > -       if (flags & SKB_ALLOC_FCLONE) {
> > +       if (clone) {
> >                 struct sk_buff_fclones *fclones;
> >
> >                 fclones = container_of(skb, struct sk_buff_fclones, skb1);
> > --
> > 2.30.1

Thanks,
Al

Patch

diff --git a/net/core/skbuff.c b/net/core/skbuff.c
index 9e1a8ded4acc..a0b457ae87c2 100644
--- a/net/core/skbuff.c
+++ b/net/core/skbuff.c
@@ -397,15 +397,20 @@ struct sk_buff *__alloc_skb(unsigned int size, gfp_t gfp_mask,
 	struct sk_buff *skb;
 	u8 *data;
 	bool pfmemalloc;
+	bool clone;
 
-	cache = (flags & SKB_ALLOC_FCLONE)
-		? skbuff_fclone_cache : skbuff_head_cache;
+	clone = !!(flags & SKB_ALLOC_FCLONE);
+	cache = clone ? skbuff_fclone_cache : skbuff_head_cache;
 
 	if (sk_memalloc_socks() && (flags & SKB_ALLOC_RX))
 		gfp_mask |= __GFP_MEMALLOC;
 
 	/* Get the HEAD */
-	skb = kmem_cache_alloc_node(cache, gfp_mask & ~__GFP_DMA, node);
+	if ((flags & SKB_ALLOC_NAPI) && !clone &&
+	    likely(node == NUMA_NO_NODE || node == numa_mem_id()))
+		skb = napi_skb_cache_get();
+	else
+		skb = kmem_cache_alloc_node(cache, gfp_mask & ~GFP_DMA, node);
 	if (unlikely(!skb))
 		return NULL;
 	prefetchw(skb);
@@ -436,7 +441,7 @@ struct sk_buff *__alloc_skb(unsigned int size, gfp_t gfp_mask,
 	__build_skb_around(skb, data, 0);
 	skb->pfmemalloc = pfmemalloc;
 
-	if (flags & SKB_ALLOC_FCLONE) {
+	if (clone) {
 		struct sk_buff_fclones *fclones;
 
 		fclones = container_of(skb, struct sk_buff_fclones, skb1);
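
For reference, a hypothetical call site that would take the new
NAPI-cache path: an RX allocation in softirq context, with no fclone
and no remote-node request (the call below is illustrative, not taken
from this series):

	/* Illustrative only: called with BH disabled, so the per-CPU
	 * cache is safe; no SKB_ALLOC_FCLONE and a local node, so
	 * napi_skb_cache_get() supplies the skbuff_head.
	 */
	skb = __alloc_skb(length, GFP_ATOMIC | __GFP_NOWARN,
			  SKB_ALLOC_RX | SKB_ALLOC_NAPI, NUMA_NO_NODE);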