linux-kernel.vger.kernel.org archive mirror
From: Jesper Dangaard Brouer <brouer@redhat.com>
To: Alexander Lobakin <alobakin@pm.me>
Cc: "Paolo Abeni" <pabeni@redhat.com>,
	"David S. Miller" <davem@davemloft.net>,
	"Jakub Kicinski" <kuba@kernel.org>,
	"Jonathan Lemon" <jonathan.lemon@gmail.com>,
	"Eric Dumazet" <edumazet@google.com>,
	"Dmitry Vyukov" <dvyukov@google.com>,
	"Willem de Bruijn" <willemb@google.com>,
	"Randy Dunlap" <rdunlap@infradead.org>,
	"Kevin Hao" <haokexin@gmail.com>,
	"Pablo Neira Ayuso" <pablo@netfilter.org>,
	"Jakub Sitnicki" <jakub@cloudflare.com>,
	"Marco Elver" <elver@google.com>,
	"Dexuan Cui" <decui@microsoft.com>,
	"Alexei Starovoitov" <ast@kernel.org>,
	"Daniel Borkmann" <daniel@iogearbox.net>,
	"Andrii Nakryiko" <andriin@fb.com>,
	"Taehee Yoo" <ap420073@gmail.com>,
	"Cong Wang" <xiyou.wangcong@gmail.com>,
	"Björn Töpel" <bjorn@kernel.org>,
	"Miaohe Lin" <linmiaohe@huawei.com>,
	"Guillaume Nault" <gnault@redhat.com>,
	"Yonghong Song" <yhs@fb.com>, zhudi <zhudi21@huawei.com>,
	"Michal Kubecek" <mkubecek@suse.cz>,
	"Marcelo Ricardo Leitner" <marcelo.leitner@gmail.com>,
	"Dmitry Safonov" <0x7f454c46@gmail.com>,
	"Yang Yingliang" <yangyingliang@huawei.com>,
	linux-kernel@vger.kernel.org, netdev@vger.kernel.org,
	"Edward Cree" <ecree.xilinx@gmail.com>,
	brouer@redhat.com, "Alexander Duyck" <alexander.duyck@gmail.com>
Subject: Re: [v3 net-next 08/10] skbuff: reuse NAPI skb cache on allocation path (__build_skb())
Date: Wed, 10 Feb 2021 18:25:26 +0100	[thread overview]
Message-ID: <20210210182526.3fd3c0ba@carbon> (raw)
In-Reply-To: <20210210122414.8064-1-alobakin@pm.me>

On Wed, 10 Feb 2021 12:25:04 +0000
Alexander Lobakin <alobakin@pm.me> wrote:

> From: Paolo Abeni <pabeni@redhat.com>
> Date: Wed, 10 Feb 2021 11:21:06 +0100
> 
> > I'm sorry for the late feedback, I could not step in before.
> > 
> > Also adding Jesper for awareness, as he introduced the bulk free
> > infrastructure.

Thanks (and Alexander Duyck also did part of the work while at Red Hat).

In my initial versions of my patchsets I actually also had reuse of the
SKBs that were deferred-freed during NAPI context.  But that part was
getting nitpicked and the merge window was getting close, so I ended up
dropping it.



> > On Tue, 2021-02-09 at 20:48 +0000, Alexander Lobakin wrote:  
> > > @@ -231,7 +256,7 @@ struct sk_buff *__build_skb(void *data, unsigned int frag_size)
> > >   */
> > >  struct sk_buff *build_skb(void *data, unsigned int frag_size)
> > >  {
> > > -	struct sk_buff *skb = __build_skb(data, frag_size);
> > > +	struct sk_buff *skb = __build_skb(data, frag_size, true);  
> > 
> > I must admit I'm a bit scared of this. There are several high speed
> > device drivers that will move to bulk allocation, and we don't have any
> > performance figure for them.
> > 
> > In my experience with (low end) MIPS boards, cache miss costs tend to
> > be much less visible there compared to reasonably recent server H/W,
> > because the CPU/memory access time difference is much lower.
> > 
> > When moving to higher end H/W the performance gain you measured could
> > be completely countered by less optimal cache usage.
> > 
> > I also fear latency spikes - I'm unsure if a 32-skb allocation vs a
> > single skb would be visible e.g. in a round-robin test. Generally
> > speaking, bulk allocating 32 skbs looks a bit too much. IIRC, when
> > Edward added listification to GRO, he did several measurements with
> > different list sizes and found 8 to be the optimal value (for the tested
> > workload). Above that number the list becomes too big and the pressure
> > on the cache outweighed the bulking benefits.  
> 
> I can change the logic so that it would allocate only the first 8.
> I think I've already seen this batch value somewhere in the XDP code,
> so this might be a balanced one.

(Speaking about SLUB code): The bulk ALLOC side disables interrupts, and
can call the slow path (___slab_alloc), which is bad for latency
sensitive workloads.  Thus, I don't recommend large bulk ALLOCATIONS.

> Regarding bulk-freeing: does the batch size matter when freeing,
> or is it okay to wipe 32 (currently 64 in baseline) in a row?

(Speaking about SLUB code):  You can bulk FREE a large amount of objects
without hurting latency sensitive workloads, because it doesn't disable
interrupts (I'm quite proud that this was possible).


> > Perhaps giving the device drivers the ability to opt-in on this infra
> > via a new helper - as done back then with napi_consume_skb() - would
> > make this change safer?  
> 
> That's actually a very nice idea. There's only a little to change in
> the code to introduce the ability to take heads from the cache
> optionally. This way developers could switch to it when needed.

Well, I actually disagree that this should be hidden behind a switch
for drivers to enable, as it will take forever to get properly enabled.
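For concreteness, the opt-in Paolo suggests could look roughly like the
sketch below -- keep build_skb() on the old behaviour and add a helper
for drivers that know they run in NAPI context.  The names here are
illustrative, not taken from the actual patch:

```c
/* Illustrative sketch only -- not the actual patch. */
struct sk_buff *build_skb(void *data, unsigned int frag_size)
{
	/* old behaviour: never touch the NAPI skb cache */
	return __build_skb(data, frag_size, false);
}

struct sk_buff *napi_build_skb(void *data, unsigned int frag_size)
{
	/* opt-in: drivers in NAPI context reuse cached skbuff_heads */
	return __build_skb(data, frag_size, true);
}
```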



> Thanks for the suggestions! I'll definitely absorb them into the code
> and give it a test.
> 
> > > @@ -838,31 +863,31 @@ void __consume_stateless_skb(struct sk_buff *skb)
> > >  	kfree_skbmem(skb);
> > >  }
> > >
> > > -static inline void _kfree_skb_defer(struct sk_buff *skb)
> > > +static void napi_skb_cache_put(struct sk_buff *skb)
> > >  {
> > >  	struct napi_alloc_cache *nc = this_cpu_ptr(&napi_alloc_cache);
> > > +	u32 i;
> > >
> > >  	/* drop skb->head and call any destructors for packet */
> > >  	skb_release_all(skb);
> > >
> > > -	/* record skb to CPU local list */
> > > +	kasan_poison_object_data(skbuff_head_cache, skb);
> > >  	nc->skb_cache[nc->skb_count++] = skb;
> > >
> > > -#ifdef CONFIG_SLUB
> > > -	/* SLUB writes into objects when freeing */
> > > -	prefetchw(skb);
> > > -#endif  
> > 
> > It looks like this chunk has been lost. Is that intentional?  
> 
> Yep. This prefetchw() assumed that skbuff_heads would be wiped
> immediately or at the end of the network softirq. Reusing this cache
> means that heads can be reused later or kept in the cache for some
> time, so prefetching makes no sense anymore.

I agree with this statement; the prefetchw() is no longer needed.

-- 
Best regards,
  Jesper Dangaard Brouer
  MSc.CS, Principal Kernel Engineer at Red Hat
  LinkedIn: http://www.linkedin.com/in/brouer


Thread overview: 16+ messages
2021-02-09 20:46 [v3 net-next 00/10] skbuff: introduce skbuff_heads bulking and reusing Alexander Lobakin
2021-02-09 20:47 ` [v3 net-next 01/10] skbuff: move __alloc_skb() next to the other skb allocation functions Alexander Lobakin
2021-02-09 20:47 ` [v3 net-next 02/10] skbuff: simplify kmalloc_reserve() Alexander Lobakin
2021-02-09 20:47 ` [v3 net-next 03/10] skbuff: make __build_skb_around() return void Alexander Lobakin
2021-02-09 20:48 ` [v3 net-next 04/10] skbuff: simplify __alloc_skb() a bit Alexander Lobakin
2021-02-09 20:48 ` [v3 net-next 05/10] skbuff: use __build_skb_around() in __alloc_skb() Alexander Lobakin
2021-02-09 20:48 ` [v3 net-next 06/10] skbuff: remove __kfree_skb_flush() Alexander Lobakin
2021-02-09 20:48 ` [v3 net-next 07/10] skbuff: move NAPI cache declarations upper in the file Alexander Lobakin
2021-02-09 20:48 ` [v3 net-next 08/10] skbuff: reuse NAPI skb cache on allocation path (__build_skb()) Alexander Lobakin
2021-02-10 10:21   ` Paolo Abeni
2021-02-10 12:25     ` Alexander Lobakin
2021-02-10 12:51       ` Paolo Abeni
2021-02-10 17:25       ` Jesper Dangaard Brouer [this message]
2021-02-10 13:15   ` Florian Westphal
2021-02-09 20:49 ` [v3 net-next 09/10] skbuff: reuse NAPI skb cache on allocation path (__alloc_skb()) Alexander Lobakin
2021-02-09 20:49 ` [v3 net-next 10/10] skbuff: queue NAPI_MERGED_FREE skbs into NAPI cache instead of freeing Alexander Lobakin
