From: Eric Dumazet <edumazet@google.com>
To: Jakub Kicinski <kuba@kernel.org>
Cc: Alexander Lobakin <alobakin@pm.me>,
	Edward Cree <ecree.xilinx@gmail.com>,
	"David S. Miller" <davem@davemloft.net>,
	Edward Cree <ecree@solarflare.com>,
	Jonathan Lemon <jonathan.lemon@gmail.com>,
	Willem de Bruijn <willemb@google.com>,
	Miaohe Lin <linmiaohe@huawei.com>,
	Steffen Klassert <steffen.klassert@secunet.com>,
	Guillaume Nault <gnault@redhat.com>,
	Yadu Kishore <kyk.segfault@gmail.com>,
	Al Viro <viro@zeniv.linux.org.uk>,
	netdev <netdev@vger.kernel.org>,
	LKML <linux-kernel@vger.kernel.org>
Subject: Re: [PATCH net-next 0/5] skbuff: introduce skbuff_heads bulking and reusing
Date: Wed, 13 Jan 2021 05:46:05 +0100	[thread overview]
Message-ID: <CANn89i+ppTAPYwQ2mH5cZtcMqanFU8hXzD4szdygrjOBewPb+Q@mail.gmail.com> (raw)
In-Reply-To: <20210112170242.414b8664@kicinski-fedora-pc1c0hjn.dhcp.thefacebook.com>

On Wed, Jan 13, 2021 at 2:02 AM Jakub Kicinski <kuba@kernel.org> wrote:
>
> On Tue, 12 Jan 2021 13:23:16 +0100 Eric Dumazet wrote:
> > On Tue, Jan 12, 2021 at 12:08 PM Alexander Lobakin <alobakin@pm.me> wrote:
> > >
> > > From: Edward Cree <ecree.xilinx@gmail.com>
> > > Date: Tue, 12 Jan 2021 09:54:04 +0000
> > >
> > > > Without wishing to weigh in on whether this caching is a good idea...
> > >
> > > Well, we already have a cache to bulk flush "consumed" skbs, although
> > > kmem_cache_free() is generally lighter than kmem_cache_alloc(), and
> > > a page frag cache to allocate skb->head that also batches the
> > > operations, since it contains a (compound) page with the size of
> > > min(SZ_32K, PAGE_SIZE).
> > > If they didn't give any visible boosts, I don't think they would
> > > have hit mainline.
> > >
> > > > Wouldn't it be simpler, rather than having two separate "alloc" and "flush"
> > > >  caches, to have a single larger cache, such that whenever it becomes full
> > > >  we bulk flush the top half, and when it's empty we bulk alloc the bottom
> > > >  half?  That should mean fewer branches, fewer instructions etc. than
> > > >  having to decide which cache to act upon every time.
> > >
> > > I thought about a unified cache, but couldn't decide whether to flush
> > > or to allocate heads and how much to process. Your suggestion answers
> > > these questions and generally seems great. I'll try that one, thanks!
> >
> > The thing is : kmalloc() is supposed to have batches already, and nice
> > per-cpu caches.
> >
> > This looks like an mm issue, are we sure we want to get over it ?
> >
> > I would like a full analysis of why SLAB/SLUB does not work well for
> > your test workload.
>
> +1, it does feel like we're getting into mm territory

I read the existing code, and with Edward Cree's idea of reusing the
existing cache (storage of pointers),
this now all makes sense, since there will not be much added code (and
new storage of 64 pointers).

The remaining issue is to make sure KASAN will still work; we need
it to detect old and new bugs.
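(A rough model of the unified cache Edward described, for readers following
along: one 64-entry stack of pointers; when a free fills it, the top half is
flushed in one bulk call, and when an alloc finds it empty, the bottom half
is refilled in one bulk call. The sizes and helper names are illustrative,
and plain malloc()/free() stand in for kmem_cache_alloc_bulk() /
kmem_cache_free_bulk() so the sketch is self-contained.)

```c
#include <assert.h>
#include <stdlib.h>

#define CACHE_SIZE 64
#define CACHE_HALF (CACHE_SIZE / 2)
#define OBJ_SIZE   256          /* stand-in for sizeof(struct sk_buff) */

static void *cache[CACHE_SIZE];
static unsigned int count;      /* number of pointers currently cached */

static void bulk_free(void **objs, unsigned int n)
{
	/* kernel analogue: kmem_cache_free_bulk() */
	for (unsigned int i = 0; i < n; i++)
		free(objs[i]);
}

static void bulk_alloc(void **objs, unsigned int n)
{
	/* kernel analogue: kmem_cache_alloc_bulk() */
	for (unsigned int i = 0; i < n; i++)
		objs[i] = malloc(OBJ_SIZE);
}

static void *cache_get(void)
{
	if (count == 0) {
		/* empty: bulk-refill the bottom half */
		bulk_alloc(cache, CACHE_HALF);
		count = CACHE_HALF;
	}
	return cache[--count];
}

static void cache_put(void *obj)
{
	if (count == CACHE_SIZE) {
		/* full: bulk-flush the top half */
		count = CACHE_HALF;
		bulk_free(&cache[CACHE_HALF], CACHE_HALF);
	}
	cache[count++] = obj;
}
```

Only one counter is tested on each call, which is the "fewer branches"
property the suggestion was after.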

Thanks !

Thread overview: 21+ messages
2021-01-11 18:27 [PATCH net-next 0/5] skbuff: introduce skbuff_heads bulking and reusing Alexander Lobakin
2021-01-11 18:28 ` [PATCH net-next 1/5] skbuff: rename fields of struct napi_alloc_cache to be more intuitive Alexander Lobakin
2021-01-11 18:28   ` [PATCH net-next 2/5] skbuff: open-code __build_skb() inside __napi_alloc_skb() Alexander Lobakin
2021-01-11 18:29   ` [PATCH net-next 3/5] skbuff: reuse skbuff_heads from flush_skb_cache if available Alexander Lobakin
2021-01-11 18:29   ` [PATCH net-next 4/5] skbuff: allocate skbuff_heads by bulks instead of one by one Alexander Lobakin
2021-01-11 18:29   ` [PATCH net-next 5/5] skbuff: refill skb_cache early from deferred-to-consume entries Alexander Lobakin
2021-01-11 18:49   ` [PATCH net-next 1/5] skbuff: rename fields of struct napi_alloc_cache to be more intuitive Jonathan Lemon
2021-01-11 21:03     ` Alexander Lobakin
2021-01-12  8:20 ` [PATCH net-next 0/5] skbuff: introduce skbuff_heads bulking and reusing Eric Dumazet
2021-01-12 10:56   ` Alexander Lobakin
2021-01-12 12:32     ` Eric Dumazet
2021-01-12 18:26       ` Alexander Lobakin
2021-01-12 19:19         ` Eric Dumazet
2021-01-12  9:54 ` Edward Cree
2021-01-12 11:08   ` Alexander Lobakin
2021-01-12 12:23     ` Eric Dumazet
2021-01-13  1:02       ` Jakub Kicinski
2021-01-13  4:46         ` Eric Dumazet [this message]
2021-01-13 17:03           ` Jakub Kicinski
2021-01-13 17:15             ` Eric Dumazet
2021-01-13 18:12               ` Jakub Kicinski
