bpf.vger.kernel.org archive mirror
From: Ilias Apalodimas <ilias.apalodimas@linaro.org>
To: Shay Agroskin <shayagr@amazon.com>
Cc: Jesper Dangaard Brouer <brouer@redhat.com>,
	Yunsheng Lin <linyunsheng@huawei.com>,
	Matteo Croce <mcroce@linux.microsoft.com>,
	netdev@vger.kernel.org, linux-mm@kvack.org,
	Ayush Sawal <ayush.sawal@chelsio.com>,
	Vinay Kumar Yadav <vinay.yadav@chelsio.com>,
	Rohit Maheshwari <rohitm@chelsio.com>,
	"David S. Miller" <davem@davemloft.net>,
	Jakub Kicinski <kuba@kernel.org>,
	Thomas Petazzoni <thomas.petazzoni@bootlin.com>,
	Marcin Wojtas <mw@semihalf.com>,
	Russell King <linux@armlinux.org.uk>,
	Mirko Lindner <mlindner@marvell.com>,
	Stephen Hemminger <stephen@networkplumber.org>,
	Tariq Toukan <tariqt@nvidia.com>,
	Jesper Dangaard Brouer <hawk@kernel.org>,
	Alexei Starovoitov <ast@kernel.org>,
	Daniel Borkmann <daniel@iogearbox.net>,
	John Fastabend <john.fastabend@gmail.com>,
	Boris Pismenny <borisp@nvidia.com>, Arnd Bergmann <arnd@arndb.de>,
	Andrew Morton <akpm@linux-foundation.org>,
	"Peter Zijlstra (Intel)" <peterz@infradead.org>,
	Vlastimil Babka <vbabka@suse.cz>, Yu Zhao <yuzhao@google.com>,
	Will Deacon <will@kernel.org>,
	Michel Lespinasse <walken@google.com>,
	Fenghua Yu <fenghua.yu@intel.com>, Roman Gushchin <guro@fb.com>,
	Hugh Dickins <hughd@google.com>, Peter Xu <peterx@redhat.com>,
	Jason Gunthorpe <jgg@ziepe.ca>,
	Guoqing Jiang <guoqing.jiang@cloud.ionos.com>,
	Jonathan Lemon <jonathan.lemon@gmail.com>,
	Alexander Lobakin <alobakin@pm.me>,
	Cong Wang <cong.wang@bytedance.com>, wenxu <wenxu@ucloud.cn>,
	Kevin Hao <haokexin@gmail.com>,
	Aleksandr Nogikh <nogikh@google.com>,
	Jakub Sitnicki <jakub@cloudflare.com>,
	Marco Elver <elver@google.com>,
	Willem de Bruijn <willemb@google.com>,
	Miaohe Lin <linmiaohe@huawei.com>,
	Guillaume Nault <gnault@redhat.com>,
	linux-kernel@vger.kernel.org, linux-rdma@vger.kernel.org,
	bpf@vger.kernel.org, Matthew Wilcox <willy@infradead.org>,
	Eric Dumazet <edumazet@google.com>,
	David Ahern <dsahern@gmail.com>,
	Lorenzo Bianconi <lorenzo@kernel.org>,
	Saeed Mahameed <saeedm@nvidia.com>, Andrew Lunn <andrew@lunn.ch>,
	Paolo Abeni <pabeni@redhat.com>
Subject: Re: [PATCH net-next v3 0/5] page_pool: recycle buffers
Date: Tue, 11 May 2021 11:41:23 +0300	[thread overview]
Message-ID: <YJpDMwhX3OJrdjDd@apalos.home>
In-Reply-To: <pj41zl4kfclce0.fsf@u570694869fb251.ant.amazon.com>

Hi Shay,

On Sun, May 09, 2021 at 08:11:35AM +0300, Shay Agroskin wrote:
> 
> Jesper Dangaard Brouer <brouer@redhat.com> writes:
> 
> > On Fri, 7 May 2021 16:28:30 +0800
> > Yunsheng Lin <linyunsheng@huawei.com> wrote:
> > 
> > > On 2021/5/7 15:06, Ilias Apalodimas wrote:
> > > > On Fri, May 07, 2021 at 11:23:28AM +0800, Yunsheng Lin wrote:
> > > > > On 2021/5/6 20:58, Ilias Apalodimas wrote:
> > ...
> > > > I think both choices are sane.  What I am trying to explain here, is
> > > > regardless of what we choose now, we can change it in the future
> > > > without affecting the API consumers at all.  What will change
> > > > internally is the way we lookup the page pool pointer we are trying
> > > > to recycle.
> > > 
> > > It seems the below API needs changing?
> > > +static inline void skb_mark_for_recycle(struct sk_buff *skb, struct page *page,
> > > +					struct xdp_mem_info *mem)
> > 
> > I don't think we need to change this API, to support future memory
> > models.  Notice that xdp_mem_info has a 'type' member.
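
For reference, the struct Jesper is pointing at, roughly as it looks in
include/net/xdp.h before this series (patch 1/5 proposes shrinking it):

	struct xdp_mem_info {
		u32 type;	/* enum xdp_mem_type, e.g. MEM_TYPE_PAGE_POOL */
		u32 id;		/* key used for the mem_id_ht rhashtable lookup */
	};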
> 
> Hi,
> Provided that we will (possibly as a future optimization) store the pointer
> to the page pool in struct page instead of struct xdp_mem_info, passing
> xdp_mem_info * instead of struct page_pool * would mean that for every
> packet we'll need to call
>             xa = rhashtable_lookup(mem_id_ht, &mem->id, mem_id_rht_params);
>             xa->page_pool;
> 
> which might pressure the Dcache to fetch a pointer that might already be
> present in the cache as part of the driver's data structures.
> 
> I tend to agree with Yunsheng that it makes more sense to adjust the API for
> the clear use-case now rather than going through the xdp_mem_info
> indirection. It seems to me that the page signature provides the same
> information anyway and still allows supporting different memory types.
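
To make the trade-off concrete, here is a minimal sketch of the two lookup
paths being compared (mem_id_ht, mem_id_rht_params and xa->page_pool are the
existing names around net/core/xdp.c; the page->pp field in the second
variant is only an assumed name for the "pointer stored in struct page"
idea, not something this series defines):

	/* (a) go through xdp_mem_info: a per-packet rhashtable lookup that
	 *     touches the mem_id_ht buckets and a struct xdp_mem_allocator.
	 */
	static struct page_pool *pool_from_mem_info(const struct xdp_mem_info *mem)
	{
		struct xdp_mem_allocator *xa;
		struct page_pool *pool = NULL;

		rcu_read_lock();
		xa = rhashtable_lookup(mem_id_ht, &mem->id, mem_id_rht_params);
		if (xa)
			pool = xa->page_pool;
		rcu_read_unlock();

		return pool;
	}

	/* (b) read a pointer kept in struct page: the page is a cache line the
	 *     driver is touching anyway, so no extra lookup is needed.
	 */
	static struct page_pool *pool_from_page(const struct page *page)
	{
		return page->pp;	/* assumed field name, for illustration */
	}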

We've switched the patches already.  We didn't notice any performance boost
by doing so (tested on a Macchiatobin), but I agree as well.  As I explained,
the only thing that will change, if we ever need struct xdp_mem_info in
struct page, is the internal contract between struct page and the recycling
function, so let's start clean and see if we ever need that.
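
To illustrate the "internal contract" point, a rough sketch of what the
helper can look like once it takes the pool directly (skb->pp_recycle and
page->pp are illustrative names here, not necessarily what the final patch
uses):

	static inline void skb_mark_for_recycle(struct sk_buff *skb, struct page *page,
						struct page_pool *pool)
	{
		skb->pp_recycle = 1;	/* this skb may carry pool-backed pages */
		/* Internal contract: stash the pool in the page itself, so the
		 * recycling path can resolve it without any xdp_mem_info lookup.
		 */
		page->pp = pool;
	}

A driver would call it right after building the skb, e.g. something like
skb_mark_for_recycle(skb, page, rxq->page_pool) with whatever pool pointer it
already holds, and nothing on the driver side has to change if we later move
this information around internally.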


Cheers
/Ilias
> 
> Shay
> 
> > 
> > Naming in Computer Science is a hard problem ;-). Something that seems
> > to confuse a lot of people is the naming of the struct "xdp_mem_info".
> > Maybe we should have named it "mem_info" or "net_mem_info" instead, as
> > it doesn't indicate that the device is running XDP.
> > 
> > I see XDP as the RX layer before the network stack that helps drivers
> > support different memory models, also for handling normal packets that
> > don't get processed by XDP; drivers don't even need to support XDP to
> > use the "xdp_mem_info" type.
> 


Thread overview: 41+ messages
2021-04-09 22:37 [PATCH net-next v3 0/5] page_pool: recycle buffers Matteo Croce
2021-04-09 22:37 ` [PATCH net-next v3 1/5] xdp: reduce size of struct xdp_mem_info Matteo Croce
2021-04-09 22:37 ` [PATCH net-next v3 2/5] mm: add a signature in struct page Matteo Croce
2021-04-10 15:48   ` Matthew Wilcox
2021-04-10 16:16     ` Ilias Apalodimas
2021-04-10 17:42       ` Shakeel Butt
2021-04-10 18:27         ` Ilias Apalodimas
2021-04-10 19:39           ` Matthew Wilcox
2021-04-11 10:05             ` Jesper Dangaard Brouer
2021-04-14 19:41           ` Jesper Dangaard Brouer
2021-04-14 20:09             ` Shakeel Butt
2021-04-14 20:51               ` Eric Dumazet
2021-04-19  5:12               ` Ilias Apalodimas
2021-04-19 14:57                 ` Shakeel Butt
2021-04-19 15:43                   ` Ilias Apalodimas
2021-04-19 16:21                     ` Shakeel Butt
2021-04-19 18:41                       ` Ilias Apalodimas
2021-04-19 11:22               ` Jesper Dangaard Brouer
2021-04-19 13:01                 ` Matthew Wilcox
2021-04-20  8:10                   ` Ilias Apalodimas
2021-04-09 22:37 ` [PATCH net-next v3 3/5] page_pool: Allow drivers to hint on SKB recycling Matteo Croce
2021-04-10  0:11   ` Ilias Apalodimas
2021-04-10  0:39     ` Matteo Croce
2021-04-09 22:38 ` [PATCH net-next v3 4/5] mvpp2: recycle buffers Matteo Croce
2021-04-09 22:38 ` [PATCH net-next v3 5/5] mvneta: " Matteo Croce
2021-04-29  8:27 ` [PATCH net-next v3 0/5] page_pool: " Yunsheng Lin
2021-04-29 18:51   ` Ilias Apalodimas
2021-04-30  3:01     ` Yunsheng Lin
2021-04-30 16:24       ` Ilias Apalodimas
2021-04-30 17:32         ` Ilias Apalodimas
2021-05-03  7:29           ` Jesper Dangaard Brouer
2021-05-06 12:34         ` Yunsheng Lin
2021-05-06 12:58           ` Ilias Apalodimas
2021-05-07  3:23             ` Yunsheng Lin
2021-05-07  7:06               ` Ilias Apalodimas
2021-05-07  8:28                 ` Yunsheng Lin
2021-05-07 10:19                   ` Jesper Dangaard Brouer
2021-05-07 11:31                     ` Christoph Hellwig
2021-05-09  5:11                     ` Shay Agroskin
2021-05-11  8:41                       ` Ilias Apalodimas [this message]
2021-05-10  2:20                     ` Yunsheng Lin
