From: Yunsheng Lin <linyunsheng@huawei.com>
To: Jesper Dangaard Brouer <jbrouer@redhat.com>,
	Alexander Duyck <alexander.duyck@gmail.com>
Cc: <brouer@redhat.com>, <davem@davemloft.net>, <kuba@kernel.org>,
	<pabeni@redhat.com>, <netdev@vger.kernel.org>,
	<linux-kernel@vger.kernel.org>,
	Lorenzo Bianconi <lorenzo@kernel.org>,
	Jesper Dangaard Brouer <hawk@kernel.org>,
	Ilias Apalodimas <ilias.apalodimas@linaro.org>,
	Eric Dumazet <edumazet@google.com>,
	Maryam Tahhan <mtahhan@redhat.com>, bpf <bpf@vger.kernel.org>
Subject: Re: [PATCH net-next v3 3/4] page_pool: introduce page_pool_alloc() API
Date: Fri, 16 Jun 2023 19:57:14 +0800
Message-ID: <dcc9db4c-207b-e118-3d84-641677cd3d80@huawei.com>
In-Reply-To: <0ba1bf9c-2e45-cd44-60d3-66feeb3268f3@redhat.com>

On 2023/6/16 0:19, Jesper Dangaard Brouer wrote:

...

> You have mentioned veth as the use-case. I know I acked adding the page_pool
> use-case to veth, for when we need to convert an SKB into an
> xdp_buff/xdp-frame, but maybe it was the wrong hammer(?).
> In this case in veth, the size is known at the page allocation time.
> Thus, using the page_pool API is wasting memory.  We did this for
> performance reasons, but we are not using PP for what it was intended
> for.  We mostly use page_pool because it is an existing recycle return
> path, and we were too lazy to add another alloc-type (see enum
> xdp_mem_type).
> 
> Maybe you/we can extend veth to use this dynamic size API, to show us
> that this API is a better approach.  I will sign up for benchmarking
> this (and coordinating with Maryam, CC'ed, as she came with the use-case
> we improved on).

Thanks, let's find out if page pool is the right hammer for the
veth XDP case.
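
To make the memory point concrete (assuming 4KB pages): with the current
code both the skb head and every frag are backed by a full page, and
PAGE_SIZE is reported as truesize, so even a copy of a small frame accounts
for at least 4KB per buffer. As I understand the new page_pool_alloc() API,
the pool only reserves roughly the requested size (rounded to its internal
fragment granularity), so several small buffers can share one page and the
truesize reported to the skb shrinks accordingly.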

Below is the change for veth using the new API from this patch.
It is only compile-tested, as I am not familiar enough with veth XDP
and its testing environment. Please give it a try if it is helpful.

diff --git a/drivers/net/veth.c b/drivers/net/veth.c
index 614f3e3efab0..8850394f1d29 100644
--- a/drivers/net/veth.c
+++ b/drivers/net/veth.c
@@ -736,7 +736,7 @@ static int veth_convert_skb_to_xdp_buff(struct veth_rq *rq,
        if (skb_shared(skb) || skb_head_is_locked(skb) ||
            skb_shinfo(skb)->nr_frags ||
            skb_headroom(skb) < XDP_PACKET_HEADROOM) {
-               u32 size, len, max_head_size, off;
+               u32 size, len, max_head_size, off, truesize, page_offset;
                struct sk_buff *nskb;
                struct page *page;
                int i, head_off;
@@ -752,12 +752,15 @@ static int veth_convert_skb_to_xdp_buff(struct veth_rq *rq,
                if (skb->len > PAGE_SIZE * MAX_SKB_FRAGS + max_head_size)
                        goto drop;

+               size = min_t(u32, skb->len, max_head_size);
+               truesize = size;
+
                /* Allocate skb head */
-               page = page_pool_dev_alloc_pages(rq->page_pool);
+               page = page_pool_dev_alloc(rq->page_pool, &page_offset, &truesize);
                if (!page)
                        goto drop;

-               nskb = napi_build_skb(page_address(page), PAGE_SIZE);
+               nskb = napi_build_skb(page_address(page) + page_offset, truesize);
                if (!nskb) {
                        page_pool_put_full_page(rq->page_pool, page, true);
                        goto drop;
@@ -767,7 +770,6 @@ static int veth_convert_skb_to_xdp_buff(struct veth_rq *rq,
                skb_copy_header(nskb, skb);
                skb_mark_for_recycle(nskb);

-               size = min_t(u32, skb->len, max_head_size);
                if (skb_copy_bits(skb, 0, nskb->data, size)) {
                        consume_skb(nskb);
                        goto drop;
@@ -782,14 +784,17 @@ static int veth_convert_skb_to_xdp_buff(struct veth_rq *rq,
                len = skb->len - off;

                for (i = 0; i < MAX_SKB_FRAGS && off < skb->len; i++) {
-                       page = page_pool_dev_alloc_pages(rq->page_pool);
+                       size = min_t(u32, len, PAGE_SIZE);
+                       truesize = size;
+
+                       page = page_pool_dev_alloc(rq->page_pool, &page_offset,
+                                                  &truesize);
                        if (!page) {
                                consume_skb(nskb);
                                goto drop;
                        }

-                       size = min_t(u32, len, PAGE_SIZE);
-                       skb_add_rx_frag(nskb, i, page, 0, size, PAGE_SIZE);
+                       skb_add_rx_frag(nskb, i, page, page_offset, size, truesize);
-                       if (skb_copy_bits(skb, off, page_address(page),
+                       if (skb_copy_bits(skb, off, page_address(page) + page_offset,
                                          size)) {
                                consume_skb(nskb);

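For reference, here is a minimal sketch of how I understand the new helper
is meant to be called (signature and semantics as in this v3 series; the
pp_alloc_va() wrapper below is only an illustration for this mail, not part
of the patch):

#include <linux/mm.h>
#include <net/page_pool.h>

/* Sketch only: grab a buffer of "size" bytes from a page_pool via the new
 * page_pool_dev_alloc() helper.  On input *truesize holds the wanted size;
 * on return it holds the space actually reserved for this fragment.
 */
static void *pp_alloc_va(struct page_pool *pool, unsigned int size,
			 unsigned int *truesize)
{
	unsigned int offset;
	struct page *page;

	*truesize = size;
	page = page_pool_dev_alloc(pool, &offset, truesize);
	if (!page)
		return NULL;

	/* The buffer starts at "offset" inside the (possibly shared) page;
	 * report *truesize, not PAGE_SIZE, to the skb layer.
	 */
	return page_address(page) + offset;
}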

> 
> --Jesper
