From: Matteo Croce <mcroce@redhat.com>
To: Jesper Dangaard Brouer <brouer@redhat.com>
Cc: Ilias Apalodimas <ilias.apalodimas@linaro.org>,
	netdev <netdev@vger.kernel.org>,
	LKML <linux-kernel@vger.kernel.org>,
	Lorenzo Bianconi <lorenzo@kernel.org>,
	Maxime Chevallier <maxime.chevallier@bootlin.com>,
	Antoine Tenart <antoine.tenart@bootlin.com>,
	Luka Perkov <luka.perkov@sartura.hr>,
	Tomislav Tomasic <tomislav.tomasic@sartura.hr>,
	Marcin Wojtas <mw@semihalf.com>,
	Stefan Chulski <stefanc@marvell.com>,
	Nadav Haklai <nadavh@marvell.com>
Subject: Re: [RFC net-next 0/2] mvpp2: page_pool support
Date: Tue, 24 Dec 2019 15:37:49 +0100	[thread overview]
Message-ID: <CAGnkfhzYdqBPvRM8j98HVMzeHSbJ8RyVH+nLpoKBuz2iqErPog@mail.gmail.com> (raw)
In-Reply-To: <20191224150058.4400ffab@carbon>

On Tue, Dec 24, 2019 at 3:01 PM Jesper Dangaard Brouer
<brouer@redhat.com> wrote:
>
> On Tue, 24 Dec 2019 11:52:29 +0200
> Ilias Apalodimas <ilias.apalodimas@linaro.org> wrote:
>
> > On Tue, Dec 24, 2019 at 02:01:01AM +0100, Matteo Croce wrote:
> > > These patches change the memory allocator of mvpp2 from the frag allocator to
> > > the page_pool API. This change is needed to later add XDP support to mvpp2.
> > >
> > > The reason I'm sending it as an RFC is that with this changeset mvpp2 performs
> > > much more slowly. This is the tc drop rate measured with a single flow:
> > >
> > > stock net-next with frag allocator:
> > > rx: 900.7 Mbps 1877 Kpps
> > >
> > > this patchset with page_pool:
> > > rx: 423.5 Mbps 882.3 Kpps
> > >
> > > This is the perf top when receiving traffic:
> > >
> > >   27.68%  [kernel]            [k] __page_pool_clean_page
> >
> > This seems extremely high on the list.
>
> This looks related to the cost of the dma unmap, as page_pool has
> PP_FLAG_DMA_MAP set. (It is a little strange, as page_pool uses the
> DMA_ATTR_SKIP_CPU_SYNC flag, which should make it less expensive.)
>
>
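For reference, a minimal sketch (not the mvpp2 code; the helper name and
values are made up for illustration) of how a driver typically creates a
page_pool with PP_FLAG_DMA_MAP, so the pool maps pages at allocation time
and unmaps them when a page leaves the pool, which is the cost showing up
as __page_pool_clean_page above:

#include <linux/device.h>
#include <net/page_pool.h>

static struct page_pool *example_create_rx_pool(struct device *dev,
						unsigned int ring_size)
{
	struct page_pool_params pp_params = {
		.flags     = PP_FLAG_DMA_MAP,	/* pool maps pages for us */
		.order     = 0,			/* single pages */
		.pool_size = ring_size,		/* size of the recycle ring */
		.nid       = NUMA_NO_NODE,
		.dev       = dev,
		.dma_dir   = DMA_FROM_DEVICE,	/* RX only */
	};

	/* Returns a valid pool or an ERR_PTR() on failure */
	return page_pool_create(&pp_params);
}
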
> > >    9.79%  [kernel]            [k] get_page_from_freelist
>
> You are clearly hitting the page allocator on every packet, because you are
> not using the page_pool recycle facility.
>
>
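To illustrate what the recycle facility avoids, here is a minimal sketch
(hypothetical helper, not driver code) of returning an RX page to the pool,
for example on an XDP_DROP path or when a buffer is dropped before an skb is
built, so the next allocation is served from the pool's cache instead of
get_page_from_freelist():

#include <net/page_pool.h>

static void example_reuse_rx_page(struct page_pool *pool, struct page *page)
{
	/* The "direct" (lockless, in-softirq) variant is only safe when
	 * called from the NAPI context that owns this pool.
	 */
	page_pool_recycle_direct(pool, page);
}
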
> > >    7.18%  [kernel]            [k] free_unref_page
> > >    4.64%  [kernel]            [k] build_skb
> > >    4.63%  [kernel]            [k] __netif_receive_skb_core
> > >    3.83%  [mvpp2]             [k] mvpp2_poll
> > >    3.64%  [kernel]            [k] eth_type_trans
> > >    3.61%  [kernel]            [k] kmem_cache_free
> > >    3.03%  [kernel]            [k] kmem_cache_alloc
> > >    2.76%  [kernel]            [k] dev_gro_receive
> > >    2.69%  [mvpp2]             [k] mvpp2_bm_pool_put
> > >    2.68%  [kernel]            [k] page_frag_free
> > >    1.83%  [kernel]            [k] inet_gro_receive
> > >    1.74%  [kernel]            [k] page_pool_alloc_pages
> > >    1.70%  [kernel]            [k] __build_skb
> > >    1.47%  [kernel]            [k] __alloc_pages_nodemask
> > >    1.36%  [mvpp2]             [k] mvpp2_buf_alloc.isra.0
> > >    1.29%  [kernel]            [k] tcf_action_exec
> > >
> > > I tried Ilias' patches for page_pool recycling, and I get an improvement
> > > to ~1100 Kpps, but I'm still far from the original allocator.
> --
> Best regards,
>   Jesper Dangaard Brouer
>   MSc.CS, Principal Kernel Engineer at Red Hat
>   LinkedIn: http://www.linkedin.com/in/brouer
>

The change I made to use the recycling is the following:

--- a/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c
+++ b/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c
@@ -3071,7 +3071,7 @@ static int mvpp2_rx(struct mvpp2_port *port, struct napi_struct *napi,
 		if (pp)
-			page_pool_release_page(pp, virt_to_page(data));
+			skb_mark_for_recycle(skb, virt_to_page(data), &rxq->xdp_rxq.mem);
 		else
 			dma_unmap_single_attrs(dev->dev.parent, dma_addr,
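
For context, a rough sketch of the two paths being swapped above. This is not
the actual mvpp2 code; the helper is hypothetical, and skb_mark_for_recycle()
here is assumed to have the signature from the RFC recycling patches shown in
the diff (skb, page, xdp_mem_info):

#include <linux/skbuff.h>
#include <net/page_pool.h>
#include <net/xdp.h>

static void example_rx_finish(struct page_pool *pp, struct sk_buff *skb,
			      void *data, struct xdp_mem_info *mem,
			      bool recycle)
{
	struct page *page = virt_to_page(data);

	if (recycle)
		/* Leave the page DMA-mapped; the skb free path hands it
		 * back to the page_pool instead of calling put_page().
		 */
		skb_mark_for_recycle(skb, page, mem);
	else
		/* Unmap now (the __page_pool_clean_page cost in the profile);
		 * the page is later freed by the skb path, and a fresh one
		 * has to come from the page allocator.
		 */
		page_pool_release_page(pp, page);
}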




--
Matteo Croce
per aspera ad upstream


Thread overview: 9+ messages
2019-12-24  1:01 [RFC net-next 0/2] mvpp2: page_pool support Matteo Croce
2019-12-24  1:01 ` [RFC net-next 1/2] mvpp2: use page_pool allocator Matteo Croce
2019-12-24  1:01 ` [RFC net-next 2/2] mvpp2: memory accounting Matteo Croce
2019-12-24  9:52 ` [RFC net-next 0/2] mvpp2: page_pool support Ilias Apalodimas
2019-12-24 13:34   ` Matteo Croce
2019-12-24 14:04     ` Jesper Dangaard Brouer
2019-12-24 14:00   ` Jesper Dangaard Brouer
2019-12-24 14:37     ` Matteo Croce [this message]
2019-12-27 11:51       ` Ilias Apalodimas
