From: Ilias Apalodimas <ilias.apalodimas@linaro.org>
To: Matteo Croce <mcroce@redhat.com>
Cc: netdev@vger.kernel.org, linux-kernel@vger.kernel.org,
	Lorenzo Bianconi <lorenzo@kernel.org>,
	Maxime Chevallier <maxime.chevallier@bootlin.com>,
	Antoine Tenart <antoine.tenart@bootlin.com>,
	Luka Perkov <luka.perkov@sartura.hr>,
	Tomislav Tomasic <tomislav.tomasic@sartura.hr>,
	Marcin Wojtas <mw@semihalf.com>,
	Stefan Chulski <stefanc@marvell.com>,
	Jesper Dangaard Brouer <brouer@redhat.com>,
	Nadav Haklai <nadavh@marvell.com>
Subject: Re: [RFC net-next 0/2] mvpp2: page_pool support
Date: Tue, 24 Dec 2019 11:52:29 +0200
Message-ID: <20191224095229.GA24310@apalos.home>
In-Reply-To: <20191224010103.56407-1-mcroce@redhat.com>

On Tue, Dec 24, 2019 at 02:01:01AM +0100, Matteo Croce wrote:
> This patchset changes the memory allocator of mvpp2 from the frag allocator to
> the page_pool API. This change is needed to later add XDP support to mvpp2.
> 
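
For readers following the discussion, the allocator change boils down to
roughly the contrast below. This is a simplified sketch, not the actual mvpp2
code or this patchset; the helper names are invented for illustration.

#include <linux/skbuff.h>
#include <linux/dma-mapping.h>
#include <net/page_pool.h>

/* Old scheme (simplified): per-CPU page-frag buffers, with the driver
 * doing its own DMA mapping per buffer.
 */
static void *example_frag_rx_buf(struct device *dev, unsigned int size,
				 dma_addr_t *dma)
{
	void *buf = netdev_alloc_frag(size);

	if (!buf)
		return NULL;

	*dma = dma_map_single(dev, buf, size, DMA_FROM_DEVICE);
	if (dma_mapping_error(dev, *dma)) {
		skb_free_frag(buf);
		return NULL;
	}
	return buf;
}

/* New scheme (simplified): whole pages come from a page_pool, which can
 * also own the DMA mapping when created with PP_FLAG_DMA_MAP.
 */
static struct page *example_pp_rx_buf(struct page_pool *pool)
{
	return page_pool_alloc_pages(pool, GFP_ATOMIC | __GFP_NOWARN);
}
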
> The reason I send it as RFC is that with this changeset, mvpp2 performs much
> slower. This is the tc drop rate measured with a single flow:
> 
> stock net-next with frag allocator:
> rx: 900.7 Mbps 1877 Kpps
> 
> this patchset with page_pool:
> rx: 423.5 Mbps 882.3 Kpps
> 
> This is the perf top when receiving traffic:
> 
>   27.68%  [kernel]            [k] __page_pool_clean_page

This seems extremely high on the list.
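
My first guess (not verified against the source) is that the cost is the
per-packet DMA unmap: every page that leaves the pool without being recycled
has to give back the mapping that page_pool_alloc_pages() set up. Roughly
what the function does, paraphrased from memory rather than quoted:

#include <linux/dma-mapping.h>
#include <net/page_pool.h>

/* Paraphrased sketch of __page_pool_clean_page(): tear down the DMA
 * mapping of a page that is leaving the pool un-recycled. With no
 * recycling in the driver, this runs once per received packet.
 */
static void example_clean_page(struct page_pool *pool, struct page *page)
{
	dma_addr_t dma;

	if (!(pool->p.flags & PP_FLAG_DMA_MAP))
		return;

	dma = page->dma_addr;
	dma_unmap_page_attrs(pool->p.dev, dma, PAGE_SIZE << pool->p.order,
			     pool->p.dma_dir, DMA_ATTR_SKIP_CPU_SYNC);
	page->dma_addr = 0;
}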

>    9.79%  [kernel]            [k] get_page_from_freelist
>    7.18%  [kernel]            [k] free_unref_page
>    4.64%  [kernel]            [k] build_skb
>    4.63%  [kernel]            [k] __netif_receive_skb_core
>    3.83%  [mvpp2]             [k] mvpp2_poll
>    3.64%  [kernel]            [k] eth_type_trans
>    3.61%  [kernel]            [k] kmem_cache_free
>    3.03%  [kernel]            [k] kmem_cache_alloc
>    2.76%  [kernel]            [k] dev_gro_receive
>    2.69%  [mvpp2]             [k] mvpp2_bm_pool_put
>    2.68%  [kernel]            [k] page_frag_free
>    1.83%  [kernel]            [k] inet_gro_receive
>    1.74%  [kernel]            [k] page_pool_alloc_pages
>    1.70%  [kernel]            [k] __build_skb
>    1.47%  [kernel]            [k] __alloc_pages_nodemask
>    1.36%  [mvpp2]             [k] mvpp2_buf_alloc.isra.0
>    1.29%  [kernel]            [k] tcf_action_exec
> 
> I tried Ilias' patches for page_pool recycling and I get an improvement
> to ~1100 Kpps, but I'm still far from the original allocator.

Can you post the recycling perf for comparison?

> 
> Any idea why I get such bad numbers?

Nope, but it's indeed strange.

> 
> Another reason to send it as RFC is that I'm not fully convinced about how to
> use the page_pool given the HW limitations of the BM (buffer manager).

I'll have a look right after holidays

> 
> The driver currently uses, for every CPU, a page_pool for short packets and
> another for long ones. The driver also has 4 RX queues per port, so RXQ #1 of
> every port will share the short and long page pools of CPU #1.
> 

I am not sure I am following the hardware config here.
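
Just to make sure I read it right, I picture the setup roughly as below. This
is a hypothetical sketch: the helper name, the per-CPU arrays and the pool
sizes are invented; only the page_pool_params fields follow the API.

#include <linux/cpumask.h>
#include <linux/dma-mapping.h>
#include <linux/numa.h>
#include <net/page_pool.h>

/* One short-buffer and one long-buffer pool per CPU, later shared by
 * RXQ #n of every port. Names and sizes are made up for illustration.
 */
static struct page_pool *example_pool_create(struct device *dev,
					     unsigned int pool_size)
{
	struct page_pool_params pp_params = {
		.order		= 0,
		.flags		= PP_FLAG_DMA_MAP,	/* pool owns the DMA mapping */
		.pool_size	= pool_size,
		.nid		= NUMA_NO_NODE,
		.dev		= dev,
		.dma_dir	= DMA_FROM_DEVICE,
	};

	return page_pool_create(&pp_params);	/* ERR_PTR() on failure */
}

static void example_create_per_cpu_pools(struct device *dev,
					 struct page_pool *pool_short[],
					 struct page_pool *pool_long[])
{
	int cpu;

	for_each_present_cpu(cpu) {
		pool_short[cpu] = example_pool_create(dev, 2048);
		pool_long[cpu]  = example_pool_create(dev, 2048);
		/* error handling omitted for brevity */
	}
}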

> This means that for every RX queue I call xdp_rxq_info_reg_mem_model() twice,
> on two different page_pools. Can this be a problem?
> 
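For reference, the registration in question would look something like the
sketch below (illustrative only; I have not checked how the XDP core treats a
second registration on the same rxq_info, which is exactly the open question):

#include <net/page_pool.h>
#include <net/xdp.h>

/* Hypothetical per-RXQ setup mirroring the description above: the same
 * xdp_rxq_info registered against both the short and the long pool.
 */
static int example_reg_mem_models(struct xdp_rxq_info *xdp_rxq,
				  struct page_pool *pool_short,
				  struct page_pool *pool_long)
{
	int err;

	err = xdp_rxq_info_reg_mem_model(xdp_rxq, MEM_TYPE_PAGE_POOL,
					 pool_short);
	if (err)
		return err;

	/* second registration on the same rxq_info, different allocator */
	return xdp_rxq_info_reg_mem_model(xdp_rxq, MEM_TYPE_PAGE_POOL,
					  pool_long);
}
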
> As usual, ideas are welcome.
> 
> Matteo Croce (2):
>   mvpp2: use page_pool allocator
>   mvpp2: memory accounting
> 
>  drivers/net/ethernet/marvell/Kconfig          |   1 +
>  drivers/net/ethernet/marvell/mvpp2/mvpp2.h    |   7 +
>  .../net/ethernet/marvell/mvpp2/mvpp2_main.c   | 142 +++++++++++++++---
>  3 files changed, 125 insertions(+), 25 deletions(-)
> 
> -- 
> 2.24.1
> 
Cheers
/Ilias


Thread overview: 9+ messages
2019-12-24  1:01 [RFC net-next 0/2] mvpp2: page_pool support Matteo Croce
2019-12-24  1:01 ` [RFC net-next 1/2] mvpp2: use page_pool allocator Matteo Croce
2019-12-24  1:01 ` [RFC net-next 2/2] mvpp2: memory accounting Matteo Croce
2019-12-24  9:52 ` Ilias Apalodimas [this message]
2019-12-24 13:34   ` [RFC net-next 0/2] mvpp2: page_pool support Matteo Croce
2019-12-24 14:04     ` Jesper Dangaard Brouer
2019-12-24 14:00   ` Jesper Dangaard Brouer
2019-12-24 14:37     ` Matteo Croce
2019-12-27 11:51       ` Ilias Apalodimas
