From: Matteo Croce <mcroce@redhat.com>
To: netdev@vger.kernel.org
Cc: linux-kernel@vger.kernel.org,
	Ilias Apalodimas <ilias.apalodimas@linaro.org>,
	Lorenzo Bianconi <lorenzo@kernel.org>,
	Maxime Chevallier <maxime.chevallier@bootlin.com>,
	Antoine Tenart <antoine.tenart@bootlin.com>,
	Luka Perkov <luka.perkov@sartura.hr>,
	Tomislav Tomasic <tomislav.tomasic@sartura.hr>,
	Marcin Wojtas <mw@semihalf.com>,
	Stefan Chulski <stefanc@marvell.com>,
	Jesper Dangaard Brouer <brouer@redhat.com>,
	Nadav Haklai <nadavh@marvell.com>
Subject: [RFC net-next 0/2] mvpp2: page_pool support
Date: Tue, 24 Dec 2019 02:01:01 +0100
Message-ID: <20191224010103.56407-1-mcroce@redhat.com>

This patchset changes the memory allocator of mvpp2 from the frag allocator to
the page_pool API. This change is needed to later add XDP support to mvpp2.
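
For reference, the conversion is roughly shaped as below: buffers that used to
come from the frag allocator are instead allocated from a page_pool. This is
only a minimal sketch of the page_pool API as I understand it; the function
names, pool parameters and DMA handling shown here are illustrative, not the
exact code in the patches.

#include <linux/dma-direction.h>
#include <linux/numa.h>
#include <net/page_pool.h>

/* Illustrative only: one page_pool per BM pool, sized by the number of RX
 * buffers the pool keeps in flight.
 */
static struct page_pool *mvpp2_create_pool_sketch(struct device *dev,
						  unsigned int nbufs)
{
	struct page_pool_params pp_params = {
		.flags		= PP_FLAG_DMA_MAP,	/* let the pool map DMA */
		.order		= 0,			/* single pages */
		.pool_size	= nbufs,
		.nid		= NUMA_NO_NODE,
		.dev		= dev,
		.dma_dir	= DMA_FROM_DEVICE,
	};

	return page_pool_create(&pp_params);		/* ERR_PTR() on failure */
}

/* Buffer refill becomes a page_pool allocation instead of a frag allocation. */
static void *mvpp2_buf_alloc_sketch(struct page_pool *pool, dma_addr_t *dma)
{
	struct page *page = page_pool_dev_alloc_pages(pool);

	if (!page)
		return NULL;

	*dma = page_pool_get_dma_addr(page);
	return page_address(page);
}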

The reason I send it as an RFC is that with this changeset, mvpp2 performs much
slower. This is the tc drop rate measured with a single flow:

stock net-next with frag allocator:
rx: 900.7 Mbps 1877 Kpps

this patchset with page_pool:
rx: 423.5 Mbps 882.3 Kpps

This is the perf top when receiving traffic:

  27.68%  [kernel]            [k] __page_pool_clean_page
   9.79%  [kernel]            [k] get_page_from_freelist
   7.18%  [kernel]            [k] free_unref_page
   4.64%  [kernel]            [k] build_skb
   4.63%  [kernel]            [k] __netif_receive_skb_core
   3.83%  [mvpp2]             [k] mvpp2_poll
   3.64%  [kernel]            [k] eth_type_trans
   3.61%  [kernel]            [k] kmem_cache_free
   3.03%  [kernel]            [k] kmem_cache_alloc
   2.76%  [kernel]            [k] dev_gro_receive
   2.69%  [mvpp2]             [k] mvpp2_bm_pool_put
   2.68%  [kernel]            [k] page_frag_free
   1.83%  [kernel]            [k] inet_gro_receive
   1.74%  [kernel]            [k] page_pool_alloc_pages
   1.70%  [kernel]            [k] __build_skb
   1.47%  [kernel]            [k] __alloc_pages_nodemask
   1.36%  [mvpp2]             [k] mvpp2_buf_alloc.isra.0
   1.29%  [kernel]            [k] tcf_action_exec

I tried Ilias' patches for page_pool recycling and got an improvement
to ~1100 Kpps, but I'm still far from the original allocator.

Any idea on why I get such bad numbers?

Another reason to send it as an RFC is that I'm not fully convinced about how
to use the page_pool given the HW limitations of the BM.

The driver currently uses, for every CPU, one page_pool for short packets and
another for long ones. The driver also has 4 RX queues per port, so every
RXQ #1 will share the short and long page pools of CPU #1.

This means that for every RX queue I call xdp_rxq_info_reg_mem_model() twice,
on two different page_pools. Can this be a problem?
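
To make the question concrete, the per-RXQ registration looks roughly like the
hypothetical helper below; pp_short/pp_long stand for the CPU's short and long
page pools and are illustrative names, not the actual fields in the patches.

#include <linux/netdevice.h>
#include <net/page_pool.h>
#include <net/xdp.h>

/* Sketch of the double registration described above: the same xdp_rxq_info
 * gets one memory model for the short-packet pool and a second one for the
 * long-packet pool of the CPU owning the queue.
 */
static int mvpp2_rxq_reg_mem_sketch(struct net_device *dev,
				    struct xdp_rxq_info *xdp_rxq,
				    u32 queue_id,
				    struct page_pool *pp_short,
				    struct page_pool *pp_long)
{
	int err;

	err = xdp_rxq_info_reg(xdp_rxq, dev, queue_id);
	if (err)
		return err;

	err = xdp_rxq_info_reg_mem_model(xdp_rxq, MEM_TYPE_PAGE_POOL, pp_short);
	if (err)
		return err;

	/* second registration on the same rxq_info: the open question above */
	return xdp_rxq_info_reg_mem_model(xdp_rxq, MEM_TYPE_PAGE_POOL, pp_long);
}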

As usual, ideas are welcome.

Matteo Croce (2):
  mvpp2: use page_pool allocator
  mvpp2: memory accounting

 drivers/net/ethernet/marvell/Kconfig          |   1 +
 drivers/net/ethernet/marvell/mvpp2/mvpp2.h    |   7 +
 .../net/ethernet/marvell/mvpp2/mvpp2_main.c   | 142 +++++++++++++++---
 3 files changed, 125 insertions(+), 25 deletions(-)

-- 
2.24.1


Thread overview: 9+ messages
2019-12-24  1:01 Matteo Croce [this message]
2019-12-24  1:01 ` [RFC net-next 1/2] mvpp2: use page_pool allocator Matteo Croce
2019-12-24  1:01 ` [RFC net-next 2/2] mvpp2: memory accounting Matteo Croce
2019-12-24  9:52 ` [RFC net-next 0/2] mvpp2: page_pool support Ilias Apalodimas
2019-12-24 13:34   ` Matteo Croce
2019-12-24 14:04     ` Jesper Dangaard Brouer
2019-12-24 14:00   ` Jesper Dangaard Brouer
2019-12-24 14:37     ` Matteo Croce
2019-12-27 11:51       ` Ilias Apalodimas
