* [RFC net-next 0/2] mvpp2: page_pool support
@ 2019-12-24  1:01 Matteo Croce
From: Matteo Croce @ 2019-12-24  1:01 UTC (permalink / raw)
  To: netdev
  Cc: linux-kernel, Ilias Apalodimas, Lorenzo Bianconi,
	Maxime Chevallier, Antoine Tenart, Luka Perkov, Tomislav Tomasic,
	Marcin Wojtas, Stefan Chulski, Jesper Dangaard Brouer,
	Nadav Haklai

This patchset changes the memory allocator of mvpp2 from the frag allocator to
the page_pool API. This change is needed to later add XDP support to mvpp2.
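
For reference, this is roughly the difference in the buffer allocation path.
It's a simplified sketch, not the actual mvpp2 code; 'pool' and 'frag_size'
stand in for the driver's own state:

#include <linux/mm.h>		/* page_address() */
#include <linux/skbuff.h>	/* netdev_alloc_frag() */
#include <net/page_pool.h>	/* page_pool_dev_alloc_pages() */

/* old path: per-CPU page_frag allocator, returns a kernel VA */
static void *buf_alloc_old(unsigned int frag_size)
{
	return netdev_alloc_frag(frag_size);
}

/* new path: one full page per buffer, taken from a page_pool */
static void *buf_alloc_new(struct page_pool *pool)
{
	struct page *page = page_pool_dev_alloc_pages(pool);

	return page ? page_address(page) : NULL;
}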

The reason I'm sending this as an RFC is that with this changeset mvpp2 performs
much slower. This is the tc drop rate measured with a single flow:

stock net-next with frag allocator:
rx: 900.7 Mbps 1877 Kpps

this patchset with page_pool:
rx: 423.5 Mbps 882.3 Kpps

This is the perf top when receiving traffic:

  27.68%  [kernel]            [k] __page_pool_clean_page
   9.79%  [kernel]            [k] get_page_from_freelist
   7.18%  [kernel]            [k] free_unref_page
   4.64%  [kernel]            [k] build_skb
   4.63%  [kernel]            [k] __netif_receive_skb_core
   3.83%  [mvpp2]             [k] mvpp2_poll
   3.64%  [kernel]            [k] eth_type_trans
   3.61%  [kernel]            [k] kmem_cache_free
   3.03%  [kernel]            [k] kmem_cache_alloc
   2.76%  [kernel]            [k] dev_gro_receive
   2.69%  [mvpp2]             [k] mvpp2_bm_pool_put
   2.68%  [kernel]            [k] page_frag_free
   1.83%  [kernel]            [k] inet_gro_receive
   1.74%  [kernel]            [k] page_pool_alloc_pages
   1.70%  [kernel]            [k] __build_skb
   1.47%  [kernel]            [k] __alloc_pages_nodemask
   1.36%  [mvpp2]             [k] mvpp2_buf_alloc.isra.0
   1.29%  [kernel]            [k] tcf_action_exec

I tried Ilias' patches for page_pool recycling and I get an improvement
to ~1100 Kpps, but I'm still far from the original allocator.

Any idea on why I get such bad numbers?

Another reason to send this as an RFC is that I'm not fully convinced about how
to use the page_pool given the HW limitations of the BM.

The driver currently uses, for every CPU, one page_pool for short packets and
another for long ones. The driver also has 4 RX queues per port, so for example
RXQ #1 of every port will share the short and long page pools of CPU #1.
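
To illustrate the layout, the pool setup looks roughly like this (a sketch
only; the function name, flags and pool_size are illustrative, not the exact
values from the patch), and it is done twice per CPU, once for the short
pool and once for the long one:

#include <linux/dma-mapping.h>
#include <linux/topology.h>
#include <net/page_pool.h>

static struct page_pool *create_pool_sketch(struct device *dev, int cpu)
{
	struct page_pool_params pp_params = {
		.order		= 0,			/* one page per buffer */
		.flags		= PP_FLAG_DMA_MAP,	/* pool does the DMA mapping */
		.pool_size	= 2048,			/* illustrative only */
		.nid		= cpu_to_node(cpu),
		.dev		= dev,
		.dma_dir	= DMA_FROM_DEVICE,
	};

	return page_pool_create(&pp_params);		/* ERR_PTR() on failure */
}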

This means that for every RX queue I call xdp_rxq_info_reg_mem_model() twice,
on two different page_pools. Can this be a problem?
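
For context, this is roughly what the usual single-pool registration looks
like (a simplified sketch with hypothetical names, error unwinding omitted);
the question above is about doing the second step twice, once per pool:

#include <net/page_pool.h>
#include <net/xdp.h>

static int rxq_register_pool_sketch(struct xdp_rxq_info *xdp_rxq,
				    struct net_device *dev, u32 queue_id,
				    struct page_pool *pool)
{
	int err;

	err = xdp_rxq_info_reg(xdp_rxq, dev, queue_id);
	if (err)
		return err;

	/* tell XDP how buffers of this RXQ are allocated/recycled */
	return xdp_rxq_info_reg_mem_model(xdp_rxq, MEM_TYPE_PAGE_POOL, pool);
}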

As usual, ideas are welcome.

Matteo Croce (2):
  mvpp2: use page_pool allocator
  mvpp2: memory accounting

 drivers/net/ethernet/marvell/Kconfig          |   1 +
 drivers/net/ethernet/marvell/mvpp2/mvpp2.h    |   7 +
 .../net/ethernet/marvell/mvpp2/mvpp2_main.c   | 142 +++++++++++++++---
 3 files changed, 125 insertions(+), 25 deletions(-)

-- 
2.24.1


Thread overview: 9+ messages
2019-12-24  1:01 [RFC net-next 0/2] mvpp2: page_pool support Matteo Croce
2019-12-24  1:01 ` [RFC net-next 1/2] mvpp2: use page_pool allocator Matteo Croce
2019-12-24  1:01 ` [RFC net-next 2/2] mvpp2: memory accounting Matteo Croce
2019-12-24  9:52 ` [RFC net-next 0/2] mvpp2: page_pool support Ilias Apalodimas
2019-12-24 13:34   ` Matteo Croce
2019-12-24 14:04     ` Jesper Dangaard Brouer
2019-12-24 14:00   ` Jesper Dangaard Brouer
2019-12-24 14:37     ` Matteo Croce
2019-12-27 11:51       ` Ilias Apalodimas
