netdev.vger.kernel.org archive mirror
From: Tariq Toukan <tariqt@mellanox.com>
To: Jonathan Lemon <jonathan.lemon@gmail.com>,
	Saeed Mahameed <saeedm@mellanox.com>
Cc: "ilias.apalodimas@linaro.org" <ilias.apalodimas@linaro.org>,
	Tariq Toukan <tariqt@mellanox.com>,
	"brouer@redhat.com" <brouer@redhat.com>,
	Netdev <netdev@vger.kernel.org>, kernel-team <kernel-team@fb.com>
Subject: Re: [PATCH 01/10 net-next] net/mlx5e: RX, Remove RX page-cache
Date: Sun, 20 Oct 2019 07:29:07 +0000	[thread overview]
Message-ID: <b73c8c15-3f10-d539-f648-5a0c772d9fc0@mellanox.com> (raw)
In-Reply-To: <7C9F38DB-6164-4ACB-A717-1699ACC9DCB0@gmail.com>



On 10/19/2019 3:10 AM, Jonathan Lemon wrote:
> I was running the updated patches on machines with various workloads, and
> have a bunch of different results.
> 
> For the following numbers,
>    Effective = hit / (hit + empty + stall) * 100
> 
> In other words, show the hit rate for every trip to the cache;
> the cache-full stat is ignored.
> 
> On a webserver:
> 
> [web] # ./eff
> ('rx_pool_cache_hit:', '360127643')
> ('rx_pool_cache_full:', '0')
> ('rx_pool_cache_empty:', '6455735977')
> ('rx_pool_ring_produce:', '474958')
> ('rx_pool_ring_consume:', '0')
> ('rx_pool_ring_return:', '474958')
> ('rx_pool_flush:', '144')
> ('rx_pool_node_change:', '0')
> cache effectiveness:  5.28
> 
> On a proxygen:
> # ethtool -S eth0 | grep rx_pool
>       rx_pool_cache_hit: 1646798
>       rx_pool_cache_full: 0
>       rx_pool_cache_empty: 15723566
>       rx_pool_ring_produce: 474958
>       rx_pool_ring_consume: 0
>       rx_pool_ring_return: 474958
>       rx_pool_flush: 144
>       rx_pool_node_change: 0
> cache effectiveness:  9.48
> 
> On both of these, only pages with refcount = 1 are being kept.
> 
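[The effectiveness figures above can be recomputed from the quoted counters. A small Python sketch of the formula (a reconstruction from the formula given above; the actual ./eff script is not shown in the thread):]

```python
def effectiveness(hit, empty, stall=0):
    """Hit rate per trip to the cache, as defined above:
    hit / (hit + empty + stall) * 100, ignoring the cache-full stat."""
    trips = hit + empty + stall
    return hit / trips * 100 if trips else 0.0

# Counters from the webserver and proxygen runs quoted above:
print(round(effectiveness(hit=360127643, empty=6455735977), 2))  # 5.28
print(round(effectiveness(hit=1646798, empty=15723566), 2))      # 9.48
```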
> 
> I changed things around in the page pool so:
> 
> 1) the cache behaves like a ring instead of a stack, this
>     sacrifices temporal locality.
> 
> 2) it caches all pages returned regardless of refcount, but
>     only returns pages with refcount=1.
> 
> This is the same behavior as the mlx5 cache.  Some gains
> would come about if the sojourn time through the cache is
> greater than the lifetime of the page usage by the networking
> stack, as it provides a fixed working set of mapped pages.
> 
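[The ring-plus-refcount-gating behavior described in (1) and (2) can be sketched in Python — a toy model with dict "pages", not the kernel structures. Only the head page is eligible for reuse, and a busy head stalls the whole ring, which is where the HOL stalls in the numbers below come from:]

```python
from collections import deque

class RingPageCache:
    """Toy model of the modified cache: every returned page is cached
    regardless of refcount, but pages are only handed out from the head
    of the ring, and only once their refcount has dropped to 1."""
    def __init__(self, size):
        self.ring = deque(maxlen=size)
        self.hit = self.empty = self.stall = self.full = 0

    def put(self, page):
        if len(self.ring) == self.ring.maxlen:
            self.full += 1        # ring full: page falls back to the allocator
            return False
        self.ring.append(page)    # cached regardless of refcount
        return True

    def get(self):
        if not self.ring:
            self.empty += 1
            return None
        if self.ring[0]["refcount"] > 1:
            self.stall += 1       # head still in use: HOL stall, nothing served
            return None
        self.hit += 1
        return self.ring.popleft()
```

[A busy page near the head blocks reuse of every idle page behind it, even though those pages are ready — that is the HOL cost traded for a fixed working set of mapped pages.]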
> On the web server, this is a net loss:
> [web] # ./eff
> ('rx_pool_cache_hit:', '6052662')
> ('rx_pool_cache_full:', '156355415')
> ('rx_pool_cache_empty:', '409600')
> ('rx_pool_cache_stall:', '302787473')
> ('rx_pool_ring_produce:', '156633847')
> ('rx_pool_ring_consume:', '9925520')
> ('rx_pool_ring_return:', '278788')
> ('rx_pool_flush:', '96')
> ('rx_pool_node_change:', '0')
> cache effectiveness:  1.95720846778
> 
> For proxygen on the other hand, it's a win:
> [proxy] # ./eff
> ('rx_pool_cache_hit:', '69235177')
> ('rx_pool_cache_full:', '35404387')
> ('rx_pool_cache_empty:', '460800')
> ('rx_pool_cache_stall:', '42932530')
> ('rx_pool_ring_produce:', '35717618')
> ('rx_pool_ring_consume:', '27879469')
> ('rx_pool_ring_return:', '404800')
> ('rx_pool_flush:', '108')
> ('rx_pool_node_change:', '0')
> cache effectiveness:  61.4721608624
> 
> So the correct behavior isn't clear-cut here: caching a
> working set of mapped pages is beneficial for some workloads in
> spite of the HOL-blocking stalls, but I'm sure it wouldn't be
> too difficult to exceed the working-set size.
> 
> Thoughts?
> 

We have a WIP in which we avoid the HOL blocking by returning pages 
to the available queue only when their refcount drops back to 1. This 
requires catching that case in the page/skb release path.

See:
https://github.com/xdp-project/xdp-project/tree/master/areas/mem
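
[A toy sketch of that idea in Python (dict "pages" stand in for struct page; the real WIP hooks the kernel's page/skb release path): a page re-enters the available queue only from the release path, once its refcount drops back to 1, so a still-busy page can never stall the queue:]

```python
from collections import deque

class RecycleOnReleasePool:
    """Toy model: pages are queued as available only when their refcount
    drops back to 1 (the pool's own reference), so the available queue
    holds only immediately-usable pages and HOL stalls cannot occur."""
    def __init__(self):
        self.available = deque()

    def page_put(self, page):
        # Mirrors the release path: drop one reference; when only the
        # pool's reference remains, recycle the page.
        page["refcount"] -= 1
        if page["refcount"] == 1:
            self.available.append(page)

    def get(self):
        # Every queued page is ready by construction; no refcount check.
        return self.available.popleft() if self.available else None
```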


Thread overview: 34+ messages
2019-10-16 22:50 [PATCH 00/10 net-next] page_pool cleanups Jonathan Lemon
2019-10-16 22:50 ` [PATCH 01/10 net-next] net/mlx5e: RX, Remove RX page-cache Jonathan Lemon
2019-10-17  1:30   ` Jakub Kicinski
2019-10-18 20:03     ` Jonathan Lemon
2019-10-18 20:53   ` Saeed Mahameed
2019-10-19  0:10     ` Jonathan Lemon
2019-10-20  7:29       ` Tariq Toukan [this message]
2019-10-16 22:50 ` [PATCH 02/10 net-next] net/mlx5e: RX, Manage RX pages only via page pool API Jonathan Lemon
2019-10-16 22:50 ` [PATCH 03/10 net-next] net/mlx5e: RX, Internal DMA mapping in page_pool Jonathan Lemon
2019-10-17  1:33   ` Jakub Kicinski
2019-10-18 20:04     ` Jonathan Lemon
2019-10-18 21:01   ` Saeed Mahameed
2019-10-16 22:50 ` [PATCH 04/10 net-next] page_pool: Add API to update numa node and flush page caches Jonathan Lemon
2019-10-17 12:06   ` Ilias Apalodimas
2019-10-18 21:07     ` Saeed Mahameed
2019-10-18 23:38       ` Jonathan Lemon
2019-10-16 22:50 ` [PATCH 05/10 net-next] net/mlx5e: Rx, Update page pool numa node when changed Jonathan Lemon
2019-10-16 22:50 ` [PATCH 06/10 net-next] page_pool: Add page_pool_keep_page Jonathan Lemon
2019-10-16 22:50 ` [PATCH 07/10 net-next] page_pool: allow configurable linear cache size Jonathan Lemon
2019-10-17  8:51   ` Jesper Dangaard Brouer
2019-10-18 20:31     ` Jonathan Lemon
2019-10-16 22:50 ` [PATCH 08/10 net-next] page_pool: Add statistics Jonathan Lemon
2019-10-17  8:29   ` Jesper Dangaard Brouer
2019-10-18 21:29   ` Saeed Mahameed
2019-10-18 23:37     ` Jonathan Lemon
2019-10-16 22:50 ` [PATCH 09/10 net-next] net/mlx5: Add page_pool stats to the Mellanox driver Jonathan Lemon
2019-10-17 11:09   ` Jesper Dangaard Brouer
2019-10-18 20:45     ` Jonathan Lemon
2019-10-16 22:50 ` [PATCH 10/10 net-next] page_pool: Cleanup and rename page_pool functions Jonathan Lemon
2019-10-18 20:50 ` [PATCH 00/10 net-next] page_pool cleanups Saeed Mahameed
2019-10-18 21:21   ` Saeed Mahameed
2019-10-18 23:32   ` Jonathan Lemon
2019-10-21 19:08     ` Saeed Mahameed
2019-10-21 21:45       ` Jonathan Lemon
