linux-kernel.vger.kernel.org archive mirror
* [PATCH v1 net] octeontx2-pf: fix page_pool creation fail for rings > 32k
@ 2023-08-14 13:24 Ratheesh Kannoth
  2023-08-15  8:12 ` Simon Horman
  2023-08-15  8:41 ` Jesper Dangaard Brouer
  0 siblings, 2 replies; 5+ messages in thread
From: Ratheesh Kannoth @ 2023-08-14 13:24 UTC (permalink / raw)
  To: netdev, linux-kernel
  Cc: kuba, pabeni, edumazet, aleksander.lobakin, hawk, sgoutham,
	gakula, sbhatta, hkelam, Ratheesh Kannoth

octeontx2 driver calls page_pool_create() during driver probe()
and fails if queue size > 32k. Page pool infra uses these buffers
as shock absorbers for burst traffic. These pages are pinned
down as soon as page pool is created. As page pool does direct
recycling way more aggressivelyi, often times ptr_ring is left
unused at all. Instead of clamping page_pool size to 32k at
most, limit it even more to 2k to avoid wasting memory on much
less used ptr_ring.

Fixes: b2e3406a38f0 ("octeontx2-pf: Add support for page pool")
Suggested-by: Alexander Lobakin <aleksander.lobakin@intel.com>
Signed-off-by: Ratheesh Kannoth <rkannoth@marvell.com>

---

ChangeLogs:

v0->v1: Commit message changes.
---
---
 drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.c | 2 +-
 drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.h | 2 ++
 2 files changed, 3 insertions(+), 1 deletion(-)

diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.c b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.c
index 77c8f650f7ac..fc8a1220eb39 100644
--- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.c
+++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.c
@@ -1432,7 +1432,7 @@ int otx2_pool_init(struct otx2_nic *pfvf, u16 pool_id,
 	}
 
 	pp_params.flags = PP_FLAG_PAGE_FRAG | PP_FLAG_DMA_MAP;
-	pp_params.pool_size = numptrs;
+	pp_params.pool_size = OTX2_PAGE_POOL_SZ;
 	pp_params.nid = NUMA_NO_NODE;
 	pp_params.dev = pfvf->dev;
 	pp_params.dma_dir = DMA_FROM_DEVICE;
diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.h b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.h
index ba8091131ec0..f6fea43617ff 100644
--- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.h
+++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.h
@@ -30,6 +30,8 @@
 #include <rvu_trace.h>
 #include "qos.h"
 
+#define OTX2_PAGE_POOL_SZ 2048
+
 /* IPv4 flag more fragment bit */
 #define IPV4_FLAG_MORE				0x20
 
-- 
2.25.1
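
For context, here is a minimal standalone sketch of the pattern the hunk above
lands on. The helper name otx2_create_pool() and its freestanding form are
illustrative only (the real change lives inside otx2_pool_init()), and it
assumes the page_pool API as of this thread (net/page_pool.h, with
PP_FLAG_PAGE_FRAG still present):

#include <linux/dma-mapping.h>
#include <net/page_pool.h>

#define OTX2_PAGE_POOL_SZ	2048	/* ptr_ring slots kept for recycling */

/* Illustrative helper: size the pool with a small fixed constant rather
 * than the RX ring length, since page_pool_create() rejects ring sizes
 * above its internal 32k sanity cap and direct recycling leaves most of
 * a larger ptr_ring unused anyway.
 */
static struct page_pool *otx2_create_pool(struct device *dev)
{
	struct page_pool_params pp_params = { 0 };

	pp_params.flags     = PP_FLAG_PAGE_FRAG | PP_FLAG_DMA_MAP;
	pp_params.pool_size = OTX2_PAGE_POOL_SZ;
	pp_params.nid       = NUMA_NO_NODE;
	pp_params.dev       = dev;
	pp_params.dma_dir   = DMA_FROM_DEVICE;

	return page_pool_create(&pp_params);	/* ERR_PTR() on failure */
}

A caller would check the result with IS_ERR()/PTR_ERR() before use, as the
driver's existing error handling around page_pool_create() already does.
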


^ permalink raw reply related	[flat|nested] 5+ messages in thread

* Re: [PATCH v1 net] octeontx2-pf: fix page_pool creation fail for rings > 32k
  2023-08-14 13:24 [PATCH v1 net] octeontx2-pf: fix page_pool creation fail for rings > 32k Ratheesh Kannoth
@ 2023-08-15  8:12 ` Simon Horman
  2023-08-16  2:15   ` [EXT] " Ratheesh Kannoth
  2023-08-15  8:41 ` Jesper Dangaard Brouer
  1 sibling, 1 reply; 5+ messages in thread
From: Simon Horman @ 2023-08-15  8:12 UTC (permalink / raw)
  To: Ratheesh Kannoth
  Cc: netdev, linux-kernel, kuba, pabeni, edumazet, aleksander.lobakin,
	hawk, sgoutham, gakula, sbhatta, hkelam, davem

On Mon, Aug 14, 2023 at 06:54:11PM +0530, Ratheesh Kannoth wrote:

+ David S. Miller <davem@davemloft.net>
  Corrected address for Paolo Abeni <pabeni@redhat.com>

> octeontx2 driver calls page_pool_create() during driver probe()
> and fails if queue size > 32k. Page pool infra uses these buffers
> as shock absorbers for burst traffic. These pages are pinned
> down as soon as page pool is created. As page pool does direct
> recycling way more aggressivelyi, often times ptr_ring is left

nit: aggressivelyi -> aggressively

> unused at all. Instead of clamping page_pool size to 32k at
> most, limit it even more to 2k to avoid wasting memory on much
> less used ptr_ring.
> 
> Fixes: b2e3406a38f0 ("octeontx2-pf: Add support for page pool")
> Suggested-by: Alexander Lobakin <aleksander.lobakin@intel.com>
> Signed-off-by: Ratheesh Kannoth <rkannoth@marvell.com>
> 
> ---
> 
> ChangeLogs:
> 
> v0->v1: Commit message changes.
> ---
> ---
>  drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.c | 2 +-
>  drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.h | 2 ++
>  2 files changed, 3 insertions(+), 1 deletion(-)
> 
> diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.c b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.c
> index 77c8f650f7ac..fc8a1220eb39 100644
> --- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.c
> +++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.c
> @@ -1432,7 +1432,7 @@ int otx2_pool_init(struct otx2_nic *pfvf, u16 pool_id,
>  	}
>  
>  	pp_params.flags = PP_FLAG_PAGE_FRAG | PP_FLAG_DMA_MAP;
> -	pp_params.pool_size = numptrs;
> +	pp_params.pool_size = OTX2_PAGE_POOL_SZ;
>  	pp_params.nid = NUMA_NO_NODE;
>  	pp_params.dev = pfvf->dev;
>  	pp_params.dma_dir = DMA_FROM_DEVICE;
> diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.h b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.h
> index ba8091131ec0..f6fea43617ff 100644
> --- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.h
> +++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.h
> @@ -30,6 +30,8 @@
>  #include <rvu_trace.h>
>  #include "qos.h"
>  
> +#define OTX2_PAGE_POOL_SZ 2048
> +
>  /* IPv4 flag more fragment bit */
>  #define IPV4_FLAG_MORE				0x20
>  
> -- 
> 2.25.1
> 
> 

^ permalink raw reply	[flat|nested] 5+ messages in thread

* Re: [PATCH v1 net] octeontx2-pf: fix page_pool creation fail for rings > 32k
  2023-08-14 13:24 [PATCH v1 net] octeontx2-pf: fix page_pool creation fail for rings > 32k Ratheesh Kannoth
  2023-08-15  8:12 ` Simon Horman
@ 2023-08-15  8:41 ` Jesper Dangaard Brouer
  2023-08-16  2:14   ` [EXT] " Ratheesh Kannoth
  1 sibling, 1 reply; 5+ messages in thread
From: Jesper Dangaard Brouer @ 2023-08-15  8:41 UTC (permalink / raw)
  To: Ratheesh Kannoth, netdev, linux-kernel
  Cc: kuba, pabeni, edumazet, aleksander.lobakin, hawk, sgoutham,
	gakula, sbhatta, hkelam



On 14/08/2023 15.24, Ratheesh Kannoth wrote:
> octeontx2 driver calls page_pool_create() during driver probe()
> and fails if queue size > 32k. Page pool infra uses these buffers
> as shock absorbers for burst traffic. These pages are pinned
> down as soon as page pool is created. 

It isn't true that "pages are pinned down as soon as page pool is created".
We need to improve this commit text.
My suggestion:

  These pages are pinned down over time as working sets vary. Due to
  the recycling nature of page pool, and given that page pool (currently)
  doesn't have a shrinker mechanism, the pages remain pinned down in ptr_ring.

> As page pool does direct
> recycling way more aggressivelyi, often times ptr_ring is left
                                  ^
Typo
(my suggestion already covers recycling)

> unused at all. Instead of clamping page_pool size to 32k at
> most, limit it even more to 2k to avoid wasting memory on much
> less used ptr_ring.

I would adjust and delete "much less used".

I assume you have the octeontx2 hardware available (which I don't).
Can you test that this adjustment to 2k doesn't cause a performance
regression on your hardware?
And then produce a statement in the commit desc like:

  This has been tested on octeontx2 hardware (devel board xyz).
  TCP and UDP tests using netperf show no performance regressions.


2K entries with a page_size of 4KiB is around 8MiB if the PP gets full.

It would be convincing if the commit message said e.g. that a PP pool_size of
2k can pin down at most 8MiB per RX-queue (assuming a 4K page size), but that
is okay as systems using octeontx2 hardware often have many GB of memory.
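
A quick back-of-the-envelope check of those numbers (userspace scratch math,
not driver code; the 4 KiB page size and the RX-queue count are assumptions
for illustration):

#include <stdio.h>

int main(void)
{
	unsigned long page_sz = 4096;	/* assumed page size in bytes */
	unsigned long pool_sz = 2048;	/* proposed OTX2_PAGE_POOL_SZ */
	unsigned long old_cap = 32768;	/* page_pool's current upper bound */
	unsigned long rxqs    = 8;	/* example RX queue count */

	printf("per RX queue:   %lu MiB\n", pool_sz * page_sz >> 20);           /* 8   */
	printf("%lu RX queues:    %lu MiB\n", rxqs, pool_sz * page_sz * rxqs >> 20); /* 64  */
	printf("at the 32k cap: %lu MiB per queue\n", old_cap * page_sz >> 20); /* 128 */
	return 0;
}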

> 
> Fixes: b2e3406a38f0 ("octeontx2-pf: Add support for page pool")
> Suggested-by: Alexander Lobakin <aleksander.lobakin@intel.com>
> Signed-off-by: Ratheesh Kannoth <rkannoth@marvell.com>
> 
> ---
> 
> ChangeLogs:
> 
> v0->v1: Commit message changes.
> ---
> ---
>   drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.c | 2 +-
>   drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.h | 2 ++
>   2 files changed, 3 insertions(+), 1 deletion(-)
> 
> diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.c b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.c
> index 77c8f650f7ac..fc8a1220eb39 100644
> --- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.c
> +++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.c
> @@ -1432,7 +1432,7 @@ int otx2_pool_init(struct otx2_nic *pfvf, u16 pool_id,
>   	}
>   
>   	pp_params.flags = PP_FLAG_PAGE_FRAG | PP_FLAG_DMA_MAP;
> -	pp_params.pool_size = numptrs;
> +	pp_params.pool_size = OTX2_PAGE_POOL_SZ;
>   	pp_params.nid = NUMA_NO_NODE;
>   	pp_params.dev = pfvf->dev;
>   	pp_params.dma_dir = DMA_FROM_DEVICE;
> diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.h b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.h
> index ba8091131ec0..f6fea43617ff 100644
> --- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.h
> +++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.h
> @@ -30,6 +30,8 @@
>   #include <rvu_trace.h>
>   #include "qos.h"
>   
> +#define OTX2_PAGE_POOL_SZ 2048
> +
>   /* IPv4 flag more fragment bit */
>   #define IPV4_FLAG_MORE				0x20
>   

^ permalink raw reply	[flat|nested] 5+ messages in thread

* RE: [EXT] Re: [PATCH v1 net] octeontx2-pf: fix page_pool creation fail for rings > 32k
  2023-08-15  8:41 ` Jesper Dangaard Brouer
@ 2023-08-16  2:14   ` Ratheesh Kannoth
  0 siblings, 0 replies; 5+ messages in thread
From: Ratheesh Kannoth @ 2023-08-16  2:14 UTC (permalink / raw)
  To: Jesper Dangaard Brouer, netdev, linux-kernel
  Cc: kuba, pabeni, edumazet, aleksander.lobakin,
	Sunil Kovvuri Goutham, Geethasowjanya Akula,
	Subbaraya Sundeep Bhatta, Hariprasad Kelam

> From: Jesper Dangaard Brouer <hawk@kernel.org>
> Sent: Tuesday, August 15, 2023 2:12 PM
> To: Ratheesh Kannoth <rkannoth@marvell.com>; netdev@vger.kernel.org;
> Subject: [EXT] Re: [PATCH v1 net] octeontx2-pf: fix page_pool creation fail
> for rings > 32k
> 
>   These pages are pinned down over time as working sets vary. Due to
>   the recycling nature of page pool, and given that page pool (currently)
>   doesn't have a shrinker mechanism, the pages remain pinned down in ptr_ring.

ACK

> 
> > As page pool does direct
> > recycling way more aggressivelyi, often times ptr_ring is left
>                                   ^
> Typo
> (my suggestion already covers recycling)
ACK, will remove this line.

> > unused at all. Instead of clamping page_pool size to 32k at most,
> > limit it even more to 2k to avoid wasting memory on much less used
> > ptr_ring.
> 
> I would adjust and delete "much less used".
ACK

> And then produce a statement in the commit desc like:
> 
>   This has been tested on octeontx2 hardware (devel board xyz).
>   TCP and UDP tests using netperf show no performance regressions.
Let me do more testing and get back.

^ permalink raw reply	[flat|nested] 5+ messages in thread

* RE: [EXT] Re: [PATCH v1 net] octeontx2-pf: fix page_pool creation fail for rings > 32k
  2023-08-15  8:12 ` Simon Horman
@ 2023-08-16  2:15   ` Ratheesh Kannoth
  0 siblings, 0 replies; 5+ messages in thread
From: Ratheesh Kannoth @ 2023-08-16  2:15 UTC (permalink / raw)
  To: Simon Horman
  Cc: netdev, linux-kernel, kuba, pabeni, edumazet, aleksander.lobakin,
	hawk, Sunil Kovvuri Goutham, Geethasowjanya Akula,
	Subbaraya Sundeep Bhatta, Hariprasad Kelam, davem

> From: Simon Horman <horms@kernel.org>
> Subject: [EXT] Re: [PATCH v1 net] octeontx2-pf: fix page_pool creation fail
> for rings > 32k


> + David S. Miller <davem@davemloft.net>
>   Corrected address for Paolo Abeni <pabeni@redhat.com>
ACK
 
> nit: aggressivelyi -> aggressively
ACK


^ permalink raw reply	[flat|nested] 5+ messages in thread
