* [PATCH v4 0/2] net: fec: add xdp and page pool statistics
@ 2022-11-15 15:57 Shenwei Wang
  2022-11-15 15:57 ` [PATCH v4 1/2] net: page_pool: export page_pool_stats definition Shenwei Wang
                   ` (2 more replies)
  0 siblings, 3 replies; 10+ messages in thread
From: Shenwei Wang @ 2022-11-15 15:57 UTC (permalink / raw)
  To: David S. Miller, Eric Dumazet, Jakub Kicinski, Paolo Abeni
  Cc: Jesper Dangaard Brouer, Ilias Apalodimas, Alexei Starovoitov,
	Daniel Borkmann, John Fastabend, netdev, linux-kernel, imx,
	Shenwei Wang

 Changes in v4:
 - Use u64 to record the XDP statistics
 - Change strncpy to strscpy
 - Remove the "PAGE_POOL_STATS" select per Alexander's feedback
 - Export the page_pool_stats definition in page_pool.h

 Changes in v3:
 - Change memcpy to strncpy to fix the warning reported by Paolo Abeni
 - Fix the compile errors on powerpc

 Changes in v2:
 - Clean up and restructure the code per Andrew Lunn's review comments
 - Clear the statistics when the adapter is down

Shenwei Wang (2):
  net: page_pool: export page_pool_stats definition
  net: fec: add xdp and page pool statistics

 drivers/net/ethernet/freescale/fec.h      |  15 ++++
 drivers/net/ethernet/freescale/fec_main.c | 100 ++++++++++++++++++++--
 include/net/page_pool.h                   |   2 +-
 3 files changed, 110 insertions(+), 7 deletions(-)

--
2.34.1



* [PATCH v4 1/2] net: page_pool: export page_pool_stats definition
  2022-11-15 15:57 [PATCH v4 0/2] net: fec: add xdp and page pool statistics Shenwei Wang
@ 2022-11-15 15:57 ` Shenwei Wang
  2022-11-15 17:12   ` Andrew Lunn
  2022-11-15 15:57 ` [PATCH v4 2/2] net: fec: add xdp and page pool statistics Shenwei Wang
  2022-11-15 17:57 ` [PATCH v4 0/2] " Saeed Mahameed
  2 siblings, 1 reply; 10+ messages in thread
From: Shenwei Wang @ 2022-11-15 15:57 UTC (permalink / raw)
  To: David S. Miller, Eric Dumazet, Jakub Kicinski, Paolo Abeni
  Cc: Jesper Dangaard Brouer, Ilias Apalodimas, Alexei Starovoitov,
	Daniel Borkmann, John Fastabend, netdev, linux-kernel, imx,
	Shenwei Wang

The definition of 'struct page_pool_stats' is required even when
CONFIG_PAGE_POOL_STATS is not enabled. Otherwise, every driver is
forced to handle the case where CONFIG_PAGE_POOL_STATS is undefined.
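
For illustration, without the exported definition a driver has to wrap
every use of the type in guards like this (hypothetical driver code,
not part of this series):

	struct my_priv {
	#ifdef CONFIG_PAGE_POOL_STATS
		struct page_pool_stats pp_stats;
	#endif
		struct page_pool *pool;
	};

Exporting the structure unconditionally removes the need for such
guards in every driver.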

Signed-off-by: Shenwei Wang <shenwei.wang@nxp.com>
---
 include/net/page_pool.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/include/net/page_pool.h b/include/net/page_pool.h
index 813c93499f20..adfd4bb609a7 100644
--- a/include/net/page_pool.h
+++ b/include/net/page_pool.h
@@ -84,7 +84,6 @@ struct page_pool_params {
 	void *init_arg;
 };
 
-#ifdef CONFIG_PAGE_POOL_STATS
 struct page_pool_alloc_stats {
 	u64 fast; /* fast path allocations */
 	u64 slow; /* slow-path order 0 allocations */
@@ -117,6 +116,7 @@ struct page_pool_stats {
 	struct page_pool_recycle_stats recycle_stats;
 };
 
+#ifdef CONFIG_PAGE_POOL_STATS
 int page_pool_ethtool_stats_get_count(void);
 u8 *page_pool_ethtool_stats_get_strings(u8 *data);
 u64 *page_pool_ethtool_stats_get(u64 *data, void *stats);
-- 
2.34.1



* [PATCH v4 2/2] net: fec: add xdp and page pool statistics
  2022-11-15 15:57 [PATCH v4 0/2] net: fec: add xdp and page pool statistics Shenwei Wang
  2022-11-15 15:57 ` [PATCH v4 1/2] net: page_pool: export page_pool_stats definition Shenwei Wang
@ 2022-11-15 15:57 ` Shenwei Wang
  2022-11-15 17:28   ` Andrew Lunn
  2022-11-15 17:57 ` [PATCH v4 0/2] " Saeed Mahameed
  2 siblings, 1 reply; 10+ messages in thread
From: Shenwei Wang @ 2022-11-15 15:57 UTC (permalink / raw)
  To: David S. Miller, Eric Dumazet, Jakub Kicinski, Paolo Abeni
  Cc: Jesper Dangaard Brouer, Ilias Apalodimas, Alexei Starovoitov,
	Daniel Borkmann, John Fastabend, netdev, linux-kernel, imx,
	Shenwei Wang, kernel test robot

Add XDP and page pool statistics to the fec driver and expose them
through the standard ethtool statistics interface.
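
With the series applied, the new counters appear next to the existing
MIB counters in 'ethtool -S'. For example (interface name and values
illustrative only):

	$ ethtool -S eth0 | grep xdp
	     rx_xdp_redirect: 0
	     rx_xdp_pass: 1024
	     rx_xdp_drop: 2
	     rx_xdp_tx: 0
	     rx_xdp_tx_errors: 0
	     tx_xdp_xmit: 0
	     tx_xdp_xmit_errors: 0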

Signed-off-by: Shenwei Wang <shenwei.wang@nxp.com>
Reported-by: kernel test robot <lkp@intel.com>
---
 drivers/net/ethernet/freescale/fec.h      |  15 ++++
 drivers/net/ethernet/freescale/fec_main.c | 100 ++++++++++++++++++++--
 2 files changed, 109 insertions(+), 6 deletions(-)

diff --git a/drivers/net/ethernet/freescale/fec.h b/drivers/net/ethernet/freescale/fec.h
index 61e847b18343..adbe661552be 100644
--- a/drivers/net/ethernet/freescale/fec.h
+++ b/drivers/net/ethernet/freescale/fec.h
@@ -526,6 +526,19 @@ struct fec_enet_priv_txrx_info {
 	struct  sk_buff *skb;
 };
 
+enum {
+	RX_XDP_REDIRECT = 0,
+	RX_XDP_PASS,
+	RX_XDP_DROP,
+	RX_XDP_TX,
+	RX_XDP_TX_ERRORS,
+	TX_XDP_XMIT,
+	TX_XDP_XMIT_ERRORS,
+
+	/* The following must be the last one */
+	XDP_STATS_TOTAL,
+};
+
 struct fec_enet_priv_tx_q {
 	struct bufdesc_prop bd;
 	unsigned char *tx_bounce[TX_RING_SIZE];
@@ -546,6 +559,8 @@ struct fec_enet_priv_rx_q {
 	/* page_pool */
 	struct page_pool *page_pool;
 	struct xdp_rxq_info xdp_rxq;
+	struct u64_stats_sync syncp;
+	u64 stats[XDP_STATS_TOTAL];
 
 	/* rx queue number, in the range 0-7 */
 	u8 id;
diff --git a/drivers/net/ethernet/freescale/fec_main.c b/drivers/net/ethernet/freescale/fec_main.c
index 616eea712ca8..4706cbc8ec5c 100644
--- a/drivers/net/ethernet/freescale/fec_main.c
+++ b/drivers/net/ethernet/freescale/fec_main.c
@@ -1507,7 +1507,8 @@ static void fec_enet_update_cbd(struct fec_enet_priv_rx_q *rxq,
 
 static u32
 fec_enet_run_xdp(struct fec_enet_private *fep, struct bpf_prog *prog,
-		 struct xdp_buff *xdp, struct fec_enet_priv_rx_q *rxq, int index)
+		 struct xdp_buff *xdp, struct fec_enet_priv_rx_q *rxq,
+		 u32 stats[], int index)
 {
 	unsigned int sync, len = xdp->data_end - xdp->data;
 	u32 ret = FEC_ENET_XDP_PASS;
@@ -1523,10 +1524,12 @@ fec_enet_run_xdp(struct fec_enet_private *fep, struct bpf_prog *prog,
 
 	switch (act) {
 	case XDP_PASS:
+		stats[RX_XDP_PASS]++;
 		ret = FEC_ENET_XDP_PASS;
 		break;
 
 	case XDP_REDIRECT:
+		stats[RX_XDP_REDIRECT]++;
 		err = xdp_do_redirect(fep->netdev, xdp, prog);
 		if (!err) {
 			ret = FEC_ENET_XDP_REDIR;
@@ -1549,6 +1552,7 @@ fec_enet_run_xdp(struct fec_enet_private *fep, struct bpf_prog *prog,
 		fallthrough;    /* handle aborts by dropping packet */
 
 	case XDP_DROP:
+		stats[RX_XDP_DROP]++;
 		ret = FEC_ENET_XDP_CONSUMED;
 		page = virt_to_head_page(xdp->data);
 		page_pool_put_page(rxq->page_pool, page, sync, true);
@@ -1582,6 +1586,7 @@ fec_enet_rx_queue(struct net_device *ndev, int budget, u16 queue_id)
 	struct bpf_prog *xdp_prog = READ_ONCE(fep->xdp_prog);
 	u32 ret, xdp_result = FEC_ENET_XDP_PASS;
 	u32 data_start = FEC_ENET_XDP_HEADROOM;
+	u32 xdp_stats[XDP_STATS_TOTAL];
 	struct xdp_buff xdp;
 	struct page *page;
 	u32 sub_len = 4;
@@ -1656,11 +1661,13 @@ fec_enet_rx_queue(struct net_device *ndev, int budget, u16 queue_id)
 		fec_enet_update_cbd(rxq, bdp, index);
 
 		if (xdp_prog) {
+			memset(xdp_stats, 0, sizeof(xdp_stats));
 			xdp_buff_clear_frags_flag(&xdp);
 			/* subtract 16bit shift and FCS */
 			xdp_prepare_buff(&xdp, page_address(page),
 					 data_start, pkt_len - sub_len, false);
-			ret = fec_enet_run_xdp(fep, xdp_prog, &xdp, rxq, index);
+			ret = fec_enet_run_xdp(fep, xdp_prog, &xdp, rxq,
+					       xdp_stats, index);
 			xdp_result |= ret;
 			if (ret != FEC_ENET_XDP_PASS)
 				goto rx_processing_done;
@@ -1762,6 +1769,15 @@ fec_enet_rx_queue(struct net_device *ndev, int budget, u16 queue_id)
 	if (xdp_result & FEC_ENET_XDP_REDIR)
 		xdp_do_flush_map();
 
+	if (xdp_prog) {
+		int i;
+
+		u64_stats_update_begin(&rxq->syncp);
+		for (i = 0; i < XDP_STATS_TOTAL; i++)
+			rxq->stats[i] += xdp_stats[i];
+		u64_stats_update_end(&rxq->syncp);
+	}
+
 	return pkt_received;
 }
 
@@ -2701,6 +2717,16 @@ static const struct fec_stat {
 
 #define FEC_STATS_SIZE		(ARRAY_SIZE(fec_stats) * sizeof(u64))
 
+static const char *fec_xdp_stat_strs[XDP_STATS_TOTAL] = {
+	"rx_xdp_redirect",           /* RX_XDP_REDIRECT = 0, */
+	"rx_xdp_pass",               /* RX_XDP_PASS, */
+	"rx_xdp_drop",               /* RX_XDP_DROP, */
+	"rx_xdp_tx",                 /* RX_XDP_TX, */
+	"rx_xdp_tx_errors",          /* RX_XDP_TX_ERRORS, */
+	"tx_xdp_xmit",               /* TX_XDP_XMIT, */
+	"tx_xdp_xmit_errors",        /* TX_XDP_XMIT_ERRORS, */
+};
+
 static void fec_enet_update_ethtool_stats(struct net_device *dev)
 {
 	struct fec_enet_private *fep = netdev_priv(dev);
@@ -2710,6 +2736,44 @@ static void fec_enet_update_ethtool_stats(struct net_device *dev)
 		fep->ethtool_stats[i] = readl(fep->hwp + fec_stats[i].offset);
 }
 
+static void fec_enet_get_xdp_stats(struct fec_enet_private *fep, u64 *data)
+{
+	u64 xdp_stats[XDP_STATS_TOTAL] = { 0 };
+	struct fec_enet_priv_rx_q *rxq;
+	unsigned int start;
+	int i, j;
+
+	for (i = fep->num_rx_queues - 1; i >= 0; i--) {
+		rxq = fep->rx_queue[i];
+
+		do {
+			start = u64_stats_fetch_begin(&rxq->syncp);
+			for (j = 0; j < XDP_STATS_TOTAL; j++)
+				xdp_stats[j] += rxq->stats[j];
+		} while (u64_stats_fetch_retry(&rxq->syncp, start));
+	}
+
+	memcpy(data, xdp_stats, sizeof(xdp_stats));
+}
+
+static void fec_enet_page_pool_stats(struct fec_enet_private *fep, u64 *data)
+{
+	struct page_pool_stats stats = {};
+	struct fec_enet_priv_rx_q *rxq;
+	int i;
+
+	for (i = fep->num_rx_queues - 1; i >= 0; i--) {
+		rxq = fep->rx_queue[i];
+
+		if (!rxq->page_pool)
+			continue;
+
+		page_pool_get_stats(rxq->page_pool, &stats);
+	}
+
+	page_pool_ethtool_stats_get(data, &stats);
+}
+
 static void fec_enet_get_ethtool_stats(struct net_device *dev,
 				       struct ethtool_stats *stats, u64 *data)
 {
@@ -2719,6 +2783,12 @@ static void fec_enet_get_ethtool_stats(struct net_device *dev,
 		fec_enet_update_ethtool_stats(dev);
 
 	memcpy(data, fep->ethtool_stats, FEC_STATS_SIZE);
+	data += FEC_STATS_SIZE / sizeof(u64);
+
+	fec_enet_get_xdp_stats(fep, data);
+	data += XDP_STATS_TOTAL;
+
+	fec_enet_page_pool_stats(fep, data);
 }
 
 static void fec_enet_get_strings(struct net_device *netdev,
@@ -2727,9 +2797,15 @@ static void fec_enet_get_strings(struct net_device *netdev,
 	int i;
 	switch (stringset) {
 	case ETH_SS_STATS:
-		for (i = 0; i < ARRAY_SIZE(fec_stats); i++)
-			memcpy(data + i * ETH_GSTRING_LEN,
-				fec_stats[i].name, ETH_GSTRING_LEN);
+		for (i = 0; i < ARRAY_SIZE(fec_stats); i++) {
+			memcpy(data, fec_stats[i].name, ETH_GSTRING_LEN);
+			data += ETH_GSTRING_LEN;
+		}
+		for (i = 0; i < ARRAY_SIZE(fec_xdp_stat_strs); i++) {
+			strscpy(data, fec_xdp_stat_strs[i], ETH_GSTRING_LEN);
+			data += ETH_GSTRING_LEN;
+		}
+		page_pool_ethtool_stats_get_strings(data);
 		break;
 	case ETH_SS_TEST:
 		net_selftest_get_strings(data);
@@ -2739,9 +2815,13 @@ static void fec_enet_get_strings(struct net_device *netdev,
 
 static int fec_enet_get_sset_count(struct net_device *dev, int sset)
 {
+	int count;
+
 	switch (sset) {
 	case ETH_SS_STATS:
-		return ARRAY_SIZE(fec_stats);
+		count = ARRAY_SIZE(fec_stats) + XDP_STATS_TOTAL;
+		count += page_pool_ethtool_stats_get_count();
+		return count;
 	case ETH_SS_TEST:
 		return net_selftest_get_count();
 	default:
@@ -2752,6 +2832,7 @@ static int fec_enet_get_sset_count(struct net_device *dev, int sset)
 static void fec_enet_clear_ethtool_stats(struct net_device *dev)
 {
 	struct fec_enet_private *fep = netdev_priv(dev);
+	struct fec_enet_priv_rx_q *rxq;
 	int i;
 
 	/* Disable MIB statistics counters */
@@ -2760,6 +2841,11 @@ static void fec_enet_clear_ethtool_stats(struct net_device *dev)
 	for (i = 0; i < ARRAY_SIZE(fec_stats); i++)
 		writel(0, fep->hwp + fec_stats[i].offset);
 
+	for (i = fep->num_rx_queues - 1; i >= 0; i--) {
+		rxq = fep->rx_queue[i];
+		memset(rxq->stats, 0, sizeof(rxq->stats));
+	}
+
 	/* Don't disable MIB statistics counters */
 	writel(0, fep->hwp + FEC_MIB_CTRLSTAT);
 }
@@ -3126,6 +3212,8 @@ static void fec_enet_free_buffers(struct net_device *ndev)
 		for (i = 0; i < rxq->bd.ring_size; i++)
 			page_pool_release_page(rxq->page_pool, rxq->rx_skb_info[i].page);
 
+		memset(rxq->stats, 0, sizeof(rxq->stats));
+
 		if (xdp_rxq_info_is_reg(&rxq->xdp_rxq))
 			xdp_rxq_info_unreg(&rxq->xdp_rxq);
 		page_pool_destroy(rxq->page_pool);
-- 
2.34.1



* Re: [PATCH v4 1/2] net: page_pool: export page_pool_stats definition
  2022-11-15 15:57 ` [PATCH v4 1/2] net: page_pool: export page_pool_stats definition Shenwei Wang
@ 2022-11-15 17:12   ` Andrew Lunn
  2022-11-15 17:18     ` [EXT] " Shenwei Wang
  0 siblings, 1 reply; 10+ messages in thread
From: Andrew Lunn @ 2022-11-15 17:12 UTC (permalink / raw)
  To: Shenwei Wang
  Cc: David S. Miller, Eric Dumazet, Jakub Kicinski, Paolo Abeni,
	Jesper Dangaard Brouer, Ilias Apalodimas, Alexei Starovoitov,
	Daniel Borkmann, John Fastabend, netdev, linux-kernel, imx

On Tue, Nov 15, 2022 at 09:57:43AM -0600, Shenwei Wang wrote:
> The definition of 'struct page_pool_stats' is required even when
> CONFIG_PAGE_POOL_STATS is not enabled. Otherwise, every driver is
> forced to handle the case where CONFIG_PAGE_POOL_STATS is undefined.

I agree the API is broken, but I think there is a better fix.

There should be a stub of page_pool_get_stats() for when
CONFIG_PAGE_POOL_STATS is disabled.

Nothing actually dereferences struct page_pool_stats when you have
this stub. So it might be enough to simply have

struct page_pool_stats {
};
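
Something like this in page_pool.h (a sketch; the signature of
page_pool_get_stats() here is an assumption):

	#ifdef CONFIG_PAGE_POOL_STATS
	bool page_pool_get_stats(struct page_pool *pool,
				 struct page_pool_stats *stats);
	#else
	static inline bool page_pool_get_stats(struct page_pool *pool,
					       struct page_pool_stats *stats)
	{
		return false;
	}
	#endif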

       Andrew


* RE: [EXT] Re: [PATCH v4 1/2] net: page_pool: export page_pool_stats definition
  2022-11-15 17:12   ` Andrew Lunn
@ 2022-11-15 17:18     ` Shenwei Wang
  2022-11-15 17:43       ` Andrew Lunn
  0 siblings, 1 reply; 10+ messages in thread
From: Shenwei Wang @ 2022-11-15 17:18 UTC (permalink / raw)
  To: Andrew Lunn
  Cc: David S. Miller, Eric Dumazet, Jakub Kicinski, Paolo Abeni,
	Jesper Dangaard Brouer, Ilias Apalodimas, Alexei Starovoitov,
	Daniel Borkmann, John Fastabend, netdev, linux-kernel, imx



On Tuesday, November 15, 2022 11:12 AM, Andrew Lunn <andrew@lunn.ch> wrote:
> On Tue, Nov 15, 2022 at 09:57:43AM -0600, Shenwei Wang wrote:
> > The definition of 'struct page_pool_stats' is required even when
> > CONFIG_PAGE_POOL_STATS is not enabled. Otherwise, every driver is
> > forced to handle the case where CONFIG_PAGE_POOL_STATS is undefined.
> 
> I agree the API is broken, but I think there is a better fix.
> 
> There should be a stub of page_pool_get_stats() for when
> CONFIG_PAGE_POOL_STATS is disabled.
> 
> Nothing actually dereferences struct page_pool_stats when you have this stub.
> So it might be enough to simply have
> 
> struct page_pool_stats {
> };
> 

As the structure is open when CONFIG_PAGE_POOL_STATS is enabled, you cannot
prevent a user from accessing its members. The empty stub will still have
problems in that kind of situation.
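
For example (hypothetical driver code), this builds only when the real
definition with its members is visible:

	struct page_pool_stats stats = {};

	page_pool_get_stats(rxq->page_pool, &stats);
	/* breaks with an empty stub definition of the struct */
	netdev_info(ndev, "fast allocs: %llu\n", stats.alloc_stats.fast);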

Regards,
Shenwei

>        Andrew


* Re: [PATCH v4 2/2] net: fec: add xdp and page pool statistics
  2022-11-15 15:57 ` [PATCH v4 2/2] net: fec: add xdp and page pool statistics Shenwei Wang
@ 2022-11-15 17:28   ` Andrew Lunn
  2022-11-15 17:53     ` [EXT] " Shenwei Wang
  0 siblings, 1 reply; 10+ messages in thread
From: Andrew Lunn @ 2022-11-15 17:28 UTC (permalink / raw)
  To: Shenwei Wang
  Cc: David S. Miller, Eric Dumazet, Jakub Kicinski, Paolo Abeni,
	Jesper Dangaard Brouer, Ilias Apalodimas, Alexei Starovoitov,
	Daniel Borkmann, John Fastabend, netdev, linux-kernel, imx,
	kernel test robot

> @@ -1582,6 +1586,7 @@ fec_enet_rx_queue(struct net_device *ndev, int budget, u16 queue_id)
>  	struct bpf_prog *xdp_prog = READ_ONCE(fep->xdp_prog);
>  	u32 ret, xdp_result = FEC_ENET_XDP_PASS;
>  	u32 data_start = FEC_ENET_XDP_HEADROOM;
> +	u32 xdp_stats[XDP_STATS_TOTAL];
>  	struct xdp_buff xdp;
>  	struct page *page;
>  	u32 sub_len = 4;
> @@ -1656,11 +1661,13 @@ fec_enet_rx_queue(struct net_device *ndev, int budget, u16 queue_id)
>  		fec_enet_update_cbd(rxq, bdp, index);
>  
>  		if (xdp_prog) {
> +			memset(xdp_stats, 0, sizeof(xdp_stats));
>  			xdp_buff_clear_frags_flag(&xdp);
>  			/* subtract 16bit shift and FCS */
>  			xdp_prepare_buff(&xdp, page_address(page),
>  					 data_start, pkt_len - sub_len, false);
> -			ret = fec_enet_run_xdp(fep, xdp_prog, &xdp, rxq, index);
> +			ret = fec_enet_run_xdp(fep, xdp_prog, &xdp, rxq,
> +					       xdp_stats, index);
>  			xdp_result |= ret;
>  			if (ret != FEC_ENET_XDP_PASS)
>  				goto rx_processing_done;
> @@ -1762,6 +1769,15 @@ fec_enet_rx_queue(struct net_device *ndev, int budget, u16 queue_id)
>  	if (xdp_result & FEC_ENET_XDP_REDIR)
>  		xdp_do_flush_map();
>  
> +	if (xdp_prog) {
> +		int i;
> +
> +		u64_stats_update_begin(&rxq->syncp);
> +		for (i = 0; i < XDP_STATS_TOTAL; i++)
> +			rxq->stats[i] += xdp_stats[i];
> +		u64_stats_update_end(&rxq->syncp);
> +	}
> +

This looks wrong. You are processing up to the NAPI budget, 64 frames,
in a loop. The memset to 0 happens inside the loop, but you do the
accumulation outside the loop?
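
Presumably the intent is to zero the scratch array once per NAPI poll
and fold it into the queue counters once, roughly (sketch, not tested):

	memset(xdp_stats, 0, sizeof(xdp_stats));	/* once, before the loop */

	/* ... per-frame receive loop, counting into xdp_stats[] ... */

	if (xdp_prog) {
		u64_stats_update_begin(&rxq->syncp);
		for (i = 0; i < XDP_STATS_TOTAL; i++)
			rxq->stats[i] += xdp_stats[i];
		u64_stats_update_end(&rxq->syncp);
	}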

This patch is getting pretty big. Please break it up, at least into
one patch for XDP stats and one for page pool stats.

    Andrew


* Re: [EXT] Re: [PATCH v4 1/2] net: page_pool: export page_pool_stats definition
  2022-11-15 17:18     ` [EXT] " Shenwei Wang
@ 2022-11-15 17:43       ` Andrew Lunn
  2022-11-15 17:52         ` Shenwei Wang
  0 siblings, 1 reply; 10+ messages in thread
From: Andrew Lunn @ 2022-11-15 17:43 UTC (permalink / raw)
  To: Shenwei Wang
  Cc: David S. Miller, Eric Dumazet, Jakub Kicinski, Paolo Abeni,
	Jesper Dangaard Brouer, Ilias Apalodimas, Alexei Starovoitov,
	Daniel Borkmann, John Fastabend, netdev, linux-kernel, imx

> > I agree the API is broken, but I think there is a better fix.
> > 
> > There should be a stub of page_pool_get_stats() for when
> > CONFIG_PAGE_POOL_STATS is disabled.
> > 
> > Nothing actually dereferences struct page_pool_stats when you have this stub.
> > So it might be enough to simply have
> > 
> > struct page_pool_stats {
> > };
> > 
> 
> As the structure is open when CONFIG_PAGE_POOL_STATS is enabled, you cannot
> prevent a user from accessing its members. The empty stub will still have
> problems in that kind of situation.

The user, i.e. the driver, has no need to access its members. The
members can change, new ones can be added, and it will not cause a
problem, given the way this API is defined. Ideally, page_pool_stats
would have been an opaque cookie, but that is not how it was
implemented :-(
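
In other words, the driver side stays purely pass-through, as in patch
2/2:

	struct page_pool_stats stats = {};

	page_pool_get_stats(rxq->page_pool, &stats);	/* fill */
	page_pool_ethtool_stats_get(data, &stats);	/* consume */

and never dereferences a member itself.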

	    Andrew


* RE: [EXT] Re: [PATCH v4 1/2] net: page_pool: export page_pool_stats definition
  2022-11-15 17:43       ` Andrew Lunn
@ 2022-11-15 17:52         ` Shenwei Wang
  0 siblings, 0 replies; 10+ messages in thread
From: Shenwei Wang @ 2022-11-15 17:52 UTC (permalink / raw)
  To: Andrew Lunn
  Cc: David S. Miller, Eric Dumazet, Jakub Kicinski, Paolo Abeni,
	Jesper Dangaard Brouer, Ilias Apalodimas, Alexei Starovoitov,
	Daniel Borkmann, John Fastabend, netdev, linux-kernel, imx



On Tuesday, November 15, 2022 11:43 AM, Andrew Lunn <andrew@lunn.ch> wrote:
> > > I agree the API is broken, but I think there is a better fix.
> > >
> > > There should be a stub of page_pool_get_stats() for when
> > > CONFIG_PAGE_POOL_STATS is disabled.
> > >
> > > Nothing actually dereferences struct page_pool_stats when you have
> > > this stub. So it might be enough to simply have
> > >
> > > struct page_pool_stats {
> > > };
> > >
> >
> > As the structure is open when CONFIG_PAGE_POOL_STATS is enabled,
> > you cannot prevent a user from accessing its members. The empty stub
> > will still have problems in that kind of situation.
> 
> The user, i.e. the driver, has no need to access its members. The members can
> change, new ones can be added, and it will not cause a problem, given the way
> this API is defined. Ideally, page_pool_stats would have been an opaque
> cookie, but that is not how it was implemented :-(
> 

Okay, that works if the assumption is that the user never accesses its members.

Regards,
Shenwei

>             Andrew


* RE: [EXT] Re: [PATCH v4 2/2] net: fec: add xdp and page pool statistics
  2022-11-15 17:28   ` Andrew Lunn
@ 2022-11-15 17:53     ` Shenwei Wang
  0 siblings, 0 replies; 10+ messages in thread
From: Shenwei Wang @ 2022-11-15 17:53 UTC (permalink / raw)
  To: Andrew Lunn
  Cc: David S. Miller, Eric Dumazet, Jakub Kicinski, Paolo Abeni,
	Jesper Dangaard Brouer, Ilias Apalodimas, Alexei Starovoitov,
	Daniel Borkmann, John Fastabend, netdev, linux-kernel, imx,
	kernel test robot



On Tuesday, November 15, 2022 11:29 AM, Andrew Lunn <andrew@lunn.ch> wrote:
> > @@ -1582,6 +1586,7 @@ fec_enet_rx_queue(struct net_device *ndev, int budget, u16 queue_id)
> >       struct bpf_prog *xdp_prog = READ_ONCE(fep->xdp_prog);
> >       u32 ret, xdp_result = FEC_ENET_XDP_PASS;
> >       u32 data_start = FEC_ENET_XDP_HEADROOM;
> > +     u32 xdp_stats[XDP_STATS_TOTAL];
> >       struct xdp_buff xdp;
> >       struct page *page;
> >       u32 sub_len = 4;
> > @@ -1656,11 +1661,13 @@ fec_enet_rx_queue(struct net_device *ndev, int budget, u16 queue_id)
> >               fec_enet_update_cbd(rxq, bdp, index);
> >
> >               if (xdp_prog) {
> > +                     memset(xdp_stats, 0, sizeof(xdp_stats));
> >                       xdp_buff_clear_frags_flag(&xdp);
> >                       /* subtract 16bit shift and FCS */
> >                       xdp_prepare_buff(&xdp, page_address(page),
> >                                        data_start, pkt_len - sub_len, false);
> > -                     ret = fec_enet_run_xdp(fep, xdp_prog, &xdp, rxq, index);
> > +                     ret = fec_enet_run_xdp(fep, xdp_prog, &xdp, rxq,
> > +                                            xdp_stats, index);
> >                       xdp_result |= ret;
> >                       if (ret != FEC_ENET_XDP_PASS)
> >                               goto rx_processing_done;
> > @@ -1762,6 +1769,15 @@ fec_enet_rx_queue(struct net_device *ndev, int budget, u16 queue_id)
> >       if (xdp_result & FEC_ENET_XDP_REDIR)
> >               xdp_do_flush_map();
> >
> > +     if (xdp_prog) {
> > +             int i;
> > +
> > +             u64_stats_update_begin(&rxq->syncp);
> > +             for (i = 0; i < XDP_STATS_TOTAL; i++)
> > +                     rxq->stats[i] += xdp_stats[i];
> > +             u64_stats_update_end(&rxq->syncp);
> > +     }
> > +
> 
> This looks wrong. You are processing up to the NAPI budget, 64 frames, in a
> loop. The memset to 0 happens inside the loop, but you do the accumulation
> outside the loop?
> 

My bad. That should be moved outside the loop.

Thanks,
Shenwei

> This patch is getting pretty big. Please break it up, at least into one patch for XDP
> stats and one for page pool stats.
> 
>     Andrew


* Re: [PATCH v4 0/2] net: fec: add xdp and page pool statistics
  2022-11-15 15:57 [PATCH v4 0/2] net: fec: add xdp and page pool statistics Shenwei Wang
  2022-11-15 15:57 ` [PATCH v4 1/2] net: page_pool: export page_pool_stats definition Shenwei Wang
  2022-11-15 15:57 ` [PATCH v4 2/2] net: fec: add xdp and page pool statistics Shenwei Wang
@ 2022-11-15 17:57 ` Saeed Mahameed
  2 siblings, 0 replies; 10+ messages in thread
From: Saeed Mahameed @ 2022-11-15 17:57 UTC (permalink / raw)
  To: Shenwei Wang
  Cc: David S. Miller, Eric Dumazet, Jakub Kicinski, Paolo Abeni,
	Jesper Dangaard Brouer, Ilias Apalodimas, Alexei Starovoitov,
	Daniel Borkmann, John Fastabend, netdev, linux-kernel, imx

On 15 Nov 09:57, Shenwei Wang wrote:
> Changes in v4:
> - Use u64 to record the XDP statistics
> - Change strncpy to strscpy
> - Remove the "PAGE_POOL_STATS" select per Alexander's feedback
> - Export the page_pool_stats definition in page_pool.h
>

Reviewed-by: Saeed Mahameed <saeed@kernel.org>


