From: Maciej Fijalkowski <maciej.fijalkowski@intel.com>
To: bpf@vger.kernel.org, ast@kernel.org, daniel@iogearbox.net
Cc: netdev@vger.kernel.org, magnus.karlsson@intel.com,
	alexandr.lobakin@intel.com, maximmi@nvidia.com, kuba@kernel.org,
	bjorn@kernel.org,
	Maciej Fijalkowski <maciej.fijalkowski@intel.com>
Subject: [PATCH v2 bpf-next 05/14] ice: xsk: terminate Rx side of NAPI when XSK Rx queue gets full
Date: Wed, 13 Apr 2022 17:30:06 +0200
Message-ID: <20220413153015.453864-6-maciej.fijalkowski@intel.com>
In-Reply-To: <20220413153015.453864-1-maciej.fijalkowski@intel.com>

When the XSK pool uses the need_wakeup feature, correlate an -ENOBUFS
return from xdp_do_redirect() with the XSK Rx queue being full. In that
case, terminate the Rx processing on the current HW Rx ring and let
user space consume descriptors from the XSK Rx queue, so that the
driver has room to use later on.
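
Condensed, the error handling in ice_run_xdp_zc() then becomes (a
sketch of the first hunk below):

  err = xdp_do_redirect(rx_ring->netdev, xdp, xdp_prog);
  if (!err)
          return ICE_XDP_REDIR;

  /* A full XSK Rx queue surfaces as -ENOBUFS; treat it as an exit
   * condition only when need_wakeup is in use, since user space is
   * then responsible for draining the queue.
   */
  if (xsk_uses_need_wakeup(rx_ring->xsk_pool) && err == -ENOBUFS)
          result = ICE_XDP_EXIT;
  else
          result = ICE_XDP_CONSUMED;
  goto out_failure;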

Introduce a new internal return code, ICE_XDP_EXIT, to indicate the
case described above.

Note that this does not affect Tx processing bound to the same NAPI
context, nor any other Rx rings.
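
On the consumer side, in ice_clean_rx_irq_zc(), the new return code is
handled as follows (again a sketch, condensed from the hunks below):

  xdp_res = ice_run_xdp_zc(rx_ring, xdp, xdp_prog, xdp_ring);
  if (xdp_res == ICE_XDP_EXIT) {
          /* XSK Rx queue is full: stop cleaning this HW Rx ring and
           * record it as a failure, rather than dropping frames.
           */
          failure = true;
          break;
  }

  /* later: |= rather than =, so the early exit above is preserved */
  failure |= !ice_alloc_rx_bufs_zc(rx_ring, ICE_DESC_UNUSED(rx_ring));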

Signed-off-by: Maciej Fijalkowski <maciej.fijalkowski@intel.com>
---
 drivers/net/ethernet/intel/ice/ice_txrx.h |  1 +
 drivers/net/ethernet/intel/ice/ice_xsk.c  | 29 +++++++++++++++--------
 2 files changed, 20 insertions(+), 10 deletions(-)

diff --git a/drivers/net/ethernet/intel/ice/ice_txrx.h b/drivers/net/ethernet/intel/ice/ice_txrx.h
index cead3eb149bd..f5a906c03669 100644
--- a/drivers/net/ethernet/intel/ice/ice_txrx.h
+++ b/drivers/net/ethernet/intel/ice/ice_txrx.h
@@ -133,6 +133,7 @@ static inline int ice_skb_pad(void)
 #define ICE_XDP_CONSUMED	BIT(0)
 #define ICE_XDP_TX		BIT(1)
 #define ICE_XDP_REDIR		BIT(2)
+#define ICE_XDP_EXIT		BIT(3)
 
 #define ICE_RX_DMA_ATTR \
 	(DMA_ATTR_SKIP_CPU_SYNC | DMA_ATTR_WEAK_ORDERING)
diff --git a/drivers/net/ethernet/intel/ice/ice_xsk.c b/drivers/net/ethernet/intel/ice/ice_xsk.c
index e9ff05de0084..99bfa21c3938 100644
--- a/drivers/net/ethernet/intel/ice/ice_xsk.c
+++ b/drivers/net/ethernet/intel/ice/ice_xsk.c
@@ -540,9 +540,13 @@ ice_run_xdp_zc(struct ice_rx_ring *rx_ring, struct xdp_buff *xdp,
 
 	if (likely(act == XDP_REDIRECT)) {
 		err = xdp_do_redirect(rx_ring->netdev, xdp, xdp_prog);
-		if (err)
-			goto out_failure;
-		return ICE_XDP_REDIR;
+		if (!err)
+			return ICE_XDP_REDIR;
+		if (xsk_uses_need_wakeup(rx_ring->xsk_pool) && err == -ENOBUFS)
+			result = ICE_XDP_EXIT;
+		else
+			result = ICE_XDP_CONSUMED;
+		goto out_failure;
 	}
 
 	switch (act) {
@@ -553,15 +557,16 @@ ice_run_xdp_zc(struct ice_rx_ring *rx_ring, struct xdp_buff *xdp,
 		if (result == ICE_XDP_CONSUMED)
 			goto out_failure;
 		break;
+	case XDP_DROP:
+		result = ICE_XDP_CONSUMED;
+		break;
 	default:
 		bpf_warn_invalid_xdp_action(rx_ring->netdev, xdp_prog, act);
 		fallthrough;
 	case XDP_ABORTED:
+		result = ICE_XDP_CONSUMED;
 out_failure:
 		trace_xdp_exception(rx_ring->netdev, xdp_prog, act);
-		fallthrough;
-	case XDP_DROP:
-		result = ICE_XDP_CONSUMED;
 		break;
 	}
 
@@ -629,12 +634,16 @@ int ice_clean_rx_irq_zc(struct ice_rx_ring *rx_ring, int budget)
 		xsk_buff_dma_sync_for_cpu(xdp, rx_ring->xsk_pool);
 
 		xdp_res = ice_run_xdp_zc(rx_ring, xdp, xdp_prog, xdp_ring);
-		if (likely(xdp_res & (ICE_XDP_TX | ICE_XDP_REDIR)))
+		if (likely(xdp_res & (ICE_XDP_TX | ICE_XDP_REDIR))) {
 			xdp_xmit |= xdp_res;
-		else if (xdp_res == ICE_XDP_CONSUMED)
+		} else if (xdp_res == ICE_XDP_EXIT) {
+			failure = true;
+			break;
+		} else if (xdp_res == ICE_XDP_CONSUMED) {
 			xsk_buff_free(xdp);
-		else
+		} else if (xdp_res == ICE_XDP_PASS) {
 			goto construct_skb;
+		}
 
 		total_rx_bytes += size;
 		total_rx_packets++;
@@ -669,7 +678,7 @@ int ice_clean_rx_irq_zc(struct ice_rx_ring *rx_ring, int budget)
 		ice_receive_skb(rx_ring, skb, vlan_tag);
 	}
 
-	failure = !ice_alloc_rx_bufs_zc(rx_ring, ICE_DESC_UNUSED(rx_ring));
+	failure |= !ice_alloc_rx_bufs_zc(rx_ring, ICE_DESC_UNUSED(rx_ring));
 
 	ice_finalize_xdp_rx(xdp_ring, xdp_xmit);
 	ice_update_rx_ring_stats(rx_ring, total_rx_packets, total_rx_bytes);
-- 
2.33.1

