From: Maciej Fijalkowski <maciej.fijalkowski@intel.com>
To: intel-wired-lan@lists.osuosl.org
Cc: netdev@vger.kernel.org, bpf@vger.kernel.org, davem@davemloft.net,
anthony.l.nguyen@intel.com, kuba@kernel.org, bjorn@kernel.org,
magnus.karlsson@intel.com, jesse.brandeburg@intel.com,
alexandr.lobakin@intel.com, joamaki@gmail.com, toke@redhat.com,
brett.creeley@intel.com,
Maciej Fijalkowski <maciej.fijalkowski@intel.com>
Subject: [PATCH v7 intel-next 4/9] ice: unify xdp_rings accesses
Date: Thu, 19 Aug 2021 13:59:59 +0200
Message-ID: <20210819120004.34392-5-maciej.fijalkowski@intel.com>
In-Reply-To: <20210819120004.34392-1-maciej.fijalkowski@intel.com>

There has been a long-standing issue of improper xdp_rings indexing for
the XDP_TX and XDP_REDIRECT actions. Given that rx_ring->q_index is
currently mixed with smp_processor_id(), there could be a situation
where Tx descriptors are produced onto an XDP Tx ring but the tail is
never bumped - for example, when a particular queue id is pinned to a
non-matching IRQ line.
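
To make the failure mode concrete, here is a minimal userspace sketch
(fake_ring and the hard-coded index values are made up for illustration,
this is not driver code): when the producing side and the tail bump pick
their ring through different index sources, the produced descriptors sit
on a ring whose tail the hardware never sees advance.

#include <stdio.h>

#define NUM_XDP_RINGS 4

/* Toy stand-in for an XDP Tx ring; only the bookkeeping that matters here. */
struct fake_ring {
        int produced;   /* descriptors written by the XDP_TX path */
        int tail;       /* tail shadow; HW only processes up to here */
};

int main(void)
{
        struct fake_ring rings[NUM_XDP_RINGS] = { { 0, 0 } };
        int q_index = 2;        /* ring picked via rx_ring->q_index */
        int cpu = 0;            /* ring picked via smp_processor_id() */

        rings[q_index].produced++;              /* descriptors go to ring 2 ... */
        rings[cpu].tail = rings[cpu].produced;  /* ... but the tail bump hits ring 0 */

        for (int i = 0; i < NUM_XDP_RINGS; i++)
                printf("ring %d: produced=%d tail=%d%s\n", i,
                       rings[i].produced, rings[i].tail,
                       rings[i].produced != rings[i].tail ? "  <- stranded" : "");
        return 0;
}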

Address this problem by ignoring the user ring count setting and always
initializing the xdp_rings array to num_possible_cpus() entries. Then,
always use smp_processor_id() as the index into the xdp_rings array.
This provides serialization, since at any given time only a single
softirq can run on a particular CPU.
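
The pattern the diff below converges on can be summed up in one
hypothetical helper (sketched here for illustration only; neither the
helper nor its name is part of this patch):

/* Illustration only - not added by this patch. */
static struct ice_tx_ring *ice_cpu_owned_xdp_ring(struct ice_vsi *vsi)
{
        /*
         * At any given time only a single softirq runs on a given CPU
         * and it does not migrate, so each entry of the
         * num_possible_cpus()-sized xdp_rings[] array is touched by at
         * most one CPU at a time and needs no extra locking here.
         */
        return vsi->xdp_rings[smp_processor_id()];
}
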
Signed-off-by: Maciej Fijalkowski <maciej.fijalkowski@intel.com>
---
drivers/net/ethernet/intel/ice/ice_lib.c | 2 +-
drivers/net/ethernet/intel/ice/ice_main.c | 2 +-
drivers/net/ethernet/intel/ice/ice_txrx_lib.c | 2 +-
3 files changed, 3 insertions(+), 3 deletions(-)

diff --git a/drivers/net/ethernet/intel/ice/ice_lib.c b/drivers/net/ethernet/intel/ice/ice_lib.c
index ff403d6a5156..d76e34515483 100644
--- a/drivers/net/ethernet/intel/ice/ice_lib.c
+++ b/drivers/net/ethernet/intel/ice/ice_lib.c
@@ -3221,7 +3221,7 @@ int ice_vsi_rebuild(struct ice_vsi *vsi, bool init_vsi)
ice_vsi_map_rings_to_vectors(vsi);
if (ice_is_xdp_ena_vsi(vsi)) {
- vsi->num_xdp_txq = vsi->alloc_rxq;
+ vsi->num_xdp_txq = num_possible_cpus();
ret = ice_prepare_xdp_rings(vsi, vsi->xdp_prog);
if (ret)
goto err_vectors;
diff --git a/drivers/net/ethernet/intel/ice/ice_main.c b/drivers/net/ethernet/intel/ice/ice_main.c
index 5645b6e95fbe..8dc00a14ef56 100644
--- a/drivers/net/ethernet/intel/ice/ice_main.c
+++ b/drivers/net/ethernet/intel/ice/ice_main.c
@@ -2694,7 +2694,7 @@ ice_xdp_setup_prog(struct ice_vsi *vsi, struct bpf_prog *prog,
}
if (!ice_is_xdp_ena_vsi(vsi) && prog) {
- vsi->num_xdp_txq = vsi->alloc_rxq;
+ vsi->num_xdp_txq = num_possible_cpus();
xdp_ring_err = ice_prepare_xdp_rings(vsi, prog);
if (xdp_ring_err)
NL_SET_ERR_MSG_MOD(extack, "Setting up XDP Tx resources failed");
diff --git a/drivers/net/ethernet/intel/ice/ice_txrx_lib.c b/drivers/net/ethernet/intel/ice/ice_txrx_lib.c
index d6d71f82142f..bc64610df7cb 100644
--- a/drivers/net/ethernet/intel/ice/ice_txrx_lib.c
+++ b/drivers/net/ethernet/intel/ice/ice_txrx_lib.c
@@ -297,7 +297,7 @@ void ice_finalize_xdp_rx(struct ice_rx_ring *rx_ring, unsigned int xdp_res)
if (xdp_res & ICE_XDP_TX) {
struct ice_tx_ring *xdp_ring =
- rx_ring->vsi->xdp_rings[rx_ring->q_index];
+ rx_ring->vsi->xdp_rings[smp_processor_id()];
ice_xdp_ring_update_tail(xdp_ring);
}
--
2.20.1