* [pull request][net 00/12] mlx5 fixes 2021-11-16
@ 2021-11-16 20:23 Saeed Mahameed
  2021-11-16 20:23 ` [net 01/12] net/mlx5e: kTLS, Fix crash in RX resync flow Saeed Mahameed
                   ` (11 more replies)
  0 siblings, 12 replies; 14+ messages in thread
From: Saeed Mahameed @ 2021-11-16 20:23 UTC (permalink / raw)
  To: David S. Miller, Jakub Kicinski; +Cc: netdev, Saeed Mahameed

From: Saeed Mahameed <saeedm@nvidia.com>

Hi Dave, Hi Jakub,

Please pull this mlx5 fixes series, or let me know in case of any problem.

Thanks,
Saeed.

The following changes since commit 848e5d66fa3105b4136c95ddbc5654e9c43ba7d7:

  Merge branch '40GbE' of git://git.kernel.org/pub/scm/linux/kernel/git/tnguy/net-queue (2021-11-16 13:27:32 +0000)

are available in the Git repository at:

  git://git.kernel.org/pub/scm/linux/kernel/git/saeed/linux.git tags/mlx5-fixes-2021-11-16

for you to fetch changes up to c4c3176739dfa6efcc5b1d1de4b3fd2b51b048c7:

  net/mlx5: E-Switch, return error if encap isn't supported (2021-11-16 12:20:23 -0800)

----------------------------------------------------------------
mlx5-fixes-2021-11-16

----------------------------------------------------------------
Avihai Horon (1):
      net/mlx5: Fix flow counters SF bulk query len

Maher Sanalla (1):
      net/mlx5: Lag, update tracker when state change event received

Mark Bloch (1):
      net/mlx5: E-Switch, rebuild lag only when needed

Neta Ostrovsky (1):
      net/mlx5: Update error handler for UCTX and UMEM

Paul Blakey (1):
      net/mlx5: E-Switch, Fix resetting of encap mode when entering switchdev

Raed Salem (1):
      net/mlx5: E-Switch, return error if encap isn't supported

Roi Dayan (1):
      net/mlx5e: CT, Fix multiple allocations and memleak of mod acts

Tariq Toukan (1):
      net/mlx5e: kTLS, Fix crash in RX resync flow

Valentine Fatiev (1):
      net/mlx5e: nullify cq->dbg pointer in mlx5_debug_cq_remove()

Vlad Buslov (1):
      net/mlx5e: Wait for concurrent flow deletion during neigh/fib events

Yevgeny Kliteynik (2):
      net/mlx5: DR, Handle eswitch manager and uplink vports separately
      net/mlx5: DR, Fix check for unsupported fields in match param

 drivers/net/ethernet/mellanox/mlx5/core/cmd.c      |  4 +-
 drivers/net/ethernet/mellanox/mlx5/core/cq.c       |  5 +-
 drivers/net/ethernet/mellanox/mlx5/core/debugfs.c  |  4 +-
 drivers/net/ethernet/mellanox/mlx5/core/en/tc_ct.c | 26 ++++++----
 drivers/net/ethernet/mellanox/mlx5/core/en/tc_ct.h |  2 +
 .../net/ethernet/mellanox/mlx5/core/en/tc_priv.h   |  1 +
 .../ethernet/mellanox/mlx5/core/en/tc_tun_encap.c  |  8 +++-
 .../ethernet/mellanox/mlx5/core/en_accel/ktls_rx.c | 23 ++++++---
 drivers/net/ethernet/mellanox/mlx5/core/en_tc.c    | 10 +++-
 drivers/net/ethernet/mellanox/mlx5/core/eswitch.c  | 21 ++++++--
 .../ethernet/mellanox/mlx5/core/eswitch_offloads.c |  9 +---
 .../net/ethernet/mellanox/mlx5/core/fs_counters.c  |  2 +-
 drivers/net/ethernet/mellanox/mlx5/core/lag/lag.c  | 28 +++++------
 .../mellanox/mlx5/core/steering/dr_domain.c        | 56 +++++++++-------------
 .../mellanox/mlx5/core/steering/dr_matcher.c       | 11 +++--
 .../mellanox/mlx5/core/steering/dr_types.h         |  1 +
 include/linux/mlx5/eswitch.h                       |  4 +-
 17 files changed, 124 insertions(+), 91 deletions(-)


* [net 01/12] net/mlx5e: kTLS, Fix crash in RX resync flow
  2021-11-16 20:23 [pull request][net 00/12] mlx5 fixes 2021-11-16 Saeed Mahameed
@ 2021-11-16 20:23 ` Saeed Mahameed
  2021-11-17 11:10   ` patchwork-bot+netdevbpf
  2021-11-16 20:23 ` [net 02/12] net/mlx5e: Wait for concurrent flow deletion during neigh/fib events Saeed Mahameed
                   ` (10 subsequent siblings)
  11 siblings, 1 reply; 14+ messages in thread
From: Saeed Mahameed @ 2021-11-16 20:23 UTC (permalink / raw)
  To: David S. Miller, Jakub Kicinski
  Cc: netdev, Tariq Toukan, Maxim Mikityanskiy, Saeed Mahameed

From: Tariq Toukan <tariqt@nvidia.com>

For the TLS RX resync flow, we maintain a list of TLS contexts
that require attention, in order to communicate their resync
information to the HW.
Here we fix list corruptions by protecting the entries against
movements coming from resync_handle_seq_match() until their resync
handling in NAPI is fully completed.
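
A simplified sketch of the resulting locking scheme (illustrative only;
'entry', 'node' and 'resync_list' are placeholder names, not the actual
driver symbols):

	/* queueing side: add the entry only if it is not already queued */
	spin_lock_bh(&resync_list->lock);
	spin_lock_bh(&entry->lock);
	if (list_empty(&entry->node))
		list_add_tail(&entry->node, &resync_list->head);
	spin_unlock_bh(&entry->lock);
	spin_unlock_bh(&resync_list->lock);

	/* NAPI side (already in softirq context): detach under the same
	 * per-entry lock, so the entry cannot be moved while its resync
	 * handling is in progress
	 */
	spin_lock(&entry->lock);
	list_del_init(&entry->node);
	spin_unlock(&entry->lock);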

Fixes: e9ce991bce5b ("net/mlx5e: kTLS, Add resiliency to RX resync failures")
Signed-off-by: Tariq Toukan <tariqt@nvidia.com>
Reviewed-by: Maxim Mikityanskiy <maximmi@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
---
 .../mellanox/mlx5/core/en_accel/ktls_rx.c     | 23 ++++++++++++++-----
 1 file changed, 17 insertions(+), 6 deletions(-)

diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ktls_rx.c b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ktls_rx.c
index 62abce008c7b..a2a9f68579dd 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ktls_rx.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ktls_rx.c
@@ -55,6 +55,7 @@ struct mlx5e_ktls_offload_context_rx {
 	DECLARE_BITMAP(flags, MLX5E_NUM_PRIV_RX_FLAGS);
 
 	/* resync */
+	spinlock_t lock; /* protects resync fields */
 	struct mlx5e_ktls_rx_resync_ctx resync;
 	struct list_head list;
 };
@@ -386,14 +387,18 @@ static void resync_handle_seq_match(struct mlx5e_ktls_offload_context_rx *priv_r
 	struct mlx5e_icosq *sq;
 	bool trigger_poll;
 
-	memcpy(info->rec_seq, &priv_rx->resync.sw_rcd_sn_be, sizeof(info->rec_seq));
-
 	sq = &c->async_icosq;
 	ktls_resync = sq->ktls_resync;
+	trigger_poll = false;
 
 	spin_lock_bh(&ktls_resync->lock);
-	list_add_tail(&priv_rx->list, &ktls_resync->list);
-	trigger_poll = !test_and_set_bit(MLX5E_SQ_STATE_PENDING_TLS_RX_RESYNC, &sq->state);
+	spin_lock_bh(&priv_rx->lock);
+	memcpy(info->rec_seq, &priv_rx->resync.sw_rcd_sn_be, sizeof(info->rec_seq));
+	if (list_empty(&priv_rx->list)) {
+		list_add_tail(&priv_rx->list, &ktls_resync->list);
+		trigger_poll = !test_and_set_bit(MLX5E_SQ_STATE_PENDING_TLS_RX_RESYNC, &sq->state);
+	}
+	spin_unlock_bh(&priv_rx->lock);
 	spin_unlock_bh(&ktls_resync->lock);
 
 	if (!trigger_poll)
@@ -617,6 +622,8 @@ int mlx5e_ktls_add_rx(struct net_device *netdev, struct sock *sk,
 	if (err)
 		goto err_create_key;
 
+	INIT_LIST_HEAD(&priv_rx->list);
+	spin_lock_init(&priv_rx->lock);
 	priv_rx->crypto_info  =
 		*(struct tls12_crypto_info_aes_gcm_128 *)crypto_info;
 
@@ -730,10 +737,14 @@ bool mlx5e_ktls_rx_handle_resync_list(struct mlx5e_channel *c, int budget)
 		priv_rx = list_first_entry(&local_list,
 					   struct mlx5e_ktls_offload_context_rx,
 					   list);
+		spin_lock(&priv_rx->lock);
 		cseg = post_static_params(sq, priv_rx);
-		if (IS_ERR(cseg))
+		if (IS_ERR(cseg)) {
+			spin_unlock(&priv_rx->lock);
 			break;
-		list_del(&priv_rx->list);
+		}
+		list_del_init(&priv_rx->list);
+		spin_unlock(&priv_rx->lock);
 		db_cseg = cseg;
 	}
 	if (db_cseg)
-- 
2.31.1



* [net 02/12] net/mlx5e: Wait for concurrent flow deletion during neigh/fib events
  2021-11-16 20:23 [pull request][net 00/12] mlx5 fixes 2021-11-16 Saeed Mahameed
  2021-11-16 20:23 ` [net 01/12] net/mlx5e: kTLS, Fix crash in RX resync flow Saeed Mahameed
@ 2021-11-16 20:23 ` Saeed Mahameed
  2021-11-16 20:23 ` [net 03/12] net/mlx5: E-Switch, Fix resetting of encap mode when entering switchdev Saeed Mahameed
                   ` (9 subsequent siblings)
  11 siblings, 0 replies; 14+ messages in thread
From: Saeed Mahameed @ 2021-11-16 20:23 UTC (permalink / raw)
  To: David S. Miller, Jakub Kicinski
  Cc: netdev, Vlad Buslov, Roi Dayan, Saeed Mahameed

From: Vlad Buslov <vladbu@nvidia.com>

Function mlx5e_take_tmp_flow() skips flows with zero reference count. This
can cause syndrome 0x179e84 when called from neigh or route update code
and the skipped flow is not removed from the hardware by the time the
underlying encap/decap resource is deleted. Add a new completion
'del_hw_done' that is completed when the flow is unoffloaded. This is safe
to do because a flow with reference count zero needs to be detached from
the encap/decap entry before its memory is deallocated, which requires
taking the encap_tbl_lock mutex that is held by the event handlers code.

Fixes: 8914add2c9e5 ("net/mlx5e: Handle FIB events to update tunnel endpoint device")
Signed-off-by: Vlad Buslov <vladbu@nvidia.com>
Reviewed-by: Roi Dayan <roid@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
---
 drivers/net/ethernet/mellanox/mlx5/core/en/tc_priv.h      | 1 +
 drivers/net/ethernet/mellanox/mlx5/core/en/tc_tun_encap.c | 8 +++++++-
 drivers/net/ethernet/mellanox/mlx5/core/en_tc.c           | 2 ++
 3 files changed, 10 insertions(+), 1 deletion(-)

diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/tc_priv.h b/drivers/net/ethernet/mellanox/mlx5/core/en/tc_priv.h
index 8f64f2c8895a..b689701ac7d8 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en/tc_priv.h
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en/tc_priv.h
@@ -102,6 +102,7 @@ struct mlx5e_tc_flow {
 	refcount_t refcnt;
 	struct rcu_head rcu_head;
 	struct completion init_done;
+	struct completion del_hw_done;
 	int tunnel_id; /* the mapped tunnel id of this flow */
 	struct mlx5_flow_attr *attr;
 };
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/tc_tun_encap.c b/drivers/net/ethernet/mellanox/mlx5/core/en/tc_tun_encap.c
index 660cca73c36c..042b1abe1437 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en/tc_tun_encap.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en/tc_tun_encap.c
@@ -245,8 +245,14 @@ static void mlx5e_take_tmp_flow(struct mlx5e_tc_flow *flow,
 				struct list_head *flow_list,
 				int index)
 {
-	if (IS_ERR(mlx5e_flow_get(flow)))
+	if (IS_ERR(mlx5e_flow_get(flow))) {
+		/* Flow is being deleted concurrently. Wait for it to be
+		 * unoffloaded from hardware, otherwise deleting encap will
+		 * fail.
+		 */
+		wait_for_completion(&flow->del_hw_done);
 		return;
+	}
 	wait_for_completion(&flow->init_done);
 
 	flow->tmp_entry_index = index;
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c b/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c
index 835caa1c7b74..cb76c41fe163 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c
@@ -1600,6 +1600,7 @@ static void mlx5e_tc_del_fdb_flow(struct mlx5e_priv *priv,
 		else
 			mlx5e_tc_unoffload_fdb_rules(esw, flow, attr);
 	}
+	complete_all(&flow->del_hw_done);
 
 	if (mlx5_flow_has_geneve_opt(flow))
 		mlx5_geneve_tlv_option_del(priv->mdev->geneve);
@@ -4465,6 +4466,7 @@ mlx5e_alloc_flow(struct mlx5e_priv *priv, int attr_size,
 	INIT_LIST_HEAD(&flow->l3_to_l2_reformat);
 	refcount_set(&flow->refcnt, 1);
 	init_completion(&flow->init_done);
+	init_completion(&flow->del_hw_done);
 
 	*__flow = flow;
 	*__parse_attr = parse_attr;
-- 
2.31.1



* [net 03/12] net/mlx5: E-Switch, Fix resetting of encap mode when entering switchdev
  2021-11-16 20:23 [pull request][net 00/12] mlx5 fixes 2021-11-16 Saeed Mahameed
  2021-11-16 20:23 ` [net 01/12] net/mlx5e: kTLS, Fix crash in RX resync flow Saeed Mahameed
  2021-11-16 20:23 ` [net 02/12] net/mlx5e: Wait for concurrent flow deletion during neigh/fib events Saeed Mahameed
@ 2021-11-16 20:23 ` Saeed Mahameed
  2021-11-16 20:23 ` [net 04/12] net/mlx5e: nullify cq->dbg pointer in mlx5_debug_cq_remove() Saeed Mahameed
                   ` (8 subsequent siblings)
  11 siblings, 0 replies; 14+ messages in thread
From: Saeed Mahameed @ 2021-11-16 20:23 UTC (permalink / raw)
  To: David S. Miller, Jakub Kicinski
  Cc: netdev, Paul Blakey, Mark Bloch, Maor Dickman, Saeed Mahameed

From: Paul Blakey <paulb@nvidia.com>

E-Switch encap mode is relevant only when in switchdev mode.
The RDMA driver can query the encap configuration via
mlx5_eswitch_get_encap_mode(). Make sure it returns the currently
used mode and not the set one.

This reverts the cited commit, which reset the encap mode
on entering switchdev, and fixes the original issue properly.

Fixes: 9a64144d683a ("net/mlx5: E-Switch, Fix default encap mode")
Signed-off-by: Paul Blakey <paulb@nvidia.com>
Reviewed-by: Mark Bloch <mbloch@nvidia.com>
Reviewed-by: Maor Dickman <maord@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
---
 drivers/net/ethernet/mellanox/mlx5/core/eswitch.c        | 9 +++++++--
 .../net/ethernet/mellanox/mlx5/core/eswitch_offloads.c   | 7 -------
 include/linux/mlx5/eswitch.h                             | 4 ++--
 3 files changed, 9 insertions(+), 11 deletions(-)

diff --git a/drivers/net/ethernet/mellanox/mlx5/core/eswitch.c b/drivers/net/ethernet/mellanox/mlx5/core/eswitch.c
index ec136b499204..5872cc8bf953 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/eswitch.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/eswitch.c
@@ -1572,6 +1572,11 @@ int mlx5_eswitch_init(struct mlx5_core_dev *dev)
 	esw->enabled_vports = 0;
 	esw->mode = MLX5_ESWITCH_NONE;
 	esw->offloads.inline_mode = MLX5_INLINE_MODE_NONE;
+	if (MLX5_CAP_ESW_FLOWTABLE_FDB(dev, reformat) &&
+	    MLX5_CAP_ESW_FLOWTABLE_FDB(dev, decap))
+		esw->offloads.encap = DEVLINK_ESWITCH_ENCAP_MODE_BASIC;
+	else
+		esw->offloads.encap = DEVLINK_ESWITCH_ENCAP_MODE_NONE;
 
 	dev->priv.eswitch = esw;
 	BLOCKING_INIT_NOTIFIER_HEAD(&esw->n_head);
@@ -1934,7 +1939,7 @@ int mlx5_eswitch_get_vport_stats(struct mlx5_eswitch *esw,
 	return err;
 }
 
-u8 mlx5_eswitch_mode(struct mlx5_core_dev *dev)
+u8 mlx5_eswitch_mode(const struct mlx5_core_dev *dev)
 {
 	struct mlx5_eswitch *esw = dev->priv.eswitch;
 
@@ -1948,7 +1953,7 @@ mlx5_eswitch_get_encap_mode(const struct mlx5_core_dev *dev)
 	struct mlx5_eswitch *esw;
 
 	esw = dev->priv.eswitch;
-	return mlx5_esw_allowed(esw) ? esw->offloads.encap :
+	return (mlx5_eswitch_mode(dev) == MLX5_ESWITCH_OFFLOADS)  ? esw->offloads.encap :
 		DEVLINK_ESWITCH_ENCAP_MODE_NONE;
 }
 EXPORT_SYMBOL(mlx5_eswitch_get_encap_mode);
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c b/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c
index f4eaa5893886..80fa76f60e1e 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c
@@ -3183,12 +3183,6 @@ int esw_offloads_enable(struct mlx5_eswitch *esw)
 	u64 mapping_id;
 	int err;
 
-	if (MLX5_CAP_ESW_FLOWTABLE_FDB(esw->dev, reformat) &&
-	    MLX5_CAP_ESW_FLOWTABLE_FDB(esw->dev, decap))
-		esw->offloads.encap = DEVLINK_ESWITCH_ENCAP_MODE_BASIC;
-	else
-		esw->offloads.encap = DEVLINK_ESWITCH_ENCAP_MODE_NONE;
-
 	mutex_init(&esw->offloads.termtbl_mutex);
 	mlx5_rdma_enable_roce(esw->dev);
 
@@ -3286,7 +3280,6 @@ void esw_offloads_disable(struct mlx5_eswitch *esw)
 	esw_offloads_metadata_uninit(esw);
 	mlx5_rdma_disable_roce(esw->dev);
 	mutex_destroy(&esw->offloads.termtbl_mutex);
-	esw->offloads.encap = DEVLINK_ESWITCH_ENCAP_MODE_NONE;
 }
 
 static int esw_mode_from_devlink(u16 mode, u16 *mlx5_mode)
diff --git a/include/linux/mlx5/eswitch.h b/include/linux/mlx5/eswitch.h
index 97afcea39a7b..8b18fe9771f9 100644
--- a/include/linux/mlx5/eswitch.h
+++ b/include/linux/mlx5/eswitch.h
@@ -145,13 +145,13 @@ u32 mlx5_eswitch_get_vport_metadata_for_set(struct mlx5_eswitch *esw,
 	GENMASK(31 - ESW_TUN_ID_BITS - ESW_RESERVED_BITS, \
 		ESW_TUN_OPTS_OFFSET + 1)
 
-u8 mlx5_eswitch_mode(struct mlx5_core_dev *dev);
+u8 mlx5_eswitch_mode(const struct mlx5_core_dev *dev);
 u16 mlx5_eswitch_get_total_vports(const struct mlx5_core_dev *dev);
 struct mlx5_core_dev *mlx5_eswitch_get_core_dev(struct mlx5_eswitch *esw);
 
 #else  /* CONFIG_MLX5_ESWITCH */
 
-static inline u8 mlx5_eswitch_mode(struct mlx5_core_dev *dev)
+static inline u8 mlx5_eswitch_mode(const struct mlx5_core_dev *dev)
 {
 	return MLX5_ESWITCH_NONE;
 }
-- 
2.31.1



* [net 04/12] net/mlx5e: nullify cq->dbg pointer in mlx5_debug_cq_remove()
  2021-11-16 20:23 [pull request][net 00/12] mlx5 fixes 2021-11-16 Saeed Mahameed
                   ` (2 preceding siblings ...)
  2021-11-16 20:23 ` [net 03/12] net/mlx5: E-Switch, Fix resetting of encap mode when entering switchdev Saeed Mahameed
@ 2021-11-16 20:23 ` Saeed Mahameed
  2021-11-16 20:23 ` [net 05/12] net/mlx5: DR, Handle eswitch manager and uplink vports separately Saeed Mahameed
                   ` (7 subsequent siblings)
  11 siblings, 0 replies; 14+ messages in thread
From: Saeed Mahameed @ 2021-11-16 20:23 UTC (permalink / raw)
  To: David S. Miller, Jakub Kicinski
  Cc: netdev, Valentine Fatiev, Leon Romanovsky, Saeed Mahameed

From: Valentine Fatiev <valentinef@nvidia.com>

Prior to this patch, if mlx5_core_destroy_cq() failed, it still proceeded
with the rest of the destroy operations. mlx5_core_destroy_cq() could then
be called again by the user, causing an additional call of
mlx5_debug_cq_remove(). cq->dbg was not nullified by the previous call,
which caused the crash.

Fix it by nullifying the cq->dbg pointer after removal.

Also, proceed with the destroy operations only if the FW returns 0
for the MLX5_CMD_OP_DESTROY_CQ command.

general protection fault, probably for non-canonical address 0x2000300004058: 0000 [#1] SMP PTI
CPU: 5 PID: 1228 Comm: python Not tainted 5.15.0-rc5_for_upstream_min_debug_2021_10_14_11_06 #1
Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS rel-1.13.0-0-gf21b5a4aeb02-prebuilt.qemu.org 04/01/2014
RIP: 0010:lockref_get+0x1/0x60
Code: 5d e9 53 ff ff ff 48 8d 7f 70 e8 0a 2e 48 00 c7 85 d0 00 00 00 02
00 00 00 c6 45 70 00 fb 5d c3 c3 cc cc cc cc cc cc cc cc 53 <48> 8b 17
48 89 fb 85 d2 75 3d 48 89 d0 bf 64 00 00 00 48 89 c1 48
RSP: 0018:ffff888137dd7a38 EFLAGS: 00010206
RAX: 0000000000000000 RBX: ffff888107d5f458 RCX: 00000000fffffffe
RDX: 000000000002c2b0 RSI: ffffffff8155e2e0 RDI: 0002000300004058
RBP: ffff888137dd7a88 R08: 0002000300004058 R09: ffff8881144a9f88
R10: 0000000000000000 R11: 0000000000000000 R12: ffff8881141d4000
R13: ffff888137dd7c68 R14: ffff888137dd7d58 R15: ffff888137dd7cc0
FS:  00007f4644f2a4c0(0000) GS:ffff8887a2d40000(0000)
knlGS:0000000000000000
CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 000055b4500f4380 CR3: 0000000114f7a003 CR4: 0000000000170ea0
Call Trace:
  simple_recursive_removal+0x33/0x2e0
  ? debugfs_remove+0x60/0x60
  debugfs_remove+0x40/0x60
  mlx5_debug_cq_remove+0x32/0x70 [mlx5_core]
  mlx5_core_destroy_cq+0x41/0x1d0 [mlx5_core]
  devx_obj_cleanup+0x151/0x330 [mlx5_ib]
  ? __pollwait+0xd0/0xd0
  ? xas_load+0x5/0x70
  ? xa_load+0x62/0xa0
  destroy_hw_idr_uobject+0x20/0x80 [ib_uverbs]
  uverbs_destroy_uobject+0x3b/0x360 [ib_uverbs]
  uobj_destroy+0x54/0xa0 [ib_uverbs]
  ib_uverbs_cmd_verbs+0xaf2/0x1160 [ib_uverbs]
  ? uverbs_finalize_object+0xd0/0xd0 [ib_uverbs]
  ib_uverbs_ioctl+0xc4/0x1b0 [ib_uverbs]
  __x64_sys_ioctl+0x3e4/0x8e0

Fixes: 94b960b9deff ("net/mlx5e: Fix memory leak in mlx5_core_destroy_cq() error path")
Signed-off-by: Valentine Fatiev <valentinef@nvidia.com>
Reviewed-by: Leon Romanovsky <leonro@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
---
 drivers/net/ethernet/mellanox/mlx5/core/cq.c      | 5 +++--
 drivers/net/ethernet/mellanox/mlx5/core/debugfs.c | 4 +++-
 2 files changed, 6 insertions(+), 3 deletions(-)

diff --git a/drivers/net/ethernet/mellanox/mlx5/core/cq.c b/drivers/net/ethernet/mellanox/mlx5/core/cq.c
index 02e77ffe5c3e..5371ad0a12eb 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/cq.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/cq.c
@@ -164,13 +164,14 @@ int mlx5_core_destroy_cq(struct mlx5_core_dev *dev, struct mlx5_core_cq *cq)
 	MLX5_SET(destroy_cq_in, in, cqn, cq->cqn);
 	MLX5_SET(destroy_cq_in, in, uid, cq->uid);
 	err = mlx5_cmd_exec_in(dev, destroy_cq, in);
+	if (err)
+		return err;
 
 	synchronize_irq(cq->irqn);
-
 	mlx5_cq_put(cq);
 	wait_for_completion(&cq->free);
 
-	return err;
+	return 0;
 }
 EXPORT_SYMBOL(mlx5_core_destroy_cq);
 
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/debugfs.c b/drivers/net/ethernet/mellanox/mlx5/core/debugfs.c
index 07c8d9811bc8..10d195042ab5 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/debugfs.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/debugfs.c
@@ -507,6 +507,8 @@ void mlx5_debug_cq_remove(struct mlx5_core_dev *dev, struct mlx5_core_cq *cq)
 	if (!mlx5_debugfs_root)
 		return;
 
-	if (cq->dbg)
+	if (cq->dbg) {
 		rem_res_tree(cq->dbg);
+		cq->dbg = NULL;
+	}
 }
-- 
2.31.1



* [net 05/12] net/mlx5: DR, Handle eswitch manager and uplink vports separately
  2021-11-16 20:23 [pull request][net 00/12] mlx5 fixes 2021-11-16 Saeed Mahameed
                   ` (3 preceding siblings ...)
  2021-11-16 20:23 ` [net 04/12] net/mlx5e: nullify cq->dbg pointer in mlx5_debug_cq_remove() Saeed Mahameed
@ 2021-11-16 20:23 ` Saeed Mahameed
  2021-11-16 20:23 ` [net 06/12] net/mlx5: DR, Fix check for unsupported fields in match param Saeed Mahameed
                   ` (6 subsequent siblings)
  11 siblings, 0 replies; 14+ messages in thread
From: Saeed Mahameed @ 2021-11-16 20:23 UTC (permalink / raw)
  To: David S. Miller, Jakub Kicinski; +Cc: netdev, Yevgeny Kliteynik, Saeed Mahameed

From: Yevgeny Kliteynik <kliteyn@nvidia.com>

When querying eswitch manager vport capabilities as "other = 1",
we encounter a FW compatibility issue with older FW versions.
To maintain backward compatibility, the eswitch manager vport should
be queried as an "other = 0" vport in both the ECPF and non-ECPF cases.

This patch fixes these queries and improves the code readability
by handling eswitch manager and uplink vports separately, avoiding
the excessive 'if' conditions. Also, uplink caps are now stored
similarly to the esw manager caps and not as part of the xarray.

Fixes: dd4acb2a0954 ("net/mlx5: DR, Add missing query for vport 0")
Signed-off-by: Yevgeny Kliteynik <kliteyn@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
---
 .../mellanox/mlx5/core/steering/dr_domain.c   | 56 ++++++++-----------
 .../mellanox/mlx5/core/steering/dr_types.h    |  1 +
 2 files changed, 24 insertions(+), 33 deletions(-)

diff --git a/drivers/net/ethernet/mellanox/mlx5/core/steering/dr_domain.c b/drivers/net/ethernet/mellanox/mlx5/core/steering/dr_domain.c
index 49089cbe897c..8cbd36c82b3b 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/steering/dr_domain.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/steering/dr_domain.c
@@ -135,25 +135,14 @@ static void dr_domain_fill_uplink_caps(struct mlx5dr_domain *dmn,
 
 static int dr_domain_query_vport(struct mlx5dr_domain *dmn,
 				 u16 vport_number,
+				 bool other_vport,
 				 struct mlx5dr_cmd_vport_cap *vport_caps)
 {
-	u16 cmd_vport = vport_number;
-	bool other_vport = true;
 	int ret;
 
-	if (vport_number == MLX5_VPORT_UPLINK) {
-		dr_domain_fill_uplink_caps(dmn, vport_caps);
-		return 0;
-	}
-
-	if (dmn->info.caps.is_ecpf && vport_number == MLX5_VPORT_ECPF) {
-		other_vport = false;
-		cmd_vport = 0;
-	}
-
 	ret = mlx5dr_cmd_query_esw_vport_context(dmn->mdev,
 						 other_vport,
-						 cmd_vport,
+						 vport_number,
 						 &vport_caps->icm_address_rx,
 						 &vport_caps->icm_address_tx);
 	if (ret)
@@ -161,7 +150,7 @@ static int dr_domain_query_vport(struct mlx5dr_domain *dmn,
 
 	ret = mlx5dr_cmd_query_gvmi(dmn->mdev,
 				    other_vport,
-				    cmd_vport,
+				    vport_number,
 				    &vport_caps->vport_gvmi);
 	if (ret)
 		return ret;
@@ -176,9 +165,15 @@ static int dr_domain_query_esw_mngr(struct mlx5dr_domain *dmn)
 {
 	return dr_domain_query_vport(dmn,
 				     dmn->info.caps.is_ecpf ? MLX5_VPORT_ECPF : 0,
+				     false,
 				     &dmn->info.caps.vports.esw_manager_caps);
 }
 
+static void dr_domain_query_uplink(struct mlx5dr_domain *dmn)
+{
+	dr_domain_fill_uplink_caps(dmn, &dmn->info.caps.vports.uplink_caps);
+}
+
 static struct mlx5dr_cmd_vport_cap *
 dr_domain_add_vport_cap(struct mlx5dr_domain *dmn, u16 vport)
 {
@@ -190,7 +185,7 @@ dr_domain_add_vport_cap(struct mlx5dr_domain *dmn, u16 vport)
 	if (!vport_caps)
 		return NULL;
 
-	ret = dr_domain_query_vport(dmn, vport, vport_caps);
+	ret = dr_domain_query_vport(dmn, vport, true, vport_caps);
 	if (ret) {
 		kvfree(vport_caps);
 		return NULL;
@@ -207,16 +202,26 @@ dr_domain_add_vport_cap(struct mlx5dr_domain *dmn, u16 vport)
 	return vport_caps;
 }
 
+static bool dr_domain_is_esw_mgr_vport(struct mlx5dr_domain *dmn, u16 vport)
+{
+	struct mlx5dr_cmd_caps *caps = &dmn->info.caps;
+
+	return (caps->is_ecpf && vport == MLX5_VPORT_ECPF) ||
+	       (!caps->is_ecpf && vport == 0);
+}
+
 struct mlx5dr_cmd_vport_cap *
 mlx5dr_domain_get_vport_cap(struct mlx5dr_domain *dmn, u16 vport)
 {
 	struct mlx5dr_cmd_caps *caps = &dmn->info.caps;
 	struct mlx5dr_cmd_vport_cap *vport_caps;
 
-	if ((caps->is_ecpf && vport == MLX5_VPORT_ECPF) ||
-	    (!caps->is_ecpf && vport == 0))
+	if (dr_domain_is_esw_mgr_vport(dmn, vport))
 		return &caps->vports.esw_manager_caps;
 
+	if (vport == MLX5_VPORT_UPLINK)
+		return &caps->vports.uplink_caps;
+
 vport_load:
 	vport_caps = xa_load(&caps->vports.vports_caps_xa, vport);
 	if (vport_caps)
@@ -241,17 +246,6 @@ static void dr_domain_clear_vports(struct mlx5dr_domain *dmn)
 	}
 }
 
-static int dr_domain_query_uplink(struct mlx5dr_domain *dmn)
-{
-	struct mlx5dr_cmd_vport_cap *vport_caps;
-
-	vport_caps = mlx5dr_domain_get_vport_cap(dmn, MLX5_VPORT_UPLINK);
-	if (!vport_caps)
-		return -EINVAL;
-
-	return 0;
-}
-
 static int dr_domain_query_fdb_caps(struct mlx5_core_dev *mdev,
 				    struct mlx5dr_domain *dmn)
 {
@@ -281,11 +275,7 @@ static int dr_domain_query_fdb_caps(struct mlx5_core_dev *mdev,
 		goto free_vports_caps_xa;
 	}
 
-	ret = dr_domain_query_uplink(dmn);
-	if (ret) {
-		mlx5dr_err(dmn, "Failed to query uplink vport caps (err: %d)", ret);
-		goto free_vports_caps_xa;
-	}
+	dr_domain_query_uplink(dmn);
 
 	return 0;
 
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/steering/dr_types.h b/drivers/net/ethernet/mellanox/mlx5/core/steering/dr_types.h
index 3028b776da00..2333c2439c28 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/steering/dr_types.h
+++ b/drivers/net/ethernet/mellanox/mlx5/core/steering/dr_types.h
@@ -764,6 +764,7 @@ struct mlx5dr_roce_cap {
 
 struct mlx5dr_vports {
 	struct mlx5dr_cmd_vport_cap esw_manager_caps;
+	struct mlx5dr_cmd_vport_cap uplink_caps;
 	struct xarray vports_caps_xa;
 };
 
-- 
2.31.1



* [net 06/12] net/mlx5: DR, Fix check for unsupported fields in match param
  2021-11-16 20:23 [pull request][net 00/12] mlx5 fixes 2021-11-16 Saeed Mahameed
                   ` (4 preceding siblings ...)
  2021-11-16 20:23 ` [net 05/12] net/mlx5: DR, Handle eswitch manager and uplink vports separately Saeed Mahameed
@ 2021-11-16 20:23 ` Saeed Mahameed
  2021-11-16 20:23 ` [net 07/12] net/mlx5: Update error handler for UCTX and UMEM Saeed Mahameed
                   ` (5 subsequent siblings)
  11 siblings, 0 replies; 14+ messages in thread
From: Saeed Mahameed @ 2021-11-16 20:23 UTC (permalink / raw)
  To: David S. Miller, Jakub Kicinski; +Cc: netdev, Yevgeny Kliteynik, Saeed Mahameed

From: Yevgeny Kliteynik <kliteyn@nvidia.com>

The existing loop doesn't cast the buffer to bytes while scanning it,
which results in an out-of-bounds read and a failure to create the
matcher.
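
For illustration, a minimal standalone sketch of the fixed scan
(assuming the mask buffer is an array of u64 words while match_sz counts
bytes; the helper name is made up for this example):

	static bool mask_has_unsupported_bits(const u64 *match_buf,
					      size_t match_sz)
	{
		const u8 *buf = (const u8 *)match_buf;
		size_t i;

		/* indexing match_buf[i] directly would step in u64 words
		 * while i counts bytes, reading far past the buffer end
		 */
		for (i = 0; i < match_sz; i++)
			if (buf[i])
				return true;
		return false;
	}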

Fixes: 941f19798a11 ("net/mlx5: DR, Add check for unsupported fields in match param")
Signed-off-by: Yevgeny Kliteynik <kliteyn@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
---
 .../ethernet/mellanox/mlx5/core/steering/dr_matcher.c | 11 ++++++-----
 1 file changed, 6 insertions(+), 5 deletions(-)

diff --git a/drivers/net/ethernet/mellanox/mlx5/core/steering/dr_matcher.c b/drivers/net/ethernet/mellanox/mlx5/core/steering/dr_matcher.c
index 75c775bee351..793365242e85 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/steering/dr_matcher.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/steering/dr_matcher.c
@@ -924,11 +924,12 @@ static int dr_matcher_init(struct mlx5dr_matcher *matcher,
 
 	/* Check that all mask data was consumed */
 	for (i = 0; i < consumed_mask.match_sz; i++) {
-		if (consumed_mask.match_buf[i]) {
-			mlx5dr_dbg(dmn, "Match param mask contains unsupported parameters\n");
-			ret = -EOPNOTSUPP;
-			goto free_consumed_mask;
-		}
+		if (!((u8 *)consumed_mask.match_buf)[i])
+			continue;
+
+		mlx5dr_dbg(dmn, "Match param mask contains unsupported parameters\n");
+		ret = -EOPNOTSUPP;
+		goto free_consumed_mask;
 	}
 
 	ret =  0;
-- 
2.31.1



* [net 07/12] net/mlx5: Update error handler for UCTX and UMEM
  2021-11-16 20:23 [pull request][net 00/12] mlx5 fixes 2021-11-16 Saeed Mahameed
                   ` (5 preceding siblings ...)
  2021-11-16 20:23 ` [net 06/12] net/mlx5: DR, Fix check for unsupported fields in match param Saeed Mahameed
@ 2021-11-16 20:23 ` Saeed Mahameed
  2021-11-16 20:23 ` [net 08/12] net/mlx5: E-Switch, rebuild lag only when needed Saeed Mahameed
                   ` (4 subsequent siblings)
  11 siblings, 0 replies; 14+ messages in thread
From: Saeed Mahameed @ 2021-11-16 20:23 UTC (permalink / raw)
  To: David S. Miller, Jakub Kicinski
  Cc: netdev, Neta Ostrovsky, Leon Romanovsky, Saeed Mahameed

From: Neta Ostrovsky <netao@nvidia.com>

In the fast unload flow, the device state is set to internal error,
which indicates that the driver started the destroy process.
In this case, when a destroy command is being executed, it should return
MLX5_CMD_STAT_OK.
Fix MLX5_CMD_OP_DESTROY_UCTX and MLX5_CMD_OP_DESTROY_UMEM to return OK
instead of EIO.

This fixes a call trace in the umem release process -
[ 2633.536695] Call Trace:
[ 2633.537518]  ib_uverbs_remove_one+0xc3/0x140 [ib_uverbs]
[ 2633.538596]  remove_client_context+0x8b/0xd0 [ib_core]
[ 2633.539641]  disable_device+0x8c/0x130 [ib_core]
[ 2633.540615]  __ib_unregister_device+0x35/0xa0 [ib_core]
[ 2633.541640]  ib_unregister_device+0x21/0x30 [ib_core]
[ 2633.542663]  __mlx5_ib_remove+0x38/0x90 [mlx5_ib]
[ 2633.543640]  auxiliary_bus_remove+0x1e/0x30 [auxiliary]
[ 2633.544661]  device_release_driver_internal+0x103/0x1f0
[ 2633.545679]  bus_remove_device+0xf7/0x170
[ 2633.546640]  device_del+0x181/0x410
[ 2633.547606]  mlx5_rescan_drivers_locked.part.10+0x63/0x160 [mlx5_core]
[ 2633.548777]  mlx5_unregister_device+0x27/0x40 [mlx5_core]
[ 2633.549841]  mlx5_uninit_one+0x21/0xc0 [mlx5_core]
[ 2633.550864]  remove_one+0x69/0xe0 [mlx5_core]
[ 2633.551819]  pci_device_remove+0x3b/0xc0
[ 2633.552731]  device_release_driver_internal+0x103/0x1f0
[ 2633.553746]  unbind_store+0xf6/0x130
[ 2633.554657]  kernfs_fop_write+0x116/0x190
[ 2633.555567]  vfs_write+0xa5/0x1a0
[ 2633.556407]  ksys_write+0x4f/0xb0
[ 2633.557233]  do_syscall_64+0x5b/0x1a0
[ 2633.558071]  entry_SYSCALL_64_after_hwframe+0x65/0xca
[ 2633.559018] RIP: 0033:0x7f9977132648
[ 2633.559821] Code: 89 02 48 c7 c0 ff ff ff ff eb b3 0f 1f 80 00 00 00 00 f3 0f 1e fa 48 8d 05 55 6f 2d 00 8b 00 85 c0 75 17 b8 01 00 00 00 0f 05 <48> 3d 00 f0 ff ff 77 58 c3 0f 1f 80 00 00 00 00 41 54 49 89 d4 55
[ 2633.562332] RSP: 002b:00007fffb1a83888 EFLAGS: 00000246 ORIG_RAX: 0000000000000001
[ 2633.563472] RAX: ffffffffffffffda RBX: 000000000000000c RCX: 00007f9977132648
[ 2633.564541] RDX: 000000000000000c RSI: 000055b90546e230 RDI: 0000000000000001
[ 2633.565596] RBP: 000055b90546e230 R08: 00007f9977406860 R09: 00007f9977a54740
[ 2633.566653] R10: 0000000000000000 R11: 0000000000000246 R12: 00007f99774056e0
[ 2633.567692] R13: 000000000000000c R14: 00007f9977400880 R15: 000000000000000c
[ 2633.568725] ---[ end trace 10b4fe52945e544d ]---

Fixes: 6a6fabbfa3e8 ("net/mlx5: Update pci error handler entries and command translation")
Signed-off-by: Neta Ostrovsky <netao@nvidia.com>
Reviewed-by: Leon Romanovsky <leonro@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
---
 drivers/net/ethernet/mellanox/mlx5/core/cmd.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/drivers/net/ethernet/mellanox/mlx5/core/cmd.c b/drivers/net/ethernet/mellanox/mlx5/core/cmd.c
index f71ec4d9d68e..8eaa24d865c5 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/cmd.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/cmd.c
@@ -339,6 +339,8 @@ static int mlx5_internal_err_ret_value(struct mlx5_core_dev *dev, u16 op,
 	case MLX5_CMD_OP_PAGE_FAULT_RESUME:
 	case MLX5_CMD_OP_QUERY_ESW_FUNCTIONS:
 	case MLX5_CMD_OP_DEALLOC_SF:
+	case MLX5_CMD_OP_DESTROY_UCTX:
+	case MLX5_CMD_OP_DESTROY_UMEM:
 		return MLX5_CMD_STAT_OK;
 
 	case MLX5_CMD_OP_QUERY_HCA_CAP:
@@ -464,9 +466,7 @@ static int mlx5_internal_err_ret_value(struct mlx5_core_dev *dev, u16 op,
 	case MLX5_CMD_OP_MODIFY_GENERAL_OBJECT:
 	case MLX5_CMD_OP_QUERY_GENERAL_OBJECT:
 	case MLX5_CMD_OP_CREATE_UCTX:
-	case MLX5_CMD_OP_DESTROY_UCTX:
 	case MLX5_CMD_OP_CREATE_UMEM:
-	case MLX5_CMD_OP_DESTROY_UMEM:
 	case MLX5_CMD_OP_ALLOC_MEMIC:
 	case MLX5_CMD_OP_MODIFY_XRQ:
 	case MLX5_CMD_OP_RELEASE_XRQ_ERROR:
-- 
2.31.1



* [net 08/12] net/mlx5: E-Switch, rebuild lag only when needed
  2021-11-16 20:23 [pull request][net 00/12] mlx5 fixes 2021-11-16 Saeed Mahameed
                   ` (6 preceding siblings ...)
  2021-11-16 20:23 ` [net 07/12] net/mlx5: Update error handler for UCTX and UMEM Saeed Mahameed
@ 2021-11-16 20:23 ` Saeed Mahameed
  2021-11-16 20:23 ` [net 09/12] net/mlx5: Fix flow counters SF bulk query len Saeed Mahameed
                   ` (3 subsequent siblings)
  11 siblings, 0 replies; 14+ messages in thread
From: Saeed Mahameed @ 2021-11-16 20:23 UTC (permalink / raw)
  To: David S. Miller, Jakub Kicinski; +Cc: netdev, Mark Bloch, Saeed Mahameed

From: Mark Bloch <mbloch@nvidia.com>

A user can enable VFs without changing the E-Switch mode; this can happen
when a user moves straight to switchdev mode and only enables VFs via the
sysfs interface once already in switchdev.

The cited commit assumed this isn't possible and exposed a single
API function where the E-Switch calls into the lag code, breaks the lag
and prevents any other lag operations from taking place until the
E-Switch update has ended.

Breaking the hardware lag when it isn't needed can make it impossible
to enable the hardware lag again.

In the sysfs call path, check whether the current E-Switch mode is NONE;
in the context of this function it can only mean the E-Switch is moving
out of NONE mode, and the hardware lag should be disabled and re-enabled
once the mode change has ended. If the mode isn't NONE, it means
VFs are about to be enabled and such an operation doesn't require
toggling the hardware lag.

Fixes: cac1eb2cf2e3 ("net/mlx5: Lag, properly lock eswitch if needed")
Signed-off-by: Mark Bloch <mbloch@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
---
 drivers/net/ethernet/mellanox/mlx5/core/eswitch.c | 12 ++++++++++--
 1 file changed, 10 insertions(+), 2 deletions(-)

diff --git a/drivers/net/ethernet/mellanox/mlx5/core/eswitch.c b/drivers/net/ethernet/mellanox/mlx5/core/eswitch.c
index 5872cc8bf953..51a8cecc4a7c 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/eswitch.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/eswitch.c
@@ -1305,12 +1305,17 @@ int mlx5_eswitch_enable_locked(struct mlx5_eswitch *esw, int mode, int num_vfs)
  */
 int mlx5_eswitch_enable(struct mlx5_eswitch *esw, int num_vfs)
 {
+	bool toggle_lag;
 	int ret;
 
 	if (!mlx5_esw_allowed(esw))
 		return 0;
 
-	mlx5_lag_disable_change(esw->dev);
+	toggle_lag = esw->mode == MLX5_ESWITCH_NONE;
+
+	if (toggle_lag)
+		mlx5_lag_disable_change(esw->dev);
+
 	down_write(&esw->mode_lock);
 	if (esw->mode == MLX5_ESWITCH_NONE) {
 		ret = mlx5_eswitch_enable_locked(esw, MLX5_ESWITCH_LEGACY, num_vfs);
@@ -1324,7 +1329,10 @@ int mlx5_eswitch_enable(struct mlx5_eswitch *esw, int num_vfs)
 			esw->esw_funcs.num_vfs = num_vfs;
 	}
 	up_write(&esw->mode_lock);
-	mlx5_lag_enable_change(esw->dev);
+
+	if (toggle_lag)
+		mlx5_lag_enable_change(esw->dev);
+
 	return ret;
 }
 
-- 
2.31.1



* [net 09/12] net/mlx5: Fix flow counters SF bulk query len
  2021-11-16 20:23 [pull request][net 00/12] mlx5 fixes 2021-11-16 Saeed Mahameed
                   ` (7 preceding siblings ...)
  2021-11-16 20:23 ` [net 08/12] net/mlx5: E-Switch, rebuild lag only when needed Saeed Mahameed
@ 2021-11-16 20:23 ` Saeed Mahameed
  2021-11-16 20:23 ` [net 10/12] net/mlx5e: CT, Fix multiple allocations and memleak of mod acts Saeed Mahameed
                   ` (2 subsequent siblings)
  11 siblings, 0 replies; 14+ messages in thread
From: Saeed Mahameed @ 2021-11-16 20:23 UTC (permalink / raw)
  To: David S. Miller, Jakub Kicinski; +Cc: netdev, Avihai Horon, Saeed Mahameed

From: Avihai Horon <avihaih@nvidia.com>

When doing a flow counters bulk query, the number of counters to query
must be aligned to 4. The current SF bulk query len is not aligned to 4,
which leads to an error when trying to query more than 4 counters.

Fix it by aligning the SF bulk query len to 4.
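
For illustration, the constraint can be written as a minimal sketch (the
helper name is made up for this example, not driver code):

	/* FW requires the number of counters in a bulk query to be a
	 * multiple of 4; a bulk len of 6 fails once more than 4 counters
	 * are queried, while 8 is valid.
	 */
	static inline bool fc_bulk_query_len_valid(unsigned int num_counters)
	{
		return num_counters && !(num_counters % 4);
	}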

Fixes: 2fdeb4f4c2ae ("net/mlx5: Reduce flow counters bulk query buffer size for SFs")
Signed-off-by: Avihai Horon <avihaih@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
---
 drivers/net/ethernet/mellanox/mlx5/core/fs_counters.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/net/ethernet/mellanox/mlx5/core/fs_counters.c b/drivers/net/ethernet/mellanox/mlx5/core/fs_counters.c
index 31c99d53faf7..7e0e04cf26f8 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/fs_counters.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/fs_counters.c
@@ -40,7 +40,7 @@
 #define MLX5_FC_STATS_PERIOD msecs_to_jiffies(1000)
 /* Max number of counters to query in bulk read is 32K */
 #define MLX5_SW_MAX_COUNTERS_BULK BIT(15)
-#define MLX5_SF_NUM_COUNTERS_BULK 6
+#define MLX5_SF_NUM_COUNTERS_BULK 8
 #define MLX5_FC_POOL_MAX_THRESHOLD BIT(18)
 #define MLX5_FC_POOL_USED_BUFF_RATIO 10
 
-- 
2.31.1



* [net 10/12] net/mlx5e: CT, Fix multiple allocations and memleak of mod acts
  2021-11-16 20:23 [pull request][net 00/12] mlx5 fixes 2021-11-16 Saeed Mahameed
                   ` (8 preceding siblings ...)
  2021-11-16 20:23 ` [net 09/12] net/mlx5: Fix flow counters SF bulk query len Saeed Mahameed
@ 2021-11-16 20:23 ` Saeed Mahameed
  2021-11-16 20:23 ` [net 11/12] net/mlx5: Lag, update tracker when state change event received Saeed Mahameed
  2021-11-16 20:23 ` [net 12/12] net/mlx5: E-Switch, return error if encap isn't supported Saeed Mahameed
  11 siblings, 0 replies; 14+ messages in thread
From: Saeed Mahameed @ 2021-11-16 20:23 UTC (permalink / raw)
  To: David S. Miller, Jakub Kicinski
  Cc: netdev, Roi Dayan, Paul Blakey, Maor Dickman, Saeed Mahameed

From: Roi Dayan <roid@nvidia.com>

CT clear action offload adds additional mod hdr actions to the
flow's original mod actions in order to clear the registers which
hold ct_state.
When such a flow also includes an encap action, a neigh update event
can cause the driver to unoffload the flow and then reoffload it.

Each time this happens, the ct clear handling adds that same set
of mod hdr actions to reset ct_state, until the maximum number of
mod hdr actions is reached.

Also, the driver never releases the allocated mod hdr actions,
causing a memleak.

Fix the above two issues by moving the CT clear mod acts allocation
into the actions parsing phase and only using it when offloading the
rule. The release of the mod acts will be done in the normal flow_put().

 backtrace:
    [<000000007316e2f3>] krealloc+0x83/0xd0
    [<00000000ef157de1>] mlx5e_mod_hdr_alloc+0x147/0x300 [mlx5_core]
    [<00000000970ce4ae>] mlx5e_tc_match_to_reg_set_and_get_id+0xd7/0x240 [mlx5_core]
    [<0000000067c5fa17>] mlx5e_tc_match_to_reg_set+0xa/0x20 [mlx5_core]
    [<00000000d032eb98>] mlx5_tc_ct_entry_set_registers.isra.0+0x36/0xc0 [mlx5_core]
    [<00000000fd23b869>] mlx5_tc_ct_flow_offload+0x272/0x1f10 [mlx5_core]
    [<000000004fc24acc>] mlx5e_tc_offload_fdb_rules.part.0+0x150/0x620 [mlx5_core]
    [<00000000dc741c17>] mlx5e_tc_encap_flows_add+0x489/0x690 [mlx5_core]
    [<00000000e92e49d7>] mlx5e_rep_update_flows+0x6e4/0x9b0 [mlx5_core]
    [<00000000f60f5602>] mlx5e_rep_neigh_update+0x39a/0x5d0 [mlx5_core]

Fixes: 1ef3018f5af3 ("net/mlx5e: CT: Support clear action")
Signed-off-by: Roi Dayan <roid@nvidia.com>
Reviewed-by: Paul Blakey <paulb@nvidia.com>
Reviewed-by: Maor Dickman <maord@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
---
 .../ethernet/mellanox/mlx5/core/en/tc_ct.c    | 26 ++++++++++++-------
 .../ethernet/mellanox/mlx5/core/en/tc_ct.h    |  2 ++
 .../net/ethernet/mellanox/mlx5/core/en_tc.c   |  8 ++++--
 3 files changed, 25 insertions(+), 11 deletions(-)

diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/tc_ct.c b/drivers/net/ethernet/mellanox/mlx5/core/en/tc_ct.c
index c1c6e74c79c4..2445e2ae3324 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en/tc_ct.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en/tc_ct.c
@@ -1356,9 +1356,13 @@ mlx5_tc_ct_match_add(struct mlx5_tc_ct_priv *priv,
 int
 mlx5_tc_ct_parse_action(struct mlx5_tc_ct_priv *priv,
 			struct mlx5_flow_attr *attr,
+			struct mlx5e_tc_mod_hdr_acts *mod_acts,
 			const struct flow_action_entry *act,
 			struct netlink_ext_ack *extack)
 {
+	bool clear_action = act->ct.action & TCA_CT_ACT_CLEAR;
+	int err;
+
 	if (!priv) {
 		NL_SET_ERR_MSG_MOD(extack,
 				   "offload of ct action isn't available");
@@ -1369,6 +1373,17 @@ mlx5_tc_ct_parse_action(struct mlx5_tc_ct_priv *priv,
 	attr->ct_attr.ct_action = act->ct.action;
 	attr->ct_attr.nf_ft = act->ct.flow_table;
 
+	if (!clear_action)
+		goto out;
+
+	err = mlx5_tc_ct_entry_set_registers(priv, mod_acts, 0, 0, 0, 0);
+	if (err) {
+		NL_SET_ERR_MSG_MOD(extack, "Failed to set registers for ct clear");
+		return err;
+	}
+	attr->action |= MLX5_FLOW_CONTEXT_ACTION_MOD_HDR;
+
+out:
 	return 0;
 }
 
@@ -1898,23 +1913,16 @@ __mlx5_tc_ct_flow_offload_clear(struct mlx5_tc_ct_priv *ct_priv,
 
 	memcpy(pre_ct_attr, attr, attr_sz);
 
-	err = mlx5_tc_ct_entry_set_registers(ct_priv, mod_acts, 0, 0, 0, 0);
-	if (err) {
-		ct_dbg("Failed to set register for ct clear");
-		goto err_set_registers;
-	}
-
 	mod_hdr = mlx5_modify_header_alloc(priv->mdev, ct_priv->ns_type,
 					   mod_acts->num_actions,
 					   mod_acts->actions);
 	if (IS_ERR(mod_hdr)) {
 		err = PTR_ERR(mod_hdr);
 		ct_dbg("Failed to add create ct clear mod hdr");
-		goto err_set_registers;
+		goto err_mod_hdr;
 	}
 
 	pre_ct_attr->modify_hdr = mod_hdr;
-	pre_ct_attr->action |= MLX5_FLOW_CONTEXT_ACTION_MOD_HDR;
 
 	rule = mlx5_tc_rule_insert(priv, orig_spec, pre_ct_attr);
 	if (IS_ERR(rule)) {
@@ -1930,7 +1938,7 @@ __mlx5_tc_ct_flow_offload_clear(struct mlx5_tc_ct_priv *ct_priv,
 
 err_insert:
 	mlx5_modify_header_dealloc(priv->mdev, mod_hdr);
-err_set_registers:
+err_mod_hdr:
 	netdev_warn(priv->netdev,
 		    "Failed to offload ct clear flow, err %d\n", err);
 	kfree(pre_ct_attr);
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/tc_ct.h b/drivers/net/ethernet/mellanox/mlx5/core/en/tc_ct.h
index 363329f4aac6..99662af1e41a 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en/tc_ct.h
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en/tc_ct.h
@@ -110,6 +110,7 @@ int mlx5_tc_ct_add_no_trk_match(struct mlx5_flow_spec *spec);
 int
 mlx5_tc_ct_parse_action(struct mlx5_tc_ct_priv *priv,
 			struct mlx5_flow_attr *attr,
+			struct mlx5e_tc_mod_hdr_acts *mod_acts,
 			const struct flow_action_entry *act,
 			struct netlink_ext_ack *extack);
 
@@ -172,6 +173,7 @@ mlx5_tc_ct_add_no_trk_match(struct mlx5_flow_spec *spec)
 static inline int
 mlx5_tc_ct_parse_action(struct mlx5_tc_ct_priv *priv,
 			struct mlx5_flow_attr *attr,
+			struct mlx5e_tc_mod_hdr_acts *mod_acts,
 			const struct flow_action_entry *act,
 			struct netlink_ext_ack *extack)
 {
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c b/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c
index cb76c41fe163..3d45f4ae80c0 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c
@@ -3608,7 +3608,9 @@ parse_tc_nic_actions(struct mlx5e_priv *priv,
 			attr->dest_chain = act->chain_index;
 			break;
 		case FLOW_ACTION_CT:
-			err = mlx5_tc_ct_parse_action(get_ct_priv(priv), attr, act, extack);
+			err = mlx5_tc_ct_parse_action(get_ct_priv(priv), attr,
+						      &parse_attr->mod_hdr_acts,
+						      act, extack);
 			if (err)
 				return err;
 
@@ -4277,7 +4279,9 @@ static int parse_tc_fdb_actions(struct mlx5e_priv *priv,
 				NL_SET_ERR_MSG_MOD(extack, "Sample action with connection tracking is not supported");
 				return -EOPNOTSUPP;
 			}
-			err = mlx5_tc_ct_parse_action(get_ct_priv(priv), attr, act, extack);
+			err = mlx5_tc_ct_parse_action(get_ct_priv(priv), attr,
+						      &parse_attr->mod_hdr_acts,
+						      act, extack);
 			if (err)
 				return err;
 
-- 
2.31.1



* [net 11/12] net/mlx5: Lag, update tracker when state change event received
  2021-11-16 20:23 [pull request][net 00/12] mlx5 fixes 2021-11-16 Saeed Mahameed
                   ` (9 preceding siblings ...)
  2021-11-16 20:23 ` [net 10/12] net/mlx5e: CT, Fix multiple allocations and memleak of mod acts Saeed Mahameed
@ 2021-11-16 20:23 ` Saeed Mahameed
  2021-11-16 20:23 ` [net 12/12] net/mlx5: E-Switch, return error if encap isn't supported Saeed Mahameed
  11 siblings, 0 replies; 14+ messages in thread
From: Saeed Mahameed @ 2021-11-16 20:23 UTC (permalink / raw)
  To: David S. Miller, Jakub Kicinski
  Cc: netdev, Maher Sanalla, Mark Bloch, Jianbo Liu, Roi Dayan, Saeed Mahameed

From: Maher Sanalla <msanalla@nvidia.com>

Currently, in NETDEV_CHANGELOWERSTATE/NETDEV_CHANGEUPPERSTATE event
handling, tracking is not fully completed if the LAG device is not ready
at the time the events occur. But we must keep track of the upper and
lower states after receiving the events, because RoCE needs this info in
mlx5_lag_get_roce_netdev() in order to return the corresponding port
that it is running on. Returning the wrong (not most recent) port will
lead to the GIDs table being incorrect.

For example: if, during the attachment of a slave to the bond, the other,
non-attached port performs a pci_reload, then the LAG device is not ready,
but that should not result in automatically dismissing the attached
slave's tracker update (which is performed in
mlx5_handle_changelowerstate()), since these events might not come again
later. This can lead to both bond ports having tx_enabled=0, which is not
a valid state for a LAG bond.

Fixes: 9b412cc35f00 ("net/mlx5e: Add LAG warning if bond slave is not lag master")
Signed-off-by: Maher Sanalla <msanalla@nvidia.com>
Reviewed-by: Mark Bloch <mbloch@nvidia.com>
Reviewed-by: Jianbo Liu <jianbol@nvidia.com>
Reviewed-by: Roi Dayan <roid@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
---
 .../net/ethernet/mellanox/mlx5/core/lag/lag.c | 28 +++++++++----------
 1 file changed, 13 insertions(+), 15 deletions(-)

diff --git a/drivers/net/ethernet/mellanox/mlx5/core/lag/lag.c b/drivers/net/ethernet/mellanox/mlx5/core/lag/lag.c
index 48d2ea690d7a..4ddf6b330a44 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/lag/lag.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/lag/lag.c
@@ -615,6 +615,7 @@ static int mlx5_handle_changeupper_event(struct mlx5_lag *ldev,
 	bool is_bonded, is_in_lag, mode_supported;
 	int bond_status = 0;
 	int num_slaves = 0;
+	int changed = 0;
 	int idx;
 
 	if (!netif_is_lag_master(upper))
@@ -653,27 +654,27 @@ static int mlx5_handle_changeupper_event(struct mlx5_lag *ldev,
 	 */
 	is_in_lag = num_slaves == MLX5_MAX_PORTS && bond_status == 0x3;
 
-	if (!mlx5_lag_is_ready(ldev) && is_in_lag) {
-		NL_SET_ERR_MSG_MOD(info->info.extack,
-				   "Can't activate LAG offload, PF is configured with more than 64 VFs");
-		return 0;
-	}
-
 	/* Lag mode must be activebackup or hash. */
 	mode_supported = tracker->tx_type == NETDEV_LAG_TX_TYPE_ACTIVEBACKUP ||
 			 tracker->tx_type == NETDEV_LAG_TX_TYPE_HASH;
 
-	if (is_in_lag && !mode_supported)
-		NL_SET_ERR_MSG_MOD(info->info.extack,
-				   "Can't activate LAG offload, TX type isn't supported");
-
 	is_bonded = is_in_lag && mode_supported;
 	if (tracker->is_bonded != is_bonded) {
 		tracker->is_bonded = is_bonded;
-		return 1;
+		changed = 1;
 	}
 
-	return 0;
+	if (!is_in_lag)
+		return changed;
+
+	if (!mlx5_lag_is_ready(ldev))
+		NL_SET_ERR_MSG_MOD(info->info.extack,
+				   "Can't activate LAG offload, PF is configured with more than 64 VFs");
+	else if (!mode_supported)
+		NL_SET_ERR_MSG_MOD(info->info.extack,
+				   "Can't activate LAG offload, TX type isn't supported");
+
+	return changed;
 }
 
 static int mlx5_handle_changelowerstate_event(struct mlx5_lag *ldev,
@@ -716,9 +717,6 @@ static int mlx5_lag_netdev_event(struct notifier_block *this,
 
 	ldev    = container_of(this, struct mlx5_lag, nb);
 
-	if (!mlx5_lag_is_ready(ldev) && event == NETDEV_CHANGELOWERSTATE)
-		return NOTIFY_DONE;
-
 	tracker = ldev->tracker;
 
 	switch (event) {
-- 
2.31.1



* [net 12/12] net/mlx5: E-Switch, return error if encap isn't supported
  2021-11-16 20:23 [pull request][net 00/12] mlx5 fixes 2021-11-16 Saeed Mahameed
                   ` (10 preceding siblings ...)
  2021-11-16 20:23 ` [net 11/12] net/mlx5: Lag, update tracker when state change event received Saeed Mahameed
@ 2021-11-16 20:23 ` Saeed Mahameed
  11 siblings, 0 replies; 14+ messages in thread
From: Saeed Mahameed @ 2021-11-16 20:23 UTC (permalink / raw)
  To: David S. Miller, Jakub Kicinski
  Cc: netdev, Raed Salem, Mark Bloch, Maor Dickman, Saeed Mahameed

From: Raed Salem <raeds@nvidia.com>

On regular ConnectX HCAs, getting the encap mode isn't supported when the
E-Switch is in NONE mode. The current code returns no error code when
trying to get the encap mode in such a case, which is wrong.

Fix it by returning an error value to indicate the failure to the caller
in such a case.

Fixes: 8e0aa4bc959c ("net/mlx5: E-switch, Protect eswitch mode changes")
Signed-off-by: Raed Salem <raeds@nvidia.com>
Reviewed-by: Mark Bloch <mbloch@nvidia.com>
Reviewed-by: Maor Dickman <maord@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
---
 drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c b/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c
index 80fa76f60e1e..a46455694f7a 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c
@@ -3623,7 +3623,7 @@ int mlx5_devlink_eswitch_encap_mode_get(struct devlink *devlink,
 	*encap = esw->offloads.encap;
 unlock:
 	up_write(&esw->mode_lock);
-	return 0;
+	return err;
 }
 
 static bool
-- 
2.31.1



* Re: [net 01/12] net/mlx5e: kTLS, Fix crash in RX resync flow
  2021-11-16 20:23 ` [net 01/12] net/mlx5e: kTLS, Fix crash in RX resync flow Saeed Mahameed
@ 2021-11-17 11:10   ` patchwork-bot+netdevbpf
  0 siblings, 0 replies; 14+ messages in thread
From: patchwork-bot+netdevbpf @ 2021-11-17 11:10 UTC (permalink / raw)
  To: Saeed Mahameed; +Cc: davem, kuba, netdev, tariqt, maximmi, saeedm

Hello:

This series was applied to netdev/net.git (master)
by Saeed Mahameed <saeedm@nvidia.com>:

On Tue, 16 Nov 2021 12:23:10 -0800 you wrote:
> From: Tariq Toukan <tariqt@nvidia.com>
> 
> For the TLS RX resync flow, we maintain a list of TLS contexts
> that require some attention, to communicate their resync information
> to the HW.
> Here we fix list corruptions, by protecting the entries against
> movements coming from resync_handle_seq_match(), until their resync
> handling in napi is fully completed.
> 
> [...]

Here is the summary with links:
  - [net,01/12] net/mlx5e: kTLS, Fix crash in RX resync flow
    https://git.kernel.org/netdev/net/c/cc4a9cc03faa
  - [net,02/12] net/mlx5e: Wait for concurrent flow deletion during neigh/fib events
    https://git.kernel.org/netdev/net/c/362980eada85
  - [net,03/12] net/mlx5: E-Switch, Fix resetting of encap mode when entering switchdev
    https://git.kernel.org/netdev/net/c/d7751d647618
  - [net,04/12] net/mlx5e: nullify cq->dbg pointer in mlx5_debug_cq_remove()
    https://git.kernel.org/netdev/net/c/76ded29d3fcd
  - [net,05/12] net/mlx5: DR, Handle eswitch manager and uplink vports separately
    https://git.kernel.org/netdev/net/c/9091b821aaa4
  - [net,06/12] net/mlx5: DR, Fix check for unsupported fields in match param
    https://git.kernel.org/netdev/net/c/455832d49666
  - [net,07/12] net/mlx5: Update error handler for UCTX and UMEM
    https://git.kernel.org/netdev/net/c/ba50cd9451f6
  - [net,08/12] net/mlx5: E-Switch, rebuild lag only when needed
    https://git.kernel.org/netdev/net/c/2eb0cb31bc4c
  - [net,09/12] net/mlx5: Fix flow counters SF bulk query len
    https://git.kernel.org/netdev/net/c/38a54cae6f76
  - [net,10/12] net/mlx5e: CT, Fix multiple allocations and memleak of mod acts
    https://git.kernel.org/netdev/net/c/806401c20a0f
  - [net,11/12] net/mlx5: Lag, update tracker when state change event received
    https://git.kernel.org/netdev/net/c/ae396d85c01c
  - [net,12/12] net/mlx5: E-Switch, return error if encap isn't supported
    https://git.kernel.org/netdev/net/c/c4c3176739df

You are awesome, thank you!
-- 
Deet-doot-dot, I am a bot.
https://korg.docs.kernel.org/patchwork/pwbot.html



