* [dpdk-dev] [PATCH] net/mlx5: fix Tx queues creation type check for scheduling
@ 2021-07-29 12:26 Viacheslav Ovsiienko
  2021-07-29 15:16 ` Raslan Darawsheh
  2021-07-29 17:39 ` Thomas Monjalon
  0 siblings, 2 replies; 3+ messages in thread
From: Viacheslav Ovsiienko @ 2021-07-29 12:26 UTC (permalink / raw)
  To: dev; +Cc: rasland, matan, stable

Send scheduling on the timestamp offload requires that the Send
Queue (SQ) share its User Access Region (UAR) with the pacing
Clock Queue. The SQ can be created by the mlx5 PMD either with
DevX or with Verbs. If the SQ is created with DevX, a dedicated
UAR can be specified and all the SQs share that single UAR. If
the SQ is created with Verbs, the SQ's UAR is allocated
internally by the rdma-core library and there is no UAR sharing.
This caused hardware errors on WAIT WQEs and an overall send
scheduling malfunction.

If the SQs are going to be created with Verbs and the send
scheduling offload is explicitly requested via the tx_pp devarg,
device probing is rejected, since the device configuration
cannot satisfy the requirements.

Fixes: 3ec73abeed52 ("net/mlx5/linux: fix Tx queue operations decision")
Fixes: 8f848f32fc24 ("net/mlx5: introduce send scheduling devargs")
Cc: stable@dpdk.org

Signed-off-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
---
 drivers/net/mlx5/linux/mlx5_os.c | 18 ++++++++++++++++++
 1 file changed, 18 insertions(+)

diff --git a/drivers/net/mlx5/linux/mlx5_os.c b/drivers/net/mlx5/linux/mlx5_os.c
index aa5210fa45..e3c949ffc8 100644
--- a/drivers/net/mlx5/linux/mlx5_os.c
+++ b/drivers/net/mlx5/linux/mlx5_os.c
@@ -1769,6 +1769,24 @@ mlx5_dev_spawn(struct rte_device *dpdk_dev,
 	} else {
 		priv->obj_ops = ibv_obj_ops;
 	}
+	if (config->tx_pp &&
+	    (priv->config.dv_esw_en ||
+	     priv->obj_ops.txq_obj_new != mlx5_os_txq_obj_new)) {
+		/*
+		 * HAVE_MLX5DV_DEVX_UAR_OFFSET is required to support
+		 * packet pacing and already checked above. Hence, we should
+		 * only make sure the SQs will be created with DevX, not with
+		 * Verbs. Verbs allocates the SQ UAR on its own and it can't
+	 * be shared with the Clock Queue UAR, as required for the
+		 * Tx scheduling feature.
+		 */
+		DRV_LOG(ERR, "Verbs SQs, UAR can't be shared"
+			     " as required for packet pacing");
+			err = ENODEV;
+			goto error;
+		err = ENODEV;
+		goto error;
+	}
 	priv->drop_queue.hrxq = mlx5_drop_action_create(eth_dev);
 	if (!priv->drop_queue.hrxq)
 		goto error;
-- 
2.18.1


^ permalink raw reply	[flat|nested] 3+ messages in thread

* Re: [dpdk-dev] [PATCH] net/mlx5: fix Tx queues creation type check for scheduling
  2021-07-29 12:26 [dpdk-dev] [PATCH] net/mlx5: fix Tx queues creation type check for scheduling Viacheslav Ovsiienko
@ 2021-07-29 15:16 ` Raslan Darawsheh
  2021-07-29 17:39 ` Thomas Monjalon
  1 sibling, 0 replies; 3+ messages in thread
From: Raslan Darawsheh @ 2021-07-29 15:16 UTC (permalink / raw)
  To: Slava Ovsiienko, dev; +Cc: Matan Azrad, stable

Hi,

> -----Original Message-----
> From: Slava Ovsiienko <viacheslavo@nvidia.com>
> Sent: Thursday, July 29, 2021 3:27 PM
> To: dev@dpdk.org
> Cc: Raslan Darawsheh <rasland@nvidia.com>; Matan Azrad
> <matan@nvidia.com>; stable@dpdk.org
> Subject: [PATCH] net/mlx5: fix Tx queues creation type check for scheduling
> 
> Send scheduling on the timestamp offload requires that the Send
> Queue (SQ) share its User Access Region (UAR) with the pacing
> Clock Queue. The SQ can be created by the mlx5 PMD either with
> DevX or with Verbs. If the SQ is created with DevX, a dedicated
> UAR can be specified and all the SQs share that single UAR. If
> the SQ is created with Verbs, the SQ's UAR is allocated
> internally by the rdma-core library and there is no UAR sharing.
> This caused hardware errors on WAIT WQEs and an overall send
> scheduling malfunction.
> 
> If the SQs are going to be created with Verbs and the send
> scheduling offload is explicitly requested via the tx_pp devarg,
> device probing is rejected, since the device configuration
> cannot satisfy the requirements.
> 
> Fixes: 3ec73abeed52 ("net/mlx5/linux: fix Tx queue operations decision")
> Fixes: 8f848f32fc24 ("net/mlx5: introduce send scheduling devargs")
> Cc: stable@dpdk.org
> 
> Signed-off-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>

Patch applied to next-net-mlx,

Kindest regards,
Raslan Darawsheh


* Re: [dpdk-dev] [PATCH] net/mlx5: fix Tx queues creation type check for scheduling
  2021-07-29 12:26 [dpdk-dev] [PATCH] net/mlx5: fix Tx queues creation type check for scheduling Viacheslav Ovsiienko
  2021-07-29 15:16 ` Raslan Darawsheh
@ 2021-07-29 17:39 ` Thomas Monjalon
  1 sibling, 0 replies; 3+ messages in thread
From: Thomas Monjalon @ 2021-07-29 17:39 UTC (permalink / raw)
  To: Viacheslav Ovsiienko; +Cc: dev, rasland, matan, stable

29/07/2021 14:26, Viacheslav Ovsiienko:
> +	if (config->tx_pp &&
> +	    (priv->config.dv_esw_en ||
> +	     priv->obj_ops.txq_obj_new != mlx5_os_txq_obj_new)) {
> +		/*
> +		 * HAVE_MLX5DV_DEVX_UAR_OFFSET is required to support
> +		 * packet pacing and already checked above. Hence, we should
> +		 * only make sure the SQs will be created with DevX, not with
> +		 * Verbs. Verbs allocates the SQ UAR on its own and it can't
> +	 * be shared with the Clock Queue UAR, as required for the
> +		 * Tx scheduling feature.
> +		 */
> +		DRV_LOG(ERR, "Verbs SQs, UAR can't be shared"
> +			     " as required for packet pacing");

Don't split logs.

> +			err = ENODEV;
> +			goto error;
> +		err = ENODEV;
> +		goto error;

I assume only the last 2 lines should be kept.



