* [PATCH rdma-next] RDMA/mlx5: Support TX port affinity for VF drivers in LAG mode
@ 2020-05-26 14:34 Leon Romanovsky
  2020-05-26 15:07 ` Mark Bloch
  2020-05-26 17:27 ` Leon Romanovsky
  0 siblings, 2 replies; 3+ messages in thread
From: Leon Romanovsky @ 2020-05-26 14:34 UTC (permalink / raw)
  To: Doug Ledford, Jason Gunthorpe; +Cc: Mark Zhang, linux-rdma, Maor Gottlieb

From: Mark Zhang <markz@mellanox.com>

The mlx5 VF driver doesn't set QP tx port affinity because it doesn't
know whether LAG is active or not, since "lag_active" works only for
PF interfaces. As a result, VF interfaces use only one LAG port, which
hurts performance.

Add a lag_tx_port_affinity CAP bit; when it is enabled and
"num_lag_ports" > 1, the driver always sets QP tx affinity, regardless
of LAG state.

Signed-off-by: Mark Zhang <markz@mellanox.com>
Reviewed-by: Maor Gottlieb <maorg@mellanox.com>
Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
---
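
A note for reviewers: once the check passes, the affinity value assigned
in the diff below spreads QPs across the LAG ports in round-robin order.
A minimal userspace sketch of that scheme (illustrative names only; a
plain C11 atomic stands in for the driver's per-device counter):

#include <stdatomic.h>
#include <stdio.h>

#define NUM_LAG_PORTS 2

/* Stand-in for the per-device/per-ucontext counter in the driver. */
static atomic_uint tx_port_affinity;

/* Each new QP takes the next LAG port, 1-based. */
static unsigned int assign_tx_affinity(void)
{
	return (atomic_fetch_add(&tx_port_affinity, 1) % NUM_LAG_PORTS) + 1;
}

int main(void)
{
	for (int i = 0; i < 4; i++)
		printf("QP %d -> port %u\n", i, assign_tx_affinity());
	return 0;
}
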
 drivers/infiniband/hw/mlx5/main.c    | 2 +-
 drivers/infiniband/hw/mlx5/mlx5_ib.h | 7 +++++++
 drivers/infiniband/hw/mlx5/qp.c      | 3 ++-
 3 files changed, 10 insertions(+), 2 deletions(-)

diff --git a/drivers/infiniband/hw/mlx5/main.c b/drivers/infiniband/hw/mlx5/main.c
index 570c519ca530..4719da201382 100644
--- a/drivers/infiniband/hw/mlx5/main.c
+++ b/drivers/infiniband/hw/mlx5/main.c
@@ -1984,7 +1984,7 @@ static int mlx5_ib_alloc_ucontext(struct ib_ucontext *uctx,
 	context->lib_caps = req.lib_caps;
 	print_lib_caps(dev, context->lib_caps);
 
-	if (dev->lag_active) {
+	if (mlx5_ib_lag_should_assign_affinity(dev)) {
 		u8 port = mlx5_core_native_port_num(dev->mdev) - 1;
 
 		atomic_set(&context->tx_port_affinity,
diff --git a/drivers/infiniband/hw/mlx5/mlx5_ib.h b/drivers/infiniband/hw/mlx5/mlx5_ib.h
index b486139b08ce..0f5a713ac2a9 100644
--- a/drivers/infiniband/hw/mlx5/mlx5_ib.h
+++ b/drivers/infiniband/hw/mlx5/mlx5_ib.h
@@ -1553,4 +1553,11 @@ static inline bool mlx5_ib_can_use_umr(struct mlx5_ib_dev *dev,
 
 int mlx5_ib_enable_driver(struct ib_device *dev);
 int mlx5_ib_test_wc(struct mlx5_ib_dev *dev);
+
+static inline bool mlx5_ib_lag_should_assign_affinity(struct mlx5_ib_dev *dev)
+{
+	return dev->lag_active ||
+		(MLX5_CAP_GEN(dev->mdev, num_lag_ports) &&
+		 MLX5_CAP_GEN(dev->mdev, lag_tx_port_affinity));
+}
 #endif /* MLX5_IB_H */
diff --git a/drivers/infiniband/hw/mlx5/qp.c b/drivers/infiniband/hw/mlx5/qp.c
index 1988a0375696..9364a7a76ac2 100644
--- a/drivers/infiniband/hw/mlx5/qp.c
+++ b/drivers/infiniband/hw/mlx5/qp.c
@@ -3653,7 +3653,8 @@ static unsigned int get_tx_affinity(struct ib_qp *qp,
 	struct mlx5_ib_qp_base *qp_base;
 	unsigned int tx_affinity;
 
-	if (!(dev->lag_active && qp_supports_affinity(qp)))
+	if (!(mlx5_ib_lag_should_assign_affinity(dev) &&
+	      qp_supports_affinity(qp)))
 		return 0;
 
 	if (mqp->flags & MLX5_IB_QP_CREATE_SQPN_QP1)
-- 
2.26.2



* Re: [PATCH rdma-next] RDMA/mlx5: Support TX port affinity for VF drivers in LAG mode
  2020-05-26 14:34 [PATCH rdma-next] RDMA/mlx5: Support TX port affinity for VF drivers in LAG mode Leon Romanovsky
@ 2020-05-26 15:07 ` Mark Bloch
  2020-05-26 17:27 ` Leon Romanovsky
  1 sibling, 0 replies; 3+ messages in thread
From: Mark Bloch @ 2020-05-26 15:07 UTC (permalink / raw)
  To: Leon Romanovsky, Doug Ledford, Jason Gunthorpe
  Cc: Mark Zhang, linux-rdma, Maor Gottlieb



On 5/26/2020 07:34, Leon Romanovsky wrote:
> From: Mark Zhang <markz@mellanox.com>
> 
> The mlx5 VF driver doesn't set QP tx port affinity because it doesn't
> know whether LAG is active or not, since "lag_active" works only for
> PF interfaces. As a result, VF interfaces use only one LAG port, which
> hurts performance.
> 
> Add a lag_tx_port_affinity CAP bit; when it is enabled and
> "num_lag_ports" > 1, the driver always sets QP tx affinity, regardless
> of LAG state.
> 
> Signed-off-by: Mark Zhang <markz@mellanox.com>
> Reviewed-by: Maor Gottlieb <maorg@mellanox.com>
> Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
> ---
>  drivers/infiniband/hw/mlx5/main.c    | 2 +-
>  drivers/infiniband/hw/mlx5/mlx5_ib.h | 7 +++++++
>  drivers/infiniband/hw/mlx5/qp.c      | 3 ++-
>  3 files changed, 10 insertions(+), 2 deletions(-)
> 
> diff --git a/drivers/infiniband/hw/mlx5/main.c b/drivers/infiniband/hw/mlx5/main.c
> index 570c519ca530..4719da201382 100644
> --- a/drivers/infiniband/hw/mlx5/main.c
> +++ b/drivers/infiniband/hw/mlx5/main.c
> @@ -1984,7 +1984,7 @@ static int mlx5_ib_alloc_ucontext(struct ib_ucontext *uctx,
>  	context->lib_caps = req.lib_caps;
>  	print_lib_caps(dev, context->lib_caps);
>  
> -	if (dev->lag_active) {
> +	if (mlx5_ib_lag_should_assign_affinity(dev)) {
>  		u8 port = mlx5_core_native_port_num(dev->mdev) - 1;
>  
>  		atomic_set(&context->tx_port_affinity,
> diff --git a/drivers/infiniband/hw/mlx5/mlx5_ib.h b/drivers/infiniband/hw/mlx5/mlx5_ib.h
> index b486139b08ce..0f5a713ac2a9 100644
> --- a/drivers/infiniband/hw/mlx5/mlx5_ib.h
> +++ b/drivers/infiniband/hw/mlx5/mlx5_ib.h
> @@ -1553,4 +1553,11 @@ static inline bool mlx5_ib_can_use_umr(struct mlx5_ib_dev *dev,
>  
>  int mlx5_ib_enable_driver(struct ib_device *dev);
>  int mlx5_ib_test_wc(struct mlx5_ib_dev *dev);
> +
> +static inline bool mlx5_ib_lag_should_assign_affinity(struct mlx5_ib_dev *dev)
> +{
> +	return dev->lag_active ||
> +		(MLX5_CAP_GEN(dev->mdev, num_lag_ports) &&
> +		 MLX5_CAP_GEN(dev->mdev, lag_tx_port_affinity));

First of all, in the commit message you write:
"when it is enabled and "num_lag_ports" > 1"
which isn't the case here, as even num_lag_ports == 1 will pass.

In addition, even on a system without LAG this cap can be 2 (on a 2-port
NIC where the FW supports LAG).

I assume/hope/think that somewhere in the FW there is a check that says:
"if the user has set port_affinity but LAG isn't active, use the native
port affinity". Either way, I think this needs to be documented in the
commit message.
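
To make the first point concrete, here is a standalone model of the
check (the CAP reads are replaced with constants for a 1-port NIC whose
FW advertises the new cap; a sketch, not driver code):

#include <stdbool.h>
#include <stdio.h>

static const int num_lag_ports = 1;            /* 1-port device */
static const bool lag_tx_port_affinity = true; /* FW sets the new cap */
static const bool lag_active = false;          /* no LAG configured */

int main(void)
{
	/* Check as posted: passes even with a single port. */
	bool as_posted = lag_active ||
			 (num_lag_ports && lag_tx_port_affinity);

	/* Check the commit message describes: requires more than one port. */
	bool as_described = lag_active ||
			    (num_lag_ports > 1 && lag_tx_port_affinity);

	/* Prints "as posted: 1, as described: 0". */
	printf("as posted: %d, as described: %d\n", as_posted, as_described);
	return 0;
}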

Mark

> +}
>  #endif /* MLX5_IB_H */
> diff --git a/drivers/infiniband/hw/mlx5/qp.c b/drivers/infiniband/hw/mlx5/qp.c
> index 1988a0375696..9364a7a76ac2 100644
> --- a/drivers/infiniband/hw/mlx5/qp.c
> +++ b/drivers/infiniband/hw/mlx5/qp.c
> @@ -3653,7 +3653,8 @@ static unsigned int get_tx_affinity(struct ib_qp *qp,
>  	struct mlx5_ib_qp_base *qp_base;
>  	unsigned int tx_affinity;
>  
> -	if (!(dev->lag_active && qp_supports_affinity(qp)))
> +	if (!(mlx5_ib_lag_should_assign_affinity(dev) &&
> +	      qp_supports_affinity(qp)))
>  		return 0;
>  
>  	if (mqp->flags & MLX5_IB_QP_CREATE_SQPN_QP1)
> 


* Re: [PATCH rdma-next] RDMA/mlx5: Support TX port affinity for VF drivers in LAG mode
  2020-05-26 14:34 [PATCH rdma-next] RDMA/mlx5: Support TX port affinity for VF drivers in LAG mode Leon Romanovsky
  2020-05-26 15:07 ` Mark Bloch
@ 2020-05-26 17:27 ` Leon Romanovsky
  1 sibling, 0 replies; 3+ messages in thread
From: Leon Romanovsky @ 2020-05-26 17:27 UTC (permalink / raw)
  To: Doug Ledford, Jason Gunthorpe; +Cc: Mark Zhang, linux-rdma, Maor Gottlieb

On Tue, May 26, 2020 at 05:34:57PM +0300, Leon Romanovsky wrote:
> From: Mark Zhang <markz@mellanox.com>
>
> The mlx5 VF driver doesn't set QP tx port affinity because it doesn't
> know whether LAG is active or not, since "lag_active" works only for
> PF interfaces. As a result, VF interfaces use only one LAG port, which
> hurts performance.
>
> Add a lag_tx_port_affinity CAP bit; when it is enabled and
> "num_lag_ports" > 1, the driver always sets QP tx affinity, regardless
> of LAG state.
>
> Signed-off-by: Mark Zhang <markz@mellanox.com>
> Reviewed-by: Maor Gottlieb <maorg@mellanox.com>
> Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
> ---
>  drivers/infiniband/hw/mlx5/main.c    | 2 +-
>  drivers/infiniband/hw/mlx5/mlx5_ib.h | 7 +++++++
>  drivers/infiniband/hw/mlx5/qp.c      | 3 ++-
>  3 files changed, 10 insertions(+), 2 deletions(-)
>
> diff --git a/drivers/infiniband/hw/mlx5/main.c b/drivers/infiniband/hw/mlx5/main.c
> index 570c519ca530..4719da201382 100644
> --- a/drivers/infiniband/hw/mlx5/main.c
> +++ b/drivers/infiniband/hw/mlx5/main.c
> @@ -1984,7 +1984,7 @@ static int mlx5_ib_alloc_ucontext(struct ib_ucontext *uctx,
>  	context->lib_caps = req.lib_caps;
>  	print_lib_caps(dev, context->lib_caps);
>
> -	if (dev->lag_active) {
> +	if (mlx5_ib_lag_should_assign_affinity(dev)) {
>  		u8 port = mlx5_core_native_port_num(dev->mdev) - 1;
>
>  		atomic_set(&context->tx_port_affinity,
> diff --git a/drivers/infiniband/hw/mlx5/mlx5_ib.h b/drivers/infiniband/hw/mlx5/mlx5_ib.h
> index b486139b08ce..0f5a713ac2a9 100644
> --- a/drivers/infiniband/hw/mlx5/mlx5_ib.h
> +++ b/drivers/infiniband/hw/mlx5/mlx5_ib.h
> @@ -1553,4 +1553,11 @@ static inline bool mlx5_ib_can_use_umr(struct mlx5_ib_dev *dev,
>
>  int mlx5_ib_enable_driver(struct ib_device *dev);
>  int mlx5_ib_test_wc(struct mlx5_ib_dev *dev);
> +
> +static inline bool mlx5_ib_lag_should_assign_affinity(struct mlx5_ib_dev *dev)
> +{
> +	return dev->lag_active ||
> +		(MLX5_CAP_GEN(dev->mdev, num_lag_ports) &&
> +		 MLX5_CAP_GEN(dev->mdev, lag_tx_port_affinity));
> +}

We did some investigation offline, and it seems that this check should
be changed to:
 +static inline bool mlx5_ib_lag_should_assign_affinity(struct mlx5_ib_dev *dev)
 +{
 +	return dev->lag_active ||
 +		(MLX5_CAP_GEN(dev->mdev, num_lag_ports) > 1 &&
 +		 MLX5_CAP_GEN(dev->mdev, lag_tx_port_affinity));
 +}
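
With the "> 1" check, a single-port device whose FW still advertises
lag_tx_port_affinity no longer enables affinity assignment, which
addresses the first concern raised above.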

Thanks

>  #endif /* MLX5_IB_H */
> diff --git a/drivers/infiniband/hw/mlx5/qp.c b/drivers/infiniband/hw/mlx5/qp.c
> index 1988a0375696..9364a7a76ac2 100644
> --- a/drivers/infiniband/hw/mlx5/qp.c
> +++ b/drivers/infiniband/hw/mlx5/qp.c
> @@ -3653,7 +3653,8 @@ static unsigned int get_tx_affinity(struct ib_qp *qp,
>  	struct mlx5_ib_qp_base *qp_base;
>  	unsigned int tx_affinity;
>
> -	if (!(dev->lag_active && qp_supports_affinity(qp)))
> +	if (!(mlx5_ib_lag_should_assign_affinity(dev) &&
> +	      qp_supports_affinity(qp)))
>  		return 0;
>
>  	if (mqp->flags & MLX5_IB_QP_CREATE_SQPN_QP1)
> --
> 2.26.2
>

