linux-rdma.vger.kernel.org archive mirror
* [PATCH rdma-next 0/2] Query IBoE speed directly
@ 2023-04-10 13:12 Leon Romanovsky
  2023-04-10 13:12 ` [PATCH rdma-next 1/2] IB/core: Query IBoE link speed with a new driver API Leon Romanovsky
  2023-04-10 13:12 ` [PATCH rdma-next 2/2] IB/mlx5: Implement query_iboe_speed " Leon Romanovsky
  0 siblings, 2 replies; 6+ messages in thread
From: Leon Romanovsky @ 2023-04-10 13:12 UTC (permalink / raw)
  To: Jason Gunthorpe; +Cc: Leon Romanovsky, linux-kernel, linux-rdma, Mark Zhang

From: Leon Romanovsky <leonro@nvidia.com>

From Mark,

This series adds a new driver API to query the IBoE link speed, as well
as the mlx5 implementation.

Currently the ethtool API is used, which must be protected with the rtnl
lock. This becomes a bottleneck when trying to create many rdma-cm
connections at the same time, especially with multiple processes.

With this new API we can get rid of the rtnl lock and the ethtool
operation. Test results show a clear improvement; an example below (the
time needed for a process to create all connections):

             One process with     One process with      Eight processes, each
             1,000 connections    10,000 connections    with 1,000 connections
old:         10330ms              106107ms              47723ms
new:         7937ms               80108ms               19446ms
Improvement: 23.2%                24.5%                 59.3%

Thanks

Mark Zhang (2):
  IB/core: Query IBoE link speed with a new driver API
  IB/mlx5: Implement query_iboe_speed driver API

 drivers/infiniband/core/cma.c                |  6 ++-
 drivers/infiniband/core/device.c             |  1 +
 drivers/infiniband/hw/mlx5/main.c            | 41 ++++++++++++++++++++
 drivers/infiniband/ulp/ipoib/ipoib_ethtool.c | 24 ------------
 include/rdma/ib_addr.h                       | 31 +++++++++------
 include/rdma/ib_verbs.h                      | 26 +++++++++++++
 6 files changed, 92 insertions(+), 37 deletions(-)

-- 
2.39.2


^ permalink raw reply	[flat|nested] 6+ messages in thread

* [PATCH rdma-next 1/2] IB/core: Query IBoE link speed with a new driver API
  2023-04-10 13:12 [PATCH rdma-next 0/2] Query IBoE speed directly Leon Romanovsky
@ 2023-04-10 13:12 ` Leon Romanovsky
  2023-04-10 23:23   ` Jason Gunthorpe
  2023-04-10 13:12 ` [PATCH rdma-next 2/2] IB/mlx5: Implement query_iboe_speed " Leon Romanovsky
  1 sibling, 1 reply; 6+ messages in thread
From: Leon Romanovsky @ 2023-04-10 13:12 UTC (permalink / raw)
  To: Jason Gunthorpe; +Cc: Mark Zhang, linux-rdma

From: Mark Zhang <markzhang@nvidia.com>

Currently the ethtool API is used to get the IBoE link speed, which must
be protected with the rtnl lock. This becomes a bottleneck when trying to
set up many rdma-cm connections at the same time, especially with
multiple processes.

To avoid this, a new driver API is introduced to query the IBoE rate. It
is tried first, with a fallback to the ethtool path if it fails.

Signed-off-by: Mark Zhang <markzhang@nvidia.com>
Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
---
 drivers/infiniband/core/cma.c    |  6 ++++--
 drivers/infiniband/core/device.c |  1 +
 include/rdma/ib_addr.h           | 31 ++++++++++++++++++++-----------
 include/rdma/ib_verbs.h          |  3 +++
 4 files changed, 28 insertions(+), 13 deletions(-)

diff --git a/drivers/infiniband/core/cma.c b/drivers/infiniband/core/cma.c
index 9c7d26a7d243..ff706d2e39c6 100644
--- a/drivers/infiniband/core/cma.c
+++ b/drivers/infiniband/core/cma.c
@@ -3296,7 +3296,8 @@ static int cma_resolve_iboe_route(struct rdma_id_private *id_priv)
 	route->path_rec->traffic_class = tos;
 	route->path_rec->mtu = iboe_get_mtu(ndev->mtu);
 	route->path_rec->rate_selector = IB_SA_EQ;
-	route->path_rec->rate = iboe_get_rate(ndev);
+	route->path_rec->rate = iboe_get_rate(ndev, id_priv->id.device,
+					      id_priv->id.port_num);
 	dev_put(ndev);
 	route->path_rec->packet_life_time_selector = IB_SA_EQ;
 	/* In case ACK timeout is set, use this value to calculate
@@ -4962,7 +4963,8 @@ static int cma_iboe_join_multicast(struct rdma_id_private *id_priv,
 	if (!ndev)
 		return -ENODEV;
 
-	ib.rec.rate = iboe_get_rate(ndev);
+	ib.rec.rate = iboe_get_rate(ndev, id_priv->id.device,
+				    id_priv->id.port_num);
 	ib.rec.hop_limit = 1;
 	ib.rec.mtu = iboe_get_mtu(ndev->mtu);
 
diff --git a/drivers/infiniband/core/device.c b/drivers/infiniband/core/device.c
index a666847bd714..ba06a08c6497 100644
--- a/drivers/infiniband/core/device.c
+++ b/drivers/infiniband/core/device.c
@@ -2693,6 +2693,7 @@ void ib_set_device_ops(struct ib_device *dev, const struct ib_device_ops *ops)
 	SET_DEVICE_OP(dev_ops, query_ah);
 	SET_DEVICE_OP(dev_ops, query_device);
 	SET_DEVICE_OP(dev_ops, query_gid);
+	SET_DEVICE_OP(dev_ops, query_iboe_speed);
 	SET_DEVICE_OP(dev_ops, query_pkey);
 	SET_DEVICE_OP(dev_ops, query_port);
 	SET_DEVICE_OP(dev_ops, query_qp);
diff --git a/include/rdma/ib_addr.h b/include/rdma/ib_addr.h
index d808dc3d239e..de762210ebd1 100644
--- a/include/rdma/ib_addr.h
+++ b/include/rdma/ib_addr.h
@@ -194,24 +194,33 @@ static inline enum ib_mtu iboe_get_mtu(int mtu)
 		return 0;
 }
 
-static inline int iboe_get_rate(struct net_device *dev)
+static inline int iboe_get_rate(struct net_device *ndev,
+				struct ib_device *ibdev, u32 port_num)
 {
 	struct ethtool_link_ksettings cmd;
-	int err;
+	int speed, err;
 
-	rtnl_lock();
-	err = __ethtool_get_link_ksettings(dev, &cmd);
-	rtnl_unlock();
-	if (err)
-		return IB_RATE_PORT_CURRENT;
+	if (ibdev->ops.query_iboe_speed) {
+		err = ibdev->ops.query_iboe_speed(ibdev, port_num, &speed);
+		if (err)
+			return IB_RATE_PORT_CURRENT;
+	} else {
+		rtnl_lock();
+		err = __ethtool_get_link_ksettings(ndev, &cmd);
+		rtnl_unlock();
+		if (err)
+			return IB_RATE_PORT_CURRENT;
+
+		speed = cmd.base.speed;
+	}
 
-	if (cmd.base.speed >= 40000)
+	if (speed >= 40000)
 		return IB_RATE_40_GBPS;
-	else if (cmd.base.speed >= 30000)
+	else if (speed >= 30000)
 		return IB_RATE_30_GBPS;
-	else if (cmd.base.speed >= 20000)
+	else if (speed >= 20000)
 		return IB_RATE_20_GBPS;
-	else if (cmd.base.speed >= 10000)
+	else if (speed >= 10000)
 		return IB_RATE_10_GBPS;
 	else
 		return IB_RATE_PORT_CURRENT;
diff --git a/include/rdma/ib_verbs.h b/include/rdma/ib_verbs.h
index cc2ddd4e6c12..b143258b847f 100644
--- a/include/rdma/ib_verbs.h
+++ b/include/rdma/ib_verbs.h
@@ -2678,6 +2678,9 @@ struct ib_device_ops {
 	int (*query_ucontext)(struct ib_ucontext *context,
 			      struct uverbs_attr_bundle *attrs);
 
+	/* Query driver for IBoE link speed */
+	int (*query_iboe_speed)(struct ib_device *device, u32 port_num,
+				int *speed);
 	/*
 	 * Provide NUMA node. This API exists for rdmavt/hfi1 only.
 	 * Everyone else relies on Linux memory management model.
-- 
2.39.2


* [PATCH rdma-next 2/2] IB/mlx5: Implement query_iboe_speed driver API
  2023-04-10 13:12 [PATCH rdma-next 0/2] Query IBoE speed directly Leon Romanovsky
  2023-04-10 13:12 ` [PATCH rdma-next 1/2] IB/core: Query IBoE link speed with a new driver API Leon Romanovsky
@ 2023-04-10 13:12 ` Leon Romanovsky
  1 sibling, 0 replies; 6+ messages in thread
From: Leon Romanovsky @ 2023-04-10 13:12 UTC (permalink / raw)
  To: Jason Gunthorpe; +Cc: Mark Zhang, linux-rdma

From: Mark Zhang <markzhang@nvidia.com>

Implement this API for RoCE; the link speed is obtained by querying the
PTYS register.

Signed-off-by: Mark Zhang <markzhang@nvidia.com>
Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
---
 drivers/infiniband/hw/mlx5/main.c            | 41 ++++++++++++++++++++
 drivers/infiniband/ulp/ipoib/ipoib_ethtool.c | 24 ------------
 include/rdma/ib_verbs.h                      | 23 +++++++++++
 3 files changed, 64 insertions(+), 24 deletions(-)

diff --git a/drivers/infiniband/hw/mlx5/main.c b/drivers/infiniband/hw/mlx5/main.c
index 5e5ed1c8299d..2f108006b7e6 100644
--- a/drivers/infiniband/hw/mlx5/main.c
+++ b/drivers/infiniband/hw/mlx5/main.c
@@ -3912,6 +3912,46 @@ static int mlx5_ib_stage_raw_eth_non_default_cb(struct mlx5_ib_dev *dev)
 	return 0;
 }
 
+static int mlx5_ib_query_iboe_speed(struct ib_device *ibdev, u32 port_num,
+				    int *speed)
+{
+	struct mlx5_ib_dev *dev = to_mdev(ibdev);
+	u32 out[MLX5_ST_SZ_DW(ptys_reg)] = {};
+	u32 eth_prot_oper, mdev_port_num;
+	struct mlx5_core_dev *mdev;
+	u16 active_speed;
+	u8 active_width;
+	bool ext;
+	int err;
+
+	mdev = mlx5_ib_get_native_port_mdev(dev, port_num, &mdev_port_num);
+	if (!mdev)
+		return -EINVAL;
+
+	if (dev->is_rep)
+		mdev_port_num = 1;
+
+	err = mlx5_query_port_ptys(mdev, out, sizeof(out), MLX5_PTYS_EN,
+				   mdev_port_num);
+	if (err)
+		goto out;
+
+	ext = !!MLX5_GET_ETH_PROTO(ptys_reg, out, true, eth_proto_capability);
+	eth_prot_oper = MLX5_GET_ETH_PROTO(ptys_reg, out, ext, eth_proto_oper);
+
+	active_width = IB_WIDTH_4X;
+	active_speed = IB_SPEED_QDR;
+
+	translate_eth_proto_oper(eth_prot_oper, &active_speed,
+				 &active_width, ext);
+	*speed = ib_speed_enum_to_int(active_speed) *
+		ib_width_enum_to_int(active_width);
+
+out:
+	mlx5_ib_put_native_port_mdev(dev, port_num);
+	return err;
+}
+
 static const struct ib_device_ops mlx5_ib_dev_common_roce_ops = {
 	.create_rwq_ind_table = mlx5_ib_create_rwq_ind_table,
 	.create_wq = mlx5_ib_create_wq,
@@ -3919,6 +3959,7 @@ static const struct ib_device_ops mlx5_ib_dev_common_roce_ops = {
 	.destroy_wq = mlx5_ib_destroy_wq,
 	.get_netdev = mlx5_ib_get_netdev,
 	.modify_wq = mlx5_ib_modify_wq,
+	.query_iboe_speed = mlx5_ib_query_iboe_speed,
 
 	INIT_RDMA_OBJ_SIZE(ib_rwq_ind_table, mlx5_ib_rwq_ind_table,
 			   ib_rwq_ind_tbl),
diff --git a/drivers/infiniband/ulp/ipoib/ipoib_ethtool.c b/drivers/infiniband/ulp/ipoib/ipoib_ethtool.c
index 8af99b18d361..8ece31078558 100644
--- a/drivers/infiniband/ulp/ipoib/ipoib_ethtool.c
+++ b/drivers/infiniband/ulp/ipoib/ipoib_ethtool.c
@@ -155,30 +155,6 @@ static int ipoib_get_sset_count(struct net_device __always_unused *dev,
 	return -EOPNOTSUPP;
 }
 
-/* Return lane speed in unit of 1e6 bit/sec */
-static inline int ib_speed_enum_to_int(int speed)
-{
-	switch (speed) {
-	case IB_SPEED_SDR:
-		return SPEED_2500;
-	case IB_SPEED_DDR:
-		return SPEED_5000;
-	case IB_SPEED_QDR:
-	case IB_SPEED_FDR10:
-		return SPEED_10000;
-	case IB_SPEED_FDR:
-		return SPEED_14000;
-	case IB_SPEED_EDR:
-		return SPEED_25000;
-	case IB_SPEED_HDR:
-		return SPEED_50000;
-	case IB_SPEED_NDR:
-		return SPEED_100000;
-	}
-
-	return SPEED_UNKNOWN;
-}
-
 static int ipoib_get_link_ksettings(struct net_device *netdev,
 				    struct ethtool_link_ksettings *cmd)
 {
diff --git a/include/rdma/ib_verbs.h b/include/rdma/ib_verbs.h
index b143258b847f..4a7a62c7e3e8 100644
--- a/include/rdma/ib_verbs.h
+++ b/include/rdma/ib_verbs.h
@@ -563,6 +563,29 @@ enum ib_port_speed {
 	IB_SPEED_NDR	= 128,
 };
 
+static inline int ib_speed_enum_to_int(int speed)
+{
+	switch (speed) {
+	case IB_SPEED_SDR:
+		return SPEED_2500;
+	case IB_SPEED_DDR:
+		return SPEED_5000;
+	case IB_SPEED_QDR:
+	case IB_SPEED_FDR10:
+		return SPEED_10000;
+	case IB_SPEED_FDR:
+		return SPEED_14000;
+	case IB_SPEED_EDR:
+		return SPEED_25000;
+	case IB_SPEED_HDR:
+		return SPEED_50000;
+	case IB_SPEED_NDR:
+		return SPEED_100000;
+	}
+
+	return SPEED_UNKNOWN;
+}
+
 enum ib_stat_flag {
 	IB_STAT_FLAG_OPTIONAL = 1 << 0,
 };
-- 
2.39.2


* Re: [PATCH rdma-next 1/2] IB/core: Query IBoE link speed with a new driver API
  2023-04-10 13:12 ` [PATCH rdma-next 1/2] IB/core: Query IBoE link speed with a new driver API Leon Romanovsky
@ 2023-04-10 23:23   ` Jason Gunthorpe
  2023-04-10 23:42     ` Mark Zhang
  0 siblings, 1 reply; 6+ messages in thread
From: Jason Gunthorpe @ 2023-04-10 23:23 UTC (permalink / raw)
  To: Leon Romanovsky; +Cc: Mark Zhang, linux-rdma

On Mon, Apr 10, 2023 at 04:12:06PM +0300, Leon Romanovsky wrote:
> From: Mark Zhang <markzhang@nvidia.com>
> 
> Currently the ethtool API is used to get the IBoE link speed, which must
> be protected with the rtnl lock. This becomes a bottleneck when trying to
> set up many rdma-cm connections at the same time, especially with
> multiple processes.
> 
> To avoid this, a new driver API is introduced to query the IBoE rate. It
> is tried first, with a fallback to the ethtool path if it fails.
> 
> Signed-off-by: Mark Zhang <markzhang@nvidia.com>
> Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
> ---
>  drivers/infiniband/core/cma.c    |  6 ++++--
>  drivers/infiniband/core/device.c |  1 +
>  include/rdma/ib_addr.h           | 31 ++++++++++++++++++++-----------
>  include/rdma/ib_verbs.h          |  3 +++
>  4 files changed, 28 insertions(+), 13 deletions(-)
> 
> diff --git a/drivers/infiniband/core/cma.c b/drivers/infiniband/core/cma.c
> index 9c7d26a7d243..ff706d2e39c6 100644
> --- a/drivers/infiniband/core/cma.c
> +++ b/drivers/infiniband/core/cma.c
> @@ -3296,7 +3296,8 @@ static int cma_resolve_iboe_route(struct rdma_id_private *id_priv)
>  	route->path_rec->traffic_class = tos;
>  	route->path_rec->mtu = iboe_get_mtu(ndev->mtu);
>  	route->path_rec->rate_selector = IB_SA_EQ;
> -	route->path_rec->rate = iboe_get_rate(ndev);
> +	route->path_rec->rate = iboe_get_rate(ndev, id_priv->id.device,
> +					      id_priv->id.port_num);
>  	dev_put(ndev);
>  	route->path_rec->packet_life_time_selector = IB_SA_EQ;
>  	/* In case ACK timeout is set, use this value to calculate
> @@ -4962,7 +4963,8 @@ static int cma_iboe_join_multicast(struct rdma_id_private *id_priv,
>  	if (!ndev)
>  		return -ENODEV;
>  
> -	ib.rec.rate = iboe_get_rate(ndev);
> +	ib.rec.rate = iboe_get_rate(ndev, id_priv->id.device,
> +				    id_priv->id.port_num);
>  	ib.rec.hop_limit = 1;
>  	ib.rec.mtu = iboe_get_mtu(ndev->mtu);

What do we even use rate for in roce mode in the first place?

Jason

* Re: [PATCH rdma-next 1/2] IB/core: Query IBoE link speed with a new driver API
  2023-04-10 23:23   ` Jason Gunthorpe
@ 2023-04-10 23:42     ` Mark Zhang
  2023-04-11 11:20       ` Jason Gunthorpe
  0 siblings, 1 reply; 6+ messages in thread
From: Mark Zhang @ 2023-04-10 23:42 UTC (permalink / raw)
  To: Jason Gunthorpe, Leon Romanovsky; +Cc: linux-rdma

On 4/11/2023 7:23 AM, Jason Gunthorpe wrote:
> On Mon, Apr 10, 2023 at 04:12:06PM +0300, Leon Romanovsky wrote:
>> From: Mark Zhang <markzhang@nvidia.com>
>>
>> Currently the ethtool API is used to get the IBoE link speed, which must
>> be protected with the rtnl lock. This becomes a bottleneck when trying to
>> set up many rdma-cm connections at the same time, especially with
>> multiple processes.
>>
>> To avoid this, a new driver API is introduced to query the IBoE rate. It
>> is tried first, with a fallback to the ethtool path if it fails.
>>
>> Signed-off-by: Mark Zhang <markzhang@nvidia.com>
>> Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
>> ---
>>   drivers/infiniband/core/cma.c    |  6 ++++--
>>   drivers/infiniband/core/device.c |  1 +
>>   include/rdma/ib_addr.h           | 31 ++++++++++++++++++++-----------
>>   include/rdma/ib_verbs.h          |  3 +++
>>   4 files changed, 28 insertions(+), 13 deletions(-)
>>
>> diff --git a/drivers/infiniband/core/cma.c b/drivers/infiniband/core/cma.c
>> index 9c7d26a7d243..ff706d2e39c6 100644
>> --- a/drivers/infiniband/core/cma.c
>> +++ b/drivers/infiniband/core/cma.c
>> @@ -3296,7 +3296,8 @@ static int cma_resolve_iboe_route(struct rdma_id_private *id_priv)
>>   	route->path_rec->traffic_class = tos;
>>   	route->path_rec->mtu = iboe_get_mtu(ndev->mtu);
>>   	route->path_rec->rate_selector = IB_SA_EQ;
>> -	route->path_rec->rate = iboe_get_rate(ndev);
>> +	route->path_rec->rate = iboe_get_rate(ndev, id_priv->id.device,
>> +					      id_priv->id.port_num);
>>   	dev_put(ndev);
>>   	route->path_rec->packet_life_time_selector = IB_SA_EQ;
>>   	/* In case ACK timeout is set, use this value to calculate
>> @@ -4962,7 +4963,8 @@ static int cma_iboe_join_multicast(struct rdma_id_private *id_priv,
>>   	if (!ndev)
>>   		return -ENODEV;
>>   
>> -	ib.rec.rate = iboe_get_rate(ndev);
>> +	ib.rec.rate = iboe_get_rate(ndev, id_priv->id.device,
>> +				    id_priv->id.port_num);
>>   	ib.rec.hop_limit = 1;
>>   	ib.rec.mtu = iboe_get_mtu(ndev->mtu);
> 
> What do we even use rate for in roce mode in the first place?
> 
In mlx5 it is set in "address_path.stat_rate", but I believe we actually
always set 0 for RoCE. Not sure about other devices?

* Re: [PATCH rdma-next 1/2] IB/core: Query IBoE link speed with a new driver API
  2023-04-10 23:42     ` Mark Zhang
@ 2023-04-11 11:20       ` Jason Gunthorpe
  0 siblings, 0 replies; 6+ messages in thread
From: Jason Gunthorpe @ 2023-04-11 11:20 UTC (permalink / raw)
  To: Mark Zhang; +Cc: Leon Romanovsky, linux-rdma

On Tue, Apr 11, 2023 at 07:42:37AM +0800, Mark Zhang wrote:
> On 4/11/2023 7:23 AM, Jason Gunthorpe wrote:
> > On Mon, Apr 10, 2023 at 04:12:06PM +0300, Leon Romanovsky wrote:
> > > From: Mark Zhang <markzhang@nvidia.com>
> > > 
> > > Currently the ethtool API is used to get the IBoE link speed, which must
> > > be protected with the rtnl lock. This becomes a bottleneck when trying to
> > > set up many rdma-cm connections at the same time, especially with
> > > multiple processes.
> > > 
> > > To avoid this, a new driver API is introduced to query the IBoE rate. It
> > > is tried first, with a fallback to the ethtool path if it fails.
> > > 
> > > Signed-off-by: Mark Zhang <markzhang@nvidia.com>
> > > Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
> > > ---
> > >   drivers/infiniband/core/cma.c    |  6 ++++--
> > >   drivers/infiniband/core/device.c |  1 +
> > >   include/rdma/ib_addr.h           | 31 ++++++++++++++++++++-----------
> > >   include/rdma/ib_verbs.h          |  3 +++
> > >   4 files changed, 28 insertions(+), 13 deletions(-)
> > > 
> > > diff --git a/drivers/infiniband/core/cma.c b/drivers/infiniband/core/cma.c
> > > index 9c7d26a7d243..ff706d2e39c6 100644
> > > --- a/drivers/infiniband/core/cma.c
> > > +++ b/drivers/infiniband/core/cma.c
> > > @@ -3296,7 +3296,8 @@ static int cma_resolve_iboe_route(struct rdma_id_private *id_priv)
> > >   	route->path_rec->traffic_class = tos;
> > >   	route->path_rec->mtu = iboe_get_mtu(ndev->mtu);
> > >   	route->path_rec->rate_selector = IB_SA_EQ;
> > > -	route->path_rec->rate = iboe_get_rate(ndev);
> > > +	route->path_rec->rate = iboe_get_rate(ndev, id_priv->id.device,
> > > +					      id_priv->id.port_num);
> > >   	dev_put(ndev);
> > >   	route->path_rec->packet_life_time_selector = IB_SA_EQ;
> > >   	/* In case ACK timeout is set, use this value to calculate
> > > @@ -4962,7 +4963,8 @@ static int cma_iboe_join_multicast(struct rdma_id_private *id_priv,
> > >   	if (!ndev)
> > >   		return -ENODEV;
> > > -	ib.rec.rate = iboe_get_rate(ndev);
> > > +	ib.rec.rate = iboe_get_rate(ndev, id_priv->id.device,
> > > +				    id_priv->id.port_num);
> > >   	ib.rec.hop_limit = 1;
> > >   	ib.rec.mtu = iboe_get_mtu(ndev->mtu);
> > 
> > What do we even use rate for in roce mode in the first place?
> > 
> In mlx5 it is set in "address_path.stat_rate", but I believe we actually
> always set 0 for RoCE. Not sure about other devices?

"rate" is to reduce the packet rate of connections, it should always
be 0 for roce, AFAIK.

Maybe we should look into making that the case instead of this?

Jason
