* [PATCH net-next v10 00/10] Implement devlink-rate API and extend it
@ 2022-11-07 18:13 Michal Wilczynski
  2022-11-07 18:13 ` [PATCH net-next v10 01/10] devlink: Introduce new attribute 'tx_priority' to devlink-rate Michal Wilczynski
                   ` (10 more replies)
  0 siblings, 11 replies; 18+ messages in thread
From: Michal Wilczynski @ 2022-11-07 18:13 UTC (permalink / raw)
  To: netdev
  Cc: alexandr.lobakin, jacob.e.keller, jesse.brandeburg,
	przemyslaw.kitszel, anthony.l.nguyen, kuba, ecree.xilinx, jiri,
	Michal Wilczynski

This patch series implements devlink-rate for the ice driver. Unfortunately,
the current API isn't flexible enough for our use case, so it needs to be
extended. Several functions have been introduced to enable the driver to
export its current Tx scheduling configuration.

Pasting the justification for this series from the commit implementing
devlink-rate in the ice driver (which is a part of this series):

There is a need to support modification of the Tx scheduler tree in the
ice driver. This will allow the user to control the Tx settings of each
node in the internal hierarchy of nodes. As a result, the user will be
able to use hierarchical QoS implemented entirely in the hardware.

This patch implements the devlink-rate API. It also exports the initial
default hierarchy. This is mostly dictated by the fact that the tree
can't be removed entirely; all we can do is enable the user to modify
it. For example, the root node should never be removed, and nodes that
have children are off-limits.

Example initial tree with 2 VFs:

[root@fedora ~]# devlink port function rate show
pci/0000:4b:00.0/node_27: type node parent node_26
pci/0000:4b:00.0/node_26: type node parent node_0
pci/0000:4b:00.0/node_34: type node parent node_33
pci/0000:4b:00.0/node_33: type node parent node_32
pci/0000:4b:00.0/node_32: type node parent node_16
pci/0000:4b:00.0/node_19: type node parent node_18
pci/0000:4b:00.0/node_18: type node parent node_17
pci/0000:4b:00.0/node_17: type node parent node_16
pci/0000:4b:00.0/node_21: type node parent node_20
pci/0000:4b:00.0/node_20: type node parent node_3
pci/0000:4b:00.0/node_14: type node parent node_5
pci/0000:4b:00.0/node_5: type node parent node_3
pci/0000:4b:00.0/node_13: type node parent node_4
pci/0000:4b:00.0/node_12: type node parent node_4
pci/0000:4b:00.0/node_11: type node parent node_4
pci/0000:4b:00.0/node_10: type node parent node_4
pci/0000:4b:00.0/node_9: type node parent node_4
pci/0000:4b:00.0/node_8: type node parent node_4
pci/0000:4b:00.0/node_7: type node parent node_4
pci/0000:4b:00.0/node_6: type node parent node_4
pci/0000:4b:00.0/node_4: type node parent node_3
pci/0000:4b:00.0/node_3: type node parent node_16
pci/0000:4b:00.0/node_16: type node parent node_15
pci/0000:4b:00.0/node_15: type node parent node_0
pci/0000:4b:00.0/node_2: type node parent node_1
pci/0000:4b:00.0/node_1: type node parent node_0
pci/0000:4b:00.0/node_0: type node
pci/0000:4b:00.0/1: type leaf parent node_27
pci/0000:4b:00.0/2: type leaf parent node_27


Let me visualize part of the tree:

                        +---------+
                        |  node_0 |
                        +---------+
                             |
                        +----v----+
                        | node_26 |
                        +----+----+
                             |
                        +----v----+
                        | node_27 |
                        +----+----+
                             |
                    |-----------------|
               +----v----+       +----v----+
               |   VF 1  |       |   VF 2  |
               +----+----+       +----+----+

At this point there are a couple of things that can be done.
For example, we could simply assign parameters to the VFs.

[root@fedora ~]# devlink port function rate set pci/0000:4b:00.0/1 \
                 tx_max 5Gbps

This would cap VF 1's bandwidth at 5 Gbps.

But let's say you would like to create a completely new branch.
This can be done like this:

[root@fedora ~]# devlink port function rate add \
                 pci/0000:4b:00.0/node_custom parent node_0
[root@fedora ~]# devlink port function rate add \
                 pci/0000:4b:00.0/node_custom_1 parent node_custom
[root@fedora ~]# devlink port function rate set \
                 pci/0000:4b:00.0/1 parent node_custom_1

This creates a completely new branch and reassigns VF 1 to it.

A number of parameters are supported for each node: tx_max, tx_share,
tx_priority, and tx_weight.
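
For example, after building the custom branch above, both arbitration
schemes could be combined on one node (the values below are purely
illustrative and assume an iproute2 build that understands the new
attributes):

[root@fedora ~]# devlink port function rate set \
                 pci/0000:4b:00.0/node_custom_1 tx_priority 5 tx_weight 51

tx_priority selects the strict priority (0-7) among siblings, while
tx_weight (1-200) sets the relative share used by the Weighted Fair
Queuing arbiter.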

V10:
 - changed attributes type from u16 to u32 as they are padded to u32
   anyway
 - changed NL_SET_ERR_MSG_MOD to NL_SET_ERR_MSG_ATTR, as it points to
   a problem with a specific attribute
 - fixed a function parameter that was not described
 - added documentation in ice.rst

V9:
 - changed misleading name from 'parameter' to 'attribute'
 - removed the mechanism that referred to kernel objects by string;
   now they are referred to by pointer
 - removed the name size limit in devl_rate_node_create()
 - removed commit that allowed for change of priv in set_parent
   callback
 - added commit that allows for pre-allocation of ice_sched
   elements

V8:
- addressed minor formatting issues
- fixed a memory leak
- addressed warnings

V7:
- split into smaller commits
- pasted the justification for this series into the cover letter

V6:
- replaced strncpy with strscpy
- renamed rate_vport -> rate_leaf

V5:
- removed queue support per community request
- fixed division of a 64-bit variable by a 32-bit divisor by using div_u64()
- removed the RDMA, ADQ exclusion as it's no longer necessary
- changed how the driver exports configuration, as queues are no longer
  supported
- changed IDA to xarray for unique node identification


V4:
- changed the static counter variable to a per-port IDA to uniquely
  identify nodes

V3:
- removed shift macros, since FIELD_PREP() is used
- added a static_assert for the struct
- removed unnecessary functions
- used a tab instead of a space in a define

V2:
- addressed Alexandr's comments
- refactored code to fix checkpatch issues
- added mutual exclusion for RDMA, DCB


Michal Wilczynski (10):
  devlink: Introduce new attribute 'tx_priority' to devlink-rate
  devlink: Introduce new attribute 'tx_weight' to devlink-rate
  devlink: Enable creation of the devlink-rate nodes from the driver
  devlink: Allow for devlink-rate nodes parent reassignment
  devlink: Allow to set up parent in devl_rate_leaf_create()
  ice: Introduce new parameters in ice_sched_node
  ice: Add an option to pre-allocate memory for ice_sched_node
  ice: Implement devlink-rate API
  ice: Prevent ADQ, DCB coexistence with Custom Tx scheduler
  ice: add documentation for devlink-rate implementation

 Documentation/networking/devlink/ice.rst      | 106 ++++
 .../net/ethernet/intel/ice/ice_adminq_cmd.h   |   4 +-
 drivers/net/ethernet/intel/ice/ice_common.c   |   7 +-
 drivers/net/ethernet/intel/ice/ice_dcb.c      |   2 +-
 drivers/net/ethernet/intel/ice/ice_dcb_lib.c  |   4 +
 drivers/net/ethernet/intel/ice/ice_devlink.c  | 486 ++++++++++++++++++
 drivers/net/ethernet/intel/ice/ice_devlink.h  |   2 +
 drivers/net/ethernet/intel/ice/ice_repr.c     |  13 +
 drivers/net/ethernet/intel/ice/ice_sched.c    | 102 +++-
 drivers/net/ethernet/intel/ice/ice_sched.h    |  31 +-
 drivers/net/ethernet/intel/ice/ice_type.h     |   9 +
 .../mellanox/mlx5/core/esw/devlink_port.c     |   4 +-
 drivers/net/netdevsim/dev.c                   |   2 +-
 include/net/devlink.h                         |  18 +-
 include/uapi/linux/devlink.h                  |   3 +
 net/core/devlink.c                            | 130 ++++-
 16 files changed, 896 insertions(+), 27 deletions(-)

-- 
2.37.2



* [PATCH net-next v10 01/10] devlink: Introduce new attribute 'tx_priority' to devlink-rate
  2022-11-07 18:13 [PATCH net-next v10 00/10] Implement devlink-rate API and extend it Michal Wilczynski
@ 2022-11-07 18:13 ` Michal Wilczynski
  2022-11-07 18:13 ` [PATCH net-next v10 02/10] devlink: Introduce new attribute 'tx_weight' " Michal Wilczynski
                   ` (9 subsequent siblings)
  10 siblings, 0 replies; 18+ messages in thread
From: Michal Wilczynski @ 2022-11-07 18:13 UTC (permalink / raw)
  To: netdev
  Cc: alexandr.lobakin, jacob.e.keller, jesse.brandeburg,
	przemyslaw.kitszel, anthony.l.nguyen, kuba, ecree.xilinx, jiri,
	Michal Wilczynski, Jiri Pirko

To fully utilize the QoS offload capabilities of the Intel 100G card, a
new attribute 'tx_priority' needs to be introduced. This attribute
allows for the use of a strict priority arbiter among siblings. This
arbitration scheme attempts to schedule nodes based on their priority
as long as the nodes remain within their bandwidth limit.

Introduce a new attribute in devlink-rate that allows for the
configuration of strict priority. The new attribute is optional.
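
For driver authors, a minimal sketch of wiring up the new callback
(the callback signature comes from this patch; struct my_vport and
my_hw_set_node_priority() are hypothetical):

static int
my_rate_leaf_tx_priority_set(struct devlink_rate *devlink_rate, void *priv,
			     u32 tx_priority, struct netlink_ext_ack *extack)
{
	struct my_vport *vport = priv;	/* priv supplied at leaf creation */

	if (tx_priority > 7) {
		NL_SET_ERR_MSG_MOD(extack, "tx_priority must be 0-7");
		return -EINVAL;
	}

	return my_hw_set_node_priority(vport, tx_priority);
}

static const struct devlink_ops my_devlink_ops = {
	.rate_leaf_tx_priority_set = my_rate_leaf_tx_priority_set,
	/* rate_node_tx_priority_set is wired up analogously */
};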

Signed-off-by: Michal Wilczynski <michal.wilczynski@intel.com>
Reviewed-by: Jiri Pirko <jiri@nvidia.com>
---
 include/net/devlink.h        |  6 ++++++
 include/uapi/linux/devlink.h |  1 +
 net/core/devlink.c           | 31 +++++++++++++++++++++++++++++++
 3 files changed, 38 insertions(+)

diff --git a/include/net/devlink.h b/include/net/devlink.h
index fa6e936af1a5..849c87832f55 100644
--- a/include/net/devlink.h
+++ b/include/net/devlink.h
@@ -114,6 +114,8 @@ struct devlink_rate {
 			refcount_t refcnt;
 		};
 	};
+
+	u32 tx_priority;
 };
 
 struct devlink_port {
@@ -1502,10 +1504,14 @@ struct devlink_ops {
 				      u64 tx_share, struct netlink_ext_ack *extack);
 	int (*rate_leaf_tx_max_set)(struct devlink_rate *devlink_rate, void *priv,
 				    u64 tx_max, struct netlink_ext_ack *extack);
+	int (*rate_leaf_tx_priority_set)(struct devlink_rate *devlink_rate, void *priv,
+					 u32 tx_priority, struct netlink_ext_ack *extack);
 	int (*rate_node_tx_share_set)(struct devlink_rate *devlink_rate, void *priv,
 				      u64 tx_share, struct netlink_ext_ack *extack);
 	int (*rate_node_tx_max_set)(struct devlink_rate *devlink_rate, void *priv,
 				    u64 tx_max, struct netlink_ext_ack *extack);
+	int (*rate_node_tx_priority_set)(struct devlink_rate *devlink_rate, void *priv,
+					 u32 tx_priority, struct netlink_ext_ack *extack);
 	int (*rate_node_new)(struct devlink_rate *rate_node, void **priv,
 			     struct netlink_ext_ack *extack);
 	int (*rate_node_del)(struct devlink_rate *rate_node, void *priv,
diff --git a/include/uapi/linux/devlink.h b/include/uapi/linux/devlink.h
index 2f24b53a87a5..1a9214d35ef5 100644
--- a/include/uapi/linux/devlink.h
+++ b/include/uapi/linux/devlink.h
@@ -607,6 +607,7 @@ enum devlink_attr {
 
 	DEVLINK_ATTR_SELFTESTS,			/* nested */
 
+	DEVLINK_ATTR_RATE_TX_PRIORITY,		/* u32 */
 	/* add new attributes above here, update the policy in devlink.c */
 
 	__DEVLINK_ATTR_MAX,
diff --git a/net/core/devlink.c b/net/core/devlink.c
index 40fcdded57e6..d8310dd7e429 100644
--- a/net/core/devlink.c
+++ b/net/core/devlink.c
@@ -1203,6 +1203,9 @@ static int devlink_nl_rate_fill(struct sk_buff *msg,
 			      devlink_rate->tx_max, DEVLINK_ATTR_PAD))
 		goto nla_put_failure;
 
+	if (nla_put_u32(msg, DEVLINK_ATTR_RATE_TX_PRIORITY,
+			devlink_rate->tx_priority))
+		goto nla_put_failure;
 	if (devlink_rate->parent)
 		if (nla_put_string(msg, DEVLINK_ATTR_RATE_PARENT_NODE_NAME,
 				   devlink_rate->parent->name))
@@ -1936,6 +1939,7 @@ static int devlink_nl_rate_set(struct devlink_rate *devlink_rate,
 {
 	struct nlattr *nla_parent, **attrs = info->attrs;
 	int err = -EOPNOTSUPP;
+	u32 priority;
 	u64 rate;
 
 	if (attrs[DEVLINK_ATTR_RATE_TX_SHARE]) {
@@ -1964,6 +1968,20 @@ static int devlink_nl_rate_set(struct devlink_rate *devlink_rate,
 		devlink_rate->tx_max = rate;
 	}
 
+	if (attrs[DEVLINK_ATTR_RATE_TX_PRIORITY]) {
+		priority = nla_get_u32(attrs[DEVLINK_ATTR_RATE_TX_PRIORITY]);
+		if (devlink_rate_is_leaf(devlink_rate))
+			err = ops->rate_leaf_tx_priority_set(devlink_rate, devlink_rate->priv,
+							     priority, info->extack);
+		else if (devlink_rate_is_node(devlink_rate))
+			err = ops->rate_node_tx_priority_set(devlink_rate, devlink_rate->priv,
+							     priority, info->extack);
+
+		if (err)
+			return err;
+		devlink_rate->tx_priority = priority;
+	}
+
 	nla_parent = attrs[DEVLINK_ATTR_RATE_PARENT_NODE_NAME];
 	if (nla_parent) {
 		err = devlink_nl_rate_parent_node_set(devlink_rate, info,
@@ -1995,6 +2013,12 @@ static bool devlink_rate_set_ops_supported(const struct devlink_ops *ops,
 			NL_SET_ERR_MSG_MOD(info->extack, "Parent set isn't supported for the leafs");
 			return false;
 		}
+		if (attrs[DEVLINK_ATTR_RATE_TX_PRIORITY] && !ops->rate_leaf_tx_priority_set) {
+			NL_SET_ERR_MSG_ATTR(info->extack,
+					    attrs[DEVLINK_ATTR_RATE_TX_PRIORITY],
+					    "TX priority set isn't supported for the leafs");
+			return false;
+		}
 	} else if (type == DEVLINK_RATE_TYPE_NODE) {
 		if (attrs[DEVLINK_ATTR_RATE_TX_SHARE] && !ops->rate_node_tx_share_set) {
 			NL_SET_ERR_MSG_MOD(info->extack, "TX share set isn't supported for the nodes");
@@ -2009,6 +2033,12 @@ static bool devlink_rate_set_ops_supported(const struct devlink_ops *ops,
 			NL_SET_ERR_MSG_MOD(info->extack, "Parent set isn't supported for the nodes");
 			return false;
 		}
+		if (attrs[DEVLINK_ATTR_RATE_TX_PRIORITY] && !ops->rate_node_tx_priority_set) {
+			NL_SET_ERR_MSG_ATTR(info->extack,
+					    attrs[DEVLINK_ATTR_RATE_TX_PRIORITY],
+					    "TX priority set isn't supported for the nodes");
+			return false;
+		}
 	} else {
 		WARN(1, "Unknown type of rate object");
 		return false;
@@ -9184,6 +9214,7 @@ static const struct nla_policy devlink_nl_policy[DEVLINK_ATTR_MAX + 1] = {
 	[DEVLINK_ATTR_LINECARD_INDEX] = { .type = NLA_U32 },
 	[DEVLINK_ATTR_LINECARD_TYPE] = { .type = NLA_NUL_STRING },
 	[DEVLINK_ATTR_SELFTESTS] = { .type = NLA_NESTED },
+	[DEVLINK_ATTR_RATE_TX_PRIORITY] = { .type = NLA_U32 },
 };
 
 static const struct genl_small_ops devlink_nl_ops[] = {
-- 
2.37.2



* [PATCH net-next v10 02/10] devlink: Introduce new attribute 'tx_weight' to devlink-rate
  2022-11-07 18:13 [PATCH net-next v10 00/10] Implement devlink-rate API and extend it Michal Wilczynski
  2022-11-07 18:13 ` [PATCH net-next v10 01/10] devlink: Introduce new attribute 'tx_priority' to devlink-rate Michal Wilczynski
@ 2022-11-07 18:13 ` Michal Wilczynski
  2022-11-07 18:13 ` [PATCH net-next v10 03/10] devlink: Enable creation of the devlink-rate nodes from the driver Michal Wilczynski
                   ` (8 subsequent siblings)
  10 siblings, 0 replies; 18+ messages in thread
From: Michal Wilczynski @ 2022-11-07 18:13 UTC (permalink / raw)
  To: netdev
  Cc: alexandr.lobakin, jacob.e.keller, jesse.brandeburg,
	przemyslaw.kitszel, anthony.l.nguyen, kuba, ecree.xilinx, jiri,
	Michal Wilczynski, Jiri Pirko

To fully utilize the QoS offload capabilities of the Intel 100G card, a
new attribute 'tx_weight' needs to be introduced. This attribute allows
for the use of the Weighted Fair Queuing arbitration scheme among
siblings. This arbitration scheme can be used simultaneously with
strict priority.

Introduce a new attribute in devlink-rate that allows for the
configuration of Weighted Fair Queuing. The new attribute is optional.
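
As an illustration, two sibling leaves could share their parent's
bandwidth roughly 1:4 by weight (illustrative values; the valid range
on this hardware is 1-200):

[root@fedora ~]# devlink port function rate set pci/0000:4b:00.0/1 \
                 tx_weight 20
[root@fedora ~]# devlink port function rate set pci/0000:4b:00.0/2 \
                 tx_weight 80

A sibling with a higher tx_weight receives proportionally more of the
contended bandwidth.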

Signed-off-by: Michal Wilczynski <michal.wilczynski@intel.com>
Reviewed-by: Jiri Pirko <jiri@nvidia.com>
---
 include/net/devlink.h        |  5 +++++
 include/uapi/linux/devlink.h |  2 ++
 net/core/devlink.c           | 33 +++++++++++++++++++++++++++++++++
 3 files changed, 40 insertions(+)

diff --git a/include/net/devlink.h b/include/net/devlink.h
index 849c87832f55..0e763de4cbf5 100644
--- a/include/net/devlink.h
+++ b/include/net/devlink.h
@@ -116,6 +116,7 @@ struct devlink_rate {
 	};
 
 	u32 tx_priority;
+	u32 tx_weight;
 };
 
 struct devlink_port {
@@ -1506,12 +1507,16 @@ struct devlink_ops {
 				    u64 tx_max, struct netlink_ext_ack *extack);
 	int (*rate_leaf_tx_priority_set)(struct devlink_rate *devlink_rate, void *priv,
 					 u32 tx_priority, struct netlink_ext_ack *extack);
+	int (*rate_leaf_tx_weight_set)(struct devlink_rate *devlink_rate, void *priv,
+				       u32 tx_weight, struct netlink_ext_ack *extack);
 	int (*rate_node_tx_share_set)(struct devlink_rate *devlink_rate, void *priv,
 				      u64 tx_share, struct netlink_ext_ack *extack);
 	int (*rate_node_tx_max_set)(struct devlink_rate *devlink_rate, void *priv,
 				    u64 tx_max, struct netlink_ext_ack *extack);
 	int (*rate_node_tx_priority_set)(struct devlink_rate *devlink_rate, void *priv,
 					 u32 tx_priority, struct netlink_ext_ack *extack);
+	int (*rate_node_tx_weight_set)(struct devlink_rate *devlink_rate, void *priv,
+				       u32 tx_weight, struct netlink_ext_ack *extack);
 	int (*rate_node_new)(struct devlink_rate *rate_node, void **priv,
 			     struct netlink_ext_ack *extack);
 	int (*rate_node_del)(struct devlink_rate *rate_node, void *priv,
diff --git a/include/uapi/linux/devlink.h b/include/uapi/linux/devlink.h
index 1a9214d35ef5..498d0d5d0957 100644
--- a/include/uapi/linux/devlink.h
+++ b/include/uapi/linux/devlink.h
@@ -608,6 +608,8 @@ enum devlink_attr {
 	DEVLINK_ATTR_SELFTESTS,			/* nested */
 
 	DEVLINK_ATTR_RATE_TX_PRIORITY,		/* u32 */
+	DEVLINK_ATTR_RATE_TX_WEIGHT,		/* u32 */
+
 	/* add new attributes above here, update the policy in devlink.c */
 
 	__DEVLINK_ATTR_MAX,
diff --git a/net/core/devlink.c b/net/core/devlink.c
index d8310dd7e429..395fdf3e0299 100644
--- a/net/core/devlink.c
+++ b/net/core/devlink.c
@@ -1206,6 +1206,11 @@ static int devlink_nl_rate_fill(struct sk_buff *msg,
 	if (nla_put_u32(msg, DEVLINK_ATTR_RATE_TX_PRIORITY,
 			devlink_rate->tx_priority))
 		goto nla_put_failure;
+
+	if (nla_put_u32(msg, DEVLINK_ATTR_RATE_TX_WEIGHT,
+			devlink_rate->tx_weight))
+		goto nla_put_failure;
+
 	if (devlink_rate->parent)
 		if (nla_put_string(msg, DEVLINK_ATTR_RATE_PARENT_NODE_NAME,
 				   devlink_rate->parent->name))
@@ -1940,6 +1945,7 @@ static int devlink_nl_rate_set(struct devlink_rate *devlink_rate,
 	struct nlattr *nla_parent, **attrs = info->attrs;
 	int err = -EOPNOTSUPP;
 	u32 priority;
+	u32 weight;
 	u64 rate;
 
 	if (attrs[DEVLINK_ATTR_RATE_TX_SHARE]) {
@@ -1982,6 +1988,20 @@ static int devlink_nl_rate_set(struct devlink_rate *devlink_rate,
 		devlink_rate->tx_priority = priority;
 	}
 
+	if (attrs[DEVLINK_ATTR_RATE_TX_WEIGHT]) {
+		weight = nla_get_u32(attrs[DEVLINK_ATTR_RATE_TX_WEIGHT]);
+		if (devlink_rate_is_leaf(devlink_rate))
+			err = ops->rate_leaf_tx_weight_set(devlink_rate, devlink_rate->priv,
+							   weight, info->extack);
+		else if (devlink_rate_is_node(devlink_rate))
+			err = ops->rate_node_tx_weight_set(devlink_rate, devlink_rate->priv,
+							   weight, info->extack);
+
+		if (err)
+			return err;
+		devlink_rate->tx_weight = weight;
+	}
+
 	nla_parent = attrs[DEVLINK_ATTR_RATE_PARENT_NODE_NAME];
 	if (nla_parent) {
 		err = devlink_nl_rate_parent_node_set(devlink_rate, info,
@@ -2019,6 +2039,12 @@ static bool devlink_rate_set_ops_supported(const struct devlink_ops *ops,
 					    "TX priority set isn't supported for the leafs");
 			return false;
 		}
+		if (attrs[DEVLINK_ATTR_RATE_TX_WEIGHT] && !ops->rate_leaf_tx_weight_set) {
+			NL_SET_ERR_MSG_ATTR(info->extack,
+					    attrs[DEVLINK_ATTR_RATE_TX_WEIGHT],
+					    "TX weight set isn't supported for the leafs");
+			return false;
+		}
 	} else if (type == DEVLINK_RATE_TYPE_NODE) {
 		if (attrs[DEVLINK_ATTR_RATE_TX_SHARE] && !ops->rate_node_tx_share_set) {
 			NL_SET_ERR_MSG_MOD(info->extack, "TX share set isn't supported for the nodes");
@@ -2039,6 +2065,12 @@ static bool devlink_rate_set_ops_supported(const struct devlink_ops *ops,
 					    "TX priority set isn't supported for the nodes");
 			return false;
 		}
+		if (attrs[DEVLINK_ATTR_RATE_TX_WEIGHT] && !ops->rate_node_tx_weight_set) {
+			NL_SET_ERR_MSG_ATTR(info->extack,
+					    attrs[DEVLINK_ATTR_RATE_TX_WEIGHT],
+					    "TX weight set isn't supported for the nodes");
+			return false;
+		}
 	} else {
 		WARN(1, "Unknown type of rate object");
 		return false;
@@ -9215,6 +9247,7 @@ static const struct nla_policy devlink_nl_policy[DEVLINK_ATTR_MAX + 1] = {
 	[DEVLINK_ATTR_LINECARD_TYPE] = { .type = NLA_NUL_STRING },
 	[DEVLINK_ATTR_SELFTESTS] = { .type = NLA_NESTED },
 	[DEVLINK_ATTR_RATE_TX_PRIORITY] = { .type = NLA_U32 },
+	[DEVLINK_ATTR_RATE_TX_WEIGHT] = { .type = NLA_U32 },
 };
 
 static const struct genl_small_ops devlink_nl_ops[] = {
-- 
2.37.2



* [PATCH net-next v10 03/10] devlink: Enable creation of the devlink-rate nodes from the driver
  2022-11-07 18:13 [PATCH net-next v10 00/10] Implement devlink-rate API and extend it Michal Wilczynski
  2022-11-07 18:13 ` [PATCH net-next v10 01/10] devlink: Introduce new attribute 'tx_priority' to devlink-rate Michal Wilczynski
  2022-11-07 18:13 ` [PATCH net-next v10 02/10] devlink: Introduce new attribute 'tx_weight' " Michal Wilczynski
@ 2022-11-07 18:13 ` Michal Wilczynski
  2022-11-07 18:13 ` [PATCH net-next v10 04/10] devlink: Allow for devlink-rate nodes parent reassignment Michal Wilczynski
                   ` (7 subsequent siblings)
  10 siblings, 0 replies; 18+ messages in thread
From: Michal Wilczynski @ 2022-11-07 18:13 UTC (permalink / raw)
  To: netdev
  Cc: alexandr.lobakin, jacob.e.keller, jesse.brandeburg,
	przemyslaw.kitszel, anthony.l.nguyen, kuba, ecree.xilinx, jiri,
	Michal Wilczynski, Jiri Pirko

The Intel 100G card's internal firmware hierarchy for hierarchical QoS
is very rigid and can't be easily removed. This requires the ability to
export the default hierarchy to allow the user to modify it. Currently
the driver is only able to create the 'leaf' nodes, which usually
represent a vport. This is not enough for the HQoS implemented in Intel
hardware.

Introduce a new function devl_rate_node_create() that allows for the
creation of devlink-rate nodes from the driver.
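
A sketch of how a driver might use the new helper to export a small
hierarchy (error handling trimmed; the priv pointers and node names are
driver-specific):

struct devlink_rate *root_node, *branch;

/* top-level node, no parent */
root_node = devl_rate_node_create(devlink, root_priv, "node_root", NULL);
if (IS_ERR(root_node))
	return PTR_ERR(root_node);

/* child node attached under the root */
branch = devl_rate_node_create(devlink, branch_priv, "node_branch",
			       root_node);
if (IS_ERR(branch))
	return PTR_ERR(branch);

The helper returns ERR_PTR(-EEXIST) when a node with the given name
already exists, so node names can be relied on to be unique.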

Signed-off-by: Michal Wilczynski <michal.wilczynski@intel.com>
Reviewed-by: Jiri Pirko <jiri@nvidia.com>
---
 include/net/devlink.h |  3 +++
 net/core/devlink.c    | 45 +++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 48 insertions(+)

diff --git a/include/net/devlink.h b/include/net/devlink.h
index 0e763de4cbf5..e5c0e091d692 100644
--- a/include/net/devlink.h
+++ b/include/net/devlink.h
@@ -1609,6 +1609,9 @@ void devlink_port_attrs_pci_sf_set(struct devlink_port *devlink_port,
 				   u32 controller, u16 pf, u32 sf,
 				   bool external);
 int devl_rate_leaf_create(struct devlink_port *port, void *priv);
+struct devlink_rate *
+devl_rate_node_create(struct devlink *devlink, void *priv, char *node_name,
+		      struct devlink_rate *parent);
 void devl_rate_leaf_destroy(struct devlink_port *devlink_port);
 void devl_rate_nodes_destroy(struct devlink *devlink);
 void devlink_port_linecard_set(struct devlink_port *devlink_port,
diff --git a/net/core/devlink.c b/net/core/devlink.c
index 395fdf3e0299..0266301416c8 100644
--- a/net/core/devlink.c
+++ b/net/core/devlink.c
@@ -10381,6 +10381,51 @@ void devlink_port_attrs_pci_sf_set(struct devlink_port *devlink_port, u32 contro
 }
 EXPORT_SYMBOL_GPL(devlink_port_attrs_pci_sf_set);
 
+/**
+ * devl_rate_node_create - create devlink rate node
+ * @devlink: devlink instance
+ * @priv: driver private data
+ * @node_name: name of the resulting node
+ * @parent: parent devlink_rate struct
+ *
+ * Create devlink rate object of type node
+ */
+struct devlink_rate *
+devl_rate_node_create(struct devlink *devlink, void *priv, char *node_name,
+		      struct devlink_rate *parent)
+{
+	struct devlink_rate *rate_node;
+
+	rate_node = devlink_rate_node_get_by_name(devlink, node_name);
+	if (!IS_ERR(rate_node))
+		return ERR_PTR(-EEXIST);
+
+	rate_node = kzalloc(sizeof(*rate_node), GFP_KERNEL);
+	if (!rate_node)
+		return ERR_PTR(-ENOMEM);
+
+	if (parent) {
+		rate_node->parent = parent;
+		refcount_inc(&rate_node->parent->refcnt);
+	}
+
+	rate_node->type = DEVLINK_RATE_TYPE_NODE;
+	rate_node->devlink = devlink;
+	rate_node->priv = priv;
+
+	rate_node->name = kstrdup(node_name, GFP_KERNEL);
+	if (!rate_node->name) {
+		kfree(rate_node);
+		return ERR_PTR(-ENOMEM);
+	}
+
+	refcount_set(&rate_node->refcnt, 1);
+	list_add(&rate_node->list, &devlink->rate_list);
+	devlink_rate_notify(rate_node, DEVLINK_CMD_RATE_NEW);
+	return rate_node;
+}
+EXPORT_SYMBOL_GPL(devl_rate_node_create);
+
 /**
  * devl_rate_leaf_create - create devlink rate leaf
  * @devlink_port: devlink port object to create rate object on
-- 
2.37.2



* [PATCH net-next v10 04/10] devlink: Allow for devlink-rate nodes parent reassignment
  2022-11-07 18:13 [PATCH net-next v10 00/10] Implement devlink-rate API and extend it Michal Wilczynski
                   ` (2 preceding siblings ...)
  2022-11-07 18:13 ` [PATCH net-next v10 03/10] devlink: Enable creation of the devlink-rate nodes from the driver Michal Wilczynski
@ 2022-11-07 18:13 ` Michal Wilczynski
  2022-11-07 18:13 ` [PATCH net-next v10 05/10] devlink: Allow to set up parent in devl_rate_leaf_create() Michal Wilczynski
                   ` (6 subsequent siblings)
  10 siblings, 0 replies; 18+ messages in thread
From: Michal Wilczynski @ 2022-11-07 18:13 UTC (permalink / raw)
  To: netdev
  Cc: alexandr.lobakin, jacob.e.keller, jesse.brandeburg,
	przemyslaw.kitszel, anthony.l.nguyen, kuba, ecree.xilinx, jiri,
	Michal Wilczynski, Jiri Pirko

Currently it's not possible to reassign the parent of a node using one
command. As the previous commit introduced a way to export the entire
hierarchy from the driver, being able to modify and reassign parents
becomes important. This way the user can easily change QoS settings
without interrupting traffic.

Example command:
devlink port function rate set pci/0000:4b:00.0/1 parent node_custom_1

This reassigns the leaf node's parent to node_custom_1.
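
The reverse operation, detaching the rate object from its current
parent in one command, should also be possible (assuming an iproute2
build with 'noparent' support):

devlink port function rate set pci/0000:4b:00.0/1 noparent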

Signed-off-by: Michal Wilczynski <michal.wilczynski@intel.com>
Reviewed-by: Przemek Kitszel <przemyslaw.kitszel@intel.com>
Reviewed-by: Jiri Pirko <jiri@nvidia.com>
---
 net/core/devlink.c | 12 +++++++-----
 1 file changed, 7 insertions(+), 5 deletions(-)

diff --git a/net/core/devlink.c b/net/core/devlink.c
index 0266301416c8..9f00ea85b5f8 100644
--- a/net/core/devlink.c
+++ b/net/core/devlink.c
@@ -1887,10 +1887,8 @@ devlink_nl_rate_parent_node_set(struct devlink_rate *devlink_rate,
 	int err = -EOPNOTSUPP;
 
 	parent = devlink_rate->parent;
-	if (parent && len) {
-		NL_SET_ERR_MSG_MOD(info->extack, "Rate object already has parent.");
-		return -EBUSY;
-	} else if (parent && !len) {
+
+	if (parent && !len) {
 		if (devlink_rate_is_leaf(devlink_rate))
 			err = ops->rate_leaf_parent_set(devlink_rate, NULL,
 							devlink_rate->priv, NULL,
@@ -1904,7 +1902,7 @@ devlink_nl_rate_parent_node_set(struct devlink_rate *devlink_rate,
 
 		refcount_dec(&parent->refcnt);
 		devlink_rate->parent = NULL;
-	} else if (!parent && len) {
+	} else if (len) {
 		parent = devlink_rate_node_get_by_name(devlink, parent_name);
 		if (IS_ERR(parent))
 			return -ENODEV;
@@ -1931,6 +1929,10 @@ devlink_nl_rate_parent_node_set(struct devlink_rate *devlink_rate,
 		if (err)
 			return err;
 
+		if (devlink_rate->parent)
+			/* we're reassigning to other parent in this case */
+			refcount_dec(&devlink_rate->parent->refcnt);
+
 		refcount_inc(&parent->refcnt);
 		devlink_rate->parent = parent;
 	}
-- 
2.37.2



* [PATCH net-next v10 05/10] devlink: Allow to set up parent in devl_rate_leaf_create()
  2022-11-07 18:13 [PATCH net-next v10 00/10] Implement devlink-rate API and extend it Michal Wilczynski
                   ` (3 preceding siblings ...)
  2022-11-07 18:13 ` [PATCH net-next v10 04/10] devlink: Allow for devlink-rate nodes parent reassignment Michal Wilczynski
@ 2022-11-07 18:13 ` Michal Wilczynski
  2022-11-07 18:13 ` [PATCH net-next v10 06/10] ice: Introduce new parameters in ice_sched_node Michal Wilczynski
                   ` (5 subsequent siblings)
  10 siblings, 0 replies; 18+ messages in thread
From: Michal Wilczynski @ 2022-11-07 18:13 UTC (permalink / raw)
  To: netdev
  Cc: alexandr.lobakin, jacob.e.keller, jesse.brandeburg,
	przemyslaw.kitszel, anthony.l.nguyen, kuba, ecree.xilinx, jiri,
	Michal Wilczynski, Jiri Pirko

Currently the driver is able to create leaf nodes for devlink-rate, but
is unable to set a parent for them. This wasn't an issue before it
became possible to export the hierarchy from the driver. After adding
the export feature, in order for the driver to supply the correct
hierarchy, it's necessary for it to be able to supply a parent to
devl_rate_leaf_create().

Introduce a new parameter 'parent' in devl_rate_leaf_create().
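
A minimal sketch of a call site using the new parameter (vf_priv and
parent_rate are hypothetical):

err = devl_rate_leaf_create(&vf->devlink_port, vf_priv, parent_rate);
if (err)
	return err;

Callers that don't need a parent simply pass NULL, as the mlx5 and
netdevsim hunks below show.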

Signed-off-by: Michal Wilczynski <michal.wilczynski@intel.com>
Reviewed-by: Jiri Pirko <jiri@nvidia.com>
---
 .../net/ethernet/mellanox/mlx5/core/esw/devlink_port.c   | 4 ++--
 drivers/net/netdevsim/dev.c                              | 2 +-
 include/net/devlink.h                                    | 4 +++-
 net/core/devlink.c                                       | 9 ++++++++-
 4 files changed, 14 insertions(+), 5 deletions(-)

diff --git a/drivers/net/ethernet/mellanox/mlx5/core/esw/devlink_port.c b/drivers/net/ethernet/mellanox/mlx5/core/esw/devlink_port.c
index 9bc7be95db54..084a910bb4e7 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/esw/devlink_port.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/esw/devlink_port.c
@@ -91,7 +91,7 @@ int mlx5_esw_offloads_devlink_port_register(struct mlx5_eswitch *esw, u16 vport_
 	if (err)
 		goto reg_err;
 
-	err = devl_rate_leaf_create(dl_port, vport);
+	err = devl_rate_leaf_create(dl_port, vport, NULL);
 	if (err)
 		goto rate_err;
 
@@ -160,7 +160,7 @@ int mlx5_esw_devlink_sf_port_register(struct mlx5_eswitch *esw, struct devlink_p
 	if (err)
 		return err;
 
-	err = devl_rate_leaf_create(dl_port, vport);
+	err = devl_rate_leaf_create(dl_port, vport, NULL);
 	if (err)
 		goto rate_err;
 
diff --git a/drivers/net/netdevsim/dev.c b/drivers/net/netdevsim/dev.c
index 387c05953a8b..a5bd6dcca980 100644
--- a/drivers/net/netdevsim/dev.c
+++ b/drivers/net/netdevsim/dev.c
@@ -1401,7 +1401,7 @@ static int __nsim_dev_port_add(struct nsim_dev *nsim_dev, enum nsim_dev_port_typ
 
 	if (nsim_dev_port_is_vf(nsim_dev_port)) {
 		err = devl_rate_leaf_create(&nsim_dev_port->devlink_port,
-					    nsim_dev_port);
+					    nsim_dev_port, NULL);
 		if (err)
 			goto err_nsim_destroy;
 	}
diff --git a/include/net/devlink.h b/include/net/devlink.h
index e5c0e091d692..e65fe71593f0 100644
--- a/include/net/devlink.h
+++ b/include/net/devlink.h
@@ -1608,10 +1608,12 @@ void devlink_port_attrs_pci_vf_set(struct devlink_port *devlink_port, u32 contro
 void devlink_port_attrs_pci_sf_set(struct devlink_port *devlink_port,
 				   u32 controller, u16 pf, u32 sf,
 				   bool external);
-int devl_rate_leaf_create(struct devlink_port *port, void *priv);
 struct devlink_rate *
 devl_rate_node_create(struct devlink *devlink, void *priv, char *node_name,
 		      struct devlink_rate *parent);
+int
+devl_rate_leaf_create(struct devlink_port *devlink_port, void *priv,
+		      struct devlink_rate *parent);
 void devl_rate_leaf_destroy(struct devlink_port *devlink_port);
 void devl_rate_nodes_destroy(struct devlink *devlink);
 void devlink_port_linecard_set(struct devlink_port *devlink_port,
diff --git a/net/core/devlink.c b/net/core/devlink.c
index 9f00ea85b5f8..f556f715a6b7 100644
--- a/net/core/devlink.c
+++ b/net/core/devlink.c
@@ -10432,10 +10432,12 @@ EXPORT_SYMBOL_GPL(devl_rate_node_create);
  * devl_rate_leaf_create - create devlink rate leaf
  * @devlink_port: devlink port object to create rate object on
  * @priv: driver private data
+ * @parent: parent devlink_rate struct
  *
  * Create devlink rate object of type leaf on provided @devlink_port.
  */
-int devl_rate_leaf_create(struct devlink_port *devlink_port, void *priv)
+int devl_rate_leaf_create(struct devlink_port *devlink_port, void *priv,
+			  struct devlink_rate *parent)
 {
 	struct devlink *devlink = devlink_port->devlink;
 	struct devlink_rate *devlink_rate;
@@ -10449,6 +10451,11 @@ int devl_rate_leaf_create(struct devlink_port *devlink_port, void *priv)
 	if (!devlink_rate)
 		return -ENOMEM;
 
+	if (parent) {
+		devlink_rate->parent = parent;
+		refcount_inc(&devlink_rate->parent->refcnt);
+	}
+
 	devlink_rate->type = DEVLINK_RATE_TYPE_LEAF;
 	devlink_rate->devlink = devlink;
 	devlink_rate->devlink_port = devlink_port;
-- 
2.37.2



* [PATCH net-next v10 06/10] ice: Introduce new parameters in ice_sched_node
  2022-11-07 18:13 [PATCH net-next v10 00/10] Implement devlink-rate API and extend it Michal Wilczynski
                   ` (4 preceding siblings ...)
  2022-11-07 18:13 ` [PATCH net-next v10 05/10] devlink: Allow to set up parent in devl_rate_leaf_create() Michal Wilczynski
@ 2022-11-07 18:13 ` Michal Wilczynski
  2022-11-07 18:13 ` [PATCH net-next v10 07/10] ice: Add an option to pre-allocate memory for ice_sched_node Michal Wilczynski
                   ` (4 subsequent siblings)
  10 siblings, 0 replies; 18+ messages in thread
From: Michal Wilczynski @ 2022-11-07 18:13 UTC (permalink / raw)
  To: netdev
  Cc: alexandr.lobakin, jacob.e.keller, jesse.brandeburg,
	przemyslaw.kitszel, anthony.l.nguyen, kuba, ecree.xilinx, jiri,
	Michal Wilczynski

To support the new devlink-rate API, the ice_sched_node struct needs to
store a number of additional parameters: tx_max, tx_share, tx_weight,
and tx_priority.

Add new fields to the ice_sched_node struct. Add new functions to
configure the hardware with the new parameters. Introduce a new xarray
to uniquely identify nodes.
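
The conversion from shift macros to GENMASK()/FIELD_PREP() in this
patch can be sanity-checked with a small standalone snippet
(illustrative only):

#include <linux/bitfield.h>

/* ICE_AQC_ELEM_GENERIC_PRIO_M is GENMASK(3, 1) == 0x0e */
u8 generic = 0;

generic |= FIELD_PREP(GENMASK(3, 1), 5);	/* 5 << 1 == 0x0a */
generic |= FIELD_PREP(GENMASK(4, 4), 1);	/* 1 << 4 == 0x10 */

FIELD_PREP() shifts the value into the field described by the mask,
which replaces the open-coded *_S shift defines.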

Signed-off-by: Michal Wilczynski <michal.wilczynski@intel.com>
---
 .../net/ethernet/intel/ice/ice_adminq_cmd.h   |  4 +-
 drivers/net/ethernet/intel/ice/ice_common.c   |  3 +
 drivers/net/ethernet/intel/ice/ice_sched.c    | 79 +++++++++++++++++--
 drivers/net/ethernet/intel/ice/ice_sched.h    | 27 +++++++
 drivers/net/ethernet/intel/ice/ice_type.h     |  8 ++
 5 files changed, 114 insertions(+), 7 deletions(-)

diff --git a/drivers/net/ethernet/intel/ice/ice_adminq_cmd.h b/drivers/net/ethernet/intel/ice/ice_adminq_cmd.h
index 1bdc70aa979d..958c1e435232 100644
--- a/drivers/net/ethernet/intel/ice/ice_adminq_cmd.h
+++ b/drivers/net/ethernet/intel/ice/ice_adminq_cmd.h
@@ -848,9 +848,9 @@ struct ice_aqc_txsched_elem {
 	u8 generic;
 #define ICE_AQC_ELEM_GENERIC_MODE_M		0x1
 #define ICE_AQC_ELEM_GENERIC_PRIO_S		0x1
-#define ICE_AQC_ELEM_GENERIC_PRIO_M	(0x7 << ICE_AQC_ELEM_GENERIC_PRIO_S)
+#define ICE_AQC_ELEM_GENERIC_PRIO_M	        GENMASK(3, 1)
 #define ICE_AQC_ELEM_GENERIC_SP_S		0x4
-#define ICE_AQC_ELEM_GENERIC_SP_M	(0x1 << ICE_AQC_ELEM_GENERIC_SP_S)
+#define ICE_AQC_ELEM_GENERIC_SP_M	        GENMASK(4, 4)
 #define ICE_AQC_ELEM_GENERIC_ADJUST_VAL_S	0x5
 #define ICE_AQC_ELEM_GENERIC_ADJUST_VAL_M	\
 	(0x3 << ICE_AQC_ELEM_GENERIC_ADJUST_VAL_S)
diff --git a/drivers/net/ethernet/intel/ice/ice_common.c b/drivers/net/ethernet/intel/ice/ice_common.c
index 039342a0ed15..e2e661010176 100644
--- a/drivers/net/ethernet/intel/ice/ice_common.c
+++ b/drivers/net/ethernet/intel/ice/ice_common.c
@@ -1105,6 +1105,9 @@ int ice_init_hw(struct ice_hw *hw)
 
 	hw->evb_veb = true;
 
+	/* init xarray for identifying scheduling nodes uniquely */
+	xa_init_flags(&hw->port_info->sched_node_ids, XA_FLAGS_ALLOC);
+
 	/* Query the allocated resources for Tx scheduler */
 	status = ice_sched_query_res_alloc(hw);
 	if (status) {
diff --git a/drivers/net/ethernet/intel/ice/ice_sched.c b/drivers/net/ethernet/intel/ice/ice_sched.c
index 118595763bba..80cde5683371 100644
--- a/drivers/net/ethernet/intel/ice/ice_sched.c
+++ b/drivers/net/ethernet/intel/ice/ice_sched.c
@@ -1,6 +1,7 @@
 // SPDX-License-Identifier: GPL-2.0
 /* Copyright (c) 2018, Intel Corporation. */
 
+#include <net/devlink.h>
 #include "ice_sched.h"
 
 /**
@@ -355,6 +356,9 @@ void ice_free_sched_node(struct ice_port_info *pi, struct ice_sched_node *node)
 	/* leaf nodes have no children */
 	if (node->children)
 		devm_kfree(ice_hw_to_dev(hw), node->children);
+
+	kfree(node->name);
+	xa_erase(&pi->sched_node_ids, node->id);
 	devm_kfree(ice_hw_to_dev(hw), node);
 }
 
@@ -875,7 +879,7 @@ void ice_sched_cleanup_all(struct ice_hw *hw)
  *
  * This function add nodes to HW as well as to SW DB for a given layer
  */
-static int
+int
 ice_sched_add_elems(struct ice_port_info *pi, struct ice_sched_node *tc_node,
 		    struct ice_sched_node *parent, u8 layer, u16 num_nodes,
 		    u16 *num_nodes_added, u32 *first_node_teid)
@@ -940,6 +944,22 @@ ice_sched_add_elems(struct ice_port_info *pi, struct ice_sched_node *tc_node,
 
 		new_node->sibling = NULL;
 		new_node->tc_num = tc_node->tc_num;
+		new_node->tx_weight = ICE_SCHED_DFLT_BW_WT;
+		new_node->tx_share = ICE_SCHED_DFLT_BW;
+		new_node->tx_max = ICE_SCHED_DFLT_BW;
+		new_node->name = kzalloc(SCHED_NODE_NAME_MAX_LEN, GFP_KERNEL);
+		if (!new_node->name)
+			return -ENOMEM;
+
+		status = xa_alloc(&pi->sched_node_ids, &new_node->id, NULL, XA_LIMIT(0, UINT_MAX),
+				  GFP_KERNEL);
+		if (status) {
+			ice_debug(hw, ICE_DBG_SCHED, "xa_alloc failed for sched node status =%d\n",
+				  status);
+			break;
+		}
+
+		snprintf(new_node->name, SCHED_NODE_NAME_MAX_LEN, "node_%u", new_node->id);
 
 		/* add it to previous node sibling pointer */
 		/* Note: siblings are not linked across branches */
@@ -2154,7 +2174,7 @@ ice_sched_get_free_vsi_parent(struct ice_hw *hw, struct ice_sched_node *node,
  * This function removes the child from the old parent and adds it to a new
  * parent
  */
-static void
+void
 ice_sched_update_parent(struct ice_sched_node *new_parent,
 			struct ice_sched_node *node)
 {
@@ -2188,7 +2208,7 @@ ice_sched_update_parent(struct ice_sched_node *new_parent,
  *
  * This function move the child nodes to a given parent.
  */
-static int
+int
 ice_sched_move_nodes(struct ice_port_info *pi, struct ice_sched_node *parent,
 		     u16 num_items, u32 *list)
 {
@@ -3560,7 +3580,7 @@ ice_sched_set_eir_srl_excl(struct ice_port_info *pi,
  * node's RL profile ID of type CIR, EIR, or SRL, and removes old profile
  * ID from local database. The caller needs to hold scheduler lock.
  */
-static int
+int
 ice_sched_set_node_bw(struct ice_port_info *pi, struct ice_sched_node *node,
 		      enum ice_rl_type rl_type, u32 bw, u8 layer_num)
 {
@@ -3596,6 +3616,55 @@ ice_sched_set_node_bw(struct ice_port_info *pi, struct ice_sched_node *node,
 				       ICE_AQC_RL_PROFILE_TYPE_M, old_id);
 }
 
+/**
+ * ice_sched_set_node_priority - set node's priority
+ * @pi: port information structure
+ * @node: tree node
+ * @priority: number 0-7 representing priority among siblings
+ *
+ * This function sets the priority of a node among its siblings.
+ */
+int
+ice_sched_set_node_priority(struct ice_port_info *pi, struct ice_sched_node *node,
+			    u16 priority)
+{
+	struct ice_aqc_txsched_elem_data buf;
+	struct ice_aqc_txsched_elem *data;
+
+	buf = node->info;
+	data = &buf.data;
+
+	data->valid_sections |= ICE_AQC_ELEM_VALID_GENERIC;
+	data->generic |= FIELD_PREP(ICE_AQC_ELEM_GENERIC_MODE_M, 0x1);
+	data->generic |= FIELD_PREP(ICE_AQC_ELEM_GENERIC_PRIO_M, priority);
+
+	return ice_sched_update_elem(pi->hw, node, &buf);
+}
+
+/**
+ * ice_sched_set_node_weight - set node's weight
+ * @pi: port information structure
+ * @node: tree node
+ * @weight: number 1-200 representing weight for WFQ
+ *
+ * This function sets the weight of the node for the WFQ algorithm.
+ */
+int
+ice_sched_set_node_weight(struct ice_port_info *pi, struct ice_sched_node *node, u16 weight)
+{
+	struct ice_aqc_txsched_elem_data buf;
+	struct ice_aqc_txsched_elem *data;
+
+	buf = node->info;
+	data = &buf.data;
+
+	data->valid_sections = ICE_AQC_ELEM_VALID_CIR | ICE_AQC_ELEM_VALID_EIR;
+	data->cir_bw.bw_alloc = cpu_to_le16(weight);
+	data->eir_bw.bw_alloc = cpu_to_le16(weight);
+
+	return ice_sched_update_elem(pi->hw, node, &buf);
+}
+
 /**
  * ice_sched_set_node_bw_lmt - set node's BW limit
  * @pi: port information structure
@@ -3606,7 +3675,7 @@ ice_sched_set_node_bw(struct ice_port_info *pi, struct ice_sched_node *node,
  * It updates node's BW limit parameters like BW RL profile ID of type CIR,
  * EIR, or SRL. The caller needs to hold scheduler lock.
  */
-static int
+int
 ice_sched_set_node_bw_lmt(struct ice_port_info *pi, struct ice_sched_node *node,
 			  enum ice_rl_type rl_type, u32 bw)
 {
diff --git a/drivers/net/ethernet/intel/ice/ice_sched.h b/drivers/net/ethernet/intel/ice/ice_sched.h
index 4f91577fed56..920db43ed4fa 100644
--- a/drivers/net/ethernet/intel/ice/ice_sched.h
+++ b/drivers/net/ethernet/intel/ice/ice_sched.h
@@ -6,6 +6,8 @@
 
 #include "ice_common.h"
 
+#define SCHED_NODE_NAME_MAX_LEN 32
+
 #define ICE_QGRP_LAYER_OFFSET	2
 #define ICE_VSI_LAYER_OFFSET	4
 #define ICE_AGG_LAYER_OFFSET	6
@@ -69,6 +71,28 @@ int
 ice_aq_query_sched_elems(struct ice_hw *hw, u16 elems_req,
 			 struct ice_aqc_txsched_elem_data *buf, u16 buf_size,
 			 u16 *elems_ret, struct ice_sq_cd *cd);
+
+int
+ice_sched_set_node_bw_lmt(struct ice_port_info *pi, struct ice_sched_node *node,
+			  enum ice_rl_type rl_type, u32 bw);
+
+int
+ice_sched_set_node_bw(struct ice_port_info *pi, struct ice_sched_node *node,
+		      enum ice_rl_type rl_type, u32 bw, u8 layer_num);
+
+int
+ice_sched_add_elems(struct ice_port_info *pi, struct ice_sched_node *tc_node,
+		    struct ice_sched_node *parent, u8 layer, u16 num_nodes,
+		    u16 *num_nodes_added, u32 *first_node_teid);
+
+int
+ice_sched_move_nodes(struct ice_port_info *pi, struct ice_sched_node *parent,
+		     u16 num_items, u32 *list);
+
+int ice_sched_set_node_priority(struct ice_port_info *pi, struct ice_sched_node *node,
+				u16 priority);
+int ice_sched_set_node_weight(struct ice_port_info *pi, struct ice_sched_node *node, u16 weight);
+
 int ice_sched_init_port(struct ice_port_info *pi);
 int ice_sched_query_res_alloc(struct ice_hw *hw);
 void ice_sched_get_psm_clk_freq(struct ice_hw *hw);
@@ -82,6 +106,9 @@ ice_sched_find_node_by_teid(struct ice_sched_node *start_node, u32 teid);
 int
 ice_sched_add_node(struct ice_port_info *pi, u8 layer,
 		   struct ice_aqc_txsched_elem_data *info);
+void
+ice_sched_update_parent(struct ice_sched_node *new_parent,
+			struct ice_sched_node *node);
 void ice_free_sched_node(struct ice_port_info *pi, struct ice_sched_node *node);
 struct ice_sched_node *ice_sched_get_tc_node(struct ice_port_info *pi, u8 tc);
 struct ice_sched_node *
diff --git a/drivers/net/ethernet/intel/ice/ice_type.h b/drivers/net/ethernet/intel/ice/ice_type.h
index e1abfcee96dc..c93f2449d3c3 100644
--- a/drivers/net/ethernet/intel/ice/ice_type.h
+++ b/drivers/net/ethernet/intel/ice/ice_type.h
@@ -524,8 +524,15 @@ struct ice_sched_node {
 	struct ice_sched_node *sibling; /* next sibling in the same layer */
 	struct ice_sched_node **children;
 	struct ice_aqc_txsched_elem_data info;
+	char *name;
+	struct devlink_rate *rate_node;
+	u64 tx_max;
+	u64 tx_share;
 	u32 agg_id;			/* aggregator group ID */
+	u32 id;
 	u16 vsi_handle;
+	u16 tx_priority;
+	u16 tx_weight;
 	u8 in_use;			/* suspended or in use */
 	u8 tx_sched_layer;		/* Logical Layer (1-9) */
 	u8 num_children;
@@ -706,6 +713,7 @@ struct ice_port_info {
 	/* List contain profile ID(s) and other params per layer */
 	struct list_head rl_prof_list[ICE_AQC_TOPO_MAX_LEVEL_NUM];
 	struct ice_qos_cfg qos_cfg;
+	struct xarray sched_node_ids;
 	u8 is_vf:1;
 };
 
-- 
2.37.2



* [PATCH net-next v10 07/10] ice: Add an option to pre-allocate memory for ice_sched_node
  2022-11-07 18:13 [PATCH net-next v10 00/10] Implement devlink-rate API and extend it Michal Wilczynski
                   ` (5 preceding siblings ...)
  2022-11-07 18:13 ` [PATCH net-next v10 06/10] ice: Introduce new parameters in ice_sched_node Michal Wilczynski
@ 2022-11-07 18:13 ` Michal Wilczynski
  2022-11-07 18:13 ` [PATCH net-next v10 08/10] ice: Implement devlink-rate API Michal Wilczynski
                   ` (3 subsequent siblings)
  10 siblings, 0 replies; 18+ messages in thread
From: Michal Wilczynski @ 2022-11-07 18:13 UTC (permalink / raw)
  To: netdev
  Cc: alexandr.lobakin, jacob.e.keller, jesse.brandeburg,
	przemyslaw.kitszel, anthony.l.nguyen, kuba, ecree.xilinx, jiri,
	Michal Wilczynski

The devlink-rate API requires a priv object to be allocated when the
node doesn't yet have a parent. This is problematic, because an
ice_sched_node currently can't be created without a parent.

Add an option to pre-allocate memory for the ice_sched_node struct. Add
new arguments to ice_sched_add() and ice_sched_add_elems() that allow
for pre-allocation of memory for the ice_sched_node struct.
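
A sketch of the intended usage (hypothetical caller): memory for the
node is allocated before its future parent exists and handed over once
the parent is known:

struct ice_sched_node *prealloc[1];
u16 num_nodes_added;
u32 first_node_teid;
int status;

prealloc[0] = devm_kzalloc(ice_hw_to_dev(hw), sizeof(*prealloc[0]),
			   GFP_KERNEL);
if (!prealloc[0])
	return -ENOMEM;

/* later, once the parent node exists */
status = ice_sched_add_elems(pi, tc_node, parent, layer, 1,
			     &num_nodes_added, &first_node_teid,
			     prealloc);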

Signed-off-by: Michal Wilczynski <michal.wilczynski@intel.com>
---
 drivers/net/ethernet/intel/ice/ice_common.c |  4 ++--
 drivers/net/ethernet/intel/ice/ice_dcb.c    |  2 +-
 drivers/net/ethernet/intel/ice/ice_sched.c  | 23 +++++++++++++++------
 drivers/net/ethernet/intel/ice/ice_sched.h  |  6 ++++--
 4 files changed, 24 insertions(+), 11 deletions(-)

diff --git a/drivers/net/ethernet/intel/ice/ice_common.c b/drivers/net/ethernet/intel/ice/ice_common.c
index e2e661010176..216370ec60d4 100644
--- a/drivers/net/ethernet/intel/ice/ice_common.c
+++ b/drivers/net/ethernet/intel/ice/ice_common.c
@@ -4603,7 +4603,7 @@ ice_ena_vsi_txq(struct ice_port_info *pi, u16 vsi_handle, u8 tc, u16 q_handle,
 	q_ctx->q_teid = le32_to_cpu(node.node_teid);
 
 	/* add a leaf node into scheduler tree queue layer */
-	status = ice_sched_add_node(pi, hw->num_tx_sched_layers - 1, &node);
+	status = ice_sched_add_node(pi, hw->num_tx_sched_layers - 1, &node, NULL);
 	if (!status)
 		status = ice_sched_replay_q_bw(pi, q_ctx);
 
@@ -4838,7 +4838,7 @@ ice_ena_vsi_rdma_qset(struct ice_port_info *pi, u16 vsi_handle, u8 tc,
 	for (i = 0; i < num_qsets; i++) {
 		node.node_teid = buf->rdma_qsets[i].qset_teid;
 		ret = ice_sched_add_node(pi, hw->num_tx_sched_layers - 1,
-					 &node);
+					 &node, NULL);
 		if (ret)
 			break;
 		qset_teid[i] = le32_to_cpu(node.node_teid);
diff --git a/drivers/net/ethernet/intel/ice/ice_dcb.c b/drivers/net/ethernet/intel/ice/ice_dcb.c
index 0b146a0d4205..6be02f9b0b8c 100644
--- a/drivers/net/ethernet/intel/ice/ice_dcb.c
+++ b/drivers/net/ethernet/intel/ice/ice_dcb.c
@@ -1580,7 +1580,7 @@ ice_update_port_tc_tree_cfg(struct ice_port_info *pi,
 		/* new TC */
 		status = ice_sched_query_elem(pi->hw, teid2, &elem);
 		if (!status)
-			status = ice_sched_add_node(pi, 1, &elem);
+			status = ice_sched_add_node(pi, 1, &elem, NULL);
 		if (status)
 			break;
 		/* update the TC number */
diff --git a/drivers/net/ethernet/intel/ice/ice_sched.c b/drivers/net/ethernet/intel/ice/ice_sched.c
index 80cde5683371..2a55fffb943c 100644
--- a/drivers/net/ethernet/intel/ice/ice_sched.c
+++ b/drivers/net/ethernet/intel/ice/ice_sched.c
@@ -143,12 +143,14 @@ ice_aq_query_sched_elems(struct ice_hw *hw, u16 elems_req,
  * @pi: port information structure
  * @layer: Scheduler layer of the node
  * @info: Scheduler element information from firmware
+ * @prealloc_node: preallocated ice_sched_node struct for SW DB
  *
  * This function inserts a scheduler node to the SW DB.
  */
 int
 ice_sched_add_node(struct ice_port_info *pi, u8 layer,
-		   struct ice_aqc_txsched_elem_data *info)
+		   struct ice_aqc_txsched_elem_data *info,
+		   struct ice_sched_node *prealloc_node)
 {
 	struct ice_aqc_txsched_elem_data elem;
 	struct ice_sched_node *parent;
@@ -177,7 +179,10 @@ ice_sched_add_node(struct ice_port_info *pi, u8 layer,
 	if (status)
 		return status;
 
-	node = devm_kzalloc(ice_hw_to_dev(hw), sizeof(*node), GFP_KERNEL);
+	if (prealloc_node)
+		node = prealloc_node;
+	else
+		node = devm_kzalloc(ice_hw_to_dev(hw), sizeof(*node), GFP_KERNEL);
 	if (!node)
 		return -ENOMEM;
 	if (hw->max_children[layer]) {
@@ -876,13 +881,15 @@ void ice_sched_cleanup_all(struct ice_hw *hw)
  * @num_nodes: number of nodes
  * @num_nodes_added: pointer to num nodes added
  * @first_node_teid: if new nodes are added then return the TEID of first node
+ * @prealloc_nodes: preallocated nodes struct for software DB
  *
  * This function add nodes to HW as well as to SW DB for a given layer
  */
 int
 ice_sched_add_elems(struct ice_port_info *pi, struct ice_sched_node *tc_node,
 		    struct ice_sched_node *parent, u8 layer, u16 num_nodes,
-		    u16 *num_nodes_added, u32 *first_node_teid)
+		    u16 *num_nodes_added, u32 *first_node_teid,
+		    struct ice_sched_node **prealloc_nodes)
 {
 	struct ice_sched_node *prev, *new_node;
 	struct ice_aqc_add_elem *buf;
@@ -928,7 +935,11 @@ ice_sched_add_elems(struct ice_port_info *pi, struct ice_sched_node *tc_node,
 	*num_nodes_added = num_nodes;
 	/* add nodes to the SW DB */
 	for (i = 0; i < num_nodes; i++) {
-		status = ice_sched_add_node(pi, layer, &buf->generic[i]);
+		if (prealloc_nodes)
+			status = ice_sched_add_node(pi, layer, &buf->generic[i], prealloc_nodes[i]);
+		else
+			status = ice_sched_add_node(pi, layer, &buf->generic[i], NULL);
+
 		if (status) {
 			ice_debug(hw, ICE_DBG_SCHED, "add nodes in SW DB failed status =%d\n",
 				  status);
@@ -1023,7 +1034,7 @@ ice_sched_add_nodes_to_hw_layer(struct ice_port_info *pi,
 	}
 
 	return ice_sched_add_elems(pi, tc_node, parent, layer, num_nodes,
-				   num_nodes_added, first_node_teid);
+				   num_nodes_added, first_node_teid, NULL);
 }
 
 /**
@@ -1288,7 +1299,7 @@ int ice_sched_init_port(struct ice_port_info *pi)
 			    ICE_AQC_ELEM_TYPE_ENTRY_POINT)
 				hw->sw_entry_point_layer = j;
 
-			status = ice_sched_add_node(pi, j, &buf[i].generic[j]);
+			status = ice_sched_add_node(pi, j, &buf[i].generic[j], NULL);
 			if (status)
 				goto err_init_port;
 		}
diff --git a/drivers/net/ethernet/intel/ice/ice_sched.h b/drivers/net/ethernet/intel/ice/ice_sched.h
index 920db43ed4fa..9c100747445a 100644
--- a/drivers/net/ethernet/intel/ice/ice_sched.h
+++ b/drivers/net/ethernet/intel/ice/ice_sched.h
@@ -83,7 +83,8 @@ ice_sched_set_node_bw(struct ice_port_info *pi, struct ice_sched_node *node,
 int
 ice_sched_add_elems(struct ice_port_info *pi, struct ice_sched_node *tc_node,
 		    struct ice_sched_node *parent, u8 layer, u16 num_nodes,
-		    u16 *num_nodes_added, u32 *first_node_teid);
+		    u16 *num_nodes_added, u32 *first_node_teid,
+		    struct ice_sched_node **prealloc_node);
 
 int
 ice_sched_move_nodes(struct ice_port_info *pi, struct ice_sched_node *parent,
@@ -105,7 +106,8 @@ struct ice_sched_node *
 ice_sched_find_node_by_teid(struct ice_sched_node *start_node, u32 teid);
 int
 ice_sched_add_node(struct ice_port_info *pi, u8 layer,
-		   struct ice_aqc_txsched_elem_data *info);
+		   struct ice_aqc_txsched_elem_data *info,
+		   struct ice_sched_node *prealloc_node);
 void
 ice_sched_update_parent(struct ice_sched_node *new_parent,
 			struct ice_sched_node *node);
-- 
2.37.2



* [PATCH net-next v10 08/10] ice: Implement devlink-rate API
  2022-11-07 18:13 [PATCH net-next v10 00/10] Implement devlink-rate API and extend it Michal Wilczynski
                   ` (6 preceding siblings ...)
  2022-11-07 18:13 ` [PATCH net-next v10 07/10] ice: Add an option to pre-allocate memory for ice_sched_node Michal Wilczynski
@ 2022-11-07 18:13 ` Michal Wilczynski
  2022-11-07 18:13 ` [PATCH net-next v10 09/10] ice: Prevent ADQ, DCB coexistence with Custom Tx scheduler Michal Wilczynski
                   ` (2 subsequent siblings)
  10 siblings, 0 replies; 18+ messages in thread
From: Michal Wilczynski @ 2022-11-07 18:13 UTC (permalink / raw)
  To: netdev
  Cc: alexandr.lobakin, jacob.e.keller, jesse.brandeburg,
	przemyslaw.kitszel, anthony.l.nguyen, kuba, ecree.xilinx, jiri,
	Michal Wilczynski

There is a need to support modification of the Tx scheduler tree in the
ice driver. This will allow the user to control the Tx settings of each
node in the internal hierarchy of nodes. As a result, the user will be
able to use hierarchical QoS implemented entirely in the hardware.

This patch implements the devlink-rate API. It also exports the initial
default hierarchy. This is mostly dictated by the fact that the tree
can't be removed entirely; all we can do is enable the user to modify
it. For example, the root node should never be removed, and nodes that
have children are off-limits.

Example initial tree with 2 VFs:

[root@fedora ~]# devlink port function rate show

pci/0000:4b:00.0/node_27: type node parent node_26
pci/0000:4b:00.0/node_26: type node parent node_0
pci/0000:4b:00.0/node_34: type node parent node_33
pci/0000:4b:00.0/node_33: type node parent node_32
pci/0000:4b:00.0/node_32: type node parent node_16
pci/0000:4b:00.0/node_19: type node parent node_18
pci/0000:4b:00.0/node_18: type node parent node_17
pci/0000:4b:00.0/node_17: type node parent node_16
pci/0000:4b:00.0/node_21: type node parent node_20
pci/0000:4b:00.0/node_20: type node parent node_3
pci/0000:4b:00.0/node_14: type node parent node_5
pci/0000:4b:00.0/node_5: type node parent node_3
pci/0000:4b:00.0/node_13: type node parent node_4
pci/0000:4b:00.0/node_12: type node parent node_4
pci/0000:4b:00.0/node_11: type node parent node_4
pci/0000:4b:00.0/node_10: type node parent node_4
pci/0000:4b:00.0/node_9: type node parent node_4
pci/0000:4b:00.0/node_8: type node parent node_4
pci/0000:4b:00.0/node_7: type node parent node_4
pci/0000:4b:00.0/node_6: type node parent node_4
pci/0000:4b:00.0/node_4: type node parent node_3
pci/0000:4b:00.0/node_3: type node parent node_16
pci/0000:4b:00.0/node_16: type node parent node_15
pci/0000:4b:00.0/node_15: type node parent node_0
pci/0000:4b:00.0/node_2: type node parent node_1
pci/0000:4b:00.0/node_1: type node parent node_0
pci/0000:4b:00.0/node_0: type node
pci/0000:4b:00.0/1: type leaf parent node_27
pci/0000:4b:00.0/2: type leaf parent node_27

Let me visualize part of the tree:

                    +---------+
                    |  node_0 |
                    +---------+
                         |
                    +----v----+
                    | node_26 |
                    +----+----+
                         |
                    +----v----+
                    | node_27 |
                    +----+----+
                         |
                |-----------------|
           +----v----+       +----v----+
           |   VF 1  |       |   VF 2  |
           +----+----+       +----+----+

At this point there are a couple of things that can be done.
For example, we could simply assign parameters to the VFs.

[root@fedora ~]# devlink port function rate set pci/0000:4b:00.0/1 \
                 tx_max 5Gbps

This would cap VF 1's bandwidth at 5 Gbps.

But let's say you would like to create a completely new branch.
This can be done like this:

[root@fedora ~]# devlink port function rate add \
                 pci/0000:4b:00.0/node_custom parent node_0
[root@fedora ~]# devlink port function rate add \
                 pci/0000:4b:00.0/node_custom_1 parent node_custom
[root@fedora ~]# devlink port function rate set \
                 pci/0000:4b:00.0/1 parent node_custom_1

This creates a completely new branch and reassigns VF 1 to it.

A number of parameters are supported for each node: tx_max, tx_share,
tx_priority, and tx_weight.
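
For example, after building the custom branch above, both arbitration
schemes could be combined on one node (the values below are purely
illustrative and assume an iproute2 build that understands the new
attributes):

[root@fedora ~]# devlink port function rate set \
                 pci/0000:4b:00.0/node_custom_1 tx_priority 5 tx_weight 51

tx_priority selects the strict priority (0-7) among siblings, while
tx_weight (1-200) sets the relative share used by the Weighted Fair
Queuing arbiter.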

Signed-off-by: Michal Wilczynski <michal.wilczynski@intel.com>
---
 drivers/net/ethernet/intel/ice/ice_devlink.c | 421 +++++++++++++++++++
 drivers/net/ethernet/intel/ice/ice_devlink.h |   2 +
 drivers/net/ethernet/intel/ice/ice_repr.c    |  13 +
 3 files changed, 436 insertions(+)

diff --git a/drivers/net/ethernet/intel/ice/ice_devlink.c b/drivers/net/ethernet/intel/ice/ice_devlink.c
index 455489e9457d..00ed3883c492 100644
--- a/drivers/net/ethernet/intel/ice/ice_devlink.c
+++ b/drivers/net/ethernet/intel/ice/ice_devlink.c
@@ -713,6 +713,410 @@ ice_devlink_port_unsplit(struct devlink *devlink, struct devlink_port *port,
 	return ice_devlink_port_split(devlink, port, 1, extack);
 }
 
+/**
+ * ice_traverse_tx_tree - traverse Tx scheduler tree
+ * @devlink: devlink struct
+ * @node: current node, used for recursion
+ * @tc_node: tc_node struct, that is treated as a root
+ * @pf: pf struct
+ *
+ * This function traverses the Tx scheduler tree and exports the
+ * entire structure to devlink-rate.
+ */
+static void ice_traverse_tx_tree(struct devlink *devlink, struct ice_sched_node *node,
+				 struct ice_sched_node *tc_node, struct ice_pf *pf)
+{
+	struct devlink_rate *rate_node = NULL;
+	struct ice_vf *vf;
+	int i;
+
+	devl_lock(devlink);
+
+	if (node->parent == tc_node) {
+		/* create root node */
+		rate_node = devl_rate_node_create(devlink, node, node->name, NULL);
+	} else if (node->vsi_handle &&
+		   pf->vsi[node->vsi_handle]->vf) {
+		vf = pf->vsi[node->vsi_handle]->vf;
+		if (!vf->devlink_port.devlink_rate)
+			/* leaf nodes don't have children
+			 * so we don't set rate_node
+			 */
+			devl_rate_leaf_create(&vf->devlink_port, node,
+					      node->parent->rate_node);
+	} else if (node->info.data.elem_type != ICE_AQC_ELEM_TYPE_LEAF &&
+		   node->parent->rate_node) {
+		rate_node = devl_rate_node_create(devlink, node, node->name,
+						  node->parent->rate_node);
+	}
+
+	if (rate_node && !IS_ERR(rate_node))
+		node->rate_node = rate_node;
+
+	devl_unlock(devlink);
+
+	for (i = 0; i < node->num_children; i++)
+		ice_traverse_tx_tree(devlink, node->children[i], tc_node, pf);
+}
+
+/**
+ * ice_devlink_rate_init_tx_topology - export Tx scheduler tree to devlink rate
+ * @devlink: devlink struct
+ * @vsi: main vsi struct
+ *
+ * This function finds the root node, then calls ice_traverse_tx_tree(), which
+ * traverses the tree and exports its contents to devlink-rate.
+ */
+int ice_devlink_rate_init_tx_topology(struct devlink *devlink, struct ice_vsi *vsi)
+{
+	struct ice_port_info *pi = vsi->port_info;
+	struct ice_sched_node *tc_node;
+	struct ice_pf *pf = vsi->back;
+	int i;
+
+	tc_node = pi->root->children[0];
+	mutex_lock(&pi->sched_lock);
+	for (i = 0; i < tc_node->num_children; i++)
+		ice_traverse_tx_tree(devlink, tc_node->children[i], tc_node, pf);
+	mutex_unlock(&pi->sched_lock);
+
+	return 0;
+}
+
+/**
+ * ice_set_object_tx_share - sets node scheduling parameter
+ * @pi: port info struct instance
+ * @node: node struct instance
+ * @extack: extended netdev ack structure
+ *
+ * This function sets ICE_MIN_BW scheduling BW limit.
+ */
+static int ice_set_object_tx_share(struct ice_port_info *pi, struct ice_sched_node *node,
+				   struct netlink_ext_ack *extack)
+{
+	int status;
+
+	mutex_lock(&pi->sched_lock);
+	status = ice_sched_set_node_bw_lmt(pi, node, ICE_MIN_BW, node->tx_share);
+	mutex_unlock(&pi->sched_lock);
+
+	if (status)
+		NL_SET_ERR_MSG_MOD(extack, "Can't set scheduling node tx_share");
+
+	return status;
+}
+
+/**
+ * ice_set_object_tx_max - sets node scheduling parameter
+ * @pi: port info struct instance
+ * @node: node struct instance
+ * @extack: extended netdev ack structure
+ *
+ * This function sets ICE_MAX_BW scheduling BW limit.
+ */
+static int ice_set_object_tx_max(struct ice_port_info *pi, struct ice_sched_node *node,
+				 struct netlink_ext_ack *extack)
+{
+	int status;
+
+	mutex_lock(&pi->sched_lock);
+	status = ice_sched_set_node_bw_lmt(pi, node, ICE_MAX_BW, node->tx_max);
+	mutex_unlock(&pi->sched_lock);
+
+	if (status)
+		NL_SET_ERR_MSG_MOD(extack, "Can't set scheduling node tx_max");
+
+	return status;
+}
+
+/**
+ * ice_set_object_tx_priority - sets node scheduling parameter
+ * @pi: port info struct instance
+ * @node: node struct instance
+ * @extack: extended netdev ack structure
+ *
+ * This function sets priority of node among siblings.
+ */
+static int ice_set_object_tx_priority(struct ice_port_info *pi, struct ice_sched_node *node,
+				      struct netlink_ext_ack *extack)
+{
+	int status;
+
+	if (node->tx_priority >= 8) {
+		NL_SET_ERR_MSG_MOD(extack, "Priority should be less than 8");
+		return -EINVAL;
+	}
+
+	mutex_lock(&pi->sched_lock);
+	status = ice_sched_set_node_priority(pi, node, node->tx_priority);
+	mutex_unlock(&pi->sched_lock);
+
+	if (status)
+		NL_SET_ERR_MSG_MOD(extack, "Can't set scheduling node tx_priority");
+
+	return status;
+}
+
+/**
+ * ice_set_object_tx_weight - sets node scheduling parameter
+ * @pi: port info struct instance
+ * @node: node struct instance
+ * @extack: extended netdev ack structure
+ *
+ * This function sets node weight for WFQ algorithm.
+ */
+static int ice_set_object_tx_weight(struct ice_port_info *pi, struct ice_sched_node *node,
+				    struct netlink_ext_ack *extack)
+{
+	int status;
+
+	if (node->tx_weight > 200 || node->tx_weight < 1) {
+		NL_SET_ERR_MSG_MOD(extack, "Weight must be between 1 and 200");
+		return -EINVAL;
+	}
+
+	mutex_lock(&pi->sched_lock);
+	status = ice_sched_set_node_weight(pi, node, node->tx_weight);
+	mutex_unlock(&pi->sched_lock);
+
+	if (status)
+		NL_SET_ERR_MSG_MOD(extack, "Can't set scheduling node tx_weight");
+
+	return status;
+}
+
+/**
+ * ice_get_pi_from_dev_rate - get port info from devlink_rate
+ * @rate_node: devlink rate struct instance
+ *
+ * This function returns the corresponding port_info struct of a devlink_rate
+ */
+static struct ice_port_info *ice_get_pi_from_dev_rate(struct devlink_rate *rate_node)
+{
+	struct ice_pf *pf = devlink_priv(rate_node->devlink);
+
+	return ice_get_main_vsi(pf)->port_info;
+}
+
+static int ice_devlink_rate_node_new(struct devlink_rate *rate_node, void **priv,
+				     struct netlink_ext_ack *extack)
+{
+	struct ice_sched_node *node;
+	struct ice_port_info *pi;
+
+	pi = ice_get_pi_from_dev_rate(rate_node);
+
+	/* preallocate memory for ice_sched_node */
+	node = devm_kzalloc(ice_hw_to_dev(pi->hw), sizeof(*node), GFP_KERNEL);
+	*priv = node;
+
+	return 0;
+}
+
+static int ice_devlink_rate_node_del(struct devlink_rate *rate_node, void *priv,
+				     struct netlink_ext_ack *extack)
+{
+	struct ice_sched_node *node, *tc_node;
+	struct ice_port_info *pi;
+
+	pi = ice_get_pi_from_dev_rate(rate_node);
+	tc_node = pi->root->children[0];
+	node = priv;
+
+	if (!rate_node->parent || !node || tc_node == node || !extack)
+		return 0;
+
+	/* can't allow deleting a node with children */
+	if (node->num_children)
+		return -EINVAL;
+
+	mutex_lock(&pi->sched_lock);
+	ice_free_sched_node(pi, node);
+	mutex_unlock(&pi->sched_lock);
+
+	return 0;
+}
+
+static int ice_devlink_rate_leaf_tx_max_set(struct devlink_rate *rate_leaf, void *priv,
+					    u64 tx_max, struct netlink_ext_ack *extack)
+{
+	struct ice_sched_node *node = priv;
+
+	if (!node)
+		return 0;
+
+	node->tx_max = div_u64(tx_max, 10);
+
+	return ice_set_object_tx_max(ice_get_pi_from_dev_rate(rate_leaf), node, extack);
+}
+
+static int ice_devlink_rate_leaf_tx_share_set(struct devlink_rate *rate_leaf, void *priv,
+					      u64 tx_share, struct netlink_ext_ack *extack)
+{
+	struct ice_sched_node *node = priv;
+
+	if (!node)
+		return 0;
+
+	node->tx_share = div_u64(tx_share, 10);
+
+	return ice_set_object_tx_share(ice_get_pi_from_dev_rate(rate_leaf), node, extack);
+}
+
+static int ice_devlink_rate_leaf_tx_priority_set(struct devlink_rate *rate_leaf, void *priv,
+						 u32 tx_priority, struct netlink_ext_ack *extack)
+{
+	struct ice_sched_node *node = priv;
+
+	if (!node)
+		return 0;
+
+	node->tx_priority = tx_priority;
+
+	return ice_set_object_tx_priority(ice_get_pi_from_dev_rate(rate_leaf), node, extack);
+}
+
+static int ice_devlink_rate_leaf_tx_weight_set(struct devlink_rate *rate_leaf, void *priv,
+					       u32 tx_weight, struct netlink_ext_ack *extack)
+{
+	struct ice_sched_node *node = priv;
+
+	if (!node)
+		return 0;
+
+	node->tx_weight = tx_weight;
+
+	return ice_set_object_tx_weight(ice_get_pi_from_dev_rate(rate_leaf), node, extack);
+}
+
+static int ice_devlink_rate_node_tx_max_set(struct devlink_rate *rate_node, void *priv,
+					    u64 tx_max, struct netlink_ext_ack *extack)
+{
+	struct ice_sched_node *node = priv;
+
+	if (!node)
+		return 0;
+
+	node->tx_max = div_u64(tx_max, 10);
+
+	return ice_set_object_tx_max(ice_get_pi_from_dev_rate(rate_node), node, extack);
+}
+
+static int ice_devlink_rate_node_tx_share_set(struct devlink_rate *rate_node, void *priv,
+					      u64 tx_share, struct netlink_ext_ack *extack)
+{
+	struct ice_sched_node *node = priv;
+
+	if (!node)
+		return 0;
+
+	node->tx_share = div_u64(tx_share, 10);
+
+	return ice_set_object_tx_share(ice_get_pi_from_dev_rate(rate_node), node, extack);
+}
+
+static int ice_devlink_rate_node_tx_priority_set(struct devlink_rate *rate_node, void *priv,
+						 u32 tx_priority, struct netlink_ext_ack *extack)
+{
+	struct ice_sched_node *node = priv;
+
+	if (!node)
+		return 0;
+
+	node->tx_priority = tx_priority;
+
+	return ice_set_object_tx_priority(ice_get_pi_from_dev_rate(rate_node), node, extack);
+}
+
+static int ice_devlink_rate_node_tx_weight_set(struct devlink_rate *rate_node, void *priv,
+					       u32 tx_weight, struct netlink_ext_ack *extack)
+{
+	struct ice_sched_node *node = priv;
+
+	if (!node)
+		return 0;
+
+	node->tx_weight = tx_weight;
+
+	return ice_set_object_tx_weight(ice_get_pi_from_dev_rate(rate_node), node, extack);
+}
+
+static int ice_devlink_set_parent(struct devlink_rate *devlink_rate,
+				  struct devlink_rate *parent,
+				  void *priv, void *parent_priv,
+				  struct netlink_ext_ack *extack)
+{
+	struct ice_port_info *pi = ice_get_pi_from_dev_rate(devlink_rate);
+	struct ice_sched_node *tc_node, *node, *parent_node;
+	u16 num_nodes_added;
+	u32 first_node_teid;
+	u32 node_teid;
+	int status;
+
+	tc_node = pi->root->children[0];
+	node = priv;
+
+	if (!extack)
+		return 0;
+
+	if (!parent) {
+		if (!node || tc_node == node || node->num_children)
+			return -EINVAL;
+
+		mutex_lock(&pi->sched_lock);
+		ice_free_sched_node(pi, node);
+		mutex_unlock(&pi->sched_lock);
+
+		return 0;
+	}
+
+	parent_node = parent_priv;
+
+	/* if the node doesn't exist, create it */
+	if (!node->parent) {
+		mutex_lock(&pi->sched_lock);
+
+		status = ice_sched_add_elems(pi, tc_node, parent_node,
+					     parent_node->tx_sched_layer + 1,
+					     1, &num_nodes_added, &first_node_teid,
+					     &node);
+
+		mutex_unlock(&pi->sched_lock);
+
+		if (status) {
+			NL_SET_ERR_MSG_MOD(extack, "Can't add a new node");
+			return status;
+		}
+
+		if (devlink_rate->tx_share) {
+			node->tx_share = devlink_rate->tx_share;
+			ice_set_object_tx_share(pi, node, extack);
+		}
+		if (devlink_rate->tx_max) {
+			node->tx_max = devlink_rate->tx_max;
+			ice_set_object_tx_max(pi, node, extack);
+		}
+		if (devlink_rate->tx_priority) {
+			node->tx_priority = devlink_rate->tx_priority;
+			ice_set_object_tx_priority(pi, node, extack);
+		}
+		if (devlink_rate->tx_weight) {
+			node->tx_weight = devlink_rate->tx_weight;
+			ice_set_object_tx_weight(pi, node, extack);
+		}
+	} else {
+		node_teid = le32_to_cpu(node->info.node_teid);
+		mutex_lock(&pi->sched_lock);
+		status = ice_sched_move_nodes(pi, parent_node, 1, &node_teid);
+		mutex_unlock(&pi->sched_lock);
+
+		if (status)
+			NL_SET_ERR_MSG_MOD(extack, "Can't move existing node to a new parent");
+	}
+
+	return status;
+}
+
 static const struct devlink_ops ice_devlink_ops = {
 	.supported_flash_update_params = DEVLINK_SUPPORT_FLASH_UPDATE_OVERWRITE_MASK,
 	.reload_actions = BIT(DEVLINK_RELOAD_ACTION_FW_ACTIVATE),
@@ -725,6 +1129,22 @@ static const struct devlink_ops ice_devlink_ops = {
 	.eswitch_mode_set = ice_eswitch_mode_set,
 	.info_get = ice_devlink_info_get,
 	.flash_update = ice_devlink_flash_update,
+
+	.rate_node_new = ice_devlink_rate_node_new,
+	.rate_node_del = ice_devlink_rate_node_del,
+
+	.rate_leaf_tx_max_set = ice_devlink_rate_leaf_tx_max_set,
+	.rate_leaf_tx_share_set = ice_devlink_rate_leaf_tx_share_set,
+	.rate_leaf_tx_priority_set = ice_devlink_rate_leaf_tx_priority_set,
+	.rate_leaf_tx_weight_set = ice_devlink_rate_leaf_tx_weight_set,
+
+	.rate_node_tx_max_set = ice_devlink_rate_node_tx_max_set,
+	.rate_node_tx_share_set = ice_devlink_rate_node_tx_share_set,
+	.rate_node_tx_priority_set = ice_devlink_rate_node_tx_priority_set,
+	.rate_node_tx_weight_set = ice_devlink_rate_node_tx_weight_set,
+
+	.rate_leaf_parent_set = ice_devlink_set_parent,
+	.rate_node_parent_set = ice_devlink_set_parent,
 };
 
 static int
@@ -1089,6 +1509,7 @@ int ice_devlink_create_vf_port(struct ice_vf *vf)
  */
 void ice_devlink_destroy_vf_port(struct ice_vf *vf)
 {
+	devl_rate_leaf_destroy(&vf->devlink_port);
 	devlink_port_unregister(&vf->devlink_port);
 }
 
diff --git a/drivers/net/ethernet/intel/ice/ice_devlink.h b/drivers/net/ethernet/intel/ice/ice_devlink.h
index fe006d9946f8..8bfed9ee2c4c 100644
--- a/drivers/net/ethernet/intel/ice/ice_devlink.h
+++ b/drivers/net/ethernet/intel/ice/ice_devlink.h
@@ -18,4 +18,6 @@ void ice_devlink_destroy_vf_port(struct ice_vf *vf);
 void ice_devlink_init_regions(struct ice_pf *pf);
 void ice_devlink_destroy_regions(struct ice_pf *pf);
 
+int ice_devlink_rate_init_tx_topology(struct devlink *devlink, struct ice_vsi *vsi);
+
 #endif /* _ICE_DEVLINK_H_ */
diff --git a/drivers/net/ethernet/intel/ice/ice_repr.c b/drivers/net/ethernet/intel/ice/ice_repr.c
index 0483eb14c288..46f58d48318c 100644
--- a/drivers/net/ethernet/intel/ice/ice_repr.c
+++ b/drivers/net/ethernet/intel/ice/ice_repr.c
@@ -389,6 +389,7 @@ static void ice_repr_rem(struct ice_vf *vf)
  */
 void ice_repr_rem_from_all_vfs(struct ice_pf *pf)
 {
+	struct devlink *devlink;
 	struct ice_vf *vf;
 	unsigned int bkt;
 
@@ -396,6 +397,14 @@ void ice_repr_rem_from_all_vfs(struct ice_pf *pf)
 
 	ice_for_each_vf(pf, bkt, vf)
 		ice_repr_rem(vf);
+
+	/* since all port representors are destroyed, there is
+	 * no point in keeping the nodes
+	 */
+	devlink = priv_to_devlink(pf);
+	devl_lock(devlink);
+	devl_rate_nodes_destroy(devlink);
+	devl_unlock(devlink);
 }
 
 /**
@@ -404,6 +413,7 @@ void ice_repr_rem_from_all_vfs(struct ice_pf *pf)
  */
 int ice_repr_add_for_all_vfs(struct ice_pf *pf)
 {
+	struct devlink *devlink;
 	struct ice_vf *vf;
 	unsigned int bkt;
 	int err;
@@ -416,6 +426,9 @@ int ice_repr_add_for_all_vfs(struct ice_pf *pf)
 			goto err;
 	}
 
+	devlink = priv_to_devlink(pf);
+	ice_devlink_rate_init_tx_topology(devlink, ice_get_main_vsi(pf));
+
 	return 0;
 
 err:
-- 
2.37.2


^ permalink raw reply related	[flat|nested] 18+ messages in thread

* [PATCH net-next v10 09/10] ice: Prevent ADQ, DCB coexistence with Custom Tx scheduler
  2022-11-07 18:13 [PATCH net-next v10 00/10] Implement devlink-rate API and extend it Michal Wilczynski
                   ` (7 preceding siblings ...)
  2022-11-07 18:13 ` [PATCH net-next v10 08/10] ice: Implement devlink-rate API Michal Wilczynski
@ 2022-11-07 18:13 ` Michal Wilczynski
  2022-11-07 18:13 ` [PATCH net-next v10 10/10] ice: add documentation for devlink-rate implementation Michal Wilczynski
  2022-11-07 18:13 ` [PATCH net-next v10 10/10] ice: Add " Michal Wilczynski
  10 siblings, 0 replies; 18+ messages in thread
From: Michal Wilczynski @ 2022-11-07 18:13 UTC (permalink / raw)
  To: netdev
  Cc: alexandr.lobakin, jacob.e.keller, jesse.brandeburg,
	przemyslaw.kitszel, anthony.l.nguyen, kuba, ecree.xilinx, jiri,
	Michal Wilczynski

ADQ and DCB might interfere with custom Tx scheduler changes that the
user might introduce using the devlink-rate API.

Check if ADQ or DCB is active when the user tries to change any setting
in the exported Tx scheduler tree. If either of them is active, block
the user from doing so and log an appropriate message.
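
For illustration, an attempt to change the tree while DCB is enabled
would fail along these lines (a hypothetical session; the log message
comes from this patch):

[root@fedora ~]# devlink port function rate set pci/0000:4b:00.0/1 \
                 tx_max 5Gbps
kernel answers: Device or resource busy
[root@fedora ~]# dmesg | tail -1
ice 0000:4b:00.0: DCB active, can't modify Tx scheduler tree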

Signed-off-by: Michal Wilczynski <michal.wilczynski@intel.com>
---
 drivers/net/ethernet/intel/ice/ice_dcb_lib.c |  4 ++
 drivers/net/ethernet/intel/ice/ice_devlink.c | 65 ++++++++++++++++++++
 drivers/net/ethernet/intel/ice/ice_type.h    |  1 +
 3 files changed, 70 insertions(+)

diff --git a/drivers/net/ethernet/intel/ice/ice_dcb_lib.c b/drivers/net/ethernet/intel/ice/ice_dcb_lib.c
index add90e75f05c..8d7fc76f49af 100644
--- a/drivers/net/ethernet/intel/ice/ice_dcb_lib.c
+++ b/drivers/net/ethernet/intel/ice/ice_dcb_lib.c
@@ -364,6 +364,10 @@ int ice_pf_dcb_cfg(struct ice_pf *pf, struct ice_dcbx_cfg *new_cfg, bool locked)
 	/* Enable DCB tagging only when more than one TC */
 	if (ice_dcb_get_num_tc(new_cfg) > 1) {
 		dev_dbg(dev, "DCB tagging enabled (num TC > 1)\n");
+		if (pf->hw.port_info->is_custom_tx_enabled) {
+			dev_err(dev, "Custom Tx scheduler feature enabled, can't configure DCB\n");
+			return -EBUSY;
+		}
 		set_bit(ICE_FLAG_DCB_ENA, pf->flags);
 	} else {
 		dev_dbg(dev, "DCB tagging disabled (num TC = 1)\n");
diff --git a/drivers/net/ethernet/intel/ice/ice_devlink.c b/drivers/net/ethernet/intel/ice/ice_devlink.c
index 00ed3883c492..6fe2dd95578f 100644
--- a/drivers/net/ethernet/intel/ice/ice_devlink.c
+++ b/drivers/net/ethernet/intel/ice/ice_devlink.c
@@ -8,6 +8,7 @@
 #include "ice_devlink.h"
 #include "ice_eswitch.h"
 #include "ice_fw_update.h"
+#include "ice_dcb_lib.h"
 
 static int ice_active_port_option = -1;
 
@@ -713,6 +714,37 @@ ice_devlink_port_unsplit(struct devlink *devlink, struct devlink_port *port,
 	return ice_devlink_port_split(devlink, port, 1, extack);
 }
 
+/**
+ * ice_enable_custom_tx - try to enable custom Tx feature
+ * @pf: pf struct
+ *
+ * This function tries to enable the custom Tx feature;
+ * it's not possible to enable it if DCB or ADQ is active.
+ */
+static bool ice_enable_custom_tx(struct ice_pf *pf)
+{
+	struct ice_port_info *pi = ice_get_main_vsi(pf)->port_info;
+	struct device *dev = ice_pf_to_dev(pf);
+
+	if (pi->is_custom_tx_enabled)
+		/* already enabled, return true */
+		return true;
+
+	if (ice_is_adq_active(pf)) {
+		dev_err(dev, "ADQ active, can't modify Tx scheduler tree\n");
+		return false;
+	}
+
+	if (ice_is_dcb_active(pf)) {
+		dev_err(dev, "DCB active, can't modify Tx scheduler tree\n");
+		return false;
+	}
+
+	pi->is_custom_tx_enabled = true;
+
+	return true;
+}
+
 /**
  * ice_traverse_tx_tree - traverse Tx scheduler tree
  * @devlink: devlink struct
@@ -906,6 +938,9 @@ static int ice_devlink_rate_node_new(struct devlink_rate *rate_node, void **priv
 
 	pi = ice_get_pi_from_dev_rate(rate_node);
 
+	if (!ice_enable_custom_tx(devlink_priv(rate_node->devlink)))
+		return -EBUSY;
+
 	/* preallocate memory for ice_sched_node */
 	node = devm_kzalloc(ice_hw_to_dev(pi->hw), sizeof(*node), GFP_KERNEL);
 	*priv = node;
@@ -926,6 +961,9 @@ static int ice_devlink_rate_node_del(struct devlink_rate *rate_node, void *priv,
 	if (!rate_node->parent || !node || tc_node == node || !extack)
 		return 0;
 
+	if (!ice_enable_custom_tx(devlink_priv(rate_node->devlink)))
+		return -EBUSY;
+
 	/* can't allow deleting a node with children */
 	if (node->num_children)
 		return -EINVAL;
@@ -942,6 +980,9 @@ static int ice_devlink_rate_leaf_tx_max_set(struct devlink_rate *rate_leaf, void
 {
 	struct ice_sched_node *node = priv;
 
+	if (!ice_enable_custom_tx(devlink_priv(rate_leaf->devlink)))
+		return -EBUSY;
+
 	if (!node)
 		return 0;
 
@@ -955,6 +996,9 @@ static int ice_devlink_rate_leaf_tx_share_set(struct devlink_rate *rate_leaf, vo
 {
 	struct ice_sched_node *node = priv;
 
+	if (!ice_enable_custom_tx(devlink_priv(rate_leaf->devlink)))
+		return -EBUSY;
+
 	if (!node)
 		return 0;
 
@@ -968,6 +1012,9 @@ static int ice_devlink_rate_leaf_tx_priority_set(struct devlink_rate *rate_leaf,
 {
 	struct ice_sched_node *node = priv;
 
+	if (!ice_enable_custom_tx(devlink_priv(rate_leaf->devlink)))
+		return -EBUSY;
+
 	if (!node)
 		return 0;
 
@@ -981,6 +1028,9 @@ static int ice_devlink_rate_leaf_tx_weight_set(struct devlink_rate *rate_leaf, v
 {
 	struct ice_sched_node *node = priv;
 
+	if (!ice_enable_custom_tx(devlink_priv(rate_leaf->devlink)))
+		return -EBUSY;
+
 	if (!node)
 		return 0;
 
@@ -994,6 +1044,9 @@ static int ice_devlink_rate_node_tx_max_set(struct devlink_rate *rate_node, void
 {
 	struct ice_sched_node *node = priv;
 
+	if (!ice_enable_custom_tx(devlink_priv(rate_node->devlink)))
+		return -EBUSY;
+
 	if (!node)
 		return 0;
 
@@ -1007,6 +1060,9 @@ static int ice_devlink_rate_node_tx_share_set(struct devlink_rate *rate_node, vo
 {
 	struct ice_sched_node *node = priv;
 
+	if (!ice_enable_custom_tx(devlink_priv(rate_node->devlink)))
+		return -EBUSY;
+
 	if (!node)
 		return 0;
 
@@ -1020,6 +1076,9 @@ static int ice_devlink_rate_node_tx_priority_set(struct devlink_rate *rate_node,
 {
 	struct ice_sched_node *node = priv;
 
+	if (!ice_enable_custom_tx(devlink_priv(rate_node->devlink)))
+		return -EBUSY;
+
 	if (!node)
 		return 0;
 
@@ -1033,6 +1092,9 @@ static int ice_devlink_rate_node_tx_weight_set(struct devlink_rate *rate_node, v
 {
 	struct ice_sched_node *node = priv;
 
+	if (!ice_enable_custom_tx(devlink_priv(rate_node->devlink)))
+		return -EBUSY;
+
 	if (!node)
 		return 0;
 
@@ -1059,6 +1121,9 @@ static int ice_devlink_set_parent(struct devlink_rate *devlink_rate,
 	if (!extack)
 		return 0;
 
+	if (!ice_enable_custom_tx(devlink_priv(devlink_rate->devlink)))
+		return -EBUSY;
+
 	if (!parent) {
 		if (!node || tc_node == node || node->num_children)
 			return -EINVAL;
diff --git a/drivers/net/ethernet/intel/ice/ice_type.h b/drivers/net/ethernet/intel/ice/ice_type.h
index c93f2449d3c3..089c90f66ef6 100644
--- a/drivers/net/ethernet/intel/ice/ice_type.h
+++ b/drivers/net/ethernet/intel/ice/ice_type.h
@@ -715,6 +715,7 @@ struct ice_port_info {
 	struct ice_qos_cfg qos_cfg;
 	struct xarray sched_node_ids;
 	u8 is_vf:1;
+	u8 is_custom_tx_enabled:1;
 };
 
 struct ice_switch_info {
-- 
2.37.2


^ permalink raw reply related	[flat|nested] 18+ messages in thread

* [PATCH net-next v10 10/10] ice: add documentation for devlink-rate implementation
  2022-11-07 18:13 [PATCH net-next v10 00/10] Implement devlink-rate API and extend it Michal Wilczynski
                   ` (8 preceding siblings ...)
  2022-11-07 18:13 ` [PATCH net-next v10 09/10] ice: Prevent ADQ, DCB coexistence with Custom Tx scheduler Michal Wilczynski
@ 2022-11-07 18:13 ` Michal Wilczynski
  2022-11-08 22:39   ` Jakub Kicinski
  2022-11-07 18:13 ` [PATCH net-next v10 10/10] ice: Add " Michal Wilczynski
  10 siblings, 1 reply; 18+ messages in thread
From: Michal Wilczynski @ 2022-11-07 18:13 UTC (permalink / raw)
  To: netdev
  Cc: alexandr.lobakin, jacob.e.keller, jesse.brandeburg,
	przemyslaw.kitszel, anthony.l.nguyen, kuba, ecree.xilinx, jiri,
	Michal Wilczynski

Add documentation for the newly added devlink-rate feature. Provide some
examples of how to use the feature, list which netlink attributes are
supported, and describe the attributes.
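
The rendered page can be sanity-checked with the usual Sphinx build
(a suggested verification step, not part of this patch):

$ make SPHINXDIRS="networking" htmldocs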

Signed-off-by: Michal Wilczynski <michal.wilczynski@intel.com>
---
 Documentation/networking/devlink/ice.rst | 101 +++++++++++++++++++++++
 1 file changed, 101 insertions(+)

diff --git a/Documentation/networking/devlink/ice.rst b/Documentation/networking/devlink/ice.rst
index 0c89ceb8986d..d3299bd160dc 100644
--- a/Documentation/networking/devlink/ice.rst
+++ b/Documentation/networking/devlink/ice.rst
@@ -254,3 +254,104 @@ Users can request an immediate capture of a snapshot via the
     0000000000000210 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
 
     $ devlink region delete pci/0000:01:00.0/device-caps snapshot 1
+
+Devlink Rate
+============
+
+The ``ice`` driver implements the devlink-rate API. It allows offloading
+of Hierarchical QoS to the hardware. It enables the user to group Virtual
+Functions in a tree structure and assign supported parameters (tx_share,
+tx_max, tx_priority and tx_weight) to each node in the tree. Effectively,
+the user gains the ability to control how much bandwidth is allocated for
+each VF group. This is later enforced by the HW.
+
+It is assumed that this feature is mutually exclusive with DCB and ADQ, or
+any driver feature that would trigger changes in QoS, for example creation
+of a new traffic class. This feature is also dependent on switchdev
+being enabled in the system. It's required because devlink-rate requires
+devlink-port objects to be present, and those objects are only created
+in switchdev mode.
+
+If the driver is set to the switchdev mode, it will export the
+internal hierarchy the moment the VFs are created. The root of the tree
+is always represented by node_0. This node can't be deleted by the user.
+Leaf nodes and nodes with children also can't be deleted.
+
+.. list-table:: Attributes supported
+    :widths: 15 85
+
+    * - Name
+      - Description
+    * - ``tx_max``
+      - This attribute allows for specifying a maximum bandwidth to be
+        consumed by the tree Node. Rate Limit is an absolute number
+        specifying a maximum amount of bytes a Node may consume during
+        the course of one second. Rate limit guarantees that a link will
+        not oversaturate the receiver on the remote end and also enforces
+        an SLA between the subscriber and network provider.
+    * - ``tx_share``
+      - This attribute allows for specifying a minimum bandwidth allocated
+        to a tree node when it is not blocked. It specifies an absolute
+        BW. While tx_max defines the maximum bandwidth the node may consume,
+        the tx_share marks committed BW for the Node.
+    * - ``tx_priority``
+      - This attribute allows for usage of strict priority arbiter among
+        siblings. This arbitration scheme attempts to schedule nodes based
+        on their priority as long as the nodes remain within their
+        bandwidth limit. Range 0-7.
+    * - ``tx_weight``
+      - This attribute allows for usage of Weighted Fair Queuing
+        arbitration scheme among siblings. This arbitration scheme can be
+        used simultaneously with the strict priority. Range 1-200.
+
+.. code:: shell
+
+    # enable switchdev
+    $ devlink dev eswitch set pci/0000:4b:00.0 mode switchdev
+
+    # at this point driver should export internal hierarchy
+    $ echo 2 > /sys/class/net/ens785np0/device/sriov_numvfs
+
+    $ devlink port function rate show
+    pci/0000:4b:00.0/node_25: type node parent node_24
+    pci/0000:4b:00.0/node_24: type node parent node_0
+    pci/0000:4b:00.0/node_32: type node parent node_31
+    pci/0000:4b:00.0/node_31: type node parent node_30
+    pci/0000:4b:00.0/node_30: type node parent node_16
+    pci/0000:4b:00.0/node_19: type node parent node_18
+    pci/0000:4b:00.0/node_18: type node parent node_17
+    pci/0000:4b:00.0/node_17: type node parent node_16
+    pci/0000:4b:00.0/node_14: type node parent node_5
+    pci/0000:4b:00.0/node_5: type node parent node_3
+    pci/0000:4b:00.0/node_13: type node parent node_4
+    pci/0000:4b:00.0/node_12: type node parent node_4
+    pci/0000:4b:00.0/node_11: type node parent node_4
+    pci/0000:4b:00.0/node_10: type node parent node_4
+    pci/0000:4b:00.0/node_9: type node parent node_4
+    pci/0000:4b:00.0/node_8: type node parent node_4
+    pci/0000:4b:00.0/node_7: type node parent node_4
+    pci/0000:4b:00.0/node_6: type node parent node_4
+    pci/0000:4b:00.0/node_4: type node parent node_3
+    pci/0000:4b:00.0/node_3: type node parent node_16
+    pci/0000:4b:00.0/node_16: type node parent node_15
+    pci/0000:4b:00.0/node_15: type node parent node_0
+    pci/0000:4b:00.0/node_2: type node parent node_1
+    pci/0000:4b:00.0/node_1: type node parent node_0
+    pci/0000:4b:00.0/node_0: type node
+    pci/0000:4b:00.0/1: type leaf parent node_25
+    pci/0000:4b:00.0/2: type leaf parent node_25
+
+    # let's create some custom node
+    $ devlink port function rate add pci/0000:4b:00.0/node_custom parent node_0
+
+    # second custom node
+    $ devlink port function rate add pci/0000:4b:00.0/node_custom_1 parent node_custom
+
+    # reassign second VF to newly created branch
+    $ devlink port function rate set pci/0000:4b:00.0/2 parent node_custom_1
+
+    # assign tx_weight to the VF
+    $ devlink port function rate set pci/0000:4b:00.0/2 tx_weight 5
+
+    # assign tx_share to the VF
+    $ devlink port function rate set pci/0000:4b:00.0/2 tx_share 500Mbps
-- 
2.37.2


^ permalink raw reply related	[flat|nested] 18+ messages in thread

* [PATCH net-next v10 10/10] ice: Add documentation for devlink-rate implementation
  2022-11-07 18:13 [PATCH net-next v10 00/10] Implement devlink-rate API and extend it Michal Wilczynski
                   ` (9 preceding siblings ...)
  2022-11-07 18:13 ` [PATCH net-next v10 10/10] ice: add documentation for devlink-rate implementation Michal Wilczynski
@ 2022-11-07 18:13 ` Michal Wilczynski
  10 siblings, 0 replies; 18+ messages in thread
From: Michal Wilczynski @ 2022-11-07 18:13 UTC (permalink / raw)
  To: netdev
  Cc: alexandr.lobakin, jacob.e.keller, jesse.brandeburg,
	przemyslaw.kitszel, anthony.l.nguyen, kuba, ecree.xilinx, jiri,
	Michal Wilczynski

Add documentation for the newly added devlink-rate feature. Provide some
examples of how to use the commands, list which netlink attributes are
supported, and describe the attributes.

Signed-off-by: Michal Wilczynski <michal.wilczynski@intel.com>
---
 Documentation/networking/devlink/ice.rst | 101 +++++++++++++++++++++++
 1 file changed, 101 insertions(+)

diff --git a/Documentation/networking/devlink/ice.rst b/Documentation/networking/devlink/ice.rst
index 0c89ceb8986d..d3299bd160dc 100644
--- a/Documentation/networking/devlink/ice.rst
+++ b/Documentation/networking/devlink/ice.rst
@@ -254,3 +254,104 @@ Users can request an immediate capture of a snapshot via the
     0000000000000210 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
 
     $ devlink region delete pci/0000:01:00.0/device-caps snapshot 1
+
+Devlink Rate
+============
+
+The ``ice`` driver implements the devlink-rate API. It allows offloading
+of Hierarchical QoS to the hardware. It enables the user to group Virtual
+Functions in a tree structure and assign supported parameters (tx_share,
+tx_max, tx_priority and tx_weight) to each node in the tree. Effectively,
+the user gains the ability to control how much bandwidth is allocated for
+each VF group. This is later enforced by the HW.
+
+It is assumed that this feature is mutually exclusive with DCB and ADQ, or
+any driver feature that would trigger changes in QoS, for example creation
+of a new traffic class. This feature is also dependent on switchdev
+being enabled in the system. It's required because devlink-rate requires
+devlink-port objects to be present, and those objects are only created
+in switchdev mode.
+
+If the driver is set to the switchdev mode, it will export the
+internal hierarchy the moment the VFs are created. The root of the tree
+is always represented by node_0. This node can't be deleted by the user.
+Leaf nodes and nodes with children also can't be deleted.
+
+.. list-table:: Attributes supported
+    :widths: 15 85
+
+    * - Name
+      - Description
+    * - ``tx_max``
+      - This attribute allows for specifying a maximum bandwidth to be
+        consumed by the tree Node. Rate Limit is an absolute number
+        specifying a maximum amount of bytes a Node may consume during
+        the course of one second. Rate limit guarantees that a link will
+        not oversaturate the receiver on the remote end and also enforces
+        an SLA between the subscriber and network provider.
+    * - ``tx_share``
+      - This attribute allows for specifying a minimum bandwidth allocated
+        to a tree node when it is not blocked. It specifies an absolute
+        BW. While tx_max defines the maximum bandwidth the node may consume,
+        the tx_share marks committed BW for the Node.
+    * - ``tx_priority``
+      - This attribute allows for usage of strict priority arbiter among
+        siblings. This arbitration scheme attempts to schedule nodes based
+        on their priority as long as the nodes remain within their
+        bandwidth limit. Range 0-7.
+    * - ``tx_weight``
+      - This attribute allows for usage of Weighted Fair Queuing
+        arbitration scheme among siblings. This arbitration scheme can be
+        used simultaneously with the strict priority. Range 1-200.
+
+.. code:: shell
+
+    # enable switchdev
+    $ devlink dev eswitch set pci/0000:4b:00.0 mode switchdev
+
+    # at this point driver should export internal hierarchy
+    $ echo 2 > /sys/class/net/ens785np0/device/sriov_numvfs
+
+    $ devlink port function rate show
+    pci/0000:4b:00.0/node_25: type node parent node_24
+    pci/0000:4b:00.0/node_24: type node parent node_0
+    pci/0000:4b:00.0/node_32: type node parent node_31
+    pci/0000:4b:00.0/node_31: type node parent node_30
+    pci/0000:4b:00.0/node_30: type node parent node_16
+    pci/0000:4b:00.0/node_19: type node parent node_18
+    pci/0000:4b:00.0/node_18: type node parent node_17
+    pci/0000:4b:00.0/node_17: type node parent node_16
+    pci/0000:4b:00.0/node_14: type node parent node_5
+    pci/0000:4b:00.0/node_5: type node parent node_3
+    pci/0000:4b:00.0/node_13: type node parent node_4
+    pci/0000:4b:00.0/node_12: type node parent node_4
+    pci/0000:4b:00.0/node_11: type node parent node_4
+    pci/0000:4b:00.0/node_10: type node parent node_4
+    pci/0000:4b:00.0/node_9: type node parent node_4
+    pci/0000:4b:00.0/node_8: type node parent node_4
+    pci/0000:4b:00.0/node_7: type node parent node_4
+    pci/0000:4b:00.0/node_6: type node parent node_4
+    pci/0000:4b:00.0/node_4: type node parent node_3
+    pci/0000:4b:00.0/node_3: type node parent node_16
+    pci/0000:4b:00.0/node_16: type node parent node_15
+    pci/0000:4b:00.0/node_15: type node parent node_0
+    pci/0000:4b:00.0/node_2: type node parent node_1
+    pci/0000:4b:00.0/node_1: type node parent node_0
+    pci/0000:4b:00.0/node_0: type node
+    pci/0000:4b:00.0/1: type leaf parent node_25
+    pci/0000:4b:00.0/2: type leaf parent node_25
+
+    # let's create some custom node
+    $ devlink port function rate add pci/0000:4b:00.0/node_custom parent node_0
+
+    # second custom node
+    $ devlink port function rate add pci/0000:4b:00.0/node_custom_1 parent node_custom
+
+    # reassign second VF to newly created branch
+    $ devlink port function rate set pci/0000:4b:00.0/2 parent node_custom_1
+
+    # assign tx_weight to the VF
+    $ devlink port function rate set pci/0000:4b:00.0/2 tx_weight 5
+
+    # assign tx_share to the VF
+    $ devlink port function rate set pci/0000:4b:00.0/2 tx_share 500Mbps
-- 
2.37.2


^ permalink raw reply related	[flat|nested] 18+ messages in thread

* Re: [PATCH net-next v10 10/10] ice: add documentation for devlink-rate implementation
  2022-11-07 18:13 ` [PATCH net-next v10 10/10] ice: add documentation for devlink-rate implementation Michal Wilczynski
@ 2022-11-08 22:39   ` Jakub Kicinski
  2022-11-09 18:54     ` Wilczynski, Michal
  0 siblings, 1 reply; 18+ messages in thread
From: Jakub Kicinski @ 2022-11-08 22:39 UTC (permalink / raw)
  To: Michal Wilczynski
  Cc: netdev, alexandr.lobakin, jacob.e.keller, jesse.brandeburg,
	przemyslaw.kitszel, anthony.l.nguyen, ecree.xilinx, jiri

On Mon,  7 Nov 2022 19:13:26 +0100 Michal Wilczynski wrote:
> Add documentation to a newly added devlink-rate feature. Provide some
> examples on how to use the features, which netlink attributes are
> supported and descriptions of the attributes.

> +Devlink Rate
> +==========
> +
> +The ``ice`` driver implements devlink-rate API. It allows for offload of
> +the Hierarchical QoS to the hardware. It enables user to group Virtual
> +Functions in a tree structure and assign supported parameters: tx_share,
> +tx_max, tx_priority and tx_weight to each node in a tree. So effectively
> +user gains an ability to control how much bandwidth is allocated for each
> +VF group. This is later enforced by the HW.
> +
> +It is assumed that this feature is mutually exclusive with DCB and ADQ, or
> +any driver feature that would trigger changes in QoS, for example creation
> +of the new traffic class.

Meaning? Will the devlink API no longer reflect reality once one of 
the VFs enables DCB for example? 

> This feature is also dependent on switchdev
> +being enabled in the system. It's required because devlink-rate requires
> +devlink-port objects to be present, and those objects are only created
> +in switchdev mode.
> +
> +If the driver is set to the switchdev mode, it will export
> +internal hierarchy the moment the VF's are created. Root of the tree
> +is always represented by the node_0. This node can't be deleted by the user.
> +Leaf nodes and nodes with children also can't be deleted.
> +
> +.. list-table:: Attributes supported
> +    :widths: 15 85
> +
> +    * - Name
> +      - Description
> +    * - ``tx_max``
> +      - This attribute allows for specifying a maximum bandwidth to be

Drop the "This attribute allows for specifying a" from all attrs.

> +        consumed by the tree Node. Rate Limit is an absolute number
> +        specifying a maximum amount of bytes a Node may consume during
> +        the course of one second. Rate limit guarantees that a link will
> +        not oversaturate the receiver on the remote end and also enforces
> +        an SLA between the subscriber and network provider.
> +    * - ``tx_share``

Wouldn't it be more common to call this tx_min, like in the old VF API
and the cgroup APIs?

> +      - This attribute allows for specifying a minimum bandwidth allocated
> +        to a tree node when it is not blocked. It specifies an absolute
> +        BW. While tx_max defines the maximum bandwidth the node may consume,
> +        the tx_share marks committed BW for the Node.
> +    * - ``tx_priority``
> +      - This attribute allows for usage of strict priority arbiter among
> +        siblings. This arbitration scheme attempts to schedule nodes based
> +        on their priority as long as the nodes remain within their
> +        bandwidth limit. Range 0-7.

Nodes meaning it will (W)RR across all nodes of highest prio?

Is prio 0 or 7 highest?

> +    * - ``tx_weight``
> +      - This attribute allows for usage of Weighted Fair Queuing
> +        arbitration scheme among siblings. This arbitration scheme can be
> +        used simultaneously with the strict priority. Range 1-200.

Would be good to specify how the interaction with SP looks.
Does the absolute value of the weight matter or only the relative
values? (IOW is 1 vs 10 the same as 10 vs 100)

^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: [PATCH net-next v10 10/10] ice: add documentation for devlink-rate implementation
  2022-11-08 22:39   ` Jakub Kicinski
@ 2022-11-09 18:54     ` Wilczynski, Michal
  2022-11-09 21:25       ` Jakub Kicinski
  0 siblings, 1 reply; 18+ messages in thread
From: Wilczynski, Michal @ 2022-11-09 18:54 UTC (permalink / raw)
  To: Jakub Kicinski
  Cc: netdev, alexandr.lobakin, jacob.e.keller, jesse.brandeburg,
	przemyslaw.kitszel, anthony.l.nguyen, ecree.xilinx, jiri



On 11/8/2022 11:39 PM, Jakub Kicinski wrote:
> On Mon,  7 Nov 2022 19:13:26 +0100 Michal Wilczynski wrote:
>> Add documentation to a newly added devlink-rate feature. Provide some
>> examples on how to use the features, which netlink attributes are
>> supported and descriptions of the attributes.
>> +Devlink Rate
>> +==========
>> +
>> +The ``ice`` driver implements devlink-rate API. It allows for offload of
>> +the Hierarchical QoS to the hardware. It enables user to group Virtual
>> +Functions in a tree structure and assign supported parameters: tx_share,
>> +tx_max, tx_priority and tx_weight to each node in a tree. So effectively
>> +user gains an ability to control how much bandwidth is allocated for each
>> +VF group. This is later enforced by the HW.
>> +
>> +It is assumed that this feature is mutually exclusive with DCB and ADQ, or
>> +any driver feature that would trigger changes in QoS, for example creation
>> +of the new traffic class.
> Meaning? Will the devlink API no longer reflect reality once one of
> the VFs enables DCB for example?

By DCB I mean the DCB that's implemented in the FW, and I'm not aware
of any flow that would enable the VF to tweak FW DCB on/off. Additionally
there is a commit in this patch series that should prevent any devlink-rate
changes if the FW DCB is enabled, and should prevent FW DCB enablement
if any changes were made with devlink-rate.

I don't think there is a way to detect that the SW DCB is enabled though.
In that case the software would try to enforce its own settings in the SW
stack and the HW would try to enforce devlink-rate settings.

>
>> This feature is also dependent on switchdev
>> +being enabled in the system. It's required because devlink-rate requires
>> +devlink-port objects to be present, and those objects are only created
>> +in switchdev mode.
>> +
>> +If the driver is set to the switchdev mode, it will export
>> +internal hierarchy the moment the VF's are created. Root of the tree
>> +is always represented by the node_0. This node can't be deleted by the user.
>> +Leaf nodes and nodes with children also can't be deleted.
>> +
>> +.. list-table:: Attributes supported
>> +    :widths: 15 85
>> +
>> +    * - Name
>> +      - Description
>> +    * - ``tx_max``
>> +      - This attribute allows for specifying a maximum bandwidth to be
> Drop the "This attribute allows for specifying a" from all attrs.

Sure.

>
>> +        consumed by the tree Node. Rate Limit is an absolute number
>> +        specifying a maximum amount of bytes a Node may consume during
>> +        the course of one second. Rate limit guarantees that a link will
>> +        not oversaturate the receiver on the remote end and also enforces
>> +        an SLA between the subscriber and network provider.
>> +    * - ``tx_share``
> Wouldn't it be more common to call this tx_min, like in the old VF API
> and the cgroup APIs?

I agree on this one, I'm not really sure why this attribute is called
tx_share. In its iproute documentation tx_share is described as:
"specifies minimal tx rate value shared among all rate objects. If rate
object is a part of some rate group, then this value shared with rate
objects of this rate group.".
So tx_min is more intuitive, but I suspect that the original author
wanted to emphasize that this BW is shared among all the children
nodes.

>
>> +      - This attribute allows for specifying a minimum bandwidth allocated
>> +        to a tree node when it is not blocked. It specifies an absolute
>> +        BW. While tx_max defines the maximum bandwidth the node may consume,
>> +        the tx_share marks committed BW for the Node.
>> +    * - ``tx_priority``
>> +      - This attribute allows for usage of strict priority arbiter among
>> +        siblings. This arbitration scheme attempts to schedule nodes based
>> +        on their priority as long as the nodes remain within their
>> +        bandwidth limit. Range 0-7.
> Nodes meaning it will (W)RR across all nodes of highest prio?

Yes, nodes with the same priority will be treated equally.

>
> Is prio 0 or 7 highest?
7 is the highest; nodes with priority 7 are selected first.

>
>> +    * - ``tx_weight``
>> +      - This attribute allows for usage of Weighted Fair Queuing
>> +        arbitration scheme among siblings. This arbitration scheme can be
>> +        used simultaneously with the strict priority. Range 1-200.
> Would be good to specify how the interaction with SP looks.

In each arbitration flow, the winning node will be selected from
the group’s non-blocked siblings with the highest priority.
The basic rule is that each sibling group consists of a sub group
with high priority nodes, a WFQ sub group with intermediate
priority nodes, and a sub group with low priority nodes.

So basically, if several sibling nodes have the same priority configured
they are treated as a sub-group and arbitration among them is
performed using weights.
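
For example (an illustrative configuration, reusing the handles from
this series):

$ devlink port function rate set pci/0000:4b:00.0/1 tx_priority 7
$ devlink port function rate set pci/0000:4b:00.0/2 tx_priority 5 tx_weight 10

VF 1 would be served first (within its limits); VF 2 and any other
siblings left at priority 5 would then split bandwidth via WFQ
according to their weights.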

Will add this info in the documentation.

> Does the absolute value of the weight matter or only the relative
> values? (IOW is 1 vs 10 the same as 10 vs 100)

Only relative values matter. Will also add this to the documentation,
I guess.
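
E.g. within a priority sub-group a node's share is roughly
weight_i / sum(weight_j) over its siblings, so 1 vs 10 and 10 vs 100
both resolve to the same 1:10 split.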



^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: [PATCH net-next v10 10/10] ice: add documentation for devlink-rate implementation
  2022-11-09 18:54     ` Wilczynski, Michal
@ 2022-11-09 21:25       ` Jakub Kicinski
  2022-11-10 16:54         ` Wilczynski, Michal
  0 siblings, 1 reply; 18+ messages in thread
From: Jakub Kicinski @ 2022-11-09 21:25 UTC (permalink / raw)
  To: Wilczynski, Michal
  Cc: netdev, alexandr.lobakin, jacob.e.keller, jesse.brandeburg,
	przemyslaw.kitszel, anthony.l.nguyen, ecree.xilinx, jiri

On Wed, 9 Nov 2022 19:54:52 +0100 Wilczynski, Michal wrote:
> On 11/8/2022 11:39 PM, Jakub Kicinski wrote:
> > On Mon,  7 Nov 2022 19:13:26 +0100 Michal Wilczynski wrote:  
> >> Add documentation to a newly added devlink-rate feature. Provide some
> >> examples on how to use the features, which netlink attributes are
> >> supported and descriptions of the attributes.
> >> +Devlink Rate
> >> +==========
> >> +
> >> +The ``ice`` driver implements devlink-rate API. It allows for offload of
> >> +the Hierarchical QoS to the hardware. It enables user to group Virtual
> >> +Functions in a tree structure and assign supported parameters: tx_share,
> >> +tx_max, tx_priority and tx_weight to each node in a tree. So effectively
> >> +user gains an ability to control how much bandwidth is allocated for each
> >> +VF group. This is later enforced by the HW.
> >> +
> >> +It is assumed that this feature is mutually exclusive with DCB and ADQ, or
> >> +any driver feature that would trigger changes in QoS, for example creation
> >> +of the new traffic class.  
> > Meaning? Will the devlink API no longer reflect reality once one of
> > the VFs enables DCB for example?  
> 
> By DCB I mean the DCB that's implemented in the FW, and I'm not aware
> of any flow that would enable the VF to tweak FW DCB on/off. Additionally
> there is a commit in this patch series that should prevent any devlink-rate
> changes if the FW DCB is enabled, and should prevent enabling FW DCB
> enablement if any changes were made with the devlink-rate.

Nice, but in case DCB or TC/ADQ gets enabled devlink rate will just
show a stale hierarchy?

We need to document clearly that the driver is supposed to prevent
multiple APIs being used, and how we decide which API takes precedence.

> I don't think there is a way to detect that the SW DCB is enabled though.
> In that case the software would try to enforce it's own settings in the SW
> stack and the HW would try to enforce devlink-rate settings.
>
> >> +        consumed by the tree Node. Rate Limit is an absolute number
> >> +        specifying a maximum amount of bytes a Node may consume during
> >> +        the course of one second. Rate limit guarantees that a link will
> >> +        not oversaturate the receiver on the remote end and also enforces
> >> +        an SLA between the subscriber and network provider.
> >> +    * - ``tx_share``  
> > Wouldn't it be more common to call this tx_min, like in the old VF API
> > and the cgroup APIs?  
> 
> I agree on this one, I'm not really sure why this attribute is called
> tx_share. In it's iproute documentation tx_share is described as:
> "specifies minimal tx rate value shared among all rate objects. If rate
> object is a part of some rate group, then this value shared with rate
> objects of this rate group.".
> So tx_min is more intuitive, but I suspect that the original author
> wanted to emphasize that this BW is shared among all the children
> nodes.

Ah :/ I missed you're not adding this one :S

^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: [PATCH net-next v10 10/10] ice: add documentation for devlink-rate implementation
  2022-11-09 21:25       ` Jakub Kicinski
@ 2022-11-10 16:54         ` Wilczynski, Michal
  2022-11-10 17:03           ` Jakub Kicinski
  0 siblings, 1 reply; 18+ messages in thread
From: Wilczynski, Michal @ 2022-11-10 16:54 UTC (permalink / raw)
  To: Jakub Kicinski
  Cc: netdev, alexandr.lobakin, jacob.e.keller, jesse.brandeburg,
	przemyslaw.kitszel, anthony.l.nguyen, ecree.xilinx, jiri



On 11/9/2022 10:25 PM, Jakub Kicinski wrote:
> On Wed, 9 Nov 2022 19:54:52 +0100 Wilczynski, Michal wrote:
>> On 11/8/2022 11:39 PM, Jakub Kicinski wrote:
>>> On Mon,  7 Nov 2022 19:13:26 +0100 Michal Wilczynski wrote:
>>>> Add documentation to a newly added devlink-rate feature. Provide some
>>>> examples on how to use the features, which netlink attributes are
>>>> supported and descriptions of the attributes.
>>>> +Devlink Rate
>>>> +==========
>>>> +
>>>> +The ``ice`` driver implements devlink-rate API. It allows for offload of
>>>> +the Hierarchical QoS to the hardware. It enables user to group Virtual
>>>> +Functions in a tree structure and assign supported parameters: tx_share,
>>>> +tx_max, tx_priority and tx_weight to each node in a tree. So effectively
>>>> +user gains an ability to control how much bandwidth is allocated for each
>>>> +VF group. This is later enforced by the HW.
>>>> +
>>>> +It is assumed that this feature is mutually exclusive with DCB and ADQ, or
>>>> +any driver feature that would trigger changes in QoS, for example creation
>>>> +of the new traffic class.
>>> Meaning? Will the devlink API no longer reflect reality once one of
>>> the VFs enables DCB for example?
>> By DCB I mean the DCB that's implemented in the FW, and I'm not aware
>> of any flow that would enable the VF to tweak FW DCB on/off. Additionally
>> there is a commit in this patch series that should prevent any devlink-rate
>> changes if the FW DCB is enabled, and should prevent enabling FW DCB
>> enablement if any changes were made with the devlink-rate.
> Nice, but in case DCB or TC/ADQ gets enabled devlink rate will just
> show a stale hierarchy?

Yes, there will be a hierarchy exported during VF creation, so if
the user enables DCB/ADQ in the meantime, it will be a stale hierarchy.
The user won't be able to modify any nodes/parameters.

I will clarify this also in v11.

>
> We need to document clearly that the driver is supposed to prevent
> multiple APIs being used, and how we decide which API takes precedence.

OK, agreed, will do that in v11.

>
>> I don't think there is a way to detect that the SW DCB is enabled though.
>> In that case the software would try to enforce it's own settings in the SW
>> stack and the HW would try to enforce devlink-rate settings.
>>
>>>> +        consumed by the tree Node. Rate Limit is an absolute number
>>>> +        specifying a maximum amount of bytes a Node may consume during
>>>> +        the course of one second. Rate limit guarantees that a link will
>>>> +        not oversaturate the receiver on the remote end and also enforces
>>>> +        an SLA between the subscriber and network provider.
>>>> +    * - ``tx_share``
>>> Wouldn't it be more common to call this tx_min, like in the old VF API
>>> and the cgroup APIs?
>> I agree on this one, I'm not really sure why this attribute is called
>> tx_share. In it's iproute documentation tx_share is described as:
>> "specifies minimal tx rate value shared among all rate objects. If rate
>> object is a part of some rate group, then this value shared with rate
>> objects of this rate group.".
>> So tx_min is more intuitive, but I suspect that the original author
>> wanted to emphasize that this BW is shared among all the children
>> nodes.
> Ah :/ I missed you're not adding this one :S


^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: [PATCH net-next v10 10/10] ice: add documentation for devlink-rate implementation
  2022-11-10 16:54         ` Wilczynski, Michal
@ 2022-11-10 17:03           ` Jakub Kicinski
  2022-11-10 17:40             ` Wilczynski, Michal
  0 siblings, 1 reply; 18+ messages in thread
From: Jakub Kicinski @ 2022-11-10 17:03 UTC (permalink / raw)
  To: Wilczynski, Michal
  Cc: netdev, alexandr.lobakin, jacob.e.keller, jesse.brandeburg,
	przemyslaw.kitszel, anthony.l.nguyen, ecree.xilinx, jiri

On Thu, 10 Nov 2022 17:54:03 +0100 Wilczynski, Michal wrote:
> > Nice, but in case DCB or TC/ADQ gets enabled devlink rate will just
> > show a stale hierarchy?  
> 
> Yes there will be hierarchy exported during the VF creation, so if
> the user enable DCB/ADQ in the meantime, it will be a stale hierarchy.
> User won't be able to modify any nodes/parameters.

Why not tear it down if it's stale?

^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: [PATCH net-next v10 10/10] ice: add documentation for devlink-rate implementation
  2022-11-10 17:03           ` Jakub Kicinski
@ 2022-11-10 17:40             ` Wilczynski, Michal
  0 siblings, 0 replies; 18+ messages in thread
From: Wilczynski, Michal @ 2022-11-10 17:40 UTC (permalink / raw)
  To: Jakub Kicinski
  Cc: netdev, alexandr.lobakin, jacob.e.keller, jesse.brandeburg,
	przemyslaw.kitszel, anthony.l.nguyen, ecree.xilinx, jiri



On 11/10/2022 6:03 PM, Jakub Kicinski wrote:
> On Thu, 10 Nov 2022 17:54:03 +0100 Wilczynski, Michal wrote:
>>> Nice, but in case DCB or TC/ADQ gets enabled devlink rate will just
>>> show a stale hierarchy?
>> Yes there will be hierarchy exported during the VF creation, so if
>> the user enable DCB/ADQ in the meantime, it will be a stale hierarchy.
>> User won't be able to modify any nodes/parameters.
> Why not tear it down if it's stale?

I don't think there is any harm in tearing it down in those cases;
I can add a commit that tears down the hierarchy when DCB/ADQ gets
activated.



^ permalink raw reply	[flat|nested] 18+ messages in thread

end of thread, other threads:[~2022-11-10 17:40 UTC | newest]

Thread overview: 18+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2022-11-07 18:13 [PATCH net-next v10 00/10] Implement devlink-rate API and extend it Michal Wilczynski
2022-11-07 18:13 ` [PATCH net-next v10 01/10] devlink: Introduce new attribute 'tx_priority' to devlink-rate Michal Wilczynski
2022-11-07 18:13 ` [PATCH net-next v10 02/10] devlink: Introduce new attribute 'tx_weight' " Michal Wilczynski
2022-11-07 18:13 ` [PATCH net-next v10 03/10] devlink: Enable creation of the devlink-rate nodes from the driver Michal Wilczynski
2022-11-07 18:13 ` [PATCH net-next v10 04/10] devlink: Allow for devlink-rate nodes parent reassignment Michal Wilczynski
2022-11-07 18:13 ` [PATCH net-next v10 05/10] devlink: Allow to set up parent in devl_rate_leaf_create() Michal Wilczynski
2022-11-07 18:13 ` [PATCH net-next v10 06/10] ice: Introduce new parameters in ice_sched_node Michal Wilczynski
2022-11-07 18:13 ` [PATCH net-next v10 07/10] ice: Add an option to pre-allocate memory for ice_sched_node Michal Wilczynski
2022-11-07 18:13 ` [PATCH net-next v10 08/10] ice: Implement devlink-rate API Michal Wilczynski
2022-11-07 18:13 ` [PATCH net-next v10 09/10] ice: Prevent ADQ, DCB coexistence with Custom Tx scheduler Michal Wilczynski
2022-11-07 18:13 ` [PATCH net-next v10 10/10] ice: add documentation for devlink-rate implementation Michal Wilczynski
2022-11-08 22:39   ` Jakub Kicinski
2022-11-09 18:54     ` Wilczynski, Michal
2022-11-09 21:25       ` Jakub Kicinski
2022-11-10 16:54         ` Wilczynski, Michal
2022-11-10 17:03           ` Jakub Kicinski
2022-11-10 17:40             ` Wilczynski, Michal
2022-11-07 18:13 ` [PATCH net-next v10 10/10] ice: Add " Michal Wilczynski
