* [net-next Patch v5 0/6] octeontx2-pf: HTB offload support
@ 2023-03-26 18:12 Hariprasad Kelam
  2023-03-26 18:12 ` [net-next Patch v5 1/6] sch_htb: Allow HTB priority parameter in offload mode Hariprasad Kelam
                   ` (5 more replies)
  0 siblings, 6 replies; 19+ messages in thread
From: Hariprasad Kelam @ 2023-03-26 18:12 UTC (permalink / raw)
  To: netdev, linux-kernel
  Cc: kuba, davem, willemdebruijn.kernel, andrew, sgoutham, lcherian,
	gakula, jerinj, sbhatta, hkelam, naveenm, edumazet, pabeni, jhs,
	xiyou.wangcong, jiri, maxtram95

The octeontx2 silicon and CN10K transmit interface consists of five
transmit levels, starting from MDQ, then TL4 up to TL1. Once packets
are submitted to an MDQ, hardware picks all active MDQs using strict
priority, and MDQs having the same priority level are chosen using
round robin. Each packet traverses the MDQ and TL4 to TL1 levels.
Each level contains an array of queues to support scheduling and
shaping.

HTB provides a classful queuing mechanism with rate and ceil
parameters, allowing the user to control the absolute bandwidth
given to particular classes of traffic. The same can be achieved
in hardware by configuring shapers and schedulers on the different
transmit levels.
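
Once the series is applied, a typical offloaded hierarchy can be set
up with standard tc commands; the interface name, rates and class IDs
below are only illustrative:

  tc qdisc replace dev eth0 root handle 1: htb offload
  tc class add dev eth0 parent 1: classid 1:1 htb rate 100mbit ceil 100mbit prio 1
  tc class add dev eth0 parent 1: classid 1:2 htb rate 200mbit ceil 400mbit prio 7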

This series of patches adds support for HTB offload:

Patch1: Allow strict priority parameter in HTB offload mode.

Patch2: Rename existing total tx queues for better readability

Patch3: defines APIs such that the driver can dynamically initialize/
        deinitialize the send queues.

Patch4: Refactors transmit alloc/free calls as preparation for QOS
        offload code.

Patch5: Adds actual HTB offload support.

Patch6: Add documentation about htb offload flow in driver


Hariprasad Kelam (3):
  octeontx2-pf: Rename tot_tx_queues to non_qos_queues
  octeontx2-pf: Refactor scheduler queue alloc/free calls
  docs: octeontx2: Add Documentation for QOS

Naveen Mamindlapalli (2):
  sch_htb: Allow HTB priority parameter in offload mode
  octeontx2-pf: Add support for HTB offload

Subbaraya Sundeep (1):
  octeontx2-pf: qos send queues management
-----
v1 -> v2 :
          ensure other drivers won't be affected by allowing the 'prio'
          parameter in htb offload mode.

v2 -> v3 :
          1. discard patch supporting devlink to configure TL1 round
             robin priority
          2. replace NL_SET_ERR_MSG with NL_SET_ERR_MSG_MOD
          3. use max3 instead of calling max multiple times, and use a
             better naming convention in the send queue management code.

v3 -> v4:
	  1. fix sparse warnings.
	  2. release mutex lock in error conditions.

v4 -> v5:
	  1. fix pahole reported issues
          2. add documentation for htb offload flow.

 .../ethernet/marvell/octeontx2.rst            |   39 +
 .../ethernet/marvell/octeontx2/af/common.h    |    2 +-
 .../marvell/octeontx2/af/rvu_debugfs.c        |    5 +
 .../ethernet/marvell/octeontx2/af/rvu_nix.c   |   45 +
 .../ethernet/marvell/octeontx2/nic/Makefile   |    2 +-
 .../marvell/octeontx2/nic/otx2_common.c       |  120 +-
 .../marvell/octeontx2/nic/otx2_common.h       |   40 +-
 .../marvell/octeontx2/nic/otx2_ethtool.c      |   31 +-
 .../ethernet/marvell/octeontx2/nic/otx2_pf.c  |  110 +-
 .../ethernet/marvell/octeontx2/nic/otx2_reg.h |   13 +
 .../ethernet/marvell/octeontx2/nic/otx2_tc.c  |    7 +-
 .../marvell/octeontx2/nic/otx2_txrx.c         |   24 +-
 .../marvell/octeontx2/nic/otx2_txrx.h         |    3 +-
 .../ethernet/marvell/octeontx2/nic/otx2_vf.c  |   13 +-
 .../net/ethernet/marvell/octeontx2/nic/qos.c  | 1460 +++++++++++++++++
 .../net/ethernet/marvell/octeontx2/nic/qos.h  |   69 +
 .../ethernet/marvell/octeontx2/nic/qos_sq.c   |  304 ++++
 .../net/ethernet/mellanox/mlx5/core/en/qos.c  |    7 +-
 include/net/pkt_cls.h                         |    1 +
 net/sched/sch_htb.c                           |    7 +-
 20 files changed, 2197 insertions(+), 105 deletions(-)
 create mode 100644 drivers/net/ethernet/marvell/octeontx2/nic/qos.c
 create mode 100644 drivers/net/ethernet/marvell/octeontx2/nic/qos.h
 create mode 100644 drivers/net/ethernet/marvell/octeontx2/nic/qos_sq.c

--
2.17.1

^ permalink raw reply	[flat|nested] 19+ messages in thread

* [net-next Patch v5 1/6] sch_htb: Allow HTB priority parameter in offload mode
  2023-03-26 18:12 [net-next Patch v5 0/6] octeontx2-pf: HTB offload support Hariprasad Kelam
@ 2023-03-26 18:12 ` Hariprasad Kelam
  2023-03-28 14:44   ` Simon Horman
  2023-03-26 18:12 ` [net-next Patch v5 2/6] octeontx2-pf: Rename tot_tx_queues to non_qos_queues Hariprasad Kelam
                   ` (4 subsequent siblings)
  5 siblings, 1 reply; 19+ messages in thread
From: Hariprasad Kelam @ 2023-03-26 18:12 UTC (permalink / raw)
  To: netdev, linux-kernel
  Cc: kuba, davem, willemdebruijn.kernel, andrew, sgoutham, lcherian,
	gakula, jerinj, sbhatta, hkelam, naveenm, edumazet, pabeni, jhs,
	xiyou.wangcong, jiri, maxtram95

From: Naveen Mamindlapalli <naveenm@marvell.com>

The current implementation of HTB offload returns the EINVAL error
for unsupported parameters like prio and quantum. This patch removes
the error check for the 'prio' parameter and populates its value in
the tc_htb_qopt_offload structure so that drivers can use it.

Add a prio parameter check in the mlx5 driver, as mlx5 devices are not
capable of supporting the prio parameter when HTB offload is used.
Report an error if the prio parameter is set to a non-default value.
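
With this change, a user can request a strict priority for an
offloaded HTB class with the standard tc syntax, for example (the
interface name and values are only illustrative):

  tc qdisc replace dev eth0 root handle 1: htb offload
  tc class add dev eth0 parent 1: classid 1:1 htb rate 100mbit prio 1

On mlx5 the class add above is rejected with the new extack message,
while drivers that do support strict priority can act on the prio
value passed in tc_htb_qopt_offload.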

Signed-off-by: Naveen Mamindlapalli <naveenm@marvell.com>
Co-developed-by: Rahul Rameshbabu <rrameshbabu@nvidia.com>
Signed-off-by: Rahul Rameshbabu <rrameshbabu@nvidia.com>
Signed-off-by: Hariprasad Kelam <hkelam@marvell.com>
Signed-off-by: Sunil Kovvuri Goutham <sgoutham@marvell.com>
---
 drivers/net/ethernet/mellanox/mlx5/core/en/qos.c | 7 ++++++-
 include/net/pkt_cls.h                            | 1 +
 net/sched/sch_htb.c                              | 7 +++----
 3 files changed, 10 insertions(+), 5 deletions(-)

diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/qos.c b/drivers/net/ethernet/mellanox/mlx5/core/en/qos.c
index 2842195ee548..1874c2f0587f 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en/qos.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en/qos.c
@@ -379,6 +379,12 @@ int mlx5e_htb_setup_tc(struct mlx5e_priv *priv, struct tc_htb_qopt_offload *htb_
 	if (!htb && htb_qopt->command != TC_HTB_CREATE)
 		return -EINVAL;
 
+	if (htb_qopt->prio) {
+		NL_SET_ERR_MSG_MOD(htb_qopt->extack,
+				   "prio parameter is not supported by device with HTB offload enabled.");
+		return -EOPNOTSUPP;
+	}
+
 	switch (htb_qopt->command) {
 	case TC_HTB_CREATE:
 		if (!mlx5_qos_is_supported(priv->mdev)) {
@@ -515,4 +521,3 @@ int mlx5e_mqprio_rl_get_node_hw_id(struct mlx5e_mqprio_rl *rl, int tc, u32 *hw_i
 	*hw_id = rl->leaves_id[tc];
 	return 0;
 }
-
diff --git a/include/net/pkt_cls.h b/include/net/pkt_cls.h
index b3b5b0b62f16..a2ea45c7b53e 100644
--- a/include/net/pkt_cls.h
+++ b/include/net/pkt_cls.h
@@ -868,6 +868,7 @@ struct tc_htb_qopt_offload {
 	u16 qid;
 	u64 rate;
 	u64 ceil;
+	u8 prio;
 };
 
 #define TC_HTB_CLASSID_ROOT U32_MAX
diff --git a/net/sched/sch_htb.c b/net/sched/sch_htb.c
index 92f2975b6a82..1cd9b48c96cd 100644
--- a/net/sched/sch_htb.c
+++ b/net/sched/sch_htb.c
@@ -1814,10 +1814,6 @@ static int htb_change_class(struct Qdisc *sch, u32 classid,
 			NL_SET_ERR_MSG(extack, "HTB offload doesn't support the quantum parameter");
 			goto failure;
 		}
-		if (hopt->prio) {
-			NL_SET_ERR_MSG(extack, "HTB offload doesn't support the prio parameter");
-			goto failure;
-		}
 	}
 
 	/* Keeping backward compatible with rate_table based iproute2 tc */
@@ -1913,6 +1909,7 @@ static int htb_change_class(struct Qdisc *sch, u32 classid,
 					TC_HTB_CLASSID_ROOT,
 				.rate = max_t(u64, hopt->rate.rate, rate64),
 				.ceil = max_t(u64, hopt->ceil.rate, ceil64),
+				.prio = hopt->prio,
 				.extack = extack,
 			};
 			err = htb_offload(dev, &offload_opt);
@@ -1933,6 +1930,7 @@ static int htb_change_class(struct Qdisc *sch, u32 classid,
 					TC_H_MIN(parent->common.classid),
 				.rate = max_t(u64, hopt->rate.rate, rate64),
 				.ceil = max_t(u64, hopt->ceil.rate, ceil64),
+				.prio = hopt->prio,
 				.extack = extack,
 			};
 			err = htb_offload(dev, &offload_opt);
@@ -2018,6 +2016,7 @@ static int htb_change_class(struct Qdisc *sch, u32 classid,
 				.classid = cl->common.classid,
 				.rate = max_t(u64, hopt->rate.rate, rate64),
 				.ceil = max_t(u64, hopt->ceil.rate, ceil64),
+				.prio = hopt->prio,
 				.extack = extack,
 			};
 			err = htb_offload(dev, &offload_opt);
-- 
2.17.1


^ permalink raw reply related	[flat|nested] 19+ messages in thread

* [net-next Patch v5 2/6] octeontx2-pf: Rename tot_tx_queues to non_qos_queues
  2023-03-26 18:12 [net-next Patch v5 0/6] octeontx2-pf: HTB offload support Hariprasad Kelam
  2023-03-26 18:12 ` [net-next Patch v5 1/6] sch_htb: Allow HTB priority parameter in offload mode Hariprasad Kelam
@ 2023-03-26 18:12 ` Hariprasad Kelam
  2023-03-28 14:44   ` Simon Horman
  2023-03-26 18:12 ` [net-next Patch v5 3/6] octeontx2-pf: qos send queues management Hariprasad Kelam
                   ` (3 subsequent siblings)
  5 siblings, 1 reply; 19+ messages in thread
From: Hariprasad Kelam @ 2023-03-26 18:12 UTC (permalink / raw)
  To: netdev, linux-kernel
  Cc: kuba, davem, willemdebruijn.kernel, andrew, sgoutham, lcherian,
	gakula, jerinj, sbhatta, hkelam, naveenm, edumazet, pabeni, jhs,
	xiyou.wangcong, jiri, maxtram95

In the current implementation, tot_tx_queues contains both XDP queues
and normal TX queues, which are allocated on interface open and
deallocated on interface down, respectively.

With the addition of QOS, where send queues are allocated/deallocated
upon user request, the QOS send queues will not be part of
tot_tx_queues. This patch therefore renames tot_tx_queues to
non_qos_queues.

Signed-off-by: Hariprasad Kelam <hkelam@marvell.com>
---
 .../ethernet/marvell/octeontx2/nic/otx2_common.c   | 12 ++++++------
 .../ethernet/marvell/octeontx2/nic/otx2_common.h   |  2 +-
 .../net/ethernet/marvell/octeontx2/nic/otx2_pf.c   | 14 +++++++-------
 .../net/ethernet/marvell/octeontx2/nic/otx2_vf.c   |  2 +-
 4 files changed, 15 insertions(+), 15 deletions(-)

diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.c b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.c
index 8a41ad8ca04f..43bc56fb3c33 100644
--- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.c
+++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.c
@@ -762,7 +762,7 @@ void otx2_sqb_flush(struct otx2_nic *pfvf)
 	int timeout = 1000;
 
 	ptr = (u64 *)otx2_get_regaddr(pfvf, NIX_LF_SQ_OP_STATUS);
-	for (qidx = 0; qidx < pfvf->hw.tot_tx_queues; qidx++) {
+	for (qidx = 0; qidx < pfvf->hw.non_qos_queues; qidx++) {
 		incr = (u64)qidx << 32;
 		while (timeout) {
 			val = otx2_atomic64_add(incr, ptr);
@@ -1048,7 +1048,7 @@ int otx2_config_nix_queues(struct otx2_nic *pfvf)
 	}
 
 	/* Initialize TX queues */
-	for (qidx = 0; qidx < pfvf->hw.tot_tx_queues; qidx++) {
+	for (qidx = 0; qidx < pfvf->hw.non_qos_queues; qidx++) {
 		u16 sqb_aura = otx2_get_pool_idx(pfvf, AURA_NIX_SQ, qidx);
 
 		err = otx2_sq_init(pfvf, qidx, sqb_aura);
@@ -1095,7 +1095,7 @@ int otx2_config_nix(struct otx2_nic *pfvf)
 
 	/* Set RQ/SQ/CQ counts */
 	nixlf->rq_cnt = pfvf->hw.rx_queues;
-	nixlf->sq_cnt = pfvf->hw.tot_tx_queues;
+	nixlf->sq_cnt = pfvf->hw.non_qos_queues;
 	nixlf->cq_cnt = pfvf->qset.cq_cnt;
 	nixlf->rss_sz = MAX_RSS_INDIR_TBL_SIZE;
 	nixlf->rss_grps = MAX_RSS_GROUPS;
@@ -1133,7 +1133,7 @@ void otx2_sq_free_sqbs(struct otx2_nic *pfvf)
 	int sqb, qidx;
 	u64 iova, pa;
 
-	for (qidx = 0; qidx < hw->tot_tx_queues; qidx++) {
+	for (qidx = 0; qidx < hw->non_qos_queues; qidx++) {
 		sq = &qset->sq[qidx];
 		if (!sq->sqb_ptrs)
 			continue;
@@ -1349,7 +1349,7 @@ int otx2_sq_aura_pool_init(struct otx2_nic *pfvf)
 	stack_pages =
 		(num_sqbs + hw->stack_pg_ptrs - 1) / hw->stack_pg_ptrs;
 
-	for (qidx = 0; qidx < hw->tot_tx_queues; qidx++) {
+	for (qidx = 0; qidx < hw->non_qos_queues; qidx++) {
 		pool_id = otx2_get_pool_idx(pfvf, AURA_NIX_SQ, qidx);
 		/* Initialize aura context */
 		err = otx2_aura_init(pfvf, pool_id, pool_id, num_sqbs);
@@ -1369,7 +1369,7 @@ int otx2_sq_aura_pool_init(struct otx2_nic *pfvf)
 		goto fail;
 
 	/* Allocate pointers and free them to aura/pool */
-	for (qidx = 0; qidx < hw->tot_tx_queues; qidx++) {
+	for (qidx = 0; qidx < hw->non_qos_queues; qidx++) {
 		pool_id = otx2_get_pool_idx(pfvf, AURA_NIX_SQ, qidx);
 		pool = &pfvf->qset.pool[pool_id];
 
diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.h b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.h
index 3d22cc6a2804..b926a50138cc 100644
--- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.h
+++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.h
@@ -189,7 +189,7 @@ struct otx2_hw {
 	u16                     rx_queues;
 	u16                     tx_queues;
 	u16                     xdp_queues;
-	u16                     tot_tx_queues;
+	u16                     non_qos_queues; /* tx queues plus xdp queues */
 	u16			max_queues;
 	u16			pool_cnt;
 	u16			rqpool_cnt;
diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_pf.c b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_pf.c
index 179433d0a54a..33d677849aa9 100644
--- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_pf.c
+++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_pf.c
@@ -1257,7 +1257,7 @@ static irqreturn_t otx2_q_intr_handler(int irq, void *data)
 	}
 
 	/* SQ */
-	for (qidx = 0; qidx < pf->hw.tot_tx_queues; qidx++) {
+	for (qidx = 0; qidx < pf->hw.non_qos_queues; qidx++) {
 		u64 sq_op_err_dbg, mnq_err_dbg, snd_err_dbg;
 		u8 sq_op_err_code, mnq_err_code, snd_err_code;
 
@@ -1383,7 +1383,7 @@ static void otx2_free_sq_res(struct otx2_nic *pf)
 	otx2_ctx_disable(&pf->mbox, NIX_AQ_CTYPE_SQ, false);
 	/* Free SQB pointers */
 	otx2_sq_free_sqbs(pf);
-	for (qidx = 0; qidx < pf->hw.tot_tx_queues; qidx++) {
+	for (qidx = 0; qidx < pf->hw.non_qos_queues; qidx++) {
 		sq = &qset->sq[qidx];
 		qmem_free(pf->dev, sq->sqe);
 		qmem_free(pf->dev, sq->tso_hdrs);
@@ -1433,7 +1433,7 @@ static int otx2_init_hw_resources(struct otx2_nic *pf)
 	 * so, aura count = pool count.
 	 */
 	hw->rqpool_cnt = hw->rx_queues;
-	hw->sqpool_cnt = hw->tot_tx_queues;
+	hw->sqpool_cnt = hw->non_qos_queues;
 	hw->pool_cnt = hw->rqpool_cnt + hw->sqpool_cnt;
 
 	/* Maximum hardware supported transmit length */
@@ -1688,7 +1688,7 @@ int otx2_open(struct net_device *netdev)
 
 	netif_carrier_off(netdev);
 
-	pf->qset.cq_cnt = pf->hw.rx_queues + pf->hw.tot_tx_queues;
+	pf->qset.cq_cnt = pf->hw.rx_queues + pf->hw.non_qos_queues;
 	/* RQ and SQs are mapped to different CQs,
 	 * so find out max CQ IRQs (i.e CINTs) needed.
 	 */
@@ -1708,7 +1708,7 @@ int otx2_open(struct net_device *netdev)
 	if (!qset->cq)
 		goto err_free_mem;
 
-	qset->sq = kcalloc(pf->hw.tot_tx_queues,
+	qset->sq = kcalloc(pf->hw.non_qos_queues,
 			   sizeof(struct otx2_snd_queue), GFP_KERNEL);
 	if (!qset->sq)
 		goto err_free_mem;
@@ -2520,7 +2520,7 @@ static int otx2_xdp_setup(struct otx2_nic *pf, struct bpf_prog *prog)
 		xdp_features_clear_redirect_target(dev);
 	}
 
-	pf->hw.tot_tx_queues += pf->hw.xdp_queues;
+	pf->hw.non_qos_queues += pf->hw.xdp_queues;
 
 	if (if_up)
 		otx2_open(pf->netdev);
@@ -2751,7 +2751,7 @@ static int otx2_probe(struct pci_dev *pdev, const struct pci_device_id *id)
 	hw->pdev = pdev;
 	hw->rx_queues = qcount;
 	hw->tx_queues = qcount;
-	hw->tot_tx_queues = qcount;
+	hw->non_qos_queues = qcount;
 	hw->max_queues = qcount;
 	hw->rbuf_len = OTX2_DEFAULT_RBUF_LEN;
 	/* Use CQE of 128 byte descriptor size by default */
diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_vf.c b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_vf.c
index ab126f8706c7..a078949430ce 100644
--- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_vf.c
+++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_vf.c
@@ -570,7 +570,7 @@ static int otx2vf_probe(struct pci_dev *pdev, const struct pci_device_id *id)
 	hw->rx_queues = qcount;
 	hw->tx_queues = qcount;
 	hw->max_queues = qcount;
-	hw->tot_tx_queues = qcount;
+	hw->non_qos_queues = qcount;
 	hw->rbuf_len = OTX2_DEFAULT_RBUF_LEN;
 	/* Use CQE of 128 byte descriptor size by default */
 	hw->xqe_size = 128;
-- 
2.17.1


^ permalink raw reply related	[flat|nested] 19+ messages in thread

* [net-next Patch v5 3/6] octeontx2-pf: qos send queues management
  2023-03-26 18:12 [net-next Patch v5 0/6] octeontx2-pf: HTB offload support Hariprasad Kelam
  2023-03-26 18:12 ` [net-next Patch v5 1/6] sch_htb: Allow HTB priority parameter in offload mode Hariprasad Kelam
  2023-03-26 18:12 ` [net-next Patch v5 2/6] octeontx2-pf: Rename tot_tx_queues to non_qos_queues Hariprasad Kelam
@ 2023-03-26 18:12 ` Hariprasad Kelam
  2023-03-28 14:43   ` Simon Horman
  2023-03-26 18:12 ` [net-next Patch v5 4/6] octeontx2-pf: Refactor scheduler queue alloc/free calls Hariprasad Kelam
                   ` (2 subsequent siblings)
  5 siblings, 1 reply; 19+ messages in thread
From: Hariprasad Kelam @ 2023-03-26 18:12 UTC (permalink / raw)
  To: netdev, linux-kernel
  Cc: kuba, davem, willemdebruijn.kernel, andrew, sgoutham, lcherian,
	gakula, jerinj, sbhatta, hkelam, naveenm, edumazet, pabeni, jhs,
	xiyou.wangcong, jiri, maxtram95

From: Subbaraya Sundeep <sbhatta@marvell.com>

In the current implementation, the number of send queues (SQs) is
decided at device probe time and is equal to the number of online
CPUs. These SQs are allocated and deallocated on interface open and
close, respectively.

This patch defines new APIs for initializing and deinitializing send
queues dynamically and allocates additional transmit queues for the
QOS feature.
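
In the context of the later HTB offload patch, the user request that
exercises these APIs is a tc class add/delete on an offloaded HTB
qdisc; for example (interface name and values are only illustrative):

  tc qdisc replace dev eth0 root handle 1: htb offload
  # leaf class creation allocates a QOS send queue on demand
  tc class add dev eth0 parent 1: classid 1:1 htb rate 100mbit
  # deleting the class frees the queue again
  tc class del dev eth0 classid 1:1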

Signed-off-by: Subbaraya Sundeep <sbhatta@marvell.com>
Signed-off-by: Hariprasad Kelam <hkelam@marvell.com>
Signed-off-by: Sunil Kovvuri Goutham <sgoutham@marvell.com>
---
 .../marvell/octeontx2/af/rvu_debugfs.c        |   5 +
 .../ethernet/marvell/octeontx2/nic/Makefile   |   2 +-
 .../marvell/octeontx2/nic/otx2_common.c       |  43 ++-
 .../marvell/octeontx2/nic/otx2_common.h       |  30 +-
 .../ethernet/marvell/octeontx2/nic/otx2_pf.c  |  44 ++-
 .../marvell/octeontx2/nic/otx2_txrx.c         |  24 +-
 .../marvell/octeontx2/nic/otx2_txrx.h         |   3 +-
 .../ethernet/marvell/octeontx2/nic/otx2_vf.c  |   7 +-
 .../net/ethernet/marvell/octeontx2/nic/qos.h  |  19 ++
 .../ethernet/marvell/octeontx2/nic/qos_sq.c   | 290 ++++++++++++++++++
 10 files changed, 427 insertions(+), 40 deletions(-)
 create mode 100644 drivers/net/ethernet/marvell/octeontx2/nic/qos.h
 create mode 100644 drivers/net/ethernet/marvell/octeontx2/nic/qos_sq.c

diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu_debugfs.c b/drivers/net/ethernet/marvell/octeontx2/af/rvu_debugfs.c
index 26cfa501f1a1..66d3354e02b2 100644
--- a/drivers/net/ethernet/marvell/octeontx2/af/rvu_debugfs.c
+++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu_debugfs.c
@@ -1221,6 +1221,11 @@ static int rvu_dbg_npa_ctx_display(struct seq_file *m, void *unused, int ctype)
 
 	for (aura = id; aura < max_id; aura++) {
 		aq_req.aura_id = aura;
+
+		/* Skip if queue is uninitialized */
+		if (ctype == NPA_AQ_CTYPE_POOL && !test_bit(aura, pfvf->pool_bmap))
+			continue;
+
 		seq_printf(m, "======%s : %d=======\n",
 			   (ctype == NPA_AQ_CTYPE_AURA) ? "AURA" : "POOL",
 			aq_req.aura_id);
diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/Makefile b/drivers/net/ethernet/marvell/octeontx2/nic/Makefile
index 73fdb8798614..3d31ddf7c652 100644
--- a/drivers/net/ethernet/marvell/octeontx2/nic/Makefile
+++ b/drivers/net/ethernet/marvell/octeontx2/nic/Makefile
@@ -8,7 +8,7 @@ obj-$(CONFIG_OCTEONTX2_VF) += rvu_nicvf.o otx2_ptp.o
 
 rvu_nicpf-y := otx2_pf.o otx2_common.o otx2_txrx.o otx2_ethtool.o \
                otx2_flows.o otx2_tc.o cn10k.o otx2_dmac_flt.o \
-               otx2_devlink.o
+               otx2_devlink.o qos_sq.o
 rvu_nicvf-y := otx2_vf.o otx2_devlink.o
 
 rvu_nicpf-$(CONFIG_DCB) += otx2_dcbnl.o
diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.c b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.c
index 43bc56fb3c33..adbcc087d2a8 100644
--- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.c
+++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.c
@@ -513,8 +513,8 @@ void otx2_config_irq_coalescing(struct otx2_nic *pfvf, int qidx)
 		     (pfvf->hw.cq_ecount_wait - 1));
 }
 
-int __otx2_alloc_rbuf(struct otx2_nic *pfvf, struct otx2_pool *pool,
-		      dma_addr_t *dma)
+static int __otx2_alloc_rbuf(struct otx2_nic *pfvf, struct otx2_pool *pool,
+			     dma_addr_t *dma)
 {
 	u8 *buf;
 
@@ -532,8 +532,8 @@ int __otx2_alloc_rbuf(struct otx2_nic *pfvf, struct otx2_pool *pool,
 	return 0;
 }
 
-static int otx2_alloc_rbuf(struct otx2_nic *pfvf, struct otx2_pool *pool,
-			   dma_addr_t *dma)
+int otx2_alloc_rbuf(struct otx2_nic *pfvf, struct otx2_pool *pool,
+		    dma_addr_t *dma)
 {
 	int ret;
 
@@ -758,11 +758,16 @@ int otx2_txschq_stop(struct otx2_nic *pfvf)
 void otx2_sqb_flush(struct otx2_nic *pfvf)
 {
 	int qidx, sqe_tail, sqe_head;
+	struct otx2_snd_queue *sq;
 	u64 incr, *ptr, val;
 	int timeout = 1000;
 
 	ptr = (u64 *)otx2_get_regaddr(pfvf, NIX_LF_SQ_OP_STATUS);
-	for (qidx = 0; qidx < pfvf->hw.non_qos_queues; qidx++) {
+	for (qidx = 0; qidx < otx2_get_total_tx_queues(pfvf); qidx++) {
+		sq = &pfvf->qset.sq[qidx];
+		if (!sq->sqb_ptrs)
+			continue;
+
 		incr = (u64)qidx << 32;
 		while (timeout) {
 			val = otx2_atomic64_add(incr, ptr);
@@ -862,7 +867,7 @@ int otx2_sq_aq_init(void *dev, u16 qidx, u16 sqb_aura)
 	return otx2_sync_mbox_msg(&pfvf->mbox);
 }
 
-static int otx2_sq_init(struct otx2_nic *pfvf, u16 qidx, u16 sqb_aura)
+int otx2_sq_init(struct otx2_nic *pfvf, u16 qidx, u16 sqb_aura)
 {
 	struct otx2_qset *qset = &pfvf->qset;
 	struct otx2_snd_queue *sq;
@@ -935,9 +940,17 @@ static int otx2_cq_init(struct otx2_nic *pfvf, u16 qidx)
 		cq->cint_idx = qidx - pfvf->hw.rx_queues;
 		cq->cqe_cnt = qset->sqe_cnt;
 	} else {
-		cq->cq_type = CQ_XDP;
-		cq->cint_idx = qidx - non_xdp_queues;
-		cq->cqe_cnt = qset->sqe_cnt;
+		if (pfvf->hw.xdp_queues &&
+		    qidx < non_xdp_queues + pfvf->hw.xdp_queues) {
+			cq->cq_type = CQ_XDP;
+			cq->cint_idx = qidx - non_xdp_queues;
+			cq->cqe_cnt = qset->sqe_cnt;
+		} else {
+			cq->cq_type = CQ_QOS;
+			cq->cint_idx = qidx - non_xdp_queues -
+				       pfvf->hw.xdp_queues;
+			cq->cqe_cnt = qset->sqe_cnt;
+		}
 	}
 	cq->cqe_size = pfvf->qset.xqe_size;
 
@@ -1095,7 +1108,7 @@ int otx2_config_nix(struct otx2_nic *pfvf)
 
 	/* Set RQ/SQ/CQ counts */
 	nixlf->rq_cnt = pfvf->hw.rx_queues;
-	nixlf->sq_cnt = pfvf->hw.non_qos_queues;
+	nixlf->sq_cnt = otx2_get_total_tx_queues(pfvf);
 	nixlf->cq_cnt = pfvf->qset.cq_cnt;
 	nixlf->rss_sz = MAX_RSS_INDIR_TBL_SIZE;
 	nixlf->rss_grps = MAX_RSS_GROUPS;
@@ -1133,7 +1146,7 @@ void otx2_sq_free_sqbs(struct otx2_nic *pfvf)
 	int sqb, qidx;
 	u64 iova, pa;
 
-	for (qidx = 0; qidx < hw->non_qos_queues; qidx++) {
+	for (qidx = 0; qidx < otx2_get_total_tx_queues(pfvf); qidx++) {
 		sq = &qset->sq[qidx];
 		if (!sq->sqb_ptrs)
 			continue;
@@ -1201,8 +1214,8 @@ void otx2_aura_pool_free(struct otx2_nic *pfvf)
 	pfvf->qset.pool = NULL;
 }
 
-static int otx2_aura_init(struct otx2_nic *pfvf, int aura_id,
-			  int pool_id, int numptrs)
+int otx2_aura_init(struct otx2_nic *pfvf, int aura_id,
+		   int pool_id, int numptrs)
 {
 	struct npa_aq_enq_req *aq;
 	struct otx2_pool *pool;
@@ -1278,8 +1291,8 @@ static int otx2_aura_init(struct otx2_nic *pfvf, int aura_id,
 	return 0;
 }
 
-static int otx2_pool_init(struct otx2_nic *pfvf, u16 pool_id,
-			  int stack_pages, int numptrs, int buf_size)
+int otx2_pool_init(struct otx2_nic *pfvf, u16 pool_id,
+		   int stack_pages, int numptrs, int buf_size)
 {
 	struct npa_aq_enq_req *aq;
 	struct otx2_pool *pool;
diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.h b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.h
index b926a50138cc..3834cc447426 100644
--- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.h
+++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.h
@@ -27,6 +27,7 @@
 #include "otx2_txrx.h"
 #include "otx2_devlink.h"
 #include <rvu_trace.h>
+#include "qos.h"
 
 /* IPv4 flag more fragment bit */
 #define IPV4_FLAG_MORE				0x20
@@ -189,6 +190,7 @@ struct otx2_hw {
 	u16                     rx_queues;
 	u16                     tx_queues;
 	u16                     xdp_queues;
+	u16			tc_tx_queues;
 	u16                     non_qos_queues; /* tx queues plus xdp queues */
 	u16			max_queues;
 	u16			pool_cnt;
@@ -501,6 +503,8 @@ struct otx2_nic {
 	u16			pfc_schq_list[NIX_TXSCH_LVL_CNT][MAX_TXSCHQ_PER_FUNC];
 	bool			pfc_alloc_status[NIX_PF_PFC_PRIO_MAX];
 #endif
+	/* qos */
+	struct otx2_qos		qos;
 
 	/* napi event count. It is needed for adaptive irq coalescing. */
 	u32 napi_events;
@@ -888,12 +892,23 @@ static inline void otx2_dma_unmap_page(struct otx2_nic *pfvf,
 
 static inline u16 otx2_get_smq_idx(struct otx2_nic *pfvf, u16 qidx)
 {
+	u16 smq;
 #ifdef CONFIG_DCB
 	if (qidx < NIX_PF_PFC_PRIO_MAX && pfvf->pfc_alloc_status[qidx])
 		return pfvf->pfc_schq_list[NIX_TXSCH_LVL_SMQ][qidx];
 #endif
+	/* check if qidx falls under QOS queues */
+	if (qidx >= pfvf->hw.non_qos_queues)
+		smq = pfvf->qos.qid_to_sqmap[qidx - pfvf->hw.non_qos_queues];
+	else
+		smq = pfvf->hw.txschq_list[NIX_TXSCH_LVL_SMQ][0];
 
-	return pfvf->hw.txschq_list[NIX_TXSCH_LVL_SMQ][0];
+	return smq;
+}
+
+static inline u16 otx2_get_total_tx_queues(struct otx2_nic *pfvf)
+{
+	return pfvf->hw.non_qos_queues + pfvf->hw.tc_tx_queues;
 }
 
 /* MSI-X APIs */
@@ -922,17 +937,22 @@ int otx2_txschq_config(struct otx2_nic *pfvf, int lvl, int prio, bool pfc_en);
 int otx2_txsch_alloc(struct otx2_nic *pfvf);
 int otx2_txschq_stop(struct otx2_nic *pfvf);
 void otx2_sqb_flush(struct otx2_nic *pfvf);
-int __otx2_alloc_rbuf(struct otx2_nic *pfvf, struct otx2_pool *pool,
-		      dma_addr_t *dma);
+int otx2_alloc_rbuf(struct otx2_nic *pfvf, struct otx2_pool *pool,
+		    dma_addr_t *dma);
 int otx2_rxtx_enable(struct otx2_nic *pfvf, bool enable);
 void otx2_ctx_disable(struct mbox *mbox, int type, bool npa);
 int otx2_nix_config_bp(struct otx2_nic *pfvf, bool enable);
 void otx2_cleanup_rx_cqes(struct otx2_nic *pfvf, struct otx2_cq_queue *cq);
 void otx2_cleanup_tx_cqes(struct otx2_nic *pfvf, struct otx2_cq_queue *cq);
+int otx2_sq_init(struct otx2_nic *pfvf, u16 qidx, u16 sqb_aura);
 int otx2_sq_aq_init(void *dev, u16 qidx, u16 sqb_aura);
 int cn10k_sq_aq_init(void *dev, u16 qidx, u16 sqb_aura);
 int otx2_alloc_buffer(struct otx2_nic *pfvf, struct otx2_cq_queue *cq,
 		      dma_addr_t *dma);
+int otx2_pool_init(struct otx2_nic *pfvf, u16 pool_id,
+		   int stack_pages, int numptrs, int buf_size);
+int otx2_aura_init(struct otx2_nic *pfvf, int aura_id,
+		   int pool_id, int numptrs);
 
 /* RSS configuration APIs*/
 int otx2_rss_init(struct otx2_nic *pfvf);
@@ -1040,4 +1060,8 @@ static inline void cn10k_handle_mcs_event(struct otx2_nic *pfvf,
 {}
 #endif /* CONFIG_MACSEC */
 
+/* qos support */
+void otx2_qos_sq_setup(struct otx2_nic *pfvf, int qos_txqs);
+u16 otx2_select_queue(struct net_device *netdev, struct sk_buff *skb,
+		      struct net_device *sb_dev);
 #endif /* OTX2_COMMON_H */
diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_pf.c b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_pf.c
index 33d677849aa9..50288e638770 100644
--- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_pf.c
+++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_pf.c
@@ -23,6 +23,7 @@
 #include "otx2_struct.h"
 #include "otx2_ptp.h"
 #include "cn10k.h"
+#include "qos.h"
 #include <rvu_trace.h>
 
 #define DRV_NAME	"rvu_nicpf"
@@ -1228,6 +1229,7 @@ static char *nix_snd_status_e_str[NIX_SND_STATUS_MAX] =  {
 static irqreturn_t otx2_q_intr_handler(int irq, void *data)
 {
 	struct otx2_nic *pf = data;
+	struct otx2_snd_queue *sq;
 	u64 val, *ptr;
 	u64 qidx = 0;
 
@@ -1257,10 +1259,14 @@ static irqreturn_t otx2_q_intr_handler(int irq, void *data)
 	}
 
 	/* SQ */
-	for (qidx = 0; qidx < pf->hw.non_qos_queues; qidx++) {
+	for (qidx = 0; qidx < otx2_get_total_tx_queues(pf); qidx++) {
 		u64 sq_op_err_dbg, mnq_err_dbg, snd_err_dbg;
 		u8 sq_op_err_code, mnq_err_code, snd_err_code;
 
+		sq = &pf->qset.sq[qidx];
+		if (!sq->sqb_ptrs)
+			continue;
+
 		/* Below debug registers captures first errors corresponding to
 		 * those registers. We don't have to check against SQ qid as
 		 * these are fatal errors.
@@ -1383,7 +1389,7 @@ static void otx2_free_sq_res(struct otx2_nic *pf)
 	otx2_ctx_disable(&pf->mbox, NIX_AQ_CTYPE_SQ, false);
 	/* Free SQB pointers */
 	otx2_sq_free_sqbs(pf);
-	for (qidx = 0; qidx < pf->hw.non_qos_queues; qidx++) {
+	for (qidx = 0; qidx < otx2_get_total_tx_queues(pf); qidx++) {
 		sq = &qset->sq[qidx];
 		qmem_free(pf->dev, sq->sqe);
 		qmem_free(pf->dev, sq->tso_hdrs);
@@ -1433,7 +1439,7 @@ static int otx2_init_hw_resources(struct otx2_nic *pf)
 	 * so, aura count = pool count.
 	 */
 	hw->rqpool_cnt = hw->rx_queues;
-	hw->sqpool_cnt = hw->non_qos_queues;
+	hw->sqpool_cnt = otx2_get_total_tx_queues(pf);
 	hw->pool_cnt = hw->rqpool_cnt + hw->sqpool_cnt;
 
 	/* Maximum hardware supported transmit length */
@@ -1688,11 +1694,14 @@ int otx2_open(struct net_device *netdev)
 
 	netif_carrier_off(netdev);
 
-	pf->qset.cq_cnt = pf->hw.rx_queues + pf->hw.non_qos_queues;
 	/* RQ and SQs are mapped to different CQs,
 	 * so find out max CQ IRQs (i.e CINTs) needed.
 	 */
-	pf->hw.cint_cnt = max(pf->hw.rx_queues, pf->hw.tx_queues);
+	pf->hw.cint_cnt = max3(pf->hw.rx_queues, pf->hw.tx_queues,
+			       pf->hw.tc_tx_queues);
+
+	pf->qset.cq_cnt = pf->hw.rx_queues + otx2_get_total_tx_queues(pf);
+
 	qset->napi = kcalloc(pf->hw.cint_cnt, sizeof(*cq_poll), GFP_KERNEL);
 	if (!qset->napi)
 		return -ENOMEM;
@@ -1743,6 +1752,11 @@ int otx2_open(struct net_device *netdev)
 		else
 			cq_poll->cq_ids[CQ_XDP] = CINT_INVALID_CQ;
 
+		cq_poll->cq_ids[CQ_QOS] = (qidx < pf->hw.tc_tx_queues) ?
+					  (qidx + pf->hw.rx_queues +
+					   pf->hw.non_qos_queues) :
+					  CINT_INVALID_CQ;
+
 		cq_poll->dev = (void *)pf;
 		cq_poll->dim.mode = DIM_CQ_PERIOD_MODE_START_FROM_CQE;
 		INIT_WORK(&cq_poll->dim.work, otx2_dim_work);
@@ -1938,6 +1952,12 @@ static netdev_tx_t otx2_xmit(struct sk_buff *skb, struct net_device *netdev)
 	int qidx = skb_get_queue_mapping(skb);
 	struct otx2_snd_queue *sq;
 	struct netdev_queue *txq;
+	int sq_idx;
+
+	/* XDP SQs are not mapped with TXQs
+	 * advance qid to derive correct sq maped with QOS
+	 */
+	sq_idx = (qidx >= pf->hw.tx_queues) ? (qidx + pf->hw.xdp_queues) : qidx;
 
 	/* Check for minimum and maximum packet length */
 	if (skb->len <= ETH_HLEN ||
@@ -1946,7 +1966,7 @@ static netdev_tx_t otx2_xmit(struct sk_buff *skb, struct net_device *netdev)
 		return NETDEV_TX_OK;
 	}
 
-	sq = &pf->qset.sq[qidx];
+	sq = &pf->qset.sq[sq_idx];
 	txq = netdev_get_tx_queue(netdev, qidx);
 
 	if (!otx2_sq_append_skb(netdev, sq, skb, qidx)) {
@@ -1964,8 +1984,8 @@ static netdev_tx_t otx2_xmit(struct sk_buff *skb, struct net_device *netdev)
 	return NETDEV_TX_OK;
 }
 
-static u16 otx2_select_queue(struct net_device *netdev, struct sk_buff *skb,
-			     struct net_device *sb_dev)
+u16 otx2_select_queue(struct net_device *netdev, struct sk_buff *skb,
+		      struct net_device *sb_dev)
 {
 #ifdef CONFIG_DCB
 	struct otx2_nic *pf = netdev_priv(netdev);
@@ -1987,6 +2007,7 @@ static u16 otx2_select_queue(struct net_device *netdev, struct sk_buff *skb,
 #endif
 	return netdev_pick_tx(netdev, skb, NULL);
 }
+EXPORT_SYMBOL(otx2_select_queue);
 
 static netdev_features_t otx2_fix_features(struct net_device *dev,
 					   netdev_features_t features)
@@ -2703,10 +2724,10 @@ static void otx2_sriov_vfcfg_cleanup(struct otx2_nic *pf)
 static int otx2_probe(struct pci_dev *pdev, const struct pci_device_id *id)
 {
 	struct device *dev = &pdev->dev;
+	int err, qcount, qos_txqs;
 	struct net_device *netdev;
 	struct otx2_nic *pf;
 	struct otx2_hw *hw;
-	int err, qcount;
 	int num_vec;
 
 	err = pcim_enable_device(pdev);
@@ -2731,8 +2752,9 @@ static int otx2_probe(struct pci_dev *pdev, const struct pci_device_id *id)
 
 	/* Set number of queues */
 	qcount = min_t(int, num_online_cpus(), OTX2_MAX_CQ_CNT);
+	qos_txqs = min_t(int, qcount, OTX2_QOS_MAX_LEAF_NODES);
 
-	netdev = alloc_etherdev_mqs(sizeof(*pf), qcount, qcount);
+	netdev = alloc_etherdev_mqs(sizeof(*pf), qcount + qos_txqs, qcount);
 	if (!netdev) {
 		err = -ENOMEM;
 		goto err_release_regions;
@@ -2920,6 +2942,8 @@ static int otx2_probe(struct pci_dev *pdev, const struct pci_device_id *id)
 		goto err_pf_sriov_init;
 #endif
 
+	otx2_qos_sq_setup(pf, qos_txqs);
+
 	return 0;
 
 err_pf_sriov_init:
diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_txrx.c b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_txrx.c
index 7045fedfd73a..e288f46b23a8 100644
--- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_txrx.c
+++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_txrx.c
@@ -464,12 +464,13 @@ static int otx2_tx_napi_handler(struct otx2_nic *pfvf,
 			break;
 		}
 
-		if (cq->cq_type == CQ_XDP) {
+		qidx = cq->cq_idx - pfvf->hw.rx_queues;
+
+		if (cq->cq_type == CQ_XDP)
 			otx2_xdp_snd_pkt_handler(pfvf, sq, cqe);
-		} else {
-			otx2_snd_pkt_handler(pfvf, cq, sq, cqe, budget,
-					     &tx_pkts, &tx_bytes);
-		}
+		else
+			otx2_snd_pkt_handler(pfvf, cq, &pfvf->qset.sq[qidx],
+					     cqe, budget, &tx_pkts, &tx_bytes);
 
 		cqe->hdr.cqe_type = NIX_XQE_TYPE_INVALID;
 		processed_cqe++;
@@ -486,7 +487,11 @@ static int otx2_tx_napi_handler(struct otx2_nic *pfvf,
 	if (likely(tx_pkts)) {
 		struct netdev_queue *txq;
 
-		txq = netdev_get_tx_queue(pfvf->netdev, cq->cint_idx);
+		qidx = cq->cq_idx - pfvf->hw.rx_queues;
+
+		if (qidx >= pfvf->hw.tx_queues)
+			qidx -= pfvf->hw.xdp_queues;
+		txq = netdev_get_tx_queue(pfvf->netdev, qidx);
 		netdev_tx_completed_queue(txq, tx_pkts, tx_bytes);
 		/* Check if queue was stopped earlier due to ring full */
 		smp_mb();
@@ -736,7 +741,8 @@ static void otx2_sqe_add_hdr(struct otx2_nic *pfvf, struct otx2_snd_queue *sq,
 		sqe_hdr->aura = sq->aura_id;
 		/* Post a CQE Tx after pkt transmission */
 		sqe_hdr->pnc = 1;
-		sqe_hdr->sq = qidx;
+		sqe_hdr->sq = (qidx >=  pfvf->hw.tx_queues) ?
+			       qidx + pfvf->hw.xdp_queues : qidx;
 	}
 	sqe_hdr->total = skb->len;
 	/* Set SQE identifier which will be used later for freeing SKB */
@@ -1221,8 +1227,10 @@ void otx2_cleanup_tx_cqes(struct otx2_nic *pfvf, struct otx2_cq_queue *cq)
 	struct nix_cqe_tx_s *cqe;
 	int processed_cqe = 0;
 	struct sg_list *sg;
+	int qidx;
 
-	sq = &pfvf->qset.sq[cq->cint_idx];
+	qidx = cq->cq_idx - pfvf->hw.rx_queues;
+	sq = &pfvf->qset.sq[qidx];
 
 	if (otx2_nix_cq_op_status(pfvf, cq) || !cq->pend_cqe)
 		return;
diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_txrx.h b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_txrx.h
index 93cac2c2664c..7ab6db9a986f 100644
--- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_txrx.h
+++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_txrx.h
@@ -102,7 +102,8 @@ enum cq_type {
 	CQ_RX,
 	CQ_TX,
 	CQ_XDP,
-	CQS_PER_CINT = 3, /* RQ + SQ + XDP */
+	CQ_QOS,
+	CQS_PER_CINT = 4, /* RQ + SQ + XDP + QOS_SQ */
 };
 
 struct otx2_cq_poll {
diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_vf.c b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_vf.c
index a078949430ce..ec4f9dd4879e 100644
--- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_vf.c
+++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_vf.c
@@ -479,6 +479,7 @@ static const struct net_device_ops otx2vf_netdev_ops = {
 	.ndo_open = otx2vf_open,
 	.ndo_stop = otx2vf_stop,
 	.ndo_start_xmit = otx2vf_xmit,
+	.ndo_select_queue = otx2_select_queue,
 	.ndo_set_rx_mode = otx2vf_set_rx_mode,
 	.ndo_set_mac_address = otx2_set_mac_address,
 	.ndo_change_mtu = otx2vf_change_mtu,
@@ -524,10 +525,10 @@ static int otx2vf_probe(struct pci_dev *pdev, const struct pci_device_id *id)
 {
 	int num_vec = pci_msix_vec_count(pdev);
 	struct device *dev = &pdev->dev;
+	int err, qcount, qos_txqs;
 	struct net_device *netdev;
 	struct otx2_nic *vf;
 	struct otx2_hw *hw;
-	int err, qcount;
 
 	err = pcim_enable_device(pdev);
 	if (err) {
@@ -550,7 +551,8 @@ static int otx2vf_probe(struct pci_dev *pdev, const struct pci_device_id *id)
 	pci_set_master(pdev);
 
 	qcount = num_online_cpus();
-	netdev = alloc_etherdev_mqs(sizeof(*vf), qcount, qcount);
+	qos_txqs = min_t(int, qcount, OTX2_QOS_MAX_LEAF_NODES);
+	netdev = alloc_etherdev_mqs(sizeof(*vf), qcount + qos_txqs, qcount);
 	if (!netdev) {
 		err = -ENOMEM;
 		goto err_release_regions;
@@ -699,6 +701,7 @@ static int otx2vf_probe(struct pci_dev *pdev, const struct pci_device_id *id)
 	if (err)
 		goto err_shutdown_tc;
 #endif
+	otx2_qos_sq_setup(vf, qos_txqs);
 
 	return 0;
 
diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/qos.h b/drivers/net/ethernet/marvell/octeontx2/nic/qos.h
new file mode 100644
index 000000000000..73a62d092e99
--- /dev/null
+++ b/drivers/net/ethernet/marvell/octeontx2/nic/qos.h
@@ -0,0 +1,19 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Marvell RVU Ethernet driver
+ *
+ * Copyright (C) 2023 Marvell.
+ *
+ */
+#ifndef OTX2_QOS_H
+#define OTX2_QOS_H
+
+#define OTX2_QOS_MAX_LEAF_NODES                16
+
+int otx2_qos_enable_sq(struct otx2_nic *pfvf, int qidx, u16 smq);
+void otx2_qos_disable_sq(struct otx2_nic *pfvf, int qidx, u16 mdq);
+
+struct otx2_qos {
+	       u16 qid_to_sqmap[OTX2_QOS_MAX_LEAF_NODES];
+	};
+
+#endif
diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/qos_sq.c b/drivers/net/ethernet/marvell/octeontx2/nic/qos_sq.c
new file mode 100644
index 000000000000..1c77f024c360
--- /dev/null
+++ b/drivers/net/ethernet/marvell/octeontx2/nic/qos_sq.c
@@ -0,0 +1,290 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Marvell RVU Physical Function ethernet driver
+ *
+ * Copyright (C) 2023 Marvell.
+ *
+ */
+
+#include <linux/netdevice.h>
+#include <net/tso.h>
+
+#include "cn10k.h"
+#include "otx2_reg.h"
+#include "otx2_common.h"
+#include "otx2_txrx.h"
+#include "otx2_struct.h"
+
+#define OTX2_QOS_MAX_LEAF_NODES 16
+
+void otx2_qos_sq_setup(struct otx2_nic *pfvf, int qos_txqs)
+{
+	struct otx2_hw *hw = &pfvf->hw;
+
+	hw->tc_tx_queues = qos_txqs;
+}
+EXPORT_SYMBOL(otx2_qos_sq_setup);
+
+static void otx2_qos_aura_pool_free(struct otx2_nic *pfvf, int pool_id)
+{
+	struct otx2_pool *pool;
+
+	if (!pfvf->qset.pool)
+		return;
+
+	pool = &pfvf->qset.pool[pool_id];
+	qmem_free(pfvf->dev, pool->stack);
+	qmem_free(pfvf->dev, pool->fc_addr);
+	pool->stack = NULL;
+	pool->fc_addr = NULL;
+}
+
+static int otx2_qos_sq_aura_pool_init(struct otx2_nic *pfvf, int qidx)
+{
+	struct otx2_qset *qset = &pfvf->qset;
+	int pool_id, stack_pages, num_sqbs;
+	struct otx2_hw *hw = &pfvf->hw;
+	struct otx2_snd_queue *sq;
+	struct otx2_pool *pool;
+	dma_addr_t bufptr;
+	int err, ptr;
+	u64 iova, pa;
+
+	/* Calculate number of SQBs needed.
+	 *
+	 * For a 128byte SQE, and 4K size SQB, 31 SQEs will fit in one SQB.
+	 * Last SQE is used for pointing to next SQB.
+	 */
+	num_sqbs = (hw->sqb_size / 128) - 1;
+	num_sqbs = (qset->sqe_cnt + num_sqbs) / num_sqbs;
+
+	/* Get no of stack pages needed */
+	stack_pages =
+		(num_sqbs + hw->stack_pg_ptrs - 1) / hw->stack_pg_ptrs;
+
+	pool_id = otx2_get_pool_idx(pfvf, AURA_NIX_SQ, qidx);
+	pool = &pfvf->qset.pool[pool_id];
+
+	/* Initialize aura context */
+	err = otx2_aura_init(pfvf, pool_id, pool_id, num_sqbs);
+	if (err)
+		return err;
+
+	/* Initialize pool context */
+	err = otx2_pool_init(pfvf, pool_id, stack_pages,
+			     num_sqbs, hw->sqb_size);
+	if (err)
+		goto aura_free;
+
+	/* Flush accumulated messages */
+	err = otx2_sync_mbox_msg(&pfvf->mbox);
+	if (err)
+		goto pool_free;
+
+	/* Allocate pointers and free them to aura/pool */
+	sq = &qset->sq[qidx];
+	sq->sqb_count = 0;
+	sq->sqb_ptrs = kcalloc(num_sqbs, sizeof(*sq->sqb_ptrs), GFP_KERNEL);
+	if (!sq->sqb_ptrs) {
+		err = -ENOMEM;
+		goto pool_free;
+	}
+
+	for (ptr = 0; ptr < num_sqbs; ptr++) {
+		err = otx2_alloc_rbuf(pfvf, pool, &bufptr);
+		if (err)
+			goto sqb_free;
+		pfvf->hw_ops->aura_freeptr(pfvf, pool_id, bufptr);
+		sq->sqb_ptrs[sq->sqb_count++] = (u64)bufptr;
+	}
+
+	return 0;
+
+sqb_free:
+	while (ptr--) {
+		if (!sq->sqb_ptrs[ptr])
+			continue;
+		iova = sq->sqb_ptrs[ptr];
+		pa = otx2_iova_to_phys(pfvf->iommu_domain, iova);
+		dma_unmap_page_attrs(pfvf->dev, iova, hw->sqb_size,
+				     DMA_FROM_DEVICE,
+				     DMA_ATTR_SKIP_CPU_SYNC);
+		put_page(virt_to_page(phys_to_virt(pa)));
+		otx2_aura_allocptr(pfvf, pool_id);
+	}
+	sq->sqb_count = 0;
+	kfree(sq->sqb_ptrs);
+pool_free:
+	qmem_free(pfvf->dev, pool->stack);
+aura_free:
+	qmem_free(pfvf->dev, pool->fc_addr);
+	otx2_mbox_reset(&pfvf->mbox.mbox, 0);
+	return err;
+}
+
+static void otx2_qos_sq_free_sqbs(struct otx2_nic *pfvf, int qidx)
+{
+	struct otx2_qset *qset = &pfvf->qset;
+	struct otx2_hw *hw = &pfvf->hw;
+	struct otx2_snd_queue *sq;
+	u64 iova, pa;
+	int sqb;
+
+	sq = &qset->sq[qidx];
+	if (!sq->sqb_ptrs)
+		return;
+	for (sqb = 0; sqb < sq->sqb_count; sqb++) {
+		if (!sq->sqb_ptrs[sqb])
+			continue;
+		iova = sq->sqb_ptrs[sqb];
+		pa = otx2_iova_to_phys(pfvf->iommu_domain, iova);
+		dma_unmap_page_attrs(pfvf->dev, iova, hw->sqb_size,
+				     DMA_FROM_DEVICE,
+				     DMA_ATTR_SKIP_CPU_SYNC);
+		put_page(virt_to_page(phys_to_virt(pa)));
+	}
+
+	sq->sqb_count = 0;
+
+	sq = &qset->sq[qidx];
+	qmem_free(pfvf->dev, sq->sqe);
+	qmem_free(pfvf->dev, sq->tso_hdrs);
+	kfree(sq->sg);
+	kfree(sq->sqb_ptrs);
+	qmem_free(pfvf->dev, sq->timestamps);
+
+	memset((void *)sq, 0, sizeof(*sq));
+}
+
+/* send queue id */
+static void otx2_qos_sqb_flush(struct otx2_nic *pfvf, int qidx)
+{
+	int sqe_tail, sqe_head;
+	u64 incr, *ptr, val;
+
+	ptr = (__force u64 *)otx2_get_regaddr(pfvf, NIX_LF_SQ_OP_STATUS);
+	incr = (u64)qidx << 32;
+	val = otx2_atomic64_add(incr, ptr);
+	sqe_head = (val >> 20) & 0x3F;
+	sqe_tail = (val >> 28) & 0x3F;
+	if (sqe_head != sqe_tail)
+		usleep_range(50, 60);
+}
+
+static int otx2_qos_ctx_disable(struct otx2_nic *pfvf, u16 qidx, int aura_id)
+{
+	struct nix_cn10k_aq_enq_req *cn10k_sq_aq;
+	struct npa_aq_enq_req *aura_aq;
+	struct npa_aq_enq_req *pool_aq;
+	struct nix_aq_enq_req *sq_aq;
+
+	if (test_bit(CN10K_LMTST, &pfvf->hw.cap_flag)) {
+		cn10k_sq_aq = otx2_mbox_alloc_msg_nix_cn10k_aq_enq(&pfvf->mbox);
+		if (!cn10k_sq_aq)
+			return -ENOMEM;
+		cn10k_sq_aq->qidx = qidx;
+		cn10k_sq_aq->sq.ena = 0;
+		cn10k_sq_aq->sq_mask.ena = 1;
+		cn10k_sq_aq->ctype = NIX_AQ_CTYPE_SQ;
+		cn10k_sq_aq->op = NIX_AQ_INSTOP_WRITE;
+	} else {
+		sq_aq = otx2_mbox_alloc_msg_nix_aq_enq(&pfvf->mbox);
+		if (!sq_aq)
+			return -ENOMEM;
+		sq_aq->qidx = qidx;
+		sq_aq->sq.ena = 0;
+		sq_aq->sq_mask.ena = 1;
+		sq_aq->ctype = NIX_AQ_CTYPE_SQ;
+		sq_aq->op = NIX_AQ_INSTOP_WRITE;
+	}
+
+	aura_aq = otx2_mbox_alloc_msg_npa_aq_enq(&pfvf->mbox);
+	if (!aura_aq) {
+		otx2_mbox_reset(&pfvf->mbox.mbox, 0);
+		return -ENOMEM;
+	}
+
+	aura_aq->aura_id = aura_id;
+	aura_aq->aura.ena = 0;
+	aura_aq->aura_mask.ena = 1;
+	aura_aq->ctype = NPA_AQ_CTYPE_AURA;
+	aura_aq->op = NPA_AQ_INSTOP_WRITE;
+
+	pool_aq = otx2_mbox_alloc_msg_npa_aq_enq(&pfvf->mbox);
+	if (!pool_aq) {
+		otx2_mbox_reset(&pfvf->mbox.mbox, 0);
+		return -ENOMEM;
+	}
+
+	pool_aq->aura_id = aura_id;
+	pool_aq->pool.ena = 0;
+	pool_aq->pool_mask.ena = 1;
+
+	pool_aq->ctype = NPA_AQ_CTYPE_POOL;
+	pool_aq->op = NPA_AQ_INSTOP_WRITE;
+
+	return otx2_sync_mbox_msg(&pfvf->mbox);
+}
+
+int otx2_qos_enable_sq(struct otx2_nic *pfvf, int qidx, u16 smq)
+{
+	struct otx2_hw *hw = &pfvf->hw;
+	int pool_id, sq_idx, err;
+
+	if (pfvf->flags & OTX2_FLAG_INTF_DOWN)
+		return -EPERM;
+
+	sq_idx = hw->non_qos_queues + qidx;
+
+	mutex_lock(&pfvf->mbox.lock);
+	err = otx2_qos_sq_aura_pool_init(pfvf, sq_idx);
+	if (err)
+		goto out;
+
+	pool_id = otx2_get_pool_idx(pfvf, AURA_NIX_SQ, sq_idx);
+	pfvf->qos.qid_to_sqmap[qidx] = smq;
+	err = otx2_sq_init(pfvf, sq_idx, pool_id);
+	if (err)
+		goto out;
+out:
+	mutex_unlock(&pfvf->mbox.lock);
+	return err;
+}
+
+void otx2_qos_disable_sq(struct otx2_nic *pfvf, int qidx, u16 mdq)
+{
+	struct otx2_qset *qset = &pfvf->qset;
+	struct otx2_hw *hw = &pfvf->hw;
+	struct otx2_snd_queue *sq;
+	struct otx2_cq_queue *cq;
+	int pool_id, sq_idx;
+
+	sq_idx = hw->non_qos_queues + qidx;
+
+	/* If the DOWN flag is set SQs are already freed */
+	if (pfvf->flags & OTX2_FLAG_INTF_DOWN)
+		return;
+
+	sq = &pfvf->qset.sq[sq_idx];
+	if (!sq->sqb_ptrs)
+		return;
+
+	if (sq_idx < hw->non_qos_queues ||
+	    sq_idx >= otx2_get_total_tx_queues(pfvf)) {
+		netdev_err(pfvf->netdev, "Send Queue is not a QoS queue\n");
+		return;
+	}
+
+	cq = &qset->cq[pfvf->hw.rx_queues + sq_idx];
+	pool_id = otx2_get_pool_idx(pfvf, AURA_NIX_SQ, sq_idx);
+
+	otx2_qos_sqb_flush(pfvf, sq_idx);
+	otx2_smq_flush(pfvf, otx2_get_smq_idx(pfvf, sq_idx));
+	otx2_cleanup_tx_cqes(pfvf, cq);
+
+	mutex_lock(&pfvf->mbox.lock);
+	otx2_qos_ctx_disable(pfvf, sq_idx, pool_id);
+	mutex_unlock(&pfvf->mbox.lock);
+
+	otx2_qos_sq_free_sqbs(pfvf, sq_idx);
+	otx2_qos_aura_pool_free(pfvf, pool_id);
+}
-- 
2.17.1


^ permalink raw reply related	[flat|nested] 19+ messages in thread

* [net-next Patch v5 4/6] octeontx2-pf: Refactor scheduler queue alloc/free calls
  2023-03-26 18:12 [net-next Patch v5 0/6] octeontx2-pf: HTB offload support Hariprasad Kelam
                   ` (2 preceding siblings ...)
  2023-03-26 18:12 ` [net-next Patch v5 3/6] octeontx2-pf: qos send queues management Hariprasad Kelam
@ 2023-03-26 18:12 ` Hariprasad Kelam
  2023-03-28 14:44   ` Simon Horman
  2023-03-26 18:12 ` [net-next Patch v5 5/6] octeontx2-pf: Add support for HTB offload Hariprasad Kelam
  2023-03-26 18:12 ` [net-next Patch v5 6/6] docs: octeontx2: Add Documentation for QOS Hariprasad Kelam
  5 siblings, 1 reply; 19+ messages in thread
From: Hariprasad Kelam @ 2023-03-26 18:12 UTC (permalink / raw)
  To: netdev, linux-kernel
  Cc: kuba, davem, willemdebruijn.kernel, andrew, sgoutham, lcherian,
	gakula, jerinj, sbhatta, hkelam, naveenm, edumazet, pabeni, jhs,
	xiyou.wangcong, jiri, maxtram95

Multiple transmit scheduler queues can be configured at different
levels to support traffic shaping and scheduling. But on txschq free
requests, the transmit scheduler config in hardware is not reset.
This patch adds support for resetting the stale config.

The txschq alloc response handler updates the default txschq array,
which is used to configure the transmit packet path from the SMQ to
TL2 levels. However, for new features such as QoS offload that
require their own txschq queues, this handler is still invoked and
results in undefined behavior. The code now handles the txschq
response in the mbox caller function.

Signed-off-by: Hariprasad Kelam <hkelam@marvell.com>
Signed-off-by: Naveen Mamindlapalli <naveenm@marvell.com>
Signed-off-by: Sunil Kovvuri Goutham <sgoutham@marvell.com>
---
 .../ethernet/marvell/octeontx2/af/rvu_nix.c   | 45 +++++++++++++++++++
 .../marvell/octeontx2/nic/otx2_common.c       | 36 ++++++++-------
 .../ethernet/marvell/octeontx2/nic/otx2_pf.c  |  4 --
 .../ethernet/marvell/octeontx2/nic/otx2_vf.c  |  4 --
 4 files changed, 64 insertions(+), 25 deletions(-)

diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu_nix.c b/drivers/net/ethernet/marvell/octeontx2/af/rvu_nix.c
index 4ad707e758b9..79ed7af0b0a4 100644
--- a/drivers/net/ethernet/marvell/octeontx2/af/rvu_nix.c
+++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu_nix.c
@@ -1691,6 +1691,42 @@ handle_txschq_shaper_update(struct rvu *rvu, int blkaddr, int nixlf,
 	return true;
 }
 
+static void nix_reset_tx_schedule(struct rvu *rvu, int blkaddr,
+				  int lvl, int schq)
+{
+	u64 tlx_parent = 0, tlx_schedule = 0;
+
+	switch (lvl) {
+	case NIX_TXSCH_LVL_TL2:
+		tlx_parent   = NIX_AF_TL2X_PARENT(schq);
+		tlx_schedule = NIX_AF_TL2X_SCHEDULE(schq);
+		break;
+	case NIX_TXSCH_LVL_TL3:
+		tlx_parent   = NIX_AF_TL3X_PARENT(schq);
+		tlx_schedule = NIX_AF_TL3X_SCHEDULE(schq);
+		break;
+	case NIX_TXSCH_LVL_TL4:
+		tlx_parent   = NIX_AF_TL4X_PARENT(schq);
+		tlx_schedule = NIX_AF_TL4X_SCHEDULE(schq);
+		break;
+	case NIX_TXSCH_LVL_MDQ:
+		/* no need to reset SMQ_CFG as HW clears this CSR
+		 * on SMQ flush
+		 */
+		tlx_parent   = NIX_AF_MDQX_PARENT(schq);
+		tlx_schedule = NIX_AF_MDQX_SCHEDULE(schq);
+		break;
+	default:
+		return;
+	}
+
+	if (tlx_parent)
+		rvu_write64(rvu, blkaddr, tlx_parent, 0x0);
+
+	if (tlx_schedule)
+		rvu_write64(rvu, blkaddr, tlx_schedule, 0x0);
+}
+
 /* Disable shaping of pkts by a scheduler queue
  * at a given scheduler level.
  */
@@ -2039,6 +2075,7 @@ int rvu_mbox_handler_nix_txsch_alloc(struct rvu *rvu,
 				pfvf_map[schq] = TXSCH_MAP(pcifunc, 0);
 			nix_reset_tx_linkcfg(rvu, blkaddr, lvl, schq);
 			nix_reset_tx_shaping(rvu, blkaddr, nixlf, lvl, schq);
+			nix_reset_tx_schedule(rvu, blkaddr, lvl, schq);
 		}
 
 		for (idx = 0; idx < req->schq[lvl]; idx++) {
@@ -2048,6 +2085,7 @@ int rvu_mbox_handler_nix_txsch_alloc(struct rvu *rvu,
 				pfvf_map[schq] = TXSCH_MAP(pcifunc, 0);
 			nix_reset_tx_linkcfg(rvu, blkaddr, lvl, schq);
 			nix_reset_tx_shaping(rvu, blkaddr, nixlf, lvl, schq);
+			nix_reset_tx_schedule(rvu, blkaddr, lvl, schq);
 		}
 	}
 
@@ -2143,6 +2181,7 @@ static int nix_txschq_free(struct rvu *rvu, u16 pcifunc)
 				continue;
 			nix_reset_tx_linkcfg(rvu, blkaddr, lvl, schq);
 			nix_clear_tx_xoff(rvu, blkaddr, lvl, schq);
+			nix_reset_tx_shaping(rvu, blkaddr, nixlf, lvl, schq);
 		}
 	}
 	nix_clear_tx_xoff(rvu, blkaddr, NIX_TXSCH_LVL_TL1,
@@ -2181,6 +2220,7 @@ static int nix_txschq_free(struct rvu *rvu, u16 pcifunc)
 		for (schq = 0; schq < txsch->schq.max; schq++) {
 			if (TXSCH_MAP_FUNC(txsch->pfvf_map[schq]) != pcifunc)
 				continue;
+			nix_reset_tx_schedule(rvu, blkaddr, lvl, schq);
 			rvu_free_rsrc(&txsch->schq, schq);
 			txsch->pfvf_map[schq] = TXSCH_MAP(0, NIX_TXSCHQ_FREE);
 		}
@@ -2240,6 +2280,9 @@ static int nix_txschq_free_one(struct rvu *rvu,
 	 */
 	nix_clear_tx_xoff(rvu, blkaddr, lvl, schq);
 
+	nix_reset_tx_linkcfg(rvu, blkaddr, lvl, schq);
+	nix_reset_tx_shaping(rvu, blkaddr, nixlf, lvl, schq);
+
 	/* Flush if it is a SMQ. Onus of disabling
 	 * TL2/3 queue links before SMQ flush is on user
 	 */
@@ -2249,6 +2292,8 @@ static int nix_txschq_free_one(struct rvu *rvu,
 		goto err;
 	}
 
+	nix_reset_tx_schedule(rvu, blkaddr, lvl, schq);
+
 	/* Free the resource */
 	rvu_free_rsrc(&txsch->schq, schq);
 	txsch->pfvf_map[schq] = TXSCH_MAP(0, NIX_TXSCHQ_FREE);
diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.c b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.c
index adbcc087d2a8..32c02a2d3554 100644
--- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.c
+++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.c
@@ -716,7 +716,8 @@ EXPORT_SYMBOL(otx2_smq_flush);
 int otx2_txsch_alloc(struct otx2_nic *pfvf)
 {
 	struct nix_txsch_alloc_req *req;
-	int lvl;
+	struct nix_txsch_alloc_rsp *rsp;
+	int lvl, schq, rc;
 
 	/* Get memory to put this msg */
 	req = otx2_mbox_alloc_msg_nix_txsch_alloc(&pfvf->mbox);
@@ -726,8 +727,24 @@ int otx2_txsch_alloc(struct otx2_nic *pfvf)
 	/* Request one schq per level */
 	for (lvl = 0; lvl < NIX_TXSCH_LVL_CNT; lvl++)
 		req->schq[lvl] = 1;
+	rc = otx2_sync_mbox_msg(&pfvf->mbox);
+	if (rc)
+		return rc;
 
-	return otx2_sync_mbox_msg(&pfvf->mbox);
+	rsp = (struct nix_txsch_alloc_rsp *)
+	      otx2_mbox_get_rsp(&pfvf->mbox.mbox, 0, &req->hdr);
+	if (IS_ERR(rsp))
+		return PTR_ERR(rsp);
+
+	/* Setup transmit scheduler list */
+	for (lvl = 0; lvl < NIX_TXSCH_LVL_CNT; lvl++)
+		for (schq = 0; schq < rsp->schq[lvl]; schq++)
+			pfvf->hw.txschq_list[lvl][schq] =
+				rsp->schq_list[lvl][schq];
+
+	pfvf->hw.txschq_link_cfg_lvl = rsp->link_cfg_lvl;
+
+	return 0;
 }
 
 int otx2_txschq_stop(struct otx2_nic *pfvf)
@@ -1642,21 +1659,6 @@ void mbox_handler_cgx_fec_stats(struct otx2_nic *pfvf,
 	pfvf->hw.cgx_fec_uncorr_blks += rsp->fec_uncorr_blks;
 }
 
-void mbox_handler_nix_txsch_alloc(struct otx2_nic *pf,
-				  struct nix_txsch_alloc_rsp *rsp)
-{
-	int lvl, schq;
-
-	/* Setup transmit scheduler list */
-	for (lvl = 0; lvl < NIX_TXSCH_LVL_CNT; lvl++)
-		for (schq = 0; schq < rsp->schq[lvl]; schq++)
-			pf->hw.txschq_list[lvl][schq] =
-				rsp->schq_list[lvl][schq];
-
-	pf->hw.txschq_link_cfg_lvl = rsp->link_cfg_lvl;
-}
-EXPORT_SYMBOL(mbox_handler_nix_txsch_alloc);
-
 void mbox_handler_npa_lf_alloc(struct otx2_nic *pfvf,
 			       struct npa_lf_alloc_rsp *rsp)
 {
diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_pf.c b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_pf.c
index 50288e638770..a32f0cb89fc4 100644
--- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_pf.c
+++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_pf.c
@@ -792,10 +792,6 @@ static void otx2_process_pfaf_mbox_msg(struct otx2_nic *pf,
 	case MBOX_MSG_NIX_LF_ALLOC:
 		mbox_handler_nix_lf_alloc(pf, (struct nix_lf_alloc_rsp *)msg);
 		break;
-	case MBOX_MSG_NIX_TXSCH_ALLOC:
-		mbox_handler_nix_txsch_alloc(pf,
-					     (struct nix_txsch_alloc_rsp *)msg);
-		break;
 	case MBOX_MSG_NIX_BP_ENABLE:
 		mbox_handler_nix_bp_enable(pf, (struct nix_bp_cfg_rsp *)msg);
 		break;
diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_vf.c b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_vf.c
index ec4f9dd4879e..24dbea86ce97 100644
--- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_vf.c
+++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_vf.c
@@ -70,10 +70,6 @@ static void otx2vf_process_vfaf_mbox_msg(struct otx2_nic *vf,
 	case MBOX_MSG_NIX_LF_ALLOC:
 		mbox_handler_nix_lf_alloc(vf, (struct nix_lf_alloc_rsp *)msg);
 		break;
-	case MBOX_MSG_NIX_TXSCH_ALLOC:
-		mbox_handler_nix_txsch_alloc(vf,
-					     (struct nix_txsch_alloc_rsp *)msg);
-		break;
 	case MBOX_MSG_NIX_BP_ENABLE:
 		mbox_handler_nix_bp_enable(vf, (struct nix_bp_cfg_rsp *)msg);
 		break;
-- 
2.17.1


^ permalink raw reply related	[flat|nested] 19+ messages in thread

* [net-next Patch v5 5/6] octeontx2-pf: Add support for HTB offload
  2023-03-26 18:12 [net-next Patch v5 0/6] octeontx2-pf: HTB offload support Hariprasad Kelam
                   ` (3 preceding siblings ...)
  2023-03-26 18:12 ` [net-next Patch v5 4/6] octeontx2-pf: Refactor scheduler queue alloc/free calls Hariprasad Kelam
@ 2023-03-26 18:12 ` Hariprasad Kelam
  2023-03-28 10:56   ` Paolo Abeni
                     ` (2 more replies)
  2023-03-26 18:12 ` [net-next Patch v5 6/6] docs: octeontx2: Add Documentation for QOS Hariprasad Kelam
  5 siblings, 3 replies; 19+ messages in thread
From: Hariprasad Kelam @ 2023-03-26 18:12 UTC (permalink / raw)
  To: netdev, linux-kernel
  Cc: kuba, davem, willemdebruijn.kernel, andrew, sgoutham, lcherian,
	gakula, jerinj, sbhatta, hkelam, naveenm, edumazet, pabeni, jhs,
	xiyou.wangcong, jiri, maxtram95

From: Naveen Mamindlapalli <naveenm@marvell.com>

This patch registers callbacks to support HTB offload.

Below are the features supported:

- supports traffic shaping on the given class by honoring the rate and
  ceil configuration.

- supports traffic scheduling, which prioritizes different types of
  traffic based on strict priority values.

- supports the creation of leaf-to-inner classes such that parent node
  rate limits apply to all child nodes.
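
As an illustrative example (interface name, rates and class IDs are
arbitrary), the commands below build an offloaded hierarchy where an
inner class limits two leaf classes that are scheduled by strict
priority:

  tc qdisc replace dev eth0 root handle 1: htb offload
  tc class add dev eth0 parent 1: classid 1:1 htb rate 1gbit ceil 1gbit
  tc class add dev eth0 parent 1:1 classid 1:10 htb rate 200mbit ceil 1gbit prio 1
  tc class add dev eth0 parent 1:1 classid 1:11 htb rate 300mbit ceil 1gbit prio 7

Adding 1:10 and 1:11 under 1:1 converts 1:1 into an inner node, so its
rate/ceil applies to both children, while prio selects the strict
priority between them.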

Signed-off-by: Naveen Mamindlapalli <naveenm@marvell.com>
Signed-off-by: Hariprasad Kelam <hkelam@marvell.com>
Signed-off-by: Sunil Kovvuri Goutham <sgoutham@marvell.com>
---
 .../ethernet/marvell/octeontx2/af/common.h    |    2 +-
 .../ethernet/marvell/octeontx2/nic/Makefile   |    2 +-
 .../marvell/octeontx2/nic/otx2_common.c       |   35 +-
 .../marvell/octeontx2/nic/otx2_common.h       |    8 +-
 .../marvell/octeontx2/nic/otx2_ethtool.c      |   31 +-
 .../ethernet/marvell/octeontx2/nic/otx2_pf.c  |   56 +-
 .../ethernet/marvell/octeontx2/nic/otx2_reg.h |   13 +
 .../ethernet/marvell/octeontx2/nic/otx2_tc.c  |    7 +-
 .../net/ethernet/marvell/octeontx2/nic/qos.c  | 1460 +++++++++++++++++
 .../net/ethernet/marvell/octeontx2/nic/qos.h  |   58 +-
 .../ethernet/marvell/octeontx2/nic/qos_sq.c   |   20 +-
 11 files changed, 1657 insertions(+), 35 deletions(-)
 create mode 100644 drivers/net/ethernet/marvell/octeontx2/nic/qos.c

diff --git a/drivers/net/ethernet/marvell/octeontx2/af/common.h b/drivers/net/ethernet/marvell/octeontx2/af/common.h
index 8931864ee110..f5bf719a6ccf 100644
--- a/drivers/net/ethernet/marvell/octeontx2/af/common.h
+++ b/drivers/net/ethernet/marvell/octeontx2/af/common.h
@@ -142,7 +142,7 @@ enum nix_scheduler {
 
 #define TXSCH_RR_QTM_MAX		((1 << 24) - 1)
 #define TXSCH_TL1_DFLT_RR_QTM		TXSCH_RR_QTM_MAX
-#define TXSCH_TL1_DFLT_RR_PRIO		(0x1ull)
+#define TXSCH_TL1_DFLT_RR_PRIO		(0x7ull)
 #define CN10K_MAX_DWRR_WEIGHT          16384 /* Weight is 14bit on CN10K */
 
 /* Min/Max packet sizes, excluding FCS */
diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/Makefile b/drivers/net/ethernet/marvell/octeontx2/nic/Makefile
index 3d31ddf7c652..5664f768cb0c 100644
--- a/drivers/net/ethernet/marvell/octeontx2/nic/Makefile
+++ b/drivers/net/ethernet/marvell/octeontx2/nic/Makefile
@@ -8,7 +8,7 @@ obj-$(CONFIG_OCTEONTX2_VF) += rvu_nicvf.o otx2_ptp.o
 
 rvu_nicpf-y := otx2_pf.o otx2_common.o otx2_txrx.o otx2_ethtool.o \
                otx2_flows.o otx2_tc.o cn10k.o otx2_dmac_flt.o \
-               otx2_devlink.o qos_sq.o
+               otx2_devlink.o qos_sq.o qos.o
 rvu_nicvf-y := otx2_vf.o otx2_devlink.o
 
 rvu_nicpf-$(CONFIG_DCB) += otx2_dcbnl.o
diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.c b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.c
index 32c02a2d3554..b4542a801291 100644
--- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.c
+++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.c
@@ -89,6 +89,11 @@ int otx2_update_sq_stats(struct otx2_nic *pfvf, int qidx)
 	if (!pfvf->qset.sq)
 		return 0;
 
+	if (qidx >= pfvf->hw.non_qos_queues) {
+		if (!test_bit(qidx - pfvf->hw.non_qos_queues, pfvf->qos.qos_sq_bmap))
+			return 0;
+	}
+
 	otx2_nix_sq_op_stats(&sq->stats, pfvf, qidx);
 	return 1;
 }
@@ -747,29 +752,47 @@ int otx2_txsch_alloc(struct otx2_nic *pfvf)
 	return 0;
 }
 
-int otx2_txschq_stop(struct otx2_nic *pfvf)
+void otx2_txschq_free_one(struct otx2_nic *pfvf, u16 lvl, u16 schq)
 {
 	struct nix_txsch_free_req *free_req;
-	int lvl, schq, err;
+	int err;
 
 	mutex_lock(&pfvf->mbox.lock);
-	/* Free the transmit schedulers */
+
 	free_req = otx2_mbox_alloc_msg_nix_txsch_free(&pfvf->mbox);
 	if (!free_req) {
 		mutex_unlock(&pfvf->mbox.lock);
-		return -ENOMEM;
+		netdev_err(pfvf->netdev,
+			   "Failed alloc txschq free req\n");
+		return;
 	}
 
-	free_req->flags = TXSCHQ_FREE_ALL;
+	free_req->schq_lvl = lvl;
+	free_req->schq = schq;
+
 	err = otx2_sync_mbox_msg(&pfvf->mbox);
+	if (err) {
+		netdev_err(pfvf->netdev,
+			   "Failed to stop txschq %d at level %d\n", schq, lvl);
+	}
+
 	mutex_unlock(&pfvf->mbox.lock);
+}
+
+void otx2_txschq_stop(struct otx2_nic *pfvf)
+{
+	int lvl, schq;
+
+	/* free non QOS TLx nodes */
+	for (lvl = 0; lvl < NIX_TXSCH_LVL_CNT; lvl++)
+		otx2_txschq_free_one(pfvf, lvl,
+				     pfvf->hw.txschq_list[lvl][0]);
 
 	/* Clear the txschq list */
 	for (lvl = 0; lvl < NIX_TXSCH_LVL_CNT; lvl++) {
 		for (schq = 0; schq < MAX_TXSCHQ_PER_FUNC; schq++)
 			pfvf->hw.txschq_list[lvl][schq] = 0;
 	}
-	return err;
 }
 
 void otx2_sqb_flush(struct otx2_nic *pfvf)
diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.h b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.h
index 3834cc447426..4b219e8e5b32 100644
--- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.h
+++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.h
@@ -252,6 +252,7 @@ struct otx2_hw {
 #define CN10K_RPM		3
 #define CN10K_PTP_ONESTEP	4
 #define CN10K_HW_MACSEC		5
+#define QOS_CIR_PIR_SUPPORT	6
 	unsigned long		cap_flag;
 
 #define LMT_LINE_SIZE		128
@@ -586,6 +587,7 @@ static inline void otx2_setup_dev_hw_settings(struct otx2_nic *pfvf)
 		__set_bit(CN10K_LMTST, &hw->cap_flag);
 		__set_bit(CN10K_RPM, &hw->cap_flag);
 		__set_bit(CN10K_PTP_ONESTEP, &hw->cap_flag);
+		__set_bit(QOS_CIR_PIR_SUPPORT, &hw->cap_flag);
 	}
 
 	if (is_dev_cn10kb(pfvf->pdev))
@@ -935,7 +937,7 @@ int otx2_config_nix(struct otx2_nic *pfvf);
 int otx2_config_nix_queues(struct otx2_nic *pfvf);
 int otx2_txschq_config(struct otx2_nic *pfvf, int lvl, int prio, bool pfc_en);
 int otx2_txsch_alloc(struct otx2_nic *pfvf);
-int otx2_txschq_stop(struct otx2_nic *pfvf);
+void otx2_txschq_stop(struct otx2_nic *pfvf);
 void otx2_sqb_flush(struct otx2_nic *pfvf);
 int otx2_alloc_rbuf(struct otx2_nic *pfvf, struct otx2_pool *pool,
 		    dma_addr_t *dma);
@@ -953,6 +955,7 @@ int otx2_pool_init(struct otx2_nic *pfvf, u16 pool_id,
 		   int stack_pages, int numptrs, int buf_size);
 int otx2_aura_init(struct otx2_nic *pfvf, int aura_id,
 		   int pool_id, int numptrs);
+void otx2_txschq_free_one(struct otx2_nic *pfvf, u16 lvl, u16 schq);
 
 /* RSS configuration APIs*/
 int otx2_rss_init(struct otx2_nic *pfvf);
@@ -1064,4 +1067,7 @@ static inline void cn10k_handle_mcs_event(struct otx2_nic *pfvf,
 void otx2_qos_sq_setup(struct otx2_nic *pfvf, int qos_txqs);
 u16 otx2_select_queue(struct net_device *netdev, struct sk_buff *skb,
 		      struct net_device *sb_dev);
+int otx2_get_txq_by_classid(struct otx2_nic *pfvf, u16 classid);
+void otx2_qos_config_txschq(struct otx2_nic *pfvf);
+void otx2_clean_qos_queues(struct otx2_nic *pfvf);
 #endif /* OTX2_COMMON_H */
diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_ethtool.c b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_ethtool.c
index 0f8d1a69139f..e8722a4f4cc6 100644
--- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_ethtool.c
+++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_ethtool.c
@@ -92,10 +92,16 @@ static void otx2_get_qset_strings(struct otx2_nic *pfvf, u8 **data, int qset)
 			*data += ETH_GSTRING_LEN;
 		}
 	}
-	for (qidx = 0; qidx < pfvf->hw.tx_queues; qidx++) {
+
+	for (qidx = 0; qidx < otx2_get_total_tx_queues(pfvf); qidx++) {
 		for (stats = 0; stats < otx2_n_queue_stats; stats++) {
-			sprintf(*data, "txq%d: %s", qidx + start_qidx,
-				otx2_queue_stats[stats].name);
+			if (qidx >= pfvf->hw.non_qos_queues)
+				sprintf(*data, "txq_qos%d: %s",
+					qidx + start_qidx - pfvf->hw.non_qos_queues,
+					otx2_queue_stats[stats].name);
+			else
+				sprintf(*data, "txq%d: %s", qidx + start_qidx,
+					otx2_queue_stats[stats].name);
 			*data += ETH_GSTRING_LEN;
 		}
 	}
@@ -159,7 +165,7 @@ static void otx2_get_qset_stats(struct otx2_nic *pfvf,
 				[otx2_queue_stats[stat].index];
 	}
 
-	for (qidx = 0; qidx < pfvf->hw.tx_queues; qidx++) {
+	for (qidx = 0; qidx < otx2_get_total_tx_queues(pfvf); qidx++) {
 		if (!otx2_update_sq_stats(pfvf, qidx)) {
 			for (stat = 0; stat < otx2_n_queue_stats; stat++)
 				*((*data)++) = 0;
@@ -254,7 +260,8 @@ static int otx2_get_sset_count(struct net_device *netdev, int sset)
 		return -EINVAL;
 
 	qstats_count = otx2_n_queue_stats *
-		       (pfvf->hw.rx_queues + pfvf->hw.tx_queues);
+		       (pfvf->hw.rx_queues + pfvf->hw.non_qos_queues +
+			pfvf->hw.tc_tx_queues);
 	if (!test_bit(CN10K_RPM, &pfvf->hw.cap_flag))
 		mac_stats = CGX_RX_STATS_COUNT + CGX_TX_STATS_COUNT;
 	otx2_update_lmac_fec_stats(pfvf);
@@ -282,7 +289,7 @@ static int otx2_set_channels(struct net_device *dev,
 {
 	struct otx2_nic *pfvf = netdev_priv(dev);
 	bool if_up = netif_running(dev);
-	int err = 0;
+	int err, qos_txqs;
 
 	if (!channel->rx_count || !channel->tx_count)
 		return -EINVAL;
@@ -296,14 +303,19 @@ static int otx2_set_channels(struct net_device *dev,
 	if (if_up)
 		dev->netdev_ops->ndo_stop(dev);
 
-	err = otx2_set_real_num_queues(dev, channel->tx_count,
+	qos_txqs = bitmap_weight(pfvf->qos.qos_sq_bmap,
+				 OTX2_QOS_MAX_LEAF_NODES);
+
+	err = otx2_set_real_num_queues(dev, channel->tx_count + qos_txqs,
 				       channel->rx_count);
 	if (err)
 		return err;
 
 	pfvf->hw.rx_queues = channel->rx_count;
 	pfvf->hw.tx_queues = channel->tx_count;
-	pfvf->qset.cq_cnt = pfvf->hw.tx_queues +  pfvf->hw.rx_queues;
+	if (pfvf->xdp_prog)
+		pfvf->hw.xdp_queues = channel->rx_count;
+	pfvf->hw.non_qos_queues = pfvf->hw.tx_queues + pfvf->hw.xdp_queues;
 
 	if (if_up)
 		err = dev->netdev_ops->ndo_open(dev);
@@ -1405,7 +1417,8 @@ static int otx2vf_get_sset_count(struct net_device *netdev, int sset)
 		return -EINVAL;
 
 	qstats_count = otx2_n_queue_stats *
-		       (vf->hw.rx_queues + vf->hw.tx_queues);
+		       (vf->hw.rx_queues + vf->hw.tx_queues +
+			vf->hw.tc_tx_queues);
 
 	return otx2_n_dev_stats + otx2_n_drv_stats + qstats_count + 1;
 }
diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_pf.c b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_pf.c
index a32f0cb89fc4..d0192f9089ee 100644
--- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_pf.c
+++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_pf.c
@@ -1387,6 +1387,9 @@ static void otx2_free_sq_res(struct otx2_nic *pf)
 	otx2_sq_free_sqbs(pf);
 	for (qidx = 0; qidx < otx2_get_total_tx_queues(pf); qidx++) {
 		sq = &qset->sq[qidx];
+		/* Skip freeing Qos queues if they are not initialized */
+		if (!sq->sqb_count)
+			continue;
 		qmem_free(pf->dev, sq->sqe);
 		qmem_free(pf->dev, sq->tso_hdrs);
 		kfree(sq->sg);
@@ -1518,8 +1521,7 @@ static int otx2_init_hw_resources(struct otx2_nic *pf)
 	otx2_free_cq_res(pf);
 	otx2_ctx_disable(mbox, NIX_AQ_CTYPE_RQ, false);
 err_free_txsch:
-	if (otx2_txschq_stop(pf))
-		dev_err(pf->dev, "%s failed to stop TX schedulers\n", __func__);
+	otx2_txschq_stop(pf);
 err_free_sq_ptrs:
 	otx2_sq_free_sqbs(pf);
 err_free_rq_ptrs:
@@ -1554,21 +1556,21 @@ static void otx2_free_hw_resources(struct otx2_nic *pf)
 	struct mbox *mbox = &pf->mbox;
 	struct otx2_cq_queue *cq;
 	struct msg_req *req;
-	int qidx, err;
+	int qidx;
 
 	/* Ensure all SQE are processed */
 	otx2_sqb_flush(pf);
 
 	/* Stop transmission */
-	err = otx2_txschq_stop(pf);
-	if (err)
-		dev_err(pf->dev, "RVUPF: Failed to stop/free TX schedulers\n");
+	otx2_txschq_stop(pf);
 
 #ifdef CONFIG_DCB
 	if (pf->pfc_en)
 		otx2_pfc_txschq_stop(pf);
 #endif
 
+	otx2_clean_qos_queues(pf);
+
 	mutex_lock(&mbox->lock);
 	/* Disable backpressure */
 	if (!(pf->pcifunc & RVU_PFVF_FUNC_MASK))
@@ -1836,6 +1838,9 @@ int otx2_open(struct net_device *netdev)
 	/* 'intf_down' may be checked on any cpu */
 	smp_wmb();
 
+	/* Enable QoS configuration before starting tx queues */
+	otx2_qos_config_txschq(pf);
+
 	/* we have already received link status notification */
 	if (pf->linfo.link_up && !(pf->pcifunc & RVU_PFVF_FUNC_MASK))
 		otx2_handle_link_event(pf);
@@ -1980,14 +1985,45 @@ static netdev_tx_t otx2_xmit(struct sk_buff *skb, struct net_device *netdev)
 	return NETDEV_TX_OK;
 }
 
+static int otx2_qos_select_htb_queue(struct otx2_nic *pf, struct sk_buff *skb,
+				     u16 htb_maj_id)
+{
+	u16 classid;
+
+	if ((TC_H_MAJ(skb->priority) >> 16) == htb_maj_id)
+		classid = TC_H_MIN(skb->priority);
+	else
+		classid = READ_ONCE(pf->qos.defcls);
+
+	if (!classid)
+		return 0;
+
+	return otx2_get_txq_by_classid(pf, classid);
+}
+
 u16 otx2_select_queue(struct net_device *netdev, struct sk_buff *skb,
 		      struct net_device *sb_dev)
 {
-#ifdef CONFIG_DCB
 	struct otx2_nic *pf = netdev_priv(netdev);
+	bool qos_enabled;
+#ifdef CONFIG_DCB
 	u8 vlan_prio;
 #endif
+	int txq;
 
+	qos_enabled = netdev->real_num_tx_queues > pf->hw.tx_queues;
+	if (unlikely(qos_enabled)) {
+		u16 htb_maj_id = smp_load_acquire(&pf->qos.maj_id); /* barrier */
+
+		if (unlikely(htb_maj_id)) {
+			txq = otx2_qos_select_htb_queue(pf, skb, htb_maj_id);
+			if (txq > 0)
+				return txq;
+			goto process_pfc;
+		}
+	}
+
+process_pfc:
 #ifdef CONFIG_DCB
 	if (!skb_vlan_tag_present(skb))
 		goto pick_tx;
@@ -2001,7 +2037,11 @@ u16 otx2_select_queue(struct net_device *netdev, struct sk_buff *skb,
 
 pick_tx:
 #endif
-	return netdev_pick_tx(netdev, skb, NULL);
+	txq = netdev_pick_tx(netdev, skb, NULL);
+	if (unlikely(qos_enabled))
+		return txq % pf->hw.tx_queues;
+
+	return txq;
 }
 EXPORT_SYMBOL(otx2_select_queue);
 
diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_reg.h b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_reg.h
index 1b967eaf948b..45a32e4b49d1 100644
--- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_reg.h
+++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_reg.h
@@ -145,12 +145,25 @@
 #define NIX_AF_TL1X_TOPOLOGY(a)		(0xC80 | (a) << 16)
 #define NIX_AF_TL2X_PARENT(a)		(0xE88 | (a) << 16)
 #define NIX_AF_TL2X_SCHEDULE(a)		(0xE00 | (a) << 16)
+#define NIX_AF_TL2X_TOPOLOGY(a)		(0xE80 | (a) << 16)
+#define NIX_AF_TL2X_CIR(a)              (0xE20 | (a) << 16)
+#define NIX_AF_TL2X_PIR(a)              (0xE30 | (a) << 16)
 #define NIX_AF_TL3X_PARENT(a)		(0x1088 | (a) << 16)
 #define NIX_AF_TL3X_SCHEDULE(a)		(0x1000 | (a) << 16)
+#define NIX_AF_TL3X_SHAPE(a)		(0x1010 | (a) << 16)
+#define NIX_AF_TL3X_CIR(a)		(0x1020 | (a) << 16)
+#define NIX_AF_TL3X_PIR(a)		(0x1030 | (a) << 16)
+#define NIX_AF_TL3X_TOPOLOGY(a)		(0x1080 | (a) << 16)
 #define NIX_AF_TL4X_PARENT(a)		(0x1288 | (a) << 16)
 #define NIX_AF_TL4X_SCHEDULE(a)		(0x1200 | (a) << 16)
+#define NIX_AF_TL4X_SHAPE(a)		(0x1210 | (a) << 16)
+#define NIX_AF_TL4X_CIR(a)		(0x1220 | (a) << 16)
 #define NIX_AF_TL4X_PIR(a)		(0x1230 | (a) << 16)
+#define NIX_AF_TL4X_TOPOLOGY(a)		(0x1280 | (a) << 16)
 #define NIX_AF_MDQX_SCHEDULE(a)		(0x1400 | (a) << 16)
+#define NIX_AF_MDQX_SHAPE(a)		(0x1410 | (a) << 16)
+#define NIX_AF_MDQX_CIR(a)		(0x1420 | (a) << 16)
+#define NIX_AF_MDQX_PIR(a)		(0x1430 | (a) << 16)
 #define NIX_AF_MDQX_PARENT(a)		(0x1480 | (a) << 16)
 #define NIX_AF_TL3_TL2X_LINKX_CFG(a, b)	(0x1700 | (a) << 16 | (b) << 3)
 
diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_tc.c b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_tc.c
index 044cc211424e..42c49249f4e7 100644
--- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_tc.c
+++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_tc.c
@@ -19,6 +19,7 @@
 
 #include "cn10k.h"
 #include "otx2_common.h"
+#include "qos.h"
 
 /* Egress rate limiting definitions */
 #define MAX_BURST_EXPONENT		0x0FULL
@@ -147,8 +148,8 @@ static void otx2_get_egress_rate_cfg(u64 maxrate, u32 *exp,
 	}
 }
 
-static u64 otx2_get_txschq_rate_regval(struct otx2_nic *nic,
-				       u64 maxrate, u32 burst)
+u64 otx2_get_txschq_rate_regval(struct otx2_nic *nic,
+				u64 maxrate, u32 burst)
 {
 	u32 burst_exp, burst_mantissa;
 	u32 exp, mantissa, div_exp;
@@ -1127,6 +1128,8 @@ int otx2_setup_tc(struct net_device *netdev, enum tc_setup_type type,
 	switch (type) {
 	case TC_SETUP_BLOCK:
 		return otx2_setup_tc_block(netdev, type_data);
+	case TC_SETUP_QDISC_HTB:
+		return otx2_setup_tc_htb(netdev, type_data);
 	default:
 		return -EOPNOTSUPP;
 	}
diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/qos.c b/drivers/net/ethernet/marvell/octeontx2/nic/qos.c
new file mode 100644
index 000000000000..22c5b6a2871a
--- /dev/null
+++ b/drivers/net/ethernet/marvell/octeontx2/nic/qos.c
@@ -0,0 +1,1460 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Marvell RVU Ethernet driver
+ *
+ * Copyright (C) 2023 Marvell.
+ *
+ */
+#include <linux/netdevice.h>
+#include <linux/etherdevice.h>
+#include <linux/inetdevice.h>
+#include <linux/bitfield.h>
+
+#include "otx2_common.h"
+#include "cn10k.h"
+#include "qos.h"
+
+#define OTX2_QOS_QID_INNER		0xFFFFU
+#define OTX2_QOS_QID_NONE		0xFFFEU
+#define OTX2_QOS_ROOT_CLASSID		0xFFFFFFFF
+#define OTX2_QOS_CLASS_NONE		0
+#define OTX2_QOS_DEFAULT_PRIO		0xF
+#define OTX2_QOS_INVALID_SQ		0xFFFF
+
+/* Egress rate limiting definitions */
+#define MAX_BURST_EXPONENT		0x0FULL
+#define MAX_BURST_MANTISSA		0xFFULL
+#define MAX_BURST_SIZE			130816ULL
+#define MAX_RATE_DIVIDER_EXPONENT	12ULL
+#define MAX_RATE_EXPONENT		0x0FULL
+#define MAX_RATE_MANTISSA		0xFFULL
+
+/* Bitfields in NIX_TLX_PIR register */
+#define TLX_RATE_MANTISSA		GENMASK_ULL(8, 1)
+#define TLX_RATE_EXPONENT		GENMASK_ULL(12, 9)
+#define TLX_RATE_DIVIDER_EXPONENT	GENMASK_ULL(16, 13)
+#define TLX_BURST_MANTISSA		GENMASK_ULL(36, 29)
+#define TLX_BURST_EXPONENT		GENMASK_ULL(40, 37)
+
+static int otx2_qos_update_tx_netdev_queues(struct otx2_nic *pfvf)
+{
+	int tx_queues, qos_txqs, err;
+	struct otx2_hw *hw = &pfvf->hw;
+
+	qos_txqs = bitmap_weight(pfvf->qos.qos_sq_bmap,
+				 OTX2_QOS_MAX_LEAF_NODES);
+
+	tx_queues = hw->tx_queues + qos_txqs;
+
+	err = netif_set_real_num_tx_queues(pfvf->netdev, tx_queues);
+	if (err) {
+		netdev_err(pfvf->netdev,
+			   "Failed to set no of Tx queues: %d\n", tx_queues);
+		return err;
+	}
+
+	return 0;
+}
+
+static u64 otx2_qos_convert_rate(u64 rate)
+{
+	u64 converted_rate;
+
+	/* convert bytes per second to Mbps */
+	converted_rate = rate * 8;
+	converted_rate = max_t(u64, converted_rate / 1000000, 1);
+
+	return converted_rate;
+}
+
+static void __otx2_qos_txschq_cfg(struct otx2_nic *pfvf,
+				  struct otx2_qos_node *node,
+				  struct nix_txschq_config *cfg)
+{
+	struct otx2_hw *hw = &pfvf->hw;
+	int num_regs = 0;
+	u64 maxrate;
+	u8 level;
+
+	level = node->level;
+
+	/* program txschq registers */
+	if (level == NIX_TXSCH_LVL_SMQ) {
+		cfg->reg[num_regs] = NIX_AF_SMQX_CFG(node->schq);
+		cfg->regval[num_regs] = ((u64)pfvf->tx_max_pktlen << 8) |
+					OTX2_MIN_MTU;
+		cfg->regval[num_regs] |= (0x20ULL << 51) | (0x80ULL << 39) |
+					 (0x2ULL << 36);
+		num_regs++;
+
+		/* configure parent txschq */
+		cfg->reg[num_regs] = NIX_AF_MDQX_PARENT(node->schq);
+		cfg->regval[num_regs] = node->parent->schq << 16;
+		num_regs++;
+
+		/* configure prio/quantum */
+		if (node->qid == OTX2_QOS_QID_NONE) {
+			cfg->reg[num_regs] = NIX_AF_MDQX_SCHEDULE(node->schq);
+			cfg->regval[num_regs] =  node->prio << 24 |
+						 mtu_to_dwrr_weight(pfvf,
+								    pfvf->tx_max_pktlen);
+			num_regs++;
+			goto txschq_cfg_out;
+		}
+
+		/* configure prio */
+		cfg->reg[num_regs] = NIX_AF_MDQX_SCHEDULE(node->schq);
+		cfg->regval[num_regs] = (node->schq -
+					 node->parent->prio_anchor) << 24;
+		num_regs++;
+
+		/* configure PIR */
+		maxrate = (node->rate > node->ceil) ? node->rate : node->ceil;
+
+		cfg->reg[num_regs] = NIX_AF_MDQX_PIR(node->schq);
+		cfg->regval[num_regs] =
+			otx2_get_txschq_rate_regval(pfvf, maxrate, 65536);
+		num_regs++;
+
+		/* configure CIR */
+		if (!test_bit(QOS_CIR_PIR_SUPPORT, &pfvf->hw.cap_flag)) {
+			/* Don't configure CIR when both CIR+PIR not supported
+			 * On 96xx, CIR + PIR + RED_ALGO=STALL causes deadlock
+			 */
+			goto txschq_cfg_out;
+		}
+
+		cfg->reg[num_regs] = NIX_AF_MDQX_CIR(node->schq);
+		cfg->regval[num_regs] =
+			otx2_get_txschq_rate_regval(pfvf, node->rate, 65536);
+		num_regs++;
+	} else if (level == NIX_TXSCH_LVL_TL4) {
+		/* configure parent txschq */
+		cfg->reg[num_regs] = NIX_AF_TL4X_PARENT(node->schq);
+		cfg->regval[num_regs] = node->parent->schq << 16;
+		num_regs++;
+
+		/* return if not htb node */
+		if (node->qid == OTX2_QOS_QID_NONE) {
+			cfg->reg[num_regs] = NIX_AF_TL4X_SCHEDULE(node->schq);
+			cfg->regval[num_regs] =  node->prio << 24 |
+						 mtu_to_dwrr_weight(pfvf,
+								    pfvf->tx_max_pktlen);
+			num_regs++;
+			goto txschq_cfg_out;
+		}
+
+		/* configure priority */
+		cfg->reg[num_regs] = NIX_AF_TL4X_SCHEDULE(node->schq);
+		cfg->regval[num_regs] = (node->schq -
+					 node->parent->prio_anchor) << 24;
+		num_regs++;
+
+		/* configure PIR */
+		maxrate = (node->rate > node->ceil) ? node->rate : node->ceil;
+		cfg->reg[num_regs] = NIX_AF_TL4X_PIR(node->schq);
+		cfg->regval[num_regs] =
+			otx2_get_txschq_rate_regval(pfvf, maxrate, 65536);
+		num_regs++;
+
+		/* configure CIR */
+		if (!test_bit(QOS_CIR_PIR_SUPPORT, &pfvf->hw.cap_flag)) {
+			/* Don't configure CIR when both CIR+PIR not supported
+			 * On 96xx, CIR + PIR + RED_ALGO=STALL causes deadlock
+			 */
+			goto txschq_cfg_out;
+		}
+
+		cfg->reg[num_regs] = NIX_AF_TL4X_CIR(node->schq);
+		cfg->regval[num_regs] =
+			otx2_get_txschq_rate_regval(pfvf, node->rate, 65536);
+		num_regs++;
+	} else if (level == NIX_TXSCH_LVL_TL3) {
+		/* configure parent txschq */
+		cfg->reg[num_regs] = NIX_AF_TL3X_PARENT(node->schq);
+		cfg->regval[num_regs] = node->parent->schq << 16;
+		num_regs++;
+
+		/* configure link cfg */
+		if (level == pfvf->qos.link_cfg_lvl) {
+			cfg->reg[num_regs] = NIX_AF_TL3_TL2X_LINKX_CFG(node->schq, hw->tx_link);
+			cfg->regval[num_regs] = BIT_ULL(13) | BIT_ULL(12);
+			num_regs++;
+		}
+
+		/* return if not htb node */
+		if (node->qid == OTX2_QOS_QID_NONE) {
+			cfg->reg[num_regs] = NIX_AF_TL3X_SCHEDULE(node->schq);
+			cfg->regval[num_regs] =  node->prio << 24 |
+						 mtu_to_dwrr_weight(pfvf,
+								    pfvf->tx_max_pktlen);
+			num_regs++;
+			goto txschq_cfg_out;
+		}
+
+		/* configure priority */
+		cfg->reg[num_regs] = NIX_AF_TL3X_SCHEDULE(node->schq);
+		cfg->regval[num_regs] = (node->schq -
+					 node->parent->prio_anchor) << 24;
+		num_regs++;
+
+		/* configure PIR */
+		maxrate = (node->rate > node->ceil) ? node->rate : node->ceil;
+		cfg->reg[num_regs] = NIX_AF_TL3X_PIR(node->schq);
+		cfg->regval[num_regs] =
+			otx2_get_txschq_rate_regval(pfvf, maxrate, 65536);
+		num_regs++;
+
+		/* configure CIR */
+		if (!test_bit(QOS_CIR_PIR_SUPPORT, &pfvf->hw.cap_flag)) {
+			/* Don't configure CIR when both CIR+PIR not supported
+			 * On 96xx, CIR + PIR + RED_ALGO=STALL causes deadlock
+			 */
+			goto txschq_cfg_out;
+		}
+
+		cfg->reg[num_regs] = NIX_AF_TL3X_CIR(node->schq);
+		cfg->regval[num_regs] =
+			otx2_get_txschq_rate_regval(pfvf, node->rate, 65536);
+		num_regs++;
+	} else if (level == NIX_TXSCH_LVL_TL2) {
+		/* configure parent txschq */
+		cfg->reg[num_regs] = NIX_AF_TL2X_PARENT(node->schq);
+		cfg->regval[num_regs] = hw->tx_link << 16;
+		num_regs++;
+
+		/* configure link cfg */
+		if (level == pfvf->qos.link_cfg_lvl) {
+			cfg->reg[num_regs] = NIX_AF_TL3_TL2X_LINKX_CFG(node->schq, hw->tx_link);
+			cfg->regval[num_regs] = BIT_ULL(13) | BIT_ULL(12);
+			num_regs++;
+		}
+
+		/* return if not htb node */
+		if (node->qid == OTX2_QOS_QID_NONE) {
+			cfg->reg[num_regs] = NIX_AF_TL2X_SCHEDULE(node->schq);
+			cfg->regval[num_regs] =  node->prio << 24 |
+						 mtu_to_dwrr_weight(pfvf,
+								    pfvf->tx_max_pktlen);
+			num_regs++;
+			goto txschq_cfg_out;
+		}
+
+		/* check if node is root */
+		if (node->qid == OTX2_QOS_QID_INNER && !node->parent) {
+			cfg->reg[num_regs] = NIX_AF_TL2X_SCHEDULE(node->schq);
+			cfg->regval[num_regs] =  TXSCH_TL1_DFLT_RR_PRIO << 24 |
+						 mtu_to_dwrr_weight(pfvf,
+								    pfvf->tx_max_pktlen);
+			num_regs++;
+			goto txschq_cfg_out;
+		}
+
+		/* configure priority/quantum */
+		cfg->reg[num_regs] = NIX_AF_TL2X_SCHEDULE(node->schq);
+		cfg->regval[num_regs] = (node->schq -
+					 node->parent->prio_anchor) << 24;
+		num_regs++;
+
+		/* configure PIR */
+		maxrate = (node->rate > node->ceil) ? node->rate : node->ceil;
+		cfg->reg[num_regs] = NIX_AF_TL2X_PIR(node->schq);
+		cfg->regval[num_regs] =
+			otx2_get_txschq_rate_regval(pfvf, maxrate, 65536);
+		num_regs++;
+
+		/* configure CIR */
+		if (!test_bit(QOS_CIR_PIR_SUPPORT, &pfvf->hw.cap_flag)) {
+			/* Don't configure CIR when both CIR+PIR not supported
+			 * On 96xx, CIR + PIR + RED_ALGO=STALL causes deadlock
+			 */
+			goto txschq_cfg_out;
+		}
+
+		cfg->reg[num_regs] = NIX_AF_TL2X_CIR(node->schq);
+		cfg->regval[num_regs] =
+			otx2_get_txschq_rate_regval(pfvf, node->rate, 65536);
+		num_regs++;
+	}
+
+txschq_cfg_out:
+	cfg->num_regs = num_regs;
+}
+
+static int otx2_qos_txschq_set_parent_topology(struct otx2_nic *pfvf,
+					       struct otx2_qos_node *parent)
+{
+	struct mbox *mbox = &pfvf->mbox;
+	struct nix_txschq_config *cfg;
+	int rc;
+
+	if (parent->level == NIX_TXSCH_LVL_MDQ)
+		return 0;
+
+	mutex_lock(&mbox->lock);
+
+	cfg = otx2_mbox_alloc_msg_nix_txschq_cfg(&pfvf->mbox);
+	if (!cfg) {
+		mutex_unlock(&mbox->lock);
+		return -ENOMEM;
+	}
+
+	cfg->lvl = parent->level;
+
+	if (parent->level == NIX_TXSCH_LVL_TL4)
+		cfg->reg[0] = NIX_AF_TL4X_TOPOLOGY(parent->schq);
+	else if (parent->level == NIX_TXSCH_LVL_TL3)
+		cfg->reg[0] = NIX_AF_TL3X_TOPOLOGY(parent->schq);
+	else if (parent->level == NIX_TXSCH_LVL_TL2)
+		cfg->reg[0] = NIX_AF_TL2X_TOPOLOGY(parent->schq);
+	else if (parent->level == NIX_TXSCH_LVL_TL1)
+		cfg->reg[0] = NIX_AF_TL1X_TOPOLOGY(parent->schq);
+
+	cfg->regval[0] = (u64)parent->prio_anchor << 32;
+	if (parent->level == NIX_TXSCH_LVL_TL1)
+		cfg->regval[0] |= (u64)TXSCH_TL1_DFLT_RR_PRIO << 1;
+
+	cfg->num_regs++;
+
+	rc = otx2_sync_mbox_msg(&pfvf->mbox);
+
+	mutex_unlock(&mbox->lock);
+
+	return rc;
+}
+
+static void otx2_qos_free_hw_node_schq(struct otx2_nic *pfvf,
+				       struct otx2_qos_node *parent)
+{
+	struct otx2_qos_node *node;
+
+	list_for_each_entry_reverse(node, &parent->child_schq_list, list)
+		otx2_txschq_free_one(pfvf, node->level, node->schq);
+}
+
+static void otx2_qos_free_hw_node(struct otx2_nic *pfvf,
+				  struct otx2_qos_node *parent)
+{
+	struct otx2_qos_node *node, *tmp;
+
+	list_for_each_entry_safe(node, tmp, &parent->child_list, list) {
+		otx2_qos_free_hw_node(pfvf, node);
+		otx2_qos_free_hw_node_schq(pfvf, node);
+		otx2_txschq_free_one(pfvf, node->level, node->schq);
+	}
+}
+
+static void otx2_qos_free_hw_cfg(struct otx2_nic *pfvf,
+				 struct otx2_qos_node *node)
+{
+	mutex_lock(&pfvf->qos.qos_lock);
+
+	/* free child node hw mappings */
+	otx2_qos_free_hw_node(pfvf, node);
+	otx2_qos_free_hw_node_schq(pfvf, node);
+
+	/* free node hw mappings */
+	otx2_txschq_free_one(pfvf, node->level, node->schq);
+
+	mutex_unlock(&pfvf->qos.qos_lock);
+}
+
+static void otx2_qos_sw_node_delete(struct otx2_nic *pfvf,
+				    struct otx2_qos_node *node)
+{
+	hash_del(&node->hlist);
+
+	if (node->qid != OTX2_QOS_QID_INNER && node->qid != OTX2_QOS_QID_NONE) {
+		__clear_bit(node->qid, pfvf->qos.qos_sq_bmap);
+		otx2_qos_update_tx_netdev_queues(pfvf);
+	}
+
+	list_del(&node->list);
+	kfree(node);
+}
+
+static void otx2_qos_free_sw_node_schq(struct otx2_nic *pfvf,
+				       struct otx2_qos_node *parent)
+{
+	struct otx2_qos_node *node, *tmp;
+
+	list_for_each_entry_safe(node, tmp, &parent->child_schq_list, list) {
+		list_del(&node->list);
+		kfree(node);
+	}
+}
+
+static void __otx2_qos_free_sw_node(struct otx2_nic *pfvf,
+				    struct otx2_qos_node *parent)
+{
+	struct otx2_qos_node *node, *tmp;
+
+	list_for_each_entry_safe(node, tmp, &parent->child_list, list) {
+		__otx2_qos_free_sw_node(pfvf, node);
+		otx2_qos_free_sw_node_schq(pfvf, node);
+		otx2_qos_sw_node_delete(pfvf, node);
+	}
+}
+
+static void otx2_qos_free_sw_node(struct otx2_nic *pfvf,
+				  struct otx2_qos_node *node)
+{
+	mutex_lock(&pfvf->qos.qos_lock);
+
+	__otx2_qos_free_sw_node(pfvf, node);
+	otx2_qos_free_sw_node_schq(pfvf, node);
+	otx2_qos_sw_node_delete(pfvf, node);
+
+	mutex_unlock(&pfvf->qos.qos_lock);
+}
+
+static void otx2_qos_destroy_node(struct otx2_nic *pfvf,
+				  struct otx2_qos_node *node)
+{
+	otx2_qos_free_hw_cfg(pfvf, node);
+	otx2_qos_free_sw_node(pfvf, node);
+}
+
+static void otx2_qos_fill_cfg_schq(struct otx2_qos_node *parent,
+				   struct otx2_qos_cfg *cfg)
+{
+	struct otx2_qos_node *node;
+
+	list_for_each_entry(node, &parent->child_schq_list, list)
+		cfg->schq[node->level]++;
+}
+
+static void otx2_qos_fill_cfg_tl(struct otx2_qos_node *parent,
+				 struct otx2_qos_cfg *cfg)
+{
+	struct otx2_qos_node *node;
+
+	list_for_each_entry(node, &parent->child_list, list) {
+		otx2_qos_fill_cfg_tl(node, cfg);
+		cfg->schq_contig[node->level]++;
+		otx2_qos_fill_cfg_schq(node, cfg);
+	}
+}
+
+static void otx2_qos_prepare_txschq_cfg(struct otx2_nic *pfvf,
+					struct otx2_qos_node *parent,
+					struct otx2_qos_cfg *cfg)
+{
+	mutex_lock(&pfvf->qos.qos_lock);
+	otx2_qos_fill_cfg_tl(parent, cfg);
+	mutex_unlock(&pfvf->qos.qos_lock);
+}
+
+static void otx2_qos_read_txschq_cfg_schq(struct otx2_qos_node *parent,
+					  struct otx2_qos_cfg *cfg)
+{
+	struct otx2_qos_node *node;
+	int cnt;
+
+	list_for_each_entry(node, &parent->child_schq_list, list) {
+		cnt = cfg->dwrr_node_pos[node->level];
+		cfg->schq_list[node->level][cnt] = node->schq;
+		cfg->schq[node->level]++;
+		cfg->dwrr_node_pos[node->level]++;
+	}
+}
+
+static void otx2_qos_read_txschq_cfg_tl(struct otx2_qos_node *parent,
+					struct otx2_qos_cfg *cfg)
+{
+	struct otx2_qos_node *node;
+	int cnt;
+
+	list_for_each_entry(node, &parent->child_list, list) {
+		otx2_qos_read_txschq_cfg_tl(node, cfg);
+		cnt = cfg->static_node_pos[node->level];
+		cfg->schq_contig_list[node->level][cnt] = node->schq;
+		cfg->schq_contig[node->level]++;
+		cfg->static_node_pos[node->level]++;
+		otx2_qos_read_txschq_cfg_schq(node, cfg);
+	}
+}
+
+static void otx2_qos_read_txschq_cfg(struct otx2_nic *pfvf,
+				     struct otx2_qos_node *node,
+				     struct otx2_qos_cfg *cfg)
+{
+	mutex_lock(&pfvf->qos.qos_lock);
+	otx2_qos_read_txschq_cfg_tl(node, cfg);
+	mutex_unlock(&pfvf->qos.qos_lock);
+}
+
+static struct otx2_qos_node *
+otx2_qos_alloc_root(struct otx2_nic *pfvf)
+{
+	struct otx2_qos_node *node;
+
+	node = kzalloc(sizeof(*node), GFP_KERNEL);
+	if (!node)
+		return ERR_PTR(-ENOMEM);
+
+	node->parent = NULL;
+	if (!is_otx2_vf(pfvf->pcifunc))
+		node->level = NIX_TXSCH_LVL_TL1;
+	else
+		node->level = NIX_TXSCH_LVL_TL2;
+
+	node->qid = OTX2_QOS_QID_INNER;
+	node->classid = OTX2_QOS_ROOT_CLASSID;
+
+	hash_add(pfvf->qos.qos_hlist, &node->hlist, node->classid);
+	list_add_tail(&node->list, &pfvf->qos.qos_tree);
+	INIT_LIST_HEAD(&node->child_list);
+	INIT_LIST_HEAD(&node->child_schq_list);
+
+	return node;
+}
+
+static int otx2_qos_add_child_node(struct otx2_qos_node *parent,
+				   struct otx2_qos_node *node)
+{
+	struct list_head *head = &parent->child_list;
+	struct otx2_qos_node *tmp_node;
+	struct list_head *tmp;
+
+	for (tmp = head->next; tmp != head; tmp = tmp->next) {
+		tmp_node = list_entry(tmp, struct otx2_qos_node, list);
+		if (tmp_node->prio == node->prio)
+			return -EEXIST;
+		if (tmp_node->prio > node->prio) {
+			list_add_tail(&node->list, tmp);
+			return 0;
+		}
+	}
+
+	list_add_tail(&node->list, head);
+	return 0;
+}
+
+static int otx2_qos_alloc_txschq_node(struct otx2_nic *pfvf,
+				      struct otx2_qos_node *node)
+{
+	struct otx2_qos_node *txschq_node, *parent, *tmp;
+	int lvl;
+
+	parent = node;
+	for (lvl = node->level - 1; lvl >= NIX_TXSCH_LVL_MDQ; lvl--) {
+		txschq_node = kzalloc(sizeof(*txschq_node), GFP_KERNEL);
+		if (!txschq_node)
+			goto err_out;
+
+		txschq_node->parent = parent;
+		txschq_node->level = lvl;
+		txschq_node->classid = OTX2_QOS_CLASS_NONE;
+		txschq_node->qid = OTX2_QOS_QID_NONE;
+		txschq_node->rate = 0;
+		txschq_node->ceil = 0;
+		txschq_node->prio = 0;
+
+		mutex_lock(&pfvf->qos.qos_lock);
+		list_add_tail(&txschq_node->list, &node->child_schq_list);
+		mutex_unlock(&pfvf->qos.qos_lock);
+
+		INIT_LIST_HEAD(&txschq_node->child_list);
+		INIT_LIST_HEAD(&txschq_node->child_schq_list);
+		parent = txschq_node;
+	}
+
+	return 0;
+
+err_out:
+	list_for_each_entry_safe(txschq_node, tmp, &node->child_schq_list,
+				 list) {
+		list_del(&txschq_node->list);
+		kfree(txschq_node);
+	}
+	return -ENOMEM;
+}
+
+static struct otx2_qos_node *
+otx2_qos_sw_create_leaf_node(struct otx2_nic *pfvf,
+			     struct otx2_qos_node *parent,
+			     u16 classid, u32 prio, u64 rate, u64 ceil,
+			     u16 qid)
+{
+	struct otx2_qos_node *node;
+	int err;
+
+	node = kzalloc(sizeof(*node), GFP_KERNEL);
+	if (!node)
+		return ERR_PTR(-ENOMEM);
+
+	node->parent = parent;
+	node->level = parent->level - 1;
+	node->classid = classid;
+	node->qid = qid;
+	node->rate = otx2_qos_convert_rate(rate);
+	node->ceil = otx2_qos_convert_rate(ceil);
+	node->prio = prio;
+
+	__set_bit(qid, pfvf->qos.qos_sq_bmap);
+
+	hash_add(pfvf->qos.qos_hlist, &node->hlist, classid);
+
+	mutex_lock(&pfvf->qos.qos_lock);
+	err = otx2_qos_add_child_node(parent, node);
+	if (err) {
+		mutex_unlock(&pfvf->qos.qos_lock);
+		return ERR_PTR(err);
+	}
+	mutex_unlock(&pfvf->qos.qos_lock);
+
+	INIT_LIST_HEAD(&node->child_list);
+	INIT_LIST_HEAD(&node->child_schq_list);
+
+	err = otx2_qos_alloc_txschq_node(pfvf, node);
+	if (err) {
+		otx2_qos_sw_node_delete(pfvf, node);
+		return ERR_PTR(-ENOMEM);
+	}
+
+	return node;
+}
+
+static struct otx2_qos_node *
+otx2_sw_node_find(struct otx2_nic *pfvf, u32 classid)
+{
+	struct otx2_qos_node *node = NULL;
+
+	hash_for_each_possible(pfvf->qos.qos_hlist, node, hlist, classid) {
+		if (node->classid == classid)
+			break;
+	}
+
+	return node;
+}
+
+int otx2_get_txq_by_classid(struct otx2_nic *pfvf, u16 classid)
+{
+	struct otx2_qos_node *node;
+	u16 qid;
+	int res;
+
+	node = otx2_sw_node_find(pfvf, classid);
+	if (!node) {
+		res = -ENOENT;
+		goto out;
+	}
+	qid = READ_ONCE(node->qid);
+	if (qid == OTX2_QOS_QID_INNER) {
+		res = -EINVAL;
+		goto out;
+	}
+	res = pfvf->hw.tx_queues + qid;
+out:
+	return res;
+}
+
+static int
+otx2_qos_txschq_config(struct otx2_nic *pfvf, struct otx2_qos_node *node)
+{
+	struct mbox *mbox = &pfvf->mbox;
+	struct nix_txschq_config *req;
+	int rc;
+
+	mutex_lock(&mbox->lock);
+
+	req = otx2_mbox_alloc_msg_nix_txschq_cfg(&pfvf->mbox);
+	if (!req) {
+		mutex_unlock(&mbox->lock);
+		return -ENOMEM;
+	}
+
+	req->lvl = node->level;
+	__otx2_qos_txschq_cfg(pfvf, node, req);
+
+	rc = otx2_sync_mbox_msg(&pfvf->mbox);
+
+	mutex_unlock(&mbox->lock);
+
+	return rc;
+}
+
+static int otx2_qos_txschq_alloc(struct otx2_nic *pfvf,
+				 struct otx2_qos_cfg *cfg)
+{
+	struct nix_txsch_alloc_req *req;
+	struct nix_txsch_alloc_rsp *rsp;
+	struct mbox *mbox = &pfvf->mbox;
+	int lvl, rc, schq;
+
+	mutex_lock(&mbox->lock);
+	req = otx2_mbox_alloc_msg_nix_txsch_alloc(&pfvf->mbox);
+	if (!req) {
+		mutex_unlock(&mbox->lock);
+		return -ENOMEM;
+	}
+
+	for (lvl = 0; lvl < NIX_TXSCH_LVL_CNT; lvl++) {
+		req->schq[lvl] = cfg->schq[lvl];
+		req->schq_contig[lvl] = cfg->schq_contig[lvl];
+	}
+
+	rc = otx2_sync_mbox_msg(&pfvf->mbox);
+	if (rc) {
+		mutex_unlock(&mbox->lock);
+		return rc;
+	}
+
+	rsp = (struct nix_txsch_alloc_rsp *)
+	      otx2_mbox_get_rsp(&pfvf->mbox.mbox, 0, &req->hdr);
+
+	for (lvl = 0; lvl < NIX_TXSCH_LVL_CNT; lvl++) {
+		for (schq = 0; schq < rsp->schq_contig[lvl]; schq++) {
+			cfg->schq_contig_list[lvl][schq] =
+				rsp->schq_contig_list[lvl][schq];
+		}
+	}
+
+	for (lvl = 0; lvl < NIX_TXSCH_LVL_CNT; lvl++) {
+		for (schq = 0; schq < rsp->schq[lvl]; schq++) {
+			cfg->schq_list[lvl][schq] =
+				rsp->schq_list[lvl][schq];
+		}
+	}
+
+	pfvf->qos.link_cfg_lvl = rsp->link_cfg_lvl;
+
+	mutex_unlock(&mbox->lock);
+
+	return rc;
+}
+
+static void otx2_qos_txschq_fill_cfg_schq(struct otx2_nic *pfvf,
+					  struct otx2_qos_node *node,
+					  struct otx2_qos_cfg *cfg)
+{
+	struct otx2_qos_node *tmp;
+	int cnt;
+
+	list_for_each_entry(tmp, &node->child_schq_list, list) {
+		cnt = cfg->dwrr_node_pos[tmp->level];
+		tmp->schq = cfg->schq_list[tmp->level][cnt];
+		cfg->dwrr_node_pos[tmp->level]++;
+	}
+}
+
+static void otx2_qos_txschq_fill_cfg_tl(struct otx2_nic *pfvf,
+					struct otx2_qos_node *node,
+					struct otx2_qos_cfg *cfg)
+{
+	struct otx2_qos_node *tmp;
+	int cnt;
+
+	list_for_each_entry(tmp, &node->child_list, list) {
+		otx2_qos_txschq_fill_cfg_tl(pfvf, tmp, cfg);
+		cnt = cfg->static_node_pos[tmp->level];
+		tmp->schq = cfg->schq_contig_list[tmp->level][cnt];
+		if (cnt == 0)
+			node->prio_anchor = tmp->schq;
+		cfg->static_node_pos[tmp->level]++;
+		otx2_qos_txschq_fill_cfg_schq(pfvf, tmp, cfg);
+	}
+}
+
+static void otx2_qos_txschq_fill_cfg(struct otx2_nic *pfvf,
+				     struct otx2_qos_node *node,
+				     struct otx2_qos_cfg *cfg)
+{
+	mutex_lock(&pfvf->qos.qos_lock);
+	otx2_qos_txschq_fill_cfg_tl(pfvf, node, cfg);
+	otx2_qos_txschq_fill_cfg_schq(pfvf, node, cfg);
+	mutex_unlock(&pfvf->qos.qos_lock);
+}
+
+static int otx2_qos_txschq_push_cfg_schq(struct otx2_nic *pfvf,
+					 struct otx2_qos_node *node,
+					 struct otx2_qos_cfg *cfg)
+{
+	struct otx2_qos_node *tmp;
+	int ret = 0;
+
+	list_for_each_entry(tmp, &node->child_schq_list, list) {
+		ret = otx2_qos_txschq_config(pfvf, tmp);
+		if (ret)
+			return -EIO;
+		ret = otx2_qos_txschq_set_parent_topology(pfvf, tmp->parent);
+		if (ret)
+			return -EIO;
+	}
+
+	return 0;
+}
+
+static int otx2_qos_txschq_push_cfg_tl(struct otx2_nic *pfvf,
+				       struct otx2_qos_node *node,
+				       struct otx2_qos_cfg *cfg)
+{
+	struct otx2_qos_node *tmp;
+	int ret;
+
+	list_for_each_entry(tmp, &node->child_list, list) {
+		ret = otx2_qos_txschq_push_cfg_tl(pfvf, tmp, cfg);
+		if (ret)
+			return -EIO;
+		ret = otx2_qos_txschq_config(pfvf, tmp);
+		if (ret)
+			return -EIO;
+		ret = otx2_qos_txschq_push_cfg_schq(pfvf, tmp, cfg);
+		if (ret)
+			return -EIO;
+	}
+
+	ret = otx2_qos_txschq_set_parent_topology(pfvf, node);
+	if (ret)
+		return -EIO;
+
+	return 0;
+}
+
+static int otx2_qos_txschq_push_cfg(struct otx2_nic *pfvf,
+				    struct otx2_qos_node *node,
+				    struct otx2_qos_cfg *cfg)
+{
+	int ret;
+
+	mutex_lock(&pfvf->qos.qos_lock);
+	ret = otx2_qos_txschq_push_cfg_tl(pfvf, node, cfg);
+	if (ret)
+		goto out;
+	ret = otx2_qos_txschq_push_cfg_schq(pfvf, node, cfg);
+out:
+	mutex_unlock(&pfvf->qos.qos_lock);
+	return ret;
+}
+
+static int otx2_qos_txschq_update_config(struct otx2_nic *pfvf,
+					 struct otx2_qos_node *node,
+					 struct otx2_qos_cfg *cfg)
+{
+	otx2_qos_txschq_fill_cfg(pfvf, node, cfg);
+
+	return otx2_qos_txschq_push_cfg(pfvf, node, cfg);
+}
+
+static int otx2_qos_txschq_update_root_cfg(struct otx2_nic *pfvf,
+					   struct otx2_qos_node *root,
+					   struct otx2_qos_cfg *cfg)
+{
+	root->schq = cfg->schq_list[root->level][0];
+	return otx2_qos_txschq_config(pfvf, root);
+}
+
+static void otx2_qos_free_cfg(struct otx2_nic *pfvf, struct otx2_qos_cfg *cfg)
+{
+	int lvl, idx, schq;
+
+	for (lvl = 0; lvl < NIX_TXSCH_LVL_CNT; lvl++) {
+		for (idx = 0; idx < cfg->schq[lvl]; idx++) {
+			schq = cfg->schq_list[lvl][idx];
+			otx2_txschq_free_one(pfvf, lvl, schq);
+		}
+	}
+
+	for (lvl = 0; lvl < NIX_TXSCH_LVL_CNT; lvl++) {
+		for (idx = 0; idx < cfg->schq_contig[lvl]; idx++) {
+			schq = cfg->schq_contig_list[lvl][idx];
+			otx2_txschq_free_one(pfvf, lvl, schq);
+		}
+	}
+}
+
+static void otx2_qos_enadis_sq(struct otx2_nic *pfvf,
+			       struct otx2_qos_node *node,
+			       u16 qid)
+{
+	if (pfvf->qos.qid_to_sqmap[qid] != OTX2_QOS_INVALID_SQ)
+		otx2_qos_disable_sq(pfvf, qid);
+
+	pfvf->qos.qid_to_sqmap[qid] = node->schq;
+	otx2_qos_enable_sq(pfvf, qid);
+}
+
+static void otx2_qos_update_smq_schq(struct otx2_nic *pfvf,
+				     struct otx2_qos_node *node,
+				     bool action)
+{
+	struct otx2_qos_node *tmp;
+
+	if (node->qid == OTX2_QOS_QID_INNER)
+		return;
+
+	list_for_each_entry(tmp, &node->child_schq_list, list) {
+		if (tmp->level == NIX_TXSCH_LVL_MDQ) {
+			if (action == QOS_SMQ_FLUSH)
+				otx2_smq_flush(pfvf, tmp->schq);
+			else
+				otx2_qos_enadis_sq(pfvf, tmp, node->qid);
+		}
+	}
+}
+
+static void __otx2_qos_update_smq(struct otx2_nic *pfvf,
+				  struct otx2_qos_node *node,
+				  bool action)
+{
+	struct otx2_qos_node *tmp;
+
+	list_for_each_entry(tmp, &node->child_list, list) {
+		__otx2_qos_update_smq(pfvf, tmp, action);
+		if (tmp->qid == OTX2_QOS_QID_INNER)
+			continue;
+		if (tmp->level == NIX_TXSCH_LVL_MDQ) {
+			if (action == QOS_SMQ_FLUSH)
+				otx2_smq_flush(pfvf, tmp->schq);
+			else
+				otx2_qos_enadis_sq(pfvf, tmp, tmp->qid);
+		} else {
+			otx2_qos_update_smq_schq(pfvf, tmp, action);
+		}
+	}
+}
+
+static void otx2_qos_update_smq(struct otx2_nic *pfvf,
+				struct otx2_qos_node *node,
+				bool action)
+{
+	mutex_lock(&pfvf->qos.qos_lock);
+	__otx2_qos_update_smq(pfvf, node, action);
+	otx2_qos_update_smq_schq(pfvf, node, action);
+	mutex_unlock(&pfvf->qos.qos_lock);
+}
+
+static int otx2_qos_push_txschq_cfg(struct otx2_nic *pfvf,
+				    struct otx2_qos_node *node,
+				    struct otx2_qos_cfg *cfg)
+{
+	int ret = 0;
+
+	ret = otx2_qos_txschq_alloc(pfvf, cfg);
+	if (ret)
+		return -ENOSPC;
+
+	if (!(pfvf->netdev->flags & IFF_UP)) {
+		otx2_qos_txschq_fill_cfg(pfvf, node, cfg);
+		return 0;
+	}
+
+	ret = otx2_qos_txschq_update_config(pfvf, node, cfg);
+	if (ret) {
+		otx2_qos_free_cfg(pfvf, cfg);
+		return -EIO;
+	}
+
+	otx2_qos_update_smq(pfvf, node, QOS_CFG_SQ);
+
+	return 0;
+}
+
+static int otx2_qos_update_tree(struct otx2_nic *pfvf,
+				struct otx2_qos_node *node,
+				struct otx2_qos_cfg *cfg)
+{
+	otx2_qos_prepare_txschq_cfg(pfvf, node->parent, cfg);
+	return otx2_qos_push_txschq_cfg(pfvf, node->parent, cfg);
+}
+
+static int otx2_qos_root_add(struct otx2_nic *pfvf, u16 htb_maj_id, u16 htb_defcls,
+			     struct netlink_ext_ack *extack)
+{
+	struct otx2_qos_cfg *new_cfg;
+	struct otx2_qos_node *root;
+	int err;
+
+	netdev_dbg(pfvf->netdev,
+		   "TC_HTB_CREATE: handle=0x%x defcls=0x%x\n",
+		   htb_maj_id, htb_defcls);
+
+	INIT_LIST_HEAD(&pfvf->qos.qos_tree);
+	mutex_init(&pfvf->qos.qos_lock);
+
+	root = otx2_qos_alloc_root(pfvf);
+	if (IS_ERR(root)) {
+		mutex_destroy(&pfvf->qos.qos_lock);
+		err = PTR_ERR(root);
+		return err;
+	}
+
+	/* allocate txschq queue */
+	new_cfg = kzalloc(sizeof(*new_cfg), GFP_KERNEL);
+	if (!new_cfg) {
+		NL_SET_ERR_MSG_MOD(extack, "Memory allocation error");
+		mutex_destroy(&pfvf->qos.qos_lock);
+		return -ENOMEM;
+	}
+	/* allocate htb root node */
+	new_cfg->schq[root->level] = 1;
+	err = otx2_qos_txschq_alloc(pfvf, new_cfg);
+	if (err) {
+		NL_SET_ERR_MSG_MOD(extack, "Error allocating txschq");
+		goto free_root_node;
+	}
+
+	if (!(pfvf->netdev->flags & IFF_UP) ||
+	    root->level == NIX_TXSCH_LVL_TL1) {
+		root->schq = new_cfg->schq_list[root->level][0];
+		goto out;
+	}
+
+	/* update the txschq configuration in hw */
+	err = otx2_qos_txschq_update_root_cfg(pfvf, root, new_cfg);
+	if (err) {
+		NL_SET_ERR_MSG_MOD(extack,
+				   "Error updating txschq configuration");
+		goto txschq_free;
+	}
+
+out:
+	WRITE_ONCE(pfvf->qos.defcls, htb_defcls);
+	smp_store_release(&pfvf->qos.maj_id, htb_maj_id); /* barrier */
+	kfree(new_cfg);
+	return 0;
+
+txschq_free:
+	otx2_qos_free_cfg(pfvf, new_cfg);
+free_root_node:
+	kfree(new_cfg);
+	otx2_qos_sw_node_delete(pfvf, root);
+	mutex_destroy(&pfvf->qos.qos_lock);
+	return err;
+}
+
+static int otx2_qos_root_destroy(struct otx2_nic *pfvf)
+{
+	struct otx2_qos_node *root;
+
+	netdev_dbg(pfvf->netdev, "TC_HTB_DESTROY\n");
+
+	/* find root node */
+	root = otx2_sw_node_find(pfvf, OTX2_QOS_ROOT_CLASSID);
+	if (!root)
+		return -ENOENT;
+
+	/* free the hw mappings */
+	otx2_qos_destroy_node(pfvf, root);
+	mutex_destroy(&pfvf->qos.qos_lock);
+
+	return 0;
+}
+
+static int otx2_qos_validate_configuration(struct otx2_qos_node *parent,
+					   struct netlink_ext_ack *extack,
+					   struct otx2_nic *pfvf,
+					   u64 prio)
+{
+	if (test_bit(prio, parent->prio_bmap)) {
+		NL_SET_ERR_MSG_MOD(extack,
+				   "Static priority child with same priority exists");
+		return -EEXIST;
+	}
+
+	if (prio == TXSCH_TL1_DFLT_RR_PRIO) {
+		NL_SET_ERR_MSG_MOD(extack,
+				   "Priority is reserved for Round Robin");
+		return -EINVAL;
+	}
+
+	return 0;
+}
+
+static int otx2_qos_leaf_alloc_queue(struct otx2_nic *pfvf, u16 classid,
+				     u32 parent_classid, u64 rate, u64 ceil,
+				     u64 prio, struct netlink_ext_ack *extack)
+{
+	struct otx2_qos_cfg *old_cfg, *new_cfg;
+	struct otx2_qos_node *node, *parent;
+	int qid, ret, err;
+
+	netdev_dbg(pfvf->netdev,
+		   "TC_HTB_LEAF_ALLOC_QUEUE: classid=0x%x parent_classid=0x%x rate=%lld ceil=%lld prio=%lld\n",
+		   classid, parent_classid, rate, ceil, prio);
+
+	if (prio > OTX2_QOS_MAX_PRIO) {
+		NL_SET_ERR_MSG_MOD(extack, "Valid priority range 0 to 7");
+		ret = -EOPNOTSUPP;
+		goto out;
+	}
+
+	/* get parent node */
+	parent = otx2_sw_node_find(pfvf, parent_classid);
+	if (!parent) {
+		NL_SET_ERR_MSG_MOD(extack, "parent node not found");
+		ret = -ENOENT;
+		goto out;
+	}
+	if (parent->level == NIX_TXSCH_LVL_MDQ) {
+		NL_SET_ERR_MSG_MOD(extack, "HTB qos max levels reached");
+		ret = -EOPNOTSUPP;
+		goto out;
+	}
+
+	ret = otx2_qos_validate_configuration(parent, extack, pfvf, prio);
+	if (ret)
+		goto out;
+
+	set_bit(prio, parent->prio_bmap);
+
+	/* read current txschq configuration */
+	old_cfg = kzalloc(sizeof(*old_cfg), GFP_KERNEL);
+	if (!old_cfg) {
+		NL_SET_ERR_MSG_MOD(extack, "Memory allocation error");
+		ret = -ENOMEM;
+		goto out;
+	}
+	otx2_qos_read_txschq_cfg(pfvf, parent, old_cfg);
+
+	/* allocate a new sq */
+	qid = otx2_qos_get_qid(pfvf);
+	if (qid < 0) {
+		NL_SET_ERR_MSG_MOD(extack, "Reached max supported QOS SQ's");
+		ret = -ENOMEM;
+		goto free_old_cfg;
+	}
+
+	/* Actual SQ mapping will be updated after SMQ alloc */
+	pfvf->qos.qid_to_sqmap[qid] = OTX2_QOS_INVALID_SQ;
+
+	/* allocate and initialize a new child node */
+	node = otx2_qos_sw_create_leaf_node(pfvf, parent, classid, prio, rate,
+					    ceil, qid);
+	if (IS_ERR(node)) {
+		NL_SET_ERR_MSG_MOD(extack, "Unable to allocate leaf node");
+		ret = PTR_ERR(node);
+		goto free_old_cfg;
+	}
+
+	/* push new txschq config to hw */
+	new_cfg = kzalloc(sizeof(*new_cfg), GFP_KERNEL);
+	if (!new_cfg) {
+		NL_SET_ERR_MSG_MOD(extack, "Memory allocation error");
+		ret = -ENOMEM;
+		goto free_node;
+	}
+	ret = otx2_qos_update_tree(pfvf, node, new_cfg);
+	if (ret) {
+		NL_SET_ERR_MSG_MOD(extack, "HTB HW configuration error");
+		kfree(new_cfg);
+		otx2_qos_sw_node_delete(pfvf, node);
+		/* restore the old qos tree */
+		err = otx2_qos_txschq_update_config(pfvf, parent, old_cfg);
+		if (err) {
+			netdev_err(pfvf->netdev,
+				   "Failed to restore txschq configuration\n");
+			goto free_old_cfg;
+		}
+
+		otx2_qos_update_smq(pfvf, parent, QOS_CFG_SQ);
+		goto free_old_cfg;
+	}
+
+	/* update tx_real_queues */
+	otx2_qos_update_tx_netdev_queues(pfvf);
+
+	/* free new txschq config */
+	kfree(new_cfg);
+
+	/* free old txschq config */
+	otx2_qos_free_cfg(pfvf, old_cfg);
+	kfree(old_cfg);
+
+	return pfvf->hw.tx_queues + qid;
+
+free_node:
+	otx2_qos_sw_node_delete(pfvf, node);
+free_old_cfg:
+	kfree(old_cfg);
+out:
+	return ret;
+}
+
+static int otx2_qos_leaf_to_inner(struct otx2_nic *pfvf, u16 classid,
+				  u16 child_classid, u64 rate, u64 ceil, u64 prio,
+				  struct netlink_ext_ack *extack)
+{
+	struct otx2_qos_cfg *old_cfg, *new_cfg;
+	struct otx2_qos_node *node, *child;
+	int ret, err;
+	u16 qid;
+
+	netdev_dbg(pfvf->netdev,
+		   "TC_HTB_LEAF_TO_INNER classid %04x, child %04x, rate %llu, ceil %llu\n",
+		   classid, child_classid, rate, ceil);
+
+	if (prio > OTX2_QOS_MAX_PRIO) {
+		NL_SET_ERR_MSG_MOD(extack, "Valid priority range 0 to 7");
+		ret = -EOPNOTSUPP;
+		goto out;
+	}
+
+	/* find node related to classid */
+	node = otx2_sw_node_find(pfvf, classid);
+	if (!node) {
+		NL_SET_ERR_MSG_MOD(extack, "HTB node not found");
+		ret = -ENOENT;
+		goto out;
+	}
+	/* check max qos txschq level */
+	if (node->level == NIX_TXSCH_LVL_MDQ) {
+		NL_SET_ERR_MSG_MOD(extack, "HTB qos level not supported");
+		ret = -EOPNOTSUPP;
+		goto out;
+	}
+
+	set_bit(prio, node->prio_bmap);
+
+	/* store the qid to assign to leaf node */
+	qid = node->qid;
+
+	/* read current txschq configuration */
+	old_cfg = kzalloc(sizeof(*old_cfg), GFP_KERNEL);
+	if (!old_cfg) {
+		NL_SET_ERR_MSG_MOD(extack, "Memory allocation error");
+		ret = -ENOMEM;
+		goto out;
+	}
+	otx2_qos_read_txschq_cfg(pfvf, node, old_cfg);
+
+	/* delete the txschq nodes allocated for this node */
+	otx2_qos_free_sw_node_schq(pfvf, node);
+
+	/* mark this node as htb inner node */
+	node->qid = OTX2_QOS_QID_INNER;
+
+	/* allocate and initialize a new child node */
+	child = otx2_qos_sw_create_leaf_node(pfvf, node, child_classid,
+					     prio, rate, ceil, qid);
+	if (IS_ERR(child)) {
+		NL_SET_ERR_MSG_MOD(extack, "Unable to allocate leaf node");
+		ret = PTR_ERR(child);
+		goto free_old_cfg;
+	}
+
+	/* push new txschq config to hw */
+	new_cfg = kzalloc(sizeof(*new_cfg), GFP_KERNEL);
+	if (!new_cfg) {
+		NL_SET_ERR_MSG_MOD(extack, "Memory allocation error");
+		ret = -ENOMEM;
+		goto free_node;
+	}
+	ret = otx2_qos_update_tree(pfvf, child, new_cfg);
+	if (ret) {
+		NL_SET_ERR_MSG_MOD(extack, "HTB HW configuration error");
+		kfree(new_cfg);
+		otx2_qos_sw_node_delete(pfvf, child);
+		/* restore the old qos tree */
+		node->qid = qid;
+		err = otx2_qos_alloc_txschq_node(pfvf, node);
+		if (err) {
+			netdev_err(pfvf->netdev,
+				   "Failed to restore old leaf node");
+			goto free_old_cfg;
+		}
+		err = otx2_qos_txschq_update_config(pfvf, node, old_cfg);
+		if (err) {
+			netdev_err(pfvf->netdev,
+				   "Failed to restore txschq configuration\n");
+			goto free_old_cfg;
+		}
+		otx2_qos_update_smq(pfvf, node, QOS_CFG_SQ);
+		goto free_old_cfg;
+	}
+
+	/* free new txschq config */
+	kfree(new_cfg);
+
+	/* free old txschq config */
+	otx2_qos_free_cfg(pfvf, old_cfg);
+	kfree(old_cfg);
+
+	return 0;
+
+free_node:
+	otx2_qos_sw_node_delete(pfvf, child);
+free_old_cfg:
+	kfree(old_cfg);
+out:
+	return ret;
+}
+
+static int otx2_qos_leaf_del(struct otx2_nic *pfvf, u16 *classid,
+			     struct netlink_ext_ack *extack)
+{
+	struct otx2_qos_node *node, *parent;
+	u64 prio;
+	u16 qid;
+
+	netdev_dbg(pfvf->netdev, "TC_HTB_LEAF_DEL classid %04x\n", *classid);
+
+	/* find node related to classid */
+	node = otx2_sw_node_find(pfvf, *classid);
+	if (!node) {
+		NL_SET_ERR_MSG_MOD(extack, "HTB node not found");
+		return -ENOENT;
+	}
+	parent = node->parent;
+	prio   = node->prio;
+	qid    = node->qid;
+
+	otx2_qos_disable_sq(pfvf, node->qid);
+
+	otx2_qos_destroy_node(pfvf, node);
+	pfvf->qos.qid_to_sqmap[qid] = OTX2_QOS_INVALID_SQ;
+
+	clear_bit(prio, parent->prio_bmap);
+
+	return 0;
+}
+
+static int otx2_qos_leaf_del_last(struct otx2_nic *pfvf, u16 classid, bool force,
+				  struct netlink_ext_ack *extack)
+{
+	struct otx2_qos_node *node, *parent;
+	struct otx2_qos_cfg *new_cfg;
+	u64 prio;
+	int err;
+	u16 qid;
+
+	netdev_dbg(pfvf->netdev,
+		   "TC_HTB_LEAF_DEL_LAST classid %04x\n", classid);
+
+	/* find node related to classid */
+	node = otx2_sw_node_find(pfvf, classid);
+	if (!node) {
+		NL_SET_ERR_MSG_MOD(extack, "HTB node not found");
+		return -ENOENT;
+	}
+
+	/* save qid for use by parent */
+	qid = node->qid;
+	prio = node->prio;
+
+	parent = otx2_sw_node_find(pfvf, node->parent->classid);
+	if (!parent) {
+		NL_SET_ERR_MSG_MOD(extack, "parent node not found");
+		return -ENOENT;
+	}
+
+	/* destroy the leaf node */
+	otx2_qos_destroy_node(pfvf, node);
+	pfvf->qos.qid_to_sqmap[qid] = OTX2_QOS_INVALID_SQ;
+
+	clear_bit(prio, parent->prio_bmap);
+
+	/* create downstream txschq entries to parent */
+	err = otx2_qos_alloc_txschq_node(pfvf, parent);
+	if (err) {
+		NL_SET_ERR_MSG_MOD(extack, "HTB failed to create txsch configuration");
+		return err;
+	}
+	parent->qid = qid;
+	__set_bit(qid, pfvf->qos.qos_sq_bmap);
+
+	/* push new txschq config to hw */
+	new_cfg = kzalloc(sizeof(*new_cfg), GFP_KERNEL);
+	if (!new_cfg) {
+		NL_SET_ERR_MSG_MOD(extack, "Memory allocation error");
+		return -ENOMEM;
+	}
+	/* fill txschq cfg and push txschq cfg to hw */
+	otx2_qos_fill_cfg_schq(parent, new_cfg);
+	err = otx2_qos_push_txschq_cfg(pfvf, parent, new_cfg);
+	if (err) {
+		NL_SET_ERR_MSG_MOD(extack, "HTB HW configuration error");
+		kfree(new_cfg);
+		return err;
+	}
+	kfree(new_cfg);
+
+	/* update tx_real_queues */
+	otx2_qos_update_tx_netdev_queues(pfvf);
+
+	return 0;
+}
+
+void otx2_clean_qos_queues(struct otx2_nic *pfvf)
+{
+	struct otx2_qos_node *root;
+
+	root = otx2_sw_node_find(pfvf, OTX2_QOS_ROOT_CLASSID);
+	if (!root)
+		return;
+
+	otx2_qos_update_smq(pfvf, root, QOS_SMQ_FLUSH);
+}
+
+void otx2_qos_config_txschq(struct otx2_nic *pfvf)
+{
+	struct otx2_qos_node *root;
+	int err;
+
+	root = otx2_sw_node_find(pfvf, OTX2_QOS_ROOT_CLASSID);
+	if (!root)
+		return;
+
+	err = otx2_qos_txschq_config(pfvf, root);
+	if (err) {
+		netdev_err(pfvf->netdev, "Error updating txschq configuration\n");
+		goto root_destroy;
+	}
+
+	err = otx2_qos_txschq_push_cfg_tl(pfvf, root, NULL);
+	if (err) {
+		netdev_err(pfvf->netdev, "Error updating txschq configuration\n");
+		goto root_destroy;
+	}
+
+	otx2_qos_update_smq(pfvf, root, QOS_CFG_SQ);
+	return;
+
+root_destroy:
+	netdev_err(pfvf->netdev, "Failed to update Scheduler/Shaping config in Hardware\n");
+	/* Free resources allocated */
+	otx2_qos_root_destroy(pfvf);
+}
+
+int otx2_setup_tc_htb(struct net_device *ndev, struct tc_htb_qopt_offload *htb)
+{
+	struct otx2_nic *pfvf = netdev_priv(ndev);
+	int res;
+
+	switch (htb->command) {
+	case TC_HTB_CREATE:
+		return otx2_qos_root_add(pfvf, htb->parent_classid,
+					 htb->classid, htb->extack);
+	case TC_HTB_DESTROY:
+		return otx2_qos_root_destroy(pfvf);
+	case TC_HTB_LEAF_ALLOC_QUEUE:
+		res = otx2_qos_leaf_alloc_queue(pfvf, htb->classid,
+						htb->parent_classid,
+						htb->rate, htb->ceil,
+						htb->prio, htb->extack);
+		if (res < 0)
+			return res;
+		htb->qid = res;
+		return 0;
+	case TC_HTB_LEAF_TO_INNER:
+		return otx2_qos_leaf_to_inner(pfvf, htb->parent_classid,
+					      htb->classid, htb->rate,
+					      htb->ceil, htb->prio,
+					      htb->extack);
+	case TC_HTB_LEAF_DEL:
+		return otx2_qos_leaf_del(pfvf, &htb->classid, htb->extack);
+	case TC_HTB_LEAF_DEL_LAST:
+	case TC_HTB_LEAF_DEL_LAST_FORCE:
+		return otx2_qos_leaf_del_last(pfvf, htb->classid,
+				htb->command == TC_HTB_LEAF_DEL_LAST_FORCE,
+					      htb->extack);
+	case TC_HTB_LEAF_QUERY_QUEUE:
+		res = otx2_get_txq_by_classid(pfvf, htb->classid);
+		htb->qid = res;
+		return 0;
+	case TC_HTB_NODE_MODIFY:
+		fallthrough;
+	default:
+		return -EOPNOTSUPP;
+	}
+}
diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/qos.h b/drivers/net/ethernet/marvell/octeontx2/nic/qos.h
index 73a62d092e99..26de1af2aa57 100644
--- a/drivers/net/ethernet/marvell/octeontx2/nic/qos.h
+++ b/drivers/net/ethernet/marvell/octeontx2/nic/qos.h
@@ -7,13 +7,63 @@
 #ifndef OTX2_QOS_H
 #define OTX2_QOS_H
 
+#include <linux/types.h>
+#include <linux/netdevice.h>
+#include <linux/rhashtable.h>
+
+#define OTX2_QOS_MAX_LVL		4
+#define OTX2_QOS_MAX_PRIO		7
 #define OTX2_QOS_MAX_LEAF_NODES                16
 
-int otx2_qos_enable_sq(struct otx2_nic *pfvf, int qidx, u16 smq);
-void otx2_qos_disable_sq(struct otx2_nic *pfvf, int qidx, u16 mdq);
+enum qos_smq_operations {
+	QOS_CFG_SQ,
+	QOS_SMQ_FLUSH,
+};
+
+u64 otx2_get_txschq_rate_regval(struct otx2_nic *nic, u64 maxrate, u32 burst);
+
+int otx2_setup_tc_htb(struct net_device *ndev, struct tc_htb_qopt_offload *htb);
+int otx2_qos_get_qid(struct otx2_nic *pfvf);
+void otx2_qos_free_qid(struct otx2_nic *pfvf, int qidx);
+int otx2_qos_enable_sq(struct otx2_nic *pfvf, int qidx);
+void otx2_qos_disable_sq(struct otx2_nic *pfvf, int qidx);
+
+struct otx2_qos_cfg {
+	u16 schq[NIX_TXSCH_LVL_CNT];
+	u16 schq_contig[NIX_TXSCH_LVL_CNT];
+	int static_node_pos[NIX_TXSCH_LVL_CNT];
+	int dwrr_node_pos[NIX_TXSCH_LVL_CNT];
+	u16 schq_contig_list[NIX_TXSCH_LVL_CNT][MAX_TXSCHQ_PER_FUNC];
+	u16 schq_list[NIX_TXSCH_LVL_CNT][MAX_TXSCHQ_PER_FUNC];
+};
 
 struct otx2_qos {
-	       u16 qid_to_sqmap[OTX2_QOS_MAX_LEAF_NODES];
-	};
+	DECLARE_HASHTABLE(qos_hlist, order_base_2(OTX2_QOS_MAX_LEAF_NODES));
+	struct mutex qos_lock; /* child list lock */
+	u16 qid_to_sqmap[OTX2_QOS_MAX_LEAF_NODES];
+	struct list_head qos_tree;
+	DECLARE_BITMAP(qos_sq_bmap, OTX2_QOS_MAX_LEAF_NODES);
+	u16 maj_id;
+	u16 defcls;
+	u8  link_cfg_lvl; /* LINKX_CFG CSRs mapped to TL3 or TL2's index ? */
+};
+
+struct otx2_qos_node {
+	struct list_head list; /* list management */
+	struct list_head child_list;
+	struct list_head child_schq_list;
+	struct hlist_node hlist;
+	DECLARE_BITMAP(prio_bmap, OTX2_QOS_MAX_PRIO + 1);
+	struct otx2_qos_node *parent;	/* parent qos node */
+	u64 rate; /* htb params */
+	u64 ceil;
+	u32 classid;
+	u32 prio;
+	u16 schq; /* hw txschq */
+	u16 qid;
+	u16 prio_anchor;
+	u8 level;
+};
+
 
 #endif
diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/qos_sq.c b/drivers/net/ethernet/marvell/octeontx2/nic/qos_sq.c
index 1c77f024c360..8a1e89668a1b 100644
--- a/drivers/net/ethernet/marvell/octeontx2/nic/qos_sq.c
+++ b/drivers/net/ethernet/marvell/octeontx2/nic/qos_sq.c
@@ -225,7 +225,22 @@ static int otx2_qos_ctx_disable(struct otx2_nic *pfvf, u16 qidx, int aura_id)
 	return otx2_sync_mbox_msg(&pfvf->mbox);
 }
 
-int otx2_qos_enable_sq(struct otx2_nic *pfvf, int qidx, u16 smq)
+int otx2_qos_get_qid(struct otx2_nic *pfvf)
+{
+	int qidx;
+
+	qidx = find_first_zero_bit(pfvf->qos.qos_sq_bmap,
+				   pfvf->hw.tc_tx_queues);
+
+	return qidx == pfvf->hw.tc_tx_queues ? -ENOSPC : qidx;
+}
+
+void otx2_qos_free_qid(struct otx2_nic *pfvf, int qidx)
+{
+	clear_bit(qidx, pfvf->qos.qos_sq_bmap);
+}
+
+int otx2_qos_enable_sq(struct otx2_nic *pfvf, int qidx)
 {
 	struct otx2_hw *hw = &pfvf->hw;
 	int pool_id, sq_idx, err;
@@ -241,7 +256,6 @@ int otx2_qos_enable_sq(struct otx2_nic *pfvf, int qidx, u16 smq)
 		goto out;
 
 	pool_id = otx2_get_pool_idx(pfvf, AURA_NIX_SQ, sq_idx);
-	pfvf->qos.qid_to_sqmap[qidx] = smq;
 	err = otx2_sq_init(pfvf, sq_idx, pool_id);
 	if (err)
 		goto out;
@@ -250,7 +264,7 @@ int otx2_qos_enable_sq(struct otx2_nic *pfvf, int qidx, u16 smq)
 	return err;
 }
 
-void otx2_qos_disable_sq(struct otx2_nic *pfvf, int qidx, u16 mdq)
+void otx2_qos_disable_sq(struct otx2_nic *pfvf, int qidx)
 {
 	struct otx2_qset *qset = &pfvf->qset;
 	struct otx2_hw *hw = &pfvf->hw;
-- 
2.17.1


^ permalink raw reply related	[flat|nested] 19+ messages in thread

* [net-next Patch v5 6/6] docs: octeontx2: Add Documentation for QOS
  2023-03-26 18:12 [net-next Patch v5 0/6] octeontx2-pf: HTB offload support Hariprasad Kelam
                   ` (4 preceding siblings ...)
  2023-03-26 18:12 ` [net-next Patch v5 5/6] octeontx2-pf: Add support for HTB offload Hariprasad Kelam
@ 2023-03-26 18:12 ` Hariprasad Kelam
  2023-03-28 14:45   ` Simon Horman
  5 siblings, 1 reply; 19+ messages in thread
From: Hariprasad Kelam @ 2023-03-26 18:12 UTC (permalink / raw)
  To: netdev, linux-kernel
  Cc: kuba, davem, willemdebruijn.kernel, andrew, sgoutham, lcherian,
	gakula, jerinj, sbhatta, hkelam, naveenm, edumazet, pabeni, jhs,
	xiyou.wangcong, jiri, maxtram95

Add QOS example configuration along with tc-htb commands

Signed-off-by: Hariprasad Kelam <hkelam@marvell.com>
---
 .../ethernet/marvell/octeontx2.rst            | 39 +++++++++++++++++++
 1 file changed, 39 insertions(+)

diff --git a/Documentation/networking/device_drivers/ethernet/marvell/octeontx2.rst b/Documentation/networking/device_drivers/ethernet/marvell/octeontx2.rst
index 5ba9015336e2..eca4309964c8 100644
--- a/Documentation/networking/device_drivers/ethernet/marvell/octeontx2.rst
+++ b/Documentation/networking/device_drivers/ethernet/marvell/octeontx2.rst
@@ -13,6 +13,7 @@ Contents
 - `Drivers`_
 - `Basic packet flow`_
 - `Devlink health reporters`_
+- `Quality of service`_
 
 Overview
 ========
@@ -287,3 +288,41 @@ For example::
 	 NIX_AF_ERR:
 	        NIX Error Interrupt Reg : 64
 	        Rx on unmapped PF_FUNC
+
+
+Quality of service
+==================
+
+The transmit interface on octeontx2 and CN10K silicon consists of five transmit levels, starting from SMQ/MDQ and TL4 up to TL1.
+The hardware uses the algorithms below, depending on the priority of the scheduler queues:
+
+1. Strict Priority
+
+      -  Once packets are submitted to MDQ, hardware picks all active MDQs having different priority
+         using strict priority.
+
+2. Round Robin
+
+      - Active MDQs having the same priority level are chosen using round robin.
+
+3. Each packet will traverse MDQ, TL4 to TL1 levels. Each level contains an array of queues to support scheduling and
+   shaping.
+
+4. Once the user creates tc classes with different priorities
+
+   -  Driver configures schedulers allocated to the class with specified priority along with rate-limiting configuration.
+
+5. Enable HW TC offload on the interface::
+
+        # ethtool -K <interface> hw-tc-offload on
+
+6. Create htb root::
+
+        # tc qdisc add dev <interface> clsact
+        # tc qdisc replace dev <interface> root handle 1: htb offload
+
+7. Create tc classes with different priorities::
+
+        # tc class add dev <interface> parent 1: classid 1:1 htb rate 10Gbit prio 1
+
+        # tc class add dev <interface> parent 1: classid 1:2 htb rate 10Gbit prio 7
-- 
2.17.1


^ permalink raw reply related	[flat|nested] 19+ messages in thread

* Re: [net-next Patch v5 5/6] octeontx2-pf: Add support for HTB offload
  2023-03-26 18:12 ` [net-next Patch v5 5/6] octeontx2-pf: Add support for HTB offload Hariprasad Kelam
@ 2023-03-28 10:56   ` Paolo Abeni
  2023-03-29 16:44     ` Hariprasad Kelam
  2023-03-28 14:41   ` Simon Horman
  2023-03-28 18:42   ` Maxim Mikityanskiy
  2 siblings, 1 reply; 19+ messages in thread
From: Paolo Abeni @ 2023-03-28 10:56 UTC (permalink / raw)
  To: Hariprasad Kelam, netdev, linux-kernel
  Cc: kuba, davem, willemdebruijn.kernel, andrew, sgoutham, lcherian,
	gakula, jerinj, sbhatta, naveenm, edumazet, jhs, xiyou.wangcong,
	jiri, maxtram95

On Sun, 2023-03-26 at 23:42 +0530, Hariprasad Kelam wrote:
[...]
> +static int otx2_qos_root_add(struct otx2_nic *pfvf, u16 htb_maj_id, u16 htb_defcls,
> +			     struct netlink_ext_ack *extack)
> +{
> +	struct otx2_qos_cfg *new_cfg;
> +	struct otx2_qos_node *root;
> +	int err;
> +
> +	netdev_dbg(pfvf->netdev,
> +		   "TC_HTB_CREATE: handle=0x%x defcls=0x%x\n",
> +		   htb_maj_id, htb_defcls);
> +
> +	INIT_LIST_HEAD(&pfvf->qos.qos_tree);
> +	mutex_init(&pfvf->qos.qos_lock);

It's quite strange and error prone to dynamically init this mutex and the
list here. Why don't you do such init at device creation time?

> +
> +	root = otx2_qos_alloc_root(pfvf);
> +	if (IS_ERR(root)) {
> +		mutex_destroy(&pfvf->qos.qos_lock);
> +		err = PTR_ERR(root);
> +		return err;
> +	}
> +
> +	/* allocate txschq queue */
> +	new_cfg = kzalloc(sizeof(*new_cfg), GFP_KERNEL);
> +	if (!new_cfg) {
> +		NL_SET_ERR_MSG_MOD(extack, "Memory allocation error");

Here the root node is leaked.

> +		mutex_destroy(&pfvf->qos.qos_lock);
> +		return -ENOMEM;
> +	}


[...]
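
One possible shape of the fix, sketched here on the assumption that
otx2_qos_sw_node_delete() is safe to call on a freshly allocated root
(it only unhashes, unlinks and frees the node):

	/* allocate txschq queue */
	new_cfg = kzalloc(sizeof(*new_cfg), GFP_KERNEL);
	if (!new_cfg) {
		NL_SET_ERR_MSG_MOD(extack, "Memory allocation error");
		/* undo otx2_qos_alloc_root() before bailing out */
		otx2_qos_sw_node_delete(pfvf, root);
		mutex_destroy(&pfvf->qos.qos_lock);
		return -ENOMEM;
	}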

Cheers,

Paolo


^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [net-next Patch v5 5/6] octeontx2-pf: Add support for HTB offload
  2023-03-26 18:12 ` [net-next Patch v5 5/6] octeontx2-pf: Add support for HTB offload Hariprasad Kelam
  2023-03-28 10:56   ` Paolo Abeni
@ 2023-03-28 14:41   ` Simon Horman
  2023-03-29 17:17     ` Hariprasad Kelam
  2023-03-28 18:42   ` Maxim Mikityanskiy
  2 siblings, 1 reply; 19+ messages in thread
From: Simon Horman @ 2023-03-28 14:41 UTC (permalink / raw)
  To: Hariprasad Kelam
  Cc: netdev, linux-kernel, kuba, davem, willemdebruijn.kernel, andrew,
	sgoutham, lcherian, gakula, jerinj, sbhatta, naveenm, edumazet,
	pabeni, jhs, xiyou.wangcong, jiri, maxtram95

On Sun, Mar 26, 2023 at 11:42:44PM +0530, Hariprasad Kelam wrote:
> From: Naveen Mamindlapalli <naveenm@marvell.com>
> 
> This patch registers callbacks to support HTB offload.
> 
> Below are features supported,
> 
> - supports traffic shaping on the given class by honoring rate and ceil
> configuration.
> 
> - supports traffic scheduling,  which prioritizes different types of
> traffic based on strict priority values.
> 
> - supports the creation of leaf to inner classes such that parent node
> rate limits apply to all child nodes.
> 
> Signed-off-by: Naveen Mamindlapalli <naveenm@marvell.com>
> Signed-off-by: Hariprasad Kelam <hkelam@marvell.com>
> Signed-off-by: Sunil Kovvuri Goutham <sgoutham@marvell.com>

Reviewed-by: Simon Horman <simon.horman@corigine.com>

> ---
>  .../ethernet/marvell/octeontx2/af/common.h    |    2 +-
>  .../ethernet/marvell/octeontx2/nic/Makefile   |    2 +-
>  .../marvell/octeontx2/nic/otx2_common.c       |   35 +-
>  .../marvell/octeontx2/nic/otx2_common.h       |    8 +-
>  .../marvell/octeontx2/nic/otx2_ethtool.c      |   31 +-
>  .../ethernet/marvell/octeontx2/nic/otx2_pf.c  |   56 +-
>  .../ethernet/marvell/octeontx2/nic/otx2_reg.h |   13 +
>  .../ethernet/marvell/octeontx2/nic/otx2_tc.c  |    7 +-
>  .../net/ethernet/marvell/octeontx2/nic/qos.c  | 1460 +++++++++++++++++
>  .../net/ethernet/marvell/octeontx2/nic/qos.h  |   58 +-
>  .../ethernet/marvell/octeontx2/nic/qos_sq.c   |   20 +-
>  11 files changed, 1657 insertions(+), 35 deletions(-)

nit: this is a rather long patch.

...

> diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_ethtool.c b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_ethtool.c

...

> @@ -159,7 +165,7 @@ static void otx2_get_qset_stats(struct otx2_nic *pfvf,
>  				[otx2_queue_stats[stat].index];
>  	}
>  
> -	for (qidx = 0; qidx < pfvf->hw.tx_queues; qidx++) {
> +	for (qidx = 0; qidx <  otx2_get_total_tx_queues(pfvf); qidx++) {

nit: extra whitespace after '<'

>  		if (!otx2_update_sq_stats(pfvf, qidx)) {
>  			for (stat = 0; stat < otx2_n_queue_stats; stat++)
>  				*((*data)++) = 0;

...

> diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/qos.c b/drivers/net/ethernet/marvell/octeontx2/nic/qos.c

...

> +static int otx2_qos_update_tx_netdev_queues(struct otx2_nic *pfvf)
> +{
> +	int tx_queues, qos_txqs, err;
> +	struct otx2_hw *hw = &pfvf->hw;

nit: reverse xmas tree - longest line to shortest -
     for local variable declarations.

> diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/qos.h b/drivers/net/ethernet/marvell/octeontx2/nic/qos.h

...

> +struct otx2_qos_node {
> +	struct list_head list; /* list managment */

nit: s/managment/management/

> +	struct list_head child_list;
> +	struct list_head child_schq_list;
> +	struct hlist_node hlist;
> +	DECLARE_BITMAP(prio_bmap, OTX2_QOS_MAX_PRIO + 1);
> +	struct otx2_qos_node *parent;	/* parent qos node */
> +	u64 rate; /* htb params */
> +	u64 ceil;
> +	u32 classid;
> +	u32 prio;
> +	u16 schq; /* hw txschq */
> +	u16 qid;
> +	u16 prio_anchor;
> +	u8 level;
> +};
> +
>  
>  #endif

...

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [net-next Patch v5 3/6] octeontx2-pf: qos send queues management
  2023-03-26 18:12 ` [net-next Patch v5 3/6] octeontx2-pf: qos send queues management Hariprasad Kelam
@ 2023-03-28 14:43   ` Simon Horman
  2023-03-29 17:19     ` Hariprasad Kelam
  0 siblings, 1 reply; 19+ messages in thread
From: Simon Horman @ 2023-03-28 14:43 UTC (permalink / raw)
  To: Hariprasad Kelam
  Cc: netdev, linux-kernel, kuba, davem, willemdebruijn.kernel, andrew,
	sgoutham, lcherian, gakula, jerinj, sbhatta, naveenm, edumazet,
	pabeni, jhs, xiyou.wangcong, jiri, maxtram95

On Sun, Mar 26, 2023 at 11:42:42PM +0530, Hariprasad Kelam wrote:
> From: Subbaraya Sundeep <sbhatta@marvell.com>
> 
> The current implementation is such that the number of Send queues (SQs)
> is decided at device probe and is equal to the number of online
> CPUs. These SQs are allocated and deallocated in interface open and
> close calls respectively.
> 
> This patch defines new APIs for initializing and deinitializing Send
> queues dynamically and allocates additional transmit queues for the
> QOS feature.
> 
> Signed-off-by: Subbaraya Sundeep <sbhatta@marvell.com>
> Signed-off-by: Hariprasad Kelam <hkelam@marvell.com>
> Signed-off-by: Sunil Kovvuri Goutham <sgoutham@marvell.com>

Reviewed-by: Simon Horman <simon.horman@corigine.com>

> diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_pf.c b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_pf.c

...

> @@ -1938,6 +1952,12 @@ static netdev_tx_t otx2_xmit(struct sk_buff *skb, struct net_device *netdev)
>  	int qidx = skb_get_queue_mapping(skb);
>  	struct otx2_snd_queue *sq;
>  	struct netdev_queue *txq;
> +	int sq_idx;
> +
> +	/* XDP SQs are not mapped with TXQs
> +	 * advance qid to derive correct sq maped with QOS

nit: s/maped/mapped/

...

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [net-next Patch v5 1/6] sch_htb: Allow HTB priority parameter in offload mode
  2023-03-26 18:12 ` [net-next Patch v5 1/6] sch_htb: Allow HTB priority parameter in offload mode Hariprasad Kelam
@ 2023-03-28 14:44   ` Simon Horman
  0 siblings, 0 replies; 19+ messages in thread
From: Simon Horman @ 2023-03-28 14:44 UTC (permalink / raw)
  To: Hariprasad Kelam
  Cc: netdev, linux-kernel, kuba, davem, willemdebruijn.kernel, andrew,
	sgoutham, lcherian, gakula, jerinj, sbhatta, naveenm, edumazet,
	pabeni, jhs, xiyou.wangcong, jiri, maxtram95

On Sun, Mar 26, 2023 at 11:42:40PM +0530, Hariprasad Kelam wrote:
> From: Naveen Mamindlapalli <naveenm@marvell.com>
> 
> The current implementation of HTB offload returns the EINVAL error
> for unsupported parameters like prio and quantum. This patch removes
> the error returning checks for 'prio' parameter and populates its
> value to tc_htb_qopt_offload structure such that driver can use the
> same.
> 
> Add prio parameter check in mlx5 driver, as mlx5 devices are not capable
> of supporting the prio parameter when htb offload is used. Report error
> if prio parameter is set to a non-default value.
> 
> Signed-off-by: Naveen Mamindlapalli <naveenm@marvell.com>
> Co-developed-by: Rahul Rameshbabu <rrameshbabu@nvidia.com>
> Signed-off-by: Rahul Rameshbabu <rrameshbabu@nvidia.com>
> Signed-off-by: Hariprasad Kelam <hkelam@marvell.com>
> Signed-off-by: Sunil Kovvuri Goutham <sgoutham@marvell.com>

Reviewed-by: Simon Horman <simon.horman@corigine.com>


^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [net-next Patch v5 2/6] octeontx2-pf: Rename tot_tx_queues to non_qos_queues
  2023-03-26 18:12 ` [net-next Patch v5 2/6] octeontx2-pf: Rename tot_tx_queues to non_qos_queues Hariprasad Kelam
@ 2023-03-28 14:44   ` Simon Horman
  0 siblings, 0 replies; 19+ messages in thread
From: Simon Horman @ 2023-03-28 14:44 UTC (permalink / raw)
  To: Hariprasad Kelam
  Cc: netdev, linux-kernel, kuba, davem, willemdebruijn.kernel, andrew,
	sgoutham, lcherian, gakula, jerinj, sbhatta, naveenm, edumazet,
	pabeni, jhs, xiyou.wangcong, jiri, maxtram95

On Sun, Mar 26, 2023 at 11:42:41PM +0530, Hariprasad Kelam wrote:
> The current implementation is such that tot_tx_queues contains both
> xdp queues and normal tx queues, which are allocated in interface
> open calls and deallocated in interface close calls respectively.
> 
> With the addition of QOS, where send queues are allocated/deallocated upon
> user request, QOS send queues won't be part of tot_tx_queues. So this
> patch renames tot_tx_queues to non_qos_queues.
> 
> Signed-off-by: Hariprasad Kelam <hkelam@marvell.com>

Reviewed-by: Simon Horman <simon.horman@corigine.com>


^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [net-next Patch v5 4/6] octeontx2-pf: Refactor schedular queue alloc/free calls
  2023-03-26 18:12 ` [net-next Patch v5 4/6] octeontx2-pf: Refactor schedular queue alloc/free calls Hariprasad Kelam
@ 2023-03-28 14:44   ` Simon Horman
  0 siblings, 0 replies; 19+ messages in thread
From: Simon Horman @ 2023-03-28 14:44 UTC (permalink / raw)
  To: Hariprasad Kelam
  Cc: netdev, linux-kernel, kuba, davem, willemdebruijn.kernel, andrew,
	sgoutham, lcherian, gakula, jerinj, sbhatta, naveenm, edumazet,
	pabeni, jhs, xiyou.wangcong, jiri, maxtram95

On Sun, Mar 26, 2023 at 11:42:43PM +0530, Hariprasad Kelam wrote:
> Multiple transmit scheduler queues can be configured at different
> levels to support traffic shaping and scheduling. But on txschq free
> requests, the transmit scheduler config in hardware is not getting
> reset. This patch adds support to reset the stale config.
> 
> The txschq alloc response handler updates the default txschq
> array which is used to configure the transmit packet path from
> SMQ to TL2 levels. However, for new features such as QoS offload
> that require their own txschq queues, this handler is still
> invoked and results in undefined behavior. The code now handles
> txschq response in the mbox caller function.
> 
> Signed-off-by: Hariprasad Kelam <hkelam@marvell.com>
> Signed-off-by: Naveen Mamindlapalli <naveenm@marvell.com>
> Signed-off-by: Sunil Kovvuri Goutham <sgoutham@marvell.com>

Reviewed-by: Simon Horman <simon.horman@corigine.com>


^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [net-next Patch v5 6/6] docs: octeontx2: Add Documentation for QOS
  2023-03-26 18:12 ` [net-next Patch v5 6/6] docs: octeontx2: Add Documentation for QOS Hariprasad Kelam
@ 2023-03-28 14:45   ` Simon Horman
  0 siblings, 0 replies; 19+ messages in thread
From: Simon Horman @ 2023-03-28 14:45 UTC (permalink / raw)
  To: Hariprasad Kelam
  Cc: netdev, linux-kernel, kuba, davem, willemdebruijn.kernel, andrew,
	sgoutham, lcherian, gakula, jerinj, sbhatta, naveenm, edumazet,
	pabeni, jhs, xiyou.wangcong, jiri, maxtram95

On Sun, Mar 26, 2023 at 11:42:45PM +0530, Hariprasad Kelam wrote:
> Add QOS example configuration along with tc-htb commands
> 
> Signed-off-by: Hariprasad Kelam <hkelam@marvell.com>

Reviewed-by: Simon Horman <simon.horman@corigine.com>


^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [net-next Patch v5 5/6] octeontx2-pf: Add support for HTB offload
  2023-03-26 18:12 ` [net-next Patch v5 5/6] octeontx2-pf: Add support for HTB offload Hariprasad Kelam
  2023-03-28 10:56   ` Paolo Abeni
  2023-03-28 14:41   ` Simon Horman
@ 2023-03-28 18:42   ` Maxim Mikityanskiy
  2023-03-29 18:03     ` Hariprasad Kelam
  2 siblings, 1 reply; 19+ messages in thread
From: Maxim Mikityanskiy @ 2023-03-28 18:42 UTC (permalink / raw)
  To: Hariprasad Kelam
  Cc: netdev, linux-kernel, kuba, davem, willemdebruijn.kernel, andrew,
	sgoutham, lcherian, gakula, jerinj, sbhatta, naveenm, edumazet,
	pabeni, jhs, xiyou.wangcong, jiri

I have a few comments about concurrency issues, see below. I didn't
analyze the concurrency model of your driver deeply, so please forgive
me if I missed some bugs or accidentally called some good code buggy.

A few general things to pay attention to, regarding HTB offload:

1. ndo_select_queue can be called at any time; there is no reliable way
to prevent the kernel from calling it. That means ndo_select_queue
must not crash if HTB configuration and structures are being updated in
another thread.

2. ndo_start_xmit runs under an RCU read lock. If you need to release some
structure that can be used from another thread in the TX datapath, you
can set some atomic flag, synchronize with RCU, then release the object
(a sketch of this pattern follows point 3 below).

3. You can take some inspiration from mlx5e, although you may find it's
a convoluted cooperation of spinlocks, mutexes, atomic operations with
barriers, and RCU. A big part of it is related to the mechanism of safe
reopening of queues, which your driver may not need, but the remaining
parts have a lot of similarities, so you can find useful insights about
the locking for HTB in mlx5e.
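
For point 2, a minimal sketch of that flag/RCU dance, reusing identifiers
that already exist in this patch (the exact teardown hook and the root
pointer are only assumed here):

	/* e.g. in the TC_HTB_DESTROY path */
	WRITE_ONCE(pf->qos.maj_id, 0);	  /* new ndo_select_queue calls fall back to default txqs */
	synchronize_rcu();		  /* wait out readers still running in the TX datapath */
	otx2_qos_free_sw_node(pf, root);  /* only now free nodes the datapath could have seen */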

On Sun, Mar 26, 2023 at 11:42:44PM +0530, Hariprasad Kelam wrote:
> From: Naveen Mamindlapalli <naveenm@marvell.com>
> 
> This patch registers callbacks to support HTB offload.
> 
> Below are features supported,
> 
> - supports traffic shaping on the given class by honoring rate and ceil
> configuration.
> 
> - supports traffic scheduling,  which prioritizes different types of
> traffic based on strict priority values.
> 
> - supports the creation of leaf to inner classes such that parent node
> rate limits apply to all child nodes.
> 
> Signed-off-by: Naveen Mamindlapalli <naveenm@marvell.com>
> Signed-off-by: Hariprasad Kelam <hkelam@marvell.com>
> Signed-off-by: Sunil Kovvuri Goutham <sgoutham@marvell.com>
> ---
>  .../ethernet/marvell/octeontx2/af/common.h    |    2 +-
>  .../ethernet/marvell/octeontx2/nic/Makefile   |    2 +-
>  .../marvell/octeontx2/nic/otx2_common.c       |   35 +-
>  .../marvell/octeontx2/nic/otx2_common.h       |    8 +-
>  .../marvell/octeontx2/nic/otx2_ethtool.c      |   31 +-
>  .../ethernet/marvell/octeontx2/nic/otx2_pf.c  |   56 +-
>  .../ethernet/marvell/octeontx2/nic/otx2_reg.h |   13 +
>  .../ethernet/marvell/octeontx2/nic/otx2_tc.c  |    7 +-
>  .../net/ethernet/marvell/octeontx2/nic/qos.c  | 1460 +++++++++++++++++
>  .../net/ethernet/marvell/octeontx2/nic/qos.h  |   58 +-
>  .../ethernet/marvell/octeontx2/nic/qos_sq.c   |   20 +-
>  11 files changed, 1657 insertions(+), 35 deletions(-)
>  create mode 100644 drivers/net/ethernet/marvell/octeontx2/nic/qos.c
> 
> diff --git a/drivers/net/ethernet/marvell/octeontx2/af/common.h b/drivers/net/ethernet/marvell/octeontx2/af/common.h
> index 8931864ee110..f5bf719a6ccf 100644
> --- a/drivers/net/ethernet/marvell/octeontx2/af/common.h
> +++ b/drivers/net/ethernet/marvell/octeontx2/af/common.h
> @@ -142,7 +142,7 @@ enum nix_scheduler {
>  
>  #define TXSCH_RR_QTM_MAX		((1 << 24) - 1)
>  #define TXSCH_TL1_DFLT_RR_QTM		TXSCH_RR_QTM_MAX
> -#define TXSCH_TL1_DFLT_RR_PRIO		(0x1ull)
> +#define TXSCH_TL1_DFLT_RR_PRIO		(0x7ull)
>  #define CN10K_MAX_DWRR_WEIGHT          16384 /* Weight is 14bit on CN10K */
>  
>  /* Min/Max packet sizes, excluding FCS */
> diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/Makefile b/drivers/net/ethernet/marvell/octeontx2/nic/Makefile
> index 3d31ddf7c652..5664f768cb0c 100644
> --- a/drivers/net/ethernet/marvell/octeontx2/nic/Makefile
> +++ b/drivers/net/ethernet/marvell/octeontx2/nic/Makefile
> @@ -8,7 +8,7 @@ obj-$(CONFIG_OCTEONTX2_VF) += rvu_nicvf.o otx2_ptp.o
>  
>  rvu_nicpf-y := otx2_pf.o otx2_common.o otx2_txrx.o otx2_ethtool.o \
>                 otx2_flows.o otx2_tc.o cn10k.o otx2_dmac_flt.o \
> -               otx2_devlink.o qos_sq.o
> +               otx2_devlink.o qos_sq.o qos.o
>  rvu_nicvf-y := otx2_vf.o otx2_devlink.o
>  
>  rvu_nicpf-$(CONFIG_DCB) += otx2_dcbnl.o
> diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.c b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.c
> index 32c02a2d3554..b4542a801291 100644
> --- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.c
> +++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.c
> @@ -89,6 +89,11 @@ int otx2_update_sq_stats(struct otx2_nic *pfvf, int qidx)
>  	if (!pfvf->qset.sq)
>  		return 0;
>  
> +	if (qidx >= pfvf->hw.non_qos_queues) {
> +		if (!test_bit(qidx - pfvf->hw.non_qos_queues, pfvf->qos.qos_sq_bmap))
> +			return 0;
> +	}
> +
>  	otx2_nix_sq_op_stats(&sq->stats, pfvf, qidx);
>  	return 1;
>  }
> @@ -747,29 +752,47 @@ int otx2_txsch_alloc(struct otx2_nic *pfvf)
>  	return 0;
>  }
>  
> -int otx2_txschq_stop(struct otx2_nic *pfvf)
> +void otx2_txschq_free_one(struct otx2_nic *pfvf, u16 lvl, u16 schq)
>  {
>  	struct nix_txsch_free_req *free_req;
> -	int lvl, schq, err;
> +	int err;
>  
>  	mutex_lock(&pfvf->mbox.lock);
> -	/* Free the transmit schedulers */
> +
>  	free_req = otx2_mbox_alloc_msg_nix_txsch_free(&pfvf->mbox);
>  	if (!free_req) {
>  		mutex_unlock(&pfvf->mbox.lock);
> -		return -ENOMEM;
> +		netdev_err(pfvf->netdev,
> +			   "Failed alloc txschq free req\n");
> +		return;
>  	}
>  
> -	free_req->flags = TXSCHQ_FREE_ALL;
> +	free_req->schq_lvl = lvl;
> +	free_req->schq = schq;
> +
>  	err = otx2_sync_mbox_msg(&pfvf->mbox);
> +	if (err) {
> +		netdev_err(pfvf->netdev,
> +			   "Failed stop txschq %d at level %d\n", lvl, schq);
> +	}
> +
>  	mutex_unlock(&pfvf->mbox.lock);
> +}
> +
> +void otx2_txschq_stop(struct otx2_nic *pfvf)
> +{
> +	int lvl, schq;
> +
> +	/* free non QOS TLx nodes */
> +	for (lvl = 0; lvl < NIX_TXSCH_LVL_CNT; lvl++)
> +		otx2_txschq_free_one(pfvf, lvl,
> +				     pfvf->hw.txschq_list[lvl][0]);
>  
>  	/* Clear the txschq list */
>  	for (lvl = 0; lvl < NIX_TXSCH_LVL_CNT; lvl++) {
>  		for (schq = 0; schq < MAX_TXSCHQ_PER_FUNC; schq++)
>  			pfvf->hw.txschq_list[lvl][schq] = 0;
>  	}
> -	return err;
>  }
>  
>  void otx2_sqb_flush(struct otx2_nic *pfvf)
> diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.h b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.h
> index 3834cc447426..4b219e8e5b32 100644
> --- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.h
> +++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.h
> @@ -252,6 +252,7 @@ struct otx2_hw {
>  #define CN10K_RPM		3
>  #define CN10K_PTP_ONESTEP	4
>  #define CN10K_HW_MACSEC		5
> +#define QOS_CIR_PIR_SUPPORT	6
>  	unsigned long		cap_flag;
>  
>  #define LMT_LINE_SIZE		128
> @@ -586,6 +587,7 @@ static inline void otx2_setup_dev_hw_settings(struct otx2_nic *pfvf)
>  		__set_bit(CN10K_LMTST, &hw->cap_flag);
>  		__set_bit(CN10K_RPM, &hw->cap_flag);
>  		__set_bit(CN10K_PTP_ONESTEP, &hw->cap_flag);
> +		__set_bit(QOS_CIR_PIR_SUPPORT, &hw->cap_flag);
>  	}
>  
>  	if (is_dev_cn10kb(pfvf->pdev))
> @@ -935,7 +937,7 @@ int otx2_config_nix(struct otx2_nic *pfvf);
>  int otx2_config_nix_queues(struct otx2_nic *pfvf);
>  int otx2_txschq_config(struct otx2_nic *pfvf, int lvl, int prio, bool pfc_en);
>  int otx2_txsch_alloc(struct otx2_nic *pfvf);
> -int otx2_txschq_stop(struct otx2_nic *pfvf);
> +void otx2_txschq_stop(struct otx2_nic *pfvf);
>  void otx2_sqb_flush(struct otx2_nic *pfvf);
>  int otx2_alloc_rbuf(struct otx2_nic *pfvf, struct otx2_pool *pool,
>  		    dma_addr_t *dma);
> @@ -953,6 +955,7 @@ int otx2_pool_init(struct otx2_nic *pfvf, u16 pool_id,
>  		   int stack_pages, int numptrs, int buf_size);
>  int otx2_aura_init(struct otx2_nic *pfvf, int aura_id,
>  		   int pool_id, int numptrs);
> +void otx2_txschq_free_one(struct otx2_nic *pfvf, u16 lvl, u16 schq);
>  
>  /* RSS configuration APIs*/
>  int otx2_rss_init(struct otx2_nic *pfvf);
> @@ -1064,4 +1067,7 @@ static inline void cn10k_handle_mcs_event(struct otx2_nic *pfvf,
>  void otx2_qos_sq_setup(struct otx2_nic *pfvf, int qos_txqs);
>  u16 otx2_select_queue(struct net_device *netdev, struct sk_buff *skb,
>  		      struct net_device *sb_dev);
> +int otx2_get_txq_by_classid(struct otx2_nic *pfvf, u16 classid);
> +void otx2_qos_config_txschq(struct otx2_nic *pfvf);
> +void otx2_clean_qos_queues(struct otx2_nic *pfvf);
>  #endif /* OTX2_COMMON_H */
> diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_ethtool.c b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_ethtool.c
> index 0f8d1a69139f..e8722a4f4cc6 100644
> --- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_ethtool.c
> +++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_ethtool.c
> @@ -92,10 +92,16 @@ static void otx2_get_qset_strings(struct otx2_nic *pfvf, u8 **data, int qset)
>  			*data += ETH_GSTRING_LEN;
>  		}
>  	}
> -	for (qidx = 0; qidx < pfvf->hw.tx_queues; qidx++) {
> +
> +	for (qidx = 0; qidx < otx2_get_total_tx_queues(pfvf); qidx++) {
>  		for (stats = 0; stats < otx2_n_queue_stats; stats++) {
> -			sprintf(*data, "txq%d: %s", qidx + start_qidx,
> -				otx2_queue_stats[stats].name);
> +			if (qidx >= pfvf->hw.non_qos_queues)
> +				sprintf(*data, "txq_qos%d: %s",
> +					qidx + start_qidx - pfvf->hw.non_qos_queues,
> +					otx2_queue_stats[stats].name);
> +			else
> +				sprintf(*data, "txq%d: %s", qidx + start_qidx,
> +					otx2_queue_stats[stats].name);
>  			*data += ETH_GSTRING_LEN;
>  		}
>  	}
> @@ -159,7 +165,7 @@ static void otx2_get_qset_stats(struct otx2_nic *pfvf,
>  				[otx2_queue_stats[stat].index];
>  	}
>  
> -	for (qidx = 0; qidx < pfvf->hw.tx_queues; qidx++) {
> +	for (qidx = 0; qidx <  otx2_get_total_tx_queues(pfvf); qidx++) {
>  		if (!otx2_update_sq_stats(pfvf, qidx)) {
>  			for (stat = 0; stat < otx2_n_queue_stats; stat++)
>  				*((*data)++) = 0;
> @@ -254,7 +260,8 @@ static int otx2_get_sset_count(struct net_device *netdev, int sset)
>  		return -EINVAL;
>  
>  	qstats_count = otx2_n_queue_stats *
> -		       (pfvf->hw.rx_queues + pfvf->hw.tx_queues);
> +		       (pfvf->hw.rx_queues + pfvf->hw.non_qos_queues +
> +			pfvf->hw.tc_tx_queues);
>  	if (!test_bit(CN10K_RPM, &pfvf->hw.cap_flag))
>  		mac_stats = CGX_RX_STATS_COUNT + CGX_TX_STATS_COUNT;
>  	otx2_update_lmac_fec_stats(pfvf);
> @@ -282,7 +289,7 @@ static int otx2_set_channels(struct net_device *dev,
>  {
>  	struct otx2_nic *pfvf = netdev_priv(dev);
>  	bool if_up = netif_running(dev);
> -	int err = 0;
> +	int err, qos_txqs;
>  
>  	if (!channel->rx_count || !channel->tx_count)
>  		return -EINVAL;
> @@ -296,14 +303,19 @@ static int otx2_set_channels(struct net_device *dev,
>  	if (if_up)
>  		dev->netdev_ops->ndo_stop(dev);
>  
> -	err = otx2_set_real_num_queues(dev, channel->tx_count,
> +	qos_txqs = bitmap_weight(pfvf->qos.qos_sq_bmap,
> +				 OTX2_QOS_MAX_LEAF_NODES);
> +
> +	err = otx2_set_real_num_queues(dev, channel->tx_count + qos_txqs,
>  				       channel->rx_count);
>  	if (err)
>  		return err;
>  
>  	pfvf->hw.rx_queues = channel->rx_count;
>  	pfvf->hw.tx_queues = channel->tx_count;
> -	pfvf->qset.cq_cnt = pfvf->hw.tx_queues +  pfvf->hw.rx_queues;
> +	if (pfvf->xdp_prog)
> +		pfvf->hw.xdp_queues = channel->rx_count;
> +	pfvf->hw.non_qos_queues =  pfvf->hw.tx_queues + pfvf->hw.xdp_queues;
>  
>  	if (if_up)
>  		err = dev->netdev_ops->ndo_open(dev);
> @@ -1405,7 +1417,8 @@ static int otx2vf_get_sset_count(struct net_device *netdev, int sset)
>  		return -EINVAL;
>  
>  	qstats_count = otx2_n_queue_stats *
> -		       (vf->hw.rx_queues + vf->hw.tx_queues);
> +		       (vf->hw.rx_queues + vf->hw.tx_queues +
> +			vf->hw.tc_tx_queues);
>  
>  	return otx2_n_dev_stats + otx2_n_drv_stats + qstats_count + 1;
>  }
> diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_pf.c b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_pf.c
> index a32f0cb89fc4..d0192f9089ee 100644
> --- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_pf.c
> +++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_pf.c
> @@ -1387,6 +1387,9 @@ static void otx2_free_sq_res(struct otx2_nic *pf)
>  	otx2_sq_free_sqbs(pf);
>  	for (qidx = 0; qidx < otx2_get_total_tx_queues(pf); qidx++) {
>  		sq = &qset->sq[qidx];
> +		/* Skip freeing Qos queues if they are not initialized */
> +		if (!sq->sqb_count)
> +			continue;
>  		qmem_free(pf->dev, sq->sqe);
>  		qmem_free(pf->dev, sq->tso_hdrs);
>  		kfree(sq->sg);
> @@ -1518,8 +1521,7 @@ static int otx2_init_hw_resources(struct otx2_nic *pf)
>  	otx2_free_cq_res(pf);
>  	otx2_ctx_disable(mbox, NIX_AQ_CTYPE_RQ, false);
>  err_free_txsch:
> -	if (otx2_txschq_stop(pf))
> -		dev_err(pf->dev, "%s failed to stop TX schedulers\n", __func__);
> +	otx2_txschq_stop(pf);
>  err_free_sq_ptrs:
>  	otx2_sq_free_sqbs(pf);
>  err_free_rq_ptrs:
> @@ -1554,21 +1556,21 @@ static void otx2_free_hw_resources(struct otx2_nic *pf)
>  	struct mbox *mbox = &pf->mbox;
>  	struct otx2_cq_queue *cq;
>  	struct msg_req *req;
> -	int qidx, err;
> +	int qidx;
>  
>  	/* Ensure all SQE are processed */
>  	otx2_sqb_flush(pf);
>  
>  	/* Stop transmission */
> -	err = otx2_txschq_stop(pf);
> -	if (err)
> -		dev_err(pf->dev, "RVUPF: Failed to stop/free TX schedulers\n");
> +	otx2_txschq_stop(pf);
>  
>  #ifdef CONFIG_DCB
>  	if (pf->pfc_en)
>  		otx2_pfc_txschq_stop(pf);
>  #endif
>  
> +	otx2_clean_qos_queues(pf);
> +
>  	mutex_lock(&mbox->lock);
>  	/* Disable backpressure */
>  	if (!(pf->pcifunc & RVU_PFVF_FUNC_MASK))
> @@ -1836,6 +1838,9 @@ int otx2_open(struct net_device *netdev)
>  	/* 'intf_down' may be checked on any cpu */
>  	smp_wmb();
>  
> +	/* Enable QoS configuration before starting tx queues */
> +	otx2_qos_config_txschq(pf);
> +
>  	/* we have already received link status notification */
>  	if (pf->linfo.link_up && !(pf->pcifunc & RVU_PFVF_FUNC_MASK))
>  		otx2_handle_link_event(pf);
> @@ -1980,14 +1985,45 @@ static netdev_tx_t otx2_xmit(struct sk_buff *skb, struct net_device *netdev)
>  	return NETDEV_TX_OK;
>  }
>  
> +static int otx2_qos_select_htb_queue(struct otx2_nic *pf, struct sk_buff *skb,
> +				     u16 htb_maj_id)
> +{
> +	u16 classid;
> +
> +	if ((TC_H_MAJ(skb->priority) >> 16) == htb_maj_id)
> +		classid = TC_H_MIN(skb->priority);
> +	else
> +		classid = READ_ONCE(pf->qos.defcls);
> +
> +	if (!classid)
> +		return 0;
> +
> +	return otx2_get_txq_by_classid(pf, classid);

This selects queues with numbers >= pf->hw.tx_queues, and otx2_xmit
indexes pfvf->qset.sq with these qids; however, pfvf->qset.sq is
allocated only up to pf->hw.non_qos_queues. Array out-of-bounds?
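
To make the concern concrete: with qset.sq sized as it is today, otx2_xmit()
would need a guard like the purely illustrative one below just to avoid the
overrun, which shows the QoS qids cannot be used safely until the array is
sized for them:

	/* hypothetical guard; anything past non_qos_queues is out of bounds */
	if (unlikely(qidx >= pf->hw.non_qos_queues)) {
		dev_kfree_skb_any(skb);
		return NETDEV_TX_OK;
	}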

> +}
> +
>  u16 otx2_select_queue(struct net_device *netdev, struct sk_buff *skb,
>  		      struct net_device *sb_dev)
>  {
> -#ifdef CONFIG_DCB
>  	struct otx2_nic *pf = netdev_priv(netdev);
> +	bool qos_enabled;
> +#ifdef CONFIG_DCB
>  	u8 vlan_prio;
>  #endif
> +	int txq;
>  
> +	qos_enabled = (netdev->real_num_tx_queues > pf->hw.tx_queues) ? true : false;
> +	if (unlikely(qos_enabled)) {
> +		u16 htb_maj_id = smp_load_acquire(&pf->qos.maj_id); /* barrier */

Checkpatch requires adding comments for the barriers for a reason :)

"Barrier" is a useless comment, we all know that smp_load_acquire is a
barrier, you should explain why this barrier is needed and which other
barriers it pairs with.
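
Something along these lines would do, assuming the writer side publishes
maj_id with smp_store_release() (that part is not visible in this hunk):

		/* Pairs with smp_store_release(&pf->qos.maj_id, ...) on the
		 * TC_HTB_CREATE path: if we observe the new maj_id here, we
		 * also observe the fully initialised qos tree behind it.
		 */
		u16 htb_maj_id = smp_load_acquire(&pf->qos.maj_id);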

> +
> +		if (unlikely(htb_maj_id)) {
> +			txq = otx2_qos_select_htb_queue(pf, skb, htb_maj_id);
> +			if (txq > 0)
> +				return txq;
> +			goto process_pfc;
> +		}
> +	}
> +
> +process_pfc:
>  #ifdef CONFIG_DCB
>  	if (!skb_vlan_tag_present(skb))
>  		goto pick_tx;
> @@ -2001,7 +2037,11 @@ u16 otx2_select_queue(struct net_device *netdev, struct sk_buff *skb,
>  
>  pick_tx:
>  #endif
> -	return netdev_pick_tx(netdev, skb, NULL);
> +	txq = netdev_pick_tx(netdev, skb, NULL);
> +	if (unlikely(qos_enabled))
> +		return txq % pf->hw.tx_queues;
> +
> +	return txq;
>  }
>  EXPORT_SYMBOL(otx2_select_queue);
>  
> diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_reg.h b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_reg.h
> index 1b967eaf948b..45a32e4b49d1 100644
> --- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_reg.h
> +++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_reg.h
> @@ -145,12 +145,25 @@
>  #define NIX_AF_TL1X_TOPOLOGY(a)		(0xC80 | (a) << 16)
>  #define NIX_AF_TL2X_PARENT(a)		(0xE88 | (a) << 16)
>  #define NIX_AF_TL2X_SCHEDULE(a)		(0xE00 | (a) << 16)
> +#define NIX_AF_TL2X_TOPOLOGY(a)		(0xE80 | (a) << 16)
> +#define NIX_AF_TL2X_CIR(a)              (0xE20 | (a) << 16)
> +#define NIX_AF_TL2X_PIR(a)              (0xE30 | (a) << 16)
>  #define NIX_AF_TL3X_PARENT(a)		(0x1088 | (a) << 16)
>  #define NIX_AF_TL3X_SCHEDULE(a)		(0x1000 | (a) << 16)
> +#define NIX_AF_TL3X_SHAPE(a)		(0x1010 | (a) << 16)
> +#define NIX_AF_TL3X_CIR(a)		(0x1020 | (a) << 16)
> +#define NIX_AF_TL3X_PIR(a)		(0x1030 | (a) << 16)
> +#define NIX_AF_TL3X_TOPOLOGY(a)		(0x1080 | (a) << 16)
>  #define NIX_AF_TL4X_PARENT(a)		(0x1288 | (a) << 16)
>  #define NIX_AF_TL4X_SCHEDULE(a)		(0x1200 | (a) << 16)
> +#define NIX_AF_TL4X_SHAPE(a)		(0x1210 | (a) << 16)
> +#define NIX_AF_TL4X_CIR(a)		(0x1220 | (a) << 16)
>  #define NIX_AF_TL4X_PIR(a)		(0x1230 | (a) << 16)
> +#define NIX_AF_TL4X_TOPOLOGY(a)		(0x1280 | (a) << 16)
>  #define NIX_AF_MDQX_SCHEDULE(a)		(0x1400 | (a) << 16)
> +#define NIX_AF_MDQX_SHAPE(a)		(0x1410 | (a) << 16)
> +#define NIX_AF_MDQX_CIR(a)		(0x1420 | (a) << 16)
> +#define NIX_AF_MDQX_PIR(a)		(0x1430 | (a) << 16)
>  #define NIX_AF_MDQX_PARENT(a)		(0x1480 | (a) << 16)
>  #define NIX_AF_TL3_TL2X_LINKX_CFG(a, b)	(0x1700 | (a) << 16 | (b) << 3)
>  
> diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_tc.c b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_tc.c
> index 044cc211424e..42c49249f4e7 100644
> --- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_tc.c
> +++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_tc.c
> @@ -19,6 +19,7 @@
>  
>  #include "cn10k.h"
>  #include "otx2_common.h"
> +#include "qos.h"
>  
>  /* Egress rate limiting definitions */
>  #define MAX_BURST_EXPONENT		0x0FULL
> @@ -147,8 +148,8 @@ static void otx2_get_egress_rate_cfg(u64 maxrate, u32 *exp,
>  	}
>  }
>  
> -static u64 otx2_get_txschq_rate_regval(struct otx2_nic *nic,
> -				       u64 maxrate, u32 burst)
> +u64 otx2_get_txschq_rate_regval(struct otx2_nic *nic,
> +				u64 maxrate, u32 burst)
>  {
>  	u32 burst_exp, burst_mantissa;
>  	u32 exp, mantissa, div_exp;
> @@ -1127,6 +1128,8 @@ int otx2_setup_tc(struct net_device *netdev, enum tc_setup_type type,
>  	switch (type) {
>  	case TC_SETUP_BLOCK:
>  		return otx2_setup_tc_block(netdev, type_data);
> +	case TC_SETUP_QDISC_HTB:
> +		return otx2_setup_tc_htb(netdev, type_data);
>  	default:
>  		return -EOPNOTSUPP;
>  	}
> diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/qos.c b/drivers/net/ethernet/marvell/octeontx2/nic/qos.c
> new file mode 100644
> index 000000000000..22c5b6a2871a
> --- /dev/null
> +++ b/drivers/net/ethernet/marvell/octeontx2/nic/qos.c
> @@ -0,0 +1,1460 @@
> +// SPDX-License-Identifier: GPL-2.0
> +/* Marvell RVU Ethernet driver
> + *
> + * Copyright (C) 2023 Marvell.
> + *
> + */
> +#include <linux/netdevice.h>
> +#include <linux/etherdevice.h>
> +#include <linux/inetdevice.h>
> +#include <linux/bitfield.h>
> +
> +#include "otx2_common.h"
> +#include "cn10k.h"
> +#include "qos.h"
> +
> +#define OTX2_QOS_QID_INNER		0xFFFFU
> +#define OTX2_QOS_QID_NONE		0xFFFEU
> +#define OTX2_QOS_ROOT_CLASSID		0xFFFFFFFF
> +#define OTX2_QOS_CLASS_NONE		0
> +#define OTX2_QOS_DEFAULT_PRIO		0xF
> +#define OTX2_QOS_INVALID_SQ		0xFFFF
> +
> +/* Egress rate limiting definitions */
> +#define MAX_BURST_EXPONENT		0x0FULL
> +#define MAX_BURST_MANTISSA		0xFFULL
> +#define MAX_BURST_SIZE			130816ULL
> +#define MAX_RATE_DIVIDER_EXPONENT	12ULL
> +#define MAX_RATE_EXPONENT		0x0FULL
> +#define MAX_RATE_MANTISSA		0xFFULL
> +
> +/* Bitfields in NIX_TLX_PIR register */
> +#define TLX_RATE_MANTISSA		GENMASK_ULL(8, 1)
> +#define TLX_RATE_EXPONENT		GENMASK_ULL(12, 9)
> +#define TLX_RATE_DIVIDER_EXPONENT	GENMASK_ULL(16, 13)
> +#define TLX_BURST_MANTISSA		GENMASK_ULL(36, 29)
> +#define TLX_BURST_EXPONENT		GENMASK_ULL(40, 37)
> +
> +static int otx2_qos_update_tx_netdev_queues(struct otx2_nic *pfvf)
> +{
> +	int tx_queues, qos_txqs, err;
> +	struct otx2_hw *hw = &pfvf->hw;
> +
> +	qos_txqs = bitmap_weight(pfvf->qos.qos_sq_bmap,
> +				 OTX2_QOS_MAX_LEAF_NODES);
> +
> +	tx_queues = hw->tx_queues + qos_txqs;
> +
> +	err = netif_set_real_num_tx_queues(pfvf->netdev, tx_queues);
> +	if (err) {
> +		netdev_err(pfvf->netdev,
> +			   "Failed to set no of Tx queues: %d\n", tx_queues);
> +		return err;
> +	}
> +
> +	return 0;
> +}
> +
> +static u64 otx2_qos_convert_rate(u64 rate)
> +{
> +	u64 converted_rate;
> +
> +	/* convert bytes per second to Mbps */
> +	converted_rate = rate * 8;
> +	converted_rate = max_t(u64, converted_rate / 1000000, 1);
> +
> +	return converted_rate;
> +}
> +
> +static void __otx2_qos_txschq_cfg(struct otx2_nic *pfvf,
> +				  struct otx2_qos_node *node,
> +				  struct nix_txschq_config *cfg)
> +{
> +	struct otx2_hw *hw = &pfvf->hw;
> +	int num_regs = 0;
> +	u64 maxrate;
> +	u8 level;
> +
> +	level = node->level;
> +
> +	/* program txschq registers */
> +	if (level == NIX_TXSCH_LVL_SMQ) {
> +		cfg->reg[num_regs] = NIX_AF_SMQX_CFG(node->schq);
> +		cfg->regval[num_regs] = ((u64)pfvf->tx_max_pktlen << 8) |
> +					OTX2_MIN_MTU;
> +		cfg->regval[num_regs] |= (0x20ULL << 51) | (0x80ULL << 39) |
> +					 (0x2ULL << 36);
> +		num_regs++;
> +
> +		/* configure parent txschq */
> +		cfg->reg[num_regs] = NIX_AF_MDQX_PARENT(node->schq);
> +		cfg->regval[num_regs] = node->parent->schq << 16;
> +		num_regs++;
> +
> +		/* configure prio/quantum */
> +		if (node->qid == OTX2_QOS_QID_NONE) {
> +			cfg->reg[num_regs] = NIX_AF_MDQX_SCHEDULE(node->schq);
> +			cfg->regval[num_regs] =  node->prio << 24 |
> +						 mtu_to_dwrr_weight(pfvf,
> +								    pfvf->tx_max_pktlen);
> +			num_regs++;
> +			goto txschq_cfg_out;
> +		}
> +
> +		/* configure prio */
> +		cfg->reg[num_regs] = NIX_AF_MDQX_SCHEDULE(node->schq);
> +		cfg->regval[num_regs] = (node->schq -
> +					 node->parent->prio_anchor) << 24;
> +		num_regs++;
> +
> +		/* configure PIR */
> +		maxrate = (node->rate > node->ceil) ? node->rate : node->ceil;
> +
> +		cfg->reg[num_regs] = NIX_AF_MDQX_PIR(node->schq);
> +		cfg->regval[num_regs] =
> +			otx2_get_txschq_rate_regval(pfvf, maxrate, 65536);
> +		num_regs++;
> +
> +		/* configure CIR */
> +		if (!test_bit(QOS_CIR_PIR_SUPPORT, &pfvf->hw.cap_flag)) {
> +			/* Don't configure CIR when both CIR+PIR not supported
> +			 * On 96xx, CIR + PIR + RED_ALGO=STALL causes deadlock
> +			 */
> +			goto txschq_cfg_out;
> +		}
> +
> +		cfg->reg[num_regs] = NIX_AF_MDQX_CIR(node->schq);
> +		cfg->regval[num_regs] =
> +			otx2_get_txschq_rate_regval(pfvf, node->rate, 65536);
> +		num_regs++;
> +	} else if (level == NIX_TXSCH_LVL_TL4) {
> +		/* configure parent txschq */
> +		cfg->reg[num_regs] = NIX_AF_TL4X_PARENT(node->schq);
> +		cfg->regval[num_regs] = node->parent->schq << 16;
> +		num_regs++;
> +
> +		/* return if not htb node */
> +		if (node->qid == OTX2_QOS_QID_NONE) {
> +			cfg->reg[num_regs] = NIX_AF_TL4X_SCHEDULE(node->schq);
> +			cfg->regval[num_regs] =  node->prio << 24 |
> +						 mtu_to_dwrr_weight(pfvf,
> +								    pfvf->tx_max_pktlen);
> +			num_regs++;
> +			goto txschq_cfg_out;
> +		}
> +
> +		/* configure priority */
> +		cfg->reg[num_regs] = NIX_AF_TL4X_SCHEDULE(node->schq);
> +		cfg->regval[num_regs] = (node->schq -
> +					 node->parent->prio_anchor) << 24;
> +		num_regs++;
> +
> +		/* configure PIR */
> +		maxrate = (node->rate > node->ceil) ? node->rate : node->ceil;
> +		cfg->reg[num_regs] = NIX_AF_TL4X_PIR(node->schq);
> +		cfg->regval[num_regs] =
> +			otx2_get_txschq_rate_regval(pfvf, maxrate, 65536);
> +		num_regs++;
> +
> +		/* configure CIR */
> +		if (!test_bit(QOS_CIR_PIR_SUPPORT, &pfvf->hw.cap_flag)) {
> +			/* Don't configure CIR when both CIR+PIR not supported
> +			 * On 96xx, CIR + PIR + RED_ALGO=STALL causes deadlock
> +			 */
> +			goto txschq_cfg_out;
> +		}
> +
> +		cfg->reg[num_regs] = NIX_AF_TL4X_CIR(node->schq);
> +		cfg->regval[num_regs] =
> +			otx2_get_txschq_rate_regval(pfvf, node->rate, 65536);
> +		num_regs++;
> +	} else if (level == NIX_TXSCH_LVL_TL3) {
> +		/* configure parent txschq */
> +		cfg->reg[num_regs] = NIX_AF_TL3X_PARENT(node->schq);
> +		cfg->regval[num_regs] = node->parent->schq << 16;
> +		num_regs++;
> +
> +		/* configure link cfg */
> +		if (level == pfvf->qos.link_cfg_lvl) {
> +			cfg->reg[num_regs] = NIX_AF_TL3_TL2X_LINKX_CFG(node->schq, hw->tx_link);
> +			cfg->regval[num_regs] = BIT_ULL(13) | BIT_ULL(12);
> +			num_regs++;
> +		}
> +
> +		/* return if not htb node */
> +		if (node->qid == OTX2_QOS_QID_NONE) {
> +			cfg->reg[num_regs] = NIX_AF_TL3X_SCHEDULE(node->schq);
> +			cfg->regval[num_regs] =  node->prio << 24 |
> +						 mtu_to_dwrr_weight(pfvf,
> +								    pfvf->tx_max_pktlen);
> +			num_regs++;
> +			goto txschq_cfg_out;
> +		}
> +
> +		/* configure priority */
> +		cfg->reg[num_regs] = NIX_AF_TL3X_SCHEDULE(node->schq);
> +		cfg->regval[num_regs] = (node->schq -
> +					 node->parent->prio_anchor) << 24;
> +		num_regs++;
> +
> +		/* configure PIR */
> +		maxrate = (node->rate > node->ceil) ? node->rate : node->ceil;
> +		cfg->reg[num_regs] = NIX_AF_TL3X_PIR(node->schq);
> +		cfg->regval[num_regs] =
> +			otx2_get_txschq_rate_regval(pfvf, maxrate, 65536);
> +		num_regs++;
> +
> +		/* configure CIR */
> +		if (!test_bit(QOS_CIR_PIR_SUPPORT, &pfvf->hw.cap_flag)) {
> +			/* Don't configure CIR when both CIR+PIR not supported
> +			 * On 96xx, CIR + PIR + RED_ALGO=STALL causes deadlock
> +			 */
> +			goto txschq_cfg_out;
> +		}
> +
> +		cfg->reg[num_regs] = NIX_AF_TL3X_CIR(node->schq);
> +		cfg->regval[num_regs] =
> +			otx2_get_txschq_rate_regval(pfvf, node->rate, 65536);
> +		num_regs++;
> +	} else if (level == NIX_TXSCH_LVL_TL2) {
> +		/* configure parent txschq */
> +		cfg->reg[num_regs] = NIX_AF_TL2X_PARENT(node->schq);
> +		cfg->regval[num_regs] = hw->tx_link << 16;
> +		num_regs++;
> +
> +		/* configure link cfg */
> +		if (level == pfvf->qos.link_cfg_lvl) {
> +			cfg->reg[num_regs] = NIX_AF_TL3_TL2X_LINKX_CFG(node->schq, hw->tx_link);
> +			cfg->regval[num_regs] = BIT_ULL(13) | BIT_ULL(12);
> +			num_regs++;
> +		}
> +
> +		/* return if not htb node */
> +		if (node->qid == OTX2_QOS_QID_NONE) {
> +			cfg->reg[num_regs] = NIX_AF_TL2X_SCHEDULE(node->schq);
> +			cfg->regval[num_regs] =  node->prio << 24 |
> +						 mtu_to_dwrr_weight(pfvf,
> +								    pfvf->tx_max_pktlen);
> +			num_regs++;
> +			goto txschq_cfg_out;
> +		}
> +
> +		/* check if node is root */
> +		if (node->qid == OTX2_QOS_QID_INNER && !node->parent) {
> +			cfg->reg[num_regs] = NIX_AF_TL2X_SCHEDULE(node->schq);
> +			cfg->regval[num_regs] =  TXSCH_TL1_DFLT_RR_PRIO << 24 |
> +						 mtu_to_dwrr_weight(pfvf,
> +								    pfvf->tx_max_pktlen);
> +			num_regs++;
> +			goto txschq_cfg_out;
> +		}
> +
> +		/* configure priority/quantum */
> +		cfg->reg[num_regs] = NIX_AF_TL2X_SCHEDULE(node->schq);
> +		cfg->regval[num_regs] = (node->schq -
> +					 node->parent->prio_anchor) << 24;
> +		num_regs++;
> +
> +		/* configure PIR */
> +		maxrate = (node->rate > node->ceil) ? node->rate : node->ceil;
> +		cfg->reg[num_regs] = NIX_AF_TL2X_PIR(node->schq);
> +		cfg->regval[num_regs] =
> +			otx2_get_txschq_rate_regval(pfvf, maxrate, 65536);
> +		num_regs++;
> +
> +		/* configure CIR */
> +		if (!test_bit(QOS_CIR_PIR_SUPPORT, &pfvf->hw.cap_flag)) {
> +			/* Don't configure CIR when both CIR+PIR not supported
> +			 * On 96xx, CIR + PIR + RED_ALGO=STALL causes deadlock
> +			 */
> +			goto txschq_cfg_out;
> +		}
> +
> +		cfg->reg[num_regs] = NIX_AF_TL2X_CIR(node->schq);
> +		cfg->regval[num_regs] =
> +			otx2_get_txschq_rate_regval(pfvf, node->rate, 65536);
> +		num_regs++;
> +	}
> +
> +txschq_cfg_out:
> +	cfg->num_regs = num_regs;
> +}
> +
> +static int otx2_qos_txschq_set_parent_topology(struct otx2_nic *pfvf,
> +					       struct otx2_qos_node *parent)
> +{
> +	struct mbox *mbox = &pfvf->mbox;
> +	struct nix_txschq_config *cfg;
> +	int rc;
> +
> +	if (parent->level == NIX_TXSCH_LVL_MDQ)
> +		return 0;
> +
> +	mutex_lock(&mbox->lock);
> +
> +	cfg = otx2_mbox_alloc_msg_nix_txschq_cfg(&pfvf->mbox);
> +	if (!cfg) {
> +		mutex_unlock(&mbox->lock);
> +		return -ENOMEM;
> +	}
> +
> +	cfg->lvl = parent->level;
> +
> +	if (parent->level == NIX_TXSCH_LVL_TL4)
> +		cfg->reg[0] = NIX_AF_TL4X_TOPOLOGY(parent->schq);
> +	else if (parent->level == NIX_TXSCH_LVL_TL3)
> +		cfg->reg[0] = NIX_AF_TL3X_TOPOLOGY(parent->schq);
> +	else if (parent->level == NIX_TXSCH_LVL_TL2)
> +		cfg->reg[0] = NIX_AF_TL2X_TOPOLOGY(parent->schq);
> +	else if (parent->level == NIX_TXSCH_LVL_TL1)
> +		cfg->reg[0] = NIX_AF_TL1X_TOPOLOGY(parent->schq);
> +
> +	cfg->regval[0] = (u64)parent->prio_anchor << 32;
> +	if (parent->level == NIX_TXSCH_LVL_TL1)
> +		cfg->regval[0] |= (u64)TXSCH_TL1_DFLT_RR_PRIO << 1;
> +
> +	cfg->num_regs++;
> +
> +	rc = otx2_sync_mbox_msg(&pfvf->mbox);
> +
> +	mutex_unlock(&mbox->lock);
> +
> +	return rc;
> +}
> +
> +static void otx2_qos_free_hw_node_schq(struct otx2_nic *pfvf,
> +				       struct otx2_qos_node *parent)
> +{
> +	struct otx2_qos_node *node;
> +
> +	list_for_each_entry_reverse(node, &parent->child_schq_list, list)
> +		otx2_txschq_free_one(pfvf, node->level, node->schq);
> +}
> +
> +static void otx2_qos_free_hw_node(struct otx2_nic *pfvf,
> +				  struct otx2_qos_node *parent)
> +{
> +	struct otx2_qos_node *node, *tmp;
> +
> +	list_for_each_entry_safe(node, tmp, &parent->child_list, list) {
> +		otx2_qos_free_hw_node(pfvf, node);
> +		otx2_qos_free_hw_node_schq(pfvf, node);
> +		otx2_txschq_free_one(pfvf, node->level, node->schq);
> +	}
> +}
> +
> +static void otx2_qos_free_hw_cfg(struct otx2_nic *pfvf,
> +				 struct otx2_qos_node *node)
> +{
> +	mutex_lock(&pfvf->qos.qos_lock);
> +
> +	/* free child node hw mappings */
> +	otx2_qos_free_hw_node(pfvf, node);
> +	otx2_qos_free_hw_node_schq(pfvf, node);
> +
> +	/* free node hw mappings */
> +	otx2_txschq_free_one(pfvf, node->level, node->schq);
> +
> +	mutex_unlock(&pfvf->qos.qos_lock);
> +}
> +
> +static void otx2_qos_sw_node_delete(struct otx2_nic *pfvf,
> +				    struct otx2_qos_node *node)
> +{
> +	hash_del(&node->hlist);
> +
> +	if (node->qid != OTX2_QOS_QID_INNER && node->qid != OTX2_QOS_QID_NONE) {
> +		__clear_bit(node->qid, pfvf->qos.qos_sq_bmap);
> +		otx2_qos_update_tx_netdev_queues(pfvf);
> +	}
> +
> +	list_del(&node->list);
> +	kfree(node);
> +}
> +
> +static void otx2_qos_free_sw_node_schq(struct otx2_nic *pfvf,
> +				       struct otx2_qos_node *parent)
> +{
> +	struct otx2_qos_node *node, *tmp;
> +
> +	list_for_each_entry_safe(node, tmp, &parent->child_schq_list, list) {
> +		list_del(&node->list);
> +		kfree(node);
> +	}
> +}
> +
> +static void __otx2_qos_free_sw_node(struct otx2_nic *pfvf,
> +				    struct otx2_qos_node *parent)
> +{
> +	struct otx2_qos_node *node, *tmp;
> +
> +	list_for_each_entry_safe(node, tmp, &parent->child_list, list) {
> +		__otx2_qos_free_sw_node(pfvf, node);
> +		otx2_qos_free_sw_node_schq(pfvf, node);
> +		otx2_qos_sw_node_delete(pfvf, node);
> +	}
> +}
> +
> +static void otx2_qos_free_sw_node(struct otx2_nic *pfvf,
> +				  struct otx2_qos_node *node)
> +{
> +	mutex_lock(&pfvf->qos.qos_lock);
> +
> +	__otx2_qos_free_sw_node(pfvf, node);
> +	otx2_qos_free_sw_node_schq(pfvf, node);
> +	otx2_qos_sw_node_delete(pfvf, node);
> +
> +	mutex_unlock(&pfvf->qos.qos_lock);
> +}
> +
> +static void otx2_qos_destroy_node(struct otx2_nic *pfvf,
> +				  struct otx2_qos_node *node)
> +{
> +	otx2_qos_free_hw_cfg(pfvf, node);
> +	otx2_qos_free_sw_node(pfvf, node);
> +}
> +
> +static void otx2_qos_fill_cfg_schq(struct otx2_qos_node *parent,
> +				   struct otx2_qos_cfg *cfg)
> +{
> +	struct otx2_qos_node *node;
> +
> +	list_for_each_entry(node, &parent->child_schq_list, list)
> +		cfg->schq[node->level]++;
> +}
> +
> +static void otx2_qos_fill_cfg_tl(struct otx2_qos_node *parent,
> +				 struct otx2_qos_cfg *cfg)
> +{
> +	struct otx2_qos_node *node;
> +
> +	list_for_each_entry(node, &parent->child_list, list) {
> +		otx2_qos_fill_cfg_tl(node, cfg);
> +		cfg->schq_contig[node->level]++;
> +		otx2_qos_fill_cfg_schq(node, cfg);
> +	}
> +}
> +
> +static void otx2_qos_prepare_txschq_cfg(struct otx2_nic *pfvf,
> +					struct otx2_qos_node *parent,
> +					struct otx2_qos_cfg *cfg)
> +{
> +	mutex_lock(&pfvf->qos.qos_lock);
> +	otx2_qos_fill_cfg_tl(parent, cfg);
> +	mutex_unlock(&pfvf->qos.qos_lock);
> +}
> +
> +static void otx2_qos_read_txschq_cfg_schq(struct otx2_qos_node *parent,
> +					  struct otx2_qos_cfg *cfg)
> +{
> +	struct otx2_qos_node *node;
> +	int cnt;
> +
> +	list_for_each_entry(node, &parent->child_schq_list, list) {
> +		cnt = cfg->dwrr_node_pos[node->level];
> +		cfg->schq_list[node->level][cnt] = node->schq;
> +		cfg->schq[node->level]++;
> +		cfg->dwrr_node_pos[node->level]++;
> +	}
> +}
> +
> +static void otx2_qos_read_txschq_cfg_tl(struct otx2_qos_node *parent,
> +					struct otx2_qos_cfg *cfg)
> +{
> +	struct otx2_qos_node *node;
> +	int cnt;
> +
> +	list_for_each_entry(node, &parent->child_list, list) {
> +		otx2_qos_read_txschq_cfg_tl(node, cfg);
> +		cnt = cfg->static_node_pos[node->level];
> +		cfg->schq_contig_list[node->level][cnt] = node->schq;
> +		cfg->schq_contig[node->level]++;
> +		cfg->static_node_pos[node->level]++;
> +		otx2_qos_read_txschq_cfg_schq(node, cfg);
> +	}
> +}
> +
> +static void otx2_qos_read_txschq_cfg(struct otx2_nic *pfvf,
> +				     struct otx2_qos_node *node,
> +				     struct otx2_qos_cfg *cfg)
> +{
> +	mutex_lock(&pfvf->qos.qos_lock);
> +	otx2_qos_read_txschq_cfg_tl(node, cfg);
> +	mutex_unlock(&pfvf->qos.qos_lock);
> +}
> +
> +static struct otx2_qos_node *
> +otx2_qos_alloc_root(struct otx2_nic *pfvf)
> +{
> +	struct otx2_qos_node *node;
> +
> +	node = kzalloc(sizeof(*node), GFP_KERNEL);
> +	if (!node)
> +		return ERR_PTR(-ENOMEM);
> +
> +	node->parent = NULL;
> +	if (!is_otx2_vf(pfvf->pcifunc))
> +		node->level = NIX_TXSCH_LVL_TL1;
> +	else
> +		node->level = NIX_TXSCH_LVL_TL2;
> +
> +	node->qid = OTX2_QOS_QID_INNER;
> +	node->classid = OTX2_QOS_ROOT_CLASSID;
> +
> +	hash_add(pfvf->qos.qos_hlist, &node->hlist, node->classid);
> +	list_add_tail(&node->list, &pfvf->qos.qos_tree);
> +	INIT_LIST_HEAD(&node->child_list);
> +	INIT_LIST_HEAD(&node->child_schq_list);
> +
> +	return node;
> +}
> +
> +static int otx2_qos_add_child_node(struct otx2_qos_node *parent,
> +				   struct otx2_qos_node *node)
> +{
> +	struct list_head *head = &parent->child_list;
> +	struct otx2_qos_node *tmp_node;
> +	struct list_head *tmp;
> +
> +	for (tmp = head->next; tmp != head; tmp = tmp->next) {
> +		tmp_node = list_entry(tmp, struct otx2_qos_node, list);
> +		if (tmp_node->prio == node->prio)
> +			return -EEXIST;
> +		if (tmp_node->prio > node->prio) {
> +			list_add_tail(&node->list, tmp);
> +			return 0;
> +		}
> +	}
> +
> +	list_add_tail(&node->list, head);
> +	return 0;
> +}
> +
> +static int otx2_qos_alloc_txschq_node(struct otx2_nic *pfvf,
> +				      struct otx2_qos_node *node)
> +{
> +	struct otx2_qos_node *txschq_node, *parent, *tmp;
> +	int lvl;
> +
> +	parent = node;
> +	for (lvl = node->level - 1; lvl >= NIX_TXSCH_LVL_MDQ; lvl--) {
> +		txschq_node = kzalloc(sizeof(*txschq_node), GFP_KERNEL);
> +		if (!txschq_node)
> +			goto err_out;
> +
> +		txschq_node->parent = parent;
> +		txschq_node->level = lvl;
> +		txschq_node->classid = OTX2_QOS_CLASS_NONE;
> +		txschq_node->qid = OTX2_QOS_QID_NONE;
> +		txschq_node->rate = 0;
> +		txschq_node->ceil = 0;
> +		txschq_node->prio = 0;
> +
> +		mutex_lock(&pfvf->qos.qos_lock);
> +		list_add_tail(&txschq_node->list, &node->child_schq_list);
> +		mutex_unlock(&pfvf->qos.qos_lock);
> +
> +		INIT_LIST_HEAD(&txschq_node->child_list);
> +		INIT_LIST_HEAD(&txschq_node->child_schq_list);
> +		parent = txschq_node;
> +	}
> +
> +	return 0;
> +
> +err_out:
> +	list_for_each_entry_safe(txschq_node, tmp, &node->child_schq_list,
> +				 list) {
> +		list_del(&txschq_node->list);
> +		kfree(txschq_node);
> +	}
> +	return -ENOMEM;
> +}
> +
> +static struct otx2_qos_node *
> +otx2_qos_sw_create_leaf_node(struct otx2_nic *pfvf,
> +			     struct otx2_qos_node *parent,
> +			     u16 classid, u32 prio, u64 rate, u64 ceil,
> +			     u16 qid)
> +{
> +	struct otx2_qos_node *node;
> +	int err;
> +
> +	node = kzalloc(sizeof(*node), GFP_KERNEL);
> +	if (!node)
> +		return ERR_PTR(-ENOMEM);
> +
> +	node->parent = parent;
> +	node->level = parent->level - 1;
> +	node->classid = classid;
> +	node->qid = qid;
> +	node->rate = otx2_qos_convert_rate(rate);
> +	node->ceil = otx2_qos_convert_rate(ceil);
> +	node->prio = prio;
> +
> +	__set_bit(qid, pfvf->qos.qos_sq_bmap);
> +
> +	hash_add(pfvf->qos.qos_hlist, &node->hlist, classid);
> +
> +	mutex_lock(&pfvf->qos.qos_lock);
> +	err = otx2_qos_add_child_node(parent, node);
> +	if (err) {
> +		mutex_unlock(&pfvf->qos.qos_lock);
> +		return ERR_PTR(err);
> +	}
> +	mutex_unlock(&pfvf->qos.qos_lock);
> +
> +	INIT_LIST_HEAD(&node->child_list);
> +	INIT_LIST_HEAD(&node->child_schq_list);

It looks suspicious that some fields of the node are initialized after
otx2_qos_add_child_node() is called.

> +
> +	err = otx2_qos_alloc_txschq_node(pfvf, node);
> +	if (err) {
> +		otx2_qos_sw_node_delete(pfvf, node);
> +		return ERR_PTR(-ENOMEM);
> +	}
> +
> +	return node;
> +}
> +
> +static struct otx2_qos_node *
> +otx2_sw_node_find(struct otx2_nic *pfvf, u32 classid)
> +{
> +	struct otx2_qos_node *node = NULL;
> +
> +	hash_for_each_possible(pfvf->qos.qos_hlist, node, hlist, classid) {

This loop may be called from ndo_select_queue, while another thread may
modify qos_hlist. We use RCU in mlx5e to protect this structure. What
protects it in your driver?
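
For reference, a sketch of what an RCU-protected lookup could look like here,
assuming nodes would then be added/removed with hash_add_rcu()/hash_del_rcu()
under qos_lock and freed only after a grace period:

	struct otx2_qos_node *node;
	int res = -ENOENT;

	rcu_read_lock();
	hash_for_each_possible_rcu(pfvf->qos.qos_hlist, node, hlist, classid) {
		if (node->classid == classid) {
			/* OTX2_QOS_QID_INNER check omitted for brevity */
			res = pfvf->hw.tx_queues + READ_ONCE(node->qid);
			break;
		}
	}
	rcu_read_unlock();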

> +		if (node->classid == classid)
> +			break;
> +	}
> +
> +	return node;
> +}
> +
> +int otx2_get_txq_by_classid(struct otx2_nic *pfvf, u16 classid)
> +{
> +	struct otx2_qos_node *node;
> +	u16 qid;
> +	int res;
> +
> +	node = otx2_sw_node_find(pfvf, classid);
> +	if (IS_ERR(node)) {
> +		res = -ENOENT;
> +		goto out;
> +	}
> +	qid = READ_ONCE(node->qid);
> +	if (qid == OTX2_QOS_QID_INNER) {
> +		res = -EINVAL;
> +		goto out;
> +	}
> +	res = pfvf->hw.tx_queues + qid;
> +out:
> +	return res;
> +}
> +
> +static int
> +otx2_qos_txschq_config(struct otx2_nic *pfvf, struct otx2_qos_node *node)
> +{
> +	struct mbox *mbox = &pfvf->mbox;
> +	struct nix_txschq_config *req;
> +	int rc;
> +
> +	mutex_lock(&mbox->lock);
> +
> +	req = otx2_mbox_alloc_msg_nix_txschq_cfg(&pfvf->mbox);
> +	if (!req) {
> +		mutex_unlock(&mbox->lock);
> +		return -ENOMEM;
> +	}
> +
> +	req->lvl = node->level;
> +	__otx2_qos_txschq_cfg(pfvf, node, req);
> +
> +	rc = otx2_sync_mbox_msg(&pfvf->mbox);
> +
> +	mutex_unlock(&mbox->lock);
> +
> +	return rc;
> +}
> +
> +static int otx2_qos_txschq_alloc(struct otx2_nic *pfvf,
> +				 struct otx2_qos_cfg *cfg)
> +{
> +	struct nix_txsch_alloc_req *req;
> +	struct nix_txsch_alloc_rsp *rsp;
> +	struct mbox *mbox = &pfvf->mbox;
> +	int lvl, rc, schq;
> +
> +	mutex_lock(&mbox->lock);
> +	req = otx2_mbox_alloc_msg_nix_txsch_alloc(&pfvf->mbox);
> +	if (!req) {
> +		mutex_unlock(&mbox->lock);
> +		return -ENOMEM;
> +	}
> +
> +	for (lvl = 0; lvl < NIX_TXSCH_LVL_CNT; lvl++) {
> +		req->schq[lvl] = cfg->schq[lvl];
> +		req->schq_contig[lvl] = cfg->schq_contig[lvl];
> +	}
> +
> +	rc = otx2_sync_mbox_msg(&pfvf->mbox);
> +	if (rc) {
> +		mutex_unlock(&mbox->lock);
> +		return rc;
> +	}
> +
> +	rsp = (struct nix_txsch_alloc_rsp *)
> +	      otx2_mbox_get_rsp(&pfvf->mbox.mbox, 0, &req->hdr);
> +
> +	for (lvl = 0; lvl < NIX_TXSCH_LVL_CNT; lvl++) {
> +		for (schq = 0; schq < rsp->schq_contig[lvl]; schq++) {
> +			cfg->schq_contig_list[lvl][schq] =
> +				rsp->schq_contig_list[lvl][schq];
> +		}
> +	}
> +
> +	for (lvl = 0; lvl < NIX_TXSCH_LVL_CNT; lvl++) {
> +		for (schq = 0; schq < rsp->schq[lvl]; schq++) {
> +			cfg->schq_list[lvl][schq] =
> +				rsp->schq_list[lvl][schq];
> +		}
> +	}
> +
> +	pfvf->qos.link_cfg_lvl = rsp->link_cfg_lvl;
> +
> +	mutex_unlock(&mbox->lock);
> +
> +	return rc;
> +}
> +
> +static void otx2_qos_txschq_fill_cfg_schq(struct otx2_nic *pfvf,
> +					  struct otx2_qos_node *node,
> +					  struct otx2_qos_cfg *cfg)
> +{
> +	struct otx2_qos_node *tmp;
> +	int cnt;
> +
> +	list_for_each_entry(tmp, &node->child_schq_list, list) {
> +		cnt = cfg->dwrr_node_pos[tmp->level];
> +		tmp->schq = cfg->schq_list[tmp->level][cnt];
> +		cfg->dwrr_node_pos[tmp->level]++;
> +	}
> +}
> +
> +static void otx2_qos_txschq_fill_cfg_tl(struct otx2_nic *pfvf,
> +					struct otx2_qos_node *node,
> +					struct otx2_qos_cfg *cfg)
> +{
> +	struct otx2_qos_node *tmp;
> +	int cnt;
> +
> +	list_for_each_entry(tmp, &node->child_list, list) {
> +		otx2_qos_txschq_fill_cfg_tl(pfvf, tmp, cfg);
> +		cnt = cfg->static_node_pos[tmp->level];
> +		tmp->schq = cfg->schq_contig_list[tmp->level][cnt];
> +		if (cnt == 0)
> +			node->prio_anchor = tmp->schq;
> +		cfg->static_node_pos[tmp->level]++;
> +		otx2_qos_txschq_fill_cfg_schq(pfvf, tmp, cfg);
> +	}
> +}
> +
> +static void otx2_qos_txschq_fill_cfg(struct otx2_nic *pfvf,
> +				     struct otx2_qos_node *node,
> +				     struct otx2_qos_cfg *cfg)
> +{
> +	mutex_lock(&pfvf->qos.qos_lock);
> +	otx2_qos_txschq_fill_cfg_tl(pfvf, node, cfg);
> +	otx2_qos_txschq_fill_cfg_schq(pfvf, node, cfg);
> +	mutex_unlock(&pfvf->qos.qos_lock);
> +}
> +
> +static int otx2_qos_txschq_push_cfg_schq(struct otx2_nic *pfvf,
> +					 struct otx2_qos_node *node,
> +					 struct otx2_qos_cfg *cfg)
> +{
> +	struct otx2_qos_node *tmp;
> +	int ret = 0;
> +
> +	list_for_each_entry(tmp, &node->child_schq_list, list) {
> +		ret = otx2_qos_txschq_config(pfvf, tmp);
> +		if (ret)
> +			return -EIO;
> +		ret = otx2_qos_txschq_set_parent_topology(pfvf, tmp->parent);
> +		if (ret)
> +			return -EIO;
> +	}
> +
> +	return 0;
> +}
> +
> +static int otx2_qos_txschq_push_cfg_tl(struct otx2_nic *pfvf,
> +				       struct otx2_qos_node *node,
> +				       struct otx2_qos_cfg *cfg)
> +{
> +	struct otx2_qos_node *tmp;
> +	int ret;
> +
> +	list_for_each_entry(tmp, &node->child_list, list) {
> +		ret = otx2_qos_txschq_push_cfg_tl(pfvf, tmp, cfg);
> +		if (ret)
> +			return -EIO;
> +		ret = otx2_qos_txschq_config(pfvf, tmp);
> +		if (ret)
> +			return -EIO;
> +		ret = otx2_qos_txschq_push_cfg_schq(pfvf, tmp, cfg);
> +		if (ret)
> +			return -EIO;
> +	}
> +
> +	ret = otx2_qos_txschq_set_parent_topology(pfvf, node);
> +	if (ret)
> +		return -EIO;
> +
> +	return 0;
> +}
> +
> +static int otx2_qos_txschq_push_cfg(struct otx2_nic *pfvf,
> +				    struct otx2_qos_node *node,
> +				    struct otx2_qos_cfg *cfg)
> +{
> +	int ret;
> +
> +	mutex_lock(&pfvf->qos.qos_lock);
> +	ret = otx2_qos_txschq_push_cfg_tl(pfvf, node, cfg);
> +	if (ret)
> +		goto out;
> +	ret = otx2_qos_txschq_push_cfg_schq(pfvf, node, cfg);
> +out:
> +	mutex_unlock(&pfvf->qos.qos_lock);
> +	return ret;
> +}
> +
> +static int otx2_qos_txschq_update_config(struct otx2_nic *pfvf,
> +					 struct otx2_qos_node *node,
> +					 struct otx2_qos_cfg *cfg)
> +{
> +	otx2_qos_txschq_fill_cfg(pfvf, node, cfg);
> +
> +	return otx2_qos_txschq_push_cfg(pfvf, node, cfg);
> +}
> +
> +static int otx2_qos_txschq_update_root_cfg(struct otx2_nic *pfvf,
> +					   struct otx2_qos_node *root,
> +					   struct otx2_qos_cfg *cfg)
> +{
> +	root->schq = cfg->schq_list[root->level][0];
> +	return otx2_qos_txschq_config(pfvf, root);
> +}
> +
> +static void otx2_qos_free_cfg(struct otx2_nic *pfvf, struct otx2_qos_cfg *cfg)
> +{
> +	int lvl, idx, schq;
> +
> +	for (lvl = 0; lvl < NIX_TXSCH_LVL_CNT; lvl++) {
> +		for (idx = 0; idx < cfg->schq[lvl]; idx++) {
> +			schq = cfg->schq_list[lvl][idx];
> +			otx2_txschq_free_one(pfvf, lvl, schq);
> +		}
> +	}
> +
> +	for (lvl = 0; lvl < NIX_TXSCH_LVL_CNT; lvl++) {
> +		for (idx = 0; idx < cfg->schq_contig[lvl]; idx++) {
> +			schq = cfg->schq_contig_list[lvl][idx];
> +			otx2_txschq_free_one(pfvf, lvl, schq);
> +		}
> +	}
> +}
> +
> +static void otx2_qos_enadis_sq(struct otx2_nic *pfvf,
> +			       struct otx2_qos_node *node,
> +			       u16 qid)
> +{
> +	if (pfvf->qos.qid_to_sqmap[qid] != OTX2_QOS_INVALID_SQ)
> +		otx2_qos_disable_sq(pfvf, qid);
> +
> +	pfvf->qos.qid_to_sqmap[qid] = node->schq;
> +	otx2_qos_enable_sq(pfvf, qid);
> +}
> +
> +static void otx2_qos_update_smq_schq(struct otx2_nic *pfvf,
> +				     struct otx2_qos_node *node,
> +				     bool action)
> +{
> +	struct otx2_qos_node *tmp;
> +
> +	if (node->qid == OTX2_QOS_QID_INNER)
> +		return;
> +
> +	list_for_each_entry(tmp, &node->child_schq_list, list) {
> +		if (tmp->level == NIX_TXSCH_LVL_MDQ) {
> +			if (action == QOS_SMQ_FLUSH)
> +				otx2_smq_flush(pfvf, tmp->schq);
> +			else
> +				otx2_qos_enadis_sq(pfvf, tmp, node->qid);
> +		}
> +	}
> +}
> +
> +static void __otx2_qos_update_smq(struct otx2_nic *pfvf,
> +				  struct otx2_qos_node *node,
> +				  bool action)
> +{
> +	struct otx2_qos_node *tmp;
> +
> +	list_for_each_entry(tmp, &node->child_list, list) {
> +		__otx2_qos_update_smq(pfvf, tmp, action);
> +		if (tmp->qid == OTX2_QOS_QID_INNER)
> +			continue;
> +		if (tmp->level == NIX_TXSCH_LVL_MDQ) {
> +			if (action == QOS_SMQ_FLUSH)
> +				otx2_smq_flush(pfvf, tmp->schq);
> +			else
> +				otx2_qos_enadis_sq(pfvf, tmp, tmp->qid);
> +		} else {
> +			otx2_qos_update_smq_schq(pfvf, tmp, action);
> +		}
> +	}
> +}
> +
> +static void otx2_qos_update_smq(struct otx2_nic *pfvf,
> +				struct otx2_qos_node *node,
> +				bool action)
> +{
> +	mutex_lock(&pfvf->qos.qos_lock);
> +	__otx2_qos_update_smq(pfvf, node, action);
> +	otx2_qos_update_smq_schq(pfvf, node, action);
> +	mutex_unlock(&pfvf->qos.qos_lock);
> +}
> +
> +static int otx2_qos_push_txschq_cfg(struct otx2_nic *pfvf,
> +				    struct otx2_qos_node *node,
> +				    struct otx2_qos_cfg *cfg)
> +{
> +	int ret = 0;
> +
> +	ret = otx2_qos_txschq_alloc(pfvf, cfg);
> +	if (ret)
> +		return -ENOSPC;
> +
> +	if (!(pfvf->netdev->flags & IFF_UP)) {
> +		otx2_qos_txschq_fill_cfg(pfvf, node, cfg);
> +		return 0;
> +	}
> +
> +	ret = otx2_qos_txschq_update_config(pfvf, node, cfg);
> +	if (ret) {
> +		otx2_qos_free_cfg(pfvf, cfg);
> +		return -EIO;
> +	}
> +
> +	otx2_qos_update_smq(pfvf, node, QOS_CFG_SQ);
> +
> +	return 0;
> +}
> +
> +static int otx2_qos_update_tree(struct otx2_nic *pfvf,
> +				struct otx2_qos_node *node,
> +				struct otx2_qos_cfg *cfg)
> +{
> +	otx2_qos_prepare_txschq_cfg(pfvf, node->parent, cfg);
> +	return otx2_qos_push_txschq_cfg(pfvf, node->parent, cfg);
> +}
> +
> +static int otx2_qos_root_add(struct otx2_nic *pfvf, u16 htb_maj_id, u16 htb_defcls,
> +			     struct netlink_ext_ack *extack)
> +{
> +	struct otx2_qos_cfg *new_cfg;
> +	struct otx2_qos_node *root;
> +	int err;
> +
> +	netdev_dbg(pfvf->netdev,
> +		   "TC_HTB_CREATE: handle=0x%x defcls=0x%x\n",
> +		   htb_maj_id, htb_defcls);
> +
> +	INIT_LIST_HEAD(&pfvf->qos.qos_tree);
> +	mutex_init(&pfvf->qos.qos_lock);
> +
> +	root = otx2_qos_alloc_root(pfvf);
> +	if (IS_ERR(root)) {
> +		mutex_destroy(&pfvf->qos.qos_lock);
> +		err = PTR_ERR(root);
> +		return err;
> +	}
> +
> +	/* allocate txschq queue */
> +	new_cfg = kzalloc(sizeof(*new_cfg), GFP_KERNEL);
> +	if (!new_cfg) {
> +		NL_SET_ERR_MSG_MOD(extack, "Memory allocation error");
> +		mutex_destroy(&pfvf->qos.qos_lock);
> +		return -ENOMEM;
> +	}
> +	/* allocate htb root node */
> +	new_cfg->schq[root->level] = 1;
> +	err = otx2_qos_txschq_alloc(pfvf, new_cfg);
> +	if (err) {
> +		NL_SET_ERR_MSG_MOD(extack, "Error allocating txschq");
> +		goto free_root_node;
> +	}
> +
> +	if (!(pfvf->netdev->flags & IFF_UP) ||
> +	    root->level == NIX_TXSCH_LVL_TL1) {
> +		root->schq = new_cfg->schq_list[root->level][0];
> +		goto out;
> +	}
> +
> +	/* update the txschq configuration in hw */
> +	err = otx2_qos_txschq_update_root_cfg(pfvf, root, new_cfg);
> +	if (err) {
> +		NL_SET_ERR_MSG_MOD(extack,
> +				   "Error updating txschq configuration");
> +		goto txschq_free;
> +	}
> +
> +out:
> +	WRITE_ONCE(pfvf->qos.defcls, htb_defcls);
> +	smp_store_release(&pfvf->qos.maj_id, htb_maj_id); /* barrier */
> +	kfree(new_cfg);
> +	return 0;
> +
> +txschq_free:
> +	otx2_qos_free_cfg(pfvf, new_cfg);
> +free_root_node:
> +	kfree(new_cfg);
> +	otx2_qos_sw_node_delete(pfvf, root);
> +	mutex_destroy(&pfvf->qos.qos_lock);
> +	return err;
> +}
> +
> +static int otx2_qos_root_destroy(struct otx2_nic *pfvf)
> +{
> +	struct otx2_qos_node *root;
> +
> +	netdev_dbg(pfvf->netdev, "TC_HTB_DESTROY\n");
> +
> +	/* find root node */
> +	root = otx2_sw_node_find(pfvf, OTX2_QOS_ROOT_CLASSID);
> +	if (IS_ERR(root))
> +		return -ENOENT;
> +
> +	/* free the hw mappings */
> +	otx2_qos_destroy_node(pfvf, root);
> +	mutex_destroy(&pfvf->qos.qos_lock);
> +
> +	return 0;
> +}
> +
> +static int otx2_qos_validate_configuration(struct otx2_qos_node *parent,
> +					   struct netlink_ext_ack *extack,
> +					   struct otx2_nic *pfvf,
> +					   u64 prio)
> +{
> +	if (test_bit(prio, parent->prio_bmap)) {
> +		NL_SET_ERR_MSG_MOD(extack,
> +				   "Static priority child with same priority exists");
> +		return -EEXIST;
> +	}
> +
> +	if (prio == TXSCH_TL1_DFLT_RR_PRIO) {
> +		NL_SET_ERR_MSG_MOD(extack,
> +				   "Priority is reserved for Round Robin");
> +		return -EINVAL;
> +	}
> +
> +	return 0;
> +}
> +
> +static int otx2_qos_leaf_alloc_queue(struct otx2_nic *pfvf, u16 classid,
> +				     u32 parent_classid, u64 rate, u64 ceil,
> +				     u64 prio, struct netlink_ext_ack *extack)
> +{
> +	struct otx2_qos_cfg *old_cfg, *new_cfg;
> +	struct otx2_qos_node *node, *parent;
> +	int qid, ret, err;
> +
> +	netdev_dbg(pfvf->netdev,
> +		   "TC_HTB_LEAF_ALLOC_QUEUE: classid=0x%x parent_classid=0x%x rate=%lld ceil=%lld prio=%lld\n",
> +		   classid, parent_classid, rate, ceil, prio);
> +
> +	if (prio > OTX2_QOS_MAX_PRIO) {
> +		NL_SET_ERR_MSG_MOD(extack, "Valid priority range 0 to 7");
> +		ret = -EOPNOTSUPP;
> +		goto out;
> +	}
> +
> +	/* get parent node */
> +	parent = otx2_sw_node_find(pfvf, parent_classid);
> +	if (IS_ERR(parent)) {
> +		NL_SET_ERR_MSG_MOD(extack, "parent node not found");
> +		ret = -ENOENT;
> +		goto out;
> +	}
> +	if (parent->level == NIX_TXSCH_LVL_MDQ) {
> +		NL_SET_ERR_MSG_MOD(extack, "HTB qos max levels reached");
> +		ret = -EOPNOTSUPP;
> +		goto out;
> +	}
> +
> +	ret = otx2_qos_validate_configuration(parent, extack, pfvf, prio);
> +	if (ret)
> +		goto out;
> +
> +	set_bit(prio, parent->prio_bmap);
> +
> +	/* read current txschq configuration */
> +	old_cfg = kzalloc(sizeof(*old_cfg), GFP_KERNEL);
> +	if (!old_cfg) {
> +		NL_SET_ERR_MSG_MOD(extack, "Memory allocation error");
> +		ret = -ENOMEM;
> +		goto out;
> +	}
> +	otx2_qos_read_txschq_cfg(pfvf, parent, old_cfg);
> +
> +	/* allocate a new sq */
> +	qid = otx2_qos_get_qid(pfvf);
> +	if (qid < 0) {
> +		NL_SET_ERR_MSG_MOD(extack, "Reached max supported QOS SQ's");
> +		ret = -ENOMEM;
> +		goto free_old_cfg;
> +	}
> +
> +	/* Actual SQ mapping will be updated after SMQ alloc */
> +	pfvf->qos.qid_to_sqmap[qid] = OTX2_QOS_INVALID_SQ;
> +
> +	/* allocate and initialize a new child node */
> +	node = otx2_qos_sw_create_leaf_node(pfvf, parent, classid, prio, rate,
> +					    ceil, qid);
> +	if (IS_ERR(node)) {
> +		NL_SET_ERR_MSG_MOD(extack, "Unable to allocate leaf node");
> +		ret = PTR_ERR(node);
> +		goto free_old_cfg;
> +	}
> +
> +	/* push new txschq config to hw */
> +	new_cfg = kzalloc(sizeof(*new_cfg), GFP_KERNEL);
> +	if (!new_cfg) {
> +		NL_SET_ERR_MSG_MOD(extack, "Memory allocation error");
> +		ret = -ENOMEM;
> +		goto free_node;
> +	}
> +	ret = otx2_qos_update_tree(pfvf, node, new_cfg);
> +	if (ret) {
> +		NL_SET_ERR_MSG_MOD(extack, "HTB HW configuration error");
> +		kfree(new_cfg);
> +		otx2_qos_sw_node_delete(pfvf, node);
> +		/* restore the old qos tree */
> +		err = otx2_qos_txschq_update_config(pfvf, parent, old_cfg);
> +		if (err) {
> +			netdev_err(pfvf->netdev,
> +				   "Failed to restore txcshq configuration");
> +			goto free_old_cfg;
> +		}
> +
> +		otx2_qos_update_smq(pfvf, parent, QOS_CFG_SQ);
> +		goto free_old_cfg;
> +	}
> +
> +	/* update tx_real_queues */
> +	otx2_qos_update_tx_netdev_queues(pfvf);
> +
> +	/* free new txschq config */
> +	kfree(new_cfg);
> +
> +	/* free old txschq config */
> +	otx2_qos_free_cfg(pfvf, old_cfg);
> +	kfree(old_cfg);
> +
> +	return pfvf->hw.tx_queues + qid;
> +
> +free_node:
> +	otx2_qos_sw_node_delete(pfvf, node);
> +free_old_cfg:
> +	kfree(old_cfg);
> +out:
> +	return ret;
> +}
> +
> +static int otx2_qos_leaf_to_inner(struct otx2_nic *pfvf, u16 classid,
> +				  u16 child_classid, u64 rate, u64 ceil, u64 prio,
> +				  struct netlink_ext_ack *extack)
> +{
> +	struct otx2_qos_cfg *old_cfg, *new_cfg;
> +	struct otx2_qos_node *node, *child;
> +	int ret, err;
> +	u16 qid;
> +
> +	netdev_dbg(pfvf->netdev,
> +		   "TC_HTB_LEAF_TO_INNER classid %04x, child %04x, rate %llu, ceil %llu\n",
> +		   classid, child_classid, rate, ceil);
> +
> +	if (prio > OTX2_QOS_MAX_PRIO) {
> +		NL_SET_ERR_MSG_MOD(extack, "Valid priority range 0 to 7");
> +		ret = -EOPNOTSUPP;
> +		goto out;
> +	}
> +
> +	/* find node related to classid */
> +	node = otx2_sw_node_find(pfvf, classid);
> +	if (IS_ERR(node)) {
> +		NL_SET_ERR_MSG_MOD(extack, "HTB node not found");
> +		ret = -ENOENT;
> +		goto out;
> +	}
> +	/* check max qos txschq level */
> +	if (node->level == NIX_TXSCH_LVL_MDQ) {
> +		NL_SET_ERR_MSG_MOD(extack, "HTB qos level not supported");
> +		ret = -EOPNOTSUPP;
> +		goto out;
> +	}
> +
> +	set_bit(prio, node->prio_bmap);
> +
> +	/* store the qid to assign to leaf node */
> +	qid = node->qid;
> +
> +	/* read current txschq configuration */
> +	old_cfg = kzalloc(sizeof(*old_cfg), GFP_KERNEL);
> +	if (!old_cfg) {
> +		NL_SET_ERR_MSG_MOD(extack, "Memory allocation error");
> +		ret = -ENOMEM;
> +		goto out;
> +	}
> +	otx2_qos_read_txschq_cfg(pfvf, node, old_cfg);
> +
> +	/* delete the txschq nodes allocated for this node */
> +	otx2_qos_free_sw_node_schq(pfvf, node);
> +
> +	/* mark this node as htb inner node */
> +	node->qid = OTX2_QOS_QID_INNER;

As you can concurrently read node->qid from the datapath
(ndo_select_queue), you should use READ_ONCE/WRITE_ONCE to guarantee
that the value will not be torn; you already use READ_ONCE, but it
doesn't pair with a WRITE_ONCE here.
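
For example, a minimal sketch of the pairing I have in mind, reusing the
names already present in this patch:

	/* control path: mark the node as inner; pairs with the
	 * READ_ONCE() in otx2_get_txq_by_classid()
	 */
	WRITE_ONCE(node->qid, OTX2_QOS_QID_INNER);

	/* datapath side (ndo_select_queue -> otx2_get_txq_by_classid) */
	qid = READ_ONCE(node->qid);
	if (qid == OTX2_QOS_QID_INNER)
		return -EINVAL;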

> +
> +	/* allocate and initialize a new child node */
> +	child = otx2_qos_sw_create_leaf_node(pfvf, node, child_classid,
> +					     prio, rate, ceil, qid);
> +	if (IS_ERR(child)) {
> +		NL_SET_ERR_MSG_MOD(extack, "Unable to allocate leaf node");
> +		ret = PTR_ERR(child);
> +		goto free_old_cfg;
> +	}
> +
> +	/* push new txschq config to hw */
> +	new_cfg = kzalloc(sizeof(*new_cfg), GFP_KERNEL);
> +	if (!new_cfg) {
> +		NL_SET_ERR_MSG_MOD(extack, "Memory allocation error");
> +		ret = -ENOMEM;
> +		goto free_node;
> +	}
> +	ret = otx2_qos_update_tree(pfvf, child, new_cfg);
> +	if (ret) {
> +		NL_SET_ERR_MSG_MOD(extack, "HTB HW configuration error");
> +		kfree(new_cfg);
> +		otx2_qos_sw_node_delete(pfvf, child);
> +		/* restore the old qos tree */
> +		node->qid = qid;

Same here; might be somewhere else as well.

> +		err = otx2_qos_alloc_txschq_node(pfvf, node);
> +		if (err) {
> +			netdev_err(pfvf->netdev,
> +				   "Failed to restore old leaf node");
> +			goto free_old_cfg;
> +		}
> +		err = otx2_qos_txschq_update_config(pfvf, node, old_cfg);
> +		if (err) {
> +			netdev_err(pfvf->netdev,
> +				   "Failed to restore txcshq configuration");
> +			goto free_old_cfg;
> +		}
> +		otx2_qos_update_smq(pfvf, node, QOS_CFG_SQ);
> +		goto free_old_cfg;
> +	}
> +
> +	/* free new txschq config */
> +	kfree(new_cfg);
> +
> +	/* free old txschq config */
> +	otx2_qos_free_cfg(pfvf, old_cfg);
> +	kfree(old_cfg);
> +
> +	return 0;
> +
> +free_node:
> +	otx2_qos_sw_node_delete(pfvf, child);
> +free_old_cfg:
> +	kfree(old_cfg);
> +out:
> +	return ret;
> +}
> +
> +static int otx2_qos_leaf_del(struct otx2_nic *pfvf, u16 *classid,
> +			     struct netlink_ext_ack *extack)
> +{
> +	struct otx2_qos_node *node, *parent;
> +	u64 prio;
> +	u16 qid;
> +
> +	netdev_dbg(pfvf->netdev, "TC_HTB_LEAF_DEL classid %04x\n", *classid);
> +
> +	/* find node related to classid */
> +	node = otx2_sw_node_find(pfvf, *classid);
> +	if (IS_ERR(node)) {
> +		NL_SET_ERR_MSG_MOD(extack, "HTB node not found");
> +		return -ENOENT;
> +	}
> +	parent = node->parent;
> +	prio   = node->prio;
> +	qid    = node->qid;
> +
> +	otx2_qos_disable_sq(pfvf, node->qid);
> +
> +	otx2_qos_destroy_node(pfvf, node);
> +	pfvf->qos.qid_to_sqmap[qid] = OTX2_QOS_INVALID_SQ;
> +
> +	clear_bit(prio, parent->prio_bmap);
> +
> +	return 0;
> +}
> +
> +static int otx2_qos_leaf_del_last(struct otx2_nic *pfvf, u16 classid, bool force,
> +				  struct netlink_ext_ack *extack)
> +{
> +	struct otx2_qos_node *node, *parent;
> +	struct otx2_qos_cfg *new_cfg;
> +	u64 prio;
> +	int err;
> +	u16 qid;
> +
> +	netdev_dbg(pfvf->netdev,
> +		   "TC_HTB_LEAF_DEL_LAST classid %04x\n", classid);
> +
> +	/* find node related to classid */
> +	node = otx2_sw_node_find(pfvf, classid);
> +	if (IS_ERR(node)) {
> +		NL_SET_ERR_MSG_MOD(extack, "HTB node not found");
> +		return -ENOENT;
> +	}
> +
> +	/* save qid for use by parent */
> +	qid = node->qid;
> +	prio = node->prio;
> +
> +	parent = otx2_sw_node_find(pfvf, node->parent->classid);
> +	if (IS_ERR(parent)) {
> +		NL_SET_ERR_MSG_MOD(extack, "parent node not found");
> +		return -ENOENT;
> +	}
> +
> +	/* destroy the leaf node */
> +	otx2_qos_destroy_node(pfvf, node);
> +	pfvf->qos.qid_to_sqmap[qid] = OTX2_QOS_INVALID_SQ;
> +
> +	clear_bit(prio, parent->prio_bmap);
> +
> +	/* create downstream txschq entries to parent */
> +	err = otx2_qos_alloc_txschq_node(pfvf, parent);
> +	if (err) {
> +		NL_SET_ERR_MSG_MOD(extack, "HTB failed to create txsch configuration");
> +		return err;
> +	}
> +	parent->qid = qid;
> +	__set_bit(qid, pfvf->qos.qos_sq_bmap);
> +
> +	/* push new txschq config to hw */
> +	new_cfg = kzalloc(sizeof(*new_cfg), GFP_KERNEL);
> +	if (!new_cfg) {
> +		NL_SET_ERR_MSG_MOD(extack, "Memory allocation error");
> +		return -ENOMEM;
> +	}
> +	/* fill txschq cfg and push txschq cfg to hw */
> +	otx2_qos_fill_cfg_schq(parent, new_cfg);
> +	err = otx2_qos_push_txschq_cfg(pfvf, parent, new_cfg);
> +	if (err) {
> +		NL_SET_ERR_MSG_MOD(extack, "HTB HW configuration error");
> +		kfree(new_cfg);
> +		return err;
> +	}
> +	kfree(new_cfg);
> +
> +	/* update tx_real_queues */
> +	otx2_qos_update_tx_netdev_queues(pfvf);
> +
> +	return 0;
> +}
> +
> +void otx2_clean_qos_queues(struct otx2_nic *pfvf)
> +{
> +	struct otx2_qos_node *root;
> +
> +	root = otx2_sw_node_find(pfvf, OTX2_QOS_ROOT_CLASSID);
> +	if (IS_ERR(root))
> +		return;
> +
> +	otx2_qos_update_smq(pfvf, root, QOS_SMQ_FLUSH);
> +}
> +
> +void otx2_qos_config_txschq(struct otx2_nic *pfvf)
> +{
> +	struct otx2_qos_node *root;
> +	int err;
> +
> +	root = otx2_sw_node_find(pfvf, OTX2_QOS_ROOT_CLASSID);
> +	if (IS_ERR(root))
> +		return;
> +
> +	err = otx2_qos_txschq_config(pfvf, root);
> +	if (err) {
> +		netdev_err(pfvf->netdev, "Error update txschq configuration\n");
> +		goto root_destroy;
> +	}
> +
> +	err = otx2_qos_txschq_push_cfg_tl(pfvf, root, NULL);
> +	if (err) {
> +		netdev_err(pfvf->netdev, "Error update txschq configuration\n");
> +		goto root_destroy;
> +	}
> +
> +	otx2_qos_update_smq(pfvf, root, QOS_CFG_SQ);
> +	return;
> +
> +root_destroy:
> +	netdev_err(pfvf->netdev, "Failed to update Scheduler/Shaping config in Hardware\n");
> +	/* Free resources allocated */
> +	otx2_qos_root_destroy(pfvf);
> +}
> +
> +int otx2_setup_tc_htb(struct net_device *ndev, struct tc_htb_qopt_offload *htb)
> +{
> +	struct otx2_nic *pfvf = netdev_priv(ndev);
> +	int res;
> +
> +	switch (htb->command) {
> +	case TC_HTB_CREATE:
> +		return otx2_qos_root_add(pfvf, htb->parent_classid,
> +					 htb->classid, htb->extack);
> +	case TC_HTB_DESTROY:
> +		return otx2_qos_root_destroy(pfvf);
> +	case TC_HTB_LEAF_ALLOC_QUEUE:
> +		res = otx2_qos_leaf_alloc_queue(pfvf, htb->classid,
> +						htb->parent_classid,
> +						htb->rate, htb->ceil,
> +						htb->prio, htb->extack);
> +		if (res < 0)
> +			return res;
> +		htb->qid = res;
> +		return 0;
> +	case TC_HTB_LEAF_TO_INNER:
> +		return otx2_qos_leaf_to_inner(pfvf, htb->parent_classid,
> +					      htb->classid, htb->rate,
> +					      htb->ceil, htb->prio,
> +					      htb->extack);
> +	case TC_HTB_LEAF_DEL:
> +		return otx2_qos_leaf_del(pfvf, &htb->classid, htb->extack);
> +	case TC_HTB_LEAF_DEL_LAST:
> +	case TC_HTB_LEAF_DEL_LAST_FORCE:
> +		return otx2_qos_leaf_del_last(pfvf, htb->classid,
> +				htb->command == TC_HTB_LEAF_DEL_LAST_FORCE,
> +					      htb->extack);
> +	case TC_HTB_LEAF_QUERY_QUEUE:
> +		res = otx2_get_txq_by_classid(pfvf, htb->classid);
> +		htb->qid = res;
> +		return 0;
> +	case TC_HTB_NODE_MODIFY:
> +		fallthrough;
> +	default:
> +		return -EOPNOTSUPP;
> +	}
> +}
> diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/qos.h b/drivers/net/ethernet/marvell/octeontx2/nic/qos.h
> index 73a62d092e99..26de1af2aa57 100644
> --- a/drivers/net/ethernet/marvell/octeontx2/nic/qos.h
> +++ b/drivers/net/ethernet/marvell/octeontx2/nic/qos.h
> @@ -7,13 +7,63 @@
>  #ifndef OTX2_QOS_H
>  #define OTX2_QOS_H
>  
> +#include <linux/types.h>
> +#include <linux/netdevice.h>
> +#include <linux/rhashtable.h>
> +
> +#define OTX2_QOS_MAX_LVL		4
> +#define OTX2_QOS_MAX_PRIO		7
>  #define OTX2_QOS_MAX_LEAF_NODES                16
>  
> -int otx2_qos_enable_sq(struct otx2_nic *pfvf, int qidx, u16 smq);
> -void otx2_qos_disable_sq(struct otx2_nic *pfvf, int qidx, u16 mdq);
> +enum qos_smq_operations {
> +	QOS_CFG_SQ,
> +	QOS_SMQ_FLUSH,
> +};
> +
> +u64 otx2_get_txschq_rate_regval(struct otx2_nic *nic, u64 maxrate, u32 burst);
> +
> +int otx2_setup_tc_htb(struct net_device *ndev, struct tc_htb_qopt_offload *htb);
> +int otx2_qos_get_qid(struct otx2_nic *pfvf);
> +void otx2_qos_free_qid(struct otx2_nic *pfvf, int qidx);
> +int otx2_qos_enable_sq(struct otx2_nic *pfvf, int qidx);
> +void otx2_qos_disable_sq(struct otx2_nic *pfvf, int qidx);
> +
> +struct otx2_qos_cfg {
> +	u16 schq[NIX_TXSCH_LVL_CNT];
> +	u16 schq_contig[NIX_TXSCH_LVL_CNT];
> +	int static_node_pos[NIX_TXSCH_LVL_CNT];
> +	int dwrr_node_pos[NIX_TXSCH_LVL_CNT];
> +	u16 schq_contig_list[NIX_TXSCH_LVL_CNT][MAX_TXSCHQ_PER_FUNC];
> +	u16 schq_list[NIX_TXSCH_LVL_CNT][MAX_TXSCHQ_PER_FUNC];
> +};
>  
>  struct otx2_qos {
> -	       u16 qid_to_sqmap[OTX2_QOS_MAX_LEAF_NODES];
> -	};
> +	DECLARE_HASHTABLE(qos_hlist, order_base_2(OTX2_QOS_MAX_LEAF_NODES));
> +	struct mutex qos_lock; /* child list lock */
> +	u16 qid_to_sqmap[OTX2_QOS_MAX_LEAF_NODES];
> +	struct list_head qos_tree;
> +	DECLARE_BITMAP(qos_sq_bmap, OTX2_QOS_MAX_LEAF_NODES);
> +	u16 maj_id;
> +	u16 defcls;
> +	u8  link_cfg_lvl; /* LINKX_CFG CSRs mapped to TL3 or TL2's index ? */
> +};
> +
> +struct otx2_qos_node {
> +	struct list_head list; /* list managment */
> +	struct list_head child_list;
> +	struct list_head child_schq_list;
> +	struct hlist_node hlist;
> +	DECLARE_BITMAP(prio_bmap, OTX2_QOS_MAX_PRIO + 1);
> +	struct otx2_qos_node *parent;	/* parent qos node */
> +	u64 rate; /* htb params */
> +	u64 ceil;
> +	u32 classid;
> +	u32 prio;
> +	u16 schq; /* hw txschq */
> +	u16 qid;
> +	u16 prio_anchor;
> +	u8 level;
> +};
> +
>  
>  #endif
> diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/qos_sq.c b/drivers/net/ethernet/marvell/octeontx2/nic/qos_sq.c
> index 1c77f024c360..8a1e89668a1b 100644
> --- a/drivers/net/ethernet/marvell/octeontx2/nic/qos_sq.c
> +++ b/drivers/net/ethernet/marvell/octeontx2/nic/qos_sq.c
> @@ -225,7 +225,22 @@ static int otx2_qos_ctx_disable(struct otx2_nic *pfvf, u16 qidx, int aura_id)
>  	return otx2_sync_mbox_msg(&pfvf->mbox);
>  }
>  
> -int otx2_qos_enable_sq(struct otx2_nic *pfvf, int qidx, u16 smq)
> +int otx2_qos_get_qid(struct otx2_nic *pfvf)
> +{
> +	int qidx;
> +
> +	qidx = find_first_zero_bit(pfvf->qos.qos_sq_bmap,
> +				   pfvf->hw.tc_tx_queues);
> +
> +	return qidx == pfvf->hw.tc_tx_queues ? -ENOSPC : qidx;
> +}
> +
> +void otx2_qos_free_qid(struct otx2_nic *pfvf, int qidx)
> +{
> +	clear_bit(qidx, pfvf->qos.qos_sq_bmap);
> +}
> +
> +int otx2_qos_enable_sq(struct otx2_nic *pfvf, int qidx)
>  {
>  	struct otx2_hw *hw = &pfvf->hw;
>  	int pool_id, sq_idx, err;
> @@ -241,7 +256,6 @@ int otx2_qos_enable_sq(struct otx2_nic *pfvf, int qidx, u16 smq)
>  		goto out;
>  
>  	pool_id = otx2_get_pool_idx(pfvf, AURA_NIX_SQ, sq_idx);
> -	pfvf->qos.qid_to_sqmap[qidx] = smq;
>  	err = otx2_sq_init(pfvf, sq_idx, pool_id);
>  	if (err)
>  		goto out;
> @@ -250,7 +264,7 @@ int otx2_qos_enable_sq(struct otx2_nic *pfvf, int qidx, u16 smq)
>  	return err;
>  }
>  
> -void otx2_qos_disable_sq(struct otx2_nic *pfvf, int qidx, u16 mdq)
> +void otx2_qos_disable_sq(struct otx2_nic *pfvf, int qidx)
>  {
>  	struct otx2_qset *qset = &pfvf->qset;
>  	struct otx2_hw *hw = &pfvf->hw;
> -- 
> 2.17.1
> 

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [net-next Patch v5 5/6] octeontx2-pf: Add support for HTB offload
  2023-03-28 10:56   ` Paolo Abeni
@ 2023-03-29 16:44     ` Hariprasad Kelam
  0 siblings, 0 replies; 19+ messages in thread
From: Hariprasad Kelam @ 2023-03-29 16:44 UTC (permalink / raw)
  To: Paolo Abeni, netdev, linux-kernel
  Cc: kuba, davem, willemdebruijn.kernel, andrew,
	Sunil Kovvuri Goutham, Linu Cherian, Geethasowjanya Akula,
	Jerin Jacob Kollanukkaran, Subbaraya Sundeep Bhatta, naveenm,
	edumazet, jhs, xiyou.wangcong, jiri, maxtram95



> On Sun, 2023-03-26 at 23:42 +0530, Hariprasad Kelam wrote:
> [...]
> > +static int otx2_qos_root_add(struct otx2_nic *pfvf, u16 htb_maj_id, u16
> htb_defcls,
> > +			     struct netlink_ext_ack *extack) {
> > +	struct otx2_qos_cfg *new_cfg;
> > +	struct otx2_qos_node *root;
> > +	int err;
> > +
> > +	netdev_dbg(pfvf->netdev,
> > +		   "TC_HTB_CREATE: handle=0x%x defcls=0x%x\n",
> > +		   htb_maj_id, htb_defcls);
> > +
> > +	INIT_LIST_HEAD(&pfvf->qos.qos_tree);
> > +	mutex_init(&pfvf->qos.qos_lock);
> 
> It's quite strange and error prone to dynamically init this mutex and the list
> here. Why don't you do such init at device creation time?
ACK, we can safely move this logic into device init.
Will add these changes in the next version.
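Roughly along these lines; doing the init once at probe time is only a
sketch of the intent, the exact place can be decided later:

	/* one-time init at device creation, instead of in TC_HTB_CREATE */
	INIT_LIST_HEAD(&pf->qos.qos_tree);
	mutex_init(&pf->qos.qos_lock);

The mutex_destroy() calls on the otx2_qos_root_add() error paths would
then go away as well.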
> 
> > +
> > +	root = otx2_qos_alloc_root(pfvf);
> > +	if (IS_ERR(root)) {
> > +		mutex_destroy(&pfvf->qos.qos_lock);
> > +		err = PTR_ERR(root);
> > +		return err;
> > +	}
> > +
> > +	/* allocate txschq queue */
> > +	new_cfg = kzalloc(sizeof(*new_cfg), GFP_KERNEL);
> > +	if (!new_cfg) {
> > +		NL_SET_ERR_MSG_MOD(extack, "Memory allocation
> error");
> 
> Here the root node is leaked.
ACK, will address this issue in the next version.
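Most likely it only needs the root node freed on that error path,
e.g. (sketch):

	if (!new_cfg) {
		NL_SET_ERR_MSG_MOD(extack, "Memory allocation error");
		otx2_qos_sw_node_delete(pfvf, root);
		mutex_destroy(&pfvf->qos.qos_lock);
		return -ENOMEM;
	}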

Thanks,
Hariprasad k
> 
> > +		mutex_destroy(&pfvf->qos.qos_lock);
> > +		return -ENOMEM;
> > +	}
> 
> 
> [...]
> 
> Cheers,
> 
> Paolo


^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [net-next Patch v5 5/6] octeontx2-pf: Add support for HTB offload
  2023-03-28 14:41   ` Simon Horman
@ 2023-03-29 17:17     ` Hariprasad Kelam
  0 siblings, 0 replies; 19+ messages in thread
From: Hariprasad Kelam @ 2023-03-29 17:17 UTC (permalink / raw)
  To: Simon Horman
  Cc: netdev, linux-kernel, kuba, davem, willemdebruijn.kernel, andrew,
	Sunil Kovvuri Goutham, Linu Cherian, Geethasowjanya Akula,
	Jerin Jacob Kollanukkaran, Subbaraya Sundeep Bhatta, naveenm,
	edumazet, pabeni, jhs, xiyou.wangcong, jiri, maxtram95

Thanks for the review, will address the issues in the next version.

Thanks,
Hariprasad k

> On Sun, Mar 26, 2023 at 11:42:44PM +0530, Hariprasad Kelam wrote:
> > From: Naveen Mamindlapalli <naveenm@marvell.com>
> >
> > This patch registers callbacks to support HTB offload.
> >
> > Below are features supported,
> >
> > - supports traffic shaping on the given class by honoring rate and
> > ceil configuration.
> >
> > - supports traffic scheduling,  which prioritizes different types of
> > traffic based on strict priority values.
> >
> > - supports the creation of leaf to inner classes such that parent node
> > rate limits apply to all child nodes.
> >
> > Signed-off-by: Naveen Mamindlapalli <naveenm@marvell.com>
> > Signed-off-by: Hariprasad Kelam <hkelam@marvell.com>
> > Signed-off-by: Sunil Kovvuri Goutham <sgoutham@marvell.com>
> 
> Reviewed-by: Simon Horman <simon.horman@corigine.com>
> 
> > ---
> >  .../ethernet/marvell/octeontx2/af/common.h    |    2 +-
> >  .../ethernet/marvell/octeontx2/nic/Makefile   |    2 +-
> >  .../marvell/octeontx2/nic/otx2_common.c       |   35 +-
> >  .../marvell/octeontx2/nic/otx2_common.h       |    8 +-
> >  .../marvell/octeontx2/nic/otx2_ethtool.c      |   31 +-
> >  .../ethernet/marvell/octeontx2/nic/otx2_pf.c  |   56 +-
> >  .../ethernet/marvell/octeontx2/nic/otx2_reg.h |   13 +
> >  .../ethernet/marvell/octeontx2/nic/otx2_tc.c  |    7 +-
> >  .../net/ethernet/marvell/octeontx2/nic/qos.c  | 1460
> +++++++++++++++++
> >  .../net/ethernet/marvell/octeontx2/nic/qos.h  |   58 +-
> >  .../ethernet/marvell/octeontx2/nic/qos_sq.c   |   20 +-
> >  11 files changed, 1657 insertions(+), 35 deletions(-)
> 
> nit: this is a rather long patch.
> 
> ...
> 
> > diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_ethtool.c
> > b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_ethtool.c
> 
> ...
> 
> > @@ -159,7 +165,7 @@ static void otx2_get_qset_stats(struct otx2_nic
> *pfvf,
> >  				[otx2_queue_stats[stat].index];
> >  	}
> >
> > -	for (qidx = 0; qidx < pfvf->hw.tx_queues; qidx++) {
> > +	for (qidx = 0; qidx <  otx2_get_total_tx_queues(pfvf); qidx++) {
> 
> nit: extra whitespace after '<'
> 
> >  		if (!otx2_update_sq_stats(pfvf, qidx)) {
> >  			for (stat = 0; stat < otx2_n_queue_stats; stat++)
> >  				*((*data)++) = 0;
> 
> ...
> 
> > diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/qos.c
> > b/drivers/net/ethernet/marvell/octeontx2/nic/qos.c
> 
> ...
> 
> > +static int otx2_qos_update_tx_netdev_queues(struct otx2_nic *pfvf) {
> > +	int tx_queues, qos_txqs, err;
> > +	struct otx2_hw *hw = &pfvf->hw;
> 
> nit: reverse xmas tree - longest line to shortest -
>      for local variable declarations.
> 
> > diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/qos.h
> > b/drivers/net/ethernet/marvell/octeontx2/nic/qos.h
> 
> ...
> 
> > +struct otx2_qos_node {
> > +	struct list_head list; /* list managment */
> 
> nit: s/managment/management/
> 
> > +	struct list_head child_list;
> > +	struct list_head child_schq_list;
> > +	struct hlist_node hlist;
> > +	DECLARE_BITMAP(prio_bmap, OTX2_QOS_MAX_PRIO + 1);
> > +	struct otx2_qos_node *parent;	/* parent qos node */
> > +	u64 rate; /* htb params */
> > +	u64 ceil;
> > +	u32 classid;
> > +	u32 prio;
> > +	u16 schq; /* hw txschq */
> > +	u16 qid;
> > +	u16 prio_anchor;
> > +	u8 level;
> > +};
> > +
> >
> >  #endif
> 
> ...

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [net-next Patch v5 3/6] octeontx2-pf: qos send queues management
  2023-03-28 14:43   ` Simon Horman
@ 2023-03-29 17:19     ` Hariprasad Kelam
  0 siblings, 0 replies; 19+ messages in thread
From: Hariprasad Kelam @ 2023-03-29 17:19 UTC (permalink / raw)
  To: Simon Horman
  Cc: netdev, linux-kernel, kuba, davem, willemdebruijn.kernel, andrew,
	Sunil Kovvuri Goutham, Linu Cherian, Geethasowjanya Akula,
	Jerin Jacob Kollanukkaran, Subbaraya Sundeep Bhatta, naveenm,
	edumazet, pabeni, jhs, xiyou.wangcong, jiri, maxtram95



> On Sun, Mar 26, 2023 at 11:42:42PM +0530, Hariprasad Kelam wrote:
> > From: Subbaraya Sundeep <sbhatta@marvell.com>
> >
> > Current implementation is such that the number of Send queues (SQs)
> > are decided on the device probe which is equal to the number of online
> > cpus. These SQs are allocated and deallocated in interface open and c
> > lose calls respectively.
> >
> > This patch defines new APIs for initializing and deinitializing Send
> > queues dynamically and allocates more number of transmit queues for
> > QOS feature.
> >
> > Signed-off-by: Subbaraya Sundeep <sbhatta@marvell.com>
> > Signed-off-by: Hariprasad Kelam <hkelam@marvell.com>
> > Signed-off-by: Sunil Kovvuri Goutham <sgoutham@marvell.com>
> 
> Reviewed-by: Simon Horman <simon.horman@corigine.com>
> 
> > diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_pf.c
> > b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_pf.c
> 
> ...
> 
> > @@ -1938,6 +1952,12 @@ static netdev_tx_t otx2_xmit(struct sk_buff
> *skb, struct net_device *netdev)
> >  	int qidx = skb_get_queue_mapping(skb);
> >  	struct otx2_snd_queue *sq;
> >  	struct netdev_queue *txq;
> > +	int sq_idx;
> > +
> > +	/* XDP SQs are not mapped with TXQs
> > +	 * advance qid to derive correct sq maped with QOS
> 
> nit: s/maped/mapped/
> 
ACK, will add the change in the next version.

Thanks,
Hariprasad k
> ...

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [net-next Patch v5 5/6] octeontx2-pf: Add support for HTB offload
  2023-03-28 18:42   ` Maxim Mikityanskiy
@ 2023-03-29 18:03     ` Hariprasad Kelam
  0 siblings, 0 replies; 19+ messages in thread
From: Hariprasad Kelam @ 2023-03-29 18:03 UTC (permalink / raw)
  To: Maxim Mikityanskiy
  Cc: netdev, linux-kernel, kuba, davem, willemdebruijn.kernel, andrew,
	Sunil Kovvuri Goutham, Linu Cherian, Geethasowjanya Akula,
	Jerin Jacob Kollanukkaran, Subbaraya Sundeep Bhatta, naveenm,
	edumazet, pabeni, jhs, xiyou.wangcong, jiri


Please see inline,

> I have a few comments about concurrency issues, see below. I didn't
> analyze the concurrency model of your driver deeply, so please forgive me
> if I missed some bugs or accidentally called some good code buggy.
> 
> A few general things to pay attention to, regarding HTB offload:
> 
> 1. ndo_select_queue can be called at any time, there is no reliable way
> to prevent the kernel from calling it, that means that ndo_select_queue
> must not crash if HTB configuration and structures are being updated in
> another thread.
> 
> 2. ndo_start_xmit runs in an RCU read lock. If you need to release some
> structure that can be used from another thread in the TX datapath, you
> can set some atomic flag, synchronize with RCU, then release the object.
> 
> 3. You can take some inspiration from mlx5e, although you may find it's
> a convoluted cooperation of spinlocks, mutexes, atomic operations with
> barriers, and RCU. A big part of it is related to the mechanism of safe
> reopening of queues, which your driver may not need, but the remaining
> parts have a lot of similarities, so you can find useful insights about
> the locking for HTB in mlx5e.
> 
> On Sun, Mar 26, 2023 at 11:42:44PM +0530, Hariprasad Kelam wrote:
> > From: Naveen Mamindlapalli <naveenm@marvell.com>
> >
> > This patch registers callbacks to support HTB offload.
> >
> > Below are features supported,
> >
> > - supports traffic shaping on the given class by honoring rate and ceil
> > configuration.
> >
> > - supports traffic scheduling,  which prioritizes different types of
> > traffic based on strict priority values.
> >
> > - supports the creation of leaf to inner classes such that parent node
> > rate limits apply to all child nodes.
> >
> > Signed-off-by: Naveen Mamindlapalli <naveenm@marvell.com>
> > Signed-off-by: Hariprasad Kelam <hkelam@marvell.com>
> > Signed-off-by: Sunil Kovvuri Goutham <sgoutham@marvell.com>
> > ---
> >  .../ethernet/marvell/octeontx2/af/common.h    |    2 +-
> >  .../ethernet/marvell/octeontx2/nic/Makefile   |    2 +-
> >  .../marvell/octeontx2/nic/otx2_common.c       |   35 +-
> >  .../marvell/octeontx2/nic/otx2_common.h       |    8 +-
> >  .../marvell/octeontx2/nic/otx2_ethtool.c      |   31 +-
> >  .../ethernet/marvell/octeontx2/nic/otx2_pf.c  |   56 +-
> >  .../ethernet/marvell/octeontx2/nic/otx2_reg.h |   13 +
> >  .../ethernet/marvell/octeontx2/nic/otx2_tc.c  |    7 +-
> >  .../net/ethernet/marvell/octeontx2/nic/qos.c  | 1460
> +++++++++++++++++
> >  .../net/ethernet/marvell/octeontx2/nic/qos.h  |   58 +-
> >  .../ethernet/marvell/octeontx2/nic/qos_sq.c   |   20 +-
> >  11 files changed, 1657 insertions(+), 35 deletions(-)
> >  create mode 100644 drivers/net/ethernet/marvell/octeontx2/nic/qos.c
> >
> > diff --git a/drivers/net/ethernet/marvell/octeontx2/af/common.h
> b/drivers/net/ethernet/marvell/octeontx2/af/common.h
> > index 8931864ee110..f5bf719a6ccf 100644
> > --- a/drivers/net/ethernet/marvell/octeontx2/af/common.h
> > +++ b/drivers/net/ethernet/marvell/octeontx2/af/common.h
> > @@ -142,7 +142,7 @@ enum nix_scheduler {
> >
> >  #define TXSCH_RR_QTM_MAX		((1 << 24) - 1)
> >  #define TXSCH_TL1_DFLT_RR_QTM		TXSCH_RR_QTM_MAX
> > -#define TXSCH_TL1_DFLT_RR_PRIO		(0x1ull)
> > +#define TXSCH_TL1_DFLT_RR_PRIO		(0x7ull)
> >  #define CN10K_MAX_DWRR_WEIGHT          16384 /* Weight is 14bit on
> CN10K */
> >
> >  /* Min/Max packet sizes, excluding FCS */
> > diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/Makefile
> b/drivers/net/ethernet/marvell/octeontx2/nic/Makefile
> > index 3d31ddf7c652..5664f768cb0c 100644
> > --- a/drivers/net/ethernet/marvell/octeontx2/nic/Makefile
> > +++ b/drivers/net/ethernet/marvell/octeontx2/nic/Makefile
> > @@ -8,7 +8,7 @@ obj-$(CONFIG_OCTEONTX2_VF) += rvu_nicvf.o
> otx2_ptp.o
> >
> >  rvu_nicpf-y := otx2_pf.o otx2_common.o otx2_txrx.o otx2_ethtool.o \
> >                 otx2_flows.o otx2_tc.o cn10k.o otx2_dmac_flt.o \
> > -               otx2_devlink.o qos_sq.o
> > +               otx2_devlink.o qos_sq.o qos.o
> >  rvu_nicvf-y := otx2_vf.o otx2_devlink.o
> >
> >  rvu_nicpf-$(CONFIG_DCB) += otx2_dcbnl.o
> > diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.c
> b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.c
> > index 32c02a2d3554..b4542a801291 100644
> > --- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.c
> > +++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.c
> > @@ -89,6 +89,11 @@ int otx2_update_sq_stats(struct otx2_nic *pfvf, int
> qidx)
> >  	if (!pfvf->qset.sq)
> >  		return 0;
> >
> > +	if (qidx >= pfvf->hw.non_qos_queues) {
> > +		if (!test_bit(qidx - pfvf->hw.non_qos_queues, pfvf-
> >qos.qos_sq_bmap))
> > +			return 0;
> > +	}
> > +
> >  	otx2_nix_sq_op_stats(&sq->stats, pfvf, qidx);
> >  	return 1;
> >  }
> > @@ -747,29 +752,47 @@ int otx2_txsch_alloc(struct otx2_nic *pfvf)
> >  	return 0;
> >  }
> >
> > -int otx2_txschq_stop(struct otx2_nic *pfvf)
> > +void otx2_txschq_free_one(struct otx2_nic *pfvf, u16 lvl, u16 schq)
> >  {
> >  	struct nix_txsch_free_req *free_req;
> > -	int lvl, schq, err;
> > +	int err;
> >
> >  	mutex_lock(&pfvf->mbox.lock);
> > -	/* Free the transmit schedulers */
> > +
> >  	free_req = otx2_mbox_alloc_msg_nix_txsch_free(&pfvf->mbox);
> >  	if (!free_req) {
> >  		mutex_unlock(&pfvf->mbox.lock);
> > -		return -ENOMEM;
> > +		netdev_err(pfvf->netdev,
> > +			   "Failed alloc txschq free req\n");
> > +		return;
> >  	}
> >
> > -	free_req->flags = TXSCHQ_FREE_ALL;
> > +	free_req->schq_lvl = lvl;
> > +	free_req->schq = schq;
> > +
> >  	err = otx2_sync_mbox_msg(&pfvf->mbox);
> > +	if (err) {
> > +		netdev_err(pfvf->netdev,
> > +			   "Failed stop txschq %d at level %d\n", lvl, schq);
> > +	}
> > +
> >  	mutex_unlock(&pfvf->mbox.lock);
> > +}
> > +
> > +void otx2_txschq_stop(struct otx2_nic *pfvf)
> > +{
> > +	int lvl, schq;
> > +
> > +	/* free non QOS TLx nodes */
> > +	for (lvl = 0; lvl < NIX_TXSCH_LVL_CNT; lvl++)
> > +		otx2_txschq_free_one(pfvf, lvl,
> > +				     pfvf->hw.txschq_list[lvl][0]);
> >
> >  	/* Clear the txschq list */
> >  	for (lvl = 0; lvl < NIX_TXSCH_LVL_CNT; lvl++) {
> >  		for (schq = 0; schq < MAX_TXSCHQ_PER_FUNC; schq++)
> >  			pfvf->hw.txschq_list[lvl][schq] = 0;
> >  	}
> > -	return err;
> >  }
> >
> >  void otx2_sqb_flush(struct otx2_nic *pfvf)
> > diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.h
> b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.h
> > index 3834cc447426..4b219e8e5b32 100644
> > --- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.h
> > +++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.h
> > @@ -252,6 +252,7 @@ struct otx2_hw {
> >  #define CN10K_RPM		3
> >  #define CN10K_PTP_ONESTEP	4
> >  #define CN10K_HW_MACSEC		5
> > +#define QOS_CIR_PIR_SUPPORT	6
> >  	unsigned long		cap_flag;
> >
> >  #define LMT_LINE_SIZE		128
> > @@ -586,6 +587,7 @@ static inline void
> otx2_setup_dev_hw_settings(struct otx2_nic *pfvf)
> >  		__set_bit(CN10K_LMTST, &hw->cap_flag);
> >  		__set_bit(CN10K_RPM, &hw->cap_flag);
> >  		__set_bit(CN10K_PTP_ONESTEP, &hw->cap_flag);
> > +		__set_bit(QOS_CIR_PIR_SUPPORT, &hw->cap_flag);
> >  	}
> >
> >  	if (is_dev_cn10kb(pfvf->pdev))
> > @@ -935,7 +937,7 @@ int otx2_config_nix(struct otx2_nic *pfvf);
> >  int otx2_config_nix_queues(struct otx2_nic *pfvf);
> >  int otx2_txschq_config(struct otx2_nic *pfvf, int lvl, int prio, bool pfc_en);
> >  int otx2_txsch_alloc(struct otx2_nic *pfvf);
> > -int otx2_txschq_stop(struct otx2_nic *pfvf);
> > +void otx2_txschq_stop(struct otx2_nic *pfvf);
> >  void otx2_sqb_flush(struct otx2_nic *pfvf);
> >  int otx2_alloc_rbuf(struct otx2_nic *pfvf, struct otx2_pool *pool,
> >  		    dma_addr_t *dma);
> > @@ -953,6 +955,7 @@ int otx2_pool_init(struct otx2_nic *pfvf, u16
> pool_id,
> >  		   int stack_pages, int numptrs, int buf_size);
> >  int otx2_aura_init(struct otx2_nic *pfvf, int aura_id,
> >  		   int pool_id, int numptrs);
> > +void otx2_txschq_free_one(struct otx2_nic *pfvf, u16 lvl, u16 schq);
> >
> >  /* RSS configuration APIs*/
> >  int otx2_rss_init(struct otx2_nic *pfvf);
> > @@ -1064,4 +1067,7 @@ static inline void cn10k_handle_mcs_event(struct
> otx2_nic *pfvf,
> >  void otx2_qos_sq_setup(struct otx2_nic *pfvf, int qos_txqs);
> >  u16 otx2_select_queue(struct net_device *netdev, struct sk_buff *skb,
> >  		      struct net_device *sb_dev);
> > +int otx2_get_txq_by_classid(struct otx2_nic *pfvf, u16 classid);
> > +void otx2_qos_config_txschq(struct otx2_nic *pfvf);
> > +void otx2_clean_qos_queues(struct otx2_nic *pfvf);
> >  #endif /* OTX2_COMMON_H */
> > diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_ethtool.c
> b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_ethtool.c
> > index 0f8d1a69139f..e8722a4f4cc6 100644
> > --- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_ethtool.c
> > +++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_ethtool.c
> > @@ -92,10 +92,16 @@ static void otx2_get_qset_strings(struct otx2_nic
> *pfvf, u8 **data, int qset)
> >  			*data += ETH_GSTRING_LEN;
> >  		}
> >  	}
> > -	for (qidx = 0; qidx < pfvf->hw.tx_queues; qidx++) {
> > +
> > +	for (qidx = 0; qidx < otx2_get_total_tx_queues(pfvf); qidx++) {
> >  		for (stats = 0; stats < otx2_n_queue_stats; stats++) {
> > -			sprintf(*data, "txq%d: %s", qidx + start_qidx,
> > -				otx2_queue_stats[stats].name);
> > +			if (qidx >= pfvf->hw.non_qos_queues)
> > +				sprintf(*data, "txq_qos%d: %s",
> > +					qidx + start_qidx - pfvf-
> >hw.non_qos_queues,
> > +					otx2_queue_stats[stats].name);
> > +			else
> > +				sprintf(*data, "txq%d: %s", qidx + start_qidx,
> > +					otx2_queue_stats[stats].name);
> >  			*data += ETH_GSTRING_LEN;
> >  		}
> >  	}
> > @@ -159,7 +165,7 @@ static void otx2_get_qset_stats(struct otx2_nic
> *pfvf,
> >  				[otx2_queue_stats[stat].index];
> >  	}
> >
> > -	for (qidx = 0; qidx < pfvf->hw.tx_queues; qidx++) {
> > +	for (qidx = 0; qidx <  otx2_get_total_tx_queues(pfvf); qidx++) {
> >  		if (!otx2_update_sq_stats(pfvf, qidx)) {
> >  			for (stat = 0; stat < otx2_n_queue_stats; stat++)
> >  				*((*data)++) = 0;
> > @@ -254,7 +260,8 @@ static int otx2_get_sset_count(struct net_device
> *netdev, int sset)
> >  		return -EINVAL;
> >
> >  	qstats_count = otx2_n_queue_stats *
> > -		       (pfvf->hw.rx_queues + pfvf->hw.tx_queues);
> > +		       (pfvf->hw.rx_queues + pfvf->hw.non_qos_queues +
> > +			pfvf->hw.tc_tx_queues);
> >  	if (!test_bit(CN10K_RPM, &pfvf->hw.cap_flag))
> >  		mac_stats = CGX_RX_STATS_COUNT +
> CGX_TX_STATS_COUNT;
> >  	otx2_update_lmac_fec_stats(pfvf);
> > @@ -282,7 +289,7 @@ static int otx2_set_channels(struct net_device
> *dev,
> >  {
> >  	struct otx2_nic *pfvf = netdev_priv(dev);
> >  	bool if_up = netif_running(dev);
> > -	int err = 0;
> > +	int err, qos_txqs;
> >
> >  	if (!channel->rx_count || !channel->tx_count)
> >  		return -EINVAL;
> > @@ -296,14 +303,19 @@ static int otx2_set_channels(struct net_device
> *dev,
> >  	if (if_up)
> >  		dev->netdev_ops->ndo_stop(dev);
> >
> > -	err = otx2_set_real_num_queues(dev, channel->tx_count,
> > +	qos_txqs = bitmap_weight(pfvf->qos.qos_sq_bmap,
> > +				 OTX2_QOS_MAX_LEAF_NODES);
> > +
> > +	err = otx2_set_real_num_queues(dev, channel->tx_count +
> qos_txqs,
> >  				       channel->rx_count);
> >  	if (err)
> >  		return err;
> >
> >  	pfvf->hw.rx_queues = channel->rx_count;
> >  	pfvf->hw.tx_queues = channel->tx_count;
> > -	pfvf->qset.cq_cnt = pfvf->hw.tx_queues +  pfvf->hw.rx_queues;
> > +	if (pfvf->xdp_prog)
> > +		pfvf->hw.xdp_queues = channel->rx_count;
> > +	pfvf->hw.non_qos_queues =  pfvf->hw.tx_queues + pfvf-
> >hw.xdp_queues;
> >
> >  	if (if_up)
> >  		err = dev->netdev_ops->ndo_open(dev);
> > @@ -1405,7 +1417,8 @@ static int otx2vf_get_sset_count(struct
> net_device *netdev, int sset)
> >  		return -EINVAL;
> >
> >  	qstats_count = otx2_n_queue_stats *
> > -		       (vf->hw.rx_queues + vf->hw.tx_queues);
> > +		       (vf->hw.rx_queues + vf->hw.tx_queues +
> > +			vf->hw.tc_tx_queues);
> >
> >  	return otx2_n_dev_stats + otx2_n_drv_stats + qstats_count + 1;
> >  }
> > diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_pf.c
> b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_pf.c
> > index a32f0cb89fc4..d0192f9089ee 100644
> > --- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_pf.c
> > +++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_pf.c
> > @@ -1387,6 +1387,9 @@ static void otx2_free_sq_res(struct otx2_nic *pf)
> >  	otx2_sq_free_sqbs(pf);
> >  	for (qidx = 0; qidx < otx2_get_total_tx_queues(pf); qidx++) {
> >  		sq = &qset->sq[qidx];
> > +		/* Skip freeing Qos queues if they are not initialized */
> > +		if (!sq->sqb_count)
> > +			continue;
> >  		qmem_free(pf->dev, sq->sqe);
> >  		qmem_free(pf->dev, sq->tso_hdrs);
> >  		kfree(sq->sg);
> > @@ -1518,8 +1521,7 @@ static int otx2_init_hw_resources(struct otx2_nic
> *pf)
> >  	otx2_free_cq_res(pf);
> >  	otx2_ctx_disable(mbox, NIX_AQ_CTYPE_RQ, false);
> >  err_free_txsch:
> > -	if (otx2_txschq_stop(pf))
> > -		dev_err(pf->dev, "%s failed to stop TX schedulers\n",
> __func__);
> > +	otx2_txschq_stop(pf);
> >  err_free_sq_ptrs:
> >  	otx2_sq_free_sqbs(pf);
> >  err_free_rq_ptrs:
> > @@ -1554,21 +1556,21 @@ static void otx2_free_hw_resources(struct
> otx2_nic *pf)
> >  	struct mbox *mbox = &pf->mbox;
> >  	struct otx2_cq_queue *cq;
> >  	struct msg_req *req;
> > -	int qidx, err;
> > +	int qidx;
> >
> >  	/* Ensure all SQE are processed */
> >  	otx2_sqb_flush(pf);
> >
> >  	/* Stop transmission */
> > -	err = otx2_txschq_stop(pf);
> > -	if (err)
> > -		dev_err(pf->dev, "RVUPF: Failed to stop/free TX
> schedulers\n");
> > +	otx2_txschq_stop(pf);
> >
> >  #ifdef CONFIG_DCB
> >  	if (pf->pfc_en)
> >  		otx2_pfc_txschq_stop(pf);
> >  #endif
> >
> > +	otx2_clean_qos_queues(pf);
> > +
> >  	mutex_lock(&mbox->lock);
> >  	/* Disable backpressure */
> >  	if (!(pf->pcifunc & RVU_PFVF_FUNC_MASK))
> > @@ -1836,6 +1838,9 @@ int otx2_open(struct net_device *netdev)
> >  	/* 'intf_down' may be checked on any cpu */
> >  	smp_wmb();
> >
> > +	/* Enable QoS configuration before starting tx queues */
> > +	otx2_qos_config_txschq(pf);
> > +
> >  	/* we have already received link status notification */
> >  	if (pf->linfo.link_up && !(pf->pcifunc & RVU_PFVF_FUNC_MASK))
> >  		otx2_handle_link_event(pf);
> > @@ -1980,14 +1985,45 @@ static netdev_tx_t otx2_xmit(struct sk_buff
> *skb, struct net_device *netdev)
> >  	return NETDEV_TX_OK;
> >  }
> >
> > +static int otx2_qos_select_htb_queue(struct otx2_nic *pf, struct sk_buff
> *skb,
> > +				     u16 htb_maj_id)
> > +{
> > +	u16 classid;
> > +
> > +	if ((TC_H_MAJ(skb->priority) >> 16) == htb_maj_id)
> > +		classid = TC_H_MIN(skb->priority);
> > +	else
> > +		classid = READ_ONCE(pf->qos.defcls);
> > +
> > +	if (!classid)
> > +		return 0;
> > +
> > +	return otx2_get_txq_by_classid(pf, classid);
> 
> This selects queues with numbers >= pf->hw.tx_queues, and otx2_xmit
> indexes pfvf->qset.sq with these qids, however, pfvf->qset.sq is
> allocated only up to pf->hw.non_qos_queues. Array out-of-bounds?
> 

We are supposed to allocate all SQs (non_qos_queues + tc_tx_queues).
Looks like we missed this change while refactoring the send queue code;
will update it in the next version.
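Roughly, the SQ array sizing would become something like the sketch
below (assuming otx2_get_total_tx_queues() covers both non_qos_queues
and tc_tx_queues):

	/* size the SQ array for regular, XDP and QoS send queues */
	pf->qset.sq = kcalloc(otx2_get_total_tx_queues(pf),
			      sizeof(struct otx2_snd_queue), GFP_KERNEL);
	if (!pf->qset.sq)
		return -ENOMEM;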
> > +}
> > +
> >  u16 otx2_select_queue(struct net_device *netdev, struct sk_buff *skb,
> >  		      struct net_device *sb_dev)
> >  {
> > -#ifdef CONFIG_DCB
> >  	struct otx2_nic *pf = netdev_priv(netdev);
> > +	bool qos_enabled;
> > +#ifdef CONFIG_DCB
> >  	u8 vlan_prio;
> >  #endif
> > +	int txq;
> >
> > +	qos_enabled = (netdev->real_num_tx_queues > pf-
> >hw.tx_queues) ? true : false;
> > +	if (unlikely(qos_enabled)) {
> > +		u16 htb_maj_id = smp_load_acquire(&pf->qos.maj_id); /*
> barrier */
> 
> Checkpatch requires to add comments for the barriers for a reason :)
> 
> "Barrier" is a useless comment, we all know that smp_load_acquire is a
> barrier, you should explain why this barrier is needed and which other
> barriers it pairs with.
> 
ACK, will update the comment with those details in the next version.
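Something like the sketch below is what I have in mind for the comment:

		/* Pairs with smp_store_release() in otx2_qos_root_add();
		 * seeing a non-zero maj_id here guarantees that the
		 * qos.defcls value written before the release is visible.
		 */
		u16 htb_maj_id = smp_load_acquire(&pf->qos.maj_id);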
> > +
> > +		if (unlikely(htb_maj_id)) {
> > +			txq = otx2_qos_select_htb_queue(pf, skb,
> htb_maj_id);
> > +			if (txq > 0)
> > +				return txq;
> > +			goto process_pfc;
> > +		}
> > +	}
> > +
> > +process_pfc:
> >  #ifdef CONFIG_DCB
> >  	if (!skb_vlan_tag_present(skb))
> >  		goto pick_tx;
> > @@ -2001,7 +2037,11 @@ u16 otx2_select_queue(struct net_device
> *netdev, struct sk_buff *skb,
> >
> >  pick_tx:
> >  #endif
> > -	return netdev_pick_tx(netdev, skb, NULL);
> > +	txq = netdev_pick_tx(netdev, skb, NULL);
> > +	if (unlikely(qos_enabled))
> > +		return txq % pf->hw.tx_queues;
> > +
> > +	return txq;
> >  }
> >  EXPORT_SYMBOL(otx2_select_queue);
> >
> > diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_reg.h
> b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_reg.h
> > index 1b967eaf948b..45a32e4b49d1 100644
> > --- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_reg.h
> > +++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_reg.h
> > @@ -145,12 +145,25 @@
> >  #define NIX_AF_TL1X_TOPOLOGY(a)		(0xC80 | (a) << 16)
> >  #define NIX_AF_TL2X_PARENT(a)		(0xE88 | (a) << 16)
> >  #define NIX_AF_TL2X_SCHEDULE(a)		(0xE00 | (a) << 16)
> > +#define NIX_AF_TL2X_TOPOLOGY(a)		(0xE80 | (a) << 16)
> > +#define NIX_AF_TL2X_CIR(a)              (0xE20 | (a) << 16)
> > +#define NIX_AF_TL2X_PIR(a)              (0xE30 | (a) << 16)
> >  #define NIX_AF_TL3X_PARENT(a)		(0x1088 | (a) << 16)
> >  #define NIX_AF_TL3X_SCHEDULE(a)		(0x1000 | (a) << 16)
> > +#define NIX_AF_TL3X_SHAPE(a)		(0x1010 | (a) << 16)
> > +#define NIX_AF_TL3X_CIR(a)		(0x1020 | (a) << 16)
> > +#define NIX_AF_TL3X_PIR(a)		(0x1030 | (a) << 16)
> > +#define NIX_AF_TL3X_TOPOLOGY(a)		(0x1080 | (a) << 16)
> >  #define NIX_AF_TL4X_PARENT(a)		(0x1288 | (a) << 16)
> >  #define NIX_AF_TL4X_SCHEDULE(a)		(0x1200 | (a) << 16)
> > +#define NIX_AF_TL4X_SHAPE(a)		(0x1210 | (a) << 16)
> > +#define NIX_AF_TL4X_CIR(a)		(0x1220 | (a) << 16)
> >  #define NIX_AF_TL4X_PIR(a)		(0x1230 | (a) << 16)
> > +#define NIX_AF_TL4X_TOPOLOGY(a)		(0x1280 | (a) << 16)
> >  #define NIX_AF_MDQX_SCHEDULE(a)		(0x1400 | (a) << 16)
> > +#define NIX_AF_MDQX_SHAPE(a)		(0x1410 | (a) << 16)
> > +#define NIX_AF_MDQX_CIR(a)		(0x1420 | (a) << 16)
> > +#define NIX_AF_MDQX_PIR(a)		(0x1430 | (a) << 16)
> >  #define NIX_AF_MDQX_PARENT(a)		(0x1480 | (a) << 16)
> >  #define NIX_AF_TL3_TL2X_LINKX_CFG(a, b)	(0x1700 | (a) << 16 | (b) << 3)
> >
> > diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_tc.c
> b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_tc.c
> > index 044cc211424e..42c49249f4e7 100644
> > --- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_tc.c
> > +++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_tc.c
> > @@ -19,6 +19,7 @@
> >
> >  #include "cn10k.h"
> >  #include "otx2_common.h"
> > +#include "qos.h"
> >
> >  /* Egress rate limiting definitions */
> >  #define MAX_BURST_EXPONENT		0x0FULL
> > @@ -147,8 +148,8 @@ static void otx2_get_egress_rate_cfg(u64 maxrate,
> u32 *exp,
> >  	}
> >  }
> >
> > -static u64 otx2_get_txschq_rate_regval(struct otx2_nic *nic,
> > -				       u64 maxrate, u32 burst)
> > +u64 otx2_get_txschq_rate_regval(struct otx2_nic *nic,
> > +				u64 maxrate, u32 burst)
> >  {
> >  	u32 burst_exp, burst_mantissa;
> >  	u32 exp, mantissa, div_exp;
> > @@ -1127,6 +1128,8 @@ int otx2_setup_tc(struct net_device *netdev,
> enum tc_setup_type type,
> >  	switch (type) {
> >  	case TC_SETUP_BLOCK:
> >  		return otx2_setup_tc_block(netdev, type_data);
> > +	case TC_SETUP_QDISC_HTB:
> > +		return otx2_setup_tc_htb(netdev, type_data);
> >  	default:
> >  		return -EOPNOTSUPP;
> >  	}
> > diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/qos.c
> b/drivers/net/ethernet/marvell/octeontx2/nic/qos.c
> > new file mode 100644
> > index 000000000000..22c5b6a2871a
> > --- /dev/null
> > +++ b/drivers/net/ethernet/marvell/octeontx2/nic/qos.c
> > @@ -0,0 +1,1460 @@
> > +// SPDX-License-Identifier: GPL-2.0
> > +/* Marvell RVU Ethernet driver
> > + *
> > + * Copyright (C) 2023 Marvell.
> > + *
> > + */
> > +#include <linux/netdevice.h>
> > +#include <linux/etherdevice.h>
> > +#include <linux/inetdevice.h>
> > +#include <linux/bitfield.h>
> > +
> > +#include "otx2_common.h"
> > +#include "cn10k.h"
> > +#include "qos.h"
> > +
> > +#define OTX2_QOS_QID_INNER		0xFFFFU
> > +#define OTX2_QOS_QID_NONE		0xFFFEU
> > +#define OTX2_QOS_ROOT_CLASSID		0xFFFFFFFF
> > +#define OTX2_QOS_CLASS_NONE		0
> > +#define OTX2_QOS_DEFAULT_PRIO		0xF
> > +#define OTX2_QOS_INVALID_SQ		0xFFFF
> > +
> > +/* Egress rate limiting definitions */
> > +#define MAX_BURST_EXPONENT		0x0FULL
> > +#define MAX_BURST_MANTISSA		0xFFULL
> > +#define MAX_BURST_SIZE			130816ULL
> > +#define MAX_RATE_DIVIDER_EXPONENT	12ULL
> > +#define MAX_RATE_EXPONENT		0x0FULL
> > +#define MAX_RATE_MANTISSA		0xFFULL
> > +
> > +/* Bitfields in NIX_TLX_PIR register */
> > +#define TLX_RATE_MANTISSA		GENMASK_ULL(8, 1)
> > +#define TLX_RATE_EXPONENT		GENMASK_ULL(12, 9)
> > +#define TLX_RATE_DIVIDER_EXPONENT	GENMASK_ULL(16, 13)
> > +#define TLX_BURST_MANTISSA		GENMASK_ULL(36, 29)
> > +#define TLX_BURST_EXPONENT		GENMASK_ULL(40, 37)
> > +
> > +static int otx2_qos_update_tx_netdev_queues(struct otx2_nic *pfvf)
> > +{
> > +	int tx_queues, qos_txqs, err;
> > +	struct otx2_hw *hw = &pfvf->hw;
> > +
> > +	qos_txqs = bitmap_weight(pfvf->qos.qos_sq_bmap,
> > +				 OTX2_QOS_MAX_LEAF_NODES);
> > +
> > +	tx_queues = hw->tx_queues + qos_txqs;
> > +
> > +	err = netif_set_real_num_tx_queues(pfvf->netdev, tx_queues);
> > +	if (err) {
> > +		netdev_err(pfvf->netdev,
> > +			   "Failed to set no of Tx queues: %d\n", tx_queues);
> > +		return err;
> > +	}
> > +
> > +	return 0;
> > +}
> > +
> > +static u64 otx2_qos_convert_rate(u64 rate)
> > +{
> > +	u64 converted_rate;
> > +
> > +	/* convert bytes per second to Mbps */
> > +	converted_rate = rate * 8;
> > +	converted_rate = max_t(u64, converted_rate / 1000000, 1);
> > +
> > +	return converted_rate;
> > +}
> > +
> > +static void __otx2_qos_txschq_cfg(struct otx2_nic *pfvf,
> > +				  struct otx2_qos_node *node,
> > +				  struct nix_txschq_config *cfg)
> > +{
> > +	struct otx2_hw *hw = &pfvf->hw;
> > +	int num_regs = 0;
> > +	u64 maxrate;
> > +	u8 level;
> > +
> > +	level = node->level;
> > +
> > +	/* program txschq registers */
> > +	if (level == NIX_TXSCH_LVL_SMQ) {
> > +		cfg->reg[num_regs] = NIX_AF_SMQX_CFG(node->schq);
> > +		cfg->regval[num_regs] = ((u64)pfvf->tx_max_pktlen << 8) |
> > +					OTX2_MIN_MTU;
> > +		cfg->regval[num_regs] |= (0x20ULL << 51) | (0x80ULL << 39)
> |
> > +					 (0x2ULL << 36);
> > +		num_regs++;
> > +
> > +		/* configure parent txschq */
> > +		cfg->reg[num_regs] = NIX_AF_MDQX_PARENT(node-
> >schq);
> > +		cfg->regval[num_regs] = node->parent->schq << 16;
> > +		num_regs++;
> > +
> > +		/* configure prio/quantum */
> > +		if (node->qid == OTX2_QOS_QID_NONE) {
> > +			cfg->reg[num_regs] =
> NIX_AF_MDQX_SCHEDULE(node->schq);
> > +			cfg->regval[num_regs] =  node->prio << 24 |
> > +						 mtu_to_dwrr_weight(pfvf,
> > +								    pfvf-
> >tx_max_pktlen);
> > +			num_regs++;
> > +			goto txschq_cfg_out;
> > +		}
> > +
> > +		/* configure prio */
> > +		cfg->reg[num_regs] = NIX_AF_MDQX_SCHEDULE(node-
> >schq);
> > +		cfg->regval[num_regs] = (node->schq -
> > +					 node->parent->prio_anchor) << 24;
> > +		num_regs++;
> > +
> > +		/* configure PIR */
> > +		maxrate = (node->rate > node->ceil) ? node->rate : node-
> >ceil;
> > +
> > +		cfg->reg[num_regs] = NIX_AF_MDQX_PIR(node->schq);
> > +		cfg->regval[num_regs] =
> > +			otx2_get_txschq_rate_regval(pfvf, maxrate, 65536);
> > +		num_regs++;
> > +
> > +		/* configure CIR */
> > +		if (!test_bit(QOS_CIR_PIR_SUPPORT, &pfvf->hw.cap_flag)) {
> > +			/* Don't configure CIR when both CIR+PIR not
> supported
> > +			 * On 96xx, CIR + PIR + RED_ALGO=STALL causes
> deadlock
> > +			 */
> > +			goto txschq_cfg_out;
> > +		}
> > +
> > +		cfg->reg[num_regs] = NIX_AF_MDQX_CIR(node->schq);
> > +		cfg->regval[num_regs] =
> > +			otx2_get_txschq_rate_regval(pfvf, node->rate,
> 65536);
> > +		num_regs++;
> > +	} else if (level == NIX_TXSCH_LVL_TL4) {
> > +		/* configure parent txschq */
> > +		cfg->reg[num_regs] = NIX_AF_TL4X_PARENT(node->schq);
> > +		cfg->regval[num_regs] = node->parent->schq << 16;
> > +		num_regs++;
> > +
> > +		/* return if not htb node */
> > +		if (node->qid == OTX2_QOS_QID_NONE) {
> > +			cfg->reg[num_regs] =
> NIX_AF_TL4X_SCHEDULE(node->schq);
> > +			cfg->regval[num_regs] =  node->prio << 24 |
> > +						 mtu_to_dwrr_weight(pfvf,
> > +								    pfvf-
> >tx_max_pktlen);
> > +			num_regs++;
> > +			goto txschq_cfg_out;
> > +		}
> > +
> > +		/* configure priority */
> > +		cfg->reg[num_regs] = NIX_AF_TL4X_SCHEDULE(node-
> >schq);
> > +		cfg->regval[num_regs] = (node->schq -
> > +					 node->parent->prio_anchor) << 24;
> > +		num_regs++;
> > +
> > +		/* configure PIR */
> > +		maxrate = (node->rate > node->ceil) ? node->rate : node-
> >ceil;
> > +		cfg->reg[num_regs] = NIX_AF_TL4X_PIR(node->schq);
> > +		cfg->regval[num_regs] =
> > +			otx2_get_txschq_rate_regval(pfvf, maxrate, 65536);
> > +		num_regs++;
> > +
> > +		/* configure CIR */
> > +		if (!test_bit(QOS_CIR_PIR_SUPPORT, &pfvf->hw.cap_flag)) {
> > +			/* Don't configure CIR when both CIR+PIR not supported
> > +			 * On 96xx, CIR + PIR + RED_ALGO=STALL causes deadlock
> > +			 */
> > +			goto txschq_cfg_out;
> > +		}
> > +
> > +		cfg->reg[num_regs] = NIX_AF_TL4X_CIR(node->schq);
> > +		cfg->regval[num_regs] =
> > +			otx2_get_txschq_rate_regval(pfvf, node->rate, 65536);
> > +		num_regs++;
> > +	} else if (level == NIX_TXSCH_LVL_TL3) {
> > +		/* configure parent txschq */
> > +		cfg->reg[num_regs] = NIX_AF_TL3X_PARENT(node->schq);
> > +		cfg->regval[num_regs] = node->parent->schq << 16;
> > +		num_regs++;
> > +
> > +		/* configure link cfg */
> > +		if (level == pfvf->qos.link_cfg_lvl) {
> > +			cfg->reg[num_regs] = NIX_AF_TL3_TL2X_LINKX_CFG(node->schq, hw->tx_link);
> > +			cfg->regval[num_regs] = BIT_ULL(13) | BIT_ULL(12);
> > +			num_regs++;
> > +		}
> > +
> > +		/* return if not htb node */
> > +		if (node->qid == OTX2_QOS_QID_NONE) {
> > +			cfg->reg[num_regs] = NIX_AF_TL3X_SCHEDULE(node->schq);
> > +			cfg->regval[num_regs] =  node->prio << 24 |
> > +						 mtu_to_dwrr_weight(pfvf,
> > +								    pfvf->tx_max_pktlen);
> > +			num_regs++;
> > +			goto txschq_cfg_out;
> > +		}
> > +
> > +		/* configure priority */
> > +		cfg->reg[num_regs] = NIX_AF_TL3X_SCHEDULE(node->schq);
> > +		cfg->regval[num_regs] = (node->schq -
> > +					 node->parent->prio_anchor) << 24;
> > +		num_regs++;
> > +
> > +		/* configure PIR */
> > +		maxrate = (node->rate > node->ceil) ? node->rate : node->ceil;
> > +		cfg->reg[num_regs] = NIX_AF_TL3X_PIR(node->schq);
> > +		cfg->regval[num_regs] =
> > +			otx2_get_txschq_rate_regval(pfvf, maxrate, 65536);
> > +		num_regs++;
> > +
> > +		/* configure CIR */
> > +		if (!test_bit(QOS_CIR_PIR_SUPPORT, &pfvf->hw.cap_flag)) {
> > +			/* Don't configure CIR when both CIR+PIR not supported
> > +			 * On 96xx, CIR + PIR + RED_ALGO=STALL causes deadlock
> > +			 */
> > +			goto txschq_cfg_out;
> > +		}
> > +
> > +		cfg->reg[num_regs] = NIX_AF_TL3X_CIR(node->schq);
> > +		cfg->regval[num_regs] =
> > +			otx2_get_txschq_rate_regval(pfvf, node->rate, 65536);
> > +		num_regs++;
> > +	} else if (level == NIX_TXSCH_LVL_TL2) {
> > +		/* configure parent txschq */
> > +		cfg->reg[num_regs] = NIX_AF_TL2X_PARENT(node->schq);
> > +		cfg->regval[num_regs] = hw->tx_link << 16;
> > +		num_regs++;
> > +
> > +		/* configure link cfg */
> > +		if (level == pfvf->qos.link_cfg_lvl) {
> > +			cfg->reg[num_regs] = NIX_AF_TL3_TL2X_LINKX_CFG(node->schq, hw->tx_link);
> > +			cfg->regval[num_regs] = BIT_ULL(13) | BIT_ULL(12);
> > +			num_regs++;
> > +		}
> > +
> > +		/* return if not htb node */
> > +		if (node->qid == OTX2_QOS_QID_NONE) {
> > +			cfg->reg[num_regs] = NIX_AF_TL2X_SCHEDULE(node->schq);
> > +			cfg->regval[num_regs] =  node->prio << 24 |
> > +						 mtu_to_dwrr_weight(pfvf,
> > +								    pfvf->tx_max_pktlen);
> > +			num_regs++;
> > +			goto txschq_cfg_out;
> > +		}
> > +
> > +		/* check if node is root */
> > +		if (node->qid == OTX2_QOS_QID_INNER && !node->parent) {
> > +			cfg->reg[num_regs] = NIX_AF_TL2X_SCHEDULE(node->schq);
> > +			cfg->regval[num_regs] =  TXSCH_TL1_DFLT_RR_PRIO << 24 |
> > +						 mtu_to_dwrr_weight(pfvf,
> > +								    pfvf->tx_max_pktlen);
> > +			num_regs++;
> > +			goto txschq_cfg_out;
> > +		}
> > +
> > +		/* configure priority/quantum */
> > +		cfg->reg[num_regs] = NIX_AF_TL2X_SCHEDULE(node->schq);
> > +		cfg->regval[num_regs] = (node->schq -
> > +					 node->parent->prio_anchor) << 24;
> > +		num_regs++;
> > +
> > +		/* configure PIR */
> > +		maxrate = (node->rate > node->ceil) ? node->rate : node->ceil;
> > +		cfg->reg[num_regs] = NIX_AF_TL2X_PIR(node->schq);
> > +		cfg->regval[num_regs] =
> > +			otx2_get_txschq_rate_regval(pfvf, maxrate, 65536);
> > +		num_regs++;
> > +
> > +		/* configure CIR */
> > +		if (!test_bit(QOS_CIR_PIR_SUPPORT, &pfvf->hw.cap_flag)) {
> > +			/* Don't configure CIR when both CIR+PIR not supported
> > +			 * On 96xx, CIR + PIR + RED_ALGO=STALL causes deadlock
> > +			 */
> > +			goto txschq_cfg_out;
> > +		}
> > +
> > +		cfg->reg[num_regs] = NIX_AF_TL2X_CIR(node->schq);
> > +		cfg->regval[num_regs] =
> > +			otx2_get_txschq_rate_regval(pfvf, node->rate, 65536);
> > +		num_regs++;
> > +	}
> > +
> > +txschq_cfg_out:
> > +	cfg->num_regs = num_regs;
> > +}
> > +
> > +static int otx2_qos_txschq_set_parent_topology(struct otx2_nic *pfvf,
> > +					       struct otx2_qos_node *parent)
> > +{
> > +	struct mbox *mbox = &pfvf->mbox;
> > +	struct nix_txschq_config *cfg;
> > +	int rc;
> > +
> > +	if (parent->level == NIX_TXSCH_LVL_MDQ)
> > +		return 0;
> > +
> > +	mutex_lock(&mbox->lock);
> > +
> > +	cfg = otx2_mbox_alloc_msg_nix_txschq_cfg(&pfvf->mbox);
> > +	if (!cfg) {
> > +		mutex_unlock(&mbox->lock);
> > +		return -ENOMEM;
> > +	}
> > +
> > +	cfg->lvl = parent->level;
> > +
> > +	if (parent->level == NIX_TXSCH_LVL_TL4)
> > +		cfg->reg[0] = NIX_AF_TL4X_TOPOLOGY(parent->schq);
> > +	else if (parent->level == NIX_TXSCH_LVL_TL3)
> > +		cfg->reg[0] = NIX_AF_TL3X_TOPOLOGY(parent->schq);
> > +	else if (parent->level == NIX_TXSCH_LVL_TL2)
> > +		cfg->reg[0] = NIX_AF_TL2X_TOPOLOGY(parent->schq);
> > +	else if (parent->level == NIX_TXSCH_LVL_TL1)
> > +		cfg->reg[0] = NIX_AF_TL1X_TOPOLOGY(parent->schq);
> > +
> > +	cfg->regval[0] = (u64)parent->prio_anchor << 32;
> > +	if (parent->level == NIX_TXSCH_LVL_TL1)
> > +		cfg->regval[0] |= (u64)TXSCH_TL1_DFLT_RR_PRIO << 1;
> > +
> > +	cfg->num_regs++;
> > +
> > +	rc = otx2_sync_mbox_msg(&pfvf->mbox);
> > +
> > +	mutex_unlock(&mbox->lock);
> > +
> > +	return rc;
> > +}
> > +
> > +static void otx2_qos_free_hw_node_schq(struct otx2_nic *pfvf,
> > +				       struct otx2_qos_node *parent)
> > +{
> > +	struct otx2_qos_node *node;
> > +
> > +	list_for_each_entry_reverse(node, &parent->child_schq_list, list)
> > +		otx2_txschq_free_one(pfvf, node->level, node->schq);
> > +}
> > +
> > +static void otx2_qos_free_hw_node(struct otx2_nic *pfvf,
> > +				  struct otx2_qos_node *parent)
> > +{
> > +	struct otx2_qos_node *node, *tmp;
> > +
> > +	list_for_each_entry_safe(node, tmp, &parent->child_list, list) {
> > +		otx2_qos_free_hw_node(pfvf, node);
> > +		otx2_qos_free_hw_node_schq(pfvf, node);
> > +		otx2_txschq_free_one(pfvf, node->level, node->schq);
> > +	}
> > +}
> > +
> > +static void otx2_qos_free_hw_cfg(struct otx2_nic *pfvf,
> > +				 struct otx2_qos_node *node)
> > +{
> > +	mutex_lock(&pfvf->qos.qos_lock);
> > +
> > +	/* free child node hw mappings */
> > +	otx2_qos_free_hw_node(pfvf, node);
> > +	otx2_qos_free_hw_node_schq(pfvf, node);
> > +
> > +	/* free node hw mappings */
> > +	otx2_txschq_free_one(pfvf, node->level, node->schq);
> > +
> > +	mutex_unlock(&pfvf->qos.qos_lock);
> > +}
> > +
> > +static void otx2_qos_sw_node_delete(struct otx2_nic *pfvf,
> > +				    struct otx2_qos_node *node)
> > +{
> > +	hash_del(&node->hlist);
> > +
> > +	if (node->qid != OTX2_QOS_QID_INNER && node->qid != OTX2_QOS_QID_NONE) {
> > +		__clear_bit(node->qid, pfvf->qos.qos_sq_bmap);
> > +		otx2_qos_update_tx_netdev_queues(pfvf);
> > +	}
> > +
> > +	list_del(&node->list);
> > +	kfree(node);
> > +}
> > +
> > +static void otx2_qos_free_sw_node_schq(struct otx2_nic *pfvf,
> > +				       struct otx2_qos_node *parent)
> > +{
> > +	struct otx2_qos_node *node, *tmp;
> > +
> > +	list_for_each_entry_safe(node, tmp, &parent->child_schq_list, list) {
> > +		list_del(&node->list);
> > +		kfree(node);
> > +	}
> > +}
> > +
> > +static void __otx2_qos_free_sw_node(struct otx2_nic *pfvf,
> > +				    struct otx2_qos_node *parent)
> > +{
> > +	struct otx2_qos_node *node, *tmp;
> > +
> > +	list_for_each_entry_safe(node, tmp, &parent->child_list, list) {
> > +		__otx2_qos_free_sw_node(pfvf, node);
> > +		otx2_qos_free_sw_node_schq(pfvf, node);
> > +		otx2_qos_sw_node_delete(pfvf, node);
> > +	}
> > +}
> > +
> > +static void otx2_qos_free_sw_node(struct otx2_nic *pfvf,
> > +				  struct otx2_qos_node *node)
> > +{
> > +	mutex_lock(&pfvf->qos.qos_lock);
> > +
> > +	__otx2_qos_free_sw_node(pfvf, node);
> > +	otx2_qos_free_sw_node_schq(pfvf, node);
> > +	otx2_qos_sw_node_delete(pfvf, node);
> > +
> > +	mutex_unlock(&pfvf->qos.qos_lock);
> > +}
> > +
> > +static void otx2_qos_destroy_node(struct otx2_nic *pfvf,
> > +				  struct otx2_qos_node *node)
> > +{
> > +	otx2_qos_free_hw_cfg(pfvf, node);
> > +	otx2_qos_free_sw_node(pfvf, node);
> > +}
> > +
> > +static void otx2_qos_fill_cfg_schq(struct otx2_qos_node *parent,
> > +				   struct otx2_qos_cfg *cfg)
> > +{
> > +	struct otx2_qos_node *node;
> > +
> > +	list_for_each_entry(node, &parent->child_schq_list, list)
> > +		cfg->schq[node->level]++;
> > +}
> > +
> > +static void otx2_qos_fill_cfg_tl(struct otx2_qos_node *parent,
> > +				 struct otx2_qos_cfg *cfg)
> > +{
> > +	struct otx2_qos_node *node;
> > +
> > +	list_for_each_entry(node, &parent->child_list, list) {
> > +		otx2_qos_fill_cfg_tl(node, cfg);
> > +		cfg->schq_contig[node->level]++;
> > +		otx2_qos_fill_cfg_schq(node, cfg);
> > +	}
> > +}
> > +
> > +static void otx2_qos_prepare_txschq_cfg(struct otx2_nic *pfvf,
> > +					struct otx2_qos_node *parent,
> > +					struct otx2_qos_cfg *cfg)
> > +{
> > +	mutex_lock(&pfvf->qos.qos_lock);
> > +	otx2_qos_fill_cfg_tl(parent, cfg);
> > +	mutex_unlock(&pfvf->qos.qos_lock);
> > +}
> > +
> > +static void otx2_qos_read_txschq_cfg_schq(struct otx2_qos_node *parent,
> > +					  struct otx2_qos_cfg *cfg)
> > +{
> > +	struct otx2_qos_node *node;
> > +	int cnt;
> > +
> > +	list_for_each_entry(node, &parent->child_schq_list, list) {
> > +		cnt = cfg->dwrr_node_pos[node->level];
> > +		cfg->schq_list[node->level][cnt] = node->schq;
> > +		cfg->schq[node->level]++;
> > +		cfg->dwrr_node_pos[node->level]++;
> > +	}
> > +}
> > +
> > +static void otx2_qos_read_txschq_cfg_tl(struct otx2_qos_node *parent,
> > +					struct otx2_qos_cfg *cfg)
> > +{
> > +	struct otx2_qos_node *node;
> > +	int cnt;
> > +
> > +	list_for_each_entry(node, &parent->child_list, list) {
> > +		otx2_qos_read_txschq_cfg_tl(node, cfg);
> > +		cnt = cfg->static_node_pos[node->level];
> > +		cfg->schq_contig_list[node->level][cnt] = node->schq;
> > +		cfg->schq_contig[node->level]++;
> > +		cfg->static_node_pos[node->level]++;
> > +		otx2_qos_read_txschq_cfg_schq(node, cfg);
> > +	}
> > +}
> > +
> > +static void otx2_qos_read_txschq_cfg(struct otx2_nic *pfvf,
> > +				     struct otx2_qos_node *node,
> > +				     struct otx2_qos_cfg *cfg)
> > +{
> > +	mutex_lock(&pfvf->qos.qos_lock);
> > +	otx2_qos_read_txschq_cfg_tl(node, cfg);
> > +	mutex_unlock(&pfvf->qos.qos_lock);
> > +}
> > +
> > +static struct otx2_qos_node *
> > +otx2_qos_alloc_root(struct otx2_nic *pfvf)
> > +{
> > +	struct otx2_qos_node *node;
> > +
> > +	node = kzalloc(sizeof(*node), GFP_KERNEL);
> > +	if (!node)
> > +		return ERR_PTR(-ENOMEM);
> > +
> > +	node->parent = NULL;
> > +	if (!is_otx2_vf(pfvf->pcifunc))
> > +		node->level = NIX_TXSCH_LVL_TL1;
> > +	else
> > +		node->level = NIX_TXSCH_LVL_TL2;
> > +
> > +	node->qid = OTX2_QOS_QID_INNER;
> > +	node->classid = OTX2_QOS_ROOT_CLASSID;
> > +
> > +	hash_add(pfvf->qos.qos_hlist, &node->hlist, node->classid);
> > +	list_add_tail(&node->list, &pfvf->qos.qos_tree);
> > +	INIT_LIST_HEAD(&node->child_list);
> > +	INIT_LIST_HEAD(&node->child_schq_list);
> > +
> > +	return node;
> > +}
> > +
> > +static int otx2_qos_add_child_node(struct otx2_qos_node *parent,
> > +				   struct otx2_qos_node *node)
> > +{
> > +	struct list_head *head = &parent->child_list;
> > +	struct otx2_qos_node *tmp_node;
> > +	struct list_head *tmp;
> > +
> > +	for (tmp = head->next; tmp != head; tmp = tmp->next) {
> > +		tmp_node = list_entry(tmp, struct otx2_qos_node, list);
> > +		if (tmp_node->prio == node->prio)
> > +			return -EEXIST;
> > +		if (tmp_node->prio > node->prio) {
> > +			list_add_tail(&node->list, tmp);
> > +			return 0;
> > +		}
> > +	}
> > +
> > +	list_add_tail(&node->list, head);
> > +	return 0;
> > +}
> > +
> > +static int otx2_qos_alloc_txschq_node(struct otx2_nic *pfvf,
> > +				      struct otx2_qos_node *node)
> > +{
> > +	struct otx2_qos_node *txschq_node, *parent, *tmp;
> > +	int lvl;
> > +
> > +	parent = node;
> > +	for (lvl = node->level - 1; lvl >= NIX_TXSCH_LVL_MDQ; lvl--) {
> > +		txschq_node = kzalloc(sizeof(*txschq_node), GFP_KERNEL);
> > +		if (!txschq_node)
> > +			goto err_out;
> > +
> > +		txschq_node->parent = parent;
> > +		txschq_node->level = lvl;
> > +		txschq_node->classid = OTX2_QOS_CLASS_NONE;
> > +		txschq_node->qid = OTX2_QOS_QID_NONE;
> > +		txschq_node->rate = 0;
> > +		txschq_node->ceil = 0;
> > +		txschq_node->prio = 0;
> > +
> > +		mutex_lock(&pfvf->qos.qos_lock);
> > +		list_add_tail(&txschq_node->list, &node->child_schq_list);
> > +		mutex_unlock(&pfvf->qos.qos_lock);
> > +
> > +		INIT_LIST_HEAD(&txschq_node->child_list);
> > +		INIT_LIST_HEAD(&txschq_node->child_schq_list);
> > +		parent = txschq_node;
> > +	}
> > +
> > +	return 0;
> > +
> > +err_out:
> > +	list_for_each_entry_safe(txschq_node, tmp, &node->child_schq_list,
> > +				 list) {
> > +		list_del(&txschq_node->list);
> > +		kfree(txschq_node);
> > +	}
> > +	return -ENOMEM;
> > +}
> > +
> > +static struct otx2_qos_node *
> > +otx2_qos_sw_create_leaf_node(struct otx2_nic *pfvf,
> > +			     struct otx2_qos_node *parent,
> > +			     u16 classid, u32 prio, u64 rate, u64 ceil,
> > +			     u16 qid)
> > +{
> > +	struct otx2_qos_node *node;
> > +	int err;
> > +
> > +	node = kzalloc(sizeof(*node), GFP_KERNEL);
> > +	if (!node)
> > +		return ERR_PTR(-ENOMEM);
> > +
> > +	node->parent = parent;
> > +	node->level = parent->level - 1;
> > +	node->classid = classid;
> > +	node->qid = qid;
> > +	node->rate = otx2_qos_convert_rate(rate);
> > +	node->ceil = otx2_qos_convert_rate(ceil);
> > +	node->prio = prio;
> > +
> > +	__set_bit(qid, pfvf->qos.qos_sq_bmap);
> > +
> > +	hash_add(pfvf->qos.qos_hlist, &node->hlist, classid);
> > +
> > +	mutex_lock(&pfvf->qos.qos_lock);
> > +	err = otx2_qos_add_child_node(parent, node);
> > +	if (err) {
> > +		mutex_unlock(&pfvf->qos.qos_lock);
> > +		return ERR_PTR(err);
> > +	}
> > +	mutex_unlock(&pfvf->qos.qos_lock);
> > +
> > +	INIT_LIST_HEAD(&node->child_list);
> > +	INIT_LIST_HEAD(&node->child_schq_list);
> 
> Looks suspicious that some fields of node are initialized after
> otx2_qos_add_child_node is called.

" otx2_qos_add_child_node" tries to rearrange the node in linked list based on node priority.
Here on success case we are calling INIT_LIST_HEAD. 
Will move this logic inside the function.
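
Untested sketch, based on the quoted code; the only change is that the two INIT_LIST_HEAD calls move from the caller into the helper, so the node is fully set up before it is linked into the parent's child_list:

static int otx2_qos_add_child_node(struct otx2_qos_node *parent,
				   struct otx2_qos_node *node)
{
	struct list_head *head = &parent->child_list;
	struct otx2_qos_node *tmp_node;
	struct list_head *tmp;

	/* initialize the node before it becomes reachable via the parent */
	INIT_LIST_HEAD(&node->child_list);
	INIT_LIST_HEAD(&node->child_schq_list);

	/* keep the children ordered by priority */
	for (tmp = head->next; tmp != head; tmp = tmp->next) {
		tmp_node = list_entry(tmp, struct otx2_qos_node, list);
		if (tmp_node->prio == node->prio)
			return -EEXIST;
		if (tmp_node->prio > node->prio) {
			list_add_tail(&node->list, tmp);
			return 0;
		}
	}

	list_add_tail(&node->list, head);
	return 0;
}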
> 
> > +
> > +	err = otx2_qos_alloc_txschq_node(pfvf, node);
> > +	if (err) {
> > +		otx2_qos_sw_node_delete(pfvf, node);
> > +		return ERR_PTR(-ENOMEM);
> > +	}
> > +
> > +	return node;
> > +}
> > +
> > +static struct otx2_qos_node *
> > +otx2_sw_node_find(struct otx2_nic *pfvf, u32 classid)
> > +{
> > +	struct otx2_qos_node *node = NULL;
> > +
> > +	hash_for_each_possible(pfvf->qos.qos_hlist, node, hlist, classid) {
> 
> This loop may be called from ndo_select_queue, while another thread may
> modify qos_hlist. We use RCU in mlx5e to protect this structure. What
> protects it in your driver?
> 
We did not come across this case, as we normally won't delete classes on the fly.
Will test the scenario and update the patch.
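
If it turns out to be needed, something along these lines could work (untested sketch, mirroring the mlx5e approach: nodes added with hash_add_rcu() and removed with hash_del_rcu() on the control path, freed with kfree_rcu(), and the datapath lookup done under rcu_read_lock(); otx2_sw_node_find_rcu is only an illustrative name):

static struct otx2_qos_node *
otx2_sw_node_find_rcu(struct otx2_nic *pfvf, u32 classid)
{
	struct otx2_qos_node *node;

	/* safe against concurrent hash_del_rcu() from the control path */
	hash_for_each_possible_rcu(pfvf->qos.qos_hlist, node, hlist, classid) {
		if (node->classid == classid)
			return node;
	}

	return NULL;
}

int otx2_get_txq_by_classid(struct otx2_nic *pfvf, u16 classid)
{
	struct otx2_qos_node *node;
	int res = -ENOENT;
	u16 qid;

	rcu_read_lock();
	node = otx2_sw_node_find_rcu(pfvf, classid);
	if (node) {
		qid = READ_ONCE(node->qid);
		res = (qid == OTX2_QOS_QID_INNER) ? -EINVAL :
		      pfvf->hw.tx_queues + qid;
	}
	rcu_read_unlock();

	return res;
}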
> > +		if (node->classid == classid)
> > +			break;
> > +	}
> > +
> > +	return node;
> > +}
> > +
> > +int otx2_get_txq_by_classid(struct otx2_nic *pfvf, u16 classid)
> > +{
> > +	struct otx2_qos_node *node;
> > +	u16 qid;
> > +	int res;
> > +
> > +	node = otx2_sw_node_find(pfvf, classid);
> > +	if (IS_ERR(node)) {
> > +		res = -ENOENT;
> > +		goto out;
> > +	}
> > +	qid = READ_ONCE(node->qid);
> > +	if (qid == OTX2_QOS_QID_INNER) {
> > +		res = -EINVAL;
> > +		goto out;
> > +	}
> > +	res = pfvf->hw.tx_queues + qid;
> > +out:
> > +	return res;
> > +}
> > +
> > +static int
> > +otx2_qos_txschq_config(struct otx2_nic *pfvf, struct otx2_qos_node *node)
> > +{
> > +	struct mbox *mbox = &pfvf->mbox;
> > +	struct nix_txschq_config *req;
> > +	int rc;
> > +
> > +	mutex_lock(&mbox->lock);
> > +
> > +	req = otx2_mbox_alloc_msg_nix_txschq_cfg(&pfvf->mbox);
> > +	if (!req) {
> > +		mutex_unlock(&mbox->lock);
> > +		return -ENOMEM;
> > +	}
> > +
> > +	req->lvl = node->level;
> > +	__otx2_qos_txschq_cfg(pfvf, node, req);
> > +
> > +	rc = otx2_sync_mbox_msg(&pfvf->mbox);
> > +
> > +	mutex_unlock(&mbox->lock);
> > +
> > +	return rc;
> > +}
> > +
> > +static int otx2_qos_txschq_alloc(struct otx2_nic *pfvf,
> > +				 struct otx2_qos_cfg *cfg)
> > +{
> > +	struct nix_txsch_alloc_req *req;
> > +	struct nix_txsch_alloc_rsp *rsp;
> > +	struct mbox *mbox = &pfvf->mbox;
> > +	int lvl, rc, schq;
> > +
> > +	mutex_lock(&mbox->lock);
> > +	req = otx2_mbox_alloc_msg_nix_txsch_alloc(&pfvf->mbox);
> > +	if (!req) {
> > +		mutex_unlock(&mbox->lock);
> > +		return -ENOMEM;
> > +	}
> > +
> > +	for (lvl = 0; lvl < NIX_TXSCH_LVL_CNT; lvl++) {
> > +		req->schq[lvl] = cfg->schq[lvl];
> > +		req->schq_contig[lvl] = cfg->schq_contig[lvl];
> > +	}
> > +
> > +	rc = otx2_sync_mbox_msg(&pfvf->mbox);
> > +	if (rc) {
> > +		mutex_unlock(&mbox->lock);
> > +		return rc;
> > +	}
> > +
> > +	rsp = (struct nix_txsch_alloc_rsp *)
> > +	      otx2_mbox_get_rsp(&pfvf->mbox.mbox, 0, &req->hdr);
> > +
> > +	for (lvl = 0; lvl < NIX_TXSCH_LVL_CNT; lvl++) {
> > +		for (schq = 0; schq < rsp->schq_contig[lvl]; schq++) {
> > +			cfg->schq_contig_list[lvl][schq] =
> > +				rsp->schq_contig_list[lvl][schq];
> > +		}
> > +	}
> > +
> > +	for (lvl = 0; lvl < NIX_TXSCH_LVL_CNT; lvl++) {
> > +		for (schq = 0; schq < rsp->schq[lvl]; schq++) {
> > +			cfg->schq_list[lvl][schq] =
> > +				rsp->schq_list[lvl][schq];
> > +		}
> > +	}
> > +
> > +	pfvf->qos.link_cfg_lvl = rsp->link_cfg_lvl;
> > +
> > +	mutex_unlock(&mbox->lock);
> > +
> > +	return rc;
> > +}
> > +
> > +static void otx2_qos_txschq_fill_cfg_schq(struct otx2_nic *pfvf,
> > +					  struct otx2_qos_node *node,
> > +					  struct otx2_qos_cfg *cfg)
> > +{
> > +	struct otx2_qos_node *tmp;
> > +	int cnt;
> > +
> > +	list_for_each_entry(tmp, &node->child_schq_list, list) {
> > +		cnt = cfg->dwrr_node_pos[tmp->level];
> > +		tmp->schq = cfg->schq_list[tmp->level][cnt];
> > +		cfg->dwrr_node_pos[tmp->level]++;
> > +	}
> > +}
> > +
> > +static void otx2_qos_txschq_fill_cfg_tl(struct otx2_nic *pfvf,
> > +					struct otx2_qos_node *node,
> > +					struct otx2_qos_cfg *cfg)
> > +{
> > +	struct otx2_qos_node *tmp;
> > +	int cnt;
> > +
> > +	list_for_each_entry(tmp, &node->child_list, list) {
> > +		otx2_qos_txschq_fill_cfg_tl(pfvf, tmp, cfg);
> > +		cnt = cfg->static_node_pos[tmp->level];
> > +		tmp->schq = cfg->schq_contig_list[tmp->level][cnt];
> > +		if (cnt == 0)
> > +			node->prio_anchor = tmp->schq;
> > +		cfg->static_node_pos[tmp->level]++;
> > +		otx2_qos_txschq_fill_cfg_schq(pfvf, tmp, cfg);
> > +	}
> > +}
> > +
> > +static void otx2_qos_txschq_fill_cfg(struct otx2_nic *pfvf,
> > +				     struct otx2_qos_node *node,
> > +				     struct otx2_qos_cfg *cfg)
> > +{
> > +	mutex_lock(&pfvf->qos.qos_lock);
> > +	otx2_qos_txschq_fill_cfg_tl(pfvf, node, cfg);
> > +	otx2_qos_txschq_fill_cfg_schq(pfvf, node, cfg);
> > +	mutex_unlock(&pfvf->qos.qos_lock);
> > +}
> > +
> > +static int otx2_qos_txschq_push_cfg_schq(struct otx2_nic *pfvf,
> > +					 struct otx2_qos_node *node,
> > +					 struct otx2_qos_cfg *cfg)
> > +{
> > +	struct otx2_qos_node *tmp;
> > +	int ret = 0;
> > +
> > +	list_for_each_entry(tmp, &node->child_schq_list, list) {
> > +		ret = otx2_qos_txschq_config(pfvf, tmp);
> > +		if (ret)
> > +			return -EIO;
> > +		ret = otx2_qos_txschq_set_parent_topology(pfvf, tmp->parent);
> > +		if (ret)
> > +			return -EIO;
> > +	}
> > +
> > +	return 0;
> > +}
> > +
> > +static int otx2_qos_txschq_push_cfg_tl(struct otx2_nic *pfvf,
> > +				       struct otx2_qos_node *node,
> > +				       struct otx2_qos_cfg *cfg)
> > +{
> > +	struct otx2_qos_node *tmp;
> > +	int ret;
> > +
> > +	list_for_each_entry(tmp, &node->child_list, list) {
> > +		ret = otx2_qos_txschq_push_cfg_tl(pfvf, tmp, cfg);
> > +		if (ret)
> > +			return -EIO;
> > +		ret = otx2_qos_txschq_config(pfvf, tmp);
> > +		if (ret)
> > +			return -EIO;
> > +		ret = otx2_qos_txschq_push_cfg_schq(pfvf, tmp, cfg);
> > +		if (ret)
> > +			return -EIO;
> > +	}
> > +
> > +	ret = otx2_qos_txschq_set_parent_topology(pfvf, node);
> > +	if (ret)
> > +		return -EIO;
> > +
> > +	return 0;
> > +}
> > +
> > +static int otx2_qos_txschq_push_cfg(struct otx2_nic *pfvf,
> > +				    struct otx2_qos_node *node,
> > +				    struct otx2_qos_cfg *cfg)
> > +{
> > +	int ret;
> > +
> > +	mutex_lock(&pfvf->qos.qos_lock);
> > +	ret = otx2_qos_txschq_push_cfg_tl(pfvf, node, cfg);
> > +	if (ret)
> > +		goto out;
> > +	ret = otx2_qos_txschq_push_cfg_schq(pfvf, node, cfg);
> > +out:
> > +	mutex_unlock(&pfvf->qos.qos_lock);
> > +	return ret;
> > +}
> > +
> > +static int otx2_qos_txschq_update_config(struct otx2_nic *pfvf,
> > +					 struct otx2_qos_node *node,
> > +					 struct otx2_qos_cfg *cfg)
> > +{
> > +	otx2_qos_txschq_fill_cfg(pfvf, node, cfg);
> > +
> > +	return otx2_qos_txschq_push_cfg(pfvf, node, cfg);
> > +}
> > +
> > +static int otx2_qos_txschq_update_root_cfg(struct otx2_nic *pfvf,
> > +					   struct otx2_qos_node *root,
> > +					   struct otx2_qos_cfg *cfg)
> > +{
> > +	root->schq = cfg->schq_list[root->level][0];
> > +	return otx2_qos_txschq_config(pfvf, root);
> > +}
> > +
> > +static void otx2_qos_free_cfg(struct otx2_nic *pfvf, struct otx2_qos_cfg *cfg)
> > +{
> > +	int lvl, idx, schq;
> > +
> > +	for (lvl = 0; lvl < NIX_TXSCH_LVL_CNT; lvl++) {
> > +		for (idx = 0; idx < cfg->schq[lvl]; idx++) {
> > +			schq = cfg->schq_list[lvl][idx];
> > +			otx2_txschq_free_one(pfvf, lvl, schq);
> > +		}
> > +	}
> > +
> > +	for (lvl = 0; lvl < NIX_TXSCH_LVL_CNT; lvl++) {
> > +		for (idx = 0; idx < cfg->schq_contig[lvl]; idx++) {
> > +			schq = cfg->schq_contig_list[lvl][idx];
> > +			otx2_txschq_free_one(pfvf, lvl, schq);
> > +		}
> > +	}
> > +}
> > +
> > +static void otx2_qos_enadis_sq(struct otx2_nic *pfvf,
> > +			       struct otx2_qos_node *node,
> > +			       u16 qid)
> > +{
> > +	if (pfvf->qos.qid_to_sqmap[qid] != OTX2_QOS_INVALID_SQ)
> > +		otx2_qos_disable_sq(pfvf, qid);
> > +
> > +	pfvf->qos.qid_to_sqmap[qid] = node->schq;
> > +	otx2_qos_enable_sq(pfvf, qid);
> > +}
> > +
> > +static void otx2_qos_update_smq_schq(struct otx2_nic *pfvf,
> > +				     struct otx2_qos_node *node,
> > +				     bool action)
> > +{
> > +	struct otx2_qos_node *tmp;
> > +
> > +	if (node->qid == OTX2_QOS_QID_INNER)
> > +		return;
> > +
> > +	list_for_each_entry(tmp, &node->child_schq_list, list) {
> > +		if (tmp->level == NIX_TXSCH_LVL_MDQ) {
> > +			if (action == QOS_SMQ_FLUSH)
> > +				otx2_smq_flush(pfvf, tmp->schq);
> > +			else
> > +				otx2_qos_enadis_sq(pfvf, tmp, node->qid);
> > +		}
> > +	}
> > +}
> > +
> > +static void __otx2_qos_update_smq(struct otx2_nic *pfvf,
> > +				  struct otx2_qos_node *node,
> > +				  bool action)
> > +{
> > +	struct otx2_qos_node *tmp;
> > +
> > +	list_for_each_entry(tmp, &node->child_list, list) {
> > +		__otx2_qos_update_smq(pfvf, tmp, action);
> > +		if (tmp->qid == OTX2_QOS_QID_INNER)
> > +			continue;
> > +		if (tmp->level == NIX_TXSCH_LVL_MDQ) {
> > +			if (action == QOS_SMQ_FLUSH)
> > +				otx2_smq_flush(pfvf, tmp->schq);
> > +			else
> > +				otx2_qos_enadis_sq(pfvf, tmp, tmp->qid);
> > +		} else {
> > +			otx2_qos_update_smq_schq(pfvf, tmp, action);
> > +		}
> > +	}
> > +}
> > +
> > +static void otx2_qos_update_smq(struct otx2_nic *pfvf,
> > +				struct otx2_qos_node *node,
> > +				bool action)
> > +{
> > +	mutex_lock(&pfvf->qos.qos_lock);
> > +	__otx2_qos_update_smq(pfvf, node, action);
> > +	otx2_qos_update_smq_schq(pfvf, node, action);
> > +	mutex_unlock(&pfvf->qos.qos_lock);
> > +}
> > +
> > +static int otx2_qos_push_txschq_cfg(struct otx2_nic *pfvf,
> > +				    struct otx2_qos_node *node,
> > +				    struct otx2_qos_cfg *cfg)
> > +{
> > +	int ret = 0;
> > +
> > +	ret = otx2_qos_txschq_alloc(pfvf, cfg);
> > +	if (ret)
> > +		return -ENOSPC;
> > +
> > +	if (!(pfvf->netdev->flags & IFF_UP)) {
> > +		otx2_qos_txschq_fill_cfg(pfvf, node, cfg);
> > +		return 0;
> > +	}
> > +
> > +	ret = otx2_qos_txschq_update_config(pfvf, node, cfg);
> > +	if (ret) {
> > +		otx2_qos_free_cfg(pfvf, cfg);
> > +		return -EIO;
> > +	}
> > +
> > +	otx2_qos_update_smq(pfvf, node, QOS_CFG_SQ);
> > +
> > +	return 0;
> > +}
> > +
> > +static int otx2_qos_update_tree(struct otx2_nic *pfvf,
> > +				struct otx2_qos_node *node,
> > +				struct otx2_qos_cfg *cfg)
> > +{
> > +	otx2_qos_prepare_txschq_cfg(pfvf, node->parent, cfg);
> > +	return otx2_qos_push_txschq_cfg(pfvf, node->parent, cfg);
> > +}
> > +
> > +static int otx2_qos_root_add(struct otx2_nic *pfvf, u16 htb_maj_id, u16 htb_defcls,
> > +			     struct netlink_ext_ack *extack)
> > +{
> > +	struct otx2_qos_cfg *new_cfg;
> > +	struct otx2_qos_node *root;
> > +	int err;
> > +
> > +	netdev_dbg(pfvf->netdev,
> > +		   "TC_HTB_CREATE: handle=0x%x defcls=0x%x\n",
> > +		   htb_maj_id, htb_defcls);
> > +
> > +	INIT_LIST_HEAD(&pfvf->qos.qos_tree);
> > +	mutex_init(&pfvf->qos.qos_lock);
> > +
> > +	root = otx2_qos_alloc_root(pfvf);
> > +	if (IS_ERR(root)) {
> > +		mutex_destroy(&pfvf->qos.qos_lock);
> > +		err = PTR_ERR(root);
> > +		return err;
> > +	}
> > +
> > +	/* allocate txschq queue */
> > +	new_cfg = kzalloc(sizeof(*new_cfg), GFP_KERNEL);
> > +	if (!new_cfg) {
> > +		NL_SET_ERR_MSG_MOD(extack, "Memory allocation error");
> > +		mutex_destroy(&pfvf->qos.qos_lock);
> > +		return -ENOMEM;
> > +	}
> > +	/* allocate htb root node */
> > +	new_cfg->schq[root->level] = 1;
> > +	err = otx2_qos_txschq_alloc(pfvf, new_cfg);
> > +	if (err) {
> > +		NL_SET_ERR_MSG_MOD(extack, "Error allocating txschq");
> > +		goto free_root_node;
> > +	}
> > +
> > +	if (!(pfvf->netdev->flags & IFF_UP) ||
> > +	    root->level == NIX_TXSCH_LVL_TL1) {
> > +		root->schq = new_cfg->schq_list[root->level][0];
> > +		goto out;
> > +	}
> > +
> > +	/* update the txschq configuration in hw */
> > +	err = otx2_qos_txschq_update_root_cfg(pfvf, root, new_cfg);
> > +	if (err) {
> > +		NL_SET_ERR_MSG_MOD(extack,
> > +				   "Error updating txschq configuration");
> > +		goto txschq_free;
> > +	}
> > +
> > +out:
> > +	WRITE_ONCE(pfvf->qos.defcls, htb_defcls);
> > +	smp_store_release(&pfvf->qos.maj_id, htb_maj_id); /* barrier */
> > +	kfree(new_cfg);
> > +	return 0;
> > +
> > +txschq_free:
> > +	otx2_qos_free_cfg(pfvf, new_cfg);
> > +free_root_node:
> > +	kfree(new_cfg);
> > +	otx2_qos_sw_node_delete(pfvf, root);
> > +	mutex_destroy(&pfvf->qos.qos_lock);
> > +	return err;
> > +}
> > +
> > +static int otx2_qos_root_destroy(struct otx2_nic *pfvf)
> > +{
> > +	struct otx2_qos_node *root;
> > +
> > +	netdev_dbg(pfvf->netdev, "TC_HTB_DESTROY\n");
> > +
> > +	/* find root node */
> > +	root = otx2_sw_node_find(pfvf, OTX2_QOS_ROOT_CLASSID);
> > +	if (IS_ERR(root))
> > +		return -ENOENT;
> > +
> > +	/* free the hw mappings */
> > +	otx2_qos_destroy_node(pfvf, root);
> > +	mutex_destroy(&pfvf->qos.qos_lock);
> > +
> > +	return 0;
> > +}
> > +
> > +static int otx2_qos_validate_configuration(struct otx2_qos_node *parent,
> > +					   struct netlink_ext_ack *extack,
> > +					   struct otx2_nic *pfvf,
> > +					   u64 prio)
> > +{
> > +	if (test_bit(prio, parent->prio_bmap)) {
> > +		NL_SET_ERR_MSG_MOD(extack,
> > +				   "Static priority child with same priority exists");
> > +		return -EEXIST;
> > +	}
> > +
> > +	if (prio == TXSCH_TL1_DFLT_RR_PRIO) {
> > +		NL_SET_ERR_MSG_MOD(extack,
> > +				   "Priority is reserved for Round Robin");
> > +		return -EINVAL;
> > +	}
> > +
> > +	return 0;
> > +}
> > +
> > +static int otx2_qos_leaf_alloc_queue(struct otx2_nic *pfvf, u16 classid,
> > +				     u32 parent_classid, u64 rate, u64 ceil,
> > +				     u64 prio, struct netlink_ext_ack *extack)
> > +{
> > +	struct otx2_qos_cfg *old_cfg, *new_cfg;
> > +	struct otx2_qos_node *node, *parent;
> > +	int qid, ret, err;
> > +
> > +	netdev_dbg(pfvf->netdev,
> > +		   "TC_HTB_LEAF_ALLOC_QUEUE: classid=0x%x parent_classid=0x%x rate=%lld ceil=%lld prio=%lld\n",
> > +		   classid, parent_classid, rate, ceil, prio);
> > +
> > +	if (prio > OTX2_QOS_MAX_PRIO) {
> > +		NL_SET_ERR_MSG_MOD(extack, "Valid priority range 0 to 7");
> > +		ret = -EOPNOTSUPP;
> > +		goto out;
> > +	}
> > +
> > +	/* get parent node */
> > +	parent = otx2_sw_node_find(pfvf, parent_classid);
> > +	if (IS_ERR(parent)) {
> > +		NL_SET_ERR_MSG_MOD(extack, "parent node not found");
> > +		ret = -ENOENT;
> > +		goto out;
> > +	}
> > +	if (parent->level == NIX_TXSCH_LVL_MDQ) {
> > +		NL_SET_ERR_MSG_MOD(extack, "HTB qos max levels reached");
> > +		ret = -EOPNOTSUPP;
> > +		goto out;
> > +	}
> > +
> > +	ret = otx2_qos_validate_configuration(parent, extack, pfvf, prio);
> > +	if (ret)
> > +		goto out;
> > +
> > +	set_bit(prio, parent->prio_bmap);
> > +
> > +	/* read current txschq configuration */
> > +	old_cfg = kzalloc(sizeof(*old_cfg), GFP_KERNEL);
> > +	if (!old_cfg) {
> > +		NL_SET_ERR_MSG_MOD(extack, "Memory allocation error");
> > +		ret = -ENOMEM;
> > +		goto out;
> > +	}
> > +	otx2_qos_read_txschq_cfg(pfvf, parent, old_cfg);
> > +
> > +	/* allocate a new sq */
> > +	qid = otx2_qos_get_qid(pfvf);
> > +	if (qid < 0) {
> > +		NL_SET_ERR_MSG_MOD(extack, "Reached max supported QOS SQ's");
> > +		ret = -ENOMEM;
> > +		goto free_old_cfg;
> > +	}
> > +
> > +	/* Actual SQ mapping will be updated after SMQ alloc */
> > +	pfvf->qos.qid_to_sqmap[qid] = OTX2_QOS_INVALID_SQ;
> > +
> > +	/* allocate and initialize a new child node */
> > +	node = otx2_qos_sw_create_leaf_node(pfvf, parent, classid, prio, rate,
> > +					    ceil, qid);
> > +	if (IS_ERR(node)) {
> > +		NL_SET_ERR_MSG_MOD(extack, "Unable to allocate leaf node");
> > +		ret = PTR_ERR(node);
> > +		goto free_old_cfg;
> > +	}
> > +
> > +	/* push new txschq config to hw */
> > +	new_cfg = kzalloc(sizeof(*new_cfg), GFP_KERNEL);
> > +	if (!new_cfg) {
> > +		NL_SET_ERR_MSG_MOD(extack, "Memory allocation error");
> > +		ret = -ENOMEM;
> > +		goto free_node;
> > +	}
> > +	ret = otx2_qos_update_tree(pfvf, node, new_cfg);
> > +	if (ret) {
> > +		NL_SET_ERR_MSG_MOD(extack, "HTB HW configuration error");
> > +		kfree(new_cfg);
> > +		otx2_qos_sw_node_delete(pfvf, node);
> > +		/* restore the old qos tree */
> > +		err = otx2_qos_txschq_update_config(pfvf, parent, old_cfg);
> > +		if (err) {
> > +			netdev_err(pfvf->netdev,
> > +				   "Failed to restore txschq configuration");
> > +			goto free_old_cfg;
> > +		}
> > +
> > +		otx2_qos_update_smq(pfvf, parent, QOS_CFG_SQ);
> > +		goto free_old_cfg;
> > +	}
> > +
> > +	/* update tx_real_queues */
> > +	otx2_qos_update_tx_netdev_queues(pfvf);
> > +
> > +	/* free new txschq config */
> > +	kfree(new_cfg);
> > +
> > +	/* free old txschq config */
> > +	otx2_qos_free_cfg(pfvf, old_cfg);
> > +	kfree(old_cfg);
> > +
> > +	return pfvf->hw.tx_queues + qid;
> > +
> > +free_node:
> > +	otx2_qos_sw_node_delete(pfvf, node);
> > +free_old_cfg:
> > +	kfree(old_cfg);
> > +out:
> > +	return ret;
> > +}
> > +
> > +static int otx2_qos_leaf_to_inner(struct otx2_nic *pfvf, u16 classid,
> > +				  u16 child_classid, u64 rate, u64 ceil, u64 prio,
> > +				  struct netlink_ext_ack *extack)
> > +{
> > +	struct otx2_qos_cfg *old_cfg, *new_cfg;
> > +	struct otx2_qos_node *node, *child;
> > +	int ret, err;
> > +	u16 qid;
> > +
> > +	netdev_dbg(pfvf->netdev,
> > +		   "TC_HTB_LEAF_TO_INNER classid %04x, child %04x, rate %llu, ceil %llu\n",
> > +		   classid, child_classid, rate, ceil);
> > +
> > +	if (prio > OTX2_QOS_MAX_PRIO) {
> > +		NL_SET_ERR_MSG_MOD(extack, "Valid priority range 0 to 7");
> > +		ret = -EOPNOTSUPP;
> > +		goto out;
> > +	}
> > +
> > +	/* find node related to classid */
> > +	node = otx2_sw_node_find(pfvf, classid);
> > +	if (IS_ERR(node)) {
> > +		NL_SET_ERR_MSG_MOD(extack, "HTB node not found");
> > +		ret = -ENOENT;
> > +		goto out;
> > +	}
> > +	/* check max qos txschq level */
> > +	if (node->level == NIX_TXSCH_LVL_MDQ) {
> > +		NL_SET_ERR_MSG_MOD(extack, "HTB qos level not supported");
> > +		ret = -EOPNOTSUPP;
> > +		goto out;
> > +	}
> > +
> > +	set_bit(prio, node->prio_bmap);
> > +
> > +	/* store the qid to assign to leaf node */
> > +	qid = node->qid;
> > +
> > +	/* read current txschq configuration */
> > +	old_cfg = kzalloc(sizeof(*old_cfg), GFP_KERNEL);
> > +	if (!old_cfg) {
> > +		NL_SET_ERR_MSG_MOD(extack, "Memory allocation error");
> > +		ret = -ENOMEM;
> > +		goto out;
> > +	}
> > +	otx2_qos_read_txschq_cfg(pfvf, node, old_cfg);
> > +
> > +	/* delete the txschq nodes allocated for this node */
> > +	otx2_qos_free_sw_node_schq(pfvf, node);
> > +
> > +	/* mark this node as htb inner node */
> > +	node->qid = OTX2_QOS_QID_INNER;
> 
> As you can concurrently read node->qid from the datapath
> (ndo_select_queue), you should use READ_ONCE/WRITE_ONCE to guarantee
> that the value will not be torn; you already use READ_ONCE, but it
> doesn't pair with a WRITE_ONCE here.
> 
ACK, will apply the suggested change.
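
For example (sketch only, covering the two plain stores to node->qid in otx2_qos_leaf_to_inner(), so that they pair with the READ_ONCE() in otx2_get_txq_by_classid()):

 	/* mark this node as htb inner node */
-	node->qid = OTX2_QOS_QID_INNER;
+	WRITE_ONCE(node->qid, OTX2_QOS_QID_INNER);
 	...
 		/* restore the old qos tree */
-		node->qid = qid;
+		WRITE_ONCE(node->qid, qid);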

Thanks,
Hariprasad k
> > +
> > +	/* allocate and initialize a new child node */
> > +	child = otx2_qos_sw_create_leaf_node(pfvf, node, child_classid,
> > +					     prio, rate, ceil, qid);
> > +	if (IS_ERR(child)) {
> > +		NL_SET_ERR_MSG_MOD(extack, "Unable to allocate leaf node");
> > +		ret = PTR_ERR(child);
> > +		goto free_old_cfg;
> > +	}
> > +
> > +	/* push new txschq config to hw */
> > +	new_cfg = kzalloc(sizeof(*new_cfg), GFP_KERNEL);
> > +	if (!new_cfg) {
> > +		NL_SET_ERR_MSG_MOD(extack, "Memory allocation error");
> > +		ret = -ENOMEM;
> > +		goto free_node;
> > +	}
> > +	ret = otx2_qos_update_tree(pfvf, child, new_cfg);
> > +	if (ret) {
> > +		NL_SET_ERR_MSG_MOD(extack, "HTB HW configuration error");
> > +		kfree(new_cfg);
> > +		otx2_qos_sw_node_delete(pfvf, child);
> > +		/* restore the old qos tree */
> > +		node->qid = qid;
> 
> Same here; might be somewhere else as well.
> 
> > +		err = otx2_qos_alloc_txschq_node(pfvf, node);
> > +		if (err) {
> > +			netdev_err(pfvf->netdev,
> > +				   "Failed to restore old leaf node");
> > +			goto free_old_cfg;
> > +		}
> > +		err = otx2_qos_txschq_update_config(pfvf, node, old_cfg);
> > +		if (err) {
> > +			netdev_err(pfvf->netdev,
> > +				   "Failed to restore txschq configuration");
> > +			goto free_old_cfg;
> > +		}
> > +		otx2_qos_update_smq(pfvf, node, QOS_CFG_SQ);
> > +		goto free_old_cfg;
> > +	}
> > +
> > +	/* free new txschq config */
> > +	kfree(new_cfg);
> > +
> > +	/* free old txschq config */
> > +	otx2_qos_free_cfg(pfvf, old_cfg);
> > +	kfree(old_cfg);
> > +
> > +	return 0;
> > +
> > +free_node:
> > +	otx2_qos_sw_node_delete(pfvf, child);
> > +free_old_cfg:
> > +	kfree(old_cfg);
> > +out:
> > +	return ret;
> > +}
> > +
> > +static int otx2_qos_leaf_del(struct otx2_nic *pfvf, u16 *classid,
> > +			     struct netlink_ext_ack *extack)
> > +{
> > +	struct otx2_qos_node *node, *parent;
> > +	u64 prio;
> > +	u16 qid;
> > +
> > +	netdev_dbg(pfvf->netdev, "TC_HTB_LEAF_DEL classid %04x\n", *classid);
> > +
> > +	/* find node related to classid */
> > +	node = otx2_sw_node_find(pfvf, *classid);
> > +	if (IS_ERR(node)) {
> > +		NL_SET_ERR_MSG_MOD(extack, "HTB node not found");
> > +		return -ENOENT;
> > +	}
> > +	parent = node->parent;
> > +	prio   = node->prio;
> > +	qid    = node->qid;
> > +
> > +	otx2_qos_disable_sq(pfvf, node->qid);
> > +
> > +	otx2_qos_destroy_node(pfvf, node);
> > +	pfvf->qos.qid_to_sqmap[qid] = OTX2_QOS_INVALID_SQ;
> > +
> > +	clear_bit(prio, parent->prio_bmap);
> > +
> > +	return 0;
> > +}
> > +
> > +static int otx2_qos_leaf_del_last(struct otx2_nic *pfvf, u16 classid, bool force,
> > +				  struct netlink_ext_ack *extack)
> > +{
> > +	struct otx2_qos_node *node, *parent;
> > +	struct otx2_qos_cfg *new_cfg;
> > +	u64 prio;
> > +	int err;
> > +	u16 qid;
> > +
> > +	netdev_dbg(pfvf->netdev,
> > +		   "TC_HTB_LEAF_DEL_LAST classid %04x\n", classid);
> > +
> > +	/* find node related to classid */
> > +	node = otx2_sw_node_find(pfvf, classid);
> > +	if (IS_ERR(node)) {
> > +		NL_SET_ERR_MSG_MOD(extack, "HTB node not found");
> > +		return -ENOENT;
> > +	}
> > +
> > +	/* save qid for use by parent */
> > +	qid = node->qid;
> > +	prio = node->prio;
> > +
> > +	parent = otx2_sw_node_find(pfvf, node->parent->classid);
> > +	if (IS_ERR(parent)) {
> > +		NL_SET_ERR_MSG_MOD(extack, "parent node not found");
> > +		return -ENOENT;
> > +	}
> > +
> > +	/* destroy the leaf node */
> > +	otx2_qos_destroy_node(pfvf, node);
> > +	pfvf->qos.qid_to_sqmap[qid] = OTX2_QOS_INVALID_SQ;
> > +
> > +	clear_bit(prio, parent->prio_bmap);
> > +
> > +	/* create downstream txschq entries to parent */
> > +	err = otx2_qos_alloc_txschq_node(pfvf, parent);
> > +	if (err) {
> > +		NL_SET_ERR_MSG_MOD(extack, "HTB failed to create txsch configuration");
> > +		return err;
> > +	}
> > +	parent->qid = qid;
> > +	__set_bit(qid, pfvf->qos.qos_sq_bmap);
> > +
> > +	/* push new txschq config to hw */
> > +	new_cfg = kzalloc(sizeof(*new_cfg), GFP_KERNEL);
> > +	if (!new_cfg) {
> > +		NL_SET_ERR_MSG_MOD(extack, "Memory allocation error");
> > +		return -ENOMEM;
> > +	}
> > +	/* fill txschq cfg and push txschq cfg to hw */
> > +	otx2_qos_fill_cfg_schq(parent, new_cfg);
> > +	err = otx2_qos_push_txschq_cfg(pfvf, parent, new_cfg);
> > +	if (err) {
> > +		NL_SET_ERR_MSG_MOD(extack, "HTB HW configuration error");
> > +		kfree(new_cfg);
> > +		return err;
> > +	}
> > +	kfree(new_cfg);
> > +
> > +	/* update tx_real_queues */
> > +	otx2_qos_update_tx_netdev_queues(pfvf);
> > +
> > +	return 0;
> > +}
> > +
> > +void otx2_clean_qos_queues(struct otx2_nic *pfvf)
> > +{
> > +	struct otx2_qos_node *root;
> > +
> > +	root = otx2_sw_node_find(pfvf, OTX2_QOS_ROOT_CLASSID);
> > +	if (IS_ERR(root))
> > +		return;
> > +
> > +	otx2_qos_update_smq(pfvf, root, QOS_SMQ_FLUSH);
> > +}
> > +
> > +void otx2_qos_config_txschq(struct otx2_nic *pfvf)
> > +{
> > +	struct otx2_qos_node *root;
> > +	int err;
> > +
> > +	root = otx2_sw_node_find(pfvf, OTX2_QOS_ROOT_CLASSID);
> > +	if (IS_ERR(root))
> > +		return;
> > +
> > +	err = otx2_qos_txschq_config(pfvf, root);
> > +	if (err) {
> > +		netdev_err(pfvf->netdev, "Error updating txschq configuration\n");
> > +		goto root_destroy;
> > +	}
> > +
> > +	err = otx2_qos_txschq_push_cfg_tl(pfvf, root, NULL);
> > +	if (err) {
> > +		netdev_err(pfvf->netdev, "Error updating txschq configuration\n");
> > +		goto root_destroy;
> > +	}
> > +
> > +	otx2_qos_update_smq(pfvf, root, QOS_CFG_SQ);
> > +	return;
> > +
> > +root_destroy:
> > +	netdev_err(pfvf->netdev, "Failed to update Scheduler/Shaping config in Hardware\n");
> > +	/* Free resources allocated */
> > +	otx2_qos_root_destroy(pfvf);
> > +}
> > +
> > +int otx2_setup_tc_htb(struct net_device *ndev, struct tc_htb_qopt_offload *htb)
> > +{
> > +	struct otx2_nic *pfvf = netdev_priv(ndev);
> > +	int res;
> > +
> > +	switch (htb->command) {
> > +	case TC_HTB_CREATE:
> > +		return otx2_qos_root_add(pfvf, htb->parent_classid,
> > +					 htb->classid, htb->extack);
> > +	case TC_HTB_DESTROY:
> > +		return otx2_qos_root_destroy(pfvf);
> > +	case TC_HTB_LEAF_ALLOC_QUEUE:
> > +		res = otx2_qos_leaf_alloc_queue(pfvf, htb->classid,
> > +						htb->parent_classid,
> > +						htb->rate, htb->ceil,
> > +						htb->prio, htb->extack);
> > +		if (res < 0)
> > +			return res;
> > +		htb->qid = res;
> > +		return 0;
> > +	case TC_HTB_LEAF_TO_INNER:
> > +		return otx2_qos_leaf_to_inner(pfvf, htb->parent_classid,
> > +					      htb->classid, htb->rate,
> > +					      htb->ceil, htb->prio,
> > +					      htb->extack);
> > +	case TC_HTB_LEAF_DEL:
> > +		return otx2_qos_leaf_del(pfvf, &htb->classid, htb->extack);
> > +	case TC_HTB_LEAF_DEL_LAST:
> > +	case TC_HTB_LEAF_DEL_LAST_FORCE:
> > +		return otx2_qos_leaf_del_last(pfvf, htb->classid,
> > +				htb->command == TC_HTB_LEAF_DEL_LAST_FORCE,
> > +					      htb->extack);
> > +	case TC_HTB_LEAF_QUERY_QUEUE:
> > +		res = otx2_get_txq_by_classid(pfvf, htb->classid);
> > +		htb->qid = res;
> > +		return 0;
> > +	case TC_HTB_NODE_MODIFY:
> > +		fallthrough;
> > +	default:
> > +		return -EOPNOTSUPP;
> > +	}
> > +}
> > diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/qos.h b/drivers/net/ethernet/marvell/octeontx2/nic/qos.h
> > index 73a62d092e99..26de1af2aa57 100644
> > --- a/drivers/net/ethernet/marvell/octeontx2/nic/qos.h
> > +++ b/drivers/net/ethernet/marvell/octeontx2/nic/qos.h
> > @@ -7,13 +7,63 @@
> >  #ifndef OTX2_QOS_H
> >  #define OTX2_QOS_H
> >
> > +#include <linux/types.h>
> > +#include <linux/netdevice.h>
> > +#include <linux/rhashtable.h>
> > +
> > +#define OTX2_QOS_MAX_LVL		4
> > +#define OTX2_QOS_MAX_PRIO		7
> >  #define OTX2_QOS_MAX_LEAF_NODES                16
> >
> > -int otx2_qos_enable_sq(struct otx2_nic *pfvf, int qidx, u16 smq);
> > -void otx2_qos_disable_sq(struct otx2_nic *pfvf, int qidx, u16 mdq);
> > +enum qos_smq_operations {
> > +	QOS_CFG_SQ,
> > +	QOS_SMQ_FLUSH,
> > +};
> > +
> > +u64 otx2_get_txschq_rate_regval(struct otx2_nic *nic, u64 maxrate, u32 burst);
> > +
> > +int otx2_setup_tc_htb(struct net_device *ndev, struct tc_htb_qopt_offload *htb);
> > +int otx2_qos_get_qid(struct otx2_nic *pfvf);
> > +void otx2_qos_free_qid(struct otx2_nic *pfvf, int qidx);
> > +int otx2_qos_enable_sq(struct otx2_nic *pfvf, int qidx);
> > +void otx2_qos_disable_sq(struct otx2_nic *pfvf, int qidx);
> > +
> > +struct otx2_qos_cfg {
> > +	u16 schq[NIX_TXSCH_LVL_CNT];
> > +	u16 schq_contig[NIX_TXSCH_LVL_CNT];
> > +	int static_node_pos[NIX_TXSCH_LVL_CNT];
> > +	int dwrr_node_pos[NIX_TXSCH_LVL_CNT];
> > +	u16 schq_contig_list[NIX_TXSCH_LVL_CNT][MAX_TXSCHQ_PER_FUNC];
> > +	u16 schq_list[NIX_TXSCH_LVL_CNT][MAX_TXSCHQ_PER_FUNC];
> > +};
> >
> >  struct otx2_qos {
> > -	       u16 qid_to_sqmap[OTX2_QOS_MAX_LEAF_NODES];
> > -	};
> > +	DECLARE_HASHTABLE(qos_hlist, order_base_2(OTX2_QOS_MAX_LEAF_NODES));
> > +	struct mutex qos_lock; /* child list lock */
> > +	u16 qid_to_sqmap[OTX2_QOS_MAX_LEAF_NODES];
> > +	struct list_head qos_tree;
> > +	DECLARE_BITMAP(qos_sq_bmap, OTX2_QOS_MAX_LEAF_NODES);
> > +	u16 maj_id;
> > +	u16 defcls;
> > +	u8  link_cfg_lvl; /* LINKX_CFG CSRs mapped to TL3 or TL2's index ? */
> > +};
> > +
> > +struct otx2_qos_node {
> > +	struct list_head list; /* list management */
> > +	struct list_head child_list;
> > +	struct list_head child_schq_list;
> > +	struct hlist_node hlist;
> > +	DECLARE_BITMAP(prio_bmap, OTX2_QOS_MAX_PRIO + 1);
> > +	struct otx2_qos_node *parent;	/* parent qos node */
> > +	u64 rate; /* htb params */
> > +	u64 ceil;
> > +	u32 classid;
> > +	u32 prio;
> > +	u16 schq; /* hw txschq */
> > +	u16 qid;
> > +	u16 prio_anchor;
> > +	u8 level;
> > +};
> > +
> >
> >  #endif
> > diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/qos_sq.c b/drivers/net/ethernet/marvell/octeontx2/nic/qos_sq.c
> > index 1c77f024c360..8a1e89668a1b 100644
> > --- a/drivers/net/ethernet/marvell/octeontx2/nic/qos_sq.c
> > +++ b/drivers/net/ethernet/marvell/octeontx2/nic/qos_sq.c
> > @@ -225,7 +225,22 @@ static int otx2_qos_ctx_disable(struct otx2_nic *pfvf, u16 qidx, int aura_id)
> >  	return otx2_sync_mbox_msg(&pfvf->mbox);
> >  }
> >
> > -int otx2_qos_enable_sq(struct otx2_nic *pfvf, int qidx, u16 smq)
> > +int otx2_qos_get_qid(struct otx2_nic *pfvf)
> > +{
> > +	int qidx;
> > +
> > +	qidx = find_first_zero_bit(pfvf->qos.qos_sq_bmap,
> > +				   pfvf->hw.tc_tx_queues);
> > +
> > +	return qidx == pfvf->hw.tc_tx_queues ? -ENOSPC : qidx;
> > +}
> > +
> > +void otx2_qos_free_qid(struct otx2_nic *pfvf, int qidx)
> > +{
> > +	clear_bit(qidx, pfvf->qos.qos_sq_bmap);
> > +}
> > +
> > +int otx2_qos_enable_sq(struct otx2_nic *pfvf, int qidx)
> >  {
> >  	struct otx2_hw *hw = &pfvf->hw;
> >  	int pool_id, sq_idx, err;
> > @@ -241,7 +256,6 @@ int otx2_qos_enable_sq(struct otx2_nic *pfvf, int qidx, u16 smq)
> >  		goto out;
> >
> >  	pool_id = otx2_get_pool_idx(pfvf, AURA_NIX_SQ, sq_idx);
> > -	pfvf->qos.qid_to_sqmap[qidx] = smq;
> >  	err = otx2_sq_init(pfvf, sq_idx, pool_id);
> >  	if (err)
> >  		goto out;
> > @@ -250,7 +264,7 @@ int otx2_qos_enable_sq(struct otx2_nic *pfvf, int qidx, u16 smq)
> >  	return err;
> >  }
> >
> > -void otx2_qos_disable_sq(struct otx2_nic *pfvf, int qidx, u16 mdq)
> > +void otx2_qos_disable_sq(struct otx2_nic *pfvf, int qidx)
> >  {
> >  	struct otx2_qset *qset = &pfvf->qset;
> >  	struct otx2_hw *hw = &pfvf->hw;
> > --
> > 2.17.1
> >

^ permalink raw reply	[flat|nested] 19+ messages in thread

end of thread, other threads:[~2023-03-29 18:03 UTC | newest]

Thread overview: 19+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2023-03-26 18:12 [net-next Patch v5 0/6] octeontx2-pf: HTB offload support Hariprasad Kelam
2023-03-26 18:12 ` [net-next Patch v5 1/6] sch_htb: Allow HTB priority parameter in offload mode Hariprasad Kelam
2023-03-28 14:44   ` Simon Horman
2023-03-26 18:12 ` [net-next Patch v5 2/6] octeontx2-pf: Rename tot_tx_queues to non_qos_queues Hariprasad Kelam
2023-03-28 14:44   ` Simon Horman
2023-03-26 18:12 ` [net-next Patch v5 3/6] octeontx2-pf: qos send queues management Hariprasad Kelam
2023-03-28 14:43   ` Simon Horman
2023-03-29 17:19     ` Hariprasad Kelam
2023-03-26 18:12 ` [net-next Patch v5 4/6] octeontx2-pf: Refactor schedular queue alloc/free calls Hariprasad Kelam
2023-03-28 14:44   ` Simon Horman
2023-03-26 18:12 ` [net-next Patch v5 5/6] octeontx2-pf: Add support for HTB offload Hariprasad Kelam
2023-03-28 10:56   ` Paolo Abeni
2023-03-29 16:44     ` Hariprasad Kelam
2023-03-28 14:41   ` Simon Horman
2023-03-29 17:17     ` Hariprasad Kelam
2023-03-28 18:42   ` Maxim Mikityanskiy
2023-03-29 18:03     ` Hariprasad Kelam
2023-03-26 18:12 ` [net-next Patch v5 6/6] docs: octeontx2: Add Documentation for QOS Hariprasad Kelam
2023-03-28 14:45   ` Simon Horman
