netdev.vger.kernel.org archive mirror
* [net-next PATCH 0/5] Add ingress ratelimit offload
@ 2021-06-15 11:34 Subbaraya Sundeep
  2021-06-15 11:34 ` [net-next PATCH 1/5] octeontx2-af: cn10k: Bandwidth profiles config support Subbaraya Sundeep
                   ` (5 more replies)
  0 siblings, 6 replies; 9+ messages in thread
From: Subbaraya Sundeep @ 2021-06-15 11:34 UTC (permalink / raw)
  To: netdev, davem, kuba; +Cc: sgoutham, hkelam, gakula, Subbaraya Sundeep

This patchset adds ingress rate limiting hardware
offload support for CN10K silicon. Police actions
are added for TC matchall and flower filters.
CN10K has an ingress rate limiting feature where
a receive queue is mapped to a bandwidth profile
and the profile is configured with rate and burst
parameters by software. CN10K hardware supports
three levels of ingress policing or rate limiting.
Multiple leaf profiles can point to a single mid
level profile and multiple mid level profiles can
point to a single top level one. Only leaf level
profiles are used for configuring rate limiting.
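
As an illustration, leaf profiles marked for
aggregation can share a single mid level profile
(an example topology; indices are arbitrary):

  RQ0 --> leaf 0 --+
                   +--> mid 0 --> top 0
  RQ1 --> leaf 1 --+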

Patch 1 adds the new bandwidth profile contexts
in the AF driver, similar to other hardware contexts.
Patch 2 adds the debugfs changes to dump bandwidth
profile contexts.
Patch 3 adds support for police action with the TC matchall filter.
Patch 4 uses NL_SET_ERR_MSG_MOD in the TC code.
Patch 5 adds support for police action with the TC flower filter.


Subbaraya Sundeep (2):
  octeontx2-pf: Use NL_SET_ERR_MSG_MOD for TC
  octeontx2-pf: Add police action for TC flower

Sunil Goutham (3):
  octeontx2-af: cn10k: Bandwidth profiles config support
  octeontx2-af: cn10k: Debugfs support for bandwidth profiles
  octeontx2-pf: TC_MATCHALL ingress ratelimiting offload

 drivers/net/ethernet/marvell/octeontx2/af/mbox.h   |  40 +-
 drivers/net/ethernet/marvell/octeontx2/af/rvu.c    |   8 +
 drivers/net/ethernet/marvell/octeontx2/af/rvu.h    |  16 +
 .../ethernet/marvell/octeontx2/af/rvu_debugfs.c    | 163 ++++++
 .../net/ethernet/marvell/octeontx2/af/rvu_nix.c    | 619 ++++++++++++++++++++-
 .../net/ethernet/marvell/octeontx2/af/rvu_npc_fs.c |   5 +
 .../net/ethernet/marvell/octeontx2/af/rvu_reg.h    |   8 +
 .../net/ethernet/marvell/octeontx2/af/rvu_struct.h |  85 ++-
 drivers/net/ethernet/marvell/octeontx2/nic/cn10k.c | 323 +++++++++++
 drivers/net/ethernet/marvell/octeontx2/nic/cn10k.h |  11 +
 .../ethernet/marvell/octeontx2/nic/otx2_common.h   |   3 +
 .../ethernet/marvell/octeontx2/nic/otx2_ethtool.c  |   6 +
 .../net/ethernet/marvell/octeontx2/nic/otx2_pf.c   |   3 +
 .../net/ethernet/marvell/octeontx2/nic/otx2_tc.c   | 299 ++++++++--
 14 files changed, 1554 insertions(+), 35 deletions(-)

-- 
2.7.4


^ permalink raw reply	[flat|nested] 9+ messages in thread

* [net-next PATCH 1/5] octeontx2-af: cn10k: Bandwidth profiles config support
  2021-06-15 11:34 [net-next PATCH 0/5] Add ingress ratelimit offload Subbaraya Sundeep
@ 2021-06-15 11:34 ` Subbaraya Sundeep
  2021-06-18 16:07   ` Kees Cook
  2021-06-15 11:34 ` [net-next PATCH 2/5] octeontx2-af: cn10k: Debugfs support for bandwidth profiles Subbaraya Sundeep
                   ` (4 subsequent siblings)
  5 siblings, 1 reply; 9+ messages in thread
From: Subbaraya Sundeep @ 2021-06-15 11:34 UTC (permalink / raw)
  To: netdev, davem, kuba; +Cc: sgoutham, hkelam, gakula, Subbaraya Sundeep

From: Sunil Goutham <sgoutham@marvell.com>

CN10K silicon supports hierarchical ingress packet rate limiting.
There are 3 levels of profilers supported: leaf, mid and top.
Rate limiting is done after the packet forwarding decision is
taken and a NIXLF's RQ is identified to DMA the packet. The RQ's
context points to a leaf bandwidth profile which can be configured
to achieve the desired rate limit.

This patch adds logic for management of these bandwidth profiles,
i.e. profile alloc, free, context update etc.
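
From a PF/VF driver's point of view the new mailbox messages
are used roughly as below (a consumer-side sketch mirroring the
cn10k PF changes later in this series; error handling trimmed):

	struct nix_bandprof_alloc_req *req;
	struct nix_bandprof_alloc_rsp *rsp;
	u16 leaf;

	/* Ask AF for one leaf layer bandwidth profile */
	req = otx2_mbox_alloc_msg_nix_bandprof_alloc(&pfvf->mbox);
	req->prof_count[BAND_PROF_LEAF_LAYER] = 1;
	otx2_sync_mbox_msg(&pfvf->mbox);

	rsp = (struct nix_bandprof_alloc_rsp *)
	      otx2_mbox_get_rsp(&pfvf->mbox.mbox, 0, &req->hdr);
	leaf = rsp->prof_idx[BAND_PROF_LEAF_LAYER][0];

	/* The profile context is then configured via the NIX AQ
	 * (ctype NIX_AQ_CTYPE_BANDPROF, op NIX_AQ_INSTOP_WRITE)
	 * and an RQ is attached to it by setting policer_ena and
	 * band_prof_id in the RQ context.
	 */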

Signed-off-by: Sunil Goutham <sgoutham@marvell.com>
Signed-off-by: Subbaraya Sundeep <sbhatta@marvell.com>
---
 drivers/net/ethernet/marvell/octeontx2/af/mbox.h   |  40 +-
 drivers/net/ethernet/marvell/octeontx2/af/rvu.h    |  11 +
 .../net/ethernet/marvell/octeontx2/af/rvu_nix.c    | 619 ++++++++++++++++++++-
 .../net/ethernet/marvell/octeontx2/af/rvu_npc_fs.c |   5 +
 .../net/ethernet/marvell/octeontx2/af/rvu_reg.h    |   8 +
 .../net/ethernet/marvell/octeontx2/af/rvu_struct.h |  78 ++-
 6 files changed, 757 insertions(+), 4 deletions(-)

diff --git a/drivers/net/ethernet/marvell/octeontx2/af/mbox.h b/drivers/net/ethernet/marvell/octeontx2/af/mbox.h
index aee6a6f..7d7dfa8 100644
--- a/drivers/net/ethernet/marvell/octeontx2/af/mbox.h
+++ b/drivers/net/ethernet/marvell/octeontx2/af/mbox.h
@@ -260,7 +260,11 @@ M(NIX_BP_DISABLE,	0x8017, nix_bp_disable, nix_bp_cfg_req, msg_rsp) \
 M(NIX_GET_MAC_ADDR, 0x8018, nix_get_mac_addr, msg_req, nix_get_mac_addr_rsp) \
 M(NIX_CN10K_AQ_ENQ,	0x8019, nix_cn10k_aq_enq, nix_cn10k_aq_enq_req, \
 				nix_cn10k_aq_enq_rsp)			\
-M(NIX_GET_HW_INFO,	0x801a, nix_get_hw_info, msg_req, nix_hw_info)
+M(NIX_GET_HW_INFO,	0x801c, nix_get_hw_info, msg_req, nix_hw_info)	\
+M(NIX_BANDPROF_ALLOC,	0x801d, nix_bandprof_alloc, nix_bandprof_alloc_req, \
+				nix_bandprof_alloc_rsp)			    \
+M(NIX_BANDPROF_FREE,	0x801e, nix_bandprof_free, nix_bandprof_free_req,   \
+				msg_rsp)
 
 /* Messages initiated by AF (range 0xC00 - 0xDFF) */
 #define MBOX_UP_CGX_MESSAGES						\
@@ -615,6 +619,9 @@ enum nix_af_status {
 	NIX_AF_ERR_PTP_CONFIG_FAIL  = -423,
 	NIX_AF_ERR_NPC_KEY_NOT_SUPP = -424,
 	NIX_AF_ERR_INVALID_NIXBLK   = -425,
+	NIX_AF_ERR_INVALID_BANDPROF = -426,
+	NIX_AF_ERR_IPOLICER_NOTSUPP = -427,
+	NIX_AF_ERR_BANDPROF_INVAL_REQ  = -428,
 };
 
 /* For NIX RX vtag action  */
@@ -683,6 +690,7 @@ struct nix_cn10k_aq_enq_req {
 		struct nix_cq_ctx_s cq;
 		struct nix_rsse_s   rss;
 		struct nix_rx_mce_s mce;
+		struct nix_bandprof_s prof;
 	};
 	union {
 		struct nix_cn10k_rq_ctx_s rq_mask;
@@ -690,6 +698,7 @@ struct nix_cn10k_aq_enq_req {
 		struct nix_cq_ctx_s cq_mask;
 		struct nix_rsse_s   rss_mask;
 		struct nix_rx_mce_s mce_mask;
+		struct nix_bandprof_s prof_mask;
 	};
 };
 
@@ -701,6 +710,7 @@ struct nix_cn10k_aq_enq_rsp {
 		struct nix_cq_ctx_s cq;
 		struct nix_rsse_s   rss;
 		struct nix_rx_mce_s mce;
+		struct nix_bandprof_s prof;
 	};
 };
 
@@ -716,6 +726,7 @@ struct nix_aq_enq_req {
 		struct nix_cq_ctx_s cq;
 		struct nix_rsse_s   rss;
 		struct nix_rx_mce_s mce;
+		u64 prof;
 	};
 	union {
 		struct nix_rq_ctx_s rq_mask;
@@ -723,6 +734,7 @@ struct nix_aq_enq_req {
 		struct nix_cq_ctx_s cq_mask;
 		struct nix_rsse_s   rss_mask;
 		struct nix_rx_mce_s mce_mask;
+		u64 prof_mask;
 	};
 };
 
@@ -734,6 +746,7 @@ struct nix_aq_enq_rsp {
 		struct nix_cq_ctx_s cq;
 		struct nix_rsse_s   rss;
 		struct nix_rx_mce_s mce;
+		u64 prof;
 	};
 };
 
@@ -975,6 +988,31 @@ struct nix_hw_info {
 	u16 min_mtu;
 };
 
+struct nix_bandprof_alloc_req {
+	struct mbox_msghdr hdr;
+	/* Count of profiles needed per layer */
+	u16 prof_count[BAND_PROF_NUM_LAYERS];
+};
+
+struct nix_bandprof_alloc_rsp {
+	struct mbox_msghdr hdr;
+	u16 prof_count[BAND_PROF_NUM_LAYERS];
+
+	/* There is no need to allocate more than 1 bandwidth profile
+	 * per RQ of a PF_FUNC's NIXLF. So limit the maximum
+	 * profiles to 64 per PF_FUNC.
+	 */
+#define MAX_BANDPROF_PER_PFFUNC	64
+	u16 prof_idx[BAND_PROF_NUM_LAYERS][MAX_BANDPROF_PER_PFFUNC];
+};
+
+struct nix_bandprof_free_req {
+	struct mbox_msghdr hdr;
+	u8 free_all;
+	u16 prof_count[BAND_PROF_NUM_LAYERS];
+	u16 prof_idx[BAND_PROF_NUM_LAYERS][MAX_BANDPROF_PER_PFFUNC];
+};
+
 /* NPC mbox message structs */
 
 #define NPC_MCAM_ENTRY_INVALID	0xFFFF
diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu.h b/drivers/net/ethernet/marvell/octeontx2/af/rvu.h
index c88dab7..4d2a5ca 100644
--- a/drivers/net/ethernet/marvell/octeontx2/af/rvu.h
+++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu.h
@@ -296,6 +296,13 @@ struct nix_txvlan {
 	struct mutex rsrc_lock; /* Serialize resource alloc/free */
 };
 
+struct nix_ipolicer {
+	struct rsrc_bmap band_prof;
+	u16 *pfvf_map;
+	u16 *match_id;
+	u16 *ref_count;
+};
+
 struct nix_hw {
 	int blkaddr;
 	struct rvu *rvu;
@@ -305,6 +312,7 @@ struct nix_hw {
 	struct nix_mark_format mark_format;
 	struct nix_lso lso;
 	struct nix_txvlan txvlan;
+	struct nix_ipolicer *ipolicer;
 };
 
 /* RVU block's capabilities or functionality,
@@ -322,6 +330,7 @@ struct hw_cap {
 	bool	nix_rx_multicast;	 /* Rx packet replication support */
 	bool	per_pf_mbox_regs; /* PF mbox specified in per PF registers ? */
 	bool	programmable_chans; /* Channels programmable ? */
+	bool	ipolicer;
 };
 
 struct rvu_hwinfo {
@@ -672,6 +681,8 @@ int rvu_get_next_nix_blkaddr(struct rvu *rvu, int blkaddr);
 void rvu_nix_reset_mac(struct rvu_pfvf *pfvf, int pcifunc);
 int nix_get_struct_ptrs(struct rvu *rvu, u16 pcifunc,
 			struct nix_hw **nix_hw, int *blkaddr);
+int rvu_nix_setup_ratelimit_aggr(struct rvu *rvu, u16 pcifunc,
+				 u16 rq_idx, u16 match_id);
 
 /* NPC APIs */
 int rvu_npc_init(struct rvu *rvu);
diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu_nix.c b/drivers/net/ethernet/marvell/octeontx2/af/rvu_nix.c
index d8cb665..ebd73a8 100644
--- a/drivers/net/ethernet/marvell/octeontx2/af/rvu_nix.c
+++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu_nix.c
@@ -23,6 +23,14 @@ static int rvu_nix_get_bpid(struct rvu *rvu, struct nix_bp_cfg_req *req,
 			    int type, int chan_id);
 static int nix_update_mce_rule(struct rvu *rvu, u16 pcifunc,
 			       int type, bool add);
+static int nix_setup_ipolicers(struct rvu *rvu,
+			       struct nix_hw *nix_hw, int blkaddr);
+static void nix_ipolicer_freemem(struct nix_hw *nix_hw);
+static int nix_verify_bandprof(struct nix_cn10k_aq_enq_req *req,
+			       struct nix_hw *nix_hw, u16 pcifunc);
+static int nix_free_all_bandprof(struct rvu *rvu, u16 pcifunc);
+static void nix_clear_ratelimit_aggr(struct rvu *rvu, struct nix_hw *nix_hw,
+				     u32 leaf_prof);
 
 enum mc_tbl_sz {
 	MC_TBL_SZ_256,
@@ -699,8 +707,11 @@ static int rvu_nix_blk_aq_enq_inst(struct rvu *rvu, struct nix_hw *nix_hw,
 	pfvf = rvu_get_pfvf(rvu, pcifunc);
 	nixlf = rvu_get_lf(rvu, block, pcifunc, 0);
 
-	/* Skip NIXLF check for broadcast MCE entry init */
-	if (!(!rsp && req->ctype == NIX_AQ_CTYPE_MCE)) {
+	/* Skip NIXLF check for broadcast MCE entry and bandwidth profile
+	 * operations done by AF itself.
+	 */
+	if (!((!rsp && req->ctype == NIX_AQ_CTYPE_MCE) ||
+	      (req->ctype == NIX_AQ_CTYPE_BANDPROF && !pcifunc))) {
 		if (!pfvf->nixlf || nixlf < 0)
 			return NIX_AF_ERR_AF_LF_INVALID;
 	}
@@ -740,6 +751,11 @@ static int rvu_nix_blk_aq_enq_inst(struct rvu *rvu, struct nix_hw *nix_hw,
 		if (rsp)
 			rc = NIX_AF_ERR_AQ_ENQUEUE;
 		break;
+	case NIX_AQ_CTYPE_BANDPROF:
+		if (nix_verify_bandprof((struct nix_cn10k_aq_enq_req *)req,
+					nix_hw, pcifunc))
+			rc = NIX_AF_ERR_INVALID_BANDPROF;
+		break;
 	default:
 		rc = NIX_AF_ERR_AQ_ENQUEUE;
 	}
@@ -796,6 +812,9 @@ static int rvu_nix_blk_aq_enq_inst(struct rvu *rvu, struct nix_hw *nix_hw,
 		else if (req->ctype == NIX_AQ_CTYPE_MCE)
 			memcpy(mask, &req->mce_mask,
 			       sizeof(struct nix_rx_mce_s));
+		else if (req->ctype == NIX_AQ_CTYPE_BANDPROF)
+			memcpy(mask, &req->prof_mask,
+			       sizeof(struct nix_bandprof_s));
 		fallthrough;
 	case NIX_AQ_INSTOP_INIT:
 		if (req->ctype == NIX_AQ_CTYPE_RQ)
@@ -808,6 +827,8 @@ static int rvu_nix_blk_aq_enq_inst(struct rvu *rvu, struct nix_hw *nix_hw,
 			memcpy(ctx, &req->rss, sizeof(struct nix_rsse_s));
 		else if (req->ctype == NIX_AQ_CTYPE_MCE)
 			memcpy(ctx, &req->mce, sizeof(struct nix_rx_mce_s));
+		else if (req->ctype == NIX_AQ_CTYPE_BANDPROF)
+			memcpy(ctx, &req->prof, sizeof(struct nix_bandprof_s));
 		break;
 	case NIX_AQ_INSTOP_NOP:
 	case NIX_AQ_INSTOP_READ:
@@ -885,6 +906,9 @@ static int rvu_nix_blk_aq_enq_inst(struct rvu *rvu, struct nix_hw *nix_hw,
 			else if (req->ctype == NIX_AQ_CTYPE_MCE)
 				memcpy(&rsp->mce, ctx,
 				       sizeof(struct nix_rx_mce_s));
+			else if (req->ctype == NIX_AQ_CTYPE_BANDPROF)
+				memcpy(&rsp->prof, ctx,
+				       sizeof(struct nix_bandprof_s));
 		}
 	}
 
@@ -3624,6 +3648,10 @@ static int rvu_nix_block_init(struct rvu *rvu, struct nix_hw *nix_hw)
 		if (err)
 			return err;
 
+		err = nix_setup_ipolicers(rvu, nix_hw, blkaddr);
+		if (err)
+			return err;
+
 		err = nix_af_mark_format_setup(rvu, nix_hw, blkaddr);
 		if (err)
 			return err;
@@ -3772,6 +3800,8 @@ static void rvu_nix_block_freemem(struct rvu *rvu, int blkaddr,
 			kfree(txsch->schq.bmap);
 		}
 
+		nix_ipolicer_freemem(nix_hw);
+
 		vlan = &nix_hw->txvlan;
 		kfree(vlan->rsrc.bmap);
 		mutex_destroy(&vlan->rsrc_lock);
@@ -3879,6 +3909,8 @@ void rvu_nix_lf_teardown(struct rvu *rvu, u16 pcifunc, int blkaddr, int nixlf)
 	}
 
 	nix_ctx_free(rvu, pfvf);
+
+	nix_free_all_bandprof(rvu, pcifunc);
 }
 
 #define NIX_AF_LFX_TX_CFG_PTP_EN	BIT_ULL(32)
@@ -3987,3 +4019,586 @@ void rvu_nix_reset_mac(struct rvu_pfvf *pfvf, int pcifunc)
 	if (from_vf)
 		ether_addr_copy(pfvf->mac_addr, pfvf->default_mac);
 }
+
+/* NIX ingress policers or bandwidth profiles APIs */
+static void nix_config_rx_pkt_policer_precolor(struct rvu *rvu, int blkaddr)
+{
+	struct npc_lt_def_cfg defs, *ltdefs;
+
+	ltdefs = &defs;
+	memcpy(ltdefs, rvu->kpu.lt_def, sizeof(struct npc_lt_def_cfg));
+
+	/* Extract PCP and DEI fields from the outer VLAN at byte offset
+	 * 2 from the start of LB_PTR (i.e. TAG).
+	 * VLAN0 is Outer VLAN and VLAN1 is Inner VLAN. Inner VLAN
+	 * fields are considered when 'Tunnel enable' is set in profile.
+	 */
+	rvu_write64(rvu, blkaddr, NIX_AF_RX_DEF_VLAN0_PCP_DEI,
+		    (2UL << 12) | (ltdefs->ovlan.lid << 8) |
+		    (ltdefs->ovlan.ltype_match << 4) |
+		    ltdefs->ovlan.ltype_mask);
+	rvu_write64(rvu, blkaddr, NIX_AF_RX_DEF_VLAN1_PCP_DEI,
+		    (2UL << 12) | (ltdefs->ivlan.lid << 8) |
+		    (ltdefs->ivlan.ltype_match << 4) |
+		    ltdefs->ivlan.ltype_mask);
+
+	/* DSCP field in outer and tunneled IPv4 packets */
+	rvu_write64(rvu, blkaddr, NIX_AF_RX_DEF_OIP4_DSCP,
+		    (1UL << 12) | (ltdefs->rx_oip4.lid << 8) |
+		    (ltdefs->rx_oip4.ltype_match << 4) |
+		    ltdefs->rx_oip4.ltype_mask);
+	rvu_write64(rvu, blkaddr, NIX_AF_RX_DEF_IIP4_DSCP,
+		    (1UL << 12) | (ltdefs->rx_iip4.lid << 8) |
+		    (ltdefs->rx_iip4.ltype_match << 4) |
+		    ltdefs->rx_iip4.ltype_mask);
+
+	/* DSCP field (traffic class) in outer and tunneled IPv6 packets */
+	rvu_write64(rvu, blkaddr, NIX_AF_RX_DEF_OIP6_DSCP,
+		    (1UL << 11) | (ltdefs->rx_oip6.lid << 8) |
+		    (ltdefs->rx_oip6.ltype_match << 4) |
+		    ltdefs->rx_oip6.ltype_mask);
+	rvu_write64(rvu, blkaddr, NIX_AF_RX_DEF_IIP6_DSCP,
+		    (1UL << 11) | (ltdefs->rx_iip6.lid << 8) |
+		    (ltdefs->rx_iip6.ltype_match << 4) |
+		    ltdefs->rx_iip6.ltype_mask);
+}
+
+static int nix_init_policer_context(struct rvu *rvu, struct nix_hw *nix_hw,
+				    int layer, int prof_idx)
+{
+	struct nix_cn10k_aq_enq_req aq_req;
+	int rc;
+
+	memset(&aq_req, 0, sizeof(struct nix_cn10k_aq_enq_req));
+
+	aq_req.qidx = (prof_idx & 0x3FFF) | (layer << 14);
+	aq_req.ctype = NIX_AQ_CTYPE_BANDPROF;
+	aq_req.op = NIX_AQ_INSTOP_INIT;
+
+	/* Context is all zeros, submit to AQ */
+	rc = rvu_nix_blk_aq_enq_inst(rvu, nix_hw,
+				     (struct nix_aq_enq_req *)&aq_req, NULL);
+	if (rc)
+		dev_err(rvu->dev, "Failed to INIT bandwidth profile layer %d profile %d\n",
+			layer, prof_idx);
+	return rc;
+}
+
+static int nix_setup_ipolicers(struct rvu *rvu,
+			       struct nix_hw *nix_hw, int blkaddr)
+{
+	struct rvu_hwinfo *hw = rvu->hw;
+	struct nix_ipolicer *ipolicer;
+	int err, layer, prof_idx;
+	u64 cfg;
+
+	cfg = rvu_read64(rvu, blkaddr, NIX_AF_CONST);
+	if (!(cfg & BIT_ULL(61))) {
+		hw->cap.ipolicer = false;
+		return 0;
+	}
+
+	hw->cap.ipolicer = true;
+	nix_hw->ipolicer = devm_kcalloc(rvu->dev, BAND_PROF_NUM_LAYERS,
+					sizeof(*ipolicer), GFP_KERNEL);
+	if (!nix_hw->ipolicer)
+		return -ENOMEM;
+
+	cfg = rvu_read64(rvu, blkaddr, NIX_AF_PL_CONST);
+
+	for (layer = 0; layer < BAND_PROF_NUM_LAYERS; layer++) {
+		ipolicer = &nix_hw->ipolicer[layer];
+		switch (layer) {
+		case BAND_PROF_LEAF_LAYER:
+			ipolicer->band_prof.max = cfg & 0XFFFF;
+			break;
+		case BAND_PROF_MID_LAYER:
+			ipolicer->band_prof.max = (cfg >> 16) & 0XFFFF;
+			break;
+		case BAND_PROF_TOP_LAYER:
+			ipolicer->band_prof.max = (cfg >> 32) & 0XFFFF;
+			break;
+		}
+
+		if (!ipolicer->band_prof.max)
+			continue;
+
+		err = rvu_alloc_bitmap(&ipolicer->band_prof);
+		if (err)
+			return err;
+
+		ipolicer->pfvf_map = devm_kcalloc(rvu->dev,
+						  ipolicer->band_prof.max,
+						  sizeof(u16), GFP_KERNEL);
+		if (!ipolicer->pfvf_map)
+			return -ENOMEM;
+
+		ipolicer->match_id = devm_kcalloc(rvu->dev,
+						  ipolicer->band_prof.max,
+						  sizeof(u16), GFP_KERNEL);
+		if (!ipolicer->match_id)
+			return -ENOMEM;
+
+		for (prof_idx = 0;
+		     prof_idx < ipolicer->band_prof.max; prof_idx++) {
+			/* Set AF as current owner for INIT ops to succeed */
+			ipolicer->pfvf_map[prof_idx] = 0x00;
+
+			/* There is no enable bit in the profile context,
+			 * hence no context disable. So just INIT them here
+			 * so that the PF/VF later on only has to do a WRITE
+			 * to set up policer rates and config.
+			 */
+			err = nix_init_policer_context(rvu, nix_hw,
+						       layer, prof_idx);
+			if (err)
+				return err;
+		}
+
+		/* Allocate memory for maintaining ref_counts for MID level
+		 * profiles, this will be needed for leaf layer profiles'
+		 * aggregation.
+		 */
+		if (layer != BAND_PROF_MID_LAYER)
+			continue;
+
+		ipolicer->ref_count = devm_kcalloc(rvu->dev,
+						   ipolicer->band_prof.max,
+						   sizeof(u16), GFP_KERNEL);
+	}
+
+	/* Set policer timeunit to 2us, i.e. (19 + 1) * 100 nsec = 2us */
+	rvu_write64(rvu, blkaddr, NIX_AF_PL_TS, 19);
+
+	nix_config_rx_pkt_policer_precolor(rvu, blkaddr);
+
+	return 0;
+}
+
+static void nix_ipolicer_freemem(struct nix_hw *nix_hw)
+{
+	struct nix_ipolicer *ipolicer;
+	int layer;
+
+	for (layer = 0; layer < BAND_PROF_NUM_LAYERS; layer++) {
+		ipolicer = &nix_hw->ipolicer[layer];
+
+		if (!ipolicer->band_prof.max)
+			continue;
+
+		kfree(ipolicer->band_prof.bmap);
+	}
+}
+
+static int nix_verify_bandprof(struct nix_cn10k_aq_enq_req *req,
+			       struct nix_hw *nix_hw, u16 pcifunc)
+{
+	struct nix_ipolicer *ipolicer;
+	int layer, hi_layer, prof_idx;
+
+	/* Bits [15:14] in profile index represent layer */
+	layer = (req->qidx >> 14) & 0x03;
+	prof_idx = req->qidx & 0x3FFF;
+
+	ipolicer = &nix_hw->ipolicer[layer];
+	if (prof_idx >= ipolicer->band_prof.max)
+		return -EINVAL;
+
+	/* Check if the profile is allocated to the requesting PCIFUNC or not
+	 * with the exception of AF. AF is allowed to read and update contexts.
+	 */
+	if (pcifunc && ipolicer->pfvf_map[prof_idx] != pcifunc)
+		return -EINVAL;
+
+	/* If this profile is linked to higher layer profile then check
+	 * if that profile is also allocated to the requesting PCIFUNC
+	 * or not.
+	 */
+	if (!req->prof.hl_en)
+		return 0;
+
+	/* Leaf layer profile can link only to mid layer and
+	 * mid layer to top layer.
+	 */
+	if (layer == BAND_PROF_LEAF_LAYER)
+		hi_layer = BAND_PROF_MID_LAYER;
+	else if (layer == BAND_PROF_MID_LAYER)
+		hi_layer = BAND_PROF_TOP_LAYER;
+	else
+		return -EINVAL;
+
+	ipolicer = &nix_hw->ipolicer[hi_layer];
+	prof_idx = req->prof.band_prof_id;
+	if (prof_idx >= ipolicer->band_prof.max ||
+	    ipolicer->pfvf_map[prof_idx] != pcifunc)
+		return -EINVAL;
+
+	return 0;
+}
+
+int rvu_mbox_handler_nix_bandprof_alloc(struct rvu *rvu,
+					struct nix_bandprof_alloc_req *req,
+					struct nix_bandprof_alloc_rsp *rsp)
+{
+	int blkaddr, layer, prof, idx, err;
+	u16 pcifunc = req->hdr.pcifunc;
+	struct nix_ipolicer *ipolicer;
+	struct nix_hw *nix_hw;
+
+	if (!rvu->hw->cap.ipolicer)
+		return NIX_AF_ERR_IPOLICER_NOTSUPP;
+
+	err = nix_get_struct_ptrs(rvu, pcifunc, &nix_hw, &blkaddr);
+	if (err)
+		return err;
+
+	mutex_lock(&rvu->rsrc_lock);
+	for (layer = 0; layer < BAND_PROF_NUM_LAYERS; layer++) {
+		if (layer == BAND_PROF_INVAL_LAYER)
+			continue;
+		if (!req->prof_count[layer])
+			continue;
+
+		ipolicer = &nix_hw->ipolicer[layer];
+		for (idx = 0; idx < req->prof_count[layer]; idx++) {
+			/* Allocate a max of 'MAX_BANDPROF_PER_PFFUNC' profiles */
+			if (idx == MAX_BANDPROF_PER_PFFUNC)
+				break;
+
+			prof = rvu_alloc_rsrc(&ipolicer->band_prof);
+			if (prof < 0)
+				break;
+			rsp->prof_count[layer]++;
+			rsp->prof_idx[layer][idx] = prof;
+			ipolicer->pfvf_map[prof] = pcifunc;
+		}
+	}
+	mutex_unlock(&rvu->rsrc_lock);
+	return 0;
+}
+
+static int nix_free_all_bandprof(struct rvu *rvu, u16 pcifunc)
+{
+	int blkaddr, layer, prof_idx, err;
+	struct nix_ipolicer *ipolicer;
+	struct nix_hw *nix_hw;
+
+	if (!rvu->hw->cap.ipolicer)
+		return NIX_AF_ERR_IPOLICER_NOTSUPP;
+
+	err = nix_get_struct_ptrs(rvu, pcifunc, &nix_hw, &blkaddr);
+	if (err)
+		return err;
+
+	mutex_lock(&rvu->rsrc_lock);
+	/* Free all the profiles allocated to the PCIFUNC */
+	for (layer = 0; layer < BAND_PROF_NUM_LAYERS; layer++) {
+		if (layer == BAND_PROF_INVAL_LAYER)
+			continue;
+		ipolicer = &nix_hw->ipolicer[layer];
+
+		for (prof_idx = 0; prof_idx < ipolicer->band_prof.max; prof_idx++) {
+			if (ipolicer->pfvf_map[prof_idx] != pcifunc)
+				continue;
+
+			/* Clear ratelimit aggregation, if any */
+			if (layer == BAND_PROF_LEAF_LAYER &&
+			    ipolicer->match_id[prof_idx])
+				nix_clear_ratelimit_aggr(rvu, nix_hw, prof_idx);
+
+			ipolicer->pfvf_map[prof_idx] = 0x00;
+			ipolicer->match_id[prof_idx] = 0;
+			rvu_free_rsrc(&ipolicer->band_prof, prof_idx);
+		}
+	}
+	mutex_unlock(&rvu->rsrc_lock);
+	return 0;
+}
+
+int rvu_mbox_handler_nix_bandprof_free(struct rvu *rvu,
+				       struct nix_bandprof_free_req *req,
+				       struct msg_rsp *rsp)
+{
+	int blkaddr, layer, prof_idx, idx, err;
+	u16 pcifunc = req->hdr.pcifunc;
+	struct nix_ipolicer *ipolicer;
+	struct nix_hw *nix_hw;
+
+	if (req->free_all)
+		return nix_free_all_bandprof(rvu, pcifunc);
+
+	if (!rvu->hw->cap.ipolicer)
+		return NIX_AF_ERR_IPOLICER_NOTSUPP;
+
+	err = nix_get_struct_ptrs(rvu, pcifunc, &nix_hw, &blkaddr);
+	if (err)
+		return err;
+
+	mutex_lock(&rvu->rsrc_lock);
+	/* Free the requested profile indices */
+	for (layer = 0; layer < BAND_PROF_NUM_LAYERS; layer++) {
+		if (layer == BAND_PROF_INVAL_LAYER)
+			continue;
+		if (!req->prof_count[layer])
+			continue;
+
+		ipolicer = &nix_hw->ipolicer[layer];
+		for (idx = 0; idx < req->prof_count[layer]; idx++) {
+			if (idx == MAX_BANDPROF_PER_PFFUNC)
+				break;
+			prof_idx = req->prof_idx[layer][idx];
+			if (prof_idx >= ipolicer->band_prof.max ||
+			    ipolicer->pfvf_map[prof_idx] != pcifunc)
+				continue;
+
+			/* Clear ratelimit aggregation, if any */
+			if (layer == BAND_PROF_LEAF_LAYER &&
+			    ipolicer->match_id[prof_idx])
+				nix_clear_ratelimit_aggr(rvu, nix_hw, prof_idx);
+
+			ipolicer->pfvf_map[prof_idx] = 0x00;
+			ipolicer->match_id[prof_idx] = 0;
+			rvu_free_rsrc(&ipolicer->band_prof, prof_idx);
+		}
+	}
+	mutex_unlock(&rvu->rsrc_lock);
+	return 0;
+}
+
+static int nix_aq_context_read(struct rvu *rvu, struct nix_hw *nix_hw,
+			       struct nix_cn10k_aq_enq_req *aq_req,
+			       struct nix_cn10k_aq_enq_rsp *aq_rsp,
+			       u16 pcifunc, u8 ctype, u32 qidx)
+{
+	memset(aq_req, 0, sizeof(struct nix_cn10k_aq_enq_req));
+	aq_req->hdr.pcifunc = pcifunc;
+	aq_req->ctype = ctype;
+	aq_req->op = NIX_AQ_INSTOP_READ;
+	aq_req->qidx = qidx;
+
+	return rvu_nix_blk_aq_enq_inst(rvu, nix_hw,
+				       (struct nix_aq_enq_req *)aq_req,
+				       (struct nix_aq_enq_rsp *)aq_rsp);
+}
+
+static int nix_ipolicer_map_leaf_midprofs(struct rvu *rvu,
+					  struct nix_hw *nix_hw,
+					  struct nix_cn10k_aq_enq_req *aq_req,
+					  struct nix_cn10k_aq_enq_rsp *aq_rsp,
+					  u32 leaf_prof, u16 mid_prof)
+{
+	memset(aq_req, 0, sizeof(struct nix_cn10k_aq_enq_req));
+	aq_req->hdr.pcifunc = 0x00;
+	aq_req->ctype = NIX_AQ_CTYPE_BANDPROF;
+	aq_req->op = NIX_AQ_INSTOP_WRITE;
+	aq_req->qidx = leaf_prof;
+
+	aq_req->prof.band_prof_id = mid_prof;
+	aq_req->prof_mask.band_prof_id = GENMASK(6, 0);
+	aq_req->prof.hl_en = 1;
+	aq_req->prof_mask.hl_en = 1;
+
+	return rvu_nix_blk_aq_enq_inst(rvu, nix_hw,
+				       (struct nix_aq_enq_req *)aq_req,
+				       (struct nix_aq_enq_rsp *)aq_rsp);
+}
+
+int rvu_nix_setup_ratelimit_aggr(struct rvu *rvu, u16 pcifunc,
+				 u16 rq_idx, u16 match_id)
+{
+	int leaf_prof, mid_prof, leaf_match;
+	struct nix_cn10k_aq_enq_req aq_req;
+	struct nix_cn10k_aq_enq_rsp aq_rsp;
+	struct nix_ipolicer *ipolicer;
+	struct nix_hw *nix_hw;
+	int blkaddr, idx, rc;
+
+	if (!rvu->hw->cap.ipolicer)
+		return 0;
+
+	rc = nix_get_struct_ptrs(rvu, pcifunc, &nix_hw, &blkaddr);
+	if (rc)
+		return rc;
+
+	/* Fetch the RQ's context to see if policing is enabled */
+	rc = nix_aq_context_read(rvu, nix_hw, &aq_req, &aq_rsp, pcifunc,
+				 NIX_AQ_CTYPE_RQ, rq_idx);
+	if (rc) {
+		dev_err(rvu->dev,
+			"%s: Failed to fetch RQ%d context of PFFUNC 0x%x\n",
+			__func__, rq_idx, pcifunc);
+		return rc;
+	}
+
+	if (!aq_rsp.rq.policer_ena)
+		return 0;
+
+	/* Get the bandwidth profile ID mapped to this RQ */
+	leaf_prof = aq_rsp.rq.band_prof_id;
+
+	ipolicer = &nix_hw->ipolicer[BAND_PROF_LEAF_LAYER];
+	ipolicer->match_id[leaf_prof] = match_id;
+
+	/* Check if any other leaf profile is marked with same match_id */
+	for (idx = 0; idx < ipolicer->band_prof.max; idx++) {
+		if (idx == leaf_prof)
+			continue;
+		if (ipolicer->match_id[idx] != match_id)
+			continue;
+
+		leaf_match = idx;
+		break;
+	}
+
+	if (idx == ipolicer->band_prof.max)
+		return 0;
+
+	/* Fetch the matching profile's context to check if it's already
+	 * mapped to a mid level profile.
+	 */
+	rc = nix_aq_context_read(rvu, nix_hw, &aq_req, &aq_rsp, 0x00,
+				 NIX_AQ_CTYPE_BANDPROF, leaf_match);
+	if (rc) {
+		dev_err(rvu->dev,
+			"%s: Failed to fetch context of leaf profile %d\n",
+			__func__, leaf_match);
+		return rc;
+	}
+
+	ipolicer = &nix_hw->ipolicer[BAND_PROF_MID_LAYER];
+	if (aq_rsp.prof.hl_en) {
+		/* Get Mid layer prof index and map leaf_prof index
+		 * also such that flows that are being steered
+		 * to different RQs and marked with same match_id
+		 * are rate limited in an aggregate fashion
+		 */
+		mid_prof = aq_rsp.prof.band_prof_id;
+		rc = nix_ipolicer_map_leaf_midprofs(rvu, nix_hw,
+						    &aq_req, &aq_rsp,
+						    leaf_prof, mid_prof);
+		if (rc) {
+			dev_err(rvu->dev,
+				"%s: Failed to map leaf(%d) and mid(%d) profiles\n",
+				__func__, leaf_prof, mid_prof);
+			goto exit;
+		}
+
+		mutex_lock(&rvu->rsrc_lock);
+		ipolicer->ref_count[mid_prof]++;
+		mutex_unlock(&rvu->rsrc_lock);
+		goto exit;
+	}
+
+	/* Allocate a mid layer profile and
+	 * map both 'leaf_prof' and 'leaf_match' profiles to it.
+	 */
+	mutex_lock(&rvu->rsrc_lock);
+	rc = mid_prof = rvu_alloc_rsrc(&ipolicer->band_prof);
+	if (mid_prof < 0) {
+		dev_err(rvu->dev,
+			"%s: Unable to allocate mid layer profile\n", __func__);
+		mutex_unlock(&rvu->rsrc_lock);
+		goto exit;
+	}
+	mutex_unlock(&rvu->rsrc_lock);
+	ipolicer->pfvf_map[mid_prof] = 0x00;
+	ipolicer->ref_count[mid_prof] = 0;
+
+	/* Initialize mid layer profile same as 'leaf_prof' */
+	rc = nix_aq_context_read(rvu, nix_hw, &aq_req, &aq_rsp, 0x00,
+				 NIX_AQ_CTYPE_BANDPROF, leaf_prof);
+	if (rc) {
+		dev_err(rvu->dev,
+			"%s: Failed to fetch context of leaf profile %d\n",
+			__func__, leaf_prof);
+		goto exit;
+	}
+
+	memset(&aq_req, 0, sizeof(struct nix_cn10k_aq_enq_req));
+	aq_req.hdr.pcifunc = 0x00;
+	aq_req.qidx = (mid_prof & 0x3FFF) | (BAND_PROF_MID_LAYER << 14);
+	aq_req.ctype = NIX_AQ_CTYPE_BANDPROF;
+	aq_req.op = NIX_AQ_INSTOP_WRITE;
+	memcpy(&aq_req.prof, &aq_rsp.prof, sizeof(struct nix_bandprof_s));
+	/* Clear higher layer enable bit in the mid profile, just in case */
+	aq_req.prof.hl_en = 0;
+	aq_req.prof_mask.hl_en = 1;
+
+	rc = rvu_nix_blk_aq_enq_inst(rvu, nix_hw,
+				     (struct nix_aq_enq_req *)&aq_req, NULL);
+	if (rc) {
+		dev_err(rvu->dev,
+			"%s: Failed to INIT context of mid layer profile %d\n",
+			__func__, mid_prof);
+		goto exit;
+	}
+
+	/* Map both leaf profiles to this mid layer profile */
+	rc = nix_ipolicer_map_leaf_midprofs(rvu, nix_hw,
+					    &aq_req, &aq_rsp,
+					    leaf_prof, mid_prof);
+	if (rc) {
+		dev_err(rvu->dev,
+			"%s: Failed to map leaf(%d) and mid(%d) profiles\n",
+			__func__, leaf_prof, mid_prof);
+		goto exit;
+	}
+
+	mutex_lock(&rvu->rsrc_lock);
+	ipolicer->ref_count[mid_prof]++;
+	mutex_unlock(&rvu->rsrc_lock);
+
+	rc = nix_ipolicer_map_leaf_midprofs(rvu, nix_hw,
+					    &aq_req, &aq_rsp,
+					    leaf_match, mid_prof);
+	if (rc) {
+		dev_err(rvu->dev,
+			"%s: Failed to map leaf(%d) and mid(%d) profiles\n",
+			__func__, leaf_match, mid_prof);
+		ipolicer->ref_count[mid_prof]--;
+		goto exit;
+	}
+
+	mutex_lock(&rvu->rsrc_lock);
+	ipolicer->ref_count[mid_prof]++;
+	mutex_unlock(&rvu->rsrc_lock);
+
+exit:
+	return rc;
+}
+
+/* Called with mutex rsrc_lock */
+static void nix_clear_ratelimit_aggr(struct rvu *rvu, struct nix_hw *nix_hw,
+				     u32 leaf_prof)
+{
+	struct nix_cn10k_aq_enq_req aq_req;
+	struct nix_cn10k_aq_enq_rsp aq_rsp;
+	struct nix_ipolicer *ipolicer;
+	u16 mid_prof;
+	int rc;
+
+	mutex_unlock(&rvu->rsrc_lock);
+
+	rc = nix_aq_context_read(rvu, nix_hw, &aq_req, &aq_rsp, 0x00,
+				 NIX_AQ_CTYPE_BANDPROF, leaf_prof);
+
+	mutex_lock(&rvu->rsrc_lock);
+	if (rc) {
+		dev_err(rvu->dev,
+			"%s: Failed to fetch context of leaf profile %d\n",
+			__func__, leaf_prof);
+		return;
+	}
+
+	if (!aq_rsp.prof.hl_en)
+		return;
+
+	mid_prof = aq_rsp.prof.band_prof_id;
+	ipolicer = &nix_hw->ipolicer[BAND_PROF_MID_LAYER];
+	ipolicer->ref_count[mid_prof]--;
+	/* If ref_count is zero, free mid layer profile */
+	if (!ipolicer->ref_count[mid_prof]) {
+		ipolicer->pfvf_map[mid_prof] = 0x00;
+		rvu_free_rsrc(&ipolicer->band_prof, mid_prof);
+	}
+}
diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu_npc_fs.c b/drivers/net/ethernet/marvell/octeontx2/af/rvu_npc_fs.c
index 6ba6a83..87d7c6a 100644
--- a/drivers/net/ethernet/marvell/octeontx2/af/rvu_npc_fs.c
+++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu_npc_fs.c
@@ -1110,6 +1110,11 @@ static int npc_install_flow(struct rvu *rvu, int blkaddr, u16 target,
 	    req->vtag0_type == NIX_AF_LFX_RX_VTAG_TYPE7)
 		rule->vfvlan_cfg = true;
 
+	if (is_npc_intf_rx(req->intf) && req->match_id &&
+	    (req->op == NIX_RX_ACTIONOP_UCAST || req->op == NIX_RX_ACTIONOP_RSS))
+		return rvu_nix_setup_ratelimit_aggr(rvu, req->hdr.pcifunc,
+					     req->index, req->match_id);
+
 	return 0;
 }
 
diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu_reg.h b/drivers/net/ethernet/marvell/octeontx2/af/rvu_reg.h
index ce365ae..76837d5 100644
--- a/drivers/net/ethernet/marvell/octeontx2/af/rvu_reg.h
+++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu_reg.h
@@ -171,6 +171,7 @@
 #define NIX_AF_SQ_CONST			(0x0040)
 #define NIX_AF_CQ_CONST			(0x0048)
 #define NIX_AF_RQ_CONST			(0x0050)
+#define NIX_AF_PL_CONST			(0x0058)
 #define NIX_AF_PSE_CONST		(0x0060)
 #define NIX_AF_TL1_CONST		(0x0070)
 #define NIX_AF_TL2_CONST		(0x0078)
@@ -181,6 +182,7 @@
 #define NIX_AF_LSO_CFG			(0x00A8)
 #define NIX_AF_BLK_RST			(0x00B0)
 #define NIX_AF_TX_TSTMP_CFG		(0x00C0)
+#define NIX_AF_PL_TS			(0x00C8)
 #define NIX_AF_RX_CFG			(0x00D0)
 #define NIX_AF_AVG_DELAY		(0x00E0)
 #define NIX_AF_CINT_DELAY		(0x00F0)
@@ -212,7 +214,9 @@
 #define NIX_AF_RX_DEF_OL2		(0x0200)
 #define NIX_AF_RX_DEF_OIP4		(0x0210)
 #define NIX_AF_RX_DEF_IIP4		(0x0220)
+#define NIX_AF_RX_DEF_VLAN0_PCP_DEI	(0x0228)
 #define NIX_AF_RX_DEF_OIP6		(0x0230)
+#define NIX_AF_RX_DEF_VLAN1_PCP_DEI	(0x0238)
 #define NIX_AF_RX_DEF_IIP6		(0x0240)
 #define NIX_AF_RX_DEF_OTCP		(0x0250)
 #define NIX_AF_RX_DEF_ITCP		(0x0260)
@@ -223,6 +227,10 @@
 #define NIX_AF_RX_DEF_ISCTP		(0x02A0)
 #define NIX_AF_RX_DEF_IPSECX		(0x02B0)
 #define NIX_AF_RX_DEF_CST_APAD1		(0x02A8)
+#define NIX_AF_RX_DEF_IIP4_DSCP		(0x02E0)
+#define NIX_AF_RX_DEF_OIP4_DSCP		(0x02E8)
+#define NIX_AF_RX_DEF_IIP6_DSCP		(0x02F0)
+#define NIX_AF_RX_DEF_OIP6_DSCP		(0x02F8)
 #define NIX_AF_RX_IPSEC_GEN_CFG		(0x0300)
 #define NIX_AF_RX_CPTX_INST_ADDR	(0x0310)
 #define NIX_AF_NDC_TX_SYNC		(0x03F0)
diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu_struct.h b/drivers/net/ethernet/marvell/octeontx2/af/rvu_struct.h
index 5e5f45c..8fb002d 100644
--- a/drivers/net/ethernet/marvell/octeontx2/af/rvu_struct.h
+++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu_struct.h
@@ -286,7 +286,7 @@ enum nix_aq_ctype {
 	NIX_AQ_CTYPE_MCE  = 0x3,
 	NIX_AQ_CTYPE_RSS  = 0x4,
 	NIX_AQ_CTYPE_DYNO = 0x5,
-	NIX_AQ_CTYPE_BAND_PROF = 0x6,
+	NIX_AQ_CTYPE_BANDPROF = 0x6,
 };
 
 /* NIX admin queue instruction opcodes */
@@ -665,6 +665,82 @@ struct nix_rx_mce_s {
 	uint64_t next       : 16;
 };
 
+enum nix_band_prof_layers {
+	BAND_PROF_LEAF_LAYER = 0,
+	BAND_PROF_INVAL_LAYER = 1,
+	BAND_PROF_MID_LAYER = 2,
+	BAND_PROF_TOP_LAYER = 3,
+	BAND_PROF_NUM_LAYERS = 4,
+};
+
+enum NIX_RX_BAND_PROF_ACTIONRESULT_E {
+	NIX_RX_BAND_PROF_ACTIONRESULT_PASS = 0x0,
+	NIX_RX_BAND_PROF_ACTIONRESULT_DROP = 0x1,
+	NIX_RX_BAND_PROF_ACTIONRESULT_RED = 0x2,
+};
+
+/* NIX ingress policer bandwidth profile structure */
+struct nix_bandprof_s {
+	uint64_t pc_mode                     :  2; /* W0 */
+	uint64_t icolor                      :  2;
+	uint64_t tnl_ena                     :  1;
+	uint64_t reserved_5_7                :  3;
+	uint64_t peir_exponent               :  5;
+	uint64_t reserved_13_15              :  3;
+	uint64_t pebs_exponent               :  5;
+	uint64_t reserved_21_23              :  3;
+	uint64_t cir_exponent                :  5;
+	uint64_t reserved_29_31              :  3;
+	uint64_t cbs_exponent                :  5;
+	uint64_t reserved_37_39              :  3;
+	uint64_t peir_mantissa               :  8;
+	uint64_t pebs_mantissa               :  8;
+	uint64_t cir_mantissa                :  8;
+	uint64_t cbs_mantissa                :  8; /* W1 */
+	uint64_t lmode                       :  1;
+	uint64_t l_sellect                   :  3;
+	uint64_t rdiv                        :  4;
+	uint64_t adjust_exponent             :  5;
+	uint64_t reserved_85_86              :  2;
+	uint64_t adjust_mantissa             :  9;
+	uint64_t gc_action                   :  2;
+	uint64_t yc_action                   :  2;
+	uint64_t rc_action                   :  2;
+	uint64_t meter_algo                  :  2;
+	uint64_t band_prof_id                :  7;
+	uint64_t reserved_111_118            :  8;
+	uint64_t hl_en                       :  1;
+	uint64_t reserved_120_127            :  8;
+	uint64_t ts                          : 48; /* W2 */
+	uint64_t reserved_176_191            : 16;
+	uint64_t pe_accum                    : 32; /* W3 */
+	uint64_t c_accum                     : 32;
+	uint64_t green_pkt_pass              : 48; /* W4 */
+	uint64_t reserved_304_319            : 16;
+	uint64_t yellow_pkt_pass             : 48; /* W5 */
+	uint64_t reserved_368_383            : 16;
+	uint64_t red_pkt_pass                : 48; /* W6 */
+	uint64_t reserved_432_447            : 16;
+	uint64_t green_octs_pass             : 48; /* W7 */
+	uint64_t reserved_496_511            : 16;
+	uint64_t yellow_octs_pass            : 48; /* W8 */
+	uint64_t reserved_560_575            : 16;
+	uint64_t red_octs_pass               : 48; /* W9 */
+	uint64_t reserved_624_639            : 16;
+	uint64_t green_pkt_drop              : 48; /* W10 */
+	uint64_t reserved_688_703            : 16;
+	uint64_t yellow_pkt_drop             : 48; /* W11 */
+	uint64_t reserved_752_767            : 16;
+	uint64_t red_pkt_drop                : 48; /* W12 */
+	uint64_t reserved_816_831            : 16;
+	uint64_t green_octs_drop             : 48; /* W13 */
+	uint64_t reserved_880_895            : 16;
+	uint64_t yellow_octs_drop            : 48; /* W14 */
+	uint64_t reserved_944_959            : 16;
+	uint64_t red_octs_drop               : 48; /* W15 */
+	uint64_t reserved_1008_1023          : 16;
+};
+
 enum nix_lsoalg {
 	NIX_LSOALG_NOP,
 	NIX_LSOALG_ADD_SEGNUM,
-- 
2.7.4


^ permalink raw reply related	[flat|nested] 9+ messages in thread

* [net-next PATCH 2/5] octeontx2-af: cn10k: Debugfs support for bandwidth profiles
  2021-06-15 11:34 [net-next PATCH 0/5] Add ingress ratelimit offload Subbaraya Sundeep
  2021-06-15 11:34 ` [net-next PATCH 1/5] octeontx2-af: cn10k: Bandwidth profiles config support Subbaraya Sundeep
@ 2021-06-15 11:34 ` Subbaraya Sundeep
  2021-06-15 11:34 ` [net-next PATCH 3/5] octeontx2-pf: TC_MATCHALL ingress ratelimiting offload Subbaraya Sundeep
                   ` (3 subsequent siblings)
  5 siblings, 0 replies; 9+ messages in thread
From: Subbaraya Sundeep @ 2021-06-15 11:34 UTC (permalink / raw)
  To: netdev, davem, kuba; +Cc: sgoutham, hkelam, gakula, Subbaraya Sundeep

From: Sunil Goutham <sgoutham@marvell.com>

Add support for dumping the current resource status of bandwidth
profiles and the contexts of allocated profiles via debugfs.
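
Two files are added under the NIX debugfs directory:
'ingress_policer_rsrc' for per-layer resource counts and
'ingress_policer_ctx' for the contexts of in-use profiles.
Reading the resource file produces output of the shape below
(the counts here are illustrative, not from real hardware):

	Bandwidth profile resource free count
	=====================================
	Leaf :: Max: 1024  Free: 1023
	Mid  :: Max:  256  Free:  255
	Top  :: Max:   64  Free:   64
	=====================================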

Signed-off-by: Sunil Goutham <sgoutham@marvell.com>
Signed-off-by: Subbaraya Sundeep <sbhatta@marvell.com>
---
 drivers/net/ethernet/marvell/octeontx2/af/rvu.c    |   8 +
 drivers/net/ethernet/marvell/octeontx2/af/rvu.h    |   5 +
 .../ethernet/marvell/octeontx2/af/rvu_debugfs.c    | 163 +++++++++++++++++++++
 .../net/ethernet/marvell/octeontx2/af/rvu_nix.c    |   8 +-
 .../net/ethernet/marvell/octeontx2/af/rvu_struct.h |   7 +
 5 files changed, 187 insertions(+), 4 deletions(-)

diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu.c b/drivers/net/ethernet/marvell/octeontx2/af/rvu.c
index f11a02d..0b09294 100644
--- a/drivers/net/ethernet/marvell/octeontx2/af/rvu.c
+++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu.c
@@ -184,6 +184,14 @@ int rvu_rsrc_free_count(struct rsrc_bmap *rsrc)
 	return (rsrc->max - used);
 }
 
+bool is_rsrc_free(struct rsrc_bmap *rsrc, int id)
+{
+	if (!rsrc->bmap)
+		return false;
+
+	return !test_bit(id, rsrc->bmap);
+}
+
 int rvu_alloc_bitmap(struct rsrc_bmap *rsrc)
 {
 	rsrc->bmap = kcalloc(BITS_TO_LONGS(rsrc->max),
diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu.h b/drivers/net/ethernet/marvell/octeontx2/af/rvu.h
index 4d2a5ca..9e5d9ba6 100644
--- a/drivers/net/ethernet/marvell/octeontx2/af/rvu.h
+++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu.h
@@ -596,6 +596,7 @@ static inline bool is_rvu_fwdata_valid(struct rvu *rvu)
 int rvu_alloc_bitmap(struct rsrc_bmap *rsrc);
 int rvu_alloc_rsrc(struct rsrc_bmap *rsrc);
 void rvu_free_rsrc(struct rsrc_bmap *rsrc, int id);
+bool is_rsrc_free(struct rsrc_bmap *rsrc, int id);
 int rvu_rsrc_free_count(struct rsrc_bmap *rsrc);
 int rvu_alloc_rsrc_contig(struct rsrc_bmap *rsrc, int nrsrc);
 bool rvu_rsrc_check_contig(struct rsrc_bmap *rsrc, int nrsrc);
@@ -683,6 +684,10 @@ int nix_get_struct_ptrs(struct rvu *rvu, u16 pcifunc,
 			struct nix_hw **nix_hw, int *blkaddr);
 int rvu_nix_setup_ratelimit_aggr(struct rvu *rvu, u16 pcifunc,
 				 u16 rq_idx, u16 match_id);
+int nix_aq_context_read(struct rvu *rvu, struct nix_hw *nix_hw,
+			struct nix_cn10k_aq_enq_req *aq_req,
+			struct nix_cn10k_aq_enq_rsp *aq_rsp,
+			u16 pcifunc, u8 ctype, u32 qidx);
 
 /* NPC APIs */
 int rvu_npc_init(struct rvu *rvu);
diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu_debugfs.c b/drivers/net/ethernet/marvell/octeontx2/af/rvu_debugfs.c
index 7103f82..3cc3c6f 100644
--- a/drivers/net/ethernet/marvell/octeontx2/af/rvu_debugfs.c
+++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu_debugfs.c
@@ -1632,6 +1632,165 @@ static int rvu_dbg_nix_qsize_display(struct seq_file *filp, void *unused)
 
 RVU_DEBUG_SEQ_FOPS(nix_qsize, nix_qsize_display, nix_qsize_write);
 
+static void print_band_prof_ctx(struct seq_file *m,
+				struct nix_bandprof_s *prof)
+{
+	char *str;
+
+	switch (prof->pc_mode) {
+	case NIX_RX_PC_MODE_VLAN:
+		str = "VLAN";
+		break;
+	case NIX_RX_PC_MODE_DSCP:
+		str = "DSCP";
+		break;
+	case NIX_RX_PC_MODE_GEN:
+		str = "Generic";
+		break;
+	case NIX_RX_PC_MODE_RSVD:
+		str = "Reserved";
+		break;
+	}
+	seq_printf(m, "W0: pc_mode\t\t%s\n", str);
+	str = (prof->icolor == 3) ? "Color blind" :
+		(prof->icolor == 0) ? "Green" :
+		(prof->icolor == 1) ? "Yellow" : "Red";
+	seq_printf(m, "W0: icolor\t\t%s\n", str);
+	seq_printf(m, "W0: tnl_ena\t\t%d\n", prof->tnl_ena);
+	seq_printf(m, "W0: peir_exponent\t%d\n", prof->peir_exponent);
+	seq_printf(m, "W0: pebs_exponent\t%d\n", prof->pebs_exponent);
+	seq_printf(m, "W0: cir_exponent\t%d\n", prof->cir_exponent);
+	seq_printf(m, "W0: cbs_exponent\t%d\n", prof->cbs_exponent);
+	seq_printf(m, "W0: peir_mantissa\t%d\n", prof->peir_mantissa);
+	seq_printf(m, "W0: pebs_mantissa\t%d\n", prof->pebs_mantissa);
+	seq_printf(m, "W0: cir_mantissa\t%d\n", prof->cir_mantissa);
+
+	seq_printf(m, "W1: cbs_mantissa\t%d\n", prof->cbs_mantissa);
+	str = (prof->lmode == 0) ? "byte" : "packet";
+	seq_printf(m, "W1: lmode\t\t%s\n", str);
+	seq_printf(m, "W1: l_select\t\t%d\n", prof->l_sellect);
+	seq_printf(m, "W1: rdiv\t\t%d\n", prof->rdiv);
+	seq_printf(m, "W1: adjust_exponent\t%d\n", prof->adjust_exponent);
+	seq_printf(m, "W1: adjust_mantissa\t%d\n", prof->adjust_mantissa);
+	str = (prof->gc_action == 0) ? "PASS" :
+		(prof->gc_action == 1) ? "DROP" : "RED";
+	seq_printf(m, "W1: gc_action\t\t%s\n", str);
+	str = (prof->yc_action == 0) ? "PASS" :
+		(prof->yc_action == 1) ? "DROP" : "RED";
+	seq_printf(m, "W1: yc_action\t\t%s\n", str);
+	str = (prof->rc_action == 0) ? "PASS" :
+		(prof->rc_action == 1) ? "DROP" : "RED";
+	seq_printf(m, "W1: rc_action\t\t%s\n", str);
+	seq_printf(m, "W1: meter_algo\t\t%d\n", prof->meter_algo);
+	seq_printf(m, "W1: band_prof_id\t%d\n", prof->band_prof_id);
+	seq_printf(m, "W1: hl_en\t\t%d\n", prof->hl_en);
+
+	seq_printf(m, "W2: ts\t\t\t%lld\n", (u64)prof->ts);
+	seq_printf(m, "W3: pe_accum\t\t%d\n", prof->pe_accum);
+	seq_printf(m, "W3: c_accum\t\t%d\n", prof->c_accum);
+	seq_printf(m, "W4: green_pkt_pass\t%lld\n",
+		   (u64)prof->green_pkt_pass);
+	seq_printf(m, "W5: yellow_pkt_pass\t%lld\n",
+		   (u64)prof->yellow_pkt_pass);
+	seq_printf(m, "W6: red_pkt_pass\t%lld\n", (u64)prof->red_pkt_pass);
+	seq_printf(m, "W7: green_octs_pass\t%lld\n",
+		   (u64)prof->green_octs_pass);
+	seq_printf(m, "W8: yellow_octs_pass\t%lld\n",
+		   (u64)prof->yellow_octs_pass);
+	seq_printf(m, "W9: red_octs_pass\t%lld\n", (u64)prof->red_octs_pass);
+	seq_printf(m, "W10: green_pkt_drop\t%lld\n",
+		   (u64)prof->green_pkt_drop);
+	seq_printf(m, "W11: yellow_pkt_drop\t%lld\n",
+		   (u64)prof->yellow_pkt_drop);
+	seq_printf(m, "W12: red_pkt_drop\t%lld\n", (u64)prof->red_pkt_drop);
+	seq_printf(m, "W13: green_octs_drop\t%lld\n",
+		   (u64)prof->green_octs_drop);
+	seq_printf(m, "W14: yellow_octs_drop\t%lld\n",
+		   (u64)prof->yellow_octs_drop);
+	seq_printf(m, "W15: red_octs_drop\t%lld\n", (u64)prof->red_octs_drop);
+	seq_puts(m, "==============================\n");
+}
+
+static int rvu_dbg_nix_band_prof_ctx_display(struct seq_file *m, void *unused)
+{
+	struct nix_hw *nix_hw = m->private;
+	struct nix_cn10k_aq_enq_req aq_req;
+	struct nix_cn10k_aq_enq_rsp aq_rsp;
+	struct rvu *rvu = nix_hw->rvu;
+	struct nix_ipolicer *ipolicer;
+	int layer, prof_idx, idx, rc;
+	u16 pcifunc;
+	char *str;
+
+	for (layer = 0; layer < BAND_PROF_NUM_LAYERS; layer++) {
+		if (layer == BAND_PROF_INVAL_LAYER)
+			continue;
+		str = (layer == BAND_PROF_LEAF_LAYER) ? "Leaf" :
+			(layer == BAND_PROF_MID_LAYER) ? "Mid" : "Top";
+
+		seq_printf(m, "\n%s bandwidth profiles\n", str);
+		seq_puts(m, "=======================\n");
+
+		ipolicer = &nix_hw->ipolicer[layer];
+
+		for (idx = 0; idx < ipolicer->band_prof.max; idx++) {
+			if (is_rsrc_free(&ipolicer->band_prof, idx))
+				continue;
+
+			prof_idx = (idx & 0x3FFF) | (layer << 14);
+			rc = nix_aq_context_read(rvu, nix_hw, &aq_req, &aq_rsp,
+						 0x00, NIX_AQ_CTYPE_BANDPROF,
+						 prof_idx);
+			if (rc) {
+				dev_err(rvu->dev,
+					"%s: Failed to fetch context of %s profile %d, err %d\n",
+					__func__, str, idx, rc);
+				return 0;
+			}
+			seq_printf(m, "\n%s bandwidth profile:: %d\n", str, idx);
+			pcifunc = ipolicer->pfvf_map[idx];
+			if (!(pcifunc & RVU_PFVF_FUNC_MASK))
+				seq_printf(m, "Allocated to :: PF %d\n",
+					   rvu_get_pf(pcifunc));
+			else
+				seq_printf(m, "Allocated to :: PF %d VF %d\n",
+					   rvu_get_pf(pcifunc),
+					   (pcifunc & RVU_PFVF_FUNC_MASK) - 1);
+			print_band_prof_ctx(m, &aq_rsp.prof);
+		}
+	}
+	return 0;
+}
+
+RVU_DEBUG_SEQ_FOPS(nix_band_prof_ctx, nix_band_prof_ctx_display, NULL);
+
+static int rvu_dbg_nix_band_prof_rsrc_display(struct seq_file *m, void *unused)
+{
+	struct nix_hw *nix_hw = m->private;
+	struct nix_ipolicer *ipolicer;
+	int layer;
+	char *str;
+
+	seq_puts(m, "\nBandwidth profile resource free count\n");
+	seq_puts(m, "=====================================\n");
+	for (layer = 0; layer < BAND_PROF_NUM_LAYERS; layer++) {
+		if (layer == BAND_PROF_INVAL_LAYER)
+			continue;
+		str = (layer == BAND_PROF_LEAF_LAYER) ? "Leaf" :
+			(layer == BAND_PROF_MID_LAYER) ? "Mid " : "Top ";
+
+		ipolicer = &nix_hw->ipolicer[layer];
+		seq_printf(m, "%s :: Max: %4d  Free: %4d\n", str,
+			   ipolicer->band_prof.max,
+			   rvu_rsrc_free_count(&ipolicer->band_prof));
+	}
+	seq_puts(m, "=====================================\n");
+
+	return 0;
+}
+
+RVU_DEBUG_SEQ_FOPS(nix_band_prof_rsrc, nix_band_prof_rsrc_display, NULL);
+
 static void rvu_dbg_nix_init(struct rvu *rvu, int blkaddr)
 {
 	struct nix_hw *nix_hw;
@@ -1664,6 +1823,10 @@ static void rvu_dbg_nix_init(struct rvu *rvu, int blkaddr)
 			    &rvu_dbg_nix_ndc_rx_hits_miss_fops);
 	debugfs_create_file("qsize", 0600, rvu->rvu_dbg.nix, rvu,
 			    &rvu_dbg_nix_qsize_fops);
+	debugfs_create_file("ingress_policer_ctx", 0600, rvu->rvu_dbg.nix, nix_hw,
+			    &rvu_dbg_nix_band_prof_ctx_fops);
+	debugfs_create_file("ingress_policer_rsrc", 0600, rvu->rvu_dbg.nix, nix_hw,
+			    &rvu_dbg_nix_band_prof_rsrc_fops);
 }
 
 static void rvu_dbg_npa_init(struct rvu *rvu)
diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu_nix.c b/drivers/net/ethernet/marvell/octeontx2/af/rvu_nix.c
index ebd73a8..d6f8210 100644
--- a/drivers/net/ethernet/marvell/octeontx2/af/rvu_nix.c
+++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu_nix.c
@@ -4365,10 +4365,10 @@ int rvu_mbox_handler_nix_bandprof_free(struct rvu *rvu,
 	return 0;
 }
 
-static int nix_aq_context_read(struct rvu *rvu, struct nix_hw *nix_hw,
-			       struct nix_cn10k_aq_enq_req *aq_req,
-			       struct nix_cn10k_aq_enq_rsp *aq_rsp,
-			       u16 pcifunc, u8 ctype, u32 qidx)
+int nix_aq_context_read(struct rvu *rvu, struct nix_hw *nix_hw,
+			struct nix_cn10k_aq_enq_req *aq_req,
+			struct nix_cn10k_aq_enq_rsp *aq_rsp,
+			u16 pcifunc, u8 ctype, u32 qidx)
 {
 	memset(aq_req, 0, sizeof(struct nix_cn10k_aq_enq_req));
 	aq_req->hdr.pcifunc = pcifunc;
diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu_struct.h b/drivers/net/ethernet/marvell/octeontx2/af/rvu_struct.h
index 8fb002d..14aa8e3 100644
--- a/drivers/net/ethernet/marvell/octeontx2/af/rvu_struct.h
+++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu_struct.h
@@ -679,6 +679,13 @@ enum NIX_RX_BAND_PROF_ACTIONRESULT_E {
 	NIX_RX_BAND_PROF_ACTIONRESULT_RED = 0x2,
 };
 
+enum nix_band_prof_pc_mode {
+	NIX_RX_PC_MODE_VLAN = 0,
+	NIX_RX_PC_MODE_DSCP = 1,
+	NIX_RX_PC_MODE_GEN = 2,
+	NIX_RX_PC_MODE_RSVD = 3,
+};
+
 /* NIX ingress policer bandwidth profile structure */
 struct nix_bandprof_s {
 	uint64_t pc_mode                     :  2; /* W0 */
-- 
2.7.4


^ permalink raw reply related	[flat|nested] 9+ messages in thread

* [net-next PATCH 3/5] octeontx2-pf: TC_MATCHALL ingress ratelimiting offload
  2021-06-15 11:34 [net-next PATCH 0/5] Add ingress ratelimit offload Subbaraya Sundeep
  2021-06-15 11:34 ` [net-next PATCH 1/5] octeontx2-af: cn10k: Bandwidth profiles config support Subbaraya Sundeep
  2021-06-15 11:34 ` [net-next PATCH 2/5] octeontx2-af: cn10k: Debugfs support for bandwidth profiles Subbaraya Sundeep
@ 2021-06-15 11:34 ` Subbaraya Sundeep
  2021-06-15 11:34 ` [net-next PATCH 4/5] octeontx2-pf: Use NL_SET_ERR_MSG_MOD for TC Subbaraya Sundeep
                   ` (2 subsequent siblings)
  5 siblings, 0 replies; 9+ messages in thread
From: Subbaraya Sundeep @ 2021-06-15 11:34 UTC (permalink / raw)
  To: netdev, davem, kuba; +Cc: sgoutham, hkelam, gakula, Subbaraya Sundeep

From: Sunil Goutham <sgoutham@marvell.com>

Add TC_MATCHALL ingress ratelimiting offload support with the POLICE
action for the entire traffic coming into the interface.

E.g., to rate limit ingress traffic to 100Mbps:

$ ethtool -K eth0 hw-tc-offload on
$ tc qdisc add dev eth0 clsact
$ tc filter add dev eth0 ingress matchall skip_sw \
                action police rate 100Mbit burst 32Kbit

To support this, a leaf level bandwidth profile is allocated and all
RQs' contexts used by this interface are updated to point to it. The
leaf level bandwidth profile is then configured with the user
specified rate and burst sizes.
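
As a worked example of the rate encoding implemented in
cn10k_get_ingress_rate_cfg() below, for the 100Mbit rate above
(rate = 100000000 bits/sec):

	tmp = rate * 32 * 2 = 6400000000
	tmp >= 512000000, so it is halved until below that:
	tmp = 400000000, rate_exp = 4, rdiv = 0
	rate_mantissa = (400000000 - 256000000) / 1000000 = 144

	Tokens (bytes) added to the bucket per 2 usec timeunit:
	(1 + 144/256) * 2^4 = 25 bytes
	25 bytes / 2 usec = 12.5 MB/s = 100 Mbit/s

Similarly for the burst encoding in cn10k_get_ingress_burst_cfg(),
the 32Kbit (4096 byte) burst above gives burst_exp = ilog2(4096) = 12
and burst_mantissa = 0, i.e. (1 + 0/256) * 2^12 = 4096 bytes.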

Co-developed-by: Subbaraya Sundeep <sbhatta@marvell.com>
Signed-off-by: Subbaraya Sundeep <sbhatta@marvell.com>
Signed-off-by: Sunil Goutham <sgoutham@marvell.com>
---
 drivers/net/ethernet/marvell/octeontx2/nic/cn10k.c | 323 +++++++++++++++++++++
 drivers/net/ethernet/marvell/octeontx2/nic/cn10k.h |  11 +
 .../ethernet/marvell/octeontx2/nic/otx2_common.h   |   2 +
 .../net/ethernet/marvell/octeontx2/nic/otx2_pf.c   |   3 +
 .../net/ethernet/marvell/octeontx2/nic/otx2_tc.c   |  84 ++++++
 5 files changed, 423 insertions(+)

diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/cn10k.c b/drivers/net/ethernet/marvell/octeontx2/nic/cn10k.c
index 9ec0313..1b08896 100644
--- a/drivers/net/ethernet/marvell/octeontx2/nic/cn10k.c
+++ b/drivers/net/ethernet/marvell/octeontx2/nic/cn10k.c
@@ -179,3 +179,326 @@ void cn10k_sqe_flush(void *dev, struct otx2_snd_queue *sq, int size, int qidx)
 	sq->head++;
 	sq->head &= (sq->sqe_cnt - 1);
 }
+
+int cn10k_free_all_ipolicers(struct otx2_nic *pfvf)
+{
+	struct nix_bandprof_free_req *req;
+	int rc;
+
+	if (is_dev_otx2(pfvf->pdev))
+		return 0;
+
+	mutex_lock(&pfvf->mbox.lock);
+
+	req = otx2_mbox_alloc_msg_nix_bandprof_free(&pfvf->mbox);
+	if (!req) {
+		rc = -ENOMEM;
+		goto out;
+	}
+
+	/* Free all bandwidth profiles allocated */
+	req->free_all = true;
+
+	rc = otx2_sync_mbox_msg(&pfvf->mbox);
+out:
+	mutex_unlock(&pfvf->mbox.lock);
+	return rc;
+}
+
+int cn10k_alloc_leaf_profile(struct otx2_nic *pfvf, u16 *leaf)
+{
+	struct nix_bandprof_alloc_req *req;
+	struct nix_bandprof_alloc_rsp *rsp;
+	int rc;
+
+	req = otx2_mbox_alloc_msg_nix_bandprof_alloc(&pfvf->mbox);
+	if (!req)
+		return -ENOMEM;
+
+	req->prof_count[BAND_PROF_LEAF_LAYER] = 1;
+
+	rc = otx2_sync_mbox_msg(&pfvf->mbox);
+	if (rc)
+		goto out;
+
+	rsp = (struct nix_bandprof_alloc_rsp *)
+	       otx2_mbox_get_rsp(&pfvf->mbox.mbox, 0, &req->hdr);
+	if (!rsp->prof_count[BAND_PROF_LEAF_LAYER]) {
+		rc = -EIO;
+		goto out;
+	}
+
+	*leaf = rsp->prof_idx[BAND_PROF_LEAF_LAYER][0];
+out:
+	if (rc) {
+		dev_warn(pfvf->dev,
+			 "Failed to allocate ingress bandwidth policer\n");
+	}
+
+	return rc;
+}
+
+int cn10k_alloc_matchall_ipolicer(struct otx2_nic *pfvf)
+{
+	struct otx2_hw *hw = &pfvf->hw;
+	int ret;
+
+	mutex_lock(&pfvf->mbox.lock);
+
+	ret = cn10k_alloc_leaf_profile(pfvf, &hw->matchall_ipolicer);
+
+	mutex_unlock(&pfvf->mbox.lock);
+
+	return ret;
+}
+
+#define POLICER_TIMESTAMP	  1  /* 1 second */
+#define MAX_RATE_EXP		  22 /* Valid rate exponent range: 0 - 22 */
+
+static void cn10k_get_ingress_burst_cfg(u32 burst, u32 *burst_exp,
+					u32 *burst_mantissa)
+{
+	int tmp;
+
+	/* Burst is calculated as
+	 * (1+[BURST_MANTISSA]/256)*2^[BURST_EXPONENT]
+	 * This is the upper limit on the number of tokens (bytes) that
+	 * can be accumulated in the bucket.
+	 */
+	*burst_exp = ilog2(burst);
+	if (burst < 256) {
+		/* No float: can't express mantissa in this case */
+		*burst_mantissa = 0;
+		return;
+	}
+
+	if (*burst_exp > MAX_RATE_EXP)
+		*burst_exp = MAX_RATE_EXP;
+
+	/* Calculate mantissa
+	 * Find remaining bytes 'burst - 2^burst_exp'
+	 * mantissa = (remaining bytes) / 2^(burst_exp - 8)
+	 */
+	tmp = burst - rounddown_pow_of_two(burst);
+	*burst_mantissa = tmp / (1UL << (*burst_exp - 8));
+}
+
+static void cn10k_get_ingress_rate_cfg(u64 rate, u32 *rate_exp,
+				       u32 *rate_mantissa, u32 *rdiv)
+{
+	u32 div = 0;
+	u32 exp = 0;
+	u64 tmp;
+
+	/* Figure out mantissa, exponent and divider from given max pkt rate
+	 *
+	 * To achieve desired rate HW adds
+	 * (1+[RATE_MANTISSA]/256)*2^[RATE_EXPONENT] tokens (bytes) at every
+	 * policer timeunit * 2^rdiv, i.e. 2 * 2^rdiv usecs, to the token bucket.
+	 * Here the policer timeunit is 2 usecs and the rate is in bits per sec.
+	 * Since floating point cannot be used, the algorithm below uses a
+	 * 1000000 scale factor to support rates up to 100Gbps.
+	 */
+	tmp = rate * 32 * 2;
+	if (tmp < 256000000) {
+		while (tmp < 256000000) {
+			tmp = tmp * 2;
+			div++;
+		}
+	} else {
+		for (exp = 0; tmp >= 512000000 && exp <= MAX_RATE_EXP; exp++)
+			tmp = tmp / 2;
+
+		if (exp > MAX_RATE_EXP)
+			exp = MAX_RATE_EXP;
+	}
+
+	*rate_mantissa = (tmp - 256000000) / 1000000;
+	*rate_exp = exp;
+	*rdiv = div;
+}
+
+int cn10k_map_unmap_rq_policer(struct otx2_nic *pfvf, int rq_idx,
+			       u16 policer, bool map)
+{
+	struct nix_cn10k_aq_enq_req *aq;
+
+	aq = otx2_mbox_alloc_msg_nix_cn10k_aq_enq(&pfvf->mbox);
+	if (!aq)
+		return -ENOMEM;
+
+	/* Enable policing and set the bandwidth profile (policer) index */
+	if (map)
+		aq->rq.policer_ena = 1;
+	else
+		aq->rq.policer_ena = 0;
+	aq->rq_mask.policer_ena = 1;
+
+	aq->rq.band_prof_id = policer;
+	aq->rq_mask.band_prof_id = GENMASK(9, 0);
+
+	/* Fill AQ info */
+	aq->qidx = rq_idx;
+	aq->ctype = NIX_AQ_CTYPE_RQ;
+	aq->op = NIX_AQ_INSTOP_WRITE;
+
+	return otx2_sync_mbox_msg(&pfvf->mbox);
+}
+
+int cn10k_free_leaf_profile(struct otx2_nic *pfvf, u16 leaf)
+{
+	struct nix_bandprof_free_req *req;
+
+	req = otx2_mbox_alloc_msg_nix_bandprof_free(&pfvf->mbox);
+	if (!req)
+		return -ENOMEM;
+
+	req->prof_count[BAND_PROF_LEAF_LAYER] = 1;
+	req->prof_idx[BAND_PROF_LEAF_LAYER][0] = leaf;
+
+	return otx2_sync_mbox_msg(&pfvf->mbox);
+}
+
+int cn10k_free_matchall_ipolicer(struct otx2_nic *pfvf)
+{
+	struct otx2_hw *hw = &pfvf->hw;
+	int qidx, rc;
+
+	mutex_lock(&pfvf->mbox.lock);
+
+	/* Remove RQ's policer mapping */
+	for (qidx = 0; qidx < hw->rx_queues; qidx++)
+		cn10k_map_unmap_rq_policer(pfvf, qidx,
+					   hw->matchall_ipolicer, false);
+
+	rc = cn10k_free_leaf_profile(pfvf, hw->matchall_ipolicer);
+
+	mutex_unlock(&pfvf->mbox.lock);
+	return rc;
+}
+
+int cn10k_set_ipolicer_rate(struct otx2_nic *pfvf, u16 profile,
+			    u32 burst, u64 rate, bool pps)
+{
+	struct nix_cn10k_aq_enq_req *aq;
+	u32 burst_exp, burst_mantissa;
+	u32 rate_exp, rate_mantissa;
+	u32 rdiv;
+
+	/* Get exponent and mantissa values for the desired rate */
+	cn10k_get_ingress_burst_cfg(burst, &burst_exp, &burst_mantissa);
+	cn10k_get_ingress_rate_cfg(rate, &rate_exp, &rate_mantissa, &rdiv);
+
+	/* Init bandwidth profile */
+	aq = otx2_mbox_alloc_msg_nix_cn10k_aq_enq(&pfvf->mbox);
+	if (!aq)
+		return -ENOMEM;
+
+	/* Set initial color mode to blind */
+	aq->prof.icolor = 0x03;
+	aq->prof_mask.icolor = 0x03;
+
+	/* Set rate and burst values */
+	aq->prof.cir_exponent = rate_exp;
+	aq->prof_mask.cir_exponent = 0x1F;
+
+	aq->prof.cir_mantissa = rate_mantissa;
+	aq->prof_mask.cir_mantissa = 0xFF;
+
+	aq->prof.cbs_exponent = burst_exp;
+	aq->prof_mask.cbs_exponent = 0x1F;
+
+	aq->prof.cbs_mantissa = burst_mantissa;
+	aq->prof_mask.cbs_mantissa = 0xFF;
+
+	aq->prof.rdiv = rdiv;
+	aq->prof_mask.rdiv = 0xF;
+
+	if (pps) {
+		/* The amount of decremented tokens is calculated according to
+		 * the following equation:
+		 * max([ LMODE ? 0 : (packet_length - LXPTR)] +
+		 *	     ([ADJUST_MANTISSA]/256 - 1) * 2^[ADJUST_EXPONENT],
+		 *	1/256)
+		 * if LMODE is 1 then rate limiting will be based on
+		 * PPS otherwise bps.
+		 * The aim of the ADJUST value is to specify a token cost per
+		 * packet, in contrast to the packet length which specifies a
+		 * cost per byte. To rate limit based on PPS the adjust
+		 * mantissa is set to 384 and the exponent to 1 so that the
+		 * number of tokens decremented becomes 1, i.e. 1 token per packet.
+		 */
+		aq->prof.adjust_exponent = 1;
+		aq->prof_mask.adjust_exponent = 0x1F;
+
+		aq->prof.adjust_mantissa = 384;
+		aq->prof_mask.adjust_mantissa = 0x1FF;
+
+		aq->prof.lmode = 0x1;
+		aq->prof_mask.lmode = 0x1;
+	}
+
+	/* Two rate three color marker
+	 * With PEIR/EIR set to zero, color will be either green or red
+	 */
+	aq->prof.meter_algo = 2;
+	aq->prof_mask.meter_algo = 0x3;
+
+	aq->prof.rc_action = NIX_RX_BAND_PROF_ACTIONRESULT_DROP;
+	aq->prof_mask.rc_action = 0x3;
+
+	aq->prof.yc_action = NIX_RX_BAND_PROF_ACTIONRESULT_PASS;
+	aq->prof_mask.yc_action = 0x3;
+
+	aq->prof.gc_action = NIX_RX_BAND_PROF_ACTIONRESULT_PASS;
+	aq->prof_mask.gc_action = 0x3;
+
+	/* Setting the exponent value as 24 and mantissa as 0 configures
+	 * the bucket with zero values, making the bucket unused. The Peak
+	 * Information Rate and Excess Information Rate buckets are
+	 * unused here.
+	 */
+	aq->prof.peir_exponent = 24;
+	aq->prof_mask.peir_exponent = 0x1F;
+
+	aq->prof.peir_mantissa = 0;
+	aq->prof_mask.peir_mantissa = 0xFF;
+
+	aq->prof.pebs_exponent = 24;
+	aq->prof_mask.pebs_exponent = 0x1F;
+
+	aq->prof.pebs_mantissa = 0;
+	aq->prof_mask.pebs_mantissa = 0xFF;
+
+	/* Fill AQ info */
+	aq->qidx = profile;
+	aq->ctype = NIX_AQ_CTYPE_BANDPROF;
+	aq->op = NIX_AQ_INSTOP_WRITE;
+
+	return otx2_sync_mbox_msg(&pfvf->mbox);
+}
+
+int cn10k_set_matchall_ipolicer_rate(struct otx2_nic *pfvf,
+				     u32 burst, u64 rate)
+{
+	struct otx2_hw *hw = &pfvf->hw;
+	int qidx, rc;
+
+	mutex_lock(&pfvf->mbox.lock);
+
+	rc = cn10k_set_ipolicer_rate(pfvf, hw->matchall_ipolicer, burst,
+				     rate, false);
+	if (rc)
+		goto out;
+
+	for (qidx = 0; qidx < hw->rx_queues; qidx++) {
+		rc = cn10k_map_unmap_rq_policer(pfvf, qidx,
+						hw->matchall_ipolicer, true);
+		if (rc)
+			break;
+	}
+
+out:
+	mutex_unlock(&pfvf->mbox.lock);
+	return rc;
+}
diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/cn10k.h b/drivers/net/ethernet/marvell/octeontx2/nic/cn10k.h
index e0bc595..71292a4 100644
--- a/drivers/net/ethernet/marvell/octeontx2/nic/cn10k.h
+++ b/drivers/net/ethernet/marvell/octeontx2/nic/cn10k.h
@@ -14,4 +14,15 @@ void cn10k_sqe_flush(void *dev, struct otx2_snd_queue *sq, int size, int qidx);
 int cn10k_sq_aq_init(void *dev, u16 qidx, u16 sqb_aura);
 int cn10k_pf_lmtst_init(struct otx2_nic *pf);
 int cn10k_vf_lmtst_init(struct otx2_nic *vf);
+int cn10k_free_all_ipolicers(struct otx2_nic *pfvf);
+int cn10k_alloc_matchall_ipolicer(struct otx2_nic *pfvf);
+int cn10k_free_matchall_ipolicer(struct otx2_nic *pfvf);
+int cn10k_set_matchall_ipolicer_rate(struct otx2_nic *pfvf,
+				     u32 burst, u64 rate);
+int cn10k_map_unmap_rq_policer(struct otx2_nic *pfvf, int rq_idx,
+			       u16 policer, bool map);
+int cn10k_alloc_leaf_profile(struct otx2_nic *pfvf, u16 *leaf);
+int cn10k_set_ipolicer_rate(struct otx2_nic *pfvf, u16 profile,
+			    u32 burst, u64 rate, bool pps);
+int cn10k_free_leaf_profile(struct otx2_nic *pfvf, u16 leaf);
 #endif /* CN10K_H */
diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.h b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.h
index 543aee7..8ade5af 100644
--- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.h
+++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.h
@@ -180,6 +180,7 @@ struct otx2_hw {
 
 	/* NIX */
 	u16		txschq_list[NIX_TXSCH_LVL_CNT][MAX_TXSCHQ_PER_FUNC];
+	u16			matchall_ipolicer;
 
 	/* HW settings, coalescing etc */
 	u16			rx_chan_base;
@@ -325,6 +326,7 @@ struct otx2_nic {
 #define OTX2_FLAG_TX_PAUSE_ENABLED		BIT_ULL(10)
 #define OTX2_FLAG_TC_FLOWER_SUPPORT		BIT_ULL(11)
 #define OTX2_FLAG_TC_MATCHALL_EGRESS_ENABLED	BIT_ULL(12)
+#define OTX2_FLAG_TC_MATCHALL_INGRESS_ENABLED	BIT_ULL(13)
 	u64			flags;
 
 	struct otx2_qset	qset;
diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_pf.c b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_pf.c
index 82b53e72..23618b9 100644
--- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_pf.c
+++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_pf.c
@@ -1461,6 +1461,9 @@ static void otx2_free_hw_resources(struct otx2_nic *pf)
 
 	otx2_free_cq_res(pf);
 
+	/* Free all ingress bandwidth profiles allocated */
+	cn10k_free_all_ipolicers(pf);
+
 	mutex_lock(&mbox->lock);
 	/* Reset NIX LF */
 	free_req = otx2_mbox_alloc_msg_nix_lf_free(mbox);
diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_tc.c b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_tc.c
index 51157b2..af288e4 100644
--- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_tc.c
+++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_tc.c
@@ -15,6 +15,7 @@
 #include <net/tc_act/tc_vlan.h>
 #include <net/ipv6.h>
 
+#include "cn10k.h"
 #include "otx2_common.h"
 
 /* Egress rate limiting definitions */
@@ -675,6 +676,87 @@ static int otx2_setup_tc_cls_flower(struct otx2_nic *nic,
 	}
 }
 
+static int otx2_tc_ingress_matchall_install(struct otx2_nic *nic,
+					    struct tc_cls_matchall_offload *cls)
+{
+	struct netlink_ext_ack *extack = cls->common.extack;
+	struct flow_action *actions = &cls->rule->action;
+	struct flow_action_entry *entry;
+	u64 rate;
+	int err;
+
+	err = otx2_tc_validate_flow(nic, actions, extack);
+	if (err)
+		return err;
+
+	if (nic->flags & OTX2_FLAG_TC_MATCHALL_INGRESS_ENABLED) {
+		NL_SET_ERR_MSG_MOD(extack,
+				   "Only one ingress MATCHALL ratelimiter can be offloaded");
+		return -ENOMEM;
+	}
+
+	entry = &cls->rule->action.entries[0];
+	switch (entry->id) {
+	case FLOW_ACTION_POLICE:
+		/* Ingress ratelimiting is not supported on OcteonTx2 */
+		if (is_dev_otx2(nic->pdev)) {
+			NL_SET_ERR_MSG_MOD(extack,
+					   "Ingress policing not supported on this platform");
+			return -EOPNOTSUPP;
+		}
+
+		err = cn10k_alloc_matchall_ipolicer(nic);
+		if (err)
+			return err;
+
+		/* Convert to bits per second */
+		rate = entry->police.rate_bytes_ps * 8;
+		err = cn10k_set_matchall_ipolicer_rate(nic, entry->police.burst, rate);
+		if (err)
+			return err;
+		nic->flags |= OTX2_FLAG_TC_MATCHALL_INGRESS_ENABLED;
+		break;
+	default:
+		NL_SET_ERR_MSG_MOD(extack,
+				   "Only police action supported with Ingress MATCHALL offload");
+		return -EOPNOTSUPP;
+	}
+
+	return 0;
+}
+
+static int otx2_tc_ingress_matchall_delete(struct otx2_nic *nic,
+					   struct tc_cls_matchall_offload *cls)
+{
+	struct netlink_ext_ack *extack = cls->common.extack;
+	int err;
+
+	if (nic->flags & OTX2_FLAG_INTF_DOWN) {
+		NL_SET_ERR_MSG_MOD(extack, "Interface not initialized");
+		return -EINVAL;
+	}
+
+	err = cn10k_free_matchall_ipolicer(nic);
+	nic->flags &= ~OTX2_FLAG_TC_MATCHALL_INGRESS_ENABLED;
+	return err;
+}
+
+static int otx2_setup_tc_ingress_matchall(struct otx2_nic *nic,
+					  struct tc_cls_matchall_offload *cls_matchall)
+{
+	switch (cls_matchall->command) {
+	case TC_CLSMATCHALL_REPLACE:
+		return otx2_tc_ingress_matchall_install(nic, cls_matchall);
+	case TC_CLSMATCHALL_DESTROY:
+		return otx2_tc_ingress_matchall_delete(nic, cls_matchall);
+	case TC_CLSMATCHALL_STATS:
+	default:
+		break;
+	}
+
+	return -EOPNOTSUPP;
+}
+
 static int otx2_setup_tc_block_ingress_cb(enum tc_setup_type type,
 					  void *type_data, void *cb_priv)
 {
@@ -686,6 +768,8 @@ static int otx2_setup_tc_block_ingress_cb(enum tc_setup_type type,
 	switch (type) {
 	case TC_SETUP_CLSFLOWER:
 		return otx2_setup_tc_cls_flower(nic, type_data);
+	case TC_SETUP_CLSMATCHALL:
+		return otx2_setup_tc_ingress_matchall(nic, type_data);
 	default:
 		break;
 	}
-- 
2.7.4


^ permalink raw reply related	[flat|nested] 9+ messages in thread

* [net-next PATCH 4/5] octeontx2-pf: Use NL_SET_ERR_MSG_MOD for TC
  2021-06-15 11:34 [net-next PATCH 0/6] Add ingress ratelimit offload Subbaraya Sundeep
                   ` (2 preceding siblings ...)
  2021-06-15 11:34 ` [net-next PATCH 3/5] octeontx2-pf: TC_MATCHALL ingress ratelimiting offload Subbaraya Sundeep
@ 2021-06-15 11:34 ` Subbaraya Sundeep
  2021-06-15 11:34 ` [net-next PATCH 5/5] octeontx2-pf: Add police action for TC flower Subbaraya Sundeep
  2021-06-15 18:20 ` [net-next PATCH 0/6] Add ingress ratelimit offload patchwork-bot+netdevbpf
  5 siblings, 0 replies; 9+ messages in thread
From: Subbaraya Sundeep @ 2021-06-15 11:34 UTC (permalink / raw)
  To: netdev, davem, kuba; +Cc: sgoutham, hkelam, gakula, Subbaraya Sundeep

This patch converts the netdev_err messages in the
TC code to NL_SET_ERR_MSG_MOD so that errors are
reported to userspace via netlink extended ack.
NL_SET_ERR_MSG_MOD does not support format
specifiers yet, hence only the messages that are
plain strings are converted.
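
For illustration, each conversion follows this pattern
(the lines below are taken from the diff):

-			netdev_err(nic->netdev, "ttl not supported\n");
+			NL_SET_ERR_MSG_MOD(extack, "ttl not supported");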

Signed-off-by: Subbaraya Sundeep <sbhatta@marvell.com>
Signed-off-by: Sunil Kovvuri Goutham <sgoutham@marvell.com>
---
 .../net/ethernet/marvell/octeontx2/nic/otx2_tc.c   | 29 +++++++++++++---------
 1 file changed, 17 insertions(+), 12 deletions(-)

diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_tc.c b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_tc.c
index af288e4..a46055f 100644
--- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_tc.c
+++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_tc.c
@@ -223,15 +223,17 @@ static int otx2_tc_egress_matchall_delete(struct otx2_nic *nic,
 
 static int otx2_tc_parse_actions(struct otx2_nic *nic,
 				 struct flow_action *flow_action,
-				 struct npc_install_flow_req *req)
+				 struct npc_install_flow_req *req,
+				 struct flow_cls_offload *f)
 {
+	struct netlink_ext_ack *extack = f->common.extack;
 	struct flow_action_entry *act;
 	struct net_device *target;
 	struct otx2_nic *priv;
 	int i;
 
 	if (!flow_action_has_entries(flow_action)) {
-		netdev_info(nic->netdev, "no tc actions specified");
+		NL_SET_ERR_MSG_MOD(extack, "no tc actions specified");
 		return -EINVAL;
 	}
 
@@ -248,8 +250,8 @@ static int otx2_tc_parse_actions(struct otx2_nic *nic,
 			priv = netdev_priv(target);
 			/* npc_install_flow_req doesn't support passing a target pcifunc */
 			if (rvu_get_pf(nic->pcifunc) != rvu_get_pf(priv->pcifunc)) {
-				netdev_info(nic->netdev,
-					    "can't redirect to other pf/vf\n");
+				NL_SET_ERR_MSG_MOD(extack,
+						   "can't redirect to other pf/vf");
 				return -EOPNOTSUPP;
 			}
 			req->vf = priv->pcifunc & RVU_PFVF_FUNC_MASK;
@@ -272,6 +274,7 @@ static int otx2_tc_prepare_flow(struct otx2_nic *nic,
 				struct flow_cls_offload *f,
 				struct npc_install_flow_req *req)
 {
+	struct netlink_ext_ack *extack = f->common.extack;
 	struct flow_msg *flow_spec = &req->packet;
 	struct flow_msg *flow_mask = &req->mask;
 	struct flow_dissector *dissector;
@@ -336,7 +339,7 @@ static int otx2_tc_prepare_flow(struct otx2_nic *nic,
 
 		flow_rule_match_eth_addrs(rule, &match);
 		if (!is_zero_ether_addr(match.mask->src)) {
-			netdev_err(nic->netdev, "src mac match not supported\n");
+			NL_SET_ERR_MSG_MOD(extack, "src mac match not supported");
 			return -EOPNOTSUPP;
 		}
 
@@ -354,11 +357,11 @@ static int otx2_tc_prepare_flow(struct otx2_nic *nic,
 		flow_rule_match_ip(rule, &match);
 		if ((ntohs(flow_spec->etype) != ETH_P_IP) &&
 		    match.mask->tos) {
-			netdev_err(nic->netdev, "tos not supported\n");
+			NL_SET_ERR_MSG_MOD(extack, "tos not supported");
 			return -EOPNOTSUPP;
 		}
 		if (match.mask->ttl) {
-			netdev_err(nic->netdev, "ttl not supported\n");
+			NL_SET_ERR_MSG_MOD(extack, "ttl not supported");
 			return -EOPNOTSUPP;
 		}
 		flow_spec->tos = match.key->tos;
@@ -414,8 +417,8 @@ static int otx2_tc_prepare_flow(struct otx2_nic *nic,
 
 		if (ipv6_addr_loopback(&match.key->dst) ||
 		    ipv6_addr_loopback(&match.key->src)) {
-			netdev_err(nic->netdev,
-				   "Flow matching on IPv6 loopback addr is not supported\n");
+			NL_SET_ERR_MSG_MOD(extack,
+					   "Flow matching IPv6 loopback addr not supported");
 			return -EOPNOTSUPP;
 		}
 
@@ -464,7 +467,7 @@ static int otx2_tc_prepare_flow(struct otx2_nic *nic,
 			req->features |= BIT_ULL(NPC_SPORT_SCTP);
 	}
 
-	return otx2_tc_parse_actions(nic, &rule->action, req);
+	return otx2_tc_parse_actions(nic, &rule->action, req, f);
 }
 
 static int otx2_del_mcam_flow_entry(struct otx2_nic *nic, u16 entry)
@@ -525,6 +528,7 @@ static int otx2_tc_del_flow(struct otx2_nic *nic,
 static int otx2_tc_add_flow(struct otx2_nic *nic,
 			    struct flow_cls_offload *tc_flow_cmd)
 {
+	struct netlink_ext_ack *extack = tc_flow_cmd->common.extack;
 	struct otx2_tc_info *tc_info = &nic->tc_info;
 	struct otx2_tc_flow *new_node, *old_node;
 	struct npc_install_flow_req *req;
@@ -562,7 +566,8 @@ static int otx2_tc_add_flow(struct otx2_nic *nic,
 		otx2_tc_del_flow(nic, tc_flow_cmd);
 
 	if (bitmap_full(tc_info->tc_entries_bitmap, nic->flow_cfg->tc_max_flows)) {
-		netdev_err(nic->netdev, "Not enough MCAM space to add the flow\n");
+		NL_SET_ERR_MSG_MOD(extack,
+				   "Not enough MCAM space to add the flow");
 		otx2_mbox_reset(&nic->mbox.mbox, 0);
 		mutex_unlock(&nic->mbox.lock);
 		return -ENOMEM;
@@ -580,7 +585,7 @@ static int otx2_tc_add_flow(struct otx2_nic *nic,
 	/* Send message to AF */
 	rc = otx2_sync_mbox_msg(&nic->mbox);
 	if (rc) {
-		netdev_err(nic->netdev, "Failed to install MCAM flow entry\n");
+		NL_SET_ERR_MSG_MOD(extack, "Failed to install MCAM flow entry");
 		mutex_unlock(&nic->mbox.lock);
 		goto out;
 	}
-- 
2.7.4


^ permalink raw reply related	[flat|nested] 9+ messages in thread

* [net-next PATCH 5/5] octeontx2-pf: Add police action for TC flower
  2021-06-15 11:34 [net-next PATCH 0/6] Add ingress ratelimit offload Subbaraya Sundeep
                   ` (3 preceding siblings ...)
  2021-06-15 11:34 ` [net-next PATCH 4/5] octeontx2-pf: Use NL_SET_ERR_MSG_MOD for TC Subbaraya Sundeep
@ 2021-06-15 11:34 ` Subbaraya Sundeep
  2021-06-15 18:20 ` [net-next PATCH 0/6] Add ingress ratelimit offload patchwork-bot+netdevbpf
  5 siblings, 0 replies; 9+ messages in thread
From: Subbaraya Sundeep @ 2021-06-15 11:34 UTC (permalink / raw)
  To: netdev, davem, kuba; +Cc: sgoutham, hkelam, gakula, Subbaraya Sundeep

Add police action for ingress TC flower
hardware offload. With this, rate limiting can
be done on a per-flow basis. Since rate limiting
is tied to RQs in hardware, the number of TC
flower filters with a police action is limited
to the number of receive queues on the
interface. Both bps and pps modes are supported.

Examples to rate limit a flow:
$ ethtool -K eth0 hw-tc-offload on
$ tc qdisc add dev eth0 ingress
$ tc filter add dev eth0 parent ffff: protocol ip \
  flower ip_proto udp dst_port 80 action \
  police rate 100Mbit burst 32Kbit

$ tc filter add dev eth0 parent ffff: \
  protocol ip flower dst_mac 5e:b2:34:ee:29:49 \
  action police pkts_rate 5000 pkts_burst 2048
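
Installed filters can be listed to verify the offload:
$ tc filter show dev eth0 ingress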

Signed-off-by: Subbaraya Sundeep <sbhatta@marvell.com>
Signed-off-by: Sunil Kovvuri Goutham <sgoutham@marvell.com>
---
 .../ethernet/marvell/octeontx2/nic/otx2_common.h   |   1 +
 .../ethernet/marvell/octeontx2/nic/otx2_ethtool.c  |   6 +
 .../net/ethernet/marvell/octeontx2/nic/otx2_tc.c   | 194 ++++++++++++++++++---
 3 files changed, 178 insertions(+), 23 deletions(-)

diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.h b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.h
index 8ade5af..5d27f00 100644
--- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.h
+++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.h
@@ -370,6 +370,7 @@ struct otx2_nic {
 
 	struct otx2_flow_config	*flow_cfg;
 	struct otx2_tc_info	tc_info;
+	unsigned long		rq_bmap;
 };
 
 static inline bool is_otx2_lbkvf(struct pci_dev *pdev)
diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_ethtool.c b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_ethtool.c
index 9d9a2e4..8df748e 100644
--- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_ethtool.c
+++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_ethtool.c
@@ -286,6 +286,12 @@ static int otx2_set_channels(struct net_device *dev,
 	if (!channel->rx_count || !channel->tx_count)
 		return -EINVAL;
 
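+	/* Bit 0 of rq_bmap is always set (RQ 0 is reserved in
+	 * otx2_init_tc()), so a weight greater than one means some
+	 * receive queue is currently mapped to a TC police action.
+	 */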
+	if (bitmap_weight(&pfvf->rq_bmap, pfvf->hw.rx_queues) > 1) {
+		netdev_err(dev,
+			   "Receive queues are in use by TC police action\n");
+		return -EINVAL;
+	}
+
 	if (if_up)
 		dev->netdev_ops->ndo_stop(dev);
 
diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_tc.c b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_tc.c
index a46055f..64a7767 100644
--- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_tc.c
+++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_tc.c
@@ -42,11 +42,14 @@ struct otx2_tc_flow_stats {
 struct otx2_tc_flow {
 	struct rhash_head		node;
 	unsigned long			cookie;
-	u16				entry;
 	unsigned int			bitpos;
 	struct rcu_head			rcu;
 	struct otx2_tc_flow_stats	stats;
 	spinlock_t			lock; /* lock for stats */
+	u16				rq;
+	u16				entry;
+	u16				leaf_profile;
+	bool				is_act_police;
 };
 
 static void otx2_get_egress_burst_cfg(u32 burst, u32 *burst_exp,
@@ -221,15 +224,72 @@ static int otx2_tc_egress_matchall_delete(struct otx2_nic *nic,
 	return err;
 }
 
+static int otx2_tc_act_set_police(struct otx2_nic *nic,
+				  struct otx2_tc_flow *node,
+				  struct flow_cls_offload *f,
+				  u64 rate, u32 burst, u32 mark,
+				  struct npc_install_flow_req *req, bool pps)
+{
+	struct netlink_ext_ack *extack = f->common.extack;
+	struct otx2_hw *hw = &nic->hw;
+	int rq_idx, rc;
+
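+	/* Policing is enforced per receive queue in hardware, so each
+	 * flow with a police action claims a free RQ of its own.
+	 */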
+	rq_idx = find_first_zero_bit(&nic->rq_bmap, hw->rx_queues);
+	if (rq_idx >= hw->rx_queues) {
+		NL_SET_ERR_MSG_MOD(extack, "Police action rules exceeded");
+		return -EINVAL;
+	}
+
+	mutex_lock(&nic->mbox.lock);
+
+	rc = cn10k_alloc_leaf_profile(nic, &node->leaf_profile);
+	if (rc) {
+		mutex_unlock(&nic->mbox.lock);
+		return rc;
+	}
+
+	rc = cn10k_set_ipolicer_rate(nic, node->leaf_profile, burst, rate, pps);
+	if (rc)
+		goto free_leaf;
+
+	rc = cn10k_map_unmap_rq_policer(nic, rq_idx, node->leaf_profile, true);
+	if (rc)
+		goto free_leaf;
+
+	mutex_unlock(&nic->mbox.lock);
+
+	req->match_id = mark & 0xFFFFULL;
+	req->index = rq_idx;
+	req->op = NIX_RX_ACTIONOP_UCAST;
+	set_bit(rq_idx, &nic->rq_bmap);
+	node->is_act_police = true;
+	node->rq = rq_idx;
+
+	return 0;
+
+free_leaf:
+	if (cn10k_free_leaf_profile(nic, node->leaf_profile))
+		netdev_err(nic->netdev,
+			   "Unable to free leaf bandwidth profile(%d)\n",
+			   node->leaf_profile);
+	mutex_unlock(&nic->mbox.lock);
+	return rc;
+}
+
 static int otx2_tc_parse_actions(struct otx2_nic *nic,
 				 struct flow_action *flow_action,
 				 struct npc_install_flow_req *req,
-				 struct flow_cls_offload *f)
+				 struct flow_cls_offload *f,
+				 struct otx2_tc_flow *node)
 {
 	struct netlink_ext_ack *extack = f->common.extack;
 	struct flow_action_entry *act;
 	struct net_device *target;
 	struct otx2_nic *priv;
+	u32 burst, mark = 0;
+	u8 nr_police = 0;
+	bool pps = false;
+	u64 rate;
 	int i;
 
 	if (!flow_action_has_entries(flow_action)) {
@@ -262,15 +322,51 @@ static int otx2_tc_parse_actions(struct otx2_nic *nic,
 			/* use RX_VTAG_TYPE7 which is initialized to strip vlan tag */
 			req->vtag0_type = NIX_AF_LFX_RX_VTAG_TYPE7;
 			break;
+		case FLOW_ACTION_POLICE:
+			/* Ingress ratelimiting is not supported on OcteonTx2 */
+			if (is_dev_otx2(nic->pdev)) {
+				NL_SET_ERR_MSG_MOD(extack,
+					"Ingress policing not supported on this platform");
+				return -EOPNOTSUPP;
+			}
+
+			if (act->police.rate_bytes_ps > 0) {
+				rate = act->police.rate_bytes_ps * 8;
+				burst = act->police.burst;
+			} else if (act->police.rate_pkt_ps > 0) {
+				/* The algorithm used to calculate rate
+				 * mantissa, exponent values for a given token
+				 * rate (token can be byte or packet) requires
+				 * token rate to be multiplied by 8.
+				 */
+				rate = act->police.rate_pkt_ps * 8;
+				burst = act->police.burst_pkt;
+				pps = true;
+			}
+			nr_police++;
+			break;
+		case FLOW_ACTION_MARK:
+			mark = act->mark;
+			break;
 		default:
 			return -EOPNOTSUPP;
 		}
 	}
 
+	if (nr_police > 1) {
+		NL_SET_ERR_MSG_MOD(extack,
+				   "rate limit police offload requires a single action");
+		return -EOPNOTSUPP;
+	}
+
+	if (nr_police)
+		return otx2_tc_act_set_police(nic, node, f, rate, burst,
+					      mark, req, pps);
+
 	return 0;
 }
 
-static int otx2_tc_prepare_flow(struct otx2_nic *nic,
+static int otx2_tc_prepare_flow(struct otx2_nic *nic, struct otx2_tc_flow *node,
 				struct flow_cls_offload *f,
 				struct npc_install_flow_req *req)
 {
@@ -467,7 +563,7 @@ static int otx2_tc_prepare_flow(struct otx2_nic *nic,
 			req->features |= BIT_ULL(NPC_SPORT_SCTP);
 	}
 
-	return otx2_tc_parse_actions(nic, &rule->action, req, f);
+	return otx2_tc_parse_actions(nic, &rule->action, req, f, node);
 }
 
 static int otx2_del_mcam_flow_entry(struct otx2_nic *nic, u16 entry)
@@ -502,6 +598,7 @@ static int otx2_tc_del_flow(struct otx2_nic *nic,
 {
 	struct otx2_tc_info *tc_info = &nic->tc_info;
 	struct otx2_tc_flow *flow_node;
+	int err;
 
 	flow_node = rhashtable_lookup_fast(&tc_info->flow_table,
 					   &tc_flow_cmd->cookie,
@@ -512,6 +609,27 @@ static int otx2_tc_del_flow(struct otx2_nic *nic,
 		return -EINVAL;
 	}
 
+	if (flow_node->is_act_police) {
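+		/* Unwind the RQ-to-profile mapping and free the leaf
+		 * bandwidth profile before deleting the MCAM entry.
+		 */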
+		mutex_lock(&nic->mbox.lock);
+
+		err = cn10k_map_unmap_rq_policer(nic, flow_node->rq,
+						 flow_node->leaf_profile, false);
+		if (err)
+			netdev_err(nic->netdev,
+				   "Unmapping RQ %d & profile %d failed\n",
+				   flow_node->rq, flow_node->leaf_profile);
+
+		err = cn10k_free_leaf_profile(nic, flow_node->leaf_profile);
+		if (err)
+			netdev_err(nic->netdev,
+				   "Unable to free leaf bandwidth profile(%d)\n",
+				   flow_node->leaf_profile);
+
+		__clear_bit(flow_node->rq, &nic->rq_bmap);
+
+		mutex_unlock(&nic->mbox.lock);
+	}
+
 	otx2_del_mcam_flow_entry(nic, flow_node->entry);
 
 	WARN_ON(rhashtable_remove_fast(&nic->tc_info.flow_table,
@@ -531,12 +649,18 @@ static int otx2_tc_add_flow(struct otx2_nic *nic,
 	struct netlink_ext_ack *extack = tc_flow_cmd->common.extack;
 	struct otx2_tc_info *tc_info = &nic->tc_info;
 	struct otx2_tc_flow *new_node, *old_node;
-	struct npc_install_flow_req *req;
-	int rc;
+	struct npc_install_flow_req *req, dummy;
+	int rc, err;
 
 	if (!(nic->flags & OTX2_FLAG_TC_FLOWER_SUPPORT))
 		return -ENOMEM;
 
+	if (bitmap_full(tc_info->tc_entries_bitmap, nic->flow_cfg->tc_max_flows)) {
+		NL_SET_ERR_MSG_MOD(extack,
+				   "Not enough MCAM space to add the flow");
+		return -ENOMEM;
+	}
+
 	/* allocate memory for the new flow and its node */
 	new_node = kzalloc(sizeof(*new_node), GFP_KERNEL);
 	if (!new_node)
@@ -544,17 +668,11 @@ static int otx2_tc_add_flow(struct otx2_nic *nic,
 	spin_lock_init(&new_node->lock);
 	new_node->cookie = tc_flow_cmd->cookie;
 
-	mutex_lock(&nic->mbox.lock);
-	req = otx2_mbox_alloc_msg_npc_install_flow(&nic->mbox);
-	if (!req) {
-		mutex_unlock(&nic->mbox.lock);
-		return -ENOMEM;
-	}
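+	/* Prepare the flow into an on-stack request first; parsing a
+	 * police action sends its own mbox messages, so the
+	 * npc_install_flow message is allocated only after parsing
+	 * succeeds.
+	 */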
+	memset(&dummy, 0, sizeof(struct npc_install_flow_req));
 
-	rc = otx2_tc_prepare_flow(nic, tc_flow_cmd, req);
+	rc = otx2_tc_prepare_flow(nic, new_node, tc_flow_cmd, &dummy);
 	if (rc) {
-		otx2_mbox_reset(&nic->mbox.mbox, 0);
-		mutex_unlock(&nic->mbox.lock);
+		kfree_rcu(new_node, rcu);
 		return rc;
 	}
 
@@ -565,14 +683,17 @@ static int otx2_tc_add_flow(struct otx2_nic *nic,
 	if (old_node)
 		otx2_tc_del_flow(nic, tc_flow_cmd);
 
-	if (bitmap_full(tc_info->tc_entries_bitmap, nic->flow_cfg->tc_max_flows)) {
-		NL_SET_ERR_MSG_MOD(extack,
-				   "Not enough MCAM space to add the flow");
-		otx2_mbox_reset(&nic->mbox.mbox, 0);
+	mutex_lock(&nic->mbox.lock);
+	req = otx2_mbox_alloc_msg_npc_install_flow(&nic->mbox);
+	if (!req) {
 		mutex_unlock(&nic->mbox.lock);
-		return -ENOMEM;
+		rc = -ENOMEM;
+		goto free_leaf;
 	}
 
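+	/* Keep the mbox header that the allocator filled in, then copy
+	 * the prepared request over the message body.
+	 */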
+	memcpy(&dummy.hdr, &req->hdr, sizeof(struct mbox_msghdr));
+	memcpy(req, &dummy, sizeof(struct npc_install_flow_req));
+
 	new_node->bitpos = find_first_zero_bit(tc_info->tc_entries_bitmap,
 					       nic->flow_cfg->tc_max_flows);
 	req->channel = nic->hw.rx_chan_base;
@@ -587,7 +708,8 @@ static int otx2_tc_add_flow(struct otx2_nic *nic,
 	if (rc) {
 		NL_SET_ERR_MSG_MOD(extack, "Failed to install MCAM flow entry");
 		mutex_unlock(&nic->mbox.lock);
-		goto out;
+		kfree_rcu(new_node, rcu);
+		goto free_leaf;
 	}
 	mutex_unlock(&nic->mbox.lock);
 
@@ -597,12 +719,35 @@ static int otx2_tc_add_flow(struct otx2_nic *nic,
 	if (rc) {
 		otx2_del_mcam_flow_entry(nic, req->entry);
 		kfree_rcu(new_node, rcu);
-		goto out;
+		goto free_leaf;
 	}
 
 	set_bit(new_node->bitpos, tc_info->tc_entries_bitmap);
 	tc_info->num_entries++;
-out:
+
+	return 0;
+
+free_leaf:
+	if (new_node->is_act_police) {
+		mutex_lock(&nic->mbox.lock);
+
+		err = cn10k_map_unmap_rq_policer(nic, new_node->rq,
+						 new_node->leaf_profile, false);
+		if (err)
+			netdev_err(nic->netdev,
+				   "Unmapping RQ %d & profile %d failed\n",
+				   new_node->rq, new_node->leaf_profile);
+		err = cn10k_free_leaf_profile(nic, new_node->leaf_profile);
+		if (err)
+			netdev_err(nic->netdev,
+				   "Unable to free leaf bandwidth profile(%d)\n",
+				   new_node->leaf_profile);
+
+		__clear_bit(new_node->rq, &nic->rq_bmap);
+
+		mutex_unlock(&nic->mbox.lock);
+	}
+
 	return rc;
 }
 
@@ -864,6 +1009,9 @@ int otx2_init_tc(struct otx2_nic *nic)
 {
 	struct otx2_tc_info *tc = &nic->tc_info;
 
+	/* Exclude receive queue 0 from being used for police actions */
+	set_bit(0, &nic->rq_bmap);
+
 	tc->flow_ht_params = tc_flow_ht_params;
 	return rhashtable_init(&tc->flow_table, &tc->flow_ht_params);
 }
-- 
2.7.4


^ permalink raw reply related	[flat|nested] 9+ messages in thread

* Re: [net-next PATCH 0/6] Add ingress ratelimit offload
  2021-06-15 11:34 [net-next PATCH 0/6] Add ingress ratelimit offload Subbaraya Sundeep
                   ` (4 preceding siblings ...)
  2021-06-15 11:34 ` [net-next PATCH 5/5] octeontx2-pf: Add police action for TC flower Subbaraya Sundeep
@ 2021-06-15 18:20 ` patchwork-bot+netdevbpf
  5 siblings, 0 replies; 9+ messages in thread
From: patchwork-bot+netdevbpf @ 2021-06-15 18:20 UTC (permalink / raw)
  To: Subbaraya Sundeep; +Cc: netdev, davem, kuba, sgoutham, hkelam, gakula

Hello:

This series was applied to netdev/net-next.git (refs/heads/master):

On Tue, 15 Jun 2021 17:04:26 +0530 you wrote:
> This patchset adds ingress rate limiting hardware
> offload support for CN10K silicons. Police actions
> are added for TC matchall and flower filters.
> CN10K has ingress rate limiting feature where
> a receive queue is mapped to bandwidth profile
> and the profile is configured with rate and burst
> parameters by software. CN10K hardware supports
> three levels of ingress policing or ratelimiting.
> Multiple leaf profiles can  point to a single mid
> level profile and multiple mid level profile can
> point to a single top level one. Only leaf level
> profiles are used for configuring rate limiting.
> 
> [...]

Here is the summary with links:
  - [net-next,1/5] octeontx2-af: cn10k: Bandwidth profiles config support
    https://git.kernel.org/netdev/net-next/c/e8e095b3b370
  - [net-next,2/5] octeontx2-af: cn10k: Debugfs support for bandwidth profiles
    https://git.kernel.org/netdev/net-next/c/e7d8971763f3
  - [net-next,3/5] octeontx2-pf: TC_MATCHALL ingress ratelimiting offload
    https://git.kernel.org/netdev/net-next/c/2ca89a2c3752
  - [net-next,4/5] octeontx2-pf: Use NL_SET_ERR_MSG_MOD for TC
    https://git.kernel.org/netdev/net-next/c/5d2fdd86d517
  - [net-next,5/5] octeontx2-pf: Add police action for TC flower
    https://git.kernel.org/netdev/net-next/c/68fbff68dbea

You are awesome, thank you!
--
Deet-doot-dot, I am a bot.
https://korg.docs.kernel.org/patchwork/pwbot.html



^ permalink raw reply	[flat|nested] 9+ messages in thread

* Re: [net-next PATCH 1/5] octeontx2-af: cn10k: Bandwidth profiles config support
  2021-06-15 11:34 ` [net-next PATCH 1/5] octeontx2-af: cn10k: Bandwidth profiles config support Subbaraya Sundeep
@ 2021-06-18 16:07   ` Kees Cook
  2021-06-21 10:19     ` [EXT] " Subbaraya Sundeep Bhatta
  0 siblings, 1 reply; 9+ messages in thread
From: Kees Cook @ 2021-06-18 16:07 UTC (permalink / raw)
  To: Subbaraya Sundeep; +Cc: netdev, davem, kuba, sgoutham, hkelam, gakula

On Tue, Jun 15, 2021 at 05:04:27PM +0530, Subbaraya Sundeep wrote:
> [...]
> @@ -885,6 +906,9 @@ static int rvu_nix_blk_aq_enq_inst(struct rvu *rvu, struct nix_hw *nix_hw,
>  			else if (req->ctype == NIX_AQ_CTYPE_MCE)
>  				memcpy(&rsp->mce, ctx,
>  				       sizeof(struct nix_rx_mce_s));
> +			else if (req->ctype == NIX_AQ_CTYPE_BANDPROF)
> +				memcpy(&rsp->prof, ctx,
> +				       sizeof(struct nix_bandprof_s));

rsp->prof is u64 not struct nix_bandprof_s, so the compiler thinks this
memcpy() is overflowing the "prof" field.

Can you please fix this up?
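
One possible shape of the fix, sketched only under the
assumption that the response field can simply be retyped:

-	u64 prof;
+	struct nix_bandprof_s prof;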

-- 
Kees Cook

^ permalink raw reply	[flat|nested] 9+ messages in thread

* Re: [EXT] Re: [net-next PATCH 1/5] octeontx2-af: cn10k: Bandwidth profiles config support
  2021-06-18 16:07   ` Kees Cook
@ 2021-06-21 10:19     ` Subbaraya Sundeep Bhatta
  0 siblings, 0 replies; 9+ messages in thread
From: Subbaraya Sundeep Bhatta @ 2021-06-21 10:19 UTC (permalink / raw)
  To: Kees Cook
  Cc: netdev, davem, kuba, Sunil Kovvuri Goutham, Hariprasad Kelam,
	Geethasowjanya Akula

Hi Kees Cook,

Thanks. I will send the fix to net.

Sundeep

________________________________________
From: Kees Cook <keescook@chromium.org>
Sent: Friday, June 18, 2021 9:37 PM
To: Subbaraya Sundeep Bhatta
Cc: netdev@vger.kernel.org; davem@davemloft.net; kuba@kernel.org; Sunil Kovvuri Goutham; Hariprasad Kelam; Geethasowjanya Akula
Subject: [EXT] Re: [net-next PATCH 1/5] octeontx2-af: cn10k: Bandwidth profiles config support

External Email

----------------------------------------------------------------------
On Tue, Jun 15, 2021 at 05:04:27PM +0530, Subbaraya Sundeep wrote:
> [...]
> @@ -885,6 +906,9 @@ static int rvu_nix_blk_aq_enq_inst(struct rvu *rvu, struct nix_hw *nix_hw,
>                       else if (req->ctype == NIX_AQ_CTYPE_MCE)
>                               memcpy(&rsp->mce, ctx,
>                                      sizeof(struct nix_rx_mce_s));
> +                     else if (req->ctype == NIX_AQ_CTYPE_BANDPROF)
> +                             memcpy(&rsp->prof, ctx,
> +                                    sizeof(struct nix_bandprof_s));

rsp->prof is u64 not struct nix_bandprof_s, so the compiler thinks this
memcpy() is overflowing the "prof" field.

Can you please fix this up?

--
Kees Cook

^ permalink raw reply	[flat|nested] 9+ messages in thread

end of thread, other threads:[~2021-06-21 10:19 UTC | newest]

Thread overview: 9+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2021-06-15 11:34 [net-next PATCH 0/6] Add ingress ratelimit offload Subbaraya Sundeep
2021-06-15 11:34 ` [net-next PATCH 1/5] octeontx2-af: cn10k: Bandwidth profiles config support Subbaraya Sundeep
2021-06-18 16:07   ` Kees Cook
2021-06-21 10:19     ` [EXT] " Subbaraya Sundeep Bhatta
2021-06-15 11:34 ` [net-next PATCH 2/5] octeontx2-af: cn10k: Debugfs support for bandwidth profiles Subbaraya Sundeep
2021-06-15 11:34 ` [net-next PATCH 3/5] octeontx2-pf: TC_MATCHALL ingress ratelimiting offload Subbaraya Sundeep
2021-06-15 11:34 ` [net-next PATCH 4/5] octeontx2-pf: Use NL_SET_ERR_MSG_MOD for TC Subbaraya Sundeep
2021-06-15 11:34 ` [net-next PATCH 5/5] octeontx2-pf: Add police action for TC flower Subbaraya Sundeep
2021-06-15 18:20 ` [net-next PATCH 0/6] Add ingress ratelimit offload patchwork-bot+netdevbpf
